# Complex solutions to Maxwell’s equations

Sachin Munshi∗ and Rongwei Yang

Department of Mathematics and Statistics, SUNY at Albany, Albany, NY 12222, U.S.A.

###### Abstract.

This paper provides a view of Maxwell’s equations from the perspective of complex variables. The study is made through complex differential forms and the Hodge star operator in $\mathbb{C}^{2}$ with respect to the Euclidean and the Minkowski metrics. It shows that holomorphic functions give rise to nontrivial solutions, and the inner product between the electric and the magnetic fields is considered in this case. Further, it obtains a simple necessary and sufficient condition regarding harmonic solutions to the equations. In the end, the paper gives an interpretation of the Lorenz gauge condition in terms of the codifferential operator.

###### Key words and phrases: Maxwell’s equations, differential forms, Hodge star operator, harmonic functions, holomorphic functions, Lorenz gauge

###### 2010 Mathematics Subject Classification: Primary 35Q61, 78A25; Secondary 32A10

∗ Corresponding author

## 1\. Introduction

Named after the physicist and mathematician James C. Maxwell, Maxwell’s equations form the foundation of classical electromagnetism, optics, and electrodynamics (see for instance [1, 7, 13]). They are a set of partial differential equations that describe the interactions between the electric and magnetic fields that emerge from distributions of electric charges and currents, and how these fields change in time.
Maxwell’s equations are the differential equations

$$\nabla\cdot\mathbf{B}=0,\tag{1.1a}$$

$$\nabla\times\mathbf{E}+\frac{\partial\mathbf{B}}{\partial t}=0,\tag{1.1b}$$

$$\nabla\cdot\mathbf{E}=\frac{\rho}{\epsilon_{0}},\tag{1.1c}$$

$$\nabla\times\mathbf{B}-\frac{1}{c^{2}}\frac{\partial\mathbf{E}}{\partial t}=\mu_{0}\mathbf{J},\tag{1.1d}$$

where $\mathbf{E}=(E_{1},E_{2},E_{3})$ is the electric field, $\mathbf{B}=(B_{1},B_{2},B_{3})$ is the magnetic field, the scalar $\rho$ is the electric charge density, and the vector $\mathbf{J}$ is the electric current density. Moreover, $\epsilon_{0}$ is the vacuum permittivity, $\mu_{0}$ is the vacuum permeability, and $c:=1/\sqrt{\epsilon_{0}\mu_{0}}$ is the speed of light in vacuum. For ease of use, most modern physicists and mathematicians simply set $c=1$, as shall we throughout this paper, and for convenience we fix the notation $x_{0}=ct=t$. In a more compact form, equations (1.1a)-(1.1d) can be written in differential forms over Minkowski space-time as

$$dF=0,\tag{1.2}$$

$$\star\,d\star F=J,\tag{1.3}$$

where

$$F=-dx_{0}\wedge\left(E_{1}dx_{1}+E_{2}dx_{2}+E_{3}dx_{3}\right)-B_{1}dx_{2}\wedge dx_{3}+B_{2}dx_{1}\wedge dx_{3}-B_{3}dx_{1}\wedge dx_{2}\tag{1.4}$$

is referred to as the Faraday $2$-form. Here, $J$ is the current $1$-form, $d$ is the exterior differential operator, and $\star$ is the Hodge star operator ([1]). These will be described in detail in this paper.
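As a quick sanity check (illustrative, not part of the paper's argument), one can verify symbolically that a standard plane wave satisfies the source-free case of (1.1a)-(1.1d) with $c=1$, $\rho=0$, $\mathbf{J}=0$; the specific fields below are an assumed example:

```python
# Sketch: a plane wave E = (0, cos(x1 - t), 0), B = (0, 0, cos(x1 - t))
# satisfies the source-free Maxwell equations (c = 1, rho = 0, J = 0).
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
u = sp.cos(x1 - t)
E = [0, u, 0]           # illustrative electric field (E1, E2, E3)
B = [0, 0, u]           # illustrative magnetic field (B1, B2, B3)
X = [x1, x2, x3]

def div(F):
    return sum(sp.diff(F[i], X[i]) for i in range(3))

def curl(F):
    return [sp.diff(F[2], X[1]) - sp.diff(F[1], X[2]),
            sp.diff(F[0], X[2]) - sp.diff(F[2], X[0]),
            sp.diff(F[1], X[0]) - sp.diff(F[0], X[1])]

assert div(B) == 0                                       # (1.1a)
assert div(E) == 0                                       # (1.1c), rho = 0
assert all(sp.simplify(c + sp.diff(b, t)) == 0
           for c, b in zip(curl(E), B))                  # (1.1b)
assert all(sp.simplify(c - sp.diff(e, t)) == 0
           for c, e in zip(curl(B), E))                  # (1.1d), J = 0
```

The same helper functions can be reused to test any other candidate pair $(\mathbf{E},\mathbf{B})$.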
By the Poincaré lemma, Equation (1.2) implies that locally $F=d\omega$ for some differentiable $1$-form

$$\omega=\eta_{0}dx_{0}+\eta_{1}dx_{1}+\eta_{2}dx_{2}+\eta_{3}dx_{3}.$$

This paper investigates the case when $\eta_{j}$, $0\leq j\leq 3$, are all harmonic functions in $(x_{0},x_{1},x_{2},x_{3})$ with respect to the Euclidean or Minkowski metric. If one takes advantage of the identification $\mathbb{R}^{4}\simeq\mathbb{C}^{2}$, i.e.

$$\mathbb{R}^{4}\ni\left(x_{0},x_{1},x_{2},x_{3}\right)\mapsto\left(x_{0}+ix_{1},x_{2}+ix_{3}\right):=\left(z_{1},z_{2}\right)\in\mathbb{C}^{2},$$

then $\omega$ can be written as $\omega(z)=f_{1}dz_{1}+f_{2}dz_{2}+f_{\bar{1}}d\bar{z}_{1}+f_{\bar{2}}d\bar{z}_{2}$. The following is the main result.

###### Theorem 1.1.

Let $f_{j},f_{\bar{j}}$, $j=1,2$, be harmonic functions on $\mathbb{C}^{2}$. Then the complex differential form $F_{\omega}:=d\omega$ is a solution to the source-free Maxwell’s equations in the Euclidean metric if and only if $\bar{\partial}_{1}f_{1}+\bar{\partial}_{2}f_{2}+\partial_{1}f_{\bar{1}}+\partial_{2}f_{\bar{2}}$ is constant.

A parallel theorem holds with respect to the Minkowski metric on $\mathbb{R}^{4}\simeq\mathbb{C}^{2}$. A solution $F_{\omega}$ to Maxwell’s equations with respect to the Minkowski metric is said to be wavelike if each of the functions $f_{j},f_{\bar{j}}$, $j=1,2$, is a solution to the d’Alembertian equation (or wave equation)

$$\left(\frac{\partial^{2}}{\partial x_{0}^{2}}-\frac{\partial^{2}}{\partial x_{1}^{2}}-\frac{\partial^{2}}{\partial x_{2}^{2}}-\frac{\partial^{2}}{\partial x_{3}^{2}}\right)u=0.$$

Known solutions to Maxwell’s equations are all wavelike, so the next result seems surprising.

###### Theorem 1.2.

There exist non-wavelike solutions to the source-free Maxwell’s equations with respect to the Minkowski metric.

The paper is organized as follows.

###### Contents

1. Introduction
2. Preliminaries
   1. Complex Differential Forms
   2. Hodge Star Operator $\star$
   3. Self-dual and Anti-self-dual Forms
   4. The Hodge Laplacian
3. Maxwell’s Equations
   1. Classical Version
   2. Differential Forms Version
4. Harmonic Solutions To Maxwell’s Equations
   1. Euclidean Metric Case
   2. Minkowski Metric Case
5. On the Lorenz gauge
6. Concluding Remarks

## 2\. Preliminaries

Recall that $\mathbb{R}^{4}$ is Euclidean space of real dimension 4. As a vector space, a point in $\mathbb{R}^{4}$ may be considered as a row vector $\mathbf{x}=\left(x_{0},x_{1},x_{2},x_{3}\right)$, $x_{i}\in\mathbb{R}$. Note that in the Lorentzian signature, $\mathbb{R}^{4}$ is denoted as $\mathbb{R}^{1,3}$ and is referred to as Minkowski space-time, with $x_{0}$ as the time variable (denoted $ct$, where $c$ is the speed of light) and $x_{1},x_{2},x_{3}$ as the space variables. Going back to viewing $\mathbb{R}^{4}$ in the Euclidean metric, we have the identification $\mathbb{R}^{4}\simeq\mathbb{C}^{2}$, i.e.

$$\mathbb{R}^{4}\ni\left(x_{0},x_{1},x_{2},x_{3}\right)\mapsto\left(x_{0}+ix_{1},x_{2}+ix_{3}\right):=\left(z_{1},z_{2}\right)\in\mathbb{C}^{2}.$$

### 2.1. Complex Differential Forms

We first introduce some preliminaries on complex differential forms. Consider $\mathbb{C}^{n}$ with points given by coordinates $z=\left(z_{1},z_{2},\dots,z_{n}\right)$. The tangent space of $\mathbb{C}^{n}$ is

$$T\left(\mathbb{C}^{n}\right)=\text{span}\left\{\frac{\partial}{\partial z_{k}},\frac{\partial}{\partial\bar{z}_{k}}:1\leq k\leq n\right\},$$

and the cotangent space is given by

$$T^{\ast}\left(\mathbb{C}^{n}\right)=\text{span}\left\{dz_{k},d\bar{z}_{k}:1\leq k\leq n\right\}.$$

###### Definition 2.1.

Let $f$ be a smooth function on a domain $\mathcal{M}\subset\mathbb{C}^{n}$.
Consider the linear operators $\partial,\bar{\partial},d$ defined to act on $f$ as follows:

$$\partial f=\sum_{k=1}^{n}\frac{\partial f}{\partial z_{k}}dz_{k},\qquad\bar{\partial}f=\sum_{k=1}^{n}\frac{\partial f}{\partial\bar{z}_{k}}d\bar{z}_{k},\qquad df=\left(\partial+\bar{\partial}\right)f.$$

Here, $d$ is called the complex exterior differential operator, while $\partial,\bar{\partial}$ are called the Dolbeault operators. Recall that a smooth function $f$ on a domain $\mathcal{M}$ is said to be holomorphic if $\bar{\partial}f=0$ everywhere on $\mathcal{M}$; this means that $f$ is analytic in each variable ([16]). Note that if $f$ is holomorphic, then $df=\partial f$. The following fact is well-known.

###### Fact 2.2.

$\partial^{2}=\bar{\partial}^{2}=d^{2}=0$.

For a multi-index $I=i_{1}i_{2}\cdots i_{p}$ we assume $i_{1}<i_{2}<\cdots<i_{p}$ and define its length $|I|=p$.

###### Definition 2.3.

The space of complex $\left(p,q\right)$-forms on $\mathcal{M}$ is defined as

$$\Omega^{p,q}\left(\mathcal{M}\right):=\left\{\sum_{|I|=p,|J|=q}f_{I,J}dz_{I}\wedge d\bar{z}_{J}:p+q\leq 2n,\ f_{I,J}\in C^{\infty}\left(\mathcal{M}\right)\right\}.$$

We may drop the $\mathcal{M}$ from the notation for convenience. Clearly, $\Omega^{1,0}$ is the space of complex differential forms containing only the $dz_{k}$ terms, and $\Omega^{0,1}$ is the space of forms containing only the $d\bar{z}_{k}$ terms. Then in terms of the exterior product on differential forms we have

$$\Omega^{p,q}=\underbrace{\Omega^{1,0}\wedge\cdots\wedge\Omega^{1,0}}_{p}\wedge\underbrace{\Omega^{0,1}\wedge\cdots\wedge\Omega^{0,1}}_{q}.$$

Slightly abusing notation, we shall adhere to the following definition throughout this paper.

###### Definition 2.4.

$\Omega^{k}:=\bigoplus_{p+q=k}\Omega^{p,q}$ is the space of all complex differential forms of total degree $k=p+q$.

###### Definition 2.5.
For each $p>0$, the forms given by $\sum_{|I|=p}f_{I}dz_{I}$, where each $f_{I}$ is holomorphic, are called holomorphic $p$-forms, and they form a holomorphic section of $\Omega^{p,0}$. Note that if $\eta=\sum_{|I|=p}f_{I}d\bar{z}_{I}$, with $f_{I}$ holomorphic, then $\bar{\partial}\eta=0$.

### 2.2. Hodge Star Operator $\star$

We briefly go over some basics of Hodge theory, setting aside any discussion of topology or manifold theory. We identify $\mathbb{C}^{n}$ with ${\mathbb{R}}^{2n}$ via the representation $z_{k}=x_{2k-2}+ix_{2k-1}$, $k=1,2,\dots,n$. A nondegenerate self-adjoint $2n\times 2n$ matrix $g=\left(g_{st}\right)$ induces a sesquilinear form $\langle\cdot,\cdot\rangle_{g}$ on the tangent space $T\left(\mathbb{R}^{2n}\right)$ with complex coefficients by the evaluations

$$\left\langle\alpha\frac{\partial}{\partial x_{s}},\beta\frac{\partial}{\partial x_{t}}\right\rangle_{g}:=\alpha\overline{\beta}g_{st},$$

where $\alpha$ and $\beta$ are complex numbers and $0\leq s,t\leq 2n-1$. Then on the cotangent space $T^{*}\left(\mathbb{R}^{2n}\right)$ one has the corresponding sesquilinear form given by

$$\langle\alpha dx_{s},\beta dx_{t}\rangle_{g}:=\alpha\overline{\beta}g^{st},$$

where $(g^{st})=g^{-1}$. Using the fact $z_{k}=x_{2k-2}+ix_{2k-1}$ above and the representation

$$\frac{\partial}{\partial z_{k}}=\frac{1}{2}\left(\frac{\partial}{\partial x_{2k-2}}-i\frac{\partial}{\partial x_{2k-1}}\right),\quad k=1,2,\dots,n,$$

one may regard the aforementioned sesquilinear form $\langle\cdot,\cdot\rangle_{g}$ as a sesquilinear form on $T^{*}\left(\mathbb{C}^{n}\right)$.
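The Wirtinger representation $\frac{\partial}{\partial z}=\frac{1}{2}(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y})$ above, and the holomorphicity criterion $\bar{\partial}f=0$ from Definition 2.1, can be checked on a concrete function (a minimal sketch in one complex variable, with $f(z)=z^{2}$ as an assumed example):

```python
# Sketch: Wirtinger derivatives in one complex variable z = x + iy.
# For the holomorphic f(z) = z**2, d/dz gives 2z and d/dzbar gives 0.
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I*y
f = z**2

def d_z(g):     # d/dz = (1/2)(d/dx - i d/dy)
    return (sp.diff(g, x) - sp.I*sp.diff(g, y)) / 2

def d_zbar(g):  # d/dzbar = (1/2)(d/dx + i d/dy)
    return (sp.diff(g, x) + sp.I*sp.diff(g, y)) / 2

assert sp.simplify(d_z(f) - 2*z) == 0          # df/dz = 2z
assert sp.simplify(d_zbar(f)) == 0             # f is holomorphic
assert sp.simplify(d_zbar(sp.conjugate(z))) == 1   # d(zbar)/dzbar = 1
```

The same two operators, applied coordinate-wise, give $\partial_{k}$ and $\bar{\partial}_{k}$ on $\mathbb{C}^{n}$.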
Further, it can be extended to a sesquilinear form on $\Omega^{p}$ such that for $\eta=\eta_{1}\wedge\cdots\wedge\eta_{p}$, $\xi=\xi_{1}\wedge\cdots\wedge\xi_{p}$ one has

$$\langle\eta,\xi\rangle_{g}=\det[\langle\eta_{s},\xi_{t}\rangle_{g}]_{s,t=1}^{p},\quad\eta,\xi\in\Omega^{p}.$$

One observes that if the matrix $g$ is positive definite then $\langle\cdot,\cdot\rangle_{g}$ is an inner product on the set of constant $p$-forms for each $1\leq p\leq 2n$.

###### Definition 2.6.

The Hodge star operator $\star:\Omega^{p}\left(\mathbb{C}^{n}\right)\rightarrow\Omega^{2n-p}\left(\mathbb{C}^{n}\right)$ with respect to the sesquilinear form $\langle\cdot,\cdot\rangle_{g}$ is a linear operator such that for $\eta,\xi\in\Omega^{p}\left(\mathbb{C}^{n}\right)$ one has

$$\eta\wedge\star\bar{\xi}=\langle\eta,\xi\rangle_{g}\textup{vol}_{g},$$

where $\textup{vol}_{g}=\left(\frac{i}{2}\right)^{n}\sqrt{|\det g|}\,dz_{1}\wedge d\bar{z}_{1}\wedge\cdots\wedge dz_{n}\wedge d\bar{z}_{n}$ is the volume form ([11]).

###### Example 2.7.

For the Euclidean metric on ${\mathbb{R}}^{2n}$ we have $g=I_{2n}$. Let $\Omega^{p}\left(\mathbb{R}^{2n}\right)$ denote the space of real differential $p$-forms over $\mathbb{R}^{2n}$. Then it is well-known that

$$\star\left(dx_{0}\wedge dx_{1}\wedge\cdots\wedge dx_{p-1}\right)=dx_{p}\wedge dx_{p+1}\wedge\cdots\wedge dx_{2n-1}.\tag{2.1}$$

And for a permutation $\sigma=\left(i_{0},i_{1},\dots,i_{2n-1}\right)$, we have

$$\star\left(dx_{i_{0}}\wedge dx_{i_{1}}\wedge\cdots\wedge dx_{i_{p-1}}\right)=\left(-1\right)^{\sigma}dx_{i_{p}}\wedge dx_{i_{p+1}}\wedge\cdots\wedge dx_{i_{2n-1}},$$

where $(-1)^{\sigma}$ is the parity of $\sigma$. In particular, $\star\left(dx_{0}\wedge dx_{1}\wedge\cdots\wedge dx_{2n-1}\right)=1$ (recall that $dx_{0}\wedge dx_{1}\wedge\cdots\wedge dx_{2n-1}$ is the volume form). Moreover, for a $p$-form $\omega$,

$$\star^{2}\omega=\left(-1\right)^{p\left(2n-p\right)}\omega.$$

This is only true in the Euclidean metric.
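The Euclidean Hodge star on basis forms, as described in Example 2.7, is purely combinatorial and can be checked mechanically; a minimal sketch for $\mathbb{R}^{4}$ (so $2n=4$), encoding a basis $p$-form as a signed tuple of indices:

```python
# Sketch: the Euclidean Hodge star on basis p-forms of R^4.
# A basis form dx_{i0} ^ ... ^ dx_{i(p-1)} is a sorted index tuple with a
# sign; star sends it to the complementary indices times the sign of the
# permutation (i0, ..., i(p-1), complement), as in Example 2.7.
from itertools import combinations

N = 4  # real dimension, here 2n = 4

def perm_sign(p):
    # sign of a permutation of range(N), by counting inversions
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def star(sign, idx):
    comp = tuple(k for k in range(N) if k not in idx)
    return sign * perm_sign(idx + comp), comp

# star(dx0) = dx1 ^ dx2 ^ dx3, and star of the volume form is 1
assert star(1, (0,)) == (1, (1, 2, 3))
assert star(1, (0, 1, 2, 3)) == (1, ())

# star^2 = (-1)^{p(2n-p)} on every basis p-form
for p in range(N + 1):
    for idx in combinations(range(N), p):
        s2, idx2 = star(*star(1, idx))
        assert idx2 == idx and s2 == (-1) ** (p * (N - p))
```

Setting `N = 2*n` for other values of `n` checks the same identities on $\mathbb{R}^{2n}$.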
Generally, $\star^{2}\omega=\left(-1\right)^{p\left(2n-p\right)}s\,\omega$, where $s$ is the parity of the signature of the inner product defined by the metric. The next two examples give the Hodge dual on $\mathbb{C}^{2}$ with respect to the Euclidean metric and the Minkowski metric, respectively; they will be used later. Using the earlier identification $z_{1}=x_{0}+ix_{1}$, $z_{2}=x_{2}+ix_{3}$, we have

$$dz_{1}=dx_{0}+idx_{1},\quad dz_{2}=dx_{2}+idx_{3},\quad d\bar{z}_{1}=dx_{0}-idx_{1},\quad d\bar{z}_{2}=dx_{2}-idx_{3},$$

and these complex $\left(1,0\right)$- and $\left(0,1\right)$-forms span the complex cotangent space $T^{\ast}\left(\mathbb{C}^{2}\right)$. Clearly, $T^{\ast}\left(\mathbb{C}^{2}\right)$ is also spanned by the real forms $dx_{k}$, $0\leq k\leq 3$, with complex coefficients.

###### Example 2.8.

One may directly use Definition 2.6 to compute the Hodge dual of complex differential forms, or one may first write them as real forms, apply Example 2.7, and then convert back to complex forms. We shall use the latter approach. Let us start with $\star dz_{1}$.
Since $dz_{1}=dx_{0}+idx_{1}$, it follows that

$$\begin{aligned}\star dz_{1}&=\star dx_{0}+i\star dx_{1}\\ &=\left(dx_{1}\wedge dx_{2}\wedge dx_{3}\right)-i\left(dx_{0}\wedge dx_{2}\wedge dx_{3}\right)\\ &=\left(dx_{1}-idx_{0}\right)\wedge\left(dx_{2}\wedge dx_{3}\right)\\ &=\left(dx_{1}-idx_{0}\right)\wedge\frac{i}{2}\left(dz_{2}\wedge d\bar{z}_{2}\right)\\ &=\left(dx_{0}+idx_{1}\right)\wedge\frac{1}{2}\left(dz_{2}\wedge d\bar{z}_{2}\right)\\ &=\frac{1}{2}\left(dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}\right),\end{aligned}$$

where the fourth line follows from the fact that

$$dz_{2}\wedge d\bar{z}_{2}=\left(dx_{2}+idx_{3}\right)\wedge\left(dx_{2}-idx_{3}\right)=-2i\,dx_{2}\wedge dx_{3}.$$

Likewise one verifies that

$$\star dz_{2}=-\frac{1}{2}\left(dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}\right),\quad\star d\bar{z}_{1}=\frac{1}{2}\left(dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}\right),\quad\star d\bar{z}_{2}=-\frac{1}{2}\left(dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}\right),$$

and

$$\begin{aligned}&\star\left(dz_{1}\wedge d\bar{z}_{1}\right)=dz_{2}\wedge d\bar{z}_{2},&&\star\left(dz_{2}\wedge d\bar{z}_{2}\right)=dz_{1}\wedge d\bar{z}_{1},\\ &\star\left(dz_{1}\wedge d\bar{z}_{2}\right)=-dz_{1}\wedge d\bar{z}_{2},&&\star\left(dz_{2}\wedge d\bar{z}_{1}\right)=-dz_{2}\wedge d\bar{z}_{1},\\ &\star\left(dz_{1}\wedge dz_{2}\right)=dz_{1}\wedge dz_{2},&&\star\left(d\bar{z}_{1}\wedge d\bar{z}_{2}\right)=d\bar{z}_{1}\wedge d\bar{z}_{2}.\end{aligned}$$

The Hodge star of $3$-forms can be computed using the fact that $\star^{2}\omega=(-1)^{p(2n-p)}\omega$, and one has

$$\star\left(dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}\right)=2dz_{2},\quad\star\left(dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}\right)=-2dz_{1},$$

$$\star\left(dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}\right)=2d\bar{z}_{2},\quad\star\left(dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}\right)=-2d\bar{z}_{1}.$$

Finally, for the unique $\left(2,2\right)$-form, we have

$$\star\left(dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}\right)=4.$$

###### Example 2.9.

The Minkowski metric on ${\mathbb{R}}^{1,3}$ has the signature $\left(+---\right)$, represented by the metric matrix

$$g_{\textbf{mink}}=\begin{pmatrix}1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&-1\end{pmatrix}.\tag{2.2}$$

In this case, the inner product of complex $1$-forms is determined by the facts

$$\langle dz_{1},dz_{1}\rangle=\langle dz_{1},dz_{2}\rangle=\langle dz_{1},d\bar{z}_{2}\rangle=\langle dz_{2},d\bar{z}_{2}\rangle=0$$

and

$$\langle dz_{1},d\bar{z}_{1}\rangle=\langle dz_{2},dz_{2}\rangle=2.$$

Calculation of the Hodge star operator on $\mathbb{C}^{2}$ with respect to the Minkowski metric is similar to that in Example 2.8. Here we only list its action on $2$-forms and $3$-forms for later use:

$$\begin{aligned}&\star\left(dz_{1}\wedge dz_{2}\right)=-dz_{2}\wedge d\bar{z}_{1},&&\star\left(dz_{1}\wedge d\bar{z}_{2}\right)=-d\bar{z}_{1}\wedge d\bar{z}_{2},\\ &\star\left(dz_{2}\wedge d\bar{z}_{1}\right)=dz_{1}\wedge dz_{2},&&\star\left(d\bar{z}_{1}\wedge d\bar{z}_{2}\right)=dz_{1}\wedge d\bar{z}_{2},\\ &\star\left(dz_{1}\wedge d\bar{z}_{1}\right)=-dz_{2}\wedge d\bar{z}_{2},&&\star\left(dz_{2}\wedge d\bar{z}_{2}\right)=dz_{1}\wedge d\bar{z}_{1};\end{aligned}$$

and

$$\star\left(dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}\right)=2dz_{2},\quad\star\left(dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}\right)=2d\bar{z}_{1},$$

$$\star\left(dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}\right)=2d\bar{z}_{2},\quad\star\left(dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}\right)=2dz_{1}.$$

### 2.3. Self-dual and Anti-self-dual Forms

We now focus on the Hodge star operator on the complex differential forms on $\mathbb{C}^{2}$ with respect to the Euclidean metric and the Minkowski metric.

1. Under the Euclidean metric, we have $\star^{2}\omega=\omega$ for every $\omega\in\Omega^{2}(\mathbb{C}^{2})$. Self-dual and anti-self-dual forms are the eigenvectors corresponding to the eigenvalues $1$ and $-1$, respectively, of the Hodge star operator on $\mathbb{C}^{2}$.

###### Definition 2.10.

A differential form $\omega$ is said to be self-dual if it is equal to its Hodge dual, i.e. $\star\omega=\omega$. If $\star\omega=-\omega$, then $\omega$ is said to be anti-self-dual.

Among the complex differential forms considered in the previous subsection, six basis forms correspond to pairs $\left(p,q\right)$ with $p+q=2$. Let $\Omega_{+}^{2},\Omega_{-}^{2}$ denote the spaces of self-dual and anti-self-dual forms, respectively, in $\Omega^{2}$. Then by the computations in Example 2.8 we have

$$\begin{aligned}\Omega_{+}^{2}&=\text{span}\left\{dz_{1}\wedge dz_{2},\ d\bar{z}_{1}\wedge d\bar{z}_{2},\ dz_{1}\wedge d\bar{z}_{1}+dz_{2}\wedge d\bar{z}_{2}\right\},\\ \Omega_{-}^{2}&=\text{span}\left\{dz_{1}\wedge d\bar{z}_{2},\ dz_{2}\wedge d\bar{z}_{1},\ dz_{1}\wedge d\bar{z}_{1}-dz_{2}\wedge d\bar{z}_{2}\right\}.\end{aligned}\tag{2.3}$$

2. Under the Minkowski metric, we have $\star^{2}\omega=-\omega$ for any $\omega\in\Omega^{2}(\mathbb{C}^{2})$. Hence $\star$ has eigenvalues $i,-i$. With a slight abuse of terminology, the corresponding eigenspaces are often also called self-dual and anti-self-dual forms, respectively. For consistency we shall also denote them by $\Omega^{2}_{+}$ and $\Omega^{2}_{-}$, respectively.
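The Euclidean basis (2.3) can be verified directly from the Hodge star table in Example 2.8; a small sketch (the string labels for basis $2$-forms are an assumed encoding):

```python
# Sketch: verify the Euclidean self-dual/anti-self-dual basis (2.3).
# Labels: '12' = dz1^dz2, 'b1b2' = dzbar1^dzbar2, '1b2' = dz1^dzbar2, etc.
# The star table below transcribes the 2-form computations of Example 2.8.
star_table = {
    '12':  {'12': 1},   'b1b2': {'b1b2': 1},
    '1b1': {'2b2': 1},  '2b2':  {'1b1': 1},
    '1b2': {'1b2': -1}, '2b1':  {'2b1': -1},
}

def star(form):
    # extend the table by linearity; a form is a dict {basis label: coeff}
    out = {}
    for b, c in form.items():
        for b2, s in star_table[b].items():
            out[b2] = out.get(b2, 0) + c * s
    return {b: c for b, c in out.items() if c != 0}

self_dual = [{'12': 1}, {'b1b2': 1}, {'1b1': 1, '2b2': 1}]
anti_self_dual = [{'1b2': 1}, {'2b1': 1}, {'1b1': 1, '2b2': -1}]

assert all(star(w) == w for w in self_dual)                          # star w = w
assert all(star(w) == {b: -c for b, c in w.items()}
           for w in anti_self_dual)                                  # star w = -w
```

Swapping in the Minkowski table from Example 2.9 checks in the same way that the forms in (2.4) are eigenvectors with eigenvalues $\pm i$.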
Then by the computations in Example 2.9, we have

$$\begin{aligned}\Omega_{+}^{2}&=\text{span}\left\{dz_{1}\wedge dz_{2}+idz_{2}\wedge d\bar{z}_{1},\ dz_{1}\wedge d\bar{z}_{1}+idz_{2}\wedge d\bar{z}_{2},\ dz_{1}\wedge d\bar{z}_{2}+id\bar{z}_{1}\wedge d\bar{z}_{2}\right\},\\ \Omega_{-}^{2}&=\text{span}\left\{dz_{1}\wedge dz_{2}-idz_{2}\wedge d\bar{z}_{1},\ dz_{1}\wedge d\bar{z}_{1}-idz_{2}\wedge d\bar{z}_{2},\ dz_{1}\wedge d\bar{z}_{2}-id\bar{z}_{1}\wedge d\bar{z}_{2}\right\}.\end{aligned}\tag{2.4}$$

### 2.4. The Hodge Laplacian

The Hodge star operator gives rise to a Hermitian bilinear form on compactly supported differentiable $p$-forms defined by

$$(\eta,\xi)_{g}:=\int_{\mathbb{C}^{n}}\eta\wedge\star\bar{\xi}=\int_{\mathbb{C}^{n}}\langle\eta,{\xi}\rangle_{g}\textup{vol}_{g}.$$

The differential operators $d:\Omega^{p-1}\left(\mathbb{C}^{n}\right)\rightarrow\Omega^{p}\left(\mathbb{C}^{n}\right)$, where $1\leq p\leq 2n$, have the following natural adjoint with respect to the bilinear form $(\cdot,\cdot)_{g}$.

###### Definition 2.11.

The co-differential operator $d^{*}:\Omega^{p}\left(\mathbb{C}^{n}\right)\rightarrow\Omega^{p-1}\left(\mathbb{C}^{n}\right)$ is defined by

$$d^{*}\omega:=\left(-1\right)^{n\left(p+1\right)+1}\star d\star\omega,\tag{2.5}$$

where $\omega$ is any differential $p$-form. In particular, for $\mathbb{C}^{2}$ we have $d^{*}\omega=-\star d\star\omega$.

###### Definition 2.12.

A differential $p$-form $\omega$ is said to be Hodge-Laplace harmonic (HL-harmonic for short) if

$$\Delta\omega:=\left(dd^{*}+d^{*}d\right)\omega=0,\tag{2.6}$$

where $\Delta$ is referred to as the Hodge Laplacian (or the Laplace-de Rham operator). For more information on complex differential forms we refer readers to [1, 2, 9, 19].

## 3\. Maxwell’s Equations

Maxwell’s equations have several equivalent formulations ([1, 6, 13, 18]).
From the viewpoint of physics, there are versions of Maxwell’s equations based on electric and magnetic potentials that allow one to solve the equations as a boundary value problem within the realms of classical physics and quantum mechanics. Quantum mechanics is not in the scope of this paper, so we shall only consider Maxwell’s equations in the classical sense. Moreover, the space-time formulations of Maxwell’s equations are primarily used in high-energy physics and gravitational physics, in conjunction with Einstein’s theories of special relativity and general relativity. From the viewpoint of mathematics, Maxwell’s equations are important in vector calculus, potential field theory, gauge theory, differential geometry, topology, and many other studies ([1, 7, 9]).

### 3.1. Classical Version

Let $\mathbf{E}=\left(E_{1},E_{2},E_{3}\right)$ and $\mathbf{B}=\left(B_{1},B_{2},B_{3}\right)$ be electric and magnetic fields, respectively, in a convex region of $\mathbb{R}^{4}$ with Lorentzian signature $+---$ (this is essentially the Minkowski space-time $\mathbb{R}^{1,3}$), where the $E_{j},B_{j}$ are scalar-valued functions of time and space. In this vector space, we represent points as coordinate column vectors $\left(x_{0},x_{1},x_{2},x_{3}\right)$, where $x_{0}=ct$ ($c$ being the speed of light; for simplicity, we often set $c=1$) is the time variable and $x_{1},x_{2},x_{3}$ are the space variables.
Maxwell’s equations are given by the following partial differential equations:

$$\nabla\cdot\mathbf{B}=0,\tag{3.1a}$$

$$\nabla\times\mathbf{E}+\frac{\partial\mathbf{B}}{\partial t}=0,\tag{3.1b}$$

$$\nabla\cdot\mathbf{E}=\rho,\tag{3.1c}$$

$$\nabla\times\mathbf{B}-\frac{\partial\mathbf{E}}{\partial t}=\mathbf{J},\tag{3.1d}$$

where the scalar $\rho$ is the electric charge density and the vector $\mathbf{J}$ is the electric current density. The equations (3.1a) and (3.1b) are homogeneous, while the equations (3.1c) and (3.1d) are inhomogeneous. From vector calculus, $\nabla\cdot\left(\nabla\times\mathbf{A}\right)=0$ for any smooth vector field $\mathbf{A}=(A_{1},A_{2},A_{3})$, and $\nabla\times\left(\nabla\phi\right)=0$ for any scalar function $\phi$ (from the physics viewpoint, a scalar field). Now by Poincaré’s lemma, in $\mathbb{R}^{3}$, if $\nabla\cdot\mathbf{B}=0$, then $\mathbf{B}=\nabla\times\mathbf{A}$ for some vector field $\mathbf{A}$. This leads us to the potential field theory aspect of electromagnetism. So in this context, one considers the magnetic vector potential $\mathbf{A}$ and the electric scalar potential $\phi$ such that

$$\mathbf{B}=\nabla\times\mathbf{A},\tag{3.2}$$

$$\mathbf{E}=-\frac{\partial\mathbf{A}}{\partial t}-\nabla\phi.\tag{3.3}$$

Putting (3.3) into equation (3.1c), we obtain

$$\rho=-\frac{\partial}{\partial t}\left(\nabla\cdot\mathbf{A}\right)-\Delta\phi,\tag{3.4}$$

where $\Delta$ in (3.4) is the Laplacian in $\mathbb{R}^{3}$. Moreover, if we let $\mathbf{A}^{\prime}=\mathbf{A}-\nabla\psi$ and $\phi^{\prime}=\phi+\frac{\partial\psi}{\partial t}$ for any scalar field $\psi$, then the 4-vector $\left(\phi^{\prime},\mathbf{A}^{\prime}\right)$ also satisfies (3.2) and (3.3); hence the potentials are not unique.
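That the transformed potentials $(\phi^{\prime},\mathbf{A}^{\prime})$ produce the same fields can be checked symbolically for fully general $\phi$, $\mathbf{A}$, $\psi$ (a sketch; the helper names are ours, not the paper's):

```python
# Sketch: E and B from (3.2)-(3.3) are unchanged under the gauge
# transformation A' = A - grad(psi), phi' = phi + d(psi)/dt, for
# arbitrary (undetermined) functions phi, A, psi.
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
X = [x1, x2, x3]

phi = sp.Function('phi')(t, x1, x2, x3)
A = [sp.Function(f'A{i}')(t, x1, x2, x3) for i in (1, 2, 3)]
psi = sp.Function('psi')(t, x1, x2, x3)

grad = lambda f: [sp.diff(f, v) for v in X]
curl = lambda F: [sp.diff(F[2], X[1]) - sp.diff(F[1], X[2]),
                  sp.diff(F[0], X[2]) - sp.diff(F[2], X[0]),
                  sp.diff(F[1], X[0]) - sp.diff(F[0], X[1])]

def fields(phi, A):
    # B from (3.2), E from (3.3)
    return [-sp.diff(a, t) - g for a, g in zip(A, grad(phi))], curl(A)

Ap = [a - g for a, g in zip(A, grad(psi))]   # A' = A - grad(psi)
phip = phi + sp.diff(psi, t)                 # phi' = phi + d(psi)/dt

E_a, B_a = fields(phi, A)
E_b, B_b = fields(phip, Ap)
assert all(sp.simplify(u - v) == 0 for u, v in zip(E_a + B_a, E_b + B_b))
```

The cancellation rests only on the equality of mixed partial derivatives, which sympy applies automatically.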
In particular, this gives one the gauge freedom to choose $\phi$ such that $\left(\phi,\mathbf{A}\right)$ satisfies the Lorenz (not to be confused with Lorentz) gauge:

$$\frac{\partial\phi}{\partial t}+\nabla\cdot\mathbf{A}=0,\tag{3.5}$$

which implies that

$$\rho=\left(\frac{\partial^{2}}{\partial t^{2}}-\Delta\right)\phi,\tag{3.6}$$

$$\mathbf{J}=\left(\frac{\partial^{2}}{\partial t^{2}}-\Delta\right)\mathbf{A},\tag{3.7}$$

where $\frac{\partial^{2}}{\partial t^{2}}-\Delta$ is the d’Alembertian, or wave operator. Since Lorentz transformations keep the Minkowski metric invariant, the d’Alembertian gives a Lorentz scalar. Further, Maxwell’s equations are Lorentz invariant and gauge invariant.

### 3.2. Differential Forms Version

Set

$$\omega=\phi dx_{0}-A_{1}dx_{1}-A_{2}dx_{2}-A_{3}dx_{3},$$

where ${\bf A}=(A_{1},A_{2},A_{3})$ is a time-dependent smooth vector field in ${\mathbb{R}}^{3}$. This $1$-form $\omega$ is often referred to as the magnetic potential $1$-form. With the Lorenz gauge from (3.5), we assume $\omega$ satisfies the following normalization:

$$\frac{\partial\phi}{\partial x_{0}}+\frac{\partial A_{1}}{\partial x_{1}}+\frac{\partial A_{2}}{\partial x_{2}}+\frac{\partial A_{3}}{\partial x_{3}}=0.\tag{3.8}$$

Now set

$$J=\rho dx_{0}+J_{1}dx_{1}+J_{2}dx_{2}+J_{3}dx_{3}.\tag{3.9}$$

Define the $2$-form $F_{\omega}=d\omega$. This is called the Faraday field strength, or simply the Faraday $2$-form. Using (3.2) and (3.3), we have

$$F_{\omega}=-dx_{0}\wedge\left(E_{1}dx_{1}+E_{2}dx_{2}+E_{3}dx_{3}\right)-B_{1}dx_{2}\wedge dx_{3}+B_{2}dx_{1}\wedge dx_{3}-B_{3}dx_{1}\wedge dx_{2}.\tag{3.10}$$

With the exterior derivative $d$ and the Hodge star operator $\star$, Maxwell’s equations take the concise form

$$dF_{\omega}=0,\tag{3.11}$$

$$\star\,d\star F_{\omega}=J,\tag{3.12}$$

where (3.11) is referred to as the Bianchi identity.
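The passage from the Lorenz gauge (3.5) to the wave equation (3.6) can be spot-checked on sample fields (the particular $\phi$ and $\mathbf{A}$ below are an assumed example, not from the paper):

```python
# Spot check: phi = sin(t - x1), A = (sin(t - x1), 0, 0) satisfy the
# Lorenz gauge (3.5); for them, rho computed from (3.4) agrees with the
# d'Alembertian of phi, as asserted in (3.6).
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
X = [x1, x2, x3]

phi = sp.sin(t - x1)
A = [sp.sin(t - x1), 0, 0]

div_A = sum(sp.diff(a, v) for a, v in zip(A, X))
laplacian = lambda f: sum(sp.diff(f, v, 2) for v in X)

# Lorenz gauge (3.5): d(phi)/dt + div A = 0
assert sp.simplify(sp.diff(phi, t) + div_A) == 0

# rho from (3.4) equals the d'Alembertian of phi, as in (3.6)
rho = -sp.diff(div_A, t) - laplacian(phi)
assert sp.simplify(rho - (sp.diff(phi, t, 2) - laplacian(phi))) == 0
```

The general derivation is one substitution: replacing $\nabla\cdot\mathbf{A}$ by $-\partial\phi/\partial t$ in (3.4) yields (3.6).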
Equation (3.11) is equivalent to the homogeneous Maxwell’s equations (3.1a) and (3.1b), while (3.12) is equivalent to the inhomogeneous Maxwell’s equations (3.1c) and (3.1d). We may also refer to (3.11) and (3.12) as the exterior differential form of Maxwell’s equations, and we say $\omega$, or alternatively $F_{\omega}$, is a solution to Maxwell’s equations if (3.11) and (3.12) are satisfied. It is now apparent that solutions to Maxwell’s equations are not unique: if $\omega$ is a solution, then $\omega^{\prime}=\omega+d\psi$ is also a solution for any smooth function $\psi$. This fact gives the gauge freedom of choosing $\phi$ that satisfies the Lorenz gauge condition (3.5). This differential form formulation of Maxwell’s equations also makes the following remark apparent.

###### Remark 3.1.

In vacuum, that is, when $\rho=0,J=0$, it follows from (3.11) and (3.12) that every self-dual or anti-self-dual $2$-form $F_{\omega}$ is a solution to the source-free Maxwell’s equations.

## 4\. Harmonic Solutions To Maxwell’s Equations

In this section we consider Maxwell’s equations in vacuum and study some complex solutions and, in particular, harmonic solutions to those equations. Before doing this, we shall first construct a complex differential form solution to Maxwell’s equations, initially not in vacuum. Let us begin by considering smooth functions $f_{1},f_{2},f_{\bar{1}},f_{\bar{2}}$ in the two variables $z_{1},z_{2}$. Define the complex differential form

$$\omega\left(z\right)=f_{1}dz_{1}+f_{2}dz_{2}+f_{\bar{1}}d\bar{z}_{1}+f_{\bar{2}}d\bar{z}_{2}.\tag{4.1}$$

The form $\omega$ acts as a potential, much like the magnetic potential discussed in Section 3.2. In a more general context, it is often referred to as a connection form. The associated curvature form, or curvature field, is defined as $F_{\omega}:=d\omega+\omega\wedge\omega$.
Since $\omega$ is scalar-valued, we have $\omega\wedge\omega=0$ and therefore $F_{\omega}=d\omega$. Direct computation verifies that

$$\begin{aligned}F_{\omega}&=\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)dz_{1}\wedge dz_{2}+\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)d\bar{z}_{1}\wedge d\bar{z}_{2}\\ &\quad+\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)dz_{1}\wedge d\bar{z}_{2}+\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)dz_{2}\wedge d\bar{z}_{1}\\ &\quad+\left(\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}\right)dz_{1}\wedge d\bar{z}_{1}+\left(\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2}\right)dz_{2}\wedge d\bar{z}_{2}.\end{aligned}\tag{4.2}$$

Now in terms of real $2$-forms, we can write (4.2) in the following way:

$$\begin{aligned}F_{\omega}&=-2i\left(\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}\right)dx_{0}\wedge dx_{1}\\ &\quad+\big(\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)+\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)+\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)-\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)\big)dx_{0}\wedge dx_{2}\\ &\quad+i\big(\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)-\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)-\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)-\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)\big)dx_{0}\wedge dx_{3}\\ &\quad-2i\left(\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2}\right)dx_{2}\wedge dx_{3}\\ &\quad+\big(-\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)-\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)+\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)-\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)\big)dx_{1}\wedge dx_{3}\\ &\quad+i\big(\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)-\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)+\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)+\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)\big)dx_{1}\wedge dx_{2}.\end{aligned}\tag{4.3}$$

One can read off the electric and magnetic components in view of (3.10):

$$\begin{aligned}E_{1}&=2i\left(\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}\right),\\ E_{2}&=-\big(\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)+\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)+\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)-\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)\big),\\ E_{3}&=-i\big(\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)-\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)-\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)-\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)\big),\\ B_{1}&=2i\left(\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2}\right),\\ B_{2}&=-\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)-\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)+\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)-\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right),\\ B_{3}&=-i\big(\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)-\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)+\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)+\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)\big).\end{aligned}$$

In general, since ${\bf E}$ and ${\bf B}$ are complex vectors in $\mathbb{C}^{3}$, the “electromagnetic dynamics” occurs in a $7$-dimensional real space ($\dim_{\mathbb{R}}\mathbb{C}^{3}=6$ plus one dimension for time).
The spatial inner product is defined by $\langle{\bf E},{\bf B}\rangle=E_{1}\bar{B_{1}}+E_{2}\bar{B_{2}}+E_{3}\bar{B_{3}}.$ To facilitate the computation of this inner product and the energy density $\frac{1}{2}(|{\bf E}|^{2}+|{\bf B}|^{2})$, we write (4.2) as $F_{\omega}=F_{12}dz_{1}\wedge dz_{2}+F_{\bar{1}\bar{2}}d\bar{z}_{1}\wedge d\bar{z}_{2}+\sum_{j,k=1,2}F_{j\bar{k}}dz_{j}\wedge d{\overline{z_{k}}}.$ (4.4) Then the following can be verified by direct computation: $\displaystyle\langle{\bf E},{\bf B}\rangle=4F_{1\bar{1}}\overline{F_{2\bar{2}}}+2\left[(F_{12}-F_{2\bar{1}})(\overline{F_{12}+F_{2\bar{1}}})+(F_{\bar{1}\bar{2}}+F_{1\bar{2}})(\overline{F_{\bar{1}\bar{2}}-F_{1\bar{2}}})\right].$ (4.5) The energy density of the electromagnetic dynamics can be computed as $\displaystyle\frac{1}{2}(|{\bf E}|^{2}+|{\bf B}|^{2})=2(|F_{12}|^{2}+|F_{\bar{1}\bar{2}}|^{2}+\sum_{j,k=1,2}|F_{j\bar{k}}|^{2}).$ (4.6) If $g$ is the Euclidean metric on $\mathbb{C}^{2}$, then one can also verify that $|{\bf E}|^{2}+|{\bf B}|^{2}=\langle F_{\omega},F_{\omega}\rangle.$ In particular, one observes that if $f_{\bar{i}}=\bar{f}_{i},i=1,2$ then both the electric field ${\bf E}$ and the magnetic field ${\bf B}$ are real vector- fields in ${\mathbb{R}}^{3}$. For example, one can write $E_{1}=2iF_{1\bar{1}}=2i\left(\overline{\bar{\partial}_{1}f_{1}}-\bar{\partial}_{1}f_{1}\right)$ and see that $E_{1}$ is $4$ times the imaginary part of $\bar{\partial}_{1}f_{1}$. Other components can be checked similarly. Alternatively, one may observe that since $\omega$ is real in this case, the curvature field $F_{\omega}=d\omega$ must be real. 
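The identities (4.5) and (4.6) can also be spot-checked numerically. The following Python snippet is an illustrative aside, not part of the derivation: it draws random values for the six coefficients of $F_{\omega}$, forms ${\bf E}$ and ${\bf B}$ from the component formulas displayed above, and compares both sides of (4.5) and (4.6).

```python
import random

random.seed(7)
rc = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))

# Six coefficients of F_omega as in (4.4): F12 (dz1^dz2), Fb (dzbar1^dzbar2),
# and the four mixed coefficients F_{j kbar} (dz_j ^ dzbar_k).
F12, Fb = rc(), rc()
F11, F12m, F21m, F22 = rc(), rc(), rc(), rc()  # F_{1 1bar}, F_{1 2bar}, F_{2 1bar}, F_{2 2bar}

# Electric and magnetic components read off from the real form of F_omega.
E1 = 2j * F11
E2 = -(F12 + Fb + F12m - F21m)
E3 = -1j * (F12 - Fb - F12m - F21m)
B1 = 2j * F22
B2 = -F12 - Fb + F12m - F21m
B3 = -1j * (F12 - Fb + F12m + F21m)

# Left side and right side of (4.5).
ip = E1 * B1.conjugate() + E2 * B2.conjugate() + E3 * B3.conjugate()
rhs_45 = 4 * F11 * F22.conjugate() + 2 * (
    (F12 - F21m) * (F12 + F21m).conjugate()
    + (Fb + F12m) * (Fb - F12m).conjugate())

# Left side and right side of (4.6).
energy = 0.5 * sum(abs(v) ** 2 for v in (E1, E2, E3, B1, B2, B3))
rhs_46 = 2 * sum(abs(v) ** 2 for v in (F12, Fb, F11, F12m, F21m, F22))

assert abs(ip - rhs_45) < 1e-12
assert abs(energy - rhs_46) < 1e-12
```

Both assertions pass for any choice of the six coefficients, since (4.5) and (4.6) are algebraic identities in them.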
In this case, one has $F_{\bar{1}\bar{2}}=\overline{F_{12}}$ and $F_{1\bar{2}}=-\overline{F_{2\bar{1}}}$, and it follows that $\displaystyle\langle{\bf E},{\bf B}\rangle=4\left(F_{1\bar{1}}\overline{F_{2\bar{2}}}+|F_{12}|^{2}-|F_{2\bar{1}}|^{2}\right).$ (4.7) If ${\bf E}$ and ${\bf B}$ are real vectors, then $\cos^{-1}\frac{\langle{\bf E},{\bf B}\rangle}{|{\bf E}||{\bf B}|}$ is the angle between the electric field and the magnetic field. But when ${\bf E}$ and ${\bf B}$ are complex vectors, the physical meaning of $\langle{\bf E},{\bf B}\rangle$ is less clear. Although the original Maxwell’s equations were formulated under the Minkowski metric, the differential form formulation (3.11) and (3.12) makes good sense under other metrics. In the sequel we shall consider the Maxwell’s equations in this formulation under two different metrics. ### 4.1. Euclidean Metric Case Under the Euclidean metric, applying the Hodge star operator to (4.2) (cf. Example 2.8), one has $\displaystyle\star F_{\omega}$ $\displaystyle=\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)dz_{1}\wedge dz_{2}+\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)d\bar{z}_{1}\wedge d\bar{z}_{2}$ $\displaystyle-\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right)dz_{1}\wedge d\bar{z}_{2}-\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)dz_{2}\wedge d\bar{z}_{1}$ (4.8) $\displaystyle+\left(\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2}\right)dz_{1}\wedge d\bar{z}_{1}+\left(\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}\right)dz_{2}\wedge d\bar{z}_{2}.$ It is immediate that $F_{\omega}$ is self-dual if and only if $\displaystyle\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}$ $\displaystyle=\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2},$ (4.9) $\displaystyle\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}$ $\displaystyle=\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}=0,$ and it is anti-self-dual if and only if 
$\displaystyle\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}$ $\displaystyle=-\left(\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2}\right),$ (4.10) $\displaystyle\partial_{1}f_{2}-\partial_{2}f_{1}$ $\displaystyle=\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}=0.$ The following fact follows immediately from (4.5). ###### Corollary 4.1. Let $F_{\omega}$ be a self-dual solution to the Maxwell equations in vacuum with respect to the Euclidean metric on $\mathbb{C}^{2}$. Then $\langle{\bf E},{\bf B}\rangle\geq 0$ holds with equality only if $\omega$ is a trivial solution. ###### Proof. In fact, Equations (4.9) show that $F_{1\bar{1}}=F_{2\bar{2}}$ and $F_{1\bar{2}}=F_{2\bar{1}}=0$. Hence by (4.5) one has $\langle{\bf E},{\bf B}\rangle=2(2|F_{1\bar{1}}|^{2}+|F_{12}|^{2}+|F_{\bar{1}\bar{2}}|^{2})\geq 0.$ If $\langle{\bf E},{\bf B}\rangle=0$ then all six coefficients of the $2$-forms in $F_{\omega}$ are $0$ and hence $\omega$ is a trivial solution.∎ Since $F_{\omega}=d\omega$, $F_{\omega}$ is exact. Hence if $F_{\omega}$ is self-dual or anti-self-dual then $d\star F_{\omega}=\pm dF_{\omega}=\pm d^{2}\omega=0.$ Now if $f_{1},f_{2}$ are holomorphic and $f_{\bar{1}},f_{\bar{2}}$ are conjugate holomorphic, then the equations in (4.9) are automatically satisfied. We thus have the following fact. ###### Proposition 4.2. The form $F_{\omega}$, determined by $\omega$ in (4.1) with $f_{1},f_{2}$ holomorphic and $f_{\bar{1}},f_{\bar{2}}$ conjugate holomorphic, is a self-dual solution to the source-free Maxwell’s equations (3.11) and (3.12) under the Euclidean metric. Further, in this case $\displaystyle F_{\omega}$ $\displaystyle=\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)dz_{1}\wedge dz_{2}+\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)d\bar{z}_{1}\wedge d\bar{z}_{2}.$ (4.11) Consequently, we have ###### Corollary 4.3.
If $\omega$ is of the form (4.1) and $f_{1},f_{2}$ are holomorphic and $f_{\bar{1}},f_{\bar{2}}$ are conjugate holomorphic, then $\displaystyle E_{1}$ $\displaystyle=B_{1}=0,$ $\displaystyle E_{2}$ $\displaystyle=B_{2}=-\left((\partial_{1}f_{2}-\partial_{2}f_{1})+(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}})\right),$ $\displaystyle E_{3}$ $\displaystyle=B_{3}=-i\left((\partial_{1}f_{2}-\partial_{2}f_{1})-(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}})\right).$ It is surprising that in this case the electric field and the magnetic field coincide, or in other words they are mathematically indistinguishable. The same phenomenon occurs later in Example 4.6 on the Dirac monopole. Of course self-duality (resp. anti-self-duality) of the differential form $F_{\omega}$ does not necessarily require $f_{j},f_{\bar{j}},\text{ }j=1,2$ to be holomorphic (resp. conjugate holomorphic). For instance, $f_{1}=z_{1}+\bar{z}_{1},f_{2}=z_{2}+\bar{z}_{2},f_{\bar{1}}=\bar{z}_{1}-z_{1},f_{\bar{2}}=\bar{z}_{2}-z_{2}$ satisfy (4.9), while $f_{1}=z_{1}-\bar{z}_{2},f_{2}=z_{2}-\bar{z}_{1},f_{\bar{1}}=\bar{z}_{1}-z_{1},f_{\bar{2}}=z_{2}-\bar{z}_{2}$ satisfy (4.10). The next example exhibits a $1$-dimensional electromagnetic dynamics. ###### Example 4.4. Let $\tau\left(z\right)=f_{1}\left(dz_{1}+d\bar{z}_{1}\right)+f_{2}\left(dz_{2}+d\bar{z}_{2}\right)$, where $f_{1}$ and $f_{2}$ are holomorphic in $z_{1}$ and $z_{2}$. We claim that ${F}:=d\tau$ is self-dual if and only if ${F}=m\left(dz_{1}\wedge d\bar{z}_{1}+dz_{2}\wedge d\bar{z}_{2}\right)$ for some constant $m$. First, it is clear that if ${F}=m\left(dz_{1}\wedge d\bar{z}_{1}+dz_{2}\wedge d\bar{z}_{2}\right)$ then it is self-dual.
Conversely, since $f_{1},f_{2}$ are holomorphic, using the definition of ${F}$, after some simplification we have $\displaystyle{F}$ $\displaystyle=\partial_{1}f_{1}dz_{1}\wedge d\bar{z}_{1}+\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)dz_{1}\wedge dz_{2}+\partial_{1}f_{2}dz_{1}\wedge d\bar{z}_{2}$ (4.12) $\displaystyle+\partial_{2}f_{1}dz_{2}\wedge d\bar{z}_{1}+\partial_{2}f_{2}dz_{2}\wedge d\bar{z}_{2}.$ After setting ${F}=\star{F}$ and comparing coefficients, it follows that $\partial_{1}f_{1}=\partial_{2}f_{2},\text{ }\partial_{1}f_{2}=0=\partial_{2}f_{1}.$ (4.13) The second relation in (4.13) implies $f_{1}$ is independent of $z_{2}$, and likewise $f_{2}$ is independent of $z_{1}$. From the first relation, note that $\partial_{1}f_{1}$ depends only on $z_{1}$ while $\partial_{2}f_{2}$ depends only on $z_{2}$; since they are equal, both derivatives must equal a common constant $m$. Therefore, we can write $f_{1}=mz_{1}+c_{1},f_{2}=mz_{2}+c_{2}$ for constants $m,c_{1},c_{2}\in\mathbb{C}$. Substituting the conditions in (4.13) into (4.12), we have $\displaystyle{F}$ $\displaystyle=m\left(dz_{1}\wedge d\bar{z}_{1}+dz_{2}\wedge d\bar{z}_{2}\right),$ $\displaystyle=-2im\left(dx_{0}\wedge dx_{1}+dx_{2}\wedge dx_{3}\right).$ (4.14) One can read off the electric and magnetic components in (4.14) by comparing with (3.10): $\displaystyle E_{1}$ $\displaystyle=B_{1}=2im,$ $\displaystyle E_{2}$ $\displaystyle=B_{2}=E_{3}=B_{3}=0,$ which shows that the electromagnetic dynamics in this case is $1$-dimensional. We now continue with the discussion on the general case that $f_{i},f_{\bar{i}},i=1,2$ are smooth functions on $\mathbb{C}^{2}$.
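The relations (4.13) in Example 4.4 can be verified numerically. The snippet below is an illustrative aside: it implements the Wirtinger partials $\partial_{j},\bar{\partial}_{j}$ by central differences, treating $z_{1},\bar{z}_{1},z_{2},\bar{z}_{2}$ as four independent complex coordinates, and checks (4.13) for $f_{1}=mz_{1}+c_{1}$, $f_{2}=mz_{2}+c_{2}$ with arbitrarily chosen constants.

```python
# Wirtinger-style partials: treat (z1, zbar1, z2, zbar2) as independent
# complex coordinates and differentiate by central differences.
def d(f, i, h=1e-4):
    def g(w):
        wp, wm = list(w), list(w)
        wp[i] += h
        wm[i] -= h
        return (f(wp) - f(wm)) / (2 * h)
    return g

d1, d1b, d2, d2b = (lambda f: d(f, 0)), (lambda f: d(f, 1)), \
                   (lambda f: d(f, 2)), (lambda f: d(f, 3))

m, c1, c2 = 2 - 1j, 0.3, -0.7j            # arbitrary constants
f1 = lambda w: m * w[0] + c1              # f1 = m z1 + c1
f2 = lambda w: m * w[2] + c2              # f2 = m z2 + c2

w0 = [0.4 + 0.1j, 0.4 - 0.1j, -0.2 + 0.5j, -0.2 - 0.5j]

# Conditions (4.13): d1 f1 = d2 f2 (= m) and d1 f2 = d2 f1 = 0,
# so F = m (dz1 ^ dzbar1 + dz2 ^ dzbar2).
assert abs(d1(f1)(w0) - m) < 1e-8
assert abs(d2(f2)(w0) - m) < 1e-8
assert abs(d1(f2)(w0)) < 1e-8
assert abs(d2(f1)(w0)) < 1e-8
```

The same finite-difference operators are reusable for spot-checking the other examples in this section.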
After applying the exterior derivative to (4.8) and some simplification we obtain $\displaystyle d\star F_{\omega}$ $\displaystyle=\left(-\partial_{2}\bar{\partial}_{1}f_{1}+\left(2\partial_{1}\bar{\partial}_{1}+\partial_{2}\bar{\partial}_{2}\right)f_{2}-\partial_{1}\partial_{2}f_{\bar{1}}-\partial_{2}^{2}f_{\bar{2}}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}$ $\displaystyle+\left(-\left(\partial_{1}\bar{\partial}_{1}+2\partial_{2}\bar{\partial}_{2}\right)f_{1}+\partial_{1}\bar{\partial}_{2}f_{2}+\partial_{1}^{2}f_{\bar{1}}+\partial_{1}\partial_{2}f_{\bar{2}}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}$ (4.15) $\displaystyle+\left(-\bar{\partial}_{1}\bar{\partial}_{2}f_{1}-\bar{\partial}_{2}^{2}f_{2}-\bar{\partial}_{2}\partial_{1}f_{\bar{1}}+\left(2\bar{\partial}_{1}\partial_{1}+\bar{\partial}_{2}\partial_{2}\right)f_{\bar{2}}\right)dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}$ $\displaystyle+\left(\bar{\partial}_{1}^{2}f_{1}+\bar{\partial}_{1}\bar{\partial}_{2}f_{2}-\left(\bar{\partial}_{1}\partial_{1}+2\bar{\partial}_{2}\partial_{2}\right)f_{\bar{1}}+\bar{\partial}_{1}\partial_{2}f_{\bar{2}}\right)dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}.$ Applying the Hodge star operator to (4.15) and rearranging terms, we have $\displaystyle\star\text{ }d\star F_{\omega}$ $\displaystyle=2\left(\left(\partial_{1}\bar{\partial}_{1}+2\partial_{2}\bar{\partial}_{2}\right)f_{1}-\partial_{1}\bar{\partial}_{2}f_{2}-\partial_{1}^{2}f_{\bar{1}}-\partial_{1}\partial_{2}f_{\bar{2}}\right)dz_{1}$ $\displaystyle+2\left(-\bar{\partial}_{1}^{2}f_{1}-\bar{\partial}_{1}\bar{\partial}_{2}f_{2}+\left(\bar{\partial}_{1}\partial_{1}+2\bar{\partial}_{2}\partial_{2}\right)f_{\bar{1}}-\bar{\partial}_{1}\partial_{2}f_{\bar{2}}\right)d\bar{z}_{1}$ (4.16) $\displaystyle+2\left(-\partial_{2}\bar{\partial}_{1}f_{1}+\left(2\partial_{1}\bar{\partial}_{1}+\partial_{2}\bar{\partial}_{2}\right)f_{2}-\partial_{1}\partial_{2}f_{\bar{1}}-\partial_{2}^{2}f_{\bar{2}}\right)dz_{2}$ 
$\displaystyle+2\left(-\bar{\partial}_{1}\bar{\partial}_{2}f_{1}-\bar{\partial}_{2}^{2}f_{2}-\bar{\partial}_{2}\partial_{1}f_{\bar{1}}+\left(2\bar{\partial}_{1}\partial_{1}+\bar{\partial}_{2}\partial_{2}\right)f_{\bar{2}}\right)d\bar{z}_{2}.$ The RHS of (4.16) can be viewed as a complex current form that we may denote by $J$. For the sake of simplicity, we can rewrite (4.16) as $\star\text{ }d\star F_{\omega}=J:=P_{1}dz_{1}+P_{\bar{1}}d\bar{z}_{1}+P_{2}dz_{2}+P_{\bar{2}}d\bar{z}_{2},$ (4.17) where the coefficients $P_{j},P_{\bar{j}},j=1,2$ can be easily read off from (4.16). Likewise, we can write the RHS of (4.16) in terms of real 1-forms in view of (3.9): $J=\left(P_{1}+P_{\bar{1}}\right)dx_{0}+i\left(P_{1}-P_{\bar{1}}\right)dx_{1}+\left(P_{2}+P_{\bar{2}}\right)dx_{2}+i\left(P_{2}-P_{\bar{2}}\right)dx_{3}.$ Again, if $f_{\bar{i}}=\bar{f}_{i},i=1,2$ then $J$ is real. Here $P_{1}+P_{\bar{1}}$ can be considered as a scalar electric charge density $\rho$, while the last three coefficients of the above current 1-form correspond to the last three components $J_{1},J_{2},J_{3}$ of the four-current $\left(\rho,J_{1},J_{2},J_{3}\right)$ from Section 3.1. So to summarize, with smooth complex-valued functions $f_{1},f_{2},f_{\bar{1}},f_{\bar{2}}$ in two variables $z_{1},z_{2}$, we have determined, from the complex differential form $\omega$ in (4.1), a solution $F_{\omega}$ (in (4.2)) to a complex analogue of Maxwell’s equations $dF_{\omega}=0$ and $\star d\star F_{\omega}=J$. Let $\nabla^{2}:=4\left(\partial_{1}\bar{\partial}_{1}+\partial_{2}\bar{\partial}_{2}\right)$ be the Laplacian on $\mathbb{C}^{2}$ in the Euclidean metric. Then a complex function $f$ is said to be harmonic if $\nabla^{2}f=0$ on $\mathbb{C}^{2}$. The following is the main result of this subsection. ###### Theorem 4.5. Let $f_{j},f_{\bar{j}},\text{ }j=1,2$ be harmonic functions.
Then the complex differential form $\omega$ is a solution to the source-free Maxwell’s equations in the Euclidean metric if and only if $\bar{\partial}_{1}f_{1}+\bar{\partial}_{2}f_{2}+\partial_{1}f_{\bar{1}}+\partial_{2}f_{\bar{2}}$ is constant. ###### Proof. First, we can rewrite (4.15) as $\displaystyle d\star F_{\omega}$ $\displaystyle=\left(\frac{1}{4}\nabla^{2}f_{2}-\partial_{2}\bar{\partial}_{1}f_{1}-\partial_{2}^{2}f_{\bar{2}}-\partial_{1}\partial_{2}f_{\bar{1}}+\partial_{1}\bar{\partial}_{1}f_{2}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}$ $\displaystyle+\left(-\frac{1}{4}\nabla^{2}f_{1}+\partial_{1}\bar{\partial}_{2}f_{2}+\partial_{1}^{2}f_{\bar{1}}+\partial_{1}\partial_{2}f_{\bar{2}}-\partial_{2}\bar{\partial}_{2}f_{1}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}$ (4.18) $\displaystyle+\left(\frac{1}{4}\nabla^{2}f_{\bar{2}}-\partial_{1}\bar{\partial}_{2}f_{\bar{1}}-\bar{\partial}_{2}^{2}f_{2}+\partial_{1}\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{1}\bar{\partial}_{2}f_{1}\right)dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}$ $\displaystyle+\left(-\frac{1}{4}\nabla^{2}f_{\bar{1}}+\partial_{2}\bar{\partial}_{1}f_{\bar{2}}+\bar{\partial}_{1}^{2}f_{1}-\partial_{2}\bar{\partial}_{2}f_{\bar{1}}+\bar{\partial}_{1}\bar{\partial}_{2}f_{2}\right)dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}.$ Since $f_{j},f_{\bar{j}},\text{ }j=1,2$ are harmonic, we have $\nabla^{2}f_{j}=0$ and $\nabla^{2}f_{\bar{j}}=0$, which implies $\displaystyle d\star F_{\omega}$ $\displaystyle=\left(-\partial_{2}\bar{\partial}_{1}f_{1}-\partial_{2}^{2}f_{\bar{2}}-\partial_{1}\partial_{2}f_{\bar{1}}-\partial_{2}\bar{\partial}_{2}f_{2}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}$ $\displaystyle+\left(\partial_{1}\bar{\partial}_{2}f_{2}+\partial_{1}^{2}f_{\bar{1}}+\partial_{1}\partial_{2}f_{\bar{2}}+\partial_{1}\bar{\partial}_{1}f_{1}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}$
$\displaystyle+\left(-\partial_{1}\bar{\partial}_{2}f_{\bar{1}}-\bar{\partial}_{2}^{2}f_{2}-\partial_{2}\bar{\partial}_{2}f_{\bar{2}}-\bar{\partial}_{1}\bar{\partial_{2}}f_{1}\right)dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}$ $\displaystyle+\left(\partial_{2}\bar{\partial}_{1}f_{\bar{2}}+\bar{\partial}_{1}^{2}f_{1}+\partial_{1}\bar{\partial}_{1}f_{\bar{1}}+\bar{\partial}_{1}\bar{\partial}_{2}f_{2}\right)dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}.$ For $F_{\omega}$ to satisfy the source-free Maxwell’s equations, we require $d\star F_{\omega}=0$, which we may now write in matrix form as $\text{diag}\\{-\partial_{2},\partial_{1},-\bar{\partial}_{2},\bar{\partial}_{1}\\}\begin{pmatrix}\bar{\partial_{1}}&\bar{\partial_{2}}&\partial_{1}&\partial_{2}\\\ \bar{\partial_{1}}&\bar{\partial_{2}}&\partial_{1}&\partial_{2}\\\ \bar{\partial_{1}}&\bar{\partial_{2}}&\partial_{1}&\partial_{2}\\\ \bar{\partial_{1}}&\bar{\partial_{2}}&\partial_{1}&\partial_{2}\end{pmatrix}\begin{pmatrix}f_{1}\\\ f_{2}\\\ f_{\bar{1}}\\\ f_{\bar{2}}\end{pmatrix}=\mathbf{0},$ where “diag” stands for a diagonal matrix and $\mathbf{0}$ denotes the column 4-vector of zeroes. Clearly, the above is true if and only if $\bar{\partial}_{1}f_{1}+\bar{\partial}_{2}f_{2}+\partial_{1}f_{\bar{1}}+\partial_{2}f_{\bar{2}}$ is a constant.∎ ###### Example 4.6. With $z_{1}=x_{0}+ix_{1},z_{2}=x_{2}+ix_{3}$, consider the form $\displaystyle\omega\left(z\right)$ $\displaystyle=i\left(x_{0}dx_{1}-x_{1}dx_{0}+x_{2}dx_{3}-x_{3}dx_{2}\right)$ $\displaystyle=\frac{1}{2}\left(\eta\left(z\right)-\overline{\eta\left(z\right)}\right),$ (4.19) where $\eta\left(z\right)=\bar{z}_{1}dz_{1}+\bar{z}_{2}dz_{2}$. In this case, we have $f_{1}=\frac{1}{2}\bar{z}_{1},f_{2}=\frac{1}{2}\bar{z}_{2},f_{\bar{1}}=-\frac{1}{2}z_{1},f_{\bar{2}}=-\frac{1}{2}z_{2}$, where $f_{j},f_{\bar{j}},\text{ }j=1,2$ are not holomorphic (resp. conjugate holomorphic) but are all clearly harmonic. 
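These choices can be sanity-checked numerically (an illustrative aside, not part of the original argument): treating $z_{1},\bar{z}_{1},z_{2},\bar{z}_{2}$ as independent coordinates, the snippet below verifies that the coefficients of $F_{\omega}$ satisfy the self-duality conditions (4.9) and that the sum appearing in Theorem 4.5 vanishes.

```python
# Wirtinger-style partials via central differences over independent
# coordinates (z1, zbar1, z2, zbar2).
def d(f, i, h=1e-4):
    def g(w):
        wp, wm = list(w), list(w)
        wp[i] += h
        wm[i] -= h
        return (f(wp) - f(wm)) / (2 * h)
    return g

d1, d1b, d2, d2b = (lambda f: d(f, 0)), (lambda f: d(f, 1)), \
                   (lambda f: d(f, 2)), (lambda f: d(f, 3))

f1  = lambda w:  0.5 * w[1]    # f1      =  zbar1 / 2
f2  = lambda w:  0.5 * w[3]    # f2      =  zbar2 / 2
f1b = lambda w: -0.5 * w[0]    # f_{1bar} = -z1 / 2
f2b = lambda w: -0.5 * w[2]    # f_{2bar} = -z2 / 2

w0 = [0.3 + 0.2j, 0.3 - 0.2j, -0.1 + 0.4j, -0.1 - 0.4j]

# Self-duality conditions (4.9): F_{1 1bar} = F_{2 2bar} (= -1 here) and
# F_{1 2bar} = F_{2 1bar} = 0.
F11 = d1(f1b)(w0) - d1b(f1)(w0)
F22 = d2(f2b)(w0) - d2b(f2)(w0)
F12m = d1(f2b)(w0) - d2b(f1)(w0)
F21m = d2(f1b)(w0) - d1b(f2)(w0)
assert abs(F11 + 1) < 1e-8 and abs(F22 + 1) < 1e-8
assert abs(F12m) < 1e-8 and abs(F21m) < 1e-8

# The sum in Theorem 4.5 vanishes identically for these choices.
S = d1b(f1)(w0) + d2b(f2)(w0) + d1(f1b)(w0) + d2(f2b)(w0)
assert abs(S) < 1e-8
```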
Moreover, we have the associated curvature form $F_{\omega}=d\omega=-\left(dz_{1}\wedge d\bar{z}_{1}+dz_{2}\wedge d\bar{z}_{2}\right),$ which is self-dual in the Euclidean metric. Now observe that $\displaystyle\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}$ $\displaystyle=-1=\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2},$ $\displaystyle\partial_{1}f_{\bar{2}}$ $\displaystyle=0=\bar{\partial}_{1}f_{2},$ $\displaystyle\partial_{2}f_{\bar{1}}$ $\displaystyle=0=\bar{\partial}_{2}f_{1}.$ These relations clearly verify the self-duality conditions (4.9) for $F_{\omega}$. Moreover, $\bar{\partial}_{1}f_{1}+\bar{\partial}_{2}f_{2}+\partial_{1}f_{\bar{1}}+\partial_{2}f_{\bar{2}}=0$. The above example is related to the Dirac monopole ([17]), which is a hypothetical magnetic charge. The original idea was proposed in a 1931 paper by Paul Dirac ([3]). Evidently, the above example shows that the existence of Dirac monopoles does not conflict with Maxwell’s equations in vacuum (away from the magnetic monopole). See [3, 15] and the references therein for more background on this rather intriguing subject. A notable fact here is that ${\bf E}={\bf B}$. It is easy to compute that ${\bf E}={\bf B}=(-2i,0,0)$, and therefore $\langle{\bf E},{\bf B}\rangle=\frac{1}{2}(|{\bf E}|^{2}+|{\bf B}|^{2})=4.$ Theorem 4.5 leads to easy constructions of non-self-dual solutions to Equations (3.11) and (3.12) in vacuum. ###### Example 4.7. In fact, a simple working example that satisfies the conditions in Theorem 4.5, while $F_{\omega}$ is neither self-dual nor anti-self-dual, is $f_{1}=2\bar{z}_{1}-z_{2},f_{2}=z_{1}+2\bar{z}_{2},f_{\bar{1}}=z_{1}+\bar{z}_{1},f_{\bar{2}}=2z_{2}+\bar{z}_{2}$. Indeed, these functions are harmonic with $\bar{\partial}_{1}f_{1}+\bar{\partial}_{2}f_{2}+\partial_{1}f_{\bar{1}}+\partial_{2}f_{\bar{2}}=7$, while $\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}=-1\neq 0=\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2}$, so neither (4.9) nor (4.10) can hold. ### 4.2. Minkowski Metric Case We believe that much of the work in Section 4.1 can be done in a parallel manner with respect to other bilinear forms $\langle\cdot,\cdot\rangle_{g}$ on ${\mathbb{R}}^{4}$, where $g$ is a nondegenerate constant $4\times 4$ self-adjoint matrix.
But since the Minkowski metric on ${\mathbb{R}}^{1,3}$ conforms more closely to physical reality, and is indeed the setting in which Maxwell’s equations were initially studied, we shall work out the details in this subsection. Recall that the d’Alembertian in the Minkowski metric is given by $\Box=\frac{\partial^{2}}{\partial x_{0}^{2}}-\frac{\partial^{2}}{\partial x_{1}^{2}}-\frac{\partial^{2}}{\partial x_{2}^{2}}-\frac{\partial^{2}}{\partial x_{3}^{2}}=\frac{\partial^{2}}{\partial x_{0}^{2}}-\nabla^{2},$ (4.20) where here $\nabla^{2}$ denotes the spatial Laplacian in the variables $x_{1},x_{2},x_{3}$ (not the Laplacian on $\mathbb{C}^{2}$ introduced in Section 4.1). Using the identities $\displaystyle\partial_{1}=\frac{1}{2}\left(\frac{\partial}{\partial x_{0}}-i\frac{\partial}{\partial x_{1}}\right),\hskip 28.45274pt\bar{\partial}_{1}=\frac{1}{2}\left(\frac{\partial}{\partial x_{0}}+i\frac{\partial}{\partial x_{1}}\right)$ $\displaystyle\partial_{2}=\frac{1}{2}\left(\frac{\partial}{\partial x_{2}}-i\frac{\partial}{\partial x_{3}}\right),\hskip 28.45274pt\bar{\partial}_{2}=\frac{1}{2}\left(\frac{\partial}{\partial x_{2}}+i\frac{\partial}{\partial x_{3}}\right),$ (4.21) we have the following. ###### Lemma 4.8. In terms of complex variables, the d’Alembertian in the Minkowski metric is given by $\Box=2\left(\partial_{1}^{2}+\bar{\partial}_{1}^{2}-2\partial_{2}\bar{\partial}_{2}\right).$ (4.22) ###### Proof.
Using the first two identities in (4.21), note that $\displaystyle\partial_{1}+\bar{\partial}_{1}$ $\displaystyle=\frac{\partial}{\partial x_{0}},\hskip 28.45274pti\left(\partial_{1}-\bar{\partial}_{1}\right)=\frac{\partial}{\partial x_{1}}.$ Then the first two terms of the d’Alembertian in (4.20) are $\displaystyle\left(\frac{\partial}{\partial x_{0}}+\frac{\partial}{\partial x_{1}}\right)\left(\frac{\partial}{\partial x_{0}}-\frac{\partial}{\partial x_{1}}\right)$ $\displaystyle=\left(\partial_{1}+\bar{\partial}_{1}+i\left(\partial_{1}-\bar{\partial}_{1}\right)\right)\left(\partial_{1}+\bar{\partial}_{1}-i\left(\partial_{1}-\bar{\partial}_{1}\right)\right)$ $\displaystyle=\left(\partial_{1}+\bar{\partial}_{1}\right)^{2}+\left(\partial_{1}-\bar{\partial}_{1}\right)^{2}$ $\displaystyle=2\left(\partial_{1}^{2}+\bar{\partial}_{1}^{2}\right).$ Similarly, using the last two identities in (4.21), it follows that the last two terms of the d’Alembertian in (4.20) are $-4\partial_{2}\bar{\partial}_{2}$. Therefore, $\Box=2\left(\partial_{1}^{2}+\bar{\partial}_{1}^{2}-2\partial_{2}\bar{\partial}_{2}\right)$. ∎ ###### Definition 4.9. A function $f$ is said to be $M$-harmonic if $\Box f=0$. Here the “$M$” refers to the Minkowski metric. It is well-known that $M$-harmonic functions $\psi$ describe waves propagating in ${\mathbb{R}}^{1,3}$. A smooth $1$-form as defined in (4.1) is said to be wavelike if the functions $f_{j},f_{\bar{j}},j=1,2$ are all $M$-harmonic. Likewise, the curvature field $F_{\omega}$ is said to be wavelike if its coefficient functions $F_{12},F_{\bar{1}\bar{2}}$, and $F_{j\bar{k}},j,k=1,2$ in (4.4) are all $M$-harmonic. It is easy to check that if $\omega$ is wavelike then $F_{\omega}$ is also wavelike, for instance, $\Box F_{12}=\Box\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)=\partial_{1}\Box f_{2}-\partial_{2}\Box f_{1}=0,$ and other coefficients are checked similarly. 
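Lemma 4.8 can be confirmed numerically on a test function. The snippet below is an illustrative sketch: it builds the Wirtinger operators from real partial derivatives via (4.21), using central differences, and compares the complex-variable expression (4.22) with the d'Alembertian computed directly from real second differences.

```python
# Real partials by central differences; Wirtinger operators built from
# them via (4.21).
def dx(f, i, h=1e-3):
    def g(x):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        return (f(xp) - f(xm)) / (2 * h)
    return g

d1  = lambda f: (lambda x: 0.5 * (dx(f, 0)(x) - 1j * dx(f, 1)(x)))
d1b = lambda f: (lambda x: 0.5 * (dx(f, 0)(x) + 1j * dx(f, 1)(x)))
d2  = lambda f: (lambda x: 0.5 * (dx(f, 2)(x) - 1j * dx(f, 3)(x)))
d2b = lambda f: (lambda x: 0.5 * (dx(f, 2)(x) + 1j * dx(f, 3)(x)))

# An arbitrary polynomial test function of (x0, x1, x2, x3).
f = lambda x: x[0] ** 2 * x[2] + x[1] * x[3] ** 2 + x[0] * x[1] * x[2] * x[3]

pt = [0.5, -0.3, 0.7, 0.2]

def dxx(f, i):  # second derivative by composing central differences
    return dx(dx(f, i), i)

# d'Alembertian computed directly from real second derivatives ...
box_real = (dxx(f, 0)(pt) - dxx(f, 1)(pt)
            - dxx(f, 2)(pt) - dxx(f, 3)(pt))

# ... and via the complex-variable expression (4.22).
box_cplx = 2 * (d1(d1(f))(pt) + d1b(d1b(f))(pt) - 2 * d2(d2b(f))(pt))

assert abs(box_real - box_cplx) < 1e-6
```

For this particular $f$ one has $\Box f = 2x_{2}-2x_{1}$, so both quantities equal $2.0$ at the sample point, up to discretization error.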
In particular, this implies that $\Box{E_{j}}=\Box{B_{j}}=0,1\leq j\leq 3$, i.e., ${\bf E}$ and ${\bf B}$ are waves in the space $\mathbb{C}^{3}$. However, as we will see a bit later, there exist non-wavelike forms $\omega$ for which $F_{\omega}$ is wavelike. Now let $f_{j},f_{\bar{j}},\text{ }j=1,2$ be complex smooth functions as before. Then $F_{\omega}$ is still the same as in (4.2). However, in the Minkowski metric, by Example 2.9 we have $\displaystyle\star F_{\omega}$ $\displaystyle=\left(\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}\right)dz_{1}\wedge d\bar{z}_{2}+\left(\partial_{2}f_{1}-\partial_{1}f_{2}\right)dz_{2}\wedge d\bar{z}_{1}$ $\displaystyle+\left(\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}\right)dz_{1}\wedge dz_{2}+\left(\bar{\partial}_{2}f_{1}-\partial_{1}f_{\bar{2}}\right)d\bar{z}_{1}\wedge d\bar{z}_{2}$ (4.23) $\displaystyle+\left(\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2}\right)dz_{1}\wedge d\bar{z}_{1}+\left(\bar{\partial}_{1}f_{1}-\partial_{1}f_{\bar{1}}\right)dz_{2}\wedge d\bar{z}_{2}.$ In the Minkowski metric on $\mathbb{C}^{2}$, the curvature form $F_{\omega}$ is self-dual (resp. anti-self-dual) provided $\star F_{\omega}=\pm iF_{\omega}$ ([1, 5]).
So self-duality of $F_{\omega}$ requires $\displaystyle\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}$ $\displaystyle=i\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right),$ $\displaystyle\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}$ $\displaystyle=i\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right),$ (4.24) $\displaystyle\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2}$ $\displaystyle=i\left(\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}\right).$ On the other hand, anti-self-duality of $F_{\omega}$ requires $\displaystyle\partial_{2}f_{\bar{1}}-\bar{\partial}_{1}f_{2}$ $\displaystyle=-i\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right),$ $\displaystyle\bar{\partial}_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{\bar{1}}$ $\displaystyle=-i\left(\partial_{1}f_{\bar{2}}-\bar{\partial}_{2}f_{1}\right),$ (4.25) $\displaystyle\partial_{2}f_{\bar{2}}-\bar{\partial}_{2}f_{2}$ $\displaystyle=-i\left(\partial_{1}f_{\bar{1}}-\bar{\partial}_{1}f_{1}\right).$ The following fact is immediate. ###### Corollary 4.10. If $\omega$ is a self-dual or anti-self-dual solution to Maxwell’s equations in vacuum with respect to the Minkowski metric, then $\langle{\bf E},{\bf B}\rangle$ is either $0$ or purely imaginary. In particular, if $\omega$ is a real self-dual or anti-self-dual solution then $\langle{\bf E},{\bf B}\rangle=0$. ###### Proof.
If $\omega$ is self-dual, then (4.24) indicates that $F_{2\bar{1}}=iF_{12},\ \ F_{\bar{1}\bar{2}}=iF_{1\bar{2}},\ \ F_{2\bar{2}}=iF_{1\bar{1}}.$ Applying these relations to (4.5), one has $\displaystyle\langle{\bf E},{\bf B}\rangle=$ $\displaystyle-4i|F_{1\bar{1}}|^{2}+2(1-i)^{2}|F_{12}|^{2}-2(1+i)^{2}|F_{1\bar{2}}|^{2}$ $\displaystyle=-4i\left(|F_{1\bar{1}}|^{2}+|F_{12}|^{2}+|F_{1\bar{2}}|^{2}\right).$ In the case $\omega$ is anti-self-dual, parallel computations yield $\langle{\bf E},{\bf B}\rangle=4i\left(|F_{1\bar{1}}|^{2}+|F_{12}|^{2}+|F_{1\bar{2}}|^{2}\right).$ If $\omega$ is real then $\langle{\bf E},{\bf B}\rangle$ is real and therefore it must be equal to $0$.∎ To proceed, as in the previous subsection we assume $F_{\omega}$ is neither self-dual nor anti-self-dual in the Minkowski metric. So for the source-free Maxwell’s equations to be satisfied, we require $d\star F_{\omega}=0$. In this case, we have $\displaystyle d\star F_{\omega}$ $\displaystyle=\left(\partial_{1}\partial_{2}f_{1}-\left(\partial_{1}^{2}+\bar{\partial}_{1}^{2}-\partial_{2}\bar{\partial}_{2}\right)f_{2}+\partial_{2}\bar{\partial}_{1}f_{\bar{1}}-\partial_{2}^{2}f_{\bar{2}}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}$ $\displaystyle+\left(\partial_{1}\bar{\partial}_{1}f_{1}-\bar{\partial}_{1}\bar{\partial}_{2}f_{2}-\left(\partial_{1}^{2}-2\partial_{2}\bar{\partial}_{2}\right)f_{\bar{1}}-\partial_{2}\bar{\partial}_{1}f_{\bar{2}}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}$ (4.26) $\displaystyle+\left(\partial_{1}\bar{\partial}_{2}f_{1}-\bar{\partial}_{2}^{2}f_{2}+\bar{\partial}_{1}\bar{\partial}_{2}f_{\bar{1}}-\left(\partial_{1}^{2}+\bar{\partial}_{1}^{2}-\partial_{2}\bar{\partial}_{2}\right)f_{\bar{2}}\right)dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}$ $\displaystyle+\left(-\left(\bar{\partial}_{1}^{2}-2\partial_{2}\bar{\partial}_{2}\right)f_{1}-\partial_{1}\bar{\partial}_{2}f_{2}+\partial_{1}\bar{\partial}_{1}f_{\bar{1}}-\partial_{1}\partial_{2}f_{\bar{2}}\right)dz_{2}\wedge
d\bar{z}_{1}\wedge d\bar{z}_{2}.$ Observe that if $f_{1},f_{2}$ are holomorphic and $f_{\bar{1}},f_{\bar{2}}$ are conjugate holomorphic, then equations (4.24) and (4.25) force all the coefficients of the $2$-forms in $F_{\omega}$ to be $0$. Hence there is no nontrivial self-dual or anti-self-dual solution to the source-free Maxwell equations in this case. However, it follows from the above computation that $d\star F_{\omega}=\left(\partial_{1}\partial_{2}f_{1}-\partial_{1}^{2}f_{2}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}+\left(\bar{\partial}_{1}\bar{\partial}_{2}f_{\bar{1}}-\bar{\partial}_{1}^{2}f_{\bar{2}}\right)dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}.$ Hence $d\star F_{\omega}=0$ if and only if $\partial_{1}\partial_{2}f_{1}-\partial_{1}^{2}f_{2}=\bar{\partial}_{1}\bar{\partial}_{2}f_{\bar{1}}-\bar{\partial}_{1}^{2}f_{\bar{2}}=0.$ (4.27) Further, if $f_{\bar{j}}=\overline{f_{j}},\text{ }j=1,2$ then the above two equations are the same. One thus obtains the following fact. ###### Proposition 4.11. Let $f_{1}$ and $f_{2}$ be holomorphic functions and $f_{\bar{j}}=\overline{f_{j}},\text{ }j=1,2$. Then $\omega$ is a solution to Maxwell’s equations in vacuum with respect to the Minkowski metric if and only if $\partial_{2}f_{1}-\partial_{1}f_{2}$ is independent of the variable $z_{1}$.
Similar to Proposition 4.2 and Corollary 4.3, in this case we have $\displaystyle F_{\omega}$ $\displaystyle=\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)dz_{1}\wedge dz_{2}+\overline{\left(\partial_{1}f_{2}-\partial_{2}f_{1}\right)}d\bar{z}_{1}\wedge d\bar{z}_{2},$ $\displaystyle=2\Re\left((\partial_{1}f_{2}-\partial_{2}f_{1})dz_{1}\wedge dz_{2}\right),$ and consequently, $\displaystyle E_{1}$ $\displaystyle=B_{1}=0,$ $\displaystyle E_{2}$ $\displaystyle=B_{2}=2\Re(\partial_{2}f_{1}-\partial_{1}f_{2}),$ $\displaystyle E_{3}$ $\displaystyle=B_{3}=-2\Im(\partial_{2}f_{1}-\partial_{1}f_{2}),$ where $\Re(a)$ and $\Im(a)$ stand for the real and imaginary parts, respectively, of a complex number $a$. Observe that ${\bf E}={\bf B}$ in this case, which resembles the Dirac monopole example we examined earlier. ###### Example 4.12. There are plenty of holomorphic functions $f_{1}$ and $f_{2}$ that satisfy the condition in the above proposition. For instance, let $f_{1}=z_{1}^{2}h(z_{2})+g(z_{2}),\ f_{2}=\frac{z_{1}^{3}}{3}\partial_{2}h(z_{2}),$ where $g$ and $h$ are arbitrary one-variable entire functions. Then $\partial_{2}f_{1}-\partial_{1}f_{2}=\partial_{2}g(z_{2})$, which is independent of $z_{1}$. Further, since in this case $\Box f_{1}=4h(z_{2}),\ \ \ \Box f_{2}=4z_{1}\partial_{2}h(z_{2}),$ which can be nonzero, the $1$-form $\omega$ may not be wavelike. On the other hand, it is easy to see that $\partial_{2}g(z_{2})$ is M-harmonic and hence $F_{\omega}$ is wavelike. We state this observation as follows. ###### Corollary 4.13. There are real analytic non-wavelike solutions to Maxwell’s equations in vacuum. ###### Remark 4.14. Corollary 4.3 and the above observations also indicate that, under both the Euclidean metric and the Minkowski metric, Maxwell’s equations in vacuum have solutions in which the electric field and the magnetic field are mathematically indistinguishable. However, it is not clear if such solutions exist in nature.
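The claims of Example 4.12 can be checked numerically for a concrete choice, say $h(z_{2})=z_{2}$ and $g(z_{2})=z_{2}^{2}$. The snippet below is an illustrative aside: it verifies that $\partial_{2}f_{1}-\partial_{1}f_{2}$ equals $\partial_{2}g=2z_{2}$ independently of $z_{1}$, while $\Box f_{1}\neq 0$, so $\omega$ is not wavelike.

```python
# Wirtinger-style partials via central differences over independent
# coordinates (z1, zbar1, z2, zbar2).
def d(f, i, h=1e-4):
    def g(w):
        wp, wm = list(w), list(w)
        wp[i] += h
        wm[i] -= h
        return (f(wp) - f(wm)) / (2 * h)
    return g

d1, d1b, d2, d2b = (lambda f: d(f, 0)), (lambda f: d(f, 1)), \
                   (lambda f: d(f, 2)), (lambda f: d(f, 3))

# Take h(z2) = z2 and g(z2) = z2**2, so that d2 h = 1 and d2 g = 2 z2.
f1 = lambda w: w[0] ** 2 * w[2] + w[2] ** 2   # z1^2 h(z2) + g(z2)
f2 = lambda w: w[0] ** 3 / 3                  # (z1^3 / 3) d2 h(z2)

z2 = -0.2 + 0.5j
wa = [0.4 + 0.1j, 0.4 - 0.1j, z2, z2.conjugate()]
wb = [-1.1 + 0.7j, -1.1 - 0.7j, z2, z2.conjugate()]

# d2 f1 - d1 f2 should equal d2 g = 2 z2, independently of z1.
ca = d2(f1)(wa) - d1(f2)(wa)
cb = d2(f1)(wb) - d1(f2)(wb)
assert abs(ca - 2 * z2) < 1e-6
assert abs(ca - cb) < 1e-6

# Yet f1 is not M-harmonic: the d'Alembertian (4.22) is nonzero.
box_f1 = 2 * (d1(d1(f1))(wa) + d1b(d1b(f1))(wa) - 2 * d2(d2b(f1))(wa))
assert abs(box_f1) > 0.1
```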
Now coming back to our familiar wavelike solutions we have the following fact. Its proof is similar to that of Theorem 4.5. ###### Theorem 4.15. Assume $\omega$ as in (4.1) is wavelike. Then it is a solution to the Maxwell’s equations in vacuum under the Minkowski metric if and only if $\partial_{1}f_{1}-\bar{\partial}_{2}f_{2}+\bar{\partial}_{1}f_{\bar{1}}-\partial_{2}f_{\bar{2}}$ is constant. ###### Proof. Let $\mathbf{0}$ denote the column 4-vector of zeroes. Since $\omega$ is wavelike, we have $\Box f_{j}=\Box f_{\bar{j}}=0,j=1,2$, which means $\left(\partial_{1}^{2}+\bar{\partial}_{1}^{2}-2\partial_{2}\bar{\partial}_{2}\right)f_{j}=0,\left(\partial_{1}^{2}+\bar{\partial}_{1}^{2}-2\partial_{2}\bar{\partial}_{2}\right)f_{\bar{j}}=0.$ (4.28) Plugging (4.28) into (4.26) and rearranging terms, we have $\displaystyle d\star F_{\omega}$ $\displaystyle=\left(\partial_{1}\partial_{2}f_{1}-\partial_{2}\bar{\partial}_{2}f_{2}+\partial_{2}\bar{\partial}_{1}f_{\bar{1}}-\partial_{2}^{2}f_{\bar{2}}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}$ $\displaystyle+\left(\partial_{1}\bar{\partial}_{1}f_{1}-\bar{\partial}_{1}\bar{\partial}_{2}f_{2}+\bar{\partial}_{1}^{2}f_{\bar{1}}-\partial_{2}\bar{\partial}_{1}f_{\bar{2}}\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}$ $\displaystyle+\left(\partial_{1}\bar{\partial}_{2}f_{1}-\bar{\partial}_{2}^{2}f_{2}+\bar{\partial}_{1}\bar{\partial}_{2}f_{\bar{1}}-\partial_{2}\bar{\partial}_{2}f_{\bar{2}}\right)dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}$ $\displaystyle+\left(\partial_{1}^{2}f_{1}-\partial_{1}\bar{\partial}_{2}f_{2}+\partial_{1}\bar{\partial}_{1}f_{\bar{1}}-\partial_{1}\partial_{2}f_{\bar{2}}\right)dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}.$ To satisfy the source-free Maxwell’s equations, we need $d\star F_{\omega}=0$, which in matrix form is $\text{diag}\\{\partial_{2},\bar{\partial}_{1},\bar{\partial}_{2},\partial_{1}\\}\begin{pmatrix}\partial_{1}&-\bar{\partial}_{2}&\bar{\partial}_{1}&-\partial_{2}\\\ 
\partial_{1}&-\bar{\partial}_{2}&\bar{\partial}_{1}&-\partial_{2}\\\ \partial_{1}&-\bar{\partial}_{2}&\bar{\partial}_{1}&-\partial_{2}\\\ \partial_{1}&-\bar{\partial}_{2}&\bar{\partial}_{1}&-\partial_{2}\end{pmatrix}\begin{pmatrix}f_{1}\\\ f_{2}\\\ f_{\bar{1}}\\\ f_{\bar{2}}\end{pmatrix}=\mathbf{0}.$ Clearly, this is true if and only if $\partial_{1}f_{1}-\bar{\partial}_{2}f_{2}+\bar{\partial}_{1}f_{\bar{1}}-\partial_{2}f_{\bar{2}}$ is constant and this completes the proof.∎ ## 5\. On the Lorenz gauge It was indicated in Section 3.2 that given a smooth $4$-vector $(\phi,A_{1},A_{2},A_{3})$ one can associate with it the magnetic potential $1$-form $\omega=\phi dx_{0}-A_{1}dx_{1}-A_{2}dx_{2}-A_{3}dx_{3}.$ The Lorenz gauge condition (3.5) stipulates the normalization (3.8) regarding the sum of partial derivatives, namely, $\frac{\partial\phi}{\partial x_{0}}+\frac{\partial A_{1}}{\partial x_{1}}+\frac{\partial A_{2}}{\partial x_{2}}+\frac{\partial A_{3}}{\partial x_{3}}=0.$ Theorems 4.5 and 4.15 indeed give a mathematical explanation as to why the Lorenz gauge matters. Here we give a unified treatment. ###### Corollary 5.1. Let $\omega$ be a smooth $1$-form as defined in (4.1). Then the sum of partial derivatives appearing in Theorem 4.5 is $-\frac{1}{2}d^{*}\omega$, and that in Theorem 4.15 is $\frac{1}{2}d^{*}\omega$. ###### Proof. First, recall that in the Euclidean metric over $\mathbb{C}^{2}$ we have that $d^{*}\omega=-\star d\star\omega$. 
Then using the calculations in Example 2.8 one easily verifies that $\star\omega=\frac{1}{2}\left(f_{1}dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}+f_{\bar{1}}dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}-f_{2}dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}-f_{\bar{2}}dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}\right).$ It follows that $\bar{\partial}_{1}f_{1}+\bar{\partial}_{2}f_{2}+\partial_{1}f_{\bar{1}}+\partial_{2}f_{\bar{2}}=-\frac{1}{2}d^{*}\omega.$ The sum of partial derivatives appearing in (3.8) is the real-variable version of that in Theorem 4.15. In the Minkowski metric over $\mathbb{C}^{2}$, we have $d^{*}\omega=\star d\star\omega.$ (5.1) Under the Minkowski metric, using Example 2.9 we have $\displaystyle\star\omega=\frac{1}{2}$ $\displaystyle(f_{1}dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}+f_{2}dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}+f_{\bar{1}}dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}$ (5.2) $\displaystyle+f_{\bar{2}}dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}).$ Now applying the exterior derivative to (5.2), after simplifying and rearranging terms, we have $\displaystyle d\star\omega$ $\displaystyle=\frac{1}{2}\left(\partial_{1}f_{1}-\bar{\partial}_{2}f_{2}+\bar{\partial}_{1}f_{\bar{1}}-\partial_{2}f_{\bar{2}}\right)\left(dz_{1}\wedge d\bar{z}_{1}\wedge dz_{2}\wedge d\bar{z}_{2}\right)$ (5.3) $\displaystyle=-2\left(\partial_{1}f_{1}-\bar{\partial}_{2}f_{2}+\bar{\partial}_{1}f_{\bar{1}}-\partial_{2}f_{\bar{2}}\right)\left(dx_{0}\wedge dx_{1}\wedge dx_{2}\wedge dx_{3}\right).$ Since $\star\left(dx_{0}\wedge dx_{1}\wedge dx_{2}\wedge dx_{3}\right)=-1$ in the Minkowski metric, it follows that $d^{*}\omega=2\left(\partial_{1}f_{1}-\bar{\partial}_{2}f_{2}+\bar{\partial}_{1}f_{\bar{1}}-\partial_{2}f_{\bar{2}}\right).$ (5.4) ∎ If we write $\omega$ in the real form $\phi dx_{0}-A_{1}dx_{1}-A_{2}dx_{2}-A_{3}dx_{3},$ then (5.4) implies $d^{*}\omega=-\left(\frac{\partial\phi}{\partial x_{0}}+\frac{\partial A_{1}}{\partial x_{1}}+\frac{\partial A_{2}}{\partial x_{2}}+\frac{\partial A_{3}}{\partial x_{3}}\right).$ If $d^{*}\omega$ is a constant, say $k$, then one can easily modify $f_{i},f_{\bar{i}},i=1,2$ such that $d^{*}\omega$ becomes $0$. For example, in the Euclidean metric we can replace $f_{1}$ by $f_{1}+\frac{k}{2}\bar{z}_{1}$ and keep the other functions unchanged. A similar modification can be made in the Minkowski metric. In this view, the Lorenz gauge condition is just a trivial strengthening of the condition that $d^{*}\omega$ be constant. Therefore, we shall say that a smooth $1$-form $\omega$ satisfies the Lorenz gauge condition if $d^{*}\omega$ is constant. With the foregoing observation, in the Euclidean metric case one can write (4.15) as $\displaystyle d\star F_{\omega}$ $\displaystyle=\left(2\nabla^{2}f_{2}+\frac{1}{2}\partial_{2}d^{*}\omega\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}-\left(2\nabla^{2}f_{1}+\frac{1}{2}\partial_{1}d^{*}\omega\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}$ $\displaystyle+\left(2\nabla^{2}f_{\bar{2}}+\frac{1}{2}\bar{\partial}_{2}d^{*}\omega\right)dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}-\left(2\nabla^{2}f_{\bar{1}}+\frac{1}{2}\bar{\partial}_{1}d^{*}\omega\right)dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}.$ And in the Minkowski metric case, one can write (4.26) as $\displaystyle 2d\star F_{\omega}$ $\displaystyle=\left(-\Box^{2}f_{2}+\partial_{2}d^{*}\omega\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{1}+\left(-\Box^{2}f_{1}+\bar{\partial}_{1}d^{*}\omega\right)dz_{1}\wedge dz_{2}\wedge d\bar{z}_{2}$ $\displaystyle+\left(-\Box^{2}f_{\bar{2}}+\bar{\partial}_{2}d^{*}\omega\right)dz_{1}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}+\left(-\Box^{2}f_{\bar{1}}+\partial_{1}d^{*}\omega\right)dz_{2}\wedge d\bar{z}_{1}\wedge d\bar{z}_{2}.$ Moreover, if $\omega$ is a solution to Maxwell’s equations, we have $d^{*}d\omega=d^{*}F_{\omega}=0$.
Hence $\Delta\omega=(dd^{*}+d^{*}d)\omega=0$ if and only if $dd^{*}\omega=0$, i.e., $d^{*}\omega$ is a constant, or in other words $\omega$ satisfies the Lorenz gauge condition. We summarize Theorem 4.5, Theorem 4.15, and the foregoing observations in the following corollary. ###### Corollary 5.2. Let $\omega$ be a solution to Maxwell’s equations in vacuum under the Euclidean or Minkowski metric. Then the following are equivalent. 1. (a) $\omega$ satisfies the Lorenz gauge condition. 2. (b) $\omega$ is harmonic or respectively wavelike. 3. (c) $\omega$ is HL-harmonic. If $\omega$ as defined in (4.1) is a solution to Maxwell’s equations in vacuum under the Euclidean or Minkowski metric, then for every smooth function $u$ the form $\omega^{\prime}=\omega+du$ is also a solution because $F_{\omega^{\prime}}=d\omega^{\prime}=d\omega+d^{2}u=d\omega=F_{\omega}.$ This is the gauge invariance of Maxwell’s equations in the language of differential forms. We write the above gauge transformations as $f^{\prime}_{j}=f_{j}+\partial_{j}u,\ \ \ f^{\prime}_{\bar{j}}=f_{\bar{j}}+\bar{\partial}_{j}u,\ j=1,2.$ Then with respect to Theorem 4.5 direct computations give $\bar{\partial}_{1}f^{\prime}_{1}+\bar{\partial}_{2}f^{\prime}_{2}+\partial_{1}f^{\prime}_{\bar{1}}+\partial_{2}f^{\prime}_{\bar{2}}=\bar{\partial}_{1}f_{1}+\bar{\partial}_{2}f_{2}+\partial_{1}f_{\bar{1}}+\partial_{2}f_{\bar{2}}+\frac{1}{2}\nabla^{2}u,$ (5.5) and likewise with respect to Theorem 4.15 we have ${\partial}_{1}f^{\prime}_{1}-\bar{\partial}_{2}f^{\prime}_{2}+\bar{\partial}_{1}f^{\prime}_{\bar{1}}-\partial_{2}f^{\prime}_{\bar{2}}={\partial}_{1}f_{1}-\bar{\partial}_{2}f_{2}+\bar{\partial}_{1}f_{\bar{1}}-\partial_{2}f_{\bar{2}}+\frac{1}{2}\Box u.$ (5.6) It is known ([4, 8]) that for every smooth function $h$ on ${\mathbb{R}}^{4}$, the equations $\nabla^{2}u=h,\ \ \text{and}\ \ \Box u=h$ both have solutions (non-unique).
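The solvability of $\nabla^{2}u=h$ can be illustrated concretely. The sketch below is our own illustration, not from the paper: it solves the Poisson equation spectrally on a periodic 4D box (where a solution exists precisely when $h$ has zero mean), rather than on all of $\mathbb{R}^{4}$; an analogous spectral treatment applies to $\Box u=h$.

```python
import numpy as np

def solve_poisson_periodic(h, length=2 * np.pi):
    """Solve  laplacian(u) = h  on a periodic 4D box of side `length` by FFT.
    On the torus a solution exists iff h has zero mean; it is unique up to an
    additive constant, fixed here by giving u zero mean."""
    n = h.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
    k2 = sum(ki**2 for ki in np.meshgrid(k, k, k, k, indexing="ij"))
    h_hat = np.fft.fftn(h)
    u_hat = np.zeros_like(h_hat)
    nz = k2 > 0
    u_hat[nz] = -h_hat[nz] / k2[nz]     # FT(laplacian u) = -|k|^2 u_hat
    return np.fft.ifftn(u_hat).real

# check against u0 = sin(x0) + cos(x2), whose Laplacian on the box is -u0
n = 8
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
g = np.meshgrid(x, x, x, x, indexing="ij")
u0 = np.sin(g[0]) + np.cos(g[2])
u = solve_poisson_periodic(-u0)
assert np.allclose(u - u.mean(), u0 - u0.mean(), atol=1e-10)
```

The gauge-fixing step then amounts to solving this equation with $h$ equal to a multiple of the current value of $d^{*}\omega$, as in (5.5).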
Hence there exists a smooth function $u$ such that $\omega^{\prime}=\omega+du$ satisfies the Lorenz gauge condition with respect to the Euclidean metric (or the Minkowski metric). Then by Corollary 5.2 the curvature field $F_{\omega}=F_{\omega^{\prime}}=d\omega^{\prime}$ is harmonic (or respectively wavelike). We summarize this observation as follows. ###### Corollary 5.3. Let $F_{\omega}$ be a solution to Maxwell’s equations in vacuum under the Euclidean or Minkowski metric. Then $F_{\omega}$ is harmonic, or respectively wavelike. In particular, this indicates that there is no non-wavelike solution to Maxwell’s equations in vacuum. ## 6\. Concluding Remarks Complex analysis is a core component in mathematics, and it has also played an increasingly important role in modern physics. It is thus meaningful to reinterpret some fundamental theories in physics from a complex perspective, for instance special relativity, Maxwell’s equations, and Yang-Mills equations, whose original formulations were in real variables. This reinterpretation will not only provide a complex formulation of the theories, but also give rise to new and natural observations from this point of view. The exploration in this direction has been made in the literature, see for example [10, 12], but it is far from complete. This paper shall serve as a starting point for the authors to explore greater applications of complex analysis to physics theories. Acknowledgments. The authors would like to thank Marius Beceanu and Oleg Lunin for valuable comments on the initial draft of this paper. This paper is in part based on the first author’s doctoral dissertation ([14]) submitted to SUNY at Albany, and he is grateful to the Department of Mathematics and Statistics for providing him with the opportunity to pursue his research interests. ## References * [1] J. Baez and J. P. Muniain: _Gauge Fields, Knots, and Gravity_ , vol. 4, World Scientific, London 1994. * [2] R. W. R.
Darling: _Differential Forms and Connections_ , 1st ed., Cambridge University Press, New York 1994. * [3] P. A. M. Dirac: Quantised singularities in the electromagnetic field, _Proc. R. Soc. Lond. A, Containing Papers of a Mathematical and Physical Character_ 133, no. 821, 60-72 (1931). * [4] L. C. Evans: _Partial Differential Equations_ , 2nd ed., vol. 19, American Mathematical Society, Providence, RI 2010. * [5] B. Felsager: _Geometry, Particles, and Fields_ , Springer Science & Business Media, New York 2012. * [6] D. Fleisch: _A Student’s Guide to Maxwell’s Equations_ , Cambridge University Press, Cambridge, UK 2008. * [7] T. A. Garrity: _Electricity and Magnetism for Mathematicians: A Guided Path from Maxwell’s Equations to Yang-Mills_ , Cambridge University Press, New York 2015. * [8] S. Hassani: _Mathematical Physics: A Modern Introduction to its Foundations_ , 2nd ed., Springer Science & Business Media, New York 2013. * [9] D. D. Holm: _Geometric Mechanics: Dynamics and Symmetry_ , vol. 1, Imperial College Press, London 2008. * [10] C. Hoyos, N. Sircar, and J. Sonnenschein: New knotted solutions of Maxwell’s equations, _J. Phys. A: Math. Theor._ 48, no. 25, 255204 (2015). * [11] D. Huybrechts: _Complex Geometry: An Introduction_ , Springer Science & Business Media, Heidelberg, Germany 2006. * [12] F. Kleefeld: Complex covariance, _arXiv:1209.3472v1_ , 2012. * [13] J. C. Maxwell: VIII. A dynamical theory of the electromagnetic field, _Philos. Trans. R. Soc. Lond._ 155, 459-512 (1865). * [14] S. Munshi: Maxwell’s equations and Yang-Mills equations in complex variables: New perspectives, ProQuest Dissertations Publishing, 1-69 (2020). * [15] J. L. Pinfold: Dirac’s dream–the search for the magnetic monopole, _AIP Conf. Proc._ 1304, 234-239 (2010). * [16] R. M. Range: _Holomorphic Functions and Integral Representations in Several Complex Variables_ , vol. 108, Springer Science & Business Media, New York 2013. * [17] W. G.
Ritter: Gauge theory: Instantons, monopoles, and moduli spaces, _arXiv:math-ph/0304026v1_ , 2003. * [18] M. S. Swanson: _Path Integrals and Quantum Processes_ , Dover Publications Inc. (Courier Corporation), Mineola, NY 2014. * [19] L. W. Tu: _Differential Geometry, Connections, Curvature, and Characteristic Classes_ , Springer, Cham, Switzerland 2017.
# Tight upper bound on the quantum value of Svetlichny operators under local filtering and hidden genuine nonlocality Lingyun Sun1 Li Xu1 Jing Wang1 Ming Li1 Shuqian Shen1 Lei Li1 Shao-Ming Fei2,3 1College of the Science, China University of Petroleum, 266580 Qingdao, China 2 School of Mathematical Sciences, Capital Normal University, 100048 Beijing, China 3 Max-Planck-Institute for Mathematics in the Sciences, 04103 Leipzig, Germany ###### Abstract Nonlocal quantum correlations among quantum subsystems play essential roles in quantum science. The violation of the Svetlichny inequality provides a sufficient condition for genuine tripartite nonlocality. We provide tight upper bounds on the maximal quantum value of the Svetlichny operators under local filtering operations, and present a qualitative analytical analysis of the hidden genuine nonlocality for three-qubit systems. We investigate in detail two classes of three-qubit states whose hidden genuine nonlocalities can be revealed by local filtering. ###### pacs: 03.67.Mn,03.65.Ud ## I Introduction As important physical resources, quantum correlations like entanglement play fundamental roles in quantum information processing QE-2009-RevModPhys.81.865 ; New_QE-2013-Front.Phys ; New_QE-2019-Front.Phys , with numerous applications in quantum communication protocols with lower complexity communication-complexity2002-PhysRevLett.89.197901 ; communication-complexity2010-RevModPhys.82.665 and higher security Security2001-PhysRevLett.87.117901 ; Security2006-PhysRevA.73.012314 . Two systems $A$ and $B$ are entangled if measurements on system $A$ do affect the probabilities of the measurement outcomes on system $B$, and vice versa. For tripartite systems, there exist correlations of so-called genuine entanglement Quantum_entanglement2011-PhysRevLett.106.250404 . A tripartite state may be genuinely entangled even if all pairs of its subsystems are separable.
Nonlocal correlations are stronger than entanglement. Two systems $A$ and $B$ may be locally correlated even if they are entangled, as long as the correlations of their measurement outcomes can be described by classical probabilistic models. For a bipartite state $\rho$, let $P(ab|XY)$ be the probability of measuring $X$ on subsystem $A$ with outcome $a$ and $Y$ on subsystem $B$ with outcome $b$. If the probability correlation $P(ab|XY)$ can be expressed in the form $P(ab|XY)=\sum\limits_{\lambda}{q_{\lambda}}P_{\lambda}(a|X)P_{\lambda}(b|Y)$, where $\lambda$ is regarded as a shared local hidden variable, $q_{\lambda}\geq 0$, and $\Sigma_{\lambda}q_{\lambda}=1$, then the state $\rho$ is regarded as locally correlated, and admits a local hidden-variable (LHV) model. The bipartite Bell nonlocality Bell-1964-Phys.1.195 ; Bell-nonlocality2014-RevModPhys.86.419 can be witnessed by the violation of Bell inequalities Bell-nonlocality2014-RevModPhys.86.419 . Similar to quantum entanglement, quantum nonlocality becomes subtler for multipartite and high-dimensional systems Def-MNL-2013-PhysRevA.88.014102 ; New_nonlocilty-FOP2012 ; New_nonlocilty-FOP2018 . Let $\rho$ be a tripartite state. Performing local measurements $X$, $Y$ and $Z$ on the subsystems $A$, $B$ and $C$ with outcomes $a$, $b$ and $c$, respectively, we say the state is three-local (fully local) if the corresponding probability correlations $P(abc|XYZ)$ can be written as $P(abc|XYZ)=\sum\limits_{\lambda}{q_{\lambda}}P_{\lambda}(a|X)P_{\lambda}(b|Y)P_{\lambda}(c|Z),$ (1) where $0\leq q_{\lambda}\leq 1$ and $\Sigma_{\lambda}q_{\lambda}=1$. Otherwise, the state is called non-three-local.
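For a product state the decomposition (1) holds trivially, with a single hidden-variable value. As a hedged sanity check (the states and measurement directions below are arbitrary choices of ours, not from the paper), the Born-rule probabilities indeed factorize:

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def proj(n, outcome):
    """Projector onto the +-1 outcome of the observable n . sigma."""
    return (I2 + outcome * (n[0] * sx + n[1] * sy + n[2] * sz)) / 2

# an arbitrary product state: trivially fully local
rhoA, rhoB, rhoC = np.diag([0.7, 0.3]), np.diag([0.5, 0.5]), np.diag([0.9, 0.1])
rho = np.kron(np.kron(rhoA, rhoB), rhoC)

# arbitrary measurement directions X, Y, Z (unit vectors)
nX, nY, nZ = (1, 0, 0), (0, 1, 0), (1 / np.sqrt(2), 0, 1 / np.sqrt(2))
for a, b, c in product((1, -1), repeat=3):
    joint = np.trace(
        rho @ np.kron(np.kron(proj(nX, a), proj(nY, b)), proj(nZ, c))).real
    local = (np.trace(rhoA @ proj(nX, a)) * np.trace(rhoB @ proj(nY, b))
             * np.trace(rhoC @ proj(nZ, c))).real
    assert abs(joint - local) < 1e-12   # P(abc|XYZ) = P(a|X)P(b|Y)P(c|Z)
```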
A non-three-local state is said to be hybrid-nonlocal, admitting a bi-LHV model, if $P(abc|XYZ)=\sum\limits_{\lambda}{q_{\lambda}}P_{\lambda}(ab|XY)P_{\lambda}(c|Z)+\sum\limits_{\mu}{q_{\mu}}P_{\mu}(ac|XZ)P_{\mu}(b|Y)+\sum\limits_{\upsilon}{q_{\upsilon}}P_{\upsilon}(bc|YZ)P_{\upsilon}(a|X),$ (2) where $0\leq q_{\lambda},q_{\mu},q_{\upsilon}\leq 1$, and $\Sigma_{\lambda}q_{\lambda}+\Sigma_{\mu}q_{\mu}+\Sigma_{\upsilon}q_{\upsilon}=1$. If the probability correlation cannot be written in the form (2), the state is called genuine tripartite nonlocal. The genuine tripartite nonlocality of a state can be detected by the Svetlichny inequality (SI) Svetlichny1987-PhysRevD.35.3066 . The violation of the SI is a sufficient condition for genuine tripartite nonlocality. However, it is generally not easy to verify such violations. In LM2017-PhysRevA.96.042323 , the authors presented a tight upper bound for the maximal quantum value of the Svetlichny operator. For the bipartite case, it has been shown that the nonlocality of certain quantum states can be revealed by using local filters before performing a standard Bell test, known as genuine hidden nonlocality localfilter-2013-PhysRevLett.111.160402 . In Max-entanglement-PhysRevA.68.012103 , Verstraete et al. demonstrated that optimal local filtering operations can maximize certain entanglement measures. Moreover, quantum properties such as Bell nonlocality and steerability Steering2007-PhysRevLett.98.140402 of specific quantum states can be revealed by local filtering. In lf-CHSH-PhysRevLett.74.2619 ; PhysLettA.210.151 , the authors considered two-qubit states which do not violate the CHSH inequality, but do violate it after local filtering operations are performed. The maximal violation of the CHSH inequality and the lower bound of the maximal violation of the V$\acute{e}$rtesi inequality under local filtering operations were computed analytically in li2017maximal . In Hidden-steerability2019-PhysRevA.99.030101 , Pramanik et al.
showed that there exist initially unsteerable bipartite states which become steerable after local filtering. For a tripartite state $\rho$, under local filtering transformations one gets ${\rho}^{\prime}=\frac{1}{N}\left(F_{A}\otimes F_{B}\otimes F_{C}\right)\rho\left(F_{A}\otimes F_{B}\otimes F_{C}\right)^{\dagger},$ (3) where $N={\rm tr}\left[\left(F_{A}\otimes F_{B}\otimes F_{C}\right)\rho\left(F_{A}\otimes F_{B}\otimes F_{C}\right)^{\dagger}\right]$ is a normalization factor, and $F_{A}$, $F_{B}$ and $F_{C}$ are positive operators acting on the respective local subsystems. In AON2020-PhysRevLett.124.050401 , Tendick et al. discussed the relation between entanglement and nonlocality in the hidden nonlocality scenario, and presented a fully biseparable three-qubit bound entangled state with a local model for the most general measurements. By using ${\rm\acute{S}}$liwa’s inequality and an iterative sequence of semidefinite programs, it is shown that the local model breaks down when suitable local filters are applied, which demonstrates the activation of nonlocality in bound entanglement, as well as the fact that genuine hidden nonlocality does not imply entanglement distillability. In this paper, we first study the maximal quantum value of the Svetlichny operators after local filtering operations for any three-qubit system. A tight upper bound for the maximal value of the Svetlichny operators after local filtering is obtained. Then we take the colored-noise Greenberger-Horne-Zeilinger (GHZ)-class states as examples to illustrate how local filtering operations improve nonlocality. We show that the hidden genuine nonlocalities of these classes of three-qubit states can be revealed by local filtering.
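The transformation (3) is straightforward to implement. The sketch below (our own illustration, with arbitrary filter strengths) applies diagonal local filters to the GHZ state and confirms that the output is again a valid normalized state:

```python
import numpy as np

def local_filter(rho, FA, FB, FC):
    """Eq. (3): apply FA (x) FB (x) FC to rho, then renormalize by N."""
    F = np.kron(np.kron(FA, FB), FC)
    out = F @ rho @ F.conj().T
    return out / np.trace(out).real

ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz)

# arbitrary positive diagonal filters diag(x, 1); x < 1 damps the |0> component
FA, FB, FC = np.diag([0.5, 1.0]), np.diag([0.8, 1.0]), np.diag([0.3, 1.0])
rho_f = local_filter(rho, FA, FB, FC)

assert np.isclose(np.trace(rho_f).real, 1.0)        # properly normalized
assert np.linalg.eigvalsh(rho_f).min() > -1e-12     # still a valid (PSD) state
```

Since the map is not trace preserving before renormalization, it corresponds physically to post-selecting on a successful filtering outcome.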
## II Tight upper bound on the value of Svetlichny operator under local filtering The Svetlichny operator in Svetlichny inequality Svetlichny1987-PhysRevD.35.3066 reads $\mathcal{S}=A\otimes[(B+B^{\prime})\otimes C+(B-B^{\prime})\otimes C^{\prime}]+A^{\prime}\otimes[(B-B^{\prime})\otimes C-(B+B^{\prime})\otimes C^{\prime}],$ (4) where $A,A^{\prime},B,B^{\prime},C$ and $C^{\prime}$ denote the local observables of the form $G=\vec{g}\cdot\vec{\sigma}=\Sigma_{k=1}^{3}g_{k}\sigma_{k}$, $G\\!\in\\!\\{A,A^{\prime},B,B^{\prime},C,C^{\prime}\\}$ and $\vec{g}\\!\in\\!\\{\vec{a},\vec{a}\,^{\prime},\vec{b},\vec{b}\,^{\prime},\vec{c},\vec{c}\,^{\prime}\\}$, respectively. $\vec{\sigma}=(\sigma_{1},\sigma_{2},\sigma_{3})$ with $\sigma_{i}$, $i=1,2,3$, the standard Pauli matrices. $\vec{g}$ is a three- dimensional real unit vector. The mean value of the Svetlichny operator for an arbitrary three-qubit state $\rho$ admitting a bi-LHV model satisfies the following inequality Svetlichny1987-PhysRevD.35.3066 , $|\langle\mathcal{S}\rangle_{\rho}|\leq 4,$ (5) where $\langle\mathcal{S}\rangle_{\rho}$=tr$(\mathcal{S}\rho)$. A state violating the inequality (5) is called genuine three-qubit nonlocal. It has been shown that the maximal quantum value of the Svetlichny operator for three-qubit systems is upper bounded LM2017-PhysRevA.96.042323 , $\mathcal{Q}(\mathcal{S})\equiv{\rm max}|\langle\mathcal{S}\rangle_{\rho}|\leq 4\lambda_{1},$ (6) where $\lambda_{1}$ is the maximal singular value of the matrix $M=(M_{j,ik})$, with $M_{ijk}$=tr$[\rho(\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{k})],i,j,k=1,2,3.$ The upper bound is tight if the degeneracy of $\lambda_{1}$ is more than 1, and the two degenerate nine-dimensional singular vectors corresponding to $\lambda_{1}$ take the form of $\vec{a}\otimes\vec{c}-\vec{a}\,^{\prime}\otimes\vec{c}\,^{\prime}$ and $\vec{a}\otimes\vec{c}\,^{\prime}+\vec{a}\,^{\prime}\otimes\vec{c}$. 
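Both the classical bound (5) and the quantum bound (6) can be checked numerically. In the sketch below (a verification of ours, not from the paper), enumerating all deterministic $\pm 1$ assignments of the six observables recovers the value 4 for product-deterministic strategies, and the correlation matrix $M$ of the GHZ state $(|000\rangle+|111\rangle)/\sqrt{2}$ has $\lambda_{1}=\sqrt{2}$, giving the bound $4\sqrt{2}$:

```python
import numpy as np
from itertools import product

def svet(A, A1, B, B1, C, C1):
    """Svetlichny polynomial of Eq. (4) for scalar (+-1) outcomes."""
    return (A * ((B + B1) * C + (B - B1) * C1)
            + A1 * ((B - B1) * C - (B + B1) * C1))

# classical side: deterministic +-1 strategies never exceed 4
det_max = max(abs(svet(*v)) for v in product((1, -1), repeat=6))

# quantum side: bound 4*lambda_1 from the correlation matrix M of the GHZ state
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
paulis = [sx, sy, sz]

ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz)

M = np.zeros((3, 9))
for i, j, k in product(range(3), repeat=3):
    op = np.kron(np.kron(paulis[i], paulis[j]), paulis[k])
    M[j, 3 * i + k] = np.trace(rho @ op).real   # rows: j; columns: (i, k)

lam1 = np.linalg.svd(M, compute_uv=False)[0]
print(det_max, 4 * lam1)   # 4 and 4*sqrt(2) = 5.6568...
```

The value $4\sqrt{2}$ is indeed the well-known maximal Svetlichny violation attained by the GHZ state.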
Let $F_{A}=U\Sigma_{A}U^{\dagger}$, $F_{B}=V\Sigma_{B}V^{\dagger}$ and $F_{C}=W\Sigma_{C}W^{\dagger}$ be the spectral decompositions of the filter operators $F_{A}$, $F_{B}$ and $F_{C}$, respectively, where $U$, $V$ and $W$ are unitary operators. Set $\delta_{l}=\Sigma_{A}\sigma_{l}\Sigma_{A}$, $\eta_{m}=\Sigma_{B}\sigma_{m}\Sigma_{B}$ and $\gamma_{n}=\Sigma_{C}\sigma_{n}\Sigma_{C}$. Without loss of generality, we assume that $\Sigma_{A}=\begin{pmatrix}x&0\\\ 0&1\end{pmatrix}$, $\Sigma_{B}=\begin{pmatrix}y&0\\\ 0&1\end{pmatrix}$ and $\Sigma_{C}=\begin{pmatrix}z&0\\\ 0&1\end{pmatrix}$ with $x,y,z\geq 0$. Let $X=(X_{m,ln})$ be a matrix with entries given by $X_{lmn}={\rm tr}[\varrho(\delta_{l}\otimes\eta_{m}\otimes\gamma_{n})],~{}~{}~{}l,m,n=1,2,3,$ (7) where $\varrho$ is any state that is locally unitary equivalent to $\rho$. ###### Theorem 1. For the local filtered quantum state ${\rho}^{\prime}=\frac{1}{N}\left(F_{A}\otimes F_{B}\otimes F_{C}\right)\rho\left(F_{A}\otimes F_{B}\otimes F_{C}\right)^{\dagger}$ of a three-qubit $\rho$, the maximal quantum value of the Svetlichny operator $\mathcal{S}$ defined in Eq. (4) satisfies $\mathcal{Q}(\mathcal{S})^{\prime}={\rm max}\left|\left\langle\mathcal{S}\right\rangle_{{\rho}^{\prime}}\right|\leq 4{\lambda_{1}}^{\prime},$ (8) where $\left\langle\mathcal{S}\right\rangle_{{\rho}^{\prime}}={\rm tr}(\mathcal{S}{\rho}^{\prime})$, ${\lambda_{1}}^{\prime}$ is the maximal singular value of the matrix $X/N$, with $X$ defined in Eq. (7), taking over all quantum states $\varrho$ which are locally unitary equivalent to $\rho$. Equivalently, ${\lambda_{1}}^{\prime}$ is also the maximal singular value of the matrix ${M}^{\prime}=({M_{j,ik}}^{\prime})$, with ${M_{ijk}}^{\prime}={\rm tr}[{\rho}^{\prime}(\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{k})]$, $i,j,k=1,2,3.$ ###### Proof. 
The normalization factor $N$ has the following form, $\displaystyle N$ $\displaystyle={\rm tr}\left[(U\Sigma_{A}^{2}U^{\dagger}\otimes V\Sigma_{B}^{2}V^{\dagger}\otimes W\Sigma_{C}^{2}W^{\dagger})\rho\right]$ $\displaystyle={\rm tr}\left[(\Sigma_{A}^{2}\otimes\Sigma_{B}^{2}\otimes\Sigma_{C}^{2})(U^{\dagger}\otimes V^{\dagger}\otimes W^{\dagger})\rho(U\otimes V\otimes W)\right]$ $\displaystyle={\rm tr}\left[(\Sigma_{A}^{2}\otimes\Sigma_{B}^{2}\otimes\Sigma_{C}^{2})\varrho\right],$ where $\varrho=(U^{\dagger}\otimes V^{\dagger}\otimes W^{\dagger})\rho(U\otimes V\otimes W)$. Since $\rho$ and $\varrho$ are local unitary equivalent, they have the same value of the maximal violation of the SI. From the double cover relationship SO(3)-1995-PhysRevA.52.4396 ; SU(3)-LM-2014-PhysRevA.89.062325 between the special unitary group $SU(2)$ and the special orthogonal group $SO(3)$, $U\sigma_{{}_{i}}U^{\dagger}=\sum_{j=1}^{3}O_{ij}\sigma_{j}$, where $U$ is any given unitary operator and the matrix $O$ with entries $O_{ij}$ belongs to $SO(3)$, we have $\displaystyle{M_{ijk}}^{\prime}$ $\displaystyle={\rm tr}[{\rho}^{\prime}(\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{k})]$ $\displaystyle=\frac{1}{N}{\rm tr}\left[\left(F_{A}\otimes F_{B}\otimes F_{C}\right)\rho\left(F_{A}^{\dagger}\otimes F_{B}^{\dagger}\otimes F_{C}^{\dagger}\right)(\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{k})\right]$ $\displaystyle=\frac{1}{N}{\rm tr}\left[\rho(U\Sigma_{A}U^{\dagger}\sigma_{i}U\Sigma_{A}U^{\dagger}\otimes V\Sigma_{B}V^{\dagger}\sigma_{j}V\Sigma_{B}V^{\dagger}\otimes W\Sigma_{C}W^{\dagger}\sigma_{k}W\Sigma_{C}W^{\dagger})\right]$ $\displaystyle=\frac{1}{N}\sum_{l,m,n}{\rm tr}\left[(U^{\dagger}\otimes V^{\dagger}\otimes W^{\dagger})\rho(U\otimes V\otimes W)(\Sigma_{A}O_{il}^{A}\sigma_{l}\Sigma_{A}\otimes\Sigma_{B}O_{jm}^{B}\sigma_{m}\Sigma_{B}\otimes\Sigma_{C}O_{kn}^{C}\sigma_{n}\Sigma_{C})\right]$ $\displaystyle=\frac{1}{N}\sum_{l,m,n}O_{il}^{A}O_{jm}^{B}O_{kn}^{C}{\rm 
tr}\left[\varrho(\Sigma_{A}\sigma_{l}\Sigma_{A}\otimes\Sigma_{B}\sigma_{m}\Sigma_{B}\otimes\Sigma_{C}\sigma_{n}\Sigma_{C})\right]$ $\displaystyle=\frac{1}{N}\sum_{l,m,n}O_{il}^{A}O_{jm}^{B}O_{kn}^{C}{\rm tr}\left[\varrho(\delta_{l}\otimes\eta_{m}\otimes\gamma_{n})\right]$ $\displaystyle=\frac{1}{N}\left[O_{A}X\left(O_{B}^{T}\otimes O_{C}^{T}\right)\right]_{ijk}.$ (9) Therefore, we have ${M}^{\prime}=\left[O_{A}X\left(O_{B}^{T}\otimes O_{C}^{T}\right)\right]/N$, and $\left({M}^{\prime}\right)^{\dagger}{M}^{\prime}=\frac{1}{N^{2}}\left(O_{B}\otimes O_{C}\right)X^{\dagger}O_{A}^{\dagger}O_{A}X\left(O_{B}\otimes O_{C}\right)^{\dagger}=\frac{1}{N^{2}}\left(O_{B}\otimes O_{C}\right)X^{\dagger}X\left(O_{B}\otimes O_{C}\right)^{\dagger}.$ (10) By noticing the orthogonality of the operator $O_{B}\otimes O_{C}$, one obtains that $\left({M}^{\prime}\right)^{\dagger}{M}^{\prime}$ has the same eigenvalues as $X^{\dagger}X/N^{2}$. Hence, $M^{\prime}$ has the same singular values as $X/N$. Let $\vec{v}$ be a nine-dimensional singular vector of the matrix $X/N$. Then $(O_{B}\otimes O_{C})\vec{v}$ is a nine-dimensional singular vector of the matrix $M^{\prime}$. ∎ ## III Tightness of the upper bound and hidden genuine nonlocality As applications of Theorem 1, we consider the activation of the hidden genuine nonlocality of three-qubit systems. We present two classes of three-qubit states which admit a bi-LHV model before local filtering, but display genuine nonlocality after local filtering. Let us begin with the two-qubit isotropic states ${\chi}_{iso}(p)=p|\phi\rangle\langle\phi|+(1-p)\frac{I_{4}}{4},$ (11) where $|\phi\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$ is the maximally entangled state, $I_{4}$ denotes the $4\times 4$ identity matrix, and $0\leq p\leq 1$.
The state ${\chi}_{iso}(p)$ is fully local for $0\leq p\leq 0.4167$ and the related local model can be generalized from isotropic states to the following states LHV-PhysRevLett.99.040403 , $\hat{\chi}(p,\theta)=p|\psi_{s}\rangle\langle\psi_{s}|+(1-p)\frac{I_{4}}{4},$ (12) where $|\psi_{s}\rangle={\rm cos}\,\theta|00\rangle+{\rm sin}\,\theta|11\rangle$, and $0\leq\theta\leq\pi/4$. Consider the following states, $\chi(p,\theta)=p|\psi_{s}\rangle\langle\psi_{s}|+(1-p)|0\rangle\langle 0|\otimes\frac{I_{2}}{2},$ (13) where $I_{2}$ denotes the $2\times 2$ identity matrix, $0\leq\theta\leq\pi/4$, and $0\leq p\leq 1$. Following the protocol presented in localfilter-2013-PhysRevLett.111.160402 , we have that $\chi(p,\pi/4)$ admits an LHV model, as $\hat{\chi}(p,\pi/4)$ does. Namely, $\chi(p,\theta)$, with $0\leq\theta\leq\pi/4$, admits an LHV model for $0\leq p\leq 0.4167$. Any bipartite local state can be converted to a multipartite state with a bi-local model construction-PhysRevLett.115.030404 . Following the construction given in construction-PhysRevLett.115.030404 , we can transform the states $\chi(p,\theta)$ into mixtures of colored noise and three-qubit GHZ-class states, $\rho_{\chi}(p,\theta)=p|\Psi_{s}\rangle\langle\Psi_{s}|+(1-p)|00\rangle\langle 00|\otimes\frac{I_{2}}{2},$ (14) where $|\Psi_{s}\rangle={\rm cos}\,\theta|000\rangle+{\rm sin}\,\theta|111\rangle$. Analogously, the state $\rho_{\chi}(p,\theta)$ admits a bi-LHV model for $0\leq p\leq 0.4167$. In the following, we set $\theta=\pi/8$ and consider the activation of the hidden genuine nonlocality of $\rho_{\chi}(p,\pi/8)$ under local filtering. Firstly, based on the genuine multipartite concurrence of three-qubit _X_ states X.GME-2012-PhysRevA.86.062303 , one can show that $\rho_{\chi}(p,\pi/8)$ is genuine multipartite entangled for $0<p\leq 1$.
Secondly, the quantum state $\rho_{\chi}(p,\pi/8)$ attains the upper bound on the mean values of the SI operators, but never violates the SI, which can be seen from the matrix $M$ of $\rho_{\chi}(p,\pi/8)$ defined in (6), $M=\left(\begin{array}[]{ccccccccc}\displaystyle\frac{\sqrt{2}p}{2}&0&0&0&\displaystyle-\frac{\sqrt{2}p}{2}&0&0&0&0\\\ 0&\displaystyle-\frac{\sqrt{2}p}{2}&0&\displaystyle-\frac{\sqrt{2}p}{2}&0&0&0&0&0\\\ 0&0&0&0&0&0&0&0&\displaystyle\frac{\sqrt{2}p}{2}\end{array}\right).$ (15) The singular values of the matrix $M$ are $p$, $p$ and $\displaystyle\frac{\sqrt{2}p}{2}$. Hence, $\lambda_{1}=p$. The upper bound of the maximal mean value of the Svetlichny operator is $\mathcal{Q}(\mathcal{S})={\rm max}|\langle\mathcal{S}\rangle_{\rho_{\chi}(p,\pi/8)}|\leq 4\lambda_{1}=4p$. In order to assure that the upper bound can be used to determine the violation of the SI, one needs to prove that the bound is attained for the state $\rho_{\chi}(p,\pi/8)$, which requires that two nine-dimensional singular vectors of the forms $\vec{a}\otimes\vec{c}-\vec{a}\,^{\prime}\otimes\vec{c}\,^{\prime}$ and $\vec{a}\otimes\vec{c}\,^{\prime}+\vec{a}\,^{\prime}\otimes\vec{c}$ exist. We select the two singular vectors corresponding to the degenerate $\lambda_{1}$ as $\vec{v_{1}}=(1,0,0,0,-1,0,0,0,0)^{T}=(1,0,0)^{T}\otimes(1,0,0)^{T}-(0,-1,0)^{T}\otimes(0,-1,0)^{T}$ and $\vec{v_{2}}=(0,-1,0,-1,0,0,0,0,0)^{T}=(1,0,0)^{T}\otimes(0,-1,0)^{T}+(0,-1,0)^{T}\otimes(1,0,0)^{T}$. Setting $\vec{a}=(1,0,0)^{T}$, $\vec{a}\,^{\prime}=(0,-1,0)^{T}$, $\vec{c}=(1,0,0)^{T}$ and $\vec{c}\,^{\prime}=(0,-1,0)^{T}$, and choosing $\vec{b}$ and $\vec{b}\,^{\prime}$ to be suitable unit vectors of proper measurement directions of $B$ and $B^{\prime}$ in Eq. (4), we can show that the upper bound $4p$ is attained for $\rho_{\chi}(p,\pi/8)$. Nevertheless, since $4\lambda_{1}=4p\leq 4$, the violation of the SI never happens for the quantum state $\rho_{\chi}(p,\pi/8)$, as expected.
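The matrix (15) and its singular values are easy to reproduce numerically. The following sketch (ours, for an arbitrary sample value of $p$) rebuilds $\rho_{\chi}(p,\pi/8)$, computes $M$ entrywise, and confirms the singular values $p$, $p$ and $\sqrt{2}p/2$:

```python
import numpy as np
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
paulis = [sx, sy, sz]

def corr_matrix(rho):
    """M_{j,ik} = tr[rho (sigma_i x sigma_j x sigma_k)], as a 3 x 9 matrix."""
    M = np.zeros((3, 9))
    for i, j, k in product(range(3), repeat=3):
        op = np.kron(np.kron(paulis[i], paulis[j]), paulis[k])
        M[j, 3 * i + k] = np.trace(rho @ op).real
    return M

def rho_chi(p, theta):
    """The state of Eq. (14): p|Psi_s><Psi_s| + (1-p)|00><00| (x) I_2/2."""
    psi = np.zeros(8)
    psi[0], psi[7] = np.cos(theta), np.sin(theta)
    noise = np.zeros((8, 8))
    noise[0, 0] = noise[1, 1] = 0.5
    return p * np.outer(psi, psi) + (1 - p) * noise

p = 0.4
svals = np.linalg.svd(corr_matrix(rho_chi(p, np.pi / 8)), compute_uv=False)
# expected singular values: p, p, sqrt(2)p/2, so 4*lambda_1 = 4p <= 4
assert np.allclose(np.sort(svals), np.sort([p, p, np.sqrt(2) * p / 2]))
```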
Now we consider the local filtering of $\rho_{\chi}(p,\pi/8)$. By direct computation, we have the matrix $\tilde{M}=(\tilde{M}_{m,ln})=({\rm tr}[\rho_{\chi}(p,\pi/8)(\delta_{l}\otimes\eta_{m}\otimes\gamma_{n})])$, $l,m,n=1,2,3$, $\tilde{M}=\left(\begin{array}[]{ccccccccc}\displaystyle\frac{\sqrt{2}pxyz}{2}&0&0&0&\displaystyle-\frac{\sqrt{2}pxyz}{2}&0&0&0&0\\\ 0&\displaystyle-\frac{\sqrt{2}pxyz}{2}&0&\displaystyle-\frac{\sqrt{2}pxyz}{2}&0&0&0&0&0\\\ 0&0&0&0&0&0&0&0&D\\\ \end{array}\right),$ (16) where $D=\displaystyle-\frac{2-\sqrt{2}}{4}p-\frac{1-p}{2}x^{2}y^{2}+\frac{2+\sqrt{2}p}{4}x^{2}y^{2}z^{2}$. The singular values of the matrix $\tilde{M}$ are $pxyz$, $pxyz$ and $D.$ Since $\rho_{\chi}(p,\pi/8)$ and $\varrho_{\chi}(p,\pi/8)$ are locally unitary equivalent, we conclude that ${pxyz}/{N}$, ${pxyz}/{N}$ and ${D}/{N}$ are the singular values of the matrix $X/N$, where $N={\rm tr}[\rho_{\chi}(p,\pi/8)(\Sigma^{2}_{A}\otimes\Sigma^{2}_{B}\otimes\Sigma^{2}_{C})]=\displaystyle\frac{2-\sqrt{2}}{4}p+\frac{1-p}{2}x^{2}y^{2}+\frac{2+\sqrt{2}p}{4}x^{2}y^{2}z^{2}$, which are also the singular values of the matrix $M^{\prime}$ in Theorem 1. The maximal singular value $\lambda_{1}^{\prime}$ is ${pxyz}/{N}$ for given $p$, with ${pxyz}/{N}>{D}/{N}$. Then the upper bound of the maximal value of the Svetlichny operator is given by $\mathcal{Q}(\mathcal{S})^{\prime}={\rm max}|\langle\mathcal{S}\rangle_{\rho_{\chi}^{\prime}(p,\pi/8)}|\leq 4\lambda_{1}^{\prime}=\displaystyle\frac{4pxyz}{N}.$ (17) The matrix $X/N$ also has the singular vectors $\vec{v_{1}}$ and $\vec{v_{2}}$ with respect to the singular value $\lambda_{1}^{\prime}$. 
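Theorem 1 can also be checked numerically at this point. The sketch below (ours, with arbitrarily chosen filter strengths $x,y,z$) filters $\rho_{\chi}(p,\pi/8)$ directly, recomputes the correlation matrix of the filtered state, and verifies that its largest singular value equals $pxyz/N$, with $N$ matching the closed form above:

```python
import numpy as np
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
paulis = [sx, sy, sz]

def corr_matrix(rho):
    M = np.zeros((3, 9))
    for i, j, k in product(range(3), repeat=3):
        op = np.kron(np.kron(paulis[i], paulis[j]), paulis[k])
        M[j, 3 * i + k] = np.trace(rho @ op).real
    return M

# the state rho_chi(p, pi/8) of Eq. (14)
p, th = 0.4, np.pi / 8
psi = np.zeros(8)
psi[0], psi[7] = np.cos(th), np.sin(th)
noise = np.zeros((8, 8))
noise[0, 0] = noise[1, 1] = 0.5
rho = p * np.outer(psi, psi) + (1 - p) * noise

# diagonal local filters diag(x,1), diag(y,1), diag(z,1); strengths arbitrary
x, y, z = 0.5, 0.7, 0.6
F = np.kron(np.kron(np.diag([x, 1.0]), np.diag([y, 1.0])), np.diag([z, 1.0]))
N = np.trace(F @ rho @ F).real        # F is real diagonal, so F^dagger = F
rho_filtered = F @ rho @ F / N

lam1_prime = np.linalg.svd(corr_matrix(rho_filtered), compute_uv=False)[0]
assert np.isclose(lam1_prime, p * x * y * z / N)   # largest singular value

# N agrees with the closed form given in the text
N_formula = ((2 - np.sqrt(2)) / 4 * p + (1 - p) / 2 * x**2 * y**2
             + (2 + np.sqrt(2) * p) / 4 * x**2 * y**2 * z**2)
assert np.isclose(N, N_formula)
```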
According to Theorem 1, the singular vectors of $M^{\prime}$ corresponding to $\lambda_{1}^{\prime}$ are $(O_{B}\otimes O_{C})\vec{v_{1}}=O_{B}\,\vec{a}\otimes O_{C}\,\vec{c}-O_{B}\,\vec{a}\,^{\prime}\otimes O_{C}\,\vec{c}\,^{\prime}$ and $(O_{B}\otimes O_{C})\vec{v_{2}}=O_{B}\,\vec{a}\otimes O_{C}\,\vec{c}\,^{\prime}+O_{B}\,\vec{a}\,^{\prime}\otimes O_{C}\,\vec{c}$, where $O_{B}$ and $O_{C}$ belong to $SO(3)$. The upper bound is saturated as the singular vectors can be written in the required decomposition forms. The SI is violated if and only if $\lambda_{1}^{\prime}=\displaystyle\frac{pxyz}{N}>1$. Maximizing $\lambda_{1}^{\prime}$ under the restriction $\displaystyle\frac{pxyz}{N}>\frac{D}{N}$, we obtain that the quantum states $\rho_{\chi}^{\prime}(p,\pi/8)$ violate the SI and are genuine three-qubit nonlocal for $0.3697\leq p\leq 1$, although $\rho_{\chi}(p,\pi/8)$ is bi-local for $0\leq p\leq 0.4167$; see Fig. 1. Figure 1: The state $\rho_{\chi}(p,\pi/8)$ admits a bi-LHV model for $0\leq p\leq 0.4167$ and never violates the SI for $0\leq p\leq 1$, as the saturated upper bound is $4\lambda_{1}=4p\leq 4$. The locally filtered state shows genuine nonlocality for $0.3697\leq p\leq 1$. The hidden genuine tripartite nonlocality is revealed for $0.3697\leq p\leq 0.4167$. Now we consider another class of three-qubit states. Consider the mixture of the three-qubit GHZ states and the colored noise, $\rho=p|{\rm GHZ}\rangle\langle{\rm GHZ}|+\frac{1-p}{4}\widetilde{I_{0}}\otimes I_{4},$ (18) where $|{\rm GHZ}\rangle=\frac{1}{\sqrt{2}}(|000\rangle+|111\rangle)$, $\widetilde{I_{0}}=\left(\begin{array}[]{cc}1&0\\\ 0&0\\\ \end{array}\right)$ and $0\leq p\leq 1$. By the criterion given in X.GME-2012-PhysRevA.86.062303 , the state $\rho$ is genuine multipartite entangled for $0<p\leq 1$.
The corresponding matrix $M$, $M=\left(\begin{array}[]{ccccccccc}p&0&0&0&-p&0&0&0&0\\\ 0&-p&0&-p&0&0&0&0&0\\\ 0&0&0&0&0&0&0&0&0\\\ \end{array}\right),$ (19) has singular values $\sqrt{2}p$, $\sqrt{2}p$ and 0, i.e., $\lambda_{1}=\sqrt{2}p$. The upper bound of the maximal value of the Svetlichny operator satisfies $\mathcal{Q}(\mathcal{S})=\rm{max}|\langle\mathcal{S}\rangle_{\rho}|\leq 4\lambda_{1}=4\sqrt{2}p$. This upper bound is saturated since two nine-dimensional singular vectors of the forms $\vec{a}\otimes\vec{c}-\vec{a}\,^{\prime}\otimes\vec{c}\,^{\prime}$ and $\vec{a}\otimes\vec{c}\,^{\prime}+\vec{a}\,^{\prime}\otimes\vec{c}$, can be found in the following way. Take the singular vectors corresponding to $\lambda_{1}$ to be $\vec{v_{1}}=(1,0,0,0,-1,0,0,0,0)^{T}$ and $\vec{v_{2}}=(0,-1,0,-1,0,0,0,0,0)^{T}$, which have exactly the forms, $(1,0,0)^{T}\otimes(1,0,0)^{T}-(0,-1,0)^{T}\otimes(0,-1,0)^{T}$ and $(1,0,0)^{T}\otimes(0,-1,0)^{T}+(0,-1,0)^{T}\otimes(1,0,0)^{T}$, respectively. By defining $\vec{a}=(1,0,0)^{T}$, $\vec{a}\,^{\prime}=(0,-1,0)^{T}$, $\vec{c}=(1,0,0)^{T}$ and $\vec{c}\,^{\prime}=(0,-1,0)^{T}$, and selecting suitable $\vec{b}$ and $\vec{b}\,^{\prime}$, the upper bound is attained. Therefore, the state $\rho$ in Eq. (18) violates the SI if and only if $0.707107<p\leq 1$.
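The singular-vector decompositions above can be checked mechanically with Kronecker products (a sketch; $p$ is an arbitrary illustrative value):

```python
import numpy as np

p = 0.8  # illustrative mixing parameter

# Matrix M of Eq. (19).
M = np.zeros((3, 9))
M[0, 0], M[0, 4] = p, -p
M[1, 1], M[1, 3] = -p, -p

sv = np.linalg.svd(M, compute_uv=False)   # sqrt(2)*p, sqrt(2)*p, 0

a, a_p = np.array([1, 0, 0]), np.array([0, -1, 0])
c, c_p = np.array([1, 0, 0]), np.array([0, -1, 0])

v1 = np.kron(a, c) - np.kron(a_p, c_p)    # (1, 0, 0, 0, -1, 0, 0, 0, 0)
v2 = np.kron(a, c_p) + np.kron(a_p, c)    # (0, -1, 0, -1, 0, 0, 0, 0, 0)
print(v1, v2)
```

The computed $\vec{v_{1}}$ and $\vec{v_{2}}$ reproduce the decompositions stated in the text, and the two leading singular values coincide at $\sqrt{2}p$.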
The matrix $\tilde{M}=(\tilde{M}_{m,ln})=({\rm tr}[\rho(\delta_{l}\otimes\eta_{m}\otimes\gamma_{n})])$, $l,m,n=1,2,3$, has the form, $\tilde{M}=\left(\begin{array}[]{ccccccccc}pxyz&0&0&0&-pxyz&0&0&0&0\\\ 0&-pxyz&0&-pxyz&0&0&0&0&0\\\ 0&0&0&0&0&0&0&0&D\end{array}\right),$ (20) where $D=-\frac{1}{2}p+\frac{1-p}{4}x^{2}-\frac{1-p}{4}x^{2}y^{2}-\frac{1-p}{4}x^{2}z^{2}+\frac{1+p}{4}x^{2}y^{2}z^{2}.$ The singular values of $\tilde{M}$ are $\sqrt{2}pxyz,$ $\sqrt{2}pxyz$ and $D.$ Due to the local unitary equivalence between $\rho$ and $\varrho$, the singular values of the matrix $X/N$ are ${\sqrt{2}pxyz}/{N}$, ${\sqrt{2}pxyz}/{N}$ and ${D}/{N}$, where $N={\rm tr}[\rho(\Sigma^{2}_{A}\otimes\Sigma^{2}_{B}\otimes\Sigma^{2}_{C})]=\frac{1}{2}\,p+\frac{1-p}{4}x^{2}+\frac{1-p}{4}x^{2}y^{2}+\frac{1-p}{4}x^{2}z^{2}+\frac{1+p}{4}x^{2}y^{2}z^{2}.$ According to Theorem 1, these values are also the singular values of the matrix $M^{\prime}$. We have $\lambda_{1}^{\prime}=\frac{\sqrt{2}pxyz}{N}$ for $\frac{\sqrt{2}pxyz}{N}>\frac{D}{N}$. The matrix $X/N$ also has the same singular vectors $\vec{v_{1}}$ and $\vec{v_{2}}$ with respect to $\lambda_{1}^{\prime}$. The singular vectors of $M^{\prime}$ with respect to $\lambda_{1}^{\prime}$ are $(O_{B}\otimes O_{C})\vec{v_{1}}$ and $(O_{B}\otimes O_{C})\vec{v_{2}}$, namely, $O_{B}\,\vec{a}\otimes O_{C}\,\vec{c}-O_{B}\,\vec{a}\,^{\prime}\otimes O_{C}\,\vec{c}\,^{\prime}$ and $O_{B}\,\vec{a}\otimes O_{C}\,\vec{c}\,^{\prime}+O_{B}\,\vec{a}\,^{\prime}\otimes O_{C}\,\vec{c},$ with $O_{B}$ and $O_{C}$ belonging to $SU(3)$. Hence the upper bound is also saturated for the locally filtered state. Therefore, the state violates the SI if and only if $\lambda_{1}^{\prime}=\frac{\sqrt{2}pxyz}{N}>1$. 
Then the upper bound of the maximal value of the Svetlichny operator satisfies $\mathcal{Q}(\mathcal{S})^{\prime}={\rm max}|\langle\mathcal{S}\rangle_{\rho^{\prime}}|\leq 4\lambda_{1}^{\prime}=\displaystyle\frac{4\sqrt{2}pxyz}{N}.$ (21) Based on the above analysis, the genuine nonlocality of the quantum state $\rho^{\prime}$ is detected by the SI for $0.3334\leq p\leq 1$. Therefore, the hidden genuine nonlocality of $\rho$ is revealed by local filtering operations for $0.3334\leq p\leq 0.7071$, see Fig. 2. Figure 2: $f(p)$ denotes the maximal value of $\mathcal{Q}(\mathcal{S})$: max$|\langle\mathcal{S}\rangle_{\rho}|$ (dashed line) and max$|\langle\mathcal{S}\rangle_{\rho^{\prime}}|$ (solid line). As the upper bound in Theorem 1 is saturated, for $0.3334\leq p\leq 0.7071$ the state $\rho=p\,|{\rm GHZ}\rangle\langle{\rm GHZ}|+\frac{1-p}{4}\widetilde{I_{0}}\otimes I_{4}$ does not violate the SI, but its locally filtered state $\rho^{\prime}$ shows genuine nonlocality. ## IV Conclusions and Discussions We have presented an analytical study of the hidden genuine nonlocality for three-qubit systems by providing a tight upper bound on the maximal quantum value of the Svetlichny operators under local filtering operations. The tightness of the upper bound has been investigated through detailed examples of quantum states subject to colored noise. We have presented two classes of three-qubit states whose hidden genuine nonlocalities can be revealed by local filtering. Our results provide an operational method for investigating the genuine nonlocality of three-qubit mixed states. Moreover, the method presented in this paper can also be used to optimize the maximal quantum violations of other Bell-type inequalities for tripartite or multipartite quantum systems under local filtering.
## Acknowledgments This work is supported by NSFC (11775306, 11701568, 11675113), the Fundamental Research Funds for the Central Universities (18CX02035A, 18CX02023A, 19CX02050A), Beijing Municipal Commission of Education under Grant No. KZ201810028042, Beijing Natural Science Foundation (Z190005), and Academy for Multidisciplinary Studies, Capital Normal University. ## References * (1) R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki. Quantum entanglement. Rev. Mod. Phys. 81, 865 (2009). * (2) M. Li, M.-J. Zhao, S.-M. Fei, and Z.-X. Wang. Experimental detection of quantum entanglement. Front. Phys. 8(4), 357-374 (2013). * (3) Q. Dong, A. J. Torres-Arenas, G.-H. Sun, W.-C. Qiang, and S.-H. Dong. Entanglement measures of a new type pseudo-pure state in accelerated frames. Front. Phys. 14(2), 21603 (2019). * (4) Č. Brukner, M. Żukowski, and A. Zeilinger. Quantum Communication Complexity Protocol with Two Entangled Qutrits. Phys. Rev. Lett. 89, 197901 (2002). * (5) H. Buhrman, R. Cleve, S. Massar, and R. Wolf. Nonlocality and communication complexity. Rev. Mod. Phys. 82, 665 (2010). * (6) V. Scarani and N. Gisin. Quantum Communication between $N$ Partners and Bell’s Inequalities. Phys. Rev. Lett. 87, 117901 (2001). * (7) G. He, J. Zhu, and G. Zeng. Quantum secure communication using continuous variable Einstein-Podolsky-Rosen correlations. Phys. Rev. A 73, 012314 (2006). * (8) J.-D. Bancal, N. Gisin, Y.-C. Liang, and S. Pironio. Device-Independent Witnesses of Genuine Multipartite Entanglement. Phys. Rev. Lett. 106, 250404 (2011). * (9) J. S. Bell. On the Einstein Podolsky Rosen paradox. Physics Physique Fizika 1, 195 (1964). * (10) N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner. Bell nonlocality. Rev. Mod. Phys. 86, 419 (2014). * (11) J.-D. Bancal, J. Barrett, N. Gisin, and S. Pironio. Definitions of multipartite nonlocality. Phys. Rev. A 88, 014102 (2013). * (12) M. D. Reid, Q.-Y. He, and P. D. Drummond.
Entanglement and nonlocality in multi-particle systems. Front. Phys. 7(1), 72-85 (2012). * (13) J. Batle, A. Farouk, O. Tarawneh, and S. Abdalla. Multipartite quantum correlations among atoms in QED cavities. Front. Phys. 13(1), 130305 (2018). * (14) G. Svetlichny. Distinguishing three-body from two-body nonseparability by a Bell-type inequality. Phys. Rev. D 35, 3066 (1987). * (15) M. Li, S. Shen, N. Jing, S.-M. Fei, and X. Li-Jost. Tight upper bound for the maximal quantum value of the Svetlichny operators. Phys. Rev. A 96, 042323 (2017). * (16) F. Hirsch, M. T. Quintino, J. Bowles, and N. Brunner. Genuine Hidden Quantum Nonlocality. Phys. Rev. Lett. 111, 160402 (2013). * (17) F. Verstraete, J. Dehaene, and B. D. Moor. Normal forms and entanglement measures for multipartite quantum states. Phys. Rev. A 68, 012103 (2003). * (18) H. M. Wiseman, S. J. Jones, and A. C. Doherty. Steering, Entanglement, Nonlocality, and the Einstein-Podolsky-Rosen Paradox. Phys. Rev. Lett. 98, 140402 (2007). * (19) S. Popescu. Bell’s Inequalities and Density Matrices: Revealing ”Hidden” Nonlocality. Phys. Rev. Lett. 74, 2619 (1995). * (20) N. Gisin. Hidden quantum nonlocality revealed by local filters. Phys. Lett. A 210, 151 (1996). * (21) M. Li, H. Qin, J. Wang, S.-M. Fei, and C.-S. Sun. Maximal violation of Bell inequalities under local filtering. Scientific Reports 7, 46505 (2017). * (22) T. Pramanik, Y.-W. Cho, S.-W. Han, S.-Y. Lee, Y.-S. Kim, and S. Moon. Revealing hidden quantum steerability using local filtering operations. Phys. Rev. A 99, 030101 (2019). * (23) L. Tendick, H. Kampermann, and D. Bruss. Activation of Nonlocality in Bound Entanglement. Phys. Rev. Lett. 124, 050401 (2020). * (24) J. Schlienz, and G. Mahler. Description of entanglement. Phys. Rev. A 52, 4396 (1995). * (25) M. Li, T. Zhang, S.-M. Fei, X. Li-Jost, and N. Jing. Local unitary equivalence of multiqubit mixed quantum states. Phys. Rev. A. 89, 062325 (2014). * (26) M. L. Almeida, S. Pironio, J. Barrett, G. 
Tóth, and A. Acín. Noise Robustness of the Nonlocality of Entangled Quantum States. Phys. Rev. Lett. 99, 040403 (2007). * (27) R. Augusiak, M. Demianowicz, J. Tura, and A. Acin. Entanglement and Nonlocality are Inequivalent for Any Number of Parties. Phys. Rev. Lett. 115, 030404 (2015). * (28) S. M. Hashemi Rafsanjani, M. Huber, C. J. Broadbent, and J. H. Eberly. Genuinely multipartite concurrence of $N$-qubit $X$ matrices. Phys. Rev. A 86, 062303 (2012).
# Inference on the New Keynesian Phillips Curve with Very Many Instrumental Variables Max-Sebastian Dovì <EMAIL_ADDRESS> This research is funded by the German National Merit Foundation and the European Research Council via Consolidator grant number 647152. I thank Sophocles Mavroeidis and Anna Mikusheva for very helpful comments and suggestions. I also thank seminar participants at the University of Oxford, and participants at the 2019 European Conference of the Econometrics Community. All errors and omissions are my own. ###### Abstract Limited-information inference on New Keynesian Phillips Curves (NKPCs) and other single-equation macroeconomic relations is characterised by weak and high-dimensional instrumental variables (IVs). Beyond the efficiency concerns previously raised in the literature, I show by simulation that ad-hoc selection procedures can lead to substantial biases in post-selection inference. I propose a Sup Score test that remains valid under dependent data, arbitrarily weak identification, and a number of IVs that increases exponentially with the sample size. Conducting inference on a standard NKPC with 359 IVs and 179 observations, I find substantially wider confidence sets than those commonly found. ## 1 Introduction Instrumental variable (IV) methods are often used to conduct limited- information inference on (structural) single-equation macroeconomic relations that describe the dependence of a scalar variable on a set of covariates. Examples of such macroeconomic relations include New Keynesian Phillips Curves (NKPCs), Euler equations, and Taylor rules. IV-based limited-information inference on such macroeconomic relations has arguably proven popular because there is no requirement that parts of the model other than the specified relation itself be necessarily true to conduct valid inference. 
In virtually all applications, the relation is assumed to contain an additive error term that is shown (e.g., by the assumption of Rational Expectations (RE)) or primitively assumed to be uncorrelated with predetermined variables excluded from the specified relation. This makes any predetermined variable a valid IV. As documented extensively in the existing literature, using IVs to conduct limited-information inference on such macroeconomic relations often runs into issues related to weak identification. This occurs when the variation in the IVs is only able to explain a small portion of the variation of the endogenous variables. (For the case of NKPCs, see [22, 30, 27, 24, 16, 25]; for the case of Euler equations, see [5, 23, 37, 34]; for the case of Taylor rules, see [30, 26].) This problem is especially pronounced when the analysis is restricted to using only a few variables to forecast the endogenous variables, a restriction that arises when using IV methods that treat the number of IVs as fixed relative to the sample size. Since any predetermined variable is a valid (if not very informative) IV, this naturally raises the question of which IVs to choose out of the very many available ones. The limited literature that seeks to formally address the high dimensionality of the available IVs in such macroeconomic settings is primarily motivated by the potential inefficiency of using IVs selected in an ad-hoc way [9, 6, 8, 30, 22]. Through simulations and/or empirical applications, these studies find smaller confidence sets than the ones implied by IVs traditionally used in the past.
Although this evidence is certainly suggestive, it should be noted that formal efficiency claims rely on conditions that are not easily verifiable in practice. (For instance, factor-based approaches to reduce the dimensionality of the IVs likely work well if there is a factor structure, and if whatever explains most of the variation in the IVs also explains (a good portion of) the variation of the endogenous variables. While the former may be made plausible through certain tests, the latter remains an assumption the researcher has to make. Similarly, a LASSO-based selection of IVs works well only under the assumption that the relation between the endogenous variables and the candidate IVs is sufficiently sparse.) Rather than being motivated by such efficiency concerns, this paper revisits the question of high-dimensional limited-information inference because some types of formal or intuitive regularisation can lead to invalid inference, even if weak-IV robust methods are used after regularisation. This is due to what [12] call the ‘endogeneity bias’, which arises when variables are selected on the basis of their in-sample correlation with a model’s error terms. The first contribution of this paper consists in illustrating how improper selection of IVs can lead to invalid inference in the context of limited-information inference on a standard NKPC. I do this by extending the simulations in [27] to the more realistic case where the econometrician does not have oracle knowledge on which IVs are the relevant ones, but rather has to choose amongst the very many available IVs. I consider different IV selection techniques, and show that several of them result in substantially invalid inference. The example of the NKPC is chosen for the sake of concreteness and due to its popularity in the literature. The same concerns extend to any of the many cases in Macroeconomics where a given structural equation can be estimated with very many valid IVs.
As a second contribution, I propose a Sup Score test to conduct IV-based limited-information inference on single-equation Macroeconomic relations. Contrary to other approaches in the literature, this statistic requires no assumption on the factor structure of the IVs, nor does it make any sparsity-type assumption that requires only a few of the very many IVs to be relevant, while allowing for a number of IVs that increases exponentially with the sample size. This test directly contributes to the (very) many weak IVs literature predominantly restricted to the cross-sectional case (see [29, 13, 7, 1]), and can find application well beyond the example of NKPCs considered in this paper. The third contribution consists in applying the selection procedures considered in the simulation section and the Sup Score test to conduct IV-based limited-information inference on a standard hybrid NKPC with 359 IVs on a sample of 179 observations. I find that both the IVs selected and the confidence set implied by the selection procedure that yields the worst size distortion in the simulations are similar to the IVs selected and the confidence set implied by the IVs traditionally used in the past. This suggests that the results previously reported in the literature may suffer from endogeneity bias, and that they hence may undercover the true parameter values. By contrast, the confidence sets implied by the Sup Score test are considerably wider. Notation. For any real number $a$, $\left\lfloor a\right\rfloor$ indicates the largest integer $b$ such that $b\leq a$. For any two real numbers $c$ and $d$, $c\lesssim d$ if $c$ is smaller than or equal to $d$ up to a universal positive constant. The remaining notation follows standard conventions. Organisation of the paper. Section 2 introduces the model considered in this paper. Section 3 outlines the methods used in this paper to conduct inference in the context of very many IVs.
Section 4 provides simulation-based evidence on the size and power of these methods. Section 5 revisits inference on the US NKPC using very many IVs. Section 6 concludes. ## 2 Model The structural equation I consider is the hybrid NKPC of [17], $\pi_{t}=c+\lambda s_{t}+\gamma_{f}\mathbb{E}_{t}\left[\pi_{t+1}\right]+\gamma_{b}\pi_{t-1}+u_{t},$ (1) where $\pi_{t}$ is the inflation rate, $s_{t}$ is the forcing variable, and $\lambda$, $c$, $\gamma_{f}$, and $\gamma_{b}$ are parameters of the model. $u_{t}$ is an unobserved disturbance term, which can be interpreted as a measurement error, or as a shock to inflation, such as a cost-push shock. The identifying moment conditions can be derived within the framework of Generalised Instrumental Variable (GIV) estimation. In this approach, realised one-period-ahead inflation is substituted in for expected inflation. This means that Equation (1) can be re-written as $\pi_{t}=c+\lambda s_{t}+\gamma_{f}\pi_{t+1}+\gamma_{b}\pi_{t-1}+\underbrace{u_{t}-\gamma_{f}\left[\pi_{t+1}-\mathbb{E}_{t}[\pi_{t+1}]\right]}_{{\epsilon}_{t}}.$ If it is further assumed that $\mathbb{E}_{t-1}[u_{t}]=0$, the assumption of RE gives rise to the moment conditions $\mathbb{E}[{Z}_{t}{\epsilon}_{t}]=0,$ for any $k\times 1$ vector of predetermined variables ${Z}_{t}$. Due to the very large number of predetermined time series available, the dimension of $Z_{t}$ is comparable to or larger than the number of observations, $T$. It should be noted that the example of NKPCs (including the particular specification chosen), and the assumption of RE are not central to two of the contributions of this paper. 
The same concerns relating to the endogeneity bias persist, and the same Sup Score test proposed below remains valid for the broad class of models defined by single-equation relations of the type $y=g(Y,X,\theta)+\varepsilon,$ (2) and moment equations given by $\mathbb{E}[{Z}_{t}\varepsilon_{t}]=0,$ (3) where $y$ is a $T\times 1$ vector, $g$ is a known real-valued function, $Y$ is a $T\times p_{1}$ matrix of endogenous covariates, $X$ is a $T\times p_{2}$ matrix of exogenous covariates, $Z$ is a $T\times k$ matrix of variables such that $k\geq p_{1}+p_{2}$, $\theta$ is a $(p_{1}+p_{2})\times 1$ vector of coefficients, $p_{1}$ and $p_{2}$ are both fixed, and $\varepsilon$ is a $T\times 1$ vector of error terms. This setup encompasses many popular applications in Macroeconomics, where $k$ is of the same magnitude or even larger than $T$, such as limited-information inference on NKPCs, Euler equations, and Taylor rules. In particular, the NKPC considered in Equation (1) can be mapped into the more general model in Equation (2) as follows. Since the NKPC is linear, the exogenous (predetermined) variables can be partialled out. Hence, $y=M_{X}\pi$, $Y=M_{X}[s\text{ }\pi_{+1}]$, $M_{X}=I-X(X^{\prime}X)^{-1}X^{\prime}$, $X=[{1}_{T\times 1}\text{ }\pi_{-1}]$, $g(Y,X,\theta)=Y\theta$, $\theta=[\lambda,\gamma_{f}]^{\prime}$, $\varepsilon=M_{X}\epsilon$, $Z=M_{X}\tilde{Z}$, $\tilde{Z}$ is a $T\times(k-2)$ matrix of excluded IVs, $s,\pi_{+1}$, $\pi_{-1}$, and $\epsilon$ are the $T\times 1$ stacked vectors of $s_{t}$, $\pi_{t+1}$, $\pi_{t-1}$, and $\epsilon_{t}$, respectively. ## 3 Methodology For all methods considered in this paper, confidence sets are constructed by inverting statistics that test the hypothesis $H_{0}:\theta=\theta_{0}\text{ vs }H_{1}:\theta\neq\theta_{0}.$ (4) The $(1-\alpha)$ confidence set can be constructed by collecting the values of $\theta_{0}$ for which the null hypothesis in Equation (4) is not rejected at the $\alpha$ level of significance.
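The partialling-out mapping described in Section 2 can be sketched numerically. The series below are simulated placeholders (not inflation data); the only point is the mechanics of the annihilator $M_X$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 179
pi = rng.standard_normal(T + 2)            # placeholder "inflation" series
s = rng.standard_normal(T)                 # placeholder forcing variable
Ztilde = rng.standard_normal((T, 357))     # placeholder excluded IVs

X = np.column_stack([np.ones(T), pi[:T]])            # [1, pi_{t-1}]
M_X = np.eye(T) - X @ np.linalg.solve(X.T @ X, X.T)  # annihilator of X

y = M_X @ pi[1:T + 1]                        # M_X * pi_t
Y = M_X @ np.column_stack([s, pi[2:T + 2]])  # M_X * [s_t, pi_{t+1}]
Z = M_X @ Ztilde

# After partialling out, Z'X = 0 up to rounding error.
print(np.abs(Z.T @ X).max())
```

Since $M_X X = 0$ by construction, the transformed IVs are exactly orthogonal to the exogenous covariates, which is what makes concentrating them out of the inference problem legitimate.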
For convenience, define ${\varepsilon}_{0}\equiv y-g(Y,X,\theta_{0})$. ### 3.1 Post-Selection Low-Dimensional Inference Most of the existing literature that conducts inference on relations of the form presented in Equation (2) using moment conditions of the type shown in Equation (3) has employed methods that require the IVs to be low-dimensional. In the presence of very many IVs, these approaches can be seen as a two-step procedure. First, the IVs are selected. Second, a low-dimensional (weak- identification robust) method is applied with the selected IVs. The first step is usually not made explicit, and is often not given any attention, which makes it impossible to model this step accurately. In Section 3.1.2, I consider three different selection procedures that reasonably cover (in terms of their deleterious effect on subsequent inference) the range of selection procedures used in the previous literature. These are random selection, ‘crude thresholding’, and LASSO. In Section 3.1.1, I outline the $S$ statistic of [34], which forms the post-selection inferential method common to all three selection procedures considered in this paper. Before proceeding, it is helpful to gain some intuition as to why IV selection may lead to invalid IVs. For simplicity, suppose that all variables are endogenous (or that the model is linear and that the exogenous covariates have been partialled out). Consider the following projection (‘first stage’) $Y=Z\zeta+v,$ where $\zeta$ is a $k\times 1$ vector of coefficients and $v$ is a $T\times 1$ vector of error terms. Consider the case of no identification at all, $\zeta=0$, and a selection procedure that selects the IVs that are most highly correlated with the endogenous variables, $Y$. This amounts to selecting those IVs that are most highly correlated in-sample with the first-stage error term. 
By the endogeneity of the system, this means that those IVs most correlated with the error term, $\varepsilon$, will be selected, so that _conditional on selection_, the IVs are no longer valid. This phenomenon carries over more broadly to cases of weak (but non-zero) identification as discussed in [19]. #### 3.1.1 The [34] $S$ Statistic In this paper, the GMM-based $S$ statistic of [34] will be used for low-dimensional post-selection inference. (More powerful and computationally intensive (GMM-based) weak-identification robust methods could be used instead of the $S$ statistic (see [30, 24]). Considering them instead of the $S$ statistic does not qualitatively affect the results of the simulations, while increasing their computational burden substantively. Furthermore, [27, p. 165] state that amongst the different specifications for the NKPC they consider, the confidence sets implied by these more powerful methods are similar to the ones implied by the $S$ statistic.) Letting $k_{s}\geq p_{1}+p_{2}$ denote the number of IVs selected, the $S$ statistic is $T$ times the value of the continuously updated GMM objective function, $S(\theta_{0})=T\varepsilon_{T}(\theta_{0})^{\prime}W_{T}(\theta_{0})\varepsilon_{T}(\theta_{0}),$ (5) where $\varepsilon_{T}(\theta_{0})=T^{-1}\sum_{t=1}^{T}Z_{st}\varepsilon_{0t}$, $Z_{st}$ is the $k_{s}\times 1$ vector containing the IVs selected, and $W_{T}(\theta_{0})$ is the continuously updated $k_{s}\times k_{s}$ weight matrix that is a consistent estimator of the covariance matrix of the moment conditions of the selected IVs as in [24, 34]. Throughout, I use the heteroscedasticity and autocorrelation consistent (HAC) estimator of [31]. Under the null hypothesis in Equation (4) and the regularity conditions discussed in [34], this statistic is asymptotically $\chi^{2}_{k_{s}}$.
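The selection-endogeneity mechanism described above can be reproduced in a few lines. The design below is a hypothetical i.i.d. one with $\zeta=0$ and a first-stage error built from the structural error; it is not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)
T, k, k_s = 200, 150, 6                       # hypothetical sizes

Z = rng.standard_normal((T, k))               # candidate IVs, valid by construction
eps = rng.standard_normal(T)                  # structural error
v = 0.9 * eps + 0.1 * rng.standard_normal(T)  # endogenous first-stage error
Y = v                                         # zeta = 0: no identification at all

abs_corr = lambda u, w: abs(np.corrcoef(u, w)[0, 1])
corr_with_Y = np.array([abs_corr(Z[:, j], Y) for j in range(k)])
selected = np.argsort(corr_with_Y)[-k_s:]     # select IVs most correlated with Y

corr_with_eps = np.array([abs_corr(Z[:, j], eps) for j in range(k)])
print(corr_with_eps[selected].mean(), corr_with_eps.mean())
```

The selected IVs show a much larger average in-sample correlation with the structural error than the candidate pool as a whole, even though every IV is valid unconditionally.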
Whenever $p_{2}\neq 0$ (i.e., there are exogenous covariates in the relation), the exogenous covariates can be concentrated out, to yield the concentrated $S$ statistic as in [34, Theorem 3]. (In both the simulations and the empirical application below, the constant and the one-period lagged inflation are concentrated out.) The $S$ statistic further recommends itself in this context because it allows for a straightforward test of the exclusion restrictions of the IVs. It may be hoped that any substantial bias caused by improper selection may be flagged in the form of a low $p$-value for the test of the null hypothesis that the IVs selected, $Z_{s}$, are uncorrelated with the structural error term, $\varepsilon$. To investigate this possibility further, in the simulations, I also evaluate the weak-identification robust Hansen test. This is given by the minimum value of the $S$ statistic in Equation (5). Without making an assumption of strong identification, this statistic is asymptotically bounded by a $\chi^{2}_{k_{s}-p_{2}}$ distribution [27, p. 178], which provides a weak-identification robust critical value for the test of the overidentifying restrictions of the IVs selected. #### 3.1.2 Ad-Hoc Selection of Instrumental Variables Conducting inference with the $S$ statistic requires selecting a sufficiently small subset of $k_{s}$ IVs from the available $k$ IVs. (An often-used rule of thumb is to select $k_{s}$ to be of the order of magnitude of $T^{1/3}$. This rate result is motivated by the results in [4] and [32], who show that this rate condition is sufficient for the case of independent data. Recently, fully weak-identification robust AR-type statistics have been developed that allow for the number of IVs to be of the order of magnitude of $T$ [29, 13, 1]. However, all of these approaches treat the IVs as fixed, and are hence not applicable in the context of time series.)
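A minimal sketch of the statistic in Equation (5), using a Bartlett-kernel (Newey-West-type) HAC weight matrix as one concrete choice; the function name and the lag choice below are mine, not the paper's:

```python
import numpy as np

def s_statistic(Z_s, eps0, L=4):
    """CUE GMM objective (times T) with a Bartlett-kernel HAC weight matrix.

    Z_s:  T x k_s array of selected IVs.
    eps0: length-T residuals evaluated at theta_0.
    L:    number of HAC lags (illustrative default).
    """
    T, k_s = Z_s.shape
    f = Z_s * eps0[:, None]                 # moment contributions Z_{st} * eps_{0t}
    fbar = f.mean(0)                        # sample moments eps_T(theta_0)
    g = f - fbar                            # demeaned, as in the CUE weight
    V = g.T @ g / T
    for l in range(1, L + 1):
        G = g[l:].T @ g[:-l] / T
        V += (1 - l / (L + 1)) * (G + G.T)  # Bartlett weights keep V PSD
    W = np.linalg.inv(V)
    return T * fbar @ W @ fbar              # compare with chi^2_{k_s} quantiles
```

Inverting the test amounts to collecting the $\theta_{0}$ for which `s_statistic` stays below the $\chi^{2}_{k_{s}}$ critical value.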
In most of the empirical studies on IV-based limited-information inference on macroeconomic relations, no explicit reason is given for choosing the $k_{s}$ IVs that are subsequently used for analysis. Often, the choice of IVs is simply motivated with reference to previous studies that used those IVs. It is hence impossible to model the choice of IVs of the previous literature accurately in a simulation exercise. As an (imperfect) approximation, I consider the following three selection procedures. The first selection procedure involves randomly selecting $k_{s}$ IVs out of the $k$ available IVs. Since the selection of IVs is not informed by the data itself, this selection procedure is guaranteed to not violate the identifying moment conditions. The second selection procedure I consider will be referred to as crude thresholding. This involves first computing $p_{1}$ separate $k\times 1$ vectors containing the sample correlations between the endogenous variables and all the candidate IVs, sorting the IVs in descending order of correlation, and constructing the vector of IVs for post-selection inference by taking the union of the first $\left\lfloor k_{s}/p_{1}\right\rfloor$ entries in each of the vectors. By selecting the variables based on in-sample correlations, this selection procedure is likely to break the exclusion restriction of the IVs selected. Although (to my knowledge) this crude thresholding has not been applied to IV-based limited-information inference, more sophisticated versions of thresholding have been considered in the past (e.g., [30, Appendix B] and [6]). (The hard thresholding in [30, Appendix B] and [6] is not applicable in high-dimensional contexts, since OLS is infeasible when there are more variables than observations.) The first two selection procedures (random selection and crude thresholding) arguably cover the extremes in terms of the effects IV selection can have on the validity of the IVs.
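Crude thresholding as just described can be sketched as follows. The data are simulated; only the dimensions $T=179$ and $k=359$ echo the empirical application, while $k_{s}=6$ and $p_{1}=2$ are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
T, k, k_s, p1 = 179, 359, 6, 2         # illustrative dimensions

Z = rng.standard_normal((T, k))        # candidate IVs
Y = rng.standard_normal((T, p1))       # endogenous variables

top = k_s // p1                        # floor(k_s / p1) IVs per endogenous variable
sel = set()
for r in range(p1):
    corr = np.abs([np.corrcoef(Z[:, j], Y[:, r])[0, 1] for j in range(k)])
    sel |= set(np.argsort(corr)[-top:].tolist())   # union of the top-|corr| IVs
Z_s = Z[:, sorted(sel)]                # IVs passed to post-selection inference
print(Z_s.shape)
```

The union can contain fewer than $k_{s}$ columns when the per-variable rankings overlap, which is why the selected set has between $\left\lfloor k_{s}/p_{1}\right\rfloor$ and $k_{s}$ IVs.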
Random selection provides the selection ideal, since it leaves the identifying moment conditions completely unaffected. However, particularly with reference to the traditional IVs often considered in the literature, it seems unlikely that random selection (over the very many available predetermined macroeconomic time series) led to choosing proximate lags of the endogenous variables as IVs. Indeed, given the persistence of most macroeconomic time series (and hence of the endogenous variables in any given application), it seems plausible that at least part of the motivation for considering proximate lags of the endogenous variables as IVs stems from their ability to usefully explain some of their in-sample variation. Suggestive evidence for this type of selection is also given by the fact that the IVs selected by crude thresholding in the empirical application in Section 5 show substantial overlap with these traditional IVs. (See also the ranking of IVs based on $t$-values in [30, Table B1].) Therefore, it seems likely that random selection and crude thresholding provide a suggestive lower and upper bound on the selection-induced bias that could underlie existing empirical applications. The third selection procedure I consider is a LASSO-based selection of IVs. This is motivated by the recent increase in popularity of penalisation-based approaches to the (very) many IV problem (see [19, 7, 33]). Furthermore, LASSO-based approaches to IV selection have also been applied to the case of NKPCs in [9, 8].
Here, IVs are selected by solving a LASSO optimisation problem of the following form for each endogenous variable $\displaystyle\hat{\zeta}_{r}$ $\displaystyle=\underset{\zeta_{r}\in\mathbb{R}^{k}}{\text{ arg min }}\sum_{t=1}^{T}(Y_{rt}-\zeta_{r}^{\prime}Z_{t})^{2}+\Lambda_{r}\|\zeta_{r}\|_{1},$ where $Y_{rt}$ is the element in position $t$ of the $T\times 1$ vector $Y_{r}$ given by the $r^{th}$ column of $Y$, $\zeta_{r}$ for $r=1,\dots,p_{1}$ is a $k\times 1$ vector, and $\Lambda_{r}>0$ for $r=1,\dots,p_{1}$ are scalar penalty parameters that are set such that $\hat{\zeta}_{r}$ has $\left\lfloor k_{s}/p_{1}\right\rfloor$ non-zero elements. The IVs selected are those with a non-zero entry in at least one of $\hat{\zeta}_{r}$ for $r=1,\dots,p_{1}$. ### 3.2 A High-Dimensional Sup Score Test for Dependent Data Although the interplay between weak identification and high-dimensional IVs has recently received some attention (see [19]), none of the currently available approaches are both robust to arbitrarily weak identification and applicable in a time-series context. Indeed, to the best of my knowledge, the only approach that is formally robust to arbitrarily weak identification in the presence of very many IVs is the Sup Score test of [7]. The Sup Score test of [7], however, treats the IVs as fixed, and is hence not applicable in time-series contexts. In this section, I propose a Sup Score test that remains valid under high-dimensional dependent data using recent results of [39, 38]. The Sup Score statistic I propose is given by $\mathcal{R}=\underset{1\leq j\leq k}{\text{ max }}\left|\frac{1}{\sqrt{T}}{Z}_{j}^{\prime}{\varepsilon}_{0}\right|.$ (6) This can be seen as a non-studentised version of the [7] Sup Score statistic, which in turn can be interpreted as an extension to high dimensions of the [3] (AR) statistic. It also bears some resemblance to the non-studentised AR statistic proposed by [20].
The critical values for the test statistic in Equation (6) are computed using a block bootstrap. Let $l_{T}\equiv\left\lfloor T/b_{T}\right\rfloor$, where $b_{T}$ is the block length. Define the block sums $\hat{A}_{tj}=\sum_{l=(t-1)b_{T}+1}^{tb_{T}}{Z}_{lj}{\varepsilon}_{0l}-\left\{\overline{Z^{\prime}\varepsilon_{0}}\right\}_{j},\text{ for }t=1,\dots,l_{T},$ where $\left\{\overline{Z^{\prime}\varepsilon_{0}}\right\}_{j}$ is the $j^{th}$ element of the $k\times 1$ vector $\frac{1}{T}\sum_{t=1}^{T}Z_{t}\varepsilon_{0t}$. Consider the bootstrap statistic given by $L_{\hat{A}}=\underset{1\leq j\leq k}{\text{ max }}\frac{1}{\sqrt{T}}\left|\sum_{t=1}^{l_{T}}\hat{A}_{tj}e_{t}\right|,$ where $\{e_{t}\}$ is a sequence of i.i.d. $\mathcal{N}[0,1]$ random variables. The critical value for a test of size $\alpha$ of Equation (4) is given by $c(\alpha)=\text{inf}\left\{\gamma\in\mathbb{R}:\mathbb{P}(L_{\hat{A}}\leq\gamma|\{{Z}_{t}{\varepsilon}_{0t}\}_{t=1}^{T})\geq 1-\alpha\right\}.$ The decision rule for testing the null hypothesis in Equation (4) at the $\alpha$ level of significance is given by $\text{Reject }H_{0}\iff\mathcal{R}>c(\alpha).$ I now turn to conditions that are sufficient to ensure that the test described above has correct size. ###### Assumption 1. i. $Z_{t}\varepsilon_{t}$ is a stationary time series that allows for the causal representation $Z_{t}\varepsilon_{t}=\mathcal{G}(\dots,u_{t-1},u_{t})$ for some measurable function $\mathcal{G}$, where $\{u_{t}\}$ is a sequence of mean-zero i.i.d. random variables. Furthermore, assume that $Z_{tj}\varepsilon_{t}=\mathcal{G}_{j}(\dots,u_{t-1},u_{t})$ for all $j=1,\dots,k$, where $\mathcal{G}_{j}$ is the $j$th component of the map $\mathcal{G}$. ii. $\mathbb{E}[Z_{t}\varepsilon_{t}]=0$, $\mathbb{E}[Z_{tj}^{2}\varepsilon_{t}^{2}]>0$, and $\mathbb{E}[Z_{tj}^{4}\varepsilon_{t}^{4}]<\infty$ for all $j=1,\dots,k$.
3. iii. $k\lesssim\exp(T^{b})$, $b_{T}\lesssim T^{\tilde{b}}$ for $b<1/15$, $4\tilde{b}+7b<1$, $\tilde{b}-2b>0$.

4. iv. $\max_{1\leq j,h\leq k}\sum_{l=-\infty}^{\infty}|l|\,\mathbb{E}[|Z_{tj}\varepsilon_{t}Z_{t+l,h}\varepsilon_{t+l}|]=O(T^{\breve{b}})$, $\breve{b}<\tilde{b}-2b$.

5. v. $\mathbb{E}[|\mathcal{G}_{j}(\dots,u_{t-1},u_{t})-\mathcal{G}_{j}(\dots,u^{*}_{-1},u^{*}_{0},u_{1},\dots,u_{t})|^{q}]\leq C\rho^{t}$ for some $0<\rho<1$ and some positive constant $C$, where $q\geq 4$ and $\{u^{*}_{t}\}$ is an i.i.d. copy of $\{u_{t}\}$.

Assumption 1.i. requires the product of the IVs and the error terms to be stationary and to have a causal representation. Assumption 1.ii. imposes weak moment conditions on the data and includes the identifying moment condition. In practice, I standardise the IVs in-sample to ensure that the test is invariant to their scaling. Assumption 1.iii. bounds the degree of high dimensionality permitted and the size of the bootstrap blocks. Although the restriction on the dimensionality ($b<1/15$) is stronger than those usually encountered in the independent case (see [14, 7]), it still allows for very many IVs relative to the sample size. Assumption 1.iv. restricts the correlation of the product of the IVs and the error term across different points in time. Assumption 1.v. imposes a (uniform) Geometric Moment Contraction (GMC) restriction on the product of the IVs and the error terms, as in [35]. The GMC condition requires that the process under consideration have sufficiently ‘short memory’. Processes that obey such a condition include (under suitable assumptions) standard linear processes (e.g., standard vector autoregressions and Volterra processes) as well as several nonlinear processes (e.g., autoregressive models with conditional heteroscedasticity, random coefficient autoregressive models, and exponential autoregressive models).
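As a rough illustration, the multiplier block bootstrap described at the start of this section can be sketched in a few lines of NumPy. This is a minimal sketch, not the author's implementation: the function and argument names are my own, the critical value is taken as the empirical $(1-\alpha)$ quantile of the bootstrap draws, and the block sums are centred exactly as in the displayed formula.

```python
import numpy as np

def sup_score_critical_value(score, b_T, alpha=0.10, n_boot=500, seed=0):
    """Bootstrap critical value c(alpha) for the Sup Score test.

    score : (T, k) array whose t-th row is Z_t * eps_{0t}, i.e. the
            score contributions evaluated under the null hypothesis.
    """
    rng = np.random.default_rng(seed)
    T, k = score.shape
    l_T = T // b_T                                   # number of full blocks
    mean = score.mean(axis=0)                        # {bar(Z' eps_0)}_j
    # Block sums A_hat[t, j], each centred by the full-sample mean,
    # implementing the displayed formula literally.
    A_hat = score[: l_T * b_T].reshape(l_T, b_T, k).sum(axis=1) - mean
    e = rng.standard_normal((n_boot, l_T))           # i.i.d. N(0, 1) multipliers
    L = np.abs(e @ A_hat).max(axis=1) / np.sqrt(T)   # bootstrap Sup statistics
    return np.quantile(L, 1.0 - alpha)               # empirical (1 - alpha) quantile

def sup_score_test(score, R_stat, b_T, alpha=0.10):
    """Decision rule: reject H0 iff the Sup Score statistic exceeds c(alpha)."""
    return R_stat > sup_score_critical_value(score, b_T, alpha)
```

Inverting `sup_score_test` over a grid of null values for the structural parameters then traces out the confidence sets used later in the paper.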
I refer to [35, 39, 10, 38, 36, 21] and the references therein for a discussion of the different processes that obey such a condition. No assumption on the first stage (i.e., the relationship between $Y$ and $Z$) has to be made. This means that the proposed Sup Score test is uniformly valid over all (finite) values of the coefficients on the IVs in the first stage (including arbitrarily weak identification). It also means that no restriction on the factor or sparsity structure of the first stage has to be imposed. The lack of assumptions on the first stage further implies that the Sup Score test does not suffer from any ‘missing IV problem’ (see also [15]). Whether these conditions are satisfied in any given macroeconomic application depends on the error terms (i.e., on the structural equation) and on the properties of the excluded IVs. Example 1 shows that, under suitable assumptions on the error term that encompass, amongst others, some popular assumptions made in the literature on NKPCs (e.g., [16]), it is only required that the IVs satisfy a GMC condition. This is attractive in the context of limited-information inference, since the researcher only has to assume that the IVs belong to one of the many classes of processes that have been shown to obey such a condition, without having to take a stance on the particular process.

###### Example 1.

Assume that $\varepsilon_{t}$ is i.i.d. across $t$, $\mathbb{E}[\varepsilon_{t}]=0$, $\mathbb{E}[\varepsilon_{t}^{2}]>0$, and $\mathbb{E}[\varepsilon_{t}^{4}]<\infty$. Assume that $Z_{t}$ is a stationary time series that admits the causal representation $Z_{t}=\mathcal{F}(\dots,v_{t-1},v_{t})$ with $Z_{tj}=\mathcal{F}_{j}(\dots,v_{t-1},v_{t})$ for some measurable function $\mathcal{F}$, where $\{v_{t}\}$ is a sequence of mean-zero i.i.d. random variables (independent of $\varepsilon_{t}$). Assume further that $\mathbb{E}[Z_{tj}^{2}]>0$ and $\mathbb{E}[Z_{tj}^{4}]<\infty$ for all $j=1,\dots,k$.
Assume that the conditions on the dimensionality of the IV problem in Assumption 1.iii. are satisfied. Further, assume that $Z_{t}$ satisfies

$\mathbb{E}[|Z_{tj}-\mathcal{F}_{j}(\dots,v^{*}_{-1},v_{0}^{*},v_{1},\dots,v_{t})|^{4}]<\tilde{C}{\tilde{\rho}}^{t},$ (7)

where $\{v^{*}_{t}\}$ is an i.i.d. copy of $\{v_{t}\}$, $\tilde{C}$ is some constant, and $0<\tilde{\rho}<1$. Then the conditions in Assumption 1. hold.

###### Proof.

See Appendix A. ∎

I now state the main theoretical result of this paper, which ensures that the proposed approach controls the size of the test. (It should be noted, however, that, similarly to other sup-based test statistics such as those in [11, 7], the above approach is not efficient. This is to be expected, given the weak assumptions made on the structure of the IVs. The finite-sample power properties of the approach are investigated in the simulation section below, where the results show that it has non-trivial power.)

###### Theorem 1.

Under Assumption 1. and the null hypothesis in Equation (4),

$\lim_{T\to\infty}\mathbb{P}(\text{Reject }H_{0})\leq\alpha.$

###### Proof.

See Appendix B. ∎

Theorem 1 makes it possible to construct confidence sets by inverting the test as outlined above.

## 4 Simulations

The simulations presented in this section serve a twofold purpose. First, I use them to study how improper selection of IVs can lead to problematic post-selection inference. Second, I use them to illustrate the asymptotic validity of the Sup Score test established in the section above, as well as its finite-sample power properties. Taken together, the simulations motivate and further justify applying the Sup Score test proposed in Section 2 in practice. Due to the focus of this paper on the bias introduced by IV selection, I do not consider the factor-based approaches of [22, 30].
The substantial biases caused by the improper selection of a small number of IVs can also serve to motivate the use of such factor methods. However, the factor-based GMM approach in [30] appears to treat the number of IVs as fixed (and does not provide formal conditions for validity), while the factor AR statistic of [22] is only applicable in a high-dimensional context if a sufficiently strong factor structure is assumed. In contrast, the Sup Score test proposed in this paper remains valid in high-dimensional contexts regardless of the factor or sparsity structure of the IVs.

The simulations in this paper are based on the approach in [27]. The central difference is that, rather than modelling the econometrician as having perfect knowledge of the relevant IVs and incorrectly employing methods that are not robust to weak identification, I model the econometrician as using exclusively weak-identification-robust methods, but without knowing which IVs are the truly relevant ones. Given the extensive literature pointing out that NKPCs can suffer from weak identification, this setup seems closer to the estimation problem that an econometrician is likely to face. I base my simulations on the simplest possible specification considered in [27].
This involves imposing the restriction $\gamma_{b}+\gamma_{f}=1$ (which is known to the econometrician) and setting $c=0$ (which is not known to the econometrician), so that the NKPC can be re-written as

$(1-\gamma_{f})\Delta\pi_{t}=\lambda s_{t}+\gamma_{f}\mathbb{E}[\Delta\pi_{t+1}]+\epsilon_{t}.$ (8)

I embed this NKPC into a dynamic system by specifying that the reduced-form dynamics of the forcing variable and inflation follow a VAR model given by

$\begin{bmatrix}\pi_{t}\\ s_{t}\\ f_{t}\end{bmatrix}=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{bmatrix}\begin{bmatrix}\pi_{t-1}\\ s_{t-1}\\ f_{t-1}\end{bmatrix}+\begin{bmatrix}u_{1t}\\ u_{2t}\\ u_{3t}\end{bmatrix},$ (9)

where

$\begin{bmatrix}u_{1t}\\ u_{2t}\\ u_{3t}\end{bmatrix}\overset{i.i.d.}{\sim}\mathcal{N}\left[0,\begin{bmatrix}\omega_{11}&\omega_{12}&\omega_{13}\\ \omega_{21}&\omega_{22}&\omega_{23}\\ \omega_{31}&\omega_{32}&\omega_{33}\end{bmatrix}\right],$

and $f_{t}$ is a scalar factor variable. All coefficients except for $a_{11}$, $a_{12}$, and $a_{13}$ have to be calibrated; $a_{11}$, $a_{12}$, and $a_{13}$ are backed out of the NKPC based on the [2] algorithm. High dimensionality of the IVs is introduced by specifying an $m\times 1$ vector of variables $Q_{t}$ that follows the process

$Q_{t}=\xi f_{t}+u_{4t},\qquad u_{4t}\overset{i.i.d.}{\sim}\mathcal{N}\left[0,I_{m}\right],$ (10)

where $\xi$ is an $m\times 1$ vector of factor loadings. The econometrician conducts inference on $\lambda$ and $\gamma_{f}$ within the GIV and RE framework,

$\Delta\pi_{t}=c+\lambda s_{t}+\gamma_{f}(\pi_{t+1}-\pi_{t-1})+\epsilon_{t},$ (11)
$\mathbb{E}[Z_{st}\epsilon_{t}]=0,$

where the variables are defined as in Section 2 and Section 3.
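The reduced-form DGP in Equations (9) and (10) can be simulated with a short NumPy sketch. This is an illustrative implementation only: the function name, the burn-in length, and the example parameter values in the test below are my own choices, and the NKPC-implied first row of $A$ is taken as given rather than backed out via the [2] algorithm.

```python
import numpy as np

def simulate_dgp(A, Omega, xi, T, burn=100, seed=0):
    """Simulate the VAR in Equation (9) and the factor-loaded IVs in
    Equation (10). Returns (pi, s, f, Q), where Q has shape (T, m)."""
    rng = np.random.default_rng(seed)
    m = xi.shape[0]
    chol = np.linalg.cholesky(Omega)     # draw u_t ~ N(0, Omega)
    x = np.zeros(3)                      # state: (pi_t, s_t, f_t)
    X = np.empty((T, 3))
    for t in range(T + burn):            # burn-in discards start-up effects
        x = A @ x + chol @ rng.standard_normal(3)
        if t >= burn:
            X[t - burn] = x
    pi, s, f = X.T
    Q = f[:, None] * xi[None, :] + rng.standard_normal((T, m))  # Equation (10)
    return pi, s, f, Q
```

Any stable coefficient matrix $A$ and positive-definite $\Omega$ can be passed in; the calibrations used in the paper are described further below.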
The econometrician does not observe the factor itself, but only observes the forcing variable and inflation, as well as the $m$ variables in $Q_{t}$. In this setup, $Z_{st}$ is a subset of the available IVs given by $Z_{t}=[1,\pi_{t-1},s_{t-1},Q_{t-1}^{\prime}]^{\prime}$, which always includes a constant (since the constant is specified in the structural equation the econometrician considers). (For post-selection inference based on the $S$ statistic, the constant is concentrated out; for the Sup Score statistic, it is partialled out.) This setup recommends itself for two reasons. First, it constitutes a minimal departure from popular simulations in the existing literature, which ensures that any reported results are not an artefact of a particularly uncharitable setup. (It is, for instance, straightforward to include the variables in $Q_{t}$ directly in the reduced-form VAR. However, the results from such a DGP are very sensitive to the particular calibration of the parameters chosen.) Second, it ensures the existence of a sufficiently small set of (excluded) ‘oracle IVs’ (given by $\pi_{t-1},s_{t-1},f_{t-1}$) without necessarily imposing a sparse structure on the observed first-stage projection (although sparsity can be imposed by setting $a_{13}=a_{23}=\omega_{13}=\omega_{31}=\omega_{23}=\omega_{32}=0$, or simply $\xi=0$). (Ensuring a sufficiently sparse set of ‘oracle IVs’ further motivates considering only a single lag of a single factor.) The former is desirable because it allows for a comparison of the different ad-hoc inference procedures relative to the most efficient approach (conditional on using the $S$ statistic). The latter is desirable because it allows for a more general (and perhaps more realistic, see [18]) approach to modelling the first stage.
This setup achieves both a sparse (unobserved) oracle first stage and an observed first stage that is not necessarily sparse, because the elements of $Q_{t-1}$ with a non-zero corresponding entry in $\xi$ contain some variation relevant for identification, due to the dependence of the endogenous variables $[\pi_{t+1}-\pi_{t-1},s_{t}]^{\prime}$ on $f_{t-1}$. Based on this setup, it is possible to derive two concentration parameters ($\mu^{2}_{O}$ and $\mu^{2}_{E}$) that reflect the strength of identification in the sparse unobserved oracle first stage and in the observed first stage, respectively. The details are given in Appendix C.

The calibrations are as follows. Throughout, I set $\gamma_{f}=0.8$, $\lambda=0.05$, $T=100$, and $\omega_{11}=0.07$, $\omega_{12}=\omega_{21}=0.03$, $\omega_{22}=0.7$, $\omega_{13}=\omega_{31}=\omega_{23}=\omega_{32}=0$, and $\omega_{33}=0.4$ (see also [27]). The results are not sensitive to this choice of covariance matrix, and this setup makes it possible to create a perfectly sparse observed first stage by setting $a_{23}=0$. For simplicity, I set $a_{31}=a_{32}=0$, so that the factor follows an autoregressive process with coefficient $a_{33}=0.7$ (the results do not change appreciably if this is relaxed or a different value of $a_{33}$ is considered). I set $m=200$ and $\xi_{q}=\tau(-1)^{q}\log\left((q+1)^{2}/mq\right)$ for $q=1,\dots,m$ and $\tau=0.05$. This is meant to provide a deterministic calibration that balances positive and negative, as well as large and small, coefficients. The small value chosen for $\tau$ ensures that the information on the factor contained in the observed variables is sufficiently diluted, and that there is some interesting variation in the informational content of the unobserved oracle first stage relative to the one actually observed. (I refer to Appendix C for more details.)
The derivations also show that choosing small values of $\tau$ has an effect similar to choosing a larger variance for the errors in Equation (10). The results are unaffected by different choices of $\xi$ or $\tau$. For all selection procedures, I force the selection of $k_{s}=4$ IVs to ensure that the first stage is not overfitted, which in turn ensures that any distortions in inference are attributable to the selection step itself. For the $S$ statistic, I set the lag length for the [31] HAC variance estimator to 4. For the Sup Score test proposed above, I set the block size to $b_{T}=4$ and use $500$ bootstrap replications. I allow $a_{21}$, $a_{22}$, and $a_{23}$ to take on different values. The coefficient $a_{23}$ controls how informative the factor is in predicting the endogenous variables, and by extension how informative the variables in $Q_{t-1}$ are.

Table 1 shows the size of the $S$ and Sup Score statistics following the different selection procedures outlined above, for a test with nominal size 10%. The calibrations chosen ensure that a broad range of identification strengths and sparsity structures is considered. The first panel for $a_{23}$ corresponds to the perfectly sparse first stage, in which none of the variables in $Q_{t-1}$ are informative IVs. As a consequence, the concentration parameter of the unobserved oracle first stage coincides with that of the observed first stage. The second and third panels increase the dependence of the two endogenous variables on the unobserved factor. Due to the dense calibration of $\xi$, this means that all of the IVs observed by the econometrician are at least somewhat informative. Since the variables in $Q_{t}$ contain noisy information on the unobserved factor, the concentration parameter in the observed first stage is now lower than that of the unobserved oracle first stage. As expected, the oracle IVs yield correct, if somewhat conservative, size.
Since random selection does not exploit any correlations present in the actual data, the $S$ statistic with randomly selected IVs also yields correct size. The results for crude thresholding and the LASSO suggest that in all cases size is not controlled, although the distortions appear to be somewhat milder for the LASSO. The results for the Sup Score test proposed in this paper show that the test controls size regardless of the DGP considered.

Table 1 also reports the rejection frequency of a two-step approach that tests the null hypothesis at a given level of significance only if the robust test of overidentifying restrictions fails to reject the hypothesis of exogeneity for the IVs selected at that level of significance. This is a very conservative approach: when faced with evidence that the selected IVs may be endogenous, rather than abandoning the analysis altogether, an econometrician seems more likely to proceed to select other IVs, potentially worsening the endogeneity bias. Even under this conservative approach, crude thresholding fails to control size. LASSO selection followed by the two-step approach appears to control size. These results suggest that while the test of overidentifying restrictions can help mitigate some of the endogeneity bias introduced by improper selection, it cannot fully remove it.

Figure 1 shows the power of the different approaches. I present the results for the case where $a_{21}=a_{22}=a_{23}=0.450$; the results are similar for other calibrations. The heatmap traced out by the oracle IVs corresponds to the most powerful procedure that controls size (conditional on exclusively using the $S$ statistic). The results show that randomly selecting IVs yields no power. This is unsurprising: in this setup the first stage is sparse, so random selection predominantly picks largely uninformative IVs.
The power heatmaps for crude thresholding and the LASSO have a similar shape to the oracle heatmaps. However, for certain parts of the parameter space considered, the rejection frequency of these procedures is substantially higher than that of the oracle test. Conditional on using the same test post-selection, both crude thresholding and the LASSO can be at most as powerful as the test that directly uses the oracle IVs. This excess rejection frequency is therefore spurious, and in practice would translate into misleadingly small confidence sets. The power heatmaps for the Sup Score test show that it has non-trivial power.

Table 1: Simulation results: size.

$a_{23}=0.000$ (columns grouped by $a_{21}=0.000$, $0.200$, $0.450$; within each group, $a_{22}=0.000$, $0.200$, $0.450$):

| | $a_{22}$ | 0.000 | 0.200 | 0.450 | 0.000 | 0.200 | 0.450 | 0.000 | 0.200 | 0.450 |
|---|---|---|---|---|---|---|---|---|---|---|
| | $\mu^{2}_{O}$ | 0.000 | 4.082 | 24.175 | 0.000 | 4.070 | 23.938 | 0.000 | 4.040 | 23.393 |
| | $\mu^{2}_{E}$ | 0.000 | 4.082 | 24.175 | 0.000 | 4.070 | 23.938 | 0.000 | 4.040 | 23.393 |
| Oracle | R.F. | 0.070 | 0.042 | 0.046 | 0.062 | 0.064 | 0.045 | 0.069 | 0.059 | 0.046 |
| | T.S. | 0.058 | 0.037 | 0.038 | 0.057 | 0.055 | 0.041 | 0.060 | 0.055 | 0.039 |
| Random | R.F. | 0.107 | 0.125 | 0.123 | 0.114 | 0.115 | 0.115 | 0.112 | 0.120 | 0.135 |
| | T.S. | 0.100 | 0.118 | 0.112 | 0.108 | 0.107 | 0.111 | 0.100 | 0.112 | 0.124 |
| Crude Thresholding | R.F. | 0.445 | 0.421 | 0.428 | 0.417 | 0.433 | 0.429 | 0.396 | 0.419 | 0.440 |
| | T.S. | 0.200 | 0.179 | 0.160 | 0.186 | 0.191 | 0.177 | 0.183 | 0.223 | 0.150 |
| LASSO | R.F. | 0.199 | 0.208 | 0.216 | 0.204 | 0.198 | 0.220 | 0.189 | 0.219 | 0.230 |
| | T.S. | 0.129 | 0.130 | 0.095 | 0.124 | 0.114 | 0.094 | 0.130 | 0.147 | 0.088 |
| Sup Score | R.F. | 0.030 | 0.027 | 0.037 | 0.033 | 0.038 | 0.035 | 0.035 | 0.018 | 0.033 |
| | T.S. | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ |

$a_{23}=0.200$:

| | $a_{22}$ | 0.000 | 0.200 | 0.450 | 0.000 | 0.200 | 0.450 | 0.000 | 0.200 | 0.450 |
|---|---|---|---|---|---|---|---|---|---|---|
| | $\mu^{2}_{O}$ | 11.198 | 19.501 | 48.211 | 12.137 | 21.030 | 49.983 | 13.431 | 23.283 | 53.527 |
| | $\mu^{2}_{E}$ | 6.638 | 14.262 | 38.910 | 7.144 | 15.044 | 38.117 | 7.827 | 16.092 | 36.214 |
| Oracle | R.F. | 0.059 | 0.051 | 0.058 | 0.057 | 0.053 | 0.042 | 0.063 | 0.035 | 0.056 |
| | T.S. | 0.041 | 0.041 | 0.047 | 0.042 | 0.042 | 0.033 | 0.043 | 0.026 | 0.045 |
| Random | R.F. | 0.108 | 0.106 | 0.107 | 0.121 | 0.114 | 0.100 | 0.125 | 0.112 | 0.129 |
| | T.S. | 0.102 | 0.103 | 0.096 | 0.113 | 0.107 | 0.093 | 0.113 | 0.109 | 0.116 |
| Crude Thresholding | R.F. | 0.413 | 0.411 | 0.406 | 0.409 | 0.411 | 0.400 | 0.416 | 0.365 | 0.405 |
| | T.S. | 0.176 | 0.172 | 0.171 | 0.198 | 0.192 | 0.193 | 0.208 | 0.197 | 0.219 |
| LASSO | R.F. | 0.188 | 0.194 | 0.223 | 0.186 | 0.196 | 0.193 | 0.192 | 0.179 | 0.195 |
| | T.S. | 0.112 | 0.114 | 0.108 | 0.113 | 0.127 | 0.090 | 0.130 | 0.118 | 0.119 |
| Sup Score | R.F. | 0.024 | 0.032 | 0.032 | 0.028 | 0.033 | 0.032 | 0.028 | 0.033 | 0.026 |
| | T.S. | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ |

$a_{23}=0.450$:

| | $a_{22}$ | 0.000 | 0.200 | 0.450 | 0.000 | 0.200 | 0.450 | 0.000 | 0.200 | 0.450 |
|---|---|---|---|---|---|---|---|---|---|---|
| | $\mu^{2}_{O}$ | 56.388 | 81.202 | 101.025 | 61.410 | 83.525 | 101.183 | 66.590 | 83.989 | 107.289 |
| | $\mu^{2}_{E}$ | 29.100 | 42.505 | 52.008 | 30.144 | 41.690 | 47.471 | 31.006 | 40.080 | 42.374 |
| Oracle | R.F. | 0.084 | 0.067 | 0.056 | 0.067 | 0.063 | 0.081 | 0.056 | 0.065 | 0.087 |
| | T.S. | 0.060 | 0.051 | 0.042 | 0.049 | 0.048 | 0.065 | 0.037 | 0.052 | 0.072 |
| Random | R.F. | 0.107 | 0.131 | 0.101 | 0.115 | 0.105 | 0.123 | 0.131 | 0.125 | 0.122 |
| | T.S. | 0.097 | 0.121 | 0.095 | 0.108 | 0.102 | 0.112 | 0.119 | 0.117 | 0.105 |
| Crude Thresholding | R.F. | 0.372 | 0.416 | 0.391 | 0.383 | 0.359 | 0.395 | 0.381 | 0.395 | 0.398 |
| | T.S. | 0.186 | 0.203 | 0.180 | 0.182 | 0.180 | 0.210 | 0.191 | 0.204 | 0.292 |
| LASSO | R.F. | 0.201 | 0.215 | 0.189 | 0.201 | 0.204 | 0.195 | 0.186 | 0.222 | 0.212 |
| | T.S. | 0.127 | 0.122 | 0.104 | 0.118 | 0.125 | 0.119 | 0.117 | 0.138 | 0.189 |
| Sup Score | R.F. | 0.016 | 0.026 | 0.032 | 0.024 | 0.027 | 0.037 | 0.030 | 0.041 | 0.037 |
| | T.S. | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ | $-$ |

_Notes_: R.F. denotes the rejection frequency. T.S. denotes the rejection frequency when a null hypothesis is rejected only if the robust test of overidentifying restrictions fails to reject the selected IVs’ exogeneity. Nominal test size: 10%. 1,000 Monte Carlo replications.

Figure 1: Simulation results: power. Panels: (a) Oracle, (b) Random, (c) Crude Thresholding, (d) LASSO, (e) Sup Score. $a_{21}=a_{22}=a_{23}=0.45$. Nominal test size: 10%. 1,000 Monte Carlo replications.

## 5 Empirical Application

Data for the empirical part of this paper is taken from FRED. I use the non-farm labour share, transformed as in [17], as the forcing variable, and the inflation rate implied by the GDP deflator. I consider the period 1974Q2-2018Q4 and include 90 variables intended to reflect different parts of the US economy, based on the list in [28], with four lags each, transforming them as recommended therein. (I do not include all the variables listed in [28], since not all of them are available over a sufficiently long period of time.) This yields 179 observations with 359 IVs. Appendix D contains a detailed description of the data. A natural question to ask is whether considering these 359 IVs is enough to dispel concerns about potential endogeneity biases. Although this is more than any number of IVs previously considered in the literature, there are certainly more valid IVs (i.e., additional predetermined variables).
However, a substantial endogeneity bias caused by the selection of these 359 IVs would emerge only if the variables had been included in the list of [28] based on their correlation with the endogenous variables of this application, which seems very unlikely. For all ad-hoc selection procedures, I limit the number of IVs selected to four, to ensure that overfitting is not a concern, and set the lag length for the [31] HAC variance estimator to 4. The results do not change appreciably when other values are chosen.

The confidence sets yielded by the traditional IVs and the ad-hoc selection procedures are shown in Figure 2, and the corresponding IVs are listed in Table 2. Mirroring the results in Section 4, the confidence set resulting from random selection is extremely wide, and suggests that the hybrid NKPC is essentially unidentified. The confidence set from applying the LASSO is smaller than the one implied by random selection, but it also does not exclude the possibility that the coefficient on expected inflation is in fact equal to zero.

Table 2: Identity of the IVs for each of the selection procedures.

| Traditional | Random | Crude Thresholding | LASSO |
|---|---|---|---|
| PRS85006173.-1 | WILL5000IND.-3 | PRS85006173.-1 | PRS85006173.-1 |
| PRS85006173.-2 | NDMANEMP.-3 | PRS85006173.-2 | PRS85006173.-2 |
| GDPDEF.-2 | EXUSUK.-3 | DSERRG3M086SBEA.-1 | DSERRG3M086SBEA.-1 |
| GDPDEF.-3 | PERMITMW.-4 | CES3000000008.-1 | SRVPRD.-3 |

_Notes_: PRS85006173 refers to Nonfarm Business Sector: Labor Share; GDPDEF to Gross Domestic Product: Implicit Price Deflator; WILL5000IND to Wilshire 5000 Total Market Index; NDMANEMP to All Employees, Nondurable Goods; EXUSUK to U.S. / U.K. Foreign Exchange Rate; PERMITMW to New Private Housing Units Authorized by Building Permits in the Midwest Census Region; DSERRG3M086SBEA to Personal Consumption Expenditures: Services (chain-type price index); CES3000000008 to Average Hourly Earnings of Production and Nonsupervisory Employees, Manufacturing; SRVPRD to All Employees, Service-Providing. A suffix .-l denotes the l-period lag.

Figure 2: 90% confidence sets for the NKPC in Equation (1) using the $S$ statistic with IVs selected by different ad-hoc procedures. Panels: (a) Traditional, (b) Random, (c) Crude Thresholding, (d) LASSO. The identity of the IVs selected by each of the procedures is shown in Table 2.

The confidence sets implied by traditional IVs and crude thresholding are qualitatively very similar. This similarity is explained by the fact that the IVs chosen by crude thresholding are very similar to the traditional IVs, as shown in Table 2. This suggests two things. First, the ad-hoc selection procedures used in this paper may in fact provide a reasonable approximation to the approach taken to selecting IVs in the past literature. Second, given that in the simulation exercise in Section 4 crude thresholding yields the worst size distortions of the selection procedures considered, the confidence sets reported in the previous literature are likely to suffer from at least some distortion due to endogeneity bias. In particular, the process of trying to find ‘strong’ IVs may have led to undercoverage of the true parameter values.

The confidence sets of the Sup Score test proposed in this paper for different block lengths (4, 6, 8, and 10) are shown in Figure 3. For all block lengths considered (the results do not appear to be sensitive to the choice of block length), the confidence sets are smaller than those yielded by the random selection approach, but wider than those of the traditional, crude thresholding, and LASSO approaches.
This is likely due to a combination of the incorrect size of the latter approaches and the low power of the Sup Score test documented in Section 4. The results suggest that while certain parts of the parameter space can be rejected at the 10% level of significance, neither $\lambda$ nor $\gamma_{f}$ is found to be different from zero over the whole parameter space considered.

Figure 3: 90% confidence sets for the NKPC in Equation (1) using the Sup Score test. Panels: (a) $b_{T}=4$, (b) $b_{T}=6$, (c) $b_{T}=8$, (d) $b_{T}=10$.

The Sup Score test can also help shed some light on what the most relevant IVs are, since there is a (likely unique) IV that maximises the Sup Score statistic in Equation (6) for every null hypothesis being tested. I record the identity of the IV maximising the Sup Score statistic for each null hypothesis tested, and report the results in Table 3 and Figure 4. Table 3 shows the identity of the IVs maximising the Sup Score statistic; Figure 4 shows in which part of the parameter space each IV does so. Two things stand out. First, the IVs that feature prominently in Table 2 (i.e., the IVs selected by the ad-hoc procedures) also tend to feature in the set of IVs maximising the Sup Score statistic (i.e., lags of PRS85006173 and CES3000000008). The confidence sets are larger for the Sup Score test in part because the test accounts for the very many other valid IVs from which these variables were chosen. Second, some IVs that maximise the Sup Score statistic do not feature in Table 2, such as the three-period lag of Housing Starts in Northeast Census Region (HOUSTNE.-3). This relates to the predominant motivation for wanting to consider very many IVs: the truly relevant IVs can often be ‘exotic’, in the sense that intuition alone would not point to their relevance.

Table 3: Identity of the IVs maximising the Sup Score statistic.
| IV | # $H_{0}$ | Description |
|---|---|---|
| CES3000000008.-1 | 13,879 | One-period lag of Average Hourly Earnings of Production and Nonsupervisory Employees, Manufacturing |
| HOUSTNE.-3 | 11,021 | Three-period lag of Housing Starts in Northeast Census Region |
| PRS85006173.-3 | 8,404 | Three-period lag of Nonfarm Business Sector: Labor Share |
| PRS85006173.-1 | 7,263 | One-period lag of Nonfarm Business Sector: Labor Share |
| PRS85006173.-2 | 4,339 | Two-period lag of Nonfarm Business Sector: Labor Share |
| CUMFNS.-3 | 1,802 | Three-period lag of Capacity Utilization: Manufacturing |
| CUSR0000SAS.-2 | 1,742 | Two-period lag of Consumer Price Index for All Urban Consumers: Services in U.S. City Average |
| IPCONGD.-3 | 68 | Three-period lag of Industrial Production: Consumer Goods |
| DDURRG3M086SBEA.-1 | 2 | One-period lag of Personal Consumption Expenditures: Durable Goods |

Figure 4: Location in the parameter space $\gamma_{f}\times\lambda$ where the different IVs in Table 3 maximise the Sup Score statistic.

## 6 Conclusion

IV-based limited-information estimation of single equations has become increasingly popular in macroeconomics over the last 20 years. Using a simulation exercise based on NKPCs, I showed that selecting IVs in ad-hoc ways (random selection, crude thresholding, and the LASSO) can invalidate them, yielding invalid inference even if tests with desirable properties (such as robustness to weak identification) are used post-selection. To address this issue, I proposed a Sup Score test that remains valid for high-dimensional IVs and for time series data. In the same simulation exercise that showed that ad-hoc selection procedures can lead to invalid inference, this statistic yielded correct size and reasonable power. Finally, I applied the Sup Score test to conduct inference on the US NKPC with 359 IVs and a sample of 179 observations. The results showed that the confidence sets implied by the Sup Score test are substantially wider than those of all other approaches.
The simulation results and the empirical application point to the importance of developing further high-dimensional IV methods with good power properties that remain valid under dependence and arbitrarily weak identification. ## References * [1] Stanislav Anatolyev and Nikolay Gospodinov “Specification Testing in Models with Many Instruments” In _Econometric Theory_ 27.2, 2010, pp. 427–441 * [2] Gary Anderson and George Moore “A Linear Algebraic Procedure for Solving Linear Perfect Foresight Models” In _Economics Letters_ 17.3, 1985, pp. 247–252 * [3] Theodore Anderson and Herman Rubin “Estimation of the Parameters of a Single Equation in a Complete System of Stochastic Equations” In _The Annals of Mathematical Statistics_ 20.1, 1949, pp. 46–63 * [4] Donald W. K. Andrews and James Stock “Testing with many weak instruments” In _Journal of Econometrics_ 138.1, 2007, pp. 24–46 * [5] Guido Ascari, Leandro M Magnusson and Sophocles Mavroeidis “Empirical evidence on the Euler equation for consumption in the US” In _Journal of Monetary Economics_ , 2019, pp. 1–24 * [6] Omer Bayar “Weak instruments and estimated monetary policy rules” In _Journal of Macroeconomics_ 58, 2018, pp. 308–317 * [7] Alexandre Belloni, Daniel Chen, Victor Chernozhukov and Christian Hansen “Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain” In _Econometrica_ 80.6, 2012, pp. 2369–2429 * [8] Tiago Berriel, Marcelo Medeiros and Marcelo Sena “Instrument selection for estimation of a forward-looking Phillips Curve” In _Economics Letters_ 145, 2016, pp. 123–125 * [9] Tiago Berriel, Marcelo Medeiros and Marcelo Sena “Regularization and Identification of the New Keynesian Phillips Curve” In _European Conferences of the Econometrics Community Conference Paper_ , 2019, pp. 1–27 * [10] Xiaohong Chen, Qi-Man Shao, Wei Biao Wu and Lihu Xu “Self-normalized Cramér-type moderate deviations under dependence” In _The Annals of Statistics_ 44.4, 2016, pp. 
1593–1617 * [11] Victor Chernozhukov, Denis Chetverikov and Kengo Kato “Inference on Causal and Structural Parameters using Many Moment Inequalities” In _The Review of Economic Studies_ 86.5, 2018, pp. 1867–1900 * [12] Victor Chernozhukov, Christian Hansen and Martin Spindler “Post-Selection and Post-Regularization Inference in Linear Models with Many Controls and Instruments” In _American Economic Review_ 105.5, 2015, pp. 486–490 * [13] Federico Crudu, Giovanni Mellace and Zsolt Sándor “Inference in Instrumental Variable Models with Heteroscedasticity and Many Instruments” In _Econometric Theory_ 77, 2020, pp. 1–30 * [14] Han Deng and Cunhui Zhang “Beyond Gaussian Approximation: Bootstrap for Maxima of Sums of Independent Random Vectors” In _arXiv_ 1705.09528, 2020, pp. 1–58 * [15] Jean-Marie Dufour “Comment” In _Journal of Business & Economic Statistics_ 27.3, 2009, pp. 318–321 * [16] Jean-Marie Dufour, Lynda Khalaf and Maral Kichian “Inflation dynamics and the New Keynesian Phillips Curve: An identification robust econometric analysis” In _Journal of Economic Dynamics and Control_ 30.9, 2006, pp. 1707–1727 * [17] Jordi Galí and Mark Gertler “Inflation dynamics: A structural econometric analysis” In _Journal of Monetary Economics_ 44.2, 1999, pp. 195–222 * [18] Domenico Giannone, Michele Lenza and Giorgio Primiceri “Economic predictions with big data: the illusion of sparsity” In _CEPR Discussion Paper_ 12256, 2018, pp. 1–27 * [19] Christian Hansen and Damian Kozbur “Instrumental variables estimation with many weak instruments using regularized JIVE” In _Journal of Econometrics_ 182.2, 2014, pp. 290–308 * [20] Joel L Horowitz “Non-Asymptotic Inference in Instrumental Variables Estimation” In _arXiv_ 1809.03600, 2018, pp. 1–33 * [21] Tailen Hsing and Wei Biao Wu “On weighted U-statistics for stationary processes” In _The Annals of Probability_ 32.2, 2004, pp. 
1600–1631 * [22] George Kapetanios, Lynda Khalaf and Massimiliano Marcellino “Factor-Based Identification-Robust Inference in IV Regressions” In _Journal of Applied Econometrics_ 31.5, 2015, pp. 821–842 * [23] Frank Kleibergen “Testing Parameters in GMM Without Assuming that They Are Identified” In _Econometrica_ 73.4, 2005, pp. 1103–1123 * [24] Frank Kleibergen and Sophocles Mavroeidis “Weak Instrument Robust Tests in GMM and the New Keynesian Phillips Curve” In _Journal of Business & Economic Statistics_ 27.3, 2009, pp. 293–311 * [25] Adrian Ma “GMM estimation of the new Phillips curve” In _Economics Letters_ 76, 2002, pp. 411–417 * [26] Sophocles Mavroeidis “Monetary Policy Rules and Macroeconomic Stability: Some New Evidence” In _American Economic Review_ 100.1, 2010, pp. 491–503 * [27] Sophocles Mavroeidis, Mikkel Plagborg-Møller and James Stock “Empirical Evidence on Inflation Expectations in the New Keynesian Phillips Curve” In _Journal of Economic Literature_ 52.1, 2014, pp. 124–188 * [28] Michael McCracken and Serena Ng “FRED-MD: A Monthly Database for Macroeconomic Research” In _Journal of Business & Economic Statistics_ 34.4, 2016, pp. 574–589 * [29] Anna Mikusheva and Liyang Sun “Inference with Many Weak Instruments” In _arXiv_ 2004.12445, 2020, pp. 1–30 * [30] Harun Mirza and Lidia Storjohann “Making Weak Instrument Sets Stronger: Factor-Based Estimation of Inflation Dynamics and a Monetary Policy Rule” In _Journal of Money, Credit and Banking_ 46.4, 2014, pp. 643–664 * [31] Whitney Newey and Kenneth West “A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix” In _Econometrica_ 55.3, 1987, pp. 703–708 * [32] Whitney Newey and Frank Windmeijer “Generalized Method of Moments With Many Weak Moment Conditions” In _Econometrica_ 77.3, 2009, pp. 687–719 * [33] Serena Ng and Jushan Bai “Selecting Instrumental Variables in a Data Rich Environment” In _Journal of Time Series Econometrics_ 1.1, 2009, pp.
1–34 * [34] James Stock and Jonathan Wright “GMM with Weak Identification” In _Econometrica_ 68.5, 2000, pp. 1055–1096 * [35] Runmin Wang and Xiaofeng Shao “Hypothesis Testing for High-Dimensional Time Series Via Self-Normalisation” In _Mimeo_ , 2019, pp. 1–30 * [36] Wei Biao Wu “Nonlinear System Theory: Another Look at Dependence” In _Proceedings of the National Academy of Sciences_ 102.40, 2005, pp. 14150–14154 * [37] Motohiro Yogo “Estimating the Elasticity of Intertemporal Substitution When Instruments Are Weak” In _The Review of Economics and Statistics_ 86.3, 2004, pp. 797–810 * [38] Xianyang Zhang and Guang Cheng “Bootstrapping High Dimensional Time Series” In _arXiv_ 1406.1037, 2014, pp. 1–53 * [39] Xianyang Zhang and Guang Cheng “Gaussian approximation for high dimensional vector under physical dependence” In _Bernoulli_ 24.4A, 2018, pp. 2640–2675 ## Appendix A Proof of Example 1 ###### Proof. Assumption 1.ii. holds by the assumption of mean-zero independent error terms, the assumption of non-zero variances, and the assumption of finite fourth moments. Assumption 1.iv. is satisfied by the assumption of (mean-zero) independent error terms. Since $\mathbb{E}[\varepsilon_{t}^{4}]<\infty$, I can re-write Equation (7) as $\mathbb{E}[|Z_{tj}-\mathcal{F}_{j}(\dots,v^{*}_{-1},v_{0}^{*},v_{1},\dots,v_{t})|^{4}]\mathbb{E}[|\varepsilon_{t}|^{4}]\leq\breve{C}{\tilde{\rho}}^{t},$ for some new constant $\breve{C}$. Then it follows that $\displaystyle\breve{C}{\tilde{\rho}}^{t}\geq\mathbb{E}\left[\left(\mathcal{F}_{j}(\dots,v_{t-1},v_{t})\varepsilon_{t}-\mathcal{F}_{j}(\dots,v^{*}_{-1},v^{*}_{0},v_{1},\dots,v_{t})\varepsilon_{t}\right)^{4}\right].$ I now define $\tilde{v}_{t}=[v_{t}^{\prime},\varepsilon_{t}]^{\prime}$ (so that $\tilde{v}_{t}$ is an i.i.d. 
mean-zero random variable), which yields $Z_{tj}\varepsilon_{t}=\mathcal{F}_{j}(\dots,v_{t-1},v_{t})\varepsilon_{t}\equiv\tilde{\mathcal{F}}_{j}(\dots,\tilde{v}_{t-1},\tilde{v}_{t}),$ so that $\tilde{\mathcal{F}}_{j}$ continues to be a measurable function with arguments that are i.i.d. random variables. Therefore, Assumption 1.i. holds. Since $\varepsilon_{t}$ is identically distributed, independent across $t$, and independent of $v_{s}$ for $s=1,\dots,T$, it follows that $\mathcal{F}_{j}(\dots,v^{*}_{-1},v^{*}_{0},v_{1},\dots,v_{t})\varepsilon_{t}=\tilde{\mathcal{F}}_{j}(\dots,\tilde{v}^{*}_{-1},\tilde{v}^{*}_{0},\tilde{v}_{1},\dots,\tilde{v}_{t}),$ where $\\{\tilde{v}^{*}_{t}\\}$ are i.i.d. copies of $\\{\tilde{v}_{t}\\}$. Thus, $\breve{C}{\tilde{\rho}}^{t}\geq\mathbb{E}\left[\left(\tilde{\mathcal{F}}_{j}(\dots,\tilde{v}_{t-1},\tilde{v}_{t})-\tilde{\mathcal{F}}_{j}(\dots,\tilde{v}^{*}_{-1},\tilde{v}^{*}_{0},\tilde{v}_{1},\dots,\tilde{v}_{t})\right)^{4}\right].$ Therefore, Assumption 1.v. holds. ∎ ## Appendix B Proof of Theorem 1 ###### Proof. Throughout, it is assumed that the null hypothesis in Equation (4) holds, so that $\varepsilon_{0t}$ is replaced by $\varepsilon_{t}$. The proof is a straightforward application of the results in [39] (referred to as ZC18 in the sequel) and [38] (referred to as ZC14 in the sequel). To this end, let $W_{t}=[W_{t1},\dots,W_{tk}]^{\prime}$ be a Gaussian sequence which is independent of $Z_{t}\varepsilon_{t}$ and preserves the autocovariance structure of $Z_{t}\varepsilon_{t}$. Let $L_{Z\varepsilon}=\underset{1\leq j\leq k}{\text{ max }}\frac{1}{\sqrt{T}}{Z}_{j}^{\prime}\varepsilon$ and $L_{W}=\underset{1\leq j\leq k}{\text{ max }}\frac{1}{\sqrt{T}}\mathfrak{W}_{j}$ where $\mathfrak{W}_{j}$ is the $T\times 1$ vector containing the $j$th column of the matrix $W=[W_{1},\dots,W_{T}]$. I first verify that the conditions in Assumption 1. are sufficient for Theorem 2.1 in ZC18 to hold. Assumption 2.1 in ZC18 holds since by Assumption 1.ii.
$Z_{tj}\varepsilon_{t}$ has finite fourth moments, so that setting $\mathfrak{D}_{n}$ in ZC18 to $T^{(3-12\tilde{b}-13b)/32}$, and $h(\cdot)$ in ZC18 to $h(x)=x^{4}$ satisfies the first of the two possible conditions in Assumption 2.1 of ZC18 by the assumption that $12\tilde{b}+13b<3$ (which is implied by the restrictions on $b$ and $\tilde{b}$ given in Assumption 1.iii.). Assumption 2.2 in ZC18 holds by replacing $M$ in ZC18 with $b_{T}$, and setting $\gamma$ in ZC18 to $\gamma=T^{-(1-4\tilde{b}-7b)/8}=o(1)$ (see also the sentence immediately following Theorem 3.2 in ZC14). Assumption 2.3 in ZC18 contains two conditions. The first condition (what they express as $c_{1}<\underset{1\leq j\leq k}{\text{ min }}\sigma_{j,j}\leq\underset{1\leq j\leq k}{\text{ max }}\sigma_{j,j}<c_{2}$) holds since by Assumption 1.ii., $Z_{tj}\varepsilon_{t}$ has non-degenerate finite second moments. The second condition (what they express as $\sum_{j=1}^{+\infty}j\theta_{j,k,3}<c_{3}$) is satisfied by Assumption 1.v., since, as per Remark 3.2 in [35], the GMC condition used in the present paper (and arguably in the literature that uses physical dependence measures more broadly) is equivalent to the one used in ZC18 and ZC14. Therefore, by Theorem 2.1 in ZC18, under the conditions in Assumption 1., the process $Z_{t}\varepsilon_{t}$ can be approximated by its Gaussian equivalent, i.e., $\underset{a\in\mathbb{R}}{\text{ sup }}\left|\mathbb{P}(L_{Z\varepsilon}\leq a)-\mathbb{P}(L_{W}\leq a)\right|\lesssim T^{-(1-4\tilde{b}-7b)/8}.$ (B.1) The bound in Equation (B.1) satisfies the first condition for Theorem 4.2 in ZC14. The careful reader will have noticed that Theorem 4.2 in ZC14 appeals to the conditions in Theorem 3.3 in ZC14 which is virtually the same theorem as Theorem 2.1 in ZC18 except for an additional GMC assumption on the Gaussian equivalent of $Z_{t}\varepsilon_{t}$.
However, a careful reading of the proof of Theorem 4.2 in ZC14 reveals that this theorem exclusively appeals to the conditions in Theorem 3.3 in ZC14 in order to establish a bound on the Gaussian approximation as in Equation (B.1) above. Since Theorem 2.1 in ZC18 establishes this bound without this assumption, the GMC on the Gaussian equivalent of $Z_{t}\varepsilon_{t}$ can be dropped in appealing to Theorem 4.2 in ZC14. It remains to verify Condition 2 of Assumption 4.1 in ZC14. Condition 2 in Assumption 4.1 in ZC14 requires checking four conditions. The first condition (what they express as $\bar{\sigma}_{x,M}\lor\bar{\sigma}_{x,N}\lesssim n^{s_{1}}$) is satisfied by Assumption 1.v., since by Remark 4.1 in ZC14, the first condition of Condition 2 of Assumption 4.1 in ZC14 is satisfied with $s_{1}=0$ whenever the data in question obeys the GMC condition. The second condition (what they express as ${\varsigma}_{x,M}\lor{\varsigma_{x,N}}\lesssim n^{s^{\prime}_{2}/2}$) is satisfied by setting their $M,N$ to $b_{T}$ and $s^{\prime}_{2}$ to $\tilde{b}$ and noticing that for all $j=1,\dots,k$, $\left(\frac{1}{b_{T}^{2}}\mathbb{E}\left[\left|\sum_{t=1}^{b_{T}}{Z_{tj}\varepsilon_{t}}\right|^{4}\right]\right)^{1/4}\leq\left(\frac{1}{b_{T}^{2}}\mathbb{E}\left[\underset{1\leq t\leq b_{T}}{\text{ max }}Z_{tj}^{4}\varepsilon_{t}^{4}\right]{b_{T}^{4}}\right)^{1/4}\leq b_{T}^{1/2}\mathbb{E}\left[\underset{1\leq t\leq T}{\text{ max }}Z_{tj}^{4}\varepsilon_{t}^{4}\right]\lesssim b_{T}^{1/2}$ by Assumption 1.ii.. Thus, $\left(\mathbb{E}\left[\underset{1\leq j\leq k}{\text{ max }}\left|\sum_{t=1}^{b_{T}}\frac{Z_{tj}\varepsilon_{t}}{\sqrt{b_{T}}}\right|^{4}\right]\right)^{1/4}\lesssim b_{T}^{1/2}\lesssim T^{\tilde{b}/2},$ by Assumption 1.iii.. This ensures that the second condition of Condition 2 of Assumption 4.1 in ZC14 is satisfied. The third condition (what they express as ${\varpi}_{x}\lesssim n^{s_{3}}$) is satisfied by Assumption 1.iv. and setting $s_{3}=\breve{b}$. 
The fourth condition (what they express as $s_{b}^{\prime}>0$) is satisfied since $(1-6\tilde{b})/2>0$, $(1-6b-\tilde{b})/2-\tilde{b}>0$, and $\tilde{b}-2b-\breve{b}>0$ by the assumption made on $b$, $\tilde{b}$, and $\breve{b}$ in Assumption 1.iii. and Assumption 1.iv.. It is hence possible to invoke Theorem 4.2 in ZC14, which yields $\underset{\alpha\in(0,1)}{\text{ sup }}\left|\mathbb{P}(L_{Z\varepsilon}\leq\tilde{c}(\alpha))-\alpha\right|\lesssim T^{-c},$ where $\displaystyle\tilde{c}(\alpha)$ $\displaystyle=\text{inf}\left\\{\gamma\in\mathbb{R}:\mathbb{P}(\tilde{L}_{\hat{A}}\leq\gamma|\\{{Z}_{t}{\varepsilon}_{0t}\\}_{t=1}^{T})\geq 1-\alpha\right\\},$ $\displaystyle\tilde{L}_{\hat{A}}$ $\displaystyle=\underset{1\leq j\leq k}{\text{ max }}\frac{1}{\sqrt{T}}\sum_{t=1}^{l_{T}}\hat{A}_{tj}e_{t},$ and $\\{e_{t}\\}$ is a sequence of i.i.d. $\mathcal{N}[0,1]$ random variables. The constant $c$ is positive, since $(1-5b-\tilde{b})/2>0$, $(1-6b-\tilde{b})/2-\tilde{b}>0$, $\tilde{b}-2b-\breve{b}>0$, and $(1-4\tilde{b}-7b)>0$ by the assumption made on $b$, $\tilde{b}$, and $\breve{b}$ in Assumption 1.iii. and Assumption 1.iv.. Finally, notice that the procedure proposed in this paper is computing only the means of random variables. This means that the ‘influence function’ ($IF$ in ZC14) does not have to be estimated (since the true value is known under the null hypothesis). It also means that the statistic is ‘exactly linear’, i.e., the remainder term $\mathcal{R}_{N_{0}}$ in ZC14 is zero. This implies that the two conditions in Assumption 5.1 in ZC14 are trivially satisfied (since, in their notation, $\mathcal{E}_{AB}=\mathcal{R}_{N_{0}}=0$). Also, the block length of $Z_{tj}\varepsilon_{t}$ is simply unity so that $N_{0}$ in ZC14 is simply $T$ and the dimension of the parameter to be estimated ($q_{0}$ in their notation) is simply the number of IVs considered, $k$. 
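For concreteness, the block-length-one Gaussian-multiplier bootstrap described above can be sketched as follows. The simulated i.i.d. data, the sample sizes, and the choice $\hat{A}_{tj}=Z_{tj}\varepsilon_{t}$ are illustrative assumptions for this sketch, not the calibration used in the paper.

```python
import numpy as np

# Illustrative sketch of the Gaussian-multiplier bootstrap critical value
# c(alpha) for the max statistic R = max_j sqrt(T) |Z_j' eps / T|.
# The i.i.d. draws below and A_hat[t, j] = Z[t, j] * eps[t] (block length
# one, true parameter known under the null) are assumptions, not the
# paper's design.
rng = np.random.default_rng(0)
T, k, B, alpha = 200, 50, 499, 0.05

Z = rng.standard_normal((T, k))       # instruments
eps = rng.standard_normal(T)          # errors under the null
scores = Z * eps[:, None]             # per-period scores A_hat

# Observed max statistic R.
R = np.max(np.abs(scores.sum(axis=0)) / np.sqrt(T))

# Multiplier bootstrap: B redraws of i.i.d. N(0,1) weights e_t.
e = rng.standard_normal((B, T))
L_boot = np.max(np.abs(e @ scores) / np.sqrt(T), axis=1)   # shape (B,)
c_alpha = np.quantile(L_boot, 1 - alpha)

reject = R > c_alpha                  # identification-robust test decision
```

Because the bootstrap only recomputes a max of weighted sums, each replication is a single matrix product, so the procedure stays cheap even for large $k$.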
By Theorem 5.1 in ZC14, which requires the conditions for Theorem 4.1 and Assumption 5.1 in ZC14 to hold, and the identifying moment condition $\mathbb{E}[Z^{\prime}\varepsilon]=0$, it hence follows that $\underset{\alpha\in(0,1)}{\text{ sup }}\left|\mathbb{P}\left(\underset{1\leq j\leq k}{\text{ max }}\sqrt{T}\left|\frac{1}{T}Z_{j}^{\prime}\varepsilon\right|\leq{c}(\alpha)\right)-\alpha\right|\lesssim T^{-c},$ i.e., $\underset{\alpha\in(0,1)}{\text{ sup }}\left|\mathbb{P}\left(\mathcal{R}\leq{c}(\alpha)\right)-\alpha\right|\lesssim T^{-c}.$ Letting $T\to\infty$ yields the required result. ∎ ## Appendix C Derivation of Concentration Parameters for Simulations I first derive the concentration parameter for the unobserved oracle first stage. The derivations for this concentration parameter are very similar to those in the online appendix of [27]. As in [27, Online Appendix], the constant can be omitted for the purposes of deriving the concentration matrix because in all simulations it is set equal to zero in the structural NKPC. The endogenous variables in the model the econometrician estimates (Equation (11)) can then be written in terms of the excluded IVs as
$\begin{bmatrix}\pi_{t+1}-\pi_{t-1}\\\ s_{t}\end{bmatrix}=\underbrace{\begin{bmatrix}1&0&0\\\ 0&0&0\\\ \end{bmatrix}}_{E_{1}}\begin{bmatrix}\pi_{t+1}\\\ s_{t+1}\\\ f_{t+1}\\\ \end{bmatrix}+\underbrace{\begin{bmatrix}0&0&0\\\ 0&1&0\\\ \end{bmatrix}}_{E_{2}}\begin{bmatrix}\pi_{t}\\\ s_{t}\\\ f_{t}\end{bmatrix}+\underbrace{\begin{bmatrix}-1&0&0\\\ 0&0&0\end{bmatrix}}_{E_{3}}\begin{bmatrix}\pi_{t-1}\\\ s_{t-1}\\\ f_{t-1}\end{bmatrix}.$ (C.1) Define $p_{t}=[\pi_{t+1}-\pi_{t-1},s_{t}]^{\prime}$, $R_{t}=[\pi_{t},s_{t},f_{t}]^{\prime}$, $u_{t}=[u_{1t},u_{2t},u_{3t}]^{\prime}$, $\Psi=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\\ a_{21}&a_{22}&a_{23}\\\ a_{31}&a_{32}&a_{33}\\\ \end{bmatrix},\text{ and }\Omega=\begin{bmatrix}\omega_{11}&\omega_{12}&\omega_{13}\\\ \omega_{21}&\omega_{22}&\omega_{23}\\\ \omega_{31}&\omega_{32}&\omega_{33}\\\ \end{bmatrix}.$ Equation (C.1) can now be written as $\displaystyle p_{t}$ $\displaystyle=E_{1}R_{t+1}+E_{2}R_{t}+E_{3}R_{t-1}$ $\displaystyle=(E_{1}\Psi^{2}+E_{2}\Psi+E_{3})R_{t-1}+(E_{1}\Psi+E_{2})u_{t}+E_{1}u_{t+1}$ $\displaystyle=DR_{t-1}+w_{t},$ for $D=E_{1}\Psi^{2}+E_{2}\Psi+E_{3}$, $w_{t}=(E_{1}\Psi+E_{2})u_{t}+E_{1}u_{t+1}$. Assuming that $R_{t}$ is stationary, and letting $\Gamma=\mathbb{V}\left[R_{t}\right]$, $\text{vec}(\Gamma)=\left(I_{9}-\Psi\otimes\Psi\right)^{-1}\text{vec}(\Omega).$ The population projection of $p_{t}$ on $R_{t-1}$ has coefficient matrix given by $\displaystyle M$ $\displaystyle=\mathbb{E}[p_{t}R_{t-1}^{\prime}]\Gamma^{-1}$ $\displaystyle=\mathbb{E}\left[(DR_{t-1}+w_{t})R_{t-1}^{\prime}\right]\Gamma^{-1}$ $\displaystyle=D\mathbb{E}\left[R_{t-1}R_{t-1}^{\prime}\right]\Gamma^{-1}$ $\displaystyle=D,$ since $\mathbb{E}[w_{t}R_{t-1}^{\prime}]=\mathbb{E}\left[\left((E_{1}\Psi+E_{2})u_{t}+E_{1}u_{t+1}\right)R_{t-1}^{\prime}\right]=0$. 
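The stationary variance $\Gamma$ above can be evaluated numerically via the vec formula. The $\Psi$ and $\Omega$ below are placeholder values chosen only to make the sketch runnable, not the simulation calibration.

```python
import numpy as np

# Sketch: compute Gamma = V[R_t] for R_t = Psi R_{t-1} + u_t via
# vec(Gamma) = (I_9 - Psi (x) Psi)^{-1} vec(Omega).
# Psi and Omega are placeholder values (any stable Psi, SPD Omega works).
Psi = np.array([[0.5, 0.1, 0.0],
                [0.0, 0.4, 0.1],
                [0.1, 0.0, 0.3]])
A = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.1],
              [0.0, 0.1, 1.0]])
Omega = A @ A.T                                   # positive-definite innovation variance

vec_Omega = Omega.reshape(-1, order="F")          # column-stacking vec
vec_Gamma = np.linalg.solve(np.eye(9) - np.kron(Psi, Psi), vec_Omega)
Gamma = vec_Gamma.reshape(3, 3, order="F")

# Gamma solves the discrete Lyapunov fixed point Gamma = Psi Gamma Psi' + Omega.
```

Note that the column-stacking convention matters: `order="F"` is what makes `np.kron(Psi, Psi)` the correct linear operator for $\text{vec}(\Psi\Gamma\Psi^{\prime})$.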
The projection error of the unobserved oracle first stage is given by $\displaystyle e_{t}$ $\displaystyle=p_{t}-MR_{t-1}$ $\displaystyle=DR_{t-1}+w_{t}-DR_{t-1}$ $\displaystyle=w_{t}.$ The variance of the population projection error of the unobserved oracle first stage $\Sigma=\mathbb{V}[e_{t}]$ is hence given by $\displaystyle\Sigma$ $\displaystyle=\mathbb{V}[w_{t}]$ $\displaystyle=(E_{1}\Psi+E_{2})\Omega(E_{1}\Psi+E_{2})^{\prime}+E_{1}\Omega E_{1}^{\prime}.$ The concentration matrix of the unobserved oracle first stage is then given by $C=T\Sigma^{-1/2}D\Gamma D^{\prime}(\Sigma^{-1/2})^{\prime},$ where $\Sigma^{-1/2}(\Sigma^{-1/2})^{\prime}=\Sigma^{-1}$. The minimum eigenvalue $\mu^{2}_{O}$ of matrix $C$ gives the concentration parameter of the unobserved oracle first stage. The steps to derive the concentration matrix corresponding to the first stage observed by the econometrician are similar. Define $\tilde{R}_{t}=[\pi_{t},s_{t},Q_{t}^{\prime}]^{\prime}$, and $\Xi=\begin{bmatrix}1&0&0\\\ 0&1&0\\\ 0_{m\times 1}&0_{m\times 1}&\xi\end{bmatrix},F=\begin{bmatrix}0_{2\times 2}&0_{2\times m}\\\ 0_{m\times 2}&I_{m}\end{bmatrix},$ so that $\tilde{R}_{t}=\Xi R_{t}+\tilde{u}_{t}$, where $\tilde{u}_{t}=[0,0,u_{4t}^{\prime}]^{\prime}$, $\mathbb{V}[\tilde{u}_{t}]=F$. Assuming that $R_{t}$ is stationary, $\tilde{R}_{t}$ is also stationary.
Letting $\tilde{\Gamma}=\mathbb{V}[\tilde{R}_{t}]$, $\displaystyle\tilde{\Gamma}$ $\displaystyle=\mathbb{V}\left[\Xi R_{t}+\tilde{u}_{t}\right]$ $\displaystyle=\Xi\mathbb{V}\left[R_{t}\right]\Xi^{\prime}+F$ $\displaystyle=\Xi\Gamma\Xi^{\prime}+F.$ The population projection of $p_{t}$ on $\tilde{R}_{t-1}$ has coefficient matrix given by $\displaystyle\tilde{M}$ $\displaystyle=\mathbb{E}[p_{t}\tilde{R}_{t-1}^{\prime}]{\tilde{\Gamma}}^{-1}$ $\displaystyle=\mathbb{E}[p_{t}({R}_{t-1}^{\prime}\Xi^{\prime}+\tilde{u}_{t-1}^{\prime})]{\tilde{\Gamma}}^{-1}$ $\displaystyle=\mathbb{E}[p_{t}{R}_{t-1}^{\prime}]\Xi^{\prime}{\tilde{\Gamma}}^{-1}+\mathbb{E}[p_{t}\tilde{u}_{t-1}^{\prime}]{\tilde{\Gamma}}^{-1}$ $\displaystyle=\mathbb{E}[(DR_{t-1}+w_{t}){R}_{t-1}^{\prime}]\Xi^{\prime}{\tilde{\Gamma}}^{-1}$ $\displaystyle=D\mathbb{E}[R_{t-1}R_{t-1}^{\prime}]\Xi^{\prime}{\tilde{\Gamma}}^{-1}+\mathbb{E}[w_{t}R_{t-1}^{\prime}]\Xi^{\prime}{\tilde{\Gamma}}^{-1}$ $\displaystyle=D\Gamma\Xi^{\prime}{\tilde{\Gamma}}^{-1}.$ The projection error for the observed first stage is given by $\displaystyle\tilde{e}_{t}$ $\displaystyle=p_{t}-\tilde{M}\tilde{R}_{t-1}$ $\displaystyle=DR_{t-1}+w_{t}-\tilde{M}\tilde{R}_{t-1}$ $\displaystyle=DR_{t-1}+w_{t}-\tilde{M}(\Xi R_{t-1}+\tilde{u}_{t-1})$ $\displaystyle=(D-\tilde{M}\Xi)R_{t-1}+w_{t}-\tilde{M}\tilde{u}_{t-1}.$ The variance of the population projection error of the observed first stage $\tilde{\Sigma}=\mathbb{V}[\tilde{e}_{t}]$ is hence given by $\displaystyle\tilde{\Sigma}$ $\displaystyle=\mathbb{V}[(D-\tilde{M}\Xi)R_{t-1}]+\mathbb{V}[w_{t}]+\mathbb{V}[\tilde{M}\tilde{u}_{t-1}]$ $\displaystyle=(D-\tilde{M}\Xi)\Gamma(D-\tilde{M}\Xi)^{\prime}+\Sigma+\tilde{M}F\tilde{M}^{\prime}.$ The concentration matrix of the observed first stage is then given by $\tilde{C}=T\tilde{\Sigma}^{-1/2}\tilde{M}\tilde{\Gamma}\tilde{M}^{\prime}(\tilde{\Sigma}^{-1/2})^{\prime},$ where $\tilde{\Sigma}^{-1/2}(\tilde{\Sigma}^{-1/2})^{\prime}=\tilde{\Sigma}^{-1}$.
The minimum eigenvalue $\mu^{2}_{E}$ of matrix $\tilde{C}$ gives the concentration parameter of the observed first stage. ## Appendix D Data Table 4: Data description. Code | Description | Type | Code | Description | Type ---|---|---|---|---|--- RPI | Real Personal Income | G | PERMIT | New Private Housing Units Authorized by Building Permits | G INDPRO | Industrial Production Index | G | PERMITNE | New Private Housing Units Authorized by Building Permits in the Northeast Census Region | G CUMFNS | Capacity Utilization: Manufacturing | D | PERMITMW | New Private Housing Units Authorized by Building Permits in the Midwest Census Region | G IPFINAL | Industrial Production: Final Products (Market Group) | G | PERMITS | New Private Housing Units Authorized by Building Permits in the South Census Region | G IPCONGD | Industrial Production: Consumer Goods | G | PERMITW | New Private Housing Units Authorized by Building Permits in the West Census Region | G IPDCONGD | Industrial Production: Durable Consumer Goods | G | DPCERA3M086SBEA | Real personal consumption expenditures (chain-type quantity index) | G IPNCONGD | Industrial Production: Nondurable Consumer Goods | G | CMRMTSPL | Real Manufacturing and Trade Industries Sales | G IPBUSEQ | Industrial Production: Business Equipment | G | UMCSENT | University of Michigan: Consumer Sentiment | G IPMAT | Industrial Production: Materials | G | M1SL | M1 Money Stock | G IPMANSICS | Industrial Production: Manufacturing (SIC) | G | M2SL | M2 Money Stock | G IPB51222s | Industrial Production: Residential utilities | G | TOTRESNS | Total Reserves of Depository Institutions | G IPFUELS | Industrial Production: Fuels | G | BUSLOANS | Commercial and Industrial Loans, All Commercial Banks | G CLF16OV | Civilian Labor Force Level | G | REALLN | Real Estate Loans, All Commercial Banks | G UNRATE | Unemployment Rate | N | NONREVSL | Total Nonrevolving Credit Owned and Securitized, Outstanding | G UEMPMEAN | Average Weeks Unemployed | D | 
DTCOLNVHFNM | Consumer Motor Vehicle Loans Owned by Finance Companies, Outstanding | G UEMPLT5 | Number Unemployed for Less Than 5 Weeks | G | DTCTHFNM | Total Consumer Loans and Leases Owned and Securitized by Finance Companies, Outstanding | G UEMP5TO14 | Number Unemployed for 5-14 Weeks | G | INVEST | Securities in Bank Credit, All Commercial Banks | G UEMP15OV | Number Unemployed for 15 Weeks & Over | G | FEDFUNDS | Effective Federal Funds Rate | G UEMP15T26 | Number Unemployed for 15-26 Weeks | G | TB3SMFFM | 3-Month Treasury Bill Minus Federal Funds Rate | N UEMP27OV | Number Unemployed for 27 Weeks & Over | G | TB6SMFFM | 6-Month Treasury Bill Minus Federal Funds Rate | N PAYEMS | All Employees, Total Nonfarm | G | T1YFFM | 1-Year Treasury Constant Maturity Minus Federal Funds Rate | N USGOOD | All Employees, Goods-Producing | G | T5YFFM | 5-Year Treasury Constant Maturity Minus Federal Funds Rate | N CES1021000001 | All Employees, Mining | G | T10YFFM | 10-Year Treasury Constant Maturity Minus Federal Funds Rate | N USCONS | All Employees, Construction | G | AAAFFM | Moody’s Seasoned Aaa Corporate Bond Minus Federal Funds Rate | N MANEMP | All Employees, Manufacturing | G | BAAFFM | Moody’s Seasoned Baa Corporate Bond Minus Federal Funds Rate | N PRS85006173 | Nonfarm Business Sector: Labor Share | GG | TWEXMMTH | Trade Weighted U.S. Dollar Index: Major Currencies, Goods | G DMANEMP | All Employees, Durable Goods | G | EXSZUS | Switzerland / U.S. Foreign Exchange Rate | G NDMANEMP | All Employees, Nondurable Goods | G | EXJPUS | Japan / U.S. Foreign Exchange Rate | G SRVPRD | All Employees, Service-Providing | G | EXUSUK | U.S. / U.K. Foreign Exchange Rate | G USTPU | All Employees, Trade, Transportation, and Utilities | G | EXCAUS | Canada / U.S. 
Foreign Exchange Rate | G USWTRADE | All Employees, Wholesale Trade | G | WPSFD49502 | Producer Price Index by Commodity for Final Demand: Personal Consumption Goods | G USTRADE | All Employees, Retail Trade | G | WPSID61 | Producer Price Index by Commodity for Intermediate Demand by Commodity Type: Processed Goods for Intermediate Demand | G USFIRE | All Employees, Financial Activities | G | WTISPLC | Spot Crude Oil Price: West Texas Intermediate (WTI) | G USGOVT | All Employees, Government | G | PPICMM | Producer Price Index by Commodity Metals and metal products: Primary nonferrous metals | G CES0600000007 | Average Weekly Hours of Production and Nonsupervisory Employees, Goods-Producing | G | CPIAUCSL | Consumer Price Index for All Urban Consumers: All Items in U.S. City Average | G AWOTMAN | Average Weekly Overtime Hours of Production and Nonsupervisory Employees, Manufacturing | D | CUSR0000SAC | Consumer Price Index for All Urban Consumers: Commodities in U.S. City Average | G AWHMAN | Average Weekly Hours of Production and Nonsupervisory Employees, Manufacturing | D | CUSR0000SAD | Consumer Price Index for All Urban Consumers: Durables in U.S. City Average | G CES0600000008 | Average Hourly Earnings of Production and Nonsupervisory Employees, Goods-Producing | G | CUSR0000SAS | Consumer Price Index for All Urban Consumers: Services in U.S.
City Average | G CES2000000008 | Average Hourly Earnings of Production and Nonsupervisory Employees, Construction | G | PCEPI | Personal Consumption Expenditures: Chain-type Price Index | G CES3000000008 | Average Hourly Earnings of Production and Nonsupervisory Employees, Manufacturing | G | DDURRG3M086SBEA | Personal consumption expenditures: Durable goods (chain-type price index) | G HOUST | Housing Starts: Total: New Privately Owned Housing Units Started | G | DNDGRG3M086SBEA | Personal consumption expenditures: Nondurable goods (chain-type price index) | G HOUSTNE | Housing Starts in Northeast Census Region | G | DSERRG3M086SBEA | Personal consumption expenditures: Services (chain-type price index) | G HOUSTMW | Housing Starts in Midwest Census Region | G | GDPDEF | Gross Domestic Product: Implicit Price Deflator | G HOUSTS | Housing Starts in South Census Region | G | WILL5000IND | Wilshire 5000 Total Market Index | G HOUSTW | Housing Starts in West Census Region | G | GDPC1 | Real Gross Domestic Product | G _Notes_ : N refers to no transformation of the data. G refers to transforming variable $f_{t}$ by computing $100\left(\log(f_{t})-\log(f_{t-1})\right)$. D refers to transforming variable $f_{t}$ by computing $f_{t}-f_{t-1}$. GG refers to transforming variable $f_{t}$ by computing $0.1226\times 100\log(f_{t}/100)$ (as in [24, 17]).
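The transformation codes in the notes above can be sketched in code as follows; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def transform(f, code):
    """Apply a Table 4 transformation code to a series f_t.

    N:  level (no transformation)
    G:  100 * (log f_t - log f_{t-1})
    D:  first difference f_t - f_{t-1}
    GG: 0.1226 * 100 * log(f_t / 100), as in the notes above.
    The function name and interface are illustrative assumptions.
    """
    f = np.asarray(f, dtype=float)
    if code == "N":
        return f
    if code == "G":
        return 100.0 * (np.log(f[1:]) - np.log(f[:-1]))
    if code == "D":
        return f[1:] - f[:-1]
    if code == "GG":
        return 0.1226 * 100.0 * np.log(f / 100.0)
    raise ValueError(f"unknown transformation code: {code}")
```

Note that the G and D codes shorten the series by one observation, so transformed series should be aligned on dates before stacking them into the instrument matrix.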
# On the noise generation and unsteady performance of combined heaving and pitching foils Nathan Wagenhoffer<EMAIL_ADDRESS>Keith W. Moored, Justin W. Jaworski. Engineering, Pennsylvania State University Abington; Mechanical Engineering and Mechanics, Lehigh University ###### Abstract A transient two-dimensional acoustic boundary element solver is coupled to a potential flow boundary element solver via Powell’s acoustic analogy to determine the acoustic emission of isolated hydrofoils performing biologically-inspired motions. The flow-acoustic boundary element framework is validated against experimental and asymptotic solutions for the noise produced by canonical vortex-body interactions. The numerical framework then characterizes the noise production of an oscillating foil, which is a simple representation of a fish caudal fin. A rigid NACA 0012 hydrofoil is subjected to combined heaving and pitching motions for Strouhal numbers ($0.03<St<1$) based on peak-to-peak amplitudes and chord-based reduced frequencies ($0.125<f^{*}<1$) that span the parameter space of many swimming fish species. A dipolar acoustic directivity is found for all motions, frequencies, and amplitudes considered, and the peak noise level increases with both the reduced frequency and the Strouhal number. A combined heaving and pitching motion produces less noise than either a purely pitching or purely heaving foil at a fixed reduced frequency and amplitude of motion. Correlations of the lift and power coefficients with the peak root-mean-square acoustic pressure levels are determined, which could be utilized to develop long-range, quiet swimmers. ## 1 Introduction Many aquatic animals oscillate their fins to propel themselves quickly through the ocean. Their propulsion is based on an unsteady flow paradigm that is distinct from the steady flow paradigm that underpins the design of most man-made underwater vehicles.
Consequently, animals can excel at maneuvering and rapid accelerations [1, 2] in addition to high-speed and high-efficiency swimming [3, 4]. This multi-modal performance has spurred vigorous research over the last two decades into bio-inspired autonomous underwater vehicles (BAUVs) [5, 6, 7, 8], which would possess the additional benefit of exceptional stealth [9, 10]. It is generally expected that fish-like swimming motions lead to quiet acoustic signatures [9], which (even if detected) would resemble the noise signature of real fish, making a BAUV exceptionally difficult to detect and identify [11]. However, to date the noise signatures of swimming animals and BAUVs have not been adequately quantified, nor have the trade-offs between the noise signature and performance of unsteady swimmers been examined. The competition between acoustic stealth and fluid dynamic performance also emerges for biologically-inspired aerial vehicles [12, 13, 14, 15] that seek novel means to suppress flow-noise generation. Most sounds produced by fish result from aggressive actions, spawning, or reproductive behavior [16]. The noise made during these actions can be categorized into two broad categories: active acoustic signaling through morphological structures and passive noise generated during swimming, feeding, or respiration. The actively-produced noises include vocal calls, swimbladder motions, and drumming [17, 18]. Active fish signaling is not of interest here, as there exist quantified data for several species [17]. However, fish produce a vortex wake by simply oscillating their fins during swimming, which is a well-known source of noise [16]. Yet, the locomotive noise of fish during steady-state rectilinear swimming remains relatively unquantified. Fish locomotion produces low-amplitude hydrodynamic noise that is challenging to record reliably and has received limited attention in the literature [16]. 
Instead of examining live fish, numerical tools can be used to simulate the motions of fish and their resulting unsteady vortical wakes to estimate their noise production. For instance, one approach to simulating the unsteady flow field around a swimming fish is the boundary element method (BEM) [19]. This potential flow method discretizes surfaces representing shear layers in the flow, i.e. a solid body surface and the wake, into a collection of elements [20, 21]. This method approximates the flow as incompressible, inviscid, and irrotational (except on the elements). Additionally, the reduction of the solution domain to solve only for the strengths of elements on the boundaries enables the rapid simulation of unsteady flow phenomena. Acoustic BEMs have been used to determine the far-field noise of an airfoil in a turbulent flow, such as by Glegg and Devenport [22]. Their method predicts the frequency-domain acoustic far field based on a form of the surface loading due to the energy spectrum of the turbulent boundary layer immediately upstream of the trailing edge. However, this method is restricted to single airfoils in steady flows and does not account for interactions that would be present in many-bodied systems of swimmers or fliers. Once an unsteady flow field is simulated, an acoustic analogy is a common and effective tool to post-process the flow data and determine the noise production [23, 24]. Lighthill [25] first developed an acoustic analogy by recasting the Navier-Stokes equations into a wave equation in terms of perturbations of the fluid density. A nearly ubiquitous approach to predicting flow noise, Lighthill’s acoustic analogy has become central to many aeroacoustic studies [23, 24, 26, 27]. For example, Powell [28] adapted the Lighthill analogy to consider all of the unsteady flow fluctuations as vorticity and designated this vorticity as the forcing function for the acoustic system.
Vorticity also plays a central role in the noise theory by Howe [29] that is adapted from Lighthill’s formulation. Motivated by these observations, the present study makes two principal technical advancements. First, a coupled flow-acoustic BEM is developed to predict the noise generation and performance of hydrofoils in unsteady motion that can also account for multi-body interactions, such as those that occur in fish schools. Second, the novel flow-acoustic BEM will be used to examine the performance and noise production of a hydrofoil in combined heaving and pitching motions, which acts as a simple, two-dimensional representation of an oscillating fish caudal fin. This paper is laid out across three main sections. The first section describes the potential flow and acoustic boundary element methods used in the current study. Next, the coupling between the potential flow and acoustic solvers is presented along with validations of the solver against available experimental data and asymptotic solutions of vortex-body interaction noise. Finally, the performance and noise production of a combined heaving and pitching hydrofoil are examined with respect to non-dimensional frequency, amplitude, and heave-to-pitch ratio. ## 2 Potential Flow Boundary Element Method This section details the two-dimensional unsteady boundary element method used in this work. The potential flow solver is an adaptation of the panel method described by Willis et al. [30]. The inviscid flow around hydrofoils can be found by solving the Laplace equation with an imposed no-penetration boundary condition on the surface, $\nabla\phi\cdot\mathbf{\hat{n}}=0~{}~{}{\rm on}~{}~{}S_{\rm{b}},$ (1) where $\mathbf{\hat{n}}$ is the outward unit normal of the surface. The boundary integral equation integrates the effects of the combined distribution of sources and doublets on the body surface $S_{\rm{b}}$ and doublets on edge panel $S_{\rm{e}}$, with vortex particles in the wake $S_{\rm{w}}$ (cf. Fig. 1).
The scalar potential may be written as $\phi(\mathbf{t})=\int_{S_{\rm{b}}}\left[\sigma(\mathbf{s})G(\mathbf{t},\mathbf{s})-\mu(\mathbf{s})\hat{\mathbf{n}}\cdot\nabla G(\mathbf{t},\mathbf{s})\right]\rm{d}S-\int_{S_{\rm{e}}}\mu_{\rm e}(\mathbf{s})\hat{\mathbf{n}}\cdot\nabla G(\mathbf{t},\mathbf{s})\rm{d}S,$ (2) where $\mathbf{s}$ is a source location, $\mathbf{t}$ is the observer location, and $G(\mathbf{t},\mathbf{s})=\frac{1}{2\pi}\ln|\mathbf{t}-\mathbf{s}|$ is the two-dimensional Green’s function for the Laplace equation. The source and doublet strengths are defined respectively as $\displaystyle\sigma=\hat{\mathbf{n}}\cdot(\mathbf{U}+\mathbf{U}_{\rm{rel}}-\mathbf{U}_{\omega}),$ (3) $\displaystyle\mu=\phi_{I}-\phi,$ (4) where $\mathbf{U}_{\omega}$ is the velocity induced by the vortex particles in the field, $\mathbf{U}$ is the body velocity, $\mathbf{U}_{\rm rel}$ is the velocity of the center of each element relative to the body-frame of reference, and $\phi_{I}$ is the interior potential of the body. At each time step, vorticity is defined at the trailing edge to satisfy the Kutta condition. The trailing edge panel is assigned the potential difference between the upper and lower panels at the trailing edge of the foil, $\mu_{\rm e}=\mu_{\rm{upper}}-\mu_{\rm{lower}}$. Application of the Kutta condition ensures that the vorticity at the trailing edge of the hydrofoil is zero. The implicit Kutta condition for the two-dimensional flow solver is similar to the methods described in [30, 31]. The iterative implicit Kutta condition employs Newton’s method to define the length and angle of the trailing edge panel in order to minimize the pressure difference across the trailing edge. 
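The source and doublet influence kernels in Eq. (2) follow directly from the two-dimensional Green’s function. The sketch below is a minimal illustration (the function names are ours, not from the solver of Willis et al.), with the doublet kernel taken as the source-normal derivative of $G$:

```python
import numpy as np

def green_laplace(t, s):
    """G(t, s) = ln|t - s| / (2*pi): 2D free-space Laplace Green's function."""
    return np.log(np.linalg.norm(np.asarray(t) - np.asarray(s))) / (2.0 * np.pi)

def doublet_kernel(t, s, n_hat):
    """n_hat . grad_s G(t, s): influence of a unit doublet oriented along n_hat."""
    r = np.asarray(t) - np.asarray(s)
    # Differentiating with respect to the source point s gives -r / (2*pi*|r|^2).
    return np.dot(np.asarray(n_hat), -r) / (2.0 * np.pi * np.dot(r, r))
```

Away from the source point $G$ is harmonic, which a five-point finite-difference Laplacian confirms; a panel code assembles these kernels into the influence matrix for the unknown strengths.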
The evolution of vorticity in the domain is governed by $\frac{\partial\boldsymbol{\omega}}{\partial t}+\mathbf{U}\cdot\nabla\boldsymbol{\omega}=\boldsymbol{\omega}\cdot\nabla\mathbf{U},$ (5) where the vorticity field $\boldsymbol{\omega}$ is represented by discrete, radially-symmetric, desingularized Gaussian vortex particles. The induced velocity of the vortex blobs is determined by the regularized Biot-Savart law [32], $\mathbf{u}(\mathbf{t},t)=\sum_{i=1}^{N}\frac{\Gamma_{i}}{2\pi}\frac{\hat{\mathbf{z}}\times(\mathbf{t}-\mathbf{s}_{i})}{|\mathbf{t}-\mathbf{s}_{i}|^{2}}\left(1-\exp\left(\frac{-|\mathbf{t}-\mathbf{s}_{i}|^{2}}{2{r_{\rm{cut}}}^{2}}\right)\right),$ (6) where $\Gamma_{i}$ is the circulation of the $i$th vortex particle located at $\mathbf{s}_{i}$, and $r_{\rm{cut}}$ is the cut-off radius. Following the work of Pan et al. [33], the cut-off radius is set to $r_{\rm{cut}}=1.3\Delta t$ for time step $\Delta t$ to ensure that the wake particle cores overlap and a thin vortex sheet is shed. The evolution of the vortex particle position is updated using a forward Euler scheme [30], $\mathbf{x}(t+\Delta t)=\mathbf{x}(t)+\mathbf{u}(\mathbf{x}(t),t)\Delta t.$ (7) The use of discrete vortices to represent the wake instead of panels alters the process of shedding vorticity into the wake in comparison to more classical source-doublet methods [21]. Two edge panels are set behind a foil. The first edge panel, set with the empirical length of $l_{\rm{panel}}=0.4U_{\infty}\Delta t$ [21], satisfies the Kutta condition at the trailing edge. Next, the buffer panel is attached to the edge panel and stores information about the previous time step. Figure 1 illustrates the distinction of the source/doublet panels across a foil, the arrangement of the trailing-edge and buffer panels, and how the vortex particle wake behaves behind the body. Figure 1: Schematic of a fish with a blown up section depicting the propulsor as the discrete geometry for the BEM. A propulsor of chord length $c$ is discretized with source/doublet elements (blue lines with endpoints indicated by black circles). 
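The induced-velocity sum of Eq. (6) and the Euler update of Eq. (7) can be sketched as follows. This is a minimal illustration with names of our choosing; the kernel is the standard Gaussian-regularized Biot-Savart form, in which the in-plane perpendicular direction $(-r_y, r_x)$ multiplies the scalar factor of Eq. (6):

```python
import numpy as np

def induced_velocity(targets, particles, circulations, r_cut):
    """Velocity induced at targets by 2D Gaussian-regularized vortex particles."""
    u = np.zeros_like(targets, dtype=float)
    for s, gam in zip(particles, circulations):
        r = targets - s                        # displacement vectors, shape (N, 2)
        r2 = np.einsum('ij,ij->i', r, r)       # squared distances |t - s|^2
        # Regularized factor: Gamma/(2*pi*|r|^2) * (1 - exp(-|r|^2 / (2*r_cut^2)))
        k = gam / (2.0 * np.pi * r2) * (1.0 - np.exp(-r2 / (2.0 * r_cut**2)))
        u += k[:, None] * np.stack([-r[:, 1], r[:, 0]], axis=1)
    return u

def advect(positions, velocities, dt):
    """Forward Euler position update, Eq. (7)."""
    return positions + velocities * dt
```

For a single particle of circulation $\Gamma=2\pi$ and a target one unit away, the swirl speed approaches the point-vortex value $\Gamma/(2\pi r)=1$ once the cut-off radius is small compared to the separation.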
The edge and buffer panels (green lines with endpoints indicated by green circles) are connected to the trailing edge of the propulsor. The vortex particle wake (blue/red circles) is shed from the edge/buffer panels. The vortex particles’ influence on the body is accounted for in the definition of the source strength $\sigma$. The vortex particle induced velocity also augments the pressure calculation put forth by Katz and Plotkin [21]. The surface pressure is determined by $\frac{P_{\infty}-P(x)}{\rho}=\left.\frac{\partial\phi_{\rm{wake}}}{\partial t}\right|_{\rm{body}}+\left.\frac{\partial\phi}{\partial t}\right|_{\rm{body}}-(\mathbf{U}+\mathbf{u}_{\rm{rel}})\cdot(\nabla\phi+\mathbf{U}_{\omega})+\frac{1}{2}|\nabla\phi+\mathbf{U}_{\omega}|^{2},$ (8) where $\partial\phi_{\rm{wake}}/\partial t=\Gamma\dot{\theta}/(2\pi)$ is the time rate of change due to a vortex particle with circulation $\Gamma$ at an angle $\theta$ from the observation point. Equation (8) is similar to the form put forth by Willis et al. [30], but here $\partial\phi_{\rm{wake}}/\partial t$ is the positional change of a vortex with respect to a panel and does not require the solution of a secondary system to find the influence of wake vortices onto the body surface. The appendix presents validation of this methodology against available theoretical and numerical results. ## 3 Acoustic Boundary Element Method The Helmholtz wave equation for a homogeneous medium and its boundary conditions are $\displaystyle\nabla^{2}\phi+\kappa^{2}\phi=0,$ (9) $\displaystyle\phi(\mathbf{x})=d_{n}~{}~{}{\rm on}~{}~{}S_{\rm{b}},$ (10) $\displaystyle\frac{\partial\phi}{\partial n}(\mathbf{x})=g_{n}~{}~{}{\rm on}~{}~{}S_{\rm{b}},$ (11) where $\phi$ is the time-independent velocity potential, $\kappa=2\pi f/c_{0}$ is the acoustic wavenumber, $c_{0}$ is the speed of sound, $d_{n}$ is a Dirichlet boundary condition, and $g_{n}$ is a Neumann boundary condition. Application of Green’s second identity to Eq. 
(9) moves all of the information of the system onto the boundary: $a(\mathbf{t})\phi(\mathbf{t})=\int_{S_{\rm{b}}}\left[\frac{\partial G(\mathbf{t},\mathbf{s})}{\partial n}\phi(\mathbf{s})-G(\mathbf{t},\mathbf{s})\frac{\partial\phi}{\partial n}(\mathbf{s})\right]\rm{d}S_{\rm{b}},$ (12) where $a(\mathbf{t})=\frac{1}{2}$ on the boundary and $a(\mathbf{t})=1$ in the exterior field. The two-dimensional acoustic Green’s functions are $\displaystyle G(\kappa,\mathbf{t},\mathbf{s})=\frac{\textrm{i}H_{0}^{(1)}(\kappa|\mathbf{t}-\mathbf{s}|)}{4},$ (13) $\displaystyle\frac{\partial G}{\partial n}(\kappa,\mathbf{t},\mathbf{s})=\frac{-\textrm{i}\kappa H_{1}^{(1)}(\kappa|\mathbf{t}-\mathbf{s}|)}{4},$ (14) which correspond respectively to an acoustic monopole and dipole, and $H_{n}^{(1)}$ is the Hankel function of the first kind of order $n$. In the remainder of this work, $G(\kappa,\mathbf{t},\mathbf{s})=G(\mathbf{t},\mathbf{s})$, as the wavenumber is constant for each solution. Differentiation of Eq. (12) with respect to the outward normal of the boundary produces a quadrupole system, $\frac{\partial\phi(\mathbf{t})}{\partial n}=\int_{S_{\rm{b}}}\frac{\partial^{2}G(\mathbf{t},\mathbf{s})}{\partial n_{\mathbf{t}}\partial n_{\mathbf{s}}}\phi(\mathbf{s})\rm{d}S_{\rm{b}},$ (15) where $n_{\mathbf{s}}$ and $n_{\mathbf{t}}$ are the outward normals at the source and observer, respectively. A combination of Eqs. (12) and (15) for points on the boundary arrives at the Burton-Miller formulation [27, 34]: $\int_{S_{\rm{b}}}\left(\frac{\partial G(\mathbf{t},\mathbf{s})}{\partial n}\phi(\mathbf{s})+\frac{1}{2}\phi(\mathbf{s})\right)\rm{d}S_{\rm{b}}+\beta\int_{S_{\rm{b}}}\frac{\partial^{2}G(\mathbf{t},\mathbf{s})}{\partial n_{\mathbf{t}}\partial n_{\mathbf{s}}}\phi(\mathbf{s})\rm{d}S_{\rm{b}}=\phi(\mathbf{t})+\beta\frac{\partial\phi(\mathbf{t})}{\partial n},$ (16) where $\beta=\textrm{i}/\kappa$ is a chosen coupling parameter [27]. 
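The monopole and dipole kernels of Eqs. (13) and (14) can be evaluated with standard Hankel-function routines. The minimal sketch below assumes SciPy is available (the function names are ours) and treats the dipole as the radial derivative of the monopole, to be multiplied by the direction cosine $\partial|\mathbf{t}-\mathbf{s}|/\partial n$ for a given surface normal:

```python
import numpy as np
from scipy.special import hankel1

def monopole(kappa, r):
    """Eq. (13): G = i * H0^(1)(kappa*r) / 4, the 2D Helmholtz Green's function."""
    return 0.25j * hankel1(0, kappa * r)

def dipole_radial(kappa, r):
    """Eq. (14) form: dG/dr = -i * kappa * H1^(1)(kappa*r) / 4."""
    return -0.25j * kappa * hankel1(1, kappa * r)
```

Since $\mathrm{d}H_{0}^{(1)}/\mathrm{d}x=-H_{1}^{(1)}(x)$, the dipole kernel is exactly the radial derivative of the monopole, which a central difference confirms numerically.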
The frequency domain problem (16) is converted into a transient solution by the application of the convolution quadrature method, which is detailed in the next section. ### 3.1 Time discretization The frequency potential operators in Eq. (16) are evaluated as convolution integrals. Green’s functions in the frequency domain problem are Laplace transforms of the retarded-time Green’s function, which permit their convolution with the potential field. The potential field is then evaluated by the convolution quadrature method put forth by Lubich [35]. A representative example of a convolution system is $\displaystyle\int_{0}^{t}f(t-\tau)\phi(\tau)d\tau=g(t).$ Here $f$ represents a retarded-time operator, a characteristic differential operator of the transient wave equation, $\phi$ is some known potential distribution, and $g(t)$ is a transient forcing function. The interested reader may consult Hassell and Sayas [36] for a detailed explanation of the convolution quadrature method. The retarded-time operator is a convolution that can be discretized by splitting the time domain into $N+1$ time steps of equal spacing, yielding $\Delta t=T/N$ and $t_{n}=n\Delta t$ for $n=0,1,\ldots,N$. The discrete convolution can be viewed as a weighted sum of the values of $\phi$ at discrete times: $\displaystyle(F\phi)^{\Delta t}(t_{n})=\sum_{j=0}^{n}w_{n-j}^{\Delta t}(F)\phi^{\Delta t}(t_{j}),$ (17) where $F$ represents the Laplace transform of the $f$ operator, and the superscript $\Delta t$ indicates the weight for a specific time-step size. 
The series expansion can be arranged to solve for the convolution weights, $w$: $\displaystyle F\left(\frac{\gamma(\zeta)}{\Delta t}\right)=\sum_{n=0}^{\infty}w_{n}^{\Delta t}\zeta^{n},\quad|\zeta|<1,$ (18) $\displaystyle w_{j}^{\Delta t}=\frac{1}{2\pi i}\oint_{C}\frac{F(\frac{\gamma(\zeta)}{\Delta t})}{\zeta^{j+1}}d\zeta,$ (19) where $C$ is a circle of radius $0<\lambda<1$ centered at the origin. A second-order backwards difference function, $\gamma(\zeta)=(1-\zeta)+\frac{1}{2}(1-\zeta)^{2}$, is used to define the spacing of the integration. A review of other integration methods that can be incorporated into the convolution quadrature method is presented in Hassell and Sayas [36]. Employing a scaled inverse transform, the weights become $\displaystyle w_{j}^{\Delta t,\lambda}(F)=\frac{\lambda^{-j}}{N+1}\sum_{l=0}^{N}F(s_{l})\zeta_{N+1}^{lj},$ (20) where $\zeta_{N+1}=\exp\left(\frac{2\pi i}{N+1}\right)$ is the discrete Fourier transform scale, and $s_{l}=\gamma(\lambda\zeta_{N+1}^{-l})/\Delta t$ is the accompanying time-dependent complex wavenumber. The value of $s_{l}$ is different for each time step and provides the link between the frequency-domain solver and a transient boundary integral equation. For this formulation, $\lambda=\Delta t^{3/N}$ is selected based on the error analysis of Banjai and Sauter [37]. Placing (20) into the boundary value problem (16) yields a system of $N+1$ equations, $\displaystyle\frac{\lambda^{-j}}{N+1}\sum_{l=0}^{N}F(s_{l},\mathbf{x})\hat{\phi_{l}}(\mathbf{x})\zeta_{N+1}^{lj}=\frac{\lambda^{-j}}{N+1}\sum_{l=0}^{N}\hat{g_{l}}\zeta^{lj}_{N+1},$ (21) where $F$ is the linear combination of operators on the left hand side of Eq. (16). Here $\hat{g_{l}}$ is the transformed discrete representation of the mixed boundary condition. The inverse discrete Fourier transform of the convolution is $\displaystyle\hat{\phi_{l}}=\sum_{j=0}^{N}\lambda^{j}\phi_{j}^{\lambda}\zeta_{N+1}^{-lj},$ (22) which produces the transient solution. 
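The weight computation of Eq. (20) admits a short numerical sketch (names ours). As a sanity check under an assumption of our choosing, take $F(s)=1/s$, the Laplace symbol of time integration: convolving the weights with $g(t)=1$ should then approximate $\int_{0}^{t}\mathrm{d}\tau=t$:

```python
import numpy as np

def cq_weights(F, N, dt):
    """Convolution quadrature weights, Eq. (20), with the BDF2 symbol gamma."""
    gamma = lambda z: (1.0 - z) + 0.5 * (1.0 - z) ** 2   # second-order backward difference
    lam = dt ** (3.0 / N)                                # contour radius, per Banjai & Sauter
    l = np.arange(N + 1)
    zeta = np.exp(2j * np.pi / (N + 1))
    s = gamma(lam * zeta ** (-l)) / dt                   # complex wavenumbers s_l
    j = np.arange(N + 1)
    dft = zeta ** np.outer(j, l)                         # matrix of zeta^(l*j)
    w = lam ** (-j.astype(float)) / (N + 1) * (dft @ F(s))
    return w.real                                        # weights are real for real operators

N, T = 64, 1.0
w = cq_weights(lambda s: 1.0 / s, N, T / N)
u = np.convolve(w, np.ones(N + 1))[: N + 1]              # u_n = sum_j w_{n-j} * 1
```

The partial sums $u_n$ track $t_n=n\Delta t$ to within the startup error of the underlying BDF2 rule, illustrating how each frequency-domain solve at a wavenumber $s_l$ contributes to the transient answer.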
In summary, the convolution quadrature method [37] discretizes a transient wave problem into a system of frequency-domain (Helmholtz) wave equations that are uncoupled in time. This discretization allows $N+1$ independent solutions of Eq. (16) in the frequency domain using wavenumbers $s_{l}$ that are generated via the convolution quadrature method. The time-domain solution is recovered by applying the inverse Fourier transform (22). ## 4 Acoustic Analogies The Lighthill acoustic analogy [25] is an exact rearrangement of the Navier-Stokes equations, where the resulting wave equation is forced by the so-called Lighthill tensor, $T_{ij}$. For example, $T_{ij}$ may be used as a forcing function in Eq. (9). The flow is assumed inviscid, without thermal losses, and at low Mach number $(M^{2}\ll 1)$. The form of the Lighthill tensor under these conditions is reduced to the Reynolds stress contribution, $T_{ij}\approx\rho u_{i}u_{j}$ [38]. Taking two spatial derivatives of this tensor yields the quadrupole source for Lighthill’s acoustic analogy. This quadrupole is related to the vorticity field by $\frac{\partial^{2}(u_{i}u_{j})}{\partial x_{i}\partial x_{j}}=\nabla\cdot(\boldsymbol{\omega}\times\mathbf{u})+\nabla^{2}\left(\frac{1}{2}u^{2}\right),$ (23) where $u^{2}=\mathbf{u}\cdot\mathbf{u}$. The Powell acoustic analogy, a derivative of the Lighthill analogy, uses this form of the velocity field as its forcing function [28]. The Powell acoustic analogy allows the direct relation of the vorticity defined by the flow solver to be the forcing function of the acoustic solver. The present flow solver defines the vorticity that is bound to the body and shed into the wake. Other vortical noise sources, such as the broadband content in a turbulent boundary layer, may also be incorporated but are neglected in the present work to focus on the acoustic interactions involving incident and shed vorticity. 
The Powell acoustic analogy states that in free space the forcing of the wave equation is a function of the vorticity in the field, $\displaystyle\frac{1}{c_{0}^{2}}\partial_{t}^{2}P-\nabla^{2}P=\rho\,\nabla\cdot(\boldsymbol{\omega}\times\mathbf{u}),$ (24) $\displaystyle\frac{\partial G}{\partial n}=0\quad{\rm on}\quad S_{\rm{b}}.$ (25) Using a Green’s function solution, and applying an integration by parts, the pressure in the field is determined by $P(\mathbf{t},t)=\rho\int_{S_{\rm{b}}}(\boldsymbol{\omega}\times\mathbf{u})\cdot\frac{\partial G}{\partial n_{\mathbf{s}}}\rm{d}S_{\rm{b}},$ (26) where $\partial G/\partial n_{\mathbf{s}}$ is the source-normal derivative of the two-dimensional potential flow Green’s function. The pressure integral in Eq. (26) is applicable regardless of whether or not a solid body is present [39]. The use of the potential flow Green’s function imposes the vortical acoustic loading instantaneously onto a solid body, as in the asymptotic methods of Kambe and Kao [39, 40]. Also, the use of the flow Green’s function instead of the retarded potential acoustic Green’s function [39], i.e., $G=\frac{H(t-|\mathbf{t}|/c_{0})}{2\pi\sqrt{t^{2}-|\mathbf{t}|^{2}/c_{0}^{2}}}$, removes any singularities in time in addition to the slow decay tail created by the Heaviside function found in the numerator. For all of the flow scenarios observed in this work, the small products of the acoustic wavenumber with the chord length (the Helmholtz number) and with the distance between dominant wake vortices and the foil in low-Mach-number locomotion ensure acoustically compact vortex-body interactions [38]. Equation (26) provides the pressure at an arbitrary point in space, for instance the collocation point of a discrete foil. This approach is sufficient to impose a Dirichlet boundary condition, but Eq. (25) is a no-flux Neumann boundary condition. 
The Neumann boundary condition can be satisfied by arranging Euler’s equation to define the acoustic dipole potential needed to guarantee no fluid flux through the surface of the discrete geometry, $\rho\frac{\partial\mathbf{u}}{\partial t}=-\nabla P.$ (27) In Eq. (27) the velocity is taken to be the normal induced velocity from each discrete vortex in the domain, including the discrete vortex particles that comprise the wake and vortex values distributed along the foil found via Eq. (2). A rearrangement of Eq. (27) that considers the outward normal pressure on the foil results in $\frac{\partial P}{\partial n}(\mathbf{t})=-\rho\sum_{i=0}^{N}\frac{\partial(\mathbf{u}_{i}\cdot\hat{\mathbf{n}}(\mathbf{t}))}{\partial t}.$ (28) The pressure in Eq. (26) and its normal derivative in Eq. (28) are the boundary conditions to the Burton-Miller acoustic BEM formulation in Eq. (16). ### 4.1 Validation of Acoustic Analogy for Vortex-Body Interactions The Powell acoustic analogy is now validated as a suitable forcing function for the one-way coupled flow-acoustic BEM. First, the canonical problem by Crighton [41] whereby a line vortex generates noise as it passes round a half-plane is used to verify the flow-acoustic BEM for edge scattering. Given a point vortex that advects around a half-plane lying at $y=0$ with the edge located at the origin, the vortex path is described analytically by $r=\ell\sqrt{1+\left(\frac{Ut}{\ell}\right)^{2}},\quad~{}\quad\theta=2\tan^{-1}\left(-\frac{Ut}{\ell}\right),$ where $r$ and $\theta$ are the polar coordinates of the vortex position. The characteristic vortex velocity is $U=\Gamma/(8\pi\ell)$, with $\ell$ being the distance of closest approach of the vortex to the trailing edge. 
Howe [38] (see also [29]) determined the time-varying magnitude of the acoustic pressure produced by the vortex motion near the edge using an approximate Green’s function for a half plane with the observer in the acoustic far field: $\displaystyle P(\mathbf{t},t)$ $\displaystyle\approx\frac{\rho\Gamma^{2}}{4\pi\ell^{2}}\left(\frac{\ell}{|\mathbf{t}|}\right)^{\frac{1}{2}}\sin\left(\frac{\theta}{2}\right)\left[\frac{\Gamma t/8\pi\ell^{2}}{[1+(\Gamma t/8\pi\ell^{2})^{2}]^{5/4}}\right]_{t-\frac{|\mathbf{t}|}{c_{0}}},~{}\mathbf{|t|}$ $\displaystyle\rightarrow{}\infty.$ (29) The brackets indicate evaluation at the retarded time. Note that this model neglects any vortex shedding by the edge. Figure 2 illustrates the vortex path near the half plane and presents a comparison of the analytical acoustic response from Eq. (29) against the numerical results from the BEM developed in §§2 and 3. The solution was created by passing a vortex of strength $\Gamma$ = 1 m2 s-1 to within a distance of $\ell$ = 0.25 m of the trailing edge in a medium with density $\rho$ = 1 kg m-3. The observer location is $|\mathbf{t}|$ = 50 m above the trailing edge. The half plane was approximated with a 10 m flat plate discretized with 512 boundary elements in a cosine distribution, which ensures a denser concentration of elements near the edge. The doubling of elements on the body from 128 to 256 elements produced less than 0.1% change in the acoustic solution as observed at $|\mathbf{t}|$. Agreement of the acoustic responses is seen when the vortex is near the trailing edge, i.e. $|Ut/\ell|<1$, and the system begins to diverge slightly outside of that range. The divergence of the solution outside of this time is likely due to the limitation in the Howe solution that the loading is only at the trailing edge, whereas the BEM implicitly solves for loading along the flat plate as the vortex continues to travel near the body. 
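Howe’s far-field expression in Eq. (29) is straightforward to evaluate numerically; the sketch below (function name ours) works in the scaled time $s=\Gamma t/8\pi\ell^{2}$ evaluated at the retarded time:

```python
import numpy as np

def howe_edge_pressure(t, obs_r, theta, Gamma, ell, rho=1.0, c0=343.0):
    """Far-field pressure of a vortex rounding a half-plane edge, Eq. (29)."""
    s = Gamma * (t - obs_r / c0) / (8.0 * np.pi * ell**2)   # retarded, scaled time
    amp = rho * Gamma**2 / (4.0 * np.pi * ell**2)
    return amp * np.sqrt(ell / obs_r) * np.sin(theta / 2.0) * s / (1.0 + s**2) ** 1.25
```

The pulse shape $s/(1+s^{2})^{5/4}$ is odd in the retarded time and peaks where $|s|=\sqrt{2/3}$, which places the strongest radiation near closest approach, consistent with the agreement window $|Ut/\ell|<1$ quoted above.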
The agreement between the BEM and Howe’s solution verifies the accuracy of the acoustic portion of the BEM in modeling trailing edge scattering. The fully coupled solver including the unsteady potential flow portion can be examined further for vortex-body interaction cases. Figure 2: Sound produced by vortex motion near a half plane. (a) Vortex path round the trailing edge of a half plane. The closest position of the vortex to the body $\ell$ occurs at $Ut/\ell=0.$ (b) Acoustic response computed by the BEM (blue circles) and from Howe’s solution (red line). The experimental work of Booth [42] details acoustic scattering due to vortex-body interaction. The matched asymptotic method of Kao [40] uses this experimental study to validate its analysis. Kao asymptotically matches the acoustic loading from a potential flow solver to an outer far-field acoustic solution. The selected vortex-body interaction problem sets a vortex upstream of a NACA 0012 airfoil, with a chord of $c=0.2032$ m. The vortex has a circulation of $\Gamma=0.52$ m2 s-1 and advects in a freestream of speed $U_{\infty}=4.7$ m s-1 at a vertical displacement of $h=0.152c$ from the foil centerline (cf. Fig. 3). This particular offset distance was selected because at other distances in the experimental study the vortex impinges on the body and breaks down. The fluid medium has a speed of sound of $c_{0}=343$ m s-1 and a density of $\rho=1.225$ kg m-3. The acoustic response is then found in front of the airfoil at $\mathbf{t}_{1}=(-100c,0)$, as shown in the problem schematic at the top of Fig. 3. Figure 3: Acoustic emission due to a single vortex advecting past a NACA 0012 airfoil. The vortex circulation $\Gamma$, chord length $c$, observation point $\mathbf{t}$, freestream velocity $U_{\infty}$, and offset heights $h$ are different for each of the two validation cases. 
The response of $(a)$ is observed 100 chord lengths in front of the foil $\mathbf{t}_{1}=(-100c,0)$, while the response of $(b)$ is observed 50 chord lengths above the airfoil $\mathbf{t}_{2}=(0,50c)$. The black lines represent the experimental results of Booth [42], the red circles represent the matched asymptotic solution of Kao [40], and the blue line is the result of the coupled potential flow and transient acoustics BEM put forward in this work. Figure 3$(a)$ compares the experimental vortex-body interaction sound results from the work of Booth, the matched asymptotic method of Kao, and the one-way coupled potential flow and acoustic BEM presented in this study. The matched asymptotic method and the flow-acoustic BEM have qualitatively similar responses. The experimental acoustic response is the same order of magnitude as the other methodologies with similar qualitative trends, albeit with more fluctuations in its signal. The leading edge acoustic response occurs at $t\approx 0.25$ s. It can be seen that the amplitude of this interaction is quantitatively similar for the theoretical and BEM approaches, while qualitatively the slope of the leading-edge response as it approaches the minimum pressure from the flow-acoustic BEM is steeper than the matched asymptotic method response. The trailing-edge acoustic response occurs at $t\approx 0.30$ s, where the peak predicted response of $P\approx-0.0015$ Pa from the matched asymptotic solution is stronger than the BEM prediction of $P\approx-0.0005$ Pa. The increased pressure response at the trailing edge from the matched asymptotic method can be affected by the explicit Kutta condition applied and the manner in which the wake evolves behind the foil. Howe [43] stated that, as a vortex passes the trailing edge of a body, the vorticity shed into the wake tends to cancel the effect of the incoming vorticity and mitigates the noise generation. 
The Kutta condition in the potential flow BEM could be implicitly imposing the mechanism described by Howe, which would help explain the weaker acoustic response predicted by the flow-acoustic BEM framework. Additionally, the acoustic response in Fig. 3$(a)$ is measured in front of the foil, a location where the acoustic pressure would be small in comparison to other measurement locations. Further verification of the flow-acoustic BEM is accomplished by measuring the acoustic response where it has its maximum value. Figure 3$(b)$ presents the measurement of the acoustic response above the foil where the peak acoustic pressure occurs. The flow scenario has a vortex with circulation $\Gamma=$ 0.1 m2 s-1 that is released five chord lengths upstream with vertical offset $h=0.1c$ above the center of a NACA 0012 foil with chord $c=1$ m. The case has a freestream velocity $U_{\infty}=1$ m s-1, a sound speed of $c_{0}=5$ m s-1, and density $\rho$ = 1 kg m-3. The acoustic response was measured fifty chord lengths above the leading edge of the foil at $\mathbf{t}_{2}=(0,50c)$. The matched asymptotic method predicts a leading-edge acoustic response at $t\approx 0.25$ s, which occurs before the flow-acoustic BEM response at $t\approx 0.35$ s. However, both approaches exhibit a similar magnitude of response. The maximum acoustic responses of both systems are found at $t\approx 0.45$ s with a pressure of $P=0.013$ Pa. The trailing-edge responses at $t\approx 0.5$ s also exhibit similar magnitudes for both approaches. The flow-acoustic BEM predicts the same order-of-magnitude acoustic response as the experiments of Booth [42]. In addition, the flow-acoustic BEM produces qualitatively and quantitatively similar acoustic responses to the matched asymptotic method of Kao [40]. 
## 5 Acoustic Emission from Biological Swimming Many fish swim by undulating their bodies and oscillating their caudal or tail fins, which are in many cases responsible for the majority of their thrust production [2, 44]. A common and simple representation of a biological swimmer is to neglect the body and only consider the caudal fin as a combined heaving and pitching hydrofoil [45], which is also the case in the present study. Fish-like locomotion via a traveling wave is shown in Fig. 4$(a)$. The motion of the rear of the foil is tracked and then treated as a discrete propulsive foil, which is denoted in the figure as a solid black body. Figure 4$(b)$ shows how a combined heaving and pitching motion of a rigid body is used as a proxy for the entire traveling wave system. The rigid body pitches about its leading edge, as that is where it would be connected to the fish. The heaving and pitching motion and the peak-to-peak amplitude of the foil are described by $\displaystyle h(t)=h_{0}\sin(2\pi ft),$ (30) $\displaystyle\theta(t)=\theta_{0}\sin(2\pi ft+\phi),$ (31) $\displaystyle A(t^{*})=2\,\max\{h(t)+c\,\sin\left[\theta(t)\right]\},$ (32) $\displaystyle h^{*}=2\,h(t^{*})/A(t^{*}),$ (33) $\displaystyle\theta^{*}=2\,c\,\sin\left[\theta(t^{*})\right]/A(t^{*}),$ (34) $\displaystyle A^{*}=A(t^{*})/c,$ (35) where $f$ is the frequency of motion, $t$ is time, and $h_{0}$ and $\theta_{0}$ are the maximum heaving amplitude and maximum pitching angle, respectively. The phase delay between the heave and pitch signals is $\phi=\pi/2$. The peak-to-peak amplitude is $A(t^{*})$, and $t^{*}$ is the time at which the foil reaches its peak amplitude. Given $A(t^{*})$ and $\theta_{0}$, $h_{0}$ can be calculated by a nonlinear equation solver. 
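Eqs. (30)-(35) can be implemented directly. The sketch below (names ours) samples one period of motion to locate the peak-excursion time $t^{*}$; the normalized heave and pitch contributions then sum to one by construction:

```python
import numpy as np

def amplitude_split(h0, theta0, f, c, phi=np.pi / 2, n=20001):
    """Evaluate Eqs. (30)-(35): peak-to-peak amplitude A(t*), h*, and theta*."""
    t = np.linspace(0.0, 1.0 / f, n)                    # one period of motion
    h = h0 * np.sin(2.0 * np.pi * f * t)                # heave, Eq. (30)
    theta = theta0 * np.sin(2.0 * np.pi * f * t + phi)  # pitch, Eq. (31)
    excursion = h + c * np.sin(theta)                   # trailing-edge excursion
    i = np.argmax(excursion)                            # index of the peak time t*
    A = 2.0 * excursion[i]                              # Eq. (32)
    return A / c, 2.0 * h[i] / A, 2.0 * c * np.sin(theta[i]) / A  # A*, h*, theta*
```

Inverting this map, i.e., finding the $h_{0}$ that yields a target $A^{*}$ for a given $\theta_{0}$, is the nonlinear root-finding step mentioned above.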
Normalizing $h(t^{*})$ and $\theta(t^{*})$ by $A(t^{*})$ produces the identity $h^{*}+\theta^{*}=1.$ (36) Given a non-dimensional peak-to-peak amplitude $A^{*}$, the split between the heaving and pitching amplitudes is solely described by the non-dimensional heave-to-pitch ratio $h^{*}$. A purely pitching foil has a value of $h^{*}=0$, a purely heaving foil has a value of $h^{*}=1$, and $h^{*}=0.5$ represents a combined heaving and pitching motion where half of the total amplitude comes from pitching and the other half comes from heaving. All other values in the range $0<h^{*}<1$ represent combined heaving and pitching motions. The chord-based reduced frequency $f^{*}=fc/U_{\infty}$ is the non-dimensional quantity that describes the unsteadiness of the prescribed swimming motion. Figure 5 shows a typical reverse von Kármán wake structure of a foil operating at $h^{*}=0.5$, $f^{*}=0.5$, and $A^{*}=0.5$. The spacing of the wake is dictated vertically by the amplitude of motion and horizontally by the freestream speed and the frequency. The ratio of the vertical spacing to the horizontal spacing results in the Strouhal number, $St=fA(t^{*})/U_{\infty}$. The example wake in Fig. 5 has a Strouhal number of $St=0.25,$ which is within the range of typical fish swimming of $0.2<St<0.4$ [5, 46, 47]. The acoustic pressure presented for the remainder of this work is non-dimensionalized by the dynamic pressure, $P^{*}=2P_{\rm{acoustic}}/\rho U_{\infty}^{2}.$ Figure 4: Use of a pitching/heaving foil as a proxy to undulatory locomotion. Illustration $(a)$ shows a period of a traveling wave undulating across a NACA 0012 airfoil. The trailing edge of the foil is modeled as a separate entity that acts as a proxy to the caudal fin of a fish. The schematic $(b)$ tracks the motion of the ‘caudal fin’ separated from the body as a function of pitching and heaving. Figure 5: Typical wake of an unsteady swimmer in this study. 
Wake of a foil after several cycles of motion for the values of $h^{*}=f^{*}=A^{*}=0.5$. A foil of chord $c$ is placed in a freestream flow at speed $U_{\infty}$. The vertical spacing of the vortices in the wake is described as a function of the amplitude $A=c/2$, and the horizontal spacing is a function of the freestream speed and frequency, $2U_{\infty}/f$, for half of a cycle of motion. The flow-acoustic BEM is used to study how variations in the non-dimensional amplitude, reduced frequency, and non-dimensional heave-to-pitch ratio alter both the acoustic emissions and the hydrodynamic forces on the body. The ranges of variables and parameters used in the current study are presented in Table 1. The range of Strouhal numbers in the simulations is $0.0312\leq St\leq 1$. The reduced frequencies and Strouhal numbers in the current study cover the ranges associated with typical fish swimming [5, 47] and also extend to regions where biological systems may perform fast starts or rapid turns [2]. Input variables: $0.25\leq A^{*}\leq 1$, $0\leq h^{*}\leq 1$, $0.125\leq f^{*}\leq 1$. Input parameters: $U_{\infty}=1$ m s-1, $\rho=1000$ kg m-3, $c_{0}=1000$ m s-1, $c=1$ m. Table 1: Input variables and parameters used in the present study. ## 6 Results Figure 6 presents the near-field transient acoustic pressure for a foil with parameters of $h^{*}=0.5$, $f^{*}=0.5$, and $A^{*}=0.5$. The acoustic pressure is determined at discrete points on circles with radii of two to five chords away from the mid-chord of the foil at rest. A vertically-oriented pressure dipole is generally observed. The position of the maximum acoustic pressure shifts from front to back as the effective angle of attack increases, as seen in the snapshots from $t/T=0$ to $t/T=5/9$, where $T$ is the period of motion. 
The sign change of the dipole strength at the middle of the period ($t/T=4/9$) corresponds to the change in effective angle of attack going from negative to positive values. At time $t/T=4/9$, the transient acoustic pressure field has a quadrupole shape with two of its lobes directed behind the foil, which are an order of magnitude weaker than the response above and below the foil. Figure 6: Transient near-field acoustic pressure of a typical swimmer. The non-dimensional near-field acoustic pressure $P^{*}$ is shown for a foil operating at $h^{*}=0.5$, $f^{*}=0.5$, and $A^{*}=0.5$ at different instances of non-dimensional time $t/T$. The acoustic pressure is found at discrete points set around circles centered about the mid-chord when at rest. The circle radii range from two to five chord lengths. Figure 7$(a)$ shows the acoustic response at a single observation point 50 chords above the foil with a heave-to-pitch ratio of $h^{*}=0.375$, a reduced frequency of $f^{*}=0.25$, and over the entire range of amplitudes used in the current study. As expected, the pressure fluctuates harmonically in time with the same frequency as the foil motion. For a fixed heave-to-pitch ratio and reduced frequency, the amplitude of the acoustic pressure response increases with the amplitude of motion. A directivity plot of the root-mean-square (RMS) pressure ($P_{\text{RMS}}$) over three motion cycles for various reduced frequencies is shown in Fig. 7$(b)$. Regardless of the motion parameters selected, a dipole directivity is always observed. Moreover, the peak acoustic pressure can be observed to also increase with an increase in the reduced frequency. Since all of the variables produce self-similar dipole acoustic responses, the peak RMS acoustic pressure can be used as the single metric to describe the acoustic field. 
The peak pressure can then be scaled by the dynamic pressure: $P^{*}_{\rm{Peak}}=\frac{P_{\text{RMS}}}{\frac{1}{2}\rho U_{\infty}^{2}}.$ (37) Figure 7: Transient and root-mean-square (RMS) acoustic pressure levels of a typical swimmer. $(a)$ Transient acoustic pressure response of a foil undergoing a combined heaving/pitching motion with $h^{*}=0.375$, $f^{*}=0.25$, and various amplitudes. The acoustic pressure is determined at the position 50 chords above the leading-edge of the foil and shown for 3 cycles of motion. $(b)$ RMS acoustic pressure $P_{\text{RMS}}$ from a foil undergoing a combined heaving/pitching motion with $h^{*}=0.8$, $A^{*}=0.5$, and various reduced frequencies. The acoustic pressure response is computed on a circle 50 chord lengths from the leading edge of the foil and is averaged over 3 cycles of motion. The dipolar acoustic directivity is observed for all kinematic parameters considered. Figure 8 presents the non-dimensional peak acoustic pressure for a purely pitching foil ($h^{*}=0$), a combined-motion foil with equal amplitude contributions from pitching and heaving ($h^{*}=0.5$), and a purely heaving foil ($h^{*}=1$). The global maximum in the peak acoustic pressure is found at $h^{*}=1$, $f^{*}=1$, and $A^{*}=1$, which represents the upper bounds of all of the parameters being explored. However, the minimum in the noise production does not occur at the minima of the parameter set: the minimum noise occurs at $h^{*}\approx 0.25$ for fixed values of amplitude and reduced frequency. A purely heaving foil produces higher acoustic pressures than a purely pitching foil, except for $f^{*}\lesssim 0.25$. The combined heaving and pitching foil emits a weaker acoustic pressure signal than either purely heaving or pitching foils for all combinations of reduced frequency and amplitude of motion. 
Figure 8$(b)$ overlays the Strouhal number on the peak acoustic pressure, showing that in general an increase in Strouhal number results in an increase in the peak RMS acoustic pressure for a fixed swimming motion $h^{*}$, even though the isolines of $St$ and $P^{*}_{\rm{Peak}}$ are not precisely aligned. Therefore, the noise level trends are not driven solely by changes in the Strouhal number. Figure 8: Peak acoustic pressure for $(a)$ a purely pitching foil ($h^{*}=0$), $(b)$ a combined heaving/pitching foil ($h^{*}=0.5$), and $(c)$ a purely heaving foil ($h^{*}=1$) as a function of reduced frequency and amplitude of motion. Isolines of Strouhal number overlay the acoustic pressure contours in $(b)$. An increase in either the reduced frequency or the amplitude of motion will result in an increase in acoustic pressure, but the motion type of the foil can lead to lower values of acoustic pressure. In fact, a pressure minimum is observed for a combined heaving and pitching motion for all combinations of reduced frequency and amplitude examined in this study. Figure 9 presents a map that shows the $h^{*}$ value leading to the lowest noise production for a given $f^{*}$ and $A^{*}$. A purely heaving or pitching foil never produces the lowest acoustic pressure. For $f^{*}\gtrsim 0.3$, the quietest swimming is produced by pitch-dominated swimming motions ($h^{*}<0.5$). Only for low reduced frequencies of $f^{*}\lesssim 0.3$ do heave-dominated motions ($h^{*}>0.5$) produce the quietest swimming, and this $h^{*}$ value is independent of the amplitude. Lines of constant $St$ are also marked on the figure, which denote the typical range of $St$ for swimming animals [46]. This map shows that swimming animals operating at low reduced frequencies ($f^{*}<0.3$) minimize their noise production by utilizing heave-dominated swimming kinematics.
Swimming animals with higher reduced frequencies ($f^{*}>0.3$) minimize their noise production if they utilize pitch-dominated swimming kinematics. Figure 9: Map showing for a given $f^{*}$ and $A^{*}$ which $h^{*}$ value leads to the lowest noise production. The potential flow solver can also solve for the associated performance characteristics of these oscillating foils. The force acting on the foil is defined by $\mathbf{F}=\int_{S_{\rm{b}}}-(P_{\text{flow}}\hat{\mathbf{n}})\,dS$, where $P_{\text{flow}}$ is the pressure from the flow solver as opposed to the acoustic pressure, $\hat{\mathbf{n}}$ is the local outward normal vector, and $S_{\rm{b}}$ is the foil surface. Since the potential flow method is inviscid, the forces on the foil arise only from its external pressure distribution. The power consumption of the oscillating motion is calculated as the negative inner product of the force vector and velocity vector of each boundary element, i.e., $P_{w}=-\int_{S_{\rm{b}}}\mathbf{F}_{\rm{ele}}\cdot\mathbf{u}_{\rm{ele}}\,dS$. The time-averaged coefficients of lift, thrust, and power may be defined as $\displaystyle C_{L}=\frac{\overline{F_{y}}}{\frac{1}{2}\rho U_{\infty}^{2}},\quad C_{T}=-\frac{\overline{F_{x}}}{\frac{1}{2}\rho U_{\infty}^{2}},\quad C_{P}=\frac{\overline{P_{w}}}{\frac{1}{2}\rho U_{\infty}^{3}},$ (38) where $F_{x}$ and $F_{y}$ are the integrated streamwise and transverse components of the force on the foil, respectively. The efficiency is also defined as $\eta={C_{T}}/{C_{P}}$.
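The time-averaged coefficients in equation (38) and the efficiency $\eta=C_{T}/C_{P}$ reduce to simple means of the force and power histories over an integer number of motion cycles. A sketch of this reduction (the input series are assumed outputs of the flow solver; chord-based normalization is taken as absorbed into the scaling, as written in the text):

```python
import numpy as np

def propulsive_coefficients(Fx, Fy, Pw, rho=1000.0, U=1.0):
    """Time-averaged lift, thrust, and power coefficients, eq. (38), and
    the efficiency eta = C_T / C_P, from force and power time series
    sampled over an integer number of motion cycles."""
    q = 0.5 * rho * U**2               # dynamic pressure
    C_L = np.mean(Fy) / q              # time-averaged lift coefficient
    C_T = -np.mean(Fx) / q             # thrust is the negative streamwise force
    C_P = np.mean(Pw) / (q * U)        # power carries an extra factor of U
    return C_L, C_T, C_P, C_T / C_P

# Synthetic check with constant series (rho and U chosen so that q = 1):
C_L, C_T, C_P, eta = propulsive_coefficients(
    Fx=-np.ones(8), Fy=np.ones(8), Pw=2.0 * np.ones(8), rho=2.0, U=1.0)
print(C_L, C_T, C_P, eta)  # 1.0 1.0 2.0 0.5
```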
In addition to the time-averaged coefficient of lift, the maximum coefficient of lift will also prove to be a useful metric and is defined by $C_{L,\text{max}}=\frac{{\rm max}(F_{y})}{\frac{1}{2}\rho U_{\infty}^{2}}.$ (39) Figure 10 presents a comparison of the peak RMS acoustic pressure $P^{*}_{\rm{Peak}}$, the maximum coefficient of lift $C_{L,\text{max}}$ scaled by a factor of $1000$, and the absolute value of the time-averaged coefficient of lift $|C_{L}|$ scaled by a factor of $20$. These quantities are shown as functions of the Strouhal number and non-dimensional heave-to-pitch ratio. The lift metrics were scaled in order to make the plots the same order of magnitude. It can be seen that the peak RMS acoustic pressure and the maximum coefficient of lift follow the same trend for increasing $St$ and $h^{*}$. It can also be observed that the peak RMS acoustic pressure does not follow the same trend as the time-averaged coefficient of lift. Not surprisingly, the maximum lift coefficient provides a better correlation with the peak RMS acoustic pressure than the time-averaged coefficient of lift. In light of this finding, the maximum coefficient of lift $C_{L,\text{max}}$ may be used as a proxy metric for comparison of the relative acoustic emissions between oscillating hydrofoils. Figure 10: Comparison of lift metrics with peak acoustic pressure as a function of $St$ and $h^{*}$. The plots show the peak RMS acoustic pressure $P^{*}_{\rm{Peak}}$, the maximum coefficient of lift scaled as $C_{L,\text{max}}/1000$, and the absolute value of the time-averaged coefficient of lift $|C_{L}|$ scaled by a factor of $20$. Subfigure $(a)$ is a purely pitching foil, $h^{*}=0$, $(b)$ is a combined heaving and pitching motion with $h^{*}=0.5$, and $(c)$ is a purely heaving foil, $h^{*}=1$. Figure 11 presents a comparison among the coefficients of thrust and power, efficiency, and the peak RMS acoustic pressure.
The thrust production increases with increasing $f^{*}$, $h^{*}$, and $A^{*}$. In figure 11, regions of negative coefficient of thrust (and the corresponding regions of the coefficients of power, efficiency, and the peak RMS acoustic pressure) are excluded from the contour plots shown. The power consumption also increases with increasing $f^{*}$ and $A^{*}$; however, as $h^{*}$ increases the power decreases to a minimum and then increases. This result indicates that combined heaving and pitching motions use less power than purely pitching or heaving motions for a fixed $f^{*}$ and $A^{*}$. The efficiency results show global peaks around $h^{*}=0.85$, $f^{*}<0.2$ for all $A^{*}$. Since $St=f^{*}A^{*}$, the highest efficiencies occur for the lowest swimming Strouhal numbers. Most fish swim with $0.2\leq St\leq 0.4$ [46], making it difficult to reach the highest levels of efficiency, even with $A^{*}=1$. The optimal $h^{*}$ to maximize efficiency will vary, depending upon the Strouhal number of the particular swimming animal or biorobotic device. Figure 11: Comparison of acoustic and hydrodynamic metrics. The rows correspond to the coefficients of thrust and power, the efficiency, and the peak acoustic pressure. The columns correspond to different $A^{*}$ values. Each contour plot is presented as a function of $f^{*}$ and $h^{*}$. For the first time, the noise of a biopropulsor and its performance can be compared. The peak acoustic pressure does not follow the trends of the thrust or efficiency; however, it does follow similar trends to the power coefficient. This result can be explained by the fact that the acoustic pressure is well-correlated with the maximum lift coefficient and, consequently, with the power consumption [48].
Moreover, the magnitude of the lift coefficient is greater than that of the thrust coefficient for all scenarios, further explaining the vertical acoustic dipole and the trend between the peak RMS acoustic pressure and the maximum coefficient of lift. These results highlight that when the power needed to move the foil is minimized, the energy available for conversion into noise is also minimized, leading to the quietest acoustic signatures. In contrast, there is a trade-off between operating at maximum propulsive efficiency and minimizing the noise production. Furthermore, the thrust increases as the swimmer moves from a purely pitching to a purely heaving swimming motion, which requires more power and produces a louder acoustic signal. ## 7 Conclusion An integrated, two-dimensional flow-acoustic boundary element solver is developed to predict the noise generated by the vortical wake of rigid foils in motion. The vortex-particle wake computed by the potential flow boundary element solver furnishes the input for the transient acoustic boundary element solver via Powell’s acoustic analogy. This one-way flow-acoustic coupling is validated against experimental and analytical results for the acoustic emission of a vortex gust encounter with an airfoil. The coupled potential flow-acoustic method is used to investigate the performance and acoustic emission of a heaving and pitching hydrofoil. The hydrofoil is subjected to varying non-dimensional frequencies, amplitudes, and heave-to-pitch ratios that encompass the parametric range of most swimming and maneuvering animals. All combinations of these variables examined in this work produce a similar dipole acoustic response, where the maximum sound pressure levels occur directly above and below the foil. Foils in purely pitching or purely heaving motions are found to be noisier than foils that operate with a combined heaving and pitching motion.
In fact, for fixed reduced frequency and amplitude there exists an optimal heave-to-pitch ratio, $h^{*}$, that minimizes the noise production. The numerical model indicates that most swimming animals would minimize their noise production by using heave-dominated swimming motions. As the reduced frequency increases past the regime of swimming animals, a transition to pitch-dominated swimming motions minimizes the noise production. Moreover, the correlation between the maximum coefficient of lift and the peak RMS acoustic pressure for all combinations of reduced frequency, amplitude, and heave-to-pitch ratio is consistent with the acoustic dipole response. Consequently, the trends in the coefficient of power are well-correlated with the trends in the peak RMS acoustic pressure for swimming motions with $h^{*}>0.25$. This result supports the conclusion that swimming with low power consumption and a low acoustic signature can be achieved together. In contrast, it is discovered that there is a trade-off between swimming with high propulsive efficiency and a low acoustic signature. These insights further our understanding of swimming in nature and could aid in the design of high-performance, quiet bio-inspired autonomous underwater vehicles. ## Acknowledgments The authors gratefully acknowledge financial support from the National Science Foundation under grants 1805692 (JWJ) and 1653181 (KWM), the Office of Naval Research under MURI grant N00014-08-1-0642 (KWM), and a Lehigh CORE grant (JWJ, KWM). ## Appendix A The potential flow boundary element method presented in this work is a two-dimensional derivative of the three-dimensional method described by Willis et al. [30]. A comparison of the BEM solution to analytic and numerical works is performed to ensure the accuracy of the method presented.
Theodorsen [49] solved analytically for the fluid-dynamic lift and moment acting on a flat-plate foil undergoing harmonic pitching and heaving motions under the assumption of a planar wake. Garrick [50] used these results to predict the time-averaged thrust and efficiency of pitching and heaving motions, as well as trailing-edge flap motions. These analytical results have recently been extended by Jaworski [51] to also handle leading-edge flap actuation in addition to pitch, heave, and trailing-edge flap motions, and the results from Theodorsen and Garrick have previously been compared to computational fluid dynamic simulations of rigid and deformable thin airfoils [52, 53]. The first validation case is against Garrick’s theory. In the numerical simulations, a 2%-thick teardrop foil is subjected to a purely pitching motion of amplitude $\theta_{0}=3^{\circ}$ about the leading edge. First, convergence studies on the number of boundary elements and time steps are performed for a reduced frequency $f^{*}=1$. Figure 12 shows $(a)$ spatial and $(b)$ temporal convergence of the time-averaged coefficient of force. The inset images detail the percent change (%$\Delta$) of the time-averaged value as the number of elements or time steps per period of motion doubles. The spatial convergence study was conducted for a fixed temporal resolution of 150 time steps per period. The inset of figure 12 $(a)$ shows the ${\it O}(1\%)$ difference in the force when changing from 128 to 256 elements. The temporal convergence study used a fixed number of 256 body elements. Figure 12 $(b)$ shows an ${\it O}(1\%)$ change in the force when increasing from 128 to 256 time steps per period of motion. All of the simulation results previously presented used 150 time steps per period of motion and 256 boundary elements to define the discrete body. Comparison to analytic solutions and convergence studies of the acoustic BEM are presented in Appendix B.
Convergence of the acoustic results was previously demonstrated by Wagenhoffer et al. [11], showing that the selected spatial and temporal resolutions of the potential flow solver also yield converged acoustic results. Figure 12: Spatial and temporal convergence of the potential flow BEM. $(a)$ Time-averaged coefficient of force as the number of boundary elements on the body doubles for a fixed number of time steps; $(b)$ time-averaged coefficient of force as the number of time steps per period doubles for a fixed number of elements. Next, validation via comparison of Garrick’s theory to the solution of the solver in §2 for varying reduced frequencies is detailed. Theodorsen defined the lift as $L=\rho V^{2}c\sqrt{R^{2}+I^{2}}e^{\textrm{i}\omega t},$ (40) and the aerodynamic moment as $M=\frac{1}{2}\rho V^{2}c^{2}\sqrt{R^{2}+I^{2}}e^{\textrm{i}(\omega t+\phi)},$ (41) where $\displaystyle R=\pi\theta_{0}\left\\{\frac{k^{2}}{2}\left(\frac{1}{8}+a^{2}\right)+\left(\frac{1}{2}+a\right)\left[F-\left(\frac{1}{2}-a\right)kG\right]\right\\},$ $\displaystyle I=-\pi\theta_{0}\left\\{\frac{k}{2}\left(\frac{1}{2}-a\right)-\left(\frac{1}{2}+a\right)\left[G+\left(\frac{1}{2}-a\right)kF\right]\right\\},$ $\displaystyle\phi=\tan^{-1}\frac{I}{R}.$ The required power to sustain the foil motion is ${\rm Pow}=-M\dot{\theta}.$ The coefficient of thrust from Garrick [50] was corrected by Jones et al. [54] to be $C_{T}=\pi k^{2}\theta_{0}^{2}\left[(F^{2}+G^{2})\left(\frac{1}{k^{2}}+\left(\frac{1}{2}-a\right)^{2}\right)-\left(\frac{1}{2}-a\right)\left(\frac{1}{2}-F\right)-\frac{F}{k^{2}}-\left(\frac{1}{2}+a\right)\frac{G}{k}\right],$ (42) where $F$ and $G$ are the real and imaginary parts of the Theodorsen lift deficiency function $C(k)=\textrm{i}H_{1}^{(1)}(k)/(H_{0}^{(1)}(k)+\textrm{i}H_{1}^{(1)}(k))$, and $a$ is the position of the pivot point measured from the mid-chord in half-chord intervals. Rotation about the leading edge corresponds to $a=-1$.
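The Theodorsen function and the corrected thrust coefficient of equation (42) are straightforward to evaluate numerically; the sketch below assumes SciPy's Hankel function `hankel1` and follows the conventions of the text, with $F$ and $G$ the real and imaginary parts of $C(k)$ and $a=-1$ for rotation about the leading edge:

```python
import numpy as np
from scipy.special import hankel1

def theodorsen(k):
    """Theodorsen lift-deficiency function in the convention of the text:
    C(k) = i H_1^{(1)}(k) / (H_0^{(1)}(k) + i H_1^{(1)}(k))."""
    H0, H1 = hankel1(0, k), hankel1(1, k)
    return 1j * H1 / (H0 + 1j * H1)

def garrick_thrust(k, theta0, a=-1.0):
    """Time-averaged thrust coefficient of a purely pitching flat plate,
    eq. (42) as corrected by Jones et al. [54]; a is the pivot location
    measured from the mid-chord in half-chords (a = -1: leading edge)."""
    C = theodorsen(k)
    F, G = C.real, C.imag
    return np.pi * k**2 * theta0**2 * (
        (F**2 + G**2) * (1.0 / k**2 + (0.5 - a)**2)
        - (0.5 - a) * (0.5 - F)
        - F / k**2
        - (0.5 + a) * G / k
    )

print(theodorsen(0.5))                       # about 0.598 + 0.151j here
print(garrick_thrust(1.0, np.deg2rad(3.0)))  # thrust at k = 1, theta0 = 3 deg
```

In the high-frequency limit $C(k)\to 1/2$, which provides a quick sanity check on the implementation.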
Figure 13 compares the potential flow BEM results against the theory of Garrick for three different reduced frequencies that encapsulate the range of reduced frequencies used for the study in §5. The figure shows the BEM solution matching well to the thin-airfoil theory results over one cycle of motion. Figure 13: Comparison of the BEM solution with the solution of Garrick. Shown is a comparison of the solution of §2 to the theory of Garrick [50] for a purely pitching foil. From left to right are increasing values of reduced frequency, $f^{*}=fc/U_{\infty}=[0.25,0.5,1]$. From top to bottom are comparisons of the coefficient of lift $C_{L}$, coefficient of thrust $C_{T}$, and coefficient of power $C_{P}$, respectively. In each plot the dashed blue line is the solution of the potential flow method and the solid green line is the theory of Garrick, equation (42). The theory of Garrick is applicable to low-amplitude motion, and additional validation against large-amplitude motions is necessary to give confidence in the potential flow solver. The work of Pan et al. [33] developed a boundary element method to investigate leading-edge separation of heaving and pitching foils, but that capability is not necessary for the work presented here as we assume that the flow remains attached over the propulsive surface being studied. In the work of Pan et al., thrust and efficiency are found for heaving and pitching foils without leading-edge separation for a range of Strouhal numbers and maximum angles of attack $\alpha_{\rm{max}}$. Figure 14 shows good agreement between the method presented here and the work of Pan et al. for a heave-to-chord ratio of $0.75$ and $\alpha_{\rm{max}}=35^{\circ}$ over $0.25<St<0.4$. Figure 14: Comparison of the numerical results of the potential flow solver in §2 with the numerical solutions by Pan et al. [33].
A combined heaving and pitching motion with a heave-to-chord ratio of $h_{0}/c=0.75$ reaching maximum angles of attack of $15^{\circ}$ (left) and $35^{\circ}$ (right) is performed over Strouhal numbers ranging from 0.25 to 0.4. The results are compared with respect to the coefficient of thrust $C_{T}$ (top) and efficiency $\eta$ (bottom). The blue squares represent the work of Pan et al. and the yellow circles represent the potential flow solver of this work. ## Appendix B The capability of the acoustic boundary element method to model acoustic scattering by a solid body is demonstrated and validated. A rigid circle of radius $a$ placed at the origin is bombarded by a harmonic field of plane waves. The incident field of unit strength has the form $P_{\rm{i}}(x,t)=\exp[i(\kappa r\cos\theta-\omega t)],$ where $\omega$ is the angular frequency, $\kappa$ is the wavenumber, and $x=r\cos\theta$. The analytical result for the scattered field is [55] $P_{\rm{s}}(x,t)=e^{i\omega t}\sum_{n=0}^{\infty}\epsilon_{n}i^{n}\left[J_{n}(\kappa a)-\frac{J^{\prime}_{n}(\kappa a)H_{n}(\kappa r)}{H^{\prime}_{n}(\kappa a)}\cos n\theta\right].$ (43) The total acoustic field is the sum of the incident and scattered fields, $P_{\rm{t}}=P_{\rm{s}}+P_{\rm{i}}.$ Figure 15: A comparison of the analytical and BEM results of the plane wave scatterer study with convergence studies. $(a)$ shows the fully developed scattered field. The observation point, denoted by a black circle, is placed at the arbitrary point $(r,\theta)=(5,\frac{\pi}{9})$. $(b)$ compares the time history of the scattered field at the observation point for $\omega=1$ and $\kappa=2$ with the analytic solution. $(c)$ shows the spatial convergence of the solution, while $(d)$ shows the temporal convergence of the solution. The interaction of the harmonic incident field with the solid cylinder is as follows.
The incoming plane waves propagate in the positive $x$-direction and make initial contact with the cylinder at $(r,\theta)=(a,\pi)$. In the area in front of the cylinder, the plane waves are reflected back onto themselves. The waves reflect at the front of the cylinder to create a shadow region aft of the body. The length of the shadow region is dictated by the wavenumber, with larger values resulting in a smaller shadow region. The $L_{2}$ error norm is calculated over observation points placed on five circles of five points each, sampling all of the regions of the scattered field at distances from $a$ to $10a$ from the rigid circle. Figure 15 $(b)$ compares the transient acoustic response at a point in the acoustic field to the analytical solution to harmonic wave forcing. Here $\omega=1$, $\kappa=2$, and the arbitrary point $(r,\theta)=(5,\frac{\pi}{9})$ are selected for this example. Note the absence of a signal in the BEM solution until the initial scattered wave reaches the observation point, after which the numerical solution quickly converges to the analytical result. Spatial and temporal discretization independence of the numerical solution is shown in figures 15 $(c)$ and $(d)$. For the spatial convergence study, four periods of $T=\pi$ are divided into 256 equidistant time steps. An increasing number of elements on the boundary were used to compare the BEM solution with (43). The temporal convergence study (figure 15 $(d)$) had a boundary of 1024 equal-length elements over a total period of $T=4\pi$. The total period is divided into increasing numbers of equidistant time steps. Spatial convergence occurs at approximately 512 elements, and the relative error falls below 0.1% when more than 256 time steps are used. ## References * Lauder [2015] G. V. Lauder, Fish locomotion: recent advances and new directions, Annual Review of Marine Science 7 (2015) 521–545. * Sfakiotakis et al. [1999] M. Sfakiotakis, D. M. Lane, J. B. C.
Davies, Review of fish swimming modes for aquatic locomotion, IEEE Journal of Oceanic Engineering 24 (1999) 237–252. * Triantafyllou et al. [1993] G. S. Triantafyllou, M. S. Triantafyllou, M. A. Grosenbaugh, Optimal thrust development in oscillating foils with application to fish propulsion, Journal of Fluids and Structures 7 (1993) 205–224. * Triantafyllou et al. [1991] M. S. Triantafyllou, G. S. Triantafyllou, R. Gopalkrishnan, Wake mechanics for thrust generation in oscillating foils, Physics of Fluids A 3 (1991) 2835–2837. * Triantafyllou and Triantafyllou [1995] M. S. Triantafyllou, G. S. Triantafyllou, An efficient swimming machine, Scientific American 272 (1995) 64–71. * Liu and Hu [2004] J. Liu, H. Hu, A 3D simulator for autonomous robotic fish, International Journal of Automation and Computing 1 (2004) 42–50. * Hu [2006] H. Hu, Biologically inspired design of autonomous robotic fish at Essex, in: IEEE SMC UK-RI Chapter Conference on Advances in Cybernetic Systems, Citeseer, pp. 3–8. * Clapham [2015] R. J. Clapham, Developing high performance linear Carangiform swimming, Ph.D. thesis, University of Essex, 2015. * Bandyopadhyay [2005] P. R. Bandyopadhyay, Trends in biorobotic autonomous undersea vehicles, IEEE Journal of Oceanic Engineering 30 (2005) 109–139. * Moored et al. [2011] K. W. Moored, F. E. Fish, T. H. Kemp, H. Bart-Smith, Batoid fishes: inspiration for the next generation of underwater robots, Marine Technology Society Journal 45 (2011) 99–109. * Wagenhoffer et al. [2019] N. Wagenhoffer, K. W. Moored, J. W. Jaworski, Accelerated acoustic boundary element method and the noise generation of an idealized school of fish, in: E. Ciappi, S. De Rosa, F. Franco, J.-L. Guyader, S. A. Hambric, R. C. K. Leung, A. D. Hanford (Eds.), FLINOVIA II: Flow Induced Noise and Vibration Issues and Aspects, Springer, pp. 157–178. * Jaworski and Peake [2013] J. W. Jaworski, N.
Peake, Aerodynamic noise from a poroelastic edge with implications for the silent flight of owls, Journal of Fluid Mechanics 723 (2013) 456–479. * Clark et al. [2017] I. A. Clark, W. N. Alexander, W. Devenport, S. Glegg, J. W. Jaworski, C. Daly, N. Peake, Bioinspired trailing-edge noise control, AIAA Journal 55 (2017) 740–754. * Hajian and Jaworski [2017] R. Hajian, J. W. Jaworski, The steady aerodynamics of aerofoils with porosity gradients, Proceedings of the Royal Society A 473 (2017) 20170266. * Jaworski and Peake [2020] J. W. Jaworski, N. Peake, Aeroacoustics of silent owl flight, Annual Review of Fluid Mechanics 52 (2020) 395–420. * Fay [2009] R. R. Fay, Fish bioacoustics, Handbook of Signal Processing in Acoustics (2009) 1851–1860. * Ladich and Fine [2006] F. Ladich, M. L. Fine, Sound-generating mechanisms in fishes: a unique diversity in vertebrates, Communication in Fishes 1 (2006) 3–43. * Luczkovich and Sprague [2002] J. J. Luczkovich, M. W. Sprague, Using passive acoustics to monitor spawning of fishes in the drum family (Sciaenidae), Listening to Fish 3 (2002) 1. * Moored [2018] K. W. Moored, Unsteady three-dimensional boundary element method for self-propelled bio-inspired locomotion, Computers & Fluids 167 (2018) 324–340. * Hess and Smith [1967] J. L. Hess, A. O. Smith, Calculation of potential flow about arbitrary bodies, Progress in Aerospace Sciences 8 (1967) 1–138. * Katz and Plotkin [2001] J. Katz, A. Plotkin, Low-speed aerodynamics, volume 13, Cambridge University Press, 2001. * Glegg and Devenport [2010] S. A. L. Glegg, W. J. Devenport, Panel methods for airfoils in turbulent flow, Journal of Sound and Vibration 329 (2010) 3709–3720. * Wang et al. [2006] M. Wang, J. B. Freund, S. K. Lele, Computational prediction of flow-generated sound, Annual Review of Fluid Mechanics 38 (2006) 483–512. * Karimi et al. [2016] M. Karimi, P. Croaker, N.
Kessissoglou, Trailing-edge noise prediction using a periodic BEM technique, in: Fluid-Structure-Sound Interactions and Control, Springer, 2016, pp. 39–44. * Lighthill [1952] M. J. Lighthill, On sound generated aerodynamically. I. General theory, Proceedings of the Royal Society of London A 211 (1952) 564–587. * Wang et al. [1996] M. Wang, S. K. Lele, P. Moin, Computation of quadrupole noise using acoustic analogy, AIAA Journal 34 (1996) 2247–2254. * Wolf and Lele [2011] W. Wolf, S. Lele, Trailing edge noise predictions using compressible LES and acoustic analogy, in: 17th AIAA/CEAS Aeroacoustics Conference (32nd AIAA Aeroacoustics Conference), p. 2784. * Powell [1964] A. Powell, Theory of vortex sound, The Journal of the Acoustical Society of America 36 (1964) 177–195. * Howe [1975] M. S. Howe, Contributions to the theory of aerodynamic sound, with application to excess jet noise and the theory of the flute, Journal of Fluid Mechanics 71 (1975) 625–673. * Willis et al. [2007] D. J. Willis, J. Peraire, J. K. White, A combined pFFT-multipole tree code, unsteady panel method with vortex particle wakes, International Journal for Numerical Methods in Fluids 53 (2007) 1399–1422. * Jones et al. [1997] K. Jones, M. Platzer, K. Jones, M. Platzer, Numerical computation of flapping-wing propulsion and power extraction, in: 35th Aerospace Sciences Meeting and Exhibit, p. 826. * Cottet and Koumoutsakos [2000] G.-H. Cottet, P. D. Koumoutsakos, Vortex methods: theory and practice, Cambridge University Press, 2000. * Pan et al. [2012] Y. Pan, X. Dong, Q. Zhu, D. K. Yue, Boundary-element method for the prediction of performance of flapping foils with leading-edge separation, Journal of Fluid Mechanics 698 (2012) 446–467. * Kirkup [2007] S. M. Kirkup, The boundary element method in acoustics, Integrated Sound Software, 2007. * Lubich [2004] C. Lubich, Convolution quadrature revisited, BIT Numerical Mathematics 44 (2004) 503–514. * Hassell and Sayas [2016] M. Hassell, F. J.
Sayas, Convolution quadrature for wave simulations, in: Numerical Simulation in Physics and Engineering, Springer, 2016, pp. 71–159. * Banjai and Sauter [2011] L. Banjai, S. A. Sauter, Rapid solution of the wave equation in unbounded domains, SIAM Journal on Numerical Analysis 7 (2011) 227–249. * Howe [2003] M. S. Howe, Theory of vortex sound, volume 33, Cambridge University Press, 2003. * Kambe [1986] T. Kambe, Acoustic emissions by vortex motions, Journal of Fluid Mechanics 173 (1986) 643–666. * Kao [2002] H. C. Kao, Body-vortex interaction, sound generation, and destructive interference, AIAA Journal 40 (2002) 652–660. * Crighton [1972] D. G. Crighton, Radiation from vortex filament motion near a half plane, Journal of Fluid Mechanics 51 (1972) 357–362. * Booth [1990] E. R. Booth, Experimental observations of two-dimensional blade-vortex interaction, AIAA Journal 28 (1990) 1353–1359. * Howe [1976] M. S. Howe, The influence of vortex shedding on the generation of sound by convected turbulence, Journal of Fluid Mechanics 76 (1976) 711–740. * Lighthill [1969] M. Lighthill, Hydromechanics of aquatic animal propulsion, Annual Review of Fluid Mechanics 1 (1969) 413–446. * Akoz and Moored [2018] E. Akoz, K. W. Moored, Unsteady propulsion by an intermittent swimming gait, Journal of Fluid Mechanics 834 (2018) 149–172. * Taylor et al. [2003] G. K. Taylor, R. L. Nudds, A. L. Thomas, Flying and swimming animals cruise at a Strouhal number tuned for high power efficiency, Nature 425 (2003) 707. * Eloy [2012] C. Eloy, Optimal Strouhal number for swimming animals, Journal of Fluids and Structures 30 (2012) 205–218. * Moored and Quinn [2019] K. W. Moored, D. B. Quinn, Inviscid scaling laws of a self-propelled pitching airfoil, AIAA Journal 57 (2019) 3686–3700. * Theodorsen [1935] T. Theodorsen, General theory of aerodynamic instability and the mechanism of flutter, NACA Technical Report (1935). * Garrick [1937] I. E.
Garrick, Propulsion of a flapping and oscillating airfoil, NACA Technical Report (1937). * Jaworski [2012] J. W. Jaworski, Thrust and aerodynamic forces from an oscillating leading edge flap, AIAA Journal 50 (2012) 2928–2931. * Jaworski and Gordnier [2012] J. W. Jaworski, R. E. Gordnier, High-order simulations of low Reynolds number membrane airfoils under prescribed motion, Journal of Fluids and Structures 31 (2012) 49–66. * Jaworski and Gordnier [2015] J. W. Jaworski, R. E. Gordnier, Thrust augmentation of flapping airfoils in low Reynolds number flow using a flexible membrane, Journal of Fluids and Structures 52 (2015) 199–209. * Jones et al. [1996] K. Jones, C. Dohring, M. Platzer, Wake structures behind plunging airfoils - a comparison of numerical and experimental results, in: 34th Aerospace Sciences Meeting and Exhibit, p. 78. * Junger and Feit [1986] M. C. Junger, D. Feit, Sound, structures, and their interaction, volume 225, MIT Press, Cambridge, MA, 1986.
# Automorphism Groups and Isometries for Cyclic Orbit Codes Heide Gluesing-Luerssen∗ and Hunter Lehmann¹ (¹HGL was partially supported by the grant #422479 from the Simons Foundation. HGL and HL are with the Department of Mathematics, University of Kentucky, Lexington KY 40506-0027, USA; {heide.gl<EMAIL_ADDRESS> (January 23, 2021) ###### Abstract We study orbit codes in the field extension $\mathbb{F}_{q^{n}}$. First we show that the automorphism group of a cyclic orbit code is contained in the normalizer of the Singer subgroup if the orbit is generated by a subspace that is not contained in a proper subfield of $\mathbb{F}_{q^{n}}$. We then generalize to orbits under the normalizer of the Singer subgroup. In that situation some exceptional cases arise and some open cases remain. Finally we characterize linear isometries between such codes. ## 1 Introduction In [15] Koetter/Kschischang introduced subspace codes for random network coding. As they demonstrated, these codes, together with rank-metric codes, are the appropriate tools for information transmission with error correction through a network with multiple sources and receivers. As a consequence, [15] led to an intense study of both classes of codes. Mathematically, a subspace code is simply a collection of subspaces of some vector space $\mathbb{F}_{q}^{n}$, endowed with the subspace distance. One class that has garnered particular attention is that of orbit codes; see [6, 16, 20]. These are, by definition, orbits of a subspace of $\mathbb{F}_{q}^{n}$ under a subgroup of $\text{GL}_{n}(\mathbb{F}_{q})$ (acting naturally on the set of subspaces). If the group is cyclic, these codes are known as cyclic orbit codes. However, in most of the literature the latter notion is reserved for orbit codes under the Singer subgroup, and we will follow this custom in this paper. A Singer subgroup is, by definition, a cyclic subgroup of $\text{GL}_{n}(\mathbb{F}_{q})$ of order $q^{n}-1$.
Its meaning is best understood by identifying $\mathbb{F}_{q}^{n}$ with the field extension $\mathbb{F}_{q^{n}}$ as $\mathbb{F}_{q}$-vector spaces. The matrix group $\text{GL}_{n}(\mathbb{F}_{q})$ turns into the group of $\mathbb{F}_{q}$-vector space automorphisms of $\mathbb{F}_{q^{n}}$, and we will denote this group by $\text{GL}_{n}(q)$. The subgroup consisting of the multiplication maps $x\mapsto ax$ for any $a\in\mathbb{F}_{q^{n}}^{*}$ is isomorphic to $\mathbb{F}_{q^{n}}^{*}$ and thus a Singer subgroup of $\text{GL}_{n}(q)$. In fact, all Singer subgroups of $\text{GL}_{n}(q)$ are conjugate to $\mathbb{F}_{q^{n}}^{*}$ and can be interpreted as a group of multiplication maps; see Lemma 2.3 and the paragraph thereafter. In this setting, a cyclic orbit code is thus the orbit of an $\mathbb{F}_{q}$-subspace $\mathcal{U}$ of $\mathbb{F}_{q^{n}}$ under $\mathbb{F}_{q^{n}}^{*}$, i.e., $\\{\omega^{i}\mathcal{U}\mid i=0,\ldots,q^{n}-2\\}$, where $\omega$ is a primitive element of $\mathbb{F}_{q^{n}}$. First examples of cyclic orbit codes with good distance appeared already in [7], and in fact, in most of the literature, cyclic orbit codes have been studied in this setting; see for instance [10] for details on the orbit length and some distance results, [1, 17, 18, 3, 21] for constructions of unions of cyclic orbit codes with good distance with the aid of subspace polynomials, and [9] for a study of the distance distribution of cyclic orbit codes. The aforementioned unions of cyclic orbit codes are in fact orbit codes under the normalizer of $\mathbb{F}_{q^{n}}^{*}$ in $\text{GL}_{n}(q)$. The normalizer is isomorphic to $\text{Gal}(\mathbb{F}_{q^{n}}\\!\mid\\!\mathbb{F}_{q})\rtimes\mathbb{F}_{q^{n}}^{*}$, and thus its orbits are simply unions of at most $n$ cyclic orbit codes.
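To make the orbit $\\{\omega^{i}\mathcal{U}\mid i=0,\ldots,q^{n}-2\\}$ concrete, the following small sketch (the choice of $\mathbb{F}_{16}$ with primitive polynomial $x^{4}+x+1$ and the particular subspace are our own illustrative assumptions) enumerates a cyclic orbit code and computes its minimum subspace distance:

```python
# Toy cyclic orbit code in GF(16); elements are 4-bit integers and
# omega = x = 2 is a primitive element for x^4 + x + 1.

MOD = 0b10011  # primitive polynomial x^4 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication in GF(16), reduced modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:        # degree reached 4: reduce
            a ^= MOD
        b >>= 1
    return r

# U = F_2-span of {1, omega} = {0, 1, 2, 3}: a 2-dimensional subspace
# containing 1 and not contained in the subfield GF(4) = {0, 1, 6, 7}.
U = frozenset({0, 1, 2, 3})

orbit, V, omega = set(), U, 2
for _ in range(15):            # |GF(16)*| = q^n - 1 = 15
    orbit.add(V)
    V = frozenset(gf_mul(omega, v) for v in V)
print(len(orbit))  # 15: full orbit length, so the stabilizer of U is trivial

# Subspace distance d(V, W) = dim V + dim W - 2 dim(V ∩ W); a subspace
# with 2^d elements has dimension d.
def dim(S):
    return (len(S) - 1).bit_length()
d_min = min(4 - 2 * dim(V & W) for V in orbit for W in orbit if V != W)
print(d_min)  # 2: some codeword pairs intersect in a 1-dimensional space
```

Here $\mathcal{U}$ is generic in the sense used above (it is not contained in a proper subfield of $\mathbb{F}_{16}$), and the orbit attains the maximal length $q^{n}-1$.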
This insight has also been utilized in [2], where the authors succeeded in finding a $q$-Steiner system of type $\mathcal{S}_{2}[2,3,13]$: it consists of $15$ orbit codes under the normalizer group. In this paper we will study the automorphism groups of cyclic orbit codes and orbit codes under the Singer normalizer. As usual, the automorphism group of a subspace code is defined as the subgroup of $\text{GL}_{n}(q)$ that leaves the code invariant. We will prove the following result. Let $\mathcal{U}$ be a subspace of $\mathbb{F}_{q^{n}}$ containing $1$ (which is no restriction) and let $\mathbb{F}_{q^{s}}$ be the smallest subfield of $\mathbb{F}_{q^{n}}$ containing $\mathcal{U}$. Then the automorphism group is contained in the normalizer of the extension-field subgroup $\text{GL}_{n/s}(q^{s})$, where the latter is defined as the subgroup of all $\mathbb{F}_{q^{s}}$-linear automorphisms of $\mathbb{F}_{q^{n}}$. In particular, if $\mathcal{U}$ is generic, i.e., not contained in a proper subfield of $\mathbb{F}_{q^{n}}$, the automorphism group of the cyclic orbit code generated by $\mathcal{U}$ is contained in the normalizer of the Singer subgroup $\mathbb{F}_{q^{n}}^{*}$. In order to prove these results we will derive a lower bound on the length of the $\text{GL}_{n/s}(q^{s})$-orbit of $\mathcal{U}$ for any given divisor $s$ of $n$. A crucial role will be played by the parameter $\delta_{s}(\mathcal{U})$, which is the $\mathbb{F}_{q^{s}}$-dimension of the $\mathbb{F}_{q^{s}}$-subspace generated by $\mathcal{U}$. Note that $\delta_{s}(\mathcal{U})=1$ iff $\mathcal{U}\subseteq\mathbb{F}_{q^{s}}$. We then turn to orbit codes under the normalizer of the Singer subgroup and derive the same results for the automorphism groups as long as the orbit code is generated by a subspace $\mathcal{U}$ satisfying $\delta_{s}(\mathcal{U})\neq 2$. 
The case $\delta_{s}(\mathcal{U})=2$ is of particular interest: the above results hold for many parameter cases, while there exist counterexamples for others. We strongly believe that these examples are the only exceptions to our main result on the automorphism group. We finally discuss linear isometries, i.e., maps from $\text{GL}_{n}(q)$, between cyclic orbit codes and orbit codes under the Singer normalizer. Our results on the automorphism groups immediately imply the following facts for orbits generated by generic subspaces: (i) a linear isometry between cyclic orbit codes is in the normalizer of $\mathbb{F}_{q^{n}}^{*}$; (ii) linearly isometric orbit codes under the Singer normalizer are in fact equal – with the possible exception of orbits generated by subspaces $\mathcal{U}$ with $\delta_{s}(\mathcal{U})=2$ for some $s$. This drastically reduces the workload for testing isometry between such codes. The nature of our counterexamples leads us to believe that the last statement does not need the assumption on $\delta_{s}(\mathcal{U})$. We close the paper with some examples listing the number of distinct isometry classes of cyclic orbit codes and, making use of [9], providing the weight distribution for each class. ## 2 Singer Subgroups and Extension-Field Subgroups Throughout we fix a finite field $\mathbb{F}_{q}$. The field extension $\mathbb{F}_{q^{n}}$ is taken as our model for the $n$-dimensional vector space over $\mathbb{F}_{q}$. We denote by $\text{PG}(n-1,q)$ the _$n$-dimensional projective geometry over_ $\mathbb{F}_{q}$, that is, the set of all subspaces of $\mathbb{F}_{q^{n}}$. Accordingly, $\text{GL}_{n}(q)$ denotes the group of all $\mathbb{F}_{q}$-automorphisms of $\mathbb{F}_{q^{n}}$. Specific subgroups will play a crucial role. ###### Definition 2.1. Let $\mathbb{F}_{q^{s}}$ be a subfield of $\mathbb{F}_{q^{n}}$, thus $\mathbb{F}_{q^{n}}$ is an $\mathbb{F}_{q^{s}}$-vector space of dimension $n/s$.
The _extension-field subgroup of degree $s$_ is defined as $\text{GL}_{n/s}(q^{s})=\\{\phi\in\text{GL}_{n}(q)\mid\phi\text{ is $\mathbb{F}_{q^{s}}$-linear}\\}.$ The subgroup $\text{GL}_{1}(q^{n})$ will be identified with the multiplicative group $\mathbb{F}_{q^{n}}^{*}$ via the map $a\mapsto m_{a}$, where $m_{a}$ is the multiplication by $a$, that is, $m_{a}:\mathbb{F}_{q^{n}}\longrightarrow\mathbb{F}_{q^{n}},\ x\longmapsto ax.$ (2.1) Clearly, $\text{GL}_{1}(q^{n})$ is a cyclic subgroup of order $q^{n}-1$. Subgroups of $\text{GL}_{n}(q)$ of this form are well known. ###### Definition 2.2. A cyclic subgroup of $\text{GL}_{n}(q)$ of order $q^{n}-1$ is called a _Singer subgroup_. ###### Lemma 2.3 ([8, Lem. 3]). Every Singer subgroup of $\text{GL}_{n}(q)$ is conjugate to $\mathbb{F}_{q^{n}}^{*}$. Let us briefly comment on this result. Consider the extension-field subgroups $\text{GL}_{n/s}(q^{s})$ from Definition 2.1, and let $\rho\in\text{GL}_{n}(q)$. Then the $\mathbb{F}_{q}$-linear isomorphism $\rho$ leads to new field structures $\rho(\mathbb{F}_{q^{n}})$ and $\rho(\mathbb{F}_{q^{s}})$ with identity $\rho(1)$ (they turn $\rho$ into a ring homomorphism). The conjugate group $\rho\text{GL}_{n/s}(q^{s})\rho^{-1}$ is now the group of all $\rho(\mathbb{F}_{q^{s}})$-linear automorphisms of the field $\rho(\mathbb{F}_{q^{n}})$, and in particular the conjugate Singer subgroup $\rho\mathbb{F}_{q^{n}}^{*}\rho^{-1}$ is the group of all $\rho(\mathbb{F}_{q^{n}})$-linear automorphisms of the field $\rho(\mathbb{F}_{q^{n}})$. Thus, conjugation of any of these subgroups corresponds to an isomorphic field structure. For this reason we may and will restrict ourselves to the Singer subgroup $\mathbb{F}_{q^{n}}^{*}$. The following results will be needed later on and are well known. The normalizer of a subgroup $H$ in a group $G$ is denoted by $N_{G}(H)$. ###### Theorem 2.4. Let $S=\mbox{$\langle{\tau}\rangle$}\leq\text{GL}_{n}(q)$ be a Singer subgroup.
* (a) The normalizer of $S$ is $N_{\text{GL}_{n}(q)}(S)=\mbox{$\langle{\tau,\sigma}\rangle$}\cong\text{Gal}(\mathbb{F}_{q^{n}}\\!\mid\\!\mathbb{F}_{q})\rtimes S$, where $\sigma\in\text{GL}_{n}(q)$ is the Frobenius homomorphism of order $n$. Moreover, $N_{\text{GL}_{n}(q)}(S)$ is self-normalizing in $\text{GL}_{n}(q)$. * (b) The only Singer subgroup contained in $N_{\text{GL}_{n}(q)}(S)$ is $S$. * (c) Let $H\leq\text{GL}_{n}(q)$ such that $S\leq H$. Then there is a divisor $s$ of $n$ such that $\text{GL}_{n/s}(q^{s})\unlhd H$. * (d) $N_{\text{GL}_{n}(q)}(\text{GL}_{n/s}(q^{s}))\cong\text{Gal}(\mathbb{F}_{q^{s}}\\!\mid\\!\mathbb{F}_{q})\rtimes\text{GL}_{n/s}(q^{s})$. ###### Proof. (a) is in [13, Ch. II, Satz 7.3(a) and its proof], (b) in [4, Prop. 2.5], (c) is in [14, p. 232] and [8, Thm. 7], and (d) is in [8, Sec. 2]. ∎ The following is immediate. ###### Corollary 2.5. Let $S\leq\text{GL}_{n}(q)$ be a Singer subgroup. If $n$ is an odd prime or $n=2$ and $q\geq 3$, then $N_{\text{GL}_{n}(q)}(S)$ is a maximal subgroup of $\text{GL}_{n}(q)$. All of the above can, of course, be translated into matrix groups. In order to do so, we consider the following isomorphism. Fix a primitive element $\omega$ of $\mathbb{F}_{q^{n}}$, and let $f=x^{n}-\sum_{i=0}^{n-1}f_{i}x^{i}\in\mathbb{F}_{q}[x]$ be its minimal polynomial over $\mathbb{F}_{q}$. Let $M_{f}=\begin{pmatrix}&1&&\\\ &&\ddots&\\\ &&&1\\\ f_{0}&f_{1}&\cdots&f_{n-1}\end{pmatrix}\in\mathbb{F}_{q}^{n\times n}$ (2.2) be the companion matrix of $f$. 
Then $1,\omega,\ldots,\omega^{n-1}$ form a basis of $\mathbb{F}_{q^{n}}$ over $\mathbb{F}_{q}$, and we have the isomorphism $\Phi:\mathbb{F}_{q^{n}}\longrightarrow\mathbb{F}_{q}^{n},\quad\sum_{i=0}^{n-1}a_{i}\omega^{i}\longmapsto(a_{0},\ldots,a_{n-1}).$ (2.3) It satisfies $\Phi(c\,\omega^{i})=\Phi(c)M_{f}^{i}\ \text{ for all $c\in\mathbb{F}_{q^{n}}$ and all }i\in\mathbb{N}_{0}.$ (2.4) In other words, $M_{f}^{i}$ is the matrix representation of the linear map $m_{\omega^{i}}$ with respect to the basis $1,\omega,\ldots,\omega^{n-1}$. ###### Remark 2.6. Denote by $\text{GL}_{n}(\mathbb{F}_{q})$ the general linear group of invertible $n\times n$-matrices over $\mathbb{F}_{q}$ and identify a matrix $A\in\text{GL}_{n}(\mathbb{F}_{q})$ in the usual way with the isomorphism $\mathbb{F}_{q}^{n}\longrightarrow\mathbb{F}_{q}^{n},\ v\longmapsto vA$. Then we have the group isomorphism $\text{GL}_{n}(\mathbb{F}_{q})\longrightarrow\text{GL}_{n}(q),\ A\longmapsto\phi_{A}=\Phi^{-1}\circ A\circ\Phi,$ which satisfies $\phi_{A}(a)=\Phi^{-1}\big{(}\Phi(a)A\big{)}\text{ for all }a\in\mathbb{F}_{q^{n}}.$ (2.5) Let now $s$ be a divisor of $n$ and set $N=(q^{n}-1)/(q^{s}-1)$. Thus $\omega^{N}$ is a primitive element of $\mathbb{F}_{q^{s}}$. Then for any $A\in\text{GL}_{n}(\mathbb{F}_{q})$ $\phi_{A}\text{ is $\mathbb{F}_{q^{s}}$-linear}\Longleftrightarrow AM_{f}^{N}=M_{f}^{N}A.$ As a consequence, the subgroup $\\{A\in\text{GL}_{n}(\mathbb{F}_{q})\mid AM_{f}^{N}=M_{f}^{N}A\\}$ may be identified with the extension-field subgroup $\text{GL}_{n/s}(q^{s})$. Consider the special case $s=1$. From [12, Thm. 2.9] it is known that $\langle{M_{f}}\rangle$ is self-centralizing, i.e., $\\{A\in\text{GL}_{n}(\mathbb{F}_{q})\mid AM_{f}=M_{f}A\\}=\mbox{$\langle{M_{f}}\rangle$}$ (see also [11, Cor. 2 and Cor. 3]). 
Since $\text{GL}_{1}(q^{n})\cong\mathbb{F}_{q^{n}}^{*}$, this simply reflects the well-known isomorphism $\mathbb{F}_{q^{n}}^{*}\cong\mbox{$\langle{M_{f}}\rangle$}$ (and $\mathbb{F}_{q^{n}}\cong\mathbb{F}_{q}[M_{f}]$). ## 3 Orbit Codes and Linear Isometries In this section we turn to subspace codes and, more specifically, orbit codes. We endow the projective geometry $\text{PG}(n-1,q)$ with the _subspace distance_ $\text{d}(\mathcal{V},\mathcal{W}):=\dim\mathcal{V}+\dim\mathcal{W}-2\dim(\mathcal{V}\cap\mathcal{W})$ (3.1) for $\mathcal{V},\mathcal{W}\in\text{PG}(n-1,q)$. The subspace distance is a metric on $\text{PG}(n-1,q)$; see [15, Lem. 1]. A subset of $\text{PG}(n-1,q)$ with at least two elements is called a _subspace code (of block length $n$)_. The _subspace distance_ of a code $\mathcal{C}$ is, as usual, $\text{d}_{\rm{s}}(\mathcal{C}):=\min\\{\text{d}(\mathcal{V},\mathcal{W})\mid\mathcal{V},\,\mathcal{W}\in\mathcal{C},\,\mathcal{V}\neq\mathcal{W}\\}.$ (3.2) The subspace codes defined next are _constant-dimension codes_ , that is, they are contained in some $\mathcal{G}_{q}(k,n)$, where $\mathcal{G}_{q}(k,n)$ denotes the Grassmannian consisting of the $k$-dimensional subspaces of $\mathbb{F}_{q^{n}}$. ###### Definition 3.1. Let $G\leq\text{GL}_{n}(q)$ be a subgroup and let $\mathcal{U}\in\mathcal{G}_{q}(k,n)$. Then the $G$-orbit of $\mathcal{U}$, defined as $\operatorname{Orb}_{G}(\mathcal{U})=\\{\phi(\mathcal{U})\mid\phi\in G\\}$, is called an _orbit code_. For a Singer subgroup $S$, the orbit $\operatorname{Orb}_{S}(\mathcal{U})$ is called a _cyclic orbit code_. Two classes of orbit codes will be in the focus of this paper: orbits under the Singer subgroup $\mathbb{F}_{q^{n}}^{*}$ and orbits under the normalizer of $\mathbb{F}_{q^{n}}^{*}$. They take the following explicit form. Let $\omega$ be a primitive element of $\mathbb{F}_{q^{n}}$. 
Furthermore, for $\mathcal{U}\in\mathcal{G}_{q}(k,n)$ define $\mathcal{U}^{[i]}:=\\{u^{[i]}\mid u\in\mathcal{U}\\}$, where we use the standard notation $[i]:=q^{i}$. Consider the Singer subgroup $\mathbb{F}_{q^{n}}^{*}$ and its normalizer $N:=N_{\text{GL}_{n}(q)}(\mathbb{F}_{q^{n}}^{*})\cong\text{Gal}(\mathbb{F}_{q^{n}}\\!\mid\\!\mathbb{F}_{q})\rtimes\mathbb{F}_{q^{n}}^{*}$. Then $\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})=\\{\omega^{i}\mathcal{U}\mid i=0,\ldots,q^{n}-2\\}\ \text{ and }\operatorname{Orb}_{N}(\mathcal{U})=\bigcup_{i=0}^{n-1}\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U}^{[i]}).$ (3.3) For later reference we record the following simple fact about the sizes of these orbits. ###### Remark 3.2 ([10, Cor. 3.13]). Let $\mathcal{U}\in\mathcal{G}_{q}(k,n)$. Suppose $\mathbb{F}_{q^{t}}$ is the largest subfield of $\mathbb{F}_{q^{n}}$ such that $\mathcal{U}$ is closed under multiplication by scalars from $\mathbb{F}_{q^{t}}$ (i.e., $\mathcal{U}$ is an $\mathbb{F}_{q^{t}}$-vector space with respect to the ordinary multiplication in $\mathbb{F}_{q^{n}}$). Then $|\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})|=\frac{q^{n}-1}{q^{t}-1}.$ As a consequence, $|\operatorname{Orb}_{N}(\mathcal{U})|\leq n(q^{n}-1)/(q^{t}-1)$ for the normalizer $N:=N_{\text{GL}_{n}(q)}(\mathbb{F}_{q^{n}}^{*})$. Let us return to general $G$-orbits. In matrix notation, they take the following form. This is the setting in which they have been studied in [20]. ###### Remark 3.3. Let $\mathcal{U}\in\mathcal{G}_{q}(k,n)$ and $G\leq\text{GL}_{n}(q)$. Define $\tilde{G}:=\\{\Phi\circ\phi\circ\Phi^{-1}\mid\phi\in G\\}$ and $\tilde{\mathcal{U}}=\Phi(\mathcal{U})$, where $\Phi$ is the isomorphism from (2.3). 
Then $\tilde{G}\leq\text{GL}_{n}(\mathbb{F}_{q})$ and $\tilde{\mathcal{U}}\subseteq\mathbb{F}_{q}^{n}$, and (2.5) shows that $\Phi(\operatorname{Orb}_{G}(\mathcal{U}))=\operatorname{Orb}_{\tilde{G}}(\tilde{\mathcal{U}}):=\\{\tilde{\mathcal{U}}A\mid A\in\tilde{G}\\}.$ In this paper we want to study linear isometries between orbit codes. ###### Definition 3.4. An _isometry_ on $\text{PG}(n-1,q)$ is a distance-preserving map $\varphi:\text{PG}(n-1,q)\to\text{PG}(n-1,q)$, thus, $\text{d}(\mathcal{U},\mathcal{V})=\text{d}(\varphi(\mathcal{U}),\varphi(\mathcal{V}))$ for all $\mathcal{U},\mathcal{V}\in\text{PG}(n-1,q)$. It is clear that an isometry is bijective. In [19, 2.3–2.8] it has been shown that the dimension-preserving isometries are precisely the elements of the projective general semi-linear group $\text{GL}_{n}(q)/Z\rtimes\text{Aut}(\mathbb{F}_{q})$, where $Z$ is the center of $\text{GL}_{n}(q)$, that is, $Z=\\{m_{a}\mid a\in\mathbb{F}_{q}^{*}\\}$ with $m_{a}$ as in (2.1). Thanks to the Fundamental Theorem of Projective Geometry, these are exactly the automorphisms (i.e., incidence-preserving bijections) of $\text{PG}(n-1,q)$. In this paper we will only consider linear isometries, that is, maps in the projective linear group $\text{PGL}_{n}(q)=\text{GL}_{n}(q)/Z$. Note that a map $\phi\in\text{GL}_{n}(q)$ is in $Z$ if and only if it fixes every $\mathbb{F}_{q}$-subspace of $\mathbb{F}_{q^{n}}$, which is why we may factor out $Z$. For ease of notation, we will simply consider linear isometries in $\text{GL}_{n}(q)$. This will have no impact on our considerations (one can just factor out $Z$ in all groups occurring below). ###### Definition 3.5. Let $G\leq\text{GL}_{n}(q)$ and $\mathcal{U}_{1},\,\mathcal{U}_{2}\in\mathcal{G}_{q}(k,n)$. Consider the $G$-orbits $\mathcal{C}_{i}=\operatorname{Orb}_{G}(\mathcal{U}_{i})$ for $i=1,2$.
Then $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ are called _(linearly) isometric_ if there exists an isomorphism $\psi\in\text{GL}_{n}(q)$ such that $\psi(\mathcal{C}_{1})=\mathcal{C}_{2}$, where $\psi(\mathcal{C}_{1}):=\\{\psi(\mathcal{V})\mid\mathcal{V}\in\mathcal{C}_{1}\\}$. In this case $\psi$ is called a _(linear) isometry_ between $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$. In the special case, where $G=S$ is a Singer subgroup and $\psi(\mathcal{C}_{1})=\mathcal{C}_{2}$ for some $\psi\in N_{\text{GL}_{n}(q)}(S)$, we call the cyclic orbit codes $\operatorname{Orb}_{S}(\mathcal{U}_{1})$ and $\operatorname{Orb}_{S}(\mathcal{U}_{2})$ _Frobenius-isometric_ and $\psi$ a _Frobenius-isometry_. The terminology Frobenius-isometry is motivated by the fact that, thanks to 2.4(a), $N_{\text{GL}_{n}(q)}(S)\cong\text{Gal}(\mathbb{F}_{q^{n}}\\!\mid\\!\mathbb{F}_{q})\rtimes S$. Later in Section 6 we will see that – just like for block codes with the Hamming metric – not every weight-preserving bijection between cyclic orbit codes is an isometry. Hence not every such map extends to an isometry on $\text{PG}(n-1,q)$. The following is easy to see. ###### Theorem 3.6 (see also [20, Thm. 10]). Let $G\leq\text{GL}_{n}(q),\;\psi\in\text{GL}_{n}(q)$, and $\mathcal{U}\in\mathcal{G}_{q}(k,n)$. * (a) Set $G^{\prime}=\psi G\psi^{-1}$ and $\mathcal{U}^{\prime}=\psi(\mathcal{U})$. Then the orbit codes $\mathcal{C}=\operatorname{Orb}_{G}(\mathcal{U})$ and $\mathcal{C}^{\prime}=\operatorname{Orb}_{G^{\prime}}(\mathcal{U}^{\prime})$ are linearly isometric with $\mathcal{C}^{\prime}=\psi(\mathcal{C})$. * (b) Let $\mathcal{C}=\operatorname{Orb}_{G}(\mathcal{U})$ and $\mathcal{C}^{\prime}=\psi(\mathcal{C})$. Then $\mathcal{C}^{\prime}=\operatorname{Orb}_{\psi G\psi^{-1}}(\mathcal{U}^{\prime})$ with $\mathcal{U}^{\prime}=\psi(\mathcal{U})$. As a consequence, if $\psi\in N_{\text{GL}_{n}}(G)$, then $\mathcal{C}$ and $\mathcal{C}^{\prime}$ are isometric $G$-orbit codes. 
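The subspace distance (3.1)–(3.2) and the orbit lengths in 3.2 can be checked on a toy example. The Python sketch below is our own illustration (the field $\mathbb{F}_{16}$, the primitive polynomial $x^{4}+x+1$, and both generating subspaces are assumptions made for demonstration, not data from the paper): it computes the minimum distance of two cyclic orbit codes, one generated by the subfield $\mathbb{F}_{4}$ and one by a generic $2$-dimensional subspace.

```python
from math import log2

def mul(a, b, f=0b10011):
    """Multiplication in F_16 = F_2[x]/(x^4 + x + 1); elements are 4-bit ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= f
        b >>= 1
    return r

def power(a, e):
    r = 1
    for _ in range(e):
        r = mul(r, a)
    return r

OMEGA = 0b10  # primitive element (the class of x)

def cyclic_orbit(space):
    return {frozenset(mul(power(OMEGA, i), u) for u in space) for i in range(15)}

def dim(space):
    # an F_2-subspace with 2^d elements has dimension d
    return int(log2(len(space)))

def dist(V, W):
    """Subspace distance (3.1): dim V + dim W - 2 dim(V intersect W)."""
    return dim(V) + dim(W) - 2 * dim(V & W)

def min_dist(code):
    code = list(code)
    return min(dist(V, W) for i, V in enumerate(code) for W in code[i + 1:])

SUBFIELD = frozenset({0, 1, 6, 7})             # F_4 inside F_16: {0, 1, w^5, w^10}
GENERIC = frozenset({0, 1, OMEGA, 1 ^ OMEGA})  # F_2-span of {1, w}

spread = cyclic_orbit(SUBFIELD)  # length (2^4 - 1)/(2^2 - 1) = 5, as in 3.2
code = cyclic_orbit(GENERIC)     # length (2^4 - 1)/(2 - 1) = 15
```

The subfield $\mathbb{F}_{4}$ generates an orbit of length $5$ with distance $4$ (a spread), while the generic subspace generates an orbit of full length $15$ with distance $2$, matching the orbit lengths predicted by 3.2.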
In order to study isometries between cyclic orbit codes, we need to understand their automorphism groups. This is the subject of the next section. For these considerations it will suffice to restrict to orbit codes generated by subspaces $\mathcal{U}\in\mathcal{G}_{q}(k,n)$, where $k\leq n/2$. In order to see this, we need to briefly introduce the dual code. Let $\omega$ be a primitive element of $\mathbb{F}_{q^{n}}$ and choose the symmetric, non-degenerate, $\mathbb{F}_{q}$-bilinear form $\langle{\cdot}\\!\mid\\!{\cdot}\rangle$ on $\mathbb{F}_{q^{n}}$ defined via $\mbox{$\langle{\omega^{i}}\\!\mid\\!{\omega^{j}}\rangle$}=\delta_{i,j}$ for all $i,j=0,\ldots,n-1$ (this is simply the standard dot product on $\mathbb{F}_{q}^{n}$ under the isomorphism in (2.3)). Define the dual of a subspace $\mathcal{W}\leq\mathbb{F}_{q^{n}}$ in the usual way as $\mathcal{W}^{\perp}=\\{v\in\mathbb{F}_{q^{n}}\mid\mbox{$\langle{v}\\!\mid\\!{w}\rangle$}=0\text{ for all }w\in\mathcal{W}\\}$. Clearly, $\dim\mathcal{W}^{\perp}=n-\dim\mathcal{W}$. The _dual_ of a subspace code $\mathcal{C}\subseteq\text{PG}(n-1,q)$ is simply defined as $\mathcal{C}^{\perp}:=\\{\mathcal{W}^{\perp}\mid\mathcal{W}\in\mathcal{C}\\}$. We can now describe the dual of an orbit code. For an $\mathbb{F}_{q}$-linear map $\phi:\mathbb{F}_{q^{n}}\longrightarrow\mathbb{F}_{q^{n}}$ denote by $\phi^{\dagger}$ its adjoint map, that is, the unique linear map satisfying $\mbox{$\langle{\phi(x)}\\!\mid\\!{y}\rangle$}=\mbox{$\langle{x}\\!\mid\\!{\phi^{\dagger}(y)}\rangle$}$ for all $x,y\in\mathbb{F}_{q^{n}}$. Clearly $\phi^{\dagger}\in\text{GL}_{n}(q)$ for any $\phi\in\text{GL}_{n}(q)$. ###### Remark 3.7. Suppose $\mathcal{C}=\operatorname{Orb}_{G}(\mathcal{U})$ for some subgroup $G\leq\text{GL}_{n}(q)$. Then $\mathcal{C}^{\perp}=\operatorname{Orb}_{G^{\dagger}}(\mathcal{U}^{\perp})$, where $G^{\dagger}=\\{\phi^{\dagger}\mid\phi\in G\\}$, which is clearly a subgroup of $\text{GL}_{n}(q)$.
This follows immediately from $\phi(\mathcal{U})^{\perp}=(\phi^{\dagger})^{-1}(\mathcal{U}^{\perp})$. We call $G^{\dagger}$ the _adjoint group of_ $G$. In the setting of 3.3, where subgroups of the matrix group $\text{GL}_{n}(\mathbb{F}_{q})$ act on subspaces in $\mathbb{F}_{q}^{n}$, this fact also appears in [20, Thm. 18]. The following surprising result tells us that the adjoint groups of all groups of interest in this paper are conjugate to the group itself, and even more, we may choose the same conjugation matrix for all these groups. ###### Theorem 3.8. There exists a map $\rho\in\text{GL}_{n}(q)$ such that $\rho^{-1}G^{\dagger}\rho=G\ \text{ for all }G\in\\{\mathbb{F}_{q^{n}}^{*},\text{Gal}(\mathbb{F}_{q^{n}}\\!\mid\\!\mathbb{F}_{q})\\}\cup\\{\text{GL}_{n/s}(q^{s})\mid s\text{ divisor of }n\\}.$ The proof, which is not needed for the rest of this paper, is postponed to an appendix. Returning to our orbit codes, 3.8 along with 3.7 tells us that the dual of a $G$-orbit, where $G$ is any of the groups above, is again an orbit of the same type, but with respect to an isomorphic field structure; see the paragraph following Lemma 2.3. The isomorphic field structure does not depend on the group. All of this tells us that it suffices to study isometries (and automorphisms) for orbit codes generated by subspaces of dimension at most $n/2$. Hence from now on we only consider subspaces $\mathcal{U}\in\mathcal{G}_{q}(k,n)$, where $k\leq n/2$. ## 4 The Automorphism Groups of Singer Orbits In this section we will derive information about the automorphism groups of cyclic orbit codes. This will be sufficient to discuss isometries between cyclic orbit codes later in this paper. In accordance with earlier notation we will consider automorphisms in $\text{GL}_{n}(q)$ rather than $\text{PGL}_{n}(q)=\text{GL}_{n}(q)/Z$. ###### Definition 4.1. Let $\mathcal{C}\subseteq\text{PG}(n-1,q)$ be a subspace code. 
The _automorphism group_ of $\mathcal{C}$ is defined as the group of linear isometries that fix $\mathcal{C}$, that is, $\text{Aut}(\mathcal{C}):=\\{\psi\in\text{GL}_{n}(q)\mid\psi(\mathcal{C})=\mathcal{C}\\}$. Any subgroup of $\text{Aut}(\mathcal{C})$ is called _a group of automorphisms_ of $\mathcal{C}$. Clearly, for any $G\leq\text{GL}_{n}(q)$ and any orbit code $\mathcal{C}=\operatorname{Orb}_{G}(\mathcal{U})$, the group $G$ is a group of automorphisms of $\mathcal{C}$. Furthermore, if $H\leq\text{GL}_{n}(q)$ then $H\leq\text{Aut}(\operatorname{Orb}_{G}(\mathcal{U}))\Longleftrightarrow\operatorname{Orb}_{H}(\mathcal{U})\subseteq\operatorname{Orb}_{G}(\mathcal{U}).$ (4.1) We will now focus on the case where $\mathcal{C}$ is a cyclic orbit code, that is, $\mathcal{C}=\operatorname{Orb}_{S}(\mathcal{U})$ for some subspace $\mathcal{U}\leq\mathbb{F}_{q^{n}}$ and a Singer subgroup $S\leq\text{GL}_{n}(q)$. Thanks to Lemma 2.3 and 3.6(a) it suffices to study the case where $S=\mathbb{F}_{q^{n}}^{*}$. The following result is immediate with 2.4(c). ###### Proposition 4.2. Let $\mathcal{C}=\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})$ be a cyclic orbit code. Then there exists a divisor $s$ of $n$ such that $\text{GL}_{n/s}(q^{s})\unlhd\text{Aut}(\mathcal{C})\leq N_{\text{GL}_{n}(q)}(\text{GL}_{n/s}(q^{s}))$. The following notion will be convenient throughout. ###### Definition 4.3. A subspace $\mathcal{U}\subseteq\mathbb{F}_{q^{n}}$ is called _generic_ if $\mathcal{U}$ is not contained in a proper subfield of $\mathbb{F}_{q^{n}}$. The next theorem is the main result of this section. It shows that for any subspace $\mathcal{U}$, the parameter $s$ from 4.2 is the smallest divisor of $n$ such that $\mathcal{U}\subseteq\mathbb{F}_{q^{s}}$. As a consequence, the automorphism group of $\mathcal{C}$ contains linear isometries that are outside the normalizer of $\mathbb{F}_{q^{n}}^{*}$ if and only if $\mathcal{U}$ is not generic. 
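Both directions of this dichotomy can be observed numerically. The sketch below is our own toy computation (assumptions: $q=2$, $n=4$, $s=2$, and $\mathbb{F}_{16}$ built from $x^{4}+x+1$; none of these choices come from the paper): it enumerates all $180$ $\mathbb{F}_{4}$-linear automorphisms of $\mathbb{F}_{16}$ and compares the $\text{GL}_{2}(4)$-orbit with the cyclic orbit, once for $\mathcal{U}=\mathbb{F}_{4}$ (not generic) and once for a generic $\mathcal{U}$.

```python
def mul(a, b, f=0b10011):
    """Multiplication in F_16 = F_2[x]/(x^4 + x + 1); elements are 4-bit ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= f
        b >>= 1
    return r

def power(a, e):
    r = 1
    for _ in range(e):
        r = mul(r, a)
    return r

OMEGA = 0b10
F4 = [0, 1, 6, 7]  # the subfield F_4 = {0, 1, w^5, w^10}

def cyclic_orbit(space):
    return {frozenset(mul(power(OMEGA, i), u) for u in space) for i in range(15)}

# coordinates of every element of F_16 with respect to the F_4-basis {1, w}
coords = {a ^ mul(b, OMEGA): (a, b) for a in F4 for b in F4}

def gl2_orbit(space):
    """Orbit of an F_2-subspace under all F_4-linear automorphisms of F_16.

    A map psi is determined by p = psi(1) and r = psi(w), which must be
    F_4-linearly independent; there are 15 * 12 = 180 = |GL_2(4)| such maps."""
    orbit = set()
    for p in range(1, 16):
        line_p = {mul(c, p) for c in F4}  # the F_4-multiples of p
        for r in set(range(16)) - line_p:
            orbit.add(frozenset(mul(coords[u][0], p) ^ mul(coords[u][1], r)
                                for u in space))
    return orbit

SUBFIELD = frozenset(F4)                       # U contained in F_{q^2}
GENERIC = frozenset({0, 1, OMEGA, 1 ^ OMEGA})  # U not in a proper subfield
```

For $\mathcal{U}=\mathbb{F}_{4}$ every $\mathbb{F}_{4}$-linear image is the cyclic shift $\psi(1)\mathcal{U}$, so the $\text{GL}_{2}(4)$-orbit coincides with the cyclic orbit (five subspaces); for the generic $\mathcal{U}$ the $\text{GL}_{2}(4)$-orbit has $30$ elements while the cyclic orbit has only $15$, so by (4.1) the extension-field subgroup is not a group of automorphisms in that case.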
Since any cyclic orbit code contains a subspace $\mathcal{U}$ such that $1\in\mathcal{U}$, we may assume without loss of generality that $1$ is contained in the generating subspace. If, in addition, $\dim(\mathcal{U})=1$, then $\mathcal{U}=\mathbb{F}_{q}$ and $\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})=\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})=\mathcal{G}_{q}(1,n)$ for all divisors $s$ of $n$. Hence from now on we assume $k\geq 2$ and thus $n\geq 4$. ###### Theorem 4.4. Let $S=\mathbb{F}_{q^{n}}^{*}$ and let $\mathcal{U}\in\mathcal{G}_{q}(k,n)$ be such that $1\in\mathcal{U}$. Let $s$ be a divisor of $n$. Then $\mathcal{U}\subseteq\mathbb{F}_{q^{s}}\Longleftrightarrow\text{GL}_{n/s}(q^{s})\leq\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U})).$ (4.2) Moreover, if $\mathbb{F}_{q^{s}}$ is the smallest subfield containing $\mathcal{U}$, then $\text{GL}_{n/s}(q^{s})$ is normal in $\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))$ and thus $\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))\leq N_{\text{GL}_{n}(q)}(\text{GL}_{n/s}(q^{s}))$. As a consequence, $\mathcal{U}\text{ is generic }\Longleftrightarrow\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))\leq N_{\text{GL}_{n}(q)}(S).$ The proof is postponed to the end of this section. We first need some technical results. We start with a lower bound on the size of the orbits $\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})$ for a given divisor $s$ of $n$. As we will see, this size depends on the dimension of the $\mathbb{F}_{q^{s}}$-subspace of $\mathbb{F}_{q^{n}}$ generated by $\mathcal{U}$. ###### Definition 4.5. For any $\mathbb{F}_{q}$-subspace $\mathcal{V}$ of $\mathbb{F}_{q^{n}}$ we set $\widehat{\mathcal{V}}:=\text{span}_{\mathbb{F}_{q^{s}}}(\mathcal{V})$ and $\delta_{s}(\mathcal{V}):=\dim_{\mathbb{F}_{q^{s}}}(\widehat{\mathcal{V}})$. Note that $\delta_{s}(\mathcal{V})s=\dim_{\mathbb{F}_{q}}(\widehat{\mathcal{V}})\leq n$. 
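For a concrete feel for $\delta_{s}$, the following sketch (our own toy computation under the assumptions $q=2$, $n=4$, $s=2$, with $\mathbb{F}_{16}$ built from $x^{4}+x+1$) evaluates $\delta_{2}(\mathcal{U})$ as $\dim_{\mathbb{F}_{q}}(\widehat{\mathcal{U}})/s$, exactly as in the definition above.

```python
from math import log2

def mul(a, b, f=0b10011):
    """Multiplication in F_16 = F_2[x]/(x^4 + x + 1); elements are 4-bit ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= f
        b >>= 1
    return r

F4 = [0, 1, 6, 7]  # the subfield F_{q^s} for q = 2, s = 2

def f2_span(gens):
    """F_2-span of a list of elements of F_16 (addition is XOR)."""
    span = {0}
    for g in gens:
        span |= {x ^ g for x in span}
    return span

def delta_2(space):
    """delta_s(U) for s = 2: the F_4-dimension of span_{F_4}(U),
    computed as dim_{F_2}(span_{F_4}(U)) / s."""
    products = [mul(c, u) for c in F4 for u in space]
    return int(log2(len(f2_span(products)))) // 2

SUBFIELD = {0, 1, 6, 7}  # U = F_4: here delta_2(U) = 1, i.e., U is inside F_{q^2}
GENERIC = {0, 1, 2, 3}   # F_2-span of {1, w}: here delta_2(U) = 2
```

Here $\delta_{2}(\mathbb{F}_{4})=1$ reflects $\mathcal{U}\subseteq\mathbb{F}_{q^{2}}$, while the generic subspace has $\delta_{2}(\mathcal{U})=2$ because $1$ and $\omega$ remain $\mathbb{F}_{4}$-linearly independent; in both cases $\delta_{2}(\mathcal{U})\cdot 2\leq n=4$.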
Clearly $\delta_{s}(\,\cdot\,)$ is invariant under the actions of the groups $\mathbb{F}_{q^{n}}^{*},\,\text{Gal}(\mathbb{F}_{q^{n}}\\!\mid\\!\mathbb{F}_{q})$, and $\text{GL}_{n/s}(q^{s})$. ###### Proposition 4.6. Let $\mathcal{U}\in\mathcal{G}_{q}(k,n)$ be such that $1\in\mathcal{U}$, and let $s$ be a divisor of $n$. Set $\delta_{s}(\mathcal{U})=r$. Then $1\leq r\leq k$ and $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})|\geq\frac{q^{\binom{r}{2}(s-1)}}{\genfrac{[}{]}{0.0pt}{}{k}{r}_{q}}\prod_{i=0}^{r-1}\frac{q^{n-is}-1}{q^{r-i}-1}$ with equality if $r=k$. Note that $(r-1)s<rs=\dim_{\mathbb{F}_{q}}(\widehat{\mathcal{U}})\leq n$. This shows that the right hand side is not $0$. ###### Proof. First let $s=1$. Then $\mathbb{F}_{q^{s}}=\mathbb{F}_{q}$ and $\widehat{\mathcal{U}}=\mathcal{U}$, and thus $r=k$. In this case, $\operatorname{Orb}_{\text{GL}_{n}(q)}(\mathcal{U})$ consists of all $k$-dimensional subspaces of $\mathbb{F}_{q^{n}}$, and hence its size is $\genfrac{[}{]}{0.0pt}{}{n}{k}_{q}$, which is the right hand side above. From now on let $s>1$. Case 1) Let $r=k$. Let $B=(u_{1},\ldots,u_{k})$ be an ordered $\mathbb{F}_{q}$-basis of $\mathcal{U}$. Thanks to $\delta_{s}(\mathcal{U})=k$, the vectors $u_{1},\ldots,u_{k}$ are also $\mathbb{F}_{q^{s}}$-linearly independent. Under the action of $\text{GL}_{n/s}(q^{s})$ the orbit of the basis $B$ consists of all $k$-tuples of $\mathbb{F}_{q^{s}}$-linearly independent vectors in $\mathbb{F}_{q^{n}}$. This implies that $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})|$ is given by the number of $k$-tuples of $\mathbb{F}_{q^{s}}$-linearly independent vectors in $\mathbb{F}_{q^{n}}$ divided by the number of ordered $\mathbb{F}_{q}$-bases for a $k$-dimensional $\mathbb{F}_{q}$-subspace. 
We conclude $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})|=\dfrac{\prod\limits_{i=0}^{k-1}(q^{n}-q^{is})}{\prod\limits_{i=0}^{k-1}(q^{k}-q^{i})}=q^{\binom{k}{2}(s-1)}\prod_{i=0}^{k-1}\frac{q^{n-is}-1}{q^{k-i}-1}.$ Case 2) Let now $1\leq r<k$. There exists a subspace $\mathcal{V}$ of $\mathcal{U}$ such that $\dim_{\mathbb{F}_{q}}(\mathcal{V})=r$ and $\widehat{\mathcal{V}}=\widehat{\mathcal{U}}$. Clearly, each subspace $\psi(\mathcal{U})\in\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})$ contains exactly $K:=\genfrac{[}{]}{0.0pt}{}{k}{r}_{q}$ subspaces of $\mathbb{F}_{q}$-dimension $r$, and thus in particular at most $K$ subspaces of the form $\psi^{\prime}(\mathcal{V})$ for some $\psi^{\prime}\in\text{GL}_{n/s}(q^{s})$. Since each $\psi^{\prime}(\mathcal{V})\in\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{V})$ is contained in at least one $\psi(\mathcal{U})\in\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})$, we obtain $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})|\geq\frac{1}{K}|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{V})|.$ Since $\mathcal{V}$ satisfies $\delta_{1}(\mathcal{V})=\delta_{s}(\mathcal{V})$, we may apply Case 1) to $\mathcal{V}$ to obtain the desired result. ∎ In order to compare the sizes of the $\text{GL}_{n/s}(q^{s})$-orbits and the Singer orbits, we need some technical lemmas. ###### Lemma 4.7. Let $2\leq r\leq n/2$ and $1\leq s<n$ be such that $sr\leq n$. Then $\prod_{i=0}^{r-1}\frac{q^{n-is}-1}{q^{r-i}-1}>q^{r(n-r)-(s-1)\binom{r}{2}}.$ ###### Proof. Note first that by the assumptions $n-is-r+i\geq 1\text{ for all }i=0,\ldots,r-1.$ (4.3) Indeed, $n-is-r+i=n-r-i(s-1)\geq n-r-(r-1)(s-1)=n-rs+s-1$. If $s\geq 2$ the latter is clearly at least $1$, while for $s=1$ we have $n-rs+s-1=n-r\geq n/2\geq 1$, too.
Using the inequality $\frac{q^{a}-1}{q^{b}-1}>q^{a-b}\ \text{ whenever }a>b,$ (4.4) we obtain from (4.3) the inequality $\prod_{i=0}^{r-1}\frac{q^{n-is}-1}{q^{r-i}-1}>q^{M}$, where $M=\sum_{i=0}^{r-1}(n-is-r+i)=r(n-r)-(s-1)\binom{r}{2}$, as desired. ∎ The next lemma comes in two forms, one with a factor $n$ on the right hand side and one without such factor. The version with factor $n$ will be needed in Section 5 when we study orbits under the normalizer of the Singer subgroup. ###### Lemma 4.8. Let $2\leq r\leq k\leq n/2$ and $1\leq s<n$ such that $rs\leq n$. * (a) Let $r\geq 3$. Then $q^{\binom{r}{2}(s-1)}\prod_{i=0}^{r-1}\frac{q^{n-is}-1}{q^{r-i}-1}>n\genfrac{[}{]}{0.0pt}{}{k}{r}_{q}\frac{q^{n}-1}{q-1}.$ (4.5) * (b) If $r=2$, then ${\displaystyle q^{\binom{r}{2}(s-1)}\prod_{i=0}^{r-1}\frac{q^{n-is}-1}{q^{r-i}-1}>\genfrac{[}{]}{0.0pt}{}{k}{r}_{q}\frac{q^{n}-1}{q-1}}$. ###### Proof. (a) Let $r\geq 3$, thus $n\geq 6$. Setting $c=q/(q-1)$ we have $(q^{n}-1)/(q-1)<cq^{n-1}$. Furthermore, $r\geq 3$ implies $r(k-r)+n-1\leq r(n-r)-(n/2+1),$ because $r(k-r)+n-1\leq r(n/2-r)+n-1=r(n-r)-rn/2+n-1\leq r(n-r)-3n/2+n-1$. Using the above inequalities along with $\genfrac{[}{]}{0.0pt}{}{k}{r}_{q}<4q^{r(k-r)}$ (see [15, Lem. 4]) and Lemma 4.7 we compute $n\genfrac{[}{]}{0.0pt}{}{k}{r}_{q}\frac{q^{n}-1}{q-1}<4ncq^{r(k-r)+n-1}<\frac{4nc}{q^{n/2+1}}q^{\binom{r}{2}(s-1)}\prod_{i=0}^{r-1}\frac{q^{n-is}-1}{q^{r-i}-1}.$ Finally, one easily checks that $\frac{4nc}{q^{n/2+1}}=\frac{4n}{q^{n/2}(q-1)}\leq 1$ for $q\geq 3$ and $n\geq 4$ as well as $q=2$ and $n\geq 11$. For the remaining cases ($q=2$ and $n=6,\ldots,10$) Inequality (4.5) can be verified directly. (b) Let $r=2$. In this case the desired inequality is equivalent to $Q:=(q-1)(q^{n}-q^{s})-(q^{k}-1)(q^{k}-q)>0.$ Since $Q$ decreases with increasing $s$ or $k$, we may lower bound $Q$ by using $s=k=n/2$ (ignoring that this may not be an integer). 
This leads to $\displaystyle Q$ $\displaystyle\geq(q-1)(q^{n}-q^{n/2})-(q^{n/2}-1)(q^{n/2}-q)=\big{(}(q-1)q^{n/2}-(q^{n/2}-q)\big{)}(q^{n/2}-1)$ $\displaystyle=\big{(}(q-2)q^{n/2}+q\big{)}(q^{n/2}-1)>0,$ as desired. ∎ ###### Lemma 4.9. Let $s,\,t\in\mathbb{N}$ such that $s\\!\mid\\!t\\!\mid\\!n$ and $s\neq t$. Then $\big{|}\text{GL}_{n/s}(q^{s})\big{|}>\big{|}N_{\text{GL}_{n}(q)}(\text{GL}_{n/t}(q^{t}))\big{|}.$ ###### Proof. Set $\hat{q}=q^{s},\,\hat{n}=n/s$ and let $sa=t$. Then $\big{|}\text{GL}_{n/s}(q^{s})\big{|}=\prod_{i=0}^{\hat{n}-1}(\hat{q}^{\hat{n}}-\hat{q}^{i})$ and from 2.4(d) we know that $\big{|}N_{\text{GL}_{n}(q)}(\text{GL}_{n/t}(q^{t}))\big{|}=t\prod_{i=0}^{\hat{n}/a-1}((\hat{q}^{a})^{\hat{n}/a}-(\hat{q}^{a})^{i})\leq n\prod_{i=0}^{\hat{n}/a-1}(\hat{q}^{\hat{n}}-\hat{q}^{ai}).$ Clearly, all factors in the product on the right hand side appear in $|\text{GL}_{n/s}(q^{s})|$. Furthermore, since $a>1$, the factor $\hat{q}^{\hat{n}}-\hat{q}=q^{n}-q^{s}$ of $|\text{GL}_{n/s}(q^{s})|$ does not appear in $|N_{\text{GL}_{n}(q)}(\text{GL}_{n/t}(q^{t}))|$. Hence the desired inequality follows if we can show that $q^{n}-q^{s}>n$. Since $s\neq n$ and $s$ is a divisor of $n$, we have $q^{n}-q^{s}-n\geq q^{n}-q^{n/2}-n$. One easily verifies that the function $f(x)=q^{x}-q^{x/2}-x$ is indeed positive on $[4,\infty)$. This concludes the proof. ∎ Now we are ready to prove our main result. _Proof of 4.4._ Let $S,\,s,\,\mathcal{U}$ be as in the theorem. Set $\widehat{\mathcal{U}}=\text{span}_{\mathbb{F}_{q^{s}}}(\mathcal{U})$ as in 4.5. Then $1\leq\delta_{s}(\mathcal{U})\leq\dim_{\mathbb{F}_{q}}(\mathcal{U})$ and $\mathcal{U}\subseteq\mathbb{F}_{q^{s}}\Longleftrightarrow 1=\delta_{s}(\mathcal{U})\Longleftrightarrow\widehat{\mathcal{U}}=\mathbb{F}_{q^{s}},$ (4.6) where the last equivalence follows from the fact that $1\in\mathcal{U}$. 
Since $|S|=q^{n}-1$ and $\mathbb{F}_{q}^{*}$ stabilizes $\mathcal{U}$, we have $|\operatorname{Orb}_{S}(\mathcal{U})|\leq(q^{n}-1)/(q-1)$ by the orbit- stabilizer theorem (see also 3.2). Moreover, since $S\leq\text{GL}_{n/s}(q^{s})$, we have $\operatorname{Orb}_{S}(\mathcal{U})\subseteq\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})\ \text{ with equality iff }\ \text{GL}_{n/s}(q^{s})\leq\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U})).$ (4.7) We now prove the equivalence (4.2). “$\Longrightarrow$” Let $\mathcal{U}\subseteq\mathbb{F}_{q^{s}}$, thus $\widehat{\mathcal{U}}=\mathbb{F}_{q^{s}}$. Since $1\in\mathcal{U}$ we obtain $\psi(\mathcal{U})=\\{u\cdot\psi(1)\mid u\in\mathcal{U}\\}$ for every $\psi\in\text{GL}_{n/s}(q^{s})$. Hence $\psi(\mathcal{U})$ is the cyclic shift $\psi(1)\mathcal{U}$ and thus contained in $\operatorname{Orb}_{S}(U)$. This shows $\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})\subseteq\operatorname{Orb}_{S}(\mathcal{U})$ and (4.7) implies the desired result. “$\Longleftarrow$” Suppose $\mathcal{U}\not\subseteq\mathbb{F}_{q^{s}}$. Then $r:=\delta_{s}(\mathcal{U})\geq 2$ by (4.6). 4.6 and Lemma 4.8 imply $\big{|}\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})\big{|}\geq\frac{q^{\binom{r}{2}(s-1)}}{\genfrac{[}{]}{0.0pt}{}{k}{r}_{q}}\prod_{i=0}^{r-1}\frac{q^{n-is}-1}{q^{r-i}-1}>\frac{q^{n}-1}{q-1}\geq|\operatorname{Orb}_{S}(\mathcal{U})|.$ (4.7) implies $\text{GL}_{n/s}(q^{s})\not\leq\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))$. We now turn to the remaining statements of 4.4. Let $\mathbb{F}_{q^{s}}$ be the smallest subfield containing $\mathcal{U}$. We want to show that $\text{GL}_{n/s}(q^{s})$ is normal in $\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))$. To this end set $\mathcal{T}=\big{\\{}t\in\mathbb{N}\,\big{|}\,s\\!\mid\\!t\\!\mid\\!n\big{\\}}$. Clearly $\text{GL}_{n/t}(q^{t})\leq\text{GL}_{n/s}(q^{s})$ for all $t\in\mathcal{T}$. 
From (4.2) we conclude that for any $t\in\mathbb{N}$ $\text{GL}_{n/t}(q^{t})\leq\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))\Longleftrightarrow t\in\mathcal{T}.$ Furthermore, thanks to 2.4(c) one of the subgroups $\text{GL}_{n/t}(q^{t}),\,t\in\mathcal{T}$, is normal in $\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))$. Suppose $\text{GL}_{n/t}(q^{t})$ is normal in $\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))$ for some $t\in\mathcal{T}\setminus\\{s\\}$. Then $\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))\leq N_{\text{GL}_{n}(q)}(\text{GL}_{n/t}(q^{t}))$. Now, Lemma 4.9 along with $\text{GL}_{n/s}(q^{s})\leq\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))$ leads to a contradiction. Thus $\text{GL}_{n/s}(q^{s})$ is the only extension-field subgroup that is normal in $\text{Aut}(\operatorname{Orb}_{S}(\mathcal{U}))$. The rest of the theorem follows. $\square$ ## 5 The Automorphism Groups of Orbits under the Singer Normalizer The considerations of the previous sections allow us to also describe the automorphism group of orbits under the normalizer of the Singer subgroup in most cases. Recall the notation in (3.3). The following theorem is analogous to 4.4, but needs the assumption $\delta_{s}(\mathcal{U})\neq 2$. We will deal with the case $\delta_{s}(\mathcal{U})=2$ afterwards. Throughout this section, let $N:=N_{\text{GL}_{n}(q)}(\mathbb{F}_{q^{n}}^{*})$, i.e., $N$ is the normalizer of $\mathbb{F}_{q^{n}}^{*}$. Recall also that we assume $2\leq k\leq n/2$. ###### Theorem 5.1. Let $s$ be a divisor of $n$ and $\mathcal{U}\in\mathcal{G}_{q}(k,n)$ be such that $1\in\mathcal{U}$ and such that $\delta_{s}(\mathcal{U})\neq 2$. 
Then $\mathcal{U}\subseteq\mathbb{F}_{q^{s}}\Longleftrightarrow\text{GL}_{n/s}(q^{s})\leq\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U})).$ (5.1) Moreover, if $\mathbb{F}_{q^{s}}$ is the smallest subfield containing $\mathcal{U}$ and $\delta_{t}(\mathcal{U})\neq 2$ for all divisors $t$ of $n$, then $\text{GL}_{n/s}(q^{s})$ is normal in $\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))$ and thus $\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))\leq N_{\text{GL}_{n}(q)}(\text{GL}_{n/s}(q^{s}))$. As a consequence: * (a) If $\mathcal{U}$ is generic and $\delta_{t}(\mathcal{U})\neq 2$ for all divisors $t$ of $n$, then $\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))=N$; * (b) If $\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))=N$, then $\mathcal{U}$ is generic. Note that the left hand side of (5.1) means that $\delta_{s}(\mathcal{U})=1$. Hence the excluded case $\delta_{s}(\mathcal{U})=2$ may be regarded as a transitional case, and we will see below that in that case either is possible: $\text{GL}_{n/s}(q^{s})\leq\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))$ or $\text{GL}_{n/s}(q^{s})\not\leq\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))$. ###### Proof. For “$\Longrightarrow$” of (5.1) recall that $\operatorname{Orb}_{N}(\mathcal{U})=\bigcup_{i=0}^{n-1}\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U}^{[i]})$, see (3.3). Since $1\in\mathcal{U}^{[i]}$ and $\mathcal{U}^{[i]}\subseteq\mathbb{F}_{q^{s}}$ for all $i$, the desired statement follows from 4.4. “$\Longleftarrow$” The proof is similar to the one of 4.4. Suppose $\mathcal{U}\not\subseteq\mathbb{F}_{q^{s}}$. Thanks to our assumption this implies $\delta_{s}(\mathcal{U})=:r\geq 3$. 
Thus 4.6, Lemma 4.8(a), and 3.2 lead to $\big{|}\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})\big{|}\geq\frac{q^{\binom{r}{2}(s-1)}}{\genfrac{[}{]}{0.0pt}{}{k}{r}_{q}}\prod_{i=0}^{r-1}\frac{q^{n-is}-1}{q^{r-i}-1}>n\frac{q^{n}-1}{q-1}\geq|\operatorname{Orb}_{N}(\mathcal{U})|,$ and therefore $\text{GL}_{n/s}(q^{s})\not\leq\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))$. The rest of the proof is identical to the one for 4.4. For Part (b) notice that “$\Rightarrow$” of (5.1) holds true for general $\delta_{s}(\mathcal{U})$. ∎ We now turn to the remaining case $r=\delta_{s}(\mathcal{U})=2$. In this case there are indeed instances where $\text{GL}_{n/s}(q^{s})\leq\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))$ even though $\mathcal{U}\not\subseteq\mathbb{F}_{q^{s}}$. Clearly, this containment is equivalent to $\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})\subseteq\operatorname{Orb}_{N}(\mathcal{U})$. In all known examples we even have $\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})=\operatorname{Orb}_{N}(\mathcal{U})$. In fact, we believe that we have $\operatorname{Orb}_{N}(\mathcal{U})\subseteq\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})$ for all subspaces $\mathcal{U}$ (i.e., $\mathcal{U}^{q}=\phi(\mathcal{U})$ for some $\phi\in\text{GL}_{n/s}(q^{s})$), but unfortunately we are not able at this point to prove this statement. ###### Example 5.2. Let $(q,n,k,s)=(2,4,2,2)$. A $2$-dimensional subspace $\mathcal{U}\leq\mathbb{F}_{2^{4}}$ with $\delta_{2}(\mathcal{U})=2$ is of the form $\mathcal{U}=\text{span}_{\mathbb{F}_{2}}\\{1,\alpha\\}$ for some $\alpha\in\mathbb{F}_{2^{4}}\setminus\mathbb{F}_{2^{2}}$. One can directly verify (using, e.g., SageMath) that all these subspaces generate the same $N$-orbit, and this orbit agrees with the $\text{GL}_{2}(4)$-orbit. The orbit size is $n/2(2^{n}-1)/(2-1)=30$ (see also 5.4 below). ###### Example 5.3. Let $(q,n,k,s)=(2,8,4,4)$. 
Let $\alpha\in\mathbb{F}_{2^{8}}\setminus\mathbb{F}_{2^{4}}$ and consider the subspace $\mathcal{U}=\text{span}_{\mathbb{F}_{2^{2}}}\\{1,\alpha\\}$. Then $\mathcal{U}\not\subseteq\mathbb{F}_{2^{4}}$, hence $\delta_{4}(\mathcal{U})=2$, and one straightforwardly verifies that $\operatorname{Orb}_{\text{GL}_{2}(2^{4})}(\mathcal{U})=\operatorname{Orb}_{N}(\mathcal{U})$, and the orbit has size $340$ (for comparison, the lower bound from 4.6 is 292). These observations can also be seen as follows. Let $S=\mathbb{F}_{2^{8}}^{*}$. * (i) By 3.2 the Singer orbit has size $|\operatorname{Orb}_{S}(\mathcal{U})|=(2^{n}-1)/(2^{2}-1)=85$. * (ii) As 5.4 below shows, $\mathcal{U}^{[4]}\in\operatorname{Orb}_{S}(\mathcal{U})$; thus $\sigma^{4}$ stabilizes $\operatorname{Orb}_{S}(\mathcal{U})$, where $\sigma$ is the Frobenius automorphism. Furthermore, no other non-trivial element of the Galois group $\text{Gal}(\mathbb{F}_{2^{8}}\\!\mid\\!\mathbb{F}_{2})$ stabilizes $\operatorname{Orb}_{S}(\mathcal{U})$ (this is true for these specific parameters, but not in the general situation of 5.4). Together with (i) this shows that $|\operatorname{Orb}_{N}(\mathcal{U})|=4\\!\cdot\\!85=340$. * (iii) Since $\mathcal{U}=\text{span}_{\mathbb{F}_{2^{2}}}\\{1,\alpha\\}$ and $\mathbb{F}_{2^{2}}\subseteq\mathbb{F}_{2^{4}}$, an $\mathbb{F}_{2^{4}}$-linear isomorphism $\phi$ maps $\mathcal{U}$ to the space $\text{span}_{\mathbb{F}_{2^{2}}}\\{\phi(1),\phi(\alpha)\\}$. As a consequence, $\operatorname{Orb}_{\text{GL}_{2}(2^{4})}(\mathcal{U})$ consists of all subspaces in $\mathbb{F}_{2^{8}}$ that are $2$-dimensional over $\mathbb{F}_{2^{2}}$ and not $1$-dimensional over $\mathbb{F}_{2^{4}}$, i.e., not a cyclic shift of $\mathbb{F}_{2^{4}}$. Thus $|\operatorname{Orb}_{\text{GL}_{2}(2^{4})}(\mathcal{U})|=\genfrac{[}{]}{0.0pt}{}{4}{2}_{4}-|\operatorname{Orb}_{S}(\mathbb{F}_{2^{4}})|=\genfrac{[}{]}{0.0pt}{}{4}{2}_{4}-(2^{8}-1)/(2^{4}-1)=340$. 
* (iv) Finally, $\operatorname{Orb}_{N}(\mathcal{U})\subseteq\operatorname{Orb}_{\text{GL}_{2}(2^{4})}(\mathcal{U})$. To see this, it suffices to show that $\mathcal{U}^{[i]}\in\operatorname{Orb}_{\text{GL}_{2}(2^{4})}(\mathcal{U})$ for all $i\in\\{0,\ldots,n-1\\}$. Note that $\mathcal{U}^{[i]}=\text{span}_{\mathbb{F}_{2^{2}}}\\{1,\alpha^{[i]}\\}$. Since $1$ and $\alpha^{[i]}$ are $\mathbb{F}_{2^{4}}$-linearly independent, there exists $\phi\in\text{GL}_{2}(2^{4})$ such that $\phi(1)=1$ and $\phi(\alpha)=\alpha^{[i]}$. Hence $\mathcal{U}^{[i]}=\phi(\mathcal{U})\in\operatorname{Orb}_{\text{GL}_{2}(2^{4})}(\mathcal{U})$. We wish to add that all subspaces of the form $\text{span}_{\mathbb{F}_{2^{2}}}\\{1,\alpha\\}$ with $\alpha\in\mathbb{F}_{2^{8}}\setminus\mathbb{F}_{2^{4}}$ generate the same orbit, and this is the only $N$-orbit of a $4$-dimensional subspace that coincides with the $\text{GL}_{n/s}(q^{s})$-orbit. Finally, since $\mathcal{U}$ is actually an $\mathbb{F}_{4}$-vector space and $\mathbb{F}_{2^{8}}=\mathbb{F}_{4^{4}}$, we may regard all of this also as an example for the parameters $(q,n,k,s)=(4,4,2,2)$. Thus $\operatorname{Orb}_{\text{GL}_{2}(4^{2})}(\mathcal{U})=\operatorname{Orb}_{N^{\prime}}(\mathcal{U})$, where $N^{\prime}=N_{\text{GL}_{4}(4)}(\mathbb{F}_{4^{4}}^{*})$. The subspaces $\mathcal{U}$ in the above examples are both of the form $\mathcal{U}=\text{span}_{\mathbb{F}_{q^{a}}}\\{1,\alpha\\}\subseteq\mathbb{F}_{q^{n}}$, where $a=n/4$ and $s=k=n/2$ and $\mathcal{U}\not\subseteq\mathbb{F}_{q^{s}}$. In 5.6 below we will show that for no other subspace of this type does the $\text{GL}_{n/s}(q^{s})$-orbit coincide with the $N$-orbit. We start by showing that all such subspaces $\mathcal{U}$ satisfy $\mathcal{U}^{[s]}\in\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})$. ###### Proposition 5.4. Let $a\in\mathbb{N},\,n=4a$, and $s=k=2a$. 
Choose $\alpha\in\mathbb{F}_{q^{n}}\setminus\mathbb{F}_{q^{s}}$ and set $\mathcal{U}=\text{span}_{\mathbb{F}_{q^{a}}}\\{1,\alpha\\}\subseteq\mathbb{F}_{q^{n}}$. Then $\mathcal{U}^{[s]}\in\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})$. Thus $|\operatorname{Orb}_{N}(\mathcal{U})|\leq\frac{n}{2}\,\frac{q^{n}-1}{q^{a}-1}.$ ###### Proof. By 3.2 we have $|\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})|=(q^{n}-1)/(q^{a}-1)$. Thus the second statement follows once we establish $\mathcal{U}^{[s]}\in\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})$. To do so we proceed as follows. 1) We show first that $\alpha^{[s]}\mathcal{U}\cap\mathcal{U}\neq\\{0\\}.$ (5.2) Since both $\mathcal{U}$ and $\alpha^{[s]}\mathcal{U}=\text{span}_{\mathbb{F}_{q^{a}}}\\{\alpha^{[s]},\alpha\alpha^{[s]}\\}$ have dimension $2a=n/2$, (5.2) is equivalent to $\alpha^{[s]}\mathcal{U}+\mathcal{U}\neq\mathbb{F}_{q^{n}}$. Hence we have to show that $1,\,\alpha,\,\alpha^{[s]},\,\alpha\alpha^{[s]}$ are linearly dependent over $\mathbb{F}_{q^{a}}$. We show that there exist $\lambda,\mu,\nu\in\mathbb{F}_{q^{a}}$ such that $\lambda+\mu\alpha+\mu\alpha^{[s]}+\nu\alpha\alpha^{[s]}=0.$ (5.3) Raising (5.3) to the power $[a]$ and using $s=2a$ we obtain a second equation, which together with (5.3) can be written as $\begin{pmatrix}1&\alpha+\alpha^{[2a]}&\alpha\alpha^{[2a]}\\\ 1&\alpha^{[a]}+\alpha^{[3a]}&\alpha^{[a]}\alpha^{[3a]}\end{pmatrix}\begin{pmatrix}\lambda\\\ \mu\\\ \nu\end{pmatrix}=0.$ (5.4) The matrix is row equivalent to $\begin{pmatrix}1&\alpha+\alpha^{[2a]}&\alpha\alpha^{[2a]}\\\ 0&\alpha^{[a]}+\alpha^{[3a]}-\alpha-\alpha^{[2a]}&\alpha^{[a]}\alpha^{[3a]}-\alpha\alpha^{[2a]}\end{pmatrix}.$ Now we can find a solution of the desired form. Suppose first that $\alpha-\alpha^{[a]}+\alpha^{[2a]}-\alpha^{[3a]}\neq 0$. Set $\nu=1$. 
Then (5.4) has the unique (normalized) solution $\nu=1,\quad\mu=\frac{\alpha^{[a]}\alpha^{[3a]}-\alpha\alpha^{[2a]}}{\alpha-\alpha^{[a]}+\alpha^{[2a]}-\alpha^{[3a]}},\quad\lambda=-\mu(\alpha+\alpha^{[2a]})-\alpha\alpha^{[2a]}.$ Using that $4a=n$, one easily verifies that $\mu^{[a]}=\mu$ and $\lambda=\lambda^{[a]}$, and thus $(\lambda,\mu,\nu)\in\mathbb{F}_{q^{a}}^{3}$. If $\alpha\\!-\\!\alpha^{[a]}\\!+\\!\alpha^{[2a]}\\!-\\!\alpha^{[3a]}=0$, (5.3) has the solution $(\lambda,\mu,\nu)=(-(\alpha+\alpha^{[2a]}),1,0)$, which again is in $\mathbb{F}_{q^{a}}^{3}$. All of this establishes (5.2). 2) (5.2) implies that also $\alpha^{-[s]}\mathcal{U}\cap\mathcal{U}\neq\\{0\\}$. Choose $\delta\in\alpha^{-[s]}\mathcal{U}\cap\mathcal{U}\setminus\\{0\\}$ and let $\gamma\in\mathbb{F}_{q^{n}}^{*}$ be such that $\gamma^{[s]}=\delta$. Then $\gamma=\gamma^{[2s]}=\delta^{[s]}\in\mathcal{U}^{[s]}$. Moreover, $\gamma^{[s]}\alpha^{[s]}\in\mathcal{U}$ and thus $\gamma\alpha\in\mathcal{U}^{[s]}$. All of this shows that $\gamma\mathcal{U}=\text{span}_{\mathbb{F}_{q^{a}}}\\{\gamma,\gamma\alpha\\}=\mathcal{U}^{[s]}$. Thus, $\mathcal{U}^{[s]}\in\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})$, as desired. ∎ ###### Remark 5.5. 5.4 only provides an upper bound for $|\operatorname{Orb}_{N}(\mathcal{U})|$. In fact, there even exist subspaces $\mathcal{U}$ of the specified form for which $\mathcal{U}^{[i]}\in\operatorname{Orb}_{S}(\mathcal{U})$ for all $i$ and thus $\operatorname{Orb}_{N}(\mathcal{U})=\operatorname{Orb}_{S}(\mathcal{U})$; for instance for $q=3$ and $a=2$. On the other hand, for $q=2$ and $a=2$ we have equality in 5.4 for all subspaces of the given form. ###### Corollary 5.6. Let the data be as in 5.4. Then $\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})=\operatorname{Orb}_{N}(\mathcal{U})\Longleftrightarrow(q,a)\in\\{(2,1),\,(2,2)\\}.$ ###### Proof. “$\Longleftarrow$” Examples 5.2 and 5.3. “$\Longrightarrow$” Let $(q,a)\not\in\\{(2,1),\,(2,2)\\}$. 
We show that $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})|>|\operatorname{Orb}_{N}(\mathcal{U})|$. Thanks to 5.4 it suffices to show $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})|-n/2(q^{n}-1)/(q^{a}-1)>0$. Using $r=\delta_{s}(\mathcal{U})=2$ and $k=2a=s=n/2$ along with the lower bound in 4.6 the inequality follows if we prove $Q:=q^{2a-1}(q^{a}-1)-2a(q^{2a-1}-1)>0$. We have $Q>(q^{2a-1}-1)(q^{a}-1-2a).$ The first factor is clearly positive. As for the second factor, note that the function $f(x)=q^{x}-(2x+1)$ is non-negative on $[1,\infty)$ if $q\geq 3$, while for $q=2$ this is the case for the interval $[3,\infty)$. This shows that $Q>0$ whenever $(q,a)\not\in\\{(2,1),\,(2,2)\\}$ and concludes the proof. ∎ There is one more known example where the $\text{GL}_{n/s}(q^{s})$-orbit coincides with the $N$-orbit even though the subspace is not contained in $\mathbb{F}_{q^{s}}$. In fact, it is the only such example for $s=1$. Indeed, note that $s=1$ together with $\delta_{s}(\mathcal{U})=2$ forces $\dim(\mathcal{U})=2$. In 5.9 below we will list all $2$-dimensional subspaces for which the orbits coincide. ###### Example 5.7. Let $(q,n,s)=(2,5,1)$ and choose any subspace $\mathcal{U}\in\mathcal{G}_{2}(2,5)$. Then $\operatorname{Orb}_{\text{GL}_{5}(2)}(\mathcal{U})$ is the entire Grassmannian $\mathcal{G}_{2}(2,5)$. It has cardinality $\genfrac{[}{]}{0.0pt}{}{5}{2}_{2}=155=5(2^{5}-1)/(2-1)$ and satisfies $\operatorname{Orb}_{\text{GL}_{5}(2)}(\mathcal{U})=\operatorname{Orb}_{N}(\mathcal{U})$. We now turn to cases where Inequality (4.5) of Lemma 4.8(a) holds true even with $r=2$. Recall that $\delta_{s}(\mathcal{U})=2$ implies $s\leq n/2$ because $\delta_{s}(\mathcal{U})s\leq n$. ###### Proposition 5.8. Let $k\leq 3n/8$ and $s\leq n/2$ be a divisor of $n$. Let $\mathcal{U}\in\mathcal{G}_{q}(k,n)$ be such that $1\in\mathcal{U}$. 
Then * (a) If $\delta_{s}(\mathcal{U})=2$, then $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})\big{|}>|\operatorname{Orb}_{N}(\mathcal{U})|$ and thus $\text{GL}_{n/s}(q^{s})\not\leq\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))$. * (b) If $\mathbb{F}_{q^{s}}$ is the smallest subfield containing $\mathcal{U}$, then $\text{GL}_{n/s}(q^{s})$ is normal in $\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))$ and thus $\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))\leq N_{\text{GL}_{n}(q)}(\text{GL}_{n/s}(q^{s}))$. * (c) $\mathcal{U}$ is generic iff $\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))=N$. ###### Proof. (a) Let $r:=\delta_{s}(\mathcal{U})=2$. We show that (4.5) holds true for most parameters and discuss the remaining values subsequently. Inequality (4.5) is equivalent to $Q:=(q-1)(q^{n}-q^{s})-n(q^{k}-1)(q^{k}-q)>0.$ (5.5) The left hand side decreases for increasing $s$, and thus we may assume $s=n/2$. With the aid of (4.4) we compute $\displaystyle Q$ $\displaystyle\geq(q-1)q^{n/2}(q^{n/2}-1)-nq(q^{k}-1)(q^{k-1}-1)$ $\displaystyle>\big{(}(q-1)q^{n/2}q^{n/2-k+1}-nq(q^{k}-1)\big{)}(q^{k-1}-1)$ $\displaystyle=\Big{(}\frac{(q-1)q^{n-k}}{q^{k}-1}-n\Big{)}q(q^{k}-1)(q^{k-1}-1)$ $\displaystyle>\big{(}(q-1)q^{n-2k}-n\big{)}q(q^{k}-1)(q^{k-1}-1)$ (5.6) $\displaystyle>\big{(}(q-1)q^{n/4}-n\big{)}q(q^{k}-1)(q^{k-1}-1),$ where in the last step we used that $k\leq 3n/8$. Clearly the last three factors are positive. As for the first factor, consider the function $f(x)=(x-1)x^{n/4}-n$. For fixed $n$ the function is increasing on $[2,\infty)$. Furthermore, $f(2)\geq 0\text{ for }n\geq 16,\quad f(3)\geq 0\text{ for }n\geq 8,\ f(4)\geq 0\text{ for }n\geq 4.$ Thus $Q>0$ if (i) $q\geq 4$ and $n\geq 4$, (ii) $q=3$ and $n\geq 8$, or (iii) $q=2$ and $n\geq 16$. For the cases $q=2$ with $4\leq n\leq 15$ and $q=3$ with $4\leq n\leq 7$, direct verification shows that (5.5) holds true unless $(q,n,k)\in\\{(2,8,3),(2,11,4)\\}$. We consider these cases separately. 
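The direct verification just mentioned amounts to a finite scan, which can be reproduced with integer arithmetic alone; a plain-Python sketch (function and variable names are ours):

```python
def holds_5_5(q, n, k, s):
    # Inequality (5.5): (q-1)(q^n - q^s) - n(q^k - 1)(q^k - q) > 0
    return (q - 1) * (q**n - q**s) - n * (q**k - 1) * (q**k - q) > 0

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Scan q = 2 with 4 <= n <= 15 and q = 3 with 4 <= n <= 7, over 2 <= k <= 3n/8
# and divisors s of n with s <= n/2, collecting the triples (q, n, k) for which
# (5.5) fails for some admissible s.
exceptions = set()
for q, n_values in ((2, range(4, 16)), (3, range(4, 8))):
    for n in n_values:
        for k in range(2, 3 * n // 8 + 1):
            for s in divisors(n):
                if 2 * s <= n and not holds_5_5(q, n, k, s):
                    exceptions.add((q, n, k))
print(sorted(exceptions))  # [(2, 8, 3), (2, 11, 4)]
```

The scan recovers exactly the two exceptional triples treated below.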
i) Let $(q,n,k)=(2,11,4)$. Then $s=1$ (since $s$ is a divisor of $n$). But then every $4$-dimensional subspace $\mathcal{U}$ satisfies $\delta_{s}(\mathcal{U})=4$, and thus there is nothing to show. ii) Let $(q,n,k)=(2,8,3)$. In this case $s\in\\{2,4\\}$. Exhaustive consideration of all $3$-dimensional subspaces $\mathcal{U}$ in $\mathbb{F}_{2^{8}}$ with $\delta_{s}(\mathcal{U})=2$ shows that in each case the orbit $\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})$ is strictly larger than $n(2^{n}-1)=2040$, which is an upper bound for $|\operatorname{Orb}_{N}(\mathcal{U})|$. To be precise, for $s=2$, there is exactly one $\text{GL}_{n/s}(q^{s})$-orbit and it has size $5355$, while for $s=4$ there exist one orbit of size $61200$, two orbits of size $15300$, and one orbit of size $5100$. For comparison, the lower bound in 4.6 only provides $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})|\geq 1530$ if $s=2$ and $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})|\geq 1458$ if $s=4$. For (b) and (c) note that Part (a) and 5.1 imply the equivalence $[\mathcal{U}\subseteq\mathbb{F}_{q^{t}}\Longleftrightarrow\text{GL}_{n/t}(q^{t})\leq\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))]$ for any divisor $t$ of $n$ with $t\leq n/2$. Thus the proof follows as in 4.4. ∎ Now we can fully cover the case where $k=r=2$. Let $N:=N_{\text{GL}_{n}(q)}(\mathbb{F}_{q^{n}}^{*})$. ###### Proposition 5.9. Let $n\geq 4$ and $1\leq s\leq n/2$ be a divisor of $n$. The following are equivalent. * (i) There exists $\mathcal{U}\in\mathcal{G}_{q}(2,n)$ such that $\delta_{s}(\mathcal{U})=2$ and $\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})=\operatorname{Orb}_{N}(\mathcal{U})$. * (ii) $(q,n,s)\in\\{(2,4,2),(2,5,1),(4,4,2)\\}$. ###### Proof. “(ii) $\Rightarrow$ (i)” Examples 5.2, 5.7, and 5.3. “(i) $\Rightarrow$ (ii)” By 5.8 we must have $k=2>3n/8$, hence $n\leq 5$. 
Since $s$ is a divisor of $n$ and $s\leq n/2$, this leaves the cases $(n,s)\in\\{(4,1),(4,2),(5,1)\\}$ with arbitrary $q$. Using 4.6 for the case $r=k=2$ and $|\operatorname{Orb}_{N}(\mathcal{U})|\leq n(q^{n}-1)/(q-1)$, we conclude that $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})\big{|}>|\operatorname{Orb}_{N}(\mathcal{U})|$ if $Q:=q^{s-1}(q^{n-s}-1)-n(q^{2}-1)>0$. Case 1: $(n,s)=(4,1)$. In this case $Q>0$ iff $q\geq 4$. Thus it remains to consider $q\in\\{2,3\\}$. Since $s=1$, every $2$-dimensional subspace $\mathcal{U}$ satisfies $\delta_{s}(\mathcal{U})=2$ and $|\operatorname{Orb}_{\text{GL}_{4}(q)}(\mathcal{U})|=\genfrac{[}{]}{0.0pt}{}{4}{2}_{q}$. Furthermore, exhaustive verification shows that $|\operatorname{Orb}_{N}(\mathcal{U})|\leq n/2(q^{n}-1)/(q-1)$. Thus $|\operatorname{Orb}_{\text{GL}_{4}(q)}(\mathcal{U})|>|\operatorname{Orb}_{N}(\mathcal{U})|$. Case 2: $(n,s)=(4,2)$. In this case $Q>0$ for all $q\geq 5$, and exhaustive verification shows that for $q=3$ every $2$-dimensional subspace $\mathcal{U}$ in $\mathbb{F}_{3^{4}}$ with $\delta_{2}(\mathcal{U})=2$ satisfies $|\operatorname{Orb}_{N}(\mathcal{U})|\leq n/2(q^{n}-1)/(q-1)<|\operatorname{Orb}_{\text{GL}_{4}(3)}(\mathcal{U})|$ (where the first inequality also follows from 5.4). This leaves the cases $(q,n,s)\in\\{(2,4,2),(4,4,2)\\}$. Case 3: $(n,s)=(5,1)$. In this case $Q>0$ iff $q\geq 3$, and thus only $(q,n,s)=(2,5,1)$ remains. ∎ Similarly we can cover all cases where $k=3$ (hence $n\geq 6$). In this case, $\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})$ is always strictly bigger than $\operatorname{Orb}_{N}(\mathcal{U})$. ###### Proposition 5.10. Let $n\geq 6$ and $s\leq n/2$ be a divisor of $n$. Then for every subspace $\mathcal{U}\in\mathcal{G}_{q}(3,n)$ such that $\delta_{s}(\mathcal{U})=2$ we have $|\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})|>|\operatorname{Orb}_{N}(\mathcal{U})|$. ###### Proof. 
By 5.8 we only need to verify the cases where $k=3>3n/8$, thus $n<8$. Since $\delta_{s}(\mathcal{U})=2\neq\dim(\mathcal{U})=3$, we must have $s\neq 1$. This leaves $(n,s)\in\\{(6,2),(6,3)\\}$. Using $n=6,\,k=3$ and $s\in\\{2,3\\}$ one verifies that (5.5) is true whenever $q\geq 7$. Exhaustive verification for $q\in\\{2,3,4,5\\}$ establishes the desired result. ∎ As the proofs in this section have shown, for given parameters $(n,k,s)$ and $r=2$ the inequality in (4.5) is true for sufficiently large $q$ (for instance, if $k<n/2$, then this is the case for $q\geq n+1$ as (5.6) shows). Thus, any further example where the $N$-orbit agrees with the $\text{GL}_{n/s}(q^{s})$-orbit requires a relatively small field size. We strongly believe that no further example exists and thus close this section with ###### Conjecture 5.11. Let $s\leq n/2$ be a divisor of $n$ and $\mathcal{U}\leq\mathbb{F}_{q^{n}}$ be such that $\delta_{s}(\mathcal{U})=2$ and $\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})\subseteq\operatorname{Orb}_{N}(\mathcal{U})$. Then the orbits coincide and $\mathcal{U}$ is one of the subspaces from Examples 5.2, 5.3, and 5.7. ## 6 Isometries of Orbit Codes In this section we turn to the question of when two orbit codes (under the Singer subgroup or its normalizer) are linearly isometric. Our first result provides a criterion for when two cyclic orbit codes are not linearly isometric. ###### Theorem 6.1. Let $\mathcal{C},\mathcal{C}^{\prime}$ be distinct Singer orbits. If $\text{Aut}(\mathcal{C}^{\prime})=N_{\text{GL}_{n}(q)}(\mathbb{F}_{q^{n}}^{*})\subseteq\text{Aut}(\mathcal{C})$, then $\mathcal{C}$ and $\mathcal{C}^{\prime}$ are not linearly isometric. Our proof is an adaptation of [2, Thm. 5], where the authors prove an analogous result for $q$-Steiner systems. ###### Proof. We prove the contrapositive. 
Suppose that $\mathcal{C}$ and $\mathcal{C}^{\prime}$ are linearly isometric, so that there exists $\psi\in\text{GL}_{n}(q)$ such that $\psi(\mathcal{C})=\mathcal{C}^{\prime}$. Let $\tau\in N:=N_{\text{GL}_{n}(q)}(\mathbb{F}_{q^{n}}^{*})$. Then our assumptions on $\text{Aut}(\mathcal{C}^{\prime})$ and $\text{Aut}(\mathcal{C})$ imply $\psi\circ\tau\circ\psi^{-1}(\mathcal{C}^{\prime})=\psi\circ\tau(\mathcal{C})=\psi(\mathcal{C})=\mathcal{C}^{\prime}$, and thus $\psi\circ\tau\circ\psi^{-1}\in\text{Aut}(\mathcal{C}^{\prime})=N$. This shows that $\psi$ is in the normalizer of $N$. But the latter is $N$ itself thanks to 2.4(a), and hence $\psi\in N\subseteq\text{Aut}(\mathcal{C})$, which yields $\mathcal{C}^{\prime}=\psi(\mathcal{C})=\mathcal{C}$. ∎ The main result of this section shows that Singer orbits of generic subspaces are linearly isometric iff they are Frobenius-isometric. This drastically reduces the workload when finding isometry classes of such codes. ###### Theorem 6.2. Let $\mathcal{U},\,\mathcal{U}^{\prime}\in\mathcal{G}_{q}(k,n)$ such that $1\in\mathcal{U}^{\prime}$ and $\mathcal{U}^{\prime}$ is generic. * (a) Let $S=\mathbb{F}_{q^{n}}^{*}$. Then $\operatorname{Orb}_{S}(\mathcal{U})$ and $\operatorname{Orb}_{S}(\mathcal{U}^{\prime})$ are linearly isometric iff they are Frobenius-isometric. * (b) Let $k\leq 3n/8$ or $\delta_{s}(\mathcal{U})\geq 3$ for all divisors $s$ of $n$. Let $N=N_{\text{GL}_{n}(q)}(\mathbb{F}_{q^{n}}^{*})$. Then $\operatorname{Orb}_{N}(\mathcal{U})$ and $\operatorname{Orb}_{N}(\mathcal{U}^{\prime})$ are linearly isometric iff they are equal. ###### Proof. (a) Only “$\Longrightarrow$” needs proof. Set $\mathcal{C}=\operatorname{Orb}_{S}(\mathcal{U})$ and $\mathcal{C}^{\prime}=\operatorname{Orb}_{S}(\mathcal{U}^{\prime})$. Let $\psi\in\text{GL}_{n}(q)$ be such that $\psi(\mathcal{C})=\mathcal{C}^{\prime}$. 3.6(b) tells us that $\mathcal{C}^{\prime}=\operatorname{Orb}_{\psi S\psi^{-1}}(\mathcal{U}^{\prime\prime})$, where $\mathcal{U}^{\prime\prime}=\psi(\mathcal{U})$. 
Hence $\text{Aut}(\mathcal{C}^{\prime})$ contains the Singer subgroups $S$ and $\psi S\psi^{-1}$. By 4.4 the automorphism group $\text{Aut}(\mathcal{C}^{\prime})$ is contained in $N_{\text{GL}_{n}(q)}(S)$. However, by 2.4(b) $N_{\text{GL}_{n}(q)}(S)$ contains only one Singer subgroup. This implies $\psi S\psi^{-1}=S$, and thus $\psi\in N_{\text{GL}_{n}(q)}(S)$. (b) Let $\mathcal{C}:=\operatorname{Orb}_{N}(\mathcal{U})$ and $\psi(\mathcal{C})=\mathcal{C}^{\prime}:=\operatorname{Orb}_{N}(\mathcal{U}^{\prime})$. Then $\mathcal{C}^{\prime}=\operatorname{Orb}_{\psi N\psi^{-1}}(\mathcal{U}^{\prime\prime})$, where $\mathcal{U}^{\prime\prime}=\psi(\mathcal{U})$, and thus $\psi N\psi^{-1}\leq\text{Aut}(\mathcal{C}^{\prime})=N$, where the last identity follows from 5.1 and 5.8. Hence $\psi\in N$ thanks to 2.4(a), and thus $\mathcal{C}^{\prime}=\psi(\mathcal{C})=\mathcal{C}$. ∎ As the proof shows, Part (b) above is true for all subspaces that satisfy $\text{Aut}(\operatorname{Orb}_{N}(\mathcal{U}))=N$. Since the three outliers from Examples 5.2, 5.3, and 5.7 are each the only orbit of their size in the respective ambient space, they trivially satisfy the equivalence in (b) above even though their automorphism group is much larger. We close this section with some examples and a comparison of isometries and weight-preserving bijections between cyclic orbit codes, where we define the weight of a codeword in $\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})$ as the distance to the ‘reference space’ $\mathcal{U}$. Since we will exclusively consider cyclic orbit codes, we write from now on $\operatorname{Orb}(\mathcal{U})$ instead of $\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})$. In [9] we studied the weight distribution of cyclic orbit codes $\operatorname{Orb}(\mathcal{U})$. We will see below that codes with the same weight distribution may not be isometric. Before providing details we first summarize the results from [9]. Recall the notation from (3.1) and (3.2). 
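This weight is cheap to compute over $\mathbb{F}_{2}$: a minimal sketch, assuming the usual subspace distance $\text{d}(\mathcal{U},\mathcal{V})=\dim\mathcal{U}+\dim\mathcal{V}-2\dim(\mathcal{U}\cap\mathcal{V})=2\dim(\mathcal{U}+\mathcal{V})-\dim\mathcal{U}-\dim\mathcal{V}$, with subspaces encoded as lists of bitmask spanning vectors (the encoding and helper names are ours):

```python
def rank_f2(vectors):
    # Rank over F_2 of bitmask-encoded vectors, via a reduced XOR basis
    basis = []  # stored vectors have pairwise distinct leading bits
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)  # cancel b's leading bit if it occurs in v
        if v:
            basis.append(v)
    return len(basis)

def subspace_distance(U, V):
    # d(U, V) = dim U + dim V - 2 dim(U ∩ V) = 2 dim(U + V) - dim U - dim V
    return 2 * rank_f2(U + V) - rank_f2(U) - rank_f2(V)

# span{e1, e2} and span{e2, e3} in F_2^3 meet in a line, so the distance is 2
print(subspace_distance([0b100, 0b010], [0b010, 0b001]))  # 2
```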
As before we assume $k\leq n/2$. ###### Definition 6.3. Let $\mathcal{U}\in\mathcal{G}_{q}(k,n)$. Define $\omega_{i}=|\\{\alpha\,\mathcal{U}\in\operatorname{Orb}(\mathcal{U})\mid\alpha\in\mathbb{F}_{q^{n}}^{*},\text{d}(\mathcal{U},\alpha\,\mathcal{U})=i\\}|$ for $i=0,\ldots,2k$. We call $(\omega_{0},\ldots,\omega_{2k})$ the _weight distribution_ of $\operatorname{Orb}(\mathcal{U})$. Clearly $\omega_{0}=1$ and $\omega_{i}=0$ for $i=1,\ldots,d-1$, where $d=\text{d}_{\rm{s}}(\operatorname{Orb}(\mathcal{U}))$. Obviously, the weight distribution is trivial for spread codes (i.e., if $\text{d}_{\rm{s}}(\operatorname{Orb}(\mathcal{U}))=2k$). From (3.1) it follows that $\text{d}_{\rm{s}}(\operatorname{Orb}(\mathcal{U}))=2(k-\ell)$, where $\ell=\max\\{\dim(\mathcal{U}\cap\alpha\,\mathcal{U})\mid\alpha\in\mathbb{F}_{q^{n}}^{*},\,\alpha\,\mathcal{U}\neq\mathcal{U}\\}$. In 6.4 below we list some facts about the weight distribution. Part (a) shows that all cyclic orbit codes with distance $2(k-1)$ have the same weight distribution. Hence there exists a weight-preserving bijection between any two such codes. However, as we will see below, the codes are not necessarily isometric. Subspaces $\mathcal{U}$ that generate cyclic orbit codes with distance $2(k-1)$ are known as Sidon spaces; see [18], where constructions of such spaces can also be found. Not surprisingly, codes with distance up to $2k-4$ do not share the same weight distribution in general. For distance equal to $2(k-2)$, Part (b) below provides information about the weight distribution. Further details about the parameter $r$ in Part (b) can be found in [9, Sec. 4]. However, it is not yet fully understood which values this parameter can assume in general. ###### Theorem 6.4 ([9, Thms. 3.7 and 4.1]). Let $\mathcal{U}\in\mathcal{G}_{q}(k,n)$ be such that $1\in\mathcal{U}$. Let $\text{d}_{\rm{s}}(\operatorname{Orb}(\mathcal{U}))=2(k-\ell)$, where $\ell>0$. Set $Q=(q^{k}-1)(q^{k}-q)/(q-1)^{2}$ and $N=(q^{n}-1)/(q-1)$. * (a) Suppose $\ell=1$. 
Then $|\operatorname{Orb}(\mathcal{U})|=N$ and $\big{(}\omega_{2k-2},\,\omega_{2k}\big{)}=\big{(}Q,\ N-Q-1\big{)}.$ * (b) Suppose $\ell=2$ and $|\operatorname{Orb}(\mathcal{U})|=N$. Then there exist $r\in\mathbb{N}_{0}$ and $\epsilon\in\\{0,1\\}$ such that $\big{(}\omega_{2k-4},\,\omega_{2k-2},\,\omega_{2k}\big{)}=\big{(}\epsilon q+rq(q+1),\ Q-(q+1)\omega_{2k-4},\ N-\omega_{2k-2}-\omega_{2k-4}-1\big{)}.$ The case $\epsilon=1$ occurs iff $\mathcal{U}$ contains the subfield $\mathbb{F}_{q^{2}}$ (which implies that $n$ is even). In the following examples we list all isometry classes of the subspaces in question along with their automorphism group. In most cases the size of the isometry class is determined by the automorphism group as follows. ###### Remark 6.5. Let $\mathcal{C}=\operatorname{Orb}_{\mathbb{F}_{q^{n}}^{*}}(\mathcal{U})$ be a cyclic orbit code with automorphism group $A$ contained in $N_{\text{GL}_{n}(q)}(\mathbb{F}_{q^{n}}^{*})$. Then the isometry class of $\mathcal{C}$ consists of $\nu$ cyclic orbit codes, where $\nu=n(q^{n}-1)/|A|$. This is due to 6.2, which tells us that two cyclic orbit codes are isometric iff they belong to the same orbit under the normalizer of the Singer subgroup $\mathbb{F}_{q^{n}}^{*}$. In the following examples, the total number of orbits also follows from the formula for the number of Singer orbits of a given length that is provided in [5, Thm. 2.1] for general $(q,n,k)$. ###### Example 6.6. Let $(q,n,k)=(2,6,3)$. There exist $23$ cyclic orbit codes generated by $3$-dimensional subspaces. One of them is $\operatorname{Orb}(\mathbb{F}_{2^{3}})$, which is a spread code (i.e., it consists of $9$ subspaces and its subspace distance is $6$; hence the union of its subspaces is $\mathbb{F}_{2^{6}}$). Its automorphism group is $\text{Aut}(\operatorname{Orb}(\mathbb{F}_{2^{3}}))=N_{\text{GL}_{6}(2)}(\text{GL}_{2}(2^{3}))$. 
This follows directly from 4.4 along with the fact that $\text{Gal}(\mathbb{F}_{2^{3}}\\!\mid\\!\mathbb{F}_{2})$ acts trivially on $\operatorname{Orb}(\mathbb{F}_{2^{3}})$. Clearly, this is the only orbit generated by a non-generic subspace of $\mathbb{F}_{2^{6}}$. Even more, it is the only orbit with a generating subspace $\mathcal{U}$ such that $\delta_{3}(\mathcal{U})\neq 2$ (see 4.5). The other $22$ orbits have length $2^{6}-1$, and their automorphism group is contained in $N_{\text{GL}_{6}(2)}(\mathbb{F}_{2^{6}}^{*})$ thanks to 4.4. They classify as follows. Note that distance $4$ corresponds to Case (a) of the above theorem and distance $2$ to Case (b). In the latter case we also present the value of $\omega_{2k-4}=\omega_{2}$ (which fully determines the weight distribution). It is, of course, invariant under isometry and thus identical for all orbits in the isometry class. Finally, we also present $\delta_{2}(\mathcal{U})$ for any subspace $\mathcal{U}$ in any of the orbits. * (a) Orbits with automorphism group $\mathbb{F}_{2^{6}}^{*}$: – 1 isometry class, consisting of orbits with distance 4 ($\delta_{2}(\mathcal{U})=3$). – 1 isometry class, consisting of orbits with distance 2 and $\omega_{2}=6$ ($\delta_{2}(\mathcal{U})=3$). * (b) Orbits with automorphism group $\text{Gal}(\mathbb{F}_{2^{6}}\\!\mid\\!\mathbb{F}_{2^{3}})\rtimes\mathbb{F}_{2^{6}}^{*}$: – 1 isometry class, consisting of orbits with distance $2$ and $\omega_{2}=2$ ($\delta_{2}(\mathcal{U})=2$). – 1 isometry class, consisting of orbits with distance $2$ and $\omega_{2}=6$ ($\delta_{2}(\mathcal{U})=3$). * (c) Orbits with automorphism group $\text{Gal}(\mathbb{F}_{2^{6}}\\!\mid\\!\mathbb{F}_{2^{2}})\rtimes\mathbb{F}_{2^{6}}^{*}$: – 1 isometry class, consisting of orbits with distance $4$ ($\delta_{2}(\mathcal{U})=3$). – 1 isometry class, consisting of orbits with distance $2$ and $\omega_{2}=2$ ($\delta_{2}(\mathcal{U})=2$). ###### Example 6.7. Let $(q,n,k)=(2,7,3)$. 
In this case, there are no proper subfields of $\mathbb{F}_{2^{7}}$ to be taken into account, and in particular $\epsilon=0$ in Case (b) of 6.4. There exist $93$ cyclic orbit codes generated by $3$-dimensional subspaces. All of them have length $2^{7}-1$. They classify as follows. * (a) Orbits with automorphism group $\mathbb{F}_{2^{7}}^{*}$: – 10 isometry classes, consisting of orbits with distance 4. – 3 isometry classes, consisting of orbits with distance 2 and $\omega_{2}=6$. * (b) Orbits with automorphism group $\text{Gal}(\mathbb{F}_{2^{7}}\\!\mid\\!\mathbb{F}_{2})\rtimes\mathbb{F}_{2^{7}}^{*}$: – 2 isometry classes, each consisting of a single orbit with distance $4$. ###### Example 6.8. Let $(q,n,k)=(2,8,3)$. There exist $381$ cyclic orbit codes generated by a $3$-dimensional subspace. All orbits have length $2^{8}-1$. Exactly one orbit is generated by a subspace contained in $\mathbb{F}_{2^{4}}$. Clearly, all other orbits are generated by subspaces $\mathcal{U}$ with $\delta_{4}(\mathcal{U})=2$. The orbits classify as follows. We present the data as in 6.6. * (a) Orbits with automorphism group $\mathbb{F}_{2^{8}}^{*}$: – 38 isometry classes, consisting of orbits with distance 4 ($\delta_{2}(\mathcal{U})=3$). – 4 isometry classes, consisting of orbits with distance 2 and $\omega_{2}=6$ ($\delta_{2}(\mathcal{U})=3$). – 2 isometry classes, consisting of orbits with distance 2 and $\omega_{2}=2$ ($\delta_{2}(\mathcal{U})=2$). * (b) Orbits with automorphism group $\text{Gal}(\mathbb{F}_{2^{8}}\\!\mid\\!\mathbb{F}_{2^{4}})\rtimes\mathbb{F}_{2^{8}}^{*}$: – 3 isometry classes, consisting of orbits with distance 4 ($\delta_{2}(\mathcal{U})=3$). – 2 isometry classes, consisting of orbits with distance 2 and $\omega_{2}=6$ ($\delta_{2}(\mathcal{U})=3$). – 1 isometry class, consisting of orbits with distance 2 and $\omega_{2}=2$ ($\delta_{2}(\mathcal{U})=2$). 
* (c) Orbits with automorphism group $\text{Gal}(\mathbb{F}_{2^{8}}\\!\mid\\!\mathbb{F}_{2^{2}})\rtimes\mathbb{F}_{2^{8}}^{*}$: – 2 isometry classes, consisting of orbits with distance 4 ($\delta_{2}(\mathcal{U})=3$). * (d) Orbits with automorphism group $\text{Gal}(\mathbb{F}_{2^{4}}\\!\mid\\!\mathbb{F}_{2})\rtimes\text{GL}_{2}(2^{4})$: – 1 isometry class, consisting of a single orbit with distance 2 and $\omega_{2}=14$ ($\delta_{2}(\mathcal{U})=2$). This cyclic orbit code is the only orbit generated by a subspace contained in $\mathbb{F}_{2^{4}}$ (and it contains $\mathbb{F}_{2^{2}}$). ## Conclusion and Open Problems We studied orbits of $\mathbb{F}_{q}$-subspaces of $\mathbb{F}_{q^{n}}$ under the Singer subgroup and under the normalizer of the Singer group. For cyclic orbit codes generated by generic subspaces we proved that a linear isometry between such orbits is contained in the normalizer of the Singer group. The result implies that, for most parameter cases, distinct orbits under the normalizer of the Singer subgroup are not linearly isometric. The following questions remain. * (a) We strongly believe that the isometry result for orbits under the normalizer is true for all parameter cases. This would follow if 5.11 can be established, that is: the automorphism group of a normalizer orbit generated by a subspace $\mathcal{U}$ does not contain the field-extension subgroup $\text{GL}_{n/s}(q^{s})$ if $\mathcal{U}$ is not contained in $\mathbb{F}_{q^{s}}$ – unless $\mathcal{U}$ is one of the exceptional cases from Examples 5.2, 5.3, and 5.7. * (b) Furthermore, our isometry result in 6.2 is true only for orbits generated by generic subspaces. It is an open question whether the same result is true for arbitrary orbits. 
* (c) Finally, as we briefly address in Section 5 we believe that any subspace $\mathcal{U}\subseteq\mathbb{F}_{q^{n}}$ satisfies $\operatorname{Orb}_{N}(\mathcal{U})\subseteq\operatorname{Orb}_{\text{GL}_{n/s}(q^{s})}(\mathcal{U})$, where $N=N_{\text{GL}_{n}(q)}(\mathbb{F}_{q^{n}}^{*})$ and $s\leq n/2$ is any divisor of $n$. We have to leave this to future research. ## Appendix _Proof of 3.8:_ Let $\omega$ be a primitive element of $\mathbb{F}_{q^{n}}$ and $f=X^{n}-\sum_{i=0}^{n-1}f_{i}X^{i}\in\mathbb{F}_{q}[X]$ be its minimal polynomial. We proceed in several steps. Step 1: Recall the maps $m_{a}$ from (2.1). According to 2.1 we identify $\mathbb{F}_{q^{n}}^{*}$ with $\mathbb{F}_{q^{n}}^{*}=\mbox{$\langle{m_{\omega}}\rangle$}=\\{m_{\omega^{i}}\mid i=0,\ldots,q^{n}-2\\}.$ We determine all maps $\rho\in\text{GL}_{n}(q)$ such that $\rho^{-1}\circ m_{\omega}^{\dagger}\circ\rho=m_{\omega}$. These maps then clearly satisfy $\rho^{-1}(\mathbb{F}_{q^{n}}^{*})^{\dagger}\rho=\mathbb{F}_{q^{n}}^{*}$, which is what we want. The reader may recall the fact that any matrix $A$ is similar to its transpose $A^{\sf t}$ (use for instance the fact that they share the same invariant factors). Hence there exists at least one such map $\rho\in\text{GL}_{n}(q)$. However, we need to determine all of them explicitly in order to select a suitable one in a later step. Consider the recurrence relation $x_{j+n}=\sum_{i=0}^{n-1}f_{i}x_{j+i}\ \text{ for }\ j\geq 0.$ (A.1) For every initial condition $x_{0}=a_{0},\ldots,x_{n-1}=a_{n-1}$ the recurrence (A.1) has a unique solution, which we denote by $(a_{i})_{i\in\mathbb{N}_{0}}$. Set $\mathcal{R}:=\\{\rho\in\text{GL}_{n}(q)\mid\rho^{-1}\circ m_{\omega}^{\dagger}\circ\rho=m_{\omega}\\}$. Thus every $\rho\in\mathcal{R}$ satisfies $\rho^{-1}(\mathbb{F}_{q^{n}}^{*})^{\dagger}\rho=\mathbb{F}_{q^{n}}^{*}$. 
We show $\mathcal{R}=\bigg{\\{}\rho\in\text{End}_{\mathbb{F}_{q}}(\mathbb{F}_{q^{n}})\bigg{|}\begin{array}[]{l}\exists(a_{0},\ldots,a_{n-1})\in\mathbb{F}_{q^{n}}\setminus 0:\\\\[2.58334pt] \rho(\omega^{i})=\sum_{j=0}^{n-1}a_{j+i}\omega^{j}\ \text{ for }\ i\in\mathbb{N}_{0}\end{array}\bigg{\\}}.$ (A.2) Note that this identity tells us that the maps $\rho\in\mathcal{R}$ are fully determined by the value of $\rho(1)$, which is given as $\sum_{j=0}^{n-1}a_{j}\omega^{j}$. ‘$\subseteq$’ Let $\rho\in\mathcal{R}$. Set $\rho(1)=a\in\mathbb{F}_{q^{n}}^{*}$ and write $a=\sum_{j=0}^{n-1}a_{j}\omega^{j}$. Since $\mbox{$\langle{\omega^{i}}\\!\mid\\!{\omega^{j}}\rangle$}=\delta_{i,j}$ for $i,j=0,\ldots,n-1$ this means $\mbox{$\langle{\rho(1)}\\!\mid\\!{\omega^{j}}\rangle$}=a_{j}$ for $j=0,\ldots,n-1$. Inducting on $i$, we now show that $\rho(\omega^{i})=\sum_{j=0}^{n-1}a_{j+i}\omega^{j}\ \text{ for all }\ i\in\mathbb{N}_{0}.$ (A.3) It is clearly true for $i=0$. For the induction step note first that the identity $m_{\omega}^{\dagger}\circ\rho=\rho\circ m_{\omega}$ is equivalent to $\mbox{$\langle{\rho(\omega y)}\\!\mid\\!{z}\rangle$}=\mbox{$\langle{\rho(y)}\\!\mid\\!{\omega z}\rangle$}\text{ for all }y,z\in\mathbb{F}_{q^{n}}.$ (A.4) Assuming now (A.3) and using (A.4) we obtain $\mbox{$\langle{\rho(\omega^{i+1})}\\!\mid\\!{\omega^{j}}\rangle$}=\mbox{$\langle{\rho(\omega^{i})}\\!\mid\\!{\omega^{j+1}}\rangle$}=a_{j+1+i}$ for $j=0,\ldots,n-2$ and $\mbox{$\langle{\rho(\omega^{i+1})}\\!\mid\\!{\omega^{n-1}}\rangle$}=\mbox{$\langle{\rho(\omega^{i})}\\!\mid\\!{\omega^{n}}\rangle$}=\sum_{j=0}^{n-1}f_{j}\mbox{$\langle{\rho(\omega^{i})}\\!\mid\\!{\omega^{j}}\rangle$}=\sum_{j=0}^{n-1}f_{j}a_{j+i}=a_{n+i}$, where the last step follows from (A.1). Hence $\rho(\omega^{i+1})=\sum_{j=0}^{n-1}a_{j+i+1}\omega^{j}$, and this establishes (A.3). All of this shows that $\rho$ is in the set on the right hand side of (A.2). ‘$\supseteq$’ Let $\rho$ be in the set on the right hand side of (A.2).
In order to establish (A.4) it suffices to show that $\mbox{$\langle{\rho(\omega^{i+1})}\\!\mid\\!{\omega^{j}}\rangle$}=\mbox{$\langle{\rho(\omega^{i})}\\!\mid\\!{\omega^{j+1}}\rangle$}\ \text{ for all }\ i,j=0,\ldots,n-1.$ (A.5) The left hand side simplifies to $\mbox{$\langle{\rho(\omega^{i+1})}\\!\mid\\!{\omega^{j}}\rangle$}=\sum_{\ell=0}^{n-1}a_{\ell+i+1}\mbox{$\langle{\omega^{\ell}}\\!\mid\\!{\omega^{j}}\rangle$}=a_{j+i+1}$ for all $j=0,\ldots,n-1$. For $j=0,\ldots,n-2$ the right hand side of (A.5) turns into $\mbox{$\langle{\rho(\omega^{i})}\\!\mid\\!{\omega^{j+1}}\rangle$}=\sum_{\ell=0}^{n-1}a_{\ell+i}\mbox{$\langle{\omega^{\ell}}\\!\mid\\!{\omega^{j+1}}\rangle$}=a_{j+1+i}$, while for $j=n-1$ we have $\mbox{$\langle{\rho(\omega^{i})}\\!\mid\\!{\omega^{j+1}}\rangle$}=\mbox{$\langle{\rho(\omega^{i})}\\!\mid\\!{\omega^{n}}\rangle$}=\sum_{\ell=0}^{n-1}a_{\ell+i}\sum_{r=0}^{n-1}f_{r}\mbox{$\langle{\omega^{\ell}}\\!\mid\\!{\omega^{r}}\rangle$}=\sum_{\ell=0}^{n-1}a_{\ell+i}f_{\ell}=a_{n+i}=a_{j+1+i},$ where the penultimate identity follows from (A.1). All of this establishes (A.5). In order to complete the proof of (A.2) it remains to show that $\rho$ is an isomorphism. Assume $\rho(b)=0$ for some $b\in\mathbb{F}_{q^{n}}$. Then (A.4) implies $0=\mbox{$\langle{\rho(b)}\\!\mid\\!{z}\rangle$}=\mbox{$\langle{\rho(\omega b)}\\!\mid\\!{\omega^{-1}z}\rangle$}\ \text{ for all }\ z\in\mathbb{F}_{q^{n}}.$ Hence $\rho(\omega b)=0$ and thus $\rho(\omega^{i}b)=0$ for all $i=0,\ldots,q^{n}-2$. Since $\rho(1)=a\neq 0$, this implies $b=0$. Thus $\rho$ is injective and an isomorphism. Step 2: Let $\sigma:\mathbb{F}_{q^{n}}\longrightarrow\mathbb{F}_{q^{n}}$ be the Frobenius homomorphism, thus $\sigma(z)=z^{q}$ for all $z\in\mathbb{F}_{q^{n}}$. We now want to determine a map $\rho\in\mathcal{R}$ satisfying $\rho^{-1}\circ\sigma^{\dagger}\circ\rho=\sigma^{-1}$. Consider the $\mathbb{F}_{q}$-linear map $\xi:\mathbb{F}_{q^{n}}\longrightarrow\mathbb{F}_{q^{n}},\ z\longmapsto\sigma(z)-z$. 
Clearly, $\ker\xi=\mathbb{F}_{q}$. Thus, $\dim(\text{im}\xi)=n-1$. Pick now $\rho(1)\in(\text{im}\xi)^{\perp}\setminus 0\ \text{(which is unique up to $\mathbb{F}_{q}$-scalar multiples).}$ Thanks to (A.2) this determines a unique map $\rho\in\mathcal{R}$. The choice of $\rho(1)$ implies $\mbox{$\langle{\rho(1)}\\!\mid\\!{\sigma(z)}\rangle$}=\mbox{$\langle{\rho(1)}\\!\mid\\!{z}\rangle$}\ \text{ for all }z\in\mathbb{F}_{q^{n}}.$ (A.6) With the aid of (A.4) we obtain $\mbox{$\langle{\rho(\omega^{i})}\\!\mid\\!{\omega^{j}}\rangle$}=\mbox{$\langle{\rho(1)}\\!\mid\\!{\omega^{i+j}}\rangle$}=\mbox{$\langle{\rho(1)}\\!\mid\\!{\omega^{(i+j)q}}\rangle$}=\mbox{$\langle{\rho(\omega^{iq})}\\!\mid\\!{\omega^{jq}}\rangle$}\ \text{ for all }\ i,j\in\mathbb{N}_{0}.$ Since $1,\omega,\ldots,\omega^{n-1}$ is an $\mathbb{F}_{q}$-basis of $\mathbb{F}_{q^{n}}$, this implies $\mbox{$\langle{\rho(y)}\\!\mid\\!{z}\rangle$}=\mbox{$\langle{\rho(\sigma(y))}\\!\mid\\!{\sigma(z)}\rangle$}$ for all $z,y\in\mathbb{F}_{q^{n}}$. The latter is equivalent to $\mbox{$\langle{\rho(\sigma^{-1}(y))}\\!\mid\\!{z}\rangle$}=\mbox{$\langle{\rho(y)}\\!\mid\\!{\sigma(z)}\rangle$}$ for all $z,y\in\mathbb{F}_{q^{n}}$, and this means $\rho\circ\sigma^{-1}=\sigma^{\dagger}\circ\rho$. All of this implies $\rho^{-1}\text{Gal}(\mathbb{F}_{q^{n}}\\!\mid\\!\mathbb{F}_{q})^{\dagger}\rho=\text{Gal}(\mathbb{F}_{q^{n}}\\!\mid\\!\mathbb{F}_{q})$. Step 3: Let $s$ be a divisor of $n$ and consider the subgroup $\text{GL}_{n/s}(q^{s})$ of $\text{GL}_{n}(q)$. Let $\gamma\in\text{GL}_{n/s}(q^{s})$. We have to show that $\rho^{-1}\circ\gamma^{\dagger}\circ\rho=:\hat{\gamma}$ is in $\text{GL}_{n/s}(q^{s})$, i.e., that $\hat{\gamma}$ is $\mathbb{F}_{q^{s}}$-linear. Let $N=(q^{n}-1)/(q^{s}-1)$.
Then $\mathbb{F}_{q^{s}}=\mathbb{F}_{q}[\omega^{N}]$ and thus it suffices to prove that $\hat{\gamma}(\omega^{N}y)=\omega^{N}\hat{\gamma}(y)\ \text{ for all }\ y\in\mathbb{F}_{q^{n}}.$ (A.7) Using $\rho\circ\hat{\gamma}=\gamma^{\dagger}\circ\rho$ together with (A.4) and the $\mathbb{F}_{q^{s}}$-linearity of $\gamma$, we compute for $y,z\in\mathbb{F}_{q^{n}}$ $\langle{\rho\circ\hat{\gamma}(\omega^{N}y)}\\!\mid\\!{z}\rangle$ $\displaystyle=\mbox{$\langle{\rho(\omega^{N}y)}\\!\mid\\!{\gamma(z)}\rangle$}=\mbox{$\langle{\rho(y)}\\!\mid\\!{\omega^{N}\gamma(z)}\rangle$}=\mbox{$\langle{\rho(y)}\\!\mid\\!{\gamma(\omega^{N}z)}\rangle$}$ $\displaystyle=\mbox{$\langle{\gamma^{\dagger}(\rho(y))}\\!\mid\\!{\omega^{N}z}\rangle$}=\mbox{$\langle{\rho(\hat{\gamma}(y))}\\!\mid\\!{\omega^{N}z}\rangle$}=\mbox{$\langle{\rho(\omega^{N}\hat{\gamma}(y))}\\!\mid\\!{z}\rangle$}.$ Since this is true for all $z\in\mathbb{F}_{q^{n}}$ and since $\rho$ is an isomorphism, this implies (A.7). All of this proves $\rho^{-1}\text{GL}_{n/s}(q^{s})^{\dagger}\rho=\text{GL}_{n/s}(q^{s})$. $\square$ ## References * [1] E. Ben-Sasson, T. Etzion, A. Gabizon, and N. Raviv. Subspace polynomials and cyclic subspace codes. IEEE Trans. Inform. Theory, IT-62:1157–1165, 2016. * [2] M. Braun, T. Etzion, P. R. J. Östergård, A. Vardy, and A. Wasserman. Existence of $q$-analogs of Steiner systems. Forum of Mathematics, Pi, 4:e7, 2016. * [3] B. Chen and H. Liu. Constructions of cyclic constant dimension codes. Des. Codes Cryptogr., 86:1267–1279, 2018. * [4] A. Cossidente and M. Resmini. Remarks on Singer Cyclic Groups and Their Normalizers. Des. Codes Cryptogr., 32:97–102, 2004. * [5] K. Drudge. On the orbits of Singer groups and their subgroups. Electron. J. Combin., 9:#R15, 2002. * [6] A. Elsenhans, A. Kohnert, and A. Wassermann. Construction of codes for network coding. In Proc. 19th Int. Symp. Math. Theory Netw. Syst., pages 1811–1814, Budapest, Hungary, 2010. * [7] T. Etzion and A. Vardy.
Error-correcting codes in projective space. IEEE Trans. Inform. Theory, IT-57:1165–1173, 2011. * [8] N. Gill. On a conjecture of Degos. Cah. Topol. Géom. Différ. Catég., 57:229–237, 2016. * [9] H. Gluesing-Luerssen and H. Lehmann. Distance distributions of cyclic orbit codes. Preprint 2019. arXiv: 1912.05522. Accepted for publication in Des. Codes Cryptogr., DOI: s10623-020-00823-x, 2020. * [10] H. Gluesing-Luerssen, K. Morrison, and C. Troha. Cyclic orbit codes and stabilizer subfields. Adv. Math. Commun., 9:177–197, 2015. * [11] J. Gomez-Calderon. On the stabilizer of companion matrices. Proc. Japan Acad., 69, Ser. A:140–143, 1993. * [12] M. D. Hestenes. Singer groups. Canadian Journal of Mathematics, 22(3):492–513, 1970. * [13] B. Huppert. Endliche Gruppen I. Grundlehren der mathematischen Wissenschaften, Bd. 134. Springer, Berlin, Heidelberg, New York, 1967. * [14] W. M. Kantor. Linear groups containing a Singer cycle. Journal of Algebra, 62(1):232–234, 1980. * [15] R. Koetter and F. R. Kschischang. Coding for errors and erasures in random network coding. IEEE Trans. Inform. Theory, IT-54:3579–3591, 2008. * [16] A. Kohnert and S. Kurz. Construction of large constant dimension codes with a prescribed minimum distance. In J. Calmet, W. Geiselmann, and J. Müller-Quade, editors, Mathematical Methods in Computer Science, volume 5393, pages 31–42. Lecture Notes in Computer Science; Springer, Berlin, 2008. arXiv: 0807.3212. * [17] K. Otal and F. Özbudak. Cyclic subspace codes via subspace polynomials. Des. Codes Cryptogr., 85:191–204, 2017. * [18] R. M. Roth, N. Raviv, and I. Tamo. Construction of Sidon Spaces with Applications to Coding. IEEE Trans. Inform. Theory, IT-64(6):4412–4422, June 2018. * [19] A.-L. Trautmann. Isometry and automorphisms of constant dimension codes. Adv. Math. Commun., 7:147–160, 2013. * [20] A.-L. Trautmann, F. Manganiello, M. Braun, and J. Rosenthal.
Cyclic orbit codes. IEEE Trans. Inform. Theory, IT-59:7386–7404, 2013. * [21] W. Zhao and X. Tang. A characterization of cyclic subspace codes via subspace polynomials. Finite Fields Appl., 57:1–12, 2019.
††Andrew Stasiuk and Lane Gunderman are the co-first-authors of this work and contributed equally. # Generalized Collective Lamb Shift Andrew Stasiuk<EMAIL_ADDRESS>The Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada Lane G. Gunderman<EMAIL_ADDRESS>The Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada Mohamed El Mandouh The Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada Troy W. Borneman The Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada High Q Technologies Inc, Waterloo, Ontario, N2L 3G1, Canada David G. Cory The Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada Department of Chemistry, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada (August 27, 2024) ###### Abstract Hybrid quantum systems consisting of an ensemble of two–level systems interacting with a single–mode electromagnetic field are important for the development of quantum information processors and other quantum devices. These systems are characterized by the set of energy level hybridizations, split by collective Lamb shifts, that occur when the ensemble and field mode interact coherently with high cooperativity. Computing the full set of Lamb shifts is generally intractable given the high dimensionality of many devices. In this work, we present a set of techniques that allow a compact description of the Lamb shift statistics across all collective angular momentum subspaces of the ensemble without using restrictive approximations on the state space. 
We use these techniques to both analyze the Lamb shift in all subspaces and excitation manifolds and to describe the average observed Lamb shift weighted over the degeneracies of all subspaces. cQED, Tavis–Cummings Model, Lamb shift ## I Introduction The field of quantum electrodynamics (QED) was initiated by Lamb’s discovery that an electron interacts with its own radiation field to split the energies of the $2s_{1/2}$ and $2p_{1/2}$ levels of the Hydrogen atom lamb1947fine . This splitting, referred to as the Lamb shift, demonstrates that the electromagnetic field and vacuum are quantized. Cavity QED systems, where two–level quantum systems (such as atoms) are confined in a high-finesse cavity, also present features analogous to the Lamb shift. The light–matter interaction breaks degeneracies between separable field and atom states with the same number of excitations, $k$, hybridizing the states with a splitting that scales in magnitude as $\sqrt{k}$. Provided the cavity finesse is high enough and the atomic coherence long enough, this hybridization may be observed experimentally haroche2006exploring . Understanding the structure of cavity QED Lamb shifts has become particularly important recently due to their importance in the development of large-scale quantum information processors blais_circuit_2020 ; fink_dressed_2009 ; yang_probing_2020 ; zou_implementation_2014 and hybrid quantum devices kurizki_quantum_2015 ; morton_hybrid_2011 ; xiang_hybrid_2013 , such as quantum memories for microwave photons kubo_hybrid_2011 ; grezes_multimode_2014 ; wu_storage_2010 ; zhu_coherent_2011 and optical photons hammerer_quantum_2010 ; afzelius_proposal_2013 ; vivoli_high-bandwidth_2013 . 
Collective Lamb shifts also play a crucial role in radiative ground-state cooling of an ensemble wood_cavity_2014 ; wood_cavity_2016 ; bienfait_controlling_2016 ; albanese_radiative_2020 ; ranjan_pulsed_2019 . The notion of a Lamb shift in cavity QED may be formally defined using the Jaynes–Cummings Hamiltonian describing the light–matter interaction of a single two-level quantum system with a quantized single–mode electromagnetic field jaynes1963comparison ($\hbar=1$ throughout this work): $\displaystyle\hat{\mathcal{H}}_{0}$ $\displaystyle=\omega_{c}\hat{a}^{\dagger}\hat{a}+\frac{\omega_{s}}{2}\hat{\sigma}_{z},$ $\displaystyle\hat{\mathcal{H}}_{int}$ $\displaystyle=g_{0}(\hat{a}^{\dagger}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+}),$ $\displaystyle\hat{\mathcal{H}}_{JC}$ $\displaystyle=\hat{\mathcal{H}}_{0}+\hat{\mathcal{H}}_{int},$ (1) where $\hat{\mathcal{H}}_{0}$ is the Hamiltonian describing the quantization of the two–level system and single–mode separately, and $\hat{\mathcal{H}}_{int}$ is the Hamiltonian describing the interaction of the two-level system with the field mode. The lowering (raising) operators, $\hat{a}$ ($\hat{a}^{\dagger}$), describe the annihilation (creation) of a photon in the field mode with energy $\omega_{c}$. The number operator, $\hat{a}^{\dagger}\hat{a}$, describes the quantization of the field mode in terms of the number of photons, $n$, occupying the mode, and defines the Fock eigenstates, $\ket{n}$, as $\hat{a}^{\dagger}\hat{a}\ket{n}=n\ket{n}$. The corresponding quantization of the two-level system is given by Zeeman eigenstates, $\hat{\sigma}_{z}\ket{\uparrow}=+\ket{\uparrow}$ and $\hat{\sigma}_{z}\ket{\downarrow}=-\ket{\downarrow}$, with energy splitting $\omega_{s}$ given by the Pauli $z$ spin operator. For brevity, we will refer to a general two-level quantum system as a spin and the single–mode electromagnetic field as a cavity.
The spin–cavity interaction describes a coherent swapping of a single photon between the spin and the cavity mode, where $\hat{\sigma}_{+}$ and $\hat{\sigma}_{-}$ correspond, respectively, to the creation and annihilation of a photon in the spin system. The strength of the spin–cavity interaction is given by the geometric parameter $g_{0}=g_{e}\mu_{B}\sqrt{\frac{\mu_{0}\omega_{c}}{2V_{c}}},$ (2) where $g_{e}$ is the electron Landé g-factor, $\mu_{B}$ is the Bohr magneton, $\mu_{0}$ is the permeability of free–space, and $V_{c}$ is the mode volume of the cavity. We restrict ourselves to the regime $g_{0}\ll\omega_{c},\omega_{s}$ such that a rotating–wave approximation (RWA) may be applied to suppress multi-photon processes. For simplicity, we will also restrict our argument to the case where the spin system and cavity mode are resonant ($\omega_{c}=\omega_{s}=\omega_{0}$). We denote the separable $k$-excitation eigenstates of $\hat{\mathcal{H}}_{0}$ as $\\{\ket{k}\ket{\downarrow},\ket{k-1}\ket{\uparrow}\\}$ and, upon diagonalization under $\hat{\mathcal{H}}_{int}$, the resulting hybridized spin-cavity eigenstates are $\\{\frac{1}{\sqrt{2}}(\ket{k}\ket{\downarrow}+\ket{k-1}\ket{\uparrow}),\frac{1}{\sqrt{2}}(\ket{k}\ket{\downarrow}-\ket{k-1}\ket{\uparrow})\\}$, with a Lamb shift splitting given by $g_{0}\sqrt{k}$. In the special case of $k=1$ (the single–excitation manifold), the Lamb shift splitting is commonly referred to as a “normal mode” or “vacuum Rabi” splitting haroche2006exploring . The non–linearity of the Lamb shift with excitation number has been observed experimentally to verify the “quantum” nature of the spin–cavity interaction fink2008climbing .
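The $2\times 2$ block structure just described is easy to check numerically. The sketch below (parameter values arbitrary) diagonalizes $\hat{\mathcal{H}}_{JC}$ restricted to the $k$-excitation manifold and recovers the $\pm g_{0}\sqrt{k}$ shifts.

```python
import numpy as np

w0, g0 = 1.0, 0.05   # resonant frequency and coupling, arbitrary units

for k in [1, 2, 3, 10]:
    # Basis {|k>|down>, |k-1>|up>}: both states share the unperturbed
    # energy E0 = k*w0 - w0/2, and a*sigma_+ couples them with matrix
    # element g0*sqrt(k).
    E0 = k * w0 - w0 / 2
    H = np.array([[E0,              g0 * np.sqrt(k)],
                  [g0 * np.sqrt(k), E0             ]])
    evals = np.linalg.eigvalsh(H)
    # Hybridized levels sit at E0 -+ g0*sqrt(k); the k = 1 case is the
    # vacuum Rabi splitting.
    assert np.allclose(evals, [E0 - g0 * np.sqrt(k), E0 + g0 * np.sqrt(k)])
```

The $\sqrt{k}$ dependence of the shift is the non-linearity referred to above.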
The Jaynes–Cummings model may be generalized to an ensemble of $N$ non–interacting spins collectively interacting with a single–mode cavity to yield the Tavis–Cummings Hamiltonian tavis1968exact , $\hat{\mathcal{H}}_{TC}=\omega_{0}(\hat{a}^{\dagger}\hat{a}+\hat{J}_{z})+g_{0}(\hat{a}^{\dagger}\hat{J}_{-}+\hat{a}\hat{J}_{+}),$ (3) where the two–level spin operators have been replaced with collective operators that act identically over the ensemble of energetically indistinguishable spins and a RWA has once again been made. We will formally define the collective operators in the next section. An important feature of the TC Hamiltonian is that the spin–cavity interaction strength, $g_{0}$, is often replaced with an effective interaction strength that is enhanced by $\sqrt{N}$: $g_{eff}=g_{0}\sqrt{N}.$ (4) This transformation is paired with a $1/\sqrt{N}$ term in the collective angular momentum raising and lowering operators, which we will omit in this work. The ensemble enhancement of the spin–cavity interaction strength has allowed observation of an analogous normal mode splitting (often referred to as “strong coupling”) in ensemble spin systems interacting with high quality factor (high Q) cavities schuster_high-cooperativity_2010 ; benningshof_superconducting_2013 ; imamoglu_cavity_2009 ; kubo_strong_2010 . The relative strengths of the parameters necessary to resolve this splitting are formalized by defining the cooperativity: $C=\frac{4Ng_{0}^{2}QT_{2}}{\omega_{0}},$ (5) where $Q$ is the quality factor of the cavity and $T_{2}$ is the coherence time of the spin ensemble. In general, experimentally observed splittings in a high–cooperativity spin–cavity system are a complex function of the many Lamb shifts that occur in various collective angular momentum subspaces and excitation subspaces. 
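The $\sqrt{N}$ enhancement in (4) can be seen directly in the single-excitation ($k=1$) manifold, where the TC interaction couples $\ket{1}\ket{g\cdots g}$ to each of the $N$ single-flip states with strength $g_{0}$. A minimal numerical sketch (arbitrary parameter values):

```python
import numpy as np

w0, g0 = 1.0, 0.02   # arbitrary units

for N in [1, 2, 4, 9]:
    E0 = w0 * (1 - N / 2)   # common unperturbed energy of the k = 1 manifold
    # basis: |1>|g...g> followed by the N single-flip states |0>|..e_i..>
    H = np.full((N + 1, N + 1), 0.0)
    np.fill_diagonal(H, E0)
    H[0, 1:] = g0           # a J_+ couples |1, g...g> to every single flip
    H[1:, 0] = g0
    evals = np.sort(np.linalg.eigvalsh(H))
    # two bright polaritons split by 2 g0 sqrt(N); the N - 1 dark states
    # remain at E0, so the observable enhancement is g_eff = g0 sqrt(N)
    assert np.isclose(evals[-1] - evals[0], 2 * g0 * np.sqrt(N))
    assert np.allclose(evals[1:-1], E0)
```

Only the symmetric "bright" combination of flips hybridizes with the photon; the remaining combinations are dark, which is why the observed normal-mode splitting scales with $\sqrt{N}$ rather than $N$.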
Calculating the full set of Lamb shifts is generally intractable for systems of the size required to build useful quantum devices, leading to a number of approximation methods being utilized to analyze experimental data. The most common approximations are to restrict the treatment to only the largest, permutation–invariant, Dicke subspace, or to treat the spin ensemble in the low–excitation regime as a simple quantum harmonic oscillator holstein1940field . Many limiting results have been shown and can be seen in garraway2011dicke ; however, we analyze many aspects of the TC Hamiltonian in further detail. In this paper, we revisit the structure of the Tavis–Cummings Hamiltonian and find that we are able to identify a two-parameter family of collective Lamb shifts indexed by total number of excitations and collective angular momentum. We then show that, in general, a correct description of the energy landscape requires considering many collective angular momentum subspaces in all but the simplest limiting cases. We show that the representative set of spaces never limits to the Dicke space, nor to a constant value, but instead grows as $O\left(\sqrt{N}\right)$. We then proceed to give non–trivial descriptive statistics on these Lamb shifts’ scaling behaviors, culminating in a description of these statistics upon averaging over the degeneracies of these collective angular momentum subspaces. These insights provide bounds and estimates for structures that can be experimentally observed in the near future. This paper is structured as follows. We begin by recalling common definitions in section II. Then, in section III we discuss features of the Tavis–Cummings Hamiltonian that allow us to break it into subspaces of constant unperturbed energy and define degenerate copies of coupling matrices which act within these subspaces. We also show some examples of these coupling matrices as well as introduce our generalized collective Lamb shift.
In section IV we show our primary results, which include finding the maximally degenerate angular momentum subspace, and provide proof that the majority of the relevant dynamics occurs in this subspace and neighboring subspaces. We also describe basic properties of the collective Lamb shift within each subspace, and lastly combine these collective Lamb shifts over their degeneracies to predict what should be experimentally seen for such a system at a given number of total excitations. We then end with a discussion of our results in sections V and VI. ## II Definitions We include our choice of notation and definitions in this section. The Pauli operators are written in the Zeeman basis, so that $\hat{\sigma}_{z}=\ket{\uparrow}\bra{\uparrow}-\ket{\downarrow}\bra{\downarrow}.$ (6) Also, $\hat{\sigma}_{+}\ket{\downarrow}=\ket{\uparrow},\quad\hat{\sigma}_{+}\ket{\uparrow}=0.$ (7) These spin operators can be combined as a sum of tensor products to produce the collective versions of these spin operators. Let $N$ be the number of spin–$1/2$ particles. Then, the collective $z$ angular momentum operator is given as $\hat{J}_{z}=\frac{1}{2}\sum_{i=1}^{N}\hat{\sigma}_{z}^{(i)},$ (8) where the superscript on the Pauli operator indicates action only on the $i$-th particle.
Similarly, the collective raising and lowering angular momentum operators are given as $\hat{J}_{\pm}=\sum_{i=1}^{N}\hat{\sigma}_{\pm}^{(i)}.$ (9) The collective operators span the $\mathfrak{sl}(2;\mathbb{C})$ Lie algebra, and thus satisfy the following commutation relations: $\displaystyle[\hat{J}_{z},\hat{J}_{\pm}]=\pm\hat{J}_{\pm},$ (10) $\displaystyle[\hat{J}_{+},\hat{J}_{-}]=2\hat{J}_{z}.$ (11) The collective operator algebra is a sub-algebra of self–adjoint operators acting on the $N$-spin system, and conveniently satisfies the same commutation relations as those for a single particle spin operator. By a change of basis, we can identify the transverse spin operators, $\displaystyle\hat{J}_{x}$ $\displaystyle=\frac{1}{2}\big{(}\hat{J}_{+}+\hat{J}_{-}\big{)},$ (12) $\displaystyle\hat{J}_{y}$ $\displaystyle=\frac{1}{2i}\big{(}\hat{J}_{+}-\hat{J}_{-}\big{)}.$ (13) The transverse spin operators, along with $\hat{J}_{z}$, span a collective $\mathfrak{su}(2)$ algebra, which differs from the single spin-$1/2$ Pauli algebra in that the collective operators are not involutory. The representations of the $\mathfrak{sl}(2;\mathbb{C})$ operators can be defined by their action on a state of total angular momentum $j$ with $z$ component $m$: $\displaystyle\hat{J}_{z}\ket{j,m}$ $\displaystyle=m\ket{j,m}$ (14) $\displaystyle\hat{J}_{\pm}\ket{j,m}$ $\displaystyle=\sqrt{j(j+1)-m(m\pm 1)}\ket{j,m\pm 1}.$ (15) Throughout this work, we focus on two good quantum numbers representing conserved quantities. The first of these is the total angular momentum, $j$, which determines the eigenvalues of the total angular momentum operator, $\hat{\bm{J}}^{2}=\hat{J}_{x}^{2}+\hat{J}_{y}^{2}+\hat{J}_{z}^{2},$ (16) with eigenvalues $j(j+1)$.
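Definitions (8)–(16) can be instantiated directly with Kronecker products. The illustrative sketch below builds the collective operators for $N=3$ spins and checks the commutation relations (10)–(11) and the $j(j+1)$ spectrum of $\hat{\bm{J}}^{2}$.

```python
import numpy as np
from functools import reduce

def collective(op, N):
    """Sum over sites of I x ... x op (at site i) x ... x I."""
    I2 = np.eye(2)
    total = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        total += reduce(np.kron, [op if s == i else I2 for s in range(N)])
    return total

N = 3
sz = np.diag([1.0, -1.0])                 # Zeeman basis |up>, |down>
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma_+ |down> = |up>

Jz = 0.5 * collective(sz, N)              # eq. (8)
Jp = collective(sp, N)                    # eq. (9)
Jm = Jp.conj().T

# commutation relations (10)-(11)
assert np.allclose(Jz @ Jp - Jp @ Jz, Jp)
assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * Jz)

# J^2 from eq. (16) has eigenvalues j(j+1), with j in {3/2, 1/2} for N = 3
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / (2j)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
eigs = np.round(np.linalg.eigvalsh(J2).real, 8)
assert sorted(set(eigs)) == [0.75, 3.75]  # (1/2)(3/2) and (3/2)(5/2)
```

For $N=3$ the $8$-dimensional spin space splits into one $j=3/2$ multiplet and two $j=1/2$ multiplets, which is why only the two eigenvalues $3.75$ and $0.75$ appear.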
The second of these conserved quantities is the number of total excitations, $k$, given as the eigenvalues of the excitation operator, $\hat{K}=\hat{a}^{\dagger}\hat{a}+\hat{J}_{z}+\frac{N}{2}\mathbb{1}.$ (17) The scaled identity term in the excitation operator ensures excitations are non-negative, as the action of $\hat{J}_{z}$ on the ground state has eigenvalue $-N/2$. All of the Hamiltonians considered in this work share the same internal structure, defined by $\hat{\mathcal{H}}_{0}=\omega_{0}(\hat{a}^{\dagger}\hat{a}+\hat{J}_{z}).$ (18) The interaction Hamiltonian is given by the Dicke model dicke1954coherence : $\hat{\mathcal{H}}_{D,int}=2g_{0}\big{(}\hat{a}+\hat{a}^{\dagger}\big{)}\hat{J}_{x}.$ (19) Applying the rotating wave approximation, the counter–rotating term is discarded, leaving us with $\hat{\mathcal{H}}_{int}=g_{0}\big{(}\hat{a}\hat{J}_{+}+\hat{a}^{\dagger}\hat{J}_{-}\big{)},$ (20) which is the interaction term in the Tavis–Cummings (TC) Hamiltonian. Lastly, we note that all states live within the Hilbert space, $\mathscr{H}=\operatorname{L}^{2}\left(\mathbb{R}\right)\otimes\big{(}\mathbb{C}^{2}\big{)}^{\otimes N}.$ (21) It is common to perform a Holstein–Primakoff transformation on the collective angular momentum operators in order to simplify the underlying algebra holstein1940field .
This transformation is valid on a single subspace of angular momentum $j$, such that $\displaystyle\hat{J}_{+}$ $\displaystyle\longrightarrow\hat{b}^{\dagger}\sqrt{2j\mathbb{1}-\hat{b}^{\dagger}\hat{b}}$ (22) $\displaystyle\hat{J}_{-}$ $\displaystyle\longrightarrow\sqrt{2j\mathbb{1}-\hat{b}^{\dagger}\hat{b}}\,\hat{b}.$ (23) By requiring that standard angular momentum commutation relations are maintained, the transformation for $\hat{J}_{z}$ is then fixed: $\hat{J}_{z}\longrightarrow\hat{b}^{\dagger}\hat{b}-j\mathbb{1}.$ (24) Usually $j$ is taken to be that of the Dicke subspace, such that $j=N/2$, and $N$ is assumed to be large compared to the number of excitations. If the number of excitations approaches $j$, the spin system begins to saturate and the approximation becomes increasingly invalid ressayre1975holstein . In general, thermal population of the Dicke space is negligible at nearly all experimental temperatures wesenberg2002mixed , so we avoid making restrictive approximations in this work and treat the Hamiltonian in generality across all subspaces and excitation manifolds. We are now equipped with our primary definitions and notations. In the next section we discuss what features of our Hamiltonian allow us to decompose the interaction portion of the Hamiltonian into a direct sum of coupling matrices, allowing us to show results regarding the subspaces forming the bases of these coupling matrices. ## III Symmetry and Subspace Decomposition In this section, we discuss the symmetries of various models of spin ensembles interacting with a cavity. Through the use of conserved quantities, we motivate a decomposition of the TC Hamiltonian into a two-parameter family of subspaces indexed by the good quantum numbers present in the system.
We then solidify the remaining notation to be used in the rest of the paper, relying heavily on the symmetry motivated subspace decomposition, and provide instructive examples for small values of $N$. ### III.1 Symmetries of Light–Matter Interaction Generally, an ensemble of $N$ spins identically coupled to a single cavity mode is described by the Dicke model, with a Hamiltonian given by $\displaystyle\mathcal{\hat{H}}=\omega_{0}(\hat{a}^{\dagger}\hat{a}+\hat{J}_{z})+2g_{0}(\hat{a}^{\dagger}+\hat{a})\hat{J}_{x}.$ (25) The Dicke Hamiltonian can be decomposed into two distinct parts, the bare spin and cavity energies, $\mathcal{\hat{H}}_{0}$, and the spin–cavity interaction $\mathcal{\hat{H}}_{\text{D,int}}$. When the collective spin–cavity coupling, $g_{eff}$, is zero, the ground state is $\ket{0}\ket{N/2,-N/2}$, which represents the state with zero photons in the cavity and all spins in their ground states. When $g_{eff}>0$, the Dicke Hamiltonian is symmetric under the parity operator $\operatorname{\hat{\Pi}}=\operatorname{exp}\left[-i\pi\left(\hat{J}_{z}+\hat{a}^{\dagger}\hat{a}\right)\right]$, with eigenvalues $\pm 1$. This implies that the Hilbert space of the Dicke model can be decomposed into a direct sum of two spaces labelled by the parity operator’s sign: $\mathscr{H}=\mathscr{H}_{+}\oplus\mathscr{H}_{-}$. In this model, excitations are not conserved, and the two parity subspaces are infinite dimensional. For the case of $N=1$, this special case of the Dicke model is known as the Quantum Rabi Model (QRM), which has recently been solved zhong2013analytical ; maciejewski2014full , with eigenvalues and eigenstates given in terms of special functions judd1979exact . The existence of this solution can be seen directly from the symmetry group of the Hamiltonian, as the parity symmetry is sufficient to show that the QRM is integrable braak2011integrability . 
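The parity symmetry is straightforward to verify numerically. The sketch below is our own illustration for an $N=2$ ensemble with an arbitrary Fock truncation and arbitrary parameter values; since $\hat{J}_{z}+\hat{a}^{\dagger}\hat{a}$ is diagonal in the product basis, $\hat{\Pi}$ is simply a diagonal matrix of $\pm 1$ phases.

```python
import numpy as np

# Sketch: check [H_Dicke, Pi] = 0 for N = 2 spins and a truncated cavity.
# Parameter values are arbitrary illustration choices.
nmax, w0, g0 = 6, 1.0, 0.2

a = np.diag(np.sqrt(np.arange(1, nmax)), 1)
sx = np.array([[0.0, 0.5], [0.5, 0.0]])          # single-spin J_x
sz = np.diag([0.5, -0.5])                        # single-spin J_z
I2 = np.eye(2)
Jx = np.kron(sx, I2) + np.kron(I2, sx)
Jz = np.kron(sz, I2) + np.kron(I2, sz)

Ic, Is = np.eye(nmax), np.eye(4)
number = np.kron(a.T @ a, Is) + np.kron(Ic, Jz)  # J_z + a^dag a (diagonal here)
H = w0 * number + 2 * g0 * np.kron(a + a.T, Jx)  # Dicke Hamiltonian

Pi = np.diag(np.exp(-1j * np.pi * np.diag(number)))  # diagonal phase matrix
parity_violation = np.abs(Pi @ H @ Pi.conj().T - H).max()
```

The interaction terms change $\hat{J}_{z}+\hat{a}^{\dagger}\hat{a}$ only by $0$ or $\pm 2$, so the conjugation leaves every matrix element of $H$ invariant, even though excitations themselves are not conserved.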
When we consider $N>1$, the parity symmetry is no longer sufficient to show integrability, and it is expected that the Dicke model is not exactly solvable baxter2016exactly ; in other words, there are no explicit solutions in terms of any known functions. Turning now to the model of interest for this paper, the Tavis–Cummings Hamiltonian is derived by the application of a RWA to the Dicke Hamiltonian. Unlike the Dicke model, the TC model admits a continuous symmetry described by the circle group, $U(1)$, in addition to parity symmetry and total angular momentum symmetry. The generator of the continuous symmetry has infinitely many eigenvalues, enumerated by $k\in\mathbb{N}$, while the total angular momentum symmetry has eigenvalues $j=N/2,N/2-1,\cdots,1/2\ (0)$, where the last value for $j$ is determined by whether $N$ is odd or even. The additional symmetry is sufficient to make the Tavis–Cummings model integrable and solvable, which is supported by the Bethe ansatz solution provided by Bogoliubov bogoliubov1996exact ; bogolyubov2000algebraic . Given that $\hat{J}_{+}$ conserves total angular momentum $j$, the repeated action of $\hat{J}_{+}$ on the ground state of an $N$ spin ensemble will only populate the $N+1$ fully symmetric states in the Dicke subspace. Bogoliubov utilized these orbits to verify that a Bethe ansatz solution of the Tavis–Cummings model is correct, casting the eigenvalue problem as equivalent to solving a differential equation bogoliubov1996exact ; bogolyubov2000algebraic . Translating their construction into our notation, the primary expression is: $\ket{\Phi_{j,k}^{\lambda}}=\sum_{m=0}^{k}A_{j,k,m}^{\lambda}(\hat{a}^{\dagger})^{k-m}\hat{J}_{+}^{m}\ket{0}\ket{j,-j},$ (26) for recursively defined scalar coefficients $A_{j,k,m}^{\lambda}$ determined from difference equations, where $j$ indicates the angular momentum space, $k$ the excitation subspace, and $\lambda$ labels an eigenvector within the $(j,k)$ subspace. 
Putting together these symmetry observations, we see that the TC Hamiltonian can be tractably analyzed in terms of its structure and dynamics. In section IV, we provide a detailed analysis of the TC Hamiltonian’s energy level structure across all non-interacting subspaces. The two symmetries of the TC model directly imply that the Hamiltonian admits a two parameter subspace decomposition. We will repeatedly make use of this fact throughout the remainder of our analysis. Within the context of previous work, the Holstein–Primakoff approximation largely ignores the second parameter $j$ by focusing on a single value of it, namely the $j=N/2$ subspace, which is treated as a single harmonic oscillator. The Bogoliubov solution via Bethe ansatz, while correct, is just as hard to evaluate as solving the eigenvalue problem itself. Further work attempting to directly analyze large photon number behavior via a direct diagonalization approach has been performed by restriction to the Dicke subspace and tested experimentally by Chiorescu et al. chiorescu2010magnetic . We will demonstrate in later sections that the most dominantly contributing angular momentum subspaces are, in general, those with the lowest $O(\sqrt{N})$ values of $j$ allowed by the model. ### III.2 Subspace Decomposition of the TC Model Subsection III.1 argued that we can use group theory to decompose the total Hamiltonian into a direct sum structure indexed by two parameters defined by the conserved quantities of the system, $j$ and $k$. A direct sum decomposition is not novel, and was given explicitly in the original 1968 paper defining the Tavis–Cummings Hamiltonian tavis1968exact . 
Recast in our notation, the decomposition is $\hat{\mathcal{H}}\cong\bigoplus_{j,k}\big{(}\omega_{0}k\leavevmode\hbox{\small$1$\normalsize\kern-3.30002pt$1$}_{j,k}+g_{0}L(j,k)\big{)},$ (27) where $L(j,k)$ are a natural representation of the interaction Hamiltonian, which we define in equation (III.2) and refer to as the coupling matrices. While Tavis and Cummings focused on the eigenstates of their model, computed by recasting the diagonalization problem as a differential equation tavis1968exact , our work focuses on the energy eigenvalue problem, utilizing modern insights into numerical linear algebra to provide a deeper analysis. We define a natural basis for a general $(j,k)$ subspace with total angular momentum $j$ and $k$ excitations as $\mathcal{B}_{j,k}=\\{\ket{\alpha_{j,k}}\,|\,\alpha=1,\cdots,n_{j,k},n_{j,k}+1\\},$ (28) using a shorthand ket representation of the tensor product of a spin-cavity state $\ket{\alpha_{j,k}}=\ket{k-\alpha-k_{0}(j)}\ket{j,-j+\alpha}.$ (29) The single parameter, $\alpha$, provides a convenient representation of states within a $(j,k)$ subspace. The value, $n_{j,k}=\left|\mathcal{B}_{j,k}\right|-1$, one less than the dimension, is chosen for convenience. We define $k_{0}(j)=N/2-j$ as the number of excitations present in the ground state of an angular momentum $j$ subspace within an $N$ spin ensemble. Explicitly, $n_{j,k}$ is given as $n_{j,k}=\operatorname{min}\\{2j,k-k_{0}(j)\\}.$ (30) If $k<k_{0}(j)$, then the basis set is empty and there are no states present at this excitation level within the $j$ angular momentum subspace. Under unitary evolution generated by collective operators, two subspaces with the same value of $j$ stemming from ensembles of differing $N$ will behave identically, as these subspaces have isomorphic representations. The main functional difference between them is their relative locations within the energy level spectrum of their respective Hamiltonians. 
Thus, while the evolution or action of a collective operator can be calculated identically, the resultant contribution of the evolution to aggregate statistics or an observable will be weighted differently. The coupling matrices’ entries can be found by applying the interaction term from $\mathcal{\hat{H}}_{TC}$ on the bases defined in equation (28). The Lamb shift coupling matrix for the $(j,k)$ subspace is then given by $\displaystyle L(j,k)$ $\displaystyle=\sum_{\alpha=1}^{n_{j,k}}l_{\alpha}(j,k)\bigg{(}\left|\alpha_{j,k}\vphantom{(\alpha+1)_{j,k}}\right>\\!\\!\left<(\alpha+1)_{j,k}\vphantom{\alpha_{j,k}}\right|$ $\displaystyle\hskip 42.67912pt+\left|(\alpha+1)_{j,k}\vphantom{\alpha_{j,k}}\right>\\!\\!\left<\alpha_{j,k}\vphantom{(\alpha+1)_{j,k}}\right|\bigg{)},$ (31) where the matrix elements $l_{\alpha}(j,k)$ are given by $\displaystyle\frac{1}{g_{0}}\left<\alpha_{j,k}\vphantom{\hat{\mathcal{H}}_{int}(\alpha+1)_{j,k}}\right|\hat{\mathcal{H}}_{int}\left|(\alpha+1)_{j,k}\vphantom{\alpha_{j,k}\hat{\mathcal{H}}_{int}}\right>$ $\displaystyle=\sqrt{\big{(}2\alpha j-\alpha(\alpha-1)\big{)}\big{(}k-k_{0}(j)-\alpha+1\big{)}}.$ (32) In the above expression, subscripts appear only within kets such as $\ket{\alpha_{j,k}}$; $\alpha$ itself is a scalar. The index $j$ runs from $N/2$ to $0$ ($1/2$) when $N$ is even (odd). Each angular momentum space is of dimension $2j+1$, and so the total number of spin states accounted for across all values of $j$ is $O(N^{2})$, which is far less than the full space’s dimension of $2^{N}$. Up to this point we have neglected to include the degeneracy of each of the angular momentum subspaces. The degeneracy of the subspace with total angular momentum $j$ on $N$ spins is given as $d_{j}=\frac{N!(2j+1)}{(N/2-j)!(N/2+j+1)!}.$ (33) That is, there are $d_{j}$ disjoint angular momentum subspaces with total angular momentum $j$ present in a direct sum decomposition of $\big{(}\mathbb{C}^{2}\big{)}^{\otimes N}$ wesenberg2002mixed . 
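For concreteness, the matrix elements and degeneracies above take only a few lines to implement. The helpers below are our own sketch of equations (32) and (33), restricted to integer $j$ (even $N$) so the factorials stay exact; for odd $N$ one would iterate over $2j$ instead.

```python
import math

# Sketch implementations (our own) of the coupling matrix elements of
# equation (32) and the degeneracy formula of equation (33), restricted
# to integer j (even N) for simplicity.
def k0(j, N):
    """Excitations in the ground state of the angular momentum j subspace."""
    return N // 2 - j

def l(alpha, j, k, N):
    """Off-diagonal matrix element l_alpha(j, k), in units of g0."""
    return math.sqrt((2 * alpha * j - alpha * (alpha - 1))
                     * (k - k0(j, N) - alpha + 1))

def degeneracy(j, N):
    """Number d_j of disjoint subspaces with total angular momentum j."""
    return (math.factorial(N) * (2 * j + 1)
            // (math.factorial(N // 2 - j) * math.factorial(N // 2 + j + 1)))
```

For $N=4$, for example, these helpers reproduce $d_{2}=1$, $d_{1}=3$, $d_{0}=2$, whose dimensions sum to $2^{4}=16$.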
By including this degeneracy we recover the identity that the sum over the dimension of all disjoint subspaces is equal to the dimension of the entire space, $\sum_{j}(2j+1)d_{j}=2^{N}.$ (34) Through Schur–Weyl duality, we can associate total angular momentum symmetry with invariance over permutations (or subgroups of permutations) of the ordering of the underlying spin Hilbert spaces weyl1946classical . Within this context, the subspace with $j=N/2$, commonly known as the Dicke subspace, is referred to as the fully symmetric subspace. This is due to every angular momentum state within the $j=N/2$ subspace remaining invariant under action of any permutation in $S(N)$, the symmetric group on $N$ elements. The remaining subspaces have a more complex structure under the action of a spin permutation. Importantly, each degenerate copy of a $j$ subspace can be naturally and uniquely labelled by a Young Tableau. If one wished to consider a perturbation to the TC Hamiltonian which distinguished individual spins, such as a field inhomogeneity, then these Young Tableaux would be required to properly determine the perturbation’s action on subspaces with identical total angular momentum $j$. As previously mentioned, in our work we focus on the ideal case with no perturbations. As such, it is sufficient to treat each degenerate copy of a given total angular momentum subspace as identical. Under this identification, we are able to reduce the effective spin dimensionality from $2^{N}$ to $O(N^{2})$. ### III.3 Examples for Small $N$ We now proceed to explicitly calculate the collective Lamb shift in two small $N$ spin–cavity systems. This is done mathematically by re–diagonalizing under the perturbative interaction Hamiltonian and finding how the re–diagonalized states’ energies differ from those where $\mathcal{H}_{int}=0$, or equivalently, where $g_{0}=0$. 
#### III.3.1 Single Spin The case of a single spin coupled to one electromagnetic mode is known as the Jaynes–Cummings model jaynes1963comparison . The JC Hamiltonian follows from the application of the RWA on the Dicke Hamiltonian for a single spin, also known as the Rabi model, and is given by $\hat{\mathcal{H}}_{0}+\hat{\mathcal{H}}_{int}=\omega_{0}(\hat{a}^{\dagger}\hat{a}+\hat{\sigma}_{z}/2)+g_{0}\left(\hat{a}^{\dagger}\sigma_{-}+\hat{a}\sigma_{+}\right).$ (35) Since $\hat{\mathcal{H}}_{int}$ couples states of equal energy in the unperturbed spectrum, the Hilbert space decouples into blocks of constant total excitation, indexed by the good quantum number $k$: $\bigoplus_{k}|\psi_{k}\rangle\Big{[}\langle\psi_{k}|\mathcal{\hat{H}}_{0}+\mathcal{\hat{H}}_{int}|\phi_{k}\rangle\Big{]}\langle\phi_{k}|,\\\ \text{ with }\mathcal{\hat{H}}_{0}\phi_{k}=E_{k}\phi_{k}\text{ and }\mathcal{\hat{H}}_{0}\psi_{k}=E_{k}\psi_{k}.$ (36) The ground state $|0\rangle|\downarrow\rangle$ is unique, and is thus not hybridized. For the remaining states, we utilize the fact that excitations are conserved. Consider the two states with $k>0$ excitations, defined by $\\{\ket{k}\ket{\downarrow},\ket{k-1}\ket{\uparrow}\\}$. The interaction Hamiltonian represented in this basis is given by the direct sum over all two–dimensional excitation spaces as follows: $\mathcal{\hat{H}}_{int}\cong\bigoplus_{k}g_{0}\begin{bmatrix}0&\sqrt{k}\\\ \sqrt{k}&0\end{bmatrix}.$ (37) The $k$ excitation representation of the interaction Hamiltonian has energy eigenvalues given by $E_{k,\pm}=k\omega_{0}\pm g_{0}\sqrt{k},$ (38) which correspond to the following (unnormalized) energy eigenstates: $|\psi_{k,\pm}\rangle=|k\rangle|\downarrow\rangle\pm|k-1\rangle|\uparrow\rangle.$ (39) #### III.3.2 Three Spins We now consider an $N=3$ spin–cavity system. We demonstrate the utility of the subspace decomposition technique by solving for the eigenstructure exactly. 
When the number of excitations is such that $k\leq 2$, the number of hybridized states is sub-maximal, as illustrated in figure 1. For the purpose of this example, we focus on solving for a general collection of excitation subspaces with $k\geq 3$, ensuring that all $2^{3}=8$ spin states participate in hybridization. For completeness, we provide the solutions to the $N=3$ spin model with $k<3$ excitations, as well as the $N=2$ spin model, in the appendix using the same techniques illustrated in this section. For $N=3$ and $k\geq 3$, a matrix representation of the interaction Hamiltonian is given by the matrix $L(k)$ in equation (40). $L(k)=\begin{bmatrix}0&\sqrt{k}&\sqrt{k}&0&\sqrt{k}&0&0&0\\\ \sqrt{k}&0&0&\sqrt{k-1}&0&\sqrt{k-1}&0&0\\\ \sqrt{k}&0&0&\sqrt{k-1}&0&0&\sqrt{k-1}&0\\\ 0&\sqrt{k-1}&\sqrt{k-1}&0&0&0&0&\sqrt{k-2}\\\ \sqrt{k}&0&0&0&0&\sqrt{k-1}&\sqrt{k-1}&0\\\ 0&\sqrt{k-1}&0&0&\sqrt{k-1}&0&0&\sqrt{k-2}\\\ 0&0&\sqrt{k-1}&0&\sqrt{k-1}&0&0&\sqrt{k-2}\\\ 0&0&0&\sqrt{k-2}&0&\sqrt{k-2}&\sqrt{k-2}&0\end{bmatrix},$ (40) where the ordered basis states for this matrix representation are given by the set $\\{|k\rangle|\downarrow\downarrow\downarrow\rangle,|k-1\rangle|\downarrow\downarrow\uparrow\rangle,\ldots,|k-3\rangle|\uparrow\uparrow\uparrow\rangle\\}$. 
This can be decomposed into three distinct subspaces, $\frac{3}{2}\oplus\frac{1}{2}\oplus\frac{1}{2}$, as follows: $\begin{bmatrix}0&\sqrt{3}\sqrt{k}&0&0\\\ \sqrt{3}\sqrt{k}&0&2\sqrt{k-1}&0\\\ 0&2\sqrt{k-1}&0&\sqrt{3}\sqrt{k-2}\\\ 0&0&\sqrt{3}\sqrt{k-2}&0\end{bmatrix}\oplus\begin{bmatrix}0&\sqrt{k-1}\\\ \sqrt{k-1}&0\end{bmatrix}\oplus\begin{bmatrix}0&\sqrt{k-1}\\\ \sqrt{k-1}&0\end{bmatrix}.$ (41) The first matrix is written in the Dicke (fully symmetric) basis (normalized versions of $|k-m\rangle\hat{J}_{+}^{m}|\downarrow\downarrow\downarrow\rangle$ with $m\in\\{0,1,2,3\\}$), while the second and third are written in terms of the composite spin–1/2 bases, given by $\displaystyle\frac{1}{\sqrt{2}}|k-1\rangle(|\downarrow\uparrow\downarrow\rangle-|\uparrow\downarrow\downarrow\rangle)$ , $\displaystyle\quad\frac{1}{\sqrt{2}}|k-2\rangle(|\downarrow\uparrow\uparrow\rangle-|\uparrow\downarrow\uparrow\rangle)$ (42) $\displaystyle\frac{1}{\sqrt{6}}|k-1\rangle(2|\downarrow\downarrow\uparrow\rangle-|\uparrow\downarrow\downarrow\rangle-|\downarrow\uparrow\downarrow\rangle)$ , $\displaystyle\quad\frac{1}{\sqrt{6}}|k-2\rangle(|\uparrow\downarrow\uparrow\rangle+|\downarrow\uparrow\uparrow\rangle-2|\uparrow\uparrow\downarrow\rangle).$ (43) The matrix representations for the degenerate spin-1/2 subspaces are identical, and thus indistinguishable under a collective operation or measurement. We have freedom in the choices of bases for the degenerate subspaces; the states we give are the standard basis states for these subspaces, as computed via a Clebsch–Gordan table mann2011introduction . 
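The equivalence between the $8\times 8$ representation and the direct sum can be checked numerically. The sketch below is our own illustration; the value $k=5$ is an arbitrary choice satisfying $k\geq 3$, and all matrices are in units of $g_{0}$.

```python
import numpy as np

def L8(k):
    """The 8 x 8 interaction matrix of equation (40) for N = 3, k >= 3."""
    r = np.sqrt
    return np.array([
        [0,      r(k),   r(k),   0,      r(k),   0,      0,      0],
        [r(k),   0,      0,      r(k-1), 0,      r(k-1), 0,      0],
        [r(k),   0,      0,      r(k-1), 0,      0,      r(k-1), 0],
        [0,      r(k-1), r(k-1), 0,      0,      0,      0,      r(k-2)],
        [r(k),   0,      0,      0,      0,      r(k-1), r(k-1), 0],
        [0,      r(k-1), 0,      0,      r(k-1), 0,      0,      r(k-2)],
        [0,      0,      r(k-1), 0,      r(k-1), 0,      0,      r(k-2)],
        [0,      0,      0,      r(k-2), 0,      r(k-2), r(k-2), 0]])

def blocks(k):
    """The j = 3/2 block and the two j = 1/2 blocks of equation (41)."""
    r = np.sqrt
    dicke = np.array([
        [0,          r(3 * k),     0,              0],
        [r(3 * k),   0,            2 * r(k - 1),   0],
        [0,          2 * r(k - 1), 0,              r(3 * (k - 2))],
        [0,          0,            r(3 * (k - 2)), 0]])
    half = np.array([[0, r(k - 1)], [r(k - 1), 0]])
    return [dicke, half, half]

k = 5
full_spectrum = np.sort(np.linalg.eigvalsh(L8(k)))
block_spectrum = np.sort(np.concatenate(
    [np.linalg.eigvalsh(B) for B in blocks(k)]))
```

The two sorted spectra coincide, and each is symmetric under negation, consistent with the pairing of eigenvalues discussed later.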
We diagonalize each block individually starting with the matrix representing the $j=3/2$ subspace, and find the resulting (non-normalized) Lamb-shifted dressed states are given by the following superpositions: $\displaystyle|3/2,k;\pm_{1},\pm_{2}\rangle$ $\displaystyle:=$ $\displaystyle\pm_{1}\sqrt{5k-5\mp_{2}\sqrt{25-32k+16k^{2}}}(2k-5\pm_{2}\sqrt{25-32k+16k^{2}})|k\rangle|\downarrow\downarrow\downarrow\rangle$ (44) $\displaystyle+(1+2k\mp_{2}\sqrt{25-32k+16k^{2}})\sqrt{3}\sqrt{k}|k-1\rangle\frac{1}{\sqrt{3}}(|\downarrow\downarrow\uparrow\rangle+|\downarrow\uparrow\downarrow\rangle+|\uparrow\downarrow\downarrow\rangle)$ $\displaystyle\mp_{1}2\sqrt{5k-5\mp_{2}\sqrt{25-32k+16k^{2}}}\sqrt{3}\sqrt{k-1}\sqrt{k}|k-2\rangle\frac{1}{\sqrt{3}}(|\downarrow\uparrow\uparrow\rangle+|\uparrow\downarrow\uparrow\rangle+|\uparrow\uparrow\downarrow\rangle)$ $\displaystyle+6\sqrt{k-2}\sqrt{k-1}\sqrt{k}|k-3\rangle|\uparrow\uparrow\uparrow\rangle,$ each with associated energy $E_{k;\pm_{1}\pm_{2}}=k\omega_{0}\mp_{1}g_{0}\sqrt{5k-5\mp_{2}\sqrt{16k^{2}-32k+25}}.$ (45) We have introduced a shorthand notation via a subscript on the $\pm$ sign, such that $\pm_{1},\pm_{2}$ are a pair of sign choices (and $\mp_{1}$ indicates that the opposite sign as $\pm_{1}$ is used, and likewise for $\mp_{2}$) which allows for a more compact expression for all four dressed states. The four perturbed energy values are not equally spaced, though they still come in oppositely signed pairs of equal magnitude. We show in a later section that the eigenvalues always come in oppositely signed pairs. For the remaining two matrices with $j=\frac{1}{2}$ in the direct sum decomposition, we note that these systems are algebraically equivalent to the single spin model. 
This equivalence allows us to immediately write down the diagonalized states and perturbed energies: $\displaystyle|1/2,k;\pm\rangle_{1}$ $\displaystyle:=\frac{1}{2}[|k-1\rangle[|\downarrow\uparrow\downarrow\rangle-|\uparrow\downarrow\downarrow\rangle]\pm|k-2\rangle[|\downarrow\uparrow\uparrow\rangle-|\uparrow\downarrow\uparrow\rangle]],$ $\displaystyle E_{k;\pm}$ $\displaystyle=k\omega_{0}\pm g_{0}\sqrt{k-1}$ $\displaystyle|1/2,k;\pm\rangle_{2}$ $\displaystyle:=\frac{1}{2\sqrt{3}}[|k-1\rangle[2|\downarrow\downarrow\uparrow\rangle-|\uparrow\downarrow\downarrow\rangle-|\downarrow\uparrow\downarrow\rangle]\pm|k-2\rangle[|\uparrow\downarrow\uparrow\rangle+|\downarrow\uparrow\uparrow\rangle-2|\uparrow\uparrow\downarrow\rangle]],$ $\displaystyle E_{k;\pm}$ $\displaystyle=k\omega_{0}\pm g_{0}\sqrt{k-1}.$ The subscript on the kets in the above equations indicates the arbitrarily chosen degeneracy label of that subspace. This provides the full spectrum and dressed states for $k\geq 3$. Figure 1 illustrates the energy level diagram of the $N=3$ example, showing hybridization for the $0\leq k\leq 3$ subspaces, as well as collective dipole allowed transitions between dressed states. Figure 1: Illustration of the resulting hybridization of energy levels in the Tavis–Cummings model for $N=3$, explicitly on resonance such that $\omega_{0}=\omega_{s}=\omega_{c}$. Vertical single arrow lines (red) indicate transitions mediated by $\hat{J}_{+}$, meaning that the eigenstates represented by the horizontal bars have a non-zero $\hat{J}_{+}$ matrix element. Transitions are all–to–all between neighboring excitation subspaces of the same angular momentum, with some transitions between the $k=2$ and $k=3$ subspaces omitted for clarity. Note that there are no allowed transitions via collective spin or photon operators between distinct angular momentum subspaces, regardless of the value of $j$. 
Separation between excitation spaces is a constant $\omega_{0}$, denoted by bidirectional arrows (blue) between the pre–hybridized angular momentum states. Lamb shift splittings are denoted by bidirectional arrows (green) to the right of the hybridized states. In the $j=1/2$ subspaces, these splittings are given by $E_{1/2,k}=g_{0}\sqrt{k}$. In the $j=3/2$ subspace, the Lamb shifts are given by: $E_{3/2,1}=g_{0}\sqrt{3}\approx 1.73g_{0}$, $E_{3/2,2}=g_{0}\sqrt{10}\approx 3.16g_{0}$, $E_{3/2,3,1}=g_{0}\sqrt{10-\sqrt{73}}\approx 1.21g_{0}$, and $E_{3/2,3,2}=g_{0}\sqrt{10+\sqrt{73}}\approx 4.31g_{0}$. Before we move to the general case, we remark on a few well-known aspects of the solutions provided for $N=1,3$. Firstly, through the direct sum decomposition we see that the model decomposes into subspaces which are disconnected under the interaction portion of the total Hamiltonian. Secondly, through this decomposition, computing the dressed states and their energies, while non-trivial, is still more efficient than directly diagonalizing an $8\times 8$ matrix. Thirdly, as we will explore in greater depth, the coupling matrices for degenerate subspaces are identical, a reflection of the fact that they are indistinguishable under collective operations. The complexity of computing the eigendecomposition analytically increases rapidly. To our knowledge one can only solve up to $N=8$ spin systems exactly; beyond this point the characteristic polynomial’s degree for the largest space is beyond the size where general polynomial solutions exist. Once $N=9$ the largest decomposed matrix will have dimensions $10\times 10$, and since the eigenvalues always come in oppositely signed pairs, the reduced characteristic polynomial (in $\lambda^{2}$) will have degree five, which will not generally have a formula for finding the roots. 
As we will show in the following section, the problem of diagonalizing the Tavis–Cummings Hamiltonian is equivalent to diagonalizing a particular two parameter family of Jacobi operators, which we have denoted $L(j,k)$. Any real symmetric matrix can be written in a basis where it satisfies the Jacobi operator conditions via a similarity transformation rutishauser1966jacobi . Then, if a Jacobi operator were generally solvable in a closed form, all real symmetric matrices’ characteristic polynomials would also be solvable in a closed form, contradicting the Galois-theoretic insolubility (by radicals) of general polynomials of degree 5 or greater. There is a deep connection of Jacobi operators to the study of orthogonal polynomials, which can in part be seen by the determinant recurrence formula of equation (IV.2) teschl2000jacobi . It is outside the scope of this work to attempt a study of the generated orthogonal polynomials of the Jacobi operators representing the Tavis–Cummings Hamiltonian. As yet we have been unable to solve the recurrence relationship for the eigenvalues in a closed form. That being said, we suspect that a proof (or disproof) of the existence of a closed form diagonalization of the TC Hamiltonian will be found not with the tools of linear algebra, but with polynomial techniques. ## IV Structure and Statistics of the Full Hamiltonian Here, we illustrate that a single subspace approximation of the Hamiltonian is generally insufficient to capture the full dynamics of the TC Hamiltonian, regardless of the chosen value of $j$, and provide a more accurate technique for analyzing the TC Hamiltonian. To do so, we first investigate the degeneracy of angular momentum subspaces as a function of $j$, with a focus on determining the maximally degenerate subspace. We then turn our attention to extracting as much information as possible from the collective Lamb shift coupling matrices without numerically solving an eigenvalue problem. 
As we will show, determining the descriptive statistics of the energy shifts of a given $(j,k)$ subspace is tractable theoretically. Appealing to computational mathematics, we are then able to join the two discussions in order to provide a picture of the degeneracy averaged collective Lamb shift as a function of $N$ and $k$, across all subspaces. Finally, motivated by the numerical results, we determine the root mean square Lamb shift averaged over the degeneracies across all angular momentum subspaces. ### IV.1 Maximally Degenerate Angular Momentum Space The Dicke subspace is often considered “special”, in that the following properties hold: all contained states are completely symmetric under particle exchange, the subspace has the largest dimension for a given $N$, the subspace contains the ground state of the Hamiltonian, and the subspace has no degenerate copies. Concerns about the validity of restricting the dynamics to within the Dicke subspace have been noted baragiola2010collective ; wesenberg2002mixed and we expand on that work here. The maximally degenerate collective angular momentum subspace for $N$ spin–1/2 particles, which we denote as $j^{*}$, is given by: $j^{*}=\frac{\sqrt{N}}{2}-\frac{1}{2}+\frac{1}{6\sqrt{N}}+O(N^{-1}).$ (46) The maximally degenerate space is increasingly separated from the Dicke space as $N$ increases. Notice also that the value of $j^{*}$ does not approach $0$ or $1/2$, indicating that large $N$ structure, through the lens of degeneracy, is not well approximated by a single spin with angular momentum $j=N/2$, nor one with small angular momentum, such as $j=1/2$. Given that the maximally degenerate angular momentum subspace is well approximated by this expression for $j^{*}$, we must determine how well this subspace represents the entire system. To formalize this notion, consider $d_{j^{*}}$, the degeneracy of the $j^{*}$ subspace, and $d_{j^{*}+1}$, the degeneracy of the $j^{*}+1$ subspace. 
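The location of this maximum is easy to confirm by direct evaluation of $d_{j}$. The sketch below is our own; it evaluates $\log d_{j}$ with log-gamma functions to avoid overflow, and for $N=1000$ the exact argmax lands on $j^{*}=15$ (matching figure 2) and within one unit of expression (46).

```python
import math

# Sketch: locate the maximally degenerate j for N = 1000 by evaluating
# log d_j of equation (33) directly, then compare with expression (46).
N = 1000

def log_d(j):
    """Logarithm of the degeneracy d_j."""
    return (math.lgamma(N + 1) + math.log(2 * j + 1)
            - math.lgamma(N / 2 - j + 1) - math.lgamma(N / 2 + j + 2))

j_star = max(range(N // 2 + 1), key=log_d)
j_star_approx = math.sqrt(N) / 2 - 0.5 + 1 / (6 * math.sqrt(N))
```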
Then, we have that $\frac{d_{j^{*}}}{d_{j^{*}+1}}=1+O(N^{-3/2}),$ (47) indicating the maximally degenerate subspace is not significantly more degenerate than its nearest neighbor. This argument may be extended to show that, in general, the degeneracies of nearby subspaces do not differ greatly. This implies that, although $j^{*}$ is the most degenerate subspace, we cannot reasonably approximate the system by just this angular momentum space. Within the context of degeneracy–weighted observables, such as any observable evaluated on a thermal state of the TC Hamiltonian, there is no single subspace which can accurately mimic the structure of the entire Hamiltonian. To represent a large majority of the possible angular momentum states, we must also include many neighbors of $j^{*}$ in our analysis. The most essential collection of angular momentum subspaces of the TC Hamiltonian for a given $N$ is well quantified by the strong support of $d_{j}$. This is visualized in figure 2. Figure 2: Normalized plot of $d_{j}$ as a function of $j$, for $N=1000$ spin-1/2 particles. The maximally degenerate space is the $j=15$ angular momentum space. This plot clearly indicates that when weighted by degeneracy, the Dicke subspace contributes negligibly as compared to lower $j$ angular momentum subspaces. The strong support of $d_{j}$ is approximately given by the interval $0\leq j\leq O(\sqrt{N})$, for all allowed values of $j$. That is, almost all of the states are contained in the subspaces below some constant multiple of $\sqrt{N}$, where the constant is, of course, independent of $N$. The $O(\sqrt{N})$ upper limit can be derived by considering the ratios of the degeneracies of increasingly separated angular momentum subspaces (see appendix for further details). The insights provided by the computation of $j^{*}$ and determination of the strong support of $d_{j}$ have a few important implications. 
Firstly, the fact that the system must be represented by $O(\sqrt{N})$ subspaces is of interest to those working in the area of the complexity of quantum systems. Secondly, this will also be of interest to those simulating quantum systems admitting a similar angular momentum subspace decomposition: so long as the subspace structure is preserved, any observable that grows sub–exponentially in $j$, as suggested by (90), can be sufficiently modelled using this region of strong support. By restricting computations to this region, we can expect a halving of the dominant order of the computational cost (i.e. an $O(N^{4})$ algorithm can be well approximated by an $O(N^{2})$ algorithm). In fact, this reduction of order further reduces the effective spin dimensionality of the problem from $O(N^{2})$ spin states to $O(N)$ spin states, a reduction from the original dimension of $2^{N}$. At this point, we set aside this result and move on to discussing some of the properties that can be gleaned from the coupling matrices as functions of $(j,k)$. In later sections, we average these results across all angular momentum subspaces, using the knowledge gained from this section, to provide an aggregate picture of the energy level structure as a function of $k$ excitations. ### IV.2 Statistics of a Collective Angular Momentum Subspace In light of the subspace decomposition of the TC Hamiltonian, $\hat{\mathcal{H}}\cong\bigoplus_{j,k}\big{(}\omega_{0}k\leavevmode\hbox{\small$1$\normalsize\kern-3.30002pt$1$}_{j,k}+g_{0}L(j,k)\big{)},$ (48) it is clear that if one were able to diagonalize $L(j,k)$, then the Hamiltonian would be fully solved. 
It is instructive to visualize the representation of $L(j,k)$ with respect to $\mathcal{B}_{j,k}$: $\begin{bmatrix}0&l_{1}(j,k)&&&\\\ l_{1}(j,k)&0&l_{2}(j,k)&&\\\ &l_{2}(j,k)&0&\ddots&\\\ &&\ddots&\ddots&l_{n}(j,k)\\\ &&&l_{n}(j,k)&0\end{bmatrix}.$ (49) Thus, the Lamb shift coupling matrix can be naturally represented as a hollow tridiagonal matrix, a highly structured sparse matrix. Further, this coupling matrix is analogous to an unnormalized transition matrix for a 1D random walk. A full closed form diagonalization of this matrix is unlikely to exist, but there is still a good amount of information that can be extracted. As a first approach we can consider the problem from a numerical linear algebra perspective. It was shown in 2013 that the eigenvalues of this variety of matrix can be computed exactly (to within numerical precision) in $O(n_{j,k}\log n_{j,k})$ floating point operations, a speed-up over the unstructured problem coakley2013fast . This algorithm can then be used to efficiently extract the Lamb shifts of a given $(j,k)$ space on demand, if desired. The eigenvector problem given an eigenvalue $\lambda$ is then solvable in $O(n_{j,k})$ floating point operations utilizing the Thomas algorithm thomas1949elliptic . This must be done for each of the $n_{j,k}+1$ eigenvalues, and so while the cost of producing the set of eigenvalues is $O(n_{j,k}\log n_{j,k})$, the cost of producing the entire eigensystem is $O(n_{j,k}^{2})$, where the cost is dominated by the eigenvector problem. We expect the numerical speedup of finding the eigenvalues to be useful for simulating this system’s dynamics and computing state dependent quantities for states defined by classical mixtures across angular momentum subspaces, thereby increasing the maximal value for $N$ that can be feasibly simulated on a classical processor. We return now to properties of the collection of Lamb shifts that we can compute or estimate analytically. 
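As an illustration of the structured numerical route just described, the sketch below (our own, with arbitrary values of $N$, $j$, $k$) extracts the Lamb shifts of a single $(j,k)$ subspace using SciPy's tridiagonal eigensolver; this LAPACK-based routine is not the $O(n_{j,k}\log n_{j,k})$ method of coakley2013fast , but it likewise avoids forming a dense matrix.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Sketch: Lamb shifts of one (j, k) subspace via a structured solver.
# N, j, k are arbitrary illustration values; shifts are in units of g0.
N, j, k = 10, 2, 9
k0 = N // 2 - j                       # excitations in the subspace ground state
n = min(2 * j, k - k0)                # n_{j,k}
alpha = np.arange(1, n + 1)
off = np.sqrt((2 * alpha * j - alpha * (alpha - 1)) * (k - k0 - alpha + 1))

# Hollow main diagonal; off-diagonals from equation (32).
shifts, _ = eigh_tridiagonal(np.zeros(n + 1), off)
```

The resulting shifts come in oppositely signed pairs, and since $n_{j,k}+1$ is odd here, one shift is exactly zero, in line with the discussion that follows.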
Work in theoretical numerics has shown that the eigenvalues of hollow tridiagonal matrices come in oppositely signed pairs watkins2005product . That is, if $\lambda$ is an eigenvalue of $L(j,k)$, then so is $-\lambda$. The eigenvalue spectrum of the Lamb shift coupling matrix can be seen as a two parameter family of sets, $\Lambda(j,k)=\\{\lambda\,|\,L(j,k)\bm{v}=\lambda\bm{v},\bm{v}\neq\bm{0}\\},$ (50) and so, from the work of watkins2005product , $\Lambda(j,k)$ is symmetric under negation. The fact that the eigenvalues of these coupling matrices form a family of sets and not multi-sets (i.e., the eigenvalues are non-degenerate) is shown in barth1967calculation . Thus, if $\left|\Lambda(j,k)\right|=\left|\mathcal{B}_{j,k}\right|=n_{j,k}+1$ is odd, there must be exactly one eigenvalue with value $\lambda=0$. A standard matrix quantity to compute is the determinant, as this is equal to the product of the eigenvalues. There is a two step recursive formula for computing the determinant of a symmetric tridiagonal matrix, which can be used to compute the characteristic polynomial or simply compute the determinant of $L(j,k)$. Let $A$ be a symmetric tridiagonal (Jacobi) matrix with matrix elements, $\displaystyle A=\sum_{\alpha=1}^{n+1}a_{\alpha}\left|\alpha\vphantom{\alpha}\right>\\!\\!\left<\alpha\vphantom{\alpha}\right|+\sum_{\alpha=1}^{n}b_{\alpha}\big{(}$ $\displaystyle\left|\alpha\vphantom{\alpha+1}\right>\\!\\!\left<\alpha+1\vphantom{\alpha}\right|$ $\displaystyle+\left|\alpha+1\vphantom{\alpha}\right>\\!\\!\left<\alpha\vphantom{\alpha+1}\right|\big{)},$ (51) and sub-matrices $A_{\alpha}$, formed by discarding all basis vectors with index greater than $\alpha$. 
Then, $\det(A)=\det(A_{n+1})=a_{n+1}\det(A_{n})-b_{n}^{2}\det(A_{n-1}).$ (52) Figure 3: Schematic representation of the energy eigenstates of the Tavis–Cummings Hamiltonian with excitations $0\leq k\leq 3$ along the vertical, and labelled horizontally by the number of degeneracies of each angular momentum subspace. Upon computing the determinant of $L(j,k)$, we find that if $n_{j,k}+1$ is odd, then the recurrence terminates with $\det A_{1}=a_{1}=0$, and so $\det L(j,k)=0$. Otherwise, $n_{j,k}+1$ is even and the determinant is given as $\det L(j,k)=(-1)^{\frac{n+1}{2}}l_{n}^{2}l_{n-2}^{2}\cdots l_{1}^{2},$ (53) where the dependence of the matrix elements $l_{\alpha}$ on $(j,k)$ has been suppressed for clarity. While it is interesting to know that the determinant can be computed efficiently and in a closed form, it does not provide a description of the structure of the collective Lamb shifts. Rather, given the set of Lamb shift eigenvalues, $\Lambda(j,k)$, it is more instructive to provide descriptive statistics. Given knowledge of the eigenvalues, the $t$-th moment is given as $\left<\Lambda(j,k)^{t}\right>=\frac{1}{\left|\mathcal{B}_{j,k}\right|}\sum_{\lambda\in\Lambda(j,k)}\lambda^{t}.$ (54) We can avoid computing the eigenvalues explicitly by noticing that the sum over powers of the eigenvalues is equivalent to the trace of the corresponding power of the Lamb shift coupling matrix. Thus, $\left<\Lambda(j,k)^{t}\right>=\frac{1}{\left|\mathcal{B}_{j,k}\right|}\operatorname{tr}\big(L(j,k)^{t}\big).$ (55) Since the Lamb shift coupling matrix is hollow, all of its diagonal entries are zero and its trace vanishes, so the mean of each subspace is $0$: $\left<\Lambda(j,k)\right>=0.$ (56) This statement can be extended to all odd moments of the Lamb shift eigenvalues.
That is, for each coupling matrix, $L(j,k)$, $\left<\Lambda(j,k)^{2t+1}\right>=0,\,\,\forall t\in\mathbb{N}.$ (57) This follows immediately from the fact that, for every $\lambda\in\Lambda(j,k)$, $-\lambda\in\Lambda(j,k)$. In order to quantify the magnitude of the collective Lamb shift splittings, we may utilize the variance as a measure, which in this case is equal to the second moment of $\Lambda(j,k)$: $\operatorname{Var}(\Lambda(j,k))=\left<\Lambda(j,k)^{2}\right>-\left<\Lambda(j,k)\right>^{2}=\left<\Lambda(j,k)^{2}\right>.$ (58) Computing the variance is then equivalent to determining the trace of the square of the coupling matrix, which is a banded pentadiagonal matrix, explicitly given as $\begin{bmatrix}l_{1}^{2}&0&l_{1}l_{2}&&&\\ 0&l_{1}^{2}+l_{2}^{2}&0&l_{2}l_{3}&&\\ l_{1}l_{2}&0&\ddots&\ddots&\ddots&\\ &l_{2}l_{3}&\ddots&\ddots&\ddots&l_{n-1}l_{n}&\\ &&\ddots&\ddots&l_{n-1}^{2}+l_{n}^{2}&0&\\ &&&l_{n-1}l_{n}&0&l_{n}^{2}\end{bmatrix}.$ (59) Thus, the trace of the square of $L(j,k)$ has a tidy closed form expression in terms of the matrix elements $l_{\alpha}(j,k)$, $\operatorname{tr}L(j,k)^{2}=2\sum_{\alpha=1}^{n}l_{\alpha}(j,k)^{2}.$ (60) Then, the variance of $\Lambda(j,k)$, for $k\geq k_{0}(j)$, with $k^{\prime}=k-k_{0}(j)$, is given by the following expression: $\operatorname{Var}(\Lambda(j,k))=\frac{1}{2}\left|\mathcal{B}_{j,k}\right|^{3}-\frac{1}{3}\left|\mathcal{B}_{j,k}\right|^{2}(2k^{\prime}+4j+7)+\left|\mathcal{B}_{j,k}\right|(2jk^{\prime}+2k^{\prime}+4j+7/2)-\frac{1}{3}(6jk^{\prime}+8j+4k^{\prime}+5).$ (61) Recalling that the dimension of the basis of a $(j,k)$ space is given by $\left|\mathcal{B}_{j,k}\right|=\min\{2j+1,k-k_{0}(j)+1\}$, when $k$ satisfies $k-k_{0}(j)>2j$ the dimension of the space becomes fixed at $2j+1$.
And so, for $k$ such that $k-k_{0}(j)>2j$, or equivalently such that $k>N/2+j$, $\operatorname{Var}(\Lambda(j,k))$ is a linear function in $k$. This can be seen by substituting $\left|\mathcal{B}_{j,k}\right|=2j+1$ into equation (61). Taking the square root of the variance, we obtain the standard deviation, which has an interpretation as the average distance from the mean. In this sense, the average collective Lamb shift, treating $j$ as a constant, is $O(\sqrt{k})$ for $k>N/2+j$. In order to describe the full statistics of the collective Lamb shift, it is insufficient to treat $j$ as a constant; rather, we must consider all angular momentum subspaces and their respective degeneracies present at a given value of $k$. ### IV.3 Rotating–Wave Approximation Revisited Before we move to averaging over degeneracies, we include one more aspect of the Lamb shifts that may be computed generally and discuss its implications. Since all $L(j,k)$ are non-negative matrices, we can bound the maximal absolute value of the eigenvalue, also given by the spectral norm, from above and below using the Perron–Frobenius theorem: $\min_{m}\sum_{n}[L(j,k)]_{mn}\leq\max\Lambda\leq\max_{m}\sum_{n}[L(j,k)]_{mn}.$ (62) Applying this theorem allows us to determine that $\max\Lambda(j,k)$ is upper bounded by the various cases illustrated in (63). $\begin{cases}\frac{2}{\sqrt{3}}\sqrt{(2j+k^{\prime})jk^{\prime}}&\text{generally}\\ 2[j\sqrt{k^{\prime}}-\frac{1}{2}\frac{j^{2}}{\sqrt{k^{\prime}}}+\frac{1}{8}\frac{j^{4}}{(k^{\prime})^{5/2}}+O(\frac{j^{5}}{(k^{\prime})^{7/2}})]&2j\ll k^{\prime}\\ 2[\frac{1}{\sqrt{2}}k^{\prime}\sqrt{j}-\frac{1}{8\sqrt{2}}\frac{(k^{\prime})^{2}}{\sqrt{j}}+\frac{1}{512}\frac{(k^{\prime})^{4}}{j^{5/2}}+O(\frac{(k^{\prime})^{5}}{j^{7/2}})]&k^{\prime}\ll 2j.\end{cases}$ (63) The relations for $\max\Lambda(j,k)$ can narrow the energy range we need to consider in experimental design for a given value of $N$ and bounded total energy.
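Several of the closed-form properties above can be cross-checked numerically on a small example. The sketch below uses placeholder couplings in place of the actual $l_{\alpha}(j,k)$ (whose specific values do not affect these identities): it verifies the determinant recursion (52) against the closed form (53), the trace identity (60) behind the variance, and the row-sum bounds (62) on the maximal eigenvalue.

```python
import numpy as np

def hollow_tridiag(l):
    """Hollow symmetric tridiagonal matrix with off-diagonal couplings l."""
    return np.diag(l, 1) + np.diag(l, -1)

def tridiag_det(diag, off):
    """det via the recursion det(A_a) = a_a det(A_{a-1}) - b_{a-1}^2 det(A_{a-2})."""
    d2, d1 = 1.0, diag[0]  # det(A_0) = 1, det(A_1) = a_1
    for a in range(1, len(diag)):
        d2, d1 = d1, diag[a] * d1 - off[a - 1] ** 2 * d2
    return d1

l = np.array([1.0, 2.0, 3.0])  # placeholder couplings, n = 3, dim n+1 = 4 (even)
n = len(l)
L = hollow_tridiag(l)

# Determinant: recursion vs closed form (53) vs dense determinant.
det_rec = tridiag_det(np.zeros(n + 1), l)
det_closed = (-1) ** ((n + 1) // 2) * np.prod(l[n - 1::-2] ** 2)
print(det_rec, det_closed, np.linalg.det(L))

# Second moment: tr(L^2) = 2 * sum of l_a^2, eq. (60).
print(np.trace(L @ L), 2 * np.sum(l ** 2))

# Row-sum (Perron-Frobenius) bounds (62) on the maximal eigenvalue.
row_sums = L.sum(axis=1)
max_eig = np.linalg.eigvalsh(L).max()
print(row_sums.min() <= max_eig <= row_sums.max())
```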
We could likewise provide a lower bound on the maximal splitting via the same argument; this always yields the minimum of the first and last row sums (each of these rows contains a single nonzero entry of the coupling matrix). These bounds on the maximal eigenvalue also have an additional implication. As a general rule of thumb, the rotating–wave approximation used for approximating the Dicke Hamiltonian by the Tavis–Cummings Hamiltonian is said to be valid for $g_{0}\sqrt{N}\ll\omega_{0}$. Although this rule of thumb is useful for potentially determining the ability to experimentally resolve vacuum Rabi oscillations of a spin ensemble with a cavity, it is not a good metric for determining the validity of the RWA. We can improve the specificity of this requirement. We begin by noting that the lower bound for the maximal eigenvalue grows without bound, so $\lim_{j,k\rightarrow\infty}\|L(j,k)\|_{\infty}=\infty,$ (64) which means that eventually $g_{0}\max\Lambda(j,k)$ will approach and exceed $2\omega_{0}$. This means that the true condition that should be used to justify a rotating–wave approximation is $g_{0}\max\Lambda(j,k)\ll\omega_{0},$ (65) which puts a limit on the size of $j$ and $k$ that can be considered with this model. A maximal allowed value of $k$, after which the rotating–wave approximation breaks down, is not a property unique to the TC Hamiltonian: the JC Hamiltonian's RWA is invalidated when $k\approx\omega_{0}^{2}/g_{0}^{2}$. Given our upper and lower bounds on $\max\Lambda(j,k)$, we can estimate where the rotating–wave approximation begins to break down. As an example, we consider the behavior of the density of states for an $N=20$ system.
The density of states is a sum of delta functions over all excitation spaces, with the locations of the delta functions being the energies of the Lamb-shifted eigenstates, scaled by the weight: $n(E)=\sum_{k=0}^{\infty}\sum_{\lambda\in\Lambda(k)}w_{k}(\lambda)\delta(E-(k\omega_{0}+\lambda g_{0})).$ (66) When the rotating–wave approximation holds, the distributions of delta functions across neighboring excitation subspaces are well separated, as shown in figure 4. On the other hand, figure 5 illustrates what the energy level structure looks like when the rotating–wave approximation breaks down. In this case, states in a given excitation subspace can overlap with states from neighboring excitation subspaces, breaking the notion of $k$ as a good quantum number and invalidating the predictions of the model. Figure 4: Scaled density of states for $N=20$ spins, with $\omega/g=500$. Figure 5: Scaled density of states for $N=20$ spins, with $\omega/g=100$. ### IV.4 Degeneracy Averaged Statistics of the Collective Lamb Shift Given that relaxation processes and thermal excitation tend to suppress collective behavior in an ensemble and spread population over many subspaces [18, 46, 33, 51], the utility of descriptive statistics of the collective Lamb shifts for specific values of $j$ is limited. To address this constraint, we now discuss properties of the collective Lamb shift upon taking an appropriate average over angular momentum subspaces. To begin, we define a probability distribution on the set of eigenvalues across all values of $j$ at a given value of $k$. A natural choice is to weight each eigenvalue by its degeneracy: $w_{k}(\lambda)=\sum_{j}\begin{cases}d_{j}&\lambda\in\Lambda(j,k)\\ 0&\text{else}\end{cases}$ (67) The sum over $j$ accounts for the case of repeat eigenvalues across $j$ spaces, although we believe that it is generally only the $0$ eigenvalue that repeats across $j$ spaces.
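The degeneracy weighting can be sanity-checked on a small system. The sketch below computes $d_{j}$ from the multiplicity formula restated as equation (74) in the appendix and confirms that the degeneracy-weighted subspace dimensions account for the full $2^{N}$-dimensional spin Hilbert space; the value $N=12$ is an arbitrary small even choice so that $j$ takes integer values.

```python
from math import comb

N = 12  # small even N, so j takes integer values 0, ..., N/2

def d(j):
    """Degeneracy d_j of the spin-j subspace (eq. (74) of the appendix)."""
    return (2 * j + 1) * comb(N, N // 2 + j) // (N // 2 + j + 1)

# Every one of the 2^N spin states lives in exactly one copy of some
# spin-j irrep of dimension 2j+1, so the weighted count must be exact.
total = sum(d(j) * (2 * j + 1) for j in range(N // 2 + 1))
print(total == 2 ** N)  # prints: True
```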
For convenience, we also define the set of Lamb shift eigenvalues over $k$ excitations to be given as: $\Lambda(k)=\bigcup_{j}\Lambda(j,k).$ (68) The set of pairs $(\lambda,w_{k}(\lambda))$ defines an unnormalized probability distribution on the Lamb shifts for an $N$ spin TC system with $k$ excitations. The $t$-th moment of the collective Lamb shifts across all angular momentum spaces is formally written as $\left<\Lambda(k)^{t}\right>=\frac{1}{D_{k}}\sum_{\lambda\in\Lambda(k)}w_{k}(\lambda)\lambda^{t},$ (69) where $D_{k}$ is the number of states with $k$ excitations, given by $D_{k}=\sum_{k^{\prime}=0}^{k}\binom{N}{k^{\prime}}.$ (70) Recall that if $k<k_{0}(j)=N/2-j$, then there are no states of excitation $k$ for the given value of $j$. In this case, $\Lambda(j,k)$ is empty and does not contribute to the statistics of this excitation level. Once $k\geq N$, the total number of states present at a given excitation becomes fixed at $D_{k}=2^{N}$. As with the case of a single angular momentum space, it is better to compute the moments utilizing traces of the Lamb shift coupling matrix, which are computable in $O(n_{j,k})$ floating point operations, compared to the $O(n_{j,k}\log n_{j,k})$ eigenvalue problem. The advantage is particularly significant for the second moment, where we have already determined a closed form expression for the trace, dropping the cost to $O(1)$ operations per angular momentum subspace. Using this insight, the $t$-th moment of the collective Lamb shift splittings across all angular momentum spaces is equal to the weighted average of traces of the coupling matrices: $\left<\Lambda(k)^{t}\right>=\frac{1}{D_{k}}\sum_{j}d_{j}\operatorname{tr}(L(j,k)^{t}).$ (71) As $\Lambda(k)$ is the union of even sets, it is also an even set.
This fact allows us to immediately determine that the odd moments for the collective Lamb shifts indexed by $k$ excitations are all $0$: $\left<\Lambda(k)^{2t+1}\right>=0,\,\,\forall t\in\mathbb{N}.$ (72) Due to the combinatorial nature of the weights on eigenvalues, there is no exact closed form expression for the even moments of the Lamb shift splittings. That being said, it is computationally feasible to visualize the function for select values of $N$, as shown in figure 6. Figure 6: Variance of the unit-less ($\hbar=g_{0}=1$) Lamb shift splittings for $N=1000$ spin–1/2 particles. The variance becomes linear in $k$ soon after $k=N/2=500$; note the non-linearity and reduced scale of the variance in the lower excitation subspaces as compared to the $k>N/2$ subspaces. We first address the apparent suppression of the variance for $k<N/2$. Recalling the partial energy level diagram of figure 3: as we introduce a new angular momentum subspace into the statistics, the ground state of that $j$ space enters with an energy splitting of $0$. Now, since $d_{j}$ is an increasing function for $j<j^{*}$, the dominant element of the distribution of eigenvalues is the ground state of the smallest considered angular momentum subspace. This trend holds until $k$ approaches $N/2-j^{*}$. In the case of $N=1000$, we predict $j^{*}=15$, hence the suppression of the variance until nearly $k=N/2$. As for the linearity of the variance starting at roughly $k=N/2$, we expect the variance of each angular momentum $j$ subspace to be linear in $k$ for values of $k>N/2+j$. It follows that the apparent transition into the linear regime at $k\approx N/2$ occurs because the subspaces in the region of strong support of $d_{j}$, which dominate the statistics, are all in their linear regime for $k>N/2+j^{*}$. Thus, one can expect a linear variance in $k$ for values of $k>N/2+\sqrt{N}/2-1/2+1/(6\sqrt{N})$, which is dominated by $N/2$ for large $N$.
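The prediction $j^{*}=15$ for $N=1000$ can be checked directly against the degeneracy formula. The sketch below locates the discrete maximizer of $d_{j}$ and compares it with the asymptotic estimate derived in the appendix (eq. (82)); exact integer arithmetic via `math.comb` avoids any overflow for $N=1000$.

```python
from math import comb, sqrt

N = 1000

def d(j):
    """Degeneracy d_j of the spin-j subspace (eq. (74) of the appendix)."""
    return (2 * j + 1) * comb(N, N // 2 + j) // (N // 2 + j + 1)

# Discrete maximizer of the degeneracy over all allowed integer j.
j_star = max(range(N // 2 + 1), key=d)

# Asymptotic estimate j* ~ (sqrt(N) - 1)/2 + 1/(6 sqrt(N)), eq. (82).
estimate = (sqrt(N) - 1) / 2 + 1 / (6 * sqrt(N))

print(j_star, round(estimate, 3))  # prints: 15 15.317
```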
Now, while we showed that the variance is indeed a linear function in $k$, we did not provide the function, as the slope of this linearity is a sizeable polynomial in $j$. We will return to the task of averaging this polynomial over all possible values of $j$ momentarily. As a first investigation, we can efficiently illustrate the trend numerically. We perform regression on the linear regime of the variance for select values of $N$, and plot the slope of these lines as a function of $N$, as seen in figure 7. Figure 7: Slope of the variance of the collective Lamb shift splittings in the linear regime, for various $N$. Points mark computed values of the slope for each $N$. The regression model is Slope$(N)=0.9989N-0.27$, with an $R^{2}$ value of nearly 1. The result of the regression indicates that for the values of $k$ and $N$ considered, the variance is effectively growing at a rate of $0.9989Nk$. In fact, we are able to show analytically that the variance of the collective Lamb shift splittings, as a function of $k$ excitations, grows as the product of $N$ and $k$ for $k\geq N$, such that $\operatorname{Var}(\Lambda(k))=O(Nk).$ (73) We further conjecture that this result holds for all $k>k^{*}$, where $k^{*}$ is some value in the range $N/2<k^{*}<N$, and likely depends on $N$. This can be seen in part by noting that the dimension of a subspace, $\left|\mathcal{B}_{j,k}\right|$, saturates at $2j+1$ when $k=k_{0}(j)+2j=N/2+j$. Further, since we need only consider the strong support of $d_{j}$ when averaging (for more details see the appendix), the result could likely be extended to hold if $k^{*}=N/2+O(\sqrt{N})$. In the region $k^{*}<k<N$, the statistics are not exposed to all possible spin states as $D_{k}<2^{N}$. This issue can be avoided by bounding $d_{j}/D_{k}$ instead of $d_{j}/2^{N}$, for all values of $k$ greater than some $k^{*}$. Since $D_{k}$ is almost constant on this region, the extension should not be too difficult. 
The proof of equation (73) is given in the appendix, for the case where $k\geq N$. ## V Discussion It is a common practice to treat the Tavis–Cummings Hamiltonian as a generalized Jaynes–Cummings Hamiltonian operating in the Dicke subspace ($j=N/2$) with a collectively enhanced spin-cavity coupling, $g_{eff}=g_{0}\sqrt{N}$. The Lamb shifts in higher excitation manifolds, then, scale in analogy to the Jaynes–Cummings model, $g_{0}\sqrt{k}\rightarrow g_{eff}\sqrt{k}=g_{0}\sqrt{Nk}$. The justification for this approximation is often taken to be operation at low excitation, with the limiting case being the single-excitation manifold ($k=1$) where the Lamb shift is simply $g_{0}\sqrt{N}$. It is perhaps surprising, then, that the average Lamb-shifted energy level splitting of the Tavis–Cummings Hamiltonian, taken over all angular momentum subspaces, has a magnitude approximately given by $g_{0}\sqrt{Nk}$. In this sense, the dynamics of a large ensemble appear similar for very low excitations ($k\sim 1$) and very high excitations ($k\gg N/2$). Mathematically, this is due to the structure being dominated by the subspaces nearest to the maximally degenerate subspace, as given in the proof of equation (73). In fact, only the lowest $O(\sqrt{N})$ subspaces are theoretically needed to produce the result. We note the consistency of our result with various experiments measuring a high–cooperativity splitting of spin ensembles interacting with high-$Q$ cavities, where a $g_{0}\sqrt{N}$ behavior is observed [52, 53, 28, 25]. These experiments are generally run at relatively high power, corresponding to many excitations in the system. Further, it has been noted that the high–cooperativity splitting disappears at high drive powers [54, 41].
Our results indicate that the coalescence of the splitting into a single peak may be considered a violation of the rotating–wave approximation (Section IV.3), such that the eigenstructure of the Tavis–Cummings model becomes invalid and “classical” behavior emerges due to the smearing of the density of states (Figure 5). Care must be taken in applying this result, however, as the linearity of the variance in $k$ is invalid for $k<N/2$, as illustrated in figure 6. This indicates that, while the single excitation splitting prediction of $O(\sqrt{N})$ is indeed valid for states with enough energy under the right conditions, for non–trivial states of moderate excitation the variance cannot be approximated so simply. The full implications of this result are outside the scope of this paper, but our results suggest that a further examination of the validity of various subspace restriction techniques is required in the regime of low to moderate excitation and ensemble size used for many quantum devices. ## VI Conclusion In this work we have revisited the Tavis–Cummings model and have elucidated a number of new observations. We began by recasting the original decomposition of the Hamiltonian for this system as a direct sum of subspaces, with respective structure described by a two-parameter family of coupling matrices. We further analyzed the structure of the degeneracies within the Hamiltonian and found that the system is well described by a (relatively) small subset of the possible angular momentum spaces. This identification alone has implications for fields such as the complexity and simulation of quantum systems, as well as direct applications to the latter parts of our work. We proceeded to describe various parameters of our coupling matrices that we can compute and what those computed values could imply. First, we showed that, due to the structure of these coupling matrices, finding the eigenvalues is computationally easier than for general matrices.
Additionally, we showed that all odd moments of the distribution of the eigenvalues are zero and computed the second moment explicitly. Using the derived formulae of this paper, one could also compute the higher order even moments, both within each subspace and averaged across all degeneracies. The utility of such computations depends on the feasibility of experimentally detecting these higher moments. We then provided bounds on the maximal value in the Lamb shift collection for each $(j,k)$ subspace. The bounds provided can likely be tightened; however, the current bounds still provide a strong condition for determining the validity of the Tavis–Cummings rotating–wave approximation. Following this, we showed how one would average the computed statistics of the Lamb shifts over the $j$ portion of the spectrum by averaging over the angular momentum, using degeneracy as the weighting function. We bounded the RMS of the splittings for $k\geq N$, and showed good agreement with numerics. A natural question is whether this can be shown for some $k^{*}$ where $N/2<k^{*}<N$, or even for lower values of $k$. We believe that such a $k^{*}$ exists, and provided a sketch of what a proof for this extension might look like. ## Acknowledgements We thank Maryam Mirkamali for helpful discussions on mesoscopic physics and multi–body entanglement, and John Watrous for inspiring us to reexamine this problem using symmetry techniques, which gave us the tools to better understand the structure of the Tavis–Cummings model. ## Funding This work was supported by Industry Canada, the Canada First Research Excellence Fund (CFREF), the Canadian Excellence Research Chairs (CERC 215284) program, the Natural Sciences and Engineering Research Council of Canada (NSERC RGPIN-418579) Discovery program, the Canadian Institute for Advanced Research (CIFAR), and the Province of Ontario. ## References * [1] Willis E Lamb Jr and Robert C Retherford.
Fine structure of the hydrogen atom by a microwave method. Physical Review, 72(3):241, 1947. * [2] Serge Haroche and Jean-Michel Raimond. Exploring the Quantum. Oxford University Press, 2006. * [3] Alexandre Blais, Arne L. Grimsmo, S. M. Girvin, and Andreas Wallraff. Circuit quantum electrodynamics, 2020. * [4] JM Fink, R Bianchetti, Matthias Baur, M Göppl, Lars Steffen, Stefan Filipp, Peter J Leek, Alexandre Blais, and Andreas Wallraff. Dressed collective qubit states and the tavis-cummings model in circuit qed. Physical review letters, 103(8):083601, 2009. * [5] Ping Yang, Jan David Brehm, Juha Leppäkangas, Lingzhen Guo, Michael Marthaler, Isabella Boventer, Alexander Stehli, Tim Wolz, Alexey V Ustinov, and Martin Weides. Probing the tavis-cummings level splitting with intermediate-scale superconducting circuits. Physical Review Applied, 14(2):024025, 2020. * [6] LJ Zou, David Marcos, Sebastian Diehl, Stefan Putz, Jörg Schmiedmayer, Johannes Majer, and Peter Rabl. Implementation of the dicke lattice model in hybrid quantum system arrays. Physical review letters, 113(2):023603, 2014. * [7] Gershon Kurizki, Patrice Bertet, Yuimaru Kubo, Klaus Mølmer, David Petrosyan, Peter Rabl, and Jörg Schmiedmayer. Quantum technologies with hybrid systems. Proceedings of the National Academy of Sciences, 112(13):3866–3873, 2015. * [8] John JL Morton and Brendon W Lovett. Hybrid solid-state qubits: the powerful role of electron spins. Annu. Rev. Condens. Matter Phys., 2(1):189–212, 2011. * [9] Ze-Liang Xiang, Sahel Ashhab, JQ You, and Franco Nori. Hybrid quantum circuits: Superconducting circuits interacting with other quantum systems. Reviews of Modern Physics, 85(2):623, 2013. * [10] Yuimaru Kubo, Cecile Grezes, Andreas Dewes, T Umeda, Junichi Isoya, H Sumiya, N Morishita, H Abe, S Onoda, T Ohshima, et al. Hybrid quantum circuit with a superconducting qubit coupled to a spin ensemble. Physical review letters, 107(22):220501, 2011. 
* [11] C Grezes, Brian Julsgaard, Y Kubo, M Stern, T Umeda, J Isoya, H Sumiya, H Abe, S Onoda, T Ohshima, et al. Multimode storage and retrieval of microwave fields in a spin ensemble. Physical Review X, 4(2):021049, 2014. * [12] Hua Wu, Richard E George, Janus H Wesenberg, Klaus Mølmer, David I Schuster, Robert J Schoelkopf, Kohei M Itoh, Arzhang Ardavan, John JL Morton, and G Andrew D Briggs. Storage of multiple coherent microwave excitations in an electron spin ensemble. Physical review letters, 105(14):140503, 2010. * [13] Xiaobo Zhu, Shiro Saito, Alexander Kemp, Kosuke Kakuyanagi, Shin-ichi Karimoto, Hayato Nakano, William J Munro, Yasuhiro Tokura, Mark S Everitt, Kae Nemoto, et al. Coherent coupling of a superconducting flux qubit to an electron spin ensemble in diamond. Nature, 478(7368):221–224, 2011. * [14] Klemens Hammerer, Anders S Sørensen, and Eugene S Polzik. Quantum interface between light and atomic ensembles. Reviews of Modern Physics, 82(2):1041, 2010. * [15] Mikael Afzelius, N Sangouard, Göran Johansson, MU Staudt, and CM Wilson. Proposal for a coherent quantum memory for propagating microwave photons. New Journal of Physics, 15(6):065008, 2013. * [16] Valentina Caprara Vivoli, Nicolas Sangouard, Mikael Afzelius, and Nicolas Gisin. High-bandwidth quantum memory protocol for storing single photons in rare-earth doped crystals. New Journal of Physics, 15(9):095012, 2013. * [17] Christopher J Wood, Troy W Borneman, and David G Cory. Cavity cooling of an ensemble spin system. Physical Review Letters, 112(5):050501, 2014. * [18] Christopher J Wood and David G Cory. Cavity cooling to the ground state of an ensemble quantum system. Physical Review A, 93(2):023414, 2016. * [19] Audrey Bienfait, JJ Pla, Yuimaru Kubo, Xin Zhou, Michael Stern, CC Lo, CD Weis, Thomas Schenkel, Denis Vion, Daniel Esteve, et al. Controlling spin relaxation with a cavity. Nature, 531(7592):74–77, 2016. 
* [20] Bartolo Albanese, Sebastian Probst, Vishal Ranjan, Christoph W Zollitsch, Marek Pechal, Andreas Wallraff, John JL Morton, Denis Vion, Daniel Esteve, Emmanuel Flurin, et al. Radiative cooling of a spin ensemble. Nature Physics, pages 1–5, 2020. * [21] Vishal Ranjan, Sebastian Probst, Bartolo Albanese, Andrin Doll, Oscar Jacquot, Emmanuel Flurin, Reinier Heeres, Denis Vion, Daniel Esteve, JJL Morton, et al. Pulsed electron spin resonance spectroscopy in the purcell regime. Journal of Magnetic Resonance, 310:106662, 2020. * [22] Edwin T Jaynes and Frederick W Cummings. Comparison of quantum and semiclassical radiation theories with application to the beam maser. Proceedings of the IEEE, 51(1):89–109, 1963. * [23] JM Fink, M Göppl, M Baur, R Bianchetti, PJ Leek, Alexandre Blais, and Andreas Wallraff. Climbing the jaynes–cummings ladder and observing its nonlinearity in a cavity qed system. Nature, 454(7202):315–318, 2008. * [24] Michael Tavis and Frederick W Cummings. Exact solution for an n-molecule—radiation-field hamiltonian. Physical Review, 170(2):379, 1968. * [25] DI Schuster, AP Sears, E Ginossar, L DiCarlo, L Frunzio, JJL Morton, H Wu, GAD Briggs, BB Buckley, DD Awschalom, et al. High-cooperativity coupling of electron-spin ensembles to superconducting cavities. Physical review letters, 105(14):140501, 2010. * [26] OWB Benningshof, HR Mohebbi, IAJ Taminiau, GX Miao, and DG Cory. Superconducting microstrip resonator for pulsed esr of thin films. Journal of Magnetic Resonance, 230:84–87, 2013. * [27] Atac Imamoğlu. Cavity qed based on collective magnetic dipole coupling: spin ensembles as hybrid two-level systems. Physical review letters, 102(8):083602, 2009. * [28] Y Kubo, FR Ong, Patrice Bertet, Denis Vion, V Jacques, D Zheng, A Dréau, J-F Roch, Alexia Auffèves, Fedor Jelezko, et al. Strong coupling of a spin ensemble to a superconducting resonator. Physical review letters, 105(14):140502, 2010. * [29] T Holstein and H Primakoff.
Field dependence of the intrinsic domain magnetization of a ferromagnet. Physical Review, 58(12):1098, 1940. * [30] Barry M Garraway. The dicke model in quantum optics: Dicke model revisited. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369(1939):1137–1155, 2011. * [31] Robert H Dicke. Coherence in spontaneous radiation processes. Physical review, 93(1):99, 1954. * [32] E Ressayre and A Tallet. Holstein-primakoff transformation for the study of cooperative emission of radiation. Physical Review A, 11(3):981, 1975. * [33] Janus Wesenberg and Klaus Mølmer. Mixed collective states of many spins. Physical Review A, 65(6):062304, 2002. * [34] Honghua Zhong, Qiongtao Xie, Murray T Batchelor, and Chaohong Lee. Analytical eigenstates for the quantum rabi model. Journal of Physics A: Mathematical and Theoretical, 46(41):415302, 2013. * [35] Andrzej J Maciejewski, Maria Przybylska, and Tomasz Stachowiak. Full spectrum of the rabi model. Physics Letters A, 378(1-2):16–20, 2014. * [36] BR Judd. Exact solutions to a class of jahn-teller systems. Journal of Physics C: Solid State Physics, 12(9):1685, 1979. * [37] Daniel Braak. Integrability of the rabi model. Physical Review Letters, 107(10):100401, 2011. * [38] Rodney J Baxter. Exactly solved models in statistical mechanics. Elsevier, 2016. * [39] NM Bogoliubov, RK Bullough, and J Timonen. Exact solution of generalized tavis-cummings models in quantum optics. Journal of Physics A: Mathematical and General, 29(19):6305, 1996. * [40] NM Bogolyubov. Algebraic bethe ansatz and the tavis-cummings model. Journal of Mathematical Sciences, 100(2):2051–2060, 2000. * [41] I Chiorescu, N Groll, Sylvain Bertaina, T Mori, and S Miyashita. Magnetic strong coupling in a spin-photon system and transition to classical regime. Physical Review B, 82(2):024413, 2010. * [42] Hermann Weyl. The classical groups: their invariants and representations, volume 45. Princeton university press, 1946.
* [43] Robert Mann. An introduction to particle physics and the standard model. CRC press, 2011. * [44] Heinz Rutishauser. The jacobi method for real symmetric matrices. Numerische Mathematik, 9(1):1–10, 1966. * [45] Gerald Teschl. Jacobi operators and completely integrable nonlinear lattices. Number 72. American Mathematical Soc., 2000. * [46] Ben Q Baragiola, Bradley A Chase, and JM Geremia. Collective uncertainty in partially polarized and partially decohered spin-1/2 systems. Physical Review A, 81(3):032104, 2010. * [47] Ed S Coakley and Vladimir Rokhlin. A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices. Applied and Computational Harmonic Analysis, 34(3):379–414, 2013. * [48] Llewellyn Thomas. Elliptic problems in linear differential equations over a network: Watson scientific computing laboratory. Columbia Univ., NY, 1949. * [49] David S Watkins. Product eigenvalue problems. SIAM review, 47(1):3–40, 2005. * [50] W Barth, RS Martin, and JH Wilkinson. Calculation of the eigenvalues of a symmetric tridiagonal matrix by the method of bisection. Numerische Mathematik, 9(5):386–393, 1967. * [51] Bradley A Chase and JM Geremia. Collective processes of an ensemble of spin-1/2 particles. Physical Review A, 78(5):052101, 2008. * [52] BC Rose, AM Tyryshkin, H Riemann, NV Abrosimov, P Becker, H-J Pohl, MLW Thewalt, Kohei M Itoh, and SA Lyon. Coherent rabi dynamics of a superradiant spin ensemble in a microwave cavity. Physical Review X, 7(3):031002, 2017. * [53] Andreas Angerer, Kirill Streltsov, Thomas Astner, Stefan Putz, Hitoshi Sumiya, Shinobu Onoda, Junichi Isoya, William J Munro, Kae Nemoto, Jörg Schmiedmayer, et al. Superradiant emission from colour centres in diamond. Nature Physics, 14(12):1168–1172, 2018. * [54] Andreas Angerer, Stefan Putz, Dmitry O Krimer, Thomas Astner, Matthias Zens, Ralph Glattauer, Kirill Streltsov, William J Munro, Kae Nemoto, Stefan Rotter, et al.
Ultralong relaxation times in bistable hybrid quantum systems. Science advances, 3(12):e1701626, 2017. * [55] Joel Spencer and Laura Florescu. Asymptopia, volume 71 of Student Mathematical Library. American Mathematical Society, Providence, RI, page 66, 2014. ## Appendix A ###### Proof of equation (46). To derive the value of $j^{*}$, we begin by defining the degeneracy function in a convenient form, $f(j)=\frac{2j+1}{N/2+j+1}{N\choose N/2+j},$ (74) where $j$ takes integer or half-integer values $0\leq j\leq N/2$, depending on the parity of $N$. To prepare for differentiation, the binomial coefficient can be extended to a continuous function ${N\choose K}=\frac{\Gamma(N+1)}{\Gamma(K+1)\Gamma(N-K+1)},$ (75) such that (74) can be written as a continuous function in $j$: $f(j)=\frac{2j+1}{N/2+j+1}\frac{\Gamma(N+1)}{\Gamma(N/2+j+1)\Gamma(N/2-j+1)}.$ (76) We may now differentiate and look for critical values: $\frac{d}{dj}f(j)=4{N\choose N/2+j}\frac{\frac{1}{2}(2j+1)(N+2j+2)(H_{N/2-j}-H_{N/2+j})+N+1}{(N+2j+2)^{2}}.$ (77) In equation (77), $H_{x}$ is the $x$-th harmonic number. The degeneracy is then maximal when $\frac{1}{2}(2j+1)(N+2j+2)(H_{N/2-j}-H_{N/2+j})+N+1=0.$ (78) We can re-cast this result by utilizing the expansion $H_{x}=\log x+\gamma+O(x^{-1})$, where $\gamma$ is the Euler-Mascheroni constant ($\gamma\approx 0.577$). This allows us to take the difference of the harmonic numbers as logarithms, which cancels the additive constant. After simplifying, we are left with $\frac{1}{2}(2j+1)(N+2j+2)\log(\frac{N/2-j}{N/2+j})+N+1=0.$ (79) This is a very tight approximation. If we examine the series expansions of this expression, we see that taking $j\approx\frac{\sqrt{N}}{2}$ will remove the leading error. We may repeat this procedure, noting that the errors form a Laurent series in $\sqrt{N}$, so we adjust by corrections of decreasing powers of $\sqrt{N}$.
Using this, we take as a guess that $j=\frac{\sqrt{N}-1}{2}+\frac{1}{6\sqrt{N}}$, which yields: $(3N+1)(3(N^{3/2}+N+\sqrt{N})+1)\frac{\log(\frac{-6N+6\sqrt{N}-2}{3\sqrt{N}(N+\sqrt{N}-1)+1}+1)}{18N}+N+1.$ (80) When expanded as a series in the limit of large $N$, we have: $\frac{1}{\sqrt{N}}+O(\frac{1}{N}),$ (81) and thus our guess is equal to the true root in the limit of $N\longrightarrow\infty$. Thus, $j^{*}=\frac{\sqrt{N}-1}{2}+\frac{1}{6\sqrt{N}}+O(N^{-1})$ (82) is the collective spin space with the largest degeneracy, up to error $O(N^{-1})$. ∎ ###### Proof of equation (47). We begin by considering the ratio: $\displaystyle\frac{d_{j^{*}}}{d_{j^{*}+1}}$ $\displaystyle=$ $\displaystyle\frac{N!(2j^{*}+1)}{(N/2-j^{*})!(N/2+j^{*}+1)!}\cdot\frac{(N/2-j^{*}-1)!(N/2+j^{*}+2)!}{N!(2j^{*}+3)}$ (83) $\displaystyle=$ $\displaystyle\frac{2j^{*}+1}{2j^{*}+3}\cdot\frac{N/2+j^{*}+2}{N/2-j^{*}}$ (84) $\displaystyle=$ $\displaystyle(1-\frac{2}{2j^{*}+3})\cdot\frac{1+\frac{2j^{*}}{N}+\frac{4}{N}}{1-\frac{2j^{*}}{N}}$ (85) $\displaystyle=$ $\displaystyle(1-\frac{2}{2j^{*}+3})(1+\frac{2j^{*}}{N}+\frac{4}{N})(1+\frac{2j^{*}}{N}+(\frac{2j^{*}}{N})^{2}+O(((j^{*})/N)^{3}))$ (86) where the last factor is a geometric series expansion with a ratio of $\frac{2j^{*}}{N}$. Grouping by powers, using $j^{*}=O(\sqrt{N})$, we have: $1+[-\frac{2}{2j^{*}+3}+\frac{2j^{*}}{N}+\frac{2j^{*}}{N}]+[\frac{4}{N}-2\frac{4j^{*}}{(2j^{*}+3)N}+2(\frac{2j^{*}}{N})^{2}]+O(N^{-3/2})$ (87) Now we utilize $j^{*}=\frac{\sqrt{N}}{2}-\frac{1}{2}+\frac{1}{6\sqrt{N}}$ to evaluate the above: $\displaystyle=$ $\displaystyle 1+0+[-\frac{2}{N}+\frac{4}{N}-\frac{4\sqrt{N}}{\sqrt{N}N}+2(\frac{1}{\sqrt{N}})^{2}]+O(N^{-3/2})$ (88) So the ratio of the degeneracies is $1+O(N^{-3/2})$. ∎ ###### Proof of Strong Support for $0\leq j\leq O(\sqrt{N})$. 
Recall the computation of the relative population between the maximal angular momentum subspace and its neighbor, and consider the case when the leading term will contribute to the ratio of neighboring values of $j$. We found that when $\frac{2j}{N}\ll 1$, $\frac{d_{j}}{d_{j+1}}=1+[\frac{4j}{N}-\frac{2}{2j+3}]+O(N^{-1}),$ (89) which cancels when $j=j^{*}$ since $j^{*}=\frac{\sqrt{N}}{2}+O(1)$. Suppose we still have $\frac{2j}{N}\ll 1$, but now we consider a subspace near the maximal angular momentum space, such that $j=j^{*}+\Omega(\sqrt{N})$. The ratio for this value of $j$ is then $1+\Omega(N^{-1/2})$. This ratio remains valid for increasing values of $j$, so long as $\frac{2j}{N}\ll 1$. To find the ratio of the degeneracies of the next nearest neighbors, we apply this procedure twice, finding $\frac{d_{j}}{d_{j+2}}=(1+\Omega(1/\sqrt{N}))^{2}.$ (90) To extend this argument to the further subspace degeneracies, listed within the set $\\{d_{j},d_{j+1},\ldots,d_{j+O(\sqrt{N})}\\}$, we can bound the sum by the corresponding infinite geometric series, since the remaining terms contribute a much smaller portion to the sum. This series converges to $\frac{1}{1-\frac{1}{1+\Omega(1/\sqrt{N})}}=O(\sqrt{N}).$ (91) We have shown that while the total number of allowed values for $j$ is $O(N)$, the fractional contribution contained in this tail region is only $O(N^{-1/2})$. Thus, of the $2^{N}$ possible angular momentum states, most are contained within the $O(\sqrt{N})$ subspaces with the lowest values of $j$. ∎ ###### Proof of equation (IV.2). 
We will make use of the following summation formulae, $\displaystyle\sum_{i=1}^{n}i$ $\displaystyle=\frac{n(n+1)}{2}$ $\displaystyle\sum_{i=1}^{n}i^{2}$ $\displaystyle=\frac{n(n+1)(2n+1)}{6}$ $\displaystyle\sum_{i=1}^{n}i^{3}$ $\displaystyle=\frac{n^{2}(n+1)^{2}}{4}.$ Recall that the trace of the square of the coupling matrix can be written exactly as $\operatorname{tr}L(j,k)^{2}=2\sum_{\alpha=1}^{n}l_{\alpha}(j,k)^{2}=2\sum_{\alpha=1}^{n}\big{(}2\alpha j-\alpha(\alpha-1)\big{)}\big{(}k^{\prime}-\alpha+1\big{)}.$ (92) Now, grouping the summand by orders of $\alpha$, we have $l_{\alpha}^{2}(j,k)=\alpha^{3}-\alpha^{2}(2j+1+k^{\prime}+1)+\alpha(2j+1)(k^{\prime}+1).$ (93) And so $\displaystyle\operatorname{tr}L(j,k)^{2}$ $\displaystyle=\frac{1}{2}\left|\mathcal{B}_{j,k}\right|^{4}-\frac{2}{3}\left|\mathcal{B}_{j,k}\right|^{3}(2j+k^{\prime}+7/2)+\left|\mathcal{B}_{j,k}\right|^{2}(4j+2k^{\prime}+2jk^{\prime}+7/2)$ $\displaystyle-\frac{2}{3}\left|\mathcal{B}_{j,k}\right|(3jk^{\prime}+4j+2k^{\prime}+5/2).$ (94) The variance is then, by definition, $\displaystyle\operatorname{Var}(\Lambda(j,k))$ $\displaystyle=\frac{1}{2}\left|\mathcal{B}_{j,k}\right|^{3}-\frac{1}{3}\left|\mathcal{B}_{j,k}\right|^{2}(2k^{\prime}+4j+7)$ $\displaystyle+\left|\mathcal{B}_{j,k}\right|(2jk^{\prime}+2k^{\prime}+4j+7/2)-\frac{1}{3}(6jk^{\prime}+8j+4k^{\prime}+5).$ (95) ∎ ###### Proof of equation (63). We begin by transforming the entry values into a continuous function of $\alpha$ so that we may differentiate it. We will use Perron-Frobenius since we have a non-negative matrix and so can bound the maximal eigenvalue by the maximal row sum. To this end we focus on maximizing a single $l_{\alpha}(j,k)$ entry and double it, since the true maximum will occur within one entry of the optimal continuous value choice for $\alpha$ and there are two entries in that row. 
Differentiating this we have: $\frac{d}{d\alpha}\left[\sqrt{\alpha}\sqrt{2j+1-\alpha}\sqrt{k^{\prime}-\alpha+1}\right]=\frac{3\alpha^{2}-4\alpha+2j(-2\alpha+k^{\prime}+1)-2\alpha k^{\prime}+k^{\prime}+1}{2\sqrt{\alpha(\alpha-2j-1)(\alpha-k^{\prime}-1)}}$ (96) and so this is optimized when: $\displaystyle\alpha$ $\displaystyle=$ $\displaystyle\frac{1}{3}\left(2j+k^{\prime}+2\pm\sqrt{4j^{2}-2jk^{\prime}+2j+(k^{\prime})^{2}+k^{\prime}+1}\right)$ (97) $\displaystyle=$ $\displaystyle\frac{1}{3}\left(2j+k^{\prime}+2\pm\sqrt{(2j+\frac{1}{2})^{2}+(k^{\prime}+\frac{1}{2})^{2}-2jk^{\prime}+\frac{1}{2}}\right)$ (98) In the above we must exclude the positive sign choice since this results in $\alpha\geq\left|\mathcal{B}_{j,k}\right|$, which is beyond the domain for $\alpha$. With the negative sign choice we note that $\alpha$ is linear in $j$ and $k^{\prime}$ to first order, so we remove the 1 shifts in our objective function. With this, we have: $\displaystyle\sqrt{j^{2}-(\alpha-j)^{2}}\sqrt{k^{\prime}-\alpha}$ $\displaystyle=$ $\displaystyle\sqrt{2j\alpha-\alpha^{2}}\sqrt{k^{\prime}-\alpha}$ (99) $\displaystyle=$ $\displaystyle\sqrt{\alpha}\sqrt{2j-\alpha}\sqrt{k^{\prime}-\alpha}$ (100) Solving for the roots again using this simplified expression provides: $\displaystyle\alpha$ $\displaystyle=$ $\displaystyle\frac{1}{3}(2j+k^{\prime}-\sqrt{4j^{2}-2jk^{\prime}+(k^{\prime})^{2}})+O(\sqrt{j}+\sqrt{k})$ (101) $\displaystyle=$ $\displaystyle\frac{1}{3}(2j+k^{\prime}-\sqrt{(2j+k^{\prime})^{2}-6jk^{\prime}})$ (102) $\displaystyle=$ $\displaystyle\frac{1}{3}(2j+k^{\prime}-(2j+k^{\prime})\sqrt{1-\frac{6jk^{\prime}}{(2j+k^{\prime})^{2}}})$ (103) $\displaystyle=$ $\displaystyle\frac{1}{3}(2j+k^{\prime})(1-\sqrt{1-\frac{6jk^{\prime}}{(2j+k^{\prime})^{2}}})$ (104) Observe that $\max_{j,k^{\prime}}\frac{6jk^{\prime}}{(2j+k^{\prime})^{2}}=\frac{3}{4}$ where $k^{\prime}=2j$, and $\min_{j,k^{\prime}}\frac{6jk^{\prime}}{(2j+k^{\prime})^{2}}=0$ when one is constant and the other approaches 
infinity. This means that: $0<\alpha\leq\frac{1}{6}(2j+k^{\prime})$ (105) Putting this into our expression for the largest eigenvalue, being sure to include the factor of two due to there being a second entry, provides: $\displaystyle\max\Lambda(j,k)$ $\displaystyle<$ $\displaystyle 2\sqrt{\frac{1}{6}(2j+k^{\prime})}\sqrt{2j}\sqrt{k^{\prime}}$ (106) $\displaystyle=$ $\displaystyle\frac{2}{\sqrt{3}}\sqrt{(2j+k^{\prime})jk^{\prime}}+O(j^{3/4}+(k^{\prime})^{3/4})$ (107) This expression is mostly relevant when $2j\approx k^{\prime}$. We now move to the cases of $2j\ll k^{\prime}$ and $k^{\prime}\ll 2j$. Returning to our prior expression this is: $\displaystyle\sqrt{\frac{1}{3}(2j+k^{\prime})(1-\sqrt{1-\frac{6jk^{\prime}}{(2j+k^{\prime})^{2}}})}\sqrt{2j-\frac{1}{3}(2j+k^{\prime})(1-\sqrt{1-\frac{6jk^{\prime}}{(2j+k^{\prime})^{2}}})}$ (109) $\displaystyle\times\sqrt{k^{\prime}-\frac{1}{3}(2j+k^{\prime})(1-\sqrt{1-\frac{6jk^{\prime}}{(2j+k^{\prime})^{2}}})}$ $\displaystyle=$ $\displaystyle\sqrt{\frac{2}{27}(8j^{3}(\sqrt{1-\frac{6jk}{(2j+k)^{2}}}-1)+6j^{2}k+k^{3}(\sqrt{1-\frac{6jk}{(2j+k)^{2}}}-1)+3jk^{2})}$ (110) Taking a series expansion of this and doubling for there being two entries we have: $\displaystyle\max\Lambda(j,k)$ $\displaystyle\leq$ $\displaystyle\begin{cases}2\sqrt{j^{2}k^{\prime}-j^{3}+\frac{j^{4}}{4k^{\prime}}}&2j\ll k^{\prime}\\\ 2\sqrt{\frac{j(k^{\prime})^{2}}{2}-\frac{1}{8}(k^{\prime})^{3}+\frac{1}{128}\frac{(k^{\prime})^{4}}{j}}&k^{\prime}\ll 2j\end{cases}$ (111) $\displaystyle\approx$ $\displaystyle\begin{cases}2[j\sqrt{k^{\prime}}-\frac{1}{2}\frac{j^{2}}{\sqrt{k}}+\frac{1}{8}\frac{j^{4}}{k^{5/2}}+O(j^{5}/(k^{\prime})^{7/2})]&2j\ll k^{\prime}\\\ 2[\frac{1}{\sqrt{2}}k^{\prime}\sqrt{j}-\frac{1}{8\sqrt{2}}\frac{(k^{\prime})^{2}}{\sqrt{j}}+\frac{1}{512}\frac{(k^{\prime})^{4}}{j^{5/2}}+O((k^{\prime})^{5}/j^{7/2})]&k^{\prime}\ll 2j\end{cases}$ (112) ∎ ###### Proof of equation (71). 
This relation can be seen as the weighted average of averages, and can thus be derived as follows: $\displaystyle\left<\Lambda(k)^{t}\right>$ $\displaystyle=\frac{1}{D_{k}}\sum_{j}d_{j}\left|\mathcal{B}_{j,k}\right|\left<\Lambda(j,k)^{t}\right>$ $\displaystyle=\frac{1}{D_{k}}\sum_{j}d_{j}\left|\mathcal{B}_{j,k}\right|\frac{\operatorname{tr}(L(j,k)^{t})}{\left|\mathcal{B}_{j,k}\right|}$ $\displaystyle=\frac{1}{D_{k}}\sum_{j}d_{j}\operatorname{tr}(L(j,k)^{t}).$ ∎ ###### Lemma 1. The degeneracies in our system satisfy: $\frac{d_{j}}{2^{N}}=O(\frac{1}{N})$ (113) for all allowed values of $j$. ###### Proof. Since $d_{j}\leq d_{j^{*}}$ for each $j$, we particularize to $j=j^{*}$. Then, taking only the leading term of $j^{*}=\sqrt{N}/2$, we have $d_{j^{*}}=\frac{\sqrt{N}+1}{N/2+\sqrt{N}/2+1}\binom{N}{N/2+\sqrt{N}/2+1}.$ (114) Focusing on the first factor, $\frac{2}{\frac{N+1}{\sqrt{N}+1}+1}=O(1/\sqrt{N}).$ (115) Since $k=N/2+\sqrt{N}/2+1$, we have $\left|N/2-k\right|=o(N^{2/3})$, so we can utilize the following asymptotic equivalence relation[55]: $\binom{N}{k}\sim\frac{2^{N}}{\sqrt{N\pi/2}}e^{-(N-2k)^{2}/(2N)}.$ (116) Then, using the fact that $\left|N-2k\right|=\sqrt{N}+2$, we find that $\displaystyle e^{-(N-2k)^{2}/(2N)}$ $\displaystyle=e^{-(\sqrt{N}+2)^{2}/(2N)}$ $\displaystyle=e^{-1/2-2/\sqrt{N}-2/N}$ $\displaystyle=\frac{1}{\sqrt{e}}\big{(}1+O(1/\sqrt{N})\big{)}.$ (117) Putting together the leading term with the asymptotic equivalence relation, we find $\displaystyle\binom{N}{N/2+\sqrt{N}/2+1}$ $\displaystyle\sim\frac{2^{N}}{\sqrt{N\pi e/2}}\big{(}1+O(1/\sqrt{N})\big{)}$ $\displaystyle=O(2^{N}/\sqrt{N}).$ (118) This finally implies $d_{j^{*}}=O(2^{N}/N),$ (119) and thus it holds that for all allowed $j$, $d_{j}2^{-N}=O(1/N).$ (120) ∎ ###### Proof of equation (73). In order to derive our result, we particularize to $k>N$, since this fixes $D_{k}=2^{N}$ and $k>N/2+j$ is true for each value of $j$. 
Then, $\operatorname{Var}(\Lambda(k))=\frac{1}{2^{N}}\sum_{j}d_{j}\operatorname{tr}(L(j,k)^{2}).$ (121) Recall that the trace of the square of a coupling matrix is given by $\displaystyle\operatorname{tr}(L(j,k)^{2})$ $\displaystyle=\frac{1}{2}\left|\mathcal{B}_{j,k}\right|^{4}-\frac{1}{3}\left|\mathcal{B}_{j,k}\right|^{3}(2k^{\prime}+4j+7)+\left|\mathcal{B}_{j,k}\right|^{2}(2jk^{\prime}+2k^{\prime}+4j+7/2)-\frac{1}{3}\left|\mathcal{B}_{j,k}\right|(6jk^{\prime}+8j+4k^{\prime}+5)$ Using the fact now that $\left|\mathcal{B}_{j,k}\right|=2j+1$ and $k^{\prime}=k-N/2+j$, we find that $\displaystyle\operatorname{tr}(L(j,k)^{2})$ $\displaystyle=k\big{(}\frac{8}{3}j^{3}+4j^{2}+\frac{4}{3}j\big{)}-N\big{(}\frac{4}{3}j^{3}+2j^{2}+\frac{2}{3}j\big{)}+\big{(}\frac{4}{3}j^{3}+2j^{2}+\frac{2}{3}j\big{)}.$ We focus on only the terms of order $k$; thus the dominant part of the expression we wish to analyze is given by $\frac{4}{3}k(2j^{3}+3j^{2}+j).$ (122) It remains to determine the order-$k$ contribution to the entire variance, upon averaging over the degeneracies, $\operatorname{Var}(\Lambda(k))=\frac{4}{3}k\sum_{j}\frac{d_{j}}{2^{N}}(2j^{3}+3j^{2}+j)+\cdots,$ (123) where the terms of order $k^{0}$ will be dropped moving forward. In order to make the sum over $j$ tractable, we make use of Lemma 1, $\frac{d_{j}}{2^{N}}=O\big{(}\frac{1}{N}\big{)}.$ (124) Given that the strong support of the weighting function is from $0$ to $O(\sqrt{N})$, we have, $\displaystyle\sum_{j}\frac{4kd_{j}}{2^{N}}\frac{2j^{3}+3j^{2}+j}{3}$ $\displaystyle=\frac{4k}{3}\sum_{j}O\big{(}\frac{1}{N}\big{)}(2j^{3}+3j^{2}+j)$ $\displaystyle\approx O\big{(}\frac{k}{N}\big{)}\sum_{j=0}^{O(\sqrt{N})}(2j^{3}+3j^{2}+j)$ $\displaystyle=O\big{(}\frac{k}{N}\big{)}O(N^{2})$ $\displaystyle=O(Nk).$ (125) ∎ #### N=2 Case of the Tavis–Cummings Model Here we solve the Tavis–Cummings model for the case of two spins interacting with a cavity. 
We show that this problem, beyond the first couple of rungs of the spectrum, can be solved in two different ways: one where we just solve the full matrix for two spin-$1/2$’s and one where we split the matrix into a spin-0 space and a spin-1 space. Keeping with the same convention as before, the spectrum can be written graphically as: $\begin{matrix}&\vdots&&\vdots&&\vdots&&\vdots\\\ \vspace{0.3cm}\\\ k=3&\rule{28.45274pt}{2.84544pt}&\hskip 28.45274pt&\rule{28.45274pt}{2.84544pt}&\hskip 28.45274pt&\rule{28.45274pt}{2.84544pt}&\hskip 28.45274pt&\rule{28.45274pt}{2.84544pt}\\\ \vspace{0.5cm}\\\ k=2&\rule{28.45274pt}{2.84544pt}&\hskip 28.45274pt&\rule{28.45274pt}{2.84544pt}&\hskip 28.45274pt&\rule{28.45274pt}{2.84544pt}&\hskip 28.45274pt&\rule{28.45274pt}{2.84544pt}\\\ \vspace{0.5cm}\\\ k=1&\rule{28.45274pt}{2.84544pt}&\hskip 28.45274pt&\rule{28.45274pt}{2.84544pt}&\hskip 28.45274pt&\rule{28.45274pt}{2.84544pt}\\\ \vspace{0.5cm}\\\ k=0&\rule{28.45274pt}{2.84544pt}&&\\\ \vspace{0.2cm}\\\ &|\downarrow\downarrow\rangle&&|\downarrow\uparrow\rangle&&|\uparrow\downarrow\rangle&&|\uparrow\uparrow\rangle\end{matrix}$ Again the first row’s state, $|0\rangle|\downarrow\downarrow\rangle$, is unperturbed under the coupling term. The second row has coupling matrix: $\begin{bmatrix}0&1&1\\\ 1&0&0\\\ 1&0&0\\\ \end{bmatrix}$ (126) We find that the dressed states and perturbed energies are: $\displaystyle\frac{1}{2}[\sqrt{2}|1\rangle|\downarrow\downarrow\rangle+|0\rangle|\downarrow\uparrow\rangle+|0\rangle|\uparrow\downarrow\rangle]$ , $\displaystyle\quad E=\omega_{0}+g_{0}\sqrt{2}$ (127) $\displaystyle\frac{1}{2}[\sqrt{2}|1\rangle|\downarrow\downarrow\rangle-|0\rangle|\downarrow\uparrow\rangle-|0\rangle|\uparrow\downarrow\rangle]$ , $\displaystyle\quad E=\omega_{0}-g_{0}\sqrt{2}$ (128) $\displaystyle\frac{1}{\sqrt{2}}[|0\rangle|\downarrow\uparrow\rangle-|0\rangle|\uparrow\downarrow\rangle]$ , $\displaystyle\quad E=\omega_{0}+0$ (129) The set of splittings here is $\\{0,\pm g_{0}\sqrt{2}\\}$. 
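As a quick numerical check (illustrative, not part of the derivation), the $3\times 3$ coupling matrix above has eigenvalues $\\{0,\pm\sqrt{2}\\}$, reproducing the splittings $\\{0,\pm g_{0}\sqrt{2}\\}$:

```python
import numpy as np

# k = 1 coupling matrix in the basis {|1>|dd>, |0>|du>, |0>|ud>}
M = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
evals = np.sort(np.linalg.eigvalsh(M))
print(evals)  # approximately [-sqrt(2), 0, sqrt(2)]
```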
For the further levels with $k$ excitations, $k\geq 2$, there are always exactly four states that we must diagonalize, providing the coupling matrix: $\begin{bmatrix}0&\sqrt{k}&\sqrt{k}&0\\\ \sqrt{k}&0&0&\sqrt{k-1}\\\ \sqrt{k}&0&0&\sqrt{k-1}\\\ 0&\sqrt{k-1}&\sqrt{k-1}&0\\\ \end{bmatrix}$ (130) Solving this directly the dressed states and energies are: $\displaystyle\frac{1}{\sqrt{2k-1}}[-\sqrt{k-1}|k\rangle|\downarrow\downarrow\rangle+\sqrt{k}|k-2\rangle|\uparrow\uparrow\rangle]$ , (131) $\displaystyle E=k\omega_{0}+0$ (132) $\displaystyle\frac{1}{\sqrt{2}}[|k-1\rangle|\downarrow\uparrow\rangle-|k-1\rangle|\uparrow\downarrow\rangle]$ , (133) $\displaystyle E=k\omega_{0}+0$ (134) $\displaystyle\frac{1}{2\sqrt{2k-1}}[\sqrt{2}\sqrt{k}|k\rangle|\downarrow\downarrow\rangle+\sqrt{2k-1}|k-1\rangle|\downarrow\uparrow\rangle+\sqrt{2k-1}|k-1\rangle|\uparrow\downarrow\rangle+\sqrt{2}\sqrt{k-1}|k-2\rangle|\uparrow\uparrow\rangle]$ , (135) $\displaystyle E=k\omega_{0}+g_{0}\sqrt{2}\sqrt{2k-1}$ (136) $\displaystyle\frac{1}{2\sqrt{2k-1}}[\sqrt{2}\sqrt{k}|k\rangle|\downarrow\downarrow\rangle-\sqrt{2k-1}|k-1\rangle|\downarrow\uparrow\rangle-\sqrt{2k-1}|k-1\rangle|\uparrow\downarrow\rangle+\sqrt{2}\sqrt{k-1}|k-2\rangle|\uparrow\uparrow\rangle]$ , (137) $\displaystyle E=k\omega_{0}-g_{0}\sqrt{2}\sqrt{2k-1}$ (138) This provides a complete description of the states. We carried out this computation with the Zeeman basis states. If we instead take as our basis states the collective spin states with constant angular momentum (spin-1 and spin-0), we slightly reduce the complexity of the problem. In this case the only difference is replacing $|k\rangle|\uparrow\downarrow\rangle$ and $|k\rangle|\downarrow\uparrow\rangle$ with $|k\rangle(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle)$ and $|k\rangle(|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle)$–the first breaks off a spin-1 space while the second breaks off a spin-0 space. 
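The claimed spectrum can likewise be confirmed numerically for the general $4\times 4$ block above: for every $k\geq 2$ the eigenvalues are $\\{0,0,\pm\sqrt{2(2k-1)}\\}$, matching the energies listed (a sketch using NumPy):

```python
import numpy as np

def coupling_matrix(k):
    # basis: {|k>|dd>, |k-1>|du>, |k-1>|ud>, |k-2>|uu>}
    s, t = np.sqrt(k), np.sqrt(k - 1)
    return np.array([[0, s, s, 0],
                     [s, 0, 0, t],
                     [s, 0, 0, t],
                     [0, t, t, 0]])

for k in (2, 5, 10):
    evals = np.sort(np.linalg.eigvalsh(coupling_matrix(k)))
    gap = np.sqrt(2 * (2 * k - 1))
    assert np.allclose(evals, [-gap, 0.0, 0.0, gap])
```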
In this frame, the singlet state $|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle$ is always annihilated by $J_{\pm}$, thus forming its own space. This particular decomposition does not work in general, but the singlet space is still orthogonal to the other states. In this case, since the singlet state removes one vector, the coupling matrices for the above become smaller and thus easier to diagonalize. The resulting states and energies are of course left unchanged. #### N=3 Case of the Tavis–Cummings Model with $k\leq 2$ For the sake of completeness we show the dressed states for $k\leq 2$ for the $N=3$ case of the Tavis–Cummings model, which were neglected in Section III.3.2. As always the $k=0$ state is unperturbed, giving $|0\rangle|\downarrow\downarrow\downarrow\rangle$ with energy $E=0$. At $k=1$ there are four basis states, which in the standard basis give a coupling matrix of: $g\begin{bmatrix}0&1&1&1\\\ 1&0&0&0\\\ 1&0&0&0\\\ 1&0&0&0\end{bmatrix}$ (139) Solving this matrix, we find that the dressed states and their associated energies are given by: $\displaystyle\frac{1}{\sqrt{6}}[\sqrt{3}|1\rangle|\downarrow\downarrow\downarrow\rangle+|0\rangle(|\downarrow\downarrow\uparrow\rangle+|\downarrow\uparrow\downarrow\rangle+|\uparrow\downarrow\downarrow\rangle)]$ , $\displaystyle\quad E=\omega_{0}+g_{0}\sqrt{3}$ (140) $\displaystyle\frac{1}{\sqrt{6}}[-\sqrt{3}|1\rangle|\downarrow\downarrow\downarrow\rangle+|0\rangle(|\downarrow\downarrow\uparrow\rangle+|\downarrow\uparrow\downarrow\rangle+|\uparrow\downarrow\downarrow\rangle)]$ , $\displaystyle\quad E=\omega_{0}-g_{0}\sqrt{3}$ (141) $\displaystyle\frac{1}{\sqrt{2}}|0\rangle(|\uparrow\downarrow\downarrow\rangle-|\downarrow\downarrow\uparrow\rangle)$ , $\displaystyle\quad E=\omega_{0}$ (142) $\displaystyle\frac{1}{\sqrt{2}}|0\rangle(|\downarrow\uparrow\downarrow\rangle-|\downarrow\downarrow\uparrow\rangle)$ , $\displaystyle\quad E=\omega_{0}$ (143) At $k=2$ there are now seven basis states, which in terms of the 
standard Zeeman bases gives a coupling matrix of: $\begin{bmatrix}0&\sqrt{2}&\sqrt{2}&0&\sqrt{2}&0&0\\\ \sqrt{2}&0&0&1&0&1&0\\\ \sqrt{2}&0&0&1&0&0&1\\\ 0&1&1&0&0&0&0\\\ \sqrt{2}&0&0&0&0&1&1\\\ 0&1&0&0&1&0&0\\\ 0&0&1&0&1&0&0\\\ \end{bmatrix}$ (144) where the basis states are $\\{|k\rangle|\downarrow\downarrow\downarrow\rangle,|k-1\rangle|\downarrow\downarrow\uparrow\rangle,\ldots|k-3\rangle|\uparrow\uparrow\uparrow\rangle\\}$. This can be decomposed as: $\begin{bmatrix}0&\sqrt{6}&0\\\ \sqrt{6}&0&2\\\ 0&2&0\end{bmatrix}\oplus\begin{bmatrix}0&1\\\ 1&0\end{bmatrix}\oplus\begin{bmatrix}0&1\\\ 1&0\end{bmatrix}$ (145) The first matrix in the direct sum has dressed states: $\displaystyle\frac{1}{2\sqrt{3}}[\sqrt{\frac{3}{2}}|2\rangle|\downarrow\downarrow\downarrow\rangle-\sqrt{\frac{5}{2}}|1\rangle(|\downarrow\downarrow\uparrow\rangle+|\downarrow\uparrow\downarrow\rangle+|\uparrow\downarrow\downarrow\rangle)+|0\rangle(|\downarrow\uparrow\uparrow\rangle+|\uparrow\downarrow\uparrow\rangle+|\uparrow\uparrow\downarrow\rangle)]$ , $\displaystyle\ E=2\omega_{0}-g_{0}\sqrt{10}$ (146) $\displaystyle\frac{1}{2\sqrt{3}}[\sqrt{\frac{3}{2}}|2\rangle|\downarrow\downarrow\downarrow\rangle+\sqrt{\frac{5}{2}}|1\rangle(|\downarrow\downarrow\uparrow\rangle+|\downarrow\uparrow\downarrow\rangle+|\uparrow\downarrow\downarrow\rangle)+|0\rangle(|\downarrow\uparrow\uparrow\rangle+|\uparrow\downarrow\uparrow\rangle+|\uparrow\uparrow\downarrow\rangle)]$ , $\displaystyle\ E=2\omega_{0}+g_{0}\sqrt{10}$ (147) $\displaystyle\sqrt{\frac{3}{11}}[-\sqrt{\frac{2}{3}}|2\rangle|\downarrow\downarrow\downarrow\rangle+|0\rangle(|\downarrow\uparrow\uparrow\rangle+|\uparrow\downarrow\uparrow\rangle+|\uparrow\uparrow\downarrow\rangle)]$ , $\displaystyle\ E=2\omega_{0}$ (148) The second and third matrices can be diagonalized in the following bases: 
$\displaystyle\frac{1}{2}[|1\rangle[|\downarrow\uparrow\downarrow\rangle-|\uparrow\downarrow\downarrow\rangle]\pm|0\rangle[|\downarrow\uparrow\uparrow\rangle-|\uparrow\downarrow\uparrow\rangle]]$ , $\displaystyle\quad E=2\omega_{0}\pm g_{0}$ (149) $\displaystyle\frac{1}{2\sqrt{3}}[|1\rangle[2|\downarrow\downarrow\uparrow\rangle-|\uparrow\downarrow\downarrow\rangle-|\downarrow\uparrow\downarrow\rangle]\pm|0\rangle[|\uparrow\downarrow\uparrow\rangle+|\downarrow\uparrow\uparrow\rangle-2|\uparrow\uparrow\downarrow\rangle]]$ , $\displaystyle\quad E=2\omega_{0}\pm g_{0}$ (150) Combining these results with those from Section III.3.2 provides the full set of dressed states and re-diagonalized energies for the Tavis–Cummings model at $N=3$.
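The block decomposition used above for the $k=2$, $N=3$ level can also be verified numerically: the full $7\times 7$ coupling matrix of eq. (144) and the direct sum of eq. (145) share the spectrum $\\{\pm\sqrt{10},\pm 1,\pm 1,0\\}$ (an illustrative check, not part of the derivation):

```python
import numpy as np

r2, r6 = np.sqrt(2), np.sqrt(6)
# full 7x7 coupling matrix in the Zeeman basis, as in eq. (144)
full = np.array([
    [0, r2, r2, 0, r2, 0, 0],
    [r2, 0, 0, 1, 0, 1, 0],
    [r2, 0, 0, 1, 0, 0, 1],
    [0, 1, 1, 0, 0, 0, 0],
    [r2, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0, 0],
])
# direct sum of eq. (145): one 3x3 block and two 2x2 blocks
block3 = np.array([[0, r6, 0], [r6, 0, 2], [0, 2, 0]])
block2 = np.array([[0, 1], [1, 0]])
direct_sum = np.block([
    [block3, np.zeros((3, 4))],
    [np.zeros((2, 3)), block2, np.zeros((2, 2))],
    [np.zeros((2, 5)), block2],
])
assert np.allclose(np.sort(np.linalg.eigvalsh(full)),
                   np.sort(np.linalg.eigvalsh(direct_sum)))
```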
# Learning Competitive Equilibria in Noisy Combinatorial Markets Enrique Areyan Viqueira<EMAIL_ADDRESS>, Cyrus Cousins<EMAIL_ADDRESS>, and Amy Greenwald<EMAIL_ADDRESS>Brown University, Providence, RI ###### Abstract. We present a methodology to robustly estimate the competitive equilibria (CE) of combinatorial markets under the assumption that buyers do not know their precise valuations for bundles of goods, but instead can only provide noisy estimates. We first show tight lower- and upper-bounds on the buyers’ utility loss, and hence the set of CE, given a uniform approximation of one market by another. We then develop a learning framework for our setup, and present two probably-approximately-correct algorithms for learning CE, i.e., producing uniform approximations that preserve CE, with finite-sample guarantees. The first is a baseline that uses Hoeffding’s inequality to produce a uniform approximation of buyers’ valuations with high probability. The second leverages a connection between the first welfare theorem of economics and uniform approximations to adaptively prune value queries when it determines that they are provably not part of a CE. We experiment with our algorithms and find that the pruning algorithm achieves better estimates than the baseline with far fewer samples. ###### Key words and phrases: Competitive Equilibria Learning, Noisy Combinatorial Markets, PAC Algorithms for Combinatorial Markets ## 1\. Introduction Combinatorial Markets (CMs) are a class of markets in which buyers are interested in acquiring bundles of goods, and their values for these bundles can be arbitrary. Real-world examples of CMs include: spectrum auctions Cramton et al. (2002); allocation of landing and take-off slots at airports Ball et al. (2006); internet ad placement Edelman et al. (2007); and procurement of bus routes Cantillon and Pesendorfer (2006). 
An outcome of a CM is an assignment of bundles to buyers together with prices for the goods. A competitive equilibrium (CE) is an outcome of particular interest in CMs and other well-studied economic models Bikhchandani and Mamer (1997); Walras (2003). In a CE, buyers are utility-maximizing (i.e., each buyer maximizes their utility among all bundles at the posted prices) and the seller maximizes its revenue (over all feasible allocations at the posted prices). While CEs are a static equilibrium concept, they can sometimes arise as the outcome of a dynamic price adjustment process (e.g., Cheung et al. (2020)). In such a process, prices might be adjusted by an imaginary Walrasian auctioneer, who poses demand queries to buyers: i.e., asks them their demands at given prices. Similarly, we imagine that prices in a CM are set by a market maker, who poses value queries to buyers: i.e., asks them their values on select bundles. One of the defining features of CMs is that they afford buyers the flexibility to express complex preferences, which in turn has the potential to increase market efficiency. However, the extensive expressivity of these markets presents challenges for both the market maker and the buyers. With an exponential number of bundles in general, it is infeasible for a buyer to evaluate them all. We thus present a model of noisy buyer valuations: e.g., buyers might use approximate or heuristic methods to obtain value estimates Fujishima et al. (1999). In turn, the market maker chooses an outcome in the face of uncertainty about the buyers’ valuations. We call the objects of study in this work _noisy combinatorial markets_ (NCM) to emphasize that buyers do not have direct access to their values for bundles, but instead can only noisily estimate them. In this work, we formulate a mathematical model of NCMs. Our goal is then to design learning algorithms with rigorous finite-sample guarantees that approximate the competitive equilibria of NCMs. 
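To make the noisy-query model concrete, the following sketch shows how repeated value queries for a single bundle yield an estimate with a Hoeffding confidence radius. The function names and the noise model here are our own illustration, not the paper’s algorithm:

```python
import math
import random

def hoeffding_radius(num_samples, value_range, delta):
    # Hoeffding's inequality: with probability >= 1 - delta, the empirical
    # mean of num_samples i.i.d. samples supported on an interval of width
    # value_range lies within this radius of the true mean.
    return value_range * math.sqrt(math.log(2.0 / delta) / (2.0 * num_samples))

def estimate_value(noisy_query, num_samples):
    # empirical mean of repeated noisy value queries for one bundle
    return sum(noisy_query() for _ in range(num_samples)) / num_samples

# toy example: true value 10, uniform noise on [-1, 1] (hypothetical numbers)
rng = random.Random(0)
noisy = lambda: 10.0 + rng.uniform(-1.0, 1.0)
est = estimate_value(noisy, 10_000)
eps = hoeffding_radius(10_000, 2.0, 0.05)
print(est, eps)
```

Repeating this for every buyer–bundle pair gives a uniform approximation of the market, at the cost of many samples; the pruning idea discussed later is about avoiding queries that cannot matter for a CE.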
Our first result is to show tight lower- and upper-bounds on the set of CE, given uniform approximations of buyers’ valuations. We then present two learning algorithms. The first one—Elicitation Algorithm; $\operatorname{EA}$—serves as a baseline. It uses Hoeffding’s inequality Hoeffding (1994) to produce said uniform approximations. Our second algorithm—Elicitation Algorithm with Pruning; $\operatorname{EAP}$—leverages the first welfare theorem of economics to adaptively prune value queries when it determines that they are provably not part of a CE. After establishing the correctness of our algorithms, we evaluate their empirical performance using both synthetic unit-demand valuations and two spectrum auction value models. The former are a class of valuations central to the literature on economics and computation Lehmann et al. (2006), for which there are efficient algorithms to compute CE Gul and Stacchetti (1999). In the spectrum auction value models, the buyers’ valuations are characterized by complements, which complicate the questions of existence and computability of CE. In all three models, we measure the average quality of learned CE via our algorithms, compared to the CE of the corresponding certain market (i.e., the market without uncertainty), as a function of the number of samples. We find that $\operatorname{EAP}$ often yields better error guarantees than $\operatorname{EA}$ using far fewer samples, because it successfully prunes buyers’ valuations (i.e., it ceases querying for values on bundles of goods that provably cannot be part of a CE), even without any _a priori_ knowledge of the market’s combinatorial structure. As the market size grows, an interesting tradeoff arises between computational and sample efficiency. To prune a value query and retain rigorous guarantees on the quality of the learned CE, we must solve a welfare-maximization problem whose complexity grows with the market’s size. 
Consequently, at each iteration of $\operatorname{EAP}$, for each value query, we are faced with a choice. Either solve said welfare-maximization problem and potentially prune the value query (thereby saving on future samples), or defer attempts to prune the value query until more is known about the market. To combat this situation, we show that an upper bound on the optimal welfare’s value (rather than the precise value) suffices to obtain rigorous guarantees on the learned CE’s quality. Such upper bounds can be found easily, by solving a relaxation of the welfare-maximization problem. Reminiscent of designing admissible heuristics in classical search problems, this methodology applies to any combinatorial market, but at the same time allows for the application of domain-dependent knowledge to compute these upper bounds, when available. Empirically, we show that a computationally cheap relaxation of the welfare-maximization problem yields substantial sample and computational savings in a large market. #### Related Work The idea for this paper stemmed from the work on abstraction in Fisher markets by Kroer et al. Kroer et al. (2019). There, the authors tackle the problem of computing equilibria in large markets by creating an abstraction of the market, computing equilibria in the abstraction, and lifting those equilibria back to the original market. Likewise, we develop a pruning criterion which in effect builds an abstraction of any CM, where we then compute a CE, which is provably also an approximate CE in the original market. The mathematical formalism we adopt follows that of Areyan Viqueira et al. Areyan Viqueira et al. (2020). There, the authors propose a mathematical framework for empirical game-theoretic analysis Wellman (2006), and algorithms that learn the Nash equilibria of simulation-based games Vorobeychik and Wellman (2008); Vorobeychik (2010). 
In this paper, we extend this methodology to market equilibria, and provide analogous results in the case of CMs. Whereas intuitively, a basic pruning criterion for games is arguably more straightforward—simply prune dominated strategies—the challenge in this work was to discover a pruning criterion that would likewise prune valuations that are provably not part of a CE. Jha and Zick Jha and Zick (2020) have also tackled the problem of learning CE in CMs (the market structure they investigate is not identical to the structure studied here, so, at present, our results are not directly comparable). Whereas our approach is to accurately learn only those components of the buyers’ valuations that determine a CE (up to PAC guarantees), their approach bypasses the learning of agent preferences altogether, going straight for learning a solution concept, such as a CE. It is an open question as to whether one approach dominates the other, in the context of noisy CMs. Another related line of research is concerned with learning valuation functions from data Balcan et al. (2012); Balcan and Harvey (2011); Lahaie and Parkes (2004). In contrast, our work is concerned with learning buyers’ valuations only in so much as it facilitates learning CE. Indeed, our main conclusion is that CE often can be learned from just a subset of the buyers’ valuations. There is also a long line of work on preference elicitation in combinatorial auctions (e.g., Conen and Sandholm (2001)), where an auctioneer aims to pose value queries in an intelligent order so as to minimize the computational burden on the bidders, while still clearing the auction. Finally, our pruning criterion relies on a novel application of the first welfare theorem of economics. While prior work has connected economic theory with algorithmic complexity Roughgarden and Talgam-Cohen (2015), this work connects economic theory with statistical learning theory. ## 2\. 
Model We write $\mathbb{X}_{+}$ to denote the set of non-negative values in a numerical set $\mathbb{X}$ (i.e., including zero). Given an integer $k\in\mathbb{Z}$, we write $[k]$ to denote the first $k$ integers, inclusive: i.e., $[k]=\\{1,2,\ldots,k\\}$. Given a finite set of integers $Z\subset\mathbb{Z}$, we write $2^{Z}$ to denote the power set of $Z$. A _combinatorial market_ is defined by a set of goods and a set of buyers. We denote the set of goods by $G=[m]$, and the set of buyers by $N=[n]$. We index an arbitrary good by $j\in G$, and an arbitrary buyer by $i\in N$. A _bundle_ of goods is a set of goods $S\subseteq G$. Each buyer $i$ is characterized by their preferences over bundles, represented as a valuation function $v_{i}:2^{G}\mapsto\mathbb{R}_{+}$, where $v_{i}(S)\in\mathbb{R}_{+}$ is buyer $i$’s value for bundle $S$. We assume valuations are normalized so that $v_{i}(\emptyset)=0$, for all $i\in N$. Using this notation, a combinatorial market—market, hereafter—is a tuple $M=(G,N,\\{v_{i}\\}_{i\in N})$. Given a market $M$, an _allocation_ $\mathcal{S}=(S_{1},\ldots,S_{n})$ denotes an assignment of goods to buyers, where $S_{i}\subseteq G$ is the bundle assigned to buyer $i$. We consider only feasible allocations. An allocation $\mathcal{S}$ is _feasible_ if $S_{i}\cap S_{k}=\emptyset$ for all $i,k\in N$ such that $i\neq k$. We denote the set of all feasible allocations of market $M$ by $\mathcal{F}(M)$. The _welfare_ of allocation $\mathcal{S}$ is defined as $w(\mathcal{S})=\sum_{i\in N}v_{i}(S_{i})$. A welfare-maximizing allocation $\mathcal{S}^{*}$ is a feasible allocation that yields maximum welfare among all feasible allocations, i.e., $\mathcal{S}^{*}\in\arg\max_{\mathcal{S}\in\mathcal{F}(M)}w(\mathcal{S})$. We denote by $w^{*}(M)$ the welfare of any welfare-maximizing allocation $\mathcal{S}^{*}$, i.e., $w^{*}(M)=w(\mathcal{S}^{*})=\sum_{i\in N}v_{i}(S^{*}_{i})$. 
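To make these definitions concrete, here is a brute-force computation of $w^{*}(M)$ by enumerating feasible allocations (an illustrative sketch only: the enumeration is exponential and suited to tiny markets, and representing valuations as callables is our own choice):

```python
from itertools import product

def welfare_max(goods, valuations):
    """Exhaustively search feasible allocations for maximum welfare.

    goods: list of goods; valuations: one callable per buyer, mapping a
    frozenset bundle to a non-negative value, with v(frozenset()) == 0.
    """
    n = len(valuations)
    best_w, best_alloc = 0, tuple(frozenset() for _ in range(n))
    # assign each good to one of the n buyers, or leave it unallocated (n);
    # this enumerates exactly the feasible (disjoint) allocations
    for assign in product(range(n + 1), repeat=len(goods)):
        bundles = tuple(frozenset(g for g, a in zip(goods, assign) if a == i)
                        for i in range(n))
        w = sum(v(S) for v, S in zip(valuations, bundles))
        if w > best_w:
            best_w, best_alloc = w, bundles
    return best_w, best_alloc

# toy unit-demand market (hypothetical numbers)
def unit_demand(item_values):
    return lambda S: max((item_values[g] for g in S), default=0)

w_star, alloc = welfare_max(["a", "b"],
                            [unit_demand({"a": 3, "b": 1}),
                             unit_demand({"a": 2, "b": 2})])
print(w_star, alloc)  # buyer 1 gets {a}, buyer 2 gets {b}
```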
A pricing profile $\mathcal{P}=(P_{1},\ldots,P_{n})$ is a vector of $n$ pricing functions, one function $P_{i}:2^{G}\mapsto\mathbb{R}_{+}$ for each buyer, each mapping bundles to prices, $P_{i}(S)\in\mathbb{R}_{+}$. The seller’s revenue of allocation $\mathcal{S}$ given a pricing $\mathcal{P}$ is $\sum_{i\in N}P_{i}(S_{i})$. We refer to the pair $(\mathcal{S},\mathcal{P})$ as a _market outcome_—outcome, for short. Given an outcome, buyer $i$’s utility is the difference between its attained value and its payment, $v_{i}(S_{i})-P_{i}(S_{i})$, and the seller’s utility is equal to its revenue. In this paper, we are interested in approximations of one market by another. We now define a mathematical framework in which to formalize such approximations. In what follows, whenever we decorate a market $M$, e.g., $M^{\prime}$, what we mean is that we decorate each of its components: i.e., $M^{\prime}=(G^{\prime},N^{\prime},\\{v_{i}^{\prime}\\}_{i\in N^{\prime}})$. It will be convenient to refer to a subset of buyer–bundle pairs. We use the notation $\mathcal{I}\subseteq N\times 2^{G}$ for this purpose. Markets $M$ and $M^{\prime}$ are _compatible_ if $G=G^{\prime}$ and $N=N^{\prime}$. Whenever a market $M$ is compatible with a market $M^{\prime}$, an outcome of $M$ is also an outcome of $M^{\prime}$. Given two compatible markets $M$ and $M^{\prime}$, we measure the difference between them at $\mathcal{I}$ as $\left\lVert M-M^{\prime}\right\rVert_{\mathcal{I}}=\max_{(i,S)\in\mathcal{I}}|v_{i}(S)-v_{i}^{\prime}(S)|$. When $\mathcal{I}=N\times 2^{G}$, this difference is precisely the infinity norm. Given $\varepsilon>0$, $M$ and $M^{\prime}$ are called _$\varepsilon$-approximations_ of one another if $\left\lVert M-M^{\prime}\right\rVert_{\infty}\leq\varepsilon$. The solution concept of interest in this paper is competitive equilibrium, which is always guaranteed to exist Bikhchandani et al. (2002).
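The distance $\left\lVert M-M^{\prime}\right\rVert_{\mathcal{I}}$ just defined is straightforward to compute directly. Below is a minimal sketch, with our own helper name and data layout (valuations stored as dictionaries keyed by buyer–bundle pairs):

```python
def market_distance(v, v_prime, index_set):
    """||M - M'||_I = max over (i, S) in I of |v_i(S) - v'_i(S)|.
    v and v_prime map (buyer, bundle) pairs to values; missing pairs are 0."""
    return max(abs(v.get(key, 0.0) - v_prime.get(key, 0.0)) for key in index_set)

# Two compatible one-good markets with two buyers; bundles are frozensets.
S = frozenset({0})
v = {(0, S): 1.0, (1, S): 2.0}
v_noisy = {(0, S): 1.2, (1, S): 1.9}
# Since the index set here covers all pairs, this is the infinity norm.
eps = market_distance(v, v_noisy, [(0, S), (1, S)])
```

By Theorem 3, any CE of the first market would then be a $2\,$`eps`-CE of the second.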
A competitive equilibrium consists of two conditions: the utility-maximization (UM) condition and the revenue-maximization (RM) condition. UM ensures that the allocation maximizes buyers’ utilities given the pricings, while RM ensures that the seller maximizes its utility. Together, both conditions constitute an equilibrium of the market, i.e., an outcome where no agent has an incentive to deviate by, for example, relinquishing its allocation. We now formalize this solution concept, followed by its relaxation, which is central when working with approximate markets. ###### Definition 1 (Competitive Equilibrium). Given a market $M$, an outcome $(\mathcal{S},\mathcal{P})$ is a _competitive equilibrium_ (CE) if: * (UM) $\forall i\in N,T\subseteq G:v_{i}(S_{i})-P_{i}(S_{i})\geq v_{i}(T)-P_{i}(T)$ * (RM) $\forall\mathcal{S}^{\prime}\in\mathcal{F}(M):\sum_{i\in N}P_{i}(S_{i})\geq\sum_{i\in N}P_{i}(S_{i}^{\prime})$ ###### Definition 2 (Approximate Competitive Equilibria). Let $\varepsilon>0$. An outcome $(\mathcal{S},\mathcal{P})$ is an $\varepsilon$-competitive equilibrium ($\varepsilon$-CE) if it is a CE in which UM holds up to $\varepsilon$: $\varepsilon$-(UM) $\forall i\in N,T\subseteq G:v_{i}(S_{i})-P_{i}(S_{i})+\varepsilon\geq v_{i}(T)-P_{i}(T)$ For $\alpha\geq 0$, we denote by $\mathcal{CE}_{\alpha}(M)$ the set of all $\alpha$-approximate CE of $M$, i.e., $\mathcal{CE}_{\alpha}(M)=\\{(\mathcal{S},\mathcal{P}):(\mathcal{S},\mathcal{P})$ is an $\alpha$-approximate CE of $M\\}$. Note that $\mathcal{CE}_{0}(M)$ is the set of (exact) CE of market $M$, which we denote $\mathcal{CE}(M)$. ###### Theorem 3 (Competitive Equilibrium Approximation). Let $\varepsilon>0$. If $M$ and $M^{\prime}$ are compatible markets such that $\left\lVert M-M^{\prime}\right\rVert_{\infty}\leq\varepsilon$, then $\mathcal{CE}(M)\subseteq\mathcal{CE}_{2\varepsilon}(M^{\prime})\subseteq\mathcal{CE}_{4\varepsilon}(M)$. ###### Proof.
We prove that: $\mathcal{CE}_{\alpha}(M)\subseteq\mathcal{CE}_{\alpha+2\varepsilon}(M^{\prime})$, for $\alpha\geq 0$. This result then implies $\mathcal{CE}(M)\subseteq\mathcal{CE}_{2\varepsilon}(M^{\prime})$ when $\alpha=0$; likewise, it (symmetrically) implies $\mathcal{CE}_{2\varepsilon}(M^{\prime})\subseteq\mathcal{CE}_{4\varepsilon}(M)$ when $\alpha=2\varepsilon$. Let $M$ and $M^{\prime}$ be compatible markets s.t. $\left\lVert M-M^{\prime}\right\rVert_{\infty}\leq\varepsilon$. Suppose $(\mathcal{S},\mathcal{P})$ is an $\alpha$-competitive equilibrium of $M$. Our task is to show that $(\mathcal{S},\mathcal{P})$, interpreted as an outcome of $M^{\prime}$, is an $(\alpha+2\varepsilon)$-competitive equilibrium of $M^{\prime}$. First, note that the RM condition is immediately satisfied, because $\mathcal{S}$ and $\mathcal{P}$ do not change when interpreting $(\mathcal{S},\mathcal{P})$ as an outcome of $M^{\prime}$. Thus, we need only show that the approximation holds for the UM condition: $\displaystyle v_{i}^{\prime}(S_{i})-P_{i}(S_{i})$ $\displaystyle\geq v_{i}(S_{i})-P_{i}(S_{i})-\varepsilon,$ $\displaystyle\forall i,S_{i}$ (1) $\displaystyle\geq v_{i}(T)-P_{i}(T)-\alpha-\varepsilon,$ $\displaystyle\forall T\subseteq G$ (2) $\displaystyle\geq v_{i}^{\prime}(T)-P_{i}(T)-\alpha-2\varepsilon,$ $\displaystyle\forall T\subseteq G$ (3) where (1) and (3) follow because $\left\lVert M-M^{\prime}\right\rVert_{\infty}\leq\varepsilon$, and (2) follows because $(\mathcal{S},\mathcal{P})$ is an $\alpha$-approximate CE of $M$. ∎ ## 3\. Learning Methodology We now present a formalism in which to model noisy combinatorial markets. Intuitively, a noisy market is one in which buyers’ valuations over bundles are not known precisely; rather, only noisy samples are available. ###### Definition 4 (Conditional Combinatorial Markets). A _conditional combinatorial
market_ $M_{\mathcal{X}}=(\mathcal{X},G,N,\\{v_{i}\\}_{i\in N})$ consists of a set of conditions $\mathcal{X}$, a set of goods $G$, a set of buyers $N$, and a set of conditional valuation functions $\\{v_{i}\\}_{i\in N}$, where $v_{i}:2^{G}\times\mathcal{X}\mapsto\mathbb{R}_{+}$. Given a condition $x\in\mathcal{X}$, the value $v_{i}(S,x)$ is $i$’s value for bundle $S\subseteq G$. ###### Definition 5 (Expected Combinatorial Market). Let $M_{\mathcal{X}}=(\mathcal{X},$ $G,N,\\{v_{i}\\}_{i\in N})$ be a conditional combinatorial market and let $\mathcal{D}$ be a distribution over $\mathcal{X}$. For all $i\in N$, define the expected valuation function $v_{i}:2^{G}\mapsto\mathbb{R}_{+}$ by $v_{i}(S,\mathcal{D})=\mathbf{E}_{x\sim\mathcal{D}}[v_{i}(S,x)]$, and the corresponding expected combinatorial market as $M_{\mathcal{D}}=(G,N,\\{v_{i}\\}_{i\in N})$. The goal of this work is to design algorithms that learn the approximate CE of expected combinatorial markets. We will learn their equilibria given access only to their empirical counterparts, which we define next. ###### Definition 6 (Empirical Combinatorial Market). Let $M_{\mathcal{X}}=(\mathcal{X},$ $G,N,\\{v_{i}\\}_{i\in N})$ be a conditional combinatorial market and let $\mathcal{D}$ be a distribution over $\mathcal{X}$. Denote by $\bm{x}=(x_{1},\ldots,x_{t})\sim\mathcal{D}$ a vector of $t$ samples drawn from $\mathcal{X}$ according to distribution $\mathcal{D}$. For all $i\in N$, we define the empirical valuation function $\hat{v}_{i}:2^{G}\mapsto\mathbb{R}_{+}$ by $\smash{\hat{v}_{i}(S)=\frac{1}{t}\sum_{l=1}^{t}v_{i}(S,x_{l})}$, and the corresponding empirical combinatorial market as $\hat{M}_{\bm{x}}=(G,N,\\{\hat{v}_{i}\\}_{i\in N})$. ###### Observation 1 (Learnability). Let $M_{\mathcal{X}}$ be a conditional combinatorial market and let $\mathcal{D}$ be a distribution over $\mathcal{X}$. Let $M_{\mathcal{D}}$ and $\hat{M}_{\bm{x}}$ be the corresponding expected and empirical combinatorial markets.
If, for some $\varepsilon,\delta>0$, it holds that $\small\smash{\mathbb{P}\left(\left\lVert M_{\mathcal{D}}-\hat{M}_{\bm{x}}\right\rVert\leq\varepsilon\right)\geq 1-\delta}$, then the competitive equilibria of $M_{\mathcal{D}}$ are learnable: i.e., any competitive equilibrium of $M_{\mathcal{D}}$ is a $2\varepsilon$-competitive equilibrium of $\hat{M}_{\bm{x}}$ with probability at least $1-\delta$. Theorem 3 implies that CE are approximable to within any desired $\varepsilon>0$ guarantee. The following lemma shows that we only need finitely many samples to learn them with any desired failure probability $\delta>0$. ###### Lemma 7 (Finite-Sample Bounds for Expected Combinatorial Markets via Hoeffding’s Inequality). Let $M_{\mathcal{X}}$ be a conditional combinatorial market, $\mathcal{D}$ a distribution over $\mathcal{X}$, and $\mathcal{I}\subseteq N\times 2^{G}$ an index set. Suppose that for all $x\in\mathcal{X}$ and $(i,S)\in\mathcal{I}$, it holds that $v_{i}(S,x)\in[0,c]$ where $c\in\mathbb{R}_{+}$. Then, with probability at least $1-\delta$ over samples $\bm{x}=(x_{1},\ldots,x_{t})\sim\mathcal{D}$, it holds that $\smash{\left\lVert M_{\mathcal{D}}-\hat{M}_{\bm{x}}\right\rVert_{\mathcal{I}}}\leq c\sqrt{\nicefrac{{\ln(\nicefrac{{2|\mathcal{I}|}}{{\delta}})}}{{2t}}}$, where $\delta>0$. (Proof in the Appendix) Hoeffding’s inequality is a convenient and simple bound, where only knowledge of the range of values is required. However, the union bound can be inefficient in large combinatorial markets. This shortcoming can be addressed via uniform convergence bounds and Rademacher averages (Bartlett and Mendelson, 2002; Areyan Viqueira et al., 2019; Koltchinskii, 2001). Furthermore, sharper variance-sensitive empirical bounds have been shown to improve sample complexity in learning the Nash equilibria of black-box games (Areyan Viqueira et al., 2020).
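Lemma 7's bound is easy both to evaluate and to invert for a sample-size requirement. A small sketch (the helper names are ours); note how the index-set size $|\mathcal{I}|=|N|\cdot 2^{|G|}$ enters only through a logarithm, so the sample count grows linearly in $|G|$:

```python
import math

def hoeffding_radius(c, num_pairs, delta, t):
    """Confidence radius from Lemma 7: c * sqrt(ln(2|I| / delta) / (2t))."""
    return c * math.sqrt(math.log(2 * num_pairs / delta) / (2 * t))

def samples_needed(c, num_pairs, delta, eps):
    """Smallest t for which hoeffding_radius(...) <= eps (invert the bound)."""
    return math.ceil((c / eps) ** 2 * math.log(2 * num_pairs / delta) / 2)

# |I| = |N| * 2^|G| for n = 5 buyers and m = 10 goods; values in [0, 10].
t = samples_needed(c=10.0, num_pairs=5 * 2 ** 10, delta=0.1, eps=0.5)
```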
In particular, to obtain a confidence interval of radius $\varepsilon$ in a combinatorial market with index set $\mathcal{I}=N\times 2^{G}$, Hoeffding’s inequality requires $t\in\smash{\mathcal{O}(\nicefrac{{c^{2}|G|}}{{\varepsilon^{2}}}\ln\nicefrac{{|N|}}{{\delta}})}$ samples. Uniform convergence bounds can improve the $|G|$ term arising from the _union bound_, and variance-sensitive bounds can largely replace dependence on $c^{2}$ with _variances_. Nonetheless, even without these augmentations, our methods are statistically efficient in $|G|$, requiring only _polynomial_ sample complexity to learn _exponentially_ large combinatorial markets. ### 3.1. Baseline Algorithm $\operatorname{EA}$ (Algorithm 2) is a preference elicitation algorithm for combinatorial markets. The algorithm places value queries, but is only assumed to elicit noisy values for bundles. The following guarantee follows immediately from Lemma 7. ###### Theorem 8 (Elicitation Algorithm Guarantees). Let $M_{\mathcal{X}}$ be a conditional market, $\mathcal{D}$ be a distribution over $\mathcal{X}$, $\mathcal{I}$ an index set, $t\in\mathbb{N}_{>0}$ a number of samples, $\delta>0$, and $c\in\mathbb{R}_{+}$. Suppose that for all $x\in\mathcal{X}$ and $(i,S)\in\mathcal{I}$, it holds that $v_{i}(S,x)\in[0,c]$. If $\operatorname{EA}$ outputs $(\\{\hat{v}_{i}\\}_{(i,S)\in\mathcal{I}},\hat{\varepsilon})$ on input $(M_{\mathcal{X}},\mathcal{D},\mathcal{I},t,\delta,c)$, then, with probability at least $1-\delta$, it holds that $\smash{\left\lVert M_{\mathcal{D}}-\hat{M}_{\bm{x}}\right\rVert_{\mathcal{I}}}\leq c\sqrt{\nicefrac{{\ln(\nicefrac{{2|\mathcal{I}|}}{{\delta}})}}{{2t}}}$. ###### Proof. The result follows from Lemma 7. ∎ ### 3.2. Pruning Algorithm $\operatorname{EA}$ elicits buyers’ valuations for all bundles, but in certain situations, some buyer valuations are not relevant for computing a CE—although bounds on all of them are necessary to guarantee strong bounds on the set of CE (Theorem 3).
For example, in a first-price auction for one good, it is enough to accurately learn the highest bid, but it is not necessary to accurately learn all other bids, if it is known that they are lower than the highest. Since our goal is to learn CE, we present $\operatorname{EAP}$ (Algorithm 1), an algorithm that does not sample uniformly, but instead adaptively decides which value queries to prune so that, with provable guarantees, $\operatorname{EAP}$’s estimated market satisfies the conditions of Theorem 3. Algorithm 1 Elicitation Algorithm with Pruning ($\operatorname{EAP}$) Input: $M_{\mathcal{X}},\mathcal{D},\bm{t},\bm{\delta},\bm{\pi},c,\varepsilon$. A conditional combinatorial market $M_{\mathcal{X}}$, a distribution $\mathcal{D}$ over $\mathcal{X}$, a sampling schedule $\bm{t}$, a failure probability schedule $\bm{\delta}$, a pruning budget schedule $\bm{\pi}$, a valuation range $c$, and a target approx. error $\varepsilon$. Output: Valuation estimates $\hat{v}_{i}(S)$, for all $(i,S)$, approximation errors $\hat{\varepsilon}_{i,S}$, failure probability $\hat{\delta}$, and CE error $\hat{\varepsilon}$. 1: $\mathcal{I}\leftarrow N\times 2^{G}$ {Initialize index set} 2: $(\hat{v}_{i}(S),\hat{\varepsilon}_{i,S})\leftarrow(0,\nicefrac{{c}}{{2}}),\forall(i,S)\in\mathcal{I}$ {Initialize outputs} 3: for $k\in 1,\ldots,|\bm{t}|$ do 4: $(\\{\hat{v}_{i}\\}_{(i,S)\in\mathcal{I}},\hat{\varepsilon})\leftarrow\textsc{EA}(M_{\mathcal{X}},\mathcal{D},\mathcal{I},t_{k},\delta_{k},c)$ {Call Alg.
2} 5: $\hat{\varepsilon}_{i,S}\leftarrow\hat{\varepsilon},\forall(i,S)\in\mathcal{I}$ {Update error rates} 6: if $\hat{\varepsilon}\leq\varepsilon$ or $k=|\bm{t}|$ or $\mathcal{I}=\emptyset$ then 7: return $(\\{\hat{v}_{i}\\}_{i\in N},\\{\hat{\varepsilon}_{i,S}\\}_{(i,S)\in N\times 2^{G}},\sum_{l=1}^{k}\delta_{l},\hat{\varepsilon})$ 8: end if 9: Let $\hat{M}$ be the market with valuations $\\{\hat{v}_{i}\\}_{(i,S)\in\mathcal{I}}$ 10: $\mathcal{I}_{\textsc{prune}}\leftarrow\emptyset$ {Initialize set of indices to prune} 11: $\mathcal{I}_{\textsc{candidates}}\leftarrow$ a subset of $\mathcal{I}$ of size at most $\pi_{k}$ {Select some active pairs as candidates for pruning} 12: for $(i,S)\in\mathcal{I}_{\textsc{candidates}}$ do 13: Let $\hat{M}_{-(i,S)}$ be the $(i,S)$-submarket of $\hat{M}$. 14: Let $w_{(i,S)}^{\diamond}$ be an upper bound of $w^{*}(\hat{M}_{-(i,S)})$. 15: if $\smash{\hat{v}_{i}(S)+w_{(i,S)}^{\diamond}+2\hat{\varepsilon}n<w^{*}(\hat{M})}$ then 16: $\mathcal{I}_{\textsc{prune}}\leftarrow\mathcal{I}_{\textsc{prune}}\cup\\{(i,S)\\}$ 17: end if 18: end for 19: $\mathcal{I}\leftarrow\mathcal{I}\setminus\mathcal{I}_{\textsc{prune}}$ 20: end for Algorithm 2 Elicitation Algorithm ($\operatorname{EA}$) Input: $M_{\mathcal{X}},\mathcal{D},\mathcal{I},t,\delta,c$. A conditional combinatorial market $M_{\mathcal{X}}$, a distribution $\mathcal{D}$ over $\mathcal{X}$, an index set $\mathcal{I}$, sample size $t$, failure prob. $\delta$, and valuation range $c$. Output: Valuation estimates $\hat{v}_{i}(S)$, for all $(i,S)\in\mathcal{I}$, and an approximation error $\hat{\varepsilon}$.
1: $(x_{1},\ldots,x_{t})\sim\mathcal{D}$ {Draw $t$ samples from $\mathcal{D}$} 2: for $(i,S)\in\mathcal{I}$ do 3: $\hat{v}_{i}(S)\leftarrow\frac{1}{t}\sum_{l=1}^{t}v_{i}(S,x_{l})$ 4: end for 5: $\hat{\varepsilon}\leftarrow c\sqrt{\nicefrac{{\ln(\nicefrac{{2|\mathcal{I}|}}{{\delta}})}}{{2t}}}$ {Compute error} 6: return $(\\{\hat{v}_{i}\\}_{(i,S)\in\mathcal{I}},\hat{\varepsilon})$ $\operatorname{EAP}$ (Algorithm 1) takes as input a sampling schedule $\bm{t}$, a failure probability schedule $\bm{\delta}$, and a pruning budget schedule $\bm{\pi}$. The sampling schedule $\bm{t}$ is a sequence of $|\bm{t}|$ strictly increasing integers $t_{1}<t_{2}<\cdots<t_{|\bm{t}|}$, where $t_{k}$ is the total number of samples to take for each $(i,S)$ pair during $\operatorname{EAP}$’s $k$-th iteration. The failure probability schedule $\bm{\delta}$ is a sequence of the same length as $\bm{t}$, where $\delta_{k}\in(0,1)$ is the $k$-th iteration’s failure probability and $\sum_{k}\delta_{k}\in(0,1)$ is the total failure probability. The pruning budget schedule $\bm{\pi}$ is a sequence of integers, also of the same length as $\bm{t}$, where $\pi_{k}$ is the maximum number of $(i,S)$ pairs considered as pruning candidates in the $k$-th iteration. The algorithm progressively elicits buyers’ valuations via repeated calls to $\operatorname{EA}$. However, between calls to $\operatorname{EA}$, $\operatorname{EAP}$ searches for value queries that are provably not part of a CE; the size of this search is dictated by the pruning schedule. All such queries (i.e., buyer–bundle pairs) then cease to be part of the index set with which $\operatorname{EA}$ is called in future iterations. In what follows, we prove several intermediate results, which enable us to prove the main result of this section, Theorem 13, which establishes $\operatorname{EAP}$’s correctness.
Specifically, the market learned by $\operatorname{EAP}$—with potentially different numbers of samples for different $(i,S)$ pairs—is enough to provably recover any CE of the underlying market. ###### Lemma 9 (Optimal Welfare Approximations). Let $M$ and $M^{\prime}$ be compatible markets such that they $\varepsilon$-approximate one another. Then $|w^{*}(M)-w^{*}(M^{\prime})|\leq\varepsilon n$. ###### Proof. Let $\mathcal{S}^{*}$ be a welfare-maximizing allocation for $M$ and $\mathcal{U}^{*}$ be a welfare-maximizing allocation for $M^{\prime}$. Then, $\displaystyle w^{*}(M)=\sum_{i\in N}v_{i}(\mathcal{S}^{*}_{i})\geq\sum_{i\in N}v_{i}(\mathcal{U}^{*}_{i})\geq\sum_{i\in N}v_{i}^{\prime}(\mathcal{U}^{*}_{i})-\varepsilon n$ $\displaystyle=w^{*}(M^{\prime})-\varepsilon n$ The first inequality follows from the optimality of $\mathcal{S}^{*}$ in $M$, and the second from the $\varepsilon$-approximation assumption. Likewise, $w^{*}(M^{\prime})\geq w^{*}(M)-\varepsilon n$, so the result holds. ∎ The key to this work was the discovery of a pruning criterion that removes $(i,S)$ pairs from consideration if they are provably not part of any CE. Our check relies on computing the welfare of the market without the pair: i.e., in submarkets. ###### Definition 10. Given a market $M$ and buyer–bundle pair $(i,S)$, the _$(i,S)$-submarket_ of $M$, denoted by $M_{-(i,S)}$, is the market obtained by removing all goods in $S$ and buyer $i$ from market $M$. That is, $M_{-(i,S)}=(G\setminus S,N\setminus\\{i\\},\\{v_{k}\\}_{k\in N\setminus\\{i\\}})$. ###### Lemma 11 (Pruning Criteria). Let $M$ and $M^{\prime}$ be compatible markets such that $\left\lVert M-M^{\prime}\right\rVert_{\infty}\leq\varepsilon$. In addition, let $(i,S)$ be a buyer–bundle pair, and $M^{\prime}_{-(i,S)}$ be the $(i,S)$-submarket of $M^{\prime}$.
Finally, let $w_{(i,S)}^{\diamond}\in\mathbb{R}_{+}$ upper bound $w^{*}(M^{\prime}_{-(i,S)})$, i.e., $w^{*}(M^{\prime}_{-(i,S)})\leq w_{(i,S)}^{\diamond}$. If the following pruning criterion holds, then $S$ is not allocated to $i$ in any welfare-maximizing allocation of $M$: $v_{i}^{\prime}(S)+w_{(i,S)}^{\diamond}+2\varepsilon n<w^{*}(M^{\prime})\enspace.$ (4) ###### Proof. Let $\mathcal{S}^{*},\mathcal{U}^{*},$ and $\mathcal{U}^{*}_{-(i,S)}$ be welfare-maximizing allocations of markets $M,M^{\prime},$ and $M^{\prime}_{-(i,S)}$, respectively. Then, $\displaystyle w^{*}(M)$ $\displaystyle\geq w^{*}(M^{\prime})-\varepsilon n$ (5) $\displaystyle>v_{i}^{\prime}(S)+w_{(i,S)}^{\diamond}+\varepsilon n$ (6) $\displaystyle\geq v_{i}^{\prime}(S)+w^{*}(M^{\prime}_{-(i,S)})+\varepsilon n$ (7) $\displaystyle\geq v_{i}(S)-\varepsilon+w^{*}(M_{-(i,S)})-\varepsilon(n-1)+\varepsilon n$ (8) $\displaystyle=v_{i}(S)+w^{*}(M_{-(i,S)})$ (9) The first inequality follows from Lemma 9. The second follows from Equation (4) and the third because $w_{(i,S)}^{\diamond}$ is an upper bound of $w^{*}(M^{\prime}_{-(i,S)})$. The fourth inequality follows from the assumption that $\left\lVert M-M^{\prime}\right\rVert_{\infty}\leq\varepsilon$, and by Lemma 9 applied to submarket $M_{-(i,S)}$. Therefore, the allocation where $i$ gets $S$ cannot be welfare-maximizing in market $M$. ∎ Lemma 11 provides a family of pruning criteria parameterized by the upper bound $\smash{w_{(i,S)}^{\diamond}}$. The closer $\smash{w_{(i,S)}^{\diamond}}$ is to $\smash{w^{*}(M^{\prime}_{-(i,S)})}$, the sharper the pruning criterion, with the best pruning criterion being $\smash{w_{(i,S)}^{\diamond}=w^{*}(M^{\prime}_{-(i,S)})}$. However, solving for $\smash{w^{*}(M^{\prime}_{-(i,S)})}$ exactly can easily become a bottleneck, as the pruning loop requires a solution to _many_ such instances, one per $\smash{(i,S)}$ pair (Line 12 of Algorithm 1). 
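Once the welfare quantities are in hand, the inequality test of Equation (4) is a one-line check. A sketch with hypothetical numbers (not taken from the paper):

```python
def can_prune(v_prime_iS, w_diamond, eps, n, w_star_prime):
    """Pruning criterion (4): v'_i(S) + w_diamond + 2*eps*n < w*(M').
    When True, bundle S is provably not allocated to buyer i in any
    welfare-maximizing allocation of the underlying market M (Lemma 11)."""
    return v_prime_iS + w_diamond + 2 * eps * n < w_star_prime

# Hypothetical estimates: n = 3 buyers, error eps = 0.1, v'_i(S) = 1.0,
# welfare without (i, S) at most 4.0, estimated optimal welfare 6.0.
prunable = can_prune(1.0, 4.0, 0.1, 3, 6.0)      # 1.0 + 4.0 + 0.6 < 6.0 holds
not_prunable = can_prune(1.0, 4.0, 0.1, 3, 5.5)  # 5.6 >= 5.5: cannot certify
```

Note that the check is one-sided: when it fails, the pair may or may not appear in a welfare-maximizing allocation, so it simply remains in the index set.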
Alternatively, one could compute looser upper bounds, thereby trading away some pruning opportunities (when the upper bound is not tight enough) for reduced computation time. In our experiments, we show that even relatively loose but cheap-to-compute upper bounds result in significant pruning and, thus, savings along both dimensions—computational and sample complexity. To conclude this section, we establish the correctness of $\operatorname{EAP}$. For our proof we rely on the following generalization of the first welfare theorem of economics, which handles additive errors. ###### Theorem 12 (First Welfare Theorem Roughgarden (2010)). For $\varepsilon>0$, let $(\mathcal{S},\mathcal{P})$ be an $\varepsilon$-competitive equilibrium of $M$. Then, $\mathcal{S}$ is a welfare-maximizing allocation of $M$, up to additive error $\varepsilon n$. ###### Theorem 13 (Elicitation Algorithm with Pruning Guarantees). Let $M_{\mathcal{X}}$ be a conditional market, let $\mathcal{D}$ be a distribution over $\mathcal{X}$, and let $c\in\mathbb{R}_{+}$. Suppose that for all $x\in\mathcal{X}$ and $(i,S)\in N\times 2^{G}$, it holds that $v_{i}(S,x)\in[0,c]$. Let $\bm{t}$ be a sequence of strictly increasing integers, and $\bm{\delta}$ a sequence of the same length as $\bm{t}$ such that $\delta_{k}\in(0,1)$ and $\sum_{k}\delta_{k}\in(0,1)$. If $\operatorname{EAP}$ outputs $\smash{(\\{\hat{v}_{i}\\}_{i\in N},\\{\hat{\varepsilon}_{i,S}\\}_{(i,S)\in N\times 2^{G}},\smash{\sum_{k}\delta_{k}},\hat{\varepsilon})}$ on input $\smash{(M_{\mathcal{X}},\mathcal{D},\bm{t},\bm{\delta},c,\varepsilon)}$, then the following holds with probability at least $\smash{1-\sum_{k}\delta_{k}}$: 1\. $|v_{i}(S,\mathcal{D})-\hat{v}_{i}(S)|\leq\hat{\varepsilon}_{i,S}$, for all $(i,S)\in N\times 2^{G}$ 2\.
$\mathcal{CE}(M_{\mathcal{D}})\subseteq\mathcal{CE}_{2\hat{\varepsilon}}(\hat{M})\subseteq\mathcal{CE}_{4\hat{\varepsilon}}(M_{\mathcal{D}})$ Here $\hat{M}$ is the empirical market obtained via $\operatorname{EAP}$, i.e., the market with valuation functions given by $\\{\hat{v}_{i}\\}_{i\in N}$. ###### Proof. To show part 1, note that at each iteration $k$ of $\operatorname{EAP}$, Line 5 updates the error estimates for each $(i,S)$ after a call to $\operatorname{EA}$ (Line 4 of $\operatorname{EAP}$) with input failure probability $\delta_{k}$. Theorem 8 implies that each call to $\operatorname{EA}$ returns estimated values that are within $\hat{\varepsilon}$ of their expected value with probability at least $1-\delta_{k}$. By union bounding all calls to $\operatorname{EA}$ within $\operatorname{EAP}$, part 1 then holds with probability at least $1-\sum_{k}\delta_{k}$. To show part 2, note that only pairs $(i,S)$ for which Equation (4) holds are removed from index set $\mathcal{I}$ (Lines 15–19 of $\operatorname{EAP}$). By Lemma 11, no such pair can be part of any approximate welfare-maximizing allocation of the expected market, $\smash{M_{\mathcal{D}}}$. By Theorem 12, no such pair can be a part of any CE. Consequently, $\smash{\hat{M}}$ contains accurate enough estimates (up to $\varepsilon$) of all $(i,S)$ pairs that may participate in any CE. Part 2 then follows from Theorem 3. ∎ ## 4\. Experiments The goal of our experiments is to robustly evaluate the empirical performance of our algorithms. To this end, we experiment with a variety of qualitatively different inputs. In particular, we evaluate our algorithms on unit-demand valuations, the Global Synergy Value Model (GSVM) Goeree and Holt (2010), and the Local Synergy Value Model (LSVM) Scheffel et al. (2012). Unit-demand valuations are a class of valuations central to the literature on economics and computation Lehmann et al.
(2006) for which efficient algorithms exist to compute CE Gul and Stacchetti (1999). GSVM and LSVM model situations in which buyers’ valuations encode complements; CE are not known to be efficiently computable, or even representable, in these markets. While CE are always guaranteed to exist (e.g., Bikhchandani et al. (2002)), in the worst case, they might require personalized bundle prices. These prices are computationally complex, not to mention out of favor Hinz et al. (2011). A pricing $\mathcal{P}=(P_{1},\ldots,P_{n})$ is _anonymous_ if it charges every buyer the same price, i.e., $P_{i}=P_{k}=P$ for all $i\neq k\in N$. Moreover, an anonymous pricing is _linear_ if there exists a set of prices $\\{p_{1},\ldots,p_{m}\\}$, where $p_{j}$ is good $j$’s price, such that $P(S)=\sum_{j\in S}p_{j}$. In what follows, we refer to linear, anonymous pricings as linear prices. Where possible, it is preferable to work with linear prices, as they are simpler, e.g., when bidding in an auction Kwasnica et al. (2005). In our present study—one of the first empirical studies on learning CE—we thus focus on linear prices, leaving as future research the empirical effect of more complex pricings. (Note that all our theoretical results hold for any pricing profile. Lahaie and Lubin (2019), for example, search for prices in between linear and bundle pricings.) To our knowledge, there have been no analogous attempts at learning CE; hence, we do not reference any baseline algorithms from the literature. Rather, we compare the performance of $\operatorname{EAP}$, our pruning algorithm, to $\operatorname{EA}$, investigating the quality of the CE learned by both, as well as their sample efficiencies. ### 4.1. Experimental Setup. We first explain our experimental setup, and then present results. We let $U[a,b]$ denote the continuous uniform distribution over range $[a,b]$, and $U\\{k,l\\}$, the discrete uniform distribution over set $\\{k,k+1,\ldots,l\\}$, for $k\leq l\in\mathbb{N}$.
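To make the pricing definitions concrete, here is a small sketch of linear, anonymous prices and of checking the UM condition against them. The helper names and the toy numbers are ours, not from the paper:

```python
def linear_price(prices, S):
    """Linear anonymous pricing: P(S) = sum of the per-good prices p_j, j in S."""
    return sum(prices[j] for j in S)

def um_violation(v_i, prices, S_i, all_bundles):
    """How far buyer i is from utility maximization at linear prices:
    max_T [v_i(T) - P(T)] - [v_i(S_i) - P(S_i)]; 0 means UM holds for i."""
    u_assigned = v_i.get(S_i, 0.0) - linear_price(prices, S_i)
    u_best = max(v_i.get(T, 0.0) - linear_price(prices, T) for T in all_bundles)
    return max(0.0, u_best - u_assigned)

bundles = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
v = {frozenset({0}): 3.0, frozenset({1}): 1.0, frozenset({0, 1}): 3.5}
prices = [2.0, 2.0]  # p_0 = p_1 = 2
viol = um_violation(v, prices, frozenset({0}), bundles)  # {0} is this buyer's best bundle
```

At these prices, assigning the buyer the empty bundle instead would leave a UM violation of 1.0, i.e., the utility the buyer forgoes relative to buying good 0.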
#### Simulation of Noisy Combinatorial Markets. We start by drawing markets from experimental market distributions. Then, fixing a market, we simulate noisy value elicitation by adding noise drawn from experimental noise distributions to buyers’ valuations in the market. We refer to a market realization $M$ drawn from an experimental market distribution as the _ground-truth_ market. Our experiments then measure how well we can approximate the CE of a ground-truth market $M$ given access only to noisy samples of it. Fix a market $M$ and a condition set $\mathcal{X}=[a,b]$, where $a<b$. Define the conditional market $M_{\mathcal{X}}$, where $v_{i}(S,x_{iS})=v_{i}(S)+x_{iS}$, for $x_{iS}\in\mathcal{X}$. In words, when eliciting $i$’s valuation for $S$, we assume additive noise, namely $x_{iS}$. The market $M_{\mathcal{X}}$ together with distribution $\mathcal{D}$ over $\mathcal{X}$ is the model from which our algorithms elicit noisy valuations from buyers. Then, given samples $\bm{x}$ of $M_{\mathcal{X}}$, the empirical market $\smash{\hat{M}_{\bm{x}}}$ is the market estimated from the samples. Note that $\smash{\hat{M}_{\bm{x}}}$ is the only market we get to observe in practice. We consider only zero-centered noise distributions. In this case, the expected combinatorial market $M_{\mathcal{D}}$ is the same as the ground-truth market $M$ since, for every $(i,S)\in N\times 2^{G}$ it holds that $v_{i}(S,\mathcal{D})=\mathbf{E}_{\mathcal{D}}[v_{i}(S,x_{iS})]=\mathbf{E}_{\mathcal{D}}[v_{i}(S)+x_{iS}]=v_{i}(S)$. While this noise structure is admittedly simple, we robustly evaluate our algorithms along another dimension, as we study several rich market structures (unit-demand, GSVM, and LSVM). An interesting future direction would be to also study richer noise structures, e.g., letting noise vary with a bundle’s size, or other market characteristics.
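The additive-noise simulation described above can be sketched in a few lines; with zero-centered noise, the empirical mean converges to the ground-truth value. The function name and the particular numbers below are ours:

```python
import random

def empirical_value(v_S, noise_lo, noise_hi, t, rng):
    """Average t noisy elicitations v_i(S, x) = v_i(S) + x with x ~ U[lo, hi].
    For zero-centered noise, this estimates the ground-truth value v_i(S)."""
    return sum(v_S + rng.uniform(noise_lo, noise_hi) for _ in range(t)) / t

rng = random.Random(0)  # fixed seed for reproducibility
# Medium noise model U[-1, 1], ground-truth value 5.0, t = 2000 samples.
estimate = empirical_value(5.0, -1.0, 1.0, 2000, rng)
```

With $t=2000$ samples, the standard deviation of the mean is about $0.013$, so the estimate lands well within a $0.1$ band around the true value.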
#### Utility-Maximization (UM) Loss To measure the quality of a CE $(\mathcal{S}^{\prime},\mathcal{P}^{\prime})$ computed for a market $M^{\prime}$ in another market $M$, we first define the per-buyer metric $\emph{UM-Loss}_{M,i}$ as follows, $\emph{UM-Loss}_{M,i}(\mathcal{S}^{\prime},\mathcal{P}^{\prime})=\smash{\max_{S\subseteq G}(v_{i}(S)-P^{\prime}(S))-(v_{i}(S_{i}^{\prime})-P^{\prime}(S_{i}^{\prime}))},$ i.e., the difference between the maximum utility $i$ could have attained at prices $\mathcal{P}^{\prime}$ and the utility $i$ attains at the outcome $(\mathcal{S}^{\prime},\mathcal{P}^{\prime})$. Our metric of interest is then $\emph{UM-Loss}_{M}$ defined as, $\emph{UM-Loss}_{M}(\mathcal{S}^{\prime},\mathcal{P}^{\prime})=\max_{i\in N}\emph{UM-Loss}_{M,i}(\mathcal{S}^{\prime},\mathcal{P}^{\prime}),$ which is a worst-case measure of utility loss over all buyers in the market. Note that it is not useful to incorporate the RM condition into a loss metric, because it is always satisfied. In our experiments, we measure the UM loss that a CE of an empirical market obtains, evaluated in the corresponding ground-truth market. Thus, given an empirical estimate $\smash{\hat{M}_{\bm{x}}}$ of $M$, and a CE $(\hat{\mathcal{S}},\hat{\mathcal{P}})$ in $\smash{\hat{M}_{\bm{x}}}$, we measure $\smash{\emph{UM-Loss}_{M}(\hat{\mathcal{S}},\hat{\mathcal{P}})}$, i.e., the loss in $M$ at prices $\hat{\mathcal{P}}$ of CE $\smash{(\hat{\mathcal{S}},\hat{\mathcal{P}})}$. Theorem 3 implies that if $\smash{\hat{M}_{\bm{x}}}$ is an $\varepsilon$-approximation of $M$, then $\smash{\emph{UM-Loss}_{M}(\hat{\mathcal{S}},\hat{\mathcal{P}})\leq 2\varepsilon}$. Moreover, Theorem 8 yields the same guarantees, but with probability at least $1-\delta$, provided the $\varepsilon$-approximation holds with probability at least $1-\delta$. #### Sample Efficiency of $\operatorname{EAP}$.
We say that algorithm $A$ has better _sample efficiency_ than algorithm $B$ if $A$ requires fewer samples than $B$ to achieve at least the same $\varepsilon$ accuracy. Fixing a condition set $\mathcal{X}$, a distribution $\mathcal{D}$ over $\mathcal{X}$, and a conditional market $M_{\mathcal{X}}$, we use the following experimental design to evaluate $\operatorname{EAP}$’s sample efficiency relative to that of $\operatorname{EA}$. Given a desired error guarantee $\varepsilon>0$, we compute the number of samples $t(\varepsilon)$ that would be required for $\operatorname{EA}$ to achieve accuracy $\varepsilon$. We then use the following doubling strategy as a sampling schedule for $\operatorname{EAP}$, $\bm{t}(t(\varepsilon))=[\nicefrac{{t(\varepsilon)}}{{4}},\nicefrac{{t(\varepsilon)}}{{2}},t(\varepsilon),2t(\varepsilon)]$, rounding to the nearest integer as necessary, and the following failure probability schedule $\bm{\delta}=[0.025,0.025,0.025,0.025]$, which sums to $0.1$. Finally, the exact pruning budget schedules will vary depending on the value model (unit-demand, GSVM, or LSVM). But in all cases, we denote an _unconstrained_ pruning budget schedule by $\bm{\pi}=[\infty,\infty,\infty,\infty]$, which by convention means that at every iteration, all active pairs are candidates for pruning. Using these schedules, we run $\operatorname{EAP}$ with a desired accuracy of zero. We denote by $\varepsilon_{\operatorname{EAP}}(\varepsilon)$ the approximation guarantee achieved by $\operatorname{EAP}$ upon termination. ### 4.2. Unit-demand Experiments A buyer $i$ is endowed with unit-demand valuations if, for all $S\subseteq G$, $v_{i}(S)=\max_{j\in S}v_{i}(\\{j\\})$. In a unit-demand market, all buyers have unit-demand valuations. A unit-demand market can be compactly represented by matrix $\mathbf{V}$, where entry $v_{ij}\in\mathbb{R}_{+}$ is $i$’s value for $j$, i.e., $v_{ij}=v_{i}(\\{j\\})$.
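A unit-demand valuation is fully determined by one row of the matrix $\mathbf{V}$, so evaluating it on a bundle is a one-liner. A sketch (the row values are hypothetical):

```python
def unit_demand_value(row, S):
    """Unit-demand valuation: v_i(S) = max_{j in S} v_i({j}), with v_i(empty) = 0."""
    return max((row[j] for j in S), default=0.0)

row = [4.0, 7.0, 1.0, 3.0]  # buyer i's values v_ij for goods j = 0..3
v_pair = unit_demand_value(row, {1, 3})  # value of the best single good in the bundle
```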
In what follows, we denote by $\mathcal{V}$ a random variable over unit-demand valuations. We construct four different distributions over unit-demand markets: Uniform, Preferred-Good, Preferred-Good-Distinct, and Preferred-Subset. All distributions are parameterized by $n$ and $m$, the number of buyers and goods, respectively. A uniform unit-demand market $\mathcal{V}\sim\textsc{Uniform}$ is such that for all $i,j,$ $v_{ij}\sim U[0,10]$. When $\mathcal{V}\sim\textsc{Preferred-Good}$, each buyer $i$ has a preferred good $j_{i}$, with $j_{i}\sim U\\{1,\ldots,m\\}$ and $v_{ij_{i}}\sim U[0,10]$. Conditioned on $v_{ij_{i}}$, $i$’s value for good $k\neq j_{i}$ is given by $v_{ik}=\nicefrac{{v_{ij_{i}}}}{{2^{k}}}$. Distribution Preferred-Good-Distinct is similar to Preferred-Good, except that no two buyers have the same preferred good. (Note that the Preferred-Good-Distinct distribution is only well defined when $n\leq m$.) Finally, when $\mathcal{V}\sim\textsc{Preferred-Subset}$, each buyer $i$ is interested in a subset of goods $i_{G}\subseteq G$, where $i_{G}$ is drawn uniformly at random from the set of all bundles. Then, the value $i$ has for $j$ is given by $v_{ij}\sim U[0,10]$, if $j\in i_{G}$; and 0, otherwise. In unit-demand markets, we experiment with three noise models, low, medium, and high, by adding noise drawn from $U[-.5,.5]$, $U[-1,1],$ and $U[-2,2]$, respectively. We choose $(n,m)\in\\{5,10,15,20\\}^{2}$.

| Distribution | $\hat{\mathcal{P}}_{\textsc{min}}$ ($\varepsilon=0.05$) | $\hat{\mathcal{P}}_{\textsc{max}}$ ($\varepsilon=0.05$) | $\hat{\mathcal{P}}_{\textsc{min}}$ ($\varepsilon=0.2$) | $\hat{\mathcal{P}}_{\textsc{max}}$ ($\varepsilon=0.2$) |
|---|---|---|---|---|
| Uniform | 0.0018 | 0.0020 | 0.0074 | 0.0082 |
| Preferred-Good | 0.0019 | 0.0023 | 0.0080 | 0.0094 |
| Preferred-Good-Distinct | 0.0000 | 0.0020 | 0.0000 | 0.0086 |
| Preferred-Subset | 0.0019 | 0.0022 | 0.0076 | 0.0090 |

Table 1. Average _UM-Loss_ for $\varepsilon\in\\{0.05,0.2\\}$. Figure 1.
Mean $\operatorname{EAP}$ sample efficiency relative to $\operatorname{EA}$, $\varepsilon=0.05$. Each $(i,j)$ pair is annotated with the corresponding % saving. #### Unit-demand Empirical UM Loss of $\operatorname{EA}$. As a learned CE is a CE of a learned market, we require a means of computing the CE of a market—specifically, of a unit-demand market $\mathbf{V}$. To do so, we first solve for the welfare-maximizing allocation $\mathcal{S}^{*}_{\mathbf{V}}$ of $\mathbf{V}$ by solving for the maximum-weight matching, using the Hungarian algorithm Kuhn (1955), in the bipartite graph whose weight matrix is given by $\mathbf{V}$. (Since in all our experiments we draw values from continuous distributions, we assume that the set of markets with multiple welfare-maximizing allocations is of negligible size; we can therefore ignore ties.) Fixing $\mathcal{S}^{*}_{\mathbf{V}}$, we then solve for prices via linear programming Bikhchandani et al. (2002). In general, many prices might couple with $\mathcal{S}^{*}_{\mathbf{V}}$ to form a CE of $\mathbf{V}$. For simplicity, we solve for two pricings given $\mathcal{S}^{*}_{\mathbf{V}}$: the revenue-maximizing $\mathcal{P}_{\textsc{max}}$ and the revenue-minimizing $\mathcal{P}_{\textsc{min}}$, where revenue is defined as the sum of the prices. For each distribution, we draw 50 markets, and for each such market $\mathbf{V}$, we run $\operatorname{EA}$ four times, each time to achieve guarantee $\varepsilon\in\{0.05,0.1,0.15,0.2\}$. $\operatorname{EA}$ then outputs an empirical estimate $\hat{\mathbf{V}}$ for each $\mathbf{V}$.
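For intuition on the matching step, the following brute-force routine computes the same welfare-maximizing assignment that the Hungarian algorithm finds, for tiny markets with $n\leq m$ (exponential time, illustration only):

```python
from itertools import permutations

def optimal_unit_demand_welfare(V):
    # V is an n x m value matrix with n <= m; try every injective
    # assignment of buyers to goods and keep the best total value.
    n, m = len(V), len(V[0])
    best_welfare, best_assignment = float("-inf"), None
    for assignment in permutations(range(m), n):
        welfare = sum(V[i][assignment[i]] for i in range(n))
        if welfare > best_welfare:
            best_welfare, best_assignment = welfare, assignment
    return best_welfare, best_assignment
```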
We compute outcomes $\smash{(\mathcal{S}^{*}_{\hat{\mathbf{V}}},\hat{\mathcal{P}}_{\textsc{max}})}$ and $\smash{(\mathcal{S}^{*}_{\hat{\mathbf{V}}},\hat{\mathcal{P}}_{\textsc{min}})}$, and measure $\smash{\emph{UM-Loss}_{\mathbf{V}}(\mathcal{S}^{*}_{\hat{\mathbf{V}}},\hat{\mathcal{P}}_{\textsc{max}})}$ and $\smash{\emph{UM-Loss}_{\mathbf{V}}(\mathcal{S}^{*}_{\hat{\mathbf{V}}},\hat{\mathcal{P}}_{\textsc{min}})}$. We then average across all market draws, for both the minimum and the maximum pricings. Table 1 summarizes a subset of these results. The error guarantees are consistently met across the board, indeed by one or two orders of magnitude, and they degrade as expected with higher values of $\varepsilon$. We note that the quality of the learned CE is roughly the same for all distributions, except in the case of $\hat{\mathcal{P}}_{\textsc{min}}$ and Preferred-Good-Distinct, where learning is more accurate. For this distribution, it is enough to learn the preferred good of each buyer. One possible CE is then to allocate each buyer its preferred good and price all goods at zero, which yields nearly no _UM-Loss_. Note that, in general, pricing all goods at zero is not a CE, unless the market has some special structure, like that of the markets drawn from Preferred-Good-Distinct. #### Unit-demand Sample Efficiency We use pruning schedule $\bm{\pi}=[\infty,\infty,\infty,\infty]$, and for each $(i,j)$ pair, we use the Hungarian algorithm Kuhn (1955) to compute the optimal welfare of the market without $(i,j)$. In other words, in each iteration, we consider all active $(i,j)$ pairs as pruning candidates (Algorithm 1, Line 11), and for each we compute the optimal welfare (Algorithm 1, Line 14). For each market distribution, we compute the average number of samples used by $\operatorname{EAP}$ across 50 independent market draws.
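Returning to the UM-Loss measurements above: one way to compute a buyer's utility regret at linear prices in a unit-demand market is sketched below. The paper's formal _UM-Loss_ definition appears in its earlier sections; taking the maximum regret across buyers here is an assumption of this sketch.

```python
def um_loss(V, prices, alloc):
    # V: n x m value matrix; prices: per-good linear prices;
    # alloc[i]: good allocated to buyer i, or None.
    # Regret = best achievable utility (including the empty bundle)
    # minus the utility of the allocated good.
    regrets = []
    for i, row in enumerate(V):
        best = max([0.0] + [row[j] - prices[j] for j in range(len(row))])
        got = 0.0 if alloc[i] is None else row[alloc[i]] - prices[alloc[i]]
        regrets.append(best - got)
    return max(regrets)
```

An allocation/pricing pair is utility-maximizing (UM) exactly when every regret is zero.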
We report the number of samples used by $\operatorname{EAP}$ as a percentage of the number of samples used by $\operatorname{EA}$ to achieve the same guarantee, namely $\varepsilon_{\operatorname{EAP}}(\varepsilon)$, for each initial value of $\varepsilon$. Figure 1 depicts the results of these experiments as heat maps, for all distributions and for $\varepsilon=0.05$, where darker colors indicate more savings, and thus better $\operatorname{EAP}$ sample efficiency. A few trends arise, which we note are similar for other values of $\varepsilon$. For a fixed number of buyers, $\operatorname{EAP}$’s sample efficiency improves as the number of goods increases: only a smaller fraction of the goods can then be allocated, which means that there are more candidate values to prune, resulting in more savings. On the other hand, sample efficiency usually decreases as the number of buyers increases; this is to be expected, as the pruning criterion degrades with the number of buyers (Lemma 11). While savings exceed 30% across the board, we note that Uniform, the market with the least structure, achieves the least savings, while Preferred-Subset and Preferred-Good-Distinct achieve the most. This finding shows that $\operatorname{EAP}$ is capable of exploiting the structure present in these distributions, despite not knowing anything about them _a priori_. Finally, we note that sample efficiency quickly degrades for higher values of $\varepsilon$. In fact, for high enough values of $\varepsilon$ (in our experiments, $\varepsilon=0.2$), $\operatorname{EAP}$ might, on average, require more samples than $\operatorname{EA}$ to produce the same guarantee. Most of the savings achieved are the result of pruning enough $(i,j)$ pairs early enough: i.e., during the first few iterations of $\operatorname{EAP}$. When $\varepsilon$ is large, however, our sampling schedule does not allocate enough samples early on.
When designing sampling schedules for $\operatorname{EAP}$, one must allocate enough (but not too many) samples at the beginning of the schedule. Precisely how to determine this schedule is an empirical question, likely dependent on the particular application at hand. ### 4.3. Value Models In this next set of experiments, we test the empirical performance of our algorithms in more complex markets, where buyers’ valuations contain synergies. Synergies are a common feature of many high-stakes combinatorial markets. For example, telecommunication service providers might value different bundles of radio spectrum licenses differently, depending on whether the licenses in the bundle complement one another: a bundle including New Jersey and Connecticut might not be very valuable unless it also contains New York City. Specifically, we study the Global Synergy Value Model (GSVM) Goeree and Holt (2010) and the Local Synergy Value Model (LSVM) Scheffel et al. (2012). These models capture buyers’ synergies as a function of buyers’ types and their (abstract) geographical locations. In both GSVM and LSVM, there are 18 licenses, with buyers of two types: national or regional. A national buyer is interested in larger packages than regional buyers, whose interests are limited to certain regions. GSVM has six regional bidders and one national bidder, and models geographical regions as two circles. LSVM has five regional bidders and one national bidder, and uses a rectangular model. The models differ in the exact ways buyers’ values are drawn, but in both cases, synergies are modeled by suitable distance metrics. In our experiments, we draw instances of both GSVM and LSVM using SATS, a universal spectrum auction test suite developed by researchers to test algorithms for combinatorial markets Weiss et al. (2017).
| $\varepsilon$ | $\operatorname{EA}$ (GSVM) | $\operatorname{EAP}$ (GSVM) | $\varepsilon_{\operatorname{EAP}}$ (GSVM) | UM Loss (GSVM) | $\operatorname{EA}$ (LSVM) | $\operatorname{EAP}$ (LSVM) | $\varepsilon_{\operatorname{EAP}}$ (LSVM) | UM Loss (LSVM) |
|---|---|---|---|---|---|---|---|---|
| 1.25 | $2,642$ | ${\bf 720\pm 10}$ | $0.73\pm 0.01$ | $0.0022\pm 0.0002$ | $330,497\pm 386$ | ${\bf 270,754\pm 14,154}$ | $0.89\pm 0.00$ | $0.0011\pm 0.0003$ |
| 2.50 | $660$ | ${\bf 226\pm 10}$ | $1.57\pm 0.02$ | $0.0041\pm 0.0005$ | $82,624\pm 96$ | ${\bf 73,733\pm 3,629}$ | $1.78\pm 0.00$ | $0.0018\pm 0.0003$ |
| 5.00 | $165$ | ${\bf 117\pm 11}$ | $3.41\pm 0.03$ | $0.0063\pm 0.0008$ | ${\bf 20,656\pm 24}$ | $22,054\pm 933$ | $3.59\pm 0.01$ | $0.0037\pm 0.0005$ |
| 10.0 | ${\bf 41}$ | $69\pm 4$ | $7.36\pm 0.04$ | $0.0107\pm 0.0010$ | ${\bf 5,164\pm 6}$ | $7,580\pm 211$ | $7.27\pm 0.01$ | $0.0072\pm 0.0011$ |

Table 2. GSVM (left group) and LSVM (right group) results. Each group reports sample efficiency and UM loss. Each row of the table reports results for a fixed value of $\varepsilon$. Results are 95% confidence intervals over 40 GSVM market draws and 50 LSVM market draws, except for $\operatorname{EA}$’s number of samples in the case of GSVM, which is a deterministic quantity (a GSVM market is of size 4,480). The values in bold indicate the more sample-efficient algorithm. Numbers of samples are reported in millions.

#### Experimental Setup. On average, the value a buyer has for an arbitrary bundle in either GSVM or LSVM markets is approximately 80. We introduce i.i.d. noise drawn from distribution $U[-1,1]$, whose range is 2, or 2.5% of the expected buyer’s value for a bundle. As GSVM’s buyers’ values are at most 400, and LSVM’s are at most 500, we use valuation ranges $c=402$ and $c=502$ for GSVM and LSVM, respectively. We note that a larger noise range yields qualitatively similar results, with errors scaling accordingly. For the GSVM markets, we use the pruning budget schedule $\bm{\pi}=[\infty,\infty,\infty,\infty]$.
For each $(i,S)$ pair, we solve the welfare maximization problem using an off-the-shelf solver. (We include ILP formulations and further technical details in the appendix.) In an LSVM market, the national bidder demands all 18 licenses. The welfare optimization problem in an LSVM market is solvable in a few seconds (approximately 20 seconds in our experiments; details appear in the appendix). Still, the many submarkets (in the hundreds of thousands) call for a finite pruning budget schedule and a cheaper-to-compute welfare upper bound. In fact, to address LSVM’s size complexity, we slightly modify $\operatorname{EAP}$, as explained next. #### A two-pass strategy for LSVM Because of the complexity of LSVM markets, we developed a heuristic pruning strategy, in which we perform two pruning passes during each iteration of $\operatorname{EAP}$. The idea is, in a first pass with pruning budget schedule $\bm{\pi}=[\infty,\infty,\infty,\infty]$, to compute a computationally cheap upper bound on welfare and use this bound in place of the optimal welfare for each active $(i,S)$ pair. We compute this bound using the classic relaxation technique for creating admissible heuristics. Concretely, given a candidate $(i,S)$ pair, we compute the maximum welfare in the absence of pair $(i,S)$, ignoring feasibility constraints: $w_{(i,S)}^{\diamond}=\sum\nolimits_{k\in N\setminus\{i\}}\max\{v_{k}(T)\mid T\in 2^{G}\text{ and }S\cap T=\emptyset\}.$ After this first pass, we undertake a second pass over all remaining active pairs. For each active pair $(i,S)$, we compute the optimal welfare without that pair, but now using the _finite_ pruning budget schedule $\bm{\pi}=[180,90,60,45]$. In other words, we carry out this computation for just a few of the remaining candidate pairs. We chose this pruning budget schedule so that one iteration of $\operatorname{EAP}$ would take approximately two hours.
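The first-pass bound $w_{(i,S)}^{\diamond}$ amounts to one maximization per remaining buyer; a sketch with illustrative data structures (each `valuations[k]` maps frozenset bundles, including the empty bundle, to values):

```python
def relaxed_welfare_upper_bound(valuations, i, S):
    # w_diamond: each buyer k != i takes its best bundle disjoint from S,
    # ignoring that the chosen bundles must also be disjoint from one
    # another -- the classic relaxation yielding an admissible upper bound.
    total = 0.0
    for k, v_k in valuations.items():
        if k == i:
            continue
        total += max(v for T, v in v_k.items() if not (S & T))
    return total
```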
One choice remains undefined for the second pruning pass: which $(i,S)$ candidate pairs to select out of those not pruned in the first pass? For each iteration $k$, we sort the active $(i,S)$ pairs in descending order according to the upper bound on welfare computed in the first pass, and then we select the bottom $\bm{\pi}_{k}$ pairs (180 during the first iteration, 90 during the second, etc.). The intuition for this choice is that pairs with lower upper bounds might be more likely to satisfy Lemma 11’s pruning criterion than pairs with higher upper bounds. Note that the way candidate pairs are selected for the second pruning pass uses no information about the underlying market, and is thus widely applicable. We will have more to say about the lack of a priori information used by $\operatorname{EAP}$ in what follows. #### Results. Table 2 summarizes the results of our experiments with GSVM and LSVM markets. The table shows 95% confidence intervals around the mean number of samples needed by $\operatorname{EA}$ and $\operatorname{EAP}$ to achieve the indicated accuracy ($\varepsilon$) guarantee, for each row of the table. The table also shows confidence intervals around the mean $\varepsilon$ guarantees achieved by $\operatorname{EAP}$, denoted $\varepsilon_{\operatorname{EAP}}$, and confidence intervals over the UM loss metric. Several observations follow. Although ultimately a heuristic method, on average $\operatorname{EAP}$ uses far fewer samples than $\operatorname{EA}$ and produces significantly better $\varepsilon$ guarantees. We emphasize that $\operatorname{EAP}$ is capable of producing these results without any a priori knowledge about the underlying market. Instead, $\operatorname{EAP}$ _autonomously_ samples those quantities that can provably be part of an optimal solution. The $\operatorname{EAP}$ guarantees are slightly worse in the LSVM market than in GSVM, where we prune _all_ eligible $(i,S)$ pairs.
In general, there is a tradeoff between computational and sample efficiency: at the cost of more computation, to find more pairs to prune up front, one can save on future samples. Still, even with a rather restricted pruning budget $\bm{\pi}=[180,90,60,45]$ (compared to hundreds of thousands of potentially active $(i,S)$ pairs), $\operatorname{EAP}$ achieves substantial savings compared to $\operatorname{EA}$ in the LSVM market. Finally, the UM loss metric follows a trend similar to those observed for unit-demand markets; i.e., the error guarantees are consistently met and degrade as expected (worse guarantees for higher values of $\varepsilon$). Note that in our experiments, all 40 GSVM market instances have equilibria with linear and anonymous prices. In contrast, only 18 out of 50 LSVM markets do, so the table reports UM loss over this set. For the remaining 32 markets, we report here a UM loss of approximately $12\pm 4$ _regardless_ of the value of $\varepsilon$. This high UM loss is due to the lack of a CE in linear prices, which dominates any UM loss attributable to the estimation of values. ## 5. Conclusion and Future Directions In this paper, we define noisy combinatorial markets as a model of combinatorial markets in which buyers’ valuations are not known with complete certainty, but noisy samples can be obtained, for example, by using approximate methods, heuristics, or by truncating the run-time of a complete algorithm. For this model, we tackle the problem of learning CE. We first show tight lower and upper bounds on the buyers’ utility loss, and hence on the set of CE, given a uniform approximation of one market by another. We then develop learning algorithms that, with high probability, learn said uniform approximations using only finitely many samples.
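For completeness, the two-pass selection rule described above (sort active pairs by first-pass upper bound in descending order, keep the bottom $\bm{\pi}_{k}$ pairs) is only a few lines; names here are illustrative:

```python
def select_second_pass(active_pairs, upper_bounds, budget):
    # Rank active (i, S) pairs by their first-pass welfare upper bound,
    # highest first, then keep the `budget` pairs at the bottom of the
    # ranking, i.e., those with the LOWEST bounds.
    ranked = sorted(active_pairs, key=lambda p: upper_bounds[p], reverse=True)
    return ranked[-budget:] if budget < len(ranked) else ranked
```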
Leveraging the first welfare theorem of economics, we define a pruning criterion under which an algorithm can provably stop learning about buyers’ valuations for bundles, without affecting the quality of the set of learned CE. We embed these conditions in an algorithm that we show experimentally is capable of learning CE with far fewer samples than a baseline. Crucially, the algorithm need not know anything about the market’s structure _a priori_; it is general enough to work in any combinatorial market. Moreover, we expect substantial improvement from sharper sample complexity bounds; in particular, variance-sensitive bounds can be vastly more efficient when the variance is small, whereas Hoeffding’s inequality essentially assumes the worst-case variance. #### Acknowledgements This work was supported by NSF Award CMMI-1761546 and by DARPA grant FA8750. ## References * Areyan Viqueira et al. (2020) Enrique Areyan Viqueira, Cyrus Cousins, and Amy Greenwald. 2020. Improved Algorithms for Learning Equilibria in Simulation-Based Games. In _Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems, AAMAS ’20, Auckland, New Zealand, May 9-13, 2020_, Amal El Fallah Seghrouchni, Gita Sukthankar, Bo An, and Neil Yorke-Smith (Eds.). International Foundation for Autonomous Agents and Multiagent Systems, 79–87. https://dl.acm.org/doi/abs/10.5555/3398761.3398776 * Areyan Viqueira and Greenwald (2020) Enrique Areyan Viqueira and Amy Greenwald. 2020. Learning Competitive Equilibria in Noisy Combinatorial Markets. In _Proceedings of the 2nd Games, Agents, and Incentives Workshop (GAIW@AAMAS 2020)_. * Areyan Viqueira et al. (2019) Enrique Areyan Viqueira, Amy Greenwald, Cyrus Cousins, and Eli Upfal. 2019. Learning Simulation-Based Games from Data. In _Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems_. International Foundation for Autonomous Agents and Multiagent Systems, 1778–1780.
* Balcan et al. (2012) Maria-Florina Balcan, Florin Constantin, Satoru Iwata, and Lei Wang. 2012. Learning Valuation Functions. In _COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland_ _(JMLR Proceedings, Vol. 23)_, Shie Mannor, Nathan Srebro, and Robert C. Williamson (Eds.). JMLR.org, 4.1–4.24. http://proceedings.mlr.press/v23/balcan12b/balcan12b.pdf * Balcan and Harvey (2011) Maria-Florina Balcan and Nicholas J. A. Harvey. 2011. Learning submodular functions. In _Proceedings of the 43rd ACM Symposium on Theory of Computing, STOC 2011, San Jose, CA, USA, 6-8 June 2011_, Lance Fortnow and Salil P. Vadhan (Eds.). ACM, 793–802. https://doi.org/10.1145/1993636.1993741 * Ball et al. (2006) Michael O. Ball, George Donohue, and Karla Hoffman. 2006. Auctions for the safe, efficient and equitable allocation of airspace system resources. In Peter Cramton, Yoav Shoham, and Richard Steinberg (Eds.), _Combinatorial Auctions_. * Bartlett and Mendelson (2002) Peter L Bartlett and Shahar Mendelson. 2002. Rademacher and Gaussian complexities: Risk bounds and structural results. _Journal of Machine Learning Research_ 3, Nov (2002), 463–482. * Bikhchandani and Mamer (1997) Sushil Bikhchandani and John W Mamer. 1997. Competitive equilibrium in an exchange economy with indivisibilities. _Journal of Economic Theory_ 74, 2 (1997), 385–413. * Bikhchandani et al. (2002) Sushil Bikhchandani, Joseph M Ostroy, et al. 2002. The package assignment model. _Journal of Economic Theory_ 107, 2 (2002), 377–406. * Cantillon and Pesendorfer (2006) Estelle Cantillon and Martin Pesendorfer. 2006. Auctioning bus routes: The London experience. In _Combinatorial Auctions_. * Cheung et al. (2020) Yun Kuen Cheung, Richard Cole, and Nikhil R Devanur. 2020. Tatonnement beyond gross substitutes? Gradient descent to the rescue. _Games and Economic Behavior_ 123 (2020), 295–326. * Conen and Sandholm (2001) Wolfram Conen and Tuomas Sandholm. 2001. Preference elicitation in combinatorial auctions.
In _Proceedings 3rd ACM Conference on Electronic Commerce (EC-2001), Tampa, Florida, USA, October 14-17, 2001_ , Michael P. Wellman and Yoav Shoham (Eds.). ACM, 256–259. https://doi.org/10.1145/501158.501191 * Cramton et al. (2002) Peter Cramton et al. 2002\. Spectrum auctions. _Handbook of telecommunications economics_ 1 (2002), 605–639. * Edelman et al. (2007) Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz. 2007\. Internet advertising and the generalized second-price auction: Selling billions of dollars worth of keywords. _American economic review_ 97, 1 (2007), 242–259. * Fujishima et al. (1999) Yuzo Fujishima, Kevin Leyton-Brown, and Yoav Shoham. 1999\. Taming the Computational Complexity of Combinatorial Auctions: Optimal and Approximate Approaches. In _Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, IJCAI 99, Stockholm, Sweden, July 31 \- August 6, 1999. 2 Volumes, 1450 pages_ , Thomas Dean (Ed.). Morgan Kaufmann, 548–553. http://ijcai.org/Proceedings/99-1/Papers/079.pdf * Goeree and Holt (2010) Jacob K Goeree and Charles A Holt. 2010. Hierarchical package bidding: A paper & pencil combinatorial auction. _Games and Economic Behavior_ 70, 1 (2010), 146–169. * Gul and Stacchetti (1999) Faruk Gul and Ennio Stacchetti. 1999. Walrasian equilibrium with gross substitutes. _Journal of Economic theory_ 87, 1 (1999), 95–124. * Hinz et al. (2011) Oliver Hinz, II-Horn Hann, and Martin Spann. 2011\. Price discrimination in e-commerce? An examination of dynamic pricing in name-your-own price markets. _Mis quarterly_ (2011), 81–98. * Hoeffding (1994) Wassily Hoeffding. 1994\. Probability inequalities for sums of bounded random variables. In _The Collected Works of Wassily Hoeffding_. Springer, 409–426. * Jha and Zick (2020) Tushant Jha and Yair Zick. 2020. A Learning Framework for Distribution-Based Game-Theoretic Solution Concepts. In _Proceedings of the 21st ACM Conference on Economics and Computation_. 355–377. 
* Koltchinskii (2001) Vladimir Koltchinskii. 2001. Rademacher penalties and structural risk minimization. _IEEE Transactions on Information Theory_ 47, 5 (2001), 1902–1914. * Kroer et al. (2019) Christian Kroer, Alexander Peysakhovich, Eric Sodomka, and Nicolás E. Stier Moses. 2019. Computing Large Market Equilibria using Abstractions. In _Proceedings of the 2019 ACM Conference on Economics and Computation, EC 2019, Phoenix, AZ, USA, June 24-28, 2019_, Anna Karlin, Nicole Immorlica, and Ramesh Johari (Eds.). ACM, 745–746. https://doi.org/10.1145/3328526.3329553 * Kuhn (1955) Harold W Kuhn. 1955. The Hungarian method for the assignment problem. _Naval Research Logistics Quarterly_ 2, 1-2 (1955), 83–97. * Kwasnica et al. (2005) Anthony M Kwasnica, John O Ledyard, Dave Porter, and Christine DeMartini. 2005. A new and improved design for multiobject iterative auctions. _Management Science_ 51, 3 (2005), 419–434. * Lahaie and Lubin (2019) Sébastien Lahaie and Benjamin Lubin. 2019. Adaptive-Price Combinatorial Auctions. In _Proceedings of the 2019 ACM Conference on Economics and Computation, EC 2019, Phoenix, AZ, USA, June 24-28, 2019_, Anna Karlin, Nicole Immorlica, and Ramesh Johari (Eds.). ACM, 749–750. https://doi.org/10.1145/3328526.3329615 * Lahaie and Parkes (2004) Sébastien Lahaie and David C. Parkes. 2004. Applying learning algorithms to preference elicitation. In _Proceedings 5th ACM Conference on Electronic Commerce (EC-2004), New York, NY, USA, May 17-20, 2004_, Jack S. Breese, Joan Feigenbaum, and Margo I. Seltzer (Eds.). ACM, 180–188. https://doi.org/10.1145/988772.988800 * Lehmann et al. (2006) Benny Lehmann, Daniel Lehmann, and Noam Nisan. 2006. Combinatorial auctions with decreasing marginal utilities. _Games and Economic Behavior_ 55, 2 (2006), 270–296. * Nisan et al. ([n.d.]) Noam Nisan, Tim Roughgarden, Éva Tardos, and Vijay V. Vazirani (Eds.). _Algorithmic Game Theory_, 2007. * Roughgarden (2010) Tim Roughgarden. 2010. Algorithmic game theory. _Commun. ACM_ 53, 7 (2010), 78–86. https://doi.org/10.1145/1785414.1785439 * Roughgarden and Talgam-Cohen (2015) Tim Roughgarden and Inbal Talgam-Cohen. 2015. Why Prices Need Algorithms. In _Proceedings of the Sixteenth ACM Conference on Economics and Computation, EC ’15, Portland, OR, USA, June 15-19, 2015_, Tim Roughgarden, Michal Feldman, and Michael Schwarz (Eds.). ACM, 19–36. https://doi.org/10.1145/2764468.2764515 * Saltzman (2002) Matthew J Saltzman. 2002. COIN-OR: an open-source library for optimization. In _Programming languages and systems in computational economics and finance_. Springer, 3–32. * Scheffel et al. (2012) Tobias Scheffel, Georg Ziegler, and Martin Bichler. 2012. On the impact of package selection in combinatorial auctions: an experimental study in the context of spectrum auction design. _Experimental Economics_ 15, 4 (2012), 667–692. * Vorobeychik (2010) Yevgeniy Vorobeychik. 2010. Probabilistic analysis of simulation-based games. _ACM Trans. Model. Comput. Simul._ 20, 3 (2010), 16:1–16:25. https://doi.org/10.1145/1842713.1842719 * Vorobeychik and Wellman (2008) Yevgeniy Vorobeychik and Michael P. Wellman. 2008. Stochastic search methods for nash equilibrium approximation in simulation-based games. In _7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008), Estoril, Portugal, May 12-16, 2008, Volume 2_, Lin Padgham, David C. Parkes, Jörg P. Müller, and Simon Parsons (Eds.). IFAAMAS, 1055–1062. https://dl.acm.org/citation.cfm?id=1402368 * Walras (2003) Léon Walras. 2003. _Elements of Pure Economics: Or the Theory of Social Wealth_. Routledge. https://books.google.com/books?id=hwjRD3z0Qy4C * Weiss et al. (2017) Michael Weiss, Benjamin Lubin, and Sven Seuken. 2017. SATS: A Universal Spectrum Auction Test Suite.
In _Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2017, São Paulo, Brazil, May 8-12, 2017_, Kate Larson, Michael Winikoff, Sanmay Das, and Edmund H. Durfee (Eds.). ACM, 51–59. http://dl.acm.org/citation.cfm?id=3091139 * Wellman (2006) Michael P. Wellman. 2006. Methods for Empirical Game-Theoretic Analysis. In _Proceedings, The Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference, July 16-20, 2006, Boston, Massachusetts, USA_. AAAI Press, 1552–1556. http://www.aaai.org/Library/AAAI/2006/aaai06-248.php ## Appendix ### Theoretical Proofs ###### Proof. (Lemma 7 of the main paper) Let $M_{\mathcal{X}}$ be a conditional combinatorial market, $\mathcal{D}$ a distribution over $\mathcal{X}$, and $\mathcal{I}\subseteq N\times 2^{G}$ an index set. Let $\bm{x}=(x_{1},\ldots,x_{t})\sim\mathcal{D}$ be a vector of $t$ samples drawn from $\mathcal{D}$. Suppose that for all $x\in\mathcal{X}$ and $(i,S)\in\mathcal{I}$, it holds that $v_{i}(S,x)\in[0,c]$, where $c\in\mathbb{R}_{+}$. Let $\delta>0$ and $\varepsilon>0$. Then, by Hoeffding’s inequality Hoeffding (1994), $Pr(|v_{i}(S)-\hat{v}_{i}(S)|\geq\varepsilon)\leq 2e^{-2t(\frac{\varepsilon}{c})^{2}}.$ (10) Now, applying the union bound over all events $|v_{i}(S)-\hat{v}_{i}(S)|\geq\varepsilon$ with $(i,S)\in\mathcal{I}$, $Pr\left(\bigcup_{(i,S)\in\mathcal{I}}|v_{i}(S)-\hat{v}_{i}(S)|\geq\varepsilon\right)\leq\sum_{(i,S)\in\mathcal{I}}Pr\left(|v_{i}(S)-\hat{v}_{i}(S)|\geq\varepsilon\right).$ (11) Using bound (10) on the right-hand side of (11), $Pr\left(\bigcup_{(i,S)\in\mathcal{I}}|v_{i}(S)-\hat{v}_{i}(S)|\geq\varepsilon\right)\leq\sum_{(i,S)\in\mathcal{I}}2e^{-2t(\frac{\varepsilon}{c})^{2}}=2|\mathcal{I}|e^{-2t(\frac{\varepsilon}{c})^{2}},$ (12) where the last equality follows because the summands on the right-hand side of eq. 12 do not depend on the summation index. Now, note that eq.
12 implies a lower bound on the probability of the complement of the event $\bigcup_{(i,S)\in\mathcal{I}}|v_{i}(S)-\hat{v}_{i}(S)|\geq\varepsilon$: $Pr\left(\bigcap_{(i,S)\in\mathcal{I}}|v_{i}(S)-\hat{v}_{i}(S)|\leq\varepsilon\right)\geq 1-2|\mathcal{I}|e^{-2t(\frac{\varepsilon}{c})^{2}}.$ (13) The event $\bigcap_{(i,S)\in\mathcal{I}}|v_{i}(S)-\hat{v}_{i}(S)|\leq\varepsilon$ is equivalent to the event $\max_{(i,S)\in\mathcal{I}}|v_{i}(S)-\hat{v}_{i}(S)|\leq\varepsilon$. Setting $\delta=2|\mathcal{I}|e^{-2t(\frac{\varepsilon}{c})^{2}}$ and solving for $\varepsilon$ yields $\varepsilon=c\sqrt{\nicefrac{{\ln\left(\nicefrac{{2|\mathcal{I}|}}{{\delta}}\right)}}{{2t}}}$. The result follows by substituting $\varepsilon$ in eq. 13. ∎ ### Mathematical Programs For our experiments, we solve for CE in linear prices. To compute CE in linear prices, we first solve for a welfare-maximizing allocation $\mathcal{S}^{*}$ and then, fixing $\mathcal{S}^{*}$, we solve for CE linear prices. Note that, if a CE in linear prices exists, then it is supported by any welfare-maximizing allocation Roughgarden and Talgam-Cohen (2015). Moreover, since valuations in our experiments are drawn from continuous distributions, we assume that the set of markets with multiple welfare-maximizing allocations is of negligible size. Next, we present the mathematical programs we used to compute welfare-maximizing allocations and to find linear prices. Given a combinatorial market $M$, the following integer linear program, (14), computes a welfare-maximizing allocation $\mathcal{S}^{*}$. Note that this formulation is standard in the literature Nisan et al. ([n.d.]).
$\begin{array}{lll}\text{maximize}&\displaystyle\sum\limits_{i\in N,\,S\subseteq G}v_{i}(S)x_{iS}&\\ \text{subject to}&\displaystyle\sum\limits_{i\in N,\,S\mid j\in S}x_{iS}\leq 1,&j=1,\ldots,m\\ &\displaystyle\sum\limits_{S\subseteq G}x_{iS}\leq 1,&i=1,\ldots,n\\ &x_{iS}\in\{0,1\},&i\in N,\,S\subseteq G\end{array}$ (14)

Given a market $M$ and a solution $\mathcal{S}^{*}$ to (14), the following set of linear inequalities, (15), defines all linear prices that couple with allocation $\mathcal{S}^{*}$ to form a CE in $M$. The inequalities are defined over variables $P_{1},\ldots,P_{m}$, where $P_{j}$ is good $j$'s price. The price of bundle $S$ is then $\sum_{j\in S}P_{j}$.

$\begin{array}{ll}v_{i}(S)-\sum_{j\in S}P_{j}\leq v_{i}(S^{*}_{i})-\sum_{j\in S^{*}_{i}}P_{j},&i\in N,\,S\subseteq G\\ \text{If }j\notin\cup_{i\in N}S^{*}_{i},\text{ then }P_{j}=0,&j=1,\ldots,m\\ P_{j}\geq 0,&j\in G\end{array}$ (15)

The first set of inequalities of (15) enforces the UM conditions. The second set states that the price of goods not allocated to any buyer in $\mathcal{S}^{*}$ must be zero; in the case of linear pricing, this condition is equivalent to the RM condition. In practice, a market might not have a CE in linear prices, i.e., the set of feasible solutions of (15) might be empty. In our experiments, we therefore solve the following linear program, (16), which is a relaxation of (15). In linear program (16), we introduce slack variables $\alpha_{iS}$ to relax the UM constraints, and minimize the sum of all slack variables, $\sum_{i\in N,S\subseteq G}\alpha_{iS}$.
$\begin{array}{ll}\text{minimize}&\displaystyle\sum\limits_{i\in N,\,S\subseteq G}\alpha_{iS}\\ \text{subject to}&v_{i}(S)-\sum_{j\in S}P_{j}-\alpha_{iS}\leq v_{i}(S^{*}_{i})-\sum_{j\in S^{*}_{i}}P_{j},\quad i\in N,\,S\subseteq G\\ &\text{If }j\notin\cup_{i\in N}S^{*}_{i},\text{ then }P_{j}=0,\quad j=1,\ldots,m\\ &P_{j}\geq 0,\quad j\in G\\ &\alpha_{iS}\geq 0,\quad i\in N,\,S\subseteq G\end{array}$ (16)

As reported in the main paper, for each GSVM market we found that the optimal solution of (16) was such that $\sum_{i\in N,S\subseteq G}\alpha_{iS}=0$, which means that an exact CE in linear prices was found. In contrast, for LSVM only 18 out of 50 markets had linear prices ($\sum_{i\in N,S\subseteq G}\alpha_{iS}=0$), whereas 32 did not ($\sum_{i\in N,S\subseteq G}\alpha_{iS}>0$). ### Experiments' Technical Details We used the COIN-OR Saltzman (2002) library, through Python's PuLP (https://pypi.org/project/PuLP/) interface, to solve all mathematical programs. We wrote all our experiments in Python, and once the double-blind review period concludes, we will release all code publicly. We ran our experiments on a cluster of two Google Cloud _c2-standard-4_ machines. Unit-demand experiments took approximately two days to complete, GSVM experiments approximately four days, and LSVM experiments approximately eight days.
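As a final illustration of program (14): for markets small enough to enumerate, assigning each good to one buyer (or to no buyer) ranges over exactly the feasible allocations of (14), since each buyer then receives a single, possibly empty, bundle. A brute-force sketch with valuations given as callables (our experiments used the PuLP/COIN-OR solvers instead):

```python
from itertools import product

def brute_force_welfare(valuations, goods):
    # valuations: list of functions frozenset -> value, one per buyer.
    # Assign each good to a buyer index, or to len(valuations) ("no one").
    n = len(valuations)
    best = float("-inf")
    for owner in product(range(n + 1), repeat=len(goods)):
        bundles = [frozenset(g for g, o in zip(goods, owner) if o == i)
                   for i in range(n)]
        best = max(best, sum(v(S) for v, S in zip(valuations, bundles)))
    return best
```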
# Reconstructing missing seismic data using Deep Learning Dieuwertje Kuijpers1, Ivan Vasconcelos1 and Patrick Putzky2 1Utrecht University, Utrecht, the Netherlands 2AMLab, University of Amsterdam, Amsterdam, the Netherlands ## Abstract In current seismic acquisition practice, there is an increasing drive for data to be acquired sparsely in space, and often in irregular geometry. These surveys can trade off subsurface information for efficiency/cost, creating a problem of “missing seismic data” that can greatly hinder subsequent seismic processing and interpretation. Reconstruction of regularly-sampled dense data from highly-sparse, irregular data can therefore aid in the processing and interpretation of these far sparser, more efficient seismic surveys. Here, we compare two methods to solve the reconstruction problem in both the space-time and wavenumber-frequency domains. Both of these methods require an operator that maps sparse to dense data: this operator is generally unknown, being the inverse of a known data sampling operator. As such, our deterministic inversion is efficiently solved by least-squares optimisation using a numerically efficient Python-based linear operator representation. An alternative is the probabilistic approach, which uses deep learning. Here, two specific deep learning architectures are benchmarked against each other and against the deterministic approach: a Recurrent Inference Machine (RIM), which is designed specifically to solve inverse problems given known forward operators, and the U-Net, originally designed for image segmentation tasks. The trained deep learning networks are capable of successfully mapping sparse to dense seismic data for a range of different datasets and decimation percentages, thereby significantly reducing spatial aliasing in the wavenumber-frequency domain. The deterministic inversion, by contrast, could not reconstruct the missing data in the space-time domain and thus did not reduce the undesired spatial aliasing.
Our results show that the application of Deep Learning for seismic reconstruction is promising, but the treatment of large-volume, multi-component seismic datasets will require dedicated learning architectures not yet realisable with existing tools. ## 1 \- Introduction Efficient and cost-effective data acquisition is, together with streamlined data processing, of crucial importance in seismic imaging, from the exploration to the global scale. In the example of exploration surveys, acquisition is designed to sample data at a set Nyquist rate (or higher), driving costs to be very high and the duration to often be very long. In principle, a more beneficial acquisition model would be to use fewer sources and/or receivers, while still maintaining the same information content as a more conventional high-density, regularly-sampled setup. However, on its own, sparse, irregular acquisition results in missing data/information due to sparser sampling (i.e. sub-Nyquist sampling). Missing seismic data, whether due to sparser sampling or irregularities, can greatly hinder accurate processing and interpretation. For example, Peng and Vasconcelos, (2019) find that missing seismic data in either the source or receiver domain, or both, can lead to different types of artifacts and data gaps after using the sparse datasets for Marchenko methods. The reconstruction of dense, regularly sampled wavefields from highly sparse, (ir)regular data can therefore play a critical role in achieving better processing and interpretation from far sparser, more efficient seismic surveys. Several methods exist to solve this reconstruction problem. These methods can broadly be divided into two groups: deterministic and probabilistic. Most often the reconstruction problem is solved using deterministic, iterative linear solvers.
Ruan and Vasconcelos, (2019), for example, find that the sampling rate in seismic acquisition can be decimated further than the Nyquist rate by means of preconditioning and compressive sensing techniques in the presence of acquired data gradients. Using a multi-component reconstruction theorem that includes the acquired data, the first- and second-order spatial derivatives plus the cross-derivatives in the shot and receiver domains, Ruan, (2019) can successfully reconstruct regularly decimated 3D seismic data with one-third of the original Nyquist rate using a gradient-based, sparsity-promoting solver. When using an irregular sampling scheme as proposed by Hennenfent and Herrmann, (2008), Ruan, (2019) can decimate the sample rate even further. One major requirement for this method is the need for spatial derivatives of the data in the inversion: in practice, this would mean that data are acquired far more sparsely, but each data station contains many channels due to the multi-component nature of gradient data. For example, in offshore seismic, derivatives of the wavefield can be measured if particle-velocity measurements are available, something that is often not the case for vintage seismic data and that also presents technological challenges in practice, such as the engineering of source-side derivatives, or higher-order derivatives on either the source or receiver side. The interest in machine learning solutions to inverse (seismic) problems is growing; the reconstruction problem provides an attractive application because the underlying forward operators are computationally inexpensive. For deterministic approaches, however, achieving accurate solutions to data reconstruction can be quite challenging. Recently, Siahkoohi et al., (2018) addressed the use of generative adversarial networks (GANs) to learn a map from sparsely to fully sampled seismic data.
With the use of their trained network, Siahkoohi et al., (2018) are able to reconstruct 90 percent of the missing seismic data in the frequency domain under different types of frequency-domain decimation, as long as at least 5 percent of the data in that particular frequency slice was densely sampled. Seismic acquisition, however, is often done in the spatial domain, and thus the decimation also takes place in the spatial domain. This research will focus on reconstructing dense seismic wavefields from spatially decimated data using deep learning, by means of the so-called Recurrent Inference Machine (RIM) deep learning architecture designed by Putzky and Welling, (2017), thereby testing the potential of using RIMs in seismic processing problems where determining a complex inverse map to a known forward problem is the main goal. The RIM will be benchmarked against the U-Net deep learning architecture (originally designed for biomedical image segmentation; Ronneberger et al., (2015)) and will be compared to deterministic linear iterative methods. Deep learning mainly consists of two stages. The first stage is the training stage, in which the neural networks have access to an input and expected output. Based on the input, the network has to make a prediction that should be as close as possible to the expected output. The misfit between the prediction and expected output can be backpropagated through the network, thereby updating its internal state in order to make a better prediction for the next example. After a period of training, the neural nets enter the inference stage. In this stage the network only has access to input data that it has never seen before. From this input the network should try to make a prediction. Here, the reconstruction problem will be studied and the neural networks will estimate a map between the decimated and dense seismic wavefields, so that deep learning can be seen as an approach to solving inverse problems.
The reconstruction problem will be studied mostly in the time-space domain, as most seismic data are acquired in this domain. In the frequency-wavenumber domain the reconstruction problem becomes a dealiasing problem, since sub-Nyquist spatial sampling leads to spatial aliasing. After studying the approach the two methods take in solving inverse problems, the reconstruction problem will first be studied in 2D, where decimation (with different patterns and percentages) only takes place along the receiver dimension. As a final test, all studied methods will aim at reconstructing a highly decimated 3D Ocean Turbulence dataset that is decimated not just along the receiver dimension but also along the source dimension, resulting in over 90 % missing data to be reconstructed. The next section gives the reader a general introduction to machine learning; a deeper description of the specific architectures used here will be given in later sections. ## 2 \- A brief introduction to Machine Learning In this section, a short introduction to machine learning is given to help the reader understand the techniques used in this research. Because the machine learning community often uses specific wording that will also be used in this study, a short glossary is given at the end of this section. The introduction and glossary are far from complete, as they only serve to describe the basic concepts. Two recommended references for a more detailed description or a more hands-on experience are the book on Deep Learning by Goodfellow et al., (2016) and the online course on PyTorch via Udacity, (2019). A machine learning algorithm is able to learn from example data, with learning being described as increased performance over repetitive execution of a given task.
At its mathematical core, machine learning can be seen as a form of applied statistics, since computer models are used to statistically estimate an unknown, often complicated function that maps a given input to a given output. Deep learning is a form of machine learning in which a deep (multi-layer) neural network is the learning computer model. The network is a numerical representation of a series of computations that process information. With every pass through a layer, mathematical computations are applied to the input data, thereby mapping part of the input data to a new representation. The visible input and output of a machine learning network can have very different forms, such as images, text or classification labels. All layers in between hold hidden representations of the data that are invisible to the user. The layers in a neural network consist of nodes; each node applies a mathematical function to part of the input data. The output of each node has a different importance in the layer’s representation of the data, and therefore each node has a corresponding weight. When building a machine learning model, the weights have an initial setup that is not optimal in mapping the input to output. Thus, for a model that should generalize well to different and highly variable data, it is important to find the optimal set of weights (high weights corresponding to more important features) that represent a map between the data in a so-called training dataset. The network, mathematically represented by $g$, defines a parametric model between the output $\mathbf{\tilde{x}}$ and input $\mathbf{y}$ as set by the weights, such that $\mathbf{\tilde{x}}\ =\ g(\mathbf{y},\mathbf{w})$. Training consists of estimating the network weights $\mathbf{w}$ by minimization of a specific loss function suitable for the problem.
Training data consist of a large set of examples for which both $\mathbf{x}$ and $\mathbf{y}$ are known, such that the difference (loss) between the model output $\mathbf{\tilde{x}}$ (generated by the network from input $\mathbf{y}$; indicated by a tilde) and the known $\mathbf{x}$ can be minimized. Minimization of the loss by altering the weights during training is achieved with the help of an optimizer that performs iterative optimisation using stochastic gradient descent. The training stage is followed by the inference stage, during which the trained network is deployed for testing. In this phase, never-before-seen data $\mathbf{y}$ can be used as input and the model will map these to a new output representation $\mathbf{\tilde{x}}$. A deep learning model is built by selecting an architecture suited for the specific problem, a loss function and an optimizer. Many different combinations of these three exist, and here we have chosen to use convolutional networks to solve a regression problem. The simplest form of a regression problem consists of finding the parameters $a$ and $b$ fitting a linear trend ($y\ =ax+b$) to (training) data in Cartesian space. In this study the problem is more complex: the convolutional networks will take corrupted (decimated) 2D seismic gathers as input and should map these to an output consisting of 2D reconstructed (dense) gathers. Convolutional networks (CNNs) are capable of taking N-dimensional images as input without having to transform these into 1-dimensional vectors (a very common technique in machine learning), thereby more successfully capturing the spatial and temporal dependencies in the data. In CNNs, 2D convolutional kernels are applied to the input data; therefore, the weights in a CNN correspond to kernel weights that extract higher-level features from the input.
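The simple regression example mentioned above (fitting $y=ax+b$ to training data by minimizing a loss with gradient descent) can be sketched in a few lines. This is our own minimal illustration with made-up data and a hand-picked learning rate, not code from the study:

```python
import numpy as np

# Toy training data for the linear trend y = a*x + b (true a=2.0, b=0.5).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = 2.0 * x + 0.5 + 0.01 * rng.normal(size=200)

a, b = 0.0, 0.0          # initial (non-optimal) "weights"
lr = 0.1                 # learning rate (step size of gradient descent)
for _ in range(2000):    # training loop: repeatedly reduce the loss
    resid = a * x + b - y
    grad_a = 2.0 * np.mean(resid * x)   # d(mean squared loss)/da
    grad_b = 2.0 * np.mean(resid)       # d(mean squared loss)/db
    a -= lr * grad_a
    b -= lr * grad_b
# a and b now approximate the true trend parameters 2.0 and 0.5.
```

The same loop structure (forward pass, loss gradient, weight update) underlies the much larger convolutional networks described in the text.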
The main goal in deep learning is thus to find a “different” (the meaning of different is unique for each problem) representation of the input data after a forward pass through the model. The mapping function that takes input to output is then represented by the network weights. The problem of mapping corrupted to reconstructed seismic gathers can be cast as an inverse problem (forward problem: $\mathbf{y}\ =\ \mathbf{A}\mathbf{x}$) where the task is to find $\mathbf{x}$ (reconstructed gather) given $\mathbf{y}$ (corrupted gather) and the forward operator $\mathbf{A}$. In this example the weights of the neural network, representing the mapping function, should represent the inverse of the forward operator, mapping $\mathbf{y}$ back to $\mathbf{x}$. Therefore, deep learning will be used in this study as a probabilistic approach to inverse problems. After the machine learning glossary, the next sections will describe the exact deep learning architectures used in this study and how each of them approaches inverse problems. ### Machine Learning Glossary * • Activation function \- the function applied to the input data in a node, activating that node or transforming input to output. Here, the Rectified Linear Unit (ReLU) activation ($\text{ReLU}(x)\ =\ \text{max}(0,x)$) is used. * • Batch \- the set of data(-patches) that is used for one update step during training of the network. * • Channels / Features \- features are the properties or characteristic phenomena of the input data that are extracted in a layer. Channels and features refer to the same dimension in the data (e.g. a grayscale image consists of 1 channel and a color image of 3 for RGB). * • Dropout \- layer that randomly sets some nodes to zero during the update step in training, which can help prevent overfitting. * • Epoch \- the time the network needs to see all training data once.
* • Gated Recurrent Unit (GRU) \- gating mechanism in recurrent neural networks that has feedback connections and can process entire data sequences at once. The cell regulates information flow through the network with the use of a forget and memory gate. * • Learning rate \- parameter that controls the step size in stochastic gradient descent; how much the weights are adjusted with respect to the loss gradient. * • Loss \- cost function that measures the misfit between the network’s predictions and the expected results; the loss should be minimized during the training phase. * • Optimizer \- the algorithm that is used to update the weights and/or learning rate in order to reduce the loss during the training phase. * • Overfitting \- when an algorithm is overfitting the training data, the model memorizes the output for each input instead of learning. The model therefore generalizes poorly to unseen datasets during the inference stage. * • Training / Inference \- the training phase is the phase in which a machine learning algorithm is built; inference uses this trained model to make a prediction. ## 3 \- The Reconstruction problem In sparse seismic wavefield acquisition, the reconstruction problem can be posed as a general linear problem (3.1): $\mathbf{y}\ =\ \mathbf{R}\ \mathbf{x}$ (3.1) in which $\mathbf{y}$ is the decimated (corrupted) wavefield and $\mathbf{x}$ the dense wavefield. $\mathbf{R}$ is the restriction operator that can be assembled from the characteristics of the acquisition setup (e.g. malfunctioning receivers or missing shots). $\mathbf{R}$ represents a mask that extracts a subset of data from the dense wavefield into the decimated wavefield. Equation (3.1) is known as the forward problem that generates the observed data. The inverse problem consists of reconstructing the dense wavefield $\mathbf{x}$ from the observed decimated wavefield $\mathbf{y}$ using an inverse of the restriction operator.
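As an illustration of the forward problem (3.1), the restriction operator can be written as an explicit 0/1 matrix acting on a small synthetic gather. The sizes and the factor-2 decimation pattern below are our own toy choices, not the paper's acquisition geometry:

```python
import numpy as np

# Dense wavefield x: a small synthetic gather of shape (receivers, time).
n_rec, n_t = 8, 16
x = np.random.default_rng(1).normal(size=(n_rec, n_t))

# Regular factor-2 decimation: keep every other receiver/trace.
keep = np.arange(0, n_rec, 2)

# Explicit matrix form of the restriction operator R in y = R x.
R = np.zeros((len(keep), n_rec))
R[np.arange(len(keep)), keep] = 1.0

y = R @ x  # forward problem (3.1): the decimated (corrupted) wavefield
# R simply extracts the kept traces from the dense gather.
```

In practice the operator is never formed densely; frameworks such as Pylops represent it implicitly, but the action on the data is the same masking.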
From the Nyquist-Shannon sampling theorem it is known that the restriction operator in equation (3.1) has an exact inverse as long as the sample-rate criterion is satisfied. A main assumption in the Nyquist-Shannon sampling theorem is that of uniform sampling. In reality, however, irregularities in the acquired data can be caused by malfunctioning receivers or perturbations, leading to a varying receiver spacing or sample rate during acquisition. Irregular and/or far sparser sampling both result in ill-posedness of the inverse of equation (3.1). In these cases the inverse of the restriction operator can be approximated by two types of approaches: iterative deterministic or probabilistic inversion. In what follows, each densely sampled gather is represented by $\mathbf{x}$ and the decimated version by $\mathbf{y}$. The goal is to estimate a dense version of the data from the decimated data and the forward operator; this estimate is represented by $\mathbf{\tilde{x}}$ and should be as close to the original dense data $\mathbf{x}$ as possible. The seismic data can be decimated over a single source- or receiver-dimension, resulting in the reconstruction of missing traces in 2D seismic gathers, or decimated in both dimensions, resulting in a highly sparse 3D decimated dataset. ### Deterministic - Linear Solvers Deterministic methods aim at inverting equation (3.1) without explicitly using any probability theory on the parameters of the inversion. The most general solution to this inverse problem is the least-squares solution, to which possible regularization terms can be added. Minimizing the least-squares cost function yields the reconstructed dense wavefield $\mathbf{\tilde{x}}$ of equation (3.2). The linear system in equation (3.1) can be represented numerically using an efficient linear operator representation in the Python-based Pylops framework (Ravasi and Vasconcelos,, 2020).
Pylops-implemented least-squares optimisation can also be used to efficiently solve the inversion in equation (3.2). Least-squares optimisation uses the forward operators in the inversion and is therefore controlled by the physics of the restriction operator. $\tilde{\mathbf{x}}\ =\arg\min_{\mathbf{x}}||\mathbf{y}\ -\ \mathbf{R\ x}||^{2}=\ (\mathbf{R}^{T}\mathbf{R})^{-1}\mathbf{R}^{T}\mathbf{y}$ (3.2) ### Probabilistic - Deep Learning An alternative method to solve the inverse problem makes use of deep learning. The neural network (mathematically represented by $g_{\phi}$) is trained to represent an approximate inverse of the restriction operator, thereby mapping the decimated to the dense data. From now on $\phi$ will be used to represent the network’s parameters instead of the earlier introduced $\mathbf{w}$. This is because $\phi$ includes the weights but can also, since the models used are more complex than simple linear regression, include other trainable parameters such as a varying learning rate. The neural network is trained to minimize the mean squared cost function $\mathbf{J}$ (see equation (3.3)) with the use of an optimizer that performs gradient descent on this cost function and the model parameters. The main focus of this study lies on the Recurrent Inference Machine (RIM) as designed by Putzky and Welling, (2017), which will be benchmarked against a simpler network architecture: the U-Net, as first designed by Ronneberger et al., (2015). The numerical code used for the U-Net is based on that of Zbontar et al., (2018) for their fastMRI challenge. Both existing code bases for the RIM and U-Net are adjusted for the specific goal of reconstructing missing seismic data.
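A minimal sketch of the least-squares inversion (3.2), using SciPy's `lsqr` in place of the Pylops solver used in the study; the operator and data here are toy stand-ins of our own. It also illustrates why plain least squares struggles with a pure restriction operator: the minimum-norm solution leaves the missing traces at zero.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# Toy dense gather and a regular factor-2 receiver decimation.
n_rec, n_t = 8, 16
rng = np.random.default_rng(2)
x_dense = rng.normal(size=(n_rec, n_t))
keep = np.arange(0, n_rec, 2)

def fwd(v):   # R acting on a flattened gather: extract the kept traces
    return v.reshape(n_rec, n_t)[keep].ravel()

def adj(w):   # R^T: scatter the kept traces back onto the dense grid
    out = np.zeros((n_rec, n_t))
    out[keep] = w.reshape(len(keep), n_t)
    return out.ravel()

R = LinearOperator((len(keep) * n_t, n_rec * n_t),
                   matvec=fwd, rmatvec=adj, dtype=float)
y = fwd(x_dense.ravel())                    # forward problem y = R x

x_rec = lsqr(R, y)[0].reshape(n_rec, n_t)   # least-squares solution (3.2)
# Kept traces are recovered exactly, but the missing traces stay zero:
# least squares alone cannot "fill in" data for a pure restriction R.
```

This matches the later observation that deterministic inversion of a restriction operator cannot reconstruct the missing data in the space-time domain without additional constraints or a prior.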
$\mathbf{J}\ =\ ||\mathbf{x}-\mathbf{\tilde{x}}||^{2}\ =\ ||\mathbf{x}-g_{\phi}(\mathbf{y})||^{2}$ (3.3) #### Probabilistic Formulation The parameters in a neural network should represent an unknown map between an input $\mathbf{y}$ and an output $\mathbf{x}$ that is supposed to be an inverse to a known forward operator (linear or non-linear) mapping $\mathbf{x}$ to $\mathbf{y}$. This means that the goal of solving inverse problems using deep learning comes down to creating a function estimator of the actual inverse operator. The neural network parameters are trained to represent this function estimator; the belief that these parameters ($\theta$) can represent the inverse operator can be expressed using probabilities. Maximum probability corresponds to a 100 % capability of the network parameters to represent the desired inverse operator. Different approaches can be taken to maximize this probability (refer to Chapter 5 of Goodfellow et al., (2016)). Here, the inverse problem is approached by defining a likelihood and a prior and optimizing the maximum a posteriori (MAP) solution in the following equation, $\tilde{\mathbf{x}}\ =\ \arg\max_{\mathbf{x}}\ \log p(\mathbf{y}|\mathbf{x};\theta)\ +\ \log p_{\theta}(\mathbf{x})\,,$ (3.4) such that the iterative approach to MAP inference represents the iterative approach to inversion (an optimization problem). In equation (3.4), the first term is a conditional probability (log-likelihood term) under network parameters $\theta$ that represents the forward problem, while the latter is a parametric prior over $\mathbf{x}$ that reduces the ill-posedness of the inverse problem by including, for example, a sparsity-promoting term (Putzky and Welling,, 2017). Maximizing the conditional log-likelihood term is an attempt to make the network parameters match the mapping function between input and output as set by the training data.
Ideally this would match all data used during inference; however, these data are not directly available and therefore that probability distribution remains unknown. The conditional log-likelihood term is the basis for supervised learning, in which $\mathbf{y}$ is predicted given $\mathbf{x}$ and the model parameters. The maximum a posteriori approach also includes the prior on the dense wavefield, thereby allowing the network parameters (and therefore the estimate of the inverse function) to be affected by prior beliefs. The prior distribution is also related to the training data. In the case of seismic data, the prior space can include information on spatial and temporal signal distribution, curvature and sparsity. The next sections will describe the two specific architectures used in this study and how each of them approximates the inverse problem. ## 4 \- The Recurrent Inference Machine By design, a Recurrent Inference Machine (Putzky and Welling,, 2017), or RIM, uses a recurrent neural network (RNN) as a recurrent approach to MAP inference. Putzky and Welling, (2017) stepped away from the usual deep learning approach, in which the prior and log-likelihood are learned separately, and instead set up an RNN that jointly learns inference and a prior. The RIM uses the current reconstruction ($\mathbf{\tilde{x}}_{t}$), a hidden memory state ($\mathbf{s}$) and the gradient of the log-likelihood term ($\mathbf{\nabla}\log p(\mathbf{y}|\mathbf{x};\theta)$) to infer a better reconstruction ($\mathbf{\tilde{x}}_{t+1}$) over a fixed number of steps in the recurrent part of the RIM.
Each consecutive estimate $\mathbf{x}$ of the recurrent part in the RIM can, in its simplest form, be obtained through a recursive update function $\mathbf{\tilde{x}}_{t+1}\ =\ \mathbf{\tilde{x}}_{t}\ +\gamma_{t}\nabla\big{(}\log p(\mathbf{y}|\mathbf{x})+\log p_{\theta}(\mathbf{x})\big{)}\,.$ (4.1) Using Bayes’ rule and generalization to the RIM’s formulation, this results in the recursive update equation (4.2). The learnable parameters $\phi$ in the RIM (represented by $g_{\phi}$ in (3.3)) now include the network and prior parameters $\theta$ and the learning rate. For a more detailed description of RIMs and the derivation from equation (4.1) to (4.2), the reader is referred to Putzky and Welling, (2017). For now it suffices to know that the inputs to a RIM consist of a memory state, the gradient of the likelihood term (as given by the forward operator $\mathbf{R}$) and the current reconstruction. The gradient of the likelihood term for general inverse problems where $\mathbf{y}\ =\ \mathbf{Ax}$ can be written as $\nabla\log p(\mathbf{y}|\mathbf{x})\ =\ \mathbf{A}^{T}(\mathbf{y}-\mathbf{Ax})$. Because the forward operator $\mathbf{R}$ is self-adjoint, the gradient can here be written as $\nabla\log p(\mathbf{y}|\mathbf{x})\ =\ \mathbf{R}(\mathbf{y}-\mathbf{R}\mathbf{x})$. $\mathbf{x}_{t+1}^{RIM}\ =\ \ \mathbf{x}_{t}^{RIM}+\ g_{\phi}\ \big{(}\nabla\log p(\mathbf{y}|\mathbf{x})(\mathbf{x}_{t}^{RIM})\ ,\ \mathbf{x}_{t}^{RIM}\ ,\ \mathbf{s}_{t+1}\big{)}\,.$ (4.2) ### RIM architecture The RIM can be seen as a series of repeating neural nets configured in a single cell representing the iterative approach to inverse problems (indicated by subscripts $t$ and $t+1$ in figure 1). The RIM cell consists of a Gated Recurrent Unit (GRU) and convolutional layers. The flow through a cell is intrinsically repeated for a fixed number of steps (here chosen to be 10).
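The recursive update (4.1) can be illustrated with the prior term dropped and a fixed step size standing in for the learned update $g_{\phi}$. This toy sketch (our own, not the RIM code) shows that the likelihood gradient alone only restores the observed traces, which is exactly why the jointly learned prior/network is needed to fill the gaps:

```python
import numpy as np

# Toy setup: R as a self-adjoint trace mask, y the decimated data.
rng = np.random.default_rng(3)
n_rec, n_t = 8, 16
x_true = rng.normal(size=(n_rec, n_t))
mask = (np.arange(n_rec) % 2 == 0)[:, None]  # keep every other trace
y = mask * x_true                            # y = R x_true

x_t = np.zeros_like(x_true)                  # initial estimate
gamma = 1.0                                  # fixed step size
for _ in range(10):                          # fixed number of RIM steps
    grad = mask * (y - mask * x_t)           # gradient of log-likelihood
    x_t = x_t + gamma * grad                 # update (4.1) without prior

# Observed traces are restored exactly; missing traces stay zero.
# In the real RIM, g_phi (GRU + convolutions) replaces this plain
# gradient step and learns to propagate information into the gaps.
```

The learned update in equation (4.2) consumes this same gradient, together with the current estimate and a memory state, instead of applying it directly.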
Over these steps the network should improve its reconstruction, for which it uses an intrinsic loss function that compares the inference prediction with the expected outcome (known for all training data). For both the intrinsic and global loss in the RIM, the mean squared error is used (see equation (3.3)). Figure 1: RIM Architecture - An overview of the data flow through the RIM used in this project. Bold arrows are direct connections within a single timestep; dotted lines are recurrent connections passing information through to the next time step. Conv is short for convolution and $\mathbf{\nabla_{y|x_{t}}}$ for the gradient of the log-likelihood term. The different representations of the input data throughout the model are described in the main text. Figure adapted from Lønning et al., (2018). In figure 1, input image $\mathbf{y}$ is the decimated data. The forward operator generating this decimated data is applied to the current estimate of the RIM ($\mathbf{x}_{t}$) to generate the gradient of the log-likelihood term in the green cell. The gradient (indicated by $\mathbf{\nabla_{y|x_{t}}}$; short for $\mathbf{\nabla}\log p(\mathbf{y}|\mathbf{x})$) and the current estimate ($\mathbf{x}_{t}$) of the dense wavefield are concatenated over the channel dimension and form the input to the first convolutional layer, which is followed by a ReLU activation layer. The next layer is a GRU (gating mechanism) that determines what information in the hidden state ($\mathbf{s}_{t+1}^{1}$) is important and what can be forgotten for the next step. Another convolutional layer followed by ReLU activation and a GRU pass (with hidden state $\mathbf{s}_{t+1}^{2}$) follows before the final convolutional layer. The exact RIM architecture chosen here consists of three hidden convolutional layers, the first with kernel size 5x5 and the last two having size 3x3. Padded convolution is used to keep the image size constant throughout the whole network.
The output of the recurrent network is an update $\Delta\mathbf{x}_{t+1}$ that is added to the current estimate ($\mathbf{x}_{t}$) to form the new estimate ($\mathbf{x}_{t+1}$). Neural networks extract features from the input to learn about data characteristics: in the first two hidden layers, 64 features are extracted from the input that consists of two channels (the decimated data concatenated with the gradient of the log-likelihood term); the final output consists of a single channel, the grayscale reconstructed seismic gather $\mathbf{x}_{t+1}$, which becomes $\mathbf{x}_{t}$ in the next timestep. In total the RIM consists of just over 90,000 trainable parameters. ## 5 \- U-Net The U-Net is a very well-known deep learning architecture for image tasks, with the benefit of being relatively easy to implement and train. The U-Net consists of a contracting path, a bottleneck in the center and an expanding path. The two paths consist of a number of blocks in which convolutional operations are applied. The contracting path maps the input $\mathbf{y}$ to a hidden representation in the bottleneck layer, thereby compressing the input to a higher-level feature representation over the blocks. The expanding path transforms the hidden representation coming from the bottleneck layer into an estimate $\mathbf{\tilde{x}}$ of the dense data $\mathbf{x}$, thereby decreasing the number of features over the blocks while increasing the size of the data. Thus, the contracting path of the U-Net is trained such that it maps the corrupted input to a compact representation of the reconstructed data, and the expanding path is trained to map from this compact, hidden representation to the full reconstructed data. What is special about the U-Net is that the features from each contracting block are concatenated to the features from the expansion block at the same level. Concatenation ensures that the learned features in the contracting path are used to build up the image in the expansion path.
In contrast to the RIM, the U-Net has no knowledge of the forward operator that created the decimated data. This means that where the RIM is forced to follow the physics set by the restriction operator, the U-Net is not, and that is expected to sometimes lead to physically implausible results. Here, the same loss function and optimizer as for the RIM are used. Figure 2: U-Net Architecture - An overview of the data flow through the U-Net as used in this project; the different representations are described in the main text. The colours of the cells represent from which path the features come: blue for the contracting path, gray for the expanding path and green for the fully connected layers. Conv is short for convolution; the numbers above the cells stand for the number of features present in the representation of the data in that cell, the width of the cell for the number of features and the length for the size of the representation of the data. In the U-Net blocks, 2D max-pooling, bilinear upsampling and instance normalization are used. Pooling is a form of non-linear downsampling; the convolutional kernels output an image of the same dimensions as the input with a different number of features. Max pooling is used to reduce the size of the data between two blocks in the contracting path, thereby also reducing the required number of parameters (the more parameters, the more the network is prone to overfitting the training data), memory load and number of computations. The output from one block is reassembled into small windows from which only the maximum values are kept and assembled to form the input to the next block. Pooling is a valid operation in the reasoning behind U-Net because the exact location of a feature is less important than its relative position in the global image. In order to undo this downsampling process in the contracting path, bilinear upsampling is used in the expanding path.
In bilinear upsampling, linear interpolation is used to interpolate the missing data in a 2D grid. First, one of the dimensions is kept fixed and linear interpolation occurs in the other direction; the second step is vice versa. Each step is thus linear, but the total interpolation is non-linear in the sampled locations. Similar to the effect of data and feature normalization on network performance, instance normalization improves training by normalizing the data over the channel dimension. ### U-Net architecture The U-Net architecture used here consists of four pooling blocks that perform 3x3 convolutions in both the contracting and expanding path; no dropout is used in these blocks. In figure 2, the input to the contracting path (indicated in blue) consists of a seismic gather that is decimated in the spatial domain (the same $\mathbf{y}$ as in the RIM). In the first block, 64 features are extracted from the gather; this number doubles in each block of the contracting path (indicated by cell width) and reaches its maximum of 1024 features in the bottleneck layer (the rectangular area in figure 2). The size of the input image decreases by a factor 2 in both image dimensions per layer (indicated by the length of the cells). Over the four expanding blocks (gray in figure 2) the number of features is decreased to 64 again, and in the final two 1x1 convolutional layers (indicated in green in figure 2) this decreases to a single-feature image with the same size as the original input. A 1x1 convolutional layer decreases the number of features in the representations without a change in size. In total this U-Net consists of almost 13.5 million trainable parameters. Both the input ($\mathbf{y}$; the decimated data) and the output ($\mathbf{x}$; the reconstructed data) of the U-Net thus consist of a single-feature, single-channel seismic gather.
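The pooling and upsampling operations described above can be sketched in plain NumPy for a single feature map. This is a didactic illustration of 2x2 max pooling, factor-2 bilinear upsampling and skip concatenation only, not the PyTorch implementation used in the study:

```python
import numpy as np

def max_pool2d(a):
    """2x2 max pooling: keep the maximum of each 2x2 window, halving both dims."""
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def bilinear_upsample(a):
    """Factor-2 bilinear upsampling: linear interpolation per axis, one axis at a time."""
    h, w = a.shape
    rows = np.linspace(0, h - 1, 2 * h)
    cols = np.linspace(0, w - 1, 2 * w)
    # Interpolate along the row axis for every column, then along columns.
    tmp = np.array([np.interp(rows, np.arange(h), a[:, j]) for j in range(w)]).T
    return np.array([np.interp(cols, np.arange(w), tmp[i]) for i in range(2 * h)])

feat = np.random.default_rng(4).normal(size=(16, 32))  # one feature map
down = max_pool2d(feat)              # contracting path: 8 x 16
up = bilinear_upsample(down)         # expanding path: back to 16 x 32
skip = np.stack([feat, up])          # skip connection: concatenate over channels
```

Each interpolation step is linear, but chaining the two axes makes the overall map non-linear in the sampled locations, as noted in the text.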
The concatenation between the features from the contracting and expanding path is indicated by the gray horizontal arrows and the combined blue/gray cells. Figure 2 also justifies the name of the U-Net, as the input data indeed follow a U-like flow towards the output. ## 6 \- Methods The inverse problem, which consists of retrieving the dense seismic wavefields from the restriction operator and the decimated data, will be solved by two approaches: deterministic inversion and deep learning. Here, the main focus lies on the RIM and its potential to solve the reconstruction problem, as an example of an inverse problem for which the forward operator is known and computationally inexpensive. The reconstruction is benchmarked against the deterministic approach and the U-Net deep learning architecture. Even though the U-Net was originally designed for image segmentation (Ronneberger et al.,, 2015), it has lately been used for other tasks as well. For both deep learning networks, many different architectures and choices of activation functions, loss functions and training data are possible. The architectures used in this study have been described in previous sections; both networks are numerically implemented using the Python-based deep learning package PyTorch (Paszke et al.,, 2019). The most important step before deploying the neural networks in their inference stage is training the networks on seismic data representative of the data to be inferred. The trained models can then, during the inference stage, be compared to the deterministic inversion over several tasks. The least-squares optimisation in the deterministic approach is numerically implemented using Pylops, a Python-based package with an efficient linear operator representation (Ravasi and Vasconcelos,, 2020). ### Training networks using Seismic data Four different seismic datasets of different formats and sizes have been used for this study.
These include the Gulf of Suez (Gulf) field dataset, consisting of 128 shots, 128 receivers and 512 time samples; two more complex numerical subsalt datasets (Pdat & Rdat) with in total 202 shots, 201 receivers and 2001 time samples; and a 3D numerical ocean turbulence dataset (OTD) consisting of 300 shots, 301 receivers and 1081 time samples. A range of different networks are trained on different parts of these datasets. To generate synthetic sparser (decimated) training data for the neural networks, the originally densely sampled data (in the source, receiver and time domains) are decimated using five different decimation patterns in the receiver domain. To limit the possible effect of the selected training decimation patterns on the networks' capability to generalize to other decimation patterns, two jittered irregular (based on ideas of Hennenfent and Herrmann, 2008) and three regular (factor 2, 3 and 4) decimation patterns are applied. During training the decimation percentages vary between 50 and 80 %. It is well known that sufficient data is required to accurately train a neural network (e.g. Siahkoohi et al., 2018). For this study a single GPU (Nvidia GeForce GTX 1080 Ti; 11 GB memory) is used. To both increase the amount of training data and decrease the computational memory load on the single GPU, non-overlapping patches of 32 traces by 64 time samples are extracted from all shot gathers. The patches are decimated using five different masks, resulting in 5 times as many decimated input wavefields as there are dense wavefields. Windowing band-limits the original signal, so the full frequency content of the original signal is no longer present in the windowed signal. In addition, edge effects can introduce undesired peaks in the frequency spectrum related to smaller-scale structures. To reduce these effects, a 2D Tukey taper (with fraction 0.3) is applied to the windowed gathers.
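The patching and tapering step can be sketched as follows, with a hand-rolled Tukey window (equivalent in shape to `scipy.signal.windows.tukey`); the gather size matches the Gulf dataset, but the array here is a synthetic stand-in:

```python
import numpy as np

def tukey(n, alpha=0.3):
    """Cosine-tapered (Tukey) window with taper fraction `alpha`."""
    w = np.ones(n)
    width = int(np.floor(alpha * (n - 1) / 2.0))
    t = np.arange(width + 1) / (alpha * (n - 1) / 2.0)
    edge = 0.5 * (1 + np.cos(np.pi * (t - 1)))   # rises from 0 to 1
    w[: width + 1] = edge
    w[n - width - 1:] = edge[::-1]
    return w

def tapered_patches(gather, ptr=32, pts=64, alpha=0.3):
    """Extract non-overlapping (ptr traces x pts time samples) patches
    and apply a 2D Tukey taper to each, as described in the text."""
    taper = np.outer(tukey(ptr, alpha), tukey(pts, alpha))
    nr, nt = gather.shape
    return [taper * gather[i: i + ptr, j: j + pts]
            for i in range(0, nr - ptr + 1, ptr)
            for j in range(0, nt - pts + 1, pts)]

gather = np.random.default_rng(0).standard_normal((128, 512))  # Gulf-sized
patches = tapered_patches(gather)   # 4 x 8 = 32 patches per gather
```

With five decimation masks applied on top of this, each dense patch yields five decimated training inputs, as stated above.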
This space-time domain multiplication of the windowed data with the Tukey taper corresponds to a smoothing convolution in the frequency-wavenumber domain, which attempts to diminish the undesired effects introduced by space-time windowing. During inference the seismic gathers will not be windowed, and tapering is therefore only used in the training stage. Note that the network thus need not be trained on input data of the same size as used for inference.

#### Prior space sampling

To make the best possible predictions for unseen data during the inference stage, the trained deep learning algorithms require the prior space inferred from the training data to be an accurate description of the space that the networks have to infer. In the case of reconstructing seismic data, it is important for the training data to have similar slope variation, curvature and types of reflections as the data to be inferred. In addition, the bandwidth of the reconstruction plays an important role: the finer the temporal and spatial structures in the data to be inferred, the broader the bandwidth of the training data should be. From later results it will become clear that having an idea of the decimation percentage in the data to be reconstructed can improve the network's predictions. This is because the network's prediction quality starts to decrease at the higher end of the range of decimation percentages present in the prior. It is therefore important to generate synthetic training data with high decimation percentages if such data should be reconstructed during inference. Figure 3 illustrates this effect: if the left panel (Pdat; single-shot salt data) were the goal of inference, it would be important to include similar structures and properties in the training data. The four different datasets used in this study have different complexities.
The Gulf of Suez dataset (Gulf) has little structural variation but includes subsurface velocity variations, resulting in hyperbolas centered around the source location. The ocean turbulence dataset (OTD) is the complete opposite: the ocean layers have very little velocity variation but high structural variation (turbulence), so this dataset includes many different diffractions and reflections that can be off-centered and interfering. The Rdat salt dataset is a synthetic dataset that includes all of the previously mentioned properties. All of these structures can be found in the single-shot Pdat salt dataset; this data is however generated from a source within the medium and therefore differs from all other datasets, which are generated by sources at the surface.

Figure 3: Illustration of the importance of a representative prior space in the training data. The data to be inferred, on the left, is a complex dataset consisting of many slope variations (blue), variation in the scale of structures (bandwidth; green) and a combination of diffractions and reflections (orange). The training data should therefore contain a combination of the properties to be inferred by the network. The different datasets have different properties, as explained in the main text. From left to right the shot gathers shown are from Pdat (shot 1), Rdat (156), Gulf (47) and OTD (145).

#### General deep learning parameters

Both networks make use of the Adam optimizer (Kingma and Ba, 2014) with a weight decay factor of 1e-8 and a gradient norm of 0.1. The initial learning rate is set to 1e-4 and can be altered by the optimizer. The networks are subject to the same mean squared loss function and use the Rectified Linear Unit (ReLU) activation function. During training, batches of 32 images are made over seismic shot gathers in windows of size 32x64.
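The training configuration described above could be set up in PyTorch roughly as follows. The tiny convolutional model is a hypothetical stand-in for the U-Net/RIM, and "gradient norm 0.1" is interpreted here as gradient-norm clipping, which is an assumption:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the U-Net/RIM; the real architectures are
# described in the architecture sections.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))

# Adam with the reported hyperparameters: lr 1e-4, weight decay 1e-8.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-8)
loss_fn = nn.MSELoss()

# One batch: 32 patches of 32 traces x 64 time samples (random stand-ins).
y = torch.randn(32, 1, 32, 64)   # decimated input gathers
x = torch.randn(32, 1, 32, 64)   # dense target gathers

optimizer.zero_grad()
loss = loss_fn(model(y), x)
loss.backward()
# "Gradient norm 0.1" interpreted as gradient-norm clipping (assumption).
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
optimizer.step()
```

A full run would iterate this over all batches for the 40 epochs mentioned below.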
After the training stage, dense wavefields can be predicted for single decimated seismic gathers of varying sizes (these do not have to equal the training data size). All models are trained for 40 epochs, during which the loss is monitored using Tensorboard (Martinez, 2016). The same decimation percentages used to decimate the training data for the RIM are used for the U-Net. Some machine learning architectures can be very sensitive to the scale of the input data. Scaling the input data is known to have a positive effect on network performance, as it helps against the vanishing gradient problem that often occurs during backpropagation of the misfit (e.g. Ioffe and Szegedy, 2015; Dai and Heckel, 2019). The variety in amplitude and complexity of the different seismic datasets is high; scaling is therefore applied to reduce this variance and improve training. Four different types of scaling are compared: normalisation to the range [-1, +1], normalisation by the maximum absolute amplitude, standardisation (zero mean, unit standard deviation) and no scaling of the original data.

### Reconstruction approach

During both the training and inference stages of the deep learning approach, a single decimated 2D seismic gather is used as input. During the inference stage, the 2D decimated wavefields of unseen data are mapped to dense reconstructions. The same synthetically generated decimated gathers are used to perform a deterministic inversion with the help of Pylops' least squares optimisation over 1000 iterations. The inference and inversion results will be compared over two tasks: 2D seismic gather reconstruction and highly decimated 3D reconstruction. Unlike the deep learning networks, which can only take single 2D gathers as input, the deterministic approach can invert the problem for any N-dimensional decimated data.
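A minimal NumPy/SciPy analogue of the unregularized least squares reconstruction can be sketched as follows (the study itself uses Pylops linear operators; the sizes and data here are toy assumptions):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
nr, nt = 16, 32                          # traces x time samples (toy sizes)
dense = rng.standard_normal((nr, nt))    # stand-in for the dense wavefield

keep = np.sort(rng.choice(nr, size=nr // 2, replace=False))  # 50 % decimation

def fwd(x):
    """Restriction operator: sample only the kept traces."""
    return x.reshape(nr, nt)[keep].ravel()

def adj(y):
    """Adjoint: place the samples back, zeros on the missing traces."""
    out = np.zeros((nr, nt))
    out[keep] = y.reshape(len(keep), nt)
    return out.ravel()

R = LinearOperator((len(keep) * nt, nr * nt), matvec=fwd, rmatvec=adj,
                   dtype=float)
y = R.matvec(dense.ravel())              # decimated data
xinv = lsqr(R, y, iter_lim=1000)[0].reshape(nr, nt)
```

Without regularization the minimum-norm solution simply copies back the sampled traces and leaves the missing ones at zero, which matches the break-down of the deterministic inversion reported in the results.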
In addition, it is known from compressive sensing techniques that far sparser data can be reconstructed by inversion with the help of derivatives of the decimated data (e.g. Ruan, 2019). To test the potential of the neural networks (specifically trained to perform 2D reconstruction) on more complex, highly sparse 3D data decimated over both the source and receiver domains, the 3D reconstruction problem is split into two 2D problems. First, all shot gathers are reconstructed; after sorting the data to the common receiver domain, inference is applied again to the receiver gathers to reconstruct the rest of the missing data. This two-step approach will be compared to least squares optimisation using the first- and second-order derivatives of the ocean turbulence data as well as the cross-derivatives in the source and receiver domains. The ocean turbulence dataset is a seismic dataset generated from a synthetic 2D model, as described in more detail by Ruan (2019). All (cross-)derivatives are created synthetically with the use of Pylops' linear operators and are decimated as well, to simulate the effect of measuring these derivatives in the field with the use of streamers.

#### Evaluation metrics

The different results will be compared visually in both the space-time domain (data reconstruction) and the wavenumber-frequency domain (aliasing-dealiasing problem). To quantitatively compare the reconstruction qualities, which are scaled and created differently, two different metrics are used. A common evaluation metric in inversion techniques is the (normalized) root mean squared error; in image reconstruction, however, the structural similarity index is more common. The two metrics focus on different aspects of the reconstruction and are here used jointly to compare the performance of inversion and inference.
The root mean squared error (RMSE) measures the per-pixel amplitude difference between the reconstructed and reference image, thereby representing the Euclidean distance between the two images. The RMSE (see equation (6.1)) is very easy to implement, as the mean squared error is already used as the loss function in the RIM and U-Net. However, RMSE cannot capture overall image structure because the comparison is made per pixel. The Structural Similarity Index (SSIM; Ndajah et al., 2010), on the other hand, uses the structural properties of an image and can be computed at different local patches of the image data with the use of a sliding window. SSIM is used here as defined in equation (6.2), in which the average pixel intensities ($\mu$), their variance ($\sigma^{2}$) and two stabilizing factors ($c$) are used to calculate the structural similarity between two seismic gathers.

$\displaystyle\text{RMSE}(\tilde{x},x)\ $ $\displaystyle=\ \sqrt{||\tilde{x}-x||_{2}^{2}}$ (6.1) $\displaystyle\text{SSIM}(\tilde{x},x)\ $ $\displaystyle=\ \frac{(2\mu_{\tilde{x}}\mu_{x}+c_{1})(2\sigma_{\tilde{x}}\sigma_{x}+c_{2})}{(\mu_{\tilde{x}}^{2}+\mu_{x}^{2}+c_{1})(\sigma_{\tilde{x}}^{2}+\sigma_{x}^{2}+c_{2})}$ (6.2)

## 7 \- Results

Comparison of all trained models revealed that the networks trained on normalized (by maximum) data performed best. Scaling the data proved necessary to obtain a model that generalizes well. Normalization by the maximum absolute value results in scaled data without altering the physics of the wavefield, something that is no longer true when standardizing the data or normalizing to a custom range. Applying the Tukey taper to the patched data proved to decrease the effect of the undesired edge effects (present in the training data) on the inference results. Therefore, all deep learning results that follow are based on normalized, tapered models.
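The two metrics defined in equations (6.1) and (6.2) can be implemented directly as follows (evaluated globally here rather than over a sliding window, and with illustrative values assumed for the stabilizing constants):

```python
import numpy as np

def rmse(xr, x):
    """Equation (6.1): the l2 norm of the difference."""
    return np.sqrt(np.sum((xr - x) ** 2))

def ssim_global(xr, x, c1=1e-4, c2=9e-4):
    """Equation (6.2) with global means and standard deviations.
    c1 and c2 are illustrative stabilizing constants (assumed values)."""
    mu1, mu2 = xr.mean(), x.mean()
    s1, s2 = xr.std(), x.std()
    num = (2 * mu1 * mu2 + c1) * (2 * s1 * s2 + c2)
    den = (mu1 ** 2 + mu2 ** 2 + c1) * (s1 ** 2 + s2 ** 2 + c2)
    return num / den

a = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # toy reference gather
b = a + 0.1                                    # a shifted "reconstruction"
```

For identical images the SSIM is exactly 1 and the RMSE exactly 0; a windowed SSIM would average `ssim_global` over local patches, as described above.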
### Prior space sampling

As stated before, it is important for a neural network to generalize well to new data. The ability to generalize is determined by the prior space sampled from the training data. The generalization quality of the networks is also dependent on the amount of data used during training, because an incorrect ratio between the amount of training data and the number of network parameters can lead to under- or overfitting. First the effect of data complexity is studied, then the decimation patterns. Varying both of these factors also varies the amount of training data. Initially, the five different decimation patterns consisted of two irregular and three regular patterns, decimating the data between 50 and 80 %. Four different models are compared for both the U-Net and RIM, based on training data consisting of Gulf (of Suez) (every second shot), Rdat (every second shot of the largest salt dataset), GulfRdat (a combination of the former two) or ocean turbulence data (OTD; every second shot). The different decimation percentages, in combination with patching, result in a dataset size of just over 100,000 images for the last two models, just under 100,000 for Rdat alone, and only around 10,000 for Gulf. Of these images, 75 percent went into training and the other 25 percent is used for testing and validation.

#### Data complexity

Table 1, in combination with figure 4, illustrates the effect of data complexity on the potential of the networks to generalize to unseen data. From the average SSIM in table 1 (the arithmetic mean over all but the training data performance), it can be deduced that all models perform best on their training data and that the RIM overall performs slightly better than the U-Net. The RIM generalizes equally well across models trained on the different higher-complexity datasets, and more poorly when inference is performed on data with a higher complexity than seen during training.
This result is to be expected based on the data-complexity discussion given before. The U-Net, on the other hand, has more trouble generalizing to unseen datasets, especially when trained only on the ocean turbulence data, which consists of many diffractions and reflections but very little velocity variation (and therefore very little slope variation). Figure 4 illustrates this effect and also gives an indication of the misfit between the networks' inference results and the dense data to be inferred. The displayed shot gather comes from the single-shot salt dataset (Pdat), on which none of the models had been trained. This dataset is different from the rest because the data is generated from a source within the medium. The decimation is irregular, with a percentage of 62 % (within the range of decimation percentages in the training data). The 8 different reconstruction panels (B-E in figures 4(a) and 4(b)) are all very different. For example, both reconstructions made by the network trained on Gulf data only show many more small-scale structures on the left flank than are present in the dense data (see panels B in figure 4). In the RIM it is clear that many small-scale structures, most likely related to the curvature in the training data, overprint the desired curvature of the salt data. In the U-Net this effect is less pronounced, related to the fact that this network also underestimates the amplitude of the reconstruction. Both networks perform best when trained on a combination of the complex salt dataset and the Gulf of Suez dataset, which includes many velocity and slope variations.
|  | Gulf | | Gulf / Rdat | | Rdat | | OTD | |
|---|---|---|---|---|---|---|---|---|
|  | U-Net | RIM | U-Net | RIM | U-Net | RIM | U-Net | RIM |
| Gulf | 0.88 | 0.92 | 0.89 | 0.90 | 0.83 | 0.85 | 0.20 | 0.88 |
| Rdat | 0.77 | 0.77 | 0.84 | 0.87 | 0.82 | 0.86 | 0.11 | 0.80 |
| OTD | 0.64 | 0.78 | 0.75 | 0.83 | 0.75 | 0.81 | 0.21 | 0.91 |
| Pdat | 0.63 | 0.65 | 0.75 | 0.79 | 0.73 | 0.77 | 0.13 | 0.74 |
| Average | 0.68 | 0.73 | 0.75 | 0.81 | 0.77 | 0.81 | 0.15 | 0.81 |

Table 1: Average SSIM for inference using the trained models (columns) on the dense data to be inferred (rows). The SSIM values are computed as an arithmetic mean over the SSIM for 10 different decimation percentages (5 regular, 5 irregular) for 3 shot gathers in the data (if available: left quarter, center, right quarter), without taking the training data into the calculation (indicated by gray cells). All models perform best on the data they were trained on, and the RIM outperforms the U-Net in these tasks.

(a) U-Net (b) RIM

Figure 4: Reconstruction of a 62 % irregularly decimated shot gather from a complex single-shot salt dataset (normalized). The top bar represents the decimation pattern, in which black stands for missing traces. None of the models in panels B-E have been trained on this shot gather. Panel A represents the original shot gather that has to be inferred by the network; if inference were perfect, the reconstructions in panels B-E would be identical to this panel. The quality of generalization to unseen data is indicated by the SSIM in brackets and by the amplitude of the reconstruction.

#### Decimation patterns

The networks were initially trained on 5 different decimation masks, with decimation percentages ranging between 50 and 80 %. Of these patterns, 2 were irregular and 3 regular. When performing inference on data decimated between 25 and 82 percent, it is observed that the networks generalize better to lower percentages than towards the higher end of the range present in the prior space.
This means that the reconstruction quality decreases when the data is highly decimated. There is no clear indication that the networks perform better on irregularly or regularly decimated data, unlike the deterministic inversion, which tends to reconstruct irregularly sampled data better. Training the RIM on only two patterns (50 % regular and 84 % irregular) in the same prior space range resulted in similar observations. Using more patterns in the same range (50, 67, 75 and 80 % regular and 75, 81 and twice 84 % randomly jittered irregular) improved the reconstruction quality, but increased the training time as more training data became available. With fewer decimation patterns in the lower range, the RIM could generalize well to percentages just outside this training range, and at much higher percentages it predicted structures similar to the training data. These observations show that, to reconstruct well, it is important to train the network on the range of decimation percentages that has to be inferred. The U-Net performance varied strongly with changes in the training data, resulting in a performance drop when less data or fewer decimation percentages were used. Based on the previous discussion on prior space sampling, the networks trained on half of the Gulf of Suez and salt data for five different decimation percentages (previously called GulfRdat) are selected for further inference. This is a trade-off between training time and inference performance at different percentages. Training the RIM for 40 epochs using just over 100,000 training images on a single GPU took around 12 hours. The U-Net is not a recurrent neural network and requires less GPU memory; training this network on the same data and number of epochs took only 1.5 hours. Performing inference on a single full-size shot gather is almost instantaneous, whereas deterministic inversion can take minutes per gather before convergence is reached.
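The regular and jittered decimation masks can be generated as follows. This is a sketch in the spirit of the jittered sampling of Hennenfent and Herrmann (2008); the block-wise jittering is an assumption about the exact scheme used:

```python
import numpy as np

def regular_mask(n, factor):
    """Keep every `factor`-th trace: (1 - 1/factor) * 100 % decimation."""
    m = np.zeros(n, dtype=bool)
    m[::factor] = True
    return m

def jittered_mask(n, factor, rng):
    """Jittered irregular sampling: keep one randomly placed trace per
    block of `factor` traces, so the same overall decimation percentage
    as the regular mask but without a periodic pattern."""
    m = np.zeros(n, dtype=bool)
    for start in range(0, n, factor):
        block = np.arange(start, min(start + factor, n))
        m[rng.choice(block)] = True
    return m

rng = np.random.default_rng(42)
reg = regular_mask(128, 4)        # 75 % regular decimation
irr = jittered_mask(128, 4, rng)  # 75 % jittered irregular decimation
```

Both masks keep 32 of the 128 traces; the jittered version bounds the maximum gap between kept traces while breaking the periodicity that causes coherent aliases.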
### 2D gather reconstruction

The reconstruction results for a central shot gather from the ocean turbulence dataset are shown in figure 5. Panel A illustrates the temporal bandwidth and spatial variation present in the ocean turbulence dataset. The first arrivals have a strong amplitude; later arrivals are less pronounced but, because of normalization and filtering, still clearly visible. In this example, the shot gather is regularly decimated by a factor of 4, resulting in the decimated gather of panel B. Because of sub-Nyquist spatial sampling, spatial aliasing occurs in the frequency-wavenumber domain, as can be seen in the corresponding Fourier spectrum. Solving the deterministic inversion without regularization results in panel C of figure 5. By visual inspection and comparison of the norms in table 2, there is no difference between the decimated and the reconstructed gather. The misfit between the original Fourier-domain image and the Fourier transform of the reconstruction equals the original Fourier-domain image. This means that the inversion is not capable of reconstructing the 75 % missing seismic traces, even though the iterative inversion has converged. Both deep learning approaches, on the other hand (panels D and E in figure 5), are capable of reconstructing the missing seismic data. In both panels there is still an imprint of the missing traces; this is especially clear in the first arrivals. The later reflections and diffractions seem not to have this imprint, resulting in a low misfit in both the spatial and Fourier domains. Similar to what has been observed before, the U-Net introduces low-frequency structures into the reconstruction, visible in the low-frequency, low-wavenumber part of the misfit, which has a higher amplitude than the same area for the RIM's reconstruction. The U-Net again also underestimates the amplitude of the data more than the RIM (see the difference in norms in table 2).
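The spatial aliasing seen in panel B can be reproduced with a small f-k transform sketch; the sampling intervals and the synthetic gather are illustrative assumptions:

```python
import numpy as np

def fk_spectrum(gather, dt=0.004, dx=12.5):
    """Amplitude spectrum in the frequency-wavenumber domain.
    dt and dx are illustrative sampling intervals, not values from the paper."""
    nx, nt = gather.shape
    spec = np.fft.fftshift(np.abs(np.fft.fft2(gather)))
    f = np.fft.fftshift(np.fft.fftfreq(nt, d=dt))
    k = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    return spec, k, f

# Toy gather: one temporal frequency, constant across the 32 traces,
# so all energy sits at wavenumber k = 0.
gather = np.sin(2 * np.pi * 0.05 * np.arange(64))[None, :] * np.ones((32, 1))

# Regular factor-4 decimation (75 %) folds copies of that energy onto
# non-zero wavenumbers: the aliases seen in panel B of figure 5.
decimated = np.zeros_like(gather)
decimated[::4] = gather[::4]

spec_dense, _, _ = fk_spectrum(gather)
spec_dec, _, _ = fk_spectrum(decimated)
```

The dense spectrum is confined to the k = 0 row, while the decimated spectrum shows replicas at every quarter of the wavenumber axis, which is exactly the imprint the reconstruction methods must remove.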
The training data included higher velocity variations than present in the data to be inferred, as well as structural variation. Structure-wise, this results in a high correspondence between the predicted wavefields and the dense wavefield to be inferred. Not just the strong first arrivals, but also the later diffractions and reflections are reconstructed without loss of bandwidth. Both deep learning approaches are thus capable of reconstructing the missing data to a similar extent, thereby decreasing spatial aliasing in the Fourier domain. The higher SSIM values and lower misfit amplitudes of the RIM reconstructions are not limited to this specific gather or dataset; table 2 indicates that this is a general trend. The presented results are based on 75 % regularly decimated data and generalize to other gathers and decimation percentages as well. Where the deterministic inversion on the decimated data and the forward decimation operator already breaks down at very low decimation percentages due to the Shannon-Nyquist sampling theorem, the neural networks' performance only starts to decrease at decimation percentages near the edge of the sampled prior space.

|  | Gulf - 27 (4.292e3) | | Rdat - 97 (0.938) | | Pdat - 1 (29.08e3) | | OTD - 154 (4.242e-3) | |
|---|---|---|---|---|---|---|---|---|
|  | SSIM | norm | SSIM | norm | SSIM | norm | SSIM | norm |
| Decimated gather | 0.82 | 2.151e3 | 0.77 | 0.474 | 0.67 | 14.64e3 | 0.61 | 2.131e-3 |
| Inversion | 0.82 | 2.151e3 | 0.77 | 0.474 | 0.67 | 14.64e3 | 0.61 | 2.131e-3 |
| U-Net | 0.87 | 3.070e3 | 0.86 | 0.706 | 0.79 | 20.59e3 | 0.75 | 3.370e-3 |
| RIM | 0.88 | 3.528e3 | 0.90 | 0.821 | 0.81 | 25.73e3 | 0.83 | 3.915e-3 |

Table 2: A comparison of the reconstruction for the different approaches to inversion. The different gathers are regularly decimated by a factor of 4 (75 % decimation); the norm of the dense shot gather is given in brackets after the name of the dataset and the selected shot.
The deterministic iterative inversion cannot solve the reconstruction problem for any of the datasets at this decimation percentage (there is no difference between the input decimated gather and the reconstruction), and the RIM slightly outperforms the U-Net when comparing the metrics.

Figure 5: The reconstruction of a central shot gather from the ocean turbulence dataset; each panel consists (from top to bottom) of a bar representing the sample distribution (black for missing, white for sampled traces), the normalized wavefield and the corresponding Fourier spectrum. A) Original dense seismic gather, no missing data. B) Data regularly decimated by a factor of 4 (75 %); spatial aliasing in the Fourier domain occurs. C-E) Reconstruction using three different approaches; the Fourier spectrum shown is the misfit with respect to A. In brackets, the SSIM between A and the reconstruction in the space-time domain is given. The RIM reconstruction has the lowest misfit in the space-time and frequency-wavenumber domains, as well as the highest SSIM.

### 3D ocean turbulence reconstruction

Because the neural networks are trained to reconstruct 2D seismic gathers, a two-step inference procedure is followed to reconstruct the 3D decimated dataset. The total 3D reconstruction is thus an inference result created by first reconstructing all shot gathers and then, after sorting to common receiver gathers, the receiver gathers in a second step. The deterministic inversion uses the forward operator and performs 1000 iterations. In addition, for the 3D inversion it is assumed that the first- and second-order derivatives as well as the spatial cross-derivatives are available, thereby taking more data into the inversion and solving a multichannel reconstruction problem. The data are decimated by 94 %, leaving only about every fourth trace sampled in both the source and receiver dimensions. In total, 16 times less data is present in the decimated wavefield than in the dense wavefield.
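The two-step procedure can be sketched as follows, where `reconstruct_2d` is a placeholder for a trained network's inference function; the identity check at the end is only a shape sanity test, not a reconstruction result:

```python
import numpy as np

def two_step_reconstruct(cube, reconstruct_2d):
    """Two-step 2D reconstruction of a 3D (shot, receiver, time) cube.
    `reconstruct_2d` stands in for RIM/U-Net inference on a single gather."""
    # Step 1: reconstruct every shot gather (receiver x time slices).
    step1 = np.stack([reconstruct_2d(cube[s]) for s in range(cube.shape[0])])
    # Step 2: sort to common-receiver gathers (swap the shot/receiver axes),
    # reconstruct each receiver gather, then sort back to shot order.
    crg = step1.transpose(1, 0, 2)
    step2 = np.stack([reconstruct_2d(crg[r]) for r in range(crg.shape[0])])
    return step2.transpose(1, 0, 2)

# Toy check with an identity "network": the cube shape is preserved.
cube = np.random.default_rng(1).standard_normal((8, 8, 16))
out = two_step_reconstruct(cube, lambda g: g)
```

Because the second step operates on already-reconstructed gathers, any error made in the first step propagates into the second, which is the error build-up discussed below.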
The decimation pattern is equal in the source and receiver domains; source and receiver positions are colocated in this dataset. Each position therefore has either both shot and receiver sampled, or neither. Table 3 compares the inference and inversion results for 5 different methods. Because of the two-step procedure used in inference, the two different networks (RIM and U-Net) can also be used jointly, such that the networks could benefit from each other's reconstruction made in the first step. The best overall reconstruction is clearly made by the deterministic inversion that uses the forward operator, the decimated data and all eight (cross-)derivatives. All deep learning methods, however, still estimate the wavefield in a decent manner, considering that these networks only know the decimated data and, in the case of the RIM, a 2D version of the forward operator. Because two steps are taken in the inference procedure, the second inference step takes place on reconstructed data; this reconstruction is far from perfect, and therefore error propagation occurs. From table 3 it is clear that the reconstruction is best at positions where some data was sampled. Because of the loss function used in training, the networks are free to alter the traces that were sampled as well, instead of only the missing traces. The inversion uses the forward operator and does not allow the alteration of sampled traces; the misfit of the inference results may therefore always be higher than that of the inversion.
|  | Decimated data (44.91) | | Sampled data (26.07) | | Total 3D dataset (51.94) | |
|---|---|---|---|---|---|---|
|  | SSIM | norm | SSIM | norm | SSIM | norm |
| Inversion | 0.82 | 33.57 | 0.86 | 22.14 | 0.84 | 40.21 |
| RIM | 0.71 | 20.23 | 0.78 | 17.36 | 0.73 | 26.66 |
| U-Net / RIM | 0.68 | 17.96 | 0.77 | 15.89 | 0.70 | 23.98 |
| U-Net | 0.64 | 16.86 | 0.75 | 17.25 | 0.67 | 24.12 |
| RIM / U-Net | 0.65 | 16.79 | 0.78 | 17.45 | 0.69 | 24.21 |

Table 3: A comparison of 3D inversion results for the 94 % decimated ocean turbulence data. The deterministic inversion in this case performs best on all components. The two-step RIM reconstruction again estimates the amplitudes of the reconstruction better than the U-Net. Combining the U-Net and RIM leads to a better 3D reconstruction than using the U-Net for both steps, possibly because the RIM uses the forward operator in the estimation. The norm of the original part of the data is given in brackets; all norms are scaled by a factor of 1e3.

Figures 6 and 7 display the dense wavefield estimates from deterministic inversion for a set of shots in the center of the ocean turbulence dataset. These results are compared to the best probabilistic estimate of the wavefield made by the RIM in figures 8 and 9. Because the data is randomly decimated by 75 % over each dimension, the maximum number of consecutive missing traces within a gather is six. In the panels of all figures, only the first and last shot/receiver were recorded, and the traces in all other missing shots/receivers are reconstructed. Because the RIM reconstructs the decimated data in two steps over the two dimensions, the maximum decimation percentage the network takes as input equals that of a single dimension; this 75 % decimation falls just within the range sampled by the prior space. Traces in the six missing shots in figure 6 are reconstructed by the deterministic inversion. Of all approaches, the amplitude of this reconstruction best approximates the dense wavefield.
The misfit increases further away from the last sampled shot, yet all major seismic events are accurately recovered. In panels A-D it can be observed that the temporal bandwidth of the reconstruction also decreases with distance from the last sampled shot. As expected, more densely sampled areas result in a better reconstruction. The same general trend can be observed in figure 7 for the missing receivers, because the decimation patterns over both dimensions are equal and the deterministic inversion method included the 3D forward decimation operator. Traces in the six missing shots in figure 8 are reconstructed by the two-step RIM inference procedure. Again, the misfit increases further away from the last sampled shot. The temporal bandwidth of the reconstruction, however, does not seem to decrease with distance, although this approach does underestimate the amplitude of the dense wavefield. At source and receiver locations where many data points are missing, the imprint of the decimation pattern is more evident than in the deterministic inversion. The RIM reconstruction is relatively poor in panels D and E, where the distance to the last sampled shot/receiver is largest. This is most likely because these panels contained no data at all as input to the model; the reconstruction is thus fully based on inference and the build-up of errors over the two steps.

Figure 6: Deterministic inversion results for the 94 % decimated ocean turbulence dataset, shot gathers 140-147. The bar on top of each panel represents the sample distribution (white for sampled, black for decimated); SSIM values are reported for each gather as well. The quality of the reconstruction decreases and bandwidth is lost with distance from the last sampled gather.

Figure 7: Deterministic inversion results for the 94 % decimated ocean turbulence dataset, receiver gathers 140-147. The bar on top of each panel represents the sample distribution (white for sampled, black for decimated); SSIM values are reported for each gather as well.
Figure 8: Two-stage RIM inference results for the 94 % decimated ocean turbulence dataset, shot gathers 140-147. The bar on top of each panel represents the sample distribution (white for sampled, black for decimated); SSIM values are reported for each gather as well. The reconstruction is poorer than the deterministic inversion and its quality decreases with distance from the last sampled shot, yet there is no clear indication of loss of bandwidth.

Figure 9: Two-stage RIM inference results for the 94 % decimated ocean turbulence dataset, receiver gathers 140-147. The bar on top of each panel represents the sample distribution (white for sampled, black for decimated); SSIM values are reported for each gather as well.

## 8 \- Discussion

In order to solve the reconstruction problem, two different approaches have been studied. The wavefields reconstructed with the use of deterministic inversion without regularization verify the Shannon-Nyquist sampling theorem, which states that dense wavefields can be reconstructed from the decimated (sampled) wavefields only if the sampling frequency is at least twice the highest frequency present in the signal. Herrmann (2010) studied the effect of different decimation patterns on the imprint in the Fourier spectrum. Regular sampling leads to sparse and strong aliased signals in the Fourier spectrum, whereas irregular sampling tends to generate weaker decimation artifacts. The regular sampling artifacts hinder the reconstruction and dominate the misfit, whereas the irregular sampling artifacts are less distinct and therefore do not hinder the reconstruction of the original main structures in the wavefield. Because of irregularities or limitations in data acquisition, sampled data often do not fulfil the sampling criterion and aliasing therefore occurs. These effects are also observed in this study. At lower decimation percentages the deterministic inversion can reconstruct both regularly and irregularly decimated data.
The best reconstructions are obtained on irregularly decimated data. However, at higher decimation percentages the inversion without regularization is unable to solve the inverse problem for either regular or irregular decimation. Deterministic inversion is thus limited to very low decimation percentages, yet it would be beneficial to reconstruct data that is far sparser than what inversion can handle. Here, two deep learning approaches have been introduced that have been shown to map decimated wavefields into denser wavefields for both regular and irregular, highly sparse data.

### Deterministic versus Probabilistic approach

Deep learning approaches the inverse problem in a probabilistic sense, in which the prior has been shown to be of crucial importance. The quality of the reconstruction mainly depends on the information extracted from the training data. Sampling the training data results in a prior-space distribution that is used in the neural network's inference stage. In the seismic reconstruction problem, the most important elements the prior space should contain are reflections and diffractions due to spatial variation, bandwidth, slope variations due to velocity variations, and a range of decimation percentages. Unlike the deterministic inversion of 2D decimated gathers, which can only reconstruct data accurately when the sampling criterion is fulfilled, the neural networks have proved able to reconstruct 2D seismic gathers with decimation percentages up to the edge of the decimation range the networks were trained on. When the derivatives of the data are available, however, the deterministic inversion of the reconstruction problem turns into a multichannel reconstruction problem. In this case the deterministic inversion improved, as sparser data could be reconstructed.
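The deterministic inversion discussed above can be sketched in one dimension with plain numpy. The study itself uses PyLops-based operators on 2D/3D gathers; the cosine dictionary and band limit below are illustrative assumptions. Reconstruction is posed as least-squares inversion of the restriction (sampling) operator composed with a band-limited synthesis basis, and succeeds as long as enough samples remain relative to the signal's bandwidth:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_coef = 100, 12                        # grid size and assumed band limit

# DCT-II-like cosine dictionary: the dense signal is synthesized from a
# small number of low-wavenumber coefficients.
t = np.arange(n)
basis = np.cos(np.pi * np.outer(t + 0.5, np.arange(n_coef)) / n)

true_coef = np.zeros(n_coef)
true_coef[[2, 5, 9]] = [1.0, 0.5, 0.3]     # band-limited toy "wavefield"
signal = basis @ true_coef

# Restriction operator: keep 40 of 100 traces (60 % decimation).
keep = np.sort(rng.choice(n, 40, replace=False))
A = basis[keep]                            # restriction composed with synthesis

# Deterministic inversion: least-squares fit of the coefficients to the
# sampled traces, then resynthesis on the full grid.
coef, *_ = np.linalg.lstsq(A, signal[keep], rcond=None)
reconstruction = basis @ coef
```

With 40 samples against 12 unknown coefficients the system is overdetermined and the band-limited signal is recovered essentially exactly; shrinking the sample count below the band limit makes the system underdetermined, which is the regime where the unregularized inversion fails.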
In the 3D highly sparse reconstruction of ocean turbulence data, the deep learning methods have proved able to reconstruct the sparse data without the need for derivatives. The reconstruction quality is not as good as that of the inversion, but it is believed that it can be improved by more extensive training on highly sparse data or by creating a neural network capable of taking N-dimensional data as in- and output. The two-step inference procedure is prone to error propagation, something that does not occur when N-dimensional data is used directly as input. The loss of bandwidth with distance to the last sampled shot observed in the inversion is not observed in the inference results, indicating that the training data used was sufficient to describe the bandwidth in the ocean turbulence dataset. Because the extra data taken into the inversion (derivatives) is often not available, deep learning should be considered a viable option for data reconstruction. Besides the fact that the deep learning methods require nothing but the data and possibly the forward operator, another advantage of deep learning methods over deterministic methods lies in the short inference times. Of course, training a neural network takes time: for the RIM used here this corresponds to 12 hours, whereas the U-Net trained in under 2 hours. However, with a good generalizing ability, a network only has to be trained once and can afterwards be used for inference on unseen datasets. The reconstruction of a single 2D seismic gather by inference is almost instantaneous, whereas the inversion can take up to minutes per gather. When including the derivatives in the inversion this may take even longer (the 3D inversion took over 14 hours to converge in 1000 iterations). The training time of neural networks could possibly be reduced, based on the discussion of the prior-space sampling required for a good generalizing model.
The requirement of a large training dataset from which to extract an accurate description of the prior space could be seen as a difficulty of deep learning as well. In this case, the training data are created synthetically from dense seismic wavefields that include a range of different properties and structures. This means that in all cases it is best either to use existing dense data for training or to sample part of the acquisition densely, thereby making it possible to generate synthetic training data containing the structures present in the data to be reconstructed. As noticeable in the results, without accurate prior-space sampling the deep learning networks cannot generalize well enough. Of course, the required quality of the reconstructed data also depends on what this data will be used for in post-processing steps. For example, migration is less demanding than full waveform inversion, which attempts to use every single arrival. Drawing exact conclusions based on the metrics presented here should therefore be done with care, taking the ultimate aim of the reconstruction into account. In seismics, collecting suitable and sufficient training data should be a manageable task, as the required features are very common in seismic data.

### Comparison of deep learning methods

The two deep learning architectures used here are the Recurrent Inference Machine (RIM) and the U-Net. Both methods require training data to update their internal state to match the approximate inverse of the forward operator that generated the decimated data. The RIM approaches inverse problems by combining the known forward operator and the sampled data within a main Recurrent Neural Network (RNN) cell. According to Putzky and Welling (2017), this approach, which combines the inference and training stages, is crucial and unique to solving inverse problems with deep learning.
That the RIM has the potential to solve inverse problems has been demonstrated here by solving the reconstruction problem, for which the forward operator is the linear (computationally inexpensive) restriction operator. The RIM was demonstrated to generalize well to unseen data and decimation percentages, even with a limited amount of training data. From the results it can be concluded that RIMs have a low tendency to overfit the training data while generalizing well outside the prior range. That the RIM is not the only neural network that can represent the inverse of the restriction operator has been shown with the help of the U-Net. Like the RIM, the U-Net makes use of convolutional operators to extract higher-level features from the input data. However, the U-Net uses neither an RNN nor the forward operator. In both the 2D seismic gather and the 3D highly decimated reconstruction, the U-Net consistently underestimates the amplitude of the reconstruction and introduces lower-frequency structures in the prediction. Most often, however, it is possible to filter these lower-frequency structures from the predictions and reach results similar to the predictions made by the RIM. Likewise, it is often not the absolute amplitude of the reconstruction that is the main goal; the relative distribution of amplitudes is of higher importance, as this is a measure of contrast in subsurface properties. This indicates that, structure-wise, the reconstruction of the U-Net after filtering could be good enough for further processing as well. Training the U-Net on different training data resulted in highly varying inference results. It can therefore be concluded that the U-Net is much more likely to overfit the training data, possibly because of the high number of trainable parameters in the network, and is therefore more prone to prior-space variance.
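The way the forward operator enters each RIM update can be illustrated with a stripped-down sketch: for a Gaussian likelihood, the gradient information fed to the recurrent cell is A^T(y - Ax). In the actual RIM that cell is a trained RNN with hidden state; replacing it by a fixed step size, as below, reduces the scheme to plain Landweber iteration, so this only sketches the data flow, not the RIM itself.

```python
import numpy as np

def rim_like_estimate(y, A, n_steps=200, step=0.5):
    """Iterate an estimate x using the log-likelihood gradient A^T (y - A x).

    A true RIM replaces the fixed gradient step by a trained recurrent
    cell with hidden state; this fixed-step version (Landweber iteration)
    only shows how the forward operator A enters every update.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_steps):
        grad_loglik = A.T @ (y - A @ x)   # gradient of the Gaussian data-fit term
        x = x + step * grad_loglik        # learned update replaced by a fixed step
    return x
```

Applied to a pure restriction operator this iteration simply reinstates the sampled traces and leaves the rest at zero, which is why a learned update (carrying prior information) is needed to actually fill the gaps.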
During the course of this study, another study was published by Mandelli et al. (2019) in which the U-Net is again used to solve the reconstruction problem, as a pre-processing step before using the reconstructed data for migration. There, however, as a post-processing step, the traces at the sampled locations are removed from the network's prediction and replaced by the actual sampled traces. Mandelli et al. (2019) find that the U-Net can be used to solve the reconstruction problem; their results, however, are based on decimation percentages of 10, 30 and 50. Similar observations of poorer generalization to unseen data or decimation patterns are made. Taking these considerations into account, it can be stated that the wavefields reconstructed by the RIM, in both 2D and 3D, are slightly better (in structural similarity, norm, as well as dealiasing) than those of the U-Net, while both methods perform better than the single-channel deterministic inversion at higher decimation percentages. In this assessment, emphasis is put on the fact that the RIM generalizes better to unseen data and to decimation percentages outside the prior range. When the deterministic inversion does include the derivatives of the data (multichannel reconstruction), the reconstruction improves and becomes better than the deep learning methods. Deep learning has proven to be a promising strategy for the single-channel reconstruction problem that does not lose bandwidth over the reconstructions, and should be considered in N-dimensional problems as well when only the decimated data is acquired. The choice of hyperparameters in the RIM architecture is based on considerations made by Patrick Putzky and described in Lønning et al. (2018). The U-Net architecture is created such that it extracts a similar number of features in the first layer as the RIM does (here 64). The number of pooling layers is chosen to be four, such that the representation of the input data has a minimum size in the bottleneck layer.
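The structural similarity (SSIM) values used for these comparisons can be sketched with a single-window (global) variant in numpy. Practical implementations (such as the windowed SSIM of Ndajah et al., 2010, cited in the references) average local SSIM maps over sliding windows; the global window and the standard constants below are simplifying assumptions for illustration.

```python
import numpy as np

def global_ssim(x, y, data_range=None):
    """Single-window SSIM between two gathers x and y.

    Windowed SSIM averages this quantity over local sliding windows;
    this global version only sketches the structure of the metric.
    """
    if data_range is None:
        data_range = max(x.max(), y.max()) - min(x.min(), y.min())
    c1 = (0.01 * data_range) ** 2          # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A perfect reconstruction scores 1; amplitude underestimation or structural mismatch (e.g. aliasing artifacts) pulls the score below 1, which is how the per-gather values in figures 6-9 should be read.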
The size of the input data (32 receiver samples, 64 time samples) is based on the memory load on a single GPU. For the RIM, which has a higher memory load than the U-Net, this input size in batches of 32 was the maximum load the single GPU could handle. As observed in the results, with this window size the temporal and spatial structures can be captured such that generalization to full (not windowed; inference stage) seismic gathers is possible. To benchmark the U-Net against the RIM, the input size of the U-Net is chosen equal to that of the RIM, even though the computational load is much lower for this network and a larger window could have been chosen. The training data is windowed using non-overlapping patches; results in Mandelli et al. (2019) show that overlapping patches increase the computational load while yielding only a very limited increase in inference performance. Even though the neural networks have been trained to reach minimum states that are as equal as possible, the networks should still be compared with care, as their architectures are different.

#### Effect of forward operator

That the RIM takes the forward operator into account is what is believed to make the RIM's approach to inverse problems better than that of the U-Net. Unfortunately, because that is not the only difference between the two architectures (1. the RIM is an RNN, 2. the RIM uses the forward operator in its update function), it can only be stated with care that the use of the forward operator is what makes the RIM a better probabilistic inverse problem solver than the U-Net. To exclude the possibility that the RNN is what makes the RIM perform better than the U-Net, a network was trained using a unit forward operator. In that case, the predictions made by the RIM are worse than those of the U-Net.
This observation supports the hypothesis and indicates that the differences between the RIM and the U-Net indeed come from the fact that the RIM can extract information from the gradient of the log-likelihood, for which the forward operator is required.

#### More complex forward operator

Even though the U-Net performs slightly worse than the RIM, the U-Net is able to represent the inverse of the linear forward operator decimating the data. Because the RIM is specifically designed as an approach to inverse problems, it was expected to outperform the U-Net. The RIM does perform better than the U-Net, but it did not excel at the reconstruction problem. It is believed that the RIM will excel for more complex (possibly even non-linear) forward operators. As a first test closely related to the reconstruction problem, the reconstruction problem was transformed to the Fourier domain: reconstructing data in the space-time domain can be seen as dealiasing the Fourier spectrum that is aliased due to sub-Nyquist spatial sampling. Because of the current limitations of the single-GPU setup it was not possible to study this approach to more complex forward operators. This is related to the fact that taking the Fourier transform of a patch of data results in a local Fourier representation of the data instead of the full global spectrum. Training the networks to dealias the local spectrum did not correspond to dealiasing the global spectrum for any of the given methods, and this should therefore be part of future studies. Lønning et al. (2019) did use the RIM as an approximate inverse of a more complex forward operator and also compared it to the U-Net. In this case, the data is sampled in image space, with decimation taking place in another data space related to the image space by the Fourier transform. Results from Lønning et al. (2019) indicate that it is indeed the RIM's architecture that makes the network a potential inverse problem solver.
The RIM generalized better to unseen data, required less training data (fewer parameters to train) and did not suffer from the structural artifacts generated by the U-Net. Again the U-Net generalized poorly to unseen data or decimation ranges, linked to its number of trainable parameters.

### Limitations & Future work

Unlike the deterministic inversion, the networks were free to alter the sampled traces. This might not have been the best approach and should be changed in the future: a weighting factor and the forward operator could be included in the loss function to emphasize that the network should reconstruct the decimated traces only. It is believed that this would positively affect the reconstruction results. From these results and those in Mandelli et al. (2019), it became clear that not just the RIM but also the U-Net has the ability to represent the inverse of the restriction operator. Despite currently being limited by the single-GPU setup, it would be interesting to test the ability of both networks to represent more complex (possibly non-linear) operators. Results from Lønning et al. (2019) indicate that in that case the RIM will outperform the U-Net. This statement could be studied in the Fourier domain as a follow-up to this study, where reconstruction took place in the space-time domain. With multiple GPUs it would be possible to distribute the training data without being limited to the 32x64 window size currently used. This would mean the networks could be trained to dealias the global Fourier spectrum, thereby reducing spatial aliasing and thus reconstructing decimated data in the space-time domain. This study, as well as comparisons made by e.g. Kim and Nakata (2018) and Russell (2019), indicates that deep learning should indeed be considered a viable option for solving inverse problems, especially those for which deterministic inversion is not possible.
It would be interesting to use the reconstructed data volumes in post-processing steps. For example, migration can be performed on the 3D reconstructed highly sparse ocean turbulence data volume. At this point, the comparison between the deterministic and probabilistic approaches is limited to the reconstructions; after migration it would be possible to see whether the methods result in a similar image of the studied subsurface. A decisive conclusion should therefore not be based purely on the metrics used in this study: different types of errors may or may not matter in post-processing steps, and it is therefore difficult to state exactly what makes a reconstructed image 'good'. Using the reconstructed data volumes for migration is currently part of ongoing studies.

## 9 \- Conclusions

In this study, two different approaches to solving the reconstruction problem, as an example of an inverse problem for which the forward operator is known, have been studied. The deterministic inversion without regularization is not capable of reconstructing the decimated seismic data when the acquisition did not follow the setup specified by the Shannon-Nyquist sampling theorem. Two deep learning methods, which approach the inverse problem in a probabilistic sense, have been compared on different reconstruction tasks. It can be concluded that the most important element in building a well-generalizing neural network is the prior space. In the seismic data reconstruction problem, this prior space should consist of features similar to those to be inferred, including bandwidth, structural and velocity variations, and a range of decimation percentages. The ability of the deep learning methods to represent the inverse of the restriction operator is better than that of the deterministic inversion. The predictions made by the networks result in higher SSIM values and better estimates of the norm for all studied decimation percentages, patterns and datasets.
The deep learning methods are capable of eliminating spatial aliasing in the Fourier domain, whereas the inversion cannot undo the aliasing caused by sub-Nyquist spatial sampling. Both deep learning methods have proved able to map decimated data into dense seismic data, thereby solving the reconstruction problem. The deterministic inversion can be improved by incorporating spatial derivatives. The two-step multichannel reconstruction made by deep learning showed that deep learning should be considered a viable option for highly sparse, N-dimensional data reconstruction when only the decimated data are acquired. The RIM architecture is specifically designed to approximate the inverse of the forward operator and is compared to the U-Net (initially designed for image segmentation). Benchmarking the RIM against the U-Net leads to the conclusion that the RIM generalizes better to unseen decimation percentages and data, due to the nature of its architecture in which the reconstruction is regularized by the forward operator. The RIM contains fewer trainable parameters, making it less prone to overfitting. For simple linear operators, the U-Net is also capable of inverting the system, albeit underestimating amplitudes and introducing low-frequency artifacts, thereby requiring further processing before using the data volumes in e.g. migration and full waveform inversion. Benchmarking the RIM against other deep learning architectures for more complex forward operators should be the subject of future studies. However, the initial results presented here show that RIMs have great potential in seismic processing problems where determining a complex inverse map to a known forward problem is the goal of inference by machine learning.

## References

* Dai and Heckel, (2019) Dai, Z., and R. Heckel, 2019, Channel normalization in Convolutional Neural Network avoids Vanishing Gradients: arXiv preprint arXiv:1907.09539. * Goodfellow et al., (2016) Goodfellow, I., Y. Bengio, and A.
Courville, 2016, Deep Learning: MIT Press. (http://www.deeplearningbook.org). * Hennenfent and Herrmann, (2008) Hennenfent, G., and F. J. Herrmann, 2008, Simply denoise: Wavefield reconstruction via jittered undersampling: Geophysics, 73, V19–V28. * Herrmann, (2010) Herrmann, F. J., 2010, Randomized sampling and sparsity: Getting more information from fewer samples: Geophysics, 75, WB173–WB187. * Ioffe and Szegedy, (2015) Ioffe, S., and C. Szegedy, 2015, Batch normalization: Accelerating deep network training by reducing internal covariate shift: arXiv preprint arXiv:1502.03167. * Kim and Nakata, (2018) Kim, Y., and N. Nakata, 2018, Geophysical inversion versus machine learning in inverse problems: The Leading Edge, 37, 894–901. * Kingma and Ba, (2014) Kingma, D. P., and J. Ba, 2014, Adam: A method for stochastic optimization: arXiv preprint arXiv:1412.6980. * Lønning et al., (2018) Lønning, K., P. Putzky, M. W. Caan, and M. Welling, 2018, Recurrent inference machines for accelerated MRI reconstruction: Presented at the International Conference on Medical Imaging with Deep Learning (MIDL 2018). * Lønning et al., (2019) Lønning, K., P. Putzky, J.-J. Sonke, L. Reneman, M. W. Caan, and M. Welling, 2019, Recurrent inference machines for reconstructing heterogeneous MRI data: Medical image analysis, 53, 64–78. * Mandelli et al., (2019) Mandelli, S., V. Lipari, P. Bestagini, and S. Tubaro, 2019, Interpolation and denoising of seismic data using convolutional neural networks: arXiv preprint arXiv:1901.07927. * Martinez, (2016) Martinez, M. T., 2016, An overview of Google’s Machine Intelligence Software TensorFlow.: Technical report, Sandia National Lab.(SNL-NM), Albuquerque, NM (United States). * Ndajah et al., (2010) Ndajah, P., H. Kikuchi, M. Yukawa, H. Watanabe, and S. Muramatsu, 2010, SSIM image quality metric for denoised images: Proc. 3rd WSEAS Int. Conf. on Visualization, Imaging and Simulation, 53–58. * Paszke et al., (2019) Paszke, A., S. Gross, F. Massa, A. 
Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., 2019, PyTorch: An imperative style, high-performance deep learning library: Advances in Neural Information Processing Systems, 8024–8035. * Peng and Vasconcelos, (2019) Peng, H., and I. Vasconcelos, 2019, A study of acquisition-related sub-sampling and aperture effects on Marchenko focusing and redatuming, in SEG Technical Program Expanded Abstracts 2019: Society of Exploration Geophysicists, 248–252. * Putzky and Welling, (2017) Putzky, P., and M. Welling, 2017, Recurrent inference machines for solving inverse problems: arXiv preprint arXiv:1706.04008. * Ravasi and Vasconcelos, (2020) Ravasi, M., and I. Vasconcelos, 2020, PyLops—A linear-operator Python library for scalable algebra and optimization: SoftwareX, 11, 100361. * Ronneberger et al., (2015) Ronneberger, O., P. Fischer, and T. Brox, 2015, U-net: Convolutional networks for biomedical image segmentation: International Conference on Medical image computing and computer-assisted intervention, Springer, 234–241. * Ruan, (2019) Ruan, J., 2019, Compressive Acquisition and Ocean Turbulence Wavefield Reconstruction: Master’s thesis, Utrecht University. * Ruan and Vasconcelos, (2019) Ruan, J., and I. Vasconcelos, 2019, Data-and prior-driven sampling and wavefield reconstruction for sparse, irregularly-sampled, higher-order gradient data, in SEG Technical Program Expanded Abstracts 2019: Society of Exploration Geophysicists, 4515–4519. * Russell, (2019) Russell, B., 2019, Machine learning and geophysical inversion—A numerical study: The Leading Edge, 38, 512–519. * Siahkoohi et al., (2018) Siahkoohi, A., R. Kumar, and F. Herrmann, 2018, Seismic data reconstruction with generative adversarial networks: Presented at the 80th EAGE Conference and Exhibition 2018. * Udacity, (2019) Udacity, 2019, Intro to Deep Learning with PyTorch, `https://www.udacity.com/course/deep-learning-pytorch--ud188`, accessed Summer 2019. 
* Zbontar et al., (2018) Zbontar, J., F. Knoll, A. Sriram, M. J. Muckley, M. Bruno, A. Defazio, M. Parente, K. J. Geras, J. Katsnelson, H. Chandarana, et al., 2018, fastMRI: An open dataset and benchmarks for accelerated MRI: arXiv preprint arXiv:1811.08839.
# The Gauss Hypergeometric Covariance Kernel for Modeling Second-Order Stationary Random Fields in Euclidean Spaces: its Compact Support, Properties and Spectral Representation Xavier Emery Department of Mining Engineering, University of Chile, Avenida Beauchef 850, Santiago 8370448, Chile. Advanced Mining Technology Center, University of Chile, Avenida Beauchef 850, Santiago 8370448, Chile. Alfredo Alegría (corresponding author; Email<EMAIL_ADDRESS>) Departamento de Matemática, Universidad Técnica Federico Santa María, Valparaíso, Chile. ###### Abstract This paper presents a parametric family of compactly-supported positive semidefinite kernels aimed at modeling the covariance structure of second-order stationary isotropic random fields defined in the $d$-dimensional Euclidean space. Both the covariance and its spectral density have an analytic expression involving the hypergeometric functions ${}_{2}F_{1}$ and ${}_{1}F_{2}$, respectively, together with four real-valued parameters related to the correlation range, smoothness and shape of the covariance. The presented hypergeometric kernel family contains, as special cases, the spherical, cubic, penta, Askey, generalized Wendland and truncated power covariances and, as asymptotic cases, the Matérn, Laguerre, Tricomi, incomplete gamma and Gaussian covariances, among others. The parameter space of the univariate hypergeometric kernel is identified and its functional properties — continuity, smoothness, transitive upscaling (montée) and downscaling (descente) — are examined. Several sets of sufficient conditions are also derived to obtain valid stationary bivariate and multivariate covariance kernels, characterized by four matrix-valued parameters.
Such kernels turn out to be versatile, insofar as the direct and cross-covariances do not necessarily have the same shapes, correlation ranges or behaviors at short scale, and are thus associated with vector random fields whose components are cross-correlated but have different spatial structures. _Keywords:_ Positive semidefinite kernels; Spectral density; Direct and cross-covariances; Generalized hypergeometric functions; Conditionally negative semidefinite matrices; Multiply monotone functions.

## 1 Introduction

Geostatistical techniques such as kriging or conditional simulation are widely used to interpolate regionalized data, in order to address spatial prediction problems or to quantify uncertainty at locations without data (Chilès and Delfiner, 2012). These techniques rely on a modeling of the spatial correlation structure of one or more regionalized variables, viewed as realizations of as many spatial random fields. Application domains include natural (mineral, oil and gas) resources assessment, groundwater hydrology, soil and environmental sciences, among many others, where it is not uncommon to work with up to a dozen variables (Ahmed, 2007; Emery and Séguret, 2020; Hohn, 1999; Webster and Oliver, 2007). This motivates the need for univariate and multivariate covariance (positive semidefinite) kernels that allow a flexible parameterization of the relevant properties such as the correlation range or the short-scale regularity. In practical applications, the random fields under study are often assumed to be second-order stationary, i.e., their first- and second-order moments (expectation and covariance) exist and are invariant under spatial translation (Chilès and Delfiner, 2012; Cressie, 1993; Wackernagel, 2003).
The stationarity assumption is made throughout this work, which implies that the covariance kernel for two input vectors $\boldsymbol{s}$ and $\boldsymbol{s}^{\prime}$ is actually a function of the separation $\boldsymbol{h}=\boldsymbol{s}-\boldsymbol{s}^{\prime}$ between these vectors (here $\boldsymbol{s}$ and $\boldsymbol{s}^{\prime}$ are elements of the $d$-dimensional Euclidean space $\mathbb{R}^{d}$) and that its Fourier transform (spectral density of the covariance kernel) is also a function of a single vectorial argument $\boldsymbol{u}\in\mathbb{R}^{d}$ (Chilès and Delfiner, 2012; Wackernagel, 2003). Many parametric families of stationary covariance kernels have been proposed in the past decades, the most widespread being the Matérn kernel (Matérn, 1986) that allows controlling the behavior of the covariance at the origin. This kernel has been extended to the multivariate case (Apanasovich et al., 2012; Gneiting et al., 2010), offering a more flexible parameterization than the traditional linear model of coregionalization (Wackernagel, 2003), but still suffers from restrictive conditions on its parameters to be a valid coregionalization model. Compactly-supported covariance kernels possess nice computational properties that make them of particular interest for applications, insofar as they are suitable to likelihood-based inference and kriging in the presence of large data sets when combined with algorithms for solving sparse systems of linear equations (Furrer et al., 2006; Kaufman et al., 2008), and to specific simulation algorithms such as circulant-embedding and FFT-based approaches (Chilès and Delfiner, 2012; Dietrich and Newsam, 1993; Pardo-Igúzquiza and Chica-Olmo, 1993; Wood and Chan, 1994).
However, although many families of such kernels have been elaborated for the modeling of univariate random fields, such as the spherical, cubic, Askey, Wendland and generalized Wendland families (Askey, 1973; Chilès and Delfiner, 2012; Hubbert, 2012; Matheron, 1965; Wendland, 1995), so far there is still a lack of flexible families of multivariate compactly-supported covariance kernels, with a few notable exceptions (Porcu et al., 2013; Daley et al., 2015). This paper deals with the design of a wide parametric family of compactly-supported covariance kernels for second-order stationary univariate and multivariate random fields in $\mathbb{R}^{d}$, and with the determination of their parameter space, functional properties, spectral representations and asymptotic behavior. The intended family of covariance kernels will contain all the above-mentioned kernels, as well as the Matérn kernel as an asymptotic case. Estimating the kernel parameters from a set of experimental data, comparing estimation approaches or examining the impact of the parameters on spatial prediction or simulation outputs are out of the scope of this paper and are left for future research. The outline is the following: Section 2 presents the univariate kernel and its properties. This kernel is then extended to multivariate random fields in Section 3 and to specific bivariate random fields in Section 4. Conclusions follow in Section 5, while technical definitions, lemmas and proofs are deferred to Appendices A and B.
## 2 A class of stationary univariate compactly-supported covariance kernels

### 2.1 Notation

For $k,k^{\prime}\in\mathbb{N}$, $\alpha_{1},\ldots,\alpha_{k},\beta_{1},\ldots,\beta_{k^{\prime}}\in\mathbb{R}$, the generalized hypergeometric function ${}_{k}F_{k^{\prime}}$ in $\mathbb{R}$ is defined by the following power series (Olver et al., 2010, formula 16.2.1): ${}_{k}F_{k^{\prime}}(\alpha_{1},\ldots,\alpha_{k};\beta_{1},\ldots,\beta_{k^{\prime}};x)=1+\sum_{n=1}^{+\infty}\frac{\prod_{i=1}^{k}\Gamma(\alpha_{i}+n)\prod_{j=1}^{k^{\prime}}\Gamma(\beta_{j})}{\prod_{i=1}^{k}\Gamma(\alpha_{i})\prod_{j=1}^{k^{\prime}}\Gamma(\beta_{j}+n)}\frac{x^{n}}{n!},\quad x\in\mathbb{R},$ (1) where $\Gamma$ is Euler’s gamma function. The series (1) converges for any $x\in\mathbb{R}$ if $k<k^{\prime}+1$, for any $x\in]-1,1[$ if $k=k^{\prime}+1$ and also for $x=\pm 1$ if $k=k^{\prime}+1$ and $\sum_{i=1}^{k}\alpha_{i}<\sum_{j=1}^{k^{\prime}}\beta_{j}$. Specific cases include the confluent hypergeometric limit function ${}_{0}F_{1}$, Kummer’s confluent hypergeometric function ${}_{1}F_{1}$ and the Gauss hypergeometric function ${}_{2}F_{1}$.

### 2.2 Kernel construction

Consider the isotropic function $\widetilde{G}_{d}(\cdot;a,\alpha,\beta,\gamma)$ defined in $\mathbb{R}^{d}$ by: $\begin{split}\widetilde{G}_{d}({\boldsymbol{u}};a,\alpha,\beta,\gamma)&=\widetilde{g}_{d}(\|{\boldsymbol{u}}\|;a,\alpha,\beta,\gamma)\\\ &=\zeta_{d}(a,\alpha,\beta,\gamma)\,{}_{1}F_{2}\left(\alpha;\beta,\gamma;-(\pi a\|{\boldsymbol{u}}\|)^{2}\right),\quad{\boldsymbol{u}}\in\mathbb{R}^{d},\end{split}$ (2) where $\|\cdot\|$ denotes the Euclidean norm, $d$ is a positive integer, $(a,\alpha,\beta,\gamma)$ are positive scalar parameters, and $\zeta_{d}(a,\alpha,\beta,\gamma)$ is a normalization factor that will be determined later.
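For the convergent cases of the series (1), which underlies the ${}_{1}F_{2}$ factor in the spectral density (2), the function can be evaluated numerically by accumulating terms with the rising-factorial recurrence. The sketch below is a minimal partial-sum implementation; library routines such as mpmath's `hyper` are preferable in practice.

```python
import math

def pFq(a, b, x, n_terms=200):
    """Partial sum of the generalized hypergeometric series (1).

    Term recurrence: T_{n+1} = T_n * prod(a_i + n) / (prod(b_j + n) * (n + 1)) * x.
    Valid where the series converges: always for k < k' + 1, and for
    |x| < 1 when k = k' + 1.
    """
    total, term = 1.0, 1.0
    for n in range(n_terms):
        term *= math.prod(ai + n for ai in a)
        term /= math.prod(bj + n for bj in b)
        term *= x / (n + 1)
        total += term
    return total
```

Sanity checks against closed forms: with empty parameter lists the series collapses to the exponential series, and ${}_{2}F_{1}(1,1;2;x)=-\ln(1-x)/x$.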
Cho et al. (2020) proved that $\widetilde{G}_{d}(\cdot;a,\alpha,\beta,\gamma)$ is nonnegative under the following conditions:

* • $\alpha>0$;
* • $2(\beta-\alpha)(\gamma-\alpha)\geq\alpha$;
* • $2(\beta+\gamma)\geq 6\alpha+1$.

Hereinafter, $\mathcal{P}_{0}$ denotes the set of triplets $(\alpha,\beta,\gamma)$ of $\mathbb{R}_{+}^{3}$ satisfying these three conditions; note that the last two conditions imply, in particular, that $\beta>\alpha$ and $\gamma>\alpha$. Under an additional assumption of integrability, $\widetilde{G}_{d}(\cdot;a,\alpha,\beta,\gamma)$ is the spectral density associated with a stationary isotropic covariance kernel $G_{d}(\cdot;a,\alpha,\beta,\gamma)$ in $\mathbb{R}^{d}$. Let $g_{d}(\cdot;a,\alpha,\beta,\gamma):\mathbb{R}_{+}\to\mathbb{R}$ denote the radial part of such a covariance kernel: $G_{d}({\boldsymbol{h}};a,\alpha,\beta,\gamma)=g_{d}(\|{\boldsymbol{h}}\|;a,\alpha,\beta,\gamma)$ for ${\boldsymbol{h}}\in\mathbb{R}^{d}$. Following the scaling conventions used by Stein and Weiss (1971) to define the Fourier and inverse Fourier transforms, $g_{d}(\cdot;a,\alpha,\beta,\gamma)$ is the Hankel transform of order $d$ of $\widetilde{g}_{d}(\cdot;a,\alpha,\beta,\gamma)$, i.e.:
$g_{d}(r;a,\alpha,\beta,\gamma)=\frac{2\pi\zeta_{d}(a,\alpha,\beta,\gamma)}{r^{\frac{d}{2}-1}}\int_{0}^{+\infty}\rho^{\frac{d}{2}}J_{\frac{d}{2}-1}(2\pi\rho r){}_{1}F_{2}\left(\alpha;\beta,\gamma;-(\pi a\rho)^{2}\right)\text{d}\rho,\quad r>0,$ (3)
with $J_{\mu}$ denoting the Bessel function of the first kind of order $\mu$.
By using formulae 16.5.2 and 10.16.9 of Olver et al. (2010), the generalized hypergeometric function ${}_{1}F_{2}$ can be written as a beta mixture of Bessel functions of the first kind:
$\begin{split}{}_{1}F_{2}&\left(\alpha;\beta,\gamma;-(\pi a\rho)^{2}\right)\\\ &=\frac{\Gamma(\beta)}{\Gamma(\alpha)\Gamma(\beta-\alpha)}\int_{0}^{1}t^{\alpha-1}(1-t)^{\beta-\alpha-1}{}_{0}F_{1}\left(;\gamma;-t(\pi a\rho)^{2}\right)\text{d}t\\\ &=\frac{\Gamma(\beta)\Gamma(\gamma)}{\Gamma(\alpha)\Gamma(\beta-\alpha)}\int_{0}^{1}t^{\alpha-1}(1-t)^{\beta-\alpha-1}\left(\pi a\rho\sqrt{t}\right)^{1-\gamma}J_{\gamma-1}\left(2\pi a\rho\sqrt{t}\right)\text{d}t.\end{split}$ (4)
Owing to Fubini’s theorem, the radial function (3) is found to be
$\begin{split}g_{d}&(r;a,\alpha,\beta,\gamma)=\frac{2\pi(\pi a)^{1-\gamma}\Gamma(\beta)\Gamma(\gamma)}{r^{\frac{d}{2}-1}\Gamma(\alpha)\Gamma(\beta-\alpha)}\zeta_{d}(a,\alpha,\beta,\gamma)\\\ &\times\int_{0}^{1}t^{\alpha-1/2-\gamma/2}(1-t)^{\beta-\alpha-1}\int_{0}^{+\infty}\rho^{\frac{d}{2}+1-\gamma}J_{\frac{d}{2}-1}(2\pi\rho r)J_{\gamma-1}\left(2\pi a\rho\sqrt{t}\right)\text{d}\rho\text{d}t.\end{split}$ (5)
For $\gamma>\frac{d}{2}$, the last integral in (5) is convergent and can be evaluated by using formula 6.575.1 of Gradshteyn and Ryzhik (2007):
$\begin{split}g_{d}&(r;a,\alpha,\beta,\gamma)\\\ &=\begin{cases}\frac{\pi^{-\frac{d}{2}}a^{-d}\Gamma(\beta)\Gamma(\gamma)\zeta_{d}(a,\alpha,\beta,\gamma)}{\Gamma(\alpha)\Gamma(\beta-\alpha)\Gamma(\gamma-\frac{d}{2})}\int_{\left(\frac{r}{a}\right)^{2}}^{1}t^{\alpha-\gamma}(1-t)^{\beta-\alpha-1}\left(t-\left(\frac{r}{a}\right)^{2}\right)^{\gamma-\frac{d}{2}-1}\text{d}t&\text{if }0<r\leq a\\\ 0&\text{if }r>a.\end{cases}\end{split}$ (6)
The function $g_{d}(\cdot;a,\alpha,\beta,\gamma)$ so defined can be extended by continuity at $r=0$ if $\alpha>\frac{d}{2}$ (Gradshteyn and Ryzhik, 2007, formula 3.191.3):
$g_{d}(0;a,\alpha,\beta,\gamma)=\frac{\pi^{-\frac{d}{2}}a^{-d}\Gamma(\alpha-\frac{d}{2})\Gamma(\beta)\Gamma(\gamma)\zeta_{d}(a,\alpha,\beta,\gamma)}{\Gamma(\alpha)\Gamma(\beta-\frac{d}{2})\Gamma(\gamma-\frac{d}{2})}.$
This value is equal to one when considering the following normalization factor:
$\zeta_{d}(a,\alpha,\beta,\gamma)=\frac{\pi^{\frac{d}{2}}a^{d}\Gamma(\alpha)\Gamma(\beta-\frac{d}{2})\Gamma(\gamma-\frac{d}{2})}{\Gamma(\alpha-\frac{d}{2})\Gamma(\beta)\Gamma(\gamma)}.$ (7)

### 2.3 Analytic expressions and parameter space

By substituting (7) in (2) and (6), one obtains the following expressions for the spectral density and the covariance kernel:
$\widetilde{G}_{d}({\boldsymbol{u}};a,\alpha,\beta,\gamma)=\frac{\pi^{\frac{d}{2}}a^{d}\Gamma(\alpha)\Gamma(\beta-\frac{d}{2})\Gamma(\gamma-\frac{d}{2})}{\Gamma(\alpha-\frac{d}{2})\Gamma(\beta)\Gamma(\gamma)}\,{}_{1}F_{2}\left(\alpha;\beta,\gamma;-(\pi a\|{\boldsymbol{u}}\|)^{2}\right),\quad{\boldsymbol{u}}\in\mathbb{R}^{d},$ (8)
and
$\begin{split}G_{d}&({\boldsymbol{h}};a,\alpha,\beta,\gamma)\\\ &=\begin{cases}\frac{\Gamma(\beta-\frac{d}{2})}{\Gamma(\alpha-\frac{d}{2})\Gamma(\beta-\alpha)}\int_{\left(\frac{\|{\boldsymbol{h}}\|}{a}\right)^{2}}^{1}t^{\alpha-\gamma}(1-t)^{\beta-\alpha-1}\left(t-\left(\frac{\|{\boldsymbol{h}}\|}{a}\right)^{2}\right)^{\gamma-\frac{d}{2}-1}\text{d}t&\text{if }0\leq\|{\boldsymbol{h}}\|\leq a\\\ 0&\text{if }\|{\boldsymbol{h}}\|>a.\end{cases}\end{split}$ (9)
Hereinafter, $G_{d}(\cdot;a,\alpha,\beta,\gamma)$ will be referred to as the Gauss hypergeometric covariance, the reason being that it has the following analytic expression, obtained from (9) by using formula II.1.4 of Matheron (1965):
$\begin{split}G_{d}({\boldsymbol{h}};a,\alpha,\beta,\gamma)=&\frac{\Gamma(\beta-\frac{d}{2})\Gamma(\gamma-\frac{d}{2})}{\Gamma(\beta-\alpha+\gamma-\frac{d}{2})\Gamma(\alpha-\frac{d}{2})}\left(1-\frac{\|{\boldsymbol{h}}\|^{2}}{a^{2}}\right)_{+}^{\beta-\alpha+\gamma-\frac{d}{2}-1}\\\
&\times{}_{2}F_{1}\left(\beta-\alpha,\gamma-\alpha;\beta-\alpha+\gamma-\frac{d}{2};\left(1-\frac{\|{\boldsymbol{h}}\|^{2}}{a^{2}}\right)_{+}\right),\quad{\boldsymbol{h}}\in\mathbb{R}^{d},\end{split}$ (10)
with $(\cdot)_{+}$ denoting the positive part function. A wealth of closed-form expressions can be obtained for specific values of the parameters $\alpha$, $\beta$ and $\gamma$; see the examples in the forthcoming subsections. Also, several algorithms and software libraries are available to accurately compute the confluent hypergeometric limit function ${}_{0}F_{1}$ and the Gauss hypergeometric function ${}_{2}F_{1}$ (Galassi and Gough, 2009; Johansson, 2017, 2019; Pearson et al., 2017), allowing the numerical calculation of both the covariance (10) and its spectral density (8), the latter being written as a beta mixture of ${}_{0}F_{1}$ functions as in (4). Consequently, the proposed hypergeometric kernel can be used without any difficulty for kriging or for simulation (in the scope of Gaussian random fields) based on matrix decomposition (Alabert, 1987; Davis, 1987), Gibbs sampling (Arroyo et al., 2012; Galli and Gao, 2001; Lantuéjoul and Desassis, 2012), discrete (Chilès and Delfiner, 2012; Dietrich and Newsam, 1993; Pardo-Igúzquiza and Chica-Olmo, 1993; Wood and Chan, 1994) or continuous (Arroyo and Emery, 2020; Emery et al., 2016; Lantuéjoul, 2002; Shinozuka, 1971) Fourier approaches. Covariance (positive semidefinite) kernels also have important applications in various other branches of mathematics, such as numerical analysis, scientific computing and machine learning, where the use of compactly-supported kernels yields sparse Gram matrices and brings substantial savings in storage and computation.
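To make the closed form (10) concrete, here is a minimal evaluation of its radial part with SciPy's `hyp2f1`; the parameter values $(a,\alpha,\beta,\gamma)=(1,2,3.5,3.5)$ in $\mathbb{R}^{3}$ are illustrative choices of ours, selected to satisfy the validity conditions stated below:

```python
import math
from scipy.special import hyp2f1

def g_hypergeom(r, a, alpha, beta, gamma, d):
    """Radial part of the Gauss hypergeometric covariance (10)."""
    c = beta - alpha + gamma - d / 2      # third parameter of the 2F1
    u = 1.0 - (r / a) ** 2                # argument before the positive part
    if u <= 0.0:                          # compact support: zero beyond r = a
        return 0.0
    pref = (math.gamma(beta - d / 2) * math.gamma(gamma - d / 2)
            / (math.gamma(c) * math.gamma(alpha - d / 2)))
    return pref * u ** (c - 1) * hyp2f1(beta - alpha, gamma - alpha, c, u)

vals = [g_hypergeom(r, 1.0, 2.0, 3.5, 3.5, 3) for r in (0.0, 0.3, 0.6, 1.2)]
print(vals)  # decreases from 1 at the origin and vanishes beyond r = a
```

At $r=0$ the Gauss summation of ${}_{2}F_{1}$ at unit argument cancels the prefactor, so the kernel is normalized to one, in agreement with (7).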
The expression (9) bears a resemblance to the Buhmann covariance kernels (Buhmann, 1998, 2001), to the generalized Wendland covariance kernels (Bevilacqua et al., 2020, 2019; Gneiting, 2002; Zastavnyi, 2006) and to the scale mixtures of Wendland kernels defined by Porcu et al. (2013), all of which are also compactly supported. Our proposal, nevertheless, differs from these three families: on the one hand, Buhmann’s integral cannot yield the kernel (9) due to the restrictions on its parameters (the integrand contains a term $(1-t^{\delta})$ with $\delta\leq\frac{1}{2}$ instead of $\delta=1$ in our case). On the other hand, the definition of the generalized Wendland kernel uses a different expression of the integrand, with a $t^{2}$ instead of a $t$ in one of the factors; a similar situation occurs for Porcu’s mixtures of Wendland kernels, which use $\|\boldsymbol{h}\|$ instead of $\|\boldsymbol{h}\|^{2}$ in the integrand. We will see, however, that the family of generalized Wendland covariances is included in the Gauss hypergeometric class of covariance kernels (Section 2.5.2). Other compactly-supported covariance kernels involving the hypergeometric function ${}_{2}F_{1}$ have been proposed by Porcu et al. (2013) and Porcu and Zastavnyi (2014), but none coincides with (10). The previously defined nonnegativity and integrability conditions yield the following restrictions on the parameters for (10) to be a valid univariate covariance kernel.

###### Theorem 1 (Parameter space).

The Gauss hypergeometric covariance (10) is a valid covariance kernel in $\mathbb{R}^{d}$ and, consequently, its spectral density (8) is nonnegative and integrable, if the following sufficient conditions hold:

* • $a>0$;
* • $\alpha>\frac{d}{2}$;
* • $2(\beta-\alpha)(\gamma-\alpha)\geq\alpha$;
* • $2(\beta+\gamma)\geq 6\alpha+1$.
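The sufficient conditions of Theorem 1 translate directly into code; the helper below is our own sketch and also illustrates that the parameter sets are nested across dimensions (lowering $d$ only relaxes the condition $\alpha>\frac{d}{2}$):

```python
def in_parameter_space(alpha, beta, gamma, d):
    """Sufficient validity conditions of Theorem 1 for the Gauss
    hypergeometric covariance in R^d (the set P_d)."""
    return (alpha > d / 2
            and 2 * (beta - alpha) * (gamma - alpha) >= alpha
            and 2 * (beta + gamma) >= 6 * alpha + 1)

# (2, 3.5, 3.5) lies in P_3 (hence also in P_1); lowering alpha to 1
# violates alpha > 3/2 and leaves P_3.
print(in_parameter_space(2.0, 3.5, 3.5, 3))   # True
print(in_parameter_space(1.0, 3.5, 3.5, 3))   # False
```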
In the following, $\mathcal{P}_{d}$ denotes the set of triplets $(\alpha,\beta,\gamma)$ of $\mathbb{R}_{+}^{3}$ satisfying the last three conditions of Theorem 1 (in passing, this notation is consistent with the previous definition of $\mathcal{P}_{0}$) and $\mathcal{G}_{d}$ denotes the set of kernels of the form $\sigma^{2}G_{d}(\cdot;a,\alpha,\beta,\gamma)$ with $\sigma>0$, $a>0$ and $(\alpha,\beta,\gamma)\in\mathcal{P}_{d}$. These kernels are compactly supported, being identically zero outside the ball of radius $a$. Also note that $\mathcal{P}_{d}\subsetneq\mathcal{P}_{d^{\prime}}$ for any $d>d^{\prime}\geq 0$. ### 2.4 Main properties ###### Theorem 2 (Positive definiteness). The $d$-dimensional Gauss hypergeometric covariance kernel (10) is positive definite, not just semidefinite, in $\mathbb{R}^{d}$. ###### Theorem 3 (Restriction to subspaces). The restriction of the $d$-dimensional Gauss hypergeometric covariance kernel (10) to any subspace $\mathbb{R}^{d-k}$, $k\in\\{0,\ldots,d-1\\}$, belongs to the family of Gauss hypergeometric covariance kernels $\mathcal{G}_{d-k}$. ###### Theorem 4 (Extension to higher-dimensional spaces). The extension of the $d$-dimensional Gauss hypergeometric covariance kernel (10) to a higher-dimensional space $\mathbb{R}^{d+k}$, $k\in\mathbb{N}$, belongs to the family of Gauss hypergeometric covariance kernels $\mathcal{G}_{d+k}$ provided that $(\alpha+\frac{k}{2},\beta+\frac{k}{2},\gamma+\frac{k}{2})\in\mathcal{P}_{d+k}$. ###### Remark 1. For any set of finite parameters $(\alpha,\beta,\gamma)\in\mathcal{P}_{d}$, there exists a finite nonnegative integer $k$ such that $(\alpha+\frac{k}{2},\beta+\frac{k}{2},\gamma+\frac{k}{2})\in\mathcal{P}_{d+k}$ and $(\alpha+\frac{k+1}{2},\beta+\frac{k+1}{2},\gamma+\frac{k+1}{2})\not\in\mathcal{P}_{d+k+1}$: the extension of the Gauss hypergeometric covariance kernel with parameters $(\alpha,\beta,\gamma)$ in spaces of dimension greater than $d+k$ is no longer a valid covariance kernel. 
This agrees with Schoenberg’s theorem (Schoenberg, 1938), according to which an isotropic function is a positive semidefinite kernel in Euclidean spaces of any dimension if, and only if, it is a nonnegative mixture of Gaussian covariance kernels, which the Gauss hypergeometric covariance, like any compactly-supported kernel, is not.

###### Theorem 5 (Continuity and smoothness).

The function $(r,a,\alpha,\beta,\gamma)\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ from $\mathbb{R}_{+}\times\mathbb{R}_{+}^{*}\times\mathcal{P}_{d}$ to $\mathbb{R}$ is

* • continuous with respect to $r$ on $[0,+\infty[$ and infinitely differentiable on $]0,a[$ and $]a,+\infty[$;
* • continuous and infinitely differentiable with respect to $a$ on $]0,r[$ and $]r,+\infty[$;
* • continuous and infinitely differentiable with respect to $\alpha$, $\beta$ and $\gamma$.

###### Theorem 6 (Differentiability at $r=a$).

The function $r\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ from $\mathbb{R}_{+}$ to $\mathbb{R}$ is $k$ times differentiable at $r=a$ if, and only if, $\beta-\alpha+\gamma>k+\frac{d}{2}+1$.

###### Theorem 7 (Differentiability at $r=0$).

The function $r\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ from $\mathbb{R}_{+}$ to $\mathbb{R}$ is $k$ times differentiable at $r=0$ (therefore, it can be associated with a $\lfloor k/2\rfloor$ times mean-square differentiable random field, with $\lfloor\cdot\rfloor$ denoting the floor function) if, and only if, $\alpha>\frac{k+d}{2}$.

###### Theorem 8 (Monotonicity).

The function $(r,a,\alpha,\beta,\gamma)\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ from $\mathbb{R}_{+}\times\mathbb{R}_{+}^{*}\times\mathcal{P}_{d}$ to $\mathbb{R}$ is

* • decreasing in $r$ on $[0,a]$ and identically zero on $[a,+\infty[$;
* • increasing in $a$ on $[r,+\infty[$ and identically zero on $]0,r]$;
* • decreasing in $\beta$ if $0<r<a$, constant in $\beta$ if $r=0$ or if $r\geq a$;
* • decreasing in $\gamma$ if $0<r<a$, constant in $\gamma$ if $r=0$ or if $r\geq a$.

###### Theorem 9 (Montée).
If $G_{d}(\cdot;a,\alpha,\beta,\gamma)\in\mathcal{G}_{d}$ and $\mathfrak{M}_{k}$ stands for the transitive upgrading (montée) of order $k$, $k\in\\{0,\ldots,d-1\\}$ (Appendix A), then $\mathfrak{M}_{k}(G_{d}(\cdot;a,\alpha,\beta,\gamma))\in\mathcal{G}_{d-k}$ and its radial part is proportional to $g_{d}(\cdot;a,\alpha+\frac{k}{2},\beta+\frac{k}{2},\gamma+\frac{k}{2})$. In other words, when looking at the radial part of the covariance kernel, the montée of order $k$ amounts to upgrading the $\alpha$, $\beta$ and $\gamma$ parameters by $\frac{k}{2}$.

###### Theorem 10 (Descente).

If $G_{d}(\cdot;a,\alpha,\beta,\gamma)\in\mathcal{G}_{d}$ and $k\in\mathbb{N}$, then $\mathfrak{M}_{-k}(G_{d}(\cdot;a,\alpha,\beta,\gamma))\in\mathcal{G}_{d+k}$ and its radial part is proportional to $g_{d}(\cdot;a,\alpha-\frac{k}{2},\beta-\frac{k}{2},\gamma-\frac{k}{2})$, provided that $(\alpha-\frac{k}{2},\beta-\frac{k}{2},\gamma-\frac{k}{2})\in\mathcal{P}_{d+k}$.

###### Remark 2.

Theorems 6, 7, 9 and 10 show that a montée (descente) of order $2k$ increases (decreases) the differentiability order by $2k$ near the origin, but only by $k$ near the range.

###### Remark 3.

Compare the montée, descente, restriction and extension operations in Theorems 3, 4, 9 and 10. Both the extension and montée of order $k$ upgrade the parameters $\alpha,\beta$ and $\gamma$ by $\frac{k}{2}$, but the latter reduces the space dimension by $k$ whereas the former increases the dimension. Conversely, the restriction and descente of order $k$ downgrade the parameters $\alpha,\beta$ and $\gamma$ by $\frac{k}{2}$, but the latter increases the dimension by $k$ whereas the former reduces the dimension.
### 2.5 Examples

#### 2.5.1 Euclid’s hat (spherical) covariance kernel

For $\alpha>0$, $\beta=\alpha+\frac{1}{2}$ and $\gamma=2\alpha$, the generalized hypergeometric function ${}_{1}F_{2}$ can be expressed in terms of a squared Bessel function (Erdélyi, 1953):
${}_{1}F_{2}\left(\alpha;\alpha+\frac{1}{2},2\alpha;-{(\pi a\|{\boldsymbol{u}}\|)^{2}}\right)=\Gamma^{2}\left(\alpha+\frac{1}{2}\right)\left(\frac{\pi a\|{\boldsymbol{u}}\|}{2}\right)^{1-2\alpha}J^{2}_{\alpha-\frac{1}{2}}(\pi a\|{\boldsymbol{u}}\|).$ (11)
Equations (8) and (11), together with the Legendre duplication formula for the gamma function (Olver et al., 2010, formula 5.5.5), yield the following result, valid for $\boldsymbol{u}\in\mathbb{R}^{d}$ and $\kappa\in\mathbb{N}$:
$\begin{split}\widetilde{G}_{d}&\left({\boldsymbol{u}};a,\frac{d+1}{2}+\kappa,\frac{d}{2}+1+\kappa,d+1+2\kappa\right)\\\ &=\frac{\Gamma(\kappa+1)\Gamma(\frac{d}{2}+1+2\kappa)\Gamma^{2}(\frac{d}{2}+1)}{\pi^{\frac{d-1}{2}}\Gamma(\kappa+\frac{1}{2})\Gamma^{2}(\frac{d}{2}+1+\kappa)2^{2\kappa}\|{\boldsymbol{u}}\|^{d}}J_{\frac{d}{2}+\kappa}^{2}(\pi a\|{\boldsymbol{u}}\|).\end{split}$
One recognizes the spectral density of the montée of order $2\kappa$ of the spherical covariance in $\mathbb{R}^{d}$ (Arroyo and Emery, 2020). The case $\kappa=0$ corresponds to the $d$-dimensional spherical covariance (triangular or tent covariance in $\mathbb{R}$, circular covariance in $\mathbb{R}^{2}$, usual spherical covariance in $\mathbb{R}^{3}$, pentaspherical in $\mathbb{R}^{5}$) (Matheron, 1965, formula II.5.2), also known as Euclid’s hat (Gneiting, 1999), while the cases $\kappa=1$ and $\kappa=2$ correspond to the $d$-dimensional cubic and penta covariances, respectively (Chilès and Delfiner, 2012). Interestingly, these spherical and upgraded spherical kernels can be extended to parameters that are not integer or half-integer by taking $\alpha>\frac{d}{2},\beta=\alpha+\frac{1}{2}$ and $\gamma=2\alpha$ (i.e., $\kappa\not\in\mathbb{N}$).
Such extended kernels correspond to the so-called fractional montée (if $\alpha>\frac{d+1}{2}$) or fractional descente (if $\frac{d}{2}<\alpha<\frac{d+1}{2}$) of the $d$-dimensional spherical covariance kernel (Matheron, 1965; Gneiting, 2002).

#### 2.5.2 Generalized Wendland and Askey covariance kernels

The generalized Wendland covariance in $\mathbb{R}^{d}$ with range $a>0$ and smoothness parameter $\kappa>0$ is defined as:
${\boldsymbol{h}}\mapsto\frac{\Gamma(\ell+2\kappa+1)}{\Gamma(\ell+1)\Gamma(2\kappa)}\int_{0}^{1}t(1-t)^{\ell}\left(t^{2}-\frac{\|{\boldsymbol{h}}\|^{2}}{a^{2}}\right)_{+}^{\kappa-1}\text{d}t,$
with $\ell\geq\frac{d+1}{2}+\kappa$. Bevilacqua et al. (2020), Chernih et al. (2014), Hubbert (2012) and Zastavnyi (2006) showed that this covariance and its spectral density can be written under the forms (10) and (8), respectively, with $\alpha=\frac{d+1}{2}+\kappa$, $\beta=\frac{d+\ell+1}{2}+\kappa$ and $\gamma=\frac{d+\ell}{2}+1+\kappa$. The cases when $\ell=\lfloor\frac{d}{2}+\kappa\rfloor+1$ and $\kappa$ is an integer or a half-integer yield the original (Wendland, 1995) and missing (Schaback, 2011) Wendland functions, respectively. The radial parts of the former are truncated polynomials in $[0,a]$, while those of the latter involve polynomials, logarithms and square root components (Chernih et al., 2014).
The above parameterization with $\kappa=0$, i.e., $\alpha=\frac{d+1}{2}$, $\beta=\frac{d+\ell+1}{2}$ and $\gamma=\frac{d+\ell}{2}+1$, yields the well-known Askey covariance (Askey, 1973), the expression of which can be recovered by using Equation (10) along with formula 15.4.17 of Olver et al. (2010):
$G_{d}\left({\boldsymbol{h}};a,\frac{d+1}{2},\frac{d+\ell+1}{2},\frac{d+\ell}{2}+1\right)=\left(1-\frac{\|\boldsymbol{h}\|}{a}\right)_{+}^{\ell},\quad\boldsymbol{h}\in\mathbb{R}^{d},\,\ell\geq\frac{d+1}{2}.$
In spaces of even dimension, the lower bound $\frac{d+1}{2}$ for $\ell$ is smaller than the bound $\lfloor\frac{d}{2}\rfloor+1$ found by Askey (1973) and agrees with the findings of Gasper (1975).

#### 2.5.3 Truncated power expansions and truncated polynomial covariance kernels

The Gauss hypergeometric covariance reduces to a finite power expansion by choosing $\alpha-\frac{d}{2}\not\in\mathbb{N}$, $\beta-\frac{d}{2}=N\in\mathbb{N}$ and $\gamma-\alpha=M\in\mathbb{N}$. Using formula (19) in Appendix A and the duplication formula for the gamma function, one finds:
$\begin{split}g_{d}&\left(r;a,\alpha,\frac{d}{2}+N,\alpha+M\right)\\\ &=\frac{\Gamma(\frac{d}{2}-\alpha+1)\Gamma(N)}{\Gamma(\frac{d}{2}-\alpha-M+1)}\sum_{n=0}^{N-1}\frac{(-1)^{n}\Gamma(\frac{d}{2}-\alpha-M+1+n)}{\Gamma(\frac{d}{2}-\alpha+1+n)\Gamma(N-n)\,n!}\left(\frac{r}{a}\right)^{2n}\\\ &+\frac{\Gamma(\frac{d}{2}-\alpha+1)\Gamma(N)}{\Gamma(\frac{d}{2}-\alpha-M+1)}\sum_{n=0}^{M-1}\frac{(-1)^{n}\Gamma(\alpha-\frac{d}{2}-N+1+n)}{\Gamma(\alpha-\frac{d}{2}+1+n)\Gamma(M-n)\,n!}\left(\frac{r}{a}\right)^{2n+2\alpha-d},\quad 0\leq r<a.\end{split}$
A similar expansion is found by choosing $\alpha-\frac{d}{2}\not\in\mathbb{N}$, $\gamma-\frac{d}{2}=N\in\mathbb{N}$ and $\beta-\alpha=M\in\mathbb{N}$. In both cases, if $\alpha-\frac{d}{2}$ is a half-integer, the radial part of the covariance is a polynomial function, truncated at zero for $r>a$.
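The Askey closed form above provides a convenient test case for the general expression (10); the following numerical check is our own (parameter values illustrative, with $d=3$, $\ell=3$ and $a=1$):

```python
import math
from scipy.special import hyp2f1

def g_hyp(r, a, alpha, beta, gamma, d):
    """Direct evaluation of the radial part of the covariance (10)."""
    c = beta - alpha + gamma - d / 2
    u = 1.0 - (r / a) ** 2
    if u <= 0.0:
        return 0.0
    pref = (math.gamma(beta - d / 2) * math.gamma(gamma - d / 2)
            / (math.gamma(c) * math.gamma(alpha - d / 2)))
    return pref * u ** (c - 1) * hyp2f1(beta - alpha, gamma - alpha, c, u)

# Askey parameterization in R^3 with exponent l = 3 (l >= (d+1)/2 = 2).
d, l, a = 3, 3, 1.0
alpha, beta, gamma = (d + 1) / 2, (d + l + 1) / 2, (d + l) / 2 + 1
r = 0.4
print(g_hyp(r, a, alpha, beta, gamma, d), (1 - r / a) ** l)  # both ~0.216
```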
The Askey and original Wendland kernels and, when the space dimension $d$ is an odd integer, the spherical kernels are particular cases of these truncated polynomial kernels.

### 2.6 Asymptotic cases

###### Theorem 11 (uniform convergence to the Matérn covariance kernel).

Let $\alpha>\frac{d}{2}$. As $a$, $\beta$ and $\gamma$ tend to infinity such that $\frac{a}{2\sqrt{\beta\gamma}}$ tends to a positive constant $b$, the Gauss hypergeometric covariance converges uniformly on $\mathbb{R}^{d}$ to the Matérn covariance with scale factor $b$ and smoothness parameter $\alpha-\frac{d}{2}$:
$\boldsymbol{h}\mapsto\frac{2}{\Gamma(\alpha-\frac{d}{2})}\left(\frac{\|\boldsymbol{h}\|}{2b}\right)^{\alpha-\frac{d}{2}}{K}_{\alpha-\frac{d}{2}}\left(\frac{\|\boldsymbol{h}\|}{b}\right),\quad\boldsymbol{h}\in\mathbb{R}^{d},$ (12)
where $K_{\alpha-\frac{d}{2}}$ is the modified Bessel function of the second kind of order $\alpha-\frac{d}{2}$.

###### Theorem 12 (uniform convergence to the generalized Laguerre kernel).

As $a$ and $\gamma$ tend to infinity in such a way that $\frac{a}{\sqrt{\gamma}}$ tends to a positive constant $b$, the Gauss hypergeometric covariance converges uniformly on $\mathbb{R}^{d}$ to the covariance kernel
$\boldsymbol{h}\mapsto\frac{\Gamma(\beta-\frac{d}{2})}{\Gamma(\alpha-\frac{d}{2})}\,L\left(\frac{d}{2}-\beta+1,\frac{d}{2}-\alpha+1,\frac{\,\|\boldsymbol{h}\|^{2}}{b^{2}}\right),\quad\boldsymbol{h}\in\mathbb{R}^{d},$
where $L$ is the Laguerre function of the second kind, defined by (Matheron, 1965, formula D.7):
$L(\alpha,\beta,x)=\frac{1}{\Gamma(\beta-\alpha)}\int_{1}^{+\infty}\exp(-u\,x)u^{\alpha-1}(u-1)^{\beta-\alpha-1}du,\quad x\in\mathbb{R}_{+},\beta>\alpha.$
The same result holds by interchanging $\beta$ and $\gamma$.

###### Theorem 13 (uniform convergence to Tricomi’s confluent hypergeometric kernel).
As $\alpha-\frac{d}{2}$ tends to a positive even integer $2n$ and $a$ and $\gamma$ tend to infinity such that $\frac{a}{\sqrt{\gamma}}$ tends to a positive constant $b$, the Gauss hypergeometric covariance converges uniformly on $\mathbb{R}^{d}$ to the covariance kernel
$\boldsymbol{h}\mapsto\frac{\Gamma(\frac{d}{2}-\beta+2n+1)}{\Gamma(2n)}\,U\left(\frac{d}{2}-\beta+1,1-2n,-\frac{\|\boldsymbol{h}\|^{2}}{b^{2}}\right),\quad\boldsymbol{h}\in\mathbb{R}^{d},$
where $U$ is Tricomi’s confluent hypergeometric function (Olver et al., 2010, formula 13.2.6). The same result holds by interchanging $\beta$ and $\gamma$.

###### Theorem 14 (uniform convergence to the incomplete gamma kernel).

As $a$ and $\gamma$ tend to infinity in such a way that $\frac{a}{\sqrt{\gamma}}$ tends to a positive constant $b$ and $\beta=\frac{d}{2}+1$, the Gauss hypergeometric covariance converges uniformly on $\mathbb{R}^{d}$ to the covariance kernel
$\boldsymbol{h}\mapsto Q\left(\alpha-\frac{d}{2},\frac{\|\boldsymbol{h}\|^{2}}{b^{2}}\right),\quad\boldsymbol{h}\in\mathbb{R}^{d},$
where $Q$ is the regularized incomplete gamma function (Olver et al., 2010, formula 8.2.4). The same result holds by interchanging $\beta$ and $\gamma$.

###### Remark 4.

If, furthermore, $\alpha=\frac{d+1}{2}$, one obtains the complementary error function $\operatorname{erfc}(\frac{\|\boldsymbol{h}\|}{b})$, which is positive semidefinite in $\mathbb{R}^{d}$ for any dimension $d$ (Gneiting, 1999).

###### Theorem 15 (uniform convergence to the Gaussian kernel, part 1).

As $a,\alpha,\beta,\gamma$ tend to infinity in such a way that $a\sqrt{\frac{\alpha}{\beta\gamma}}$ tends to a positive constant $b$, the Gauss hypergeometric covariance converges uniformly on $\mathbb{R}^{d}$ to the Gaussian covariance with scale factor $b$:
$\boldsymbol{h}\mapsto\exp\left(-\frac{\|\boldsymbol{h}\|^{2}}{b^{2}}\right),\quad\boldsymbol{h}\in\mathbb{R}^{d}.$ (13)

###### Theorem 16 (uniform convergence to the Gaussian kernel, part 2).
As $\beta$ tends to $\alpha$ and $a$ and $\gamma$ tend to infinity in such a way that $(\alpha,\beta,\gamma)\in\mathcal{P}_{d}$ and $\frac{a}{\sqrt{\gamma}}$ tends to a positive constant $b$, the Gauss hypergeometric covariance converges uniformly on $\mathbb{R}^{d}$ to the Gaussian covariance with scale factor $b$. The same result holds by interchanging $\beta$ and $\gamma$. ###### Remark 5. All the previous asymptotic kernels are positive semidefinite in Euclidean spaces of any dimension $d$, as the parameters $(\alpha,\beta,\gamma)$ can belong to $\mathcal{P}_{d}$ for sufficiently large $\beta$ and/or $\gamma$ values. ## 3 Multivariate compactly-supported hypergeometric covariance kernels Let $p$ be a positive integer and consider a $p\times p$ matrix-valued kernel as: ${\boldsymbol{G}}_{d}({\boldsymbol{h}};{\boldsymbol{a}},{\boldsymbol{\alpha}},{\boldsymbol{\beta}},{\boldsymbol{\gamma}},\boldsymbol{\rho})=[\rho_{ij}G_{d}({\boldsymbol{h}};a_{ij},\alpha_{ij},\beta_{ij},\gamma_{ij})]_{i,j=1}^{p},\quad\boldsymbol{h}\in\mathbb{R}^{d},$ (14) where ${\boldsymbol{a}}=[a_{ij}]_{i,j=1}^{p}$, ${\boldsymbol{\alpha}}=[\alpha_{ij}]_{i,j=1}^{p}$, ${\boldsymbol{\beta}}=[\beta_{ij}]_{i,j=1}^{p}$, ${\boldsymbol{\gamma}}=[\gamma_{ij}]_{i,j=1}^{p}$ and ${\boldsymbol{\rho}}=[\rho_{ij}]_{i,j=1}^{p}$ are symmetric real-valued matrices of size $p\times p$. The following theorem establishes various sufficient conditions on these matrices for ${\boldsymbol{G}}_{d}({\boldsymbol{h}};{\boldsymbol{a}},{\boldsymbol{\alpha}},{\boldsymbol{\beta}},{\boldsymbol{\gamma}},\boldsymbol{\rho})$ to be a valid matrix-valued covariance kernel in $\mathbb{R}^{d}$. ###### Theorem 17 (Multivariate sufficient validity conditions). The $p$-variate Gauss hypergeometric kernel (14) is a valid matrix-valued covariance kernel in $\mathbb{R}^{d}$ if the following sufficient conditions hold (see the definitions of conditionally negative semidefinite matrices and multiply monotone functions in Appendix A): * (1). 
* (i) $\boldsymbol{a}=a\boldsymbol{1}$ with $a>0$; * (ii) $\boldsymbol{\alpha}=\alpha\boldsymbol{1}$; * (iii) $\boldsymbol{\beta}$ is symmetric and conditionally negative semidefinite; * (iv) $\boldsymbol{\gamma}$ is symmetric and conditionally negative semidefinite; * (v) $(\alpha,\beta_{ij},\gamma_{ij})\in\mathcal{P}_{d}$ for all $i,j$ in $[1,\ldots,p]$; * (vi) $(\alpha,\beta,\gamma)\in\mathcal{P}_{0}$, with $\beta<\beta_{ij}$ and $\gamma<\gamma_{ij}$ for all $i,j$ in $[1,\ldots,p]$; * (vii) $\left[\frac{\rho_{ij}\Gamma(\beta_{ij}-\frac{d}{2})\Gamma(\gamma_{ij}-\frac{d}{2})}{\Gamma(\beta_{ij}-\beta)\Gamma(\gamma_{ij}-\gamma)}\right]_{i,j=1}^{p}$ is symmetric and positive semidefinite; * or * (2). * (i) $a_{ij}=\max\\{\varepsilon_{i},\varepsilon_{j}\\}$ if $i\neq j$ and $a_{ii}=\varepsilon_{i}-\delta_{i}$, with $0\leq\delta_{i}<\varepsilon_{i}$ for $i=1,\ldots,p$; * (ii) $\boldsymbol{\alpha}=\alpha\boldsymbol{1}$; * (iii) $\boldsymbol{\beta}$ is symmetric and conditionally negative semidefinite; * (iv) $\boldsymbol{\gamma}$ is symmetric and conditionally negative semidefinite; * (v) $(\alpha,\beta_{ij},\gamma_{ij})\in\mathcal{P}_{d}$ for all $i,j$ in $[1,\ldots,p]$; * (vi) $(\alpha+1,\beta+1,\gamma+1)\in\mathcal{P}_{0}$; * (vii) $\left[\frac{\rho_{ij}a_{ij}^{d}\Gamma(\beta_{ij}-\frac{d}{2})\Gamma(\gamma_{ij}-\frac{d}{2})}{\Gamma(\beta_{ij}-\beta)\Gamma(\gamma_{ij}-\gamma)}\right]_{i,j=1}^{p}$ is symmetric and positive semidefinite; * or * (3). 
* (i) $a_{ij}^{2}=\psi_{1}(\|\boldsymbol{s}_{i}-\boldsymbol{s}_{j}\|)$, with $\psi_{1}$ a positive function in $\mathbb{R}_{+}$ that has a $(q+1)$-times monotone derivative, $q\in\mathbb{N}$ and $\boldsymbol{s}_{1},\ldots,\boldsymbol{s}_{p}\in\mathbb{R}^{2q+1}$; * (ii) $\boldsymbol{\alpha}=\alpha\boldsymbol{1}$; * (iii) $\boldsymbol{\beta}$ is symmetric and conditionally negative semidefinite; * (iv) $\boldsymbol{\gamma}$ is symmetric and conditionally negative semidefinite; * (v) $(\alpha,\beta_{ij},\gamma_{ij})\in\mathcal{P}_{d}$ for all $i,j$ in $[1,\ldots,p]$; * (vi) $(\alpha+q+2,\beta+q+2,\gamma+q+2)\in\mathcal{P}_{0}$ for $q\in\mathbb{N}$; * (vii) $\left[\frac{\rho_{ij}a_{ij}^{d}\Gamma(\beta_{ij}-\frac{d}{2})\Gamma(\gamma_{ij}-\frac{d}{2})}{\Gamma(\beta_{ij}-\beta)\Gamma(\gamma_{ij}-\gamma)}\right]_{i,j=1}^{p}$ is symmetric and positive semidefinite; * or * (4). * (i) $a_{ij}=a$ if $i\neq j$ and $a_{ii}=a-\delta_{i}$, with $0\leq\delta_{i}<a$ for $i=1,\ldots,p$; * (ii) $\alpha_{ij}=\psi_{2}(\|\boldsymbol{t}_{i}-\boldsymbol{t}_{j}\|)$, with $\psi_{2}$ a function in $\mathbb{R}_{+}$ with values in $]0,\frac{2\gamma-1}{4}]$ and a $(q^{\prime}+1)$-times monotone derivative, $q^{\prime}\in\mathbb{N}$, $\gamma>\frac{1}{2}$ and $\boldsymbol{t}_{1},\ldots,\boldsymbol{t}_{p}\in\mathbb{R}^{2q^{\prime}+1}$; * (iii) $\boldsymbol{\beta}-\boldsymbol{\alpha}-\boldsymbol{1}$ is symmetric, conditionally negative semidefinite and with positive entries; * (iv) $\boldsymbol{\gamma}$ is symmetric and conditionally negative semidefinite; * (v) $\left[\frac{\rho_{ij}a_{ij}^{d}\Gamma(\beta_{ij}-\frac{d}{2})\Gamma(\gamma_{ij}-\frac{d}{2})}{\alpha_{ij}\Gamma(\alpha_{ij}-\frac{d}{2})\Gamma(\beta_{ij}-\alpha_{ij}-1)\Gamma(\gamma_{ij}-\gamma)}\right]_{i,j=1}^{p}$ is symmetric and positive semidefinite; * or * (5). 
* (i) $a_{ij}^{2}=\psi_{1}(\|\boldsymbol{s}_{i}-\boldsymbol{s}_{j}\|)$, with $\psi_{1}$ a positive function that has a $(q+1)$-times monotone derivative, $q\in\mathbb{N}$ and $\boldsymbol{s}_{1},\ldots,\boldsymbol{s}_{p}\in\mathbb{R}^{2q+1}$; * (ii) $\alpha_{ij}=\psi_{2}(\|\boldsymbol{t}_{i}-\boldsymbol{t}_{j}\|)$, with $\psi_{2}$ a positive function in $\mathbb{R}_{+}$ with values in $]0,\frac{2\gamma-1}{4}]$ and a $(q^{\prime}+1)$-times monotone derivative, $q^{\prime}\in\mathbb{N}$, $\gamma>\frac{1}{2}$ and $\boldsymbol{t}_{1},\ldots,\boldsymbol{t}_{p}\in\mathbb{R}^{2q^{\prime}+1}$; * (iii) $\boldsymbol{\beta}-\boldsymbol{\alpha}-\boldsymbol{1}$ is symmetric, conditionally negative semidefinite and with positive entries; * (iv) $\boldsymbol{\gamma}$ is symmetric and conditionally negative semidefinite; * (v) $(\alpha_{ij}+q+3,\alpha_{ij}+q+4,\gamma+q+3)\in\mathcal{P}_{0}$ for all $i,j$ in $[1,\ldots,p]$; * (vi) $\left[\frac{\rho_{ij}a_{ij}^{d+2}\Gamma(\beta_{ij}-\frac{d}{2})\Gamma(\gamma_{ij}-\frac{d}{2})}{\alpha_{ij}\Gamma(\alpha_{ij}-\frac{d}{2})\Gamma(\beta_{ij}-\alpha_{ij}-1)\Gamma(\gamma_{ij}-\gamma)}\right]_{i,j=1}^{p}$ is symmetric and positive semidefinite. The conditions derived by interchanging $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$ in (4) and (5) also lead to a valid covariance kernel. 
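Several of the conditions in Theorem 17 require matrices to be conditionally negative semidefinite, i.e. $\boldsymbol{x}^{\top}M\boldsymbol{x}\leq 0$ whenever the entries of $\boldsymbol{x}$ sum to zero. A small numerical test (our own sketch, not from the paper) projects onto the sum-zero subspace and inspects eigenvalues:

```python
import numpy as np

def is_cond_neg_semidefinite(M, tol=1e-10):
    """Test x^T M x <= 0 for all x with sum(x) = 0, by checking that
    P M P is negative semidefinite, P being the orthogonal projector
    onto the sum-zero subspace."""
    M = np.asarray(M, dtype=float)
    p = M.shape[0]
    P = np.eye(p) - np.ones((p, p)) / p
    return bool(np.all(np.linalg.eigvalsh(P @ M @ P) <= tol))

# The distance matrix |i - j| is conditionally negative semidefinite,
# whereas the identity is not (x^T I x = ||x||^2 > 0 on sum-zero x).
D = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
print(is_cond_neg_semidefinite(D), is_cond_neg_semidefinite(np.eye(3)))  # True False
```

Projecting with $P=I-\frac{1}{p}\boldsymbol{1}\boldsymbol{1}^{\top}$ restricts the quadratic form to the sum-zero subspace, so $M$ is conditionally negative semidefinite exactly when $PMP$ has no positive eigenvalue (up to rounding).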
## 4 Specific bivariate compactly-supported hypergeometric covariance kernels

In addition to the general sufficient conditions established in Theorem 17, one can obtain three specific bivariate kernels by satisfying the following determinantal inequality:
$\widetilde{G}_{d}({\boldsymbol{u}};a_{11},\alpha_{11},\beta_{11},\gamma_{11})\widetilde{G}_{d}({\boldsymbol{u}};a_{22},\alpha_{22},\beta_{22},\gamma_{22})\geq\rho_{12}^{2}\,\widetilde{G}_{d}^{2}({\boldsymbol{u}};a_{12},\alpha_{12},\beta_{12},\gamma_{12}),\quad{\boldsymbol{u}}\in\mathbb{R}^{d}.$

* (i) For $x>0,\alpha>0,\beta\in]\alpha+\frac{1}{2},2\alpha]$, one has the following inequality (Cho and Yun, 2018, Theorem 5.1):
${}_{1}F_{2}\left(\alpha;\beta,3\alpha+\frac{1}{2}-\beta;-\frac{x^{2}}{4}\right)\geq\Gamma^{2}\left(\alpha+\frac{1}{2}\right)\left(\frac{x}{2}\right)^{1-2\alpha}J^{2}_{\alpha-\frac{1}{2}}\left(x\right).$
This implies that a valid bivariate kernel can be obtained by putting:
$\boldsymbol{a}=\left[\begin{matrix}a&a\\\ a&a\end{matrix}\right],\boldsymbol{\alpha}=\left[\begin{matrix}\alpha&\alpha\\\ \alpha&\alpha\end{matrix}\right],\boldsymbol{\beta}=\left[\begin{matrix}\beta_{1}&\alpha+\frac{1}{2}\\\ \alpha+\frac{1}{2}&\beta_{2}\end{matrix}\right],\boldsymbol{\gamma}=\left[\begin{matrix}3\alpha+\frac{1}{2}-\beta_{1}&2\alpha\\\ 2\alpha&3\alpha+\frac{1}{2}-\beta_{2}\end{matrix}\right],\boldsymbol{\rho}=\left[\begin{matrix}1&\rho\\\ \rho&1\end{matrix}\right],$
with $a>0$, $\alpha>\frac{d}{2}$, $\alpha+\frac{1}{2}<\beta_{1}\leq 2\alpha$, $\alpha+\frac{1}{2}<\beta_{2}\leq 2\alpha$ and
$\rho^{2}\leq\frac{\Gamma^{2}(\alpha+\frac{1}{2})\Gamma^{2}(2\alpha)\Gamma(\beta_{1}-\frac{d}{2})\Gamma(\beta_{2}-\frac{d}{2})\Gamma(3\alpha-\beta_{1}+\frac{1-d}{2})\Gamma(3\alpha-\beta_{2}+\frac{1-d}{2})}{\Gamma^{2}(\alpha+\frac{1-d}{2})\Gamma^{2}(2\alpha-\frac{d}{2})\Gamma(\beta_{1})\Gamma(\beta_{2})\Gamma(3\alpha-\beta_{1}+\frac{1}{2})\Gamma(3\alpha-\beta_{2}+\frac{1}{2})}.$

* (ii) The same line of reasoning applies
with the inequality (Cho and Yun, , 2018, Theorem 5.2): ${}_{1}F_{2}\left(\alpha;\beta,\alpha+\beta-\frac{1}{2};-\frac{x^{2}}{4}\right)\geq\Gamma^{2}\left(\beta\right)\left(\frac{x}{2}\right)^{2-2\beta}J^{2}_{\beta-1}\left(x\right),\quad x>0,\alpha>0,\beta\geq\alpha+\frac{1}{2}.$ This implies the validity of the following kernel in $\mathbb{R}^{d}$: $\boldsymbol{a}=\left[\begin{matrix}a&a\\\ a&a\end{matrix}\right],\boldsymbol{\alpha}=\left[\begin{matrix}\alpha_{1}&\beta-\frac{1}{2}\\\ \beta-\frac{1}{2}&\alpha_{2}\end{matrix}\right],\boldsymbol{\beta}=\left[\begin{matrix}\beta&\beta\\\ \beta&\beta\end{matrix}\right],\boldsymbol{\gamma}=\left[\begin{matrix}\alpha_{1}+\beta-\frac{1}{2}&2\beta-1\\\ 2\beta-1&\alpha_{2}+\beta-\frac{1}{2}\end{matrix}\right],\boldsymbol{\rho}=\left[\begin{matrix}1&\rho\\\ \rho&1\end{matrix}\right],$ with $a>0$, $\alpha_{1}>\frac{d}{2}$, $\alpha_{2}>\frac{d}{2}$, $\beta\geq\max\\{\alpha_{1},\alpha_{2}\\}+\frac{1}{2}$ and $\rho^{2}\leq\frac{\Gamma(\alpha_{1})\Gamma(\alpha_{2})\Gamma(\alpha_{1}+\beta-\frac{d+1}{2})\Gamma(\alpha_{2}+\beta-\frac{d+1}{2})\Gamma^{2}(\beta-\frac{d+1}{2})\Gamma^{2}(2\beta-1)}{\Gamma(\alpha_{1}-\frac{d}{2})\Gamma(\alpha_{2}-\frac{d}{2})\Gamma(\alpha_{1}+\beta-\frac{1}{2})\Gamma(\alpha_{2}+\beta-\frac{1}{2})\Gamma^{2}(\beta-\frac{1}{2})\Gamma^{2}(2\beta-1-\frac{d}{2})}.$ * (iii) Likewise, one has (Cho and Yun, , 2018, Theorem 5.3): ${}_{1}F_{2}\left(\alpha;\beta,2\alpha;-\frac{x^{2}}{4}\right)\geq\Gamma^{2}\left(\beta\right)\left(\frac{x}{2}\right)^{2-2\beta}J^{2}_{\beta-1}\left(x\right),\quad x>0,\alpha>0,\beta\geq\alpha+\frac{1}{2}.$ This implies the validity of the following kernel in $\mathbb{R}^{d}$: $\boldsymbol{a}=\left[\begin{matrix}a&a\\\ a&a\end{matrix}\right],\boldsymbol{\alpha}=\left[\begin{matrix}\alpha_{1}&\beta-\frac{1}{2}\\\ \beta-\frac{1}{2}&\alpha_{2}\end{matrix}\right],\boldsymbol{\beta}=\left[\begin{matrix}\beta&\beta\\\ 
\beta&\beta\end{matrix}\right],\boldsymbol{\gamma}=\left[\begin{matrix}2\alpha_{1}&2\beta-1\\\ 2\beta-1&2\alpha_{2}\end{matrix}\right],\boldsymbol{\rho}=\left[\begin{matrix}1&\rho\\\ \rho&1\end{matrix}\right],$ with $a>0$, $\alpha_{1}>\frac{d}{2}$, $\alpha_{2}>\frac{d}{2}$, $\beta\geq\max\\{\alpha_{1},\alpha_{2}\\}+\frac{1}{2}$ and $\rho^{2}\leq\frac{\Gamma(\alpha_{1})\Gamma(\alpha_{2})\Gamma(2\alpha_{1}-\frac{d}{2})\Gamma(2\alpha_{2}-\frac{d}{2})\Gamma^{2}(\beta-\frac{d+1}{2})\Gamma^{2}(2\beta-1)}{\Gamma(\alpha_{1}-\frac{d}{2})\Gamma(\alpha_{2}-\frac{d}{2})\Gamma(2\alpha_{1})\Gamma(2\alpha_{2})\Gamma^{2}(\beta-\frac{1}{2})\Gamma^{2}(2\beta-1-\frac{d}{2})}.$ These kernels fall outside the cases presented in Theorem 17, insofar as $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$ are not conditionally negative semidefinite in kernel (i), $\boldsymbol{\alpha}$ is not proportional to the all-ones matrix in kernels (ii) and (iii), and $\boldsymbol{\beta}-\boldsymbol{\alpha}$ is not conditionally negative semidefinite in any of the three kernels. Interestingly, if $\alpha$ (kernel (i)) or $\beta$ (kernels (ii) and (iii)) is an integer or a half-integer, the cross-covariances (off-diagonal entries of $G_{d}$) are univariate spherical kernels, but the direct covariances (diagonal entries of $G_{d}$) are not, unless $\beta_{1}=\beta_{2}=2\alpha$ or $\alpha_{1}=\alpha_{2}=\beta-\frac{1}{2}$, respectively.

## 5 Concluding remarks

The class of Gauss hypergeometric covariance kernels presented in this work includes the stationary univariate kernels that are most widely used in spatial statistics: spherical, Askey, generalized Wendland and, as asymptotic cases, Matérn and Gaussian. Figure 1 maps these kernels in the parameter space $\mathcal{P}_{d}$. Concerning multivariate covariance kernels, under Conditions (1) of Theorem 17, $\boldsymbol{a}$ is proportional to the all-ones matrix, i.e., all the direct and cross-covariances share the same range.
In contrast, Conditions (2) and (3) allow different ranges, at the price of additional restrictions on the shape parameters $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$ and $\boldsymbol{\gamma}$ that exclude a few covariance kernels located on the boundary of the parameter space $\mathcal{P}_{d}$, such as the spherical and Askey kernels. More interestingly, Conditions (4) and (5) allow both $\boldsymbol{a}$ and $\boldsymbol{\alpha}$ not to be proportional to the all-ones matrix, i.e., the direct and cross-covariances need share neither the same range nor the same behavior at the origin. This versatility makes the proposed multivariate Gauss hypergeometric covariance kernel a compactly-supported competitor of the well-known multivariate Matérn kernel (Apanasovich et al., , 2012).

Figure 1: Positioning of common covariance kernels in the parameter space $\mathcal{P}_{d}$ (with, here, $d=3$) of the hypergeometric covariance. The colored lines represent the upper boundary of $\mathcal{P}_{3}$, the color being a function of $\alpha$, from the lowest (blue) to the highest (red) values; the thin lines correspond to the spherical and Askey-Wendland families. The lower boundary of $\mathcal{P}_{3}$ is the white plane $\alpha=\frac{3}{2}$. The greater $\alpha$, the more regular the hypergeometric covariance at the origin.

## Appendices

## Appendix A Technical definitions and lemmas

###### Definition 1 (Montée and descente). For $k\in\mathbb{N}$, $k<d$, the transitive upgrading or montée of order $k$ is the operator $\mathfrak{M}_{k}$ that transforms an isotropic covariance in $\mathbb{R}^{d}$ into an isotropic covariance in $\mathbb{R}^{d-k}$ with the same radial spectral density (Matheron, , 1965). The reciprocal operator is the transitive downgrading (descente) of order $k$ and is denoted as $\mathfrak{M}_{-k}$.

###### Definition 2 (Conditionally negative semidefinite matrix).
A $p\times p$ symmetric real-valued matrix $\boldsymbol{A}$ is conditionally negative semidefinite if, for any vector $\boldsymbol{\omega}$ in $\mathbb{R}^{p}$ whose components add to zero, one has $\boldsymbol{\omega}^{\top}\,\boldsymbol{A}\boldsymbol{\omega}\leq 0$. ###### Example 1. Examples of conditionally negative semidefinite matrices include the all-ones matrix $\boldsymbol{1}$ or the matrix $\boldsymbol{A}=[a_{ij}]_{i,j=1}^{p}$ with $a_{ij}=\frac{\eta_{i}+\eta_{j}}{2}+\psi(\boldsymbol{s}_{i},\boldsymbol{s}_{j}),$ for any $\eta_{1},\ldots,\eta_{p}$ in $\mathbb{R}$, $\boldsymbol{s}_{1},\ldots,\boldsymbol{s}_{p}$ in $\mathbb{R}^{d}$, and variogram $\psi$ on $\mathbb{R}^{d}\times\mathbb{R}^{d}$ (Matheron, , 1965; Chilès and Delfiner, , 2012). Also, the set of conditionally negative semidefinite matrices is a closed convex cone, so that the product of a conditionally negative semidefinite matrix with a nonnegative constant, the sum of two conditionally negative semidefinite matrices, or the limit of a convergent sequence of conditionally negative semidefinite matrices are still conditionally negative semidefinite. ###### Lemma 1 (Berg et al., (1984)). A symmetric real-valued matrix $\boldsymbol{A}=[a_{ij}]_{i,j=1}^{p}$ is conditionally negative semidefinite if and only if $[\exp(-t\,a_{ij})]_{i,j=1}^{p}$ is positive semidefinite for all $t\geq 0$. ###### Definition 3 (Multiply monotone function). For $q\in\mathbb{N}$, a $q$-times differentiable function $\varphi$ on $\mathbb{R}_{+}$ is $(q+2)$-times monotone if $(-1)^{k}\varphi^{(k)}$ is nonnegative, nonincreasing and convex for $k=0,\ldots,q$. A $1$-time monotone function is a nonnegative and nonincreasing function on $\mathbb{R}_{+}$ (Williamson, , 1956). ###### Lemma 2 (Williamson, (1956)). A $(q+2)$-times monotone function, $q\geq-1$, admits the expression $\varphi(x)=\int_{0}^{+\infty}(1-t\,x)_{+}^{q+1}\,\nu(\textnormal{d}t),\qquad x\in\mathbb{R}_{+},$ (15) where $\nu$ is a nonnegative measure. 
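Lemma 1 lends itself to a quick numerical illustration. The sketch below (an illustrative check, not a proof) builds a conditionally negative semidefinite matrix as in Example 1, using the variogram $\psi(\boldsymbol{s},\boldsymbol{t})=\|\boldsymbol{s}-\boldsymbol{t}\|$ together with arbitrary sites and constants, and verifies that $[\exp(-t\,a_{ij})]$ stays positive semidefinite over a grid of $t\geq 0$.

```python
# Numerical illustration of Lemma 1: A built as in Example 1 is conditionally
# negative semidefinite, so [exp(-t * a_ij)] should be positive semidefinite
# for every t >= 0. The sites, the constants eta and the variogram
# psi(s, t) = ||s - t|| are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal((5, 2))               # 5 arbitrary sites in R^2
eta = rng.standard_normal(5)

# a_ij = (eta_i + eta_j)/2 + ||s_i - s_j||  (Example 1)
D = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
A = (eta[:, None] + eta[None, :]) / 2 + D

def is_psd(M, tol=1e-8):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

checks = [is_psd(np.exp(-t * A)) for t in np.linspace(0.0, 5.0, 21)]
print(all(checks))  # True: the exponential matrices stay positive semidefinite
```

The factorization $\exp(-t\,a_{ij})=e^{-t\eta_i/2}\,e^{-t\|\boldsymbol{s}_i-\boldsymbol{s}_j\|}\,e^{-t\eta_j/2}$ explains the outcome: the middle factor samples the exponential covariance, and diagonal scaling preserves positive semidefiniteness.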
###### Example 2. Examples of $(q+2)$-times monotone functions include the truncated power function $x\mapsto b+(1-\frac{x}{a})_{+}^{\eta}$ with $a>0$, $b\geq 0$ and $\eta\geq q+1$, the completely monotone functions, and positive mixtures and products of such functions. ###### Lemma 3. Let $q\in\mathbb{N}$, $\alpha,\beta$, $\gamma\in\mathbb{R}_{+}^{*}$ and $\psi_{1}$ a positive function in $\mathbb{R}_{+}$ whose derivative is $(q+1)$-times monotone. Then, the function $\Phi_{1}:\mathbb{R}^{2q+1}\to\mathbb{R}$ defined by $\Phi_{1}({\boldsymbol{x}})={}_{1}F_{2}\left(\alpha;\beta,\gamma;-\psi_{1}(\|{\boldsymbol{x}}\|)\right),\quad{\boldsymbol{x}}\in\mathbb{R}^{2q+1},$ (16) is a stationary isotropic covariance kernel in $\mathbb{R}^{2q+1}$ if $(\alpha+q+2,\beta+q+2,\gamma+q+2)\in\mathcal{P}_{0}$. ###### Example 3. Examples of functions $\psi_{1}$ satisfying the conditions of Lemma 3 include the integrated truncated power function $\psi_{1}(x)=b\,x+c-(1-\frac{x}{a})_{+}^{\eta+1}$ ($a>0$, $b>0$, $c>1$ and $\eta\geq q$) and the Bernstein functions (positive primitives of completely monotone functions), e.g. (Schilling et al., , 2010): * • $\psi_{1}(x)=1+\log\left(1+\frac{x}{b}\right)$ with $b>0$; * • $\psi_{1}(x)=\left(1+b\,x^{\eta}\right)^{\theta}$ with $b>0$, $\eta\in]0,1]$ and $\theta\in]0,1]$; * • $\psi_{1}(x)=1+x\,(x+b)^{-\eta}$ with $b>0$ and $\eta\in]0,1]$. ###### Lemma 4. Let $q^{\prime}\in\mathbb{N}$, $\gamma>0$, $x>0$ and $\psi_{2}$ a positive function in $\mathbb{R}_{+}$ upper bounded by $\alpha_{\max}=\frac{2\gamma-1}{4}$ and whose derivative is $(q^{\prime}+1)$-times monotone. Then, the function $\Phi_{2}:\mathbb{R}^{2q^{\prime}+1}\to\mathbb{R}$ defined by $\Phi_{2}(\boldsymbol{y})={}_{1}F_{2}\left(\psi_{2}(\|\boldsymbol{y}\|);\psi_{2}(\|\boldsymbol{y}\|)+1,\gamma;-x\right),\quad\boldsymbol{y}\in\mathbb{R}^{2q^{\prime}+1},$ (17) is a stationary isotropic covariance kernel in $\mathbb{R}^{2q^{\prime}+1}$. ###### Lemma 5. 
Let $q,q^{\prime}\in\mathbb{N}$, $\gamma>0$, $\psi_{1}$ a positive function in $\mathbb{R}_{+}$ with a $(q+1)$-times monotone derivative, and $\psi_{2}$ a positive function in $\mathbb{R}_{+}$ upper bounded by $\alpha_{\max}=\frac{2\gamma-1}{4}$ and with a $(q^{\prime}+1)$-times monotone derivative. Then, the function $\Phi:\mathbb{R}^{2q+1}\times\mathbb{R}^{2q^{\prime}+1}\to\mathbb{R}$ defined by $\Phi(\boldsymbol{x},\boldsymbol{y})=\frac{1}{\psi_{1}(\|\boldsymbol{x}\|)}{}_{1}F_{2}\left(\psi_{2}(\|\boldsymbol{y}\|);\psi_{2}(\|\boldsymbol{y}\|)+1,\gamma;-\psi_{1}(\|\boldsymbol{x}\|)\right),\quad\boldsymbol{x}\in\mathbb{R}^{2q+1},\boldsymbol{y}\in\mathbb{R}^{2q^{\prime}+1},$ (18) is positive semidefinite in $\mathbb{R}^{2q+1}\times\mathbb{R}^{2q^{\prime}+1}$ if $(\alpha+q+3,\alpha+q+4,\gamma+q+3)\in\mathcal{P}_{0}$.

## Appendix B Proofs

###### Proof of Theorem 2. Let $(\alpha,\beta,\gamma)\in\mathcal{P}_{d}$. As the complex extension of the generalized hypergeometric function $x\mapsto{}_{1}F_{2}(\alpha;\beta,\gamma;x)$, $x\in\mathbb{C}$, is an entire function not identically equal to zero, its zeroes (if they exist) are isolated. It follows that there exists a nonempty open interval $I\subseteq\mathbb{R}$ such that ${}_{1}F_{2}(\alpha;\beta,\gamma;x)$ does not vanish, hence is positive, for all $x\in I$. Accordingly, the support of the spectral density (8) contains a nonempty open set of $\mathbb{R}^{d}$, which implies that the associated covariance kernel is positive definite (Dolloff et al., , 2006). ∎

###### Proof of Theorem 3. The claim stems from the fact that $g_{d-k}(\cdot;a,\alpha-\frac{k}{2},\beta-\frac{k}{2},\gamma-\frac{k}{2})$ is the same as $g_{d}(\cdot;a,\alpha,\beta,\gamma)$ and that $(\alpha-\frac{k}{2},\beta-\frac{k}{2},\gamma-\frac{k}{2})\in\mathcal{P}_{d-k}$ as soon as $(\alpha,\beta,\gamma)\in\mathcal{P}_{d}$. ∎

###### Proof of Theorem 4.
The proof is analogous to that of Theorem 3, with the additional restriction to ensure that the extended covariance remains valid in $\mathbb{R}^{d+k}$. ∎

###### Proof of Theorem 5. The continuity and differentiability with respect to $r$ stem from the fact that the Gauss hypergeometric function $x\mapsto{}_{2}F_{1}(a_{1},a_{2};b_{1};x)$ with $b_{1}-a_{1}-a_{2}>0$ is continuous on the interval $[0,1]$, equal to $1$ at $x=0$, and infinitely differentiable on $]0,1[$. One deduces the continuity and differentiability with respect to $a$ by noting that, for fixed $\alpha$, $\beta$ and $\gamma$, $g_{d}(r;a,\alpha,\beta,\gamma)$ only depends on $\frac{r}{a}$. Finally, the continuity and differentiability with respect to $\alpha$, $\beta$ and $\gamma$ stem from the fact that the exponential function of base $\left(1-(\frac{r}{a})^{2}\right)_{+}$ and the gamma function are infinitely differentiable wherever they are defined, and the hypergeometric function ${}_{2}F_{1}$ is an entire function of its parameters. ∎

###### Proof of Theorem 6. From (10), it is seen that $r\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ is of the order of $(1-\frac{r}{a})^{\beta+\gamma-\alpha-\frac{d}{2}-1}$ as $r\to a^{-}$, while it is identically zero for $r\to a^{+}$. Hence, this function is $k$ times differentiable (with zero derivatives of order $1,2,\ldots,k$) at $r=a$ if, and only if, $\beta-\alpha+\gamma>k+\frac{d}{2}+1$. ∎

###### Proof of Theorem 7.
Using formula E.2.3 of Matheron, (1965), one obtains, for $\alpha-\frac{d}{2}\not\in\mathbb{N}$: $\begin{split}g_{d}(r;a,\alpha,\beta,\gamma)&={}_{2}F_{1}\left(\frac{d}{2}-\gamma+1,\frac{d}{2}-\beta+1;\frac{d}{2}-\alpha+1;\frac{r^{2}}{a^{2}}\right)\\\ &+\frac{\Gamma(\frac{d}{2}-\alpha)\Gamma(\beta-\frac{d}{2})\Gamma(\gamma-\frac{d}{2})}{\Gamma(\alpha-\frac{d}{2})\Gamma(\beta-\alpha)\Gamma(\gamma-\alpha)}\left(\frac{r}{a}\right)^{2\alpha-d}\\\ &\times{}_{2}F_{1}\left(\alpha-\beta+1,\alpha-\gamma+1;\alpha-\frac{d}{2}+1;\frac{r^{2}}{a^{2}}\right),\quad 0\leq r<a.\end{split}$ (19) The right-hand side of (19) is a power series of $r^{2}$, plus a power series of $r^{2}$ (with a constant nonzero term) multiplied by $r^{2\alpha-d}$. Since $2\alpha-d$ is not an even integer, $r\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ turns out to be $k$ times differentiable at $r=0$ if, and only if, $\alpha>\frac{k+d}{2}$. If $\alpha-\frac{d}{2}\in\mathbb{N}$, then formula E.2.4 of Matheron, (1965) shows that $r\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ is a power series of $r^{2}$ plus a power series of $r^{2}$ (with a constant nonzero term) multiplied by $r^{2\alpha-d}\log(\frac{r}{a})$, and the same conclusion prevails: $r\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ is $k$ times differentiable at $r=0$ if, and only if, $\alpha>\frac{k+d}{2}$. ∎ ###### Proof of Theorem 8. 
Using an integral representation of the Gauss hypergeometric function ${}_{2}F_{1}$ (Gradshteyn and Ryzhik, , 2007, formula 9.111), the restriction of the radial function $g_{d}$ on the interval $[0,a]$ can be written as follows: $\begin{split}g_{d}(r;a,\alpha,\beta,\gamma)=&\frac{\Gamma(\gamma-\frac{d}{2})}{\Gamma(\gamma-\alpha)\Gamma(\alpha-\frac{d}{2})}\left(1-\frac{r^{2}}{a^{2}}\right)^{\beta-\alpha+\gamma-\frac{d}{2}-1}\\\ &\times\int_{0}^{1}t^{\gamma-\alpha-1}(1-t)^{\alpha-\frac{d}{2}-1}\left(1+\frac{t}{1-t}\frac{r^{2}}{a^{2}}\right)^{\alpha-\beta}\text{d}t,\quad r\in[0,a].\end{split}$ (20) Accordingly, on $[0,a]$, $r\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ appears as a beta mixture of powered quadratic functions of the form $r\mapsto(1-\frac{r^{2}}{a^{2}})^{\beta-\alpha+\gamma-\frac{d}{2}-1}$ multiplied by generalized Cauchy covariance functions of the form $r\mapsto(1+\frac{t}{1-t}\frac{r^{2}}{a^{2}})^{\alpha-\beta}$, with $t\in]0,1[$, $a>0$ and $(\alpha,\beta,\gamma)\in\mathcal{P}_{d}$. Since all these functions are nonnegative and decreasing on $[0,a]$, so is $r\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$. The monotonicity in $r$ implies the monotonicity in $a$, insofar as $g_{d}(r;a,\alpha,\beta,\gamma)$ only depends on $\frac{r}{a}$ for fixed $\alpha$,$\beta$ and $\gamma$. Consider the integral representation (9) as a function of $r=\|\boldsymbol{h}\|$, $a$, $\alpha$, $\beta$ and $\gamma$. 
Based on the dominated convergence theorem, this function can be differentiated under the integral sign with respect to parameter $\gamma$, which leads to: $\begin{split}\frac{\partial g_{d}(r;a,\alpha,\beta,\gamma)}{\partial\gamma}&=\frac{\Gamma(\beta-\frac{d}{2})}{\Gamma(\alpha-\frac{d}{2})\Gamma(\beta-\alpha)}\\\ &\times\int_{0}^{1}t^{\alpha-\gamma}(1-t)_{+}^{\beta-\alpha-1}\left(t-\left(\frac{r}{a}\right)^{2}\right)_{+}^{\gamma-\frac{d}{2}-1}\ln\left(1-\frac{r^{2}}{ta^{2}}\right)_{+}\text{d}t.\end{split}$ This equation includes the $r=0$ instance, as $\gamma\mapsto g_{d}(0;a,\alpha,\beta,\gamma)$ is identically equal to $1$. The partial derivative is therefore always negative (if $0<r<a$) or zero (if $r=0$ or $r>a$), implying that $\gamma\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$ is decreasing or constant in $\gamma$, respectively. The same result holds by substituting $\beta$ for $\gamma$ owing to the symmetry of the ${}_{2}F_{1}$ function. ∎

###### Proof of Theorem 9. For $(\alpha,\beta,\gamma)\in\mathcal{P}_{d}$, the radial part of $\mathfrak{M}_{k}(G_{d}(\cdot,a,\alpha,\beta,\gamma))$ is the Hankel transform of order $d-k$ of $\widetilde{g}_{d}(\cdot,a,\alpha,\beta,\gamma)$. From (3), one has $\mathfrak{M}_{k}(G_{d}(\cdot,a,\alpha,\beta,\gamma))=\frac{\zeta_{d}(a,\alpha,\beta,\gamma)}{\zeta_{d-k}(a,\alpha,\beta,\gamma)}G_{d-k}(\cdot,a,\alpha,\beta,\gamma).$ Since $\mathcal{P}_{d}\subset\mathcal{P}_{d-k}$, it follows that $\mathfrak{M}_{k}(G_{d}(\cdot,a,\alpha,\beta,\gamma))\in\mathcal{G}_{d-k}$, its radial part being $\frac{\zeta_{d}(a,\alpha,\beta,\gamma)}{\zeta_{d-k}(a,\alpha,\beta,\gamma)}g_{d-k}(\cdot,a,\alpha,\beta,\gamma)=\frac{\zeta_{d}(a,\alpha,\beta,\gamma)}{\zeta_{d-k}(a,\alpha,\beta,\gamma)}\,g_{d}\left(\cdot,a,\alpha+\frac{k}{2},\beta+\frac{k}{2},\gamma+\frac{k}{2}\right).$ ∎

###### Proof of Theorem 10. The proof follows that of Theorem 9.
The condition $(\alpha-\frac{k}{2},\beta-\frac{k}{2},\gamma-\frac{k}{2})\in\mathcal{P}_{d+k}$ ensures that the downgraded covariance is positive semidefinite in $\mathbb{R}^{d+k}$, based on Theorem 1. ∎ ###### Proof of Theorem 11. The proof relies on expansion (19) of the radial function $r\mapsto g_{d}(r;a,\alpha,\beta,\gamma)$, valid for $r\in[0,a]$ and $\alpha-\frac{d}{2}\not\in\mathbb{N}$. Using formulae 5.5.3 and 5.11.12 of Olver et al., (2010), as well as the theorem of dominated convergence to interchange limits and infinite summations, one finds the following asymptotic equivalence: $\begin{split}g_{d}(r;a,\alpha,\beta,\gamma)&\sim{}_{0}F_{1}\left(;\frac{d}{2}-\alpha+1;\frac{\beta\gamma r^{2}}{a^{2}}\right)\\\ &+\frac{\Gamma(\frac{d}{2}-\alpha)}{\Gamma(\alpha-\frac{d}{2})}\left(\frac{\beta\gamma r^{2}}{a^{2}}\right)^{\alpha-\frac{d}{2}}{}_{0}F_{1}\left(;\alpha-\frac{d}{2}+1;\frac{\beta\gamma r^{2}}{a^{2}}\right),\quad r\leq a,\end{split}$ as $\beta\to+\infty$ and $\gamma\to+\infty$. 
The right-hand side can be expressed in terms of modified Bessel functions of the first ($I_{\eta}$) and second ($K_{\eta}$) kinds thanks to formulae 5.5.3, 10.27.4 and 10.39.9 of Olver et al., (2010), which finally yields: $\begin{split}g_{d}(r;a,\alpha,\beta,\gamma)&\sim\Gamma\left(\frac{d}{2}-\alpha+1\right)\left(\frac{\sqrt{\beta\gamma}r}{a}\right)^{\alpha-\frac{d}{2}}I_{\frac{d}{2}-\alpha}\left(\frac{2\sqrt{\beta\gamma}r}{a}\right)\\\ &+\frac{\Gamma(\frac{d}{2}-\alpha)}{\Gamma(\alpha-\frac{d}{2})}\Gamma\left(\alpha-\frac{d}{2}+1\right)\left(\frac{\sqrt{\beta\gamma}r}{a}\right)^{\alpha-\frac{d}{2}}I_{\alpha-\frac{d}{2}}\left(\frac{2\sqrt{\beta\gamma}r}{a}\right)\\\ &=\frac{2}{\Gamma\left(\alpha-\frac{d}{2}\right)}\left(\frac{\sqrt{\beta\gamma}r}{a}\right)^{\alpha-\frac{d}{2}}K_{\alpha-\frac{d}{2}}\left(\frac{2\sqrt{\beta\gamma}r}{a}\right),\quad r\leq a.\end{split}$ (21) Accordingly, $g_{d}(\cdot;a,\alpha,\beta,\gamma)$ tends pointwise to the radial part of the Matérn covariance (12) by letting $\beta$ and $\gamma$ tend to infinity and $a$ be asymptotically equivalent to $2b\sqrt{\beta\gamma}$. In particular, since $a$ tends to infinity, the pointwise convergence is true for any $r\geq 0$. It is also true if $\alpha-\frac{d}{2}\in\mathbb{N}$, as it suffices to consider the asymptotic equivalence (21) with $\alpha-\delta$ substituted for $\alpha$, $\delta>0$, and then to let $\delta$ tend to zero, both the Gauss hypergeometric and Matérn covariances being continuous with respect to the parameter $\alpha$. Note that the conditions of Theorem 1 are fulfilled when $\alpha$ is fixed and greater than $\frac{d}{2}$ and $\beta$ and $\gamma$ become infinitely large, so that $g_{d}(\cdot;a,\alpha,\beta,\gamma)$ in (21) is the radial part of a valid covariance kernel.
Finally, because $g_{d}(\cdot;a,\alpha,\beta,\gamma)$ is a decreasing function on any compact segment of $\mathbb{R}_{+}$ for sufficiently large $a$ and $\beta$ or $\gamma$ (Theorem 8) and the limit function (the radial part of the Matérn covariance (12)) is continuous on $\mathbb{R}_{+}$, Dini’s second theorem implies that the pointwise convergence is actually uniform on any compact segment of $\mathbb{R}_{+}$. In turn, since all the functions are lower bounded by zero, uniform convergence on a compact segment of $\mathbb{R}_{+}$ implies uniform convergence on $\mathbb{R}_{+}$. ∎

The proofs of Theorems 12 to 16 use the same argument as above to identify pointwise convergence with uniform convergence. This argument will be omitted for the sake of brevity.

###### Proof of Theorem 12. The starting point is the expansion (19) of $g_{d}(\cdot;a,\alpha,\beta,\gamma)$ in $[0,a]$. Using formulae 5.5.3 and 5.11.12 of Olver et al., (2010) and the dominated convergence theorem to interchange limits and infinite summations, one finds the following asymptotic equivalence as $\gamma$ tends to infinity: $\begin{split}g_{d}&\left(r;a,\alpha,\beta,\gamma\right)\sim{}_{1}F_{1}\left(\frac{d}{2}-\beta+1;\frac{d}{2}-\alpha+1;-\frac{\gamma\,r^{2}}{a^{2}}\right)\\\ &+{}_{1}F_{1}\left(\alpha-\beta+1;\alpha-\frac{d}{2}+1;-\frac{\gamma\,r^{2}}{a^{2}}\right)\frac{\Gamma(\frac{d}{2}-\alpha)\Gamma(\beta-\frac{d}{2})}{\Gamma(\alpha-\frac{d}{2})\Gamma(\beta-\alpha)}\left(\frac{\gamma\,r^{2}}{a^{2}}\right)^{\alpha-\frac{d}{2}},\quad 0\leq r<a.\end{split}$ (22) Using formula D.8 of Matheron, (1965) and letting $a\to+\infty$ such that $\frac{a}{\sqrt{\gamma}}\to b>0$ yields the claim. ∎

###### Proof of Theorem 13. The proof relies on (22) and formulae 5.5.3 and 13.2.42 of Olver et al., (2010). ∎

###### Proof of Theorem 14. The proof relies on (22) and formulae 8.2.3, 8.2.4, 8.5.1 and 13.6.3 of Olver et al., (2010). ∎

###### Proof of Theorem 15.
The proof follows from Theorem 11 and the fact that the Matérn covariance (12) with scale parameter $b/(2\sqrt{\alpha})$ and smoothness parameter $\alpha$ tends to the Gaussian covariance (13) as $\alpha\to+\infty$. Following Chernih et al., (2014), the convergence can also be shown by noting that the spectral density (8) of the Gauss hypergeometric covariance is asymptotically equivalent to $\begin{split}\widetilde{G}_{d}({\boldsymbol{u}};a,\alpha,\beta,\gamma)&\sim\left(\frac{\pi a^{2}\alpha}{\beta\gamma}\right)^{\frac{d}{2}}\,\sum_{n=0}^{+\infty}\frac{1}{n!}\left(-\frac{\alpha(\pi a\|{\boldsymbol{u}}\|)^{2}}{\beta\gamma}\right)^{n}\\\ &=\left(\frac{\pi a^{2}\alpha}{\beta\gamma}\right)^{\frac{d}{2}}\,\exp\left(-\frac{\alpha(\pi a\|{\boldsymbol{u}}\|)^{2}}{\beta\gamma}\right),\quad{\boldsymbol{u}}\in\mathbb{R}^{d},\end{split}$ as $\alpha\to+\infty$, $\beta\to+\infty$ and $\gamma\to+\infty$. If, furthermore, $a\to+\infty$ such that $a\sqrt{\frac{\alpha}{\beta\gamma}}\to b>0$, then one obtains: $\widetilde{G}_{d}({\boldsymbol{u}};a,\alpha,\beta,\gamma)\sim\pi^{\frac{d}{2}}b^{d}\,\exp\left(-(\pi b\|{\boldsymbol{u}}\|)^{2}\right),\quad{\boldsymbol{u}}\in\mathbb{R}^{d},$ which coincides with the spectral density of the Gaussian covariance (13) (Arroyo and Emery, , 2020; Lantuéjoul, , 2002). ∎ ###### Proof of Theorem 16. The proof follows from the asymptotic equivalence (22) for $\gamma$ tending to infinity. As $\beta$ tends to $\alpha$ and $a$ tends to infinity in such a way that $\frac{a}{\sqrt{\gamma}}$ tends to $b>0$, the first term in the right- hand side of (22) tends to $\exp(-r^{2}/b^{2})$ and the second term to zero. ∎ ###### Proof of Lemma 3. 
One has $\Phi_{1}({\boldsymbol{x}})=\varphi_{1}\circ\psi_{1}(\|\boldsymbol{x}\|)$, where $\varphi_{1}:x\mapsto{}_{1}F_{2}\left(\alpha;\beta,\gamma;-x\right)$ is an infinitely differentiable function on $\mathbb{R}_{+}$, with (Olver et al., , 2010, formula 16.3.1) $(-1)^{k}\frac{\partial^{k}\varphi_{1}}{\partial x^{k}}(x)=\frac{\Gamma(\alpha+k)\Gamma(\beta)\Gamma(\gamma)}{\Gamma(\alpha)\Gamma(\beta+k)\Gamma(\gamma+k)}{}_{1}F_{2}\left(\alpha+k;\beta+k,\gamma+k;-x\right),\quad x\in\mathbb{R}_{+},k\in\mathbb{N}.$ If $(\alpha+q+2,\beta+q+2,\gamma+q+2)\in\mathcal{P}_{0}$, then, for any $k=0,\ldots,q+2$, $(\alpha+k,\beta+k,\gamma+k)\in\mathcal{P}_{0}$ and $(-1)^{k}\frac{\partial^{k}\varphi_{1}}{\partial x^{k}}$ is nonnegative on $\mathbb{R}_{+}$, hence $\varphi_{1}$ is $(q+2)$-times monotone. Since $\psi_{1}$ is positive and has a $(q+1)$-times monotone derivative, the composite function $\varphi_{1}\circ\psi_{1}$ is $(q+2)$-times monotone (Gneiting, , 1999, proposition 4.5). The fact that this composite function is continuous implies that $\Phi_{1}$ is positive semidefinite in $\mathbb{R}^{2q+1}$ (Askey, , 1973; Micchelli, , 1986; Gneiting, , 1999, criterion 1.3). ∎ ###### Proof of Lemma 4. $\Phi_{2}({\boldsymbol{y}})=\varphi_{2}\circ\psi_{2}(\|\boldsymbol{y}\|)$, where $\varphi_{2}:\alpha\mapsto{}_{1}F_{2}\left(\alpha;\alpha+1,\gamma;-x\right)$ is a nonnegative function on $[0,\alpha_{\max}]$, insofar as $(\alpha,\alpha+1,\gamma)\in\mathcal{P}_{0}$ as soon as $\alpha\leq\alpha_{\max}$. 
This function is infinitely differentiable; for $k\in\mathbb{N}^{*}$, its $k$-th derivative, obtained with a term-by-term differentiation of (1), is $\begin{split}\frac{\partial^{k}\varphi_{2}}{\partial\alpha^{k}}(\alpha)&=\sum_{n=1}^{+\infty}\frac{(-1)^{k-1}k!\,n\,\Gamma(\gamma)}{n!(\alpha+n)^{k+1}\Gamma(\gamma+n)}\left(-x\right)^{n}\\\ &=\frac{(-1)^{k}k!\,x}{(\alpha+1)^{k+1}\gamma}\sum_{n=0}^{+\infty}\frac{(\alpha+1)^{k+1}\Gamma(\gamma+1)}{n!(\alpha+1+n)^{k+1}\Gamma(\gamma+1+n)}\left(-x\right)^{n}\\\ &=\frac{(-1)^{k}k!\,x}{(\alpha+1)^{k+1}\gamma}\,{}_{k+1}F_{k+2}\left(\alpha+1,\ldots,\alpha+1;\alpha+2,\ldots,\alpha+2,\gamma+1;-x\right).\end{split}$ (23) If $\alpha\in[0,\alpha_{\max}]$, then $(\alpha+1,\alpha+2,\gamma+1)\in\mathcal{P}_{0}$ and ${}_{k+1}F_{k+2}(\alpha+1,\ldots,\alpha+1;\alpha+2,\ldots,\alpha+2,\gamma+1;-x)$ is nonnegative, as a beta mixture of nonnegative ${}_{1}F_{2}$ functions (Olver et al., , 2010, formula 16.5.2), which implies that $\varphi_{2}$ is completely monotone on $[0,\alpha_{\max}]$. Since $\psi_{2}$ is positive with values in $[0,\alpha_{\max}]$ and has a $(q^{\prime}+1)$-times monotone derivative, the composition $\varphi_{2}\circ\psi_{2}$ is $(q^{\prime}+2)$-times monotone on $\mathbb{R}_{+}$. As it is continuous, this entails that $\Phi_{2}$ is positive semidefinite in $\mathbb{R}^{2q^{\prime}+1}$ (Micchelli, , 1986). ∎

###### Proof of Lemma 5. Let us introduce $\Phi(\boldsymbol{x},\boldsymbol{y})=\varphi(\psi_{1}(\|\boldsymbol{x}\|),\psi_{2}(\|\boldsymbol{y}\|))$, where $\varphi:(x,\alpha)\mapsto\frac{1}{x}\,{}_{1}F_{2}\left(\alpha;\alpha+1,\gamma;-x\right)$ is nonnegative and infinitely differentiable on $\mathbb{R}_{+}^{*}\times[0,\alpha_{\max}]$.
From (23), one obtains, for $k,k^{\prime}\in\mathbb{N}$: $\begin{split}&\frac{\partial^{k+k^{\prime}}\varphi}{\partial x^{k}\,\partial\alpha^{k^{\prime}}}(x,\alpha)=\sum_{n=0}^{+\infty}\frac{(-1)^{k+k^{\prime}}k^{\prime}!\,\Gamma(\gamma)}{\Gamma(n+1)(\alpha+n+k+1)^{k^{\prime}+1}\Gamma(\gamma+n+k+1)}\left(-x\right)^{n}\\\ &=\frac{(-1)^{k+k^{\prime}}k^{\prime}!\,\Gamma(\gamma)}{(\alpha+k+1)^{k^{\prime}+1}\Gamma(\gamma+k+1)}\\\ &\quad\times{}_{k^{\prime}+1}F_{k^{\prime}+2}\left(\alpha+k+1,\ldots,\alpha+k+1;\alpha+k+2,\ldots,\alpha+k+2,\gamma+k+1;-x\right).\end{split}$ If $(\alpha+q+3,\alpha+q+4,\gamma+q+3)\in\mathcal{P}_{0}$, then, for any $k=0,\ldots,q+2$, $(\alpha+k+1,\alpha+k+2,\gamma+k+1)\in\mathcal{P}_{0}$ and the hypergeometric term ${}_{k^{\prime}+1}F_{k^{\prime}+2}$ is nonnegative, as a beta mixture of nonnegative ${}_{1}F_{2}$ terms. Under this condition, $(-1)^{k+k^{\prime}}\frac{\partial^{k+k^{\prime}}\varphi}{\partial x^{k}\,\partial\alpha^{k^{\prime}}}$ is nonnegative for $k=0,\ldots,q+2$ and any $k^{\prime}\in\mathbb{N}$. Accordingly, $\varphi$ is a bivariate multiply monotone function of order $(q+2,q^{\prime}+2)$, and so is the composite function $\varphi(\psi_{1},\psi_{2})$ (Gneiting, , 1999, proposition 4.5). Arguments in Williamson, (1956) generalized to functions of two variables imply that $\varphi(\psi_{1},\psi_{2})$ is a mixture of products of truncated power functions of the form (15) (one function of $x$ with power exponent $q+1$ times one function of $\alpha$ with power exponent $q^{\prime}+1$) and is the radial part of a product covariance kernel in $\mathbb{R}^{2q+1}\times\mathbb{R}^{2q^{\prime}+1}$. ∎

###### Proof of Theorem 17. We start by proving (1). Conditions (i), (ii) and (v) imply the existence of a spectral density associated with each direct or cross-covariance (Theorem 1).
Based on Cramér’s criterion (Cramér, , 1940; Chilès and Delfiner, , 2012), $\boldsymbol{\widetilde{G}}_{d}(\cdot;a\boldsymbol{1},\alpha\boldsymbol{1},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\rho})$ is a valid matrix-valued spectral density function if, and only if, $\boldsymbol{\widetilde{G}}_{d}(\boldsymbol{u};a\boldsymbol{1},\alpha\boldsymbol{1},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\rho})$ is positive semidefinite for any vector $\boldsymbol{u}\in\mathbb{R}^{d}$. The key of the proof is to expand this matrix as a positive mixture of positive semidefinite matrices. Such an expansion rests on the following identity, which can be obtained by a term-by-term integration of the infinite series (1) defining the generalized hypergeometric function ${}_{1}F_{2}$ along with formula 3.251.1 of Gradshteyn and Ryzhik, (2007): $\begin{split}\int_{0}^{1}&\int_{0}^{1}{}_{1}F_{2}\left(\alpha;\beta,\gamma;-t_{1}t_{2}(a\,x)^{2}\right)t_{1}^{\beta-1}(1-t_{1})^{\beta_{ij}-\beta-1}t_{2}^{\gamma-1}(1-t_{2})^{\gamma_{ij}-\gamma-1}\text{d}t_{1}\text{d}t_{2}\\\ &=\frac{\Gamma(\beta)\Gamma(\beta_{ij}-\beta)\Gamma(\gamma)\Gamma(\gamma_{ij}-\gamma)}{\Gamma(\beta_{ij})\Gamma(\gamma_{ij})}{}_{1}F_{2}\left(\alpha;\beta_{ij},\gamma_{ij};-(a\,x)^{2}\right),\end{split}$ (24) for $x\geq 0,a>0,\alpha>0,\beta_{ij}>\beta>0$ and $\gamma_{ij}>\gamma>0$. 
Accordingly, for $\boldsymbol{u}\in\mathbb{R}^{d}$: $\begin{split}&{\widetilde{\boldsymbol{G}}}_{d}(\boldsymbol{u};{a}\boldsymbol{1},{\alpha}\boldsymbol{1},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\rho})=\frac{\pi^{\frac{d}{2}}a^{d}\Gamma(\alpha)\Gamma(\boldsymbol{\beta}-\frac{d}{2})\Gamma(\boldsymbol{\gamma}-\frac{d}{2})\boldsymbol{\rho}}{\Gamma(\alpha-\frac{d}{2})\Gamma(\beta)\Gamma(\boldsymbol{\beta}-\beta)\Gamma(\gamma)\Gamma(\boldsymbol{\gamma}-\gamma)}\\\ &\times\int_{0}^{1}\int_{0}^{1}{}_{1}F_{2}\left(\alpha;\beta,\gamma;-{t_{1}\,t_{2}(\pi a\|\boldsymbol{u}\|)^{2}}\right)t_{1}^{\beta-1}(1-t_{1})^{\boldsymbol{\beta}-\beta-1}t_{2}^{\gamma-1}(1-t_{2})^{\boldsymbol{\gamma}-\gamma-1}\text{d}t_{1}\text{d}t_{2},\end{split}$ with the products, quotients and powers taken element-wise. ${}_{1}F_{2}\left(\alpha;\beta,\gamma;-{t_{1}t_{2}(\pi a\|\boldsymbol{u}\|)^{2}}\right)$ is nonnegative for any $t_{1},t_{2}\in[0,1]$ under Condition (vi) (Cho et al., , 2020). Under Conditions (iii) and (iv), $(1-t_{1})^{\boldsymbol{\beta}}$ and $(1-t_{2})^{\boldsymbol{\gamma}}$ are positive semidefinite matrices (Lemma 1). Along with Condition (vii), ${\widetilde{\boldsymbol{G}}}_{d}(\boldsymbol{u};a\boldsymbol{1},\alpha\boldsymbol{1},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\rho})$ is positive semidefinite for any $\boldsymbol{u}$ in $\mathbb{R}^{d}$, as the element-wise product of positive semidefinite matrices, which completes the proof of (1). We now prove (2). Under Condition (vi), the generalized hypergeometric function ${}_{1}F_{2}(\alpha;\beta,\gamma;x)$ is positive and increasing in $x$ on $\mathbb{R}$ (Olver et al., , 2010, formula 16.3.1). Therefore, if $\boldsymbol{a}$ fulfills Condition (i), $[{}_{1}F_{2}\left(\alpha;\beta,\gamma;-t_{1}t_{2}(\,a_{ij}\,x)^{2}\right)]_{i,j=1}^{p}$ is positive semidefinite, as the sum of a _min_ matrix with positive entries (Horn and Johnson, , 2013, problem 7.1.P18) and a diagonal matrix with nonnegative entries.
The proof of (1) can then be adapted in a straightforward manner, by substituting such a positive semidefinite matrix for the positive scalar ${}_{1}F_{2}\left(\alpha;\beta,\gamma;-t_{1}t_{2}(\,a\,x)^{2}\right)$. The proof of (3) follows that of (2) and relies on the fact that, under Conditions (i) and (vi), the matrix $[{}_{1}F_{2}\left(\alpha;\beta,\gamma;-t_{1}t_{2}(\,a_{ij}\,x)^{2}\right)]_{i,j=1}^{p}$ is positive semidefinite for any $t_{1}$, $t_{2}$ and $x$ (Lemma 3). The proof of (4) is similar to that of (1), with (24) replaced by $\begin{split}\int_{0}^{1}&\int_{0}^{1}{}_{1}F_{2}\left(\alpha_{ij};\alpha_{ij}+1,\gamma;-t_{1}t_{2}(a_{ij}\,x)^{2}\right)t_{1}^{\alpha_{ij}}(1-t_{1})^{\beta_{ij}-\alpha_{ij}-2}t_{2}^{\gamma-1}(1-t_{2})^{\gamma_{ij}-\gamma-1}\text{d}t_{1}\text{d}t_{2}\\\ &=\frac{\Gamma(\alpha_{ij}+1)\Gamma(\beta_{ij}-\alpha_{ij}-1)\Gamma(\gamma)\Gamma(\gamma_{ij}-\gamma)}{\Gamma(\beta_{ij})\Gamma(\gamma_{ij})}{}_{1}F_{2}\left(\alpha_{ij};\beta_{ij},\gamma_{ij};-(a_{ij}\,x)^{2}\right),\end{split}$ for $x\geq 0$, $a_{ij}>0$, $\beta_{ij}-1>\alpha_{ij}>0$ and $\gamma_{ij}>\gamma>0$, for $i,j$ in $\{1,\ldots,p\}$. Under Condition (ii), the composite function $t\mapsto\exp(-x(\psi_{2}(t)-\psi_{2}(0)))$ is $(q^{\prime}+2)$-times monotone (Gneiting, 1999, proposition 4.5), hence it is a mixture of truncated power functions of the form (15) and is the radial part of a positive semidefinite function in $\mathbb{R}^{2q^{\prime}+1}$ for any $x>0$. A classical result by Schoenberg (1938) states that $\boldsymbol{x}\mapsto\psi_{2}(\|\boldsymbol{x}\|)-\psi_{2}(0)$ is a variogram in $\mathbb{R}^{2q^{\prime}+1}$, so $\boldsymbol{\alpha}$ is conditionally negative semidefinite (Example 1) and $[t_{1}^{\alpha_{ij}}]_{i,j=1}^{p}$ is positive semidefinite for any $t_{1}\in[0,1]$ (Lemma 1). 
Under Conditions (iii) and (iv), $[(1-t_{1})^{\beta_{ij}-\alpha_{ij}}]_{i,j=1}^{p}$ and $[(1-t_{2})^{\gamma_{ij}}]_{i,j=1}^{p}$ are positive semidefinite for any $t_{1},t_{2}\in[0,1]$ (Lemma 1). Under Condition (ii), $[{}_{1}F_{2}(\alpha_{ij};\alpha_{ij}+1,\gamma;-t_{1}t_{2}(a\,x)^{2})]_{i,j=1}^{p}$ is also positive semidefinite for any $t_{1},t_{2}\in[0,1]$, $a>0$, $x>0$ (Lemma 4). As $(\alpha_{ij}+1,\alpha_{ij}+2,\gamma+1)\in\mathcal{P}_{0}$, the generic entry of this matrix decreases with $a$ (Olver et al., 2010, formula 16.3.1), hence the matrix $[{}_{1}F_{2}(\alpha_{ij};\alpha_{ij}+1,\gamma;-t_{1}t_{2}(a_{ij}\,x)^{2})]_{i,j=1}^{p}$ has increased diagonal entries and is still positive semidefinite. Finally, Condition (v) and Schur’s product theorem imply that ${\widetilde{\boldsymbol{G}}}_{d}(\boldsymbol{u};\boldsymbol{a},\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\rho})$ is positive semidefinite for any $\boldsymbol{u}$ in $\mathbb{R}^{d}$, as the element-wise product of positive semidefinite matrices, which completes the proof of (4). The proof of (5) follows the same line of reasoning as that of (4). The positive semidefiniteness of $[a_{ij}^{-2}\,{}_{1}F_{2}(\alpha_{ij};\alpha_{ij}+1,\gamma;-t_{1}t_{2}(a_{ij}\,x)^{2})]_{i,j=1}^{p}$ now stems from Conditions (i), (ii) and (v) together with Lemma 5. ∎ ## Acknowledgements The authors acknowledge the funding of the National Agency for Research and Development of Chile, through grants ANID/FONDECYT/REGULAR/No. 1210050 (X. Emery), ANID PIA AFB180004 (X. Emery) and ANID/FONDECYT/INICIACIÓN/No. 11190686 (A. Alegría). ## References * Ahmed, (2007) Ahmed, S. (2007). Application of geostatistics in hydrosciences. In Thangarajan, M., editor, Groundwater, pages 78–111, Dordrecht. Springer. * Alabert, (1987) Alabert, F. (1987). The practice of fast conditional simulations through the LU decomposition of the covariance matrix. Mathematical Geology, 19(5):369–386. 
* Apanasovich et al., (2012) Apanasovich, T. V., Genton, M. G., and Sun, Y. (2012). A valid Matérn class of cross-covariance functions for multivariate random fields with any number of components. Journal of the American Statistical Association, 107(497):180–193. * Arroyo and Emery, (2020) Arroyo, D. and Emery, X. (2020). Algorithm 1013: An R implementation of a continuous spectral algorithm for simulating vector Gaussian random fields in Euclidean spaces. ACM Transactions on Mathematical Software, in press. * Arroyo et al., (2012) Arroyo, D., Emery, X., and Peláez, M. (2012). An enhanced Gibbs sampler algorithm for non-conditional simulation of Gaussian random vectors. Computers & Geosciences, 46:138–148. * Askey, (1973) Askey, R. (1973). Radial characteristic functions. Technical Report No. 1262, Mathematics Research Center, University of Wisconsin-Madison. * Berg et al., (1984) Berg, C., Christensen, J. P. R., and Ressel, P. (1984). Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions. Springer-Verlag. * Bevilacqua et al., (2020) Bevilacqua, M., Caamaño Carrillo, C., and Porcu, E. (2020). Unifying compactly supported and Matérn covariance functions in spatial statistics. arXiv:2008.02904v1 [math.ST]. * Bevilacqua et al., (2019) Bevilacqua, M., Faouzi, T., Furrer, R., and Porcu, E. (2019). Estimation and prediction using generalized Wendland covariance functions under fixed domain asymptotics. Annals of Statistics, 47(2):828–856. * Buhmann, (1998) Buhmann, M. (1998). Radial functions on compact support. Proceedings of the Edinburgh Mathematical Society, 41:41–46. * Buhmann, (2001) Buhmann, M. (2001). A new class of radial basis functions with compact support. Mathematics of Computation, 70(233):307–318. * Chernih et al., (2014) Chernih, A., Sloan, I. H., and Womersley, R. S. (2014). Wendland functions with increasing smoothness converge to a Gaussian. Advances in Computational Mathematics, 40(1):185–200. 
* Chilès and Delfiner, (2012) Chilès, J.-P. and Delfiner, P. (2012). Geostatistics: Modeling Spatial Uncertainty. John Wiley & Sons, New York. * Cho et al., (2020) Cho, Y.-K., Chung, S.-Y., and Yun, H. (2020). Rational extension of the Newton diagram for the positivity of ${}_{1}{F}_{2}$ hypergeometric functions and Askey–Szegö problem. Constructive Approximation, 51(1):49–72. * Cho and Yun, (2018) Cho, Y.-K. and Yun, H. (2018). Newton diagram of positivity for ${}_{1}{F}_{2}$ generalized hypergeometric functions. Integral Transforms and Special Functions, 29(7):527–542. * Cramér, (1940) Cramér, H. (1940). On the theory of stationary random processes. Annals of Mathematics, 41(1):215–230. * Cressie, (1993) Cressie, N. A. (1993). Statistics for Spatial Data. Wiley. * Daley et al., (2015) Daley, D. J., Porcu, E., and Bevilacqua, M. (2015). Classes of compactly supported covariance functions for multivariate random fields. Stochastic Environmental Research and Risk Assessment, 29(4):1249–1263. * Davis, (1987) Davis, M. (1987). Production of conditional simulations via the LU triangular decomposition of the covariance matrix. Mathematical Geology, 19(2):91–98. * Dietrich and Newsam, (1993) Dietrich, C. and Newsam, G. (1993). A fast and exact method for multidimensional Gaussian stochastic simulations. Water Resources Research, 19:2961–2969. * Dolloff et al., (2006) Dolloff, J., Lofy, B., Sussman, A., and Taylor, C. (2006). Strictly positive definite correlation functions. In Kadar, I., editor, Signal Processing, Sensor Fusion, and Target Recognition XV, volume 6235, pages 1–18, Bellingham. SPIE. * Emery et al., (2016) Emery, X., Arroyo, D., and Porcu, E. (2016). An improved spectral turning-bands algorithm for simulating stationary vector Gaussian random fields. Stochastic Environmental Research and Risk Assessment, 30(7):1863–1873. * Emery and Séguret, (2020) Emery, X. and Séguret, S. (2020). Geostatistics for the Mining Industry. CRC Press, Boca Raton. 
* Erdélyi, (1953) Erdélyi, A. (1953). Higher Transcendental Functions. McGraw-Hill. * Furrer et al., (2006) Furrer, R., Genton, M. G., and Nychka, D. (2006). Covariance tapering for interpolation of large spatial datasets. Journal of Computational and Graphical Statistics, 15(3):502–523. * Galassi and Gough, (2009) Galassi, M. and Gough, B. (2009). GNU Scientific Library: Reference Manual. GNU manual. Network Theory. * Galli and Gao, (2001) Galli, A. and Gao, H. (2001). Rate of convergence of the Gibbs sampler in the Gaussian case. Mathematical Geology, 33(6):653–677. * Gasper, (1975) Gasper, G. (1975). Positivity and special functions. In Askey, R., editor, Theory and Application of Special Functions, pages 375–433, New York. Academic Press. * Gneiting, (1999) Gneiting, T. (1999). Radial positive definite functions generated by Euclid’s hat. Journal of Multivariate Analysis, 69(1):88–119. * Gneiting, (2002) Gneiting, T. (2002). Compactly supported correlation functions. Journal of Multivariate Analysis, 83(2):493–508. * Gneiting et al., (2010) Gneiting, T., Kleiber, W., and Schlather, M. (2010). Matérn cross-covariance functions for multivariate random fields. Journal of the American Statistical Association, 105:1167–1177. * Gradshteyn and Ryzhik, (2007) Gradshteyn, I. and Ryzhik, I. (2007). Table of Integrals, Series, and Products. Academic Press, Amsterdam. * Hohn, (1999) Hohn, M. (1999). Geostatistics and Petroleum Geology. Kluwer Academic, Dordrecht. * Horn and Johnson, (2013) Horn, R. A. and Johnson, C. R. (2013). Matrix Analysis. Cambridge University Press, Cambridge, 2nd edition. * Hubbert, (2012) Hubbert, S. (2012). Closed form representations for a class of compactly supported radial basis functions. Advances in Computational Mathematics, 36(1):115–136. * Johansson, (2017) Johansson, F. (2017). Arb: Efficient arbitrary-precision midpoint-radius interval arithmetic. IEEE Transactions on Computers, 66(8):1281–1292. * Johansson, (2019) Johansson, F. 
(2019). Computing hypergeometric functions rigorously. ACM Transactions on Mathematical Software, 45(3):30. * Kaufman et al., (2008) Kaufman, C. G., Schervish, M. J., and Nychka, D. W. (2008). Covariance tapering for likelihood-based estimation in large spatial data sets. Journal of the American Statistical Association, 103(484):1545–1555. * Lantuéjoul, (2002) Lantuéjoul, C. (2002). Geostatistical Simulation: Models and Algorithms. Springer-Verlag, Berlin, 2nd edition. * Lantuéjoul and Desassis, (2012) Lantuéjoul, C. and Desassis, N. (2012). Simulation of a Gaussian random vector: a propagative version of the Gibbs sampler. In 9th International Geostatistics Congress, Oslo. Available at http://geostats2012.nr.no/pdfs/1747181.pdf. * Matérn, (1986) Matérn, B. (1986). Spatial Variation — Stochastic Models and Their Application to Some Problems in Forest Surveys and Other Sampling Investigations. Springer. * Matheron, (1965) Matheron, G. (1965). Les Variables Régionalisées et leur Estimation. Masson. * Micchelli, (1986) Micchelli, C. A. (1986). Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constructive Approximation, 2:11–22. * Olver et al., (2010) Olver, F. W., Lozier, D. W., Boisvert, R. F., and Clark, C. W. (2010). NIST Handbook of Mathematical Functions. Cambridge University Press. * Pardo-Igúzquiza and Chica-Olmo, (1993) Pardo-Igúzquiza, E. and Chica-Olmo, M. (1993). The Fourier integral method: an efficient spectral method for simulation of random fields. Mathematical Geology, 25(2):177–217. * Pearson et al., (2017) Pearson, J. W., Olver, S., and Porter, M. A. (2017). Numerical methods for the computation of the confluent and Gauss hypergeometric functions. Numerical Algorithms, 74(3):821–866. * Porcu et al., (2013) Porcu, E., Daley, D. J., Buhmann, M., and Bevilacqua, M. (2013). Radial basis functions with compact support for multivariate geostatistics. 
Stochastic Environmental Research and Risk Assessment, 27(4):909–922. * Porcu and Zastavnyi, (2014) Porcu, E. and Zastavnyi, V. (2014). Generalized Askey functions and their walks through dimensions. Expositiones Mathematicæ, 32(2):169–174. * Schaback, (2011) Schaback, R. (2011). The missing Wendland functions. Advances in Computational Mathematics, 34(1):67–81. * Schilling et al., (2010) Schilling, R., Song, R., and Vondraček, Z. (2010). Bernstein Functions. De Gruyter, Berlin. * Schoenberg, (1938) Schoenberg, I. (1938). Metric spaces and completely monotone functions. Annals of Mathematics, 39(4):811–831. * Shinozuka, (1971) Shinozuka, M. (1971). Simulation of multivariate and multidimensional random processes. The Journal of the Acoustical Society of America, 49(1B):357–367. * Stein and Weiss, (1971) Stein, E. and Weiss, G. (1971). Introduction to Fourier Analysis in Euclidean Spaces. Princeton University Press, Princeton. * Wackernagel, (2003) Wackernagel, H. (2003). Multivariate Geostatistics: an Introduction with Applications. Springer. * Webster and Oliver, (2007) Webster, R. and Oliver, M. A. (2007). Geostatistics for Environmental Scientists. Wiley, New York. * Wendland, (1995) Wendland, H. (1995). Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Advances in Computational Mathematics, 4(1):389–396. * Williamson, (1956) Williamson, R. (1956). Multiply monotone functions and their Laplace transforms. Duke Mathematical Journal, 23(2):189–207. * Wood and Chan, (1994) Wood, A. T. and Chan, G. (1994). Simulation of stationary Gaussian processes in $[0,1]^{d}$. Journal of Computational and Graphical Statistics, 3(4):409–432. * Zastavnyi, (2006) Zastavnyi, V. (2006). On some properties of Buhmann functions. Ukrainian Mathematical Journal, 58(8):1184.
# Deep Learning for General Game Playing with Ludii and Polygames Dennis J. N. J. Soemers$*$, Vegard Mella$**$, Cameron Browne$*$, Olivier Teytaud$**$ ###### Abstract Combinations of Monte-Carlo tree search and Deep Neural Networks, trained through self-play, have produced state-of-the-art results for automated game-playing in many board games. The training and search algorithms are not game-specific, but every individual game that these approaches are applied to still requires domain knowledge for the implementation of the game’s rules and for constructing the neural network’s architecture – in particular the shapes of its input and output tensors. Ludii is a general game system that already contains over 500 different games, a collection that can rapidly grow thanks to its powerful and user-friendly game description language. Polygames is a framework with training and search algorithms, which has already produced superhuman players for several board games. This paper describes the implementation of a bridge between Ludii and Polygames, which enables Polygames to train and evaluate models for games that are implemented and run through Ludii. We no longer require any game-specific domain knowledge; instead, we leverage our domain knowledge of the Ludii system and its abstract state and move representations to write functions that can automatically determine the appropriate shapes for input and output tensors for any game implemented in Ludii. We describe experimental results for short training runs in a wide variety of different board games, and discuss several open problems and avenues for future research. ($*$ Department of Data Science and Knowledge Engineering, Maastricht University, the Netherlands. $**$ Facebook AI Research.) 
## 1 Introduction Self-play training approaches such as those popularised by AlphaGo Zero [27] and AlphaZero [26], based on combinations of Monte-Carlo tree search (MCTS) [10, 6, 3] and Deep Learning [13], have been demonstrated to be fairly generally applicable, and achieved state-of-the-art results in a variety of board games such as Go [27], Chess, Shogi [26], Hex, and Havannah [5]. These approaches require relatively little domain knowledge, but still require some in the form of: 1. A complete implementation of a forward model for the game, for the implementation of lookahead search as well as automated self-play to generate experience for training. 2. Knowledge of which state features are required or useful to provide as inputs for a neural network. 3. Knowledge of the action space, which is typically used to construct the policy head in such a way that every distinct possible action has a unique logit. The first requirement, for the implementation of a forward model, is partially addressed by research on using learned simulators for tree search as in MuZero [24], but in practice a simulator is actually still required for the purpose of generating trajectories outside of the tree search. For the board games Go, Chess, and Shogi, MuZero still requires the input and output tensor shapes (for states and actions, respectively) to be manually designed per game. We remark that MuZero was also evaluated on 57 different Atari games in the Arcade Learning Environment (ALE) [1], and it can use identical tensor shapes across all these Atari games because ALE uses the same observation and action spaces for all games in this framework. The challenge posed by General Game Playing (GGP) [21] is to build systems that can play a wide variety of games, which makes the three forms of required domain knowledge listed above difficult to satisfy. 
A number of systems have been proposed that can interpret and run any arbitrary game as long as it has been described in their respective game description language, such as the original Game Description Language (GDL) [16] from Stanford, Regular Boardgames (RBG) [11], and Ludii [20]. In this paper, we describe how we combine the GGP system Ludii and the PyTorch-based [17] state-of-the-art training algorithms in Polygames [5], with the goal of mitigating all three of the requirements for domain knowledge listed above. Section 2 provides some background information on these training techniques. Section 3 describes existing work and limitations in applying these Deep Learning approaches to general games. Section 4 presents the interface between Ludii and Polygames. Experiments and results are described in Section 5. We discuss some open problems in Section 6, and conclude the paper in Section 7. ## 2 Background The basic premise behind AlphaZero and similar approaches in frameworks such as Polygames is that Deep Neural Networks (DNNs) take representations $T(s)$ of game states $s$ as input, and produce discrete probability distributions $\mathbf{P}(s)$ with probabilities $P(s,a)$ for all actions $a$ in states $s$, as well as value estimates $V(s)$, as outputs. This is depicted in Figure 1. Both of these outputs are used to guide MCTS in different ways. Figure 1: Basic architecture of DNNs for game playing. Raw game states $s$ are transformed into a tensor representation $T(s)$ of some fixed shape (often $3$-dimensional). The DNN learns to compute hidden representations of its inputs in hidden layers. Finally, it computes a scalar value estimate $V(s)$, and a discrete probability distribution $\mathbf{P}(s)$ with probabilities $P(s,a)$ for all actions $a$ in the complete action space. DNNs in general have a fixed architecture, requiring fixed and predetermined shapes for both the input and the output representations. 
The value output is always simply a scalar (assuming $2$-player zero-sum games; see [18] for relaxations of this assumption), but determining the shapes of the input tensors $T(s)$ and policy outputs $\mathbf{P}(s)$ typically requires game-specific domain knowledge. $T(s)$ is generally a $3$-dimensional tensor, where $2$ dimensions are spatial dimensions (corresponding to e.g. a $2$-dimensional playable area in a board game). The third dimension is formed by a stack of different channels which each have different semantics. For example, $T(s)$ in AlphaZero has a shape of $19$$\times$$19$$\times$$17$ for the game of Go played on a $19$$\times$$19$ board, with eight times two binary channels encoding the presence of the two players’ pieces – for a history of up to eight successive game states ending in $s$ – and one final channel encoding the current player to move. The spatial structure of the first two dimensions is typically assumed to be meaningful, which is exploited by the inductive bias of Convolutional Neural Networks (CNNs) [14]. For the policy head, it is customary for neural networks to first output real-valued logits $L(s,a)$ for all possible actions $a$. These are subsequently converted into probabilities $P(s,a)$ using a softmax over all legal actions $a^{\prime}\in\mathcal{A}(s)$ in $s$: $P(s,a)=\frac{\exp(L(s,a))}{\sum_{a^{\prime}\in\mathcal{A}(s)}\exp(L(s,a^{\prime}))}.$ It is generally assumed that every distinct possible action $a$ that may be legal in any game state $s$ has a unique, matching logit $L(s,a)$. This means that domain knowledge of the game’s action space is required to construct a DNN’s architecture in such a way that distinct actions always have distinct logits. The logits are sometimes laid out in a structure of multiple $2$-dimensional planes, like the inputs, but typically preceded by fully connected (as opposed to convolutional) layers. 
This is equivalent to all the logits being laid out in a single, flat vector with no spatial structure. In addition to such typical architectures, Polygames [5] includes various different structures, such as: * • Fully convolutional networks: Because actions are, in many cases, spatially distributed in a manner closely tied to the pieces, the output has spatial coordinates matching the spatial coordinates of the input. This can be exploited in fully convolutional networks [25]: the policy head has no fully connected layer, and directly maps inputs to outputs through convolutional blocks. This has the advantage of being boardsize invariant: we can train in size $13$$\times$$13$ and play in $19$$\times$$19$. Global pooling can be used to additionally make the value head size-invariant [15, 29]. * • U-networks: It is usually considered that DNNs rephrase their data in an increasingly abstract manner, layer after layer. However, in fully convolutional networks, the output is dense; it has the same low-level nature as the input. The level of abstraction increases, and then decreases again. One may therefore consider that layers might benefit from a direct connection into a layer containing information at the same level of abstraction. This can be done by skip-connections, i.e. additional connections to layers symmetrically positioned in the network (Figure 2(c)): this is a U-network [22]. Some of these different structures are depicted and explained in Figure 2. (a) Standard convolutional net: convolutional layers first, with their inductive bias towards spatial invariance, followed by fully connected layers. (source: https://en.wikipedia.org/wiki/Convolutional_neural_network#/media/File:Typical_cnn.png) (b) Fully convolutional net, e.g. for image segmentation: each output scalar (each logit in the case of games) is the output of the same net, applied on a moving window. No fully connected layers. 
(c) U-networks: this fully convolutional network also connects some layers to their symmetric counterpart, assumed to work at a similar level of abstraction. (d) Max pooling: here we downsize from 4x4 to 2x2. In case of global pooling, the size of tensors after the global pooling layer is $1\times 1\times\#channels$, independently of the input size. Figure 2: Convolutional neural network (a). Fully convolutional counterpart (b, image from [25]; other images from Wikipedia), typically used in image segmentation: image segmentation is related to policy heads in games in that the output has the same spatial coordinates as the input. U-networks (c): only convolutional layers, and skip connections symmetrically connecting layers. Global pooling (d): here we down-sample to a spatial size 1x1 in the value head: this is boardsize invariant. Global pooling can use channels for mean, standard deviation, max, etc.: the number of channels is not necessarily preserved. (b+d) or (c+d) allow boardsize-invariant training [5]. ## 3 Deep Learning in General Game Playing To some extent, all GGP systems mitigate the requirement for the implementation of complete forward models for every distinct game, in the sense that new games can be added and supported simply by defining them in a game description language. Ludii’s game description language in particular has been designed in such a way that game descriptions for new games are fast and easy to write and understand [20], which has allowed for a significantly larger library of distinct games to be built up than would be feasible if they were all written in a programming language such as C++ (Ludii has over 500 distinct built-in games at the time of this writing, with many of them having multiple variants for different board sizes, board shapes, variant rulesets, etc.). 
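The softmax over legal actions from Section 2 is straightforward to implement. The sketch below is a minimal illustration only; the flat action indexing and legality bookkeeping are hypothetical, not Polygames' actual API:

```python
import math

def policy_from_logits(logits, legal_actions):
    """Softmax restricted to legal actions, as in the policy head above.

    `logits` holds a real value for every action index in the full action
    space; actions outside `legal_actions` receive probability zero.
    """
    # Subtract the max legal logit for numerical stability.
    m = max(logits[a] for a in legal_actions)
    exps = {a: math.exp(logits[a] - m) for a in legal_actions}
    z = sum(exps.values())
    return {a: e / z for a, e in exps.items()}

logits = [2.0, 0.5, -1.0, 3.0]
probs = policy_from_logits(logits, legal_actions=[0, 1, 3])
assert abs(sum(probs.values()) - 1.0) < 1e-12
assert probs[3] > probs[0] > probs[1]  # higher logit, higher probability
```

Masking illegal actions before normalising, rather than afterwards, is what makes the resulting distribution sum to one over $\mathcal{A}(s)$ as in the formula above.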
Ludii’s predecessor has also already demonstrated that the “ludemic” approach to game description languages used by Ludii facilitates procedural generation of complete games [4], which can be used to easily extend the set of compatible benchmark problems. Similarly, we may argue that running games through a GGP system removes the requirements for _game-specific knowledge_ about how to shape state inputs and action outputs, but introduces requirements for similar _knowledge about the GGP system_. Given any arbitrary game defined in a game description language of a GGP system, we require the ability to construct tensor representations of game states, and the ability to map from any index in a policy head to a matching action in any non-terminal game state. GDL [16] is a low-level logic-based game description language, where games are described as logic programs consisting of many low-level propositions. Many GDL-based agents convert such a GDL description into a propositional network [23, 7, 28], which can more efficiently process the games than Prolog-based reasoners or other similar techniques. Such propositional networks can be automatically constructed from GDL descriptions, and the structure of such a network remains constant across all game states of the same game. [9] therefore proposed using the internal state of a game’s propositional network as the input state tensor for a deep neural network. A downside of this approach is that the state input tensor is a flat tensor, and there is no possibility to use inductive biases such as those of CNNs for inputs with spatial semantics. Galvanise Zero [8] does exploit knowledge of spatial semantics through CNNs, but it only supports a limited selection of GDL-based games because it requires a handwritten Python function to create the mapping from game states to input tensors for every game that it supports. 
The action space can automatically be inferred from GDL descriptions, which means that these approaches require no extra domain knowledge with respect to the output policy heads. In the game description language of Ludii [20], common high-level game concepts such as boards, piece types, etc. are all “first-class citizens” of the language, as opposed to GDL, where every separate game description file encodes such concepts from scratch in low-level logic. Based on these concepts, Ludii also has an object-oriented game state representation that it uses internally, which remains consistent across all games. This enables us to write a single function that automatically constructs input tensors from Ludii’s internal state representation, using our domain knowledge of Ludii as a whole instead of domain knowledge of every individual game. Unlike GDL, it is not straightforward (if at all possible) to infer the action space from game description files in Ludii. However, actions in Ludii do have an object-oriented structure, and at least an approximation of the action space can be constructed based on these properties – again, based on domain knowledge of Ludii rather than any individual game. In many games, this is sufficient to distinguish most or all legal actions from each other. ## 4 Interface Between Ludii and Polygames Based on the insights described above, we developed an interface between the Ludii general game system and the Polygames framework with state-of-the-art AI training code. In Polygames, different games are normally implemented from scratch in C++. The basic idea of this interface is that there is a single “Ludii game” in Polygames, with C++ code that interacts with Ludii’s Java-based API through the Java Native Interface. Polygames command-line arguments can be used to load different games and variants from Ludii into this wrapper. 
This section provides details on how Ludii automatically constructs tensor representations of its state and action spaces, based on its own internal representations, for any arbitrary game implemented in Ludii. (The source code for building tensors from Ludii’s internal state and action representations is available from https://github.com/Ludeme/LudiiAI; all the source code of Polygames is available from https://github.com/facebookincubator/Polygames.) ### 4.1 Constructing the Spatial Dimensions CNNs normally operate on grid structures of “pixels”, such that every position can be indexed by a row and column, and every position has a square of up to eight neighbour positions around it. This structure resembles the game boards of games such as Chess, Shogi, and Go most closely. Some other boards, such as the tilings of hexagonal cells used in games like Hex and Havannah, can also be “packed” into such a grid. This approach is used for the game-specific C++ implementations of those games in Polygames. However, Ludii supports games with arbitrary graphs as boards, and hence requires a generic solution that can map positions from graphs with any arbitrary connectivity structure into a grid structure that CNNs can work with. For every game in Ludii, there is at least one (and possibly more than one) container, which specifies a playable “area” with positions that may contain pieces, have corresponding clickable elements in Ludii’s GUI, etc. [20, 19]. The first container typically corresponds to the board that a game is played on, and is often the largest. Any other containers represent auxiliary areas, such as players’ hands to hold captured pieces in Shogi. Even games that are not generally thought of as being played on a board are still modelled in this way in Ludii. 
For instance, Rock-Paper-Scissors is modelled as a board with two (initially empty) cells, and two hands for the two players, each containing rock, paper, and scissors “pieces” which players can drag onto their designated cells on the board to make their move. (a) A $3$$\times$$3$ grid is computed based on the distinct $x$\- and $y$-coordinates of three sites A, B, and C. (b) An $8$$\times$$8$ grid is overlaid on the $[0,1]^{2}$ space containing three sites A, B, and C. Figure 3: Two different approaches for computing a grid based on a playable space defined by three sites A, B, and C, each with distinct $x$\- and $y$-coordinates. The approach we use is depicted in (a). This approach results in smaller, more dense tensors, but information of the relative distances between all sites is not necessarily preserved. The alternative approach, depicted in (b), preserves more of this information, but can result in large and sparse tensors. Every site in any such container in Ludii has $x$ and $y$ coordinates in $[0,1]$, which are used by Ludii for purposes such as drawing game states in the GUI for human players. We construct a grid structure simply by sorting all the distinct $x$\- and $y$-coordinates across all sites in the board in increasing order, and assigning distinct columns and rows, respectively, to distinct $x$\- and $y$-coordinates. Coordinates that are within a tolerance value of $10^{-5}$ are treated as equal, to avoid generating excessively large and sparse tensors due to small differences resulting from floating-point arithmetic. Note that this approach is not equivalent to directly overlaying a sufficiently fine-grained grid over the $[0,1]^{2}$ space, because we only add rows and columns that each contain at least one site. This is depicted in Figure 3. 
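The coordinate-sorting construction described above can be sketched as follows. This is a simplified, hypothetical stand-in for Ludii's actual implementation; `build_grid_indices` and its inputs are illustrative names:

```python
def build_grid_indices(coords, tol=1e-5):
    """Map site (x, y) coordinates in [0, 1]^2 to (column, row) grid indices.

    Distinct x- and y-coordinates, sorted in increasing order, each get
    their own column/row; coordinates within `tol` are treated as equal,
    so only rows and columns containing at least one site are created.
    """
    def distinct(values):
        out = []
        for v in sorted(values):
            if not out or v - out[-1] > tol:
                out.append(v)
        return out

    xs = distinct(x for x, _ in coords)
    ys = distinct(y for _, y in coords)

    def index_of(v, axis):
        for i, ref in enumerate(axis):
            if abs(v - ref) <= tol:
                return i
        raise ValueError("coordinate not matched to any grid line")

    return [(index_of(x, xs), index_of(y, ys)) for x, y in coords]

# Three sites with pairwise distinct x- and y-coordinates yield a 3x3 grid,
# as in Figure 3(a)
sites = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
print(build_grid_indices(sites))  # [(0, 2), (1, 1), (2, 0)]
```

Because only occupied rows and columns are created, the resulting grid is dense even when the sites' coordinates are irregularly spread over $[0,1]^{2}$, matching the trade-off discussed in the text.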
Our approach may lose some information concerning the relative distances between sites, but because these $x$- and $y$-coordinates are only used for the graphical user interface – not for game logic – we expect the smaller and less sparse grids to be preferable due to improved computational efficiency. Note that the vast majority of games in Ludii use boards defined by regular or semiregular tilings, and for these the two approaches will have similar results. In the current version of Ludii, containers other than the first one (corresponding to the “main” board) never have more than one meaningful dimension; they are always a single, contiguous sequence of cells. Each of those containers is concatenated to the grid constructed for the first container, either using one extra column or one extra row per extra container (whichever results in the lowest increase in total size of the tensor). Additionally, one extra dummy row or column is inserted to create a more explicit separation between the main board (for which we expect there to be meaningful spatial semantics) and the other containers (for which there is no expectation that any meaningful spatial semantics exist). For example, Shogi is played on a $9 \times 9$ board, but each of the two players also has a “hand” of $7$ cells as an extra container to potentially hold captured pieces. This results in a $12 \times 9$ grid for Shogi: nine rows for the board, one dummy row, and one row per hand. A screenshot of Shogi being played in Ludii’s user interface is depicted in Figure 4, with cells of the different containers labelled by numbers. The mapping from these positions to positions in the tensor representation is depicted in Figure 5. Figure 4: Shogi being played in Ludii’s user interface. The game board is on the left-hand side, and each player has a “hand” with seven slots to hold captured pieces on the right-hand side.
Figure 5: Mapping from positions in Shogi’s three containers to positions in a single tensor. Numbers $0$ through $80$ correspond to positions on the board, $81$ through $87$ are positions in the hand of Player 1, and $88$ through $94$ are positions in the hand of Player 2 (see Figure 4). ### 4.2 Representing Ludii Game States as Tensors Let $s$ denote a raw game state in Ludii’s object-oriented state representation [19], for a game $\mathcal{G}$. Based on the properties of $s$, we construct a tensor representation $T(s)$ – which can be used as input for a DNN – of shape $(C,W,H)$, where $C$ denotes the number of channels (variable, depends on $\mathcal{G}$), $W$ denotes the width (i.e., number of columns), and $H$ denotes the height (i.e., number of rows). The channels are constructed as follows: * • Binary channels indicating the presence (or absence) of every piece type defined in $\mathcal{G}$. Most games have one channel per piece type, where values of $1$ indicate the presence of a piece of that type in a position. If $\mathcal{G}$ is a “stacking” game, meaning that it allows for multiple pieces to form a stack on a single position, we use $M+N$ binary channels per piece type, instead of just one. $M$ channels are used to indicate presence of a piece type in the bottom $M$ layers of every stack on every position, and $N$ channels indicate the same for the top $N$ layers. In our implementation, we use $M=N=5$. If a single stack contains more than $M+N$ pieces, this representation is not sufficient to provide information about some of the middle layers to the DNN, but this is rare in practice. * • If $\mathcal{G}$ is a “stacking” game, we include an additional non-binary channel containing the height of every stack in every position. * • If $\mathcal{G}$ is a game where positions can contain a “count” of more than one piece, we include a non-binary channel denoting the count of pieces on that position. 
This channel is semantically similar to the one described above for stack heights. In Ludii, positions in these games are still restricted to containing only a single piece type at a time. This is most notably used for mancala games. Games where pieces of different types can share a single position are modelled as stacking games instead. * • Ludii’s state representation can include an “amount” value per player, primarily intended to represent money for games that involve betting or other similar mechanisms. If $\mathcal{G}$ uses this, we add one non-binary channel per player, such that every position in the channel for player $p$ contains the amount value of $p$ in $s$. * • If $\mathcal{G}$ is played by $n>1$ players, we include $n$ binary channels, such that the $i^{\text{th}}$ channel is filled with values of $1$ if and only if player $i$ is the current player to move in state $s$. This also accounts for swap rules. For example, the first player normally plays red, and the second blue, in Hex. If $s$ is a state where the red player is the next to make a move, and a swap has occurred, the second of these channels will be filled with $1$ entries instead of the first. * • In some games, every position has a “local state” variable, which is an integer value. Different games can use this in different ways to store (temporary) auxiliary information about positions. For instance, in Chess, local state values of $1$ are used for positions that contain pieces that are still in their initial position, and values of $0$ otherwise (this is used to determine the legality of castling). Most games only use low local state values, if any at all. Hence, we use separate binary channels to indicate local state values of $0$, $1$, $2$, $3$, $4$, and $\geq 5$. * • If the game uses a swap rule (or “pie rule”), such as Hex, we include a binary channel that is filled with values of $1$ if and only if a swap has occurred in $s$.
* • For every distinct container in $\mathcal{G}$, we include one binary channel that has values of $1$ for entries that correspond to a position in that container, and values of $0$ everywhere else. * • For each of the last two moves $m$ played prior to reaching $s$, we add one binary channel with only a single value of $1$ in the entry corresponding to the “from” position of $m$ (typically the location that a piece moves away from), and a similar channel for the “to” position of $m$ (typically the location that a piece is placed in). With these channels, we have not yet exhaustively covered all the state variables in Ludii’s game state representation [19], but we have covered the most commonly-used ones. Whenever new variables are added to Ludii’s game state representation, engineering effort for including these in the tensor representations is only required once for Ludii as a whole – not once per game added to Ludii. ### 4.3 Representing Ludii Actions as Tensors In contrast to GDL [16, 8, 9], it is not straightforward – if at all possible – to automatically infer the complete action space for any arbitrary game described in Ludii’s game description language. This is because in Ludii’s game description language, the function that generates lists of legal moves is defined as a composite of many simple functions (ludemes), which may be arranged in any arbitrary tree structure. While each of these ludemes in principle has some domain for its possible inputs, and range for its possible outputs, these are not strictly defined in logic-based or other formats that permit automated inference. Similar to its state representation, Ludii has an object-oriented move representation [19]. (In this document we use the terms “move” and “action” interchangeably, to refer to complete decisions that players make. Within Ludii, these are referred to only as moves, and actions are smaller parts of moves.)
However, in contrast to the state representation, the most important variables of the move representation are arbitrarily-sized lists (of primitive modifications to be applied to a game state) and arbitrarily-sized trees (of ludemes to be evaluated after applying the initial primitive modifications). The arbitrary sizes of these variables make them difficult to encode in a fixed-size tensor representation. Hence, we ignore these properties, and only distinguish moves based on some simple properties that can easily be used for this purpose. We construct the space of output tensors to map moves to for a game $\mathcal{G}$ as follows: * • The action space is organised as a stack of $2$-dimensional planes, with the spatial dimensions being identical to those of the state tensors (see Subsection 4.2). Every action will map to exactly one position in this space – i.e., one location in the $2$-dimensional area, and one channel. * • Pass and swap moves have been identified as special cases that are sufficiently common, important, and semantically different from any other kind of move that they warrant the inclusion of their own dedicated channels. * • Many games only involve moves that can be identified by just a single position in the spatial dimensions; these are generally games where players place stones (Go, Hex, Havannah, etc.), but may in theory also be games like Chess if they have been defined in a way such that movements are split up into two separate decisions (picking a source and picking a destination). These games can be automatically discovered in Ludii. For these games, we only add one more channel in addition to the pass and swap move channels, to encode all other moves based on their positions in the spatial dimensions. In Ludii, this position is referred to as the “to” position. * • In all other games, moves may have distinct “from” and “to” positions; typical examples are standard implementations of Chess, Amazons, Shogi, etc. 
For moves that have an invalid “from” position, we assume that it is equal to the “to” position. For games that involve stacking, moves may additionally have $l_{min}$ and $l_{max}$ properties which refer to the levels within a stack at which a move operates; both are assumed to equal $0$ if the game does not allow stacking. The “to” position of a move is used to map the move to a location in the spatial dimensions, and the remaining properties are used to index into one of multiple channels based on the relative “distance covered” by the move. More specifically, we create $(2M+1)\times(2M+1)\times(N+1)\times(N+1)$ channels, where we use $M=3$, and $N=2$ if $\mathcal{G}$ involves stacking, or $N=0$ otherwise. Let $dx$ and $dy$ denote the differences in rows and columns, respectively, between the “to” and “from” positions of a move. Let $[a]_{b}^{c}$ denote a value of $a$ clipped to lie in the interval $[b,c]$. Then, this move gets mapped to the channel given by the $0$-based index $\left(\left(\left([dx]_{-M}^{M}+M\right)\times(2M+1)+[dy]_{-M}^{M}+M\right)\times(N+1)+[l_{min}]_{0}^{N}\right)\times(N+1)+[l_{max}-l_{min}]_{0}^{N}$, where the offsets of $M$ shift the clipped differences into a nonnegative range. Note that this is simply one approach to constructing tensor representations of moves that we implemented, but we may envision other approaches as well. For instance, in a game like Chess, it may be more important to encode the type of the piece that makes a move, rather than encoding the distance and direction covered by a move. This could be accomplished by creating channels that are indexed based on the type of piece in the “from” location of a move, instead of the distance between “from” and “to” positions. While we find this approach to be sufficient to distinguish moves from each other in many cases, there are cases where multiple distinct moves that are legal in a single game state will end up being represented by exactly the same logit. When multiple distinct moves are represented by the same logit in a DNN’s output, we say that they are aliased.
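The clipping and channel indexing described above can be sketched as follows. The function names are ours; the offsets of $M$ shift the clipped row and column differences into the nonnegative range that a $0$-based index requires.

```python
def clip(a, lo, hi):
    """a clipped to the interval [lo, hi]."""
    return max(lo, min(hi, a))

def move_channel(dx, dy, l_min=0, l_max=0, M=3, N=0):
    """0-based channel index for a move with row/column differences (dx, dy)
    and stack levels (l_min, l_max), out of (2M+1)^2 * (N+1)^2 channels."""
    row = clip(dx, -M, M) + M   # shifted into 0..2M
    col = clip(dy, -M, M) + M
    lo = clip(l_min, 0, N)
    diff = clip(l_max - l_min, 0, N)
    return ((row * (2 * M + 1) + col) * (N + 1) + lo) * (N + 1) + diff
```

With $M=3$ and $N=0$ (no stacking), the $(N+1)$ factors collapse and the $49$ channels are indexed purely by the clipped $(dx, dy)$ pair; a move that stays in place maps to channel $24$, the centre of the $7 \times 7$ block.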
DNNs cannot distinguish between aliased moves, and hence always provide the same advice (in the form of the prior probabilities $P(s,a)$) to MCTS for these different moves. However, in Polygames [5], the MCTS itself _can_ still distinguish between the different moves by representing them as distinct branches in the search tree, and backing up (potentially) different values throughout the tree search. This is an important difference with other frameworks, such as OpenSpiel [12], where the MCTS itself requires every possible distinct action that may ever be legal in a game to be assigned a unique integer upfront. When subsequently using the visit counts to compute the standard cross-entropy loss as proposed by [27], the visit counts for all moves that share a single logit are summed up. The softmax over the logits only counts every distinct logit once. ## 5 Experiments In this section we describe experiments intended to demonstrate the potential for the approach described in the previous section to facilitate training and research in general games. (The code used by Ludii to construct state and move tensors for any game is available from https://github.com/Ludeme/LudiiAI, all the training and evaluation code of Polygames is available from https://github.com/facebookincubator/Polygames, and checkpoints of the models used in these experiments are available from http://dl.fbaipublicfiles.com/polygames/ludii_checkpoints/list.txt.) We picked fifteen different games, all as implemented with their default options in Ludii [20] v1.1.6, and trained a model of the ResConvConvLogitPoolModelV2 type from Polygames [5] in each of these games. The selected games are depicted in Figure 6. We used the same training hyperparameters across all games. Every training run used 20 hours of wall time, with 8 GPUs, 80 CPU cores, and 475 GB of memory allocated per training job. Every training job used 1 server for model training, and 7 clients for the generation of self-play games.
The MCTS agents used 400 MCTS iterations per move in self-play. The final model checkpoint of every training run is evaluated in a set of 300 evaluation games played against a pure MCTS – a standard UCT agent [10, 3] without any DNNs. In evaluation games, the MCTS with a trained model used 40 iterations per move, whereas the pure MCTS used 800 iterations per move – where at the end of every iteration, the average outcome of 10 random rollouts is backed up. The final column of Table 1 reports the win percentages of the trained MCTS against the untrained MCTS. The table also provides further details on the number of trainable parameters in each of the DNNs, and for some games summarises unusual properties that these games have which we have not yet observed in much of the existing literature on learning through self-play in games.

Table 1: Data for a variety of different games, all implemented in Ludii v1.1.6, for which we trained models in Polygames over a duration of 20 hours using 8 GPUs and 80 CPU cores per model. The second column lists some interesting properties for games that we have not yet often seen (if at all) in existing literature using AlphaZero-like training approaches. The third column lists the number of trainable parameters in the model (we used identical Polygames hyperparameters for the DNN architecture across all games, but in Polygames by default the number of channels in hidden convolutional layers scales with the number of input channels). The last column lists the win percentages of MCTS with the trained model using 40 iterations per move, against MCTS without any trained model using 800 iterations per move – where every iteration backs up the average outcome of 10 random rollouts.

Game | Unusual Properties | Trainable Parameters | Win Percentage
---|---|---|---
Breakthrough | - | 188,296 | 100.00%
Connect6 | - | 180,472 | 75.67%
Dai Hasami Shogi | - | 188,296 | 99.33%
Fanorona | Move aliasing due to choice of capture direction. | 188,296 | 50.00%
Feed the Ducks | Moves have global effects across entire board. | 231,152 | 83.00%
Gomoku | - | 180,472 | 91.00%
Hex | - | 222,464 | 100.00%
HeXentafl | Asymmetry in piece types, initial setup, and goals. | 231,152 | 98.67%
Konane | - | 188,296 | 98.00%
Lasca | Pieces (of multiple different types) can stack. | 5,450,268 | 3.50%
Minishogi | - | 2,009,752 | 97.00%
Pentalath | - | 180,472 | 95.33%
Squava | Lines of $4$ win, but lines of $3$ lose. | 222,464 | 96.67%
Surakarta | Loops around board allow for unique move patterns. | 188,948 | 100.00%
Yavalath | Lines of $4$ win, but lines of $3$ lose. | 222,464 | 97.33%

In the majority of the evaluated games, the trained MCTS easily outperforms the untrained one, even using 20 times fewer MCTS iterations (or 200 times fewer if the number of random rollouts performed by the untrained MCTS is counted). Note that, in comparison to work that focuses on achieving superhuman playing strength [26, 5], we focused on short training runs using fewer resources and smaller networks. Our primary aim is to demonstrate the possibility of training effectively using a single implementation without game-specific domain knowledge. The two results that stand out most are for Lasca and Fanorona. The win percentage of $3.50\%$ for Lasca indicates that this model is not trained nearly as well as the others. Lasca is the only game among those tested that involves stacking of multiple pieces on a single site. Our procedures for the construction of input and output tensors lead to a significantly larger number of channels in this game compared to the other games, which is also reflected in the large number of trainable parameters that this model has. Further research is required to establish whether it would be sufficient to reduce the size of the model, or whether entirely different approaches for constructing the tensors would be more appropriate.
In Fanorona, the win percentage of $50\%$ for the trained model is not necessarily a poor level of performance (considering the large difference in number of MCTS iterations), but it appears to be noticeably worse than in the other games. One possible explanation for this may be that Fanorona has a more severe degree of move aliasing, because there are situations where there are multiple different legal moves with identical “to” and “from” positions, but different effects in that a player can choose in which direction they wish to capture opposing pieces. Such moves are all represented by a single, shared logit in our output tensors – which means that only the MCTS can distinguish between them, but the trained policy head cannot. Figure 6: Screenshots of all the Ludii-based games included in our experiments. First row: Breakthrough, Connect6, Dai Hasami Shogi, Fanorona, Feed the Ducks. Second row: Gomoku, Hex, HeXentafl, Konane, Lasca. Third row: Minishogi, Pentalath, Squava, Surakarta, Yavalath. ## 6 Open Problems Thanks to the large library of games available in Ludii [20], we can get a clear picture of categories of games that are open problems to various extents; some that are simply not supported yet by Polygames [5] and require more engineering effort, and some that appear to have been neglected across the majority of recent game AI literature. All of these types of games are supported by Ludii: * • Stochastic games: these were not included in this paper because they are temporarily unsupported by the MCTS implementation of Polygames, but were supported in earlier versions of Polygames and will be again in future versions. * • Games with more than $2$ players: support for these can be added relatively easily [18], but is not yet available in Polygames. 
* • Imperfect-information games: there has been some recent work towards AlphaZero-like training approaches that support imperfect-information games [2], but tractability is still a concern for games with little common knowledge. * • Simultaneous-move games: simultaneous-move games will at least require significant changes in the MCTS component [3] as it is typically used in AlphaZero-like training setups. * • Games with excessively large state or move tensors: games such as Taikyoku Shogi, with a $36 \times 36$ board and $402$ pieces per player of $209$ different types, can be modelled and run in Ludii, but produce excessively large tensors which quickly lead to memory issues when training with standard hyperparameter values that work well for “normal” games. These issues do not appear straightforward to resolve with current hardware and large DNNs. * • Games played on a mix of cells, edges and/or vertices of graphs: while games like Chess are only played on cells, and games like Go only on vertices, there are also games such as Contagion that are played on a mix of multiple different parts of a graph. It is not clear how to directly support these with the standard CNNs. * • Games without an explicitly defined board: games such as Andantino or Chex are not played in a limited area that is defined upfront, but in a playable area that grows dynamically as play progresses. The standard DNN architectures require these spatial dimensions to be predefined and fixed. * • Games with more than $2$ spatial dimensions: games such as Spline have a third spatial dimension, which cannot be handled by the standard $2$D convolutional layers. While a straightforward extension to $3$D convolutional layers may be sufficient, we are not aware of any existing research towards this for games, and also imagine that a third spatial dimension can rapidly lead to tensors becoming excessively large again for many non-trivial games.
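To make the last point concrete, a naive pure-Python sketch of a “valid” 3D cross-correlation (the core operation a $3$D convolutional layer would apply; channels, batching, and any deep-learning framework are deliberately omitted) shows how the $2$D case used for flat boards extends by one spatial axis, and why tensor sizes grow quickly with every added dimension:

```python
def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D cross-correlation over a (D, H, W) volume of numbers,
    extending the 2D case used for flat game boards by one spatial axis."""
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    d, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    return [[[sum(volume[z + i][y + j][x + k] * kernel[i][j][k]
                  for i in range(d) for j in range(h) for k in range(w))
              for x in range(W - w + 1)]
             for y in range(H - h + 1)]
            for z in range(D - d + 1)]
```

Each output value now sums over $d \times h \times w$ terms rather than $h \times w$, and the output itself gains a depth axis, which is the source of the memory concerns raised above.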
## 7 Conclusions We have described our approach for constructing tensor representations of states and moves for any game implemented in the Ludii general game system, and used this to implement a bridge between Ludii and the Polygames framework. This allows for the state-of-the-art tree search and self-play training techniques implemented in Polygames to be used for training game-playing models in any game described in Ludii’s general game description language. Whereas AlphaZero-like approaches typically require game-specific domain knowledge to define a Deep Neural Network’s architecture and its input and output tensors, we only require such domain knowledge at the level of the general game system as a whole, and can now leverage Ludii’s wide library of games – which can quickly grow thanks to its user-friendly game description language – to facilitate more general game AI research with minimal requirements for game-specific engineering efforts. We have identified a series of “open problems” in the form of classes of games that are already supported by Ludii, but not yet by Polygames. For some of these there is a clear path that merely requires additional engineering effort, but others are likely to require a more significant amount of extra research. ## Acknowledgments This work was partially supported by the European Research Council as part of the Digital Ludeme Project (ERC Consolidator Grant #771292), led by Cameron Browne at Maastricht University’s Department of Data Science and Knowledge Engineering. We thank Éric Piette for his image editing skills, and Matthew Stephenson for his mastery of the English language. ## References * [1] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade Learning Environment: An Evaluation Platform for General Agents. Journal of Artificial Intelligence Research, 47:253–279, 2013. * [2] N. Brown, A. Bakhtin, A. Lerer, and Q. Gong. Combining deep reinforcement learning and search for imperfect-information games. In H.
Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020. * [3] C. Browne, E. Powley, D. Whitehouse, S. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–49, 2012. * [4] C. B. Browne. Automatic Generation and Evaluation of Recombination Games. PhD thesis, Queensland University of Technology, 2009. * [5] T. Cazenave, Y.-C. Chen, G. Chen, S.-Y. Chen, X.-D. Chiu, J. Dehos, M. Elsa, Q. Gong, H. Hu, V. Khalidov, C.-L. Li, H.-I. Lin, Y.-J. Lin, X. Martinet, V. Mella, J. Rapin, B. Roziere, G. Synnaeve, F. Teytaud, O. Teytaud, S.-C. Ye, Y.-J. Ye, S.-J. Yen, and S. Zagoruyko. Polygames: Improved zero learning. ICGA Journal, 2020. To appear. * [6] R. Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In H. J. van den Herik, P. Ciancarini, and H. H. L. M. Donkers, editors, Computers and Games, volume 4630 of LNCS, pages 72–83. Springer Berlin Heidelberg, 2007. * [7] E. Cox, E. Schkufza, R. Madsen, and M. R. Genesereth. In Proceedings of the IJCAI Workshop on General Intelligence in Game-Playing Agents (GIGA), pages 13–20, 2009. * [8] R. Emslie. Galvanise zero. https://github.com/richemslie/galvanise_zero, 2019. * [9] A. Goldwaser and M. Thielscher. Deep reinforcement learning for general game playing. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 1701–1708. AAAI Press, 2020. * [10] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In J. Fürnkranz, T. Scheffer, and M. Spiliopoulou, editors, Machine Learning: ECML 2006, volume 4212 of LNCS, pages 282–293. Springer, Berlin, Heidelberg, 2006. * [11] J. Kowalski, M. Maksymilian, J. Sutowicz, and M. Szykuła. Regular boardgames. In The Thirty-Third AAAI Conference on Artificial Intelligence, pages 1699–1706. AAAI Press, 2019. 
* [12] M. Lanctot, E. Lockhart, J.-B. Lespiau, V. Zambaldi, S. Upadhyay, J. Pérolat, S. Srinivasan, F. Timbers, K. Tuyls, S. Omidshafiei, D. Hennes, D. Morrill, P. Muller, T. Ewalds, R. Faulkner, J. Kramár, B. de Vylder, B. Saeta, J. Bradbury, D. Ding, S. Borgeaud, M. Lai, J. Schrittwieser, T. Anthony, E. Hughes, I. Danihelka, and J. Ryan-Davis. OpenSpiel: A framework for reinforcement learning in games. http://arxiv.org/abs/1908.09453, 2019. * [13] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015. * [14] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989. * [15] M. Lin, Q. Chen, and S. Yan. Network in network, 2014. * [16] N. Love, T. Hinrichs, D. Haley, E. Schkufza, and M. Genesereth. General game playing: Game description language specification, 2008. * [17] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019. * [18] N. Petosa and T. Balch. Multiplayer alphazero. In Workshop on Deep Reinforcement Learning at the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 2019. * [19] É. Piette, C. Browne, and D. J. N. J. Soemers. Ludii game logic guide. CoRR, abs/2101.02120, 2021. * [20] É. Piette, D. J. N. J. Soemers, M. Stephenson, C. F. Sironi, M. H. M. Winands, and C. Browne. Ludii – the ludemic general game system. In G. D. Giacomo, A. Catala, B. Dilkina, M. Milano, S. Barro, A. Bugarín, and J. 
Lang, editors, Proceedings of the 24th European Conference on Artificial Intelligence (ECAI 2020), volume 325 of Frontiers in Artificial Intelligence and Applications, pages 411–418. IOS Press, 2020. * [21] J. Pitrat. Realization of a general game-playing program. In A. J. H. Morrel, editor, Information Processing, Proceedings of IFIP Congress 1968, Edinburgh, UK, 5-10 August 1968, Volume 2 - Hardware, Applications, pages 1570–1574, 1968. * [22] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, Cham, 2015. * [23] E. Schkufza, N. Love, and M. Genesereth. Propositional automata and cell automata: Representational frameworks for discrete dynamic systems. In W. Wobcke and M. Zhang, editors, AI 2008: Advances in Artificial Intelligence, volume 5360 of LNCS, pages 56–66. Springer, Berlin, Heidelberg, 2008. * [24] J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, T. Lillicrap, and D. Silver. Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588:604–609, 2020. * [25] E. Shelhamer, J. Long, and T. Darrell. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4):640–651, 2017. * [26] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, 362(6419):1140–1144, 2018. * [27] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis. 
Mastering the game of Go without human knowledge. Nature, 550:354–359, 2017. * [28] C. F. Sironi and M. H. M. Winands. Optimizing propositional networks. In Computer Games, pages 133–151. Springer, 2017. * [29] D. J. Wu. Accelerating self-play learning in go. CoRR, abs/1902.10565, 2019.
# A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network Xinwei Zhao, Chen Chen, and Matthew C. Stamm This material is based upon work supported by the National Science Foundation under Grant No. 1553610. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. This material is based on research sponsored by DARPA and Air Force Research Laboratory (AFRL) under agreement number PGSC-SC-111346-03. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA and Air Force Research Laboratory (AFRL) or the U.S. Government. The authors are with the Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, 19104 USA (e-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; mstamm@drexel.edu). ###### Abstract With the development of deep learning, convolutional neural networks (CNNs) have become widely used in multimedia forensics for tasks such as detecting and identifying image forgeries. Meanwhile, anti-forensic attacks have been developed to fool these CNN-based forensic algorithms. Previous anti-forensic attacks were often designed to remove forgery traces left by a single manipulation operation, as opposed to a set of manipulations. Additionally, recent research has shown that existing anti-forensic attacks against forensic CNNs have poor transferability, i.e. they are unable to fool other forensic CNNs that were not explicitly used during training. In this paper, we propose a new anti-forensic attack framework designed to remove forensic traces left by a variety of manipulation operations.
This attack is transferable, i.e. it can be used to attack forensic CNNs that are unknown to the attacker, and it introduces only minimal distortions that are imperceptible to human eyes. Our proposed attack utilizes a generative adversarial network (GAN) to build a generator that can attack color images of any size. We achieve attack transferability through the use of a new training strategy and loss function. We conduct extensive experiments to demonstrate that our attack can fool many state-of-the-art forensic CNNs with varying levels of knowledge available to the attacker. ###### Index Terms: Generative Adversarial Networks, Convolutional Neural Networks, Anti-Forensic Attack, Transferability ## I Introduction Software such as PhotoShop and GIMP makes photo editing easy for people who have no background in image processing, and allows people to add or remove content and effects as they prefer. In many critical scenarios, however, multimedia contents can be used as digital evidence to assist in activities such as criminal investigations and news reporting. Therefore, it is important for forensic investigators to ensure the integrity and authenticity of digital contents [1, 2, 3, 4]. To combat multimedia forgeries, researchers have developed various forensic algorithms to detect image forgeries [5, 6, 7, 8, 9, 10, 11, 12, 13, 14], and to identify manipulation operations such as resizing [15, 16], contrast enhancement [17, 2], JPEG compression [18, 19, 20], and median filtering [21]. In recent years, deep-learning based techniques such as CNNs have become the most popular approach for building sophisticated and robust forensic algorithms, and have achieved state-of-the-art performance on many digital forensics tasks [22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. To help investigators discover weaknesses of forensic techniques and develop defense strategies, it is equally important to study anti-forensics [32, 33].
In some scenarios, an intelligent attacker may launch a malicious attack to fool forensic algorithms by removing the forensic traces left by manipulations [34, 35, 36, 37, 38]. Previous research has shown that deep-learning-based algorithms are vulnerable to adversarial examples [39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]. Recent research on digital anti-forensics has shown that attacks can be crafted to fool various forensic CNNs [52, 53, 54, 55, 56]. While these attacks can achieve strong performance when they are explicitly trained against a victim CNN (i.e. white-box scenarios), forensic researchers have found that many anti-forensic attacks cannot fool CNNs other than those used directly during training [57, 58]. This common problem is referred to as the transferability issue of adversarial attacks.

The transferability issue of anti-forensic attacks occurs in limited knowledge scenarios, when the attacker has no direct access to the trained victim CNNs (i.e. the CNNs that they wish to attack). The transferability of attacks has been actively studied in machine learning fields such as computer vision [59, 60, 43]. However, very limited research has been done to address this problem in forensics. Recent research has demonstrated that existing anti-forensic attacks on CNNs, such as FGSM and GAN-based attacks, have many difficulties in transferring [57, 58]. In particular, Barni et al. have shown that attacks on image forensic CNNs have difficulty transferring to other CNNs constructed with different architectures or trained using different training data [57]. Moreover, Zhao et al. found that by simply changing the class definitions of an image forensic CNN, adversarial examples can no longer fool otherwise identical CNNs [58].

In this paper, we propose a new anti-forensic attack based on generative adversarial networks (GANs) to fool forensic CNNs trained for manipulation detection and identification.
Our proposed attack operates by using a generative model to remove forensic traces left by manipulation operations from an image, and to synthetically reconstruct the forensic traces of unaltered images. The anti-forensically attacked image produced by our proposed attack can mimic the forensic statistics of real unaltered images and fool state-of-the-art forensic CNNs under different scenarios. Moreover, our proposed attack demonstrates strong transferability under various limited knowledge or black-box scenarios. This includes transferring across different training data and CNN architectures.

Our proposed anti-forensic GAN differs from the traditional GAN structure by incorporating additional elements. To achieve transferability, our proposed anti-forensic GAN introduces an ensemble of pre-trained surrogate CNNs into training the generator. This forces the generator to learn more generic forensic traces of unaltered images. Each surrogate CNN is trained by the attacker to perform manipulation detection or identification, and forms a unique decision boundary of the unaltered class. By using an ensemble of surrogate CNNs constructed with diverse architectures and class definitions, the generator is trained to capture comprehensive forensic information about unaltered images, and produces anti-forensically attacked images residing in the intersection of the unaltered-class boundaries formed by each surrogate CNN. Additionally, we introduce the rule of pixel-to-pixel correspondence for creating training data to improve the transferability of the proposed attack.

We evaluate our proposed anti-forensic GAN attack through an extensive set of experiments at different knowledge levels of the investigator's CNN, including the perfect knowledge scenario and three limited knowledge scenarios. We demonstrate that our proposed anti-forensic GAN attack can fool forensic CNNs built with various state-of-the-art architectures.
Moreover, our proposed anti-forensic GAN attack shows strong transferability, and can fool CNNs built with different architectures and trained on different databases. Additionally, our proposed attack does not introduce visual distortions into the produced anti-forensically attacked images under any scenario.

## II Image Forensic CNNs

Convolutional neural networks (CNNs) are one of the most popular deep learning frameworks. They have achieved state-of-the-art performance on many forensic tasks, including detecting and identifying manipulation operations [61, 24, 62, 25]. Image forensic CNNs operate by learning forensic features extracted from a large amount of training data, then forming a decision boundary for each pre-defined forensic class. There are multiple ways to define the classes of image forensic CNNs. Zhao et al. specify three major class definitions in previous research: a binary decision of unaltered vs. manipulated, a multi-class definition of unaltered vs. several individual manipulations, or a multi-class definition of unaltered vs. several parameterized versions of individual manipulations [58]. We now briefly discuss these class definitions.

Manipulation detection is a binary classification between manipulated and unaltered. One class is assigned to unaltered images, and the other class is assigned to images manipulated in any form. This class definition allows the investigator to detect whether a manipulation was applied.

Manipulation classification is a multi-class classification to identify a specific manipulation or unaltered. One class is assigned to unaltered images, and the other classes are each assigned to an individual manipulation operation. For each individual manipulated class, all parameters related to that manipulation operation are grouped together into a single class. This class definition not only allows the investigator to detect whether manipulations exist, but also to identify which specific manipulation was applied.
Manipulation parameterization is a multi-class classification to identify a specific manipulation-and-parameter pairing or unaltered. One class is assigned to unaltered images, and the other classes are each assigned to a pairing of manipulation and parameter (or range of parameterizations). This class definition could be used if the investigator wants very detailed information about a possible forger, or wants to identify inconsistencies in editing within an image.

Each of the above class definitions includes the "unaltered" class. To fool forensic CNNs, the anti-forensically attacked image should be produced within the decision boundary of the "unaltered" class. However, the decision boundaries of CNNs depend strongly on the training data, the CNN architecture, and the definition of classes. Any change may result in a change of the decision boundaries. Therefore, many existing anti-forensic attacks against CNNs, such as FGSM [42] and MISLGAN [53], can only fool the CNNs they were directly trained against [57, 58].

## III Problem Formulation

To formulate the anti-forensic attack proposed in this work, we begin by examining the interaction between an attacker Alice and a forensic investigator Bob. In this interaction, Bob will be confronted with an image and will attempt to determine if the image is unaltered or if it has been manipulated. To do this, he will make use of a CNN classifier $C(\cdot)$ that is trained to differentiate between unaltered images and manipulated ones. In particular, we assume he will use manipulation detection, manipulation identification, or manipulation parameterization to define the classes, and a class corresponding to "unaltered" will always be present. The attacker Alice will use a photo editor to create a falsified image by applying image manipulations $M(\cdot)$ to an unaltered image $I$.
Since Alice is aware that Bob will use his forensic CNN $C$ to determine whether Alice's image is unaltered, Alice will use an anti-forensic attack $A(\cdot)$ to remove the forensic traces left by her manipulations. Alice's primary goal in designing this attack is to fool Bob's forensic classifier such that

$C(A(M(I)))=\mbox{{``unaltered"}}.$ (1)

Additionally, Alice wants to ensure that her anti-forensic attack does not introduce visually detectable distortions into her image, which would render it visually implausible. For the purposes of this work, we assume Alice can collect a large amount of data and train deep-learning-based algorithms. We assume that Alice's manipulations $M$ consist of a single editing operation; however, this work can be extended with minor modifications to consider multiple editing operations. Furthermore, we assume that Alice will choose between $N$ potential manipulations, all of which are known to Bob. While in reality Bob may not know all possible manipulations available to Alice, this assumption corresponds to a worst-case scenario for Alice and ensures that our attack can still be successful in unfavorable conditions.

## IV Knowledge Scenarios

TABLE I: Match and mismatch information between the attacker and the investigator under each scenario.

Knowledge Scenarios | Training Data | Manipulations | Parameters | Trained CNN
---|---|---|---|---
Perfect knowledge | Match | Match | Match | Match
Training Data Mismatch | Mismatch | Match | Match | Mismatch
Training Data and Manipulation Parameter Mismatch | Mismatch | Match | Mismatch | Mismatch
Training Data and CNN Architecture Mismatch | Mismatch | Match | Match | Mismatch

CNNs possessed by the investigator Bob are the "victim CNNs" that the attacker Alice attempts to fool. It is reasonable to assume that Alice will use as much information as possible to make her attack successful.
Based on the amount of knowledge available to Alice, it is common in multimedia anti-forensics to categorize attack scenarios into the perfect knowledge scenario and limited knowledge scenarios [33, 57, 58]. In the perfect knowledge scenario, we assume Alice has the same amount of knowledge that Bob has (i.e. whatever Bob has access to, Alice has equal access to). All other scenarios are considered limited knowledge scenarios. Since the conditions for limited knowledge scenarios are harder to formulate than those of the perfect knowledge scenario, we formulate three limited knowledge scenarios below, and discuss Alice's known and unknown information in each scenario.

Perfect Knowledge: We assume Alice has full access to the victim CNN trained and possessed by Bob, or she is capable of reproducing an identical copy of the victim CNN to train her attack, such as when Alice has access to Bob's training data and knows the details of Bob's CNN architecture.

Training Data Mismatch: We assume Alice has no access to the victim CNN trained and possessed by Bob, nor does she have access to Bob's training data. However, we assume Alice knows the architecture of the victim CNN. She also knows the manipulations and chosen parameters that Bob used to create his training data. As a result, Alice can train a "well-approximated copy" of the victim CNN by using the same CNN architecture and her own collected data. However, due to the discrepancy between the two sets of training data, the boundaries formed by Bob's CNN and Alice's CNN are not identical. Therefore, Alice's attack must transfer across training data to fool the victim CNN.

Training Data and Manipulation Parameter Mismatch: We assume Alice has no access to the victim CNN trained and possessed by Bob, nor does she have access to Bob's training data. However, we assume Alice knows the architecture of the victim CNN. She also knows the manipulations of Bob's interest and a subset of the parameters of these manipulations.
As a result, Alice can obtain only a "poorly approximated copy" of the victim CNN by using the same CNN architecture and her own collected data. However, due to the incomplete information about manipulation parameters, the discrepancies between Bob's CNN and Alice's CNN are larger than in the "Training Data Mismatch" scenario stated above. Therefore, it is more challenging for Alice to make her attack transfer to fool Bob's CNN, especially Bob's manipulation parameterization classifier, since Bob's manipulation parameterization classifier can output more classes than Alice's.

Training Data and CNN Architecture Mismatch: We assume Bob uses a CNN constructed with a private architecture, and Alice cannot gather any information about Bob's CNN architecture by any means. However, we assume Alice knows the manipulations and chosen parameters that Bob used to create his training data. Therefore, Alice's trained attack must transfer across both training data and CNN architectures to fool Bob's CNN.

We summarize the match and mismatch information between the attacker Alice and the investigator Bob for each knowledge scenario stated above in Table I.

## V Proposed Anti-Forensic Attack

Our goal is to create anti-forensically falsified images to fool manipulation forensic CNNs, such that the victim CNN will classify forged images produced by our attack as unaltered or authentic. Additionally, the anti-forensically attacked images produced by our attack should contain no perceptible distortion. To do this, we propose a new generative attack to remove forensic forgery traces left by manipulation operations from an image and reconstruct falsified forensic traces of unaltered images. Our proposed attack utilizes a generative adversarial network to build a generator that takes in a forged image of any size as input, and produces an anti-forensically attacked image of the same size and with the same contents as the forged image.
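The attack's goal from Eq. (1) can be phrased as a simple end-to-end check: the composition of manipulation, attack, and classifier must land in the "unaltered" class while preserving image size. A toy pure-Python sketch, where `M`, `A`, and `C` are hypothetical stand-ins rather than the paper's actual models:

```python
def M(image):
    """Toy manipulation: brighten every pixel, leaving an obvious statistical trace."""
    return [p + 10 for p in image]

def A(image):
    """Toy anti-forensic attack: undo the statistical trace while keeping content."""
    return [p - 10 for p in image]

def C(image):
    """Toy 'forensic classifier': flags images whose mean pixel value is shifted."""
    return "unaltered" if sum(image) / len(image) < 105 else "manipulated"

I = [100] * 8                        # unaltered toy image
forged = M(I)                        # Alice's manipulated image
attacked = A(forged)                 # anti-forensically attacked image

assert C(forged) == "manipulated"    # Bob's classifier catches the raw forgery
assert C(attacked) == "unaltered"    # Eq. (1): C(A(M(I))) == "unaltered"
assert len(attacked) == len(forged)  # the attack preserves image size
```

In the real attack, `A` is the trained generator and `C` is a forensic CNN; the toy version only illustrates the success condition.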
### V-A Generative Adversarial Networks

Generative adversarial networks (GANs) are a deep learning framework often used to produce visually realistic images in computer vision [63]. A GAN typically consists of two networks, a discriminator $D(\cdot)$ and a generator $G(\cdot)$. While the goal of the discriminator is to distinguish between generated data and real data, the goal of the generator is to produce fake data that mimics the statistical distribution of real data to fool the discriminator. A GAN is trained by optimizing the discriminator and the generator in an alternating fashion using the minimax function (2) until the two networks reach an equilibrium. Assuming real images $I$ have distribution $I\sim p_{r}(I)$ and generated images $I^{\prime}$ have distribution $I^{\prime}\sim p_{g}(I^{\prime})$, the minimax function for training a GAN is formulated as

$\min_{G}\max_{D}\operatorname{\mathbb{E}}_{I\sim p_{r}(I)}[\log D(I)]+\operatorname{\mathbb{E}}_{I^{\prime}\sim p_{g}(I^{\prime})}[\log(1-D(I^{\prime}))]$ (2)

where $\operatorname{\mathbb{E}}$ denotes the expected value.

Figure 1: Overview of the proposed anti-forensic GAN framework.

### V-B Proposed Anti-Forensic GAN

While GANs have been widely used to construct spatial content and have achieved many state-of-the-art results [64, 65, 66, 67], limited research has been done on constructing forensic traces to fool CNN-based forensic algorithms. Chen et al. proposed MISLGAN to fool forensic CNNs for camera model identification [53, 54]. Kim et al. showed that GANs can be used to remove forensic traces left by median filtering [52]. However, these algorithms were designed to remove particular forensic traces, and there exists no anti-forensic attack that can remove forgery traces left by varying manipulation operations.
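The value function in Eq. (2) can be checked numerically once the discriminator's outputs on real and generated samples are fixed. A small pure-Python sketch over toy scores (all values hypothetical):

```python
import math

def gan_value(d_real, d_fake):
    """Empirical estimate of E[log D(I)] + E[log(1 - D(I'))] from Eq. (2)."""
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake

# A confident discriminator: outputs near 1 on real images, near 0 on fakes.
confident = gan_value([0.9, 0.95], [0.05, 0.1])
# At the classical equilibrium the discriminator outputs 0.5 everywhere.
equilibrium = gan_value([0.5, 0.5], [0.5, 0.5])

# The discriminator maximizes this value; the generator minimizes it,
# pushing the discriminator back toward the equilibrium value 2*log(0.5).
assert confident > equilibrium
assert abs(equilibrium - 2 * math.log(0.5)) < 1e-12
```

The sketch only evaluates the objective; actual training alternates gradient steps on $D$ and $G$ against this value.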
Additionally, previous research has shown that existing anti-forensic attacks on forensic CNNs, such as FGSM [42] and GAN-based attacks [53], can only successfully fool those CNNs directly used to train the attack [57, 58]. This is a well-known problem of adversarial attacks, referred to as the transferability issue in machine learning fields. While the transferability of attacks has been actively studied in computer vision [60, 59, 68], related research in multimedia anti-forensics is still at an early stage.

In this paper, we propose a new GAN-based anti-forensic attack, the "anti-forensic GAN", to solve the above problems. The proposed anti-forensic GAN attack is a general anti-forensic framework aimed at removing forgery traces left by various manipulation operations. Additionally, our proposed anti-forensic GAN attack is designed to transfer and can fool CNNs not used during training.

Figure 1 shows the overall framework of the proposed anti-forensic GAN. Different from a traditional GAN, the proposed anti-forensic GAN consists of three major components: a generator, a discriminator, and an ensemble of surrogate CNNs. The ensemble of surrogate CNNs contains a diverse set of pre-trained forensic CNNs constructed with various CNN architectures and class definitions. Each surrogate CNN is trained for manipulation detection, manipulation identification, or manipulation parameterization. The goal of the ensemble of surrogate CNNs is to guide the generator to produce robust anti-forensically falsified images that can mimic comprehensive aspects of the forensic information of unaltered images. We now describe the architecture of each component in the proposed anti-forensic GAN.

Generator: The goal of the generator is to remove forensic traces left by manipulations and reconstruct the forensic traces to mimic those of unaltered images. Figure 2 shows the architecture of the generator in the proposed anti-forensic GAN framework.
The generator consists of two major building blocks, ConvBlocks and a feature map reduction module, shown in Figure 3. ConvBlocks are conceptual blocks constructed from a sequence of convolutional layers with activations arranged in a common structure. A $B=b$ ConvBlock is constructed with one convolutional layer with $b$ $3\times 3$ filters, stride $1$, and ReLU activation [69], followed by another convolutional layer of the same structure, followed by a $1\times 1$ convolutional layer with ReLU activation. The purpose of the $1\times 1$ convolutional layers is to force the generator to learn cross-channel correlations between feature maps. Our generator uses two consecutive ConvBlocks, with $B=64$ for the first one and $B=128$ for the second one. Since the second ConvBlock outputs a large number of feature maps, we use the feature map reduction module to combine all the feature maps into a three-channel color image. The feature map reduction module is constructed with a convolutional layer with three $3\times 3$ filters and stride $1$, followed by ReLU activation.

Figure 2: The generator's architecture of the proposed anti-forensic GAN framework.

Figure 3: Building blocks of the generator: the ConvBlock is shown on the left, and the feature map reduction module is shown on the right.

Figure 4: The discriminator's architecture of the proposed anti-forensic GAN framework.

Discriminator: The goal of the discriminator is to differentiate unaltered images from the generated anti-forensically attacked images. The architecture of the discriminator is shown in Figure 4. It is a variant of the image forensic CNN proposed by Bayar and Stamm [70], since this CNN was designed to learn forensic features rather than the images' contents.
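As an aside, the generator description above fully determines its convolutional parameter count. A quick arithmetic sketch, counting only the convolutional layers described (assuming a three-channel color input and the usual $(k^{2}\cdot c_{\mathrm{in}}+1)\cdot c_{\mathrm{out}}$ weights-plus-biases count per convolution):

```python
def conv_params(k, c_in, c_out):
    """Weights plus biases for a k x k convolution: (k*k*c_in + 1) * c_out."""
    return (k * k * c_in + 1) * c_out

def convblock_params(c_in, b):
    """ConvBlock B=b: two 3x3 conv layers with b filters each, then a 1x1 conv."""
    return (conv_params(3, c_in, b)      # first 3x3 conv
            + conv_params(3, b, b)       # second 3x3 conv, same structure
            + conv_params(1, b, b))      # 1x1 conv mixing feature maps

total = (convblock_params(3, 64)         # ConvBlock B=64 on a color image
         + convblock_params(64, 128)     # ConvBlock B=128
         + conv_params(3, 128, 3))       # feature map reduction to 3 channels

print(total)  # total convolutional parameters of this sketch
```

This is only a back-of-the-envelope count for the layers named in the text; it is not an official parameter count from the paper.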
It first uses a constrained convolutional layer to learn low-level forensic features, then uses four standard convolutional layers with batch normalization [71] and hyperbolic tangent activation, followed by three fully connected layers to extract high-level features. We modify the last fully connected layer to a single neuron followed by sigmoid activation to construct our discriminator. The last neuron's activation corresponds to the likelihood that an image is unaltered or generated (i.e. 1 if the image is real, 0 if the image is generated).

Ensemble of Surrogate CNNs: The goal of the ensemble of surrogate CNNs is to force the generator to produce anti-forensically attacked images that mimic the forensic traces of unaltered images. Each surrogate CNN is pre-trained to perform manipulation detection, identification, or parameterization, and forms a unique decision boundary of the unaltered class. By integrating diverse surrogate CNNs, the ensemble can capture comprehensive aspects of the forensic statistics of unaltered images and force the generator to produce only anti-forensically attacked images that reside in the overlapping regions of the unaltered-class decision boundaries. Our intuition for why the ensemble can improve the transferability of the attack is that if the generated anti-forensically attacked images can fool every surrogate CNN in the ensemble, they most likely can fool other CNNs not used during training of the attack.

To improve the transferability of the proposed anti-forensic attack, it is critical to ensure the diversity of the surrogate CNNs in the ensemble. We propose to increase this diversity by varying CNN architectures and class definitions. In particular, the attacker can choose multiple CNN architectures; then, for each CNN architecture, the attacker can obtain multiple versions of CNNs by changing the definition of classes for manipulation detection, manipulation identification, and manipulation parameterization, as described in Section II.
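The ensemble constraint above amounts to requiring that an attacked image sit in the intersection of every surrogate's unaltered decision region. A toy sketch, with each "surrogate CNN" replaced by a hypothetical one-dimensional decision rule:

```python
# Toy surrogate "CNNs": each maps a forensic feature value to a predicted class.
# Their unaltered regions differ slightly, standing in for different
# architectures and class definitions.
surrogates = [
    lambda x: "unaltered" if x < 0.30 else "manipulated",   # detection-style
    lambda x: "unaltered" if x < 0.25 else "manipulated",   # identification-style
    lambda x: "unaltered" if x < 0.35 else "manipulated",   # parameterization-style
]

def fools_ensemble(x):
    """True only in the intersection of all unaltered decision regions."""
    return all(s(x) == "unaltered" for s in surrogates)

assert fools_ensemble(0.10)        # inside every unaltered region
assert not fools_ensemble(0.28)    # fools two surrogates but not the third
```

An image satisfying all surrogates simultaneously is more likely to fall inside the unaltered region of an unseen CNN as well, which is the transferability intuition stated above.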
In the perfect knowledge scenario, since the attacker has full access to the victim CNN possessed by the investigator, the victim CNN can be used directly as one surrogate CNN to train the proposed anti-forensic attack. While the attacker does not need to choose other architectures, we found empirically that using an ensemble with various class definitions of the victim CNN can improve the performance of the anti-forensic attack even in the perfect knowledge scenario. These findings will be demonstrated via experiments in Section VII-D. In limited knowledge scenarios, since the attacker has no direct access to the victim CNN possessed by the investigator, each surrogate CNN should be trained using CNN architectures of the attacker's choice and training data collected by the attacker. The attacker should train surrogate CNNs with varying architectures and class definitions to ensure the diversity of the ensemble.

### V-C Creation of Training Data

Pixel to Pixel Correspondence: We found that the transferability of the proposed anti-forensic GAN attack depends heavily on the strategy used to create the training data, which will be demonstrated via experiments in later sections. We propose that each training example should be created as a pair of an unaltered image and its corresponding manipulated image. Specifically, each pixel in the unaltered image should have a corresponding pixel in the manipulated image. One explanation for this finding is that the pixel-to-pixel correspondence between the unaltered image and the manipulated image forces the proposed attack to ignore differences caused by image contents and to learn only the differences resulting from forensic information. Since the discriminator and surrogate CNNs in the proposed framework take in images of the same size, manipulations that distort the shape of images, such as resizing or barrel distortion, may decrease the transferability of the proposed anti-forensic GAN attack.
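The pixel-to-pixel correspondence rule can be sketched as pairing each unaltered patch with the manipulated patch cut from the same coordinates, so that the two patches differ only in forensic traces, not content. A minimal sketch (the patch size and the toy "manipulation" are illustrative):

```python
def crop(img, top, left, size):
    """Cut a size x size patch from a 2-D list-of-lists image."""
    return [row[left:left + size] for row in img[top:top + size]]

def make_pair(unaltered, manipulate, top, left, size=4):
    """Training pair with pixel-to-pixel correspondence: both patches come
    from the same coordinates, so only forensic traces differ."""
    manipulated = [[manipulate(p) for p in row] for row in unaltered]
    return crop(unaltered, top, left, size), crop(manipulated, top, left, size)

# Toy 8x8 image whose pixel at (r, c) is r*10 + c, and a toy pixel-wise edit.
image = [[r * 10 + c for c in range(8)] for r in range(8)]
clean, forged = make_pair(image, lambda p: p + 1, top=2, left=3)

# Every pixel in the clean patch has a corresponding pixel in the forged one.
assert all(forged[i][j] == clean[i][j] + 1
           for i in range(4) for j in range(4))
```

Shape-changing manipulations such as resizing break this one-to-one pixel pairing, which is why they are flagged above as harmful to transferability.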
### V-D Loss Functions

To train the proposed anti-forensic GAN attack, we now define loss functions for the generator and the discriminator.

Generator's Loss: This loss function is formulated to ensure that generated anti-forensically attacked images maintain high visual quality and can fool the ensemble of surrogate CNNs and the discriminator. The generator's loss $\mathcal{L}_{G}$ is formulated as the weighted sum of three terms, the perceptual loss $\mathcal{L}_{p}$, the classification losses $\mathcal{L}_{c}$, and the adversarial loss $\mathcal{L}_{a}$,

$\mathcal{L}_{G}=\alpha\mathcal{L}_{p}+\sum_{s=1}^{S}\beta^{(s)}\mathcal{L}_{c}^{(s)}+\gamma\mathcal{L}_{a},$ (3)

where $S$ represents the number of surrogate CNNs in the ensemble and $\alpha$, $\beta^{(s)}$, $\gamma$ are weights.

The perceptual loss is used to ensure the visual quality of the anti-forensically attacked images produced by the proposed attack. It is formulated as the mean absolute difference of pixel values between the unaltered image and its corresponding generated anti-forensically attacked image. For an unaltered image $I$ of size $w\times h$ and its corresponding manipulated image $I^{\prime}$, the perceptual loss is expressed as

$\mathcal{L}_{p}=\frac{1}{w\times h}\sum_{i=1}^{w}\sum_{j=1}^{h}\mid I_{i,j}-G(I^{\prime})_{i,j}\mid,$ (4)

where the subscripts $i,j$ represent the pixel's coordinates. We note that when training the attack to remove traces left by manipulations that distort the image's shape, Equation (4) should be modified and calculated between the generated anti-forensically attacked image and the manipulated image $I^{\prime}$ rather than the unaltered image $I$. However, this modification may degrade the transferability of the attack.

The classification losses are used to ensure that the anti-forensically attacked image produced by the proposed attack can fool the ensemble of surrogate CNNs.
For the $s^{th}$ surrogate CNN of the ensemble, let $C_{s}(\cdot)$ represent this surrogate CNN's softmax output; then the classification loss $\mathcal{L}_{c}^{(s)}$ pertaining to the $s^{th}$ surrogate CNN is formulated as the softmax cross-entropy between the generated anti-forensically attacked image and the unaltered class,

$\mathcal{L}_{c}^{(s)}=-\sum_{k=1}^{K}t_{k}\log\left(C_{s}(G(I^{\prime}))_{k}\right),$ (5)

where $K$ is the number of classes and $t_{k}$ is the $k^{th}$ entry of the ideal softmax vector, with a $1$ at the location of the unaltered class and $0$'s elsewhere.

The adversarial loss is used to ensure that the anti-forensically attacked image produced by our proposed attack can fool the discriminator. It is formulated as the sigmoid cross-entropy between the discriminator's output on the generated anti-forensically attacked image and $1$,

$\mathcal{L}_{a}=\log(1-D(G(I^{\prime}))).$ (6)

Discriminator's Loss: This loss function is formulated to ensure that the discriminator can distinguish between unaltered images and generated anti-forensically attacked images. The discriminator's loss $\mathcal{L}_{D}$ is expressed as

$\mathcal{L}_{D}=\log D(I)+\log(1-D(G(I^{\prime}))).$ (7)

### V-E Deployment of the Proposed Attack

After training the proposed anti-forensic GAN framework, the attacker can use the generator to anti-forensically remove the forensic traces left by manipulation operations from forged images. The trained generator can be applied to color images of any size. If the attacker does not possess enough computational resources to attack a full-size image at once, the attacker can divide the full-size color image into smaller patches, use the generator to attack each image patch individually, and then reassemble the anti-forensically attacked patches in their original order to form the full-size attacked image.

## VI Evaluation Metrics

The proposed anti-forensic GAN attack should be able to fool victim CNNs and leave no visible distortions.
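The loss terms of Section V-D (Eqs. (3)–(7)) reduce to a few lines once the network outputs are given. A pure-Python sketch on toy values, with the weights $\alpha$, $\beta^{(s)}$, $\gamma$ set to 1 as in the experiments (the toy images and network outputs are hypothetical):

```python
import math

def perceptual_loss(I, G_out):
    """Eq. (4): mean absolute pixel difference over a w x h image."""
    n = len(I) * len(I[0])
    return sum(abs(I[i][j] - G_out[i][j])
               for i in range(len(I)) for j in range(len(I[0]))) / n

def classification_loss(softmax, unaltered_idx):
    """Eq. (5): cross-entropy against the one-hot unaltered class."""
    return -math.log(softmax[unaltered_idx])

def adversarial_loss(d_out):
    """Eq. (6): log(1 - D(G(I'))); minimizing it pushes D(G(I')) toward 1."""
    return math.log(1.0 - d_out)

def discriminator_loss(d_real, d_fake):
    """Eq. (7): log D(I) + log(1 - D(G(I')))."""
    return math.log(d_real) + math.log(1.0 - d_fake)

I     = [[100, 102], [101, 103]]    # toy unaltered image
G_out = [[100, 101], [101, 104]]    # toy generator output

L_G = (perceptual_loss(I, G_out)                   # alpha = 1
       + classification_loss([0.7, 0.2, 0.1], 0)   # one surrogate, beta = 1
       + adversarial_loss(0.4))                    # gamma = 1
```

With several surrogates, the classification term would be summed over all $S$ surrogate CNNs exactly as in Eq. (3).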
Attack Success Rate (ASR): To evaluate the performance of the anti-forensic GAN attack at fooling the victim CNN, we define the attack success rate as the percentage of generated anti-forensically attacked image patches that are classified into the "unaltered" class.

Mean SSIM and Mean PSNR: The anti-forensically attacked images should not contain any perceptible distortions. To evaluate the visual quality of the anti-forensically attacked images produced by our attack, we calculate the average SSIM and PSNR between the generated anti-forensically attacked images and the corresponding manipulated images.

## VII Experimental Results

We conducted a series of experiments to evaluate the performance of our proposed anti-forensic GAN attack against a variety of manipulation forensic CNNs built under each scenario described in Section IV. We assume that the investigator will use CNNs trained for manipulation detection, manipulation identification, or manipulation parameterization to authenticate images, and will trust the results of their CNNs. To show that our proposed attack is a general attack and can remove forensic traces left by varying manipulation operations, we selected three common manipulations, and for each manipulation we chose five parameters that cover a reasonable range of values. The chosen manipulations and parameters are shown in Table II.

These manipulations were chosen for several reasons. First, these manipulations can be easily parameterized and used to train manipulation parameterization classifiers. Second, these manipulations do not alter the shape or size of an image, thus making it straightforward to create training data using the pixel-to-pixel correspondence rule. Additionally, finite computational resources placed practical limitations on the number of manipulations and associated manipulation parameters that we could run in our extensive experiments.
Specifically, it took $3,000$ computational hours to run the experiments in this paper with an Nvidia 1080Ti GPU. The number of manipulations and associated manipulation parameters could be increased given greater computational power and larger storage; however, this was not feasible given our current computational resources.

TABLE II: Manipulation operations and their associated parameters.

Manipulations | Parameters
---|---
Additive Gaussian white noise | $\mu=0,\sigma=0.5,1,1.5,2,2.5$
Gaussian blurring | $\sigma=1,1.5,2,2.5,3,3.5,4,4.5$
Median filtering | filter size $=3,5,7,9,11$

### VII-A Data Preparation

We started by preparing data for the experiments. We used the Dresden image database [72], and our data were created from $169,620$ full-size JPEG images taken by 70 unique devices of 27 camera models. First, we randomly selected $10\%$ of the images as the evaluation set, "eval-set". The "eval-set" was used only for evaluating the performance of forensic CNNs before and after the proposed attack. Since under all limited knowledge scenarios described in Section IV we assume the attacker has no access to the investigator's training data, we built two disjoint training data sets by randomly bisecting the remaining $90\%$ of the images: "I-set" for the investigator and "A-set" for the attacker. As a result, we ensure that "eval-set", "I-set", and "A-set" contain no common images. "I-set" was used by the investigator to train manipulation forensic CNNs for authenticating images. "I-set" can also be used by the attacker, but only under the perfect knowledge scenario. "A-set" was used by the attacker to train the surrogate CNNs and then the proposed anti-forensic GAN attack under all limited knowledge scenarios. Next, for each data set we created manipulated images by applying each combination of manipulation and parameter shown in Table II to every single full-size image. The manipulated images were saved as PNG files.
Thus, each data set contains $15$ unique classes of manipulated images and one class of unaltered images. We then divided the full-size images into $256\times 256$ non-overlapping image patches. The following experiments were evaluated on image patches.

### VII-B Baseline Performance of Manipulation Forensic CNNs

In this experiment, we evaluated the baseline classification accuracies of CNNs trained on "I-set" and "A-set". CNNs trained on both data sets were evaluated on "eval-set" for fair comparison. We selected six CNN architectures because they achieve state-of-the-art performance on manipulation forensics. These CNN architectures are MISLnet [23], SRNet [73], PHNet [74], TransferNet [75], DenseNet_BC [76], and VGG-19 [77]. We note that to train TransferNet CNNs on color images, we modified the high-pass filter layer of TransferNet from one high-pass filter to three identical high-pass filters. For each architecture, we trained CNNs using the three class definitions. We grouped the image patches to make two classes (one unaltered class vs. one manipulated class) for manipulation detection, four classes (one unaltered class vs. three manipulated classes) for manipulation identification, and 16 classes (one unaltered class vs. 15 manipulated classes) for manipulation parameterization. Due to differences in the goals and conditions of the experiments, we chose for each CNN architecture the set of hyperparameters that yielded the highest classification accuracy after a grid search. CNNs of the same architecture and different class definitions were trained using the same hyperparameters. All CNNs were trained from scratch for $43$ epochs, and training stopped early if the training loss started to increase. Weights were initialized using the Xavier initializer [78] and biases were initialized as $0$'s. Weights and biases were optimized using stochastic gradient descent.
For TransferNet, the batch size was $50$, and the learning rate started at $0.001$ and decayed by $10\%$ every $5,000$ iterations. For the other five CNN architectures, the batch size was $25$, and the learning rate started at $0.0005$ and was halved every $4$ epochs. CNNs trained on “I-set” are the victim CNNs that the attacker attempts to fool, and the investigator uses them to authenticate images. The baseline classification accuracies of CNNs trained on “I-set” are shown in Table III. On average, for “I-set”, we achieved a classification accuracy of $99.29\%$ for manipulation detection, $98.51\%$ for manipulation identification, and $77.93\%$ for manipulation parameterization. CNNs trained on “A-set” are the surrogate CNNs used by the attacker to build the ensemble for training the proposed anti-forensic GAN attack under limited knowledge scenarios. The baseline classification accuracies of CNNs trained on “A-set” are shown in Table IV. On average, we achieved a classification accuracy of $99.42\%$ for manipulation detection, $98.39\%$ for manipulation identification, and $79.08\%$ for manipulation parameterization. These results show that the CNNs used in our experiments were well trained and consistent with state-of-the-art performance on manipulation forensics. Furthermore, the differences between the classification accuracies of CNNs trained on “I-set” and “A-set” indicate that the CNNs possessed by the investigator and the attacker are comparable but not identical.

TABLE III: I-set, baseline classification accuracies for six CNN architectures and three class definitions.

CNN Architect. | Detection | Classification | Parameterization
---|---|---|---
MISLnet | 99.84% | 99.55% | 86.24%
TransferNet | 99.20% | 98.04% | 65.27%
PHNet | 99.58% | 98.94% | 86.58%
SRNet | 99.16% | 99.36% | 81.30%
DenseNet_BC | 98.13% | 95.66% | 65.50%
VGG-19 | 99.87% | 99.50% | 82.67%
Avg. | 99.29% | 98.51% | 77.93%

TABLE IV: A-set, baseline classification accuracies for six CNN architectures and three class definitions.

CNN Architect. | Detection | Classification | Parameterization
---|---|---|---
MISLnet | 99.12% | 99.08% | 87.44%
TransferNet | 99.54% | 98.66% | 67.81%
PHNet | 99.58% | 98.76% | 84.79%
SRNet | 99.47% | 98.50% | 83.78%
DenseNet_BC | 98.89% | 95.91% | 68.60%
VGG-19 | 99.90% | 99.41% | 82.11%
Avg. | 99.42% | 98.39% | 79.08%

### VII-C Training the Proposed Anti-Forensic GAN

All attacks demonstrated in this paper were trained from scratch for $12$ epochs. The weights for all loss terms were set to $1$. Weights of the generator and the discriminator were initialized using the Xavier initializer [78] and biases were initialized to $0$. The generator was optimized using the Adam optimizer [79] and the discriminator was optimized using stochastic gradient descent. The learning rate started at $0.0001$ and was halved every $4$ epochs.

### VII-D Perfect Knowledge

TABLE V: Attack success rates (ASRs) achieved by different attack strategies in the perfect knowledge scenario.

Proposed Anti-Forensic GAN
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 0.98 | 0.98 | 0.98 | 0.98
TransferNet | 1.00 | 0.99 | 1.00 | 1.00
PHNet | 0.84 | 0.99 | 0.99 | 0.94
SRNet | 0.96 | 0.99 | 0.97 | 0.97
DenseNet_BC | 0.99 | 0.96 | 0.98 | 0.98
VGG-19 | 0.99 | 0.96 | 0.98 | 0.98
Avg. | 0.96 | 0.98 | 0.99 | 0.98

Removing Discriminator
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 0.90 | 0.97 | 1.00 | 0.95
TransferNet | 1.00 | 1.00 | 1.00 | 1.00
PHNet | 0.78 | 0.93 | 0.98 | 0.89
SRNet | 0.53 | 0.93 | 0.98 | 0.81
DenseNet_BC | 0.89 | 1.00 | 1.00 | 0.96
VGG-19 | 0.87 | 0.99 | 0.99 | 0.95
Avg. | 0.83 | 0.97 | 0.99 | 0.92

MISLGAN [53]
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 0.55 | 0.95 | 0.84 | 0.78
TransferNet | 0.99 | 1.00 | 1.00 | 1.00
PHNet | 0.90 | 0.97 | 0.94 | 0.94
SRNet | 0.88 | 0.90 | 0.82 | 0.87
DenseNet_BC | 0.90 | 0.94 | 0.94 | 0.93
VGG-19 | 0.71 | 0.97 | 0.96 | 0.88
Avg. | 0.82 | 0.96 | 0.92 | 0.90

Standard GAN
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 0.08 | 0.01 | 0.00 | 0.03
TransferNet | 0.02 | 0.00 | 0.00 | 0.01
PHNet | 0.06 | 0.00 | 0.00 | 0.02
SRNet | 0.04 | 0.20 | 0.18 | 0.14
DenseNet_BC | 0.08 | 0.24 | 0.26 | 0.19
VGG-19 | 0.05 | 0.75 | 0.57 | 0.46
Avg. | 0.06 | 0.20 | 0.17 | 0.14

In this experiment, we evaluated the baseline performance of the proposed anti-forensic GAN attack in the perfect knowledge scenario. In this scenario, we assume the attacker has the same amount of information as the investigator (i.e., whatever Bob has access to, Alice has equal access to). Specifically, the attacker has access to the investigator’s training data. The attacker can either obtain the investigator’s trained CNN or obtain an identical copy. The attacker also knows the architecture and class definitions of the investigator’s CNN. To launch the attack on the victim CNN using the proposed anti-forensic GAN attack, the attacker should first obtain an ensemble of surrogate CNNs and then train the proposed anti-forensic GAN attack by integrating the ensemble of surrogate CNNs into the training phase. Since the attacker has access to the trained victim CNN, the attacker has the advantage of including the victim CNN in the ensemble. Furthermore, since the attacker knows the architecture of the victim CNN, an easy way to build the ensemble is to adopt the CNNs of the same architecture and the other class definitions. As a result, the ensemble of surrogate CNNs contains the victim CNN and the other two CNNs of the same architecture but different class definitions.
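The role of the ensemble during training can be illustrated with the classification-loss term alone: the generator is penalized whenever any surrogate CNN fails to label its output as unaltered. A simplified NumPy sketch, where the per-surrogate logit vectors and the equal loss weighting are our assumptions:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_classification_loss(logits_per_cnn, unaltered_idx=0):
    """Mean cross-entropy, over all surrogate CNNs, of classifying
    the generator's output as the 'unaltered' class."""
    losses = []
    for logits in logits_per_cnn:  # one logit vector per surrogate CNN
        p = softmax(np.asarray(logits, dtype=np.float64))
        losses.append(-np.log(p[..., unaltered_idx]))
    return float(np.mean(losses))
```

Because the term averages over all surrogates, the generator cannot minimize it by fooling a single classifier; it must remove the forensic traces that every surrogate relies on.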
Since the attacker also has access to the investigator’s training data, the attacker can use “I-set” to build training data for the proposed attack and thereby avoid the statistical discrepancy caused by mismatched training data. We note that pixel-to-pixel correspondence in creating the training data is not required in the perfect knowledge scenario for our proposed attack. However, we found that it can drastically improve the performance of the other attack strategies used for comparison in this subsection. To make a fair comparison, the creation of training data follows the rules of pixel-to-pixel correspondence in this evaluation. After training, the trained generator can be used to attack the investigator’s CNN of the same architecture and any class definition. To demonstrate this, we treated each CNN trained on “I-set” as a victim CNN to fool. For each of the six CNN architectures, an ensemble of surrogate CNNs was made by grouping the manipulation detector, the manipulation identifier, and the manipulation parameterizer of that architecture. Then we trained the proposed anti-forensic GAN attack using the ensemble. Hence, we only trained the proposed attack once for CNNs of the same architecture. The trained generator was then used to attack manipulated image patches and produce anti-forensically attacked image patches. All anti-forensically attacked image patches were saved to disk as PNG files and then read back for classification. This ensures that the pixel values of the generated anti-forensically attacked images reside in the legitimate range from 0 to 255. Next, we evaluated the performance of the proposed attack by classifying the anti-forensically attacked images using each victim CNN. We evaluated our proposed attack on $45,000$ manipulated image patches from “eval-set”. The attack success rates for fooling each victim CNN are demonstrated in Table V.
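The PNG round trip above matters because a generator outputs real-valued pixels; saving to a lossless 8-bit format forces them onto the legitimate integer grid before evaluation. A minimal sketch of the equivalent quantization (the function name is ours):

```python
import numpy as np

def quantize_like_png(attacked):
    """Mimic the PNG save/load round trip: clip to [0, 255] and round
    to 8-bit integers, so evaluation sees only legitimate pixel values."""
    return np.clip(np.rint(attacked), 0, 255).astype(np.uint8)
```

Evaluating on the quantized copy, rather than the raw float output, prevents the attack from appearing stronger than it would be on a real saved image.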
On average, our proposed anti-forensic GAN attack achieved an attack success rate of $0.96$ for manipulation detection, $0.98$ for manipulation classification, and $0.99$ for manipulation parameterization. The mean of attack success rates regardless of the class definition was $0.98$. This means that the proposed anti-forensic GAN attack can successfully fool forensic CNNs in the perfect knowledge scenario.

The Effect of the Discriminator: We explored other generative attack strategies and demonstrated the performance of the trained generator when training without a discriminator. When the discriminator is removed from the proposed attack, the generator only learns the forensic information provided by the ensemble of surrogate CNNs and does not compete with a discriminator. From Table V, we see that while on average the attack success rates of this strategy were comparable with our proposed anti-forensic GAN attack in terms of manipulation classification and manipulation parameterization, the attack success rate for manipulation detection was $13\%$ lower than our proposed attack. The difference was especially significant for the manipulation detection classifier of the SRNet architecture: removing the discriminator, the attack success rate dropped by $43\%$. These results show that the discriminator is needed to improve the attack.

The Effect of the Ensemble of Surrogate CNNs: We compared the performance of our proposed attack with the MISLGAN attack [53]. While MISLGAN was initially proposed as a white-box attack for camera model falsification, it can be easily adapted to anti-forensically remove traces left by manipulations. To make a MISLGAN attack, we integrated the victim CNN, instead of the ensemble, into training the generator. The results are shown in Table V. On average, our proposed anti-forensic GAN attack achieved a $14\%$ higher attack success rate for manipulation detection, $2\%$ higher for manipulation classification, and $7\%$ higher for manipulation parameterization.
Overall, our proposed anti-forensic GAN attack achieved an $8\%$ higher attack success rate than MISLGAN. These results show that introducing the ensemble of surrogate CNNs is critical for improving the performance of the attack, even in the perfect knowledge scenario.

Comparing with Standard GANs: We also compared with the standard GAN framework. Since the attacker knows the architecture of the investigator’s CNN, the attacker can modify the last layer of each CNN architecture to be a single node and construct a discriminator. Combined with the generator, we trained a standard GAN attack and computed the attack success rate for each victim CNN. The results show that standard GANs achieved the lowest attack success rates among all attack strategies demonstrated in Table V. This indicates that standard GANs are not suitable for falsifying forensic traces, and that integrating the ensemble of surrogates solves this problem.

Visual Quality and Inspection: We evaluated the visual quality of the generated anti-forensically attacked images by computing the mean PSNR and the mean SSIM between manipulated image patches and generated anti-forensically attacked image patches. The mean PSNR was $54.64$ and the mean SSIM was $0.9986$. These results show that in the perfect knowledge scenario, our proposed attack does not introduce perceptible distortion to the anti-forensically attacked images.

### VII-E Training Data Mismatch

In this experiment, we evaluated the performance of the proposed anti-forensic GAN attack in a limited knowledge scenario in which the training data of the attacker differ from the investigator’s. In this scenario, we assume the attacker has no access to the investigator’s trained forensic CNN or training data. We also assume the attacker knows the architecture of the investigator’s CNN. As a result, the attacker cannot use the victim CNN or an identically trained copy of the victim CNN to train the attack.
The anti-forensically attacked images produced by the attacker must therefore transfer across training data sets to fool the investigator’s CNN. To launch the attack on the victim CNN using our proposed anti-forensic GAN attack, the attacker should first form the ensemble of surrogate CNNs of different architectures and class definitions. Since the attacker knows the architecture of the victim CNN, the attacker can group the surrogate CNNs of this architecture plus other architectures to form an ensemble. Next, the attacker can train the proposed anti-forensic GAN using the ensemble of surrogate CNNs. Since the attacker has no access to the investigator’s training data, the attacker must create training data for the attack from “A-set”. The creation of training data follows the rules of pixel-to-pixel correspondence. To demonstrate this, we treated each CNN trained on “I-set” as a victim CNN to fool. We trained the proposed attack using an ensemble of surrogate CNNs trained on “A-set”. Ideally, we could use all surrogate CNNs to form the ensemble to capture the most comprehensive forensic information of unaltered images, and thus we would only need to train the proposed attack once for all victim CNNs. However, in practice, due to limited computer memory, we could not load all surrogate CNNs into memory to train one single generator. Since VGG-19 CNNs take the most memory, we trained one attack against the VGG-19 CNNs and one attack against all other CNNs. The ensemble of surrogate CNNs built for attacking VGG-19 consists of three surrogate CNNs of VGG-19 and three surrogate CNNs of MISLnet. The ensemble of surrogate CNNs built for attacking the other victim CNNs consists of all other surrogate CNNs excluding the VGG-19 CNNs. After training the attack, we used the trained generator to attack each manipulated image patch and produce anti-forensically attacked image patches.
We evaluated the performance of our proposed attack by classifying the generated anti-forensically attacked images using each victim CNN. We evaluated the performance of our proposed anti-forensic attack on $45,000$ manipulated image patches from “eval-set”. The attack success rates for fooling each victim CNN are shown in Table VI. Except for manipulation parameterization of the DenseNet_BC architecture, our attack achieved high attack success rates on the victim CNNs. On average, our proposed anti-forensic GAN attack achieved an attack success rate of $0.98$ for manipulation detection, $0.85$ for manipulation classification, and $0.87$ for manipulation parameterization. The mean of attack success rates regardless of the class definition was $0.90$. The results show that the proposed anti-forensic GAN attack can strongly fool forensic CNNs trained on different training data.

TABLE VI: Attack success rates (ASRs) achieved by different attack strategies in the training data mismatch scenario.

Proposed Anti-Forensic GAN
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 1.00 | 0.87 | 1.00 | 0.96
TransferNet | 1.00 | 0.99 | 0.98 | 0.99
PHNet | 0.98 | 1.00 | 0.96 | 0.98
SRNet | 0.93 | 0.97 | 0.78 | 0.89
DenseNet_BC | 0.99 | 0.31 | 0.64 | 0.65
VGG-19 | 0.98 | 0.95 | 0.86 | 0.93
Avg. | 0.98 | 0.85 | 0.87 | 0.90

MISLGAN [53]
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 0.78 | 0.25 | 0.91 | 0.64
TransferNet | 1.00 | 1.00 | 1.00 | 1.00
PHNet | 0.62 | 0.00 | 0.19 | 0.27
SRNet | 0.46 | 0.06 | 0.34 | 0.29
DenseNet_BC | 0.21 | 0.08 | 0.01 | 0.10
VGG-19 | 0.10 | 0.53 | 0.52 | 0.38
Avg. | 0.53 | 0.32 | 0.50 | 0.45

Removing Architecture Diversity
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 0.79 | 0.17 | 0.99 | 0.65
TransferNet | 0.99 | 1.00 | 0.88 | 0.95
PHNet | 0.97 | 0.68 | 0.83 | 0.82
SRNet | 0.53 | 0.21 | 0.01 | 0.25
DenseNet_BC | 0.70 | 0.12 | 0.71 | 0.51
VGG-19 | 0.98 | 0.97 | 0.65 | 0.87
Avg. | 0.83 | 0.53 | 0.68 | 0.68

Training without Pixel-to-Pixel Correspondence
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 0.95 | 0.71 | 0.88 | 0.85
TransferNet | 1.00 | 0.98 | 0.98 | 0.96
PHNet | 0.97 | 1.00 | 0.88 | 0.95
SRNet | 0.93 | 0.92 | 0.73 | 0.86
DenseNet_BC | 0.70 | 0.01 | 0.13 | 0.28
VGG-19 | 0.90 | 0.79 | 0.51 | 0.73
Avg. | 0.91 | 0.74 | 0.69 | 0.78

Comparing with MISLGAN: We compared the performance of the proposed attack with the state-of-the-art GAN-based attack, MISLGAN [53]. To attack each victim CNN, we trained the MISLGAN using the surrogate CNN built with the same architecture and class definition, and tested the performance of the anti-forensically attacked images using the victim CNN. The results are shown in Table VI. Compared with the MISLGAN attack, the average attack success rates achieved by our proposed attack nearly doubled for all class definitions. Except for the TransferNet CNNs, the attack success rates for the other CNNs all dropped significantly compared to the performance of the MISLGAN attack in the perfect knowledge scenario shown in Table V. This may indicate that the TransferNet CNNs are not robust to adversarial attacks. The results are consistent with the previous finding that attacks on forensic CNNs can hardly transfer across training data [57], and show that the ensemble of surrogate CNNs is important for making the attack transfer.

The Effect of CNN Architecture Diversity: We conducted experiments to show that building the ensemble of surrogate CNNs with diverse architectures is critical for the attack to achieve transferability in limited knowledge scenarios.
To demonstrate this, for each victim CNN, we used three surrogate CNNs of the same architecture as the victim CNN to build the ensemble, and then trained the attack. As shown in Table VI, the attack success rates achieved when using only one CNN architecture decreased significantly. On average, the attack success rate dropped by $15\%$ for manipulation detection, $32\%$ for manipulation classification, and $19\%$ for manipulation parameterization. In particular, the attack success rates for SRNet show the most significant differences: when training with diverse CNN architectures in the ensemble, the attack success rate increased by $40\%$ for manipulation detection, $75\%$ for manipulation classification, and $77\%$ for manipulation parameterization. These results show that training the attack with diverse CNN architectures can dramatically improve its transferability.

The Effect of Pixel-to-Pixel Correspondence: We also compared with the attack success rates obtained when the training data were not created following the rule of pixel-to-pixel correspondence. Training data were created from “A-set”, but we did not require correspondence between pixels in each training batch. The visual quality loss was modified to be computed between the anti-forensically attacked image patch output by the generator and the manipulated image patch input to the generator. The results are shown in Table VI. Without pixel-to-pixel correspondence, on average, the attack success rates dropped by $7\%$ for manipulation detection, $11\%$ for manipulation classification, and $19\%$ for manipulation parameterization. The results show that using pixel-to-pixel correspondence helps improve the transferability of the attack.

Visual Quality: We evaluated the image quality of the anti-forensically attacked images produced by our proposed attack by calculating the mean PSNR and the mean SSIM between manipulated image patches and generated anti-forensically attacked image patches.
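The two quality metrics used throughout this section can be computed as below. PSNR follows the standard definition; for SSIM we show only a simplified single-window variant (the usual implementation averages over local windows, e.g., with an 11×11 Gaussian), so this sketch is an illustration rather than the paper's exact procedure:

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM over the whole patch (simplified sketch)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Both metrics are then averaged over all manipulated/attacked patch pairs to give the reported means.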
The mean PSNR was $47.12$ and the mean SSIM was $0.9944$. These results show that in the training data mismatch scenario, our proposed attack does not introduce perceptible distortion to the anti-forensically attacked images.

### VII-F Training Data and Manipulation Parameter Mismatch

TABLE VII: Classification accuracies of surrogate CNNs for building the ensemble in the training data and manipulation parameter mismatch scenario.

CNN Architect. | Detection | Classification | Parameterization
---|---|---|---
MISLnet | 99.02% | 99.28% | 90.88%
Zhan et al. | 99.27% | 97.99% | 68.13%
PHNet | 99.44% | 98.86% | 90.44%
SRNet | 99.65% | 98.72% | 88.31%
DenseNet_BC | 94.80% | 96.58% | 73.64%
VGG-19 | 99.97% | 99.26% | 88.09%
Avg. | 98.69% | 98.45% | 83.25%

TABLE VIII: Attack success rates (ASRs) achieved by the proposed anti-forensic GAN attack in the training data and manipulation parameter mismatch scenario.

All Manipulation Parameters
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 0.99 | 0.53 | 0.35 | 0.62
TransferNet | 1.00 | 0.97 | 0.94 | 0.97
PHNet | 0.96 | 1.00 | 0.99 | 0.98
SRNet | 0.59 | 0.90 | 0.52 | 0.68
DenseNet_BC | 0.99 | 0.49 | 0.21 | 0.56
VGG-19 | 0.83 | 0.59 | 0.74 | 0.72
Avg. | 0.89 | 0.75 | 0.63 | 0.76

Unseen Manipulation Parameters Only
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 1.00 | 0.51 | 0.28 | 0.60
TransferNet | 1.00 | 0.98 | 0.99 | 0.99
PHNet | 0.92 | 1.00 | 0.95 | 0.96
SRNet | 0.60 | 0.95 | 0.49 | 0.68
DenseNet_BC | 0.99 | 0.35 | 0.20 | 0.51
VGG-19 | 0.75 | 0.51 | 0.79 | 0.68
Avg. | 0.87 | 0.72 | 0.62 | 0.74

In this experiment, we evaluated the performance of the proposed anti-forensic GAN attack in a limited knowledge scenario in which the attacker has only partial knowledge of the manipulations of interest to the investigator. In this scenario, we assume the attacker has no access to the investigator’s trained CNNs or training data.
We also assume the attacker knows the architecture of the investigator’s CNN and the manipulations the investigator tries to identify. However, the attacker has no perfect knowledge of the parameters associated with each manipulation. Specifically, the attacker only gathers a subset of the manipulation parameters to create the attacker’s training data. Moreover, since the attacker does not know all manipulation parameters, this causes a more severe statistical discrepancy in the attacker’s CNNs compared with the training data mismatch scenario demonstrated earlier. In particular, the investigator’s manipulation parameterization classifiers can output more classes than the surrogate CNNs. As a result, the attacker cannot directly use the victim CNN or obtain an identical copy to train the attack. The anti-forensically attacked images produced by the attacker must transfer in order to fool the investigator’s CNN. To mimic this scenario, we used “A-set” and created two new training data sets to train new surrogate CNNs and the proposed attack. The new training data sets were created using four parameters for each manipulation. The excluded manipulation parameters are $\sigma=1.5$ for additive Gaussian white noise, $\sigma=2.5$ for Gaussian blurring, and $7\times 7$ filter size for median filtering. The training data created for the proposed attack follows the rules of pixel-to-pixel correspondence. First, we evaluated the performance of the newly trained surrogate CNNs on “eval-set”. The testing data were created using four parameters for each manipulation. The classification accuracies of the newly trained surrogate CNNs are shown in Table VII. On average, we achieved a classification accuracy of $98.69\%$ for manipulation detection, $98.45\%$ for manipulation classification, and $83.25\%$ for manipulation parameterization. These surrogate CNNs were used for building the ensemble to train the proposed attack.
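Holding out parameters amounts to filtering the grid of Table II before creating the attacker's training data. A sketch, where the dictionary structure is our illustration and the held-out values are the ones named in the text:

```python
# Full grid mirroring Table II (structure is an illustration).
FULL_GRID = {
    "awgn":   [0.5, 1, 1.5, 2, 2.5],
    "gblur":  [1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5],
    "median": [3, 5, 7, 9, 11],
}

# Parameters the attacker never sees, as stated in the text.
HELD_OUT = {"awgn": {1.5}, "gblur": {2.5}, "median": {7}}

def attacker_grid(full=FULL_GRID, held_out=HELD_OUT):
    """Drop the held-out parameters from each manipulation family."""
    return {m: [p for p in ps if p not in held_out.get(m, set())]
            for m, ps in full.items()}
```

The surrogate CNNs and the attack are then trained only on images created from the filtered grid, while evaluation can still use patches made with the held-out parameters.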
To attack victim CNNs trained on “I-set”, we adopted the same strategies as in the training data mismatch scenario and built different ensembles of surrogate CNNs for attacking the VGG-19 CNNs and the other CNNs, due to the limitation of computer memory. Then we evaluated the performance of the proposed attack on “eval-set”. The first testing set contains $45,000$ image patches created using all manipulations and parameters. The second testing set contains $9,000$ image patches created using only the manipulation parameters unseen by the attacker. The attack success rates for fooling each victim CNN are shown in Table VIII. For the testing image patches containing all manipulations and parameters, on average, we achieved an attack success rate of $0.89$ for manipulation detection, $0.75$ for manipulation classification, and $0.63$ for manipulation parameterization. This means that $89\%$ of the anti-forensically attacked images produced by our attack can fool manipulation detection classifiers, $75\%$ can fool manipulation classification classifiers, and $63\%$ can fool manipulation parameterization classifiers. The mean of attack success rates regardless of the class definition was $0.76$; that is, $76\%$ of the anti-forensically attacked images can fool any of these forensic CNNs. For the testing image patches containing only the attacker’s unseen manipulation parameters, on average, we achieved an attack success rate of $0.87$ for manipulation detection, $0.72$ for manipulation classification, and $0.62$ for manipulation parameterization. The mean of attack success rates regardless of the class definition was $0.74$. The results indicate that once the attack is trained, its performance on manipulated image patches created with seen and unseen parameters is comparable. Therefore, if the attacker knows the attack success rates for attacking manipulated images created with known parameters, the attacker can also use the attack on unknown parameters without a performance decrease.
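The attack success rates reported throughout these tables can be read as simple fractions. A sketch under our assumed success criterion, namely that an attacked (and hence manipulated) patch counts as fooling the victim CNN when it is classified as unaltered:

```python
import numpy as np

def attack_success_rate(predictions, unaltered_label=0):
    """Fraction of attacked patches that the victim CNN labels as
    unaltered. All inputs are assumed to be manipulated patches, and
    the label convention (0 = unaltered) is our assumption."""
    predictions = np.asarray(predictions)
    return float(np.mean(predictions == unaltered_label))
```

For example, an ASR of $0.76$ means 76 out of every 100 attacked patches were accepted as unaltered by the classifier.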
The Effect of Missing Training Parameters: We compare with the attack success rates from the training data mismatch scenario in Table VI. We note that victim CNNs built with the TransferNet and PHNet architectures were not influenced much when the attack was trained with fewer parameters. This may indicate that these architectures are less sensitive to parameters, and potentially less robust to the anti-forensic attack. For other victim CNNs, such as the manipulation parameterization classifiers of MISLnet and DenseNet_BC, the attack success rates dropped significantly. This may indicate that these CNN architectures are very sensitive to the training parameters.

Visual Quality and Inspection: We evaluated the image quality of the anti-forensically attacked images produced by our proposed attack by calculating the mean PSNR and the mean SSIM between manipulated image patches and generated anti-forensically attacked image patches. The mean PSNR was $45.83$ and the mean SSIM was $0.9891$. The results show that in the training data and parameter mismatch scenario, the anti-forensically attacked images produced by our proposed attack have high visual quality. For visual inspection, we show an anti-forensically attacked image produced by our proposed attack in Fig. 5. The manipulated image was created using median filtering with a $7\times 7$ filter size. The generator used to create the anti-forensically attacked image was not trained using this parameter. The blue box shows the smooth region of the images, and the red box shows the textured region of the images. By inspecting the boxed regions in the zoomed view, we cannot notice any visual difference, even at the pixel level. This indicates that it is impossible for the investigator to know whether an image was anti-forensically attacked via visual inspection alone.

Figure 5: Comparison between a median filtered image and its anti-forensically attacked copy produced by our attack.
Left column: an image median filtered using a $7\times 7$ filter. Right column: the anti-forensically attacked image produced by the proposed anti-forensic GAN. The blue box shows the smooth region of the images, and the red box shows the textured region of the images.

### VII-G Training Data and CNN Architecture Mismatch

In this experiment, we evaluated the performance of our proposed anti-forensic GAN attack when the attacker has zero information about the investigator’s CNN and no access to the investigator’s training data. In this scenario, we assume that the attacker knows that the investigator will use a CNN to authenticate images, as well as which manipulations and parameters the CNN will be trained to detect. However, the investigator keeps their CNN architecture secret and the attacker cannot gather information about it by any means. As a result, the attacker cannot utilize the victim CNN, obtain an identical copy, or probe the output of the victim CNN to train the attack. Moreover, the attacker cannot use CNNs built with the same architecture as the investigator’s CNN to train the attack. The attacker’s goal is for the attacked images to have sufficient transferability to fool the investigator’s CNN. To experimentally replicate this scenario, we treated each CNN trained on “I-set” as a victim CNN to fool. We used the CNNs trained on “A-set” as the surrogate CNNs available to the attacker. In each case, the ensemble of surrogate CNNs used to train the attack did not contain a surrogate CNN with the same architecture as the victim CNN. Thus, we ensured that the attacker did not use the investigator’s CNN architecture. Then we used the victim CNN to classify the anti-forensically attacked images produced by the attack. As before, the size of VGG-19 required us to slightly modify our training ensemble due to GPU memory limitations. To attack the VGG-19 CNNs, the training ensemble was built using surrogate CNNs of all other architectures.
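The ensemble bookkeeping for this scenario can be summarized as a simple selection rule. The helper below is hypothetical and merely mirrors the constraints described above (victim architecture excluded, VGG-19 dropped from other ensembles for memory reasons):

```python
# Architectures from the paper's experiments.
ARCHITECTURES = ["MISLnet", "TransferNet", "PHNet",
                 "SRNet", "DenseNet_BC", "VGG-19"]

def ensemble_for(victim_arch, memory_limited=True):
    """Surrogate architectures used to train the attack on a given
    victim under the architecture-mismatch scenario (hypothetical helper)."""
    if victim_arch == "VGG-19":
        # VGG-19 is too large to train alongside, so its attack is
        # trained on all remaining architectures.
        return [a for a in ARCHITECTURES if a != "VGG-19"]
    excluded = {victim_arch}
    if memory_limited:
        excluded.add("VGG-19")  # dropped for GPU-memory reasons
    return [a for a in ARCHITECTURES if a not in excluded]
```

Each returned architecture contributes its three class-definition surrogates (detector, identifier, parameterizer) to the training ensemble.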
To attack the other victim CNNs, the ensemble was built using surrogate CNNs of the other architectures, excluding VGG-19 and the victim CNN’s architecture. The generated anti-forensically attacked images were then classified by each victim CNN. We evaluated the performance of our proposed anti-forensic attack on $45,000$ manipulated image patches from “eval-set”. The attack success rates for fooling each victim CNN are shown in Table IX. On average, our proposed anti-forensic GAN attack achieved an attack success rate of $0.89$ for manipulation detection, $0.71$ for manipulation classification, and $0.68$ for manipulation parameterization. The mean of attack success rates regardless of the class definition was $0.76$. These results demonstrate that even when the attacker has zero information about the victim CNN, including its architecture, our proposed anti-forensic GAN attack can still fool the investigator’s CNN. This is important because it shows that our attack can be launched in a realistic black-box scenario. Moreover, our finding demonstrates that transferable attacks against forensic CNNs can be achieved, unlike what researchers have previously found [57, 58]. While this attack is not perfect, the attack success rates are high enough to pose a real threat to CNN-based forensic algorithms.

TABLE IX: Attack success rates (ASRs) achieved by the proposed anti-forensic GAN attack in the training data and CNN architecture mismatch scenario.

Proposed Anti-Forensic GAN Attack
CNN Architect. | Detection | Classification | Parameterization | Avg.
---|---|---|---|---
MISLnet | 0.99 | 0.72 | 0.95 | 0.88
TransferNet | 0.99 | 0.80 | 0.91 | 0.90
PHNet | 0.64 | 0.98 | 0.78 | 0.80
SRNet | 0.85 | 0.03 | 0.45 | 0.44
DenseNet_BC | 0.94 | 0.77 | 0.09 | 0.60
VGG-19 | 0.94 | 0.98 | 0.92 | 0.95
Avg. | 0.89 | 0.71 | 0.68 | 0.76

Visual Quality: We evaluated the image quality of the anti-forensically attacked images produced by our proposed attack by calculating the mean PSNR between manipulated image patches and generated anti-forensically attacked image patches. The mean PSNR was $46.76$. The results show that in the training data and CNN architecture mismatch scenario, our proposed attack does not introduce perceptible distortion to the anti-forensically attacked images.

## VIII Conclusion

In this paper, we proposed a new anti-forensic GAN attack to fool CNN-based forensic algorithms. Different from standard GANs, our proposed anti-forensic GAN attack uses an ensemble of surrogate CNNs to force the generator to learn comprehensive aspects of the forensic information of real unaltered images, and produces anti-forensically attacked images that can fool forensic CNNs. Furthermore, our proposed attack demonstrated strong transferability in fooling CNNs not used during training under various limited knowledge scenarios. We conducted a series of experiments to evaluate our proposed anti-forensic GAN attack. We showed that the proposed attack can fool many state-of-the-art CNN-based forensic algorithms, both in the perfect knowledge scenario and in scenarios where the attacker lacks information about the investigator’s CNN and training data. Additionally, we showed that under all scenarios our proposed attack leaves no visual distortions in the produced anti-forensically attacked images. Hence, the investigator cannot uncover the forgery by mere visual inspection.

## References

* [1] M. C. Stamm, M. Wu, and K. J. R. Liu, “Information forensics: An overview of the first decade,” _IEEE Access_, vol. 1, pp. 167–200, 2013.
* [2] A. Piva, “An overview on image forensics,” _ISRN Signal Processing_, vol. 2013, 2013.
* [3] T. Gloe, M. Kirchner, A. Winkler, and R.
Böhme, “Can we trust digital image forensics?” in _Proceedings of the 15th ACM international conference on Multimedia_. ACM, 2007, pp. 78–86. * [4] S. Milani, M. Fontani, P. Bestagini, M. Barni, A. Piva, M. Tagliasacchi, and S. Tubaro, “An overview on video forensics,” _APSIPA Transactions on Signal and Information Processing_ , vol. 1, 2012. * [5] J. Fridrich, D. Soukal, and J. Lukáš, “Detection of copy-move forgery in digital images,” in _Proceedings of Digital Forensic Research Workshop_. Citeseer, 2003. * [6] M. Chen, J. Fridrich, M. Goljan, and J. Lukáš, “Determining image origin and integrity using sensor noise,” _IEEE Transactions on Information Forensics and Security_ , vol. 3, no. 1, pp. 74–90, 2008. * [7] S. Bayram, I. Avcibas, B. Sankur, and N. D. Memon, “Image manipulation detection,” _Journal of Electronic Imaging_ , vol. 15, no. 4, p. 041102, 2006. * [8] S. Bayram, H. T. Sencar, and N. Memon, “An efficient and robust method for detecting copy-move forgery,” in _2009 IEEE International Conference on Acoustics, Speech and Signal Processing_. IEEE, 2009, pp. 1053–1056. * [9] H. Farid, “Exposing digital forgeries from jpeg ghosts,” _IEEE transactions on information forensics and security_ , vol. 4, no. 1, pp. 154–160, 2009. * [10] I. Amerini, L. Ballan, R. Caldelli, A. Del Bimbo, and G. Serra, “A sift-based forensic method for copy–move attack detection and transformation recovery,” _IEEE transactions on information forensics and security_ , vol. 6, no. 3, pp. 1099–1110, 2011. * [11] E. Kee, M. K. Johnson, and H. Farid, “Digital image authentication from jpeg headers,” _IEEE Transactions on Information Forensics and Security_ , vol. 6, no. 3, pp. 1066–1075, 2011. * [12] X. Pan and S. Lyu, “Region duplication detection using image feature matching,” _IEEE Transactions on Information Forensics and Security_ , vol. 5, no. 4, pp. 857–867, 2010. * [13] T. Bianchi and A.
Piva, “Image forgery localization via block-grained analysis of jpeg artifacts,” _IEEE Transactions on Information Forensics and Security_ , vol. 7, no. 3, pp. 1003–1017, 2012. * [14] O. Mayer and M. C. Stamm, “Accurate and efficient image forgery detection using lateral chromatic aberration,” _IEEE transactions on information forensics and security_ , vol. 13, no. 7, pp. 1762–1777, 2018. * [15] M. Kirchner, “Fast and reliable resampling detection by spectral analysis of fixed linear predictor residue,” in _Proceedings of the 10th ACM workshop on Multimedia and security_ , 2008, pp. 11–20. * [16] A. C. Popescu and H. Farid, “Exposing digital forgeries by detecting traces of resampling,” _IEEE Transactions on signal processing_ , vol. 53, no. 2, pp. 758–767, 2005. * [17] M. C. Stamm and K. R. Liu, “Forensic detection of image manipulation using statistical intrinsic fingerprints,” _IEEE Transactions on Information Forensics and Security_ , vol. 5, no. 3, pp. 492–506, 2010. * [18] F. Huang, J. Huang, and Y. Q. Shi, “Detecting double jpeg compression with the same quantization matrix,” _IEEE Transactions on Information Forensics and Security_ , vol. 5, no. 4, pp. 848–856, 2010. * [19] T. Bianchi and A. Piva, “Detection of nonaligned double jpeg compression based on integer periodicity maps,” _IEEE transactions on Information Forensics and Security_ , vol. 7, no. 2, pp. 842–848, 2011. * [20] M. Barni, Z. Chen, and B. Tondi, “Adversary-aware, data-driven detection of double jpeg compression: How to make counter-forensics harder,” in _2016 IEEE international workshop on information forensics and security (WIFS)_. IEEE, 2016, pp. 1–6. * [21] H.-D. Yuan, “Blind forensics of median filtering in digital images,” _IEEE Transactions on Information Forensics and Security_ , vol. 6, no. 4, pp. 1335–1345, 2011. * [22] O. Mayer and M. C. Stamm, “Forensic similarity for digital images,” _IEEE Transactions on Information Forensics and Security_ , vol. 15, pp. 1331–1346, 2020. 
* [23] B. Bayar and M. C. Stamm, “Constrained convolutional neural networks: A new approach towards general purpose image manipulation detection,” _IEEE Transactions on Information Forensics and Security_ , vol. 13, no. 11, pp. 2691–2706, Nov 2018. * [24] ——, “Towards order of processing operations detection in jpeg-compressed images with convolutional neural networks,” _Electronic Imaging_ , vol. 2018, no. 7, pp. 211–1, 2018. * [25] M. Barni, A. Costanzo, E. Nowroozi, and B. Tondi, “Cnn-based detection of generic contrast adjustment with jpeg post-processing,” in _2018 25th IEEE International Conference on Image Processing (ICIP)_. IEEE, 2018, pp. 3803–3807. * [26] D. Cozzolino, G. Poggi, and L. Verdoliva, “Splicebuster: A new blind image splicing detector,” in _2015 IEEE International Workshop on Information Forensics and Security_. IEEE, 2015, pp. 1–6. * [27] L. Bondi, S. Lameri, D. Guera, P. Bestagini, E. J. Delp, and S. Tubaro, “Tampering detection and localization through clustering of camera-based cnn features,” in _2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_. IEEE, 2017, pp. 1855–1864. * [28] X. Kang, M. C. Stamm, A. Peng, and K. R. Liu, “Robust median filtering forensics using an autoregressive model,” _IEEE Transactions on Information Forensics and Security_ , vol. 8, no. 9, pp. 1456–1468, 2013. * [29] D. Cozzolino and L. Verdoliva, “Noiseprint: a cnn-based camera model fingerprint,” _IEEE Transactions on Information Forensics and Security_ , vol. 15, pp. 144–159, 2019. * [30] L. Bondi, L. Baroffio, D. Güera, P. Bestagini, E. J. Delp, and S. Tubaro, “First steps toward camera model identification with convolutional neural networks,” _IEEE Signal Processing Letters_ , vol. 24, no. 3, pp. 259–263, 2016. * [31] A. Tuama, F. Comby, and M. Chaumont, “Camera model identification with the use of deep convolutional neural networks,” in _International Workshop on Information Forensics and Security (WIFS)_. IEEE, Dec 2016, pp. 
1–6. * [32] M. C. Stamm, W. S. Lin, and K. R. Liu, “Forensics vs. anti-forensics: A decision and game theoretic framework,” in _International Conference on Acoustics, Speech and Signal Processing_. IEEE, 2012, pp. 1749–1752. * [33] M. Barni, M. C. Stamm, and B. Tondi, “Adversarial multimedia forensics: Overview and challenges ahead,” in _2018 26th European Signal Processing Conference (EUSIPCO)_. IEEE, 2018, pp. 962–966. * [34] S. Sharma, A. V. Subramanyam, M. Jain, A. Mehrish, and S. Emmanuel, “Anti-forensic technique for median filtering using l1-l2 tv model,” in _2016 IEEE International Workshop on Information Forensics and Security (WIFS)_ , Dec 2016, pp. 1–6. * [35] M. Fontani and M. Barni, “Hiding traces of median filtering in digital images,” in _Signal Processing Conference (EUSIPCO), Proceedings of the 20th European_. IEEE, 2012, pp. 1239–1243. * [36] M. Kirchner and R. Bohme, “Hiding traces of resampling in digital images,” _IEEE Transactions on Information Forensics and Security_ , vol. 3, no. 4, pp. 582–592, 2008. * [37] G. Cao, Y. Zhao, R. Ni, and H. Tian, “Anti-forensics of contrast enhancement in digital images,” in _Proceedings of the 12th ACM Workshop on Multimedia and Security_ , 2010, pp. 25–34. * [38] M. C. Stamm and K. J. R. Liu, “Anti-forensics of digital image compression,” _IEEE Transactions on Information Forensics and Security_ , vol. 6, no. 3, pp. 1050–1065, Sep. 2011. * [39] A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 427–436. * [40] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “Deepfool: A simple and accurate method to fool deep neural networks,” in _The IEEE Conference on Computer Vision and Pattern Recognition_ , June 2016. * [41] I. J. Goodfellow, J. Shlens, and C. 
Szegedy, “Explaining and harnessing adversarial examples,” _arXiv preprint arXiv:1412.6572_ , 2014. * [42] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” 2014. * [43] N. Carlini and D. Wagner, “Adversarial examples are not easily detected: Bypassing ten detection methods,” in _Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security_ , 2017, pp. 3–14. * [44] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” in _2016 IEEE European Symposium on Security and Privacy (EuroS&P)_. IEEE, 2016, pp. 372–387. * [45] P.-Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.-J. Hsieh, “Ead: elastic-net attacks to deep neural networks via adversarial examples,” in _Thirty-second AAAI conference on artificial intelligence_ , 2018. * [46] J. Su, D. V. Vargas, and K. Sakurai, “One pixel attack for fooling deep neural networks,” _IEEE Transactions on Evolutionary Computation_ , vol. 23, no. 5, pp. 828–841, 2019. * [47] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, “Robust physical-world attacks on deep learning visual classification,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 1625–1634. * [48] T. B. Brown, D. Mane, A. Roy, M. Abadi, and J. Gilmer, “Adversarial patch,” 2017. * [49] J. Hayes, “On visible adversarial perturbations & digital watermarking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , 2018, pp. 1597–1604. * [50] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarial examples,” _arXiv preprint arXiv:1707.07397_ , 2017. * [51] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” _arXiv preprint arXiv:1607.02533_ , 2016. * [52] D. Kim, H. Jang, S. Mun, S. Choi, and H.
Lee, “Median filtered image restoration and anti-forensics using adversarial networks,” _IEEE Signal Processing Letters_ , vol. 25, no. 2, pp. 278–282, 2018. * [53] C. Chen, X. Zhao, and M. C. Stamm, “Mislgan: An anti-forensic camera model falsification framework using a generative adversarial network,” in _2018 25th IEEE International Conference on Image Processing (ICIP)_ , 2018, pp. 535–539. * [54] C. Chen, X. Zhao, and M. C. Stamm, “Generative adversarial attacks against deep-learning-based camera model identification,” _IEEE Transactions on Information Forensics and Security_ , 2019. * [55] J. Yu, Y. Zhan, J. Yang, and X. Kang, “A multi-purpose image counter-anti-forensic method using convolutional neural networks,” in _International Workshop on Digital Watermarking_. Springer, 2016, pp. 3–15. * [56] D. Guera, Y. Wang, L. Bondi, P. Bestagini, S. Tubaro, and E. J. Delp, “A counter-forensic method for cnn-based camera model identification,” in _Computer Vision and Pattern Recognition Workshops (CVPRW)_. IEEE, July 2017, pp. 1840–1847. * [57] M. Barni, K. Kallas, E. Nowroozi, and B. Tondi, “On the transferability of adversarial examples against cnn-based image forensics,” _2019 IEEE International Conference on Acoustics, Speech and Signal Processing_ , pp. 8286–8290, 2019. * [58] X. Zhao and M. C. Stamm, “The effect of class definitions on the transferability of adversarial attacks against forensic cnns,” _Electronic Imaging_ , vol. 2020, no. 4, pp. 119–1, 2020. * [59] Y. Liu, X. Chen, C. Liu, and D. Song, “Delving into transferable adversarial examples and black-box attacks,” _arXiv preprint arXiv:1611.02770_ , 2016. * [60] N. Papernot, P. McDaniel, and I. Goodfellow, “Transferability in machine learning: from phenomena to black-box attacks using adversarial samples,” _arXiv preprint arXiv:1605.07277_ , 2016. * [61] B. Bayar and M. C.
Stamm, “A deep learning approach to universal image manipulation detection using a new convolutional layer,” in _ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec)_, Vigo, Galicia, Spain, 2016, pp. 5–10. * [62] L. Bondi, S. Lameri, D. Guera, P. Bestagini, E. J. Delp, and S. Tubaro, “Tampering detection and localization through clustering of camera-based cnn features,” in _Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_. IEEE, July 2017, pp. 1855–1864. * [63] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in _Advances in neural information processing systems_ , 2014, pp. 2672–2680. * [64] A. Brock, J. Donahue, and K. Simonyan, “Large scale gan training for high fidelity natural image synthesis,” _arXiv preprint arXiv:1809.11096_ , 2018. * [65] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” 2019. * [66] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” 2020. * [67] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” 2018. * [68] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in _Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security_. Association for Computing Machinery, 2017, pp. 506–519. * [69] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in _Proceedings of the 27th international conference on machine learning (ICML-10)_ , 2010, pp. 807–814. * [70] B. Bayar and M. C. Stamm, “Design principles of convolutional neural networks for multimedia forensics,” _Electronic Imaging_ , vol. 2017, no. 7, pp. 77–86, 2017. * [71] S. Ioffe and C.
Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” 2015. * [72] T. Gloe and R. Böhme, “The ’dresden image database’ for benchmarking digital image forensics,” in _Proceedings of the 2010 ACM Symposium on Applied Computing_ , ser. SAC ’10. New York, NY, USA: ACM, 2010, pp. 1584–1590. [Online]. Available: http://doi.acm.org/10.1145/1774088.1774427 * [73] M. Boroumand, M. Chen, and J. Fridrich, “Deep residual network for steganalysis of digital images,” _IEEE Transactions on Information Forensics and Security_ , vol. 14, no. 5, pp. 1181–1193, 2018. * [74] M. Boroumand and J. Fridrich, “Deep learning for detecting processing history of images,” _Electronic Imaging_ , vol. 2018, no. 7, pp. 213–1, 2018. * [75] Y. Zhan, Y. Chen, Q. Zhang, and X. Kang, “Image forensics based on transfer learning and convolutional neural network,” in _Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security_ , 2017, pp. 165–170. * [76] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 4700–4708. * [77] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” _arXiv preprint arXiv:1409.1556_ , 2014. * [78] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in _Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics_ , ser. Proceedings of Machine Learning Research, Y. W. Teh and M. Titterington, Eds., vol. 9. Chia Laguna Resort, Sardinia, Italy: PMLR, 13–15 May 2010, pp. 249–256. [Online]. Available: http://proceedings.mlr.press/v9/glorot10a.html * [79] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” 2017.
# Finite rotating and translating vortex sheets

Bartosz Protas1, Stefan G. Llewellyn Smith2,3 and Takashi Sakajo4

1 Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, L8S 4K1, Canada
2 Department of Mechanical and Aerospace Engineering, Jacobs School of Engineering, UCSD, La Jolla CA 92093-0411, USA
3 Scripps Institution of Oceanography, UCSD, La Jolla CA 92093-0213, USA
4 Department of Mathematics, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto, 606-8502, Japan

Email address for correspondence: <EMAIL_ADDRESS>

(August 27, 2024)

###### Abstract

We consider the rotating and translating equilibria of open finite vortex sheets with endpoints in two-dimensional potential flows. New results are obtained concerning the stability of these equilibrium configurations which complement analogous results known for unbounded, periodic and circular vortex sheets. First, we show that the rotating and translating equilibria of finite vortex sheets are linearly unstable. However, while in the first case unstable perturbations grow exponentially fast in time, the growth of such perturbations in the second case is algebraic. In both cases the growth rates are increasing functions of the wavenumbers of the perturbations. Remarkably, these stability results are obtained entirely with analytical computations. Second, we obtain and analyze equations describing the time evolution of a straight vortex sheet in linear external fields. Third, it is demonstrated that the results concerning the linear stability analysis of the rotating sheet are consistent with the infinite-aspect-ratio limit of the stability results known for Kirchhoff’s ellipse (Love, 1893; Mitchell & Rossi, 2008) and that the solutions we obtained accounting for the presence of external fields are also consistent with the infinite-aspect-ratio limits of the analogous solutions known for vortex patches.
## 1 Introduction

Vortex sheets are often used as idealized inviscid models of complex vortex-dominated flows, especially those arising in the presence of separating shear layers. While some attempts have been made to model three-dimensional vortex sheets (Brady et al., 1998; Sakajo, 2001), most work has focused on two-dimensional (2D) flows that can be described more simply. Vortex sheets have been used in classical aerodynamics (Milne-Thomson, 1973) and to model fluid-structure interactions in separated flows such as flutter (Jones, 2003; Jones & Shelley, 2005; Alben, 2009, 2015). The classical problem of sheet roll-up is also receiving renewed attention (Elling & Gnann, 2019; Pullin & Sader, 2021). Vortex equilibria have always played a distinguished role in the study of vortex-dominated flows, as they represent long-lived flow structures. Much is known about vortex sheet equilibria in idealized settings with infinite, periodic or circular sheets (Saffman, 1992; Marchioro & Pulvirenti, 1993). On the other hand, our understanding of key properties of equilibria involving finite open sheets (with endpoints) is far less complete. The goal of this study is thus to fill this gap partially by establishing a number of new facts about two equilibria of finite open vortex sheets. We consider the inviscid evolution of a finite vortex sheet $L(t)$. In addition to the position ${\mathbf{x}}(t,\xi)=(x(t,\xi),y(t,\xi))\in L(t)$, where $\xi$ is a parameter, the circulation density of the sheet, $\gamma(t,s(\xi))$, where $s$ is the arclength coordinate, is also needed to describe the time evolution of the vortex sheet. This quantity represents the jump in the tangential velocity component across the sheet as a function of position.
In the most general case, assuming an arbitrary parameterization $\xi$ of the sheet, the evolution of the sheet is described by the system (Lopes Filho et al., 2007) $\displaystyle{\partial{\mathbf{x}}(t,\xi)\over\partial t}+a(t,\xi){\partial{\mathbf{x}}(t,\xi)\over\partial\xi}$ $\displaystyle={\mathbf{V}}({\mathbf{x}}(t,\xi)):=\mbox{p.v.}\\!\\!\int{\bf K}\left({\mathbf{x}}(t,\xi)-{\mathbf{x}}(t,\xi^{\prime})\right)\sigma(t,\xi^{\prime})\,\mathrm{d}\xi^{\prime},$ (1a) $\displaystyle{\partial\sigma(t,\xi)\over\partial t}+{\partial\left[a(t,\xi)\sigma(t,\xi)\right]\over\partial\xi}$ $\displaystyle=0,$ (1b) where the Biot-Savart kernel is defined as ${\bf K}({\mathbf{x}}):={\mathbf{x}}^{\perp}/(2\pi|{\mathbf{x}}|)$ with $(x,y)^{\perp}:=(-y,x)$ and the symbol “p.v.” means that integration is understood in Cauchy’s principal-value sense. The conserved quantity is defined as $\sigma(t,\xi):=\gamma(t,s(t,\xi))(\mathrm{d}s/\mathrm{d}\xi)$, while $a(t,\xi)$ is determined by the parameterization. Specific forms of parameterization which have been considered in the literature include parameterization in terms of the arclength $s$ (DeVoria & Mohseni, 2018) and in terms of the graph of a function with ${\mathbf{x}}=[x,y(x)]^{T}$ (Marchioro & Pulvirenti, 1993). However, for the particular parameterization in terms of the circulation $\Gamma(s):=\int_{0}^{s}\gamma(s^{\prime})\,\mathrm{d}s^{\prime}$ (2) contained between the sheet endpoint and the point with the arclength coordinate $s$, we have $\sigma=\gamma(\mathrm{d}s/\mathrm{d}\Gamma)\equiv 1$ and $a\equiv 0$. 
Then equation (1b) is satisfied trivially, whereas equation (1a) rewritten using the complex representation in terms of $z=z(t,\Gamma)=x(t,\Gamma)+\mathrm{i}y(t,\Gamma)$ becomes the celebrated Birkhoff–Rott equation (Saffman, 1992) ${\partial\overline{z}(t,\Gamma)\over\partial t}=\frac{1}{2\pi\mathrm{i}}\mbox{p.v.}\\!\\!\int_{L(t)}\frac{\mathrm{d}\Gamma^{\prime}}{z(t,\Gamma)-z(t,\Gamma^{\prime})},$ (3) where the overbar denotes complex conjugation. The system (1), or equivalently equation (3), represents a free-boundary problem in which the time-dependent shape of the interface (sheet) needs to be found as a part of the solution to the problem. We remark that formulations employing different parameterizations of the sheet have the same normal velocity component in (1a), but different tangential components determined by the parameterization (Lopes Filho et al., 2007). In particular, in the Lagrangian parameterization in terms of the circulation $\Gamma$, the point ${\mathbf{x}}(t,\Gamma)$ moves with the velocity ${\mathbf{V}}({\mathbf{x}}(t,\Gamma))$ also in the tangential direction. Thus, $a\neq 0$ is a measure of the departure from Lagrangian motion. Another remarkable feature of the Lagrangian parameterization is that the Birkhoff–Rott equation (3) also contains information about the evolution of the circulation density $\gamma(t,s)$, which is implicit in the definition of the independent variable in (2). The Birkhoff-Rott equation (3) is known to be ill-posed and to lead to singularities in finite time (Moore, 1979). For these and other related reasons, this equation has been the focus of extensive mathematical research, and many important results are summarized in the collection edited by Caflisch (1989) and in the monograph by Majda & Bertozzi (2002).
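As a concrete illustration of how equation (3) is used in practice, the sketch below discretizes its right-hand side with a desingularized (vortex-blob) kernel in the spirit of Krasny's δ-regularization; the marker grid, the value of δ, and the use of centred differences of $\Gamma_j$ as quadrature weights are arbitrary choices made for this sketch, not part of the analysis above. The sanity check uses the circulation map of the rotating equilibrium discussed in § 2.1, for which the lab-frame marker velocities should be close to $\mathrm{i}x_j$ (rigid rotation with $\Omega=1$).

```python
import numpy as np

def birkhoff_rott_velocity(z, Gamma, delta=0.05):
    """Blob-regularized right-hand side of the Birkhoff-Rott equation (3):
    returns dz_j/dt for markers z_j = z(t, Gamma_j)."""
    dz = z[:, None] - z[None, :]
    # Krasny-style desingularized kernel; the diagonal entry contributes nothing
    K = np.conj(dz) / (np.abs(dz) ** 2 + delta ** 2)
    np.fill_diagonal(K, 0.0)
    dGamma = np.gradient(Gamma)                 # centred-difference quadrature weights
    dzbar_dt = (K @ dGamma) / (2.0j * np.pi)    # approximates d(conj z)/dt
    return np.conj(dzbar_dt)

# Sanity check on the rotating equilibrium of § 2.1: markers on [-1, 1] with
# Gamma(x) = pi/2 + x*sqrt(1-x^2) + arcsin(x); expect velocity close to i*x_j.
n = 1000
x = -1.0 + (np.arange(n) + 0.5) * (2.0 / n)
Gamma = np.pi / 2 + x * np.sqrt(1.0 - x * x) + np.arcsin(x)
v = birkhoff_rott_velocity(x.astype(complex), Gamma, delta=0.01)
j = int(np.argmin(np.abs(x - 0.3)))
print(v[j])   # close to 0.3i
```

The regularization introduces an $O(\delta)$ error, so this reproduces the equilibrium velocity only approximately, but it shows how the Lagrangian form (3) turns the free-boundary problem into a closed system of ODEs for the markers.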
Because of its compact form, the Birkhoff-Rott equation (3) has been used in many numerical studies of the evolution of vortex sheets, typically involving some form of regularization (Krasny, 1986a, b; Krasny & Nitsche, 2002; Sakajo & Okamoto, 1996; DeVoria & Mohseni, 2018). Similarly, we will also use it, albeit without any regularization, as the point of departure for the different analyses in the present study. The interesting question of how to recover the circulation density $\gamma(t,s)$ from the Lagrangian representation $z(t,\Gamma)$ will be addressed in § 3.1. The velocity ${\mathbf{V}}=(u,v)$ on the right-hand side (RHS) of (1a) can be expressed in complex notation as $\forall z\in L\quad u-\mathrm{i}v=\frac{1}{2\pi\mathrm{i}}\mbox{p.v.}\\!\\!\int_{L}\frac{\gamma(s^{\prime})}{z-z(s^{\prime})}\,\mathrm{d}s^{\prime}=\frac{1}{2\pi\mathrm{i}}\mbox{p.v.}\\!\\!\int_{L}\frac{\varphi(z^{\prime})}{z-z^{\prime}}\,\mathrm{d}z^{\prime},$ (4) where $\varphi(z):=\gamma(s(z))(\mathrm{d}z/\mathrm{d}s)^{-1}$ is introduced so that we can rewrite the integral as a complex integral. It is known that in order for the integrals in (4) to be well-defined in Cauchy’s principal-value sense, the function $\varphi(z)$ must be Hölder-continuous, which also implies a similar condition on the circulation density $\gamma(s)$ (Muskhelishvili, 2008). Furthermore, in order for the velocity in (4) to be bounded everywhere on and in the neighborhood of the sheet $L$, including the case when the point $z$ approaches the endpoints $c_{1},c_{2}$ of the sheet, we must have $\varphi(c_{1})=\varphi(c_{2})=0$ (Muskhelishvili, 2008), implying that $\gamma(s(c_{1}))=\gamma(s(c_{2}))=0,$ (5) where $s(c_{1})$ and $s(c_{2})$ denote the arclength coordinates of the endpoints of the sheet. Condition (5) thus defines a class of physically admissible circulation densities as those that vanish at the endpoints of the sheet. In this study we focus on two equilibria involving finite open vortex sheets.
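Condition (5) can be illustrated numerically for the two circulation densities that will appear in § 2.1 and § 2.2. Evaluating the integral in (4) at $z=1+\epsilon$, just beyond the right endpoint of $L_0=[-1,1]$ (so no principal value is needed), a density vanishing at the endpoints, $\gamma(x)=2\sqrt{1-x^{2}}$, yields a bounded speed, whereas $\gamma(x)=2x/\sqrt{1-x^{2}}$ produces a speed growing roughly like $\epsilon^{-1/2}$. The substitution $x^{\prime}=\cos\theta$ removes the endpoint singularity of the integrand; the grid size and the values of $\epsilon$ are arbitrary choices for this sketch.

```python
import numpy as np

def endpoint_speed(gamma_of_theta, eps, n=200_000):
    """|u - iv| from (4) evaluated at z = 1 + eps, just beyond the right
    endpoint; substituting x' = cos(theta) gives a nonsingular integrand,
    with gamma_of_theta(theta) = gamma(cos(theta)) * sin(theta)."""
    h = np.pi / n
    theta = (np.arange(n) + 0.5) * h            # midpoint rule on (0, pi)
    integral = np.sum(gamma_of_theta(theta) / (1.0 + eps - np.cos(theta))) * h
    return abs(integral / (2.0j * np.pi))

rot = lambda th: 2.0 * np.sin(th) ** 2   # gamma(x) = 2*sqrt(1-x^2): vanishes at x = 1
tra = lambda th: 2.0 * np.cos(th)        # gamma(x) = 2x/sqrt(1-x^2): blows up at x = 1
speeds_rot = [endpoint_speed(rot, e) for e in (1e-2, 1e-3, 1e-4)]
speeds_tra = [endpoint_speed(tra, e) for e in (1e-2, 1e-3, 1e-4)]
print(speeds_rot)   # stays bounded (below 1)
print(speeds_tra)   # grows roughly like eps**-0.5
```

The first list stays bounded as $\epsilon\to 0$, while the second grows by roughly a factor $\sqrt{10}$ for each tenfold decrease of $\epsilon$, consistent with the dichotomy expressed by condition (5).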
The first is the rotating equilibrium, while the second is the translating equilibrium, also known as the Prandtl-Munk vortex. While the linear stability of the straight infinite and the closed circular sheet has been understood for a long time (Michalke & Timme, 1967; Saffman, 1992), little is known about the stability properties of open finite sheets. As the first main result of the paper, we show that the rotating and translating equilibria of finite vortex sheets are linearly unstable. However, while in the first case unstable perturbations grow exponentially fast in time, the growth of unstable perturbations in the second case is algebraic. In both cases the growth rates are increasing functions of the wavenumbers of the perturbations. Remarkably, these stability results are obtained entirely with analytical computations. As the second contribution, we also obtain and analyze equations describing the time evolution of a straight vortex sheet in a linear external velocity field. Batchelor (1988) argued that the rotating equilibrium of the vortex sheet can be obtained as an infinite-aspect-ratio limit of Kirchhoff’s ellipse in which circulation is conserved. We demonstrate that this analogy goes further and in fact also applies to many key findings of the present study. More precisely, as our final contribution, we show that the results concerning the linear stability analysis of the rotating sheet are consistent with the infinite-aspect-ratio limit of the stability results known for Kirchhoff’s ellipse (Love, 1893; Mitchell & Rossi, 2008). The structure of the paper is as follows. In the next section we recall the rotating and translating equilibria of finite open vortex sheets. Next in § 3 we carry out the linear stability analysis of these equilibria. In § 4 we construct time-dependent generalizations of these equilibria in the presence of linear strain and shear.
Finally in § 5 we demonstrate that most of the results reported in § 3 and § 4 are consistent with the infinite-aspect-ratio limits of solutions involving rotating vortex ellipses. A discussion and conclusions are presented in § 6, while some additional technical material is given in Appendix A.

## 2 Two relative equilibria of a straight vortex sheet

In this section we recall some basic facts about the rotating and translating equilibrium configurations of a single open sheet. The rotating equilibrium is mentioned by Batchelor (1988) as a limit of the Kirchhoff elliptical vortex, whereas the translating equilibrium is known as the Prandtl-Munk vortex (Munk, 1919) and has received attention in classical aerodynamics as a simple model for elliptically loaded wings. Interestingly, as proved in Lopes Filho et al. (2003, 2007), while the rotating equilibrium can be interpreted as a weak solution of the 2D Euler equation, the translating equilibrium cannot. The rotating equilibrium was recently generalized to configurations involving multiple straight segments with one endpoint at the centre of rotation and the other at a vertex of a regular polygon by Protas & Sakajo (2020). By allowing for the presence of point vortices in the far field, O’Neil (2018a, b) was able to find more general equilibria involving multiple vortex sheets, including curved ones, in both rotating and translating frames of reference.

### 2.1 Rotating Equilibrium

Without loss of generality, a rotating equilibrium is sought in which the sheet rotates anticlockwise about its centre point with angular frequency $\Omega=1$. The sheet can thus be described as $L(t)=L_{0}e^{\mathrm{i}t}$, where $L_{0}:=[-1,1]$, with the centre of rotation at the origin and $L(t)=L(t+2\pi n)$, $n\in{\mathbb{Z}}$.
Transforming the Birkhoff-Rott equation (3) to the rotating frame of reference via the change of variables $Z(t,\Gamma)=z(t,\Gamma)e^{-\mathrm{i}t}$ yields ${\partial\overline{Z}(t,\Gamma)\over\partial t}=\frac{1}{2\pi\mathrm{i}}\mbox{p.v.}\\!\\!\int_{L_{0}}\frac{\mathrm{d}\Gamma^{\prime}}{Z(t,\Gamma)-Z(t,\Gamma^{\prime})}+\mathrm{i}\overline{Z}(t,\Gamma).$ (6) Noting that the time derivative now vanishes and changing the integration variable to $x$ (which differs from the arclength $s$ by an additive constant) leads to $-\mathrm{i}x=\frac{1}{2\pi\mathrm{i}}\mbox{p.v.}\\!\\!\int_{-1}^{1}\frac{\gamma_{0}(x^{\prime})}{x-x^{\prime}}\,\mathrm{d}x^{\prime},\quad\forall x\in[-1,1]$ (7) as a relation characterizing the rotating equilibrium. The circulation density satisfying this equation has the form $\gamma_{0}(x)=2\sqrt{1-x^{2}},\quad x\in[-1,1],$ (8) which is clearly Hölder continuous and satisfies conditions (5). Therefore, in this equilibrium configuration the velocity induced by the sheet on itself (equal to $-\mathrm{i}x$, which is the opposite of the velocity due to the background rotation) is well behaved everywhere in its neighborhood. The bijective relation between the circulation parameter $\Gamma$ and arclength $s$ (equivalently, the coordinate $x$) for the rotating equilibrium is given by $\Gamma(x)=\int_{-1}^{x}\gamma_{0}(\xi)\,\mathrm{d}\xi=\int_{-1}^{x}2\sqrt{1-\xi^{2}}\,\mathrm{d}\xi=\frac{\pi}{2}+x\sqrt{1-x^{2}}+\arcsin{x}.$ (9) We note that the total circulation of the sheet is then given by ${\widehat{\Gamma}}=\Gamma(1)=\pi$. Generalizations of the equilibrium solution described above to flows in the presence of external strain and/or shear are described in § 4.

### 2.2 Translating Equilibrium

The translating equilibrium involves a straight vortex sheet $L_{0}$ moving steadily in the direction perpendicular to itself with a constant velocity $W$.
The corresponding circulation density does not satisfy conditions (5), so the flow velocity near the sheet endpoints is unbounded. The sheet in such an equilibrium configuration can thus be described by $L(t)=L_{0}-\mathrm{i}t$, taking $W=1$. Transforming the Birkhoff-Rott equation (3) to a translating frame of reference via the change of variable $Z(t,\Gamma)=z(t,\Gamma)+\mathrm{i}t$ yields ${\partial\overline{Z}(t,\Gamma)\over\partial t}=\frac{1}{2\pi\mathrm{i}}\mbox{p.v.}\\!\\!\int_{L_{0}}\frac{\mathrm{d}\Gamma^{\prime}}{Z(t,\Gamma)-Z(t,\Gamma^{\prime})}-\mathrm{i}.$ (10) Then, noting that the time derivative vanishes and changing the integration variable to $x$ we obtain $\mathrm{i}=\frac{1}{2\pi\mathrm{i}}\mbox{p.v.}\\!\\!\int_{-1}^{1}\frac{\gamma_{0}(x^{\prime})}{x-x^{\prime}}\,\mathrm{d}x^{\prime},\qquad\forall x\in[-1,1]$ (11) as a relation characterizing the translating equilibrium. The circulation density satisfying this equation has the form $\gamma_{0}(x)=\frac{2x}{\sqrt{1-x^{2}}},\qquad x\in[-1,1].$ (12) Evidently, this function is not Hölder-continuous at the endpoints $x=\pm 1$. As a result, the velocity field induced by the vortex sheet in such a translating equilibrium configuration is unbounded near the endpoints where it has an inverse square-root singularity (Muskhelishvili, 2008). However, as is evident from (11), on the sheet itself the induced velocity remains bounded. The relation between the circulation parameter $\Gamma$ and arclength $s$ (or equivalently the coordinate $x$) for the translating equilibrium is given by $\Gamma(x)=\int_{-1}^{x}\gamma_{0}(\xi)\,\mathrm{d}\xi=\int_{-1}^{x}\frac{2\xi}{\sqrt{1-\xi^{2}}}\,\mathrm{d}\xi=-2\sqrt{1-x^{2}}.$ (13) We note that the total circulation of the sheet vanishes since ${\widehat{\Gamma}}=\Gamma(1)=0$. ## 3 Linear Stability Analysis In this section we analyze the stability of the equilibrium configurations introduced in § 2.1 and 2.2 in essentially the same way in both cases. 
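Before proceeding, the equilibrium relations themselves can be checked numerically. Multiplying (7) and (11) by $2\pi\mathrm{i}$ shows that the principal-value integrals of the densities (8) and (12) must equal $2\pi x$ and $-2\pi$, respectively. The sketch below evaluates these integrals by the standard singularity-subtraction trick, using $\mbox{p.v.}\int_{-1}^{1}\mathrm{d}x^{\prime}/(x-x^{\prime})=\ln\left((1+x)/(1-x)\right)$; the grid size and the evaluation point $x=0.3$ are arbitrary choices for this sketch.

```python
import numpy as np

def pv_integral(gamma, x, n=200_000):
    """p.v. integral of gamma(x')/(x - x') over [-1, 1] via singularity
    subtraction: the subtracted integrand is bounded (its limit at x' = x
    is -gamma'(x)), and p.v. int dx'/(x - x') = log((1 + x)/(1 - x))."""
    h = 2.0 / n
    xp = -1.0 + (np.arange(n) + 0.5) * h       # midpoint nodes; none hits x = 0.3
    integrand = (gamma(xp) - gamma(x)) / (x - xp)
    return integrand.sum() * h + gamma(x) * np.log((1.0 + x) / (1.0 - x))

x = 0.3
rot = pv_integral(lambda t: 2.0 * np.sqrt(1.0 - t * t), x)       # density (8)
tra = pv_integral(lambda t: 2.0 * t / np.sqrt(1.0 - t * t), x)   # density (12)
print(rot, 2.0 * np.pi * x)   # rotating:    integral close to 2*pi*x, cf. (7)
print(tra, -2.0 * np.pi)      # translating: integral close to -2*pi,  cf. (11)
```

The translating case converges more slowly because the density (12) has inverse square-root singularities at the endpoints, but both values agree with the analytical predictions.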
To fix attention, we first consider the Birkhoff-Rott equation in the rotating frame of reference (6) and study the amplification of infinitesimal perturbations around the equilibrium defined by relations (7)–(8). We thus need to linearize equation (6) around this equilibrium. We write (see Figure 1) $Z(t,\Gamma)=x(\Gamma)+\epsilon\,\zeta(t,\Gamma),\quad|\epsilon|\ll 1.$ (14) Note that while the imaginary component of the perturbation $\zeta(t,\Gamma)$ describes the deformation of the sheet, its real part encodes information about perturbations to the circulation density $\gamma(s)$. Figure 1: Schematic representation of the perturbation defined by (14). Plugging (14) into (6), expanding the terms in this equation in a Taylor series in $\epsilon$ around $\epsilon=0$ and retaining terms proportional to $\epsilon$ yields ${\partial\overline{\zeta}(t,\Gamma)\over\partial t}=\mathrm{i}({\mathcal{H}}\zeta)(t,\Gamma)+\mathrm{i}\overline{\zeta}(t,\Gamma),$ (15) where $({\mathcal{H}}\zeta)(t,\Gamma):=\frac{1}{2\pi}\mbox{f.p.}\int_{0}^{{{\widehat{\Gamma}}}}\frac{\zeta(t,\Gamma)-\zeta(t,\Gamma^{\prime})}{[x(\Gamma)-x(\Gamma^{\prime})]^{2}}\,\,\mathrm{d}\Gamma^{\prime}$ (16) is a hypersingular integral operator. The symbol “f.p.” indicates that the integral is understood in the sense of Hadamard’s finite part (Estrada & Kanwal, 2012). In the present problem the relation between the coordinates $\Gamma$ and $x$ in (16) is given in (9). It is illuminating to separate equation (15) into its real and imaginary parts using $\zeta=\zeta^{r}+\mathrm{i}\zeta^{i}$, leading to $\displaystyle{\partial\zeta^{r}(t,\Gamma)\over\partial t}$ $\displaystyle=-\left({\mathcal{H}}\zeta^{i}\right)(t,\Gamma)+\zeta^{i}(t,\Gamma),$ (17a) $\displaystyle{\partial\zeta^{i}(t,\Gamma)\over\partial t}$ $\displaystyle=-\Big{(}{\mathcal{H}}\zeta^{r}\Big{)}(t,\Gamma)-\zeta^{r}(t,\Gamma).$ (17b) The integro-differential system (17) describes the evolution of infinitesimal perturbations to the equilibrium. 
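The splitting of (15) into the real system (17) is elementary complex algebra and can be checked on arbitrary complex samples (here `Hz` stands for a value of ${\mathcal{H}}\zeta$ and `z` for $\zeta$):

```python
# d(conj zeta)/dt = i*Hz + i*conj(z) has real part -Im(Hz) + Im(z), eq. (17a),
# while d(zeta^i)/dt is minus its imaginary part, i.e. -Re(Hz) - Re(z), eq. (17b)
for Hz, z in ((0.3 + 0.7j, -1.2 + 0.4j), (2.0 - 1.0j, 0.5 + 0.25j)):
    rhs = 1j * Hz + 1j * z.conjugate()
    assert abs(rhs.real - (-Hz.imag + z.imag)) < 1e-15       # matches (17a)
    assert abs(-rhs.imag - (-Hz.real - z.real)) < 1e-15      # matches (17b)
```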
Assuming that the real and imaginary parts of the perturbation depend on time as $\zeta^{r}(t,\Gamma)=\mathrm{e}^{\mathrm{i}\lambda t}\widehat{\zeta}^{r}(\Gamma)$ and $\zeta^{i}(t,\Gamma)=\mathrm{e}^{\mathrm{i}\lambda t}\widehat{\zeta}^{i}(\Gamma)$ turns (17) into the eigenvalue problem $\displaystyle\mathrm{i}\lambda\,\widehat{\zeta}^{r}(\Gamma)$ $\displaystyle=-\left({\mathcal{H}}\widehat{\zeta}^{i}\right)(\Gamma)+\widehat{\zeta}^{i}(\Gamma),$ (18a) $\displaystyle\mathrm{i}\lambda\,\widehat{\zeta}^{i}(\Gamma)$ $\displaystyle=-\left({\mathcal{H}}\widehat{\zeta}^{r}\right)(\Gamma)-\widehat{\zeta}^{r}(\Gamma),$ (18b) where $\lambda\in{\mathbb{C}}$ is the eigenvalue and $\widehat{\zeta}^{r}$, $\widehat{\zeta}^{i}$ the corresponding eigenvectors. Performing the same steps for the translating equilibrium described by (11)–(12) leads to the linearized system $\displaystyle{\partial\zeta^{r}(t,\Gamma)\over\partial t}$ $\displaystyle=-\left({\mathcal{G}}\zeta^{i}\right)(t,\Gamma),$ (19a) $\displaystyle{\partial\zeta^{i}(t,\Gamma)\over\partial t}$ $\displaystyle=-\Big{(}{\mathcal{G}}\zeta^{r}\Big{)}(t,\Gamma),$ (19b) where the hypersingular integral operator ${\mathcal{G}}$ is defined as in (16), except that the relation between the coordinates $\Gamma$ and $x$ is now given in (13). The corresponding eigenvalue problem then takes the form $\displaystyle\mathrm{i}\lambda\,\widehat{\zeta}^{r}(\Gamma)$ $\displaystyle=-\left({\mathcal{G}}\widehat{\zeta}^{i}\right)(\Gamma),$ (20a) $\displaystyle\mathrm{i}\lambda\,\widehat{\zeta}^{i}(\Gamma)$ $\displaystyle=-\left({\mathcal{G}}\widehat{\zeta}^{r}\right)(\Gamma).$ (20b) The linearized systems (17) and (19) have been obtained in a manner similar to the corresponding system in the case of the periodic vortex sheet (Saffman, 1992), except for a difference in the form of the kernel of the integral operator (16), more specifically in how the coordinate $x$ depends on the integration variable $\Gamma^{\prime}$,
and the presence of additional terms representing the background rotation in (17). In our analysis of eigenvalue problems (18) and (20) below it will be convenient to switch between parameterizations in terms of $\Gamma$ and $x$, which will be facilitated by relations (9) and (13). ### 3.1 Constraints on Admissible Perturbations In order for perturbations $\zeta(t,\Gamma)$ to be physically admissible, they have to satisfy certain conditions. In the present problem we will require them to leave the total circulation ${\widehat{\Gamma}}$ of the sheet unchanged. Moreover, in the case of the rotating equilibrium we will also require the associated circulation densities to satisfy conditions (5), so that the velocity induced by the perturbed sheet remains everywhere bounded. In the case of the translating equilibrium, the analogous condition will require the perturbations to leave the type of singularity in the velocity field induced by the sheet near its endpoints unchanged. The main difficulty is that these conditions are naturally expressed in terms of the perturbed circulation density $\gamma$ which enters into (14) implicitly via the circulation parameter $\Gamma$. We thus need to translate these two conditions into constraints on the functions $\zeta(t,\Gamma)$. In the rotating or translating frame of reference the perturbed sheet can be represented as the graph of a function $[x,r(x)]$, as in Figure 1, where the normal displacement $r(x)$ is related to the perturbation $\zeta(\Gamma)$ via $r(x)=\Im\,[Z(\Gamma(x))]=\epsilon\,\Im\,[\zeta(\Gamma(x))]$ (21) (for brevity, the dependence on time $t$ is omitted in this discussion). 
The circulation density $\gamma(x)$ is obtained from $\zeta(\Gamma)$ as follows $\displaystyle\left.\begin{aligned} \gamma(s)&=\frac{\mathrm{d}\Gamma}{\mathrm{d}s}=\left(\frac{\mathrm{d}s}{\mathrm{d}\Gamma}\right)^{-1}=\left({\partial Z\over\partial\Gamma}\,{\partial\overline{Z}\over\partial\Gamma}\right)^{-1/2}\\\ {\partial{Z}\over\partial\Gamma}&={\partial x\over\partial\Gamma}+\epsilon{\partial\zeta\over\partial\Gamma}=\gamma_{0}^{-1}+\epsilon{\partial\zeta\over\partial\Gamma}\end{aligned}\right\\}\quad\Longrightarrow$ $\displaystyle\begin{aligned} \gamma(s)&=\left[\left(\gamma_{0}^{-1}+\epsilon{\partial\zeta\over\partial\Gamma}\right)\left(\gamma_{0}^{-1}+\epsilon{\partial\overline{\zeta}\over\partial\Gamma}\right)\right]^{-1/2}\\\ &=\gamma_{0}-\epsilon\,\gamma_{0}^{2}\,\Re\,\left[{\partial\zeta\over\partial\Gamma}\right]+O(\epsilon^{2})=\gamma_{0}\left[1-\epsilon{\partial\zeta^{r}\over\partial x}\right]+O(\epsilon^{2}),\end{aligned}$ (22) where we used the property $\gamma_{0}^{-1}\partial/\partial x=\partial/\partial\Gamma$. 
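The first-order expansion in (22) can be sanity-checked for arbitrary sample values of $\gamma_{0}$ and $\partial\zeta/\partial\Gamma$ (the neglected terms should scale as $\epsilon^{2}$; the numbers below are ours):

```python
g0, zG = 1.7, complex(0.3, 0.4)       # sample gamma_0 and d(zeta)/d(Gamma)

def exact(eps):
    # [(g0^-1 + eps*zG)(g0^-1 + eps*conj(zG))]^(-1/2) = |g0^-1 + eps*zG|^(-1)
    return abs(1.0 / g0 + eps * zG) ** (-1.0)

def first_order(eps):                 # gamma_0 - eps * gamma_0^2 * Re[d(zeta)/d(Gamma)]
    return g0 - eps * g0 ** 2 * zG.real

# truncation error is O(eps^2): shrinking eps tenfold shrinks the gap ~100x
for eps in (1e-2, 1e-3, 1e-4):
    assert abs(exact(eps) - first_order(eps)) < 2.0 * g0 ** 3 * abs(zG) ** 2 * eps ** 2
```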
The total circulation of the perturbed sheet is $\displaystyle\int_{0}^{{\widehat{\Gamma}}}\,d\Gamma$ $\displaystyle=\int_{-1}^{1}\frac{\mathrm{d}\Gamma}{\mathrm{d}s}\frac{\mathrm{d}s}{\mathrm{d}x}\,\mathrm{d}x=\int_{-1}^{1}\left({\partial Z\over\partial\Gamma}\,{\partial\overline{Z}\over\partial\Gamma}\right)^{-1/2}\left({\partial Z\over\partial x}\,{\partial\overline{Z}\over\partial x}\right)^{1/2}\,\mathrm{d}x$ $\displaystyle=\int_{-1}^{1}\left[\left(\gamma_{0}^{-1}+\epsilon{\partial\zeta\over\partial\Gamma}\right)\left(\gamma_{0}^{-1}+\epsilon{\partial\overline{\zeta}\over\partial\Gamma}\right)\right]^{-1/2}\left[\left(1+\epsilon{\partial\zeta\over\partial x}\right)\left(1+\epsilon{\partial\overline{\zeta}\over\partial x}\right)\right]^{1/2}\,\mathrm{d}x$ $\displaystyle=\int_{-1}^{1}\left[\gamma_{0}-\epsilon\gamma_{0}^{2}\Re\,\left({\partial\zeta\over\partial\Gamma}\right)+O(\epsilon^{2})\right]\left[1+\epsilon\Re\,\left({\partial\zeta\over\partial x}\right)+O(\epsilon^{2})\right]\,\mathrm{d}x$ $\displaystyle=\int_{-1}^{1}\gamma_{0}(x)\,\mathrm{d}x+O(\epsilon^{2}),$ (23) where we have used (22). This relation implies that, to leading order in $\epsilon$, the total circulation is unaffected by perturbations of the form (14). Hence the leading-order term in the expression for the perturbed circulation density is proportional to $\gamma_{0}(x){\partial\zeta^{r}\over\partial x}(\Gamma(x)).$ (24) In the case of the rotating equilibrium this term vanishes at the endpoints since $\gamma_{0}(\pm 1)=0$, provided $\zeta(\Gamma(x))\in C^{1}([-1,1])$. Under the same condition on $\zeta(\Gamma(x))$, the inverse square-root singularity of the circulation density in (12) is preserved in the case of the translating equilibrium. We thus conclude that the two constraints discussed above are satisfied automatically by perturbations $\zeta(\Gamma(x))$ that are continuously differentiable functions of $x$, such as polynomials. 
Hence there are no extra constraints to add to the eigenvalue problems (18) and (20). ### 3.2 Solution of Eigenvalue Problems (18) and (20) In this section we present closed-form solutions to eigenvalue problems (18) and (20) corresponding to the rotating and translating equilibria. We remark that the form of these solutions was inspired by solutions to these problems obtained numerically using a spectral Chebyshev method, with complementary insights provided by Galerkin and collocation formulations (Boyd, 2001). #### 3.2.1 Eigenvalue Problem for the Rotating Equilibrium We begin by expressing the hypersingular operator defined in (16) in terms of integrals defined in Cauchy’s principal-value sense as follows: $\displaystyle({\mathcal{H}}\zeta)(\Gamma(x))$ $\displaystyle=\frac{1}{2\pi}\mbox{f.p.}\int_{0}^{{{\widehat{\Gamma}}}}\frac{\zeta(\Gamma(x))-\zeta(\Gamma^{\prime})}{[x-x(\Gamma^{\prime})]^{2}}\,\mathrm{d}\Gamma^{\prime}=\frac{1}{2\pi}\mbox{f.p.}\int_{-1}^{1}\frac{\zeta(\Gamma(x))-\zeta(\Gamma(\xi))}{[x-\xi]^{2}}\gamma_{0}(\xi)\,\mathrm{d}\xi$ $\displaystyle=-\frac{1}{2\pi}\left[\zeta(\Gamma(x))\frac{\mathrm{d}}{\mathrm{d}x}\mbox{p.v.}\\!\\!\int_{-1}^{1}\frac{2\sqrt{1-\xi^{2}}}{x-\xi}\,\mathrm{d}\xi-\frac{\mathrm{d}}{\mathrm{d}x}\mbox{p.v.}\int_{-1}^{1}\frac{\zeta(\Gamma(\xi))2\sqrt{1-\xi^{2}}}{x-\xi}\,\mathrm{d}\xi\right],$ $\displaystyle=-\zeta(\Gamma(x))+\frac{1}{\pi}\frac{\mathrm{d}}{\mathrm{d}x}\mbox{p.v.}\\!\\!\int_{-1}^{1}\frac{\zeta(\Gamma(\xi))\sqrt{1-\xi^{2}}}{x-\xi}\,\mathrm{d}\xi,$ (25) where we have used (8) and the identity $\mbox{p.v.}\\!\\!\int_{-1}^{1}\sqrt{1-\xi^{2}}(x-\xi)^{-1}\,\mathrm{d}\xi=\pi x$. Next, applying this operator to the Chebyshev polynomial of the second kind $U_{k}$ yields $({\mathcal{H}}U_{k})(x)=-U_{k}(x)+\frac{\mathrm{d}}{\mathrm{d}x}T_{k+1}(x)=kU_{k}(x),\qquad k=0,1,\dots,$ (26) where $T_{k}$ is the Chebyshev polynomial of the first kind.
We have also used the identities $\mbox{p.v.}\\!\\!\int_{-1}^{1}\sqrt{1-\xi^{2}}U_{k-1}(\xi)(x-\xi)^{-1}\,\mathrm{d}\xi=\pi T_{k}(x)$ and $\mathrm{d}T_{k}/\mathrm{d}x=kU_{k-1}$ valid for $k\geq 1$ (DLMF, 2020). Relation (26) implies that $k=0,1,\dots$ and $U_{k}$ are, respectively, the eigenvalues and eigenvectors of the operator ${\mathcal{H}}$ in (16). We then rearrange problem (18) as $-\lambda^{2}\widehat{\zeta}^{r}=-({\mathcal{I}}-{\mathcal{H}})({\mathcal{I}}+{\mathcal{H}})\widehat{\zeta}^{r}=({\mathcal{H}}^{2}-{\mathcal{I}}^{2})\widehat{\zeta}^{r},$ (27) where ${\mathcal{I}}$ denotes the identity operator. Evidently, $\widehat{\zeta}^{r}(\Gamma(x))=U_{k}(x)$ is also an eigenfunction of problem (27) with the eigenvalue $\lambda_{k}=\pm\mathrm{i}\sqrt{k^{2}-1}$. Since $\widehat{\zeta}^{i}(\Gamma(x))$ satisfies an equation identical to (27), the solution of eigenvalue problem (18) is $\widehat{\zeta}^{r}_{k}(\Gamma(x))=U_{k}(x)$, $\widehat{\zeta}^{i}_{k}(\Gamma(x))=\theta_{k}U_{k}(x)$. Inserting this representation into (18) leads to $\theta_{k}=\sqrt{(k+1)/(k-1)}$ for $k=2,3,\dots$. The cases with $k=0,1$ need to be considered separately: the corresponding solutions can be easily deduced from (18). Thus the eigenvalues $\lambda_{k}$ and eigenvectors $\widehat{\zeta}_{k}=\widehat{\zeta}^{r}_{k}+\mathrm{i}\widehat{\zeta}^{i}_{k}$ of the problem (18) are $\displaystyle\lambda_{0}$ $\displaystyle=\pm 1,$ $\displaystyle\qquad\widehat{\zeta}_{0}(\Gamma)$ $\displaystyle=1\pm\mathrm{i}^{2}=0,2,$ (28a) $\displaystyle\lambda_{1}$ $\displaystyle=0,$ $\displaystyle\qquad\widehat{\zeta}_{1}(\Gamma)$ $\displaystyle=\mathrm{i}x(\Gamma),$ (28b) $\displaystyle\lambda_{k}$ $\displaystyle=\pm\mathrm{i}\sqrt{k^{2}-1},$ $\displaystyle\widehat{\zeta}_{k}(\Gamma)$ $\displaystyle=\left(1\pm\mathrm{i}\sqrt{\frac{k+1}{k-1}}\right)U_{k}(x(\Gamma)),\ k=2,3,\dots.$ (28c) The neutrally-stable mode (28a) represents harmonic oscillation of the centre of rotation around the origin. 
Mode (28b) associated with the zero eigenvalue represents the stretching or compression of the sheet, and can therefore be interpreted as connecting the rotating equilibrium defined by (7)–(8) with a nearby equilibrium. Finally, there exists a countably infinite family of linearly stable and unstable eigenmodes involving deformation of the sheet. In the limit of large $k$, the eigenvalues behave as $\lambda_{k}\sim\pm\mathrm{i}k$. The eigenfunctions (28c) corresponding to three different even and odd values of $k$ are shown in Figure 2 in terms of perturbed shapes and perturbed circulation densities of the vortex sheet. Figure 2: Unstable eigenvectors of the rotating equilibrium corresponding to (a,b) $k=4,5$, (c,d) $k=12,13$ and (e,f) $k=36,37$ in expressions (28c). The left column shows the perturbed sheet geometry $Z(\Gamma(x))$, and the right column the corresponding circulation density $\gamma(x)$, with $\epsilon=10^{-2}$. Solid blue and dashed red lines represent, respectively, the perturbations corresponding to even and odd values of $k$, while thick black lines represent the equilibrium configuration.
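The derivative identity behind (26) and the scalar relations that produce (28c) can be verified directly, the former with exact rational arithmetic (a sketch; the recurrences and tolerances are ours):

```python
import math
from fractions import Fraction as F

# T_k and U_k coefficient lists (low degree -> high) from the shared
# three-term recurrence p_{k+1}(x) = 2x p_k(x) - p_{k-1}(x)
T, U = [[F(1)], [F(0), F(1)]], [[F(1)], [F(0), F(2)]]
for seq in (T, U):
    while len(seq) < 14:
        a, b = seq[-2], seq[-1]
        nxt = [F(0)] + [2 * c for c in b]
        for i, c in enumerate(a):
            nxt[i] -= c
        seq.append(nxt)

# dT_{k+1}/dx = (k+1) U_k, hence (H U_k) = -U_k + dT_{k+1}/dx = k U_k, eq. (26)
for k in range(12):
    dT = [i * c for i, c in enumerate(T[k + 1])][1:]
    assert dT == [(k + 1) * c for c in U[k]]

# with (H U_k) = k U_k, the ansatz zeta_r = U_k, zeta_i = theta_k U_k reduces
# (18) to two scalar relations: i*lam = (1-k)*theta_k and i*lam*theta_k = -(k+1)
for k in range(2, 40):
    lam = 1j * math.sqrt(k * k - 1)            # lambda_k, "+" branch of (28c)
    theta = math.sqrt((k + 1) / (k - 1))       # theta_k
    assert abs(1j * lam - (1 - k) * theta) < 1e-9
    assert abs(1j * lam * theta + (k + 1)) < 1e-9
```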
#### 3.2.2 Eigenvalue Problem for the Translating Equilibrium Analogously to (25), the action of the hypersingular operator ${\mathcal{G}}$ on the perturbation $\zeta$ can be expressed using (12) as $\displaystyle({\mathcal{G}}\zeta)(\Gamma(x))$ $\displaystyle=\frac{1}{2\pi}\mbox{f.p.}\int_{-1}^{1}\frac{\zeta(\Gamma(x))-\zeta(\Gamma(\xi))}{[x-\xi]^{2}}\gamma_{0}(\xi)\,\mathrm{d}\xi=\frac{1}{2\pi}\mbox{f.p.}\int_{-1}^{1}\frac{\zeta(\Gamma(x))-\zeta(\Gamma(\xi))}{[x-\xi]^{2}}\frac{2\xi}{\sqrt{1-\xi^{2}}}\,\mathrm{d}\xi$ $\displaystyle=-\frac{1}{\pi}\left[\zeta(\Gamma(x))\frac{\mathrm{d}}{\mathrm{d}x}\mbox{p.v.}\int_{-1}^{1}\frac{\xi}{\sqrt{1-\xi^{2}}(x-\xi)}\,\mathrm{d}\xi-\frac{\mathrm{d}}{\mathrm{d}x}\mbox{p.v.}\int_{-1}^{1}\frac{\zeta(\Gamma(\xi))\xi}{\sqrt{1-\xi^{2}}(x-\xi)}\,\mathrm{d}\xi\right],$ $\displaystyle=\frac{1}{\pi}\frac{\mathrm{d}}{\mathrm{d}x}\mbox{p.v.}\int_{-1}^{1}\frac{\zeta(\Gamma(\xi))\xi}{\sqrt{1-\xi^{2}}(x-\xi)}\,\mathrm{d}\xi,$ (29) where we have used the identity $\mbox{p.v.}\\!\\!\int_{-1}^{1}\xi(1-\xi^{2})^{-1/2}(x-\xi)^{-1}\,\mathrm{d}\xi=-\pi$.
Representing the perturbation as a function of $x$ in terms of a Chebyshev series expansion with complex coefficients $\zeta(\Gamma(x))=\sum_{k=0}^{\infty}(\alpha_{k}+i\beta_{k})T_{k}(x),\qquad\alpha_{k},\beta_{k}\in{\mathbb{R}},\qquad k=0,1,\dots,$ (30) we have $({\mathcal{G}}\zeta)(x)=\sum_{k=0}^{\infty}(\alpha_{k}+\mathrm{i}\beta_{k})({\mathcal{G}}T_{k})(x),$ where $({\mathcal{G}}T_{k})(x)=\frac{1}{\pi}\frac{\mathrm{d}}{\mathrm{d}x}\mbox{p.v.}\int_{-1}^{1}\frac{\xi T_{k}(\xi)}{\sqrt{1-\xi^{2}}(x-\xi)}\,\,\mathrm{d}\xi,\qquad k\geq 0.$ (31) Using the identity $\mbox{p.v.}\\!\\!\int_{-1}^{1}T_{k}(\xi)(1-\xi^{2})^{-1/2}(x-\xi)^{-1}\,\mathrm{d}\xi=-\pi U_{k-1}(x)$, $k\geq 1$, and the recurrence relations characterizing the Chebyshev polynomials of the first and second kind (DLMF, 2020), we find $\displaystyle({\mathcal{G}}T_{0})(x)$ $\displaystyle=0,$ (32a) $\displaystyle({\mathcal{G}}T_{k})(x)$ $\displaystyle=\frac{1}{1-x^{2}}\left[kxT_{k}(x)-U_{k-1}(x)\right],\qquad k=1,2,\dots.$ (32b) We then define $\displaystyle{\mathbf{w}}(t)$ $\displaystyle:=\left[\alpha_{0}(t),\alpha_{1}(t),\dots,\beta_{0}(t),\beta_{1}(t),\dots\right]^{T},$ (33a) $\displaystyle[{\bf G}]_{jk}$ $\displaystyle:=\int_{-1}^{1}\frac{T_{j}(\xi)({\mathcal{G}}T_{k})(\xi)}{\sqrt{1-\xi^{2}}}\,\mathrm{d}\xi,\quad j,k=0,1,\dots,$ (33b) where the last expression represents the $j$th Chebyshev coefficient of $({\mathcal{G}}T_{k})(x)$. Then the system (19) can be rewritten as an infinite-dimensional vector equation $\frac{d}{dt}{\mathbf{w}}=\begin{bmatrix}\phantom{-}{\mathbf{0}}&-{\bf G}\\\ -{\bf G}&\phantom{-}{\mathbf{0}}\end{bmatrix}{\mathbf{w}}=:{\bf A}{\mathbf{w}},$ (34) where ${\mathbf{0}}$ represents the null matrix. We remark that when the operator ${\mathcal{G}}$ acts on a polynomial, the result is a polynomial of degree reduced by one. Thus, matrix ${\bf G}$ representing this operator in the Chebyshev basis is upper triangular with zeros on its main diagonal. 
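Both the exact divisibility implicit in (32b) and the degree-reduction property behind the triangular structure of ${\bf G}$ can be confirmed with rational arithmetic (a sketch; the helper functions are ours):

```python
from fractions import Fraction as F

def cheb(second, n):
    """Chebyshev coefficient lists: second=[0,1] gives T_k, second=[0,2] gives U_k."""
    seq = [[F(1)], [F(c) for c in second]]
    while len(seq) <= n:
        a, b = seq[-2], seq[-1]
        nxt = [F(0)] + [2 * c for c in b]
        for i, c in enumerate(a):
            nxt[i] -= c
        seq.append(nxt)
    return seq

T, U = cheb([0, 1], 12), cheb([0, 2], 12)

def gT(k):
    """Monomial coefficients of (G T_k) via (32b), checking exact divisibility."""
    num = [F(0)] + [k * c for c in T[k]]       # k*x*T_k(x) ...
    for i, c in enumerate(U[k - 1]):
        num[i] -= c                            # ... minus U_{k-1}(x)
    p, q = num[:], [F(0)] * (len(num) - 2)
    for i in range(len(num) - 1, 1, -1):       # long division by (x^2 - 1)
        q[i - 2] = p[i]
        p[i - 2] += q[i - 2]
    assert p[0] == 0 and p[1] == 0             # remainder vanishes: (1 - x^2) divides
    return [-c for c in q]                     # num/(1 - x^2) = -num/(x^2 - 1)

assert gT(1) == [F(-1)]                        # (G T_1)(x) = -1
for k in range(1, 12):
    poly = gT(k)
    # deg(G T_k) = k - 1, so the coefficients [G]_{jk} vanish for j >= k:
    # G is strictly upper triangular, and its truncations are nilpotent
    assert len(poly) == k and poly[-1] != 0
```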
Therefore, we conclude that zero is the only eigenvalue of the operator ${\mathcal{G}}$ and hence also of the eigenvalue problem (20). The relation (32a) indicates that $\widehat{\zeta}^{r}(\Gamma)=\widehat{\zeta}^{i}(\Gamma)=1$ is the only eigenvector, which implies that the eigenvalue $\lambda=0$ has infinite algebraic multiplicity and geometric multiplicity equal to 1. The matrix ${\bf A}$ is thus quasi-nilpotent: every finite-dimensional truncation of ${\bf A}$ is nilpotent, although no finite power of the infinite matrix vanishes identically. In the presence of such an extreme form of degeneracy, solutions of system (34) corresponding to some initial condition ${\mathbf{w}}_{0}\in l^{2}$ can be written as (Perko, 2008) ${\mathbf{w}}(t)=\mathrm{e}^{t{\bf A}}{\mathbf{w}}_{0}={\bf P}\operatorname{diag}\\{\mathrm{e}^{\lambda_{j}t}\\}{\bf P}^{-1}\left[{\bf I}+t{\bf A}+\frac{t^{2}}{2}{\bf A}^{2}+\dots\right]{\mathbf{w}}_{0}=\left[{\bf I}+t{\bf A}+\frac{t^{2}}{2}{\bf A}^{2}+\dots\right]{\mathbf{w}}_{0},$ (35) where ${\bf I}$ is the identity matrix and ${\bf P}:=[{\mathbf{v}}_{0},{\mathbf{v}}_{1},\dots]$ is a matrix with columns given by the generalized eigenvectors ${\mathbf{v}}_{k}$, $k=0,1,\dots$ obtained from the Jordan chain (Perko, 2008) $\displaystyle{\bf A}{\mathbf{v}}_{1}$ $\displaystyle={\mathbf{v}}_{0}=\left[1,0,\dots,1,0,\dots\right]^{T},$ (36a) $\displaystyle{\bf A}{\mathbf{v}}_{k+1}$ $\displaystyle={\mathbf{v}}_{k},\qquad k=1,2,\dots.$ (36b) Because of property (32b), the Jordan chain of generalized eigenvectors consists of polynomials of degree increasing with $k$, and ${\mathbf{v}}_{k}$ represents the Chebyshev coefficients of a polynomial of degree $k$. Since the generalized eigenvectors are linearly independent, the matrix ${\bf P}$ is invertible. As all the eigenvalues are equal to zero, the product of the first three factors in expression (35) reduces to the identity matrix.
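The mechanism can be illustrated on a finite-dimensional stand-in: for any strictly upper triangular truncation of ${\bf G}$, the corresponding block matrix ${\bf A}$ is nilpotent, so the exponential series in (35) terminates and solutions are polynomials in $t$ (the matrix `G` below is a generic example, not the actual Chebyshev-coefficient matrix):

```python
def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)] for i in range(m)]

n = 4                                      # truncation size of the stand-in example
G = [[j - i if j > i else 0 for j in range(n)] for i in range(n)]   # strictly upper triangular
A = [[0] * n + [-G[i][j] for j in range(n)] for i in range(n)] + \
    [[-G[i][j] for j in range(n)] + [0] * n for i in range(n)]      # A = [[0,-G],[-G,0]]

powers = [[[1 if i == j else 0 for j in range(2 * n)] for i in range(2 * n)]]  # A^0 = I
for _ in range(4):
    powers.append(matmul(powers[-1], A))

assert any(any(row) for row in powers[3])          # A^3 != 0 ...
assert not any(any(row) for row in powers[4])      # ... but A^4 = 0: A is nilpotent
# hence exp(tA) w0 = sum_{k<4} t^k/k! A^k w0 is a cubic polynomial in t
```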
Expanding the initial condition in terms of the generalized eigenvectors from the Jordan chain (36) as ${\mathbf{w}}_{0}=\eta_{0}{\mathbf{v}}_{0}+\eta_{1}{\mathbf{v}}_{1}+\dots$ for some $\eta_{0},\eta_{1},\dots\in{\mathbb{R}}$ and using the property that ${\bf A}^{j}{\mathbf{v}}_{k}={\mathbf{0}}$ for $0\leq k<j$, we can rewrite the solution (35) as ${\mathbf{w}}(t)={\mathbf{w}}_{0}+t(\eta_{1}{\mathbf{v}}_{1}+\eta_{2}{\mathbf{v}}_{2}+\dots)+\cdots+\frac{t^{n}}{n!}(\eta_{n}{\mathbf{v}}_{n}+\eta_{n+1}{\mathbf{v}}_{n+1}+\dots)+\cdots.$ (37) This form of the solution allows us to conclude that, for each integer $n>0$, there exists a perturbation ${\mathbf{w}}_{0}$ given by a polynomial of degree equal to or greater than $n$ which grows in time at a rate proportional to at least $t^{n}$. In order to understand the structure of the fastest-growing perturbations represented by (37), it is instructive to examine the generalized eigenvectors defined in (36) as functions of $x\in[-1,1]$, i.e. $v_{k}(x)=\sum_{j=0}^{k}[{\mathbf{v}}_{k}]_{j}T_{j}(x)$, $k=0,1,\dots$. The first six generalized eigenvectors then take the form $\displaystyle v_{0}(x)$ $\displaystyle=T_{0}(x)=1,$ $\displaystyle\qquad v_{1}(x)$ $\displaystyle=T_{1}(x)=x,$ (38a) $\displaystyle v_{2}(x)$ $\displaystyle=\frac{1}{4}T_{2}(x)=\frac{1}{2}x^{2}-\frac{1}{4},$ $\displaystyle\qquad v_{3}(x)$ $\displaystyle=\frac{1}{24}T_{3}(x)-\frac{5}{24}T_{1}(x)=\frac{1}{6}x^{3}-\frac{1}{3}x,$ (38b) $\displaystyle v_{4}(x)$ $\displaystyle=\frac{1}{192}T_{4}(x)-\frac{7}{96}T_{2}(x)=\frac{1}{24}x^{4}-\frac{3}{16}x^{2}+\frac{5}{64},$ $\displaystyle\qquad v_{5}(x)$ $\displaystyle=\frac{1}{1920}T_{5}(x)-\frac{9}{640}T_{3}(x)+\frac{61}{960}T_{1}(x)=\frac{1}{120}x^{5}-\frac{1}{15}x^{3}+\frac{13}{120}x$ (38c) and some of them are plotted in Figure 3. The remaining generalized eigenvectors follow the same pattern.
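The rational coefficients listed in (38) can be checked with exact arithmetic (a short sketch; coefficient lists run from low to high degree):

```python
from fractions import Fraction as F

# T_k coefficients (low degree -> high) via T_{k+1} = 2x T_k - T_{k-1}
T = [[F(1)], [F(0), F(1)]]
for _ in range(6):
    a, b = T[-2], T[-1]
    nxt = [F(0)] + [2 * c for c in b]
    for i, c in enumerate(a):
        nxt[i] -= c
    T.append(nxt)

def comb(terms, deg):
    """Linear combination of T_k's, padded to length deg + 1."""
    out = [F(0)] * (deg + 1)
    for coef, k in terms:
        for i, c in enumerate(T[k]):
            out[i] += coef * c
    return out

# v_2 .. v_5 from (38b,c): Chebyshev form equals monomial form
assert comb([(F(1, 4), 2)], 2) == [F(-1, 4), F(0), F(1, 2)]
assert comb([(F(1, 24), 3), (F(-5, 24), 1)], 3) == [F(0), F(-1, 3), F(0), F(1, 6)]
assert comb([(F(1, 192), 4), (F(-7, 96), 2)], 4) == \
       [F(5, 64), F(0), F(-3, 16), F(0), F(1, 24)]
assert comb([(F(1, 1920), 5), (F(-9, 640), 3), (F(61, 960), 1)], 5) == \
       [F(0), F(13, 120), F(0), F(-1, 15), F(0), F(1, 120)]
```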
We observe that the generalized eigenvectors of even order consist of even-degree polynomials only and vice versa. In all cases the magnitude of the coefficients decreases with the degree of the term, so that the form of the generalized eigenvectors is dominated by their lower-degree terms. As a result, while the generalized eigenvectors (38) are linearly independent (Perko, 2008), they form a strongly non-normal set, as shown in Figure 3. This implies that when the initial perturbation $\zeta(0,\Gamma(x))$ in the form of a generic degree-$n$ polynomial of $x$ is expanded in terms of the generalized eigenvectors (38), the expansion coefficients $\eta_{k}$, $k=0,1,\dots,n$ generically increase in magnitude with $k$. This means that the fastest-growing components of such an initial perturbation will be given by the generalized eigenvector $v_{n}$, and will grow at a rate proportional to $t^{n}$. We also remark that the even-degree generalized eigenvectors shown in Figure 3 have some resemblance to the form of the most amplified perturbations observed during the time evolution of a perturbed Prandtl-Munk vortex. More precisely, while the linear stability analysis cannot predict the roll-up of the sheet near its endpoints which is driven by nonlinear effects, it does appear to capture the change of the global shape of the sheet as in Figure 2 in Krasny (1987) and Figure 4 in DeVoria & Mohseni (2018). However, given the form (37) of the solution of the linearized problem, it is impossible to make this statement more quantitative, e.g. by comparing the growth rates. Figure 3: Generalized eigenvectors $v_{2}$, $v_{4}$ (blue solid lines) and $v_{3}$, $v_{5}$ (red dashed lines) as functions of $x$. Thicker lines represent generalized eigenvectors of a higher degree. The graphs of the remaining generalized eigenvectors $v_{6},v_{7},\dots$ are essentially indistinguishable from the thicker curves.
## 4 Time-dependent straight vortex sheets The relative equilibrium involving Kirchhoff’s rotating ellipse has been generalized by including the effect of linear velocity fields. Moore & Saffman (1971) found steady states involving ellipses in a uniform straining field, while time-dependent solutions in the presence of a simple shear were investigated by Kida (1981). In this section we describe analogous generalizations of the rotating equilibria of the vortex sheet described in § 2.1, whereas in § 5.2 it is demonstrated that some of these solutions in fact coincide with the infinite-aspect-ratio limits of the Moore-Saffman and Kida vortex-patch solutions (Moore & Saffman, 1971; Kida, 1981). An alternative derivation of these solutions will be presented in Appendix A. ### 4.1 Governing equations We now derive from first principles the equation of motion for a single vortex sheet in the presence of a linear external flow given by $F(z)=Az+B{\overline{z}}$ with $A,B\in\mathbb{C}$. Our focus is on solutions in which the vortex sheet retains the form of a straight segment, but with varying length and inclination angle to the coordinate axes. Since the external flow should be divergence-free, we have $\Re\,[(\partial_{x}-\mathrm{i}\partial_{y})\overline{F(z)}]=2\,\Re\,[B]=0$. Hence, without loss of generality, we can set the parameters of the external flow as $A=r\mathrm{e}^{-\mathrm{i}\theta_{0}}$ and $B=\mathrm{i}\Omega$, where $r>0$, $\Omega\in\mathbb{R}$ and $\theta_{0}\in[0,2\pi)$. This form of the external flow is a generalization of the earlier studies mentioned: Kida’s case (Kida, 1981) corresponds to $\theta_{0}=0$, while $\Omega=0$ leads to the case of Moore & Saffman (1971).
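The divergence-free condition can be confirmed by finite differences (a sketch; since (39) prescribes $\partial\overline{z}/\partial t=F(z)$ for the external part, the velocity $u+\mathrm{i}v$ is the conjugate of $F$, and the sample values of $r$, $\theta_{0}$, $\Omega$ below are arbitrary):

```python
import math

r, th0, Om = 0.8, 1.1, 0.6                       # arbitrary sample parameters
A, B = r * complex(math.cos(-th0), math.sin(-th0)), 1j * Om   # A = r e^{-i th0}, B = i Om

def vel(x, y):
    """Velocity u + i*v of the external flow, i.e. the conjugate of F(z)."""
    z = complex(x, y)
    return (A * z + B * z.conjugate()).conjugate()

h, x0, y0 = 1e-6, 0.37, -0.52
div = ((vel(x0 + h, y0).real - vel(x0 - h, y0).real) +
       (vel(x0, y0 + h).imag - vel(x0, y0 - h).imag)) / (2 * h)
assert abs(div - 2 * B.real) < 1e-6              # divergence equals 2*Re[B] ...
assert abs(div) < 1e-6                           # ... which vanishes for B = i*Omega
```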
The evolution of the vortex sheet represented by the curve ${\mathcal{L}}(t)\subset{\mathbb{C}}$ is then governed by the Birkhoff-Rott equation (3) augmented with the external flow, which becomes ${\partial\overline{z}\over\partial t}=\frac{1}{2\pi\mathrm{i}}\mbox{p.v.}\int_{\mathcal{L}}\frac{\gamma(w)}{z-w}|\mathrm{d}w|+r\mathrm{e}^{-\mathrm{i}\theta_{0}}z+\mathrm{i}\Omega\overline{z},\qquad z\in{\mathcal{L}}.$ (39) We now assume that the vortex sheet has the form of a line segment and therefore can be parameterized as ${\mathcal{L}}(t)\;\colon\;z(t,s)=a(t)s\mathrm{e}^{\mathrm{i}\theta(t)},\qquad\mbox{where $-1\leq s\leq 1$}.$ (40) The positive-valued function $a(t)$ represents the half-length of the vortex sheet, while the real-valued function $\theta(t)$ gives the angle between the $x$-axis and the sheet. Although the support ${\mathcal{L}}(t)$ of the circulation density changes in time, the total circulation ${\widehat{\Gamma}}$ carried by the vortex sheet must be conserved in time. Hence, we take the circulation density to have a form analogous to (8), with $\gamma(z)=\frac{2}{a(t)}\sqrt{1-\frac{z^{2}}{a^{2}(t)\mathrm{e}^{2\mathrm{i}\theta(t)}}},\qquad z\in{\mathcal{L}}(t).$ (41) Then the total circulation is independent of $a(t)$ and $\theta(t)$, and ${\widehat{\Gamma}}=\int_{{\mathcal{L}}(t)}\gamma(z)|\mathrm{d}z|=\int_{-1}^{1}\frac{2}{a(t)}\sqrt{1-\frac{a^{2}(t)s^{2}\mathrm{e}^{2\mathrm{i}\theta(t)}}{a^{2}(t)\mathrm{e}^{2\mathrm{i}\theta(t)}}}a(t)\,\mathrm{d}s=\int_{-1}^{1}2\sqrt{1-s^{2}}\,\mathrm{d}s=\pi.$ (42) Equations for $a(t)$ and $\theta(t)$ are obtained by substituting (40) and (41) into (39).
The singular integral in (39) then becomes $\displaystyle\mbox{p.v.}\int_{\mathcal{L}}\frac{\gamma(w)}{z-w}|\mathrm{d}w|$ $\displaystyle=\mbox{p.v.}\int_{-1}^{1}\frac{2\sqrt{1-(s^{\prime})^{2}}}{a(t)s\mathrm{e}^{\mathrm{i}\theta(t)}-a(t)s^{\prime}\mathrm{e}^{\mathrm{i}\theta(t)}}\,\mathrm{d}s^{\prime}$ (43) $\displaystyle=\frac{\mathrm{e}^{-\mathrm{i}\theta(t)}}{a(t)}\mbox{p.v.}\int_{-1}^{1}\frac{2\sqrt{1-(s^{\prime})^{2}}}{s-s^{\prime}}\,\mathrm{d}s^{\prime}=\frac{2\pi s}{a(t)}\mathrm{e}^{-\mathrm{i}\theta(t)},$ (44) where we have used the fact that the Hilbert transform of $\sqrt{1-s^{2}}$ is $\pi s$. After performing some elementary algebraic operations we obtain $\begin{bmatrix}\dot{a}(t)\\\ \dot{\theta}(t)\end{bmatrix}=\begin{bmatrix}a(t)r\cos(2\theta(t)-\theta_{0})\\\ \frac{1}{a^{2}(t)}-r\sin(2\theta(t)-\theta_{0})-\Omega\end{bmatrix}.$ (45) As shown in Appendix A, this system can also be derived using an approach proposed by O’Neil (2018a, b) to construct equilibrium solutions involving vortex sheets. The well-posedness of system (45) is easily established. Rewriting the first equation in (45) as $a^{-1}\dot{a}=r\cos(2\theta(t)-\theta_{0})$, we immediately see that $a(t)=a(0)\exp\left[r\int_{0}^{t}\cos(2\theta(s)-\theta_{0})\,\mathrm{d}s\right].$ (46) Since $-1\leq\cos(2\theta(t)-\theta_{0})\leq 1$ for all $t\in\mathbb{R}$, we have $0<a(0)\exp\left(-rt\right)\leq a(t)\leq a(0)\exp\left(rt\right)<\infty.$ This means the solutions of (45) cannot blow up in finite time, but unbounded growth is possible in infinite time, in the sense that $a(t)\rightarrow\infty$ as $t\to\pm\infty$, as we shall see below. ### 4.2 Analysis of the fixed points of the system (45) Fixed points of the system (45) are obtained directly by solving the equations $\dot{a}=\dot{\theta}=0$. From $\dot{a}=ar\cos(2\theta-\theta_{0})=0$ it immediately follows that $\theta=\theta_{n}:=\frac{\theta_{0}}{2}+\frac{\pi}{4}+\frac{n\pi}{2}$ for $n\in\mathbb{Z}$.
Since $\sin(2\theta_{n}-\theta_{0})=(-1)^{n}$, $\dot{\theta}=0$ is equivalent to $a^{-2}=r+\Omega$ when $\theta=\theta_{2m}$ and to $a^{-2}=-r+\Omega$ when $\theta=\theta_{2m+1}$, $m\in{\mathbb{Z}}$. Hence, when $\Omega>r$, we have the following two families of steady states $\displaystyle(a_{2m},\theta_{2m})$ $\displaystyle=\left(\frac{1}{\sqrt{r+\Omega}},\frac{\theta_{0}}{2}+\frac{\pi}{4}+m\pi\right),$ (47a) $\displaystyle(a_{2m+1},\theta_{2m+1})$ $\displaystyle=\left(\frac{1}{\sqrt{-r+\Omega}},\frac{\theta_{0}}{2}+\frac{\pi}{4}+\left(m+\frac{1}{2}\right)\pi\right),\qquad m\in{\mathbb{Z}}.$ (47b) On the other hand, when $r>\Omega>-r$, there is only one family of steady states given by (47a) and there are no steady states when $\Omega<-r$. Since in the fixed frame of reference considered here the relative equilibrium discussed in § 2.1 takes the form of a periodic solution rather than a fixed point, the fixed points (47) disappear to infinity in the limit $r,\Omega\rightarrow 0^{+}$. We now analyze trajectories near the fixed points (47). We emphasize that this is not a stability analysis of the equations of motion as was carried out in § 3.2.1; instead, here we focus on perturbations which only affect $a(t)$ and $\theta(t)$ in (40), i.e. those that leave the vortex sheet in the form of a straight segment. The Jacobian of system (45) is given by $\begin{bmatrix}r\cos(2\theta-\theta_{0})&-2ar\sin(2\theta-\theta_{0})\\\ -2a^{-3}&-2r\cos(2\theta-\theta_{0})\end{bmatrix}.$ (48) Computing the eigenvalues of Jacobian (48) evaluated at the critical points yields * • $\lambda=\pm 2\sqrt{r(r+\Omega)}$ for the critical points (47a) when $r+\Omega>0$, indicating that these critical points are saddles, * • $\lambda=\pm 2\sqrt{r(-r+\Omega)}\,\mathrm{i}$ for the critical points (47b) when $\Omega>r$, indicating that these critical points are centres. The structure of the phase space $(a,\theta)$ of system (45) for different combinations of the parameters $r$ and $\Omega$ is explored in the next section.
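These conclusions can be verified directly at the fixed points (a minimal numerical sketch with $\theta_{0}=0$, $m=0$; the values of $r$ and $\Omega$ are arbitrary with $\Omega>r$):

```python
import math

r, Om, th0 = 0.5, 1.0, 0.0               # case Omega > r: both families exist

def rhs(a, th):                           # right-hand side of system (45)
    return (a * r * math.cos(2 * th - th0),
            1.0 / a ** 2 - r * math.sin(2 * th - th0) - Om)

def lam_sq(a, th):
    # the trace of (48) vanishes at the fixed points, so lambda^2 equals
    # the product of the off-diagonal entries of the Jacobian (48)
    return (-2 * a * r * math.sin(2 * th - th0)) * (-2 * a ** -3)

aA, thA = 1 / math.sqrt(r + Om), th0 / 2 + math.pi / 4        # family (47a)
aB, thB = 1 / math.sqrt(Om - r), th0 / 2 + 3 * math.pi / 4    # family (47b)

for a, th in ((aA, thA), (aB, thB)):
    da, dth = rhs(a, th)
    assert abs(da) < 1e-12 and abs(dth) < 1e-12               # genuine fixed points
assert abs(lam_sq(aA, thA) - 4 * r * (r + Om)) < 1e-12        # lambda^2 > 0: saddle
assert abs(lam_sq(aB, thB) + 4 * r * (Om - r)) < 1e-12        # lambda^2 < 0: centre
```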
### 4.3 Phase plots Figure 4: Solution trajectories of system (45) in the phase space $(a,\theta)\in{\mathbb{R}}_{+}\times{\mathbb{R}}$ for different choices of parameters: (a) $(r,\Omega)=(0.5,1.0)$, (b) $(r,\Omega)=(1.0,0.5)$ and (c) $(r,\Omega)=(0.5,-1.0)$. The black and red solid symbols represent the centres and saddles given in (47). The phase space $(a,\theta)\in{\mathbb{R}}_{+}\times{\mathbb{R}}$ of the system (45) is characterized by solving it numerically with different initial conditions $(a(0),\theta(0))$. The results are shown in Figure 4 for the following three cases: (a) $\Omega>r$, (b) $r>\Omega>-r$ and (c) $-r>\Omega$. Without loss of generality, we choose $\theta_{0}=0$, since this parameter controls only the inclination angle of the equilibrium configurations and not their stability. When $r=0.5$ and $\Omega=1.0$, the steady states $(a_{2m+1},\theta_{2m+1})$ are centres and those at $(a_{2m},\theta_{2m})$ are saddles with heteroclinic connections; see Figure 4(a). In the neighborhood of the centres the orbits are periodic representing oscillation of the sheet without rotation. Outside the heteroclinic connections, the solutions involve rotation of the sheet. The direction of rotation for initial data located to the left of the heteroclinic orbits is opposite to that for initial data located to the right of the heteroclinic orbits. For $r=1.0$ and $\Omega=0.5$, the steady states at $(a_{2m},\theta_{2m})$ are saddle points linked by heteroclinic connections, as seen in Figure 4(b). Orbits to the left of the heteroclinic connections represent periodic solutions for which the sheet rotates in the counter-clockwise direction while its length oscillates. This is because in the second equation in system (45) we obtain $\dot{\theta}\sim a^{-2}>0$ for sufficiently small $a$.
On the other hand, orbits to the right of the heteroclinic connection represent unbounded solutions in which the length of the vortex sheet goes to infinity as $t\rightarrow\infty$ while the inclination angle $\theta$ asymptotically approaches a constant angle $\theta_{\infty}:=-\frac{\pi}{12}+m\pi$, $m\in{\mathbb{Z}}$, which satisfies the relation $\Omega+r\sin 2\theta_{\infty}=0$. For $r=0.5$ and $\Omega=-1.0$, we observe only periodic orbits in which the vortex sheet is rotating in the counter-clockwise direction; see Figure 4(c). Longer sheets exhibit a more significant variation of their length during one period of rotation. We reiterate that in the analysis presented in this section we restricted our attention only to those solutions in which the sheet retains the form of a straight segment with variable length and inclination angle. Determining the effect of external fields on motions of the vortex sheet involving arbitrary deformations remains an open problem. ## 5 Relation between Rotating Sheets and Ellipses ### 5.1 Stability Analysis Based on Limit of Kirchhoff’s Rotating Ellipse We first list stability results for Kirchhoff’s ellipse following Love (1893), who first found instability for $a/b>3$, where $a$ and $b$ are, respectively, the semi-major and semi-minor axes of the ellipse, and then investigate the limit of large aspect ratio $a/b$. Following the notation of Mitchell & Rossi (2008), the dimensional frequency, $\lambda_{*}$, of a mode-$m$ disturbance satisfies $\lambda_{*}^{2}=\frac{\omega^{2}}{4}\left[\left(\frac{2mab}{(a+b)^{2}}-1\right)^{2}-\left(\frac{a-b}{a+b}\right)^{2m}\right],$ (49) where $\omega$ is the value of the constant vorticity inside the ellipse and $m>0$. Now consider the limit of (49) as $b/a$ tends to $0$, with the circulation, ${\widehat{\Gamma}}=\pi\omega ab$, kept constant.
This leads to $\lambda_{*}^{2}=\frac{\omega^{2}b^{2}}{a^{2}}(2m-m^{2})+o(1).$ (50) This is negative for $m>2$, so that modes with $m>2$ are unstable. The growth rate increases with $m$, a characteristic sign of ill-posedness. We nondimensionalize $\lambda_{*}$ using the dimensional angular velocity $\Omega_{*}=\omega ab/(a+b)^{2}$ of the ellipse. This angular velocity tends to $\omega b/a$ as $b/a\to 0$, so we obtain the nondimensional frequency $\lambda=\frac{\lambda_{*}}{\Omega_{*}}=\pm\mathrm{i}\sqrt{m^{2}-2m}=\pm\mathrm{i}[(m-1)^{2}-1]^{1/2}.$ (51) To relate this result to the vortex sheet frequencies given by (28a)–(28c), we note that the Cartesian mode number $k$ is related to the azimuthal mode number $m$ by $m=k+1$, as in (16a–b) of Mitchell & Rossi (2008). We see that the unstable growth rates with $m>2$ correspond to (28c). The neutral mode $m=2$ corresponds to (28b). The stable oscillations with $m=1$ correspond to (28a). This limiting process is illustrated in Figure 5 where we show Kirchhoff’s elliptic vortex with aspect ratio 100 together with its deformation by the highest-wavenumber unstable mode predicted by Love’s analysis (Love, 1893; Mitchell & Rossi, 2008), which for the given aspect ratio corresponds to $m=64$. In Figure 5 we note the emergence of a deformation pattern in the form of a slanted wave which is also evident in Figures 2a,c,e.

Figure 5: (Thick solid line) a section of Kirchhoff’s elliptic vortex with aspect ratio 100 together with (thin red line) its deformation by the unstable mode with wavenumber $m=64$.

### 5.2 Limit of Kirchhoff’s ellipse in the presence of external fields

The generalizations of the rotating equilibrium of a vortex sheet described in § 2.1 can be obtained by considering suitable limits of the evolution of Kirchhoff’s ellipse in the presence of external fields. Our discussion of the generalizations of Kirchhoff’s elliptical vortex follows § 9.3 in Saffman (1992).
Given a prescribed strain $U-\mathrm{i}V=(\mathrm{i}e(t)+g(t))z$, where $e(t)$ and $g(t)$ are real-valued functions of time, there exist time-dependent patch solutions with constant vorticity $\omega$ in the form of rotating ellipses whose semi-major and semi-minor axes $a(t)$ and $b(t)$ vary with time and the semi-major axis makes an angle $\theta(t)$ with the $x$-axis, as described by equations (19)–(20) in § 9.3 of Saffman (1992). Taking the limit $b\to 0$ with constant circulation ${\widehat{\Gamma}}=\pi\omega a(t)b(t)$ so that $\omega\to\infty$, we obtain the equations governing the evolution of the vortex sheet in the prescribed strain in terms of its half-length $a(t)$ and inclination angle $\theta(t)$ in the form $\dot{a}=-ea\sin{2\theta}+ga\cos{2\theta},\qquad\dot{\theta}=\frac{{\widehat{\Gamma}}}{\pi a^{2}}-e\cos{2\theta}-g\sin{2\theta}.$ (52) The circulation density of the sheet is then $2{\widehat{\Gamma}}\sqrt{1-s^{2}/a(t)^{2}}/(\pi a(t))$, where $s\in[0,a(t)]$ is the distance from the origin. It is clear that in the absence of the external strain ($e=g=0$) we recover the rotating equilibrium discussed in § 2.1. Moreover, the equations are equivalent to (45) when we consider the single vortex sheet (40) with circulation ${\widehat{\Gamma}}=\pi$ in a steady external strain, i.e. $e=-r\sin\theta_{0}$ and $g=r\cos\theta_{0}$, with $\Omega=0$. The Moore–Saffman solutions are steady states involving ellipses in a uniform straining field (Moore & Saffman, 1971). The corresponding vortex sheet equilibrium can be obtained without loss of generality by taking $g=0$ in (52), which yields $\dot{a}=0$, $\theta=0$ along with $e={\widehat{\Gamma}}/(\pi a^{2})$.
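A minimal numerical sketch of (52) (our own illustration, not the authors' code) confirms that the strain-free case reduces to rigid rotation: with $e=g=0$, the half-length is constant and $\theta$ grows at the constant rate ${\widehat{\Gamma}}/(\pi a^{2})$.

```python
import math

GAMMA = math.pi  # circulation of the sheet, as in the text

def rhs(a, theta, e, g):
    """Right-hand side of eq. (52) for steady strain components e, g."""
    da = -e * a * math.sin(2 * theta) + g * a * math.cos(2 * theta)
    dtheta = GAMMA / (math.pi * a**2) - e * math.cos(2 * theta) - g * math.sin(2 * theta)
    return da, dtheta

def integrate(a0, theta0, e, g, T=10.0, n=10000):
    """Classical RK4 integration of (52); returns (a(T), theta(T))."""
    h = T / n
    a, th = a0, theta0
    for _ in range(n):
        k1 = rhs(a, th, e, g)
        k2 = rhs(a + h/2 * k1[0], th + h/2 * k1[1], e, g)
        k3 = rhs(a + h/2 * k2[0], th + h/2 * k2[1], e, g)
        k4 = rhs(a + h * k3[0], th + h * k3[1], e, g)
        a += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        th += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return a, th

# e = g = 0: the rotating equilibrium of § 2.1, with theta growing at rate 1/a^2
a_end, th_end = integrate(2.0, 0.0, 0.0, 0.0)
```

With $a_{0}=2$ the rotation rate is ${\widehat{\Gamma}}/(\pi a^{2})=1/4$, so after $T=10$ the angle is $2.5$.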
Saffman (1992) points out that if patches do not satisfy the appropriate condition, “the vortex is pulled out into a long thin ellipse along the principal axis of extension.” Since the vortex sheet already has zero thickness, such circumstances will result in unbounded growth of the sheet length $a(t)$ accompanied by the vanishing of its circulation density. The effect of solid-body rotation represented by an extra term of the form $-\mathrm{i}\Omega_{0}\overline{z}$ was considered by Kida (1981). In terms of the evolution of the vortex sheet the only difference is an extra term $\Omega$ in the equation for $\dot{\theta}$ in (52), and this recovers (45) for the case of steady strain with rotation $\Omega$.

## 6 Discussion and Conclusions

In this study we have established a number of new results concerning the stability of the rotating and translating equilibria of open finite vortex sheets. Some of these findings complement analogous results already known for unbounded, periodic and circular vortex sheets. The main difference between these two types of equilibria, and at the same time the source of several technical difficulties here, is the presence of the endpoints. The stability analysis of rotating equilibria shows similar behavior to straight periodic sheets (Saffman, 1992) and circular sheets (Michalke & Timme, 1967). More specifically, there is a countably infinite family of unstable modes with growth rates increasing with the wavenumber $k$, as shown in (28c). Away from the endpoints and in the limit of large wavenumbers the corresponding unstable eigenmodes resemble the unstable eigenmodes of a straight periodic sheet which have the form $(1-\mathrm{i})\sin(k\xi)$, $\xi\in[0,2\pi]$. More precisely, near the centre of the sheet the unstable eigenmodes have the form of slanted sine and cosine waves. The reason for this analogy can be understood by examining the structure of the eigenvalue problem (18) and the hypersingular integral operator (25).
We see that when the eigenvalues $\lambda$ have large magnitude, the terms due to the background rotation in (18) are dominated by the other terms. Moreover, when the integral operator ${\mathcal{H}}$ acts on high-wavenumber perturbations $\zeta(\Gamma(x))$, the circulation density $\gamma_{0}(x)$ in (8) can be locally approximated by a constant for $x$ away from the endpoints. Thus, in this limit, the structure of the eigenvalue problem (18) becomes similar to the structure of the eigenvalue problem characterizing the stability of straight periodic vortex sheets (Saffman, 1992). Therefore, we can conclude that rotating finite sheets are subject to the same Kelvin–Helmholtz instability as straight sheets, which becomes more severe at higher wavenumbers, rendering this problem similarly ill-posed. On the other hand, the solution of the stability problem for the translating vortex sheet in § 3.2.2 is more nuanced since, as a result of the degeneracy of the eigenvalue problem (20) with the hypersingular integral operator (29), this equilibrium sustains unstable modes growing at an algebraic rather than exponential rate. However, this algebraic growth rate can be arbitrarily large provided the perturbations vary sufficiently rapidly in space. Thus, this problem is ill-posed in a similar way to vortex sheets exhibiting the classical Kelvin–Helmholtz instability. As suggested by the form of the generalized eigenvectors shown in Figure 3, this analysis captures the general form of the instability actually observed in numerical computations (Krasny, 1987; DeVoria & Mohseni, 2018), although direct comparisons are made difficult by the fact that the computations relied on various regularized forms of the Birkhoff–Rott equation (3), while no such regularization was used in our stability analysis. The Prandtl–Munk vortex is thus the only known equilibrium involving a vortex sheet which does not have exponentially unstable modes.
This property can be attributed to the fact that the corresponding circulation density (12) is not sign-definite, so that the self-induced straining field exerts a stabilizing effect. The results reported in § 4 show that the rotating equilibrium discussed in § 2.1 is “robust” in the sense that configurations involving straight sheets but with time-dependent length and inclination angle also arise as solutions in the presence of external fields. However, we remark that the results presented in § 4.2 do not represent a complete stability analysis since they do not account for perturbations affecting the shape of the sheet. Generalizing this analysis to account for such shape-deforming perturbations is thus an open problem. For the periodic solutions of Figure 4(a), this could be done by combining the methods from § 3 with Floquet theory. Another interesting open question is whether the translating equilibrium admits generalizations analogous to those discussed in § 4. The analysis presented in § 5 demonstrates that the relation between the rotating vortex sheet and Kirchhoff’s ellipse stipulated by Batchelor (1988) does not merely concern the form of the equilibrium configurations, but also applies to their stability properties. That this should be the case seems nontrivial because in the infinite-aspect-ratio limit the form of the Euler equation used to describe vortex patches and its linearization lose validity. The practical value of the stability results obtained in § 3 is their simple and explicit form, making comparisons with stability analyses of other configurations, such as a straight infinite vortex sheet, straightforward. In contrast, the expressions describing unstable modes of Kirchhoff’s ellipse obtained by Love (1893) are rather complicated. The relation between the evolution of unbounded sheets of finite and zero thickness was considered by Baker & Shelley (1990); Benedetto & Pulvirenti (1992).
An interesting open question is whether there exists a family of vortex-patch equilibria that will converge to the Prandtl–Munk vortex in a certain limit. Another open problem is to understand whether the equilibria considered here are unique in the class of configurations involving a single open finite vortex sheet.

## Acknowledgments

The authors thank Kevin O’Neil for interesting discussions about his approach and anonymous reviewers for insightful and constructive comments on the paper. The first author acknowledges partial support through an NSERC (Canada) Discovery Grant. The third author was partially supported by the JSPS Kakenhi (B) (#18H01136), the RIKEN iTHEMS program in Japan, and a grant from the Simons Foundation in the USA. The authors would also like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme “Complex analysis: techniques, applications and computations” where this work was initiated. This programme was supported by EPSRC grant number EP/R014604/1.

## Appendix A Derivation of solutions from § 4 following O’Neil’s formulation

In this appendix we show that the solutions obtained in § 4 as generalizations of the rotating equilibrium from § 2.1 can be obtained in an entirely different manner using the method of O’Neil (2018a, b). We begin with vortex sheet equilibria in the presence of uniform strain without rotation. The velocity field due to the sheet ${\mathcal{L}}$ and the strain field is $f(z)=\frac{1}{2\pi\mathrm{i}}\int_{\mathcal{L}}\frac{\gamma(w)\overline{\tau}(w)}{z-w}\,\mathrm{d}w+r\mathrm{e}^{-\mathrm{i}\theta_{0}}z,\qquad z\notin{\mathcal{L}}.$ (53) The argument of O’Neil (2018a, b) shows that the extension of $f^{2}(z)$ in the finite plane including the sheet ${\mathcal{L}}$ is entire.
At infinity, we find $\displaystyle f^{2}(z)$ $\displaystyle=\left[r\mathrm{e}^{-\mathrm{i}\theta_{0}}z+\frac{1}{2\pi\mathrm{i}z}\int_{\mathcal{L}}\gamma(w)\,|\mathrm{d}w|+O(|z|^{-2})\right]^{2}=r^{2}\mathrm{e}^{-2\mathrm{i}\theta_{0}}z^{2}+\frac{{\widehat{\Gamma}}r\mbox{e}^{-\mathrm{i}\theta_{0}}}{\pi\mathrm{i}}+O(|z|^{-1}),$ $\displaystyle=r^{2}\mathrm{e}^{-2\mathrm{i}\theta_{0}}z^{2}-\mathrm{i}r\mathrm{e}^{-\mathrm{i}\theta_{0}}+O(|z|^{-1}),\qquad|z|\rightarrow\infty,$ (54) where ${\widehat{\Gamma}}=\pi$ is the circulation along the sheet. By Liouville’s theorem, $f^{2}(z)$ is in fact equal to the sum of the constant term and the term unbounded at infinity, i.e. one drops the terms $O(|z|^{-1})$. In a steady state the endpoints of the sheet ${\mathcal{L}}$ must be stagnation points. Parameterizing the points on the vortex sheet as $w=as\mathrm{e}^{\mathrm{i}\theta}$ with $-1\leq s\leq 1$, we obtain at the endpoints $f^{2}(\pm a\mathrm{e}^{\mathrm{i}\theta})=r^{2}a^{2}\mathrm{e}^{2\mathrm{i}(\theta-\theta_{0})}-\mathrm{i}r\mathrm{e}^{-\mathrm{i}\theta_{0}}=0,$ (55) which gives rise to the steady states (47) with $\Omega=0$. For the general non-stationary case, with the points of the vortex sheet given by $z=a(t)s\mathrm{e}^{\mathrm{i}\theta(t)}$, $-1\leq s\leq 1$, we have $f(z)=\frac{1}{2\pi\mathrm{i}}\int_{\mathcal{L}}\frac{\gamma(w)\overline{\tau}(w)}{z-w}\,\mathrm{d}w+r\mathrm{e}^{-\mathrm{i}\theta_{0}}z=[\mathrm{i}\dot{\theta}+u(s)]\overline{z}.$ (56) The term proportional to $\dot{\theta}$ represents rotation and is expected. The real function $u(s)$ in the last term corresponds to the tangential velocity along the contour resulting from its extension or contraction. It needs to be obtained as part of the solution, but we only need to satisfy the kinematic condition $u(1)=\dot{a}$ at the endpoints of the sheet where $s=1$. 
Using the identity $\overline{z}=\mathrm{e}^{-2\mathrm{i}\theta}z$, valid for points $z$ on the sheet, and employing the same process as in (54) above shows that the function $[f(z)-(\mathrm{i}\dot{\theta}+u(s))\mathrm{e}^{-2\mathrm{i}\theta}z]^{2}$ is meromorphic. The limit $|z|\to\infty$ then gives $[f(z)-(\mathrm{i}\dot{\theta}+u)\mathrm{e}^{-2\mathrm{i}\theta}z]^{2}=A^{2}z^{2}+\frac{{\widehat{\Gamma}}A}{\mathrm{i}\pi}=A^{2}z^{2}-\mathrm{i}A=0,$ (57) where $A=r\mathrm{e}^{-\mathrm{i}\theta_{0}}-(\mathrm{i}\dot{\theta}+u(s))\mathrm{e}^{-2\mathrm{i}\theta}$ and ${\widehat{\Gamma}}=\pi$. We now use the kinematic relation $u(1)=\dot{a}$ at $z=a\mathrm{e}^{\mathrm{i}\theta}$ and obtain $Aa^{2}\mathrm{e}^{2\mathrm{i}\theta}=\mathrm{i}.$ (58) Separating this relation into real and imaginary parts gives the equations (45) for $\dot{a}$ and $\dot{\theta}$ with $\Omega=0$. We can also obtain an expression for $u(s)$ from $f(z)$ if desired.

## References

* Alben, S. 2009 Simulating the dynamics of flexible bodies and vortex sheets. J. Comp. Phys. 228, 2587–2603.
* Alben, S. 2015 Flag flutter in inviscid channel flow. Phys. Fluids 27 (3), 033603, https://doi.org/10.1063/1.4915897.
* Baker, G. R. & Shelley, M. J. 1990 On the connection between thin vortex layers and vortex sheets. J. Fluid Mech. 215, 161–194.
* Batchelor, G. K. 1988 An Introduction to Fluid Dynamics, 7th edn. Cambridge, New York: Cambridge University Press.
* Benedetto, D. & Pulvirenti, M. 1992 From vortex layers to vortex sheets. SIAM J. Applied Math. 52, 1041–1056.
* Boyd, J. P. 2001 Chebyshev and Fourier Spectral Methods. Dover.
* Brady, M., Leonard, A. & Pullin, D. I. 1998 Regularized vortex sheet evolution in three dimensions. J. Comp. Phys. 146, 520–545.
* Caflisch, R. E., ed. 1989 Mathematical Aspects of Vortex Dynamics, Philadelphia, PA. SIAM.
* DeVoria, A. C. & Mohseni, K. 2018 Vortex sheet roll-up revisited. J. Fluid Mech. 855, 299–321.
* DLMF 2020 NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/, Release 1.1.0 of 2020-12-15, F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds.
* Elling, V. & Gnann, M. V. 2019 Variety of unsymmetric multibranched logarithmic vortex spirals. Eur. J. Appl. Math. 30, 23–38.
* Estrada, R. & Kanwal, R. P. 2012 Singular Integral Equations. Birkhäuser Boston.
* Jones, M. A. 2003 The separated flow of an inviscid fluid around a moving flat plate. J. Fluid Mech. 496, 405–441.
* Jones, M. A. & Shelley, M. J. 2005 Falling cards. J. Fluid Mech. 540, 393–425.
* Kida, S. 1981 Motion of an elliptic vortex in a uniform shear flow. J. Phys. Soc. Japan 50, 3517–3520.
* Krasny, R. 1986a Desingularization of periodic vortex sheet roll-up. J. Comp. Phys. 65, 292–313.
* Krasny, R. 1986b A study of singularity formation in a vortex sheet by the point vortex approximation. J. Fluid Mech. 167, 65–93.
* Krasny, R. 1987 Computation of vortex sheet roll-up in the Trefftz plane. J. Fluid Mech. 184, 123–155.
* Krasny, R. & Nitsche, M. 2002 The onset of chaos in vortex sheet flow. J. Fluid Mech. 454, 47–69.
* Lopes Filho, M. C., Nussenzveig Lopes, H. J. & Souza, M. O. 2003 On the Equation Satisfied by a Steady Prandtl-Munk Vortex Sheet. Commun. Math. Sci. 1, 68–73.
* Lopes Filho, M. C., Nussenzveig Lopes, H. J. & Schochet, S. 2007 A criterion for the equivalence of the Birkhoff-Rott and Euler descriptions of vortex sheet evolution. Trans. Amer. Math. Soc. 359, 4125–4142.
* Love, A. E. H. 1893 On the stability of certain vortex motions. Proc. London Math. Soc. s1–25, 18–43.
* Majda, A. J. & Bertozzi, A. L. 2002 Vorticity and Incompressible Flow. Cambridge University Press.
* Marchioro, C. & Pulvirenti, M. 1993 Mathematical Theory of Incompressible Nonviscous Fluids. Springer.
* Michalke, A. & Timme, A. 1967 On the inviscid instability of certain two-dimensional vortex-type flows. J. Fluid Mech. 29, 647–666.
* Milne-Thomson, L. M. 1973 Theoretical Aerodynamics. New York: Dover Publications.
* Mitchell, T. B. & Rossi, L. F. 2008 The evolution of Kirchhoff elliptic vortices. Phys. Fluids 20, 054103.
* Moore, D. W. 1979 The spontaneous appearance of a singularity in the shape of an evolving vortex sheet. Proc. Roy. Soc. A 365, 105–119.
* Moore, D. W. & Saffman, P. G. 1971 Structure of a line vortex in an imposed strain. In Aircraft wake turbulence (ed. Goldburg Olsen & Rogers), pp. 339–354. Plenum.
* Munk, M. M. 1919 Isoperimetrische Aufgaben aus der Theorie des Fluges. PhD thesis, University of Göttingen.
* Muskhelishvili, N. I. 2008 Singular Integral Equations. Boundary Problems of Function Theory and Their Application to Mathematical Physics, 2nd edn. Dover.
* O’Neil, K. A. 2018a Dipole and multipole flows with point vortices and vortex sheets. Reg. Chaotic Dyn. 23, 519–529.
* O’Neil, K. A. 2018b Relative equilibria of point vortices and linear vortex sheets. Phys. Fluids 30, 107101.
* Perko, L. 2008 Differential Equations and Dynamical Systems. Texts in Applied Mathematics 7. Springer New York.
* Protas, B. & Sakajo, T. 2020 Rotating equilibria of vortex sheets. Physica D: Nonlinear Phenomena 403, 132286.
* Pullin, D. I. & Sader, J. E. 2021 On the starting vortex generated by a translating and rotating flat plate. J. Fluid Mech. 906, A9.
* Saffman, P. G. 1992 Vortex Dynamics. Cambridge, New York: Cambridge University Press.
* Sakajo, T. 2001 Numerical computation of a three-dimensional vortex sheet with swirl flow. Fluid Dyn. Res. 28 (6), 423–448.
* Sakajo, T. & Okamoto, H. 1996 Numerical computation of vortex sheet roll-up in the background shear flow. Fluid Dyn. Res. 17, 195–212.
# BF++: a language for general-purpose program synthesis

Vadim Liventsev Aki Härmä Milan Petković

###### Abstract

Most state of the art decision systems based on Reinforcement Learning (RL) are data-driven black-box neural models, where it is often difficult to incorporate expert knowledge into the models or let experts review and validate the learned decision mechanisms. Knowledge-insertion and model review are important requirements in many applications involving human health and safety. One way to bridge the gap between data and knowledge driven systems is program synthesis: replacing a neural network that outputs decisions with a symbolic program generated by a neural network or by means of genetic programming. We propose a new programming language, BF++, designed specifically for automatic programming of agents in a Partially Observable Markov Decision Process (POMDP) setting and apply neural program synthesis to solve standard OpenAI Gym benchmarks. Source code is available at https://github.com/vadim0x60/cibi

Keywords: Reinforcement Learning $\cdot$ Program Synthesis $\cdot$ Programming Languages

## 1 Introduction

Reinforcement Learning (RL) has been applied successfully in fields like Energy, Finance and Robotics [1]. However, traditional approaches to Reinforcement Learning involve black box models that preclude any exchange of knowledge between experts and ML algorithms. In safety-critical fields like Healthcare [2], the ability to understand the decision algorithms induced by artificial intelligence, as well as to initialize the system using expert knowledge for an acceptable baseline performance, is required for acceptability. (Safety requirements in healthcare are the main motivation for our research; however, in this paper we use conventional OpenAI Gym benchmarks to enable comparison between methods.)
In this work we focus on an alternative approach for RL based on program induction, known as Programmatically Interpretable Reinforcement Learning [3]. We introduce BF++, a new programming language tailor-made for this approach (section 4.1). We then demonstrate that neural program synthesis with BF++ can solve arbitrary reinforcement learning challenges and gives us an avenue for knowledge sharing between domain experts and data-driven models via the mechanism of _expert inspiration_ (section 5.5).

## 2 Background

In this paper we define a Reinforcement Learning environment as a Partially Observable Markov Decision Process [4, 5]: when at step $i$ the agent takes action $a_{i}\in A$, it has an impact on the state of the environment $s_{i}\in S$ via the distribution $p_{s}(s_{i+1}|s_{i},a_{i})$ of conditional probabilities of possible subsequent states. State is a latent variable that the agent cannot observe. Instead, the agent can see an observation $o_{i}\in O$, which is a random variable that depends on the latent state via the distribution $p_{o}(o_{i}|s_{i},a_{i})$. $A$, $S$ and $O$ are the sets of all possible actions, states and observations respectively. Finally, at every step the agent observes a reward $r_{i}=R(s_{i},a_{i})$. Given this limited toolset, without full (or any) prior knowledge of how the agent’s actions influence the environment (distributions $p_{s}(s_{i+1}|s_{i},a_{i})$ and $p_{o}(o_{i}|s_{i},a_{i})$), the agent has to come up with a strategy that will maximize the $n$-step return $R_{n}=\sum_{t=i}^{n}r_{t}$, where $n$ is the agent’s planning horizon. It is, in the general sense, a hyperparameter; however, if an environment has a limit on how many steps an episode can last, it is reasonable to set $n$ equal to the step limit.
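The interaction protocol above can be sketched in code; the toy environment and policy below are invented purely for illustration and are not part of BF++:

```python
class ToyPOMDP:
    """Hypothetical two-state POMDP for illustration only: the latent state
    flips when the agent takes action 1, and is observed without noise."""
    def __init__(self):
        self.state = 0
    def step(self, action):
        self.state = (self.state + action) % 2    # p_s(s_{i+1} | s_i, a_i)
        observation = self.state                  # p_o(o_i | s_i, a_i), noiseless here
        reward = 1.0 if self.state == 1 else 0.0  # r_i = R(s_i, a_i)
        return observation, reward

def n_step_return(env, policy, n):
    """R_n: the sum of rewards over the agent's planning horizon n."""
    total, obs = 0.0, 0  # obs starts at the environment's initial observation
    for _ in range(n):
        obs, reward = env.step(policy(obs))
        total += reward
    return total

# a memoryless policy mapping observations to actions: flip only in state 0
R = n_step_return(ToyPOMDP(), lambda o: 1 if o == 0 else 0, n=5)
```

This policy reaches and holds the rewarding state, so $R_{5}=5$; the point of the sketch is only that the agent sees $o_{i}$ and $r_{i}$, never $s_{i}$.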
Conventional solutions [6] introduce a parametrized _policy function_ $\pi_{\phi}(a|s)$ that defines the agent’s behavior as a probability distribution over actions and/or a function $Q_{\phi}(a|s)$ that defines what $R_{n}$ the agent expects to receive if they take action $a$. Parameters $\phi$ are learned empirically, using gradient descent or evolutionary methods [7, 8]. This approach has been applied extensively and with great success [9] in Partially Observable Markov Decision Process (POMDP) settings; however, it does have major limitations:

1. The agent is defined as stateless. As such, when making a decision $a_{i}$ the agent is unable to take into account any observations it made prior to step $i$. Long-term dependencies like "this patient should not receive this drug since she has shown signs of allergy when this drug was administered to her 17 iterations ago" cannot be captured by a memoryless model.
2. The agent is represented as a set of model weights $\phi$, often with millions of parameters. Such an agent can be used as a black box decision system, but domain experts are unable to understand and/or make their contributions to the agent’s programming.

In this paper, we address these limitations by representing an RL agent with a program in a specialized language, to be introduced in section 4.1, as opposed to $\pi_{\phi}$ and $Q_{\phi}$.

## 3 Related work

Despite Program Synthesis being one of the most challenging tasks in Computer Science, many solutions exist; see [10]. They can be roughly classified by specification modality: how are the requirements for a program to be synthesized communicated to the generative model? The most advanced program synthesis technology to date is deep neural network-based language modeling [11]. These models are autocomplete engines: given a fragment of a program, also known as a prompt, they predict the fragment that follows.
Such models can be very powerful; however, the prompt is a suboptimal form of specification, creating an open problem of prompt engineering: generating the first fragment of the program in such a way that it encourages the language model to produce the desired program [12]. If the requirements are specified as a natural language description of what a program should do, program synthesis becomes a machine translation task [13]. A neural model can be pre-trained as a language model and fine-tuned on a translation dataset like CoNaLa [14] or, in the case of AlphaCode [15], a dataset of competitive programming tasks and submitted solutions. If the requirements are specified as a set of inputs to the program along with expected outputs, the task is known as programming by example [16, 17] using techniques like neural-guided program search [18]. One can also generate input-output pairs artificially [19]. Models like Neural Turing Machines [20], Memory Networks [21] and Neural Random Access Machines [22] are also trained with input-output pairs, and even though they don’t explicitly generate code, they fit the definition of a program. In this work our goal is to synthesize programs with no explicit specification, only an environment where the program can be tested. This task is typically tackled with neural program synthesis [3, 23] or genetic programming [24] in a domain-specific language, where the reward function of the POMDP is known as the _fitness function_. However, to the best of our knowledge, there is no programming language for POMDP settings specifically. Because of this, applications of genetic programming for Reinforcement Learning challenges have been limited and [23], for example, only supports non-interactive programs, i.e. a program is a mapping from an input string to an output string.
## 4 BF++

### 4.1 BF syntax

Abolafia et al [23] picked BF (Brainfuck [25]) as their language for program synthesis for the following reasons:

* In industry-grade programming languages like Python or Java, program code can contain a very large variety of characters, since any of the 143859 Unicode [26] characters can be used in string literals. In BF, however, only 8 characters can be used: they can be one-hot-encoded with vectors of size 8.
* BF’s simple syntax means that an arbitrary string of valid characters is likely to be a valid program. In more complex languages, most possible strings result in a syntax error. A generative model being trained to write programs in such a language risks being stuck in a long exploration phase when all the programs it generates are invalid and it has no positive examples in the dataset.
* Despite all of the above, it is a Turing-complete language.

The simplicity of the language also means that it is relatively easy to develop a compiler that translates programs from industry-standard programming languages like Java and Python to BF, thus making use of the expert knowledge existing in those languages. In the current paper, we introduce an extended version of the original BF language, BF++. As explained below, the extensions to the original BF syntax are particularly useful in the RL use cases. BF’s runtime model is inspired by the classic Turing Machine [27]: at any point during the program’s execution, the state of the program consists of:

* An infinite tape of cells $T$ where each cell holds an integer number (if you happen to be executing a BF program on a computer with finite memory, the tape will be finite due to your hardware limitations).
* A memory pointer $p_{T}$ that points to a certain cell in the tape (active cell $T^{p_{T}}$).
* A string of characters $C$ that represents program code.
* A code pointer $p_{C}$ pointing to a character about to be executed.
The code pointer starts at the first character, then this character gets executed and the pointer is incremented (moved to the next character). There are 8 possible characters:

* > Move the memory pointer one cell right. $p_{T}:=p_{T}+1$
* < Move the memory pointer one cell left. $p_{T}:=p_{T}-1$
* + Increment the active cell. $T^{p_{T}}:=T^{p_{T}}+1$
* - Decrement the active cell. $T^{p_{T}}:=T^{p_{T}}-1$
* . Write $T^{p_{T}}$ from the active cell to the output stream (the definition of input and output streams is purposefully underspecified; it may depend on the particular implementation).
* , Read $x$ from the input stream to the active cell. $T^{p_{T}}:=x$
* [ If the active cell $T^{p_{T}}=0$, jump (move $p_{C}$) to the matching $]$.
* ] If the active cell $T^{p_{T}}\neq 0$, jump (move $p_{C}$) to the matching $[$.

The [ and ] commands constitute a loop that will be executed repeatedly until the active cell becomes zero. They are also the only way to write a BF program with a syntax error: a valid BF program is one that doesn’t contain non-matching [ or ].

### 4.2 Negative values

In BF memory cells $T^{i}$ hold non-negative values only. In BF++ $T^{i}\in\mathbb{Z}$, a negation operator ~ is introduced and the operators [ and ] are redefined as follows:

* ~ Negate the active cell. $T^{p_{T}}:=-T^{p_{T}}$
* [ If the active cell $T^{p_{T}}\geq 0$, jump (move $p_{C}$) to the matching $]$.
* ] If the active cell $T^{p_{T}}<0$, jump (move $p_{C}$) to the matching $[$.

This decision was taken because negative observations are common in control problems (see section 5), as is branching on whether the observed value is positive or negative.

### 4.3 Non-blocking action operators

The main issue of BF as a language for Reinforcement Learning is its input-output system. It assumes that the program can freely decide on the relative frequency of inputs to outputs.
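The tape and the redefined loop semantics can be sketched in a minimal interpreter; this is our own illustration, not the BF++ reference implementation, and it omits the I/O operators:

```python
def run(code, steps=10000):
    """Execute a BF++-style fragment over an integer tape and return the tape.
    Loops follow the redefined semantics: '[' jumps forward when the active
    cell is >= 0, ']' jumps back when it is < 0."""
    # precompute matching brackets; non-matching brackets are a syntax error
    stack, match = [], {}
    for i, ch in enumerate(code):
        if ch == '[':
            stack.append(i)
        elif ch == ']':
            assert stack, "non-matching ]"
            j = stack.pop()
            match[i], match[j] = j, i
    assert not stack, "non-matching ["
    tape, p, pc = {}, 0, 0  # sparse tape: absent cells default to 0
    while pc < len(code) and steps > 0:
        ch = code[pc]
        if ch == '>': p += 1
        elif ch == '<': p -= 1
        elif ch == '+': tape[p] = tape.get(p, 0) + 1
        elif ch == '-': tape[p] = tape.get(p, 0) - 1
        elif ch == '~': tape[p] = -tape.get(p, 0)
        elif ch == '[' and tape.get(p, 0) >= 0: pc = match[pc]
        elif ch == ']' and tape.get(p, 0) < 0: pc = match[pc]
        pc += 1
        steps -= 1
    return tape

tape = run('+++~')     # increment three times, then negate: cell 0 holds -3
tape2 = run('---[+]')  # the loop counts the negative cell back up to zero
```

Note how `'+++[---]'` skips its loop entirely: a non-negative cell jumps `[` straight to the matching `]`.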
For example, the following program +[.....,] inputs 5 integers, outputs the 5th character it read, then goes back to the beginning and proceeds indefinitely, outputting every 5th character it inputs. Thus it assumes a 5:1 frequency of inputs to outputs. If we simply assume that inputs are observations and outputs are actions, such a program will not be able to operate in a POMDP environment where the I/O frequency is fixed at 1:1 and an agent that has made an observation has to act before it can make the next observation. In other words, operators . and , are blocking: . stops program execution and waits until there is an opportunity to act in the environment, while , stops program execution and waits until new input is received to resume execution. To address this, in BF++ the . operator is non-blocking. It outputs the current value of the active cell by placing it at the bottom of the action queue $S$, a sequence of integer numbers that represent actions the program is planning to take in the environment. We also introduce a non-blocking operator ! that places $T^{p_{T}}$ on top of the action queue. $\begin{array}[]{cc}.&S:=S^{\frown}(T^{p_{T}})\\\ !&S:=(T^{p_{T}})^{\frown}S\end{array}$ (1) where $\frown$ denotes concatenation of tuples. The program can thus decide, by using . or !, whether the newly added action takes precedence over ones already in the queue. As soon as an opportunity to act arises, the top of the action queue (item $S^{1}$ or several items $S^{1},S^{2},\dots$, see section 5.2) defines which action the program takes and is then removed from the queue. If $S^{k}$ does not exist (the queue is empty or shorter than $k$), the default value $S^{k}=0$ is assumed. The , operator, on the other hand, remains blocking. Thus its function is more important than just reading an observation into memory: executing , is when the program moves to the next step of the POMDP.
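The queue semantics of eq. (1) can be sketched with a double-ended queue; this is our own illustration and the class and method names are hypothetical:

```python
from collections import deque

class ActionQueue:
    """Sketch of the action queue S from eq. (1): '.' appends at the bottom,
    '!' pushes on top; acting consumes S^1, defaulting to 0 when empty."""
    def __init__(self):
        self.S = deque()
    def dot(self, value):   # .  S := S ⌢ (T^pT)
        self.S.append(value)
    def bang(self, value):  # !  S := (T^pT) ⌢ S
        self.S.appendleft(value)
    def act(self):          # pop S^1; an empty queue yields the default 0
        return self.S.popleft() if self.S else 0

q = ActionQueue()
q.dot(1); q.dot(2); q.bang(3)  # queue is now (3, 1, 2)
```

Here the `!`-queued action 3 is taken first, then 1 and 2 in order, and once the queue runs out the default action 0 is returned.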
### 4.4 Virtual comma

Naively implemented, a system where the , operator is the only way to proceed to the following iteration means that, to be successful in any POMDP environment, a program has to contain an infinite loop with a , operator. Any program that has a finite number of , steps will terminate prematurely in an environment that supports arbitrarily many iterations. Since we originally set out to develop a language where most random programs would be valid, this had to be addressed. We decided to turn any BF++ program into an infinite loop with a , operator by default:

1. Every BF++ program starts with a virtual , operator at address $p_{C}=-1$: it is executed before all operators in the code of the program, which are indexed starting from $p_{C}=0$
2. When the code pointer $p_{C}$ reaches the end of the program, it loops back to the virtual comma $p_{C}:=-1$

Due to the virtual comma, every program starts executing with the initial observation already stored in memory and available for branching/decision-making.

### 4.5 Observation discretization

Another issue complicating applications of BF to Reinforcement Learning is that since its memory tape holds only integer numbers, its inputs and outputs have to be integer as well. This issue cannot be fixed simply by replacing the integer tape with a tape of floating-point numbers, as BF's only operations for manipulating numbers are + and - (increment and decrement). Non-integer action and observation spaces are fairly common in reinforcement learning tasks, hence BF++ implements coercion mechanisms for reading and writing continuous vectors into discrete memory.
We assume that the vector observation space $O$ is a hypercube defined as an intersection of $n$ separate scalar observation spaces $O^{k}$ such that

$o_{1}\in O^{1},o_{2}\in O^{2},\dots,o_{n}\in O^{n}\Leftrightarrow(o_{1},o_{2},\dots,o_{n})\in O$ (2)

This assumption theoretically excludes some possible observation spaces, but almost all POMDP tasks discussed in the research literature and all OpenAI Gym tasks conform to it. To write an observation onto the memory tape, the observation vector of size $n$ is aligned with memory cells $T^{p_{T}},T^{p_{T}+1},\dots,T^{p_{T}+n-1}$ and each component is turned into an integer with the use of $d$ discretization bins:

$T^{p_{T}+k-1}:=\min_{\omega\in 1,\dots,d\,|\,o^{k}<\tau^{k}_{\omega}}\omega$ (3)

If $O^{k}$ is an interval $O^{k}=[o_{low},o_{high}]$, it is split into discretization bins evenly, as in eq. 4:

$\tau_{\omega}=\begin{cases}o_{low}+\frac{o_{high}-o_{low}}{d}\omega,\omega=1,2,\dots,d-1\\\ +\infty,\omega=d\end{cases}$ (4)

Figure 1: Fluid discretization example in Mountain Car

Some environments, however, have unbounded observation spaces $O^{k}=(-\infty;+\infty)$, $O^{k}=(-\infty;o_{high}]$, $O^{k}=[o_{low};+\infty)$. These spaces are challenging because the formal description of $O^{k}$ does not in any way reflect the actual underlying distribution of observations. It can be the case, for example, that $O^{k}=(-\infty;+\infty)$ but most observations found in the environment fall in the interval $[42;43]$. For such observation spaces, BF++ uses a fluid discretization system that learns the true distribution of observations online. The idea was inspired by the work of Touati et al. [28], although they assumed that $O^{k}$ has a finite diameter and did not support unbounded observation spaces. Initial thresholds $\tau_{\omega}$ can be arbitrary.
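For a bounded interval, eqs. 3-4 amount to the following (a sketch for illustration; the function name is ours, not the paper's):

```python
def discretize(o, o_low, o_high, d=5):
    """Even discretization of a bounded interval O^k = [o_low, o_high]
    (eqs. 3-4): tau_w = o_low + (o_high - o_low)/d * w for w = 1..d-1,
    tau_d = +inf; the cell receives the smallest w with o < tau_w."""
    for w in range(1, d):
        if o < o_low + (o_high - o_low) / d * w:
            return w
    return d  # tau_d = +infinity catches everything above the last threshold

# e.g. the Mountain Car position lies in [-1.2, 0.6]; with d = 5,
# discretize(-0.3, -1.2, 0.6) falls into bin 3
```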
With each new observation, the thresholds $\tau_{\omega}$ are readjusted so that among the $h$ prior observations, roughly $\omega$ out of every $d$ observations are lower than $\tau_{\omega}$:

$\underset{\tau}{\text{minimize}}\sum_{\omega\in 0,1,\dots d}|\frac{\omega}{d}-\frac{\sum_{i^{\prime}\in i-h,i-h+1,\dots,i-1}\mathbb{I}(o_{i^{\prime}}^{k}<\tau_{\omega})}{h}|$ (5)

To solve this optimization problem, one has to sort the previous $h$ observations in ascending order, so that

$\text{sort}:\\{o_{i}|i\in i-h,i-h+1,\dots,i-1\\}\longrightarrow\\{s_{i}|i\in 1,2,\dots,h\\}$ (6)

is a bijection such that $s_{1}<s_{2}<\dots<s_{h}$ holds, and set

$\tau_{\omega}=s_{\lceil\frac{\omega}{d}h\rceil}$ (7)

See figure 1 for a visual example. This system has 2 hyperparameters: $d$ and $h$. With a low $d$, a lot of the information observed from the environment is lost, while when $d$ is in the hundreds the generated programs can become very complex. $h$ switches between relative and absolute observations. With a very high $h$, $\omega=0$ means that this observation is one of the lowest that can be observed in this environment; with $h=1$ it means that the observation is lower than the previous one. High values of $h$ present an additional challenge: how to correctly discretize observations in the first $h$ iterations? We implemented burn-in: before training or evaluation we run $h$ iterations of a random agent (see section 4.8) to collect a history of $h$ observations and pick correct thresholds.

### 4.6 Action coercion

A symmetrical problem arises with actions taken by the agent. The memory tape holds integer numbers $T^{k}\in\mathbb{Z}$ and any value can be pushed onto the action queue. However, the action that is output to the environment has to belong to an $N$-dimensional action space $A$, an intersection of unidimensional action spaces $A^{k}$.
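Returning briefly to the fluid thresholds, the sort-based update of eqs. 6-7 can be sketched as (illustrative code, not the paper's implementation):

```python
def fluid_thresholds(history, d=5):
    """Fluid discretization (eqs. 6-7): sort the last h observations
    in ascending order and place tau_w at the ceil(w/d * h)-th order
    statistic, so that roughly w out of d observations fall below tau_w."""
    s = sorted(history)                     # eq. 6: s_1 < s_2 < ... < s_h
    h = len(s)
    # eq. 7, with -(-a // b) as integer ceiling division
    return [s[-(-w * h // d) - 1] for w in range(1, d + 1)]
```

With `history = [1, ..., 10]` and $d=5$, the thresholds land on every second order statistic: 2, 4, 6, 8, 10.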
The "act" operation thus includes a coercion system and is defined as:

$\begin{array}[]{l}a^{k}:=\begin{cases}\frac{S^{k}}{d-1},A^{k}=(-\infty;+\infty)\\\ a_{\text{min}}+|\frac{S^{k}}{d-1}-a_{\text{min}}|,A^{k}=[a_{\text{min}};+\infty)\\\ a_{\text{max}}-|a_{\text{max}}-\frac{S^{k}}{d-1}|,A^{k}=(-\infty;a_{\text{max}}]\\\ a_{\text{min}}+\frac{(S^{k}\bmod d)}{d-1}*(a_{\text{max}}-a_{\text{min}}),A^{k}=[a_{\text{min}};a_{\text{max}}]\\\ S^{k},A^{k}\subset\mathbb{Z}\end{cases}\\\ S:=(S^{N+1},S^{N+2},\dots)\end{array}$ (8)

### 4.7 Goto

It is notoriously hard to introduce any kind of branching behavior in BF [29]. To facilitate if-then style programs, we introduce a goto operator `^` defined as

$p_{T}:=T^{p_{T}}$ (9)

Note that it is not a goto in the traditional C sense, since the memory pointer is being moved, not the code pointer. Still, it lets the agent preemptively store potential actions in memory cells and then branch between these actions based on the observation.

### 4.8 Random number generator

Operator @ writes a random number into the active cell. A random agent is often used as a starting point for exploration, and in BF++ a random agent can be implemented as `@!`

### 4.9 Shorthands

With all the commands introduced in sections 4.1-4.7, it is still surprisingly hard to encode relatively simple decisions like "add action 5 to the top of the action queue": [>]+++++! This program moves the memory pointer right until it hits a cell that contains zero, increments it five times, and then pushes $T^{p_{T}}$ to the top of the action queue. It also loses the current position of the memory pointer, which might be meaningful. Our experiments have shown that it takes a very long time for the neural model to learn to write this kind of combination.
To mitigate this issue we introduce shorthands: commands 01234 mean "write the respective number (0, 1, 2, 3 or 4) into the active cell" and commands abcde mean "move the memory pointer to cell a, b, c, d or e", where cells a, b, c, d and e are the first 5 cells of the memory tape. We intentionally made the number of shorthands equal to the discretization constant $d=5$. Due to our method of discretization of continuous action spaces (see sections 4.5, 4.6), the program will often encounter situations where it can choose between $d$ different actions, and thanks to shorthands taking them can be encoded as 1!, 2!, …

### 4.10 Summary

In total (assuming 5 shorthands) BF++ has 22 commands: ><^@+~-[].,!01234abcde Commands `@^~01234abcde` are considered optional and can be disabled if the task at hand calls for it. The number of shorthand commands can be increased or decreased. The observation discretization and action coercion techniques built into the language mean that BF++ is compatible with any POMDP environment. However, in practice there is one important limitation: the complexity of the program required to operate in an environment is directly proportional to the dimensionality of its action and observation spaces $A$ and $O$. If, for example, the observation space is 10000-dimensional, then once an observation is read onto the tape $T$, it takes 9999 `>` operators to reach its last component. Thus, in practice, BF++ should be used with low-dimensional POMDPs. An extension of our methodology to high-dimensional POMDPs (such as Atari games [30], where the observation is a matrix of pixels on a simulated game screen) can be achieved by adding a scene encoder neural network that maps the observed image to a low-dimensional vector, as proposed in [31].
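To round out the language summary, the per-dimension action-coercion cases of eq. 8 (section 4.6) can be sketched as follows; the `space` encoding and the function name are illustrative assumptions, not the paper's API:

```python
def coerce(s, space, d=5):
    """Sketch of the per-dimension coercion cases of eq. 8.
    `space` is (a_min, a_max) with None marking an unbounded end,
    or the string 'int' for integer action spaces."""
    if space == 'int':
        return s                                  # A^k subset of Z: pass through
    a_min, a_max = space
    if a_min is None and a_max is None:
        return s / (d - 1)                        # (-inf; +inf)
    if a_max is None:
        return a_min + abs(s / (d - 1) - a_min)   # [a_min; +inf)
    if a_min is None:
        return a_max - abs(a_max - s / (d - 1))   # (-inf; a_max]
    # [a_min; a_max]: wrap with mod d, then scale into the interval
    return a_min + (s % d) / (d - 1) * (a_max - a_min)

# e.g. with the Mountain Car torque range [-1, 1] and d = 5,
# a queue value of 2 is coerced to -1 + 2/4 * 2 = 0.0
```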
## 5 Experimental setup

### 5.1 Hypotheses and goals

Our experiments were designed to test the following hypotheses:

$H_{1}$ BF++ can be used in conjunction with a program synthesis algorithm to solve arbitrary reinforcement learning challenges (POMDPs)
$H_{2}$ BF++ can be used to take a program written by an expert and use program synthesis to automatically improve it
$H_{3}$ BF++ can be used to generate interpretable solutions to Reinforcement Learning challenges that experts can learn from
$H_{4}$ Optional commands `@^~01234abcde` introduced for convenience make it easier for experts to write programs in BF++
$H_{5}$ Optional commands `@^~01234abcde` improve the quality of programs synthesised by neural models

Hence we

1. Pick several commonly studied reinforcement learning environments
2. Employ an expert (the first author of this paper) to write BF++ programs to solve them
3. Develop a program synthesis model following [23]
4. Compare the best programs generated by the model with the expert programs in terms of program quality
5. Perform ablation studies: remove some of the optional commands from the language (the resulting language is called BF+), remove the expert program from the model's program pool, and compare program quality
6. Perform case studies: analyze programs generated by the model to gain insight into how the model approached the problem

### 5.2 Environments

Figure 2: Selected environments, visualized: (a) CartPole-v1, (b) MountainCarContinuous-v0, (c) Taxi-v3, (d) BipedalWalker-v2

We evaluate our framework on 4 low-dimensional (see section 4.10) POMDPs sampled from the OpenAI Gym [32] leaderboard (https://github.com/openai/gym/wiki/Leaderboard):

1. CartPole-v1 [33]. A pole is attached to a cart that moves along a frictionless track. The agent observes the cart position, cart velocity, pole angle and pole velocity at the tip. The goal is to keep the pole upright by applying force between -1 and 1 to the cart.
At every step the agent receives a +1 reward for survival. The episode terminates when the pole inclines too far.

2. MountainCarContinuous-v0 [34]. A car is on a one-dimensional track, positioned between two "mountains". The goal is to drive up the mountain consuming a minimal amount of fuel by controlling the engine, setting its torque in the range $[-1;1]$; however, the engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum. We picked MountainCarContinuous-v0 as opposed to MountainCar-v0 to demonstrate the performance of our discretization system.

3. Taxi-v3 [35]. There are 4 locations (labeled by different letters) and the goal is to pick up the passenger at one location and drop them off at another in as few timesteps as possible, spending as little fuel as possible.

4. BipedalWalker-v2. A simulated 2D robot with legs has to learn how to walk. Moving rightwards is rewarded, falling is penalized. The observation vector consists of speeds, angular speeds and joint positions collected by the robot's sensors. These observations do not, however, include any global coordinates - they can only be inferred from sensor inputs. With an action vector of size 4, the agent controls the speeds of the robot's hip and knee motors.

### 5.3 Hyperparameters

For observation discretization (section 4.5) we picked $d=5$ (so that it is equal to the number of shorthands) and $h=500$ for our experiments; hence, when an observation is among the highest 20% of the last 500 observations it is written into memory as 4, while if it falls between the 40th and 60th percentiles it is written as 2.

### 5.4 Expert programs

For CartPole we wrote 2 programs. One completely ignores all observations and just alternates between "move right" and "move left": 0!,1! Another calculates the difference between the velocity of the cart and the angular velocity of the pole.
If it’s positive, the cart is pushed to the right (the cart has to catch up with the pole), if it’s negative the cart is pushed to the left, if zero it is pushed randomly: [a0>0>0>0>0>@>1>1>1>1>1>,>[->>-<<]>>+++++^!1] The first part of this program sets up an action map on the tape where every possible value of the velocity differential has a respective cell with 0, 1 or (in the center) random number. Then `[->>-<<]` block does subtraction, `+++++` adds 5 to the result, so that it belongs to in $0..10$ and not $-5..5$, `^` moves the memory pointer to the correct cell in the action map and `!` puts the action onto the action stack. For Mountain Car we wrote an elegant algorithm that reads the observation vector into the tape, goes to the second observation (car velocity) and outputs it as action: >!a In other words, we apply motor torque in the same direction where we’re currently headed, thus always accelerating our car. If we’re headed right, that helps us get to the destination and if we’re headed left that helps us get as high as possible onto the hill so that when direction reverses, the car has more energy to push through the right hill. For Taxi we introduce 2 programs. The first program: 1. 1. Finds the coordinates of the current destination (passenger to pick up or current passenger’s destination) 2. 2. Subtracts the current destination 3. 3. Moves in the resulting direction The problem with this approach is that it always gets stuck when it hits a wall. To compensate for that, the second program alternates between the strategy above (for 5 iterations) and random movements (for 5 iterations) so that it eventually gets unstuck. See source code repository for the programs. Optional commands `@^~01234abcde` have all been invaluable in developing these programs - a fact in support of $H_{4}$. 
A more rigorous way to confirm it would be to employ several human experts to develop programs with and without the optional operators, but finding volunteer BF++ developers has proven difficult. Developing programs for Bipedal Walker is, unfortunately, above our expert's paygrade.

### 5.5 Program synthesis model

In order to train a generative model $g$ to write BF++ programs, we treat the writing process as a reinforcement learning episode in its own right [23]. Every character of a program is an action taken by the _writer agent_, and programs are terminated by a NULL character. When the NULL character is written, a _BF++ agent_ is created in the target POMDP environment (e.g. CartPole) and the sum total of rewards $Q$ collected in that episode is assigned as the reward to the _writer agent_ for the NULL character. All other characters are rewarded with zero. The _writer agent's_ policy is modeled with an LSTM [36] neural network and is trained with a modified version of the REINFORCE [37] algorithm. While standard REINFORCE optimizes the Policy Gradient objective

$O_{\text{PG}}(\phi)=\mathbb{E_{\pi(C;\phi)}}(Q)$ (10)

where $\phi$ are the LSTM parameters, $C$ a program, and $Q$ the reward obtained by the program in the target environment, we optimize

$O(\phi)=O_{\text{PG}}(\phi)+O_{\text{PQT}}(\phi)$ (11)

where

$O_{\text{PQT}}(\phi)=\frac{1}{K}\sum_{k=1}^{K}\log\pi(C_{k};\phi)$ (12)

and $C_{1}$ is the best (highest $Q$) known program, $C_{2}$ the second best, and so on. Intuitively, both $O_{\text{PG}}(\phi)$ and $O_{\text{PQT}}(\phi)$, when optimized, update the weights of the LSTM so that programs that we have found to be successful become more likely. But Policy Gradient weighs programs proportionately to their respective rewards, while PQT creates a priority queue of the best known programs and assigns a high importance to them and zero to the rest.

Figure 3: Neural-symbolic learning cycle [38]

The $O_{\text{PQT}}$ component has been shown to have "a stabilizing affect and helps reduce catastrophic forgetting in the policy" [23].
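A toy sketch of the combined objective of eqs. 10-12, using a REINFORCE-style sample estimate of $O_{\text{PG}}$ (the function and argument names are illustrative, not the paper's API):

```python
def writer_objective(sampled, queue, K=2):
    """Sketch of eq. 11.  `sampled` holds (log pi(C; phi), Q) pairs for
    programs sampled from the current policy; `queue` holds
    (log pi(C; phi), Q) pairs for the best programs found so far."""
    # REINFORCE-style sample estimate of the policy-gradient term (eq. 10)
    o_pg = sum(logp * q for logp, q in sampled) / len(sampled)
    # PQT term (eq. 12): mean log-likelihood of the top-K programs by reward
    top = sorted(queue, key=lambda p: p[1], reverse=True)[:K]
    o_pqt = sum(logp for logp, _ in top) / K
    return o_pg + o_pqt
```

In the real model, the gradient of this objective with respect to the LSTM parameters $\phi$ is followed; here the log-probabilities are plain numbers for illustration.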
In addition to this, we use $O_{\text{PQT}}$ to implement expert inspiration. By default, the priority queue of the best known programs is initialized as an empty set. But if expert-written programs are available, it can be prepopulated with these programs, which act as useful positive examples for teaching the _writer agent_. This approach is used to incorporate the programs from section 5.4 and transfer knowledge from experts to the neural developer. This approach to expert inspiration follows what is known as the neural-symbolic learning cycle, displayed in figure 3: expert knowledge is represented symbolically, in terms of a BF++ program; then a neural network is trained to generate this program, effectively translating the expert knowledge from symbolic into connectionist format (_representation_); finally, the neural network learns from reinforcement how to solve the task better than the expert (_training_). Unlike in most neural-symbolic systems [39] that extract knowledge from connectionist systems with algorithms like TREPAN [40] or JRip extraction [41], the _extraction_ step is trivial since the neural network outputs a symbolic program directly. In all experiments below, the _writer agent_'s LSTM has a hidden size of 50 and a batch size of 4, and is trained with the RMSProp [42] optimizer.

### 5.6 Stopping and Scoring

All experiments were run with an upper limit of 100000 training episodes. Environments other than Taxi also used the Exponential Variance Elimination [43] early stopping technique - training was stopped when the positive trend in the quality of the best found program stopped, i.e. when the exponential moving average of program quality was lower than it was 1000 episodes earlier. Agents for Taxi are trained for a fixed number of episodes, because we noticed that in this environment the longest part of the training process is learning to pick up the first passenger, and until that happens $Q=-200$ holds.
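The stopping rule can be sketched as follows (a simplified illustration assuming a plain exponential moving average; the actual Exponential Variance Elimination [43] implementation may differ):

```python
def should_stop(quality_history, alpha=0.01, lag=1000):
    """Stop when the exponential moving average of best-program quality
    is no higher than it was `lag` episodes ago, i.e. the positive
    trend has stopped.  `alpha` is an assumed smoothing factor."""
    ema, trace = None, []
    for q in quality_history:
        ema = q if ema is None else alpha * q + (1 - alpha) * ema
        trace.append(ema)
    return len(trace) > lag and trace[-1] <= trace[-1 - lag]
```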
Once the training process is finished, we take the best known programs and, since each of them was only tested once (leading to high variance), we test them again, averaging total rewards over 100 episodes. We use this averaged reward to pick the best program.

### 5.7 Implementation

The BF++ interpreter and the training system were written in Python, with TensorFlow for the neural models. GPU resources were not used, because the performance bottleneck of the system is not backpropagation but rather testing a BF++ program in the environment; single-experiment runtime was between 1 hour (CartPole) and 10 hours (Taxi).

## 6 Results

Table 1: Total episode reward $Q$ achieved by best programs found, averaged over 100 episodes

Environment | CartPole-v1 | MountainCarContinuous-v0 | Taxi-v3 | BipedalWalker-v2
---|---|---|---|---
Random agent | 9.3 | 0 | -200 | -91.92
BF++ expert program 1 | 20.48 | -6.55 | -179.49 | -
BF++ expert program 2 | 18.23 | - | -150.44 | -
BF+ (without shorthands) LSTM | 44.55 | 91.57 | -57.93 | -91.9
BF+ (without `@^~`) LSTM | 48.14 | 81.16 | -42.21 | -31.79
BF++ LSTM | 71.38 | 88.41 | -199.82 | -26.97
BF++ LSTM with expert inspiration | 96.64 | 91.39 | -60.65 | -
Leaderboard threshold | 195 | 90 | 0 | 300

### 6.1 Quantitative results

Table 1 presents the quality metric (average 100-episode reward) of the best program in every category, compared to that of a fully random agent and, for context, the result required to join the OpenAI Gym leaderboard. Note that the expert programs used a lot of optional operators (shorthands and `@^!`), so it was not possible to implement expert inspiration with the limited command sets.
These results support (see section 5.1) hypothesis $H_{1}$ (we have obtained functional programs for all environments), $H_{2}$ (when expert inspiration was used, the resulting programs were better than the expert programs and better than programs generated without expert inspiration) and $H_{5}$ (the ablation studies for optional operators do indeed show that those operators are useful).

### 6.2 Case studies

We have established that the program synthesis model is able to learn from human experts. But can experts learn from the model? ($H_{3}$) To confirm this, we offer a detailed explanation of the most successful program of all the experiments listed in section 5. This program scored _91.39_ on Mountain Car: -..~+ The trailing `~` and `+` do not affect the behavior of the agent: they modify the value of the active cell only for it to be immediately overwritten by the virtual comma (section 4.4) before it has any chance to influence actions. One can think of these commands as inactive genes in DNA - we have found many resulting programs to contain such commands. If necessary, this effect can be accounted for by incorporating program length into the loss function. So this program is equivalent to: -..

Figure 4: Visual summary of the strategy enacted by -.. on Mountain Car

When the virtual comma is executed, car position and car velocity are read into memory, discretized into integers $0\dots 4$. The position is read into the active memory cell $p_{T}$, while the velocity is in cell $p_{T}+1$. Then the active cell is decremented and the resulting number is put onto the action queue twice. There is 1 read operation and 2 write operations to the end of the action queue per step, which introduces a delay before the actions get executed. When it is time to act, the number at the top of the action queue is coerced to one of the actions possible in this environment (0 for going left, 1 for doing nothing, 2 for going right).
A strategy emerges, illustrated in figure 4, in which the car puts "going right" onto the agenda if it is on the far left or the center right of the landscape, puts "going left" onto the agenda when it is on the far right or center left, and schedules doing nothing if it is in the center. This strategy helps the car successfully reach the right fringe every time it is applied.

## 7 Conclusions

In this paper, we have introduced a new programming language tailored to the task of programmatically interpretable reinforcement learning. We have shown experimentally that this language can facilitate program synthesis as well as knowledge transfer between expert-based systems and data-driven systems. The results on the OpenAI Gym test examples show that the proposed system is able to find a functional solution to the problem. In some cases the performance is similar to that of the best deep learning solutions, but the obtained program remains explainable. This is a very encouraging result and suggests that the use of program induction methods may indeed be a viable way towards explainable solutions in RL applications. We propose the following directions for future work:

1. Develop translation mechanisms between BF++ and other languages. Potentially, BF++ can be used as _bytecode_ [44] for reinforcement learning. The expert would write a program in a higher-level language and transpile it into BF++ so that the program can then be improved with reinforcement learning.
2. Use other neural network architectures, as well as non-neural evolution methods like genetic programming [45], in conjunction with BF++.
3. Apply the framework to problems in Healthcare, where expert inspiration is important for crossing the AI chasm [46].
4. Use Natural Language Generation techniques to translate BF++ code automatically into a friendly human-readable text description, as in [47, 48].
## Acknowledgements

This work was funded by the European Union's Horizon 2020 research and innovation programme under grant agreement n° 812882. This work is part of the "Personal Health Interfaces Leveraging HUman-MAchine Natural interactionS" (PhilHumans) project: https://www.philhumans.eu

## References

* Li [2019] Yuxi Li. Reinforcement learning applications. _CoRR_, abs/1908.06973, 2019. URL http://arxiv.org/abs/1908.06973.
* Yu et al. [2019] Chao Yu, Jiming Liu, and Shamim Nemati. Reinforcement learning in healthcare: a survey. _arXiv preprint arXiv:1908.08796_, 2019.
* Verma et al. [2018] Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, and Swarat Chaudhuri. Programmatically interpretable reinforcement learning. In Jennifer Dy and Andreas Krause, editors, _Proceedings of the 35th International Conference on Machine Learning_, volume 80 of _Proceedings of Machine Learning Research_, pages 5045–5054, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/verma18a.html.
* Åström [1965] K J Åström. Optimal control of Markov processes with incomplete state information. _Journal of Mathematical Analysis and Applications_, 10(1):174–205, 1965. ISSN 0022-247X. doi: 10.1016/0022-247X(65)90154-X. URL http://www.sciencedirect.com/science/article/pii/0022247X6590154X.
* Kramer [1964] J. David R. Kramer, Jr. Partially Observable Markov Processes. 1964.
* Sutton and Barto [2017] Richard S Sutton and Andrew G Barto. _Reinforcement Learning: An Introduction, Second edition in progress_, volume 3. 2017. doi: 10.1016/S1364-6613(99)01331-5.
* Mousavi et al. [2018] Seyed Sajad Mousavi, Michael Schukat, and Enda Howley. Deep Reinforcement Learning: An Overview. In _Lecture Notes in Networks and Systems_, volume 16, pages 426–440. 2018. doi: 10.1007/978-3-319-56991-8_32.
* Arulkumaran et al. [2017] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath.
Deep reinforcement learning: A brief survey. _IEEE Signal Processing Magazine_ , 34(6):26–38, 2017. doi: 10.1109/MSP.2017.2743240. * Kaelbling et al. [1996] Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. _Journal of artificial intelligence research_ , 4:237–285, 1996. * Gulwani et al. [2017] Sumit Gulwani, Oleksandr Polozov, and Rishabh Singh. Program synthesis. _Foundations and Trends in Programming Languages_ , 4(1-2):1–119, 2017. ISSN 23251131. doi: 10.1561/2500000010. URL www.nowpublishers.com;. * Chen et al. [2021] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. _CoRR_ , abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374. * [12] The unreasonable effectiveness of language models for source code · vadim liventsev. https://vadim.me/posts/unreasonable/. (Accessed on 07/08/2022). * Xu et al. [2017] Xiaojun Xu, Chang Liu, and Dawn Song. Sqlnet: Generating structured queries from natural language without reinforcement learning. _CoRR_ , abs/1711.04436, 2017. URL http://arxiv.org/abs/1711.04436. * Yin et al. 
[2018] Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. Learning to mine aligned code and natural language pairs from stack overflow. In _International Conference on Mining Software Repositories_ , MSR, pages 476–486. ACM, 2018. doi: https://doi.org/10.1145/3196398.3196408. * Li et al. [2022] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d’Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with alphacode, 2022. URL https://arxiv.org/abs/2203.07814. * Kant [2018] Neel Kant. Recent Advances in Neural Program Synthesis. 2018\. URL http://arxiv.org/abs/1802.02353. * Polozov and Gulwani [2015] Oleksandr Polozov and Sumit Gulwani. Flashmeta: A framework for inductive program synthesis. In _Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications_ , OOPSLA 2015, page 107–126, New York, NY, USA, 2015. Association for Computing Machinery. ISBN 9781450336895. doi: 10.1145/2814270.2814310. URL https://doi.org/10.1145/2814270.2814310. * Vijayakumar et al. [2018] Ashwin K Vijayakumar, Dhruv Batra, Abhishek Mohta, Prateek Jain, Oleksandr Polozov, and Sumit Gulwani. Neural-guided deductive search for real-time program synthesis from examples. In _6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings_ , 2018. URL https://microsoft.github.io/prose/impact/. * Shin et al. [2019] Richard Shin, Neel Kant, Kavi Gupta, Christopher Bender, Brandon Trabucco, Rishabh Singh, and Dawn Song. Synthetic datasets for neural program synthesis. Technical report, 2019. 
* Zaremba and Sutskever [2015] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. _CoRR_ , abs/1505.00521, 2015. URL http://arxiv.org/abs/1505.00521. * Weston et al. [2015] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In _3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings_ , oct 2015. URL http://arxiv.org/abs/1410.3916. * Kurach et al. [2016] Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. In _4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings_ , 2016. * Abolafia et al. [2018] Daniel A. Abolafia, Mohammad Norouzi, Jonathan Shen, Rui Zhao, and Quoc V. Le. Neural Program Synthesis with Priority Queue Training. 2018\. URL http://arxiv.org/abs/1801.03526. * Ahvanooey et al. [2019] Milad Taleby Ahvanooey, Qianmu Li, Ming Wu, and Shuo Wang. A survey of genetic programming and its applications. _KSII Transactions on Internet and Information Systems (TIIS)_ , 13(4):1765–1794, 2019. * Muller [1993] U. Muller. Brainfuck – an eight-instruction turing-complete programming language. Available at the internet address http://en.wikipedia.org/wiki/Brainfuck, 1993. URL http://en.wikipedia.org/wiki/Brainfuck. * Allen et al. [2012] Julie D Allen, Deborah Anderson, Joe Becker, Richard Cook, Mark Davis, Peter Edberg, Michael Everson, Asmus Freytag, Laurentiu Iancu, Richard Ishida, et al. The unicode standard. _Mountain view, CA_ , 2012. * Turing [1938] A M Turing. On computable numbers, with an application to the entscheidungsproblem. a correction. _Proceedings of the London Mathematical Society_ , s2-43(1):544–546, 1938. ISSN 1460244X. doi: 10.1112/plms/s2-43.6.544. * Touati et al. [2020] Ahmed Touati, Adrien Ali Taiga, and Marc G Bellemare. Zooming for efficient model-free reinforcement learning in metric spaces. _arXiv preprint arXiv:2003.04069_ , 2020. * Linander [2016] Mats Linander. 
# Examining Factors Associated with Twitter Account Suspension Following the 2020 U.S. Presidential Election

Farhan Asif Chowdhury1, Dheeman Saha1, Md Rashidul Hasan1, Koustuv Saha2, Abdullah Mueen1

###### Abstract

Online social media enables mass-level, transparent, and democratized discussion on numerous socio-political issues. Because of such openness, these platforms often endure manipulation and misinformation, leading to negative impacts. To prevent such harmful activities, platform moderators employ countermeasures against actors who violate their rules. However, the correlation between publicly outlined policies and enacted moderation is less clear to the general public. In this work, we examine violations and subsequent moderation related to the 2020 U.S. Presidential Election discussion on Twitter. We focus on quantifying plausible reasons for suspension, drawing on Twitter's rules and policies, by identifying suspended users (Case) and comparing their activities and properties with those of (yet) non-suspended (Control) users. Using a dataset of 240M election-related tweets made by 21M unique users, we observe that Suspended users violate Twitter's rules at a higher rate (statistically significant) than Control users across all the considered aspects: hate speech, offensiveness, spamming, and civic integrity. Moreover, through the lens of Twitter's suspension mechanism, we qualitatively examine the topics targeted for manipulation.

## 1 Introduction

Social media platforms such as Facebook, Twitter, and Reddit have become vastly popular venues for public discussion of societal, economic, and political issues (Gil de Zúñiga, Jung, and Valenzuela 2012). However, in the recent past, these platforms have been heavily targeted for manipulation and for spreading misinformation on numerous civic issues across the world (Bessi and Ferrara 2016), a phenomenon often referred to as "Computational Propaganda" (Woolley and Howard 2018).
In particular, the coordinated misinformation and influence campaigns of foreign state-sponsored actors during the 2016 U.S. presidential election were highly scrutinized, eventually leading to U.S. Congressional hearings and an investigation by the U.S. Department of Justice (Congress-Hearing 2017; Mueller-Report 2019). In the aftermath of the 2016 U.S. presidential election, social platforms announced strict and improved platform moderation policies (Facebook-Update 2017; Twitter-Update 2018). However, these platforms' moderation and suspension policies have been widely debated and have faced severe criticism from political leaders and supporters alleging bias in favor of their opposition (Bias 2016, 2020). Although these platforms publicly outline their moderation policies, there is no third-party monitoring of their enacted moderation. Moreover, social platforms like Twitter and Facebook employ extensive safeguard mechanisms that consider various aspects of user activities (coordinated activities, impersonation, etc.) to identify malicious behavior (Twitter-Safety 2021). Therefore, analyzing these suspended users' tweets and shared content might shed light on violators' targeted topics. Given the inadequate countermeasures against manipulation during the 2016 U.S. presidential election, the 2020 election was of paramount importance for platform operators seeking to provide a safe and democratized public discussion sphere (Twitter-Policy 2021). The impact and importance of these safeguard measures are not confined to this particular election; instead, they bear cardinal implications for future online political discussions exceeding all geopolitical boundaries. Therefore, it is requisite to assess these platforms' moderation policies — to investigate the correlation between their policies and actions, examine them for potential political biases, and make the general public aware of the topics targeted for malice.
In this respect, we focus on analyzing the moderation policy of the popular microblogging site Twitter as a case study by asking the following research questions:

* RQ1: What factors are associated with Twitter account suspension following the 2020 U.S. Presidential Election?
* RQ2: How do political ideologies associate with the suspended accounts?
* RQ3: What were the topics of discussion among suspended users? What type of content did these users share?

This work. To answer these research questions, we collect a large-scale dataset of 240M tweets made by 21M unique users over eight weeks centered on the 2020 U.S. Presidential Election. Afterward, we identify 355K suspended users who participated in this election discussion. We draw upon Twitter's rules and policies to examine plausible suspension factors. To investigate the user activity that might lead to suspension, we adopt the "case-control" study design from epidemiology (Schulz and Grimes 2002). We consider the suspended users as the Case group and sample a similar number of non-suspended users as the Control group. We devise several classification techniques to quantify suspension factors among these two groups. We infer these users' political leaning by utilizing the political bias of the news media outlets they share. By employing a language differentiation technique, we contrast the conversational topics and shared content of the Case and Control groups. Through the lens of Twitter's suspension policy, we passively infer the topics targeted by platform violators and identify the online content platforms utilized to sway the discussion.

Summary findings. We find that across all suspension factors, the Suspended users have higher (statistically significant) violation occurrences. Coherent with prior work, we find that Suspended users are short-lived, have fewer followers, and show more tweeting activity. We observe a higher presence of right-leaning users than left-leaning users among Suspended users.
We find that Suspended users use more curse and derogatory words and more personally-attacking and propaganda-related hashtags. We also notice that these users share news content from heavily biased, right-leaning news-propaganda sites. We discuss the implications and limitations of our work in the Conclusion.

## 2 Related Work

There have been several works related to Twitter suspension, most of which focused primarily on spam-related suspensions (Thomas et al. 2011; Amleshwaram et al. 2021). More recently, Le et al. studied suspended users in the context of the 2016 U.S. presidential election (Le et al. 2019), and Chowdhury et al. examined a large group of suspended users related to a large-scale Twitter purge in 2018 (Chowdhury et al. 2020). We refer readers to (Le et al. 2019; Chowdhury et al. 2020) for a more comprehensive understanding of suspension and moderation on online platforms. However, none of these works quantifies specific factors associated with Twitter suspension. Additionally, political discussions and related manipulation on online platforms have been studied thoroughly (Ferrara 2017; Im et al. 2020), mostly in relation to the 2016 U.S. presidential election (Badawy, Ferrara, and Lerman 2018; Zannettou et al. 2019). These works primarily focus on characterizing malicious users and inferring their motivation and impact. Im et al. and Badawy, Ferrara, and Lerman provide an extensive overview of this line of work (Im et al. 2020; Badawy, Ferrara, and Lerman 2018). In contrast, we focus on quantifying suspension factors and examining malice topics related to the 2020 U.S. presidential election.

## 3 Data

To collect tweets related to the 2020 U.S. Presidential Election discussion, we deployed an uninterrupted data collection framework utilizing Twitter's streaming API to filter real-time tweets based on given keywords. Similar to prior work that collected specific theme- or event-related tweets (Olteanu et al.
2014), we initialize the keyword set with manually curated election-related words and hashtags. To cover the continuously evolving election discussions and topics, we update the keyword set daily with new trending hashtags and words from the previous day's collection, as employed in (Abu-El-Rub and Mueen 2019; Olteanu et al. 2015). We provide a list of all the keywords on the supporting website (Website 2021). Our data collection period spans roughly ten weeks centered on the election date, from September 28, 2020 to December 04, 2020. Approximately one month after our data collection ends, on January 01, 2021, we start probing Twitter for each of the participating users in our dataset to identify the suspended users. Twitter returns response code $63$ when user information is requested for a suspended user. In this process, we identify 355,573 suspended users among roughly 21M participating users. We provide summary descriptive statistics of our dataset in Table 1 and plot the per-user tweet count in Figure 1. We discuss the limitations of our data collection framework in Section 6.

Figure 1: Distribution of # of users by # of tweets.

Statistic | Value
---|---
# Tweets | 240M
# Unique users | 21M
# Retweets | 173M
# Quote tweets | 60M
# $\mu$ tweets per user | 11
# Suspended users | 355K
# Tweets by sus. users | 7.2M

Table 1: Descriptive statistics of the Twitter dataset.

Ethical Concerns. Throughout our data collection, experiment design, and analysis process, we maintain ethical research standards (Rivers and Lewis 2014). Accordingly, the accommodating academic institution's Institutional Review Board exempted this project after a formal review. Following Twitter's terms-of-service guidelines: (1) we use the Twitter API key only for passive data collection purposes, (2) we do not publish user-specific information, (3) we do not redistribute the data, and (4) we only share aggregated statistics and derived information to facilitate future work.
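As a concrete illustration of the probing step above, the sketch below (not the authors' actual pipeline) classifies a user from the error payload returned by Twitter's v1.1 API. Code 63 ("User has been suspended") matches the paper's description; the code-50 branch for deleted or nonexistent accounts is our assumption about separating other missing users, and the payloads are hypothetical.

```python
def account_status(api_response: dict) -> str:
    """Map a Twitter v1.1 API user-lookup response to a status label."""
    codes = {err.get("code") for err in api_response.get("errors", [])}
    if 63 in codes:   # documented code for suspended accounts
        return "suspended"
    if 50 in codes:   # "User not found": deleted or never existed (assumption)
        return "not_found"
    return "active"   # a normal user object carries no "errors" field

# Hypothetical payloads mimicking the API's error format
suspended = {"errors": [{"code": 63, "message": "User has been suspended."}]}
active = {"id_str": "12345", "screen_name": "example_user"}
print(account_status(suspended))  # suspended
print(account_status(active))     # active
```

Probing each of the roughly 21M participating users this way would yield the set of accounts labeled suspended in the dataset.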
## 4 Methods

### RQ1: Inferring Suspension Factors

Twitter rules and policies. To infer the plausible factors that explain suspension, we draw upon Twitter's rules and policies for free and safe public discussion (TwitterRules 2021). Twitter outlines three specific categories — (1) safety, (2) privacy, (3) authenticity — each of which entails finer sub-categories of specific violating activities. We specifically focus on five sub-categories that are most likely to be enforced in election discussion: three from safety — (1) hateful conduct, (2) abuse/harassment, (3) terrorism/violent extremism; and two from authenticity — (4) spamming, (5) civic integrity. The rest are either largely irrelevant (i.e., copyright, nudity) or cannot be inferred from our data (i.e., impersonation, as we do not have user and tweet information for all the tweets; sensitive media content, as we do not crawl media content).

Hateful Conduct and Offensive Behavior. Several recent works aim to identify hateful and abusive activities on online platforms and have produced publicly available datasets and trained models (Founta et al. 2018; Davidson et al. 2017). However, as the distinction between abusive and violent language is ill-defined, they unified these categories. Similarly, we combine both abusive and violent tweets into one category: offensive. We utilize an automatic hate speech and offensive language detection technique known as HateSonar to detect hateful and offensive tweets (Davidson et al. 2017). HateSonar is a logistic regression classifier over several text features (i.e., TF-IDF of word-grams, sentiment, etc.) trained on a manually labeled tweet corpus. We use a pre-trained HateSonar model to classify each tweet into three categories: (1) hateful, (2) offensive, and (3) normal.

Civic integrity.
Twitter has established strict rules to prevent users from "manipulating or interfering in elections or other civic processes", including "posting and sharing misleading content" (Twitter-Integrity 2021). To infer such violations, we utilize the posted hashtags and shared news website URLs. We curate a list of hashtags related to misinformation, propaganda, and conspiracy theories, borrowing from the work of (Ferrara et al. 2020), who curated a list of conspiracy-related hashtags. Additionally, we compile a list of biased and propaganda-spreading news websites based on publicly available data from (FactCheck 2021; Politifact 2021). If a tweet contains a hashtag or news article from our curated lists, we consider it a violation of the civic integrity policy.

Spam. Several previous works have identified spamming on Twitter, most of which consider both tweet content and user attributes for user-level classification. Here, we primarily infer spamming violations at the tweet level based on tweet content, for which we utilize a collection of spam keywords (Benevenuto et al. 2010). However, to quantify spammers at the user level, we also examine several account attributes (i.e., account age, tweet rate, etc.) that are most prominent for spammer detection (Thomas et al. 2011; Yang et al. 2020).

We note that the classification techniques defined above are no match for Twitter's actual countermeasure mechanism. Rather, we posit these methods as high-precision approaches that utilize language models and keyword matching to avoid false positives. The detected violations can be regarded as a lower bound on the actual ensuing violations, which would only increase with more comprehensive approaches.

### RQ2: Political Ideology of Suspended Users

We infer political leaning based on the political bias of the media outlets shared in the tweets. Similar to previous work studying political ideology on Twitter (Badawy et al.
2019), we curate a list of "politically inclined" media outlets based on publicly available data from (AllSides 2021; MediaBias 2021). Additionally, if a user retweets one of the presidential candidates without adding a quote, we consider it an ideological endorsement (Ferrara et al. 2020).

### RQ3: Conversational topics and shared content

Twitter employs extensive countermeasure tools that consider a multitude of factors, features, and algorithms (Twitter-Measures 2018), which are beyond the scope of any third-party observer to reproduce. However, through the lens of Twitter's suspension policy, we can identify platform violators' targeted topics as a passive sensing mechanism for detecting online malice. Toward that, we contrast the conversational topics of Suspended and Control users. In particular, we consider (1) top uni-grams and bi-grams, to infer the commonality of discussion language; and (2) hashtags, which are used for signaling and discoverability purposes (Bruns and Burgess 2011) and have been instrumental in several political and social movements (Arif, Stewart, and Starbird 2018). Additionally, Twitter is often used as an amplification platform to disseminate news and multimedia content. Hence, we also examine the shared URL domains to identify the online content platforms utilized by platform violators.

To examine the uniqueness across these dimensions, we use a generative text modeling method known as Sparse Additive Generative Models of Text, or SAGE (Eisenstein, Ahmed, and Xing 2011). We use SAGE as a text differentiation technique in which each class label or latent topic is endowed with a model of its deviation in log-frequency from a constant background distribution. We utilize SAGE to identify the highly used distinctive word-grams, hashtags, and URL domains across the Suspended and Control users' tweet corpora.
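The core of this contrast can be sketched in a few lines: rank terms by how far their smoothed log-frequency in one group's corpus deviates from a background distribution built over both groups. This simplified version omits SAGE's sparsity-inducing prior, so it illustrates the idea rather than reproducing the model of (Eisenstein, Ahmed, and Xing 2011); the token lists and smoothing constant are placeholders.

```python
import math
from collections import Counter

def distinctive_terms(group_tokens, background_tokens, top_k=10, alpha=1.0):
    """Terms whose smoothed log-frequency in the group most exceeds the background."""
    g, b = Counter(group_tokens), Counter(background_tokens)
    vocab = set(g) | set(b)
    g_total = sum(g.values()) + alpha * len(vocab)   # additive smoothing
    b_total = sum(b.values()) + alpha * len(vocab)

    def deviation(w):
        return math.log((g[w] + alpha) / g_total) - math.log((b[w] + alpha) / b_total)

    return sorted(vocab, key=deviation, reverse=True)[:top_k]

# Toy corpora: "traitor" is over-represented in the group relative to background
group = ["traitor", "traitor", "ballot"]
background = ["ballot", "ballot", "ballot", "traitor"]
print(distinctive_terms(group, background, top_k=1))  # ['traitor']
```

The same routine, applied to word-grams, hashtags, or URL domains, would surface the group-distinctive items of the kind reported in Tables 3-5.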
## 5 Results

Measure | Suspended | Control | d | t | KS
---|---|---|---|---|---
Suspension rule (Safety) | | | | |
Hateful (%) | 0.78 | 0.43 | 0.05 | 77.73*** | 0.003***
Offensive (%) | 6.45 | 5.21 | 0.05 | 91.62*** | 0.01***
Suspension rule (Authenticity) | | | | |
Civic Integrity (%) | 0.40 | 0.30 | 0.02 | 31.46*** | 0.001***
Spam (%) | 0.56 | 0.38 | 0.03 | 46.83*** | 0.002***
Account Properties | | | | |
Active days | 964.0 | 1972.9 | -0.74 | -151.77*** | 0.24***
Tweets per Day | 33.5 | 20.82 | 0.25 | 50.20*** | 0.14***
Followers Count | 1491.5 | 3337.2 | -0.04 | -8.54*** | 0.11***
Friends Count | 1112.5 | 1263.7 | -0.03 | -6.74*** | 0.11***
Political Ideology (% Tweets) | | | | |
Left Leaning | 3.97 | 5.65 | -0.08 | -137.9*** | 0.02***
Right Leaning | 7.25 | 6.00 | 0.05 | 87.21*** | 0.01***

Table 2: Summary of differences in quantitative measures across Suspended and Control users. We report average occurrences across matched clusters, effect size (Cohen's $d$), independent-sample $t$-statistic, and $KS$-statistic. $p$-values are reported after Bonferroni correction (* $p$<0.05, ** $p$<0.01, *** $p$<0.001).

### RQ1: Inferring Suspension Reason

In this subsection, we quantify the differences in the suspension rules between Suspended and Control users. We calculate effect size (Cohen's $d$) and use independent-sample $t$-tests to evaluate the statistical significance of the differences. We perform a Kolmogorov-Smirnov ($KS$) test against the null hypothesis that the distributions of suspension rules for the Suspended and Control users are drawn from the same distribution (Saha et al. 2021). We summarize these differences in Table 2.

Hateful Conduct and Offensive Behavior. Suspended users are twice as likely to post hateful tweets as Control users ($t$=77.73, $p$<0.05). Suspended users also post more offensive tweets than Control users ($t$=91.62, $p$<0.05). These findings are in coherence with Twitter's suspension policy.

Civic Integrity and Spam.
We find that Suspended users are more likely to use hashtags related to conspiracy theories and to share news from media sites with questionable authenticity ($t$=31.46, $p$<0.05). Suspended users are also 50% more likely to post spam tweets ($t$=46.83, $p$<0.05).

Account Properties. We find a significant difference in active days between Suspended and Control users ($t$=-151.77, $p$<0.05). The Control users are, on average, roughly three years older than Suspended users. In contrast, these short-lived Suspended users post 50% more tweets than Control users ($t$=50.20, $p$<0.05). The Suspended users have fewer followers, on average less than half as many as Control users ($t$=-8.54, $p$<0.05). However, both Suspended and Control users have similar friend counts ($t$=-6.74, $p$<0.05). These findings resonate with previous works studying spamming and suspension on Twitter (Thomas et al. 2011; Chowdhury et al. 2020), which identified that rule-violating users are generally short-lived, post more tweets, and have a smaller follower base.

### RQ2: Political Ideology of Suspended Users

In Table 2, we observe $40$% more left-leaning tweets among Control users than Suspended users ($t$=-$137.9$, $p$<$0.05$). In contrast, we observe more right-leaning tweets among Suspended users than Control users ($t$=$87.21$, $p$<$0.05$). Our findings show that both left-leaning and right-leaning users engaged in violating Twitter's rules and policies, with a higher presence of right-leaning tweets among Suspended users than Control users.
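The group comparisons reported above rest on two standard quantities, sketched here using only the Python standard library. The KS test and Bonferroni correction used in the paper are omitted, and the sample values below are hypothetical.

```python
import math
from statistics import mean, variance

def pooled_variance(a, b):
    """Pooled sample variance of two independent samples."""
    na, nb = len(a), len(b)
    return ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)

def cohens_d(a, b):
    """Effect size: mean difference in units of the pooled standard deviation."""
    return (mean(a) - mean(b)) / math.sqrt(pooled_variance(a, b))

def t_statistic(a, b):
    """Independent-sample (pooled-variance) t statistic."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / math.sqrt(pooled_variance(a, b) * (1 / na + 1 / nb))

# Hypothetical per-cluster measurements for two groups
suspended = [2.0, 3.0, 4.0]
control = [1.0, 2.0, 3.0]
print(round(cohens_d(suspended, control), 3))     # 1.0
print(round(t_statistic(suspended, control), 3))  # 1.225
```

Applied to a measure such as the per-user hateful-tweet rate, these two functions yield the $d$ and $t$ columns of Table 2.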
### RQ3: Conversational topics and shared content

Category | Words
---|---
Control | transistion, health care, amy coney barrett, flynn, senate, sidney powell, absentee, graham, judge, legal, rigged, mail, ballot, rudy giuliani, fraudulent
Suspended | traitor, dumb, communist, biden family, idiot, seanhannity, fu*k, liar, stupid, breitbartnews, ukraine, treason, terrorist, evil, leftist

Table 3: The fifteen most highly used distinctive words per user group, obtained using SAGE.

Word Usage. In Table 3, we present the 15 most distinctive words used in each group's tweets, as obtained from the SAGE technique. We observe that Control users distinctly used words relating to different event-driven election-related topics (i.e., mail, ballot, rigged, fraudulent). However, we observe a large presence of swear words (i.e., idiot, dumb) and sensitive words (traitor, treason, terrorist) among Suspended users' unique words, which supports the higher rate of hate-speech detection among Suspended users.

Category | Hashtags
---|---
Control | pentagon, bigtech, corruptkelly, quidproquo, doj, justicematters, climatechange, flipthesenate, michiganhearing, cnntapes
Suspended | stevebannon, bidenfamilycorruption, warroompandemic, russia, hunterbidenemails, hunterbidenlaptop, democratsaredestroyingamerica, bidencrimesyndicate, chinajoe, chinabitchbiden

Table 4: The ten most highly used distinctive hashtags per user group, obtained using SAGE.

Hashtag Usage. Table 4 presents the 10 most distinctively used hashtags by Suspended and Control users. Similar to word usage, the unique hashtags among Control users were related to specific events (i.e., cnntapes, corruptkelly, etc.) and general election issues (i.e., bigtech, climatechange).
In contrast, distinctive hashtags from Suspended users were mostly related to the defamation of Democratic presidential candidate Joe Biden (i.e., bidencrimesyndicate, chinajoe, chinabitchbiden) and issues related to his son Hunter Biden (i.e., hunterbidenemails, hunterbidenlaptop).

Category | Domain
---|---
Control | democracydocket.com, buildbackbetter.gov, www.infobae.com, latimes.com, texastribune.com, citizensforethics.org, theatlantic.com, motherjones.com, nytimes.com, npr.org
Suspended | usfuturenews.com, trumpsports.org, techfinguy.com, mostafadahroug.com, ovalofficeradio.com, wuestdevelopment.de, queenofsixteens.com, truenewshub.com, einnews.com, thefreeliberty.com

Table 5: The ten most highly shared distinctive URL domains per user group, obtained using SAGE.

Shared Content. In Table 5, we show the top 10 distinctively shared domain names per user group. Among the Control users, we observe a few moderately neutral news outlets (i.e., nytimes.com, npr.org), independent political monitoring organizations (i.e., democracydocket.com, citizensforethics.org), and a few left-leaning news outlets (theatlantic.com, motherjones.com). However, among Suspended users, we notice several heavily right-leaning, non-mainstream news-propaganda sites (i.e., usfuturenews.com, ovalofficeradio.com, truenewshub.com, thefreeliberty.com).

## 6 Discussion and Conclusion

Implications. Our study sheds light on the transparency of Twitter's content moderation policy. Although we cannot offer any quantitative estimate of how far, or through what means, Twitter's rules are followed, our study finds statistically significant occurrences of hateful, offensive, and misinformative content among the users whose accounts were suspended after a while.
These findings support theoretical, empirical, and anecdotal evidence about Twitter's moderation policies (TwitterRules 2021), which gained significant attention in January 2021 when Twitter suspended U.S. President Donald Trump's account owing to inciteful and unrest-provoking content (Trump-Ban 2020).

Limitations and Future Work. Our Twitter data collection has potential biases, as we initialize our seed keywords manually. While investigating plausible suspension reasons, we use simple, interpretable, and high-precision approaches, which are no match for Twitter's complex and multi-faceted safeguard mechanisms. We do not infer the exact reason for suspension for individual users; rather, we quantify violations at the tweet level. Future research can use causal inference methods such as matching (Saha and Sharma 2020) to minimize confounds and draw causal claims about why certain accounts were suspended. Moreover, we utilize several publicly available datasets that might suffer from biases.

We position our work as an initial step toward understanding malice, misinformation, and subsequent moderation related to the 2020 U.S. presidential election on online platforms. Our presented insights and the derived information can instigate further in-depth examination. For example, the content of the shared news articles can be analyzed to understand the nature of propaganda news. To facilitate such research, we make these news URLs and other summary statistics publicly available (Website 2021). Similarly, future work can investigate the dynamics of the propaganda hashtags and news articles unique to suspended users to understand their impact and influence. Additionally, interactions among suspended users can be explored to identify potential coordination.

Conclusion. In this work, we perform a computational study to analyze Twitter's suspension policy in the context of the 2020 U.S. presidential election.
We facilitate our work by collecting a large-scale tweet dataset during the election period and subsequently identifying the suspended users. By designing a Case-Control experimental study and devising high-precision classification approaches, we quantify the factors associated with suspension. Additionally, we explore the political ideology and targeted topics of suspended users. We aim to motivate more rigorous and in-depth future work through our presented insights and shared datasets.

## References

* Abu-El-Rub and Mueen (2019) Abu-El-Rub, N.; and Mueen, A. 2019. Botcamp: Bot-driven interactions in social campaigns. In _The World Wide Web Conference_, 2529–2535.
* AllSides (2021) AllSides. 2021. https://www.allsides.com/media-bias/media-bias-ratings.
* Amleshwaram et al. (2021) Amleshwaram, A. A.; Reddy, A. N.; Yadav, S.; Gu, G.; and Yang, C. 2021. CATS: Characterizing automation of Twitter spammers.
* Arif, Stewart, and Starbird (2018) Arif, A.; Stewart, L. G.; and Starbird, K. 2018. Acting the part: Examining information operations within #BlackLivesMatter discourse. _Proceedings of the ACM on Human-Computer Interaction_ 2(CSCW): 1–27.
* Badawy et al. (2019) Badawy, A.; Addawood, A.; Lerman, K.; and Ferrara, E. 2019. Characterizing the 2016 Russian IRA influence campaign. _Social Network Analysis and Mining_ 9(1): 31.
* Badawy, Ferrara, and Lerman (2018) Badawy, A.; Ferrara, E.; and Lerman, K. 2018. Analyzing the digital traces of political manipulation: The 2016 Russian interference Twitter campaign. In _2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)_, 258–265. IEEE.
* Benevenuto et al. (2010) Benevenuto, F.; Magno, G.; Rodrigues, T.; and Almeida, V. 2010. Detecting spammers on Twitter.
* Bessi and Ferrara (2016) Bessi, A.; and Ferrara, E. 2016. Social bots distort the 2016 US Presidential election online discussion. _First Monday_ 21(11-7).
* Bias (2016) Bias. 2016.
https://www.usatoday.com/story/tech/news/2016/11/18/conservatives-accuse-twitter-of-liberal-bias/94037802/.
* Bias (2020) Bias. 2020. https://www.wired.co.uk/article/twitter-political-account-ban-us-mid-term-elections.
* Bruns and Burgess (2011) Bruns, A.; and Burgess, J. E. 2011. The use of Twitter hashtags in the formation of ad hoc publics. In _Proceedings of the 6th European Consortium for Political Research (ECPR) General Conference 2011_.
* Chowdhury et al. (2020) Chowdhury, F. A.; Allen, L.; Yousuf, M.; and Mueen, A. 2020. On Twitter Purge: A Retrospective Analysis of Suspended Users. In _Companion Proceedings of the Web Conference 2020_, 371–378.
* Congress-Hearing (2017) Congress-Hearing. 2017. https://www.govinfo.gov/content/pkg/CHRG-115shrg27398/pdf/CHRG-115shrg27398.pdf.
* Davidson et al. (2017) Davidson, T.; Warmsley, D.; Macy, M.; and Weber, I. 2017. Automated hate speech detection and the problem of offensive language. In _Proceedings of the International AAAI Conference on Web and Social Media_, volume 11.
* Eisenstein, Ahmed, and Xing (2011) Eisenstein, J.; Ahmed, A.; and Xing, E. P. 2011. Sparse additive generative models of text.
* Facebook-Update (2017) Facebook-Update. 2017. https://about.fb.com/news/2017/09/information-operations-update/.
* FactCheck (2021) FactCheck. 2021. https://www.factcheck.org/2017/07/websites-post-fake-satirical-stories/.
* Ferrara (2017) Ferrara, E. 2017. Disinformation and social bot operations in the run up to the 2017 French presidential election. _arXiv preprint arXiv:1707.00086_.
* Ferrara et al. (2020) Ferrara, E.; Chang, H.; Chen, E.; Muric, G.; and Patel, J. 2020. Characterizing social media manipulation in the 2020 US presidential election. _First Monday_.
* Founta et al. (2018) Founta, A.; Djouvas, C.; Chatzakou, D.; Leontiadis, I.; Blackburn, J.; Stringhini, G.; Vakali, A.; Sirivianos, M.; and Kourtellis, N. 2018. Large scale crowdsourcing and characterization of Twitter abusive behavior.
In _Proceedings of the International AAAI Conference on Web and Social Media_, volume 12.
* Gil de Zúñiga, Jung, and Valenzuela (2012) Gil de Zúñiga, H.; Jung, N.; and Valenzuela, S. 2012. Social media use for news and individuals' social capital, civic engagement and political participation. _Journal of Computer-Mediated Communication_ 17(3): 319–336.
* Im et al. (2020) Im, J.; Chandrasekharan, E.; Sargent, J.; Lighthammer, P.; Denby, T.; Bhargava, A.; Hemphill, L.; Jurgens, D.; and Gilbert, E. 2020. Still out there: Modeling and identifying Russian troll accounts on Twitter. In _12th ACM Conference on Web Science_, 1–10.
* Le et al. (2019) Le, H.; Boynton, G.; Shafiq, Z.; and Srinivasan, P. 2019. A postmortem of suspended Twitter accounts in the 2016 US presidential election. In _2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)_, 258–265. IEEE.
* MediaBias (2021) MediaBias. 2021. https://mediabiasfactcheck.com/.
* Mueller-Report (2019) Mueller-Report. 2019. https://www.justice.gov/storage/report.pdf.
* Olteanu et al. (2015) Olteanu, A.; Castillo, C.; Diakopoulos, N.; and Aberer, K. 2015. Comparing events coverage in online news and social media: The case of climate change. In _Proceedings of the International AAAI Conference on Web and Social Media_, volume 9.
* Olteanu et al. (2014) Olteanu, A.; Castillo, C.; Diaz, F.; and Vieweg, S. 2014. CrisisLex: A lexicon for collecting and filtering microblogged communications in crises. In _Proceedings of the International AAAI Conference on Web and Social Media_, volume 8.
* Politifact (2021) Politifact. 2021. https://www.politifact.com/article/2017/apr/20/politifacts-guide-fake-news-websites-and-what-they/.
* Rivers and Lewis (2014) Rivers, C. M.; and Lewis, B. L. 2014. Ethical research standards in a world of big data. _F1000Research_ 3.
* Saha et al. (2021) Saha, K.; Liu, Y.; Vincent, N.; Chowdhury, F. A.; Neves, L.; Shah, N.; and Bos, M. W. 2021.
AdverTiming Matters: Examining User Ad Consumption for Effective Ad Allocations on Social Media. In _Proc. CHI_. * Saha and Sharma (2020) Saha, K.; and Sharma, A. 2020. Causal Factors of Effective Psychosocial Outcomes in Online Mental Health Communities. In _Proceedings of the International AAAI Conference on Web and Social Media_ , volume 14, 590–601. * Schulz and Grimes (2002) Schulz, K. F.; and Grimes, D. A. 2002. Case-control studies: research in reverse. _The Lancet_ 359(9304): 431–434. * Thomas et al. (2011) Thomas, K.; Grier, C.; Song, D.; and Paxson, V. 2011. Suspended accounts in retrospect: an analysis of twitter spam. In _Proceedings of the 2011 ACM SIGCOMM conference on Internet measurement conference_ , 243–258. * Trump-Ban (2020) Trump-Ban. 2020. https://blog.twitter.com/en_us/topics/company/2020/suspension.html. * Twitter-Integrity (2021) Twitter-Integrity. 2021. https://help.twitter.com/en/rules-and-policies/election-integrity-policy. * Twitter-Measures (2018) Twitter-Measures. 2018. https://blog.twitter.com/en_us/topics/company/2018/how-twitter-is-fighting-spam-and-malicious-automation.html. * Twitter-Policy (2021) Twitter-Policy. 2021. https://blog.twitter.com/en_us/topics/company/2020/2020-election-changes.html. * Twitter-Safety (2021) Twitter-Safety. 2021. https://blog.twitter.com/en_us/topics/company/2018/how-twitter-is-fighting-spam-and-malicious-automation.html. * Twitter-Update (2018) Twitter-Update. 2018. https://blog.twitter.com/en_us/topics/company/2018/2016-election-update.html. * TwitterRules (2021) TwitterRules. 2021. https://help.twitter.com/en/rules-and-policies/twitter-rules. * Website (2021) Website. 2021. https://sites.google.com/view/us-election20-twitter-suspend. * Woolley and Howard (2018) Woolley, S. C.; and Howard, P. N. 2018. _Computational propaganda: political parties, politicians, and political manipulation on social media_. Oxford University Press. * Yang et al. (2020) Yang, K.-C.; Varol, O.; Hui, P.-M.; and Menczer, F. 
2020. Scalable and generalizable social bot detection through data selection. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, 1096–1103. * Zannettou et al. (2019) Zannettou, S.; Caulfield, T.; De Cristofaro, E.; Sirivianos, M.; Stringhini, G.; and Blackburn, J. 2019. Disinformation warfare: Understanding state-sponsored trolls on Twitter and their influence on the web. In _Companion proceedings of the 2019 world wide web conference_ , 218–226.
Blaž Škrlj, Sašo Džeroski, Matej Petkovič: Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia; Jožef Stefan International Postgraduate School, Jamova 39, 1000 Ljubljana, Slovenia. Nada Lavrač: Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia; University of Nova Gorica, Vipavska 13, 5000 Nova Gorica, Slovenia. # ReliefE: Feature Ranking in High-dimensional Spaces via Manifold Embeddings Blaž Škrlj Sašo Džeroski Nada Lavrač Matej Petković ###### Abstract Feature ranking has been widely adopted in machine learning applications such as high-throughput biology and social sciences. The approaches of the popular Relief family of algorithms assign importances to features by iteratively accounting for nearest relevant and irrelevant instances. Despite their high utility, these algorithms can be computationally expensive and not well suited for high-dimensional sparse input spaces. In contrast, recent embedding-based methods learn compact, low-dimensional representations, potentially facilitating down-stream learning capabilities of conventional learners. This paper explores how the Relief branch of algorithms can be adapted to benefit from (Riemannian) manifold-based embeddings of instance and target spaces, where a given embedding’s dimensionality is intrinsic to the dimensionality of the considered data set. The developed ReliefE algorithm is faster and can result in better feature rankings, as shown by our evaluation on 20 real-life data sets for multi-class and multi-label classification tasks. The utility of ReliefE for high-dimensional data sets is ensured by its implementation that utilizes sparse matrix algebraic operations. Finally, the relation of ReliefE to other ranking algorithms is studied via the Fuzzy Jaccard Index.
###### Keywords: Feature ranking · Representation learning · Relief ## 1 Introduction Contemporary machine learning has found its use in many scientific disciplines, ranging from biology, sociology, and logistics to the engineering sciences and physics. Data sets are often available in tabular form, and consist of instances (rows) and features (columns), where attributes denote column names, and individual features correspond to individual attribute values. Even though predictive models can offer insights into how well a certain aspect of a given system can be predicted, researchers and industry practitioners are frequently interested in _which_ parts of the input space are the most relevant. Having such knowledge can yield novel insights into relevant aspects of the studied problem, leading to improved human understanding of the studied phenomenon. For example, in modern molecular and systems biology, discovery of novel biomarkers is of high relevance—once the researchers know which, e.g., compounds or proteins indicate the presence of the studied condition, they can be used for preliminary condition detection, but also to advance human understanding of the condition, leading to the construction of novel hypotheses. We next discuss the types of feature ranking algorithms. Feature ranking algorithms can be split into two main groups: myopic and non-myopic. Myopic algorithms do not consider multiple features simultaneously and thus potentially neglect _interactions_ between features. Examples of myopic feature ranking algorithms include information gain-based ranking. Algorithms from the Relief branch, originating in the early Relief algorithm kira1992feature , are among the most widely used non-myopic algorithms for _feature ranking_ , where each feature is assigned a real-valued score, offering insights into its importance.
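As an illustration of a myopic ranker, an information gain-based scorer might look as follows. This is our own minimal sketch, not the paper's code; the quantile-based discretization and all names are our assumptions:

```python
import numpy as np

def information_gain_ranking(X, y, bins=4):
    """Myopic ranking sketch: score each feature independently by the
    information gain between its quantile-discretized values and the
    class labels, ignoring any feature-feature interactions."""
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    h_y = entropy(y)
    scores = []
    for j in range(X.shape[1]):
        # Discretize feature j into (at most) `bins` quantile bins.
        edges = np.quantile(X[:, j], np.linspace(0, 1, bins + 1)[1:-1])
        binned = np.digitize(X[:, j], edges)
        cond = sum((binned == b).mean() * entropy(y[binned == b])
                   for b in np.unique(binned))
        scores.append(h_y - cond)
    return np.argsort(scores)[::-1]  # feature indices, most informative first
```

Because each feature is scored in isolation, a pair of features that is only jointly informative (e.g., an XOR pattern) would receive low scores, which is exactly the limitation the Relief branch addresses.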
The Relief family of algorithms has been successfully applied to numerous real-life problems stiglic2010stability ; stokes2012application ; PETKOVIC2021104143 . In this work we propose ReliefE, an embedding-based feature ranking algorithm built on the ideas of the original Relief and ReliefF robnik2003theoretical , as well as their extensions to a multi-label classification setting mlcrelief . ReliefE does not compute feature importances based on the original, high-dimensional feature space, but via low-dimensional embeddings of the input and/or output spaces. The key contributions of this work are summarized as follows: • We present ReliefE, an algorithm for feature ranking implemented using sparse matrix algebraic computation of distances between low-dimensional manifold embeddings of both instances and targets (when considering multi-label classification). • The latent dimension of the space, in which the distances are computed, is _inferred automatically_ in an efficient manner. • We show that the number of neighbors to be considered can be automatically inferred based on the distribution of distances to the considered instances, rather than hard-coded as part of the input. • Theoretically grounded sparsification of the input was considered as a preprocessing step, potentially decreasing the execution time. • We offer evidence that ReliefE performs significantly faster than many state-of-the-art methods, especially in high-dimensional input (and output) spaces. • The theoretical properties of ReliefE, including the properties of the embedding spaces, their relations, and the computational complexity, are analysed. • We showcase ReliefE’s performance against six strong (widely used) baselines on 20 real-life multi-class and multi-label classification data sets. • We perform extensive Bayesian and frequentist performance comparisons assessing the statistical evidence of ReliefE’s utility and potential drawbacks.
The rest of this paper is structured as follows. In Section 2, we discuss the key ideas that have led to the developments described in this paper. Section 3 presents the proposed ReliefE methodology, followed by a description of the experimental setting in Section 4 and the results of the empirical evaluation in Section 5. The overall findings are discussed in Section 6, followed by the conclusions and plans for further work in Section 7. The paper includes numerous appendices presenting detailed results of empirical comparisons and case studies. ## 2 Background In this section we discuss the works that have impacted this paper the most, starting with the notion of feature ranking and the description of the Relief branch of feature ranking algorithms. Next, we discuss how embedding-based learning can be of use when solving otherwise intractable problems, serving as a motivation for the proposed work. ### 2.1 Feature ranking Feature ranking can be considered as the process of learning to prioritize the feature space with respect to a given learning task. Algorithms that rank features can operate in a non-myopic (considering feature interactions) or a myopic (ignoring feature interactions) manner. One of the first and most widely used algorithms for non-myopic feature ranking is Relief kira1992feature , introduced in the early 1990s. This iterative algorithm operates by randomly selecting instances (rows), followed by iterative updates of the weights corresponding to individual features, based on the closest instances from the positive and negative classes. The original Relief performs well for binary classification; however, it was unable to generalize to more complex learning tasks such as multi-class classification. Its extension, ReliefF robnik2003theoretical , introduced a prior-based weighting scheme that can take different classes into account.
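The core Relief update described above can be sketched as follows. This is a simplified binary-class variant with Manhattan distance, assuming features scaled to [0, 1]; all names are ours, and it is a sketch rather than the paper's implementation:

```python
import numpy as np

def relief_scores(X, y, n_iter=100, rng=None):
    """Minimal binary-class Relief sketch: for a randomly chosen
    instance, reward features that differ on the nearest miss
    (other class) and penalize features that differ on the nearest
    hit (same class)."""
    rng = np.random.default_rng(rng)
    n, m = X.shape
    w = np.zeros(m)
    for _ in range(n_iter):
        i = rng.integers(n)
        d = np.abs(X - X[i]).sum(axis=1)   # Manhattan distances to instance i
        d[i] = np.inf                      # exclude the instance itself
        same = (y == y[i])
        hit = np.argmin(np.where(same, d, np.inf))
        miss = np.argmin(np.where(~same, d, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter
```

A feature that separates the classes contributes large miss-differences and small hit-differences, so its weight grows, which is the non-myopic behaviour the later ReliefF variants refine.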
In the following years, multiple adaptations of both Relief and ReliefF were introduced, varying mostly in terms of schemes for taking into account a given instance’s neighborhood and its (aggregated) properties. For example, SURF greene2009spatially unifies the considered per-class neighborhoods, whereas SURFstar greene2010informative additionally considers distant neighbors. Further, MultiSURFstar granizo2013multiple takes neighborhood boundaries into account based on the average and standard deviation of distances from the target instance to all others. The TuRF adaptation moore2007tuning of any Relief algorithm also employs recursive feature elimination, whilst applying the dedicated Relief implementation iteratively during feature pruning. TuRF attempts to address some of the problems that arise in very large feature spaces (e.g., more than 20,000 features), yet at the cost of higher computational complexity. In terms of scalability, for example, VLSReliefF eppstein2008very samples _random subspaces_ whilst simultaneously offering competitive performance at a far lower computational cost. The above ReliefF variants mostly attempt to correct some of the original ReliefF drawbacks by re-considering the update step and how the neighbors are taken into account. In recent years, the Relief algorithms have also been extended to a multi-label classification setting (MLC) – a learning problem where multiple labels for an instance are simultaneously possible. Examples of this task include gene function prediction. The first attempt at extending ReliefF to MLC that scales well with the number of labels mlcrelief uses the Hamming distance as the distance measure between two label sets.
Since the Hamming loss can be expressed as a sum of component-wise differences, it is not able to detect possible label-label interactions, which may lead to sub-optimal quality of the obtained rankings, as shown recently in mlcrelief , where other MLC error measure-based distances between labels were shown to offer superior performance. ### 2.2 Embeddings The rise of embedding-based learning can nowadays be observed in virtually all areas of science and industry. Since the 2010s, for example, deep neural networks have been successfully used in fields such as computer vision, where state-of-the-art results are consistently demonstrated, e.g., in face recognition and anomaly detection lecun2015deep ; pouyanfar2018survey . Further, in recent years, natural language processing has been transformed first by the introduction of recurrent neural networks, followed by attention-based neural networks (transformers), showing prominent results in the areas of question answering, language understanding and text classification vaswani2017attention . Similar results have been observed in the areas of network mining kipf2016semi and time series analysis (based on the initial work of connor1994recurrent ). Neural networks offer an elegant alternative for learning a latent representation of the input data set that can be _transferred_ , as well as directly used for problem solving. More recent works on representation learning, however, suggest that a low-dimensional manifold is a suitable topological object for learning rich and transferable representations bronstein2017geometric ; masci2015geodesic . Even though representations learned by neural networks can be associated with manifold learning, the development of algorithms capable of approximating such low-dimensional manifolds was an active research area even before the era of deep learning, and is of particular relevance to this work.
For example, algorithms such as Isomap balasubramanian2002isomap and Locally linear embedding roweis2000nonlinear have been successfully used for data visualization and more efficient learning. Further, modern _omics_ sciences have widely adopted t-SNE maaten2008visualizing , an approximation method capable of producing two-dimensional embeddings of high-dimensional spaces, making it suitable for, e.g., gene expression visualization. Hyperbolic embeddings have also recently been demonstrated to be useful when considering hierarchical multi-label classification stepisnikHyperbolic . This work builds on the notion of uniform manifold projections (UMAP) mcinnes2018umap ; mcinnes2018umap-software , a recently introduced algorithm with a strong theoretical grounding in _manifold theory_. We explore whether low-dimensional manifold approximations, derived from sparse input spaces, can be a natural extension to the Relief family of algorithms. ## 3 Proposed methodology One of the main criticisms of the Relief branch of algorithms is their inability to handle very high-dimensional, potentially sparse input spaces, where problems arise either due to increased space complexity or due to overly incremental weight update steps that result in similar importance scores for many features (i.e., non-informative rankings). We have developed the proposed ReliefE algorithm with the goal of addressing these issues. In this section, we discuss in detail ReliefE, the proposed embedding-based feature ranking algorithm for multi-class and multi-label classification problems, along with its implementation, currently one of the fastest Python-based implementations, compiled to machine code. Figure 1: Overview of the core idea behind ReliefE. Distances in both the feature and target space are computed based on instance embeddings. The proposed ReliefE algorithm is summarized in Figure 1.
Here, the input feature space ($\boldsymbol{F}$) is mapped (by $\phi_{\boldsymbol{F}}$) to its corresponding low-dimensional approximation ($\boldsymbol{E_{F}}$). Finally, the Relief feature ranking ($f$) is adapted to operate via this low-dimensional representation to obtain final feature rankings $\boldsymbol{w}$. Further, in a multi-label setting, distances between targets ($\boldsymbol{T}$) can also be computed in the latent space $\boldsymbol{E_{T}}$, constructed via $\phi_{\boldsymbol{T}}$. ### 3.1 Rationale for embedding-based ranking Embedding the instances in the feature space prior to feature ranking is sensible due to the fact that volume increases _exponentially_ with dimension. Many classifiers benefit from increasing the number of dimensions; however, once a certain dimensionality is reached, their performance starts to degrade hughes . Feature ranking alleviates this problem by prioritizing parts of the feature space that are relevant for learning. Higher-dimensional feature spaces render feature importance detection in the initial feature space harder, as more subtle differences between instances need to be taken into account. Embedding (in an unsupervised manner) offers the construction of instance representations that potentially carry additional _semantic context_ , as comparing instances in the embedded space compares not only the considered pairs of instances but also their potential _roles_ in the joint latent space, offering more meaningful comparisons (as shown, e.g., in NIPS2013_5021 for words and phrases). We next discuss the embedding method considered throughout this work. ### 3.2 Uniform Manifold Approximation and Projection This work builds on the recently introduced idea of Uniform Manifold Approximation and Projection (UMAP) mcinnes2018umap for the task of low-dimensional, unsupervised representation learning.
Even though a detailed treatment of the theoretical underpinnings of UMAP is beyond the scope of this paper, we discuss some of the key ideas underlying the actual implementation, and refer the interested reader to mcinnes2018umap for a detailed overview of the theory. UMAP is formulated with concepts from both topological manifold theory and applied category theory. Riemannian manifolds are topological spaces that have a locally constant metric, and are locally connected. UMAP assumes that high-dimensional data can be uniformly mapped across a low-dimensional Riemannian manifold. The locality of the metric, connectivity and uniformity of the mapping are the three main assumptions of this method. Even though such assumptions are not necessarily fulfilled, appropriate selection of the metric used by UMAP can offer better performance and generalization when the learned representations are used for down-stream learning tasks. In contrast to t-SNE, which is effective for two-dimensional embeddings, UMAP is also highly efficient for embeddings into _higher-dimensional_ vector spaces. UMAP thus serves as a general unsupervised representation learner (UMAP can also perform supervised embeddings, yet this functionality is not considered in this work). It has been successfully used for exploration of biological and other high-dimensional data sets Cao2019 . In summary, the topological manifold theory underlying UMAP offers a very general representation learning framework, applicable beyond the current implementation of UMAP, which we discuss in more detail below. Even though the original formulation assumes continuity, in practice, discrete graph-theoretical analogs of the continuous concepts are employed. The representations are a result of a two-step procedure, where in the first step, a weighted graph is constructed based on the distances between the closest instances.
The second step resembles conventional graph layout computation, which is normally computed in two or three dimensions for visualization purposes, where, e.g., two coordinates of a 2D layout represent two features. Analogously, UMAP extends the idea to higher dimensions, where the instances are positioned in a $d$-dimensional space (with $d$ going up to several hundred in most cases). The resulting space is not useful for direct visualization, but serves as a representation suitable for a down-stream learning task—in this work, feature ranking. The computation of the UMAP embedding can be described as follows. • Weighted graph construction. Assume a user-specified dissimilarity measure $\delta:\mathbb{R}^{|F|}\times\mathbb{R}^{|F|}\to[0,\infty)$ and the number of nearest neighbours $k$, computed via an approximation algorithm dong2011efficient . We refer to the ordered set of instances nearest to instance $\boldsymbol{r}_{i}$ as $\{\boldsymbol{r}_{i_{1}},\dots,\boldsymbol{r}_{i_{k}}\}$. For each instance, let $\omega_{i}=\min{\{\delta(r_{i},r_{i_{j}})|1\leq j\leq k,\delta(r_{i},r_{i_{j}})>0\}},$ and $\beta_{i}$ such that $\sum_{j=1}^{k}\exp\bigg{(}{\frac{-\max(0,\delta(r_{i},r_{i_{j}})-\omega_{i})}{\beta_{i}}}\bigg{)}=\log_{2}(k).$ Next, a weighted directed graph $G=(N,E,w)$ is constructed, where $N$ is the set of considered instances, $E=\{(r_{i},r_{i_{j}})|1\leq j\leq k\}$ and $w((r_{i},r_{i_{j}}))=\exp\bigg{(}\frac{-\max(0,\delta(r_{i},r_{i_{j}})-\omega_{i})}{\beta_{i}}\bigg{)}.$ The final adjacency matrix $\boldsymbol{B}$ is computed as $\boldsymbol{B}=\boldsymbol{A}+\boldsymbol{A}^{T}-\boldsymbol{A}\odot\boldsymbol{A}^{T}$ where $\boldsymbol{A}$ is the adjacency matrix of $G$ and $\odot$ denotes the Hadamard product. • Layout computation.
In the second step, the graph undergoes a process resembling energy minimization, where repulsive and attractive forces are iteratively applied across pairs of instances, resulting in a $d$-dimensional layout, which is effectively a real-valued embedding. The attractive forces ($F_{+}$) are computed, for a given pair of vertices $n_{i}$ and $n_{j}$ and their corresponding coordinate representations (embeddings) $\boldsymbol{r}_{i}$ and $\boldsymbol{r}_{j}$, as follows: $F_{+}=\frac{-2\cdot a\cdot b\cdot||\boldsymbol{r_{i}}-\boldsymbol{r_{j}}||_{2}^{2(b-1)}}{1+||\boldsymbol{r_{i}}-\boldsymbol{r_{j}}||_{2}^{2}}\cdot w(n_{i},n_{j})\cdot(\boldsymbol{r_{i}}-\boldsymbol{r_{j}}),$ and similarly, the repulsive forces ($F_{-}$) are computed as $F_{-}=\frac{b\cdot(1-w(n_{i},n_{j}))\cdot(\boldsymbol{r_{i}}-\boldsymbol{r_{j}})}{(\eta+||\boldsymbol{r_{i}}-\boldsymbol{r_{j}}||_{2}^{2})(1+||\boldsymbol{r_{i}}-\boldsymbol{r_{j}}||_{2}^{2})}.$ Here, $\eta$ is introduced for numerical stability (a small constant), while $a$ and $b$ are hyperparameters. Note that the $F_{-}$ update is computationally very expensive, and is addressed via sampling. The initial coordinates are computed by using spectral layout—here, the two eigenvectors corresponding to the two largest eigenvalues are used as the starting set of coordinates. The two steps, when implemented efficiently, offer fast construction of $d$-dimensional, real-valued representations. The considered UMAP implementation exploits the Numba framework lam2015numba for compiling parts of Python code, making it scalable whilst maintaining a user-friendly API. Note that UMAP’s computational complexity depends heavily on the approximate $k$-nearest neighbor computation. In the following sections, we discuss how we further facilitate the embedding computation, as the current version of UMAP still has _high space complexity_ (graph construction).
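The graph-weight computation from UMAP's first step can be sketched as follows. This is a simplified illustration of the definitions of $\omega_{i}$, $\beta_{i}$, and the symmetrization formula, with a plain bisection search for $\beta_{i}$; all names are our own assumptions:

```python
import numpy as np

def umap_graph_weights(D_knn, k):
    """Sketch of UMAP's fuzzy-graph edge weights. D_knn[i] holds the
    sorted distances from instance i to its k nearest neighbours.
    beta_i is found by bisection so that the weights sum to log2(k)."""
    target = np.log2(k)
    W = np.zeros_like(D_knn)
    for i in range(D_knn.shape[0]):
        d = D_knn[i]
        omega = d[d > 0].min()              # nearest non-identical neighbour
        lo, hi = 1e-12, 1e6
        for _ in range(64):                 # the sum is increasing in beta
            beta = 0.5 * (lo + hi)
            s = np.exp(-np.maximum(0.0, d - omega) / beta).sum()
            lo, hi = (beta, hi) if s < target else (lo, beta)
        W[i] = np.exp(-np.maximum(0.0, d - omega) / beta)
    return W

def symmetrize(A):
    """Fuzzy union of the directed graph: B = A + A^T - A ∘ A^T."""
    return A + A.T - A * A.T
```

Note that the nearest non-identical neighbour always receives weight 1 (its exponent is zero), mirroring the local-connectivity assumption discussed above.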
### 3.3 Input sparsification The proposed methodology is capable of handling highly sparse inputs without additional memory overheads. However, real-life data frequently comes in the form of dense matrices, as is the case with, e.g., gene expression data sets. As a part of the proposed methodology, we explore whether input sparsification can speed up the feature ranking process with minimal ranking quality degradation. We implement the recently introduced, theoretically grounded Probabilistic Matrix Sparsification algorithm (PrMS) arora2006fast , given in Algorithm 1. Data: input matrix $\boldsymbol{A}\in\mathbb{R}^{n\times n}$, approximation constant $\epsilon$. For $i,j\in\{1,\dots,n\}$ do: if $|\boldsymbol{A}_{ij}|>\frac{\epsilon}{\sqrt{n}}$, then $\hat{\boldsymbol{A}}_{ij}=\boldsymbol{A}_{ij}$; else $\hat{\boldsymbol{A}}_{ij}=\textrm{sgn}(\boldsymbol{A}_{ij})\cdot\frac{\epsilon}{\sqrt{n}}$ with probability $p_{ij}=\frac{\sqrt{n}|\boldsymbol{A}_{ij}|}{\epsilon}$, and $\hat{\boldsymbol{A}}_{ij}=0$ otherwise. Return $\hat{\boldsymbol{A}}$. Algorithm 1: Probabilistic Matrix Sparsification (PrMS) arora2006fast . The mathematical intuition behind PrMS is as follows. Given a real-valued matrix $\boldsymbol{A}\in\mathbb{R}^{n\times n}$, let $S=\sum_{ij}|A_{ij}|$. A single PrMS pass through $\bm{A}$ retains $\mathcal{O}(\frac{\sqrt{n}\cdot S}{\Delta})$ nonzero elements with probability $1-\exp{(-\Omega(\frac{\sqrt{n}\cdot S}{\Delta}))}$. Here, $\Omega$ denotes an asymptotic lower bound, and $\Delta=\epsilon/\left\lVert\bm{A}\right\rVert_{2}$, where the parameter $\epsilon>0$ determines the approximation level. Further, with probability $1-\exp{(-\Omega(n))}$, $\lvert\lvert\boldsymbol{A}-\hat{\boldsymbol{A}}\rvert\rvert_{2}\leq\mathcal{O}(\epsilon)$ holds (for an extensive theoretical treatise, please consult arora2006fast ).
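A dense-matrix sketch of Algorithm 1 might look as follows. The function name is ours and a practical implementation would emit a sparse structure; this version only illustrates the thresholding and probabilistic rounding steps:

```python
import numpy as np

def prms_sparsify(A, eps, rng=None):
    """Probabilistic Matrix Sparsification (PrMS) sketch: entries above
    eps/sqrt(n) are kept exactly; smaller entries are rounded to
    sign(A_ij) * eps/sqrt(n) with probability sqrt(n)*|A_ij|/eps, and
    zeroed otherwise (unbiased in expectation)."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    thresh = eps / np.sqrt(n)
    keep = np.abs(A) > thresh
    p = np.abs(A) / thresh                 # <= 1 wherever keep is False
    coin = rng.random(A.shape) < p
    return np.where(keep, A, np.where(coin, np.sign(A) * thresh, 0.0))
```

Each small entry's expected value after rounding equals the original entry ($p_{ij}\cdot\textrm{sgn}(A_{ij})\cdot\epsilon/\sqrt{n}=A_{ij}$), which is why the spectral error stays bounded.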
Note that, as the majority of real-life data sets are not represented by symmetric matrices (typically, they are not even square matrices), the transformation $\bm{B}\mapsto\bm{A}=\begin{bmatrix}0&\bm{B}\\\ \bm{B}^{T}&0\end{bmatrix}$ of the initial matrix $\boldsymbol{B}\in\mathbb{R}^{m\times n}$ has to be employed, since $\bm{A}$ is symmetric and $\left\lVert\bm{A}\right\rVert_{2}=\left\lVert\bm{B}\right\rVert_{2}$. We consider $\epsilon=\left\lVert\bm{A}\right\rVert_{\infty}/(m+n)$, i.e. the maximal column-average (of absolute values) of matrix $\bm{A}$. The sparsification procedure depends only on $\epsilon$, the parameter determining the allowed reconstruction error. We have used this estimate of $\epsilon$ as it is fast to compute and avoids the need for user specification of $\epsilon$, whilst simultaneously guaranteeing reasonable performance (see Section 5). One of the most crucial hyperparameters related to representation learning in general is the dimension of the space in which the constructed representation resides. We have attempted to automate the choice of this—otherwise hard-coded—parameter and discuss the considered estimate next. ### 3.4 Estimation of latent dimension Following facco2017estimating , we improve upon the idea of latent dimension estimation via the top two nearest neighbors. To compute the latent dimension $d$ of the data (under the assumption of a locally constant probability density), it suffices to define two (hyper)spheres $S_{1}$ and $S_{2}$. Both are centred at a random data sample and have radii equal to the distances between the sample and its two nearest neighbors facco2017estimating . The radii and the dimension $d$ define the volumes of the spheres, and it turns out that the value of $d$ can be easily estimated from the empirical probability distribution of the ratio $V(S_{2}\backslash S_{1})/V(S_{1})$ of the volumes of the shell $S_{2}\backslash S_{1}$ and sphere $S_{1}$.
The method, implemented as a part of ReliefE, is summarized in Algorithm 2. Data: set of instance indices $I$, (sparse) feature matrix $\textbf{F}\in\mathbb{R}^{|I|\times|F|}$. 1. distanceMatrix $\leftarrow$ pairwiseDistancesNonzero($\boldsymbol{F}$) 2. $\mu\leftarrow$ distanceMatrix[:,1] / distanceMatrix[:,0] 3. empiricalDist $\leftarrow$ computeEmpiricalDistribution($\mu$) 4. $d\leftarrow$ leastSquaresLine(log($\mu$), log(1 - empiricalDist)) 5. return $d$. Algorithm 2: Latent dimension estimation facco2017estimating . The algorithm first computes distanceMatrix, i.e. a matrix comprised of the distances to the top two nearest neighbors. In this work, we improve upon the original idea of simply computing the two nearest neighbors of a given instance. Instead, we ignore the neighbors that are equal to a given instance (are at distance $0$), and take into account the two nearest neighbors that are at a positive distance to the given instance (method pairwiseDistancesNonzero). The rationale for this step is that this method is also used on the output space of multi-label classification data, where two (or more) examples can often have the same output value (the vector describing the labels assigned to the instance). If we had followed the original method to the letter, some components of the vector $\mu$ would equal $\infty$ or NaN. Using the modified procedure, numerical instability is avoided whilst observing the same, or very similar, results. Next, $\mu$, the component-wise quotient of the two distance vectors, is computed (second closest against closest). An empirical distribution is derived from $\mu$. This distribution can be defined as: $\textsc{EMP}_{n}(x)=\frac{1}{|I|}\sum_{i}\mathbb{I}_{x_{i}<x},$ where $\mathbb{I}$ represents the indicator function. Thus, for a given $x$, $\textsc{EMP}_{n}(x)$ represents the relative number of elements that are smaller than $x$. The logarithms of $\mu$, as well as of (1 - $\textsc{EMP}_{n}$), are computed next.
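A brute-force sketch of this estimator, following the description of Algorithm 2, might read as follows (all names are ours; a real implementation would use an approximate nearest-neighbour index instead of the full distance matrix):

```python
import numpy as np

def estimate_latent_dimension(X):
    """TwoNN-style intrinsic-dimension sketch: mu is the ratio of the
    second- to the first-nearest-neighbour distance (zero distances
    skipped, as in pairwiseDistancesNonzero); the dimension is the
    zero-intercept slope of log(mu) vs. -log(1 - EMP(mu))."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    mu = np.empty(X.shape[0])
    for i, row in enumerate(D):
        d = np.sort(row[row > 0])          # drop self and exact duplicates
        mu[i] = d[1] / d[0]
    mu = np.sort(mu)[:-1]                  # drop the last point, where EMP = 1
    emp = np.arange(1, mu.size + 1) / X.shape[0]
    x, y = np.log(mu), -np.log(1.0 - emp)
    return int(round((x @ y) / (x @ x)))   # least-squares line through origin
```

The slope recovers $d$ because, under a locally constant density, $\mu$ follows the Pareto law $P(\mu>m)=m^{-d}$, so $-\log(1-\textsc{EMP}(\mu))\approx d\cdot\log\mu$.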
The line between the two quantities intersects 0, and its coefficient, when rounded to the nearest integer, corresponds to the estimated latent dimension. For example, the intrinsic dimension of the _genes_ data set is estimated to be 33 (see Figure 2). Figure 2: Intrinsic dimension for the _genes_ weinstein2013cancer data set. ### 3.5 Scaling up UMAP: Learning to embed based on representative subspaces As the most apparent memory (space) bottleneck, we (empirically) identified UMAP’s graph construction phase, where the entirety of the instances is used for learning to approximate the low-dimensional manifold (embedding). This paper explores whether representative subspaces of the instance space can serve similarly well for learning to embed. The idea was inspired by recent work on random output space selections for ensemble learning Breskvar2018 . The two adaptations we needed to consider were the initialization and the training on a representative subspace of the instances. In the original UMAP, the initialization is based on the spectral decomposition of the instance graph. We instead considered random initialization, which already notably reduced the memory requirement while keeping the quality of the ranking at the same level. However, on the larger data sets (see Section 4), the embedding computation was still beyond the capabilities of an off-the-shelf computer (Lenovo Carbon X1). To overcome this issue, we employ the following two-step sampling scheme. In the first step, the set of target values appearing in the data is ordered into a list. In the second step, we _cyclically_ iterate through the list and (without repetition) choose an element with a given target value, or skip this value if all samples labeled with the considered target value have already been chosen. The multi-class (and binary) classification examples thus adhere to a straightforward mapping from possible classes to the corresponding _multisets_ of instances.
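For the multi-class case, the two-step sampling scheme might be sketched as follows. The names, and the shuffling of each per-class pool, are our own choices; the paper only specifies the cyclic, without-repetition draw:

```python
import numpy as np
from collections import defaultdict

def representative_subsample(y, budget, rng=None):
    """Sketch of the two-step sampling scheme: order the observed
    target values, then cycle through them, drawing one not-yet-chosen
    instance per value until the budget is reached (skipping values
    whose instances are exhausted)."""
    rng = np.random.default_rng(rng)
    pools = defaultdict(list)
    for idx, label in enumerate(y):
        pools[label].append(idx)
    values = sorted(pools)                    # step 1: ordered target values
    for v in values:
        rng.shuffle(pools[v])
    chosen = []
    while len(chosen) < budget and any(pools[v] for v in values):
        for v in values:                      # step 2: cyclic, no repetition
            if pools[v] and len(chosen) < budget:
                chosen.append(pools[v].pop())
    return chosen
```

Because one instance per class is drawn per cycle, the subsample stays class-balanced until a minority class is exhausted, after which the remaining budget is spread over the other classes.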
We also extended this idea to the task of multi-label classification (MLC), where multiple labels can be simultaneously assigned to a given example. However, a finite number of possible different label sets allows for first enumerating them, and then proceeding as in the multi-class classification scenario. We consider the theoretical properties of this procedure in Section 3.8. ### 3.6 Adaptive neighbor selection and comparison to average neighbor representation The first improvement we introduce removes the hyperparameter $k$—the number of neighbors considered for an update step w.r.t. a single randomly chosen instance. Commonly, $k$ is a user-defined hyperparameter; however, we explored whether this part of the user input can be removed entirely and replaced by a distance distribution-based heuristic that _dynamically allocates_ a number of neighbors suitable for a given randomly selected instance, albeit at some additional computational cost. The rationale for such a heuristic is that real-life data sets are often not uniformly distributed liu2005investigation , indicating that only a few other instances are typically well connected with a given one (a scale-free property). We implement this (optional) estimation as follows. For each instance, ReliefF computes its distance to the remainder of the other instances in order to obtain the top $k$ nearest ones. Such a hard-coded selection scheme is not optimal, as it does not take into account the shape of the distance distribution with respect to an individual instance. To overcome this issue, we propose the following procedure: 1. Sort the distances w.r.t. the selected instance $\boldsymbol{r}_{i}$. 2. Compute the difference between each pair of consecutive entries in the distance vector. 3. Select the value of $k$ based on the maximum difference observed. This procedure intuitively takes into account the shape of the distance distribution as follows.
Assuming that all instances are similarly far from the selected instance, the difference vector will mostly consist of small values (any given pair of distances is not all that different). This can result in a large $k$, as the global maximum of the differences can occur very late. On the contrary, if only a handful of instances are close and the remainder is very far, only this handful will determine $k$, which will consequently be lower. Furthermore, if a very high $k$ is selected and all distances are approximately the same, their mean will be similar no matter how many neighbors are selected. If a similar situation is considered for a highly non-uniform distance distribution, the mean of the selected $k$ nearest instances should represent only the ones that are indeed similar to the selected instance, and not take into account the remainder, which is further away and possibly not as relevant. Even if the standard IID assumption holds when sampling a data set, we are always given a finite sample. The neighborhoods of the samples on the border of the convex hull of the data set most probably differ from the neighborhoods of those in the interior of the hull. Additional theoretical properties of this neighbor number selection are given in Section 3.8, where computational complexity is also considered. The proposed adaptation of ReliefF was further extended with an additional option to use the _average neighbor_ for the update step, instead of performing updates based on all neighbors and averaging the updates. Albeit subtle, this difference potentially improves the robustness of the update step, making it less prone to possible outliers amongst the nearest neighbors; such situations could emerge if the value of $k$ is hard-coded, as is commonly the practice. The intuition behind this incremental change in weight update is as follows.
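The three-step heuristic above (sort, difference, pick the largest gap) can be sketched in a few lines. This is a minimal sketch assuming unit-spaced distance indices; the function name is ours.

```python
import numpy as np

def adaptive_k(distances):
    """Pick k from the shape of the distance distribution.

    distances: 1-D array of distances from the selected instance to the
    other instances. Returns k such that the k nearest instances lie
    before the largest gap in the sorted distances.
    """
    sorted_dist = np.sort(distances)      # step 1: sort
    gaps = np.diff(sorted_dist)           # step 2: consecutive differences
    return int(np.argmax(gaps)) + 1       # step 3: neighbors before the largest gap
```

For example, with three tightly clustered distances followed by two far-away ones (`[0.1, 0.12, 0.11, 5.0, 5.1]`), the largest gap sits after the third sorted entry, so only the three close instances are kept as neighbors.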
As the distance computation is conducted in the _latent_ space, averaging the representation of the neighborhood can also be considered as constructing a _semantic representation_ of the considered instance’s neighborhood.

### 3.7 ReliefE - ranking via manifold embeddings

In the following, we discuss more formally the proposed ReliefE algorithm, incorporating the possible improvements stated in the previous sections. The solution (that handles both multi-class and multi-label classification) is given as Algorithm 3. We can see that this is an iterative algorithm where a random sample $\bm{r}$ is chosen on each of the $s$ iterations, and distances between $\bm{r}$ and the remaining instances are computed. We next discuss the two main parts of ReliefE with respect to the addressed classification task.

Data: feature set $F$, instance index set $I$, (sparse) feature matrix $\textbf{F}\in\mathbb{R}^{|I|\times|F|}$, target classes $C$, (sparse) target space $\textbf{T}\in\\{0,1\\}^{|I|\times|C|}$, neighborhood size $k$, boolean adaptiveThreshold, distance score $\delta$, latent dimension $d$, boolean estimateLatentDimension, MLC distance $\tau$, number of iterations $s$, indices $\nu$ of dimension estimation examples

$\boldsymbol{w}\leftarrow$ zero list of length $|F|$ $\triangleright$ Initiate importances.
if _estimateLatentDimension_ then
  $d\leftarrow$ latentDimension$(\textbf{F},\nu)\quad(\in\mathbb{N}_{+})$
$\textbf{E}\leftarrow$ manifoldProjection$(|\textbf{F}|$,$d$,$\nu)\quad\left(\in\mathbb{R}^{|I|\times d}\right)$
for _$i$ in 1 $\dots$ $s$_ do
  $\boldsymbol{r}_{i}\leftarrow$ randomInstance(F) $\triangleright$ Sample instance.
  if _MCC_ then
    for _$c\in C$_ do
      dists $\leftarrow$ computeDistances($\boldsymbol{r}_{i}$,$\delta$,E,$c$)
      sortedIndices $\leftarrow$ argSortDistances(dists)
      if _adaptiveThreshold_ then
        $k\leftarrow$ adaptiveThresholdMethod(sortedIndices, dists)
      nearestNeighbors $\leftarrow$ select(sortedIndices,$k$)
      for _$j\textrm{ in }1\dots|F|$_ do
        priorWeight $\leftarrow$ ComputeWeight($c$, dists, $k$)
        $\boldsymbol{w}[j]\leftarrow\boldsymbol{w}[j]$ + updateScore(priorWeight, nearestNeighbors, j)
  else if _MLC_ then
    dists $\leftarrow$ computeDistances($\boldsymbol{r}_{i}$,$\delta$,E)
    sortedIndices $\leftarrow$ argSortDistances(dists)
    if _adaptiveThreshold_ then
      $k\leftarrow$ adaptiveThresholdMethod(sortedIndices, dists)
    nearestNeighbors $\leftarrow$ sortedIndices[:$k$]
    targetDistances $\leftarrow$ {} $\triangleright$ Note the direct use of T compared to MCC.
    descriptiveDistances $\leftarrow$ {}
    for _neighborIndex $\in$ nearestNeighbors_ do
      distTar $\leftarrow$ distanceToTarget($\boldsymbol{T}[i]$, $\boldsymbol{T}$[neighbor], $\tau$)
      distDes $\leftarrow$ distanceToDesc($\boldsymbol{E}[i]$, $\boldsymbol{E}$[neighbor], $\delta$)
      targetDistances.add(distTar)
      descriptiveDistances.add(distDes)
    meanTarDist $\leftarrow$ mean(targetDistances)
    meanDesDist $\leftarrow$ mean(descriptiveDistances)
    td-diff, d-diff, t-diff $\leftarrow$ expectedDist(meanTarDist, meanDesDist)
    for _$j\textrm{ in }1\dots|F|$_ do
      $\boldsymbol{w}\leftarrow\boldsymbol{w}+\frac{\textsc{td-diff}}{\textsc{t-diff}}-\frac{\textsc{d-diff}-\textsc{td-diff}}{1-\textsc{t-diff}}.$ $\triangleright$ See Eq. 1.
return $\boldsymbol{w}$

Algorithm 3: ReliefE.

The adaptive threshold step can be further formalized as shown in Algorithm 4.
Data: dists, sortedIndices
sortedDist $\leftarrow$ reorderAscending(dists, sortedIndices)
sortedDiff[i] $\leftarrow$ sortedDist$[i+1]-$sortedDist$[i]$
$k\leftarrow\operatorname*{arg\,max}{\textrm{sortedDiff}}$ $\triangleright$ Peak point.
return $k$

Algorithm 4: adaptiveThresholdMethod.

#### 3.7.1 Multi-class classification

We first discuss the part of the ReliefE algorithm that handles multi-class classification (MCC) tasks. Here, the classes are traversed as follows. If adaptiveThreshold is enabled, the number of neighbors to be considered is determined dynamically for each sample (see Section 3.6 for more details). Next, the neighbors are selected and used for the final weight update, where the prior probabilities of the individual classes are also considered. The updateScore method iteratively updates the weights, and is defined in this work, for the $j$-th feature and the $i$-th sample, as follows: $w[j]\textrm{+=}\underbrace{\Big{|}\boldsymbol{r}_{i}^{j}-\mathbb{E}[\textrm{nearestNeighbors}(i)][j]\Big{|}}_{\textrm{absMean weight update}}\cdot\underbrace{\begin{cases}P[c]/(1-P[c_{i}])&;\;c\neq c_{i}\\\ -1&;\;c=c_{i}\end{cases}}_{\textrm{Prior information}}.$ In the proposed update step, an instance is compared directly to the mean of its neighbors, which reduces the noise compared to the original updates of Relief kira1992feature , where the order of the averaging and $|\cdot|$ operators is reversed. Here, nearestNeighbors represents the ordered set of indices of the top $k$ neighbors; thus, $\mathbb{E}[\textrm{nearestNeighbors}(i)][j]$ represents the first moment w.r.t. the $j$-th feature based on the nearest neighbors of the $i$-th instance (there are $k$ such neighbors). We set the prior factor to -1 and the offset for considering the nearest neighbors to +1 if $c=c_{i}$, i.e., the currently considered class $c$ is equal to the class $c_{i}$ of the selected instance.
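The absMean update with the prior-information factor can be sketched in vectorized form. This is a minimal sketch under our own naming, covering one update for a single class `c`; the full algorithm repeats it over classes and sampled instances.

```python
import numpy as np

def abs_mean_update(w, r, neighbors, prior_c, prior_ci, same_class):
    """One absMean weight update for all features at once.

    w: current weight vector; r: the selected instance (1-D array);
    neighbors: 2-D array whose rows are the k nearest neighbors from class c;
    prior_c, prior_ci: prior probabilities P[c] and P[c_i];
    same_class: True when c == c_i.
    """
    mean_neighbor = neighbors.mean(axis=0)       # E[nearestNeighbors(i)]
    diff = np.abs(r - mean_neighbor)             # absMean term, per feature
    factor = -1.0 if same_class else prior_c / (1.0 - prior_ci)
    return w + diff * factor
```

For a same-class neighbor identical to the instance, the update vanishes; for a different class, the update is scaled by $P[c]/(1-P[c_i])$.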
#### 3.7.2 Multi-label classification

For multi-label classification (the MLC option), the weight update step differs substantially from the MCC case, after selecting a random instance ($\boldsymbol{r}_{i}$) and determining its distance to the other examples. First, the indices of the closest $k$ neighbors are stored in nearestNeighbors. As the values in the target space $\boldsymbol{T}$ are sets of (multiple) labels per instance, the simple iteration considered in the MCC case does not take the interactions between classes (label co-occurrences) into account. Hence, distances are also computed between the target values of $\boldsymbol{r}_{i}$ and its nearest neighbors, by using one of the implemented distance options of ReliefE, given in Table 1.

Table 1: Considered distances between rows in the multi-label output matrix $\boldsymbol{T}$. Here, $\boldsymbol{t}_{1}$ and $\boldsymbol{t}_{2}$ correspond to two rows, and nnz denotes the count of non-zero elements in a given row vector. Note that the considered vectors are binary in all but the cosine and hyperbolic cases (last two rows).
Distance | Definition
---|---
F1 | $\textrm{dist}=\begin{cases}1-\frac{2\boldsymbol{t}_{1}\boldsymbol{t}_{2}^{T}}{\textrm{nnz}(\boldsymbol{t}_{1})+\textrm{nnz}(\boldsymbol{t}_{2})}&;\;\textrm{nnz}(\boldsymbol{t}_{1})+\textrm{nnz}(\boldsymbol{t}_{2})>0\\\ 0&;\;\textrm{otherwise}\end{cases}$
Accuracy | $\textrm{dist}=\begin{cases}1-\frac{\boldsymbol{t}_{1}\boldsymbol{t}_{2}^{T}}{\textrm{nnz}(\boldsymbol{t}_{1}+\boldsymbol{t}_{2})}&;\;\textrm{nnz}(\boldsymbol{t}_{1}+\boldsymbol{t}_{2})>0\\\ 0&;\;\textrm{otherwise}\end{cases}$
Subset | $\textrm{dist}=\begin{cases}1&;\;\boldsymbol{t}_{1}==\boldsymbol{t}_{2}\\\ 0&;\;\textrm{otherwise}\end{cases}$
Hamming | $\textrm{dist}=\sum_{i=1}^{|\bm{t}_{1}|}|\boldsymbol{t}_{1,i}-\boldsymbol{t}_{2,i}|/|\boldsymbol{t}_{1}|$
Cosine (if embedded) | $\textrm{dist}=\boldsymbol{t}_{1}\boldsymbol{t}_{2}^{T}/(\left\lVert\boldsymbol{t}_{1}\right\rVert_{2}\left\lVert\boldsymbol{t}_{2}\right\rVert_{2})$
Hyperbolic (if embedded) | $\textrm{dist}=\operatorname{arcCosh}(-\bm{t}_{1}\bm{t}_{2}^{T})$

Note that we also consider the cosine and hyperbolic distances, which are applicable if the label space is embedded prior to the ranking step. We believe that the employment of manifold projections that operate on sparse spaces can be relevant for high-dimensional output spaces, as seen, for example, in gene function prediction urbanowicz2018benchmarking . Once the distances are obtained, based on both the input space and the output space, this information is used for updating feature weights as follows. Let $K$ represent the set of considered nearest neighbors. Let t-diff represent the mean of the target distances tar-diff. Let d-diff represent the mean of the (descriptive) distances to neighbors (des-diff), as also considered for the MCC case, i.e., the absolute difference between the selected instance $\boldsymbol{r}_{i}$ and the mean of the nearest neighbors.
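The binary-row distances in Table 1 are straightforward to implement. Below is a minimal sketch of three of them (F1, Accuracy, Hamming) for NumPy arrays; the function names are ours.

```python
import numpy as np

def f1_distance(t1, t2):
    """F1-based distance between two binary label rows (Table 1, first row)."""
    denom = np.count_nonzero(t1) + np.count_nonzero(t2)
    return 1.0 - 2.0 * float(t1 @ t2) / denom if denom > 0 else 0.0

def accuracy_distance(t1, t2):
    """Accuracy-based distance; the denominator counts labels set in either row."""
    denom = np.count_nonzero(t1 + t2)
    return 1.0 - float(t1 @ t2) / denom if denom > 0 else 0.0

def hamming_distance(t1, t2):
    """Fraction of label positions on which the two rows disagree."""
    return float(np.abs(t1 - t2).sum()) / t1.shape[0]
```

For example, for $\boldsymbol{t}_1 = (1,1,0)$ and $\boldsymbol{t}_2 = (1,0,0)$, the F1 distance is $1 - 2/3 = 1/3$ and the Hamming distance is $1/3$.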
More formally, $\textsc{t-diff}=\mathbb{E}[\textrm{dist}(\boldsymbol{t}_{i},\boldsymbol{T}[\bm{n}\in K])]\quad\textrm{ and }\quad\textsc{d-diff}=\mathbb{E}[\textrm{dist}(\boldsymbol{r}_{i},\boldsymbol{E}[\bm{n}\in K])],$ where $\bm{X}[\bm{n}\in K]$ keeps only the rows of the matrix $\bm{X}$ that correspond to the neighbors $\bm{n}\in K$. We further define $\textsc{td-diff}=\mathbb{E}[\textsc{tar-diff}\odot\textsc{des-diff}],$ and the weight update can be defined as: $w[j]\textsc{+=}\frac{\textsc{td-diff}}{\textsc{t-diff}}-\frac{\textsc{d-diff}-\textsc{td-diff}}{1-\textsc{t-diff}}.$ (1) This weight update concludes the ranking for multi-label classification.

### 3.8 Theoretical analysis

We next discuss the relevant theoretical aspects of ReliefE, ranging from computational complexity analysis (both time and space) to the implications of the considered adaptations.

#### 3.8.1 Time complexity

The time complexity of ReliefE can be studied with respect to its two main modes of operation – multi-class and multi-label classification.

• Dimensionality estimation. The dimensionality estimation step, as optionally considered in this work, requires pairwise distance computation between the instances. Thus, $\Theta(|\nu|^{2}\cdot|F|)$ operations are required, where $\nu$ represents the indices of the samples used for dimension estimation, and $|\nu|<|I|$, as discussed in Section 3.5. Note that if all instances are considered, the complexity rises to $\Theta(|I|^{2}\cdot|F|)$.

• Manifold projections. Learning low-dimensional representations is one of the computationally more intensive parts of ReliefE. Following mcinnes2018umap-software , UMAP’s complexity can be split into two main parts. First, approximate nearest neighbor computation was shown to have an empirical complexity of $\mathcal{O}(|I|^{1.12}\cdot|F|)$ dong2011efficient .
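The three expectations and the RReliefF-style update used in Algorithm 3 can be sketched as follows. This is a minimal sketch under our own naming, assuming the per-neighbor distances have already been computed.

```python
import numpy as np

def mlc_update_terms(target_dists, desc_dists):
    """Compute the three expectations used in the MLC weight update.

    target_dists, desc_dists: 1-D arrays of per-neighbor distances in the
    target space and the (embedded) descriptive space, respectively.
    """
    t_diff = target_dists.mean()                    # E[tar-diff]
    d_diff = desc_dists.mean()                      # E[des-diff]
    td_diff = (target_dists * desc_dists).mean()    # E[tar-diff ⊙ des-diff]
    return t_diff, d_diff, td_diff

def mlc_weight_update(w, t_diff, d_diff, td_diff):
    """Apply the update from Algorithm 3 to the whole weight vector."""
    return w + td_diff / t_diff - (d_diff - td_diff) / (1.0 - t_diff)
```

As a sanity check, with target distances of 0.5 for both neighbors and descriptive distances of 0.2 and 0.4, the two terms cancel and the weights stay unchanged.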
However, in the sampling limit, if all instances are considered, the computational complexity is equal to that of pairwise comparisons – $\mathcal{O}(|I|^{2}\cdot|F|)$. The optimization of embeddings requires an additional $\mathcal{O}(k\cdot|I|)$ steps, where $k$ is the number of nearest neighbors (a hyperparameter). The overall worst-case complexity is thus $\mathcal{O}(|I|^{2}\cdot|F|)$. Note that the proposed cyclic sampling scheme (Section 3.5) implies $I\rightarrow\nu$ for all cases in this paragraph.

• Multi-class. Given a fixed number of samples $s$, ReliefE traverses all $|T|$ classes, and for each one performs the sampling. The adaptive neighbor selection scheme does not cost any additional time w.r.t. $|I|$, as the distances are already computed. The feature update step requires $\mathcal{O}(|F|)$ operations for each neighbor. The complexity of the original, re-implemented ReliefF is thus $\mathcal{O}(|I|\cdot|F|\cdot s)$ robnik2003theoretical . The absMean update does not change this complexity; however, when adaptive scoring is considered, distances to the class members need to be sorted. We re-use the sorted indices of the top neighbors to obtain the closest distances, thus no additional time is spent on sorting. If ReliefE is considered, the complexity needs to be adapted for the input dimension estimation, as well as for the lower dimension in which distances are computed. The final complexity is thus $\mathcal{O}(|\nu|^{2}\cdot|F|+|I|\cdot d\cdot s)$, where $d$ is the dimensionality of the embedding. Assuming the “empirical complexity” from the previous paragraph holds, the multi-class complexity can also be stated as $\mathcal{O}(|\nu|^{1.12}\cdot|F|+|I|\cdot d\cdot s)$.

• Multi-label. The complexity of multi-label classification needs to additionally account for the distances computed between the target instances.
Effectively, $\mathcal{O}(|\nu|^{2}|F|+s\cdot(|I|\cdot d_{F}+k\cdot d_{T}))$ operations are required, where $d_{F}$ and $d_{T}$ correspond to the embedding dimensions of the input and output space – for each sample, distances first need to be computed between that sample and the instances in the input space ($|I|$). Once the top $k$ nearest instances are identified, the distances of the target instance to these $k$ other target instances are computed in the $d_{T}$-dimensional space.

• Down-stream ranking. Commonly, Relief algorithms operate in the raw feature space; however, as ReliefE operates via embedding-based distance computation, we consider the option that the embeddings are _pre-computed_. This is possible due to the fact that many contemporary embedding algorithms _refine_ the representation once new data is obtained, and do not (necessarily) re-compute the embedding for each new instance. In this case, the initial complexity of $\mathcal{O}(|\nu|^{2}\cdot|F|+|I|\cdot d\cdot s)$ reduces to $\mathcal{O}(|I|\cdot d\cdot s)$.

#### 3.8.2 Space complexity

The proposed implementation of ReliefE, in comparison with, e.g., state-of-the-art Python-based implementations (as found in urbanowicz2018benchmarking ), operates easily in very high-dimensional, sparse vector spaces. In practice, we adopt the CSR formalism for matrix representation. Here, a sparse matrix is stored as three arrays: a data array, a pointer array and an index array. All three encode only the non-zero entries of the input space. Note that such a representation is not optimal for dense matrices, as it results in some (minor) space overhead. This design decision means that every computationally-intensive operation that is part of ReliefE was re-written with handcrafted, CSR-friendly, Numba-compilable methods. More formally, let nnz correspond to the number of non-zero elements in a given matrix.
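The CSR layout described above (data, column-index, and row-pointer arrays) can be illustrated with a tiny pure-Python conversion. This is a minimal sketch for intuition; production code would use an optimized sparse library such as scipy.sparse.

```python
def to_csr(dense):
    """Convert a dense row-major matrix (list of lists) to CSR arrays.

    Returns (data, indices, indptr): the non-zero values, their column
    indices, and pointers marking where each row starts in `data`.
    """
    data, indices, indptr = [], [], [0]
    for row in dense:
        for col, value in enumerate(row):
            if value != 0:
                data.append(value)     # non-zero value
                indices.append(col)    # its column
        indptr.append(len(data))       # row boundary
    return data, indices, indptr
```

For the matrix [[0, 3, 0], [1, 0, 0], [0, 0, 7]], this yields data [3, 1, 7], column indices [1, 0, 2], and row pointers [0, 1, 2, 3]: only the three non-zero entries are stored, regardless of the matrix shape.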
The space complexity of ReliefE can thus be stated as $\mathcal{O}(\textrm{max}(\textrm{nnz}(\boldsymbol{F}),\textrm{nnz}(\boldsymbol{E_{F}}))+\textrm{max}(\textrm{nnz}(\boldsymbol{T}),\textrm{nnz}(\boldsymbol{E_{T}})))$. The complexity thus depends on the relation between the embedded space and the input space, which can be very context-dependent; however, very low-dimensional embeddings normally do not result in a space overhead, and neither do highly sparse input matrices. More formally, if $\textrm{nnz}(\boldsymbol{F})\geq|I|\cdot d\textrm{ or }\textrm{nnz}(\boldsymbol{T})\geq|I|\cdot d$, the embedded space will require less (or equal) memory. Note that $\boldsymbol{T}$, corresponding to a potentially very sparse output space, is similarly considered as a sparse matrix, meaning that classification problems with very high-dimensional target spaces can also be considered; to our knowledge, this is one of the first such Python-based, user-friendly implementations. As dimensionality estimation only requires the two closest neighbors, we do not keep all the others in memory; the space complexity thus becomes linear, i.e., $\mathcal{O}(|I|)$ (in fact, exactly $2\cdot|I|$). We empirically discovered that UMAP’s memory requirements are the main space bottleneck and, based on the evaluation on the larger data sets, require $\mathcal{O}(|I|^{2})$ (empirical) space. Such complexity potentially arises from the dense computational graph derived by UMAP. This observation led us to introduce the representative (cyclic) sampling scheme, which reduced this complexity to $\mathcal{O}(|\nu|^{2})$, making ReliefE executable even on an off-the-shelf computer (Lenovo Carbon X1). Note that the number of samples is lower-bounded by the number of classes or unique label sets.

#### 3.8.3 absMean update step and its implications

Compared to the original ReliefF, one of the modifications implemented in ReliefE is the comparison of a given instance directly to the average nearest neighbor.
We believe that this approach is advantageous in two ways.

Figure 3: Updating weights with the absMean approach. (a) $\bm{r}$ is in the center of its class; (b) $\bm{r}$ is on the border of its class. The $n_{1,2,3}$ represent the instance $\bm{r}$’s neighbors.

First, as shown in Fig. 3(a), if a sample $\bm{r}$ that is far away from the class border is chosen, we cannot capture the local structure of the data in the other classes, so such samples $\bm{r}$ should not influence the updates considerably. This is not the case in the standard ReliefF, since the differences in feature values are necessarily large. This is overcome by computing the average neighbor first, and then updating the weights. Second, when the sample $\bm{r}$ is close to the border (Fig. 3(b)), averaging the neighbors results in correctly detecting that the general direction of the neighbors should be perpendicular to the class borders when the number of samples goes to infinity. For example, in the situation depicted in Fig. 3(b), only $n_{1}$ should be rewarded. Again, computing the mean neighbor $\mathbb{E}(\bm{n})$ first brings us closer to the optimal direction. The reduction of noise can also be shown using the triangle inequality, $\frac{1}{k}\sum_{j=1}^{k}|n_{i}^{0}-n_{i}^{j}|\geq|n_{i}^{0}-\frac{1}{k}\sum_{j=1}^{k}n_{i}^{j}|,$ from which it directly follows that this approach results in smaller (or equal) weight updates.

#### 3.8.4 Adaptive neighbor selection and its behavior

The considered adaptive neighbor selection attempts to reduce the number of hyperparameters by one ($k$), potentially saving $\mathcal{O}(k)$ optimization iterations, should this parameter be tuned. Furthermore, by considering only the neighbors potentially relevant for a given instance, less noise enters the weight update step. For example, assume $k=7$, with only three other instances very close and the remaining four much further away (by a large margin).
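The triangle-inequality argument above can be checked numerically: the absMean update (right-hand side) never exceeds the averaged per-neighbor update (left-hand side). A minimal sketch on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(size=50)                  # the selected instance, feature-wise
neighbors = rng.normal(size=(7, 50))     # k = 7 nearest neighbors (synthetic)

# Left-hand side: per-neighbor absolute differences, then averaged.
mean_of_abs = np.abs(r - neighbors).mean(axis=0)
# Right-hand side: the absMean update against the average neighbor.
abs_of_mean = np.abs(r - neighbors.mean(axis=0))

# The absMean update is smaller or equal for every feature.
assert np.all(abs_of_mean <= mean_of_abs + 1e-12)
```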
These four further instances will impact the weight update significantly, as the average distance will be heavily biased towards their mean, and thus potentially not representative of the close neighborhood of the given instance that naturally appears in the data. A visualization of such a situation is shown in Figure 4. In both panels ((a) and (b)), the outer circle represents the neighborhood for a hard-coded value of $k$. In Figure 4(a), very distant instances are also considered for the update (e.g., from $n_{3}$ onward), and the adaptive estimation selects only the closest neighbors (green). However, in Figure 4(b), all instances are very close, thus the hard-coded value of $k$ is equal to the automatically selected one.

Figure 4: Adaptive neighbor selection. (a) Dispersed neighborhood; (b) local neighborhood. The $n_{1,2,3,\dots}$ represent neighbors.

This follows the intuition behind the Relief family of algorithms, where an instance is compared to its slight perturbations. Another downside of having $k$ fixed is that taking into account more distant nearest neighbors would (on average) increase the importance of noisier features, since the distance values directly influence the importance. Irregularities in the distance distribution were shown to hold for many real-life data sets; see, for example, the assumptions and their implementation in dong2011efficient . Finally, as ReliefE operates in a latent, low-dimensional space obtained by instance similarity comparison, comparison to only the closest instances is potentially meaningful.

#### 3.8.5 Parallelism aspects

The proposed implementation exploits the Numba framework for just-in-time compilation lam2015numba . Numba offers parallelism at the level of individual methods that get compiled, meaning that the proposed implementation offers parallelism at the level of weight updates. During compilation, parts of the code that are sensible to compile are detected _automatically_.
Many operations, such as scalar-vector addition, can easily be parallelized. With auto-parallelization, Numba attempts to identify such operations in the ReliefE weight update step and fuse adjacent ones together, to form one or more kernels that are automatically run in parallel. In practice, however, we observed that such auto-parallelism does not necessarily offer superior performance in terms of speed. Nevertheless, it represents an elegant, array-level parallelism detection which, when improved/updated, will speed up the execution time even more. We defer the discussion of the different spaces considered during ReliefE to Appendix A.

#### 3.8.6 How powerful is ReliefE?

Throughout this paper, we propose and demonstrate the utility of ReliefE when tabular data is considered. However, as ReliefE requires merely the _representations_ of instances (training or target), the proposed approach generalizes well beyond tabular data with a single adaptation: the embedding method needs to be suitable for the considered data type. For example, if an instance is described by an ordered list of graphs, the plethora of graph embedding methods goyal2018graph ; 9265235 could be used to prioritize the graphs based on their (learned) representations. Similarly, ReliefE could be adapted for learning in the context of relational databases, via Wordification perovvsek2015wordification and other propositionalization-like algorithms.

## 4 Empirical evaluation setting

Our empirical evaluation of ReliefE consists of several sub-studies and can be summarized as follows. First, we discuss the evaluation of ReliefE against state-of-the-art ranking algorithms on eight multi-class classification data sets. Next, we present the empirical evaluation setup where ReliefE’s capabilities are shown on nine multi-label classification data sets. Finally, we conducted a series of experiments in which we investigated the convergence and time performance in more detail.
We conclude this section by describing the Bayesian and frequentist approaches, which aided the understanding of the results.

### 4.1 Multi-class classification data sets

Multi-class classification remains one of the most widely adopted forms of learning. Here, the target space is a single, integer-valued vector, where each instance belongs to one of the many possible classes. In this work, we consider a wide spectrum of data sets, summarized in Table 2.

Table 2: The properties of the considered multi-class classification data sets. The last column denotes the proportion of non-zero elements in the data table.

Data set | Instances | Features | Classes | Proportion of non-zero entries
---|---|---|---|---
chess shapiro1984role | 3196 | 38 | 2 | 0.726558
biodeg-p2-discrete dvzeroski1999experiments | 328 | 61 | 4 | 0.107357
optdigits alpaydin1998cascading | 5620 | 62 | 10 | 0.528117
madelon guyon2005result | 2000 | 500 | 2 | 0.999999
php88ZB4Q php | 10299 | 561 | 6 | 0.999860
pd_speech_features sakar2013collection | 756 | 753 | 2 | 0.995294
dlbcl armstrong2002mll | 77 | 7070 | 2 | 0.997388
tumors C tumors-c | 60 | 7129 | 2 | 0.995722
AP_Ovary_Kidney stiglic2010stability | 458 | 10935 | 2 | 1.000000
ohscal.wc ohscal | 11162 | 11465 | 10 | 0.005270
genes weinstein2013cancer | 801 | 20531 | 5 | 0.857824

The data sets come from multiple domains, including biological, social and other domains (e.g., chess). The data sets also vary in dimensions, in terms of the numbers of both rows and columns.

### 4.2 Multi-label classification data sets

Feature ranking for multi-label classification remains an active research area. Many of the approaches considered in the previous section (on multi-class classification) are not able to handle the multi-label setting, where a single instance can belong to many classes simultaneously.
Such a setting, for example, naturally emerges in gene function prediction: a single gene is associated with many functions and contexts. The considered multi-label data sets are summarized in Table 3. Similarly to the multi-class setting, we selected data sets from various domains to maintain diversity. Note that multiple repetitions of 10-fold cross-validation were needed to perform the Bayesian comparisons.

Table 3: The properties of the considered MLC data sets. The last column denotes the proportion of non-zero elements in the data table.

Data set | Instances | Features | Classes | Proportion of non-zero entries
---|---|---|---|---
delicious delicious | 16105 | 1000 | 983 | 0.500000
imdb imdb | 120919 | 1001 | 28 | 0.019363
medical medical | 978 | 2898 | 45 | 0.500000
bibtex bibtex | 7395 | 3672 | 159 | 0.500000
Education1 ueda | 12030 | 27534 | 33 | 0.004059
Health1 ueda | 9205 | 30605 | 32 | 0.003555
Entertainment1 ueda | 12730 | 32001 | 21 | 0.004552
Science1 ueda | 6428 | 37187 | 40 | 0.004659
Social1 ueda | 12111 | 52350 | 39 | 0.002949

### 4.3 Additional experiments and statistical evaluation of results

For MCC, logistic regression with its default parameters was used as the learner. The first reason for this choice is that this learner is commonly used to evaluate the quality of a given data representation (in our case, a subset of the feature space) and is known to be sensitive to noisy features. The second reason is computational: with all the repetitions required for the Bayesian analysis, an additional grid search would be out of reach, as it could increase the computational time beyond reasonable limits. For the MLC setting, the default random forest parametrization was used, as it has previously been shown to perform competitively in such a setting. Throughout the experiments, we set the regularization term of logistic regression (C) to one, the default value pedregosa2011scikit .
For multi-label classification, we considered the RandomForest classifier with the default settings from pedregosa2011scikit . As we consider either multi-class or multi-label problems, we compute either relative F1 or micro-averaged relative F1 scores, defined as: $\textrm{rF1}(f)=\frac{\textrm{F1}_{f}}{\textrm{F1}_{f=|F|}},$ where F1 is the harmonic mean of precision and recall, and $f$ is the number of top-ranked features considered. The macro rF1 is defined in the same fashion. Considering relative performance offers direct insights into how performant a given ranking is with a given number of top-ranked features. Note that by considering relative performance, it can be directly observed when the feature ranking algorithm identifies a ranking that outperforms the situation where all features are considered – a reasonable baseline. We performed ten-fold stratified cross-validation ten times, as required for the statistical analysis discussed next. In order to summarize the overall performance of a given ranking, we believe that the ranking’s quality needs to be taken into account over all possible numbers $f$ of top-ranked features. Hence, we introduce the area under rF1 (AUrF1), i.e., the integral of rF1 normalized by the number of considered top-$f$ rankings (to be more comparable across data sets), where we integrate numerically with Simpson’s rule. Recent criticism of the frequentist non-parametric comparison of multiple classifiers demvsar2006statistical has given rise to a novel spectrum of Bayesian t-tests that directly offer insight into a probability space corresponding to the differences in algorithm performance benavoli2017time . In this work, we adopt the hierarchical t-test, which is capable of comparing pairs of classifiers. The hierarchical Bayesian t-test is used to assess the probability of observing a given difference in performance between a pair of classifiers. As noted by Benavoli et al.
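The AUrF1 summary can be sketched as follows: composite Simpson’s rule over the unit-spaced rF1 curve, normalized by the integration range. This is a minimal sketch under our own naming; the exact normalization in the original implementation may differ.

```python
import numpy as np

def au_rf1(rf1_scores):
    """Area under the rF1 curve, normalized by the f-range.

    rf1_scores: rF1 values at f = 1..n top-ranked features (unit spacing).
    Composite Simpson's rule needs an even number of unit intervals,
    i.e. an odd number of points.
    """
    y = np.asarray(rf1_scores, dtype=float)
    n = y.size
    assert n % 2 == 1, "composite Simpson needs an even number of intervals"
    # Simpson weights: 1 at the ends, 4 at odd interior points, 2 at even ones.
    integral = (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum()) / 3.0
    return integral / (n - 1)  # normalize by the number of unit intervals
```

A ranking whose rF1 is constantly 1 (i.e., every prefix matches the all-features baseline) yields AUrF1 = 1, which makes scores comparable across data sets with different feature counts.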
benavoli2017time , it requires that, e.g., ten repetitions of ten-fold cross-validation be considered in order to reliably fit a hierarchical model. The approach attempts to model the probability of observing a given difference in performance between a pair of classifiers, which can be in favor of either of the classifiers or undetermined, i.e., practically equivalent (the rope region). The plotted results are given in the form of triangular schemes, where each point represents a sample from the posterior distribution. Such samples, when aggregated, directly represent the probability of observing a given state (in this case, a difference between the classifiers). We set the rope region to 5%: if the difference in quality between two rankings is less than 5%, they are considered equal. The remaining settings are the same as in the original paper benavoli2017time ; we compare the top ranking for each fold. For a given pair of ranking algorithms, the pairwise Bayesian tests were performed on the data sets common to both algorithms. Finally, the results on time performance are presented as computation-time diagrams (in seconds) with standard deviations. Such a comparison is not necessarily informative when multiple classifiers are considered simultaneously, thus we also offer the results in the form of average rank diagrams demvsar2006statistical . We believe that having both local and global insights into the relations between the classifiers makes their differences easier to study, even though looking at the classifier ranks alone can be misleading benavoli2017time .

### 4.4 Considered implementations and baselines

We next discuss the implementations considered. For multi-class classification, the considered Relief variants were MultiSURF, MultiSURFstar and ReliefF, all from the scikit-rebate library urbanowicz2018benchmarking . We also used RandomForest (RF)-based importances (Genie3) and Mutual Information (MI)-based ones pedregosa2011scikit .
The multi-class Relief variants that are the original contribution of this work include: ReliefE, ReliefE-absMean, ReliefE-adaptive and ReliefE-absMean-adaptive. The suffix _adaptive_ denotes the use of an adaptive threshold and _absMean_ the use of the absMean update step. Multi-label classification is not supported (at all) in scikit-rebate urbanowicz2018benchmarking , and thus we considered the multi-label variants of ReliefE and ReliefF (re-implemented in this work with Numba) with all of the possible distances given in Table 1. We emphasize that when multi-label distances are considered, only the cosine and hyperbolic distances operate on target space embeddings (the other distances do not). The computation of these distances is also more efficient.

Table 4: Computational complexity of feature importance estimation. For Relief algorithms, we used $s=|I|$. The $t$ corresponds to the number of trees.

Algorithm | time complexity | space complexity
---|---|---
ReliefE | $\mathcal{O}(|\nu|^{2}\cdot|F|+|I|\cdot d\cdot s)$ | $\mathcal{O}(|\nu|^{2})$
ReliefF | $\mathcal{O}(|F|\cdot|I|^{2})$ | $\mathcal{O}(|F|)$
Random Forest | $\mathcal{O}(t\cdot|F|\cdot|I|\cdot\log^{2}|I|)$ | $\mathcal{O}(|I|+|F|)$
Mutual Information | $\mathcal{O}(|F|\cdot|I|)$ | $\mathcal{O}(|I|)$

Note that all versions of ReliefF, implemented or re-implemented in this work (the implementation’s official repository is https://github.com/SkBlaz/reliefe), natively operate on sparse spaces, which is on its own a contribution of this work. In terms of sparsification, we set the sparsification threshold to 0.15, meaning that if a matrix’s density is higher than 15%, it is sparsified with the proposed procedure (there are many such matrices amongst the considered data sets). Detailed results investigating the ablation of the considered data sets’ (induced) sparsities are given in Appendix B. Similarly, the behavior of the adaptive $k$ statistic is studied in more detail in Appendix C.
Further, $\nu$ (the sample for intrinsic dimension estimation) was set to 2048. This number was chosen so that the algorithm runs normally on an off-the-shelf computer (Lenovo Carbon X1), even for larger data sets. Thus, if a given data set consisted of more than 2048 instances, a representative subset of 2048 instances was considered for estimating the intrinsic dimension and the consequent embedding. UMAP's settings are left at their defaults, with the dimension being set to the estimated one (an extensive evaluation of UMAP's capabilities w.r.t. the proposed implementations is beyond the scope of this paper, and is left for further work). The value of $k$ is set to 15 for our implementation of ReliefF, and left at its defaults for the baselines. The time and space complexity of the baselines and ReliefE are summarized in Table 4. Note that, even though the space complexity of ReliefF (and its other variants) is linear w.r.t. $|F|$, their implementations, should they not consider the sparse input structure, in fact require $\mathcal{O}(|I|\cdot|F|)$ space (as found, e.g., in urbanowicz2018benchmarking ). Finally, the considered experiments for multi-label classification consider both Euclidean embeddings, as well as non-Euclidean ones (Poincaré ball).

## 5 Results

This section presents the results of the empirical evaluation. We begin by discussing the performance comparisons for the task of multi-class classification. We follow on by discussing the results of the experiments on multi-label classification tasks. Finally, we present additional investigations of ReliefE's behavior.

### 5.1 Multi-class classification

We first present two average rank diagrams depicting the relative performance of the different ranking methods for MCC in terms of the quality of the produced rankings, as measured by the corresponding average and maximum F1 scores (Figures 5(a) and 5(b), respectively).
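The average ranks underlying such diagrams can be computed from a matrix of per-data-set quality scores. A minimal numpy sketch (without the fractional-rank handling of ties that full Friedman-style analyses use):

```python
import numpy as np

def average_ranks(scores):
    """scores: (n_data_sets, n_methods) array of quality scores, higher
    is better. Returns each method's rank averaged over data sets
    (rank 1 = best on a data set). Ties are broken by column order."""
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(-scores, axis=1)           # best method first
    ranks = np.empty_like(scores)
    rows = np.arange(scores.shape[0])[:, None]
    ranks[rows, order] = np.arange(1, scores.shape[1] + 1)
    return ranks.mean(axis=0)

# Method 0 wins on both data sets, method 1 is worst on both.
r = average_ranks([[0.9, 0.5, 0.7],
                   [0.8, 0.4, 0.6]])
```

Lower average rank means the method more often outperforms the others across data sets.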
The diagrams include critical distances, representing the minimum differences in performance that are statistically significant. It can be observed that the ReliefE variants yield the best performing rankings (with the lowest average ranks, Figure 5(a)), but there are not many such rankings (Figure 5(b)). The AUrF1 values (Appendix D) indicate that the performances of the top 5 feature ranking algorithms are highly similar (within the confidence interval).

Figure 5: Max (a) and mean (b) F1 scores across all feature rankings.

We next present the mean time consumption averaged across data sets. Consistently slower SURF variants of ReliefF can be observed in the rightmost part of Figure 6(a). The average rank diagram is shown in Figure 6(b).

Figure 6: Speed comparison of ranking approaches for MCC. (a) Average absolute running times with standard deviations (in seconds): ReliefE variants perform an order of magnitude (or more) faster. (b) Average rank diagram (times): relative running times are given in terms of average ranks, where lower ranks mean worse performance, i.e., longer running times.

Additional analysis of the proportions of time spent at different parts of the algorithm is presented in Appendix E, showing that most time is spent on feature weight updates. Average rank diagrams comparing the rankings in terms of the top 50 and 100 features are given in Appendix F.

### 5.2 Bayesian ranking comparison of ranking approaches for MCC

In this section, we present selected Bayesian pairwise comparisons of classifiers' performance. Previously determined relationships, such as the dominance of the SURF branch of algorithms over mutual information, were confirmed, and further extended by adding comparisons with the proposed ReliefE branch of algorithms. The comparisons are presented in Figures 7 and 8.

(a) MultiSURFstar vs. MI (b) MultiSURFstar vs. RF (c) MultiSURFstar vs.
ReliefE-absMean-adaptive (d) MultiSURFstar vs. ReliefE (e) MultiSURFstar vs. SURF (f) MultiSURFstar vs. ReliefF

Figure 7: Bayesian comparisons of performance (ranking qualities) between MultiSURFstar and other feature ranking methods for MCC. Each diagram has three main regions (parts of the pyramid). The two bottom regions correspond to the samples associated with the dominance of each of the two algorithms compared, and the rope region to the difference space, where the winner is not clearly defined. The probability density directly corresponds to the density of dots in the diagram; thus, the part of the diagram with the highest density implies the most probable situation. Individual (posterior) probabilities are also shown next to each diagram, and denote the probabilities of one algorithm outperforming the other or the two algorithms performing similarly.

The key results of such pairwise comparisons can be summarized as follows. Very few comparisons yield clear winners. In the majority of cases, when the most competitive methods are considered, the probability that one of the ranking algorithms dominates is below 50%, giving no strong evidence for dominant ranking algorithms. This is also the case for the diagrams in Figure 7.

(a) ReliefE-absMean-adaptive vs. SURF (b) ReliefE-absMean-adaptive vs. MultiSURF (c) ReliefE-absMean-adaptive vs. RandomForest (d) ReliefE-absMean-adaptive vs. ReliefF

Figure 8: Bayesian comparisons of performance (ranking qualities) between ReliefE-absMean-adaptive and other feature ranking methods for MCC.

The visualizations in Figure 8 show that ReliefE-absMean-adaptive, the implementation proposed in this work, performs on par with, or better than, many existing, well-established approaches such as MultiSURF and RandomForest-based rankings. However, we observe, in the second part of Figure 8, that ReliefE-absMean-adaptive offers only a small, incremental win rate when compared against the other methods.
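The three posterior probabilities (A dominates, rope, B dominates) can be approximated with a minimal bootstrap sketch. This is *not* the hierarchical Bayesian model of benavoli2017time used in the experiments, only an illustration of the rope idea under the assumption that bootstrap means of paired score differences stand in for posterior samples:

```python
import numpy as np

def rope_probabilities(scores_a, scores_b, rope=0.05, n_samples=5000, seed=0):
    """Bootstrap the mean paired difference between two rankers' quality
    scores and report how often it falls in favour of A, inside the rope
    (practical equivalence), or in favour of B."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    idx = rng.integers(0, len(diffs), size=(n_samples, len(diffs)))
    means = diffs[idx].mean(axis=1)                 # posterior-like samples
    return (float(np.mean(means > rope)),           # P(A dominates)
            float(np.mean(np.abs(means) <= rope)),  # P(rope)
            float(np.mean(means < -rope)))          # P(B dominates)

p_a, p_rope, p_b = rope_probabilities([0.9] * 10, [0.6] * 10)
```

Each bootstrap mean corresponds to one dot in the triangular diagrams; the densest region of the triangle identifies the most probable outcome.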
With the highest probability (80%), we can claim ReliefE's dominance over MultiSURF; however, the observed probability does not suffice for a significant difference with $>95\%$ probability (the commonly considered convention). To further study algorithm performance, we visualize the (top $f$ features, rF1) curves and discuss selected examples; such figures, showing in detail the ranking performance of the different algorithms for the selected data sets, are given in Appendix G. Overall, considering the different statistical approaches to evaluating ReliefE's performance, the results indicate that the method has similar performance to its competitors, but offers up to two orders of magnitude faster ranking computation, which also confirms the theoretical findings from Section 3.8.

### 5.3 Multi-label classification

We next present the results of feature ranking for multi-label classification. For readability purposes, we present the average rank diagrams in Appendix H. The time required for the execution of various distance-ranking algorithm combinations is shown in Figure 9.

Figure 9: Running times for the MLC variants of ReliefE (and ReliefF, re-implemented in this work and denoted ReliefF-this). The differences in the execution times are apparent; the ReliefE branch (blue) offers more than an order of magnitude faster ranking computation.

The AUrF1 scores, averaged across data sets, are shown in Figure 10.

Figure 10: Area under the relative F1 curve for different ranking approaches in the context of multi-label classification.

The best performing ReliefF variants for multi-label classification do not embed the input space. However, the top-performing variant employs Euclidean embeddings of the target space, where the distances are computed based on the cosine similarity score. This result indicates that multi-label classification can benefit from embedding-based approaches.
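The two embedding-space distances highlighted in these experiments (cosine and hyperbolic) admit compact closed forms. A sketch with the standard formulas (1 minus cosine similarity, and the geodesic distance in the Poincaré ball); the implementations in the released library may differ in details such as vectorisation:

```python
import numpy as np

def cosine_distance(u, v, eps=1e-12):
    """1 - cosine similarity between two embedding vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball:
    arccosh(1 + 2*||u-v||^2 / ((1-||u||^2)(1-||v||^2)))."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / denom))
```

Near the boundary of the ball the Poincaré distance grows without bound, which is what makes it attractive for hierarchical target spaces.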
A case study, where the behavior of various ReliefE variants for MLC is considered in more detail, can be found in Appendix I.

### 5.4 Relations between ranking algorithms

We employ the FUJI score, a recently introduced scale-free comparison of ordered positive real-valued lists, to study how different feature ranking algorithms relate to each other. This study employs the same methodology as discussed in petkovic2020fuzzy ; vskrlj2020feature . The considered FUJI scores can, apart from the ranking, also take into account the differences between the elements that are being compared—this is not possible when using, e.g., the Jaccard score. We compare pairs of curves comprised of (rF1, top $f$) tuples, thus effectively comparing the _shape_ of the rankings' performance. The results of these comparisons are shown in Figure 11 for multi-class classification and in Figure 12 for multi-label classification. The most apparent pattern that emerges from these comparisons is that embedding-based rankings (ReliefE variants) tend to give very similar rankings. This holds for both multi-class and multi-label classification rankings.

Figure 11: AUFUJI scores for multi-class rankings. Higher numbers (red colors) mean higher similarity between rankings.

Figure 12: AUFUJI scores for multi-label rankings. The red block of cells in the upper left part of the triangle corresponds to various variants of ReliefE.

### 5.5 Convergence to the final ranking

Note that in all the examples up to this point, the number of iterations via which the weights corresponding to feature importances were updated was equal to the number of instances (hence the quadratic complexity). Having shown that this setting already offers state-of-the-art performance, we further explored how redundant the iteration process is, i.e., what is the minimum number of iterations needed to obtain a similar ranking. We investigated this question on MCC data sets following the approach described below.
For each number of considered iterations, we conducted 100 logistic regression runs, building models with up to 100 top-ranked features. We computed the AUrF1 and inspected the curve induced by the obtained series of (top $f$, AUrF1) tuples. We conducted these experiments for the DLBCL, Tumors C, Biodeg-discrete and chess data sets, with the results shown in Figure 13. We compared ReliefE-absMean-adaptive with ReliefF as implemented in this work, evaluating each iteration with three-fold cross-validation (same splits).

(a) DLBCL (b) CHESS (c) Biodeg-p2-discrete (d) Tumors C

Figure 13: Impact of the number of ReliefF iterations on ranking quality.

It can be observed in Figure 13(a) that convergence is slower with the ReliefE-absMean-adaptive variant; however, once peak performance is achieved, it is no longer impacted by additional iterations. This does not appear to be the case with ReliefF, where a decrease is observed when 32 iterations are considered. Overall, however, ReliefE-absMean-adaptive offers state-of-the-art performance already after four iterations. A similar situation is observed in the case of Biodeg in Figure 13(c). We also observed that on the Tumors C data set (Figure 13(d)), ReliefE-absMean-adaptive was consistently outperformed by ReliefF. Being very high-dimensional, with only tens of instances, this data set's intrinsic dimension is most likely under-estimated, yielding a feature ranking based on representations that lose too much information. The ReliefE branch of algorithms is highly dependent on the underlying embeddings, and the construction of high-quality embeddings in such data-scarce scenarios remains a lively research area on its own. Potential speedups by decreasing the number of iterations will be explored in further work.
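An AUrF1-style score can be computed as a normalised trapezoidal area under a (top $f$, rF1) curve. A minimal sketch, where the normalisation by the width of the feature-count range is an assumption on our part rather than the paper's documented formula:

```python
import numpy as np

def au_rf1(top_f, rf1):
    """Trapezoidal area under the (top f, rF1) curve, normalised by the
    width of the feature-count range so a constant rF1 of c scores c."""
    top_f = np.asarray(top_f, dtype=float)
    rf1 = np.asarray(rf1, dtype=float)
    # Trapezoid rule, written out to avoid relying on np.trapz/np.trapezoid.
    area = np.sum((rf1[1:] + rf1[:-1]) / 2.0 * np.diff(top_f))
    return float(area / (top_f[-1] - top_f[0]))
```

Summarising each curve by a single number makes the per-iteration comparisons above directly comparable.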
The performance on the chess data set, however, remains consistent for both algorithms—this is a low-dimensional data set, where feature importance estimation via the embedded space does not offer notable performance improvements, with respect to either top F1 or computation time.

### 5.6 Relevant negative results

Even though the paper proposes a promising Relief variant, capable of operating in high-dimensional sparse spaces, many intermediary steps did not perform as expected; they are summarized below:

1. Due to pointer-based storage, using sparse matrix algebra can result in additional overhead, which can be significant on large dense data sets.
2. Running UMAP with spectral decomposition resulted in an unexpected memory overhead. We circumvented this issue with $\nu$; however, the original implementation, once adapted for large-scale embedding, could offer an alternative that is more native to UMAP's routines.
3. Employing Numba's parallel capabilities led to somewhat mixed results. On the one hand, trivially parallel routines such as independent loops could easily be adapted to run in parallel; however, when the parallel decorator was employed over the whole ReliefE weight update step, even though all cores were utilized, no notable speedups were observed. Additional study of the intricacies of such decorator-based parallelism is left for further work.
4. When validating our and scikit-rebate's implementations against Weka's ReliefF, it turned out that ReliefF as implemented in scikit-rebate differs, with a somewhat negative effect on performance (as shown in this paper).
5. We did not experiment with detailed typing of the most time-consuming methods; however, we believe some of the routines could be made even faster this way.
6. The intrinsic dimension algorithm (Algorithm 2) appears to _underestimate_ the real dimension, leading to poorer performance in some cases.
7.
Embedding target instances in hyperbolic space either works well or does not work at all. We believe the observed performances are due to the intrinsic geometry of the data, which we will explore in further work.

We next discuss some of the general observations and their implications.

## 6 Discussion

In this work, we considered extensions of the original ReliefF approach with embedding-based distance computations to both multi-class and multi-label classification settings. We observed that, especially in MLC, embedding the target space can contribute both to lower running time and to improved classifier performance. The distance that showed the most promising results was based on cosine similarity, which is widely used in embedding-based learning and exploration. The main contribution of this work, the ReliefE ranking approach, is capable of operating via embeddings of both input and output spaces (e.g., in multi-label classification). In this section, we comment on the obtained results and discuss further implications of ReliefE. We first observe that adaptive neighbor selection empirically performs very similarly to implementations where neighbor selection is hard-coded. This positive result indicates that one fewer hyperparameter needs to be tuned, should the user not have the computational resources for extensive grid searches. Further, the simple adaptation of the update step to take into account the distance to the mean of the neighbors similarly offered competitive results. One possible reason for this performance is the potential cancellation of noise: with averaging, especially in the embedding space, a joint representation is obtained that can also carry some information on the semantic similarity amongst the neighbors. Within the proposed ReliefE approach, we also explored how data sparsification can be leveraged to further speed up feature ranking in high-dimensional settings.
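Entry-sampling sparsification in the spirit of arora2006fast can be sketched as follows. This is a simplified, uniform-sampling variant with unbiased rescaling; the actual procedure samples entries with data-dependent probabilities, so treat this purely as an illustration of the density/threshold mechanics:

```python
import numpy as np

def density(mat):
    """Fraction of non-zero entries in the matrix."""
    return np.count_nonzero(mat) / mat.size

def sparsify(mat, keep_prob=0.15, seed=0):
    """Keep each entry with probability keep_prob and rescale the kept
    entries by 1/keep_prob, so the result equals the input in expectation."""
    rng = np.random.default_rng(seed)
    mask = rng.random(np.shape(mat)) < keep_prob
    return np.where(mask, np.asarray(mat, dtype=float) / keep_prob, 0.0)

m = np.ones((200, 200))          # a fully dense matrix (density 1.0)
s = sparsify(m, keep_prob=0.15)  # density drops to roughly 0.15
```

After sparsification the matrix can be stored in a sparse format, trading a small amount of accuracy for large memory and speed gains on dense, high-dimensional inputs.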
The sparsification procedure was targeted at larger, higher-dimensional data sets and did not affect smaller data sets as much. In terms of multi-label classification performance, we observed that the classic adaptation of ReliefF with the proposed adaptive distance and the Hamming loss was amongst the best performing options. Interestingly, the variant which used the cosine distance on the target space embeddings was also amongst the top three best performing solutions, indicating that multi-label classification potentially benefits more from considering only the embeddings of the target space instances (and not of the instances in the feature space). Similarly, the absMean variant of ReliefF was also amongst the top five performers, indicating that this aggregation scheme is competitive with the widely accepted averaging followed by the absolute value step. The best variant of ReliefE that considered both the feature and the target space embeddings is ranked 13th, indicating that by embedding the feature space, ranking quality is lost (albeit significant speedups can be obtained): this hints at a trade-off between speed and ranking quality. Of the remaining metrics, the subset and hyperbolic distances were amongst the worst performing ones, indicating that hyperbolic embeddings operate well in rather limited settings, possibly where a hierarchical structure of the target space can be observed. This work is also one of the first (to our knowledge) to compare the performance curves of different ranking algorithms with the Fuzzy Jaccard Index. We observe that the embedding-based algorithms proposed in this work behave very similarly, for both multi-label and multi-class classification. Especially in MLC, two consistent patterns emerge. All ReliefE variants are shown to be very similar to one another (red block in Figure 12).
However, the hyperbolic and subset versions of ReliefF also appear to behave similarly to the embedding-based ones, even though the input space was not embedded in these cases. For multi-class classification (Figure 11), the ReliefE variants again emerge as the most similar (to one another). However, similarly to the MLC comparison, versions of the adapted ReliefF as implemented in this work are also shown to yield performance curves similar to those of the ReliefE-based variants. Following the results of the ablation studies, we believe further speedups could be obtained by considering fewer iterations. Current experiments indicate that a potentially quadratic speedup could be obtained, as adequate performance was already observed after $\sqrt{|I|}$ iterations in some cases. Further, the number of iterations could also be adapted dynamically, by monitoring the feature ranking scores and detecting convergence before all iterations are carried out. When studying individual data sets, e.g., DLBCL and opt-digits, we observe that ReliefE offers superior performance at a fraction of the computation time required by the other methods, indicating that the development of approaches based on the ideas introduced in this paper is a sensible research avenue. In this work, we have evaluated feature rankings based on the classification performance of robust learners, such as logistic regression, which have not been fine-tuned. The purpose of such an evaluation was to emphasize the effect of feature ranking. However, extensive studies of the interplay between regularization regimes (e.g., L1 vs. L2) and ranking performance could also offer interesting insights into the robustness of rankings and, further, their purpose. For example, an L1-regularized learner could automatically discard large parts of the feature space: although this would be considered feature selection (and not ranking), it would potentially offer similar results. We leave this type of experiment for further work.
Similarly, the Bayesian comparisons, involving mostly the state-of-the-art feature ranker MultiSURFstar and the proposed ReliefE algorithm(s), indicate that ReliefE is competitive and often outperforms MultiSURFstar, even in a probabilistic sense. For example, the probability that ReliefE-absMean-adaptive outperforms MultiSURFstar is more than 30%, with most of the remaining probability density lying in the equal-performance (rope) region. Finally, we discuss several potentially interesting future empirical studies that would represent a non-trivial extension of the proposed work. A detailed analysis of the algorithms' performance with respect to various properties of the data sets could offer additional insights into when to use which type of ranking. We believe that meta-learning could be a promising research avenue, as linking the data sets' properties with suitable algorithms could largely benefit situations where embedding-based ranking is not the best option. Overall, if one optimizes for efficient performance on large, contemporary data sets, ReliefE offers a computationally efficient approach that could serve as a first step in deciding where to invest the remaining computational resources, and whether feature ranking is a sensible approach at all (it might not be for, e.g., image-based data). Similarly, understanding whether the choice of the distance score can be _transferred_ between similar data sets also represents an interesting research direction worthy of further study. Overall, the proposed work provides an empirical, as well as a theoretical, foundation for potentially more involved embedding schemes, such as, e.g., (variational) autoencoder-based ones.
We believe that a relevant factor influencing ReliefE's performance is the quality of the _learned_ representation, indicating that another promising research avenue could be the investigation of different embedding approaches (this work explores different distances within a single embedding approach, but does not consider different embedding approaches).

## 7 Conclusions and further work

In this paper, we have proposed one of the first embedding-based Relief implementations with both theoretical and practical grounding. We have explored whether embedding the input, as well as the output, space onto a Riemannian manifold prior to feature ranking yields better rankings. The results indicate that, while being significantly faster, embedding-based ranking methods do not consistently outperform the ones that do not use embeddings. They are, however, consistently faster than all other Relief-based ranking approaches. We also show that for multi-label classification, where additional complexity arises due to multiple label co-presence, ReliefE can offer more stable, and on data sets like Delicious, better performance. Further, we demonstrate that embedding the target label space is beneficial for the final ranking's quality in a multi-label setting. The proposed adaptive neighbor estimation procedure could be further developed in terms of the neighborhood's dependence on a given metric. Similarly, the current implementation potentially over-estimates the neighborhood size, which could be due to the nature of the embedded space or the method's bias. Both possibilities are to be explored in future work. We believe that the comparison of feature ranking algorithms should also be considered at the level of their properties and not only their performance. In this work, we show that embedding-based ranking gives rise to a fundamentally different type of rankings, which we believe are worthy of being studied further.
To our knowledge, we are the first to perform such a large-scale comparison of a long list of ranking approaches (using, e.g., different similarities in MLC) that also takes into account the actual values of the importance scores within rankings (through the FUJI score), and not only the feature order. We also observe that the variants of the original ReliefF, as re-implemented in this work, already offer superior performance to, e.g., the SURF branch of algorithms, indicating that their scikit-rebate implementations have some limitations in terms of numeric stability (and are not adapted at all to handle sparsity). As further work, we believe the study of non-Euclidean spaces could yield many novel insights, as the target space is frequently of a hierarchical nature, implying that Euclidean geometry is not sufficiently good for its representation. In this work, we show initial results for embedding into hyperbolic space (Poincaré ball model). However, Lorentzian geometry can also be considered.

###### Acknowledgements.

We would like to acknowledge the Slovenian Research Agency (ARRS) for funding the first and the last author (BŠ, MP) through young researcher grants and supporting other authors (SD, NL) through the research program _Knowledge Technologies_ (P2-0103) and the research project _Semantic Data Mining for Linked Open Data_ (financed under the ERC Complementary Scheme, N2-0078). This research was also partially supported by TAILOR (a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215) and AI4EU (GA No 825619). We would also like to thank the administrators of the SLING supercomputing environment for the computing resources which made the empirical part of this study possible.

## References

* (1) IMDB dataset. Obtained from https://sourceforge.net/projects/meka/files/Datasets/IMDB-F.arff/download
* (2) Alpaydin, E., Kaynak, C.: Cascading classifiers.
Kybernetika 34(4), 369–374 (1998) * (3) Anguita, D., Ghio, A., Oneto, L., Parra, X., Reyes-Ortiz, J.: A public domain dataset for human activity recognition using smartphones (2013) * (4) Armstrong, S.A., Staunton, J.E., Silverman, L.B., Pieters, R., den Boer, M.L., Minden, M.D., Sallan, S.E., Lander, E.S., Golub, T.R., Korsmeyer, S.J.: Mll translocations specify a distinct gene expression profile that distinguishes a unique leukemia. Nature genetics 30(1), 41–47 (2002) * (5) Arora, S., Hazan, E., Kale, S.: A fast random sampling algorithm for sparsifying matrices. In: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 272–279. Springer (2006) * (6) Balasubramanian, M., Schwartz, E.L.: The isomap algorithm and topological stability. Science 295(5552), 7–7 (2002) * (7) Benavoli, A., Corani, G., Demšar, J., Zaffalon, M.: Time for a change: a tutorial for comparing multiple classifiers through bayesian analysis. The Journal of Machine Learning Research 18(1), 2653–2688 (2017) * (8) Breskvar, M., Kocev, D., Dzeroski, S.: Ensembles for multi-target regression with random output selections. Machine Learning 107(11), 1673–1709 (2018). DOI 10.1007/s10994-018-5744-y. URL https://doi.org/10.1007/s10994-018-5744-y * (9) Bronstein, M.M., Bruna, J., LeCun, Y., Szlam, A., Vandergheynst, P.: Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine 34(4), 18–42 (2017) * (10) Cao, J., Spielmann, M., Qiu, X., Huang, X., Ibrahim, D.M., Hill, A.J., Zhang, F., Mundlos, S., Christiansen, L., Steemers, F.J., Trapnell, C., Shendure, J.: The single-cell transcriptional landscape of mammalian organogenesis. Nature 566(7745), 496–502 (2019). DOI 10.1038/s41586-019-0969-x. URL https://doi.org/10.1038/s41586-019-0969-x * (11) Connor, J.T., Martin, R.D., Atlas, L.E.: Recurrent neural networks and robust time series prediction. 
IEEE transactions on neural networks 5(2), 240–254 (1994) * (12) Demšar, J.: Statistical comparisons of classifiers over multiple data sets. Journal of Machine learning research 7(Jan), 1–30 (2006) * (13) Dong, W., Moses, C., Li, K.: Efficient k-nearest neighbor graph construction for generic similarity measures. In: Proceedings of the 20th international conference on World wide web, pp. 577–586 (2011) * (14) Džeroski, S., Blockeel, H., Kompare, B., Kramer, S., Pfahringer, B., Van Laer, W.: Experiments in predicting biodegradability. In: International Conference on Inductive Logic Programming, pp. 80–91. Springer (1999) * (15) Eppstein, M.J., Haake, P.: Very large scale relieff for genome-wide association analysis. In: 2008 IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology, pp. 112–119. IEEE (2008) * (16) Facco, E., d’Errico, M., Rodriguez, A., Laio, A.: Estimating the intrinsic dimension of datasets by a minimal neighborhood information. Scientific reports 7(1), 1–8 (2017) * (17) Goyal, P., Ferrara, E.: Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems 151, 78–94 (2018) * (18) Granizo-Mackenzie, D., Moore, J.H.: Multiple threshold spatially uniform relieff for the genetic analysis of complex human diseases. In: European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics, pp. 1–10. Springer (2013) * (19) Greene, C.S., Himmelstein, D.S., Kiralis, J., Moore, J.H.: The informative extremes: using both nearest and farthest individuals can improve relief algorithms in the domain of human genetics. In: European Conference on Evolutionary Computation, Machine Learning and Data Mining in Bioinformatics, pp. 182–193. Springer (2010) * (20) Greene, C.S., Penrod, N.M., Kiralis, J., Moore, J.H.: Spatially uniform relieff (surf) for computationally-efficient filtering of gene-gene interactions. 
## Appendix A Theoretical considerations of embedding spaces

As many of the recently introduced embedding-based methods tend to replace earlier methods while maintaining their representational power, we believe that the comparison of the two mappings can be represented as a simple commutative diagram.
The example, comparing ReliefE's mapping to, e.g., that of standard ReliefF, can be represented as

$$\boldsymbol{F}\xrightarrow{\;q\;}\boldsymbol{w}\qquad\text{or}\qquad\boldsymbol{F}\xrightarrow{\;\phi\;}\boldsymbol{E}\xrightarrow{\;f\;}\boldsymbol{w}.$$

Here, the initial, real-valued feature matrix $\boldsymbol{F}$ is mapped to the output weight vector $\boldsymbol{w}$ either directly ($q$) or indirectly ($\phi$ followed by $f$). Note that $q:\mathbb{R}^{|I|\times|F|}\rightarrow\mathbb{R}^{|F|}$, $\phi:\mathbb{R}^{|I|\times|F|}\rightarrow\mathbb{R}^{|I|\times d}$ and $f:\mathbb{R}^{|I|\times d}\rightarrow\mathbb{R}^{|F|}$. ReliefE operates under the assumption that the initial ranking can be retrieved via the latent space $\boldsymbol{E}$ in two steps ($\phi$ and $f$).

## Appendix B Ablation study of data sparsity

The considered sparsification procedure depends on the parameter _epsilon_, i.e., the approximation error. The study of how different error thresholds impact the final sparsification result is shown in Figure 14. It can be observed that most of the data sets only get sparsified once a rather large epsilon is permitted. The second ablation explores the relation between the initial data sparsity and the final sparsity, i.e., the sparsity of a given data set after the sparsification procedure has been conducted. The result is shown as a kernel density plot in Figure 15.

Figure 14: Dependence of sparsification on the allowed approximation error.

The observed result indicates that when a data set is sparse to begin with, the result will, as expected, be similarly sparse. However, the vertical density at the rightmost part of the figure demonstrates that the sparsification procedure indeed yields sparser data, albeit not in all cases. A similar visualization can be produced for the space based on estimated epsilon values, shown in Figure 16. The considered estimate yields a landscape of sparseness similar to that obtained via grid search (Figure 15), indicating a decrease in most cases.
However, there are examples where a given data set's density was substantially lowered, such as pd-speech-features and biodeg-p2-discrete. The results indicate that the considered estimate could be further relaxed, albeit at the cost of a worse approximation of the input matrix, which could negatively impact the final performance.

Figure 15: Kernel density estimation of the relation between the initial and final sparsity.

Figure 16: Kernel density estimation – estimated epsilon values.

## Appendix C Adaptive $k$ distributions

In this ablation study, we visualized the distributions of the neighborhoods across all considered MCC data sets. This plot demonstrates that for different data sets, differently sized neighborhoods were identified by the proposed heuristic (Figure 17).

Figure 17: Density of estimated $k$ values for 100 iterations of ReliefE.

## Appendix D Area under the rF1

The AUrF1 scores, averaged across data sets, are shown in Figure 18. It can be observed that the first 5 rankings behave very similarly w.r.t. this measure. Thus, we emphasize other types of comparison, where the differences are more apparent.

Figure 18: Area under the relative F1 curve for multi-class classification. All ranking approaches perform similarly, with no notable differences. More insight into the relative performance of the ranking algorithms is provided by Bayesian tests and FUJI-based comparisons of performance curves.

## Appendix E Detailed analysis of running time

We additionally studied how different parts of ReliefE impact the total running time. For p53, one of the largest considered data sets, we visualize the proportions in Figure 19.

Figure 19: Proportions of time spent on different methods within ReliefE on the p53 data set. The majority of the time is spent on the weight update steps (as expected).
## Appendix F Multi-class classification, additional rank diagrams

This appendix includes additional ablation studies in the form of critical distance diagrams presenting the performance for multi-class ranking (Figures 20 and 21).

Figure 20: Max first 100 features. Similarly to the situation with the 50 top features, the ReliefE variants, including the adaptive one, perform well for the multi-class classification task.

Figure 21: Max first 50 features. The performance when considering only the first 50 features. The adaptive version of ReliefE performs on average the best in this scenario.

## Appendix G Multi-class classification – case study with Madelon, DLBCL and genes

This section contains feature rankings, visualized for the Madelon data set, where either ReliefE or a variant of ReliefF equipped with one of the proposed heuristics shows different behavior (better performance) (Figure 22). The visualized performances offer insights into the behavior of the algorithms. For example, the ReliefF branch of adaptations (and vanilla ReliefF) peaks at fewer than a hundred features; however, another performance peak where the feature ranking is sensible ($rF1>1$) lies around 250 features, where ReliefE-type algorithms are consistently amongst the best-performing ones.

Figure 22: Madelon performance curves. This result indicates that there exist situations where initially better rankings are obtained via, e.g., the SURF branch of the algorithms; however, when considering more features, ReliefE variants are the only ones that find rankings which perform well.

Figure 23: DLBCL performance curves. DLBCL is a very high-dimensional data set and reflects ReliefE's capability to operate with high-dimensional feature spaces.

Figure 24: Genes performance curves. Compared to mutual-information (myopic) based rankings, ReliefE performs consistently better (steeper curve over roughly the first hundred features).
The power of ReliefE is apparent when considering the DLBCL data set (very high-dimensional, with few instances). Results are shown in Figure 23. Finally, the results for the genes data set are shown in Figure 24. Note how the more time-expensive SURF variants were not able to finish in the dedicated time. Further, ReliefF is notably worse, requiring more information to detect the relevant signal. On the other hand, ReliefE variants perform consistently well.

## Appendix H Multi-label classification – additional rank diagrams

In this section, we present the average rank diagrams that offer insights into the global distribution of the performances when the multi-label classification setting is considered (Figures 25 and 26).

Figure 25: Max (upper) F1 scores for the top 10 highly-ranked features. The results indicate that the adaptive threshold step impacts the placement of the adaptive version of ReliefF.

Figure 26: Mean (upper) F1 scores for the top 10 highly-ranked features. The average rank diagrams confirm the finding that if the target space is embedded via the cosine distance, the MLC ReliefF variant performs best.

## Appendix I Multi-label classification – case study with Delicious

We study in more detail the performance on the Delicious data set, as it offers interesting insights into the algorithms' performances (Figure 27). The algorithms' performances are overall consistent. Note how cosine-based embeddings of the target space emerge as the best option (orange line), indicating that embedding-based distances amongst the target instances can already offer competitive performance.

Figure 27: Delicious performance curves. The ReliefE variants consistently perform best for up to the first 250 features.
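To make the sparsification procedure studied in Appendix B concrete, the following is a minimal sketch of one possible epsilon-driven rule: zero out the smallest-magnitude entries while keeping the relative Frobenius approximation error below _epsilon_. The thresholding rule itself is an assumption made for illustration; ReliefE's actual procedure may differ.

```python
import numpy as np

def sparsify(X, epsilon):
    """Zero out the smallest-magnitude entries of X while keeping the
    relative Frobenius error ||X - Xs||_F / ||X||_F <= epsilon."""
    flat = X.ravel().copy()
    order = np.argsort(np.abs(flat))          # smallest magnitudes first
    total = float(np.sum(flat ** 2))
    cum_err = np.cumsum(flat[order] ** 2)     # squared error from dropping the k smallest
    # largest k such that the relative error stays within epsilon
    k = np.searchsorted(cum_err, (epsilon ** 2) * total, side="right")
    flat[order[:k]] = 0.0
    Xs = flat.reshape(X.shape)
    return Xs, float(np.mean(Xs == 0.0))      # sparsified matrix, fraction of zeros

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
Xs, sparsity = sparsify(X, epsilon=0.1)
rel_err = np.linalg.norm(X - Xs) / np.linalg.norm(X)
print(sparsity, rel_err)   # the error stays within the allowed epsilon
```

This mirrors the qualitative behavior seen in Figures 14 and 15: small epsilon values leave most entries untouched, and only rather large error budgets yield substantial sparsification.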
# Variational methods for fluid-structure interaction and porous media

B. Benešová, M. Kampschulte, S. Schwarzacher

Department of Mathematics and Physics, Charles University Prague<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract.

In this work we consider a poroelastic flexible material, which may undergo large deformations, situated in an incompressible fluid driven by the Navier-Stokes equations in two or three space dimensions. By a variational approach we show existence of weak solutions for a class of such coupled systems. We consider the unsteady case; this means that the PDE for the poroelastic solid involves the Fréchet derivative of a non-convex functional as well as (second-order in time) inertia terms.

###### Key words and phrases: Mathematics for continuum mechanics, Fluid poroelastic structure interactions, Minimizing movements, Navier-Stokes equations, Elastic solids, Hyperbolic evolutions, Coupled systems of PDEs

## 1\. Introduction

_Poromechanics_ or _mechanics of porous media_ has been a lively area of research in engineering and continuum mechanics (see e.g. the monographs [18, 16]), as the possible applications of the proposed theories are numerous, ranging from the mechanics of tissue and biological materials in general (see e.g. [27]) to the recently prominent question of breathing through masks. In many of these cases, the poroelastic matrix is not isolated but instead immersed in a fluid which may flow through the elastic structure. Naturally, then, the porous medium and the free fluid interact with each other. Indeed, on the mutual interface they exert pressure and stresses on each other and, on top of that, the fluid may enter the pores of the structure and flow through it.
Such a setting is referred to as _fluid-poroelastic structure interaction (FPSI)_ and will be the subject of the present work. The main result of this paper is the well-posedness for a class of FPSI problems allowing large deformations and involving incompressible fluids driven by the Navier-Stokes equations in two or three space dimensions. In order to predict the response of the FPSI system, we need to devise a model for the free fluid and the poroelastic structure, and prescribe suitable interface conditions. Each of these tasks can be approached on several scales, ranging from the microscopic to the macroscopic one. In the former, the skeleton is modeled in detail and (yet another) fluid-structure interaction problem between the skeleton and the circumflowing fluid is set up. More common in the engineering literature, however, is a more _macroscopic approach_ where the porous medium is assumed to be saturated by the fluid, i.e. they form a kind of “mixture” in each material point, and an _ad hoc_ form of the stress tensor is prescribed for the porous medium [12]. Such an approach covers the early and famous models for the porous medium due to Darcy [17], Brinkman [10] or Biot [7], and will also be the one we follow in this contribution. Let us also note that a justification of the proposed models by homogenization is highly relevant, and some results concerning the aforementioned standard models for porous media have already been obtained, see e.g. [1, 15, 24]. Following the macroscopic approach, we will assume that the porous medium is fully saturated by the fluid and can be completely described by the fluid velocity $v$ and the deformation of the solid $\eta$ (see Section 2 for a more detailed modeling description).
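For orientation, the classical macroscopic filtration laws named above can be stated as follows. These are standard textbook forms, recalled only for context; here $k$ denotes the permeability, $\mu$ the fluid viscosity and $\mu_{\mathrm{eff}}$ an effective viscosity, none of which are symbols of the present paper.

```latex
% Darcy's law: the seepage velocity is proportional to the pressure gradient
v = -\frac{k}{\mu}\,\nabla p .
% Brinkman's extension adds an effective viscous stress term
\mu_{\mathrm{eff}}\,\Delta v - \frac{\mu}{k}\,v = \nabla p .
```

In both laws the coupling between velocity and pressure is linear; the drag term introduced later in this paper plays the role of the $\frac{\mu}{k}\,v$ term in a frame-indifferent, deformation-dependent form.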
We will take a _variational point of view_ for the modeling as well as for the analysis, by defining the constitutive behavior of the mechanical system through prescribing its _stored energy_ and its _dissipation functional_ (dissipation pseudo-potential). Such an approach has been advocated by many authors (e.g. [22] for solids or [36, 37, 35] for fluids) as it may simplify the modeling. On top of that, we argue in Section 3 that the variational approach is essential also from the point of view of _mathematical analysis_ if a _large-strain, non-linear_ model is to be used for the involved solid. Indeed, in such a case, the Helmholtz free energy cannot be chosen as a convex functional due to physical restrictions, and thus the resulting differential operator is not monotone. In the case of FPSI, the variational approach may also facilitate the derivation of boundary conditions at the interface between the poroelastic medium itself and the fluid into which it is immersed, see e.g. [20]. Prescribing the above-mentioned energy and dissipation for the whole system then automatically yields a balance of stresses at this boundary and, additionally, imposes a “weak continuity” condition for the fluid velocity over the porous media interface, as we ask for its derivatives to be globally integrable. Nonetheless, generalizations are, of course, possible. For this see in particular the notes at the end of Section 2. While the engineering literature on models in poroelasticity and FPSI seems to be rich, the mathematical literature studying well-posedness of FPSI is, to the authors’ knowledge, rather scarce. Interaction of a linear or non-linear poroelastic medium with Stokes flow has been studied in [31, 2, 8]. Moreover, the coupling of the Stokes and Darcy flows has been studied (see e.g. [34] for an overview), but here, intrinsically, the region occupied by the porous medium is fixed a priori.
Similarly, the interaction of a Biot-poroelastic medium with a Navier-Stokes flow was studied in [11], but again with the interface between the porous solid and the fluid fixed. Thus, well-posedness of an FPSI problem featuring a fully non-linear poroelastic solid that may undergo large deformations coupled to a Navier-Stokes fluid has not been addressed in the literature so far. In this contribution, we work exactly in this setting and establish _existence of weak solutions_. In order to do so, it is crucial to approach the modeling as well as the subsequent analysis in an _energetic (or variational)_ way, to access methods from the calculus of variations. We provide an overview of the analytical details of the approach in Section 3, but would like to emphasize here that working within the proposed framework not only allows us to prove the result presented here but also paves the way for studying further FPSI models. Indeed, models including a dependence of the stresses on e.g. the porosity, the porous pressure of the fluid, or more general coupling conditions are accessible to the analysis, though a careful investigation will be needed. This work is built up as follows: In Section 2 we present the model considered in this work and formulate the main results. In Section 3 we give an overview of the ingredients of the variational approach, and in Section 4 we present the proof of the main result, Theorem 2.

## 2\. Modeling and main results

To set up the porous media problem, we consider a regular enough111The regularity of the boundary of $\Omega$ as well as $Q$ only fully comes into play when discussing contact of the solid with $\partial\Omega$. This is discussed in a bit more detail in [6], and will be thoroughly discussed in a forthcoming paper involving the second author. For the purpose of the paper at hand, we ignore this aspect completely and just require enough regularity to meaningfully talk about boundary values.
spatial domain $\Omega\subset\mathbb{R}^{n}$ in which both the porous medium and the circumlying fluid are contained.

Figure 1. A short illustration of the geometry involved in the model.

As already announced, we assume that the poroelastic medium is a perfectly homogenized mixture of the solid and the fluid in each material point, while outside the porous medium the fluid is found in its “pure” state. Thus, the fluid is allowed to flow through the whole of $\Omega$, including the part already occupied by the solid. The state of the solid will be described by its deformation, a map $\eta:Q\to\Omega$, i.e. from some regular enough reference domain $Q\subset\mathbb{R}^{n}$ to the container $\Omega$. We choose to fix “boundary values” $\eta|_{P}=\gamma$ for some subset $P\subset\overline{Q}$, but technically this is optional. The deformation $\eta$ additionally fixes the geometry of the porous medium $\eta(Q)$. The fluid, on the other hand, is described by its velocity $v:\Omega\to\mathbb{R}^{n}$ and pressure $p:\Omega\to\mathbb{R}$ with $\operatorname{div}v=0$. In order to simplify the discussion, we will assume no-slip conditions at the boundary of $\Omega$, i.e. $v|_{\partial\Omega}=0$. As the fluid is assumed incompressible, the pressure will later “disappear” from the weak formulation thanks to the choice of appropriate test functions. Having defined all the quantities involved, we can now turn back to modeling by prescribing the stored energy. We assume an additive decomposition of the energy into fluid and solid parts. However, as the fluid is assumed to be purely Newtonian and incompressible, no energy can be stored in the fluid. Thus, the stored energy consists just of the solid one, denoted by $E(\eta)$.
The overall energy is then given by the sum $\displaystyle\int_{\Omega}\frac{\rho_{f}}{2}\left|{v}\right|^{2}dy+\int_{Q}\frac{\rho_{s}}{2}\left|{\partial_{t}\eta}\right|^{2}dx+E(\eta).$ Here $\rho_{f}$ and $\rho_{s}$ are given mass densities of the fluid and solid, respectively. To complete the modeling, we need to prescribe a form of $E(\eta)$. For simplicity, we stick to the Saint Venant-Kirchhoff energy, enriched with a term introducing resistance of the solid to infinite compression and a higher-order gradient regularisation: (2.1) $\displaystyle E(\eta)$ $\displaystyle:=\begin{cases}\int_{Q}\frac{1}{8}|\nabla\eta^{T}\nabla\eta-I|^{2}_{\mathcal{C}}+\frac{1}{(\det\nabla\eta)^{a}}+\frac{1}{q}\left|{\nabla^{2}\eta}\right|^{q}dx&\text{if $\det\nabla\eta>0$ a.e. in $Q$ }\\\ +\infty&\text{otherwise}\end{cases}$ where we used the notation $|\nabla\eta^{T}\nabla\eta-I|^{2}_{\mathcal{C}}:=\big{(}\mathcal{C}(\nabla\eta^{T}\nabla\eta-I)\big{)}\cdot\big{(}\nabla\eta^{T}\nabla\eta-I\big{)}$, with $\mathcal{C}$ being a fourth-order positive definite tensor of elastic constants, $q>n$ and $a>\frac{qn}{q-n}$. ###### Remark (Non-simple materials). Let us stress that the proposed energy functional for the solid depends not only on the _deformation gradient_ , which is the standard setting, but also includes a term depending on the _second gradient of the deformation_. Thus, the proposed model belongs to the class of so-called _non-simple_ materials that were proposed by Toupin [33] and later further developed in other works, e.g. [28, 32, 21]. In the mathematical literature concerned with the large-deformation regime, this concept is often employed to gain additional, necessary regularity. The present work also relies on it.
Indeed, at the heart of our variational approach (see also Section 3) is the fact that we will be able to solve a time-discrete variant of the set of PDEs in (2.5) by solving a minimization problem and then relying on the fact that this minimizer solves the associated Euler-Lagrange equation. However, as our energy functional takes infinite values, the relation between the functional and the Euler-Lagrange equation is not straightforward. Indeed, the minimizer may lie almost at the boundary of the domain of $E(\eta)$, so that taking variations at the minimizer would not be possible. This is indeed a well-known problem in mathematical elasticity [4] that has not found a satisfactory solution to date. Let us just mention that even in the one-dimensional setting, explicit examples have been found in [5] that illustrate this exact phenomenon: a minimizer of the functional exists but it does not satisfy the Euler-Lagrange equation. To the authors’ knowledge, as of today, the only setting where it is known that this phenomenon does not occur is that of _non-simple materials_. Indeed, Healey and Krömer have shown in [23] that if the exponents $q$ and $a$ in $E(\eta)$ are chosen appropriately, then every deformation of finite energy has a uniform lower bound for its Jacobian. Thus, the minimizer is well separated from the boundary of the domain of $E(\eta)$ and variations at the minimizer can be performed. ∎ Let us also remark that the second term in $E(\eta)$ ensures that the stored energy blows up whenever $\det({\nabla\eta})\to 0$ and as such penalizes compression (in fact we get a uniform, energy-dependent lower bound on the determinant, see [23]). This is a crucial requirement in non-linear elasticity (see e.g. [13]). We also stress that such a requirement alone rules out convexity of the stored energy; moreover, when $E(\eta)$ is of the form (2.1), the first term is not convex either.
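As a small numerical illustration of the compression penalization in (2.1), one can evaluate the local energy density along pure compressions $F=\lambda I$ and watch it blow up as $\lambda\to 0$. This sketch ignores the second-gradient term, takes $\mathcal{C}$ as the identity tensor and uses a sample exponent $a=13$ (admissible for, e.g., $n=3$, $q=4$); all of these choices are assumptions made only for illustration.

```python
import numpy as np

def energy_density(F, a=13.0):
    """Pointwise stored-energy density from (2.1), without the
    second-gradient term and with C the identity tensor:
    (1/8)|F^T F - I|^2 + 1/(det F)^a."""
    n = F.shape[0]
    detF = np.linalg.det(F)
    if detF <= 0:
        return np.inf          # the energy is +infinity for non-orientation-preserving F
    E = F.T @ F - np.eye(n)
    return 0.125 * np.sum(E * E) + detF ** (-a)

# pure compressions F = lam * I in three dimensions
for lam in [1.0, 0.5, 0.1]:
    print(lam, energy_density(lam * np.eye(3)))
# the density grows without bound as lam -> 0, penalizing compression
```

The rapid growth of the determinant term for small $\lambda$ is exactly the mechanism behind the uniform lower bound on the Jacobian used in the remark above.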
Much more challenging for the analysis than the non-convex terms in the energy is the fact that the admissible set of deformations has to be chosen carefully; in particular, it is bound to be a non-convex set. This is related to the physical expectation that any admissible deformation will be injective. Indeed, while it might be possible to make sense of the equations of the problem even in the case that two different parts of the solid occupy the same Eulerian space, physically such a situation is of course nonsensical. Now observe that locally, the blow-up of the determinant already guarantees injectivity of any deformation of finite energy. But in addition to the local injectivity, we further have to take into account the important non-convex restriction of global injectivity. What thus turns out to be a good choice for the admissible set of $\eta$ is the following set, which is in coherence with the celebrated _Ciarlet-Nečas condition_ proposed in [14]: (2.2) $\displaystyle\mathcal{E}:=\left\\{\eta\in W^{2,q}(Q;\Omega)\,:\,E(\eta)<\infty,\,\left|{\eta(Q)}\right|=\int_{Q}\det\nabla\eta\,dx\right\\}.$ Here, the finite energy guarantees local injectivity, and together with the last condition we have that any $C^{1}$-local homeomorphism is globally injective, except for possible touching at the boundary. Hence, in the following we will construct deformations in the set (2.2). In fact, the set $\mathcal{E}$ only includes deformations for which the dissipation function $R$ of the solid deformation, which we introduce next, is also well defined. As for the dissipation function, we also assume an additive splitting into fluid and solid dissipation, respectively. In addition to that, we introduce a phenomenological coupling term, a _drag term_ , that introduces additional dissipation if the velocities of the fluid and solid are not equal.
For the purposes of this paper, we stick to the simplest possible setting where the drag force is proportional to the difference of the solid and fluid velocities (see [29]); this is also the setting within which, upon further simplifications, the law of Darcy or the equations of Brinkman are derived [29]. However, since one of these velocities is given in Lagrangian and the other in Eulerian representation, we need to transform one of them into the proper frame, the easier of the two options being the Lagrangian. Thus, we obtain for the dissipation functional: $\displaystyle\mathcal{D}=\underbrace{\frac{\nu}{2}\left\|{\varepsilon}v\right\|_{{\Omega}}^{2}}_{\text{fluid dissipation}}+\underbrace{R(\eta,\partial_{t}\eta)}_{\text{solid dissipation}}+\underbrace{A(\eta,\partial_{t}\eta-v\circ\eta)}_{\text{drag potential}}$ where $A(\eta,\phi):=\int_{Q}\frac{1}{2}\phi^{T}\cdot a(\nabla\eta)\cdot\phi\,dx$ and $a:\\{M\in\mathbb{R}^{n\times n}:\det M>0\\}\to\mathbb{R}^{n\times n}$ is a smooth function mapping into the set of positive semi-definite symmetric matrices. Note that the drag potential is quadratic, so that the drag force depends linearly, through a factor depending on $\nabla\eta$, on the difference of the two velocities. This quite general dependence on $\nabla\eta$ is necessary to model anisotropy in the correct frame-independent way.222If we assume the simpler case of a solid that is isotropic with regard to drag, we can instead use $A(\eta,b)=\frac{a_{0}}{2}\left\|b\right\|_{{Q}}^{2}$, but doing so does not fundamentally change any of the later computations. Note also that, for physical reasons, the drag depends only on the relative and not on the absolute velocities.
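Since the drag potential $A$ is quadratic in its second argument, the resulting drag force is linear in the velocity difference: $D_{2}A(\eta,\phi)=a(\nabla\eta)\,\phi$ pointwise. A quick numerical sanity check of this fact, with a random positive-definite symmetric matrix standing in for $a(\nabla\eta)$ at a fixed material point:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
a_mat = M @ M.T + 3.0 * np.eye(3)   # positive-definite stand-in for a(grad eta)

def A(phi):
    """Pointwise drag potential (1/2) phi^T a phi."""
    return 0.5 * phi @ a_mat @ phi

phi = rng.normal(size=3)            # stands in for the velocity difference
h = 1e-6
# gradient of the quadratic potential via central differences
grad = np.array([(A(phi + h * e) - A(phi - h * e)) / (2 * h) for e in np.eye(3)])
print(np.max(np.abs(grad - a_mat @ phi)))  # agrees with the linear drag force a*phi
```

The symmetry of $a$ is what makes the derivative of $\frac{1}{2}\phi^{T}a\,\phi$ equal to $a\,\phi$ rather than $\frac{1}{2}(a+a^{T})\phi$.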
Finally, we need to prescribe the solid dissipation, for which we choose the following form, leading to a viscoelastic solid of Kelvin-Voigt type: (2.3) $\displaystyle R(\eta,\partial_{t}\eta)$ $\displaystyle:=\int_{Q}|(\nabla\partial_{t}\eta)^{T}\nabla\eta+(\nabla\eta)^{T}(\nabla\partial_{t}\eta)|^{2}dx=\int_{Q}|\partial_{t}(\nabla\eta^{T}\nabla\eta)|^{2}dx.$ Here, let us note that the dissipation of the solid depends on the state. This is indeed necessary to ensure frame indifference [3]. With all quantities in hand, we can now write down the (formal) energy equality $\displaystyle\underbrace{E(\eta(T))+\frac{\rho_{s}}{2}\left\|\partial_{t}\eta(T)\right\|_{{Q}}^{2}+\frac{\rho_{f}}{2}\left\|v(T)\right\|_{{\Omega}}^{2}}_{\text{energy at final time $T$}}+\underbrace{\int_{0}^{T}\left[\nu\left\|{\varepsilon}v\right\|_{{\Omega}}^{2}+2R(\eta,\partial_{t}\eta)+2A(\eta,\partial_{t}\eta-v\circ\eta)\right]dt}_{\text{dissipated energy}}$ $\displaystyle=\underbrace{E(\eta_{0})+\frac{\rho_{s}}{2}\left\|b\right\|_{{Q}}^{2}+\frac{\rho_{f}}{2}\left\|v_{0}\right\|_{{\Omega}}^{2}}_{\text{energy at the initial time $0$}}+\underbrace{\int_{0}^{T}\left\langle f,v\right\rangle_{\Omega}dt.}_{\text{power of the external forces}}$ Here $f$ is an external force acting on the fluid. Forces acting on the solid can be considered in a similar fashion. We use the subscript “0” to denote the initial data for the solid deformation and the fluid velocity given at $t=0$, while $b$ is used for the initial solid velocity. Once the stored energy and the dissipation function have been specified, we embark on giving a weak and a strong formulation of the considered problem. This formulation needs to be closed using boundary conditions on the interface between the pure fluid and the porous medium. We do so in the least invasive way, by requiring weak continuity of the fluid velocity (in the sense that $v\in W^{1,2}(\Omega;\mathbb{R}^{n})$) and a stress-free boundary condition for the solid deformation.
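The second equality in (2.3), $\partial_{t}(\nabla\eta^{T}\nabla\eta)=(\nabla\partial_{t}\eta)^{T}\nabla\eta+(\nabla\eta)^{T}(\nabla\partial_{t}\eta)$, is just the product rule for matrix-valued maps. A quick numerical confirmation, with random matrices standing in for the deformation gradient and its time derivative:

```python
import numpy as np

rng = np.random.default_rng(1)
F0 = rng.normal(size=(3, 3))   # stands in for the deformation gradient at t = 0
G = rng.normal(size=(3, 3))    # stands in for its time derivative

def C(t):
    """Right Cauchy-Green tensor along the linear path F(t) = F0 + t*G."""
    F = F0 + t * G
    return F.T @ F

t, h = 0.7, 1e-5
F = F0 + t * G
lhs = (C(t + h) - C(t - h)) / (2 * h)   # d/dt (F^T F), central difference
rhs = G.T @ F + F.T @ G                 # product-rule expression from (2.3)
print(np.max(np.abs(lhs - rhs)))        # agrees up to rounding (C is quadratic in t)
```

This also makes visible why $R$ depends on the state $\eta$ and not only on the rate $\partial_{t}\eta$: the rate of the Cauchy-Green tensor mixes both gradients.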
Now, we start with the weak formulation and fix the space of test functions, which consists of the pairs $(\phi,\xi)\in W^{2,q}(Q;\mathbb{R}^{n})\times W^{1,2}_{0}(\Omega;\mathbb{R}^{n})$ with $\phi|_{P}=0$. Taking the derivatives of the stored energy and dissipation with respect to position and velocity, respectively, in the directions $\phi$ and $(\phi,\xi)$ gives us the corresponding forces, and we end up with: ###### Definition 2.1. We call the pair $(\eta,v)\in L^{\infty}(0,T;\mathcal{E})\cap W^{1,2}(0,T;W^{1,2}(Q))\times L^{2}(0,T;W^{1,2}_{0}(\Omega;\mathbb{R}^{n}))$ with $\eta|_{P}=\gamma$ and $\eta(0)=\eta_{0}$ satisfying (2.4) $\displaystyle\left\langle b,\phi(0)\right\rangle_{Q}+\left\langle v_{0},\xi(0)\right\rangle_{\Omega}$ $\displaystyle=\int_{0}^{T}\rho_{s}\left\langle\partial_{t}\eta,\partial_{t}\phi\right\rangle_{Q}+\left\langle DE(\eta),\phi\right\rangle+\left\langle D_{2}R(\eta,\partial_{t}\eta),\phi\right\rangle+\nu\left\langle{\varepsilon}v,{\varepsilon}\xi\right\rangle_{\Omega}\ $ $\displaystyle\quad+\left\langle D_{2}A(\eta,\partial_{t}\eta-v\circ\eta),\phi-\xi\circ\eta\right\rangle+\rho_{f}\left\langle v,\partial_{t}\xi-v\cdot\nabla\xi\right\rangle_{\Omega}-\left\langle f,\xi\right\rangle_{\Omega}dt,$ for all $(\phi,\xi)\in C^{\infty}([0,T]\times Q)\times C^{\infty}(0,T;C^{\infty}_{0}(\Omega))$, with $\phi(T)=0$, $\phi|_{P}=0$, $\operatorname{div}\xi=0$ and $\xi(T)=0$ a weak solution of the FPSI problem given by (2.1) and (2.3) with the respective initial and boundary values.
The corresponding strong formulation reads (2.5) $\displaystyle\left\\{\begin{aligned} \rho_{s}\partial_{t}^{2}\eta&=\operatorname{div}\sigma&\text{ in }&[0,T]\times Q\\\ \operatorname{div}\sigma&=DE(\eta)+D_{2}R(\eta,\partial_{t}\eta)+a(\nabla\eta)\cdot(\partial_{t}\eta-v\circ\eta)&\text{ in }&[0,T]\times Q\\\ \operatorname{div}v&=0&\text{ in }&[0,T]\times\Omega\\\ \rho_{f}(\partial_{t}v+v\cdot\nabla v)&=\nu\Delta v+F-f-\nabla p&\text{ in }&[0,T]\times\Omega\\\ F&=(\det\nabla(\eta^{-1}))a(\nabla\eta\circ\eta^{-1})\cdot(v-\partial_{t}\eta\circ\eta^{-1})&\text{ in }&[0,T]\times\eta(Q)\\\ F&=0&\text{ in }&[0,T]\times\Omega\setminus\eta(Q)\\\ \sigma\cdot n&=0&\text{ in }&[0,T]\times\partial Q\setminus P\\\ \eta&=\gamma&\text{ in }&[0,T]\times\partial P\\\ v&=0&\text{ in }&[0,T]\times\partial\Omega\\\ v(0)&=v_{0}&\text{ in }&\Omega\\\ \eta(0)=\eta_{0}&\text{ and }\partial_{t}\eta(0)=b&\text{ in }&Q,\end{aligned}\right.$ where $\sigma$ is the Piola-Kirchhoff stress tensor derived from $E$ and $F$ is the Eulerian drag force exerted by the solid on the fluid. In deriving the strong formulation from the energy contribution, the drag term is used twice (it appears in the PDE for the velocity $v$ as well as in that for the deformation $\eta$), but deriving both constituents from the same term guarantees that these two forces are indeed equal and opposite. Notice also that the boundary condition on $\sigma$ is of Neumann type and due to a balance of stresses on the porous medium-fluid interface; in the weak formulation this is already encoded implicitly. The goal of this paper is to prove the following theorem: ###### Theorem (Existence for the porous media problem). Given $T>0$, $f\in L^{2}([0,T]\times\Omega;\mathbb{R}^{n})$, $\eta_{0}\in\mathcal{E}$, $b\in L^{2}(Q;\mathbb{R}^{n})$, $v_{0}\in L^{2}(\Omega;\mathbb{R}^{n})$, there exists a time $T^{*}\in(0,T]$ and a weak solution (in the sense of (2.4)) to (2.5).
More precisely, we show the existence of a pair $(\eta,v)\in C^{0}([0,T^{*}];W^{2,q}(Q;\mathbb{R}^{n}))\cap W^{1,2}([0,T^{*}];W^{1,2}(Q;\mathbb{R}^{n}))\times W^{1,2}([0,T^{*}]\times\Omega;\mathbb{R}^{n})$ with initial data $\eta(0)=\eta_{0}$, $\partial_{t}\eta(0)=b$, $v(0)=v_{0}$ (in the weak sense), satisfying (2.4). Additionally, this solution satisfies the physical energy inequality. Here $T^{*}$ either equals $T$ or is the first time of (self-)contact, where the free part of $\partial\eta(T^{*},Q)$ either touches itself or $\partial\Omega$. Theorem 2 is proved in Section 4. However, as the method of proof has the potential to be extended to more general models (see below), we first present, for a better understanding of the techniques, an overview in Section 3. ### Limitations and generalizations The model considered in this paper, for which we prove existence of weak solutions in Theorem 2, includes many important features that have not been considered in mathematical FPSI yet: namely, the model is _fully dynamic including inertia_ and the response of the solid is _fully nonlinear allowing for large structural deformations_. On the other hand, we have made several simplifying assumptions, including the following ones: 1. (1) Non-saturated/non-zero volume porous media: In the model at hand, the volume fraction of the fluid inside the porous medium is always fixed to one. This greatly simplifies the discussion as it in particular implies $\operatorname{div}v=0$ everywhere, even over the interface. It is however possible to generalize this condition and assume divergence-free flow only on a part of the domain. In particular, in [6] we already dealt with a global velocity field that is only divergence free outside the solid. 2. (2) Dependence of the solid stored energy on fluid variables: In the model we assume that the stored energy of the solid does not depend on the properties of the saturating fluid. 
Generalizations in this direction are known in modeling and include e.g. Biot's model, where the stored energy depends on the pressure of the fluid in the pores, or models that depend on the volume fractions of fluid and solid at each point. 3. (3) Compressibility of the fluid: The fluid is assumed incompressible and Newtonian in the present model. Nonetheless, in view of possible applications like breathing through masks, incorporating compressibility seems an important task. This will call for changes in the analysis, but possibly also in the modelling, in particular if the pressure additionally enters the solid stored energy as in Biot's model. (See [9] for an example of the variational strategy in the context of fluid-structure interaction with a compressible fluid.) 4. (4) Drag force: We assume the most basic form of the drag force, which depends linearly on the difference between the fluid and solid velocities. Moreover, the drag force is the only place in the porous medium model where the coupling between the fluid and solid velocities occurs. One could think of a drag force that depends also on other variables included in the model (like volume fractions), or of other coupling possibilities. 5. (5) Interface conditions and boundary layers: The interface condition at the transition between the porous medium and the surrounding fluid currently consists of continuity of the fluid velocity and a stress-free boundary condition for the solid. Generalizations are conceivable, for instance an additional boundary layer described by an additional dissipation functional. 6. (6) Geometry of the porous medium: In many cases the porous medium can be well described as a lower-dimensional structure, so that an appropriate membrane, shell or plate model is applicable. Applications include cell walls or the aforementioned breathing through masks. 
The variational approach presented here is well suited to handle these situations as well; for classical FSI this situation is currently being investigated [25], so that generalizations to the FPSI setting seem feasible. It seems plausible that the discussed aspects (or others) could be incorporated into the presented model in future work. Let us remark, however, that some limitations of the given model do not seem to be removable with the current state of the art. This concerns most significantly the solid model: in particular, it is essential to include regularizations featuring the second gradient (see Remark 2). ## 3\. Variational strategy - Overview of the proof of Theorem 2 We are interested in proving existence of weak solutions to the system (2.5) in the sense of Definition 2.1. As is standard, we will first prove existence of approximations of (2.5) and then pass to the limit. To enable the limit passage, a crucial prerequisite is the availability of suitable a-priori estimates, which are usually based on an appropriate energy (in)equality. 
Indeed, formally multiplying the balances of the respective momenta in (2.5) by the velocity and integrating over $[0,T]$ leads to $\displaystyle\underbrace{E(\eta(T))+\frac{\rho_{s}}{2}\left\|\partial_{t}\eta(T)\right\|_{{Q}}^{2}+\frac{\rho_{f}}{2}\left\|v(T)\right\|_{{\Omega}}^{2}}_{\text{energy at final time $T$}}+\underbrace{\int_{0}^{T}\left[\nu\left\|{\varepsilon}v\right\|_{{\Omega}}^{2}+2R(\eta,\partial_{t}\eta)+2A(\eta,\partial_{t}\eta-v\circ\eta)\right]dt}_{\text{dissipated energy}}$ $\displaystyle=\underbrace{E(\eta_{0})+\frac{\rho_{s}}{2}\left\|\partial_{t}\eta(0)\right\|_{{Q}}^{2}+\frac{\rho_{f}}{2}\left\|v_{0}\right\|_{{\Omega}}^{2}}_{\text{energy at the initial time $0$}}+\underbrace{\int_{0}^{T}\left\langle f,v\right\rangle_{\Omega}dt}_{\text{power of the external forces}},$ which, on a more abstract level, takes the compact form $\displaystyle E_{\text{st}}(T)+E_{\text{kin}}(T)+\int_{0}^{T}W_{\text{diss}}(t)dt=E_{\text{st}}(0)+E_{\text{kin}}(0)+\int_{0}^{T}W_{\text{ext}}(t)dt$ where we distinguish the following four contributions: the stored energy $E_{\text{st}}$, the kinetic energy $E_{\text{kin}}$, the energy lost through dissipation $W_{\text{diss}}$ and the work done by external forces $W_{\text{ext}}$. The kinetic energy and the external forces will generally have a similar form, independent of the considered material. The other two, on the other hand, carry the constitutive information of the material modelling. In this work, we stick to the form (2.1) and (2.3) of these contributions, but the method of proof is to a large extent independent of them, so we shall not use their specific form within this section. We will prove Theorem 2 by _time-discretization_; this is particularly suited as the geometry between the porous medium and the surrounding fluid changes during the evolution. 
Thus, even if we do not deal with the situation here, a time-discretization method has the potential to handle complex transition conditions between the porous medium and the fluid. More importantly, however, a time-discretization allows us to set up a _variational problem_ to prove _existence of approximate solutions_. This is of great importance as the standardly used fixed-point methods do not function well if the involved operators are not monotone (in some sense) and/or the set of admissible functions is not convex. Nonetheless, the selected time-discretization method needs to be chosen carefully, as the balance of momentum for the solid porous medium (i.e. the first equation in (2.5)) is _hyperbolic, featuring a non-monotone differential operator stemming from the non-convex stored energy_. Let us explain the difficulty using a simple toy model involving a single unit mass particle with position $x(t)\in\mathbb{R}^{n}$ and a _non-convex_ potential energy $E(x(t))$. Hence, we seek the solution to the following hyperbolic ODE: $\partial_{t}^{2}x=-\nabla E(x)$ with initial data $x(0)=x_{0}$ and $\partial_{t}x(0)=x^{*}$. The naive ansatz is to consider a time-discretization with step-size $\tau$ and simply set up a backward Euler discretization as follows: (3.1) $\displaystyle 0=\nabla E(x_{k+1})+\frac{\tfrac{x_{k+1}-x_{k}}{\tau}-\tfrac{x_{k}-x_{k-1}}{\tau}}{\tau}.$ Assuming that existence of solutions to (3.1) can be proved, one would like to derive an energy inequality from (3.1) in order to obtain a-priori estimates. Mimicking the continuous case, the most straightforward way to do so seems to be to test (3.1) with the discretized time derivative. This yields $0=\left\|\tfrac{x_{k+1}-x_{k}}{\tau}\right\|^{2}-\left\langle\tfrac{x_{k}-x_{k-1}}{\tau},\tfrac{x_{k+1}-x_{k}}{\tau}\right\rangle+\left\langle\nabla E(x_{k+1}),\tfrac{x_{k+1}-x_{k}}{\tau}\right\rangle,$ where the second term can be estimated using Young's inequality. 
Thus, we need to concentrate on the last term; in the time-continuous case we would apply a chain rule here. However, in the discrete setting a chain rule is generally not available; only if $E$ were convex could the following inequality, $\left\langle\nabla E(x_{k+1}),\tfrac{x_{k+1}-x_{k}}{\tau}\right\rangle\geq E(x_{k+1})-E(x_{k}),$ sometimes called the _discrete chain rule_, be applied. In the _non-convex setting_ this inequality is false in general and this path to a-priori estimates seems closed.333In some cases, e.g. in the case of $\lambda$-convexity, it is still possible to derive a variant of the discrete chain rule with some additional error terms on the right hand side. This can lead the way to an a-priori estimate and a resulting existence proof, but is much more problem-specific than the general method we present here. The solution to this quandary has been found in [6] by noting that a _two-scale_ approximation is needed in (3.1) instead. Thus we consider two parameters $\tau,h$ and write (3.2) $0=\nabla E(x_{k+1})+\frac{\tfrac{x_{k+1}-x_{k}}{\tau}-\tfrac{x_{k}-x_{k-1}}{\tau}}{h}.$ If $h$ is fixed and $t<h$, (3.2) can be viewed as a $\tau$-time discretization of the following gradient flow problem under forcing (3.3) $\displaystyle\nabla E(x(t))=-\frac{\partial_{t}x(t)-x^{*}}{h},\quad x(0)=x_{0},$ where $x^{*}=\partial_{t}x(0)$. Indeed, if we can pass to the limit $\tau\to 0$ in (3.2), we construct a solution $x^{h}$ to (3.3) on $[0,h]$. Now using the known values of $\partial_{t}x(t-h)$ in place of $x^{*}$ for the next interval of length $h$, we can iteratively prolong this process to obtain what we will call a _time-delayed solution_, satisfying (3.4) $\displaystyle\nabla E(x(t))=-\frac{\partial_{t}x(t)-\partial_{t}x(t-h)}{h}$ for $t\in[0,T]$. 
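The two-scale scheme can be sketched numerically for the toy problem. The following is a minimal sketch with assumed data (the double-well energy $E(x)=(x^2-1)^2/4$, and explicit Euler sub-steps instead of the implicit steps used in the analysis): on each window of length $h$ the delayed velocity $\partial_t x(t-h)$ is frozen data, (3.4) is solved for $\partial_t x$, and the recorded velocities of one window serve as the delay data of the next.

```python
# Toy sketch (assumed data) of the time-delayed scheme (3.4) for
# d^2x/dt^2 = -E'(x) with the non-convex energy E(x) = (x^2 - 1)^2 / 4.

def grad_E(x):
    return x * (x * x - 1.0)          # E'(x)

def E(x):
    return (x * x - 1.0) ** 2 / 4.0

def time_delayed(x0, v0, h, tau, T):
    m = round(h / tau)                # sub-steps per window of length h
    windows = round(T / h)
    x = x0
    v_prev = [v0] * m                 # delayed velocities dx/dt(t - h)
    for _ in range(windows):
        v_cur = []
        for k in range(m):
            v = v_prev[k] - h * grad_E(x)   # (3.4) solved for dx/dt
            x = x + tau * v                 # explicit Euler sub-step
            v_cur.append(v)
        v_prev = v_cur                # this window's velocities become the delay data
    return x, v_prev[-1]

x0, v0 = 0.5, 0.0
xT, vT = time_delayed(x0, v0, h=0.01, tau=1e-4, T=1.0)
# The limit dynamics conserve 0.5*v^2 + E(x); the delayed scheme reproduces
# this up to O(h) dissipation, in line with the hyperbolic a-priori estimate.
drift = abs(0.5 * vT ** 2 + E(xT) - (0.5 * v0 ** 2 + E(x0)))
```

Note how the velocity only changes by $-h\nabla E$ per window, so that over $T/h$ windows the accumulated change is $-\int\nabla E\,dt$, recovering Newton's law.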
We can test this time-delayed solution with $\partial_{t}x(t)$ and find the hyperbolic a-priori estimate $\displaystyle E(x(b))-E(x(a))$ $\displaystyle=-\int_{a}^{b}\left\langle\frac{\partial_{t}x(t)-\partial_{t}x(t-h)}{h},\partial_{t}x(t)\right\rangle\,dt$ $\displaystyle\quad\leq-\frac{1}{2}\fint_{b-h}^{b}\left|{\partial_{t}x(t)}\right|^{2}dt+\frac{1}{2}\fint_{a-h}^{a}\left|{\partial_{t}x(t)}\right|^{2}dt,$ whenever the solution was constructed over $[a-h,b]$.444Actually, in the construction procedure the hyperbolic a-priori estimate already needs to be used on each interval of length $h$ to guarantee the admissibility of the initial/right-hand data of the next. The above gives us a good estimate on $E(x(t))$ and an averaged time-derivative, independent of $h$, which allows for sending $h\to 0$ in (3.4). Thus this idea paves the way to Theorem 2. On the level of the $\tau$-discretization we deal with the gradient flow problem (3.4). For this, variational methods are well established, as we explain in the following. Let us consider the rescaled problem with $h=1$, so that we obtain $\nabla E(x(t))=-\partial_{t}x(t)-f,\quad x(0)=x_{0},$ with some given time-dependent force $f$, which includes the previously constructed $\partial_{t}x(t-h)=\partial_{t}x(t-1)$. We may discretize this by means of the backward Euler method to get (3.5) $\displaystyle 0=\nabla E(x_{k+1})+\frac{x_{k+1}-x_{k}}{\tau}+f_{k},$ where $f_{k}$ is a corresponding time discretization of $f$. Now one realizes that (3.5) is actually the Euler-Lagrange equation of the minimization problem $\mathcal{F}_{k}(x)\longrightarrow\text{min},$ where $\mathcal{F}_{k}:x\mapsto E(x)+\frac{1}{2\tau}\left|{x-x_{k}}\right|^{2}+(x-x_{k})\cdot f_{k}$. Thus, one constructs $x_{k+1}$ via the minimization problem. Now, since one specifically has a minimizer, instead of using the equation one can compare the values of $\mathcal{F}_{k}(x)$ at $x_{k+1}$ and $x_{k}$ in order to derive a-priori estimates. 
This leads to $\displaystyle E(x_{k+1})+\frac{\tau}{2}\left|{\frac{x_{k+1}-x_{k}}{\tau}}\right|^{2}+\tau\frac{x_{k+1}-x_{k}}{\tau}\cdot f_{k}=\mathcal{F}_{k}(x_{k+1})\leq\mathcal{F}_{k}(x_{k})=E(x_{k}),$ and summing up yields the energy inequality (up to a factor on the dissipation) and, in turn, a-priori estimates. This method of resorting to minimization is known as the _minimizing movements approximation_ of gradient flows and is due to De Giorgi [19]. It has since been widely used in mathematical solid mechanics, since it allows one to cope with the non-convexities of $E$ (see e.g. [26]), and it is well suited to our case as well. ###### Remark (Minimization in the hyperbolic case). A more direct attempt at a proof of the main theorem would be to generalize the minimizing movements by minimizing the functional (3.6) $\displaystyle\mathcal{F}_{k}(x):=E(x)+\frac{1}{2}\left|{\tfrac{x-x_{k}}{\tau}-\tfrac{x_{k}-x_{k-1}}{\tau}}\right|^{2},$ which is easily checked to have (3.1) as its Euler-Lagrange equation. But then the same obstructions apply to establishing estimates via the equation, and comparing the values of $\mathcal{F}_{k}(x)$ at $x_{k+1}$ and $x_{k}$ instead yields $\displaystyle E(x_{k+1})+\frac{\tau^{2}}{2}\left|{\frac{\tfrac{x_{k+1}-x_{k}}{\tau}-\tfrac{x_{k}-x_{k-1}}{\tau}}{\tau}}\right|^{2}=\mathcal{F}_{k}(x_{k+1})\leq\mathcal{F}_{k}(x_{k})=E(x_{k})+\frac{1}{2}\left|{\tfrac{x_{k}-x_{k-1}}{\tau}}\right|^{2}$ with a term on the right-hand side that turns out to have entirely the wrong scaling to be estimated.555This is not surprising, as we are comparing a proper, approximately inertial solution with one that suddenly stops. A better competitor might be the “straight continuation” $x_{k}+(x_{k}-x_{k-1})$, but then the estimate again requires convexity to deal with the energy term. Thus, the energy estimate cannot be obtained. 
∎ The approach explained here turns out to be applicable in infinite-dimensional spaces instead of $\mathbb{R}^{n}$, and even to a coupling between Eulerian and Lagrangian coordinates. For this Eulerian-Lagrangian coupling, however, additional difficulties appear that require some novel ideas of their own. This will be discussed in the forthcoming sections. ###### Remark (Numerical use of the method). Since numerous numerical schemes for minimization (over discrete spaces) are available, the above methodology might also be attractive for computational mathematics. The idea here would be a two-scale approximation: once $x_{k+1}^{\ell-1},x_{k}^{\ell-1}$ and $x^{\ell}_{k}$ are constructed, we can define $x_{k+1}^{\ell}$ as the minimizer of $\displaystyle\mathcal{F}_{k}^{\ell}(x):=E(x)+\frac{\tau}{2h}\left|{\frac{x-x_{k}^{\ell}}{\tau}-\frac{x_{k+1}^{\ell-1}-x_{k}^{\ell-1}}{\tau}}\right|^{2}.$ In order to pass to the limit it is in general unavoidable to use the hyperbolic structure on a time-continuous level. This means that first $\tau\to 0$ and only afterwards $h\to 0$. The question is how much smaller $\tau$ needs to be. One quickly observes that in case $E$ is convex, $\tau$ and $h$ can be chosen arbitrarily. Hence, the required smallness of $\tau$ relative to $h$ should depend on the non-convexity of the assumed energies. In a forthcoming paper we hope to investigate this issue further. ∎ ## 4\. Proof of Theorem 2 In this section we prove Theorem 2, i.e. we prove existence of weak solutions of the FPSI-problem (2.5). We follow the general strategy as outlined in Section 3 and as devised in [6]. First we show existence of solutions to a time-delayed, parabolic problem and then we use these solutions to construct the weak solution to the actual problem. These two parts will be the topics of the next two subsections. ### 4.1. 
Existence for the time-delayed problem The key to finding a solution to the time-delayed problem is to understand that on short time-scales, the time-delayed problem is ultimately parabolic. Instead of treating the delayed velocities $\partial_{t}\eta(t-h)$ and $v(t-h)$ as non-local (in time) parts of the equation, we can treat them as fixed given data by considering times $t<h$ only. This point of view allows us to state and prove the following Proposition 4.1 in this subsection. Additionally, in this subsection and in Proposition 4.1, we need to introduce regularizations of the involved stored energy and dissipation functionals, depending on the following parameters: We take $k_{0}\in\mathbb{N}$ in such a way that $W^{k_{0},2}(\Omega;\mathbb{R}^{n})$ embeds into $C^{1}(\Omega;\mathbb{R}^{n})\cap W^{2,q}(Q;\mathbb{R}^{n})$. Further we choose $a_{0}\in(0,1)$ appropriately, to be fixed later. ###### Proposition (Existence for short time time-delayed solutions). Given $h>0$ sufficiently small, $f,w\in L^{2}([0,h]\times\Omega;\mathbb{R}^{n})$, $\zeta\in L^{2}([0,h]\times Q;\mathbb{R}^{n})$, $\eta_{0}\in\mathcal{E}$ there exist $\eta\in C^{0}([0,h];\overline{\mathcal{E}})\cap W^{1,2}([0,h];W^{k_{0},2}(Q;\mathbb{R}^{n}))\text{ with }\eta(t)|_{P}=\gamma$ and $v\in W_{0}^{k_{0},2}([0,h]\times\Omega;\mathbb{R}^{n})$ with $\operatorname{div}v=0$ such that $\displaystyle\int_{0}^{h}\langle DE(\eta),\phi\rangle+h^{a_{0}}\left\langle\nabla^{k_{0}}\eta,\nabla^{k_{0}}\phi\right\rangle_{Q}+\langle D_{2}R(\eta,\partial_{t}\eta),\phi\rangle+h\left\langle\nabla^{k_{0}}\partial_{t}\eta,\nabla^{k_{0}}\phi\right\rangle_{Q}$ $\displaystyle+\langle D_{2}A(\eta,\partial_{t}\eta-v\circ\eta),\phi-\xi\circ\eta\rangle+\nu\left\langle{\varepsilon}v,{\varepsilon}\xi\right\rangle_{\Omega}+h\left\langle\nabla^{k_{0}}v,\nabla^{k_{0}}\xi\right\rangle_{\Omega}$ 
$\displaystyle+\rho_{s}\left\langle\tfrac{\partial_{t}\eta-\zeta}{h},\phi\right\rangle_{Q}+\rho_{f}\left\langle\tfrac{v\circ\Phi-w}{h},\xi\circ\Phi\right\rangle_{\Omega}-\left\langle f,\xi\right\rangle_{\Omega}dt=0$ for all $\phi\in L^{2}([0,h];W^{2,q}(Q;\mathbb{R}^{n}))$ with $\phi|_{P}=0$ and $\xi\in L^{2}([0,h];W^{1,2}_{0}(\Omega;\mathbb{R}^{n}))$ with $\operatorname{div}\xi=0$. Here $\Phi:[0,h]\times\Omega\to\Omega$ is the flow map of $v$, i.e. a family of volume-preserving diffeomorphisms $\Phi(t,.):\Omega\to\Omega$ such that $\Phi(0,.)=\textrm{id}$ and $\partial_{t}\Phi(t,y)=v(t,\Phi(t,y))$. In accordance with the regularizing parameters, we define $\displaystyle E_{h}(\eta):=E(\eta)+\frac{h^{a_{0}}}{2}\left\|\nabla^{k_{0}}\eta\right\|_{{Q}}^{2}\text{ and }R_{h}(\eta,b):=R(\eta,b)+\frac{h}{2}\left\|\nabla^{k_{0}}b\right\|_{{Q}}^{2}$ as a shorthand and note that $\displaystyle\left\langle DE_{h}(\eta),\phi\right\rangle=\left\langle DE(\eta),\phi\right\rangle+h^{a_{0}}\left\langle\nabla^{k_{0}}\eta,\nabla^{k_{0}}\phi\right\rangle_{Q}$ and similarly for $\left\langle D_{2}R_{h}(\eta,\partial_{t}\eta),\phi\right\rangle$. These regularizations will help us at several points, both with establishing the energy inequality and with constructing the flow map of the fluid. As explained previously, the key to dealing with such problems in a manner consistent with our energy considerations is De Giorgi's method of minimizing movements [19]. 
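As a toy illustration of the minimizing-movements idea (a minimal sketch with assumed data, not the actual infinite-dimensional construction): for the scalar double-well energy from Section 3, each step minimizes $\mathcal{F}_k(x)=E(x)+\tfrac{1}{2\tau}|x-x_k|^2+(x-x_k)\cdot f_k$ by brute-force grid search, and comparing the minimizer with "standing still" yields the discrete energy inequality without any convexity of $E$.

```python
# Sketch (assumed toy energy) of De Giorgi's minimizing movements: each step
# x_{k+1} minimizes F_k(x) = E(x) + |x - x_k|^2/(2*tau) + (x - x_k)*f_k,
# whose Euler-Lagrange equation is the backward Euler step (3.5).

def E(x):
    return (x * x - 1.0) ** 2 / 4.0   # non-convex double-well energy

def minimize_step(x_k, tau, f_k, lo=-2.0, hi=2.0, n=8001):
    # crude global 1-d minimization by grid search -- enough for a sketch
    F = lambda x: E(x) + (x - x_k) ** 2 / (2 * tau) + (x - x_k) * f_k
    return min((lo + i * (hi - lo) / (n - 1) for i in range(n)), key=F)

tau, x = 0.05, 0.5
traj = [x]
for _ in range(40):                   # gradient flow dx/dt = -E'(x), with f = 0
    x = minimize_step(x, tau, f_k=0.0)
    traj.append(x)
# Comparing the minimizer with "standing still" gives the discrete energy
# inequality E(x_{k+1}) + |x_{k+1} - x_k|^2/(2*tau) <= E(x_k) at every step;
# the iterates descend into the nearest well at x = 1.
```

The grid search stands in for whatever minimization routine one prefers; the point is that only the *minimality* of each step is used, never a chain rule or convexity.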
We thus start the proof by minimizing the functional $\displaystyle\mathcal{F}_{k}^{(\tau)}(\eta,v)$ $\displaystyle:=E_{h}(\eta)+\tau\left[R_{h}(\eta_{k}^{(\tau)},\partial_{k}^{\tau}\eta)+A(\eta_{k}^{(\tau)},\partial_{k}^{\tau}\eta-v\circ\eta_{k}^{(\tau)})+\frac{\nu}{2}\left\|{\varepsilon}v\right\|_{{\Omega}}^{2}+\frac{h}{2}\left\|\nabla^{k_{0}}v\right\|_{{\Omega}}^{2}\right]$ $\displaystyle+\frac{\tau}{2h}\left[\rho_{s}\left\|\partial_{k}^{\tau}\eta-\zeta_{k}^{(\tau)}\right\|_{{Q}}^{2}+\rho_{f}\left\|v\circ\Phi_{k}^{(\tau)}-w_{k}^{(\tau)}\right\|_{{\Omega}}^{2}\right]-\tau\left\langle f_{k}^{(\tau)},v\right\rangle_{\Omega}$ in the class of all admissible $\eta,v$, i.e. those such that $\eta\in\mathcal{E}_{h}=\mathcal{E}\cap W^{k_{0},2}(Q;\mathbb{R}^{n})$ and $v\in W_{0,\mathrm{div}}^{1,2}(\Omega;\mathbb{R}^{n})$, i.e. $v\in W_{0}^{1,2}(\Omega;\mathbb{R}^{n})$ with $\operatorname{div}v=0$. Here $\partial_{k}^{\tau}\eta:=\smash{\tfrac{\eta-\eta_{k}^{(\tau)}}{\tau}}$ is a discrete derivative and $\smash{f_{k}^{(\tau)}}:=\smash{\fint_{k\tau}^{(k+1)\tau}fdt}$ is a time-discretization of $f$. In the same way we define $\smash{\zeta_{k}^{(\tau)}}$ and $\smash{w_{k}^{(\tau)}}$. For any given $\eta_{k}^{(\tau)},\Phi_{k}^{(\tau)}$, this minimizer will yield the next step, i.e. the pair $(\eta_{k+1}^{(\tau)},v_{k+1}^{(\tau)})$. From this we then also construct $\Phi_{k+1}^{(\tau)}:=(\textrm{id}+\tau v_{k+1}^{(\tau)})\circ\Phi_{k}^{(\tau)}$.666Note that, as expected for a parabolic problem, the rate variables, i.e. the velocities $\partial_{k}^{\tau}\eta^{(\tau)}_{k}$ and $v_{k}^{(\tau)}$, are almost entirely discarded. Only the latter occurs indirectly in $\Phi_{k}^{(\tau)}$. Similarly, we are not using the initial data for these velocities. Only later will they all reappear through a proper choice of $\zeta$ and $w$. This again emphasizes that these rate variables are associated with the $h$-scale, which is kept constant for the time-delayed problem. 
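The discrete update $\Phi_{k+1}^{(\tau)}=(\textrm{id}+\tau v_{k+1}^{(\tau)})\circ\Phi_{k}^{(\tau)}$ can be illustrated on an assumed example field: for the divergence-free rotation $v(x,y)=(-y,x)$, the Jacobian of each step is the constant matrix $I+\tau\nabla v$, so the composed map has determinant $(1+\tau^2)^{T/\tau}\leq\exp(\tau T)$, anticipating the flow-map lemma below. A minimal sketch:

```python
# Sketch (assumed example field) of the composed flow-map Jacobian for
# Phi_{k+1} = (id + tau * v) o Phi_k with the divergence-free v(x,y) = (-y, x).
import math

def compose_jacobian(tau, n_steps):
    # grad v is the constant matrix A = [[0, -1], [1, 0]] (trace = div v = 0),
    # so each update multiplies the Jacobian by I + tau * A.
    J = [[1.0, 0.0], [0.0, 1.0]]
    step = [[1.0, -tau], [tau, 1.0]]
    for _ in range(n_steps):
        J = [[sum(step[i][k] * J[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return J

T = 1.0
dets, bounds = [], []
for tau in (1e-1, 1e-2, 1e-3):
    J = compose_jacobian(tau, round(T / tau))
    dets.append(J[0][0] * J[1][1] - J[0][1] * J[1][0])
    bounds.append(math.exp(tau * T))   # det = (1 + tau^2)^(T/tau) <= exp(tau * T)
```

Each discrete step is not exactly volume preserving, but the volume defect is of order $\tau^2$ per step and vanishes in the limit $\tau\to 0$.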
The existence of a minimizer of $\mathcal{F}_{k}^{(\tau)}$ follows by the direct method of the calculus of variations. Indeed, by the coercivity of $\smash{\mathcal{F}_{k}^{(\tau)}}$, we find a bounded minimizing sequence $(\eta_{k}^{l},v_{k}^{l})\in\mathcal{E}_{h}\times\smash{W^{1,2}_{0,\mathrm{div}}(\Omega;\mathbb{R}^{n})}$ such that $\mathcal{F}_{k}^{(\tau)}(\eta_{k}^{l},v_{k}^{l})\to\mathrm{inf}_{(\eta,v)\in\mathcal{E}_{h}\times W^{1,2}_{0,\mathrm{div}}(\Omega;\mathbb{R}^{n})}\mathcal{F}_{k}^{(\tau)}(\eta,v).$ Here, the boundedness of the sequence is due in particular to the added regularizing terms as well as to the highest-order term in the elastic stored energy (though existence of minimizers could be shown even if the regularizing terms were omitted). By the Banach-Alaoglu theorem, we conclude the existence of a pair $(\eta_{k}^{\tau},v_{k}^{\tau})\in W^{k_{0},2}(Q;\mathbb{R}^{n})\cap W^{2,q}(Q;\mathbb{R}^{n})\times W^{1,2}(\Omega,\mathbb{R}^{n})$ such that $(\eta_{k}^{l},v_{k}^{l})\rightharpoonup(\eta_{k}^{\tau},v_{k}^{\tau})\text{ in }W^{k_{0},2}(Q;\mathbb{R}^{n})\cap W^{2,q}(Q;\mathbb{R}^{n})\times W^{1,2}(\Omega,\mathbb{R}^{n}).$ By compact embeddings, owing to the regularizing terms, we can show that $\eta_{k}^{\tau}$ fulfills the Ciarlet-Nečas condition and thus lies in $\mathcal{E}_{h}$,777This can be shown even under a weaker type of convergence; see the original paper [14]. and, due to linearity, $v_{k}^{\tau}$ is divergence free and satisfies the zero boundary condition. It remains to show that $\mathcal{F}_{k}^{\tau}$ is weakly lower semicontinuous with respect to the weak convergence above. This is standard, due to the convexity of the highest-order terms and the strong convergence $\eta_{k}^{l}\to\eta_{k}^{\tau}$ in $C^{1}(Q;\mathbb{R}^{n})$. Thus, $(\eta_{k}^{\tau},v_{k}^{\tau})$ is the sought minimizer. 
The reason we prefer a minimization to other approaches is that it allows us to immediately access a discrete energy inequality, without having to rely on additional properties of the energy, such as convexity. For this we compare the minimal value, i.e. that of $(\eta_{k+1}^{(\tau)},v_{k+1}^{(\tau)})$, to that of “standing still”, i.e. $(\eta_{k}^{(\tau)},0)$, in the same functional. This removes all dissipative terms on one side of the inequality and gives us $\displaystyle\phantom{{}={}}E_{h}(\eta_{k+1}^{(\tau)})+\tau\left[R_{h}(\eta_{k}^{(\tau)},\partial_{k}^{\tau}\eta_{k+1}^{(\tau)})+A(\eta_{k}^{(\tau)},\partial_{k}^{\tau}\eta_{k+1}^{(\tau)}-v_{k+1}^{(\tau)}\circ\eta_{k}^{(\tau)})+\frac{\nu}{2}\left\|{\varepsilon}v_{k+1}^{(\tau)}\right\|_{{\Omega}}^{2}+\frac{h}{2}\left\|\nabla^{k_{0}}v^{(\tau)}_{k+1}\right\|_{{\Omega}}^{2}\right]$ $\displaystyle+\frac{\tau}{2h}\left[\rho_{s}\left\|\partial^{\tau}_{k}\eta_{k+1}^{(\tau)}-\zeta_{k}^{(\tau)}\right\|_{{Q}}^{2}+\rho_{f}\left\|v_{k+1}^{(\tau)}\circ\Phi_{k}^{(\tau)}-w_{k}^{(\tau)}\right\|_{{\Omega}}^{2}\right]-\tau\left\langle f_{k}^{(\tau)},v_{k+1}^{(\tau)}\right\rangle_{\Omega}$ $\displaystyle\leq E_{h}(\eta_{k}^{(\tau)})+\frac{\tau}{2h}\left[\rho_{s}\left\|\zeta_{k}^{(\tau)}\right\|_{{Q}}^{2}+\rho_{f}\left\|w_{k}^{(\tau)}\right\|_{{\Omega}}^{2}\right].$ A telescoping argument and the usual weighted Young's inequality then give us an energy estimate as well. 
$\displaystyle E_{h}(\eta_{N}^{(\tau)})+\\!\sum_{k=0}^{N-1}\tau\left[R_{h}(\eta_{k}^{(\tau)},\partial^{\tau}_{k}\eta_{k+1}^{(\tau)})+A(\eta_{k}^{(\tau)},\partial_{k}^{\tau}\eta_{k+1}^{(\tau)}-v_{k+1}^{(\tau)}\circ\eta_{k}^{(\tau)})+\frac{\nu}{2}\left\|{\varepsilon}v_{k+1}^{(\tau)}\right\|_{{\Omega}}^{2}\\!+\frac{h}{2}\left\|\nabla^{k_{0}}v^{(\tau)}_{k+1}\right\|_{{\Omega}}^{2}\right]$ $\displaystyle+\sum_{k=0}^{N-1}C\tau\left[\rho_{s}\left\|\partial^{\tau}_{k}\eta_{k+1}^{(\tau)}\right\|_{{Q}}^{2}+\rho_{f}\left\|v_{k+1}^{(\tau)}\circ\Phi_{k}^{(\tau)}\right\|_{{\Omega}}^{2}\right]$ $\displaystyle\leq E_{h}(\eta_{0})+C\sum_{k=0}^{N-1}\frac{\tau}{2h}\left[\rho_{s}\left\|\smash{\zeta_{k}^{(\tau)}}\right\|_{{Q}}^{2}+\rho_{f}\left\|\smash{w_{k}^{(\tau)}}\right\|_{{\Omega}}^{2}+\left\|\smash{f_{k}^{(\tau)}}\right\|_{{\Omega}}^{2}\right]$ This is the first of many similar energy estimates related to the physical energy inequality that we will make use of. Note also that in each step this allows us to derive a uniform estimate on the distance from the initial data, in the form $\displaystyle\left\|\eta_{0}-\smash{\eta_{N}^{(\tau)}}\right\|_{{Q}}\leq\sum_{k=0}^{N-1}\tau\left\|\partial^{\tau}_{k}\eta^{(\tau)}_{k+1}\right\|_{{Q}}\leq\sqrt{\sum_{k=0}^{N-1}\tau}\sqrt{\sum_{k=0}^{N-1}\tau\left\|\partial^{\tau}_{k}\eta^{(\tau)}_{k+1}\right\|_{{Q}}^{2}}\leq C\sqrt{h}.$ Since a straightforward calculation (see [6, Prop. 2.7]) shows that any given state of finite energy has a minimum $L^{2}$-distance to any (self-)colliding state of finite energy, this also tells us that we can indeed choose $h$ small enough to avoid such a collision. 
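The Cauchy-Schwarz chain behind this distance bound can be checked on synthetic data (a sketch with assumed random rates, not data from the actual scheme): for any discrete velocities $d_k$ over $N$ steps of size $\tau$ one has $|\sum_k\tau d_k|\leq\sum_k\tau|d_k|\leq\sqrt{N\tau}\,\sqrt{\sum_k\tau|d_k|^2}$, so a bounded dissipation sum controls the travelled distance by $\sqrt{h}$ with $h=N\tau$.

```python
# Numerical sanity check (synthetic rates) of the chain
# |x_N - x_0| <= sum tau*|d_k| <= sqrt(N*tau) * sqrt(sum tau*|d_k|^2).
import math, random

random.seed(0)
tau, N = 1e-3, 500                  # window length h = N * tau = 0.5
rates = [random.uniform(-1.0, 1.0) for _ in range(N)]   # discrete velocities d_k
displacement = abs(sum(tau * d for d in rates))         # |x_N - x_0|
triangle = sum(tau * abs(d) for d in rates)             # triangle inequality
dissipation = sum(tau * d * d for d in rates)           # dissipation-type sum
bound = math.sqrt(N * tau) * math.sqrt(dissipation)     # Cauchy-Schwarz bound
```
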
Next, we define the following piecewise constant and piecewise affine (in time) approximations: $\displaystyle\bar{\eta}^{(\tau)}(t,x)$ $\displaystyle:=\eta_{k+1}^{(\tau)}(x)$ for $\displaystyle t\in[\tau k,\tau(k+1)),x\in Q$ $\displaystyle\underline{\eta}^{(\tau)}(t,x)$ $\displaystyle:=\eta_{k}^{(\tau)}(x)$ for $\displaystyle t\in[\tau k,\tau(k+1)),x\in Q$ $\displaystyle\hat{\eta}^{(\tau)}(t,x)$ $\displaystyle:=((k+1)-t/\tau)\eta_{k}^{(\tau)}(x)+(t/\tau-k)\eta_{k+1}^{(\tau)}(x)$ for $\displaystyle t\in[\tau k,\tau(k+1)),x\in Q$ $\displaystyle v^{(\tau)}(t,y)$ $\displaystyle:=v_{k}^{(\tau)}(y)$ for $\displaystyle t\in[\tau k,\tau(k+1)),y\in\Omega$ $\displaystyle\Phi^{(\tau)}(t,y)$ $\displaystyle:=\Phi_{k}^{(\tau)}(y)$ for $\displaystyle t\in[\tau k,\tau(k+1)),y\in\Omega$ In particular, we have $\displaystyle\partial_{t}\hat{\eta}^{(\tau)}(t,x)$ $\displaystyle=\partial^{\tau}_{k}\eta_{k+1}^{(\tau)}(x)$ for $\displaystyle t\in(\tau k,\tau(k+1)),x\in Q$ Plugging these quantities into the energy estimate above, we get that there is a constant $C$ independent of $\tau$ such that $\displaystyle\sup_{t\in[0,h]}E_{h}(\bar{\eta}^{(\tau)}(t))\leq C,$ $\displaystyle\quad\sup_{t\in[0,h]}\left\|\bar{\eta}^{(\tau)}(t)\right\|_{{W^{k_{0},2}(Q)}}\leq C,$ $\displaystyle\quad\sup_{t\in[0,h]}\left\|\hat{\eta}^{(\tau)}(t)\right\|_{{W^{k_{0},2}(Q)}}\leq C$ $\displaystyle\int_{0}^{h}\left\|\partial_{t}\hat{\eta}^{(\tau)}(t)\right\|_{{W^{k_{0},2}(Q)}}^{2}dt\leq C$ $\displaystyle\quad\int_{0}^{h}\left\|v^{(\tau)}\right\|_{{W^{k_{0},2}(\Omega)}}^{2}dt\leq C,$ $\displaystyle\int_{0}^{h}A(\underline{\eta}^{(\tau)},v^{(\tau)}\circ\bar{\eta}^{(\tau)}-\partial_{t}\hat{\eta}^{(\tau)})dt\leq C.$ Thus we can find a (non-relabelled) subsequence of $\tau$'s and limits $\eta\in W^{1,2}([0,T];W^{k_{0},2}(Q;\mathbb{R}^{n}))$ and $v\in L^{2}([0,T];W^{k_{0},2}(\Omega;\mathbb{R}^{n}))$ such that $\displaystyle\bar{\eta}^{(\tau)},\underline{\eta}^{(\tau)}\stackrel{{\scriptstyle*}}{{\rightharpoonup}}\eta\text{ in 
}L^{\infty}([0,T];W^{k_{0},2}(Q;\mathbb{R}^{n}))\quad\text{ and }\quad\hat{\eta}^{(\tau)}\rightharpoonup\eta\text{ in }W^{1,2}([0,T];W^{k_{0},2}(Q;\mathbb{R}^{n}))$ $\displaystyle v^{(\tau)}\rightharpoonup v\text{ in }L^{2}([0,T];W^{k_{0},2}(\Omega;\mathbb{R}^{n})).$ The next question we have to deal with is how to construct the flow map $\Phi$. In each step, the flow map is updated by concatenation. Not only is this in itself an inherently non-linear procedure, but for any non-zero time the number of these steps also goes to infinity as we send $\tau$ to zero. On the other hand, each of these steps represents a change on the scale $\tau$. This hints at an exponential structure, and this is indeed what we find. As is often the case (e.g. when working with conservation laws), the Lipschitz continuity of the flow velocity is essential to have a well-defined Lagrangian flow map. Here we particularly rely on the regularization of the flow velocity used at the $\tau$-level. ###### Lemma (Existence of a flow map). Assume that $v^{(\tau)}\in L^{2}([0,h];C^{0,1}(\Omega;\mathbb{R}^{n}))$ uniformly in $\tau$, such that $\operatorname{div}v^{(\tau)}=0$. Then the maps $\Phi^{(\tau)}$ are uniformly Lipschitz continuous in space, independently of $\tau$ (but not of $h$). Additionally, we have in the limit that $\displaystyle\det\nabla\Phi^{(\tau)}\to 1.$ ###### Proof. Let us illustrate the proof for the Jacobian determinant. 
Here we are using an estimate involving an expansion of the determinant, the inequality between the arithmetic and geometric means, as well as the fact that $(1+a/N)^{N}\nearrow\exp(a)$: $\displaystyle\phantom{{}={}}\det\nabla\Phi_{N}^{(\tau)}=\prod_{k=1}^{N}\left[\det\left(I+\tau\nabla v_{k}^{(\tau)}\right)\right]\circ\Phi_{k-1}^{(\tau)}$ $\displaystyle=\prod_{k=1}^{N}\bigg{[}1+\tau\underbrace{\operatorname{tr}\left(\nabla v_{k}^{(\tau)}\right)}_{=\operatorname{div}v_{k}^{(\tau)}=0}+\sum_{l=2}^{n}\tau^{l}M_{l}\left(\nabla v_{k}^{(\tau)}\right)\bigg{]}\circ\Phi_{k-1}^{(\tau)}$ $\displaystyle\leq\prod_{k=1}^{N}\left[1+\sum_{l=2}^{n}c\tau^{l}\operatorname{Lip}\left(v_{k}^{(\tau)}\right)^{l}\right]\leq\left(1+\frac{1}{N}\sum_{k=1}^{N}\sum_{l=2}^{n}\tau^{l}c\operatorname{Lip}\left(v_{k}^{(\tau)}\right)^{l}\right)^{N}$ $\displaystyle\leq\exp\left(C\tau\sum_{k=1}^{N}\tau\operatorname{Lip}\left(v_{k}^{(\tau)}\right)^{2}\right),$ where $M_{l}$ includes all terms of order $l$ in the polynomial expansion of the determinant (e.g. $M_{1}(A)=\operatorname{tr}(A)$, …, $M_{n-1}(A)=\operatorname{tr}(\operatorname{cof}(A))$, $M_{n}(A)=\det A$, cf. e.g. [6, Lemma A.1.]). Now the sum in the last term is nothing but $\tau$ times the squared $L^{2}([0,h];C^{0,1}(\Omega;\mathbb{R}^{n}))$-norm of $v^{(\tau)}$, on which we have $\tau$-independent bounds by assumption. Similar arguments also yield a bound from below, which gives us $\det\nabla\Phi_{N}^{(\tau)}\to 1$ uniformly. Furthermore, applying a very similar argument to $\left|{\smash{\nabla\Phi^{(\tau)}_{N}}}\right|$ gives us a $\tau$-independent bound on the Lipschitz constant of $\Phi$. ∎ In particular, since $\Phi^{(\tau)}$ is piecewise constant in time, this allows us to use a variant of the Arzelà-Ascoli theorem to find a limit $\Phi:[0,h]\times\Omega\to\Omega$ (after passing to a subsequence), which has to be a volume-preserving diffeomorphism with $\displaystyle\partial_{t}\Phi(t,y)=v(t,\Phi(t,y))\text{ for all }t\in[0,h],y\in\Omega$ i.e. 
a flow map. With this, we have one half of the proof in hand, namely the construction of the objects that are going to form our solution. The other half is showing that these objects actually are a solution, i.e. that they satisfy the right equation. For this we can follow the same steps as in the construction. Apart from giving us an energy inequality, the other benefit of starting with a minimization is that any such minimizer has to satisfy the corresponding Euler-Lagrange equation: $\displaystyle\left\langle DE_{h}(\eta_{k+1}),\phi\right\rangle+\left\langle D_{2}R_{h}(\eta_{k},\partial^{\tau}\eta_{k+1}),\phi\right\rangle+\left\langle D_{2}A(\eta_{k},\partial^{\tau}\eta_{k+1}-v_{k+1}\circ\eta_{k}),\phi-\xi\circ\eta_{k}\right\rangle$ $\displaystyle\quad+\nu\left\langle\varepsilon v_{k+1},\varepsilon\xi\right\rangle_{\Omega}+h\left\langle\nabla^{k_{0}}v_{k+1},\nabla^{k_{0}}\xi\right\rangle_{\Omega}+\rho_{s}\left\langle\tfrac{\partial^{\tau}\eta_{k+1}-\zeta_{k}}{h},\phi\right\rangle_{Q}+\rho_{f}\left\langle\tfrac{v_{k+1}\circ\Phi_{k}-w_{k}}{h},\xi\right\rangle_{\Omega}-\left\langle f,\xi\right\rangle_{\Omega}=0$ for any pair $(\phi,\xi)$ such that $(\eta_{k+1}+\varepsilon\phi,v_{k+1}+\varepsilon\xi/\tau)$ is also admissible for our minimization888Note that we scale $\phi$ and $\xi$ differently in $\tau$, so that they both behave like a change of position. This is not required at this point, but convenient to do for the limit passage. for some $\varepsilon_{0}$ and any $\varepsilon\in[-\varepsilon_{0},\varepsilon_{0}]$, $\phi\in W^{1,2}(Q;\mathbb{R}^{n})$, $\xi\in W_{0}^{1,2}(\Omega;\mathbb{R}^{n})$ with $\phi|_{P}=0$ and $\operatorname{div}\xi=0$. Here we point out that, as we start with a configuration without contact, we know, due to the bounds in the energy estimate, that for $h$ small enough every minimizer will be contact-free. Thus, indeed, variations in all directions are possible. 
Next we replace all occurrences of the discrete approximations with the corresponding time-dependent functions, giving us $\displaystyle\left\langle DE_{h}(\bar{\eta}^{(\tau)}),\phi\right\rangle+\left\langle D_{2}R_{h}(\underline{\eta}^{(\tau)},\partial_{t}\hat{\eta}^{(\tau)}),\phi\right\rangle+\left\langle D_{2}A(\underline{\eta}^{(\tau)},\partial_{t}\hat{\eta}^{(\tau)}-v^{(\tau)}\circ\underline{\eta}^{(\tau)}),\phi-\xi\circ\underline{\eta}^{(\tau)}\right\rangle$ $\displaystyle\quad+\nu\left\langle\varepsilon v^{(\tau)},\varepsilon\xi\right\rangle_{\Omega}+h\left\langle\nabla^{k_{0}}v^{(\tau)},\nabla^{k_{0}}\xi\right\rangle_{\Omega}$ $\displaystyle\quad+\rho_{s}\left\langle\tfrac{\partial_{t}\hat{\eta}^{(\tau)}-\zeta}{h},\phi\right\rangle_{Q}+\rho_{f}\left\langle\tfrac{v^{(\tau)}\circ\Phi^{(\tau)}-w}{h},\xi\circ\Phi^{(\tau)}\right\rangle_{\Omega}+\left\langle f,\xi\right\rangle_{\Omega}=0$ for almost all times $t$. Note in particular that there is now no longer any direct occurrence of $\tau$ in this equation. Thus one can form the integral over $[0,h]$ and pass to the limit $\tau\to 0$. Due to the added regularizing terms this is straightforward. Indeed, the highest-order terms (added through the regularization) are linear. For all other terms (except the $\frac{1}{\mathrm{det}(\nabla\eta)^{a}}$-term, for which we rely on uniform continuity) we exploit compact embeddings and Nemytskii continuity (see e.g.
[30]) and thus obtain: $\displaystyle\int_{0}^{h}\left\langle DE_{h}(\eta),\phi\right\rangle+\left\langle D_{2}R_{h}(\eta,\partial_{t}\eta),\phi\right\rangle+\left\langle D_{2}A(\eta,\partial_{t}\eta-v\circ\eta),\phi-\xi\circ\eta\right\rangle$ $\displaystyle\quad+\nu\left\langle\varepsilon v,\varepsilon\xi\right\rangle_{\Omega}+h\left\langle\nabla^{k_{0}}v,\nabla^{k_{0}}\xi\right\rangle_{\Omega}+\rho_{s}\left\langle\tfrac{\partial_{t}\eta-\zeta}{h},\phi\right\rangle_{Q}+\rho_{f}\left\langle\tfrac{v\circ\Phi-w}{h},\xi\circ\Phi\right\rangle_{\Omega}+\left\langle f,\xi\right\rangle_{\Omega}dt=0.$ Finally we need to show that the new solution also satisfies the energy inequality. There is a way to do so directly from the approximation using De Giorgi's methods. (Note that our previous energy estimate was off by a factor 2 in all dissipation terms, so we cannot simply apply lower semicontinuity to it.) However, in the situation we are studying, due to the regularizer, we can directly show that the energy inequality holds true for all solutions. For this, we test the equation directly with the pair $(\partial_{t}\eta,v)$, which we are allowed to do thanks to our regularizing terms, and we obtain $\displaystyle E_{h}(\eta(h))+\int_{0}^{h}\left\langle D_{2}R_{h}(\eta,\partial_{t}\eta),\partial_{t}\eta\right\rangle+2A(\eta,\partial_{t}\eta-v\circ\eta)+\nu\left\|\varepsilon v\right\|_{{\Omega}}^{2}+h\left\|\nabla^{k_{0}}v\right\|_{{\Omega}}^{2}dt$ $\displaystyle\quad+\fint_{0}^{h}\rho_{s}\left\langle\partial_{t}\eta-\zeta,\partial_{t}\eta\right\rangle_{Q}+\rho_{f}\left\langle v\circ\Phi-w,v\circ\Phi\right\rangle_{\Omega}dt+\int_{0}^{h}\left\langle f,v\right\rangle_{\Omega}dt=E_{h}(\eta(0)).$ Now we apply a consequence of Young's inequality, $\left\langle a-b,a\right\rangle=\left|{a}\right|^{2}-\left\langle b,a\right\rangle\geq\frac{1}{2}\left|{a}\right|^{2}-\frac{1}{2}\left|{b}\right|^{2}$, to the two inertial terms.
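Spelled out for the two inertial terms, this yields $\displaystyle\rho_{s}\left\langle\partial_{t}\eta-\zeta,\partial_{t}\eta\right\rangle_{Q}\geq\frac{\rho_{s}}{2}\left\|\partial_{t}\eta\right\|_{{Q}}^{2}-\frac{\rho_{s}}{2}\left\|\zeta\right\|_{{Q}}^{2},\qquad\rho_{f}\left\langle v\circ\Phi-w,v\circ\Phi\right\rangle_{\Omega}\geq\frac{\rho_{f}}{2}\left\|v\circ\Phi\right\|_{{\Omega}}^{2}-\frac{\rho_{f}}{2}\left\|w\right\|_{{\Omega}}^{2},$ which is what produces the quadratic terms on both sides of the resulting inequality.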
Together with the conservation of volume implying $\left\|v\circ\Phi\right\|_{{\Omega}}^{2}=\left\|v\right\|_{{\Omega}}^{2}$, this results in the energy inequality: $\displaystyle E_{h}(\eta(h))+\int_{0}^{h}2R_{h}(\eta,\partial_{t}\eta)+2A(\eta,\partial_{t}\eta-v\circ\eta)+\nu\left\|\varepsilon v\right\|_{{\Omega}}^{2}+h\left\|\nabla^{k_{0}}v\right\|_{{\Omega}}^{2}dt$ $\displaystyle+\fint_{0}^{h}\frac{\rho_{s}}{2}\left\|\partial_{t}\eta\right\|_{{Q}}^{2}+\frac{\rho_{f}}{2}\left\|v\right\|_{{\Omega}}^{2}dt+\int_{0}^{h}\left\langle f,v\right\rangle_{\Omega}dt\leq E_{h}(\eta(0))+\fint_{0}^{h}\frac{\rho_{s}}{2}\left\|\zeta\right\|_{{Q}}^{2}+\frac{\rho_{f}}{2}\left\|w\right\|_{{\Omega}}^{2}dt.$ ### 4.2. Convergence to the full problem Taking a look at the solutions constructed in the previous section, we can use the energy inequality to see that a solution on the interval $[0,h]$ can be turned into admissible initial and right-hand side data, with $\eta(h)$ as the new $\eta_{0}$, $\partial_{t}\eta$ as $\zeta$, $v(t)\circ\Phi(t)\circ\Phi(h)^{-1}$ as $w$ (to correct for the flow) and so on. So we can apply the result again to construct a solution on the interval $[h,2h]$ and so on. We do so and combine these solutions into a single pair of functions $\eta^{(h)}:[0,T]\times Q\to\Omega$ and $v^{(h)}:[0,T]\times\Omega\to\mathbb{R}^{n}$.
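Concretely, writing $(\eta^{l},v^{l})$ for the solution constructed on the $l$-th interval (a notation used only in this remark), the concatenation reads $\displaystyle\eta^{(h)}(t):=\eta^{l}(t-lh),\qquad v^{(h)}(t):=v^{l}(t-lh)\qquad\text{for }t\in[lh,(l+1)h],$ with the data for step $l$ taken from the endpoint values of step $l-1$ as described above.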
These then fulfill the full time-delayed equation: (4.1) $\displaystyle\begin{aligned} &\int_{0}^{T}\rho_{s}\left\langle\tfrac{\partial_{t}\eta^{(h)}(t)-\partial_{t}\eta^{(h)}(t-h)}{h},\phi\right\rangle_{Q}+\rho_{f}\left\langle\tfrac{v^{(h)}(t)-v^{(h)}(t-h)\circ\Phi_{-h}^{(h)}(t)}{h},\xi\right\rangle_{\Omega}\,dt\\\ &\quad+\int_{0}^{T}\left\langle DE_{h}(\eta^{(h)}),\phi\right\rangle_{Q}+\left\langle D_{2}A(\eta^{(h)},\partial_{t}\eta^{(h)}-v^{(h)}\circ\eta^{(h)}),\phi-\xi\circ\eta^{(h)}\right\rangle_{Q}\\\ &\quad+\int_{0}^{T}\left\langle D_{2}R_{h}(\eta^{(h)},\partial_{t}\eta^{(h)}),\phi\right\rangle_{Q}+\nu\left\langle\varepsilon v^{(h)},\varepsilon\xi\right\rangle_{\Omega}+h\left\langle\nabla^{k_{0}}v^{(h)},\nabla^{k_{0}}\xi\right\rangle\,dt=\int_{0}^{T}\left\langle f,\xi\right\rangle_{\Omega}dt\end{aligned}$ Special care needs to be taken in constructing all the fluid-related terms along the way. For the flow map we take $\Phi_{s}^{(h)}(t,.)$ to be the map that sends a fluid particle's position at time $t$ to its position at time $t+s$. Such a map is constructed by composing the flow maps on each of the subintervals, by first using their inverses to get back to a multiple of $h$ and then moving forward again. In other words, if $\Phi^{0},\Phi^{1},\dots$ denote the flow maps on the respective intervals $[0,h],[h,2h],\dots$, then we first define for $t\in[lh,(l+1)h]$ $\displaystyle\Phi_{t}^{(h)}(0):=\Phi^{l}(t-lh)\circ\Phi^{l-1}(h)\circ\dots\circ\Phi^{0}(h)$ and then for arbitrary $t,t+s\in[0,T]$, using the invertibility of the flow maps, $\displaystyle\Phi^{(h)}_{s}(t):=\Phi^{(h)}_{t+s}(0)\circ\Phi^{(h)}_{t}(0)^{-1}.$ In practice, of course, we are mostly interested in the case when $t$ and $t+s$ are at most a distance $h$ apart, and most of the back and forth in this definition cancels.
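To see the cancellation concretely: if $t$ and $t+s$ lie in the same subinterval $[lh,(l+1)h]$, then all factors up to index $l-1$ cancel in pairs, leaving $\displaystyle\Phi^{(h)}_{s}(t)=\Phi^{(h)}_{t+s}(0)\circ\Phi^{(h)}_{t}(0)^{-1}=\Phi^{l}(t+s-lh)\circ\Phi^{l}(t-lh)^{-1},$ so that only the flow on the current subinterval enters.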
In particular these maps are flow maps for $v^{(h)}$, so we have $\displaystyle\partial_{s}\Phi_{s}^{(h)}(t,y)=v^{(h)}(t+s,\Phi_{s}^{(h)}(t,y))\text{ for all }t,t+s\in[0,T],y\in\Omega$ as well as $\Phi_{0}^{(h)}(t,.)=\textrm{id}$ and $\det\nabla\Phi_{s}^{(h)}(t,.)=1$. Additionally, a simple iteration of the energy inequality yields the following energy inequality for the problem on $[0,T]$: $\displaystyle E_{h}(\eta^{(h)}(t_{0}))+\int_{0}^{t_{0}}2R_{h}(\eta^{(h)},\partial_{t}\eta^{(h)})+2A(\eta^{(h)},\partial_{t}\eta^{(h)}-v^{(h)}\circ\eta^{(h)})+\nu\left\|\varepsilon v^{(h)}\right\|_{{\Omega}}^{2}+h\left\|\nabla^{k_{0}}v^{(h)}\right\|_{{\Omega}}^{2}dt$ $\displaystyle\quad+\fint^{t_{0}}_{t_{0}-h}\frac{\rho_{s}}{2}\left\|\partial_{t}\eta^{(h)}\right\|_{{Q}}^{2}+\frac{\rho_{f}}{2}\left\|v^{(h)}\right\|_{{\Omega}}^{2}dt+\int_{0}^{t_{0}}\left\langle f,v^{(h)}\right\rangle_{\Omega}dt\leq E_{h}(\eta_{0})+\frac{\rho_{s}}{2}\left\|b\right\|_{{Q}}^{2}+\frac{\rho_{f}}{2}\left\|v_{0}\right\|_{{\Omega}}^{2}.$ Similarly to the estimate for the discrete approximation, this again gives us a uniform bound on $\left\|\eta_{0}-\eta^{(h)}(t_{0})\right\|_{{Q}}$ in terms of $t_{0}$, which we can use to initially choose $T$ small enough to avoid collisions. As a consequence, we conclude that there is a constant $C$ independent of $h$ such that $\displaystyle\sup_{t\in[0,T]}\Big{(}E_{h}(\eta^{(h)}(t))+\left\|\eta^{(h)}(t)\right\|_{{W^{2,q}(Q)}}+h^{a_{0}}\left\|\eta^{(h)}(t)\right\|_{{W^{k_{0},2}(Q)}}\Big{)}\leq C,$ $\displaystyle\int_{0}^{T}\Big{(}\left\|\partial_{t}\eta^{(h)}(t)\right\|_{{W^{1,2}(Q)}}^{2}+h\left\|\partial_{t}\eta^{(h)}(t)\right\|_{{W^{k_{0},2}(Q)}}^{2}+\left\|v^{(h)}\right\|_{{W^{1,2}(\Omega)}}^{2}+h\left\|v^{(h)}\right\|_{{W^{k_{0},2}(\Omega)}}^{2}\Big{)}dt\leq C,$ $\displaystyle\int_{0}^{T}A(\eta^{(h)},v^{(h)}\circ\eta^{(h)}-\partial_{t}\eta^{(h)})dt\leq C.$ These can again be used to pick weak*-converging subsequences and a limit pair $(\eta,v)$.
Our goal is again to prove convergence of the equation, and these weak convergences are indeed enough to do so for all the terms which are unchanged from before. However, this time we additionally have to deal with the two $h$-dependent inertial terms, of which in particular the fluid term requires some attention, as weak convergence is not enough to pass to the limit. Interestingly, in this case the correct quantities to consider are time-averages. Consider first the solid. We can define the rolling average of the momentum as $\displaystyle m^{(h)}(t,x)=\rho_{s}\fint_{t-h}^{t}\partial_{t}\eta^{(h)}(s,x)ds$ and note that its time derivative $\displaystyle\partial_{t}m^{(h)}(t,x)=\rho_{s}\frac{\partial_{t}\eta^{(h)}(t)-\partial_{t}\eta^{(h)}(t-h)}{h}$ is something already occurring in the equation and over which we thus already have some weak control. The same is true for the fluid, where we however have to be a bit more careful: instead of the physically correct quantity $\fint\rho_{f}v\circ\Phi$, we have to consider a similarly “straightened” version $\displaystyle m^{(h)}(t,x)=\rho_{f}\fint_{t-h}^{t}v^{(h)}(s,x)ds.$ For these averaged momenta, since we have $L^{2}(W^{1,2})$-bounds from the energy estimate and bounds on their time-derivatives in negative spaces from the equation, we can apply a version of the Aubin-Lions lemma (see e.g. [30, Sec. 7.3]) to obtain their strong $L^{2}$-convergence (in space-time).
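The version of the lemma we have in mind can be stated as follows (the particular choice of spaces here is ours, one admissible option given the bounds at hand): if $\\{m^{(h)}\\}$ is bounded in $L^{2}(0,T;W^{1,2}(\Omega))$ and $\\{\partial_{t}m^{(h)}\\}$ is bounded in $L^{2}(0,T;W^{-k_{0},2}(\Omega))$, then, since the embedding $W^{1,2}(\Omega)\hookrightarrow L^{2}(\Omega)$ is compact and $L^{2}(\Omega)\hookrightarrow W^{-k_{0},2}(\Omega)$ is continuous, the family $\\{m^{(h)}\\}$ is precompact in $L^{2}(0,T;L^{2}(\Omega))$.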
In particular, for the fluid this implies $\displaystyle\int_{0}^{T}\left\langle\frac{v^{(h)}(t)-v^{(h)}(t-h)\circ\Phi_{-h}^{(h)}(t)}{h},\xi(t)\right\rangle dt=-\int_{0}^{T}\left\langle v^{(h)}(t),\frac{\xi(t+h)\circ\Phi_{h}^{(h)}(t)-\xi(t)}{h}\right\rangle dt$ $\displaystyle=-\int_{0}^{T}\left\langle v^{(h)}(t),\fint_{0}^{h}\partial_{s}\left[\xi(t+s)\circ\Phi_{s}^{(h)}(t)\right]ds\right\rangle dt$ $\displaystyle=-\int_{0}^{T}\left\langle v^{(h)}(t),\fint_{0}^{h}\left[\partial_{t}\xi(t+s)+v^{(h)}(t+s)\cdot\nabla\xi(t+s)\right]\circ\Phi_{s}^{(h)}(t)ds\right\rangle dt$ $\displaystyle\to-\int_{0}^{T}\left\langle v(t),\partial_{t}\xi+v\cdot\nabla\xi\right\rangle dt.$ Next note that a Minty-type argument (compare [6, Lem. 3.9]) allows us to deal with the second-order term in $DE(\eta^{(h)})$, while the convergence of the first-order terms results from compact embeddings and the boundedness of the determinant. In total, this implies the weak convergence of $DE(\eta^{(h)})$ to $DE(\eta)$. We then conclude that $\displaystyle\int_{0}^{T}\left\langle DE(\eta),\phi\right\rangle+\left\langle D_{2}R(\eta,\partial_{t}\eta),\phi\right\rangle+\left\langle D_{2}A(\eta,\partial_{t}\eta-v\circ\eta),\phi-\xi\circ\eta\right\rangle$ $\displaystyle\quad+\nu\left\langle\varepsilon v,\varepsilon\xi\right\rangle-\rho_{s}\left\langle\partial_{t}\eta,\partial_{t}\phi\right\rangle-\rho_{f}\left\langle v,\partial_{t}\xi+v\cdot\nabla\xi\right\rangle+\left\langle f,\xi\right\rangle dt=0$ for all $\phi\in C^{\infty}([0,T]\times Q;\mathbb{R}^{n})$ and $\xi\in C_{0}^{\infty}([0,T]\times\Omega;\mathbb{R}^{n})$ with $\phi|_{[0,T]\times P}=0$ and $\operatorname{div}\xi=0$, as desired. This initially holds only on a small interval on which collisions are avoided a priori, but by standard arguments we can extend this interval until we reach the original $T$ or an actual collision. Finally, we can spare some thoughts on the pressure, which can be recovered from the equation.
For this we need the so-called Bogovskiĭ operator, a bounded linear operator $\mathcal{B}:W^{k,p}(\Omega)\to W_{0}^{k+1,p}(\Omega;\mathbb{R}^{n})$ (defined for $\psi$ with vanishing mean) such that $\operatorname{div}\mathcal{B}\psi=\psi$. With this we can define the operator $\displaystyle P(\psi):=$ $\displaystyle\int_{0}^{T}\Big{(}\left\langle D_{2}A(\eta,\partial_{t}\eta-v\circ\eta),\mathcal{B}(\psi)\circ\eta\right\rangle-\nu\left\langle\varepsilon v,\varepsilon\mathcal{B}(\psi)\right\rangle$ $\displaystyle\quad+\rho_{f}\left\langle v,\partial_{t}\mathcal{B}(\psi)+v\cdot\nabla\mathcal{B}(\psi)\right\rangle-\left\langle f,\mathcal{B}\psi\right\rangle\Big{)}dt$ for which a short calculation reveals that it is a bounded distribution in space-time, which gives the pressure of the fluid. Indeed, adding $\nabla p(\xi):=-P(\operatorname{div}(\xi))$ to the equation allows us to test with test functions of non-zero divergence and shows that the solution (distributionally) satisfies the Navier-Stokes equations. ### Declarations This work has been supported by the Primus research programme PRIMUS/19/SCI/01, the University Centre UNCE/SCI/023 of Charles University as well as the grant GJ19-11707Y of the Czech national grant agency (GAČR). Further, S. S. wishes to thank the University of Vienna for their kind hospitality in winter 2020/21. There is no conflict of interest. ## References * Allaire [1990] Grégoire Allaire. Homogenization of the Navier-Stokes equations in open sets perforated with tiny holes. II. Noncritical sizes of the holes for a volume distribution and a surface distribution of holes. _Arch. Rational Mech. Anal._ , 113(3):261–298, 1990. doi: 10.1007/BF00375066. * Ambartsumyan et al. [2019] Ilona Ambartsumyan, Vincent J Ervin, Truong Nguyen, and Ivan Yotov. A nonlinear Stokes–Biot model for the interaction of a non-Newtonian fluid with poroelastic media. _ESAIM: Mathematical Modelling and Numerical Analysis_ , 53(6):1915–1955, 2019. * Antman [1998] Stuart S. Antman.
Physically unacceptable viscous stresses. _Zeitschrift für angewandte Mathematik und Physik_ , 49(6):980–988, 1998. * Ball [2002] John M Ball. Some open problems in elasticity. In _Geometry, mechanics, and dynamics_ , pages 3–59. Springer, 2002. * Ball and Mizel [1987] John M Ball and Victor J Mizel. One-dimensional variational problems whose minimizers do not satisfy the Euler-Lagrange equation. In _Analysis and thermomechanics_ , pages 285–348. Springer, 1987. * Benešová et al. [2020] Barbora Benešová, Malte Kampschulte, and Sebastian Schwarzacher. A variational approach to hyperbolic evolutions and fluid-structure interactions. _arXiv:2008.04796 [math]_ , August 2020. * Biot [1941] Maurice A Biot. General theory of three-dimensional consolidation. _Journal of applied physics_ , 12(2):155–164, 1941. * Bociu et al. [2020] Lorena Bociu, Sunčica Čanić, Boris Muha, and Justin T Webster. Multilayered poroelasticity interacting with Stokes flow. _arXiv preprint arXiv:2011.12602_ , 2020. * Breit et al. [2021] Dominic Breit, Malte Kampschulte, and Sebastian Schwarzacher. Compressible fluids interacting with 3D visco-elastic bulk solids. _arXiv:2108.03042 [math]_ , August 2021. * Brinkman [1949] Hendrik C Brinkman. A calculation of the viscous force exerted by a flowing fluid on a dense swarm of particles. _Flow, Turbulence and Combustion_ , 1(1):27–34, 1949. * Cesmelioglu [2017] Aycil Cesmelioglu. Analysis of the coupled Navier–Stokes/Biot problem. _Journal of Mathematical Analysis and Applications_ , 456(2):970–991, 2017. * Chapelle and Moireau [2014] Dominique Chapelle and Philippe Moireau. General coupling of porous flows and hyperelastic formulations—from thermodynamics principles to energy balance and compatible time schemes. _European Journal of Mechanics-B/Fluids_ , 46:82–96, 2014. * Ciarlet [2000] Philippe G. Ciarlet. _Mathematical elasticity. Vol. III_ , volume 29 of _Studies in Mathematics and its Applications_.
North-Holland Publishing Co., Amsterdam, 2000. ISBN 0-444-82891-5. Theory of shells. * Ciarlet and Nečas [1987] Philippe G. Ciarlet and Jindřich Nečas. Injectivity and self-contact in nonlinear elasticity. _Arch. Rat. Mech. Anal._ , 97(3):171–188, Sep 1987. doi: 10.1007/BF00250807. * Conca [1988] Carlos Conca. The Stokes sieve problem. _Communications in applied numerical methods_ , 4(1):113–121, 1988. * Coussy [2004] Olivier Coussy. _Poromechanics_. John Wiley & Sons, 2004. * Darcy [1856] Henry Darcy. _Les fontaines publiques de la ville de Dijon: exposition et application…_ Victor Dalmont, 1856. * De Boer [2006] Reint De Boer. _Trends in continuum mechanics of porous media_ , volume 18. Springer Science & Business Media, 2006. * De Giorgi [1993] Ennio De Giorgi. New problems on minimizing movements. _Ennio de Giorgi: Selected Papers_ , pages 699–713, 1993. * Dell’Isola et al. [2009] Francesco Dell’Isola, Angela Madeo, and Pierre Seppecher. Boundary conditions at fluid-permeable interfaces in porous media: A variational approach. _International Journal of Solids and Structures_ , 46(17):3150–3164, 2009. * Fried and Gurtin [2006] Eliot Fried and Morton E Gurtin. Tractions, balances, and boundary conditions for nonsimple materials with application to liquid flow at small-length scales. _Archive for Rational Mechanics and Analysis_ , 182(3):513–554, 2006. * Halphen and Son Nguyen [1975] Bernard Halphen and Quoc Son Nguyen. Sur les matériaux standard généralisés. _Journal de Mécanique_ , 14:39–63, 1975. * Healey and Krömer [2009] Timothy J. Healey and Stefan Krömer. Injective weak solutions in second-gradient nonlinear elasticity. _ESAIM: Control, Optimisation and Calculus of Variations_ , 15(4):863–871, 2009. * Hornung [1996] Ulrich Hornung. _Homogenization and porous media_ , volume 6. Springer Science & Business Media, 1996. * Kampschulte et al. [2020] Malte Kampschulte, Sebastian Schwarzacher, and Gianmarco Sperone.
Unrestricted deformations of thin elastic structures interacting with fluids. _in preparation_ , 2020. * Kružík and Roubíček [2019] Martin Kružík and Tomáš Roubíček. _Mathematical methods in continuum mechanics of solids_. Springer, 2019. * Ogden and Holzapfel [2006] Ray W Ogden and Gerhard A Holzapfel. _Mechanics of biological tissue_. Springer, 2006. * Podio-Guidugli and Vianello [2010] Paolo Podio-Guidugli and Maurizio Vianello. Hypertractions and hyperstresses convey the same mechanical information. _Continuum Mechanics and Thermodynamics_ , 22(3):163–176, 2010. * Rajagopal [2007] Kumbakonam R Rajagopal. On a hierarchy of approximate models for flows of incompressible fluids through porous solids. _Mathematical Models and Methods in Applied Sciences_ , 17(02):215–252, 2007. * Roubíček [2005] Tomáš Roubíček. _Nonlinear partial differential equations with applications_ , volume 153 of _International Series of Numerical Mathematics_. Birkhäuser Verlag, Basel, 2005. ISBN 978-3-7643-7293-4; 3-7643-7293-1. * Showalter [2005] Ralph E Showalter. Poroelastic filtration coupled to Stokes flow. In _Control theory of partial differential equations_ , pages 243–256. Chapman and Hall/CRC, 2005. * Šilhavý [1985] Miroslav Šilhavý. Phase transitions in non-simple bodies. _Archive for rational mechanics and analysis_ , 88(2):135–161, 1985. * Toupin [1962] R Toupin. Elastic materials with couple-stresses. _Archive for rational mechanics and analysis_ , 11(1):385–414, 1962. * Wilbrandt [2019] Ulrich Wilbrandt. _Stokes–Darcy Equations: Analytic and Numerical Analysis_. Springer, 2019. * Ziegler and Wehrli [1987a] H. Ziegler and C. Wehrli. On a principle of maximal rate of entropy production. _J. Non-Equilib. Thermodyn._ , 12(3):229–243, 1987a. doi: 10.1515/jnet.1987.12.3.229. * Ziegler [1963] Hans Ziegler. Some extremum principles in irreversible thermodynamics with application to continuum mechanics. In _Progress in Solid Mechanics, Vol. IV_ , pages 91–193.
North-Holland, Amsterdam, 1963. * Ziegler and Wehrli [1987b] Hans Ziegler and Christoph Wehrli. The derivation of constitutive relations from the free energy and the dissipation function. _Adv. Appl. Mech._ , 25:183–238, 1987b. doi: 10.1016/S0065-2156(08)70278-3.
# The Second Variation for Null-Torsion Holomorphic Curves in the 6-Sphere Jesse Madnick (December 2021) ###### Abstract In the round 6-sphere, null-torsion holomorphic curves are fundamental examples of minimal surfaces. This class of minimal surfaces is quite rich: By a theorem of Bryant, extended by Rowland, every closed Riemann surface may be conformally embedded in the round $6$-sphere as a null-torsion holomorphic curve. In this work, we study the second variation of area for compact null-torsion holomorphic curves $\Sigma$ of genus $g$ and area $4\pi d$, focusing on the spectrum of the Jacobi operator. We show that if $g\leq 6$, then the multiplicity of the lowest eigenvalue $\lambda_{1}=-2$ is equal to $4d$. Moreover, for any genus, we show that the nullity is at least $2d+2-2g$. These results are likely to have implications for the deformation theory of asymptotically conical associative $3$-folds in $\mathbb{R}^{7}$, as studied by Lotay. ## 1 Introduction ### 1.1 Background: Minimal Surfaces in Spheres Let $\Sigma^{2}$ denote a closed orientable surface. In a Riemannian manifold $(M,\langle\cdot,\cdot\rangle)$, an immersed surface $u\colon\Sigma^{2}\to M$ is called a minimal surface if every variation $u_{t}\colon\Sigma^{2}\to M$ of $u_{0}=u$ satisfies $\left.\frac{d}{dt}\right|_{t=0}\text{Area}(u_{t})=0$. That is, minimal surfaces are critical points of the area functional, but not necessarily global minimizers of it. The extent to which a minimal surface fails to be area-minimizing to second order can be measured by the second variation of area, which takes the form $\left.\frac{d^{2}}{dt^{2}}\right|_{t=0}\text{Area}(u_{t})=\int_{\Sigma}\left\langle\mathcal{L}\eta,\eta\right\rangle\\!,$ (1.1) where $\eta:=\left.\frac{d}{dt}\right|_{t=0}u_{t}$ is a normal variation vector field, and where $\mathcal{L}\colon\Gamma(N\Sigma)\to\Gamma(N\Sigma)$ is the Jacobi operator of the minimal surface $u$. 
We will recall the standard expression for $\mathcal{L}$ in $\S$4. In view of the second variation formula (1.1), it is of fundamental interest to understand the Jacobi operator of a minimal surface, which in turn motivates the study of its spectrum. Indeed, recalling that $\mathcal{L}$ is strongly elliptic [24, $\S$I.9], it may be diagonalized with real eigenvalues $\lambda_{1}<\lambda_{2}<\cdots<\lambda_{s}<0=\lambda_{s+1}<\lambda_{s+2}<\cdots\to\infty$ of finite multiplicities $m_{1},\,m_{2},\,\ldots,m_{s},m_{s+1},\ldots$. Certain well-known invariants of minimal surfaces may be phrased in terms of this spectrum. For example, the Morse index and nullity are, respectively, $\displaystyle\text{Ind}(u)$ $\displaystyle=m_{1}+\cdots+m_{s}$ $\displaystyle\text{Nullity}(u)$ $\displaystyle=m_{s+1}.$ A minimal surface is said to be stable if $\lambda_{1}\geq 0$ and unstable if $\lambda_{1}<0$. In general, computing the spectrum of $\mathcal{L}$ is extremely difficult. There seem to be very few examples of minimal surfaces whose Jacobi spectra are known explicitly. In view of this, geometers instead seek to estimate the eigenvalues $\lambda_{j}$ and (sums of) multiplicities $m_{j}$ in terms of more computable geometric and topological quantities. Still, obtaining such bounds is a non-trivial task. Even in the classical case of (non-compact) minimal surfaces in $\mathbb{R}^{3}$, several outstanding open problems remain: see, for example, the excellent survey [10]. Our focus will be on compact orientable minimal surfaces (without boundary) in round spheres $M=\mathbb{S}^{n}$ (of constant curvature $1$) with $n\geq 3$. Pioneering work in this subject was carried out in the late 1960’s by, for example, Calabi [7], Chern [9], Simons [33], and Lawson [25]. In particular, Simons showed [33, Lemma 5.1.4] that all compact minimal surfaces in $\mathbb{S}^{n}$ have $\lambda=-2$ as an eigenvalue of $\mathcal{L}$, and hence are unstable.
He also established the lower bounds $\displaystyle\text{Ind}(u)$ $\displaystyle\geq n-2$ $\displaystyle\text{Nullity}(u)$ $\displaystyle\geq 3(n-2).$ In both estimates, equality holds if and only if $u$ is the totally-geodesic $\mathbb{S}^{2}$. In the case $n=3$, Urbano [34] improved Simons’ bound, showing that non-totally-geodesic minimal surfaces satisfy $\text{Ind}(u)\geq 5$, with equality if and only if $u$ is the Clifford torus. This characterization of the Clifford torus was an important ingredient in Marques and Neves’ resolution of the Willmore conjecture [28]. In the case $n=4$, Micallef and Wolfson [29] proved that minimal surfaces in $\mathbb{S}^{4}$ of area $A$ satisfy $\text{Ind}(u)\geq\frac{1}{2}\left(\frac{A}{\pi}-\chi(\Sigma)\right)\\!,$ where $\chi(\Sigma)=2-2g$ is the Euler characteristic. Recently, motivated by potential applications to the generalized Willmore conjecture in $\mathbb{S}^{n}$, Kusner and Wang [23] proved that minimal surfaces of genus $g=1$ in $\mathbb{S}^{4}$ satisfy $\text{Ind}(u)\geq 6$, with equality if and only if $u$ is a Clifford torus in a totally-geodesic $\mathbb{S}^{3}$. In a different direction, if one restricts attention to the class of superminimal surfaces in $\mathbb{S}^{4}$, the beautiful paper of Montiel–Urbano [30] provides remarkably precise information. They show that superminimal surfaces in $\mathbb{S}^{4}$ have lowest eigenvalue $\lambda_{1}=-2$ and satisfy $\displaystyle\text{Ind}(u)$ $\displaystyle=m_{1}=\frac{A}{\pi}-\chi(\Sigma)$ $\displaystyle\text{Nullity}(u)=m_{2}$ $\displaystyle\geq\frac{A}{\pi}+\chi(\Sigma).$ (1.2) Moreover, if $g=0$ or $g=1$, then equality holds in the nullity estimate. In fact, $\text{Ind}(u)\geq 10$, with equality if and only if $u$ is a (twistor deformation of a) Veronese surface. The formulas (1.2) for superminimal surfaces in $\mathbb{S}^{4}$ were the primary inspiration for this work.
In even dimensions $n=2k$, Karpukhin [21] has recently shown that a linearly full minimal surface in $\mathbb{S}^{2k}$ of genus $g=0$ and area $A=4\pi d$ has index $\text{Ind}(u)\geq 2(k-1)(2d-[\sqrt{8d+1}]_{\text{odd}}+2),$ where $[x]_{\text{odd}}$ is the largest odd integer not exceeding $x$. ### 1.2 Background: Holomorphic Curves in the $6$-Sphere Among all spheres $\mathbb{S}^{n}$ with $n\geq 3$, the $6$-sphere is the only one that admits an almost-complex structure. In this work, we will equip $\mathbb{S}^{6}$ with its standard almost-complex structure $\widetilde{J}\colon T\mathbb{S}^{6}\to T\mathbb{S}^{6}$. This almost-complex structure is compatible with the round metric, and arises from viewing $\mathbb{S}^{6}\subset\mathbb{R}^{7}=\text{Im}(\mathbb{O})$ in the imaginary octonions, as we will recall in $\S$2.2. Having chosen $\widetilde{J}$, the $6$-sphere now admits a distinguished class of surfaces. That is, a holomorphic curve is a surface $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ whose tangent spaces are $\widetilde{J}$-invariant: $\widetilde{J}(T_{p}\Sigma)=T_{p}\Sigma,\ \ \forall p\in\Sigma.$ It is easy to show that holomorphic curves in $\mathbb{S}^{6}$ are (unstable) minimal surfaces. In a remarkable 1982 paper, Bryant [5] studied holomorphic curves in $\mathbb{S}^{6}$ by means of a “holomorphic Frenet frame,” which we discuss in $\S$3.2. Essentially, this amounts to a decomposition of the vector bundle of $(1,0)$-vectors along $u(\Sigma)$ into complex line subbundles $u^{*}(T^{1,0}\mathbb{S}^{6})\simeq L_{T}\oplus L_{N}\oplus L_{B}.$ (1.3) Crucially, each of the bundles $L_{T},L_{N},L_{B}$ carries a natural holomorphic structure, though the isomorphism (1.3) generally only holds in the smooth (not holomorphic) category. 
By analogy with the classical case of curves in $\mathbb{R}^{3}$, one can extract two basic invariants: a second- order invariant (“curvature”) that is essentially the second fundamental form of the immersion, and a third-order invariant (“torsion”) that is rather more subtle. Bryant encodes the torsion as a holomorphic section $\Phi_{\text{I\\!I\\!I}}\in H^{0}(L_{T}^{*}\otimes L_{N}^{*}\otimes L_{B}),$ and defines a holomorphic curve to be null-torsion if $\Phi_{\text{I\\!I\\!I}}\equiv 0$ on $\Sigma$. It is not hard to show that every holomorphic curve of genus $g=0$ is null-torsion. It turns out that the null-torsion condition is equivalent to the holomorphicity of the binormal Gauss map $b_{u}\colon\Sigma\to\mathbb{CP}^{6}$, the map sending a point $p\in\Sigma$ to its binormal real $2$-plane in $T_{p}\mathbb{S}^{6}\subset\mathbb{R}^{7}$ (viewed as a complex line in $\mathbb{C}^{7}$). From this fact, together with the Wirtinger Theorem, it follows that the area $A$ of a null-torsion holomorphic curve is quantized. That is, $A=4\pi d,$ where $d\in\mathbb{Z}^{+}$ is the degree of the binormal Gauss map. Aside from the totally-geodesic $2$-sphere (which has $d=1$), all null-torsion holomorphic curves have $d\geq 6$. The moduli space of genus zero holomorphic curves in $\mathbb{S}^{6}$ of a fixed degree $d\geq 6$ has been studied by Fernández [16]. In [5], Bryant derived a Weierstrass representation formula for null-torsion holomorphic curves. Using this formula, together with an algebro-geometric argument, he proved a striking result: every closed Riemann surface admits a conformal branched immersion into $\mathbb{S}^{6}$ as a null-torsion holomorphic curve. This was sharpened by Rowland [31] in his 1999 Ph.D. thesis, who improved “branched immersion” to “smooth embedding.” The upshot is that, while the generic holomorphic curve is not null-torsion, the class of null-torsion curves is nevertheless extremely rich. 
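For orientation, the quantization of area can be condensed into a single line (a sketch only: here $\omega_{\mathrm{FS}}$ denotes the Fubini–Study form on $\mathbb{CP}^{6}$, normalized so that a line has area $\pi$, and the overall factor $4$ comes out of Bryant’s structure equations, which we do not reproduce): $\displaystyle A=4\int_{\Sigma}b_{u}^{*}\omega_{\mathrm{FS}}=4\pi\deg(b_{u})=4\pi d,$ where the middle equality is degree theory for the generator of $H^{2}(\mathbb{CP}^{6};\mathbb{Z})$, and the positivity of $d$ reflects the Wirtinger inequality applied to the holomorphic map $b_{u}$.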
Since Bryant’s 1982 paper, there have been several interesting studies of holomorphic curves in the $6$-sphere. For example, Sekigawa [32] classified the constant-curvature examples, Ejiri [15] classified the $\text{U}(1)$-invariant examples, and Hashimoto [19] obtained beautiful explicit examples of one-parameter deformations. Bolton, Vrancken, and Woodward [4] studied holomorphic curves by using harmonic sequences, and showed that every holomorphic curve in $\mathbb{S}^{6}$ can only be full in a totally-geodesic $\mathbb{S}^{2}$, $\mathbb{S}^{5}$, or else the entire $\mathbb{S}^{6}$. This is by no means a complete list of references; we refer the interested reader to the books of Chen [8, $\S$19.1-19.2] and Joyce [20, $\S$12.2] for more. Finally, we note that the study of holomorphic curves in $\mathbb{S}^{6}$ forms part of the larger study of holomorphic curves in nearly-Kähler $6$-manifolds. For example, holomorphic curves in $\mathbb{CP}^{3}$ have been studied by Xu [35] and Aslan [2], and in $\mathbb{S}^{3}\times\mathbb{S}^{3}$ by Bolton, Dioos, and Vrancken [3]. In fact, holomorphic curves in nearly- Kähler $6$-manifolds are precisely the links of associative cones in conical $\text{G}_{2}$-manifolds, and thereby serve as models for conically singular associative $3$-folds. This relationship makes holomorphic curves objects of fundamental interest in $\text{G}_{2}$-geometry. ### 1.3 Main Results In this work, we consider the Jacobi spectra of null-torsion holomorphic curves in $\mathbb{S}^{6}$. Perhaps the most basic question is: What is the multiplicity $m_{1}$ and value $\lambda_{1}$ of the lowest eigenvalue of $\mathcal{L}$? In the early 1980’s, Ejiri [14] considered this question in the context of superminimal surfaces in $\mathbb{S}^{2n}$, showing that $\lambda_{1}=-2$. (Although his results are stated for minimal $2$-spheres in $\mathbb{S}^{2n}$, most of Ejiri’s arguments apply without change to the larger class of superminimal surfaces.) 
Furthermore, equipping the normal bundle with a certain holomorphic structure, which we call $\overline{\partial}^{\nabla}$, he showed that the $\lambda_{1}$-eigenspace of $\mathcal{L}$ may be identified with the space of holomorphic normal vector fields: $\\{\eta\in\Gamma(N\Sigma)\colon\mathcal{L}\eta=-2\eta\\}\cong\\{\text{solutions of }\overline{\partial}^{\nabla}\xi=0\\}.$ The Riemann-Roch Theorem then implies that $m_{1}\geq\frac{A}{\pi}+(n-3)\chi(\Sigma).$ Ejiri also observed that equality holds in the case of genus $g=0$, essentially by an application of Grothendieck’s classification of holomorphic vector bundles on $\mathbb{S}^{2}=\mathbb{CP}^{1}$. Now, since null-torsion holomorphic curves in $\mathbb{S}^{6}$ are, in particular, superminimal surfaces, Ejiri’s results imply that they satisfy $\lambda_{1}=-2$ and $m_{1}\geq\frac{A}{\pi}.$ (1.4) Our first result is that, in fact, equality holds for genus $g\leq 6$: ###### Theorem 1.1. Let $u\colon\Sigma\to\mathbb{S}^{6}$ be a null-torsion holomorphic curve of genus $g$ and area $A=4\pi d$. If $g\leq 6$ (or, more generally, if $g<\frac{1}{2}(d+2)$), then the first multiplicity $m_{1}$ of the Jacobi operator is: $m_{1}=\frac{A}{\pi}=4d.$ Where minimal surfaces of high genus ($g\geq 1$) and high codimension (at least $2$) in round spheres are concerned, the only explicit formulas for $m_{1}$ that the author knows are Montiel and Urbano’s result (1.2) and Theorem 1.1 above. Our argument makes crucial use of the particular geometry of null-torsion holomorphic curves. In outline, the idea is the following. We will equip the normal bundle with a second holomorphic structure, called $\overline{\partial}^{D}$, that arises naturally from the nearly-Kähler structure on $\mathbb{S}^{6}$. 
Letting $S$ denote the difference tensor $S\xi:=\overline{\partial}^{\nabla}\xi-\overline{\partial}^{D}\xi$, the Cauchy-Riemann system $\overline{\partial}^{\nabla}\xi=0$ is equivalent to $\overline{\partial}^{D}\xi=-S\xi.$ (1.5) It turns out that the system (1.5) decouples, yielding an easy upper bound for the dimension of the solution space, which proves the theorem. Our second result is a lower bound on the nullity, valid for all genera: ###### Theorem 1.2. Let $u\colon\Sigma\to\mathbb{S}^{6}$ be a null-torsion holomorphic curve of genus $g$ and area $A=4\pi d$. Then the nullity of its Jacobi operator satisfies $\mathrm{Nullity}(u)\geq 2d+\chi(\Sigma).$ (1.6) Here, our argument is not original. Indeed, we closely follow the calculations in Montiel and Urbano’s study [30] of superminimal surfaces in self-dual Einstein $4$-manifolds. The idea of the proof is to identify a certain subspace of $\text{Null}(u):=\\{\eta\in\Gamma(N\Sigma)\colon\mathcal{L}\eta=0\\}$ with the space of holomorphic sections of a certain line bundle whose dimension can be estimated (and for genus $g\leq 6$, computed explicitly) by Riemann-Roch. Note that, since we only consider a subspace of $\text{Null}(u)$, our bound (1.6) is almost certainly not sharp. On the other hand, it appears that most of the argument extends without change to the general case of superminimal surfaces in any even-dimensional sphere $\mathbb{S}^{2n}$, providing an avenue for further inquiry. ### 1.4 Open Questions 1. 1. Let $u\colon\Sigma^{2}\to\mathbb{S}^{2n}$ be a compact orientable superminimal surface of genus $g$. Is it always the case that Ejiri’s lower bound is satisfied: $m_{1}=\frac{A}{\pi}+(n-3)\chi(\Sigma)?$ Ejiri [14] has proven this for genus $g=0$, while Montiel and Urbano [30] have proven this for $n=2$. Our Theorem 1.1 establishes this in the special case where $n=3$, $g\leq 6$, and the superminimal surface is holomorphic. 2. 2. Holomorphic curves may be studied in any nearly-Kähler $6$-manifold.
What can be said about the Jacobi spectrum in that generality? 3. 3. In the $6$-sphere: Can one establish a lower bound on the second eigenvalue $\lambda_{2}$? As a first step, it would be instructive to understand the spectrum of the Boruvka sphere, the unique holomorphic curve of constant curvature $K=\frac{1}{6}$. We show in Proposition 5.8 that the Boruvka sphere satisfies $\lambda_{2}\geq-\frac{5}{3}$. Further, Karpukhin [21, Theorem 1.7] has estimated its Morse index as $m_{1}+\cdots+m_{s}=\text{Ind}(u)\geq 36$, and Ejiri’s result [14] gives $m_{1}=24$, implying $m_{2}+\cdots+m_{s}\geq 12$, so $\lambda_{2}<0$. ### 1.5 Organization In $\S$2, we recall basic facts and formulas regarding minimal surfaces in $\mathbb{S}^{6}$, holomorphic curves in $\mathbb{S}^{6}$, and holomorphic vector bundles over Riemann surfaces. This section is largely to establish conventions and experts may wish to skip it. In $\S$3.1 and $\S$3.2, we set up the moving frame for holomorphic curves in the $6$-sphere. Our discussion is essentially a summary of [5, $\S$4], though our notation is quite different. In $\S$3.3, we take a closer look at null- torsion holomorphic curves, culminating in Proposition 3.4, which counts the holomorphic sections of $L_{N}$ and $L_{B}^{*}$, and Proposition 3.6, which justifies our tacit parenthetical claim in Theorem 1.1 that $g\leq 6$ implies $g<\frac{1}{2}(d+2)$. Section 3.4 is at the heart of Theorem 1.1. The purpose of $\S$3.4 is to explain how the normal bundle of a null-torsion holomorphic curve may naturally be equipped with three different holomorphic structures, which we call $\overline{\partial}^{\text{SU}}$, $\overline{\partial}^{\nabla}$, and $\overline{\partial}^{D}$. 
The operator $\overline{\partial}^{\text{SU}}$ relates to the deformation theory of asymptotically conical associative $3$-folds in $\mathbb{R}^{7}$, as shown by Lotay [26], while $\overline{\partial}^{\nabla}$ relates to the $(-2)$\- and $0$-eigenspaces of the Jacobi operator $\mathcal{L}$. However, it is with respect to $\overline{\partial}^{D}$ that the normal bundle splits holomorphically, which aids in the decoupling of (1.5). In $\S$4, we begin our study of the Jacobi operator of null-torsion holomorphic curves. Sections 4.1 and 4.2 establish the lower bound (1.4), while $\S$4.3 proves Theorem 1.1 by analyzing (1.5). Finally, in $\S$5.2, we reduce Theorem 1.2 to a claim (Proposition 5.3) about the image of a certain linear Cauchy-Riemann type operator, and in $\S$5.3-$\S$5.4 we establish Proposition 5.3. Acknowledgements: This work benefited from clarifying conversations with Benjamin Aslan, Gavin Ball, Robert Bryant, Bang-Yen Chen, Mikhail Karpukhin, Hsueh-Yung Lin, Jason Lotay, and David Wen. I thank Gorapada Bera for his careful reading of an earlier version of this preprint, and thank Da Rong Cheng, Shubham Dwivedi, Spiro Karigiannis, and Chung-Jun Tsai for their interest and encouragement. This work was completed during the author’s postdoctoral fellowship at the National Center for Theoretical Sciences (NCTS) at National Taiwan University. I thank the Center for their support. ## 2 Preliminaries In this brief section, we recall basic facts about minimal surfaces in $\mathbb{S}^{6}$, holomorphic curves in $\mathbb{S}^{6}$, and holomorphic vector bundles. This section primarily serves to fix notation and conventions. ### 2.1 Minimal Surfaces in $\mathbb{S}^{6}$ Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be an immersed surface in the round $6$-sphere of constant curvature $1$. 
Let $\langle\cdot,\cdot\rangle$ denote the round metric on $\mathbb{S}^{6}$ and let $\overline{\nabla}\colon\Gamma(T\mathbb{S}^{6})\to\Omega^{1}(\mathbb{S}^{6})\otimes\Gamma(T\mathbb{S}^{6})$ denote the Levi-Civita connection. As usual, we split $u^{*}(T\mathbb{S}^{6})=T\Sigma\oplus N\Sigma$ into tangential and normal parts. For $X,Y\in\Gamma(T\Sigma)$ and $N\in\Gamma(N\Sigma$), we have $\displaystyle\overline{\nabla}_{X}Y$ $\displaystyle=\nabla^{\top}_{X}Y+\text{I\\!I}(X,Y)$ $\displaystyle\overline{\nabla}_{X}N$ $\displaystyle=W_{X}N+\nabla^{\perp}_{X}N$ where $\nabla^{\top}$ is the Levi-Civita connection on $\Sigma$, where $\nabla^{\perp}$ is the normal connection, where I​I is the second fundamental form, and where $W$ is the shape operator. Recall the Weingarten equation $\langle W_{X}N,Y\rangle=-\langle\text{I\\!I}(X,Y),N\rangle.$ The curvature tensors of $\overline{\nabla},\nabla^{\top},\nabla^{\perp}$ will be denoted $\overline{R}$, $R^{\top}$, $R^{\perp}$, respectively. We will often use the notation $\overline{R}_{XY}Z:=\overline{R}(X,Y,Z,\cdot)$, and similarly for $R^{\top}$ and $R^{\perp}$. Suppose now that $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ is a minimal surface. Let $(e_{1},\ldots,e_{6})$ be a local orthonormal frame with $e_{1},e_{2}\in T\Sigma$ and $e_{3},e_{4},e_{5},e_{6}\in N\Sigma$. We recall the Gauss equation $1=K+\|\text{I\\!I}(e_{1},e_{1})\|^{2}+\|\text{I\\!I}(e_{1},e_{2})\|^{2}$ (2.1) where $K$ is the Gauss curvature of $\Sigma$. We also recall the Ricci equation $\displaystyle\langle R^{\perp}_{12}e_{\alpha},e_{\beta}\rangle$ $\displaystyle=\left\langle W_{1}(e_{\beta}),W_{2}(e_{\alpha})\right\rangle-\left\langle W_{1}(e_{\alpha}),W_{2}(e_{\beta})\right\rangle$ where $3\leq\alpha,\beta\leq 6$, and we are using the shorthand $R^{\perp}_{12}:=R^{\perp}_{e_{1},e_{2}}$ and $W_{j}:=W_{e_{j}}$. 
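Since the Gauss equation (2.1) above is stated without derivation, it may help to record the one-line argument, using the general Gauss equation in the unit sphere together with minimality ($\operatorname{tr}\text{I\\!I}=0$, so $\text{I\\!I}(e_{2},e_{2})=-\text{I\\!I}(e_{1},e_{1})$):

```latex
K = 1 + \langle \mathrm{I\!I}(e_1,e_1),\, \mathrm{I\!I}(e_2,e_2)\rangle
      - \|\mathrm{I\!I}(e_1,e_2)\|^2
  = 1 - \|\mathrm{I\!I}(e_1,e_1)\|^2 - \|\mathrm{I\!I}(e_1,e_2)\|^2 ,
```

which rearranges to (2.1).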
Expressing the second fundamental form as $\text{I\\!I}(e_{i},e_{j})=h^{\alpha}_{ij}e_{\alpha}$, we have $\displaystyle W_{1}(e_{\alpha})$ $\displaystyle=-h^{\alpha}_{11}e_{1}-h^{\alpha}_{12}e_{2}$ $\displaystyle W_{2}(e_{\alpha})$ $\displaystyle=-h^{\alpha}_{12}e_{1}+h^{\alpha}_{11}e_{2},$ so that the Ricci equation reads $\langle R^{\perp}_{12}e_{\alpha},e_{\beta}\rangle=2\left(h^{\beta}_{11}h^{\alpha}_{12}-h^{\alpha}_{11}h^{\beta}_{12}\right)\\!.$ (2.2) #### 2.1.1 First and Second Normal Bundles Since $u$ is a minimal surface, its second fundamental form $\text{I\\!I}_{p}\colon\text{Sym}^{2}(T_{p}\Sigma)\to N_{p}\Sigma$ at $p\in\Sigma$ is determined by $\text{I\\!I}_{p}(e_{1},e_{1})$ and $\text{I\\!I}_{p}(e_{1},e_{2})$. Therefore, the image of $\text{I\\!I}_{p}$, called the first normal space $\displaystyle\left.E_{N}\right|_{p}$ $\displaystyle:=\left\\{\text{I\\!I}_{p}(X,Y)\in N_{p}\Sigma\colon X,Y\in T_{p}\Sigma\right\\}=\text{span}(\text{I\\!I}_{p}(e_{1},e_{1}),\text{I\\!I}_{p}(e_{1},e_{2})),$ is a vector space of dimension at most $2$. Letting $\displaystyle\Sigma^{\circ}:=\left\\{p\in\Sigma\colon\dim(E_{N}|_{p})=2\right\\}$ we note that $\Sigma^{\circ}\subset\Sigma$ is an open set, and that $E_{N}:=\bigcup_{p\in\Sigma^{\circ}}E_{N}|_{p}\to\Sigma^{\circ}$ is a rank $2$ vector bundle, called the first normal bundle. For $p\in\Sigma^{\circ}$, let $E_{B}|_{p}$ denote the second normal space, i.e., the orthogonal complement of $E_{N}|_{p}\subset N_{p}\Sigma$, so that there is an orthogonal splitting $N_{p}\Sigma=E_{N}|_{p}\oplus E_{B}|_{p}.$ The rank $2$ vector bundle $E_{B}:=\bigcup_{p\in\Sigma^{\circ}}E_{B}|_{p}\to\Sigma^{\circ}$ is called the second normal bundle. For a normal vector $\eta\in N\Sigma$, we write $\eta=\eta^{N}+\eta^{B}$ (2.3) for its decomposition into first normal and second normal components. 
The third fundamental form $\text{I\\!I\\!I}\colon\text{Sym}^{3}(T\Sigma)\to E_{B}$ is defined by $\text{I\\!I\\!I}(X,Y,Z):=[\nabla^{\perp}_{X}(\text{I\\!I}(Y,Z))]^{B}$. It is a standard fact that I​I​I is, in fact, symmetric in its arguments. ### 2.2 Holomorphic Curves in $\mathbb{S}^{6}$ Thus far, we have been regarding the round $\mathbb{S}^{6}$ simply as a Riemannian manifold. We now equip it with extra data, namely its standard (nearly-Kähler) $\text{SU}(3)$-structure. To begin, let us consider the imaginary octonions $\text{Im}(\mathbb{O})=\mathbb{R}^{7}$, equipped with the standard euclidean inner product $g_{0}$. The imaginary octonions admit a well-known cross product $\times\colon\text{Im}(\mathbb{O})\times\text{Im}(\mathbb{O})\to\text{Im}(\mathbb{O})$ via $x\times y:=\textstyle\frac{1}{2}(xy-yx).$ Using the metric $g_{0}$, the cross product can be recast as a $3$-form $\phi\in\Lambda^{3}(\mathbb{R}^{7})^{*}$ $\phi(x,y,z):=g_{0}(x\times y,z)$ called the associative $3$-form. The associative $3$-form is a ($\text{G}_{2}$-invariant) calibration on $\mathbb{R}^{7}$, and its calibrated $3$-folds are called “associative $3$-folds.” That is, an associative $3$-fold is an immersed submanifold $N^{3}\to\mathbb{R}^{7}$ that satisfies $\left.\phi\right|_{N}=\text{vol}_{N}$ where $\text{vol}_{N}$ is the volume form on $N^{3}$. The study of associative $3$-folds is of fundamental importance to $\text{G}_{2}$-geometry [20, $\S$12]. Returning to the round $6$-sphere, let us embed $\mathbb{S}^{6}\subset\mathbb{R}^{7}=\text{Im}(\mathbb{O})$ in the standard way. For each $p\in\mathbb{S}^{6}$, we can use the cross product $\times$ to define a map $\displaystyle\widetilde{J}_{p}\colon T_{p}\mathbb{S}^{6}$ $\displaystyle\to T_{p}\mathbb{S}^{6}$ $\displaystyle\widetilde{J}_{p}(x)$ $\displaystyle=p\times x.$ The properties of $\times$ imply that each $(\widetilde{J}_{p})^{2}=-\text{Id}$. 
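As a numerical sanity check, the identity $(\widetilde{J}_{p})^{2}=-\text{Id}$ (together with the facts, used shortly, that $\widetilde{J}_{p}$ preserves $T_{p}\mathbb{S}^{6}$ and the metric) can be verified directly from the octonion structure constants. The particular Fano-plane labeling below is one standard convention, chosen for illustration; the text does not fix one:

```python
import numpy as np

# A standard Fano-plane convention for the imaginary octonions (Baez):
# e_i e_{i+1} = e_{i+3}, indices mod 7.  This labeling is our choice of
# convention, not fixed by the text.
TRIPLES = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7),
           (5, 6, 1), (6, 7, 2), (7, 1, 3)]

# Totally antisymmetric structure constants f_{abc}, equal to 1 on each
# triple, so that phi(x, y, z) = <x x y, z> is the associative 3-form.
f = np.zeros((7, 7, 7))
for (a, b, c) in TRIPLES:
    for (i, j, k) in [(a, b, c), (b, c, a), (c, a, b)]:
        f[i - 1, j - 1, k - 1] = 1.0
        f[j - 1, i - 1, k - 1] = -1.0

def cross(x, y):
    """The 7-dimensional cross product x x y = (1/2)(xy - yx) on Im(O)."""
    return np.einsum('abc,a,b->c', f, x, y)

# A random point p on S^6 and a tangent vector x in T_p S^6 (i.e. x ⟂ p).
rng = np.random.default_rng(0)
p = rng.standard_normal(7)
p /= np.linalg.norm(p)
x = rng.standard_normal(7)
x -= np.dot(x, p) * p

Jx = cross(p, x)   # the almost-complex structure: J_p(x) = p x x

print(np.allclose(cross(p, Jx), -x))                       # J_p^2 = -Id
print(np.isclose(np.dot(Jx, p), 0.0))                      # J_p(x) ⟂ p, i.e. tangent to S^6
print(np.isclose(np.linalg.norm(Jx), np.linalg.norm(x)))   # J_p is an isometry
```

All three checks print `True`; the first rests on the cross-product identity $p\times(p\times x)=\langle p,x\rangle p-\|p\|^{2}x$, which follows from alternativity of $\mathbb{O}$.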
The resulting bundle map $\widetilde{J}\colon T\mathbb{S}^{6}\to T\mathbb{S}^{6}$ is the standard ($\text{G}_{2}$-invariant) almost-complex structure on the $6$-sphere. One can check that each $\widetilde{J}_{p}\colon T_{p}\mathbb{S}^{6}\to T_{p}\mathbb{S}^{6}$ is an isometry, and that the bilinear form on $\mathbb{S}^{6}$ given by $\widetilde{\Omega}(x,y):=\langle\widetilde{J}x,y\rangle$ is skew-symmetric and non-degenerate. In other words, the triple $(\langle\cdot,\cdot\rangle,\widetilde{J},\widetilde{\Omega})$ is an almost-Hermitian structure (or $\mathrm{U}(3)$-structure) on $\mathbb{S}^{6}$. We emphasize that $\widetilde{J}$ is not integrable, and that $\widetilde{\Omega}$ is not closed. Now, letting $\partial_{r}$ denote the radial vector field on $\mathbb{R}^{7}$, one can show that the complex $3$-form $\Upsilon\in\Omega^{3}(\mathbb{S}^{6};\mathbb{C})$ given by $\displaystyle\Upsilon$ $\displaystyle:=\left.\left(\partial_{r}\lrcorner(\ast\phi)+i\phi\right)\right|_{\mathbb{S}^{6}}$ is a $(3,0)$-form on $\mathbb{S}^{6}$ that satisfies $\frac{i}{8}\Upsilon\wedge\overline{\Upsilon}=\text{vol}_{\mathbb{S}^{6}}.$ That is, the quadruple $(\langle\cdot,\cdot\rangle,\widetilde{J},\widetilde{\Omega},\Upsilon)$ is an $\mathrm{SU}(3)$-structure on $\mathbb{S}^{6}$. In fact, this $\text{SU}(3)$-structure satisfies the nearly-Kähler equations $d\widetilde{\Omega}=3\,\text{Im}(\Upsilon)$ and $d\,\text{Re}(\Upsilon)=2\,\widetilde{\Omega}\wedge\widetilde{\Omega}$. The round $6$-sphere with this $\text{SU}(3)$-structure is the simplest example of a strict nearly-Kähler $6$-manifold. Now, the $\text{SU}(3)$-structure gives rise to distinguished classes of submanifolds of the $6$-sphere. In particular, an immersed surface $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ is a holomorphic curve if $\widetilde{J}(T_{p}\Sigma)=T_{p}\Sigma,\ \ \ \forall p\in\Sigma.$ Holomorphic curves are, in fact, minimal surfaces.
One way to see this is to observe that holomorphic curves have extra symmetries in their second fundamental forms (see (3.10) in $\S$3.2.2), and these symmetries imply minimality. Another way uses the following fundamental fact: ###### Proposition 2.1. Let $\Sigma^{2}\subset\mathbb{S}^{6}$ be an immersed surface, and let $C(\Sigma)=\\{rx\in\mathbb{R}^{7}\colon r>0,x\in\Sigma\\}$ be its cone in $\mathbb{R}^{7}$. Then $\Sigma$ is a holomorphic curve if and only if $C(\Sigma)$ is an associative $3$-fold. So, since holomorphic curves are the links of associative cones, and associative cones are homologically volume-minimizing, it follows that holomorphic curves are minimal surfaces. ### 2.3 Holomorphic Bundles over Riemann Surfaces Let $E\to M$ be a complex vector bundle over a complex manifold $M$. It is well-known [22, $\S$1.3] that a holomorphic structure on $E$ is equivalent to a $\overline{\partial}$-operator, i.e., an operator $\overline{\partial}\colon\Gamma(E)\to\Omega^{0,1}(M)\otimes\Gamma(E)$ satisfying both the relevant Leibniz rule and $\overline{\partial}^{2}=0$. Given a complex vector bundle $E\to M$ equipped with both a connection $\nabla\colon\Gamma(E)\to\Omega^{1}(M;\mathbb{C})\otimes\Gamma(E)$ and a holomorphic structure $\overline{\partial}$, we say that $\nabla$ and $\overline{\partial}$ are compatible if $\nabla^{0,1}=\overline{\partial}$. Note that if $E\to\Sigma$ is a complex vector bundle over a Riemann surface, then every connection $\nabla$ on $E$ has the property that $\nabla^{0,1}$ satisfies the Leibniz rule and squares to zero (the latter automatically, since $\Omega^{0,2}(\Sigma)=0$ on a Riemann surface). Said another way: ###### Proposition 2.2. Let $E\to\Sigma$ be a complex vector bundle over a Riemann surface $\Sigma$. For each connection $\nabla$, there exists a unique holomorphic structure on $E$ compatible with $\nabla$ (viz., $\overline{\partial}=\nabla^{0,1}$). The holomorphic structure in Proposition 2.2 is often called the Koszul-Malgrange holomorphic structure for $\nabla$.
However, we shall occasionally abuse terminology and refer to $\nabla$ itself as the holomorphic structure. ## 3 The Geometry of Holomorphic Curves in the $6$-Sphere In $\S$3.1 and $\S$3.2, we set up the moving frame for holomorphic curves in the $6$-sphere. In $\S$3.3, we take a closer look at the class of null-torsion holomorphic curves, the primary results being Proposition 3.4 and Proposition 3.6. In $\S$3.4, we consider three different holomorphic structures on the normal bundle of a null-torsion holomorphic curve. ### 3.1 Moving Frames for $\mathbb{S}^{6}$ We begin by viewing $\mathbb{S}^{6}$ simply as an oriented Riemannian manifold (i.e., as a $6$-manifold with an $\text{SO}(6)$-structure). Let $F_{\text{SO}(6)}\to\mathbb{S}^{6}$ denote the oriented orthonormal coframe bundle of $\mathbb{S}^{6}$. Let $\omega\in\Omega^{1}(F_{\text{SO}(6)};\mathbb{R}^{6})$ denote the tautological $1$-form, and let $\psi\in\Omega^{1}(F_{\text{SO}(6)};\mathfrak{so}(6))$ denote the Levi-Civita connection, so that we have $d\omega=-\psi\wedge\omega.$ So, if $(e_{1},\ldots,e_{6})$ is a local oriented orthonormal frame on an open set $U\subset\mathbb{S}^{6}$, then $\overline{\nabla}e_{i}=-\psi_{ij}\otimes e_{j}$ (3.1) where we are conflating (and will continue to conflate) the $1$-forms $\psi_{ij}$ on $F_{\text{SO}(6)}$ with their pullbacks $\sigma^{*}(\psi_{ij})$ on $U$ via the local section $\sigma\colon U\to F_{\text{SO}(6)}$ corresponding to $(e_{1},\ldots,e_{6})$. #### 3.1.1 The $\text{SU}(3)$-Structure We now equip $\mathbb{S}^{6}$ with its standard $\text{SU}(3)$-structure $(\langle\cdot,\cdot\rangle,\widetilde{J},\widetilde{\Omega},\Upsilon)$, recalling $\S$2.2. Let $\mathscr{P}:=F_{\text{SU}(3)}\subset F_{\text{SO}(6)}$ denote the $\text{SU}(3)$-coframe bundle of $\mathbb{S}^{6}$. There is a natural identification $\mathscr{P}\cong\text{G}_{2}$, but we will not use this fact explicitly. 
Via the $\text{SU}(3)$-invariant splitting $\mathfrak{so}(6)=\mathfrak{su}(3)\oplus\mathbb{R}^{6}$ (orthogonal with respect to the Killing form), the restriction of the Levi-Civita connection to $\mathscr{P}$ decomposes as $\left.\psi\right|_{\mathscr{P}}=\widetilde{\gamma}+T(\omega),$ (3.2) where $\widetilde{\gamma}\in\Omega^{1}(\mathscr{P};\mathfrak{su}(3))$ is the natural $\text{SU}(3)$-connection and $T(\omega)\in\Omega^{1}(\mathscr{P};\mathbb{R}^{6})$ is the intrinsic torsion of the $\text{SU}(3)$-structure. Thus, on $\mathscr{P}$, we have the first structure equations $d\omega=-\widetilde{\gamma}\wedge\omega-T(\omega)\wedge\omega.$ By writing the subspaces $\mathfrak{su}(3)$ and $\mathbb{R}^{6}$ of $\mathfrak{so}(6)$ in terms of explicit $6\times 6$ matrices, we can express the connection matrix $\widetilde{\gamma}$ in the form $\widetilde{\gamma}=\left[\begin{array}[]{c c | c c | c c}0&-\beta_{11}&\alpha_{21}&-\beta_{21}&-\alpha_{31}&-\beta_{31}\\\ \beta_{11}&0&\beta_{21}&\alpha_{21}&\beta_{31}&-\alpha_{31}\\\ \hline\cr-\alpha_{21}&-\beta_{21}&0&-\beta_{22}&\alpha_{32}&-\beta_{32}\\\ \beta_{21}&-\alpha_{21}&\beta_{22}&0&\beta_{32}&\alpha_{32}\\\ \hline\cr\alpha_{31}&-\beta_{31}&-\alpha_{32}&-\beta_{32}&0&-\beta_{33}\\\ \beta_{31}&\alpha_{31}&\beta_{32}&-\alpha_{32}&\beta_{33}&0\end{array}\right]$ and calculate that the intrinsic torsion of $\mathbb{S}^{6}$ is $T(\omega)=\frac{1}{2}\left[\begin{array}[]{c c | c c | c c}0&0&\omega_{5}&-\omega_{6}&-\omega_{3}&\omega_{4}\\\ 0&0&-\omega_{6}&-\omega_{5}&\omega_{4}&\omega_{3}\\\ \hline\cr-\omega_{5}&\omega_{6}&0&0&\omega_{1}&-\omega_{2}\\\ \omega_{6}&\omega_{5}&0&0&-\omega_{2}&-\omega_{1}\\\ \hline\cr\omega_{3}&-\omega_{4}&-\omega_{1}&\omega_{2}&0&0\\\ -\omega_{4}&-\omega_{3}&\omega_{2}&\omega_{1}&0&0\end{array}\right]\\!.$ Let $\overline{D}\colon\Gamma(T\mathbb{S}^{6})\to\Omega^{1}(\mathbb{S}^{6})\otimes\Gamma(T\mathbb{S}^{6})$ denote the covariant derivative operator associated to the connection 
$\widetilde{\gamma}$. If $(e_{1},\ldots,e_{6})$ is a local $\text{SU}(3)$-frame on $U\subset\mathbb{S}^{6}$, we have $\overline{D}e_{i}=-\widetilde{\gamma}_{ij}\otimes e_{j}.$ (3.3) #### 3.1.2 The $\text{SU}(3)$-Structure in Complex Notation It will often be convenient to have complex versions of the above equations. To that end, let $\zeta\in\Omega^{1}(\mathscr{P};\mathbb{C}^{3})$ denote the complex tautological $1$-form, where: $\displaystyle\zeta_{1}$ $\displaystyle=\omega_{1}+i\omega_{2}$ $\displaystyle\zeta_{2}$ $\displaystyle=\omega_{3}+i\omega_{4}$ $\displaystyle\zeta_{3}$ $\displaystyle=\omega_{5}+i\omega_{6}$ Let $\gamma\in\Omega^{1}(\mathscr{P};\mathfrak{su}(3))$ denote the natural $\text{SU}(3)$-connection on $\mathbb{S}^{6}$, regarded now as a complex $3\times 3$ matrix (rather than a real $6\times 6$ matrix). In other words: $\left(\gamma_{ij}\right)=\begin{bmatrix}\gamma_{11}&\gamma_{12}&\gamma_{13}\\\ \gamma_{21}&\gamma_{22}&\gamma_{23}\\\ \gamma_{31}&\gamma_{32}&\gamma_{33}\end{bmatrix}=\begin{bmatrix}i\beta_{11}&\alpha_{21}+i\beta_{21}&-\alpha_{31}+i\beta_{31}\\\ -\alpha_{21}+i\beta_{21}&i\beta_{22}&\alpha_{32}+i\beta_{32}\\\ \alpha_{31}+i\beta_{31}&-\alpha_{32}+i\beta_{32}&i\beta_{33}\end{bmatrix}\\!.$ In this notation, the first and second structure equations of $\mathbb{S}^{6}$ are [5], [6] $\displaystyle d\zeta_{i}$ $\displaystyle=-\gamma_{i\ell}\wedge\zeta_{\ell}+\overline{\zeta}_{j}\wedge\overline{\zeta}_{k}$ (3.4) $\displaystyle d\gamma_{ij}$ $\displaystyle=\textstyle-\gamma_{ik}\wedge\gamma_{kj}+\frac{3}{4}\zeta_{i}\wedge\overline{\zeta}_{j}-\frac{1}{4}\delta_{ij}\,\zeta_{\ell}\wedge\overline{\zeta}_{\ell}$ (3.5) where $(i,j,k)$ in the first structure equation is an even permutation of $(1,2,3)$. Extend both $\overline{\nabla}$ and $\overline{D}$ by $\mathbb{C}$-linearity to operators $\Gamma(T\mathbb{S}^{6}\otimes_{\mathbb{R}}\mathbb{C})\to\Gamma(T\mathbb{S}^{6}\otimes_{\mathbb{R}}\mathbb{C})\otimes\Omega^{1}(\mathbb{S}^{6};\mathbb{C})$. 
In terms of a local $\text{SU}(3)$-frame $(e_{1},\ldots,e_{6})$ for $T\mathbb{S}^{6}$, we let $\displaystyle f_{1}$ $\displaystyle=\frac{1}{2}(e_{1}-ie_{2})$ $\displaystyle f_{2}$ $\displaystyle=\frac{1}{2}(e_{3}-ie_{4})$ $\displaystyle f_{3}$ $\displaystyle=\frac{1}{2}(e_{5}-ie_{6})$ $\displaystyle\overline{f}_{1}$ $\displaystyle=\frac{1}{2}(e_{1}+ie_{2})$ $\displaystyle\overline{f}_{2}$ $\displaystyle=\frac{1}{2}(e_{3}+ie_{4})$ $\displaystyle\overline{f}_{3}$ $\displaystyle=\frac{1}{2}(e_{5}+ie_{6}).$ Note that $(f_{1},f_{2},f_{3})$ is a local $\text{SU}(3)$-frame for $T^{1,0}\mathbb{S}^{6}$, while $(\overline{f}_{1},\overline{f}_{2},\overline{f}_{3})$ is a local $\text{SU}(3)$-frame for $T^{0,1}\mathbb{S}^{6}$. A calculation shows that $\displaystyle\overline{\nabla}f_{1}$ $\displaystyle=\textstyle\overline{D}f_{1}+\frac{1}{2}(\zeta_{2}\otimes\overline{f}_{3}-\zeta_{3}\otimes\overline{f}_{2})$ $\displaystyle\overline{\nabla}f_{2}$ $\displaystyle=\textstyle\overline{D}f_{2}+\frac{1}{2}(\zeta_{3}\otimes\overline{f}_{1}-\zeta_{1}\otimes\overline{f}_{3})$ (3.6) $\displaystyle\overline{\nabla}f_{3}$ $\displaystyle=\textstyle\overline{D}f_{3}+\frac{1}{2}(\zeta_{1}\otimes\overline{f}_{2}-\zeta_{2}\otimes\overline{f}_{1})$ and $\overline{D}f_{i}=\gamma_{ji}\otimes f_{j}$ (3.7) where we underscore that $(\gamma_{ji})=(\gamma_{ij})^{T}$. ### 3.2 Moving Frames for Holomorphic Curves in $\mathbb{S}^{6}$ We now turn our attention to holomorphic curves $u\colon\Sigma^{2}\to\mathbb{S}^{6}$, always assuming for simplicity that $u$ is an (unramified) immersion. In this section, we recall Bryant’s “holomorphic Frenet frame” for $u$, which will be central to our calculations. Our discussion is essentially a self-contained summary of [5, $\S$4], though we have changed notation in several places. Preparation of this section was aided by clarifying discussions in [18] and [27]. 
Before getting started, let us give a brief overview of the various complex vector bundles over $\Sigma$ that we will need. First, consider the complex rank $3$ bundle $u^{*}(T^{1,0}\mathbb{S}^{6})\to\Sigma$ of $(1,0)$-vectors along $u(\Sigma)$. Next, we let $L_{T}:=T^{1,0}\Sigma\subset u^{*}(T^{1,0}\mathbb{S}^{6})$ and define the complex rank $2$ bundle $Q_{NB}:=u^{*}(T^{1,0}\mathbb{S}^{6})/L_{T}.$ For $v\in u^{*}(T^{1,0}\mathbb{S}^{6})$, we let $(v)\in Q_{NB}$ denote its projection to the quotient. In the sequel, we will define a certain complex line subbundle $L_{N}\subset Q_{NB}$, from which we will set $L_{B}:=Q_{NB}/L_{N}$. As above, for $(v)\in Q_{NB}$, we let $(\\!(v)\\!)\in L_{B}$ denote its projection to $L_{B}$. In summary, we have the inclusions $L_{T}\subset u^{*}(T^{1,0}\mathbb{S}^{6})$ and $L_{N}\subset Q_{NB}$, together with the quotient projections $(\cdot)\colon u^{*}(T^{1,0}\mathbb{S}^{6})\to Q_{NB}$ and $(\\!(\cdot)\\!)\colon Q_{NB}\to L_{B}$. All complex vector bundles under consideration are assumed to be endowed with their obvious Hermitian metrics. As Hermitian vector bundles, we will have isomorphisms $\displaystyle u^{*}(T^{1,0}\mathbb{S}^{6})$ $\displaystyle\simeq L_{T}\oplus L_{N}\oplus L_{B}$ (3.8) $\displaystyle Q_{NB}$ $\displaystyle\simeq L_{N}\oplus L_{B}.$ (3.9) We will shortly equip all of these bundles with holomorphic structures, cautioning that the isomorphisms (3.8) and (3.9) generally will not hold in the holomorphic category. #### 3.2.1 Holomorphic Structures To begin, recall the (complexified) $\text{SU}(3)$-connection $\overline{D}$ on $T\mathbb{S}^{6}\otimes_{\mathbb{R}}\mathbb{C}$. By restriction and pullback, we get an induced connection (still denoted $\overline{D}$) on $u^{*}(T^{1,0}\mathbb{S}^{6})\to\Sigma$. We endow $u^{*}(T^{1,0}\mathbb{S}^{6})$ with the Koszul-Malgrange holomorphic structure for $\overline{D}$.
Since $u$ is an immersion, the complex line bundle $L_{T}:=T^{1,0}\Sigma\subset u^{*}(T^{1,0}\mathbb{S}^{6})$ is a holomorphic line subbundle, and we equip the quotient bundle $Q_{NB}:=u^{*}(T^{1,0}\mathbb{S}^{6})/L_{T}$ with the induced holomorphic structure. These structures in place, we now make two frame adaptations. #### 3.2.2 First Adaptation Let $(f_{1},f_{2},f_{3})$ be an $\text{SU}(3)$-frame for $u^{*}(T^{1,0}\mathbb{S}^{6})$. For our first adaptation, we consider those frames for which $f_{1}\in L_{T}=T^{1,0}\Sigma.$ The set of such frames comprises a $\text{U}(2)$-subbundle $\mathscr{F}_{1}\subset u^{*}\mathscr{P}$ over $\Sigma$, and we will refer to such $(f_{1},f_{2},f_{3})$ as being $\mathrm{U}(2)$-adapted. On $\mathscr{F}_{1}$, we have that $\zeta_{2}=\zeta_{3}=0$. Differentiating these equations and applying Cartan’s Lemma shows that there exist functions $\kappa,\mu\colon\mathscr{F}_{1}\to\mathbb{C}$ for which $\displaystyle\gamma_{21}$ $\displaystyle=\kappa\zeta_{1}$ $\displaystyle\gamma_{31}$ $\displaystyle=\mu\zeta_{1}.$ Writing $\kappa=\kappa_{1}+i\kappa_{2}$ and $\mu=\mu_{1}+i\mu_{2}$, where $\kappa_{1},\kappa_{2},\mu_{1},\mu_{2}\colon\mathscr{F}_{1}\to\mathbb{R}$, a calculation shows that the second fundamental form may be expressed as $\displaystyle\text{I\\!I}(e_{1},e_{1})$ $\displaystyle=\kappa_{1}e_{3}+\kappa_{2}e_{4}+\mu_{1}e_{5}-\mu_{2}e_{6}$ $\displaystyle\text{I\\!I}(e_{1},e_{2})$ $\displaystyle=-\kappa_{2}e_{3}+\kappa_{1}e_{4}+\mu_{2}e_{5}+\mu_{1}e_{6}$ (3.10) $\displaystyle\text{I\\!I}(e_{2},e_{2})$ $\displaystyle=-\text{I\\!I}(e_{1},e_{1}).$ In particular, we observe that holomorphic curves are minimal surfaces. Equation (3.10) also shows that the functions $\kappa,\mu$ are essentially equivalent to the second fundamental form. Using $\text{U}(2)$-adapted frames, we can understand the holomorphic structures on $L_{T}$ and $Q_{NB}$ more explicitly.
That is, the Chern connection of $L_{T}$ is given by $D^{L_{T}}f_{1}=\gamma_{11}\otimes f_{1}$ while the Chern connection of $Q_{NB}$ is given by $\displaystyle D^{Q_{NB}}(f_{2})$ $\displaystyle=\gamma_{22}\otimes(f_{2})+\gamma_{32}\otimes(f_{3})$ $\displaystyle D^{Q_{NB}}(f_{3})$ $\displaystyle=\gamma_{23}\otimes(f_{2})+\gamma_{33}\otimes(f_{3}).$ We now recast the second fundamental form as a holomorphic section. In [5, Lemma 4.3], it is shown that $\displaystyle\Phi_{\text{I\\!I}}$ $\displaystyle\in\Gamma(L_{T}^{*}\otimes L_{T}^{*}\otimes Q_{NB})$ $\displaystyle\Phi_{\text{I\\!I}}$ $\displaystyle=\kappa\zeta_{1}\otimes f^{\vee}_{1}\otimes(f_{2})+\mu\zeta_{1}\otimes f^{\vee}_{1}\otimes(f_{3})$ is a well-defined (frame-independent) holomorphic section, where $f^{\vee}_{1}\colon\mathscr{F}_{1}\to L_{T}^{*}$ is the dual of $f_{1}$. It is remarked in [5, Lemma 4.4] that $\Phi_{\text{I\\!I}}=0$ if and only if $u$ is the totally geodesic $\mathbb{S}^{2}$. On the other hand, if $u$ is not the totally geodesic $\mathbb{S}^{2}$, then the zeros of $\Phi_{\text{I\\!I}}$ are isolated, hence finite (since $\Sigma$ is compact). To streamline further discussion, we enact the following: Convention: From now on, we assume that $u$ is not totally-geodesic. It is convenient to regard $\Phi_{\text{I\\!I}}$ as a holomorphic section of $\text{Hom}(L_{T}\otimes L_{T};Q_{NB})$. Thus, there is a holomorphic line subbundle, called $L_{N}\subset Q_{NB}$, such that $\Phi_{\text{I\\!I}}\in H^{0}(\text{Hom}(L_{T}\otimes L_{T};L_{N})).$ To be more explicit, let $F$ denote the (effective) divisor of the holomorphic section $\Phi_{\text{I\\!I}}$, i.e., $F=\sum_{p\in\Sigma\colon\Phi_{\text{I\\!I}}(p)=0}\text{ord}_{p}(\Phi_{\text{I\\!I}})\cdot p,$ and let $\mathcal{O}_{F}\to\Sigma$ be the corresponding holomorphic line bundle. 
Viewing $\Phi_{\text{I\\!I}}\in H^{0}((L_{T}\otimes L_{T})^{*}\otimes L_{N})$, it follows that $L_{N}=\mathcal{O}_{F}\otimes L_{T}\otimes L_{T}.$ Finally, we let $L_{B}:=Q_{NB}/L_{N}$ and equip $L_{B}$ with the induced holomorphic structure. #### 3.2.3 Second Adaptation For our second adaptation, we consider the $\text{U}(2)$-adapted frames $(f_{1},f_{2},f_{3})$ for which $(f_{2})\in L_{N}.$ This adaptation defines a $T^{2}$-subbundle $\mathscr{F}_{2}\subset\mathscr{F}_{1}\subset u^{*}\mathscr{P}$ over $\Sigma$, and we refer to such frames as $T^{2}$-adapted. On $\mathscr{F}_{2}$, we have that $\gamma_{31}=0$, so that $\mu=0$. Differentiating $\gamma_{31}=0$ shows that $\gamma_{32}$ is a semibasic $(1,0)$-form, and hence $\gamma_{32}=\tau\zeta_{1}$ for some function $\tau\colon\mathscr{F}_{2}\to\mathbb{C}$. In summary, if $(f_{1},f_{2},f_{3})$ is a $T^{2}$-adapted frame, the equations (3.7) now read: $\overline{D}\begin{bmatrix}f_{1}\\\ f_{2}\\\ f_{3}\end{bmatrix}=\begin{pmatrix}\gamma_{11}&\kappa\zeta_{1}&0\\\ -\overline{\kappa}\overline{\zeta}_{1}&\gamma_{22}&\tau\zeta_{1}\\\ 0&-\overline{\tau}\overline{\zeta_{1}}&\gamma_{33}\end{pmatrix}\otimes\begin{bmatrix}f_{1}\\\ f_{2}\\\ f_{3}\end{bmatrix}\\!.$ (3.11) These are the holomorphic Frenet equations for the holomorphic curve $u\colon\Sigma^{2}\to\mathbb{S}^{6}$. Using $T^{2}$-adapted frames, we can understand the holomorphic structures on $L_{N}$ and $L_{B}$ more explicitly. That is, the Chern connection of $L_{N}$ is given by $D^{L_{N}}(f_{2})=\gamma_{22}\otimes(f_{2})$ and that of $L_{B}$ by $D^{L_{B}}(\\!(f_{3})\\!)=\gamma_{33}\otimes(\\!(f_{3})\\!).$ Now, by analogy with the familiar Frenet frame for curves in $\mathbb{R}^{3}$, one might be inclined to call $\tau$ the “holomorphic torsion” of $u\colon\Sigma^{2}\to\mathbb{S}^{6}$, but for the fact that $\tau$ depends on the choice of $T^{2}$-frame $(f_{1},f_{2},f_{3})$. However, the “null-torsion” condition $\tau=0$ turns out to be independent of frame. 
Indeed, Bryant shows [5, Lemma 4.5] that $\displaystyle\Phi_{\text{I\\!I\\!I}}$ $\displaystyle\in\Gamma(L_{T}^{*}\otimes L_{N}^{*}\otimes L_{B})$ $\displaystyle\Phi_{\text{I\\!I\\!I}}$ $\displaystyle=\tau\zeta_{1}\otimes(f^{\vee}_{2})\otimes(\\!(f_{3})\\!)$ is a well-defined (frame-independent) holomorphic section. The section $\Phi_{\text{I\\!I\\!I}}$ partitions the collection of (non-totally-geodesic) holomorphic curves into three classes: 1. 1. $\Phi_{\text{I\\!I\\!I}}=0$ identically. 2. 2. The zero set of $\Phi_{\text{I\\!I\\!I}}$ is finite and non-empty. 3. 3. $\Phi_{\text{I\\!I\\!I}}$ is nowhere-vanishing. The generic situation is (2), and relatively little is known about this case. As Bryant remarks [5, p. 225], the condition (3) is quite strong, implying a stringent relation on the line bundles $L_{T},L_{N},L_{B}$. Holomorphic curves of type (1) are said to be null-torsion, and are the focus of this work. Note that every holomorphic curve of genus zero is either null-torsion or the totally-geodesic $2$-sphere [5, Theorem 4.6]. It is shown in [4] that every null-torsion holomorphic curve is linearly full in $\mathbb{S}^{6}$ (i.e., is not contained in a totally-geodesic $\mathbb{S}^{5}$), implying that even the simplest null-torsion curves cannot be reduced to the study of minimal Legendrians in $\mathbb{S}^{5}$. It is a remarkable fact [5, Theorem 4.10] that every compact Riemann surface admits a conformal branched immersion into $\mathbb{S}^{6}$ as a null-torsion holomorphic curve. In his 1999 Ph.D. thesis [31], Rowland extended this result, showing that, in fact, every compact Riemann surface may be conformally embedded as a null-torsion holomorphic curve in $\mathbb{S}^{6}$. ###### Remark. As of this writing, it is an open question whether every open Riemann surface can be conformally embedded as a null-torsion holomorphic curve in $\mathbb{S}^{6}$. It seems to the author that the techniques of [1] may yield a positive solution to this problem.
### 3.3 Null-Torsion Holomorphic Curves We now examine null-torsion holomorphic curves more closely. In Proposition 3.1, we give two holomorphic interpretations of the null-torsion condition. Then, in Proposition 3.3, we will see how the null-torsion condition constrains the topologies of the line bundles $L_{T},L_{N},L_{B}$. Using this, together with Riemann-Roch, we will calculate (Proposition 3.4) the number of independent holomorphic sections of $L_{N}$ and $L_{B}^{*}$. #### 3.3.1 Holomorphic Interpretations of Null-Torsion Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a holomorphic curve, and regard $\mathbb{S}^{6}\subset\mathbb{R}^{7}$ as the usual unit sphere. Its binormal Gauss map is $\displaystyle b_{u}\colon\Sigma^{2}$ $\displaystyle\to\text{Gr}_{2}^{+}(\mathbb{R}^{7})$ $\displaystyle b_{u}(p)$ $\displaystyle=e_{5}\wedge e_{6}$ where $(e_{1},\ldots,e_{6})$ is a $T^{2}$-frame at $p\in\Sigma$. One can check that $b_{u}$ is well-defined, independent of frame. Now, consider the map $\displaystyle\text{Gr}_{2}^{+}(\mathbb{R}^{7})$ $\displaystyle\to\mathbb{P}(\mathbb{C}^{7})=\mathbb{CP}^{6}$ $\displaystyle x\wedge y$ $\displaystyle\mapsto\text{span}_{\mathbb{C}}(x-iy)$ where $\\{x,y\\}$ is orthonormal. This map is well-defined (independent of basis), injective, and its image is the complex hypersurface $\Lambda=\\{[z]\in\mathbb{CP}^{6}\colon z_{1}^{2}+\cdots+z_{7}^{2}=0\\}$. Consequently, we may identify $\text{Gr}_{2}^{+}(\mathbb{R}^{7})\simeq\Lambda\subset\mathbb{CP}^{6}$. ###### Proposition 3.1. Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a holomorphic curve. The following are equivalent: (i) $u$ is null-torsion. (ii) The binormal Gauss map $b_{u}\colon\Sigma^{2}\to\Lambda$ is holomorphic. (iii) There is a holomorphic splitting $Q_{NB}\cong L_{N}\oplus L_{B}$. ###### Proof. The equivalence of (i) and (ii) is [5, Theorem 4.7]. 
For the equivalence of (i) and (iii), we simply observe that $\tau$ is essentially the second fundamental form (in the sense of complex geometry, cf. [11, Chap. V: $\S$14] or [22, $\S$1.6]) of the holomorphic Hermitian subbundle $L_{N}\subset Q_{NB}$. ∎ ###### Corollary 3.2. If $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ is a null-torsion holomorphic curve, then its area is $A=4\pi d$, where $d$ is the degree of the binormal Gauss map. #### 3.3.2 Topological Consequences We now consider the topologies of the bundles $L_{T},L_{N},L_{B}$. Now, as Bryant points out [5, p. 224], the $\text{SU}(3)$-structure on $\mathbb{S}^{6}$ yields a holomorphic, metric isomorphism $\Lambda^{3}(T^{1,0}\mathbb{S}^{6})\cong\underline{\mathbb{C}}$ where $\underline{\mathbb{C}}$ is the trivial line bundle. Consequently, there is an isomorphism of holomorphic line bundles $L_{T}\otimes L_{N}\otimes L_{B}\cong\underline{\mathbb{C}}.$ In particular, it follows that: $c_{1}(L_{T})+c_{1}(L_{N})+c_{1}(L_{B})=0.$ (3.12) Here is an equivalent way to see this. Since $\gamma$ is valued in $\mathfrak{su}(3)$, we have $\gamma_{11}+\gamma_{22}+\gamma_{33}=0.$ Note that $\gamma_{11}$, $\gamma_{22}$, $\gamma_{33}$ are, respectively, the connection forms of the Chern connections on $L_{T},L_{N},L_{B}$. From the structure equations (3.5), we may compute that their curvature $(1,1)$-forms $F_{T},F_{N},F_{B}$ are given by $\displaystyle F_{T}=i\,d\gamma_{11}$ $\displaystyle=\left(1-2|\kappa|^{2}\right)\text{vol}=:K_{T}\,\text{vol}$ $\displaystyle F_{N}=i\,d\gamma_{22}$ $\displaystyle=\left(2|\kappa|^{2}-2|\tau|^{2}-\frac{1}{2}\right)\text{vol}=:K_{N}\,\text{vol}$ $\displaystyle F_{B}=i\,d\gamma_{33}$ $\displaystyle=\left(2|\tau|^{2}-\frac{1}{2}\right)\text{vol}=:K_{B}\,\text{vol}$ where $\text{vol}=\omega_{1}\wedge\omega_{2}$ is the volume form on $\Sigma$, and where $K_{T}$, $K_{N}$, $K_{B}$ are defined by these equations. Note that $K_{T}=K$ is simply the Gauss curvature of $\Sigma$. 
Thus, we have $F_{T}+F_{N}+F_{B}=0,$ and hence $\displaystyle\frac{1}{2\pi}\int_{\Sigma}K_{T}\,\text{vol}+\frac{1}{2\pi}\int_{\Sigma}K_{N}\,\text{vol}+\frac{1}{2\pi}\int_{\Sigma}K_{B}\,\text{vol}=0,$ which gives (3.12) by Chern-Weil theory. In the null-torsion case, the formulas for $K_{N}$ and $K_{B}$ simplify, yielding: ###### Proposition 3.3. If $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ is a null-torsion holomorphic curve of area $A=4\pi d$, then $\displaystyle c_{1}(L_{T})$ $\displaystyle=\chi(\Sigma)$ $\displaystyle c_{1}(L_{N})$ $\displaystyle=-\chi(\Sigma)+d$ $\displaystyle c_{1}(L_{B})$ $\displaystyle=-d.$ Moreover, counting with multiplicity, there are exactly $d-3\chi(\Sigma)$ points $p\in\Sigma$ at which $\Phi_{\mathrm{I\\!I}}(p)=0$. ###### Proof. It is a standard fact that $c_{1}(L_{T})=c_{1}(T^{1,0}\Sigma)=\chi(\Sigma)$. Since $u$ is null-torsion, we have $\tau=0$, so $K_{B}=-\frac{1}{2}$, so: $c_{1}(L_{B})=\frac{1}{2\pi}\int_{\Sigma}K_{B}\,\text{vol}=-\frac{A}{4\pi}.$ Note that this gives another proof of the fact that null-torsion holomorphic curves have area equal to $4\pi$ times a positive integer. From Corollary 3.2, we know that $A=4\pi d$, where $d$ is the degree of the binormal lift. So, we obtain: $c_{1}(L_{B})=-d.$ The last claim is simply that $\sum_{p\in\Sigma\colon\Phi_{\text{I\\!I}}(p)=0}\text{ord}_{p}(\Phi_{\text{I\\!I}})=\deg(F)=c_{1}(\mathcal{O}_{F})=c_{1}(L_{T}^{*}\otimes L_{T}^{*}\otimes L_{N})=d-3\chi(\Sigma).$ ∎ #### 3.3.3 Holomorphic Consequences We now invoke Riemann-Roch to understand the spaces of holomorphic sections of $L_{N}$ and $L_{B}^{*}$. ###### Proposition 3.4. Suppose $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ is a null-torsion holomorphic curve of area $A=4\pi d$. Then: (a) We have: $\displaystyle h^{0}(L_{N})$ $\displaystyle=d-\frac{1}{2}\chi(\Sigma)$ $\displaystyle h^{0}(L_{B}^{*})$ $\displaystyle=d+\frac{1}{2}\chi(\Sigma)+h^{0}(L_{B}\otimes K_{\Sigma}).$ (b) If $d>2g-2$, then $h^{0}(L_{B}\otimes K_{\Sigma})=0$. ###### Proof. 
(a) To begin, observe that $\deg(L_{N}^{*}\otimes K_{\Sigma})=-c_{1}(L_{N})-\chi(\Sigma)=-d<0.$ Since the line bundle $L_{N}^{*}\otimes K_{\Sigma}$ is negative, it has no non-trivial holomorphic sections. Therefore, by Riemann-Roch, Serre Duality, and Proposition 3.3, we obtain: $\displaystyle h^{0}(L_{N})$ $\displaystyle=h^{0}(L_{N}^{*}\otimes K_{\Sigma})+c_{1}(L_{N})+\frac{1}{2}\chi(\Sigma)$ $\displaystyle=0-\chi(\Sigma)+d+\frac{1}{2}\chi(\Sigma).$ Similarly, we have: $\displaystyle h^{0}(L_{B}^{*})$ $\displaystyle=h^{0}(L_{B}\otimes K_{\Sigma})+c_{1}(L_{B}^{*})+\frac{1}{2}\chi(\Sigma)$ $\displaystyle=d+\frac{1}{2}\chi(\Sigma)+h^{0}(L_{B}\otimes K_{\Sigma}).$ (b) Letting $\mathscr{L}:=L_{B}\otimes K_{\Sigma}$, we compute $\deg(\mathscr{L})=c_{1}(L_{B})+c_{1}(K_{\Sigma})=-d+(2g-2).$ Therefore, if $d>2g-2$, then $\deg(\mathscr{L})<0$, so $\mathscr{L}$ has no non-trivial holomorphic sections. ∎ In light of Proposition 3.4(b), it is of interest to know when the hypothesis “$d>2g-2$” might be automatically satisfied. To that end, we recall the following facts from complex algebraic geometry. See [17, $\S$2.3, page 253] for a proof. ###### Lemma 3.5. Let $\widetilde{u}\colon\Sigma^{2}\to\mathbb{CP}^{6}$ be a non-degenerate complex curve of genus $g$ and degree $d$. (a) We have $d\geq 6$. (b) If $d=6$, then $\widetilde{u}$ is the rational normal curve, which has genus $g=0$. (c) If $7\leq d\leq 11$, then $g\leq d-6$. ###### Proposition 3.6. Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a null-torsion holomorphic curve, where $\Sigma$ is a closed surface of genus $g$ and area $A=4\pi d$. If $g\leq 6$, then $d>2g-2$. ###### Proof. If $u$ is totally-geodesic, the claim is trivial, so assume otherwise. Then its binormal Gauss map $b_{u}\colon\Sigma^{2}\to\Lambda\subset\mathbb{CP}^{6}$ is non-degenerate, so $d\geq 6$. If $d=6$, then by Lemma 3.5(b), we have $g=0$, fulfilling the bound $d>2g-2$. Thus, we may assume that $d\geq 7$ for the remainder of this proof. 
If $g\leq 4$, then $2g-2\leq 6<7\leq d$, so the bound is fulfilled in this case. Suppose now that $g=5$. If we had $d\leq 2g-2=8$, then Lemma 3.5(c) would imply that $g\leq 2$, which is absurd. Analogous reasoning holds for $g=6$. ∎ ### 3.4 Structures on the Normal Bundle Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a holomorphic curve (which may or may not be null-torsion). With respect to the splitting $u^{*}(T\mathbb{S}^{6})=T\Sigma\oplus N\Sigma$, let $\nabla^{\top}$, $\nabla^{\perp}$ and $D^{\top}$, $D^{\perp}$ denote the tangential and normal connections for $\overline{\nabla}$ and $\overline{D}$. In this section, we equip the normal bundle $N\Sigma$ with various holomorphic structures and compare their properties. Since this is an important but perhaps technical point, we now provide a brief overview of this section. Thus far, we have been focused on the $\text{G}_{2}$-invariant almost-complex structure $\widetilde{J}$ on $\mathbb{S}^{6}$. By restriction, we get a complex structure, also called $\widetilde{J}$, on $N\Sigma$. Decomposing the complexification $N\Sigma\otimes_{\mathbb{R}}\mathbb{C}$ into $\widetilde{J}$-eigenbundles $N\Sigma\otimes_{\mathbb{R}}\mathbb{C}=\widetilde{N}^{1,0}\oplus\widetilde{N}^{0,1}$ we will ask how the $\mathbb{C}$-linear extensions of $\nabla^{\perp}$ and $D^{\perp}$ relate to this decomposition. The upshot is that the complex bundle $\widetilde{N}^{1,0}$ can always be endowed with a holomorphic structure $\overline{\partial}^{\text{SU}}$ compatible with $D^{\perp}$. As shown by Lotay [26], this holomorphic structure plays a key role in the deformation theory of associative $3$-folds in $\mathbb{R}^{7}$ asymptotic to the cone on $\Sigma$. However, for our study of the second variation of area, we will need to consider a different complex structure on $N\Sigma=E_{N}\oplus E_{B}$, which we call $\widehat{J}$. This alternate complex structure $\widehat{J}$ will agree with $\widetilde{J}$ on $E_{N}$, but differ on $E_{B}$. 
Again, we will decompose $N\Sigma\otimes_{\mathbb{R}}\mathbb{C}$ into $\widehat{J}$-eigenbundles $N\Sigma\otimes_{\mathbb{R}}\mathbb{C}=\widehat{N}^{1,0}\oplus\widehat{N}^{0,1}$ and consider how the $\mathbb{C}$-linear extensions of $\nabla^{\perp}$ and $D^{\perp}$ interact with this decomposition. The result is that in the null-torsion case, the complex bundle $\widehat{N}^{1,0}$ can be equipped with two different holomorphic structures: one called $\overline{\partial}^{\nabla}$ that is compatible with $\nabla^{\perp}$, and another called $\overline{\partial}^{D}$ that is compatible with $D^{\perp}$. Comparing these two structures on $\widehat{N}^{1,0}$ is the idea behind Theorem 1.1. #### 3.4.1 The Induced Complex Structure Let $u\colon\Sigma\to\mathbb{S}^{6}$ be a holomorphic curve. It is easy to check that the covariant derivative operators $\nabla^{\perp}$ and $D^{\perp}$ on the normal bundle $N\Sigma$ interact with the complex structure $\widetilde{J}$ as follows: ###### Proposition 3.7. The complex structure $\widetilde{J}$ is not $\nabla^{\perp}$-parallel, but is $D^{\perp}$-parallel (i.e., $D^{\perp}\widetilde{J}=0$). We now complexify $N\Sigma$ and extend $\widetilde{J}$ by $\mathbb{C}$-linearity. In the usual way, we have a decomposition of $N\Sigma\otimes_{\mathbb{R}}\mathbb{C}$ into $\widetilde{J}$-eigenbundles, say $N\Sigma\otimes_{\mathbb{R}}\mathbb{C}=\widetilde{N}^{1,0}\oplus\widetilde{N}^{0,1}$ where, for example, $\widetilde{N}^{1,0}=\\{\xi\in N\Sigma\otimes_{\mathbb{R}}\mathbb{C}\colon\widetilde{J}\xi=i\xi\\}=\textstyle\\{\frac{1}{2}(\eta-i\widetilde{J}\eta)\colon\eta\in N\Sigma\\}$. As complex vector bundles, there is an isomorphism $(N\Sigma,\widetilde{J})\simeq(\widetilde{N}^{1,0},i)$ via $\eta\mapsto\frac{1}{2}(\eta-i\widetilde{J}\eta)$ with inverse $\xi\mapsto\xi+\overline{\xi}$. 
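The fiberwise linear algebra underlying this eigenbundle decomposition is elementary and can be checked numerically. The following sketch is illustrative only and is not part of the argument: it models a single fiber of $N\Sigma$ as $\mathbb{R}^{4}$ with basis $(e_{3},e_{4},e_{5},e_{6})$ and takes $\widetilde{J}$ to be the block rotation with $\widetilde{J}e_{3}=e_{4}$, $\widetilde{J}e_{5}=e_{6}$ (the matrix model is an assumption made purely for this check). It verifies that $\frac{1}{2}(\eta-i\widetilde{J}\eta)$ lies in the $(+i)$-eigenspace, that its conjugate lies in the $(-i)$-eigenspace, and that the map $\eta\mapsto\frac{1}{2}(\eta-i\widetilde{J}\eta)$ is complex-linear.

```python
import numpy as np

# Model J~ on one fiber R^4 with basis (e3, e4, e5, e6):
# J~ e3 = e4, J~ e4 = -e3, J~ e5 = e6, J~ e6 = -e5.
J = np.array([[0., -1., 0.,  0.],
              [1.,  0., 0.,  0.],
              [0.,  0., 0., -1.],
              [0.,  0., 1.,  0.]])

rng = np.random.default_rng(0)
eta = rng.standard_normal(4)  # a real normal vector eta

# xi = (1/2)(eta - i J~ eta) should satisfy J~ xi = i xi, i.e. xi lies in N^{1,0} ...
xi = 0.5 * (eta - 1j * (J @ eta))
assert np.allclose(J @ xi, 1j * xi)

# ... while its conjugate lies in the (-i)-eigenbundle N^{0,1}.
assert np.allclose(J @ np.conj(xi), -1j * np.conj(xi))

# Complex-linearity of eta -> xi: the image of J~eta is i times the image of eta.
xi_J = 0.5 * ((J @ eta) - 1j * (J @ (J @ eta)))
assert np.allclose(xi_J, 1j * xi)

# Recovery of eta: with the factor 1/2 in the forward map, xi + conj(xi) = eta.
assert np.allclose(xi + np.conj(xi), eta)
```

The same computation, run with $\widehat{J}$ in place of $\widetilde{J}$, justifies the analogous decomposition $N\Sigma\otimes_{\mathbb{R}}\mathbb{C}=\widehat{N}^{1,0}\oplus\widehat{N}^{0,1}$ used below.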
There is also a well-defined isomorphism of complex vector bundles $\displaystyle(\widetilde{N}^{1,0},i)$ $\displaystyle\to Q_{NB}$ (3.13) $\displaystyle v^{\perp}$ $\displaystyle\mapsto(v)$ where we decompose $v\in u^{*}(T^{1,0}\mathbb{S}^{6})$ as $v=v^{\top}+v^{\perp}$ with tangential part $v^{\top}\in T^{1,0}\Sigma$ and normal part $v^{\perp}\in\widetilde{N}^{1,0}\Sigma$. In terms of a local $T^{2}$-frame $(e_{1},\ldots,e_{6})$, the isomorphisms $(N\Sigma,\widetilde{J})\simeq(\widetilde{N}^{1,0},i)\simeq Q_{NB}\simeq L_{N}\oplus L_{B}$ are simply: $\displaystyle e_{3}$ $\displaystyle\mapsto f_{2}\mapsto(f_{2})\mapsto(f_{2})\oplus 0$ $\displaystyle e_{5}$ $\displaystyle\mapsto f_{3}\mapsto(f_{3})\mapsto 0\oplus(\\!(f_{3})\\!)$ $\displaystyle e_{4}$ $\displaystyle\mapsto if_{2}\mapsto(if_{2})\mapsto(if_{2})\oplus 0$ $\displaystyle e_{6}$ $\displaystyle\mapsto if_{3}\mapsto(if_{3})\mapsto 0\oplus(\\!(if_{3})\\!).$ Now, we have already equipped $Q_{NB}$, $L_{N}$, and $L_{B}$ with holomorphic structures, and we would like to endow $\widetilde{N}^{1,0}$ with a holomorphic structure as well. There is an obvious way to do this: we can use the isomorphism (3.13) to pull back the holomorphic structure on $Q_{NB}$ to $\widetilde{N}^{1,0}$. Unwinding the definitions shows that this is precisely the Koszul-Malgrange structure for the $\mathbb{C}$-linear extension of $D^{\perp}$ on $\widetilde{N}^{1,0}$. Let us illustrate this holomorphic structure in terms of a local $T^{2}$-frame $(f_{1},f_{2},f_{3})$. 
To begin, extend both $\nabla^{\perp}$ and $D^{\perp}$ by $\mathbb{C}$-linearity to operators $\nabla^{\perp},D^{\perp}\colon\Gamma(N\Sigma\otimes_{\mathbb{R}}\mathbb{C})\to\Omega^{1}(\Sigma;\mathbb{C})\otimes\Gamma(N\Sigma\otimes_{\mathbb{R}}\mathbb{C}).$ From (3.6) and (3.11), we see that $\displaystyle\nabla^{\perp}f_{2}$ $\displaystyle=\textstyle\gamma_{22}\otimes f_{2}+\tau\zeta_{1}\otimes f_{3}-\frac{1}{2}\zeta_{1}\otimes\overline{f}_{3}$ $\displaystyle D^{\perp}f_{2}$ $\displaystyle=\gamma_{22}\otimes f_{2}+\tau\zeta_{1}\otimes f_{3}$ $\displaystyle\nabla^{\perp}f_{3}$ $\displaystyle=\textstyle\overline{\tau}\overline{\zeta}_{1}\otimes f_{2}+\frac{1}{2}\zeta_{1}\otimes\overline{f}_{2}+\gamma_{33}\otimes f_{3}$ $\displaystyle D^{\perp}f_{3}$ $\displaystyle=\overline{\tau}\overline{\zeta}_{1}\otimes f_{2}+\gamma_{33}\otimes f_{3}.$ Thus, the restriction of $\nabla^{\perp}$ to $\widetilde{N}^{1,0}\Sigma$ does not give a well-defined connection, whereas the restriction of $D^{\perp}$ to $\widetilde{N}^{1,0}$ does. That is, we have a covariant derivative operator $D^{\perp}\colon\Gamma(\widetilde{N}^{1,0})\to\Omega^{1}(\Sigma;\mathbb{C})\otimes\Gamma(\widetilde{N}^{1,0}),$ which gives $\widetilde{N}^{1,0}$ the holomorphic structure described in the previous paragraph. Composing this $D^{\perp}$ with the projection $T\Sigma\otimes_{\mathbb{R}}\mathbb{C}\to T^{0,1}\Sigma$ gives the corresponding $\overline{\partial}$-operator $\overline{\partial}^{\text{SU}}\colon\Gamma(\widetilde{N}^{1,0})\to\Omega^{0,1}(\Sigma)\otimes\Gamma(\widetilde{N}^{1,0}).$ #### 3.4.2 A Second Complex Structure Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a holomorphic curve. 
In terms of a local $T^{2}$-frame $(e_{1},\ldots,e_{6})$, we have: $\displaystyle\widetilde{J}e_{3}$ $\displaystyle=e_{4}$ $\displaystyle\widetilde{J}e_{5}$ $\displaystyle=e_{6}.$ We now define a new complex structure $\widehat{J}$ on $N\Sigma$ by declaring $\displaystyle\widehat{J}e_{3}$ $\displaystyle=e_{4}$ $\displaystyle\widehat{J}e_{5}$ $\displaystyle=-e_{6}.$ The following shows how $\nabla^{\perp}$ and $D^{\perp}$ relate to $\widehat{J}$, and should be contrasted with Proposition 3.7. ###### Proposition 3.8. Let $u\colon\Sigma\to\mathbb{S}^{6}$ be a holomorphic curve. The following are equivalent: (i) $u$ is null-torsion. (ii) $\nabla^{\perp}\widehat{J}=0$. (iii) $D^{\perp}\widehat{J}=0$. ###### Proof. Directly from (3.1), (3.2), and (3.3), one can check that $\displaystyle\nabla^{\perp}(\widehat{J}e_{3})-\widehat{J}(\nabla^{\perp}e_{3})=D^{\perp}(\widehat{J}e_{3})-\widehat{J}(D^{\perp}e_{3})$ $\displaystyle=-2\beta_{32}\otimes e_{5}-2\alpha_{32}\otimes e_{6}$ $\displaystyle\nabla^{\perp}(\widehat{J}e_{4})-\widehat{J}(\nabla^{\perp}e_{4})=D^{\perp}(\widehat{J}e_{4})-\widehat{J}(D^{\perp}e_{4})$ $\displaystyle=2\alpha_{32}\otimes e_{5}-2\beta_{32}\otimes e_{6}$ $\displaystyle\nabla^{\perp}(\widehat{J}e_{5})-\widehat{J}(\nabla^{\perp}e_{5})=D^{\perp}(\widehat{J}e_{5})-\widehat{J}(D^{\perp}e_{5})$ $\displaystyle=2\beta_{32}\otimes e_{3}-2\alpha_{32}\otimes e_{4}$ $\displaystyle\nabla^{\perp}(\widehat{J}e_{6})-\widehat{J}(\nabla^{\perp}e_{6})=D^{\perp}(\widehat{J}e_{6})-\widehat{J}(D^{\perp}e_{6})$ $\displaystyle=2\alpha_{32}\otimes e_{3}+2\beta_{32}\otimes e_{4}$ Noting that $u$ is null-torsion if and only if $\alpha_{32}=\beta_{32}=0$ proves the claim. ∎ We now complexify $N\Sigma$ and extend $\widehat{J}$ by $\mathbb{C}$-linearity. 
We have a decomposition of $N\Sigma\otimes_{\mathbb{R}}\mathbb{C}$ into $\widehat{J}$-eigenbundles, say $N\Sigma\otimes_{\mathbb{R}}\mathbb{C}=\widehat{N}^{1,0}\oplus\widehat{N}^{0,1}$ where, for example, $\widehat{N}^{1,0}=\\{\xi\in N\Sigma\otimes_{\mathbb{R}}\mathbb{C}\colon\widehat{J}\xi=i\xi\\}=\textstyle\\{\frac{1}{2}(\eta-i\widehat{J}\eta)\colon\eta\in N\Sigma\\}$. As complex vector bundles, we have isomorphisms $(N\Sigma,\widehat{J})\simeq(\widehat{N}^{1,0},i)\simeq L_{N}\oplus L_{B}^{*}$. In terms of a local $T^{2}$-frame, these isomorphisms are simply: $\displaystyle e_{3}$ $\displaystyle\mapsto f_{2}\mapsto(f_{2})\oplus 0$ $\displaystyle e_{5}$ $\displaystyle\mapsto\overline{f}_{3}\mapsto 0\oplus(\\!(\overline{f}_{3})\\!)$ $\displaystyle e_{4}$ $\displaystyle\mapsto if_{2}\mapsto(if_{2})\oplus 0$ $\displaystyle e_{6}$ $\displaystyle\mapsto-i\overline{f}_{3}\mapsto 0\oplus(\\!(-i\overline{f}_{3})\\!).$ We now extend both $\nabla^{\perp}$ and $D^{\perp}$ by $\mathbb{C}$-linearity. 
In terms of a local $T^{2}$-frame, we compute $\displaystyle\nabla^{\perp}f_{2}$ $\displaystyle=\textstyle\gamma_{22}\otimes f_{2}+\tau\zeta_{1}\otimes f_{3}-\frac{1}{2}\zeta_{1}\otimes\overline{f}_{3}$ $\displaystyle D^{\perp}f_{2}$ $\displaystyle=\gamma_{22}\otimes f_{2}+\tau\zeta_{1}\otimes f_{3}$ (3.14) $\displaystyle\nabla^{\perp}\overline{f}_{3}$ $\displaystyle=\textstyle\frac{1}{2}\overline{\zeta}_{1}\otimes f_{2}+\tau\zeta_{1}\otimes\overline{f}_{2}-\gamma_{33}\otimes\overline{f}_{3}$ $\displaystyle D^{\perp}\overline{f}_{3}$ $\displaystyle=\tau\zeta_{1}\otimes\overline{f}_{2}-\gamma_{33}\otimes\overline{f}_{3}.$ (3.15) Thus, if $u$ is null-torsion (i.e., $\tau=0$), the $\mathbb{C}$-linear extensions of $\nabla^{\perp}$ and $D^{\perp}$ both give well-defined connections on $\widehat{N}^{1,0}$, meaning that we have covariant derivative operators $\displaystyle\nabla^{\perp},D^{\perp}\colon\Gamma(\widehat{N}^{1,0})\to\Omega^{1}(\Sigma;\mathbb{C})\otimes\Gamma(\widehat{N}^{1,0}).$ Composing these with the projection $T\Sigma\otimes_{\mathbb{R}}\mathbb{C}\to T^{0,1}\Sigma$ gives the corresponding $\overline{\partial}$-operators: $\displaystyle\overline{\partial}^{\nabla},\overline{\partial}^{D}\colon\Gamma(\widehat{N}^{1,0})\to\Omega^{0,1}(\Sigma)\otimes\Gamma(\widehat{N}^{1,0}).$ Explicitly, continuing to assume that $u$ is null-torsion: $\displaystyle\overline{\partial}^{\nabla}f_{2}$ $\displaystyle=\textstyle(\gamma_{22})^{0,1}\otimes f_{2}$ $\displaystyle\overline{\partial}^{D}f_{2}$ $\displaystyle=(\gamma_{22})^{0,1}\otimes f_{2}$ (3.16) $\displaystyle\overline{\partial}^{\nabla}\overline{f}_{3}$ $\displaystyle=\textstyle\frac{1}{2}\overline{\zeta}_{1}\otimes f_{2}-(\gamma_{33})^{0,1}\otimes\overline{f}_{3}$ $\displaystyle\overline{\partial}^{D}\overline{f}_{3}$ $\displaystyle=-(\gamma_{33})^{0,1}\otimes\overline{f}_{3}.$ (3.17) The upshot is that if $u$ is null-torsion, we have two different holomorphic structures to consider on the complex bundle 
$\widehat{N}^{1,0}$. The corresponding spaces of holomorphic sections will be denoted $\displaystyle H^{0}(\widehat{N}^{1,0};\nabla^{\perp})$ $\displaystyle=\\{\xi\in\Gamma(\widehat{N}^{1,0})\colon\overline{\partial}^{\nabla}\xi=0\\}$ $\displaystyle H^{0}(\widehat{N}^{1,0};D^{\perp})$ $\displaystyle=\\{\xi\in\Gamma(\widehat{N}^{1,0})\colon\overline{\partial}^{D}\xi=0\\}.$ As we will see, it is the bundle $(\widehat{N}^{1,0},\nabla^{\perp})$ that arises in the study of the second variation of area. On the other hand, the bundle $(\widehat{N}^{1,0},D^{\perp})$ has the desirable feature that it splits holomorphically: ###### Proposition 3.9. If $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ is a null-torsion holomorphic curve, then: (a) The inclusion $L_{N}\hookrightarrow(\widehat{N}^{1,0},\nabla^{\perp})$ is holomorphic. (b) There is a holomorphic splitting $(\widehat{N}^{1,0},D^{\perp})\cong L_{N}\oplus L_{B}^{*}.$ ###### Proof. We have already seen that $\widehat{N}^{1,0}\simeq L_{N}\oplus L_{B}^{*}$ holds as complex vector bundles. Equations (3.16) and (3.17) show that the inclusions $\displaystyle L_{N}$ $\displaystyle\hookrightarrow(\widehat{N}^{1,0},\nabla^{\perp})$ $\displaystyle L_{N}$ $\displaystyle\hookrightarrow(\widehat{N}^{1,0},D^{\perp})$ $\displaystyle L_{B}^{*}$ $\displaystyle\hookrightarrow(\widehat{N}^{1,0},D^{\perp})$ are all holomorphic. ∎ To conclude this section, we use the holomorphic structure $\nabla^{\perp}$ on $\widehat{N}^{1,0}$ to define a notion of “real-holomorphicity” for sections of $(N\Sigma,\widehat{J})$. We say that $\eta\in\Gamma(N\Sigma)$ is $(\widehat{J},\nabla^{\perp})$-real holomorphic if $\widehat{\eta}^{1,0}:=\frac{1}{2}(\eta-i\widehat{J}\eta)$ is $\nabla^{\perp}$-holomorphic. 
Defining the operator $\displaystyle\widehat{\mathscr{D}}\colon\Gamma(N\Sigma)$ $\displaystyle\to\Omega^{1}(\Sigma)\otimes\Gamma(N\Sigma)$ (3.18) $\displaystyle\widehat{\mathscr{D}}_{X}\eta$ $\displaystyle=\nabla^{\perp}_{JX}\eta-\nabla^{\perp}_{X}(\widehat{J}\eta)$ where $J:=\widetilde{J}|_{T\Sigma}$ is the complex structure on $T\Sigma$, it is easy to check that for $Z=\frac{1}{2}(X+iJX)\in T^{0,1}\Sigma$, we have $\displaystyle\nabla^{\perp}_{Z}\widehat{\eta}^{1,0}=\textstyle\frac{i}{4}\left(\widehat{\mathscr{D}}_{X}\eta+i\widehat{\mathscr{D}}_{JX}\eta\right)\\!.$ Consequently, $\eta$ is $(\widehat{J},\nabla^{\perp})$-real holomorphic if and only if $\widehat{\mathscr{D}}_{X}\eta=0$ for all $X\in T\Sigma$. In other words, $\displaystyle\\{\eta\in\Gamma(N\Sigma)\colon\widehat{\mathscr{D}}\eta=0\\}$ $\displaystyle\cong H^{0}(\widehat{N}^{1,0};\nabla^{\perp})$ (3.19) as complex vector spaces. ###### Remark. In the same way, one can speak of $(\widehat{J},D^{\perp})$-real holomorphicity, and define a corresponding operator $\widehat{\mathscr{D}}^{D}\colon\Gamma(N\Sigma)\to\Omega^{1}(\Sigma)\otimes\Gamma(N\Sigma)$. However, we will not need this concept. ## 4 The Second Variation of Area We now begin our study of the Jacobi operator of null-torsion holomorphic curves. In $\S$4.1, we derive a second variation formula suited to the study of null-torsion holomorphic curves in $\mathbb{S}^{6}$, and in $\S$4.2 we use this formula to give a holomorphic interpretation of the $(-2)$-eigenspace of the Jacobi operator. In $\S$4.3, we prove Theorem 1.1. ### 4.1 A Second Variation Formula Let $u\colon\Sigma^{2}\to\mathbb{S}^{2n}$ be a minimal surface in a round $2n$-sphere of constant curvature $1$, where $\Sigma$ is a compact oriented surface without boundary. Let $u_{t}\colon\Sigma\to\mathbb{S}^{2n}$ be a variation of $u_{0}=u$ with $\eta:=\left.\frac{d}{dt}\right|_{t=0}u_{t}$ a normal variation vector field. 
As is well-known [24, Chap I: $\S$9], the second variation of area is given by $\mathcal{Q}(\eta):=\left.\frac{d^{2}}{dt^{2}}\right|_{t=0}\text{Area}(u_{t})=\int_{\Sigma}\langle\mathcal{L}\eta,\eta\rangle$ where the Jacobi operator $\mathcal{L}\colon\Gamma(N\Sigma)\to\Gamma(N\Sigma)$ is $\mathcal{L}=-\Delta^{\perp}-\mathcal{B}-\mathcal{R}$ where, in a local orthonormal frame $(e_{1},e_{2})$ for $T\Sigma$, we have $\displaystyle\Delta^{\perp}\eta$ $\displaystyle=\nabla^{\perp}_{e_{i}}\nabla^{\perp}_{e_{i}}\eta-\nabla^{\perp}_{\nabla^{\top}_{e_{i}}e_{i}}\eta$ $\displaystyle\mathcal{B}(\eta)$ $\displaystyle=\left\langle\text{I\\!I}(e_{i},e_{j}),\eta\right\rangle\text{I\\!I}(e_{i},e_{j})$ $\displaystyle\mathcal{R}(\eta)$ $\displaystyle=(\overline{R}(\eta,e_{i})e_{i})^{\perp}.$ Note that $\Delta^{\perp}$ is simply the connection Laplacian for $\nabla^{\perp}$, the normal part of the Levi-Civita connection. The spectrum of $-\Delta^{\perp}$ consists of non-negative real numbers. We now fix $\eta\in\Gamma(N\Sigma)$ and study the terms $\mathcal{R}(\eta)$, $\Delta^{\perp}\eta$, and $\mathcal{B}(\eta)$, in that order. Let $\theta_{\eta}\in\Omega^{1}(\Sigma)$ denote the $1$-form $\theta_{\eta}(X)=\langle\nabla^{\perp}_{X}\eta,\eta\rangle.$ Standard arguments show (see e.g., [33], [14], [30]) that: ###### Proposition 4.1. Let $u\colon\Sigma^{2}\to\mathbb{S}^{2n}$ be an oriented minimal surface. Then $\mathcal{R}(\eta)=2\eta.$ (4.1) and $-\langle\Delta^{\perp}\eta,\eta\rangle=\|\nabla^{\perp}\eta\|^{2}+\mathrm{div}(\theta_{\eta}^{\sharp}).$ (4.2) We now seek a formula for the term $\|\nabla^{\perp}\eta\|^{2}$. For this, let $J$ denote the complex structure on $T\Sigma$ given by the metric and orientation, and equip $N\Sigma$ with a complex structure $I$ that is compatible with the metric. 
Associated to $J$ and $I$, we let $\Theta_{\eta}\in\Omega^{1}(\Sigma)$ denote the $1$-form $\displaystyle\Theta_{\eta}(X)$ $\displaystyle=\langle\nabla^{\perp}_{JX}\eta,I\eta\rangle\\!,$ and let $\mathscr{D}\colon\Gamma(N\Sigma)\to\Gamma(N\Sigma)\otimes\Omega^{1}(\Sigma)$ denote the operator $\mathscr{D}_{X}\eta=\nabla^{\perp}_{JX}\eta-\nabla^{\perp}_{X}(I\eta).$ We now have: ###### Proposition 4.2. Let $u\colon\Sigma^{2}\to\mathbb{S}^{2n}$ be an oriented minimal surface. Let $I$ be any complex structure on $N\Sigma$ that is compatible with the metric. If $\nabla^{\perp}I=0$, then $\|\nabla^{\perp}\eta\|^{2}=\frac{1}{2}\|\mathscr{D}\eta\|^{2}-\langle R^{\perp}_{12}\eta,I\eta\rangle-\mathrm{div}(\Theta_{\eta}^{\sharp}).\\\ $ (4.3) ###### Proof. We observe that $\|\mathscr{D}_{X}\eta\|^{2}=\|\nabla^{\perp}_{JX}\eta\|^{2}+\|\nabla^{\perp}_{X}(I\eta)\|^{2}-2\left\langle\nabla^{\perp}_{JX}\eta,\nabla^{\perp}_{X}(I\eta)\right\rangle$ Therefore, we have: $\displaystyle\|\mathscr{D}\eta\|^{2}=\left\|\mathscr{D}_{e_{i}}\eta\right\|^{2}$ $\displaystyle=\|\nabla^{\perp}_{Je_{i}}\eta\|^{2}+\|\nabla^{\perp}_{e_{i}}(I\eta)\|^{2}-2\left\langle\nabla^{\perp}_{Je_{i}}\eta,\nabla^{\perp}_{e_{i}}(I\eta)\right\rangle$ $\displaystyle=\|\nabla^{\perp}\eta\|^{2}+\|\nabla^{\perp}(I\eta)\|^{2}-2\left\langle\nabla^{\perp}_{Je_{i}}\eta,\nabla^{\perp}_{e_{i}}(I\eta)\right\rangle$ $\displaystyle=2\|\nabla^{\perp}\eta\|^{2}-2\left\langle\nabla^{\perp}_{Je_{i}}\eta,\nabla^{\perp}_{e_{i}}(I\eta)\right\rangle$ using $\|\nabla^{\perp}(I\eta)\|=\|\nabla^{\perp}\eta\|$ in the last line. 
Therefore, $\|\nabla^{\perp}\eta\|^{2}=\frac{1}{2}\|\mathscr{D}\eta\|^{2}+\left\langle\nabla^{\perp}_{Je_{i}}\eta,\nabla^{\perp}_{e_{i}}(I\eta)\right\rangle\\!.$ To evaluate the last term, we compute $\displaystyle\left\langle\nabla^{\perp}_{Je_{i}}\eta,\nabla^{\perp}_{e_{i}}(I\eta)\right\rangle$ $\displaystyle=-\left\langle\nabla^{\perp}_{e_{i}}\nabla^{\perp}_{Je_{i}}\eta,I\eta\right\rangle+e_{i}(\Theta_{\eta}(e_{i}))$ $\displaystyle=-\left\langle\nabla^{\perp}_{e_{i}}\nabla^{\perp}_{Je_{i}}\eta,I\eta\right\rangle+\Theta_{\eta}(\nabla^{\top}_{e_{i}}e_{i})-\delta\Theta_{\eta}$ $\displaystyle=-\langle R^{\perp}_{12}\eta,I\eta\rangle-\left\langle\nabla^{\perp}_{[e_{1},e_{2}]}\eta,I\eta\right\rangle+\Theta_{\eta}(\nabla^{\top}_{e_{i}}e_{i})-\delta\Theta_{\eta}$ $\displaystyle=-\langle R^{\perp}_{12}\eta,I\eta\rangle-\delta\Theta_{\eta}$ where $\delta$ is the codifferential, and in the last line we used that $\nabla^{\top}$ is torsion-free and commutes with $J$. Thus, we have shown that $\|\nabla^{\perp}\eta\|^{2}=\frac{1}{2}\|\mathscr{D}\eta\|^{2}-\langle R^{\perp}_{12}\eta,I\eta\rangle-\delta\Theta_{\eta}$ This gives the result. ∎ Finally, we need a formula for $\mathcal{B}(\eta)$. For this, we specialize to the case of holomorphic curves in $\mathbb{S}^{6}$ and recall the complex structure $\widehat{J}$ on $N\Sigma$ defined in $\S$3.4.2. ###### Proposition 4.3. Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a holomorphic curve. Then $\mathcal{B}(\eta)=\widehat{J}R^{\perp}_{12}\eta$ (4.4) so that $\langle\mathcal{B}(\eta),\eta\rangle=-\langle R^{\perp}_{12}\eta,\widehat{J}\eta\rangle.$ (4.5) ###### Proof. Let $(e_{1},\ldots,e_{6})$ be a $T^{2}$-adapted frame. 
By (3.10) and the fact that $T^{2}$-adapted frames have $\mu=0$, we have $\displaystyle\text{I\\!I}(e_{1},e_{1})$ $\displaystyle=\kappa_{1}e_{3}+\kappa_{2}e_{4}$ $\displaystyle\text{I\\!I}(e_{1},e_{2})$ $\displaystyle=-\kappa_{2}e_{3}+\kappa_{1}e_{4}.$ It follows that $\displaystyle\mathcal{B}(e_{3})$ $\displaystyle=2|\kappa|^{2}e_{3}$ $\displaystyle\mathcal{B}(e_{5})$ $\displaystyle=0$ $\displaystyle\mathcal{B}(e_{4})$ $\displaystyle=2|\kappa|^{2}e_{4}$ $\displaystyle\mathcal{B}(e_{6})$ $\displaystyle=0.$ On the other hand, the Ricci equation (2.2) implies that $\displaystyle R^{\perp}_{12}(e_{3})$ $\displaystyle=-2|\kappa|^{2}\widehat{J}e_{3}$ $\displaystyle R^{\perp}_{12}(e_{5})$ $\displaystyle=0$ (4.6) $\displaystyle R^{\perp}_{12}(e_{4})$ $\displaystyle=-2|\kappa|^{2}\widehat{J}e_{4}$ $\displaystyle R^{\perp}_{12}(e_{6})$ $\displaystyle=0.$ (4.7) Therefore, $\mathcal{B}(\eta)=\widehat{J}R^{\perp}_{12}\eta$, so that $\langle\mathcal{B}(\eta),\eta\rangle=\langle\widehat{J}R^{\perp}_{12}\eta,\eta\rangle=-\langle R^{\perp}_{12}\eta,\widehat{J}\eta\rangle$. ∎ We now intend to combine Propositions 4.1, 4.2, and 4.3 to arrive at a second variation formula for holomorphic curves in $\mathbb{S}^{6}$. To do this, notice that Proposition 4.2 required the choice of a complex structure $I$ on $N\Sigma$ satisfying $\nabla^{\perp}I=0$. In $\S$3.4, we observed that $\widehat{J}$ satisfies $\nabla^{\perp}\widehat{J}=0$ if and only if $u$ is null-torsion (whereas $\widetilde{J}$ never has this property). Therefore, restricting to the null-torsion case and taking $I:=\widehat{J}$ in Proposition 4.2, we deduce: ###### Corollary 4.4. Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a null-torsion holomorphic curve, where $\Sigma$ is a closed surface. 
For $\eta\in\Gamma(N\Sigma)$, we have: $\mathcal{Q}(\eta)=\int_{\Sigma}\frac{1}{2}\|\widehat{\mathscr{D}}\eta\|^{2}-2\|\eta\|^{2}$ (4.8) recalling from (3.18) that $\widehat{\mathscr{D}}_{X}\eta:=\nabla^{\perp}_{JX}\eta-\nabla^{\perp}_{X}(\widehat{J}\eta)$. ###### Proof. Using (4.1) and (4.5), we have $\displaystyle\langle\mathcal{L}\eta,\eta\rangle$ $\displaystyle=-\langle\Delta^{\perp}\eta,\eta\rangle-\langle\mathcal{B}(\eta),\eta\rangle-\langle\mathcal{R}(\eta),\eta\rangle$ $\displaystyle=-\langle\Delta^{\perp}\eta,\eta\rangle+\langle R^{\perp}_{12}\eta,\widehat{J}\eta\rangle-2\|\eta\|^{2}.$ Next, using (4.2) and (4.3) with the choice $I=\widehat{J}$, we have $\displaystyle\langle\mathcal{L}\eta,\eta\rangle$ $\displaystyle=\frac{1}{2}\|\widehat{\mathscr{D}}\eta\|^{2}-\langle R^{\perp}_{12}\eta,\widehat{J}\eta\rangle-\text{div}(\Theta^{\sharp}_{\eta})+\text{div}(\theta^{\sharp}_{\eta})+\langle R^{\perp}_{12}\eta,\widehat{J}\eta\rangle-2\|\eta\|^{2}$ $\displaystyle=\frac{1}{2}\|\widehat{\mathscr{D}}\eta\|^{2}-\text{div}(\Theta^{\sharp}_{\eta})+\text{div}(\theta^{\sharp}_{\eta})-2\|\eta\|^{2}.$ Integrating both sides and using Stokes’ Theorem completes the proof. ∎ Analogues of the second variation formula (4.8) have been observed in several other contexts. For example, a version of (4.8) was obtained by Simons [33, p. 78] for complex submanifolds of Kähler manifolds, by Ejiri [14, Lemma 3.2] for minimal $2$-spheres in $\mathbb{S}^{2n}$, by Micallef-Wolfson [29, p. 264] for minimal Lagrangians in negative Kähler-Einstein $4$-manifolds, and by Montiel- Urbano [30, p. 2259] for superminimal surfaces in self-dual Einstein $4$-manifolds. ### 4.2 The First Eigenvalue Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a null-torsion holomorphic curve. Let $\eta\in\Gamma(N\Sigma)$ be an eigenvector for $\mathcal{L}$, so that $\mathcal{L}\eta=\lambda\eta$ for some $\lambda\in\mathbb{R}$. 
Then $\langle\mathcal{L}\eta,\eta\rangle=\lambda\|\eta\|^{2}$, so Corollary 4.4 gives $\int_{\Sigma}\frac{1}{2}\|\widehat{\mathscr{D}}\eta\|^{2}-2\|\eta\|^{2}=\mathcal{Q}(\eta)=\int_{\Sigma}\lambda\|\eta\|^{2}.$ Rearranging, we obtain $\int_{\Sigma}\frac{1}{2}\|\widehat{\mathscr{D}}\eta\|^{2}-(\lambda+2)\|\eta\|^{2}=0.$ Since $\|\widehat{\mathscr{D}}\eta\|^{2}\geq 0$, we deduce that $\lambda\geq-2$. Moreover, we see that $\lambda=-2$ if and only if $\eta\in\Gamma(N\Sigma)$ satisfies $\widehat{\mathscr{D}}\eta=0$, i.e., if and only if $\eta$ is $(\widehat{J},\nabla^{\perp})$-real holomorphic. Recalling (3.19), this proves: ###### Proposition 4.5. The lowest eigenvalue of the Jacobi operator satisfies $\lambda_{1}\geq-2$. The multiplicity $m_{1}$ of the eigenvalue $\lambda=-2$ is $m_{1}=\dim_{\mathbb{R}}\\{\eta\in\Gamma(N\Sigma)\colon\widehat{\mathscr{D}}\eta=0\\}=2\,h^{0}(\widehat{N}^{1,0};\nabla^{\perp}).$ We now use the Riemann-Roch Theorem to derive a lower bound for $m_{1}$. This lower bound will show, in particular, that $m_{1}\geq 4$, so that the lowest eigenvalue of $\mathcal{L}$ is always $\lambda_{1}=-2$. ###### Proposition 4.6. The multiplicity $m_{1}$ satisfies $m_{1}=4d+2h^{1}(\widehat{N}^{1,0};\nabla^{\perp})\geq 4d.$ ###### Proof. By the Riemann-Roch Theorem for rank 2 vector bundles, the isomorphism $\widehat{N}^{1,0}\simeq L_{N}\oplus L_{B}^{*}$ of complex vector bundles, and Proposition 3.3, we have: $\displaystyle h^{0}(\widehat{N}^{1,0};\nabla^{\perp})-h^{1}(\widehat{N}^{1,0};\nabla^{\perp})$ $\displaystyle=\deg(\widehat{N}^{1,0})+\chi(\Sigma)$ $\displaystyle=c_{1}(L_{N})-c_{1}(L_{B})+\chi(\Sigma)$ $\displaystyle=-\chi(\Sigma)+d+d+\chi(\Sigma)$ $\displaystyle=2d.$ The result follows. ∎ ### 4.3 The Multiplicity of the First Eigenvalue: Proof of Theorem 1.1 In Proposition 4.6, we obtained a lower bound for $m_{1}$. Our aim is to prove the following upper bound for $m_{1}$. ###### Proposition 4.7. 
Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a null-torsion holomorphic curve. Then $h^{0}(\widehat{N}^{1,0};\nabla^{\perp})\leq h^{0}(L_{N})+h^{0}(L_{B}^{*}).$ Accepting this proposition on faith for a moment, we now show how it implies Theorem 1.1. ###### Proof. Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a null-torsion holomorphic curve satisfying $g<\frac{1}{2}(d+2)$, or equivalently, $d>2g-2$. Using Proposition 4.5, followed by Proposition 4.7, and finally Proposition 3.4(a) and (b), we have the upper bound $\displaystyle m_{1}=2h^{0}(\widehat{N}^{1,0};\nabla^{\perp})$ $\displaystyle\leq 2\left(h^{0}(L_{N})+h^{0}(L_{B}^{*})\right)=2\left(2d+h^{0}(L_{B}\otimes K_{\Sigma})\right)=4d.$ Coupled with the lower bound of Proposition 4.6, we deduce that $m_{1}=4d=\frac{A}{\pi}$. ∎ #### 4.3.1 Proof of Proposition 4.7 Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a null-torsion holomorphic curve. On the complex vector bundle $\widehat{N}^{1,0}$, we recall the two $\overline{\partial}$-operators $\displaystyle\overline{\partial}^{\nabla},\overline{\partial}^{D}\colon\Gamma(\widehat{N}^{1,0})\to\Omega^{0,1}(\Sigma)\otimes\Gamma(\widehat{N}^{1,0}).$ Let $S\colon\Gamma(\widehat{N}^{1,0})\to\Omega^{0,1}(\Sigma)\otimes\Gamma(\widehat{N}^{1,0})$ denote the difference tensor, i.e., $S(\xi):=\overline{\partial}^{\nabla}\xi-\overline{\partial}^{D}\xi$ for smooth sections $\xi\in\Gamma(\widehat{N}^{1,0})$. Since $S$ is a tensor, it can be viewed as a pointwise operator, i.e., as a bundle map $\widehat{N}^{1,0}\to\Lambda^{0,1}\Sigma\otimes\widehat{N}^{1,0}$. To understand $S$ in more detail, let $(f_{2},\overline{f}_{3})$ be a $T^{2}$-frame for $\widehat{N}^{1,0}$ at a point $p\in\Sigma$. 
By (3.16) and (3.17), we have $\displaystyle S(f_{2})$ $\displaystyle=0$ $\displaystyle S(\overline{f}_{3})$ $\displaystyle=\textstyle\frac{1}{2}\overline{\zeta}_{1}\otimes f_{2}.$ Thus, using the Hermitian vector bundle isomorphism $\widehat{N}^{1,0}\simeq L_{N}\oplus L_{B}^{*}$, we can regard $S$ as a map $S\colon L_{N}\oplus L_{B}^{*}\to\Lambda^{0,1}\Sigma\otimes L_{N}$ such that $S|_{L_{N}}=0$. Let $\xi\in\Gamma(\widehat{N}^{1,0})$ be a smooth section. Write $\xi=\xi_{2}+\xi_{3}$, where $\xi_{2}\in\Gamma(L_{N})$ and $\xi_{3}\in\Gamma(L_{B}^{*})$. The condition that $\xi$ be $\nabla^{\perp}$-holomorphic is equivalent to $\overline{\partial}^{D}\xi=-S(\xi),$ which (by decomposing into $L_{N}$ and $L_{B}^{*}$ components) is in turn equivalent to the system $\displaystyle\overline{\partial}^{D}\xi_{2}$ $\displaystyle=-S(\xi_{3})$ $\displaystyle\overline{\partial}^{D}\xi_{3}$ $\displaystyle=0.$ Now, by Proposition 3.9(b), there is a holomorphic isomorphism $(\widehat{N}^{1,0},D^{\perp})\cong L_{N}\oplus L_{B}^{*}$. Thus, the $\nabla^{\perp}$-holomorphicity condition can finally be rewritten as $\displaystyle\overline{\partial}^{L_{N}}\xi_{2}$ $\displaystyle=-S(\xi_{3})$ (4.9) $\displaystyle\overline{\partial}^{L_{B}^{*}}\xi_{3}$ $\displaystyle=0,$ (4.10) where $\overline{\partial}^{L_{N}}$ and $\overline{\partial}^{L_{B}^{*}}$ are the respective $\overline{\partial}$-operators on $L_{N}$ and $L_{B}^{*}$. The upshot is that the system (4.9)-(4.10) is decoupled, making it easy to count its solutions. Indeed, the solution space of (4.10) is $H^{0}(L_{B}^{*})$, a $\mathbb{C}$-vector space of complex dimension $h^{0}(L_{B}^{*})$. Moreover, for each $\xi_{3}\in H^{0}(L_{B}^{*})$, the set of solutions to (4.9) is either empty or an affine space of complex dimension $h^{0}(L_{N})$.
Geometrically, the set of solutions to (4.9)-(4.10) can be viewed as a bundle of complex $h^{0}(L_{N})$-dimensional affine spaces over the set of $\xi_{3}\in H^{0}(L_{B}^{*})$ for which (4.9) has a solution. In conclusion, the set of $\nabla^{\perp}$-holomorphic sections has complex dimension at most $h^{0}(L_{N})+h^{0}(L_{B}^{*})$. This proves Proposition 4.7. ## 5 The Nullity: Proof of Theorem 1.2 Let $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ be a null-torsion holomorphic curve of area $A=4\pi d$. The aim of this section is to prove the following lower bound on the nullity of the Jacobi operator $\mathcal{L}$ of $u$: $\text{Nullity}(u)\geq 2d+\chi(\Sigma).$ The idea of the proof is to identify a subspace of $\text{Null}(u):=\\{\eta\in\Gamma(N\Sigma)\colon\mathcal{L}\eta=0\\}$ that is isomorphic to $H^{0}(K_{\Sigma}^{*}\otimes L_{N})$. The dimension of $H^{0}(K_{\Sigma}^{*}\otimes L_{N})$ will then be estimated via Riemann-Roch. Our method in this section is not original. Indeed, our calculations are direct analogues of those in Montiel and Urbano’s study [30] of superminimal surfaces in self-dual Einstein $4$-manifolds. To ease notation, we adopt the following conventions: Convention: For the remainder of this work, we let $J$ denote the complex structure on $u^{*}(T\mathbb{S}^{6})=T\Sigma\oplus N\Sigma$ that on $T\Sigma$ is given by the metric and orientation, and on $N\Sigma$ is given by $\widehat{J}$. Recall that as complex bundles, we have an isomorphism $\widehat{N}^{1,0}\simeq(N\Sigma,J)=E_{N}\oplus E_{B}^{*}.$ As real vector bundles, we have $N\Sigma\simeq E_{N}\oplus E_{B}$, and we will use the notation $\eta=\eta^{N}+\eta^{B}$ for the decomposition of a normal vector $\eta\in N\Sigma$ into its $E_{N}$ and $E_{B}$ components. Convention: On the complex bundle $\widehat{N}^{1,0}$, the only holomorphic structure we will need from now on is $\overline{\partial}^{\nabla}$.
Thus, we will abbreviate “$\nabla^{\perp}$-holomorphic” as “holomorphic,” and abbreviate “$(\widehat{J},\nabla^{\perp})$-real holomorphic” as “real-holomorphic.” Noting that holomorphic sections of $K_{\Sigma}^{*}\otimes\widehat{N}^{1,0}$ are in bijection with real-holomorphic sections of $T^{*}\Sigma\otimes N\Sigma$, we may identify $\displaystyle\Gamma(K_{\Sigma}^{*}\otimes L_{N})$ $\displaystyle\cong\\{\alpha\in\Gamma(T^{*}\Sigma\otimes E_{N})\colon\alpha\circ J=-J\circ\alpha\\}$ $\displaystyle H^{0}(K_{\Sigma}^{*}\otimes L_{N})$ $\displaystyle\cong\\{\alpha\in\Gamma(T^{*}\Sigma\otimes E_{N})\colon\alpha\circ J=-J\circ\alpha\text{ and }\alpha\text{ real-holomorphic}\\}.$ These identifications will frequently be made without comment. ### 5.1 Preliminaries Recall that the Levi-Civita connection $\overline{\nabla}$ on $\mathbb{S}^{6}$ gives a tangential connection $\nabla^{\top}$ on $T\Sigma$ and a normal connection $\nabla^{\perp}$ on $N\Sigma$. These give an induced connection $\widetilde{\nabla}$ on $T^{*}\Sigma\otimes N\Sigma$, and an induced connection $\widehat{\nabla}$ on $T^{*}\Sigma\otimes T^{*}\Sigma\otimes N\Sigma$. Explicitly, for $\alpha\in\Gamma(T^{*}\Sigma\otimes N\Sigma)$, we have $(\widetilde{\nabla}_{Y}\alpha)(X):=\nabla^{\perp}_{Y}(\alpha(X))-\alpha(\nabla^{\top}_{Y}X),$ (5.1) and for $\beta\in\Gamma(T^{*}\Sigma\otimes T^{*}\Sigma\otimes N\Sigma)$, we have $(\widehat{\nabla}_{Z}\beta)(X,Y):=\nabla^{\perp}_{Z}(\beta(X,Y))-\beta(\nabla^{\top}_{Z}X,Y)-\beta(X,\nabla^{\top}_{Z}Y).$ (5.2) We remark that if $\beta$ is a symmetric $2$-tensor (i.e., $\beta(X,Y)=\beta(Y,X)$ for all $X,Y\in T\Sigma$), then $\widehat{\nabla}_{Z}\beta$ is also a symmetric $2$-tensor.
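Indeed, the symmetry claim follows by a direct check from (5.2): if $\beta(X,Y)=\beta(Y,X)$ for all $X,Y\in T\Sigma$, then $(\widehat{\nabla}_{Z}\beta)(X,Y)=\nabla^{\perp}_{Z}(\beta(X,Y))-\beta(\nabla^{\top}_{Z}X,Y)-\beta(X,\nabla^{\top}_{Z}Y)=\nabla^{\perp}_{Z}(\beta(Y,X))-\beta(Y,\nabla^{\top}_{Z}X)-\beta(\nabla^{\top}_{Z}Y,X)=(\widehat{\nabla}_{Z}\beta)(Y,X),$ where the middle equality applies the symmetry of $\beta$ to each of the three terms.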
For $\alpha\in\Gamma(T^{*}\Sigma\otimes N\Sigma)$, we recall that its second covariant derivative $\widetilde{\nabla}^{2}_{X,Y}\alpha\in\Gamma(T^{*}\Sigma\otimes N\Sigma)$ at $X,Y\in T\Sigma$ is given by: $\displaystyle\widetilde{\nabla}^{2}_{XY}\alpha$ $\displaystyle:=(\widehat{\nabla}_{X}\widetilde{\nabla}\alpha)(Y,\cdot)=\widetilde{\nabla}_{X}\widetilde{\nabla}_{Y}\alpha-\widetilde{\nabla}_{\nabla^{\top}_{X}Y}\alpha.$ We also recall the Ricci identity $\widetilde{\nabla}^{2}_{XY}\alpha=\widetilde{\nabla}^{2}_{YX}\alpha+\widetilde{R}_{XY}\alpha$ (5.3) where $\widetilde{R}$ is the curvature of $\widetilde{\nabla}$. A straightforward calculation shows that $(\widetilde{R}_{XY}\alpha)(Z)=R^{\perp}_{XY}(\alpha(Z))-\alpha(R^{\top}_{XY}(Z)).$ (5.4) We let $(\nabla^{\perp})^{*}\colon\Gamma(T^{*}\Sigma\otimes N\Sigma)\to\Gamma(N\Sigma)$ denote the formal adjoint of $\nabla^{\perp}$, so that $\int_{\Sigma}\left\langle\nabla^{\perp}\xi,\alpha\right\rangle=\int_{\Sigma}\left\langle\xi,(\nabla^{\perp})^{*}\alpha\right\rangle\ \ \text{ for all }\xi\in\Gamma(N\Sigma),\,\alpha\in\Gamma(T^{*}\Sigma\otimes N\Sigma).$ In terms of a local orthonormal frame $(e_{1},e_{2})$ on $T\Sigma$, one can compute $(\nabla^{\perp})^{*}\alpha$ via the well-known formula: $(\nabla^{\perp})^{*}\alpha=-(\widetilde{\nabla}_{e_{i}}\alpha)(e_{i}).$ (5.5) Similarly, we let $\widetilde{\nabla}^{*}\colon\Gamma(T^{*}\Sigma\otimes T^{*}\Sigma\otimes N\Sigma)\to\Gamma(T^{*}\Sigma\otimes N\Sigma)$ denote the formal adjoint of $\widetilde{\nabla}$. 
Again, in terms of a local orthonormal frame $(e_{1},e_{2})$ on $T\Sigma$, one has the formula $\widetilde{\nabla}^{*}\beta=-(\widehat{\nabla}_{e_{i}}\beta)(e_{i},\cdot).$ (5.6) ### 5.2 Strategy of Proof For a fixed $\eta\in\Gamma(N\Sigma)$, we consider the section $\Psi_{\eta}\in\Gamma(T^{*}\Sigma\otimes N\Sigma)$ given by $\Psi_{\eta}(X):=\widehat{\mathscr{D}}_{X}\eta=\nabla^{\perp}_{JX}\eta-J(\nabla^{\perp}_{X}\eta).$ The basic properties of $\Psi_{\eta}$ are given by the following two lemmas. Verifying Lemma 5.1 is straightforward; we will prove only Lemma 5.2. ###### Lemma 5.1. We have: (a) $\eta$ is real-holomorphic $\iff$ $\Psi_{\eta}=0$. (b) $\Psi_{\eta}\circ J=-J\circ\Psi_{\eta}$. (c) We have $(\widetilde{\nabla}_{Y}\Psi_{\eta})(JX)=-J\left[(\widetilde{\nabla}_{Y}\Psi_{\eta})(X)\right]\\!.$ ###### Lemma 5.2. We have: $(\nabla^{\perp})^{*}\Psi_{\eta}=-J(\mathcal{L}\eta+2\eta).$ ###### Proof. Using (5.5) and (5.1), we calculate $\displaystyle(\nabla^{\perp})^{*}\Psi_{\eta}=-(\widetilde{\nabla}_{e_{i}}\Psi_{\eta})(e_{i})$ $\displaystyle=-\nabla^{\perp}_{e_{i}}(\Psi_{\eta}(e_{i}))+\Psi_{\eta}(\nabla^{\top}_{e_{i}}e_{i})$ $\displaystyle=-\nabla^{\perp}_{e_{i}}\nabla^{\perp}_{Je_{i}}\eta+J\nabla^{\perp}_{e_{i}}\nabla^{\perp}_{e_{i}}\eta+\nabla^{\perp}_{J(\nabla^{\top}_{e_{i}}e_{i})}\eta-J(\nabla^{\perp}_{\nabla^{\top}_{e_{i}}e_{i}}\eta)$ $\displaystyle=-\nabla^{\perp}_{e_{i}}\nabla^{\perp}_{Je_{i}}\eta+\nabla^{\perp}_{J(\nabla^{\top}_{e_{i}}e_{i})}\eta+J\Delta^{\perp}\eta.$ Now, using that $\nabla^{\perp}_{e_{i}}\nabla^{\perp}_{Je_{i}}=R^{\perp}_{12}+\nabla^{\perp}_{[e_{1},e_{2}]}$ and that $J(\nabla^{\top}_{e_{i}}e_{i})=[e_{1},e_{2}]$, we obtain $\displaystyle(\nabla^{\perp})^{*}\Psi_{\eta}$ $\displaystyle=-R^{\perp}_{12}\eta+J\Delta^{\perp}\eta$ $\displaystyle=-R^{\perp}_{12}\eta+J(-\mathcal{L}\eta-2\eta)-J(\mathcal{B}\eta)$ where we used that $\Delta^{\perp}\eta=-\mathcal{L}\eta-\mathcal{B}\eta-2\eta$. Finally, using (4.4), we arrive at the result. 
∎ Let $\text{Null}(u)=\\{\eta\in\Gamma(N\Sigma)\colon\mathcal{L}\eta=0\\}$ denote the null space of the Jacobi operator $\mathcal{L}$. Consider the linear map $\displaystyle G\colon\\{\eta\in\text{Null}(u)\colon(\Psi_{\eta})^{B}=0\\}$ $\displaystyle\to\Gamma(K_{\Sigma}^{*}\otimes L_{N})$ $\displaystyle\eta$ $\displaystyle\mapsto\widehat{\mathscr{D}}\eta=\Psi_{\eta}.$ Observe that $G$ is injective. Indeed, if $G(\eta)=0$, then $\Psi_{\eta}=0$, so $\eta$ is real-holomorphic, so $\mathcal{L}\eta=-2\eta$, but $\mathcal{L}\eta=0$, so $\eta=0$. Our main claim in this section concerns the image of $G$: ###### Proposition 5.3. The image of $G$ is equal to $H^{0}(K_{\Sigma}^{*}\otimes L_{N})$. Accepting Proposition 5.3 on faith for a moment, we see that $G$ gives an isomorphism $\displaystyle\\{\eta\in\text{Null}(u)\colon(\Psi_{\eta})^{B}=0\\}\cong H^{0}(K_{\Sigma}^{*}\otimes L_{N}).$ From this isomorphism, we now deduce Theorem 1.2: ###### Proof. Recall from Proposition 3.3 that $c_{1}(L_{N})=-\chi(\Sigma)+d$ and $c_{1}(K_{\Sigma}^{*})=\chi(\Sigma)$, and hence $c_{1}(K_{\Sigma}^{*}\otimes L_{N})=d$. By Proposition 5.3, we now estimate $\displaystyle\text{Nullity}(u)\geq\dim_{\mathbb{R}}\\{\eta\in\text{Null}(u)\colon(\Psi_{\eta})^{B}=0\\}$ $\displaystyle=\dim_{\mathbb{R}}[H^{0}(K_{\Sigma}^{*}\otimes L_{N})]$ $\displaystyle=2h^{0}(K_{\Sigma}^{*}\otimes L_{N})$ $\displaystyle=2h^{1}(K_{\Sigma}^{*}\otimes L_{N})+2c_{1}(K_{\Sigma}^{*}\otimes L_{N})+\chi(\Sigma)$ $\displaystyle=2h^{1}(K_{\Sigma}^{*}\otimes L_{N})+2d+\chi(\Sigma),$ where we used Riemann-Roch in the second-to-last step. Finally, using $h^{1}(K_{\Sigma}^{*}\otimes L_{N})\geq 0$, we conclude the result. ∎ ###### Remark. The estimate $h^{1}(K_{\Sigma}^{*}\otimes L_{N})\geq 0$ can be slightly sharpened. Indeed, by Serre Duality, we have $h^{1}(K_{\Sigma}^{*}\otimes L_{N})=h^{0}(K_{\Sigma}\otimes K_{\Sigma}\otimes L_{N}^{*})$, and we compute $c_{1}(K_{\Sigma}\otimes K_{\Sigma}\otimes L_{N}^{*})=-\chi(\Sigma)-d$. 
Therefore, if $d>2g-2$ (which, by Proposition 3.6, holds if $g\leq 6$), then $K_{\Sigma}\otimes K_{\Sigma}\otimes L_{N}^{*}$ is a negative line bundle, and hence $h^{1}(K_{\Sigma}^{*}\otimes L_{N})=0$. The remainder of this section consists of a proof of Proposition 5.3, which naturally divides into two halves. That is, in Proposition 5.7, we will show that $\text{Im}(G)\subset H^{0}(K_{\Sigma}^{*}\otimes L_{N})$, and in Proposition 5.9, we will show that $H^{0}(K_{\Sigma}^{*}\otimes L_{N})\subset\text{Im}(G)$. ### 5.3 Technical Lemmas We begin by measuring the extent to which the smooth section $\Psi_{\eta}\in\Gamma(T^{*}\Sigma\otimes N\Sigma)$ might fail to be real-holomorphic. So, for a fixed $\eta\in\Gamma(N\Sigma)$, we consider the section $\Omega_{\eta}\in\Gamma(T^{*}\Sigma\otimes T^{*}\Sigma\otimes N\Sigma)$ given by $\Omega_{\eta}(X,Y):=(\widetilde{\nabla}_{JX}\Psi_{\eta})(JY)-(\widetilde{\nabla}_{X}\Psi_{\eta})(Y).$ We now establish the basic properties of $\Omega_{\eta}$ by analogy with Lemmas 5.1 and 5.2. The analogue of Lemma 5.1 is easy: ###### Lemma 5.4. We have: (a) $\Psi_{\eta}$ is real-holomorphic $\iff$ $\Omega_{\eta}=0$. (b) $\Omega_{\eta}(JX,JY)=-\Omega_{\eta}(X,Y)$. Therefore, $\Omega_{\eta}$ is an $N\Sigma$-valued symmetric $2$-tensor on $T\Sigma$ of trace zero. (c) We have the identity: $(\widehat{\nabla}_{e_{i}}\Omega_{\eta})(v,e_{i})=(\widetilde{\nabla}^{2}_{e_{i},Jv}\Psi_{\eta})(Je_{i})-(\widetilde{\nabla}^{2}_{e_{i},v}\Psi_{\eta})(e_{i}).$ ###### Proof. (a) This is straightforward and left to the reader. (b) Directly from the definition of $\Omega_{\eta}$, we have $\Omega_{\eta}(JX,JY)=-\Omega_{\eta}(X,Y).$ Thus, letting $(e_{1},e_{2})$ denote an oriented orthonormal frame on $T\Sigma$, we have both $\Omega_{\eta}(e_{2},e_{2})=-\Omega_{\eta}(e_{1},e_{1})$ and $\Omega_{\eta}(e_{1},e_{2})=\Omega_{\eta}(e_{2},e_{1})$, so $\Omega_{\eta}$ is an $N\Sigma$-valued symmetric $2$-tensor of trace zero.
(c) Using (5.1) and the fact that $\nabla^{\top}_{X}(JY)=J\nabla^{\top}_{X}Y$ for all $X,Y$, the first term on the right is $\displaystyle(\widetilde{\nabla}^{2}_{e_{i},Jv}\Psi_{\eta})(Je_{i})$ $\displaystyle=(\widetilde{\nabla}_{e_{i}}\widetilde{\nabla}_{Jv}\Psi_{\eta})(Je_{i})-(\widetilde{\nabla}_{\nabla^{\top}_{e_{i}}Jv}\Psi_{\eta})(Je_{i})$ $\displaystyle=\nabla_{e_{i}}^{\perp}((\widetilde{\nabla}_{Jv}\Psi_{\eta})(Je_{i}))-(\widetilde{\nabla}_{Jv}\Psi_{\eta})(J\nabla^{\top}_{e_{i}}e_{i})-(\widetilde{\nabla}_{J\nabla^{\top}_{e_{i}}v}\Psi_{\eta})(Je_{i}),$ and similarly, the second term on the right is $\displaystyle(\widetilde{\nabla}^{2}_{e_{i},v}\Psi_{\eta})(e_{i})$ $\displaystyle=(\widetilde{\nabla}_{e_{i}}\widetilde{\nabla}_{v}\Psi_{\eta})(e_{i})-(\widetilde{\nabla}_{\nabla^{\top}_{e_{i}}v}\Psi_{\eta})(e_{i})$ $\displaystyle=\nabla_{e_{i}}^{\perp}((\widetilde{\nabla}_{v}\Psi_{\eta})(e_{i}))-(\widetilde{\nabla}_{v}\Psi_{\eta})(\nabla^{\top}_{e_{i}}e_{i})-(\widetilde{\nabla}_{\nabla^{\top}_{e_{i}}v}\Psi_{\eta})(e_{i}).$ On the other hand, using (5.2) and the definition of $\Omega_{\eta}$, the left side of the desired identity is: $\displaystyle(\widehat{\nabla}_{e_{i}}\Omega_{\eta})(v,e_{i})$ $\displaystyle=\nabla^{\perp}_{e_{i}}[\Omega_{\eta}(v,e_{i})]-\Omega_{\eta}(\nabla^{\top}_{e_{i}}v,e_{i})-\Omega_{\eta}(v,\nabla^{\top}_{e_{i}}e_{i})$ $\displaystyle=\nabla_{e_{i}}^{\perp}((\widetilde{\nabla}_{Jv}\Psi_{\eta})(Je_{i}))-\nabla_{e_{i}}^{\perp}((\widetilde{\nabla}_{v}\Psi_{\eta})(e_{i}))-(\widetilde{\nabla}_{J{\nabla^{\top}_{e_{i}}v}}\Psi_{\eta})(Je_{i})$ $\displaystyle\ \ \ \ \ \ \ +(\widetilde{\nabla}_{\nabla^{\top}_{e_{i}}v}\Psi_{\eta})(e_{i})-(\widetilde{\nabla}_{Jv}\Psi_{\eta})(J\nabla^{\top}_{e_{i}}e_{i})+(\widetilde{\nabla}_{v}\Psi_{\eta})(\nabla^{\top}_{e_{i}}e_{i}).$ Comparing terms proves the lemma. ∎ The identities in the following lemma are straightforward to prove, but rather tedious.
To streamline discussion, their verifications are deferred to the Appendix. ###### Lemma 5.5. Let $\alpha\in\Gamma(T^{*}\Sigma\otimes N\Sigma)$ satisfy $\alpha\circ J=-J\circ\alpha$. Let $(e_{1},e_{2})$ be a local oriented orthonormal frame on $\Sigma$. Then for all $v\in T\Sigma$: $(\widetilde{\nabla}^{2}_{e_{i},Jv}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{e_{i},v}\alpha)(e_{i})=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i})-2\alpha(v)-(2K-2)[\alpha(v)]^{B}.$ (5.7) Moreover, if $(e_{1},e_{2})$ is geodesic at $p\in\Sigma$, then at the point $p$: $J\Psi_{(\nabla^{\perp})^{*}\alpha}(v)=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i}).$ (5.8) Using these identities, we can now give the analogue of Lemma 5.2: ###### Lemma 5.6. We have: $\widetilde{\nabla}^{*}\Omega_{\eta}=-\Psi_{\mathcal{L}\eta}+(2K-2)(\Psi_{\eta})^{B}.$ ###### Proof. Let $v\in T\Sigma$. Using (5.6), followed by the symmetry $(\widehat{\nabla}_{Z}\Omega_{\eta})(X,Y)=(\widehat{\nabla}_{Z}\Omega_{\eta})(Y,X)$, followed by Lemma 5.4(c), followed by (5.7), we get: $\displaystyle(\widetilde{\nabla}^{*}\Omega_{\eta})(v)$ $\displaystyle=-(\widehat{\nabla}_{e_{i}}\Omega_{\eta})(e_{i},v)$ $\displaystyle=-(\widehat{\nabla}_{e_{i}}\Omega_{\eta})(v,e_{i})$ $\displaystyle=-[(\widetilde{\nabla}^{2}_{e_{i},Jv}\Psi_{\eta})(Je_{i})-(\widetilde{\nabla}^{2}_{e_{i},v}\Psi_{\eta})(e_{i})]$ $\displaystyle=-\left[(\widetilde{\nabla}^{2}_{Jv,e_{i}}\Psi_{\eta})(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\Psi_{\eta})(e_{i})-2\Psi_{\eta}(v)-(2K-2)[\Psi_{\eta}(v)]^{B}\right]\\!.$ Choose the local frame $(e_{1},e_{2})$ to be geodesic at $p\in\Sigma$. 
By (5.8), at the point $p\in\Sigma$, we have: $\displaystyle J\Psi_{(\nabla^{\perp})^{*}\Psi_{\eta}}(v)=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\Psi_{\eta})(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\Psi_{\eta})(e_{i})$ Using this, together with Lemma 5.2, we conclude that $\displaystyle(\widetilde{\nabla}^{*}\Omega_{\eta})(v)$ $\displaystyle=-J\Psi_{(\nabla^{\perp})^{*}\Psi_{\eta}}(v)+2\Psi_{\eta}(v)+(2K-2)[\Psi_{\eta}(v)]^{B}$ $\displaystyle=-J\Psi_{-J(\mathcal{L}+2)\eta}(v)+2\Psi_{\eta}(v)+(2K-2)[\Psi_{\eta}(v)]^{B}$ $\displaystyle=-\Psi_{\mathcal{L}\eta}(v)+(2K-2)[\Psi_{\eta}(v)]^{B}$ which is the result. ∎ ### 5.4 Proof of Proposition 5.3 We now prove that $\text{Im}(G)\subset H^{0}(K_{\Sigma}^{*}\otimes L_{N})$, which is half of Proposition 5.3. More precisely: ###### Proposition 5.7. Let $\eta\in\Gamma(N\Sigma)$. We have $\displaystyle\int_{\Sigma}\|\Omega_{\eta}\|^{2}$ $\displaystyle=2\int_{\Sigma}\left\langle\Psi_{\eta},\Psi_{\mathcal{L}\eta}+(2-2K)(\Psi_{\eta})^{B}\right\rangle\\!.$ In particular, if $\mathcal{L}\eta=0$ and $(\Psi_{\eta})^{B}=0$, then $\Psi_{\eta}$ is real-holomorphic. ###### Proof. 
First, we use the symmetries of $\Omega_{\eta}$ given by Lemma 5.4(b) to observe that $\displaystyle\left\langle(\widetilde{\nabla}_{Je_{i}}\Psi_{\eta})(Je_{j}),\Omega_{\eta}(e_{i},e_{j})\right\rangle$ $\displaystyle=\left\langle(\widetilde{\nabla}_{e_{2}}\Psi_{\eta})(e_{2}),\Omega_{\eta}(e_{1},e_{1})\right\rangle-\left\langle(\widetilde{\nabla}_{e_{2}}\Psi_{\eta})(e_{1}),\Omega_{\eta}(e_{1},e_{2})\right\rangle$ $\displaystyle\ \ \ \ \ \ \ -\left\langle(\widetilde{\nabla}_{e_{1}}\Psi_{\eta})(e_{2}),\Omega_{\eta}(e_{2},e_{1})\right\rangle+\left\langle(\widetilde{\nabla}_{e_{1}}\Psi_{\eta})(e_{1}),\Omega_{\eta}(e_{2},e_{2})\right\rangle$ $\displaystyle=-\left\langle(\widetilde{\nabla}_{e_{2}}\Psi_{\eta})(e_{2}),\Omega_{\eta}(e_{2},e_{2})\right\rangle-\left\langle(\widetilde{\nabla}_{e_{2}}\Psi_{\eta})(e_{1}),\Omega_{\eta}(e_{2},e_{1})\right\rangle$ $\displaystyle\ \ \ \ \ \ \ -\left\langle(\widetilde{\nabla}_{e_{1}}\Psi_{\eta})(e_{2}),\Omega_{\eta}(e_{1},e_{2})\right\rangle-\left\langle(\widetilde{\nabla}_{e_{1}}\Psi_{\eta})(e_{1}),\Omega_{\eta}(e_{1},e_{1})\right\rangle$ $\displaystyle=-\left\langle(\widetilde{\nabla}_{e_{i}}\Psi_{\eta})(e_{j}),\Omega_{\eta}(e_{i},e_{j})\right\rangle\\!.$ Using this fact, we can calculate $\displaystyle\|\Omega_{\eta}\|^{2}$ $\displaystyle=\left\langle\Omega_{\eta}(e_{i},e_{j}),\Omega_{\eta}(e_{i},e_{j})\right\rangle$ $\displaystyle=\left\langle(\widetilde{\nabla}_{Je_{i}}\Psi_{\eta})(Je_{j}),\Omega_{\eta}(e_{i},e_{j})\right\rangle-\left\langle(\widetilde{\nabla}_{e_{i}}\Psi_{\eta})(e_{j}),\Omega_{\eta}(e_{i},e_{j})\right\rangle$ $\displaystyle=-2\left\langle(\widetilde{\nabla}_{e_{i}}\Psi_{\eta})(e_{j}),\Omega_{\eta}(e_{i},e_{j})\right\rangle$ and hence $\displaystyle\int_{\Sigma}\|\Omega_{\eta}\|^{2}=-2\int_{\Sigma}\left\langle(\widetilde{\nabla}_{e_{i}}\Psi_{\eta})(e_{j}),\Omega_{\eta}(e_{i},e_{j})\right\rangle$ $\displaystyle=-2\int_{\Sigma}\left\langle\Psi_{\eta}(e_{j}),(\widetilde{\nabla}^{*}\Omega_{\eta})(e_{j})\right\rangle$ 
$\displaystyle=2\int_{\Sigma}\left\langle\Psi_{\eta},\Psi_{\mathcal{L}\eta}+(2-2K)(\Psi_{\eta})^{B}\right\rangle\\!,$ where we used Lemma 5.6 in the last step. Finally, note that if $\mathcal{L}\eta=0$ and $(\Psi_{\eta})^{B}=0$ both hold, then $\int_{\Sigma}\|\Omega_{\eta}\|^{2}=0$, so that $\Omega_{\eta}=0$, so that $\Psi_{\eta}$ is real-holomorphic. ∎ We now make a brief digression. In general, if $\mathcal{L}\eta=\lambda\eta$, then Proposition 5.7 shows that: $\displaystyle 0\leq\frac{1}{2}\int_{\Sigma}\|\Omega_{\eta}\|^{2}$ $\displaystyle=\int_{\Sigma}\left\langle\Psi_{\eta},\,\lambda\Psi_{\eta}+(2-2K)(\Psi_{\eta})^{B}\right\rangle$ $\displaystyle=\lambda\int_{\Sigma}\|\Psi_{\eta}\|^{2}+\int_{\Sigma}(2-2K)\left\langle\Psi_{\eta},(\Psi_{\eta})^{B}\right\rangle$ $\displaystyle=\lambda\int_{\Sigma}\|\Psi_{\eta}\|^{2}+\int_{\Sigma}(2-2K)\left\|(\Psi_{\eta})^{B}\right\|^{2}.$ This estimate gives: ###### Proposition 5.8. Let $u\colon\mathbb{S}^{2}\to\mathbb{S}^{6}$ be a holomorphic $2$-sphere. If its Gauss curvature $K$ satisfies $K\geq c>0$, then $\lambda_{2}\geq-2+2c.$ In particular, the Jacobi operator of the Boruvka sphere satisfies $\lambda_{2}\geq-\frac{5}{3}$. ###### Proof. Suppose $\eta\in\Gamma(N\Sigma)$ satisfies $\mathcal{L}\eta=\lambda\eta$ with $\lambda>\lambda_{1}=-2$. We estimate $\displaystyle 0\leq\frac{1}{2}\int_{\Sigma}\|\Omega_{\eta}\|^{2}$ $\displaystyle=\lambda\int_{\Sigma}\|\Psi_{\eta}\|^{2}+\int_{\Sigma}(2-2K)\left\|(\Psi_{\eta})^{B}\right\|^{2}$ $\displaystyle\leq\lambda\int_{\Sigma}\|\Psi_{\eta}\|^{2}+(2-2c)\int_{\Sigma}\left\|(\Psi_{\eta})^{B}\right\|^{2}$ $\displaystyle\leq(\lambda+2-2c)\int_{\Sigma}\left\|\Psi_{\eta}\right\|^{2}.$ If it were the case that $\int_{\Sigma}\|\Psi_{\eta}\|^{2}=0$, then $\Psi_{\eta}=0$, so $\eta$ would be real-holomorphic and $\lambda=-2$, contrary to assumption. Thus, we must have $\int_{\Sigma}\|\Psi_{\eta}\|^{2}>0$, so $\lambda+2-2c\geq 0$, whence the result. ∎ ###### Remark. 
It is proved in [12] that a holomorphic curve $u\colon\Sigma^{2}\to\mathbb{S}^{6}$ satisfying $K\geq\frac{1}{6}$ must have either $K\equiv\frac{1}{6}$ or $K\equiv 1$, hence must be either the Boruvka sphere or the totally-geodesic $2$-sphere. Hence, any non-constant curvature example satisfying the hypothesis of Proposition 5.8 must have $c\in(0,\frac{1}{6})$. I do not know any examples of this type. We remark that it is also known [13] that the pinching condition $0\leq K\leq\frac{1}{6}$ implies $K\equiv 0$ or $K\equiv\frac{1}{6}$. See also [18] for further results. Returning to the main discussion, we now show that $H^{0}(K_{\Sigma}^{*}\otimes L_{N})\subset\text{Im}(G)$, thereby completing the proof of Proposition 5.3, and hence of Theorem 1.2. ###### Proposition 5.9. If $\alpha\in H^{0}(K_{\Sigma}^{*}\otimes L_{N})$, then $\alpha=\Psi_{\eta}$ for some $\eta\in\mathrm{Null}(u)$ with $(\Psi_{\eta})^{B}=0$. ###### Proof. Let $\alpha\in H^{0}(K_{\Sigma}^{*}\otimes L_{N})$, and recall the identification $H^{0}(K_{\Sigma}^{*}\otimes L_{N})\cong\\{\alpha\in\Gamma(T^{*}\Sigma\otimes E_{N})\colon\alpha\circ J=-J\circ\alpha\text{ and }\alpha\text{ real- holomorphic}\\}.$ Let $\xi=\frac{1}{2}J(\nabla^{\perp})^{*}\alpha$. Let $(e_{1},e_{2})$ be a geodesic frame at $p\in\Sigma$. Then at $p$, we have, by (5.8) and (5.7): $\displaystyle\Psi_{\xi}(v)=\frac{1}{2}J\Psi_{(\nabla^{\perp})^{*}\alpha}(v)$ $\displaystyle=\frac{1}{2}\left[(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i})\right]$ $\displaystyle=\frac{1}{2}\left[(\widetilde{\nabla}^{2}_{e_{i},Jv}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{e_{i},v}\alpha)(e_{i})+2\alpha(v)+(2K-2)[\alpha(v)]^{B}\right]$ $\displaystyle=\frac{1}{2}\left[(\widetilde{\nabla}^{2}_{e_{i},Jv}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{e_{i},v}\alpha)(e_{i})\right]+\alpha(v)$ where in the last step we used that $[\alpha(v)]^{B}=0$. 
Finally, using that $\alpha\circ J=-J\circ\alpha$ and that $\alpha$ is real-holomorphic, we have $(\widetilde{\nabla}^{2}_{e_{i},Jv}\alpha)(Je_{i})=(\widetilde{\nabla}^{2}_{e_{i},v}\alpha)(e_{i})$ at $p\in\Sigma$, whence $\Psi_{\xi}=\alpha.$ Now, since $\alpha$ is real-holomorphic, it follows that $\Psi_{\xi}$ is real-holomorphic, so $\Omega_{\xi}=0$, so by Lemma 5.6, we have $\Psi_{\mathcal{L}\xi}=0$, so that $\mathcal{L}\xi$ is real-holomorphic. Therefore, $\mathcal{L}(\mathcal{L}\xi)=-2\mathcal{L}\xi$, so that $\mathcal{L}(\mathcal{L}\xi+2\xi)=0$, and hence $\mathcal{L}\xi+2\xi=2\eta$ for some $\eta\in\text{Null}(u)$. Therefore, $\Psi_{2\eta}=\Psi_{\mathcal{L}\xi+2\xi}=\Psi_{\mathcal{L}\xi}+2\Psi_{\xi}=2\Psi_{\xi}=2\alpha,$ whence $\alpha=\Psi_{\eta}$ for an $\eta\in\text{Null}(u)$ with $(\Psi_{\eta})^{B}=(\Psi_{\xi})^{B}=0$. ∎ ## 6 Appendix: Proof of Lemma 5.5 The purpose of this appendix is to prove Lemma 5.5, which we restate as Lemma 6.2. Throughout, we fix $v\in\Gamma(T\Sigma)$, $\eta\in\Gamma(N\Sigma)$, and a local oriented orthonormal frame $(e_{1},e_{2})$ on $\Sigma$. ###### Lemma 6.1. Let $\alpha\in\Gamma(T^{*}\Sigma\otimes N\Sigma)$ satisfy $\alpha\circ J=-J\circ\alpha$. (a) We have: $R^{\perp}_{12}(\eta)=(K-1)J(\eta^{N}).$ (b) We have: $\displaystyle R^{\perp}(e_{i},Jv)\alpha(Je_{i})-R^{\perp}(e_{i},v)\alpha(e_{i})$ $\displaystyle=(2K-2)[\alpha(v)]^{N}$ (6.1) $\displaystyle\alpha(R^{\top}(e_{i},v)e_{i})-\alpha(R^{\top}(e_{i},Jv)Je_{i})$ $\displaystyle=-2K\alpha(v).$ (6.2) ###### Proof.
(a) Equations (4.6)-(4.7) followed by the Gauss equation (2.1) and the fact that $J(\eta^{N})=(J\eta)^{N}$ give $R^{\perp}_{12}(\eta)=(K-1)(J\eta)^{N}=(K-1)J(\eta^{N}).$ (b) An easy calculation shows that $\displaystyle R^{\perp}(e_{i},Jv)\alpha(Je_{i})$ $\displaystyle=R^{\perp}_{12}[\alpha(Jv)]$ $\displaystyle R^{\top}(e_{i},v)e_{i}$ $\displaystyle=-R^{\top}_{12}(Jv)=-Kv$ $\displaystyle R^{\perp}(e_{i},v)\alpha(e_{i})$ $\displaystyle=-R^{\perp}_{12}[\alpha(Jv)]$ $\displaystyle R^{\top}(e_{i},Jv)Je_{i}$ $\displaystyle=R^{\top}_{12}(Jv)=Kv.$ The equations on the left, together with (a), give (6.1). The equations on the right give (6.2). ∎ ###### Lemma 6.2. Let $\alpha\in\Gamma(T^{*}\Sigma\otimes N\Sigma)$ satisfy $\alpha\circ J=-J\circ\alpha$. (a) We have: $\displaystyle(\widetilde{\nabla}^{2}_{e_{i},Jv}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{e_{i},v}\alpha)(e_{i})$ $\displaystyle=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i})-2\alpha(v)-(2K-2)[\alpha(v)]^{B}.$ (b) If $(e_{1},e_{2})$ is geodesic at $p\in\Sigma$, then at the point $p$: $\displaystyle J\Psi_{(\nabla^{\perp})^{*}\alpha}(v)=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i}).$ ###### Proof. (a) Let $L$ denote the left side of the desired identity. 
Using the Ricci identity (5.3), followed by the formula (5.4), we have $\displaystyle L$ $\displaystyle=(\widetilde{\nabla}^{2}_{e_{i},Jv}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{e_{i},v}\alpha)(e_{i})$ $\displaystyle=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i})+(\widetilde{R}(e_{i},Jv)\alpha)(Je_{i})-(\widetilde{R}(e_{i},v)\alpha)(e_{i})$ $\displaystyle=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i})$ $\displaystyle\ \ \ \ \ +R^{\perp}(e_{i},Jv)\alpha(Je_{i})-R^{\perp}(e_{i},v)\alpha(e_{i})+\alpha(R^{\top}(e_{i},v)e_{i})-\alpha(R^{\top}(e_{i},Jv)Je_{i}).$ Now, using equations (6.1) and (6.2), we obtain: $\displaystyle L$ $\displaystyle=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i})+(2K-2)[\alpha(v)]^{N}-2K\alpha(v)$ $\displaystyle=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i})+(2K-2)\alpha(v)-(2K-2)[\alpha(v)]^{B}-2K\alpha(v)$ $\displaystyle=(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i})-2\alpha(v)-(2K-2)[\alpha(v)]^{B}$ This proves (a). (b) Let $(e_{1},e_{2})$ be a geodesic frame at $p\in\Sigma$. 
Then at the point $p$, we have that $(\widetilde{\nabla}_{e_{i}}\alpha)(Je_{i})=\nabla^{\perp}_{e_{i}}(\alpha(Je_{i}))=-J[\nabla^{\perp}_{e_{i}}(\alpha(e_{i}))]=-J[(\widetilde{\nabla}_{e_{i}}\alpha)(e_{i})].$ Using this and recalling (5.5), we compute $\displaystyle(\widetilde{\nabla}^{2}_{Jv,e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}^{2}_{v,e_{i}}\alpha)(e_{i})$ $\displaystyle=(\widetilde{\nabla}_{Jv}\widetilde{\nabla}_{e_{i}}\alpha)(Je_{i})-(\widetilde{\nabla}_{v}\widetilde{\nabla}_{e_{i}}\alpha)(e_{i})$ $\displaystyle=-J(\widetilde{\nabla}_{Jv}\widetilde{\nabla}_{e_{i}}\alpha)(e_{i})-(\widetilde{\nabla}_{v}\widetilde{\nabla}_{e_{i}}\alpha)(e_{i})$ $\displaystyle=-J\nabla^{\perp}_{Jv}[(\widetilde{\nabla}_{e_{i}}\alpha)(e_{i})]-\nabla^{\perp}_{v}[(\widetilde{\nabla}_{e_{i}}\alpha)(e_{i})]$ $\displaystyle=J\nabla^{\perp}_{Jv}((\nabla^{\perp})^{*}\alpha)+\nabla^{\perp}_{v}((\nabla^{\perp})^{*}\alpha)$ $\displaystyle=J\Psi_{(\nabla^{\perp})^{*}\alpha}(v)$ which proves the claim. ∎ ## References * [1] Antonio Alarcón, Franc Forstnerič, and Finnur Lárusson. Holomorphic Legendrian Curves in $\mathbb{CP}^{3}$ and Superminimal Surfaces in $S^{4}$. Geom. Topol., in press. Preprint, 2019. * [2] Benjamin Aslan. Transverse $J$-Holomorphic Curves in Nearly Kähler $\mathbb{CP}^{3}$. arXiv preprint arXiv:2101.03845. * [3] John Bolton, Franki Dillen, Bart Dioos, and Luc Vrancken. Almost Complex Surfaces in the Nearly Kähler $S^{3}\times S^{3}$. Tohoku Mathematical Journal, 67(1):1–17, 2015. * [4] John Bolton, Luc Vrancken, and Lyndon M. Woodward. On Almost Complex Curves in the Nearly Kähler 6-sphere. The Quarterly Journal of Mathematics, 45(4):407–427, 1994. * [5] Robert L. Bryant. Submanifolds and Special Structures on the Octonians. Journal of Differential Geometry, 17(2):185–232, 1982. * [6] Robert L. Bryant. On the Geometry of Almost Complex 6-Manifolds. Asian Journal of Mathematics, 10(3):561–605, 2006. * [7] Eugenio Calabi.
Minimal Immersions of Surfaces in Euclidean Spheres. Journal of Differential Geometry, 1(1-2):111–125, 1967. * [8] Bang-Yen Chen. Riemannian Submanifolds. Handbook of differential geometry, 1:187–418, 2000. * [9] Shiing Shen Chern. On the Minimal Immersions of the Two-sphere in a Space of Constant Curvature. Problems in Analysis, pages 27–40, 1970. * [10] Davi Chodosh, Otis; Maximo. The Morse index of a minimal surface. Notices Amer. Math. Soc., 68(6):892–898, 2021. * [11] Jean-Pierre Demailly. Complex Analytic and Differential Geometry. Citeseer, 1997. * [12] Franki Dillen, Barbara Opozda, Leopold Verstraelen, and Luc Vrancken. On Almost Complex Surfaces of the Nearly Kaehler 6-sphere. Zb. Rad. (Kragujevac) no. 8, 1987. * [13] Franki Dillen, Leopold Verstraelen, and Luc Vrancken. On Almost Complex Surfaces of the Nearly Kaehler 6-sphere II. Kodai Mathematical Journal, 10(3):261–271, 1987. * [14] Norio Ejiri. The Index of Minimal Immersions of $S^{2}$ into $S^{2n}$. Mathematische Zeitschrift, 184(1):127–132, 1983. * [15] Norio Ejiri. Equivariant Minimal Immersions of $S^{2}$ into $S^{2m}(1)$. Transactions of the American Mathematical Society, pages 105–124, 1986. * [16] Luis Fernández. The Space of Almost Complex 2-Spheres in the 6-Sphere. Transactions of the American Mathematical Society, 367(4):2437–2458, 2015. * [17] Phillip Griffiths and Joseph Harris. Principles of Algebraic Geometry, volume 19. Wiley Online Library, 1978. * [18] Hideya Hashimoto. $J$-Holomorphic Curves of a 6-Dimensional Sphere. Tokyo Journal of Mathematics, 23(1):137–159, 2000. * [19] Hideya Hashimoto. Deformations of Super-Minimal $J$-Holomorphic Curves of a 6-Dimensional Sphere. Tokyo J. Math., 27(2):285–298, 12 2004. * [20] Dominic D. Joyce. Riemannian Holonomy Groups and Calibrated Geometry, volume 12. Oxford University Press, 2007. * [21] Mikhail Karpukhin. Index of minimal spheres and isoperimetric eigenvalue inequalities. Inventiones mathematicae, 223(1):335–377, 2021. 
* [22] Shoshichi Kobayashi. Differential Geometry of Complex Vector Bundles. Princeton University Press, 2014. * [23] Rob Kusner and Peng Wang. On the Index of Minimal 2-tori in the 4-Sphere. arXiv preprint arXiv:1803.01615, 2018. * [24] H. Blaine Lawson. Lectures on Minimal Submanifolds. Publish or Perish, 1980. * [25] H. Blaine Lawson Jr. Complete Minimal Surfaces in $S^{3}$. Annals of Mathematics, pages 335–374, 1970. * [26] Jason D. Lotay. Asymptotically Conical Associative 3-folds. Quarterly Journal of Mathematics, 62(1):131–156, 2011. * [27] Jason D. Lotay. Ruled Lagrangian Submanifolds of the 6-Sphere. Transactions of the American Mathematical Society, 363(5):2305–2339, 2011. * [28] Fernando C. Marques and André Neves. Min-Max Theory and the Willmore Conjecture. Annals of Mathematics, pages 683–782, 2014. * [29] Mario J. Micallef and Jon G. Wolfson. The Second Variation of Area of Minimal Surfaces in Four-manifolds. Mathematische Annalen, 295(1):245–267, 1993. * [30] Sebastián Montiel and Francisco Urbano. Second Variation of Superminimal Surfaces into Self-Dual Einstein Four-Manifolds. Transactions of the American Mathematical Society, 349(6):2253–2269, 1997. * [31] Todd Rowland. Smooth Holomorphic Curves in $S^{6}$. PhD thesis, The University of Chicago, 1999. * [32] Kouei Sekigawa. Almost Complex Submanifolds of a 6-Dimensional Sphere. Kodai Mathematical Journal, 6(2):174–185, 1983. * [33] James Simons. Minimal Varieties in Riemannian Manifolds. Annals of Mathematics, pages 62–105, 1968. * [34] Francisco Urbano. Minimal Surfaces with Low Index in the Three-dimensional Sphere. Proceedings of the American Mathematical Society, pages 989–992, 1990. * [35] Feng Xu. Pseudo-Holomorphic Curves in Nearly Kähler $\mathbb{CP}^{3}$. Differential Geometry and its Applications, 28(1):107–120, 2010\. National Center for Theoretical Sciences National Taiwan University Taipei, Taiwan E-mail address<EMAIL_ADDRESS>
# Machine learning of high dimensional data on a noisy quantum processor Evan Peters<EMAIL_ADDRESS>Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada Fermi National Accelerator Laboratory, Batavia, IL 60510 João Caldeira Fermi National Accelerator Laboratory, Batavia, IL 60510 Alan Ho Google Quantum AI, Venice, CA 90291, United States Stefan Leichenauer Sandbox@Alphabet, Mountain View, CA 94043, United States Masoud Mohseni Google Quantum AI, Venice, CA 90291, United States Hartmut Neven Google Quantum AI, Venice, CA 90291, United States Panagiotis Spentzouris Fermi National Accelerator Laboratory, Batavia, IL 60510 Doug Strain Google Quantum AI, Venice, CA 90291, United States Gabriel N. Perdue Fermi National Accelerator Laboratory, Batavia, IL 60510 ###### Abstract We present a quantum kernel method for high-dimensional data analysis using Google’s universal quantum processor, Sycamore. This method is successfully applied to the cosmological benchmark of supernova classification using real spectral features with no dimensionality reduction and without vanishing kernel elements. Instead of using a synthetic dataset of low dimension or pre-processing the data with a classical machine learning algorithm to reduce the data dimension, this experiment demonstrates that machine learning with real, high-dimensional data is possible using a quantum processor, but it requires careful attention to shot statistics and mean kernel element size when constructing a circuit ansatz. Our experiment utilizes 17 qubits to classify 67-dimensional data, a significantly higher dimensionality than the largest prior quantum kernel experiments, resulting in classification accuracy that is competitive with noiseless simulation and comparable classical techniques.
quantum computing, machine learning, kernel methods ††preprint: FERMILAB-PUB-20-624-QIS ## I Introduction Quantum kernel methods (QKM) [1, 2] provide techniques for utilizing a quantum co-processor in a machine learning setting. These methods were recently proven to provide a speedup over classical methods for certain specific input data classes [3]. They have also been used to quantify the computational power of data in quantum machine learning algorithms and to derive the conditions under which quantum models will be capable of outperforming classical ones [4]. Prior experimental work [5, 6, 1] has focused on artificial or heavily pre-processed data, hardware implementations involving very few qubits, or circuit connectivity unsuitable for NISQ [7] processors; recent experimental results show potential for many-qubit applications of QKM to high energy physics [8]. In this work, we extend the method of machine learning based on quantum kernel methods up to 17 hardware qubits requiring only nearest-neighbor connectivity. We use this circuit structure to prepare a kernel matrix for a classical support vector machine to learn patterns in 67-dimensional supernova data for which competitive classical classifiers fail to achieve 100% accuracy. To extract useful information from a processor without quantum error correction (QEC), we implement error mitigation techniques specific to the QKM algorithm and experimentally demonstrate the algorithm’s robustness to some of the device noise. Additionally, we justify our circuit design based on its ability to produce large kernel magnitudes that can be sampled to high statistical certainty with relatively short experimental runs. We implement this algorithm on the Google Sycamore processor, which we accessed through Google’s Quantum Computing Service. This machine is similar to the quantum supremacy demonstration Sycamore chip [9], but with only 23 qubits active.
We achieve competitive results on a nontrivial classical dataset, and find intriguing classifier robustness in the face of moderate circuit fidelity. Our results motivate further theoretical work on noisy kernel methods and on techniques for operating on real, high-dimensional data without additional classical pre-processing or dimensionality reduction. ## II Quantum kernel Support Vector Machines A common task in machine learning is supervised learning, wherein an algorithm consumes datum-label pairs $(x,y)\in\mathcal{X}\times\{0,1\}$ and outputs a function $f:\mathcal{X}\rightarrow\{0,1\}$ that ideally predicts labels for seen (training) input data and generalizes well to unseen (test) data. A popular supervised learning algorithm is the Support Vector Machine (SVM) [10, 11], which is trained on inner products $\langle x_{i},x_{j}\rangle$ in the input space to find a robust linear classification boundary that best separates the data. An important technique for generalizing SVM classifiers to non-linearly separable data is the so-called “kernel trick”, which replaces $\langle x_{i},x_{j}\rangle$ in the SVM formulation by a symmetric positive definite kernel function $k(x_{i},x_{j})$ [12]. Since every kernel function corresponds to an inner product on input data mapped into a feature Hilbert space [13], linear classification boundaries found by an SVM trained on a high-dimensional mapping correspond to complex, non-linear functions in the input space. Figure 1: In this experiment we performed limited data preprocessing that is standard for state-of-the-art classical techniques, before using the quantum processor to estimate the kernel matrix $\hat{K}_{ij}$ for all pairs of encoded datapoints $(x_{i},x_{j})$ in each dataset. We then passed the kernel matrix back to a classical computer to optimize an SVM using cross validation and hyperparameter tuning before evaluating the SVM to produce a final train/test score.
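As a concrete illustration of this correspondence (an illustrative aside, not part of the experiment), the quadratic kernel $k(x,z)=\langle x,z\rangle^{2}$ equals an ordinary inner product after an explicit feature map into a higher-dimensional space:

```python
import numpy as np

def phi(x):
    # Explicit feature map for the quadratic kernel k(x, z) = <x, z>^2:
    # phi sends x to the flattened outer product x x^T, so that
    # <phi(x), phi(z)> = sum_{ij} x_i x_j z_i z_j = (<x, z>)^2.
    return np.outer(x, x).ravel()

x, z = np.array([1.0, 2.0, 3.0]), np.array([0.5, -1.0, 2.0])
k_direct = np.dot(x, z) ** 2           # kernel evaluated in the input space
k_feature = np.dot(phi(x), phi(z))     # inner product in the feature space
```

Both evaluations agree exactly, which is the content of the kernel trick: the SVM never needs the feature vectors themselves, only the kernel values.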
Quantum kernel methods can potentially improve the performance of classifiers by using a quantum computer to map input data in $\mathcal{X}\subset\mathbb{R}^{d}$ into a high-dimensional complex Hilbert space, potentially resulting in a kernel function that is expressive and challenging to compute classically. It is difficult to know without sophisticated knowledge of the data generation process whether a given kernel is particularly suited to a dataset, but perhaps families of classically hard kernels may be shown empirically to offer performance improvements. In this work we focus on a non-variational quantum kernel method, which uses a quantum circuit $U(x)$ to map real data into quantum state space according to a map $\phi(x)=U(x)|0\rangle$. The kernel function we employ is then the squared inner product between pairs of mapped input data given by $k(x_{i},x_{j})=|\langle\phi(x_{i})|\phi(x_{j})\rangle|^{2}$, which allows for more expressive models compared to the alternative choice $\langle\phi(x_{i})|\phi(x_{j})\rangle$ [4]. In the absence of noise, the kernel matrix $K_{ij}=k(x_{i},x_{j})$ for a fixed dataset can therefore be estimated up to statistical error by using a quantum computer to sample outputs of the circuit $U^{\dagger}(x_{i})U(x_{j})$ and then computing the empirical probability of the all-zeros bitstring. However in practice, the kernel matrix $\hat{K}_{ij}$ sampled from the quantum computer may be significantly different from $K_{ij}$ due to device noise and readout error. Once $\hat{K}_{ij}$ is computed for all pairs of input data in the training set, a classical SVM can be trained on the outputs of the quantum computer. 
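The pipeline just described can be sketched end to end with a toy product-state encoder standing in for the paper's entangling circuit; the feature map, dataset, and all parameters below are illustrative assumptions, not the experimental setup:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def feature_map(x, n_qubits=2):
    # Toy stand-in for phi(x) = U(x)|0>: each feature sets an Rx rotation
    # angle on one qubit (a product state; the paper's ansatz also entangles).
    state = np.array([1.0 + 0j])
    for k in range(n_qubits):
        theta = x[k % len(x)]
        state = np.kron(state, np.array([np.cos(theta / 2),
                                         -1j * np.sin(theta / 2)]))
    return state

def sampled_kernel(xi, xj, repetitions=5000):
    # k(x_i, x_j) = |<phi(x_i)|phi(x_j)>|^2 is the all-zeros probability of
    # U^dag(x_j) U(x_i)|0>, so the estimator is nu_0 / R, nu_0 ~ Binomial(R, k).
    k = min(abs(np.vdot(feature_map(xi), feature_map(xj))) ** 2, 1.0)
    return rng.binomial(repetitions, k) / repetitions

# Toy two-class angle data and the sampled (symmetrized) train kernel matrix.
X = np.vstack([rng.normal(-0.8, 0.3, (10, 2)), rng.normal(0.8, 0.3, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
K_hat = np.eye(len(X))
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        K_hat[i, j] = K_hat[j, i] = sampled_kernel(X[i], X[j])

# A classical SVM consumes the sampled kernel directly.
clf = SVC(kernel="precomputed", C=1.0).fit(K_hat, y)
# The trained decision function is sign(sum_i alpha_i y_i k(x_i, .) + b);
# scikit-learn exposes alpha_i y_i as clf.dual_coef_ and b as clf.intercept_.
decision = K_hat[:, clf.support_] @ clf.dual_coef_.ravel() + clf.intercept_
```

Replacing `feature_map`/`sampled_kernel` with hardware estimates is the only change needed to reproduce the structure of the experiment.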
An SVM trained on a size-$m$ training set $\mathcal{T}\subset\mathcal{X}$ learns to predict the class $f(x)=\hat{y}$ of an input data point $x$ according to the decision function: $f(x)=\text{sign}\left(\sum_{i=1}^{m}\alpha_{i}y_{i}k(x_{i},x)+b\right)$ (1) where $\alpha_{i}$ and $b$ are parameters determined during the training stage of the SVM. Training and evaluating the SVM on $\mathcal{T}$ requires an $m\times m$ kernel matrix, after which each data point $z$ in the testing set $\mathcal{V}\subset\mathcal{X}$ may be classified using an additional $m$ evaluations of $k(x_{i},z)$ for $i=1\dots m$. Figure 1 provides a schematic representation of the process used to train an SVM using quantum kernels. ### II.1 Data and preprocessing We used the dataset provided in the Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC) [14] that simulates observations of the Vera C. Rubin Observatory [15]. The PLAsTiCC data consists of simulated astronomical time series for several different classes of astronomical objects. The time series consist of measurements of flux at six wavelength bands. Here we work on data from the training set of the challenge. To transform the problem into a binary classification problem, we focus on the two most represented classes, 42 and 90, which correspond to types II and Ia supernovae, respectively. Each time series can have a different number of flux measurements in each of the six wavelength bands. In order to classify different time series using an algorithm with a fixed number of inputs, we transform each time series into the same set of derived quantities. 
These include: the number of measurements; the minimum, maximum, mean, median, standard deviation, and skew of both flux and flux error; the sum and skew of the ratio between flux and flux error, and of the flux times squared flux ratio; the mean and maximum time between measurements; spectroscopic and photometric redshifts for the host galaxy; the position of each object in the sky; and the first two Fourier coefficients for each band, as well as kurtosis and skewness. In total, this transformation yields a 67-dimensional vector for each object. To prepare data for the quantum circuit, we convert lognormal-distributed spectral inputs to $\log$ scale, and normalize all inputs to $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. We perform no dimensionality reduction. Our data processing pipeline is consistent with the treatment applied to state-of-the-art classical methods. Our classical benchmark is a competitive solution to this problem, although significant additional feature engineering leveraging astrophysics domain knowledge could possibly raise the benchmark score by a few percent. ### II.2 Circuit design Figure 2: a. 14-qubit example of the type 2 circuit used for experiments in this work. The dashed box indicates $U(x_{i})$, while the remainder of the circuit computes $U^{\dagger}(x_{j})$ to output $|\langle\phi(x_{j})|\phi(x_{i})\rangle|^{2}$. Non-virtual gates occurring at the boundary (dashed line) are contracted for hardware runs. b. The basic encoding block consists of a Hadamard followed by three single-qubit rotations, each parameterized by a different element of the input data $x$ (normalization and encoding constants omitted here). c. We used the $\sqrt{\text{iSWAP}}$ entangling gate, a hardware-native two-qubit gate on the Sycamore processor.
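A minimal sketch of the Section II.1 preprocessing, assuming a per-feature min-max rescaling (the excerpt does not spell out the exact normalization used):

```python
import numpy as np

def preprocess(features, log_cols):
    # Log-transform the lognormal-distributed columns, then min-max rescale
    # every column into [-pi/2, pi/2]. No dimensionality reduction.
    X = features.astype(float).copy()
    X[:, log_cols] = np.log(X[:, log_cols])
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return (X - lo) / span * np.pi - np.pi / 2

# Hypothetical 3-sample, 2-feature input: column 0 is lognormal-like.
X = np.array([[1.0, -3.0], [10.0, 0.0], [100.0, 3.0]])
Xp = preprocess(X, log_cols=[0])
```

After this step every feature lies in $[-\pi/2, \pi/2]$ and can be consumed directly as a rotation angle by the encoding block of Figure 2.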
To compute the kernel matrix $K_{ij}\equiv k(x_{i},x_{j})$ over the fixed dataset we must run $R$ repetitions of each circuit $U^{\dagger}(x_{j})U(x_{i})$ to determine the total counts $\nu_{0}$ of the all zeros bitstring, resulting in an estimator $\hat{K}_{ij}=\frac{\nu_{0}}{R}$. This introduces a challenge since quantum kernels must also be sampled from hardware with low enough statistical uncertainty to recover a classifier with similar performance to noiseless conditions. Since the likelihood of large relative statistical error between $K$ and $\hat{K}$ grows with decreasing magnitude of $\hat{K}$ and decreasing $R$, the performance of the hardware-based classifier will degrade when the kernel matrix to be sampled is populated by small entries. Conversely, large kernel magnitudes are a desirable feature for a successful quantum kernel classifier, and a key goal in circuit design is to balance the requirement of large kernel matrix elements with a choice of mapping that is difficult to compute classically. Another significant design challenge is to construct a circuit that separates data according to class without mapping data so far apart as to lose information about class relationships - an effect sometimes referred to as the “curse of dimensionality” in classical machine learning. For this experiment, we accounted for these design challenges and the need to accommodate high-dimensional data by mapping data into quantum state space using the quantum circuit shown in Figure 2. Each local rotation in the circuit is parameterized by a single element of preprocessed input data so that inner products in the quantum state space correspond to a similarity measure for features in the input space. Importantly, the circuit structure is constrained by matching the input data dimensionality to the number of local rotations so that the circuit depth and qubit count individually do not significantly impact the performance of the SVM classifier in a noiseless setting. 
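The statistical argument above follows from binomial counting: with $\nu_{0}\sim\mathrm{Binomial}(R,p)$, the relative error of $\hat{K}=\nu_{0}/R$ has standard deviation $\sqrt{(1-p)/(pR)}$, which blows up for small $p$ or small $R$. A quick numerical check (the small kernel magnitude below is a hypothetical value chosen for contrast):

```python
import numpy as np

def relative_error(p, repetitions):
    # nu_0 ~ Binomial(R, p) gives Var(k_hat) = p(1 - p)/R, so the relative
    # statistical error is std(k_hat)/p = sqrt((1 - p)/(p * R)).
    return np.sqrt((1 - p) / (p * repetitions))

R = 5000                             # repetitions per circuit, as in the experiment
err_large = relative_error(1e-1, R)  # median kernel magnitude achieved by this ansatz
err_small = relative_error(1e-3, R)  # hypothetical small kernel element
```

With $R=5000$, a kernel element of $10^{-1}$ carries only a few percent relative error, while a $10^{-3}$ element would be dominated by shot noise, which is why the circuit is designed to keep kernel magnitudes large.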
This circuit structure consistently results in large-magnitude inner products (median $K\geq 10^{-1}$), yielding estimates for $\hat{K}$ with very little statistical error. We provide further empirical evidence justifying our choice of circuit in Appendix B. ## III Hardware classification results ### III.1 Dataset selection Figure 3: Learning curve for an SVM trained using noiseless circuit encoding on 17 qubits vs. RBF kernel $k(x_{i},x_{j})=\exp(-\gamma||x_{i}-x_{j}||^{2})$. Points reflect train/test accuracy for a classifier trained on a stratified 10-fold split resulting in a size-$x$ balanced subset of preprocessed supernova datapoints. Error bars indicate standard deviation over 10 trials of downsampling, and the dashed line indicates the size $m=210$ of the training set chosen for this experiment. We are motivated to minimize the size of the training set $\mathcal{T}\subset\mathcal{X}$ since the complexity cost of training an SVM on $m$ datapoints scales as $\mathcal{O}(m^{2})$. However, too small a training sample will result in poor generalization of the trained model, leading to low-quality class predictions for data in the reserved size-$v$ test set $\mathcal{V}$. We explored this tradeoff by simulating the classifiers for varying train set sizes in Cirq [16] to construct learning curves (Figure 3) standard in machine learning. We found that our simulated 17-qubit classifier applied to 67-dimensional supernova data was competitive with a classical SVM trained using the Radial Basis Function (RBF) kernel on identical data subsets. For hardware runs, we constructed train/test datasets for which the mean train and k-fold validation scores achieved approximately the mean performance over randomly downsampled data subsets, accounting for the SVM hyperparameter optimization.
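The learning-curve construction can be mimicked with a purely classical stand-in; the toy data and subset sizes below are illustrative, whereas the experiment used simulated quantum kernels in Cirq:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def make_data(m):
    # Toy balanced two-class dataset standing in for the supernova features.
    X = np.vstack([rng.normal(-1, 1.2, (m // 2, 4)),
                   rng.normal(1, 1.2, (m // 2, 4))])
    y = np.array([0] * (m // 2) + [1] * (m // 2))
    return X, y

X_test, y_test = make_data(100)          # fixed held-out evaluation set
sizes = [10, 40, 160]                    # growing training-set sizes
test_acc = []
for m in sizes:
    X_tr, y_tr = make_data(m)            # fresh size-m training draw
    clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)
    test_acc.append(clf.score(X_test, y_test))
```

Plotting `test_acc` against `sizes` gives a learning curve of the kind shown in Figure 3, exposing where additional training data stops paying for its $\mathcal{O}(m^{2})$ cost.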
The final dataset for each choice of qubits was constructed by producing a $1000\times 1000$ simulated kernel matrix, repeatedly performing 4-fold cross validation on a size-280 subset, and then selecting as the train/test set the exact elements from the fold that resulted in an accuracy closest to the mean validation score over all trials and folds. ### III.2 Hardware classification and Postprocessing Figure 4: a. Parameters for the three circuits implemented in this experiment. Values in parentheses are calculated ignoring contributions due to virtual Z gates. b. The depth of each circuit and number of entangling layers (dark grey) scales to accommodate all 67 features of the input data, so that the expressive power of the circuit doesn’t change significantly across different numbers of qubits. c. The test accuracy for hardware QKM is competitive with the noiseless simulations even in the case of relatively low circuit fidelity, across multiple choices of qubit counts. The presence of hardware noise significantly reduces the ability of the model to overfit the data. Error bars on simulated data represent standard deviation of accuracy for an ensemble of SVM classifiers trained on 10 size-$m$ downsampled kernel matrices and tested on size-$v$ downsampled test sets (no replacement). Dataset sampling errors are propagated to the hardware outcomes, but the lack of larger hardware training/test sets prevents appropriate characterization of a similar margin of error. We computed the quantum kernels experimentally using the Google Sycamore processor [9] accessed through Google’s Quantum Computing Service. At the time of experiments, the device consisted of 23 superconducting qubits with nearest-neighbor (grid) connectivity. The processor supports single-qubit Pauli gates with $>99\%$ randomized benchmarking fidelity and $\sqrt{i\text{SWAP}}$ native entangling gates with XEB fidelities [17, 9] typically greater than $97\%$.
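The fold-selection procedure described at the start of this subsection (4-fold cross validation on a size-280 subset, keeping the fold whose score lands closest to the mean) can be sketched as follows; the validation accuracies here are placeholder values, since the real ones come from repeated cross validation on the simulated kernel matrix:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 280)     # stand-in labels for the size-280 subset
X = np.zeros((280, 1))          # feature values are irrelevant to the split
folds = list(StratifiedKFold(n_splits=4, shuffle=True,
                             random_state=0).split(X, y))

# Placeholder validation accuracies, one per fold.
acc = rng.uniform(0.7, 0.9, size=len(folds))
# Keep the train/test fold whose accuracy is closest to the mean score.
best = int(np.argmin(np.abs(acc - acc.mean())))
train_idx, test_idx = folds[best]
```

With 280 points and 4 folds, each fold holds out roughly 70 points, matching the $m=210$, $v=70$ train/test split used on hardware.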
To test our classifier performance on hardware, we trained a quantum kernel SVM using $n$-qubit circuits for $n\in\{10,14,17\}$ on $d=67$ supernova data with balanced class priors using an $m=210$, $v=70$ train/test split. We ran 5000 repetitions per circuit for a total of $5000\left(m(m-1)/2+mv\right)\approx 1.83\times 10^{8}$ experiments per number of qubits. As described in Section III.1, the train and test sets were constructed to provide a faithful representation of classifier accuracy applied to datasets of restricted size. Typically, the time cost of computing the decision function (Equation 1) is reduced to some fraction of $mv$ since only a small subset of training inputs are selected as support vectors. However, in hardware experiments we observed that a large fraction ($>90\%$) of data in $\mathcal{T}$ were selected as support vectors, likely due to a combination of a complex decision boundary and noise in the calculation of $\hat{K}$. Training the SVM classifier in postprocessing required choosing a single hyperparameter $C$ that applies a penalty for misclassification, which can significantly affect the noise robustness of the final classifier. To determine $C$ without overfitting the model, we performed leave-one-out cross validation (LOOCV) on $\mathcal{T}$ to determine $C_{opt}$ corresponding to the maximum mean LOOCV score. We then fixed $C=C_{opt}$ to evaluate the test accuracy $\frac{1}{v}\sum_{j=1}^{v}\Pr(f(x_{j})=y_{j})$ on reserved datapoints taken from $\mathcal{V}$. Figure 4 shows the classifier accuracies for each number of qubits, and demonstrates that the performance of the QKM is not restricted by the number of qubits used. Significantly, the QKM classifier performs reasonably well even when observed bitstring probabilities (and therefore $\hat{K}_{ij}$) are suppressed by a factor of 50%-70% due to limited circuit fidelity.
This is due in part to the fact that the SVM decision function is invariant under scaling transformations $K\rightarrow rK$ and highlights the noise robustness of quantum kernel methods. ## IV Conclusion and outlook Whether and how quantum computing will contribute to machine learning for real world classical datasets remains to be seen. In this work, we have demonstrated that quantum machine learning at an intermediate scale (10 to 17 qubits) can work on “natural” datasets using Google’s superconducting quantum computer. In particular, we presented a novel circuit ansatz capable of processing high-dimensional data from a real-world scientific experiment without dimensionality reduction or significant pre-processing on input data, and without the requirement that the number of qubits matches the data dimensionality. We demonstrated classification results that were competitive with noiseless simulation despite hardware noise and lack of quantum error correction. While the circuits we implemented are not candidates for demonstrating quantum advantage, these findings suggest quantum kernel methods may be capable of achieving high classification accuracy on near-term devices. Careful attention must be paid to the impact of shot statistics and kernel element magnitudes when evaluating the performance of quantum kernel methods. This work highlights the need for further theoretical investigation under these constraints, as well as motivates further studies in the properties of noisy kernels. The main open problem is to identify a “natural” data set that could lead to beyond-classical performance for quantum machine learning. We believe that this can be achieved on datasets that demonstrate correlations that are inherently difficult to represent or store on a classical computer, hence inherently difficult or inefficient to learn/infer on a classical computer. 
This could include quantum data from simulations of quantum many-body systems near a critical point or solving linear and nonlinear systems of equations on a quantum computer [18, 19]. The quantum data could also be generated from quantum sensing and quantum communication applications. The software library TensorFlow Quantum (TFQ) [20] was recently developed to facilitate the exploration of various combinations of data, models, and algorithms for quantum machine learning. Very recently, a quantum advantage has been proposed for an engineered dataset and numerically validated on up to 30 qubits in TFQ using quantum kernel methods similar to those described in this experimental demonstration [4]. These developments in quantum machine learning, alongside the experimental results of this work, suggest the exciting possibility of realizing quantum advantage with quantum machine learning on near-term processors. ###### Acknowledgements. We would like to thank the Google Quantum AI team for time on their Sycamore-chip quantum computer. In particular, we thank Kostyantyn Kechedzhi for the presentation and discussion of error mitigation techniques that were incorporated into this experiment, and Ping Yeh for participating in some of the group discussions. Pedram Roushan provided a great deal of useful feedback on early versions of the draft and joined in several useful discussions. We would also like to thank Stavros Efthymiou for some early work on the quantum circuit simulations, and Brian Nord for consultation on interesting datasets in the domain of astrophysics and cosmology. EP is partially supported through A. Kempf’s Google Faculty Award. JC, GP, and EP are partially supported by the DOE/HEP QuantISED program grant HEP Machine Learning and Optimization Go Quantum, identification number 0000240323. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
## References * Havlícek _et al._ [2019] V. Havlícek, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, Supervised learning with quantum-enhanced feature spaces, Nature 567, 209 (2019). * Schuld and Killoran [2019] M. Schuld and N. Killoran, Quantum machine learning in feature hilbert spaces, Phys. Rev. Lett. 122, 040504 (2019). * Liu _et al._ [2020] Y. Liu, S. Arunachalam, and K. Temme, A rigorous and robust quantum speed-up in supervised machine learning (2020), arXiv:2010.02174 [quant-ph] . * Huang _et al._ [2020] H.-Y. Huang, M. Broughton, M. Mohseni, R. Babbush, S. Boixo, H. Neven, and J. R. McClean, Power of data in quantum machine learning (2020), arXiv:2011.01938 [quant-ph] . * Kusumoto _et al._ [2019] T. Kusumoto, K. Mitarai, K. Fujii, M. Kitagawa, and M. Negoro, Experimental quantum kernel machine learning with nuclear spins in a solid (2019), arXiv:1911.12021 [quant-ph] . * Bartkiewicz _et al._ [2020] K. Bartkiewicz, C. Gneiting, A. Černoch, K. Jiráková, K. Lemr, and F. Nori, Experimental kernel-based quantum machine learning in finite feature space, Scientific Reports 10, 1 (2020). * Preskill [2018] J. Preskill, Quantum Computing in the NISQ era and beyond, Quantum 2, 79 (2018). * Wu _et al._ [2020] S. L. Wu, J. Chan, W. Guan, S. Sun, A. Wang, C. Zhou, M. Livny, F. Carminati, A. D. Meglio, A. C. Y. Li, J. Lykken, P. Spentzouris, S. Y.-C. Chen, S. Yoo, and T.-C. Wei, Application of quantum machine learning using the quantum variational classifier method to high energy physics analysis at the lhc on ibm quantum computer simulator and hardware with 10 qubits (2020), arXiv:2012.11560 [quant-ph] . * Arute _et al._ [2019] F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. S. L. Brandao, D. A. Buell, B. Burkett, Y. Chen, Z. Chen, B. Chiaro, R. Collins, W. Courtney, A. Dunsworth, E. Farhi, B. Foxen, A. Fowler, C. Gidney, M. Giustina, R. Graff, K. Guerin, S. Habegger, M. P. Harrigan, M. 
J. Hartmann, A. Ho, M. Hoffmann, T. Huang, T. S. Humble, S. V. Isakov, E. Jeffrey, Z. Jiang, D. Kafri, K. Kechedzhi, J. Kelly, P. V. Klimov, S. Knysh, A. Korotkov, F. Kostritsa, D. Landhuis, M. Lindmark, E. Lucero, D. Lyakh, S. Mandrà, J. R. McClean, M. McEwen, A. Megrant, X. Mi, K. Michielsen, M. Mohseni, J. Mutus, O. Naaman, M. Neeley, C. Neill, M. Y. Niu, E. Ostby, A. Petukhov, J. C. Platt, C. Quintana, E. G. Rieffel, P. Roushan, N. C. Rubin, D. Sank, K. J. Satzinger, V. Smelyanskiy, K. J. Sung, M. D. Trevithick, A. Vainsencher, B. Villalonga, T. White, Z. J. Yao, P. Yeh, A. Zalcman, H. Neven, and J. M. Martinis, Quantum supremacy using a programmable superconducting processor, Nature 574, 505 (2019). * Cortes and Vapnik [1995] C. Cortes and V. Vapnik, Support-vector networks, Machine learning 20, 273 (1995). * Boser _et al._ [1992] B. E. Boser, I. M. Guyon, and V. N. Vapnik, A training algorithm for optimal margin classifiers, in _Proceedings of the Fifth Annual Workshop on Computational Learning Theory_, COLT ’92 (ACM, New York, NY, USA, 1992) pp. 144–152. * Aizerman _et al._ [1964] M. Aizerman, E. Braverman, and R. Rozoner, Theoretical foundations of potential function method in pattern recognition learning, Automation and Remote Control 6, 821 (1964). * Aronszajn [1950] N. Aronszajn, Theory of reproducing kernels, Transactions of the American mathematical society 68, 337 (1950). * The PLAsTiCC team _et al._ [2018] The PLAsTiCC team, T. A. Jr., A. Bahmanyar, R. Biswas, M. Dai, L. Galbany, R. Hložek, E. E. O. Ishida, S. W. Jha, D. O. Jones, R. Kessler, M. Lochner, A. A. Mahabal, A. I. Malz, K. S. Mandel, J. R. Martínez-Galarza, J. D. McEwen, D. Muthukrishna, G. Narayan, H. Peiris, C. M. Peters, K. Ponder, C. N. Setzer, The LSST Dark Energy Science Collaboration, and The LSST Transients and Variable Stars Science Collaboration, The photometric lsst astronomical time-series classification challenge (plasticc): Data set (2018), arXiv:1810.00001 [astro-ph.IM] . 
* Rubin Observatory [2020] Vera C. Rubin Observatory, https://www.lsst.org/about (2020). * Quantum AI team and collaborators [2020] Quantum AI team and collaborators, Cirq (2020). * Neill _et al._ [2018] C. Neill, P. Roushan, K. Kechedzhi, S. Boixo, S. V. Isakov, V. Smelyanskiy, A. Megrant, B. Chiaro, A. Dunsworth, K. Arya, R. Barends, B. Burkett, Y. Chen, Z. Chen, A. Fowler, B. Foxen, M. Giustina, R. Graff, E. Jeffrey, T. Huang, J. Kelly, P. Klimov, E. Lucero, J. Mutus, M. Neeley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, H. Neven, and J. M. Martinis, A blueprint for demonstrating quantum supremacy with superconducting qubits, Science 360, 195 (2018), https://science.sciencemag.org/content/360/6385/195.full.pdf . * Kiani _et al._ [2020] B. T. Kiani, G. D. Palma, D. Englund, W. Kaminsky, M. Marvian, and S. Lloyd, Quantum advantage for differential equation analysis (2020), arXiv:2010.15776 [quant-ph] . * Lloyd _et al._ [2020] S. Lloyd, G. D. Palma, C. Gokler, B. Kiani, Z.-W. Liu, M. Marvian, F. Tennie, and T. Palmer, Quantum algorithm for nonlinear differential equations (2020), arXiv:2011.06571 [quant-ph] . * Broughton _et al._ [2020] M. Broughton, G. Verdon, T. McCourt, A. J. Martinez, J. H. Yoo, S. V. Isakov, P. Massey, M. Y. Niu, R. Halavati, E. Peters, M. Leib, A. Skolik, M. Streif, D. V. Dollen, J. R. McClean, S. Boixo, D. Bacon, A. K. Ho, H. Neven, and M. Mohseni, Tensorflow quantum: A software framework for quantum machine learning (2020), arXiv:2003.02989 [quant-ph] . * Burges [1998] C. J. Burges, A tutorial on support vector machines for pattern recognition, Data mining and knowledge discovery 2, 121 (1998). * Shawe-Taylor _et al._ [2004] J. Shawe-Taylor, N. Cristianini, _et al._ , _Kernel methods for pattern analysis_ (Cambridge university press, 2004). * Fletcher [1987] R. Fletcher, _Practical Methods of Optimization._ , Vol. 2 (John Wiley and Sons, Inc., 1987). * Kuhn and Tucker [1951] H. W. Kuhn and A. W.
Tucker, Nonlinear programming, in _Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability_ (University of California Press, Berkeley, Calif., 1951) pp. 481–492. * Goemans [2015] M. Goemans, Lecture notes in 18.310a principles of discrete applied mathematics (2015). * Farhi and Neven [2018] E. Farhi and H. Neven, Classification with quantum neural networks on near term processors (2018), arXiv preprint arXiv:1802.06002 (2018). * F.R.S. [1901] K. P. F.R.S., Liii. on lines and planes of closest fit to systems of points in space, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2, 559 (1901), https://doi.org/10.1080/14786440109462720 . * Tilma and Sudarshan [2004] T. Tilma and E. Sudarshan, Generalized euler angle parameterization for u(n) with applications to su(n) coset volume measures, Journal of Geometry and Physics 52, 263 (2004). * Kandala _et al._ [2017] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets, Nature 549, 242 (2017). * Doug Strain [2019] Doug Strain, Private communication (2019). * Friedman [1994] J. H. Friedman, _Flexible Metric Nearest Neighbor Classification_ , Tech. Rep. (1994). * Pedregosa _et al._ [2011] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research 12, 2825 (2011). * Sedgewick [2001] R. Sedgewick, _Algorithms in c, Part 5: Graph Algorithms, Third Edition_ , 3rd ed. (Addison-Wesley Professional, 2001). * Hagberg _et al._ [2008] A. A. Hagberg, D. A. Schult, and P. J. Swart, Exploring network structure, dynamics, and function using NetworkX, 7th Python in Science Conference (SciPy 2008) , 11 (2008). 
* Bi and Zhang [2005] J. Bi and T. Zhang, Support vector classification with input data uncertainty, in _Advances in Neural Information Processing Systems_ (2005) pp. 161–168.

## Appendix A Binary classification with Support Vector Machines

Supervised learning algorithms are tasked with the following problem: given input data $\mathcal{X}\subset\mathbb{R}^{d}$ composed of $d$-dimensional datapoints, with a class label from $\mathcal{Y}=\\{-1,1\\}$ attached to each datapoint, construct a function $f$ that successfully predicts $f(x_{i})=y_{i}$ for datapoint-label pairs $(x_{i},y_{i})\in\mathcal{X}\times\mathcal{Y}$ taken from the dataset. We now introduce the theoretical foundations of the Support Vector Machine, or SVM (see [21, 22] for a thorough review). A linear SVM performs binary classification by constructing a $(d-1)$-dimensional hyperplane $\langle x,w\rangle+b=0$ that divides elements of $\mathcal{X}$ according to their class, chosen to have the largest perpendicular distance ("margin") to elements of either class. The hyperplane parameters capable of classifying linearly separable data must satisfy the inequality $y_{i}\left(\langle x_{i},w\rangle+b\right)\geq 1$ (2) which corresponds to a symmetric margin of width $2/||w||$ dividing the classes. Maximizing the margin therefore corresponds to minimizing the norm of the hyperplane normal vector, so the task of the SVM is to solve the convex problem $\min_{w,b}\frac{1}{2}||w||^{2}$ (3) Equations 2-3 frame a constrained optimization problem that can be solved by the method of Lagrange multipliers.
The Lagrangian to minimize is then $L_{P}=\frac{1}{2}||w||^{2}-\sum_{i=1}\alpha_{i}y_{i}(\langle x_{i},w\rangle+b)+\sum_{i=1}\alpha_{i}$ (4) Recognizing that the inequality of Equation 2 can only be satisfied by fully separable data, SVM classifiers trained on real data typically employ so-called "slack" variables that loosen the classification constraints and introduce a misclassification penalty into the Lagrangian formulation [10]. The constraints for training a linear SVM on non-separable data using slack variables $\xi_{i}$ then take the form: $\displaystyle y_{i}\left(\langle x_{i},w\rangle+b\right)$ $\displaystyle\geq 1-\xi_{i}$ (5) $\displaystyle\xi_{i}$ $\displaystyle\geq 0\quad\forall i$ (6) where we choose to assign an L2 penalty for misclassification by adding a cost term to the objective function 3, resulting in the modified objective function $\min_{w,b}\frac{1}{2}||w||^{2}+\frac{C}{2}\sum_{i}\xi_{i}^{2}$ (7) Applying the method of Lagrange multipliers, the primal Lagrangian for Equation 7 is $L_{P}=\frac{1}{2}||w||^{2}-\sum_{i=1}\alpha_{i}(y_{i}(\langle x_{i},w\rangle+b)-1+\xi_{i})-\sum_{i}\mu_{i}\xi_{i}+\frac{C}{2}\sum_{i}\xi_{i}^{2}$ (8) Alternatively, this can be reformulated as the Wolfe dual problem [23], with the goal of maximizing the dual Lagrangian $L_{D}=\sum_{i}\alpha_{i}-\frac{1}{2}\sum_{i,j}\alpha_{i}\alpha_{j}y_{i}y_{j}\langle x_{i},x_{j}\rangle-\frac{C}{2}\sum_{i}\xi_{i}^{2}$ (9) subject to the constraints: $\displaystyle 0\leq\alpha_{i}$ $\displaystyle\leq C$ (10) $\displaystyle\sum_{i}\alpha_{i}y_{i}$ $\displaystyle=0$ (11) As Equation 7 is a convex programming problem, the solutions to the primal Lagrangian of Equation 8 and the dual Lagrangian of Equation 9 are subject to the Karush-Kuhn-Tucker (KKT) conditions [24]: $\displaystyle w-\sum_{i}\alpha_{i}y_{i}x_{i}=0$ (12) $\displaystyle\sum_{i}\alpha_{i}y_{i}=0$ (13) $\displaystyle C-\alpha_{i}-\mu_{i}=0$ (14) $\displaystyle\alpha_{i}(y_{i}(\langle x_{i},w\rangle+b)-1+\xi_{i})=0$ (15) $\displaystyle y_{i}(\langle x_{i},w\rangle+b)-1+\xi_{i}\geq 0$ (16) $\displaystyle\mu_{i}\xi_{i}=0$ (17) $\displaystyle C\geq\alpha_{i}\geq 0$ (18) $\displaystyle\mu_{i}\geq 0$ (19) $\displaystyle\xi_{i}\geq 0$ (20) These conditions determine the hyperplane intercept $b$ and also describe the geometry of the maximal margin hyperplane for the trained SVM. Once the optimal set of parameters $\vec{\alpha}$ is determined with respect to the training inputs $\mathcal{X}$, the linear SVM predicts the class of a datapoint using the decision function $f(x_{p})=\sum_{s\in SV}\alpha_{s}y_{s}\langle x_{p},x_{s}\rangle+b$ (21) where the sum runs over the indices of the support vectors, or equivalently all nonzero $\alpha_{i}$. Equation 21 and the dual Lagrangian 9 no longer explicitly reference elements of the input space $w,x_{i}\in\mathbb{R}^{d}$, so the optimization problem remains valid under the substitution $x\rightarrow\phi(x)$ for some mapping $\phi:\mathcal{X}\rightarrow\mathcal{H}$, where $\mathcal{H}$ is a Hilbert space (this is the so-called "kernel trick" [12]). This permits us to embed input data into a higher-dimensional Hilbert space for some choice of $\phi$ and then train the SVM on inner products in the mapped space, $\langle\phi(x_{i}),\phi(x_{j})\rangle_{\mathcal{H}}\equiv k(x_{i},x_{j})$, where the status of $k$ as an inner product guarantees that it is symmetric and positive semidefinite. The resulting SVM will then be capable of constructing decision boundaries that are nonlinear in the input space $\mathbb{R}^{d}$, with decision function $f(x_{p})=\sum_{s\in SV}\alpha_{s}y_{s}k(x_{p},x_{s})+b$ (22) Evaluating $k(x_{p},x_{s})=K_{ps}$ for a fixed set $x_{p},x_{s}\in\mathcal{T}\cup\mathcal{V}$ recovers Equation 1 from the main body (since $\alpha_{i}=0$ for $i$ outside the support vector set by Equations 14 and 17).
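As a concrete illustration of training on a precomputed Gram matrix, as in Equations 9 and 22, the following sketch uses scikit-learn [32] with synthetic data; the RBF kernel here is an arbitrary stand-in for $k$, not the quantum kernel of the main body:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy dataset: two well-separated clusters with labels in {-1, +1}
X = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

def rbf_kernel(A, B, gamma=0.5):
    # k(a, b) = exp(-gamma * ||a - b||^2): symmetric, positive semidefinite
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Train on the Gram matrix K_ij = k(x_i, x_j)
K_train = rbf_kernel(X, X)
clf = SVC(kernel="precomputed", C=1.0).fit(K_train, y)

# Predict via Equation 22: rows of K_test are test points, columns are training points
K_test = rbf_kernel(np.array([[1.8, 2.1]]), X)
print(clf.predict(K_test))
```

The same interface accepts a kernel matrix estimated from circuit sampling in place of `K_train`, which is how the hardware-sampled $\hat{K}$ enters the classical optimizer.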
## Appendix B Circuit structure and Hilbert space embedding

### B.1 Statistical uncertainty and vanishing kernels

We now discuss limitations to hardware-based quantum kernel methods due to statistical uncertainty. Recall that each kernel matrix element $K_{ij}=k(x_{i},x_{j})$ is computed by sampling the output of a circuit $U^{\dagger}(x_{j})U(x_{i})$ for a total of $R$ repetitions and counting the number $\nu_{0}$ of all-zeros bitstrings that appear. This experiment constitutes $R$ trials of a Bernoulli process parameterized by $K_{ij}$; the unbiased estimators for $K_{ij}$ and the associated variance are therefore given by: $\displaystyle\hat{K}_{ij}$ $\displaystyle=\frac{\nu_{0}}{R}$ (23) $\displaystyle\text{Var}(\hat{K}_{ij})$ $\displaystyle=\frac{\hat{K}_{ij}(1-\hat{K}_{ij})}{R-1}$ (24) Note that positive definiteness of $\hat{K}$ is not necessarily preserved in the presence of statistical error and hardware noise, but in practice we found this had little effect on the ability of the SVM to classify data. The $O(R^{-1/2})$ sampling error of Equation 24, combined with a requirement that $||\hat{K}-K||_{F}=\left(\sum_{ij}|K_{ij}-\hat{K}_{ij}|^{2}\right)^{1/2}\leq\epsilon m$ (where $m=|\mathcal{X}|$), would suggest that an $\epsilon$-close estimation of $K$ could be achieved using $R=O(\epsilon^{-2}N^{2})$ shots per kernel element. In the main body we argue that this fails to bound the relative error between kernel matrices. This is evident in the symmetric Chernoff bound for the relative error of a sampled $\hat{K}_{ij}$ [25]: $P\left(\frac{|\hat{K}-K|}{K}\geq\varepsilon\right)\leq 2e^{-RK\varepsilon^{2}/3}$ (25) for which the probability of large relative error quickly becomes unbounded for $R\ll\mathcal{O}(K_{ij}^{-1})$.
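The estimators of Equations 23 and 24 can be reproduced with a short simulation of the Bernoulli sampling process; the kernel value and shot count below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

K_true = 0.3    # hypothetical kernel element |<0...0| U†(x_j) U(x_i) |0...0>|^2
R = 200_000     # circuit repetitions

# Each repetition returns the all-zeros bitstring with probability K_true
nu_0 = rng.binomial(R, K_true)

K_hat = nu_0 / R                           # Equation 23
var_hat = K_hat * (1 - K_hat) / (R - 1)    # Equation 24

# Standard error shrinks as O(R^{-1/2}); the *relative* error diverges as K_true -> 0
rel_std = np.sqrt(var_hat) / K_hat
print(K_hat, rel_std)
```

Rerunning with `K_true = 2 ** -17` (a typical Haar-random magnitude at 17 qubits) shows why small kernel elements are infeasible to resolve at realistic shot counts.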
Relative error is relevant by the following reasoning: Let $L_{D}^{\prime}$ be the L1-penalized dual Lagrangian corresponding to a kernel constructed from the transformation $K^{\prime}=rK$, given by $L_{D}^{\prime}=\sum_{i}\alpha_{i}^{\prime}-\frac{1}{2}\sum_{i,j}\alpha_{i}^{\prime}\alpha_{j}^{\prime}y_{i}y_{j}K_{ij}^{\prime}$ (26) If $\alpha_{opt}$ contains the parameters maximizing the Lagrangian $L_{D}=\sum_{i}\alpha_{i}-\frac{1}{2}\sum_{i,j}\alpha_{i}\alpha_{j}y_{i}y_{j}K_{ij}$ (27) then $\alpha_{opt}$ also maximizes the Lagrangian $\frac{1}{r}L_{D}$, which may be rewritten as follows: $\displaystyle\frac{1}{r}L_{D}=\sum_{i}\frac{\alpha_{i}}{r}-\frac{1}{2}\sum_{i,j}\frac{\alpha_{i}\alpha_{j}}{r^{2}}y_{i}y_{j}(rK_{ij})$ (28) By comparison with Equation 26, $L_{D}^{\prime}$ admits the unique solution $\alpha_{opt}^{\prime}=\alpha_{opt}/r$, which may be achieved by an appropriate choice of the penalty parameter $C$ appearing in the KKT conditions 12-20 for $L_{D}^{\prime}$ (or equivalently in the primal problem $L_{P}^{\prime}$ before kernelizing the problem). A similar result can be attained if an L2 misclassification penalty is used by restricting the Lagrange multipliers $\alpha$. Consequently, the decision function for an SVM classifier trained on the kernel matrix $K^{\prime}$ is identical to the decision function for the SVM classifier trained on the original $K$. This result makes intuitive sense for the choice of a linear kernel $K_{ij}=\langle x_{i},x_{j}\rangle$, for which stretching or shrinking each datapoint $x_{i}\rightarrow\sqrt{r}\,x_{i}$ has no effect on the geometry of the maximal margin hyperplane. By choosing some $r$ for the transformation $K\rightarrow rK$, $\hat{K}\rightarrow r\hat{K}$, the absolute error $||K-\hat{K}||_{F}$ may be made arbitrarily large or small without affecting the performance of the associated SVM classifier, which suggests that $||K-\hat{K}||_{F}$ does not completely characterize the resulting SVM accuracy.
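The scale-invariance argument can be checked numerically: for an L1-penalized SVM, training on $rK$ with penalty $C/r$ reproduces the decision function obtained from $K$ with penalty $C$. A sketch with scikit-learn [32] and synthetic data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Two separable clusters and a linear Gram matrix K_ij = <x_i, x_j>
X = np.vstack([rng.normal(-1.5, 0.4, (25, 2)), rng.normal(1.5, 0.4, (25, 2))])
y = np.array([-1] * 25 + [1] * 25)
K = X @ X.T

r, C = 10.0, 1.0

# Classifier trained on K with penalty C ...
clf_a = SVC(kernel="precomputed", C=C).fit(K, y)
# ... and on the rescaled kernel rK with rescaled penalty C / r
clf_b = SVC(kernel="precomputed", C=C / r).fit(r * K, y)

# Dual coefficients satisfy alpha' = alpha / r, so the decision functions coincide
pred_a = clf_a.predict(K)
pred_b = clf_b.predict(r * K)
assert (pred_a == pred_b).all()
```

This is the sense in which a hardware kernel that is uniformly damped towards zero can still train a classifier as accurate as the noiseless one.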
The high dimensionality of quantum state space poses a threat to the experimental feasibility of quantum kernel methods, since the relative statistical error incurred by finite shot statistics grows as the magnitude of the sampled kernel element shrinks. As a naive example, a randomly selected encoding unitary will almost certainly result in vanishing kernel elements by strong measure concentration: if we treat $S=\\{U^{\dagger}(x_{j})U(x_{i})\\}$ as a distribution of unitaries that are random with respect to the Haar measure, it is well known that the expected probability of any specific bitstring (and therefore $\frac{\nu_{0}}{R}$) sampled from a unitary in $S$ scales as $\mathcal{O}(2^{-n})$. In practice, the preprocessing of input data and the circuit structure must be chosen with careful attention to the corresponding distribution of $K$. To explore this effect we constructed a classifier similar to the quantum circuits described in [1, 26], depicted in Figure 5b and given by $\displaystyle U(x)$ $\displaystyle=H^{\otimes n}V(x)H^{\otimes n}V(x)$ (29) $\displaystyle V(x)$ $\displaystyle=\exp\left(i\sum_{j=1}^{n}c_{1}x_{j}Z_{j}+i\sum_{(j,k)\in NN}c_{2}(x_{j}-x_{k})Z_{j}Z_{k}\right)$ where the entangling gates are selected among nearest neighbors on the (simulated) circuit grid. We chose to parameterize entanglers by $c_{2}(x_{j}-x_{k})$ instead of quadratic terms proportional to $x_{j}x_{k}$ to eliminate the concentration $E[x_{j}x_{k}]\rightarrow 0$ that occurs for our choice of normalization. For clarity we refer to the circuit described by Equation 29 as "Type 1". An increase in connectivity raises the gate/parameter count and generally yields smaller sampled kernel elements. This circuit structure requires that the input data have dimension equal to the number of qubits. We applied Principal Component Analysis (PCA) [27] to reduce the 67-dimensional data to $n$ dimensions and then standardized the data to the interval $[-\pi/2,\pi/2]$.
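Because every term in the exponent of $V(x)$ is diagonal in the computational basis, the Type 1 kernel can be simulated with dense linear algebra alone. The sketch below takes $V(x)$ to be the unitary $\exp(i\,\cdot)$, substitutes line connectivity for the grid's nearest neighbors, and picks arbitrary values for $c_{1}$ and $c_{2}$:

```python
import numpy as np
from functools import reduce

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def type1_kernel(x1, x2, c1=0.2, c2=0.2):
    """Kernel |<0...0| U†(x2) U(x1) |0...0>|^2 for U(x) = H^n V(x) H^n V(x)."""
    n = len(x1)
    H = reduce(np.kron, [H1] * n)
    # Z_i eigenvalue on a bitstring b is (-1)^{b_i}
    bits = (np.arange(2 ** n)[:, None] >> np.arange(n)[::-1]) & 1
    z = 1 - 2 * bits                      # shape (2^n, n)

    def U(x):
        x = np.asarray(x)
        phase = z @ (c1 * x)
        for i in range(n - 1):            # line connectivity: (0,1), (1,2), ...
            phase += c2 * (x[i] - x[i + 1]) * z[:, i] * z[:, i + 1]
        V = np.diag(np.exp(1j * phase))   # V(x) is diagonal in the Z basis
        return H @ V @ H @ V

    psi1 = U(x1)[:, 0]                    # U(x)|0...0> is the first column
    psi2 = U(x2)[:, 0]
    return abs(np.vdot(psi2, psi1)) ** 2

rng = np.random.default_rng(3)
x1, x2 = rng.uniform(-np.pi / 2, np.pi / 2, (2, 6))
print(type1_kernel(x1, x1), type1_kernel(x1, x2))
```

Sweeping $c_{1}$, $c_{2}$, or $n$ in this sketch reproduces the qualitative trend of Figure 5a: typical off-diagonal kernel magnitudes shrink as the effective rotation angles or qubit count grow.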
We defined additional hyperparameters $(c_{1},c_{2})$ that can be tuned to optimize the cross-validated performance of the corresponding SVM and to control the resulting distribution of kernel matrix elements. Figure 5a shows that the magnitude of $K$ vanishes with respect to increasing $c_{1}$, $c_{2}$, or number of qubits $n$. This does not necessarily result in low accuracies for the associated SVM classifiers, but it describes a family of kernels that are infeasible to sample on hardware. It is possible to preserve the magnitude of $K$ if $c_{1}$ and $c_{2}$ are scaled down with increasing $n$, but for small enough angles over/under-rotation errors and noise become dominating factors in hardware outcomes, while the limit $c_{2}\rightarrow 0$ results in a circuit that can be simulated trivially.

Figure 5: Circuit structure and data preprocessing have a large impact on the resulting distributions of kernel matrix elements. (a) Distributions of median $K$ with respect to a coarse grid search over $c_{1},c_{2}\in\\{0.1,0.15,0.2,0.25,0.3\\}$ for type 1 circuits with $n$-dimensional PCA compressions as input suggest that vanishing kernel magnitudes (red) make much of the grid-search space inaccessible to realistic hardware experiments for even modest numbers of qubits. We found no such trend in $K$ for type 2 circuits (Equation 2) implemented in our experiments with 67-dimensional input data.

These results motivate a new approach for encoding data on large numbers of qubits, especially if the input data dimensionality is large. To compute large-magnitude kernels on high-dimensional data without dimensionality reduction, we designed a circuit encoding to map input data $x_{i},z_{i}\in X\subset\mathbb{R}^{d}$ into a subspace of $\mathbb{C}^{2^{n}}$ using an approximately orthogonal parameterization of $U(2^{n})$, the group describing $n$-qubit unitaries.
While examples of exactly orthogonal parameterizations of $U(2^{n})$ exist, such as the Euler angle parameterization [28], such schemes are generally inefficient to implement on hardware. We approximate such an encoding using circuits structured similarly to the Hardware Efficient Ansatz [29], consisting of single-qubit layers parameterizing $\bigotimes^{n}U(2)$ interspersed with layers of local entanglers. This circuit structure (referred to here as "Type 2") is shown in Figure 2 of the main body and can be expressed in terms of individual gates as $\displaystyle U(x)$ $\displaystyle=\prod_{\ell=1}^{L}U_{B}U_{A}(S_{\ell}(x))$ (30) $\displaystyle U_{A}(z)$ $\displaystyle=\bigotimes_{i=1}^{n}H^{(i)}R_{z}^{(i)}(c_{1}z_{i2})R_{y}^{(i)}(c_{1}z_{i1})R_{z}^{(i)}(c_{1}z_{i0})$ $\displaystyle U_{B}$ $\displaystyle=\prod_{(i,j)\in E(G)}\sqrt{\text{iSWAP}^{(i,j)}}$ where $E(G)$ denotes the set of edges composing a length-$n$ simple path permitted by the Sycamore connectivity, the superscript $(i)$ indicates action on qubit $i$, and $S_{\ell}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{3n}$ denotes selection of a subset of $3n$ elements of the input data to be encoded into rotation layer $\ell$. The specific choice of rotation and entangling gates was influenced by the gate set available on the processor at the time the experiments were conducted, namely $\sqrt{\text{iSWAP}}$ and the Sycamore gate [9]. Note that the use of Z rotations in $U_{A}$ reduces the hardware depth of the corresponding circuit by 50% [30]. This architecture therefore encodes $d$-dimensional input data in $\mathcal{O}(d/n)$ depth on hardware. The fact that this circuit choice may be made arbitrarily shallow in the number of qubits $n$ rules out the possibility of demonstrating quantum advantage, but the experimental and design challenges of implementing this architecture are relevant to QKM in general.
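A minimal sketch of the slicing map $S_{\ell}$, assuming naive sequential chunking with zero padding (the filling scheme described next deviates slightly to balance layers), makes the $\mathcal{O}(d/n)$ depth explicit:

```python
import math

def rotation_layers(x, n):
    """Split d-dimensional data into layers of 3n rotation angles each.

    Each qubit consumes three angles (Rz, Ry, Rz) per layer, so a
    d-dimensional input needs ceil(d / (3 n)) encoding layers.
    """
    d = len(x)
    L = math.ceil(d / (3 * n))
    padded = list(x) + [0.0] * (3 * n * L - d)  # pad the final layer with zeros
    return [padded[3 * n * l : 3 * n * (l + 1)] for l in range(L)]

layers = rotation_layers(list(range(67)), n=10)
print(len(layers), len(layers[0]))  # 3 layers of 30 angles for d = 67, n = 10
```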
As depicted in Figure 1, single-qubit rotations in each circuit were "filled" sequentially from left to right, top to bottom, beginning with the first element $x_{1}$ of the data and ending with $x_{67}$. Exceptions were made to this pattern to more evenly distribute single-qubit rotations between entangling layers. We explored randomized filling schemes and found no significant difference in circuit performance or inner product magnitudes. When the number of qubits is not an integer factor of 67, gaps appeared in at most two layers of the circuit.

### B.2 Hyperparameter tuning in the quantum circuit

Our circuit design introduces a single parameter $c_{1}$ for multiplicative scaling of the elements of the preprocessed $x$. We observed that train/test performance varied with respect to $c_{1}$ and therefore treated it as a hyperparameter of the kernel function to be optimized in simulation. In practice, the same optimization procedure can be carried out using sweeps over $c_{1}$ for hardware submissions. Figure 6a shows a typical outcome of the hyperparameter tuning process and demonstrates both a local optimum for validation scores with respect to $c_{1}$ and a capacity for overfitting in the large-$c_{1}$ limit.

Figure 6: Hyperparameter tuning for simulated type 2 circuits consisted of grid search optimization over two parameters. (left) The encoding parameter $c_{1}$ multiplies each encoded data element and impacts the typical separation (magnitude of $K$) of mapped feature space vectors. (right) The L2 penalty hyperparameter $C$ used to train the SVM allows for robust performance in the presence of noise. Both plots correspond to the 10-qubit circuits used in the experiment ($c_{1}$ was optimized over an $m=1000$ subset of the SN-67 dataset, while $C$ was optimized over the experimental run dataset with $m=210$).
The choice of optimal $c_{1}$ has a direct impact on the proximity of states $|\psi(x)\rangle$ mapped into the quantum state space; in the trivial limit $c_{1}\rightarrow 0$ the unitary $U(x)$ becomes the identity map and $K_{ij}\rightarrow 1$, while in the large-$c_{1}$ limit the angles between mapped input data grow linearly and $K$ quickly vanishes. There is therefore a tension between producing mapped states $\\{|\psi(x_{i})\rangle\\}$ with good separability and avoiding the isolation of mapped points in high-dimensional space that would degrade performance, a phenomenon known in classical machine learning as the "curse of dimensionality" (e.g. [31]).

## Appendix C Dataset selection and preprocessing

We used the dataset provided in the Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC) [14]. After engineering the 67-float data with binary labels (Section II.1), preprocessing consisted of the following steps:

1. Logscale transformation: Many of the features were distributed lognormally, which motivates the use of the transformation $x_{i}\rightarrow\log_{10}(x_{i})$. Some information loss resulted from taking the absolute value of median flux, for which approximately $4\%$ of entries in the original dataset were negative. All other absolute value operations resulted in negligible loss of sign information.

2. Normalization/scaling and outliers: Since the local rotations of $U(x)$ are $2\pi$-periodic, an effective normalization scheme is to map the boundaries of the input range to $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$.
Realistically, outliers will result in the effective range of most data being compressed into a much smaller space (thereby amplifying the effect of over/under-rotation errors), and so we scaled the data in a way that ignores the effects of large outliers by applying the transformation $x^{\prime}=\pi\left(\frac{x-P_{1}}{P_{99}-P_{1}}\right)-\frac{\pi}{2}$ to every element of the input data, where $P_{k}$ denotes the value of the $k$-th percentile of the input domain. This is equivalent to typical implementations of a robust scaler with the quantile range set to (0.01, 0.99) (e.g. [32]), which removes the effects of outliers on the rescaling. Hyperparameter tuning was then used to adjust the rotation parameters by a further multiplicative factor $c_{1}$ (see Appendix B.2).

## Appendix D Error mitigation

### D.1 Device parameters

Periodic calibrations of the Sycamore superconducting qubit device produce diagnostic data describing qubit and gate performance. Calibration metrics relevant to this experiment included the readout errors $p_{00}$ (the probability of a computational basis measurement reporting a "1" when the result should have been "0") and $p_{11}$ (the probability of a measurement reporting a "0" when the result should have been "1"), single-qubit $T_{1}$, single-qubit gate RB error, and $\sqrt{\text{iSWAP}}$ gate cross-entropy benchmarking (XEB) error. The readout error probabilities were used primarily for constructing and analyzing post-processing techniques for error correction, while the remainder of the calibration data were fed into an automated qubit selection algorithm.

### D.2 Automated qubit selection

Figure 7: Sample results of automated qubit selection with rejected qubits denoted by a red slash. Entangler patterns (light/dark gray) are overlaid on the Sycamore 23-qubit grid annotated with $T_{1}$ in $\mu$s.
No more than 19 qubits may be assigned to the grid using our connectivity scheme, so qubit selection has diminishing effects on performance as $n\rightarrow 19$. To improve the performance of the algorithm for a given number of qubits, we designed a graph traversal algorithm to select qubits based on diagnostic data taken during device calibration. Less weight was applied to $p_{00}$ and $p_{11}$ due to the availability of readout error mitigation techniques. We constructed a qubit graph $G_{q}=(E,V)$ where edges represent entangling gate connectivity and nodes represent qubits according to the Sycamore 23-qubit grid layout. The optimization was done by traversing all simple paths of fixed length $k$ (implemented according to [33, 34]) and then scoring each path according to some function of the metrics for the subset of nodes and edges visited. Stated as an optimization problem, given an objective function $f:V\times E\rightarrow\mathbb{R}$ scoring subsets of vertices and edges composing an Eulerian graph, this algorithm finds the maximum evaluation of $f$ over all candidate subgraphs $G_{q}$: $\max_{G_{q}}f(V_{k}(G_{q}),E_{k}(G_{q}))$ (31) subject to the constraints $|V_{k}(G_{q})|=k$, $|E_{k}(G_{q})|=k-1$. Before applying $f$, the heterogeneous calibration data were normalized to the range $[0,1]$ and inverted if they represented an error (as opposed to a fidelity). Letting the value of the $p$-th category of calibration data for the $i$-th qubit $v_{i}$ be $c_{p}(v_{i})$, and similarly the $p$-th category of calibration for the $i$-th edge $e_{i}$ be $d_{p}(e_{i})$, and defining $g_{p}$ as a scoring function applied to the $p$-th processed calibration metric, our implementation of $f$ takes the form $f(V,E)=\sum_{p\in C_{1}}\sum_{v_{i}\in G}g_{p}(c_{p}(v_{i}))+\sum_{p\in C_{2}}\sum_{e_{i}\in G}g_{p}(d_{p}(e_{i}))$ (32) where $C_{1}$ and $C_{2}$ represent the calibration metrics corresponding to single qubits and pairs of qubits, respectively.
We implemented $g_{p}$ as a logarithmic function for the $T_{1}$, $T_{2}$, and $f_{XEB,2q}$ metrics and a linear function for the $p_{00}$ and $p_{11}$ metrics. Figure 7 shows the results of an example optimization overlaid on $T_{1}$ calibration results.

### D.3 Readout error correction

Readout error resulting from relaxation and thermal excitation can be modelled by a stochastic bitflip process applied to the observed bitstrings. Here we describe an efficient and accurate technique for correcting readout error for quantum kernel methods. Let $p(y^{n}|x^{n})$ describe the conditional probability of observing bitstring $y^{n}$ after exposing the bitstring $x^{n}$ to $n$ distinct bitflip channels, and let $q^{k}(y|x)$ for $x,y\in\\{0,1\\}$ describe the corresponding probability of observing bit "$y$" after exposing the $k$-th bit "$x$" to a single bitflip channel. Then, for the $k$-th qubit, the metrics introduced in Appendix D.1 as $p_{00}=q^{k}(1|0)$ and $p_{11}=q^{k}(0|1)$ may be used to partially undo readout error by means of postprocessing. We define a response matrix $R\in\mathbb{R}^{2^{n}\times 2^{n}}$ elementwise as $(R)_{xy}=p(y^{n}|x^{n})$, containing as its elements the total probability of transition from bitstring $x^{n}=x_{1}\dots x_{n}$ to bitstring $y^{n}=y_{1}\dots y_{n}$, computed as the product of the single-bit transition probabilities $q^{k}(y_{k}|x_{k})$: $R_{xy}\equiv p(y_{1}\dots y_{n}|x_{1}\dots x_{n})=\prod_{k=1}^{n}q^{k}(y_{k}|x_{k})$ (33) For simplicity, we assume that each individual bitflip may be modelled as an independent process, although the techniques discussed here are readily applicable to a system of dependent bitflips if the bitflip likelihoods are experimentally measured in parallel. Then, evidently, $R=\bigotimes_{k=1}^{n}\begin{pmatrix}q^{k}(0|0)&q^{k}(0|1)\\\ q^{k}(1|0)&q^{k}(1|1)\end{pmatrix}$ (34) Note that $R$ is generally asymmetric since typically $q^{k}(1|0)<q^{k}(0|1)$.
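Under the independent-bitflip assumption, Equation 34 and the truncated pseudo-inverse correction described in Appendix D.3 can be sketched as follows; the bitflip rates and test distribution are illustrative:

```python
import numpy as np
from functools import reduce

def response_matrix(p10, p01):
    """Equation 34: R = kron of per-qubit bitflip matrices, column x -> observed y."""
    blocks = [np.array([[1 - a, b], [a, 1 - b]]) for a, b in zip(p10, p01)]
    return reduce(np.kron, blocks)

def truncated_zero_correction(freqs, p10, p01, k_max=2):
    """Correct the all-zeros frequency using bitstrings of Hamming weight <= k_max."""
    n = len(p10)
    R = response_matrix(p10, p01)
    keep = [i for i in range(2 ** n) if bin(i).count("1") <= k_max]
    R_t = R[np.ix_(keep, keep)]                    # truncated response matrix
    corrected = np.linalg.pinv(R_t) @ freqs[keep]  # pseudo-inverse correction
    return corrected[0]                            # estimate of p(0...0)

# Illustrative rates: excitation q(1|0) smaller than relaxation q(0|1)
rng = np.random.default_rng(5)
n = 6
p10 = rng.uniform(0.005, 0.02, n)
p01 = rng.uniform(0.02, 0.08, n)

p_true = rng.dirichlet(np.ones(2 ** n))          # some output distribution
p_obs = response_matrix(p10, p01) @ p_true       # distribution after readout error
print(p_true[0], p_obs[0], truncated_zero_correction(p_obs, p10, p01))
```

Because transitions into the all-zeros bitstring from high-weight strings require many simultaneous relaxations, the weight-truncated inverse recovers $p(0\dots 0)$ accurately while inverting only a small matrix.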
While multiplying $R^{-1}$ by the set of observed bitstring frequencies would recover the prior distribution of bitstring frequencies with good fidelity, standard matrix inversion is subject to instabilities and is not tractable for even modest numbers of qubits. Since only the frequency of the all-zeros bitstring is necessary to compute $\hat{K}_{ij}$, we implemented correction using a small subset of bitstring transition probabilities to perform quick and relatively high-fidelity readout error correction in post-processing. We generated $R$ and then truncated the full $2^{n}$-dimensional basis to the bitstring space containing strings with Hamming weight $\leq k_{max}$ for some $n$-dependent $k_{max}$, resulting in a truncated response matrix $R_{t}$. We then computed the pseudo-inverse $R_{t}^{-1}$ and performed error correction by simple matrix multiplication on the array of experimental readout frequencies (similarly truncated). This simplification comes at the expense of knowledge about the other post-correction frequencies, since the other bitstrings in the truncated space are off-center within the Hamming sphere of kept bitstrings, resulting in a bias in the inverted linear map. We now analyze the effect of truncation on the readout error correction. The number of simultaneous readout error events (either relaxation or excitation) may be modelled as an induced random variable $Z=\sum_{k}X_{k}$ with $\text{Pr}(X_{k}=1)=q^{k}(\neg x|x)$. This distribution has expected value $\mu=\sum_{k}q^{k}(\neg x|x)$ and an exponentially suppressed likelihood of simultaneous readout errors via the Chernoff bound $\text{Pr}(Z\geq k)\leq\exp(k-\mu-k\log(k/\mu))$.
Thus a natural measure of the effect of Hamming weight truncation is the empirical probability allocated to the complement of the truncated subspace: $\text{Pr}(Z>k_{max})=1-\sum_{i=0}^{k_{max}}\text{Pr}(Z=i)$ (35) While Equation 35 describes the probability of events outside the truncated subspace, it does not directly translate to a failure probability for truncated readout correction. To explore this effect numerically, we computed the output probability distributions for a 10-qubit quantum kernel circuit sampled for $5000$ repetitions, and then introduced artificial readout error (using bitflip probabilities taken from the Sycamore processor) followed by roughly 5% Gaussian noise on the sampled distribution. Figure 8 shows the error distribution as well as the effect of correcting using an inverted truncated response matrix for a variety of kernel magnitudes and truncation weights. We observed similar behavior for 14- and 17-qubit simulated experiments; readout error for quantum kernel experiments may be corrected reasonably well using a small fraction of the full bitflip response matrix.

Figure 8: Truncated readout error correction for a 10-qubit circuit. (left) Correcting on the likely subspace described by $k_{max}=1$ provides significant error correction, and further increasing $k_{max}$ provides diminishing returns. Dashed lines indicate the infinite-shot upper/lower bounds given by Equation 41 (bounds are violated when empirical bitflip probabilities do not match the imposed readout error probabilities); the black line indicates perfect error correction. (right) The empirical probability of transition out of the weight-$\leq k_{max}$ subspace vanishes exponentially in the highest weight considered.

Figure 8 suggests a linear relationship between the kernel $K^{\prime}$ computed in the presence of readout error and the noiseless kernel $K$.
While readout error does not constitute a linear process in general (the underlying bitflip probabilities and bitstring distributions may be modified to give rise to arbitrary effects on $K$ within the bounds of Equation 41), by the arguments of Appendix B such a linear relationship would imply a minimal effect of readout error on the corresponding classifier. The degree to which readout error plays a role in quantum kernel methods is therefore an important research area for implementation on near-term processors.

### D.4 Crosstalk optimization

Cross-talk between two-qubit gates implemented on superconducting processors can contribute to decoherence and decrease circuit fidelity. Since our choice of circuit ansatz requires entire layers of entangling gates, we conducted diagnostic runs to determine whether executing these gates sequentially (in different staggering patterns) could improve performance compared to executing the gates simultaneously. We found that the completely parallel execution of entangling gates achieved the lowest cross entropy with respect to noiseless simulation compared to partially sequential arrangements. Therefore all entangling gates in this experiment were run in parallel.

## Appendix E Hardware error and performance

Figure 9 shows a typical outcome for sampled kernel elements $\hat{K}_{ij}$ compared to their true values as determined from noiseless simulation. Notably, all of the hardware outcomes are strongly biased towards zero as a result of decoherence.
The observation that the SVM decision function is scale-invariant (Appendix B) suggests that the performance of an SVM trained using data subject to hardware error will only be affected to the degree that the sampled kernel elements $\hat{K}_{ij}$ differ from some linear transformation of the corresponding exact elements $K_{ij}$, so that circuit fidelity and other typical metrics of hardware performance are not predictive of classifier performance in any obvious way. For instance, we achieved test accuracy comparable to noiseless simulation for $n=14$ qubits despite circuit fidelity in the neighborhood of $30\%$.

Figure 9: (left) Distribution of sampled $\hat{K}$ compared to simulated $K$ for the 17-qubit train set ($m=210$), colored according to quartiles of $K$. The dashed line indicates the value of $rK$ for $r=0.29$, demonstrating that the hardware trends towards some scaled value of $K$. The mean value of $K_{ii}$, corresponding to $\operatorname{Tr}(|0\rangle\langle 0|)\equiv 1$ (elements in the top right), is a useful proxy for circuit fidelity and tends towards $30\%$ for the qubit counts investigated. (right) The distribution of $\hat{K}$ around $rK$ is irregular but exhibits consistent patterns with respect to kernel magnitude. The hardware error is therefore not normal with respect to (a scaled version of) $K$, which complicates bounds for guaranteed performance.

### E.1 Effects of readout error in quantum kernel methods

Since $\hat{K}$ is estimated by computing the empirical probability of the all-zeros bitstring $p(0)$, the effects of readout error may be bounded in a straightforward manner. The following analysis considers readout error as the sole source of noise and ignores statistical effects and other sources of decoherence. The resulting bounds apply only to the infinite-shot limit but will be shown to be approximately correct in the low-shot limit.
As in Section D.3, we let $p(y^{n}|x^{n})$ describe the conditional probability for observing bitstring $y^{n}$ after exposing the bitstring $x^{n}$ to $n$ distinct bitflip channels, and let $q^{k}(y|x)$ for $x,y\in\\{0,1\\}$ describe the corresponding probability for observing “$y$” after exposing the $k$-th bit “$x$” to a single bitflip channel. After exposing the all-zeros bitstring to a stochastic bitflip process repeatedly, the lowest possible value for $\hat{K}$ occurs when no bitstrings transform into the all-zeros bitstring. The expected fraction of events remaining is $p(0|0)=\prod_{k=1}^{n}(1-q^{k}(1|0))$ (36) The maximum increase in the observed $p(0)$ will result from transitions from other bitstrings into the all-zeros bitstring. Intuitively, this will occur almost entirely due to low-weight bitstrings with just a few bitflips. The probability of transition into the all-zeros bitstring from an arbitrary starting bitstring $y^{n}=y_{1}y_{2}\dots y_{n}$ is $p(0|y^{n})=\prod_{k=1}^{n}q^{k}(0|y_{k})$, while the logarithm of each factor in this product may be rewritten as $\displaystyle\log q^{k}(0|y_{k})=\log q^{k}(0|0)\,(1-y_{k})+\log q^{k}(0|1)\,y_{k}$ (37) The total log-probability for transition into $0$ is then $\displaystyle\log p(0|y^{n})$ $\displaystyle=\sum_{k=1}^{n}\log q^{k}(0|0)+\sum_{k=1}^{n}\log\frac{q^{k}(0|1)}{q^{k}(0|0)}y_{k}$ (38) which is in the form $w\cdot y^{n}+b$ with $w\in\mathbb{R}^{n},\,b\in\mathbb{R}$, indicating a linear dependence of $\log p(0|y^{n})$ on $y^{n}$. Since the logarithm is strictly increasing, the corresponding integer programming problem for maximizing $p(0|y^{n})$ is: $\displaystyle\max_{x^{n}}\sum_{k=1}^{n}w_{k}x_{k}$ (39) $\displaystyle\text{subject to }x_{k}\in\\{0,1\\},\qquad\sum_{k=1}^{n}x_{k}>0$ (40) where the second constraint avoids the trivial solution of the all-zeros bitstring. 
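The linear form of Equation 38 and the structure of the maximizer of Equations 39-40 can be verified by brute force. The sketch below uses hypothetical per-qubit bitflip probabilities; note that, strictly, the maximizer places its single “1” at the largest ratio $q^{k}(0|1)/q^{k}(0|0)$, which coincides with $\text{argmax}_{k}\,q^{k}(0|1)$ when the $q^{k}(1|0)$ are comparable:

```python
import itertools
import math
import random

random.seed(0)
n = 6
# Hypothetical per-qubit bitflip probabilities, both kinds below 0.5
q10 = [random.uniform(0.01, 0.10) for _ in range(n)]   # q^k(1|0)
q01 = [random.uniform(0.01, 0.10) for _ in range(n)]   # q^k(0|1)
q00 = [1 - p for p in q10]                             # q^k(0|0)

def log_p0(y):
    # log p(0|y^n) computed directly from the factorized channel
    return sum(math.log(q01[k]) if y[k] else math.log(q00[k]) for k in range(n))

# Equation 38: the same quantity in the linear form w . y + b
w = [math.log(q01[k] / q00[k]) for k in range(n)]
b = sum(math.log(q) for q in q00)
for y in itertools.product((0, 1), repeat=n):
    assert abs(log_p0(y) - (sum(wk * yk for wk, yk in zip(w, y)) + b)) < 1e-12

# Brute force over all nonzero bitstrings: the maximizer of p(0|y^n) has
# weight 1, with its "1" at the largest ratio q^k(0|1)/q^k(0|0)
best = max((y for y in itertools.product((0, 1), repeat=n) if any(y)), key=log_p0)
assert sum(best) == 1
assert best.index(1) == max(range(n), key=lambda k: q01[k] / q00[k])
```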
By inspection of Equation 38, $q^{k}(0|0)=1-q^{k}(1|0)$ implies that $w$ is strictly negative under the realistic assumption that $q^{k}(1|0)<0.5,\,q^{k}(0|1)<0.5$. In this case, the solution maximizing Equation 39 must be a bitstring with weight 1 with a “1” at the position $\text{argmax}_{k}q^{k}(0|1)$. Combined with Equation 36, this results in the bound: $\hat{K}\prod_{k=1}^{n}\left(1-q^{k}(1|0)\right)\leq\hat{K}^{\prime}\leq\left(1-\hat{K}\right)\max_{k}q^{k}(0|1)+\hat{K}$ (41) where $\hat{K}^{\prime}$ describes the estimated kernel element after readout error has occurred. This bound is plotted in Figure 8 for one instance of a 10-qubit readout error calibration. The form of these bounds further justifies the need for large kernel elements, since the bound becomes increasingly loose as $\hat{K}$ approaches the magnitude of typical readout error probabilities on the quantum processor. We now discuss the implementation of the readout error correction technique described in Appendix D.3. While routine calibration of the device returns diagnostic information on readout error probabilities $q^{k}(0|1)$ and $q^{k}(1|0)$, these quantities may drift in the time between calibration and experiment, resulting in lower-quality readout correction. To account for drift, we periodically estimated $q^{k}(0|1)$ and $q^{k}(1|0)$ by preparing a sequence of random bitstrings $|s\rangle$ and the complement $|s\oplus 1\rangle$ and then measured in the computational basis. We then computed empirical bitflip likelihoods for measurements in parallel over the specific qubits used in each experiment and computed the time-averaged likelihoods to use for readout correction. We averaged over the different prepared states to reduce the impact of imperfect state preparation on measured outcomes. Figure 10 shows learning curves computed for the 14-qubit experiment with and without readout correction for post-processing. 
While classifiers trained using error-corrected results are capable of achieving higher accuracies on the reserved test set, the values of the $C$ hyperparameter that improved test accuracy did not generally correspond to improved accuracies on the validation sets. Therefore, we could not consistently improve the classifier performance by applying readout correction and opted to present the results achieved without readout correction in Figure 4 in the main body. Figure 10: We analyzed the LOO cross-validation accuracy versus accuracy on the reserved test set to determine the effect of (truncated) readout correction using readout error likelihoods determined experimentally at periodic intervals during the 14-qubit experiment. While the trained classifiers are often able to achieve significantly higher accuracy on the reserved test set, the error-corrected validation accuracy is not predictive of improved test accuracy. For instance, the $k_{max}=2$ classifier achieves a 65.7% validation accuracy and a 64.3% test accuracy while the classifier with no readout correction achieves 66.1% validation accuracy and 61.4% test accuracy, indicating that the improved test score cannot be consistently predicted by LOOCV. ### E.2 Effects of statistical error on SVM accuracy While recent work [3] has established performance bounds relating SVM accuracy to statistical sampling error for quantum kernel methods, we conducted experiments using orders of magnitude fewer repetitions than necessary to achieve robust classification. In addition, modified SVM algorithms exist for processing noisy inputs in the data space [35], but these modifications do not generalize to processing noise in the feature space (i.e. $\Delta k(x_{i},x_{j})$). Given these limitations, we chose to explore the effects of statistical noise numerically to determine the relative impact of statistical noise on final classifier accuracy compared to other sources of error. 
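One way to carry out such a numerical exploration is to downsample an exact kernel at a finite shot count $R$. A minimal sketch follows; the Gram matrix here is a hypothetical stand-in for a simulated kernel, not the experimental one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an exact kernel: squared overlaps of random
# real unit vectors give a symmetric matrix with entries in [0, 1]
m = 40
v = rng.normal(size=(m, 8))
v /= np.linalg.norm(v, axis=1, keepdims=True)
K = np.clip((v @ v.T) ** 2, 0.0, 1.0)

def downsample(K, R, rng):
    # Draw each element from Bin(R, K_ij) / R, then enforce symmetry by
    # copying the upper triangle onto the lower triangle
    K_hat = rng.binomial(R, K) / R
    iu = np.triu_indices(K.shape[0], k=1)
    K_hat[(iu[1], iu[0])] = K_hat[iu]
    return K_hat

K_hat = downsample(K, 5000, rng)
assert np.allclose(K_hat, K_hat.T)
assert np.abs(K_hat - K).max() < 0.05   # binomial noise is small at R = 5000
```

The downsampled matrix can then be fed to any SVM training pipeline in place of the exact kernel.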
Figure 11 shows simulated trials of the type 2 circuits used for experimental runs. Each circuit was initially simulated with a full wavefunction simulator, and then the probability $K=|\langle 0|\psi\rangle|^{2}$ was used to sample $R$ repetitions from the implied binomial distribution $\text{Bin}(R,K)$. The sampled kernel elements were used to train and validate SVM classifiers following the procedure outlined in Section III.2. The results indicate there are diminishing returns in classifier accuracy beyond the $R=5000$ repetitions we used for the experiments, but that this choice incurs $\sim 1{\text{-}}2\%$ classifier error compared to the infinite-shot limit. As expected, greater statistical noise results in less overfitting for the model. Figure 11: We empirically investigated the effect of statistical noise by sampling from a binomial distribution with success probability equal to $K$ and enforcing symmetry on the resulting kernel matrix. The results show that additional circuit repetitions beyond $R\approx 5000$ provide diminishing returns to the validation accuracy of the classifier. We note that typically 50,000 circuit repetitions per kernel element are required to achieve cross-validated accuracy comparable to noiseless simulation. Fill for finite-$R$ represents $1\sigma$ interval for stratified 10-fold cross-validated train/test scores over 10 trials of downsampling to $R$ shots. ### E.3 Classifier results Figure 12 shows the result of tuning the hyperparameter $C$ controlling the penalty for violating the SVM margin. The $C$ values resulting in optimal validation scores were then used to determine the final train/test scores reported in Figure 4 of the main body. Figure 12: Hyperparameter optimization for hardware kernels is performed by tuning the penalty parameter $C$ via LOOCV on the training data (top row) during model validation, which then becomes fixed for evaluation of the model on the test set (bottom row). 
The capacity of the hardware-based models to overfit the data is drastically reduced, and oftentimes the SVM behavior becomes pathological. To avoid undesirable generalization behavior, the validation score corresponding to the optimal $C$ was required to be no greater than the corresponding training score. The vertical dashed line indicates the optimal $C$ decided in the validation stage.
# $\mathcal{N}$-field cosmology in hyperbolic field space: stability and general solutions Perseas Christodoulidis Andronikos Paliathanasis ###### Abstract We study the dynamics of a cosmological model with a perfect fluid and $\mathcal{N}$ fields on a hyperbolic field space interacting via a symmetric potential. We list all late-time solutions, investigate their stability and briefly discuss predictions of the theory. Moreover, for the case of two scalar fields and an exponential potential we prove that the field equations are Liouville integrable and we provide for the first time the general solution for a region of the parameter space. 98.80.-k, 95.35.+d, 95.36.+x ## 1 Introduction Over the last years, two-field models with a symmetric potential and hyperbolic field space have been extensively studied in the literature in the context of multi-field inflation [1, 2] or the late-time universe [3, 4]. These models have displayed interesting phenomenology while remaining observationally viable [2, 5, 6, 7, 8] (here we refer to the predictions of this theory during the inflationary era, whereas the viability of quintessence-like models during late-time cosmology has recently been challenged in the literature; see e.g. [9, 10]). Though most works have focused on the two-field regime, certain many-field constructions have also been proposed, as in e.g. [11, 12]. Similarly, some progress has been made in the derivation of general solutions in scalar-field cosmology. In contrast, multi-field generalizations have proved more challenging and to date only a few solutions are known for an arbitrary number of fields [13, 14, 15, 16, 17]. The existence of exact and analytic solutions is an essential property for the mathematical description of a physical theory. 
Although a dynamical system can be solved using numerical techniques, we do not know whether the numerical trajectories always correspond to real solutions of the problem; thus we should investigate whether the dynamical system possesses the integrability property. There are various techniques for the study of integrability in the literature. In cosmological studies, due to the fact that the gravitational field equations for scalar field theories admit a minisuperspace description, techniques from analytic mechanics can be applied. The theory of similarity transformations for the derivation of conservation laws has been applied in [18, 19, 20, 21, 22], while some other approaches can be found in [23, 24, 25]. Another important approach for the study of a cosmological model is the determination of the stationary points. The latter can be used to determine the asymptotic behaviour of a specific theory and to extract important information and criteria for the cosmological evolution of the specific model [26, 27, 28, 29, 30]. In this work we will investigate the $\mathcal{N}$-field generalization of the two-field hyperbolic problem in the presence of a perfect fluid. We will first list all critical-point solutions and investigate their stability. Next, we will apply the Noether method to derive two-field general solutions in some cases and then extend them to $\mathcal{N}$ fields. The paper is organized as follows: in Sec. 2 we present the Chiral (multi-field) cosmological model; in Sec. 3 we revisit the stability analysis for the two-field hyperbolic problem in the presence of a fluid. Next, we generalize the discussion to $\mathcal{N}$ fields in Sec. 4 and list all new solutions as well as their stability properties. In Sec. 5 we calculate the general solution for a subset of the parameter space using the Noether method. Finally, in Sec. 6 we offer our conclusions. 
## 2 Chiral (multi-field) cosmology The Chiral cosmological model belongs to the family of Einstein-nonlinear $\sigma$-models, where in the Einstein-Hilbert action two scalar fields minimally coupled to gravity are introduced, such that the gravitational action integral can be written as follows [31, 32, 33, 34] $S=\int\sqrt{-g}\,\mathrm{d}^{4}X\left(R-\frac{1}{2}g^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g^{\mu\nu}F\left(\phi\right)\nabla_{\mu}\psi\nabla_{\nu}\psi-V\left(\phi\right)\right)+S_{m}\,,$ (2.1) where there is a coupling in the kinetic term between the two scalar fields. When the coupling function $F\left(\phi\right)$ is constant the Action Integral (2.1) describes two quintessence fields; however, in Chiral cosmology the dynamics of the two fields evolve in a space of constant non-zero curvature, that is, $F\left(\phi\right)=e^{\kappa\phi}$. At this point it is important to mention that we refer to a two-dimensional space defined by the kinetic terms of the scalar fields and not to the background space with metric $g_{\mu\nu}$ and Ricci scalar $R$. According to the cosmological principle, for the background space we assume the spatially flat Friedmann–Lemaître–Robertson–Walker (FLRW) geometry described by the line element $\,\mathrm{d}s^{2}=-N_{l}\left(t\right)^{2}\,\mathrm{d}t^{2}+a\left(t\right)^{2}\left(\,\mathrm{d}X^{2}+\,\mathrm{d}Y^{2}+\,\mathrm{d}Z^{2}\right),$ (2.2) where $a\left(t\right)$ is the scale factor and $N_{l}\left(t\right)$ the lapse function. Moreover, we assume that the scalar fields inherit the symmetries of the background space, that is, $\phi\left(x^{\mu}\right)=\phi\left(t\right)$, $\psi\left(x^{\mu}\right)=\psi\left(t\right)$, while the Action Integral $S_{m}$ describes an ideal gas with energy density $\rho$, pressure $p$ and constant equation of state parameter $p=w\rho$. 
Hence, the gravitational field equations are [34] $\displaystyle 3H^{2}$ $\displaystyle=\rho_{f}+\rho,$ (2.3) $\displaystyle-\left(2\dot{H}+3H^{2}\right)$ $\displaystyle=p_{f}+p,$ (2.4) in which $\rho_{f},$ $p_{f}$ are the energy density and pressure components of the two scalar fields, that is, $\displaystyle\rho_{f}$ $\displaystyle=\frac{1}{2}\dot{\phi}^{2}+\frac{1}{2}e^{\kappa\phi}\dot{\psi}^{2}+V\left(\phi\right),$ (2.5) $\displaystyle p_{f}$ $\displaystyle=\frac{1}{2}\dot{\phi}^{2}+\frac{1}{2}e^{\kappa\phi}\dot{\psi}^{2}-V\left(\phi\right).$ (2.6) Finally, the equations of motion for the two fields are $\displaystyle\ddot{\phi}+3H\dot{\phi}-\kappa\frac{1}{2}e^{\kappa\phi}\dot{\psi}^{2}+V_{,\phi}\left(\phi\right)=0\,,$ (2.7) $\displaystyle\ddot{\psi}+3H\dot{\psi}+\kappa\dot{\phi}\dot{\psi}=0~{},$ (2.8) while the ideal gas satisfies the continuity equation $\dot{\rho}+3H(1+w)\rho=0\,,$ (2.9) from which it follows that $\rho=\rho_{m0}a^{-3\left(1+w\right)}$. At this point it is important to mention that the gravitational field equations can be derived by the variation of the point-like Lagrangian $L\left(a,\dot{a},\phi,\dot{\phi},\psi,\dot{\psi}\right)=\frac{1}{2N_{l}\left(t\right)}\left(-6a\dot{a}^{2}+a^{3}\left(\dot{\phi}^{2}+e^{\kappa\phi}\dot{\psi}^{2}\right)\right)-N_{l}\left(t\right)a^{3}V\left(\phi\right)+N_{l}\left(t\right)\rho_{m0}a^{-3w}.$ (2.10) This is an important observation, because it means we can apply techniques from analytic mechanics for the determination of exact solutions. As far as the scalar field potential $V\left(\phi\right)$ is concerned, we will assume the exponential function $V\left(\phi\right)=V_{0}e^{\lambda\phi}$ for most of this paper, as it has been shown to provide interesting physical results [35, 36]. In order to study the asymptotic behaviour of the previous model it is better to switch from cosmic time to the e-folding number defined from $\,\mathrm{d}N=H\,\mathrm{d}t$. 
In this way the set of evolution equations for the scalar fields and the fluid can be written as an autonomous dynamical system. Introducing a new variable $z=\rho/H^{2}$, describing the fluid energy density, the Friedmann constraint becomes $3={1\over 2}v^{i}v_{i}+z+{V\over H^{2}}\,,$ (2.11) which implies that the allowed values for the field velocities and $z$ should satisfy ${1\over 2}v^{i}v_{i}+z\leq 3\,.$ (2.12) The slow-roll parameter becomes $\epsilon={1\over 2}v^{i}v_{i}+{1\over 2}(1+w)z\,,$ (2.13) while the potential satisfies ${V\over H^{2}}=3-\epsilon+{1\over 2}(w-1)z\,.$ (2.14) The evolution equations for the fields and the fluid are $\displaystyle(v^{i})^{\prime}+\Gamma^{i}_{jk}v^{j}v^{k}+(3-\epsilon)(v^{i}+\lambda^{i})+{1\over 2}(w-1)\lambda^{i}z=0\,,$ (2.15) $\displaystyle z^{\prime}+(3+3w-2\epsilon)z=0\,.$ (2.16) However, critical points for the velocities are not expected to be found when a generic field metric is considered because the Christoffel symbols are field dependent. Instead, it is better to study equations for the normalized velocities $\sqrt{G_{ii}}v^{i}$ (no sum is assumed in $i$) that enter the definition of $\epsilon$. For the field metric with an isometry these velocities are defined as $y\equiv\phi^{\prime}\,,\qquad x\equiv\sqrt{F(\phi)}\psi^{\prime}\,.$ (2.17) The variables of the two-field dynamical system in first order form are $\phi,\psi,y,x,z$. 
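As a quick numerical sanity check, the solution $\rho=\rho_{m0}a^{-3\left(1+w\right)}$ of the continuity equation (2.9) can be verified directly; the sketch below evaluates the residual of (2.9) by finite differences, with an arbitrary power-law scale factor and parameter values assumed only for this check:

```python
w = 0.5                       # assumed equation-of-state parameter
p = 2.0 / (3.0 * (1.0 + w))   # any smooth a(t) works; a = t^p is convenient
rho0 = 1.0

a = lambda t: t ** p
H = lambda t: p / t           # H = a_dot / a for a = t^p
rho = lambda t: rho0 * a(t) ** (-3 * (1 + w))

# Central-difference residual of rho' + 3 H (1+w) rho = 0
h = 1e-6
for t in (1.0, 2.0, 5.0):
    rho_dot = (rho(t + h) - rho(t - h)) / (2 * h)
    assert abs(rho_dot + 3 * H(t) * (1 + w) * rho(t)) < 1e-5 * rho(t)
```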
## 3 Recap of the two-field problem with a perfect fluid ### 3.1 Hyperbolic field metric Specializing to the hyperbolic field metric $\,\mathrm{d}s^{2}=\,\mathrm{d}\phi^{2}+e^{\kappa\phi}\,\mathrm{d}\psi^{2}\,,$ (3.1) the system in first order form is $\displaystyle\phi^{\prime}=y\,,$ (3.2) $\displaystyle\psi^{\prime}=xe^{-\kappa\phi/2}\,,$ (3.3) $\displaystyle y^{\prime}+(3-\epsilon)(y+\lambda)-{\kappa\over 2}x^{2}+{1\over 2}(w-1)\lambda z=0\,,$ (3.4) $\displaystyle x^{\prime}+\left(3-\epsilon+{\kappa\over 2}y\right)x=0\,,$ (3.5) $\displaystyle z^{\prime}+(3+3w-2\epsilon)z=0\,.$ (3.6) The second equation can be discarded because $\psi$ is a cyclic variable and does not affect the dynamics; moreover, the last three equations do not depend on $\phi$, so the first equation can also be omitted. The reduced $3\times 3$ system is sufficient to extract information regarding the solution. Although the stability analysis for this model has been presented recently in Ref. [3], in this section we will revisit it in order to facilitate the transition to more fields. In addition, we will mention the stability criteria for generic $w$ that were missing for some solutions of the aforementioned work. For our set of variables the eigenvalues of the Jacobian matrix follow straightforwardly from our analysis because that matrix can always be written in block-diagonal form. We have the following critical points: 1. 1. First, we have scalar field domination solutions, which generalize the solutions presented in [36, 37] with the addition of $z=0$. Their stability properties remain unchanged, with the additional requirement $\epsilon<3/2(1+w)$. This happens because the stability matrix acquires an upper triangular form with zeros below the main diagonal, and so the presence of the fluid does not affect the eigenvalues (for this type of solution). There are three types of scalar-dominated solutions: 1. 
(a) The scalar-field gradient solution, $(y,x,z)_{\rm gr}=\left(-\lambda,0,0\right)\,.$ (3.7) Motion is aligned with the potential gradient flow. It is stable provided $-\sqrt{6+{\kappa^{2}\over 4}}-{\kappa\over 2}<\lambda<\sqrt{6+{\kappa^{2}\over 4}}-{\kappa\over 2}\,,\qquad|\lambda|<\sqrt{6}\,,\qquad|\lambda|<\sqrt{3(1+w)}\,.$ (3.8) The solution is depicted at the left panel of Fig. 1. Figure 1: The numerical solutions for a wide range of initial conditions (drawn uniformly from the surface defined from $\epsilon\approx 3$). Left: For $\lambda=\kappa=1$ and $w=0$ the solution asymptotes to the gradient critical point. Right: For a fluid with $w=2$, $\lambda=3$ and $\kappa=-1$ the solution asymptotes to kinetic domination. Blue dots correspond to the respective critical points and the semi-transparent blue surface denotes the region of definition for $x,y,z$. 2. (b) The scalar-field kinetic domination solution given as $(y,x,z)_{\rm kin}=\left(\pm\sqrt{6},0,0\right)\,,$ (3.9) and is stable for $|\lambda|>\sqrt{6}\,,\qquad\lambda\cdot\kappa<0\,,\qquad 1+w>2\,,$ (3.10) (see right panel of Fig. 1). 3. (c) The hyperbolic solution $(y,x,z)_{\rm hyper}=\left(-{6\over\kappa+\lambda},\pm{\sqrt{6}\sqrt{\lambda^{2}+\kappa\lambda-6}\over\kappa+\lambda},0\right)\,.$ (3.11) This solution exists provided $\kappa\neq-\lambda$ and $\lambda^{2}+\kappa\lambda-6>0$, that is, $\lambda>\sqrt{6+{\kappa^{2}\over 4}}-{\kappa\over 2}\qquad\text{or}\qquad\lambda<-\sqrt{6+{\kappa^{2}\over 4}}-{\kappa\over 2}\,,$ (3.12) and it is stable whenever the previous two solutions are unstable, namely when the following conditions are satisfied $\displaystyle\kappa>0\,,\qquad\lambda>\sqrt{6+{\kappa^{2}\over 4}}-{\kappa\over 2}\,,\qquad{\lambda\over\kappa+\lambda}<{1\over 2}(1+w)$ (3.13) $\displaystyle\kappa<0\,,\qquad\lambda<-\sqrt{6+{\kappa^{2}\over 4}}-{\kappa\over 2}\,,\qquad{\lambda\over\kappa+\lambda}<{1\over 2}(1+w)\,.$ (3.14) The solution is depicted at the left panel of Fig. 2. 
Figure 2: The numerical solutions assuming a fluid with $w=0$ for a wide range of initial conditions (drawn uniformly from the surface defined from $\epsilon\approx 3$). Left: For $\lambda=1$, $\kappa=15$ the solution asymptotes to one of the two hyperbolic critical points (depending on the initial $x$). Right: For $\lambda=2.2$, $\kappa=1$ the solution asymptotes to the scaling critical point. Blue dots correspond to the respective critical points and the semi-transparent blue surface denotes the region of definition for $x,y,z$. 2. 2. The second type of solution describes fluid domination, $z=3$, where the field velocities vanish. In order for this to be a solution of the dynamical system the parenthesis of Eq. (3.6) should be zero, and this gives the value of $\epsilon$ $\epsilon={3\over 2}(1+w)\,,$ (3.15) (this value of $\epsilon$ is compatible with Eq. (3.4)). These solutions are stable for $w<-1$, which yields $\epsilon<0$, and thus describe contracting universes. 3. 3. Finally, we find the scaling solution with the fluid and at least one of the fields non-zero. Again we require the parenthesis of Eq. (3.6) to vanish, and so $\epsilon$ has the same value as in fluid domination. The solution is $(y,x,z)_{\rm scal}=\left(-{3(w+1)\over\lambda},0,3-{9(1+w)\over\lambda^{2}}\right)\,,$ (3.16) which exists provided $z_{\rm scal}\geq 0$ or $\lambda^{2}\geq 3(1+w)\,.$ (3.17) The eigenvalues of the Jacobian matrix are $\displaystyle m_{1}$ $\displaystyle={3\over 2\lambda}\left[(w+1)\kappa+(w-1)\lambda\right]\,,$ (3.18) $\displaystyle m_{\pm}$ $\displaystyle={3\over 4}\left(w-1\pm\sqrt{(w-1)^{2}+8(w^{2}-1)-24(w-1)(w+1)^{2}\lambda^{-2}}\right)\,,$ (3.19) and their real parts are non-positive for $w>-1$ with the following additional restrictions on $w,\lambda,\kappa$ $\displaystyle-1<w<1\,,\qquad{\kappa\over\lambda}<{1-w\over 1+w}\,.$ (3.20) For $w=0$ we recover the relations mentioned in Ref. [3]. The solution is illustrated at the right panel of Fig. 2. 
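The critical points listed above admit a direct numerical check. The sketch below (a hand-rolled RK4 integration, with initial data chosen arbitrarily inside the allowed region) verifies that the points (3.7), (3.11) and (3.16) are equilibria of the reduced system (3.4)-(3.6) and that, for $\lambda=2.2$, $\kappa=1$, $w=0$, the flow settles onto the scaling point, in agreement with the right panel of Fig. 2:

```python
import math

def rhs(y, x, z, lam, kap, w):
    # Right-hand side of the reduced system, Eqs. (3.4)-(3.6)
    eps = 0.5 * (y * y + x * x) + 0.5 * (1 + w) * z
    dy = -(3 - eps) * (y + lam) + 0.5 * kap * x * x - 0.5 * (w - 1) * lam * z
    dx = -(3 - eps + 0.5 * kap * y) * x
    dz = -(3 + 3 * w - 2 * eps) * z
    return dy, dx, dz

lam, kap, w = 2.2, 1.0, 0.0   # parameters of the right panel of Fig. 2

# Gradient (3.7), hyperbolic (3.11) and scaling (3.16) points are equilibria
points = [
    (-lam, 0.0, 0.0),
    (-6 / (kap + lam), math.sqrt(6 * (lam**2 + kap * lam - 6)) / (kap + lam), 0.0),
    (-3 * (w + 1) / lam, 0.0, 3 - 9 * (1 + w) / lam**2),
]
for pt in points:
    assert all(abs(d) < 1e-12 for d in rhs(*pt, lam, kap, w))

# RK4 integration in e-folds: the scaling point is the late-time attractor
y, x, z, dt = 0.3, 0.5, 1.0, 0.01
for _ in range(8000):
    k1 = rhs(y, x, z, lam, kap, w)
    k2 = rhs(y + dt * k1[0] / 2, x + dt * k1[1] / 2, z + dt * k1[2] / 2, lam, kap, w)
    k3 = rhs(y + dt * k2[0] / 2, x + dt * k2[1] / 2, z + dt * k2[2] / 2, lam, kap, w)
    k4 = rhs(y + dt * k3[0], x + dt * k3[1], z + dt * k3[2], lam, kap, w)
    y += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    x += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    z += dt * (k1[2] + 2 * k2[2] + 2 * k3[2] + k4[2]) / 6

assert abs(y + 3 * (w + 1) / lam) < 1e-6
assert abs(x) < 1e-6 and abs(z - (3 - 9 * (1 + w) / lam**2)) < 1e-6
```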
Note that there are no real solutions with $y,x,z\neq 0$. ### 3.2 Generic field metric with isometry We will briefly comment on the case of a general field metric with an isometry in $\psi$ $\,\mathrm{d}s^{2}=\,\mathrm{d}\phi^{2}+F(\phi)\,\mathrm{d}\psi^{2}\,.$ (3.21) In this case $\kappa\equiv F_{,\phi}/F$ is field dependent and Eq. (3.2) cannot be omitted. Choosing an exponential potential, or a potential that asymptotes to an exponential at e.g. $-\infty$, all previous solutions (except for the hyperbolic one) may exist for appropriate choices of the metric function (see Fig. 3 for a model with $F=e^{\phi^{2}}$), while the hyperbolic solution is replaced by a de Sitter asymptotic state $y,x\rightarrow 0$. Figure 3: The numerical solutions assuming a fluid with $w=0$ and $\phi_{0}=2$ for a wide range of initial conditions (drawn uniformly from the surface defined from $\epsilon\approx 3$). Left: For $\lambda=1$ the solution asymptotes to the gradient critical point. Right: For $\lambda=2.2$ the solution asymptotes to the scaling critical point. Blue dots correspond to the respective critical points and the semi-transparent blue surface denotes the region of definition for $x,y,z$. This can be shown as follows: if the gradient or kinetic solutions are unstable, then a solution that resembles the hyperbolic one can be obtained if $\kappa$ diverges to plus/minus infinity at the boundary of the space. In this case the combination $\kappa y$ is required to be constant, and so the parenthesis of Eq. (3.5) can vanish. Plugging back into Eq. (3.4) gives the asymptotic solution for $y$ and $x$, which has exactly the same form as in the hyperbolic case, albeit with a field-dependent $\kappa$ that grows in norm [36, 4]. Even though a proper stability analysis for a field-dependent $\kappa$ requires study of the dynamical system at infinity, we can use a simpler argument to understand the behaviour of these solutions. 
For the $4\times 4$ dynamical system, calculating the Lyapunov exponents results in one zero eigenvalue, which is associated with $\phi$. Since $\phi$ will eventually roll towards decreasing values of the potential (otherwise the Friedmann constraint would be violated), no instability related to this marginal direction will be present. Therefore, one can apply the previous formulae for solutions and stability criteria after taking the limit $\phi\rightarrow\infty$. Note though that convergence towards the de Sitter critical point is much slower compared to other critical points, while the system may pass through other critical points first (see Fig. 4 for a model with $F=e^{\phi^{3}}$ as well as the discussion in Ref. [4]). Figure 4: Numerical solutions that asymptote to the de Sitter critical point with $\phi_{0}=2$, $\lambda=1$. ## 4 The $\mathcal{N}$-field hyperbolic solution To properly generalize the model to $\mathcal{N}$ fields we choose the following form of the field metric $\,\mathrm{d}s^{2}=\,\mathrm{d}\phi^{2}+\sum e^{\kappa_{i}\phi}\,\mathrm{d}\psi_{i}^{2}\,,$ (4.1) with Ricci scalar $R=-{1\over 4}\left(\sum\kappa_{i}\right)^{2}-{1\over 2}\sum\kappa_{i}^{2}\,.$ (4.2) The equations of motion are $\displaystyle\phi^{\prime}=y\,,$ (4.3) $\displaystyle\psi^{\prime}_{i}=x_{i}e^{-\kappa_{i}\phi/2}\,,$ (4.4) $\displaystyle y^{\prime}+(3-\epsilon)(y+\lambda)-\sum{\kappa_{i}\over 2}x_{i}^{2}+{1\over 2}(w-1)\lambda z=0\,,$ (4.5) $\displaystyle x_{i}^{\prime}+\left(3-\epsilon+{\kappa_{i}\over 2}y\right)x_{i}=0\,,$ (4.6) $\displaystyle z^{\prime}+(3+3w-2\epsilon)z=0\,.$ (4.7) To analyse the problem we will distinguish between two cases. ### 4.1 $\kappa_{i}$ are all equal In the symmetric case $\kappa_{i}=\kappa$ the solutions presented in Sec. 3 carry over with the substitution $x^{2}\rightarrow\sum x_{i}^{2}$. This becomes apparent if we write the differential equation of the slow-roll parameter $\epsilon$, which is found by contracting Eq. 
(2.15) with $v_{i}$: $\epsilon^{\prime}+(3-\epsilon)(2\epsilon+\lambda y)+{1\over 2}(w-1)\lambda yz=0\,,$ (4.8) and substituting $\sum x_{i}^{2}=2\epsilon-y^{2}$ in Eq. (4.5) $y^{\prime}+(3-\epsilon)(y+\lambda)-{\kappa\over 2}(2\epsilon-y^{2})+{1\over 2}(w-1)\lambda z=0\,.$ (4.9) This shows that the set of Eqs. (4.6) can be replaced with Eq. (4.8), with the $x_{i}$ left undetermined (a similar argument was used in Ref. [38]). ### 4.2 $\kappa_{i}$ are different The situation is drastically different when the $\kappa_{i}$ are different. In what follows we list all critical-point solutions and their stability properties. 1. 1. The analogue of the hyperbolic solution with $y$ and all $x_{i}$ different from zero is inconsistent, as it requires $(3-\epsilon)+{\kappa_{i}\over 2}y=0\,,$ (4.10) to hold for every $\kappa_{i}$. Therefore, we conclude that only one $x_{j}$ can be non-zero and $x_{i}=0$ for $i\neq j$. To study the stability we calculate the Jacobian matrix evaluated on the hyperbolic solution and we observe that it always acquires a block diagonal form (some permutations of rows and columns may be necessary) $\begin{pmatrix}A_{2\times 2}&0_{2\times\mathcal{N}-1}\\\ 0_{\mathcal{N}-1\times 2}&B_{\mathcal{N}-1\times\mathcal{N}-1}\end{pmatrix}\,,$ (4.11) where $A_{2\times 2}={1\over(\kappa_{j}+\lambda)^{2}}\begin{pmatrix}36-3(\kappa_{j}-\lambda)(\kappa_{j}+2\lambda)&\sqrt{6}\sqrt{\lambda^{2}+\kappa_{j}\lambda-6}[(\kappa_{j}-\lambda)^{2}-6]\\\ -\sqrt{{3\over 2}}(12+\kappa_{j}^{2}+\lambda\kappa_{j})\sqrt{\lambda^{2}+\kappa_{j}\lambda-6}&6[(\kappa_{j}-\lambda)^{2}-6]\end{pmatrix}\,,$ (4.12) is exactly the stability matrix of the reduced two-field problem, while the other matrix is diagonal $B_{\mathcal{N}-1\times\mathcal{N}-1}=\text{diag}\left({-3(\kappa_{j}-\kappa_{i})\over\kappa_{j}+\lambda},\cdots,3+3w-2\epsilon\right)\,,$ (4.13) for $i=1,\cdots,\mathcal{N}-2\neq j$. 
The eigenvalues of $A$ need to satisfy the inequalities (3.13)-(3.14) (with the substitution $\kappa\rightarrow\kappa_{j}$), while the rest $\mathcal{N}-1$ eigenvalues are the diagonal elements of $B$, and so a stable solution requires $\displaystyle\kappa_{j}>\kappa_{i}~{}~{}\text{for}~{}~{}\kappa_{j}>0\,,\qquad\kappa_{j}<\kappa_{i}~{}~{}\text{for}~{}~{}\kappa_{j}<0\,,$ (4.14) for $i\neq j$. Figure 5: The numerical solution for three fields and a wide range of initial conditions (drawn uniformly from the surface of the sphere with $\epsilon\approx 3$) with $\lambda=3$, $\kappa_{1}=1$, $z=0$ and $\kappa_{2}=-1$ (left) and $\kappa_{2}=10$ (right). The blue dots correspond to the two hyperbolic solutions. 2. 2. The fluid domination and the scaling solution with non-zero $y,z$ and $x_{i}=0$ $(y,x_{i},z)_{\rm scal}=\left(-{3(w+1)\over\lambda},0,\cdots,3-{9(1+w)\over\lambda^{2}}\right)\,,$ (4.15) both exist with the same stability properties as previously. 3. 3. Finally, in addition to the usual kinetic-domination solution $(y,x_{i},z)_{\rm kin}=\left(\pm\sqrt{6},0,\cdots,0\right)\,,$ (4.16) which is stable for $|\lambda|>\sqrt{6}$ and $\lambda\kappa_{i}<0$, new solutions ($\epsilon=3$) exist with $y=0$ and $x_{i}\neq 0$ which follow from $\sum\kappa_{i}x_{i}^{2}=0\,,\qquad\sum x_{i}^{2}=6\,.$ (4.17) For three fields the solution is trivially found to be $(y,x_{1},x_{2},z)_{\rm kin}=\left(0,\pm\sqrt{6}\sqrt{-{\kappa_{2}\over\kappa_{1}-\kappa_{2}}},\pm\sqrt{6}\sqrt{{\kappa_{1}\over\kappa_{1}-\kappa_{2}}},0\right)\,,$ (4.18) which exists provided $\kappa_{1}\kappa_{2}<0$. It can be shown that the stability matrix evaluated on the solution contains at least one positive eigenvalue and thus this kinetic domination is unstable. For $\mathcal{N}>4$ fields this type of kinetic solution is defined on the $(\mathcal{N}-2)$-dimensional hypersurface containing points that satisfy the relations of (4.17), and it is again unstable. 
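Two of the claims above admit quick numerical checks: the equal-$\kappa$ reduction of Sec. 4.1 and the kinetic conditions (4.17). The sketch below (explicit Euler integration with arbitrary initial data; the $\kappa$ values in the second check are hypothetical) verifies both:

```python
def rhs(y, xs, z, lam, kaps, w):
    # Right-hand side of Eqs. (4.5)-(4.7); phi does not feed back into these
    e = 0.5 * (y * y + sum(v * v for v in xs)) + 0.5 * (1 + w) * z
    dy = (-(3 - e) * (y + lam)
          + sum(0.5 * k * v * v for k, v in zip(kaps, xs))
          - 0.5 * (w - 1) * lam * z)
    dxs = [-(3 - e + 0.5 * k * y) * v for k, v in zip(kaps, xs)]
    dz = -(3 + 3 * w - 2 * e) * z
    return dy, dxs, dz

def euler(state, kaps, lam, w, dt, steps):
    # Explicit Euler is enough here: both runs take identical steps
    y, xs, z = state
    for _ in range(steps):
        dy, dxs, dz = rhs(y, xs, z, lam, kaps, w)
        y, xs, z = y + dt * dy, [v + dt * d for v, d in zip(xs, dxs)], z + dt * dz
    return y, xs, z

lam, kap, w = 1.0, 1.0, 0.0

# (i) Equal kappa_i (Sec. 4.1): three fields reproduce the two-field system
#     after the substitution x^2 -> sum_i x_i^2
y3, xs3, z3 = euler((0.2, [0.3, 0.4, 0.5], 0.8), [kap] * 3, lam, w, 1e-3, 2000)
X0 = (0.3 ** 2 + 0.4 ** 2 + 0.5 ** 2) ** 0.5
y2, xs2, z2 = euler((0.2, [X0], 0.8), [kap], lam, w, 1e-3, 2000)
assert abs(y3 - y2) < 1e-9 and abs(z3 - z2) < 1e-9
assert abs(sum(v * v for v in xs3) - xs2[0] ** 2) < 1e-9

# (ii) Different kappa_i: the two conditions of Eq. (4.17) are linear in
#      x_1^2, x_2^2 (the kappa values here are hypothetical)
k1, k2 = 1.0, -2.0            # opposite signs are required for a real solution
x1sq = -6 * k2 / (k1 - k2)
x2sq = 6 * k1 / (k1 - k2)
assert x1sq > 0 and x2sq > 0
assert abs(k1 * x1sq + k2 * x2sq) < 1e-12 and abs(x1sq + x2sq - 6) < 1e-12
```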
Having thoroughly examined the background, in the next subsection we move to the study of quantum fluctuations, neglecting the fluid’s energy density and fluctuations. ### 4.3 Observables To extract physical quantities it is necessary to express gauge-invariant perturbations in the orthonormal basis (local Frenet system) $Q^{i}=E^{~{}i}_{A}q^{A}$, where the matrix $E^{~{}i}_{A}$ has as columns the components of the orthonormal vectors $E^{~{}i}_{A}=\left(t^{i},n^{i},b^{i},\cdots\right)\,.$ (4.19) Here, $t^{i}$ is the tangent unit vector, $n^{i}$ the normal vector, $b^{i}$ the binormal vector and so on. The projection along the tangent vector is related to the curvature perturbation, while projections along the orthogonal directions are related to isocurvature perturbations. The covariant time derivatives of the orthonormal vectors satisfy the Frenet-Serret equations $\,\text{D}_{t}E^{~{}i}_{A}=C^{~{}B}_{A}E^{~{}i}_{B}\,,$ (4.20) where the matrix $C$ is antisymmetric with non-zero elements in the upper and lower diagonals. It is well known that the curvature perturbation ($Q_{\sigma}\equiv Q_{i}E^{~{}i}_{1}=q_{1}$) is sourced by the first isocurvature perturbation ($Q_{s}\equiv Q_{i}E^{~{}i}_{2}=q_{2}$) [39, 40], while the remaining orthogonal perturbations interact via a “mass matrix” as well as through the curvatures that appear in the Frenet-Serret equations (see e.g. Refs. [41, 42, 43] for examples including up to three fields and Refs. [44, 45] for a formal discussion including an arbitrary number of fields). 
More specifically, orthogonal fields are coupled through the following terms in the second order action $2\dot{q}^{T}\cdot\Omega\cdot q+q^{T}\cdot\Omega^{T}\cdot\Omega\cdot q-q^{T}\cdot M^{2}\cdot q\,,$ (4.21) where $\Omega$ is the truncated matrix obtained from $C$ after removing elements of the first row and column, and the “mass matrix” is defined from $M^{2}_{AB}\equiv E^{~{}i}_{A}E^{~{}j}_{B}\left(V_{;ij}+\dot{\sigma}^{2}t^{k}t^{l}R_{kilj}+3\omega^{2}\delta_{A2}\delta_{B2}\right)\,,$ (4.22) for $A,B>1$. Extra orthogonal fields decouple from $Q_{\sigma}$ and $Q_{s}$ when the matrices $\Omega$ and $M$ are block diagonal. A necessary condition for this decoupling is the vanishing of the torsion of the $\mathcal{N}$-dimensional field-space trajectory. Recall that the torsion of a curve is found by calculating the rate of change of the normal unit vector. Using the Frenet-Serret equations, the torsion ($\tau$) is defined from $\,\text{D}_{t}n^{i}=-\omega t^{i}+\tau b^{i}\,,$ (4.23) where $t^{i},n^{i},b^{i}$ are the first three unit vectors of the orthonormal frame at some point of the curve and $\omega$ is the turn rate. The case of zero torsion is reminiscent of geodesic motion, where the curvature perturbation decouples from isocurvature perturbations. For the hyperbolic solution the vectors $t^{i},n^{i}$ have the first two components non-zero and the rest zero, which forces the next vectors in the series to have the first two components zero and the rest non-zero: $\displaystyle t^{i}$ $\displaystyle=(t^{\phi},t^{\chi},0,\cdots)\,,$ (4.24) $\displaystyle n^{i}$ $\displaystyle=(n^{\phi},n^{\chi},0,\cdots)\,,$ (4.25) $\displaystyle b^{i}$ $\displaystyle=(0,0,b^{\chi_{2}},\cdots)\,,$ (4.26) $\displaystyle\cdots$ (4.27) This means that the matrix $E^{~{}i}_{A}$ is block diagonal. Using the Frenet-Serret equation for $n^{i}$ we find $\text{D}_{t}n^{i}=-\omega t^{i}$ and, hence, the torsion is zero, which implies that $\Omega$ is block diagonal. 
Moreover, $M^{2}$ turns out to be block diagonal and so we conclude that $Q_{\sigma}$ and $Q_{s}$ evolve independently of the remaining fields. Therefore, the basic predictions for this model, namely the spectral index and the tensor-to-scalar ratio, are identical to the two-field case. ### 4.4 Field metric with $\mathcal{N}-1$ isometries Similar to Sec. 3.2 we can consider metrics that can support $\mathcal{N}-1$ integrals of motion, and hence they should have $\mathcal{N}-1$ isometries. This type of metric naturally admits a $1\times(\mathcal{N}-1)$ decomposition, with the $\phi$ field canonically normalized and the remaining $(\mathcal{N}-1)\times(\mathcal{N}-1)$ block depending only on $\phi$. To simplify calculations and retain some analytical control we will consider the case of a diagonal metric $G_{ij}=\text{diag}(1,F_{1}(\phi),F_{2}(\phi),\cdots)\,,$ (4.28) where the metric functions $F_{1},F_{2},\cdots$ can be different. Following the two-field discussion, in addition to the kinetic and gradient solutions we can find de Sitter asymptotic solutions in the region where at least one of $\kappa_{i}\equiv(\ln F_{i})_{,\phi}$ diverges and the inequalities (4.14) are satisfied. ## 5 The general solution ### 5.1 Without matter source In the absence of a matter source we find that general solutions exist when $\kappa$ and the lapse function are given by $\kappa=-\left(\lambda+\sqrt{6}\right)\,,\qquad N_{l}\left(t\right)=a^{-3-\sqrt{6}\lambda}\,.$ (5.1) From the asymptotic analysis of Sec. 3 we know that this combination of $\kappa$ and $\lambda$ will give rise to either kinetic- ($|\lambda|>\sqrt{6}$) or gradient- ($|\lambda|<\sqrt{6}$) type solutions. Note also that for this choice of the lapse function $t$ is not the cosmic time, but rather another time variable.
For the latter selection the field equations admit the Noetherian conservation laws $I_{1}\left(a,\dot{a},\phi,\dot{\phi},\psi,\dot{\psi}\right)=\frac{\,\mathrm{d}}{\,\mathrm{d}t}\left(a^{2+\frac{\sqrt{6}}{2}\lambda}e^{-{1\over 2}(\lambda+\sqrt{6})\phi}\right)\,,$ (5.2) $I_{2}\left(a,\dot{a},\phi,\dot{\phi},\psi,\dot{\psi}\right)=a^{6+\sqrt{6}\lambda}e^{-(\lambda+\sqrt{6})\phi}\dot{\psi}\,.$ (5.3) When $\lambda=-\sqrt{6}$ the field-space curvature is zero and the solution for this case can be found in Refs. [13, 14]. In the following we will consider $\lambda\neq-\sqrt{6}$. Applying the coordinate transformation $\displaystyle a\left(\chi\left(t\right),\xi\left(t\right),\zeta\left(t\right)\right)$ $\displaystyle=a_{0}\left(\left(\lambda+\sqrt{6}\right)^{2}\left(\chi\left(t\right)\zeta\left(t\right)-\xi\left(t\right)^{2}\right)\right)^{\left(6+\sqrt{6}\lambda\right)^{-1}}\,,$ (5.4) $\displaystyle\phi\left(\chi\left(t\right),\xi\left(t\right),\zeta\left(t\right)\right)$ $\displaystyle=\frac{2}{\lambda+\sqrt{6}}\ln\left(\frac{\sqrt{\left(\lambda+\sqrt{6}\right)^{2}\left(\chi\left(t\right)\zeta\left(t\right)-\xi\left(t\right)^{2}\right)}}{2\chi\left(t\right)}\right)\,,$ $\displaystyle\psi\left(\chi\left(t\right),\xi\left(t\right),\zeta\left(t\right)\right)$ $\displaystyle=\frac{\xi\left(t\right)}{\chi\left(t\right)}\,,$ the point-like Lagrangian is expressed in the new coordinates as $L\left(\chi,\dot{\chi},\xi,\dot{\xi},\zeta,\dot{\zeta}\right)=2\left(\dot{\xi}^{2}-\dot{\chi}\dot{\zeta}\right)-\bar{V}_{0}\chi^{-\bar{\lambda}}\,,$ (5.5) where we defined for simplicity $\bar{V}_{0}=4^{-{\lambda\over\sqrt{6}+\lambda}}V_{0}\,,\qquad\bar{\lambda}={2\lambda\over\sqrt{6}+\lambda}\,.$ (5.6) The field equations are written as follows $\ddot{\xi}=0\,,\qquad\ddot{\chi}=0\,,\qquad\ddot{\zeta}+\frac{\bar{\lambda}\bar{V}_{0}}{2}\chi^{-\bar{\lambda}-1}=0\,,$ (5.7) along with the constraint $2\left(\dot{\xi}^{2}-\dot{\chi}\dot{\zeta}\right)+\bar{V}_{0}\chi^{-\bar{\lambda}}=0\,.$ (5.8) 
Consequently, the analytic solution of the field equations is $\chi\left(t\right)=\chi_{0}\left(t-t_{0}\right)\,,\qquad\xi\left(t\right)=\xi_{0}\left(t-t_{1}\right)\,,$ and $\zeta\left(t\right)=\frac{\bar{V}_{0}\chi_{0}^{-\bar{\lambda}-1}}{2\left(1-\bar{\lambda}\right)}\left(t-t_{0}\right)^{1-\bar{\lambda}}+\zeta_{0}\left(t-t_{2}\right)\,,\qquad\text{ for }\bar{\lambda}\neq 1,0\,,$ (5.9) or $\zeta\left(t\right)=\frac{\bar{V}_{0}}{2\chi_{0}^{2}}\ln\left(t-t_{0}\right)+\zeta_{0}\left(t-t_{2}\right)\,,\qquad\text{ for }\bar{\lambda}=1\,,$ (5.10) with the constraint $\xi_{0}^{2}-\chi_{0}\zeta_{0}=0$. For the special case of $\bar{\lambda}=0$, that is, the potential is a cosmological constant ($\lambda=0$), the exact solution is $\chi\left(t\right)=\chi_{0}\left(t-t_{0}\right)\,,\qquad\xi\left(t\right)=\xi_{0}\left(t-t_{1}\right)\,,\qquad\zeta\left(t\right)=\zeta_{0}\left(t-t_{2}\right)\,,$ (5.11) with the constraint equation $2\left(\xi_{0}^{2}-\chi_{0}\zeta_{0}\right)+\bar{V}_{0}=0$. With $\chi,\xi$ and $\zeta$ known, the solution in terms of the original variables of the problem can be found using the transformation (5.4). In Fig. 6 we plot the qualitative evolution of the effective equation of state parameter $w_{\rm eff}=w_{\rm eff}\left(a\right)$, which is defined as $w_{\rm eff}\left(a\left(t\right)\right)=-1-\frac{2}{3}\frac{\dot{H}}{H^{2}}$, for two different values of the parameter $\lambda$. We observe that $w_{\rm eff}\left(a\right)$ is that of the hyperbolic expansion. Figure 6: Qualitative evolution of the effective equation of state parameter $w_{\rm eff}=-1-\frac{2}{3}\frac{\dot{H}}{H^{2}}$ for the analytic solution of our consideration for $\left(\lambda,\chi_{0},\zeta_{0},t_{0},t_{1},t_{2},\bar{V}_{0}\right)$ equal to $\left(-1,0.1,0.1,0,0.1,10,1\right)$ (left) and $\left(\sqrt{6},1,\zeta_{0},0,0,0,1\right)$ (right). It was pointed out recently in Ref.
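As a quick symbolic check (a SymPy sketch of our own, with $\tau=t-t_{0}$), the logarithmic branch (5.10) indeed satisfies the $\zeta$-equation of the system, which in this special case reduces to $\ddot{\zeta}=-\frac{\bar{V}_{0}}{2}\chi^{-2}$:

```python
import sympy as sp

tau, chi0, zeta0, V0 = sp.symbols('tau chi0 zeta0 V0', positive=True)
chi = chi0 * tau                                     # chi(t) = chi0 (t - t0)
zeta = V0/(2*chi0**2) * sp.log(tau) + zeta0 * tau    # logarithmic branch (5.10)

# residual of  zeta'' + (V0/2) chi^{-2} = 0  must vanish identically
residual = sp.diff(zeta, tau, 2) + sp.Rational(1, 2) * V0 / chi**2
assert sp.simplify(residual) == 0
```

The linear piece $\zeta_{0}(t-t_{2})$ drops out after two derivatives, so only the logarithm contributes.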
[46] that when the second field is phantom we can reconstruct the analytic solution for the two-field model in the same way as before by setting $\tilde{\psi}=i\psi$. Therefore, the analytic solution for the second case, $\kappa=\left(\lambda+\sqrt{6}\right)$, is determined in a similar way, so we omit the details. Indeed, by considering $\xi\left(t\right)\to i\xi\left(t\right)$ we obtain the coordinate transformation $\displaystyle a\left(\chi\left(t\right),\xi\left(t\right),\zeta\left(t\right)\right)$ $\displaystyle=a_{0}\left(\left(\lambda+\sqrt{6}\right)^{2}\left(\chi\left(t\right)\zeta\left(t\right)+\xi\left(t\right)^{2}\right)\right)^{\left(6+\sqrt{6}\lambda\right)^{-1}}\,,$ (5.12) $\displaystyle\phi\left(\chi\left(t\right),\xi\left(t\right),\zeta\left(t\right)\right)$ $\displaystyle=\frac{2}{\lambda+\sqrt{6}}\ln\left(\frac{\sqrt{\left(\left(\lambda+\sqrt{6}\right)^{2}\left(\chi\left(t\right)\zeta\left(t\right)+\xi\left(t\right)^{2}\right)\right)}}{2\chi\left(t\right)}\right)\,,$ (5.13) $\displaystyle\psi\left(\chi\left(t\right),\xi\left(t\right),\zeta\left(t\right)\right)$ $\displaystyle=\frac{\xi\left(t\right)}{\chi\left(t\right)}\,,$ (5.14) which produces the same second-order differential equations as before. Consequently, the solution for the variables $\chi\left(t\right),~{}\xi\left(t\right)$ and $\zeta\left(t\right)$ is the same as before, while the constraint equation is now $\xi_{0}^{2}+\chi_{0}\zeta_{0}=0$. ### 5.2 In the presence of matter source Consider now the existence of an additional matter field.
We conclude that when the equation of state parameter is $w\left(\lambda\right)=-1+\frac{\sqrt{6}}{3}\lambda$, then the solutions found in the previous section hold for this cosmological model as well, with the only difference that the constraint equations for the integration constants $\xi_{0},\chi_{0},\zeta_{0}$ are now $\rho_{m0}=2\xi_{0}^{2}-\chi_{0}\zeta_{0}\,,$ (5.15) for the Chiral model and $\rho_{m0}=\xi_{0}^{2}+\chi_{0}\zeta_{0}\,,$ (5.16) for the Chiral-quintom model. We observe that for $w_{m}=-\frac{1}{3}$ the point-like Lagrangian (2.10) describes the gravitational field equations for the case of an FLRW spacetime with spatial curvature $k=\rho_{m0}$, that is, for the line element $\,\mathrm{d}s^{2}=-N^{2}\left(t\right)\,\mathrm{d}t^{2}+\frac{a^{2}\left(t\right)}{1-\frac{k}{4}\left(X^{2}+Y^{2}+Z^{2}\right)}\left(\,\mathrm{d}X^{2}+\,\mathrm{d}Y^{2}+\,\mathrm{d}Z^{2}\right)\,.$ (5.17) Hence, we have also presented for the first time an analytic solution for a two-field model in a non-flat FLRW background space. ## 6 Discussion In this work we studied the dynamics and the existence of analytic solutions for a multi-field cosmological model in a spatially flat FLRW background space. In particular, we considered a cosmological model consisting of $\mathcal{N}$ scalar fields minimally coupled to gravity with a hyperbolic interaction in the kinetic terms in the presence of an ideal gas. For this gravitational model we studied the asymptotic behaviour of the field equations. We recovered previous results for the two-field model, that is, all critical points of quintessence, i.e. of the single-field theory, as well as a pair of points which describe a hyperbolic solution where both fields contribute. In the multi-field scenario with $\mathcal{N}>2$ fields we showed that the number of dynamical fields for any late-time solution, and most importantly for the hyperbolic one, remains two.
Therefore, at the background level the late-time behaviour of this problem is identical to the two-field case. Moreover, in Sec. 4.3 we showed that the same is true for first order perturbations. This is an important observation because it is clear that by adding additional scalar fields in this specific theory we do not get new physical results (regarding late-time solutions), while no information can be extracted from current observations to support this $\mathcal{N}$-field theory with $\mathcal{N}>2$. It would be interesting to investigate whether this holds for non-Gaussianities and other higher-order correlators as well. Finally, for the two-field model with an exponential potential we proved for the first time the Liouville integrability of the field equations and derived the analytic solution of the model. We applied the theory of similarity transformations to construct conservation laws. We found that for a specific combination of the exponents $\lambda$ and $\kappa$ (associated with the potential gradient and the curvature of the hyperbolic space, respectively) new conservation laws exist which facilitate the derivation of a closed-form solution of the dynamical system. We demonstrated that the solution holds in the presence of additional matter, and that it can be used to construct the analytic solution of the multi-field model in a non-spatially flat FLRW background space. ## Acknowledgments PC acknowledges financial support from the Dutch Organisation for Scientific Research (NWO). ## References * [1] A. R. Brown, “Hyperbolic Inflation,” Phys. Rev. Lett. 121 (2018) no.25, 251601 [arXiv:1705.03023 [hep-th]]. * [2] S. Mizuno and S. Mukohyama, “Primordial perturbations from inflation with a hyperbolic field-space,” Phys. Rev. D 96 (2017) no.10, 103533 [arXiv:1707.05125 [hep-th]]. * [3] M. Cicoli, G. Dibitetto and F. G. Pedro, “New accelerating solutions in late-time cosmology,” Phys. Rev. D 101 (2020) no.10, 103524 [arXiv:2002.02695 [gr-qc]].
* [4] M. Cicoli, G. Dibitetto and F. G. Pedro, “Out of the Swampland with Multifield Quintessence?,” JHEP 10 (2020), 035 [arXiv:2007.11011 [hep-th]]. * [5] J. Fumagalli, S. Garcia-Saenz, L. Pinol, S. Renaux-Petel and J. Ronayne, “Hyper-Non-Gaussianities in Inflation with Strongly Nongeodesic Motion,” Phys. Rev. Lett. 123 (2019) no.20, 201302 [arXiv:1902.03221 [hep-th]]. * [6] T. Bjorkmo, R. Z. Ferreira and M. C. D. Marsh, “Mild Non-Gaussianities under Perturbative Control from Rapid-Turn Inflation Models,” JCAP 12 (2019), 036 [arXiv:1908.11316 [hep-th]]. * [7] R. Z. Ferreira, “Non-Gaussianities in models of inflation with large and negative entropic masses,” JCAP 08 (2020), 034 [arXiv:2003.13410 [astro-ph.CO]]. * [8] M. Bounakis, I. G. Moss and G. Rigopoulos, “Observational constraints on Hyperinflation,” [arXiv:2010.06461 [gr-qc]]. * [9] S. Basilakos, G. Leon, G. Papagiannopoulos and E. N. Saridakis, “Dynamical system analysis at background and perturbation levels: Quintessence in severe disadvantage comparing to $\Lambda$CDM,” Phys. Rev. D 100 (2019) no.4, 043524 [arXiv:1904.01563 [gr-qc]]. * [10] A. Banerjee, H. Cai, L. Heisenberg, E. Ó. Colgáin, M. M. Sheikh-Jabbari and T. Yang, [arXiv:2006.00244 [astro-ph.CO]]. * [11] T. Bjorkmo and M. C. D. Marsh, “Hyperinflation generalised: from its attractor mechanism to its tension with the ‘swampland conditions’,” JHEP 04 (2019), 172 [arXiv:1901.08603 [hep-th]]. * [12] V. Aragam, S. Paban and R. Rosati, “The Multi-Field, Rapid-Turn Inflationary Solution,” [arXiv:2010.15933 [hep-th]]. * [13] L. P. Chimento, “General solution to two-scalar field cosmologies with exponential potentials,” Class. Quant. Grav. 15 (1998), 965-974 * [14] P. Christodoulidis, “Probing the inflationary evolution using analytical solutions,” [arXiv:1811.06456 [astro-ph.CO]]. * [15] A. Paliathanasis, G. Leon and S. Pan, “Exact Solutions in Chiral Cosmology,” Gen. Rel. Grav. 51 (2019) no.9, 106 [arXiv:1811.10038 [gr-qc]]. * [16] J. Socorro, S. 
Pérez-Payán, R. Hernández, A. Espinoza-García and L. R. Díaz-Barrón, “Classical and quantum exact solutions for a FRW in chiral like cosmology,” [arXiv:2012.11108 [gr-qc]]. * [17] J. Socorro, S. Pérez-Payán, A. Espinoza-García and L. R. Díaz-Barrón, [arXiv:2101.05973 [gr-qc]]. * [18] S. Basilakos, M. Tsamparlis and A. Paliathanasis, “Using the Noether symmetry approach to probe the nature of dark energy,” Phys. Rev. D 83 (2011), 103512 [arXiv:1104.2980 [astro-ph.CO]]. * [19] A. Paliathanasis, M. Tsamparlis and S. Basilakos, “Constraints and analytical solutions of $f(R)$ theories of gravity using Noether symmetries,” Phys. Rev. D 84 (2011), 123514 [arXiv:1111.4547 [astro-ph.CO]]. * [20] J. A. Belinchón, T. Harko and M. K. Mak, Astrophys. Space Sci. 361 (2016) no.2, 52 doi:10.1007/s10509-015-2642-7 [arXiv:1512.08054 [gr-qc]]. * [21] M. Demianski, E. Piedipalumbo, C. Rubano and C. Tortora, “Accelerating universe in scalar tensor models: Confrontation of theoretical predictions with observations,” Astron. Astrophys. 454 (2006), 55-66 [arXiv:astro-ph/0604026 [astro-ph]]. * [22] P. A. Terzis, N. Dimakis and T. Christodoulakis, “Noether analysis of Scalar-Tensor Cosmology,” Phys. Rev. D 90 (2014) no.12, 123543 [arXiv:1410.0802 [gr-qc]]. * [23] A. Paliathanasis, J. D. Barrow and P. G. L. Leach, “Cosmological Solutions of $f(T)$ Gravity,” Phys. Rev. D 94 (2016) no.2, 023525 [arXiv:1606.00659 [gr-qc]]. * [24] J. D. Barrow, S. Cotsakis and A. Tsokaros, “Series expansions and sudden singularities,” [arXiv:1301.6523 [gr-qc]]. * [25] S. Cotsakis, “Asymptotic Poincaré compactification and finite-time singularities,” Grav. Cosmol. 19 (2013), 240-245 [arXiv:1301.4778 [gr-qc]]. * [26] L. Amendola, R. Gannouji, D. Polarski and S. Tsujikawa, “Conditions for the cosmological viability of f(R) dark energy models,” Phys. Rev. D 75 (2007), 083504 [arXiv:gr-qc/0612180 [gr-qc]]. * [27] A. Coley and G.
Leon, “Static Spherically Symmetric Einstein-aether models I: Perfect fluids with a linear equation of state and scalar fields with an exponential self-interacting potential,” Gen. Rel. Grav. 51 (2019) no.9, 115 [arXiv:1905.02003 [gr-qc]]. * [28] G. Leon, Y. Leyva and J. Socorro, “Quintom phase-space: beyond the exponential potential,” Phys. Lett. B 732 (2014), 285-297 [arXiv:1208.0061 [gr-qc]]. * [29] R. Lazkoz, G. Leon and I. Quiros, “Quintom cosmologies with arbitrary potentials,” Phys. Lett. B 649 (2007), 103-110 [arXiv:astro-ph/0701353 [astro-ph]]. * [30] Y.F. Cai, E.N. Saridakis, M.R. Setare and J.-Q. Xia, “Quintom cosmology: theoretical implications and observations,” Phys. Rep. 493, 1 (2010) * [31] S.V. Chervon, “On the chiral model of cosmological inflation,” Russ. Phys. J. 38, 539 (1995) * [32] S. V. Ketov, “Quantum Non-linear Sigma Models,” Springer-Verlag, Berlin, (2000). * [33] J. Lee, T.H. Lee, T. Moon and P. Oh, “de Sitter nonlinear sigma model and accelerating universe,” Phys. Rev. D 80, 065016 (2009) * [34] S.V. Chervon, “Chiral Cosmological Models: Dark Sector Fields Description,” Quantum Matter 2, 71 (2013) * [35] A. Paliathanasis, “Dynamics of chiral cosmology,” Class. Quantum Grav. 37, 195014 (2020) * [36] P. Christodoulidis, D. Roest and E. I. Sfakianakis, “Scaling attractors in multi-field inflation,” JCAP 12, 059 (2019) [arXiv:1903.06116 [hep-th]]. * [37] N. Dimakis, A. Paliathanasis, P. A. Terzis and T. Christodoulakis, Eur. Phys. J. C 79, no.7, 618 (2019) [arXiv:1904.09713 [gr-qc]]. * [38] A. Paliathanasis and G. Leon, “Asymptotic behavior of N-fields Chiral Cosmology,” Eur. Phys. J. C 80 (2020) no.9, 847 [arXiv:2007.13223 [gr-qc]]. * [39] C. Gordon, D. Wands, B. A. Bassett and R. Maartens, “Adiabatic and entropy perturbations from inflation,” Phys. Rev. D 63 (2000), 023506 [arXiv:astro-ph/0009131 [astro-ph]]. * [40] K. A. Malik and D. Wands, “Cosmological perturbations,” Phys. Rept. 475 (2009), 1-51 [arXiv:0809.4944 [astro-ph]].
* [41] D. I. Kaiser, E. A. Mazenc and E. I. Sfakianakis, “Primordial Bispectrum from Multifield Inflation with Nonminimal Couplings,” Phys. Rev. D 87 (2013), 064004 [arXiv:1210.7487 [astro-ph.CO]]. * [42] D. I. Kaiser and E. I. Sfakianakis, “Multifield Inflation after Planck: The Case for Nonminimal Couplings,” Phys. Rev. Lett. 112 (2014) no.1, 011302 [arXiv:1304.0363 [astro-ph.CO]]. * [43] V. Aragam, S. Paban and R. Rosati, “Multi-field Inflation in High-Slope Potentials,” JCAP 04 (2020), 022 [arXiv:1905.07495 [hep-th]]. * [44] A. Achúcarro, S. Céspedes, A. C. Davis and G. A. Palma, “Constraints on Holographic Multifield Inflation and Models Based on the Hamilton-Jacobi Formalism,” Phys. Rev. Lett. 122 (2019) no.19, 191301 [arXiv:1809.05341 [hep-th]]. * [45] L. Pinol, “Multifield inflation beyond $N_{\mathrm{field}}=2$: non-Gaussianities and single-field effective theory,” [arXiv:2011.05930 [astro-ph.CO]]. * [46] A. Paliathanasis and G. Leon, “Dynamics of a two scalar field cosmological model with phantom terms,” [arXiv:2009.12874]
# Communication-Efficient Variance-Reduced Decentralized Stochastic Optimization over Time-Varying Directed Graphs Yiyue Chen, Abolfazl Hashemi, Haris Vikalo Yiyue Chen and Haris Vikalo are with the Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712 USA. Abolfazl Hashemi is with the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA. A preliminary version of this article was presented at the 2021 International Conference on Acoustics, Speech, and Signal Processing (ICASSP) [1]. ###### Abstract We consider the problem of decentralized optimization over time-varying directed networks. The network nodes can access only their local objectives, and aim to collaboratively minimize a global function by exchanging messages with their neighbors. Leveraging sparsification, gradient tracking and variance-reduction, we propose a novel communication-efficient decentralized optimization scheme that is suitable for resource-constrained time-varying directed networks. We prove that in the case of smooth and strongly-convex objective functions, the proposed scheme achieves an accelerated linear convergence rate. To our knowledge, this is the first decentralized optimization framework for time-varying directed networks that achieves such a convergence rate and applies to settings requiring sparsified communication. Experimental results on both synthetic and real datasets verify the theoretical results and demonstrate the efficacy of the proposed scheme. ## 1 Introduction Decentralized optimization problems are encountered in a number of settings in control, signal processing, and machine learning [2, 3, 4].
Formally, the goal of a decentralized optimization task is to minimize a global objective in the form of a finite sum $\min_{\mathbf{x}\in\mathcal{X}}\left[f(\mathbf{x}):=\frac{1}{n}\sum_{i=1}^{n}f_{i}(\mathbf{x})\right],$ (1) where $f_{i}(\mathbf{x})=\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}f_{i,j}(\mathbf{x}):\mathbb{R}^{d}\to\mathbb{R}$ for $i\in[n]:=\left\\{1,...,n\right\\}$ denotes the local objective function that averages the loss of $m_{i}$ data points at node $i$, and $\mathcal{X}$ denotes a convex compact constraint set. The $n$ nodes of the network exchange messages to collaboratively solve (1). Since the communication links between nodes in real-world networks are often uni-directional and dynamic (i.e., time-varying), we model the network by a sequence of directed graphs ${\mathcal{G}}(t)=([n],{\mathcal{E}}(t))$, where the existence of an edge $e_{i,j}\in{\mathcal{E}}(t)$ implies that node $i$ can send messages to node $j$ at time step $t$. As the networks and datasets keep increasing in size, computational complexity and communication cost of decentralized optimization present major challenges. To reduce the complexity of computing gradients, decentralized stochastic methods that allow each agent to perform gradient estimation by processing a small subset of local data are preferred [5]. However, such techniques exhibit low convergence rates and suffer from high variance of the local stochastic gradients. Remedies for these impediments in decentralized optimization over networks and in stochastic optimization in centralized settings include gradient tracking [6, 7] and variance reduction [8, 9], respectively; however, no such remedies have been developed for decentralized optimization over time-varying directed networks. On another note, communication constraints often exacerbate large-scale decentralized optimization problems, where the size of the network or the dimension of local model parameters may be on the order of millions.
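As a concrete toy instance of (1) (unconstrained, with synthetic data of our own; not from the paper), each node may hold a local least-squares objective whose average the network seeks to minimize:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 4, 5, 3                       # nodes, samples per node, dimension
A = [rng.normal(size=(m, d)) for _ in range(n)]
b = [rng.normal(size=m) for _ in range(n)]

def f_i(x, i):                          # local objective: averaged squared loss
    return 0.5 / m * np.sum((A[i] @ x - b[i]) ** 2)

def f(x):                               # global objective: average of locals
    return sum(f_i(x, i) for i in range(n)) / n

# the minimizer of f solves the stacked least-squares problem
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
grad = sum(A[i].T @ (A[i] @ x_star - b[i]) / m for i in range(n)) / n
assert np.allclose(grad, 0.0, atol=1e-8)    # gradient of f vanishes at x*
```

No single node can compute `grad` alone, since each sees only its own `A[i], b[i]`; this is precisely the gap the message-exchange schemes below fill.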
This motivates design of communication-efficient algorithms that compress messages exchanged between network nodes yet preserve fast convergence. In this paper, we study the general setting where a network is time-varying and directed, and present, to the best of our knowledge, the first variance-reduced communication-sparsified algorithm for decentralized convex optimization over such networks. Moreover, we theoretically establish that when the local objective functions are smooth and strongly-convex, the proposed algorithm achieves an accelerated linear convergence rate. Note that while our focus is on communication-constrained settings, the proposed scheme readily applies to decentralized optimization problems where time-varying directed graphs operate under no communication constraints; in fact, to our knowledge, this is the first variance-reduced stochastic algorithm for such problems. ### 1.1 Related work The first work on decentralized optimization over networks dates back to the 1980s [10]. A number of studies that followed in subsequent years were focused on the decentralized average consensus problem, where the network nodes work collaboratively to compute the average value of local vectors. Convergence conditions for achieving consensus over directed and undirected time-varying graphs were established in [3, 2, 11, 12, 13]. The first communication-efficient algorithm that achieves linear convergence over time-invariant (static) undirected graphs was proposed in [14]. The consensus problem can be viewed as a stepping stone towards more general decentralized optimization problems, where the nodes in a network aim to collaboratively minimize the sum of local objective functions.
A number of solutions to this problem have been proposed for the setting where the network is undirected, including the well-known distributed (sub)gradient descent algorithm (DGD) [4, 15], distributed alternating direction method of multipliers (D-ADMM) [16], and decentralized dual averaging methods [17, 18, 19]. Recently, [20, 14] proposed a novel communication-efficient algorithm for decentralized convex optimization problems; the provably convergent algorithm relies on a message-passing scheme with memory and biased compression. A key technical property required to ensure convergence of decentralized convex optimization algorithms over undirected networks is that the so-called mixing matrix characterizing the network connectivity is doubly-stochastic. However, in directed networks characterized by communication link asymmetry, doubly-stochastic mixing matrices are atypical. This motivated algorithms that rely on auxiliary variables to cancel out the imbalance in asymmetric directed networks in order to achieve convergence. For instance, the subgradient-push algorithm [21, 22] works with column-stochastic mixing matrices and introduces local normalization scalars to ensure convergence. The directed distributed gradient descent (D-DGD) algorithm [23], on the other hand, keeps track of the variations of local models by utilizing auxiliary variables of the same dimension as the local model parameters. For convex objective functions, both algorithms achieve ${\mathcal{O}}(\frac{\mathrm{ln}T}{\sqrt{T}})$ convergence rate. When the objectives are strongly-convex with Lipschitz gradients, and assuming availability of only the stochastic gradient terms, the stochastic gradient-push algorithm proposed in [24] achieves ${\mathcal{O}}(\frac{\mathrm{ln}T}{T})$ convergence rate.
A common feature of these algorithms is their reliance upon diminishing stepsizes to achieve convergence to the optimal solution; in comparison, using a fixed stepsize can accelerate the gradient search but cannot guarantee exact convergence, only convergence to a neighborhood of the optimal solution. The implied exactness-speed dilemma can be overcome using schemes that deploy gradient tracking (see, e.g., [6, 7, 25]). These schemes utilize fixed step sizes to achieve linear convergence rate when the objective functions are both smooth and strongly-convex. Among them, the Push-DIGing algorithm [7] follows the same basic ideas of the subgradient-push algorithm, while TV-AB [25] relies on column- and row-stochastic matrices to update model parameters and gradient terms, respectively. The aforementioned linearly convergent methods rely on full gradient, i.e., each node is assumed to use all of its data to compute the local gradient. However, if the number of data points stored at each node is large, full gradient computation becomes infeasible. To this end, stochastic gradient descent was adapted to decentralized settings, but the resulting computational savings come at the cost of sublinear convergence rate [26, 24]. To accelerate traditional stochastic gradient methods in centralized settings, variance-reduction algorithms such as SAG [9] and SVRG [8] have been proposed; these schemes enable linear convergence when the objective functions are smooth and strongly-convex. In decentralized settings, GT-SVRG [26] and Network-SVRG [27] leverage variance-reduction techniques to achieve linear convergence rate.
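To illustrate the variance-reduction idea behind SVRG (a toy numerical sketch on synthetic least-squares data of our own, not the paper's algorithm): the estimator $\nabla f_{j}(\mathbf{x})-\nabla f_{j}(\tilde{\mathbf{x}})+\nabla f(\tilde{\mathbf{x}})$, built around a snapshot point $\tilde{\mathbf{x}}$, remains unbiased while its variance shrinks as the iterate approaches $\tilde{\mathbf{x}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 50, 4
A = rng.normal(size=(m, d))
b = rng.normal(size=m)

def grad_j(x, j):                      # gradient of the j-th component loss
    return A[j] * (A[j] @ x - b[j])

def full_grad(x):
    return A.T @ (A @ x - b) / m

x_snap = np.linalg.lstsq(A, b, rcond=None)[0]   # snapshot point x~
x = x_snap + 0.01 * rng.normal(size=d)          # current iterate near x~
mu = full_grad(x_snap)

sgd = np.array([grad_j(x, j) for j in range(m)])                     # plain SGD
svrg = np.array([grad_j(x, j) - grad_j(x_snap, j) + mu for j in range(m)])

g = full_grad(x)
assert np.allclose(sgd.mean(axis=0), g)     # both estimators are unbiased...
assert np.allclose(svrg.mean(axis=0), g)
# ...but near the snapshot the SVRG estimator has much smaller variance
assert svrg.var(axis=0).sum() < sgd.var(axis=0).sum()
```

The shrinking variance is what allows fixed stepsizes, and hence the linear rates cited above.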
However, these algorithms are restricted to static and undirected networks, and a narrow class of directed networks where the mixing matrices can be rendered doubly stochastic (to have doubly stochastic mixing matrices, directed graphs require weight balance, i.e., at each node of a graph, the sum of the weights of incoming edges should be equal to that of the weights of outgoing edges [28]). The existing algorithms for decentralized optimization over directed networks, such as Push-SAGA [29], are restricted to static networks. In recent years, decentralized learning tasks have experienced rapid growth in the amount of data and the dimension of the optimization problems, which may lead to practically infeasible demands for communication between network nodes. To this end, various communication compression schemes have been proposed; among them, the most frequently encountered are quantization and sparsification. Quantization schemes limit the number of bits encoding the messages, while the sparsification schemes select a subset of features and represent messages in a lower dimensional space. For instance, [30, 31, 32, 20, 33, 34] propose algorithms for distributed training of (non)convex machine learning models in static master-worker settings (i.e., star graph topology) using quantized/compressed information, while [35, 14, 36, 37] develop communication-efficient algorithms for decentralized optimization over static and undirected networks. However, directed networks in general, and time-varying ones in particular, have received considerably less attention. Decentralized optimization over such networks faces technical challenges of developing an algorithmic framework conducive to theoretical analysis and establishing convergence guarantees, which is further exacerbated when the communication is compressed.
Early steps in this direction were made in [38] by building upon the subgradient-push algorithm to develop a quantized communication framework for decentralized optimization over a static network. Our proposed algorithm utilizes gradient tracking and variance reduction to achieve fast convergence, and relies on stochastic gradients to solve the decentralized convex optimization task at feasible computational cost. Preliminary results of this work, focused on a significantly slower full gradient framework (${\mathcal{O}}(\frac{1}{\epsilon^{2}})$ vs. ${\mathcal{O}}(\ln\frac{1}{\epsilon})$), were presented in [1]. In Table 1, we briefly summarize and contrast several algorithms for decentralized optimization over directed graphs. ### 1.2 Notation We use lowercase bold letters to represent vectors and uppercase letters to represent matrices. $[A]_{ij}$ denotes the $(i,j)$ entry of matrix $A$, while $\|\cdot\|$ denotes the standard Euclidean norm. The spectral radius of a matrix $A$ is denoted by $\rho(A)$. The weighted infinity norm of ${\mathbf{x}}$ given a positive vector $\mathbf{w}$ is $\|{\mathbf{x}}\|^{\mathbf{w}}_{\infty}=\max_{i}|x_{i}|/w_{i}$ and the induced matrix norm is $\||\cdot|\|_{\infty}^{\mathbf{w}}$. Finally, $I$ denotes the identity matrix whose dimension is inferred from the context. Table 1: The settings and convergence rates of algorithms for decentralized optimization over directed graphs. 
Algorithm | Convergence | Digraph | Gradient | Convex Objective Setting | Compress
---|---|---|---|---|---
Subgradient-push [24] | ${\mathcal{O}}(\frac{1}{\epsilon})$ | Time-varying | Stochastic | Strong convexity | ✗
Push-DIGing [7] | ${\mathcal{O}}(\ln\frac{1}{\epsilon})$ | Time-varying | Full | Strong convexity and smoothness | ✗
TV-AB [25] | ${\mathcal{O}}(\ln\frac{1}{\epsilon})$ | Time-varying | Full | Strong convexity and smoothness | ✗
Quantized Push-sum [38] | ${\mathcal{O}}(\frac{1}{\epsilon^{2}})$ | Static | Full | - | ✓
This work | ${\mathcal{O}}(\ln\frac{1}{\epsilon})$ | Time-varying | Stochastic | Strong convexity and smoothness | ✓

## 2 Preliminaries ### 2.1 Problem Formulation For convenience, we remind the reader of the problem formulation (1): In a network of $n$ agents, where each node maintains a local model consisting of $d$ parameters, the agents collaboratively solve the decentralized convex optimization problem $\min_{\mathrm{\mathbf{x}\in\mathbb{R}^{d}}}\left[f(\mathbf{x}):=\frac{1}{n}\sum_{i=1}^{n}f_{i}(\mathbf{x})\right],$ (2) where $f_{i}(\mathbf{x})=\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}f_{i,j}(\mathbf{x}):\mathbb{R}^{d}\to\mathbb{R}$ for $i\in[n]:=\left\\{1,...,n\right\\}$ denotes the local objective function at node $i$. Each component of $f_{i}$ is assumed to be smooth and not accessible to nodes other than the $i^{th}$ one, and the global objective $f$ is assumed to be strongly-convex. We further assume existence of a unique optimal solution ${\mathbf{x}}^{*}\in\mathbb{R}^{d}$ and that each node can communicate with its neighbors; the nodes identify ${\mathbf{x}}^{*}$ by exchanging messages over a time-varying directed network. The network’s connectivity properties are elaborated upon in Section 3.
### 2.2 Communication-Efficient Methods In practice, bandwidth limitations may restrict the amount of data that the network nodes can communicate to each other; this is typical of high-dimensional scenarios where the dimension $d$ of local parameters ${\mathbf{x}}_{i}$ is exceedingly large. To handle communication constraints, network nodes may employ sparsification to reduce the size of the messages. Typically, there are two approaches to sparsification: (i) each node selects and communicates $d_{q}$ out of $d$ components of a $d$-dimensional message; or (ii) each component of a $d$-dimensional message is selected to be communicated independently with probability $d_{q}/d$. Note that the former imposes a hard constraint on the number of communicated entries while the latter results in $d_{q}$ communicated entries in expectation; both select a specific entry with probability $d_{q}/d$. Throughout this paper, we focus on and study the first approach. Let $Q:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ denote the sparsification operator; we allow for biased $Q$ with variance proportional to the squared norm of the argument, i.e., $\mathbb{E}[Q({\mathbf{x}})]\neq{\mathbf{x}}$ and $\mathbb{E}[\|Q({\mathbf{x}})-{\mathbf{x}}\|^{2}]\propto\|{\mathbf{x}}\|^{2}$. This stands in contrast to typical compression operators which aim to achieve no bias and have bounded variance (see, e.g., [30]). More recent works [20, 14, 36, 38] do consider biased compression operators but only for time-invariant communication networks – a setting that is more restrictive than the one considered in this paper. ## 3 Algorithm development In this section, we first introduce a novel average consensus algorithm, an intermediate step towards the main (optimization) framework; then we present the optimization algorithm consisting of the consensus and the gradient components.
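A minimal sketch of approach (i) (our own illustration, with arbitrary toy values $d=10$, $d_{q}=3$): selecting $d_{q}$ entries uniformly at random yields $\mathbb{E}[Q(\mathbf{x})]=\frac{d_{q}}{d}\mathbf{x}\neq\mathbf{x}$, i.e., a biased operator whose error variance scales with $\|\mathbf{x}\|^{2}$:

```python
import numpy as np

def sparsify(x, d_q, rng):
    """Keep d_q uniformly chosen entries of x and zero out the rest."""
    q = np.zeros_like(x)
    idx = rng.choice(x.size, size=d_q, replace=False)
    q[idx] = x[idx]
    return q

rng = np.random.default_rng(0)
x, d_q = np.arange(1.0, 11.0), 3          # d = 10
# Monte-Carlo estimate of E[Q(x)]: each entry survives with prob d_q/d,
# so E[Q(x)] = (d_q/d) x  --  the operator is biased
est = np.mean([sparsify(x, d_q, rng) for _ in range(20000)], axis=0)
assert np.allclose(est, (d_q / x.size) * x, atol=0.15)
```

Each transmitted message thus carries only $d_{q}$ values (plus their indices), which is the source of the communication savings.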
### 3.1 Foundations: Decentralized average consensus

We start by specifying a procedure for decentralized average consensus, an intermediate step towards decentralized optimization and, ultimately, an integral part thereof. The decentralized average consensus problem is formulated as the computation of $\bar{{\mathbf{x}}}=\frac{1}{n}\sum_{i=1}^{n}{\mathbf{x}}_{i}$, where ${\mathbf{x}}_{i}\in\mathbb{R}^{d}$ is the parameter vector at node $i$. Following the idea of [12], for each node of the network we define the so-called surplus vector, i.e., an auxiliary variable $\mathbf{y}_{i}\in\mathbb{R}^{d}$ which tracks local state vector variations over consecutive time steps; as shown later in this section, one can use the surplus vector to help guarantee convergence to the optimal solution of the decentralized problem. The surplus vector is exchanged along with the state vector, i.e., at time $t$, node $i$ sends both $\mathbf{y}_{i}^{t}$ and the state vector $\mathbf{x}_{i}^{t}$ to its out-neighbors. For compactness of notation, let us introduce $\mathbf{z}_{i}^{t}\in\mathbb{R}^{d}$, $\mathbf{z}_{i}^{t}=\begin{cases}\mathbf{x}_{i}^{t},&i\in\left\\{1,...,n\right\\}\\\ \mathbf{y}_{i-n}^{t},&i\in\left\\{n+1,...,2n\right\\},\end{cases}$ (3) to represent the messages node $i$ communicates to its neighbors in the network at time $t$. We assume that the time-varying graph is $\mathcal{B}$-jointly connected, i.e., that there exists a window size $\mathcal{B}\geq 1$ such that the aggregate graph $\bigcup_{l=t}^{t+\mathcal{B}-1}{\mathcal{G}}_{l}$ is strongly connected for all $t=k\mathcal{B}$, $k\in\mathbb{N}$. Note that if $\mathcal{B}=1$, each instance of the graph is strongly connected. This is a more general assumption than the often used $\mathcal{B}$-bounded strong-connectivity (see, e.g., [25]) which requires strong connectivity of the union graph $\bigcup_{l=t}^{t+\mathcal{B}-1}{\mathcal{G}}_{l}$ for all $t\geq 0$.
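The connectivity assumption can be checked numerically. The sketch below (a minimal numpy implementation; the adjacency-matrix representation and the reachability-based test are implementation choices, not from the paper) verifies that the union of each window of $\mathcal{B}$ graph instances is strongly connected.

```python
import numpy as np

def strongly_connected(adj: np.ndarray) -> bool:
    """Reachability test: the digraph is strongly connected iff every node
    reaches every node, i.e. (I + A)^{n-1} has no zero entry."""
    n = adj.shape[0]
    reach = np.linalg.matrix_power(np.eye(n) + (adj > 0), n - 1)
    return bool(np.all(reach > 0))

def b_jointly_connected(adjs, B: int) -> bool:
    """B-joint connectivity: the union over every window [kB, (k+1)B) must be
    strongly connected (matching the windowing t = kB used in the text)."""
    assert len(adjs) % B == 0
    for k in range(len(adjs) // B):
        union = sum(adjs[k * B:(k + 1) * B])
        if not strongly_connected(union):
            return False
    return True

# Two instances, neither strongly connected alone, whose union is a 3-cycle.
A1 = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])   # edges 0->1, 1->2 (row i = out-edges of i here)
A2 = np.array([[0, 0, 0], [0, 0, 0], [1, 0, 0]])   # edge 2->0
assert not strongly_connected(A1)
assert b_jointly_connected([A1, A2], B=2)
```

This illustrates why $\mathcal{B}$-joint connectivity is weaker than per-instance strong connectivity: information only needs to flow across each whole window.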
For $\mathcal{B}$-jointly connected graphs, the product of the mixing matrices of graph instances over $\mathcal{B}$ consecutive time steps has a non-zero spectral gap. To formalize this statement, let us construct two weight matrices that reflect the network topology; in particular, let $W_{in}^{t}$ (row-stochastic) and $W_{out}^{t}$ (column-stochastic) denote the in-neighbor and out-neighbor weight matrices at time $t$, respectively. It holds that $[W_{in}^{t}]_{ij}>0$ if and only if $j\in\mathcal{N}_{{in},i}^{t}$ and $[W_{out}^{t}]_{ij}>0$ if and only if $i\in\mathcal{N}_{out,j}^{t}$, where $\mathcal{N}_{{in},i}^{t}$ denotes the set of nodes that may send information to node $i$ (including $i$) whereas $\mathcal{N}_{out,j}^{t}$ denotes the set of nodes that may receive information from node $j$ (including $j$) at time $t$. We assume $W_{in}^{t}$ and $W_{out}^{t}$ are given and that both $\mathcal{N}_{{in},i}^{t}$ and $\mathcal{N}_{out,i}^{t}$ are known to node $i$. A common policy for designing $W^{t}_{in}$ and $W^{t}_{out}$ is to assign $[W^{t}_{in}]_{ij}=1/|\mathcal{N}^{t}_{in,i}|,\qquad[W^{t}_{out}]_{ij}=1/|\mathcal{N}^{t}_{out,j}|.$ (4) Recall that we are interested in sparsifying messages exchanged between the nodes of a network; clearly, the sparsification impacts the structure of a mixing matrix. Indeed, if one attempts to sparsify messages used by existing methods, e.g., [21, 12, 13, 22], without modifying the mixing matrices therein, the non-vanishing error terms induced by the compression operator will prevent those methods from converging. We note that the impact of sparsification on the components of a message vector is similar to the impact of link failures, and may therefore be captured by the structure of the weight matrices.
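The uniform policy (4) can be sketched as follows (a minimal numpy version; the convention that `adj[i, j] = 1` means $j$ sends to $i$ is an implementation choice made here for concreteness).

```python
import numpy as np

def uniform_weights(adj: np.ndarray):
    """Policy (4): [W_in]_{ij} = 1/|N_in,i| on in-neighbors (row-stochastic)
    and [W_out]_{ij} = 1/|N_out,j| on out-neighbors (column-stochastic).

    Convention (assumed here): adj[i, j] = 1 iff j sends to i; neighbor sets
    include the node itself, so self-loops are added explicitly.
    """
    n = adj.shape[0]
    A = np.minimum((adj > 0) + np.eye(n), 1.0)     # support incl. self-loops
    in_deg = A.sum(axis=1, keepdims=True)          # |N_in,i|  = nonzeros in row i
    out_deg = A.sum(axis=0, keepdims=True)         # |N_out,j| = nonzeros in column j
    return A / in_deg, A / out_deg                 # (W_in, W_out)

# Directed 3-ring: node i receives from node i-1.
adj = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
W_in, W_out = uniform_weights(adj)
assert np.allclose(W_in.sum(axis=1), 1.0)          # row-stochastic
assert np.allclose(W_out.sum(axis=0), 1.0)         # column-stochastic
```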
To elaborate on this, observe that the vector-valued problem at time $t$ can essentially be decomposed into $d$ individual scalar-valued tasks with weight matrices $\\{W_{in,m}^{t}\\}_{m=1}^{d}$ and $\\{W_{out,m}^{t}\\}_{m=1}^{d}$. For the sparsified components of the message vector, i.e., those that are set to zero and not communicated, the corresponding entries in the weight matrices can be replaced by zeros, while the entries corresponding to the communicated components remain unchanged; this, however, violates the stochasticity of the weight matrices. To address this, we re-normalize the weight matrices $\\{W_{in,m}^{t}\\}_{m=1}^{d}$ and $\\{W_{out,m}^{t}\\}_{m=1}^{d}$, thus ensuring their row and column stochasticity, respectively. Note that the re-normalization of the $i^{th}$ row of $\\{W_{in,m}^{t}\\}_{m=1}^{d}$ (the $i^{th}$ column of $\\{W_{out,m}^{t}\\}_{m=1}^{d}$) is performed by the $i^{th}$ network node. To specify the normalization rule, we first need to define the sparsification operation. Sparsification of $\mathbf{x}_{i}^{t}$ (and, consequently, $\mathbf{y}_{i}^{t}$) is done via the compression operator $Q(\cdot)$ applied to $\mathbf{z}_{i}^{t}$; we denote the result by $Q(\mathbf{z}_{i}^{t})$. Let $[Q({{\mathbf{z}}}_{i}^{t})]_{m}$ denote the $m^{th}$ component of $Q(\mathbf{z}_{i}^{t})$. Let $\\{A_{m}^{t}\\}_{m=1}^{d}$ and $\\{B_{m}^{t}\\}_{m=1}^{d}$ be the weight matrices obtained after normalizing $\\{W_{in,m}^{t}\\}_{m=1}^{d}$ and $\\{W_{out,m}^{t}\\}_{m=1}^{d}$, respectively. To formalize the normalization procedure, we introduce the weight matrix $[A^{t}_{m}]_{ij}=\begin{cases}\frac{[W^{t}_{in,m}]_{ij}}{\sum_{j\in\mathcal{S}_{m}^{t}(i,j)}[W^{t}_{in,m}]_{ij}}&\text{if }j\in\mathcal{S}_{m}^{t}(i,j)\\\ 0&\mathrm{otherwise},\end{cases}$ (5) where $\mathcal{S}_{m}^{t}(i,j):=\\{j|j\in\mathcal{N}^{t}_{in,i},[Q({{\mathbf{z}}}_{j}^{t})]_{m}\neq 0\\}\cup\\{i\\}$.
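A minimal numpy sketch of the re-normalization (5) for one coordinate $m$ is given below; it assumes each node keeps a positive self-weight (nonzero diagonal of $W_{in}$), so that every row retains at least one surviving entry.

```python
import numpy as np

def normalize_A(W_in: np.ndarray, Qz_m: np.ndarray) -> np.ndarray:
    """Assemble A^t_m per (5): entry (i, j) survives iff j is an in-neighbor
    whose m-th sparsified entry [Q(z_j)]_m is nonzero, or j == i; each row is
    then rescaled to restore row stochasticity.

    Qz_m[j] holds the m-th entry of Q(z_j) for every node j.
    Assumption: W_in has a strictly positive diagonal (self-loops).
    """
    n = W_in.shape[0]
    keep = (W_in > 0) & ((Qz_m != 0)[None, :] | np.eye(n, dtype=bool))
    A = np.where(keep, W_in, 0.0)
    return A / A.sum(axis=1, keepdims=True)

# Node 1's m-th entry was sparsified away; its columns are zeroed and the
# surviving entries are rescaled so each row still sums to one.
W_in = np.full((3, 3), 1 / 3)
A = normalize_A(W_in, np.array([1.0, 0.0, 2.0]))
assert np.allclose(A.sum(axis=1), 1.0)
```

The column normalization (6) for $B_{m}^{t}$ is the analogous operation applied column-wise by the sending node.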
Likewise, the weight matrix $B^{t}_{m}$ is defined as $[B^{t}_{m}]_{ij}=\begin{cases}\frac{[W^{t}_{out,m}]_{ij}}{\sum_{i\in\mathcal{T}_{m}^{t}(i,j)}[W^{t}_{out,m}]_{ij}}&\text{if }i\in\mathcal{T}_{m}^{t}(i,j)\\\ 0&\mathrm{otherwise},\end{cases}$ (6) where $\mathcal{T}_{m}^{t}(i,j):=\\{i|i\in\mathcal{N}^{t}_{out,j},[Q({{\mathbf{z}}}_{i}^{t})]_{m}\neq 0\\}\cup\\{j\\}$. We can now define the mixing matrix of a directed network with sparsified messages. ###### Definition 1. The $m^{th}$ mixing matrix at time $t$ of a time-varying directed network with sparsified messages, $\bar{M}_{m}^{t}\in\mathbb{R}^{2n\times 2n}$, is a matrix whose columns sum up to $1$ and whose eigenvalues satisfy $1=|\lambda_{1}(\bar{M}_{m}^{t})|=|\lambda_{2}(\bar{M}_{m}^{t})|\geq|\lambda_{3}(\bar{M}_{m}^{t})|\geq\cdots\geq|\lambda_{2n}(\bar{M}_{m}^{t})|$, constructed from the current network topology as $\bar{M}_{m}^{t}=\left[\begin{matrix}A_{m}^{t}&\mathbf{0}\\\ I-A_{m}^{t}&B_{m}^{t}\\\ \end{matrix}\right],$ (7) where $A_{m}^{t}$ and $B_{m}^{t}$ denote the $m^{th}$ normalized in-neighbor and out-neighbor weight matrices at time $t$, respectively. Given $\mathbf{z}_{i}^{t}$ and $\bar{M}_{m}^{t}$ in (3) and (7), respectively, we can formulate a compact recursive update rule for $\mathbf{z}_{i}^{t}$ as $\displaystyle z_{im}^{t+1}$ $\displaystyle=\sum_{j=1}^{2n}\left([\bar{M}^{t}_{m}]_{ij}[Q({\mathbf{z}}_{j}^{t})]_{m}+\mathbbm{1}_{\left\\{t\ \text{mod}\ \mathcal{B}=\mathcal{B}-1\right\\}}\gamma[F]_{ij}z_{jm}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor}\right),$ (8) where $F=\left[\begin{matrix}\mathbf{0}&I\\\ \mathbf{0}&-I\end{matrix}\right]$ and $m$ denotes the coordinate index.
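The block assembly (7) is straightforward; the sketch below builds $\bar{M}_{m}^{t}$ from a row-stochastic $A$ and a column-stochastic $B$ and checks the column-sum property stated in Definition 1 (columns of $A$ and $I-A$ sum to $1$ jointly, and columns of $B$ sum to $1$).

```python
import numpy as np

def mixing_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Assemble the 2n x 2n mixing matrix (7) from a row-stochastic A and a
    column-stochastic B: M = [[A, 0], [I - A, B]]."""
    n = A.shape[0]
    return np.block([[A, np.zeros((n, n))],
                     [np.eye(n) - A, B]])

A = np.array([[0.5, 0.5], [0.25, 0.75]])   # row-stochastic
B = np.array([[0.5, 0.25], [0.5, 0.75]])   # column-stochastic
M = mixing_matrix(A, B)
assert np.allclose(M.sum(axis=0), 1.0)     # all columns of M sum to 1
```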
As seen in (8), vectors ${\mathbf{z}}_{i}^{t}$ (which contain ${\mathbf{x}}_{i}^{t}$, the quantities to be averaged) are updated in a straightforward manner via sparsification and multiplication with the mixing matrix at all times $t$ except those that satisfy $t\mod\mathcal{B}=\mathcal{B}-1.$ (9) In particular, when (9) holds, vectors ${\mathbf{z}}_{i}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor}$, stored at time $\mathcal{B}\lfloor t/\mathcal{B}\rfloor$, are also used to update ${\mathbf{z}}_{i}^{t}$. The usage of the stored vectors is motivated by the observation that $\bar{M}_{m}^{t}$ may have zero spectral gap, which is undesirable since for such mixing matrices the convergence of consensus algorithms is not guaranteed. However, for a judiciously chosen perturbation parameter $\gamma$, which determines to what extent $\sum_{j=1}^{2n}[F]_{ij}z_{jm}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor}$ affects the update, we can ensure a nonzero spectral gap of the product of $\mathcal{B}$ consecutive mixing matrices starting from $t=k\mathcal{B}$. The described communication-sparsified average consensus procedure over directed graphs, referred to for convenience as Di-CS-AC, is formalized as Algorithm 1.
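As a numerical sanity check of the surplus-based update (8), the sketch below takes $Q$ to be the identity (no sparsification) and $\mathcal{B}=1$, so the $\gamma F$ perturbation is applied at every step. The 4-node digraph (a directed ring plus one chord, which is unbalanced) and the value of $\gamma$ are illustrative choices, not taken from the paper; the states nevertheless converge to the exact average.

```python
import numpy as np

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]    # (j, i): node j sends to node i
in_nbrs = [{i} for i in range(n)]                   # neighbor sets include the node itself
out_nbrs = [{i} for i in range(n)]
for j, i in edges:
    in_nbrs[i].add(j)
    out_nbrs[j].add(i)

A = np.zeros((n, n))                                # row-stochastic, policy (4)
B = np.zeros((n, n))                                # column-stochastic, policy (4)
for i in range(n):
    for j in in_nbrs[i]:
        A[i, j] = 1.0 / len(in_nbrs[i])
for j in range(n):
    for i in out_nbrs[j]:
        B[i, j] = 1.0 / len(out_nbrs[j])

gamma = 0.02                                        # small illustrative perturbation
M = np.block([[A, gamma * np.eye(n)],
              [np.eye(n) - A, B - gamma * np.eye(n)]])   # M_bar + gamma * F

x0 = np.array([1.0, 5.0, -2.0, 8.0])
z = np.concatenate([x0, np.zeros(n)])               # z = [x; y], surplus y^0 = 0
z = np.linalg.matrix_power(M, 1_000_000) @ z        # iterate (8) many times
assert np.allclose(z[:n], x0.mean(), atol=1e-6)     # exact average of the initial states
```

Without the surplus component, plain row-stochastic averaging on this unbalanced digraph would converge to a weighted (non-uniform) average instead.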
Algorithm 1 Directed Communication-Sparsified Average Consensus (Di-CS-AC)
1: Input: $T$, $\mathbf{x}^{0}$, $\mathbf{y}^{0}=\mathbf{0}$, $\gamma$
2: set $\mathbf{z}^{0}=[\mathbf{x}^{0};\mathbf{y}^{0}]$
3: for each $t\in[0,1,...,T-1]$ do
4: generate non-negative matrices $\\{W_{in,m}^{t}\\}_{m=1}^{d}$ and $\\{W_{out,m}^{t}\\}_{m=1}^{d}$
5: for each $m\in[1,...,d]$ do
6: construct a row-stochastic $A^{t}_{m}$ and a column-stochastic $B^{t}_{m}$ according to (5) and (6)
7: construct $\bar{M}^{t}_{m}$ according to (7)
8: for each $i\in[1,...,2n]$ do
9: update $z_{im}^{t+1}$ according to (8)
10: end for
11: end for
12: end for

### 3.2 Decentralized gradient component

Going beyond the simple consensus problem and towards solving the optimization problem (2), we re-define the recursive update rule for $\mathbf{z}_{i}^{t}$ as $\displaystyle z_{im}^{t+1}$ $\displaystyle=\sum_{j=1}^{2n}\left([\bar{M}^{t}_{m}]_{ij}[Q({\mathbf{z}}_{j}^{t})]_{m}+\mathbbm{1}_{\left\\{t\ \text{mod}\ \mathcal{B}=\mathcal{B}-1\right\\}}\gamma[F]_{ij}z_{jm}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor}\right)-\mathbbm{1}_{\left\\{t\ \text{mod}\ \mathcal{B}=\mathcal{B}-1\right\\}}\alpha g_{im}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor},$ (10) where $F$ and $m$ denote the same objects as in (8), and $g_{im}$ combines global gradient tracking with local stochastic variance reduction to achieve accelerated convergence (elaborated upon shortly).
Note that (10) implies the following element-wise update rules for the state and surplus vectors, respectively: $\displaystyle x_{im}^{t+1}$ $\displaystyle=\sum_{j=1}^{n}[A_{m}^{t}]_{ij}[Q({\mathbf{x}}_{j}^{t})]_{m}+\mathbbm{1}_{\left\\{t\ \text{mod}\ \mathcal{B}=\mathcal{B}-1\right\\}}\gamma y_{im}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor}-\mathbbm{1}_{\left\\{t\ \text{mod}\ \mathcal{B}=\mathcal{B}-1\right\\}}\alpha g_{im}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor},$ (11) $\displaystyle y_{im}^{t+1}$ $\displaystyle=\sum_{j=1}^{n}[B_{m}^{t}]_{ij}[Q({\mathbf{y}}_{j}^{t})]_{m}-(x_{im}^{t+1}-x_{im}^{t}).$ (12) Paralleling the basic consensus task discussed in the previous section, vectors ${\mathbf{z}}_{i}^{t}$ (containing state vectors to be averaged) are updated via sparsification and multiplication with the mixing matrix at all times $t$ except those that satisfy (9). When (9) does hold, vectors ${\mathbf{z}}_{i}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor}$, stored at times $\mathcal{B}\lfloor t/\mathcal{B}\rfloor$, are also used to update ${\mathbf{z}}_{i}^{t}$; the motivation and reasoning for this special treatment are the same as in the consensus algorithm. (Note that $F$ has all-zero matrices as its $(1,1)$ and $(2,1)$ blocks, and thus we only need to store ${\mathbf{z}}_{i}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor}$, equivalently ${\mathbf{y}}_{i-n}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor}$, for $n+1\leq i\leq 2n$.) In the proposed algorithm, updates of the gradient term ${\mathbf{g}}_{i}^{t}$ combine global gradient tracking with local stochastic variance reduction.
In particular, the updates of ${\mathbf{g}}_{i}^{t}$ mix gradient messages while keeping track of the changes in the gradient estimates ${\mathbf{v}}_{i}^{t}$; this guides ${\mathbf{g}}_{i}^{t}$ towards the gradient of the global objective, ultimately ensuring convergence to the optimal solution $\mathbf{x}^{*}$ (i.e., global gradient tracking helps avoid the pitfall of non-vanishing local gradients which would otherwise lead the search only to a neighborhood of $\mathbf{x}^{*}$). The $m^{th}$ entry of ${\mathbf{g}}_{i}^{t}$, $g_{im}^{t}$, is updated as $\displaystyle g_{im}^{\mathcal{B}(\lfloor t/\mathcal{B}\rfloor)}=\begin{cases}\sum_{j=1}^{n}[B_{m}(k\mathcal{B}-1:(k-1)\mathcal{B})]_{ij}g_{jm}^{(k-1)\mathcal{B}}+v_{im}^{\mathcal{B}(\lfloor t/\mathcal{B}\rfloor)}-v_{im}^{\mathcal{B}(\lfloor t/\mathcal{B}\rfloor-1)}&i\leq n\\\ 0&\mathrm{otherwise},\end{cases}$ (13) where $k=\lfloor t/\mathcal{B}\rfloor$. The gradient estimate ${\mathbf{v}}_{i}^{t}$ in (13) is updated via the stochastic variance-reduction method [8]. Specifically, $\mathbf{v}_{i}^{\mathcal{B}(\lfloor t/\mathcal{B}\rfloor+1)}=\nabla f_{i,l_{i}}({\mathbf{z}}_{i}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor})-\nabla f_{i,l_{i}}(\tilde{w}_{i})+\tilde{\mu}_{i},\quad\forall i\in[n].$ (14) One can interpret this update as being executed in a double-loop fashion: when a local full gradient at node $i$, $\tilde{\mu}_{i}$, is computed (in what can be considered an outer loop), it is retained in the subsequent $T$ iterations (the inner loop). In each iteration of the inner loop, if the time step satisfies (9), node $i$ selects a local sample $l_{i}$ uniformly at random for the calculation of two stochastic gradient estimates – an estimate at the current state, $\nabla f_{i,l_{i}}({\mathbf{z}}_{i}^{\mathcal{B}\lfloor t/\mathcal{B}\rfloor})$, and an estimate at the state from the last outer loop, $\nabla f_{i,l_{i}}(\tilde{w}_{i})$ – the terms needed to update $\mathbf{v}_{i}^{t}$.
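A minimal sketch of the variance-reduced estimate (14) is given below. The least-squares components $f_{i,j}(\mathbf{x})=\frac{1}{2}(\mathbf{a}_{j}^{T}\mathbf{x}-b_{j})^{2}$ are an illustrative assumption made here for concreteness; the check at the end verifies the key property that the estimate is unbiased, i.e., averaging over all sample choices $l_{i}$ recovers the full local gradient at the current state.

```python
import numpy as np

rng = np.random.default_rng(1)
m_i, d = 8, 3                                  # local sample count and dimension (illustrative)
Aij = rng.normal(size=(m_i, d))                # data for f_{i,j}(x) = 0.5*(a_j^T x - b_j)^2
b = rng.normal(size=m_i)

def grad_component(x, j):                      # gradient of one component f_{i,j}
    return (Aij[j] @ x - b[j]) * Aij[j]

def full_grad(x):                              # local full gradient of f_i = (1/m_i) sum_j f_{i,j}
    return sum(grad_component(x, j) for j in range(m_i)) / m_i

z = rng.normal(size=d)                         # current state z_i
w_tilde = rng.normal(size=d)                   # snapshot from the last outer loop
mu_tilde = full_grad(w_tilde)                  # local full gradient at the snapshot

def svrg_estimate(l):                          # update (14) for a sampled index l_i
    return grad_component(z, l) - grad_component(w_tilde, l) + mu_tilde

# Unbiasedness: the average over all sample choices equals the full gradient at z.
avg = sum(svrg_estimate(l) for l in range(m_i)) / m_i
assert np.allclose(avg, full_grad(z))
```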
By computing a full gradient periodically in the outer loop and estimating the gradient stochastically in the inner loop, the described procedure trades computational cost for convergence speed, ultimately achieving linear convergence with fewer gradient computations per sample than full-gradient techniques. The described procedure is formalized as Algorithm 2. ###### Remark 1. We highlight a few important observations regarding Algorithm 2. 1. (a) When there are no communication constraints and each agent in the network can send full information to its out-neighboring agents, Algorithm 2 reduces to a novel stochastic variance-reduced scheme for decentralized convex optimization over such networks. 2. (b) For $\mathcal{B}=1$, the problem reduces to decentralized optimization over networks that are strongly connected at all time steps, a typical connectivity assumption for many decentralized optimization algorithms [23, 14]. 3. (c) Algorithm 2 requires each node in the network to store local vectors of size $4d$, including the current state vector, the current and past surplus vectors, and the local gradient vector. While the current state vector and current surplus vector may be communicated to the neighboring nodes, past surplus vectors are only used locally to add local perturbations at the time steps satisfying (9). 4. (d) The columns of $\bar{M}_{m}^{t}$ sum up to one. However, $\bar{M}_{m}^{t}$ is not column-stochastic as it has negative entries, which stands in contrast to the stochasticity property of the mixing matrices appearing in average consensus algorithms [39, 14].
Algorithm 2 Directed Communication-Sparsified Stochastic Variance-Reduced Gradient Descent (Di-CS-SVRG)
1: Input: $T$, $\mathbf{x}^{0}$, $\mathbf{y}^{0}=\mathbf{0}$, $\alpha$, $\gamma$
2: set $\mathbf{z}^{0}=[\mathbf{x}^{0};\mathbf{y}^{0}]$, $\tilde{w}^{0}={\mathbf{z}}^{0}$ and ${\mathbf{g}}_{i}^{0}={\mathbf{v}}_{i}^{0}=\nabla{\mathbf{f}}_{i}({\mathbf{x}}_{i}^{0})\quad\forall i\in[n]$
3: for each $s\in[0,1,...,S]$ do
4: $\tilde{w}=\tilde{w}^{s}$
5: $\tilde{\mu}_{i}=\nabla f_{i}(\tilde{w})=\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}\nabla f_{i,j}(\tilde{w})$
6: for each $t\in[sT+1,...,(s+1)T-1]$ do
7: generate non-negative matrices $\\{W_{in,m}^{t}\\}_{m=1}^{d}$ and $\\{W_{out,m}^{t}\\}_{m=1}^{d}$
8: for each $m\in[1,...,d]$ do
9: construct a row-stochastic $A^{t}_{m}$ and a column-stochastic $B^{t}_{m}$ according to (5) and (6)
10: construct $\bar{M}^{t}_{m}$ according to (7)
11: for each $i\in[1,...,2n]$ do
12: update $z_{im}^{t+1}$ according to (10)
13: end for
14: if $t\mod\mathcal{B}=\mathcal{B}-1$ then
15: for each $i\in[1,...,n]$ do
16: select $l_{i}$ uniformly at random from $[m_{i}]$
17: update $\mathbf{v}_{i}^{\mathcal{B}(\lfloor t/\mathcal{B}\rfloor+1)}$ according to (14)
18: update $g_{im}^{\mathcal{B}(\lfloor t/\mathcal{B}\rfloor+1)}$ according to (13)
19: end for
20: end if
21: end for
22: end for
23: $\tilde{w}^{s+1}={\mathbf{z}}^{(s+1)T}$
24: end for

## 4 Convergence Analysis

For convenience, let us denote the product of a sequence of mixing matrices from time step $s$ to $T$ as $\bar{M}_{m}(T:s)=\bar{M}_{m}^{T}\bar{M}_{m}^{T-1}\cdots\bar{M}_{m}^{s}.$ (15) To further simplify notation, we also introduce $M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})=\bar{M}_{m}((k+1)\mathcal{B}-1:k\mathcal{B})+\gamma F,$ (16) and $\displaystyle M_{m}(t:k_{1}\mathcal{B})=\bar{M}_{m}(t:k_{2}\mathcal{B})M_{m}(k_{2}\mathcal{B}-1:(k_{2}-1)\mathcal{B})\cdots M_{m}((k_{1}+1)\mathcal{B}-1:k_{1}\mathcal{B}),$ (17) where $k_{2}\mathcal{B}\leq t\leq(k_{2}+1)\mathcal{B}-1$ and
$k_{1},k_{2}\in\mathbb{N}$, $k_{1}\leq k_{2}$. Note that $M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$ is formed by adding a perturbation matrix $\gamma F$ to the product $\bar{M}_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$. Finally, we also introduce shorthand notation for the product of the weight matrices $B_{m}$ from time $s$ to $T$, $B_{m}(T:s)=B_{m}^{T}B_{m}^{T-1}\cdots B_{m}^{s}.$ (18) Our analysis relies on several standard assumptions about the graph and network connectivity matrices as well as the characteristics of the local and global objective functions. These are given next. ###### Assumption 1. Suppose the following conditions hold: 1. (a) The product of consecutive mixing matrices $M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$ in (16) has a non-zero spectral gap for all $k\geq 0$, $1\leq m\leq d$, and all $0<\gamma<\gamma_{0}$ for some $0<\gamma_{0}<1$. 2. (b) The collection of all possible mixing matrices $\\{\bar{M}_{m}^{t}\\}$ is a finite set. 3. (c) Each component of the local objective function $f_{i,j}$ is $L$-smooth and the global objective $f$ is $\mu$-strongly convex; that is, for all ${\mathbf{x}}_{1},{\mathbf{x}}_{2}\in{\mathbb{R}}^{d}$ there exists $L>0$ such that $\|\nabla f_{i,j}({\mathbf{x}}_{1})-\nabla f_{i,j}({\mathbf{x}}_{2})\|\leq L\|{\mathbf{x}}_{1}-{\mathbf{x}}_{2}\|$, and there exists $\mu>0$ such that $f({\mathbf{x}}_{2})\geq f({\mathbf{x}}_{1})+\langle\nabla f({\mathbf{x}}_{1}),{\mathbf{x}}_{2}-{\mathbf{x}}_{1}\rangle+\frac{\mu}{2}\|{\mathbf{x}}_{2}-{\mathbf{x}}_{1}\|^{2}$. ###### Remark 2.
Assumption 1(a) is readily satisfied for a variety of graph structures such as the $\mathcal{B}$-strongly connected directed graphs introduced in [7], i.e., the setting where the union of graphs over $\mathcal{B}$ consecutive instances starting from $k\mathcal{B}$ forms a strongly connected graph for any non-negative integer $k$. (There are two versions of the definition of $\mathcal{B}$-strongly connected directed graphs, the difference being the window starting time. As noted in Section 3.1, we consider the definition where the window may start at any time $t=k\mathcal{B}$; this differs from the definition in [25], more demanding in regards to connectivity, where the starting time is an arbitrary non-negative integer.) Furthermore, one can readily verify that Assumption 1(b) holds for the weight matrices defined in (4). Before stating the main theorem, we provide the following lemma which, under Assumption 1, establishes the consensus contraction of the product of mixing matrices and the product of normalized weight matrices. ###### Lemma 1. Suppose Assumptions 1(a) and 1(b) hold. Let $\sigma=\max(|\lambda_{M,2}|,|\lambda_{B,2}|)$ denote the larger of the second-largest eigenvalue moduli of $M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$ and $B_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$. Then, $\displaystyle\begin{split}\|M_{m}((k+1)\mathcal{B}-1:k\mathcal{B}){\mathbf{z}}-\bar{{\mathbf{z}}}\|\leq\sigma\|{\mathbf{z}}-\bar{{\mathbf{z}}}\|,\ \forall{\mathbf{z}}\in{\mathbb{R}}^{2n}\\\ \mbox{and}\\\ \|B_{m}((k+1)\mathcal{B}-1:k\mathcal{B}){\mathbf{y}}-\bar{{\mathbf{y}}}\|\leq\sigma\|{\mathbf{y}}-\bar{{\mathbf{y}}}\|,\ \forall{\mathbf{y}}\in{\mathbb{R}}^{n},\\\ \end{split}$ (19) where $\bar{{\mathbf{z}}}=[\frac{1}{n}\sum_{i=1}^{2n}z_{i},\cdots,\frac{1}{n}\sum_{i=1}^{2n}z_{i}]^{T}$ and $\bar{{\mathbf{y}}}=[\frac{1}{n}\sum_{i=1}^{n}y_{i},\cdots,\frac{1}{n}\sum_{i=1}^{n}y_{i}]^{T}$. ###### Proof. The proof of the lemma is in the supplementary document.
∎ Our main result, stated in Theorem 1 below, establishes that Algorithm 2 provides linear convergence of the local parameters to their average, which itself converges linearly to the optimal solution of (1). ###### Theorem 1. Suppose Assumption 1 holds. Denote the condition number of $f$ by $\tilde{Q}=\frac{L}{\mu}$. If the step size $\alpha$ is chosen according to $\alpha=\frac{(1-\sigma^{2})^{2}}{187\tilde{Q}L},$ (20) the iterates of Algorithm 2 satisfy $\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\|\bar{{\mathbf{z}}}^{ST}-{\mathbf{z}}_{i}^{ST}\|^{2}+\mathbb{E}\|\bar{{\mathbf{z}}}^{ST}-{\mathbf{x}}^{\ast}\|^{2}\leq 2\lambda^{S}\times\Bigg{(}\frac{1}{n}\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{0}-{\mathbf{z}}_{i}^{0}\|^{2}+\|\bar{{\mathbf{z}}}^{0}-{\mathbf{x}}^{\ast}\|^{2}\\\ +\frac{(1-\sigma^{2})^{2}}{1457nL^{2}}\sum_{i=1}^{n}\sum_{m=1}^{d}\mathbb{E}|g_{im}^{0}-\bar{g}_{m}^{0}|^{2}\Bigg{)},$ (21) where $\lambda=8\tilde{Q}^{2}\exp{(-\frac{(1-\sigma^{2})^{2}T}{748\tilde{Q}^{2}})}+0.66,$ (22) $\bar{{\mathbf{z}}}^{t}=\frac{1}{n}\sum_{i=1}^{2n}{\mathbf{z}}_{i}^{t}$, and $\bar{{\mathbf{g}}}^{t}=\frac{1}{n}\sum_{i=1}^{n}{\mathbf{g}}_{i}^{t}$. ###### Corollary 1.1. Instate the notation and hypotheses of Theorem 1. If, in addition, the inner-loop duration $T$ is chosen as $T=\mathcal{B}\lceil\frac{1496\tilde{Q}^{2}}{(1-\sigma^{2})^{2}\mathcal{B}}\ln(200\tilde{Q}^{2})\rceil,$ (23) the proposed algorithm achieves a linear convergence rate. Furthermore, to reach an $\epsilon$-accurate solution, Algorithm 2 takes at most ${\mathcal{O}}(\frac{\tilde{Q}^{2}\mathcal{B}\ln\tilde{Q}}{(1-\sigma^{2})^{2}}\ln 1/\epsilon)$ communication rounds and performs ${\mathcal{O}}((\frac{\tilde{Q}^{2}\ln\tilde{Q}}{(1-\sigma^{2})^{2}}+\max_{i}\left\\{m_{i}\right\\})\ln 1/\epsilon)$ stochastic gradient computations. ###### Proof. It is straightforward to verify that for the stated value of $T$, $\lambda\leq 0.7<1$ and thus the algorithm converges linearly.
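The claim in the proof can be checked numerically: with the step size (20) and the inner-loop duration (23), the contraction factor $\lambda$ in (22) indeed stays below $0.7$. The values of $L$, $\mu$, $\sigma$, and $\mathcal{B}$ below are arbitrary illustrative choices.

```python
import math

L, mu, sigma, B = 1.0, 0.25, 0.6, 3          # illustrative smoothness, strong convexity, gap, window
Q = L / mu                                   # condition number Q~ = L/mu
alpha = (1 - sigma**2) ** 2 / (187 * Q * L)  # step size (20)
T = B * math.ceil(1496 * Q**2 / ((1 - sigma**2) ** 2 * B) * math.log(200 * Q**2))  # duration (23)
lam = 8 * Q**2 * math.exp(-(1 - sigma**2) ** 2 * T / (748 * Q**2)) + 0.66          # factor (22)
assert lam < 0.7                             # linear convergence, as asserted in the proof
```

In fact, for any valid parameters the choice (23) forces the exponential term below $8\tilde{Q}^{2}/(200\tilde{Q}^{2})^{2}$, so $\lambda$ is barely above $0.66$.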
∎ Note that due to the gradient tracking step in Algorithm 2 (in particular, the linear system of inequalities constructed in the analysis includes the gradient tracking error), the rate of convergence depends on $\tilde{Q}^{2}$ through a factor in the coefficient matrix. ###### Remark 3. Clearly, the communication cost of Algorithm 2 depends on the level of sparsification, i.e., the value of the parameter $d_{q}$. Intuitively, if agents communicate fewer entries in each round, the communication cost per round decreases but the algorithm may take more rounds to reach the same accuracy. Therefore, the total communication cost until reaching a pre-specified $\epsilon$-accuracy, found as the product of the number of communication rounds and the cost per round, is of interest. Let $q$ denote the fraction of entries communicated per iteration, i.e., $q=d_{q}/d$; smaller $q$ implies more aggressive sparsification. This compression level parameter, $q$, impacts $\sigma$ in Theorem 1; in particular, for a fixed network connectivity parameter $\mathcal{B}$, smaller $q$ leads to sparser mixing matrices and, consequently, greater $\sigma$. Note that large $\mathcal{B}$ may be caused by sparsity of the instances of a time-varying network, thus also leading to large values of $\sigma$. ###### Remark 4. It is worthwhile discussing and comparing the constants in Corollary 1.1 to those in the original SVRG [8] (centralized optimization) and GT-SVRG [26] (decentralized optimization over undirected graphs). For SVRG, this constant is $O(\frac{1}{\mu\alpha(1-2L\alpha)T}+\frac{2L\alpha}{1-2L\alpha})$, where $\alpha$ denotes the step size, $L$ is the smoothness parameter, $\mu$ is the strong convexity parameter, and $T$ denotes the inner loop duration [8].
For both GT-SVRG and our proposed algorithm, the inner loop duration is $T=O(\frac{\tilde{Q}^{2}\log\tilde{Q}}{(1-\sigma)^{2}})$ and the linear convergence constant is $O(\tilde{Q}^{2}\exp{(-\frac{(1-\sigma^{2})^{2}T}{\tilde{Q}^{2}})})$, where $\tilde{Q}$ is the condition number and $\sigma$ is specified by the network topology and the applied compression, i.e., the sparsification of the communicated quantities.

### 4.1 Proof of Theorem 1

In this section, we prove Theorem 1 by analyzing various error terms that collectively impact the convergence rate of Algorithm 2. The main technical challenge in deriving the linear convergence result is the analysis of the vanishing errors formally introduced in the next paragraph: the consensus error, the optimality error, and the gradient tracking error. Note that unlike in undirected graphs, the mixing matrices of directed graphs are not necessarily doubly stochastic; as a result, decentralized optimization schemes may produce state vectors that converge to a weighted average rather than the exact average. Furthermore, recall that in order to accelerate the convergence, we deploy two techniques: global gradient tracking and local variance reduction. Both the gradient tracking technique, which relies on the communication of gradient information over the network, and the variance reduction trick increase the difficulty of analyzing the vanishing gradient tracking error. The combination of these issues renders the analysis of the aforementioned errors challenging.
Specifically, the convergence rate depends on: (i) the expected consensus error, i.e., the expected squared difference between the local vectors and the average vectors at time $(k+1)\mathcal{B}$, $\mathbb{E}[|z_{im}^{(k+1)\mathcal{B}}-\bar{z}_{m}^{(k+1)\mathcal{B}}|^{2}]$; (ii) the expected optimality error, i.e., the expected squared difference between the average vectors and the optimal vector, $\mathbb{E}[\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]$; and (iii) the expected gradient tracking error, $\mathbb{E}[\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}]$. Hence, it is critical to determine the evolution of these sequences. Note that compared to the gradient-tracking-based work, e.g., [7, 26], analyzing the proposed scheme is more involved due to its reliance upon a combination of variance reduction techniques and communication-sparsified consensus; showing that the novel scheme achieves linear convergence on general directed time-varying graphs despite sparsified communication calls for a careful examination of the error terms in a manner distinct from the analysis found in prior work. The dynamics of the aforementioned errors are clearly interconnected. Consequently, our analysis relies on deriving recursive bounds for the errors in terms of linear combinations of their past values. The results are formally stated in Lemma 2, Lemma 3, and Lemma 4. Proofs of these lemmas are provided in the supplementary document. ###### Lemma 2. Suppose Assumption 1 holds.
Then $\forall i\leq n$, $k\geq 0$ and $0<m\leq d$, updates generated by Algorithm 2 satisfy $\displaystyle\mathbb{E}[|z_{im}^{(k+1)\mathcal{B}}-\bar{z}_{m}^{(k+1)\mathcal{B}}|^{2}]$ $\displaystyle\leq\frac{1+\sigma^{2}}{2}\mathbb{E}[|z_{im}^{k\mathcal{B}}-\bar{z}_{m}^{k\mathcal{B}}|^{2}]+\frac{2\alpha^{2}}{1-\sigma^{2}}\mathbb{E}[|g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}|^{2}].$ (24) Having established in Lemma 2 a recursive bound on the expected consensus error, we proceed by stating in Lemmas 3 and 4 recursive bounds on the expected optimality and gradient tracking errors, respectively. First, let us introduce (for $k\geq 0$) $\displaystyle\bar{\tau}^{k\mathcal{B}}$ $\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\tau_{i}^{k\mathcal{B}},$ (25) $\displaystyle\tau_{i}^{(k+1)\mathcal{B}}$ $\displaystyle=\begin{cases}{\mathbf{x}}_{i}^{(k+1)\mathcal{B}}&\mathrm{if}\quad(k+1)\mathcal{B}\mod T=0\\\ \tilde{w}_{i}&\mathrm{otherwise}.\end{cases}$ ###### Lemma 3. Suppose Assumption 1 holds and let $0<\alpha<\frac{\mu}{8L^{2}}$. Then for all $k>0$ it holds that $\displaystyle\begin{split}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]&\leq\frac{2L^{2}\alpha}{\mu}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]\\\ &\quad+(1-\frac{\mu\alpha}{2})\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ &\quad+\frac{4L^{2}\alpha^{2}}{n}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]\\\ &\quad+\frac{4L^{2}\alpha^{2}}{n}\mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}].\end{split}$ (26) ###### Lemma 4. Suppose Assumption 1 holds. 
Then $\displaystyle\frac{1}{L^{2}}\mathbb{E}[\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}]$ $\displaystyle\leq\frac{120}{1-\sigma^{2}}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]$ (27) $\displaystyle\quad+\frac{89}{1-\sigma^{2}}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]$ $\displaystyle\quad+\frac{3+\sigma^{2}}{4}\mathbb{E}[\frac{\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}|^{2}}{L^{2}}]$ $\displaystyle\quad+\frac{38}{1-\sigma^{2}}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]$ $\displaystyle\quad+\frac{38}{1-\sigma^{2}}\mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}].$ We proceed by defining a system of linear inequalities involving the three previously discussed error terms; studying the conditions under which the powers of the resulting matrix converge geometrically leads to the linear convergence result in Theorem 1. To this end, we first state Proposition 1, whose proof follows by combining and re-organizing the inequalities in Lemmas 2-4 in matrix form. ###### Proposition 1. Suppose Assumption 1 holds.
Define ${\mathbf{u}}^{k\mathcal{B}}=\begin{bmatrix}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]\\\ \mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ \mathbb{E}[\frac{\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}|^{2}}{L^{2}}]\end{bmatrix},$ (28) $\tilde{{\mathbf{u}}}^{k\mathcal{B}}=\begin{bmatrix}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]\\\ \mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ \mathbf{0}\end{bmatrix},$ (29) $J_{\alpha}=\begin{bmatrix}\frac{1+\sigma^{2}}{2}&0&\frac{2\alpha^{2}L^{2}}{1-\sigma^{2}}\\\ \frac{2L^{2}\alpha}{\mu}&1-\frac{\mu\alpha}{2}&0\\\ \frac{120}{1-\sigma^{2}}&\frac{89}{1-\sigma^{2}}&\frac{3+\sigma^{2}}{4}\end{bmatrix},$ (30) $H_{\alpha}=\begin{bmatrix}0&0&0\\\ \frac{4L^{2}\alpha^{2}}{n}&\frac{4L^{2}\alpha^{2}}{n}&0\\\ \frac{38}{1-\sigma^{2}}&\frac{38}{1-\sigma^{2}}&0\end{bmatrix}.$ (31) If $0\leq\alpha\leq\frac{\mu(1-\sigma^{2})}{14\sqrt{2}L^{2}}$, then for any $k\geq 0$ it holds that ${\mathbf{u}}^{(k+1)\mathcal{B}}\leq J_{\alpha}{\mathbf{u}}^{k\mathcal{B}}+H_{\alpha}\tilde{{\mathbf{u}}}^{k\mathcal{B}}.$ (32) As a direct consequence of Proposition 1, for the iterations of the inner loop of Algorithm 2, for all $k\in[s\lfloor T/\mathcal{B}\rfloor,(s+1)\lfloor T/\mathcal{B}\rfloor-1]$ it holds that ${\mathbf{u}}^{(k+1)\mathcal{B}}\leq J_{\alpha}{\mathbf{u}}^{k\mathcal{B}}+H_{\alpha}{\mathbf{u}}^{sT},$ (33) while for the outer loop of Algorithm 2 it holds that for all $s\geq 0$, ${\mathbf{u}}^{(s+1)T}\leq(J_{\alpha}^{T}+\sum_{l=0}^{T-1}J_{\alpha}^{l}H_{\alpha}){\mathbf{u}}^{sT}.$ (34) Now, to guarantee linear decay of the outer loop sequence, we restrict the range of the inner loop duration $T$ and the step size $\alpha$ according to $\rho(J_{\alpha}^{T}+\sum_{l=0}^{T-1}J_{\alpha}^{l}H_{\alpha})<1,$ (35) where $\rho(\cdot)$ denotes the spectral radius of its argument.
In Lemma 5 below, we establish the range of $\alpha$ such that the weighted matrix norms of $J_{\alpha}^{T}$ and $\sum_{l=0}^{T-1}J_{\alpha}^{l}H_{\alpha}$ are small, thereby ensuring the geometric convergence of the powers of these matrices to $\mathbf{0}$. ###### Lemma 5. Suppose Assumption 1 holds and let $0<\alpha\leq\frac{(1-\sigma^{2})^{2}}{187\tilde{Q}L}$, where $\tilde{Q}=\frac{L}{\mu}$. Then $\rho(J_{\alpha})<\||J_{\alpha}|\|^{\mathbf{\delta}}_{\infty}<1-\frac{\mu\alpha}{4}$ (36) and $\||\sum_{l=0}^{T-1}J_{\alpha}^{l}H_{\alpha}|\|^{{\mathbf{q}}}_{\infty}\leq\||(I-J_{\alpha})^{-1}H_{\alpha}|\|^{{\mathbf{q}}}_{\infty}<0.66,$ (37) where $\mathbf{\delta}=\begin{bmatrix}1,8\tilde{Q}^{2},\frac{6656\tilde{Q}^{2}}{(1-\sigma^{2})^{2}}\end{bmatrix}$ and ${\mathbf{q}}=[1,1,\frac{1457}{(1-\sigma^{2})^{2}}]$. Essentially, Lemma 5 establishes the range of the stepsize $\alpha$ such that the matrices involved in the system of linear inequalities (34) have small norm. Upon setting $\alpha=\frac{(1-\sigma^{2})^{2}}{187\tilde{Q}L}$ (i.e., assigning $\alpha$ the largest feasible value), we proceed to determine the number of iterations in the outer loop such that the powers of matrix $J_{\alpha}^{T}+\sum_{l=0}^{T-1}J_{\alpha}^{l}H_{\alpha}$ in (34), and hence the components of ${\mathbf{u}}$ (i.e. the error terms), converge to zero at a geometric rate. To this end, note that since $J_{\alpha}$ is non-negative, $\sum_{l=0}^{T-1}J_{\alpha}^{l}\leq\sum_{l=0}^{\infty}J_{\alpha}^{l}=(I-J_{\alpha})^{-1}$ . 
Hence, for all $s\geq 0$ it holds that ${\mathbf{u}}^{(s+1)T}\leq(J_{\alpha}^{T}+(I-J_{\alpha})^{-1}H_{\alpha}){\mathbf{u}}^{sT}.$ (38) Since $\alpha=\frac{(1-\sigma^{2})^{2}}{187\tilde{Q}L}$, we may write $\displaystyle\begin{split}\|{\mathbf{u}}^{(s+1)T}\|_{\infty}^{{\mathbf{q}}}&\leq\||J_{\alpha}^{T}+(I-J_{\alpha})^{-1}H_{\alpha}|\|_{\infty}^{{\mathbf{q}}}\|{\mathbf{u}}^{sT}\|_{\infty}^{{\mathbf{q}}}\\\ &\leq(\||J_{\alpha}^{T}|\|_{\infty}^{{\mathbf{q}}}+0.66)\|{\mathbf{u}}^{sT}\|_{\infty}^{{\mathbf{q}}}\\\ &\leq(8\tilde{Q}^{2}(\||J_{\alpha}|\|_{\infty}^{\delta})^{T}+0.66)\|{\mathbf{u}}^{sT}\|_{\infty}^{{\mathbf{q}}}\\\ &\leq(8\tilde{Q}^{2}\exp{(-\frac{(1-\sigma^{2})^{2}T}{748\tilde{Q}^{2}})}+0.66)\|{\mathbf{u}}^{sT}\|_{\infty}^{{\mathbf{q}}}\\\ &=:\lambda\|{\mathbf{u}}^{sT}\|_{\infty}^{{\mathbf{q}}}.\end{split}$ (39) The result in (21) follows simply by noting the definitions of ${\mathbf{u}}^{sT}$ and the $\|\cdot\|^{\mathbf{q}}_{\infty}$ norm. Therefore, the proof of Theorem 1 is complete. ## 5 Experiments In this section, we report results of benchmarking the proposed Algorithm 2; for convenience, we refer to Algorithm 2 as Di-CS-SVRG (Directed Communication-Sparsified Stochastic Variance-Reduced Gradient Descent). The results for the proposed Algorithm 1 are presented in the supplementary material. We start with a network consisting of $10$ nodes with randomly generated time-varying connections while ensuring strong connectivity at each time step. The construction begins with the Erdős–Rényi model [40] where each edge exists independently with probability $0.9$; then, $2$ directed edges are dropped from each strongly connected graph, leading to directed graphs. Building upon this basic structure, we can design networks with different connectivity profiles. 
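The network construction described above can be sketched as follows; the redraw loop and the rule for choosing which edges to drop are our own illustrative choices (the paper does not specify them).

```python
import random

def strongly_connected(adj):
    """Strong connectivity of a directed graph {node: set(out-neighbours)}:
    every node must be reachable from node 0, and must reach node 0."""
    nodes = list(adj)
    root = nodes[0]
    def reach(neighbours):
        seen, stack = {root}, [root]
        while stack:
            u = stack.pop()
            for v in neighbours(u):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    fwd = reach(lambda u: adj[u])                              # follow edges
    bwd = reach(lambda u: [v for v in nodes if u in adj[v]])   # reversed edges
    return len(fwd) == len(nodes) and len(bwd) == len(nodes)

def random_network(n=10, p=0.9, drop=2, seed=0):
    """Directed Erdos-Renyi instance, redrawn until strongly connected,
    then `drop` directed edges removed while preserving connectivity."""
    rng = random.Random(seed)
    while True:
        adj = {i: {j for j in range(n) if j != i and rng.random() < p}
               for i in range(n)}
        if strongly_connected(adj):
            break
    edges = sorted((u, v) for u in adj for v in adj[u])
    rng.shuffle(edges)
    removed = 0
    for u, v in edges:
        if removed == drop:
            break
        adj[u].discard(v)
        if strongly_connected(adj):
            removed += 1
        else:
            adj[u].add(v)   # removal would break connectivity; put edge back
    return adj

G = random_network()
```

With $p=0.9$ the first draw is strongly connected with overwhelming probability, so the redraw loop terminates almost immediately.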
Recall that the window size parameter $\mathcal{B}$, introduced in Assumption 1(a), requires that the union graph over $\mathcal{B}$ consecutive instances, starting from any instance that is a multiple of $\mathcal{B}$, be strongly connected; in our construction, this union is an Erdős–Rényi graph that is strongly connected almost surely. When $\mathcal{B}=1$, the network is strongly connected at each time step. The parameter $q$, the fraction of entries being communicated to neighboring nodes, characterizes the level of message sparsification; $q=1$ implies communication without compression, while $q=0$ indicates there is no communication in the network. ### 5.1 Decentralized Optimization Problem We test the proposed Di-CS-SVRG on two tasks, linear and logistic regression, and benchmark it against several baseline algorithms. In particular, we compare Di-CS-SVRG with Decentralized Stochastic Gradient Descent (De-Stoc, stochastic variant), Push-DIGing (Push-DIG-Full) [7], Push-DIGing Stochastic (Push-DIG-Stoc, stochastic variant) and the TV-AB algorithm (AB Algorithm) [25]. In addition, we include comparisons to the full gradient schemes we considered in our preliminary work [1], Decentralized Full Gradient Descent (De-Full) and its communication-sparsified variant Sparsified De-Full (S-De-Full). #### 5.1.1 Decentralized Linear Regression We consider the setting where $n$ nodes collaboratively solve the optimization problem $\min_{{\mathbf{x}}}\left\\{\frac{1}{n}\sum_{i=1}^{n}\|\mathbf{y}_{i}-D_{i}\mathbf{x}\|^{2}\right\\},$ (40) where for the $i^{th}$ node $D_{i}\in\mathbb{R}^{200\times 64}$ denotes the matrix with $200$ local samples of size $d=64$, and $\mathbf{y}_{i}\in\mathbb{R}^{200}$ denotes the corresponding measurement vector. The true value of $\mathbf{x}$, $\mathbf{x}^{*}$, is generated from a normal distribution, and the samples are generated synthetically. 
The measurements are generated as $\mathbf{y}_{i}=D_{i}\mathbf{x}^{*}+\eta_{i}$, where the entries of $D_{i}$ are generated randomly from the standard normal distribution and then $D_{i}$ is normalized such that its rows sum up to one. The local noise vector $\eta_{i}$ is drawn at random from a zero-mean Gaussian distribution with variance $0.01$. All algorithms are initialized using randomly generated local vectors $\mathbf{x}_{i}^{0}$, and utilize constant step size $\alpha_{t}=0.002$ except the AB algorithm, for which we followed the recommendation in [25] and set $\alpha_{t}=0.0025$. Performance of the algorithms is characterized using three metrics: residual versus iterations, residual versus the average number of gradient computations, and residual versus the communication cost, where the residual is computed as $\frac{\|\mathbf{x}^{t}-\mathbf{x}^{*}\|}{\|\mathbf{x}^{0}-\mathbf{x}^{*}\|}$. The results are shown in Fig. 1. As seen in Fig. 1(a), Di-CS-SVRG (i.e., our Algorithm 2) with $q=1$ converges at a linear rate and, while being a stochastic gradient algorithm, reaches the same residual floor as the full gradient method Push-DIG-Full and the AB algorithm. Di-CS-SVRG converges much faster than the two baseline algorithms employing SGD, Push-DIG-Stoc and De-Stoc. Fig. 1(b) shows that Di-CS-SVRG with varied compression levels $q$ converges to the same residual floor, and that (as expected) larger $q$ leads to faster convergence. Moreover, the figure shows that for a fixed $q$, Di-CS-SVRG achieves faster convergence than the benchmark algorithms. Fig. 1(c) compares different algorithms in terms of the number of gradients computed per sample, demonstrating the computational efficiency of Di-CS-SVRG. Finally, Fig. 1(d) shows the communication cost for varied $q$, computed as the total number of the (state, surplus and gradient) vector entries communicated across the network. 
As seen in the figure, to achieve a pre-specified level of the residual, Di-CS-SVRG with $q=0.05$ incurs a smaller communication cost than any other considered algorithm. Figure 1: Linear regression, $\mathcal{B}=5$. (a) The residual achieved by full communication schemes and the residual of Di-CS-SVRG (Algorithm 2) with $q=1$ vs. iterations. (b) The residual achieved by Di-CS-SVRG with different compression levels ($q=1$, $q=0.08$, and $q=0.05$) vs. iterations. (c) The cumulative number of gradient computations needed to reach a given level of the residual, for both the compressed and the full communication schemes. (d) The cumulative communication cost needed to reach a given level of the residual, for both the compressed and the full communication schemes. Figure 2: Logistic regression, $\mathcal{B}=1$. (a) The correct classification rate achieved by full communication schemes and by Di-CS-SVRG vs. iterations. (b) The correct classification rate for Di-CS-SVRG with varied compression levels ($q=1$, $q=0.12$, $q=0.08$) vs. iterations. (c) The cumulative number of gradient computations required to reach given levels of the correct classification rate. (d) The cumulative communication cost required to reach a given level of the correct classification rate. #### 5.1.2 Decentralized Logistic Regression To perform benchmarking on a logistic regression task, we solve a multi-class classification problem on the Stack Overflow dataset [41]: 
$\min_{{\mathbf{x}}}\left\\{\frac{\mu}{2}\|\mathbf{x}\|^{2}+\sum_{i=1}^{n}\sum_{j=1}^{N}\mathrm{ln}(1+\mathrm{exp}(-(\mathbf{m}_{ij}^{T}\mathbf{x}){\mathbf{y}}_{ij}))\right\\},$ (41) where the training samples $(\mathbf{m}_{ij},{\mathbf{y}}_{ij})\in\mathbb{R}^{400}\times\mathbb{R}^{5}$; $\mathbf{m}_{ij}$ represents a vectorized text feature and ${\mathbf{y}}_{ij}$ represents the corresponding tag vector. We compare the performance of Di-CS-SVRG with the same benchmarking algorithms as in the linear regression problem, and use the same initialization setup. The logistic regression experiment is run with the stepsize $\alpha_{t}=0.01$; the regularization parameter is set to $\mu=10^{-5}$. Performance of the algorithms on the logistic regression task is characterized by the classification correct rate. In particular, we evaluate the following three metrics: the correct rate vs. iterations, the correct rate vs. average gradient computation, and the correct rate vs. communication cost; they are all shown in Fig. 2. As seen in Fig. 2(a), Di-CS-SVRG converges and reaches the same residual floor as the full gradient method Push-DIG-Full and the AB Algorithm. Di-CS-SVRG converges much faster than the two algorithms that rely on SGD, Push-DIG-Stoc and De-Stoc. Fig. 2(b) shows that for varied compression levels $q$, Di-CS-SVRG converges to the same residual floor. As expected, larger $q$ leads to faster convergence. For a fixed $q$, Di-CS-SVRG converges faster than the benchmark algorithms. Fig. 2(c) reports the average gradient computation, i.e., the number of gradients computed per sample. As can be seen, Di-CS-SVRG with different compression levels uses fewer gradient computations than the full gradient schemes (Push-DIG-Full, the AB algorithm and De-Full) to reach a $90\%$ correct classification rate. Fig. 2(d) shows the communication cost, defined as the total number of the (state, surplus and gradient) vector entries exchanged across the network, for various values of $q$. 
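A minimal sketch of the objective in (41) and its gradient, with the tag vectors simplified to scalar labels $y_{j}\in\\{-1,+1\\}$ and toy dimensions (both simplifications are ours); a central finite-difference check confirms the analytic gradient.

```python
import numpy as np

def logistic_loss(x, M, y, mu):
    """mu/2 ||x||^2 + sum_j ln(1 + exp(-(m_j^T x) y_j)), cf. (41),
    with scalar labels y_j in {-1, +1} (a simplification of the tag vectors)."""
    z = (M @ x) * y
    return 0.5 * mu * (x @ x) + np.sum(np.log1p(np.exp(-z)))

def logistic_grad(x, M, y, mu):
    z = (M @ x) * y
    s = 1.0 / (1.0 + np.exp(z))          # = sigmoid(-z)
    return mu * x - M.T @ (y * s)

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5))         # 20 toy samples, 5 features
y = np.where(rng.standard_normal(20) >= 0, 1.0, -1.0)
x = rng.standard_normal(5)
mu = 1e-5

# central finite differences confirm the analytic gradient
g = logistic_grad(x, M, y, mu)
eps = 1e-6
g_fd = np.array([(logistic_loss(x + eps * e, M, y, mu)
                  - logistic_loss(x - eps * e, M, y, mu)) / (2 * eps)
                 for e in np.eye(5)])
```

This per-sample gradient is exactly the quantity the stochastic schemes above sample, and whose variance the SVRG-style correction in Di-CS-SVRG reduces.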
Among the considered schemes, Di-CS-SVRG with $q=0.08$ reaches the pre-specified residual level with a smaller communication cost than any stochastic scheme (Push-DIG-Stoc, De-Stoc as well as Di-CS-SVRG with other $q$’s). Even though Push-DIG-Stoc reaches various accuracies slightly faster than Di-CS-SVRG, it requires a considerably larger amount of gradient computation to do so (see Fig. 2). ### 5.2 Results on different network topologies To further test Di-CS-SVRG in different settings, we apply it to decentralized optimization over networks with varied connectivity and sizes. #### 5.2.1 Varied network connectivity We consider the linear regression problem and vary the values of the joint connectivity parameter $\mathcal{B}$. Fig. 3(a) shows the resulting residuals; as seen there, larger $\mathcal{B}$, implying that the network takes a longer time before the union of its instances forms a strongly connected graph, leads to slower convergence. Figure 3: Varied network connectivity and size. #### 5.2.2 Varied network size We now consider the logistic regression problem over networks of varied sizes. In particular, we fix the total number of data points and vary the number of nodes in the network. As the network grows, i.e., the number of nodes in the network becomes larger, each agent has fewer locally available data points. Fig. 3(b) shows the correct rate for compression levels $q=1$, $q=0.12$ and $q=0.08$ as $n$ grows from $10$ to $30$. For a pre-specified sparsification level, larger networks, in which each agent has fewer data points to train its local model, require more communication rounds and therefore take longer to converge. ## 6 Conclusion In this paper we studied decentralized convex optimization problems over time-varying directed networks and proposed a stochastic variance-reduced algorithm for solving them. The algorithm sparsifies messages exchanged between network nodes, thus enabling collaboration in resource-constrained settings. 
We proved that the proposed algorithm, Di-CS-SVRG, enjoys linear convergence rate, and demonstrated its efficacy through simulation studies. As part of the future work, it is of interest to extend this work to decentralized non-convex optimization problems. ## References * [1] Yiyue Chen, Abolfazl Hashemi and Haris Vikalo “Decentralized Optimization on Time-Varying Directed Graphs Under Communication Constraints” In _ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2021, pp. 3670–3674 IEEE * [2] Wei Ren and Randal W Beard “Consensus seeking in multiagent systems under dynamically changing interaction topologies” In _IEEE Transactions on automatic control_ 50.5 IEEE, 2005, pp. 655–661 * [3] Ali Jadbabaie, Jie Lin and A Stephen Morse “Coordination of groups of mobile autonomous agents using nearest neighbor rules” In _IEEE Transactions on automatic control_ 48.6 IEEE, 2003, pp. 988–1001 * [4] Angelia Nedic and Asuman Ozdaglar “Distributed subgradient methods for multi-agent optimization” In _IEEE Transactions on Automatic Control_ 54.1 IEEE, 2009, pp. 48–61 * [5] S Sundhar Ram, Angelia Nedić and Venugopal V Veeravalli “Distributed stochastic subgradient projection algorithms for convex optimization” In _Journal of optimization theory and applications_ 147.3 Springer, 2010, pp. 516–545 * [6] Guannan Qu and Na Li “Harnessing smoothness to accelerate distributed optimization” In _IEEE Transactions on Control of Network Systems_ 5.3 IEEE, 2017, pp. 1245–1260 * [7] Angelia Nedic, Alex Olshevsky and Wei Shi “Achieving geometric convergence for distributed optimization over time-varying graphs” In _SIAM Journal on Optimization_ 27.4 SIAM, 2017, pp. 2597–2633 * [8] Rie Johnson and Tong Zhang “Accelerating stochastic gradient descent using predictive variance reduction” In _Advances in neural information processing systems_ , 2013, pp. 
315–323 * [9] Mark Schmidt, Nicolas Le Roux and Francis Bach “Minimizing finite sums with the stochastic average gradient” In _Mathematical Programming_ 162.1-2 Springer, 2017, pp. 83–112 * [10] John Nikolas Tsitsiklis “Problems in decentralized decision making and computation.”, 1984 * [11] Wei Ren, Randal W Beard and Ella M Atkins “Information consensus in multivehicle cooperative control” In _IEEE Control systems magazine_ 27.2 IEEE, 2007, pp. 71–82 * [12] Kai Cai and Hideaki Ishii “Average consensus on general strongly connected digraphs” In _Automatica_ 48.11 Elsevier, 2012, pp. 2750–2761 * [13] Kai Cai and Hideaki Ishii “Average consensus on arbitrary strongly connected digraphs with time-varying topologies” In _IEEE Transactions on Automatic Control_ 59.4 IEEE, 2014, pp. 1066–1071 * [14] Anastasia Koloskova, Sebastian Stich and Martin Jaggi “Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication” In _International Conference on Machine Learning_ , 2019, pp. 3478–3487 * [15] Björn Johansson, Maben Rabi and Mikael Johansson “A randomized incremental subgradient method for distributed optimization in networked systems” In _SIAM Journal on Optimization_ 20.3 SIAM, 2010, pp. 1157–1170 * [16] Ermin Wei and Asuman Ozdaglar “Distributed alternating direction method of multipliers” In _2012 IEEE 51st IEEE Conference on Decision and Control (CDC)_ , 2012, pp. 5445–5450 IEEE * [17] John C Duchi, Alekh Agarwal and Martin J Wainwright “Dual averaging for distributed optimization: Convergence analysis and network scaling” In _IEEE Transactions on Automatic control_ 57.3 IEEE, 2011, pp. 592–606 * [18] Angelia Nedić, Soomin Lee and Maxim Raginsky “Decentralized online optimization with global objectives and local communication” In _2015 American Control Conference (ACC)_ , 2015, pp. 4497–4503 IEEE * [19] Lie He, An Bian and Martin Jaggi “Cola: Decentralized linear learning” In _Advances in Neural Information Processing Systems_ , 2018, pp. 
4536–4546 * [20] Sebastian U Stich, Jean-Baptiste Cordonnier and Martin Jaggi “Sparsified SGD with memory” In _Advances in Neural Information Processing Systems_ , 2018, pp. 4447–4458 * [21] David Kempe, Alin Dobra and Johannes Gehrke “Gossip-based computation of aggregate information” In _44th Annual IEEE Symposium on Foundations of Computer Science, 2003. Proceedings._ , 2003, pp. 482–491 IEEE * [22] Angelia Nedić and Alex Olshevsky “Distributed optimization over time-varying directed graphs” In _IEEE Transactions on Automatic Control_ 60.3 IEEE, 2014, pp. 601–615 * [23] Chenguang Xi, Qiong Wu and Usman A Khan “On the distributed optimization over directed networks” In _Neurocomputing_ 267 Elsevier, 2017, pp. 508–515 * [24] Angelia Nedić and Alex Olshevsky “Stochastic gradient-push for strongly convex functions on time-varying directed graphs” In _IEEE Transactions on Automatic Control_ 61.12 IEEE, 2016, pp. 3936–3947 * [25] Fakhteh Saadatniaki, Ran Xin and Usman A Khan “Optimization over time-varying directed graphs with row and column-stochastic matrices” In _arXiv preprint arXiv:1810.07393_ , 2018 * [26] Ran Xin, Usman A Khan and Soummya Kar “Variance-reduced decentralized stochastic optimization with accelerated convergence” In _arXiv preprint arXiv:1912.04230_ , 2019 * [27] Boyue Li, Shicong Cen, Yuxin Chen and Yuejie Chi “Communication-efficient distributed optimization in networks with gradient tracking and variance reduction” In _International Conference on Artificial Intelligence and Statistics_ , 2020, pp. 1662–1672 PMLR * [28] Bahman Gharesifard and Jorge Cortés “When does a digraph admit a doubly stochastic adjacency matrix?” In _Proceedings of the 2010 American Control Conference_ , 2010, pp. 2440–2445 IEEE * [29] Muhammad I Qureshi, Ran Xin, Soummya Kar and Usman A Khan “Push-SAGA: A decentralized stochastic algorithm with variance reduction over directed graphs” In _IEEE Control Systems Letters_ IEEE, 2021 * [30] Hanlin Tang et al. 
“Communication compression for decentralized training” In _Advances in Neural Information Processing Systems_ , 2018, pp. 7652–7662 * [31] Wei Wen et al. “Terngrad: Ternary gradients to reduce communication in distributed deep learning” In _Advances in neural information processing systems_ , 2017, pp. 1509–1519 * [32] Hantian Zhang et al. “Zipml: Training linear models with end-to-end low precision, and a little bit of deep learning” In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_ , 2017, pp. 4035–4043 JMLR. org * [33] Rudrajit Das, Abolfazl Hashemi, Sujay Sanghavi and Inderjit S Dhillon “Improved Convergence Rates for Non-Convex Federated Learning with Compression” In _arXiv preprint arXiv:2012.04061_ , 2020 * [34] Amirhossein Reisizadeh et al. “Robust and communication-efficient collaborative learning” In _Advances in Neural Information Processing Systems_ , 2019, pp. 8386–8397 * [35] Zebang Shen et al. “Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication” In _International Conference on Machine Learning_ , 2018, pp. 4624–4633 * [36] Anastasia Koloskova, Tao Lin, Sebastian U Stich and Martin Jaggi “Decentralized deep learning with arbitrary communication compression” In _arXiv preprint arXiv:1907.09356_ , 2019 * [37] Abolfazl Hashemi et al. “On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization” In _arXiv preprint arXiv:2012.04061_ , 2020 * [38] Hossein Taheri, Aryan Mokhtari, Hamed Hassani and Ramtin Pedarsani “Quantized Decentralized Stochastic Learning over Directed Graphs” In _International Conference on Machine Learning (ICML)_ , 2020 * [39] Lin Xiao and Stephen Boyd “Fast linear iterations for distributed averaging” In _Systems & Control Letters_ 53.1 Elsevier, 2004, pp. 65–78 * [40] Paul Erdös and Alfréd Rényi “On random graphs” In _Publicationes mathematicae_ 6.26, 1959, pp. 
290–297 * [41] “The Stack Overflow Data” URL: https://www.kaggle.com/stackoverflow/stackoverflow * [42] Ran Xin, Anit Kumar Sahu, Usman A Khan and Soummya Kar “Distributed stochastic optimization with gradient tracking over strongly-connected networks” In _arXiv preprint arXiv:1903.07266_ , 2019 Supplementary Material In this document, we include detailed proofs of the auxiliary lemmas stated in Section 1 of the main manuscript, and present experimental results for the consensus algorithm, Algorithm 1. ## Appendix A Analysis We start by proving auxiliary lemmas utilized in the proof of Theorem 1. ###### Lemma $1$. Suppose Assumptions 1 (a) and (b) hold. Let $\sigma=\max(|\lambda_{M,2}|,|\lambda_{B,2}|)$ denote the larger of the second largest eigenvalues of $M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$ and $B_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$. Then, $\displaystyle\begin{split}\|M_{m}((k+1)\mathcal{B}-1:k\mathcal{B}){\mathbf{z}}-\bar{{\mathbf{z}}}\|\leq\sigma\|{\mathbf{z}}-\bar{{\mathbf{z}}}\|,\ \forall{\mathbf{z}}\in{\mathbb{R}}^{2n}\\\ \mbox{and}\\\ \|B_{m}((k+1)\mathcal{B}-1:k\mathcal{B}){\mathbf{y}}-\bar{{\mathbf{y}}}\|\leq\sigma\|{\mathbf{y}}-\bar{{\mathbf{y}}}\|,\ \forall{\mathbf{y}}\in{\mathbb{R}}^{n},\\\ \end{split}$ (42) where $\bar{{\mathbf{z}}}=[\frac{1}{n}\sum_{i=1}^{2n}z_{i},\cdots,\frac{1}{n}\sum_{i=1}^{2n}z_{i}]^{T}$ and $\bar{{\mathbf{y}}}=[\frac{1}{n}\sum_{i=1}^{n}y_{i},\cdots,\frac{1}{n}\sum_{i=1}^{n}y_{i}]^{T}$. ###### Proof. To prove Lemma 1, we first need to establish the following. ###### Lemma $1.1$. Assume that $M_{m}((s+1)\mathcal{B}-1:s\mathcal{B})$ has non-zero spectral gap for each $m$. Then the following statements hold: 1. 
(a) The sequence of powers of $M_{m}((s+1)\mathcal{B}-1:s\mathcal{B})$ converges to the limit matrix $\mathrm{lim}_{t\to\infty}(M_{m}((s+1)\mathcal{B}-1:s\mathcal{B}))^{t}=\left[\begin{matrix}\frac{\mathbf{1}_{n}\mathbf{1}^{T}_{n}}{n}&\frac{\mathbf{1}_{n}\mathbf{1}^{T}_{n}}{n}\\\ \mathbf{0}&\mathbf{0}\\\ \end{matrix}\right].$ (43) 2. (b) Let $1=|\lambda_{1}(M_{m}((s+1)\mathcal{B}-1:s\mathcal{B}))|>|\lambda_{2}(M_{m}((s+1)\mathcal{B}-1:s\mathcal{B}))|\geq\cdots\geq|\lambda_{2n}(M_{m}((s+1)\mathcal{B}-1:s\mathcal{B}))|$ be the eigenvalues of $M_{m}((s+1)\mathcal{B}-1:s\mathcal{B})$, and let $\sigma_{m}=|\lambda_{2}(M_{m}((s+1)\mathcal{B}-1:s\mathcal{B}))|$; then there exists $\Gamma_{m}^{\prime}>0$ such that $\displaystyle\|(M_{m}((s+1)\mathcal{B}-1:s\mathcal{B}))^{t}-\mathcal{I}\|_{\infty}\leq\Gamma_{m}^{\prime}\sigma_{m}^{t},$ (44) where $\mathcal{I}:=\frac{1}{n}[\mathbf{1}^{T}\ \mathbf{0}^{T}]^{T}[\mathbf{1}^{T}\ \mathbf{1}^{T}]$. ###### Proof. For each $m$, $M_{m}((s+1)\mathcal{B}-1:s\mathcal{B})$ has column sums equal to $1$. According to Assumption 1, the definition of the mixing matrix, and the construction of the product, $M_{m}((s+1)\mathcal{B}-1:s\mathcal{B})$ has a simple eigenvalue $1$ with the corresponding left eigenvector $[\mathbf{1}^{T}\ \mathbf{1}^{T}]$ and right eigenvector $[\mathbf{1}^{T}\ \mathbf{0}^{T}]^{T}$. Following the Jordan matrix decomposition for the simple eigenvalue, there exist some $P\in\mathbb{R}^{2n\times(2n-1)}$ and $Q\in\mathbb{R}^{(2n-1)\times 2n}$ such that $\displaystyle(M_{m}((s+1)\mathcal{B}-1:s\mathcal{B}))^{t}$ $\displaystyle=\mathcal{I}^{t}+PJ_{m}^{t}Q=\mathcal{I}+PJ_{m}^{t}Q.$ (45) Note that $\sigma_{m}$, the second largest eigenvalue magnitude of $M_{m}((s+1)\mathcal{B}-1:s\mathcal{B})$, is also the spectral radius of $J_{m}$. 
The proof of part (a) follows by noting that $\mathrm{lim}_{t\to\infty}J_{m}^{t}=\mathbf{0}.$ Since $\|P\|$, $\|Q\|$ and $\|J_{m}\|$ are finite, there exists some $\Gamma_{m}^{\prime}>0$ such that $\displaystyle\quad\|(M_{m}((s+1)\mathcal{B}-1:s\mathcal{B}))^{t}-\mathcal{I}\|_{\infty}\leq\|PJ_{m}^{t}Q\|_{\infty}\leq\Gamma_{m}^{\prime}\sigma_{m}^{t}$ (46), which completes the proof of part (b). ∎ Then let $\sigma^{\prime}=\max_{m}\sigma_{m}$, where $\sigma_{m}$ is as defined in Lemma $1.1$ and, by mathematical induction, for each $m$ it holds that $\rho(M_{m}(T\mathcal{B}-1:0)-\frac{1}{n}[\mathbf{1}^{T}\ \mathbf{0}^{T}]^{T}[\mathbf{1}^{T}\ \mathbf{1}^{T}])\leq\sigma^{\prime T}.$ (47) Referring to the fact that $M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$ and $B_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$ both have column sums equal to $1$ and defining $\sigma$ as stated in the lemma, we can conclude the proof of Lemma 1. ∎ The following lemma, restated for convenience, establishes an upper bound on the consensus error. ###### Lemma $2$. Suppose Assumption 1 holds. Then, $\forall i\leq n$, $k\geq 0$, and $0<m\leq d$, the updates generated by Algorithm 1 satisfy $\displaystyle\mathbb{E}[|z_{im}^{(k+1)\mathcal{B}}-\bar{z}_{m}^{(k+1)\mathcal{B}}|^{2}]$ $\displaystyle\leq\frac{1+\sigma^{2}}{2}\mathbb{E}[|z_{im}^{k\mathcal{B}}-\bar{z}_{m}^{k\mathcal{B}}|^{2}]$ (48) $\displaystyle\quad+\frac{2\alpha^{2}}{1-\sigma^{2}}\mathbb{E}[|g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}|^{2}].$ ###### Proof. 
By constructing normalized weight matrices and relying on the definition of $M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})$, we can simplify the update as $\displaystyle z_{im}^{(k+1)\mathcal{B}}$ $\displaystyle=\sum_{j=1}^{2n}[M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})]_{ij}[Q(z_{j}^{k\mathcal{B}})]_{m}-\alpha g_{im}^{k\mathcal{B}}$ (49) $\displaystyle=\sum_{j=1}^{2n}[M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})]_{ij}z_{jm}^{k\mathcal{B}}-\alpha g_{im}^{k\mathcal{B}}.$ Since for any $m$ we have $\bar{g}_{m}^{k\mathcal{B}}=\frac{1}{n}\sum_{j=1}^{n}g_{jm}^{k\mathcal{B}}$, it holds that $\displaystyle|z_{im}^{(k+1)\mathcal{B}}-\bar{z}_{m}^{(k+1)\mathcal{B}}|^{2}=|\sum_{j=1}^{2n}[M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})]_{ij}z_{jm}^{k\mathcal{B}}-\bar{z}_{m}^{k\mathcal{B}}-\alpha(g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}})|^{2}.$ (50) By Young’s inequality, $\|{\mathbf{a}}+{\mathbf{b}}\|^{2}\leq(1+\eta)\|{\mathbf{a}}\|^{2}+(1+\frac{1}{\eta})\|{\mathbf{b}}\|^{2},\quad\forall\eta>0,{\mathbf{a}},{\mathbf{b}}.$ Using Lemma 1, $\forall i\leq n$ and $0<m\leq d$ it holds that $|\sum_{j=1}^{2n}[M_{m}((k+1)\mathcal{B}-1:k\mathcal{B})]_{ij}z_{jm}^{k\mathcal{B}}-\bar{z}_{m}^{k\mathcal{B}}|\leq\sigma|z_{im}^{k\mathcal{B}}-\bar{z}_{m}^{k\mathcal{B}}|.$ (51) Then for all $k\geq 0$, $\displaystyle|z_{im}^{(k+1)\mathcal{B}}-\bar{z}_{m}^{(k+1)\mathcal{B}}|^{2}$ $\displaystyle\leq(1+\eta)\sigma^{2}|z_{im}^{k\mathcal{B}}-\bar{z}_{m}^{k\mathcal{B}}|^{2}+(1+\frac{1}{\eta})\alpha^{2}|g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}|^{2}.$ (52) Setting $\eta=\frac{1-\sigma^{2}}{2\sigma^{2}}$ completes the proof. ∎ Next, we provide proofs of two lemmas stating upper bounds on the optimality gap and the gradient tracking error. 
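Before turning to those proofs, the contraction mechanism behind Lemmas 1 and 2 can be checked on a toy example: a symmetric doubly stochastic mixing matrix, a particularly simple special case of the column-stochastic products $M_{m}(\cdot)$ (the specific matrix below is our own illustrative choice).

```python
import numpy as np

# Toy symmetric doubly stochastic mixing matrix: diagonal 0.5, off-diagonal 0.25.
M = 0.25 * np.eye(3) + 0.25 * np.ones((3, 3))
n = M.shape[0]
limit = np.ones((n, n)) / n                       # averaging limit matrix

eigs = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]
sigma = eigs[1]                                    # second-largest modulus

# Consensus contraction as in Lemma 1: ||M z - zbar|| <= sigma * ||z - zbar||.
rng = np.random.default_rng(0)
z = rng.standard_normal(n)
zbar = np.full(n, z.mean())
lhs = np.linalg.norm(M @ z - zbar)
rhs = sigma * np.linalg.norm(z - zbar)
```

Here the spectral gap is $1-\sigma=0.75$, so the powers of $M$ approach the averaging limit geometrically, mirroring the $\Gamma_{m}^{\prime}\sigma_{m}^{t}$ bound in (44).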
For convenience, we first introduce $\bar{\tau}^{k\mathcal{B}}=\frac{1}{n}\sum_{i=1}^{n}\tau_{i}^{k\mathcal{B}}$ (53) and $\tau_{i}^{(k+1)\mathcal{B}}=\begin{cases}{\mathbf{x}}_{i}^{(k+1)\mathcal{B}}&\mathrm{if}\quad(k+1)\mathcal{B}\mod T=0\\\ \tilde{w}_{i}&\mathrm{otherwise}.\end{cases}$ (54) ###### Lemma $3$. Suppose Assumption 1 holds and let $0<\alpha<\frac{\mu}{8L^{2}}$. Then for all $k>0$ it holds that $\displaystyle\begin{split}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]&\leq\frac{2L^{2}\alpha}{\mu}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]+(1-\frac{\mu\alpha}{2})\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ &\quad+\frac{4L^{2}\alpha^{2}}{n}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]+\frac{4L^{2}\alpha^{2}}{n}\mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}].\end{split}$ (55) ###### Lemma $4$. Suppose Assumption 1 holds. Then, $\displaystyle\frac{1}{L^{2}}\mathbb{E}[\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}]$ $\displaystyle\leq\frac{120}{1-\sigma^{2}}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]+\frac{89}{1-\sigma^{2}}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]$ (56) $\displaystyle\quad+\frac{3+\sigma^{2}}{4}\mathbb{E}[\frac{\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}|^{2}}{L^{2}}]$ $\displaystyle\quad+\frac{38}{1-\sigma^{2}}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]+\frac{38}{1-\sigma^{2}}\mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}].$ Proving Lemmas 3 and 4 requires a series of auxiliary lemmas, Lemmas 3.1–3.4. We start with Lemma 3.1, which states an upper bound on $\mathbb{E}[\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]$. 
###### Lemma $3.1$. Suppose Assumption 1 holds. Let $0<\alpha<\frac{1}{L}$, where $L$ is the smoothness parameter. For all $k>0$, it holds that $\displaystyle\mathbb{E}[\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]$ $\displaystyle\leq\frac{L^{2}\alpha}{n\mu}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]+(1-\mu\alpha)\mathbb{E}[\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]+\frac{\alpha^{2}}{n^{2}}\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla\mathbf{f}({\mathbf{x}}^{k\mathcal{B}})\|^{2}],$ (57) where $\mu$ is the strong convexity parameter and $\nabla\mathbf{f}({\mathbf{x}}^{k\mathcal{B}})=[\nabla f_{1}({\mathbf{x}}_{1}^{k\mathcal{B}});\cdots;\nabla f_{n}({\mathbf{x}}_{n}^{k\mathcal{B}})]$. ###### Proof. By definition, $\bar{z}_{m}^{t}=\frac{1}{n}\sum_{i=1}^{2n}z_{im}^{t}$. Let us denote $\nabla\bar{f}({\mathbf{x}}^{t})=\nabla\bar{f}({\mathbf{z}}^{t})=\frac{1}{n}\sum_{i=1}^{n}\nabla f_{i}({\mathbf{z}}_{i}^{t})$. By induction, $\bar{g}_{m}^{k\mathcal{B}}=\bar{v}_{m}^{k\mathcal{B}}.$ (58) Next, we have that for any $m$ $\bar{z}_{m}^{(k+1)\mathcal{B}}=\bar{z}_{m}^{k\mathcal{B}}-\alpha\bar{g}_{m}^{k\mathcal{B}}=\bar{z}_{m}^{k\mathcal{B}}-\alpha\bar{v}_{m}^{k\mathcal{B}},$ (59) which implies that $\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}=\bar{{\mathbf{z}}}^{k\mathcal{B}}-\alpha\bar{{\mathbf{g}}}^{k\mathcal{B}}=\bar{{\mathbf{z}}}^{k\mathcal{B}}-\alpha\bar{{\mathbf{v}}}^{k\mathcal{B}}.$ (60) Note that the randomness in Algorithm 1 originates from a set of independent random variables $\\{\omega_{i}^{t}\\}_{i\in[2n]}^{t\geq 0}$. 
We rely on the $\sigma$-algebra ${\mathcal{F}}^{k\mathcal{B}}$ to characterize the history of the dynamical system generated by $\\{\omega_{i}^{t}\\}_{i\in[2n]}^{t\leq k\mathcal{B}-1}$, $\displaystyle\begin{split}&\mathbb{E}[\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]=\mathbb{E}[\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-\alpha\bar{{\mathbf{v}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]\\\ &=\mathbb{E}[\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-\alpha\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-{\mathbf{x}}^{*}+\alpha(\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\bar{{\mathbf{v}}}^{k\mathcal{B}})\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]\\\ &=\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-\alpha\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-{\mathbf{x}}^{*}\|^{2}+\alpha^{2}\mathbb{E}[\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\bar{{\mathbf{v}}}^{k\mathcal{B}}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]\\\ &\quad+2\alpha\langle\bar{{\mathbf{z}}}^{k\mathcal{B}}-\alpha\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-{\mathbf{x}}^{*},\mathbb{E}[\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\bar{{\mathbf{v}}}^{k\mathcal{B}}|{\mathcal{F}}^{k\mathcal{B}}]\rangle\\\ &=\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-\alpha\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-{\mathbf{x}}^{*}\|^{2}+\alpha^{2}\mathbb{E}[\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\bar{{\mathbf{v}}}^{k\mathcal{B}}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]\\\ &\quad+2\alpha\langle\bar{{\mathbf{z}}}^{k\mathcal{B}}-\alpha\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-{\mathbf{x}}^{*},\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})\rangle.\end{split}$ (61) We then proceed by considering $\mathbb{E}[\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\bar{{\mathbf{v}}}^{k\mathcal{B}}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]$, $\displaystyle\begin{split}\mathbb{E}[\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\bar{{\mathbf{v}}}^{k\mathcal{B}}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]&=\mathbb{E}[\|\nabla 
f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})+\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})-\bar{{\mathbf{v}}}^{k\mathcal{B}}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]\\\ &=\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})\|^{2}+\mathbb{E}[\|\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})-\bar{{\mathbf{v}}}^{k\mathcal{B}}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}],\end{split}$ (62) where the fact that $\mathbb{E}[\bar{{\mathbf{v}}}^{t}|{\mathcal{F}}^{t}]=\nabla\bar{f}({\mathbf{x}}^{t})$ is used. Furthermore, note that $\displaystyle\mathbb{E}[\|\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})-\bar{{\mathbf{v}}}^{k\mathcal{B}}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]$ $\displaystyle=\frac{1}{n^{2}}\mathbb{E}[\|\sum_{i=1}^{n}({\mathbf{v}}_{i}^{k\mathcal{B}}-\nabla f_{i}({\mathbf{z}}_{i}^{k\mathcal{B}}))\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]$ (63) $\displaystyle=\frac{1}{n^{2}}\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla\mathbf{f}({\mathbf{x}}^{k\mathcal{B}})\|^{2}|{\mathcal{F}}^{k\mathcal{B}}],$ since $\left\\{{\mathbf{v}}_{i}^{t}\right\\}_{i=1}^{n}$ are independent given ${\mathcal{F}}^{t}$ and $\mathbb{E}[\sum_{i\neq j}\langle{\mathbf{v}}_{i}^{t}-\nabla f_{i}({\mathbf{z}}_{i}^{t}),{\mathbf{v}}_{j}^{t}-\nabla f_{j}({\mathbf{z}}_{j}^{t})\rangle|{\mathcal{F}}^{t}]=0$. 
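The cancellation of the cross terms in (63) is the familiar fact that the variance of an average of conditionally independent, zero-mean errors is $1/n^{2}$ times the sum of the individual variances. A quick Monte Carlo check of this identity, with synthetic Gaussian noise standing in for the estimator errors ${\mathbf{v}}_{i}-\nabla f_{i}({\mathbf{z}}_{i})$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, trials = 8, 5, 50_000

# Independent zero-mean noise vectors, one per node, with different
# per-node scales (illustrative stand-ins for v_i - grad f_i(z_i)).
scales = rng.uniform(0.5, 2.0, size=n)
noise = rng.normal(size=(trials, n, d)) * scales[None, :, None]

avg = noise.mean(axis=1)                            # (1/n) sum_i e_i
lhs = (avg ** 2).sum(axis=1).mean()                 # E || (1/n) sum_i e_i ||^2
rhs = (noise ** 2).sum(axis=(1, 2)).mean() / n**2   # (1/n^2) sum_i E ||e_i||^2

# Cross terms vanish in expectation, so the two sides agree.
assert abs(lhs - rhs) / rhs < 0.02
```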
Recall that, by the $\mu$-strong convexity and $L$-smoothness of the objective, if $0<\alpha\leq\frac{1}{L}$ then for all ${\mathbf{x}}$, $\|{\mathbf{x}}-\alpha\nabla f({\mathbf{x}})-{\mathbf{x}}^{*}\|\leq(1-\mu\alpha)\|{\mathbf{x}}-{\mathbf{x}}^{*}\|.$ (64) It follows that $\displaystyle\begin{split}\mathbb{E}[\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]&\leq(1-\mu\alpha)^{2}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}+\alpha^{2}\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})\|^{2}\\\ &\quad+2\alpha(1-\mu\alpha)\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})\|\\\ &\quad+\frac{\alpha^{2}}{n^{2}}\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|^{2}|{\mathcal{F}}^{k\mathcal{B}}].\end{split}$ (65) Using Young’s inequality, we readily obtain that $\displaystyle 2\alpha\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})\|\leq\mu\alpha\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}+\frac{\alpha}{\mu}\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})\|^{2}.$ (66) On the other hand, by the $L$-smoothness of each $f_{i}$, we have that $\forall k\geq 0$, $\displaystyle\|\nabla f(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\nabla\bar{f}({\mathbf{x}}^{k\mathcal{B}})\|$ $\displaystyle=\|\sum_{i=1}^{n}\frac{\nabla f_{i}(\bar{{\mathbf{z}}}^{k\mathcal{B}})-\nabla f_{i}({\mathbf{z}}_{i}^{k\mathcal{B}})}{n}\|$ (67) $\displaystyle\leq L\sum_{i=1}^{n}\frac{\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|}{n}$ (68) $\displaystyle\leq L\sqrt{\sum_{i=1}^{n}\frac{\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}}{n}}$ (69) 
$\displaystyle=\frac{L}{\sqrt{n}}\sqrt{\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}}.$ (70) Then by taking the total expectation, $\displaystyle\mathbb{E}[\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]$ $\displaystyle\leq\frac{L^{2}\alpha}{n\mu}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]+(1-\mu\alpha)\mathbb{E}[\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]+\frac{\alpha^{2}}{n^{2}}\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla\mathbf{f}({\mathbf{x}}^{k\mathcal{B}})\|^{2}].$ ∎ The following lemma helps establish an upper bound on the expected gradient tracking error. ###### Lemma $3.2$. Suppose the objective function $f$ is $\mu$-strongly-convex and that each component of the local objective function $f_{i,j}$ is $L$-smooth. If $0<\alpha<\frac{1}{4\sqrt{2}L}$, $\displaystyle\begin{split}\mathbb{E}[\|{\mathbf{g}}_{\sim n}^{(k+1)\mathcal{B}}-\mathbf{1}_{n}\bar{{\mathbf{g}}}^{(k+1)\mathcal{B}}\|^{2}]&\leq\frac{33L^{2}}{1-\sigma^{2}}\mathbb{E}[\|{\mathbf{z}}^{k\mathcal{B}}-\mathbf{1}_{2n}(\bar{{\mathbf{z}}}^{k\mathcal{B}})^{\prime}\|^{2}]+\frac{2L^{2}}{1-\sigma^{2}}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ &\quad+(\frac{1+\sigma^{2}}{2}+\frac{32\alpha^{2}L^{2}}{1-\sigma^{2}})\mathbb{E}[\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}-\mathbf{1}_{n}\bar{{\mathbf{g}}}^{k\mathcal{B}}\|^{2}]\\\ &\quad+\frac{5}{1-\sigma^{2}}\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|^{2}]\\\ &\quad+\frac{4}{1-\sigma^{2}}\mathbb{E}[\|{\mathbf{v}}^{(k+1)\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{(k+1)\mathcal{B}})\|^{2}],\end{split}$ (71) where ${\mathbf{g}}_{\sim n}^{t}=[{\mathbf{g}}_{1}^{t};\cdots;{\mathbf{g}}_{n}^{t}]\in{\mathbb{R}}^{n\times d}.$ ###### Proof. 
For all $i\leq n$ and $0<m\leq d$, it holds that $\displaystyle|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}$ $\displaystyle=|\sum_{j=1}^{n}[B_{m}((k+1)\mathcal{B}-1:k\mathcal{B})]_{ij}g_{jm}^{k\mathcal{B}}+v_{im}^{(k+1)\mathcal{B}}-v_{im}^{k\mathcal{B}}$ (72) $\displaystyle\quad-\frac{1}{n}\sum_{l=1}^{n}(\sum_{j=1}^{n}[B_{m}((k+1)\mathcal{B}-1:k\mathcal{B})]_{lj}g_{jm}^{k\mathcal{B}}+v_{lm}^{(k+1)\mathcal{B}}-v_{lm}^{k\mathcal{B}})|^{2}.$ Denoting ${\mathbf{g}}_{:m}^{(k+1)\mathcal{B}}=[g_{1m}^{(k+1)\mathcal{B}},\cdots,g_{nm}^{(k+1)\mathcal{B}}]^{T}$ and ${\mathbf{v}}_{:m}^{(k+1)\mathcal{B}}=[v_{1m}^{(k+1)\mathcal{B}},\cdots,v_{nm}^{(k+1)\mathcal{B}}]$, $\displaystyle\begin{split}\sum_{i=1}^{n}|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}&=\|{\mathbf{g}}_{:m}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}\mathbf{1}_{n}\|^{2}\\\ &=\sum_{i=1}^{n}|\sum_{j=1}^{n}[B_{m}((k+1)\mathcal{B}-1:k\mathcal{B})]_{ij}g_{jm}^{k\mathcal{B}}+v_{im}^{(k+1)\mathcal{B}}-v_{im}^{k\mathcal{B}}-\\\ &\quad\frac{1}{n}\sum_{l=1}^{n}(\sum_{j=1}^{n}[B_{m}((k+1)\mathcal{B}-1:k\mathcal{B})]_{lj}g_{jm}^{k\mathcal{B}}+v_{lm}^{(k+1)\mathcal{B}}-v_{lm}^{k\mathcal{B}})|^{2}\\\ &=\|B_{m}((k+1)\mathcal{B}-1:k\mathcal{B}){\mathbf{g}}_{:m}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}\mathbf{1}_{n}+({\mathbf{v}}_{:m}^{(k+1)\mathcal{B}}-\bar{v}_{m}^{(k+1)\mathcal{B}}\mathbf{1}_{n})-({\mathbf{v}}_{:m}^{k\mathcal{B}}-\bar{v}_{m}^{k\mathcal{B}}\mathbf{1}_{n})\|^{2}.\end{split}$ (73) Once again applying Young’s inequality yields $\displaystyle\sum_{i=1}^{n}|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}$ $\displaystyle\leq(1+\frac{1-\sigma^{2}}{2\sigma^{2}})\|B_{m}((k+1)\mathcal{B}-1:k\mathcal{B}){\mathbf{g}}_{:m}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}\mathbf{1}_{n}\|^{2}$ (74) 
$\displaystyle\quad+(1+\frac{2\sigma^{2}}{1-\sigma^{2}})\|({\mathbf{v}}_{:m}^{(k+1)\mathcal{B}}-\bar{v}_{m}^{(k+1)\mathcal{B}}\mathbf{1}_{n})-({\mathbf{v}}_{:m}^{k\mathcal{B}}-\bar{v}_{m}^{k\mathcal{B}}\mathbf{1}_{n})\|^{2}.$ Summing the above over all $m$, we obtain $\displaystyle\begin{split}\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}&\leq\sum_{m=1}^{d}(1+\frac{1-\sigma^{2}}{2\sigma^{2}})\|B_{m}((k+1)\mathcal{B}-1:k\mathcal{B}){\mathbf{g}}_{:m}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}\mathbf{1}_{n}\|^{2}\\\ &\quad+(1+\frac{2\sigma^{2}}{1-\sigma^{2}})\|({\mathbf{v}}_{:m}^{(k+1)\mathcal{B}}-\bar{v}_{m}^{(k+1)\mathcal{B}}\mathbf{1}_{n})-({\mathbf{v}}_{:m}^{k\mathcal{B}}-\bar{v}_{m}^{k\mathcal{B}}\mathbf{1}_{n})\|^{2}\\\ &\leq\sum_{m=1}^{d}\frac{1+\sigma^{2}}{2}\|{\mathbf{g}}_{:m}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}\mathbf{1}_{n}\|^{2}\\\ &\quad+\frac{2}{1-\sigma^{2}}\|({\mathbf{v}}_{:m}^{(k+1)\mathcal{B}}-\bar{v}_{m}^{(k+1)\mathcal{B}}\mathbf{1}_{n})-({\mathbf{v}}_{:m}^{k\mathcal{B}}-\bar{v}_{m}^{k\mathcal{B}}\mathbf{1}_{n})\|^{2}\\\ &\leq\frac{1+\sigma^{2}}{2}\sum_{m=1}^{d}\|{\mathbf{g}}_{:m}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}\mathbf{1}_{n}\|^{2}+\frac{2}{1-\sigma^{2}}\|{\mathbf{v}}^{(k+1)\mathcal{B}}-{\mathbf{v}}^{k\mathcal{B}}\|^{2}.\end{split}$ (75) Taking the total expectation yields $\displaystyle\mathbb{E}[\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}]$ $\displaystyle\leq\frac{1+\sigma^{2}}{2}\mathbb{E}[\sum_{m=1}^{d}\|{\mathbf{g}}_{:m}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}\mathbf{1}_{n}\|^{2}]+\frac{2}{1-\sigma^{2}}\mathbb{E}[\|{\mathbf{v}}^{(k+1)\mathcal{B}}-{\mathbf{v}}^{k\mathcal{B}}\|^{2}].$ (76) Next, we derive an upper bound on $\mathbb{E}[\|{\mathbf{v}}^{(k+1)\mathcal{B}}-{\mathbf{v}}^{k\mathcal{B}}\|^{2}]$ as $\displaystyle\begin{split}\mathbb{E}[\|{\mathbf{v}}^{(k+1)\mathcal{B}}-{\mathbf{v}}^{k\mathcal{B}}\|^{2}]&\leq 
2\mathbb{E}[\|{\mathbf{v}}^{(k+1)\mathcal{B}}-{\mathbf{v}}^{k\mathcal{B}}-(\nabla{\mathbf{f}}({\mathbf{x}}^{(k+1)\mathcal{B}})-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}}))\|^{2}]\\\ &\quad+2\mathbb{E}[\|\nabla{\mathbf{f}}({\mathbf{x}}^{(k+1)\mathcal{B}})-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|^{2}]\\\ &\leq 2\mathbb{E}[\|{\mathbf{v}}^{(k+1)\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{(k+1)\mathcal{B}})\|^{2}]\\\ &\quad+2\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|^{2}]+2L^{2}\mathbb{E}[\|{\mathbf{x}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{k\mathcal{B}}\|^{2}]\\\ &\leq 2\mathbb{E}[\|{\mathbf{v}}^{(k+1)\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{(k+1)\mathcal{B}})\|^{2}]\\\ &\quad+2\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|^{2}]+2L^{2}\mathbb{E}[\|{\mathbf{z}}^{(k+1)\mathcal{B}}-{\mathbf{z}}^{k\mathcal{B}}\|^{2}],\end{split}$ (77) where $\nabla{\mathbf{f}}({\mathbf{x}}^{(k+1)\mathcal{B}})=[\nabla f_{1}({\mathbf{x}}_{1}^{(k+1)\mathcal{B}});\cdots;\nabla f_{n}({\mathbf{x}}_{n}^{(k+1)\mathcal{B}})]$. To proceed, let us derive an upper bound on $\mathbb{E}[\|{\mathbf{z}}^{(k+1)\mathcal{B}}-{\mathbf{z}}^{k\mathcal{B}}\|^{2}]$. 
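Before proceeding, note that the norm-splitting steps used in (66) and (74) above, and again in (78) below, all instantiate the parameterized Young's inequality $\|a+b\|^{2}\leq(1+c)\|a\|^{2}+(1+1/c)\|b\|^{2}$ for any $c>0$. A brief numerical sanity check:

```python
import numpy as np

rng = np.random.default_rng(2)

# ||a + b||^2 <= (1 + c) ||a||^2 + (1 + 1/c) ||b||^2 for any c > 0,
# since 2 <a, b> <= c ||a||^2 + (1/c) ||b||^2.
for _ in range(1000):
    a, b = rng.normal(size=(2, 10))
    c = rng.uniform(0.01, 10.0)
    lhs = np.sum((a + b) ** 2)
    rhs = (1 + c) * np.sum(a ** 2) + (1 + 1 / c) * np.sum(b ** 2)
    assert lhs <= rhs + 1e-12
```

Choosing $c=(1-\sigma^{2})/(2\sigma^{2})$, as in (74), is what trades a tighter contraction factor on one term for a bounded inflation of the other.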
First, consider each column of ${\mathbf{z}}^{(k+1)\mathcal{B}}$ and ${\mathbf{z}}^{k\mathcal{B}}$ (i.e., ${\mathbf{z}}_{:m}^{(k+1)\mathcal{B}}$ and ${\mathbf{z}}_{:m}^{k\mathcal{B}}$) separately and observe that $\displaystyle\begin{split}\|{\mathbf{z}}_{:m}^{(k+1)\mathcal{B}}-{\mathbf{z}}_{:m}^{k\mathcal{B}}\|^{2}&=\|M_{m}((k+1)\mathcal{B}-1:k\mathcal{B}){\mathbf{z}}_{:m}^{k\mathcal{B}}-\alpha{\mathbf{g}}_{:m}^{k\mathcal{B}}-{\mathbf{z}}_{:m}^{k\mathcal{B}}\|^{2}\leq 8\|{\mathbf{z}}_{:m}^{k\mathcal{B}}-\bar{z}_{m}^{k\mathcal{B}}\mathbf{1}_{2n}\|^{2}+2\alpha^{2}\|{\mathbf{g}}_{:m}^{k\mathcal{B}}\|^{2}.\end{split}$ (78) Then $\|{\mathbf{z}}^{(k+1)\mathcal{B}}-{\mathbf{z}}^{k\mathcal{B}}\|^{2}\leq 8\sum_{m=1}^{d}\|{\mathbf{z}}_{:m}^{k\mathcal{B}}-\bar{z}_{m}^{k\mathcal{B}}\mathbf{1}_{2n}\|^{2}+2\alpha^{2}\|{\mathbf{g}}^{k\mathcal{B}}\|^{2}.$ (79) To derive an upper bound on $\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}\|$, let $\bar{{\mathbf{z}}}^{k\mathcal{B}}=[\bar{z}_{1}^{k\mathcal{B}}\mathbf{1}_{2n},\cdots,\bar{z}_{d}^{k\mathcal{B}}\mathbf{1}_{2n}]$ and note that $\displaystyle\begin{split}\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}\|&=\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}-\mathbf{1}_{n}(\bar{{\mathbf{g}}}^{k\mathcal{B}})^{\prime}+\mathbf{1}_{n}(\bar{{\mathbf{v}}}^{k\mathcal{B}})^{\prime}-\mathbf{1}_{n}(\nabla\bar{{\mathbf{f}}}({\mathbf{x}}^{k\mathcal{B}}))^{\prime}+\mathbf{1}_{n}(\nabla\bar{{\mathbf{f}}}({\mathbf{x}}^{k\mathcal{B}}))^{\prime}-\mathbf{1}_{n}(\nabla\bar{{\mathbf{f}}}({\mathbf{x}}^{*}))^{\prime}\|\\\ &\leq\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}-\mathbf{1}_{n}(\bar{{\mathbf{g}}}^{k\mathcal{B}})^{\prime}\|+\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|+L\|{\mathbf{x}}^{k\mathcal{B}}-\mathbf{1}_{n}({\mathbf{x}}^{*})^{\prime}\|\\\ &\leq\|{\mathbf{g}}_{\sim 
n}^{k\mathcal{B}}-\mathbf{1}_{n}(\bar{{\mathbf{g}}}^{k\mathcal{B}})^{\prime}\|+\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|+L\|{\mathbf{x}}^{k\mathcal{B}}-\mathbf{1}_{n}(\bar{{\mathbf{z}}}^{k\mathcal{B}})^{\prime}+\mathbf{1}_{n}(\bar{{\mathbf{z}}}^{k\mathcal{B}})^{\prime}-\mathbf{1}_{n}({\mathbf{x}}^{*})^{\prime}\|\\\ &\leq\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}-\mathbf{1}_{n}(\bar{{\mathbf{g}}}^{k\mathcal{B}})^{\prime}\|+\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|+L\|{\mathbf{z}}^{k\mathcal{B}}-\mathbf{1}_{2n}(\bar{{\mathbf{z}}}^{k\mathcal{B}})^{\prime}\|+\sqrt{2n}L\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|.\end{split}$ (80) Squaring both sides of the above inequality yields $\displaystyle\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}\|^{2}$ $\displaystyle\leq 4L^{2}\|{\mathbf{z}}^{k\mathcal{B}}-\mathbf{1}_{2n}(\bar{{\mathbf{z}}}^{k\mathcal{B}})^{\prime}\|^{2}+8nL^{2}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}+4\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}-\mathbf{1}_{n}(\bar{{\mathbf{g}}}^{k\mathcal{B}})^{\prime}\|^{2}+4\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|^{2}.$ (81) Imposing $0<\alpha<\frac{1}{4\sqrt{2}L}$, $\displaystyle\mathbb{E}[\|{\mathbf{z}}^{(k+1)\mathcal{B}}-{\mathbf{z}}^{k\mathcal{B}}\|^{2}]$ $\displaystyle\leq 8.25\mathbb{E}[\|{\mathbf{z}}^{k\mathcal{B}}-\mathbf{1}_{2n}(\bar{{\mathbf{z}}}^{k\mathcal{B}})^{\prime}\|^{2}]+0.5\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]$ (82) $\displaystyle\quad+8\alpha^{2}\mathbb{E}[\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}-\mathbf{1}_{n}(\bar{{\mathbf{g}}}^{k\mathcal{B}})^{\prime}\|^{2}]+8\alpha^{2}\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|^{2}].$ Then $\displaystyle\begin{split}\mathbb{E}[\|{\mathbf{v}}^{(k+1)\mathcal{B}}-{\mathbf{v}}^{k\mathcal{B}}\|^{2}]&\leq 
16.5L^{2}\mathbb{E}[\|{\mathbf{z}}^{k\mathcal{B}}-\mathbf{1}_{2n}(\bar{{\mathbf{z}}}^{k\mathcal{B}})^{\prime}\|^{2}]+L^{2}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ &\quad+16\alpha^{2}L^{2}\mathbb{E}[\|{\mathbf{g}}_{\sim n}^{k\mathcal{B}}-\mathbf{1}_{n}(\bar{{\mathbf{g}}}^{k\mathcal{B}})^{\prime}\|^{2}]+2.5\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{k\mathcal{B}})\|^{2}]\\\ &\quad+2\mathbb{E}[\|{\mathbf{v}}^{(k+1)\mathcal{B}}-\nabla{\mathbf{f}}({\mathbf{x}}^{(k+1)\mathcal{B}})\|^{2}].\end{split}$ (83) The proof is completed by combining (76) with (83). ∎ The gradient estimate error, $\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla f({\mathbf{x}}^{k\mathcal{B}})\|^{2}]$, appearing on the right-hand side of the inequalities in Lemmas 3.1 and 3.2, is analyzed in the following lemma. ###### Lemma $3.3$. Suppose the objective function $f$ is $\mu$-strongly-convex, and let $\tau$, $\bar{\tau}$ be defined as above. Then $\forall k\geq 0$, $\displaystyle\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla f({\mathbf{x}}^{k\mathcal{B}})\|^{2}]$ $\displaystyle\leq 4L^{2}\sum_{i=1}^{n}\mathbb{E}[\|{\mathbf{x}}_{i}^{k\mathcal{B}}-\bar{{\mathbf{z}}}^{k\mathcal{B}}\|^{2}]+4L^{2}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]$ (84) $\displaystyle\quad+4L^{2}\sum_{i=1}^{n}\mathbb{E}[\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]+4L^{2}\mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}].$ ###### Proof. 
For all $i\leq n$, it holds that $\displaystyle\begin{split}\mathbb{E}[\|{\mathbf{v}}_{i}^{k\mathcal{B}}-\nabla f_{i}({\mathbf{x}}_{i}^{k\mathcal{B}})\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]&=\mathbb{E}[\|\nabla f_{i,l_{i}^{k\mathcal{B}}}({\mathbf{x}}_{i}^{k\mathcal{B}})-\nabla f_{i,l_{i}^{k\mathcal{B}}}(\tau_{i}^{k\mathcal{B}})-(\nabla f_{i}({\mathbf{x}}_{i}^{k\mathcal{B}})-\nabla f_{i}(\tau_{i}^{k\mathcal{B}}))\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]\\\ &\leq\mathbb{E}[\|\nabla f_{i,l_{i}^{k\mathcal{B}}}({\mathbf{x}}_{i}^{k\mathcal{B}})-\nabla f_{i,l_{i}^{k\mathcal{B}}}(\tau_{i}^{k\mathcal{B}})\|^{2}|{\mathcal{F}}^{k\mathcal{B}}]\\\ &=\frac{1}{m_{i}}\sum_{j=1}^{m_{i}}\|\nabla f_{i,j}({\mathbf{x}}_{i}^{k\mathcal{B}})-\nabla f_{i,j}({\mathbf{x}}^{*})+(\nabla f_{i,j}({\mathbf{x}}^{*})-\nabla f_{i,j}(\tau_{i}^{k\mathcal{B}}))\|^{2}\\\ &\leq 2L^{2}\|{\mathbf{x}}_{i}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}+2L^{2}\|\tau_{i}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}\\\ &\leq 4L^{2}\|{\mathbf{x}}_{i}^{k\mathcal{B}}-\bar{{\mathbf{z}}}^{k\mathcal{B}}\|^{2}+4L^{2}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}+4L^{2}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}+4L^{2}\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}.\end{split}$ (85) The proof of the lemma is completed by summing over $i$ from $1$ to $n$ and taking the total expectation. ∎ Combining the results of Lemmas 2, 3.1 and 3.3, we obtain the following result. ###### Lemma $3.4$. Suppose the objective function $f$ is $\mu$-strongly-convex. 
If $0<\alpha\leq\frac{1}{8L}$, then for all $k\geq 0$ it holds $\displaystyle\begin{split}\mathbb{E}[\|{\mathbf{v}}_{i}^{(k+1)\mathcal{B}}-\nabla f_{i}({\mathbf{x}}_{i}^{(k+1)\mathcal{B}})\|^{2}]&\leq 16.75L^{2}\mathbb{E}[\|{\mathbf{x}}_{i}^{(k+1)\mathcal{B}}-\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}\|^{2}]\\\ &\quad+16L^{2}\alpha^{2}\mathbb{E}[\|{\mathbf{g}}_{i}^{k\mathcal{B}}-\mathbf{1}_{n}\bar{g}^{k\mathcal{B}}\|^{2}]+16.5L^{2}\mathbb{E}[\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ &\quad+4.5L^{2}\mathbb{E}[\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]+4.5L^{2}\mathbb{E}[\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}].\end{split}$ (86) ###### Proof. The proof is completed by combining Lemmas 1, 2 and 3.3. ∎ We can now present an argument proving Lemmas 3 and 4 in the main paper. In particular, combining Lemma 3.1 $\displaystyle\mathbb{E}[\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\leq\frac{L^{2}\alpha}{n\mu}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]+(1-\mu\alpha)\mathbb{E}[\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]+\frac{\alpha^{2}}{n^{2}}\mathbb{E}[\|{\mathbf{v}}^{k\mathcal{B}}-\nabla f({\mathbf{x}}^{k\mathcal{B}})\|^{2}]$ and the results in Lemma 3.2, we obtain $\displaystyle\begin{split}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]&\leq L^{2}\alpha(\frac{1}{\mu}+\frac{4\alpha}{n})\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]+(1-\mu\alpha+\frac{4L^{2}\alpha^{2}}{n})\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ &\quad+\frac{4L^{2}\alpha^{2}}{n}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]+\frac{4L^{2}\alpha^{2}}{n}\mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}].\end{split}$ (87) By letting $0<\alpha\leq\frac{\mu}{8L^{2}}$, we can further bound 
$\displaystyle\begin{split}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{(k+1)\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]&\leq\frac{2L^{2}\alpha}{\mu}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]+(1-\frac{\mu\alpha}{2})\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ &\quad+\frac{4L^{2}\alpha^{2}}{n}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]+\frac{4L^{2}\alpha^{2}}{n}\mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}],\end{split}$ which completes the proof of Lemma 3. Moreover, $\displaystyle\begin{split}\mathbb{E}[\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}]&\leq\frac{120L^{2}}{1-\sigma^{2}}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]\\\ &\quad+\frac{89L^{2}}{1-\sigma^{2}}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ &\quad+(\frac{1+\sigma^{2}}{2}+\frac{96L^{2}\alpha^{2}}{1-\sigma^{2}})\mathbb{E}[\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}|^{2}]\\\ &\quad+\frac{38L^{2}}{1-\sigma^{2}}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]\\\ &\quad+\frac{38L^{2}}{1-\sigma^{2}}\mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}].\end{split}$ (88) For $0<\alpha\leq\frac{1-\sigma^{2}}{14\sqrt{2}L}$, we have $\frac{1+\sigma^{2}}{2}+\frac{98L^{2}\alpha^{2}}{1-\sigma^{2}}\leq\frac{3+\sigma^{2}}{4}$; this helps complete the proof of Lemma 4, $\displaystyle\begin{split}\mathbb{E}[\sum_{m=1}^{d}\sum_{i=1}^{n}|g_{im}^{(k+1)\mathcal{B}}-\bar{g}_{m}^{(k+1)\mathcal{B}}|^{2}]&\leq\frac{120L^{2}}{1-\sigma^{2}}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]\\\ &\quad+\frac{89L^{2}}{1-\sigma^{2}}\mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ 
&\quad+\frac{3+\sigma^{2}}{4}\mathbb{E}[\sum_{m=1}^{d}\sum_{i=1}^{n}\|g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}\|^{2}]\\\ &\quad+\frac{38L^{2}}{1-\sigma^{2}}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]\\\ &\quad+\frac{38L^{2}}{1-\sigma^{2}}\mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}].\end{split}$ Using the inequalities shown in Lemmas 2, 3 and 4, we can construct a dynamic system and continue the proof of linear convergence of Algorithm 1. To this end, we first define ${\mathbf{u}}^{k\mathcal{B}}=\begin{bmatrix}\mathbb{E}[\sum_{i=1}^{n}\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{z}}_{i}^{k\mathcal{B}}\|^{2}]\\\ \mathbb{E}[n\|\bar{{\mathbf{z}}}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ \mathbb{E}[\frac{\sum_{m=1}^{d}\sum_{i=1}^{n}\|g_{im}^{k\mathcal{B}}-\bar{g}_{m}^{k\mathcal{B}}\|^{2}}{L^{2}}]\end{bmatrix}$ (89) $\tilde{{\mathbf{u}}}^{k\mathcal{B}}=\begin{bmatrix}\mathbb{E}[\sum_{i=1}^{n}\|\tau_{i}^{k\mathcal{B}}-\bar{\tau}^{k\mathcal{B}}\|^{2}]\\\ \mathbb{E}[n\|\bar{\tau}^{k\mathcal{B}}-{\mathbf{x}}^{*}\|^{2}]\\\ \mathbf{0}\end{bmatrix}$ (90) $J_{\alpha}=\begin{bmatrix}\frac{1+\sigma^{2}}{2}&0&\frac{2\alpha^{2}L^{2}}{1-\sigma^{2}}\\\ \frac{2L^{2}\alpha}{\mu}&1-\frac{\mu\alpha}{2}&0\\\ \frac{120}{1-\sigma^{2}}&\frac{89}{1-\sigma^{2}}&\frac{3+\sigma^{2}}{4}\end{bmatrix}$ (91) $H_{\alpha}=\begin{bmatrix}0&0&0\\\ \frac{4L^{2}\alpha^{2}}{n}&\frac{4L^{2}\alpha^{2}}{n}&0\\\ \frac{38}{1-\sigma^{2}}&\frac{38}{1-\sigma^{2}}&0\end{bmatrix}$ (92) and then formally state the dynamic system in the following proposition. ###### Proposition 2. Suppose Assumption 1 holds, the objective function $f$ is $\mu$-strongly-convex and each component of the local objective function $f_{i,j}$ is $L$-smooth. 
If $0<\alpha\leq\frac{\mu(1-\sigma^{2})}{14\sqrt{2}L^{2}}$, then for any $k\geq 0$ ${\mathbf{u}}^{(k+1)\mathcal{B}}\leq J_{\alpha}{\mathbf{u}}^{k\mathcal{B}}+H_{\alpha}\tilde{{\mathbf{u}}}^{k\mathcal{B}}.$ (93) It follows that for the inner loop, for all $k\in[sT,(s+1)T-1]$ ${\mathbf{u}}^{(k+1)\mathcal{B}}\leq J_{\alpha}{\mathbf{u}}^{k\mathcal{B}}+H_{\alpha}{\mathbf{u}}^{sT}.$ (94) For the outer loop, for all $s\geq 0$, it holds ${\mathbf{u}}^{(s+1)T}\leq(J_{\alpha}^{T}+\sum_{l=0}^{T-1}J_{\alpha}^{l}H_{\alpha}){\mathbf{u}}^{sT}.$ (95) To guarantee linear decay of the outer loop sequence and ultimately show linear convergence of Algorithm 1, we require that the spectral radius of $J_{\alpha}^{T}+\sum_{l=0}^{T-1}J_{\alpha}^{l}H_{\alpha}$ is small. In Lemma 5, we compute the range of the step size, $\alpha$, such that the weighted matrix norms of both $J_{\alpha}^{T}$ and $\sum_{l=0}^{T-1}J_{\alpha}^{l}H_{\alpha}$ are small. ###### Lemma $5$. Suppose Assumption 1 holds and assume that $0<\alpha\leq\frac{(1-\sigma^{2})^{2}}{187\tilde{Q}L}$, where $\tilde{Q}=\frac{L}{\mu}$. Then, $\rho(J_{\alpha})<\||J_{\alpha}|\|^{\mathbf{\delta}}_{\infty}<1-\frac{\mu\alpha}{4},$ (96) and $\||\sum_{l=0}^{T-1}J_{\alpha}^{l}H_{\alpha}|\|^{{\mathbf{q}}}_{\infty}\leq\||(I-J_{\alpha})^{-1}H_{\alpha}|\|^{{\mathbf{q}}}_{\infty}<0.66,$ (97) where $\mathbf{\delta}=\begin{bmatrix}1,8\tilde{Q}^{2},\frac{6656\tilde{Q}^{2}}{(1-\sigma^{2})^{2}}\end{bmatrix}$ and ${\mathbf{q}}=[1,1,\frac{1457}{(1-\sigma^{2})^{2}}]$. ###### Proof. Following Lemma 10 in [42], consider a nonnegative matrix $A\in{\mathbb{R}}^{d\times d}$ and a positive vector ${\mathbf{x}}\in{\mathbb{R}}^{d}$, and note that if $A{\mathbf{x}}\leq\beta{\mathbf{x}}$ for $\beta>0$ we have that $\rho(A)\leq\||A|\|^{{\mathbf{x}}}_{\infty}\leq\beta$. 
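The cited fact is easy to verify numerically: for an elementwise nonnegative $A$ and positive ${\mathbf{x}}$, the weighted infinity norm equals $\max_{i}(A{\mathbf{x}})_{i}/x_{i}$, which dominates the spectral radius. A short randomized check (the $3\times 3$ size matches $J_{\alpha}$; the matrices themselves are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

for _ in range(200):
    A = rng.uniform(0.0, 1.0, size=(3, 3))   # nonnegative matrix
    x = rng.uniform(0.5, 2.0, size=3)        # positive weight vector
    # Tightest beta with A x <= beta x, i.e. the weighted infinity norm of A.
    beta = np.max(A @ x / x)
    rho = np.max(np.abs(np.linalg.eigvals(A)))
    # Spectral radius is dominated by any induced matrix norm.
    assert rho <= beta + 1e-10
```

In the proof this is applied with $A=J_{\alpha}$, ${\mathbf{x}}=\mathbf{\delta}$ and $\beta=1-\frac{\mu\alpha}{4}$.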
Using this lemma we solve for a range of $\alpha$ and a positive vector $\mathbf{\delta}\in{\mathbb{R}}^{3}$ such that $J_{\alpha}\mathbf{\delta}\leq(1-\frac{\mu\alpha}{4})\mathbf{\delta},$ (98) which is equivalent to the element-wise inequalities $\displaystyle\frac{1+\sigma^{2}}{2}\delta_{1}+\frac{2\alpha^{2}L^{2}}{1-\sigma^{2}}\delta_{3}\leq(1-\frac{\mu\alpha}{4})\delta_{1}$ (99) $\displaystyle\frac{2L^{2}\alpha}{\mu}\delta_{1}+(1-\frac{\mu\alpha}{2})\delta_{2}\leq(1-\frac{\mu\alpha}{4})\delta_{2}$ $\displaystyle\frac{120}{1-\sigma^{2}}\delta_{1}+\frac{89}{1-\sigma^{2}}\delta_{2}+\frac{3+\sigma^{2}}{4}\delta_{3}\leq(1-\frac{\mu\alpha}{4})\delta_{3}.$ To solve for a meaningful $\mathbf{\delta}$, we set $\delta_{1}=1$ and $\delta_{2}=8\tilde{Q}^{2}$; then $\mathbf{\delta}=\begin{bmatrix}1,8\tilde{Q}^{2},\frac{6656\tilde{Q}^{2}}{(1-\sigma^{2})^{2}}\end{bmatrix}$ and $0<\alpha\leq\frac{(1-\sigma^{2})^{2}}{187\tilde{Q}L}$ are sufficient to satisfy the first inequality. Since $J_{\alpha}$ is non-negative, $\sum_{l=0}^{T-1}J_{\alpha}^{l}\leq\sum_{l=0}^{\infty}J_{\alpha}^{l}=(I_{3}-J_{\alpha})^{-1}$; this yields ${\mathbf{u}}^{(s+1)T}\leq(J_{\alpha}^{T}+(I_{3}-J_{\alpha})^{-1}H_{\alpha}){\mathbf{u}}^{sT},$ (100) where $I_{3}-J_{\alpha}=\begin{bmatrix}\frac{1-\sigma^{2}}{2}&0&-\frac{2\alpha^{2}L^{2}}{1-\sigma^{2}}\\\ -\frac{2L^{2}\alpha}{\mu}&\frac{\mu\alpha}{2}&0\\\ -\frac{120}{1-\sigma^{2}}&-\frac{89}{1-\sigma^{2}}&\frac{1-\sigma^{2}}{4}\end{bmatrix}$ (101) and its determinant is $\det(I_{3}-J_{\alpha})=\frac{(1-\sigma^{2})^{2}\mu\alpha}{16}-\frac{356L^{4}\alpha^{3}}{\mu(1-\sigma^{2})^{2}}-\frac{120\alpha^{3}\mu L^{2}}{(1-\sigma^{2})^{2}}.$ (102) When $0<\alpha\leq\frac{(1-\sigma^{2})^{2}}{187\tilde{Q}L}$, $\det(I_{3}-J_{\alpha})\geq\frac{(1-\sigma^{2})^{2}\mu\alpha}{32}$, and $\displaystyle[\mathrm{adj}(I_{3}-J_{\alpha})]_{1,2}=\frac{178L^{2}\alpha^{2}}{(1-\sigma^{2})^{2}},$ $\displaystyle[\mathrm{adj}(I_{3}-J_{\alpha})]_{1,3}=\frac{\mu 
L^{2}\alpha^{3}}{(1-\sigma^{2})},$ $\displaystyle[\mathrm{adj}(I_{3}-J_{\alpha})]_{2,2}\leq\frac{(1-\sigma^{2})^{2}}{8},$ $\displaystyle[\mathrm{adj}(I_{3}-J_{\alpha})]_{2,3}=\frac{4L^{4}\alpha^{3}}{\mu(1-\sigma^{2})},$ $\displaystyle[\mathrm{adj}(I_{3}-J_{\alpha})]_{3,2}=44.5,$ $\displaystyle[\mathrm{adj}(I_{3}-J_{\alpha})]_{3,3}=\frac{\mu\alpha(1-\sigma^{2})}{4}.$ Next, we derive a matrix upper-bounding (element-wise) $(I_{3}-J_{\alpha})^{-1}H_{\alpha}=\frac{\mathrm{adj}(I_{3}-J_{\alpha})}{\det(I_{3}-J_{\alpha})}H_{\alpha}$. For $0<\alpha\leq\frac{(1-\sigma^{2})^{2}}{187\tilde{Q}L}$, $(I_{3}-J_{\alpha})^{-1}H_{\alpha}\leq\begin{bmatrix}0.039&0.039&0\\\ 0.23&0.23&0\\\ \frac{335}{(1-\sigma^{2})^{2}}&\frac{335}{(1-\sigma^{2})^{2}}&0\end{bmatrix}.$ (103) If ${\mathbf{q}}=[1,1,\frac{1457}{(1-\sigma^{2})^{2}}]$, we have $((I_{3}-J_{\alpha})^{-1}H_{\alpha}){\mathbf{q}}\leq 0.66{\mathbf{q}}.$ (104) Finally, invoking the definition of the weighted matrix norm, we complete the proof of the second inequality in the lemma. ∎ This completes the presentation of auxiliary results that support the proof of Theorem 1 in the main paper. ## Appendix B Experimental Results for the Decentralized Average Consensus Problem (a) Consensus residual: $\mathcal{B}=1$ (b) Consensus residual: $\mathcal{B}=10$ Figure 4: Average consensus on a jointly connected network with $\mathcal{B}=1,10,\epsilon=0.05$. In each of the subplots, we show the performance of Di-CS-AC, i.e., Algorithm 1, for $6$ different sparsification levels and compare it to $2$ benchmark quantization algorithms, Q-Push-sum and Q-Push-Gossip. The quantization level is chosen such that the number of communicated bits for the benchmark algorithms is equal to that of Di-CS-AC when $q=0.078$. In this section, we refer to the proposed Algorithm 1 in the main paper as Di-CS-AC (Directed Communication-Sparsifying Average Consensus) and present its performance. 
We consider an average consensus problem where the dimension of a local parameter vector at each node is $d=64$. The initial state $\mathbf{x}_{i}^{0}$ is randomly generated from the normal distribution; the goal of the network is to reach the average consensus vector, i.e., compute $\bar{\mathbf{x}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}^{0}$. The network setups are exactly the same as in the decentralized optimization experiments in Section V. For benchmarking purposes, we consider two quantized versions of the push-sum algorithm: (i) Q-Push-sum, obtained by applying simple quantization to the push-sum scheme [21, 22], and (ii) Q-Push-Gossip, a quantized push-sum for gossip algorithm recently proposed in [38]. The former was originally developed for unconstrained communication settings, while the latter originally targeted static networks; in the absence of prior work on communication-constrained consensus over time-varying directed networks, we adopt these two as the benchmarking schemes. We compare the performance of different algorithms by computing the residual value $\frac{\|\mathbf{x}^{t}-\bar{\mathbf{x}}\|}{\|\mathbf{x}^{0}-\bar{\mathbf{x}}\|}$; the results are shown in Fig. 4. As the figures demonstrate, at the considered levels of sparsification $q$ and values of the connectivity parameter $\mathcal{B}$, Di-CS-AC converges to the same limit as the full-communication schemes. The convergence rate is linear in the number of iterations $t$, but a smaller compression level and a larger connectivity period slow the convergence down. In Fig. 4 (a) and (b), the two benchmark quantization algorithms cannot reach the desired consensus accuracy in the time-varying directed network, while the proposed Di-CS-AC achieves a considerably smaller consensus error. 
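As a point of reference, the linear decay of the consensus residual is easy to reproduce in a stripped-down setting: a static, undirected ring with doubly stochastic mixing and no sparsification (the paper's time-varying, directed, compressed setting is of course more involved). The network size, topology and iteration count below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 10, 64

x0 = rng.normal(size=(n, d))
x_bar = x0.mean(axis=0)          # target: the average consensus vector

# Lazy Metropolis weights on a ring: doubly stochastic and symmetric.
W = np.eye(n) * 0.5
for i in range(n):
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = x0.copy()
residuals = []
for t in range(300):
    x = W @ x                    # one round of mixing
    residuals.append(np.linalg.norm(x - x_bar) / np.linalg.norm(x0 - x_bar))

# The residual decays geometrically (linearly on a log scale) and
# monotonically, since mixing contracts the disagreement component.
assert residuals[-1] < 1e-6
assert all(r2 <= r1 + 1e-12 for r1, r2 in zip(residuals, residuals[1:]))
```

The contraction factor per step is the second-largest eigenvalue modulus of `W`; sparsification and time-varying connectivity degrade this factor, which is the slowdown visible in Fig. 4.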
Regarding communication cost: to reach the same residual threshold (i.e., $10^{-12}$), our consensus algorithm, Di-CS-AC, with $q=0.078$, incurs only $7.8\%$ of the per-iteration cost of the non-compression consensus algorithm (with $q=1$) and needs fewer than double the number of iterations (see Fig. 4 (a) and (b)). This demonstrates the communication efficiency of Di-CS-AC. ## Appendix C Experimental results for a larger network In this section, we consider a network with $100$ nodes and repeat the logistic regression experiment. We still use the stackoverflow dataset introduced in Section V of the main paper. The network is constructed following the same steps introduced in Section V of the main paper with parameter $\mathcal{B}=1$. In Fig. 5, we show the correct rate for three different sparsification levels, i.e., $q=0.25,0.5$ and $1$. Figure 5: Di-CS-SVRG performance on a time-varying directed network with $100$ nodes. As the plot shows, schemes with different sparsification levels have different convergence rates – more aggressive sparsification leads to slower convergence.
# The $*$-product of domains in several complex variables Sylwester Zając Institute of Mathematics, Faculty of Mathematics and Computer Science, Jagiellonian University, Łojasiewicza 6, 30-348 Kraków, Poland <EMAIL_ADDRESS> ###### Abstract. In this article we continue the research, carried out in [28], on computing the $*$-product of domains in $\mathbb{C}^{N}$. Assuming that $0\in G\subset\mathbb{C}^{N}$ is an arbitrary Runge domain and $0\in D\subset\mathbb{C}^{N}$ is a bounded, smooth and linearly convex domain (or a non-decreasing union of such ones), we establish a geometric relation between $D*G$ and another domain in $\mathbb{C}^{N}$ which is ‘extremal’ (in an appropriate sense) with respect to a special coefficient multiplier dependent only on the dimension $N$. Next, for $N=2$, we derive a characterization of the latter domain expressed in terms of planar geometry. These two results, when combined together, give a formula which allows one to calculate $D*G$ for two-dimensional domains $D$ and $G$ satisfying the outlined assumptions. ###### Key words and phrases: Hadamard product, analytic continuation, spaces of holomorphic functions ###### 2010 Mathematics Subject Classification: Primary: 32A05, 32D15 The research was supported by NCN grant SONATA BIS no. 2017/26/E/ST1/00723 of the National Science Centre, Poland. ## 1\. Introduction Let $\mathcal{O}_{0}$ be the set of all germs of holomorphic functions at the origin of $\mathbb{C}^{N}$ and let $\mathcal{O}_{0,D}$, for a domain $0\in D\subset\mathbb{C}^{N}$, be the subset of $\mathcal{O}_{0}$ consisting of all germs of elements of $\mathcal{O}(D)$. The latter symbol denotes, as usual, the Fréchet space of all holomorphic functions on $D$ equipped with the compact-open topology. 
The Hadamard product, also called the $*$-product, can be regarded as a bilinear mapping from $\mathcal{O}_{0}\times\mathcal{O}_{0}$ to $\mathcal{O}_{0}$ given by the formula $\left(\sum_{\alpha\in\mathbb{N}^{N}}f_{\alpha}z^{\alpha}\right)*\left(\sum_{\alpha\in\mathbb{N}^{N}}g_{\alpha}z^{\alpha}\right):=\sum_{\alpha\in\mathbb{N}^{N}}f_{\alpha}g_{\alpha}z^{\alpha}.$ It has been extensively studied in various aspects: as a bilinear form, as a linear operator with one factor fixed (see the survey [22] and, for instance, the papers [2], [3], [4], [5], [14], [17], [18], [19], [20], [21], [23], [24], [25], [28]), and also as a map acting on spaces of real analytic functions (see [7], [8], [9], [10], [11], [12]). A problem of a similar nature to the one considered here, namely that of extending $*$-products to as large a domain as possible, was investigated, for instance, in [1], [13] and [16] for weighted Hadamard products with certain weights, where results were obtained for starlike domains and for so-called $p$-convex domains (we refer the reader to [13]). In [28] it was shown (see Proposition 4.1 there) that if at least one of the domains $0\in D,G\subset\mathbb{C}^{N}$ is a Runge domain, then there exists the largest domain $0\in\Omega\subset\mathbb{C}^{N}$ having the property that the image of $\mathcal{O}_{0,D}\times\mathcal{O}_{0,G}$ under the $*$-product lies in $\mathcal{O}_{0,\Omega}$ (here, as well as in [28], we follow [15, Definition 2.7.1] and consider Runge domains to be pseudoconvex). This largest $\Omega$, denoted by $D*G$, was the subject of the research carried out in [28], which culminated in a description of $D*G$ for domains of a special class. The methodology employed in [28], as well as the results achieved there, were of a symmetric nature: both $D$ and $G$ were assumed to belong to the same family of domains, and the fundamental integral formula for $f*g$ relied on geometric properties of $D$ to the same extent as on those of $G$. 
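As a concrete illustration of the coefficient-wise definition, the sketch below computes the $*$-product of two truncated power series in $\mathbb{C}^{2}$, representing a series by a dictionary from multi-indices to coefficients (the truncation degree is our choice and is not part of the definition):

```python
# Hadamard (coefficient-wise) product of truncated power series in C^2.
def hadamard(f, g):
    """f, g: dicts mapping multi-indices (a1, a2) to coefficients."""
    return {a: f[a] * g[a] for a in f.keys() & g.keys()}

DEG = 5
# f(z) = 1/((1 - z1)(1 - z2)) has every Taylor coefficient equal to 1,
# so it acts as the identity for the *-product.
f = {(a1, a2): 1 for a1 in range(DEG) for a2 in range(DEG)}
g = {(a1, a2): (a1 + 1) * (a2 + 2) for a1 in range(DEG) for a2 in range(DEG)}

assert hadamard(f, g) == g   # f is the *-identity
assert hadamard(g, g) != g   # while g * g squares each coefficient
```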
The approach presented in this paper is substantially different. Starting from a non-symmetric integral expressing $f*g$, we investigate $D*G$ with only $D$ being of the same particular class as in [28] and $G$ being an arbitrary Runge domain. These considerations culminate in Theorem 3.4, which establishes a relation, expressed in geometric terms, between $D*G$ and $h_{\textnormal{{1}}_{N}}*G$. Here $h_{\textnormal{{1}}_{N}}(z)=(1-z_{1}-\ldots-z_{N})^{-N}$ and the set $h_{\textnormal{{1}}_{N}}*G$ is the largest domain containing the origin on which every product $h_{\textnormal{{1}}_{N}}*g$, for $g\in\mathcal{O}_{0,G}$, can be analytically continued. When $G$ is a Runge domain, the existence of $h_{\textnormal{{1}}_{N}}*G$ is guaranteed by the aforementioned [28, Proposition 4.1]. To calculate $D*G$ we must, however, face the problem of computing $h_{\textnormal{{1}}_{N}}*G$. We deal with this topic in Section 4, where, in Theorem 4.1, we derive a clean geometric formula for $h_{\textnormal{{1}}_{N}}*G$, but, unfortunately, only for $N=2$. It is worth emphasizing that although obtaining a candidate for this set was quite straightforward and relied mainly on a certain integral formula, the main difficulties lay in demonstrating that this candidate is the largest domain to which all the products $h_{\textnormal{{1}}_{2}}*g$ extend. Combining this result with Theorem 3.4 announced above leads to a complete description of $D*G$ for two-dimensional domains satisfying the listed assumptions. Nevertheless, the question for higher dimensions remains open. ## 2\. Preliminaries We begin by introducing the basic concepts and notation used throughout this study. We use the standard symbols $\mathbb{D}$, $\mathbb{T}$, $\mathbb{C}_{*}$ and $\widehat{\mathbb{C}}$ to denote, respectively, the unit disc in $\mathbb{C}$, its boundary, the punctured plane $\mathbb{C}\setminus\\{0\\}$ and the Riemann sphere $\mathbb{C}\cup\\{\infty\\}$. 
We assume that the set $\mathbb{N}$ of all natural numbers contains $0$. By $\mathbb{P}_{N}(z,r)$ and $\overline{\mathbb{P}}_{N}(z,r)$ we mean the open and closed polydiscs centered at $z$ and with radius $r$. To shorten notation, we will use the word ’loop’ to denote a continuous map defined on $\mathbb{T}$ and the word ’smooth’ to mean being of class $\mathcal{C}^{\infty}$. For points $z=(z_{1},\ldots,z_{N}),w=(w_{1},\ldots,w_{N})\in\mathbb{C}^{N}$, by $z\bullet w$ we denote the product $z_{1}w_{1}+\ldots+z_{N}w_{N}$ and by $z\cdot w$ or $zw$ we denote their coordinate-wise product, that is, the point $(z_{1}w_{1},\ldots,z_{N}w_{N})$. The identity element of the latter multiplication, $(1,1,\ldots,1)$ ($N$ times), is denoted by $\textnormal{{1}}_{N}$. Given two sets $A,B\subset\mathbb{C}^{N}$, by $AB$ or $A\cdot B$ we mean their algebraic product, i.e. the set $\\{ab:a\in A,b\in B\\}$. We use the classical notation of exponentiation, where for $\alpha=(\alpha_{1},\ldots,\alpha_{N})\in\mathbb{Z}^{N}$ the symbol $z^{\alpha}$ denotes $z_{1}^{\alpha_{1}}\ldots z_{N}^{\alpha_{N}}$, with the convention that $z_{j}^{0}=1$. Moreover, $\alpha!$ and $|\alpha|$ will, as usual, denote the product $\alpha_{1}!\ldots\alpha_{N}!$ and the sum $\alpha_{1}+\ldots+\alpha_{N}$. For compact sets $K\subset\mathbb{C}^{N}$ and $L\subset\Omega$ we use the standard notation $\widehat{K}$ and $\widehat{L}_{\Omega}$ for the polynomial hull of $K$ and the holomorphic hull of $L$ with respect to a domain $\Omega$. The supremum of the modulus of a complex-valued function $f$ over a set $A\subset\mathbb{C}^{N}$ is denoted by $\|f\|_{A}$. Finally, if $0\in A$, then by $\textnormal{cc}_{0}\,A$ we understand the connected component of $A$ containing $0$. ### 2.1. 
Integral formula for the $*$-product Take two power series $f(z)=\sum_{\alpha\in\mathbb{N}^{N}}f_{\alpha}z^{\alpha},\quad g(z)=\sum_{\alpha\in\mathbb{N}^{N}}g_{\alpha}z^{\alpha}$ convergent in neighbourhoods of the polydiscs $\overline{\mathbb{P}}_{N}(0,r)$ and $\overline{\mathbb{P}}_{N}(0,\rho)$, respectively. A straightforward calculation allows us to derive the equality (1) $(f*g)(z)=\left(\frac{1}{2\pi i}\right)^{N}\int_{\rho^{-1}\mathbb{T}^{N}}f(z\zeta)\,g\left(\zeta_{1}^{-1},\ldots,\zeta_{N}^{-1}\right)\frac{d\zeta_{1}}{\zeta_{1}}\ldots\frac{d\zeta_{N}}{\zeta_{N}}$ for $z=(z_{1},\ldots,z_{N})\in\overline{\mathbb{P}}_{N}(0,r\rho)$. Its one-dimensional version was extensively used in research on the Hadamard product in one complex variable. Although in this paper we mostly rely on different tools, the above fact will prove useful at one point in Section 4. ### 2.2. Integral formula for the $*$-product with special weights Take a bounded smooth domain $0\in D\subset\mathbb{C}^{N}$, a neighbourhood $V$ of $\partial D$ and a smooth map $\varphi:V\to\mathbb{C}^{N}$ such that $\zeta\bullet\varphi(\zeta)=1$ for all $\zeta\in V$. If a function $f$ is holomorphic in a neighbourhood of $\overline{D}$ and $\sum_{\alpha\in\mathbb{N}^{N}}f_{\alpha}z^{\alpha}$ is its Taylor series expansion at the origin, then from the considerations made in [28, Section 2.1] it follows that $f_{\alpha}=c_{N}\frac{(N+|\alpha|-1)!}{\alpha!}\int_{\partial D}f(\zeta)\varphi(\zeta)^{\alpha}\omega_{\varphi}(\zeta),$ where $c_{N}$ is a constant dependent only on $N$ and $\omega_{\varphi}$ is a certain smooth $(N,N-1)$-form on $V$ dependent only on $N$ and $\varphi$. 
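The one-dimensional case of (1) is easy to verify numerically for polynomials: with $\zeta=\rho^{-1}e^{i\theta}$ the factor $\frac{1}{2\pi i}\frac{d\zeta}{\zeta}$ turns the contour integral into a plain average over $\theta$, and an equally spaced Riemann sum recovers $\sum_{k}f_{k}g_{k}z^{k}$ to machine precision (the sample polynomials, the point $z$ and the radius $\rho$ below are our choices):

```python
import cmath

def poly_eval(coeffs, w):
    """Evaluate sum_k coeffs[k] * w**k."""
    return sum(c * w**k for k, c in enumerate(coeffs))

def hadamard_via_integral(f, g, z, rho=0.5, samples=4096):
    """(1/(2*pi*i)) * integral over |zeta| = 1/rho of f(z*zeta) g(1/zeta) dzeta/zeta,
    approximated by averaging over equally spaced points of the circle."""
    total = 0
    for k in range(samples):
        zeta = cmath.exp(2j * cmath.pi * k / samples) / rho
        total += poly_eval(f, z * zeta) * poly_eval(g, 1 / zeta)
    return total / samples

f = [1.0, 2.0, 3.0]   # f(w) = 1 + 2w + 3w^2
g = [4.0, 5.0, 6.0]   # g(w) = 4 + 5w + 6w^2
z = 0.3
exact = sum(fk * gk * z**k for k, (fk, gk) in enumerate(zip(f, g)))  # (f*g)(z)
assert abs(hadamard_via_integral(f, g, z) - exact) < 1e-9
```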
Now, if $g\in\mathcal{O}_{0}$ has the Taylor series expansion $\sum_{\alpha\in\mathbb{N}^{N}}g_{\alpha}z^{\alpha}$, then for $z$ close to $0$ one has that (2) $\sum_{\alpha\in\mathbb{N}^{N}}\frac{\alpha!(N-1)!}{(|\alpha|+N-1)!}f_{\alpha}g_{\alpha}z^{\alpha}=c_{N}(N-1)!\int_{\partial D}f(\zeta)g(\varphi(\zeta)z)\omega_{\varphi}(\zeta).$ We will make use of this equality in Section 3. ## 3\. Description of $D*G$ for $D$ being of special class and $G$ being an arbitrary Runge domain Our goal in this part of the paper is to prove Theorem 3.4. For the reader’s convenience, we begin by recalling the definition and elementary facts regarding the $*$-product of domains. ###### Assumption. Throughout this section we assume that $N\geq 2$ and $D$, $G$ are domains in $\mathbb{C}^{N}$ containing the origin. ###### Definition. Assume that at least one of $D$, $G$ is a Runge domain. Proposition 4.1 from [28] establishes the existence of the largest domain $0\in\Omega\subset\mathbb{C}^{N}$ having the property that the image of $\mathcal{O}_{0,D}\times\mathcal{O}_{0,G}$ under $*$ lies in $\mathcal{O}_{0,\Omega}$. We define $D*G$ as this largest $\Omega$. The referenced fact guarantees that $D*G$ itself is a Runge domain. For every pair of functions $f\in\mathcal{O}(D)$ and $g\in\mathcal{O}(G)$ there exists a unique element of $\mathcal{O}(D*G)$ equal to $f*g$ near the origin. Following [28], we denote it by $f*_{D,G}g$. We then obtain the bilinear mapping $*_{D,G}:\mathcal{O}(D)\times\mathcal{O}(G)\ni(f,g)\mapsto f*_{D,G}g\in\mathcal{O}(D*G).$ From the closed graph theorem it follows that $*_{D,G}$ is separately continuous. This, together with [26, page 88, Corollary 1], guarantees that it is jointly continuous. In a similar fashion we introduce the $*$-product of a germ from $\mathcal{O}_{0}$ and a domain containing the origin. ###### Definition. Assume that $G$ is a Runge domain. 
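Note that the weight $\frac{\alpha!(N-1)!}{(|\alpha|+N-1)!}$ in (2) is exactly the reciprocal of the coefficient $\frac{(|\alpha|+N-1)!}{\alpha!(N-1)!}$ appearing in the expansion (4) of $h_{\xi}$; this reciprocity is what allows the weighted product in (2) to be converted back into $f*g$ later on. A quick exact-arithmetic check for $N=2$:

```python
from fractions import Fraction
from math import factorial

N = 2

def weight(a1, a2):
    """The multiplier alpha!(N-1)!/(|alpha|+N-1)! from formula (2)."""
    return Fraction(factorial(a1) * factorial(a2) * factorial(N - 1),
                    factorial(a1 + a2 + N - 1))

def h_coeff(a1, a2):
    """The Taylor coefficient (|alpha|+N-1)!/(alpha!(N-1)!), cf. (4) with xi = (1,1)."""
    return Fraction(factorial(a1 + a2 + N - 1),
                    factorial(a1) * factorial(a2) * factorial(N - 1))

for a1 in range(8):
    for a2 in range(8):
        assert weight(a1, a2) * h_coeff(a1, a2) == 1
```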
If $\tau\in\mathcal{O}_{0}$, then, by virtue of [28, Proposition 4.1], there exists the largest domain $0\in\Omega\subset\mathbb{C}^{N}$ such that the image of $\mathcal{O}_{0,G}$ under the map $g\mapsto\tau*g$ lies in $\mathcal{O}_{0,\Omega}$. We denote this largest $\Omega$ by $\tau*G$. As before, $\tau*G$ is a Runge domain. For each $g\in\mathcal{O}(G)$ there exists a unique function in $\mathcal{O}(\tau*G)$ equal to $\tau*g$ near the origin. Denote this function by $\tau*_{G}g$. The closed graph theorem yields that the linear operator $\mathcal{O}(G)\ni g\mapsto\tau*_{G}g\in\mathcal{O}(\tau*G)$ is continuous. ###### Definition. As in [28], we introduce the compact set $D^{*}:=\left\\{\xi\in\mathbb{C}^{N}:\xi\bullet z\neq 1\text{ for all }z\in D\right\\}.$ If $\xi\in D^{*}$, then the function (3) $h_{\xi}(z):=\left(1-z\bullet\xi\right)^{-N}$ belongs to $\mathcal{O}(D)$ and (4) $h_{\xi}(z)=\sum_{\alpha\in\mathbb{N}^{N}}\frac{(|\alpha|+N-1)!}{\alpha!(N-1)!}\xi^{\alpha}z^{\alpha}$ is its Taylor series expansion at the origin. ###### Lemma 3.1. Assume that $G$ is a Runge domain and $U\subset\mathbb{C}^{N}$ is an open set. If a domain $\Omega\subset\mathbb{C}^{N}$ contains the origin and $h_{\xi}*g\in\mathcal{O}_{0,\Omega}$ for all $\xi\in U$ and $g\in\mathcal{O}_{0,G}$, then $U\cdot\Omega\subset h_{\textnormal{{1}}_{N}}*G.$ ###### Proof. We need to show that $\xi\Omega\subset h_{\textnormal{{1}}_{N}}*G$ for each $\xi\in U$. If $\xi\in U\cap(\mathbb{C}_{*})^{N}$, then, in view of (4), for $g\in\mathcal{O}_{0,G}$ and $z$ close to $0$ we have $(h_{\xi}*g)(z)=(h_{\textnormal{{1}}_{N}}*g)(\xi z),$ so the germ of the function on the right-hand side belongs to $\mathcal{O}_{0,\Omega}$. Hence, $h_{\textnormal{{1}}_{N}}*g\in\mathcal{O}_{0,\xi\Omega}$. This holds for every $g\in\mathcal{O}_{0,G}$, so the definition of $h_{\textnormal{{1}}_{N}}*G$ yields that it indeed contains $\xi\Omega$. 
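The expansion (4) can be checked numerically by extracting Taylor coefficients of $h_{\xi}$ with a discrete Cauchy integral on a torus (the point $\xi$, the radius and the grid size below are our choices):

```python
import cmath
from math import factorial

N = 2
XI = (0.2, 0.3)   # sample point; since |xi_1| + |xi_2| < 1, it lies in D^* of the unit polydisc

def h(w1, w2):
    """h_xi(w) = (1 - w . xi)^(-N)."""
    return (1 - w1 * XI[0] - w2 * XI[1]) ** (-N)

def taylor_coeff(a1, a2, r=0.5, n=64):
    """The (a1, a2) Taylor coefficient of h at 0, via a discrete Cauchy
    integral over the torus {|w1| = |w2| = r}."""
    s = 0
    for p in range(n):
        for q in range(n):
            w1 = r * cmath.exp(2j * cmath.pi * p / n)
            w2 = r * cmath.exp(2j * cmath.pi * q / n)
            s += h(w1, w2) * cmath.exp(-2j * cmath.pi * (a1 * p + a2 * q) / n)
    return s / (n * n * r ** (a1 + a2))

def claimed(a1, a2):
    """The coefficient (|alpha|+N-1)!/(alpha!(N-1)!) * xi^alpha from (4)."""
    return (factorial(a1 + a2 + N - 1)
            / (factorial(a1) * factorial(a2) * factorial(N - 1))
            * XI[0] ** a1 * XI[1] ** a2)

for a1 in range(3):
    for a2 in range(3):
        assert abs(taylor_coeff(a1, a2) - claimed(a1, a2)) < 1e-6
```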
On the other hand, if $\xi\in U\setminus(\mathbb{C}_{*})^{N}$, then we can take a number $r>0$ so that $\xi+r\mathbb{T}^{N}\subset U\cap(\mathbb{C}_{*})^{N}$. From the previous considerations it follows that $(\xi+r\mathbb{T}^{N})\cdot\Omega\subset h_{\textnormal{{1}}_{N}}*G.$ Since the set on the right-hand side is a Runge domain, for each $z\in\Omega$ it contains the polynomial hull of $(\xi+r\mathbb{T}^{N})\cdot z$ and, in particular, the point $\xi\cdot z$ itself. ∎ ###### Lemma 3.2. If $D$ is a pseudoconvex domain and $G$ is a Runge domain, then $D^{*}\cdot(D*G)\subset h_{\textnormal{{1}}_{N}}*G.$ Consequently, $D*G\subset\textnormal{cc}_{0}\,\\{z\in\mathbb{C}^{N}:zD^{*}\subset h_{\textnormal{{1}}_{N}}*G\\}.$ It is worth noting that, as $D^{*}$ is compact, the set under $\textnormal{cc}_{0}\,$ above is open. ###### Proof. Fix an arbitrary domain $0\in\Omega\subset\subset D*G$. From the continuity of $*_{D,G}$ it follows that we can find a constant $C>0$ and compact sets $K\subset D$ and $L\subset G$ such that $K$ is holomorphically convex in $D$, $0\in\textnormal{int}\,K$ and (5) $\|f*_{D,G}g\|_{\Omega}\leq C\|f\|_{K}\|g\|_{L}$ for all $f\in\mathcal{O}(D)$ and $g\in\mathcal{O}(G)$. Take a neighbourhood $U$ of $D^{*}$ such that $\eta\bullet z\neq 1$ for all $z\in K$ and $\eta\in U$. If $\eta\in U$, then $h_{\eta}$ is holomorphic in a neighbourhood of $K$, so, by the Oka-Weil theorem, it can be approximated uniformly on $K$ by a sequence $(f_{n})_{n\in\mathbb{N}}\subset\mathcal{O}(D)$. Hence, if $g\in\mathcal{O}(G)$, then (5) gives that the functions $f_{n}*_{D,G}g$ form a Cauchy sequence with respect to the supremum norm on $\Omega$. This means that they converge in $\mathcal{O}(\Omega)$ and, thanks to the fact that $0\in\textnormal{int}\,K$, the limit has to equal $h_{\eta}*g$ near the origin. Consequently, $h_{\eta}*g\in\mathcal{O}_{0,\Omega}$ for every $g\in\mathcal{O}_{0,G}$ and $\eta\in U$. 
Now, Lemma 3.1 allows us to conclude that $D^{*}\cdot\Omega\subset U\cdot\Omega\subset h_{\textnormal{{1}}_{N}}*G$, which completes the proof. ∎ ###### Lemma 3.3. Let $0\in W$ and $0\in D_{0}\subset D_{1}\subset D_{2}\subset\ldots$ be domains in $\mathbb{C}^{N}$ such that $\bigcup_{n\in\mathbb{N}}D_{n}=D$. Set $\Omega_{n}=\textnormal{cc}_{0}\,\\{z\in\mathbb{C}^{N}:zD_{n}^{*}\subset W\\},\quad\Omega=\textnormal{cc}_{0}\,\\{z\in\mathbb{C}^{N}:zD^{*}\subset W\\}.$ Then $\Omega_{0}\subset\Omega_{1}\subset\Omega_{2}\subset\ldots$ and $\bigcup_{n\in\mathbb{N}}\Omega_{n}=\Omega$. ###### Proof. Clearly, $D^{*}\subset D_{n+1}^{*}\subset D_{n}^{*}$, so $\Omega_{n}\subset\Omega_{n+1}\subset\Omega$. To show that $\Omega$ is contained in $\bigcup_{n\in\mathbb{N}}\Omega_{n}$, take an arbitrary connected compact set $0\in K\subset\Omega$. One has that $K\cdot D^{*}\subset W$, so $K\cdot U\subset W$ for a neighbourhood $U$ of $D^{*}$. The sequence $(D_{n}^{*})_{n\in\mathbb{N}}$ decreases and it is straightforward to check that $D^{*}=\bigcap_{n\in\mathbb{N}}D_{n}^{*}$. Therefore, $D_{n_{0}}^{*}\subset U$ for some $n_{0}$, which gives that $K\cdot D_{n_{0}}^{*}\subset W$. Hence, $K\subset\Omega_{n_{0}}$, because $K$ is connected and contains the origin. From this we conclude that $\Omega\subset\bigcup_{n\in\mathbb{N}}\Omega_{n}$. ∎ ###### Definition. As in [28], we define $\mathcal{D}_{N}$ as the family of all domains in $\mathbb{C}^{N}$ containing the origin which are countable unions of non-decreasing sequences of bounded smooth linearly convex domains. Recall that $D$ is called _linearly convex_ if through every point of $\mathbb{C}^{N}\setminus D$ one can pass an affine complex hyperplane disjoint from $D$. As described in [28, Remark 4.11], each element of $\mathcal{D}_{N}$, being a union of a non-decreasing sequence of Runge domains, is also a Runge domain. ###### Theorem 3.4. 
If $D\in\mathcal{D}_{N}$ and $G$ is a Runge domain, then $D*G=\textnormal{cc}_{0}\,\\{z\in\mathbb{C}^{N}:zD^{*}\subset h_{\textnormal{{1}}_{N}}*G\\}.$ ###### Proof. The left-to-right inclusion was established in Lemma 3.2, so it remains to prove the opposite one. By virtue of Lemma 3.3 and [28, Proposition 4.5] it suffices to restrict our considerations to the case when $D$ is bounded, smooth and linearly convex. Then there exists a smooth map $\nu_{D}$ from a neighbourhood of $\partial D$ to $\mathbb{C}^{N}$ such that at each point $w\in\partial D$ its value $\nu_{D}(w)$ is the unit outward normal vector for $D$ at $w$. Hence, for $w\in\partial D$ the equation $(z-w)\bullet\overline{\nu}_{D}(w)=0$ describes the unique complex hyperplane passing through $w$ and disjoint from $D$. In particular, $w\bullet\overline{\nu}_{D}(w)\neq 0$, as $0\in D$. This means that the mapping $\varphi:w\mapsto\overline{\nu}_{D}(w)\cdot(w\bullet\overline{\nu}_{D}(w))^{-1}$ is well-defined and smooth in a neighbourhood $V$ of $\partial D$. Moreover, we have that $\varphi(\partial D)\subset D^{*}$ and $w\bullet\varphi(w)=1$ for $w\in V$. Denote by $\Omega$ the set on the right-hand side of the conclusion and take a domain $0\in U\subset\subset\Omega$. From the definition of $\Omega$ it follows that $U\cdot\varphi(\partial D^{\prime})\subset h_{\textnormal{{1}}_{N}}*G$ for a sufficiently large smooth domain $0\in D^{\prime}\subset\subset D$ such that $\partial D^{\prime}\subset V$. If $f\in\mathcal{O}(D)$ and $g\in\mathcal{O}(G)$, then, by (2), for $z$ lying near the origin it holds that $(f*g)(z)=c_{N}(N-1)!\int_{\partial D^{\prime}}f(\zeta)(h_{\textnormal{{1}}_{N}}*_{G}g)(\varphi(\zeta)z)\omega_{\varphi}(\zeta).$ The integral on the right-hand side defines a function of the variable $z$ which is holomorphic on $U$. Consequently, $f*g\in\mathcal{O}_{0,U}$. Since $f$, $g$ and $U$ were taken arbitrarily, we conclude that $\Omega\subset D*G$. ∎ ## 4\. 
Description of $h_{(1,1)}*D$ for Runge domains This part is devoted to the proof of Theorem 4.1, which completes, although only in the two-dimensional case, the description of the star product of domains established in Theorem 3.4. Recall that $h_{(1,1)}$ is the function given by formula (3), that is, $h_{(1,1)}(z_{1},z_{2})=(1-z_{1}-z_{2})^{-2}=\sum_{\alpha_{1},\alpha_{2}\in\mathbb{N}}\frac{(\alpha_{1}+\alpha_{2}+1)!}{\alpha_{1}!\,\alpha_{2}!}z_{1}^{\alpha_{1}}z_{2}^{\alpha_{2}}.$ ###### Assumption. In this section we assume that $D$ is a domain in $\mathbb{C}^{2}$ containing the origin. ###### Definition. To simplify certain statements in this section, let us say that an open set $U\subset\mathbb{C}_{*}$ _separates_ $0$ and $\infty$ if $U$ contains a loop homotopic in $\mathbb{C}_{*}$ to the loop $\zeta\mapsto\zeta$. This is equivalent to saying that $0$ and $\infty$ lie in different connected components of $\widehat{\mathbb{C}}\setminus U$. For a point $z=(z_{1},z_{2})\in\mathbb{C}^{2}$ introduce the mapping $I_{z}:\mathbb{C}_{*}\to\mathbb{C}^{2}$ as $I_{z}(\zeta):=(z_{1}(1+\zeta),z_{2}(1+\zeta^{-1})).$ One has that $I_{z}(-1)=(0,0)$, so $I_{z}^{-1}(D)$ is non-empty. It is also important that $I_{z}$ is an injective proper map when $z\in(\mathbb{C}_{*})^{2}$. ###### Theorem 4.1. If $D$ is a Runge domain, then $h_{(1,1)}*D=\textnormal{cc}_{0}\,\left\\{z\in\mathbb{C}^{2}:\text{the set }I_{z}^{-1}(D)\text{ separates }0\text{ and }\infty\right\\}.$ Directly from the definition of separation it follows that the set under $\textnormal{cc}_{0}\,$ above is open. It also contains $(0,0)$, because $I_{(0,0)}^{-1}(D)=\mathbb{C}_{*}$. ###### Remark 4.2. Assume that $D$ is a Runge domain and take $z\in\mathbb{C}^{2}$. The open set $I_{z}^{-1}(D)$ is then $\mathcal{O}(\mathbb{C}_{*})$-convex in the sense that $\widehat{L}_{\mathbb{C}_{*}}\subset I_{z}^{-1}(D)$ for every compact set $L\subset I_{z}^{-1}(D)$. 
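For concrete $D$ the separation condition is easy to probe numerically. A sufficient criterion is that some circle $\varrho\mathbb{T}$ lies entirely in $I_{z}^{-1}(D)$; the sketch below tests this for the sample polydisc $D=\mathbb{P}_{2}(0,1)$ by sampling circles (the sampling grid and the list of radii are a heuristic, not a proof):

```python
import cmath

R = 1.0   # radius of the sample polydisc D = P_2(0, R)

def in_D(w):
    """Membership test for the polydisc D."""
    return abs(w[0]) < R and abs(w[1]) < R

def I(z, zeta):
    """The map I_z(zeta) = (z1(1 + zeta), z2(1 + 1/zeta))."""
    return (z[0] * (1 + zeta), z[1] * (1 + 1 / zeta))

def circle_in_preimage(z, rho, samples=720):
    """Sample-based check that the circle |zeta| = rho lies in I_z^{-1}(D)."""
    return all(in_D(I(z, rho * cmath.exp(2j * cmath.pi * k / samples)))
               for k in range(samples))

def separates_heuristic(z, rhos=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Sufficient (sampled) criterion: some circle around 0 lies in I_z^{-1}(D)."""
    return any(circle_in_preimage(z, rho) for rho in rhos)

# For small z the unit circle works, since |1 + zeta| <= 2 and |1 + 1/zeta| <= 2 there.
assert separates_heuristic((0.3, 0.3))
# For z with a large coordinate, I_z^{-1}(D) is confined near zeta = -1
# and no sampled circle fits inside it.
assert not separates_heuristic((5.0, 5.0))
```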
This means that every connected component of $\widehat{\mathbb{C}}\setminus I_{z}^{-1}(D)$ contains $0$ or $\infty$ (possibly both of them). Consequently, the latter set is connected if and only if $I_{z}^{-1}(D)$ does not separate $0$ and $\infty$. ###### Observation 4.3. If an open set $\Omega\subset\mathbb{C}^{2}$ and a point $z\in(\mathbb{C}_{*})^{2}$ are such that $I_{z}^{-1}(\Omega)$ does not separate $0$ and $\infty$, then for every compact polynomially convex set $K\subset\Omega$ the pre-image $I_{z}^{-1}(K)$ is either empty or polynomially convex. ###### Proof. First, note that the set $L:=I_{z}^{-1}(K)$, if non-empty, has to be compact and holomorphically convex in $\mathbb{C}_{*}$, which means that every connected component of $\widehat{\mathbb{C}}\setminus L$ contains $0$ or $\infty$. But since $I_{z}^{-1}(\Omega)$ does not separate $0$ and $\infty$, these points lie in the same connected component of $\widehat{\mathbb{C}}\setminus L$. Hence, the set $\widehat{\mathbb{C}}\setminus L$ is connected, so $L$ is polynomially convex. ∎ ###### Lemma 4.4. Let $K\subset\mathbb{C}^{2}$ be a compact polynomially convex set, $z\in(\mathbb{C}_{*})^{2}$ and $\varrho\in(0,\infty)$. If $I_{z}^{-1}(K)$ is empty or polynomially convex and $I_{z}^{-1}(K)\cap\varrho\overline{\mathbb{D}}=\varnothing,$ then the union $K\cup I_{z}(\varrho\mathbb{T})$ is polynomially convex. ###### Proof. Write $z=(z_{1},z_{2})$ and define $\mu(w_{1},w_{2}):=(w_{1}-z_{1})(w_{2}-z_{2})-z_{1}z_{2},\quad(w_{1},w_{2})\in\mathbb{C}^{2}.$ Clearly, $M:=\mu^{-1}(0)=I_{z}(\mathbb{C}_{*})$ is a complex submanifold of $\mathbb{C}^{2}$ and the map $I_{z}:\mathbb{C}_{*}\to M$ is a biholomorphism. By the assumptions, the union $I_{z}^{-1}(K)\cup\varrho\mathbb{T}$ is holomorphically convex in $\mathbb{C}_{*}$. This implies that the set $(K\cap M)\cup I_{z}(\varrho\mathbb{T})$, being its image under $I_{z}$, is holomorphically convex in $M$ and thus polynomially convex in $\mathbb{C}^{2}$ (use e.g. 
[15, Theorem 7.4.8]). The conclusion now follows directly from the subsequent general lemma. ∎ ###### Lemma 4.5. Let $V$ be an analytic subset of $\mathbb{C}^{N}$ and let $K\subset\mathbb{C}^{N}$, $L\subset V$ be compact sets. If both $K$ and $(K\cap V)\cup L$ are polynomially convex, then so is $K\cup L$. Note that the sets $K$ and $L$ do not have to be disjoint. ###### Proof. It is known (see e.g. [15, Theorems 6.5.2 and 7.1.5]) that the sheaf of germs of holomorphic functions vanishing on $V$ is a coherent analytic sheaf on $\mathbb{C}^{N}$. Therefore, if $X\subset\mathbb{C}^{N}$ is a compact, polynomially convex set (intersecting $V$ or not), $U$ is a neighbourhood of $X$, $F\in\mathcal{O}(U)$ and $F|_{U\cap V}\equiv 0$, then $F$ is a section of this sheaf and it can be approximated uniformly on $X$ by global sections, that is, by elements of $\mathcal{O}(\mathbb{C}^{N})$ vanishing on $V$. This essential fact is a consequence of [15, Theorem 7.2.7]. Fix a point $z_{0}\in\mathbb{C}^{N}\setminus(K\cup L)$. We are going to show that $z_{0}\not\in\widehat{K\cup L}$. If $z_{0}\not\in V$, then [15, Theorem 7.2.11] provides $f\in\mathcal{O}(\mathbb{C}^{N})$ vanishing on $V$ and such that $f(z_{0})=1$. On the other hand, there exists $g\in\mathcal{O}(\mathbb{C}^{N})$ with $g(z_{0})=1$ and $\|g\|_{K}<1$. Hence, for a sufficiently large number $n$ the function $g^{n}f$ maps $z_{0}$ to $1$ and $K\cup L$ into $\mathbb{D}$. It remains to consider the case when $z_{0}\in V$. 
Since $(K\cap V)\cup L$ is polynomially convex, one can find another compact polynomially convex set $A\subset\mathbb{C}^{N}$ such that $(K\cap V)\cup L\subset\textnormal{int}\,A\text{ and }z_{0}\not\in A.$ Take a sequence $(p_{n})_{n\in\mathbb{N}}\subset\mathcal{O}(\mathbb{C}^{N})$ such that $p_{n}(z_{0})=1\text{ and }\|p_{n}\|_{A}\to 0\text{ when }n\to\infty.$ The set $(\mathbb{C}^{N}\setminus V)\cup\textnormal{int}\,A$ is an open neighbourhood of $K$, so it has a pseudoconvex open subset $\Omega$ containing $K$. This means that $\Omega\cap V\subset\textnormal{int}\,A$ and hence $p_{n}\to 0$ on $\Omega\cap V$. By virtue of [6, Theorem 13.1], there exists a continuous linear extension operator from the Banach space of bounded holomorphic functions on $\Omega\cap V$ into $\mathcal{O}(\Omega)$. Applying it to the $p_{n}$’s we obtain a sequence $(g_{n})_{n\in\mathbb{N}}\subset\mathcal{O}(\Omega)$ convergent to $0$ in $\mathcal{O}(\Omega)$ and such that $g_{n}-p_{n}=0$ on $\Omega\cap V$. As described in the first paragraph of the proof, for every $n$ one can find a function $q_{n}\in\mathcal{O}(\mathbb{C}^{N})$ so that $q_{n}|_{V}\equiv 0$ and $\|q_{n}+p_{n}-g_{n}\|_{K}<\frac{1}{n}$. Finally, set $f_{n}:=p_{n}+q_{n}$. Every $f_{n}$ is an entire function and $f_{n}=p_{n}\text{ on }V\text{ and }\|f_{n}-g_{n}\|_{K}<\frac{1}{n}.$ This implies that $f_{n}(z_{0})=1$ and $f_{n}\to 0$ on $K\cup L$ uniformly as $n\to\infty$, so for large $n$ it holds that $|f_{n}(z_{0})|>\|f_{n}\|_{K\cup L}$, as desired. ∎ For a holomorphic function $f$ of two variables define $\Lambda(f)(z_{1},z_{2}):=f(z_{1},z_{2})+z_{1}\,\frac{\partial f}{\partial z_{1}}(z_{1},z_{2}).$ For every domain $\Omega\subset\mathbb{C}^{2}$ the mapping $f\mapsto\Lambda(f)$ defines a continuous linear operator on $\mathcal{O}(\Omega)$. ###### Lemma 4.6. 
If $\gamma$ is a loop in $\mathbb{C}_{*}$ homotopic to the loop $\zeta\mapsto\zeta$ and $f$ is a polynomial in $\mathbb{C}^{2}$, then $(h_{(1,1)}*_{\mathbb{C}^{2}}f)(z)=\frac{1}{2\pi i}\int_{\gamma}(1+\zeta^{-1})\,\Lambda(f)(I_{z}(\zeta))d\zeta$ for all $z\in\mathbb{C}^{2}$. ###### Proof. Fix a number $\rho\in(0,\frac{1}{2})$ and a polynomial $f$. From (1) it follows that $(h_{(1,1)}*_{\mathbb{C}^{2}}f)(z)=\left(\frac{1}{2\pi i}\right)^{2}\int_{\rho^{-1}\mathbb{T}^{2}}\frac{f(z_{1}\zeta_{1},z_{2}\zeta_{2})\zeta_{1}\zeta_{2}}{(\zeta_{2}-1)^{2}(\zeta_{1}-\zeta_{2}(\zeta_{2}-1)^{-1})^{2}}d\zeta_{1}d\zeta_{2},$ when $z=(z_{1},z_{2})\in\mathbb{C}^{2}$. If $\zeta_{2}\in\rho^{-1}\mathbb{T}$, then the point $\zeta_{2}(\zeta_{2}-1)^{-1}$ lies in $\rho^{-1}\mathbb{D}$, so, in view of the Cauchy formula, $\frac{1}{2\pi i}\int_{\rho^{-1}\mathbb{T}}\frac{f(z_{1}\zeta_{1},z_{2}\zeta_{2})\zeta_{1}}{(\zeta_{1}-\zeta_{2}(\zeta_{2}-1)^{-1})^{2}}d\zeta_{1}\\\ =\left.\frac{d}{d\zeta_{1}}\Bigl{(}f(z_{1}\zeta_{1},z_{2}\zeta_{2})\zeta_{1}\Bigr{)}\right|_{\zeta_{1}=\zeta_{2}(\zeta_{2}-1)^{-1}}=\Lambda(f)\left(\frac{z_{1}\zeta_{2}}{\zeta_{2}-1},z_{2}\zeta_{2}\right).$ Therefore, $(h_{(1,1)}*_{\mathbb{C}^{2}}f)(z)=\frac{1}{2\pi i}\int_{\rho^{-1}\mathbb{T}}\frac{\zeta_{2}}{(\zeta_{2}-1)^{2}}\;\Lambda(f)\left(\frac{z_{1}\zeta_{2}}{\zeta_{2}-1},z_{2}\zeta_{2}\right)d\zeta_{2}.$ A homotopy argument allows us to integrate over $\rho^{-1}\mathbb{T}+1$ instead of $\rho^{-1}\mathbb{T}$. Then, after the change of variable $\zeta:=(\zeta_{2}-1)^{-1}$, we obtain the equality from the conclusion with the integral taken over $\rho\mathbb{T}$. It is not affected by replacing $\rho\mathbb{T}$ with $\gamma$, because the integrand, as a function of $\zeta$, is holomorphic on $\mathbb{C}_{*}$. ∎ ###### Proof of Theorem 4.1. 
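Lemma 4.6 admits a quick numerical sanity check. For the monomial $f(z)=z_{1}z_{2}$ one has $\Lambda(f)=2z_{1}z_{2}$, while the expansion of $h_{(1,1)}$ gives $(h_{(1,1)}*f)(z)=\frac{3!}{1!\,1!}\,z_{1}z_{2}=6z_{1}z_{2}$; the sketch below evaluates the contour integral of the lemma over the unit circle by a Riemann sum (the test point $z$ is our choice):

```python
import cmath

def Lambda_f(w1, w2):
    """Lambda(f) for f(z1, z2) = z1*z2:  f + z1 * df/dz1 = 2*z1*z2."""
    return 2 * w1 * w2

def lemma_rhs(z1, z2, samples=2048):
    """(1/(2*pi*i)) * integral over |zeta| = 1 of (1 + 1/zeta) Lambda(f)(I_z(zeta)) dzeta.
    With zeta = e^{i theta}, dzeta = i*zeta*dtheta, so the integral is the
    average of zeta * (1 + 1/zeta) * Lambda(f)(I_z(zeta)) over theta."""
    total = 0
    for k in range(samples):
        zeta = cmath.exp(2j * cmath.pi * k / samples)
        w1, w2 = z1 * (1 + zeta), z2 * (1 + 1 / zeta)
        total += zeta * (1 + 1 / zeta) * Lambda_f(w1, w2)
    return total / samples

z1, z2 = 0.4, -0.7
expected = 6 * z1 * z2   # (h_{(1,1)} * f)(z) = (3!/(1! 1!)) z1 z2
assert abs(lemma_rhs(z1, z2) - expected) < 1e-9
```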
Denote by $\Omega$ the set on the right-hand side of the conclusion, that is, $\Omega:=\textnormal{cc}_{0}\,\left\\{z\in\mathbb{C}^{2}:\text{the set }I_{z}^{-1}(D)\text{ separates }0\text{ and }\infty\right\\}.$ The proof of the equality $h_{(1,1)}*D=\Omega$ is divided into a few steps. Step 1: We show the inclusion $\Omega\subset h_{(1,1)}*D$. Fix a function $f\in\mathcal{O}(D)$ and take a sequence $(f_{n})_{n\in\mathbb{N}}$ of polynomials convergent to $f$ locally uniformly on $D$. Then the functions $h_{(1,1)}*_{\mathbb{C}^{2}}f_{n}$ tend to $h_{(1,1)}*_{D}f$ in the same manner on $h_{(1,1)}*D$. We claim that they form a convergent sequence on $\Omega$ as well. Fix a point $a\in\Omega$ and take a loop $\gamma:\mathbb{T}\to I_{a}^{-1}(D)$ homotopic in $\mathbb{C}_{*}$ to the identity loop. Choose a closed ball $B\subset\Omega$ centered at $a$ so that $I_{z}(\gamma(\mathbb{T}))\subset D$ for all $z\in B$. From Lemma 4.6 it follows that $(h_{(1,1)}*_{\mathbb{C}^{2}}f_{n})(z)=\frac{1}{2\pi i}\int_{\gamma}(1+\zeta^{-1})\,\Lambda(f_{n})(I_{z}(\zeta))d\zeta$ for all $z\in\mathbb{C}^{2}$, $n\in\mathbb{N}$. Since $\Lambda(f_{n})\to\Lambda(f)$ in $\mathcal{O}(D)$ as $n\to\infty$, the integrals on the right-hand side of the above equality converge, uniformly with respect to $z\in B$, to the identical integral with $f_{n}$ replaced by $f$. Consequently, the sequence of polynomials $h_{(1,1)}*_{\mathbb{C}^{2}}f_{n}$ is uniformly convergent on $B$. Hence, it converges locally uniformly on $\Omega$, because $a\in\Omega$ was chosen arbitrarily. If $g\in\mathcal{O}(\Omega)$ is the limit, then clearly $g=h_{(1,1)}*f$ near the origin. This means that $h_{(1,1)}*f\in\mathcal{O}_{0,\Omega}$ for every $f\in\mathcal{O}(D)$, so $\Omega\subset h_{(1,1)}*D$. Step 2: We prove the inclusion $(h_{(1,1)}*D)\cap(\mathbb{C}_{*})^{2}\subset\Omega$. Suppose, to the contrary, that it does not hold, and choose a point $z\in(h_{(1,1)}*D)\cap(\mathbb{C}_{*})^{2}\cap\partial\Omega$. 
There exist a constant $C>0$ and a polynomially convex compact set $K\subset D$ such that (6) $|(h_{(1,1)}*_{D}f)(z)|\leq C\|f\|_{K},\quad f\in\mathcal{O}(D).$ Take a number $\varrho>0$ so that $I_{z}^{-1}(K)\cap\varrho\overline{\mathbb{D}}=\varnothing$. Since $z\in\partial\Omega$, the set $I_{z}^{-1}(D)$ does not separate $0$ and $\infty$, so from Observation 4.3 and Lemma 4.4 it follows that the union $K\cup I_{z}(\varrho\mathbb{T})$ is polynomially convex. Hence, there exists a compact set $L\subset\mathbb{C}^{2}$ such that $I_{z}(\varrho\mathbb{T})\subset\textnormal{int}\,L$, $K\cap L=\varnothing$ and $K\cup L$ is polynomially convex (one can justify this using, for example, the Kallin lemma [27, Theorem 1.6.19]). This allows us to employ the Oka-Weil theorem to get a sequence $(f_{n})_{n\in\mathbb{N}}$ of polynomials in $\mathbb{C}^{2}$ uniformly convergent to $0$ on $K$ and to $1$ on $L$. Lemma 4.6 implies that $(h_{(1,1)}*_{D}f_{n})(z)=\frac{1}{2\pi i}\int_{\varrho\mathbb{T}}(1+\zeta^{-1})\,\Lambda(f_{n})(I_{z}(\zeta))d\zeta.$ As $n\to\infty$, the left-hand side goes to $0$, in view of (6). On the other hand, the integrals converge to $1$, because $\Lambda(f_{n})\to 1$ on $I_{z}(\varrho\mathbb{T})$. A contradiction. Step 3: We show that if $(z_{1},0)\in h_{(1,1)}*D$, then $(z_{1},0)\in D$. By evident symmetry, this will also mean that $(0,z_{2})\in D$ when $(0,z_{2})\in h_{(1,1)}*D$. Suppose, to the contrary, that $(z_{1},0)\not\in D$. One can find a constant $C>0$ and a compact polynomially convex set $K\subset D$ satisfying (7) $|(h_{(1,1)}*_{D}f)(z_{1},0)|\leq C\|f\|_{K},\quad f\in\mathcal{O}(D).$ Take a closed ball $B$ centered at $(z_{1},0)$ so small that $K\cap B=\varnothing$ and $K\cup B$ is polynomially convex. In view of the Oka-Weil theorem, there exists a sequence $(f_{n})_{n\in\mathbb{N}}$ of polynomials in $\mathbb{C}^{2}$ uniformly convergent to $0$ on $K$ and to $1$ on $B$. 
From Lemma 4.6 it follows that (8) $(h_{(1,1)}*_{\mathbb{C}^{2}}f_{n})(z_{1},0)=\Lambda(f_{n})(z_{1},0)=f_{n}(z_{1},0)+z_{1}\,\frac{\partial f_{n}}{\partial z_{1}}(z_{1},0).$ Now, if $n\to\infty$, then (7) implies that the left-hand side of (8) goes to $0$, while the right-hand side converges to $1$ (by the Cauchy estimates, $\frac{\partial f_{n}}{\partial z_{1}}\to 0$ near $(z_{1},0)$). This contradiction completes the proof of this step. Step 4: We obtain the inclusion $h_{(1,1)}*D\subset\Omega$. It remains to prove that if $(z_{1},0)\in h_{(1,1)}*D$, then $(z_{1},0)\in\Omega$ (the same statement for $(0,z_{2})$ can be demonstrated identically). Take such a point $(z_{1},0)$ and choose a curve $\gamma:[0,1]\to h_{(1,1)}*D$ so that $\gamma(0)=(0,0)$, $\gamma(1)=(z_{1},0)$ and $\gamma(t)\in(\mathbb{C}_{*})^{2}$ for $t\in(0,1)$. The conclusion of Step 2 guarantees that $\gamma(t)\in\Omega$ for every $t\in(0,1)$, so $(z_{1},0)\in\overline{\Omega}$. Moreover, from Step 3 we know that $(z_{1},0)\in D$, so $\epsilon\mathbb{D}_{*}\subset I_{(z_{1},0)}^{-1}(D)$ for a small number $\epsilon>0$. Hence, the latter set separates $0$ and $\infty$, which gives that $(z_{1},0)\in\Omega$. The proof is complete. ∎ ## References * [1] L. A. Aizenberg, E. K. Leinartas, Multidimensional Hadamard composition and Szegö kernels (Russian), Sibirsk. Mat. Zh. 24 (1983), no. 3, 3–10; translation in Siberian Math. J. 24 (1983), no. 3, 317–323. * [2] L. Bernal-Gonzalez, M. C. Calderon-Moreno, J. A. Prado-Bassas, Cyclicity of coefficient multipliers: linear structure, Acta Math. Hung. 114 (2007), 287-300. * [3] R. Brück, J. Müller, Invertible elements in a convolution algebra of holomorphic functions, Math. Ann. 294 (1992), 421-438. * [4] R. Brück, J. Müller, Closed ideals in a convolution algebra of holomorphic functions, Can. J. Math. 47 (1995), 915-928. * [5] R. Brück, H. Render, Invertibility of holomorphic functions with respect to the Hadamard product, Complex Variables 42 (2000), 207-223. * [6] L. 
# Spectral construction of non-holomorphic Eisenstein-type series and their Kronecker limit formula James Cogdell, Jay Jorgenson, Lejla Smajlović (The second named author acknowledges grant support from PSC-CUNY.) ###### Abstract Let $X$ be a smooth, compact, projective Kähler variety, let $D$ be the divisor of a holomorphic form $F$, and assume that $D$ is smooth up to codimension two. Let $\omega$ be a Kähler form on $X$, and let $K_{X}$ be the corresponding heat kernel, associated to the Laplacian that acts on the space of smooth functions on $X$. Using various integral transforms of $K_{X}$, we will construct a meromorphic function in a complex variable $s$ whose special value at $s=0$ is the log-norm of $F$ with respect to the metric induced by $\omega$. In the case when $X$ is the quotient of a symmetric space, the function we construct is a generalization of the so-called elliptic Eisenstein series which has been defined and studied for finite volume Riemann surfaces. Dedicated to Emma Previato, on the occasion of her $65$th birthday. ## 1 Introduction ### 1.1 Kronecker’s limit formula The discrete group $\text{PSL}_{2}({\mathbb{Z}})$ acts on the upper half plane ${\mathbb{H}}$, and the quotient space $\text{PSL}_{2}({\mathbb{Z}})\backslash{\mathbb{H}}$ has one cusp which can be taken to be at $i\infty$ by identifying $\text{PSL}_{2}({\mathbb{Z}})\backslash{\mathbb{H}}$ with its fundamental domain. Associated to the cusp is a non-holomorphic Eisenstein series ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ which initially is defined as a Poincaré series for ${\mathrm{Re}}(s)>1$ but can be shown to admit a meromorphic continuation to all $s\in{\mathbb{C}}$. One realization of the classical Kronecker limit formula is the asymptotic expansion $\displaystyle\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)=\frac{3}{\pi(s-1)}-\frac{1}{2\pi}\log\bigl{(}|\Delta(z)|{\mathrm{Im}}(z)^{6}\bigr{)}+C+O_{z}(s-1)\,\,\,\text{\rm as}\,\,\,s\rightarrow 1$ where $C=6(1-12\,\zeta^{\prime}(-1)-\log(4\pi))/\pi$.
An elegant proof of Kronecker’s limit formula can be found in [Si80], though the normalization used in [Si80] is slightly different from that in [JST16], from which we quote the above formulation. The series ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ has a well-known functional equation which allows one to restate Kronecker’s limit formula as $\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)=1+\log\bigl{(}|\Delta(z)|^{1/6}{\mathrm{Im}}(z)\bigr{)}s+O_{z}(s^{2})\,\,\,\text{\rm as}\,\,\,s\rightarrow 0.$ There are many results in the mathematical literature which develop and explore analogues of Kronecker’s limit formula. One particularly motivating study is given in [KM79] in which the authors define a non-holomorphic hyperbolic Eisenstein series ${\cal E}^{\mathrm{hyp}}_{\gamma}(z,s)$ associated to any hyperbolic subgroup, generated by a hyperbolic element $\gamma$, of an arbitrary co-finite discrete subgroup $\Gamma$ of $\text{PSL}_{2}({\mathbb{R}})$. The Kronecker limit formula obtained in [KM79] states that the Poincaré series which defines ${\cal E}^{\mathrm{hyp}}_{\gamma}(z,s)$ admits a meromorphic continuation to $s\in{\mathbb{C}}$ and the value of ${\cal E}^{\mathrm{hyp}}_{\gamma}(z,s)$ at $s=0$ is, in effect, the harmonic one-form which is dual to the geodesic on $\Gamma\backslash{\mathbb{H}}$ associated to $\gamma$. Abelian subgroups of discrete groups $\Gamma$ which act on ${\mathbb{H}}$ are classified as parabolic, hyperbolic and elliptic, so it remained to define and study non-holomorphic Eisenstein series associated to any elliptic subgroup of an arbitrary discrete group $\Gamma$. Any elliptic subgroup can be viewed as the stabilizer group of a point $w$ on the quotient $\Gamma\backslash{\mathbb{H}}$, where in all but a finite number of cases the elliptic subgroup consists solely of the identity element of $\Gamma$.
One can envision the notion of a non-holomorphic elliptic Eisenstein series ${\cal E}^{\textrm{ell}}_{w}(z,s)$ which, if the above examples form a pattern, will admit a meromorphic continuation and whose special value at $s=0$ will be associated to a harmonic form of some type specified by $w$. Indeed, such series were studied in [vP10] and, in fact, the Kronecker limit function is the log-norm of a holomorphic form which vanishes only at $w$. ### 1.2 A unified approach The article [JvPS16] developed a unified construction of the hyperbolic, elliptic and parabolic Eisenstein series mentioned above for any finite volume quotient of ${\mathbb{H}}$; of course, if the quotient is compact, then parabolic Eisenstein series do not exist. The goal of [JvPS16] was to devise a means, motivated by a type of pre-trace formula, so that the various Eisenstein series could be obtained by employing different test functions. As one would expect, there were numerous technical considerations which arose, especially in the case when $\Gamma$ was not co-compact. In the end, one can view the approach developed in [JvPS16] as starting with a heat kernel, and then undertaking a sequence of integral transforms until one ends up with each of the above-mentioned Eisenstein series. Whereas the article [JvPS16] did provide a unified approach to the construction of parabolic, hyperbolic and elliptic Eisenstein series for hyperbolic Riemann surfaces, the analysis did employ the geometry of $\textrm{\rm SL}_{2}({\mathbb{R}})$ quite extensively. ### 1.3 Our results The goal of the present paper is to understand the heat kernel construction of non-holomorphic elliptic Eisenstein series in a more general setting. We consider a smooth, complex, projective variety $X$ of complex dimension $N$. We fix a smooth Kähler metric on $X$, whose associated $(1,1)$ form we denote by $\omega$.
In general terms, let us now describe the approach we undertake to define and study what we call elliptic Eisenstein series. Let $t$ be a positive real variable, and let $z$ and $w$ be points on $X$. Let $K_{X}(z,w;t)$ be the heat kernel acting on smooth functions on $X$ associated to the Laplacian $\Delta_{X}$ corresponding to $\omega$; see, for example, [Ch84] or [BGV91]. One of the key properties of $K_{X}(z,w;t)$ is that it satisfies the heat equation, meaning that $\left(\Delta_{z}+\partial_{t}\right)K_{X}(z,w;t)=0.$ We compute the integral transform in $t$ of $K_{X}(z,w;t)$ after multiplying by a function $G(t,u)$ which satisfies the differential equation $(\partial_{t}-\partial^{2}_{u})G(t,u)=0$. By what amounts to integration by parts, we get a function $(K_{X}\ast G)(z,w;u)$ which satisfies the equation $\left(\Delta_{z}-\partial^{2}_{u}\right)(K_{X}\ast G)(z,w;u)=0.$ If one formally replaces $u$ by $iu$, one gets the kernel function associated to the wave equation. However, this substitution is only formal because of convergence considerations; nonetheless, one is able to use the language of distributions in order to achieve the desired result which is to obtain a wave kernel $W_{X}(z,w;u)$. At this point, one would like to integrate the wave kernel against the test function $(\sinh u)^{-s}$ for a complex variable $s$ to yield, as in [JvPS16], the elliptic Eisenstein series. Again, however, technical problems occur because of the vanishing of $\sinh(u)$ when $u=0$. Instead, we integrate the wave kernel against $(\cosh u)^{-s}$, for which there is no such technical issue. We then replace $s$ by $s+2k$ and sum over $k$, in a manner dictated by, of all things, the binomial theorem, thus allowing us to mimic the use of $(\sinh u)^{-s}$. 
In doing so, we arrive at the analogue of the elliptic Eisenstein series $E_{X}(z,w;s)$, where $s$ is a complex variable, initially required to have real part $\textrm{\rm Re}(s)$ sufficiently large, $z$ is a variable on $X$, and $w$ is a fixed point on $X$. Though $w$ may be referred to as the elliptic point, it is, in the case that $X$ is smooth, simply a chosen point on $X$. As a final step, we let $D$ be the divisor of a holomorphic form $F$ on $X$, and assume that $D$ is smooth up to codimension two. We show that the integral of $E_{X}(z,w;s)$ with respect to the metric $\mu_{D}(w)$ on $D$ induced from the Kähler form $\omega$ has an expansion in $s$ at $s=0$, and the second order term in $s$ is the log-norm of $F$. This result is the analogue of the classical Kronecker limit formula. Thus far, all results are obtained by using the spectral expansion of the heat kernel associated to the Laplacian $\Delta_{X}$. We can equally well reconsider all of the above steps for the operator $\Delta_{X}+Z$ for any complex number $Z$, in which case we do not begin with the heat kernel $K_{X}(z,w;t)$ but rather we begin with $K_{X}(z,w;t)e^{-Zt}$. If, for whatever reason, there is a means by which we have another expression for the heat kernel, and also have a compelling reason to choose a specific $Z$, then we may end up with another expression for $E_{X}(z,w;s)$. Such a situation occurs when, for instance, $X$ is the quotient of a symmetric space $G/K$ by a discrete group $\Gamma$. In this case, the heat kernel can be obtained as the inverse spherical transform of an exponential function. In that setting, it is natural to take $Z=-\rho^{2}_{0}$ where $\rho_{0}$ is essentially the norm, with respect to the Killing form, of half the sum of the positive roots. (In the notation of Gangoli [Ga68], our $\rho_{0}$ would be his $|\rho_{*}|$.)
Finally, we note that one can, without loss of generality, re-scale the time variable $t$ by a positive constant $c$, so that everything begins with the function $K_{X}(z,w;t/c)e^{-Zt/c}$. In the development of our results, it will become evident that it is necessary both to translate the Laplacian $\Delta_{X}$ and to re-scale the time $t$, and that as long as $\rho^{2}_{0}\neq 0$, it is natural to take $c=1/(4\rho^{2}_{0})$, which would have the effect of scaling $\rho_{0}$ to be $1/2$, or $\rho_{0}^{2}$ to be $1/4$. (See section 2.7 below.) The full development of these considerations in all instances would take a considerable amount of time and space, so for the purpose of the present article we will focus on the results obtainable by considering the spectral decomposition of the heat kernel in the case of a compact Kähler variety $X$. However, it is possible to give an indication of what will follow when an additional expression for the heat kernel is available. For instance, if $X$ is an abelian variety, we obtain an expression for the heat kernel on $X$ by viewing $X$ as a complex torus. As an example of our analysis, we can take $D$ to be the divisor of the Riemann theta function $\theta$, so that our construction expresses the log-norm of the Riemann theta function $\theta$ as a type of Kronecker limit function. ### 1.4 Outline of the paper The article is organized as follows. In section 2 we establish notation and recall some known results. In section 3, we define the wave distribution associated to a certain space of smooth functions on $X$. In section 4 we apply the wave distribution to the test function $\cosh^{-(s-\rho_{0})}(u)$, for a suitably chosen constant $\rho_{0}$, yielding a function $K_{X;\rho_{0}^{2}}(z,w;s)$.
In section 5 we define two series formed from $K_{X;\rho_{0}^{2}}(z,w;s)$, one producing a formula for the resolvent kernel $G_{X;\rho_{0}^{2}}(z,w;s)$, which is the integral kernel that inverts the operator $\Delta_{X}+s(s-2\rho_{0})$. The second series $E_{X;\rho_{0}^{2}}(z,w;s)$ is the analogue of the elliptic Eisenstein series. The analogue of Kronecker’s limit formula is given in section 6. Finally, in section 7, we conclude with some examples. In our opinion, each example is of independent interest. Admittedly, the discussion in section 7 is somewhat speculative; however, we elected to include the discussion in an attempt to illustrate some of the directions in which we believe our results can apply. In an unavoidable clash of notation, the heat kernel on $X$ is denoted by $K_{X}(z,w;t)$, and the function obtained by applying the wave distribution to $\cosh^{-(s-\rho_{0})}(u)$ is $K_{X;\rho_{0}^{2}}(z,w;s)$. Similarly, $\Gamma$ will sometimes signify the Gamma function and sometimes signify a discrete group acting on a symmetric space. In each case, the meaning will be clear from the context of the discussion. ## 2 Background material In this section we establish notation and state certain elementary results which will be used throughout the article. The contents in this section are given in no particular order of importance. ### 2.1 Stirling’s approximation Stirling’s approximation for the logarithm $\log\Gamma(s)$ of the classical gamma function is well-known, and we will use the form which states that $\log\Gamma(s)=s\log(s)-s+\frac{1}{2}\log(2\pi/s)+\sum\limits_{n=1}^{M}\frac{B_{2n}}{2n(2n-1)s^{2n-1}}+h_{M}(s),$ (1) where $B_{n}$ is the $n$-th Bernoulli number and $h_{M}(s)$ is a holomorphic function in the half-plane ${\mathrm{Re}}(s)\gg 0$ and $h_{M}(s)=O_{M}(s^{-2M-1})$ as $s\rightarrow\infty$. The proof of (1) is found in various places in the literature; see, for example, [JLa93].
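The truncation in (1) is easy to test numerically. The following sketch is our own check, not part of the paper; it assumes a Python environment, and the choices $M=3$ and $s=10,20$ are arbitrary sample values. It compares the right-hand side of (1), with the remainder $h_{M}(s)$ dropped, against the standard library's log-Gamma:

```python
import math

# Bernoulli numbers B_2, B_4, B_6: enough for the truncation M = 3 in (1)
B = {2: 1/6, 4: -1/30, 6: 1/42}

def stirling(s, M=3):
    # right-hand side of (1) with the remainder term h_M(s) dropped
    total = s*math.log(s) - s + 0.5*math.log(2*math.pi/s)
    for n in range(1, M + 1):
        total += B[2*n] / (2*n*(2*n - 1) * s**(2*n - 1))
    return total

# h_M(s) = O_M(s^{-2M-1}); at s = 10 the error is already below the
# first omitted term |B_8|/(56 s^7), i.e. well under 1e-9
err10 = abs(stirling(10.0) - math.lgamma(10.0))
err20 = abs(stirling(20.0) - math.lgamma(20.0))
```

Doubling $s$ shrinks the error by roughly a factor $2^{2M+1}=128$, in line with the stated order of $h_{M}(s)$.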
Going further, the proof from [JLa93] extends to show that one can, in effect, differentiate the above asymptotic formula. More precisely, for any integer $\ell\geq 0$, one has that $\partial_{s}^{\ell}\log\Gamma(s)=\partial_{s}^{\ell}\left(s\log(s)-s+\frac{1}{2}\log(2\pi/s)+\sum\limits_{n=1}^{M}\frac{B_{2n}}{2n(2n-1)s^{2n-1}}\right)+\partial_{s}^{\ell}h_{M}(s),$ (2) where $\partial_{s}^{\ell}h_{M}(s)=O_{M,\ell}(s^{-2M-\ell-1})$, as $s\rightarrow\infty$. We will use the notational convenience of the Pochhammer symbol $(s)_{n}$, which is defined as $(s)_{n}:=\frac{\Gamma(s+n)}{\Gamma(s)}.$ ### 2.2 Elementary integrals For any real number $r\in{\mathbb{R}}$ and complex number $\nu$ with $\textrm{Re}(\nu)>0$, we have, from $3.985.1$ of [GR07], the integral formula $\int_{0}^{\infty}\cos(ur)\cosh^{-\nu}(u)\,du=\frac{2^{\nu-2}}{\Gamma(\nu)}\Gamma\left(\frac{\nu- ir}{2}\right)\Gamma\left(\frac{\nu+ir}{2}\right).$ (3) If $r=0$, then we get the important special case that $\int_{0}^{\infty}\cosh^{-\nu}(u)\,du=\frac{2^{\nu-2}\Gamma^{2}(\nu/2)}{\Gamma(\nu)},$ (4) which is stated in $3.512.1$ of [GR07]. Additionally, we will use that for any $r\in{\mathbb{C}}$ with $\textrm{Re}(r^{2})>0$ and $u\in{\mathbb{C}}$ with $\textrm{Re}(u)>0$, one has that $\frac{u}{\sqrt{4\pi}}\int_{0}^{\infty}e^{-r^{2}t}e^{-u^{2}/(4t)}t^{-1/2}\,\frac{dt}{t}=e^{-|r|u}.$ (5) For any real-valued function $g$ and $r\in\mathbb{C}$, we define $H(r,g)$ as $H(r,g):=2\int_{0}^{\infty}\cos(ur)g(u)\,du.$ (6) This is a purely formal definition; the conditions on $g$ under which we consider $H(r,g)$ are stated in section 3 below. ### 2.3 An asymptotic formula For the convenience of the reader, we state here a result from page 37 of [Er56]. Let $(\alpha,\beta)\subset{\mathbb{R}}$. Let $g$ be a real-valued continuous function, and $h$ be a real-valued continuously differentiable function, on $(\alpha,\beta)$ such that the integral $\int\limits_{\alpha}^{\beta}g(t)e^{xh(t)}dt$ exists for sufficiently large $x$.
Assume there is an $\eta>0$ such that $h^{\prime}(t)<0$ for $t\in(\alpha,\alpha+\eta)$. In addition, for some $\epsilon>0$, assume that $h(t)\leq h(\alpha)-\epsilon$ for $t\in(\alpha+\eta,\beta)$. Suppose that $h^{\prime}(t)=-a(t-\alpha)^{\nu-1}+o((t-\alpha)^{\nu-1})\,\,\,\,\,\textrm{and}\,\,\,\,\,g(t)=b(t-\alpha)^{\lambda-1}+o((t-\alpha)^{\lambda-1})\,\,\,\,\,\textrm{as $t\rightarrow\alpha^{+}$}$ for some positive $\lambda$ and $\nu$. Then $\int\limits_{\alpha}^{\beta}g(t)e^{xh(t)}dt=\frac{b}{\nu}\Gamma(\lambda/\nu)(\nu/(ax))^{\lambda/\nu}e^{xh(\alpha)}\left(1+o(1)\right)\,\,\,\,\,\textrm{as $x\rightarrow\infty$.}$ (7) Note that the assumptions on $h$ hold if $\alpha=0$ and $h$ is monotone decreasing with $h(0)=0$, which is the setting in which we will apply the above result. In this case, we will use that the above integral is $O(x^{-\lambda/\nu})$. ### 2.4 Geometric setting Let $X$ be a compact, complex, smooth projective variety of complex dimension $N$. Fix a smooth Kähler metric $\mu$ on $X$, which is associated to the Kähler $(1,1)$ form $\omega$. Let $\rho$ denote a (local) potential for the metric $\mu$. If we choose local holomorphic coordinates $z_{1},\dots,z_{N}$ in the neighborhood of a point on $X$, then one can write $\omega$ as $\omega=\frac{i}{2}\sum_{j,k=1}^{N}g_{j,\bar{k}}dz_{j}\wedge d\bar{z}_{k}=\frac{i}{2}\partial_{z}\partial_{\bar{z}}\rho.$ If $M$ is a subvariety of $X$, the induced metric on $M$ will be denoted by $\mu_{M}$. In particular, the induced metric on $X$ itself is $\mu_{X}$, which we will simply write as $\mu$. In a slight abuse of notation, we will also write $\mu_{M}$, or $\mu$ in the case $M=X$, for the associated volume form against which one integrates functions. The corresponding Laplacian $\Delta_{X}$ which acts on smooth functions on $X$ is $\Delta_{X}=-\sum_{j,k=1}^{N}g^{j,\bar{k}}\frac{\partial^{2}}{\partial z_{j}\partial\bar{z}_{k}},$ where, in standard notation, $(g^{j,\bar{k}})=(g_{j,\bar{k}})^{-1}$; see page 4 of [Ch84].
An eigenfunction of the Laplacian $\Delta_{X}$ is an a priori $C^{2}$ function $\psi_{j}$ which satisfies the equation $\Delta_{X}\psi_{j}-\lambda_{j}\psi_{j}=0$ for some constant $\lambda_{j}$, which is the eigenvalue associated to $\psi_{j}$. It is well-known that any eigenfunction is in fact smooth, and every eigenvalue is greater than zero except when $\psi_{j}$ is a constant, whose corresponding eigenvalue is zero. As is standard, we assume that each eigenfunction is normalized to have $L^{2}$ norm equal to one. Weyl’s law asserts that $\\#\\{\lambda_{j}|\lambda_{j}\leq T\\}=(2\pi)^{-2N}\textrm{vol}_{N}(\mathbb{B})\textrm{vol}_{\omega}(X)T^{N}+O(T^{N-1/2})\,\,\,\,\,\textrm{as $T\rightarrow\infty$}$ where $\textrm{vol}_{\omega}(X)$ is the volume of $X$ under the metric $\mu$ induced by $\omega$, and $\textrm{vol}_{N}(\mathbb{B})$ is the volume of the unit ball in $\mathbb{R}^{2N}$. As a consequence of Weyl’s law, one has that for any $\varepsilon>0$, $\sum\limits_{k=1}^{\infty}\lambda_{k}^{-N-\varepsilon}<\infty;$ (8) see, for example, page 9 of [Ch84]. The eigenfunction $\psi_{j}$ corresponding to the eigenvalue $\lambda_{j}$ satisfies a sup-norm bound on $X$, namely that $\|\psi_{j}\|_{\infty}=O_{X}\left(\lambda_{j}^{N/2-1/4}\right);$ (9) see [SZ02] and references therein. ### 2.5 Holomorphic forms By a holomorphic form $F$ we mean a holomorphic section of a power of the canonical bundle $\Omega$ on $X$; see page 146 of [GH78]. (Note: On page 146 of [GH78], the authors denote the canonical bundle by $K_{X}$, which we will not do, since this notation is being used both for the heat kernel and the function obtained by applying the wave distribution to hyperbolic cosine.) The weight $n$ of the form equals the power of the bundle of which $F$ is a section. Let $D$ denote the divisor of $F$, and assume that $D$ is smooth up to codimension two.
In the case that $X$ is a quotient of a symmetric space $G/K$ by a discrete group $\Gamma$, $F$ is a holomorphic automorphic form on $G/K$ with respect to $\Gamma$. With a slight gain in generality, and with no increase in complication of the analysis, we can consider sections of the canonical bundle obtained by taking the tensor product of the canonical bundle with a flat line bundle. The Kähler form $\omega$ will induce a norm on $F$, which we denote by $\|F\|_{\omega}$; see [GH78] for a general discussion as well as section 2 of [JK01]. We can describe the norm as follows. As in the notation of section 2 of [JK01], let $U$ be an element of an open cover of $X$. Once we trivialize $\Omega$ on $U$, we can express the form $F$ in local coordinates $z_{1},\dots,z_{N}$. Also, we have the existence of a Kähler potential $\rho$ of the Kähler form $\omega$. Up until now, there has been no natural scaling of $\omega$. We fix one now, scaling $\omega$ by a multiplicative constant $c$ so that $c\omega$ is a Chern form of $\Omega$; see page 144 of [GH78] as well as chapter 2 of [Fi18]. In a slight abuse of notation, we will denote the re-scaled Kähler form by $\omega$. With this scaling of $\omega$, one can show that $|F(z)|e^{-n\rho(z)}$ is invariant under change of coordinates; see section 2.3 of [JK01]. With this, one defines $\|F\|_{\omega}^{2}(z):=|F(z)|^{2}e^{-2n\rho(z)},$ (10) where $n$ is the weight of the form. The formula is local for each $U$ in the open cover, but its invariance implies that the definition extends independently of the various choices made. Following the discussion of Chapter 1 of [La88], the above equation can be written in differential form as $\textrm{\rm d}\textrm{\rm d}^{c}\log\|F\|_{\omega}^{2}=n(\delta_{D}-\omega)$ (11) where $\delta_{D}$ denotes the Dirac delta distribution supported on $D$.
Kähler metrics have the property that the associated Laplacian of a function does not involve derivatives of the metric, as stated on page 75 of [Ba06]. Exercise 1.27.3(a) of [Fi18] states the formula $\frac{1}{2}\Delta_{X,d}f\omega^{n}=ni\partial\bar{\partial}f\wedge\omega^{n-1}$ (12) where $\Delta_{X,d}$ is the Laplacian stemming from the differential $d$ and $f$ is a smooth function, subject to the normalizations of various operators as stated in [Fi18]. As a corollary of (12), one may interpret (11) as asserting that $\Delta_{X}\log\|F\|_{\omega}^{2}$ is a non-zero constant away from $D$. ### 2.6 The heat and Poisson kernels The heat kernel acting on smooth functions on $X$ can be defined formally as $K_{X}(z,w;t)=\sum_{k=0}^{\infty}e^{-\lambda_{k}t}\psi_{k}(z)\overline{\psi_{k}}(w),$ where $\psi_{k}$ is the eigenfunction associated to the eigenvalue $\lambda_{k}$. As a consequence of Weyl’s law and the sup-norm bound for eigenfunctions, the series which defines the heat kernel converges for all $t>0$ and $z,w\in X$. Furthermore, if $z\neq w$, then the heat kernel has exponential decay when $t$ approaches zero; see page 198 of [Ch84]. For any $Z\in{\mathbb{C}}$ with ${\mathrm{Re}}(Z)\geq 0$, the Poisson kernel ${\cal P}_{X,-Z}(z,w;u)$ translated by $-Z$ is defined, for $z,w\in X$ and $u\in{\mathbb{C}}$ with ${\mathrm{Re}}(u)\geq 0$, by ${\cal P}_{X,-Z}(z,w;u)=\frac{u}{\sqrt{4\pi}}\int_{0}^{\infty}K_{X}(z,w;t)e^{-Zt}e^{-u^{2}/(4t)}t^{-1/2}\,\frac{dt}{t}.$ (13) The translated Poisson kernel ${\cal P}_{X,-Z}(z,w;u)$ is a fundamental solution associated to the differential operator $\Delta_{X}+Z-\partial^{2}_{u}$. For certain considerations to come, we will choose a constant $\rho_{0}\geq 0$, which will depend on the geometry of $X$, and write each eigenvalue of $\Delta_{X}$ as $\lambda_{j}=\rho_{0}^{2}+t_{j}^{2}$.
Thus, we divide the spectral expansion of the heat kernel $K_{X}$ into two subsets: the finite sum for $\lambda_{j}<\rho_{0}^{2}$, so then $t_{j}\in(0,i\rho_{0}]$, and the sum over $\lambda_{j}\geq\rho_{0}^{2}$, so then $t_{j}\geq 0$. Using (5), we get the spectral expansion ${\cal P}_{X,-Z}(z,w;u)=\sum_{\lambda_{k}<\rho_{0}^{2}}e^{-u\sqrt{\lambda_{k}+Z}}\psi_{k}(z)\overline{\psi}_{k}(w)+\sum_{\lambda_{k}\geq\rho_{0}^{2}}e^{-u\sqrt{\lambda_{k}+Z}}\psi_{k}(z)\overline{\psi}_{k}(w).$ (14) By Theorem 5.2 and Remark 5.3 of [JLa03], ${\cal P}_{X,-Z}(z,w;u)$ admits an analytic continuation to $Z=-\rho_{0}^{2}$. In analogy with [JvPS16], we can deduce that the continuation of ${\cal P}_{X,-Z}(z,w;u)$ for $Z=-\rho_{0}^{2}$, with ${\mathrm{Re}}(u)>0$ and ${\mathrm{Re}}(u^{2})>0$ is given by ${\cal P}_{X,\rho_{0}^{2}}(z,w;u)=\sum_{\lambda_{k}<\rho_{0}^{2}}e^{-u\sqrt{\lambda_{k}-\rho^{2}_{0}}}\psi_{k}(z)\overline{\psi}_{k}(w)+\sum_{\lambda_{k}\geq\rho_{0}^{2}}e^{-u{t_{k}}}\psi_{k}(z)\overline{\psi}_{k}(w),$ (15) where $\sqrt{\lambda_{k}-\rho_{0}^{2}}=t_{k}\in(0,i\rho_{0}]$ is taken to be the branch of the square root obtained by analytic continuation through the upper half-plane. As stated, if $z\neq w$, then the heat kernel has exponential decay as $t$ approaches zero. From this, one can show that the Poisson kernel ${\cal P}_{X,-Z}(z,w;u)$ is bounded as $u$ approaches zero for any $Z$. At this point, we would like to define the (translated by $\rho_{0}^{2}$) wave kernel by setting $W_{X,\rho_{0}^{2}}(z,w;u)={\cal P}_{X,\rho_{0}^{2}}(z,w;iu)+{\cal P}_{X,\rho_{0}^{2}}(z,w;-iu),$ for some branch of the meromorphic continuation of ${\cal P}_{X,\rho_{0}^{2}}(z,w;u)$ to all $u\in{\mathbb{C}}$. However, because of convergence issues, we cannot simply replace $u$ by $iu$ in the expression for the Poisson kernel. As a result, we define the wave distribution via the spectral expansion for the analytic continuation of ${\cal P}_{X,\rho_{0}^{2}}$ in (15).
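To make the subordination integral (13) and the expansion (14) concrete, the following sketch runs the computation on the circle $S^{1}$ instead of a Kähler variety (an assumption made purely for illustration, since there everything is explicit): with $Z=0$, eigenvalues $n^{2}$, and heat kernel $K(d;t)=\frac{1}{2\pi}\sum_{n\in\mathbb{Z}}e^{-n^{2}t}\cos(nd)$, the integral (13) should reproduce the classical Poisson kernel $\frac{1}{2\pi}\sum_{n\in\mathbb{Z}}e^{-|n|u}\cos(nd)$. The zero eigenvalue is transformed in closed form via (5), mirroring the split of small and large eigenvalues in (14) and (15):

```python
import math

def heat_kernel(d, t, nmax=80):
    # spectral expansion of the heat kernel on S^1 at angular distance d
    return (1 + 2*sum(math.exp(-n*n*t)*math.cos(n*d)
                      for n in range(1, nmax + 1)))/(2*math.pi)

def poisson_via_13(d, u, n=20000, T=50.0):
    # composite Simpson rule for the integral in (13); the constant eigenfunction
    # (eigenvalue 0) is transformed in closed form using (5) with r = 0
    def f(t):
        return (heat_kernel(d, t) - 1/(2*math.pi))*math.exp(-u*u/(4*t))*t**-1.5
    h = T/n
    s = f(T)  # the integrand tends to 0 as t -> 0, so that endpoint contributes nothing
    for i in range(1, n):
        s += f(i*h)*(4 if i % 2 else 2)
    return 1/(2*math.pi) + u/math.sqrt(4*math.pi)*s*h/3

def poisson_direct(d, u, nmax=200):
    # the expansion (14) with Z = 0: exponents -u*sqrt(lambda_n) = -u*|n|
    return (1 + 2*sum(math.exp(-n*u)*math.cos(n*d)
                      for n in range(1, nmax + 1)))/(2*math.pi)

val_integral = poisson_via_13(1.0, 0.8)
val_spectral = poisson_direct(1.0, 0.8)
```

Agreement to several digits is a direct numerical confirmation of (5) with $r^{2}=\lambda_{n}$.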
### 2.7 An elementary, yet important, rescaling observation By writing the Laplacian as in section 2.4, we have established specific conventions regarding various scales, or multiplicative constants, in our analysis. However, there is one additional scaling which could be considered. Specifically, one could consider the heat equation $\Delta_{z}+c\partial_{t}$ for any positive constant $c$. The associated heat kernel would be $K_{X}(z,w;t/c)$, if the heat kernel associated to $\Delta_{z}+\partial_{t}$ is $K_{X}(z,w;t)$. In doing so, we would replace (13) by ${\cal P}_{X,-Z}(z,w;u)=\frac{u}{\sqrt{4\pi}}\int_{0}^{\infty}K_{X}(z,w;t/c)e^{-Zt/c}e^{-u^{2}/(4t)}t^{-1/2}\,\frac{dt}{t}.$ (16) for some positive constant $c$. In effect, we are changing the parameterization of the positive real axis ${\mathbb{R}}^{+}$ from the parameter $t$ to $t/c$. In this manner, we rescale the data from the beginning of our consideration so that when we study the translation of the heat kernel, we can, provided $\rho_{0}^{2}>0$, choose $c$ appropriately so that the translation $\rho_{0}^{2}/c$ is always equal to $1/4$. In the examples we develop, the choice of $\rho_{0}^{2}$ will be determined by a “non-spectral” representation of the heat kernel, after which we choose $c=1/(4\rho_{0}^{2})$, provided $\rho_{0}\neq 0$. As it turns out, the translation by $1/4$ matters. This point will become relevant in section 6 below. ## 3 The wave distribution For $z,w\in X$ and function $g\in C^{\infty}_{c}({\mathbb{R}}^{+})$, we formally define the wave distribution ${\cal W}_{X,\rho_{0}^{2}}(z,w)(g)$ applied to $g$ by the series ${\cal W}_{X,\rho_{0}^{2}}(z,w)(g)=\sum_{\lambda_{j}\geq 0}H(t_{j},g)\psi_{j}(z)\overline{\psi}_{j}(w),$ (17) where $H(t_{j},g)$ is given by (6) and $t_{j}=\sqrt{\lambda_{j}-\rho_{0}^{2}}$ if $\lambda_{j}\geq\rho_{0}^{2}$, otherwise $t_{j}\in(0,i\rho_{0}]$.
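As an illustration of (17), again on the circle $S^{1}$ with $\rho_{0}=0$ (a toy model outside the paper's Kähler setting, used here only because $\lambda_{n}=n^{2}$, $t_{n}=n$ and $\psi_{n}=e^{inx}/\sqrt{2\pi}$ are explicit), apply the wave distribution to the Gaussian $g(u)=e^{-u^{2}/2}$. Since $H(n,g)=2\int_{0}^{\infty}\cos(nu)e^{-u^{2}/2}\,du=\sqrt{2\pi}\,e^{-n^{2}/2}$, the series (17) must return $\sqrt{2\pi}$ times the heat kernel at time $1/2$:

```python
import math

def H_num(r, n=40000, T=12.0):
    # H(r, g) = 2 * integral of cos(u r) e^{-u^2/2} over (0, T], by composite Simpson
    h = T/n
    s = 1.0 + math.cos(T*r)*math.exp(-T*T/2)
    for i in range(1, n):
        u = i*h
        s += math.cos(u*r)*math.exp(-u*u/2)*(4 if i % 2 else 2)
    return 2*s*h/3

d = 0.7  # angular distance z - w on the circle
# wave distribution (17): sum of H(n, g) against the normalized eigenfunctions
wave = (H_num(0) + 2*sum(H_num(n)*math.cos(n*d) for n in range(1, 25)))/(2*math.pi)
# predicted closed form: sqrt(2*pi) times the heat kernel at time 1/2
heat = math.sqrt(2*math.pi)*(1 + 2*sum(math.exp(-n*n/2)*math.cos(n*d)
                                       for n in range(1, 25)))/(2*math.pi)
```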
###### Definition 1 For $a\in{\mathbb{R}}^{+}$ and $m\in\mathbb{N}$, let $S_{m}^{\prime}({\mathbb{R}}^{+},a)$ be the set of Schwartz functions on ${\mathbb{R}}^{+}$ with $g^{(k)}(0)=0$ for all odd integers $k$ with $0\leq k\leq m+1$ and where $e^{ua}|g(u)|$ is dominated by an integrable function on ${\mathbb{R}}^{+}$. The following theorem addresses the question of convergence of (17). ###### Theorem 1 Fix $z,w\in X$ with $z\neq w$. Then there exist a continuous, real-valued function $F_{z,w}(u)$ on ${\mathbb{R}}^{+}$ and a sufficiently large integer $m$ such that the following assertions hold. 1. (i) One has that $F_{z,w}(u)=(-1)^{m+1}\sum_{\lambda_{j}<\rho_{0}^{2}}e^{u\sqrt{\rho_{0}^{2}-\lambda_{j}}}\cdot t_{j}^{-(m+1)}\psi_{j}(z)\overline{\psi}_{j}(w)+O(u^{m+1})$ as $u\rightarrow\infty$. 2. (ii) For any non-negative integer $j\leq m$, we have the bound $\partial_{u}^{j}F_{z,w}(u)=O(u^{m+1-j})$ as $u\rightarrow 0^{+}$. 3. (iii) For any $g\in S_{m}^{\prime}(\mathbb{R}^{+},\rho_{0})$ such that $\partial_{u}^{j}g(u)\exp(\rho_{0}u)$ has a limit as $u\to\infty$ and is bounded by some integrable function on $\mathbb{R}^{+}$ for all non-negative integers $j\leq m+1$, we have $\mathcal{W}_{X,\rho_{0}^{2}}(z,w)(g)=\int\limits_{0}^{\infty}F_{z,w}(u)\partial_{u}^{m+1}g(u)du.$ (18) The implied constants in the error terms in statements (i) and (ii) depend on $m$ and the distance between $z$ and $w$. Proof: Choose an integer $m\geq 4N+1$, where $N$ is the complex dimension of $X$. To begin, we claim the following statement: For every integer $k$ with $0\leq k\leq m$, there is a polynomial $h_{k,m}(x)$ of degree at most $m$ such that $h_{k,m}(\sin(x))=\frac{x^{k}}{k!}+O_{m}(x^{m+1})\,\,\,\,\,\text{\rm as $x\rightarrow 0$.}$ Indeed, one begins by setting $h_{k,m}^{(0)}(x)=x^{k}/k!$.
The function $h_{k,m}^{(0)}(\sin(x))$ has a Taylor series expansion near zero of the form $h_{k,m}^{(0)}(\sin(x))=\frac{x^{k}}{k!}+c_{\ell}x^{\ell}+O_{\ell}(x^{\ell+1})\,\,\,\,\,\text{\rm as $x\rightarrow 0$}$ for some real number $c_{\ell}$ and integer $\ell\geq k+1$. Now set $h_{k,m}^{(1)}(x)$ to be $h_{k,m}^{(1)}(x)=h_{k,m}^{(0)}(x)-c_{\ell}x^{\ell}$ so that $h_{k,m}^{(1)}(\sin(x))=\frac{x^{k}}{k!}+c_{p}x^{p}+O_{p}(x^{p+1})\,\,\,\,\,\text{\rm as $x\rightarrow 0$}$ for some real number $c_{p}$ and integer $p\geq\ell+1\geq k+2$. One can continue to subtract multiples of monomials of higher degree, thus further reducing the order of the error term until the claimed result is obtained; the proof of the assertion is completed by an elementary induction argument. Having proved the above-stated assertion, we then have for any $\zeta\in{\mathbb{C}}\setminus\\{0\\}$ that $e^{-t\zeta}-\sum_{k=0}^{m}h_{k,m}(\sin(t))(-\zeta)^{k}=O_{\zeta}(t^{m+1})\,\,\,\,\,\text{\rm as $t\rightarrow 0$.}$ (19) For $t>0$ we define $P_{m}(t,\zeta):=\frac{e^{-t\zeta}-\sum_{k=0}^{m}h_{k,m}(\sin(t))(-\zeta)^{k}}{(-t)^{m+1}}$ (20) and set $P_{m}(0,\zeta)=\lim_{t\to 0}P_{m}(t,\zeta)$; the existence of this limit is ensured by (19). For ${\mathrm{Re}}(\zeta)\geq 0$, we have $P_{m}(t,\zeta)=O(t^{-m-1})$ as $t\to\infty$. Hence, the bounds for the eigenvalue growth (8) and sup-norm for eigenfunctions (9), together with the choice of $m$, imply that the series $\tilde{F}_{z,w}(\zeta)=\sum_{\lambda_{j}\geq 0}P_{m}(t_{j},\zeta)\psi_{j}(z)\overline{\psi}_{j}(w)$ (21) converges uniformly and absolutely on $X$ for $\zeta$ in the closed half-plane ${\mathrm{Re}}(\zeta)\geq 0$. Furthermore, any of the first $m+1$ derivatives of $\tilde{F}_{z,w}(\zeta)$ in $\zeta$ converges uniformly and absolutely when ${\mathrm{Re}}(\zeta)>0$.
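The inductive construction of $h_{k,m}$ can be carried out verbatim in exact rational arithmetic. The sketch below is our own illustration (the working truncation degree $M=7$ and the sample choice $k=1$, $m=5$ are assumptions); for $k=1$ it recovers the truncated arcsine series $x+x^{3}/6+3x^{5}/40$:

```python
from fractions import Fraction
from math import factorial

M = 7  # all power series are truncated at degree M

def compose(p, q):
    # coefficients of p(q(x)), both given as coefficient lists, truncated to degree M
    result = [Fraction(0)]*(M + 1)
    power = [Fraction(1)] + [Fraction(0)]*M  # running powers q(x)^i
    for c in p:
        for i in range(M + 1):
            result[i] += c*power[i]
        new = [Fraction(0)]*(M + 1)
        for i, a in enumerate(power):
            if a:
                for j, b in enumerate(q):
                    if b and i + j <= M:
                        new[i + j] += a*b
        power = new
    return result

# Taylor coefficients of sin(x) up to degree M
sin_series = [Fraction(0) if n % 2 == 0 else Fraction((-1)**((n - 1)//2), factorial(n))
              for n in range(M + 1)]

def h_poly(k, m):
    # the proof's procedure: start from x^k/k! and repeatedly subtract the
    # lowest-order error monomial of h(sin x) - x^k/k! in degrees up to m
    h = [Fraction(0)]*(M + 1)
    h[k] = Fraction(1, factorial(k))
    target = list(h)
    while True:
        err = [a - b for a, b in zip(compose(h, sin_series), target)]
        bad = [deg for deg in range(m + 1) if err[deg] != 0]
        if not bad:
            return h
        h[bad[0]] -= err[bad[0]]

h15 = h_poly(1, 5)  # coefficients 0, 1, 0, 1/6, 0, 3/40, 0, 0
```

Each pass strictly increases the degree of the lowest surviving error term, which is exactly the induction in the proof.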
This allows us to differentiate the series above term by term, and by doing so $m+1$ times and using that $\frac{d^{m+1}}{d\zeta^{m+1}}P_{m}(t_{j},\zeta)=e^{-t_{j}\zeta}$ we conclude that for ${\mathrm{Re}}(\zeta)>0$, one has the identity $\frac{d^{m+1}}{d\zeta^{m+1}}\tilde{F}_{z,w}(\zeta)={\cal P}_{X,\rho_{0}^{2}}(z,w;\zeta),$ (22) where ${\cal P}_{X,\rho_{0}^{2}}$ is defined in (15). Set ${\cal P}^{(0)}(z,w;\zeta)={\cal P}_{X,\rho_{0}^{2}}(z,w;\zeta)$, and define inductively for ${\mathrm{Re}}(\zeta)>0$ the function ${\cal P}^{(k)}(z,w;\zeta)=\int_{0}^{\zeta}{\cal P}^{(k-1)}(z,w;\xi)\,d\xi,$ where the integral is taken over a ray contained in the upper half plane. Note that ${\cal P}^{(k)}(z,w;\zeta)=O_{z,w,k}(\zeta^{k})\;\;\text{as}\;\;\zeta\rightarrow 0.$ (23) From (22), we have that ${\cal P}^{(m+1)}(z,w;\zeta)-\tilde{F}_{z,w}(\zeta)=q_{m}(z,w;\zeta),$ (24) where $q_{m}(z,w;\zeta)$ is a degree $m$ polynomial in $\zeta$ with coefficients which depend on $z$ and $w$. Using this, for $u\in{\mathbb{R}}^{+}$ we define $F_{z,w}(u)=\frac{1}{2i}\left[\left(\tilde{F}_{z,w}(iu)+q_{m}(z,w;iu)\right)-\left(\tilde{F}_{z,w}(-iu)+q_{m}(z,w;-iu)\right)\right].$ (25) Assertions (i) and (ii) follow immediately from the above construction of $F$ and its relation to the Poisson kernel. Since the expansion (21) converges uniformly for ${\mathrm{Re}}(\zeta)=0$, property (iii) follows directly from (i), (ii) and $(m+1)$-fold term-by-term integration by parts. $\square$ ## 4 A basic test function The building block for our Kronecker limit formula is obtained by applying the wave distribution to the function $\cosh^{-(s-\rho_{0})}(u)$. As stated above, we will choose $\rho_{0}$ depending on the geometry of $X$ and then re-scale the time variable in the heat kernel so that ultimately we have either $\rho_{0}=0$ or $\rho_{0}=1/2$. For the time being, let us work out the results for a general $\rho_{0}$.
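Before stating the spectral expansion in the next proposition, it is straightforward to check numerically that the cosine transform (6) of $\cosh^{-\nu}$ agrees with the Gamma-factor expression obtained from (3). The sketch below is our own check (the Lanczos approximation and the sample values $\nu=1.7$, $r=0.6$ are choices we make, not data from the paper):

```python
import cmath
import math

# Lanczos coefficients (g = 7), a standard published set, adequate for Re(z) > 1/2
LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
           771.32342877765313, -176.61502916214059, 12.507343278686905,
           -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    # Gamma(z) for complex z with Re(z) > 1/2, which suffices here
    z = complex(z) - 1
    x = LANCZOS[0] + sum(LANCZOS[i]/(z + i) for i in range(1, 9))
    t = z + 7.5
    return math.sqrt(2*math.pi)*cmath.exp((z + 0.5)*cmath.log(t) - t)*x

def H_cosh(r, nu, n=40000, T=60.0):
    # H(r, cosh^{-nu}) from (6), computed by a composite Simpson rule on [0, T]
    h = T/n
    s = 1.0 + math.cos(T*r)*math.cosh(T)**(-nu)
    for i in range(1, n):
        u = i*h
        s += math.cos(u*r)*math.cosh(u)**(-nu)*(4 if i % 2 else 2)
    return 2*s*h/3

nu, r = 1.7, 0.6
numeric = H_cosh(r, nu)
closed = (2**(nu - 1)/cgamma(nu)*cgamma((nu - 1j*r)/2)*cgamma((nu + 1j*r)/2)).real
```

Multiplying (3) by $2$ gives $H(r,\cosh^{-\nu})=\frac{2^{\nu-1}}{\Gamma(\nu)}\Gamma\bigl(\frac{\nu-ir}{2}\bigr)\Gamma\bigl(\frac{\nu+ir}{2}\bigr)$, which is the closed form tested here.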
###### Proposition 1 For $s\in{\mathbb{C}}$ with ${\mathrm{Re}}(s)>2\rho_{0}$, the wave distribution of $g(u)=\cosh^{-(s-\rho_{0})}(u)$ exists and admits the spectral expansion ${\cal W}_{X,\rho_{0}^{2}}(z,w)(\cosh^{-(s-\rho_{0})})=\sum_{\lambda_{j}\geq 0}c_{j,(s-\rho_{0})}\psi_{j}(z)\overline{\psi}_{j}(w),$ (26) where $c_{j,(s-\rho_{0})}=\frac{2^{s-\rho_{0}-1}}{\Gamma(s-\rho_{0})}\Gamma\left(\frac{s-\rho_{0}-it_{j}}{2}\right)\Gamma\left(\frac{s-\rho_{0}+it_{j}}{2}\right).$ (27) Furthermore, for any $z,w\in X$, the series (26) converges absolutely and uniformly in $s$ on any compact subset of the half-plane ${\mathrm{Re}}(s)>2\rho_{0}$. Proof: If ${\mathrm{Re}}(s)>2\rho_{0}$, then the conditions of Theorem 1 apply. The spectral coefficients are computed using (3) and (4). Finally, Stirling’s formula (2) implies that the factor (27) decays exponentially as $t_{j}\to\infty$. When combined with the sup-norm bound (9) on the eigenfunctions $\psi_{j}$, the assertion regarding uniform convergence follows. $\square$ ###### Corollary 1 For $z,w\in X$ with $z\neq w$ and $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>2\rho_{0}$ let $K_{X;\rho_{0}^{2}}(z,w;s):=\frac{\Gamma(s-\rho_{0})}{\Gamma(s)}{\cal W}_{X,\rho_{0}^{2}}(z,w)(\cosh^{-(s-\rho_{0})}).$ Then the function $\Gamma(s)\Gamma^{-1}(s-\rho_{0})K_{X;\rho_{0}^{2}}(z,w;s)$ admits a meromorphic continuation to all $s\in{\mathbb{C}}$ with poles at points $s=\rho_{0}\pm it_{j}-2m$ for any integer $m\geq 0$. Furthermore, the function $K_{X;\rho_{0}^{2}}(z,w;s)$ satisfies the differential-difference equation $(\Delta_{X}+s(s-2\rho_{0}))K_{X;\rho_{0}^{2}}(z,w;s)=s(s+1)K_{X;\rho_{0}^{2}}(z,w;s+2).$ (28) Proof: By Proposition 1, $K_{X;\rho_{0}^{2}}(z,w;s)$ is well defined for ${\mathrm{Re}}(s)>2\rho_{0}$. 
Keeping ${\mathrm{Re}}(s)>2\rho_{0}$, we have $K_{X;\rho_{0}^{2}}(z,w;s)=\frac{\Gamma(s-\rho_{0})}{\Gamma(s)}\sum_{\lambda_{j}\geq 0}H(t_{j},\cosh^{-(s-\rho_{0})})\psi_{j}(z)\overline{\psi_{j}}(w).$ (29) For $\nu\in\mathbb{C}$, ${\mathrm{Re}}(\nu)>\rho_{0}$, $r\in\mathbb{R}^{+}$ or $r\in[0,i\rho_{0}]$ and non-negative integer $n$, one has that $H(r,\cosh^{-(\nu+2n)})=H(r,\cosh^{-\nu})\frac{2^{2n}\left(\frac{\nu+ir}{2}\right)_{n}\left(\frac{\nu- ir}{2}\right)_{n}}{(\nu)_{2n}}.$ (30) Indeed, the evaluation of $H(t_{j},\cosh^{-(s-\rho_{0})})$ in terms of the Gamma function is stated in (27). One can then use that $\Gamma(s+1)=s\Gamma(s)$ and the definition of the Pochhammer symbol $(s)_{n}$ to arrive at (30). With this, we can write, for any positive integer $n$, $\frac{2^{2n}\Gamma(s)}{\Gamma(s-\rho_{0})}K_{X;\rho_{0}^{2}}(z,w;s)=\frac{\Gamma(s-\rho_{0}+2n)}{\Gamma(s-\rho_{0})}\sum_{\lambda_{j}\geq 0}\frac{H(t_{j},\cosh^{-(s-\rho_{0}+2n)})}{Q_{n}(t_{j},s-\rho_{0})}\psi_{j}(z)\overline{\psi_{j}}(w),$ (31) where $Q_{n}(r,\nu)=\left(\frac{\nu+ir}{2}\right)_{n}\left(\frac{\nu- ir}{2}\right)_{n}.$ For $n\geq\lfloor\rho_{0}\rfloor+1$, the right-hand-side of (31) defines a meromorphic function in the half-plane ${\mathrm{Re}}(s)>2\rho_{0}-2n$ with possible poles at the points $s=\rho_{0}\pm it_{j}-2l$, for $l\in\{0,\dots,n-1\}$. Therefore, the function $\Gamma(s)\Gamma^{-1}(s-\rho_{0})K_{X;\rho_{0}^{2}}(z,w;s)$ admits a meromorphic continuation to all $s\in{\mathbb{C}}$ with poles at points $s=\rho_{0}\pm it_{j}-2m$ for integers $m\geq 0$. It remains to prove the differential-difference equation. As stated, the right-hand-side of equation (29) converges absolutely and uniformly on compact subsets of the right half plane ${\mathrm{Re}}(s)>2\rho_{0}$. When viewed as a function of $z\in X$, the convergence is uniform on $X$. 
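Identity (30) is a pure Gamma-function statement, so it can be spot-checked numerically from the evaluation (27) with $\nu=s-\rho_{0}$. The sketch below (illustrative parameters; the spectral parameter is taken purely imaginary, $r=iy$, so that all Gamma arguments are real) confirms one instance:

```python
from math import gamma

def poch(a, n):
    # Pochhammer symbol (a)_n = a (a + 1) ... (a + n - 1)
    out = 1.0
    for j in range(n):
        out *= a + j
    return out

def H_val(y, nu):
    # Gamma-product evaluation of H(r, cosh^{-nu}) as in (27), at r = i*y
    return 2.0 ** (nu - 1) / gamma(nu) * gamma((nu - y) / 2) * gamma((nu + y) / 2)

nu, y, n = 3.0, 0.4, 2   # illustrative values; we need |y| < nu
lhs = H_val(y, nu + 2 * n)
rhs = H_val(y, nu) * 2.0 ** (2 * n) * poch((nu - y) / 2, n) * poch((nu + y) / 2, n) / poch(nu, 2 * n)
err = abs(lhs - rhs)
```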
Therefore, when restricting $s$ to ${\mathrm{Re}}(s)>2\rho_{0}$, we can interchange the action of $\Delta_{X}$ and the sum in (29) to get $\Delta_{X}K_{X;\rho_{0}^{2}}(z,w;s)=\frac{\Gamma(s-\rho_{0})}{\Gamma(s)}\sum_{\lambda_{j}\geq 0}(t_{j}^{2}+\rho_{0}^{2})H(t_{j},\cosh^{-(s-\rho_{0})})\psi_{j}(z)\overline{\psi_{j}}(w).$ Applying (30) with $n=1$ we can write the above equation, for sufficiently large ${\mathrm{Re}}(s)$, as $\Delta_{X}K_{X;\rho_{0}^{2}}(z,w;s)=\frac{\Gamma(s+2-\rho_{0})}{\Gamma(s)}\sum_{\lambda_{j}\geq 0}\frac{(t_{j}^{2}+\rho_{0}^{2})}{(s-\rho_{0})^{2}+t_{j}^{2}}H(t_{j},\cosh^{-(s+2-\rho_{0})})\psi_{j}(z)\overline{\psi_{j}}(w).$ Let $n=1$ in (31) and multiply by $2^{-2}s(s-2\rho_{0})\Gamma(s-\rho_{0})\Gamma^{-1}(s)$ to get $s(s-2\rho_{0})K_{X;\rho_{0}^{2}}(z,w;s)=s(s+1)\frac{\Gamma(s+2-\rho_{0})}{\Gamma(s+2)}\sum_{\lambda_{j}\geq 0}H(t_{j},\cosh^{-(s+2-\rho_{0})})\frac{s^{2}-2s\rho_{0}}{(s-\rho_{0})^{2}+t_{j}^{2}}\psi_{j}(z)\overline{\psi_{j}}(w).$ Adding the last two equations, we obtain the desired result for sufficiently large ${\mathrm{Re}}(s)$, and then for all $s$ by meromorphic continuation. $\square$ ###### Remark 1 It is necessary to assume that $z\neq w$ when considering the wave distribution of the test function $g(u)=\cosh^{-(s-\rho_{0})}(u)$. Only after one computes the spectral expansion of $K_{X;\rho_{0}^{2}}(z,w;s)$ is one able to extend the function to $z=w$. ## 5 Two series expansions We will define two series using the function $K_{X;\rho_{0}^{2}}(z,w;s)$. The first, in the next theorem, is shown to equal the resolvent kernel, which is the integral kernel that inverts the operator $(\Delta_{X}+s(s-2\rho_{0}))$ for almost all values of $s$. As a reminder, the resolvent kernel can be realized as an integral transform of the heat kernel $K_{X}(z,w;t)$, namely $\int\limits_{0}^{\infty}K_{X}(z,w;t)e^{-s(s-2\rho_{0})t}dt,$ provided $z\neq w$ and ${\mathrm{Re}}(s(s-2\rho_{0}))>0$. 
In that instance, the heat kernel decays exponentially as $t$ approaches zero, so then $\displaystyle\int\limits_{0}^{\infty}K_{X}(z,w;t)e^{-s(s-2\rho_{0})t}dt$ $\displaystyle=\lim\limits_{\epsilon\rightarrow 0}\int\limits_{\epsilon}^{\infty}K_{X}(z,w;t)e^{-s(s-2\rho_{0})t}dt$ $\displaystyle=\lim\limits_{\epsilon\rightarrow 0}\sum\limits_{\lambda_{j}\geq 0}\frac{1}{(s-\rho_{0})^{2}+t_{j}^{2}}\psi_{j}(z)\overline{\psi_{j}}(w)\cdot e^{-\epsilon(s(s-2\rho_{0})+\lambda_{j})}.$ (32) ###### Theorem 2 For $z,w\in X$, $z\neq w$ and $s\in{\mathbb{C}}$ with ${\mathrm{Re}}(s)>2\rho_{0}$ consider the function $G_{X;\rho_{0}^{2}}(z,w;s)=\frac{2^{-s-1+\rho_{0}}\Gamma(s)}{\Gamma(s+1-\rho_{0})}\sum_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}K_{X;\rho_{0}^{2}}(z,w;s+2k).$ (33) Then we have the following results. * i) The series defining $G_{X;\rho_{0}^{2}}(z,w;s)$ is holomorphic in the half-plane ${\mathrm{Re}}(s)>2\rho_{0}$ and continues meromorphically to the whole $s$-plane. * ii) The function $G_{X;\rho_{0}^{2}}(z,w;s)$ admits the spectral expansion $G_{X;\rho_{0}^{2}}(z,w;s)=\sum_{\lambda_{j}\geq 0}\frac{1}{(s-\rho_{0})^{2}+t_{j}^{2}}\psi_{j}(z)\overline{\psi_{j}}(w)$ which is conditionally convergent, in the sense of (32), for $z\neq w$ and for all $s\in{\mathbb{C}}$ provided $s(s-2\rho_{0})+\lambda_{j}\neq 0$ for all $\lambda_{j}$. * iii) The function $G_{X;\rho_{0}^{2}}(z,w;s)$ satisfies the equation $(\Delta_{X}+s(s-2\rho_{0}))G_{X;\rho_{0}^{2}}(z,w;s)=0,$ (34) for all $s\in{\mathbb{C}}$ provided $s(s-2\rho_{0})+\lambda_{j}\neq 0$ for all $\lambda_{j}$. 
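The spectral expansion in assertion (ii) rests on an elementary identity: since $\lambda_{j}=t_{j}^{2}+\rho_{0}^{2}$, one has $s(s-2\rho_{0})+\lambda_{j}=(s-\rho_{0})^{2}+t_{j}^{2}$, so each term of the heat kernel transform in (32) integrates to $1/((s-\rho_{0})^{2}+t_{j}^{2})$. A short numerical illustration (a sketch with arbitrary values; not part of the proof):

```python
from math import exp

s, rho0, tj = 1.7, 0.5, 2.3          # arbitrary illustrative values
lam = tj ** 2 + rho0 ** 2            # lambda_j = t_j^2 + rho_0^2
a = s * (s - 2 * rho0) + lam         # exponent in the heat kernel transform
# algebraic identity: s(s - 2 rho_0) + lambda_j = (s - rho_0)^2 + t_j^2
check1 = abs(a - ((s - rho0) ** 2 + tj ** 2))

# each spectral term of the transform: int_0^infty e^{-a t} dt = 1/a,
# approximated here by the trapezoidal rule on [0, T]
h, T = 1e-4, 10.0
n = int(T / h)
integral = h * (0.5 * (1.0 + exp(-a * T)) + sum(exp(-a * k * h) for k in range(1, n)))
check2 = abs(integral - 1.0 / a)
```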
Proof: Let us study each term in the series (33), which is $\displaystyle\frac{2^{-s-1+\rho_{0}}\Gamma(s)}{\Gamma(s+1-\rho_{0})}$ $\displaystyle\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}K_{X;\rho_{0}^{2}}(z,w;s+2k)$ $\displaystyle=2^{-1+\rho_{0}}\frac{2^{-(s+2k)}\Gamma(s+2k-\rho_{0})}{\Gamma(k+1)\Gamma(s+1-\rho_{0}+k)}{\cal W}_{X,\rho_{0}^{2}}(z,w)(\cosh^{-(s+2k-\rho_{0})}).$ (35) For now, let us assume that $\textrm{\rm Re}(s)>2\rho_{0}$. A direct computation using Stirling’s formula (1) yields that $2^{-1+\rho_{0}}\frac{2^{-(s+2k)}\Gamma(s+2k-\rho_{0})}{\Gamma(k+1)\Gamma(s+1-\rho_{0}+k)}=O_{s}(k^{-3/2})\,\,\,\,\,\text{\rm as $k\rightarrow\infty$}.$ (36) It remains to determine the asymptotic behavior of the factor in (35) involving the wave distribution. For this, Theorem 1 implies that for any $\delta>0$, there is a $C>1$ depending upon the distance between $z$ and $w$ such that we have the bound $\int\limits_{\delta}^{\infty}F_{z,w}(u)\partial_{u}^{m+1}\left(\cosh^{-(s+2k-\rho_{0})}(u)\right)du=O_{s,z,w}(C^{-(s+2k-\rho_{0})})\,\,\,\,\,\text{\rm as $k\rightarrow\infty$},$ where the implied constant depends on the distance between $z$ and $w$, and where $m\geq 4N+1$ is a sufficiently large, fixed integer. Since $z\neq w$, we can combine equations (22), (23), (24) and (25) together with integration by parts to write, for some $C_{1}>1$ $\displaystyle\int\limits_{0}^{\delta}F_{z,w}(u)\partial_{u}^{m+1}\left(\cosh^{-(s+2k-\rho_{0})}(u)\right)du$ $\displaystyle=(-1)^{m+1}\int\limits_{0}^{\delta}\left(\partial_{u}^{m+1}F_{z,w}(u)\right)\cosh^{-(s+2k-\rho_{0})}(u)du$ (37) $\displaystyle+O_{s,z,w}(C_{1}^{-(s+2k-\rho_{0})})\,\,\,\,\,\text{\rm as $k\rightarrow\infty$}.$ In essence, the use of Theorem 1 ensures that the boundary term at $u=0$ vanishes, so then the constant $C_{1}$ comes from the evaluation of the boundary terms at $u=\delta$ and depends on the distance between $z$ and $w$. 
To finish, we may use (7) where $h(t)=-\log(\cosh(t))$, so then $\lambda=1$ and $\nu=2$, to conclude that $\int\limits_{0}^{\delta}\left(\partial_{u}^{m+1}F_{z,w}(u)\right)\cosh^{-(s+2k-\rho_{0})}(u)du=O_{s,z,w}(k^{-1/2})\,\,\,\,\,\text{\rm as $k\rightarrow\infty$},$ (38) where, again, the implied constant depends on the distance between $z$ and $w$, and we assume that $z\neq w$. If we combine (36), (37), and (38), we obtain that (35) is of order $O_{s,z,w}(k^{-2})$. Therefore, the series (33) converges uniformly and absolutely for $s$ in compact subsets of the right half-plane ${\mathrm{Re}}(s)>2\rho_{0}$, and for $z,w\in X$ provided $z$ and $w$ are uniformly bounded apart. At this point, we have the convergence of the series defining $G_{X;\rho_{0}^{2}}(z,w;s)$ for ${\mathrm{Re}}(s)>2\rho_{0}$. In order to obtain the meromorphic continuation of (33), rewrite the series as a finite sum of terms for $k\leq n$ and an infinite sum for $k>n$, for any integer $n$. For the finite sum, the meromorphic continuation is established in Corollary 1. For the infinite sum, the above argument applies to prove the convergence in the half-plane ${\mathrm{Re}}(s)>2\rho_{0}-n$. With this, we have completed the proof of assertion (i). Going further, one can follow the argument given above using (2) for any positive integer $\ell$ and conclude that for $z\neq w$, $s$ in some compact subset of the half-plane ${\mathrm{Re}}(s)>2\rho_{0}$, we have $\partial_{s}^{\ell}\left(\frac{2^{-s-1+\rho_{0}}\Gamma(s)}{\Gamma(s+1-\rho_{0})}\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}K_{X;\rho_{0}^{2}}(z,w;s+2k)\right)=O_{s,z,w}(k^{-2-\ell/2})\,\,\,\,\,\text{\rm as $k\rightarrow\infty$},$ where the implied constant depends on the distance between $z$ and $w$ and the compact set which contains $s$. 
Namely, repeated differentiation of Gamma factors $\ell$ times reduces the exponent by $\ell$, while differentiation of (37), after application of formula (7), reduces the exponent by $\ell/2$. The convergence of the series (33) for ${\mathrm{Re}}(s)>2\rho_{0}$ as well as the series of derivatives allows us to interchange differentiation and summation. Therefore, for $z\neq w$, ${\mathrm{Re}}(s)>2\rho_{0}$ and any $\ell\geq 0$ we get $\partial_{s}^{\ell}\left(\sum_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}K_{X;\rho_{0}^{2}}(z,w;s+2k)\right)=\sum_{k=0}^{\infty}\partial_{s}^{\ell}\left(\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}K_{X;\rho_{0}^{2}}(z,w;s+2k)\right).$ (39) Now, we would like to include the case $z=w$. Recall that $K_{X;\rho_{0}^{2}}(z,w;s)=\frac{\Gamma(s-\rho_{0})}{\Gamma(s)}\sum_{\lambda_{j}\geq 0}c_{j,(s-\rho_{0})}\psi_{j}(z)\overline{\psi_{j}}(w),$ where the spectral coefficients $c_{j,(s-\rho_{0})}$ are given by (27). The coefficients $c_{j,(s-\rho_{0})}$ are exponentially decreasing in $t_{j}=\sqrt{\lambda_{j}-\rho_{0}^{2}}$, as $j\to\infty$, and differentiable with respect to $s$, with the derivatives also exponentially decreasing in $t_{j}$. Moreover, the application of the sup-norm bound for the eigenfunctions $\psi_{j}$ and the Stirling formula for coefficients $c_{j,(s+2k-\rho_{0})}$ shows that, uniformly in $z,w\in X$ for $s$ in a compact subset of the half-plane ${\mathrm{Re}}(s)>2\rho_{0}$, one has $\left|{\cal W}_{X,\rho_{0}^{2}}(z,w)(\cosh^{-(s+2k-\rho_{0})})\right|=O_{s,z,w}(k^{N+1}),$ where $N$ is the complex dimension of $X$. In addition, repeated differentiation of the coefficients with respect to $s$ reduces the exponent of $k$ by one each time. 
Therefore, for sufficiently large $\ell$ $\sum_{k=0}^{\infty}\left|\partial_{s}^{\ell}\left(\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}K_{X;\rho_{0}^{2}}(z,w;s+2k)\right)\right|=O_{s,z,w}(1),$ where ${\mathrm{Re}}(s)>2\rho_{0}$, and the bound is uniform in $z,w\in X$. Hence, we may interchange the sum and the integral to get, for sufficiently large $\ell$ $\displaystyle\int_{X}\partial_{s}^{\ell}$ $\displaystyle\left(\sum_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}K_{X;\rho_{0}^{2}}(z,w;s+2k)\right)\psi_{j}(w)\mu(w)$ $\displaystyle=\sum_{k=0}^{\infty}\partial_{s}^{\ell}\left(\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}\int_{X}K_{X;\rho_{0}^{2}}(z,w;s+2k)\psi_{j}(w)\mu(w)\right)$ $\displaystyle=\partial_{s}^{\ell}\left(\sum_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}\int_{X}K_{X;\rho_{0}^{2}}(z,w;s+2k)\psi_{j}(w)\mu(w)\right),$ where the last equation above follows from the absolute and uniform convergence of the series over $k$, derived in the previous lines. 
From the spectral expansion of $K_{X;\rho_{0}^{2}}(z,w;s)$ we immediately get $\displaystyle\int_{X}$ $\displaystyle K_{X;\rho_{0}^{2}}(z,w;s+2k)\psi_{j}(w)\mu(w)$ $\displaystyle=\frac{2^{s+2k-\rho_{0}-1}}{\Gamma(s+2k)}\Gamma\left(\frac{s+2k-\rho_{0}-it_{j}}{2}\right)\Gamma\left(\frac{s+2k-\rho_{0}+it_{j}}{2}\right)\psi_{j}(z)$ $\displaystyle=\frac{2^{s+2k-\rho_{0}-1}}{\Gamma(s+2k)}\left(\frac{s-\rho_{0}-it_{j}}{2}\right)_{k}\left(\frac{s-\rho_{0}+it_{j}}{2}\right)_{k}\Gamma\left(\frac{s-\rho_{0}-it_{j}}{2}\right)\Gamma\left(\frac{s-\rho_{0}+it_{j}}{2}\right)\psi_{j}(z).$ An application of the doubling formula for the Gamma function yields that $\frac{2^{s+2k-\rho_{0}-1}\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{\Gamma(s+2k)}=\frac{2^{s-1-\rho_{0}}}{\Gamma(s)}.$ Therefore, $\displaystyle\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}\int_{X}$ $\displaystyle K_{X;\rho_{0}^{2}}(z,w;s+2k)\psi_{j}(w)\mu(w)$ $\displaystyle=\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}\frac{2^{s+2k-\rho_{0}-1}}{\Gamma(s+2k)}\Gamma\left(\frac{s+2k-\rho_{0}-it_{j}}{2}\right)\Gamma\left(\frac{s+2k-\rho_{0}+it_{j}}{2}\right)\psi_{j}(z)$ $\displaystyle=\frac{2^{s-1-\rho_{0}}}{\Gamma(s)}\frac{\left(\frac{s-\rho_{0}-it_{j}}{2}\right)_{k}\left(\frac{s-\rho_{0}+it_{j}}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}\Gamma\left(\frac{s-\rho_{0}-it_{j}}{2}\right)\Gamma\left(\frac{s-\rho_{0}+it_{j}}{2}\right)\psi_{j}(z).$ Observe that ${\mathrm{Re}}\left(\frac{s-\rho_{0}-it_{j}}{2}+\frac{s-\rho_{0}+it_{j}}{2}-(s+1-\rho_{0})\right)=-1<0$, so then the hypergeometric function $\sum_{k=0}^{\infty}\frac{\left(\frac{s-\rho_{0}-it_{j}}{2}\right)_{k}\left(\frac{s-\rho_{0}+it_{j}}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}=F\left(\frac{s-\rho_{0}-it_{j}}{2},\frac{s-\rho_{0}+it_{j}}{2},s+1-\rho_{0};1\right)$ is uniformly and absolutely convergent. 
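The doubling-formula step used above, $\frac{2^{s+2k-\rho_{0}-1}\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{\Gamma(s+2k)}=\frac{2^{s-1-\rho_{0}}}{\Gamma(s)}$, can be confirmed numerically; the sketch below checks one instance with illustrative parameter values.

```python
from math import gamma

def poch(a, n):
    # Pochhammer symbol (a)_n = a (a + 1) ... (a + n - 1)
    out = 1.0
    for j in range(n):
        out *= a + j
    return out

s, rho0, k = 2.5, 0.5, 3   # illustrative values
lhs = 2.0 ** (s + 2 * k - rho0 - 1) * poch(s / 2, k) * poch(s / 2 + 0.5, k) / gamma(s + 2 * k)
rhs = 2.0 ** (s - 1 - rho0) / gamma(s)
err = abs(lhs - rhs)
```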
From [GR07], formula 9.122.1 we get $F\left(\frac{s-\rho_{0}-it_{j}}{2},\frac{s-\rho_{0}+it_{j}}{2},s+1-\rho_{0};1\right)=\frac{\Gamma(s+1-\rho_{0})}{\Gamma\left(\frac{s-\rho_{0}+it_{j}}{2}\right)\Gamma\left(\frac{s-\rho_{0}-it_{j}}{2}\right)}\cdot\frac{4}{(s-\rho_{0})^{2}+t_{j}^{2}}.$ Therefore, $\sum_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{k!(s+1-\rho_{0})_{k}}\int_{X}K_{X;\rho_{0}^{2}}(z,w;s+2k)\psi_{j}(w)\mu(w)=\frac{2^{s+1-\rho_{0}}\Gamma(s+1-\rho_{0})}{\Gamma(s)((s-\rho_{0})^{2}+t_{j}^{2})}\psi_{j}(z).$ This, together with the definition of the function $G_{X;\rho_{0}^{2}}(z,w;s)$ yields $\int_{X}\partial_{s}^{\ell}\left(G_{X;\rho_{0}^{2}}(z,w;s)\right)\psi_{j}(w)\mu(w)=\partial_{s}^{\ell}\left(\frac{1}{(s-\rho_{0})^{2}+t_{j}^{2}}\right)\psi_{j}(z),$ (40) for sufficiently large positive integer $\ell$. The above computations are valid provided ${\mathrm{Re}}(s)>2\rho_{0}$. The arguments could be repeated with the portion of the series in (33) with $k>n$, for an arbitrary positive integer $n$, from which one would arrive at a version of (40) where the right-hand-side would have a finite sum of terms subtracted with the restriction that ${\mathrm{Re}}(s)>2\rho_{0}-n$. However, there is no problem interchanging sum and differentiation for the finite sum of terms in (40) obtained by considering those with $k\leq n$, from which we conclude that (40) holds for all $s$ with ${\mathrm{Re}}(s)>2\rho_{0}-n$ provided $s$ is not a pole of (33). There is a unique meromorphic function $\tilde{G}(z,w;s)$ which is symmetric in $z$ and $w$ and satisfies $(\Delta_{X}+s(s-2\rho_{0}))\tilde{G}(z,w;s)=0$. 
Indeed, for ${\mathrm{Re}}(s(s-2\rho_{0}))>0$ one can express $\tilde{G}(z,w;s)$ as an integral transform of the heat kernel, namely $\tilde{G}(z,w;s)=\int\limits_{0}^{\infty}K_{X}(z,w;t)e^{-s(s-2\rho_{0})t}dt.$ At this point, we have that $\tilde{G}(z,w;s)=G_{X;\rho_{0}^{2}}(z,w;s)+p_{\ell}(s)$, where $p_{\ell}(s)$ is a polynomial of degree $\ell$. The asymptotic behavior as $s$ tends to infinity can be computed for $G_{X;\rho_{0}^{2}}(z,w;s)$ using Stirling’s formula, and that of $\tilde{G}(z,w;s)$ using the above integral expression. By combining, we get that $p_{\ell}(s)=o(1)$ as $s$ tends to infinity, thus $p_{\ell}(s)=0$. This proves that $G_{X;\rho_{0}^{2}}(z,w;s)$ coincides with the conditionally convergent series given as a limit (32) for ${\mathrm{Re}}(s)>2\rho_{0}$. Moreover, since both $G_{X;\rho_{0}^{2}}(z,w;s)$ and the resolvent kernel $\tilde{G}(z,w;s)$ possess meromorphic continuation to the whole complex plane ${\mathbb{C}}$, they must coincide. With all this, assertions (ii) and (iii) are established. $\square$ ###### Theorem 3 Let $E_{X;\rho_{0}^{2}}(z,w;s)=\frac{\Gamma((s+1-2\rho_{0})/2)}{\Gamma(s/2)}\sum\limits_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}}{k!}K_{X;\rho_{0}^{2}}(z,w;s+2k)$ for $z,w\in X$, $z\neq w$. Then $E_{X;\rho_{0}^{2}}(z,w;s)$ converges to a meromorphic function for $\textrm{\rm Re}(s)<0$ away from the poles of any $K_{X;\rho_{0}^{2}}(z,w;s+2k)$ and negative integers. 
Furthermore, $E_{X;\rho_{0}^{2}}(z,w;s)$ extends to a meromorphic function for all $s$ and satisfies the differential-difference equation $(\Delta_{X}+s(s-2\rho_{0}))E_{X;\rho_{0}^{2}}(z,w;s)=-s^{2}E_{X;\rho_{0}^{2}}(z,w;s+2)$ (41) Proof: The estimates in the proof of Theorem 2, namely (36), (37), and (38) combine to show that $\frac{\left(\frac{s}{2}\right)_{k}}{k!}K_{X;\rho_{0}^{2}}(z,w;s+2k)=O_{s,z,w}(k^{s/2-\rho_{0}-3/2})\,\,\,\,\,\text{\rm as $k\rightarrow\infty$},$ where the implied constant depends upon $s$ and upon the distance between points $z$ and $w$. Therefore, the series converges for $s$ with $\textrm{Re}(s)<0$ provided no term has a pole. Set $\tilde{E}_{X;\rho_{0}^{2}}(z,w;s)=\sum\limits_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}}{k!}K_{X;\rho_{0}^{2}}(z,w;s+2k).$ Using the difference-differential equation for $K_{X;\rho_{0}^{2}}(z,w;s)$, as established in Corollary 1, we can prove such an equation for $\tilde{E}_{X;\rho_{0}^{2}}(z,w;s)$. Indeed, for ${\mathrm{Re}}(s)\ll 0$ begin by writing $\displaystyle(\Delta_{X}$ $\displaystyle+s(s-2\rho_{0}))\tilde{E}_{X;\rho_{0}^{2}}(z,w;s)=\sum\limits_{k=0}^{\infty}\left(\frac{\left(\frac{s}{2}\right)_{k}}{k!}\Delta_{X}K_{X;\rho_{0}^{2}}(z,w;s+2k)+\frac{\left(\frac{s}{2}\right)_{k}}{k!}s(s-2\rho_{0})K_{X;\rho_{0}^{2}}(z,w;s+2k)\right)$ $\displaystyle=\sum\limits_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}}{k!}\left(-(s+2k)(s+2k-2\rho_{0})K_{X;\rho_{0}^{2}}(z,w;s+2k)+(s+2k)(s+2k+1)K_{X;\rho_{0}^{2}}(z,w;s+2k+2)\right)$ $\displaystyle\hskip 14.22636pt+\sum\limits_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}}{k!}s(s-2\rho_{0})K_{X;\rho_{0}^{2}}(z,w;s+2k)$ $\displaystyle=\sum\limits_{k=1}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}}{k!}\left(-(s+2k)(s+2k-2\rho_{0})+s(s-2\rho_{0})\right)K_{X;\rho_{0}^{2}}(z,w;s+2k)$ $\displaystyle\hskip 14.22636pt+\sum\limits_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}}{k!}(s+2k)(s+2k+1)K_{X;\rho_{0}^{2}}(z,w;s+2k+2)$ 
$\displaystyle=\sum\limits_{n=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{n+1}}{(n+1)!}\left(-(s+2n+2)(s+2n+2-2\rho_{0})+s(s-2\rho_{0})\right)K_{X;\rho_{0}^{2}}(z,w;s+2n+2)$ $\displaystyle\hskip 14.22636pt+\sum\limits_{n=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{n}}{n!}(s+2n)(s+2n+1)K_{X;\rho_{0}^{2}}(z,w;s+2n+2).$ Since $-(s+2n+2)(s+2n+2-2\rho_{0})+s(s-2\rho_{0})=-(2n+2)(2s+2n+2-2\rho_{0}),$ the coefficient of $K_{X;\rho_{0}^{2}}(z,w;s+2n+2)$ in the last expression is $-\frac{\left(\frac{s}{2}\right)_{n+1}}{(n+1)!}(2n+2)(2s+2n+2-2\rho_{0})+\frac{\left(\frac{s}{2}\right)_{n}}{n!}(s+2n)(s+2n+1).$ Using the definition of the Pochhammer symbol, it is elementary to show that $-\frac{\left(\frac{s}{2}\right)_{n+1}}{(n+1)!}(2n+2)(2s+2n+2-2\rho_{0})+\frac{\left(\frac{s}{2}\right)_{n}}{n!}(s+2n)(s+2n+1)=\frac{\left(\frac{s+2}{2}\right)_{n}}{n!}(-s(s+1-2\rho_{0})),$ hence we arrive at the equation $(\Delta_{X}+s(s-2\rho_{0}))\tilde{E}_{X;\rho_{0}^{2}}(z,w;s)=-s(s+1-2\rho_{0})\tilde{E}_{X;\rho_{0}^{2}}(z,w;s+2).$ Notice that $E_{X;\rho_{0}^{2}}(z,w;s)=\frac{\Gamma((s+1-2\rho_{0})/2)}{\Gamma(s/2)}\tilde{E}_{X;\rho_{0}^{2}}(z,w;s),$ so then $\displaystyle(\Delta_{X}+s(s-2\rho_{0}))E_{X;\rho_{0}^{2}}(z,w;s)$ $\displaystyle=\frac{\Gamma((s+1-2\rho_{0})/2)}{\Gamma(s/2)}\left(-s(s+1-2\rho_{0})\right)\tilde{E}_{X;\rho_{0}^{2}}(z,w;s+2)$ $\displaystyle=-s^{2}\frac{\Gamma((s+1-2\rho_{0})/2)}{\Gamma(s/2)}\frac{(s+1-2\rho_{0})/2}{s/2}\tilde{E}_{X;\rho_{0}^{2}}(z,w;s+2)$ $\displaystyle=-s^{2}\frac{\Gamma(((s+2)+1-2\rho_{0})/2)}{\Gamma((s+2)/2)}\tilde{E}_{X;\rho_{0}^{2}}(z,w;s+2)$ $\displaystyle=-s^{2}E_{X;\rho_{0}^{2}}(z,w;s+2),$ as asserted. $\square$ ###### Remark 2 The motivation for the series in Theorem 3 is the following elementary formula first employed in the context of elliptic Eisenstein series in [vP10]. 
For any $x$ with $|x|<1$ and complex $s$, one has the convergent Taylor series $(1-x)^{-s/2}=\sum\limits_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}}{k!}x^{k}.$ By setting $x=(\cosh u)^{-2}$, one then gets that $(1-(\cosh u)^{-2})^{-s/2}=\sum\limits_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}}{k!}(\cosh u)^{-2k}.$ Now write $(1-(\cosh u)^{-2})^{-s/2}=(\cosh u)^{s}((\cosh u)^{2}-1)^{-s/2}=(\cosh u)^{s}(\sinh u)^{-s}$ from which we obtain the identity $\sinh^{-s}(u)=\sum\limits_{k=0}^{\infty}\frac{\left(\frac{s}{2}\right)_{k}}{k!}\cosh^{-(s+2k)}(u).$ In this way, we can study the function obtained by applying the wave distribution to $g(u)=\sinh^{-s}(u)$, even though this function does not satisfy the conditions of Theorem 1. Indeed, this observation is the motivation behind the definition of $E_{X;\rho_{0}^{2}}(z,w;s)$. ## 6 Kronecker limit formulas We now prove the Kronecker limit formulas for $G_{X;\rho_{0}^{2}}(z,w;s)$ and $E_{X;\rho_{0}^{2}}(z,w;s)$, meaning we analyze the first two terms in the Laurent series at $s=0$. We will continue assuming that $\rho_{0}\geq 0$ is arbitrary. The choice of $\rho_{0}$ plays no role in the analysis of $G_{X;\rho_{0}^{2}}(z,w;s)$. However, in the approach taken in this section, the case when $\rho_{0}=1/2$ will be particularly interesting when studying $E_{X;\rho_{0}^{2}}(z,w;s)$. ###### Corollary 2 If $\rho_{0}\neq 0$, then the function $G_{X;\rho_{0}^{2}}(z,w;s)$ has the asymptotic behavior $G_{X;\rho_{0}^{2}}(z,w;s)=\frac{-1/(2\rho_{0})}{\textrm{\rm vol}_{\omega}(X)}s^{-1}+G_{X;\rho_{0}^{2}}(z,w)-\frac{1/(2\rho_{0})^{2}}{\textrm{\rm vol}_{\omega}(X)}+O(s)\,\,\,\,\,\textrm{as $s\rightarrow 0$}$ where $G_{X;\rho_{0}^{2}}(z,w)$ is the Green’s function associated to the Laplacian $\Delta_{X}$ acting on the space of smooth functions on $X$ which are orthogonal to the constant functions. 
If $\rho_{0}=0$, then we have the expansion $G_{X;\rho_{0}^{2}}(z,w;s)=\frac{1}{\textrm{\rm vol}_{\omega}(X)}s^{-2}+G_{X;\rho_{0}^{2}}(z,w)+O(s)\,\,\,\,\,\textrm{as $s\rightarrow 0$}$ Proof: The result follows directly from part (ii) of Theorem 2 having noted that the eigenfunction associated to the zero eigenvalue is $1/(\textrm{\rm vol}_{\omega}(X))^{1/2}$ and that $\frac{1}{s(s-2\rho_{0})}=\frac{-1/(2\rho_{0})}{s}+\frac{1/(2\rho_{0})}{s-2\rho_{0}}=\frac{-1/(2\rho_{0})}{s}-\frac{1}{(2\rho_{0})^{2}}+O(s)\,\,\,\,\,\textrm{\rm as $s\rightarrow 0$}$ in the case $\rho_{0}\neq 0$. If $\rho_{0}=0$, the assertion follows immediately from part (ii) of Theorem 2. $\square$ ###### Remark 3 Corollary 2 is, in some sense, elementary and well-known. Indeed, using (32), we can write $G_{X;\rho_{0}^{2}}(z,w;s)=\int\limits_{0}^{\infty}\left(K_{X}(z,w;t)e^{-s(s-2\rho_{0})t}-\frac{1}{\textrm{\rm vol}_{\omega}(X)}\right)dt+\frac{1}{s(s-2\rho_{0})}\frac{1}{\textrm{\rm vol}_{\omega}(X)}.$ As $s$ approaches zero while ${\mathrm{Re}}(s(s-2\rho_{0}))>0$, the above integral converges to the Green’s function $G_{X;\rho_{0}^{2}}(z,w)$. Nonetheless, the novel aspect of Theorem 2 is the expression of the resolvent kernel as a series. ###### Remark 4 The statement of Corollary 2 highlights the difference between the cases when $\rho_{0}=0$ and $\rho_{0}\neq 0$. The difference determines the order of the singularity of the resolvent kernel $G_{X;\rho_{0}^{2}}(z,w;s)$ at $s=0$. Of course, one could re-write Corollary 2 as $G_{X;\rho_{0}^{2}}(z,w;s)=\frac{1}{\textrm{\rm vol}_{\omega}(X)}\frac{1}{s(s-2\rho_{0})}+G_{X;\rho_{0}^{2}}(z,w)+O(s)\,\,\,\,\,\textrm{as $s\rightarrow 0$,}$ which includes both $\rho_{0}=0$ and $\rho_{0}\neq 0$. ###### Theorem 4 Let $D$ be the divisor of a holomorphic form $F_{D}$ on $X$, and assume that $D$ is smooth up to codimension two in $X$. 
Then, for $z\notin D$, there exist constants $c_{0}$ and $c_{1}$ such that $\int_{D}G_{X;\rho_{0}^{2}}(z,w;s)\mu_{D}(w)=\frac{\textrm{\rm vol}_{\omega}(D)}{\textrm{\rm vol}_{\omega}(X)}\frac{1}{s(s-2\rho_{0})}+c_{0}\log\|F_{D}(z)\|^{2}_{\omega}+c_{1}+O(s)\,\,\,\,\,\textrm{as $s\rightarrow 0$.}$ (42) Proof: For now, assume that $z\notin D$ and $w\in D$. By part (i) of Theorem 2, the function $G_{X;\rho_{0}^{2}}(z,w;s)$ is holomorphic in $s$ for ${\mathrm{Re}}(s)>2\rho_{0}$, so then the integral in (42) exists for ${\mathrm{Re}}(s)>2\rho_{0}$. The integral has a meromorphic continuation in $s$, again by part (i) of Theorem 2, and the Laurent expansion of the integral near $s=0$ can be evaluated by integrating over $D$ the expansion given in Corollary 2. The singularity of the Green’s function $G_{X;\rho_{0}^{2}}(z,w)$ as $z$ approaches $w$ is known; see, for example, page 94 of [Fo76] as well as [JK98] and [JK01]. In the latter references, the authors carefully evaluate the integrals of functions with Green’s function type singularities; see section 3 of [JK98]. From those arguments, we conclude that $\int_{D}G_{X;\rho_{0}^{2}}(z,w;s)\mu_{D}(w)$ has a logarithmic singularity as $z$ approaches $D$. Throughout this discussion the Laplacian $\Delta_{X}$ acts on the variable $z$. From the equation $(\Delta_{X}+s(s-2\rho_{0}))G_{X;\rho_{0}^{2}}(z,w;s)=0,$ as proved in Theorem 2, and the expansion in Corollary 2, we conclude that for $z\neq w$, we have $\Delta_{X}G_{X;\rho_{0}^{2}}(z,w)=\frac{2}{\textrm{\rm vol}_{\omega}(X)}.$ (43) Let us consider the difference $\int_{D}G_{X;\rho_{0}^{2}}(z,w;s)\mu_{D}(w)-\frac{\textrm{\rm vol}_{\omega}(D)}{\textrm{\rm vol}_{\omega}(X)}\frac{1}{s(s-2\rho_{0})}-c_{0}\log\|F_{D}(z)\|^{2}_{\omega}$ (44) near $s=0$. For any $c_{0}$, the difference is holomorphic in $s$ near $s=0$. From section 2.5, we have that $\Delta_{X}\log\|F_{D}(z)\|^{2}_{\omega}$ is a non-zero constant. 
Choose $c_{0}$ so that $c_{0}\Delta_{X}\log\|F_{D}(z)\|^{2}_{\omega}=\frac{2}{\textrm{\rm vol}_{\omega}(X)}.$ By combining (43) and (12), we conclude that $\textrm{\rm d}\textrm{\rm d}^{c}\int_{D}G_{X;\rho_{0}^{2}}(z,w)\mu_{D}(w)=\delta_{D^{\prime}}-\omega$ where $D^{\prime}$ is a divisor whose support is equal to the support of $D$. It remains to show that $D^{\prime}=D$. Consider the difference $R_{X;\rho_{0}^{2}}(z;D):=\int_{D}G_{X;\rho_{0}^{2}}(z,w)\mu_{D}(w)-\frac{c_{0}}{\textrm{\rm vol}_{\omega}(X)}\log\|F_{D}(z)\|^{2}_{\omega}.$ (45) which satisfies $\textrm{\rm d}\textrm{\rm d}^{c}R_{X;\rho_{0}^{2}}(z;D)=\delta_{D^{\prime}}-n\delta_{D},$ which means that $R_{X;\rho_{0}^{2}}(z;D)$ is harmonic away from the support of $D$ and has logarithmic growth as $z$ approaches $D$. If $X$ is an algebraic curve, then $D$ is a finite sum of points, say $D=\sum m_{j}D_{j}$, with multiplicities $m_{j}$. In this case, $\int_{D}G_{X;\rho_{0}^{2}}(z,w)\mu_{D}(w)=\sum m_{j}G_{X;\rho_{0}^{2}}(z,D_{j}).$ It follows that $D^{\prime}=D$. By the Riemann removable singularity theorem, the difference (45) is harmonic on all of $X$, hence bounded, which implies that $R_{X;\rho_{0}^{2}}(z;D)$ is a constant. The argument for general $X$ is only slightly different. Again, write $D=\sum m_{j}D_{j}$ where each $D_{j}$ is irreducible. Choose a smooth point $P$ on $D$, hence on some $D_{j}$. One can express the integral in (45) near $P$ using suitably chosen local coordinates in $X$, as in section 3 of [JK98]. By doing so, one again concludes that the value of the integral of $G_{X;\rho_{0}^{2}}(z,w)$ as $P$ approaches $D$ is equal to the coefficient of $D_{j}$. Therefore, the difference (44) is bounded as $z$ approaches $P$. Since $D$ is smooth in codimension two, we can again apply the Riemann removable singularity theorem (see Corollary 7.3.2, page 262 of [Kr82]) to conclude that $R_{X;\rho_{0}^{2}}(z;D)$ is bounded and harmonic on $X$, hence constant. 
$\square$ ###### Remark 5 The constant $c_{0}$ can be expressed as a function of the weight of the form $F_{D}$. The constant $c_{1}$ of (42) can be determined by integrating both sides of (42) with respect to $z$, using that the integral of the Green’s function $G_{X;\rho^{2}_{0}}(z,w)$ is zero. This will express $c_{1}$ as an integral of $\log\|F_{D}(z)\|^{2}_{\omega}$. ###### Remark 6 In effect, the proof of Theorem 4 requires that the norm $\|F_{D}\|_{\omega}$ is such that the Laplacian $\Delta_{X}$ of its logarithm is constant, so then a certain linear combination of the Green’s function and $\log\|F_{D}\|_{\omega}$ has Laplacian equal to zero away from $D$. This statement can hold in settings not covered by the conditions stated in Theorem 4. In this setting, the forms $F_{D}$ one studies are determined by the condition that $\log|F_{D}|$ is harmonic, even when a complex structure does not exist. In fact, this requirement is true for quotients of hyperbolic $n$-spaces of any dimension $n\geq 2$. ###### Remark 7 Suppose we are given a codimension one subvariety $D$ of $X$, and assume that $D$ is smooth in codimension one. Then one can realize the log-norm of the form $F_{D}$ which vanishes along $D$ via (42). In this manner, we can construct $F_{D}$ when $D$ has been given. The form $F_{D}$ need not be a holomorphic form, but rather a section of the canonical bundle twisted by a flat line bundle. The parameters of the flat line bundle can be viewed as a generalization of Dedekind sums since classical Dedekind sums stem from attempting to drop the absolute values from the Kronecker limit function associated to the parabolic Eisenstein series for $\textrm{\rm PSL}(2,{\mathbb{Z}})$. Let us now extend the development of Kronecker limit functions to $E_{X;\rho_{0}^{2}}(z,w;s)$. 
To do so, we first prove that for certain $\rho_{0}^{2}$, the functions $G_{X;\rho_{0}^{2}}(z,w;s)$ and $E_{X;\rho_{0}^{2}}(z,w;s)$ have the same expansion at $s=0$ out to $O(s^{2})$. ###### Proposition 2 For $z,w\in X$, $z\neq w$ consider the difference $D_{X;\rho_{0}^{2}}(z,w;s):=E_{X;\rho_{0}^{2}}(z,w;s)-2^{s+1-\rho_{0}}\frac{\Gamma(s+1-\rho_{0})\Gamma((s+1-2\rho_{0})/2)}{\Gamma(s)\Gamma(s/2)}G_{X;\rho_{0}^{2}}(z,w;s).$ Then for all $\rho_{0}\geq 0$ such that $\rho_{0}\neq m$ and $\rho_{0}\neq m+1/2$ for integers $m\geq 1$, we have that $D_{X;\rho_{0}^{2}}(z,w;s)=O(s^{2})$ as $s\rightarrow 0$. Proof: The factor $2^{s+1-\rho_{0}}\frac{\Gamma(s+1-\rho_{0})\Gamma((s+1-2\rho_{0})/2)}{\Gamma(s)\Gamma(s/2)}$ (46) of $G_{X;\rho_{0}^{2}}(z,w;s)$ was chosen so that the $k=0$ terms in the series expansions for $E_{X;\rho_{0}^{2}}(z,w;s)$ and $G_{X;\rho_{0}^{2}}(z,w;s)$ agree. For any $k\geq 1$, the $k$-th term in the series expansion for the difference $D_{X;\rho_{0}^{2}}(z,w;s)$ is $\frac{\Gamma((s+1-2\rho_{0})/2)}{\Gamma(s/2)}\frac{\left(\frac{s}{2}\right)_{k}}{k!}\left(1-\frac{\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{(s+1-\rho_{0})_{k}}\right)K_{X;\rho_{0}^{2}}(z,w;s+2k).$ (47) From the spectral expansion in Proposition 1, the function $K_{X;\rho_{0}^{2}}(z,w;s+2k)$ is holomorphic at $s=0$ for any integer $k\geq 1$. When $\rho_{0}$ is distinct from any positive half-integer, the function $\Gamma((s+1-2\rho_{0})/2)$ is also holomorphic at $s=0$, while $\frac{\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{(s+1-\rho_{0})_{k}}$ is holomorphic at $s=0$ for $\rho_{0}$ distinct from any positive integer. Since the coefficient in (47) contains the factor $\Gamma^{-1}(s/2)$, which vanishes to first order at $s=0$, as does the factor $\left(\frac{s}{2}\right)_{k}$ for $k\geq 1$, it follows that the function $D_{X;\rho_{0}^{2}}(z,w;s)$ is $O(s^{2})$ as $s$ approaches zero, for all $\rho_{0}$ different from positive integers or half-integers. 
It remains to prove the statement for $\rho_{0}=1/2$, in which case (47) becomes $\frac{\left(\frac{s}{2}\right)_{k}}{k!}\left(1-\frac{\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{(s+1/2)_{k}}\right)K_{X;1/4}(z,w;s+2k)=\frac{\left(\frac{s}{2}\right)_{k}}{k!}\left(\frac{(s+1/2)_{k}-\left(\frac{s}{2}+\frac{1}{2}\right)_{k}}{(s+1/2)_{k}}\right)K_{X;1/4}(z,w;s+2k).$ For $k\geq 1$, the factor $(s/2)_{k}$ vanishes at $s=0$, as does the difference $(s+1/2)_{k}-(s/2+1/2)_{k}$, so it follows that the function $D_{X;1/4}(z,w;s)$ is $O(s^{2})$ as $s$ approaches zero. $\square$ ###### Corollary 3 For $z,w\in X$, $z\neq w$ and $\rho_{0}>0$ which is not equal to a positive integer or half-integer, $E_{X;\rho_{0}^{2}}(z,w;s)=O(s)$, as $s\to 0$. When $\rho_{0}=1/2$, there are constants $b_{0}$, $b_{1}$ and $b_{2}$ such that the function $E_{X;1/4}(z,w;s)$ has the asymptotic behavior $E_{X;1/4}(z,w;s)=b_{0}+(b_{1}+b_{2}G_{X;1/4}(z,w))s+O(s^{2})\,\,\,\,\,\textrm{as $s\rightarrow 0$}$ where $G_{X;\rho_{0}^{2}}(z,w)$ is the Green’s function associated to the Laplacian $\Delta_{X}$ acting on the space of smooth functions on $X$ which are orthogonal to the constant functions. Proof: When $\rho_{0}>0$ is not equal to a positive integer or half-integer, the functions $\Gamma(s+1-\rho_{0})$ and $\Gamma((s+1-2\rho_{0})/2)$ are holomorphic at $s=0$, hence the factor (46) is $O(s^{2})$ as $s$ approaches zero. Combining this with Proposition 2 and Corollary 2 yields the statement. When $\rho_{0}=1/2$, $2^{s+1-\rho_{0}}\frac{\Gamma(s+1-\rho_{0})\Gamma((s+1-2\rho_{0})/2)}{\Gamma(s)\Gamma(s/2)}=2^{s+1/2}\frac{\Gamma(s+1/2)}{\Gamma(s)}=a_{1}s+a_{2}s^{2}+O(s^{3}),$ so the statement of Proposition 2 becomes $E_{X;1/4}(z,w;s)=(a_{1}s+a_{2}s^{2}+O(s^{3}))\left(\frac{1}{\textrm{\rm vol}_{\omega}(X)}s^{-1}+G_{X;1/4}(z,w)+O(s)\right)+O(s^{2}),$ as $s\to 0$. Multiplying out the above expansion, we deduce the statement. 
$\square$ ###### Remark 8 When $\rho_{0}=0$, the term in (46) is $\sqrt{\pi}s^{2}+O(s^{3})$ as $s\to 0$. When combined with Proposition 2 and Corollary 2, one gets that $E_{X;0}(z,w;s)=\sqrt{\pi}/\textrm{\rm vol}_{\omega}(X)+O(s^{2})\,\,\,\,\,\textrm{\rm as $s\rightarrow 0$.}$ ###### Remark 9 In the case when $X$ is a hyperbolic Riemann surface, the result of Proposition 2 is stated in Corollary 7.4 of [vP16], with a slightly different renormalization constant $\sqrt{2\pi}$ in front of the Green’s function, which stems from a different constant term in the definition of the corresponding series. However, the proof in [vP16] uses special function identities which are specific to that setting. ###### Remark 10 The constants $b_{0}$, $b_{1}$ and $b_{2}$ in the case $\rho_{0}=1/2$ are easily evaluated using the asymptotic behavior of the factor (46) near $s=0$, but their precise values are not significant for us at this point. What does matter is that for $\rho_{0}=1/2$, we have that $E_{X;1/4}(z,w;s)$ admits a Kronecker limit formula. In the notation of Theorem 4, there are constants $c_{0}$, $c_{1}$ and $c_{2}$ such that $\int_{D}E_{X;1/4}(z,w;s)d\mu_{D}(w)=c_{0}\textrm{\rm vol}_{\omega}(D)+\left(c_{1}\log\|F_{D}(z)\|^{2}_{\omega}+c_{2}\right)s+O(s^{2})\,\,\,\,\,\textrm{as $s\rightarrow 0$}.$ ###### Remark 11 It is important to note that we have not excluded the possibility of a “nice” Kronecker limit function for $E_{X;\rho_{0}^{2}}(z,w;s)$ when $\rho_{0}\neq 1/2$. The approach we took in this article was to compare the Kronecker limit function of $E_{X;\rho_{0}^{2}}(z,w;s)$ to that of the resolvent kernel $G_{X;\rho_{0}^{2}}(z,w;s)$. We find it quite interesting that the comparison yields a determination of the Kronecker limit function of $E_{X;\rho_{0}^{2}}(z,w;s)$ only in the case when $\rho_{0}=1/2$. ###### Remark 12 Ultimately, we are interested in the cases when $X$ is the quotient of a symmetric space $G/K$. In this setting, $\rho_{0}$ is zero only when $G/K$ is Euclidean. 
In all other cases, $\rho_{0}^{2}$ is positive. (See section 1.3.) ## 7 Examples As stated above, we began our analysis with the heat kernel and obtained our results using its spectral expansion. As one could imagine, any other representation of the heat kernel has the potential to combine with our results to yield formulas of possible interest. We will proceed along these lines and introduce three examples. It is our opinion that each example is of independent interest. Rather than expanding upon any one example, we will present, in rather broad strokes, the type of formulas which will result, and we will leave a detailed analysis for future work. ### 7.1 Abelian varieties Let $\Omega$ be an $N\times N$ complex matrix which is symmetric and whose imaginary part is positive definite. Let $\Lambda_{\Omega}$ denote the ${\mathbb{Z}}$-lattice generated by the vectors in ${\mathbb{Z}}^{N}$ and $\Omega{\mathbb{Z}}^{N}$. Let $X$ be an abelian variety whose complex points form the $N$-dimensional complex torus ${\mathbb{C}}^{N}/({\mathbb{Z}}^{N}\oplus\Omega{\mathbb{Z}}^{N})$. Assume that $X$ is equipped with its natural flat metric induced from the Euclidean metric on ${\mathbb{C}}^{N}$. It can be shown that all eigenfunctions of the associated Laplacian are exponential functions. In addition, the heat kernel on $X$ can be obtained by periodizing over $\Lambda_{\Omega}$ the heat kernel on ${\mathbb{C}}^{N}$. By the uniqueness of the heat kernel on $X$, one obtains a formula of the type $K_{X}(z,w;t)=\sum_{k=0}^{\infty}e^{-\lambda_{k}t}\psi_{k}(z)\psi_{k}(w)=\sum\limits_{v\in\Lambda_{\Omega}}\frac{1}{(4\pi t)^{N}}e^{-\|z-w-v\|^{2}/(4t)}$ where $\|\cdot\|$ denotes the Euclidean norm on ${\mathbb{C}}^{N}$. In effect, the identity obtained by equating the above two expressions for the heat kernel is the Poisson summation formula. 
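As an illustrative aside (not part of the formal development), the one-dimensional real analogue of this identity, namely the circle ${\mathbb{R}}/{\mathbb{Z}}$ with eigenfunctions $e^{2\pi ikx}$ and eigenvalues $4\pi^{2}k^{2}$, can be checked numerically. The following Python sketch compares the periodized Gaussian with the truncated spectral expansion:

```python
import math

def heat_kernel_periodized(x, t, N=50):
    # Periodize the Euclidean heat kernel on R over the lattice Z.
    return sum(math.exp(-(x - n) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)
               for n in range(-N, N + 1))

def heat_kernel_spectral(x, t, N=50):
    # Spectral expansion: eigenfunctions exp(2*pi*i*k*x) with eigenvalues
    # 4*pi^2*k^2; the imaginary parts cancel, leaving a cosine series.
    return 1.0 + 2.0 * sum(math.exp(-4 * math.pi ** 2 * k ** 2 * t)
                           * math.cos(2 * math.pi * k * x)
                           for k in range(1, N + 1))

# The two truncated sums agree to high precision.
x, t = 0.3, 0.05
assert abs(heat_kernel_periodized(x, t) - heat_kernel_spectral(x, t)) < 1e-10
```

Both sums converge rapidly, and their agreement is exactly an instance of the classical Poisson summation formula.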
In the setting of section 3, we take $\rho_{0}^{2}=0$, so then the Poisson kernel (13) becomes ${\cal P}_{X,0}(z,w;u)=\frac{u}{\sqrt{4\pi}}\int_{0}^{\infty}K_{X}(z,w;t)e^{-u^{2}/(4t)}t^{-1/2}\,\frac{dt}{t}=\sum\limits_{v\in\Lambda_{\Omega}}\frac{u\Gamma(N+1/2)}{\pi(u^{2}+\|z-w-v\|^{2})^{N+1/2}}.$ (48) As is evident, one cannot simply replace $u$ by $iu$ in (48) since then the sum would have singularities whenever $u^{2}=\|z-w-v\|^{2}$. However, this is where the distribution theory approach is necessary and, indeed, one will obtain the function $K_{X;0}(z,w;s)$. For now, one can formally express $K_{X;0}(z,w;s)$ as the integral of $\cosh^{-s}(u)$. In the notation of Theorem 4, one can take $D$ to be the theta divisor of the Riemann theta function $\Theta$ on $X$. The Kronecker limit formula for $\log\|\Theta\|$ could then be viewed as coming from the series over $\Lambda_{\Omega}$. Upon exponentiation, one would have a product, or regularized product, formula for $\|\Theta\|^{2}$. Certainly, this example is worthy of further study. ### 7.2 Complex projective space Let $\omega_{FS}$ denote the Fubini-Study metric on complex projective space ${\mathbb{C}}{\mathbb{P}}^{n}$. The authors in [HI02] derived an explicit expression for the heat kernel $K_{{\mathbb{C}}{\mathbb{P}}^{n}}$ associated to the Laplacian of the Fubini-Study metric on ${\mathbb{C}}{\mathbb{P}}^{n}$. 
Specifically, it is proved that $K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)=\frac{e^{n^{2}t}}{2^{n-2}\pi^{n+1}}\int_{r}^{\pi/2}\frac{-d(\cos u)}{\sqrt{\cos^{2}r-\cos^{2}u}}\left(-\frac{1}{\sin u}\frac{d}{du}\right)^{n}[\Theta_{n+1}(t,u)],$ (49) where $z,w\in{\mathbb{C}}{\mathbb{P}}^{n}$, $t>0$, and $r=\textrm{\rm dist}_{g_{FS}}(z,w)=\tan^{-1}(|z-w|)$, and the function $\Theta_{n+1}(t,u)$ is given by $\Theta_{n+1}(t,u)=\sum_{\ell=0}^{\infty}e^{-4t(\ell+n/2)^{2}}\cos((2\ell+n)u).$ Equivalently, one can write $K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)=\sum_{\ell=0}^{\infty}e^{-\lambda_{\ell}t}\theta_{\ell}(r),$ (50) where $\lambda_{\ell}=4\ell(\ell+n)$, and $\theta_{\ell}(r)=\frac{1}{2^{n-2}\pi^{n+1}}\int_{r}^{\pi/2}\frac{\sin\tau}{\sqrt{\cos^{2}r-\cos^{2}\tau}}\left(-\frac{1}{\sin\tau}\frac{d}{d\tau}\right)^{n}\cos((2\ell+n)\tau)\,d\tau.$ As in the previous example, the formula for the heat kernel is explicit, and all integral transforms leading up to the resolvent kernel $G_{X;\rho^{2}}(z,w;s)$ and $E_{X;\rho^{2}}(z,w;s)$ can be evaluated, at least formally. It seems as if one would also take $\rho_{0}^{2}=0$ in this case, though it would be worthwhile to consider $\rho_{0}^{2}=1/4$ as well. Of course, the divisors to consider would be the zeros of homogeneous polynomials in $n+1$ variables, and the norm of homogeneous polynomials would be with respect to the Fubini-Study metric. ### 7.3 Compact quotients of symmetric spaces Let $G$ be a connected, non-compact semisimple Lie group with finite center, and let $K$ be its maximal compact subgroup. Let $\Gamma$ be a discrete subgroup of $G$ such that the quotient $\Gamma\setminus G$ is compact. Then the quotient space $X=\Gamma\setminus G/K$ is also compact. On page 160 of [Ga68], the author presents a formula for the heat kernel on $G$. 
In general terms, the heat kernel $K_{G}(g;t)$, with singularity when $g$ is the identity, is equal to the inverse spherical transform of a Gaussian; see Proposition 3.1 of [Ga68], as well as [JLa01] in the case $G=\textrm{\rm SL}_{n}(\mathbb{R})$. In the case that $G$ is complex, the inverse transform can be computed and the resulting formula is particularly elementary; see Proposition 3.2 of [Ga68]. In this case, one has that $\rho_{0}^{2}$ is equal to the squared norm of half the sum of the positive roots of the Lie algebra of $G$. The heat kernel on $X$ can be written, as in the notation of (4.2) of [Ga68], as the series $K_{X}(z,w;t)=\sum\limits_{\gamma\in\Gamma}K_{G}(z^{-1}\gamma w;t).$ The expressions from Proposition 3.1 and Proposition 3.2 of [Ga68] are such that the integral in (13) can be computed term-by-term. As discussed in section 2.5, one should replace $t$ by $t/(4\rho_{0}^{2})$, so that one then has the Kronecker limit theorem as in Remark 10. One can be optimistic that the case of general $G$ will not be significantly different from $G=\textrm{\rm SL}_{2}({\mathbb{R}})$. ### 7.4 Concluding remarks Though we began with the assumption that $X$ is a Kähler variety, one could review the proofs we developed and relax this condition. For example, if $X$ is a hyperbolic $n$-manifold, meaning a compact quotient of hyperbolic $n$-space by a discrete subgroup of $\textrm{\rm SO}(n,1)$, then the structure of the Laplacian associated to the natural hyperbolic metric is such that all aspects of our proofs apply. In this case, the Kronecker limit function associated to $G_{X;\rho_{0}^{2}}(z,w;s)$ would be a harmonic form with a singularity when $z$ approaches $w$. Furthermore, the heat kernel on the hyperbolic $n$-space has a particularly elementary expression; see, for example, [DGM76] who attribute the result to Millson. In this case, $\rho_{0}^{2}\neq 0$, so one would then expect, as in the case when $n=2$, a generalization of the elliptic Eisenstein series as a sum over the uniformizing group. 
The study of Poincaré series associated to $\textrm{\rm SO}(n,1)$ is developed in [CLPS91], and it will be interesting to connect those results with the non-$L^{2}$ series $E_{X;\rho_{0}^{2}}(z,w;s)$. Finally, we began with the heat kernel acting on smooth functions. Certainly, one could follow the same construction when using a form-valued heat kernel. By doing so, one would perhaps not consider the resolvent kernel, but rather focus on $K_{X;\rho_{0}^{2}}(z,w;s)$. In this case, one would integrate one of the variables over a cycle $\gamma$ on $X$, as in section 5 of [JvPS16], and study the resulting Kronecker limit function. It seems plausible to expect that in this manner one would obtain a direct generalization of [KM79], whose series admitted a Kronecker limit function which was the Poincaré dual of $\gamma$. ## References * [Ba06] Ballmann, W.: _Lectures on Kähler Manifolds_. ESI Lectures in Mathematics and Physics. European Mathematical Society (EMS), Zürich, 2006. * [BGV91] Berline, N., Getzler, E., and Vergne, M.: _Heat Kernels and Dirac Operators_ , Springer, New York, 1991. * [Ch84] Chavel, I.: _Eigenvalues in Riemannian Geometry_ , Academic Press, New York, 1984. * [CLPS91] Cogdell, J., Li, J.-S., Piatetski-Shapiro, I., and Sarnak, P.: _Poincaré series for $SO(n,1)$_, Acta Math. 167 (1991), 229-285. * [DGM76] Debiard, A., Gaveau, B., and Mazet, E.: _Théorèmes de comparaison en géométrie riemannienne_ , Publ. Res. Inst. Math. Sci. 12 (1976/77), 391-425. * [Er56] Erdélyi, A.: _Asymptotic Expansions_ , Dover Publications, New York, 1956. * [Fi18] Fine, J.: _A rapid introduction to Kähler geometry_. Available at http://homepages.ulb.ac.be/~joelfine/papers.html. * [Fo76] Folland, G.: _Introduction to Partial Differential Equations_ , Princeton University Press, Princeton, NJ, 1976. * [Fr82] Friedlander, F.: _Introduction to the Theory of Distributions_ , Cambridge University Press, Cambridge, England, 1982. 
* [Ga68] Gangolli, R.: _Asymptotic behavior of spectra of compact quotients of certain symmetric spaces_. Acta Math. 121 (1968), 151-192. * [Ge81] Gérardin, P.: _Formes automorphes associées aux cycles géodésiques des surfaces de Riemann hyperboliques (d’après S. Kudla et J. Millson)_. Bourbaki Seminar, Vol. 1980/81, pp. 23-35, Lecture Notes in Math., 901, Springer, Berlin-New York, 1981. * [GR07] Gradshteyn, I. S. and Ryzhik, I. M.: _Table of Integrals, Series and Products_. Elsevier Academic Press, Amsterdam, 2007. * [GH78] Griffiths, P. and Harris, J.: _Principles of Algebraic Geometry_. John Wiley $\&$ Sons, New York, 1978. * [HI02] Hafoud, A. and Intissar, A.: _Représentation intégrale de noyau de la chaleur sur l’espace projectif complexe $P^{n}(C)$, $n\geq 1$. _ C. R. Math. Acad. Sci. Paris 335 (2002), no. 11, 871-876. * [JK98] Jorgenson, J. and Kramer, J.: _Towards the arithmetic degree of line bundles on abelian varieties_ , Manuscripta Math. 96 (1998), 335–370. * [JK01] Jorgenson, J. and Kramer, J.: _Star products of Green’s currents and automorphic forms_ , Duke Math. J. 106 (2001), 553-580. * [JLa93] Jorgenson, J. and Lang, S.: _Basic Analysis of Regularized Products and Series_ , Springer Lecture Notes in Mathematics 1564 (1993). * [JLa01] Jorgenson, J. and Lang, S.: _Spherical Inversion on $\textrm{\rm SL}_{n}(\mathbb{R})$_. Springer-Verlag Monographs in Mathematics, Springer Verlag, New York, 2001. * [JLa03] Jorgenson, J. and Lang, S.: _Analytic continuation and identities involving heat, Poisson, wave and Bessel kernels_. Math. Nachr. 258 (2003), 44-70. * [JvPS16] Jorgenson, J., von Pippich, A.-M., and Smajlović, L.: _On the wave representation of elliptic and hyperbolic Eisenstein series_ , Advances in Math. 288 (2016), 887-921. * [JvPS18] Jorgenson, J., von Pippich, A.-M., and Smajlović, L.: _Applications of Kronecker’s limit formula for elliptic Eisenstein series_ , Annales Mathématiques du Québec 43 (2019), 99-124. 
* [JST16] Jorgenson, J, Smajlović, L., and Then, H.: _Kronecker’s limit formula, holomorphic modular functions and $q$-expansions on certain arithmetic groups_. Experimental Mathematics 54 (2016), 295-320. * [Kr82] Krantz, S.: _Function Theory of Several Complex Variables_ , John Wiley $\&$ Sons Inc., New York, 1982. * [KM79] Kudla, S. S. and Millson, J. J.: _Harmonic differentials and closed geodesics on a Riemann surface_. Invent. Math. 54 (1979), 193-211. * [La88] Lang, S.: _Introduction to Arakelov Theory._ Springer, New York, 1988. * [vP10] von Pippich, A.-M.: _The arithmetic of elliptic Eisenstein series_. PhD thesis, Humboldt-Universität zu Berlin, 2010. * [vP16] von Pippich, A.-M.: _A Kronecker limit type formula for elliptic Eisenstein series_. https://arxiv.org/abs/1604.00811 * [Si80] Siegel, C. L.: _Advanced Analytic Number Theory._ Tata Institute of Fundamental Research Studies in Mathematics, 9, Tata Institute of Fundamental Research, Bombay, 1980. * [SZ02] Sogge, C. D. and Zelditch, S.: _Riemannian manifolds with maximal eigenfunction growth_ , Duke Math. J. 114 (2002), 387-437 James W. Cogdell Department of Mathematics Ohio State University 231 W. 18th Ave. Columbus, OH 43210 U.S.A. e-mail<EMAIL_ADDRESS> Jay Jorgenson Department of Mathematics The City College of New York Convent Avenue at 138th Street New York, NY 10031 U.S.A. e-mail<EMAIL_ADDRESS> Lejla Smajlović Department of Mathematics University of Sarajevo Zmaja od Bosne 35, 71 000 Sarajevo Bosnia and Herzegovina e-mail<EMAIL_ADDRESS>
# Almost orthogonal subsets of vector spaces over finite fields Ali Mohammadi A.M.: School of Mathematics, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran<EMAIL_ADDRESS>and Giorgis Petridis G.P.: Department of Mathematics, University of Georgia, Athens, GA, 30602 USA<EMAIL_ADDRESS> ###### Abstract. We prove various results on the size and structure of subsets of vector spaces over finite fields which, in some sense, have too many mutually orthogonal pairs of vectors. In particular, we obtain sharp finite field variants of a theorem of Rosenfeld and an almost version of a theorem of Berlekamp. ## 1\. Introduction ### 1.1. Background An _orthogonal set_ of vectors in ${\mathbb{R}}^{n}$, that is, a set of non-zero vectors with the property that every pair of distinct vectors is mutually orthogonal, is linearly independent and therefore contains at most $n$ elements. Erdős asked the question of determining the maximum size of a set of _almost orthogonal_ vectors: a set of non-zero vectors with the property that among any three distinct vectors, at least two are mutually orthogonal [14]. The union of two disjoint orthogonal sets is an almost orthogonal set of size $2n$. Rosenfeld, confirming a belief of Erdős, proved that $2n$ is the maximum size of an almost orthogonal subset of ${\mathbb{R}}^{n}$ [16]. Deaett gave a short and elegant proof of Rosenfeld’s theorem [6], which has similarities with an argument of Pudlák [15]. Deaett also proved that for dimension 4 and lower every almost orthogonal set of maximum size is the union of two orthogonal sets, and provided examples in dimension 5 and higher of almost orthogonal sets of maximum size that are not the union of two disjoint orthogonal sets. In ${\mathbb{C}}^{n}$, the existence of self-orthogonal vectors (like $(1,i)\in{\mathbb{C}}^{2}$) changes the answer to both questions. Even in dimension two, the span of $(1,i)$ is an uncountable set of orthogonal vectors. 
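As a concrete illustration (a computational aside, not drawn from the paper itself), the self-orthogonality of $(1,i)$ under the bilinear dot product, in contrast with the Hermitian inner product, can be verified directly:

```python
v = (1, 1j)  # the vector (1, i) in C^2

# Bilinear dot product: sum of coordinatewise products, no conjugation.
bilinear = sum(a * b for a, b in zip(v, v))

# Hermitian inner product: conjugate the second argument.
hermitian = sum(a * b.conjugate() for a, b in zip(v, v))

assert bilinear == 0    # (1, i) is self-orthogonal for the bilinear form,
assert hermitian == 2   # but its Hermitian norm-squared is 2

# Every scalar multiple of (1, i) is likewise self-orthogonal, so the span
# of (1, i) minus the origin is an (uncountable) orthogonal set.
w = ((3 + 2j) * 1, (3 + 2j) * 1j)
assert sum(a * b for a, b in zip(w, w)) == 0
```

This is why the bilinear and Hermitian settings behave so differently in ${\mathbb{C}}^{n}$.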
Deaett proved, however, that in ${\mathbb{C}}^{n}$ equipped with the Hermitian inner product, the maximum number of almost orthogonal vectors is $2n$, generalising Rosenfeld’s theorem [6]. Both questions have also been investigated over finite fields. The size of the largest orthogonal set in $({\mathbb{Z}}/(2{\mathbb{Z}}))^{n}$ was determined by Berlekamp [5] and the size of the largest orthogonal set in $({\mathbb{Z}}/(p{\mathbb{Z}}))^{n}$ for primes $p$ was determined by Zame [23]. There are similarities in their methods. The question Berlekamp answered is equivalent to solving another question of Erdős: determining the size of the largest family of subsets of $\\{1,2,\dots,n\\}$ with the property that every two distinct elements have even intersection. Erdős’ question was solved independently by Graver [5, 8]. A key to Berlekamp’s and Zame’s arguments is the existence of self-orthogonal vectors. Self-orthogonal vectors exist over any finite field when the dimension is at least 3 or, in dimension 2, when the order of the field is 2 or is congruent to 1 modulo 4. There are further intricacies when working in vector spaces over finite fields. For example, as is detailed in the next subsection, in dimension 6, the dot product is equivalent to the symmetric bilinear form $(\bm{x},\bm{y})\mapsto x_{1}y_{1}-x_{2}y_{2}+x_{3}y_{3}-x_{4}y_{4}+x_{5}y_{5}-x_{6}y_{6}$ when the order of the field is congruent to 1 modulo 4, and to the symmetric bilinear form $(\bm{x},\bm{y})\mapsto x_{1}y_{1}-x_{2}y_{2}+x_{3}y_{3}-x_{4}y_{4}+x_{5}y_{5}+x_{6}y_{6}$ when the order of the field is congruent to 3 modulo 4. The difference points to the fact that the largest orthogonal subspace has dimension that depends on the order of the field [21]. In addition to this, expressing the dot product in these equivalent ways has the advantage that it makes clear the existence of self-orthogonal vectors. 
There does not seem to be a significant difference between studying the dot product and studying any symmetric non-degenerate bilinear form, and this is the approach taken in the recent literature. Ahmadi and Mohammadian [1], using an argument similar to Berlekamp’s, determined the size of the largest orthogonal set with respect to any non-degenerate symmetric bilinear form over fields of odd order (see also [10, 21]). Ahmadi and Mohammadian also made progress on the question of determining the size of the largest almost orthogonal set with respect to any non-degenerate symmetric bilinear form. The main purpose of this paper is to determine the size of the largest almost orthogonal set with respect to any bilinear form in any vector space over any sufficiently large finite field of odd order; and also in $({\mathbb{Z}}/(2{\mathbb{Z}}))^{n}$ for sufficiently large $n$. It is worth recording here that, unlike Rosenfeld’s and Deaett’s theorems, it is not always the case that the size of the largest almost orthogonal set equals twice the size of the largest orthogonal set. The $({\mathbb{Z}}/(2{\mathbb{Z}}))^{n}$ question has a set system formulation that can be thought of as an “almost” version of Berlekamp’s theorem: determine the size of the largest family of subsets of $\\{1,2,\dots,n\\}$ with the property that among every three distinct elements, at least two have even intersection. We show that the size of the largest family almost doubles. We also investigate the finite field analogue of another question that Erdős asked for Euclidean space: determine the maximum size of subsets of ${\mathbb{R}}^{n}$ with the property that among any $k$ of their elements, at least two are mutually orthogonal [4, 7]. ### 1.2. Notation and definitions Throughout the paper, we use $m$ and $n$ to denote positive integers, $p$ a prime and $q=p^{m}$. We also use ${\mathbb{F}}_{q}$ to denote a finite field of order $q$ and write ${\mathbb{F}}_{q}^{*}={\mathbb{F}}_{q}\setminus\\{0\\}$. 
A bilinear form over ${\mathbb{F}}_{q}^{n}$ is a mapping $\mathcal{B}:{\mathbb{F}}_{q}^{n}\times{\mathbb{F}}_{q}^{n}\rightarrow{\mathbb{F}}_{q}$, which takes the form ${\mathcal{B}}(\bm{x},\bm{y})=\bm{x}^{T}A\bm{y},\ \text{for all }\bm{x},\bm{y}\in{\mathbb{F}}_{q}^{n},$ for some $n\times n$ matrix $A$ over ${\mathbb{F}}_{q}$. We say ${\mathcal{B}}$ is symmetric if $A$ is a symmetric matrix and say ${\mathcal{B}}$ is degenerate if $\text{det}(A)=0$. We call two bilinear forms equivalent if their corresponding matrices are equivalent ($A,B$ are equivalent if $A=M^{T}BM$ for an invertible matrix $M$). Fix a non-square element $\gamma\in{\mathbb{F}}_{q}$ and let $k=\lfloor\frac{n}{2}\rfloor$. For any bilinear form $\mathcal{B}$ over ${\mathbb{F}}_{q}^{n}$, with associated matrix $A$, we define $\varepsilon(\mathcal{B})=\begin{cases}0,&\mbox{if }\det(A)=0;\\\ 1,&\mbox{if }k\ \text{is even and }\det(A)\ \text{is a non-zero square, or}\\\ &\mbox{if }k\ \text{is odd and }-\det(A)\ \text{is a non-zero square};\\\ \gamma,&\mbox{if }k\ \text{is even and }\det(A)\ \text{is a non-square, or}\\\ &\mbox{if }k\ \text{is odd and }-\det(A)\ \text{is a non-square.}\end{cases}$ For odd $q$, by a result in [9, p. 79], which also appears as [1, Theorem 1], any non-degenerate symmetric bilinear form, ${\mathcal{B}}$, over ${\mathbb{F}}_{q}^{n}$ is equivalent to the form (1) $(\bm{x},\bm{y})\mapsto x_{1}y_{1}-x_{2}y_{2}+\dots+x_{n-2}y_{n-2}-x_{n-1}y_{n-1}+\varepsilon({\mathcal{B}})x_{n}y_{n}$ for odd $n$ and is equivalent to the form (2) $(\bm{x},\bm{y})\mapsto x_{1}y_{1}-x_{2}y_{2}+\dots+x_{n-3}y_{n-3}-x_{n-2}y_{n-2}+x_{n-1}y_{n-1}-\varepsilon({\mathcal{B}})x_{n}y_{n}$ for even $n$, where $\bm{x}=(x_{1},\dots,x_{n})$ and $\bm{y}=(y_{1},\dots,y_{n})$. 
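To make the notion of equivalence concrete, one can verify a small instance by exhaustive search. The following Python sketch is illustrative only; the parameters $q=5$, $n=2$ are our own choice. It finds an invertible $M$ over ${\mathbb{F}}_{5}$ with $I=M^{T}DM$, where $D=\mathrm{diag}(1,-1)$, witnessing that the dot product on ${\mathbb{F}}_{5}^{2}$ is equivalent to the form $x_{1}y_{1}-x_{2}y_{2}$; this is consistent with $\varepsilon=1$ there, since $-\det(I)=-1$ is a non-zero square modulo 5.

```python
from itertools import product

p = 5
D = ((1, 0), (0, (-1) % p))    # the normal form diag(1, -1) mod 5
I2 = ((1, 0), (0, 1))          # matrix of the dot product

def matmul(A, B):
    # 2x2 matrix multiplication over F_p.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

# Exhaustive search over all 2x2 matrices mod 5 for an invertible M
# with I = M^T D M, i.e. a change of basis between the two forms.
witness = None
for a, b, c, d in product(range(p), repeat=4):
    M = ((a, b), (c, d))
    if (a * d - b * c) % p != 0 and matmul(matmul(transpose(M), D), M) == I2:
        witness = M
        break

assert witness is not None  # e.g. M = ((0, 1), (2, 0)) is one valid witness
```

The search succeeds precisely because $-1$ is a square modulo 5; over ${\mathbb{F}}_{7}$, say, the same search for these two forms would fail.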
Given finite-dimensional vector spaces $V_{1}$ and $V_{2}$ over ${\mathbb{F}}_{q}$, with $n_{i}=\text{dim}(V_{i})$ for $i=1,2$, we define the _direct sum_ $V_{1}\oplus V_{2}$ to be the vector space $V_{1}\times V_{2}$, which may be identified with ${\mathbb{F}}_{q}^{n_{1}+n_{2}}$. Furthermore, if $M_{1}$ and $M_{2}$ are matrices corresponding to bilinear forms over ${\mathbb{F}}_{q}^{n_{1}}$ and ${\mathbb{F}}_{q}^{n_{2}}$ respectively, we define the matrix $M_{1}\oplus M_{2}$ by $M_{1}\oplus M_{2}=\begin{pmatrix}M_{1}&0\\\ 0&M_{2}\end{pmatrix},$ which gives rise to a bilinear form over ${\mathbb{F}}_{q}^{n_{1}+n_{2}}.$ For $q=2$, one can infer from (5) in [12, p. 7] that every non-degenerate symmetric bilinear form in odd dimension is equivalent to the dot product that arises from the $n\times n$ identity matrix $I_{n}$. In even dimensions, every non-degenerate symmetric bilinear form is either equivalent to the dot product or to the _hyperbolic form_ ${\mathcal{H}}$ that arises from the matrix $H\oplus\dots\oplus H$, where $H=\begin{pmatrix}0&1\\\ 1&0\end{pmatrix}.$ ###### Definition 1.1. We refer to two vectors $\bm{v_{1}},\bm{v_{2}}\in{\mathbb{F}}_{q}^{n}\setminus\\{\bm{0}\\}$ as mutually _orthogonal_ if $\mathcal{B}(\bm{v_{1}},\bm{v_{2}})=0$. If $\bm{v}\in{\mathbb{F}}_{q}^{n}\setminus\\{\bm{0}\\}$ satisfies $\mathcal{B}(\bm{v},\bm{v})=0$, we refer to it as _self-orthogonal_. We call a subset $S\subset{\mathbb{F}}_{q}^{n}\setminus\\{\bm{0}\\}$ an _orthogonal set_ if every distinct pair of elements of $S$ are mutually orthogonal and we say $S\subset{\mathbb{F}}_{q}^{n}\setminus\\{\bm{0}\\}$ is _$(k,l)$ -orthogonal_ if for any $k$ vectors in $S$ at least $l$ of them are pairwise mutually orthogonal. Furthermore, we call a subspace $V\subset{\mathbb{F}}_{q}^{n}$ an _orthogonal subspace_ if $V\setminus\\{\bm{0}\\}$ is an orthogonal set. 
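Definition 1.1 is easy to test by brute force in small cases. The following Python sketch is our own illustration; the set chosen is the union of two self-orthogonal lines in ${\mathbb{F}}_{5}^{2}$ under the dot product, and the $(3,2)$-orthogonal property is checked directly from the definition:

```python
from itertools import combinations

p = 5

def B(x, y):
    # The dot product on F_p^2, i.e. the symmetric bilinear form with matrix I.
    return (x[0] * y[0] + x[1] * y[1]) % p

def is_32_orthogonal(S):
    # (3,2)-orthogonal: among any 3 distinct vectors of S,
    # at least 2 are mutually orthogonal.
    return all(any(B(u, v) == 0 for u, v in combinations(triple, 2))
               for triple in combinations(S, 3))

# (1,2) and (1,3) are self-orthogonal in F_5^2 since 1+4 = 1+9 = 0 mod 5,
# so each of their spans (minus 0) is an orthogonal set of size p - 1.
line1 = [(t % p, (2 * t) % p) for t in range(1, p)]
line2 = [(t % p, (3 * t) % p) for t in range(1, p)]
S = line1 + line2

# Any 3 vectors of S include 2 from the same line, hence an orthogonal pair.
assert is_32_orthogonal(S)
assert len(S) == 2 * p - 2   # = 2*q^(n/2) - 2 for q = 5, n = 2
```

The pigeonhole argument in the comment is the reason any union of two orthogonal sets is $(3,2)$-orthogonal.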
We denote by ${\mathcal{S}}_{k,l}={\mathcal{S}}_{k,l}(q,n,{\mathcal{B}})$ the maximum size of any $(k,l)$-orthogonal subset of ${\mathbb{F}}_{q}^{n}$ with respect to ${\mathcal{B}}$. Given a set $X\subset{\mathbb{F}}_{q}^{n}$, we use $\langle X\rangle$ to denote the subspace of ${\mathbb{F}}_{q}^{n}$ generated by $X$ and write $\langle\bm{v}_{1},\cdots,\bm{v}_{k}\rangle$, instead of $\langle\\{\bm{v}_{1},\cdots,\bm{v}_{k}\\}\rangle$. We also define the _orthogonal complement_ of $X$ by $X^{\perp}=\\{\bm{v}\in{\mathbb{F}}_{q}^{n}:{\mathcal{B}}(\bm{v},\bm{x})=0\ \text{for all }\bm{x}\in X\\}$, which constitutes a subspace of ${\mathbb{F}}_{q}^{n}$. Finally, for sets $S,S_{1},\dots,S_{k}$, where $k\geq 2$, we write $S=S_{1}\sqcup S_{2}\sqcup\cdots\sqcup S_{k}$ to mean firstly that $S=S_{1}\cup S_{2}\cup\cdots\cup S_{k}$ and secondly that $S_{i}\cap S_{j}=\emptyset$ for $1\leq i<j\leq k$. ### 1.3. Previous results for almost orthogonal sets In [1, Examples 12–15], explicit examples of $(3,2)$-orthogonal sets are provided for odd $q$, showing that (3) ${\mathcal{S}}_{3,2}(q,n,{\mathcal{B}})\geq\begin{cases}2q^{\frac{n-1}{2}},&\mbox{if }n\ \text{is odd and }\varepsilon({\mathcal{B}})=1;\\\ 2q^{\frac{n-1}{2}}-q+1,&\mbox{if }n\ \text{is odd and }\varepsilon({\mathcal{B}})=\gamma;\\\ 2q^{\frac{n}{2}}-q-1,&\mbox{if }n\ \text{is even and}\ \varepsilon(\mathcal{B})=1;\\\ 2q^{\frac{n}{2}-1}+2,&\mbox{if }n\ \text{is even and}\ \varepsilon(\mathcal{B})=\gamma.\end{cases}$ The examples also work for $q=2$, showing that (4) ${\mathcal{S}}_{3,2}(2,n,\cdot)\geq\begin{cases}2^{\frac{n+1}{2}},&\mbox{if $n$ is odd};\\\ 2^{\frac{n}{2}+1}-3,&\mbox{if $n$ is even}.\end{cases}$ The authors further conjectured, in [1, Conjectures 11 and 16], that the inequalities in (3) and (4) could be replaced by equalities, and outlined a proof, in [1, Theorem 17], that for all $n$ and either of the bilinear forms (1) and (2), ${\mathcal{S}}_{3,2}\leq 3q^{\lfloor\frac{n}{2}\rfloor}$. ### 1.4. 
Improved lower bounds on $\mathcal{S}_{3,2}$ We proceed to present examples of $(3,2)$-orthogonal subsets of ${\mathbb{F}}_{q}^{n}$ for odd $q$ that have slightly more elements than the examples given in (3). Theorem 2.1, which is stated in the next section, shows that the examples described below are of maximum size. We denote by $\\{\bm{e}_{1},\dots,\bm{e}_{n}\\}$ the standard basis of ${\mathbb{F}}_{q}^{n}$. ###### Example 1.2. Let $q$ be odd and $n=2k+1\geq 3$ and ${\mathcal{B}}$ satisfy $\varepsilon({\mathcal{B}})\in\\{1,\gamma\\}$. Consider the two mutually disjoint orthogonal sets $S_{1}=\big{(}\\{(x_{1},x_{1},\dots,x_{k},x_{k},0):x_{1},\dots,x_{k}\in{\mathbb{F}}_{q}\\}\setminus\\{\bm{0}\\}\big{)}\sqcup\\{\bm{e}_{n-2}+\bm{e}_{n-1}+\bm{e}_{n}\\}$ and $S_{2}=\big{(}\\{(x_{1},-x_{1},\dots,x_{k},-x_{k},0):x_{1},\dots,x_{k}\in{\mathbb{F}}_{q}\\}\setminus\\{\bm{0}\\}\big{)}\sqcup\\{\bm{e}_{n-2}-\bm{e}_{n-1}-2\varepsilon({\mathcal{B}})^{-1}\bm{e}_{n}\\}.$ Then, the set $S=S_{1}\sqcup S_{2}\sqcup\\{\bm{e}_{n}\\}$ is a $(3,2)$-orthogonal set of size $2q^{k}+1=2q^{(n-1)/2}+1$. ###### Example 1.3. Let $q$ be odd and $n=2k\geq 2$ and ${\mathcal{B}}$ satisfy $\varepsilon({\mathcal{B}})=1$. Consider the two mutually disjoint orthogonal sets $S_{1}=\\{(x_{1},x_{1},\dots,x_{k},x_{k}):x_{1},\dots,x_{k}\in{\mathbb{F}}_{q}\\}\setminus\\{\bm{0}\\}$ and $S_{2}=\\{(x_{1},-x_{1},\dots,x_{k},-x_{k}):x_{1},\dots,x_{k}\in{\mathbb{F}}_{q}\\}\setminus\\{\bm{0}\\}.$ Then, the set $S=S_{1}\sqcup S_{2}$ is a $(3,2)$-orthogonal set of size $2q^{k}-2=2q^{n/2}-2$. ###### Example 1.4. Let $q$ be congruent to 3 modulo 4 and $n=2k\geq 4$ and ${\mathcal{B}}$ satisfy $\varepsilon({\mathcal{B}})=-1$ ($-1$ is not a square). 
Consider the three pairwise disjoint orthogonal sets $\displaystyle S_{1}=\big{(}\\{($ $\displaystyle x_{1},x_{1},\dots,x_{k-1},x_{k-1},0,0):x_{1},\dots,x_{k-1}\in{\mathbb{F}}_{q}\\}\setminus\\{\bm{0}\\}\big{)}\sqcup\\{\bm{e}_{n-3}+\bm{e}_{n-2}+\bm{e}_{n-1}+\bm{e}_{n},$ $\displaystyle\bm{e}_{n-3}+\bm{e}_{n-2}+\bm{e}_{n-1}-\bm{e}_{n}\\}$ and $\displaystyle S_{2}=\big{(}\\{($ $\displaystyle x_{1},-x_{1},\dots,x_{k-1},-x_{k-1},0,0):x_{1},\dots,x_{k-1}\in{\mathbb{F}}_{q}\\}\setminus\\{\bm{0}\\}\big{)}\sqcup\\{\bm{e}_{n-3}-\bm{e}_{n-2}-\bm{e}_{n-1}-\bm{e}_{n},$ $\displaystyle\bm{e}_{n-3}-\bm{e}_{n-2}-\bm{e}_{n-1}+\bm{e}_{n}\\},$ and $S_{3}=\\{\bm{e}_{n-1}+\bm{e}_{n},\bm{e}_{n-1}-\bm{e}_{n}\\}.$ Then, the set $S=S_{1}\sqcup S_{2}\sqcup S_{3}$ is a $(3,2)$-orthogonal set of size $2q^{k-1}+4=2q^{n/2-1}+4$. For $n=2$, $\\{\bm{e}_{1},2\bm{e}_{1},\bm{e}_{2},2\bm{e}_{2}\\}$ is a $(3,2)$-orthogonal set of size 4. Next, we provide examples for $q=2$. Theorem 2.2 shows they are of maximum size. The following example is obtained by adding a single element to the example given by [1, Example 12]. ###### Example 1.5. Let $n=2k+1\geq 3$ and ${\mathcal{B}}$ denote the dot product in ${\mathbb{F}}_{2}^{n}$. Consider the disjoint orthogonal sets $S_{1}=\big{(}\\{(0,x_{1},x_{1}\dots,x_{k},x_{k}):x_{1},\dots,x_{k}\in{\mathbb{F}}_{2}\\}\setminus\\{\bm{0}\\}\big{)}\sqcup\\{\bm{e}_{1}\\}$ and $S_{2}=\big{(}\\{(x_{1},x_{1},\dots,x_{k},x_{k},0):x_{1},\dots,x_{k}\in{\mathbb{F}}_{2}\\}\setminus\\{\bm{0}\\}\big{)}\sqcup\\{\bm{e}_{n}\\}.$ Then, the set $S=S_{1}\sqcup S_{2}\sqcup\\{\bm{e}_{1}+\bm{e}_{2}+\dots+\bm{e}_{n}\\}$ is a $(3,2)$-orthogonal set of size $2^{k+1}+1=2^{(n+1)/2}+1$. The example below is the same as [1, Example 14]. ###### Example 1.6. Let $n=2k\geq 2$ and ${\mathcal{B}}$ denote the dot product in ${\mathbb{F}}_{2}^{n}$. 
Consider the orthogonal sets $S_{1}=\\{(x_{1},x_{1},\dots,x_{k},x_{k}):x_{1},\dots,x_{k}\in{\mathbb{F}}_{2}\\}\setminus\\{\bm{0}\\}$ and $S_{2}=\\{(x_{k},x_{1},x_{1},\dots,x_{k-1},x_{k-1},x_{k}):x_{1},\dots,x_{k}\in{\mathbb{F}}_{2}\\}\setminus\\{\bm{0}\\}.$ Then, noting $(1,1,\dots,1,1)\in S_{1}\cap S_{2}$, it follows that the set $S=S_{1}\cup S_{2}$ is a $(3,2)$-orthogonal set of size $2^{k+1}-3=2^{n/2+1}-3$. Our final example concerns the hyperbolic form. ###### Example 1.7. Let $n=2k\geq 2$ and ${\mathcal{H}}$ denote the hyperbolic form in ${\mathbb{F}}_{2}^{n}$. Consider the disjoint orthogonal sets $S_{1}=\langle\bm{e}_{1},\bm{e}_{3},\bm{e}_{5},\dots,\bm{e}_{2k-1}\rangle\setminus\\{\bm{0}\\}$ and $S_{2}=\langle\bm{e}_{2},\bm{e}_{4},\bm{e}_{6},\dots,\bm{e}_{2k}\rangle\setminus\\{\bm{0}\\}.$ It follows that the set $S=S_{1}\cup S_{2}$ is a $(3,2)$-orthogonal set of size $2^{k+1}-2=2^{n/2+1}-2$.

## 2\. Main results

Our first main result is an upper bound on the size of $(3,2)$-orthogonal sets for odd $q$. ###### Theorem 2.1. Let $n\geq 0$ be an integer and $q\geq 7$ be an odd prime power.
If $S\subset{\mathbb{F}}_{q}^{n}$ is $(3,2)$-orthogonal with respect to a non- degenerate symmetric bilinear form ${\mathcal{B}}$, then $|S|\leq\begin{cases}2q^{\frac{n-1}{2}}+1,&\mbox{if }n\ \text{is odd};\\\ 2q^{\frac{n}{2}}-2,&\mbox{if }n\ \text{is even and}\ \varepsilon(\mathcal{B})=1;\\\ 2q^{\frac{n}{2}-1}+4,&\mbox{if }n\geq 4\ \text{is even and}\ \varepsilon(\mathcal{B})=\gamma;\\\ 4,&\mbox{if }n=2\ \text{ and}\ \varepsilon(\mathcal{B})=\gamma.\end{cases}$ Combined with Examples 1.2, 1.3, and 1.4, Theorem 2.1 establishes the value of $\mathcal{S}_{3,2}$ for all $n$, sufficiently large $q$, and ${\mathcal{B}}$: $\mathcal{S}_{3,2}(q,n,{\mathcal{B}})=\begin{cases}2q^{\frac{n-1}{2}}+1,&\mbox{if }n\geq 3\ \text{is odd};\\\ 2q^{\frac{n}{2}}-2,&\mbox{if }n\geq 2\ \text{is even and}\ \varepsilon(\mathcal{B})=1;\\\ 2q^{\frac{n}{2}-1}+4,&\mbox{if }n\geq 4\ \text{is even and}\ \varepsilon(\mathcal{B})=\gamma;\\\ 4,&\mbox{if }n=2\ \text{ and}\ \varepsilon(\mathcal{B})=\gamma.\end{cases}$ In contrast to the Euclidean space ${\mathbb{R}}^{n}$, $\mathcal{S}_{3,2}$ is sometimes larger than twice the size of the largest orthogonal set (specifically when $n\geq 3$ is odd or when $n\geq 4$ is even and $\varepsilon({\mathcal{B}})=\gamma$). See [1] or Lemma 3.3 below for the size of the largest orthogonal set for the various possibilities of $n$, $q$, and ${\mathcal{B}}$. The proof of Theorem 2.1 relies on the framework developed by Ahmadi and Mohammadian [1]. It also has similarities with the work of Berlekamp [5] and the paper of Deaett [6]. In Section 5 we present a different argument for even $n$ with $\varepsilon({\mathcal{B}})=1$ that is based on character sum estimates. The proof in Section 5 works for all odd $q$. We also answer the corresponding question for $q=2$. ###### Theorem 2.2. Let $n$ be an integer. 
If $S\subset{\mathbb{F}}_{2}^{n}$ is $(3,2)$-orthogonal with respect to the dot product, then $|S|\leq\begin{cases}2^{\frac{n+1}{2}}+1,\quad&\text{if}\quad n\geq 21\ \text{is odd};\\\ 2^{\frac{n}{2}+1}-3,\quad&\text{if}\quad n\geq 18\ \text{is even}.\end{cases}$ For even $n\geq 2$, if $S\subset{\mathbb{F}}_{2}^{n}$ is $(3,2)$-orthogonal with respect to the hyperbolic form ${\mathcal{H}}$, then $|S|\leq 2^{\frac{n}{2}+1}-2.$ Combined with Examples 1.5, 1.6, and 1.7, Theorem 2.2 establishes the value of $\mathcal{S}_{3,2}$ for all sufficiently large $n$: $\mathcal{S}_{3,2}(2,n,{\mathcal{B}})=\begin{cases}2^{\frac{n+1}{2}}+1,&\mbox{if }n\geq 21\ \text{is odd and }{\mathcal{B}}=\cdot\,;\\\ 2^{\frac{n}{2}+1}-3,&\mbox{if }n\geq 18\ \text{is even and}\ \mathcal{B}=\cdot\,;\\\ 2^{\frac{n}{2}+1}-2,&\mbox{if }n\geq 2\ \text{is even and}\ \mathcal{B}={\mathcal{H}}.\end{cases}$ As highlighted in [1], the quantity $\mathcal{S}_{3,2}(2,n,\cdot)$ may be interpreted as the size of a largest family $\mathcal{F}$ of non-empty subsets of an $n$-element set such that among every three distinct elements of $\mathcal{F}$, there is a pair of sets whose intersection is of even cardinality. The values above confirm [1, Conjecture 16] of Ahmadi and Mohammadian for sufficiently large even $n$. For odd $n$, however, we have shown that the relevant value is larger by one than what was conjectured. The assumption that $n$ is sufficiently large cannot be removed; see Remark 6.7. We also prove an upper bound for $\mathcal{S}_{k,2}(q,n,{\mathcal{B}})$. This is the finite field analogue of another question of Erdős [4, 7]. ###### Theorem 2.3. Let $q$ be an odd prime power and $k\geq 2$ be an integer. Suppose that $S\subset{\mathbb{F}}_{q}^{n}\setminus\\{\bm{0}\\}$ is $(k,2)$-orthogonal with respect to a non-degenerate symmetric bilinear form ${\mathcal{B}}$.
Then $|S|\leq\bigg{(}k-1+\frac{(k-1)^{2}}{q-k+1}\bigg{)}(q^{n/2}+1).$ Theorem 2.3 implies $\mathcal{S}_{k,2}\leq(k-1+o_{q\to\infty}(1))q^{n/2},$ which is asymptotically sharp in some cases. For example, for odd $q$, $k=n=4$, and $\varepsilon({\mathcal{B}})=1$, the union of the following three pairwise disjoint orthogonal sets has size $3(q^{2}-1)$: $S_{1}=\\{(x_{1},x_{1},x_{2},x_{2}):x_{1},x_{2}\in{\mathbb{F}}_{q}\\}\setminus\\{\bm{0}\\}$ and $S_{2}=\\{(x_{1},-x_{1},x_{2},-x_{2}):x_{1},x_{2}\in{\mathbb{F}}_{q}\\}\setminus\\{\bm{0}\\}$ and $S_{3}=\\{(x_{1},x_{2},x_{2},-x_{1}):x_{1},x_{2}\in{\mathbb{F}}_{q}\\}\setminus\\{\bm{0}\\}.$

### 2.1. Outline of the proofs of Theorems 2.1 and 2.2

The proofs of Theorems 2.1 and 2.2 are based on an inductive scheme developed by Ahmadi and Mohammadian [1]. To outline the argument, let us denote by $d_{n}=d_{n}(q,{\mathcal{B}})$ the dimension of the largest orthogonal subspace of ${\mathbb{F}}_{q}^{n}$ with respect to a non-degenerate symmetric bilinear form ${\mathcal{B}}$. Both theorems take the form (5) ${\mathcal{S}}_{3,2}(q,n,{\mathcal{B}})=2q^{d_{n}}+f(q,n,{\mathcal{B}}),$ where $f(q,n,{\mathcal{B}})\in\\{-3,-2,1,4\\}$. We show this by first proving by induction a weaker statement of the form ${\mathcal{S}}_{3,2}(q,n,{\mathcal{B}})\leq(2+o(1))q^{d_{n}}$. Note here that the $o(1)$ term is for $q\to\infty$ for odd $q$ and $n\to\infty$ for $q=2$. A technical difficulty in carrying out the induction is that one must ensure that the restriction of ${\mathcal{B}}$ to the orthogonal complement considered is non-degenerate. The inductive argument that proves the weaker bound is based on the basic observation that for every $\bm{v}\in S$, the set of elements in $S\setminus\\{\bm{v}\\}$ not orthogonal to $\bm{v}$ constitutes an orthogonal set. The structure of orthogonal sets has been determined by Berlekamp, and Ahmadi and Mohammadian [5, 1]. A key feature is that they contain few elements not in a single orthogonal subspace.
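As a sanity check (purely illustrative), the $k=n=4$ sharpness example above can be verified exhaustively over a small field. The snippet below is a minimal sketch; it assumes the diagonal form $\mathrm{diag}(1,-1,1,-1)$ as a concrete non-degenerate symmetric bilinear form under which the three sets are orthogonal (we do not verify here that it represents the class with $\varepsilon({\mathcal{B}})=1$).

```python
from itertools import product, combinations

q = 3  # a small odd prime, chosen so the exhaustive check is fast
D = [1, -1, 1, -1]  # assumed diagonal bilinear form on F_q^4

def B(x, y):
    """Evaluate the bilinear form B(x, y) modulo q."""
    return sum(d * a * b for d, a, b in zip(D, x, y)) % q

Fq = range(q)
zero = (0, 0, 0, 0)
S1 = {(x1, x1, x2, x2) for x1, x2 in product(Fq, repeat=2)} - {zero}
S2 = {(x1, (-x1) % q, x2, (-x2) % q) for x1, x2 in product(Fq, repeat=2)} - {zero}
S3 = {(x1, x2, x2, (-x1) % q) for x1, x2 in product(Fq, repeat=2)} - {zero}

S = S1 | S2 | S3
# Pairwise disjointness gives the claimed size 3(q^2 - 1).
assert len(S) == 3 * (q**2 - 1)

# Each S_i is an orthogonal set, so among any 4 vectors two lie in the
# same set and are therefore orthogonal: S is (4,2)-orthogonal.
assert all(
    any(B(x, y) == 0 for x, y in combinations(quad, 2))
    for quad in combinations(S, 4)
)
print("verified: (4,2)-orthogonal set of size", len(S))
```

The exhaustive loop confirms the pigeonhole step: with three pairwise disjoint orthogonal sets, any four vectors must contain two from the same set.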
We make repeated use of this fact. The second basic fact we use to our advantage for large odd $q$ is that a proper subspace of a vector space is significantly smaller than the vector space. Once the weak bound has been established, it is used to determine $f(q,n,{\mathcal{B}})$. It is at this point that different arguments must be used according to $(q,n,{\mathcal{B}})$. The key observation, implicit in the literature, is that if an orthogonal set contains just a few elements that are not self-orthogonal, then it is much smaller than $q^{d_{n}}$ (for large $q$). For $q=2$, which is not large, slightly different arguments are utilised. The basic fact that drives the proof is that, unlike for odd $q$, the set of vectors not orthogonal to any $\bm{v}$ contains at most half of any orthogonal subspace. The proofs of Theorems 2.1 and 2.2 probably yield a characterisation of nearly extremal sets: they are mostly contained in two disjoint orthogonal subspaces of maximum dimension.

## 3\. Preparations

We begin with some basic facts about Ramsey numbers. Given positive integers $s,t$, the _Ramsey number_ $R(s,t)$ is the least integer with the property that every graph on $R(s,t)$ vertices either contains a $K_{s}$ or the complement of the graph contains a $K_{t}$. We will use the following bounds: (6) $R(3,3)=6,R(3,4)=9,R(s,t)\leq\binom{s+t-2}{s-1}.$ The connection between Ramsey numbers and almost orthogonal sets goes back to at least the paper of Deaett [6]. A connection with other similar questions of Erdős is detailed in [14]. We deduce the following elementary observation concerning graphs. ###### Lemma 3.1. A triangle-free graph with the property that its complement is also triangle-free is either the 5-cycle or has at most 4 vertices. ###### Proof. We denote by $H$ the graph. Its order is at most $R(3,3)-1=5$ (by (6)). Suppose now that the order of $H$ is 5. Note that $H$ does not have a vertex of degree at least 3.
This is because if such a vertex existed, then either two of its neighbours would be connected by an edge, giving rise to a triangle; or none of its neighbours would be connected by an edge, giving a triangle in the complement. Similarly, the complement has no vertex of degree at least 3. Therefore all vertices of the graph have degree 2. Finite graphs of constant degree 2 contain a cycle. The graph $H$ has no 3-cycle. It does not contain a 4-cycle (the fifth vertex would be isolated). Therefore it contains a 5-cycle, which is the entire graph. ∎ We proceed with results concerning orthogonal sets. The most important is a structural characterisation of orthogonal sets which we take from [1, Lemma 3] – see also [5] for $q=2$. ###### Lemma 3.2. Let $\mathcal{B}$ denote a non-degenerate, symmetric bilinear form over ${\mathbb{F}}_{q}^{n}$, where $q$ is a prime power and $n\geq 2$. Suppose that $S\subset{\mathbb{F}}_{q}^{n}$ is an orthogonal set. Then, there exists an orthogonal subspace $V\subset\\{\bm{x}\in{\mathbb{F}}_{q}^{n}:\mathcal{B}(\bm{x},\bm{x})=0\\}$ and a set $T=\\{\bm{x}\in S:\mathcal{B}(\bm{x},\bm{x})\not=0\\}$, such that $S\subset V\sqcup T$ and $2\dim(V)+|T|\leq n.$ Next, we recall [1, Theorem 4], which, relying mainly on Lemma 3.2, obtains the following sharp bound on orthogonal sets. We note that in [1] $\mathcal{S}_{2,2}(q,n,{\mathcal{B}})$ is denoted by $\mathcal{S}_{0}(q,n)$. ###### Lemma 3.3. For $n\geq 2$ and a prime power $q\geq 3$, let $\mathcal{B}$ denote a non-degenerate symmetric bilinear form over ${\mathbb{F}}_{q}^{n}$. Then ${\mathcal{S}}_{2,2}(q,n,{\mathcal{B}})=\begin{cases}q^{\frac{n-1}{2}},&\mbox{if }n\ \text{is odd};\\\ q^{\frac{n}{2}}-1,&\mbox{if }n\ \text{is even and}\ \varepsilon(\mathcal{B})=1;\\\ q^{\frac{n}{2}-1}+1,&\mbox{if }n\ \text{is even and}\ \varepsilon(\mathcal{B})=\gamma.\end{cases}$ An analogue of Lemma 3.3, for $q=2$, was earlier proved by Berlekamp [5]. Also see [8]. ###### Lemma 3.4.
${\mathcal{S}}_{2,2}(2,n,\cdot)=\begin{cases}n,&\mbox{if }n\leq 5;\\\ 1+2^{\frac{n-1}{2}},&\mbox{if }n\ \text{is odd and}\ n\geq 7;\\\ 2^{\frac{n}{2}},&\mbox{if }n\ \text{is even and}\ n\geq 6.\end{cases}$ The following corollary is central to our considerations. ###### Lemma 3.5. For $n\geq 2$ and a prime power $q$, let $S\subset{\mathbb{F}}_{q}^{n}$ be a $(3,2)$-orthogonal set with respect to a non-degenerate symmetric bilinear form ${\mathcal{B}}$ over ${\mathbb{F}}_{q}^{n}$. For $\bm{s}\in S$, define (7) $S_{\bm{s}}=\\{\bm{x}\in S\setminus\\{\bm{s}\\}:\ \mathcal{B}(\bm{x},\bm{s})\not=0\\}.$ If $|S_{\bm{s}}|\geq 2$, then $S_{\bm{s}}$ is an orthogonal set. In particular, we may write for all $\bm{s}\in S$ $S_{\bm{s}}=R_{\bm{s}}\sqcup T_{\bm{s}},$ where $T_{\bm{s}}=\\{\bm{x}\in S_{\bm{s}}:\mathcal{B}(\bm{x},\bm{x})\not=0\\}$ and $\langle R_{\bm{s}}\rangle=V_{\bm{s}}$, an orthogonal subspace of ${\mathbb{F}}_{q}^{n}$ that contains only self-orthogonal vectors. ###### Proof. Given two distinct vectors $\bm{x_{1}},\bm{x_{2}}\in S_{\bm{s}}$, by the $(3,2)$-orthogonality of $S$, two of $\\{\bm{x_{1}},\bm{x_{2}},\bm{s}\\}$ must be mutually orthogonal. Thus, given the definition of $S_{\bm{s}}$, we must have $\mathcal{B}(\bm{x_{1}},\bm{x_{2}})=0$. The rest follows from Lemma 3.2 or is immediate when $|S_{\bm{s}}|\leq 1$. ∎ We also collect some basic facts about orthogonal subspaces as follows. ###### Lemma 3.6. Let $V\subset{\mathbb{F}}_{q}^{n}$ denote an orthogonal subspace with at least three elements. 1. (i) Every vector in $V$ is self-orthogonal. 2. (ii) Suppose that $V$ is of maximum dimension and $V=\langle R\rangle$ for some $R\subset{\mathbb{F}}_{q}^{n}$. If $\bm{z}\notin V$ is a self-orthogonal vector, then $\bm{z}$ is not orthogonal to $R$. ###### Proof. For the first statement, let $\bm{x}$ be any element of $V$ and $\bm{y}$ some other element of $V$. 
Since $|V|\geq 3$, we may choose $\bm{y}\neq\bm{0}$; then $\bm{x}+\bm{y}\in V\setminus\\{\bm{x}\\}$ and so $0={\mathcal{B}}(\bm{x},\bm{x}+\bm{y})={\mathcal{B}}(\bm{x},\bm{x})+{\mathcal{B}}(\bm{x},\bm{y})={\mathcal{B}}(\bm{x},\bm{x}).$ For the second statement, we have ${\mathcal{B}}(\bm{z},\bm{z})=0$. Suppose for a contradiction that $\bm{z}\perp R$. Then $\bm{z}\perp V$. Note that by the first part, for all $\lambda,\mu\in{\mathbb{F}}_{q}$ and $\bm{x},\bm{y}\in V$ we have ${\mathcal{B}}(\lambda\bm{z}+\bm{x},\mu\bm{z}+\bm{y})=\lambda\mu{\mathcal{B}}(\bm{z},\bm{z})+\lambda{\mathcal{B}}(\bm{z},\bm{y})+\mu{\mathcal{B}}(\bm{z},\bm{x})+{\mathcal{B}}(\bm{x},\bm{y})=0.$ Hence $\langle\\{\bm{z}\\}\cup R\rangle$ is an orthogonal subspace that strictly contains $V$, a contradiction. ∎ The next result forms the basis of the induction argument in the proof of Theorem 2.1. The key is to show that if we restrict ${\mathcal{B}}$ to a certain type of orthogonal complement, then it remains non-degenerate; and that, under a further condition, the equivalence class of ${\mathcal{B}}$ is conserved. ###### Lemma 3.7. Let $n\geq 2$, $\mathcal{B}$ be a non-degenerate symmetric bilinear form over ${\mathbb{F}}_{q}^{n}$, and $\\{\bm{v},\bm{w}\\}\subset{\mathbb{F}}_{q}^{n}$ be linearly independent. 1. (i) If ${\mathcal{B}}(\bm{v},\bm{w})^{2}\neq{\mathcal{B}}(\bm{v},\bm{v}){\mathcal{B}}(\bm{w},\bm{w}),$ then the restriction $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{v},\bm{w}\\}^{\perp}$}$ of $\mathcal{B}$ to the orthogonal complement of $\\{\bm{v},\bm{w}\\}$ is a non-degenerate symmetric bilinear form. 2. (ii) If $q$ is odd, $\bm{v}$ and $\bm{w}$ are not mutually orthogonal, and $\bm{w}$ is self-orthogonal (that is ${\mathcal{B}}(\bm{v},\bm{w})\neq 0$ and ${\mathcal{B}}(\bm{w},\bm{w})=0$), then $\varepsilon(\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{v},\bm{w}\\}^{\perp}$})=\varepsilon(\mathcal{B})$. 3.
(iii) If $q=2$, $n$ is even, $\bm{v}$ and $\bm{w}$ are not mutually orthogonal, and both $\bm{v},\bm{w}$ are self-orthogonal (that is ${\mathcal{B}}(\bm{v},\bm{w})\neq 0$ and ${\mathcal{B}}(\bm{v},\bm{v})={\mathcal{B}}(\bm{w},\bm{w})=0$), then ${\mathcal{B}}$ is equivalent to ${\mathcal{H}}$ if and only if $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{v},\bm{w}\\}^{\perp}$}$ is equivalent to ${\mathcal{H}}$. ###### Proof. Throughout the proof we write $a={\mathcal{B}}(\bm{v},\bm{v}),b={\mathcal{B}}(\bm{v},\bm{w}),c={\mathcal{B}}(\bm{w},\bm{w}).$ For (i), we first show $\langle\bm{v},\bm{w}\rangle\cap\\{\bm{v},\bm{w}\\}^{\perp}=\\{\bm{0}\\}$ and therefore that ${\mathbb{F}}_{q}^{n}=\langle\bm{v},\bm{w}\rangle\oplus\\{\bm{v},\bm{w}\\}^{\perp}$. Suppose $\lambda\bm{v}+\mu\bm{w}\in\\{\bm{v},\bm{w}\\}^{\perp}$. Applying ${\mathcal{B}}(\bm{v},-)$ and then ${\mathcal{B}}(\bm{w},-)$ to both sides gives the linear system $\left\\{\begin{matrix}\lambda a+\mu b=0\\\ \lambda b+\mu c=0\end{matrix}\right..$ It follows that $\lambda=\mu=0$ because $ac\neq b^{2}$. We write $M_{1}$ for the matrix of $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\langle\bm{v},\bm{w}\rangle$}$ with respect to the basis $\\{\bm{v},\bm{w}\\}$, $M_{2}$ for the matrix of $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{v},\bm{w}\\}^{\perp}$}$ with respect to any basis, and $M$ for the matrix of $\mathcal{B}$ with respect to the union of these two bases, then $M=\begin{pmatrix}M_{1}&0\\\ 0&M_{2}\end{pmatrix}.$ It follows immediately that if $\mathcal{B}$ is non-degenerate, then so is $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{v},\bm{w}\\}^{\perp}$}$. For (ii), we have $b\neq 0$ and $c=0$. 
We show $M_{1}=\begin{pmatrix}a&b\\\ b&0\end{pmatrix}\sim\begin{pmatrix}1&0\\\ 0&-1\end{pmatrix},$ which proves $\varepsilon(\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{v},\bm{w}\\}^{\perp}$})=\varepsilon(\mathcal{B})$. Let $\alpha,\beta,\gamma$ be solutions to $\alpha^{2}-\gamma^{2}=a$ and $\beta(\alpha-\gamma)=b$. Such $\alpha,\gamma$ exist because every element of ${\mathbb{F}}_{q}$ is the difference of two squares. The characteristic is not 2, so we can always take $\alpha\neq\gamma$ (even when $a=0$). Then there exists a suitable $\beta$. Now a simple calculation confirms $\begin{pmatrix}\alpha&\beta\\\ \gamma&\beta\end{pmatrix}^{T}\begin{pmatrix}1&0\\\ 0&-1\end{pmatrix}\begin{pmatrix}\alpha&\beta\\\ \gamma&\beta\end{pmatrix}=\begin{pmatrix}\alpha^{2}-\gamma^{2}&\beta(\alpha-\gamma)\\\ \beta(\alpha-\gamma)&0\end{pmatrix}=\begin{pmatrix}a&b\\\ b&0\end{pmatrix};$ and $\det\begin{pmatrix}\alpha&\beta\\\ \gamma&\beta\end{pmatrix}=\beta(\alpha-\gamma)=b\neq 0$. For (iii), we have $b=1$ and $a=c=0$. Therefore $M_{1}=H$. So ${\mathcal{B}}$ is equivalent to ${\mathcal{H}}$ if and only if $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{v},\bm{w}\\}^{\perp}$}$ is equivalent to ${\mathcal{H}}$. ∎ ###### Remark 3.8. The condition in part (i) of Lemma 3.7 is necessary. Take, for example, $n=4$ and ${\mathcal{B}}$ the bilinear form given by the diagonal matrix with diagonal entries $(1,-1,1,-1)$. ${\mathcal{B}}$ is the dot product when $q=2$. Take $\bm{v}=(1,0,0,0)$ and $\bm{w}=(1,1,1,0)$. These are two linearly independent vectors with the numbers $a,b,c$ defined in the proof of the lemma all equal to 1. Hence $ac=b^{2}$. It is not true that $\langle\bm{v},\bm{w}\rangle$ trivially intersects $\\{\bm{v},\bm{w}\\}^{\perp}$ because the span of $\bm{w}-\bm{v}=(0,1,1,0)$ lies in both subspaces. 
Furthermore, ${\mathcal{B}}$ restricted to $\\{\bm{v},\bm{w}\\}^{\perp}$ is degenerate because $\bm{w}-\bm{v}$ is orthogonal to both $\bm{w}-\bm{v}$ and $(0,0,0,1)$, which span $\\{\bm{v},\bm{w}\\}^{\perp}$. The next step is to bound the number of vectors in any $(3,2)$-orthogonal subset in ${\mathbb{F}}_{q}^{n}$ that are not self-orthogonal. It may be true that, analogously to the results of Rosenfeld and Deaett [16, 6], there are at most $2n$ such vectors. We prove a weaker result that suffices for our purposes. As part of the proof, we require a straightforward adaptation of [6, Proposition 4.4], which we state. The proof is nearly identical to that in [6]. ###### Lemma 3.9. Let $n\geq 1$ be an integer, $F$ be a field, and $S\subset F^{n}$ be a $(3,2)$-orthogonal set with respect to a symmetric bilinear form. If $B\subset S$ is an orthogonal basis for $F^{n}$, then $S\setminus B$ is an orthogonal set. We state and prove another result that is implicit in [6, Section 4]. It is convenient to phrase many of the subsequent arguments in terms of the simple graph $G$ with vertex set $S$ and edges given by pairs of elements of $S$ that are not mutually orthogonal ($\bm{xy}$ is an edge precisely when ${\mathcal{B}}(\bm{x},\bm{y})\neq 0$). ###### Lemma 3.10. Let $n\geq 0$ be an integer, $F$ a field, and $D\subset F^{n}$ a $(3,2)$-orthogonal set with respect to a symmetric bilinear form. If $D$ consists entirely of vectors that are not self-orthogonal, then $|D|\leq\max\\{2n,R(3,n)-1\\}\overset{\eqref{eqn:Ramsey}}{=}\begin{cases}2n,&\mbox{if }0\leq n\leq 4;\\\ \tfrac{n(n+1)}{2}-1,&\mbox{if }n\geq 5.\end{cases}$ ###### Proof. We use the graph $G$ described just above the statement of the lemma. The claim is true for $n=0$. For $n\geq 1$ we observe that an independent set of vertices is an orthogonal set in $F^{n}$ and so is linearly independent (we need here that all vectors in $D$ are not self-orthogonal).
If $G$ has an independent set $B$ of size $n$, then that set is linearly independent and therefore is a basis for $F^{n}$. By Lemma 3.9 we get that $D\setminus B$ is orthogonal and hence contains at most $n$ elements. Hence $|D|=|B|+|D\setminus B|\leq 2n$. If $G$, which is triangle-free, has no independent set of size $n$, then $|D|<R(3,n)$, by the definition of $R(3,n)$. ∎ Note that by work of Ajtai, Komlós and Szemerédi, and of Kim [2, 11] $R(3,n)=\Theta\Big(\frac{n^{2}}{\log n}\Big),$ with stronger explicit upper bounds in [17]. This means that $|D|=o(n^{2})$. We also extract this consequence of Lemma 3.2 and Lemma 3.9 from the proof of [1, Theorem 17]. ###### Lemma 3.11. Let $S\subset{\mathbb{F}}_{q}^{n}$ be a $(3,2)$-orthogonal set with respect to a non-degenerate symmetric bilinear form ${\mathcal{B}}$. If every pair of linearly independent vectors in $S$ is mutually orthogonal (that is ${\mathcal{B}}(\bm{x},\bm{y})=0$ for every linearly independent $\\{\bm{x},\bm{y}\\}\subset S$), then $|S|\leq{\mathcal{S}}_{2,2}(q,n,{\mathcal{B}})+n$. As the final result of this section, we recall [1, Theorem 17], which is a quantitatively weaker version of Theorem 2.1. We will use this result in Section 5 and so provide a proof which follows the same scheme as that introduced in [1], while paying special attention to certain intricacies involved in carrying out the induction. In particular, the proof relies on Lemma 3.7 to sidestep a potential issue that appears to have been overlooked in the original proof of [1, Theorem 17]. It also serves as a prelude to the proof of Theorem 2.1. We employ for the first of many times a decomposition of a $(3,2)$-orthogonal set $S$ that appears in [1], and so we describe it in detail. Given two distinct elements $\bm{x},\bm{y}\in S$, every element of $S\setminus\\{\bm{x},\bm{y}\\}$ is either not orthogonal to $\bm{x}$, or not orthogonal to $\bm{y}$, or orthogonal to both $\bm{x}$ and $\bm{y}$.
Using the notation of Lemma 3.5 we decompose $S$ as follows (8) $S=S_{\bm{x}}\cup S_{\bm{y}}\cup S_{\bm{x}\bm{y}}\cup\\{\bm{x},\bm{y}\\},$ where $S_{\bm{x}}$ and $S_{\bm{y}}$ are defined in (7) and $S_{\bm{x}\bm{y}}=S\cap\\{\bm{x},\bm{y}\\}^{\perp}$. Note that $\\{\bm{x},\bm{y}\\}$ can be left out if ${\mathcal{B}}(\bm{x},\bm{y})\neq 0$ because $\bm{x}\in S_{\bm{y}}$ and vice versa. When bounding $|S_{\bm{x}\bm{y}}|$ by induction it is essential that ${\mathcal{B}}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{x},\bm{y}\\}^{\perp}$}$ is non-degenerate. ###### Proposition 3.12. Let $q$ be odd and let ${\mathcal{B}}$ be a non-degenerate symmetric bilinear form over ${\mathbb{F}}_{q}^{n}$. If $S\subset{\mathbb{F}}_{q}^{n}\setminus\\{\bm{0}\\}$ is $(3,2)$-orthogonal, then $|S|\leq 3q^{\lfloor\frac{n}{2}\rfloor}.$ ###### Proof. We proceed by induction on $n$. Note that the result is true for $n\in\\{0,1\\}$ because $|S|\leq 2$ and assume it is also true for all dimensions strictly less than $n$. If every linearly independent pair of vectors in $S$ is mutually orthogonal, by Lemma 3.11 and Lemma 3.3, we have $|S|\leq{\mathcal{S}}_{2,2}(q,n,{\mathcal{B}})+n\leq 2q^{\lfloor\frac{n}{2}\rfloor}.$ Hence suppose there exists a linearly independent pair $\\{\bm{x},\bm{y}\\}\subset S$, with ${\mathcal{B}}(\bm{x},\bm{y})\not=0$. If at least one of these vectors is self-orthogonal, by Lemma 3.7 (i), ${\mathcal{B}}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{x},\bm{y}\\}^{\perp}$}$ is non-degenerate. Recalling the decomposition (8) and noting that $\bm{x}\in S_{\bm{y}}$ and $\bm{y}\in S_{\bm{x}}$, we have $|S|\leq|S_{\bm{x}}|+|S_{\bm{y}}|+|S_{\bm{xy}}|.$ Since $\bm{x}$ and $\bm{y}$ are linearly independent, $\\{\bm{x},\bm{y}\\}^{\perp}$ constitutes a subspace of ${\mathbb{F}}_{q}^{n}$ of dimension $n-2$. 
Then, using that $S_{\bm{xy}}\subset\\{\bm{x},\bm{y}\\}^{\perp}$, for $n\in\\{2,3\\}$ we have $|S_{\bm{xy}}|\leq q^{n-2}\leq q$ and for $n\geq 4$ we have $|S_{\bm{xy}}|\leq 3q^{\lfloor\frac{n}{2}\rfloor-1}$ by the induction hypothesis. Furthermore, by Lemma 3.3 and Lemma 3.5, we have $|S_{\bm{x}}|,|S_{\bm{y}}|\leq q^{\lfloor\frac{n}{2}\rfloor}$. Adding this all up, we obtain the required result in this case. Next, suppose that neither $\bm{x}$ nor $\bm{y}$ is self-orthogonal and note that in this case, we may no longer assume ${\mathcal{B}}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{x},\bm{y}\\}^{\perp}$}$ is non-degenerate (see Remark 3.8). If every pair of elements of $S_{\bm{xy}}$ is mutually orthogonal, by Lemma 3.3, we have $|S_{\bm{xy}}|\leq q^{\lfloor\frac{n}{2}\rfloor}$ and the required result follows. Hence suppose there exist $\bm{v},\bm{w}\in S_{\bm{xy}}$, with ${\mathcal{B}}(\bm{v},\bm{w})\not=0$. Again, if at least one of $\\{\bm{v},\bm{w}\\}$ is self-orthogonal, we may repeat the arguments of the first case to obtain the required result. Thus assume otherwise. Consider the decomposition (9) $S=S_{\bm{x}}\cup S_{\bm{v}}\cup S_{\bm{xv}}\cup\\{\bm{x},\bm{v}\\}.$ By Lemma 3.7 (i), ${\mathcal{B}}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{x},\bm{v}\\}^{\perp}$}$ is non-degenerate. Employing the notation of Lemma 3.5, note that $\bm{y}\in T_{\bm{x}}$ and $\bm{w}\in T_{\bm{v}}$. Suppose there exists $\bm{z}\in R_{\bm{v}}$, with ${\mathcal{B}}(\bm{y},\bm{z})\not=0$. Then by Lemma 3.7 (i), ${\mathcal{B}}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{y},\bm{z}\\}^{\perp}$}$ is non-degenerate and we may repeat the arguments of the first case, with $\bm{z}$ in place of $\bm{x}$, to obtain the required result. Otherwise, if $\bm{y}$ is orthogonal to $R_{\bm{v}}$, it follows that $R_{\bm{v}}\sqcup\\{\bm{y},\bm{w}\\}$ is an orthogonal set, which by Lemma 3.2, implies that $\dim(V_{\bm{v}})\leq\lfloor n/2\rfloor-1$. 
By a similar argument, we may assume $R_{\bm{x}}\sqcup\\{\bm{y},\bm{w}\\}$ is an orthogonal set and that $\dim(V_{\bm{x}})\leq\lfloor n/2\rfloor-1$. Furthermore note that $\bm{w}\in S_{\bm{xy}}$ and so $\bm{w}\not\in S_{\bm{x}}$ and similarly $\bm{y}\not\in S_{\bm{v}}$. For $n\in\\{2,3\\}$, by Lemma 3.2 and the above observations, we have $|S_{\bm{x}}\cup S_{\bm{v}}|\leq 2n-2$ and $|S_{\bm{xv}}|\leq q$. Thus going back to (9), we get $|S|\leq 2n+q\leq 3q$ as required. For $n\geq 4$, we may again use Lemma 3.2 to see $|S_{\bm{x}}\cup S_{\bm{v}}|\leq 2(q^{\lfloor n/2\rfloor-1}+3)-2$. Then $|S|\leq(2q^{\lfloor n/2\rfloor-1}+4)+3q^{\lfloor n/2\rfloor-1}+2=5q^{\lfloor n/2\rfloor-1}+6\leq 3q^{\lfloor n/2\rfloor},$ for all $n\geq 4$ and $q\geq 3$. ∎

## 4\. Proof of Theorem 2.1

We set $d_{n}=\begin{cases}\frac{n-1}{2},&\mbox{if }n\geq 3\ \text{is odd};\\\ \frac{n}{2},&\mbox{if }n\geq 2\ \text{is even and}\ \varepsilon(\mathcal{B})=1;\\\ \frac{n}{2}-1,&\mbox{if }n\geq 2\ \text{is even and}\ \varepsilon(\mathcal{B})=\gamma.\end{cases}$ It was proved in [21] that $d_{n}$ is the dimension of the largest orthogonal subspace of ${\mathbb{F}}_{q}^{n}$ (also follows from Lemma 3.2). Note that $d_{n-2}=d_{n}-1$. We proceed by induction on $n$. For $n=0$ and $n=1$ the size of the largest $(3,2)$-orthogonal set is at most 2, and the theorem follows. We will show that either $|S|\leq q^{d_{n}}+O(q^{d_{n}-1})$ or that $S$ possesses certain properties that make proving the theorem a matter of case analysis. For sufficiently large $q$ the former upper bound is smaller than the one in the theorem. We phrase the argument in terms of the graph $G$ with vertex set $S$ and two vectors adjacent precisely when they are not mutually orthogonal.
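For intuition (purely illustrative, not part of the proof), this orthogonality graph can be built and inspected by brute force on a small instance. We take Example 1.3 with $n=2$ and $q=5$, and assume the form $\mathrm{diag}(1,-1)$, under which the two sets below are indeed orthogonal (we do not verify its $\varepsilon$-class here); the check confirms that $G$ is triangle-free and that its independent sets, being orthogonal sets, have size at most $q-1$.

```python
from itertools import combinations

q = 5
def B(x, y):
    # assumed bilinear form diag(1, -1) on F_q^2
    return (x[0] * y[0] - x[1] * y[1]) % q

# Example 1.3 for n = 2: two orthogonal sets of size q - 1 each.
S1 = [(x, x) for x in range(1, q)]
S2 = [(x, (-x) % q) for x in range(1, q)]
S = S1 + S2  # a (3,2)-orthogonal set of size 2q - 2

# The graph G: vertices S, edges between non-orthogonal pairs.
edges = {frozenset((u, v)) for u, v in combinations(S, 2) if B(u, v) != 0}

# G is triangle-free, as forced by (3,2)-orthogonality.
assert not any(
    {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= edges
    for a, b, c in combinations(S, 3)
)

# Independent sets of G are orthogonal sets; here the largest has
# size q - 1, consistent with Lemma 3.3 for this instance.
def independent(T):
    return all(frozenset((u, v)) not in edges for u, v in combinations(T, 2))

max_indep = max(
    len(T) for r in range(len(S) + 1) for T in combinations(S, r) if independent(T)
)
assert max_indep == q - 1
print("triangle-free; largest independent set:", max_indep)
```

In this instance $G$ is the complete bipartite graph between $S_{1}$ and $S_{2}$, which makes both properties transparent: bipartite graphs have no triangles, and each side is a maximum independent set.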
The two properties of $G$ we use are that it is triangle-free (which follows from $S$ being $(3,2)$-orthogonal) and that the largest independent set in $G$ has size at most $\mathcal{S}_{2,2}(q,n,{\mathcal{B}})$ (because an independent set is an orthogonal subset of ${\mathbb{F}}_{q}^{n}$). We will also use the fact that every orthogonal set in ${\mathbb{F}}_{q}^{n}$ has size at most $\mathcal{S}_{2,2}(q,n,{\mathcal{B}})$, a quantity that is determined in Lemma 3.3. By Lemma 3.11 we may assume from now on the existence of linearly independent $\\{\bm{v},\bm{w}\\}\subset S$ with $\bm{vw}$ an edge (that is ${\mathcal{B}}(\bm{v},\bm{w})\neq 0$). This is because $n\leq q^{d_{n}}-2$ for all $n\geq 2$ when $q\geq 5$. We decompose $S$ into the neighbourhood $S_{\bm{v}}$ of $\bm{v}$, the neighbourhood $S_{\bm{w}}$ of $\bm{w}$, and the set of vertices $S_{\bm{v}\bm{w}}$ that are adjacent to neither $\bm{v}$ nor $\bm{w}$: $S=S_{\bm{v}}\cup S_{\bm{w}}\cup S_{\bm{vw}},$ where $S_{\bm{vw}}\subset\\{\bm{v},\bm{w}\\}^{\perp}$. We follow the set up of Lemma 3.5 and decompose $S_{\bm{v}}=R_{\bm{v}}\sqcup T_{\bm{v}}$ with $R_{\bm{v}}$ spanning the orthogonal vector space $V_{\bm{v}}$. When $n=2$ we get that $S_{\bm{vw}}$ is a subset of a zero-dimensional vector space that does not include $\bm{0}$ and so is empty. Since both $S_{\bm{v}}$ and $S_{\bm{w}}$ are orthogonal sets, we get $|S|\leq 2\mathcal{S}_{2,2}(q,2,{\mathcal{B}})$. This proves the theorem for $n=2$. For $n\geq 3$ we have to be more careful when dealing with $S_{\bm{vw}}$. We need the following to be able to apply the second part of Lemma 3.7. ###### Lemma 4.1. For $n\geq 3$, let $S\subset{\mathbb{F}}_{q}^{n}$ be a $(3,2)$-orthogonal set with respect to a non-degenerate symmetric bilinear form ${\mathcal{B}}$.
If every pair of linearly independent self-orthogonal vectors in $S$ is orthogonal to one another (that is ${\mathcal{B}}(\bm{x},\bm{y})=0$ for all linearly independent self-orthogonal $\bm{x},\bm{y}\in S$), then $|S|\leq\begin{cases}{\mathcal{S}}_{2,2}(q,n,{\mathcal{B}})+2n,&\mbox{if }0\leq n\leq 4;\\\ {\mathcal{S}}_{2,2}(q,n,{\mathcal{B}})+\tfrac{n(n+1)}{2}-1,&\mbox{if }n\geq 5.\end{cases}$ ###### Proof. Let $D$ be the set of vectors in $S$, which are not self-orthogonal: $D=\\{\bm{x}\in S:{\mathcal{B}}(\bm{x},\bm{x})\neq 0\\}.$ By the hypothesis on $S$ we have that $S\setminus D$ is an orthogonal set. Hence $|S\setminus D|\leq{\mathcal{S}}_{2,2}(q,n,{\mathcal{B}})$. The claim follows by bounding $|D|$ via Lemma 3.10. ∎ The upper bound on $|S|$ in Lemma 4.1 is smaller than the bound in Theorem 2.1 for $n\geq 3$ when $q\geq 5$. From now on we assume the existence of linearly independent vectors $\\{\bm{v},\bm{w}\\}$ that are self-orthogonal but are not mutually orthogonal: ${\mathcal{B}}(\bm{v},\bm{v})={\mathcal{B}}(\bm{w},\bm{w})=0\text{, but }{\mathcal{B}}(\bm{v},\bm{w})\neq 0.$ Recalling the definition of $f=f(q,n,{\mathcal{B}})$ inferred from Theorem 2.1 and (5), we get from Lemma 3.7 (ii) (10) $|S_{\bm{vw}}|\leq 2q^{d_{n-2}}+f=2q^{d_{n}-1}+f.$ Therefore $|S_{\bm{vw}}|$ is much smaller than the bound on $|S|$ we are trying to prove. What drives the proof is that if either $V_{\bm{v}}$ or $V_{\bm{w}}$ is not of maximum dimension, then we are done. To see why, suppose $V_{\bm{v}}$ is not of maximum dimension. Then using Lemma 3.2, Lemma 3.3, Lemma 3.5 and (10) we get $\displaystyle|S|$ $\displaystyle\leq|S_{\bm{v}}|+|S_{\bm{w}}|+|S_{\bm{vw}}|$ $\displaystyle\leq(q^{d_{n}-1}+3)+(q^{d_{n}}+1)+(2q^{d_{n}-1}+f)$ $\displaystyle=q^{d_{n}}+3q^{d_{n}-1}+4+f.$ Since for $q\geq 7$ we have $3q^{d_{n}-1}+4\leq q^{d_{n}}$, we assume from now on $\dim(V_{\bm{v}})=\dim(V_{\bm{w}})=d_{n}$. One more property we need is that $\bm{v}\in V_{\bm{w}}$ and $\bm{w}\in V_{\bm{v}}$. 
To confirm, say the latter, note that there is no edge from $\bm{w}$ to $R_{\bm{v}}$ (because the graph is triangle-free). Therefore $\bm{w}\perp\langle R_{\bm{v}}\rangle=V_{\bm{v}}$. By Lemma 3.6 (ii), and using the fact that $\bm{w}$ is self-orthogonal, we get $\bm{w}\in V_{\bm{v}}$. We summarise all this in a proposition. ###### Proposition 4.2. Let $q\geq 7$ be an odd prime power, $n\geq 3$ and $S\subset{\mathbb{F}}_{q}^{n}$ be a $(3,2)$-orthogonal set with respect to a non-degenerate symmetric bilinear form. Then $|S|$ satisfies the upper bound of Theorem 2.1 unless there exist linearly independent self-orthogonal vectors $\bm{v}$ and $\bm{w}$ with $\dim(V_{\bm{v}})=\dim(V_{\bm{w}})=d_{n}$; and $\bm{v}\in V_{\bm{w}}$ and $\bm{w}\in V_{\bm{v}}$. In this case $S_{\bm{vw}}=S\cap\\{\bm{v},\bm{w}\\}^{\perp}$ satisfies $|S_{\bm{vw}}|\leq 2q^{d_{n}-1}+f(q,n,{\mathcal{B}})$, with $f(q,n,{\mathcal{B}})$ inferred from Theorem 2.1 and (5). The final preparatory result is that for the remaining $S$ described in Proposition 4.2, $R_{\bm{z}}$ is considerably smaller than $q^{d_{n}}$ for all $\bm{z}\in S_{\bm{vw}}$. The proof is typical of forthcoming considerations. The key observation is that if a subspace of a vector space does not contain a single element of the vector space, then it is considerably smaller. ###### Lemma 4.3. Let $q$ be an odd prime power and let $S\subset{\mathbb{F}}_{q}^{n}$ be $(3,2)$-orthogonal with respect to a non-degenerate symmetric bilinear form ${\mathcal{B}}$. Suppose $\\{\bm{v},\bm{w}\\}\subset S$ is a linearly independent subset that consists of two self-orthogonal vectors that are not mutually orthogonal (that is ${\mathcal{B}}(\bm{v},\bm{v})={\mathcal{B}}(\bm{w},\bm{w})=0$, but ${\mathcal{B}}(\bm{v},\bm{w})\neq 0$). If $\bm{z}\in S_{\bm{vw}}=S\cap\\{\bm{v},\bm{w}\\}^{\perp}$, then $|R_{\bm{z}}|\leq 3q^{d_{n}-1}-3.$ ###### Proof. 
We have $S\subset(V_{\bm{v}}\setminus\\{\bm{0}\\})\cup T_{\bm{v}}\cup(V_{\bm{w}}\setminus\\{\bm{0}\\})\cup T_{\bm{w}}\cup S_{\bm{vw}}.$ By Lemma 3.5 we know that $R_{\bm{z}}$ contains only self-orthogonal vectors and so is disjoint from $T_{\bm{v}}\cup T_{\bm{w}}$. Hence $|R_{\bm{z}}|\leq(|V_{\bm{v}}\cap V_{\bm{z}}|-1)+(|V_{\bm{w}}\cap V_{\bm{z}}|-1)+|R_{\bm{z}}\cap S_{\bm{vw}}|.$ $V_{\bm{v}}\neq V_{\bm{z}}$ because $\bm{w}\in V_{\bm{v}}\setminus V_{\bm{z}}$. Hence $V_{\bm{v}}\cap V_{\bm{z}}$ is a proper subspace of $V_{\bm{v}}$ and is therefore not of maximum dimension. This means $|V_{\bm{z}}\cap V_{\bm{v}}|\leq q^{d_{n}-1}$. Similarly $|V_{\bm{z}}\cap V_{\bm{w}}|\leq q^{d_{n}-1}$. Moreover, note that $R_{\bm{z}}\cap S_{\bm{vw}}$ is an orthogonal subset of $\\{\bm{v},\bm{w}\\}^{\perp}$, on which the non-degeneracy and the type of ${\mathcal{B}}$ are preserved by Lemma 3.7 (ii). Thus by Lemma 3.2, we have $|R_{\bm{z}}\cap S_{\bm{vw}}|\leq q^{d_{n}-1}-1$. Putting everything together gives the desired bound. ∎ We begin the final stage of the proof of the theorem. We assume we are in the remaining case detailed in Proposition 4.2. Let $S_{\bm{vw}}^{*}=S_{\bm{vw}}\setminus(V_{\bm{v}}\cup V_{\bm{w}}).$ We distinguish between two different cases. Case 1: An edge exists between $R_{\bm{v}}\cup R_{\bm{w}}$ and $S_{\bm{vw}}^{*}$. Suppose $\bm{uz}$ is an edge with $\bm{z}\in S_{\bm{vw}}^{*}$ and, say, $\bm{u}\in R_{\bm{v}}$. Our first claim is that $\\{\bm{u},\bm{z}\\}$ is linearly independent. Indeed, if $\bm{z}=\lambda\bm{u}$, then we would have ${\mathcal{B}}(\bm{z},\bm{u})=\lambda{\mathcal{B}}(\bm{u},\bm{u})=0$, which contradicts $\bm{uz}$ being an edge. Furthermore, $\bm{u}$ is self-orthogonal and so by Lemma 3.7 (ii) we get that ${\mathcal{B}}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{u},\bm{z}\\}^{\perp}$}$ is non-degenerate and $\varepsilon({\mathcal{B}})$ is preserved.
We have $|S|\leq|S_{\bm{u}}|+|R_{\bm{z}}|+|T_{\bm{z}}|+|S_{\bm{uz}}|,$ with the following bounds: by Lemma 3.3 and Lemma 3.5, $|S_{\bm{u}}|\leq q^{d_{n}}+1$; by Lemma 3.2 and Lemma 4.3 (and its proof), $|R_{\bm{z}}|+|T_{\bm{z}}|\leq 3q^{d_{n}-1}+1$; and by induction, just like in (10), $|S_{\bm{uz}}|\leq 2q^{d_{n}-1}+f$. In total $|S|\leq q^{d_{n}}+5q^{d_{n}-1}+2+f.$ We are done because for $q\geq 7$ and $n\geq 3$, $5q^{d_{n}-1}+2\leq q^{d_{n}}$. Case 2: No edge exists between $R_{\bm{v}}\cup R_{\bm{w}}$ and $S_{\bm{vw}}^{*}$. There is no edge from $S_{\bm{vw}}^{*}$ to $R_{\bm{v}}$ and therefore $S_{\bm{vw}}^{*}$ is orthogonal to $R_{\bm{v}}$. It follows that $S_{\bm{vw}}^{*}$ is orthogonal to $V_{\bm{v}}=\langle R_{\bm{v}}\rangle$. Similarly, $S_{\bm{vw}}^{*}$ is orthogonal to $V_{\bm{w}}$. All vectors in $T_{\bm{v}}\cup T_{\bm{w}}\cup S_{\bm{vw}}^{*}$ are not self-orthogonal, by Lemma 3.6 (ii) and $\dim(V_{\bm{v}})$ being maximum. We use the decomposition (11) $S\subset(V_{\bm{v}}\setminus\\{\bm{0}\\})\cup(V_{\bm{w}}\setminus\\{\bm{0}\\})\cup(T_{\bm{v}}\cup T_{\bm{w}}\cup S_{\bm{vw}}).$ We consider the three different possibilities separately. Even $n\geq 4$ and $\varepsilon({\mathcal{B}})=1$. Our aim is to show that $T_{\bm{v}}=T_{\bm{w}}=S_{\bm{vw}}^{*}=\emptyset$. Then by (11) $|S|\leq(|V_{\bm{v}}|-1)+(|V_{\bm{w}}|-1)\leq 2(q^{d_{n}}-1)=2q^{n/2}-2.$ We may assume $T_{\bm{v}}=T_{\bm{w}}=\emptyset$, for otherwise, by Lemma 3.2, $V_{\bm{v}}$ or $V_{\bm{w}}$ is not of maximum dimension, which is not allowed by Proposition 4.2. To show that $S_{\bm{vw}}^{*}=\emptyset$, suppose for a contradiction that $\bm{z}\in S_{\bm{vw}}^{*}$; then $V_{\bm{v}}\cup\\{\bm{z}\\}$ would be an orthogonal set, forcing, via Lemma 3.2, $V_{\bm{v}}$ not to have maximum dimension. Odd $n\geq 3$. We want to show $|T_{\bm{v}}|+|T_{\bm{w}}|+|S_{\bm{vw}}^{*}|\leq 3$.
Then by (11) $|S|\leq(|V_{\bm{v}}|-1)+(|V_{\bm{w}}|-1)+3\leq 2q^{d_{n}}+1=2q^{(n-1)/2}+1.$ For any distinct $\bm{x},\bm{y}\in S_{\bm{vw}}^{*}$, $\bm{xy}$ is an edge. This is because if $\bm{xy}$ were not an edge, then $V_{\bm{v}}\cup\\{\bm{x},\bm{y}\\}$ would be an orthogonal set of size $|V_{\bm{v}}|+2$, which, by Lemma 3.2, would force $V_{\bm{v}}$ not to have maximum dimension. Therefore the induced subgraph on $S_{\bm{vw}}^{*}$ is complete and triangle-free. Hence $S_{\bm{vw}}^{*}$ must have at most two vertices. Moreover, by Lemma 3.2, $|T_{\bm{v}}|,|T_{\bm{w}}|\leq 1$ (else the subspaces do not have maximum dimension). We are done unless $|S_{\bm{vw}}^{*}|=|T_{\bm{v}}\cup T_{\bm{w}}|=2$. Suppose $S_{\bm{vw}}^{*}=\\{\bm{x},\bm{y}\\}$ with $\bm{xy}$ an edge, and $T_{\bm{v}}=\\{\bm{u}\\}$. The graph is triangle-free and so one of $\bm{ux}$, $\bm{uy}$ is not an edge. Suppose that $\bm{ux}$ is not an edge. Then $V_{\bm{v}}\cup\\{\bm{u},\bm{x}\\}$ is an orthogonal set, forcing $V_{\bm{v}}$ not to be of maximal dimension. Even $n\geq 4$ and $\varepsilon({\mathcal{B}})=\gamma$. We want to show $|T_{\bm{v}}|+|T_{\bm{w}}|+|S_{\bm{vw}}^{*}|\leq 6$. Then by (11) $|S|\leq(|V_{\bm{v}}|-1)+(|V_{\bm{w}}|-1)+6\leq 2q^{d_{n}}+4=2q^{n/2-1}+4.$ In fact, by Lemma 3.2 and Proposition 4.2, $|T_{\bm{w}}|\leq 2$ and we must show $|T_{\bm{v}}|+|S_{\bm{vw}}^{*}|\leq 4$. Note that every vertex in $T_{\bm{v}}\cup S_{\bm{vw}}^{*}$ is orthogonal to $R_{\bm{v}}$ and therefore is orthogonal to $V_{\bm{v}}$. Similarly, $S_{\bm{vw}}^{*}$ is orthogonal to $\langle V_{\bm{v}}\cup V_{\bm{w}}\rangle=V_{\bm{v}}+V_{\bm{w}}$. Now consider the graph $H$ induced on $T_{\bm{v}}\cup S_{\bm{vw}}^{*}$. This is a triangle-free graph. Moreover, it has no independent set of size 3 because otherwise we could join this set to $V_{\bm{v}}$ and obtain an orthogonal set of size $|V_{\bm{v}}|+3$, which would force $V_{\bm{v}}$ not to have maximum dimension. 
By Lemma 3.1 we get $|T_{\bm{v}}\cup S_{\bm{vw}}^{*}|\leq 5$ with equality only when $H$ is a 5-cycle. Our final task is to rule out this possibility. Suppose for a contradiction that $H$ is a 5-cycle. We set $T_{\bm{v}}=\\{\bm{u}_{1},\bm{u}_{2}\\}$ and $S_{\bm{vw}}^{*}=\\{\bm{z}_{1},\bm{z}_{2},\bm{z}_{3}\\}$. $\bm{u}_{1}\bm{u}_{2}$ is not an edge (because both $\bm{u}_{1}$ and $\bm{u}_{2}$ are adjacent to $\bm{v}$ and the graph is triangle-free) and so $H$ can be taken to be the 5-cycle $\bm{z}_{1}\bm{u}_{1}\bm{z}_{2}\bm{u}_{2}\bm{z}_{3}$. The complement of $H$ is the 5-cycle $\bm{z}_{1}\bm{z}_{2}\bm{z}_{3}\bm{u}_{1}\bm{u}_{2}$. In the complement of $H$ vertex adjacency is equivalent to orthogonality, and so $\\{\bm{z}_{1},\bm{z}_{2}\\}\subset(V_{\bm{v}}+V_{\bm{w}})^{\perp}$ is an orthogonal (and hence linearly independent) set in the orthogonal complement of $V_{\bm{v}}+V_{\bm{w}}$. We make a small digression to investigate the dimension of $V_{\bm{v}}+V_{\bm{w}}$. We may assume $V_{\bm{v}}\cap V_{\bm{w}}=\\{\bm{0}\\}$. This is because if the intersection is non-trivial, then $|S|\leq|(V_{\bm{v}}\cup V_{\bm{w}})\setminus\\{\bm{0}\\}|+|T_{\bm{v}}|+|T_{\bm{w}}|+|S_{\bm{vw}}^{*}|\leq(2q^{d_{n}}-q)+7\leq 2q^{d_{n}}+4,$ and we are done. We may therefore assume that $\dim(V_{\bm{v}}+V_{\bm{w}})=2d_{n}=n-2$ and hence ${\mathbb{F}}_{q}^{n}=(V_{\bm{v}}+V_{\bm{w}})\oplus\langle\bm{z}_{1}\rangle\oplus\langle\bm{z}_{2}\rangle.$ To complete the argument we exploit the orthogonality relations encoded in the complement of $H$ and show that $\bm{z}_{3}=\bm{0}$, the contradiction we are after. To start, note that $\bm{z}_{3}\in(V_{\bm{v}}+V_{\bm{w}})^{\perp}=\langle\bm{z}_{1},\bm{z}_{2}\rangle$. As $\bm{z}_{3}$ is orthogonal to $\bm{z}_{2}$ and ${\mathcal{B}}(\bm{z}_{2},\bm{z}_{2})\neq 0$, we get $\bm{z}_{3}=\lambda\bm{z}_{1}$. We are left to show $\lambda=0$. We next show $\bm{u}_{1},\bm{u}_{2}\in V_{\bm{v}}\oplus\langle\bm{z}_{1},\bm{z}_{2}\rangle$.
We start with, say, the decomposition $\bm{u}_{1}=\alpha\bm{z}_{1}+\beta\bm{z}_{2}+\bm{x}_{\bm{v}}+\bm{x}_{\bm{w}},$ for $\alpha,\beta\in{\mathbb{F}}_{q}$, $\bm{x}_{\bm{v}}\in V_{\bm{v}}$, and $\bm{x}_{\bm{w}}\in V_{\bm{w}}$. Suppose for a contradiction that $\bm{x}_{\bm{w}}\neq\bm{0}$. By Lemma 3.6 (ii) and the maximality of $\dim(V_{\bm{v}})$ we get that the self-orthogonal vector $\bm{x}_{\bm{w}}$ is not orthogonal to $V_{\bm{v}}$. Therefore there exists $\bm{y}\in V_{\bm{v}}\setminus\\{\bm{0}\\}$ such that ${\mathcal{B}}(\bm{x}_{\bm{w}},\bm{y})\neq 0$. But then ${\mathcal{B}}(\bm{u}_{1},\bm{y})={\mathcal{B}}(\bm{x}_{\bm{w}},\bm{y})\neq 0,$ a contradiction to $\bm{u}_{1}$ being orthogonal to the whole of $V_{\bm{v}}$. We therefore have $\bm{u}_{1}=\alpha\bm{z}_{1}+\beta\bm{z}_{2}+\bm{x}_{\bm{v}}\text{ and }\bm{u}_{2}=\alpha^{\prime}\bm{z}_{1}+\beta^{\prime}\bm{z}_{2}+\bm{x}_{\bm{v}}^{\prime}.$ Now, $\bm{u}_{2}$ is orthogonal to $\bm{z}_{1}$ and so $0=\alpha^{\prime}{\mathcal{B}}(\bm{z}_{1},\bm{z}_{1})$, which, since ${\mathcal{B}}(\bm{z}_{1},\bm{z}_{1})\neq 0$, gives $\alpha^{\prime}=0$. Moreover $\bm{u}_{2}\notin V_{\bm{v}}$, which gives $\beta^{\prime}\neq 0$. Next, $\bm{u}_{2}$ is orthogonal to $\bm{u}_{1}$ and so $0=\beta\beta^{\prime}{\mathcal{B}}(\bm{z}_{2},\bm{z}_{2})$. Hence $\beta=0$ and, similarly to above, $\bm{u}_{1}=\alpha\bm{z}_{1}+\bm{x}_{\bm{v}}$ for $\alpha\in{\mathbb{F}}_{q}^{*}$. Finally, $\bm{u}_{1}$ is orthogonal to $\bm{z}_{3}=\lambda\bm{z}_{1}$. Hence $0=\lambda\alpha{\mathcal{B}}(\bm{z}_{1},\bm{z}_{1})$, which implies the desired $\lambda=0$. The graph $H$ is therefore not a 5-cycle and consequently $|T_{\bm{v}}\cup S_{\bm{vw}}^{*}|\leq 4$. This concludes the proof of the theorem. ## 5\. Character sum proof of Theorem 2.1 for even $n$, $\varepsilon({\mathcal{B}})=1$ and all odd $q$ First, we recall some basic facts from the theory of character sums, which we use to give an alternative proof of Theorem 2.1 for even $n$ and $\varepsilon({\mathcal{B}})=1$ that holds for all odd $q$.
See, for example, [13, Chapter 5] for more details. ###### Lemma 5.1. Let $H$ be a subgroup of a finite abelian group $G$ and let $\chi$ be a character of $G$. Then $\sum_{g\in H}\chi(g)=\begin{cases}|H|\quad&\text{if}\quad\chi\ \text{is trivial on}\ H,\\\ 0\quad&\text{otherwise}.\end{cases}$ Let $e_{p}(x)=\exp(2\pi ix/p)$, $\mathrm{Tr}(x)=x+x^{p}+\dots+x^{p^{m-1}}$ (recalling $q=p^{m}$) and $\psi(x)=e_{p}(\mathrm{Tr}(x))$. Then the functions $\\{\psi(\lambda x):\lambda\in{\mathbb{F}}_{q}\\}$ are precisely the additive characters of ${\mathbb{F}}_{q}$. ###### Lemma 5.2. Let $\mathcal{B}$ denote a non-degenerate, symmetric bilinear form on ${\mathbb{F}}_{q}^{n}$, where $q$ is odd and $n\geq 2$. Let $V$ denote a subspace of ${\mathbb{F}}_{q}^{n}$. Suppose that $\bm{s}\not\in V^{\perp}$. Then $\psi(\mathcal{B}(\bm{s},-))$ is a nontrivial character of $V$. The next result is a slight extension of Vinogradov’s bound on bilinear character sums, which appears, for example, in [22, p. 92]. Also see [18, Lemma 5] for the special case where ${\mathcal{B}}$ is the dot product. ###### Lemma 5.3. Given $X,Y\subset{\mathbb{F}}_{q}^{n}$, we have $\bigg{|}\sum_{\bm{x}\in X}\sum_{\bm{y}\in Y}\psi\Big{(}\mathcal{B}(\bm{x},\bm{y})\Big{)}\bigg{|}\leq\sqrt{|X||Y|q^{n}}.$ ###### Proof.
We apply the triangle inequality and then the Cauchy-Schwarz inequality to get $\displaystyle\bigg{|}\sum_{\bm{x}\in X}\sum_{\bm{y}\in Y}\psi\Big{(}\mathcal{B}(\bm{x},\bm{y})\Big{)}\bigg{|}^{2}$ $\displaystyle\leq|X|\sum_{\bm{x}\in X}\bigg{|}\sum_{\bm{y}\in Y}\psi\Big{(}\mathcal{B}(\bm{x},\bm{y})\Big{)}\bigg{|}^{2}$ $\displaystyle\leq|X|\sum_{\bm{x}\in{\mathbb{F}}_{q}^{n}}\bigg{|}\sum_{\bm{y}\in Y}\psi\Big{(}\mathcal{B}(\bm{x},\bm{y})\Big{)}\bigg{|}^{2}$ $\displaystyle=|X|\sum_{\bm{x}\in{\mathbb{F}}_{q}^{n}}\sum_{\bm{y},\bm{z}\in Y}\psi\Big{(}\mathcal{B}(\bm{x},\bm{y}-\bm{z})\Big{)}$ $\displaystyle=|X|\sum_{\bm{y},\bm{z}\in Y}\sum_{\bm{x}\in{\mathbb{F}}_{q}^{n}}\psi\Big{(}\mathcal{B}(\bm{x},\bm{y}-\bm{z})\Big{)}$ $\displaystyle=|X||Y|q^{n}.$ To obtain the last equality, we used the fact that the inner sum in the penultimate line equals $q^{n}$ if $\bm{y}=\bm{z}$ and zero otherwise. This, in turn, follows from the observation that $({\mathbb{F}}_{q}^{n})^{\perp}=\\{\bm{0}\\}$, combined with Lemmas 5.1 and 5.2. ∎ ###### Proof of Theorem 2.1 for even $n$ and $\varepsilon({\mathcal{B}})=1$. First, replace $S$ by $S\sqcup\\{\bm{0}\\}$; this makes the calculations easier. We will remove $\bm{0}$ at the end of the proof. For $\bm{s}\in S$, write $S^{{}^{\prime}}_{\bm{s}}=\\{\bm{x}\in S:\mathcal{B}(\bm{x},\bm{s})\not=0\\}$. Also write $D=\\{\bm{x}\in S:\mathcal{B}(\bm{x},\bm{x})\not=0\\}$. Recalling (7), note that $\sum_{\bm{s}\in S}|S^{{}^{\prime}}_{\bm{s}}|=\sum_{\bm{s}\in S}|S_{\bm{s}}|+|D|.$ Now (12) $\displaystyle|S|^{2}-\sum_{\bm{s}\in S}|S_{\bm{s}}|-|D|$ $\displaystyle=|S|^{2}-\sum_{\bm{s}\in S}|S^{{}^{\prime}}_{\bm{s}}|$ $\displaystyle=\bigg{|}\sum_{\bm{s}_{1}\in S}\sum_{\bm{s}_{2}\in S\setminus S^{{}^{\prime}}_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}.$ Here we used only the fact that for $\bm{s}_{1}\in S$ and $\bm{s}_{2}\in S\setminus S^{{}^{\prime}}_{\bm{s}_{1}}$, we have $\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))=1$.
By the triangle inequality, we also have $\displaystyle\bigg{|}\sum_{\bm{s}_{1}\in S}\sum_{\bm{s}_{2}\in S\setminus S^{{}^{\prime}}_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}$ $\displaystyle\leq\bigg{|}\sum_{\bm{s}_{1}\in S}\sum_{\bm{s}_{2}\in S}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}+\bigg{|}\sum_{\bm{s}_{1}\in S}\sum_{\bm{s}_{2}\in S^{{}^{\prime}}_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}$ $\displaystyle\leq\bigg{|}\sum_{\bm{s}_{1}\in S}\sum_{\bm{s}_{2}\in S}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}+\bigg{|}\sum_{\bm{s}_{1}\in S}\sum_{\bm{s}_{2}\in S_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}+|D|.$ Next, let $W=\\{\bm{x}\in S:|S_{\bm{x}}|\geq 2\\}$. Then, using Lemmas 3.2 and 3.5, we have (13) $\displaystyle\bigg{|}\sum_{\bm{s}_{1}\in W}\sum_{\bm{s}_{2}\in S_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}$ $\displaystyle\leq\bigg{|}\sum_{\bm{s}_{1}\in W}\sum_{\bm{s}_{2}\in(V_{\bm{s}_{1}}\sqcup T_{\bm{s}_{1}})}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}$ $\displaystyle+\bigg{|}\sum_{\bm{s}_{1}\in W}\sum_{\bm{s}_{2}\in(V_{\bm{s}_{1}}\sqcup T_{\bm{s}_{1}})\setminus S_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}.$ To bound the first sum, on the RHS of (13), we first apply the triangle inequality to obtain $\displaystyle\bigg{|}\sum_{\bm{s}_{1}\in W}\sum_{\bm{s}_{2}\in(V_{\bm{s}_{1}}\sqcup T_{\bm{s}_{1}})}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}$ $\displaystyle\leq\bigg{|}\sum_{\bm{s}_{1}\in W}\sum_{\bm{s}_{2}\in V_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}+\bigg{|}\sum_{\bm{s}_{1}\in W}\sum_{\bm{s}_{2}\in T_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}.$ Note that for $\bm{s}\in W$, it follows from the definition of $V_{\bm{s}}$ (see Lemma 3.5) that $\bm{s}\not\in V_{\bm{s}}^{\perp}$. 
Thus, by Lemma 5.2, $\psi(\mathcal{B}(\bm{s},-))$ constitutes a character of ${\mathbb{F}}_{q}^{n}$ which is nontrivial on the subspace $V_{\bm{s}}$. Consequently, by Lemma 5.1, we have $\sum_{\bm{x}\in V_{\bm{s}}}\psi(\mathcal{B}(\bm{s},\bm{x}))=0\quad\implies\quad\sum_{\bm{s}_{1}\in W}\sum_{\bm{s}_{2}\in V_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))=0.$ Then, based on this observation and applications of the triangle inequality, we obtain $\displaystyle\bigg{|}\sum_{\bm{s}_{1}\in W}\sum_{\bm{s}_{2}\in(V_{\bm{s}_{1}}\sqcup T_{\bm{s}_{1}})}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}$ $\displaystyle\leq\bigg{|}\sum_{\bm{s}_{1}\in W}\sum_{\bm{s}_{2}\in T_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}$ $\displaystyle\leq\sum_{\bm{s}_{1}\in W}\bigg{|}\sum_{\bm{s}_{2}\in T_{\bm{s}_{1}}}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}$ $\displaystyle\leq\sum_{\bm{s}\in W}|T_{\bm{s}}|.$ The second sum on the RHS of (13) is bounded trivially by $\sum_{\bm{s}\in W}\big{(}|V_{\bm{s}}|+|T_{\bm{s}}|-|S_{\bm{s}}|\big{)}.$ Going back to (12), we have $\displaystyle|S|^{2}\leq 2|D|+\bigg{|}\sum_{\bm{s}_{1}\in S}\sum_{\bm{s}_{2}\in S}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}+\sum_{\bm{s}\in W}\big{(}|V_{\bm{s}}|+2|T_{\bm{s}}|\big{)}+\sum_{\bm{s}\in S\setminus{W}}2|S_{\bm{s}}|.$ By Lemma 5.3, we know $\bigg{|}\sum_{\bm{s}_{1}\in S}\sum_{\bm{s}_{2}\in S}\psi(\mathcal{B}(\bm{s}_{1},\bm{s}_{2}))\bigg{|}\leq|S|q^{n/2}.$ For $\bm{s}\in W$, write $k_{\bm{s}}=\dim(V_{\bm{s}})$. Then by Lemma 3.2, we know $|V_{\bm{s}}|+2|T_{\bm{s}}|\leq q^{k_{\bm{s}}}+2n-4k_{\bm{s}}\leq q^{n/2}.$ Furthermore, for $\bm{s}\in S\setminus{W}$, we have $2|S_{\bm{s}}|\leq q^{n/2}$. So adding it all up, $|S|^{2}\leq 2|S|q^{n/2}+2|D|.$ At this stage we go back to the original $S$ that does not include $\bm{0}$. The above becomes (14) $|S|\leq 2q^{n/2}+\Big{\lfloor}\frac{2|D|}{|S|}\Big{\rfloor}-1.$ Note that from (14), one can only deduce the bound $|S|\leq 2q^{n/2}+1$.
However, we proceed to sharpen this bound through an analysis of the set $D$. Some aspects of the remaining arguments can certainly be simplified if we are not aiming to prove the theorem for all $q$. To deal with some small technicalities that follow, we require the bound (15) $|S|\leq 2(q-1),$ for $n=2$, all odd $q$, and both scenarios $\varepsilon({\mathcal{B}})\in\\{1,\gamma\\}$. This bound has already been established as the base case of the induction in the proof of Theorem 2.1. (In the proof of Theorem 2.1, when applying Lemma 3.11, it is assumed that $q>3$ in order to avoid a lengthy multi-case analysis. However, one may easily confirm that, for our purposes here, the argument remains valid when $q=3$.) In particular, henceforth assume $n\geq 4$. First we establish (16) $|S|\leq 2q^{n/2}-1,$ which follows from (14) if $2|D|<|S|$. So suppose otherwise. Using the bound on $|D|$ provided by Lemma 3.10, we get $|S|\leq 2|D|\leq 16$ for $n=4$ and $|S|\leq 2|D|\leq n(n+1)-2$ for $n\geq 6$, with both bounds being better than (16) for all $q\geq 3$ and $n\geq 4$. It remains to show that (16) may be lowered by one. To this end, we consider a few cases showing that the assumption $|S|=2q^{n/2}-1$ leads to contradictions on either the size or the parity of $|S|$. In particular, the following observations will be useful. ###### Claim 5.4. Let $S\subset{\mathbb{F}}_{q}^{n}$ denote a maximal $(3,2)$-orthogonal set. Then $|S\setminus D|\equiv 0\pmod{q-1}$. ###### Proof. Each $\bm{v}\in S\setminus D$ is self-orthogonal and so $S$ must contain the entire punctured line $l_{\bm{v}}=\\{\lambda\bm{v}:\lambda\in{\mathbb{F}}_{q}^{*}\\}$, otherwise the maximality of $S$ is violated. The result follows by noting that $l_{\bm{v}}\cap l_{\bm{w}}$ is empty if $\bm{w}\not\in l_{\bm{v}}$ and of size $q-1$ otherwise. ∎ ###### Claim 5.5. At least one of the following statements holds. 1. (i) $|S|\leq 2q^{n/2}-2$, 2. (ii) $|D|=2$, 3. (iii) $|T_{\bm{v}}|\leq 1$ for each $\bm{v}\in S$, where $T_{\bm{v}}=D\cap S_{\bm{v}}$. ###### Proof.
We suppose that neither (ii) nor (iii) is true and prove (i). Thus, assume there exist $\bm{v}\in S$, distinct elements $\bm{w}_{1},\bm{w}_{2}\in D\setminus\\{\bm{v}\\}$, with ${\mathcal{B}}(\bm{v},\bm{w}_{1})$ and ${\mathcal{B}}(\bm{v},\bm{w}_{2})$ both non-zero, and potentially a fourth element $\bm{w}_{3}\in D$, which need not be distinct from the previous ones. We consider two main cases as to whether $\bm{v}$ is self-orthogonal or not. First, assume ${\mathcal{B}}(\bm{v},\bm{v})\not=0$, which in particular implies $\bm{v}\in T_{\bm{w}_{1}}\cap T_{\bm{w}_{2}}$. By the $(3,2)$-orthogonality of $S$, we have ${\mathcal{B}}(\bm{w}_{1},\bm{w}_{2})=0$ and so firstly $\\{\bm{w}_{1},\bm{w}_{2}\\}$ is linearly independent and secondly by Lemma 3.7 (i), $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{w}_{1},\bm{w}_{2}\\}^{\perp}$}$ is non-degenerate. Write $S=S_{\bm{w}_{1}}\cup S_{\bm{w}_{2}}\cup S_{\bm{w}_{1}\bm{w}_{2}}\cup\\{\bm{w}_{1},\bm{w}_{2}\\}.$ For $(q,n)=(3,4)$, by (15), we have $|S|\leq 2(q^{n/2-1}+1)-1+2(q-1)+2=13<16=2q^{n/2}-2.$ For other admissible choices of $(q,n)$, by Proposition 3.12, we have $|S|\leq 2(q^{n/2-1}+1)-1+3q^{n/2-1}+2=5q^{n/2-1}+3<2q^{n/2}-2.$ Next, assume ${\mathcal{B}}(\bm{v},\bm{v})=0$. In this case $\bm{w}_{3}\not\in\\{\bm{v},\bm{w}_{1},\bm{w}_{2}\\}.$ We split this case further by first assuming that $\bm{w}_{3}$ is orthogonal to $R_{\bm{v}}$ (using the notation of Lemma 3.5). This implies $R_{\bm{v}}\sqcup\\{\bm{w}_{1},\bm{w}_{2},\bm{w}_{3}\\}$ is an orthogonal set. Further note that $\\{\bm{w}_{1},\bm{v}\\}$ is linearly independent and that by parts (i) and (ii) of Lemma 3.7, $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{w}_{1},\bm{v}\\}^{\perp}$}$ is non-degenerate and its equivalence class is preserved. 
Write $S=S_{\bm{w}_{1}}\cup S_{\bm{v}}\cup S_{\bm{w}_{1}\bm{v}}.$ For $(q,n)=(3,4)$, by Lemma 3.2 and (15), we have $|S|\leq(q^{n/2}-1)+(q^{n/2-2}+3)+2q^{n/2-1}-2=16=2q^{n/2}-2.$ For the remaining combinations of $(q,n)$, we use the bound (16) to get $|S|\leq(q^{n/2}-1)+(q^{n/2-2}+3)+2q^{n/2-1}-1\leq 2q^{n/2}-2.$ Finally, suppose there exists some $\bm{u}\in R_{\bm{v}}$ such that ${\mathcal{B}}(\bm{w}_{3},\bm{u})\not=0$. By definition ${\mathcal{B}}(\bm{u},\bm{u})=0$ and ${\mathcal{B}}(\bm{u},\bm{v})\not=0$, from which we may deduce that $\\{\bm{u},\bm{v}\\}$ is linearly independent and that, by parts (i) and (ii) of Lemma 3.7, $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{u},\bm{v}\\}^{\perp}$}$ is non-degenerate and its equivalence class is preserved. Write $S=S_{\bm{u}}\cup S_{\bm{v}}\cup S_{\bm{u}\bm{v}}$ and note that $\bm{w}_{1},\bm{w}_{2}\in T_{\bm{v}}$ and $\bm{w}_{3}\in T_{\bm{u}}$. We use the bound (16) to get $|S|\leq 2(q^{n/2-1}+1)+2q^{n/2-1}-1=4q^{n/2-1}+1\leq 2q^{n/2}-2$ for all odd $q$ and $n\geq 4$. ∎ ###### Claim 5.6. Suppose that $S\subset\\{\bm{z}\\}^{\perp}$, where $\bm{z}\in{\mathbb{F}}_{q}^{n}$ is not self-orthogonal. Then $|S|\leq 3q^{n/2-1}\leq 2q^{n/2}-2,$ for all $n\geq 4$ and odd $q$. ###### Proof. Since $\bm{z}$ is not self-orthogonal, the restriction of ${\mathcal{B}}$ to $\\{\bm{z}\\}^{\perp}$ remains non-degenerate. Now, using that $S=S\cap\\{\bm{z}\\}^{\perp}$, we may use Proposition 3.12 to obtain the required result. ∎ Writing $Q=\\{\bm{v}\in S:\bm{v}\in(S\setminus\\{\bm{v}\\})^{\perp}\\}$, note that $S=\Big{(}\bigcup_{\bm{v}\in S}S_{\bm{v}}\Big{)}\cup Q.$ Now if $\bm{z}\in D\cap Q$, then $S\setminus\\{\bm{z}\\}\subset\\{\bm{z}\\}^{\perp}$. Thus, by Claim 5.6, we have $|S|\leq 1+3q^{n/2-1}\leq 2q^{n/2}-2,$ for all $n\geq 4$ and odd $q$. In particular, we may assume (17) $D=\bigcup_{\bm{v}\in S}T_{\bm{v}}.$ Suppose $|S|=2q^{n/2}-1$.
Note that, if $|D|$ is even, then by Claim 5.4 (and the fact that $q-1$ is even) $|S|$ is even, contradicting $|S|=2q^{n/2}-1$ being odd. Recalling Claim 5.5, if statement (ii) holds, we are done and so assume statement (iii) is true. Let $\bm{w}\in D$; then, by (17), we have $T_{\bm{v}}=\\{\bm{w}\\}$ for some $\bm{v}\in S$. If $\bm{v}$ is self-orthogonal, then firstly $\\{\bm{v},\bm{w}\\}$ is linearly independent and secondly, by parts (i) and (ii) of Lemma 3.7, $\mathcal{B}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{v},\bm{w}\\}^{\perp}$}$ is non-degenerate and its equivalence class is unchanged. We write $S=S_{\bm{v}}\cup S_{\bm{w}}\cup S_{\bm{v}\bm{w}}$ and use Lemma 3.2 as before, being mindful of the crucial fact that $S_{\bm{v}}$ contains exactly one non-self-orthogonal element. For $(q,n)=(3,4)$, we have, by (15), $|S|\leq q^{n/2-1}+(q^{n/2}-1)+2q^{n/2-1}-2=3q^{n/2-1}+q^{n/2}-3=15<16=2q^{n/2}-2.$ For other combinations of $(q,n)$, by (16), we have $|S|\leq q^{n/2-1}+(q^{n/2}-1)+2q^{n/2-1}-1=3q^{n/2-1}+q^{n/2}-2\leq 2q^{n/2}-2.$ Both bounds above contradict the presumed size of $S$. Hence $\bm{v}$ is not self-orthogonal. It follows that $T_{\bm{w}}=\\{\bm{v}\\}$, which in turn implies that elements of $D$ occur in pairs, so $|D|$ is even. As explained above, this contradicts the presumed parity of $|S|$, concluding the proof. ∎ ## 6\. Proof of Theorem 2.2 The proof of Theorem 2.2 is similar to the proof of Theorem 2.1. The differences arise from having characteristic 2 (the theory of bilinear forms is different) and not being able to assume that $q$ is large enough. We are, however, free to assume that $n$ is large enough. A fact special to ${\mathbb{F}}_{2}^{n}$ that we use is that every two distinct non-zero vectors are linearly independent. In particular, the requirement for $\\{\bm{v},\bm{w}\\}$ to be linearly independent in Lemma 3.7 becomes redundant. From now on we use the notation in Lemma 3.2 and Lemma 3.5. The following simple inequality will be useful.
It is specific to ${\mathbb{F}}_{2}^{n}$, is true for all $n$, and is sharp. ###### Lemma 6.1. For $n\geq 2$, let $S\subset{\mathbb{F}}_{2}^{n}$ be a $(3,2)$-orthogonal set with respect to a non-degenerate symmetric bilinear form ${\mathcal{B}}$. If $\bm{v}\in S$, then in the notation of Lemma 3.5, $|R_{\bm{v}}|\leq|V_{\bm{v}}|/2$. ###### Proof. Note that $R_{\bm{v}}$ is disjoint from $R_{\bm{v}}+R_{\bm{v}}$. Indeed, if $\bm{x},\bm{y}\in R_{\bm{v}}$, then ${\mathcal{B}}(\bm{v},\bm{x})={\mathcal{B}}(\bm{v},\bm{y})=1$. Therefore ${\mathcal{B}}(\bm{v},\bm{x}+\bm{y})=1+1=0$. This means that $\bm{x}+\bm{y}\notin R_{\bm{v}}$. Now, $V_{\bm{v}}$ is a vector space containing $R_{\bm{v}}$. Therefore $R_{\bm{v}}$ and $R_{\bm{v}}+R_{\bm{v}}$ are two disjoint sets contained in $V_{\bm{v}}$. Hence, since $|R_{\bm{v}}+R_{\bm{v}}|\geq|R_{\bm{v}}|$ (translate $R_{\bm{v}}$ by any fixed element of itself), $2|R_{\bm{v}}|\leq|R_{\bm{v}}|+|R_{\bm{v}}+R_{\bm{v}}|\leq|V_{\bm{v}}|.\qed$ We derive a bound on $|S_{\bm{v}}|$. ###### Lemma 6.2. For $n\geq 2$, let $S\subset{\mathbb{F}}_{2}^{n}$ be a $(3,2)$-orthogonal set with respect to a non-degenerate symmetric bilinear form ${\mathcal{B}}$. If $\bm{v}\in S$, then in the notation of Lemma 3.5: * • If ${\mathcal{B}}=\cdot$, then $|S_{\bm{v}}|\leq\begin{cases}n,&\mbox{if }n\leq 7;\\\ 1+2^{\frac{n-1}{2}-1},&\mbox{if }n\ \text{is odd and}\ n\geq 9;\\\ 2^{\frac{n}{2}-1},&\mbox{if }n\ \text{is even and}\ n\geq 8.\end{cases}$ * • If ${\mathcal{B}}={\mathcal{H}}$ and $n$ is even, then $|S_{\bm{v}}|\leq 2^{\frac{n}{2}-1}$. ###### Proof. We begin with ${\mathcal{B}}=\cdot$. By Lemma 3.5 and Lemma 6.1, we have $|S_{\bm{v}}|\leq|R_{\bm{v}}|+|T_{\bm{v}}|\leq\frac{|V_{\bm{v}}|}{2}+|T_{\bm{v}}|.$ By Lemma 3.2 we have $\dim(V_{\bm{v}})\leq\lfloor(n-|T_{\bm{v}}|)/2\rfloor$. Setting $t=|T_{\bm{v}}|$ we get $|S_{\bm{v}}|\leq 2^{\lfloor\frac{n-t}{2}\rfloor-1}+t.$ A routine calculation confirms that, for $n\leq 7$, the right-hand side is maximized when $t=n$. Otherwise, the maximum is achieved when $t=1$ for odd $n$ and when $t=0$ for even $n$.
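As an aside, the routine calculation just mentioned can be checked mechanically. A minimal Python sketch (one reading assumption: the term $2^{\lfloor(n-t)/2\rfloor-1}$ is taken to contribute $0$ when $\lfloor(n-t)/2\rfloor=0$, since then $V_{\bm{v}}$ is trivial and $R_{\bm{v}}$ is empty):

```python
# Check the optimisation over t = |T_v| of the bound
# |S_v| <= 2^{floor((n-t)/2)-1} + t from the proof of Lemma 6.2,
# with the power-of-2 term read as 0 when floor((n-t)/2) = 0.
def bound(n, t):
    k = (n - t) // 2                      # upper bound on dim(V_v)
    return t + (2 ** (k - 1) if k >= 1 else 0)

def best(n):
    # maximise the bound over all admissible t
    return max(bound(n, t) for t in range(n + 1))

for n in range(2, 26):
    if n <= 7:
        assert best(n) == n                            # maximum at t = n
    elif n % 2 == 1:
        assert best(n) == 1 + 2 ** ((n - 1) // 2 - 1)  # odd n >= 9, t = 1
    else:
        assert best(n) == 2 ** (n // 2 - 1)            # even n >= 8, t = 0
print("Lemma 6.2 case analysis verified for 2 <= n <= 25")
```

The check also shows the odd and even thresholds are tight in the sense that at $n=9$ the two candidate maximisers $t=1$ and $t=n$ coincide in value.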
If ${\mathcal{B}}={\mathcal{H}}$, there are no non-self-orthogonal vectors and so, similarly to above, $|S_{\bm{v}}|\leq|V_{\bm{v}}|/2\leq 2^{\frac{n}{2}-1}$. ∎ We first prove the theorem for the hyperbolic form ${\mathcal{H}}$. ###### Proposition 6.3. Let $n\geq 2$ be even and $S\subset{\mathbb{F}}_{2}^{n}$ be a $(3,2)$-orthogonal set with respect to the hyperbolic form ${\mathcal{H}}$. Then $|S|\leq 2^{\frac{n}{2}+1}-2.$ ###### Proof. We prove the claim by induction. For $n=2$, ${\mathbb{F}}_{2}^{2}\setminus\\{\bm{0}\\}$ is not $(3,2)$-orthogonal. So the claim is true for $n=2$. For the inductive step, we may assume there exist linearly independent $\bm{v},\bm{w}$ such that ${\mathcal{H}}(\bm{v},\bm{w})=1$. If not, then $S$ is an orthogonal set and, by Lemma 3.2, we have the better bound $|S|\leq 2^{\frac{n}{2}}-1$. By Lemma 3.7 (iii), ${\mathcal{H}}{\mathbin{\upharpoonright}}\raise-2.15277pt\hbox{$\\{\bm{v},\bm{w}\\}^{\perp}$}$ is non-degenerate and is equivalent to ${\mathcal{H}}$ (in this lower dimensional vector space). By the induction hypothesis we have $|S_{\bm{vw}}|\leq 2^{\frac{n}{2}}-2.$ Hence by Lemma 6.2, $|S|\leq|S_{\bm{v}}|+|S_{\bm{w}}|+|S_{\bm{vw}}|\leq 2^{\frac{n}{2}-1}+2^{\frac{n}{2}-1}+(2^{\frac{n}{2}}-2)=2^{\frac{n}{2}+1}-2.\qed$ From now on we mainly restrict our attention to the dot product, though we will use Proposition 6.3 for even $n$ because we sometimes use Lemma 3.7 (i) and the restriction of the dot product may be equivalent to ${\mathcal{H}}$. To prove the theorem we consider two cases separately, depending on whether or not $S$ contains a non-self-orthogonal vector. We first prove the theorem when all vectors in $S$ are self-orthogonal. The proof is similar to that of Proposition 6.3. ###### Proposition 6.4. For $n\geq 1$, let $S\subset{\mathbb{F}}_{2}^{n}$ be a $(3,2)$-orthogonal set with respect to the dot product.
If $S$ consists entirely of self-orthogonal vectors, then $|S|\leq\begin{cases}2^{\frac{n+1}{2}}-2,&\mbox{if }n\ \text{is odd};\\\ 2^{\frac{n}{2}+1}-3,&\text{if }n\ \text{is even}.\end{cases}$ ###### Proof. We prove the claim by induction. For $n=1$, $|S|=0$. For $n=2$, we have $S\subset\\{(1,1)\\}$ and the claim follows. For the inductive step, we may assume there exist linearly independent $\bm{v},\bm{w}$ such that $\bm{v}\cdot\bm{w}=1$. If not, then $S$ is an orthogonal set and by Lemma 3.4, we have a better bound on $|S|$ than required. Furthermore, by Lemma 3.7 (iii), the dot product restricted to $\\{\bm{v},\bm{w}\\}^{\perp}$ is equivalent to the (lower dimensional) dot product and $S_{\bm{vw}}$ contains only self-orthogonal vectors. For even $n$, by the induction hypothesis we have $|S_{\bm{vw}}|\leq 2^{\frac{n}{2}}-3.$ All vectors in $S$ are self-orthogonal and so $T_{\bm{v}}=T_{\bm{w}}=\emptyset$. Therefore $S_{\bm{v}}=R_{\bm{v}}$. Lemma 6.1 gives $|R_{\bm{v}}|\leq 2^{\frac{n}{2}-1}.$ The same holds for $\bm{w}$. Putting everything together gives $|S|\leq 2^{\frac{n}{2}-1}+2^{\frac{n}{2}-1}+(2^{\frac{n}{2}}-3)=2^{\frac{n}{2}+1}-3.$ For odd $n$, the induction hypothesis gives $|S_{\bm{vw}}|\leq 2^{\frac{n-1}{2}}-2.$ Again $T_{\bm{v}}=T_{\bm{w}}=\emptyset$ and so by Lemma 6.2, we have $|S_{\bm{v}}|,|S_{\bm{w}}|\leq 2^{\frac{n-1}{2}-1}.$ This gives $|S|\leq 2^{\frac{n-1}{2}-1}+2^{\frac{n-1}{2}-1}+(2^{\frac{n-1}{2}}-2)=2^{\frac{n+1}{2}}-2,$ as required. ∎ The next step is to prove a bound for all $S$ that is weaker than that in Theorem 2.2. It will be used to prove the theorem when $S$ contains a vector that is not self-orthogonal. ###### Lemma 6.5. For $n\geq 1$, let $S\subset{\mathbb{F}}_{2}^{n}$ be a $(3,2)$-orthogonal set with respect to a non-degenerate symmetric bilinear form. 
Then $|S|\leq\begin{cases}2^{\frac{n+1}{2}}+2n-2,&\mbox{if }n=1,3;\\\ 2^{\frac{n+1}{2}}+\tfrac{n(n+1)}{2}-3,&\mbox{if }n\geq 5\ \text{is odd};\\\ 2^{\frac{n}{2}+1}+2n-3,&\mbox{if }n=2,4;\\\ 2^{\frac{n}{2}+1}+\tfrac{n(n+1)}{2}-4,&\mbox{if }n\geq 6\ \text{is even}.\end{cases}$ ###### Proof. If the bilinear form is equivalent to ${\mathcal{H}}$, the result follows from Proposition 6.3. If the bilinear form is equivalent to the dot product, we let $D\subset S$ be the collection of vectors in $S$ that are not self-orthogonal. The claim follows by applying Proposition 6.4 to $S\setminus D$ and Lemma 3.10 to $D$. ∎ We continue with the case when there is a vector that is non-self-orthogonal. The proof is longer because we cannot initiate the induction (for example, Remark 6.7 below shows ${\mathcal{S}}_{3,2}(2,4,\cdot)\geq 7>2^{3}-3$) and because we can no longer assume, say, $T_{\bm{v}}=\emptyset$. ###### Proposition 6.6. Let $n$ be an integer and $S\subset{\mathbb{F}}_{2}^{n}$ be a $(3,2)$-orthogonal set with respect to the dot product. If $S$ contains a vector that is not self-orthogonal, then $|S|\leq\begin{cases}2^{\frac{n+1}{2}}+1,&\mbox{if }n\geq 21\ \text{is odd};\\\ 2^{\frac{n}{2}+1}-3,&\mbox{if }n\geq 18\ \text{is even}.\end{cases}$ ###### Proof. We begin with familiar notation. We let $G$ be the simple graph with vertex set $S$ and edges given by pairs of vertices that are not mutually orthogonal, and $D=\\{\bm{v}\in S:\bm{v}\cdot\bm{v}=1\\}.$ Even $n$. Let $\bm{z}\in D$. We consider two separate cases according to whether or not there exists an edge between $\bm{z}$ and $S\setminus D$. Suppose first that there is no edge between $\bm{z}$ and $S\setminus D$. Then $S\setminus D\subset\\{\bm{z}\\}^{\perp}$. The dot product restricted to $\\{\bm{z}\\}^{\perp}$ is non-degenerate (because $\bm{z}$ is not self-orthogonal). The dimension of $\\{\bm{z}\\}^{\perp}$ is odd and so the restriction is equivalent to the (lower dimensional) dot product.
Moreover, all elements of $S\setminus D$ are self-orthogonal. Therefore by Proposition 6.4 we get $|S\setminus D|\leq 2^{\frac{n}{2}}-2$. By Lemma 3.10 we have $|D|\leq\tfrac{n(n+1)}{2}-1$. Hence (because $n\geq 14$) $|S|\leq(2^{\frac{n}{2}}-2)+(\tfrac{n(n+1)}{2}-1)\leq 2^{\frac{n+1}{2}}-3.$ Next we suppose that there exists an edge $\bm{vz}$ with $\bm{v}\in S\setminus D$. We have $|S|\leq|S_{\bm{v}}|+|S_{\bm{z}}|+|S_{\bm{vz}}|.$ By Lemma 3.7 (i), the dot product restricted to $\\{\bm{v},\bm{z}\\}^{\perp}$ is non-degenerate. Lemma 6.5 gives $|S_{\bm{vz}}|\leq 2^{\frac{n}{2}}+\tfrac{(n-2)(n-1)}{2}-4.$ To bound $|S_{\bm{v}}|$ note $\bm{z}\in T_{\bm{v}}$. Writing $t=|T_{\bm{v}}|$, and applying Lemma 3.5 and Lemma 6.1 we get (using $n\geq 12$) $|S_{\bm{v}}|\leq 2^{\lfloor\frac{n-1-t}{2}\rfloor-1}+t\leq 2^{\frac{n}{2}-2}+1.$ By Lemma 6.2 we get $|S_{\bm{z}}|\leq 2^{\frac{n}{2}-1}$. Putting everything together gives (using $n\geq 18$) $|S|\leq 2^{\frac{n}{2}+1}-3-(2^{\frac{n}{2}-2}-\tfrac{(n-2)(n-1)}{2})\leq 2^{\frac{n}{2}+1}-3.$ Odd $n$. If $D$ contains at most three elements, the required result follows from Proposition 6.4. Let $\bm{x},\bm{y},\bm{z}\in D$ denote three distinct elements and let $H$ be the graph induced on $\\{\bm{x},\bm{y},\bm{z}\\}$. The graph $H$ is not a triangle because $D$ is a subset of a $(3,2)$-orthogonal set. We consider three cases based on the number of edges in $H$. Suppose $H$ is the empty graph. First, assume some pair of the sets $R_{\bm{x}},R_{\bm{y}},R_{\bm{z}}$ has non-empty intersection; say $\bm{v}\in R_{\bm{x}}\cap R_{\bm{y}}$. Consider the decomposition (18) $S=S_{\bm{x}}\cup S_{\bm{v}}\cup S_{\bm{xv}}.$ Note that $\bm{x},\bm{y}\in T_{\bm{v}}$ and that by Lemma 3.7 (i), we may apply Lemma 6.5 to obtain (19) $|S|\leq(2^{\frac{n-1}{2}-2}+3)+(2^{\frac{n-1}{2}-1}+1)+(2^{\frac{n-1}{2}}+\tfrac{(n-2)(n-1)}{2}-3)\leq 2^{\frac{n+1}{2}}+1,$ for $n\geq 21$. Thus suppose $R_{\bm{x}},R_{\bm{y}},R_{\bm{z}}$ are pairwise disjoint.
This means that $R_{\bm{z}}\cup\\{\bm{x},\bm{y}\\}$ is orthogonal. Since $\bm{z}\in S_{\bm{xy}}$ and the dot product restricted to $\\{\bm{x},\bm{y}\\}^{\perp}$ is non-degenerate (by Lemma 3.7 (i)), Lemma 6.2 gives $|S_{\bm{z}}|=|S_{\bm{z}}\cap S_{\bm{xy}}|\leq 1+2^{\frac{n-1}{2}-2}.$ Considering the decomposition (20) $S=S_{\bm{x}}\cup S_{\bm{z}}\cup S_{\bm{xz}}\cup\\{\bm{x},\bm{z}\\},$ and using Lemma 3.7 (i), Lemma 6.2 (as well as its proof), and Lemma 6.5, we have (21) $|S|\leq(2^{\frac{n-1}{2}-1}+1)+(2^{\frac{n-1}{2}-2}+3)+(2^{\frac{n-1}{2}}+\tfrac{(n-2)(n-1)}{2}-3)+2\leq 2^{\frac{n+1}{2}}+1$ for $n\geq 21$. (One can do better, but we will refer to (21) later.) Next, suppose $H$ has exactly one edge. Without loss of generality take $\bm{yx}$ to be the edge. We split this case further. First suppose there exists an edge between $\bm{z}$ and $R_{\bm{x}}\cup R_{\bm{y}}$; say, an edge between $\bm{z}$ and $\bm{v}\in R_{\bm{x}}$. We consider the decomposition (18). Then, noting that $\bm{x},\bm{z}\in T_{\bm{v}}$ and that Lemma 3.7 (i) allows one to apply Lemma 6.5, one recovers the same bound on $|S|$ as (19). Next, suppose there is no edge between $\bm{z}$ and $R_{\bm{x}}\cup R_{\bm{y}}$. In particular, $R_{\bm{x}}\cup\\{\bm{y},\bm{z}\\}$ is an orthogonal set. It follows that $V_{\bm{x}}$ does not have the maximum dimension that is possible for orthogonal subspaces. Thus, using the decomposition (20) together with Lemma 3.7 (i), Lemma 3.2, Lemma 6.1 and Lemma 6.5, we may obtain the same bound on $|S|$ as (21). Finally, suppose $H$ has two edges. Without loss of generality let $\bm{yxz}$ be the path of length 2. Here, we proceed to show that we may assume $|D|=3$, which, as pointed out earlier, gives the required result. Suppose there exists a fourth vector $\bm{w}\in D$. If $\bm{w}$ forms an edge with $\bm{x}$, then there is no edge between any two of $\\{\bm{y},\bm{z},\bm{w}\\}$ and we are done by the arguments of the first case.
If, on the other hand, $\bm{w}$ does not form an edge with $\bm{x}$, then by Lemma 3.7 (i) the dot product is non-degenerate on $\\{\bm{x},\bm{w}\\}^{\perp}$, so we use $S=S_{\bm{x}}\cup S_{\bm{w}}\cup S_{\bm{xw}}\cup\\{\bm{x},\bm{w}\\}.$ Then, noting $|T_{\bm{x}}|\geq 2$ and arguing as before, we obtain $|S|\leq(2^{\frac{n-1}{2}-2}+3)+(2^{\frac{n-1}{2}-1}+1)+(2^{\frac{n-1}{2}}+\tfrac{(n-2)(n-1)}{2}-3)+2\leq 2^{\frac{n+1}{2}}+1,$ which is the same as (21). ∎ The proof of Theorem 2.2 is completed by combining Propositions 6.3, 6.4 and 6.6. ###### Remark 6.7. Theorem 2.2 is false for small $n$. For $n=2$, ${\mathcal{S}}_{3,2}(2,2,\cdot)=3$ as we see by taking $S={\mathbb{F}}_{2}^{2}\setminus\\{\bm{0}\\}$. For $n=4$ the example below shows ${\mathcal{S}}_{3,2}(2,4,\cdot)\geq 7$: $S=\\{(1,1,1,0),(1,0,0,0),(1,0,1,1),(0,0,0,1),(0,1,1,1),(0,1,1,0),(1,1,0,1)\\}.$ The graph of $S$ is indeed triangle-free: using the implicit order on the vertices, it is the union of the 6-cycle $234567$ with the edges $12$ and $47$. ## 7\. Proof of Theorem 2.3 The following is essentially the same as [10, Equation 2.4] and [18, Lemma 5]. Also see [3] or apply the point-hyperplane incidence bound in [20]. ###### Lemma 7.1. For $X,Y\subset{\mathbb{F}}_{q}^{n}$, define $O(X,Y)=|\\{(\bm{x},\bm{y})\in X\times Y:\mathcal{B}(\bm{x},\bm{y})=0\\}|.$ Then $\bigg{|}O(X,Y)-\frac{|X||Y|}{q}\bigg{|}\leq\sqrt{|X||Y|q^{n}}.$ The following result is due to Turán [19]. ###### Lemma 7.2. Any $K_{r+1}$-free graph on $n$ vertices contains at most $(1-1/r)(n^{2}/2)$ edges. ###### Proof of Theorem 2.3. Let $G=G(S,E_{1})$ be the simple graph, where $(\bm{s_{1}},\bm{s_{2}})\in S^{2}$ forms an edge in $E_{1}$ if $\bm{s_{1}}\neq\bm{s_{2}}$ and $\mathcal{B}(\bm{s_{1}},\bm{s_{2}})\neq 0$.
Then using the fact that $S$ is $(k,2)$-orthogonal, we know that $G$ is $K_{k}$-free and thus by Lemma 7.2, $|E_{1}|\leq\frac{k-2}{k-1}\frac{|S|^{2}}{2}.$ Letting $G^{\prime}=G(S,E_{2})$ denote the complement of $G$, we deduce that $|E_{2}|\geq\frac{|S|(|S|-1)}{2}-|E_{1}|\geq\frac{|S|^{2}}{2(k-1)}-\frac{|S|}{2}.$ Now, clearly $O(S,S)\geq 2|E_{2}|$. Hence, applying Lemma 7.1, we have $\frac{|S|^{2}}{k-1}-\frac{|S|^{2}}{q}-|S|\leq|S|q^{n/2},$ which gives $|S|\leq\bigg{(}\frac{q(k-1)}{q-k+1}\bigg{)}(q^{n/2}+1).\qed$ ## Acknowledgement The authors are grateful to anonymous referees for their suggestions, which helped improve the presentation of the paper. ## References * [1] O. Ahmadi and A. Mohammadian, ‘Sets with many pairs of orthogonal vectors over finite fields’, Finite Fields Appl., 37 (2016), 179–192. * [2] M. Ajtai, J. Komlós, and E. Szemerédi, ‘A note on Ramsey numbers’, J. Combin. Theory Ser. A, 29 (1980), 354–360. * [3] N. Alon and M. Krivelevich, ‘Constructive bounds for a Ramsey-type problem’, Graphs Combin., 13 (1997), 217–225. * [4] N. Alon and M. Szegedy, ‘Large sets of nearly orthogonal vectors’, Graphs Combin., 15 (1999), 1–4. * [5] E. R. Berlekamp, ‘On subsets with intersections of even cardinality’, Can. Math. Bull., 23 (1969), 471–474. * [6] L. Deaett, ‘The minimum semidefinite rank of a triangle-free graph’, Linear Algebra Appl., 434 (2011), 1945–1955. * [7] Z. Füredi and R. P. Stanley, ‘Sets of vectors with many orthogonal pairs’, Graphs Combin., 8 (1992), 391–394. * [8] J. E. Graver, ‘Boolean designs and self-dual matroids’, Linear Algebra Appl., 10 (1975), 111–128. * [9] L. C. Grove, ‘Classical Groups and Geometric Algebra’, Grad. Stud. Math., Vol. 90, American Mathematical Society, Providence, RI, 2008. * [10] D. Hart, A. Iosevich, D. Koh, and M. Rudnev, ‘Averages over hyperplanes, sum-product theory in vector spaces over finite fields and the Erdős-Falconer distance conjecture’, Trans. Amer. Math. Soc., 363 (2011), 3255–3275. * [11] J. H.
Kim, ‘The Ramsey number $R(3,t)$ has order of magnitude $t^{2}/\log t$’, Random Structures Algorithms, 7 (1995), 173–207. * [12] M. Knebusch, ‘Specialization of quadratic and symmetric bilinear forms’, translated from the German by Thomas Unger, Algebra and Applications, 11, Springer-Verlag, London, 2010. * [13] R. Lidl and H. Niederreiter, Finite Fields, Cambridge Univ. Press, Cambridge, 1997. * [14] J. Nešetřil and M. Rosenfeld, ‘Embedding graphs in Euclidean spaces, an exploration guided by Paul Erdős’, Geombinatorics, 6 (1997), 143–155. * [15] P. Pudlák, ‘Cycles of nonzero elements in low rank matrices’, Combinatorica, 22 (2002), 321–334. * [16] M. Rosenfeld, ‘Almost orthogonal lines in $E^{d}$’, in Applied Geometry and Discrete Mathematics, DIMACS Ser. Discret. Math. Theor. Comput. Sci., Vol. 4, (1991), 489–492. * [17] J. B. Shearer, ‘A note on the independence number of triangle-free graphs. II’, J. Combin. Theory Ser. B, 53 (1991), 300–307. * [18] I. E. Shparlinski, ‘On the additive energy of the distance set in finite fields’, Finite Fields Appl., 42 (2016), 187–199. * [19] P. Turán, ‘Eine Extremalaufgabe aus der Graphentheorie’, Mat. Fiz. Lapok, 48 (1941), 436–452. * [20] L. A. Vinh, ‘The Szemerédi–Trotter type theorem and the sum-product estimate in finite fields’, European J. Combin., 32 (2011), 1177–1181. * [21] L. A. Vinh, ‘Maximal sets of pairwise orthogonal vectors in finite fields’, Canad. Math. Bull., 55 (2012), 418–423. * [22] I. M. Vinogradov, An Introduction to the Theory of Numbers, Pergamon Press, London and New York, 1955. * [23] A. Zame, ‘Orthogonal sets of vectors over $Z_{m}$’, J. Combinatorial Theory, 9 (1970), 136–143.
# Evaluating the Mahler measure of linear forms via the Kronecker limit formula on complex projective space James Cogdell, Jay Jorgenson, Lejla Smajlović (The second named author acknowledges grant support from several PSC-CUNY Awards, which are jointly funded by the Professional Staff Congress and The City University of New York.) ###### Abstract In Cogdell et al., LMS Lecture Notes Series 459, 393–427 (2020), the authors proved an analogue of Kronecker’s limit formula associated to any divisor $\mathcal{D}$ which is smooth in codimension one on any smooth Kähler manifold $X$. In the present article, we apply the aforementioned Kronecker limit formula in the case when $X$ is complex projective space ${\mathbb{C}}{\mathbb{P}}^{n}$ for $n\geq 2$ and $\mathcal{D}$ is a hyperplane, meaning the divisor of a linear form $P_{D}({z})$ for ${z}=(\mathcal{Z}_{j})\in{\mathbb{C}}{\mathbb{P}}^{n}$. Our main result is an explicit evaluation of the Mahler measure of $P_{D}$ as a convergent series, each of whose terms is given in terms of rational numbers, multinomial coefficients, and the $L^{2}$-norm of the vector of coefficients of $P_{D}$. ## 1 Introduction ### 1.1 Mahler measure Let $P(x_{1},\cdots,x_{n})\in{\mathbb{C}}[x_{1},\cdots,x_{n}]$ be a polynomial in $n$ variables with complex coefficients; we assume that $P$ is not identically equal to zero. The Mahler measure $M(P)$ of $P$ is defined by the expression $M(P)=\exp\left({\frac{1}{(2\pi)^{n}}}\int\limits_{0}^{{2\pi}}\int\limits_{0}^{{2\pi}}\cdots\int\limits_{0}^{{2\pi}}\log{\Bigl{(}}{\bigl{|}}P(e^{{i\theta_{1}}},e^{{i\theta_{2}}},\ldots,e^{{i\theta_{n}}}){\bigr{|}}{\Bigr{)}}\,d\theta_{1}\,d\theta_{2}\cdots d\theta_{n}\right).$ (1) If $n=1$, we can write $P(x)=a_{d}x^{d}+\cdots+a_{1}x+a_{0}=a_{d}\prod\limits_{k=1}^{d}(x-\alpha_{k})$, in which case Jensen’s formula implies that $M(P)=|a_{d}|\prod\limits_{|\alpha_{j}|>1}|\alpha_{j}|.$ (2) As usual, one sets $m(P)=\log M(P)$ to denote the logarithmic Mahler measure of $P$.
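The one-variable case (2) is easy to probe numerically. The following sketch (an illustration with a polynomial chosen here for demonstration, not taken from the sources above) approximates the integral in (1) by an equispaced average over the unit circle and compares it with the product of the roots outside the unit disc; since the integrand is periodic and smooth whenever $P$ has no roots on $|x|=1$, the trapezoidal average converges extremely fast.

```python
import cmath
import math

def mahler_numeric(coeffs, n_grid=2048):
    """Approximate M(P) by averaging log|P(e^{i*theta})| over an
    equispaced grid on [0, 2*pi); coeffs are listed from the highest
    degree down, and P is evaluated by Horner's rule."""
    total = 0.0
    for k in range(n_grid):
        x = cmath.exp(2j * math.pi * k / n_grid)
        p = 0j
        for c in coeffs:
            p = p * x + c
        total += math.log(abs(p))
    return math.exp(total / n_grid)

# P(x) = x^2 - x - 1 has roots (1 +/- sqrt 5)/2; only the golden ratio
# phi = (1 + sqrt 5)/2 lies outside the unit disc, so (2) gives M(P) = phi.
phi = (1 + math.sqrt(5)) / 2
est = mahler_numeric([1, -1, -1])
assert abs(est - phi) < 1e-9
```

The same routine applied to $P(x)=2x-1$, whose only root lies inside the unit disc, returns $|a_{d}|=2$, again in agreement with (2).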
Amongst the many articles involving Mahler measures, we shall highlight a few which we find particularly motivating. In [Sm08] the author presents an excellent survey of the many ways in which the Mahler measures of polynomials in one variable are related to various questions in mathematics, including problems in algebraic number theory, ergodic theory, knot theory, transcendental number theory and diophantine approximation, just to name a few. In [De97] the author established a fascinating connection between Mahler measures and Deligne periods associated to mixed motives; see [De09] and [De12] for subsequent development of the insight from [De97]. In [Bo98], the author undertakes a study of numerical methods by which one can estimate Mahler measures and, as a result, is able to investigate some of the ideas from [De97]. Since then, many authors have extended the study of Mahler measures both in the numerical direction, as in [Bo98], and in the theoretical framework, as in [De97]. On page 22 of [BG06] the authors stated the definition of Mahler measure (1) in the context of heights of polynomials, though the subsequent discussion only considers the setting of one variable polynomials. In [Ma00] the author computed the arithmetic height, in the sense of Arakelov theory, of divisors in projective space. Specifically, it was shown that a hypersurface defined over ${\mathbb{Z}}$ has canonical height expressed in terms of the Mahler measure of a defining polynomial; see page 107 of [Ma00]. The following observation is an underlying aspect of a considerable part of the aforementioned work: in many instances, Mahler measures can be expressed as special numbers, such as norms of algebraic numbers, arithmetic heights or special values of $L$-functions. As such, the study of Mahler measures is intrinsically interesting.
### 1.2 Mahler measure of a linear polynomial For this article, we will consider the specific setting of linear polynomials, which itself has been the focus of attention; see, for example, [R-VTV04] or [Sm81]. Let $P_{D}(\mathcal{Z}_{0},\ldots,\mathcal{Z}_{n})=\mathcal{W}_{0}\mathcal{Z}_{0}+\mathcal{W}_{1}\mathcal{Z}_{1}+\cdots+\mathcal{W}_{n}\mathcal{Z}_{n}$ (3) denote the linear polynomial in the $n+1$ complex projective coordinate variables, and assume for now that $n\geq 2$. We will parameterize the polynomial $P_{D}$ through the $(n+1)$-tuple $D=(\mathcal{W}_{0},\mathcal{W}_{1},\ldots,\mathcal{W}_{n})$ of its coefficients. Of course, we assume that some $\mathcal{W}_{j}$ is not zero, and we set $\|D\|^{2}=|\mathcal{W}_{0}|^{2}+\cdots+|\mathcal{W}_{n}|^{2}.$ Assuming that $\mathcal{W}_{0}\neq 0$, one has that after dehomogenization the (logarithmic) Mahler measure $m(P_{D})$ can be evaluated as $m(P_{D})={\frac{1}{(2\pi)^{n}}}\int\limits_{0}^{{2\pi}}\int\limits_{0}^{{2\pi}}\cdots\int\limits_{0}^{{2\pi}}\log{\Bigl{(}}{\bigl{|}}P_{D}(1,e^{{i\theta_{1}}},e^{{i\theta_{2}}},\ldots,e^{{i\theta_{n}}}){\bigr{|}}{\Bigr{)}}\,d\theta_{1}\,d\theta_{2}\cdots d\theta_{n}.$ (4) In [R-VTV04], the authors derived the bounds $\log\|D\|-\frac{1}{2}\gamma-2\leq m(P_{D})\leq\log\|D\|,$ (5) where $\gamma$ denotes Euler’s constant. The upper bound in (5) is trivial; however, the lower bound follows from reasonably extensive computations stemming from an infinite series expansion of the Mahler measure in terms of certain weighted integrals of $J$-Bessel functions. Indeed, one of the points made in [R-VTV04] is that their results are amenable to numerical estimation of $m(P_{D})$ for any linear polynomial $P_{D}$; see, in particular, Corollary 1.4 on page 476 of [R-VTV04]. In [Sm81] it is shown that for certain classes of linear polynomials in $n+1$ variables one can evaluate the corresponding Mahler measure.
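The bounds (5) are easy to probe numerically. The sketch below (an illustration, not part of [R-VTV04]) approximates the torus integral (4) for $n=2$ and $D=(1,1,1)$ by a midpoint-rule average; the half-step offset keeps the grid away from the two points of the torus where $1+e^{i\theta_{1}}+e^{i\theta_{2}}$ vanishes, so the logarithmic singularities cause no trouble. The estimate should land inside the window (5) and, as a stronger check, near Smyth's closed-form value $m(1+z_{1}+z_{2})\approx 0.32307$ recalled in the next paragraph.

```python
import cmath
import math

def m_linear_n2(w0, w1, w2, n_grid=300):
    """Midpoint-rule estimate of the logarithmic Mahler measure (4) of
    w0 + w1*z1 + w2*z2 over the 2-torus."""
    total = 0.0
    for a in range(n_grid):
        t1 = 2 * math.pi * (a + 0.5) / n_grid
        e1 = cmath.exp(1j * t1)
        for b in range(n_grid):
            t2 = 2 * math.pi * (b + 0.5) / n_grid
            total += math.log(abs(w0 + w1 * e1 + w2 * cmath.exp(1j * t2)))
    return total / n_grid**2

est = m_linear_n2(1, 1, 1)
log_norm_D = math.log(math.sqrt(3))        # log ||D|| for D = (1,1,1)
euler_gamma = 0.5772156649015329
# the window (5) of [R-VTV04]
assert log_norm_D - euler_gamma / 2 - 2 <= est <= log_norm_D
# Smyth's evaluation m(1+z1+z2) = (3*sqrt(3)/(4*pi)) L(2, chi_3) = 0.32306...
assert abs(est - 0.3230659472) < 0.01
```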
Some of the main results of [Sm81] follow from clever applications of Jensen’s formula, thus the resulting formulas are similar to (2). To specialize further, let us now assume that for each $j$ we have that $\mathcal{W}_{j}=1$. In [BSWZ12] it is shown that $m(P_{D})=\frac{d}{ds}W_{n+1}(s)\Big{|}_{s=0}$ where $W_{n+1}(s)=\int\limits_{0}^{1}\cdots\int\limits_{0}^{1}\left|\sum_{k=1}^{n+1}e^{2\pi it_{k}}\right|^{s}dt_{1}\cdots dt_{n+1}.$ When studying the Mahler measure of $P_{D}$, the authors of [BSWZ12] applied arguments from probability theory to $W_{n+1}(s)$, which they viewed as the $s$-th moment of an $(n+1)$-step random walk. In doing so, it is asserted on page 982 of [BSWZ12] that $m(P_{D})=\log(n+1)-\sum_{j=1}^{\infty}\frac{1}{2j}\sum_{k=0}^{j}\binom{j}{k}\frac{(-1)^{k}}{(n+1)^{2k}}W_{n+1}(2k)\,\,\,\,\,\textrm{\rm when $D=(1,1,\cdots,1)$.}$ (6) As it turns out, equation (6) is a special case of our Theorem 1 as stated below. Finally, let us note that in certain special instances the values of the Mahler measure of a linear polynomial $P_{D}$ have been computed explicitly. When $n=2$ and $D=(1,1,1)$, it is proved in [Sm81] that $m(1+z_{1}+z_{2})=\frac{3\sqrt{3}}{4\pi}L(2,\chi_{3})$ where $L(s,\chi_{3})$ is the Dirichlet $L$-function associated to the non-principal odd character $\chi_{3}$ modulo $3$. If $n=3$, then it is also proved in [Sm81] that $m(1+z_{1}+z_{2}+z_{3})=\frac{7}{2\pi^{2}}\zeta(3)$ where $\zeta(s)$ denotes the Riemann zeta function. There are many other examples of explicit evaluations of Mahler measures, far too many to provide an exhaustive listing. However, it should be noted that each new evaluation is in and of itself interesting and aids in understanding the importance of Mahler measures. ### 1.3 Our main results The purpose of this article is to develop a different means to evaluate Mahler measures of linear polynomials in $n+1$ complex variables. Our approach is based on the following observation.
A holomorphic section of a power of the canonical bundle on $n$-dimensional complex projective space ${\mathbb{C}}{\mathbb{P}}^{n}$ can be realized as a homogeneous polynomial in $n+1$ projective coordinates. Therefore, the log-norm of such a polynomial, which appears in the definition of the Mahler measure, can be expressed in terms of the log-norm of a holomorphic form on ${\mathbb{C}}{\mathbb{P}}^{n}$ which, by the results of [CJS20], is related to an integral over its divisor of a “truncated” Green’s function, or resolvent kernel, on ${\mathbb{C}}{\mathbb{P}}^{n}$ by way of its Kronecker limit formula. The spectral expansion of Green’s function on ${\mathbb{C}}{\mathbb{P}}^{n}$ yields a representation of the log-norm of the polynomial in terms of a certain infinite series which we are able to explicitly evaluate. Our first main result is the following theorem. ###### Theorem 1 With the notation as above, let $c(D)^{2}=(n+1)||D||^{2}$ and set $a(n,k,D)=\sum_{\ell_{0}+\ldots+\ell_{n}=k,\ell_{m}\geq 0}\binom{k}{\ell_{0},\ell_{1},\ldots,\ell_{n}}^{2}|\mathcal{W}_{0}|^{2\ell_{0}}\cdots|\mathcal{W}_{n}|^{2\ell_{n}}$ where $\binom{k}{\ell_{0},\ell_{1},\ldots,\ell_{n}}=\frac{k!}{\ell_{0}!\ell_{1}!\cdots\ell_{n}!}$ is the multinomial coefficient. Then for $n\geq 3$ the logarithmic Mahler measure $m(P_{D})$ of the linear polynomial $P_{D}$ is given by $m(P_{D})=\log c(D)-\frac{1}{2}\sum_{j=1}^{\infty}\frac{1}{j}\sum_{k=0}^{j}\binom{j}{k}\frac{(-1)^{k}a(n,k,D)}{c(D)^{2k}}.$ (7) The sum over $j$ on the right-hand side of (7) is absolutely convergent. However, it is not possible to view the series as a double series in $j$ and $k$ and then interchange the order of summation. In particular, the series diverges when viewed as a sum over $j$ for fixed $k$. We also obtain the following expression for $m(P_{D})$.
###### Theorem 2 For any integer $\ell\geq 1$, let $H_{\ell}=1+\frac{1}{2}+\cdots+\frac{1}{\ell}$ and set $S_{D}(\ell)=\sum_{j=1}^{\infty}\frac{2j+\ell}{j(j+\ell)}\sum_{k=0}^{j}\binom{j+\ell+k-1}{k}\binom{j}{k}\frac{(-1)^{k}a(n,k,D)}{c(D)^{2k}}.$ Then for any $n\geq 3$ and any $D$, we have that $m(P_{D})=\log c(D)-\frac{1}{2}H_{1}-\frac{1}{2}S_{D}(1).$ (8) Further, for any $n\geq 3$ and $\ell\geq 2$ we have that $m(P_{D})=\log c(D)-\frac{1}{2}H_{\ell}-\frac{1}{2}S_{D}(\ell)$ (9) provided $D\neq r(1,1,\cdots,1)$ for some $r\neq 0$. From (8), we will prove that for all $n\geq 3$ and all $D$ one has that $m(P_{D})=\log c(D)-\frac{1}{2}S_{D}(0).$ (10) In summary, we will prove that (9) holds in the following cases: 1. (i) All $n\geq 3$ and all $D$ when $\ell=0$; 2. (ii) All $n\geq 3$ and all $D$ when $\ell=1$; 3. (iii) All $n\geq 3$ and all $\ell\geq 2$ provided $D\neq r(1,1,\cdots,1)$ for some $r\neq 0$. In our concluding comments to this paper, we will discuss the exceptional instance in case (iii) as well as the general setting when $n=2$. As stated, [Sm81] obtains explicit evaluations of Mahler measures for certain linear polynomials. Thus, by combining our main theorem with the formulas from [Sm81], we obtain many intriguing identities. Along this line, let us point out the following “amusing” corollary which comes from comparing our results to those from [R-VTV04]. ###### Corollary 1 For any non-zero vector $D=(\mathcal{W}_{0},\mathcal{W}_{1},\ldots,\mathcal{W}_{n})\in{\mathbb{C}}^{n+1}$ one has that $\sum_{j=1}^{\infty}\frac{1}{j}\sum_{k=0}^{j}\binom{j}{k}\frac{(-1)^{k}a(n,k,D)}{||D||^{2k}}\left(\frac{1}{(n+1)^{k}}-\frac{1}{k!}\right)=\log(n+1)+\gamma,$ (11) where $\gamma$ denotes the Euler constant. It is intriguing that the right-hand side of (11) is independent of $D$. When $n=0$, equation (11) still makes sense and will follow from equation (2.4) of [R-VTV04] once one shows that (11) converges.
As such, it is possible that (11) could be proved directly, at least for some “small” values of $n$. A further discussion of additional identities is given in the concluding section of this article; see, e.g., identity (60). In our proof of Theorem 1 and Theorem 2 we obtain precise rates of convergence of the infinite series involved. Specifically, we obtain the following estimates. ###### Theorem 3 With notation as above, assume $n\geq 3$ and choose any $D\neq 0$. Then there is an explicitly computable constant $G(n,D)$, which depends solely on $n$ and $D$, such that for any $N\geq 1$ we have the bounds $\left|m(P_{D})-E_{1}(N;n,D)\right|\leq\frac{\Gamma(3/4)}{3}\frac{G(n,D)}{N^{3/4}}\quad\text{and}\quad\left|m(P_{D})-E_{2}(N;n,D)\right|\leq 2\sqrt[4]{2}\frac{G(n,D)}{\sqrt{N}},$ (12) where $E_{1}(N;n,D)=\log c(D)-\frac{1}{2}\sum_{j=1}^{N}\frac{1}{j}\sum_{k=0}^{j}\binom{j}{k}\frac{(-1)^{k}a(n,k,D)}{c(D)^{2k}}$ (13) and $E_{2}(N;n,D)=\log c(D)-\frac{1}{2}-\frac{1}{2}\sum_{j=1}^{N}\frac{2j+1}{j(j+1)}\sum_{k=0}^{j}\binom{j+k}{k}\binom{j}{k}\frac{(-1)^{k}a(n,k,D)}{c(D)^{2k}}.$ (14) The bound we derive for $G(n,D)$ will be given in terms of the $J$-Bessel function; see formula (47). However, we note here that the bound for $G(n,D)$ is elementary. In particular, Theorem 3 leads to an explicit computational means by which one can estimate $m(P_{D})$ as accurately as one may wish. We will derive explicit bounds for the tail of the series in (9) for all $\ell\geq 2$ and for all $D\neq r(1,1,\cdots,1)$. We refer the reader to Section 6 for the statements. In the course of the proof it becomes clear why this sole $D$ is singled out; it is the only instance where an application of the Cauchy-Schwarz inequality is an equality rather than a strict inequality. As is evident from equations (12) through (14) and the statements of our main theorems, the approximating sum $E_{1}(N;n,D)$ is somewhat simpler and has a better rate of convergence to $m(P_{D})$ than the sum $E_{2}(N;n,D)$.
However, we find the approximation of $m(P_{D})$ by $E_{2}(N;n,D)$ to be theoretically interesting as well. Indeed, when combining the various expressions for $m(P_{D})$ derived above one has a potential source of combinatorial identities amongst weighted series of sums of binomial and multinomial coefficients. Finally, we would like to emphasize that our main theorem is the first result of which we are aware that gives an explicit expression of the (logarithmic) Mahler measure of a linear form in terms of an absolutely convergent series which involves elementary quantities, such as binomial and multinomial coefficients. The explicit and effective upper bound for the approximation of $m(P_{D})$ by a partial sum of this series provides a tool which can be used in estimating the size of $m(P_{D})$, hence the canonical height of the divisor $\mathcal{D}$ of $P_{D}$; see [Ma00] and the discussion in Section 7.5. ### 1.4 Outline of the proofs In general terms, the analysis of the present paper involves a detailed investigation of the general Kronecker-type limit formula which was proved in [CJS20]. In [CJS20], we considered a general smooth Kähler manifold $X$ with divisor $\mathcal{D}$ which was assumed to be smooth up to codimension two. For this article, we take $X$ to be $n$-dimensional complex projective space ${\mathbb{C}}{\mathbb{P}}^{n}$ for $n\geq 2$ and $\mathcal{D}$ to be a hyperplane, meaning the divisor of a degree one polynomial $P_{D}(z)$. We equip ${\mathbb{C}}{\mathbb{P}}^{n}$ with its natural Fubini-Study metric. With this setup, we employ a representation of the associated Green’s function in terms of the heat kernel; see [HI02]. The spectral expansion of the Green’s function also can be computed explicitly; see [Lu98]. In order to evaluate the Kronecker-type limit function as in [CJS20], we need to integrate the Green’s function on ${\mathbb{C}}{\mathbb{P}}^{n}$ along a hyperplane.
In doing so, the evaluation of such integrals amounts to the Radon transform on projective space, for which we use results from [Gr83]. Ultimately, we are able to express the log-norm of the polynomial $P_{D}$ as an absolutely convergent series of Jacobi polynomials. The evaluation of the Mahler measure then reduces to the problem of evaluating certain integrals involving Jacobi polynomials, which yields the results stated above. The different expressions for the Mahler measure $m(P_{D})$ amount to various identities involving Jacobi polynomials. We wish to emphasize here that our main result holds for all linear polynomials provided $n\geq 3$; all our results, except possibly (9) with $\ell\geq 3$ and $D=r(1,1,\ldots,1)$ for some $r\neq 0$, hold when the divisor of $P_{D}(z)$ intersects the domain of integration in (1). Previous authors such as [Sm81] used techniques of complex analysis to obtain their results. Their computations are important and interesting, but are limited because of the logarithmic-type singularities which naturally occur. From our point of view, such singularities are $L^{2}$, hence can be addressed when using real analytic methods. ### 1.5 Organization of the paper In Section 2 we state additional notation and recall relevant results from the literature. In Section 3 we will recall the general Kronecker-type limit formula from [CJS20], which holds for a reasonably general Kähler manifold $X$, and make the result explicit in the case $X={\mathbb{C}}{\mathbb{P}}^{n}$. In Section 4 we study the results from Section 3 and obtain various expressions for $\log\|P_{D}\|_{\mu}$ where the norm is with respect to the Fubini-Study metric. In Section 5 we derive a change of variables formula which is an important ingredient in the proof of our main result, carried out in Section 6. We conclude with Section 7 where we present several comments regarding the analysis of this article.
## 2 Preliminaries In this section we will introduce the necessary notation and prove some intermediate results related to the representation of the resolvent kernel on ${\mathbb{C}}{\mathbb{P}}^{n}$ and its associated Kronecker limit formula as proved in [CJS20]. ### 2.1 Some special functions For any non-negative integers $\alpha$, $\beta$ and $j$, we let $P_{j}^{(\alpha,\beta)}$ denote the Jacobi polynomial, which is defined for $x\in(-1,1)$ by $P_{j}^{(\alpha,\beta)}(x):=\frac{(-1)^{j}}{2^{j}j!}(1-x)^{-\alpha}(1+x)^{-\beta}\frac{d^{j}}{dx^{j}}\left[(1-x)^{\alpha+j}(1+x)^{\beta+j}\right].$ (15) If $\alpha=\beta=0$, then the Jacobi polynomials specialize to the Legendre polynomials, which are given by $P_{j}(x)=\frac{1}{2^{j}j!}\frac{d^{j}}{dx^{j}}(x^{2}-1)^{j}.$ Many fascinating properties of Jacobi and Legendre polynomials are developed in the classical text [Sz74]. In the present article, we will use the following bound, which we quote from [HS14]. For any $j\geq 1$ and for all $x\in[-1,1]$, we have that $(1-x^{2})^{\tfrac{1}{4}}|P_{j}(x)|\leq\sqrt{4/\pi}(2j+1)^{-\tfrac{1}{2}}.$ (16) This bound is referred to as the sharp form of Bernstein’s inequality for the Legendre polynomials $P_{j}=P_{j}^{(0,0)}$; see Theorem 3.3 of [Sz74], [Lo82/83], or the discussion on page 228 of [HS14]. More generally, we will use the main theorem of [HS14] which gives the uniform upper bound $(1-x^{2})^{\tfrac{1}{4}}\left(\frac{1-x}{2}\right)^{(m-1)/2}|P^{(m-1,0)}_{j}(x)|\leq 12\cdot(2j+m)^{-\tfrac{1}{2}},$ (17) which holds for all positive integers $j$ and $m$ and for $x\in[-1,1]$; see Theorem 1.1 and subsequent discussion on page 228 of [HS14].
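The bound (16) is straightforward to test numerically. The sketch below (illustrative only) generates the Legendre polynomials by the standard three-term recurrence $(j+1)P_{j+1}(x)=(2j+1)xP_{j}(x)-jP_{j-1}(x)$ and checks the inequality on a grid in $[-1,1]$.

```python
import math

def legendre_values(j_max, x):
    """Return [P_0(x), ..., P_{j_max}(x)] via the three-term recurrence
    (j+1) P_{j+1} = (2j+1) x P_j - j P_{j-1}."""
    vals = [1.0, x]
    for j in range(1, j_max):
        vals.append(((2 * j + 1) * x * vals[j] - j * vals[j - 1]) / (j + 1))
    return vals[:j_max + 1]

j_max = 60
grid = [-1 + 2 * k / 2000 for k in range(2001)]
bound_ok = True
for x in grid:
    weight = (1 - x * x) ** 0.25
    vals = legendre_values(j_max, x)
    for j in range(1, j_max + 1):
        lhs = weight * abs(vals[j])
        rhs = math.sqrt(4 / math.pi) / math.sqrt(2 * j + 1)
        # (16): the weighted Legendre value never exceeds the Bernstein bound
        if lhs > rhs * (1 + 1e-12):
            bound_ok = False
assert bound_ok
```

The factor $(1+10^{-12})$ only absorbs floating-point round-off; the inequality itself is strict for finite $j$.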
For complex numbers $\nu$ and $z$ with $|\arg z|<\pi$, the Bessel function of the first kind $J_{\nu}(z)$ is defined by the absolutely convergent power series $J_{\nu}(z):=\frac{z^{\nu}}{2^{\nu}}\sum_{k=0}^{\infty}\frac{(-1)^{k}}{2^{2k}k!\Gamma(\nu+k+1)}z^{2k}.$ For non-integral complex $\nu$ and any complex $z$ with $|\arg z|<\pi$ one defines the Bessel function of the second kind $Y_{\nu}(z)$ by $Y_{\nu}(z):=(\sin\pi\nu)^{-1}(\cos(\pi\nu)J_{\nu}(z)-J_{-\nu}(z))$; when $\nu=n$ is a non-negative integer, then $\displaystyle\pi Y_{n}(z):$ $\displaystyle=2J_{n}(z)\log\left(\frac{z}{2}\right)-\sum_{k=0}^{n-1}\frac{(n-k-1)!}{k!}\left(\frac{z}{2}\right)^{2k-n}$ $\displaystyle-\sum_{k=0}^{\infty}\frac{(-1)^{k}(z/2)^{2k+n}}{k!(k+n)!}\left(\frac{\Gamma^{\prime}}{\Gamma}(k+1)+\frac{\Gamma^{\prime}}{\Gamma}(k+n+1)\right),$ with the convention that the empty sum when $n=0$ is zero. A thorough analysis of Bessel functions and functions associated with them can be found in the seminal book [Wa66]. The article [Kr06] contains very explicit pointwise bounds for Bessel functions. For our purposes, we will use the inequality from 7.31.2 of [Sz74], which states that $|J_{0}(2x)|\leq(\max\\{1,|\pi x|\\})^{-\tfrac{1}{2}}.$ (18) ### 2.2 Complex projective space Let ${\mathbb{C}}{\mathbb{P}}^{n}$ denote the $n$-dimensional complex projective space with the usual projective coordinates $(\mathcal{Z}_{0},\cdots,\mathcal{Z}_{n})$. If $U$ is any open set in ${\mathbb{C}}{\mathbb{P}}^{n}$ and $z:U\rightarrow{\mathbb{C}}^{n+1}\setminus\\{0\\}$ a holomorphic lifting of $U$, that is, a holomorphically varying choice of homogeneous coordinates, then the local Kähler potential is given by $\rho(z)=\log\|z\|^{2}=\log(|\mathcal{Z}_{0}|^{2}+\cdots+|\mathcal{Z}_{n}|^{2})$. The Kähler $(1,1)$ form $\frac{i}{2}\partial_{z}\partial_{\bar{z}}\rho$ will be denoted by $\omega$ and we equip ${\mathbb{C}}{\mathbb{P}}^{n}$ with the Fubini-Study metric $\mu=\mu_{FS}$ associated to $\omega$.
The Fubini-Study distance between two points $z,w\in{\mathbb{C}}{\mathbb{P}}^{n}$ will be denoted by $d_{\mathrm{FS}}(z,w)$. It is given by the formula $\cos(d_{\mathrm{FS}}(z,w))=\frac{|\langle z,w\rangle|}{\sqrt{\langle z,z\rangle}\sqrt{\langle w,w\rangle}}$ where, if $z=(\mathcal{Z}_{0},\dots,\mathcal{Z}_{n})$ and $w=(\mathcal{W}_{0},\dots,\mathcal{W}_{n})$, then $\langle z,w\rangle=z\cdot{{}^{t}\overline{w}}=\mathcal{Z}_{0}\overline{\mathcal{W}_{0}}+\cdots+\mathcal{Z}_{n}\overline{\mathcal{W}_{n}}$. Occasionally, our computations will be on the affine chart where $\mathcal{Z}_{0}\neq 0$, in which case we will consider the affine coordinates $(z_{1},\ldots,z_{n})$. Then the local Kähler potential takes the form $\rho_{0}(z)=\log(1+|z_{1}|^{2}+\ldots+|z_{n}|^{2})$. Let $P_{D}(z)$ denote any homogeneous polynomial with divisor $\mathcal{D}$. We denote by $\|P_{D}(z)\|^{2}_{\mu}$ the squared norm of the polynomial $P_{D}$ with respect to $\mu$; it is given by $\log\|P_{D}(z)\|^{2}_{\mu}=\log|P_{D}(z)|^{2}-\textrm{\rm deg}(P_{D})\rho(z)$ (19) for $z\in{\mathbb{C}}{\mathbb{P}}^{n}\setminus\mathcal{D}$. If $z$ approaches $\mathcal{D}$ transversally, then $\log\|P_{D}(z)\|_{\mu}^{2}$ has a logarithmic singularity which is $L^{1}$ integrable. For the sake of brevity, we may omit the subscript $\mu$. Let $\Delta_{{\mathbb{C}}{\mathbb{P}}^{n}}$ denote the corresponding Laplacian, which acts on smooth functions on ${\mathbb{C}}{\mathbb{P}}^{n}$. An eigenfunction of the Laplacian $\Delta_{{\mathbb{C}}{\mathbb{P}}^{n}}$ is an a priori $C^{2}$ function $\psi_{j}$ which satisfies the equation $\Delta_{{\mathbb{C}}{\mathbb{P}}^{n}}\psi_{j}+\lambda_{j}\psi_{j}=0$ for some constant $\lambda_{j}$, which is the eigenvalue associated to $\psi_{j}$. The spectrum $\\{\lambda_{j}\\}_{j\geq 0}$ of $\Delta_{{\mathbb{C}}{\mathbb{P}}^{n}}$ is well known; see, for example, [BGM71] or [Lu98].
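Note that formula (19) above is independent of the choice of homogeneous coordinates: rescaling $z\mapsto\lambda z$ multiplies $|P_{D}(z)|^{2}$ by $|\lambda|^{2\deg P_{D}}$ and adds $\deg(P_{D})\log|\lambda|^{2}$ to $\deg(P_{D})\rho(z)$, so the two changes cancel. A quick numerical check of this invariance (with an arbitrarily chosen linear form, used purely for illustration):

```python
import math

def log_norm_sq(P, z, deg):
    """log ||P(z)||^2 in the sense of (19): log|P(z)|^2 - deg * rho(z),
    where rho(z) = log(|Z_0|^2 + ... + |Z_n|^2)."""
    rho = math.log(sum(abs(c) ** 2 for c in z))
    return math.log(abs(P(z)) ** 2) - deg * rho

# an arbitrarily chosen linear form P_D(z) = Z_0 + 2 Z_1 - i Z_2
P = lambda z: z[0] + 2 * z[1] - 1j * z[2]

z = (1.0 + 0.5j, -0.3 + 2.0j, 0.7 - 0.1j)
lam = 2.5 - 1.2j
z_scaled = tuple(lam * c for c in z)

# the log-norm (19) agrees on z and lambda * z
assert abs(log_norm_sq(P, z, 1) - log_norm_sq(P, z_scaled, 1)) < 1e-10
```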
Classically, $\lambda_{0}=0$ where the eigenfunction is the appropriately normalized positive constant function. Let $\mathrm{vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})$ denote the volume of ${\mathbb{C}}{\mathbb{P}}^{n}$, meaning the integral over ${\mathbb{C}}{\mathbb{P}}^{n}$ of the volume form $\mu^{n}$. In our normalizations, we have that $\mathrm{vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})=\frac{\pi^{n}}{n!}.$ Additionally, we have that $\lambda_{j}=4j(j+n)$ for all $j\geq 1$. Let $H_{j,j}(n+1)$ be the vector space of eigenfunctions with eigenvalue $\lambda_{j}$. Then the dimension $N_{j}$ of $H_{j,j}(n+1)$ is $N_{j}=\binom{n+j}{j}^{2}-\binom{n+j-1}{j-1}^{2}=\frac{(n+2j)((n+j-1)!)^{2}}{n!(n-1)!(j!)^{2}}.$ Moreover, as discussed in section 1 of [Gr83], the Hilbert space $L^{2}({\mathbb{C}}{\mathbb{P}}^{n})$ of all square integrable functions on ${\mathbb{C}}{\mathbb{P}}^{n}$, with respect to the volume form $\mu^{n}$, has the orthogonal decomposition $L^{2}({\mathbb{C}}{\mathbb{P}}^{n})=\bigoplus_{j=0}^{\infty}H_{j,j}(n+1)$ into finite dimensional subspaces $H_{j,j}(n+1)$ consisting of eigenfunctions of the Laplacian with the corresponding eigenvalue $4j(j+n)$. Each subspace $H_{j,j}(n+1)$ is an irreducible representation of the unitary group $\mathbf{U}(n+1)$, and these representations are pairwise distinct. More precisely, elements of $H_{j,j}(n+1)$ are homogeneous harmonic polynomials of degree $j$ in the variables $\mathcal{Z}_{0},...,\mathcal{Z}_{n}$ and $\overline{\mathcal{Z}}_{0},...,\overline{\mathcal{Z}}_{n}$. As is standard, we may assume that the coefficients of those harmonic polynomials are real, so that any eigenfunction evaluated at real values of its variables is itself real-valued. ### 2.3 The Radon transform Let $f$ be a continuous function on ${\mathbb{C}}{\mathbb{P}}^{n}$, and let $H$ be any hyperplane in ${\mathbb{C}}{\mathbb{P}}^{n}$.
The Radon transform of $f$ evaluated at $H$, which we denote by $Rf(H)$, is defined by $Rf(H)=\int\limits_{H}f(w)\mu_{H}(w)$ where $\mu_{H}(w)$ is the Fubini-Study volume element induced on $H$ from the Fubini-Study metric on ${\mathbb{C}}{\mathbb{P}}^{n}$. Denote the Grassmannian of hyperplanes in ${\mathbb{C}}{\mathbb{P}}^{n}$ by $({\mathbb{C}}{\mathbb{P}}^{n})^{\ast}$. Recall that $({\mathbb{C}}{\mathbb{P}}^{n})^{\ast}$ is non-canonically isomorphic to ${\mathbb{C}}{\mathbb{P}}^{n}$. Let us make a choice regarding this isomorphism. Quite simply, the point $(\mathcal{W}_{0},\mathcal{W}_{1},\ldots,\mathcal{W}_{n})\in{\mathbb{C}}{\mathbb{P}}^{n}$ is identified with the hyperplane $\\{(\mathcal{Z}_{0},\mathcal{Z}_{1},\ldots,\mathcal{Z}_{n})\in{\mathbb{C}}{\mathbb{P}}^{n}:\mathcal{Z}_{0}\mathcal{W}_{0}+\mathcal{Z}_{1}\mathcal{W}_{1}+\ldots+\mathcal{Z}_{n}\mathcal{W}_{n}=0\\}.$ As such, we can view the Radon transform $Rf(H)$ of $f$ as a function on ${\mathbb{C}}{\mathbb{P}}^{n}$. As proved in [Gr83], by Schur’s Lemma the Radon transform $R$ acts on $H_{j,j}(n+1)$ by scalar multiplication by $c(j,n)=c_{n}\cdot\frac{(-1)^{j}j!}{(j+n-1)!}$ where $c_{n}$ is a certain normalizing factor depending solely on the dimension $n$. In our setting, the normalizing factor can be easily computed by evaluating the Radon transform of the $L^{2}$-normalized constant eigenfunction $\psi_{0}(w)=\frac{1}{\sqrt{\mathrm{vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}}$ and taking $H$ to be the (affine) hyperplane $z_{1}=0$. In this case $c(0,n)=c_{n}\cdot\frac{1}{(n-1)!}$, so then $\int\limits_{H}\frac{1}{\sqrt{\mathrm{vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}}\mu_{H}(w)=c_{n}\cdot\frac{1}{(n-1)!}\cdot\frac{1}{\sqrt{\mathrm{vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}}.$ Therefore, $c_{n}=(n-1)!\cdot\mathrm{vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n-1})=\pi^{n-1}$. 
If $\psi_{j}\in H_{j,j}(n+1)$, on which $R$ acts by the scalar $c(j,n)$, we simply have $R\psi_{j}(H)=c(j,n)\psi_{j}(H)$ where we identify $({\mathbb{C}}{\mathbb{P}}^{n})^{*}$ with ${\mathbb{C}}{\mathbb{P}}^{n}$ as above. In summary, we have the following formula. With the notation and normalizations set above, for any hyperplane $H$ in ${\mathbb{C}}{\mathbb{P}}^{n}$ and any eigenfunction $\psi_{j}\in H_{j,j}(n+1)$, one has that $\int\limits_{H}\psi_{j}(w)\mu_{H}(w)=\pi^{n-1}\cdot\frac{(-1)^{j}j!}{(j+n-1)!}\psi_{j}(H).$ (20) Again, it is necessary to note that we have chosen an identification between the hyperplane $H$, which is a point in $({\mathbb{C}}{\mathbb{P}}^{n})^{\ast}$, and a point in ${\mathbb{C}}{\mathbb{P}}^{n}$, which through a slight abuse of notation we also write as $H$. ### 2.4 Heat kernel and Green’s function The heat kernel $K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)$ associated to the Laplacian $\Delta_{{\mathbb{C}}{\mathbb{P}}^{n}}$ on ${\mathbb{C}}{\mathbb{P}}^{n}$ is the unique solution to the heat equation $\frac{\partial}{\partial t}K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)=\Delta_{{\mathbb{C}}{\mathbb{P}}^{n}}K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)\,\,\,\,\,\textrm{\rm\,\,\, for \,\,\, $t>0$ \,\,\, and \,\,\,$z,w\in{\mathbb{C}}{\mathbb{P}}^{n}$}$ such that for any continuous function $f$ on ${\mathbb{C}}{\mathbb{P}}^{n}$, one has $\lim\limits_{t\rightarrow 0}\int\limits_{{\mathbb{C}}{\mathbb{P}}^{n}}K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)f(w)\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(w)=f(z).$ As can be shown, $K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)$ depends only on $t>0$ and the Fubini-Study distance between $z$ and $w$; see, for example, [Lu98]. The heat kernel admits a spectral expansion in terms of the eigenfunctions $\psi_{j}\in H_{j,j}(n+1)$. 
Namely, one has the formula that $K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)=\sum_{\lambda_{j}\geq 0}\psi_{j}(z)\overline{\psi_{j}(w)}e^{-\lambda_{j}t}=\frac{1}{\mathrm{vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}+\sum_{\lambda_{j}>0}\psi_{j}(z)\overline{\psi_{j}(w)}e^{-\lambda_{j}t}.$ (21) In [HI02] it is proved that $K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)=\frac{1}{\mathrm{vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}+\frac{1}{\pi^{n}}\sum_{j=1}^{\infty}(2j+n)\frac{(j+n-1)!}{j!}P_{j}^{(n-1,0)}(\cos(2r))e^{-4j(j+n)t},$ (22) where $r=d_{\textrm{FS}}(z,w)$ is the Fubini-Study distance and $P_{j}^{(\alpha,\beta)}$ is the Jacobi polynomial defined in (15). Actually, the transition from (21) to (22) is based on stronger results. Indeed, it is proved that $\sum_{\lambda_{j}=4j(n+j)}\psi_{j}(z)\overline{\psi_{j}(w)}=\frac{(2j+n)}{\pi^{n}}\frac{(j+n-1)!}{j!}P_{j}^{(n-1,0)}(\cos(2r));$ (23) see Theorem 1 of [Lu98] as well as Theorem 1 and the preceding discussion in [HI02], keeping in mind the notation conventions employed in the present article. The Green’s function $G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;s)$ on ${\mathbb{C}}{\mathbb{P}}^{n}$ is the integral kernel of the right inverse to the operator $\Delta_{{\mathbb{C}}{\mathbb{P}}^{n}}+s(1-s)$ on $L^{2}({\mathbb{C}}{\mathbb{P}}^{n})$ for $s\in{\mathbb{C}}$. In order for such an inverse to exist, it is necessary to assume that $s(1-s)$ is not equal to an eigenvalue of $\Delta_{{\mathbb{C}}{\mathbb{P}}^{n}}$. 
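The two expressions for $N_{j}$, and their compatibility with (23), can be checked in exact arithmetic. Setting $z=w$ in (23) and integrating over ${\mathbb{C}}{\mathbb{P}}^{n}$ (the $\psi_{j}$ are $L^{2}$-normalized) shows that $\mathrm{vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})=\pi^{n}/n!$ times the right-hand side of (23) at $r=0$ must equal $N_{j}$; using the standard value $P_{j}^{(n-1,0)}(1)=\binom{j+n-1}{j}$, the powers of $\pi$ cancel. A quick sketch:

```python
from math import comb, factorial
from fractions import Fraction

def N_binom(n, j):
    # N_j as a difference of squared binomial coefficients
    return comb(n + j, j) ** 2 - comb(n + j - 1, j - 1) ** 2

def N_closed(n, j):
    # N_j via the closed-form ratio of factorials
    return Fraction((n + 2 * j) * factorial(n + j - 1) ** 2,
                    factorial(n) * factorial(n - 1) * factorial(j) ** 2)

def trace_coeff(n, j):
    # vol * (2j+n)/pi^n * (j+n-1)!/j! * P_j^{(n-1,0)}(1), with vol = pi^n/n!
    # and P_j^{(n-1,0)}(1) = C(j+n-1, j); the powers of pi cancel exactly
    return Fraction(2 * j + n, factorial(n)) \
        * Fraction(factorial(j + n - 1), factorial(j)) * comb(j + n - 1, j)

ok = all(N_binom(n, j) == N_closed(n, j) == trace_coeff(n, j)
         for n in range(1, 7) for j in range(1, 10))
print(ok)  # True
```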
As discussed in Section 5 of [CJS20] and references therein, the Green’s function $G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;s)$ and the heat kernel $K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)$ are related by the formula $G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;s)-\frac{1}{\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}\frac{1}{s^{2}}=\int\limits_{0}^{\infty}\left(K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)-\frac{1}{\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}\right)e^{-s^{2}t}dt.$ (24) The identity (24) holds for $s\in{\mathbb{C}}$ with ${\mathrm{Re}}(s^{2})>0$ and for all distinct points $z,w\in{\mathbb{C}}{\mathbb{P}}^{n}$. However, one can use the spectral expansion of the heat kernel in order to obtain a meromorphic continuation of (24) to all $s\in{\mathbb{C}}$. ## 3 A Kronecker limit formula The following result from [CJS20] is an analogue of the classical Kronecker’s limit formula, which is stated here in the setting of projective space. ###### Theorem 4 Let $\mathcal{D}$ be the divisor of a polynomial $P_{D}$ on ${\mathbb{C}}{\mathbb{P}}^{n}$, and assume that $\mathcal{D}$ is smooth up to codimension two in ${\mathbb{C}}{\mathbb{P}}^{n}$. Then there exist constants $c_{0}$ and $c_{1}$ such that for $z\notin\mathcal{D}$ we have that $\int\limits_{\mathcal{D}}G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;s)\mu_{\mathcal{D}}(w)=\frac{\textrm{\rm vol}_{\mu}(\mathcal{D})}{\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}\frac{1}{s^{2}}+c_{0}\log\|P_{D}(z)\|^{2}_{\mu}+c_{1}+O(s)\,\,\,\,\,\textrm{as $s\rightarrow 0$.}$ (25) Let us now consider the case when $P_{D}$ is a linear polynomial in $n+1$ projective coordinates $(\mathcal{Z}_{0},\ldots,\mathcal{Z}_{n})$ of ${\mathbb{C}}{\mathbb{P}}^{n}$. With this, Theorem 4 becomes the following result. 
###### Proposition 1 Let $P_{D}(z)=P_{D}(\mathcal{Z}_{0},\ldots,\mathcal{Z}_{n})=\mathcal{W}_{0}\mathcal{Z}_{0}+\mathcal{W}_{1}\mathcal{Z}_{1}+...+\mathcal{W}_{n}\mathcal{Z}_{n}$ be a linear polynomial in the $n+1$ complex projective coordinate variables, with divisor $\mathcal{D}$. Let $H_{n}$ denote the $n$-th harmonic number. Then, for $z\notin\mathcal{D}$ we have that $\int\limits_{\mathcal{D}}G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;s)\mu_{\mathcal{D}}(w)=\frac{\textrm{\rm vol}_{\mu}(\mathcal{D})}{\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}\frac{1}{s^{2}}-\frac{1}{4\pi}\log|P_{D}(z)|^{2}+\frac{1}{4\pi}\left(\log||D||^{2}+\rho(z)-H_{n}\right)+O(s)\,\,\,\,\,\textrm{as $s\rightarrow 0$.}$ (26) Proof: Set $G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w)=\lim_{s\to 0}\left(G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;s)-\frac{1}{\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}\frac{1}{s^{2}}\right).$ Then $G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w)$ is the integral kernel which inverts the action of the Laplacian when restricted to the subspace of $L^{2}({\mathbb{C}}{\mathbb{P}}^{n})$ that is orthogonal to the constant functions. In the limit as $s\to 0$, formula (25) in Theorem 4 can be written as $\int\limits_{\mathcal{D}}G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w)\mu_{\mathcal{D}}(w)=c_{0}\log\|P_{D}(z)\|^{2}_{\mu}+c_{1}.$ If we now integrate over ${\mathbb{C}}{\mathbb{P}}^{n}$ with respect to $\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(z)$, the left-hand side vanishes, since $G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w)$ integrates to zero in $z$; hence $0=c_{0}\int\limits_{{\mathbb{C}}{\mathbb{P}}^{n}}\log\|P_{D}(z)\|^{2}_{\mu}\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(z)+c_{1}\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n}).$ In our normalizations, we have that $c_{0}=-1/4\pi$; see, for example, page 94 of [Fo76] and page 10 of [La88] as well as page 338 of [JK98]. 
As a result, we have $c_{1}=\frac{1}{4\pi}\cdot\frac{1}{\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}\int\limits_{{\mathbb{C}}{\mathbb{P}}^{n}}\log\|P_{D}(z)\|^{2}_{\mu}\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(z).$ (27) It remains to evaluate the integral on the right-hand side of (27). Set $D=(\mathcal{W}_{0},\mathcal{W}_{1},\ldots,\mathcal{W}_{n})$, so then $P_{D}(z)=D\cdot{{}^{t}z}$. One can choose an element $\gamma\in\mathbf{U}(n+1)$ so that $D\gamma=(c,0,\ldots,0)$. Through a rescaling of $\gamma$ by a complex number of absolute value one, we may assume that $c=\|D\|.$ Let $z=\tilde{z}{{}^{t}{\gamma}}$, where $\tilde{z}=(\tilde{\mathcal{Z}}_{0},\tilde{\mathcal{Z}}_{1},\ldots,\tilde{\mathcal{Z}}_{n})$. On the affine chart $\tilde{\mathcal{Z}}_{0}\neq 0$, the polynomial $P_{D}(z)$ is simply $P_{D}(z)=P_{D}(\tilde{z}{{}^{t}\gamma})=D\gamma{{}^{t}\tilde{z}}=c\tilde{\mathcal{Z}}_{0}$. By the $\mathbf{U}(n+1)$ invariance of the Fubini-Study volume form $\int\limits_{{\mathbb{C}}{\mathbb{P}}^{n}}\log\|P_{D}(z)\|^{2}_{\mu}\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(z)=\int\limits_{{\mathbb{C}}{\mathbb{P}}^{n}}\log\|P_{D}(\tilde{z}{{}^{t}{\gamma}})\|^{2}_{\mu}\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(\tilde{z}).$ By (19) we have $\log\|P_{D}(\tilde{z}{{}^{t}\gamma})\|^{2}_{\mu}=\log|P_{D}(\tilde{z}{{}^{t}\gamma})|^{2}-\textrm{\rm deg}(P_{D})\rho(\tilde{z}{{}^{t}\gamma}).$ Here we have $\log|P_{D}(\tilde{z}{{}^{t}\gamma})|^{2}=\log|c\tilde{\mathcal{Z}}_{0}|^{2}=\log(c^{2})+\log|\tilde{\mathcal{Z}}_{0}|^{2}$, $\deg(P_{D})=1$, and $\rho(\tilde{z}{{}^{t}\gamma})=\log\|\tilde{z}{{}^{t}\gamma}\|^{2}=\log\|\tilde{z}\|^{2}=\log(|\tilde{\mathcal{Z}}_{0}|^{2}+\cdots+|\tilde{\mathcal{Z}}_{n}|^{2})$. 
Therefore $\log\|P_{D}(\tilde{z}{{}^{t}\gamma})\|^{2}_{\mu}=\log(c^{2})+\log|\tilde{\mathcal{Z}}_{0}|^{2}-\log(|\tilde{\mathcal{Z}}_{0}|^{2}+\cdots+|\tilde{\mathcal{Z}}_{n}|^{2})=\log\|D\|^{2}-\rho_{0}(\tilde{z})$ and $\int\limits_{{\mathbb{C}}{\mathbb{P}}^{n}}\log\|P_{D}(z)\|^{2}_{\mu}\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(z)=\log\|D\|^{2}\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})-\int\limits_{{\mathbb{C}}{\mathbb{P}}^{n}}\rho_{0}(\tilde{z})\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(\tilde{z}).$ (28) The last integral can be evaluated using polar coordinates on the affine chart $\tilde{\mathcal{Z}}_{0}\neq 0$, which can be viewed as ${\mathbb{C}}^{n}$. Indeed, if we let $S^{2n-1}$ denote the unit sphere in ${\mathbb{C}}^{n}$, then, substituting $t=1+\rho^{2}$, $\int\limits_{{\mathbb{C}}{\mathbb{P}}^{n}}\rho_{0}(\tilde{z})\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(\tilde{z})=\text{\rm vol}(S^{2n-1})\int\limits_{0}^{\infty}\log(1+\rho^{2})\frac{\rho^{2n-1}}{(\rho^{2}+1)^{n+1}}d\rho=n\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})\int\limits_{1}^{\infty}\log(t)\frac{(t-1)^{n-1}}{t^{n+1}}dt.$ The last integral can be evaluated by using formula 4.253.3 from [GR07] with $u=1$, $\mu=n$ and $\lambda=n+1$. With this, we obtain that $\int\limits_{{\mathbb{C}}{\mathbb{P}}^{n}}\rho_{0}(\tilde{z})\mu_{{\mathbb{C}}{\mathbb{P}}^{n}}(\tilde{z})=\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})\left(\frac{\Gamma^{\prime}}{\Gamma}(n+1)-\frac{\Gamma^{\prime}}{\Gamma}(1)\right)=\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})H_{n}.$ Combining this evaluation with (28) and (27) we arrive at the formula that $c_{1}=\frac{1}{4\pi}\left(\log||D||^{2}-H_{n}\right).$ By substituting this expression together with $c_{0}=-1/(4\pi)$ into (25), and employing (19), we have completed the proof of (26). 
$\square$ ###### Remark 1 An anonymous referee informed us that a different approach may be undertaken to derive the calculations in this section by relating $m(P_{D})$ to a computation of integrals of Green’s functions for the divisor $\mathcal{D}$; see Section 7.5 for further discussion. With this approach, the computation of the constant $c_{1}$ in the above proposition is related to the proof of Proposition 5.3 from [GS90]. ## 4 The $\log$-norm of a linear polynomial In this section we will further refine the results of Theorem 4 and Proposition 1 by expressing the integral of the Green’s function in terms of a certain series involving Jacobi polynomials. The convergence issues will be addressed by appealing to the bounds (16) and (17). ###### Proposition 2 With the above notation, assume that $\mathrm{Re}(s^{2})>0$ and $z\notin\mathcal{D}\cup\\{D\\}$. Then, we have the relation $\int\limits_{\mathcal{D}}\left(G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;s)-\frac{1}{\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}\frac{1}{s^{2}}\right)\mu_{\mathcal{D}}(w)=\frac{1}{\pi}\sum_{j=1}^{\infty}\frac{(2j+n)(-1)^{j}}{s^{2}+4j(j+n)}P_{j}^{(n-1,0)}(\cos(2d_{\textrm{FS}}(z,{D}))).$ (29) The series in (29) converges uniformly and absolutely when $z$ and $s$ lie in compact subsets of the above specified regions. Proof: We start with the expression (24) for the Green’s function in terms of the heat kernel $K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)$ on ${\mathbb{C}}{\mathbb{P}}^{n}$ for $\mathrm{Re}(s^{2})>0$. 
Using the equations (21) and (22) the function under the integral sign in (24) can be written as $\displaystyle K_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;t)-\frac{1}{\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}$ $\displaystyle=\sum_{\lambda_{j}>0}\psi_{j}(z)\overline{\psi_{j}(w)}e^{-\lambda_{j}t}$ $\displaystyle=\frac{1}{\pi^{n}}\sum_{j=1}^{\infty}(2j+n)\frac{(j+n-1)!}{j!}P_{j}^{(n-1,0)}(\cos(2r))e^{-4j(j+n)t},$ where we have set $r=d_{\textrm{FS}}(z,w)$. The above series is uniformly bounded provided the distance between $z$ and $w$ is bounded away from zero, which is satisfied for $z\notin\mathcal{D}$ since we will consider $w\in\mathcal{D}$. Therefore, we can apply the Fubini-Tonelli theorem to get that $\int\limits_{\mathcal{D}}\left(G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w;s)-\frac{1}{\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})}\frac{1}{s^{2}}\right)\mu_{\mathcal{D}}(w)=\int\limits_{0}^{\infty}\int\limits_{\mathcal{D}}\left(\sum_{\lambda_{j}>0}\psi_{j}(z)\overline{\psi_{j}(w)}e^{-\lambda_{j}t}\right)\mu_{\mathcal{D}}(w)e^{-s^{2}t}dt.$ As stated above, we may assume that the eigenfunctions $\psi_{j}$ are homogeneous polynomials with real coefficients, so then $\int\limits_{\mathcal{D}}\overline{\psi_{j}(w)}\mu_{\mathcal{D}}(w)=\int\limits_{\mathcal{D}}\psi_{j}(\overline{w})\mu_{\mathcal{D}}(w)=\int\limits_{\overline{\mathcal{D}}}\psi_{j}(w)\mu_{\overline{\mathcal{D}}}(w).$ In other words, integrating $\overline{\psi_{j}(w)}$ over $\mathcal{D}$ amounts to taking the Radon transform of the function $\psi_{j}$ which belongs to the subspace $H_{j,j}(n+1)$, so the integral equals the integral of $\psi_{j}(w)$ over the conjugate hyperplane $\overline{\mathcal{D}}$. On the other hand, under our isomorphism $({\mathbb{C}}{\mathbb{P}}^{n})^{*}\simeq{\mathbb{C}}{\mathbb{P}}^{n}$ we have that $\overline{\mathcal{D}}$ corresponds to $\overline{D}$. Let us make this precise. 
$\overline{\mathcal{D}}$ is the divisor $\overline{\mathcal{D}}=\\{\overline{v}=(\overline{\mathcal{V}}_{0},\dots,\overline{\mathcal{V}}_{n})\mid P_{D}(\overline{v})=0\\}$. Now $P_{D}(\overline{v})=D\cdot{{}^{t}\overline{v}}$, so $P_{D}(\overline{v})=0$ is equivalent to $D\cdot{{}^{t}\overline{v}}=0$, or equivalently $\overline{D}\cdot{{}^{t}v}=0$. So under the isomorphism $({\mathbb{C}}{\mathbb{P}}^{n})^{*}\simeq{\mathbb{C}}{\mathbb{P}}^{n}$ we have that $\overline{\mathcal{D}}$ corresponds to $\overline{D}$. For fixed and positive $t$, we can use equation (20) as applied to the hyperplane $\overline{\mathcal{D}}$ combined with the equation (23) to get the formula $\displaystyle\int\limits_{\mathcal{D}}\sum_{\lambda_{j}>0}\psi_{j}(z)\overline{\psi_{j}(w)}e^{-\lambda_{j}t}\mu_{\mathcal{D}}(w)=\sum_{j=1}^{\infty}\frac{\pi^{n-1}(-1)^{j}j!}{(j+n-1)!}\sum_{\lambda_{j}=4j(n+j)}\psi_{j}(z)\psi_{j}(\overline{\mathcal{D}})e^{-\lambda_{j}t}$ $\displaystyle=\sum_{j=1}^{\infty}\frac{\pi^{n-1}(-1)^{j}j!}{(j+n-1)!}\sum_{\lambda_{j}=4j(n+j)}\psi_{j}(z)\psi_{j}(\overline{D})e^{-\lambda_{j}t}$ $\displaystyle=\sum_{j=1}^{\infty}\frac{\pi^{n-1}(-1)^{j}j!}{(j+n-1)!}\sum_{\lambda_{j}=4j(n+j)}\psi_{j}(z)\overline{\psi_{j}({D})}e^{-\lambda_{j}t}$ $\displaystyle=\sum_{j=1}^{\infty}\frac{\pi^{n-1}(-1)^{j}j!}{(j+n-1)!}\frac{(2j+n)}{\pi^{n}}\frac{(j+n-1)!}{j!}P_{j}^{(n-1,0)}(\cos(2d_{\textrm{FS}}(z,D)))e^{-\lambda_{j}t}$ $\displaystyle=\frac{1}{\pi}\sum_{j=1}^{\infty}(2j+n)(-1)^{j}P_{j}^{(n-1,0)}(\cos(2d_{\textrm{FS}}(z,{D})))e^{-4j(j+n)t}.$ (30) As described above, the eigenfunctions are appropriately scaled so that $\psi_{j}(\overline{D})=\overline{\psi_{j}(D)}$. Now, multiply (30) by $e^{-s^{2}t}$ and then integrate with respect to $t$ for $t>0$. 
After interchanging the sum and the integral over $t$, we deduce that $\int\limits_{0}^{\infty}\int\limits_{\mathcal{D}}\left(\sum_{\lambda_{j}>0}\psi_{j}(z)\overline{\psi_{j}(w)}e^{-\lambda_{j}t}\right)\mu_{\mathcal{D}}(w)e^{-s^{2}t}dt=\frac{1}{\pi}\sum_{j=1}^{\infty}\frac{(2j+n)(-1)^{j}}{s^{2}+4j(j+n)}P_{j}^{(n-1,0)}(\cos(2d_{\textrm{FS}}(z,{D}))).$ The absolute convergence of each series in the above discussion is confirmed using the bound (17). With all this, the proof of (29) is complete. $\square$ By combining Proposition 1 and Proposition 2 we arrive at the identity $\frac{1}{\pi}\sum_{j=1}^{\infty}\frac{(2j+n)(-1)^{j}}{s^{2}+4j(j+n)}P_{j}^{(n-1,0)}(\cos(2d_{\textrm{FS}}(z,{D})))=-\frac{1}{4\pi}\log|P_{D}(z)|^{2}+\frac{1}{4\pi}\left(\log||D||^{2}+\rho(z)-H_{n}\right)+O(s)\,\,\,\,\,\textrm{\rm as $s\rightarrow 0$ with ${\mathrm{Re}}(s^{2})>0$.}$ The series on the left-hand side of the above equation is a holomorphic function of $s$ in the region ${\mathrm{Re}}(s^{2})>0$ which can be analytically continued to $s=0$, since the integral on the left-hand side of the equation (29) is analytic at $s=0$. The uniqueness of the Taylor series representation of an analytic function implies the identity $\log|P_{D}(z)|^{2}=\log||D||^{2}+\rho(z)-H_{n}-\sum_{j=1}^{\infty}\frac{(2j+n)(-1)^{j}}{j(j+n)}P_{j}^{(n-1,0)}(\cos(2d_{\textrm{FS}}(z,{D}))).$ (31) The bound (17) immediately implies that the series in (31) converges uniformly and absolutely for $z$ in compact subsets of ${\mathbb{C}}{\mathbb{P}}^{n}\setminus(\mathcal{D}\cup\\{D\\})$. The next proposition considers the sum of Jacobi polynomials. 
###### Proposition 3 With the notation as above, for $r=d_{\textrm{FS}}(z,{D})\neq 0$ and any positive integer $\ell$, one has that $\displaystyle\sum_{j=1}^{\infty}\frac{(2j+\ell)(-1)^{j}}{j(j+\ell)}P_{j}^{(\ell-1,0)}(\cos(2r))+H_{\ell}$ $\displaystyle=-\frac{d}{d\nu}P_{\nu}(\cos(2r))\Big{|}_{\nu=0}$ $\displaystyle=-\frac{d}{d\nu}F(-\nu,\nu+1;1;1-\cos^{2}r)\Big{|}_{\nu=0}$ $\displaystyle=-\int\limits_{0}^{\infty}\left(\pi Y_{1}(x)+\frac{2}{x}J_{0}(x)\right)J_{0}(x\sin r)dx,$ where $P_{\nu}(x)$ is the Legendre function, $F(-\nu,\nu+1;1;1-\cos^{2}r)$ is the classical hypergeometric function and $Y_{1}$ and $J_{0}$ are the Bessel functions. Proof: By starting with formula 8.961.8 from [GR07] we see that for any $\ell\geq 2$ and $x\in[-1,1)$ $\displaystyle\sum_{j=1}^{\infty}(-1)^{j}\frac{2j+\ell}{j(j+\ell)}P_{j}^{(\ell-1,0)}(x)$ $\displaystyle=\sum_{j=1}^{\infty}\frac{(-1)^{j}}{j}P_{j}^{(\ell,0)}(x)-\sum_{j=1}^{\infty}\frac{(-1)^{j}}{(j+\ell)}P_{j-1}^{(\ell,0)}(x)$ $\displaystyle=\sum_{j=1}^{\infty}(-1)^{j}\frac{2j+\ell+1}{j(j+\ell+1)}P_{j}^{(\ell,0)}(x)+\frac{1}{\ell+1}.$ By replacing $\ell$ with $\ell-1$, we get that $\sum_{j=1}^{\infty}(-1)^{j}\frac{2j+\ell}{j(j+\ell)}P_{j}^{(\ell-1,0)}(x)=\sum_{j=1}^{\infty}(-1)^{j}\frac{2j+\ell-1}{j(j+\ell-1)}P_{j}^{(\ell-2,0)}(x)-\frac{1}{\ell}.$ (32) Let us now repeat this process $\ell-1$ times, after which we get for any positive integer $\ell$ and $r\neq 0$ the formula $\sum_{j=1}^{\infty}(-1)^{j}\frac{2j+\ell}{j(j+\ell)}P_{j}^{(\ell-1,0)}(\cos(2r))+H_{\ell}=\sum_{j=1}^{\infty}(-1)^{j}\frac{2j+1}{j(j+1)}P_{j}(\cos(2r))+1,$ (33) where $P_{n}$ denotes the Legendre polynomial. 
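The first equality in the chain above rests on a contiguous relation for Jacobi polynomials; in the parameters used here it amounts to $(2j+\ell)P_{j}^{(\ell-1,0)}(x)=(j+\ell)P_{j}^{(\ell,0)}(x)-jP_{j-1}^{(\ell,0)}(x)$, which combined with $\frac{2j+\ell}{j(j+\ell)}=\frac{1}{j}+\frac{1}{j+\ell}$ yields the displayed splitting. Assuming this form of the relation, it can be verified in exact rational arithmetic via the terminating binomial sum $P_{j}^{(a,b)}(x)=\sum_{s=0}^{j}\binom{j+a}{j-s}\binom{j+b}{s}\left(\frac{x-1}{2}\right)^{s}\left(\frac{x+1}{2}\right)^{j-s}$:

```python
from fractions import Fraction
from math import comb

def jacobi(j, a, b, x):
    # P_j^{(a,b)}(x) via the terminating binomial sum (exact for rational x)
    return sum(comb(j + a, j - s) * comb(j + b, s)
               * ((x - 1) / 2) ** s * ((x + 1) / 2) ** (j - s)
               for s in range(j + 1))

x = Fraction(3, 7)  # an arbitrary rational test point in (-1, 1)
ok = all((2 * j + l) * jacobi(j, l - 1, 0, x)
         == (j + l) * jacobi(j, l, 0, x) - j * jacobi(j - 1, l, 0, x)
         for l in range(1, 6) for j in range(1, 12))
print(ok)  # True
```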
From formula 8.793 of [GR07] we have, for any $\nu\notin\mathbb{Z}$ and $x\in(-1,1]$, the expression $\sum\limits_{k=0}^{\infty}(-1)^{k}\left(\frac{1}{\nu-k}-\frac{1}{k+\nu+1}\right)P_{k}(x)=\frac{\pi P_{\nu}(x)}{\sin(\nu\pi)}.$ (34) From (34), we can write $\sum\limits_{k=1}^{\infty}(-1)^{k}\left(\frac{1}{\nu-k}-\frac{1}{k+\nu+1}\right)P_{k}(x)=\frac{\pi P_{\nu}(x)}{\sin(\nu\pi)}-\left(\frac{1}{\nu}-\frac{1}{\nu+1}\right).$ By letting $\nu$ approach zero, one obtains $\sum\limits_{k=1}^{\infty}(-1)^{k}\left(\frac{1}{-k}-\frac{1}{k+1}\right)P_{k}(x)=\frac{d}{d\nu}P_{\nu}(x)\Big{|}_{\nu=0}+1,$ or $\sum\limits_{k=1}^{\infty}(-1)^{k}\left(\frac{1}{k}+\frac{1}{k+1}\right)P_{k}(x)=-1-\frac{d}{d\nu}P_{\nu}(x)\Big{|}_{\nu=0}.$ By letting $x=\cos(2r)$, where $r=d_{\textrm{FS}}(z,{D})$, and combining the last equation with (33) we arrive at the identity $\sum_{j=1}^{\infty}(-1)^{j}\frac{2j+\ell}{j(j+\ell)}P_{j}^{(\ell-1,0)}(\cos(2r))+H_{\ell}=-\frac{d}{d\nu}P_{\nu}(\cos(2r))\Big{|}_{\nu=0},$ which holds for any $\ell\geq 1$ and $r\neq 0$. With this, we have proved the first equality which was claimed in the statement of the Proposition. For convenience, let us recall that $r=\textrm{d}_{\textrm{FS}}(z,{D})$ and $\cos^{2}(r)=\frac{1}{2}(\cos(2r)+1).$ By applying formula 8.751.1 with $m=0$ from [GR07] we obtain that $P_{\nu}(\cos(2r))=P_{\nu}(2\cos^{2}r-1)=F(-\nu,\nu+1;1;1-\cos^{2}r).$ Further, if we employ formula 6.512.1 from [GR07] (where, in their notation, we take $\nu=0$, $\mu$ to be equal to (our) $2\nu+1$, $a=1$ and $b=\sin r$) we get that $F(-\nu,\nu+1;1;1-\cos^{2}r)=F(-\nu,\nu+1;1;\sin^{2}r)=\int\limits_{0}^{\infty}J_{2\nu+1}(x)J_{0}(x\sin r)dx.$ Finally, upon differentiating with respect to $\nu$ and applying the formula 8.486(1), part 6, of [GR07] with $n=1$, the proof is complete. $\square$ The following corollary summarizes different representations of the $\log$-norm of a linear polynomial which are obtained by combining Proposition 3 with (31). 
###### Corollary 2 Assuming the notation as above and $z\notin\mathcal{D}\cup\\{D\\}$, one has that $2\log|P_{D}(z)|=\rho(z)+\log||D||^{2}+\frac{d}{d\nu}F(-\nu,\nu+1;1;1-\cos^{2}r)\Big{|}_{\nu=0}.$ (35) Additionally, for any integer $\ell\geq 1$, one has that $2\log|P_{D}(z)|=\rho(z)+\log||D||^{2}-H_{\ell}-\sum_{j=1}^{\infty}\frac{(2j+\ell)(-1)^{j}}{j(j+\ell)}P_{j}^{(\ell-1,0)}(\cos(2r)).$ (36) Finally, one also has that $2\log|P_{D}(z)|=\rho(z)+\log||D||^{2}+\int\limits_{0}^{\infty}\left(\pi Y_{1}(x)+\frac{2}{x}J_{0}(x)\right)J_{0}(x\sin r)dx.$ (37) The identities (35) and (36) will serve as a starting point for the computation of the equivalent expressions for the Mahler measure of the linear polynomial $P_{D}(z)$. Actually, we will not use equation (37) and present the formula only for possible future interest. ## 5 A change of variables formula Let $S$ denote the domain of integration in (4), meaning that $S$ is the subset in the affine chart $\mathcal{Z}_{0}\neq 0$ of ${\mathbb{C}}{\mathbb{P}}^{n}$ consisting of $n$-tuples of affine coordinates $(z_{1},...,z_{n})$ such that $(z_{1},...,z_{n})=(e^{i\theta_{1}},\ldots,e^{i\theta_{n}})\,\,\,\,\,\textrm{\rm with}\,\,\,\,\,(\theta_{1},\ldots,\theta_{n})\in[0,2\pi]^{n}.$ We assume that $S$ is equipped with the measure $\mu_{S}(z)=\frac{1}{(2\pi i)^{n}}\frac{dz_{1}}{z_{1}}\cdots\frac{dz_{n}}{z_{n}}=\frac{1}{(2\pi)^{n}}d\theta_{1}\cdots d\theta_{n}.$ The following discussion is based on the material from pages 419-422 of [Wa66] which is summarized here for the convenience of the reader. Let $h\in L^{1}([0,1])$, meaning $h(x)$ is an absolutely integrable function for $x\in[0,1]$. We will view $x$ as a function of $z,D\in{\mathbb{C}}{\mathbb{P}}^{n}$ by $x=x(z,D):=(\cos(d_{\textrm{FS}}(z,{D})))^{2}=\cos^{2}r,$ in the notation of the previous section. 
Consider the integral $I(D;h)=\int\limits_{S}h(x(z,D))\mu_{S}(z).$ Let us write $D=(\mathcal{W}_{0},\ldots,\mathcal{W}_{n})=(r_{0}e^{i\varphi_{0}},\ldots,r_{n}e^{i\varphi_{n}})$. Set $\mathcal{Z}_{0}=1$ and $\mathcal{Z}_{m}=e^{i\theta_{m}}$ for each integer $m$ from $1$ to $n$ to be a choice of coordinates for $z\in S$. With this, set $\mathcal{X}:=\sum_{m=0}^{n}\mathcal{Z}_{m}\overline{{\mathcal{W}}}_{m}=\sum_{m=0}^{n}r_{m}e^{i(\theta_{m}-\varphi_{m})}.$ Following the discussion on pages 419-422 of [Wa66], we can view $\mathcal{X}$ as the endpoint of an $(n+1)-$step random walk in two dimensions. Step number $m$ is of length $r_{m}$, and the walk occurs in the direction with angle $(\theta_{m}-\varphi_{m})\in[-\pi,\pi]$. The directions are viewed as independent and identically distributed random variables, and the probability distribution of each is uniform on the interval $[-\pi,\pi]$. Let $d(D)=|\mathcal{W}_{0}|+\cdots+|{\mathcal{W}_{n}}|$ be the $L^{1}$ norm of $D$. Let $\mathcal{Y}$ be the random variable which is the distance of $\mathcal{X}$ to the origin. Observe that for $z\in S$ we can write $x$ as $x=(\cos(d_{\mathrm{FS}}(z,{D})))^{2}=\frac{1}{c(D)^{2}}\left|\sum_{m=0}^{n}\mathcal{Z}_{m}\overline{{\mathcal{W}}}_{m}\right|^{2}=\frac{\mathcal{Y}^{2}}{c(D)^{2}}$ where $c(D)^{2}=(n+1)\|D\|^{2}$. It is proved on page 420 of [Wa66] that for any $u\in[0,d(D)]$ the cumulative distribution $F_{D}(u)$ of $\mathcal{Y}$ is given by $\textrm{\rm Prob}(\mathcal{Y}\leq u)=F_{D}(u)=u\int\limits_{0}^{\infty}J_{1}(ut)\prod_{m=0}^{n}J_{0}(r_{m}t)dt.$ Of course, $F_{D}(u)=0$ for $u<0$ and $F_{D}(u)=1$ for $u>d(D)$, and $J_{0}$ and $J_{1}$ are the classical $J$-Bessel functions. The probability density function $f_{D}(u)$ of $\mathcal{Y}$ is obtained by differentiating $F_{D}(u)$ with respect to $u$. 
Using formula 8.472.1 of [GR07], we deduce that for $u\in[0,d(D)]$ the function $f_{D}(u)$ is given by $f_{D}(u)=\int\limits_{0}^{\infty}utJ_{0}(ut)\prod_{m=0}^{n}J_{0}(r_{m}t)dt;$ also, $f_{D}(u)$ is equal to zero for $u\notin[0,d(D)]$. When $r_{m}=1$ for all $m$, the above formula is a classical result of Kluyver [Kl05]; see also formula (2.1) of [BSWZ12]. With all this, we can rewrite the integral $I(D;h)$ as $I(D;h)=\int\limits_{S}h(x(z,D))\mu_{S}(z)=\int\limits_{0}^{d(D)}h\left(u^{2}/c(D)^{2}\right)f_{D}(u)du=\int\limits_{0}^{d(D)}h\left(u^{2}/c(D)^{2}\right)\left(\int\limits_{0}^{\infty}utJ_{0}(ut)\prod_{m=0}^{n}J_{0}(r_{m}t)dt\right)du.$ (38) Finally, if we let $v=u/c(D)$, we arrive at a general change of variables formula, namely that $\int\limits_{S}h(x(z,D))\mu_{S}(z)=c(D)^{2}\int\limits_{0}^{d(D)/c(D)}h(v^{2})\left(\int\limits_{0}^{\infty}vtJ_{0}(c(D)vt)\prod_{m=0}^{n}J_{0}(r_{m}t)dt\right)dv.$ (39) Recall that the Cauchy-Schwarz inequality implies that $d(D)\leq c(D)$, so the stated assumption on the function $h$ is indeed sufficient for the above identity to hold. Equality $d(D)=c(D)$ holds only when the entries $\mathcal{W}_{0},\ldots,\mathcal{W}_{n}$ all have the same absolute value; otherwise $d(D)<c(D)$. This well-known aspect of the Cauchy-Schwarz inequality will be important when we apply (39) to prove our results. 
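As a sanity check of this setup, note that the first moment of $x(z,D)$ over $S$ is $\mathbb{E}[\mathcal{Y}^{2}]/c(D)^{2}=\|D\|^{2}/\big((n+1)\|D\|^{2}\big)=1/(n+1)$, since the steps of the walk are independent with uniform phases. Because $x(z,D)$ is a trigonometric polynomial of degree one in each $\theta_{m}$, an equally spaced grid integrates it over $S$ exactly. A short check, with a hypothetical weight vector $D$:

```python
import cmath

W = [1.0 + 0.5j, -0.3 + 1.2j, 0.8j]  # hypothetical D = (W_0, W_1, W_2), so n = 2
n = len(W) - 1
norm2 = sum(abs(w) ** 2 for w in W)
M = 4  # grid points per angle; exact since x has degree 1 in each theta_m

total = 0.0
for k1 in range(M):
    for k2 in range(M):
        z = [1.0,
             cmath.exp(2j * cmath.pi * k1 / M),
             cmath.exp(2j * cmath.pi * k2 / M)]
        inner = sum(zi * wi.conjugate() for zi, wi in zip(z, W))
        total += abs(inner) ** 2 / ((n + 1) * norm2)  # x(z, D) = cos^2 d_FS(z, D)
mean = total / M ** 2
print(mean)  # equals 1/(n+1) = 1/3 up to rounding
```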
When $(\mathcal{D}\cup\\{D\\})\cap S=\emptyset$, there is an $\epsilon>0$ such that $d_{\mathrm{FS}}(z,w)\geq\epsilon$ for all $z\in S$ and all $w\in\mathcal{D}\cup\\{D\\}$. Hence $\cos^{2}r$ is bounded away from $0$ on $S$, so $1-\cos^{2}r$ is bounded away from $1$, and the series defining the hypergeometric function $F(-\nu,\nu+1;1;1-\cos^{2}r)$ converges absolutely and uniformly in $\nu$. Therefore, we may differentiate the series expansion for the hypergeometric function $F(-\nu,\nu+1;1;1-\cos^{2}r)$ term by term to get that $\frac{d}{d\nu}F(-\nu,\nu+1;1;1-\cos^{2}r)\Big{|}_{\nu=0}=-\sum_{j=1}^{\infty}\frac{1}{j}(1-\cos^{2}r)^{j}.$ (40) Each term in the series (40) is non-negative, so the monotone convergence theorem applies to give that $2m(P_{D})=2\log c(D)-\int\limits_{S}\left(\sum_{j=1}^{\infty}\frac{1}{j}(1-\cos^{2}r)^{j}\right)\mu_{S}(z).$ (41) Suppose that $(\mathcal{D}\cup\\{D\\})\cap S\neq\emptyset$. Choose an $\epsilon>0$ and set $S_{\epsilon}:=\\{z\in S:d_{\mathrm{FS}}(z,w)\geq\epsilon\,\,\textrm{\rm for all}\,\,w\in\mathcal{D}\cup\\{D\\}\\}.$ Proceeding as above, we arrive at the formula $2m(P_{D};S_{\epsilon})=2\frac{\textrm{\rm vol}(S_{\varepsilon})}{(2\pi)^{n}}\log c(D)-\int\limits_{S_{\epsilon}}\left(\sum_{j=1}^{\infty}\frac{1}{j}(1-\cos^{2}r)^{j}\right)\mu_{S}(z),$ (42) where $m(P_{D};S_{\epsilon}):=\frac{1}{(2\pi)^{n}}\int\limits_{S_{\epsilon}}\log\left|P_{D}(1,e^{{i\theta_{1}}},e^{{i\theta_{2}}},\ldots,e^{{i\theta_{n}}})\right|d\theta_{1}\,d\theta_{2}\cdots d\theta_{n}.$ If $(\mathcal{D}\cup\\{D\\})\cap S\neq\emptyset$, then $(\mathcal{D}\cup\\{D\\})\cap S$ has $\mu_{S}$ measure zero, as also noted on page 270 of [De97]. Hence, the function $\log|P_{D}|$ lies in $L^{1}(\mu_{S})$. Therefore, by letting $\epsilon$ approach zero, we have, by the monotone convergence theorem, that (42) becomes (41). In other words, in both cases when $(\mathcal{D}\cup\\{D\\})\cap S=\emptyset$ and when $(\mathcal{D}\cup\\{D\\})\cap S\neq\emptyset$, we arrive at (41), which we now study. 
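The evaluation (40) can be confirmed termwise: in the Gauss series $F(-\nu,\nu+1;1;w)=\sum_{k\geq 0}\frac{(-\nu)_{k}(\nu+1)_{k}}{(k!)^{2}}w^{k}$ only the factor $(-\nu)_{k}$ has a nonzero $\nu$-derivative at $\nu=0$, giving $-\sum_{k\geq 1}w^{k}/k=\log(1-w)$ with $w=1-\cos^{2}r$. A numerical sketch using a central difference on the truncated series:

```python
import math

def F(nu, w, K=400):
    # truncated Gauss series for F(-nu, nu+1; 1; w), valid for |w| < 1
    total, term = 1.0, 1.0
    for k in range(K):
        term *= (-nu + k) * (nu + 1 + k) / ((k + 1) ** 2) * w
        total += term
    return total

w = 0.5  # plays the role of 1 - cos^2 r
h = 1e-6
deriv = (F(h, w) - F(-h, w)) / (2 * h)  # d/dnu F(-nu, nu+1; 1; w) at nu = 0
print(deriv, math.log(1 - w))  # both approximately -0.693147
```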
The series on the right-hand side of (41) is a series of non-negative functions; hence, we can interchange the sum and the integral to get, for any integer $N\geq 1$, $2m(P_{D})-2\log c(D)+\sum_{j=1}^{N}\frac{1}{j}\int\limits_{S}(1-\cos^{2}r)^{j}\mu_{S}(z)=-\sum_{j=N+1}^{\infty}\frac{1}{j}\int\limits_{S}(1-\cos^{2}r)^{j}\mu_{S}(z).$ (43) In order to complete the proof of Theorem 1 we will evaluate the integral over $S$ and show that the right-hand side of (43) is dominated by $2\Gamma(3/4)G(n,D)/(3N^{3/4})$ for a certain constant $G(n,D)$ which depends solely on $n$ and $D$. We will now do so, and the formula for $G(n,D)$ is given in (47) below. As before, $D=(\mathcal{W}_{0},...,\mathcal{W}_{n})\in{\mathbb{C}}{\mathbb{P}}^{n}$ is identified with its affine coordinates $(w_{1},...,w_{n})$, where $w_{\ell}=\mathcal{W}_{\ell}/\mathcal{W}_{0}$ (recall that the affine chart is chosen so that $\mathcal{W}_{0}\neq 0$). For $z\in S$, we have that $x^{k}(z,D)=(\cos r)^{2k}=\frac{\left|1+\sum_{\ell=1}^{n}\overline{w}_{\ell}e^{i\theta_{\ell}}\right|^{2k}}{(1+n)^{k}\left(1+\sum_{\ell=1}^{n}|w_{\ell}|^{2}\right)^{k}},$ so then $\frac{1}{(2\pi)^{n}}\int\limits_{0}^{2\pi}\cdots\int\limits_{0}^{2\pi}x^{k}(z,D)d\theta_{1}\cdots d\theta_{n}=\frac{a_{1}(n,k,D)}{(1+n)^{k}\left(1+\sum_{\ell=1}^{n}|w_{\ell}|^{2}\right)^{k}},$ (44) where $a_{1}(n,k,D)$ denotes the constant term, as a trigonometric polynomial in $\theta_{1},\ldots,\theta_{n}$, of $\left|1+\sum_{\ell=1}^{n}\overline{w}_{\ell}e^{i\theta_{\ell}}\right|^{2k}=\left(1+\sum_{\ell=1}^{n}w_{\ell}e^{-i\theta_{\ell}}\right)^{k}\left(1+\sum_{\ell=1}^{n}\overline{w}_{\ell}e^{i\theta_{\ell}}\right)^{k}.$ The multinomial theorem implies that $\displaystyle a_{1}(n,k,D)$ $\displaystyle=\sum_{\ell_{0}+\ldots+\ell_{n}=k,\,\ell_{m}\geq 0,\,m=0,\ldots,n}\binom{k}{\ell_{0},\ell_{1},\ldots,\ell_{n}}^{2}|w_{1}|^{2\ell_{1}}\cdots|w_{n}|^{2\ell_{n}}=\frac{a(n,k,D)}{|\mathcal{W}_{0}|^{2k}}.$ Therefore, $\int\limits_{S}(1-x(z,D))^{j}\mu_{S}(z)=\sum_{k=0}^{j}\binom{j}{k}(-1)^{k}\frac{a(n,k,D)}{c(D)^{2k}}.$ Inserting this into (43) we get $\left|2m(P_{D})-2\log 
c(D)+\sum_{j=1}^{N}\frac{1}{j}\sum_{k=0}^{j}\binom{j}{k}\frac{(-1)^{k}a(n,k,D)}{c(D)^{2k}}\right|\leq\sum_{j=N+1}^{\infty}\frac{1}{j}\int\limits_{S}(1-x(z,D))^{j}\mu_{S}(z).$ (45) To complete the proof of the theorem, it remains to deduce a uniform bound for the series in (45). Let us apply the change of variables formula (39) and write, for any $j\geq 1$, $\int\limits_{S}(1-x(z,D))^{j}\mu_{S}(z)=c(D)^{2}\int\limits_{0}^{d(D)/c(D)}(1-v^{2})^{j}\left(\int\limits_{0}^{\infty}vtJ_{0}(c(D)vt)\prod_{m=0}^{n}J_{0}(r_{m}t)dt\right)dv.$ (46) Since $d(D)/c(D)\leq 1$, we have that $v\leq 1$, hence $\max\\{1,\frac{\pi}{2}c(D)vt\\}\geq v\max\\{1,\frac{\pi}{2}c(D)t\\}$. By using the bound (18), we get that $|J_{0}(c(D)vt)|\leq v^{-1/2}\max\\{1,\frac{\pi}{2}c(D)t\\}^{-1/2}.$ Therefore, $\displaystyle\int\limits_{0}^{d(D)/c(D)}(1-v^{2})^{j}v|J_{0}(c(D)vt)|dv$ $\displaystyle\leq\left(\int\limits_{0}^{1}(1-v^{2})^{j}v^{1/2}dv\right)\max\\{1,\frac{\pi}{2}c(D)t\\}^{-1/2}$ $\displaystyle=\frac{1}{2}\left(\int\limits_{0}^{1}(1-u)^{j}u^{-1/4}du\right)\max\\{1,\frac{\pi}{2}c(D)t\\}^{-1/2}$ $\displaystyle=\frac{\Gamma(j+1)\Gamma(3/4)}{2\Gamma(j+1+3/4)}\max\\{1,\frac{\pi}{2}c(D)t\\}^{-1/2}$ $\displaystyle\leq\frac{\Gamma(3/4)}{2j^{3/4}}\max\\{1,\frac{\pi}{2}c(D)t\\}^{-1/2},$ where we have applied [GR07], formula 3.196.3 with $a=0$, $b=1$, $\mu=3/4$ and $\nu=j+1$ in order to evaluate the integral with respect to $u$. Trivially, $\sum_{j=N+1}^{\infty}j^{-7/4}\leq\frac{4}{3}N^{-3/4}$. 
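The closing estimates above — the evaluation of the $u$-integral via formula 3.196.3 of [GR07], the Gamma-ratio bound $\Gamma(j+1)\Gamma(3/4)/\Gamma(j+7/4)\leq\Gamma(3/4)j^{-3/4}$, and the tail estimate $\sum_{j>N}j^{-7/4}\leq\tfrac{4}{3}N^{-3/4}$ — can all be checked numerically. The following sketch uses only the Python standard library; the helper name `beta_integral` is ours.

```python
import math

def beta_integral(j, n_steps=100000):
    """Midpoint-rule approximation of int_0^1 (1-u)^j u^(-1/4) du;
    the singularity at u=0 is integrable and avoided by midpoint nodes."""
    h = 1.0 / n_steps
    return sum((1 - (i + 0.5) * h) ** j * ((i + 0.5) * h) ** -0.25
               for i in range(n_steps)) * h

for j in (1, 5, 20):
    closed_form = math.gamma(j + 1) * math.gamma(0.75) / math.gamma(j + 1.75)
    assert abs(beta_integral(j) - closed_form) < 1e-3      # [GR07] 3.196.3
    assert closed_form <= math.gamma(0.75) / j ** 0.75     # Gamma-ratio bound

# tail estimate: sum_{j=N+1}^infty j^{-7/4} <= (4/3) N^{-3/4}
N = 10
partial_tail = sum(j ** -1.75 for j in range(N + 1, 200000))
assert partial_tail <= (4.0 / 3.0) * N ** -0.75
```

Since every summand of the tail is positive, truncating the sum at a large index only strengthens the final assertion.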
Hence, after multiplying by $1/j$ and summing over $j\geq N+1$ in (46), we arrive at the bound $\sum_{j=N+1}^{\infty}\frac{1}{j}\int\limits_{S}(1-x(z,D))^{j}\mu_{S}(z)\leq\frac{2\Gamma(3/4)}{3}\frac{G(n,D)}{N^{3/4}}$ where $G(n,D)=c(D)^{2}\int\limits_{0}^{\infty}t\left(\max\\{1,\frac{\pi}{2}c(D)t\\}\right)^{-\tfrac{1}{2}}\prod_{m=0}^{n}|J_{0}(r_{m}t)|dt.$ (47) By combining with (45) we obtain the inequality $\left|2m(P_{D})-2E_{1}(N;n,D)\right|\leq\frac{2\Gamma(3/4)}{3}\frac{G(n,D)}{N^{3/4}},$ where $E_{1}(N;n,D)$ is defined by (13). We have now completed the proof of the first inequality in (12). Upon dividing by $2$ and letting $N\rightarrow\infty$, we also complete the proof of Theorem 1. As a concluding comment, let us point out a further refinement which will yield an elementary bound for $G(n,D)$. By using (18), we arrive at the inequality $G(n,D)\leq c(D)^{2}\int\limits_{0}^{\infty}t\left(\max\\{1,\frac{\pi}{2}c(D)t\\}\right)^{-\tfrac{1}{2}}\prod_{m=0}^{n}\left(\max\\{1,\frac{\pi}{2}r_{m}t\\}\right)^{-\tfrac{1}{2}}dt.$ (48) Without loss of generality, assume that $r_{0}\leq\ldots\leq r_{n}$, so then $r_{n}\leq c(D)$. If $n\geq 3$, the integral is convergent since the integrand is $O(t^{-n/2})$ for $t>2/(\pi r_{0})$. In order to evaluate (48), we can write the domain of integration as $\int\limits_{0}^{\infty}=\int\limits_{0}^{2/(\pi c(D))}+\int\limits_{2/(\pi c(D))}^{2/(\pi r_{n})}+\cdots+\int\limits^{2/(\pi r_{j})}_{2/(\pi r_{j+1})}+\cdots+\int\limits^{\infty}_{2/(\pi r_{0})}.$ All integrals are elementary, so then the evaluation of each integral will yield an explicit and elementary upper bound for $G(n,D)$. In the case when not all $r_{j}$’s are distinct, some of these integrals are zero, and the exponents for each integrand need to take into account the multiplicity of $r_{j}$ in the set of components of $\mathcal{D}$. The computations are elementary. ### 6.2 Proof of Theorem 2 when $\ell=1$ Choose any integer $N\geq 1$. 
When $\ell=1$, we can write (36) for $z\notin(\mathcal{D}\cup\\{D\\})$ as $\displaystyle 2\log|P_{D}(z)|=\rho(z)+$ $\displaystyle\log||D||^{2}-1-\sum_{j=1}^{N}(-1)^{j}\frac{2j+1}{j(j+1)}P_{j}(\cos(2r))$ $\displaystyle-\sum_{j=N+1}^{\infty}(-1)^{j}\frac{2j+1}{j(j+1)}P_{j}(\cos(2r)),$ (49) where, as before, $r=d_{\mathrm{FS}}(z,{D})$. We will now utilize the following three facts: the identity $\cos(2r)=2\cos^{2}r-1$; formula 8.962.1 from [GR07] for the Jacobi polynomial with $\alpha=\ell-1$ and $\beta=0$; and the fact that the hypergeometric function $F(j+\ell,-j;1;\cos^{2}r)$ reduces to a finite sum at these values. In doing so, we arrive at the formula $P_{j}^{(\ell-1,0)}(2\cos^{2}r-1)=(-1)^{j}\sum_{k=0}^{j}\binom{j+\ell+k-1}{k}\binom{j}{k}(-1)^{k}(\cos r)^{2k},$ which holds for all $z\in S$ when $(\mathcal{D}\cup\\{D\\})\cap S=\emptyset$. For now, we will assume $(\mathcal{D}\cup\\{D\\})\cap S=\emptyset$. By integrating this equation over $S$ and employing (44) we get $(-1)^{j}\int\limits_{S}P_{j}^{(\ell-1,0)}(\cos(2r))\mu_{S}(z)=\sum_{k=0}^{j}\binom{j+\ell+k-1}{k}\binom{j}{k}(-1)^{k}\frac{a(n,k,D)}{c(D)^{2k}}.$ (50) By integrating (49) over $S$ and applying (50) with $\ell=1$ we arrive at $\left|m(P_{D})-E_{2}(N;n,D)\right|\leq\frac{1}{2}\sum_{j=N+1}^{\infty}\frac{2j+1}{j(j+1)}\int\limits_{S}\left|P_{j}(\cos(2d_{\mathrm{FS}}(z,{D})))\right|\mu_{S}(z),$ (51) where $E_{2}(N;n,D)$ is defined by (14). Before studying the right-hand side of (51), let us address the setting when $(\mathcal{D}\cup\\{D\\})\cap S\neq\emptyset$. Indeed, the extension of (51) to the case when $(\mathcal{D}\cup\\{D\\})\cap S\neq\emptyset$ follows the method of proof of (41) in that case. By integrating over $S_{\epsilon}$ rather than all of $S$, one gets an analogue of (51) for $\epsilon>0$. At that point, one lets $\epsilon$ approach zero. 
The function $\log|P_{D}|$ and each Jacobi polynomial $P_{j}^{(\ell-1,0)}$ lie in $L^{1}(S)$, so one obtains the left-hand side of (51) as $\epsilon$ approaches zero. As for the right-hand side of (51), one uses the monotone convergence theorem, as in the proof of (43). With this, one shows that (51) also holds when $(\mathcal{D}\cup\\{D\\})\cap S\neq\emptyset$. It remains to study the series on the right-hand side of (51). We will use inequality (16) with $\cos(2r)=\cos(2d_{\mathrm{FS}}(z,{D}))$ instead of $x$. Recall the notation $x(z,D)=\cos^{2}(d_{\mathrm{FS}}(z,{D}))$, so then the inequality (16) gives that $\int\limits_{S}\left|P_{j}(\cos(2d_{\mathrm{FS}}(z,{D})))\right|\mu_{S}(z)\leq\frac{2}{\sqrt{\pi}\sqrt[4]{2}}\frac{1}{\sqrt{2j+1}}\int\limits_{S}\left(x(z,D)(1-x(z,D))\right)^{-1/4}\mu_{S}(z).$ (52) Therefore, the right-hand side of (51) can be bounded from above by $\frac{1}{\sqrt{\pi}\sqrt[4]{2}}\left(\sum_{j=N+1}^{\infty}\frac{\sqrt{2j+1}}{j(j+1)}\right)\int\limits_{S}\left(x(z,D)(1-x(z,D))\right)^{-1/4}\mu_{S}(z).$ The goal is to make all bounds effective and explicit, so we shall do so. Trivially, we have that $\sum_{j=N+1}^{\infty}\frac{\sqrt{2j+1}}{j(j+1)}\leq\sum_{j=N+1}^{\infty}\frac{2j^{1/2}}{j^{2}}\leq\frac{4}{\sqrt{N}},$ which, when combined with (51), yields the inequality $\left|m(P_{D})-E_{2}(N;n,D)\right|\leq\frac{C}{\sqrt{N}}\int\limits_{S}\left(x(z,D)(1-x(z,D))\right)^{-1/4}\mu_{S}(z)\,\,\,\textrm{\rm for}\,\,\,C=\frac{4}{\sqrt{\pi}\sqrt[4]{2}}.$ (53) The integral in (53) can be re-written using the change of variables formula (39). 
In doing so, the integral in (53), which we denote by $H(n,D)$, becomes $H(n,D)=c(D)^{2}\int\limits_{0}^{d(D)/c(D)}v^{-1/2}(1-v^{2})^{-1/4}\left(\int\limits_{0}^{\infty}vtJ_{0}(c(D)vt)\prod_{m=0}^{n}J_{0}(r_{m}t)dt\right)dv.$ (54) The Fubini-Tonelli theorem then implies that $H(n,D)\leq c(D)^{2}\int\limits_{0}^{\infty}t\prod_{m=0}^{n}|J_{0}(r_{m}t)|\cdot\left(\int\limits_{0}^{1}v^{\tfrac{1}{2}}\left(1-v^{2}\right)^{-\tfrac{1}{4}}|J_{0}(c(D)tv)|dv\right)dt.$ (55) The Cauchy-Schwarz inequality together with the elementary inequality (18) for the $J$-Bessel function gives the inequality $\int\limits_{0}^{1}v^{\tfrac{1}{2}}\left(1-v^{2}\right)^{-\tfrac{1}{4}}|J_{0}(c(D)tv)|dv\leq\sqrt{\frac{\pi}{2}}\left(\int\limits_{0}^{1}vJ_{0}^{2}(c(D)tv)dv\right)^{\tfrac{1}{2}}\leq\sqrt{\frac{\pi}{2}}\left(\max\\{1,\frac{\pi}{2}c(D)t\\}\right)^{-\tfrac{1}{2}}.$ Finally, by substituting this inequality into (55), we arrive at the bound $H(n,D)\leq c(D)^{2}\sqrt{\frac{\pi}{2}}\int\limits_{0}^{\infty}t\left(\max\\{1,\frac{\pi}{2}c(D)t\\}\right)^{-\tfrac{1}{2}}\prod_{m=0}^{n}|J_{0}(r_{m}t)|dt=\sqrt{\frac{\pi}{2}}G(n,D).$ Therefore, $\left|m(P_{D})-E_{2}(N;n,D)\right|\leq\frac{2\sqrt[4]{2}}{\sqrt{N}}G(n,D),$ which proves the second inequality in (12). Formula (8) follows by letting $N$ tend to infinity. ### 6.3 Proof of equation (10) Equation (10) follows by a direct manipulation of the inner sum appearing in (8). To ease the notation, we will set $b(n,k,D)=(-1)^{k}a(n,k,D)/c(D)^{2k}$. 
Then, we can write $\displaystyle\sum_{j=1}^{\infty}\frac{2j+1}{j(j+1)}$ $\displaystyle\sum_{k=0}^{j}\binom{j+k}{k}\binom{j}{k}b(n,k,D)=1-\frac{2}{n+1}$ $\displaystyle+\sum_{j=2}^{\infty}\frac{1}{j}\left(\sum_{k=0}^{j}\binom{j+k}{k}\binom{j}{k}b(n,k,D)+\sum_{k=0}^{j-1}\binom{j+k-1}{k}\binom{j-1}{k}b(n,k,D)\right).$ It is elementary to prove that $\displaystyle\sum_{k=0}^{j}\binom{j+k}{k}\binom{j}{k}b(n,k,D)$ $\displaystyle+\sum_{k=0}^{j-1}\binom{j+k-1}{k}\binom{j-1}{k}b(n,k,D)$ $\displaystyle=2\sum_{k=0}^{j}\binom{j+k-1}{k}\binom{j}{k}b(n,k,D),$ which follows from the identities $\binom{j+k}{k}=\frac{j+k}{j}\binom{j+k-1}{k}$ and $\binom{j-1}{k}=\frac{j-k}{j}\binom{j}{k}$ together with $\frac{j+k}{j}+\frac{j-k}{j}=2$. This completes the proof of equation (10). ### 6.4 Proof of Theorem 2 when $\ell\geq 2$ Assume $\ell\geq 2$ and that $D\neq r(1,1,\ldots,1)$ for any $r\neq 0$. Proceeding as above, we integrate (36) over $S$ with respect to $\mu_{S}(z)$ and employ (50) to arrive at $\left|m(P_{D})-\log c(D)-\frac{H_{\ell}}{2}-\frac{1}{2}\sum_{j=1}^{N}\frac{2j+\ell}{j(j+\ell)}\sum_{k=0}^{j}\binom{j+\ell+k-1}{k}\binom{j}{k}\frac{(-1)^{k}a(n,k,D)}{c(D)^{2k}}\right|\\\ \leq\frac{1}{2}\sum_{j=N+1}^{\infty}\frac{2j+\ell}{j(j+\ell)}\int\limits_{S}\left|P_{j}^{(\ell-1,0)}(\cos(2d_{\mathrm{FS}}(z,{D})))\right|\mu_{S}(z).$ (56) In the notation above, set $H(j;\ell,D):=\int\limits_{S}\left|P_{j}^{(\ell-1,0)}(\cos(2d_{\mathrm{FS}}(z,{D})))\right|\mu_{S}(z)=\int\limits_{S}\left|P_{j}^{(\ell-1,0)}(2x(z,D)-1)\right|\mu_{S}(z).$ (57) Equation (39) applies, so then $H(j;\ell,D)=c(D)^{2}\int\limits_{0}^{d(D)/c(D)}\left|P^{(\ell-1,0)}_{j}(2v^{2}-1)\right|\left(\int\limits_{0}^{\infty}vtJ_{0}(c(D)vt)\prod_{m=0}^{n}J_{0}(r_{m}t)dt\right)dv.$ We now apply the bound in (17) with $x=2v^{2}-1\in[-1,2(d(D)/c(D))^{2}-1]\subset[-1,1)$ to get $(1-v^{2})^{1/4}v^{1/2}\left(1-v^{2}\right)^{(\ell-1)/2}|P^{(\ell-1,0)}_{j}(2v^{2}-1)|\leq\frac{6\sqrt{2}}{\sqrt{2j+\ell}}.$ Recall that $d(D)$ is the $L^{1}$ norm of $D$, and $c(D)^{2}=(n+1)\|D\|^{2}$. By the Cauchy-Schwarz inequality, $d(D)/c(D)\leq 1$ with equality if and only if $D=r(1,1,\cdots,1)$ for some non-zero $r$. 
Since we assume that $D\neq r(1,\cdots,1)$, it follows that $d(D)/c(D)<1$, hence $|P^{(\ell-1,0)}_{j}(2v^{2}-1)|\leq\frac{A(D,\ell)}{\sqrt{2j+\ell}}v^{-1/2}(1-v^{2})^{-1/4},$ where $A(D,\ell)=6\sqrt{2}\left(1-\frac{d(D)^{2}}{c(D)^{2}}\right)^{-(\ell-1)/2}.$ Therefore, $H(j;\ell,D)\leq\frac{c(D)^{2}A(D,\ell)}{\sqrt{2j+\ell}}\int\limits_{0}^{\infty}t\prod_{m=0}^{n}|J_{0}(r_{m}t)|\cdot\left(\int\limits_{0}^{1}v^{\tfrac{1}{2}}\left(1-v^{2}\right)^{-\tfrac{1}{4}}|J_{0}(c(D)tv)|dv\right)dt.$ The integral on the right-hand side of the above equation is the same as the integral which appears in (55). So then the argument following (55) applies and gives the inequality $H(j;\ell,D)\leq\sqrt{\frac{\pi}{2}}\frac{A(D,\ell)}{\sqrt{2j+\ell}}G(n,D).$ Consequently, we have shown that $\sum_{j=N+1}^{\infty}\frac{2j+\ell}{j(j+\ell)}\int\limits_{S}\left|P_{j}^{(\ell-1,0)}(\cos(2d_{\mathrm{FS}}(z,{D})))\right|\mu_{S}(z)\leq\sqrt{\frac{\pi}{2}}A(D,\ell)G(n,D)\sum_{j=N+1}^{\infty}\frac{\sqrt{2j+\ell}}{j(j+\ell)}.$ This proves that the right-hand side of (56) is bounded by $\sqrt{\pi}A(D,\ell)G(n,D)N^{-1/2}$. With this, the proof of Theorem 2 when $\ell\geq 2$ follows upon letting $N\to\infty$ in (56). ## 7 Concluding remarks ### 7.1 Proof of Corollary 1 Let us rewrite a result from [R-VTV04] in our notation. Specifically, equation (4.7) from [R-VTV04] yields the formula $2m(P_{D})=\log||D||^{2}-\gamma-\sum_{m=1}^{\infty}\frac{1}{m}\sum_{\ell=0}^{m}\binom{m}{\ell}\frac{(-1)^{\ell}a(n,\ell,D)}{\ell!||D||^{2\ell}}.$ (58) Trivially, $\sum_{\ell=0}^{1}\binom{1}{\ell}\frac{(-1)^{\ell}a(n,\ell,D)}{\ell!||D||^{2\ell}}=0,$ from which (11) follows immediately by comparing (7) with (58). ### 7.2 Additional formulas for Mahler measures For any $n\geq 3$, choose any $D$ and one of the formulas we have proved, say Theorem 1. For any integer $\tilde{n}>n$, let $\widetilde{D}$ be the vector of coefficients whose first $n$ components are those of $D$ and whose last $\tilde{n}-n$ coordinates are zero. 
The normalization in (1) is such that $m(P_{D})=m(P_{\widetilde{D}})$. Also, for any $k$ we have that $a(\tilde{n},k,\widetilde{D})=a(n,k,D)$. However, $c(\widetilde{D})^{2}=(\tilde{n}+1)\|\widetilde{D}\|^{2}=(\tilde{n}+1)\|D\|^{2}=\frac{\tilde{n}+1}{n+1}c(D)^{2}.$ Let us set $m=\tilde{n}-n$. With this, the main formula in Theorem 1 becomes the statement that for any $m\geq 0$, one has that $\displaystyle m(P_{D})=\log c(D)+\frac{1}{2}\log\left(\frac{n+m+1}{n+1}\right)-\frac{1}{2}\sum_{j=1}^{\infty}\frac{1}{j}\sum_{k=0}^{j}\binom{j}{k}\frac{(-1)^{k}a(n,k,D)(n+1)^{k}}{c(D)^{2k}(n+m+1)^{k}}.$ (59) Similar identities can be proved by the same means from Theorem 2. Equation (59) with $m=0$ and $m\geq 1$ yields the following curious combinatorial identity, similar to (11): $\displaystyle\log\left(\frac{n+m+1}{n+1}\right)=\sum_{j=1}^{\infty}\frac{1}{j}\sum_{k=0}^{j}\binom{j}{k}\frac{(-1)^{k}a(n,k,D)}{c(D)^{2k}}\left[\left(\frac{n+1}{n+m+1}\right)^{k}-1\right],$ (60) which holds true for any $m\geq 1$. Note that all estimates we have derived for $G(n,D)$ grow, for fixed $D$, as $n$ increases. As such, the above considerations do not seem to aid with convergence issues when applying our results to numerical estimations. ### 7.3 The excluded cases when $n=2$ and $D=r(1,\cdots,1)$ In the case $n=2$, it may be possible to revisit our computations and obtain bounds. For example, rather than using (18), one could use the asymptotic formula $J_{0}(x)=\sqrt{\frac{2}{\pi x}}\cos(x-\pi/4)+O(x^{-3/2})\,\,\,\,\,\textrm{\rm as $x\rightarrow\infty$.}$ The oscillatory bounds may be such that one can derive a finite bound for $G(2,D)$ and possibly improved bounds for $G(n,D)$ for $n\geq 3$ as well. These considerations are undertaken in [AMS20]. The exclusion of the point $D=r(1,\cdots,1)$ in Theorem 2 comes from the problem of deriving bounds for the $P^{(\ell-1,0)}_{j}(r)$ near $r=1$; see (56). 
For this, one can seek to employ Hilb-type formulas; see page 197 of [Sz74], page 6 of [BZ07] or page 980 of [FW85]. We will leave these questions for future consideration. ### 7.4 Other choices for $S$ We shall now describe how the approach taken in this paper can be generalized. In doing so, we will be somewhat vague in our discussion. For this section let $S$ be a “nice” set in ${\mathbb{C}}{\mathbb{P}}^{n}$ with a “nice” measure $\mu_{S}$. One example is a product of circles in an affine chart of ${\mathbb{C}}{\mathbb{P}}^{n}$, with $\mu_{S}$ the translation invariant measure on each circle. Let us define $m_{S}(F_{\mathcal{D}})=\int\limits_{S}\log\|F_{\mathcal{D}}\|^{2}_{\mu}(z)\mu_{S}(z),$ where $F_{\mathcal{D}}$ is a holomorphic form on ${\mathbb{C}}{\mathbb{P}}^{n}$ with divisor $\mathcal{D}$. The invariant $m_{{\mathbb{C}}{\mathbb{P}}^{n}}(F_{\mathcal{D}})$ is obtained by integrating with respect to the Fubini-Study metric. By using the spectral expansion of the Green’s function and the proof of Proposition 1, we obtain a general formula, namely $4\pi\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})\sum\limits_{\lambda_{j}>0}\frac{1}{\lambda_{j}}\left(\int\limits_{S}\psi_{j}\mu_{S}\right)\left(\int\limits_{\mathcal{D}}\overline{\psi}_{j}\mu_{\mathcal{D}}\right)=\textrm{\rm vol}_{\mu}({\mathbb{C}}{\mathbb{P}}^{n})m_{S}(F_{\mathcal{D}})-\textrm{\rm vol}_{\mu}(S)m_{{\mathbb{C}}{\mathbb{P}}^{n}}(F_{\mathcal{D}}).$ (61) In the above calculations, we were able to express the series $\sum\limits_{\lambda_{j}>0}\frac{1}{\lambda_{j}}\left(\int\limits_{S}\psi_{j}\mu_{S}\right)\left(\int\limits_{\mathcal{D}}\overline{\psi}_{j}\mu_{\mathcal{D}}\right)$ as a sum of integrals of Jacobi polynomials by using the Radon transform. When this method applies, one then obtains a series of Legendre or Jacobi polynomials at various arguments which are then integrated over $S$. It certainly seems plausible that our approach will apply in other settings. 
In particular, let us note that in [LM18] the authors considered the case when $S$ is a product of circles with different radii. It seems as if the methodology developed in this article will apply to give analogues of Theorem 1, Theorem 2 and Theorem 3 in that setting. ### 7.5 Estimates for canonical heights The contents of this section are based on comments from an anonymous referee; we gratefully acknowledge them for sharing their mathematical insight. First, let us rephrase our study in the context of Arakelov theory. Following the work in [Ma00], the calculation of Mahler measures is manifest within the study of arithmetic intersection theory. As such, one is naturally led to determine suitable Green’s currents associated to a divisor $\mathcal{D}$. There are two immediate possibilities. First, if $P_{D}$ is a polynomial whose divisor is $\mathcal{D}$, then the function $g_{\mathcal{D}}(z)=-\log(\|P_{D}(z)\|^{2}_{\textrm{\rm FS}})$ is one such Green’s function. In this expression, we have used $z=(\mathcal{Z}_{0},\ldots,\mathcal{Z}_{n})$ and the subscript “FS” to denote the Fubini-Study metric. Second, from above, one also can consider the function $\tilde{g}_{\mathcal{D}}(z)=\int\limits_{\mathcal{D}}G_{{\mathbb{C}}{\mathbb{P}}^{n}}(z,w)\mu_{\mathcal{D}}(w).$ The difference $g_{\mathcal{D}}(z)-\tilde{g}_{\mathcal{D}}(z)$ admits a smooth extension across the divisor $\mathcal{D}$, from which one can prove that $\textrm{\rm d}_{z}\textrm{\rm d}_{z}^{c}\left(g_{\mathcal{D}}(z)-\tilde{g}_{\mathcal{D}}(z)\right)=0.$ Therefore, there is a constant $B(D,n)$, which depends on the coefficients $D$ of $P_{D}$ and the dimension $n$, such that $g_{\mathcal{D}}(z)-\tilde{g}_{\mathcal{D}}(z)=B(D,n).$ At this point, we have arrived at the second displayed line in the proof of Proposition 1. 
Our subsequent analysis addresses the details of establishing normalizations and evaluation of the constant $B(D,n)$, as well as the study of $\tilde{g}_{\mathcal{D}}(z)$, which we undertake via analytic continuation through the generalization of the Kronecker limit formula from [CJS20]. Second, we now have an opportunity to restate the contents of Theorem 3 as follows. The bounds in (12) provide estimates for the Mahler measure in terms of either $E_{1}(N;n,D)$ or $E_{2}(N;n,D)$. Furthermore, the constant $G(n,D)$ can be explicitly computed when, for example, one combines (47) and (18). As such, Theorem 3 provides a means by which one can effectively and efficiently estimate the canonical height which was computed on page 107 of [Ma00]. ### 7.6 Reinterpreting Mahler measures The results in the present paper follow from the Kronecker-type limit formula derived in [CJS20]. The setting of [CJS20] was that of a general Kähler manifold $X$ together with a divisor $\mathcal{D}$ which is smooth up to codimension two. In this article we took $X$ to be ${\mathbb{C}}{\mathbb{P}}^{n}$ and $\mathcal{D}$ to be a hyperplane. From this initial point, we then delved into detailed computations and identities involving the Legendre polynomials, Jacobi polynomials and $J$-Bessel functions. However, the foray into special function theory was expected. After all, in many instances one knows that heat kernels can be expressed in terms of spherical functions, and the Green’s function can be computed from the heat kernel; see page 436 of [CJS20] and references therein. Additionally, Jacobi polynomials and Jacobi functions are known to be present in such aspects of harmonic analysis; see, for example, [Ko84]. We find it quite interesting that (61) can be viewed in the setting of harmonic analysis, and possibly beyond. ## References * [AMS20] Anton, G., Malathu, J. A., Stinson, S.: _On an approximation of a J-Bessel integral and its applications (with an appendix by J.S. 
Friedman)_ , arXiv:2012.04165. * [BZ07] Bao, X.-X., Zhao, Y.-Q.: _A uniform asymptotic expansion for Jacobi polynomials via uniform treatment of Darboux’s method_ , J. of Approx. Theory 148 (2007), 1–11. * [BGM71] Berger, M., Gauduchon P., Mazet E.: _Le spectre d’une variété riemannienne_ , Lecture Notes in Math. 194, Springer-Verlag, Berlin and New York, 1971. * [BG06] Bombieri, E., Gubler, W.: _Heights in Diophantine geometry_ , New Mathematical Monographs 4, Cambridge University Press, Cambridge, 2006. * [BSWZ12] Borwein, J. M., Straub, A., Wan, J., Zudilin, W.: _Densities of short uniform random walks (with an Appendix by D. Zagier)_ , Canad. J. Math. 64 (2012), 961–990. * [Bo81] Boyd, D.: _Speculations concerning the range of Mahler’s measure_ , Canad. Math. Bull. 24 (1981), 453–469. * [Bo98] Boyd, D.: _Mahler’s measure and special values of $L$-functions_ , Exp. Math. 7 (1998), 37–82. * [CJS20] Cogdell, J., Jorgenson, J., Smajlović, L.: _Spectral construction of non-holomorphic Eisenstein-type series and their Kronecker limit formula_ , in: Integrability systems and algebraic geometry, London Math. Soc. Lecture Note Ser., 459, Cambridge Univ. Press, Cambridge, 2020, 393–427. * [De97] Deninger, C.: _Deligne periods of mixed motives, K-theory and the entropy of certain ${\mathbb{Z}}^{n}$-actions_ , J. Amer. Math. Soc. 10 (1997), 259–281. * [De09] Deninger, C.: _Mahler measures and Fuglede-Kadison determinants_ , Münster J. Math. 2 (2009), 45–63. * [De12] Deninger, C.: _Regulators, entropy and infinite determinants_ , in: Proceedings of the Regulators III Conference, Contemp. Math., 571, Amer. Math. Soc., Providence, RI, 2012, 117–134. * [Fo76] Folland, G.: _Introduction to Partial Differential Equations_ , Princeton University Press, Princeton, NJ, 1976. * [FW85] Frenzen, C. L., Wong, R.: _A uniform asymptotic expansion of the Jacobi polynomials with error bounds_ , Canad. J. Math. 37 (1985), 979–1007. 
* [GS90] Gillet, H., Soulé, C.: _Characteristic classes for algebraic vector bundles with hermitian metric, II_ , Annals of Math. 131 (1990), 205–238. * [GR07] Gradshteyn, I. S., Ryzhik, I. M.: _Table of Integrals, Series and Products_ , Elsevier Academic Press, Amsterdam, 2007. * [Gr83] Grinberg, E. L.: _Spherical harmonics and integral geometry on projective spaces_ , Trans. AMS 279 No. 1 (1983), 187–203. * [HS14] Haagerup, U., Schlichtkrull, H.: _Inequalities for Jacobi polynomials_ , Ramanujan J. 33 (2014), 227–246. * [HI02] Hafoud, A., Intissar, A.: _Représentation intégrale du noyau de la chaleur sur l’espace projectif complexe $P^{n}(C)$, $n\geq 1$_ , C. R. Math. Acad. Sci. Paris 335 No. 11 (2002), 871–876. * [JK98] Jorgenson, J., Kramer, J.: _Towards the arithmetic degree of line bundles on abelian varieties_ , Manuscripta Math. 96 (1998), 335–370. * [Kl05] Kluyver, J. C.: _A local probability problem_ , in: _Royal Netherlands Academy of Arts and Sciences_ , Proceedings, 8, 1905, 341–350. * [Ko84] Koornwinder, T.: _Jacobi functions and analysis on noncompact semisimple Lie groups_ , in: _Special functions: group theoretical aspects and applications,_ Math. Appl., Reidel, Dordrecht, 1984, 1–85. * [Kr06] Krasikov, I.: _Uniform bounds for Bessel functions_ , J. Appl. Anal. 12 (2006), 83–91. * [LM18] Lalín, M., Mittal, T.: _The Mahler measure for arbitrary tori_ , Res. Number Theory 4 (2018), no. 2, Paper No. 16, 23 pp. * [La88] Lang, S.: _Introduction to Arakelov Theory_ , Springer, New York, 1988. * [Lo82/83] Lorch, L.: _Alternative proof of a sharpened form of Bernstein’s inequality for Legendre polynomials_ , Appl. Anal. 14 (1982/1983), 237–240. Corrigendum in Appl. Anal. 50 (1993), 47. * [Lu98] Lu, Q.: _The eigenfunctions of the complex projective space_ , Acta Math. Sinica N.S. 14 no. 1 (1998), 1–8. * [Ma00] Maillot, V.: _Géométrie d’Arakelov des variétés toriques et fibrés en droites intégrables_ , Mém. Soc. Math. Fr. (N.S.) No. 80 (2000), vi+129 pp. 
* [R-VTV04] Rodriguez-Villegas, F., Toledano, R., Vaaler, J. D.: _Estimates for Mahler’s measure of a linear form_ , Proc. Edinb. Math. Soc. 47 (2004), 473–494. * [Sm81] Smyth, C. J.: _On measures of polynomials in several variables_ , Bull. Austral. Math. Soc. 23 (1981), 49–63. * [Sm08] Smyth, C. J.: _The Mahler measure of algebraic numbers: A survey_ , in: Number theory and polynomials, London Math. Soc. Lecture Note Ser., 352, Cambridge Univ. Press, Cambridge, 2008, 322–349. * [Sz74] Szegö, G.: _Orthogonal Polynomials_ , American Mathematical Society, Providence, RI, 1975. * [Wa66] Watson, G. N.: _A Treatise on the Theory of Bessel Functions_ (2nd ed.), Cambridge University Press, Cambridge, 1966. James W. Cogdell Department of Mathematics Ohio State University 231 W. 18th Ave. Columbus, OH 43210 U.S.A. e-mail<EMAIL_ADDRESS> Jay Jorgenson Department of Mathematics The City College of New York Convent Avenue at 138th Street New York, NY 10031 U.S.A. e-mail<EMAIL_ADDRESS> Lejla Smajlović Department of Mathematics University of Sarajevo Zmaja od Bosne 35, 71 000 Sarajevo Bosnia and Herzegovina e-mail<EMAIL_ADDRESS>
# Kronecker limit functions and an extension of the Rohrlich-Jensen formula James Cogdell, Jay Jorgenson, Lejla Smajlović (The second named author acknowledges grant support from several PSC-CUNY Awards, which are jointly funded by the Professional Staff Congress and The City University of New York.) ###### Abstract In [Ro84] Rohrlich proved a modular analogue of Jensen’s formula. Under certain conditions, the Rohrlich-Jensen formula expresses an integral of the log-norm $\log\|f\|$ of a $\text{\rm PSL}(2,{\mathbb{Z}})$ modular form $f$ in terms of the Dedekind Delta function evaluated at the divisor of $f$. In [BK20] the authors re-interpreted the Rohrlich-Jensen formula as evaluating a regularized inner product of $\log\|f\|$ and extended the result to compute a regularized inner product of $\log\|f\|$ with what amounts to powers of the Hauptmoduli of $\text{\rm PSL}(2,{\mathbb{Z}})$. In the present article, we revisit the Rohrlich-Jensen formula and prove that it can be viewed as a regularized inner product of special values of two Poincaré series, one of which is the Niebur-Poincaré series and the other is the resolvent kernel of the Laplacian. The regularized inner product can be seen as a type of Maass-Selberg relation. In this form, we develop a Rohrlich-Jensen formula associated to any Fuchsian group $\Gamma$ of the first kind with one cusp by employing a type of Kronecker limit formula associated to the resolvent kernel. We present two examples of our main result: First, when $\Gamma$ is the full modular group $\text{\rm PSL}(2,{\mathbb{Z}})$, thus reproving the theorems from [BK20]; and second, when $\Gamma$ is an Atkin-Lehner group $\Gamma_{0}(N)^{+}$, where explicit computations are given for certain genus zero, one and two levels. ## 1 Introduction and statement of results ### 1.1 The Poisson-Jensen formula Let $D_{R}=\\{z=x+iy\in{\mathbb{C}}:|z|<R\\}$ be the disc of radius $R$ centered at the origin in the complex plane ${\mathbb{C}}$. 
Let $F$ be a non-constant meromorphic function on the closure $\overline{D_{R}}$ of $D_{R}$. Denote by $c_{F}$ the leading non-zero coefficient of $F$ at zero, meaning that for some integer $m$ we have that $F(z)=c_{F}z^{m}+O(z^{m+1})$ as $z$ approaches zero. For any $a\in D_{R}$, let $n_{F}(a)$ denote the order of $F$ at $a$; there are a finite number of points $a$ for which $n_{F}(a)\neq 0$. With this, Jensen’s formula, as stated on page 341 of [La99], asserts that $\frac{1}{2\pi}\int\limits_{0}^{2\pi}\log|F(Re^{i\theta})|d\theta+\sum\limits_{a\in D_{R},\,a\neq 0}n_{F}(a)\log(|a|/R)+n_{F}(0)\log(1/R)=\log|c_{F}|.$ (1) One can consider the action of a Möbius transformation which preserves $D_{R}$ and seek to determine the resulting expression from (1). Such a consideration leads to the Poisson-Jensen formula, and we refer the reader to page 161 of [La87] for a statement and proof. On their own, the Jensen formula and the Poisson-Jensen formula paved the way toward Nevanlinna theory, which in its most elementary interpretation establishes subtle growth estimates for meromorphic functions; see Chapter VI of [La99]. Going further, Nevanlinna theory provided motivation for Vojta’s conjectures whose insight into arithmetic algebraic geometry is profound. In particular, page 34 of [Vo87] contains a type of “dictionary” which translates between Nevanlinna theory and number theory, where Vojta asserts that Jensen’s formula should be viewed as analogous to the Artin-Whaples product formula from class field theory. ### 1.2 A modular generalization In [Ro84] Rohrlich proved what he aptly called a modular version of Jensen’s formula. We shall now describe Rohrlich’s result. Let $f$ be a meromorphic function on the upper half plane ${\mathbb{H}}$ which is invariant with respect to the action of the full modular group $\mathrm{PSL}(2,\mathbb{Z})$. 
Set $\mathcal{F}$ to be the “usual” fundamental domain of the quotient $\mathrm{PSL}(2,\mathbb{Z})\backslash{\mathbb{H}}$, and let $d\mu$ denote the area form of the hyperbolic metric. Assume that $f$ does not have a pole at the cusp $\infty$ of $\mathcal{F}$, and assume further that the Fourier expansion of $f$ at $\infty$ has its constant term equal to one. Let $P(w)$ be the Kronecker limit function associated to the parabolic Eisenstein series associated to $\mathrm{PSL}(2,\mathbb{Z})$; below we will write $P(w)$ in terms of the Dedekind Delta function, but for now we want to keep the concept of a Kronecker limit function in the conversation. With all this, the Rohrlich-Jensen formula is the statement that $\frac{1}{2\pi}\int\limits_{\mathrm{PSL}(2,\mathbb{Z})\backslash\mathbb{H}}\log|f(z)|d\mu(z)+\sum_{w\in\mathcal{F}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}P(w)=0.$ (2) In this expression, $\mathrm{ord}_{w}(f)$ denotes the order of $f$ at $w$ as a meromorphic function, and $\mathrm{ord}(w)$ denotes the order of the stabilizer of $w$ under the action of $\mathrm{PSL}(2,\mathbb{Z})$ on ${\mathbb{H}}$. As a means by which one can see beyond the above setting, one can view (2) as evaluating the inner product $\langle 1,\log|f(z)|\rangle=\int\limits_{\mathrm{PSL}(2,\mathbb{Z})\backslash\mathbb{H}}1\cdot\log|f(z)|d\mu(z)$ within the Hilbert space of $L^{2}$ functions on $\mathrm{PSL}(2,\mathbb{Z})\backslash{\mathbb{H}}$. There are various directions in which (2) has been extended. In [Ro84], Rohrlich described the analogue of (2) for general Fuchsian groups of the first kind and for meromorphic modular forms $f$ of non-zero weight; see page 19 of [Ro84]. In [HIvPT19] the authors studied the quotient of hyperbolic three space when acted upon by the discrete group $\mathrm{PSL}(2,\mathcal{O}_{K})$ where $\mathcal{O}_{K}$ denotes the ring of integers of an imaginary quadratic field $K$. 
In that setting, the function $\log|f|$ is replaced by a function which is harmonic at all but a finite number of points and at those points the function has prescribed singularities. As in [Ro84], the analogue of (2) involves a function $P$ which is constructed from a type of Kronecker limit formula. In [BK20] the authors returned to the setting of $\mathrm{PSL}(2,\mathbb{Z})$ acting on ${\mathbb{H}}$. Let $q_{z}=e^{2\pi iz}$ be the standard local coordinate near $\infty$ of $\mathrm{PSL}(2,\mathbb{Z})\backslash{\mathbb{H}}$. The Hauptmodul $j(z)$ is the unique $\mathrm{PSL}(2,\mathbb{Z})$ invariant holomorphic function on ${\mathbb{H}}$ whose expansion near $\infty$ is $j(z)=q_{z}^{-1}+o(q_{z}^{-1})$ as $z$ approaches $\infty$. Let $T_{n}$ denote the $n$-th Hecke operator and set $j_{n}(z)=j|T_{n}(z)$. The main results of [BK20] are the derivation of formulas for the regularized scalar product $\langle j_{n}(z),\log(({\mathrm{Im}}(z))^{k}|f(z)|)\rangle$ where $f$ is a weight $2k$ meromorphic modular form with respect to $\mathrm{PSL}(2,\mathbb{Z})$. Below we will discuss further the formulas from [BK20] and describe the way in which their results are natural extensions of (2). ### 1.3 Revisiting Rohrlich’s theorem The purpose of this article is to extend the point of view that the Rohlrich- Jensen formula is the evaluation of a particular type of inner product. To do so, we shall revisit the role of each of the two terms $j|T_{n}(z)$ and $\log(({\mathrm{Im}}(z))^{k}|f(z)|)$. The function $j|T_{n}(z)$ can be characterized as the unique holomorphic function which is $\mathrm{PSL}(2,\mathbb{Z})$ invariant on ${\mathbb{H}}$ and whose expansion near $\infty$ is $q_{z}^{-n}+o(q_{z}^{-1})$. These properties hold for the special value $s=1$ of the Niebur-Poincaré series $F_{-n}^{\Gamma}(z,s)$, which is defined in [Ni73] for any Fuchsian group $\Gamma$ of the first kind with one cusp and discussed in section 3.1 below. 
As proved in [Ni73], for any non-zero integer $m$, the Niebur-Poincaré series $F_{m}^{\Gamma}(z,s)$ is an eigenfunction of the hyperbolic Laplacian $\Delta_{\operatorname{hyp}}$; specifically, we have that $\Delta_{\operatorname{hyp}}F_{m}^{\Gamma}(z,s)=s(1-s)F_{m}^{\Gamma}(z,s).$ Also, $F_{m}^{\Gamma}(z,1)$ is orthogonal to constant functions. Furthermore, if $\Gamma=\mathrm{PSL}(2,\mathbb{Z})$, then for any positive integer $n$ there is an explicitly computable constant $c_{n}$ such that $F_{-n}^{\mathrm{PSL}(2,\mathbb{Z})}(z,1)=\frac{1}{2\pi\sqrt{n}}j_{n}(z)+c_{n}.$ (3) As a result, the Rohrlich-Jensen formula proved in [BK20], when combined with Rohrlich’s formula from [Ro84], reduces to computing the regularized inner product of $F_{-n}^{\mathrm{PSL}(2,\mathbb{Z})}(z,1)$ with $\log(({\mathrm{Im}}(z))^{k}|f(z)|)$. As for the term $\log(({\mathrm{Im}}(z))^{k}|f(z)|)$, we begin by recalling Proposition 12 from [JvPS19]. Let $2k\geq 4$ be any even positive integer, and let $f$ be a weight $2k$ meromorphic form which is $\Gamma$ invariant and with $q$-expansion at $\infty$ that is normalized so its constant term is equal to one. Set $\|f\|(z)=y^{k}|f(z)|$, where $z=x+iy$. Let $\mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s)$ be the elliptic Eisenstein series associated to the aforementioned data; a summary of the relevant properties of $\mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s)$ is given in section 4.3 below. Then, in [JvPS19] it is proved that one has the asymptotic relation $\sum_{w\in\mathcal{F}_{\Gamma}}\mathrm{ord}_{w}(f)\mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s)=-s\log\left(|f(z)||\eta_{\Gamma,\infty}^{4}(z)|^{-k}\right)+O(s^{2})\,\,\,\,\,\textrm{\rm as $s\to 0$}$ (4) where $\mathcal{F}_{\Gamma}$ is the fundamental domain for the action of $\Gamma$ on $\mathbb{H}$ and $\eta_{\Gamma,\infty}(z)$ is the analogue of the classical eta function for the modular group; see the Kronecker limit formula (24) for the parabolic Eisenstein series.
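The eigenvalue identity can be checked numerically term by term: each summand of the series defining $F_{m}^{\Gamma}(z,s)$ (recalled in section 3.1 below) is a translate of $e(mx)y^{1/2}I_{s-1/2}(2\pi|m|y)$, and for this seed the equation $\Delta_{\operatorname{hyp}}f=s(1-s)f$ reduces to an ODE in $y$. A minimal sketch with mpmath; the values of $s$, $m$ and $y_{0}$ below are arbitrary.

```python
import mpmath as mp

mp.mp.dps = 30
s, m = mp.mpf('1.3'), 1          # arbitrary parameters with Re(s) > 1
a = 2 * mp.pi * abs(m)

# radial factor of one summand of the Niebur-Poincare series
g = lambda y: mp.sqrt(y) * mp.besseli(s - mp.mpf(1) / 2, a * y)

# For f(x, y) = e(mx) g(y) the x-derivatives contribute -(2 pi m)^2 f, so
# Delta_hyp f = s(1-s) f is equivalent to -y^2 (g''(y) - a^2 g(y)) = s(1-s) g(y).
y0 = mp.mpf('0.7')
lhs = -y0**2 * (mp.diff(g, y0, 2) - a**2 * g(y0))
rhs = s * (1 - s) * g(y0)
print(abs(lhs - rhs))  # numerically zero
```

The reduction uses only the modified Bessel differential equation, so the same check works for any non-zero integer $m$ and any $s$ with ${\mathrm{Re}}(s)>1$.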
With this, formula (4) can be written as $\log\left(\|f\|(z)\right)=kP_{\Gamma}(z)-\sum_{w\in\mathcal{F}_{\Gamma}}\mathrm{ord}_{w}(f)\lim_{s\to 0}\frac{1}{s}\mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s),$ (5) where $P_{\Gamma}(z)=\log(|\eta_{\Gamma,\infty}^{4}(z)|{\mathrm{Im}}(z))$ is the Kronecker limit function associated to the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par}}_{\Gamma,\infty}(z,s)$; the precise normalizations and expressions defining $\mathcal{E}^{\mathrm{par}}_{\Gamma,\infty}(z,s)$ will be clarified below. Following [CJS20], one can recast (5) in terms of the resolvent kernel, which we shall now undertake. The resolvent kernel $G_{s}^{\Gamma}(z,w)$, also called the automorphic Green’s function, is the integral kernel which, for almost all $s\in{\mathbb{C}}$, inverts the operator $\Delta_{\operatorname{hyp}}+s(s-1)$. In particular, away from the diagonal, $\Delta_{\operatorname{hyp}}G_{s}^{\Gamma}(z,w)=s(1-s)G_{s}^{\Gamma}(z,w).$ The resolvent kernel is closely related to the elliptic Eisenstein series; see [vP16] as well as [CJS20]. Specifically, from Corollary 7.4 of [vP16], after taking into account a sign difference in our normalization, we have that $\mathrm{ord}(w)\mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s)=-\frac{2^{s+1}\sqrt{\pi}\Gamma(s+1/2)}{\Gamma(s)}G_{s}^{\Gamma}(z,w)+O(s^{2})\,\,\,\,\,\textrm{\rm as $s\to 0$}$ (6) for all $z,w\in\mathbb{H}$ with $z\neq\gamma w$ for $\gamma\in\Gamma$. It is now evident that one can express $\log\left(\|f\|(z)\right)$ as a type of Kronecker limit function. Indeed, upon using the functional equation for the Green’s function, we will prove below the following result.
Under certain general conditions, the form $f$ described above can be realized through a type of factorization theorem, namely that $\displaystyle\log\left(\|f\|(z)\right)$ $\displaystyle=-2k+2\pi\sum_{w\in\mathcal{F}_{\Gamma}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}\lim_{s\to 1}\left(G_{s}^{\Gamma}(z,w)+\mathcal{E}_{\Gamma,\infty}^{\mathrm{par}}(z,s)\right)$ $\displaystyle=2\pi\sum_{w\in\mathcal{F}_{\Gamma}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}\left[\lim_{s\to 1}\left(G_{s}^{\Gamma}(z,w)+\mathcal{E}_{\Gamma,\infty}^{\mathrm{par}}(z,s)\right)-\frac{2}{\mathrm{vol}_{\operatorname{hyp}}(\Gamma\backslash{\mathbb{H}})}\right].$ (7) With all this, it is evident that one can view the inner product realization of the Rohrlich-Jensen formula as a special value of the inner product of the Niebur-Poincaré series $F_{m}^{\Gamma}(z,s)$ with the sum of the resolvent kernel $G_{s}^{\Gamma}(z,w)$ and the parabolic Eisenstein series $\mathcal{E}_{\Gamma,\infty}^{\mathrm{par}}(z,s)$. Furthermore, because all terms are eigenfunctions of the Laplacian, one can seek to compute the inner product at hand in a manner similar to that which yields the Maass-Selberg formula. ### 1.4 Our main results Unless otherwise explicitly stated, we will assume for the remainder of this article that $\Gamma$ is any Fuchsian group of the first kind with one cusp. By conjugating $\Gamma$, if necessary, we may assume that the cusp is at $\infty$, with the cuspidal width equal to one. The group $\Gamma$ will be arbitrary, but fixed, throughout this article, so, for the sake of brevity, in the sequel we will suppress the index $\Gamma$ in the notation for Eisenstein series, the Niebur-Poincaré series, the Kronecker limit function, the fundamental domain and the resolvent kernel. When $\Gamma$ is taken to be the modular group or the Atkin-Lehner group, that will be indicated in the notation.
With the above discussion, we have established that one manner in which the Rohrlich-Jensen formula can be understood is through the study of the regularized inner product $\langle F_{-n}(\cdot,1),\overline{\lim_{s\to 1}\left(G_{s}(\cdot,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(\cdot,s)\right)}\rangle,$ (8) which is defined as follows. Since $\Gamma$ has one cusp at $\infty$, one can construct a (Ford) fundamental domain $\mathcal{F}$ of the action of $\Gamma$ on ${\mathbb{H}}$. Let $M=\Gamma\backslash{\mathbb{H}}$. A cuspidal neighborhood $\mathcal{F}_{\infty}(Y)$ of $\infty$ is given by $0<x\leq 1$ and $y\geq Y$, where $z=x+iy$ and $Y\in{\mathbb{R}}$ is sufficiently large. (We recall that we have normalized the cusp to be of width one.) Let $\mathcal{F}(Y)=\mathcal{F}\setminus\mathcal{F}_{\infty}(Y)$. Then, we define (8) to be $\lim_{Y\to\infty}\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(z,s)\right)d\mu_{\operatorname{hyp}}(z)$ where $d\mu_{\operatorname{hyp}}(z)$ denotes the hyperbolic volume element. The function $G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(z,s)$ is unbounded as $z\to w$. However, the growth of the function near $w$ is logarithmic, hence integrable, so it is not necessary to regularize the integral in (8) in a neighborhood containing $w$. The need to regularize the inner product (8) stems solely from the exponential growth of the factor $F_{-n}(z,1)$ as $z\to\infty$. Our first main result of this article is the following theorem. ###### Theorem 1. For any positive integer $n$ and any point $w\in\mathcal{F}$, $\langle F_{-n}(\cdot,1),\overline{\lim_{s\to 1}\left(G_{s}(\cdot,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(\cdot,s)\right)}\rangle=-\frac{\partial}{\partial s}F_{-n}(w,s)\Big{|}_{s=1}.$ (9) We can combine Theorem 1 with the factorization theorem (7) and properties of $F_{-n}(z,1)$ proved in [Ni73] to obtain the following extension of the Rohrlich-Jensen formula.
###### Corollary 1. In addition to the notation above, assume that the even weight $2k\geq 0$ meromorphic form $f$ has been normalized so its $q$-expansion at $\infty$ has constant term equal to $1$. Then we have that $\langle F_{-n}(\cdot,1),\log\|f\|\rangle=-2\pi\sum_{w\in\mathcal{F}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}\frac{\partial}{\partial s}\left.F_{-n}(w,s)\right|_{s=1}.$ (10) Let $g$ be a non-constant $\Gamma$ invariant holomorphic function on ${\mathbb{H}}$, which then necessarily has a pole at $\infty$. As such, there is a positive integer $K$ and a set of complex numbers $\\{a_{n}\\}_{n=1}^{K}$ such that $g(z)=\sum_{n=1}^{K}a_{n}q_{z}^{-n}+O(1)\,\,\,\,\,\textrm{as $z\rightarrow\infty$.}$ It is proved in [Ni73] that $g(z)=\sum_{n=1}^{K}2\pi\sqrt{n}a_{n}F_{-n}(z,1)+c(g)$ (11) for some constant $c(g)$ depending only upon $g$. With this, we can combine Corollary 1 and the Theorem on page 19 of [Ro84] to obtain the following result. ###### Corollary 2. With notation as above, there is a constant $\beta$, defined by the Laurent expansion of $\mathcal{E}_{\infty}^{\mathrm{par}}(z,s)$ near $s=1$, such that $\langle g,\log\|f\|\rangle=-2\pi\sum_{w\in\mathcal{F}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}\Bigg{(}2\pi\sum_{n=1}^{K}\sqrt{n}a_{n}\frac{\partial}{\partial s}F_{-n}(w,s)\Big{|}_{s=1}\\\ +c(g)(P(w)-\beta\operatorname{vol}_{\operatorname{hyp}}(M)+2)\Bigg{)}.$ (12) We refer the reader to equation (24) for further details regarding the normalizations which define $\beta$ and the parabolic Kronecker limit function $P$. Finally, we will consider the generating function of the normalized series constructed from the right-hand side of (9). Specifically, we will prove the following identity. ###### Theorem 2.
With notation as above, the generating series $\sum_{n\geq 1}2\pi\sqrt{n}\frac{\partial}{\partial s}F_{-n}(w,s)\Big{|}_{s=1}q_{z}^{n}$ is, in the $z$ variable, the holomorphic part of the weight two biharmonic Maass form $\mathcal{G}_{w}(z):=i\frac{\partial}{\partial z}\left(\frac{\partial}{\partial s}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(z,s)\right)\Big{|}_{s=1}\right).$ Note that a weight two biharmonic Maass form is a function which satisfies the weight two modularity in $z$ and which is annihilated by $\Delta_{2}^{2}=(\xi_{0}\circ\xi_{2})^{2}$, where, classically, $\xi_{\kappa}:=2iy^{\kappa}\overline{\frac{\partial}{\partial\overline{z}}}$. It is clear from the definition that $\mathcal{G}_{w}(z)$ satisfies the weight two modularity in the $z$ variable. In section 5.4 we will prove that $(\xi_{0}\circ\xi_{2})^{2}\mathcal{G}_{w}(z)=0$. In the case $\Gamma=\text{\rm PSL}(2,{\mathbb{Z}})$, our results will generalize the main theorems from [BK20], as we will discuss below. ### 1.5 Outline of the paper In section 2 we will establish notation and recall certain results from the literature. There are two specific examples of Poincaré series which are particularly important for our study, the Niebur-Poincaré series and the resolvent kernel. Both series are defined, and basic properties are presented, in section 3. In section 4 we state the Kronecker limit formulas associated to parabolic and elliptic Eisenstein series, and we prove the factorization theorem (7). The proofs of the main results listed above will be given in section 5. To illustrate our results, various examples are given in section 6. Our first example is when $\Gamma=\text{\rm PSL}(2,{\mathbb{Z}})$ where, as claimed above, our results yield the main theorems of [BK20]. We then turn to the case when $\Gamma$ is an Atkin-Lehner group $\Gamma_{0}(N)^{+}$ for square-free level $N$.
The first examples are when the genus of $\Gamma_{0}(N)^{+}$ is zero and when the function $g$ in Corollary 2 is the Hauptmodul $j_{N}^{+}(z)$. The next two examples we present are for levels $N=37$ and $N=103$. For these levels the genus of the quotient by $\Gamma_{0}(N)^{+}$ is one and two, respectively. In these cases, certain generators of the corresponding function fields were constructed in [JST16]. Consequently, we are able to employ the results from [JST16] and fully develop Corollary 2. ## 2 Background material ### 2.1 Basic notation Let $\Gamma\subset\text{\rm PSL}(2,\mathbb{R})$ denote a Fuchsian group of the first kind acting by fractional linear transformations on the hyperbolic upper half-plane $\mathbb{H}:=\\{z=x+iy\in\mathbb{C}\,|\,x,y\in\mathbb{R};\,y>0\\}$. We let $M:=\Gamma\backslash\mathbb{H}$, which is a finite volume hyperbolic Riemann surface, and denote by $p:\mathbb{H}\longrightarrow M$ the natural projection. We assume that $M$ has $e_{\Gamma}$ elliptic fixed points and one cusp at $\infty$ of width one. By an abuse of notation, we also say that $\Gamma$ has a cusp at $\infty$ of width one, meaning that the stabilizer $\Gamma_{\infty}$ of $\infty$ is generated by the matrix $\bigl{(}\begin{smallmatrix}1&1\\\ 0&1\end{smallmatrix}\bigr{)}$. We identify $M$ locally with its universal cover $\mathbb{H}$. By $\mathcal{F}$ we denote the “usual” (Ford) fundamental domain for $\Gamma$ acting on $\mathbb{H}$. We let $\mu_{\mathrm{hyp}}$ denote the hyperbolic metric on $M$, which is compatible with the complex structure of $M$, and has constant negative curvature equal to minus one. The hyperbolic line element $ds^{2}_{\operatorname{hyp}}$, resp. 
the hyperbolic Laplacian $\Delta_{\operatorname{hyp}}$ acting on functions, are given in the coordinate $z=x+iy$ on $\mathbb{H}$ by $\displaystyle ds^{2}_{\operatorname{hyp}}:=\frac{dx^{2}+dy^{2}}{y^{2}},\quad\textrm{resp.}\quad\Delta_{\operatorname{hyp}}:=-y^{2}\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right).$ By $d_{\mathrm{hyp}}(z,w)$ we denote the hyperbolic distance between the two points $z\in\mathbb{H}$ and $w\in\mathbb{H}$. Our normalization of the hyperbolic Laplacian is different from the one considered in [Ni73] and [He83] where the Laplacian is taken with the plus sign. ### 2.2 Modular forms Following [Se73], we define a weakly modular form $f$ of even weight $2k$ for $k\geq 0$ associated to $\Gamma$ to be a function $f$ which is meromorphic on $\mathbb{H}$ and satisfies the transformation property $f\left(\frac{az+b}{cz+d}\right)=(cz+d)^{2k}f(z),\quad\textrm{for any $\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in\Gamma$.}$ (13) In the setting of this paper, any weakly modular form $f$ will satisfy the relation $f(z+1)=f(z)$, so that for some positive integer $N$ we can write $f(z)=\sum\limits_{n=-N}^{\infty}a_{n}q_{z}^{n},\quad\text{ where }q_{z}=e(z)=e^{2\pi iz}.$ If $a_{n}=0$ for all $n<0$, then $f$ is said to be holomorphic at the cusp at $\infty$. A holomorphic modular form with respect to $\Gamma$ is a weakly modular form which is holomorphic on $\mathbb{H}$ and at all the cusps of $\Gamma$. When the weight $2k$ is zero, the transformation property (13) indicates that the function $f$ is invariant with respect to the action of elements of the group $\Gamma$, so it may be viewed as a meromorphic function on the surface $M=\Gamma\backslash\mathbb{H}$. In other words, a meromorphic function on $M$ is a weakly modular form of weight $0$.
For any two weight $2k$ weakly modular forms $f$ and $g$ associated to $\Gamma$, with integrable singularities at finitely many points in $\mathcal{F}$, the generalized inner product $\langle\cdot,\,\cdot\rangle$ is defined as $\langle f,g\rangle=\lim_{Y\to\infty}\int\limits_{\mathcal{F}(Y)}f(z)\overline{g(z)}(\text{\rm Im}(z))^{2k}d\mu_{\operatorname{hyp}}(z)$ (14) where the integration is taken over the portion $\mathcal{F}(Y)$ of the fundamental domain $\mathcal{F}$ equal to $\mathcal{F}\setminus\mathcal{F}_{\infty}(Y)$. ### 2.3 Atkin-Lehner groups Let $N=p_{1}\cdot\ldots\cdot p_{r}$ be a square-free positive integer, including the case $N=1$. The subset of $\text{\rm SL}(2,\mathbb{R})$, defined by $\displaystyle\Gamma_{0}(N)^{+}:=\left\\{\frac{1}{\sqrt{e}}\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in\text{\rm SL}(2,\mathbb{R}):\,\,\,ad- bc=e,\,\,\,a,b,c,d,e\in\mathbb{Z},\,\,\,e\mid N,\ e\mid a,\ e\mid d,\ N\mid c\right\\}$ is an arithmetic subgroup of $\text{\rm SL}(2,\mathbb{R})$. We use the terminology Atkin-Lehner groups of level $N$ to describe $\Gamma_{0}(N)^{+}$ in part because these groups are obtained by adding all Atkin-Lehner involutions to the congruence group $\Gamma_{0}(N)$, see [AL70]. Let $\\{\pm\textrm{Id}\\}$ denote the set of two elements, where Id is the identity matrix. In general, if $\Gamma$ is a subgroup of $\text{\rm SL}(2,\mathbb{R})$, we let $\overline{\Gamma}:=\Gamma/\\{\pm\textrm{Id}\\}$ denote its projection into $\textrm{PSL}(2,\mathbb{R})$. Set $Y_{N}^{+}:=\overline{\Gamma_{0}(N)^{+}}\backslash\mathbb{H}$. According to [Cum04], for any square-free $N$ the quotient space $Y_{N}^{+}$ has one cusp at $\infty$ with the cusp width equal to one. The spaces $Y_{N}^{+}$ will be used in the last section where we give examples of our results for generators of function fields of meromorphic functions on $Y_{N}^{+}$.
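As a concrete illustration of the definition above: taking $e=N$ produces the Fricke involution $w_{N}=\frac{1}{\sqrt{N}}\bigl(\begin{smallmatrix}0&-1\\ N&0\end{smallmatrix}\bigr)\in\Gamma_{0}(N)^{+}$, and conjugation by $w_{N}$ maps $\Gamma_{0}(N)$ back into itself, since $w_{N}\bigl(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\bigr)w_{N}^{-1}=\bigl(\begin{smallmatrix}d&-c/N\\ -Nb&a\end{smallmatrix}\bigr)$. A short integer-arithmetic check of this; the level and sample matrices are arbitrary.

```python
from itertools import product

N = 6  # arbitrary squarefree level

def in_gamma0(M):
    """Membership in Gamma_0(N): integral entries, determinant 1, N | c."""
    (a, b), (c, d) = M
    return a * d - b * c == 1 and c % N == 0

def mult(M1, M2):
    (a, b), (c, d) = M1
    (e, f), (g, h) = M2
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def conj_by_wN(M):
    """w_N M w_N^{-1} = (d, -c/N; -N b, a) for M = (a, b; c, d) in Gamma_0(N)."""
    (a, b), (c, d) = M
    assert c % N == 0
    return ((d, -c // N), (-N * b, a))

# sample elements of Gamma_0(N) built from (1, b; 0, 1) and (1, 0; N k, 1)
samples = [mult(((1, b), (0, 1)), ((1, 0), (N * k, 1)))
           for b, k in product(range(-2, 3), repeat=2)]

assert all(in_gamma0(M) for M in samples)
assert all(in_gamma0(conj_by_wN(M)) for M in samples)
print("w_N normalizes Gamma_0(N) on all samples")
```

The conjugation formula follows from a direct matrix multiplication and holds for every element of $\Gamma_{0}(N)$, not only the sampled ones.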
### 2.4 Generators of function fields of Atkin-Lehner groups of small genus An explicit construction of generators of function fields of all meromorphic functions on $Y_{N}^{+}$ with genus $g_{N,+}\leq 3$ was given in [JST16]. When $g_{N,+}=0$, the function field of meromorphic functions on $Y_{N}^{+}$ is generated by a single function, the Hauptmodul $j_{N}^{+}(z)$, which is normalized so that its $q$-expansion is of the form $q_{z}^{-1}+O(q_{z})$. The Hauptmodul $j_{N}^{+}(z)$ appears in “Monstrous Moonshine” and was investigated in many papers, starting with Conway and Norton [CN79]. The action of the $m$-th Hecke operator $T_{m}$ on $j_{N}^{+}(z)$ produces a meromorphic form on $Y_{N}^{+}$ with the $q$-expansion $j_{N}^{+}|T_{m}(z)=q_{z}^{-m}+O(q_{z})$. When $g_{N,+}\geq 1$, the function field associated to $Y_{N}^{+}$ is generated by two functions $x_{N}^{+}(z)$ and $y_{N}^{+}(z)$. Stemming from the results in [JST16], we have that for $g_{N,+}\leq 3$ the generators $x_{N}^{+}(z)$ and $y_{N}^{+}(z)$ can be chosen so that their $q$-expansions are of the form $x_{N}^{+}(z)=q_{z}^{-a}+\sum_{j=1}^{a-1}a_{j}q_{z}^{-j}+O(q_{z})\quad\text{and}\quad y_{N}^{+}(z)=q_{z}^{-b}+\sum_{j=1}^{b-1}b_{j}q_{z}^{-j}+O(q_{z})$ where $a,b$ are positive integers with $a\leq 1+g_{N,+}$, and $b\leq 2+g_{N,+}$. Furthermore, for $g_{N,+}\leq 3$, it is shown in [JST16] that all coefficients in the $q$-expansions of $x_{N}^{+}(z)$ and $y_{N}^{+}(z)$ are integers. For all such $N$, the precise values of these coefficients out to large order were computed, and the results are available at [JSTurl]. ## 3 Two Poincaré series In this section we will define the Niebur-Poincaré series $F_{m}(z,s)$ and the resolvent kernel, also referred to as the automorphic Green’s function $G_{s}(z,w)$. We refer the reader to [Ni73] for additional information regarding $F_{m}(z,s)$ and to [He83] and [Iwa02] and references therein for further details regarding $G_{s}(z,w)$.
As said above, we will suppress the group $\Gamma$ from the notation. ### 3.1 Niebur-Poincaré series We start with the definition and properties of the Niebur-Poincaré series $F_{m}(z,s)$ associated to a co-finite Fuchsian group with one cusp; then we will specialize results to the setting of Atkin-Lehner groups. #### 3.1.1 Niebur-Poincaré series associated to a co-finite Fuchsian group with one cusp Let $m$ be a non-zero integer, $z=x+iy\in\mathbb{H}$, and $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>1$. Recall the notation $e(x):=\exp(2\pi ix)$, and let $I_{s-1/2}$ denote the modified $I$-Bessel function of the first kind; see, for example, formula (B.32) in Appendix B.4 of [Iwa02]. The Niebur-Poincaré series $F_{m}(z,s)$ is defined by the series $F_{m}(z,s)=F_{m}^{\Gamma}(z,s):=\sum_{\gamma\in\Gamma_{\infty}\backslash\Gamma}e(m{\mathrm{Re}}(\gamma z))({\mathrm{Im}}(\gamma z))^{1/2}I_{s-1/2}(2\pi|m|{\mathrm{Im}}(\gamma z)).$ (15) For fixed $m$ and $z$, the series (15) converges absolutely and uniformly on any compact subset of the half plane ${\mathrm{Re}}(s)>1$. Moreover, $\Delta_{\operatorname{hyp}}F_{m}(z,s)=s(1-s)F_{m}(z,s)$ for all $s\in\mathbb{C}$ in the half plane ${\mathrm{Re}}(s)>1$. From Theorem 5 of [Ni73], we have that for any non-zero integer $m$, the function $F_{m}(z,s)$ admits a meromorphic continuation to the whole complex plane $s\in{\mathbb{C}}$. Moreover, $F_{m}(z,s)$ is holomorphic at $s=1$ and, according to the spectral expansion given in Theorem 5 of [Ni73], $F_{m}(z,1)$ is orthogonal to constant functions, meaning that $\langle F_{m}(z,1),1\rangle=0.$ For our purposes, it is necessary to employ the Fourier expansion of $F_{m}(z,s)$ in the cusp $\infty$. The Fourier expansion is proved in [Ni73] and involves Kloosterman sums $S(m,n;c)$, which we now define.
For any integers $m$ and $n$, and real number $c$, define $S(m,n;c)=S_{\Gamma}(m,n;c):=\sum_{\bigl{(}\begin{smallmatrix}a&\ast\\\ c&d\end{smallmatrix}\bigr{)}\in\Gamma_{\infty}\diagdown\Gamma\diagup\Gamma_{\infty}}e\left(\frac{ma+nd}{c}\right).$ For ${\mathrm{Re}}(s)>1$ and $z=x+iy\in\mathbb{H}$, the Fourier expansion of $F_{m}(z,s)$ is given by $F_{m}(z,s)=e(mx)y^{1/2}I_{s-1/2}(2\pi|m|y)+\sum_{k=-\infty}^{\infty}b_{k}(y,s;m)e(kx),$ (16) where $b_{0}(y,s;m)=\frac{y^{1-s}}{(2s-1)\Gamma(s)}2\pi^{s}|m|^{s-1/2}\sum_{c>0}S(m,0;c)c^{-2s}=\frac{y^{1-s}}{(2s-1)}B_{0}(s;m)$ and, for $k\neq 0$, $b_{k}(y,s;m)=B_{k}(s;m)y^{1/2}K_{s-1/2}(2\pi|k|y),$ with $B_{k}(s;m)=2\sum_{c>0}S(m,k;c)c^{-1}\cdot\left\\{\begin{array}[]{ll}J_{2s-1}\left(\frac{4\pi}{c}\sqrt{mk}\right),&\textrm{\rm if \,}mk>0\\\ I_{2s-1}\left(\frac{4\pi}{c}\sqrt{|mk|}\right),&\textrm{\rm if \,}mk<0.\end{array}\right.$ In the above expression, $J_{2s-1}$ denotes the $J$-Bessel function and $K_{s-1/2}$ is the modified Bessel function; see, for example, formula (B.28) in [Iwa02] for $J_{2s-1}$ and formula (B.34) of [Iwa02] for $K_{s-1/2}$. According to the proof of Theorem 6 from [Ni73], the Fourier expansion (16) extends by the principle of analytic continuation to the case when $s=1$, hence putting $B_{k}(1;m):=\lim_{s\downarrow 1}B_{k}(s;m)$, we have $F_{m}(z,1)=\frac{\sinh(2\pi|m|y)}{\pi\sqrt{|m|}}e(mx)+B_{0}(1;m)+\sum_{k\in\mathbb{Z}\setminus\\{0\\}}\frac{1}{2\sqrt{|k|}}e^{-2\pi|k|y}B_{k}(1;m)e(kx).$ (17) It is clear from (17) that for $n>0$ one has that $F_{-n}(z,1)=\frac{1}{2\pi\sqrt{n}}q_{z}^{-n}+O(1)\,\,\,\,\,\textrm{\rm as $z\rightarrow\infty$.}$ Moreover, applying $\frac{\partial}{\partial s}$ to the Fourier expansion (16), taking $s=1$ and reasoning analogously as in the proof of Lemma 4.3 (1), p.
19 of [BK20] we immediately deduce the following crude bound $\left.\frac{\partial}{\partial s}F_{-n}(z,s)\right|_{s=1}\ll\exp\left(2\pi n{\mathrm{Im}}(z)\right),\quad\text{as}\quad{\mathrm{Im}}(z)\to\infty.$ (18) We note that the value of the derivative of the Niebur-Poincaré series at $s=1$ satisfies a differential equation, namely that $\displaystyle\Delta_{\operatorname{hyp}}\left(\frac{\partial}{\partial s}\left.F_{-n}(z,s)\right|_{s=1}\right)$ $\displaystyle=\lim_{s\to 1}\Delta_{\operatorname{hyp}}\left(\frac{F_{-n}(z,s)-F_{-n}(z,1)}{(s-1)}\right)=$ $\displaystyle=\lim_{s\to 1}\left(\frac{s(1-s)F_{-n}(z,s)-0}{(s-1)}\right)=-F_{-n}(z,1).$ (19) #### 3.1.2 Fourier expansion when $\Gamma$ is an Atkin-Lehner group One can explicitly evaluate $B_{0}(1;m)$ for $m>0$ when $\Gamma$ is an Atkin-Lehner group. Set $\Gamma=\overline{\Gamma_{0}(N)^{+}}$ where $N$ is a squarefree positive integer, which we express as $N=\prod\limits_{\nu=1}^{r}p_{\nu}$. Let $B_{0,N}^{+}(1;m)$ denote the coefficient $B_{0}(1;m)$ for $\overline{\Gamma_{0}(N)^{+}}$. From Theorem 8 and Proposition 9 of [JST16] we get that $B_{0,N}^{+}(1;m)=\frac{12\sigma(m)}{\pi\sqrt{m}}\prod\limits_{\nu=1}^{r}\left(1-\frac{p_{\nu}^{\alpha_{p_{\nu}}(m)+1}(p_{\nu}-1)}{\left(p_{\nu}^{\alpha_{p_{\nu}}(m)+1}-1\right)(p_{\nu}+1)}\right),$ (20) where $\sigma(m)$ denotes the sum of divisors of a positive integer $m$ and $\alpha_{p}(m)$ is the largest integer such that $p^{\alpha_{p}(m)}$ divides $m$. These expressions will be used in our explicit examples in section 6 below. ### 3.2 Automorphic Green’s function The automorphic Green’s function, also called the resolvent kernel, for the Laplacian on $M$ is defined on page 31 of [He83]. In the notation of [He83], let $\chi$ be the identity character, $z,w\in\mathcal{F}$ with $z\neq w$, and $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>1$.
Formally, consider the series $G_{s}(z,w)=\sum_{\gamma\in\Gamma}k_{s}(\gamma z,w)$ with $k_{s}(z,w):=-\frac{\Gamma(s)^{2}}{4\pi\Gamma(2s)}\left[1-\left|\frac{z-w}{z-\overline{w}}\right|^{2}\right]^{s}F\left(s,s;2s;1-\left|\frac{z-w}{z-\overline{w}}\right|^{2}\right)$ and where $F(\alpha,\beta;\gamma;u)$ is the classical hypergeometric function. We should point out that the normalization we are using, which follows [He83], differs from the normalization for the Green’s function in Chapter 5 of [Iwa02]; the two normalizations differ by a minus sign. With this said, it is proved in Proposition 6.5 on p. 33 of [He83] that the series which defines $G_{s}(z,w)$ converges uniformly and absolutely on compact subsets of $(z,w,s)\in\mathcal{F}\times\mathcal{F}\times\\{s\in\mathbb{C}:{\mathrm{Re}}(s)>1\\}$. Furthermore, for all $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>1$, and all $z,w\in\mathbb{H}$ with $z\neq\gamma w$ for $\gamma\in\Gamma$, the function $G_{s}(z,w)$ is an eigenfunction of $\Delta_{\operatorname{hyp}}$ with eigenvalue $s(1-s)$. Combining formulas 9.134.1 and 8.703 of [GR07] and applying the identity $\cosh(d_{\operatorname{hyp}}(z,w))=\left(2-\left[1-\left|\frac{z-w}{z-\overline{w}}\right|^{2}\right]\right)\left(1-\left|\frac{z-w}{z-\overline{w}}\right|^{2}\right)^{-1}$ we deduce that $k_{s}(z,w)=-\frac{1}{2\pi}Q^{0}_{s-1}(\cosh(d_{\operatorname{hyp}}(z,w))),$ where $Q_{\nu}^{\mu}$ is the associated Legendre function as defined by formula 8.703 in [GR07], with $\nu=s-1$ and $\mu=0$. Now, we can combine Theorem 4 of [Ni73] with Theorem 5.3 of [Iwa02] to deduce the Fourier expansion of the automorphic Green’s function in terms of the Niebur-Poincaré series. Specifically, let $w\in\mathcal{F}$ be fixed. Assume $z\in\mathcal{F}$ with $y={\mathrm{Im}}(z)>\max\\{{\mathrm{Im}}(\gamma w):\gamma\in\Gamma\\}$, and assume $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>1$.
Then $G_{s}(z,w)$ admits the expansion $G_{s}(z,w)=-\frac{y^{1-s}}{2s-1}\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)-\sum_{k\in\mathbb{Z}\smallsetminus\\{0\\}}y^{1/2}K_{s-1/2}(2\pi|k|y)F_{-k}(w,s)e(kx)$ (21) where $\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)$ is the parabolic Eisenstein series associated to the cusp at $\infty$ of $\Gamma$; see the next section for its full description. The function $G_{s}(z,w)$ is unbounded as $z\to w$ and, according to Proposition 6.5 of [He83], we have the asymptotics $G_{s}(z,w)=\frac{\mathrm{ord}(w)}{2\pi}\log|z-w|+O(1),\quad\text{as}\quad z\to w.$ ## 4 Eisenstein series and their Kronecker limit formulas The purpose of this section is two-fold. First, we state the definitions of parabolic and elliptic Eisenstein series as well as their associated Kronecker limit formulas. Specific examples of the parabolic Kronecker limit formulas are recalled from [JST16]. Second, we prove the factorization theorem for meromorphic forms in terms of elliptic Kronecker limit functions, as stated in (5). ### 4.1 Parabolic Kronecker limit functions Associated to the cusp at $\infty$ of $\Gamma$ one has a parabolic Eisenstein series ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$. Let $\Gamma_{\infty}$ denote the stabilizer subgroup within $\Gamma$ of $\infty$. For $z\in\mathbb{H}$ and $s\in\mathbb{C}$ with $\textrm{Re}(s)>1$, ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ is defined by the series ${\cal E}^{\mathrm{par}}_{\infty}(z,s)=\sum\limits_{\gamma\in\Gamma_{\infty}\backslash\Gamma}\textrm{Im}(\gamma z)^{s}.$ It is well-known that ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ admits a meromorphic continuation to all $s\in{\mathbb{C}}$ and a functional equation in $s$. For us, the Kronecker limit formula means the determination of the constant term in the Laurent expansion of ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ at $s=1$.
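Each summand ${\mathrm{Im}}(\gamma z)^{s}$ of the series defining ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ is itself an eigenfunction of $\Delta_{\operatorname{hyp}}$ with eigenvalue $s(1-s)$, which is the mechanism behind the spectral role these series play. A quick numerical check with mpmath; the matrix and the evaluation point below are arbitrary.

```python
import mpmath as mp

mp.mp.dps = 30
s = mp.mpf('0.75')
a, b, c, d = 2, 1, 1, 1  # any integer matrix with ad - bc = 1

def f(x, y):
    """One term Im(gamma z)^s of the parabolic Eisenstein series."""
    z = mp.mpc(x, y)
    return (((a * z + b) / (c * z + d)).imag) ** s

x0, y0 = mp.mpf('0.3'), mp.mpf('0.8')
lap = -y0**2 * (mp.diff(f, (x0, y0), (2, 0)) + mp.diff(f, (x0, y0), (0, 2)))
print(abs(lap - s * (1 - s) * f(x0, y0)))  # numerically zero
```

The same computation works for any element of $\mathrm{SL}(2,\mathbb{R})$, reflecting the $\mathrm{PSL}(2,\mathbb{R})$-invariance of $\Delta_{\operatorname{hyp}}$.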
Classically, Kronecker’s limit formula is the assertion that for $\Gamma=\textrm{PSL}(2,\mathbb{Z})$ one has that $\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)=\frac{3}{\pi(s-1)}-\frac{1}{2\pi}\log\bigl{(}|\Delta(z)|{\mathrm{Im}}(z)^{6}\bigr{)}+C+O(s-1)\,\,\,\text{\rm as}\,\,\,s\rightarrow 1,$ (22) where $C=6(1-12\,\zeta^{\prime}(-1)-\log(4\pi))/\pi$ and $\Delta(z)$ is Dedekind’s Delta function, which is defined by $\Delta(z)=\left[q_{z}^{1/24}\prod\limits_{n=1}^{\infty}\left(1-q_{z}^{n}\right)\right]^{24}=\eta(z)^{24}.$ (23) We refer to [Si80] for a proof of (22), though the above formulation follows the normalization from [JST16]. For general Fuchsian groups of the first kind, Goldstein [Go73] studied analogues of Kronecker’s limit formula associated to parabolic Eisenstein series. After a slight renormalization and trivial generalization, Theorem 3-1 from [Go73] asserts that the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)$ admits the Laurent expansion $\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)=\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(M)(s-1)}+\beta-\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(M)}\log(|\eta_{\infty}^{4}(z)|{\mathrm{Im}}(z))+O(s-1),$ (24) as $s\to 1$ and where $\beta=\beta_{\Gamma}$ is a certain real constant depending only on the group $\Gamma$. As the notation suggests, the function $\eta_{\infty}(z)$ is a holomorphic form for $\Gamma$ and can be viewed as a generalization of the eta function $\eta(z)$ which is defined in (23) for the full modular group. By employing the functional equation for the parabolic Eisenstein series, as stated in Theorem 6.5 of [Iwa02], one can re-write the Kronecker limit formula as stating that $\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)=1+\log(|\eta_{\infty}^{4}(z)|{\mathrm{Im}}(z))\cdot s+O(s^{2})\quad\text{ as }s\to 0,$ (25) see Corollary 3 of [JvPS19].
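The product in (23) converges rapidly for ${\mathrm{Im}}(z)$ bounded away from zero, so the classical weight $12$ transformation law $\Delta(-1/z)=z^{12}\Delta(z)$ can be verified to high precision directly from the definition. A numerical sanity check; the truncation order and the test point are arbitrary.

```python
import mpmath as mp

mp.mp.dps = 30

def delta(z, terms=80):
    """Dedekind Delta via the truncated product q * prod_{n<=terms} (1 - q^n)^24."""
    q = mp.exp(2j * mp.pi * z)
    prod = mp.mpc(1)
    for n in range(1, terms + 1):
        prod *= (1 - q**n) ** 24
    return q * prod

z = mp.mpc(mp.mpf('0.25'), mp.mpf('0.9'))
lhs = delta(-1 / z)
rhs = z**12 * delta(z)
print(abs(lhs - rhs) / abs(rhs))  # numerically zero
```

Together with the trivial invariance $\Delta(z+1)=\Delta(z)$, this is the modularity of $\Delta$ under the generators of $\mathrm{PSL}(2,\mathbb{Z})$.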
In this formulation, we will call the function $P(z)=P_{\Gamma}(z):=\log(|\eta_{\infty}^{4}(z)|{\mathrm{Im}}(z))$ the parabolic Kronecker limit function of $\Gamma$. ### 4.2 Atkin-Lehner groups Let $N=p_{1}\cdot\ldots\cdot p_{r}$ be a positive squarefree number, which includes the possibility that $N=1$, and set $\ell_{N}=2^{1-r}\textrm{lcm}\Big{(}4,\ 2^{r-1}\frac{24}{(24,\sigma(N))}\Big{)}$ where lcm stands for the least common multiple of two numbers. In [JST16], Theorem 16, it is proved that $\Delta_{N}(z):=\left(\prod_{v\mid N}\eta(vz)\right)^{\ell_{N}}$ (26) is a weight $k_{N}=2^{r-1}\ell_{N}$ holomorphic form for $\Gamma_{0}(N)^{+}$ vanishing only at the cusp. By the valence formula, the order of vanishing of $\Delta_{N}(z)$ at the cusp is $\nu_{N}:=k_{N}\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})/(4\pi)$ where $\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})=\pi\sigma(N)/(3\cdot 2^{r})$ is the hyperbolic volume of the surface $Y_{N}^{+}$. The Kronecker limit formula (24) for the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par},N}_{\infty}(z,s)$ associated to $Y_{N}^{+}$ reads as $\mathcal{E}^{\mathrm{par},N}_{\infty}(z,s)=\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})(s-1)}+\beta_{N}-\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})}P_{N}(z)+O(s-1)$ (27) as $s\to 1$. From Example 7 and Example 4 of [JvPS19] we have the explicit evaluations of $\beta_{N}$ and $P_{N}(z)$. Namely, $\beta_{N}=-\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})}\left(\sum_{j=1}^{r}\frac{(p_{j}-1)\log p_{j}}{2(p_{j}+1)}-\log N+2\log(4\pi)+24\zeta^{\prime}(-1)-2\right)$ (28) and the parabolic Kronecker limit function $P_{N}(z)$ is given by $P_{N}(z)=\log\left(\sqrt[2^{r}]{\prod_{v\mid N}|\eta(vz)|^{4}}\cdot{\mathrm{Im}}(z)\right).$ ### 4.3 Elliptic Kronecker limit functions Elliptic subgroups of $\Gamma$ have finite order and a unique fixed point within $\mathbb{H}$.
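The quantities $\ell_{N}$, $k_{N}$ and $\nu_{N}$ are elementary to tabulate. The sketch below is a direct transcription of the formulas above (arranged so that all arithmetic stays integral); for $N=1$ it recovers $\Delta_{1}=\eta^{24}$ of weight $12$ vanishing to order one at the cusp.

```python
from math import gcd, lcm

def sigma(n):
    """Sum of divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def invariants(primes):
    """Return (ell_N, k_N, nu_N) for squarefree N given by its list of prime factors."""
    N = 1
    for p in primes:
        N *= p
    r = len(primes)
    t = 24 // gcd(24, sigma(N))
    # ell_N = 2^{1-r} lcm(4, 2^{r-1} t); handle the r = 0 case with t even
    x = 2 ** (r - 1) * t if r >= 1 else t // 2
    ell = lcm(4, x) * 2 ** (1 - r) if r <= 1 else lcm(4, x) // 2 ** (r - 1)
    k = ell * 2 ** (r - 1) if r >= 1 else ell // 2
    # nu_N = k_N vol(Y_N^+) / (4 pi) with vol(Y_N^+) = pi sigma(N) / (3 * 2^r)
    nu = k * sigma(N) // (12 * 2 ** r)
    return ell, k, nu

print(invariants([]))      # N = 1: (24, 12, 1), i.e. Delta_1 = eta^24 of weight 12
print(invariants([2]))     # N = 2: Delta_2 = (eta(z) eta(2z))^8 of weight 8
print(invariants([2, 3]))  # N = 6
```

For $N=6$ one finds $\ell_{6}=2$, so $\Delta_{6}=(\eta(z)\eta(2z)\eta(3z)\eta(6z))^{2}$ has weight $4$ and, consistent with the valence formula, $\nu_{6}=1$.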
For all but a finite number of $w\in\mathcal{F}$, the order of the elliptic subgroup $\Gamma_{w}$ which fixes $w$ is one. For $z\in\mathbb{H}$ with $z\not=w$ and $s\in\mathbb{C}$ with $\textrm{Re}(s)>1$, the elliptic Eisenstein series ${\cal E}^{\textrm{ell}}_{w}(z,s)$ is defined by the series ${\cal E}^{\textrm{ell}}_{w}(z,s)=\sum\limits_{\gamma\in\Gamma_{w}\backslash\Gamma}\sinh(d_{\mathrm{hyp}}(\gamma z,w))^{-s}=\sum\limits_{\gamma\in\Gamma_{w}\backslash\Gamma}\left(\frac{2\,\textrm{Im}(w)\textrm{Im}(\gamma z)}{|\gamma z-w|\,|\gamma z-\overline{w}|}\right)^{s}.$ (29) It was first shown in [vP10] that (29) admits a meromorphic continuation to all $s\in{\mathbb{C}}$. The analogue of the Kronecker limit formula for ${\cal E}^{\textrm{ell}}_{w}(z,s)$ was first proved in [vP10]; see also [JvPS19]. In the setting of this paper, it is shown in [vP10] that for any $w\in\mathcal{F}$ the series (29) admits the Laurent expansion $\mathrm{ord}(w)\,\mathcal{E}^{\mathrm{ell}}_{w}(z,s)-\frac{2^{s}\sqrt{\pi}\,\Gamma(s-\frac{1}{2})}{\Gamma(s)}\mathcal{E}^{\mathrm{par}}_{\infty}(w,1-s)\,\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)=\\\ =-\frac{2\pi}{\operatorname{vol}_{\operatorname{hyp}}(M)}-\frac{2\pi}{\operatorname{vol}_{\operatorname{hyp}}(M)}\log\bigl{(}|H_{\Gamma}(z,w)|^{\mathrm{ord}(w)}{\mathrm{Im}}(z)\bigr{)}\cdot s+O(s^{2})\quad\textrm{as $s\rightarrow 0$.}$ (30) As a function of $z$, $H(z,w):=H_{\Gamma}(z,w)$ is holomorphic on $\mathbb{H}$ and uniquely determined up to multiplication by a complex constant of absolute value one; in addition, $H(z,w)$ is an automorphic form with a non-trivial multiplier system, which depends on $w$, with respect to $\Gamma$ acting on $z$. The function $H(z,w)$ vanishes if and only if $z=\gamma w$ for some $\gamma\in\Gamma$. We call the function $E_{w}(z)=E_{w,\Gamma}(z):=\log\bigl{(}|H(z,w)|^{\mathrm{ord}(w)}{\mathrm{Im}}(z)\bigr{)}$ the elliptic Kronecker limit function of $\Gamma$ at $w$. 
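The equality of the two expressions in (29) rests on the classical identity $\sinh(d_{\mathrm{hyp}}(z,w))=\frac{|z-w|\,|z-\overline{w}|}{2\,{\mathrm{Im}}(z){\mathrm{Im}}(w)}$, a companion of $\cosh(d_{\mathrm{hyp}}(z,w))=1+\frac{|z-w|^{2}}{2\,{\mathrm{Im}}(z){\mathrm{Im}}(w)}$. A quick random numerical check of the identity:

```python
import math
import random

def cosh_dist(z, w):
    """cosh of the hyperbolic distance between two points of the upper half-plane."""
    return 1 + abs(z - w) ** 2 / (2 * z.imag * w.imag)

random.seed(0)
for _ in range(100):
    z = complex(random.uniform(-3, 3), random.uniform(0.1, 3))
    w = complex(random.uniform(-3, 3), random.uniform(0.1, 3))
    sinh_d = math.sinh(math.acosh(cosh_dist(z, w)))
    claimed = abs(z - w) * abs(z - w.conjugate()) / (2 * z.imag * w.imag)
    assert math.isclose(sinh_d, claimed, rel_tol=1e-6)
print("sinh identity verified on 100 random pairs")
```

The identity itself follows from $|z-\overline{w}|^{2}=|z-w|^{2}+4\,{\mathrm{Im}}(z){\mathrm{Im}}(w)$ together with $\cosh^{2}-\sinh^{2}=1$.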
### 4.4 A factorization theorem We can now prove equation (5). ###### Proposition 1. With notation as above, let $f$ be a weight $2k$ meromorphic form on $\mathbb{H}$ with $q$-expansion at $\infty$ given by $f(z)=1+\sum_{n=1}^{\infty}b_{f}(n)q_{z}^{n}.$ (31) Let $\mathrm{ord}_{w}(f)$ denote the order of $f$ at $w$ and define the function $H_{f}(z):=\prod_{w\in\mathcal{F}}H(z,w)^{\mathrm{ord}_{w}(f)}$ where $H(z,w)=H_{\Gamma}(z,w)$ is given in (30). Then there exists a complex constant $c_{f}$ such that $f(z)=c_{f}H_{f}(z).$ (32) Furthermore, $\left|c_{f}\right|=\exp\left(-\frac{2\pi}{\operatorname{vol}_{\operatorname{hyp}}(M)}\sum_{w\in\mathcal{F}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}\left(2-\log 2+P(w)-\beta\operatorname{vol}_{\operatorname{hyp}}(M)\right)\right),$ where $P(w)$ and $\beta$ are defined through the parabolic Kronecker limit function (24). ###### Proof. The proof closely follows the proof of Theorem 9 from [JvPS19]. Specifically, following the first part of the proof almost verbatim, we conclude that the quotient $F_{f}(z):=\frac{H_{f}(z)}{f(z)}$ is a non-vanishing holomorphic function on $M$ which is bounded and non-zero at the cusp at $\infty$. Hence, $\log|F_{f}(z)|$ is $L^{2}$ on $M$. From its spectral expansion and the fact that $\log|F_{f}(z)|$ is harmonic, one concludes that $\log|F_{f}(z)|$ is constant, hence so is $F_{f}(z)$. The evaluation of the constant is obtained by considering the limiting behavior as $z$ approaches $\infty$, which is computed using the asymptotic behavior of $H(z,w)$ as ${\mathrm{Im}}(z)\to\infty$, as given in Proposition 6 of [JvPS19]. ∎ By following the proof of Proposition 12 from [JvPS19] we obtain (4), and hence (5), for meromorphic forms $f$ on $\mathbb{H}$ with $q$-expansion (31). We leave the verification of this simple argument to the reader. 
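The numerical invariants of Section 4.2 are easy to tabulate. The sketch below (exact rational arithmetic; `math.lcm` needs Python 3.9+) computes $\ell_{N}$, $k_{N}=2^{r-1}\ell_{N}$, $\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})=\pi\sigma(N)/(3\cdot 2^{r})$ and $\nu_{N}=k_{N}\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})/(4\pi)$ directly from the formulas above, and reproduces the values quoted later for the levels $N=1$, $37$ and $103$:

```python
from fractions import Fraction
from math import gcd, lcm, prod

def sigma(n):
    # sum of divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def invariants(primes):
    # primes: list of the distinct prime divisors of the squarefree level N
    N = prod(primes)                 # prod([]) == 1 covers the case N = 1
    r = len(primes)
    inner = Fraction(2) ** (r - 1) * Fraction(24, gcd(24, sigma(N)))
    assert inner.denominator == 1
    ell = Fraction(2) ** (1 - r) * lcm(4, int(inner))   # ell_N
    assert ell.denominator == 1
    k = Fraction(2) ** (r - 1) * ell                    # k_N = 2^{r-1} ell_N
    vol_over_pi = Fraction(sigma(N), 3 * 2 ** r)        # vol_hyp(Y_N^+) / pi
    nu = k * vol_over_pi / 4                            # nu_N = k_N vol / (4 pi)
    return int(ell), int(k), vol_over_pi, int(nu)

assert invariants([]) == (24, 12, Fraction(1, 3), 1)      # N = 1
assert invariants([37]) == (12, 12, Fraction(19, 3), 19)  # N = 37
assert invariants([103]) == (12, 12, Fraction(52, 3), 52) # N = 103
```

These outputs match the data used in Section 6: $\ell_{1}=24$, $k_{1}=12$, $\nu_{1}=1$; $\operatorname{vol}_{\operatorname{hyp}}(Y_{37}^{+})=19\pi/3$ with $\nu_{37}=19$; and $\operatorname{vol}_{\operatorname{hyp}}(Y_{103}^{+})=52\pi/3$ with $\nu_{103}=52$.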
## 5 Proofs of main results ### 5.1 Proof of Theorem 1 Let $Y>1$ be sufficiently large so that the cuspidal neighborhood $\mathcal{F}_{\infty}(Y)$ of the cusp $\infty$ in $\mathcal{F}$ is of the form $\\{z\in{\mathbb{H}}:0<x<1,y>Y\\}$. For $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>1$ and an arbitrary but fixed $w\in\mathcal{F}$, we then have that $\displaystyle\int\limits_{\mathcal{F}(Y)}\Delta_{\operatorname{hyp}}(F_{-n}(z,1))$ $\displaystyle\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)d\mu_{\operatorname{hyp}}(z)$ $\displaystyle-\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\Delta_{\operatorname{hyp}}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)d\mu_{\operatorname{hyp}}(z)$ $\displaystyle=-s(1-s)\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)d\mu_{\operatorname{hyp}}(z).$ Actually, the first summand on the left-hand side is zero since $F_{-n}(z,1)$ is harmonic; however, this judicious form of the number zero is significant since we will use the method behind the Maass-Selberg theorem to study the left-hand side of the above equation. Before this, note that the integrand on the right-hand side of the above equation is holomorphic at $s=1$. 
As a result, we can write $\displaystyle\frac{\partial}{\partial s}$ $\displaystyle\left.\left(-s(1-s)\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)d\mu_{\operatorname{hyp}}(z)\right)\right|_{s=1}$ $\displaystyle=\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)d\mu_{\operatorname{hyp}}(z).$ Therefore, $\displaystyle\langle F_{-n}(z,1),$ $\displaystyle\overline{\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)}\rangle$ $\displaystyle=\lim_{Y\to\infty}\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)d\mu_{\operatorname{hyp}}(z)$ $\displaystyle=\lim_{Y\to\infty}\left[\frac{\partial}{\partial s}\left(\int\limits_{\mathcal{F}(Y)}\Delta_{\operatorname{hyp}}(F_{-n}(z,1))\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)d\mu_{\operatorname{hyp}}(z)\right.\right.$ $\displaystyle-\left.\left.\left.\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\Delta_{\operatorname{hyp}}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)d\mu_{\operatorname{hyp}}(z)\right)\right|_{s=1}\right]$ (33) The quantity on the right-hand side of (5.1) is set up for an application of Green’s theorem as in the proof of the Maass-Selberg relations for the Eisenstein series. As described on page 89 of [Iwa02], when applying Green’s theorem to each term on the right-hand side of (5.1) for fixed $Y$, the resulting boundary terms on the sides of the fundamental domain, which are identified by $\Gamma$, will sum to zero. 
As such, we get that $\displaystyle\langle F_{-n}(z,1),$ $\displaystyle\overline{\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)}\rangle$ $\displaystyle=\lim_{Y\to\infty}\left[\frac{\partial}{\partial s}\left(\int\limits_{0}^{1}\frac{\partial}{\partial y}F_{-n}(z,1)\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)dx\right.\right.$ $\displaystyle-\left.\left.\left.\int\limits_{0}^{1}F_{-n}(z,1)\frac{\partial}{\partial y}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)dx\right)\right|_{s=1}\right],$ (34) where functions of $z$ and its derivatives with respect to $y={\mathrm{Im}}(z)$ are evaluated at $z=x+iY$. In order to compute the difference of the two integrals of the right-hand side of (5.1), we will use the Fourier expansions (17) and (21) of the series $F_{-n}(z,1)$ and $G_{s}(z,w)$ respectively. It will be more convenient to write the first coefficient in the expansion (17) as $e(-nx)\sqrt{y}I_{\tfrac{1}{2}}(2\pi ny)$, as in (16). Specifically, since the exponential functions $e(-nx)$ are orthogonal for different values of $n$, we get that (5.1) is equal to $-F_{-n}(w,s)\sqrt{Y}\left(\frac{\partial}{\partial y}\left.\left(\sqrt{y}I_{\tfrac{1}{2}}(2\pi ny)\right)\right|_{y=Y}\cdot K_{s-\tfrac{1}{2}}(2\pi nY)\right.\\\ \left.-I_{\tfrac{1}{2}}(2\pi nY)\cdot\frac{\partial}{\partial y}\left.\left(\sqrt{y}K_{s-\tfrac{1}{2}}(2\pi ny)\right)\right|_{y=Y}\right)$ $+B_{0}(1;-n)(1-s)\frac{Y^{-s}}{2s-1}\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\\\ +\sum_{j\in\mathbb{Z}\smallsetminus\\{0\\}}F_{j}(w,s)\left(b_{j}(Y,1;-n)\cdot\frac{\partial}{\partial y}\left.\left(\sqrt{y}K_{s-\tfrac{1}{2}}(2\pi|j|y)\right)\right|_{y=Y}\right.\\\ \left.-\frac{\partial}{\partial y}\left.b_{j}(y,1;-n)\right|_{y=Y}\cdot\sqrt{Y}K_{s-\tfrac{1}{2}}(2\pi|j|Y)\right)=T_{1}(Y,s;w)+T_{2}(Y,s;w)+T_{3}(Y,s;w),$ where the last equality above provides the definitions of the functions $T_{1}$, $T_{2}$ and $T_{3}$. 
Therefore, from (5.1) we conclude that $\langle F_{-n}(z,1),\overline{\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)}\rangle=\\\ =\lim_{Y\to\infty}\left[\left.\frac{\partial}{\partial s}\left(T_{1}(Y,s;w)+T_{2}(Y,s;w)+T_{3}(Y,s;w)\right)\right|_{s=1}\right]$ (35) We will treat each of the three terms on the right-hand side of (35) separately. To evaluate the term $T_{1}$ in (35), we apply formulas 8.486.2 and 8.486.11 of [GR07] in order to compute derivatives of the Bessel functions. In doing so, we conclude that $T_{1}(Y,s;w)=-\frac{X}{2}F_{-n}(w,s)\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X)+I_{\tfrac{3}{2}}(X))+I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X)+K_{s+\tfrac{1}{2}}(X))\right],$ where we set $X=2\pi nY$. Next, we express $K_{s+\tfrac{1}{2}}(X)$ in terms of $K_{s-\tfrac{1}{2}}(X)$ and $K_{s-\tfrac{3}{2}}(X)$, using formula 8.486.10 from [GR07] to get $K_{s+\tfrac{1}{2}}(X)=K_{s-\tfrac{3}{2}}(X)+\frac{2s-1}{X}K_{s-\tfrac{1}{2}}(X).$ Then, applying formula 8.486.21 from [GR07], we deduce that $\displaystyle\frac{\partial}{\partial s}$ $\displaystyle\left.\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X)+I_{\tfrac{3}{2}}(X))+I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X)+K_{s+\tfrac{1}{2}}(X))\right]\right|_{s=1}$ $\displaystyle=\sqrt{\frac{\pi}{2X}}e^{X}\mathrm{Ei}(-2X)\left[-(I_{-\tfrac{1}{2}}(X)+I_{\tfrac{3}{2}}(X))+\sqrt{\frac{2}{\pi X}}(2-1/X)\sinh(X)\right]+\frac{2}{X^{2}}e^{-X}\sinh(X),$ where $\mathrm{Ei}(x)$ denotes the exponential integral; see section 8.21 of [GR07]. Continuing, we now employ formula (B.36) from [Iwa02] which asserts certain asymptotic behavior of the $I$-Bessel function as $X\to\infty$; we are interested in the cases when $\nu=-1/2$ and when $\nu=3/2$. 
This result, together with the bound $|\mathrm{Ei}(-2X)|\leq e^{-2X}/(2X)$, which follows from the expression 8.212.10 from [GR07] for $\mathrm{Ei}(-x)$ with $x>0$, yields that $\lim_{X\to\infty}\frac{X}{2}\frac{\partial}{\partial s}\left.\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X)+I_{\tfrac{3}{2}}(X))+I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X)+K_{s+\tfrac{1}{2}}(X))\right]\right|_{s=1}=0.$ Therefore, $\lim_{Y\to\infty}\frac{\partial}{\partial s}\left.T_{1}(Y,s;w)\right|_{s=1}=-\frac{\partial}{\partial s}\left.F_{-n}(w,s)\right|_{s=1}\cdot\\\ \cdot\lim_{X\to\infty}\frac{X}{2}\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X)+I_{\tfrac{3}{2}}(X))+I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X)+K_{s+\tfrac{1}{2}}(X))\right].$ Finally, by applying (B.36) from [Iwa02] again, we deduce that $\lim_{X\to\infty}\frac{X}{2}\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X)+I_{\tfrac{3}{2}}(X))+I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X)+K_{s+\tfrac{1}{2}}(X))\right]=1.$ Hence $\lim_{Y\to\infty}\frac{\partial}{\partial s}\left.T_{1}(Y,s;w)\right|_{s=1}=-\frac{\partial}{\partial s}\left.F_{-n}(w,s)\right|_{s=1}.$ (36) As for the term $T_{2}$ in (35), let us use the Laurent series expansion (24) of $\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)$, from which one easily deduces that $\frac{\partial}{\partial s}\left.(s-1)\frac{Y^{-s}}{2s-1}\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right|_{s=1}=\frac{1}{Y}\left(\beta-\frac{P(w)+2+\log Y}{\operatorname{vol}_{\operatorname{hyp}}(M)}\right).$ Therefore $\lim_{Y\to\infty}\frac{\partial}{\partial s}\left.T_{2}(Y,s;w)\right|_{s=1}=0.$ (37) It remains to study the term $T_{3}$ in (35). Let us set $g(s,y,k):=\sqrt{y}K_{s-\tfrac{1}{2}}(2\pi ky)$ for positive integers $k$. 
Then $b_{j}(y,1;-n)=B_{j}(1;-n)g(1,y,|n|)$ and $T_{3}(Y,s;w)=\sum_{j\in\mathbb{Z}\smallsetminus\\{0\\}}B_{j}(1;-n)F_{j}(w,s)\left(g(1,Y,|n|)\frac{\partial}{\partial y}\left.g(s,y,|j|)\right|_{y=Y}\right.\\\ -\left.g(s,Y,|j|)\frac{\partial}{\partial y}\left.g(1,y,|n|)\right|_{y=Y}\right).$ For positive integers $m$ and $\ell$ let us define $G(s,Y,m,\ell):=g(1,Y,m)\frac{\partial}{\partial y}\left.g(s,y,\ell)\right|_{y=Y}-g(s,Y,\ell)\frac{\partial}{\partial y}\left.g(1,y,m)\right|_{y=Y}.$ Applying the formula 8.486.11 from [GR07] to differentiate the $K$-Bessel function, together with formula 8.486.10 to express $K_{s+\tfrac{1}{2}}(2\pi|j|Y)$, we arrive at $G(s,Y,|n|,|j|)=\frac{\pi Y}{2}K_{s-\tfrac{1}{2}}(2\pi|j|Y)K_{\tfrac{1}{2}}(2\pi|n|Y)\cdot\\\ \cdot\left(|n|(K_{-\tfrac{1}{2}}(2\pi|n|Y)+K_{\tfrac{3}{2}}(2\pi|n|Y))-|j|\left(2K_{s-\tfrac{3}{2}}(2\pi|j|Y)+\frac{2s-1}{2\pi|j|Y}K_{s-\tfrac{1}{2}}(2\pi|j|Y)\right)\right).$ Now, we combine the bound (B.36) from [Iwa02] with evaluation of the derivative $\frac{\partial}{\partial\nu}K_{\nu}$ at $\nu=\pm 1/2$ (formula 8.486.21 of [GR07]) and the bound $|\mathrm{Ei}(-4\pi|j|Y)|\leq\exp(-4\pi|j|Y)/(4\pi|j|Y)$ for the exponential integral function to deduce the following crude bounds $\max\left\\{G(s,Y,|n|,|j|),\left.\frac{\partial}{\partial s}G(s,Y,|n|,|j|)\right|_{s=1}\right\\}\ll(|n|+|j|)\exp(-2\pi Y(|n|+|j|)),\text{ as }Y\to+\infty,$ where the implied constant is independent of $Y$ and $|j|$. This, together with the bound (18) and the Fourier expansion (17), yields $\frac{\partial}{\partial s}T_{3}(Y,s;w)\Big{|}_{s=1}\ll\sum_{j\in\mathbb{Z}\smallsetminus\\{0\\}}(|n|+|j|)|B_{j}(1;-n)|\exp\left(-2\pi Y(|n|+|j|)+2\pi|j|{\mathrm{Im}}(w)\right).$ It remains to estimate the sum on the right-hand side of the above equation as $Y\to\infty$. The bounds for the Kloosterman sum zeta function as stated on page 75 of [Iwa02] yield bounds for $B_{j}(1;-n)$ for $j\neq 0$. 
Specifically, one has that $B_{j}(1;-n)\ll\exp\left(\frac{4\pi\sqrt{|jn|}}{c_{\Gamma}}\right)$ where $c_{\Gamma}$ is a certain positive constant depending on the group $\Gamma$; in fact, $c_{\Gamma}$ is equal to the minimal positive lower-left entry of a matrix from $\Gamma$. Also, the implied constant in the bound for $B_{j}(1;-n)$ is independent of $j$. Therefore $\frac{\partial}{\partial s}\left.T_{3}(Y,s;w)\right|_{s=1}\ll\sum_{j\in\mathbb{Z}\smallsetminus\\{0\\}}(|n|+|j|)\exp\left(-2\pi\left((|j|+|n|)Y-2\sqrt{|jn|}/c_{\Gamma}-|j|{\mathrm{Im}}(w)\right)\right).$ For $Y>2{\mathrm{Im}}(w)+2\sqrt{n}/c_{\Gamma}$, this series over $j$ is uniformly convergent and is $o(1)$ as $Y\to\infty$. In other words, $\lim_{Y\to\infty}\frac{\partial}{\partial s}\left.T_{3}(Y,s;w)\right|_{s=1}=0.$ (38) When combining (38) with (35), (36), and (37), we have that $\langle F_{-n}(z,1),\overline{\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)}\rangle=\frac{\partial}{\partial s}\left.F_{-n}(w,s)\right|_{s=1},$ which completes the proof of (9). ### 5.2 Proof of Corollary 1 The proof of Corollary 1 is a combination of Theorem 1 and the factorization theorem as stated in Proposition 1. The details are as follows. To begin, we shall prove formula (1.3). Starting with (5), which is written as $\log\left(y^{k}|f(z)|\right)=kP(z)-\sum_{w\in\mathcal{F}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}\lim_{s\to 0}\frac{1}{s}\mathrm{ord}(w)\mathcal{E}^{\mathrm{ell}}_{w}(z,s),$ we can express $\lim_{s\to 0}\frac{1}{s}\mathrm{ord}(w)\mathcal{E}^{\mathrm{ell}}_{w}(z,s)$ in terms of the resolvent kernel. 
Specifically, using (6), we have that $\log\left(y^{k}|f(z)|\right)=kP(z)+\sum_{w\in\mathcal{F}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}\lim_{s\to 0}\left(\frac{2^{s}\sqrt{\pi}\Gamma(s-1/2)}{\Gamma(s+1)}(2s-1)G_{s}(z,w)\right).$ (39) By applying the functional equation for the Green’s function, see Theorem 3.5 of [He83] on pages 250–251, we get $\displaystyle\lim_{s\to 0}\frac{2^{s}\sqrt{\pi}\Gamma(s-1/2)}{\Gamma(s+1)}(2s-1)G_{s}(z,w)=\lim_{s\to 1}$ $\displaystyle\left(\frac{2^{1-s}\sqrt{\pi}\Gamma(1/2-s)}{\Gamma(2-s)}\left((1-2s)G_{s}(z,w)\right.\right.$ $\displaystyle-\left.\left.\frac{}{}\mathcal{E}_{\infty}^{\mathrm{par}}(z,1-s)\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)\right).$ From the Kronecker limit formula (25) and standard Taylor series expansion of the gamma function we immediately deduce that $\lim_{s\to 0}\frac{2^{s}\sqrt{\pi}\Gamma(s-1/2)}{\Gamma(s+1)}(2s-1)G_{s}(z,w)=\lim_{s\to 1}2\pi(-1+(s-1)(2-\log 2))\cdot\\\ \cdot\left[2(1-s)G_{s}(z,w)-(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s))-P(z)(1-s)\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right].$ According to [Iwa02], p. 106, the point $s=1$ is the simple pole of $G_{s}(z,w)$ with the residue $-1/\operatorname{vol}_{\operatorname{hyp}}(M)$ (note that our $G_{s}(z,w)$ differs from the automorphic Green’s function from [Iwa02] by a factor of $-1$). 
Therefore, the Kronecker limit formula (24) yields the following equation $\displaystyle\lim_{s\to 0}\frac{2^{s}\sqrt{\pi}\Gamma(s-1/2)}{\Gamma(s+1)}(2s-1)G_{s}(z,w)$ $\displaystyle=-\frac{2\pi}{\operatorname{vol}_{\operatorname{hyp}}(M)}P(z)-\frac{4\pi}{\operatorname{vol}_{\operatorname{hyp}}(M)}$ (40) $\displaystyle+2\pi\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right).$ Recall that the classical Riemann-Roch theorem implies that $k\frac{\operatorname{vol}_{\operatorname{hyp}}(M)}{2\pi}=\sum_{w\in\mathcal{F}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)};$ hence, after multiplying (40) by $\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}$ and taking the sum over all $w\in\mathcal{F}$ in (39), we arrive at (1.3), as claimed. Having proved (1.3), observe that the left-hand side of (1.3) is real-valued. As proved in [Ni73], $F_{-n}(z,1)$ is orthogonal to constant functions. Therefore, in order to prove (10) one simply applies (9), which was established above. ### 5.3 Proof of Corollary 2 In order to prove (12), it suffices to compute $\langle 1,\overline{\lim_{s\to 1}(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s))}\rangle$, which we will write as $\int\limits_{\mathcal{F}}\lim_{s\to 1}\left(G_{s}(z,w)+\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(M)(s-1)}+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)-\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(M)(s-1)}\right)d\mu_{\operatorname{hyp}}(z).$ From its spectral expansion, the function $\lim_{s\to 1}\left(G_{s}(z,w)+\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(M)(s-1)}\right)$ is $L^{2}$ on $\mathcal{F}$ and orthogonal to constant functions. Therefore, by using the Laurent series expansion (24), we get that $\langle 1,\overline{\lim_{s\to 1}(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s))}\rangle=\operatorname{vol}_{\operatorname{hyp}}(M)\left(\beta-\frac{P(w)}{\operatorname{vol}_{\operatorname{hyp}}(M)}\right),$ which completes the proof. 
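For the record, the Bessel-function manipulations in Section 5.1 can be spot-checked numerically. Using the elementary closed forms $K_{\pm 1/2}(x)=\sqrt{\pi/(2x)}\,e^{-x}$, $K_{3/2}(x)=\sqrt{\pi/(2x)}\,e^{-x}(1+1/x)$, $I_{1/2}(x)=\sqrt{2/(\pi x)}\sinh x$, $I_{-1/2}(x)=\sqrt{2/(\pi x)}\cosh x$ and $I_{3/2}(x)=\sqrt{2/(\pi x)}(\cosh x-\sinh x/x)$, one can check the recurrence for $K_{s+1/2}$ at $s=1$, and one finds that at $s=1$ the bracketed expression in the limit computation collapses to $2/X$ exactly, consistent with the asserted limit $1$. A sketch:

```python
import math

def K(nu, x):
    # closed forms for half-integer K-Bessel functions
    base = math.sqrt(math.pi / (2 * x)) * math.exp(-x)
    return {0.5: base, -0.5: base, 1.5: base * (1 + 1 / x)}[nu]

def I(nu, x):
    # closed forms for half-integer I-Bessel functions
    base = math.sqrt(2 / (math.pi * x))
    return {0.5: base * math.sinh(x),
            -0.5: base * math.cosh(x),
            1.5: base * (math.cosh(x) - math.sinh(x) / x)}[nu]

X = 30.0
# recurrence K_{s+1/2}(X) = K_{s-3/2}(X) + ((2s-1)/X) K_{s-1/2}(X) at s = 1:
assert math.isclose(K(1.5, X), K(-0.5, X) + K(0.5, X) / X, rel_tol=1e-12)
# at s = 1 the bracket equals 2/X exactly, so (X/2) * bracket = 1 for all X:
bracket = (K(0.5, X) * (I(-0.5, X) + I(1.5, X))
           + I(0.5, X) * (K(-0.5, X) + K(1.5, X)))
assert math.isclose((X / 2) * bracket, 1.0, rel_tol=1e-10)
```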
### 5.4 Proof of Theorem 2 Our starting point is the Fourier expansion of the sum $G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)$. Namely, for ${\mathrm{Re}}(s)>1$ and ${\mathrm{Im}}(w)$ sufficiently large we have that $\displaystyle G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)$ $\displaystyle=\left(1-\frac{y^{1-s}}{2s-1}\right)\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)$ $\displaystyle-\sum_{k\in\mathbb{Z}\setminus\\{0\\}}\sqrt{y}K_{s-\tfrac{1}{2}}(2\pi|k|y)F_{-k}(w,s)e(kx).$ (41) If ${\mathrm{Im}}(z)$ is sufficiently large, exponential decay of $K_{s-\tfrac{1}{2}}(2\pi|k|y)$ is sufficient to ensure that the right-hand side of (5.4) is holomorphic at $s=1$. The Laurent series expansion of $\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)$, combined with the expansions $y^{1-s}=1+(1-s)\log y+\tfrac{1}{2}(1-s)^{2}\log^{2}y+O((1-s)^{3})$ and $(2s-1)^{-1}=(1+2(s-1))^{-1}=1-2(s-1)+4(s-1)^{2}+O((s-1)^{3})$ yields $\frac{\partial}{\partial s}\left.\left(1-\frac{y^{1-s}}{2s-1}\right)\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right|_{s=1}=\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(M)}\left[-4+2\beta\operatorname{vol}_{\operatorname{hyp}}(M)-2P(w)\right.\\\ \left.+\log y\left(\beta\mathrm{vol}_{\operatorname{hyp}}(M)-P(w)-2\right)-\tfrac{1}{2}\log^{2}y\right].$ Additionally, for ${\mathrm{Im}}(z)$ sufficiently large, the series on the right-hand side of (5.4) is a uniformly convergent series of functions which are holomorphic at $s=1$. As such, we may differentiate the series term by term. By employing formulas 8.469.3 and 8.486.21 of [GR07], we deduce for $k\neq 0$ that $\frac{\partial}{\partial s}\left.\left(\sqrt{y}K_{s-\tfrac{1}{2}}(2\pi|k|y)F_{-k}(w,s)\right)\right|_{s=1}=\frac{e^{-2\pi|k|y}}{2\sqrt{|k|}}\cdot\\\ \cdot\left[\frac{\partial}{\partial s}\left.F_{-k}(w,s)\right|_{s=1}-F_{-k}(w,1)e^{4\pi|k|y}\mathrm{Ei}(-4\pi|k|y)\right],$ where $\mathrm{Ei}(x)$ denotes the exponential integral function; see section 8.21 of [GR07]. 
From this, we get that $\frac{\partial}{\partial s}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)\Big{|}_{s=1}=(\log y+2)\left(\beta-\frac{P(w)+2}{\mathrm{vol}_{\operatorname{hyp}}(M)}\right)-\frac{\log^{2}y}{2\mathrm{vol}_{\operatorname{hyp}}(M)}\\\ -\sum_{k\in\mathbb{Z}\setminus\\{0\\}}\frac{1}{2\sqrt{|k|}}\left[\frac{\partial}{\partial s}\left.F_{-k}(w,s)\right|_{s=1}-F_{-k}(w,1)e^{4\pi|k|y}\mathrm{Ei}(-4\pi|k|y)\right]e^{2\pi ikx-2\pi|k|y}.$ Let us now compute the derivative $\frac{\partial}{\partial z}$ of the above expression. After multiplying by $i=\sqrt{-1}$, we get that $\mathcal{G}_{w}(z)=\frac{1}{y}\left(\beta-\frac{P(w)+2}{\mathrm{vol}_{\operatorname{hyp}}(M)}\right)-\frac{\log y}{y\mathrm{vol}_{\operatorname{hyp}}(M)}+\sum_{k\geq 1}2\pi\sqrt{k}\frac{\partial}{\partial s}\left.F_{-k}(w,s)\right|_{s=1}q_{z}^{k}\\\ +\sum_{k\geq 1}\frac{F_{-k}(w,1)}{2\sqrt{k}y}q_{z}^{k}-\sum_{k\leq-1}2\pi\sqrt{|k|}F_{-k}(w,1)\mathrm{Ei}(4\pi ky)q_{z}^{k}+\sum_{k\leq-1}\frac{F_{-k}(w,1)}{2\sqrt{|k|}y}e^{2\pi ik(x-iy)}.$ The proof of the assertion that $\sum_{k\geq 1}2\pi\sqrt{k}\frac{\partial}{\partial s}\left.F_{-k}(w,s)\right|_{s=1}q_{z}^{k}$ is the holomorphic part of $\mathcal{G}_{w}(z)$ follows from the uniqueness of the analytic continuation in $z$. It remains to prove that $\mathcal{G}_{w}(z)$ is a weight two biharmonic Maass form. Since $\mathcal{G}_{w}(z)$ is obtained by taking the derivative $\frac{\partial}{\partial z}$ of a $\Gamma$-invariant function, it is obvious that $\mathcal{G}_{w}(z)$ has weight two in $z$. 
Moreover, the straightforward computation that $iy^{2}\frac{\partial}{\partial\bar{z}}\mathcal{G}_{w}(z)=\Delta_{\operatorname{hyp}}\left(\frac{\partial}{\partial s}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)\Big{|}_{s=1}\right)=-\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right),$ combined with the fact that $\Delta_{\operatorname{hyp}}\left(\lim_{s\to 1}\left(G_{s}(z,w)+\mathcal{E}_{\infty}^{\mathrm{par}}(w,s)\right)\right)=0$, proves that $\mathcal{G}_{w}(z)$ is biharmonic. ## 6 Examples ### 6.1 The full modular group Throughout this subsection, let $\Gamma=\mathrm{PSL}(2,\mathbb{Z})$, in which case the parabolic Kronecker limit function $P(w)$ can be expressed, in the notation of [BK20], as $P(w)=P_{\mathrm{PSL}(2,\mathbb{Z})}(w)=\log(|\eta(w)|^{4}\cdot{\mathrm{Im}}(w))=\mathbbm{j}(w)-1,$ where $\eta(w)$ is Dedekind’s eta function and the last equality follows from the definition of $\mathbbm{j}_{0}(w)=\mathbbm{j}(w)$ given on p. 1 of [BK20]. In this setting, Corollary 1, when combined with (3) and Rohrlich’s theorem (2), yields that $\langle j_{n},\log||f||\rangle=2\pi\sqrt{n}\left(-2\pi\sum_{w\in\mathcal{F}}\frac{\mathrm{ord}_{w}(f)}{\mathrm{ord}(w)}\left(\frac{\partial}{\partial s}\left.F_{-n}(w,s)\right|_{s=1}-c_{n}P(w)\right)\right).$ (42) Moreover, equating the constant terms in the Fourier series expansions for $F_{-n}(z,1)$ and $j_{n}(z)$, one easily deduces that $2\pi\sqrt{n}c_{n}=24\sigma(n)$. This proves Theorem 1.2 of [BK20] and shows that, in the notation of [BK20], one has $\mathbbm{j}_{n}(w)=2\pi\sqrt{n}\frac{\partial}{\partial s}\left.F_{-n}(w,s)\right|_{s=1}-24\sigma(n)P(w),$ (43) an identity which provides a description of $\mathbbm{j}_{n}(w)$, for $n\geq 1$, different from the one given by formula (3.10) of [BK20]. 
Furthermore, from the identity (19), combined with the fact that $\Delta_{\operatorname{hyp}}P(w)=1$, which is a straightforward implication of the Kronecker limit formula (24), it follows that $\Delta_{\operatorname{hyp}}\mathbbm{j}_{n}(w)=2\pi\sqrt{n}\left(F_{-n}(w,1)-c_{n}\right)=j_{n}(w),$ which agrees with formula (3.10) of [BK20]. Reasoning as above, we easily see that Theorem 1.3. of [BK20] follows from Corollary 2 with $g(z)=j_{n}(z)$. Finally, in view of (42), Theorem 2 is closely related to the first part of Theorem 1.4 of [BK20]. Namely, for large enough ${\mathrm{Im}}(z)$, in the notation of [BK20] $\displaystyle\mathbb{H}_{w}(z)$ $\displaystyle=\sum_{n\geq 0}\mathbbm{j}_{n}(w)q_{z}^{n}=\mathbbm{j}_{0}(w)+\sum_{n\geq 1}\left(2\pi\sqrt{n}\frac{\partial}{\partial s}\left.F_{-n}(w,s)\right|_{s=1}-24\sigma(n)P(w)\right)q_{z}^{n}$ $\displaystyle=1+P(w)\left(1-24\sum_{n\geq 1}\sigma(n)q_{z}^{n}\right)+\sum_{n\geq 1}2\pi\sqrt{n}\frac{\partial}{\partial s}\left.F_{-n}(w,s)\right|_{s=1}q_{z}^{n}.$ Theorem 2 implies that the function $\mathbb{H}_{w}(z)$ is the holomorphic part of the weight two biharmonic Maass form $\widehat{\mathbb{H}}_{w}(z)=P(w)\widehat{E}_{2}(z)+\mathcal{G}_{w}(z),$ where $\widehat{E}_{2}(z)=1-24\sum_{n\geq 1}\sigma(n)q_{z}^{n}-\frac{3}{\pi y}$ is the weight two completed Eisenstein series for the full modular group. ### 6.2 Genus zero Atkin-Lehner groups Let $N=\prod_{\nu=1}^{r}p_{\nu}$ be a positive square-free integer which is one of the $44$ possible values for which the quotient space $Y_{N}^{+}=\overline{\Gamma_{0}^{+}(N)}\backslash{\mathbb{H}}$ has genus zero; see [Cum04] for a list of such $N$ as well as [JST16b]. Let $\Delta_{N}(z)$ be the Kronecker limit function on $Y_{N}^{+}$ associated to the parabolic Eisenstein series; it is given by formula (26) above. 
In the notation of Section 4.2, the function $\Delta_{N}(z)(j_{N}^{+}(z)-j_{N}^{+}(w))^{\nu_{N}}$ is the weight $k_{N}=2^{r-1}\ell_{N}$ holomorphic modular form which possesses the constant term $1$ in its $q$-expansion. Furthermore, this function vanishes only at the point $z=w$, and, by the Riemann-Roch formula, its order of vanishing is equal to $k_{N}\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})\cdot\mathrm{ord}(w)/(4\pi)$. When $N=1$, one has $k_{1}=12$, $\ell_{1}=24$, $\nu_{1}=1$ and $\operatorname{vol}_{\operatorname{hyp}}(Y_{1}^{+})=\pi/3$, hence $\Delta_{1}(z)(j_{1}^{+}(z)-j_{1}^{+}(w))^{\nu_{1}}$ equals the prime form $(\Delta(z)(j(z)-j(w)))^{1/\mathrm{ord}(w)}$ taken to the power $\mathrm{ord}(w)$; see page 3 of [BK20]. For any integer $m\geq 1$ the $q$-expansion of the form $j_{N}^{+}|T_{m}(z)$ is $q_{z}^{-m}+O(q_{z})$; hence there exists a constant $C_{m,N}$ such that $j_{N}^{+}|T_{m}(z)=2\pi\sqrt{m}F_{-m}(z,1)+C_{m,N}$. The constant $C_{m,N}$ can be explicitly evaluated in terms of $m$ and $N$ by equating the constant terms in the $q$-expansions. Upon doing so, one obtains, using equation (20), that $\displaystyle C_{m,N}=-2\pi\sqrt{m}B_{0,N}^{+}(1;-m)$ $\displaystyle=-24\sigma(m)\prod\limits_{\nu=1}^{r}\left(1-\frac{p_{\nu}^{\alpha_{p_{\nu}}(m)+1}(p_{\nu}-1)}{\left(p_{\nu}^{\alpha_{p_{\nu}}(m)+1}-1\right)(p_{\nu}+1)}\right)$ $\displaystyle=-24\sigma(m)\prod\limits_{\nu=1}^{r}\left(1-\kappa_{m}(p_{\nu})\right),$ where we simplified the notation by denoting the second term in the product over $\nu$ by $\kappa_{m}(p_{\nu})$. We can now apply Corollary 2 with $g(z)=j_{N}^{+}|T_{m}(z)=2\pi\sqrt{m}F_{-m}(z,1)-24\sigma(m)\prod\limits_{\nu=1}^{r}\left(1-\kappa_{m}(p_{\nu})\right)$ and $f(z)=\Delta_{N}(z)(j_{N}^{+}(z)-j_{N}^{+}(w))^{\nu_{N}}$. 
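The evaluation of $C_{m,N}$ just given can be verified in exact rational arithmetic; as a sketch, the routine below also reproduces the constants $-60/19$, $-84/19$, $-15/13$ and $-57/13$ that appear in the genus one and genus two examples of Sections 6.3 and 6.4:

```python
from fractions import Fraction

def sigma(n):
    # sum of divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def alpha(p, m):
    # exponent of the prime p in m
    a = 0
    while m % p == 0:
        m //= p
        a += 1
    return a

def C(m, primes):
    # C_{m,N} = -24 sigma(m) prod_{p | N} (1 - kappa_m(p)), with
    # kappa_m(p) = p^{alpha_p(m)+1} (p - 1) / ((p^{alpha_p(m)+1} - 1)(p + 1))
    out = Fraction(-24 * sigma(m))
    for p in primes:
        pa = p ** (alpha(p, m) + 1)
        kappa = Fraction(pa * (p - 1), (pa - 1) * (p + 1))
        out *= 1 - kappa
    return out

# constants appearing in the expressions for the function field generators:
assert C(2, [37]) + 2 * C(1, [37]) == Fraction(-60, 19)                    # x_37^+
assert C(3, [37]) + 3 * C(1, [37]) == Fraction(-84, 19)                    # y_37^+
assert C(3, [103]) + C(1, [103]) == Fraction(-15, 13)                      # x_103^+
assert C(4, [103]) + 3 * C(2, [103]) + 3 * C(1, [103]) == Fraction(-57, 13)  # y_103^+
```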
Corollary 2 becomes the statement that $\displaystyle\langle j_{N}^{+}|T_{m}(z),$ $\displaystyle\log(y^{\tfrac{k_{N}}{2}}|\Delta_{N}(z)(j_{N}^{+}(z)-j_{N}^{+}(w))^{\nu_{N}}|)\rangle$ $\displaystyle=-k_{N}\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})\left[\pi\sqrt{m}\left.\frac{\partial}{\partial s}F_{-m}(w,s)\right|_{s=1}\right.$ $\displaystyle\left.+12\sigma(m)\prod\limits_{\nu=1}^{r}\left(1-\kappa_{m}(p_{\nu})\right)\left(\beta_{N}\operatorname{vol}_{\operatorname{hyp}}(Y_{N}^{+})-\log\left(|\Delta_{N}(w)|^{2/k_{N}}\cdot{\mathrm{Im}}(w)\right)-2\right)\right],$ where $\beta_{N}$ is given by (28). In this form, we have obtained an alternate proof and generalization of formula (1.2) from [BK20], which is the special case $N=1$. ### 6.3 A genus one example Let us consider the case when $\Gamma=\overline{\Gamma_{0}(37)^{+}}$. The choice of $N=37$ is significant since this level corresponds to the smallest square-free integer $N$ such that $Y_{N}^{+}$ has genus one. From Proposition 11 of [JST16], we have that $\operatorname{vol}_{\operatorname{hyp}}(Y_{37}^{+})=19\pi/3$ and $\beta_{37}=\frac{3}{19\pi}\left(\frac{10}{19}\log 37+2-2\log(4\pi)-24\zeta^{\prime}(-1)\right).$ The function field generators are $x_{37}^{+}(z)=q_{z}^{-2}+2q_{z}^{-1}+O(q_{z})$ and $y_{37}^{+}(z)=q_{z}^{-3}+3q_{z}^{-1}+O(q_{z})$, as displayed in Table 5 of [JST16]. The generators $x_{37}^{+}(z)$ and $y_{37}^{+}(z)$ satisfy the cubic relation $y^{2}-x^{3}+6xy-6x^{2}+41y+49x+300=0$. The functions $x_{37}^{+}(z)$ and $y_{37}^{+}(z)$ can be expressed in terms of the Niebur-Poincaré series by comparing their $q$-expansions. 
The resulting expressions are $\displaystyle x_{37}^{+}(z)$ $\displaystyle=2\pi[\sqrt{2}F_{-2}(z,1)+2F_{-1}(z,1)]-2\pi(\sqrt{2}B_{0,37}^{+}(1;-2)+2B_{0,37}^{+}(1;-1))$ $\displaystyle=2\pi[\sqrt{2}F_{-2}(z,1)+2F_{-1}(z,1)]-\frac{60}{19}$ and $\displaystyle y_{37}^{+}(z)$ $\displaystyle=2\pi[\sqrt{3}F_{-3}(z,1)+3F_{-1}(z,1)]-2\pi(\sqrt{3}B_{0,37}^{+}(1;-3)+3B_{0,37}^{+}(1;-1))$ $\displaystyle=2\pi[\sqrt{3}F_{-3}(z,1)+3F_{-1}(z,1)]-\frac{84}{19}.$ It is important to note that $x_{37}^{+}(z)$ has a pole of order two at $z=\infty$, i.e., its $q$-expansion begins with $q_{z}^{-2}$. As such, $x_{37}^{+}(z)$ is a linear transformation of the Weierstrass $\wp$-function, in the coordinates of the upper half plane, associated to the elliptic curve obtained by compactifying the space $Y_{37}^{+}$. Hence, there are three distinct points $\\{w\\}$ on $Y_{37}^{+}$, corresponding to the two torsion points under the group law, such that $x_{37}^{+}(z)-x_{37}^{+}(w)$ vanishes as a function of $z$ only when $z=w$. The order of vanishing is necessarily equal to two. The cusp form $\Delta_{37}(z)$ vanishes at $\infty$ to order $19$. Therefore, for such $w$, the form $f_{37,w}(z)=\Delta_{37}^{2}(z)(x_{37}^{+}(z)-x_{37}^{+}(w))^{19}$ is a weight $2k_{37}=24$ holomorphic form. The constant term in its $q$-expansion is equal to $1$, and $f_{37,w}(z)$ vanishes for points $z\in\mathcal{F}$ only when $z=w$. The order of vanishing of $f_{37,w}(z)$ at $z=w$ is $38\cdot\mathrm{ord}(w)$. With all this, we can apply Corollary 2. 
The resulting formulas are that $\displaystyle\langle x_{37}^{+},\log(\|f_{37,w}\|)\rangle$ $\displaystyle=-152\pi^{2}\left(\frac{\partial}{\partial s}\left.(\sqrt{2}F_{-2}(w,s)+2F_{-1}(w,s))\right|_{s=1}\right)$ $\displaystyle+240\pi\left(\log\left(|\eta(w)\eta(37w)|^{2}\cdot{\mathrm{Im}}(w)\right)-\frac{10}{19}\log 37+2\log(4\pi)+24\zeta^{\prime}(-1)\right)$ and $\langle y_{37}^{+},\log(\|f_{37,w}\|)\rangle=-152\pi^{2}\left(\frac{\partial}{\partial s}\left.(\sqrt{3}F_{-3}(w,s)+3F_{-1}(w,s))\right|_{s=1}\right)\\\ +336\pi\left(\log\left(|\eta(w)\eta(37w)|^{2}\cdot{\mathrm{Im}}(w)\right)-\frac{10}{19}\log 37+2\log(4\pi)+24\zeta^{\prime}(-1)\right).$ Of course, one does not need to assume that $w$ corresponds to a two torsion point. In general, Corollary 2 yields an expression where the right-hand side is a sum of two terms, and the corresponding factor in front would be one-half of the factors above. ### 6.4 A genus two example Consider the level $N=103$. In this case, $\operatorname{vol}_{\operatorname{hyp}}(Y_{103}^{+})=52\pi/3$ and the function field generators are $x_{103}^{+}(z)=q_{z}^{-3}+q_{z}^{-1}+O(q_{z})$ and $y_{103}^{+}(z)=q_{z}^{-4}+3q_{z}^{-2}+3q_{z}^{-1}+O(q_{z})$, as displayed in Table 7 of [JST16]. The generators $x_{103}^{+}(z)$ and $y_{103}^{+}(z)$ satisfy the polynomial relation $y^{3}-x^{4}-5yx^{2}-9x^{3}+16y^{2}-21yx-60x^{2}+65y-164x+18=0$. The surface $Y_{103}^{+}$ has genus two. From Theorem 6 of [Ni73], we can write $x_{103}^{+}(z)$ and $y_{103}^{+}(z)$ in terms of the Niebur-Poincaré series. 
Explicitly, we have that $\displaystyle x_{103}^{+}(z)$ $\displaystyle=2\pi[\sqrt{3}F_{-3}(z,1)+F_{-1}(z,1)]-2\pi(\sqrt{3}B_{0,103}^{+}(1;-3)+B_{0,103}^{+}(1;-1))$ $\displaystyle=2\pi[\sqrt{3}F_{-3}(z,1)+F_{-1}(z,1)]-\frac{15}{13}$ and $\displaystyle y_{103}^{+}(z)$ $\displaystyle=2\pi[\sqrt{4}F_{-4}(z,1)+3\sqrt{2}F_{-2}(z,1)+3F_{-1}(z,1)]$ $\displaystyle-2\pi(\sqrt{4}B_{0,103}^{+}(1;-4)+3\sqrt{2}B_{0,103}^{+}(1;-2)+3B_{0,103}^{+}(1;-1))$ $\displaystyle=2\pi[2F_{-4}(z,1)+3\sqrt{2}F_{-2}(z,1)+3F_{-1}(z,1)]-\frac{57}{13}.$ The order of vanishing of $\Delta_{103}(z)$ at the cusp is $\nu_{103}=(12\cdot 52\pi/3)/(4\pi)=52$. Therefore, for an arbitrary, fixed $w\in{\mathbb{H}}$, the form $f_{103,w}(z)=\Delta_{103}^{3}(z)(x_{103}^{+}(z)-x_{103}^{+}(w))^{52}$ is the weight $3k_{103}=36$ holomorphic form which has constant term equal to $1$ in its $q$-expansion. Let $\\{w_{1},w_{2},w_{3}\\}$ be the three, not necessarily distinct, points in the fundamental domain $\mathcal{F}$ where $(x_{103}^{+}(z)-x_{103}^{+}(w))$ vanishes. One of the points $w_{j}$ is equal to $w$. The form $f_{103,w}(z)$ vanishes at $z=w_{j}$ to order $52\cdot\mathrm{ord}(w_{j})$, $j=1,2,3$. From Section 4.2, we have that $\beta_{103}=\frac{3}{52\pi}\left(\frac{53}{104}\log 103+2-2\log(4\pi)-24\zeta^{\prime}(-1)\right)$ and $P_{103}(z)=\log\left(|\eta(z)\eta(103z)|^{2}\cdot{\mathrm{Im}}(z)\right)$. Let us now apply Corollary 2 with $g(z)=x_{103}^{+}(z)$, in which case $c(g)=-15/13$. 
In doing so, we get that $\displaystyle\langle x_{103}^{+},\log(\|f_{103,w}\|)\rangle$ $\displaystyle=-208\pi^{2}\sum\limits_{j=1}^{3}\left(\frac{\partial}{\partial s}\left.(\sqrt{3}F_{-3}(w_{j},s)+F_{-1}(w_{j},s))\right|_{s=1}\right)$ $\displaystyle+120\pi\sum\limits_{j=1}^{3}\left(\log\left(|\eta(w_{j})\eta(103w_{j})|^{2}\cdot{\mathrm{Im}}(w_{j})\right)\right)$ $\displaystyle-360\pi\left(\frac{53}{104}\log 103-2\log(4\pi)-24\zeta^{\prime}(-1)\right).$ Similarly, we can take $g(z)=y_{103}^{+}(z)$, in which case $c(g)=-57/13$ and we get that $\displaystyle\langle y_{103}^{+},\log(\|f_{103,w}\|)\rangle$ $\displaystyle=-208\pi^{2}\sum\limits_{j=1}^{3}\left(\frac{\partial}{\partial s}\left.(2F_{-4}(w_{j},s)+3\sqrt{2}F_{-2}(w_{j},s)+3F_{-1}(w_{j},s))\right|_{s=1}\right)$ $\displaystyle+456\pi\sum\limits_{j=1}^{3}\left(\log\left(|\eta(w_{j})\eta(103w_{j})|^{2}\cdot{\mathrm{Im}}(w_{j})\right)\right)$ $\displaystyle-1368\pi\left(\frac{53}{104}\log 103-2\log(4\pi)-24\zeta^{\prime}(-1)\right).$ ### 6.5 An alternative formulation In the above discussion, we have written the constant $\beta$ and the Kronecker limit function $P$ separately. However, it should be pointed out that in all instances these terms appear in the combination $\beta\operatorname{vol}_{\operatorname{hyp}}(M)-P(z)$. From (24), we can write $\beta\operatorname{vol}_{\operatorname{hyp}}(M)-P(z)=\frac{1}{\operatorname{vol}_{\operatorname{hyp}}(M)}\textrm{\rm CT}_{s=1}\mathcal{E}^{\mathrm{par}}_{\infty}(z,s),$ where $\textrm{\rm CT}_{s=1}$ denotes the constant term in the Laurent expansion at $s=1$. It may be possible that such a notational change can provide additional insight concerning the formulas presented above. ## References * [AL70] Atkin, A. O. L., Lehner, J.: _Hecke operators on $\Gamma_{0}(m)$_, Math. Ann. 185 (1970), 134–160. * [BK20] Bringmann, K., Kane, B.: _An extension of Rohrlich’s theorem to the $j$-function_, Forum Math. Sigma 8 (2020), e3, 33 pp.
* [CJS20] Cogdell, J., Jorgenson, J., Smajlović, L.: _Spectral construction of non-holomorphic Eisenstein-type series and their Kronecker limit formula_, in: Integrable systems and algebraic geometry, London Math. Soc. Lecture Note Ser., 459 (2020), Cambridge Univ. Press, Cambridge, 393–427. * [CN79] Conway, J. H., Norton, S. P.: _Monstrous moonshine_, Bull. London Math. Soc. 11 (1979), 308–339. * [Cum04] Cummins, C.: _Congruence subgroups of groups commensurable with $\operatorname{PSL}(2,\mathbb{Z})$ of genus $0$ and $1$_, Experiment. Math. 13 (2004), 361–382. * [GR07] Gradshteyn, I. S., Ryzhik, I. M.: _Table of integrals, series and products_. Elsevier Academic Press, Amsterdam, 2007. * [Go73] Goldstein, L. J.: _Dedekind sums for a Fuchsian group. I_, Nagoya Math. J. 80 (1973), 21–47. * [He83] Hejhal, D.: _The Selberg trace formula for ${\rm PSL}(2,\mathbb{R})$. II_, Lecture Notes in Math. 1001, Springer-Verlag, Berlin, 1983. * [HIvPT19] Herrero, S., Imamoglu, Ö., von Pippich, A.-M., Tóth, Á.: _A Jensen-Rohrlich type formula for the hyperbolic 3-space_, Trans. Amer. Math. Soc. 371 (2019), no. 9, 6421–6446. * [Iwa02] Iwaniec, H.: _Spectral methods of automorphic forms_. Graduate Studies in Mathematics 53, American Mathematical Society, Providence, RI, 2002. * [JvPS19] Jorgenson, J., von Pippich, A.-M., Smajlović, L.: _Applications of Kronecker’s limit formula for elliptic Eisenstein series_, Ann. Math. Quebec 43 (2019), 99–124. * [JST16] Jorgenson, J., Smajlović, L., Then, H.: _Kronecker’s limit formula, holomorphic modular functions and $q$-expansions on certain arithmetic groups_, Exp. Math. 25 (2016), 295–319. * [JST16b] Jorgenson, J., Smajlović, L., Then, H.: _Certain aspects of holomorphic function theory on some genus zero arithmetic groups_, LMS J. Comput. Math. 19 (2016), 360–381. * [JSTurl] Jorgenson, J., Smajlović, L., Then, H.: web page with computational data, http://www.efsa.unsa.ba/~lejla.smajlovic/jst2/.
* [La87] Lang, S.: _Introduction to Complex Hyperbolic Spaces_. Springer-Verlag, Berlin, 1987. * [La99] Lang, S.: _Complex Analysis, fourth edition_. Graduate Texts in Mathematics, 103, Springer-Verlag, New York, 1999. * [Ni73] Niebur, D.: _A class of nonanalytic automorphic functions_ , Nagoya Math. J. 52 (1973), 133–145. * [vP10] von Pippich, A.-M.: _The arithmetic of elliptic Eisenstein series_. PhD thesis, Humboldt-Universität zu Berlin, 2010. * [vP16] von Pippich, A.-M.: _A Kronecker limit type formula for elliptic Eisenstein series_ , arXiv:1604.00811 [math.NT], 2016. * [Ro84] Rohrlich, D. E. : _A modular version of Jensen’s formula_ , Math. Proc. Cambridge Philos. Soc. 95 (1984), no. 1, 15–20. * [Se73] Serre, J.-P.: _A Course in Arithmetic_ , Graduate Texts in Mathematics, 7, Springer-Verlag, New York, 1973. * [Si80] Siegel, C. L.: _Advanced analytic number theory._ Tata Institute of Fundamental Research Studies in Mathematics, 9, Tata Institute of Fundamental Research, Bombay, 1980. * [Vo87] Vojta, P.: _Diophantine approximations and value distribution theory_. Lecture Notes in Mathematics 1239, Springer-Verlag, Berlin-New York, 1987. James Cogdell Department of Mathematics Ohio State University 231 W. 18th Ave Columbus, OH 43210, U.S.A. e-mail<EMAIL_ADDRESS> Jay Jorgenson Department of Mathematics The City College of New York Convent Avenue at 138th Street New York, NY 10031 U.S.A. e-mail<EMAIL_ADDRESS> Lejla Smajlović Department of Mathematics University of Sarajevo Zmaja od Bosne 35, 71 000 Sarajevo Bosnia and Herzegovina e-mail<EMAIL_ADDRESS>
# Optimistic and Adaptive Lagrangian Hedging Ryan D’Orazio, Ruitong Huang ###### Abstract In online learning an algorithm plays against an environment with losses possibly picked by an adversary at each round. The generality of this framework includes problems that are not adversarial, for example offline optimization, or saddle point problems (i.e. min-max optimization). However, online algorithms are typically not designed to leverage additional structure present in non-adversarial problems. Recently, slight modifications to well-known online algorithms such as optimism and adaptive step sizes have been used in several domains to accelerate online learning – recovering optimal rates in offline smooth optimization, and accelerating convergence to saddle points or social welfare in smooth games. In this work we introduce optimism and adaptive stepsizes to Lagrangian hedging, a class of online algorithms that includes regret-matching and hedge (i.e. multiplicative weights). Our results include: a general regret bound; a path-length regret bound for a fixed smooth loss, applicable to optimistic variants of regret-matching and regret-matching+; optimistic regret bounds for $\Phi$ regret, a framework that includes external, internal, and swap regret; and optimistic bounds for a family of algorithms that includes regret-matching+ as a special case. ## Introduction Online optimization is a general framework applicable to various problems such as offline optimization, and finding equilibria in games. Typical algorithms only use first-order information (i.e.
a subgradient or gradient), such as online mirror descent (MD) (Nemirovsky and Yudin 1983; Warmuth and Jagota 1997; Beck and Teboulle 2003), which generalizes projected gradient descent (see for example (Orabona 2019)), and follow the regularized leader (FTRL) (Shalev-Shwartz and Singer 2006; Abernethy, Hazan, and Rakhlin 2009; Nesterov 2009); see Orabona for an excellent historical overview of MD and FTRL. In general, online learning is adversarial: losses may change almost arbitrarily from one time step to the next. However, most problems of interest, including offline optimization and saddle point optimization, can be “predictable.” That is, the sequence of losses induced by running an online algorithm in these settings has specific structure and can be predictable under the right conditions, like smoothness (i.e. a Lipschitz continuous gradient). When losses are predictable, a powerful framework is optimistic online learning (Rakhlin and Sridharan 2013a, b; Chiang et al. 2012), where algorithms are modified to incorporate a _guess_ of the next loss, $m_{t}$, into their update. Combining optimism with MD and FTRL yields their optimistic counterparts, optimistic mirror descent (OMD) and optimistic follow the regularized leader (OFTRL), respectively. OMD and OFTRL both provide tangible benefits when problems are not quite adversarial. For example, faster convergence to a saddle point on the average (Rakhlin and Sridharan 2013b; Syrgkanis et al. 2015; Farina et al. 2019; Farina, Kroer, and Sandholm 2019); faster convergence to optimal social welfare in $n$-player games (Syrgkanis et al. 2015); last-iterate convergence in games (Daskalakis and Panageas 2018); acceleration in offline or online optimization (Cutkosky 2019; Mohri and Yang 2016; Joulani et al. 2020; Joulani, György, and Szepesvári 2017). Interestingly, much of the analysis of optimistic algorithms is black-box.
For example, most of the results rely on regret bounds being of a particular form, which is satisfied by both OMD and OFTRL. Naturally, one may ask what other classes of algorithms can be combined with optimism to achieve faster rates in predictable problems. In this paper we extend the idea of optimism to the class of algorithms known as Lagrangian hedging (Gordon 2007). Unfortunately, the regret bounds attained are not consistent with those of OMD and OFTRL; therefore, immediate theoretical acceleration via the previously mentioned works is not attained. However, our analysis provides interesting regret bounds that should be small given a “good” guess, and in the case of a smooth fixed loss we show a path-length bound for the regret. This result, for example, is applicable to an optimistic variant of the well-known regret-matching algorithm when used to train a linear regressor with $L_{1}$ regularization and the least-squares loss (Schuurmans and Zinkevich 2016). Additionally, our analysis extends beyond the typical regret objectives of MD and FTRL, and includes regret bounds for internal and swap regret (Cesa-Bianchi and Lugosi 2006). To the best of our knowledge, our results provide the first optimistic and adaptive algorithms for minimizing internal regret, with possible applications including finding correlated equilibria in $n$-player general sum games (Cesa-Bianchi and Lugosi 2006). ## Background ### Online Linear Optimization In online convex optimization an algorithm $\mathcal{A}$ interacts with an environment for $T$ rounds (Zinkevich 2003). In each round $t$, $\mathcal{A}$ selects an iterate $x_{t}$ within some convex compact set $\mathcal{X}$, after which a convex loss function $\ell_{t}:{\mathcal{X}}\to\mathbb{R}$ chosen by the environment is revealed. Furthermore, $\mathcal{A}$ is only allowed to use information from previous rounds.
The performance of $\mathcal{A}$ after $T$ rounds is measured by its regret $\displaystyle R^{T}_{\mathcal{X}}=\sum_{t=1}^{T}\ell_{t}(x_{t})-\underset{x\in\mathcal{X}}{\min}\sum_{t=1}^{T}\ell_{t}(x).$ (1) The objective is to ensure sublinear regret, $R^{T}_{\mathcal{X}}\in o(T)$, e.g. $R^{T}_{\mathcal{X}}\in O(\sqrt{T})$. In the most general of settings, no assumptions are made on the sequence of losses $\\{\ell_{t}\\}_{t\leq T}$; they may be chosen by an adversary with knowledge of $\mathcal{A}$. If each loss $\ell_{t}$ is subdifferentiable at $x_{t}$, then there exists a vector $\partial\ell_{t}(x_{t})$ (a subgradient) such that $\ell_{t}(x)\geq\ell_{t}(x_{t})+\langle\partial\ell_{t}(x_{t}),x-x_{t}\rangle\quad\forall x\in\mathcal{X}.$ Provided $\mathcal{A}$ has access to a subgradient, it is enough to design algorithms for linear losses. The original regret $R^{T}_{\mathcal{X}}$ is upper bounded by the regret with respect to the linear losses $\\{\tilde{\ell}_{t}\\}_{t\leq T}$, where $\tilde{\ell}_{t}(x)=\langle\partial\ell_{t}(x_{t}),x\rangle$. For the remainder of the paper we assume linear losses unless specified otherwise. ### Lagrangian Hedging Lagrangian hedging defines a class of algorithms for online linear optimization (Gordon 2007). The class generalizes potential-based methods introduced by Cesa-Bianchi and Lugosi for learning with expert advice (Cesa-Bianchi and Lugosi 2003) (learning with expert advice resembles online linear optimization where the decision set $\mathcal{X}$ is the $n$-dimensional simplex $\Delta^{n}$, interpreted as the set of distributions over $n$ experts), and includes the well-known Hedge algorithm (also known as multiplicative weights) (Freund and Schapire 1997) and regret-matching (Hart and Mas-Colell 2000). At each round $t$ a Lagrangian hedging algorithm maintains a regret vector $s_{1:t-1}=s_{1:t-2}+\langle\ell_{t-1},x_{t-1}\rangle u-\ell_{t-1},$ with the regret vector initialized as $s_{1:0}=s_{0}=0$.
The change in the regret vector is denoted as $s_{t}=\langle\ell_{t},x_{t}\rangle u-\ell_{t}$, so that $s_{1:t}=\sum_{k=1}^{t}s_{k}$. Here $u$ is a vector such that $\langle u,x\rangle=1$ for any $x\in\mathcal{X}$. As mentioned by Gordon (Gordon 2007), if no such $u$ can be found then we may append an extra coordinate equal to $1$ to each $x\in\mathcal{X}$. Then we can take $u$ to be the vector of zeros except for a $1$ coinciding with the new dimension added to $x$, and append a $0$ to each loss. $s_{1:t}$ is referred to as the regret vector because it tracks how well an algorithm has done so far, $\sum_{t=1}^{T}\langle\ell_{t},x_{t}\rangle-\sum_{t=1}^{T}\langle\ell_{t},x\rangle=\langle s_{1:T},x\rangle\quad\forall x\in\mathcal{X}.$ The regret is then simply $R^{T}_{\mathcal{X}}=\max_{x\in\mathcal{X}}\langle s_{1:T},x\rangle$. Instead of explicitly ensuring that the regret is small, Lagrangian hedging ensures $s_{1:T}$ is not too far from a safe set $\mathcal{S}$. The safe set is defined to be the polar cone to $\mathcal{X}$, $\mathcal{S}=\\{s:\forall x\in\mathcal{X}\,\langle s,x\rangle\leq 0\\}.$ Forcing $s_{1:T}$ to be in $\mathcal{S}$ may not be possible, as it would guarantee $R^{T}_{\mathcal{X}}\leq 0$ when it is possible to encounter an adversary that guarantees $\Omega(\sqrt{T})$ regret (Orabona 2019; Hazan et al. 2016). However, $\displaystyle R^{T}_{\mathcal{X}}$ $\displaystyle=\max_{x\in\mathcal{X}}\langle s_{1:T},x\rangle\leq\max_{x\in\mathcal{X}}\langle s_{1:T}-s,x\rangle\quad\forall s\in\mathcal{S}$ $\displaystyle\leq\inf_{s\in\mathcal{S}}\left\lVert s_{1:T}-s\right\rVert\max_{x\in\mathcal{X}}\left\lVert x\right\rVert_{\ast}.$ (2) Therefore, if the distance of $s_{1:T}$ to the set $\mathcal{S}$ grows at a sublinear rate then the regret will be sublinear, since by assumption the set $\mathcal{X}$ is bounded, $\left\lVert x\right\rVert_{\ast}\leq D$.
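As a concrete illustration of this bookkeeping, consider the simplex, where one can take $u=\mathbf{1}$ and the maximum $\max_{x\in\mathcal{X}}\langle s_{1:T},x\rangle$ is attained at a vertex, so the external regret is just the largest coordinate of $s_{1:T}$. The following is a minimal sketch with made-up random losses (variable names are ours, not Gordon's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 50
u = np.ones(n)                      # <u, x> = 1 for every x in the simplex

s = np.zeros(n)                     # regret vector s_{1:t}
total_loss = 0.0
losses = rng.uniform(0, 1, size=(T, n))

for ell in losses:
    x = np.full(n, 1.0 / n)         # any feasible iterate; here the uniform one
    total_loss += ell @ x
    s += (ell @ x) * u - ell        # s_t = <ell_t, x_t> u - ell_t

# max_x <s_{1:T}, x> over the simplex is attained at a vertex, so the
# regret is the largest coordinate of the regret vector.
regret = s.max()
assert np.isclose(regret, total_loss - losses.sum(axis=0).min())
```

The final assertion checks the identity above: coordinate $a$ of $s_{1:T}$ equals the algorithm's total loss minus the total loss of always playing action $a$.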
The norm $\left\lVert\cdot\right\rVert_{\ast}$ is the dual norm of $\left\lVert\cdot\right\rVert$, defined as $\left\lVert x\right\rVert_{\ast}=\sup\\{\langle x,y\rangle|\left\lVert y\right\rVert\leq 1\\}$. Additionally, we assume the change in the regret vector is bounded in norm, $\left\lVert s_{t}\right\rVert^{2}\leq C$. This assumption is similar to assuming bounded linear functions $\left\lVert\ell_{t}\right\rVert\leq C$, or in the convex (possibly non-linear) case $\left\lVert\partial\ell_{t}\right\rVert\leq C$ (i.e. convex Lipschitz-continuous functions). The distance of $s_{1:t}$ to $\mathcal{S}$ is then controlled via a smooth potential function $F$, with the following conditions: $\displaystyle F(s)\leq 0\quad\forall s\in\mathcal{S}$ (3) $\displaystyle F(x+y)\leq F(x)+\langle\partial F(x),y\rangle+\frac{L}{2}\left\lVert y\right\rVert^{2}$ (4) $\displaystyle(F(s)+A)^{+}\geq\inf_{s^{\prime}\in S}B\left\lVert s-s^{\prime}\right\rVert^{p},$ (5) for constants $L,B>0$, $A\geq 0$, and $1\leq p\leq 2$. $\partial F(x)$ is a subgradient of $F$ at $x$. $(x)^{+}$ refers to the ReLU operation, which sets all negative values in the vector to $0$. In addition to the above conditions we will also assume that $F$ is convex and differentiable, with $\partial F(x)=\nabla F(x)$, the gradient of $F$ at $x$. When $F$ is differentiable, condition (4) is equivalent to Lipschitz continuity of the gradient, $\left\lVert\nabla F(x)-\nabla F(y)\right\rVert_{\ast}\leq L\left\lVert x-y\right\rVert$ (Nesterov 2018, Theorem 2.1.5). Once an appropriate potential function is chosen, a Lagrangian hedging algorithm ensures $F(s_{1:T})\in O(T)$ by picking an iterate at each round $t$ such that $\displaystyle\langle\nabla F(s_{1:t-1}),s_{t}\rangle\leq 0,$ (6) for any possible $s_{t}$ ($s_{t}$ can change depending on the loss $\ell_{t}$ picked by the environment).
The above inequality is also known as the Blackwell condition, often used in potential-based expert algorithms (Cesa-Bianchi and Lugosi 2006), and, as shown by Gordon, is guaranteed if the iterate at time $t$ is chosen by the following rule $\displaystyle x_{t}=\begin{cases}\frac{\nabla F(s_{1:t-1})}{\langle\nabla F(s_{1:t-1}),u\rangle}&\mbox{if }\langle\nabla F(s_{1:t-1}),u\rangle>0\\\ \mbox{arbitrary }x\in\mathcal{X}&o.w.\end{cases}$ (7) Gordon also showed that procedure (7) always yields a feasible iterate $x_{t}\in\mathcal{X}$. Equipped with the Blackwell condition and the smoothness of $F$ (condition (4)), the growth of $F(s_{1:t})$ is easily bounded by $\displaystyle F(s_{1:t})=F(s_{1:t-1}+s_{t})$ $\displaystyle\leq F(s_{1:t-1})+\langle\nabla F(s_{1:t-1}),s_{t}\rangle+\frac{L}{2}\left\lVert s_{t}\right\rVert^{2}$ $\displaystyle\leq F(s_{1:t-1})+\frac{L}{2}\left\lVert s_{t}\right\rVert^{2}\mbox{ (by the Blackwell condition)}$ $\displaystyle\leq F(s_{1:t-1})+\frac{LC}{2}.$ Summing across time and with $s_{1:0}=s_{0}=0$, $F(s_{1:t})\leq F(0)+\frac{LCt}{2}\leq\frac{LCt}{2},$ since $0\in\mathcal{S}$. With a linear bound on $F(s_{1:t})$ a regret bound follows immediately by condition (5) and inequality (2), $\displaystyle R^{T}_{\mathcal{X}}\leq D\left(\frac{LCT+2A}{2B}\right)^{1/p}.$ (8) If $p=1$ then this regret bound is linear; however, as mentioned by Gordon, a stepsize can be used to achieve sublinear regret. We can define a new potential function with stepsize $\eta$, $F_{\eta}(s)=F(\eta s)$.
The smoothness condition becomes $\displaystyle F_{\eta}(x+y)$ $\displaystyle=F(\eta(x+y))\leq F(\eta x)+\eta\langle\nabla F(\eta x),y\rangle+\frac{\eta^{2}L}{2}\left\lVert y\right\rVert^{2}=F_{\eta}(x)+\langle\nabla F_{\eta}(x),y\rangle+\frac{\eta^{2}L}{2}\left\lVert y\right\rVert^{2}.$ $F_{\eta}$ is therefore a valid potential function with smoothness constant $\eta^{2}L$, with condition (5) now being $(F_{\eta}(s)+A)^{+}\geq\inf_{s^{\prime}\in S}\eta^{p}B\left\lVert s-s^{\prime}\right\rVert^{p}.$ Following the same arguments as Gordon (Gordon 2007, Theorem 3), the regret becomes $\displaystyle R^{T}_{\mathcal{X}}\leq D\left(\frac{\eta^{2}LCT+2A}{2B\eta^{p}}\right)^{1/p}.$ (9) When $p=1$ a stepsize $\eta\in O(\frac{1}{\sqrt{T}})$ achieves a regret bound of $O(\sqrt{T})$, similar to the standard results in MD and FTRL analysis (see (Orabona 2019) for example). Despite this guarantee on regret, the bound does not hold uniformly over time: one must know the horizon $T$ to select a stepsize. However, the standard doubling trick can be applied to achieve a regret guarantee for all time steps, requiring algorithm resets after exponentially growing time intervals (Cesa-Bianchi and Lugosi 2006). In MD, FTRL, and the potential-based approaches for expert problems, however, a stepsize schedule of $\eta_{t}\in O(\frac{1}{\sqrt{t}})$ is enough to achieve a $O(\sqrt{T})$ regret bound that holds uniformly over time (applies to any time horizon). Given that Lagrangian hedging generalizes potential-based methods, a similar result should hold. Indeed, we show, with the help of the following simple yet important lemma, that the same learning-rate schedule suffices for Lagrangian hedging algorithms with potential functions that need a learning rate (i.e. $p=1$).
This result is interesting as it makes no additional assumptions on the potential function. By contrast, when viewing multiplicative weights as a potential-based method and a specific instance of Lagrangian hedging, inequalities particular to that specific potential function are used to derive the regret bounds that hold uniformly over time (Cesa-Bianchi and Lugosi 2006). First, we extend the Lagrangian hedging framework with an arbitrary sequence of stepsizes $\\{\eta_{t}\\}_{t\leq T}$, where the potential function $F(\eta_{t}s)$ is used at round $t$ to construct the iterate $x_{t}$, $\displaystyle x_{t}=\begin{cases}\frac{\nabla F(\eta_{t}s_{1:t-1})}{\langle\nabla F(\eta_{t}s_{1:t-1}),u\rangle}&\mbox{if }\langle\nabla F(\eta_{t}s_{1:t-1}),u\rangle>0\\\ \mbox{arbitrary }x\in\mathcal{X}&o.w.\end{cases}$ (10) ###### Lemma 1. Assume $F$ is a convex function satisfying condition (3), and consider stepsizes $0<\eta_{t}\leq\eta_{t-1}$. Then $F(\eta_{t}s)\leq\frac{\eta_{t}}{\eta_{t-1}}F(\eta_{t-1}s).$ ###### Proof. $\displaystyle F(\eta_{t}s)$ $\displaystyle=F\left(\frac{\eta_{t}}{\eta_{t-1}}\eta_{t-1}s+0\right)=F\left(\frac{\eta_{t}}{\eta_{t-1}}\eta_{t-1}s+\left(1-\frac{\eta_{t}}{\eta_{t-1}}\right)0\right)$ $\displaystyle\leq\frac{\eta_{t}}{\eta_{t-1}}F(\eta_{t-1}s)+(1-\frac{\eta_{t}}{\eta_{t-1}})F(0)$ $\displaystyle\leq\frac{\eta_{t}}{\eta_{t-1}}F(\eta_{t-1}s)\mbox{, \quad since $0\in\mathcal{S}$}.$ ∎ Coupling the above lemma with the algorithm (10) and the Blackwell condition yields a bound on the growth of $F(\eta_{t}s_{1:t})$ and therefore a regret bound. Such a bound is a special case of optimistic Lagrangian hedging when the prediction is $0$, and so we defer the presentation to the next section. ## Adaptivity and Optimism in Lagrangian Hedging In this section we present the optimistic Lagrangian hedging algorithm along with adaptive stepsizes and the resulting regret guarantees.
Optimistic Lagrangian hedging leverages a prediction $m_{t}$ at round $t$ to construct the iterate $x_{t}$. In the optimistic and adaptive variants of MD and FTRL, one hopes to have $m_{t}\approx\ell_{t}$, since the regret bounds attained are usually of the form $O\left(\sqrt{\sum_{t=1}^{T}\left\lVert m_{t}-\ell_{t}\right\rVert^{2}}\right)$, with adaptive stepsizes (in the case of MD) similar to $\displaystyle\eta_{t}=\frac{1}{\sqrt{\sum_{s=1}^{t-1}\left\lVert m_{s}-\ell_{s}\right\rVert^{2}}}.$ (11) In optimistic Lagrangian hedging we hope the prediction $m_{t}$ is a good predictor of the change in the regret vector, $m_{t}\approx s_{t}$, with the provable regret bound of $O\left(\sqrt{\sum_{t=1}^{T}\left\lVert m_{t}-s_{t}\right\rVert^{2}}\right)$. Interestingly, for the case $p=2$ no adaptive stepsize is needed! ### General Optimistic Bound Given a prediction $m_{t}$ we define optimistic Lagrangian hedging with stepsizes $\eta_{t}$ as the following rule $\displaystyle x_{t}=\begin{cases}\frac{\nabla F(\eta_{t}(s_{1:t-1}+m_{t}))}{\langle\nabla F(\eta_{t}(s_{1:t-1}+m_{t})),u\rangle}&\mbox{if }\langle\nabla F(\eta_{t}(s_{1:t-1}+m_{t})),u\rangle>0\\\ \mbox{arbitrary }x\in\mathcal{X}&o.w.\end{cases}$ (12) Optimistic Lagrangian hedging then guarantees the following general upper bound on the growth of the potential function. ###### Theorem 1. An optimistic Lagrangian hedging algorithm with a convex potential function $F$ satisfying conditions (3-4) and positive decreasing stepsizes $0<\eta_{t}\leq\eta_{t-1}$, ensures $F(\eta_{T}s_{1:T})\leq\frac{L}{2}\sum_{t=1}^{T}\eta_{T}\eta_{t}\left\lVert s_{t}-m_{t}\right\rVert^{2}.$ ###### Proof.
From the same arguments as Gordon, we have the following Blackwell condition $\langle\nabla F(\eta_{t}(s_{1:t-1}+m_{t})),s_{t}\rangle\leq 0.$ By the smoothness of $F$ we have $\displaystyle F(\eta_{t}s_{1:t})=F(\eta_{t}(s_{1:t-1}+m_{t}+s_{t}-m_{t}))\leq$ $\displaystyle F(\eta_{t}(s_{1:t-1}+m_{t}))$ $\displaystyle+\langle\nabla F(\eta_{t}(s_{1:t-1}+m_{t})),\eta_{t}(s_{t}-m_{t})\rangle$ $\displaystyle+\eta_{t}^{2}\frac{L}{2}\left\lVert s_{t}-m_{t}\right\rVert^{2}$ $\displaystyle\leq F(\eta_{t}(s_{1:t-1}+m_{t}))-F(\eta_{t}(s_{1:t-1}))+F(\eta_{t}(s_{1:t-1}))$ $\displaystyle+\langle\nabla F(\eta_{t}(s_{1:t-1}+m_{t})),-\eta_{t}m_{t}\rangle+\eta_{t}^{2}\frac{L}{2}\left\lVert s_{t}-m_{t}\right\rVert^{2}\mbox{ (by the Blackwell condition)}$ $\displaystyle\leq F(\eta_{t}(s_{1:t-1}))+\eta_{t}^{2}\frac{L}{2}\left\lVert s_{t}-m_{t}\right\rVert^{2}\mbox{ (by convexity)}$ $\displaystyle\leq\frac{\eta_{t}}{\eta_{t-1}}F(\eta_{t-1}s_{1:t-1})+\eta_{t}^{2}\frac{L}{2}\left\lVert s_{t}-m_{t}\right\rVert^{2}\mbox{ (by Lemma 1)}.$ We now proceed by induction. Observe that $F(\eta_{0}s_{0})=F(0)\leq 0$ by assumption. So, for any $\eta_{1}\leq\eta_{0}$ ($\eta_{0}$ is not used to construct $x_{1}$ and is only used for the analysis), $F(\eta_{1}s_{1:1})\leq\frac{\eta_{1}}{\eta_{0}}F(\eta_{0}s_{0})+\eta_{1}^{2}\frac{L}{2}\left\lVert s_{1}-m_{1}\right\rVert^{2}\leq\eta_{1}^{2}\frac{L}{2}\left\lVert s_{1}-m_{1}\right\rVert^{2}.$ Assume that $F(\eta_{t-1}s_{1:t-1})\leq\frac{L}{2}\sum_{k=1}^{t-1}\eta_{t-1}\eta_{k}\left\lVert s_{k}-m_{k}\right\rVert^{2}.$ Then we have $\displaystyle F(\eta_{t}s_{1:t})$ $\displaystyle\leq\frac{\eta_{t}}{\eta_{t-1}}F(\eta_{t-1}s_{1:t-1})+\eta_{t}^{2}\frac{L}{2}\left\lVert s_{t}-m_{t}\right\rVert^{2}$ $\displaystyle\leq\frac{L}{2}\sum_{k=1}^{t-1}\eta_{t}\eta_{k}\left\lVert s_{k}-m_{k}\right\rVert^{2}+\eta_{t}^{2}\frac{L}{2}\left\lVert s_{t}-m_{t}\right\rVert^{2}=\frac{L}{2}\sum_{k=1}^{t}\eta_{t}\eta_{k}\left\lVert s_{k}-m_{k}\right\rVert^{2}.$ ∎ Taking a constant stepsize (or no stepsize) and setting $m_{t}=0$ recovers the original results of Gordon.
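To make the update rule concrete, with the regret-matching potential $F(s)=\left\lVert s^{+}\right\rVert_{2}^{2}$ (as in Gordon 2007), we have $\nabla F(s)=2s^{+}$, and rule (12) reduces to normalizing the positive part of the predicted regret vector. A minimal sketch on the simplex (function names are ours; with $m_{t}=0$ it recovers plain regret-matching):

```python
import numpy as np

def optimistic_rm_iterate(s_cum, m, n):
    """Rule (12) for F(s) = ||s^+||_2^2 on the n-simplex, with u = ones(n)."""
    grad = 2.0 * np.maximum(s_cum + m, 0.0)   # grad F at the predicted regret vector
    denom = grad.sum()                        # <grad F, u>
    if denom > 0:
        return grad / denom
    return np.full(n, 1.0 / n)                # arbitrary feasible point otherwise

s = np.array([1.0, -2.0, 3.0])
# With prediction m = 0: normalize s^+ = [1, 0, 3].
assert np.allclose(optimistic_rm_iterate(s, np.zeros(3), 3), [0.25, 0.0, 0.75])
# A nonzero prediction shifts the iterate before the loss is revealed.
x = optimistic_rm_iterate(s, np.array([3.0, 0.0, 0.0]), 3)
assert np.allclose(x, [4/7, 0.0, 3/7])
```

Note that no stepsize appears here, matching the observation above that $p=2$ needs none.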
When $p=1$ and $m_{t}=0$, applying the assumed bound $\left\lVert s_{t}\right\rVert^{2}\leq C$, Theorem 1 gives $\displaystyle R^{T}_{\mathcal{X}}\leq D\left(\frac{LC\sum_{t=1}^{T}\eta_{t}}{2B}+\frac{A}{B\eta_{T}}\right).$ (13) Therefore, taking $\eta_{t}\in O(\frac{1}{\sqrt{t}})$ gives a regret bound holding uniformly over time that is of the order $O(\sqrt{T})$. For the case $p>1$, where no stepsize is needed, the following regret bound is immediate $\displaystyle R^{T}_{\mathcal{X}}\leq D\left(\frac{L(\sum_{t=1}^{T}\left\lVert s_{t}-m_{t}\right\rVert^{2})+2A}{2B}\right)^{1/p}.$ (14) In the case of regret-matching on the simplex, where $B=1$, $A=0$, $L=2$, $D=1$, and $p=2$ (Gordon 2007), we get $\displaystyle R^{T}_{\mathcal{X}}\leq\sqrt{\sum_{t=1}^{T}\left\lVert s_{t}-m_{t}\right\rVert^{2}}.$ (15) ### Adaptive Stepsizes For the case $p=1$ we can still achieve a regret bound similar to (15) by taking adaptive stepsizes. Intuitively, the stepsizes account for how well previous predictions have done, or, in the case of no predictions, how large the $s_{t}$ have been in norm. Unlike the typical adaptive stepsize scheme for mirror descent (11), the stepsizes for Lagrangian hedging will be similar to adaptive FTRL methods (Mohri and Yang 2016), including the initial stepsize $\eta_{1}$, $\displaystyle\eta_{t}=\frac{1}{\sqrt{\frac{1}{\eta_{1}^{2}}+\sum_{k=1}^{t-1}\left\lVert s_{k}-m_{k}\right\rVert^{2}}}\quad t>1.$ (16) Our result is a direct application of the following lemma, which is a slight modification of a similar result by Orabona (Orabona 2019, Lemma 4.13); we provide the proof in the appendix. ###### Lemma 2. Let $a_{0}\geq 0$ and $0\leq a_{i}\leq C$ for $i>0$. If $f$ is a non-negative decreasing function then $\sum_{t=1}^{T}a_{t}f(a_{0}+\sum_{i=1}^{t-1}a_{i})\leq(C-a_{0})f(a_{0})+\int_{a_{0}}^{s_{T-1}}f(x)dx,$ where $s_{T-1}=a_{0}+\sum_{i=1}^{T-1}a_{i}$. Following the adaptive stepsize scheme (16) yields the following regret bound. ###### Theorem 2.
An optimistic Lagrangian hedging algorithm with a convex potential function $F$ satisfying conditions (3-5), with $p=1$ and stepsizes $\eta_{t}=\frac{1}{\sqrt{\frac{1}{\eta_{1}^{2}}+\sum_{k=1}^{t-1}\left\lVert s_{k}-m_{k}\right\rVert^{2}}}\quad t>1,$ and $\eta_{1}\leq\sqrt{\frac{3}{C}}$, attains the following regret bound $R^{T}_{\mathcal{X}}\leq\frac{D}{B}\left((L+A)\sqrt{\frac{1}{\eta_{1}^{2}}+\sum_{t=1}^{T-1}\left\lVert s_{t}-m_{t}\right\rVert^{2}}\right).$ See the appendix for the proof. ### Path Length Bound with Smooth Losses Optimism and adaptivity have found useful applications in improving rates for several smooth problems. For example, faster rates in smooth games (Rakhlin and Sridharan 2013b; Syrgkanis et al. 2015; Farina et al. 2019; Farina, Kroer, and Sandholm 2019), and faster rates for offline optimization (Cutkosky 2019; Joulani et al. 2020). Unfortunately, these results strongly depend on the regret bound having the same form as those of OMD and OFTRL. However, in Lagrangian hedging we can attain a path-length regret bound when the loss is fixed and smooth; the regret is upper bounded by the total change in the iterates. The new path-length bound is a direct application of the general optimistic results of the previous section combined with the assumption of a fixed Lipschitz-continuous smooth convex loss (possibly non-linear), that is, $\ell_{t}=\ell$, $\left\lVert\nabla\ell(x)\right\rVert\leq K$, and $\left\lVert\nabla\ell(x)-\nabla\ell(y)\right\rVert\leq L\left\lVert x-y\right\rVert_{\ast}$. If we take the typical martingale prediction $m_{t}=s_{t-1}$ then we have that $\left\lVert s_{t}-m_{t}\right\rVert^{2}\leq\tilde{C}\left\lVert x_{t}-x_{t-1}\right\rVert_{\ast}^{2}$. ###### Proof. In the fixed loss case we have $s_{t}=\langle\nabla\ell(x_{t}),x_{t}\rangle u-\nabla\ell(x_{t})$.
Therefore, $\displaystyle\left\lVert s_{t}-m_{t}\right\rVert=\left\lVert s_{t}-s_{t-1}\right\rVert=$ $\displaystyle\left\lVert\nabla\ell(x_{t-1})-\nabla\ell(x_{t})+\langle\nabla\ell(x_{t}),x_{t}\rangle u-\langle\nabla\ell(x_{t-1}),x_{t-1}\rangle u\right\rVert$ $\displaystyle\leq L\left\lVert x_{t-1}-x_{t}\right\rVert_{\ast}+\left\lVert u\right\rVert|\langle\nabla\ell(x_{t}),x_{t}\rangle-\langle\nabla\ell(x_{t-1}),x_{t-1}\rangle|$ $\displaystyle=L\left\lVert x_{t-1}-x_{t}\right\rVert_{\ast}$ $\displaystyle+\left\lVert u\right\rVert|\langle\nabla\ell(x_{t})-\nabla\ell(x_{t-1}),x_{t}\rangle+\langle\nabla\ell(x_{t-1}),x_{t}-x_{t-1}\rangle|$ $\displaystyle\leq\left(L+\left\lVert u\right\rVert DL+\left\lVert u\right\rVert K\right)\left\lVert x_{t}-x_{t-1}\right\rVert_{\ast}.$ Taking $\tilde{C}=\left(L+\left\lVert u\right\rVert DL+\left\lVert u\right\rVert K\right)^{2}$ gives the result. ∎ ## Generalization to $\Phi$-Regret in Experts When $\mathcal{X}=\Delta^{n}$, the $n$-dimensional simplex, online linear optimization becomes a problem of learning with expert advice. At each round $t$ an iterate $x_{t}\in\Delta^{n}$ is a distribution over $n$ actions, interpreted as weightings over recommendations by $n$ experts. Similar to before, regret will compare the total loss with the best $x^{\ast}\in\mathcal{X}$. However, this is equal to comparing with the best action (best expert recommendation), and is referred to as external regret, $\displaystyle R^{T}_{\mathcal{X}}=\max_{a\in A}\sum_{t=1}^{T}\langle\ell_{t},x_{t}\rangle-\langle\ell_{t},\delta_{a}\rangle,$ (17) where $\delta_{a}$ is the distribution over the action set $A$ with full weight on action $a$. The regret can be interpreted as considering an alternative sequence of iterates $\\{\tilde{x}_{t}\\}_{t\leq T}$, where each $\tilde{x}_{t}=\phi(x_{t})$, for some transformation of the form $\phi(x)=\delta_{a}$.
More generally, we can measure regret with respect to a set of linear transformations $\Phi$, referred to as $\Phi$ regret $\displaystyle R^{T}_{\Phi}=\max_{\phi\in\Phi}\sum_{t=1}^{T}\langle\ell_{t},x_{t}\rangle-\langle\ell_{t},\phi(x_{t})\rangle.$ (18) Similar to Lagrangian hedging, we seek to force a vector to some safe set. More precisely, we consider the $\Phi$ regret vector $s_{1:t}^{\Phi}$ that keeps track of how an algorithm is doing with respect to the set $\Phi$, $\displaystyle s_{1:t}^{\Phi}=s_{1:t-1}^{\Phi}+s_{t}^{\Phi},$ (19) where $s_{t}^{\Phi}=\\{\langle\ell_{t},x_{t}\rangle-\langle\ell_{t},\phi(x_{t})\rangle\\}_{\phi\in\Phi}\in\mathbb{R}^{|\Phi|}$. If $s_{1:T}^{\Phi}$ has all non-positive entries then $R^{T}_{\Phi}\leq 0$; therefore the safe set is chosen to be $\mathbb{R}^{|\Phi|}_{\leq 0}$, the negative orthant. This $\Phi$ regret framework, though abstract, includes other interesting forms of regret such as internal and swap regret (Greenwald, Li, and Marks 2006). Internal regret is interesting as it allows for efficient computation of correlated equilibria in game theory (Cesa-Bianchi and Lugosi 2006).
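For intuition: with $\mathcal{X}=\Delta^{n}$, each linear transformation $\phi$ can be represented as an $n\times n$ column-stochastic matrix, and internal regret uses the transformations that reroute all weight from one action $a$ to another action $b$. A minimal sketch (the helper name is ours):

```python
import numpy as np

def internal_phis(n):
    """All 'reroute a -> b' transformations as column-stochastic matrices."""
    phis = []
    for a in range(n):
        for b in range(n):
            if a != b:
                P = np.eye(n)
                P[a, a] = 0.0
                P[b, a] = 1.0     # weight placed on action a is moved to action b
                phis.append(P)
    return phis

x = np.array([0.5, 0.3, 0.2])
P = internal_phis(3)[0]           # the (a=0 -> b=1) transformation
assert np.allclose(P @ x, [0.0, 0.8, 0.2])
assert len(internal_phis(3)) == 6  # n(n-1) transformations for internal regret
```

Swap regret would instead use all maps from actions to actions, giving $n^{n}$ transformations; external regret uses only the $n$ constant maps $\phi(x)=\delta_{a}$.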
Similar to Lagrangian hedging, and as proposed by Greenwald, Li, and Marks, the algorithms will use a potential function $F$ to measure how far $s_{1:t}^{\Phi}$ is from the safe set and slow down its growth with the Blackwell condition $\displaystyle\langle\nabla F(s^{\Phi}_{1:t-1}),s_{t}^{\Phi}\rangle\leq 0.$ (20) As shown by Greenwald, Li, and Marks, the generalized Blackwell condition with respect to $\Phi$ is achieved if an algorithm plays a fixed point of a linear operator $M_{t}^{\Phi}$, $\displaystyle M_{t}^{\Phi}(x)=\frac{\sum_{\phi\in\Phi}(\nabla F(s^{\Phi}_{1:t-1}))_{\phi}\phi(x)}{\langle\nabla F(s^{\Phi}_{1:t-1}),\mathbf{1}\rangle}$ (21) where $(\nabla F(s^{\Phi}_{1:t-1}))_{\phi}$ denotes the component of the vector $\nabla F(s^{\Phi}_{1:t-1})\in\mathbb{R}^{|\Phi|}$ associated with the transformation $\phi\in\Phi$, and $\mathbf{1}=(1,\cdots,1)\in\mathbb{R}^{|\Phi|}$. This fixed point exactly coincides with the Lagrangian hedging method when $\mathcal{X}$ is a simplex, and $\Phi=\\{\phi:\exists\,a\in A\,\,\forall x\,\phi(x)=\delta_{a}\\}$. In other words, the rule (7) is a fixed point of $M_{t}^{\Phi}(x)$ for external regret. If an upper bound on $F$ provides a regret bound, as in the previous sections, then optimistic Lagrangian hedging can be generalized to the $\Phi$-regret setting by defining a new operator, $\displaystyle\tilde{M}_{t}^{\Phi}(x)=\frac{\sum_{\phi\in\Phi}(\nabla F(\eta_{t}(s^{\Phi}_{1:t-1}+m_{t})))_{\phi}\phi(x)}{\langle\nabla F(\eta_{t}(s^{\Phi}_{1:t-1}+m_{t})),\mathbf{1}\rangle}.$ (22) The main result is a theorem analogous to Theorem 1, except with the new regret vector $s^{\Phi}_{1:t}$. ###### Theorem 3.
An optimistic Lagrangian hedging algorithm playing a fixed point of $\tilde{M}^{\Phi}_{t}$, with a convex potential function $F$ satisfying conditions (3-4) and positive decreasing stepsizes $0<\eta_{t}\leq\eta_{t-1}$, ensures $F(\eta_{T}s^{\Phi}_{1:T})\leq\frac{L}{2}\sum_{t=1}^{T}\eta_{T}\eta_{t}\left\lVert s^{\Phi}_{t}-m_{t}\right\rVert^{2}.$ The proof is identical to that of Theorem 1, except we use the Blackwell condition $\langle\nabla F(\eta_{t}(s^{\Phi}_{1:t-1}+m_{t})),s_{t}^{\Phi}\rangle\leq 0$ (see the appendix for details). To the best of our knowledge, this yields the first set of optimistic and adaptive algorithms for minimizing internal and swap regret. ### Lagrangian Hedging+ In this section we extend optimistic Lagrangian hedging in the $\Phi$-regret setting to use a modified regret vector $s^{\Phi+}_{1:t}=(s^{\Phi+}_{1:t-1}+s^{\Phi}_{t})^{+}.$ This modification is inspired by the regret-matching+ algorithm, which has been successfully used to solve large two-player zero-sum games and play poker at an expert level (Tammelin 2014; Tammelin et al. 2015; Burch 2017). Indeed, this framework generalizes regret-matching+ beyond external regret and beyond regret-matching. With the modified regret vector $s^{\Phi+}_{1:t}$, the safe set remains $\mathbb{R}^{|\Phi|}_{\leq 0}$ because of the componentwise inequality $s^{\Phi}_{1:t}\leq s^{\Phi+}_{1:t}.$ Therefore, $s^{\Phi+}_{1:t}\in\mathbb{R}^{|\Phi|}_{\leq 0}$ implies $s^{\Phi}_{1:t}\in\mathbb{R}^{|\Phi|}_{\leq 0}$, and we have $R^{T}_{\Phi}=\max_{\phi\in\Phi}(s^{\Phi}_{1:T})_{\phi}\leq\max_{\phi\in\Phi}(s^{\Phi+}_{1:T})_{\phi}.$ As one would expect, we define optimistic Lagrangian hedging+ with the operator $M_{t}$ but modified to use the regret vector $s^{\Phi+}_{1:t}$ and a prediction $m_{t}$. 
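A minimal sketch of the modified update and the componentwise inequality it preserves (assuming a generic regret stream, not tied to any particular game):

```python
import numpy as np

def rm_plus_update(s_plus, s_t):
    """One +-style update: add the new regret, then clip at zero."""
    return np.maximum(s_plus + s_t, 0.0)

rng = np.random.default_rng(1)
d = 4
s_plain = np.zeros(d)  # ordinary cumulative regret vector s_{1:t}
s_plus = np.zeros(d)   # modified vector s^{Phi+}_{1:t}
for _ in range(100):
    s_t = rng.normal(size=d)
    s_plain += s_t
    s_plus = rm_plus_update(s_plus, s_t)
    # componentwise s_{1:t} <= s^{Phi+}_{1:t}, so driving s^{Phi+}
    # toward the negative orthant also bounds s_{1:t}
    assert np.all(s_plain <= s_plus + 1e-12)
```

The assertion inside the loop checks exactly the componentwise inequality invoked above, so any bound on $\max_{\phi}(s^{\Phi+}_{1:t})_{\phi}$ transfers to the unmodified regret vector.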
$\displaystyle\tilde{M}_{t}^{\Phi+}(x)=\frac{\sum_{\phi\in\Phi}(\nabla F(\eta_{t}(s^{\Phi+}_{1:t-1}+m_{t})))_{\phi}\phi(x)}{\langle\nabla F(\eta_{t}(s^{\Phi+}_{1:t-1}+m_{t})),\mathbf{1}\rangle}.$ (23) As in the previous section, if we can control the growth of $F(s^{\Phi+}_{1:t})$ and $F$ indeed provides an upper bound to the safe set, then a regret bound is attainable. However, we must make an additional assumption on $F$, which we call _positive invariant and smooth_ (D’Orazio 2020). That is, $F((x+y)^{+})\leq F(x)+\langle\partial F(x),y\rangle+\frac{L}{2}\left\lVert y\right\rVert^{2}.$ Once again, equipped with the new smoothness condition and the Blackwell condition $\langle\nabla F(\eta_{t}(s^{\Phi+}_{1:t-1}+m_{t})),s_{t}^{\Phi}\rangle\leq 0,$ which is guaranteed by playing the fixed point (23) (see the appendix for details), we have the following bound on $F$. ###### Theorem 4. An optimistic Lagrangian hedging+ algorithm playing a fixed point of $\tilde{M}_{t}^{\Phi+}$, with a convex potential function $F$ that is positive invariant and smooth and satisfying condition (3), with positive decreasing stepsizes $0<\eta_{t}\leq\eta_{t-1}$, ensures $F(\eta_{T}s^{\Phi+}_{1:T})\leq\frac{L}{2}\sum_{t=1}^{T}\eta_{T}\eta_{t}\left\lVert s^{\Phi}_{t}-m_{t}\right\rVert^{2}.$ ### $\Phi$ Regret Examples Inspired by Lagrangian hedging, Greenwald, Li, and Marks present different potential functions that are appropriate for minimizing $\Phi$ regret. These include a polynomial family of algorithms, with regret-matching as a special case, and an exponential variant which amounts to the hedge algorithm when external regret is minimized. #### Polynomial $\displaystyle F(x)=\left\lVert x^{+}\right\rVert^{2}_{p}\quad p\geq 2,$ (24) $F$ is smooth with $L=2(p-1)$ with respect to the $p$-norm $\left\lVert\cdot\right\rVert_{p}$. Greenwald, Li, and Marks showed that an upper bound $F(s_{1:T})\leq K$ translates into the regret bound $R^{T}_{\Phi}\leq\sqrt{K}$. 
When an algorithm uses the modified regret vector $s_{1:T}^{\Phi+}$, it is easy to show $\max_{\phi\in\Phi}(s^{\Phi+}_{1:t})_{\phi}\leq\sqrt{F(s^{\Phi+}_{1:t})}\leq\sqrt{K}$, since $F(x^{+})=F(x)$ and $F$ is therefore positive invariant and smooth. When $p=2$ and $\Phi$ corresponds to external regret, the gradient of $F(x)$ is proportional to $x^{+}$, which gives the regret-matching algorithm when $m_{t}=0$, and the regret-matching+ algorithm if $s^{\Phi+}_{1:t}$ is used with $m_{t}=0$. Notice that we exactly recover the regret-matching bound (15) in this case by applying the upper bound from Theorem 3 with no stepsize ($\eta_{t}=1$). Greenwald, Li, and Marks also showed that the polynomial case can be extended to $1<p<2$ with the potential function $F(x)=\left\lVert x^{+}\right\rVert^{p}_{p}\quad 1<p<2.$ However, the smoothness condition (4) must be modified by replacing $\left\lVert\cdot\right\rVert^{2}$ with $\left\lVert\cdot\right\rVert^{p}$. This does not change the analysis, but the bounds need to be adjusted accordingly. More importantly, the regret bound degrades as $p$ approaches $1$: $R^{T}_{\Phi}\leq K^{1/p}$. As in the case $p\geq 2$, if $1<p<2$ then bounds on $\max_{\phi\in\Phi}(s^{\Phi+}_{1:t})_{\phi}$ are attainable since $F(x^{+})=F(x)$, and hence $F$ is positive invariant and smooth (see D’Orazio 2020 for more details). #### Exponential In addition to the polynomial family, we can pick the exponential variant with potential function $F(\eta x)=\ln\sum_{i}e^{\eta x_{i}}-\ln(d),$ where $x\in\mathbb{R}^{d}$, $L=1$, and $\left\lVert\cdot\right\rVert^{2}=\left\lVert\cdot\right\rVert^{2}_{\infty}$ for the smoothness condition. It can also be shown that $\max_{i}x_{i}$ for any vector $x\in\mathbb{R}^{d}$ is upper bounded by $\frac{1}{\eta}(F(\eta x)+\ln(d))$, therefore the bound on $F$ from Theorem 3 gives an upper bound on $\max_{\phi}(s^{\Phi}_{1:t})_{\phi}$ with $d=|\Phi|$. 
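To make the two families concrete, here is a hedged Python sketch (our own illustration, not code from the paper): for the $p=2$ polynomial potential the play is proportional to $s^{+}$ (regret matching), while for the exponential potential the play is the softmax of $\eta s$ (hedge).

```python
import numpy as np

def regret_matching(s):
    """Play proportional to s^+, the (scaled) gradient of ||x^+||_2^2."""
    g = np.maximum(s, 0.0)
    total = g.sum()
    if total <= 0.0:  # no positive regret: every point satisfies the condition
        return np.full(len(s), 1.0 / len(s))
    return g / total

def hedge(s, eta=1.0):
    """Play softmax(eta * s), the gradient of the exponential potential."""
    z = eta * s
    z = z - z.max()  # stabilize the exponentials
    w = np.exp(z)
    return w / w.sum()

s = np.array([2.0, -1.0, 1.0])
x_rm = regret_matching(s)  # -> [2/3, 0, 1/3]
x_hg = hedge(s)            # heaviest weight on the first coordinate
```

Both rules depend on the cumulative regret vector only through the potential's gradient, which is exactly the structure exploited by the fixed-point operator above.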
The gradient of $F(x)$ is the softmax function and gives the hedge algorithm if $\Phi$ is chosen to correspond with external regret. ## Related Work An important instance of Lagrangian hedging is regret-matching, an algorithm typically used within the game theory community (Hart and Mas-Colell 2000; Zinkevich et al. 2008), and a special case of Blackwell’s algorithm (Blackwell et al. 1956). At the time of writing, Farina, Kroer, and Sandholm have also analyzed an optimistic variant of regret-matching and its popular variant regret-matching+, named predictive regret-matching and predictive regret-matching+, respectively (Farina, Kroer, and Sandholm 2020). On the surface, our analysis provides more generality, as it includes both of their variants of regret-matching and more. However, it is conceivable that the main tool used in their paper, the equivalence of Blackwell approachability and online linear optimization (Abernethy, Bartlett, and Hazan 2011), provides the generality to analyze other optimistic Blackwell-style algorithms. More importantly, we do not believe the tools from Abernethy, Bartlett, and Hazan are equivalent to Lagrangian hedging. Further investigation is left to future work. ## Conclusion In this paper we extend Lagrangian hedging to include optimism, a guess $m_{t}$ of how the regret vector will change, and adaptive stepsizes. The regret bounds attained for optimistic and adaptive Lagrangian hedging lead to a path-length bound in constrained smooth convex optimization. Furthermore, we devise optimistic and adaptive algorithms to minimize $\Phi$ regret, a generalization of external regret that includes internal regret, including a new class of algorithms that generalizes regret-matching+. The analysis in this paper provides new algorithms; experimental evaluation is left to future work. 
For example, do the new optimistic and adaptive algorithms for internal regret provide better convergence to correlated equilibria than their non-optimistic counterparts? Additionally, in this work the step size scheme (16) is prescribed for potential functions with parameter $p=1$, which amounts to a new step size scheme for the well-known hedge algorithm, an algorithm with many preexisting adaptive variants; does this scheme provide any benefits over other adaptive schemes in practice? ## References * Abernethy, Bartlett, and Hazan (2011) Abernethy, J.; Bartlett, P. L.; and Hazan, E. 2011. Blackwell approachability and no-regret learning are equivalent. In _Proceedings of the 24th Annual Conference on Learning Theory_ , 27–46. * Abernethy, Hazan, and Rakhlin (2009) Abernethy, J. D.; Hazan, E.; and Rakhlin, A. 2009. Competing in the dark: An efficient algorithm for bandit linear optimization. * Beck and Teboulle (2003) Beck, A.; and Teboulle, M. 2003. Mirror descent and nonlinear projected subgradient methods for convex optimization. _Operations Research Letters_ 31(3): 167–175. * Blackwell et al. (1956) Blackwell, D.; et al. 1956. An analog of the minimax theorem for vector payoffs. _Pacific Journal of Mathematics_ 6(1): 1–8. * Burch (2017) Burch, N. 2017. _Time and Space: Why Imperfect Information Games are Hard_. Ph.D. thesis, University of Alberta. * Cesa-Bianchi and Lugosi (2003) Cesa-Bianchi, N.; and Lugosi, G. 2003. Potential-based algorithms in on-line prediction and game theory. _Machine Learning_ 51(3): 239–261. * Cesa-Bianchi and Lugosi (2006) Cesa-Bianchi, N.; and Lugosi, G. 2006. _Prediction, learning, and games_. Cambridge University Press. * Chiang et al. (2012) Chiang, C.-K.; Yang, T.; Lee, C.-J.; Mahdavi, M.; Lu, C.-J.; Jin, R.; and Zhu, S. 2012. Online optimization with gradual variations. In _Conference on Learning Theory_ , 6–1. * Cutkosky (2019) Cutkosky, A. 2019. Anytime online-to-batch, optimism and acceleration. 
In _International Conference on Machine Learning_ , 1446–1454. * Daskalakis and Panageas (2018) Daskalakis, C.; and Panageas, I. 2018. Last-iterate convergence: Zero-sum games and constrained min-max optimization. _arXiv preprint arXiv:1807.04252_ . * D’Orazio (2020) D’Orazio, R. 2020. _Regret Minimization with Function Approximation in Extensive-Form Games_. Master’s thesis, University of Alberta. * Farina et al. (2019) Farina, G.; Kroer, C.; Brown, N.; and Sandholm, T. 2019. Stable-Predictive Optimistic Counterfactual Regret Minimization. In _International Conference on Machine Learning_ , 1853–1862. * Farina, Kroer, and Sandholm (2019) Farina, G.; Kroer, C.; and Sandholm, T. 2019. Optimistic regret minimization for extensive-form games via dilated distance-generating functions. In _Advances in Neural Information Processing Systems_ , 5221–5231. * Farina, Kroer, and Sandholm (2020) Farina, G.; Kroer, C.; and Sandholm, T. 2020. Faster Game Solving via Predictive Blackwell Approachability: Connecting Regret Matching and Mirror Descent. _arXiv preprint arXiv:2007.14358_ . * Freund and Schapire (1997) Freund, Y.; and Schapire, R. E. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. _Journal of computer and system sciences_ 55(1): 119–139. * Gordon (2007) Gordon, G. J. 2007. No-regret algorithms for online convex programs. In _Advances in Neural Information Processing Systems_ , 489–496. * Greenwald, Li, and Marks (2006) Greenwald, A.; Li, Z.; and Marks, C. 2006. Bounds for Regret-Matching Algorithms. In _ISAIM_. * Hart and Mas-Colell (2000) Hart, S.; and Mas-Colell, A. 2000. A Simple Adaptive Procedure Leading to Correlated Equilibrium. _Econometrica_ 68(5): 1127–1150. * Hazan et al. (2016) Hazan, E.; et al. 2016. Introduction to Online Convex Optimization. _Foundations and Trends® in Optimization_ 2(3-4): 157–325. * Joulani, György, and Szepesvári (2017) Joulani, P.; György, A.; and Szepesvári, C. 2017. 
A Modular Analysis of Adaptive (Non-) Convex Optimization: Optimism, Composite Objectives, and Variational Bounds. _Journal of Machine Learning Research_ 1: 40. * Joulani et al. (2020) Joulani, P.; Raj, A.; Gyorgy, A.; and Szepesvari, C. 2020. A simpler approach to accelerated optimization: iterative averaging meets optimism. In III, H. D.; and Singh, A., eds., _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , 4984–4993. PMLR. URL http://proceedings.mlr.press/v119/joulani20a.html. * Mohri and Yang (2016) Mohri, M.; and Yang, S. 2016. Accelerating online convex optimization via adaptive prediction. In _Artificial Intelligence and Statistics_ , 848–856. * Nemirovsky and Yudin (1983) Nemirovsky, A. S.; and Yudin, D. B. 1983. Problem complexity and method efficiency in optimization. . * Nesterov (2009) Nesterov, Y. 2009. Primal-dual subgradient methods for convex problems. _Mathematical programming_ 120(1): 221–259. * Nesterov (2018) Nesterov, Y. 2018. _Lectures on convex optimization_ , volume 137. Springer. * Orabona (2019) Orabona, F. 2019. A modern introduction to online learning. _arXiv preprint arXiv:1912.13213_ . * Rakhlin and Sridharan (2013a) Rakhlin, A.; and Sridharan, K. 2013a. Online Learning with Predictable Sequences. In _Conference on Learning Theory_ , 993–1019. * Rakhlin and Sridharan (2013b) Rakhlin, S.; and Sridharan, K. 2013b. Optimization, Learning, and Games with Predictable Sequences. In Burges, C. J. C.; Bottou, L.; Welling, M.; Ghahramani, Z.; and Weinberger, K. Q., eds., _Advances in Neural Information Processing Systems 26_ , 3066–3074. Curran Associates, Inc. URL http://papers.nips.cc/paper/5147-optimization-learning-and-games-with-predictable-sequences.pdf. * Schuurmans and Zinkevich (2016) Schuurmans, D.; and Zinkevich, M. A. 2016. Deep learning games. In _Advances in Neural Information Processing Systems_ , 1678–1686. 
* Shalev-Shwartz and Singer (2006) Shalev-Shwartz, S.; and Singer, Y. 2006. Online learning meets optimization in the dual. In _International Conference on Computational Learning Theory_ , 423–437. Springer. * Syrgkanis et al. (2015) Syrgkanis, V.; Agarwal, A.; Luo, H.; and Schapire, R. E. 2015. Fast convergence of regularized learning in games. In _Advances in Neural Information Processing Systems_ , 2989–2997. * Tammelin (2014) Tammelin, O. 2014. Solving large imperfect information games using CFR+. _arXiv preprint arXiv:1407.5042_ . * Tammelin et al. (2015) Tammelin, O.; Burch, N.; Johanson, M.; and Bowling, M. 2015. Solving heads-up limit Texas Hold’em. In _Twenty-Fourth International Joint Conference on Artificial Intelligence_. * Warmuth and Jagota (1997) Warmuth, M. K.; and Jagota, A. K. 1997. Continuous and discrete-time nonlinear gradient descent: Relative loss bounds and convergence. In _Electronic proceedings of the 5th International Symposium on Artificial Intelligence and Mathematics_ , volume 326. Citeseer. * Zinkevich (2003) Zinkevich, M. 2003. Online convex programming and generalized infinitesimal gradient ascent. In _Proceedings of the 20th International Conference on Machine Learning (ICML-03)_ , 928–936. * Zinkevich et al. (2008) Zinkevich, M.; Johanson, M.; Bowling, M.; and Piccione, C. 2008. Regret minimization in games with incomplete information. In _Advances in neural information processing systems_ , 1729–1736. ## Appendix A Appendix ## Appendix B Proof of Lemma 2 Let $a_{0}\geq 0$ and $0\leq a_{i}\leq C$ for $i>0$. If $f$ is a non-negative decreasing function, then $\sum_{t=1}^{T}a_{t}f(a_{0}+\sum_{i=1}^{t-1}a_{i})\leq(C-a_{0})f(a_{0})+\int_{a_{0}}^{s_{T-1}}f(x)dx.$ ###### Proof. Let $s_{t}=\sum_{i=0}^{t}a_{i}$. 
$\displaystyle a_{t}f(a_{0}+\sum_{i=1}^{t-1}a_{i})$ $\displaystyle=a_{t}f(s_{t-1})=(a_{t}-a_{t-1})f(s_{t-1})+a_{t-1}f(s_{t-1})$ $\displaystyle=(a_{t}-a_{t-1})f(s_{t-1})+\int_{s_{t-2}}^{s_{t-1}}f(s_{t-1})dx$ $\displaystyle\leq(a_{t}-a_{t-1})f(s_{t-1})+\int_{s_{t-2}}^{s_{t-1}}f(x)dx$ So we have that $\sum_{t=1}^{T}a_{t}f(a_{0}+\sum_{i=1}^{t-1}a_{i})\leq\sum_{t=1}^{T}(a_{t}-a_{t-1})f(s_{t-1})+\int_{a_{0}}^{s_{T-1}}f(x)dx.$ Now we will apply the summation by parts formula to analyze the first sum. $\displaystyle\sum_{t=1}^{T}(a_{t}-a_{t-1})f(s_{t-1})$ $\displaystyle=f(s_{T-1})a_{T}-f(a_{0})a_{0}-\sum_{t=2}^{T}a_{t-1}(f(s_{t-1})-f(s_{t-2}))$ $\displaystyle=f(s_{T-1})a_{T}-f(a_{0})a_{0}+\sum_{t=1}^{T-1}a_{t}(f(s_{t-1})-f(s_{t}))$ $\displaystyle\leq f(s_{T-1})a_{T}-f(a_{0})a_{0}+\sum_{t=1}^{T-1}C(f(s_{t-1})-f(s_{t}))$ $\displaystyle=f(s_{T-1})a_{T}-f(a_{0})a_{0}+Cf(a_{0})-Cf(s_{T-1})$ $\displaystyle\leq(C-a_{0})f(a_{0})$ The first inequality is due to $f$ being a decreasing function, hence $f(s_{t-1})-f(s_{t})\geq 0$, and because $0\leq a_{i}\leq C$. The last inequality also follows because $a_{T}\leq C$. ∎ ## Appendix C Proof of Theorem 2 An optimistic Lagrangian hedging algorithm with a convex potential function $F$ satisfying conditions (1-3), with $p=1$ and stepsizes $\eta_{t}=\frac{1}{\sqrt{\frac{1}{\eta_{1}^{2}}+\sum_{k=1}^{t-1}\left\lVert s_{k}-m_{k}\right\rVert^{2}}}\quad t>1,$ and $\eta_{1}\leq\sqrt{\frac{3}{C}}$, attains the following regret bound $R^{T}_{\mathcal{X}}\leq\frac{D}{B}\left((L+A)\sqrt{\frac{1}{\eta_{1}^{2}}+\sum_{t=1}^{T-1}\left\lVert s_{t}-m_{t}\right\rVert^{2}}\right).$ ###### Proof. 
Recall assumption (5) with $p=1$ and inequality (2); then a non-negative upper bound $F(\eta_{T}s_{1:T})\leq K$ translates into the regret bound $R^{T}_{\mathcal{X}}\leq D\left(\frac{K}{B\eta_{T}}+\frac{A}{B\eta_{T}}\right).$ From Theorem 1 we have that $0\leq K=\frac{L}{2}\sum_{t=1}^{T}\eta_{T}\eta_{t}\left\lVert s_{t}-m_{t}\right\rVert^{2}$ is a valid upper bound. Therefore $R^{T}_{\mathcal{X}}\leq\frac{D}{B}\left(\frac{L}{2}\sum_{t=1}^{T}\eta_{t}\left\lVert s_{t}-m_{t}\right\rVert^{2}+\frac{A}{\eta_{T}}\right).$ We now apply Lemma 2 to the sum across $T$ rounds by noticing that $\eta_{t}=f(a_{0}+\sum_{i=1}^{t-1}a_{i})$, where $a_{i}=\left\lVert s_{i}-m_{i}\right\rVert^{2}$, $f(x)=\frac{1}{\sqrt{x}}$, and $a_{0}=\frac{1}{\eta_{1}^{2}}$. By assumption we also have that $0\leq a_{i}=\left\lVert s_{i}-m_{i}\right\rVert^{2}\leq C$. Therefore, by Lemma 2, $\displaystyle\sum_{t=1}^{T}\eta_{t}\left\lVert s_{t}-m_{t}\right\rVert^{2}\leq$ $\displaystyle\left(C-\frac{1}{\eta_{1}^{2}}\right)\eta_{1}+2\left(\sqrt{\frac{1}{\eta_{1}^{2}}+\sum_{t=1}^{T}\left\lVert s_{t}-m_{t}\right\rVert^{2}}-\frac{1}{\eta_{1}}\right)$ $\displaystyle=C\eta_{1}-\frac{3}{\eta_{1}}+2\sqrt{\frac{1}{\eta_{1}^{2}}+\sum_{t=1}^{T}\left\lVert s_{t}-m_{t}\right\rVert^{2}}$ $\displaystyle\leq 2\sqrt{\frac{1}{\eta_{1}^{2}}+\sum_{t=1}^{T}\left\lVert s_{t}-m_{t}\right\rVert^{2}}\quad\mbox{ if }\eta_{1}\leq\sqrt{\frac{3}{C}}.$ ∎ ## Appendix D Proof of Theorem 4 An optimistic Lagrangian hedging+ algorithm playing a fixed point of $\tilde{M}^{\Phi+}$, with a convex potential function $F$ that is positive invariant and smooth, satisfying condition (3), with positive decreasing stepsizes $0<\eta_{t}\leq\eta_{t-1}$, ensures $F(\eta_{T}s^{\Phi+}_{1:T})\leq\frac{L}{2}\sum_{t=1}^{T}\eta_{T}\eta_{t}\left\lVert s^{\Phi}_{t}-m_{t}\right\rVert^{2}.$ ###### Proof. The proof closely resembles that of Theorem 1. 
$\displaystyle F(\eta_{t}s^{\Phi+}_{1:t})=F(\eta_{t}(s^{\Phi+}_{1:t-1}+s^{\Phi}_{t}+m_{t}-m_{t})^{+})$ $\displaystyle\leq F(\eta_{t}(s^{\Phi+}_{1:t-1}+m_{t}))+\langle\nabla F(\eta_{t}(s^{\Phi+}_{1:t-1}+m_{t})),\eta_{t}(s^{\Phi}_{t}-m_{t})\rangle$ $\displaystyle+\eta_{t}^{2}\frac{L}{2}\left\lVert s^{\Phi}_{t}-m_{t}\right\rVert^{2}$ $\displaystyle\leq F(\eta_{t}(s^{\Phi+}_{1:t-1}+m_{t}))-F(\eta_{t}s^{\Phi+}_{1:t-1})+F(\eta_{t}s^{\Phi+}_{1:t-1})$ $\displaystyle+\langle\nabla F(\eta_{t}(s^{\Phi+}_{1:t-1}+m_{t})),-\eta_{t}m_{t}\rangle+\eta_{t}^{2}\frac{L}{2}\left\lVert s^{\Phi}_{t}-m_{t}\right\rVert^{2}.$ The rest follows from the same arguments as Theorem 1. ∎ ## Appendix E The $\Phi$ Regret Fixed Point Greenwald, Li, and Marks showed that an $x_{t}$ that is a fixed point of $M_{t}$ is guaranteed to satisfy the generalized Blackwell condition $\langle\nabla F(s^{\Phi}_{1:t-1}),s_{t}^{\Phi}\rangle\leq 0.$ Our results reuse this observation by modifying the regret vector $s^{\Phi}_{1:t-1}$ with a prediction and possibly using the modified $s^{\Phi+}_{1:t-1}$ vector. More generally, we use the following result, which follows directly from Greenwald, Li, and Marks: the inequality $\langle\nabla F(z),s_{t}^{\Phi}\rangle\leq 0$ holds if the operator used to construct the fixed point is defined to be $M_{t}^{\Phi}(x)=\frac{\sum_{\phi\in\Phi}(\nabla F(z))_{\phi}\phi(x)}{\langle\nabla F(z),\mathbf{1}\rangle}.$
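As a numerical sanity check (our own sketch, assuming the external-regret instantiation with the $p=2$ polynomial potential), the fixed point of $M_{t}^{\Phi}$ reduces to playing the normalized positive part of the regret vector, and the Blackwell condition then holds with equality:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
s = rng.normal(size=d)    # running external-regret vector s_{1:t-1}
g = np.maximum(s, 0.0)    # gradient of F(x) = ||x^+||_2^2, up to a factor of 2
if g.sum() == 0.0:
    g = np.ones(d)        # degenerate case: any simplex point works
x = g / g.sum()           # the fixed point of M_t for external regret

loss = rng.normal(size=d)
s_t = loss @ x - loss     # instantaneous regret vector, one entry per action
# Blackwell condition <grad F, s_t> <= 0; here it holds with equality
assert abs(g @ s_t) < 1e-8
```

The equality follows from $\langle g, s_{t}\rangle=(\ell\cdot x)\sum_{a}g_{a}-g\cdot\ell=0$ when $x=g/\sum_{a}g_{a}$, matching the fixed-point argument in the appendix.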
# Curvature Invariants in a Binary Black Hole Merger Jeremy M. Peters Department of Mathematics and Statistics, Dalhousie University, Halifax, Nova Scotia, B3H 3J5, Canada Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5, Canada Alan Coley Department of Mathematics and Statistics, Dalhousie University, Halifax, Nova Scotia, B3H 3J5, Canada Erik Schnetter Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5, Canada Department of Physics & Astronomy, University of Waterloo, Waterloo, ON N2L 3G1, Canada Center for Computation & Technology, Louisiana State University, Baton Rouge, LA 70803, USA ###### Abstract We study curvature invariants in a binary black hole merger. It has been conjectured that one could define a quasi-local and foliation independent black hole horizon by finding the level–$0$ set of a suitable curvature invariant of the Riemann tensor. The conjecture is the geometric horizon conjecture and the associated horizon is the geometric horizon. We study this conjecture by tracing the level–$0$ set of the complex scalar polynomial invariant, $\mathcal{D}$, through a quasi-circular binary black hole merger. We approximate these level–$0$ sets of $\mathcal{D}$ with level–$\varepsilon$ sets of $|\mathcal{D}|$ for small $\varepsilon$. We locate the local minima of $|\mathcal{D}|$ and find that the positions of these local minima correspond closely to the level–$\varepsilon$ sets of $|\mathcal{D}|$ and we also compare with the level–$0$ sets of $\text{Re}(\mathcal{D})$. The analysis provides evidence that the level–$\varepsilon$ sets track a unique geometric horizon. By studying the behaviour of the zero sets of $\text{Re}(\mathcal{D})$ and $\text{Im}(\mathcal{D})$ and also by studying the MOTSs and apparent horizons of the initial black holes, we observe that the level–$\varepsilon$ set that best approximates the geometric horizon is given by $\varepsilon=10^{-3}$. 
E-mail<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ## 1 Introduction ### 1.1 Black Hole Horizons Black holes are solutions of general relativity and are most naturally characterized by their event horizon. The event horizon of a black hole (BH) is defined as the boundary of the causal past of future null infinity. Intuitively, this means that from the inner side of the event horizon, light cannot escape to null infinity. Notice that event horizons require knowledge of the global structure of spacetime [8, 9, 10]. However, for numerical relativity it is more convenient to use an initial value formulation of GR (a 3+1 approach), where initial data is given on a Cauchy hypersurface and is then evolved forward in time. This approach requires an alternative description of BH horizons which does not depend on the BH’s future [61, 60, 3, 4, 36, 29, 31, 32, 33]. Let $\mathcal{S}$ be a compact 2D surface without boundary and of spherical topology, and consider light rays leaving and entering $\mathcal{S}$, with directions $l$ and $n$, respectively. Let $q_{ab}$ be the induced metric on $\mathcal{S}$ and denote the respective expansions as $\Theta_{(l)}=q^{ab}\nabla_{a}l_{b}$ and $\Theta_{(n)}=q^{ab}\nabla_{a}n_{b}$ [55]. Then $\Theta_{(l)}$ and $\Theta_{(n)}$ are positive if the light rays locally diverge, negative if they locally converge, and zero if they are locally parallel. We say that $\mathcal{S}$ is a trapped surface if $\Theta_{(l)}<0$ and $\Theta_{(n)}<0$ [8, 48, 53, 50]. Define $\mathcal{S}$ to be a marginally outer trapped surface (MOTS) if it has zero expansion for the outgoing light rays, $\Theta_{(l)}=0$ [53, 50, 51, 57, 27, 30]. ($\mathcal{S}$ is a future MOTS if $\Theta_{(l)}=0$ and $\Theta_{(n)}<0$, and a past MOTS if $\Theta_{(l)}=0$ and $\Theta_{(n)}>0$ [51]). 
MOTSs turn out to be well-behaved numerically, and can be used to trace physical quantities of a BH as they evolve over time and through a BBH merger [57, 27, 30, 53, 54]. In practice, it is common to view MOTSs as contained in a given 3D Cauchy surface. Within such a surface, the outermost MOTS is called the apparent horizon (AH) [53, 50, 51, 57, 27, 30]. AHs have many applications in numerical relativity, since tracking an AH only requires knowledge of the intrinsic metric $q_{ab}$ restricted to the spacetime hypersurface and the extrinsic curvature of that hypersurface at a given time [28, 10, 27]. For example, AHs are useful for studying gravitational waves, as gravitational fields at the AH are correlated with gravitational wave signals [28, 39, 38, 35, 30, 55, 34]. AHs are also used to numerically simulate binary black hole (BBH) mergers and the collapse of a star to form a BH [13]. As another example, AHs play a role in checking initial parameters and reading off final parameters of Kerr black holes in gravitational wave simulations at LIGO [13, 2, 1]. One possible disadvantage of AHs is that their definition as the "outermost MOTS" relies on the given foliation of the spacetime into Cauchy surfaces [7, 57]. If one smoothly evolves a given MOTS forward in time, one obtains a world tube which is foliated by these MOTSs. This world tube is known as a dynamical horizon (DH) [14, 8, 9, 10]. One application of DHs is that they could contribute to our understanding of BH formation [8, 9, 10, 13]. As is the case with AHs, DHs are dependent on the foliation of the spacetime into Cauchy surfaces, as this spacetime foliation corresponds uniquely to a DH with a unique foliation into MOTSs [36, 7, 3, 4]. Furthermore, the above definitions of MOTSs, AHs and DHs serve as a quasi-local description of BHs [22, 21]. 
It has been conjectured that one can uniquely define a smooth, locally determined and foliation invariant horizon based on the algebraic (Petrov) classification of the Weyl tensor [22, 21]. The necessary conditions for the Weyl tensor to be of a certain Petrov type can be stated in terms of scalar polynomials in the Riemann tensor and its contractions, which are called scalar polynomial (curvature) invariants (SPIs). The first aim of this work is to study certain SPIs numerically during a BBH merger. The Petrov classification is an eigenvalue classification of the Weyl tensor, valid in four dimensions (4D). Based on this classification, there are six different Petrov types for the Weyl tensor in 4D: types ${\bf I}$, ${\bf II}$, ${\bf D}$, ${\bf III}$, ${\bf N}$ and ${\bf O}$ (conformally flat spacetime). One can also use the boost weight decomposition to classify the Weyl tensor, which is equivalent in 4D to the Petrov classification. One can also algebraically classify the symmetric trace-free operator $S_{ab}$, that is, the trace-free Ricci tensor, which is equivalent to the Segre classification [59]. The boost weight algebraic classification generalizes the Petrov classification to $N$-dimensional spacetimes [22, 21, 23, 24, 15, 45]. In $N$ dimensions, and with Lorentzian signature $(+1,-1,\ldots,-1)$, we start with the frame of $N$–vectors, $\\{{\bf l},{\bf n},\\{{\bf m}_{i}\\}_{i=2}^{N-1}\\}$, where ${\bf l}$ and ${\bf n}$ are null and future pointing, ${\bf l}\cdot{\bf n}=1$, and the $\\{{\bf m}_{i}\\}$ are real, spacelike, mutually orthonormal, and span the orthogonal complement to the plane spanned by ${\bf l}$ and ${\bf n}$. The possible orthochronous Lorentz transformations are generated by null rotations about ${\bf l}$, null rotations about ${\bf n}$, spins (which involve rotations about ${\bf m}_{i}$), and boosts [45]. 
With respect to the given frame, boosts are given by the transformation: $\displaystyle{\bf l}$ $\displaystyle\rightarrow\lambda{\bf l}$ $\displaystyle{\bf n}$ $\displaystyle\rightarrow\lambda^{-1}{\bf n}$ $\displaystyle{\bf m_{i}}$ $\displaystyle\rightarrow{\bf m_{i}}$ for all $i\in\\{2,\ldots,N-1\\}$ and for some $\lambda\in\mathbb{R}\backslash\\{0\\}$. (The remaining transformations are given in [23, 24, 15, 45].) It is possible to decompose the Weyl tensor into components organized by boost weight [23, 24, 15]. It is of particular interest to know whether a given 4D spacetime is of special algebraic type II or D. We can state the necessary conditions as discriminant conditions in terms of simple $SPI$s [22, 21, 16, 18]. Just as an $SPI$ is a scalar obtained from a polynomial in the Riemann tensor and its contractions [22, 21], an $SPI$ of order $k$ is a scalar given as a polynomial in various contractions of the Riemann tensor and its covariant derivatives up to order $k$ [22, 21]. It turns out that BH spacetimes are completely characterized by their $SPI$s [17]. The necessary discriminant conditions on the 4D Weyl tensor for the spacetime to be of type II/D can be stated as two real conditions and are given in [17]. 
Contracting the 4D (complex) null tetrad, $({\bf l},{\bf n},{\bf m},{\overline{\bf m}})$ where ${\bf m}$ and ${\overline{\bf m}}$ are complex conjugates, with the Weyl tensor, $C_{abcd}$, one may form the complex scalars, ${\bf\Psi_{0}},{\bf\Psi_{1}},{\bf\Psi_{2}},{\bf\Psi_{3}},{\bf\Psi_{4}}$ and, in terms of these scalars, as in the Newman-Penrose (NP) formalism (discussed later), one may define the scalar invariants: $\displaystyle I$ $\displaystyle={\bf\Psi_{0}}{\bf\Psi_{4}}-4{\bf\Psi_{1}}{\bf\Psi_{3}}+3{\bf\Psi_{2}}^{2}$ (1) $\displaystyle J$ $\displaystyle=\begin{vmatrix}{\bf\Psi_{4}}&{\bf\Psi_{3}}&{\bf\Psi_{2}}\\\ {\bf\Psi_{3}}&{\bf\Psi_{2}}&{\bf\Psi_{1}}\\\ {\bf\Psi_{2}}&{\bf\Psi_{1}}&{\bf\Psi_{0}}\end{vmatrix}$ (2) It can be shown that the two aforementioned real scalar conditions are equivalent to the real and imaginary parts of the following complex syzygy [59]: $\mathcal{D}\equiv I^{3}-27J^{2}=0$ (3) Thus for Petrov types ${\bf II}$ and ${\bf D}$, equation (3) holds everywhere. It also turns out that for Petrov types ${\bf III}$, ${\bf N}$, and for ${\bf O}$, we have $I=J=0$, so (3) is satisfied trivially. ### 1.2 The Geometric Horizon Conjecture Having discussed the Petrov and boost weight classifications, we now turn to the Geometric Horizon Conjecture (GHC) in which we define the geometric horizon (GH) as the set on which the $SPI$s, defined in (3), vanish [22, 21]. The level–$0$ sets of these $SPI$s might not form a horizon with nice properties, however, since these $SPI$s could vanish additionally on axes of symmetry or fixed points of isometries [22, 21]. We know from (3) that if the spacetime is algebraically special, then the given complex $SPI$ vanishes. More precisely, the GHC is given as follows [22, 21]: #### GH Conjecture: If a BH spacetime is zeroth-order algebraically general, then on the geometric horizon the spacetime is algebraically special. We can identify this geometric horizon using scalar curvature invariants. 
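As an illustrative numerical sketch (not from the paper), the invariants $I$ and $J$ and the syzygy $\mathcal{D}=I^{3}-27J^{2}$ can be evaluated directly from the five NP Weyl scalars; for a type ${\bf D}$ configuration (only $\Psi_{2}$ nonzero) the syzygy vanishes, while a generic choice of scalars does not.

```python
import numpy as np

def weyl_invariants(psi):
    """I and J from the NP Weyl scalars (Psi_0, ..., Psi_4), as in (1)-(2)."""
    p0, p1, p2, p3, p4 = psi
    I = p0 * p4 - 4 * p1 * p3 + 3 * p2 ** 2
    J = np.linalg.det(np.array([[p4, p3, p2],
                                [p3, p2, p1],
                                [p2, p1, p0]], dtype=complex))
    return I, J

def syzygy(psi):
    I, J = weyl_invariants(psi)
    return I ** 3 - 27 * J ** 2

# Type D: only Psi_2 nonzero, so I = 3 Psi_2^2, J = -Psi_2^3 and D vanishes.
assert abs(syzygy([0, 0, 1.3 - 0.7j, 0, 0])) < 1e-9
# A generic set of scalars does not satisfy the syzygy.
assert abs(syzygy([1, 2, 3, 4, 5.5])) > 1e-6
```

In the numerical study below, it is precisely the level sets of $|\mathcal{D}|$ computed this way that are traced through the merger.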
#### Comments: Note that when studying the GHC, one would need to ensure that the GH exists and is unique. If the GHC were true in an algebraically general spacetime, then one could say that on this horizon the Weyl tensor is more algebraically special than in the background spacetime, and that this horizon is at least of type ${\bf II}$. This horizon is then foliation independent and quasi-local [22, 21]. If the spacetime is algebraically special, one then considers the second part of the GHC, which is analogous to the algebraic GHC above, but involving differential $SPI$s. Differential $SPI$s (of order $k\geq 1$) are scalars obtained from polynomials in the Riemann tensor and its covariant derivatives and their contractions. This second part of the GHC thus states that if a BH spacetime is algebraically special (so that on any GH the BH spacetime is automatically algebraically special), and if the first covariant derivative of the Weyl tensor is algebraically general, then on the GH the covariant derivative of the Weyl tensor is algebraically special, and this GH can also be identified as the level–$0$ set of certain differential $SPI$s [22, 21]. In this case, the GH is identified as the set of points on which the covariant derivative of the Weyl tensor, $C_{abcd;e}$, is of type ${\bf II}$ [22]. It follows that one may obtain a clearer picture of the GH by taking the level–$0$ sets of these differential $SPI$s. In addition to $SPI$s, Cartan invariants can play a role within the frame approach, and they are easier to compute. For example, Cartan invariants are useful in event horizon detection; indeed, it was demonstrated in [44, 20] that in 4D and 5D, one can construct invariants in terms of the Cartan invariants which detect the event horizon of any stationary asymptotically flat BH solution. 
One could rewrite the statement of the algebraic GHC in the language of the boost-weight classification [45] to say that "if there is some frame with respect to which the Weyl tensor in a BH spacetime has a vanishing boost-weight $+2$ term, then on the GH there is some frame with respect to which the Weyl tensor has a vanishing boost-weight $+1$ term." This desired frame is called the algebraically preferred null frame (APNF). For example, in an algebraically general 4D spacetime, the APNF is the frame in which the Weyl tensor is of algebraic type ${\bf I}$, so that the boost-weight $+2$ terms of the Weyl tensor vanish with respect to this frame, which is always possible in 4D. Then the GH is identified as the set of points on which the terms of boost weight $+1$ are zero. (If the Weyl tensor is type ${\bf II}$, then one can analyze the covariant derivative of the Weyl tensor and ask for it to be algebraically special.) The task in this frame approach to studying the GHC, therefore, is to first find this APNF, $(l,n,m,\overline{m})$, and thus the AHs/DHs which are orthogonal to $l,n$ [58, 7]. To this end, the Cartan algorithm can be used to completely fix this frame [44], and with respect to this frame one obtains the associated Cartan scalars. From these scalars, one can identify the level–$0$ set of $C_{abcd;e}$ with the GH and, via NP calculus, obtain the NP spin coefficients with respect to this APNF. It is of particular interest to study the spin coefficients $\rho$ and $\mu$, as their level–$0$ sets could be associated with the GH. However, a more careful study of $\rho$ and $\mu$ and their evolution through a BBH merger is beyond the scope of this paper. In this paper, we shall study the (algebraic) $SPI$s in relation to the first (algebraic) part of the GHC. More specifically, we will study the complex level–zero set of the invariant $\mathcal{D}=I^{3}-27J^{2}$, as given in (3), in 4D during a BBH merger.
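As an illustration of finding a frame with a vanishing boost-weight $+2$ component, recall the standard NP result (see, e.g., Chandrasekhar's treatment of null rotations) that under a null rotation about ${\bf n}$ with complex parameter $b$, $\Psi_{0}$ transforms as $\Psi_{0}'=\Psi_{0}+4b\Psi_{1}+6b^{2}\Psi_{2}+4b^{3}\Psi_{3}+b^{4}\Psi_{4}$, so the roots of this quartic single out the principal null directions. The sketch below uses arbitrarily chosen, hypothetical Weyl scalars and is not tied to any particular spacetime:

```python
import numpy as np

# Under a null rotation about n with complex parameter b, the boost-weight +2
# component transforms as (standard NP result):
#   Psi_0' = Psi_0 + 4 b Psi_1 + 6 b^2 Psi_2 + 4 b^3 Psi_3 + b^4 Psi_4.
# Illustrative (made-up) Weyl scalars for an algebraically general case:
psi = {0: 1.0 + 0.2j, 1: 0.3j, 2: 0.5, 3: -0.1, 4: 2.0}

# Quartic in b, highest degree first, as numpy.roots expects:
coeffs = [psi[4], 4 * psi[3], 6 * psi[2], 4 * psi[1], psi[0]]
roots = np.roots(coeffs)

def psi0_rotated(b):
    """Psi_0 in the frame obtained by the null rotation with parameter b."""
    return psi[0] + 4*b*psi[1] + 6*b**2*psi[2] + 4*b**3*psi[3] + b**4*psi[4]

# Each root b yields a frame in which the boost-weight +2 term vanishes.
residuals = [abs(psi0_rotated(b)) for b in roots]
```

For a generic (type ${\bf I}$) configuration the quartic has four distinct roots, matching the statement that a frame with $\Psi_{0}'=0$ always exists in 4D.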
This could possibly help provide insight as to whether one can define a proper unique horizon based on the algebraic classification of the Weyl tensor. This conjecture might have to be modified so that, instead of analyzing level–$0$ sets of the real $SPI$s, we instead analyze level–$\varepsilon$ sets for small $\varepsilon$. Such an $\varepsilon$ could be determined by locating the local minima of the $SPI$s. However, further evidence from the analysis of $\mathcal{D}_{r}$ below perhaps suggests that this is not the case.

### 1.3 Examples and Motivation

There are many examples of spacetimes that support the plausibility of the GHC, either by explicitly exhibiting GHs or by finding other established BH horizons on which the Weyl and Ricci tensors are algebraically special [44, 20, 8, 9, 10, 22, 21, 19, 59]. For example, in the Kerr spacetime, by invoking the notion of a non-expanding weakly isolated null horizon and an isolated horizon, it can be proven, using the induced metric and induced covariant derivatives on the submanifold and assuming the dominant energy condition, that the Weyl and Ricci tensors are both of type ${\bf II/D}$ on the event horizon. This means that one can extract a subset of the set of points where the Weyl and Ricci tensors are both of algebraic type ${\bf II/D}$ to define a smooth BH horizon, namely the event horizon [5, 41, 8, 9, 10, 22, 21]. It can also be shown that the covariant derivatives of the Riemann tensor are of type ${\bf II}$ on this horizon [22, 21]. The Kerr geometry approximates the spacetime outside the event horizon of a BH formed by a collapsing star. By continuity, the region just inside the event horizon must closely approximate the Kerr geometry, and it is suggested in [22] that there is a surface inside this horizon which is smooth and uniquely identified by geometric constraints [8, 9, 10, 5, 41, 22, 21], thereby qualifying as a GH.
Another example to support the GHC comes from a family of exact closed-universe solutions to the Einstein-Maxwell equations with a cosmological constant representing an arbitrary number of BHs, discovered by Kastor and Traschen (KT) [40]. Consider the merger of 2 BHs. At early times, there are two disjoint 3D GHs forming around each BH [22, 21]. However, at intermediate times, it turns out that the invariant $\mathcal{D}=I^{3}-27J^{2}$ of (3) vanishes only at the coordinate positions of each of the BHs, along certain segments of the symmetry axis, and along a 2D cylindrical surface, which expands to engulf the 2 BHs as they coalesce [22, 21]. During the intermediate process, there are 3D surfaces located at a finite distance from the axis of symmetry on which the traceless Ricci tensor (and hence the Ricci tensor, $R_{ab}$) is of algebraic type ${\bf II/D}$. There is also evidence of a minimal 3D dynamically evolving surface on which a scalar invariant, ${\cal W}_{1}$, assumes a constant minimum value. This suggests that there is a GH during the dynamical regime between the early- and late-time spacetimes [22, 21], but further investigation is needed. At late times, the spacetime then settles down to a type ${\bf D}$ Reissner–Nordström–de Sitter BH spacetime with mass $M=m_{1}+m_{2}$, which turns out to have a GH [44, 20]. So a GH forms at the beginning and end of the coalescence. For further information on the two-BH solution, see [40]. The KT solution for multiple BHs was studied and GHs around each BH were found in [43]. The results were compared with the previously mentioned 2-BH solution. Additionally, three-BH solutions were studied, and GHs were also found around these BHs [22, 21]. For information on more than two BHs, see [47]. There are additional examples of spacetimes that support the GHC by identifying GHs with the level–$0$ sets of $\rho$ and $\mu$. These examples include stationary spacetimes with stationary horizons (e.g.
Kerr–Newman–NUT–AdS) [44]; spherically symmetric spacetimes such as vacuum solutions or known exact solutions (e.g. Vaidya or LTB dust models) [22]; quasi-spherical Szekeres spacetimes [19]; and the Kastor–Traschen solution for $N>1$ BHs, as mentioned previously [40]. The authors have also studied vacuum solutions in the case of axisymmetry (so $R_{ab}=0$) where the Weyl tensor is of algebraic type ${\bf I}$ [25]. Based on the previous examples, it is also natural to study the covariant derivative of the Weyl tensor in this setting and the level–$0$ sets of $\rho$ and $\mu$.

## 2 Simulating a Binary Black Hole Merger

### 2.1 Previous Work

We wish to study the behaviour of the complex $SPI$, $\mathcal{D}$, as defined in (3), through a BBH merger. Since the Kerr geometry is type ${\bf D}$ everywhere, it follows that $\mathcal{D}=0$ everywhere for a Kerr BH. Based on our understanding of the features of gravitational collapse, and on a plethora of numerical simulations, it is believed that in a BBH merger the merged BH at late times settles down to a solution well described by the Kerr metric [22, 21]. Thus, for a merger of 2 initially Kerr BHs, plots of the real and imaginary parts of $\mathcal{D}$ should be roughly zero everywhere at early and late times. However, in the intermediate “dynamical” region (during the actual merger and coalescence at intermediate times), these plots should yield important information. This is what we wish to study. We highlight some known features of a binary black hole merger, as described in [53, 54, 28]. This also serves to set up our notation. In [53, 54], it was found that there is a connected sequence of MOTSs which interpolates between the initial and final states of the merger (two separate BHs and one BH, respectively) [53, 54]. The dynamics are as follows: Initially, there are two BHs with disjoint MOTSs, $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ (which are AHs at this point [28]), one around each BH.
Then, as the two BHs evolve, a common MOTS forms around the two separate BHs and bifurcates into an inner MOTS, $\mathcal{S}_{i}$, which surrounds $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, and an outer MOTS, $\mathcal{S}_{c}$. $\mathcal{S}_{c}$ increases in area and encloses $\mathcal{S}_{1}$, $\mathcal{S}_{2}$ and $\mathcal{S}_{i}$, and is the AH at the time of the merger [28, 53, 54]. The fate of $\mathcal{S}_{c}$ and the bifurcation at the time of the merger are well understood [28, 9, 57, 6, 30, 53, 54, 37, 46, 52].

Figure 1: Contour plots of $|\mathcal{D}|$ during a quasi-circular BBH merger consisting of two merging, equal-mass and non-spinning BHs at selected times $t=8$ (upper left), $t=12$ (upper center), $t=16$ (upper right), $t=18$ (middle left), $t=20$ (center), $t=24$ (middle right), $t=26$ (lower left), $t=27$ (lower center) and $t=28$ (lower right).

Figure 2: Comparing selected local minima of $|\mathcal{D}|$ along the $x$-coordinate direction with selected level sets of $|\mathcal{D}|$ at times $t=12$ (upper left), $t=16$ (middle left) and $t=20$ (lower left). The upper right, middle right, and lower right plots show plots of $|\mathcal{D}|$ vs $y$ for $x=0.000$ and $t=12$ (upper right); $x=0.0625$ and $t=16$ (middle right); and $x=0.125$ and $t=20$ (lower right).

Figure 3: Comparing level–$\pm 0.01$ contours of $\mathcal{D}_{r}=\text{Re}(\mathcal{D})$ and $\mathcal{D}_{i}=\text{Im}(\mathcal{D})$ with level–$0.001$ contours of $|\mathcal{D}|$. The upper, middle and lower left plots are plots of $\mathcal{D}_{r}=\text{Re}(\mathcal{D})$ at times $t=12,\;16,\;20$, respectively, and the upper, middle and lower right plots are plots of $\mathcal{D}_{i}=\text{Im}(\mathcal{D})$ at times $t=12,\;16,\;20$, respectively.

Figure 4: Comparing the white level–$0.001$ sets of $|\mathcal{D}|$ with the MOTSs as described in [53, 54] at times $t=12$ (upper left), $t=16$ (upper right) and $t=20$ (lower left and right).
The lower left panel shows the inner MOTS in purple, whereas the lower right panel shows the outer MOTS in purple.

### 2.2 Present Work

Instead of studying a head-on collision, in this paper we shall study a quasi-circular orbit of two merging, equal-mass and non-spinning BHs. Apart from [49, 25], the analysis of the quantity $\cal{D}$ employed in this simulation is new. In these simulations, the Einstein Toolkit infrastructure was used [42], and the simulations are run using $4^{\text{th}}$-order finite differencing on an adaptive mesh grid, with an adaptive refinement level of 6 [56, 12]. Brill-Lindquist initial data are used, with BH positions and momenta set up to satisfy the initial conditions necessary for a quasi-circular orbit which evolves for less than $1$ orbit before merging (QC0 initial condition). For more details, see Table I of [26]. Instead of analyzing a sequence of MOTSs throughout the merger, we seek to define and study a GH as it evolves through the merger, in accordance with the GHC. Since (3) gives a necessary condition for the Weyl tensor to be of algebraic type ${\bf II}$, we seek to analyze the constant contours of the quantity $\mathcal{D}=I^{3}-27J^{2}$. In the simulations, the real and imaginary parts of $I$ and $J$ are calculated using the Weyl scalars, $\{{\bf\Psi}_{i}\}_{i=0}^{4}$, as given in equations (1) and (2), and the calculations are carried out using the orthonormal fiducial tetrad, as given by [11]. Note that in the rest of this paper, we will use the notation from [53, 54, 28] to describe the various MOTSs that appear in our simulations. To recapitulate, $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ are the first and second initial MOTSs, and $\mathcal{S}_{i}$ and $\mathcal{S}_{c}$ describe the inner and common MOTSs as they appear in our simulation, respectively.
Figures 1–4 provide plots of various level sets of $\mathcal{D}_{r}=\text{Re}(\mathcal{D})$, $\mathcal{D}_{i}=\text{Im}(\mathcal{D})$ and $|\mathcal{D}|$ as functions of $(x,y)\in\mathbb{R}^{2}$ at selected instances of the time parameter, $t$, where $t=0$ indicates the start of the numerical computation. The level–$\varepsilon$ sets of $|\mathcal{D}|,\mathcal{D}_{r},\mathcal{D}_{i}$ in Figures 1–4 are calculated at a fixed spatial coordinate value of $z=0.03125$, as are the points displayed in Figure 2. We choose this value of $z=0.03125$ to illustrate most clearly the main features of the plots. However, the MOTSs $\mathcal{S}_{1}\cup\mathcal{S}_{2}$ in Figure 4 are calculated with $z$ lying in the range $z\in[0.02,0.04]$, and $\mathcal{S}_{i}$ and $\mathcal{S}_{c}$ are calculated with $z\in[-0.1,+0.1]$, to accommodate the grid spacing in the spatial coordinates $(x,y,z)$ so that these MOTSs can be displayed fully. In Figure 4, these level sets of $|\mathcal{D}|,\mathcal{D}_{r},\mathcal{D}_{i}$ are also compared with $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$. The full complement of pictures describing this BBH merger is displayed in [49]. We present a subset of those figures to illustrate the essential features. In each of Figures 1–4, the data corresponding to $x<0$ were obtained by rotating the data corresponding to $x>0$ by $180$ degrees about the $x=y=0$ axis. In Figures 1–4, we plot the centroids and outlines of $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, along with $\mathcal{S}_{i}$ and $\mathcal{S}_{c}$ once they have formed, in Figure 4. The centroids have been added as a marker to track the locations of the BHs through the merger and provide a reference against which we can compare our candidate GHs, namely the level–$\varepsilon$ sets of $|\mathcal{D}|$.
### 2.3 Discussion

Figure 1 provides the contour plots of the magnitude of $\mathcal{D}=I^{3}-27J^{2}$, denoted $|\mathcal{D}|$, on a log scale (see (3)) for $t=8,\;12,\;16,\;18,\;20,\;24,\;26,\;27,\;28$ in the upper left, upper center, upper right, middle left, center, middle right, lower left, lower center and lower right panels, respectively, for fixed $z=0.03125$. Since $|\mathcal{D}|$ is non-negative, and discrete resolution and numerical error keep it strictly positive, the level–$0$ sets of $|\mathcal{D}|$ are impossible to find precisely. Instead, we highlight the evolution of the level–$\varepsilon$ sets, where $\varepsilon=3\times 10^{-4},\;5\times 10^{-4},\;1\times 10^{-3}$. The overlaid green, red and white contours of each frame are the level–$3\times 10^{-4}$, level–$5\times 10^{-4}$ and level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$, respectively. At early times (i.e., at $t=8$ and $t=12$ in the upper left and upper center panels, respectively), each of the level–$\varepsilon$ sets is partitioned into a pair of simple closed curves. At $t=16$ (upper right panel), the red level–$5\times 10^{-4}$ set and the white level–$1\times 10^{-3}$ set of $|\mathcal{D}|$ each form a third simple closed curve between $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, which is centred at the origin. At times $t=18$ (middle left panel), $t=20$ (center panel) and $t=24$ (middle right panel), for each respective $\varepsilon=3\times 10^{-4},\;5\times 10^{-4},\;1\times 10^{-3}$, the multiple simple closed curves partitioning the level–$\varepsilon$ set of $|\mathcal{D}|$ have joined, so that each level–$\varepsilon$ curve is now a single simple closed curve. At times $t=26,\;27,\;28$, the level–$\varepsilon$ sets of $|\mathcal{D}|$ continue to track the merged BHs and are displayed in the lower three panels of Figure 1.
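The topology change described above, with two loops joining into one, can be mimicked on a toy field. The sketch below uses an entirely synthetic stand-in for $|\mathcal{D}|$ that vanishes at two hypothetical "BH positions" $(\pm a,0)$ (it does not model the actual radial profile of $|\mathcal{D}|$), and counts the connected components of the sublevel set $\{f<\varepsilon\}$ by flood fill:

```python
import numpy as np

def count_sublevel_components(f, eps):
    """Count 4-connected components of {f < eps} on a 2D grid, a toy
    analogue of tracking how level-eps curves merge from two loops to one."""
    mask = f < eps
    seen = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    n = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and not seen[i, j]:
                n += 1                       # new component found
                stack = [(i, j)]
                seen[i, j] = True
                while stack:                 # depth-first flood fill
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and mask[rr, cc] and not seen[rr, cc]):
                            seen[rr, cc] = True
                            stack.append((rr, cc))
    return n

# Synthetic field vanishing at the two hypothetical "BH positions" (+/-a, 0):
x, y = np.meshgrid(np.linspace(-3, 3, 301), np.linspace(-3, 3, 301))
def toy_field(a):
    return ((x - a)**2 + y**2) * ((x + a)**2 + y**2)

n_early = count_sublevel_components(toy_field(1.5), 0.1)  # widely separated
n_late = count_sublevel_components(toy_field(0.2), 0.1)   # close together
```

For well-separated zeros the sublevel set has two components; once the zeros are close enough, the components join into one, mirroring the merging of the level–$\varepsilon$ curves in Figure 1.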
It follows that the level–$\varepsilon$ curves for each $\varepsilon=3\times 10^{-4},\;5\times 10^{-4},\;1\times 10^{-3}$ at each $t$ form an invariantly defined, foliation-independent horizon that contains each separate BH at early times and contains the merged BH at late times. The evolution of the level–$\varepsilon$ curves through the quasi-circular BBH merger in Figure 1 is reminiscent of the sequence of MOTSs that occurs during the head-on collision simulation in [53, 54]. In particular, the bifurcation into $\mathcal{S}_{i}$ and $\mathcal{S}_{c}$ described in [53, 54, 28] can be compared to our present quasi-circular simulations. This comparison is most striking at $t=16$, in the upper right panel of Figure 1, when the white level–$1\times 10^{-3}$ and red level–$5\times 10^{-4}$ sets are partitioned into three simple closed curves. However, our numerical computations are not precise enough to study the details of the bifurcation in [53, 54, 28]. At times $t=24,\;26,\;27,\;28$, in the middle right panel and lower three panels of Figure 1 respectively, it also seems that $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ do not merge fully. Thus, it is possible that at late times, the level–$\varepsilon$ sets of $|\mathcal{D}|$ for $\varepsilon=3\times 10^{-4},\;5\times 10^{-4},\;1\times 10^{-3}$ may track $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, which have been found in [28] to overlap but not intersect at late times. However, our simulations did not run to late enough times to make this clear. In any case, Figure 1 provides strong evidence that for each $\varepsilon=3\times 10^{-4},\;5\times 10^{-4},\;1\times 10^{-3}$, the level–$\varepsilon$ sets of $|\mathcal{D}|$ track a unique GH, which can be identified with the level–$0$ set of $\mathcal{D}$. It is of interest to study the level–$\varepsilon$ curves as $\varepsilon\rightarrow 0$ and extrapolate from our results the appearance of level–$\varepsilon$ curves for arbitrarily small $\varepsilon$.
This could be aided by improved numerical resolution. Such an extrapolation scheme is beyond the scope of this paper, however, and in the meantime we study the features of the level–$\varepsilon$ curves for an appropriate value of $\varepsilon$. We observe that for $\varepsilon=3\times 10^{-4},\;5\times 10^{-4},\;1\times 10^{-3}$, the level–$\varepsilon$ contours are very close to each other, showing that the level–$\varepsilon$ sets vary continuously with $\varepsilon$. We also observe that if $\varepsilon_{1}\leq\varepsilon_{2}$, then the 2D area enclosed by the level–$\varepsilon_{1}$ curve encloses the 2D area enclosed by the level–$\varepsilon_{2}$ curve. Thus, each panel of Figure 1 indicates that $|\mathcal{D}|$ decreases on average away from $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, and the plots of $|\mathcal{D}|$ show no global minima. However, Figure 2 indicates that the plots of $|\mathcal{D}|$ do have local minima, which approximately coincide with the level–$0$ sets of $\mathcal{D}_{r}$ and with the level–$\varepsilon$ sets for $\varepsilon=3\times 10^{-4},\;1\times 10^{-3}$. In order to investigate further the level–$0$ sets of $|\mathcal{D}|$ (or equivalently the level–$0$ sets of $\mathcal{D}$), we find out where $|\mathcal{D}|$ takes a local minimum value. If the value of $|\mathcal{D}|$ itself is small there, then the locations of these local minima could indicate the positions of actual zeros of $|\mathcal{D}|$, with the strictly positive values caused by numerical errors. It could also be the case that the GHC should be modified so that the GH is defined as the set of points where $|\mathcal{D}|$ reaches a local minimum instead of being identically zero. If this were the case, then locating the local minima of $|\mathcal{D}|$ would locate the GH precisely instead of approximating it. However, further evidence from the analysis of $\mathcal{D}_{r}$ below perhaps suggests that this is not the case.
To this end, in Figure 2, we examine 1D plots of $|\mathcal{D}|$ as functions of $y$ for fixed $x=x_{0}$. At time $t=12$ (resp. $t=16,\;20$), we display the $|\mathcal{D}|$ vs $y$ plot in the upper-right panel (resp. middle-right panel, lower-right panel) of Figure 2 for $x_{0}=0.000$ (resp. $x_{0}=0.0625,\;0.125$). Along each $|\mathcal{D}|$ vs $y$ plot, we find and track the values of $y=y_{min}$ where $|\mathcal{D}|$ assumes a local minimum value and lies in the range $\left[1\times 10^{-4},1.2\times 10^{-3}\right]$. The resulting coordinates $(x_{0},y_{min})$ are then superimposed on the level–$1\times 10^{-3}$ and level–$3\times 10^{-4}$ sets of $|\mathcal{D}|$ at $t=12$ (resp. $t=16,\;t=20$) in the upper-left (resp. middle-left, lower-left) panel of Figure 2. More specifically, in the upper-right panel of Figure 2, where $t=12$ and $x_{0}=0.000$, we see that local minima of $|\mathcal{D}|$ occur roughly at $y_{min}=-0.8,\;0,\;0.8$. The points $(x_{0},y_{min})=(0,-0.8),(0,0),(0,+0.8)$ are then marked with green dots on the level–$\varepsilon$ plots of $\mathcal{D}$ at $t=12$, as shown on the upper-left panel of Figure 2. The remaining green dots on this upper-left panel are found similarly, using the positions $(x_{0},y_{min})$ of the local minima of $|\mathcal{D}|$, with $|\mathcal{D}|\in\left[1\times 10^{-4},1.2\times 10^{-3}\right]$, but with varying $x_{0}$. One can similarly inspect the plot in the middle-right panel of Figure 2, where $t=16$ and $x_{0}=0.0625$, to find that $|\mathcal{D}|$ is minimized roughly where $y_{min}=-0.2,\;0.1$. The corresponding points $(x_{0},y_{min})=(0.0625,-0.2),(0.0625,0.1)$ are then recorded with green dots on the level–$\varepsilon$ plots of $\mathcal{D}$ at $t=16$, in the middle-left panel of Figure 2. The remaining green dots on this panel are again obtained by varying $x_{0}$ and mark the positions $(x_{0},y_{min})$ of the local minima of $|\mathcal{D}|\in\left[1\times 10^{-4},1.2\times 10^{-3}\right]$.
Finally, in the lower-right panel of Figure 2, where $t=20$ and $x_{0}=0.125$, the quantity $|\mathcal{D}|$ is minimized at roughly $y_{min}=0.2$, so that the point $(x_{0},y_{min})=(0.125,0.2)$ is marked with a green dot on the lower-left panel of Figure 2, and the remaining green dots track the positions $(x_{0},y_{min})$ of the local minima of $|\mathcal{D}|$, $|\mathcal{D}|\in\left[1\times 10^{-4},1.2\times 10^{-3}\right]$, as above. By studying the left three panels of Figure 2, we observe that at times $t=12$ and $t=16$, these local minima appear to track the green level–$3\times 10^{-4}$ sets, while at time $t=20$, these local minima appear to track more closely the white level–$1\times 10^{-3}$ sets. This shows that the positions of the local minima of $|\mathcal{D}|$ align closely with the level–$\varepsilon$ sets of $|\mathcal{D}|$ for $\varepsilon=3\times 10^{-4},\;1\times 10^{-3}$. In [49], the positions $(x_{0},y_{min})$ are compared with the positions of the minima $(x_{min},y_{0})$ (obtained through the above procedure but with the roles of $x$ and $y$ reversed), and it is possible that these are indeed the overall local minima of $|\mathcal{D}|$. In this case, Figure 2 provides supporting evidence that the local minima of $|\mathcal{D}|$, and hence the level–$\varepsilon$ sets of $|\mathcal{D}|$, accurately track the GH. It remains as future work, however, to study the local minima more closely and to devise and implement an algorithm to track their $(x,y)$ positions through a BBH merger. When studying the zero set of $\mathcal{D}$, it is also helpful to study separately $\mathcal{D}_{r}=\text{Re}(\mathcal{D})$ and $\mathcal{D}_{i}=\text{Im}(\mathcal{D})$, as these quantities change sign through a zero.
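The scan just described, locating interior local minima of $|\mathcal{D}|$ along a $y$-slice whose values fall in $\left[1\times 10^{-4},1.2\times 10^{-3}\right]$, can be sketched as follows. The data here are synthetic and purely illustrative of the procedure, not the production pipeline:

```python
import numpy as np

def local_minima_in_band(y, f, lo=1e-4, hi=1.2e-3):
    """Return y-positions of strict interior local minima of the sampled
    profile f(y) whose values lie in [lo, hi], mimicking the selection of
    the green dots in Figure 2."""
    idx = [i for i in range(1, len(f) - 1)
           if f[i] < f[i - 1] and f[i] < f[i + 1] and lo <= f[i] <= hi]
    return y[idx]

# Synthetic |D| slice with dips of value 3e-4 near y = +/-0.4 and +/-1.2:
y = np.linspace(-1.5, 1.5, 601)
f = 7e-4 + 4e-4 * np.cos(2 * np.pi * y / 0.8)
ymin = local_minima_in_band(y, f)
```

A production version would interpolate between grid points and deduplicate minima across neighbouring slices, but the selection criterion is the same.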
To wit, we plot in Figure 3 the contour plots of $\mathcal{D}_{r}$ (left panels) and $\mathcal{D}_{i}$ (right panels), with magnified resolution, along with their level–$-0.01$ sets in yellow and their level–$+0.01$ sets in lime green at times $t=12,\;16,\;20$ in the upper, middle and lower panels, respectively. In each of the frames in Figure 3, the grey regions correspond to $-0.01<\mathcal{D}_{r}<0.01$ (resp. $-0.01<\mathcal{D}_{i}<0.01$), the black regions correspond to $\mathcal{D}_{r}\geq 1$ (resp. $\mathcal{D}_{i}\geq 1$), and the white regions correspond to $\mathcal{D}_{r}\leq-1$ (resp. $\mathcal{D}_{i}\leq-1$). Interpolating between the level–$+0.01$ sets and the level–$-0.01$ sets of $\mathcal{D}_{r}$ (resp. $\mathcal{D}_{i}$), we deduce that there must be a surface between the level–$\pm 0.01$ sets of $\mathcal{D}_{r}$ (resp. $\mathcal{D}_{i}$) across which $\mathcal{D}_{r}$ (resp. $\mathcal{D}_{i}$) changes sign. This surface is then the level–$0$ set of $\mathcal{D}_{r}$ (resp. $\mathcal{D}_{i}$). Upon inspection of each frame in Figure 3, we see that the level–$\pm 0.01$ sets of $\mathcal{D}_{r}$ (resp. $\mathcal{D}_{i}$) occur in close proximity to, and are contained in, the interior of the level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$. Therefore, Figure 3 provides strong evidence that the level–$1\times 10^{-3}$ set of $|\mathcal{D}|$ well approximates the level–$0$ sets of $\mathcal{D}_{r}$ and $\mathcal{D}_{i}$ and, hence, the level–$0$ set of $\mathcal{D}$. We next explicitly compare the level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$ with the corresponding MOTSs $\mathcal{S}_{1,2,i,c}$ in Figure 4. We display the 2D contour plots of $|\mathcal{D}|$ with magnified resolution at times $t=12$ and $t=16$ in the upper left and upper right panels, respectively, and we display the 2D contour plots of $|\mathcal{D}|$ at $t=20$ in the lower left and lower right panels.
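The intermediate-value argument used here, that a sign change of $\mathcal{D}_{r}$ between its level–$+0.01$ and level–$-0.01$ sets forces a zero in between, can be sketched in one dimension by scanning a sampled profile for sign flips between adjacent grid points. The profile below is synthetic and purely illustrative:

```python
import numpy as np

def bracket_zeros(y, g):
    """Return indices i where g changes sign between y[i] and y[i+1]; by
    the intermediate value theorem, a zero of g lies in each such interval.
    This mirrors locating the level-0 surface of D_r between its
    level-+0.01 and level--0.01 sets."""
    s = np.sign(g)
    return [i for i in range(len(g) - 1) if s[i] * s[i + 1] < 0]

# Synthetic signed profile with zeros at y = +/- sqrt(0.26):
y = np.linspace(-1.0, 1.0, 201)
g = y**2 - 0.26
brackets = bracket_zeros(y, g)
```

Each returned index brackets one zero crossing; refining within the bracket (e.g. by bisection) would pin down the level–$0$ position to any desired tolerance.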
As in Figures 1–3, the white curves denote the level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$, and the blue curves mark the $(x,y)$ coordinates of points on $\mathcal{S}_{1}\cup\mathcal{S}_{2}$ whose corresponding $z$ coordinate values lie in the range $\left[0.02,0.04\right]$, as mentioned previously. In the present quasi-circular simulation, the bifurcation of the third MOTS into $\mathcal{S}_{i}$ and $\mathcal{S}_{c}$ occurs between times $t=18.5$ and $t=18.75$. Once this happens, $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ are no longer AHs, as the MOTS $\mathcal{S}_{c}$ now surrounds $\mathcal{S}_{1}$, $\mathcal{S}_{2}$ and $\mathcal{S}_{i}$. Thus, in order to compare our level–$\varepsilon$ sets of $|\mathcal{D}|$ with AHs (the outermost MOTSs), we have included plots of $\mathcal{S}_{i}$ and $\mathcal{S}_{c}$ here. The purple dots on the bottom left (resp. bottom right) panel of Figure 4 label the $(x,y)$ coordinates of the points on $\mathcal{S}_{i}$ (resp. $\mathcal{S}_{c}$) whose corresponding $z$ value lies in the range $\left[-0.1,+0.1\right]$. Note that in the bottom right-hand panel, the outer MOTS at late times is so large that, on its scale, the white level–$0.001$ sets of $|\mathcal{D}|$ and $\mathcal{S}_{i,1,2}$ are squashed together at the origin. We find that the white level–$1\times 10^{-3}$ set of $|\mathcal{D}|$ coincides closely with $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, especially at early times. Since the MOTSs $\mathcal{S}_{c}$ and $\mathcal{S}_{i}$ have not yet formed, $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ are AHs at early times. Hence, Figure 4 shows that the white level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$ well approximate the AH at early times. This lends support to the choice of the white level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$ as a representative approximation to the level–$0$ set of $|\mathcal{D}|$ (and hence of $\mathcal{D}$).
At later times, however, it appears that the AH, $\mathcal{S}_{c}$, diverges from the white level–$1\times 10^{-3}$ sets, so that the white level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$ no longer approximate the AH in this regime. Thus, it appears in this current simulation that the level sets of the curvature invariant $\mathcal{D}$ do not detect the common outer horizon forming at late times. Furthermore, in this simulation, no significant patterns were observed in the level–$\varepsilon$ sets of $|\mathcal{D}|$ immediately prior to the formation of $\mathcal{S}_{i}$ and $\mathcal{S}_{c}$ [49]. However, the authors intend to further study the GHs at the formation of $\mathcal{S}_{i}$ and $\mathcal{S}_{c}$, and also at later times, which would require rerunning these simulations with improved numerical resolution. Therefore, Figures 1–4 provide strong evidence that one can define a unique smooth GH, theoretically given by the level–$0$ set of the complex invariant $\mathcal{D}=I^{3}-27J^{2}$, which we have found is best approximated in the numerics by the level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$.

## 3 Conclusions

We have studied the algebraic properties of the Weyl tensor by analyzing the time evolution of various level–$\varepsilon$ sets of $|\mathcal{D}|$ through a quasi-circular merger of two non-spinning, equal-mass BHs where, in particular, $\varepsilon=3\times 10^{-4},\;5\times 10^{-4},\;1\times 10^{-3}$. These level–$\varepsilon$ contours are superimposed on the contour plots of $|\mathcal{D}|$ in Figure 1. In these plots, the locations of the two initial BHs were tracked using the centroids of the initial AHs. We found that at early times, each such level set is partitioned into two disjoint simple closed curves, each of which contains one of the two centroids of the AHs of the 2 separate initial BHs.
Then each level set, at some intermediate time, forms a third simple closed curve, which is centred at the origin and positioned between the centroids of the AHs of the two initial BHs. These three simple closed curves then join and form one simple closed curve for each level set, which contains the centroids of both initial BHs. The plots of $|\mathcal{D}|$ in Figure 1 provide strong evidence that the level sets of $|\mathcal{D}|$ identify the GH. However, it is impossible to identify the level–$0$ sets of $|\mathcal{D}|$ precisely, since $|\mathcal{D}|$ is non-negative by construction and numerical errors and discrete resolution cause it to be strictly positive. Thus, to further study the zeros of $|\mathcal{D}|$, which would indicate the zeros of the complex quantity $\mathcal{D}$, we studied the positions of the local minima of $|\mathcal{D}|$ along plots of $|\mathcal{D}|$ vs $y$ for fixed $x$ in Figure 2. Figure 2 demonstrates that the level–$\varepsilon$ sets of $|\mathcal{D}|$ correspond closely to the local minima of $|\mathcal{D}|$, where $\varepsilon=3\times 10^{-4},\;1\times 10^{-3}$. Since the local minima of $|\mathcal{D}|$ approximate the zeros of $|\mathcal{D}|$, Figure 2 provides supporting evidence that the level–$\varepsilon$ sets of $|\mathcal{D}|$ for $\varepsilon=3\times 10^{-4},\;1\times 10^{-3}$ track the GH of the BBH merger. Since $|\mathcal{D}|$ is non-negative, its zeros cannot be traced by positive and negative level sets. Therefore, we have also analyzed quantities which change sign through a zero. In Figure 3, we examined contour plots of $\mathcal{D}_{r}=\text{Re}(\mathcal{D})$ and $\mathcal{D}_{i}=\text{Im}(\mathcal{D})$ with their associated level–$\pm 0.01$ sets and compared these plots with the white level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$. We found surfaces surrounding the union of the level–$\pm 0.01$ contours of $\mathcal{D}_{r}$ (resp.
$\mathcal{D}_{i}$) across which $\mathcal{D}_{r}$ (resp. $\mathcal{D}_{i}$) changes sign; these surfaces are hence a subset of the level–$0$ sets of $\mathcal{D}_{r}$ (resp. $\mathcal{D}_{i}$). We also found these particular zeros of $\mathcal{D}_{r}$ (and those of $\mathcal{D}_{i}$) to be well approximated by the level–$1\times 10^{-3}$ contours of $|\mathcal{D}|$ in our plots, and suggest that this approximation is valid in this setting. In Figure 4, we compare the level–$1\times 10^{-3}$ contours of $|\mathcal{D}|$ with the AHs. These AHs are given by $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ at early times and by $\mathcal{S}_{c}$ after $\mathcal{S}_{c}$ has formed. Figure 4 shows that the level–$1\times 10^{-3}$ sets of $|\mathcal{D}|$ agree very well with the AHs, $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, at early times, but at later times the AH diverges from this level set of $|\mathcal{D}|$. However, even at late times, $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ continue to track a subset of the level–$1\times 10^{-3}$ set of $|\mathcal{D}|$, as does $\mathcal{S}_{i}$. Therefore, in the binary black hole merger, as displayed in Figures 1–43 in [49] and summarized in Figures 1–4 above, the algebraic structure of the Weyl tensor is clearly identified by the level–$\varepsilon$ sets of $|\mathcal{D}|$, and it is plausible that the level set with $\varepsilon=1\times 10^{-3}$ accurately identifies the geometric horizon. However, there is much future work still to be done. It is of interest to rerun these simulations at a higher numerical resolution to analyze more systematically the quantity $|\mathcal{D}|$ and its level sets. In particular, it remains to study the behaviour of the level–$\varepsilon$ curves of $|\mathcal{D}|$ as $\varepsilon\rightarrow 0$ and compare these level sets with the respective features in Figures 2–4. It would also be of interest to study in greater detail the local minima of $|\mathcal{D}|$ and track their evolution through the BBH merger.
Thirdly, it remains to study the evolution of the level–$\varepsilon$ curves immediately prior to the formation of $\mathcal{S}_{i,c}$, and at late times, when $\mathcal{S}_{c}$ is the AH. Finally, the authors plan to study the time evolution of the covariant derivative of the Weyl tensor through a BBH merger in the context of the APNF approach. ## 4 Acknowledgements This work was supported financially by NSERC (AAC and ES). JMP would like to thank AAC for supervising his master's thesis and ES for numerical assistance and useful discussions, and the Perimeter Institute for Theoretical Physics for hospitality during this work. ## 5 Data Availability Complete data for this work is presented in [49], which is available at https://dalspace.library.dal.ca/handle/10222/79721, and is also available from the corresponding author on reasonable request. ## References * [1] Benjamin P. Abbott et al. Binary black hole mergers in the first Advanced LIGO observing run. Phys. Rev. X, 6:041015, 2016. * [2] Benjamin P. Abbott et al. Observation of gravitational waves from a binary black hole merger. Phys. Rev. Lett., 116:061102, 2016. * [3] Lars Andersson, Marc Mars, and Walter Simon. Local existence of dynamical and trapping horizons. Physical Review Letters, 95:111102, 2005. * [4] Lars Andersson, Marc Mars, and Walter Simon. Stability of marginally outer trapped surfaces and existence of marginally outer trapped tubes. Advances in Theoretical and Mathematical Physics, 12(4):853–888, 2008. * [5] Abhay Ashtekar, Christopher Beetle, and Jerzy Lewandowski. Geometry of generic isolated horizons. Classical and Quantum Gravity, 19(6):1195–1225, 2002. * [6] Abhay Ashtekar, Miguel Campiglia, and Samir Shah. Dynamical black holes: Approach to the final state. Physical Review D, 88:064045, 2013. * [7] Abhay Ashtekar and Gregory J. Galloway. Some uniqueness results for dynamical horizons. Advances in Theoretical and Mathematical Physics, 9(1), 2005. * [8] Abhay Ashtekar and Badri Krishnan. 
Dynamical horizons: Energy, angular momentum, fluxes, and balance laws. Physical Review Letters, 89:261101, 2002. * [9] Abhay Ashtekar and Badri Krishnan. Dynamical horizons and their properties. Physical Review D, 68(10), 2003. * [10] Abhay Ashtekar and Badri Krishnan. Isolated and dynamical horizons and their applications. Living Reviews in Relativity, 7(1), 2004. * [11] John Baker, Manuela Campanelli, and Carlos O. Lousto. The Lazarus project: A pragmatic approach to binary black hole evolutions. Physical Review D, 65(4), 2002. * [12] Marsha J. Berger and Joseph Oliger. Adaptive mesh refinement for hyperbolic partial differential equations. Journal of Computational Physics, 53(3):484–512, March 1984. * [13] Ivan Booth. Black-hole boundaries. Canadian Journal of Physics, 83(11):1073–1099, 2005. * [14] Ivan Booth and Stephen Fairhurst. Isolated, slowly evolving, and dynamical trapping horizons: Geometry and mechanics from surface deformations. Physical Review D, 75(8), 2007. * [15] Alan Coley. Classification of the Weyl tensor in higher dimensions and applications. Classical and Quantum Gravity, 25(3):033001, 2008. * [16] Alan Coley and Sigbjørn Hervik. Higher dimensional bivectors and classification of the Weyl operator. Classical and Quantum Gravity, 27(1):015002, 2009. * [17] Alan Coley and Sigbjørn Hervik. Discriminating the Weyl type in higher dimensions using scalar curvature invariants. General Relativity and Gravitation, 43:2199–2207, 2011. * [18] Alan Coley, Sigbjørn Hervik, and Nicos Pelavas. Spacetimes characterized by their scalar curvature invariants. Classical and Quantum Gravity, 26(2):025013, 2009. * [19] Alan Coley, Nicholas Layden, and David D. McNutt. An invariant characterization of the quasi-spherical Szekeres dust models. General Relativity and Gravitation, 51(12):164, 2019. * [20] Alan Coley and David D. McNutt. Horizon detection and higher dimensional black rings. Classical and Quantum Gravity, 34(3):035008, 2017. 
* [21] Alan Coley and David D. McNutt. Identification of black hole horizons using scalar curvature invariants. Classical and Quantum Gravity, 35(2):025013, 2017. * [22] Alan Coley, David D. McNutt, and Andrey A. Shoom. Geometric horizons. Physics Letters B, 771:131–135, 2017. * [23] Alan Coley, Robert Milson, Vojtech Pravda, and Alena Pravdová. Classification of the Weyl tensor in higher dimensions. Classical and Quantum Gravity, 21(7):L35–L41, 2004. * [24] Alan Coley, Robert Milson, Vojtech Pravda, and Alena Pravdová. Vanishing scalar invariant spacetimes in higher dimensions. Classical and Quantum Gravity, 21(23):5519–5542, 2004. * [25] Alan Coley, Jeremy M Peters, and Erik Schnetter. Geometric horizons in binary black hole mergers. Classical and Quantum Gravity, 38(17):17LT01, Jul 2021. * [26] Gregory B. Cook. Three-dimensional initial data for the collision of two black holes. II. Quasicircular orbits for equal-mass black holes. Physical Review D, 50(8):5025–5032, 1994. * [27] Olaf Dreyer, Badri Krishnan, Deirdre Shoemaker, and Erik Schnetter. Introduction to isolated horizons in numerical relativity. Physical Review D, 67(2), 2003. * [28] Christopher Evans, Deborah Ferguson, Bhavesh Khamesra, Pablo Laguna, and Deirdre Shoemaker. Inside the final black hole: Puncture and trapped surface dynamics, 2020. * [29] Eric Gourgoulhon and José Luis Jaramillo. A 3+1 perspective on null hypersurfaces and isolated horizons. Physics Reports, 423(4-5):159–294, 2006. * [30] Anshu Gupta, Badri Krishnan, Alex B. Nielsen, and Erik Schnetter. Dynamics of marginally trapped surfaces in a binary black hole merger: Growth and approach to equilibrium. Physical Review D, 97(8):084028, 2018. * [31] Sean A. Hayward. General laws of black hole dynamics. Phys. Rev. D, 49:6467–6474, 1994. * [32] Sean A. Hayward. Unified first law of black hole dynamics and relativistic thermodynamics. Class. Quant. Grav., 15:3147–3162, 1998. * [33] Sean A. Hayward. 
Formation and evaporation of regular black holes. Phys. Rev. Lett., 96:031103, 2006. * [34] Dante A. B. Iozzo, Neev Khera, Leo C. Stein, Keefe Mitman, Michael Boyle, Nils Deppe, François Hébert, Lawrence E. Kidder, Jordan Moxon, Harald P. Pfeiffer, and et al. Comparing remnant properties from horizon data and asymptotic data in numerical relativity. Physical Review D, 103(12), Jun 2021. * [35] José Luis Jaramillo, Rodrigo P. Macedo, Philipp Mösta, and Luciano Rezzolla. Towards a cross-correlation approach to strong-field dynamics in black hole spacetimes. AIP Conference Proceedings, 1458(1):158–173, 2012. * [36] José Luis Jaramillo. An introduction to local black hole horizons in the 3+1 approach to general relativity. International Journal of Modern Physics, 20(11), 2011. * [37] José Luis Jaramillo, Marcus Ansorg, and Nicolas Vasset. Application of initial data sequences to the study of black hole dynamical trapping horizons. AIP Conference Proceedings, 1122(1):308–311, May 2009. * [38] José Luis Jaramillo, Rodrigo P. Macedo, Philipp Mösta, and Luciano Rezzolla. Black-hole horizons as probes of black-hole dynamics. I. Post-merger recoil in head-on collisions. Physical Review D, 85:084030, 2012. * [39] José Luis Jaramillo, Rodrigo P. Macedo, Philipp Mösta, and Luciano Rezzolla. Black-hole horizons as probes of black-hole dynamics. II. Geometrical insights. Physical Review D, 85:084031, 2012. * [40] David Kastor and Jennie Traschen. Cosmological multi-black-hole solutions. Physical Review D, 47(12):5370–5375, 1993. * [41] Jerzy Lewandowski and Tomasz Pawlowski. Quasi-local rotating black holes in higher dimension: geometry. Classical and Quantum Gravity, 22(9):1573–1598, 2005. * [42] Frank Löffler, Joshua Faber, Eloisa Bentivegna, Tanja Bode, Peter Diener, Roland Haas, Ian Hinder, Bruno C. Mundim, Christian D. Ott, Erik Schnetter, Gabrielle Allen, Manuela Campanelli, and Pablo Laguna. 
The Einstein Toolkit: A community computational infrastructure for relativistic astrophysics. Classical and Quantum Gravity, 29(11):115001, 2012. * [43] David D. McNutt and Alan Coley. Geometric horizons in the Kastor-Traschen multi-black-hole solutions. Physical Review D, 98(6), 2018. * [44] David D. McNutt, Malcolm MacCallum, Daniele Gregoris, Adam Forget, Alan Coley, Paul-Christopher Chavy-Waddy, and Dario Brooks. Cartan invariants and event horizon detection, extended version, 2017. * [45] Robert Milson, Alan Coley, Vojtech Pravda, and Alena Pravdová. Alignment and algebraically special tensors in Lorentzian geometry. International Journal of Geometric Methods in Modern Physics, 02(01):41–61, 2005. * [46] Philipp Mösta, Lars Andersson, Jan Metzger, Béla Szilágyi, and Jeffrey Winicour. The merger of small and large black holes. Classical and Quantum Gravity, 32(23):235003, 2015. * [47] Ken-ichi Nakao, Tetsuya Shiromizu, and Sean A. Hayward. Horizons of the Kastor-Traschen multi-black-hole cosmos. Physical Review D, 52:796–808, 1995. * [48] Roger Penrose. Gravitational collapse and space-time singularities. Physical Review Letters, 14:57–59, 1965. * [49] Jeremy M. Peters, Alan Coley, and Erik Schnetter. A study of the geometric horizon conjecture as applied to a binary black hole merger. Master's thesis, Dalhousie University, 2020. * [50] Daniel Pook-Kolb, Ofek Birnholtz, José Luis Jaramillo, Badri Krishnan, and Erik Schnetter. Horizons in a binary black hole merger I: Geometry and area increase, 2020. * [51] Daniel Pook-Kolb, Ofek Birnholtz, José Luis Jaramillo, Badri Krishnan, and Erik Schnetter. Horizons in a binary black hole merger II: Fluxes, multipole moments and stability, 2020. * [52] Daniel Pook-Kolb, Ofek Birnholtz, Badri Krishnan, and Erik Schnetter. Existence and stability of marginally trapped surfaces in black-hole spacetimes. Physical Review D, 99(6):064005, 2019. * [53] Daniel Pook-Kolb, Ofek Birnholtz, Badri Krishnan, and Erik Schnetter. 
Interior of a binary black hole merger. Physical Review Letters, 123(17), 2019. * [54] Daniel Pook-Kolb, Ofek Birnholtz, Badri Krishnan, and Erik Schnetter. Self-intersecting marginally outer trapped surfaces. Physical Review D, 100(8), 2019. * [55] Vaishak Prasad, Anshu Gupta, Sukanta Bose, Badri Krishnan, and Erik Schnetter. News from horizons in binary black hole mergers, 2020. * [56] Erik Schnetter, Scott H. Hawley, and Ian Hawke. Evolutions in 3D numerical relativity using fixed mesh refinement. Classical and Quantum Gravity, 21(6):1465–1488, 2004. * [57] Erik Schnetter, Badri Krishnan, and Florian Beyer. Introduction to dynamical horizons in numerical relativity. Physical Review D, 74(2):024018, 2006. * [58] José M. M. Senovilla. Shear-free surfaces and distinguished DHs. 21st International Conference on General Relativity and Gravitation, July 2015. * [59] Hans Stephani, Dietrich Kramer, Malcolm MacCallum, Cornelius Hoenselaers, and Eduard Herlt. Exact solutions of Einstein's field equations, second edition. Cambridge University Press, Cambridge, 2003. * [60] Jonathan Thornburg. Finding apparent horizons in numerical relativity. Physical Review D, 54:4899–4918, 1996. * [61] Jonathan Thornburg. Event and apparent horizon finders for 3+1 numerical relativity, 2005.
Mathematics of Stable Tensegrity Structures

Ajay B. Harish$^{1,2,*}$, Shubham Deshpande$^{1,\dagger}$, Stephanie R. Andress$^{3,\dagger}$

$^{1}$Department of Civil and Environmental Engineering, University of California, Berkeley, USA
$^{2}$School of Mechanical, Aerospace and Civil Engineering (MACE), University of Manchester, UK
$^{3}$Department of Mechanical Engineering, Purdue University, USA
$^{*}$Corresponding author at: Department of Mechanical, Aerospace and Civil Engineering, University of Manchester, UK
$^{\dagger}$Equal contributors

Nomenclature

$\mathbf{A}$: Small motion prescribed on the tensegrity, particularly representing the end points of the rods, points of beads and pivot points related to the tensegrity
$\left[A \right]_{(9\times 3)}$: Matrix representing the rigid-body rotations
$\left[B \right]_{(9\times 3)}$: Matrix representing the rigid-body translations
$\left[C \right]_{(9\times 6)}$: Matrix of the standard basis functions representing the motion of points $p_j$ and $p_k$
$b$: Number of beads in the tensegrity
$c_{ij}$: Length of the cable
$C$: List of unordered pairs $\{i,j\}$ describing the list of cables
$d_{ij}$: Distance between any two points $\{i,j\}$
$f_k$: Force acting on a member of the tensegrity
$f^{k} (\mathbf{v})$: Functional representation of the $c-e$-inequalities, i.e. the cable constraints
$g^{\ell} (\mathbf{v})$: Functional representation of the $e$-equalities, i.e. the rod constraints
$\ell_k$: Length of a member of the tensegrity
$m$: Number of cables in the tensegrity
$n$: Number of rods in the tensegrity
$N$: Number of points making up the tensegrity
$p_i$: Position vector of a single point, represented by $(x_i,y_i,z_i)$, of the tensegrity
$P$: Position of the tensegrity, represented by $3N$-tuples of $(x_i,y_i,z_i)$
$q_k$: Force density in a member of the tensegrity
$Q$: Position of the tensegrity after movement, represented by $3N$-tuples of $(x_i,y_i,z_i)$
$r_{ij}$: Length of the rod
$\mathbf{r}_i$: Rigid-body motion vectors with $i = 0, \cdots, 5$
$R$: List of unordered pairs $\{i,j\}$ describing the list of rods
$\mathbb{R}^{3N}$: Space spanning all the $3N$ points of the tensegrity
$\mathcal{R}$: Sub-space of all the motions of the tensegrity
$T$: Translation vector applied on the tensegrity
$\mathbf{v}_j$: Vector representing each connection of the tensegrity, also referred to as constraint vectors; here $j = 6, \cdots, 3N-1$
$\mathcal{V}$: Sub-space of all constraint vectors
$(x_i,y_i,z_i)$: Coordinates of each point in the tensegrity
$\varepsilon$: Small motion prescribed on the tensegrity
$\Pi$: The set of all positions of the tensegrity
$S$: A plane that constrains the motion of a bead
$\mathcal{S}$: The space of all the string index pairs $\{i,j\}$
$c_r$: Magnitude of the compressive force acting on a rod in the tensegrity
$t_{ij}$: Magnitude of the tensile force acting on a string in the tensegrity
$\hat{\mathbf{v}}_{i,j}$: Unit vector depicting the direction along which the force acts, i.e. from point $p_j$ to $p_i$
$\mathbf{t}_{ij}$: Tensile force vector acting on the wire
$\mathbf{r}_{(\cdot)}$: Compressive force vector acting on the rod

§ INTRODUCTION

Tensegrities go back several decades, to when <cit.> developed some of his first structures. In 1961, <cit.> presented a class of cable-bar structures, known as tensegrities, in which bars are held in compression and structural integrity is maintained by strings. 
In addition, tensegrity structures are defined such that no two rods are joined at the same point. Today, both broader and narrower definitions of `tensegrity' are in use; in this work, we adopt the definition of <cit.> and <cit.>. The structures developed by Snelson and co-workers led to a significant renewal of interest in tensegrities. However, like the famous “Snelson Tower”, many of these are unstable, making them harder to use and integrate into actual engineering structures. This work demonstrates that this instability stems from designs based on soft elastic strings rather than rigid strings. Over the years, researchers have focused on mathematically understanding the source of these instabilities and on developing rigid, stable structures for various applications. In recent years, there has been renewed interest in tensegrities, including ideas and discussions in the areas of tensegrity metamaterials <cit.>, bio-tensegrities <cit.>, planetary lander modules <cit.>, tensegrity-inspired structures <cit.>, and foldable structures <cit.>. Of particular interest to this work are studies of the mechanics of tensegrities and of form-finding methods, which are highlighted below. While the two topics appear diverse, a thorough comprehension of the mechanics of these structures is necessary to develop robust form-finding methodologies.

§.§ Literature review

General discussions and detailed literature reviews of many topics of interest to the tensegrity community are given in <cit.>. 
Some conclusions common to many of the studies into the mechanics of tensegrity structures include:

* The structures are statically indeterminate and dynamically quasi-stable.
* Most of the structures have a soft mode, often referred to as an infinitesimal flex, mechanism mode, or swinging mode.
* Such soft modes are reduced by the addition of pre-stress, which can also be equivalent to requiring an additional string.
* There is a need to understand the source of the non-linearity/snapping behavior/instability observed in the force-displacement curve.
* There is a need for a solution that does away with these instabilities, to help design stable, rigid tensegrities that can be used in actual engineering applications.

§.§ Mechanics of tensegrities

The 1980s and 90s saw tremendous growth in the mathematical literature on the mechanics and stability of tensegrity structures. This included several works from Connelly <cit.>, Whiteley <cit.>, Calladine and Pellegrino <cit.>, and several other mathematicians and structural engineers. <cit.> present four mathematical concepts that uniquely define the stability of a tensegrity, i.e. infinitesimally rigid $\subset$ pre-stress stability $\subset$ second-order rigidity $\subset$ rigid. This rigidity hierarchy has formed the basis for several works to this day, including the present work, where a second-order rigidity is presented. Alongside the seminal preliminary works of Connelly and co-workers, several recent works from the early 2000s re-visit the mechanics of tensegrities. <cit.> state that tensegrities are under-constrained, undergo an infinitesimal flex, and display nonlinear geometric stiffening upon loading. Such a definition automatically paves the way for unstable and soft modes in tensegrities, as this work later demonstrates. They discuss instability in the force-displacement relations by considering the famous 3-rod and 9-string tensegrity. 
This is further supported by a vibration analysis <cit.>, which demonstrates insufficient damping due to the cables used to build the tensegrity structure. <cit.> conclude that the inefficient damping apparent in the models is a serious drawback for the practical usage of tensegrity structures. The same three-rod structure is considered again for dynamic analysis by <cit.>. They unequivocally demonstrate the soft mode in this structure, which is mentioned as an . They further demonstrate that pre-stressing leads to an increase in the eigenfrequency of the lowest mode of oscillation. As will be discussed later in this work, this pre-stressing acts as the additional string that would have been required to build a stable three-rod and ten-string tensegrity. The non-linearity in the force-displacement behavior of such a tensegrity is again explored, and discussed as a snapping instability, by <cit.>. Here, they explore the snapping behavior as a function of the applied torque and the structure's symmetry. Such snapping behavior is an obvious effect of the soft modes inherent in the structure. Other prominent works that discuss the statics and dynamics of tensegrities include <cit.>.

§.§.§ Tensegrity towers

Another area that has received attention is deployable structures and tensegrity towers. <cit.> use the tower, made of three-rod nine-string units and proposed by <cit.>, for the design of a deployable structure. The work starts with the pre-conceived notion that tensegrities are composed of soft and hard members. While the work explores possible deployment paths, such paths would only work in the absence of soft modes. Juan and Mirats Tur <cit.>, motivated by robotics and controls, discuss the static and dynamic analysis of tensegrity structures. They use the ideas of Connelly and co-workers and provide a comprehensive review of the static analysis of such structures to date. 
<cit.> propose an algorithm for deployable structures. These deployable structures are made of unit cells of tensegrities and continuous cables that can be adjusted to actively deploy and activate the structures. However, while they discuss the active deformation behavior, the local soft modes that arise in such structures are not considered, limiting the serious applicability to tensegrity engineering. Some of the other works that consider the behavior of tensegrity towers include <cit.>. The concept of deployable tensegrity is re-visited by <cit.>, who propose the usage of soft elastomers for the programmable deployment of these tensegrities. The work again considers the famous three-rod and nine-string tensegrity. The proposed work's potential applications are immense, but it again does not consider the swinging-mode instability of these tensegrities that is repeatedly discussed in the mechanics literature. However, exploring such deployment for more rigid tensegrities of the kind proposed in this work could help create stable, deployable, tensegrity-based space structures.

§.§.§ Design philosophies

Specific designs based on the hypothesis of stability arising from symmetry have been another direction of common interest in many works on tensegrity structures. <cit.> considered regular truncated icosahedral and dodecahedral tensegrities. The authors again confirm the presence of 15 distinct infinitesimal mechanism modes in these structures. The same idea is used by <cit.> to construct tensegrity structures based on the truncated octahedron. That work, however, is motivated by building lattice-based composite structures that show superior impact behavior. They use a homogenization technique to obtain the dynamic response of a structure and pitch its potential applications in metamaterials. 
While the averaged temporal behavior during impact shows promise, the oscillations in the results indicate several soft modes that warrant significant further investigation.

§.§.§ Form-finding techniques

While the mechanics of tensegrities has been one focus area, the other remains form-finding. Various form-finding techniques have been proposed over the years. In general, they can be classified as based on (a) the force method or (b) the energy method. <cit.> discussed the application of the dynamic relaxation technique to tensegrity nets and membranes. The idea here is the application of an explicit solution technique to the static behavior of structures; in the last two decades, this has been used extensively. A thorough review of the form-finding techniques up to the early 2000s can be found in the work of <cit.>. <cit.> consider the idea of dynamic relaxation for pre-stressed strut structures and determine equilibrium configurations for three- and four-rod tensegrities. The minimization starts with a pre-set initial configuration to obtain a relatively near-final configuration. The force density methods are based on the formulation of a constrained optimization problem to minimize the overall force density. Here, the force density $q_k$ is defined as \begin{equation} q_k = \frac{f_k}{\ell_k} \end{equation} where member $k$ has a member force $f_k$ and a length $\ell_k$. <cit.> consider the constrained optimization problem of minimizing the overall force density in the cable structures. Similar to <cit.>, <cit.> also consider a minimization through grouping of elements. Alternatively, <cit.> propose nonlinear programming as a possible methodology for minimizing the force density in the cables. <cit.> consider ideas from computer graphics to create zones of non-intersection and propose a force maximization method of form-finding. The work demonstrates tensegrities with a varying number of rods and strings. 
While they denote the structures as super-stable, the work does not show any static, dynamic, or vibrational analysis to prove this. Some other recent works using form-finding based on the force density idea include those of <cit.>. <cit.> alternatively use genetic algorithms to perform potential energy minimization for various configurations. Several more works have explored the concept of form-finding, including <cit.>. A common aspect of all the above form-finding literature is a general lack of discussion of the presence or absence of the soft swinging mode treated thoroughly in the mechanics literature, as stated earlier. A general overview of the form-finding papers shows that most of the structures obtained from these methods lack stability and have one or more soft modes. A simple check is the total number of strings in the structure, which will be discussed in the following sections. <Ref> compares some of the structures produced by the form-finding techniques that show evidence of string-deficiency and definite inherent soft modes. While exceptions could exist, most often the proposed structures are deficient in one or more strings.

Tensegrities proposed in the literature using form-finding methods. The number of strings proposed is compared with the actual number of strings required for linearization stability. The concept of linearization stability will be introduced later in this work.

Work     Figure   Number of rods   Number of strings   Required number of strings for stability
<cit.>   11       3                9                   10
<cit.>   12       4                12                  15
<cit.>   7        6                24                  25
<cit.>   2c       3                9                   10
<cit.>   2d       4                12                  15
<cit.>   2e       4                11                  15
<cit.>   4e       6                9                   25
<cit.>   19       3                9                   10
<cit.>   19       4                12                  15
<cit.>   19       6                15                  25
<cit.>   1        12               32                  55
<cit.>   5        4                12                  15
<cit.>   3        20               60                  95
<cit.>   4        6                24                  25
<cit.>   7        30               90                  145
<cit.>   7        3                12                  10
<cit.>   9        6                24                  25

§.§ Overview and need of this work

As is evident, most of the designed tensegrities employ methods that lead to “string-deficient” structures that have inherent soft swinging modes. 
Such structures lack rigidity and are not suitable for actual engineering structures. As will be discussed in this work, a possible reason for form-finding leading to such string-deficient structures is that many of the designs are based on the use of elastic/rubbery bands, since many researchers have pointed out that tensegrities are harder to build with rigid strings. The large elastic deformation of elastic strings can help through small adjustments in length; however, such small adjustments in length automatically introduce swinging modes. The obvious question is, “why can we not add more cables to stabilize them?” Unfortunately, such an addition is not feasible without using more cables than required. The only possible way is to design them from the ground up, considering constraints that eliminate such soft modes. Firstly, this work demonstrates the mathematical framework that integrates the mechanics into the form-finding process to ensure robust tensegrity structural designs. Nevertheless, in certain applications, the presence of swinging modes is a necessity. Simple examples are an application to mimic water plants that sway with the water current; a planetary lander module whose soft mode allows it to deform and diffuse impact energy; and metamaterials that block or allow certain frequency modes. However, if the form-finding technique does not account for the mechanics of the swinging modes, it cannot eliminate them or control them in preferred directions. Thus, this work stems from the above need for understanding string-deficiency, which leads to soft swinging modes, and for incorporating this understanding into the form-finding process to design robust tensegrity structures suitable for any engineering application. It is also important to note here that this work tries to eliminate the assumption that tensegrity structures are necessarily pre-stressed. 
It demonstrates that the notion of pre-stressing arose owing to form-finding and design philosophies that use rubber bands (rather than rigid ropes/strings). This work demonstrates the mathematics of the mechanics and stability involved in designing these structures and derives a relation between the number of rods/beads/cables required to ensure a stable structure. The work also outlines possible form-finding methods, integrating the mechanics, to build larger stable tensegrity structures. To demonstrate the developed concepts, the work considers the famous three-rod and nine-string tensegrity and compares it with the three-rod and ten-string tensegrity derived from the methods proposed here.

§ MATHEMATICS OF TENSEGRITIES

It is important to define some terminology required to uniquely identify a tensegrity and the space it spans. Assume that some arbitrary $N$ points in space need to be connected by $n$ rods, $b$ beads, and $m$ cables. For generality, it is unknown at this stage how the rods and cables need to be placed, or even whether the selected points are suitable to form a tensegrity. To illustrate this point with an example, a kite is the simplest form of a tensegrity: four points connected by two rods and four strings. A kite is feasible only if the four points form a quadrilateral; if three of the four points were collinear, it would not be feasible to form a kite tensegrity. Thus, it is essential to check whether the selected $N>0$ points are suitable for forming a tensegrity even before form-finding. To this end, some relevant mathematical concepts are defined. The $n > 0$ rods, $b \ge 0$ beads, and $N = 2n+b$ cable attachment points are indexed by $0,1,\cdots, N-1$. Here, the relation between the number of cables $(m)$, rods $(n)$, and beads $(b)$ is unknown and yet to be determined. Each rod has two attachment points, while a bead has only one. 
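The suitability check described above (for example, ruling out a kite when three of the four candidate points are collinear) can be sketched numerically. The following is a minimal illustration, assuming NumPy; the function name is ours, not the paper's:

```python
import numpy as np

def any_three_collinear(points, tol=1e-9):
    """Return True if any three of the given 3-D points are (nearly) collinear."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                # Three points are collinear iff the cross product of
                # the two edge vectors vanishes.
                cross = np.cross(pts[j] - pts[i], pts[k] - pts[i])
                if np.linalg.norm(cross) < tol:
                    return True
    return False

# A proper quadrilateral: suitable for a kite tensegrity.
quad = [(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)]
# Three collinear points: a kite tensegrity is not feasible.
degenerate = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (0, 1, 0)]
```

For the kite, `any_three_collinear` returning False is exactly the feasibility condition that the four points form a quadrilateral.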
The topology of the rods is uniquely defined by a list $R$ of $n$ unordered pairs $\{i,j\}$; typically, the pairs $\{2k,2k + 1\}$ for $k = 0, \cdots, n-1$ are joined by rods. Similarly, a list $C$ defines the $m$ unordered pairs to be joined by cables. At this point, the dependency between $n$ and $m$ remains unknown and shall be discussed in the upcoming section. The topology enforces obvious conditions: a length $r_{ij}$ for each rod pair and a length $c_{ij}$ for each cable pair. Here, the position includes the specific location, given as $\left(x_i,y_i,z_i\right)$, of each node $i$. A single position $P$ is defined to include both where the tensegrity is and what shape it is, and these need to be disentangled to study the shape properly. Note here that a single tensegrity can occupy multiple positions due to soft modes like the swinging mode. The set of all these positions is denoted by $\Pi$. To be mathematically precise, $\Pi$ defines the space of all possible positions, where each single position $P$ (of a tensegrity) corresponds to the $3N$-tuple $\left(x_0, y_0, z_0, \cdots , x_{N-1}, y_{N-1}, z_{N-1}\right)$. In other words, this looks like the space $\mathbb{R}^{3N}$ of $3N$-tuples of real numbers. It is natural that a position $P$ is allowed if it satisfies the “rod constraints” \begin{equation} \left(x_i-x_j\right)^2 +\left(y_i-y_j\right)^2 + \left(z_i-z_j\right)^2 = d_{ij}^{2} = r_{ij}^{2} \ \forall \left\{i,j\right\} \in R \label{rodconstraint} \end{equation} and the “cable constraints” \begin{equation} \left(x_i-x_j\right)^2 +\left(y_i-y_j\right)^2 + \left(z_i-z_j\right)^2 = d_{ij}^{2} \le c_{ij}^{2} \ \forall \left\{i,j\right\} \in C \label{cableconstraint} \end{equation} where $d_{ij}$ is the distance between the ends $i$ and $j$, $r_{ij}$ is the length of the rod, and $c_{ij}$ is the original undeformed length of the cable. 
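A candidate position $P$ can be tested against the rod and cable constraints directly from their definitions. The sketch below is illustrative (assuming NumPy; the function and variable names are ours):

```python
import numpy as np

def position_allowed(P, rods, rod_lengths, cables, cable_lengths, tol=1e-9):
    """Check the rod constraints (d_ij = r_ij) and cable constraints (d_ij <= c_ij).

    P            : (N, 3) array of point coordinates (x_i, y_i, z_i)
    rods, cables : lists of index pairs {i, j}
    """
    P = np.asarray(P, dtype=float)
    # Rod constraints: squared distance must equal the rod length squared.
    for (i, j), r in zip(rods, rod_lengths):
        d2 = np.sum((P[i] - P[j]) ** 2)
        if abs(d2 - r ** 2) > tol:
            return False
    # Cable constraints: squared distance must not exceed the cable length squared.
    for (i, j), c in zip(cables, cable_lengths):
        d2 = np.sum((P[i] - P[j]) ** 2)
        if d2 > c ** 2 + tol:
            return False
    return True

# A kite: two rods of length 2 crossing at the origin, four cables of length sqrt(2).
P = [(-1, 0, 0), (1, 0, 0), (0, -1, 0), (0, 1, 0)]
rods, rod_lengths = [(0, 1), (2, 3)], [2.0, 2.0]
cables, cable_lengths = [(0, 2), (2, 1), (1, 3), (3, 0)], [2 ** 0.5] * 4
```

The kite position above satisfies both constraint sets; shortening any cable below its point separation makes the position disallowed.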
In addition, “anchorage constraints”, or boundary conditions, also need to be considered to uniquely define the boundary value problem. Thus, \begin{equation} \left( x_i, y_i, z_i\right) = \left( \overline{x}_i, \overline{y}_i, \overline{z}_i\right) \end{equation} are also considered in the mathematical description of the tensegrity.

§.§ Tensegrity shape coordinates

Further on, to avoid the soft modes and help uniquely characterize a tensegrity, it is essential to define additional mathematical constraints. If the original tensegrity defined by position $P$ is moved to $P'$ by some combination of translations and rotations, and the only possible positions $P'$ are not near $P$ (i.e., $P \approx P'$ does not hold), then $P$ represents a uniquely defined stable tensegrity. In other words, it is necessary to ensure that there are no positions $P'$ in the proximity of $P$ that can be reached without an appreciable change in energy (i.e., via a swinging mode or soft mode). For example, the famous three-rod and nine-string tensegrity has a swinging-mode instability, since there are possible configurations near the equilibrium position into which it can move. Given a position $P = \left\{\left(x_i, y_i, z_i \right) \ \forall i = 0, \cdots , N-1\right\}$, a translation $T = \left(t_x,t_y,t_z\right)$ moves each point $p_i = \left(x_i, y_i, z_i \right)$ such that the rod and cable constraints are not violated. This is equivalent to a virtual displacement applied to test the stability of the structure at a particular position $P$. Now assume that a small motion $\mathbf{A} = \mathbf{I} + \varepsilon \mathbf{B}$ is applied to position $P$. 
We can write the squared distance, originally $d_{ij}^{2} = \left( p_i - p_j \right) \cdot \left( p_i - p_j \right)$, as \begin{equation} \left( \mathbf{A}p_i - \mathbf{A}p_j \right) \cdot \left( \mathbf{A}p_i - \mathbf{A}p_j \right) = d_{ij}^{2} + 2\varepsilon \, v_{ij} \cdot \left( \mathbf{B}v_{ij} \right) + \varepsilon^{2} \left( \mathbf{B}v_{ij} \right) \cdot \left( \mathbf{B}v_{ij} \right) \end{equation} where $v_{ij} = \left( p_i - p_j \right)$. For an infinitesimal rigid motion $\mathbf{B}$ is skew-symmetric, so the first-order term $2\varepsilon \, v_{ij} \cdot \left( \mathbf{B}v_{ij} \right)$ vanishes, and the squared distance is constant to first order in $\varepsilon$ but has higher-order terms. Thus, turning the tensegrity moves $P$ in a direction that does not change the $d_{ij}$ or their match to the constraints: rods and cables, like all other separations, change length at zero rate. At $P = \left(x_0, y_0, z_0, x_1, y_1, z_1, \cdots, x_i, y_i, z_i, \cdots, x_{N-1}, y_{N-1}, z_{N-1} \right)$, the vectors giving zero (rate of) change in shape are exactly the linear combinations of the rigid body motion vectors given as \begin{equation} \begin{split} \mathbf{r}_{0} &= \left[ 1 \ 0 \ 0 \ 1 \ 0 \ 0 \cdots 1 \ 0 \ 0 \cdots 1 \ 0 \ 0\right]^{T} \\ \mathbf{r}_{1} &= \left[ 0 \ 1 \ 0 \ 0 \ 1 \ 0 \cdots 0 \ 1 \ 0 \cdots 0 \ 1 \ 0\right]^{T} \\ \mathbf{r}_{2} &= \left[ 0 \ 0 \ 1 \ 0 \ 0 \ 1 \cdots 0 \ 0 \ 1 \cdots 0 \ 0 \ 1\right]^{T} \end{split} \end{equation} for translation, and \begin{equation} \begin{split} \mathbf{r}_{3} &= \left[ 0 \ z_0 \ -y_0 \ 0 \ z_1 \ -y_1 \cdots 0 \ z_i \ -y_i \cdots 0 \ z_{N-1} \ -y_{N-1} \right]^{T} \\ \mathbf{r}_{4} &= \left[ z_0 \ 0 \ -x_0 \ z_1 \ 0 \ -x_1 \cdots z_i \ 0 \ -x_i \cdots z_{N-1} \ 0 \ -x_{N-1} \right]^{T} \\ \mathbf{r}_{5} &= \left[ y_0 \ -x_0 \ 0 \ y_1 \ -x_1 \ 0 \cdots y_i \ -x_i \ 0 \cdots y_{N-1} \ -x_{N-1} \ 0 \right]^{T} \end{split} \end{equation} for rotations. The $3N$-vector that describes the rigid body motion of the $N$ points will necessarily be a unique combination of the above six vectors. 
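The six rigid body motion vectors, and the fact that they change every pairwise distance at zero rate, can be verified with a short numerical sketch (numpy assumed; the helper name is illustrative):

```python
import numpy as np

def rigid_body_vectors(P):
    """Six rigid-body motion vectors at position P (an (N, 3) array),
    following the translation/rotation pattern of r_0, ..., r_5."""
    N = len(P)
    vecs = []
    for axis in range(3):                       # translations r0, r1, r2
        v = np.zeros((N, 3)); v[:, axis] = 1.0
        vecs.append(v.ravel())
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    zero = np.zeros(N)
    vecs.append(np.stack([zero, z, -y], axis=1).ravel())   # r3: about x
    vecs.append(np.stack([z, zero, -x], axis=1).ravel())   # r4: about y
    vecs.append(np.stack([y, -x, zero], axis=1).ravel())   # r5: about z
    return vecs

# First-order check: for each rigid vector, every pairwise squared
# distance changes at rate 2 v_ij . (delta_i - delta_j) = 0.
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 3))
max_rate = 0.0
for v in rigid_body_vectors(P):
    d = v.reshape(-1, 3)
    for i in range(len(P)):
        for j in range(i + 1, len(P)):
            rate = 2 * (P[i] - P[j]) @ (d[i] - d[j])
            max_rate = max(max_rate, abs(rate))
```

The translations cancel exactly, and the rotation vectors cancel because $v_{ij}$ is orthogonal to an infinitesimal rotation of itself.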
Further on, one can assume that there exist additional $(3N-6)$ vectors $\mathbf{v}_j$ with $j = 6, \cdots, 3N-1$, such that the set comprising $\mathbf{r}_0, \mathbf{r}_1, \cdots, \mathbf{r}_5, \mathbf{v}_6, \mathbf{v}_7, \cdots, \mathbf{v}_{3N-1}$ spans the space $\mathcal{R}$ of all possible motions of $P$. While the vectors $\mathbf{r}_i$ represent the rigid body motions, the vectors $\mathbf{v}_j$ represent the remaining motions, such as the swinging modes or soft modes. A more rigorous definition is given in the upcoming sections. For now, it can be said that the $\mathbf{v}_j$ are linearly independent and that any vector in the space $\mathcal{R}$ can be written uniquely as a combination of the $\mathbf{r}_i$ and $\mathbf{v}_j$. This also means that any shape-and-position $Q$ can be reached from $P$ as \begin{equation} Q - P = r_0 \mathbf{r}_0 + r_1 \mathbf{r}_1 + \cdots + r_5 \mathbf{r}_5 + v_6 \mathbf{v}_6 + v_7 \mathbf{v}_7 + \cdots + v_{3N-1} \mathbf{v}_{3N-1} \end{equation} for some $3N$-tuple $\left(r_0, \cdots, r_5, v_6, \cdots, v_{3N-1} \right)$. 
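One concrete way to complete the rigid body vectors to a full basis, and then recover the unique coefficients of a displacement $Q - P$, is sketched below using an SVD-based orthogonal complement. This particular construction is an assumption for illustration; the text leaves the choice of the $\mathbf{v}_j$ open until a later section:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((4, 3))      # N = 4 points, 3N = 12 coordinates
N = len(P)

# Rigid-body columns, following the r_0 ... r_5 pattern in the text.
cols = []
for axis in range(3):
    v = np.zeros((N, 3)); v[:, axis] = 1.0
    cols.append(v.ravel())
x, y, z = P.T
zero = np.zeros(N)
cols.append(np.stack([zero, z, -y], 1).ravel())
cols.append(np.stack([z, zero, -x], 1).ravel())
cols.append(np.stack([y, -x, zero], 1).ravel())
Rmat = np.stack(cols, axis=1)                    # (3N, 6)

# Complete to a basis: an orthonormal basis of the orthogonal complement
# of the rigid-body columns serves as shape vectors v_6, ..., v_{3N-1}.
U, s, _ = np.linalg.svd(Rmat, full_matrices=True)
V = U[:, 6:]                                     # (3N, 3N - 6)
B = np.hstack([Rmat, V])                         # full (3N, 3N) basis

# Any displacement Q - P now has unique coefficients.
QminusP = rng.standard_normal(3 * N)
coeffs = np.linalg.solve(B, QminusP)
residual = np.linalg.norm(B @ coeffs - QminusP)
```

The solve succeeds because the combined matrix is square and full rank, which is exactly the uniqueness condition stated above.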
The $3N$-tuple can be decomposed into a rigid body rotation and translation applied to a pure shape change \begin{equation} \begin{split} P\left(r_0, \cdots, r_5, v_6, \cdots, v_{3N-1}\right) &= R_{\left(r_0, \cdots, r_5\right)} \left( P + \left( v_6 \mathbf{v}_6 + v_7 \mathbf{v}_7 + \cdots + v_{3N-1} \mathbf{v}_{3N-1} \right) \right) \\ &= R_{\left(r_0, \cdots, r_5\right)} \left( P + \sum_{i=6}^{3N-1} {v_{i} \mathbf{v}_{i}} \right) \end{split} \end{equation} The operator $R$, for a single point, can be illustrated as \begin{equation} \left(x,y,z\right) \rightarrow \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos r_3 & \sin r_3 \\ 0 & -\sin r_3 & \cos r_3 \end{bmatrix} \left( \begin{bmatrix} \cos r_4 & 0 & \sin r_4 \\ 0 & 1 & 0 \\ -\sin r_4 & 0 & \cos r_4 \end{bmatrix} \left( \begin{bmatrix} \cos r_5 & \sin r_5 & 0 \\ -\sin r_5 & \cos r_5 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \right) \right) + \begin{bmatrix} r_0 \\ r_1 \\ r_2 \end{bmatrix} \end{equation} For the $N$ points, i.e., the $3N$ coordinates, the operator performs the same operation on each triple of coordinates in turn. Writing out the entire $3N$-vector at once would lead to an inelegant and large matrix formula, with the result dependent on the order of the rotations $r_3$ around the $x$-axis, $r_4$ around the $y$-axis, and $r_5$ around the $z$-axis. Additionally, since the position $P$ with all $N$ points can itself be written as a vector of size $3N$ \begin{equation} P = \left(x_0,y_0,z_0,x_1,y_1,z_1, \cdots , x_{N-1},y_{N-1},z_{N-1} \right) \end{equation} it can also be expressed as a linear combination of the basis vectors as \begin{equation} P = r_0 \mathbf{r}_0 + \cdots + r_5 \mathbf{r}_5 + v_6 \mathbf{v}_6 + \cdots + v_{3N-1} \mathbf{v}_{3N-1}. \end{equation} This implies that the coefficients $\left(r_0, \cdots, r_5, v_6, \cdots, v_{3N-1} \right)$ can be uniquely determined if the Jacobian matrix formed from the column vectors, i.e. 
$\left[ \mathbf{r}_0, \cdots , \mathbf{r}_5, \mathbf{v}_6, \cdots, \mathbf{v}_{3N-1} \right]$, has a rank of $3N$. In such a case, it can also be said that the selected set of points can lead to a stable tensegrity. It is pertinent to note here that the vectors $\mathbf{v}_i$ are yet to be determined. §.§ Constraint vectors Elaborating on the above, this sub-section outlines the determination of the constraint vectors $\left[ \mathbf{v}_6, \mathbf{v}_7,\cdots, \mathbf{v}_{3N-1} \right]$. One reasonable starting point is to consider the usual basis vectors that span the space $\mathbb{R}^{3N}$ of the $N$ points of the tensegrity. These are \begin{equation} \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 0 \\ 0 \\ 0 \end{bmatrix}, \cdots, \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ 1 \end{bmatrix} \end{equation} This gives $3N$ vectors and, in addition to the six vectors $\left(\mathbf{r}_i\right)$, there are a total of $3N+6$ vectors. This implies that six of them need to be discarded to obtain the final set of basis vectors $\left(\mathbf{v}_i\right)$. The final set of vectors can then be taken to span a space $\mathcal{V}$ such that $\left(\mathbf{v}_{6}, \mathbf{v}_{7}, \cdots, \mathbf{v}_{3N-1} \right) \in \mathcal{V}$. The question remains how to choose the six vectors to be discarded while maintaining a rank of $3N$. For example, if one were to discard the first six standard basis vectors (those for the coordinates of $p_0$ and $p_1$), then $\mathcal{V}$ would contain every pattern of motion that keeps the points $p_0$ and $p_1$ fixed, in particular the rigid rotation about the line $\overline{p_0 p_1}$. 
Thus, the uniqueness of the configuration $P$ is lost. Similar random omissions can lead to choices that include a non-unique configuration for $P$. A more systematic manner of choosing the set of six vectors to be discarded is to find the three points that give the biggest triangle, i.e. to determine the three indices \begin{equation} 0 \le i < j < k < N \text{ such that the area of the triangle } p_i p_j p_k \text{ is greatest.} \end{equation} If one were to construct a matrix by restricting to these three points, then \begin{equation} \begin{bmatrix} \left[A \right]_{(9\times 3)} & \left[B \right]_{(9\times 3)} & \left[C \right]_{(9\times 6)} \end{bmatrix} \end{equation} where $\left[A \right]_{(9\times 3)}$ contains the rigid body translations along the $x$-, $y$- and $z$-axes; $\left[B \right]_{(9\times 3)}$ contains the rigid body rotations about the $x$-, $y$- and $z$-axes; $\left[C \right]_{(9\times 6)}$ contains the standard basis vectors that move points $p_j$ and $p_k$ (three each). The matrices can be elaborated as \begin{equation} \left[A\right] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}; \left[B\right] = \begin{bmatrix} 0 & z_i & -y_i \\ z_i & 0 & -x_i\\ y_i & -x_i & 0\\ 0 & z_j & -y_j \\ z_j & 0 & -x_j\\ y_j & -x_j & 0\\ 0 & z_k & -y_k \\ z_k & 0 & -x_k\\ y_k & -x_k & 0 \end{bmatrix}; \left[C\right] = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} \end{equation} Now, if one were to drop three of the six standard-basis columns from the combined $9 \times 12$ matrix, there are a total of 20 possible combinations (choosing 3 out of 6, without regard to order). This requires calculating the determinants of the resulting $9 \times 9$ matrices and choosing a combination that yields a full-rank matrix. 
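The determinant test over the 20 column-drop combinations can be sketched as follows (numpy assumed; the three points are made-up sample data, and not every combination works, which is precisely why the test is needed):

```python
import itertools
import numpy as np

def abc_blocks(pi, pj, pk):
    """Build the 9x3 translation block [A], the 9x3 rotation block [B]
    and the 9x6 standard-basis block [C] for three chosen points,
    mirroring the matrices in the text."""
    A = np.vstack([np.eye(3)] * 3)                       # translations
    rows = []
    for (x, y, z) in (pi, pj, pk):                       # rotations
        rows += [[0, z, -y], [z, 0, -x], [y, -x, 0]]
    B = np.array(rows, dtype=float)
    C = np.zeros((9, 6))
    C[3:9, :] = np.eye(6)                                # moves p_j, p_k
    return A, B, C

pi, pj, pk = np.array([0., 0, 0]), np.array([3., 0, 0]), np.array([0., 2, 1])
A, B, C = abc_blocks(pi, pj, pk)
M = np.hstack([A, B, C])                                 # 9 x 12

# Drop 3 of the 6 standard-basis columns (20 combinations) and collect
# those that leave a full-rank 9 x 9 matrix.
full_rank = []
for drop in itertools.combinations(range(6), 3):
    keep = [c for c in range(6) if c not in drop]
    M9 = np.hstack([A, B, C[:, keep]])
    if abs(np.linalg.det(M9)) > 1e-9:
        full_rank.append(drop)
```

For non-collinear points at least one combination survives, and any surviving choice fixes a valid set of discarded vectors.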
With these, one can now also determine the coefficients $\left( v_6, v_7, \cdots, v_{3N-1} \right)$ that allow $P$ to be represented in a unique manner. If there were any motion leading to a nearby $P'$, the above construction would detect it; this implies that no soft modes are present, and the only way to change would be to adopt a completely new shape. The coming sections discuss how the developed ideas can help identify actual rod and cable positions, otherwise commonly known as form-finding in the tensegrity literature. §.§ Linear constraints and convexity At this point, the physical meaning of the vectors $\mathbf{v}$ needs to be defined. Considering the constraint equations as functionals of the vectors $\mathbf{v}$, they can be given as \begin{equation} \begin{split} \mathbf{g}^{\ell} \left( \mathbf{v} \right) &= 0 \ \forall \ \ell = 0, \cdots, e-1 \ \text{($e$ equalities, representing rod constraints)} \\ \mathbf{f}^{k} \left( \mathbf{v} \right) &\le 0 \ \forall \ k = e, \cdots, c-1 \ \text{($c-e$ inequalities, representing cable constraints)} \end{split} \end{equation} Here, the functionals $\mathbf{f}^{k} \left(\mathbf{v}\right)$ are given by \begin{equation} \mathbf{f}^{k} \left(\mathbf{v}\right) = \left[ f_{0}^{k} \ f_{1}^{k} \ \cdots \ f_{m-1}^{k} \right] \begin{bmatrix} v_0 \\ v_1 \\ \vdots \\ v_{m-1} \end{bmatrix} = f_{0}^{k} v_0 + f_{1}^{k} v_1 + \cdots + f_{m-1}^{k} v_{m-1} \end{equation} where $\mathbf{f}^k$ approximates the change in the cable length from its value at $P$. The zero vector $\mathbf{v} = 0$ trivially satisfies the above; however, the interest is usually in the non-trivial solutions where $\mathbf{v} \neq 0$. The vectors $\mathbf{v} \in \mathcal{V}$ represent the small changes from $P$ subject to the cable constraints $\mathbf{f}^{k} \left(\mathbf{v}\right)$. As discussed earlier, these cable constraints represent the change in length of the strings/cables from their values at $P$, and they are zero when $\mathbf{v}=0$. 
A vanishing linear combination of these functionals is possible only trivially, with all coefficients zero, unless the functionals are linearly dependent. In the case that $\{\mathbf{g}^0, \cdots, \mathbf{g}^{e-1}, \mathbf{f}^{e}, \cdots, \mathbf{f}^{c-1}\}$ satisfies the full rank condition and \begin{equation} b_0 \mathbf{g}^0 + \cdots + b_{e-1} \mathbf{g}^{e-1}+ a_{e} \mathbf{f}^{e} + \cdots + a_{c-1}\mathbf{f}^{c-1} = 0 \end{equation} for some $a_{e} > 0, \cdots, a_{c-1} >0$, then the structure is said to satisfy the spanning convexity test. § BEAD-BASED MODELS The previous section discussed some mathematical constructs to define a tensegrity uniquely. The introduction also noted that many tensegrities designed to date are based on elastic cables (like an elastic band) rather than inextensible cables (which behave like an actual rope). Before proceeding further, it is imperative to understand the difference in behavior that results from using elastic versus inextensible cables. Here, simple structures with no rods and only a single bead are considered to illustrate the difference. On a curved surface, such as a sphere's surface, the bead can be fixed firmly in place. However, in 3D space, it collapses unless anchored at fixed points. While these do not satisfy the conventional definition of “tensegrities,” they provide low-dimensional examples to demonstrate the calculations that also apply to free-standing stable tensegrities. Consider a bead $b$ held on a table top by $N$ cables of length $c_j$ as shown in <Ref>. Holding bead $b$ with one, two, three or four cables. The cables are anchored to the table at anchor points $p_0 = \left(x_0, y_0 \right)$ up to $p_{N-1} = \left(x_{N-1}, y_{N-1} \right)$. To start with, if the bead were constrained by a single cable only, it would simply lie loose; with two, it can be held tight between $p_0$ and $p_1$ but does not feel very firm in the direction perpendicular to the line $\overline{p_0p_1}$. With three, it can be held “rather” securely. 
However, four gets tricky. In the case of four cables, if the cable $\overline{bp_0}$ is tightened, then the cable $\overline{bp_3}$ goes slack, and vice-versa. This changes if elastic bands are used instead, in which case a configuration with four cables becomes possible. The case of three strings is a simple case of force balance from engineering mechanics, as shown in <Ref>. The tensions (up to a scale factor) in the three cables balance at $b$. The vectors are parallel to the cables but unrelated to the cable lengths: one could move the anchor $t_0$ twice as far from $b$ and the cable would still pull in the same direction with the same force $\mathbf{t}_0$. For this scenario, considering the anchor points $t_0$, $t_1$ and $t_2$ and the position vector of bead $b$, we can say that \begin{equation} \left( t_2 - b \right) = a_0 \left( t_0 - b \right) + a_1 \left( t_1 - b \right) \end{equation} for unique values of $a_0$ and $a_1$. It is also important to note that their values are strictly negative, since the cables can only carry tension loads. However, the above discussion on uniqueness fails once four cables are considered. Now consider the same situation in 3D, as shown in <Ref>. Here, four cables in 3D can hold a bead securely, in analogy to three in a plane. If $b$ is confined to a plane $S$ but cables can reach it from points above, below and on $S$, then three cables are sufficient to trap it. Each cable $\overline{bp_i}$ sets up a constraint which, restricted to $S$, gives a disc centered at the point of $S$ nearest to $p_i$. Here, the four inequality constraints (with $p_0, p_1, p_2, p_3$ not in the same plane) are exchanged for three inequalities and one $S$-defining equality. 
For example, if $S$ is the plane $z = 0$, and suppose $t_0 = \left(-2,-1,1\right)$, $t_1 = \left(1,0,2\right)$ and $t_2 = \left(-1,1,-1\right)$, then the constraints on $P$ are \begin{equation} \begin{split} Z &= 0 \\ \left(X+2 \right)^2 + \left(Y+1 \right)^2 + \left(Z-1 \right)^2 &\le 6\\ \left(X-1 \right)^2 + Y^2 + \left(Z-2 \right)^2 &\le 5\\ \left(X+1 \right)^2 + \left(Y-1 \right)^2 + \left(Z+1 \right)^2 & \le 3 \end{split} \end{equation} and substituting $Z = 0$ first reduces these to two-variable constraints. Similarly, if $b$ were on the end of a free-turning stick with its other end attached at $Q$, the equality $\|b-Q\| = \text{radius}$ would hold it to a spherical surface; three cables can then fix its position, since near $b$ the tangent plane approximates the spherical surface. §.§ Rubber vs. cables There is an easy way to make the situation with four cables right, and that is to use elastic/rubbery cables, such as elastic bands. It is fairly easy to show that joining $b$ to any number of fixed points $p_i$ by elastic bands gives an elastic energy function $E(P)$ of the position which is “strictly convex” as long as all the cables stay stretched. The configuration then has a unique equilibrium $P_0$ at the minimum-$E$ point, which is stable in two senses: move the bead off $P_0$, and it will move back (elastic stability); change any of the $p_i$ or the elastic constants a little, and $P_0$ will move only a little (structural stability). Thus, one can easily get close to the designed system. Hence, it is no coincidence that most of the tutorials available on building a tensegrity suggest the use of rubber bands. This also remains one of the origins of the long-standing notion that tensegrities are necessarily pre-stressed structures. As iterated in this discussion and mathematically illustrated in the coming sections, such a notion has arisen from the frequent use of elastic bands in the design of such tensegrity structures. 
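The strict convexity of the elastic energy, and the resulting unique stable equilibrium, can be illustrated numerically. The sketch below uses made-up anchors, spring constant and natural lengths, and plain gradient descent; none of these values or the method are prescribed by the text:

```python
import numpy as np

# Hypothetical 2-D example: a bead joined by rubber bands to four anchors.
anchors = np.array([[2.0, 0.0], [-1.0, 2.0], [-1.0, -2.0], [0.0, 3.0]])
L0 = 0.5          # natural (unstretched) length of every band (made up)
k = 1.0           # spring constant (made up)

def energy_grad(b):
    """Elastic energy and its gradient at bead position b."""
    d = b - anchors                       # (4, 2) separations
    r = np.linalg.norm(d, axis=1)
    ext = np.maximum(r - L0, 0.0)         # slack bands store no energy
    E = 0.5 * k * np.sum(ext ** 2)
    g = ((k * ext / r)[:, None] * d).sum(axis=0)
    return E, g

def minimize(b0, steps=5000, lr=0.05):
    b = np.array(b0, dtype=float)
    for _ in range(steps):
        _, g = energy_grad(b)
        b -= lr * g                       # plain gradient descent
    return b

# Starting from two different guesses, descent reaches the same unique
# equilibrium: the energy is strictly convex while all bands stay taut.
b1 = minimize([1.5, 1.5])
b2 = minimize([-1.0, -1.0])
gap = np.linalg.norm(b1 - b2)
```

The same experiment with inextensible cables has no such energy function, which is the root of the four-cable trouble described above.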
These elastic bands need to be stretched, at least slightly, if they are to remain in tension, which is a pre-requisite for the balance of forces and static equilibrium. However, as will be demonstrated in the later sections, the proposed three-rod and ten-string structure does not necessarily need the strings to be in tension. § MECHANICS OF STABLE TENSEGRITIES As defined earlier, an $n$-rod tensegrity comprises a set of points $p_0 = (x_0,y_0,z_0), \cdots, p_{2n-1} = (x_{2n-1},y_{2n-1},z_{2n-1})$ with a rod between each pair $\{p_0,p_1\}, \{p_2, p_3\},$ $\cdots, \{p_{2r}, p_{2r+1}\},$ $\cdots, \{p_{2n-2}, p_{2n-1}\}$. The strings in the tensegrity join end pairs $p_{i}$ and $p_{j}$ on different rods, for index pairs $\{i,j\} \in \mathcal{S}$. Each point $p_{i}$ has a set $\mathcal{S}_i$ of indices $j$ for which $p_{i}$ and $p_{j}$ are joined by a string, of which there are $\sigma$ altogether. At present, the relation between $n$ and $\sigma$ is unknown, and the concepts related to it are introduced here. Irrespective of whether the pair $\{i,j\}$ belongs to a rod or a string, it is possible to define the vectors \begin{equation} \mathbf{v}_{i,j} = p_{j} - p_{i} \text{ and } \mathbf{v}_{j,i} = p_{i} - p_{j} \end{equation} their modulus $m_{i,j} = m_{j,i} = \| \mathbf{v}_{i,j} \|$, and the corresponding unit vectors \begin{equation} \hat{\mathbf{v}}_{i,j} = \frac{\mathbf{v}_{i,j}}{m_{i,j}} \text{ and } \hat{\mathbf{v}}_{j,i} = -\hat{\mathbf{v}}_{i,j}. \end{equation} Assuming no inelastic effects like buckling, each rod has a compressive force $c_r$ and each string has a tensile force $t_{ij}$. 
The compressive force acts on the points $p_{2r}$ and $p_{2r+1}$ along the rod and is given by \begin{equation} \begin{split} \mathbf{r}_{2r} &= c_{r} \hat{\mathbf{v}}_{2r,2r+1} = \frac{c_r}{\| \mathbf{v}_{2r,2r+1} \|} \mathbf{v}_{2r,2r+1} = \Tilde{c}_r \mathbf{v}_{2r,2r+1} \\ \mathbf{r}_{2r+1} &= c_{r} \hat{\mathbf{v}}_{2r+1,2r} = \frac{c_r}{\| \mathbf{v}_{2r+1,2r} \|} \mathbf{v}_{2r+1,2r} = \Tilde{c}_r \mathbf{v}_{2r+1,2r} \end{split} \end{equation} Similarly, the tensile forces can be given as \begin{equation} \mathbf{t}_{ij} = t_{ij} \hat{\mathbf{v}}_{i,j} = \frac{t_{ij}}{\| \mathbf{v}_{i,j} \|} \mathbf{v}_{i,j} = \Tilde{t}_{ij} \mathbf{v}_{i,j} \end{equation} It is also evident that $\mathbf{r}_{2r+1} = -\mathbf{r}_{2r}$ and $\mathbf{t}_{ij} = -\mathbf{t}_{ji}$, and the equilibrium at any point $\mathbf{p}_{i}$ requires \begin{equation} \mathbf{r}_{i} = \sum_{j \in \mathcal{S}_i} {\mathbf{t}_{ij}} \end{equation} §.§ Constraint matrices Considering the vectors in terms of constraints, the configuration change vector can be given as \begin{equation} \delta \mathbf{p} = \left( \delta \mathbf{p}_0, \cdots, \delta \mathbf{p}_{2n-1} \right) = \left(\delta x_0, \delta y_0, \delta z_0, \cdots, \delta x_{2n-1}, \delta y_{2n-1}, \delta z_{2n-1} \right). 
\end{equation} The change in distance between the points $\mathbf{p}_{i}$ and $\mathbf{p}_{j}$, to first order, can be given as \begin{equation} \begin{split} \hat{\mathbf{v}}_{i,j} \cdot \left(\delta \mathbf{p}_j - \delta \mathbf{p}_i \right) &= \hat{\mathbf{v}}_{i,j} \cdot \delta \mathbf{p}_j + \hat{\mathbf{v}}_{j,i} \cdot \delta \mathbf{p}_i \\ &= \left[ 0 \ 0 \ 0 \ \cdots \ 0 \ \hat{\mathbf{v}}_{j,i}^{x} \ \hat{\mathbf{v}}_{j,i}^{y} \ \hat{\mathbf{v}}_{j,i}^{z} \ 0 \ \cdots \ 0 \ \hat{\mathbf{v}}_{i,j}^{x} \ \hat{\mathbf{v}}_{i,j}^{y} \ \hat{\mathbf{v}}_{i,j}^{z} \ 0 \ \cdots \ 0 \ 0 \ 0 \right] \begin{bmatrix} \delta x_0 \\ \delta y_0 \\ \delta z_0 \\ \vdots \\ \delta x_{2n-1} \\ \delta y_{2n-1} \\ \delta z_{2n-1} \\ \end{bmatrix} \\ &= \left[ \hat{C}_{i,j} \right] \left[ \delta \mathbf{p} \right] \end{split} \end{equation} Here (taking $i < j$), the entries $\hat{\mathbf{v}}_{j,i}^{x}, \hat{\mathbf{v}}_{j,i}^{y}, \hat{\mathbf{v}}_{j,i}^{z}$ occupy the locations $3i,3i+1,3i+2$, and their negatives $\hat{\mathbf{v}}_{i,j}^{x}, \hat{\mathbf{v}}_{i,j}^{y}, \hat{\mathbf{v}}_{i,j}^{z}$ occupy the locations $3j,3j+1,3j+2$. This can also be re-written without the normalization as \begin{equation} {\mathbf{v}}_{i,j} \cdot \left(\delta \mathbf{p}_j - \delta \mathbf{p}_i \right) = {\mathbf{v}}_{i,j} \cdot \delta \mathbf{p}_j + {\mathbf{v}}_{j,i} \cdot \delta \mathbf{p}_i = \left[ {C}_{i,j} \right] \left[ \delta \mathbf{p} \right] \end{equation} This leads to the equality constraint $\left[ {C}_{2r,2r+1} \right] \delta \mathbf{p} = 0$ for each rod $r$ and the inequality constraint $\left[ {C}_{i,j} \right] \delta \mathbf{p} \leq 0$ for each string $\{i,j\}$. Additionally, the rigid body translation vectors, given earlier, capture the net axis translations $\left[X\right] \delta \mathbf{p}$, $\left[Y\right] \delta \mathbf{p}$ and $\left[Z\right] \delta \mathbf{p}$ of the centroid $\overline{p}$ of the point set. Similarly, the rotation vectors, given earlier, capture (to linear order) the net turns $\left[\Tilde{X}\right] \delta \mathbf{p}$, $\left[\Tilde{Y}\right] \delta \mathbf{p}$ and $\left[\Tilde{Z}\right] \delta \mathbf{p}$ around the axes. 
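The covector $[\hat{C}_{i,j}]$ can be assembled and checked against a finite-difference computation of the actual distance change; a short numpy sketch with illustrative names:

```python
import numpy as np

def constraint_row(P, i, j, normalized=True):
    """Covector [C_hat_{i,j}] (or the un-normalized [C_{i,j}]) acting on
    the stacked displacement vector delta_p."""
    n = len(P)
    v = P[j] - P[i]
    if normalized:
        v = v / np.linalg.norm(v)
    row = np.zeros(3 * n)
    row[3 * j:3 * j + 3] = v      # coefficients of delta_p_j
    row[3 * i:3 * i + 3] = -v     # coefficients of delta_p_i
    return row

# Finite-difference check: the row gives the first-order change of the
# distance |p_j - p_i| under a small random displacement.
rng = np.random.default_rng(2)
P = rng.standard_normal((4, 3))
dp = 1e-6 * rng.standard_normal(12)
row = constraint_row(P, 0, 2)
Pd = P + dp.reshape(4, 3)
exact = np.linalg.norm(Pd[2] - Pd[0]) - np.linalg.norm(P[2] - P[0])
linear = row @ dp
err = abs(exact - linear)
```

The agreement is to second order in the displacement, matching the linearization used throughout this section.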
§.§ Linearization stability criterion It is hypothesized here that a stable tensegrity needs to satisfy the linearization stability criterion stated as * The set of all rod or string constraint covectors $[C_{i,j}]$, along with the translation and rotation vectors, spans the $6n$-dimensional space $\mathcal{L}_{n}$ of all possible linear constraints on the configurations $\mathbf{p}$. Thus, for any $\delta \mathbf{p} \neq 0$, there is at least one non-zero value $\Delta_{i,j} = [C_{i,j}] \ \delta \mathbf{p}$. If $\delta \mathbf{p}$ is not a translation or rotation, then this $\Delta_{i,j}$ must belong to a string $\{i,j\}$, and in that case $\Delta_{i,j} < 0$. In other words, if some $\delta \mathbf{p} \neq 0$ gives $\Delta_{i,j} = [C_{i,j}] \ \delta \mathbf{p} = 0$ for every constraint, then this $\delta \mathbf{p}$ is a null eigenvector. Since null eigenvectors represent the soft modes, these cannot be permitted. * The row vector $\mathbf{0}$ can be expressed as a linear combination with all $a_{i,j} > 0$ \begin{equation} \begin{split} \mathbf{0} &= \rho_0 [C_{0,1}] + \cdots + \rho_r [C_{2r,2r+1}] + \cdots + \rho_{n-1} [C_{2n-2,2n-1}] + \sum_{\{i,j\} \in \mathcal{S}, i<j} {a_{i,j}} [C_{i,j}] \end{split} \label{eq:eq16} \end{equation} Applying this combination to a $\delta \mathbf{p}$ satisfying the equality constraints $\left[ {C}_{2r,2r+1} \right] \delta \mathbf{p} = 0$ gives \begin{equation} \sum_{\{i,j\} \in \mathcal{S}, i<j} {a_{i,j}} [C_{i,j}] \ \delta \mathbf{p} = {0} \end{equation} From the inequality constraints, $\Delta_{i,j} = \left[ {C}_{i,j} \right] \delta \mathbf{p} \leq 0$; since all $a_{i,j} > 0$, the sum can vanish only if every $\Delta_{i,j} = 0$, and by condition (a) the only possibility is then $\delta \mathbf{p} = 0$. In the $6n$-dimensional space, there are six co-vectors from the rigid body translations and rotations, $n$ from the rods, and $\sigma$ from the strings. To satisfy condition (a), it is necessary that \begin{equation} 6 + n + \sigma \geq 6n \ \Rightarrow \ \sigma \geq 5n-6. 
\end{equation} However, exact equality holds only if the rod and string co-vectors, together with those from the rigid body modes, form a basis of $\mathcal{L}_n$. In such a case, <Ref> implies that the $\rho_r$ and $a_{i,j}$ are all zero, which contradicts the second criterion (b). Thus, if the constraints form a basis, there necessarily exists a $\delta \mathbf{p}$ which satisfies all the equalities and inequalities, leading to an unstable structure. However, alternatively, if \begin{equation} 6+n+\sigma = 6n+1 \ \Rightarrow \ \sigma = 5(n-1) \end{equation} then the stability condition (a) implies that exactly one co-vector is a linear combination of the others, and this combination is unique, with exactly one set of coefficients. Thus, for some pair $\{I,J\}$ between two points $\mathbf{p}_I$ and $\mathbf{p}_J$, there exists a non-zero relation \begin{equation} [C_{I,J}] = \sum_{\text{rod or string} \ \{i,j\} \neq \{I,J\}} {b_{i,j} [C_{i,j}]} \end{equation} Such a set of coefficients $b_{i,j}$ can easily be found using Gauss elimination. Thus, if $5(n-1)$ strings are considered, the test simplifies to checking that “all $b_{i,j}$ for strings $\{i,j\}$ are negative”. For such a structure, the coefficients $\rho_r$ and $a_{i,j}$ are also directly related to the compressive and tensile forces in the members and are given by \begin{equation} \rho_r = \frac{c_r}{\| \mathbf{v}_{2r,2r+1} \|} \ \text{ and } \ a_{i,j} = \frac{t_{ij}}{\| \mathbf{v}_{i,j} \|} \end{equation} Thus, since one of the rows is a linear combination of the others, it is possible to obtain a common scalar multiplier, say $\gamma$, for all the coefficients. If this holds and all the constraints are in the set, whether or not the rank is maximal, then by uniqueness these coefficients must correspond to the tensions and compressions, up to the scalar $\gamma$. 
Without loss of generality, this scalar multiplier can be taken to be \begin{equation} \gamma = \frac{1}{\sqrt{\rho_0^2 + \rho_1^2 + \cdots + \rho_{n-1}^2 + \sum_{\{i,j\} \in \mathcal{S}, i<j} {\left( a_{ij} \right)^2 }}} \end{equation} and then the compressions and tensions are given by \begin{equation} c_r = \gamma \rho_r \| \mathbf{v}_{2r,2r+1} \| \ \text{ and } \ t_{ij} = \gamma a_{i,j} \| \mathbf{v}_{i,j} \| \end{equation} If more than $5(n-1)$ strings are used, then setting the tensions is more complicated. Thus, adjusting the stiffness of an over-determined tensegrity can require a lot of careful tension adjustment and maintenance. One can therefore conclude that for the stability of an $n$-rod tensegrity, at least $5(n-1)$ strings are required. However, as seen in the earlier sections, most of the designed tensegrities satisfy the condition $6+n+\sigma \leq 6n$ and thus lead to unstable configurations. § FORM-FINDING STRATEGY Consider the regular dodecahedron with 10 rods shown in Fig. <ref>. A regular dodecahedron tensegrity with 10 rods. The shape has several options for strings that include 30 face edges, 60 face diagonals, and another 90 connections (not shown in the figure) that cross through the inside of the shape. As discussed in the earlier sections, $5\left(n-1\right) = 45$ strings are required to obtain linearization stability in the model. If one were to allow only the face edges for strings, there are already 53 trillion ways; if face edges and diagonals were allowed, there are $10^{26}$ possible options, and allowing all the inner strings as well gives $6\times 10^{42}$ options. This combinatorial nightmare means that a simple one-by-one test is not feasible, and a better form-finding strategy is necessary. 
If the two ends of each rod $r$ are indexed as $2r$ and $2r+1$, then the list of $2n$ end locations $p_i$ is given by $p_{2r}=\left(x_{2r},y_{2r},z_{2r}\right)$ and $p_{2r+1}=\left(x_{2r+1},y_{2r+1},z_{2r+1}\right) \ \forall r = 0, 1, \cdots, n-1$. As earlier, the configuration of the tensegrity is $P = \left(p_0,p_1,\cdots,p_{2n-1}\right)$. Additionally, the tangent vectors $\delta \mathbf{p} = \left( \delta x_{0}, \delta y_{0}, \delta z_{0}, \cdots, \delta x_{2n-1}, \delta y_{2n-1}, \delta z_{2n-1} \right)$ representing the motion at $P$ are all defined in the $6n$-dimensional space $\mathcal{V}$. Linearizing, each rod $r=0,1,\cdots,n-1$ gives the co-vector \begin{equation} \mathbf{e}_{r} = \left[ 0 \ \cdots \ 0 \ x_{2r}-x_{2r+1} \ y_{2r}-y_{2r+1} \ z_{2r}-z_{2r+1} \ x_{2r+1}-x_{2r} \ y_{2r+1}-y_{2r} \ z_{2r+1}-z_{2r} \ 0 \ \cdots \ 0 \right] \label{eq:eqrowvec} \end{equation} to be used in the equality constraint $[C_{2r,2r+1}] \ \delta \mathbf{p} =0$. The vectors $\{\mathbf{e}_r\}$ from <Ref> are automatically independent and mutually orthogonal. Additionally, they are also orthogonal and independent with respect to the rigid body motion co-vectors unless all the rod ends are co-linear. The complete set comprising the $n+6$ co-vectors from rods and rigid body motions is denoted by $\mathcal{E}$. There are a total of $2n\left(n-1\right)$ possible unordered pairs \begin{equation} \left\{i,j\right\} \neq \left\{ 2r,2r+1 \right\} \ \text{, any } r \end{equation} for string connections. For the above dodecahedron in <Ref>, this implies 180 possibilities, as discussed earlier. One might aim to select string locations from all these pairs or from a subset $\mathcal{T}$ chosen by some ad hoc criterion that limits the length or the position. Without loss of generality, the size of this subset can be stated to be $L+1$. 
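The rod co-vectors $\mathbf{e}_r$ and their orthogonality properties can be checked directly (numpy assumed; the positions are random sample data):

```python
import numpy as np

def rod_covectors(P, n):
    """Equality-constraint covectors e_r for rods joining ends 2r, 2r+1,
    following the row-vector pattern in the text."""
    rows = []
    for r in range(n):
        v = P[2 * r] - P[2 * r + 1]
        row = np.zeros(6 * n)
        row[6 * r:6 * r + 3] = v          # entries for end 2r
        row[6 * r + 3:6 * r + 6] = -v     # entries for end 2r + 1
        rows.append(row)
    return np.array(rows)

rng = np.random.default_rng(3)
n = 3
P = rng.standard_normal((2 * n, 3))
E = rod_covectors(P, n)

# The rod covectors have disjoint supports, hence are mutually
# orthogonal, and each is orthogonal to the three translation vectors.
G = E @ E.T
off_diag = np.abs(G - np.diag(np.diag(G))).max()
t = np.zeros((3, 6 * n))
for axis in range(3):
    t[axis, axis::3] = 1.0
trans_dot = np.abs(E @ t.T).max()
```

Both quantities vanish exactly: each rod touches its own six coordinates only, and the paired entries $v$ and $-v$ cancel against any uniform translation.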
If each of the pairs is listed as $\{ i_{\ell}, j_{\ell} \} \ \forall \ \ell = 0, 1, \cdots, L$, then a selection of strings corresponds to a set of coefficients $\left( \ell_0, \ell_1, \cdots, \ell_L \right)$ associated with the vectors $\mathbf{v}_{\ell}$ through the combination $\ell_0 \mathbf{v}_{0} + \cdots + \ell_{L} \mathbf{v}_{L}$. <Ref> states that \begin{equation} \begin{split} \mathbf{0} &= \rho_0 [C_{0,1}] + \cdots + \rho_r [C_{2r,2r+1}] + \cdots + \rho_{n-1} [C_{2n-2,2n-1}] + \sum_{\{i,j\} \in \mathcal{S}, i<j} {a_{i,j}} [C_{i,j}] \end{split} \end{equation} and since it is known that ${a_{i,j}} > 0$, it can be re-written as \begin{equation} \begin{split} &\mathbf{0} = \rho_0 [C_{0,1}] + \cdots + \rho_r [C_{2r,2r+1}] + \cdots + \rho_{n-1} [C_{2n-2,2n-1}] + \sum_{k = 0}^{L} {\lambda_k} [C_{k}] \\ \Rightarrow & \mathbf{X} = -\rho_0 [C_{0,1}] - \cdots - \rho_r [C_{2r,2r+1}] - \cdots - \rho_{n-1} [C_{2n-2,2n-1}] = \sum_{k = 0}^{L} {\lambda_k} [C_{k}] \end{split} \end{equation} where all $\lambda_k \geq 0$ and, without loss of generality, $\lambda_0 + \lambda_1 + \cdots + \lambda_L = 1$. Here, a mapping is used to relate the unordered pairs to the coefficients of the string co-vectors. The coefficients $\left( \lambda_0, \lambda_1, \cdots, \lambda_L\right)$, being non-negative and summing to one, form the $L$-simplex in the $(L+1)$-dimensional coefficient space. Since the interest is in determining $5(n-1)$ non-zero positive values, this amounts to intersecting the above-defined simplex $\mathcal{S}$ with a space $\mathcal{E}$ of dimension $5n-4$. The intersection of this space $\mathcal{E}$ with the simplex $\mathcal{S}$ at a face forces some $\lambda_{\ell}$ to be zero, and one can then leave out the corresponding $\mathbf{v}_{\ell}$, i.e. the string inequality constraint. For example, for the dodecahedron from <Ref>, if all pairs are considered, then $L+1=180$. Visualizations of such higher-dimensional spaces are hard, and it is thus prudent to develop the concept using a lower-dimensional simplex. <Ref> shows the simplex for $L=2$. 
The affine plane $H$ contains all the $\left(\lambda_0, \lambda_1, \lambda_2 \right)$ with $\sum_{k} {\lambda_k} = 1$. The triangle $S$ shows where they are all positive. If $L=3$, the set $S$ becomes a tetrahedron in the 3D hyperplane $H$ of $\left(\lambda_0, \lambda_1, \lambda_2, \lambda_3 \right)$-space, and so on. <Ref> shows the intersection of a space with the simplex. Here, the space $\mathcal{E}$ is a line $E$ intersecting a simplex $\mathcal{S}$ in 2D and 3D, forcing one or more of the $\lambda$'s to be zero. Intersection of a line with a 2- (left) and 3- (right) simplex. The search is posed as a nonlinear iterative procedure that is pictorially depicted in <Ref> Determination of the point $\rho$ which is on the face of the simplex $\mathcal{S}$ and also on the line segment $\overline{q\varepsilon}$. Here, the point $\varepsilon$ is the nearest point to the centroid of the simplex. and described below * Define the centroid of the simplex. The centroid $q \in \mathcal{S}$ can be defined as \begin{equation} q = \left(m,\cdots,m\right), \text{ where } m = \frac{1}{L+1} \end{equation} * Determine a point, say $\varepsilon = \left(\varepsilon_0, \cdots, \varepsilon_{\ell}, \cdots, \varepsilon_{L} \right)$, such that there exist $\rho_r$ satisfying \begin{equation} \mathbf{X} = -\rho_0 [C_{0,1}] - \cdots - \rho_r [C_{2r,2r+1}] - \cdots - \rho_{n-1} [C_{2n-2,2n-1}] = \sum_{k = 0}^{L} {\varepsilon_k} [C_{k}] \end{equation} and $\varepsilon \in \mathcal{H}$ is the point nearest to $q \in \mathcal{S}$. * If the chosen point $\varepsilon$ is a point of the simplex $\mathcal{S}$, then the search is complete * If the chosen point $\varepsilon$ is not a point of the simplex $\mathcal{S}$, then it is necessary to determine the face $F$ through which the line $Q$ from $q$ to $\varepsilon$ leaves $\mathcal{S}$. If $\mathbf{r}$ is the vector from $q$ to $\varepsilon$, then $Q = \left\{ q + t\mathbf{r} \ | \ t \in \mathbb{R} \right\}$. 
Here, each hyperplane $\lambda_{\ell} = 0$ meets $Q$ at \begin{equation} \rho = q + t_{\ell} \mathbf{r}, \quad t_{\ell} = \frac{m}{m-\varepsilon_{\ell}} \end{equation} and the segment $\overline{q\varepsilon}$ leaves $\mathcal{S}$ at the smallest positive $t_{\ell}$, which corresponds to the most negative $\varepsilon_{\ell}$. Denoting this index by $\underline{\ell}$, one would consider the space $\Sigma$ as the convex set of all points where $\lambda_{\ell \neq \underline{\ell}} \geq 0$. Either $\varepsilon$ is in $\Sigma$, with $\lambda_{\underline{\ell}}$ as the only negative $\lambda_{\ell}$, or it is not. * If $\varepsilon$ is in $\Sigma$, then the string $\mathbf{v}_{\underline{\ell}}$ can be dropped, and the remaining set of strings leaves the search as viable as it was. * If $\varepsilon$ is not in $\Sigma$, then $\mathcal{E}$ does not meet $\mathcal{S}$ at a face. Thus, one of the $\varepsilon_{\ell} < 0$ is safe to omit. * The procedure terminates when at least one branch reaches a solution with all $\varepsilon_{\ell} > 0$. If every branch ends with a solution that has some $\varepsilon_{\ell} < 0$, then there is no solution to the original problem. The above provides a recursive strategy that makes progress at every step, branching if required to omit an $\ell$ with $\varepsilon_{\ell} < 0$. This procedure is much faster than a brute force approach that tries all combinations of $5(n-1)$ strings among the candidate set $\mathcal{T}$. § RESULTS AND DISCUSSIONS The famous three-rod tensegrity, often used in studying the mechanics of tensegrity structures, is considered here to demonstrate the applicability of the developed methods for finding stable structures. In order to differentiate between the stable and unstable structure, the nomenclature below is employed for convenience. 
* Three-rod with nine strings (which has a swinging mode): 9-segrity * Three-rod with ten strings (fully stable): 10-segrity The rod length is taken to be 4 units for both models to provide a one-to-one comparison. The geometry of the 9-segrity is as given below. Consider a scale that puts the bottom points on the unit circle at height $z=0$. The bottom three points are \begin{equation} \begin{split} p_0 &= \left(1,0,0\right) \\ p_1 &= \left(\cos{\left(\frac{2\pi}{3}\right)},\sin{\left(\frac{2\pi}{3}\right)},0\right) = \left(-1/2,\sqrt{3}/2,0\right)\\ p_2 &= \left(\cos{\left(-\frac{2\pi}{3}\right)},\sin{\left(-\frac{2\pi}{3}\right)},0\right) = \left(-1/2,-\sqrt{3}/2,0\right) \end{split} \end{equation} The top three points are given by \begin{equation} \begin{split} q_0 &= \left(\cos{\left(\theta\right)},\sin{\left(\theta\right)},h\right) \\ q_1 &= \left(\cos{\left(\theta + \frac{2\pi}{3}\right)},\sin{\left(\theta + \frac{2\pi}{3}\right)},h\right)\\ q_2 &= \left(\cos{\left(\theta - \frac{2\pi}{3}\right)},\sin{\left(\theta - \frac{2\pi}{3}\right)},h\right) \end{split} \end{equation} Here, two configurations are considered, with $\theta = -100^{\circ}$ and $210^{\circ}$. The height of the tensegrity is adjusted to ensure a rod length of 4 units using the relation \begin{equation} 2\left(1-\cos{\theta} \right) + h^2 = r^2 \end{equation} Depiction of the 9-segrity. The coordinates $h$ and $\theta$ on a subset of the shapes possible for a 9-segrity, with ends on the radius-1 cylinder. The lower ends are all $\sqrt{3}$ apart; so are the top ones, at height $h$. The rod and cable lengths follow from how far the top twists by $\theta$ relative to the base. 
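The node coordinates and the rod-length relation $2(1-\cos\theta) + h^2 = r^2$ above are easy to evaluate numerically. A small sketch (function names are illustrative), using $r = 4$ and $\theta = 210^{\circ}$:

```python
import numpy as np

def nine_segrity_nodes(theta_deg, r=4.0):
    """Bottom/top node coordinates of the 9-segrity on the unit cylinder.

    The height h follows from the rod-length relation
    2(1 - cos(theta)) + h^2 = r^2.
    """
    th = np.radians(theta_deg)
    h = np.sqrt(r**2 - 2.0 * (1.0 - np.cos(th)))
    p = [np.array([np.cos(a), np.sin(a), 0.0])
         for a in (0.0, 2 * np.pi / 3, -2 * np.pi / 3)]
    q = [np.array([np.cos(th + a), np.sin(th + a), h])
         for a in (0.0, 2 * np.pi / 3, -2 * np.pi / 3)]
    return p, q

p, q = nine_segrity_nodes(210.0)
# Each rod (p_i, q_i) should have length exactly r = 4,
# since the angular offset between p_i and q_i is theta for every i.
for pi, qi in zip(p, q):
    print(np.linalg.norm(qi - pi))  # → 4.0 for each rod
```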
As shown in <Ref>, the rods are positioned such that the pairs of rod ends are given by $\left(p_0,q_0\right)$, $\left(p_1,q_1\right)$ and $\left(p_2,q_2\right)$, and the nine string pairs, without repetition, are given by $\left(p_0, p_1 \right)$, $\left(p_0, p_2 \right)$, $\left(p_0, q_1\right)$, $\left(p_1, p_2 \right)$, $\left( p_1, q_2\right)$, $\left(p_2, q_0\right)$, $\left( q_0, q_1\right)$, $\left(q_0,q_2 \right)$, $\left(q_1,q_2 \right)$. §.§ 9-segrity: Theoretical observations The 9-segrity has an inherent symmetry. Considering the squared distance of the point $q_0$ from $p_0$, with a fixed rod length of $r$ units for all rods, \begin{equation} \left( \cos{\theta} - 1 \right)^2 + \left( \sin{\theta} \right)^2 + h^2 = 2-2\cos{\theta} + h^2 \Rightarrow 2\left(1-\cos{\theta} \right) + h^2 = r^2 \end{equation} Similarly, the squared distance of the point $q_0$ from $p_2$ (joined to $q_0$ by a string) is \begin{equation} \left( \cos{\theta} + \frac{1}{2} \right)^{2} + \left( \sin{\theta} + \frac{\sqrt{3}}{2} \right)^{2} + h^2 = 2 - 2 \cos{\left(\theta + \frac{2\pi}{3}\right)} + h^2 \end{equation} and thus joining them by a cable of length $c$ gives a constraint \begin{equation} 2\left[1-\cos{\left(\theta+\frac{2\pi}{3}\right)} \right] + h^2 \leq c^2 \end{equation} It is easier to see how these two constraints interact if one unrolls the cylinder onto a flat diagram, as shown in <Ref>. The variation of $h$ vs. $\theta$ for various fixed $r$ (red) and $c$ (green). Along each particular red curve, the square of the distance from $p_0$ to $q_0$ is constant. Along each green curve, the square of the distance from $p_2$ to $q_0$ is constant. For any particular rod length $r$, the rod forces the upper end to be on the corresponding red curve, but able to move along it as $\theta$ changes. Similarly, upon choosing a particular cable length $c$, the cable forces the upper end to be on or below the corresponding green curve. 
Wherever a red and a green curve cross at an angle, $\theta$ can be varied to move along the red curve while going downward from the green one. Thus, the point is not fixed and can move without violating the constraints. At a point like $P$, the two curves are tangent and the shape is free to move along the common tangent line. The linear approximations are given by \begin{equation} \begin{split} \left[ \frac{\partial\left(\text{red}\right)}{\partial \theta} \ \frac{\partial\left(\text{red}\right)}{\partial h} \right] &= \left[ 2\sin \theta \ \ 2h \right] \\ \left[ \frac{\partial\left(\text{green}\right)}{\partial \theta} \ \frac{\partial\left(\text{green}\right)}{\partial h} \right] &= \left[ 2\sin \left(\theta + \frac{2\pi}{3} \right) \ \ 2h \right] \end{split} \end{equation} The above functions are identical at points like $P$, where $\theta = 210^{\circ}$, and the above reduces to \begin{equation} \begin{split} -\delta \theta + 2h \delta h &= 0 \\ -\delta \theta + 2h \delta h &\leq 0 \\ \end{split} \end{equation} where the second automatically holds true if the first does. Looking beyond the linear approximation at $P$, the green curve is tangent to the red curve from below. To second order, therefore, following the red curve strictly increases the cable length, moving to higher green curves. This is forbidden by the constraint, so the shape is fixed. However, there are also points like $Q$, where a green curve touches the red from above. This gives another situation where a linear analysis, as above, yields a common tangent and thus requires higher order terms. In this case, moving along the red curve quadratically lowers the green value from a maximum at $Q$, and the tensegrity can fall apart. Even if built exactly, the forces will balance each other but lead to an unstable equilibrium. Further, as reiterated throughout the literature, firm tensegrities are hard to make, except with elastic bands. 
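The tangency at $P$ and the instability at $Q$ can be verified numerically. Substituting $h^2 = r^2 - 2(1-\cos\theta)$ into the cable expression $2[1-\cos(\theta + 2\pi/3)] + h^2$ gives the green value along a fixed red curve, $g(\theta) = r^2 + 2\cos\theta - 2\cos(\theta + 2\pi/3)$. A sketch checking that $g$ has a quadratic minimum at $\theta = 210^{\circ}$ (the cable blocks motion, as at $P$) and a quadratic maximum at $\theta = 30^{\circ}$ (the structure can fall apart, as at $Q$):

```python
import numpy as np

def green_along_red(theta_deg, r=4.0):
    """Cable-length-squared (green) value along the fixed rod-length (red) curve.

    Obtained by substituting h^2 = r^2 - 2(1 - cos(theta)) into
    2(1 - cos(theta + 2*pi/3)) + h^2.
    """
    th = np.radians(theta_deg)
    return r**2 + 2.0 * np.cos(th) - 2.0 * np.cos(th + 2.0 * np.pi / 3.0)

# Near theta = 210 deg the green value rises on both sides (minimum: P is held);
# near theta = 30 deg it falls on both sides (maximum: Q can fall apart).
for t0 in (210.0, 30.0):
    g0 = green_along_red(t0)
    print(t0, green_along_red(t0 + 1.0) - g0, green_along_red(t0 - 1.0) - g0)
```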
If $r$ and $c$ do not lead to curves that meet exactly and tangentially at $P$, they will likely either cross each other (twice) or fail to meet at all. If they do not meet, then the measured cable is too short, and the tensegrity cannot be made with it. <Ref> shows the area near the point $P$. Perturbation from the red and green curves tangent at $P$. A similar shape occurs if they both move up, or both down, by different amounts. Here, the crossing pair of curves is shown distinctly. A small perturbation from passing through $P$ produces a much larger grey area between the curves. This amplification comes precisely from the ideal tangency. As the second-order contact is perturbed, the widest separation of the two curves, near the contact point, grows like the errors in $r$ and $c$. If the perturbation produces crossings, then the distance between the crossings grows like the square root of this separation. Note that if this separation is a small number $\delta$, then its square root is bigger than $\delta$. Thus, the distance grows drastically with fabrication error. All of this allows a large amount of “swinging”, compared to the fabrication error, and very careful fabrication is necessary to avoid it. As proposed in the literature, one may intend to use such a tensegrity as a deployable structure, particularly of interest for space applications. If the number of length constraints is too small to meet the full-rank convexity condition, the typical situation automatically allows movement away from the target shape. The lengths must be very precisely and non-linearly coordinated to keep them in the relationship needed for the degenerate condition of tangency and a swinging-free shape. At every stage of adjusting them, they must not deviate into the typical kind of crossing. Further, any adjustment mechanism has moving parts, and is thus subject to wear. 
Preserving tangency through the process of mutually non-linear length adjustments, and maintaining this precision through a lifetime of service cycles, is a narrow and expensive target. Thus, such structures are neither optimal nor suitable as deployable structures. §.§ 9-segrity with an additional cable is still a 9-segrity A stable and firm tensegrity is possible with three rods and ten cables. However, as shown later in <Ref>, this cannot be achieved by just adding one cable to the linearly degenerate system of the three-rod, nine-string tensegrity. This section outlines the reasoning behind this conclusion. From a simple observation of the top view of the 9-segrity shown in <Ref>, it is evident that there are no cables that can prevent it from swinging inwards. A view of the 9-segrity from directly above. There are clearly no cables that can hold the rods in these positions against an inward swing. The only equilibrium angle allowed is $\theta = 210^{\circ}$ (right). Hypothetically, if one were to add one additional string to the 9-segrity, the stability at point $P$ is as shown in <Ref>. Point $P$ upon addition of one additional cable $d_2$ from $t_2$. Here, one additional cable $d_2$ is added from $t_2$. This stops the point $P$ from moving up the line of the degenerate vector. While it cannot cross the tangent to the $d_2$ circle centered at $t_2$, it can still move along the grey area outlined earlier. The system fails the full-rank convexity test. A tenth cable, unless it can be tightened enough to change the lengths of the existing nine cables, would have zero tension and be almost slack. This is further illustrated by rolling out the constraint equations, as earlier, in <Ref>. Curves of equal distance (through the unit cylinder) from $q_0$ to $p_0$ (red), $p_2$ (green) and $p_1$ (blue). It is easy to check that the green and red curves are tangent at $\theta = 30^{\circ}, 210^{\circ}$, the blue and red curves at $\theta = 150^{\circ}, 330^{\circ}$. 
Using three equal cables to join more ends ($p_2$ as well as $p_1$ to $q_0$, plus $p_0$ to $q_1$ and $p_1$ to $q_2$) will over-constrain the system but still does not get the structure out of the certainly-soft, hard-to-build zone. From <Ref>, if neither the green function nor the blue can increase, and one can move only along a particular red curve, any motions that these constraints do permit definitely lead to an instability. If none of these motions are permitted, that does not prove stability either. In <Ref>, one can see that not increasing the blue values also prevents $P$ from moving to the right along its red equality curve, but not to the left! Holding $P$ against this motion depends on higher order terms. Again, anywhere between $P$ and $R$, moving to the left along a red curve actually decreases both the green and blue values, and thus these constraints cannot hold a point still either. In the wide band on the left, between $S$ and $Q$, moving right along a red curve decreases both green and blue, and is allowed by the cables. Between the lines through $P$ and $Q$, however, not increasing the green values prevents leftward motion along any red curve, while not increasing the blue values blocks it on the right. A target range is the strip $150^{\circ} < \theta < 210^{\circ}$ between $P$ and $Q$. Suppose there is a small error in choosing $(\theta,h)$ and instead one chooses $(\theta',h')$; this small error is sufficient to cause a soft motion or collapse. In other words, if $P'$ is obtained instead of $P$ such that $\| P' - P \| < \varepsilon$, then the convexity condition holds true for $P'$ too and thus allows for swinging between $P$ and $P'$. §.§ 10-segrity: A stabilized model Using the proposed form-finding technique, an alternative three-rod model consisting of ten strings is presented here, as shown in <Ref>, that is stable and free of swinging modes. Firm three-rod tensegrity with ten cables. 
Looking down (left) and from front (right). The configuration of the 10-segrity is given by the unordered sets for one set of rod ends \begin{equation} \begin{split} p_0 &= (0.5,0,-2) \\ p_1 &= (0,-2,0.5) \\ p_2 &= (2,0.5,0) \end{split} \end{equation} and the other set of rod ends given by \begin{equation} \begin{split} q_0 &= (0.5,0,2)\\ q_1 &= (0,2,0.5) \\ q_2 &= (-2,0.5,0) \end{split} \end{equation} The rods are positioned such that the pairs of rod ends are given by $(p_0,q_0)$, $(p_1,q_1)$ and $(p_2,q_2)$, and the set of string pair ends is given by $(p_0,p_1)$, $(p_0,p_2)$, $(p_0,q_1)$, $(p_1,q_0)$, $(p_1,p_2)$, $(p_1,q_2)$, $(q_0,p_2)$, $(q_0,q_2)$, $(q_1,p_2)$ and $(q_1,q_2)$. In order to demonstrate the stability of the proposed 10-segrity, a modal analysis is considered and discussed here. Comparison of natural frequencies for the 9- and 10-segrity for various modes (7 - 12). The first six modes represent rigid body motions. For the 9-segrity, $\theta = 210^{\circ}$. §.§.§ Modal analysis (no constraints) In order to compare the stability of the original 9-segrity with the proposed 10-segrity, a modal analysis is performed to extract the first 12 eigenmodes. The 9-segrity considered here has $\theta = 210^{\circ}$, since this is the most stable configuration possible. To keep the comparison fair, both models use the same rod length, i.e. 4 units. The rods are considered to be made of steel and the strings of Kevlar. Both are assumed to have a circular cross-section. The materials are considered to be linearly elastic in nature. The steel rods are taken to have a Young's modulus of $210$ GPa, Poisson ratio of 0.3, density of 8000 kg/m$^3$ and a cross-sectional radius of 0.01 m. The Kevlar strings are taken to have a Poisson ratio of 0.36, density of 1440 kg/m$^3$ and cross-sectional radius of 0.001 m. 
Four values of Young's modulus for the Kevlar strings are considered to check the influence of material properties: 112 GPa, 10 GPa, 1 GPa and 10 MPa. However, the strings are considered to have no stiffness under compression. The rods and strings are modeled using 3D, second-order Timoshenko beam elements. In this work, we consider elements from Abaqus/Standard <cit.>. The rods are discretized with 20 elements and the strings with 14-15 elements. In this first case, no boundary conditions are applied to either model. As expected, the natural frequencies for the first six modes are zero, indicating rigid body modes. Thus, only modes 7 - 12 are considered for analysis. The variation of the natural frequency of vibration for modes 7 - 12 as a function of the Young's modulus of Kevlar is shown in <Ref>. As discussed in earlier literature, it is clearly evident here that the lowest mode (other than the rigid body modes) is a soft / swinging mode of deformation with a natural frequency of 0.3655 rad/s, i.e. nearly zero. The original shape and the 7th / 8th / 9th mode shapes for both the 9-segrity and the 10-segrity are visualized in <Ref> and <Ref>. Here, the visualization is considered only for the case with the Young's modulus of Kevlar being 112 GPa. Mode 7 of the 9-segrity shows the famous swinging mode, originally discussed by Connelly and co-workers. However, the other higher modes relate directly to individual motion of the string elements rather than the overall structure itself. In contrast, the 10-segrity proposed in this work shows no sign of a soft mode similar to that in the 9-segrity. It is further important to note that once the strings are slack, they are under a compressive load and thus do not have any stiffness. The whole concept of stability of a tensegrity is based on the idea that the strings are always in tension and the rods in compression. 
Once the strings are in compression, they have zero stiffness, and they can deform and maintain a deformed configuration without any external work. This also means that they are in continuous equilibrium over a finite range of motion. Physically, once a string is slack, it can occupy several positions; the positions for the strings shown in the higher modes are only one representation of the several finite shapes that a string can attain. However, the primary noteworthy point here is that at the first eigenmode of deformation, the slackness of the strings can result in structural instability in the 9-string rather than the 10-string configuration. There are no strings available to prevent the swinging mode of deformation. In contrast, the first eigenmode of the 10-string configuration requires much higher energy and can be prevented by the use of stiff strings and no prestressing. Original shape; Mode 7 (0.3655 rad/s); Mode 8 (7.6274 rad/s); Mode 9 (7.7131 rad/s). Visualization of different modes of deformation of the 9-segrity. No boundary condition is applied. Original shape; Mode 7 (7.1414 rad/s); Mode 8 (7.5383 rad/s); Mode 9 (8.1529 rad/s). Visualization of different modes of deformation of the 10-segrity. No boundary condition is applied. §.§.§ Modal analysis (with constraints) In order to visualize the swinging mode better, one end of each of the rods is fixed. This is equivalent to fixing one end of the tensegrity on the ground or to another structure. The lowest modes for the 9-segrity and the 10-segrity are visualized in <Ref> and <Ref>, respectively. The videos of these visualizations are also enclosed along with this paper. The 9-segrity shows a twist in its top surface with the rods moving apart. 
As discussed in the earlier section, no tenth string can be added to this structure that can prevent this torsional motion. In contrast, the proposed 10-segrity demonstrates a more stable behavior. As discussed in the previous sub-section, it is again important to note that once a string is slack, it can occupy several positions; the positions for the strings shown in the higher modes are only one representation of the several finite shapes that a string can attain. However, the primary noteworthy point again is that such weak configurations appear readily in the lower-energy modes of the existing 9-string structure rather than the 10-string structure. Visualization of the lowest mode (also the swinging mode, at 0.3655 rad/s) of deformation of the 9-segrity. One end of each rod is fixed. Visualization of the lowest mode of deformation of the 10-segrity at 7.1414 rad/s. One end of each rod is fixed. § CONCLUSIONS AND FUTURE WORK This work presents a mathematical framework for the design of tensegrities. The famous three-rod tensegrity, used here, shows a vividly clear torsional mode of vibration, often also referred to as a swinging soft mode. The proposed form-finding strategy facilitates finding a three-rod, ten-string structure that is stable to second order and suitable for engineering. The proposed form-finding methodology uses mechanics as a foundation to ensure that the resulting designs are stable and free of swinging modes. The work provides a simple-to-use relation between the number of rods, beads and strings to check for the stability of the resulting structure. This has been demonstrated through the famous three-rod tensegrity. Further, this work also provides a comprehensive review showing that most of the designs produced to date do not satisfy this relation, and thus significant room for improvement exists. 
Such a mathematical framework has the potential to be further extended to deliberately introduce swinging modes and leverage the instabilities for engineering purposes as well. One area for future work is the consideration of contact behavior at the joints of rods and strings. This work uses beam models and considers a common node at the ends. Joining the strings through a common node, instead of a joint constraint, is expected to allow the bending modes of deformation to be transferred to the strings as well. This has been addressed in this work through the use of a no-compression constitutive model for the strings: any bending deformation will necessarily introduce compression; however, such compressive loads cannot be sustained by the strings, are thus non-physical, and will not be permitted. However, a string element as introduced in the work of <cit.> would be more appropriate, alongside the use of appropriate joints, and will be considered in future work. Further, contacts at joints can introduce additional non-linearities that warrant additional future investigation. Some of the recent works <cit.> in this area demonstrate novel modeling architectures that can facilitate modeling of such lattice structures by considering the role of joints. One of the areas of application for such stable tensegrity towers is as deployable space structures. Deployable space structures can help save space and weight, which form the primary constraints for space engineering. While mechanisms have been explored for the deployment of these structures, extremely fine tuning is required to actually maintain them if unstable configurations were used. The proposed mathematical framework can be used to design and develop stable tensegrity designs that can be used in deployable space structures. In recent years, lattice-based metamaterials have been proposed as viable alternatives for impact absorbers or to selectively shield certain frequencies. 
With the advent of 3D printing, tensegrity structures will find many potential applications in these areas. Thus, the above proposed mathematical framework can provide an easy way to design them from the ground up. § ACKNOWLEDGEMENTS I (Ajay) would like to thank Dr. Tim Poston for introducing me to the world of tensegrities; his expertise, advice and discussions in the early course of this study have been extremely valuable towards the completion of this work. Consider an $m$-dimensional space $\mathcal{V}$. For a bead on cables, $m = 2 \text{ or } 3$ and $\mathcal{V}$ represents the ordinary physical space. However, in tensegrity applications, typically $m = 3N-6$. The vectors $\mathbf{v} \in \mathcal{V}$ represent small changes, such as soft swinging modes. The $c$ linear functions, which approximate the changes in length from their values at $P$, are given by \begin{equation} \mathbf{f}^k : \mathcal{V} \rightarrow \mathbb{R}, \ \forall k=0, \cdots, c-1 \end{equation} and the condition $\mathbf{f}^k = 0$ is automatically satisfied for the trivial solution $\mathbf{v} = 0$.
# Phase-Field Modeling of Wetting and Balling Dynamics in Powder Bed Fusion Process Lu Li Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA Ji-Qin Li Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA Tai-Hsi Fan Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA ###### Abstract In a powder bed fusion additive manufacturing (AM) process, the balling effect has a significant impact on the surface quality of the printed parts. Surface wetting helps the bonding between powder and substrate and the inter-particle fusion, whereas the balling effect forms large spheroidal beads around the laser beam and causes voids, discontinuities, and poor surface roughness during the printing process. To better understand the transient dynamics, a theoretical model with a simplified 2D configuration is developed to investigate the underlying fluid flow and heat transfer, phase transition, and interfacial instability along with the laser heating. We demonstrate that the degree of wetting and fast solidification counter-balance the balling effect, and the Rayleigh-Plateau flow instability plays an important role for cases with relatively low substrate wettability and high scanning rate. Keywords: Wetting, balling, powder bed fusion, Rayleigh-Plateau instability, phase-field modeling, additive manufacturing ## Introduction Surface quality is a great concern in making high-end 3D printing products. In a typical metal printing process, a thin layer of powders is heated locally by a scanning laser or electron beam to selectively melt and fuse the powders or form a small region of the melt pool, followed by rapid solidification of the molten material. During each scan, a thin layer of powders is adhered to the substrate or to a previously solidified powder layer. 
A geometrically, structurally, or functionally complicated 3D configuration can be built by repeating this process, known as a layer-by-layer or solid freeform fabrication technique, which can be applied to a broad range of materials including alloys, ceramics, polymers, and composites. The fabrication technology has shown great promise in building light-weight structures for aerospace, automotive, and biomedical applications. For recent advances in additive manufacturing (AM) technology, including materials, structures, processes, and the relevant multiscale physics, see the comprehensive reviews gu2012 ; yap2015 ; herzog2016 ; markl2016 ; malekipour2017 ; debroy2018 . The 3D printing technique may provide microstructure, and thus mechanical properties and performance, equivalent or superior to conventionally cast and wrought materials herzog2016 . However, the lack of consistent surface quality due to microstructural defects, including pores, discontinuities, incomplete melting, process-induced microcracks, delamination, and balling-induced poor surface roughness, hinders the advancement of the printing technology. Complicated interfacial phenomena in a metal AM process include multiple deforming interfaces, coalescence and change of morphology, conjugated heat and mass transfer, and multiscale phase transition dynamics. A critical concern regarding serious defects generated by the AM process is the balling phenomenon malekipour2017 ; debroy2018 , which is the formation of large spheroidal beads and ripples from aggravated melting and solidification of metallic powders. Balling often appears around the scanning laser beam, resulting in a nonuniform adherence to the substrate or previously fused powder layer, including discontinuities or unfilled spots, which results in poor surface quality and poor interlayer bonding, and affects the mechanical properties of the final parts. 
The balling phenomenon is primarily driven by three factors: surface wettability, the competition between spreading and solidification, and the Rayleigh-Plateau instability. In a few focused studies, Li et al. li2012 observed an increase in balling tendency with higher oxygen content in the gas environment, higher scanning speed and powder layer thickness, and lower laser power, for both nickel and stainless steel powders. The process windows for tungsten and aluminum powders were studied by Wang et al. wang2017 and Aversa et al. aversa2018 , respectively. The former investigated the morphology and stability of a single scan track, whereas the latter suggested that balling may occur at either insufficient or excessive laser exposure, which is likely due to incomplete melting and denuded powders around the melt pool, respectively. This is consistent with the observations for iron-based powders kruth2004 ; cherry2015 ; gunenthiram2017 . Surface oxidation reduces the wettability of the molten powders and may create a reversed (from negative to positive temperature coefficient) thermal Marangoni convection that further promotes balling niu1999 ; rombouts2006 . Using a high-purity inert environment can prevent or reduce the content of surface oxides das2003 . Agarwala et al. agarwala1995 also suggested the addition of a deoxidizer to improve wetting, or the application of a fluxing agent to enhance the fluidity of the molten metal during the fusion process. The competition between spreading and solidification plays an important role in transient fusion dynamics. Zhou et al. zhou2015a specifically compared the balling tendency and the resulting surface morphology under various laser parameters. 
They found that titanium and steel spread faster than they solidify and are considered easy-to-process metals, whereas copper and tungsten have faster solidification times, implying rapid solidification and arresting of the three-phase contact line, and thus a larger balling tendency. However, new results found by Qiu et al. qiu2020 on alumina ceramic powders indicated that severe balling could still happen even when spreading is much faster than solidification. For materials with good wetting and spreading abilities, the Rayleigh-Plateau instability tends to break up the scan track and cause balling rombouts2006 ; hunter2012 . Spattering under a very high laser power can also cause balling gu2009 . The morphology of defects and discontinuities caused by balling can be reconstructed by 3D imaging using synchrotron radiation micro-CT zhou2015b . Many experimental observations have indicated that balling is primarily enhanced by three factors: 1) poor wetting ability, 2) a higher tendency of solidification before spreading, and 3) onset of the Rayleigh-Plateau instability. These factors are further complicated by local Marangoni convection due to surface oxidation and a large temperature gradient. Although a remelting procedure or tuning of the process window (including laser power, spot size, exposure time, hatch space, scanning speed and strategy) may help to avoid balling, a fundamental understanding and quantitative analysis of the interfacial phenomena involved in balling are important for a better process design and control of the microstructure evolution during phase transition. Direct numerical simulation of the selective laser melting process, including fluid flow, phase transition, and heat transfer analysis, has been successful using the volume of fluid (VOF) method, showing that balling can be initiated by the Rayleigh-Plateau instability khairallah2014 ; tang2018 . 
To better understand and quantify the influence of wettability, the competing dynamics of solidification and spreading, and interfacial instability on the balling effect, here we develop a theoretical model using an idealized 2D configuration with a single layer of powders on top of a substrate. The theoretical framework is developed based on the thermodynamically consistent phase-field method, which is broadly used for investigating microstructure evolution in metallic systems, such as growth kinetics and the formation of dendritic microstructure, in materials science kobayashi1993 ; wheeler1993 ; warren1995 ; murray1995 ; karma1998 ; boettinger2002 . The phase-field method cahn1958 ; cahn1961 ; penrose1990 ; wang1993 ; anderson1998 ; sekerka2011 has been broadened and applied to the mesoscale analysis of additive and pharmaceutical manufacturing processes that involve multiphase fluids, heat and mass transfer, thermal elasticity, phase transition, and three-phase contact line dynamics li2018 ; li2019 ; fan2019 ; li2020a ; li2020b . ## Theoretical Analysis In a simplified 2D configuration (Fig. 1), we consider a single layer of equal-sized metal powders aligned on the substrate. The powders are heated from the top by a laser beam with an assumed Gaussian irradiation heat flux. Further assumptions are made to facilitate the theoretical analysis: i) evaporation kinetics and recoil pressure of the liquid metal are neglected at low to medium laser power, ii) the ambient argon gas is assumed ideal, iii) gravitational acceleration is neglected, iv) thermal elasticity is not considered, and v) the latent heat, heat capacity, and density of the metal powders are assumed constant, whereas the surface tension, dynamic viscosity, and thermal conductivity are temperature dependent. Starting from the definition of the entropy functional, a concise mathematical framework that underlies the phase-field approach is provided here. 
More details about the derivations can be found in our recent work on the modeling of a laser brazing process li2020b . Figure 1: Schematic of selective laser melting of a horizontal layer of equal-sized metal powders placed on top of the substrate and surrounded by argon gas. Phase field variables $\phi_{1}$, $\phi_{2}$, and $\phi_{3}$ indicate the volume fractions of argon gas, metal powders, and the substrate, respectively. $D$ represents the width of the computational domain, and the Gaussian beam is characterized by its irradiation intensity $\boldsymbol{H}$ and a characteristic spot radius $a$. ### Entropy functional and free energy density As the starting point of the thermodynamically consistent approach penrose1990 ; wang1993 ; sekerka2011 , the entropy functional that describes the phase transition of a multi-component system can be expressed as $\begin{split}\ \mathcal{S}&=\int_{\Omega}\bigg{[}s\left(e,\varphi,\phi_{1},\phi_{2},\phi_{3}\right)-\lambda\left(\sum_{i=1}^{3}\phi_{i}-1\right)\\\ &~{}~{}~{}-\frac{1}{2}\xi_{\varphi}^{2}|\boldsymbol{\nabla}\varphi|^{2}-\frac{1}{2}\sum_{i=1}^{3}\xi_{i}^{2}|\boldsymbol{\nabla}\phi_{i}|^{2}\bigg{]}dV~{},\end{split}$ (1) where the overall material volume $\Omega$ indicates the computational domain shown in Fig. 1, including the substrate, metal powders, and the surrounding argon gas, $s$ is the local entropy density (per unit volume), $e$ is the internal energy, $\varphi\in[-1,1]$ is a non-conserved phase-field variable that describes the solid-liquid phase transition of the powders (here $-1$ is for the liquid phase and $+1$ for the solid phase), $\phi_{1}$ to $\phi_{3}$ $\in[0,1]$ are material volume fractions for argon gas, metal powders, and the substrate, respectively, with the constraint $\sum_{i=1}^{3}\phi_{i}=1$, and $\lambda$ is the Lagrange multiplier. 
The gradient terms, $|\boldsymbol{\nabla}\varphi|^{2}$ and $|\boldsymbol{\nabla}\phi_{i}|^{2}$, along with their coefficients $\xi_{\varphi}$ and $\xi_{1\sim 3}$, are associated with the interfacial energy, apparent thickness, and mobility. The transient evolution of the continuous phase field $\varphi$ and of the volume fractions $\phi_{1}$ to $\phi_{3}$ is described by the phase-field equations, derived from the entropy transport equation and the above entropy functional. By requiring a positive entropy production rate, the time evolution of the non-conserved phase field $\varphi$ and the conserved volume fractions $\phi_{i}$ can be formulated as $\frac{\partial\varphi}{\partial t}=M_{\varphi}\left(\frac{\partial s}{\partial\varphi}+\xi_{\varphi}^{2}\nabla^{2}\varphi\right)$ (2) and $\frac{\partial\phi_{i}}{\partial t}=-\boldsymbol{\nabla}\cdot\left[M_{i}\boldsymbol{\nabla}\left(\frac{\partial s}{\partial\phi_{i}}+\xi_{i}^{2}\nabla^{2}\phi_{i}\right)\right]~{},$ (3) respectively, where the assumed positive proportionality constant $M_{\varphi}$ represents the mobility of the solid-liquid interface of the melting powders, and $M_{i}$ represents the mobilities of the interfaces between different compositions. The entropy derivatives in Eqs. (2) and (3) are further related to the free energy density (Appendix A). Here we consider a free energy that accommodates the enthalpy effect for the mixing of different components across the smooth interface, with an additional constraint from mass conservation li2020b , expressed as $\begin{split}f=&\sum_{i=1}^{3}\phi_{i}f_{i}+T\sum_{i=1}^{3}h_{i}\phi_{i}^{2}(1-\phi_{i})^{2}\\\ &+T\lambda\left(\sum_{i=1}^{3}\phi_{i}-1\right)~{},\end{split}$ (4) where $f_{i}$ is the free energy of the pure material $i$, the 2nd term on the right assumes a double-well type mixing enthalpy with $h_{i}$ as the energy barriers, and the last term on the right takes the constraint in the entropy functional into account. 
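The conserved dynamics of Eq. (3), combined with the double-well mixing enthalpy of Eq. (4), has Cahn-Hilliard structure. A minimal, nondimensional 1D sketch might look like the following; it uses a single binary pair with purely illustrative parameters, omits the Lagrange multiplier, and steps in time with a semi-implicit Fourier-spectral scheme in the spirit of the spectral solver described later:

```python
import numpy as np

# Nondimensional 1D Cahn-Hilliard sketch of the conserved dynamics for a
# single volume fraction phi with double-well g(phi) = h*phi^2*(1-phi)^2.
# All parameters here are illustrative, not the paper's calibrated values.
N, Lx = 128, 1.0
M, h, xi2 = 1.0, 1.0, 1e-4          # mobility, energy barrier, gradient coeff.
dt, steps = 1e-4, 500
x = np.linspace(0.0, Lx, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=Lx / N)

phi = 0.5 + 0.05 * np.cos(2.0 * np.pi * x)   # small perturbation about 0.5
mean0 = phi.mean()

for _ in range(steps):
    # g'(phi) = 2*h*phi*(1-phi)*(1-2*phi), the mixing-enthalpy potential
    gprime = 2.0 * h * phi * (1.0 - phi) * (1.0 - 2.0 * phi)
    phi_hat = np.fft.fft(phi)
    # semi-implicit step: explicit double-well flux, implicit 4th-order term
    phi_hat = (phi_hat - dt * M * k**2 * np.fft.fft(gprime)) \
              / (1.0 + dt * M * xi2 * k**4)
    phi = np.real(np.fft.ifft(phi_hat))

# conserved dynamics: the k = 0 mode is untouched, so the mean is preserved
assert abs(phi.mean() - mean0) < 1e-10
assert np.all(np.isfinite(phi))
```

The implicit treatment of the stiff $\xi^{2}k^{4}$ term is what allows a usable time step; treating it explicitly would force $dt$ to scale with $k_{\max}^{-4}$.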
The thermally driven phase transition dynamics, including melting and solidification, are described by the free energy density $f_{2}$ developed by Wang et al. wang1993 , expressed as $\ f_{2}=T\bigg{[}-\int_{T_{m}}^{T}\frac{e_{2}(T^{\prime},\varphi)}{T^{\prime 2}}dT^{\prime}+\frac{1}{4}h_{\varphi}\left(1-\varphi^{2}\right)^{2}\bigg{]}~{},$ (5) where $T_{m}$ is the equilibrium melting temperature of the pure metallic powders, and $e_{2}=e_{2_{s}}+P(\varphi)L_{a}$ wang1993 is the corresponding internal energy, which accommodates both the solid and liquid phases of the metallic material through the interpolation function $P(\varphi)$ and the latent heat $L_{a}$. Here the polynomial interpolation function $P(\varphi)=1/2-(1/16)\left(3\varphi^{5}-10\varphi^{3}+15\varphi\right)$ ranges from $P(-1)=1$ to $P(1)=0$, and $h_{\varphi}$ is the corresponding energy barrier between the solid and liquid phases. As a result, the phase-field $\varphi$-equation that governs the solid-liquid phase transition wang1993 can be formulated as $\begin{split}\frac{\partial\varphi}{\partial t}=M_{\varphi}&\bigg{[}\xi_{\varphi}^{2}\nabla^{2}\varphi+\phi_{2}P^{\prime}L_{a}\frac{T-T_{m}}{TT_{m}}\\\ &~{}~{}~{}~{}+\phi_{2}h_{\varphi}\left(\varphi-\varphi^{3}\right)\bigg{]}~{}.\end{split}$ (6) The transient evolution of the phase field $\varphi$ is driven primarily by the thermal driving force (the 2nd term on the right), which depends on the local temperature and the assumed temperature-independent latent heat, while the balance between the diffusive effect (the 1st term) and the double-well phase separation (the 3rd term) generates and maintains a smooth yet narrow interfacial profile. $P^{\prime}$ is the $\varphi$-derivative of the interpolation function $P$. 
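Two properties of Eq. (6) can be checked directly: the interpolation function satisfies $P(-1)=1$, $P(1)=0$ with $P^{\prime}(\pm 1)=0$, and at a superheated point ($T>T_{m}$) the thermal driving force relaxes a solid point toward the liquid well. A gradient-free (0D) sketch, using the nickel values quoted in Tables 1 and 2 below:

```python
# Interpolation function of Eqs. (5)-(6) and its phi-derivative.
def P(phi):
    return 0.5 - (3 * phi**5 - 10 * phi**3 + 15 * phi) / 16.0

def dP(phi):  # P'(phi) = -(15/16) * (phi^2 - 1)^2, vanishing at phi = +-1
    return -(15.0 / 16.0) * (phi**2 - 1.0)**2

assert abs(P(-1.0) - 1.0) < 1e-12 and abs(P(1.0)) < 1e-12
assert abs(dP(-1.0)) < 1e-12 and abs(dP(1.0)) < 1e-12

# 0D (gradient-free) relaxation of Eq. (6) at a superheated point:
# a solid point should melt toward phi ~ -1 when T > T_m.
M_phi, h_phi = 32.1, 18.5      # mobility, energy barrier (Table 2)
L_a, T_m = 2.32e6, 1726.0      # latent heat, melting point (Table 1)
T, phi2 = 2000.0, 1.0          # local temperature, metal volume fraction
phi, dt = 0.9, 1e-6            # phi = 0.9 starts past the double-well barrier
for _ in range(20000):
    phi += dt * M_phi * (phi2 * dP(phi) * L_a * (T - T_m) / (T * T_m)
                         + phi2 * h_phi * (phi - phi**3))
assert -1.0001 < phi < -0.9    # melted: phi has relaxed to the liquid well
```

The fully solid state $\varphi=+1$ is still a stationary point of Eq. (6) even when superheated (both $P^{\prime}$ and the double-well term vanish there), which is why the sketch starts slightly inside the well.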
The second phase-field equation, which traces the volume fractions, can be formulated as $\begin{split}\frac{\partial\phi_{i}}{\partial t}=\boldsymbol{\nabla}\cdot\Bigg{\\{}M_{i}\boldsymbol{\nabla}&\bigg{[}2h_{i}\phi_{i}(1-\phi_{i})(1-2\phi_{i})\\\ &~{}~{}~{}~{}+\lambda-\xi_{i}^{2}\nabla^{2}\phi_{i}\bigg{]}\Bigg{\\}}\end{split}$ (7) for $i=1$ to $3$ in general. The first term on the right-hand side originates from the double-well mixing enthalpy, in which the energy barrier is adjusted numerically to prevent the mixing of different components in this case; the Lagrange multiplier results from the mass-conservation constraint. The 4th-order term, obtained from the gradient terms in the entropy functional, accounts for the long-range effect. In the above phase-field equations, the gradient coefficients $\xi_{\varphi}^{2}$ and $\xi_{i}^{2}$ and the energy barriers $h_{\varphi}$ and $h_{i}$ are associated with the interfacial energy and the characteristic thickness of the interface, which will be explained in the following section. Note that to accommodate the fluid flow effect, hereafter we replace $\partial/\partial t$ by the substantial derivative $D/Dt\equiv\partial/\partial t+\boldsymbol{v}\cdot\boldsymbol{\nabla}$ with $\boldsymbol{v}$ indicating the velocity field. ### Thermal energy equation Following the thermodynamically consistent formulation penrose1990 , the differential energy equation associated with the entropy can be expressed as $\frac{\partial e}{\partial t}=-\boldsymbol{\nabla}\cdot\left(M_{e}\boldsymbol{\nabla}\frac{\partial s}{\partial e}\right)+\dot{Q}~{},$ (8) where the apparent mobility coefficient $M_{e}=T^{2}k_{T}(T)$, and $k_{T}$ is the temperature-dependent thermal conductivity, determined by the material, phase, and temperature. 
The first term on the right-hand side reduces to the classical Fourier heat conduction effect, and the source term $\dot{Q}$ incorporates the radiation loss $\dot{Q}_{r}$ and laser irradiation $\dot{Q}_{ir}$ effects. The above internal energy has the additive form $e=\sum_{i=1}^{3}\phi_{i}e_{i}$, with $e_{2}=e_{2_{s}}+P(\varphi)L_{a}$. With the further assumption that all specific heats $c_{p_{i}}$ remain constant, the energy equation (8) can be written as $\begin{split}\sum_{i=1}^{3}\phi_{i}\rho_{i}c_{p_{i}}\frac{DT}{Dt}&=\boldsymbol{\nabla}\cdot(k_{T}\boldsymbol{\nabla}T)+\dot{Q}_{r}+\dot{Q}_{ir}\\\ &~{}~{}~{}~{}~{}~{}-\phi_{2}P^{\prime}L_{a}\frac{D\varphi}{Dt}~{},\end{split}$ (9) where $\rho_{i}$ is the mass density. The exchange of thermal radiation energy between the material surface and the ambient environment is simplified and expressed as $\dot{Q}_{r}(\textbf{x}\in\partial\Omega)=-\frac{\epsilon\sigma_{\mbox{\tiny$B$}}(T^{4}-T_{a}^{4})}{W}~{},$ (10) where $\epsilon$ is the emissivity of the metal surface with an apparent characteristic width $W$ in the phase-field model, $\sigma_{\mbox{\tiny$B$}}$ is the Stefan-Boltzmann constant, $T_{a}$ is the ambient temperature, and $\alpha$ is the absorptivity of the metal material, assumed approximately equal to $\epsilon$. Gas absorption and participation are neglected. 
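The radiation-loss term of Eq. (10) is a standard Stefan-Boltzmann exchange distributed over the apparent interface width $W$; a small sketch, with the nickel emissivity and $W$ taken from the tables below and $\sigma_{B}$ the physical constant:

```python
# Sketch of the simplified radiation-loss source term of Eq. (10).
SIGMA_B = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def q_rad(T, eps=0.34, T_a=300.0, W=4e-6):
    """Radiative loss per unit volume, spread over the apparent width W."""
    return -eps * SIGMA_B * (T**4 - T_a**4) / W

# a surface hotter than ambient loses energy, and faster when hotter
assert q_rad(2000.0) < q_rad(1800.0) < 0.0
# at the ambient temperature the exchange vanishes
assert q_rad(300.0) == 0.0
```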
The irradiation of the laser beam on the powder surface is estimated as $\dot{Q}_{ir}(\textbf{x}\in\partial\Omega)=-\frac{\alpha\boldsymbol{H}\cdot\boldsymbol{n}}{W}~{},$ (11) where $\boldsymbol{n}$ is the surface normal computed by $\boldsymbol{n}=\boldsymbol{\nabla}\phi_{1}/|\boldsymbol{\nabla}\phi_{1}|$, and $\boldsymbol{H}$ is the intensity of an assumed 2D Gaussian laser beam, calculated by $\boldsymbol{H}=\frac{-\sqrt{2/\pi}\mathcal{Q}}{a}\textrm{exp}\left[\frac{-2(x-x_{0}-U_{a}t)^{2}}{a^{2}}\right]{\rm{\hat{\rm{}\textbf{e}}}_{y}}~{},$ (12) where $\mathcal{Q}$ is the laser power per unit width, $a$ is the characteristic spot radius, $x$ is the horizontal coordinate, $x_{0}$ is the initial laser focal point, and $U_{a}$ is the scanning speed of the laser beam traveling along the horizontal direction ($\hat{\textbf{e}}_{x}$ as shown in Fig. 1). ### Interfacial dynamics In the phase-field approach, one has to determine a few characteristics of the smooth interface: the interfacial energy, mobility, and apparent thickness. The interfacial energy, denoted by $\gamma$, is associated with the excess energy of the interface at equilibrium. Since the interfacial thickness is much smaller than the feature size of the morphology during the phase transition, the interfacial energy is usually estimated from the 1D equilibrium profile cahn1958 , for which the analytical solution of the phase field is known. 
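A quick consistency check of Eq. (12): the prefactor $\sqrt{2/\pi}\,\mathcal{Q}/a$ normalizes the $e^{-2x^{2}/a^{2}}$ kernel, so integrating the irradiation profile over $x$ recovers the line power $\mathcal{Q}$. A numerical sketch with the Table 2 values:

```python
import numpy as np

# The Gaussian irradiation profile of Eq. (12) integrates to the line
# power Q; Q = 2.1e5 W/m and a = 100 um are taken from Table 2.
Q, a, x0, U_a, t = 2.1e5, 100e-6, 0.0, 0.1, 0.0

x = np.linspace(-10 * a, 10 * a, 20001)
H_y = -np.sqrt(2.0 / np.pi) * Q / a \
      * np.exp(-2.0 * (x - x0 - U_a * t)**2 / a**2)

# recovered power per unit width (tails beyond +-10a are negligible)
P_line = np.sum(-H_y) * (x[1] - x[0])
assert abs(P_line - Q) / Q < 1e-6
```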
As a result, the interfacial energy of the solid-liquid interface is related to the thickness, temperature, and gradient coefficient as $\gamma_{\varphi}=\int_{-\infty}^{\infty}T_{m}\xi_{\varphi}^{2}|\boldsymbol{\nabla}\varphi|^{2}dx=\frac{2\sqrt{2}}{3}\frac{\xi_{\varphi}^{2}}{W_{\varphi}}T_{m}~{},$ (13) where $x$ indicates the coordinate in an assumed unbounded 1D domain, $T_{m}$ is the reference temperature at the melting point of the material, and $W_{\varphi}$ is the characteristic thickness of the interface, correlated with the entropy gradient coefficient by setting $\xi_{\varphi}^{2}=h_{\varphi}W_{\varphi}^{2}$. With a further extension to a multi-component system, the energy barriers and gradient coefficients are related to the interfacial energies as $\begin{bmatrix}\xi_{1}^{2}\\\ \xi_{2}^{2}\\\ \xi_{3}^{2}\end{bmatrix}=W^{2}\begin{bmatrix}h_{1}\\\ h_{2}\\\ h_{3}\end{bmatrix}=\frac{3W}{\sqrt{2}T_{m}}\begin{bmatrix}~{}1~{}&1~{}&-1~{}\\\ ~{}1~{}&-1~{}&1~{}\\\ ~{}-1~{}&1~{}&1~{}\\\ \end{bmatrix}\begin{bmatrix}\gamma_{12}\\\ \gamma_{13}\\\ \gamma_{23}\end{bmatrix},$ (14) where the subscripts 12, 13, and 23 indicate the gas-powder, gas-substrate, and powder-substrate interfaces, respectively. To incorporate the thermal Marangoni effect, the interfacial energy between the nickel powder and the argon gas is calculated by the linear model $\gamma_{12}=\gamma_{12}^{0}-\beta_{\gamma}(T-T_{m})$, with $\beta_{\gamma}$ as the Marangoni (temperature) coefficient. The remaining two interfacial energies, $\gamma_{13}$ and $\gamma_{23}$, are treated as constants. The Lagrange multiplier $\lambda$ can be determined by combining Eq. 
(7), the constraint $\sum_{i=1}^{3}\phi_{i}=1$, and the assumption $M_{1}\xi_{1}^{2}=M_{2}\xi_{2}^{2}=M_{3}\xi_{3}^{2}$ boyer2006 ; boyer2011 , expressed as $\lambda=\frac{-1}{\sum_{i=1}^{3}M_{i}}\left(\sum_{i=1}^{3}2M_{i}h_{i}\phi_{i}(1-\phi_{i})(1-2\phi_{i})\right)~{}.$ (15) We further assume that the mass density of nickel remains constant during the phase transition process and that the molten nickel is a quasi-incompressible Newtonian fluid satisfying the continuity equation: $\boldsymbol{\nabla}\cdot\boldsymbol{v}\simeq 0~{}.$ (16) Furthermore, the fluid dynamics involving the interfacial force can be described by the Navier-Stokes-Korteweg momentum equation, written as $\rho\frac{D\boldsymbol{v}}{Dt}=-\boldsymbol{\nabla}\hat{p}+\boldsymbol{\nabla}\cdot\left[\eta(\boldsymbol{\nabla}\boldsymbol{v}+\boldsymbol{\nabla}\boldsymbol{v}^{T})\right]+\sum_{i=1}^{3}\mu_{i}\boldsymbol{\nabla}\phi_{i}+\textbf{F}_{M}~{},$ (17) where $\hat{p}$ is a modified pressure, expressed as $\hat{p}=p-\sum_{i=1}^{3}\left(T_{m}\xi_{i}^{2}\phi_{i}\nabla^{2}\phi_{i}\right)+f~{},$ (18) where $p$ is the hydrodynamic pressure, and the isotropic component of the interfacial force has been absorbed into the pressure gradient term for convenience; $\eta$ is the temperature-dependent dynamic viscosity, $\mu_{i}=\partial f/\partial\phi_{i}-T\xi_{i}^{2}\nabla^{2}\phi_{i}$ is the generalized chemical potential with $f$ indicating the bulk free energy, and $\textbf{F}_{M}$ is the body force due to the thermal Marangoni effect. 
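Both the linear map of Eq. (14) and the Lagrange multiplier of Eq. (15) can be exercised numerically: the $\pm 1$ matrix is invertible, and with $\lambda$ from Eq. (15) the mobility-weighted potentials sum to zero, which (together with the equal $M_{i}\xi_{i}^{2}$ assumption) is what preserves $\sum_{i}\phi_{i}=1$ under Eq. (7). A sketch, noting that the $h_{i}$ computed this way differ from the numerically adjusted barrier quoted in Table 2, and that the equal mobilities below are illustrative:

```python
import numpy as np

# Eq. (14): map pairwise interfacial energies to energy barriers h_i;
# Eq. (15): Lagrange multiplier enforcing sum(phi_i) = 1.
W, T_m = 4e-6, 1726.0                         # Table 2, Table 1
A = np.array([[1, 1, -1], [1, -1, 1], [-1, 1, 1]], dtype=float)
gamma = np.array([1.838, 1.860, 2.385])       # gamma_12, gamma_13, gamma_23

h = 3.0 / (np.sqrt(2.0) * W * T_m) * A @ gamma   # from xi_i^2 = W^2 h_i
assert np.all(h > 0)
# the mapping is invertible: recover the gammas from the barriers
assert np.allclose(np.linalg.solve(A, np.sqrt(2.0) * W * T_m / 3.0 * h), gamma)

# Eq. (15): with this lambda the mobility-weighted potentials sum to zero,
# so Eq. (7) preserves the constraint sum(phi_i) = 1.
M = np.array([1.0, 1.0, 1.0])                 # illustrative equal mobilities
phi = np.array([0.2, 0.5, 0.3])
g = 2.0 * h * phi * (1.0 - phi) * (1.0 - 2.0 * phi)
lam = -np.sum(M * g) / np.sum(M)
assert abs(np.sum(M * (g + lam))) < 1e-9
```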
Here we assume that the apparent Marangoni force $\textbf{F}_{M}$ is linearly proportional to the temperature gradient within the interfacial region selected by the scalar quantity $|\boldsymbol{\nabla}\phi_{1}|^{2}$, written as $\textbf{F}_{M}\simeq\chi\boldsymbol{\nabla}T|\boldsymbol{\nabla}\phi_{1}|^{2},$ (19) where the coefficient $\chi$ is approximated by the increment of the surface tension through a simple control-volume analysis around the interface, so that $\chi=-\beta_{\gamma}W$, with $\beta_{\gamma}$ obtained from the experimental Marangoni coefficient and $W$ the characteristic thickness of the interface. ### Material properties and parameters Because temperature variation influences the transport properties significantly, the nonlinear effect of temperature-dependent properties is included in the computation. Here we summarize the material properties used for the case studies based on pure nickel powders in an argon gas environment. The thermal conductivity of the nickel powders transitions smoothly between the solid and liquid phases through the $P$-function, expressed as $k_{T_{Ni}}\simeq k_{Ni}^{(s)}\left[1-P(\varphi)\right]+k_{Ni}^{(\ell)}P(\varphi)~{},$ (20) where $k_{Ni}^{(s)}$ represents the solid-state thermal conductivity of pure nickel, approximated by $k_{Ni}^{(s)}\simeq 50.06+0.022~{}T$ CRC1 in MKS units with temperature in kelvin, and $k_{Ni}^{(\ell)}\simeq 49.7~{}\textrm{W}/(\textrm{m}\cdot\textrm{K})$ is the liquid-state thermal conductivity of pure nickel. The dynamic viscosity of the molten nickel can be correlated with temperature CRC1 as $\eta_{Ni}\simeq\eta_{0}+5.257\times 10^{-6}(T-T_{m})~{},$ (21) where $T_{m}$ is the melting temperature of pure nickel, and $T$ is the absolute temperature field. 
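The property correlations of Eqs. (20)-(21) translate directly into code; a sketch in which the solid and liquid limits follow from $P(\pm 1)$:

```python
# Temperature-dependent nickel properties, Eqs. (20)-(21), in MKS units
# with temperature in kelvin (CRC-handbook fits quoted in the text).
def P(phi):  # interpolation function used in Eqs. (5)-(6)
    return 0.5 - (3 * phi**5 - 10 * phi**3 + 15 * phi) / 16.0

def k_nickel(T, phi):
    """Eq. (20): blend of solid/liquid conductivity of nickel via P(phi)."""
    k_s = 50.06 + 0.022 * T          # solid branch, W/(m K)
    k_l = 49.7                       # liquid branch, W/(m K)
    return k_s * (1.0 - P(phi)) + k_l * P(phi)

def eta_nickel(T, eta0=5.01e-3, T_m=1726.0):
    """Eq. (21): dynamic viscosity of molten nickel, Pa s."""
    return eta0 + 5.257e-6 * (T - T_m)

# fully solid (phi = +1): P = 0, so the solid correlation applies
assert abs(k_nickel(1726.0, 1.0) - (50.06 + 0.022 * 1726.0)) < 1e-9
# fully liquid (phi = -1): P = 1, so the constant liquid value applies
assert abs(k_nickel(2000.0, -1.0) - 49.7) < 1e-9
# at the melting point the viscosity reduces to the reference value
assert eta_nickel(1726.0) == 5.01e-3
```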
The substrate material is assumed stainless steel with thermal conductivity CRC1 estimated by: $k_{T_{Fe}}\simeq 9.42+0.0143~{}T~{}.$ (22) The thermal conductivity and dynamic viscosity of argon gas eckhard2010 are approximated by $k_{T_{Ar}}\simeq 1.473\times 10^{-2}+2.840\times 10^{-5}~{}T$ (23) and $\eta_{Ar}\simeq 1.885\times 10^{-5}+3.362\times 10^{-8}~{}T~{},$ (24) respectively. The density of argon gas is calculated by the ideal gas law. Table 1 lists the assumed constant properties used for the case studies. Several characteristic lengths and model parameters are listed in Table 2. In summary, the fully coupled governing equations are in general applicable to 2D and 3D cases by solving for the solid-liquid phase transition field $\varphi$, the volume-fraction phase fields $\phi_{i(i=1,2,3)}$, the temperature field $T$, and the velocity field $\boldsymbol{v}$, along with the initial and boundary conditions.

Table 1: Constant material properties.

Parameters | Value
---|---
mass density: | $\textrm{kg}/\textrm{m}^{3}$
nickel $\rho_{Ni}$ CRC2 | $7810$
stainless steel $\rho_{Fe}$ CRC2 | $7874$
reference thermal conductivity $k_{T_{0}}$ CRC2 | $90.9$ $\textrm{W}/(\textrm{m}\cdot\textrm{K})$
specific heat: | $\textrm{J}/(\textrm{kg}\cdot\textrm{K})$
argon $c_{p_{Ar}}$ CRC2 | $520.3$
pure nickel $c_{p_{Ni}}$ CRC2 | $490.0$
stainless steel $c_{p_{Fe}}$ CRC2 | $633.0$
reference dynamic viscosity $\eta_{0}$ CRC2 | $5.01$ $\textrm{mPa}\cdot\textrm{s}$
interfacial energy between: | $\textrm{J}/\textrm{m}^{2}$
nickel and argon gas $\gamma_{12}^{0}$ CRC2 | 1.838
nickel and stainless steel $\gamma_{23}$ CRC2 | 2.385
stainless steel and argon gas $\gamma_{13}$ WH | 1.860
solid and liquid nickel $\gamma_{\varphi}$ jones2002 | 0.347
Marangoni coefficient $\beta_{\gamma}$ CRC2 | $0.39$ $\textrm{mN}/(\textrm{m}\cdot\textrm{K})$
melting temperature of nickel $T_{m}$ CRC1 | $1726$ K
latent heat of fusion of nickel $L_{a}$ CRC1 | $2.32\times 10^{6}$ $\textrm{J}/\textrm{m}^{3}$
emissivity of nickel powder $\epsilon_{Ni}$ MR | $0.34$
emissivity of stainless steel $\epsilon_{Fe}$ MR | $0.40$

Table 2: Model parameters.

Parameters | Value
---|---
characteristic length $L$ | $200$ $\rm{\mu}$m
domain size $D=2\pi L$ | $\sim 1200$ $\rm{\mu}$m
interfacial thickness for $\varphi$-field $W_{\varphi}$ | $8$ $\rm{\mu}$m
interfacial thickness for $\phi_{i}$-field $W$ | $4$ $\rm{\mu}$m
solid-liquid energy barrier $h_{\varphi}$ | $18.5$ $\textrm{J}/(\textrm{m}^{3}\cdot\textrm{K})$
energy barrier for powders $h_{2}$ | $91.7$ $\textrm{J}/(\textrm{m}^{3}\cdot\textrm{K})$
characteristic velocity $U$ | 0.73 $\textrm{m}/\textrm{s}$
interfacial mobility for $\varphi$-field $M_{\varphi}$ | 32.1 $\textrm{m}\cdot\textrm{s}\cdot\textrm{K}/\textrm{kg}$
mobility of nickel $M_{2}$ | $0.259$ $\textrm{m}^{3}\cdot\rm{\mu}\textrm{s}\cdot\textrm{K}/\textrm{kg}$
2D power of laser beam $\mathcal{Q}$ | $2.1\times 10^{5}$ $\textrm{W}/\textrm{m}$
spot size of laser beam $a$ | $100$ $\rm{\mu}\textrm{m}$
scanning speed of laser beam $U_{a}$ | $0.1$ $\textrm{m}/\textrm{s}$
char. temperature difference $\Delta T$ | $500$ K

Using the material properties and chosen model parameters above, a few characteristic time scales can be identified: the thermal diffusion time scale $\tau_{\mbox{\tiny$T$}}=L^{2}\rho_{0}c_{p_{0}}/k_{T_{0}}=1.68\times 10^{-3}~{}s$, the convective time scale $\tau_{c}=L/U=2.74\times 10^{-4}~{}s$, the solid-liquid phase transition time scale $\tau_{\varphi}=1/h_{\varphi}M_{\varphi}=1.68\times 10^{-3}~{}s$, the time scale for wetting dynamics $\tau_{\textrm{wet}}=L^{2}/h_{2}M_{2}=1.68\times 10^{-5}~{}s$, and the viscous diffusion time scale $\tau_{\textrm{vis}}=\rho_{0}L^{2}/\eta_{0}=2.50\times 10^{-2}~{}s$ li2020b . Here we have applied the characteristic length $L$, characteristic velocity $U$, characteristic overheating temperature $\Delta T$, phase transition time scale $\tau_{\varphi}$, and other reference material properties as provided in the tables. 
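The quoted thermal, convective, and phase-transition time scales follow directly from the tabulated values; a sketch re-deriving the three that are unit-unambiguous ($\tau_{\textrm{wet}}$ and $\tau_{\textrm{vis}}$ also depend on the unit conventions adopted for $M_{2}$ and $\eta_{0}$, so they are not re-derived here):

```python
# Characteristic time scales from the tabulated nickel properties.
L = 200e-6                              # characteristic length, m (Table 2)
rho0, cp0, kT0 = 7810.0, 490.0, 90.9    # density, specific heat, conductivity
U = 0.73                                # characteristic velocity, m/s
h_phi, M_phi = 18.5, 32.1               # solid-liquid barrier and mobility

tau_T = L**2 * rho0 * cp0 / kT0         # thermal diffusion
tau_c = L / U                           # convection
tau_phi = 1.0 / (h_phi * M_phi)         # solid-liquid phase transition

# all three agree with the quoted values to within 1%
assert abs(tau_T - 1.68e-3) / 1.68e-3 < 0.01
assert abs(tau_c - 2.74e-4) / 2.74e-4 < 0.01
assert abs(tau_phi - 1.68e-3) / 1.68e-3 < 0.01
```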
The reference parameters (with subscript 0) are based on the powder material, with density $\rho_{0}=\rho_{Ni}$ and specific heat $c_{p_{0}}=c_{p_{Ni}}$. The characteristic velocity $U$ is the scaled capillary velocity determined by $U=\beta\gamma_{12}/\eta_{0}$, with an adjustable factor $\beta=0.005$. In the scaled formulation, the temperature is scaled as $\tilde{T}=(T-T_{m})/\Delta T$, and pressure and stress are scaled by the inertial effect $\rho_{0}U^{2}$. We develop the computational solver based on Euler time integration and Fourier spectral discretization with an $800\times 800$ uniform mesh. ## Results and Discussion Figure 2: Evolution of melting, wetting, and coalescence dynamics in terms of the solid-liquid phase field $\varphi$ (left panels) and the corresponding temperature field (right panels) at five scaled time instants $\tilde{t}=0.2$, 0.4, 0.6, 1.2, and 1.6. The scaled temperature is defined as $\tilde{T}=(T-T_{m})/\Delta T$, and time is scaled by $\tau_{\varphi}=1.68\times 10^{-3}~{}s$. Three temperature contours $\tilde{T}=0$, 0.5, and 1.0 are provided for reference, with the melting temperature at $\tilde{T}=0$. The laser parameters are: intensity $\mathcal{Q}=3.0\times 10^{6}~\rm{W}/\rm{m}$, spot size $a=50~\mu m$, and a laser spot starting from $\tilde{x}=0.5$ and moving horizontally to the right with a scanning speed $U_{a}=0.5~\rm{m}/\rm{s}$. Figure 3: Transient evolution of the solid-liquid interface at three scaled time instants $\tilde{t}=0.2$, 0.8, and 1.6 under three assumed contact angles: (a) $\theta=15^{\circ}$, (b) $\theta=30^{\circ}$, and (c) $\theta=110^{\circ}$. The temperature contour at $\tilde{T}=0$ is provided for reference. Figure 2 shows the interplay between melting, wetting, and coalescence of a horizontal layer of metal powders. 
Melting and coalescence first form larger droplets, and the poor wetting condition to the substrate, with a contact angle of $106.7^{\circ}$, then further enhances the balling effect and creates a persistent pattern of separated balls. The process is followed by rapid solidification of the fused droplets without changing the pattern. A closer observation of the transient dynamics from the phase field shows that initially ($\tilde{t}=0$) the whole system has a uniform temperature $\tilde{T}=-0.2$. At $\tilde{t}=0.2$, four powders near the laser spot are fully melted; three of them merge, and the next one to the right splits off due to the poor wetting condition and the cohesive force from the unmelted powders on the right. The overall profile of the melted powders has a blunt shape. Meanwhile, the partially melted powder wets both the solid powder on the right and the substrate concurrently. The lower interfacial energy between liquid and solid nickel powders essentially leads to a much better wetting condition for the laser melting process. The balling pattern is phenomenologically similar to the classical Rayleigh-Plateau interfacial instability of a liquid column; however, here the characteristic distance between the gaps is roughly twice the powder diameter, much smaller than the Rayleigh criterion of about 4.5 times the diameter of a 3D liquid column, which applies without the influence of wetting and phase transition. Note that in practice the discontinuities between droplets before and after solidification may create voids during the layer-by-layer AM process and should be avoided. Furthermore, the corresponding temperature distribution on the right panels shows the laser heating effect from Gaussian irradiation, phase transition, and thermal diffusion into the powders, substrate, and surroundings. 
The irradiation is absorbed by both the powders and the substrate, whereas the argon gas is assumed not to participate in the radiation heat transfer. The system is thermally controlled, and the melting front shown in the phase field coincides with the equilibrium temperature contour at $\tilde{T}=0$, consistent with our selection of the time scale, or interfacial mobility, for the phase transition based on the thermal diffusion time scale. At dimensionless times $\tilde{t}=0.4$ and 0.6, metal powders are melted and merged, and soon start to solidify. The penetration depth of the thermal wave during the transient process can be used to characterize the heat-affected zone. Finally, the morphology shows that the balling effect repeats itself through melting, coalescence, splitting, thermal relaxation, and solidification. The discontinuities that appear in the balling pattern indicate a wavelength of around 60 to 65$\%$ of that obtained from a typical Rayleigh instability, which is about nine times the radius of a thin liquid column. In addition to interfacial instability, the result suggests the importance of wetting and solidification to the formation of the balling pattern. Figure 3 further demonstrates the balling effect under various wetting conditions. Under the same configuration and laser scanning speed, a better wetting condition with low contact angles (Fig. 3a) provides a flattened melt pool that coats the top surface of the substrate more uniformly than the high-contact-angle case. On the other hand, the number of discontinuities increases under a relatively poor wetting condition with a high contact angle (Fig. 3c). Note that the advancing and receding angles are not specified in this phase-field simulation, and the reference contact angle is defined based on the equilibrium condition. Overall, a good wetting condition inhibits the occurrence of balling and discontinuities. 
The morphology in general remains the same before and after solidification for each test case. The heat-affected zone, characterized by the melting-temperature contour within the substrate, appears quite similar for each case at longer range. This is reasonable, as balling is a very local phenomenon with a faster time scale and thus has less influence on the temperature distribution in the larger heat-affected zone. Figure 4: Interfacial evolution showing wetting dynamics near a hypothetical solid powder affixed to the substrate at four sequential time instants for three cases: (a) without hydrodynamics, (b) including hydrodynamics but without the Marangoni effect, and (c) including hydrodynamics and the thermal Marangoni effect. The profiles color coded by blue, green, and red are the contours for the volumetric fraction $\phi_{2}$=0.1, 0.5, and 0.9, respectively. In Fig. 4 we apply the same test conditions as in Fig. 2 to take a closer look at the wetting dynamics. Note that for a case with a very low contact angle, or high wettability, the lubrication approximation may be applicable to simplify the analysis; however, the full Navier-Stokes-Korteweg system is required in general, and it is coupled with the phase transition dynamics in this study. Figure 4a shows the transient relaxation of the interfacial energy without hydrodynamics. To simplify the study, we first consider two melted nickel droplets, described by the phase field $\phi_{2}$, placed on top of a solid substrate ($\phi_{3}$) and near a solid sphere made of stainless steel (also defined by $\phi_{3}$). Initially, the temperature is assumed uniform and above the melting temperature of pure nickel, so that heating and phase transition can be neglected. The equilibrium contact angle between the droplet and the substrate is set to $\theta=60^{\circ}$. Figure 4b shows the results at the same time steps with fluid flow and convective effects taken into account. 
The characteristic velocity is $U=0.92~{}m/s$ and the reference Reynolds number is $Re=150$. The solid contour lines stand for the volume fraction of the droplet at $\phi_{2}=0.1$, 0.5, and 0.9, whereas the dashed contour lines are for the volume fraction of stainless steel at $\phi_{3}=0.1$, 0.5, and 0.9. The fluid flow is driven by the moving three-phase contact line and the Laplace pressure around the free surface. The inertial and convective effects further accelerate the wetting, enhance heat transfer, and thus promote the phase transition process in terms of powder fusion and bonding efficiency to the substrate. However, the inertial effect of the fluid flow also introduces stronger oscillatory motion of the free surface, along with a few circulation zones around the droplet. One would expect more vigorous oscillation, and even spattering, near the melt pool under high laser power and long dwell time, an important feature of the keyhole operation mode. Figure 4c includes the thermal Marangoni effect in the interfacial dynamics, and the result shows that in this test case the Marangoni effect has a minor influence on the local velocity field. The result is reasonable, since the characteristic Marangoni force is much smaller than the inertial force, i.e., $W\beta_{\gamma}\Delta T/(\rho_{0}U^{2}L^{2})\sim 0.004$. The Marangoni effect is expected to be stronger in high-power laser melting. ## Conclusion We develop a thermodynamically consistent phase-field theoretical model to describe the balling effect that appears in a simplified powder bed fusion process, which involves phase transition, wetting dynamics, interfacial deformation and instability, droplet coalescence, and the resulting discontinuities after solidification. The computational framework relies on the entropy functional of the system in addition to the conservation principles. All governing equations are derived to ensure positive local entropy generation. 
The fully coupled transport equations are solved by the spectral method. The results demonstrate that the wetting condition and interfacial instability play major roles in the formation of discontinuities along the laser scanning track. Better wetting to the substrate reduces the degree of balling and the chance of discontinuity. At low to medium laser power, the solidification, inertial, and Marangoni convective effects around the micro melt pool are secondary to the balling pattern. ## Acknowledgments The authors acknowledge the financial support of this research from the National Science Foundation (CBET 1930906). ## Appendix A: Derivatives of internal energy and free energy The total derivative of the internal energy $e(s,\varphi,\phi_{1},\phi_{2},\phi_{3})$ is expressed as $de=Tds+\frac{\partial e}{\partial\varphi}d\varphi+\sum_{i=1}^{3}\frac{\partial e}{\partial\phi_{i}}d\phi_{i}~{},$ (25) and thus $ds=\frac{1}{T}de-\frac{1}{T}\frac{\partial e}{\partial\varphi}d\varphi-\frac{1}{T}\sum_{i=1}^{3}\frac{\partial e}{\partial\phi_{i}}d\phi_{i}~{}.$ (26) By comparing the partial derivatives of the entropy $s(e,\varphi,\phi_{1},\phi_{2},\phi_{3})$ with the above expression, one can establish the following relations: $\frac{\partial s}{\partial\varphi}\bigg{)}_{e,\phi_{1},\phi_{2},\phi_{3}}=-\frac{1}{T}\frac{\partial e}{\partial\varphi}\bigg{)}_{s,\phi_{1},\phi_{2},\phi_{3}}$ (27) and $\frac{\partial s}{\partial\phi_{i}}\bigg{)}_{e,\varphi,\phi_{j(j\neq i)}}=-\frac{1}{T}\frac{\partial e}{\partial\phi_{i}}\bigg{)}_{s,\varphi,\phi_{j(j\neq i)}}$ (28) for $i=1$ to 3. 
Moreover, since the Helmholtz free energy density is introduced as $f(T,\varphi,\phi_{1},\phi_{2},\phi_{3})=e-Ts$, the total derivative of the free energy is $df=d(e-Ts)=-sdT+\frac{\partial e}{\partial\varphi}d\varphi+\sum_{i=1}^{3}\frac{\partial e}{\partial\phi_{i}}d\phi_{i}~{},$ (29) and therefore, $\frac{\partial e}{\partial\varphi}\bigg{)}_{s,\phi_{1},\phi_{2},\phi_{3}}=\frac{\partial f}{\partial\varphi}\bigg{)}_{T,\phi_{1},\phi_{2},\phi_{3}}$ (30) and $\frac{\partial e}{\partial\phi_{i}}\bigg{)}_{s,\varphi,\phi_{j(j\neq i)}}=\frac{\partial f}{\partial\phi_{i}}\bigg{)}_{T,\varphi,\phi_{j(j\neq i)}}$ (31) for $i=1$ to 3. Finally, $\frac{\partial s}{\partial\varphi}\bigg{)}_{e,\phi_{1},\phi_{2},\phi_{3}}=-\frac{1}{T}\frac{\partial f}{\partial\varphi}\bigg{)}_{T,\phi_{1},\phi_{2},\phi_{3}}$ (32) and $\frac{\partial s}{\partial\phi_{i}}\bigg{)}_{e,\varphi,\phi_{j(j\neq i)}}=-\frac{1}{T}\frac{\partial f}{\partial\phi_{i}}\bigg{)}_{T,\varphi,\phi_{j(j\neq i)}}$ (33) for $i=1$ to 3. ## References * [1] D.D. Gu, W. Meiners, K. Wissenbach, R. Poprawe, Laser additive manufacturing of metallic components: materials, processes and mechanisms, Int. Mater. Rev. 57(3), 133-164, 2012. * [2] C.Y. Yap, C.K. Chua, Z.L. Dong, Z.H. Liu, D.Q. Zhang, L.E. Loh, S.L. Sing, Review of selective laser melting: materials and applications, Appl. Phys. Rev. 2(4), 041101, 2015. * [3] D. Herzog, V. Seyda, E. Wycisk, C. Emmelmann, Additive manufacturing of metals, Acta Mater. 117, 371-392, 2016. * [4] M. Markl, C. Körner, Multi-scale modeling of powder-bed-based additive manufacturing, Annu. Rev. Mater. Res. 46, 1-34, 2016. * [5] E. Malekipour, H. El-Mounayri, Common defects and contributing parameters in powder bed fusion AM process and their classification for online monitoring and control: a review, Int. J. Adv. Manuf. Technol. 95, 527-550, 2017. * [6] T. DebRoy, H.L. Wei, J.S. Zuback, T. Mukherjee, J.W. Elmer, J.O. Milewski, A.M. Beese, A. Wilson-Heid, A. De, W. 
# An unfitted finite element method for two-phase Stokes problems with slip between phases

M. Olshanskii1, A. Quaini1 and Q. Sun1

1Department of Mathematics, University of Houston, 3551 Cullen Blvd, Houston TX 77204 <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

Abstract We present an isoparametric unfitted finite element approach of the CutFEM or Nitsche-XFEM family for the simulation of two-phase Stokes problems with slip between phases. For the unfitted generalized Taylor–Hood finite element pair $\mathbf{P}_{k+1}-P_{k}$, $k\geq 1$, we show an inf-sup stability property with a stability constant that is independent of the viscosity ratio, slip coefficient, position of the interface with respect to the background mesh and, of course, mesh size. In addition, we prove stability and optimal error estimates that follow from this inf-sup property. We provide numerical results in two and three dimensions to corroborate the theoretical findings and demonstrate the robustness of our approach with respect to the contrast in viscosity, slip coefficient value, and position of the interface relative to the fixed computational mesh.

Keywords: XFEM, cutFEM, two-phase flow, Stokes problem, finite elements

## 1 Introduction

The finite element approximation of two-phase problems involving immiscible fluids features several challenging aspects. The first challenge is the presence of a sharp interface between the two phases, that might move and undergo topological changes. A second critical aspect is the presence of surface tension forces that create a jump in the pressure field at the interface. In addition, if one accounts for slip between phases [25], a jump in the velocity field at the interface needs to be captured as well. Finally, lack of robustness may arise when there is a high contrast in fluid densities and viscosities. Tackling all of these challenges has motivated a large body of literature.
One possible way to categorize numerical methods proposed in the literature is to distinguish between _diffusive interface_ and _sharp interface_ approaches. Phase field methods (e.g., [4, 28]) belong to the first category, while level set methods (e.g., [42]) and conservative level set methods (e.g., [38]) belong to the second. Diffusive interface methods introduce a smoothing region around the interface that allows quantities to vary smoothly, instead of sharply, from one phase to the other, and usually apply the surface tension forces over the entire smoothing region. The major limitation of diffusive interface methods lies in the need to resolve the smoothing region with an adequate number of elements, which results in high computational costs. Sharp interface methods require fewer elements to resolve the interface between phases. Thus, we will restrict our attention to sharp interface approaches, which can be further divided into _geometrically fitted_ and _unfitted_ methods. In fitted methods, the discretization mesh is fitted to the computational interface. Perhaps the best known fitted methods are Arbitrary Lagrangian Eulerian (ALE) methods [16]. In case of a moving interface, ALE methods deform the mesh to track the interface. While ALE methods are known to be very robust for small interface displacements, complex re-meshing procedures are needed for large deformations and topological changes. Certain variations of the method, like the extended ALE [5, 6], successfully deal with large interface displacements while keeping the same mesh connectivity. The price to pay for such improvement is a higher computational cost. Unfitted methods allow the sharp interface to cut through the elements of a fixed background grid. Their main advantage is the relative ease of handling time-dependent domains, implicitly defined interfaces, and problems with strong geometric deformations [8].
The immersed finite element method (e.g., [3]) and front-tracking methods (e.g., [43]) are examples of unfitted approaches. Applied in the finite element framework, these methods require an enrichment of the elements intersected by the interface in order to capture jumps and kinks in the solution. One complex aspect of these methods is the need for tailored stabilization. Popular unfitted methods that embed discontinuities in finite element solvers are XFEM [36] and CutFEM [12]. XFEM enriches the finite element shape functions by the Partition-of-Unity method. To learn more about XFEM applied to two-phase flow problems, we refer the reader to [14, 19, 21, 30, 40]. CutFEM is a variation of XFEM, also called Nitsche-XFEM [23]. CutFEM uses overlapping fictitious domains in combination with ghost penalty stabilization [11] to enrich and stabilize the solution. See [15, 18, 24, 27, 35, 45] for the application of CutFEM or Nitsche-XFEM to approximate two-phase flows. Finally, recently proposed unfitted methods are a hybrid high-order method [10] and an enriched finite element/level-set method [26]. In this paper, we study an isoparametric unfitted finite element approach of the CutFEM or Nitsche-XFEM family for the simulation of two-phase Stokes problems with slip between phases. All the numerical works cited above consider the homogeneous model of two-phase flow, i.e. no slip is assumed between the phases. This assumption is appropriate in three cases: one of the phases has a relatively small volume, one phase forms drops of minute size, or one phase (representing the continuous medium in which droplets are immersed) has high speed [25]. In all other cases, slip between the phases has to be accounted for. In fact, experimentally it is observed that the velocity of the two phases can be significantly different, also depending on the flow pattern (e.g., plug flow, annular flow, bubble flow, stratified flow, slug flow, churn flow) [29]. 
A variation of our unfitted approach has been analyzed for the homogeneous two-phase Stokes problem in [13], where robust estimates were proved for individual terms of the Cauchy stress tensor. In the present paper, the analysis is done in the energy norm, allowing a possible slip between phases. In particular, we show an inf-sup stability property of the unfitted generalized Taylor–Hood finite element pair $\mathbf{P}_{k+1}-P_{k}$, $k\geq 1$, with a stability constant that is independent of the viscosity ratio, slip coefficient, position of the interface with respect to the background mesh, and of course mesh size. This inf-sup property implies stability and optimal error estimates for the unfitted finite element method under consideration, which are also shown. For more details on the isoparametric unfitted finite element, we refer to [31, 32, 34]. Two-phase flow problems with high contrast for the viscosity are known to be especially challenging. While some authors test different viscosity ratios but do not comment on the effects of high contrast on the numerics [15, 26, 46], others show or prove that their method is robust for all viscosity ratios [27, 10, 30, 37, 45]. In other cases, numerical parameters, like the penalty parameters, are adjusted to take into account large differences in the viscosity [18]. Through analysis and a series of numerical tests in two and three dimensions, we demonstrate that our approach is robust not only with respect to the contrast in viscosity, but also with respect to the slip coefficient value and the position of the interface relative to the fixed computational mesh. For all the simulations in this paper, we have used NGsolve [1, 20], a high-performance multiphysics finite element software with a Python interface, and the add-on library ngsxfem [2], which enables the use of unfitted finite element technologies. The remainder of the paper is organized as follows.
In Section 2, we introduce the strong and weak formulations of the two-phase Stokes problem with slip between phases, together with the finite element discretization. We present a stability result in Sec. 3, while in Sec. 4 we prove optimal order convergence for the proposed unfitted finite element approach. Numerical results in 2 and 3 dimensions are shown in Sec. 5. Concluding remarks are provided in Sec. 6. ## 2 Problem definition We consider a fixed domain $\Omega\subset\mathbb{R}^{d}$, with $d=2,3$, filled with two immiscible, viscous, and incompressible fluids separated by an interface $\Gamma$. In this study, we assume $\Gamma$ does not evolve with time although our approach is designed to handle interface evolution. We assume that $\Gamma$ is closed and sufficiently smooth. Interface $\Gamma$ separates $\Omega$ into two subdomains (phases) $\Omega^{+}$ and $\Omega^{-}=\Omega\setminus\overline{\Omega^{+}}$. We assume $\Omega^{-}$ to be completely internal, i.e. $\partial\Omega^{-}\cap\partial\Omega=\emptyset$. See Fig. 1. Let $\mathbf{n}^{\pm}$ be the outward unit normal for $\Omega^{\pm}$ and $\mathbf{n}$ the outward pointing unit normal on $\Gamma$. It holds that $\mathbf{n}^{-}=\mathbf{n}$ and $\mathbf{n}^{+}=-\mathbf{n}$ at $\Gamma$. Figure 1: Illustration of a domain $\Omega$ in $\mathbb{R}^{2}$. On part of the boundary (dashed line) a Neumann boundary condition is imposed, while on the remaining part of the boundary (solid line with three bars) a Dirichlet boundary condition is enforced. Let $\mathbf{u}^{\pm}:\Omega^{\pm}\to\mathbb{R}^{d}$ and $p^{\pm}:\Omega^{\pm}\to\mathbb{R}$ denote the fluid velocity and pressure, respectively. 
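In computations, $\Gamma$ is conveniently described implicitly as the zero level of a scalar function, as in the isoparametric approach of Sec. 2.2.1. The following sketch, for an assumed circular interface, recovers the unit normal $\mathbf{n}=\mathbf{n}^{-}=-\mathbf{n}^{+}$ from such a level-set description by a finite-difference gradient.

```python
# Sketch: implicit interface Gamma = {phi = 0} and its unit normal.
# The circle phi(x) = |x| - R is an assumed example geometry.
import numpy as np

R = 0.5

def phi(x):
    """Level-set function: phi < 0 in Omega^-, phi > 0 in Omega^+."""
    return np.linalg.norm(x) - R

def normal(x, h=1e-6):
    """Outward unit normal on Gamma, approximated by grad(phi)/|grad(phi)|."""
    g = np.array([(phi(x + h * e) - phi(x - h * e)) / (2 * h)
                  for e in np.eye(len(x))])
    return g / np.linalg.norm(g)

x_gamma = np.array([R, 0.0])   # a point on Gamma
n = normal(x_gamma)            # n = n^-;  n^+ = -n on Gamma
assert np.allclose(n, [1.0, 0.0], atol=1e-6)
```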
We assume that the motion of the fluids occupying subdomains $\Omega^{\pm}$ can be modeled by the Stokes equations $\displaystyle-\nabla\cdot\boldsymbol{\sigma}^{\pm}$ $\displaystyle=\mathbf{f}^{\pm}$ $\displaystyle\text{ in }\Omega^{\pm},$ (2.1) $\displaystyle\nabla\cdot{}\mathbf{u}^{\pm}$ $\displaystyle=0$ $\displaystyle\text{ in }\Omega^{\pm},$ (2.2) endowed with boundary conditions $\displaystyle\mathbf{u}^{+}$ $\displaystyle=\mathbf{g},$ $\displaystyle\text{ on }\partial\Omega_{D},$ (2.3) $\displaystyle\boldsymbol{\sigma}^{+}\mathbf{n}^{+}$ $\displaystyle=\mathbf{g}_{N}$ $\displaystyle\text{ on }\partial\Omega_{N}.$ (2.4) Here, $\overline{\partial\Omega_{D}}\cup\overline{\partial\Omega_{N}}=\overline{\partial\Omega}$ and $\partial\Omega_{D}\cap\partial\Omega_{N}=\emptyset$. See Fig. 1. In (2.1), $\mathbf{f}^{\pm}$ are the external body forces and $\boldsymbol{\sigma}^{\pm}$ are the Cauchy stress tensors. For Newtonian fluids, the Cauchy stress tensor has the following expression: $\displaystyle\boldsymbol{\sigma}^{\pm}=-p^{\pm}\mathbf{I}+2\mu_{\pm}\mathbf{D}(\mathbf{u}^{\pm}),\quad\mathbf{D}(\mathbf{u}^{\pm})=\frac{1}{2}(\nabla\mathbf{u}^{\pm}+(\nabla\mathbf{u}^{\pm})^{T})\text{ in }\Omega^{\pm},$ where the constants $\mu_{\pm}$ represent the fluid dynamic viscosities. Finally, $\mathbf{g}$ and $\mathbf{g}_{N}$ in (2.3) and (2.4) are given data. Subproblems (2.1)-(2.2) are coupled at the interface $\Gamma$. The conservation of mass requires the balance of normal fluxes on $\Gamma$: $\displaystyle\mathbf{u}^{+}\cdot\mathbf{n}$ $\displaystyle=\mathbf{u}^{-}\cdot\mathbf{n}\qquad\text{ on }\Gamma.$ (2.5) This is the first coupling condition. We are interested in modelling slip with friction between the two phases.
Thus, we consider the following additional coupling conditions: $\displaystyle\mathbf{P}{\boldsymbol{\sigma}^{+}\mathbf{n}}$ $\displaystyle=f(\mathbf{P}\mathbf{u}^{+}-\mathbf{P}\mathbf{u}^{-})$ $\displaystyle\text{ on }\Gamma,$ (2.6) $\displaystyle\mathbf{P}{\boldsymbol{\sigma}^{-}\mathbf{n}}$ $\displaystyle=-f(\mathbf{P}\mathbf{u}^{-}-\mathbf{P}\mathbf{u}^{+})$ $\displaystyle\text{ on }\Gamma,$ (2.7) where $f$ is a constant that can be seen as a slip coefficient and $\mathbf{P}=\mathbf{P}(\mathbf{x})=I-\mathbf{n}(\mathbf{x})\mathbf{n}(\mathbf{x})^{T}$ for $\mathbf{x}\in\Gamma$ is the orthogonal projection onto the tangent plane. Finally, the jump of the normal stress across $\Gamma$ is given by: $\displaystyle[\mathbf{n}^{T}\boldsymbol{\sigma}\mathbf{n}]^{-}_{+}$ $\displaystyle=\sigma\kappa\qquad\text{ on }\Gamma,$ (2.8) where $\sigma$ is the surface tension coefficient and $\kappa$ is the double mean curvature of the interface. Since the boundary conditions on $\partial\Omega$ do not affect the subsequent discussion, from now on we will assume that a Dirichlet condition (2.3) is imposed on the entire boundary. This will simplify the presentation of the fully discrete problem.

### 2.1. Variational formulation

The purpose of this section is to derive the variational formulation of the coupled problem (2.1)–(2.8). Let us introduce some standard notation. The space of functions whose square is integrable in a domain $\omega$ is denoted by $L^{2}(\omega)$. With $L^{2}_{0}(\omega)$, we denote the space of functions in $L^{2}(\omega)$ with zero mean value over $\omega$. The space of functions whose distributional derivatives of order up to $m\geq 0$ (integer) belong to $L^{2}(\omega)$ is denoted by $H^{m}(\omega)$. The space of vector-valued functions with components in $L^{2}(\omega)$ is denoted with $L^{2}(\omega)^{d}$. $H^{1}(\textrm{div}\ \\!,\omega)$ is the space of vector functions in $L^{2}(\omega)^{d}$ whose divergence belongs to $L^{2}(\omega)$.
Moreover, we introduce the following functional spaces: $\displaystyle V^{-}$ $\displaystyle=H^{1}(\Omega^{-})^{d},~{}V^{+}=\\{\mathbf{u}\in H^{1}(\Omega^{+})^{d},\mathbf{u}{\big{|}_{\partial\Omega_{D}}}=\mathbf{g}\\},~{}V^{+}_{0}=\\{\mathbf{u}\in H^{1}(\Omega^{+})^{d},\mathbf{u}{\big{|}_{\partial\Omega_{D}}}=\boldsymbol{0}\\},$ $\displaystyle V^{\pm}$ $\displaystyle=\\{\mathbf{u}=(\mathbf{u}^{-},\mathbf{u}^{+})\in V^{-}\times V^{+},\mathbf{u}^{-}\cdot\mathbf{n}=\mathbf{u}^{+}\cdot\mathbf{n}~{}\text{on}~{}\Gamma\\},$ $\displaystyle V^{\pm}_{0}$ $\displaystyle=\\{\mathbf{u}=(\mathbf{u}^{-},\mathbf{u}^{+})\in V^{-}\times V^{+}_{0},\mathbf{u}^{-}\cdot\mathbf{n}=\mathbf{u}^{+}\cdot\mathbf{n}~{}\text{on}~{}\Gamma\\},$ $\displaystyle Q^{\pm}$ $\displaystyle=\\{p=(p^{-},p^{+})\in L^{2}(\Omega^{-})\times L^{2}(\Omega^{+})\\}.$ Notice that space $V^{\pm}$ can be also characterized as $(V^{-}\times V^{+})\cap H^{1}(\textrm{div}\ \\!,\Omega)$. We use $(\cdot,\cdot)_{\omega}$ and $\langle,\rangle_{\omega}$ to denote the $L^{2}$ product and the duality pairing, respectively. 
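The normal-matching constraint built into $V^{\pm}$ and the projection $\mathbf{P}=I-\mathbf{n}\mathbf{n}^{T}$ from (2.6)-(2.7) can be illustrated with a small numerical check; the normal and trace vectors below are assumed sample data, chosen so that $\mathbf{u}^{-}\cdot\mathbf{n}=\mathbf{u}^{+}\cdot\mathbf{n}$ holds.

```python
# Sketch: the tangential projection P = I - n n^T used in (2.6)-(2.7),
# applied to assumed sample traces with matching normal components.
import numpy as np

n = np.array([0.0, 0.0, 1.0])       # unit normal on Gamma
P = np.eye(3) - np.outer(n, n)      # orthogonal projection onto tangent plane

# P annihilates the normal direction and is idempotent:
assert np.allclose(P @ n, 0.0)
assert np.allclose(P @ P, P)

# Two traces with matching normal components, as required in V^{\pm}:
u_minus = np.array([1.0, 2.0, 0.7])
u_plus = np.array([-3.0, 0.5, 0.7])
assert np.isclose(u_minus @ n, u_plus @ n)

# The slip terms act only on the tangential jump, which has no normal part:
jump_t = P @ u_minus - P @ u_plus
assert np.isclose(jump_t @ n, 0.0)
```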
The integral formulation of the problem (2.1)-(2.8) reads: Find $(\mathbf{u},p)\in V^{\pm}\times L^{2}(\Omega)/\mathbb{R}$ such that $\displaystyle-(p^{-},\nabla\cdot\mathbf{v}^{-})_{\Omega^{-}}-(p^{+},\nabla\cdot\mathbf{v}^{+})_{\Omega^{+}}+2(\mu_{-}\mathbf{D}(\mathbf{u}^{-}),\mathbf{D}(\mathbf{v}^{-}))_{\Omega^{-}}+2(\mu_{+}\mathbf{D}(\mathbf{u}^{+}),\mathbf{D}(\mathbf{v}^{+}))_{\Omega^{+}}$ $\displaystyle\quad\quad+\langle f(\mathbf{P}\mathbf{u}^{-}-\mathbf{P}\mathbf{u}^{+}),\mathbf{P}\mathbf{v}^{-}\rangle_{\Gamma}+\langle f(\mathbf{P}\mathbf{u}^{+}-\mathbf{P}\mathbf{u}^{-}),\mathbf{P}\mathbf{v}^{+}\rangle_{\Gamma}$ $\displaystyle\quad\quad=(\mathbf{f}^{-},\mathbf{v}^{-})_{\Omega^{-}}+(\mathbf{f}^{+},\mathbf{v}^{+})_{\Omega^{+}}+\langle\sigma\kappa,\mathbf{v}^{-}\cdot\mathbf{n}\rangle_{\Gamma}$ (2.9) $\displaystyle(\nabla\cdot\mathbf{u}^{-},q^{-})_{\Omega^{-}}+(\nabla\cdot\mathbf{u}^{+},q^{+})_{\Omega^{+}}=0$ (2.10) for all $(\mathbf{v},q)\in V^{\pm}_{0}\times Q^{\pm}$. The interface terms in (2.9) have been obtained using coupling conditions (2.6), (2.7), and (2.8) as follows: $\displaystyle-\langle\boldsymbol{\sigma}^{-}\mathbf{n},\mathbf{v}^{-}\rangle_{\Gamma}+\langle\boldsymbol{\sigma}^{+}\mathbf{n},\mathbf{v}^{+}\rangle_{\Gamma}$ $\displaystyle=-\langle\mathbf{P}\boldsymbol{\sigma}^{-}\mathbf{n},\mathbf{P}\mathbf{v}^{-}\rangle_{\Gamma}+\langle\mathbf{P}\boldsymbol{\sigma}^{+}\mathbf{n},\mathbf{P}\mathbf{v}^{+}\rangle_{\Gamma}-\langle[\mathbf{n}^{T}\boldsymbol{\sigma}\mathbf{n}]^{-}_{+},\mathbf{v}^{-}\cdot\mathbf{n}\rangle_{\Gamma}$ $\displaystyle=\langle f(\mathbf{P}\mathbf{u}^{-}-\mathbf{P}\mathbf{u}^{+}),\mathbf{P}\mathbf{v}^{-}\rangle_{\Gamma}+\langle f(\mathbf{P}\mathbf{u}^{+}-\mathbf{P}\mathbf{u}^{-}),\mathbf{P}\mathbf{v}^{+}\rangle_{\Gamma}$ $\displaystyle\quad-\langle\sigma\kappa,\mathbf{v}^{-}\cdot\mathbf{n}\rangle_{\Gamma}.$ Notice that problem (2.9)-(2.10) can be rewritten as: Find $(\mathbf{u},p)\in V^{\pm}\times L^{2}(\Omega)/\mathbb{R}$ such 
that $\left\\{\begin{split}a(\mathbf{u},\mathbf{v})+b(\mathbf{v},p)&=r(\mathbf{v})\\\ b(\mathbf{u},q)&=0\end{split}\right.$ (2.11) for all $(\mathbf{v},q)\in V_{0}^{\pm}\times Q^{\pm}$, where $\displaystyle a(\mathbf{u},\mathbf{v})=$ $\displaystyle 2(\mu_{-}\mathbf{D}(\mathbf{u}^{-}),\mathbf{D}(\mathbf{v}^{-}))_{\Omega^{-}}+2(\mu_{+}\mathbf{D}(\mathbf{u}^{+}),\mathbf{D}(\mathbf{v}^{+}))_{\Omega^{+}}+\langle f(\mathbf{P}\mathbf{u}^{-}-\mathbf{P}\mathbf{u}^{+}),\mathbf{P}\mathbf{v}^{-}-\mathbf{P}\mathbf{v}^{+}\rangle_{\Gamma},$ $\displaystyle b(\mathbf{v},p)=$ $\displaystyle-(p^{-},\nabla\cdot\mathbf{v}^{-})_{\Omega^{-}}-(p^{+},\nabla\cdot\mathbf{v}^{+})_{\Omega^{+}},$ $\displaystyle r(\mathbf{v})=$ $\displaystyle(\mathbf{f}^{-},\mathbf{v}^{-})_{\Omega^{-}}+(\mathbf{f}^{+},\mathbf{v}^{+})_{\Omega^{+}}+\langle\sigma\kappa,\mathbf{v}^{-}\cdot\mathbf{n}\rangle_{\Gamma}.$ ### 2.2. Finite element discretization We consider a family of shape regular triangulations $\\{\mathcal{T}_{h}\\}_{h>0}$ of $\Omega$. We adopt the convention that the elements $T$ and edges $e$ are open sets and use the over-line symbol to refer to their closure. Let $h_{T}$ denote the diameter of element $T\in\mathcal{T}_{h}$ and $h_{e}$ the diameter of edge $e$. The set of elements intersecting $\Omega^{\pm}$ and the set of elements having a nonzero intersection with $\Gamma$ are $\displaystyle\mathcal{T}^{\pm}_{h}=\\{T\in\mathcal{T}_{h}:T\cap\Omega^{\pm}\neq\emptyset\\},\quad\mathcal{T}_{h}^{\Gamma}=\\{T\in\mathcal{T}_{h}:\overline{T}\cap\Gamma\neq\emptyset\\},$ (2.12) respectively. We assume $\\{\mathcal{T}_{h}^{\Gamma}\\}$ to be quasi-uniform. However, in practice adaptive mesh refinement is possible. The domain formed by all tetrahedra in $\mathcal{T}_{h}^{\Gamma}$ is denoted by $\Omega^{\Gamma}_{h}:=\text{int}(\cup_{T\in\mathcal{T}_{h}^{\Gamma}}\overline{T})$. 
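The element sets in (2.12) can be computed from the sign of a level-set function. The following sketch classifies an assumed two-triangle mesh cut by the planar interface $\phi(x,y)=x-0.5$; testing signs only at the vertices is a simplification of the exact intersection test.

```python
# Sketch: classifying background elements into T_h^-, T_h^+ and T_h^Gamma
# as in (2.12), via the sign of a level-set function at the vertices.
# The two-triangle mesh and planar interface are assumed illustration data.
import numpy as np

phi = lambda v: v[0] - 0.5   # phi < 0 in Omega^-, phi > 0 in Omega^+

# vertices of two triangles tiling the unit square
tris = [np.array([[0, 0], [1, 0], [0, 1]], float),
        np.array([[1, 0], [1, 1], [0, 1]], float)]

T_minus, T_plus, T_gamma = [], [], []
for k, T in enumerate(tris):
    signs = np.sign([phi(v) for v in T])
    if signs.max() > 0:
        T_plus.append(k)     # T intersects Omega^+
    if signs.min() < 0:
        T_minus.append(k)    # T intersects Omega^-
    if signs.min() < 0 < signs.max():
        T_gamma.append(k)    # closure of T meets Gamma

# Both triangles are cut by the line x = 0.5:
assert T_gamma == [0, 1] and T_minus == [0, 1] and T_plus == [0, 1]
```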
We define the $h$-dependent domains: $\displaystyle\Omega_{h}^{\pm}=\text{int}\left(\cup_{T\in\mathcal{T}_{h}^{\pm}}\overline{T}\right)$ (2.13) and the set of faces of $\mathcal{T}_{h}^{\Gamma}$ restricted to the interior of $\Omega_{h}^{\pm}$: $\displaystyle\mathcal{E}_{h}^{\Gamma,\pm}=\\{e=\text{int}(\partial T_{1}\cap\partial T_{2}):T_{1},T_{2}\in\mathcal{T}_{h}^{\pm}~{}\text{and}~{}T_{1}\cap\Gamma\neq\emptyset~{}\text{or}~{}T_{2}\cap\Gamma\neq\emptyset\\}.$ (2.14) For the space discretization of the bulk fluid problems, we restrict our attention to the inf-sup stable finite element pair $\mathbf{P}_{k+1}-P_{k}$, $k\geq 1$, i.e. Taylor-Hood elements. Specifically, we consider the spaces of continuous finite element pressures given by: $\displaystyle Q_{h}^{-}=\\{p\in C(\Omega_{h}^{-}):p|_{T}\in P_{k}(T)~{}\forall T\in\mathcal{T}^{-}_{h}\\}.$ (2.15) Space $Q_{h}^{+}$ is defined analogously. Our pressure space is given by: $\displaystyle Q^{\pm}_{h}=\\{p=(p^{-},p^{+})\in Q_{h}^{-}\times Q_{h}^{+}\,:\,\int_{\Omega^{-}}\mu_{-}^{-1}p^{-}+\int_{\Omega^{+}}\mu_{+}^{-1}p^{+}=0\\}.$ Let $\displaystyle V_{h}^{-}$ $\displaystyle=\\{\mathbf{u}\in C(\Omega_{h}^{-})^{d}:\mathbf{u}|_{T}\in\mathbf{P}_{k+1}(T)~{}\forall T\in\mathcal{T}^{-}_{h}\\},$ (2.16) with the analogous definition for $V_{h}^{+}$. Our velocity spaces are given by: $\displaystyle V^{\pm}_{h}=\\{\mathbf{u}=(\mathbf{u}^{-},\mathbf{u}^{+})\in V_{h}^{-}\times V_{h}^{+}\\}$ and $V_{0,h}^{\pm}$, a subspace of $V^{\pm}_{h}$ with vector functions $\mathbf{u}^{+}$ vanishing on $\partial\Omega$. All of the above constructions and spaces readily carry over to tessellations of $\Omega$ into squares or cubes, using $\mathbf{Q}_{k+1}-Q_{k}$ elements. Functions in $Q^{\pm}_{h}$ and $V^{\pm}_{h}$ and their derivatives are multivalued in $\Omega^{\Gamma}_{h}$, the overlap of $\Omega_{h}^{-}$ and $\Omega_{h}^{+}$.
The jump of a multivalued function over the interface is defined as the difference of components coming from $\Omega_{h}^{-}$ and $\Omega_{h}^{+}$, i.e. $[\mathbf{u}]=\mathbf{u}^{-}-\mathbf{u}^{+}$ on $\Gamma$. Note that this is the jump that we have previously denoted with $[\cdot]^{-}_{+}$. We are now using $[\cdot]$ to simplify the notation. Moreover, we define the following averages: $\displaystyle\\{\mathbf{u}\\}=\alpha\mathbf{u}^{+}+\beta\mathbf{u}^{-},$ (2.17) $\displaystyle\langle\mathbf{u}\rangle=\beta\mathbf{u}^{+}+\alpha\mathbf{u}^{-},$ (2.18) where $\alpha$ and $\beta$ are weights to be chosen such that $\alpha+\beta=1$, $0\leq\alpha,\beta\leq 1$. For example, in [15] the setting $\alpha=\mu_{-}/(\mu_{+}+\mu_{-})$ and $\beta=\mu_{+}/(\mu_{+}+\mu_{-})$ is suggested. In [13], the authors choose $\alpha=0$, $\beta=1$ if $\mu_{-}\leq\mu_{+}$ and $\alpha=1$, $\beta=0$ otherwise. Below, in (2.22) and (2.25) we will use relationship: $\displaystyle[ab]=[b]\\{a\\}+\left\langle b\right\rangle[a].$ (2.19) A discrete variational analogue of problem (2.11) reads: Find $\\{\mathbf{u}_{h},p_{h}\\}\in V^{\pm}_{h}\times Q^{\pm}_{h}$ such that $\left\\{\begin{split}a_{h}(\mathbf{u}_{h},\mathbf{v}_{h})+b_{h}(\mathbf{v}_{h},p_{h})&=r_{h}(\mathbf{v}_{h})\\\ b_{h}(\mathbf{u}_{h},q_{h})-b_{p}(p_{h},q_{h})&=0\end{split}\right.$ (2.20) for all $(\mathbf{v}_{h},q_{h})\in V_{0,h}^{\pm}\times Q_{h}^{\pm}$. We define all the bilinear forms in (2.20) for all $\mathbf{u}_{h}\in V_{h}^{\pm}$, $\mathbf{v}_{h}\in V^{\pm}_{0,h}$, $p\in Q^{\pm}$. 
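The identity (2.19) is purely algebraic and holds for any pair of weights with $\alpha+\beta=1$; it can be verified symbolically:

```python
# Sketch: symbolic check of the average/jump identity (2.19),
#   [ab] = [b]{a} + <b>[a],
# with [v] = v^- - v^+, {v} = alpha*v^+ + beta*v^-, <v> = beta*v^+ + alpha*v^-.
import sympy as sp

a_m, a_p, b_m, b_p, alpha = sp.symbols('a_m a_p b_m b_p alpha')
beta = 1 - alpha                               # weights satisfy alpha + beta = 1

jump = lambda vm, vp: vm - vp
avg1 = lambda vm, vp: alpha * vp + beta * vm   # {v}
avg2 = lambda vm, vp: beta * vp + alpha * vm   # <v>

lhs = jump(a_m * b_m, a_p * b_p)
rhs = jump(b_m, b_p) * avg1(a_m, a_p) + avg2(b_m, b_p) * jump(a_m, a_p)
assert sp.expand(lhs - rhs) == 0
print("identity (2.19) holds for any weights with alpha + beta = 1")
```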
Let us start with form $a_{h}(\cdot,\cdot)$: $\displaystyle a_{h}(\mathbf{u}_{h},\mathbf{v}_{h})=$ $\displaystyle a_{i}(\mathbf{u}_{h},\mathbf{v}_{h})+a_{n}(\mathbf{u}_{h},\mathbf{v}_{h})+a_{p}(\mathbf{u}_{h},\mathbf{v}_{h}),$ (2.21) where we group together the terms that arise from the integration by parts of the divergence of the stress tensors: $\displaystyle a_{i}(\mathbf{u}_{h},\mathbf{v}_{h})=$ $\displaystyle 2(\mu_{-}\mathbf{D}(\mathbf{u}_{h}^{-}),\mathbf{D}(\mathbf{v}_{h}^{-}))_{\Omega^{-}}+2(\mu_{+}\mathbf{D}(\mathbf{u}_{h}^{+}),\mathbf{D}(\mathbf{v}_{h}^{+}))_{\Omega^{+}}+\langle f[\mathbf{P}\mathbf{u}_{h}],[\mathbf{P}\mathbf{v}_{h}]\rangle_{\Gamma}$ $\displaystyle-2\langle\\{\mu\mathbf{n}^{T}\mathbf{D}(\mathbf{u}_{h})\mathbf{n}\\},[\mathbf{v}_{h}\cdot\mathbf{n}]\rangle_{\Gamma},$ (2.22) and the terms that enforce condition (2.5) weakly using Nitsche’s method $\displaystyle a_{n}(\mathbf{u}_{h},\mathbf{v}_{h})=\sum_{T\in\mathcal{T}_{h}^{\Gamma}}\frac{\gamma}{h_{T}}\\{\mu\\}\langle[\mathbf{u}_{h}\cdot\mathbf{n}],[\mathbf{v}_{h}\cdot\mathbf{n}]\rangle_{\Gamma\cap T}-2\langle\\{\mu\mathbf{n}^{T}\mathbf{D}(\mathbf{v}_{h})\mathbf{n}\\},[\mathbf{u}_{h}\cdot\mathbf{n}]\rangle_{\Gamma}.$ (2.23) We recall that $h_{T}$ is the diameter of element $T\in\mathcal{T}_{h}$. To define the penalty terms $a_{p}(\mathbf{u}_{h},\mathbf{v}_{h})$ we need $\omega_{e}$, the facet patch for $e\in\mathcal{E}_{h}^{\Gamma,\pm}$ consisting of all $T\in\mathcal{T}_{h}$ sharing $e$. 
Then, we set $\displaystyle a_{p}(\mathbf{u}_{h},\mathbf{v}_{h})$ $\displaystyle={\mu_{-}}\mathbf{J}_{h}^{-}(\mathbf{u}_{h},\mathbf{v}_{h})+{\mu_{+}}\mathbf{J}_{h}^{+}(\mathbf{u}_{h},\mathbf{v}_{h}),$ $\displaystyle\mathbf{J}_{h}^{\pm}(\mathbf{u}_{h},\mathbf{v}_{h})$ $\displaystyle=\gamma^{\pm}_{\mathbf{u}}\sum_{e\in\mathcal{E}_{h}^{\Gamma,\pm}}\frac{1}{h^{2}_{e}}\int_{\omega_{e}}(\mathbf{u}_{1}^{e}-\mathbf{u}_{2}^{e})\cdot(\mathbf{v}_{1}^{e}-\mathbf{v}_{2}^{e})dx,$ (2.24) where $\mathbf{u}_{1}^{e}$ is the componentwise canonical extension of a polynomial vector function $\mathbf{u}^{\pm}_{h}$ from $T_{1}$ to $\mathbb{R}^{d}$, while $\mathbf{u}_{2}^{e}$ is the canonical extension of $\mathbf{u}^{\pm}_{h}$ from $T_{2}$ to $\mathbb{R}^{d}$(and similarly for $\mathbf{v}_{1}$, $\mathbf{v}_{2}$). We recall that $h_{e}$ is the diameter of facet $e\in\mathcal{E}_{h}^{\Gamma,\pm}$. This version of the ghost penalty stabilization has been proposed in [39]. In [33], it was shown to be essentially equivalent to other popular ghost penalty stabilizations such as local projection stabilization [11] and normal derivative jump stabilization [12]. In the context of the Stokes problem, this stabilization was recently used in [44]. For the analysis in Sec. 3 and 4, we also define $\mathbf{J}_{h}^{\pm}(\mathbf{u},\mathbf{v})$ for arbitrary smooth functions $\mathbf{u},\mathbf{v}$ in $\Omega_{h}^{\pm}$. In this case, we set $\mathbf{u}_{1}=\left(\Pi_{T_{1}}\mathbf{u}|_{T_{1}}\right)^{e}$, $\mathbf{u}_{2}=\left(\Pi_{T_{2}}\mathbf{u}|_{T_{2}}\right)^{e}$, where $\Pi_{T_{i}}$ is the $L^{2}(T_{i})$-orthogonal projection into the space of degree $k+1$ polynomial vector functions on $T_{i}$. 
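In one dimension, the ghost-penalty term (2.24) takes a transparent form. The following scalar sketch (assumed quadratic data, unit penalty parameter) integrates the squared difference of the canonical polynomial extensions over a two-element facet patch; the penalty vanishes exactly when the discrete function extends smoothly across the facet.

```python
# Sketch: the "direct" ghost-penalty term (2.24) on a 1D facet patch.
# Elements T1 = [0, 1] and T2 = [1, 2] share the facet e = {1}; each
# carries a polynomial whose canonical extension is the same polynomial
# evaluated on the whole patch omega_e = T1 u T2. Data are assumed.
import numpy as np

p1 = np.polynomial.Polynomial([0.0, 1.0, 1.0])   # x + x**2 on T1
p2 = np.polynomial.Polynomial([0.5, 1.0, 0.5])   # 0.5 + x + 0.5*x**2 on T2

diff_sq = ((p1 - p2) ** 2).integ()   # antiderivative of (p1 - p2)^2
h_e = 1.0                            # facet "diameter" (element size)
penalty = (1.0 / h_e**2) * (diff_sq(2.0) - diff_sq(0.0))

# Positive for disagreeing extensions, zero for a single global polynomial:
assert penalty > 0.0
assert np.isclose((((p1 - p1) ** 2).integ())(2.0), 0.0)
```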
The remaining terms coming from the integration by parts of the divergence of the stress tensors are contained in $\displaystyle b_{h}(\mathbf{v}_{h},p_{h})=$ $\displaystyle-(p_{h}^{-},\nabla\cdot\mathbf{v}_{h}^{-})_{\Omega^{-}}-(p_{h}^{+},\nabla\cdot\mathbf{v}_{h}^{+})_{\Omega^{+}}+\left\langle\\{p_{h}\\},[\mathbf{v}_{h}\cdot\mathbf{n}]\right\rangle_{\Gamma},$ (2.25) and the penalty terms are grouped together in $\displaystyle b_{p}(p_{h},q_{h})$ $\displaystyle=\mu_{-}^{-1}J_{h}^{-}(p_{h},q_{h})+\mu_{+}^{-1}J_{h}^{+}(p_{h},q_{h}),$ $\displaystyle J_{h}^{\pm}(p_{h},q_{h})$ $\displaystyle={\gamma_{p}^{\pm}}\sum_{e\in\mathcal{E}_{h}^{\Gamma,\pm}}\int_{\omega_{e}}(p_{1}^{e}-p_{2}^{e})(q_{1}^{e}-q_{2}^{e})dx,$ (2.26) where $p_{1}^{e},p_{2}^{e},q_{1}^{e},q_{2}^{e}$ are canonical polynomial extensions as defined above. Finally, $\displaystyle r_{h}(\mathbf{v}_{h})=$ $\displaystyle(\mathbf{f}_{h}^{-},\mathbf{v}_{h}^{-})_{\Omega^{-}}+(\mathbf{f}_{h}^{+},\mathbf{v}_{h}^{+})_{\Omega^{+}}+\langle\sigma\kappa,\left\langle\mathbf{v}_{h}\cdot\mathbf{n}\right\rangle\rangle_{\Gamma}.$ We recall that some of the interface terms in $a_{i}(\cdot,\cdot)$ and $b_{h}(\cdot,\cdot)$ have been obtained using relationship (2.19) and the interface conditions. The parameters $\gamma^{\pm}_{\mathbf{u}}$, $\gamma_{p}^{\pm}$ and $\gamma$ are all assumed to be independent of $\mu_{\pm}$, $h$, and the position of $\Gamma$ relative to the underlying mesh. The parameter $\gamma$ in (2.23) needs to be sufficiently large to ensure the coercivity of the bilinear form $a_{h}(\cdot,\cdot)$. The parameters $\gamma^{\pm}_{\mathbf{u}}$, $\gamma_{p}^{\pm}$ can be tuned to improve the numerical performance of the method. #### 2.2.1. Numerical integration For an arbitrary smooth $\Gamma$, it is not feasible to compute exactly the integrals over cut elements and over $\Gamma$ that enter the definition of the bilinear forms. 
We face the same problem if $\Gamma$ is given implicitly as the zero level of a piecewise polynomial function of degree greater than one. A piecewise linear approximation of $\Gamma$ on the given mesh and a polygonal approximation of the subdomains lead to a second-order geometric consistency error, which is suboptimal for Taylor–Hood elements. To ensure a geometric error of the same or higher order than the finite element (FE) approximation error, we define numerical quadrature rules on the given mesh using the isoparametric approach proposed in [31]. In the isoparametric approach, one considers a smooth function $\phi$ such that $\pm\phi>0$ in $\Omega^{\pm}$ and $|\nabla\phi|>0$ in a sufficiently wide strip around $\Gamma$. Next, one defines polygonal auxiliary domains $\Omega^{\pm}_{1}$ given by $\Omega^{\pm}_{1}:=\\{\mathbf{x}\in\mathbb{R}^{d}\,:\,\pm I_{h}^{1}(\phi)>0\\},$ where $I_{h}^{1}$ is the continuous piecewise linear interpolation of $\phi$ on $\mathcal{T}_{h}$. The interface $\Gamma_{1}$ between $\Omega^{+}_{1}$ and $\Omega^{-}_{1}$ is then $\Gamma_{1}:=\\{\mathbf{x}\in\mathbb{R}^{d}\,:\,I_{h}^{1}(\phi)=0\\}.$ On $\Omega^{\pm}_{1}$ and $\Gamma_{1}$, standard quadrature rules can be applied elementwise. Since using $\Omega^{\pm}_{1}$, $\Gamma_{1}$ alone limits the accuracy to second order, one further constructs a transformation of the mesh in $\mathcal{T}^{\Gamma}_{h}$ with the help of an explicit mapping $\Psi_{h}$ parameterized by a finite element function. The mapping $\Psi_{h}$ is such that $\Gamma_{1}$ is mapped approximately onto $\Gamma$; see [31] for how $\Psi_{h}$ is constructed. Then, $\widetilde{\Omega}^{\pm}=\Psi_{h}(\Omega^{\pm}_{1})$, $\widetilde{\Gamma}=\Psi_{h}(\Gamma_{1})$ are high order accurate approximations to the phases and interface which have an explicit representation, so that the integration over $\widetilde{\Omega}^{\pm}$ and $\widetilde{\Gamma}$ can be done exactly. 
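The first ingredient of this construction, locating $\Gamma_{1}=\\{I_{h}^{1}(\phi)=0\\}$, reduces on each cut edge to finding the root of the linear interpolant of $\phi$ between the two endpoints. The sketch below is illustrative only (the actual mapping $\Psi_{h}$ is constructed as in [31]); it computes the cut points of $\Gamma_{1}$ on the edges of a single simplex.

```python
import numpy as np

def cut_points(verts, phi_vals):
    """Intersections of {I_h^1(phi) = 0} with the edges of a simplex.

    verts    : (n, d) array of vertex coordinates
    phi_vals : (n,) level-set values at the vertices
    Returns the points where the linear interpolant changes sign on an edge.
    """
    pts = []
    n = len(verts)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = phi_vals[i], phi_vals[j]
            if a * b < 0:            # sign change => the linear interpolant has a root
                t = a / (a - b)      # root of (1 - t) * a + t * b = 0
                pts.append((1 - t) * verts[i] + t * verts[j])
    return pts

# example: a triangle cut by the line y = 0.25, i.e. phi(x, y) = y - 0.25
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
phi = tri[:, 1] - 0.25
print(cut_points(tri, phi))  # two cut points, both with y == 0.25
```

Connecting such cut points elementwise yields the piecewise planar interface $\Gamma_{1}$, on which standard quadrature rules can be applied before the mesh transformation $\Psi_{h}$ lifts the accuracy beyond second order.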
The finite element spaces have to be adapted correspondingly, using the explicit pullback mapping: $\mathbf{v}_{h}\circ\Psi_{h}^{-1}$. ## 3 Stability For the analysis in this and the next section, we assume that the integrals over cut elements in $\Omega^{\pm}$ are computed exactly. In addition, we restrict our attention to the choice $\alpha=0$ and $\beta=1$ for the averages in (2.17)–(2.18), assuming $\mu_{-}\leq\mu_{+}$. The key to the stability analysis of the two-phase Stokes problem is an inf-sup stability property of the unfitted generalized Taylor–Hood finite element pair, which extends the classical LBB stability result for the standard $\mathbf{P}_{k+1}-P_{k}$ Stokes element from [7]. There is no similar stability result in the literature for $\mathbf{Q}_{k+1}-Q_{k}$ unfitted elements. However, we expect that the extension, and hence the analysis below, carries over to these elements as well. One is interested in an inf-sup inequality with a stability constant that is independent of the viscosity ratio, the position of $\Gamma$ with respect to the background mesh, and, of course, the mesh size $h$. The result is given in the following lemma. ###### Lemma 3.1 Denote by $V_{h}$ the space of continuous $P_{k+1}$ finite element vector functions on $\Omega$, $V_{h}=\\{\mathbf{u}\in C(\Omega)^{d}:\mathbf{u}|_{T}\in\mathbf{P}_{k+1}(T)~{}\forall T\in\mathcal{T}_{h}\\}$. 
There exists $h_{0}>0$ such that for all $h<h_{0}$ and any $q_{h}\in Q_{h}^{\pm}$ there exists $\mathbf{v}_{h}\in V_{h}$ satisfying $\begin{split}\mu_{-}^{-1}\|q_{h}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{h}^{+}\|_{{\Omega}_{h}^{+}}^{2}&\leq(q_{h}^{-},\nabla\cdot\mathbf{v}_{h})_{{\Omega}^{-}}+(q_{h}^{+},\nabla\cdot\mathbf{v}_{h})_{{\Omega}^{+}}+c\,b_{p}(q_{h},q_{h})\\\ \|\mu^{\frac{1}{2}}\nabla\mathbf{v}_{h}\|_{\Omega}^{2}&\leq C\,\left(\mu_{-}^{-1}\|q_{h}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{h}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right).\end{split}$ (3.1) with $h_{0}$ and two positive constants $c$ and $C$ independent of $q_{h}$, $\mu_{\pm}$, the position of $\Gamma$ in the background mesh, and the mesh size $h$. ##### Proof: Consider subdomains $\Omega_{h,i}^{\pm}\subset\Omega^{\pm}$ built of all strictly internal simplexes in each phase: $\overline{\Omega}_{h,i}^{\pm}:=\bigcup\\{\overline{T}\,:\,T\in\mathcal{T}_{h},\quad T\subset\Omega^{\pm}\\}.$ The following two results are central to the proof. 
First, we have the uniform inf-sup inequalities in ${\Omega}_{h,i}^{-}$ and ${\Omega}_{h,i}^{+}$ [22]: there exist constants $C_{\pm}$ independent of the position of $\Gamma$ and $h$ such that $0<C_{\pm}\leq\inf_{q\in Q_{h}^{\pm}\cap L^{2}_{0}({\Omega}_{h,i}^{\pm})}~{}\sup_{\scriptsize\begin{array}[]{c}\mathbf{v}\in V_{h}\\\ {\rm supp}(\mathbf{v})\subset{\Omega}_{h,i}^{\pm}\end{array}}\frac{(q,\nabla\cdot\mathbf{v})_{{\Omega}_{h,i}^{\pm}}}{\|\mathbf{v}\|_{H^{1}({\Omega}_{h,i}^{\pm})}\|q\|_{{\Omega}_{h,i}^{\pm}}}.$ (3.2) The above result can be equivalently formulated as follows: For any $q\in Q_{h}^{\pm}\cap L^{2}_{0}({\Omega}_{h,i}^{\pm})$ there exist $\mathbf{v}_{h}^{\pm}\in V_{h}$ such that ${\rm supp}(\mathbf{v}_{h}^{\pm})\subset{\Omega}_{h,i}^{\pm}$ and $\|q^{\pm}\|_{{\Omega}_{h,i}^{\pm}}^{2}=\left(q^{\pm},\nabla\cdot\mathbf{v}_{h}^{\pm}\right)_{{\Omega}^{\pm}_{h}},\quad\|\nabla\mathbf{v}_{h}^{\pm}\|_{\Omega}\leq C_{\pm}^{-1}\|q^{\pm}\|_{{\Omega}_{h,i}^{\pm}}.$ (3.3) The second important result is the simple observation that the $L^{2}$ norm of $q_{h}$ in $\Omega_{h}^{\pm}$ can be controlled by the $L^{2}$ norm in ${\Omega}_{h,i}^{\pm}$ plus the stabilization term in (2.26) (see [33, 39]): $\|q_{h}\|_{{\Omega}_{h}^{\pm}}^{2}\leq C\,(\|q_{h}\|_{{\Omega}_{h,i}^{\pm}}^{2}+J_{h}^{\pm}(q_{h},q_{h})),$ (3.4) with some constant $C$ independent of the position of $\Gamma$ and $h$. We note that (3.4) holds also for discontinuous finite elements. Consider now $q_{\mu}=\left\\{\begin{array}[]{r}~{}\mu_{-}|\Omega^{-}|^{-1}\in Q_{h}^{-}\\\ -\mu_{+}|\Omega^{+}|^{-1}\in Q_{h}^{+}.\end{array}\right.$ Note that $q_{\mu}$ satisfies the orthogonality condition imposed for elements from $Q_{h}^{\pm}$, and hence $\mbox{span}\\{q_{\mu}\\}$ is a subspace in $Q_{h}^{\pm}$. 
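This membership is verified by a direct computation, assuming (as we read the definition of $Q_{h}^{\pm}$ given earlier in the paper) that the orthogonality condition is the weighted mean-zero constraint $\mu_{-}^{-1}(q^{-},1)_{\Omega^{-}}+\mu_{+}^{-1}(q^{+},1)_{\Omega^{+}}=0$: $\mu_{-}^{-1}(q_{\mu},1)_{\Omega^{-}}+\mu_{+}^{-1}(q_{\mu},1)_{\Omega^{+}}=\mu_{-}^{-1}\,\mu_{-}|\Omega^{-}|^{-1}|\Omega^{-}|-\mu_{+}^{-1}\,\mu_{+}|\Omega^{+}|^{-1}|\Omega^{+}|=1-1=0.$ 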
Using a trick from [37], we decompose arbitrary $q_{h}\in Q_{h}^{\pm}$ into a component collinear with $q_{\mu}$ and the orthogonal complement in each phase: $q_{h}=q_{1}+q_{0},\quad\text{with}~{}q_{1}\in\mbox{span}\\{q_{\mu}\\},\quad\text{and}~{}~{}(q_{0}^{-},1)_{\Omega_{h,i}^{-}}=(q_{0}^{+},1)_{\Omega_{h,i}^{+}}=0.$ Thus, $q_{1}$ and $q_{0}$ are orthogonal with respect to the $L^{2}$ product in the inner domains $\Omega_{h,i}^{\pm}$. Next, we let $q^{\pm}=\mu^{-\frac{1}{2}}_{\pm}q_{0}^{\pm}$ in (3.3) and for $\mathbf{v}^{\pm}_{h}\in V_{h}$ given by (3.3) consider $\mathbf{v}_{h}^{0}=\mu_{-}^{\frac{1}{2}}\mathbf{v}^{-}_{h}+\mu_{+}^{\frac{1}{2}}\mathbf{v}^{+}_{h}\in V_{h}$. Then, after applying (3.4) and summing up, the relations in (3.3) become $\begin{split}\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h}^{+}}^{2}&\leq C\,\left((q_{0}^{-},\nabla\cdot\mathbf{v}_{h}^{0})_{{\Omega}^{-}}+(q_{0}^{+},\nabla\cdot\mathbf{v}_{h}^{0})_{{\Omega}^{+}}+b_{p}(q_{0},q_{0})\right),\\\ \|\mu^{\frac{1}{2}}\nabla\mathbf{v}_{h}^{0}\|_{\Omega}&\leq C_{0}\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)^{\frac{1}{2}},\end{split}$ (3.5) with $C$ from (3.4) and $C_{0}=\max\\{C_{-}^{-1},C_{+}^{-1}\\}$, both of which are independent of $\mu_{\pm}$ and how $\Gamma$ overlaps the background mesh. In (3.5), we also used the fact that the supports of $\mathbf{v}^{-}_{h}$ and $\mathbf{v}^{+}_{h}$ do not overlap. 
Since ${\rm supp}(\mathbf{v}_{h}^{\pm})\subset{\Omega}^{\pm}$ and $q_{1}^{\pm}$ are constant in ${\Omega}^{\pm}$, integration by parts shows that $(q_{1}^{\pm},\nabla\cdot\mathbf{v}_{h}^{0})_{{\Omega}^{\pm}_{h}}=0.$ (3.6) Next, we need the following result from Lemma 5.1 in [30]: For all $h\leq h_{0}$ there exists $\mathbf{v}_{h}^{1}\in V_{h}$ such that $\begin{split}\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}&=(q_{1},\nabla\cdot\mathbf{v}_{h}^{1})_{{\Omega}^{-}}+(q_{1},\nabla\cdot\mathbf{v}_{h}^{1})_{{\Omega}^{+}},\\\ \|\mu^{\frac{1}{2}}\nabla\mathbf{v}_{h}^{1}\|_{\Omega}&\leq C_{1}\left(\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)^{\frac{1}{2}},\end{split}$ (3.7) with $h_{0}>0$ and $C_{1}>0$ independent of $\mu_{\pm}$ and how $\Gamma$ overlaps the background mesh. The above result follows from the classical inf-sup stability condition for $\mathbf{P}_{2}-P_{1}$ Taylor–Hood elements and a simple scaling and interpolation argument. See [30] for details. 
As the next step, set $\mathbf{v}_{h}=\tau\mathbf{v}_{h}^{0}+\mathbf{v}_{h}^{1}$ with some $\tau>0$ and proceed with calculations using (3.6), (3.5), (3.7), and the Cauchy–Schwarz inequality: $\begin{split}(q_{h}^{-},&\nabla\cdot\mathbf{v}_{h})_{{\Omega}^{-}}+(q_{h}^{+},\nabla\cdot\mathbf{v}_{h})_{{\Omega}^{+}}\\\ &=(q_{1}^{-},\nabla\cdot\mathbf{v}_{h}^{1})_{{\Omega}^{-}}+(q_{1}^{+},\nabla\cdot\mathbf{v}_{h}^{1})_{{\Omega}^{+}}+\tau(q_{0}^{-},\nabla\cdot\mathbf{v}_{h}^{0})_{{\Omega}^{-}}+\tau(q_{0}^{+},\nabla\cdot\mathbf{v}_{h}^{0})_{{\Omega}^{+}}\\\ &\qquad+(q_{0}^{-},\nabla\cdot\mathbf{v}_{h}^{1})_{{\Omega}^{-}}+(q_{0}^{+},\nabla\cdot\mathbf{v}_{h}^{1})_{{\Omega}^{+}}\\\ &\geq\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}+\tau C^{-1}\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)-\tau b_{p}(q_{0},q_{0})\\\ &\qquad-\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}^{-}_{h}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}^{+}_{h}}^{2}\right)^{\frac{1}{2}}d^{\frac{1}{2}}\|\mu^{\frac{1}{2}}\nabla\mathbf{v}_{h}^{1}\|_{\Omega}\\\ &\geq\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}+\tau C^{-1}\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)-\tau b_{p}(q_{0},q_{0})\\\ &\qquad-\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}^{-}_{h}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}^{+}_{h}}^{2}\right)^{\frac{1}{2}}\,C_{1}d^{\frac{1}{2}}\left(\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)^{\frac{1}{2}}\\\ &\geq\frac{1}{2}\left(\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)+\left(\frac{\tau}{C}-\frac{C_{1}^{2}d}{2}\right)\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)-\tau b_{p}(q_{0},q_{0}).\end{split}$ 
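The last inequality in the chain is Young's inequality applied to the mixed term: writing $X=\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h}^{+}}^{2}$ and $Y=\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}$, one has $C_{1}d^{\frac{1}{2}}X^{\frac{1}{2}}Y^{\frac{1}{2}}\leq\frac{C_{1}^{2}d}{2}X+\frac{1}{2}Y,$ which leaves the coefficient $\frac{1}{2}$ in front of the $q_{1}$ terms and produces the factor $\frac{\tau}{C}-\frac{C_{1}^{2}d}{2}$ in front of the $q_{0}$ terms. 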
We set $\tau$ such that $\frac{\tau}{C}-\frac{C_{1}^{2}d}{2}=\frac{1}{2}$ and note that $b_{p}(q_{0},q_{0})=b_{p}(q_{h},q_{h})$. Using this and the orthogonality condition for $q_{0}$, we get $\begin{split}(q_{h},&\nabla\cdot\mathbf{v}_{h})_{{\Omega}^{-}}+(q_{h},\nabla\cdot\mathbf{v}_{h})_{{\Omega}^{+}}\\\ &\geq\frac{1}{2}\left(\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)+\frac{1}{2}\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)-\tau b_{p}(q_{h},q_{h})\\\ &\geq\frac{1}{2}\left(\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h,i}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h,i}^{+}}^{2}\right)+\frac{1}{2}\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h,i}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h,i}^{+}}^{2}\right)-\tau b_{p}(q_{h},q_{h})\\\ &=\frac{1}{2}\left(\mu_{-}^{-1}\|q_{h}^{-}\|_{{\Omega}_{h,i}^{-}}^{2}+\mu_{+}^{-1}\|q_{h}^{+}\|_{{\Omega}_{h,i}^{+}}^{2}\right)-\tau b_{p}(q_{h},q_{h})\\\ &\geq\frac{1}{2C}\left(\mu_{-}^{-1}\|q_{h}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{h}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)-\left(\tau+\frac{1}{2}\right)b_{p}(q_{h},q_{h}).\end{split}$ (3.8) Using $(q_{0}^{\pm},q_{1}^{\pm})_{{\Omega}_{h,i}^{\pm}}=0$, $|\Omega^{\pm}_{h}\setminus\Omega^{\pm}_{h,i}|\leq c\,h$ and so $\|q_{1}^{\pm}\|_{\Omega^{\pm}_{h}\setminus\Omega^{\pm}_{h,i}}\leq ch^{\frac{1}{2}}\|q_{1}^{\pm}\|_{\Omega^{\pm}_{h}}$, we estimate $|\mu_{-}^{-1}(q_{0}^{-},q_{1}^{-})_{{\Omega}_{h}^{-}}+\mu_{+}^{-1}(q_{0}^{+},q_{1}^{+})_{{\Omega}_{h}^{+}}|\\\ \leq c\,h^{\frac{1}{2}}\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)^{\frac{1}{2}}\left(\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)^{\frac{1}{2}}.$ (3.9) From (3.5), (3.7), and (3.9), we also get the following upper bound for $\mathbf{v}_{h}$, 
$\begin{split}\|\mu^{\frac{1}{2}}\nabla\mathbf{v}_{h}\|_{\Omega}^{2}&\leq 2(\|\mu^{\frac{1}{2}}\tau\nabla\mathbf{v}_{h}^{0}\|_{\Omega}^{2}+\|\mu^{\frac{1}{2}}\nabla\mathbf{v}_{h}^{1}\|_{\Omega}^{2})\\\ &\leq 2\tau^{2}C_{0}^{2}\left(\mu_{-}^{-1}\|q_{0}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{0}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)+2C_{1}^{2}\left(\mu_{-}^{-1}\|q_{1}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{1}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)\\\ &\leq\frac{2\max\\{\tau^{2}C_{0}^{2},C_{1}^{2}\\}}{1-c\,h^{\frac{1}{2}}}\left(\mu_{-}^{-1}\|q_{h}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q_{h}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right).\end{split}$ (3.10) The assertion of the lemma follows from (3.8) and (3.10) after simple calculations. $\square$ The next lemma shows the uniform coercivity of the symmetric form $a_{h}(\mathbf{u}_{h},\mathbf{v}_{h})$ in (2.21) on $V_{h}^{\pm}\times V_{h}^{\pm}$. ###### Lemma 3.2 If $\gamma=O(1)$ in (2.23) is sufficiently large, then it holds $a_{h}(\mathbf{u}_{h},\mathbf{u}_{h})\geq C\,\left(\mu_{-}\|\mathbf{D}(\mathbf{u}_{h}^{-})\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}\|\mathbf{D}(\mathbf{u}_{h}^{+})\|_{{\Omega}_{h}^{+}}^{2}+h^{-1}\|\\{\mu\\}[\mathbf{u}_{h}\cdot\mathbf{n}]\|_{\Gamma}^{2}+f\|[\mathbf{P}\mathbf{u}_{h}]\|_{\Gamma}^{2}\right)$ (3.11) $\forall~{}\mathbf{u}_{h}\in V_{h}^{\pm}$, with $C>0$ independent of $\mu_{\pm}$, $h$, $f$, and the position of $\Gamma$ with respect to the background mesh. ##### Proof: For the proof, we need the local trace inequality in $T\in\mathcal{T}^{\Gamma}_{h}$ (see, e.g., [22, 23]): $\|v\|_{T\cap\Gamma}\leq C(h_{T}^{-\frac{1}{2}}\|v\|_{T}+h_{T}^{\frac{1}{2}}\|\nabla v\|_{T}),\quad~{}~{}\forall~{}v\in H^{1}(T),$ (3.12) with a constant $C$ independent of $v$, $T$, how $\Gamma$ intersects $T$, and $h_{T}<h_{0}$ for some arbitrary but fixed $h_{0}$. 
We also need the following estimate: $\|\mathbf{D}(\mathbf{v}_{h}^{\pm})\|^{2}_{L^{2}(\Omega_{h}^{\pm})}\leq C(\|\mathbf{D}(\mathbf{v}_{h}^{\pm})\|^{2}_{L^{2}(\Omega^{\pm})}+\mathbf{J}_{h}^{\pm}(\mathbf{v}_{h}^{\pm},\mathbf{v}_{h}^{\pm})\,),$ (3.13) which follows from (3.4) by applying it componentwise and further using an FE inverse inequality (note the $h^{-2}$ scaling in the definition of $\mathbf{J}_{h}^{\pm}$ in (2.24)). Applying (3.12), finite element inverse inequalities and (3.13), we can bound the interface term $\begin{split}\langle\\{\mu\mathbf{n}^{T}\mathbf{D}(\mathbf{v}_{h})\mathbf{n}\\},&[\mathbf{u}_{h}\cdot\mathbf{n}]\rangle_{\Gamma}=\langle\mu_{-}\mathbf{n}^{T}\mathbf{D}(\mathbf{v}_{h}^{-})\mathbf{n},[\mathbf{u}_{h}\cdot\mathbf{n}]\rangle_{\Gamma}\\\ &\leq\sum_{T\in\mathcal{T}_{h}^{\Gamma}}\left[\frac{h_{T}\delta}{2}\|\mu_{-}^{\frac{1}{2}}\mathbf{n}^{T}\mathbf{D}(\mathbf{v}_{h}^{-})\mathbf{n}\|^{2}_{T\cap\Gamma}+\frac{1}{2h_{T}\delta}\|\mu_{-}^{\frac{1}{2}}[\mathbf{u}_{h}\cdot\mathbf{n}]\|^{2}_{T\cap\Gamma}\right]\\\ &\leq\frac{\delta}{2}\|\mu_{-}^{\frac{1}{2}}\mathbf{n}^{T}\mathbf{D}(\mathbf{v}_{h}^{-})\mathbf{n}\|^{2}_{\Omega_{h}^{-}}+\frac{1}{h_{T}\delta}\\{\mu\\}\|[\mathbf{u}_{h}\cdot\mathbf{n}]\|^{2}_{\Gamma},\quad\forall~{}\delta>0,~{}\mathbf{u}_{h},\mathbf{v}_{h}\in V_{h}^{\pm}.\end{split}$ This estimate with $\mathbf{v}_{h}=\mathbf{u}_{h}$ and with $\delta>0$ sufficiently small, together with the definition of the bilinear form $a_{h}(\mathbf{u}_{h},\mathbf{u}_{h})$, allows us to show its coercivity. $\square$ We further need the continuity result for the velocity stabilization form contained in the next lemma. ###### Lemma 3.3 It holds $a_{p}(\mathbf{v}_{h},\mathbf{v}_{h})\leq C\,\left(\mu_{-}\|\mathbf{D}(\mathbf{v}^{-}_{h})\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}\|\mathbf{D}(\mathbf{v}^{+}_{h})\|_{{\Omega}_{h}^{+}}^{2}\right)\qquad\forall~{}\mathbf{v}_{h}\in V^{\pm}_{h},$ with $C>0$ independent of $\mu_{\pm}$, $h$, and the position of $\Gamma$ in the background mesh. 
##### Proof: For any $\mathbf{v}=\mathbf{v}_{h}^{-}\in V^{-}_{h}$, facet $e\in\mathcal{E}_{h}^{\Gamma,-}$ and the corresponding patch $\omega_{e}$ formed by two tetrahedra $T_{1}$ and $T_{2}$, it holds $\|\mathbf{v}_{1}^{e}-\mathbf{v}_{2}^{e}\|^{2}_{\omega_{e}}=\|\mathbf{v}_{1}-\mathbf{v}_{2}^{e}\|^{2}_{T_{1}}+\|\mathbf{v}_{1}^{e}-\mathbf{v}_{2}\|^{2}_{T_{2}}\leq(1+c)\|\mathbf{v}_{1}-\mathbf{v}_{2}^{e}\|^{2}_{T_{1}},$ where the constant $c$ depends only on the shape regularity of the tetrahedra, since $\mathbf{v}_{1}^{e}-\mathbf{v}_{2}$ on $T_{2}$ is the canonical polynomial extension of $\mathbf{v}_{1}-\mathbf{v}_{2}^{e}$ from $T_{1}$. Now, we need the following local Korn’s inequality: $\|\nabla\mathbf{v}\|_{T}\leq C\|\mathbf{D}(\mathbf{v})\|_{T},\quad\forall~{}\mathbf{v}\in H^{1}(T)^{d},~{}~{}\text{s.t.}~{}\mathbf{v}=0~{}\text{on any face of}~{}T\in\mathcal{T}_{h},$ (3.14) where $C$ depends only on the shape regularity of $T$. The result in (3.14) follows from eq. (3.3) in [9] and the observation that vector fields vanishing on any face of $T$ support only zero rigid motions. A simple scaling argument also proves the local Poincaré inequality: $\|\mathbf{v}\|_{T}\leq Ch_{T}\|\nabla\mathbf{v}\|_{T},\quad\forall~{}\mathbf{v}\in H^{1}(T)^{d},~{}~{}\text{s.t.}~{}\mathbf{v}=0~{}\text{on any face of}~{}T\in\mathcal{T}_{h},$ (3.15) where $C$ depends only on the shape regularity of $T$. 
Applying (3.14), (3.15) and triangle inequalities on $T_{1}$ for $\mathbf{v}_{1}-\mathbf{v}_{2}^{e}$, which vanishes on $e$ (a face of $T_{1}$), we obtain: $\displaystyle\|\mathbf{v}_{1}-\mathbf{v}_{2}^{e}\|^{2}_{T_{1}}$ $\displaystyle\leq C_{p}h^{2}\|\mathbf{D}(\mathbf{v}_{1}-\mathbf{v}_{2}^{e})\|^{2}_{T_{1}}\leq 2C_{p}h^{2}(\|\mathbf{D}\mathbf{v}_{1}\|^{2}_{T_{1}}+\|\mathbf{D}\mathbf{v}_{2}^{e}\|^{2}_{T_{1}})$ $\displaystyle\leq 2C_{p}h^{2}(\|\mathbf{D}\mathbf{v}_{1}\|^{2}_{T_{1}}+c\,\|\mathbf{D}\mathbf{v}_{2}\|^{2}_{T_{2}}),$ (3.16) where for the last inequality we again use shape regularity and the fact that $\mathbf{D}\mathbf{v}_{2}^{e}=(\mathbf{D}\mathbf{v}_{2})^{e}$. Thus, we see that $\|\mathbf{v}_{1}^{e}-\mathbf{v}_{2}^{e}\|^{2}_{\omega_{e}}\leq c\,h^{2}\|\mathbf{D}\mathbf{v}\|^{2}_{\omega_{e}}$, with some $c$ depending only on shape regularity. Summing up over all $e\in\mathcal{E}_{h}^{\Gamma,-}$ leads to the required upper bound for $\mathbf{J}_{h}^{-}(\mathbf{v},\mathbf{v})$: $\mathbf{J}_{h}^{-}(\mathbf{v},\mathbf{v})\leq C\|\mathbf{D}(\mathbf{v})\|^{2}_{\Omega_{h}^{-}}$. Repeating the same argument for the facets in $\mathcal{E}_{h}^{\Gamma,+}$ and summing up the two bounds scaled by the viscosity coefficients proves the lemma. $\square$ The finite element problem (2.20) can be equivalently formulated as follows: Find $\\{\mathbf{u}_{h},p_{h}\\}\in V^{\pm}_{h}\times Q^{\pm}_{h}$ such that $\mathcal{A}(\mathbf{u}_{h},p_{h};\mathbf{v}_{h},q_{h})=r_{h}(\mathbf{v}_{h}),\quad\forall~{}\\{\mathbf{v}_{h},q_{h}\\}\in V^{\pm}_{h}\times Q^{\pm}_{h}$ (3.17) with $\mathcal{A}(\mathbf{u}_{h},p_{h};\mathbf{v}_{h},q_{h})=a_{h}(\mathbf{u}_{h},\mathbf{v}_{h})+b_{h}(\mathbf{v}_{h},p_{h})-b_{h}(\mathbf{u}_{h},q_{h})+b_{p}(p_{h},q_{h}).$ Lemmas 3.1–3.3 enable us to show the inf-sup stability of the bilinear form $\mathcal{A}$. 
The stability result is formulated using the following composite norm: $\|\mathbf{v},q\|^{2}:=\mu_{-}\|\mathbf{D}(\mathbf{v}^{-})\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}\|\mathbf{D}(\mathbf{v}^{+})\|_{{\Omega}_{h}^{+}}^{2}+h^{-1}\|\\{\mu\\}[\mathbf{v}\cdot\mathbf{n}]\|_{\Gamma}^{2}+f\|[\mathbf{P}\mathbf{v}]\|_{\Gamma}^{2}+\mu_{-}^{-1}\|q^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|q^{+}\|_{{\Omega}_{h}^{+}}^{2}$ for $\mathbf{v}\in V^{\pm}_{h},~{}q\in Q^{\pm}_{h}$. ###### Theorem 3.4 There exists $h_{0}>0$ such that for all $h<h_{0}$ it holds $\sup_{\\{\mathbf{v}_{h},q_{h}\\}\in V^{\pm}_{h}\times Q^{\pm}_{h}}\frac{\mathcal{A}(\mathbf{u}_{h},p_{h};\mathbf{v}_{h},q_{h})}{\|\mathbf{v}_{h},q_{h}\|}\geq C\,\|\mathbf{u}_{h},p_{h}\|,\quad\forall\,\\{\mathbf{u}_{h},p_{h}\\}\in V^{\pm}_{h}\times Q^{\pm}_{h},$ with $h_{0}>0$ and $C>0$ independent of $\mu_{\pm}$, $h$, $f$, and the position of $\Gamma$ in the background mesh. ##### Proof: For a given $p_{h}\in Q^{\pm}_{h}$, Lemma 3.1 implies the existence of $\mathbf{w}_{h}\in V_{h}$ such that $b_{h}(\mathbf{w}_{h},p_{h})+b_{p}(p_{h},p_{h})\geq c\,\left(\mu_{-}^{-1}\|p_{h}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|p_{h}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)$ (3.18) and $\|\mu^{\frac{1}{2}}\nabla\mathbf{w}_{h}\|_{\Omega}^{2}\leq C\,\left(\mu_{-}^{-1}\|p_{h}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|p_{h}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right),$ (3.19) with some positive $c$, $C$ independent of $\mu_{\pm}$ and how $\Gamma$ overlaps the background mesh. Next, we extend the finite element function $\mathbf{w}_{h}\in V_{h}$ to an element of the product space $\widehat{\mathbf{w}}_{h}\in V_{h}^{\pm}$ by setting $\widehat{\mathbf{w}}_{h}^{\pm}=\mathbf{w}_{h}|_{\Omega_{h}^{\pm}}\in V_{h}^{\pm}$. We let $\mathbf{v}_{h}=\mathbf{u}_{h}+\tau\widehat{\mathbf{w}}_{h}$ for some $\tau>0$ and $q_{h}=p_{h}$. 
Using the definition of the form $\mathcal{A}$ and (3.18), we calculate $\begin{split}\mathcal{A}(\mathbf{u}_{h}&,p_{h};\mathbf{v}_{h},q_{h})=a_{h}(\mathbf{u}_{h},\mathbf{u}_{h})+\tau a_{h}(\mathbf{u}_{h},\widehat{\mathbf{w}}_{h})+\tau b_{h}(\widehat{\mathbf{w}}_{h},p_{h})+b_{p}(p_{h},p_{h})\\\ &\geq\frac{1}{2}a_{h}(\mathbf{u}_{h},\mathbf{u}_{h})-\frac{\tau^{2}}{2}a_{h}(\widehat{\mathbf{w}}_{h},\widehat{\mathbf{w}}_{h})+\min\\{\tau,1\\}\,c\,\left(\mu_{-}^{-1}\|p_{h}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|p_{h}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right),\end{split}$ (3.20) where we used the Cauchy–Schwarz inequality: $\tau a_{h}(\mathbf{u}_{h},\widehat{\mathbf{w}}_{h})\leq\tau|a_{h}(\mathbf{u}_{h},\mathbf{u}_{h})|^{\frac{1}{2}}|a_{h}(\widehat{\mathbf{w}}_{h},\widehat{\mathbf{w}}_{h})|^{\frac{1}{2}}\leq\frac{1}{2}a_{h}(\mathbf{u}_{h},\mathbf{u}_{h})+\frac{\tau^{2}}{2}a_{h}(\widehat{\mathbf{w}}_{h},\widehat{\mathbf{w}}_{h}).$ Note that $[\widehat{\mathbf{w}}_{h}\cdot\mathbf{n}]=0$ and $[\mathbf{P}\widehat{\mathbf{w}}_{h}]=0$ hold on $\Gamma$. Since all Nitsche and ‘friction’ terms in $a_{h}(\widehat{\mathbf{w}}_{h},\widehat{\mathbf{w}}_{h})$ vanish, the result of Lemma 3.3 and estimate (3.19) imply the upper bound $a_{h}(\widehat{\mathbf{w}}_{h},\widehat{\mathbf{w}}_{h})\leq C\,\|\mu^{\frac{1}{2}}\nabla\widehat{\mathbf{w}}_{h}\|_{\Omega}^{2}\leq C\,\left(\mu_{-}^{-1}\|p_{h}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|p_{h}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right).$ Using this bound in (3.20) and choosing $\tau>0$ small enough, but independent of all problem parameters, leads us to the lower bound $\mathcal{A}(\mathbf{u}_{h},p_{h};\mathbf{v}_{h},q_{h})\geq\frac{1}{2}a_{h}(\mathbf{u}_{h},\mathbf{u}_{h})+c\,\left(\mu_{-}^{-1}\|p_{h}^{-}\|_{{\Omega}_{h}^{-}}^{2}+\mu_{+}^{-1}\|p_{h}^{+}\|_{{\Omega}_{h}^{+}}^{2}\right)\geq c\,\|\mathbf{u}_{h},p_{h}\|^{2},$ (3.21) with some $c>0$ independent of $\mu_{\pm}$, $h$, and the position of $\Gamma$ in the background mesh. 
For the last inequality, we used (3.11). Finally, by the construction of $\mathbf{v}_{h}$ and thanks to (3.19), it is straightforward to see the upper bound: $\|\mathbf{v}_{h},q_{h}\|\leq c\,\|\mathbf{u}_{h},p_{h}\|.$ This, combined with (3.21), proves the theorem. $\square$ The stability of the finite element solution in the composite norm immediately follows from (3.17) and Theorem 3.4: $\|\mathbf{u}_{h},p_{h}\|\leq C\,\sup_{\\{\mathbf{v}_{h},q_{h}\\}\in V^{\pm}_{h}\times Q^{\pm}_{h}}\frac{|r_{h}(\mathbf{v}_{h})|}{\|\mathbf{v}_{h},q_{h}\|},$ where on the right-hand side we see the dual norm of the functional $r_{h}$, and the constant $C$ is independent of the mesh size $h$, the ratio of the viscosity coefficients $\mu_{\pm}$, and the position of $\Gamma$ in the background mesh. ## 4 Error analysis The stability result shown in Sec. 3 and the interpolation properties of finite elements enable us to prove optimal order convergence with uniformly bounded constants. We assume in this section that the solution to problem (2.1)–(2.8) is piecewise smooth in the following sense: $\mathbf{u}^{\pm}\in H^{k+2}(\Omega^{\pm})^{d}$ and $p^{\pm}\in H^{k+1}(\Omega^{\pm})$. For the sake of notation, we define the following semi-norm: $\|\mathbf{u},p\|_{\ast}=\left(\mu_{-}|\mathbf{u}^{-}|_{H^{k+2}(\Omega^{-})}^{2}+\mu_{+}|\mathbf{u}^{+}|_{H^{k+2}(\Omega^{+})}^{2}+\mu_{-}^{-1}|p^{-}|_{H^{k+1}(\Omega^{-})}^{2}+\mu_{+}^{-1}|p^{+}|_{H^{k+1}(\Omega^{+})}^{2}\right)^{\frac{1}{2}}.$ (4.1) Since we assume $\Gamma$ to be at least Lipschitz, there exist extensions $\mathcal{E}\mathbf{u}^{\pm}$ and $\mathcal{E}p^{\pm}$ of the solution from each phase to $\mathbb{R}^{d}$ such that $\mathcal{E}\mathbf{u}^{\pm}\in H^{k+2}(\mathbb{R}^{d})^{d}$, $\mathcal{E}p^{\pm}\in H^{k+1}(\mathbb{R}^{d})$. 
The corresponding norms are bounded as follows: $\|\mathcal{E}\mathbf{u}^{\pm}\|_{H^{k+2}(\mathbb{R}^{d})}\leq C\,\|\mathbf{u}^{\pm}\|_{H^{k+2}(\Omega^{\pm})},\quad\|\mathcal{E}p^{\pm}\|_{H^{k+1}(\mathbb{R}^{d})}\leq C\,\|p^{\pm}\|_{H^{k+1}(\Omega^{\pm})}$ (4.2) see [41]. Denote by $I_{h}\mathbf{u}^{\pm}$ the Scott–Zhang interpolants of $\mathcal{E}\mathbf{u}^{\pm}$ onto $V^{\pm}_{h}$ and $I_{h}\mathbf{u}:=\\{I_{h}\mathbf{u}^{-},I_{h}\mathbf{u}^{+}\\}$. The same notation $I_{h}p^{\pm}$ will be used for the Scott–Zhang interpolants of $\mathcal{E}p^{\pm}$ onto $Q^{\pm}_{h}$. For the pressure interpolants, we can always satisfy the orthogonality condition of $Q^{\pm}_{h}$ by choosing a suitable additive constant in the definition of $p$. Applying trace inequality (3.12), standard approximation properties of $I_{h}$, and bounds (4.2), one obtains the approximation property in the product norm: $\|\mathbf{u}-I_{h}\mathbf{u},p-I_{h}p\|\leq C\,h^{k+1}\|\mathbf{u},p\|_{\ast}.$ (4.3) The following continuity result is an immediate consequence of the Cauchy–Schwarz inequality: $\mathcal{A}(\mathbf{u}-I_{h}\mathbf{u},p-I_{h}p;\,\mathbf{v}_{h},q_{h})\leq C\,\|\mathbf{u}-I_{h}\mathbf{u},p-I_{h}p\|\|\mathbf{v}_{h},q_{h}\|\\\ +|\langle\\{\mu\mathbf{n}^{T}\mathbf{D}(\mathbf{v}_{h})\mathbf{n}\\},[(\mathbf{u}-I_{h}\mathbf{u})\cdot\mathbf{n}]\rangle_{\Gamma}+\langle\\{\mu\mathbf{n}^{T}\mathbf{D}(\mathbf{u}-I_{h}\mathbf{u})\mathbf{n}\\},[\mathbf{v}_{h}\cdot\mathbf{n}]\rangle_{\Gamma}|,$ (4.4) for all $\\{\mathbf{v}_{h},q_{h}\\}\in V_{h}^{\pm}\times Q_{h}^{\pm}$. The last term on the right-hand side in (4.4) requires special treatment. 
Applying the Cauchy–Schwarz inequality, inequalities (3.12) and (3.13), FE inverse inequalities, and approximation properties of the interpolants, we get $\begin{split}|\langle\\{\mu\mathbf{n}^{T}\mathbf{D}(\mathbf{v}_{h})\mathbf{n}\\},[(\mathbf{u}-I_{h}\mathbf{u})\cdot\mathbf{n}]\rangle_{\Gamma}|\leq C\,h^{k+1}\|\mathbf{u},0\|_{\ast}\|\mathbf{v}_{h},0\|,\\\ |\langle\\{\mu\mathbf{n}^{T}\mathbf{D}(\mathbf{u}-I_{h}\mathbf{u})\mathbf{n}\\},[\mathbf{v}_{h}\cdot\mathbf{n}]\rangle_{\Gamma}|\leq C\,h^{k+1}\|\mathbf{u},0\|_{\ast}\|\mathbf{v}_{h},0\|.\end{split}$ (4.5) The consistency of the stabilization term is formalized in the estimates that follow from [33, Lemma 5.5]: For $p^{-}\in H^{k+1}(\Omega^{-})$, $\mathbf{u}^{-}\in H^{k+2}(\Omega^{-})^{d}$, it holds $J_{h}^{-}(p^{-},p^{-})\leq C\,h^{2k+2}\|p^{-}\|_{H^{k+1}(\Omega^{-})}^{2},\quad\mathbf{J}_{h}^{-}(\mathbf{u}^{-},\mathbf{u}^{-})\leq C\,h^{2k+2}\|\mathbf{u}^{-}\|_{H^{k+2}(\Omega^{-})}^{2}.$ (4.6) The above estimates and the stability of the interpolants also imply $\begin{split}J_{h}^{-}(p^{-}-I_{h}p^{-},p^{-}-I_{h}p^{-})&\leq C\,h^{2k+2}{|p^{-}|}_{H^{k+1}(\Omega^{-})}^{2},\\\ \mathbf{J}_{h}^{-}(\mathbf{u}^{-}-I_{h}\mathbf{u}^{-},\mathbf{u}^{-}-I_{h}\mathbf{u}^{-})&\leq C\,h^{2k+2}{|\mathbf{u}^{-}|}_{H^{k+2}(\Omega^{-})}^{2}.\end{split}$ (4.7) Similar estimates to (4.6), (4.7) hold for $J_{h}^{+}$ and $\mathbf{J}_{h}^{+}$ with $p^{+}\in H^{k+1}(\Omega^{+})$, $\mathbf{u}^{+}\in H^{k+2}(\Omega^{+})^{d}$, which can be combined with suitable weights to yield $b_{p}(p-I_{h}p,p-I_{h}p)+a_{p}(\mathbf{u}-I_{h}\mathbf{u},\mathbf{u}-I_{h}\mathbf{u})\leq C\,h^{2k+2}\|\mathbf{u},p\|_{\ast}^{2}.$ (4.8) Denote the error functions by $\mathbf{e}_{u}=\mathcal{E}\mathbf{u}-\mathbf{u}_{h}$ and $e_{p}=\mathcal{E}p-p_{h}$. 
Galerkin orthogonality holds up to the consistency terms: $\mathcal{A}(\mathbf{e}_{u},e_{p};\,\mathbf{v}_{h},q_{h})=b_{p}(p-I_{h}p,q_{h})+a_{p}(\mathbf{u}-I_{h}\mathbf{u},\mathbf{v}_{h}),$ (4.9) for all $\mathbf{v}_{h}\in V_{h}^{\pm}$ and $q_{h}\in Q_{h}^{\pm}$. The result of Lemma 3.3, (4.8) and the trivial bound $b_{p}(q_{h},q_{h})\leq C\|\mathbf{0},q_{h}\|^{2}$ imply the following estimate for the consistency term on the right-hand side of (4.9): $\begin{split}|b_{p}(p&-I_{h}p,q_{h})+a_{p}(\mathbf{u}-I_{h}\mathbf{u},\mathbf{v}_{h})|\\\ &\leq|b_{p}(p-I_{h}p,p-I_{h}p)|^{\frac{1}{2}}|b_{p}(q_{h},q_{h})|^{\frac{1}{2}}+|a_{p}(\mathbf{u}-I_{h}\mathbf{u},\mathbf{u}-I_{h}\mathbf{u})|^{\frac{1}{2}}|a_{p}(\mathbf{v}_{h},\mathbf{v}_{h})|^{\frac{1}{2}}\\\ &\leq C\,h^{k+1}\|\mathbf{u},p\|_{\ast}\|\mathbf{v}_{h},q_{h}\|.\end{split}$ (4.10) The optimal order error estimate in the energy norm is given in the next theorem. ###### Theorem 4.1 For sufficiently regular $\mathbf{u},p$ solving problem (2.1)–(2.8) and $\mathbf{u}_{h},p_{h}$ solving problem (2.20), the following error estimate holds: $\|\mathbf{u}-\mathbf{u}_{h},p-p_{h}\|\leq\,Ch^{k+1}\|\mathbf{u},p\|_{\ast},$ (4.11) with a constant $C$ independent of $h$, the values of the viscosities $\mu_{\pm}$, the slip coefficient $f\geq 0$, and the position of $\Gamma$ with respect to the triangulation $\mathcal{T}_{h}$. ##### Proof: This result follows by standard arguments (see, for example, Section 2.3 in [17]) from the inf-sup stability results of Theorem 3.4, continuity estimates (4.4) and (4.5), Galerkin orthogonality and consistency (4.9)–(4.10), and approximation properties (4.3). 
$\square$ ###### Remark 4.2 If we consider using isoparametric elements to handle numerical integration over cut cells (see section 2.2.1), then the Sobolev seminorms in the definition of $\|\mathbf{u},p\|_{\ast}$ on the right-hand side in (4.11) should be replaced by the full Sobolev norms of the same order; see the error analysis of the isoparametric unfitted FEM in [34]. ## 5 Numerical results The aim of the numerical results collected in this section is twofold: (i) support the theoretical results presented in Sec. 4 and (ii) provide evidence of the robustness of the proposed finite element approach with respect to the contrast in viscosity, slip coefficient value, and position of the interface relative to the fixed computational mesh. For the averages in (2.17)-(2.18), we set $\alpha=0$ and $\beta=1$ for all the numerical experiments since we have $\mu_{-}\leq\mu_{+}$. Recall that this is the choice for the analysis carried out in Sec. 3 and 4. In addition, we set $\gamma^{\pm}_{\mathbf{u}}=0.05$, $\gamma_{p}^{\pm}=0.05$, and $\gamma=40$. The value of all other parameters will depend on the specific test. For all the results presented below, we will report the $L^{2}$ error and a weighted $H^{1}$ error for the velocity defined as $\left(2\mu_{-}\|D(\mathbf{u}-\mathbf{u}_{h}^{-})\|_{L^{2}(\Omega^{-})}^{2}+2\mu_{+}\|D(\mathbf{u}-\mathbf{u}_{h}^{+})\|_{L^{2}(\Omega^{+})}^{2}\right)^{\frac{1}{2}},$ (5.1) and a weighted $L^{2}$ error for the pressure defined as $\left(\mu^{-1}_{-}\|p-p_{h}^{-}\|_{L^{2}(\Omega^{-})}^{2}+\mu^{-1}_{+}\|p-p_{h}^{+}\|_{L^{2}(\Omega^{+})}^{2}\right)^{\frac{1}{2}}.$ (5.2) ### 5.1. 2D tests First, we perform a series of tests in 2D. For all the tests, the domain $\Omega$ is square $[-1,1]\times[-1,1]$ and interface $\Gamma$ is a circle of radius $2/3$ centered at $\mathbf{c}=(c_{1},c_{2})$. Let $(x,y)=(\tilde{x}-c_{1},\tilde{y}-c_{2})$, $(\tilde{x},\tilde{y})\in\Omega$. 
The exact solution we consider is given by: $\displaystyle p^{-}$ $\displaystyle=(x-c_{1})^{3},\hskip 68.28644ptp^{+}=(x-c_{1})^{3}-\frac{1}{2},$ (5.3) $\displaystyle\mathbf{u}^{-}$ $\displaystyle=g^{-}(x,y)\left[\begin{array}[]{c}-y\\\ x\end{array}\right],\qquad\mathbf{u}^{+}=g^{+}(x,y)\left[\begin{array}[]{c}-y\\\ x\end{array}\right],$ (5.8) where $\displaystyle g^{+}(x,y)=\frac{3}{4\mu_{+}}(x^{2}+y^{2}),\quad g^{-}(x,y)=\frac{3}{4\mu_{-}}(x^{2}+y^{2})+\frac{\mu_{-}-\mu_{+}}{3\mu_{+}\mu_{-}}+\frac{1}{f}.$ The forcing terms $\mathbf{f}^{-}$ and $\mathbf{f}^{+}$ are found by plugging the above solution in (2.1). The surface tension coefficient $\sigma$ is set to -0.5. The value of the other physical parameters will be specified for each test. We impose a Dirichlet condition (2.3) on the entire boundary, where function $\mathbf{g}$ is found from $\mathbf{u}^{+}$ in (5.8). Spatial convergence. First, we check the spatial accuracy of the finite element method described in Sec. 2.2. The aim is to validate our implementation of the method and support the theoretical findings in Sec. 4. For this purpose, we consider exact solution (5.3)-(5.8) with $\mathbf{c}=\boldsymbol{0}$ (i.e., interface $\Gamma$ is a circle centered at the origin of the axes), viscosities $\mu_{-}=1$ and $\mu_{+}=10$, and $f=10$. We consider structured meshes of quads with six levels of refinement. The initial triangulation has a mesh size $h=1/2$ and all the other meshes are obtained by halving $h$ till $h=1/128$. We choose to use finite element pairs $\mathbf{Q}_{2}-Q_{1}$. Fig. 2 shows the velocity vectors colored with the velocity magnitude and the pressure computed with mesh $h=1/128$. Fig. 3 shows the $L^{2}$ error and weighted $H^{1}$ error (5.1) for the velocity and weighted $L^{2}$ error (5.2) for the pressure against the mesh size $h$. 
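As a quick sanity check on exact solution (5.3)-(5.8), one can verify symbolically that both velocity fields are divergence-free, as the Stokes problem requires. The sketch below assumes SymPy is available; `mu_m`, `mu_p`, and `slip_f` are stand-ins for $\mu_{-}$, $\mu_{+}$, and the slip coefficient $f$.

```python
import sympy as sp

x, y, mu_m, mu_p, slip_f = sp.symbols("x y mu_m mu_p slip_f", positive=True)

# g^+ and g^- from (5.8)
g_p = sp.Rational(3, 4) * (x**2 + y**2) / mu_p
g_m = (sp.Rational(3, 4) * (x**2 + y**2) / mu_m
       + (mu_m - mu_p) / (3 * mu_p * mu_m) + 1 / slip_f)

for g in (g_m, g_p):
    u1, u2 = -y * g, x * g  # u = g(x, y) * (-y, x), cf. (5.8)
    assert sp.simplify(sp.diff(u1, x) + sp.diff(u2, y)) == 0  # div u = 0
```

Since $g^{\pm}$ depends on $(x,y)$ only through $x^{2}+y^{2}$, the two terms $-y\,\partial_{x}g$ and $x\,\partial_{y}g$ cancel exactly in each phase.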
For the range of mesh sizes under consideration, we observe close to cubic convergence in the $L^{2}$ norm for the velocity and quadratic convergence in the weighted $L^{2}$ norm for the pressure and in the weighted $H^{1}$ norm for the velocity. Figure 2: Approximation of exact solution (5.3)-(5.8) for $\mathbf{c}=\boldsymbol{0}$, $\mu_{-}=1$, $\mu_{+}=10$, and $f=10$, computed with mesh $h=1/128$: velocity vectors colored with the velocity magnitude (left) and pressure (right). Figure 3: 2D test with $\mathbf{c}=\boldsymbol{0}$, $\mu_{-}=1$, $\mu_{+}=10$, and $f=10$: $L^{2}$ error and weighted $H^{1}$ error (5.1) for the velocity and weighted $L^{2}$ error (5.2) for the pressure against the mesh size $h$. Robustness with respect to the viscosity contrast. As mentioned in Sec. 1, the case of high contrast for the viscosities in a two-phase problem is especially challenging from the numerical point of view. To test the robustness of our approach, we consider exact solution (5.3)-(5.8) and fix $\mu_{-}=1$, while we let $\mu_{+}$ vary from 1 to $10^{8}$. We set $\mathbf{c}=\boldsymbol{0}$ and $f=10$. We consider one of the meshes adopted for the previous sets of simulations (with $h=1/64$) and use again $\mathbf{Q}_{2}-Q_{1}$ finite elements. Fig. 4 (left) shows the $L^{2}$ error and weighted $H^{1}$ error (5.1) for the velocity and weighted $L^{2}$ error (5.2) for the pressure against the value of $\mu_{+}$. We observe that all the errors quickly reach a plateau as the $\mu_{+}/\mu_{-}$ ratio increases, after initially decreasing. These results show that our approach is substantially robust with respect to the viscosity contrast $\mu_{+}/\mu_{-}$. Figure 4: 2D test with $\mathbf{c}=\boldsymbol{0}$ and $\mu_{-}=1$: $L^{2}$ error and weighted $H^{1}$ error (5.1) for the velocity and weighted $L^{2}$ error (5.2) for the pressure against the value of $\mu_{+}$ (left) and against the value of the slip coefficient $f$ (right). 
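The convergence orders quoted above for the spatial convergence study can be estimated from the errors on two successive meshes in the usual way, $\mathrm{EOC}=\log(e_{1}/e_{2})/\log(h_{1}/h_{2})$. A minimal helper is sketched below; the error values are synthetic, chosen only to illustrate the formula, not taken from the figures.

```python
import math

def eoc(e_coarse, e_fine, h_coarse, h_fine):
    """Estimated order of convergence from errors on two successive meshes."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Synthetic errors behaving like C*h^3 (velocity L^2 error) and C*h^2
# (pressure / weighted H^1 errors); eoc recovers the exponents:
h1, h2 = 1 / 64, 1 / 128
assert abs(eoc(2.0 * h1**3, 2.0 * h2**3, h1, h2) - 3.0) < 1e-12
assert abs(eoc(5.0 * h1**2, 5.0 * h2**2, h1, h2) - 2.0) < 1e-12
```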
Robustness with respect to the slip coefficient. For the next set of simulations, we consider exact solution (5.3)-(5.8) and let the slip coefficient $f$ in (2.6)-(2.7) vary from 1/256 to 256. Notice that the larger $f$ becomes, the closer the two-phase problem gets to the homogeneous model. The other parameters are set as follows: $\mathbf{c}=\boldsymbol{0}$, $\mu_{-}=1$, and $\mu_{+}=10$. We consider again the structured mesh with mesh size $h=1/64$ and $\mathbf{Q}_{2}-Q_{1}$ finite elements. Fig. 4 (right) shows the $L^{2}$ error and weighted $H^{1}$ error (5.1) for the velocity scaled by the $H^{3}$ norm of $\mathbf{u}$ and weighted $L^{2}$ error (5.2) for the pressure against the value of $f$. We observe that the scaled weighted $H^{1}$ error for the velocity does not vary substantially as $f$ varies, while the other two errors increase as $f$ decreases. When $f$ goes to zero, the external phase loses its control over tangential motions in the internal fluid on $\Gamma$, thus allowing for purely rigid rotations in the perfectly circular $\Omega^{-}$; see the definition of $\mathbf{u}^{-}$ in (5.8). While the seminorm $\|\mathbf{u},p\|_{\ast}$ appearing on the right-hand side in (4.11) remains the same, the full Sobolev norm $\|\mathbf{u}^{-}\|_{k+2}$ grows as $O(f^{-1})$. Since we use isoparametric unfitted FE, we indeed see the uniform error bound with respect to $f\to 0$ if we normalize the error by the full Sobolev norm of the solution. See Remark 4.2. Summarizing, the approach proves to be robust in the energy norm as the physical parameter $f$ varies. Robustness with respect to the position of the interface. We conclude the series of the 2D tests with a set of simulations aimed at checking that our approach is not sensitive to the position of the interface with respect to the background mesh. 
For this purpose, we vary the center of the circle that represents $\Gamma$: $\mathbf{c}=(c_{1},c_{2}),~{}c_{1}=\frac{h}{20}k\cos\left(\frac{k}{10}\pi\right),~{}c_{2}=\frac{h}{20}k\sin\left(\frac{k}{10}\pi\right),\quad k=1,2,...,20,$ (5.9) where $h$ is the mesh size. We set $\mu_{-}=1$, $\mu_{+}=10$, and $f=10$. As in the two previous sets of simulations, we consider the mesh with mesh size $h=1/64$ and the $\mathbf{Q}_{2}-Q_{1}$ pair. Fig. 5 shows the $L^{2}$ error and weighted $H^{1}$ error (5.1) for the velocity and weighted $L^{2}$ error (5.2) for the pressure against the value of $k$ in (5.9). We see that all the errors are fairly insensitive to the position of $\Gamma$ with respect to the background mesh, indicating robustness. Figure 5: 2D test with $\mathbf{c}$ varying as in (5.9), $\mu_{-}=1$, $\mu_{+}=10$, and $f=10$: $L^{2}$ error and weighted $H^{1}$ error (5.1) for the velocity and weighted $L^{2}$ error (5.2) for the pressure against the value of $k$ in (5.9). ### 5.2. 3D tests For the 3D tests, the domain $\Omega$ is the cube $[-1.5,1.5]\times[-1.5,1.5]\times[-1.5,1.5]$ and the interface $\Gamma$ is the unit sphere centered at the origin of the axes. We characterize $\Gamma$ as the zero level set of the function $\phi(\mathbf{x})=||\mathbf{x}||^{2}_{2}-1$, with $\mathbf{x}=(x,y,z)$. We consider the exact solution given by: $\displaystyle p^{+}$ $\displaystyle=\frac{1}{2}x,\hskip 79.6678ptp^{-}=x,$ (5.10) $\displaystyle\mathbf{u}^{-}$ $\displaystyle=g^{-}(x,y,z)\left[\begin{array}[]{c}-y\\\ x\\\ 0\end{array}\right],\qquad\mathbf{u}^{+}=g^{+}(x,y,z)\left[\begin{array}[]{c}-y\\\ x\\\ 0\end{array}\right],$ (5.17) where $\displaystyle g^{+}(x,y,z)=\frac{1}{2\mu_{+}}(x^{2}+y^{2}+z^{2}),$ $\displaystyle g^{-}(x,y,z)=\frac{1}{2\mu_{-}}(x^{2}+y^{2}+z^{2})+\frac{\mu_{-}-2\mu_{+}\mu_{-}-\mu_{+}}{2\mu_{+}\mu_{-}}.$ The forcing terms $\mathbf{f}^{-}$ and $\mathbf{f}^{+}$ are found by plugging the above solution into (2.1). We set $f=1$, $\mu_{-}=1$, and $\mu_{+}=100$. 
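The same symbolic check as in 2D confirms that the 3D velocity (5.17) is divergence-free in each phase. A SymPy sketch (note that $g^{\pm}$ also depends on $z$, but only through $x^{2}+y^{2}+z^{2}$):

```python
import sympy as sp

x, y, z, mu_m, mu_p = sp.symbols("x y z mu_m mu_p", positive=True)

r2 = x**2 + y**2 + z**2
# g^+ and g^- from (5.17)
g_p = r2 / (2 * mu_p)
g_m = r2 / (2 * mu_m) + (mu_m - 2 * mu_p * mu_m - mu_p) / (2 * mu_p * mu_m)

for g in (g_m, g_p):
    ux, uy, uz = -y * g, x * g, sp.Integer(0)  # u = g * (-y, x, 0)
    assert sp.simplify(sp.diff(ux, x) + sp.diff(uy, y) + sp.diff(uz, z)) == 0
```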
The surface tension coefficient is set to $\sigma=-0.5x$. Just like for the 2D tests, we impose a Dirichlet condition (2.3) on the entire boundary, where function $\mathbf{g}$ is found from $\mathbf{u}^{+}$ in (5.17). To verify our implementation of the finite element method in Sec. 2.2 in three dimensions and to further corroborate the results in Sec. 4, we consider structured meshes of tetrahedra with four levels of refinement. The initial triangulation has mesh size $h=1$ and all the other meshes are obtained by halving $h$ till $h=0.125$. All the meshes feature a local one-level refinement near the corners of $\Omega$. We choose to use finite element pair $\mathbf{P}_{2}-P_{1}$. Fig. 6 shows a visualization of the solution computed with mesh $h=0.125$. Fig. 7 shows the $L^{2}$ error and weighted $H^{1}$ error (5.1) for the velocity and weighted $L^{2}$ error (5.2) for the pressure against the mesh size $h$. For the small range of mesh sizes that we consider, we observe almost cubic convergence in the $L^{2}$ norm for the velocity, quadratic convergence in the weighted $L^{2}$ norm for the pressure and in the weighted $H^{1}$ norm for the velocity. Figure 6: Approximation of exact solution (5.10)-(5.17) computed with the mesh with $h=0.125$: velocity vectors colored with the velocity magnitude on the $xz$-section of $\Omega^{+}$ and in $\Omega^{-}$ (left) and pressure in $\Omega^{-}$ and half $\Omega^{+}$ (right). Figure 7: 3D test: $L^{2}$ error and weighted $H^{1}$ error (5.1) for the velocity and weighted $L^{2}$ error (5.2) for the pressure against the mesh size $h$. ## 6 Conclusions In this paper, we focused on the two-phase Stokes problem with slip between phases, which has received much less attention than its homogeneous counterpart (i.e. no slip between the phases). For the numerical approximation of this problem, we chose an isoparametric unfitted finite element approach of the CutFEM or Nitsche-XFEM family. 
For the unfitted generalized Taylor–Hood finite element pair $\mathbf{P}_{k+1}-P_{k}$, we prove stability and optimal error estimates, which follow from an inf-sup stability property. We show that the inf-sup stability constant is independent of the viscosity ratio, slip coefficient, position of the interface with respect to the background mesh and, of course, mesh size. The 2D and 3D numerical experiments we used to test our approach feature an exact solution. They have been designed to support the theoretical findings and demonstrate the robustness of our approach for a wide range of physical parameter values. Finally, we show that our unfitted approach is insensitive to the position of the interface between the two phases with respect to the fixed computational mesh. ## Acknowledgments This work was partially supported by US National Science Foundation (NSF) through grant DMS-1953535. M.O. also acknowledges the support from NSF through DMS-2011444. A.Q. also acknowledges the support from NSF through DMS-1620384. ## References * [1] Netgen/NGSolve. https://ngsolve.org/. * [2] ngsxfem. https://github.com/ngsxfem/ngsxfem/tree/49205a1ae637771a0ed56d4993ce99008f3a00e0. * [3] Slimane Adjerid, Nabil Chaabane, and Tao Lin. An immersed discontinuous finite element method for stokes interface problems. Computer Methods in Applied Mechanics and Engineering, 293:170 – 190, 2015. * [4] D. M. Anderson, G. B. McFadden, and A. A. Wheeler. Diffuse-interface methods in fluid mechanics. Annual Review of Fluid Mechanics, 30(1):139–165, 1998. * [5] S. Basting and M. Weismann. A hybrid level set/front tracking approach for finite element simulations of two-phase flows. Journal of Computational and Applied Mathematics, 270:471–483, 2014\. * [6] Steffen Basting, Annalisa Quaini, Suncica Canic, and Roland Glowinski. Extended ALE method for fluid-structure interaction problems with large structural displacements. Journal of Computational Physics, 331:312 – 336, 2017. 
* [7] Michel Bercovier and Olivier Pironneau. Error estimates for finite element method solution of the Stokes problem in the primitive variables. Numerische Mathematik, 33(2):211–224, 1979. * [8] S. P. Bordas, E. Burman, M. G. Larson, and M. A. Olshanskii, editors. Geometrically Unfitted Finite Element Methods and Applications, volume 121 of Lecture Notes in Computational Science and Engineering. Springer, Berlin, 2018. * [9] Susanne C. Brenner. Korn’s inequalities for piecewise $H^{1}$ vector fields. Mathematics of Computation, pages 1067–1087, 2004. * [10] E. Burman, G. Delay, and A. Ern. An unfitted hybrid high-order method for the Stokes interface problem. hal-02519896v3, 2020. * [11] Erik Burman. Ghost penalty. C. R. Math. Acad. Sci. Paris, 348(21-22):1217–1220, 2010. * [12] Erik Burman, Susanne Claus, Peter Hansbo, Mats G. Larson, and André Massing. CutFEM: Discretizing geometry and partial differential equations. International Journal for Numerical Methods in Engineering, 104(7):472–501, 2015. * [13] Ernesto Cáceres, Johnny Guzmán, and Maxim Olshanskii. New stability estimates for an unfitted finite element method for two-phase Stokes problem. SIAM Journal on Numerical Analysis, 58(4):2165–2192, 2020. * [14] J. Chessa and T. Belytschko. An extended finite element method for two-phase fluids. ASME Journal of Applied Mechanics, 70:10–17, 2003. * [15] Susanne Claus and Pierre Kerfriden. A CutFEM method for two-phase flow problems. Computer Methods in Applied Mechanics and Engineering, 348:185 – 206, 2019. * [16] Jean Donea, Antonio Huerta, J.-Ph. Ponthot, and A. Rodríguez-Ferran. Arbitrary Lagrangian–Eulerian Methods, chapter 14. John Wiley & Sons Ltd., 2004. * [17] A. Ern and J.-L. Guermond. Theory and practice of finite elements, volume 159. Springer, New York, 2013. * [18] Thomas Frachon and Sara Zahedi. A cut finite element method for incompressible two-phase Navier–Stokes flows. Journal of Computational Physics, 384:77 – 98, 2019. * [19] T. P. Fries. 
The intrinsic XFEM for two-fluid flows. International Journal for Numerical Methods in Fluids, 60(4):437–471, 2009. * [20] P. Gangl, K. Sturm, M. Neunteufel, and J. Schöberl. Fully and semi-automated shape differentiation in NGSolve. arXiv:2004.06783, 2020. * [21] S. Groß, V. Reichelt, and A. Reusken. A finite element based level set method for two-phase incompressible flows. Comp. Visual. Sci., 9:239–257, 2006. * [22] Johnny Guzmán and Maxim Olshanskii. Inf-sup stability of geometrically unfitted stokes finite elements. Mathematics of Computation, 87(313):2091–2112, 2018. * [23] A. Hansbo and P. Hansbo. An unfitted finite element method, based on Nitsche’s method, for elliptic interface problems. Comput. Methods Appl. Mech. Engrg., 191:5537–5552, 2002. * [24] Peter Hansbo, Mats G. Larson, and Sara Zahedi. A cut finite element method for a Stokes interface problem. Applied Numerical Mathematics, 85:90 – 114, 2014. * [25] Jerzy Hapanowicz. Slip between the phases in two-phase water–oil flow in a horizontal pipe. International Journal of Multiphase Flow, 34(6):559 – 566, 2008\. * [26] Mohammad R. Hashemi, Pavel B. Ryzhakov, and Riccardo Rossi. An enriched finite element/level-set method for simulating two-phase incompressible fluid flows with surface tension. Computer Methods in Applied Mechanics and Engineering, 370:113277, 2020. * [27] X. He, F. Song, and W. Deng. Stabilized nonconforming Nitsche’s extended finite element method for Stokes interface problems. https://arxiv.org/abs/1905.04844, 2019. * [28] David Jacqmin. Calculation of Two-Phase Navier-Stokes Flows Using Phase-Field Modeling. Journal of Computational Physics, 155(1):96–127, October 1999. * [29] Mohammad J. Kermani and John M. Stockie. The effect of slip velocity on saturation for multiphase condensing mixtures in a pem fuel cell. International Journal of Hydrogen Energy, 36(20):13235 – 13240, 2011. 3rd Iranian Fuel Cell Seminar. * [30] Matthias Kirchhart, Sven Gross, and Arnold Reusken. 
Analysis of an XFEM discretization for Stokes interface problems. SIAM Journal on Scientific Computing, 38(2):A1019–A1043, 2016. * [31] Christoph Lehrenfeld. High order unfitted finite element methods on level set domains using isoparametric mappings. Computer Methods in Applied Mechanics and Engineering, 300:716–733, 2016. * [32] Christoph Lehrenfeld. A higher order isoparametric fictitious domain method for level set domains. In Stéphane P. A. Bordas, Erik Burman, Mats G. Larson, and Maxim A. Olshanskii, editors, Geometrically Unfitted Finite Element Methods and Applications, pages 65–92, Cham, 2017. Springer International Publishing. * [33] Christoph Lehrenfeld and Maxim Olshanskii. An Eulerian finite element method for PDEs in time-dependent domains. ESAIM: Mathematical Modelling and Numerical Analysis, 53(2):585–614, 2019. * [34] Christoph Lehrenfeld and Arnold Reusken. Analysis of a high-order unfitted finite element method for elliptic interface problems. IMA Journal of Numerical Analysis, 38(3):1351–1387, 2018. * [35] A Massing, M.G. Larson, A. Logg, and M.E. Rognes. A stabilized Nitsche overlapping mesh method for the Stokes problem. Numer. Math., 128:73 – 101, 2014. * [36] Nicolas Moës, John Dolbow, and Ted Belytschko. A finite element method for crack growth without remeshing. International Journal for Numerical Methods in Engineering, 46(1):131–150, 1999. * [37] Maxim A Olshanskii and Arnold Reusken. Analysis of a Stokes interface problem. Numerische Mathematik, 103(1):129–149, 2006. * [38] Elin Olsson and Gunilla Kreiss. A conservative level set method for two phase flow. Journal of Computational Physics, 210(1):225 – 246, 2005. * [39] J. Preuß. Higher order unfitted isoparametric space-time FEM on moving domains. Master’s thesis, NAM, University of Göttingen, 2018. * [40] Henning Sauerland and Thomas-Peter Fries. The stable XFEM for two-phase flows. Computers & Fluids, 87:41 – 49, 2013. USNCCM Moving Boundaries. * [41] Elias M Stein. 
Singular integrals and differentiability properties of functions, volume 30. Princeton university press, 1970. * [42] Mark Sussman, Peter Smereka, and Stanley Osher. A level set approach for computing solutions to incompressible two-phase flow. Journal of Computational Physics, 114(1):146 – 159, 1994. * [43] S O Unverdi and G Tryggvason. A front-tracking method for viscous, incompressible, multi-fluid flows. Journal of Computational Physics; (United States), 100, 3 1992. * [44] Henry von Wahl, Thomas Richter, and Christoph Lehrenfeld. An unfitted Eulerian finite element method for the time-dependent Stokes problem on moving domains. arXiv preprint arXiv:2002.02352, 2020. * [45] N. Wang and J. Chen. A nonconforming Nitsche’s extended finite element method for Stokes interface problems. J Sci Comput, 81:342–374, 2019. * [46] Qiuliang Wang and Jinru Chen. A new unfitted stabilized Nitsche’s finite element method for Stokes interface problems. Computers & Mathematics with Applications, 70(5):820 – 834, 2015\.
# Twisted Basic Dolbeault cohomology on transverse Kähler foliations Seoung Dal Jung Department of Mathematics Jeju National University Jeju 690-756 Republic of Korea<EMAIL_ADDRESS> ###### Abstract. In this paper, we study the twisted basic Dolbeault cohomology and transverse hard Lefschetz theorem on a transverse Kähler foliation. We also give some properties of $\Delta_{\kappa}$-harmonic forms and prove a Kodaira-Serre type duality and a Dolbeault isomorphism for the twisted basic Dolbeault cohomology. ###### Key words and phrases: Riemannian foliation, transverse Kähler foliation, basic Dolbeault cohomology, twisted basic Dolbeault cohomology, Kodaira-Serre duality, hard Lefschetz theorem ###### 2010 Mathematics Subject Classification: 53C12; 53C21; 53C55; 57R30; 58J50 ††The author was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2018R1A2B2002046). ## 1\. Introduction Let $(M,\mathcal{F})$ be a smooth manifold with a foliation $\mathcal{F}$. One of the smooth invariants of $\mathcal{F}$ is the basic cohomology. The basic forms of $(M,\mathcal{F})$ are locally forms on the leaf space; that is, forms $\phi$ satisfying $X\lrcorner\phi=X\lrcorner d\phi=0$ for any vector $X$ tangent to the leaves, where $X\lrcorner$ denotes the interior product with $X$. Basic forms are preserved by the exterior derivative and are used to define the basic de Rham cohomology groups $H_{B}^{*}(\mathcal{F})$, defined by $\displaystyle H_{B}^{r}(\mathcal{F})={\ker d_{B}\over{\rm Im}\,d_{B}},$ where $d_{B}$ is the restriction of $d$ to the basic forms. In general, the basic de Rham cohomology group does not necessarily satisfy Poincaré duality; in fact, it satisfies the twisted Poincaré duality [18]: $\displaystyle H_{B}^{r}(\mathcal{F})\cong H_{T}^{q-r}(\mathcal{F}),$ where $q={\rm codim}(\mathcal{F})$ and $H_{T}^{r}(\mathcal{F})={\ker d_{T}\over{\rm Im}\;d_{T}}$ is the cohomology of $d_{T}=d_{B}-\kappa_{B}\wedge$. 
Here $\kappa_{B}$ is the basic part of the mean curvature form $\kappa$ of $\mathcal{F}$. It is well-known [2, 7, 19, 22] that on a compact oriented manifold $M$ with a transversally oriented Riemannian foliation $\mathcal{F}$, $H_{B}^{r}(\mathcal{F})\cong{\rm ker}\Delta_{B}$ is finite dimensional, where $\Delta_{B}$ is the basic Laplacian. Because of this Hodge theorem, researchers have been able to show relationships between curvature bounds and basic cohomology. In [10], J. Hebda proved that a lower bound on transversal Ricci curvature for a Riemannian foliation of a compact manifold causes the space of leaves to be compact and the first basic cohomology group to be trivial. Another example relating the geometry and the topology came in 1991, when M. Min-Oo et al. [20] proved that if the transversal curvature operator of $(M,\mathcal{F})$ is positive definite, then the cohomology $H_{B}^{r}(\mathcal{F})=0\ (0<r<q)$; that is, any basic harmonic $r$-form is trivial. There are many other examples of known relationships between transversal curvature and basic cohomology. Recently, G. Habib and K. Richardson [8] introduced the twisted basic de Rham cohomology $H_{\kappa}^{r}(\mathcal{F})={\ker d_{\kappa}\over{\rm Im}\,d_{\kappa}}$ of the twisted differential operator $d_{\kappa}=d_{B}-\frac{1}{2}\kappa_{B}\wedge$ and proved the Poincaré duality of the twisted basic de Rham cohomology $H_{\kappa}^{*}(\mathcal{F})$ on foliations, that is, $\displaystyle H_{\kappa}^{r}(\mathcal{F})\cong H_{\kappa}^{q-r}(\mathcal{F}).$ Moreover, $H_{\kappa}^{r}(\mathcal{F})\cong\ker\Delta_{\kappa}$, where $\Delta_{\kappa}=d_{\kappa}\delta_{\kappa}+\delta_{\kappa}d_{\kappa}$ is the twisted basic Laplacian (cf. Section 2.2). From the Weitzenböck formula for the twisted basic Laplacian, if the transversal Ricci curvature ${\rm Ric}^{Q}$ is non-negative and either $\mathcal{F}$ is nontaut or ${\rm Ric}^{Q}$ is positive at some point, then $H_{\kappa}^{1}(\mathcal{F})=\\{0\\}$. 
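Note, in passing, that $d_{\kappa}$ indeed squares to zero, so that $H_{\kappa}^{r}(\mathcal{F})$ is well defined: for any basic form $\phi$, $\displaystyle d_{\kappa}^{2}\phi=d_{B}\Big{(}d_{B}\phi-\frac{1}{2}\kappa_{B}\wedge\phi\Big{)}-\frac{1}{2}\kappa_{B}\wedge\Big{(}d_{B}\phi-\frac{1}{2}\kappa_{B}\wedge\phi\Big{)}=-\frac{1}{2}d\kappa_{B}\wedge\phi+\frac{1}{4}\kappa_{B}\wedge\kappa_{B}\wedge\phi=0,$ since the cross terms $\pm\frac{1}{2}\kappa_{B}\wedge d_{B}\phi$ cancel, $d\kappa_{B}=0$ on a compact manifold, and $\kappa_{B}\wedge\kappa_{B}=0$ for the 1-form $\kappa_{B}$.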
Also, the authors [8] proved that $\mathcal{F}$ is taut if and only if $H_{\kappa}^{0}(\mathcal{F})\cong H_{\kappa}^{q}(\mathcal{F})\neq\\{0\\}$ (Tautness Theorem). On a transverse Kähler foliation of codimension $q=2n$, the basic Dolbeault cohomology group of $\bar{\partial}_{B}$ is defined by $H^{r,s}_{B}(\mathcal{F})={\ker\bar{\partial}_{B}\over{\rm Im}\,\bar{\partial}_{B}},$ where $0\leq r,s\leq n$. The Kodaira-Serre duality for the basic Dolbeault cohomology does not necessarily hold unless $\mathcal{F}$ is taut, but we exhibit a version of Kodaira-Serre duality [15] that actually does hold in all cases. That is, $\displaystyle H_{B}^{r,s}(\mathcal{F})\cong H_{T}^{n-r,n-s}(\mathcal{F}),$ where $H_{T}^{r,s}(\mathcal{F})={\ker\bar{\partial}_{T}\over{\rm Im}\;\bar{\partial}_{T}}$ is the Dolbeault cohomology group of $\bar{\partial}_{T}$ (cf. Section 3.2). Recently, G. Habib and L. Vezzoni [9] studied the basic Dolbeault cohomology and proved some vanishing theorems for basic Dolbeault cohomology by using the Weitzenböck formula for the twisted basic Laplacian. In this paper, we define the twisted basic Dolbeault cohomology $H_{\kappa}^{r,s}(\mathcal{F})$ of the twisted operator $\bar{\partial}_{\kappa}$ (cf. Section 3.2) acting on the basic forms of type $(r,s)$ by $\displaystyle H_{\kappa}^{r,s}(\mathcal{F})={\ker\bar{\partial}_{\kappa}\over{\rm Im}\,\bar{\partial}_{\kappa}}.$ The twisted basic Dolbeault cohomology satisfies the Kodaira-Serre duality (Theorem 3.11), that is, $\displaystyle H_{\kappa}^{r,s}(\mathcal{F})\cong H_{\kappa}^{n-r,n-s}(\mathcal{F}).$ Trivially, if $\mathcal{F}$ is taut, then $H_{B}^{r,s}(\mathcal{F})\cong H_{T}^{r,s}(\mathcal{F})\cong H_{\kappa}^{r,s}(\mathcal{F})$. Also, we prove the Hodge decomposition for the twisted basic Dolbeault cohomology (Theorem 3.13) and give some properties of $\Delta_{\kappa}$-harmonic forms. ###### Remark 1.1. 
The twisted basic de Rham cohomology is a special case of the Lichnerowicz basic cohomology on foliations [1]. The Lichnerowicz cohomology is the cohomology of the complex $(\Omega^{*}(M),d_{\theta})$ of differential forms on a smooth manifold $M$ with the de Rham differential operator $d_{\theta}=d+\theta\wedge$ deformed by a closed 1-form $\theta$. The Lichnerowicz cohomology is a natural tool in locally conformal symplectic geometry [8]. ###### Remark 1.2. The Lichnerowicz basic cohomology depends only on the basic class of a closed basic 1-form [1, Proposition 3.0.11]. In particular, the twisted basic de Rham cohomology group depends only on the Álvarez class $[\kappa_{B}]\in H_{B}^{1}(\mathcal{F})$. That is, for any bundle-like metrics associated with the same transverse structure, the twisted basic de Rham cohomologies are isomorphic [8, Theorem 2.11]. ## 2\. The basic cohomology ### 2.1. The basic cohomology Let $\mathcal{F}$ be a Riemannian foliation of codimension $q$ on a $(p+q)$-dimensional Riemannian manifold $(M,g_{M})$, where $g_{M}$ is a bundle-like metric adapted to a holonomy invariant metric $g_{Q}$ on the normal bundle $Q=TM/T\mathcal{F}$; holonomy invariance means that $L_{X}g_{Q}=0$ for all $X\in T\mathcal{F}$, where $T\mathcal{F}$ is the tangent bundle of $\mathcal{F}$ and $L_{X}$ denotes the Lie derivative. Let $\nabla$ be the transverse Levi-Civita connection on the normal bundle $Q$, which is torsion-free and metric with respect to $g_{Q}$ [23, 24]. Let $R^{Q}$ and ${\rm Ric}^{Q}$ be the transversal curvature tensor and the transversal Ricci operator of $\mathcal{F}$ with respect to $\nabla$, respectively. The mean curvature vector $\tau$ of $\mathcal{F}$ is given by $\tau=\sum_{i=1}^{p}\pi(\nabla^{M}_{f_{i}}f_{i}),$ where $\\{f_{i}\\}_{i=1,\cdots,p}$ is a local orthonormal basis of $T\mathcal{F}$ and $\pi:TM\to Q$ is the natural projection. 
Then the mean curvature form $\kappa$ of $\mathcal{F}$ is given by $\kappa(X)=g_{Q}(\tau,\pi(X))$ for any tangent vector $X\in\Gamma TM$. Let $\Omega_{B}^{r}(\mathcal{F})$ be the space of all basic $r$-forms, i.e., $\phi\in\Omega_{B}^{r}(\mathcal{F})$ if and only if $X\lrcorner\phi=0$ and $L_{X}\phi=0$ for any vector $X\in\Gamma T\mathcal{F}$. The foliation $\mathcal{F}$ is said to be minimal if $\kappa=0$. It is well-known [23] that on a compact manifold, $\kappa_{B}$ is closed, i.e., $d\kappa_{B}=0$, where $\kappa_{B}$ is the basic part of $\kappa$. Moreover, the mean curvature form satisfies Rummler’s formula: $d\chi_{\mathcal{F}}=-\kappa\wedge\chi_{\mathcal{F}}+\varphi_{0},\quad\chi_{\mathcal{F}}\wedge*\varphi_{0}=0,$ where $\chi_{\mathcal{F}}$ is the characteristic form of $\mathcal{F}$ and $*$ is the Hodge star operator associated to $g_{M}$. Now we recall the transversal star operator $\bar{*}:\Omega_{B}^{r}(\mathcal{F})\to\Omega_{B}^{q-r}(\mathcal{F})$ given by $\displaystyle\bar{*}\phi=(-1)^{p(q-r)}*(\phi\wedge\chi_{\mathcal{F}}).$ Trivially, $\bar{*}^{2}\phi=(-1)^{r(q-r)}\phi$ for any basic form $\phi\in\Omega_{B}^{r}(\mathcal{F})$. Let $\nu$ be the transversal volume form, that is, $*\nu=\chi_{\mathcal{F}}$. Then the pointwise inner product $\langle\cdot,\cdot\rangle$ on $\Omega_{B}^{*}(\mathcal{F})$ is defined by $\langle\phi,\psi\rangle\nu=\phi\wedge\bar{*}\psi$. Let $d_{B}=d|_{\Omega_{B}^{*}(\mathcal{F})}$ and $d_{T}=d_{B}-\kappa_{B}\wedge$. Then the formal adjoint operators $\delta_{B}$ and $\delta_{T}$ of $d_{B}$ and $d_{T}$ on basic forms are given by $\delta_{B}=(-1)^{q(r+1)+1}\bar{*}d_{T}\bar{*}=\delta_{T}+\kappa_{B}^{\sharp}\lrcorner,\quad\delta_{T}=(-1)^{q(r+1)+1}\bar{*}d_{B}\bar{*},$ (2.1) respectively [2, 19, 24], where $\kappa_{B}^{\sharp}$ is the dual vector of $\kappa_{B}$. 
For a local orthonormal frame $\\{E_{a}\\}_{a=1,\cdots,q}$ of the normal bundle $Q$, $\delta_{T}$ is given [16] by $\delta_{T}=-\sum_{a=1}^{q}E_{a}\lrcorner\nabla_{E_{a}}.$ (2.2) Now we define two Laplacians $\Delta_{B}$ and $\Delta_{T}$ acting on $\Omega_{B}^{*}(\mathcal{F})$ by $\Delta_{B}=d_{B}\delta_{B}+\delta_{B}d_{B},\quad\Delta_{T}=d_{T}\delta_{T}+\delta_{T}d_{T},$ respectively. The Laplacian $\Delta_{B}$ is called the basic Laplacian. Then $\Delta_{B}\bar{*}=\bar{*}\Delta_{T}.$ (2.3) Also, we have the generalization of the usual de Rham-Hodge decomposition. ###### Theorem 2.1. ([19],[24]) Let $(M,\mathcal{F},g_{Q})$ be a transversally oriented Riemannian foliation $\mathcal{F}$ on a compact oriented manifold $M$ with a bundle-like metric. Then $\displaystyle\Omega_{B}^{r}(\mathcal{F})$ $\displaystyle=\mathcal{H}_{B}^{r}(\mathcal{F})\oplus{\rm Im}\,d_{B}\oplus{\rm Im}\,\delta_{B}$ $\displaystyle=\mathcal{H}_{T}^{r}(\mathcal{F})\oplus{\rm Im}\,d_{T}\oplus{\rm Im}\,\delta_{T}$ with finite dimensional $\mathcal{H}_{B}^{r}(\mathcal{F})=\ker\Delta_{B}$ and $\mathcal{H}_{T}^{r}(\mathcal{F})=\ker\Delta_{T}$. On a compact manifold, $d_{T}^{2}=0$ because of $d\kappa_{B}=0$. So the basic de Rham cohomology groups $H_{B}^{r}(\mathcal{F})$ and $H_{T}^{r}(\mathcal{F})$ are defined by $\displaystyle H_{B}^{r}(\mathcal{F})={\ker d_{B}\over{\rm Im}\,d_{B}},\quad H_{T}^{r}(\mathcal{F})={\ker d_{T}\over{\rm Im}\,d_{T}},$ respectively. Then it is well-known [24] that $\displaystyle H_{B}^{r}(\mathcal{F})\cong\mathcal{H}_{B}^{r}(\mathcal{F}),\quad H_{T}^{r}(\mathcal{F})\cong\mathcal{H}_{T}^{r}(\mathcal{F}).$ From (2.3), we have the twisted duality [18] $\displaystyle H_{B}^{r}(\mathcal{F})\cong H_{T}^{q-r}(\mathcal{F}).$ (2.4) If the foliation $\mathcal{F}$ is taut (meaning that there exists a Riemannian metric on $M$ for which all leaves are minimal), then the Poincaré duality holds [17]. 
That is, $\displaystyle H_{B}^{r}(\mathcal{F})\cong H_{B}^{q-r}(\mathcal{F}).$ Now, we introduce the operator $\nabla_{\rm tr}^{*}\nabla_{\rm tr}$, which is defined by $\displaystyle\nabla_{\rm tr}^{*}\nabla_{\rm tr}\phi=-\sum_{a}\nabla_{E_{a}}\nabla_{E_{a}}\phi+\nabla_{\kappa_{B}^{\sharp}}\phi.$ It is well-known ([12, Proposition 3.1]) that the operator $\nabla_{\rm tr}^{*}\nabla_{\rm tr}$ is non-negative and formally self-adjoint. Then we have the following generalized Weitzenböck formula [12]: for any basic form $\phi\in\Omega_{B}^{r}(\mathcal{F})$ $\Delta_{B}\phi=\nabla_{\rm tr}^{*}\nabla_{\rm tr}\phi+A_{\kappa_{B}^{\sharp}}(\phi)+F(\phi),$ (2.5) where $A_{Y}(\phi)=L_{Y}\phi-\nabla_{Y}\phi$ and $\displaystyle F(\phi)=\sum_{a,b}\theta^{a}\wedge E_{b}\lrcorner R^{Q}(E_{b},E_{a})\phi.$ Here, $\theta^{a}$ is the dual basic 1-form of $E_{a}$. In particular, if $\phi$ is a basic 1-form, then $F(\phi)^{\sharp}={\rm Ric}^{Q}(\phi^{\sharp})$. ###### Remark 2.2. Observe that, from the preceding results, the dimensions of $H_{B}^{r}(\mathcal{F})$ and $H_{T}^{r}(\mathcal{F})$ are smooth invariants of $\mathcal{F}$ and do not depend on the choices of bundle-like metric $g_{M}$ or on the transversal metric $g_{Q}$, even though the spaces of harmonic forms do depend on these choices. ###### Theorem 2.3. [15] Let $(M,\mathcal{F},g_{M})$ be a compact Riemannian manifold with a foliation $\mathcal{F}$ of codimension $q$ and a bundle-like metric $g_{M}$. If the endomorphism $F$ is positive-definite, then there are no nonzero basic harmonic forms, that is, $H_{B}^{r}(\mathcal{F})=\\{0\\}.$ In particular, if ${\rm Ric}^{Q}$ is positive-definite, then $H_{B}^{1}(\mathcal{F})=\\{0\\}$. ### 2.2. 
The twisted basic cohomology

Now, we recall the twisted differential operators $d_{\kappa}$ and $\delta_{\kappa}$ [8], which are given by $\displaystyle d_{\kappa}=d_{B}-\frac{1}{2}\kappa_{B}\wedge,\quad\delta_{\kappa}=\delta_{B}-\frac{1}{2}\kappa_{B}^{\sharp}\lrcorner.$ (2.6) The operator $\delta_{\kappa}$ is the formal adjoint operator of $d_{\kappa}$ with respect to the global inner product. Let $\Delta_{\kappa}=d_{\kappa}\delta_{\kappa}+\delta_{\kappa}d_{\kappa}$ be the twisted basic Laplacian. Then we have the following relation: for any basic form $\phi$, $\displaystyle\Delta_{\kappa}\phi=\Delta_{B}\phi-\frac{1}{2}\Big{(}L_{\kappa_{B}^{\sharp}}+(L_{\kappa_{B}^{\sharp}})^{*}\Big{)}\phi+\frac{1}{4}|\kappa_{B}|^{2}\phi.$ (2.7) Also, we have the following relations. ###### Lemma 2.4. [8] On $\Omega_{B}^{r}(\mathcal{F})$, the following equations hold: 1. (1) $d_{\kappa}^{2}=\delta_{\kappa}^{2}=0,\quad[d_{\kappa},\Delta_{\kappa}]=[\Delta_{\kappa},\delta_{\kappa}]=[\Delta_{\kappa},\bar{*}]=0$. 2. (2) $d_{\kappa}\bar{*}=(-1)^{r}\bar{*}\delta_{\kappa},\quad\bar{*}d_{\kappa}=(-1)^{r+1}\delta_{\kappa}\bar{*}$. Since $d_{\kappa}^{2}=0$, we can define the twisted basic de Rham cohomology group $H_{\kappa}^{r}(\mathcal{F})$ by $\displaystyle H_{\kappa}^{r}(\mathcal{F})={\ker d_{\kappa}\over{\rm Im}\,d_{\kappa}}.$ Then we have the Hodge decomposition. ###### Theorem 2.5. [8] Let $(M,\mathcal{F},g_{Q})$ be as in Theorem 2.1. Then $\displaystyle\Omega_{B}^{r}(\mathcal{F})={\mathcal{H}}_{\kappa}^{r}(\mathcal{F})\oplus{\rm Im}\,d_{\kappa}\oplus{\rm Im}\,\delta_{\kappa}$ with finite dimensional ${\mathcal{H}}_{\kappa}^{r}(\mathcal{F})=\ker\Delta_{\kappa}$. Moreover, ${\mathcal{H}}_{\kappa}^{r}(\mathcal{F})\cong H_{\kappa}^{r}(\mathcal{F})$. And we have the Poincaré duality for $d_{\kappa}$-cohomology. That is, ###### Theorem 2.6.
[8] (Poincaré duality for $d_{\kappa}$-cohomology) On a compact Riemannian manifold with a Riemannian foliation $\mathcal{F}$, we have $\displaystyle H_{\kappa}^{r}(\mathcal{F})\cong H_{\kappa}^{q-r}(\mathcal{F}).$ ###### Remark 2.7. Theorem 2.6 resolves the problem of the failure of Poincaré duality to hold for the standard basic de Rham cohomology $H_{B}^{r}(\mathcal{F})$ (cf. (2.4)). ###### Theorem 2.8. [8] (Tautness Theorem) Let $(M,\mathcal{F},g_{Q})$ be as in Theorem 2.1. Then $\mathcal{F}$ is taut if and only if $H_{\kappa}^{0}(\mathcal{F})\cong H_{\kappa}^{q}(\mathcal{F})\neq\\{0\\}$. From (2.5) and (2.7), we have the Weitzenböck formula [8] for the twisted basic Laplacian $\Delta_{\kappa}$. Namely, for any basic form $\phi$, $\displaystyle\Delta_{\kappa}\phi=\nabla_{\rm tr}^{*}\nabla_{\rm tr}\phi+F(\phi)+\frac{1}{4}|\kappa_{B}|^{2}\phi.$ Then we have the following theorem. ###### Theorem 2.9. [8] Let $(M,\mathcal{F},g_{Q})$ be a Riemannian foliation on a compact, connected manifold $M$ with a bundle-like metric such that the mean curvature form $\kappa$ is basic-harmonic. Then: (1) if the operator $F+{\frac{1}{4}}|\kappa|^{2}$ is strictly positive, then $H_{\kappa}^{r}(\mathcal{F})=\\{0\\}$. (2) if the transversal Ricci curvature ${\rm Ric}^{Q}$ is non-negative and either $M$ is nontaut or ${\rm Ric}^{Q}$ is positive at some point, then $H_{\kappa}^{1}(\mathcal{F})=\\{0\\}$. (3) suppose that the transversal sectional curvatures are nonnegative and positive at some point. If $\mathcal{F}$ is nontaut, then $H_{\kappa}^{r}(\mathcal{F})=\\{0\\}$ for $1<r<q$.

## 3\. The twisted basic Dolbeault cohomology

### 3.1. The basic Dolbeault cohomology

In this section, we generally use the same notation and cite elementary results from [13, Section 3] and [21].
Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation of codimension $q=2n$ on a Riemannian manifold $M$ with a holonomy invariant transverse Hermitian metric $g_{Q}$ and an almost complex structure $J$ on $Q$ such that $\nabla J=0$, with $\nabla$ being the transversal Levi-Civita connection on $Q$, extended in the usual way to tensors [21]. In some of what follows, we will merely need the fact that the foliation is transverse Hermitian (all of the above, requiring merely that $J$ is integrable and not that $\nabla J=0$), and other times we will need the full power of the Kähler condition $\nabla J=0$. The basic Kähler form $\omega$ is given by $\omega(X,Y)=g_{Q}(\pi(X),J\pi(Y))$ for any vector fields $X,Y\in TM$. Locally, the basic Kähler form $\omega$ may be expressed by $\displaystyle\omega=-\frac{1}{2}\sum_{a=1}^{2n}\theta^{a}\wedge J\theta^{a},$ where $\\{\theta^{a}\\}_{a=1,\cdots,2n}$ is a local orthonormal frame of $Q^{*}$. Here, we extend $J$ to elements of $Q^{*}$ by setting $(J\phi)(X)=-\phi(JX)$ for any $X\in Q_{x}$ and $\phi\in Q_{x}^{*}$. When it is convenient, we will also refer to the bundle map $J^{\prime}:TM\to TM$ defined by $J^{\prime}(v)=J(\pi(v))$ and abuse notation by denoting $J=J^{\prime}$. Similarly, we sometimes will act on all of $T^{*}M$ using the symbol $J$. We note that all of the above is true for transverse Hermitian foliations, but the form $\omega$ is not closed unless the foliation is Kähler. Let $Q^{C}=Q\otimes\mathbb{C}$ be the complexified normal bundle and let $\displaystyle Q^{1,0}=\\{Z\in Q^{C}|JZ=iZ\\},\quad Q^{0,1}=\\{Z\in Q^{C}|JZ=-iZ\\}.$ An element of $Q^{1,0}$ (resp. $Q^{0,1}$) is called a complex normal vector field of type $(1,0)$ (resp. $(0,1)$). Then $Q^{C}=Q^{1,0}\oplus Q^{0,1}$. Now, let $Q^{*}_{C}$ be the dual bundle of $Q^{C}$, defined at each $x\in M$ to be the $\mathbb{C}$-linear maps from $Q_{x}^{C}$ to $\mathbb{C}$.
Then $Q_{C}^{*}=Q_{1,0}\oplus Q_{0,1}$, where $\displaystyle Q_{1,0}=\\{\theta+iJ\theta|\ \theta\in Q^{*}\\}\quad\text{and}\quad Q_{0,1}=\\{\theta-iJ\theta|\ \theta\in Q^{*}\\}.$ Let $\Lambda^{r,s}_{C}Q^{*}$ be the subspace of $\Lambda Q_{C}^{*}$ spanned by $\xi\wedge\eta$, where $\xi\in\Lambda^{r}Q_{1,0}$ and $\eta\in\Lambda^{s}Q_{0,1}$. The sections of $\Lambda^{r,s}_{C}Q^{*}$ are said to be forms of type $(r,s)$. Let $\Omega_{B}^{r,s}(\mathcal{F})$ be the set of the basic forms of type $(r,s)$. Let $\\{E_{a},JE_{a}\\}_{a=1,\cdots,n}$ be a local orthonormal frame of $Q$ and $\\{\theta^{a},J\theta^{a}\\}_{a=1,\cdots,n}$ be their dual basic forms on $Q^{*}$. Let $V_{a}={1\over\sqrt{2}}(E_{a}-iJE_{a})$ and $\omega^{a}={1\over\sqrt{2}}(\theta^{a}+iJ\theta^{a})$. Then $\displaystyle\omega^{a}(V_{b})=\overline{\omega}^{a}(\overline{V}_{b})=\delta_{ab},\ \omega^{a}(\overline{V}_{b})=\overline{\omega}^{a}(V_{b})=0.$ The frame field $\\{V_{a}\\}_{a=1,\cdots,n}$ is a local orthonormal frame of $Q^{1,0}$, called a normal frame field of type $(1,0)$, and $\\{\omega^{a}\\}_{a=1,\cdots,n}$ is the dual frame of $Q_{1,0}$. Now, we extend the connection $\nabla$ on $Q$ in the natural way so that $\nabla_{X}Y$ is defined for any $X\in\Gamma(TM\otimes\mathbb{C})$ and any $Y\in\Gamma(Q^{C})$. We further extend it to differential forms, requiring that $\nabla$ is a Hermitian connection, i.e., for any $V\in Q^{C}$ and any $\phi,\psi\in\Omega_{B}^{r,s}(\mathcal{F})$, $\displaystyle V\langle\phi,\psi\rangle=\langle\nabla_{V}\phi,\psi\rangle+\langle\phi,\nabla_{\overline{V}}\psi\rangle.$ It is an easy exercise to show that for any complex vector field $X$, $\nabla_{X}$ preserves the $(r,s)$ type of the form or vector field.
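In one complex transverse dimension, the eigenvector description of $Q^{1,0}$ and $Q^{0,1}$ and the pairing $\omega^{a}(V_{b})=\delta_{ab}$ can be checked numerically. The following is a minimal sketch, not part of the foliation setup: $J$ is taken to be the standard rotation on $\mathbb{R}^{2}$, standing in for the almost complex structure on a fiber of $Q$.

```python
import numpy as np

# One complex transverse dimension (n = 1): take J to be rotation by 90
# degrees on R^2, so J^2 = -Id.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
E = np.array([1.0, 0.0])                 # E_1
JE = J @ E                               # JE_1

V = (E - 1j*JE) / np.sqrt(2)             # V_1 = (E_1 - i JE_1)/sqrt(2)
Vbar = V.conj()

# V_1 and its conjugate are the +i and -i eigenvectors of J on Q^C
print(np.allclose(J @ V, 1j*V), np.allclose(J @ Vbar, -1j*Vbar))

# theta = e_1^*, (J theta)(X) = -theta(JX) gives J theta = e_2^*, and
# omega^1 = (theta + i J theta)/sqrt(2) satisfies omega^1(V_1) = 1 and
# omega^1(conj(V_1)) = 0, as in the duality relations above
omega1 = (np.array([1.0, 0.0]) + 1j*np.array([0.0, 1.0])) / np.sqrt(2)
print(np.isclose(omega1 @ V, 1.0), np.isclose(omega1 @ Vbar, 0.0))
```

The same check goes through for each index $a$ separately in higher codimension, since $J$ acts block-diagonally in the frame $\\{E_{a},JE_{a}\\}$.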
Now, the transversal star operator $\bar{*}:\Omega_{B}^{r,s}(\mathcal{F})\to\Omega_{B}^{n-s,n-r}(\mathcal{F})$ on $\Omega_{B}^{*,*}(\mathcal{F})$ is given by $\displaystyle\phi\wedge\bar{*}\bar{\psi}=\langle\phi,\psi\rangle\nu$ for any $\phi,\psi\in\Omega_{B}^{r,s}(\mathcal{F})$, where $\nu={\omega^{n}\over n!}$ is the transversal volume form. Then for any $\phi\in\Omega_{B}^{r,s}(\mathcal{F})$, $\displaystyle\overline{\bar{*}\phi}=\bar{*}\bar{\phi},\quad\bar{*}^{2}\phi=(-1)^{r+s}\phi.$ From (2.1), the adjoint operators $\delta_{B}$ and $\delta_{T}$ of $d_{B}$ and $d_{T}$ are given by $\displaystyle\delta_{B}=-\bar{*}d_{T}\bar{*},\quad\delta_{T}=-\bar{*}d_{B}\bar{*},$ (3.1) respectively. Note that $d_{B}=\partial_{B}+\bar{\partial}_{B}$ and $d_{T}=\partial_{T}+\bar{\partial}_{T}$, where $\displaystyle\partial_{T}\phi=\partial_{B}\phi-\kappa_{B}^{1,0}\wedge\phi,\quad\bar{\partial}_{T}\phi=\bar{\partial}_{B}\phi-\kappa_{B}^{0,1}\wedge\phi,$ (3.2) where $\kappa_{B}^{1,0}=\frac{1}{2}(\kappa_{B}+iJ\kappa_{B})\in\Omega_{B}^{1,0}(\mathcal{F})$ and $\kappa_{B}^{0,1}=\overline{\kappa_{B}^{1,0}}$ [15]. Let $\partial_{T}^{*}$, $\bar{\partial}_{T}^{*}$, $\partial_{B}^{*}$ and $\bar{\partial}_{B}^{*}$ be the formal adjoint operators of $\partial_{T},\ \bar{\partial}_{T},\ \partial_{B}$ and $\bar{\partial}_{B}$, respectively, on the space of basic forms. 
Then $\displaystyle\delta_{B}=\partial_{B}^{*}+\bar{\partial}_{B}^{*},\quad\delta_{T}=\partial_{T}^{*}+\bar{\partial}_{T}^{*},$ (3.3) and from (3.1), $\displaystyle\partial_{T}^{*}\phi=-\bar{*}\bar{\partial}_{B}\bar{*}\phi,\quad\bar{\partial}_{T}^{*}\phi=-\bar{*}\partial_{B}\bar{*}\phi,$ (3.4) $\displaystyle\partial_{B}^{*}\phi=-\bar{*}\bar{\partial}_{T}\bar{*}\phi,\quad\bar{\partial}_{B}^{*}\phi=-\bar{*}\partial_{T}\bar{*}\phi.$ (3.5) Since $\bar{*}(\kappa_{B}^{0,1}\wedge)\bar{*}=H^{1,0}\lrcorner$ [15], from (3.2) and (3.5), we have [15] $\displaystyle\partial^{*}_{B}\phi=\partial_{T}^{*}\phi+H^{1,0}\lrcorner\,\phi,\quad\bar{\partial}_{B}^{*}\phi=\bar{\partial}_{T}^{*}\phi+H^{0,1}\lrcorner\,\phi,$ (3.6) where $H^{1,0}=\frac{1}{2}(\kappa_{B}^{\sharp}-iJ\kappa_{B}^{\sharp})$ and $H^{0,1}=\overline{H^{1,0}}$. Then from (2.2) and (3.3) $\displaystyle\partial_{T}^{*}\phi=-\sum_{a=1}^{n}V_{a}\lrcorner\nabla_{\overline{V}_{a}}\phi,\quad\bar{\partial}_{T}^{*}\phi=-\sum_{a=1}^{n}\overline{V}_{a}\lrcorner\nabla_{V_{a}}\phi.$ (3.7) Since $\bar{\partial}_{B}^{2}=0$, we can define the basic Dolbeault cohomology group $H_{B}^{r,s}(\mathcal{F})$ by $H_{B}^{r,s}(\mathcal{F})={{\ker\bar{\partial}_{B}}\over{\rm Im}\>\bar{\partial}_{B}}.$ Now, let $\square_{B}=\partial_{B}\partial_{B}^{*}+\partial_{B}^{*}\partial_{B}$ and $\overline{\square}_{B}=\bar{\partial}_{B}\bar{\partial}_{B}^{*}+\bar{\partial}_{B}^{*}\bar{\partial}_{B}$. Then we have the basic Dolbeault decomposition. ###### Theorem 3.1. [6, 15] Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation on a compact Riemannian manifold $M$ with a bundle-like metric. Then $\Omega_{B}^{r,s}(\mathcal{F})=\mathcal{H}_{B}^{r,s}(\mathcal{F})\oplus{\rm Im}\,\bar{\partial}_{B}\oplus{\rm Im}\,\bar{\partial}_{B}^{*},$ where $\mathcal{H}_{B}^{r,s}(\mathcal{F})=\ker\overline{\square}_{B}$ is finite dimensional. Moreover, $\mathcal{H}_{B}^{r,s}(\mathcal{F})\cong H_{B}^{r,s}(\mathcal{F})$.
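The shape of the decomposition in Theorem 3.1 — harmonics plus the images of the differential and of its adjoint, with harmonics computing cohomology — already appears in finite dimensions. The following toy sketch uses the simplicial cochain complex of a filled triangle as a stand-in model (it is an illustration of the linear-algebra pattern, not of the foliated setting itself):

```python
import numpy as np

# Simplicial cochain complex of a filled triangle: 3 vertices, 3 edges, 1 face.
d0 = np.array([[-1,  1,  0],    # edge (v0, v1)
               [ 0, -1,  1],    # edge (v1, v2)
               [-1,  0,  1]])   # edge (v0, v2)
d1 = np.array([[1, 1, -1]])     # boundary of the face: e01 + e12 - e02

assert np.all(d1 @ d0 == 0)     # d^2 = 0

# Hodge Laplacian in degree 1; its kernel is the "harmonic" summand
Delta1 = d0 @ d0.T + d1.T @ d1
dim_harm = 3 - np.linalg.matrix_rank(Delta1)

# dim C^1 = dim(harmonics) + rank(d0) + rank(d1^T), and Im d0 is orthogonal
# to Im d1^T because (d0 x).(d1^T y) = y^T (d1 d0) x = 0
print(dim_harm, np.linalg.matrix_rank(d0), np.linalg.matrix_rank(d1.T))
```

Here the dimensions $0+2+1=3$ exhaust $C^{1}$, and the zero-dimensional harmonic summand matches $H^{1}=0$ for a contractible complex, mirroring $\mathcal{H}_{B}^{r,s}(\mathcal{F})\cong H_{B}^{r,s}(\mathcal{F})$.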
Generally, the basic Laplacians do not satisfy the properties that hold on an ordinary Kähler manifold, such as $\Delta=2\square=2\overline{\square}$. But if $\mathcal{F}$ is taut, then $\Delta_{B}=2\square_{B}=2\overline{\square}_{B}$ [15].

### 3.2. The twisted basic Dolbeault cohomology

Let $\partial_{\kappa}:\Omega_{B}^{r,s}(\mathcal{F})\to\Omega_{B}^{r+1,s}(\mathcal{F})$ and $\bar{\partial}_{\kappa}:\Omega_{B}^{r,s}(\mathcal{F})\to\Omega_{B}^{r,s+1}(\mathcal{F})$ be defined by $\displaystyle\partial_{\kappa}\phi=\partial_{B}\phi-\frac{1}{2}\kappa_{B}^{1,0}\wedge\phi,\quad\bar{\partial}_{\kappa}\phi=\bar{\partial}_{B}\phi-\frac{1}{2}\kappa_{B}^{0,1}\wedge\phi,$ (3.8) respectively. From (2.6), it is trivial that $d_{\kappa}=\partial_{\kappa}+\bar{\partial}_{\kappa}.$ Let $\partial_{\kappa}^{*}$ and $\bar{\partial}_{\kappa}^{*}$ be the formal adjoint operators of $\partial_{\kappa}$ and $\bar{\partial}_{\kappa}$, respectively. Then we have the following. ###### Proposition 3.2. On a transverse Kähler foliation, we have $\displaystyle\partial_{\kappa}^{*}=\partial_{B}^{*}-\frac{1}{2}H^{1,0}\lrcorner,\quad\bar{\partial}_{\kappa}^{*}=\bar{\partial}_{B}^{*}-\frac{1}{2}H^{0,1}\lrcorner,\quad\delta_{\kappa}=\partial_{\kappa}^{*}+\bar{\partial}_{\kappa}^{*}.$ ###### Proof. From (2.6) and (3.8), the proofs are easy. ∎ Let $L:\Omega_{B}^{r}(\mathcal{F})\to\Omega_{B}^{r+2}(\mathcal{F})$ and $\Lambda:\Omega_{B}^{r}(\mathcal{F})\to\Omega_{B}^{r-2}(\mathcal{F})$ be given by $\displaystyle L(\phi)=\omega\wedge\phi,\quad\Lambda(\phi)=\omega\lrcorner\phi,$ respectively, where $(\xi_{1}\wedge\xi_{2})\lrcorner=\xi_{2}^{\sharp}\lrcorner\xi_{1}^{\sharp}\lrcorner$ for any basic 1-forms $\xi_{i}(i=1,2)$. Trivially, $\langle L\phi,\psi\rangle=\langle\phi,\Lambda\psi\rangle$ and $\Lambda=-\bar{*}L\bar{*}$ [5].
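The adjointness $\langle L\phi,\psi\rangle=\langle\phi,\Lambda\psi\rangle$ and the $\mathfrak{sl}_{2}$-type behavior of $L$ and $\Lambda$ can be made concrete on the exterior algebra of $\mathbb{R}^{2}$ (one transverse complex dimension, $n=1$). A small matrix sketch, with basis ordering and entries chosen here purely for illustration:

```python
import numpy as np

# Exterior algebra of R^2 with orthonormal basis {1, t1, t2, t1^t2}, n = 1,
# and omega = t1 ^ t2.  L = wedge with omega; Lambda = contraction with omega.
L = np.zeros((4, 4))
L[3, 0] = 1.0        # L(1) = t1^t2; L annihilates t1, t2 and t1^t2

# In an orthonormal basis, <L phi, psi> = <phi, Lambda psi> says exactly
# that Lambda is represented by the transpose of L
Lam = L.T

comm = Lam @ L - L @ Lam
print(np.diag(comm))   # +1 on 0-forms, 0 on 1-forms, -1 on 2-forms:
                       # [Lambda, L] = (n - r) Id on r-forms
```

This commutator is the standard counting operator of the $\mathfrak{sl}_{2}(\mathbb{C})$ action used for Proposition 3.15 below.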
Also, it is well-known [13] that $\displaystyle[L,X\lrcorner]=JX^{b}\wedge,\quad[\Lambda,X^{b}\wedge]=-JX\lrcorner,\quad[L,X^{b}\wedge]=[\Lambda,X\lrcorner]=0$ (3.9) for any vector field $X\in Q$. From (3.9), we have the following. ###### Proposition 3.3. [13] On a transverse Kähler foliation, we have $\displaystyle[L,d_{B}]=[\Lambda,\delta_{B}]=[L,\partial_{B}]=[L,\bar{\partial}_{B}]=[\Lambda,\partial_{B}^{*}]=[\Lambda,\bar{\partial}_{B}^{*}]=0,$ $\displaystyle[L,\partial_{B}^{*}]=-i\bar{\partial}_{T},\ [L,\bar{\partial}_{B}^{*}]=i\partial_{T},\ [\Lambda,\partial_{B}]=-i\bar{\partial}_{T}^{*},\ [\Lambda,\bar{\partial}_{B}]=i\partial_{T}^{*}.$ From (3.9) and Proposition 3.3, we have the following. ###### Proposition 3.4. On a transverse Kähler foliation, we have $\displaystyle[L,d_{\kappa}]=[\Lambda,\delta_{\kappa}]=[L,\partial_{\kappa}]=[L,\bar{\partial}_{\kappa}]=[\Lambda,\partial_{\kappa}^{*}]=[\Lambda,\bar{\partial}_{\kappa}^{*}]=0,$ (3.10) $\displaystyle[L,\partial_{\kappa}^{*}]=-i\bar{\partial}_{\kappa},\ [L,\bar{\partial}_{\kappa}^{*}]=i\partial_{\kappa},\ [\Lambda,\partial_{\kappa}]=-i\bar{\partial}_{\kappa}^{*},\ [\Lambda,\bar{\partial}_{\kappa}]=i\partial_{\kappa}^{*}.$ (3.11) Let $\square_{\kappa}$ and $\overline{\square}_{\kappa}$ be Laplace operators, which are defined by $\displaystyle\square_{\kappa}=\partial_{\kappa}\partial_{\kappa}^{*}+\partial_{\kappa}^{*}\partial_{\kappa}\quad{\rm and}\quad\overline{\square}_{\kappa}=\bar{\partial}_{\kappa}\bar{\partial}_{\kappa}^{*}+\bar{\partial}_{\kappa}^{*}\bar{\partial}_{\kappa},$ respectively. Trivially, $\square_{\kappa}$ and $\overline{\square}_{\kappa}$ preserve the types of the forms. ###### Theorem 3.5. On a transverse Kähler foliation, we have $\displaystyle\square_{\kappa}=\overline{\square}_{\kappa},\quad\Delta_{\kappa}=2\square_{\kappa}=2\overline{\square}_{\kappa}.$ ###### Proof. 
Since $d_{\kappa}^{2}=0$, it is trivial that $\partial_{\kappa}^{2}=\bar{\partial}_{\kappa}^{2}=\partial_{\kappa}\bar{\partial}_{\kappa}+\bar{\partial}_{\kappa}\partial_{\kappa}=0$. From Proposition 3.4 (3.11), we have $\displaystyle i(\partial_{\kappa}\partial_{\kappa}^{*}+\partial_{\kappa}^{*}\partial_{\kappa})$ $\displaystyle=\partial_{\kappa}\Lambda\bar{\partial}_{\kappa}+\Lambda\bar{\partial}_{\kappa}\partial_{\kappa}-\partial_{\kappa}\bar{\partial}_{\kappa}\Lambda-\bar{\partial}_{\kappa}\Lambda\partial_{\kappa}$ $\displaystyle=[\partial_{\kappa},\Lambda]\bar{\partial}_{\kappa}+\bar{\partial}_{\kappa}[\partial_{\kappa},\Lambda]$ $\displaystyle=i(\bar{\partial}_{\kappa}^{*}\bar{\partial}_{\kappa}+\bar{\partial}_{\kappa}\bar{\partial}_{\kappa}^{*})$ and $\displaystyle i(\bar{\partial}_{\kappa}\partial_{\kappa}^{*}+\partial_{\kappa}^{*}\bar{\partial}_{\kappa})=i(\partial_{\kappa}\bar{\partial}_{\kappa}^{*}+\bar{\partial}_{\kappa}^{*}\partial_{\kappa})=0.$ Hence $\square_{\kappa}=\overline{\square}_{\kappa}$ and by a direct calculation, $\displaystyle\Delta_{\kappa}=\square_{\kappa}+\overline{\square}_{\kappa}=2\square_{\kappa}=2\overline{\square}_{\kappa}.$ ∎ ###### Remark 3.6. Recall that if the transverse Kähler foliation is minimal, then a basic form of type $(r,0)$ is basic-harmonic if and only if it is basic holomorphic [15]. But, if the foliation is not minimal, then the relation does not hold. ###### Definition 3.7. On a transverse Kähler foliation, a basic form $\phi$ is said to be $\bar{\partial}_{\kappa}$-holomorphic if $\bar{\partial}_{\kappa}\phi=0$. ###### Theorem 3.8. On a transverse Kähler foliation, a $\bar{\partial}_{\kappa}$-holomorphic form of type $(r,0)$ is $\Delta_{\kappa}$-harmonic. In addition, if $M$ is compact, then the converse holds. ###### Proof. Let $\phi$ be a $\bar{\partial}_{\kappa}$-holomorphic form of type $(r,0)$. 
Since $\bar{\partial}^{*}_{\kappa}\phi=0$ automatically, $\bar{\partial}_{\kappa}\phi=0$ implies $\Delta_{\kappa}\phi=2\overline{\square}_{\kappa}\phi=0$. Conversely, if $M$ is compact, then $\Delta_{\kappa}\phi=0$ implies that $\int_{M}|\bar{\partial}_{\kappa}\phi|^{2}=0$, i.e., $\phi$ is $\bar{\partial}_{\kappa}$-holomorphic. ∎ Now, we consider the $\bar{\partial}_{\kappa}$-complex $\displaystyle\cdots\overset{\bar{\partial}_{\kappa}}{\longrightarrow}\Omega_{B}^{r,s-1}(\mathcal{F})\overset{\bar{\partial}_{\kappa}}{\longrightarrow}\Omega_{B}^{r,s}(\mathcal{F})\overset{\bar{\partial}_{\kappa}}{\longrightarrow}\Omega_{B}^{r,s+1}(\mathcal{F})\overset{\bar{\partial}_{\kappa}}{\longrightarrow}\cdots.$ Since $\bar{\partial}_{\kappa}^{2}=0$, the twisted basic Dolbeault cohomology group is defined by $\displaystyle H_{\kappa}^{r,s}(\mathcal{F})={{\ker\bar{\partial}_{\kappa}}\over{\rm Im}\>\bar{\partial}_{\kappa}}.$ Then we have the generalization of the Dolbeault decomposition. ###### Theorem 3.9. Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation on a compact manifold $M$ with a bundle-like metric. Then $\displaystyle\Omega_{B}^{r,s}(\mathcal{F})=\mathcal{H}_{\kappa}^{r,s}(\mathcal{F})\oplus{\rm Im}\,\bar{\partial}_{\kappa}\oplus{\rm Im}\,\bar{\partial}_{\kappa}^{*},$ where $\mathcal{H}_{\kappa}^{r,s}(\mathcal{F})=\ker\overline{\square}_{\kappa}$ is finite dimensional. ###### Proof. The proof is similar to that of Theorem 2.1. See [19] for details. ∎ As a corollary of Theorem 3.9, we have the Dolbeault isomorphism. ###### Corollary 3.10. (Dolbeault isomorphism) Let $(M,\mathcal{F},g_{Q},J)$ be as in Theorem 3.9. Then $\mathcal{H}_{\kappa}^{r,s}(\mathcal{F})\cong H_{\kappa}^{r,s}(\mathcal{F}).$ ###### Proof. The proof is similar to the proof of the Hodge isomorphism. ∎ Then we have the Kodaira-Serre duality. ###### Theorem 3.11. (Kodaira-Serre duality) Let $(M,\mathcal{F},g_{Q},J)$ be as in Theorem 3.9.
Then $\displaystyle H_{\kappa}^{r,s}(\mathcal{F})\cong H_{\kappa}^{n-r,n-s}(\mathcal{F}).$ ###### Proof. We define the operator $\sharp:\Omega_{B}^{r,s}(\mathcal{F})\to\Omega_{B}^{n-r,n-s}(\mathcal{F})$ by $\displaystyle\sharp\phi:=\bar{*}\bar{\phi},$ which is an isomorphism. Since $\bar{*}(\kappa_{B}^{1,0}\wedge)\bar{*}=H^{0,1}\lrcorner$ [15], we have that for $\phi\in\Omega_{B}^{r,s}(\mathcal{F})$, $\bar{*}(\kappa_{B}^{0,1}\wedge\phi)=(-1)^{r+s}H^{1,0}\lrcorner\bar{*}\phi$ (3.12) and from (3.4), we have $\bar{*}\bar{\partial}_{B}\phi=(-1)^{r+s+1}\partial_{T}^{*}\bar{*}\phi.$ (3.13) From (3.12) and (3.13), we get that on $\Omega_{B}^{r,s}(\mathcal{F})$, $\bar{*}\bar{\partial}_{\kappa}=(-1)^{r+s+1}\partial_{\kappa}^{*}\bar{*}.$ (3.14) Also, from (3.5), we get that on $\Omega_{B}^{r,s}(\mathcal{F})$, $\bar{*}\bar{\partial}_{\kappa}^{*}=(-1)^{r+s}\partial_{\kappa}\bar{*}.$ (3.15) From (3.14) and (3.15), we have that on $\Omega_{B}^{r,s}(\mathcal{F})$, $\displaystyle\bar{*}\bar{\partial}_{\kappa}\bar{\partial}_{\kappa}^{*}=-\partial_{\kappa}^{*}\partial_{\kappa}\bar{*},\quad\bar{*}\bar{\partial}_{\kappa}^{*}\bar{\partial}_{\kappa}=-\partial_{\kappa}\partial_{\kappa}^{*}\bar{*},$ which implies $\displaystyle\bar{*}\overline{\square}_{\kappa}=\square_{\kappa}\bar{*}.$ Hence for any basic form $\phi\in\Omega_{B}^{r,s}(\mathcal{F})$, we get $\displaystyle\sharp\overline{\square}_{\kappa}\phi=\bar{*}\square_{\kappa}\bar{\phi}=\overline{\square}_{\kappa}\bar{*}\bar{\phi}=\overline{\square}_{\kappa}\sharp\phi.$ That is, $\sharp$ preserves $\mathcal{H}_{\kappa}^{r,s}(\mathcal{F})=\ker\overline{\square}_{\kappa}$. From the Dolbeault isomorphism (Corollary 3.10), the proof follows. ∎ ###### Remark 3.12. In general, the Kodaira-Serre duality does not hold for $\bar{\partial}_{B}$-cohomology. 
In fact, $\displaystyle H_{B}^{r,s}(\mathcal{F})\cong H_{T}^{n-r,n-s}(\mathcal{F}),$ where $H_{T}^{r,s}(\mathcal{F})={{\ker\bar{\partial}_{T}}\over{{\rm Im}\,\bar{\partial}_{T}}}$ is the $\bar{\partial}_{T}$-cohomology. $H_{T}^{r,s}(\mathcal{F})$ is a type of Lichnerowicz basic cohomology. The interested reader may consult [3, Section 3] and [25, Section 3] (where it is called “adapted cohomology”) for information about ordinary Lichnerowicz cohomology and [1] for the basic case. Theorem 3.11 resolves the problem of the failure of Kodaira-Serre duality to hold for $\bar{\partial}_{B}$-cohomology. From Theorem 3.5, we have the following Hodge decomposition for the twisted basic Dolbeault cohomology. ###### Proposition 3.13. Let $(M,\mathcal{F},g_{Q},J)$ be as in Theorem 3.9. Then $H_{\kappa}^{l}(\mathcal{F})=\oplus_{r+s=l}H_{\kappa}^{r,s}(\mathcal{F})$ (3.16) for $0\leq l\leq 2n$ and ${\rm dim}_{\mathbb{C}}H_{\kappa}^{r,s}(\mathcal{F})={\rm dim}_{\mathbb{C}}H_{\kappa}^{s,r}(\mathcal{F}).$ (3.17) ###### Proof. Since $\Delta_{\kappa}=2\overline{\square}_{\kappa}$ and $\overline{\square}_{\kappa}$ preserves the space $\Omega_{B}^{r,s}(\mathcal{F})$, the proof of (3.16) follows by the Hodge isomorphism (Theorem 2.5). The proof of (3.17) follows from the fact that conjugation gives a conjugate-linear isomorphism $\mathcal{H}_{\kappa}^{r,s}(\mathcal{F})\to\mathcal{H}_{\kappa}^{s,r}(\mathcal{F})$. ∎ ###### Remark 3.14. The Hodge decomposition for $\bar{\partial}_{B}$-cohomology does not hold unless the mean curvature of $\mathcal{F}$ is automorphic [16, Corollary 6.7]. Now let $\Omega_{B,P}^{r}(\mathcal{F})$ be the set of all primitive basic $r$-forms $\phi$, that is, $\Lambda\phi=0$. Then by $\mathfrak{sl}_{2}(\mathbb{C})$ representation theory, we have the following proposition. ###### Proposition 3.15. [16] Let $(M,\mathcal{F},g_{Q},J)$ be as in Theorem 3.9. Then we have the following. (1) $\Omega_{B,P}^{r}(\mathcal{F})=0$ if $r>n$.
(2) If $\phi\in\Omega_{B,P}^{r}(\mathcal{F})$, then $L^{s}\phi\neq 0$ for $0\leq s\leq n-r$ and $L^{s}\phi=0$ for $s>n-r$. (3) The map $L^{s}:\Omega_{B}^{r}(\mathcal{F})\to\Omega_{B}^{r+2s}(\mathcal{F})$ is injective for $0\leq s\leq n-r$. (4) The map $L^{s}:\Omega_{B}^{r}(\mathcal{F})\to\Omega_{B}^{r+2s}(\mathcal{F})$ is surjective for $s\geq n-r$. (5) $\Omega_{B}^{r}(\mathcal{F})=\oplus_{s\geq 0}L^{s}\Omega_{B,P}^{r-2s}(\mathcal{F})$. ###### Theorem 3.16. (Hard Lefschetz theorem) Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation on a compact manifold $M$ with a bundle-like metric. Then the Hard Lefschetz theorem holds for twisted basic cohomology. That is, the map $L^{s}:H_{\kappa}^{r}(\mathcal{F})\to H_{\kappa}^{r+2s}(\mathcal{F})$ (3.18) is injective for $0\leq s\leq n-r$ and surjective for $s\geq n-r$, $s\geq 0$. Moreover, $\displaystyle H_{\kappa}^{r}(\mathcal{F})$ $\displaystyle=\oplus_{s\geq 0}L^{s}H_{\kappa,P}^{r-2s}(\mathcal{F}),$ (3.19) $\displaystyle H_{\kappa}^{r,s}(\mathcal{F})$ $\displaystyle=\oplus_{t\geq 0}L^{t}H_{\kappa,P}^{r-t,s-t}(\mathcal{F}),$ (3.20) where $H_{\kappa,P}^{r}(\mathcal{F})\cong\Omega_{B,P}^{r}(\mathcal{F})\cap\ker\Delta_{\kappa}$ and $H_{\kappa,P}^{r,s}(\mathcal{F})\cong\Omega_{B,P}^{r,s}(\mathcal{F})\cap\ker\Delta_{\kappa}$. ###### Proof. Since $\bar{\partial}_{\kappa}\partial_{\kappa}+\partial_{\kappa}\bar{\partial}_{\kappa}=0$, $[L,\overline{\square}_{\kappa}]=[L,\square_{\kappa}]=0$ by Proposition 3.4, and so $[L,\Delta_{\kappa}]=0$. Hence by Proposition 3.15 and the Hodge isomorphism (Theorem 2.5), the proofs of (3.18) and (3.19) follow. The proof of (3.20) follows from the Dolbeault isomorphism (Corollary 3.10). ∎ ###### Remark 3.17. Generally, the Hard Lefschetz theorem for basic cohomology does not hold unless $[\partial_{B}\kappa_{B}^{0,1}]$ is trivial (cf. [16, Theorem 5.11]). ###### Example 3.18. We consider the Carrière example from [4]. Also, see [8, Section 7.1] and [16, Example 9.1].
Let $A$ be a matrix in $\mathrm{SL}_{2}(\mathbb{Z})$ of trace strictly greater than $2$. We denote respectively by $w_{1}$ and $w_{2}$ unit eigenvectors associated with the eigenvalues $\lambda$ and $\frac{1}{\lambda}$ of $A$ with $\lambda>1$ irrational. Let the hyperbolic torus $\mathbb{T}_{A}^{3}$ be the quotient of $\mathbb{T}^{2}\times\mathbb{R}$ by the equivalence relation which identifies $(m,t)$ to $(A(m),t+1)$, where $\mathbb{T}^{2}=\mathbb{R}^{2}/\mathbb{Z}^{2}$ is the torus. The flow generated by the vector field $w_{2}$ is a Riemannian foliation with bundle-like metric (letting $\left(x,s,t\right)$ denote local coordinates in the $w_{2}$ direction, $w_{1}$ direction, and $\mathbb{R}$ direction, respectively) $g=\lambda^{-2t}dx^{2}+\lambda^{2t}ds^{2}+dt^{2}.$ Since $A$ preserves the integral lattice $\mathbb{Z}^{2}$, it induces a diffeomorphism $A_{0}$ of the torus $\mathbb{T}^{2}$. So the flow generated by $w_{2}$ is invariant under the diffeomorphism $A_{0}$ of $\mathbb{T}^{2}$. Note that the mean curvature of the flow is $\kappa=\kappa_{B}=\log\left(\lambda\right)dt$, since $\chi_{\mathcal{F}}=\lambda^{-t}dx$ is the characteristic form and $d\chi_{\mathcal{F}}=-\log\left(\lambda\right)\lambda^{-t}dt\wedge dx=-\kappa\wedge\chi_{\mathcal{F}}$. It is well known [8] that all twisted basic de Rham cohomology groups vanish, that is, $H_{\kappa}^{r}(\mathcal{F})=\\{0\\}$ for $r=0,1,2$. Now we will show that all twisted basic Dolbeault cohomology groups satisfy the Kodaira-Serre duality. First, we note that an orthonormal frame field for this manifold is $\\{X=\lambda^{t}\partial_{x},S=\lambda^{-t}\partial_{s},T=\partial_{t}\\}$ corresponding to the orthonormal coframe $\\{X^{\ast}=\chi_{\mathcal{F}}=\lambda^{-t}dx,S^{\ast}=\lambda^{t}ds,T^{\ast}=dt\\}$. Then, letting $J$ be defined by $J(S)=T,J(T)=-S$, the Nijenhuis tensor $N_{J}(S,T)=[S,T]+J\left([JS,T]+[S,JT]\right)-[JS,JT]$ clearly vanishes, so that $J$ is integrable.
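The vanishing of $N_{J}(S,T)$ can be confirmed symbolically. In the sketch below (an illustration only, with transverse vector fields stored as coefficient pairs with respect to $\partial_{s}$ and $\partial_{t}$), the only nontrivial bracket is $[S,T]=(\log\lambda)S$, and the four terms of the Nijenhuis tensor cancel:

```python
import sympy as sp

s, t = sp.symbols('s t', real=True)
lam = sp.symbols('lambda', positive=True)

# A transverse vector field a*d/ds + b*d/dt is stored as the pair (a, b).
def bracket(X, Y):
    a1, b1 = X
    a2, b2 = Y
    DX = lambda g: a1*sp.diff(g, s) + b1*sp.diff(g, t)
    DY = lambda g: a2*sp.diff(g, s) + b2*sp.diff(g, t)
    return (sp.simplify(DX(a2) - DY(a1)), sp.simplify(DX(b2) - DY(b1)))

# Frame S = lambda^{-t} d/ds, T = d/dt with J(S) = T, J(T) = -S; extending J
# linearly, a*d/ds + b*d/dt = (a*lam^t) S + b T is sent to
# -b*lam^{-t} d/ds + a*lam^t d/dt.
def Jmap(X):
    a, b = X
    return (sp.simplify(-b*lam**(-t)), sp.simplify(a*lam**t))

def add(X, Y):
    return (sp.simplify(X[0] + Y[0]), sp.simplify(X[1] + Y[1]))

S = (lam**(-t), sp.Integer(0))
T = (sp.Integer(0), sp.Integer(1))

# N_J(S,T) = [S,T] + J([JS,T] + [S,JT]) - [JS,JT]
inner = add(bracket(Jmap(S), T), bracket(S, Jmap(T)))
JST = bracket(Jmap(S), Jmap(T))
N = add(add(bracket(S, T), Jmap(inner)), (-JST[0], -JST[1]))
print(N)   # (0, 0): the Nijenhuis tensor vanishes, so J is integrable
```

By bilinearity and antisymmetry of $N_{J}$, checking it on the frame pair $(S,T)$ suffices in this one-complex-dimensional transverse model.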
The corresponding transverse Kähler form is seen to be $\omega=T^{\ast}\wedge S^{\ast}=\lambda^{t}dt\wedge ds=d(\frac{1}{\log\lambda}S^{\ast})$, an exact form in basic cohomology. From the above, $\kappa_{B}=-i\left(\log\lambda\right)Z^{\ast}+i\left(\log\lambda\right)\bar{Z}^{\ast},$ where $Z^{\ast}=\frac{1}{2}(S^{\ast}+iT^{\ast})\in\Omega_{B}^{1,0}(\mathcal{F})$. Then $\displaystyle\kappa_{B}^{1,0}$ $\displaystyle=$ $\displaystyle-i\log\left(\lambda\right)Z^{\ast}=-\frac{i}{2}\left(\log\lambda\right)\left(\lambda^{t}ds+idt\right)$ $\displaystyle\bar{\partial}_{B}\kappa_{B}^{1,0}$ $\displaystyle=$ $\displaystyle d\kappa_{B}^{1,0}=\left(\log\lambda\right)^{2}\bar{Z}^{\ast}\wedge Z^{\ast}$ $\displaystyle\bar{\partial}_{\kappa}\kappa_{B}^{1,0}$ $\displaystyle=$ $\displaystyle\bar{\partial}_{B}\kappa_{B}^{1,0}-\frac{1}{2}\kappa_{B}^{0,1}\wedge\kappa_{B}^{1,0}=\frac{1}{2}\bar{\partial}_{B}\kappa_{B}^{1,0},$ since $\kappa_{B}^{0,1}\wedge\kappa_{B}^{1,0}=\left(\log\lambda\right)^{2}\bar{Z}^{\ast}\wedge Z^{\ast}$. It is impossible to change the metric so that $\bar{\partial}_{B}\kappa_{B}^{1,0}=0$. Hence $\mathcal{F}$ is nontaut [16, Corollary 5.12]. The basic Dolbeault cohomology groups are given by $H_{B}^{0,0}=\mathbb{R},\ H_{B}^{1,0}=\\{0\\},\ H_{B}^{0,1}=\mathbb{R}$ and $H_{B}^{1,1}=\\{0\\}$. Observing that the ordinary basic cohomology Betti numbers for this foliation are $h_{B}^{0}=h_{B}^{1}=1$ and $h_{B}^{2}=0$, we see that the basic Dolbeault Betti numbers satisfy $h_{B}^{0,0}=h_{B}^{0,1}=1,\quad h_{B}^{1,0}=h_{B}^{1,1}=0.$ So even though it is true that $h_{B}^{j}=\sum_{r+s=j}h_{B}^{r,s},$ and the foliation is transversely Kähler, we also have (with $n=1$) $h_{B}^{r,s}\neq h_{B}^{s,r},\quad h_{B}^{r,s}\neq h_{B}^{n-r,n-s}.$ Thus, for a nontaut, transverse Kähler foliation, it is not necessarily true that the odd basic Betti numbers are even, and the basic Dolbeault numbers do not have the same kinds of symmetries as Dolbeault cohomology on Kähler manifolds. Now we compute the twisted basic Dolbeault cohomology groups $H_{\kappa}^{\ast,\ast}\left(\mathcal{F}\right)$.
Let $f\in H_{\kappa}^{0,0}(\mathcal{F})$, that is, $f$ is a periodic function of $t$ and $\bar{\partial}_{\kappa}f=0$. Equivalently, $\displaystyle\bar{\partial}_{B}f=\frac{1}{2}f\kappa_{B}^{0,1}.$ (3.21) On the other hand, since $\bar{\partial}_{B}f\in\Omega_{B}^{0,1}(\mathcal{F})$, by a direct calculation, $\bar{\partial}_{B}f=if^{\prime}(t)\overline{Z}^{*}$. Hence from (3.21), $f^{\prime}(t)\overline{Z}^{*}=\frac{1}{2}f(\log\lambda)\overline{Z}^{*}$. That is, $f^{\prime}=\frac{1}{2}(\log\lambda)f$. Then $f=c\lambda^{t/2}$ for some constant $c$. Since $f$ is periodic, $f(t)=0$. Hence $\displaystyle H_{\kappa}^{0,0}(\mathcal{F})=\\{0\\}.$ Let $\varphi\in H_{\kappa}^{1,0}(\mathcal{F})$. That is, $\varphi=f(t)Z^{*}\in\Omega_{B}^{1,0}(\mathcal{F})$ and $\bar{\partial}_{\kappa}\varphi=0$ for a periodic function $f$. Hence $\displaystyle\bar{\partial}_{B}\varphi=\frac{1}{2}\kappa_{B}^{0,1}\wedge\varphi.$ (3.22) By a direct calculation, $\bar{\partial}_{B}\varphi=i\big(f^{\prime}+(\log\lambda)f\big)\overline{Z}^{*}\wedge Z^{*}$. So from (3.22), $\displaystyle f^{\prime}=-\frac{1}{2}(\log\lambda)f,$ and so $f(t)=c\lambda^{-{1\over 2}t}$ for some $c\in\mathbb{R}$. Since $f$ is periodic, it is zero. Thus, $\varphi=0$. That is, $\displaystyle H_{\kappa}^{1,0}(\mathcal{F})=\\{0\\}.$ By complex conjugation, we get $\displaystyle H_{\kappa}^{0,1}(\mathcal{F})=\\{0\\}.$ Let $\varphi\in H_{\kappa}^{1,1}(\mathcal{F})$. Then $\varphi\in\Omega_{B}^{1,1}(\mathcal{F})$ is of the form $\varphi=f(t)Z^{*}\wedge\overline{Z}^{*}$, where $f$ is a periodic function. Trivially, $\bar{\partial}_{\kappa}\varphi=0$. Now, let $\bar{\partial}_{\kappa}^{*}\varphi=0$. Since $\displaystyle\delta_{\kappa}\varphi$ $\displaystyle=\delta_{T}\varphi+\frac{1}{2}\kappa^{\sharp}\lrcorner\varphi$ $\displaystyle=-{i\over 2}(f^{\prime}-\frac{3}{4}(\log\lambda)f)(Z^{*}+\overline{Z}^{*}),$ we have $\bar{\partial}_{\kappa}^{*}\varphi=-{i\over 2}(f^{\prime}-\frac{3}{4}(\log\lambda)f)Z^{*}$.
Thus the only periodic solution of $f^{\prime}-\frac{3}{4}(\log\lambda)f=0$ is $f=0$. Hence $\displaystyle H_{\kappa}^{1,1}(\mathcal{F})=\\{0\\}.$ This shows that the Kodaira-Serre duality and the Hodge isomorphism are satisfied for the twisted basic Dolbeault cohomology.

### 3.3. The $d_{\kappa}d_{\kappa}^{c}$ Lemma

Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation of codimension $2n$ on a compact Riemannian manifold $M$ with bundle-like metric. First, we recall the operator $C:\Omega_{B}^{*}(\mathcal{F})\to\Omega_{B}^{*}(\mathcal{F})$ defined by [16] $C=\sum_{0\leq r,s\leq n}(\sqrt{-1})^{r-s}P_{r,s},$ where $P_{r,s}:\Omega_{B}^{*}(\mathcal{F})\to\Omega_{B}^{r,s}(\mathcal{F})$ is the projection. Then $C^{*}=C^{-1}=\sum_{0\leq r,s\leq n}(\sqrt{-1})^{s-r}P_{r,s}.$ Now, we define $d_{\kappa}^{c}:\Omega_{B}^{r}(\mathcal{F})\to\Omega_{B}^{r+1}(\mathcal{F})$ by $d_{\kappa}^{c}=C^{*}d_{\kappa}C=C^{-1}d_{\kappa}C.$ Then $d_{\kappa}^{c}=\sqrt{-1}(\bar{\partial}_{\kappa}-\partial_{\kappa})$ and $d_{\kappa}d_{\kappa}^{c}|_{\Omega_{B}^{*}(\mathcal{F})}=-d_{\kappa}^{c}d_{\kappa}|_{\Omega_{B}^{*}(\mathcal{F})}.$ Let $\delta_{\kappa}^{c}$ be the adjoint operator of $d_{\kappa}^{c}$, which is given by $\delta_{\kappa}^{c}=C^{*}\delta_{\kappa}C=C^{-1}\delta_{\kappa}C.$ Let $\Delta_{\kappa}^{c}=d_{\kappa}^{c}\delta_{\kappa}^{c}+\delta_{\kappa}^{c}d_{\kappa}^{c}$. Since $\Delta_{\kappa}$ preserves the type of differential form, we have $\Delta_{\kappa}^{c}=C^{-1}\Delta_{\kappa}C=\Delta_{\kappa}.$ ###### Lemma 3.19. ($d_{\kappa}d_{\kappa}^{c}$ Lemma) Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation on a compact manifold $M$ with a bundle-like metric. Then on $\Omega_{B}^{*}(\mathcal{F})$, $\ker d_{\kappa}\cap{\rm Im}\;d_{\kappa}^{c}={\rm Im}\;d_{\kappa}d_{\kappa}^{c}.$ ###### Proof. Let $\alpha\in\ker d_{\kappa}\cap{\rm Im}\;d_{\kappa}^{c}\cap\Omega_{B}^{r}(\mathcal{F})$.
That is, for some basic $(r-1)$-form $\beta$, $\alpha=d_{\kappa}^{c}\beta$ and $\beta=\gamma+d_{\kappa}\gamma_{1}+\delta_{\kappa}\gamma_{2}$ with $\gamma\in\mathcal{H}_{\kappa}^{r-1}(\mathcal{F})$ by the Hodge decomposition (Theorem 2.5). Since $M$ is compact, by Theorem 3.5, $\Delta_{\kappa}\gamma=0$ implies $\square_{\kappa}\gamma=\overline{\square}_{\kappa}\gamma=0$. Therefore, $\partial_{\kappa}\gamma=\bar{\partial}_{\kappa}\gamma=0$ and so $d_{\kappa}^{c}\gamma=0$. Hence $\displaystyle\alpha$ $\displaystyle=d_{\kappa}^{c}\beta$ $\displaystyle=d_{\kappa}^{c}\gamma+d_{\kappa}^{c}d_{\kappa}\gamma_{1}+d_{\kappa}^{c}\delta_{\kappa}\gamma_{2}$ $\displaystyle=d_{\kappa}^{c}d_{\kappa}\gamma_{1}+d_{\kappa}^{c}\delta_{\kappa}\gamma_{2}$ $\displaystyle=d_{\kappa}d_{\kappa}^{c}(-\gamma_{1})+d_{\kappa}^{c}\delta_{\kappa}\gamma_{2}.$ (3.23) Moreover, since $d_{\kappa}\alpha=0$, by the equation above, $\displaystyle 0=d_{\kappa}d_{\kappa}^{c}\beta=d_{\kappa}d_{\kappa}^{c}\delta_{\kappa}\gamma_{2}=-d_{\kappa}\delta_{\kappa}d_{\kappa}^{c}\gamma_{2}.$ The last equality follows from $d_{\kappa}^{c}\delta_{\kappa}+\delta_{\kappa}d_{\kappa}^{c}=0$ (Theorem 3.5). By integrating, $\displaystyle 0=\int_{M}\langle d_{\kappa}\delta_{\kappa}d_{\kappa}^{c}\gamma_{2},d_{\kappa}^{c}\gamma_{2}\rangle=\int_{M}\|\delta_{\kappa}d_{\kappa}^{c}\gamma_{2}\|^{2}=\int_{M}\|d_{\kappa}^{c}\delta_{\kappa}\gamma_{2}\|^{2}.$ That is, $d_{\kappa}^{c}\delta_{\kappa}\gamma_{2}=0$. So from (3.23), $\displaystyle\alpha=d_{\kappa}d_{\kappa}^{c}(-\gamma_{1}),$ which implies that $\alpha\in{\rm Im}\;d_{\kappa}d_{\kappa}^{c}$. ∎ ###### Remark 3.20. If $\mathcal{F}$ is taut, then the $d_{\kappa}d_{\kappa}^{c}$ Lemma implies the $dd^{c}$ Lemma [14, Lemma 7.3].

### 3.4. $\Delta_{\kappa}$-harmonic forms

Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation on a compact Riemannian manifold $M$ with a bundle-like metric.
We define two operators $\displaystyle\nabla_{T}^{*}\nabla_{T}\phi$ $\displaystyle=-\sum_{a}\nabla_{V_{a}}\nabla_{\bar{V}_{a}}\phi+\nabla_{H^{0,1}}\phi,$ $\displaystyle\bar{\nabla}_{T}^{*}\bar{\nabla}_{T}\phi$ $\displaystyle=-\sum_{a}\nabla_{\bar{V}_{a}}\nabla_{V_{a}}\phi+\nabla_{H^{1,0}}\phi.$ Then by a direct calculation, we have $\displaystyle\nabla_{T}^{*}\nabla_{T}\phi=\bar{\nabla}_{T}^{*}\bar{\nabla}_{T}\phi+\nabla_{H^{0,1}-H^{1,0}}\phi-\sum_{a}R^{Q}(V_{a},\bar{V}_{a})\phi$ (3.24) for any basic form $\phi$. Then the operators $\nabla_{T}^{*}\nabla_{T}$ and $\bar{\nabla}_{T}^{*}\bar{\nabla}_{T}$ are formally self-adjoint and positive-definite [15]. ###### Proposition 3.21. [15] Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation on a Riemannian manifold $M$ with a bundle-like metric. Then for all $\phi\in\Omega_{B}^{r,s}(\mathcal{F})$, $\displaystyle\overline{\square}_{B}\phi$ $\displaystyle=\nabla_{T}^{*}\nabla_{T}\phi+\sum_{a,b}\bar{\omega}^{a}\wedge\bar{V}_{b}\lrcorner R^{Q}(V_{b},\bar{V}_{a})\phi+\sum_{a}\bar{\omega}^{a}\wedge(\nabla_{\bar{V}_{a}}H^{0,1})\lrcorner\,\phi.$ (3.25) From (3.24), we have the following. ###### Proposition 3.22. [15] Let $(M,\mathcal{F},g_{Q},J)$ be as in Proposition 3.21. $(1)$ If $\phi$ is a basic form of type $(r,0)$, then $\displaystyle\overline{\square}_{B}\phi$ $\displaystyle=\nabla_{T}^{*}\nabla_{T}\phi$ (3.26) $\displaystyle=\bar{\nabla}_{T}^{*}\bar{\nabla}_{T}\phi+\nabla_{H^{0,1}-H^{1,0}}\phi-\sum_{a}R^{Q}(V_{a},\bar{V}_{a})\phi.$ (3.27) $(2)$ If $\phi$ is a basic form of type $(r,n)$, then $\displaystyle\overline{\square}_{B}\phi$ $\displaystyle=\nabla_{T}^{*}\nabla_{T}\phi+\sum_{a}R^{Q}(V_{a},\bar{V}_{a})\phi+\operatorname{div}_{\nabla}(H^{0,1})\phi$ (3.28) $\displaystyle=\bar{\nabla}_{T}^{*}\bar{\nabla}_{T}\phi+\nabla_{H^{0,1}-H^{1,0}}\phi+\operatorname{div}_{\nabla}(H^{0,1})\phi.$ (3.29) On the other hand, by a direct calculation, we have the following. ###### Proposition 3.23.
On a transverse Kähler foliation, we have $\displaystyle\overline{\square}_{\kappa}$ $\displaystyle=\overline{\square}_{B}-\frac{1}{2}\Big{(}\epsilon(\kappa_{B}^{0,1})\bar{\partial}_{B}^{*}+\bar{\partial}_{B}^{*}\epsilon(\kappa_{B}^{0,1})\Big{)}-\frac{1}{2}\Big{(}\bar{\partial}_{B}H^{0,1}\lrcorner+H^{0,1}\lrcorner\bar{\partial}_{B}\Big{)}+\frac{1}{2}|\kappa_{B}^{0,1}|^{2}.$ From Proposition 3.21 and Proposition 3.23, we have the following. ###### Proposition 3.24. On a transverse Kähler foliation, we have $\displaystyle\overline{\square}_{\kappa}\phi$ $\displaystyle=\nabla_{T}^{*}\nabla_{T}\phi+\sum_{a,b}\overline{\omega}^{a}\wedge\overline{V}_{b}\lrcorner R^{Q}(V_{b},\overline{V}_{a})\phi+\sum_{a}\overline{\omega}^{a}\wedge(\nabla_{\overline{V}_{a}}H^{0,1})\lrcorner\phi$ $\displaystyle-\frac{1}{2}\Big{(}\epsilon(\kappa_{B}^{0,1})\bar{\partial}_{B}^{*}+\bar{\partial}_{B}^{*}\epsilon(\kappa_{B}^{0,1})\Big{)}\phi-\frac{1}{2}\Big{(}\bar{\partial}_{B}H^{0,1}\lrcorner+H^{0,1}\lrcorner\bar{\partial}_{B}\Big{)}\phi+\frac{1}{2}|\kappa_{B}^{0,1}|^{2}\phi.$ ###### Remark 3.25. Proposition 3.24 was also shown in [9, Theorem 3.1], but the expression is a little bit different. The authors in [9] proved the vanishing theorem of transversally holomorphic basic $(r,0)$-forms by using the Weitzenböck formula for the twisted basic Laplacian $\overline{\square}_{\kappa}$ [9, Theorem 3.4]. In this research, we deal with the vanishing theorem of $\bar{\partial}_{\kappa}$-holomorphic $(r,0)$-forms (Corollary 3.31 below). ###### Proposition 3.26. [16] Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation on a closed Riemannian manifold $M$. Then there exists a bundle-like metric compatible with the Kähler structure such that $\partial_{B}^{*}\kappa_{B}^{1,0}=0$, or $\bar{\partial}_{B}^{*}\kappa_{B}^{0,1}=0$. ###### Lemma 3.27.
For any $\phi\in\Omega_{B}^{r,0}(\mathcal{F})$, we get $\displaystyle\bar{\partial}_{B}^{*}\epsilon(\kappa_{B}^{0,1})\phi=-\nabla_{H^{1,0}}\phi.$ ###### Proof. From (3.12) and (3.13), for any $\phi\in\Omega_{B}^{r,0}(\mathcal{F})$, since $\overline{V}_{a}\lrcorner\phi=0$, we have $\displaystyle\bar{\partial}_{B}^{*}\epsilon(\kappa_{B}^{0,1})\phi$ $\displaystyle=-\sum_{a}\overline{V}_{a}\lrcorner\nabla_{V_{a}}\kappa_{B}^{0,1}\wedge\phi-\sum_{a}\kappa_{B}^{0,1}(\overline{V}_{a})\nabla_{V_{a}}\phi+H^{0,1}\lrcorner\epsilon(\kappa_{B}^{0,1})\phi$ $\displaystyle=(\bar{\partial}_{T}^{*}\kappa_{B}^{0,1})\wedge\phi-\nabla_{H^{1,0}}\phi+|\kappa_{B}^{0,1}|^{2}\phi.$ By Proposition 3.26, if we choose the bundle-like metric such that $\bar{\partial}_{B}^{*}\kappa_{B}^{0,1}=0$, then the proof follows. ∎ From Proposition 3.26 and Lemma 3.27, we get ###### Theorem 3.28. On a transverse Kähler foliation, the following hold: (1) If $\phi\in\Omega_{B}^{r,0}(\mathcal{F})$, then $\displaystyle\overline{\square}_{\kappa}\phi$ $\displaystyle=\nabla_{T}^{*}\nabla_{T}\phi-\frac{1}{2}\bar{\partial}_{B}^{*}\epsilon(\kappa_{B}^{0,1})\phi-\frac{1}{2}H^{0,1}\lrcorner\bar{\partial}_{B}\phi+\frac{1}{2}|\kappa_{B}^{0,1}|^{2}\phi$ (3.30) $\displaystyle=\overline{\nabla}_{T}^{*}\overline{\nabla}_{T}\phi+\sum_{a}R^{Q}(\overline{V}_{a},V_{a})\phi+\nabla_{H^{0,1}}\phi+\frac{1}{2}\bar{\partial}_{B}^{*}\epsilon(\kappa_{B}^{0,1})\phi-\frac{1}{2}H^{0,1}\lrcorner\bar{\partial}_{B}\phi+\frac{1}{2}|\kappa_{B}^{0,1}|^{2}\phi.$ (3.31) (2) If $\phi\in\Omega_{B}^{r,n}(\mathcal{F})$, then $\displaystyle\overline{\square}_{\kappa}\phi$ $\displaystyle=\overline{\nabla}_{T}^{*}\overline{\nabla}_{T}\phi-\nabla_{H^{1,0}}\phi+\frac{1}{2}\nabla_{H^{0,1}}\phi+\frac{1}{2}{\rm div}_{\nabla}(H^{0,1})\phi-\frac{1}{2}\kappa_{B}^{0,1}\wedge\bar{\partial}_{B}^{*}\phi+\frac{1}{2}|\kappa_{B}^{0,1}|^{2}\phi$ (3.32)
$\displaystyle=\nabla_{T}^{*}\nabla_{T}\phi+\sum_{a}R^{Q}(V_{a},\overline{V}_{a})\phi-\frac{1}{2}\nabla_{H^{0,1}}\phi-\frac{1}{2}\kappa_{B}^{0,1}\wedge\bar{\partial}_{B}^{*}\phi+\frac{1}{2}{\rm div}_{\nabla}(H^{0,1})\phi+\frac{1}{2}|\kappa_{B}^{0,1}|^{2}\phi.$ (3.33) ###### Proof. Let $\phi$ be a basic form of type $(r,0)$. Since $Z\lrcorner\phi=0$ for any $Z\in Q^{0,1}$, from Proposition 3.24, (3.30) is proved. Now, we choose the bundle-like metric such that the mean curvature form is basic harmonic, that is, $\bar{\partial}_{B}^{*}\kappa_{B}^{0,1}=0$. From Lemma 3.27 and (3.24), the proof of (3.31) follows. From Proposition 3.22(2) and Proposition 3.23, the proofs of (3.32) and (3.33) follow. ∎ ###### Proposition 3.29. Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation on a compact manifold $M$ with a bundle-like metric. If $\phi\in\Omega_{B}^{r,0}(\mathcal{F})$ is a $\Delta_{\kappa}$-harmonic form, then $\displaystyle\nabla_{V}\phi=0$ for any $V\in\Gamma Q^{0,1}$. ###### Proof. Since $\Delta_{\kappa}\phi=0$, by Theorem 3.5, $\overline{\square}_{\kappa}\phi=0$ and so $\bar{\partial}_{\kappa}\phi=0$. That is, $\bar{\partial}_{B}\phi=\frac{1}{2}\kappa_{B}^{0,1}\wedge\phi$.
Since $\phi$ is of type $(r,0)$, we have $\displaystyle H^{0,1}\lrcorner\bar{\partial}_{B}\phi=\frac{1}{2}H^{0,1}\lrcorner(\kappa_{B}^{0,1}\wedge\phi)=\frac{1}{2}|\kappa_{B}^{0,1}|^{2}\phi.$ From (3.30), we get $\displaystyle\nabla_{T}^{*}\nabla_{T}\phi-\frac{1}{2}\bar{\partial}_{B}^{*}\epsilon(\kappa_{B}^{0,1})\phi+\frac{1}{4}|\kappa_{B}^{0,1}|^{2}\phi=0.$ (3.34) By integrating (3.34), we get $\displaystyle 0$ $\displaystyle=\int_{M}\|\nabla_{T}\phi\|^{2}-\frac{1}{2}\int_{M}\langle\kappa_{B}^{0,1}\wedge\phi,\bar{\partial}_{B}\phi\rangle+\frac{1}{4}\int_{M}|\kappa_{B}^{0,1}|^{2}\|\phi\|^{2}$ $\displaystyle=\int_{M}\|\nabla_{T}\phi\|^{2}-\frac{1}{4}\int_{M}\|\kappa_{B}^{0,1}\wedge\phi\|^{2}+\frac{1}{4}\int_{M}|\kappa_{B}^{0,1}|^{2}\|\phi\|^{2}.$ (3.35) Since $H^{0,1}\lrcorner\phi=0$ for any $\phi\in\Omega_{B}^{r,0}(\mathcal{F})$, we get $\displaystyle\langle\kappa_{B}^{0,1}\wedge\phi,\kappa_{B}^{0,1}\wedge\phi\rangle=\langle\phi,H^{0,1}\lrcorner(\kappa_{B}^{0,1}\wedge\phi)\rangle=|\kappa_{B}^{0,1}|^{2}\|\phi\|^{2}.$ From (3.35), we get $\displaystyle\int_{M}\|\nabla_{T}\phi\|^{2}=0,$ which implies $\nabla_{T}\phi=0$, that is, for any $V\in\Gamma Q^{0,1}$, $\nabla_{V}\phi=0$. ∎ ###### Theorem 3.30. Let $(M,\mathcal{F},g_{Q},J)$ be a transverse Kähler foliation on a compact manifold $M$ with a bundle-like metric. (1) If the transverse Ricci curvature is nonnegative and positive at some point, then $\mathcal{H}_{\kappa}^{r,0}(\mathcal{F})=\{0\}$ for all $r>0$. (2) If $\mathcal{F}$ is transversely Ricci-flat and non-taut, then $\mathcal{H}_{\kappa}^{r,0}(\mathcal{F})=\{0\}$ for all $r>0$. ###### Proof. Let $\phi$ be a $\Delta_{\kappa}$-harmonic form of type $(r,0)$.
If we choose the bundle-like metric such that $\bar{\partial}_{B}^{*}\kappa_{B}^{0,1}=0$, then from (3.24), Lemma 3.27 and Proposition 3.29, $\displaystyle\bar{\nabla}_{T}^{*}\bar{\nabla}_{T}\phi+\bar{\partial}_{B}^{*}\epsilon(\kappa_{B}^{0,1})\phi-\sum_{a}R^{Q}(V_{a},\bar{V}_{a})\phi=0.$ (3.36) Note that $(\sum_{a}R^{Q}(\bar{V}_{a},V_{a}))\omega^{b}=Ric^{Q}(E_{b},E_{b})\omega^{b}$ [15, Remark 4.7]. By integrating (3.36), we get $\displaystyle\int_{M}\|\bar{\nabla}_{T}\phi\|^{2}+\int_{M}|\kappa_{B}^{0,1}|^{2}\|\phi\|^{2}+\sum_{i=1}^{r}\int_{M}R^{Q}(E_{a_{i}},E_{a_{i}})\|\phi\|^{2}=0.$ If $Ric^{Q}$ is nonnegative and positive at some point, then $\phi=0$. This proves (1). If $Ric^{Q}=0$, then $\displaystyle|\kappa_{B}^{0,1}|\|\phi\|=0.$ So if $\mathcal{F}$ is non-taut, then $\phi=0$, which finishes the proof of (2). ∎ ###### Corollary 3.31. Let $(M,\mathcal{F},g_{Q},J)$ be as in Theorem 3.30 with a transversely Ricci-flat foliation. If $\mathcal{F}$ is non-taut, then $\displaystyle H^{r,0}_{\kappa}(\mathcal{F})\cong H_{\kappa}^{0,r}(\mathcal{F})\cong H_{\kappa}^{n,s}(\mathcal{F})\cong H_{\kappa}^{s,n}(\mathcal{F})=\{0\}$ for $r,s>0$. ###### Proof. The proofs follow from complex conjugation, Kodaira-Serre duality and the Dolbeault isomorphism. ∎ The vanishing theorems for the basic harmonic space were proved in [9] and [15], respectively. ## References * [1] H. Ait Haddou, _Foliations and Lichnerowicz basic cohomology_ , Int. Math. Forum 2 (2007), no. 49-52, 2437-2446. * [2] J. A. Álvarez-López, _The basic component of the mean curvature of Riemannian foliations_ , Ann. Global Anal. Geom. 10 (1992), 179–194. * [3] A. Banyaga, _Some properties of locally conformal symplectic structures_ , Comment. Math. Helv. 77 (2002), no. 2, 383-398. * [4] Y. Carrière, _Flots riemanniens_ , Astérisque 116 (1984), 31–52. * [5] L. A. Cordero and R. A.
Wolak, Properties of the basic cohomology of transversally Kähler foliations, Rendiconti del Circolo Matematico di Palermo, Serie II, 40 (1991), 177-188. * [6] A. El Kacimi-Alaoui, _Opérateurs transversalement elliptiques sur un feuilletage riemannien et applications_ , Compositio Math. 73 (1990), no. 1, 57-106. * [7] A. El Kacimi-Alaoui and G. Hector, _Décomposition de Hodge basique pour un feuilletage riemannien_ , Ann. Inst. Fourier 36, 3(1986), 207-227. * [8] G. Habib and K. Richardson, _Modified differentials and basic cohomology for Riemannian foliations_ , J. Geom. Anal. 23 (2013), no. 3, 1314-1342. * [9] G. Habib and L. Vezzoni, _Some remarks on Calabi-Yau and hyper-Kähler foliations_ , Differential Geom. Appl. 41 (2015), 12-32. * [10] J. Hebda, _Curvature and focal points in Riemannian foliations_ , Indiana Univ. Math. J. 35 (1986), no. 2, 321-331. * [11] S. Haller and T. Rybicki, _On the group of diffeomorphisms preserving a locally conformal symplectic structure_ , Ann. Global Anal. and Geom. 17 (1999), 475-502. * [12] S. D. Jung, The first eigenvalue of the transversal Dirac operator, J. Geom. Phys. 39 (2001), 253-264. * [13] S. D. Jung and M. J. Jung, Transversally holomorphic maps between Kähler foliations, J. Math. Anal. Appl. 416 (2014), 683-697. * [14] S. D. Jung and K. Richardson, Transverse conformal Killing forms and a Gallot-Meyer theorem for foliations, Math. Z. 270 (2012), 337-350. * [15] S. D. Jung and K. Richardson, Basic Dolbeault cohomology and Weitzenböck formulas on transversally Kähler foliations, Journal of Topology and Analysis (https://doi.org/10.1142/S1793525320500260). * [16] S. D. Jung and K. Richardson, The mean curvature of transverse Kähler foliations, Documenta Math. 24 (2019), 995-1031. * [17] F. W. Kamber and Ph. Tondeur, Duality for Riemannian foliations, Proc. Sympos. Pure Math., Amer. Math. Soc. 40 (1983), Part I, 609-618. * [18] F. W. Kamber and Ph.
Tondeur, Duality theorems for foliations, Asterisque 116 (1984), 108-116. * [19] F. W. Kamber and Ph. Tondeur, De Rham-Hodge theory for Riemannian foliations, Math. Ann. 277 (1987), 415-431. * [20] P. March, M. Min-Oo and E. A. Ruh, Mean curvature of Riemannian foliations, Canad. Math. Bull. 39 (1996), 95-105. * [21] S. Nishikawa and Ph. Tondeur, Transversal infinitesimal automorphisms for harmonic Kähler foliations, Tohoku Math. J. 40 (1988), 599-611. * [22] E. Park and K. Richardson, _The basic Laplacian of a Riemannian foliation_ , Amer. J. Math. 118 (1996), 1249–1275. * [23] Ph. Tondeur, Foliations on Riemannian manifolds, Springer-Verlag, New York, 1988. * [24] Ph. Tondeur, Geometry of foliations, Birkhäuser Verlag, 1997. * [25] I. Vaisman, _Remarkable operators and commutation formulas on locally conformal Kähler manifolds_ , Compositio Math. 40 (1980), no. 3, 287-299.
# FlowReg: Fast Deformable Unsupervised Medical Image Registration using Optical Flow111Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf. Sergiu Mocanu<EMAIL_ADDRESS> Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada Alan R. Moody<EMAIL_ADDRESS> Department of Medical Imaging, University of Toronto, Toronto, ON, Canada April Khademi<EMAIL_ADDRESS> Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada. Keenan Research Center for Biomedical Science, St. Michael’s Hospital, Unity Health Network, Toronto, ON, Canada. Institute for Biomedical Engineering, Science and Technology (iBEST), a partnership between St. Michael’s Hospital and Ryerson University, Toronto, ON, Canada ###### Abstract In this work we propose FlowReg, a deep learning-based framework that performs unsupervised image registration for neuroimaging applications. The system is composed of two architectures that are trained sequentially: FlowReg-A, which affinely corrects for gross differences between moving and fixed volumes in 3D, followed by FlowReg-O, which performs pixel-wise deformations on a slice-by-slice basis for fine tuning in 2D. FlowReg-A warps the moving volume using gross global parameters to align rotation, scale, shear, and translation to the fixed volume. A correlation loss that encourages global alignment between the moving and the fixed volumes is employed to regress the affine parameters.
The deformable network FlowReg-O operates on 2D image slices and is based on the optical flow CNN network that is adapted to neuroimaging with three loss components. The photometric loss minimizes pixel intensity differences, the smoothness loss encourages similar magnitudes between neighbouring vectors, and a correlation loss that is used to maintain the intensity similarity between fixed and moving image slices. The proposed method is compared to four open source registration techniques ANTs, Demons, SE, and Voxelmorph for FLAIR MRI applications. In total, $4643$ FLAIR MR imaging volumes (approximately $255,000$ image slices) are used from dementia and vascular disease cohorts, acquired from over 60 international centres with varying acquisition parameters. To quantitatively assess the performance of the registration tools, a battery of novel validation metrics are proposed that focus on the structural integrity of tissues, spatial alignment, and intensity similarity. Experimental results show FlowReg (FlowReg-A+O) performs better than iterative-based registration algorithms for intensity and spatial alignment metrics with a Pixelwise Agreement (PWA) of $0.65$, correlation coefficient (R) of $0.80$, and Mutual Information (MI) of $0.29$. Among the deep learning frameworks evaluated, FlowReg-A or FlowReg-A+O provided the highest performance over all but one of the metrics. Results show that FlowReg is able to obtain high intensity and spatial similarity between the moving and the fixed volumes while maintaining the shape and structure of anatomy and pathology. Our code is available at https://github.com/IAMLAB-Ryerson/FlowReg. Keywords: Registration, unsupervised, deep learning, CNNs, neuroimaging, FLAIR-MRI, registration validation ## 1 Introduction Magnetic resonance imaging (MRI) offers non-invasive visualization of soft tissue that is ideal for imaging the brain. 
The etiology and pathogenesis of neurodegeneration, and the effects of treatment options have been heavily investigated in T1, T2, Proton Density (PD), Diffusion Weighted (DW), and Fluid-Attenuated Inversion Recovery (FLAIR) MR sequences (Udaka et al., 2002)(Hunt et al., 1989)(Sarbu et al., 2016)(Trip and Miller, 2005)(Guimiot et al., 2008) for dementia (Oppedal et al., 2015) and Alzheimer’s Disease (AD) (Kobayashi et al., 2002). As cerebrovascular disease (CVD) has been shown to be a leading cause of dementia, there is growing interest in examining cerebrovascular risk factors in the brain using neuroimaging. CVD markers, such as white matter lesions (WML), predict cognitive decline, dementia, stroke, death, and WML progression increases these risks (Debette and Markus, 2010), (Alber et al., 2019). Research into the vascular contributions to dementia and neurodegeneration could be valuable for developing new therapies (Frey et al., 2019), (Alber et al., 2019), (Griffanti et al., 2018), (Malloy et al., 2007). FLAIR MRI is preferred for WML analysis (Wardlaw et al., 2013) (Badji and Westman, 2020) since the usual T2 high signal from cerebrospinal fluid (CSF) is suppressed, highlighting the white matter disease (WMD) high signal. This is due to increased water content secondary to ischemia and demyelination, and is much more robustly seen in FLAIR than with T1 and T2 sequences (Lao et al., 2008), (Khademi et al., 2011), (Kim et al., 2008), (Piguet et al., 2005). One family of algorithms heavily relied upon in neuroimaging research is image registration, which is the process of aligning two images (one fixed and one moving) so they are in the same geometric space.
Structures and changes between two images can be directly compared when images are registered to align longitudinal scans of the same patient to assess disease change over time (Csapo et al., 2012), (El-Gamal et al., 2016), map patient images to an atlas for template-based segmentation (Iglesias and Sabuncu, 2015), (Phellan et al., 2014), or to correct for artifacts such as patient motion and orientation (Mani and Arivazhagan, 2013). As medical images are composed of highly relevant anatomical and pathological structures such as brain tissue, ventricles, and WMLs, it is important that the shape and relative size of each structure is maintained in the registered output. MR images are highly complex, so obtaining correspondence while maintaining structural and anatomical integrity presents a challenging task. Traditionally, the process of registration, or aligning two images, has been framed as an optimization problem, which searches for the transform $T$ between a moving $(I_{m})$ and a fixed $(I_{f})$ image by optimizing some similarity criterion between the fixed and moving images $T=\arg\max_{T}\mathcal{C}\left(I_{f},T\left(I_{m}\right)\right)$. This optimization can be calculated via gradient descent and ends when maximum similarity is found or a maximum number of iterations is reached. The similarity between the fixed and (transformed) moving images is calculated via a cost function $\mathcal{C}$, such as mutual information (MI), cross-correlation (CC), and mean-squared error (MSE) (Maes et al., 1997) (Avants et al., 2008). Registrations can be done globally via affine transformation (translation, rotation, shear, and scale) or on a per-pixel level through the use of non-uniform deformation fields (each pixel in the moving image has a target movement vector).
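As a concrete toy illustration of this iterative formulation, the following NumPy sketch searches over integer translations $T$ for the shift that maximizes a correlation cost $\mathcal{C}$; all names here are illustrative and not part of any registration package:

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation between two images -- one choice of cost C."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12)

def register_translation(fixed, moving, max_shift=5):
    """Exhaustive search for T = argmax_T C(fixed, T(moving)) over integer shifts."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = correlation(fixed, np.roll(moving, (dy, dx), axis=(0, 1)))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

fixed = np.zeros((32, 32)); fixed[10:20, 10:20] = 1.0
moving = np.roll(fixed, (3, -2), axis=(0, 1))  # moving is shifted 3 down, 2 left
print(register_translation(fixed, moving))     # recovers the inverse shift (-3, 2)
```

In practice the search space also includes rotation, shear, scale, and per-pixel deformations, and the optimization uses gradient descent rather than exhaustive search, but the structure of the problem is the same.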
Registration algorithms that involve an iterative-based approach are computationally expensive and any calculated characteristics are not saved after an intensive computational procedure; the transformation parameters are discarded and not used for the next pair of images. In large multi-centre datasets this can create long computation times or non-optimal transformations. To overcome the non-transferable nature of traditional image registration algorithms, machine learning models that learn transformation parameters between images have been gaining interest (Cao et al., 2017) (Sokooti et al., 2017) (Balakrishnan et al., 2018). Recently, several convolutional neural network-based (CNN) medical image registration algorithms have emerged to address the non-transferable nature, lengthy execution and high computation cost of the classic iterative-based approaches (Balakrishnan et al., 2018; Uzunova et al., 2017). In 2017, researchers adapted an optical flow CNN model, FlowNet (Dosovitskiy et al., 2015), to compute the deformation field between temporally spaced images (Uzunova et al., 2017). Although promising, this approach required ground truth deformation fields during training, which is intractable for large clinical datasets. To overcome these challenges, Voxelmorph was developed as a completely unsupervised CNN-based registration scheme that learns a transformation without labeled deformation fields (Balakrishnan et al., 2018). Further work by Fan et al. (2018) has shown that Generative Adversarial Networks (GAN) can be used to generate deformation fields. These fields are then used to warp the moving image until the discriminator is unable to distinguish between the registered and fixed image. Others have suggested a sequential affine and deformable 3D network for brain MRI registration (Zhu et al., 2020). In another work, Zhao et al.
(2019) proposed a fully unsupervised method based on CNNs that includes cascaded affine and deformable networks to perform alignment in 3D in one framework. The proposed method is inspired by these pioneering works but instead an affine alignment is performed in 3D first, followed by a 2D fine-tuning on a slice-by-slice basis. Global differences such as head angle or brain size can vary significantly between patients and these global differences are likely to be mainly realized in 3D. Additionally, there are local and fine anatomical differences that are more visible on a per slice basis. Therefore, to get maximal alignment between neuroimages, both global differences in 3D and local differences in 2D should be addressed. Other design considerations include dependence on ground truth data which is impractical to obtain for large datasets. Lastly, and importantly, registration algorithms must maintain the structural integrity of important objects such as WML and ventricles. To this end, this paper proposes a CNN-based registration method called FlowReg: Fast Deformable Unsupervised Image Registration using Optical Flow that addresses these design considerations in a unified framework. FlowReg is composed of an affine network FlowReg-A for alignment of gross head differences in 3D and a secondary deformable registration network FlowReg-O that is based on the optical flow CNNs (Ilg et al., 2017) (Yu et al., 2016) for fine movements of pixels in 2D, thus managing both global and local differences at the same time. In contrast to previous works that perform affine and deformable registration strictly in 3D, we postulate that performing affine registration in 3D followed by 2D refinement will result in higher quality registrations for neuroimages. FlowReg is fully unsupervised, and ground truth deformations are not required. The loss functions are modified to ensure optimized performance for neuroimaging applications. 
Lastly, this framework does not require preprocessing such as brain extraction and is applied to the whole head of neuroimaging scans. As an additional contribution, a battery of novel validation metrics are proposed to quantify registration performance from a clinically salient perspective. Validation of registration performance is not a simple task and many methods have employed manually placed landmarks (Balakrishnan et al., 2018), (Uzunova et al., 2017), (de Vos et al., 2019). In medical image registration, it is of interest to maintain the structural integrity of anatomy and pathology in the moving images, while obtaining maximum alignment with the fixed volume. To measure this, three groups of metrics are proposed: structural integrity, spatial alignment, and intensity similarity. Structural (tissue) integrity is measured via volumetric and structural analysis with the proportional volume (PV), volume ratios $\Delta V$ and surface-surface distance (SSD) metrics. Spatial alignment is measured via pixelwise agreement (PWA), head angle (HA) and Dice similarity coefficient (DSC). Intensity similarity is measured using traditional metrics: mutual information (MI) and correlation coefficient (Pearson’s R) as well as an additional new metric, the mean-intensity difference (MID). The performance of FlowReg is compared to four established and openly available image registration methods: Demons (Thirion, 1995, 1998; Pennec et al., 1999), Elastix (Klein et al., 2010), ANTs (Avants et al., 2008), and Voxelmorph (Balakrishnan et al., 2018). Three are traditional non-learning iterative-based registration methods while Voxelmorph is a CNN-based registration tool. Performance is measured over a large and diverse multi-institutional dataset collected from over 60 imaging centres worldwide of subjects with dementia (ADNI) (Mueller et al., 2005) and vascular disease (CAIN) (Tardif et al., 2013).
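Of the listed metrics, the Dice similarity coefficient has the standard closed form $DSC=2|A\cap B|/(|A|+|B|)$; a minimal NumPy sketch on binary masks (hypothetical arrays, not the paper's evaluation code):

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-12):
    """Dice similarity coefficient between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 pixels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 pixels, 9 overlapping
print(round(dice(a, b), 4))  # 2*9 / (16+16) = 0.5625
```

A DSC of 1 indicates perfect overlap of the registered and fixed masks, and 0 indicates no overlap.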
There are roughly 270,000 images and 4900 imaging volumes in these datasets with a range of imaging acquisition parameters. The rest of the paper is structured as follows: Section 2.1 describes the FlowReg architecture and Section 2.2 outlines the validation metrics. The data used, experimental setup, and results are shown in Section 3 followed by the discussions and conclusions in Section 4 and Section 5. ## 2 Methods In this section, we introduce FlowReg, Fast Deformable Unsupervised Image Registration using Optical Flow, with a focus on neuroimaging applications. Given a moving image volume denoted by $M(x,y,z)$, where $M$ is the intensity at the $(x,y,z)\in\mathbb{Z}^{3}$ voxel, and the fixed image volume $F(x,y,z)$, the system automatically learns the registration parameters for many image pairs, and uses that knowledge to predict the transformation $T$ for new testing images. The section closes with a battery of novel registration validation metrics focused on structural integrity, intensity and spatial alignment. Figure 1: FlowReg consists of affine (FlowReg-A) and optical flow (FlowReg-O) networks. ### 2.1 FlowReg FlowReg is based exclusively on CNN architectures, and the alignment is fully unsupervised, meaning that registration parameters are regressed without ground truth knowledge. Registration with FlowReg is completed with a two-phase approach shown in Fig. 1. FlowReg-A is a 3D affine registration network that corrects for global differences and FlowReg-O refines the affine registration results on a per-slice basis through a 2D optical flow-based registration method from the video processing field (Dosovitskiy et al., 2015), (Ilg et al., 2017), (Garg et al., 2016), (Yu et al., 2016). The affine component is trained first and the affine parameters are obtained for each volume. Once all the volumes are registered using the affine components, FlowReg-O is trained to obtain the deformation fields.
The held-out test set is used to test the full FlowReg pipeline end-to-end. #### 2.1.1 FlowReg-A: Affine Registration in 3D Figure 2: FlowReg-A model structure. A pair of 3D input volumes are concatenated, each yellow box represents the output of 3D convolutional layers, the numbers at the bottom of the box are the number of feature maps generated by each convolutional kernel. The last layer (purple) is a fully connected layer with 12 nodes and a linear activation function. The proposed affine model FlowReg-A warps the moving volume using gross global parameters to align head rotation, scale, shear, and translation to the fixed volume. This is beneficial when imaging the brain in 3D, since the orientation of subjects’ heads can vary. Additionally, images are often of different physical dimensions depending on the scanner type and parameters used for acquisition. To normalize these global differences, we propose a completely unsupervised CNN-based 3D affine registration method (i.e. volume registration), where the transformation parameters are learned. The CNN network used to regress the affine matrix parameters is shown in Figure 2 and described in Table 6. The network architecture and hyperparameter selection is similar to the encoder arm of the FlowNet-Simple network, with changes made to the input size and the number of 3D convolution kernels. The affine model is composed of six convolutional layers and one fully-connected layer which is used to regress the flattened version of the three-dimensional affine matrix, $A$: $A=\begin{bmatrix}a&b&c&d\\\ e&f&g&h\\\ i&j&k&l\end{bmatrix},$ (1) where $A$ contains the rotation, scale, and translation parameters. Given this affine transformation matrix, the original image volume may be transformed by $M_{w}(x,y,z)=A\times M(x,y,z)$.
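The warp $M_{w}=A\times M$ amounts to pushing each voxel coordinate, in homogeneous form, through the $3\times 4$ matrix $A$ and sampling the moving volume at the mapped location; a nearest-neighbour NumPy sketch (illustrative only — a network implementation would use a differentiable resampling layer):

```python
import numpy as np

def affine_warp(moving, A):
    """Warp a 3D volume with a 3x4 affine matrix A (nearest-neighbour sampling)."""
    A = np.asarray(A, dtype=float)
    shape = moving.shape
    coords = np.indices(shape).reshape(3, -1)                    # output voxel grid
    homog = np.vstack([coords, np.ones((1, coords.shape[1]))])   # homogeneous coords
    src = np.rint(A @ homog).astype(int)                         # mapped source coords
    inside = np.all((src >= 0) & (src < np.array(shape)[:, None]), axis=0)
    out = np.zeros(moving.size, dtype=moving.dtype)
    out[inside] = moving[src[0, inside], src[1, inside], src[2, inside]]
    return out.reshape(shape)

vol = np.random.rand(4, 4, 4)
identity = np.hstack([np.eye(3), np.zeros((3, 1))])
shift_x = np.hstack([np.eye(3), [[1.0], [0.0], [0.0]]])  # sample one voxel over in x
print(np.allclose(affine_warp(vol, identity), vol))      # True: identity is a no-op
```

With `shift_x`, each output voxel samples the voxel one step further along the first axis, so the volume appears shifted; voxels mapped outside the volume are filled with zeros.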
To solve for the transformation parameters $A$, a correlation loss was used to encourage an overall alignment of the mean intensities between the moving and the fixed volumes: $\begin{split}\ell_{\operatorname{corr}_{3D}}\left(F,M_{w}\right)=1-\frac{\sum_{i=1}^{N}\left(F_{i}-\overline{F}\right)\left(M_{w_{i}}-\overline{M_{w}}\right)}{\sqrt{\sum_{i=1}^{N}\left(F_{i}-\overline{F}\right)^{2}}\sqrt{\sum_{i=1}^{N}\left(M_{w_{i}}-\overline{M_{w}}\right)^{2}}}\end{split}$ (2) where $F$ is the fixed volume, $M_{w}$ is the moving volume warped with the calculated matrix, $N$ is the number of voxels in the volume, $F_{i}$ is the $i^{th}$ element in $F$, $M_{w_{i}}$ is the $i^{th}$ element in $M_{w}$, and $\overline{F}$, $\overline{M_{w}}$ are the mean intensities of the fixed and moving volumes, respectively. Using the correlation loss function, the parameters selected during training ensure global alignment between the fixed and moving image volumes. #### 2.1.2 FlowReg-O: Optical Flow-based Registration in 2D The optical flow component of the registration framework, FlowReg-O, is used to perform fine-tuning of the affine registration results on a slice-by-slice basis (i.e. in 2D). FlowReg-O is an adapted version of the original optical flow FlowNet architecture, used in video processing frameworks. Optical flow is a vector field that quantifies the apparent displacement of a pixel between two temporally separated images. A video is composed of a number of frames at a certain frame rate, $\frac{F}{s}$, and the optical flow measures the motion of objects and pixels across frames and can be used to calculate the velocity of objects in a scene (Horn and Schunck, 1981). For the task of medical image registration, instead of aligning neighbouring frames, we will be aligning moving and fixed images.
The same principles as the original optical flow framework are adopted here, where the displacement vector is found and used to warp pixels between moving and fixed images in 2D. The proposed deformable registration network is identical to the FlowNet Simple architecture in terms of the convolutional layers and hyperparameter selection, but adapted to grayscale medical image sizes and content. See Fig. 3 for the FlowReg-O architecture, which is also described in Table 7. The original FlowNet architecture was implemented on the synthetically generated dataset “Flying Chairs” with known optical flow values for ground truths, thus dissimilarities are calculated as a simple endpoint-error (EPE) (Dosovitskiy et al., 2015; Ilg et al., 2017). Since then, unsupervised methods have been proposed to train optical flow regressor networks based on a loss that compares a warped image using the regressed flow and its corresponding target image with the use of Spatial Transformer Networks (Jaderberg et al., 2015) (Yu et al., 2016). In medical imaging, the optical flow ground truths are impossible to obtain, so the same unsupervised principle is adopted here, but the losses have been tailored to medical images. In addition to photometric and smoothness loss components which were used in the original work (Yu et al., 2016), FlowReg-O utilizes an additional correlation loss term, with each loss encouraging overall similarity between the fixed and moving images while maintaining small 2D movements in the displacement field. Figure 3: FlowReg-O model structure. A pair of 2D input images are concatenated first followed by 2D convolutional layers (yellow). Numbers below layers correspond to the number of feature maps. Skip connections between the upscaling decoder (blue) arm are concatenated (gray boxes) with the output of the encoder layers. The flow at seven resolutions are labeled with flow above the corresponding outputs. I is the input image resolution $256\times 256$.
The total loss function is a summation of three components: the photometric loss $\ell_{photo}$, which keeps photometric similarity through the Charbonnier function, the smoothness loss $\ell_{smooth}$, which ensures the deformation field is smooth (and limits sharp discontinuities in the vector field), and the correlation loss $\ell_{corr}$, which was added to enforce global similarity in the intensities between the moving and fixed images. The total loss for FlowReg-O is $\begin{split}\mathcal{L}(\mathbf{u,v};F(x,y),M_{w}(x,y))=&\gamma\cdot\ell_{photo}(\mathbf{u,v};F(x,y),M_{w}(x,y))+\\ &\zeta\cdot\ell_{corr}(\mathbf{u,v};F(x,y),M_{w}(x,y))+\lambda\cdot\ell_{smooth}(\mathbf{u,v})\end{split}$ (3) where $\mathbf{u,v}$ are the estimated horizontal and vertical vector fields, $F(x,y)$ is the fixed image, $M_{w}(x,y)=M(x+u,y+v)$ is the warped moving image, and $\gamma$, $\zeta$, and $\lambda$ are weighting hyper-parameters. The photometric loss, adopted from Yu et al. (2016), is the difference between intensities of the fixed image and the warped moving image and evaluates to what degree the predicted optical flow is able to warp the moving image to match the intensities of the fixed image on a pixel-by-pixel basis: $\begin{split}\ell_{photo}(\mathbf{u,v};&F(x,y),M_{w}(x,y))=\frac{1}{N}\sum_{i,j}\rho(F(i,j)-M_{w}(i,j))\end{split}$ (4) where $N$ is the number of pixels and $\rho$ is the Charbonnier penalty function, which is used to reduce contributions of outliers. The Charbonnier penalty is defined by: $\rho(x)=(x^{2}+\epsilon^{2})^{\alpha}$ (5) where $x=(F-M_{w})$, $\epsilon$ is a small value ($0.001$), and $\alpha$ regulates the difference in intensities between the moving and fixed images such that large differences can be damped to keep the magnitudes of the deformation vectors within a reasonable limit. The effect of the $\alpha$ parameter on the Charbonnier function is shown in Fig. 4.
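The Charbonnier penalty of Eq. (5) and the photometric loss of Eq. (4) built on it can be sketched as follows (a NumPy illustration using the paper's $\epsilon=0.001$; the function names are ours, and the real losses operate on network tensors):

```python
import numpy as np

def charbonnier(x, alpha=0.2, eps=1e-3):
    """Charbonnier penalty of Eq. (5): rho(x) = (x^2 + eps^2)^alpha.
    With alpha < 1 large differences are damped; eps keeps the
    function smooth at x = 0."""
    return (x ** 2 + eps ** 2) ** alpha

def photometric_loss(fixed, moving_warped, alpha=0.2):
    """Photometric loss of Eq. (4): mean Charbonnier penalty of the
    pixelwise difference between the fixed image and the warped
    moving image."""
    return charbonnier(fixed - moving_warped, alpha).mean()
```

With $\alpha=0.5$ the penalty reduces to a smoothed absolute value ($\rho(2)\approx 2$), while smaller $\alpha$ flattens the response to large residuals, which is exactly the damping behaviour discussed above.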
For smaller $\alpha$ values, the Charbonnier function suppresses the output magnitude, which is used to regress finer movements in the displacement field. Figure 4: Charbonnier function (Eqn. 5) for $\alpha=$ $0.5$, $0.4$, $0.3$, and $0.2$. The smoothness loss is implemented to regularize the flow field. This loss component encourages small differences between neighbouring flow vectors in the height and width directions and is defined by $\begin{split}\ell_{smooth}(\mathbf{u,v})=\sum_{j}^{H}\sum_{i}^{W}[&\rho(u_{i,j}-u_{i+1,j})+\rho(u_{i,j}-u_{i,j+1})+\\ &\rho(v_{i,j}-v_{i+1,j})+\rho(v_{i,j}-v_{i,j+1})],\end{split}$ (6) where $H$ and $W$ are the number of rows and columns in the image, $u_{i,j}$ and $v_{i,j}$ are the displacement vectors for pixel $(i,j)$, and $\rho$ is the Charbonnier function. This loss measures the difference between local displacement vectors and minimizes the chances of optimizing to a large displacement between neighbouring pixels. Lastly, we added an additional correlation loss component to encourage an overall alignment of the mean intensities between the moving and the fixed 2D images (similar to FlowReg-A), as in: $\begin{split}\ell_{\operatorname{corr}_{2D}}\left(F,M_{w}\right)=1-\frac{\sum_{i=1}^{N}\left(F_{i}-\overline{F}\right)\left(M_{w_{i}}-\overline{M_{w}}\right)}{\sqrt{\sum_{i=1}^{N}\left(F_{i}-\overline{F}\right)^{2}}\sqrt{\sum_{i=1}^{N}\left(M_{w_{i}}-\overline{M_{w}}\right)^{2}}},\end{split}$ (7) where $F$ is the fixed image, $M_{w}$ is the moving image warped with the calculated flow, $N$ is the number of pixels in the image, $F_{i}$ is the $i^{th}$ element in $F$, $M_{w_{i}}$ is the $i^{th}$ element in $M_{w}$, and $\overline{F}$, $\overline{M_{w}}$ are the mean intensities of the fixed and moving images, respectively. A summary of the loss components and how they are implemented for the FlowReg-O network is shown in Fig. 5. Figure 5: Overview of loss components for the deformable registration network, FlowReg-O.
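The smoothness loss of Eq. (6) can be sketched as below. One assumption to flag: Eq. (6) does not specify how boundary pixels are handled, so this sketch simply drops the out-of-range neighbour terms; the function name is ours:

```python
import numpy as np

def charbonnier(x, alpha=0.2, eps=1e-3):
    """Charbonnier penalty of Eq. (5)."""
    return (x ** 2 + eps ** 2) ** alpha

def smoothness_loss(u, v, alpha=0.2):
    """Smoothness loss of Eq. (6): Charbonnier penalty on the
    differences between neighbouring flow vectors in both the
    horizontal and vertical directions, for both the u and v
    components of the flow field (boundary terms dropped)."""
    loss = 0.0
    for field in (u, v):
        # u_{i,j} - u_{i,j+1}: horizontal neighbours
        loss += charbonnier(field[:, :-1] - field[:, 1:], alpha).sum()
        # u_{i,j} - u_{i+1,j}: vertical neighbours
        loss += charbonnier(field[:-1, :] - field[1:, :], alpha).sum()
    return loss
```

A spatially constant flow field attains the minimum of this loss, while any ramp or discontinuity in either component is penalized, which is the regularizing behaviour described above.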
### 2.2 Validation Metrics Three categories of metrics are proposed, each measuring a clinically relevant aspect of registration accuracy: structural integrity, spatial alignment, and intensity similarity. The validation measures for each category are shown in Table 1 and the flow diagram to compute each of the metrics is shown in Figure 21. In total, nine metrics are computed, each summarized in Table 1. A more detailed explanation of how to compute each metric is available in Appendix B.

Table 1: Summary of validation metrics. $M(x,y,z)$, $F(x,y,z)$ and $A(x,y,z)$ are the moving, fixed, and generated atlas volumes respectively, with spatial coordinates $(x,y,z)$. $vol_{s}$ is the volume of a structure $s$, $N$ is the number of images, $b_{M}$ and $b_{F}$ are the brain masks of the moving and fixed volumes, and $p$, $q$ and $r$ are distances between pixels.

| Category | Metric | Equation |
|---|---|---|
| Structural Integrity | Proportional Volume | $PV=\frac{vol_{s}}{vol_{b}}$ |
| | Volume Ratio | $\Delta V_{s}=\frac{vol_{orig}}{vol_{reg}}$ |
| | Surface-Surface Distance | $SSD=\frac{1}{N}\left[\sum_{i=1}^{N}\mathrm{argmin}_{(x_{s},y_{s},z_{s})\in S}\left(\sqrt{p+q+r}\right)\right]$ |
| Spatial Alignment | Head Angle | $\theta$ ($\degree$) from midsagittal plane |
| | Pixel-Wise Agreement | $PWA(z)=\frac{1}{N_{j}}\frac{1}{N_{xy}}\sum_{j\in J}\sum_{(x,y)}(M_{j}(x,y,z)-F(x,y,z))^{2}$ |
| | Brain-DSC | $DSC=\frac{2|b_{M}\cap b_{F}|}{|b_{M}|+|b_{F}|}$ |
| Intensity Similarity | Mutual Information | $I(M;F)=\sum_{f\in F}\sum_{m\in M}p_{(M,F)}(m,f)\log\left(\frac{p_{(M,F)}(m,f)}{p_{M}(m)p_{F}(f)}\right)$ |
| | Correlation Coefficient | $r(M,F)=\frac{\sum_{i=1}^{n}(M_{i}-\overline{M})(F_{i}-\overline{F})}{\sqrt{\sum_{i=1}^{n}(M_{i}-\overline{M})^{2}}\sqrt{\sum_{i=1}^{n}(F_{i}-\overline{F})^{2}}}$ |
| | Mean Absolute Intensity Difference | $MAID(A,F)=\frac{1}{N_{i}}\sum_{i}|p_{A}(i)-p_{F}(i)|$ |

## 3 Experiments and Results In this section the data and the
experimental results are detailed. ### 3.1 Data The performance of FlowReg is evaluated on a large and diverse FLAIR MRI data repository. Over 270,000 FLAIR MR images were retrieved from two datasets, which together comprise roughly 5000 imaging volumes from over 60 international imaging centres. To the best of our knowledge, this is one of the largest FLAIR MRI datasets in the literature to be processed automatically. The first dataset is from the Canadian Atherosclerosis Imaging Network (CAIN) (Tardif et al., 2013) and is a pan-Canadian study of vascular disease. The second dataset is from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (Mueller et al., 2005), which is an international study of Alzheimer’s and related dementia pathologies. The acquisition and demographic information is shown in the Appendix in Tables 5 and 4. Based on image quality metrics supplied with the ADNI database, scans with large distortions, motion artifacts, or missing slices were excluded from the study. In total, 310 volumes were excluded based on these criteria. For training, validation and testing, an 80/10/10 data split was employed and volumes were randomly sampled from CAIN and ADNI. The resulting splits were 3714 training volumes (204,270 images), 465 validation volumes (25,575 images) and 464 test volumes (25,520 images). See Figure 19 for example slices from several volumes of the ADNI and CAIN test sets, exhibiting wide variability in intensity, contrast, anatomy and pathology. To measure the validation metrics proposed in Section 2.2, two sets of images are required. First, all volumes in the test set (464 volumes) are used to compute the intensity and spatial alignment metrics: HA, PWA, DSC, MI, Corr, and MAID. Second, to compute the structural integrity metrics (PV, volume ratio and SSD), binary segmentation masks of the structures of interest are required; 50 CAIN and 20 ADNI volumes were sampled randomly from the test set for this task.
Ventricles and WMLs are selected as the objects of interest since they represent clinically relevant structures that characterize neurodegeneration and aging. Manual segmentations of the ventricular and WML regions were generated by a medical student trained by a radiologist. These objects are used to examine structural integrity before and after registration. To generate brain tissue masks, the automated brain extraction method from (Khademi et al., 2020) is utilized to segment cerebral tissue in FLAIR MRI. The atlas used in this work as the fixed volume $F(x,y,z)$ has dimensions $256\times 256\times 55$ and is further detailed in (Winkler et al., ). The moving image volumes $M(x,y,z)$ come from the image datasets described in Table 5; no pre-processing was applied to any of the volumes other than resizing $M(x,y,z)$ to the atlas resolution ($256\times 256\times 55$) through bilinear interpolation. ### 3.2 Experimental Setup The FlowReg-A and FlowReg-O models were trained sequentially. First, FlowReg-A is trained using 3D volume pairs of $M(x,y,z)$ and $F(x,y,z)$; using the optimized model parameters, the training volume set is affinely registered to the atlas using the found transformation (Dalca et al., 2018). Subsequently, the globally aligned volumes are used to train FlowReg-O on a slice-by-slice basis, using paired moving $M(x,y)$ and fixed $F(x,y)$ images to obtain the fine-tuning deformation fields in 2D. For FlowReg-A and FlowReg-O the Adam optimizer was used (Kingma and Ba, 2014) with $\beta_{1}=0.9$, $\beta_{2}=0.999$, and a learning rate of $lr=10^{-4}$. FlowReg-A training was run for 100 epochs using a batch size of four pairs of volumes from $M(x,y,z)$ and $F(x,y,z)$.
FlowReg-O was trained using the globally aligned 2D images for 10 epochs using a batch size of 64 image pairs (2D) at seven two-dimensional resolutions: $256\times 256$, $128\times 128$, $64\times 64$, $32\times 32$, $16\times 16$, $8\times 8$, and $4\times 4$. The loss hyper-parameters were set as $\gamma=1$, $\zeta=1$, and $\lambda=0.5$, as per the original optical flow work (Yu et al., 2016). During the testing phase, the deformation field in the last layer of the decoding arm is used to warp the moving test images, as this resolution provides per-pixel movements and is generated at the same resolution as the input image. Using the trained models for the complete FlowReg pipeline, the testing performance is compared to that of VoxelMorph, ANTs, Demons and SimpleElastix. Training for the CNN models was performed using an NVIDIA GTX 1080Ti, with Keras (Chollet et al., 2015) as a backend to Tensorflow (Abadi et al., 2015) for FlowReg-A, FlowReg-O, and Voxelmorph. ANTs registration was performed in Python using Symmetric Normalization (SyN) with default values (Avants et al., 2008). The Demons algorithm was implemented in Python using SimpleITK (Johnson et al., 2013). Similarly, the Pythonic implementation of Elastix (Klein et al., 2010) was employed for SimpleElastix (Marstal et al., 2016) as an add-on to SimpleITK. As a preprocessing step prior to running Voxelmorph, the volumes were first affinely registered using ANTs-Affine. Voxelmorph was then trained for 37,000 iterations (to avoid observed overfitting) using the same training dataset utilized for training FlowReg. Voxelmorph performs 3D convolutions; thus resizing was necessary to keep the output of the decoder the same size as the pooling and upsampling layers. Both the atlas and the moving volumes were resized to $256\times 256\times 64$.
This resolution was chosen as a power of two ($2^{n}$) to ensure that the decoder arm of the U-net style network is able to rebuild the learned feature maps to the original input size and warp the volumes accordingly. To validate the registration methods, the metrics described in Section 2.2 and shown in Fig. 21 were used. The structural integrity validation metrics (PV, volume ratio and SSD) used binary masks for the corresponding brain, ventricle, and WML regions and the resultant deformation field or transformation from each registration method. The PV calculation includes PV from the ventricles and WMLs with respect to the whole brain volume. The SSD is computed between the ventricle surface and the brain surface only; WMLs are not included for this metric since small lesion loads and irregular WML boundaries can create large differences in the SSD which may not be related to overall integrity. Finally, for all registration methods and all test data, the large-scale metrics HA, PWA, DSC, MI, R and MAID were calculated between the registered volumes and the atlas for all testing volumes. Warped masks were binarized with a threshold of 0.1 to avoid the non-integer values obtained from interpolation. ### 3.3 Results In this section the experimental results are presented. First, the effect of $\alpha$ in the Charbonnier penalty function (Equation 5) for the optical flow photometric and smoothness loss functions in FlowReg-O was analyzed, since this parameter plays a major role in reducing over-fitting and overtly large deformations. Using the held-out validation set, the value of $\alpha$ is selected based on the effect this parameter has on the images, which includes visual analysis of the distortion in the images, the magnitude of the optical flow field, and the average flow magnitude. The model with the appropriate $\alpha$ is used to train the final model for the remaining experiments.
The registration performance of FlowReg-O is studied for $\alpha$ from $0.10$ to $0.45$ in $0.05$ increments by training the network, registering images from the holdout set, and visual inspection. See Figure 19 and Figure 20 in Appendix A for images and flow magnitudes at different $\alpha$. For $\alpha\geq 0.25$ there is more distortion and smearing within the brain, ventricle shapes are warped, and there is distortion of the WMLs. For $\alpha\leq 0.15$ there was little to no pixel movement. These findings are confirmed by computing the average flow magnitude per pixel in Figure 6, which shows low pixel movement for low $\alpha$ and large displacements for larger $\alpha$. Based on these findings, the "sweet spot" for the $\alpha$ value lies between $0.2$ and $0.25$, ensuring moderate pixel movement without distortion. To avoid overt movements and to be on the conservative side, we selected $\alpha=0.2$ for FlowReg-O. FlowReg-A and FlowReg-O with $\alpha=0.2$ are used for the final model. Figure 6: The average flow magnitude per pixel for FlowReg-O at various $\alpha$ values. Figure 7: Registration results for seven slices from a single volume. The top row is the atlas that is used as the fixed volume $F(x,y,z)$ and the second row contains the moving volume $M(x,y,z)$. The remaining rows show registration results of ANTs, Demons, SimpleElastix, VoxelMorph, FlowReg-A and FlowReg (FlowReg-A+O). Using the finalized FlowReg model, test performance was compared to that of each of the other registration methods using the proposed validation metrics. All of the testing data (464 volumes) were registered using the models and algorithms described previously. Example registration results for all of the methods are shown in Figure 7. Bottom, top and middle slices were chosen to show the spectrum of slices that need to be accurately warped and transformed from a 3D imaging perspective.
In the first row, slices from the fixed (atlas) volume $F(x,y,z)$ are shown, followed by the corresponding slices from the moving volume $M(x,y,z)$. The first column contains images with the ocular orbits and cerebellum in both the moving and fixed volumes. In the middle slices of the volume, the ventricles are visible, along with some periventricular WMLs. The top slice of the fixed volume is the top of the head and is included since it comprises small amounts of brain tissue. The remaining rows display the results of registering the moving images $M(x,y,z)$ to the fixed images $F(x,y,z)$ for each of the respective tools. For ANTs registration with the affine and deformable components (SyN), there is good alignment in the middle slices, but the top slice has some pixels missing, and the lower slices have the ocular orbits in multiple slices (which are not present in the atlas for these slices), indicating poor alignment in 3D. Demons exhibits large deformations for all three slices; therefore, this tool is not ideal for clinical applications involving FLAIR MRI. SimpleElastix seems to align the images over most spatial locations, except the lower slices, as they contain ocular orbits in slices that do not anatomically align with the atlas. Voxelmorph exhibits similar trends, with good alignment in the middle slices. The lower slices, however, contain ocular orbits in slices that are not present in the atlas. The remaining rows show results for the proposed work. First, the results of the affine component alone, FlowReg-A, are displayed. There is excellent alignment in the bottom and middle slices as well as in the top image slices, indicating high anatomical alignment with the atlas. When combining FlowReg-A with FlowReg-O in the last row, the overall similarity and alignment with the atlas are improved. The shape of the head is more similar to that of the fixed volume slices, and the top slice is more anatomically aligned.
Figure 8: Visual comparison between Voxelmorph and FlowReg-A+O. The top row contains the fixed volume $F(x,y,z)$ and the columns display sample registration results over four subjects for each method. To examine the effect of anatomical alignment between the two CNN methods (VoxelMorph and the proposed FlowReg), see Figure 8. The top row has the fixed volume $F(x,y,z)$ and the remaining columns contain sequential registration results for four subjects using VoxelMorph and FlowReg. FlowReg consistently provides a more accurate alignment of the slice data, and is more consistent in the anatomy it represents across cases. This is especially evident in the bottom slices, where images registered with FlowReg contain ocular orbits only in the correct slices, and in the top slices, where the top of the head is more accurate in size, shape and anatomy compared to Voxelmorph. To quantitatively compare performance across the methods, the averages of the evaluation metrics over the testing set are shown in Table 2. The results for each category of validation metric are described next. Table 2: Average registration performance (structural integrity, spatial alignment and intensity-based metrics). Bold values are best performance across methods, while underline is best among deep learning methods. $\simeq X$ indicates a value closest to $X$ is best, $\downarrow$ lowest value is best, $\uparrow$ highest value is best. $\Delta PV_{vent}\times 10^{-3}$, $\Delta PV_{wml}\times 10^{-3}$, $MAID\times 10^{-3}$, $MAID\text{-}zp\times 10^{-3}$.
| | | ANTs | Demons | SE | VM | FlowReg-A | FlowReg-A+O |
|---|---|---|---|---|---|---|---|
| Structural | $\Delta V_{brain}\simeq 1$ | 1.28 | 1.47 | 1.25 | 1.01 | 1.14 | 1.11 |
| | $\Delta V_{vent}\simeq 1$ | 1.27 | 1.31 | 1.60 | 0.64 | 0.96 | 0.86 |
| | $\Delta V_{wml}\simeq 1$ | 0.84 | 1.01 | 0.81 | 0.35 | 0.67 | 0.53 |
| | $\Delta PV_{vent}\simeq 0$ | 0.52 | -2.80 | 7.19 | -23.11 | -7.31 | -11.52 |
| | $\Delta PV_{wml}\simeq 0$ | -5.17 | -4.74 | -5.67 | -20.17 | -7.14 | -11.20 |
| | $\Delta SSD\simeq 0$ | -4.70 | -8.66 | -21.81 | 29.63 | 14.58 | 13.82 |
| Spatial | HA-$\varsigma\downarrow$ | 6.21 | 5.16 | 2.981 | 10.83 | 6.89 | 3.91 |
| | PWA-$\Sigma\downarrow$ | 1.24 | 2.40 | 1.92 | 1.47 | 1.15 | 0.65 |
| | Brain-DSC $\uparrow$ | 0.88 | 0.77 | 0.87 | 0.84 | 0.86 | 0.85 |
| Intensity | MI $\uparrow$ | 0.24 | 0.13 | 0.16 | 0.20 | 0.25 | 0.29 |
| | R $\uparrow$ | 0.64 | 0.41 | 0.39 | 0.60 | 0.65 | 0.80 |
| | MAID $\simeq 0$ | 5.24 | 6.26 | 5.99 | 5.54 | 5.08 | 5.33 |
| | MAID-zp $\simeq 0$ | 0.53 | 1.99 | 1.35 | 1.16 | 0.86 | 0.84 |

#### 3.3.1 Structural Integrity To ensure structures of interest are not distorted and integrity is maintained, the following structural integrity metrics are examined: the change in proportional volume ($\Delta PV$), the volume ratio ($\Delta V$), and the change in the surface-surface distance ($\Delta SSD$). The averages of the metrics are listed in Table 2 and the corresponding plots are shown in Figures 9, 10, and 11. PV measures the proportional volumes of WMLs and ventricles before and after each registration. It is quantified as the PV change, $\Delta PV$; the results are shown in Figure 10 while the average measures are in Table 2. Results nearest the zero line indicate the least change in PV compared to pre-registration and the least deformation. The PV metric shows that for all registration techniques the relative volumes of objects are mostly enlarged after deformation.
The only cases where structures decreased in size were the ventricles for the non-learning-based methods ANTs and SimpleElastix. The least distortion, as quantified by the PV difference for both ventricles and WMLs, is seen using the ANTs and Demons registration methods. The largest change in the WMLs and ventricles is seen with VoxelMorph. FlowReg-A and FlowReg-O slightly enlarge both ventricles and WMLs. FlowReg-A has a PV difference of $-7.3\times 10^{-3}$ for the ventricles and $-7.1\times 10^{-3}$ for the WMLs. FlowReg-A+O, the combination of the affine and optical flow steps, shows approximately a $-11.5\times 10^{-3}$ change in PV for the ventricles and a $-11.2\times 10^{-3}$ change in PV for the WML class. Since the two steps are performed sequentially, the difference between FlowReg-A and FlowReg-A+O gives the amount of change in PV contributed by FlowReg-O, which was found to be $4.2\times 10^{-3}$ and $4.1\times 10^{-3}$ for ventricles and WMLs, respectively. These values are closer to those of ANTs registration. The volume ratio metrics quantify how much the structures of interest have decreased ($\Delta V_{s}>1$) or increased ($\Delta V_{s}<1$) in volume after registration. Ideally, structures would remain the same size after registration ($\Delta V_{s}=1$). As shown in Table 2 and Figure 9, the volume ratio of the brain is most unchanged through registration with VoxelMorph, followed closely by the proposed work (FlowReg-A and the total pipeline FlowReg-A+O). The ventricles are most similar to the original using FlowReg-A and FlowReg-A+O, where the size of the ventricles is increased slightly. In contrast to traditional registration algorithms, where ventricles mostly decrease in size, both FlowReg and VoxelMorph increase the size of the ventricles; with VoxelMorph the size of the ventricles approximately doubles. In terms of WMLs, Demons was the most favourable as the WML volume was almost unchanged after registration.
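The mask-based structural integrity quantities discussed above can be sketched directly from binary segmentation masks (a NumPy illustration of the Table 1 definitions; function names are ours, not the paper's code):

```python
import numpy as np

def volume_ratio(mask_orig, mask_reg):
    """Volume ratio Delta V_s = vol_orig / vol_reg (Table 1):
    > 1 means the structure shrank after registration, < 1 means it
    was enlarged, and 1 means its volume is unchanged."""
    return mask_orig.sum() / mask_reg.sum()

def proportional_volume(mask_struct, mask_brain):
    """Proportional volume PV = vol_s / vol_b (Table 1): the volume
    of a structure (e.g. ventricles or WMLs) relative to the whole
    brain volume, computed from binary voxel masks."""
    return mask_struct.sum() / mask_brain.sum()
```

The $\Delta PV$ values reported in Table 2 would then be the difference between the proportional volume computed on the registered masks and on the original masks.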
This may be due to the fact that this registration scheme seemed to mainly warp the boundary surrounding the head. In terms of WML enlargement, the traditional registration methods are more favourable than the deep learning methods, with FlowReg-A providing the lowest volume increase of all the deep learning methods. Figure 9: Structural volumetric ratio for brain, ventricles and WMLs. Figure 10: Proportional volume (PV) difference of ventricles and WMLs. Figure 11: Surface-surface distance (SSD) percent change of ventricles. The third integrity metric considered is SSD, which measures the shape of an anatomic object (in this case the ventricles) with respect to the boundary of the brain. To measure the extent to which the shape of the ventricles has changed before and after registration, the percent change in SSD ($\Delta SSD$) is measured over the testing dataset and reported in Figure 11 for each of the registration methods. The lowest values are observed after registration using ANTs and Demons, followed by FlowReg-A+O. FlowReg-A+O has a change of around $13.8\%$ after registration; compared to FlowReg-A, the difference is about $1\%$, which is the assumed contribution from FlowReg-O alone. Since the majority of the warping is done in 2D and largely affects the outer region of the brain and head, this metric shows that FlowReg-O maintains the shape of the brain and ventricles. The highest structural change as measured by the SSD validation metric is observed with SimpleElastix and VoxelMorph. #### 3.3.2 Spatial Alignment Figure 12: Head angle ($HA$) measure. The blue solid line indicates the average $\mu$ and the green line the spread, $\sigma_{HA}$. Figure 12 displays the head angle (HA) results computed over all testing volumes. For best performance, the $HA$ would ideally be 0 (i.e. aligned with the midsagittal plane), with minimal spread across the dataset to indicate consistency.
The spread of the HA metric is denoted by $\varsigma_{HA}$ (average values can be found in Table 2) and is shown by the green line in Figure 12, which indicates three standard deviations away from the mean. A tighter clustering around 0 degrees indicates less deviation from the midsagittal plane (or lower HA over the entire registered dataset). As seen, the lowest spread from the mean is achieved by the SimpleElastix and FlowReg-A+O registration methods, indicating these methods produce the most consistent spatial alignment with the midsagittal plane. It is also noted that the performance of FlowReg-A+O compared to the affine-only FlowReg-A volumes shows a reduction in the spread of the HA through the application of the optical flow algorithm, which indicates that FlowReg-O improves overall alignment. The largest spread (or highest variability of the HA) is obtained by Voxelmorph. Pixelwise agreement (PWA) is measured by calculating the per-slice mean-squared error (MSE) when compared to the respective slices from the original atlas $F(x,y,z)$. A lower value of PWA indicates intensity and spatial alignment across slices in a registered dataset. PWA is computed on a slice-by-slice basis for slice $z$ by $PWA(z)$, which is summed over all slices in the volumes to get a volume-based PWA, $\sum_{z}PWA(z)$. The slice- and volume-based PWA for the registered testing dataset are reported in Fig. 13. The lowest PWA over all slices is achieved by FlowReg-A+O followed by FlowReg-A, indicating there is maximal intensity and spatial alignment across slices using the proposed work. The highest error is seen for Demons, SE, and VoxelMorph. Figure 13: Pixelwise agreement (PWA). Left: the slice-wise alignment error $PWA(z)$ as a function of slice number $z$ in the registered dataset. Right: average PWA over all slices $\sum_{z}PWA(z)$. The last spatial alignment measure investigated is the $DSC$ between the registered brain masks from the moving volumes and the brain mask of the fixed atlas.
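The PWA and DSC computations described above can be sketched as follows (a NumPy illustration of the Table 1 definitions; function names are ours, and volumes are assumed to share the atlas shape with the slice index last):

```python
import numpy as np

def pixelwise_agreement(registered_volumes, fixed):
    """Slice-wise PWA (Table 1): for each slice z, the mean squared
    error between that slice of every registered volume and the
    corresponding atlas slice, averaged over volumes and pixels.
    Returns an array PWA(z); summing it gives the volume-based PWA."""
    n_z = fixed.shape[2]
    pwa = np.zeros(n_z)
    for z in range(n_z):
        errs = [((vol[:, :, z] - fixed[:, :, z]) ** 2).mean()
                for vol in registered_volumes]
        pwa[z] = np.mean(errs)
    return pwa

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary brain masks,
    DSC = 2|A intersect B| / (|A| + |B|) (Brain-DSC in Table 1)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

A perfectly registered dataset gives $PWA(z)=0$ for every slice and a DSC of 1 between brain masks, which is why lower PWA and higher DSC indicate better spatial alignment.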
Figure 14 and Table 2 contain the average $DSC$ values for each registration method over the testing dataset. The largest agreement is for ANTs, SE, FlowReg-A, and FlowReg-A+O. The lowest spatial overlap comes from the Demons method. To visualize spatial alignment, a heatmap is generated for each method by averaging the binary masks of the same slice in the registered output. Figure 15 shows the heatmaps for a bottom, middle and top slice over all methods. As can be seen, there is consistency in the lower slices for FlowReg, as there are minimal ghosting artifacts in the heatmap. However, with other methods, such as Demons or VM, there are many areas with inconsistencies in the posterior regions (likely where the ocular orbits occur). In the middle slices, most methods seem to have good alignment, and the performance is somewhat comparable on the top slices. Figure 14: Dice similarity coefficient (DSC) between fixed and registered brain masks. Figure 15: The average brain mask generated by each registration method. Red areas indicate high agreement, and blue indicates poor agreement. #### 3.3.3 Intensity Alignment The last set of evaluation metrics investigated are the intensity alignment measures; the averages over the entire testing dataset are shown in Table 2. The intensity alignment measures, mutual information (MI) and correlation (R), investigate how the probability mass functions of the registered volumes compare to the atlas’ intensity distribution. The intensity profiles in neuroimages are related to anatomy and pathology. Fig. 16 shows boxplots of the MI and R metrics over the entire testing dataset. For both metrics, the highest MI and correlation are reported by FlowReg-A+O and FlowReg-A, followed by ANTs. Therefore, the proposed work maintains and matches the intensity histograms best of all competing methods. Figure 16: Intensity alignment validation over 464 testing volumes. Box and whisker plots. Left: mutual information.
Right: correlation coefficient. If a registration method generates images that have a high degree of spatial alignment, regions of high correspondence will have the same intensity characteristics reflected in the average. If different tissue regions are being averaged, however, there will be mixing of neighbouring tissues and therefore the intensity profile of the images will not be maintained. To measure this, registration-specific atlases are generated via synchronized averaging. The quality of these atlases $A(x,y,z)$ is quantitatively compared to the original template $F(x,y,z)$ by examining histogram differences via the mean absolute intensity difference (MAID). In FLAIR MRI, histogram peaks represent significant tissue regions such as brain matter and cerebrospinal fluid (CSF). These peaks should be aligned between the newly generated atlas $A(x,y,z)$ and the original atlas $F(x,y,z)$. Fig. 17 (left) shows the normalized intensity histograms of the atlases $A(x,y,z)$ compared to the histogram of the original fixed volume. It can be seen that the histograms of FlowReg-A+O and FlowReg-A are very similar to that of the atlas for the middle (major) peak (which corresponds to brain tissue). To quantitatively measure the similarity of histograms, the MAID is computed between the original and newly generated atlases in Figure 17 (middle); the lowest error is found with FlowReg-A. We analyze a second set of results for a thresholded histogram that removes the background noise peak. This MAID, computed on the histogram without the noise peak, is called $MAID\text{-}zp$ and is shown in Figure 17 (right). In these results, ANTs, FlowReg-A, and FlowReg-A+O provide the best performance, indicating good spatial and intensity alignment in the registered outputs for these methods. The performance of FlowReg-A+O and FlowReg-A is similar, indicating FlowReg-O does not distort the intensity histogram. The highest error is observed with Demons.
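The MAID comparison described above can be sketched as below. The bin count and range handling here are assumptions for illustration (the paper does not specify them); dropping the low bins mimics the $MAID\text{-}zp$ variant that nulls the background noise peak:

```python
import numpy as np

def maid(atlas, fixed, bins=256, zero_peak_cutoff=None):
    """Mean absolute intensity difference (Table 1):
    MAID = (1/N_i) * sum_i |p_A(i) - p_F(i)|, where p_A and p_F are
    the normalized intensity histograms (PMFs) of the generated atlas
    A and the original fixed atlas F over a shared bin range."""
    rng = (min(atlas.min(), fixed.min()), max(atlas.max(), fixed.max()))
    p_a, _ = np.histogram(atlas, bins=bins, range=rng)
    p_f, _ = np.histogram(fixed, bins=bins, range=rng)
    p_a = p_a / p_a.sum()      # normalize counts to a PMF
    p_f = p_f / p_f.sum()
    if zero_peak_cutoff is not None:
        # drop the low-intensity background bins (MAID-zp variant)
        p_a, p_f = p_a[zero_peak_cutoff:], p_f[zero_peak_cutoff:]
    return np.abs(p_a - p_f).mean()
```

Identical intensity distributions give a MAID of zero, so lower values indicate that the registration-specific atlas preserves the tissue peaks of the original template.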
Figure 17: Left: Intensity distribution histograms of the atlases $A(x,y,z)$ created through registering all test volumes per method. Middle: MAID computed between intensity PMFs of the generated atlas and the original atlas (fixed). Right: MAID for PMFs with the background nulled from bin $0$ to $20$ ($MAID_{zp}$). ## 4 Discussion Table 2 contains the summary of all the validation metrics. In comparison to all methods (ANTs, Demons, SE, VM), the proposed FlowReg framework (FlowReg-A+O) achieves the highest performance on the spatial alignment metric (PWA), which indicates excellent slice-to-slice correspondence between the registered datasets and the fixed volume. This can be attributed to the initial alignment of the volumes in 3D using the affine component, followed by the slice-by-slice refinement using optical flow in 2D that performs fine pixel movements. FlowReg also achieves high intensity similarity based on the MI and R metrics, which indicates the histograms of the registered volumes and that of the atlas are aligned. The correlation loss function may contribute to this phenomenon since it enforces global intensity similarity between images. Since intensity distributions are related to the tissue content of the images, if the histograms are more aligned in the registered images, subsequent analysis will be consistent and comparable across patients. FlowReg-A was also a top performer for several metrics, namely the volumetric ratio metric for the ventricles ($\Delta V_{vent}$) and the mean absolute intensity difference (MAID). The ventricular integrity metric indicates that the related shapes and volumes are maintained best using FlowReg-A. The intensity similarity can also be attributed to the correlation loss function in the affine network. Among the deep learning frameworks, either FlowReg-A or FlowReg-A+O outperforms Voxelmorph in all metrics except the structural integrity metric for the brain.
This may be due to the deformation field calculated by Voxelmorph, which was found to have lower vector magnitudes at the periphery of the head, indicating little displacement in these regions. Since Voxelmorph was trained and tested for other neuroimaging sequences (i.e., T1), perhaps this architecture is not suited for FLAIR neuroimages. Overall, FlowReg maintains anatomical and structural features while obtaining high intensity similarity to the fixed volume and excellent spatial alignment across the testing datasets. This can be attributed to several reasons. First, FlowReg-A performs the affine alignment in 3D, which globally deforms the volume to achieve maximal correspondence. FlowReg-O then calculates the 2D displacement field and refines the movement of pixels on a slice-by-slice basis. The optical flow model architecture has been adapted from the video processing field to medical images. The advantage of this approach is that it is able to calculate small differences and perform the refinements needed to obtain correspondence. The three components of the loss function (photometric, smoothness, and correlation) play separate but important roles. The photometric loss, a pixelwise difference, ensures that pixels are displaced to areas of similar intensities and acts as the refinement component. The smoothness loss encourages continuity of the calculated flow fields. The correlation loss operates on the overall histogram intensity alignment between the moving and fixed volumes. Finally, the Charbonnier penalty function was used to reduce the effect of gross outliers and ensure that the structural integrity of anatomy and pathology was maintained.
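The three loss components and the Charbonnier penalty described above can be sketched in numpy. The functional forms below are standard choices and are assumptions on our part: the paper's exact Eqn. 5 constants, loss weightings, and implementation details may differ, and `alpha`/`eps` are placeholder values.

```python
import numpy as np

def charbonnier(x, alpha=0.45, eps=1e-3):
    """Charbonnier penalty (x^2 + eps^2)^alpha: differentiable everywhere
    and sub-quadratic for large residuals, so gross outliers are damped."""
    return (x ** 2 + eps ** 2) ** alpha

def photometric_loss(warped, fixed, alpha=0.45):
    """Pixelwise intensity residual under the robust penalty (refinement term)."""
    return float(np.mean(charbonnier(warped - fixed, alpha)))

def smoothness_loss(flow, alpha=0.45):
    """Penalize first differences of a 2-channel flow field (H, W, 2),
    encouraging continuity of the deformation."""
    du = np.diff(flow, axis=0)
    dv = np.diff(flow, axis=1)
    return float(np.mean(charbonnier(du, alpha)) + np.mean(charbonnier(dv, alpha)))

def correlation_loss(moving, fixed, eps=1e-8):
    """1 - Pearson r between flattened volumes: one plausible form of a
    global intensity-similarity (correlation) term."""
    m = moving.ravel() - moving.mean()
    f = fixed.ravel() - fixed.mean()
    return 1.0 - float(m @ f / (np.sqrt((m @ m) * (f @ f)) + eps))
```

Each term pulls in a different direction: the photometric term drives local intensity matching, the smoothness term resists jagged flow fields, and the correlation term keeps the global histograms aligned.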
It can be seen that with the combination of these loss functions, FlowReg-A+O performs the best for the intensity measures, MI and R, and for the spatial alignment measure PWA, likely due to the complementary nature of the loss components. As the deformation vector field is generated for each pixel in the moving image, the photometric loss specifically minimizes the difference between similar-intensity pixels in the moving and fixed images and can therefore regress accurate vectors for pixel displacement. Similarly, the correlation loss component operates on the global intensity differences and contributes to accurate global flow regression. To investigate the relative run-time speed of each algorithm, each method was evaluated on ten randomly sampled volumes from the testing set. The average computation times per method for these ten volumes are shown in Table 3. Note that the times reported for FlowReg are for the two components FlowReg-A and FlowReg-O separately. For Voxelmorph, only the test time for the CNN network is shown. As can be seen, the fastest algorithm is FlowReg-A with an average time of $1.72$ seconds per 3D volume, followed by Voxelmorph at 4.31s and then FlowReg-O with 6.90s. Given the testing times for FlowReg-A and FlowReg-O, the total time to register a volume (3D) and all the slices (2D) is 8.62s. Compared to the traditional approaches, the proposed method is faster by 2.6-4.7$\times$. It is to be noted that the time reported for Voxelmorph is only the CNN network testing time; for optimal performance an affine transformation is required first using tools such as ANTs, which adds additional computation time. Further, all deep learning methods outperform the traditional iterative methods by a large margin, indicating that CNN-based registration tools, whose learned transformations transfer to new data, are efficient and effective.
As with many CNN-based systems, the challenge is the training time; the average training time on the 3714 training volumes for FlowReg-A was about 18 hours, while FlowReg-O took just over 5 days. As training can be completed offline, this is not a major concern for real-time applications.

Table 3: Average run-time for ten testing 3D volumes for each registration method.

ANTs | Demons | SE | VM-only | FlowReg-A | FlowReg-O
---|---|---|---|---|---
27.81s | 22.77s | 40.33s | 4.31s | 1.72s | 6.90s

Another interesting observation is that for WML structural integrity, all CNN-based solutions perform poorly compared to classic iterative-based approaches. Hyperintense lesions are usually the brightest spots in a brain MRI; thus, when performing an alignment to a template (such as an averaged atlas), these hyperintense regions will be represented as areas of high dissimilarity. A network with a loss function that attempts to mitigate pixel differences between the two volumes will attempt to over-correct these areas and displace the pixels in undesirable ways. This displacement will change the registered volume's WMLs from a structural and volumetric perspective. A possible solution that can be investigated to mitigate this problem in future works is lesion inpainting (Sdika and Pelletier, 2009) prior to registration. Inpainting frameworks mask the lesions with the average intensities of the surrounding brain tissue. However, this approach would require either manual or automatic segmentation of WMLs, which is an active area of research. Additionally, future works could also consider the incorporation of skull-stripping as a preprocessing step to remove all non-cerebral tissues. This could improve performance since a lot of the warping occurs in regions where pixel intensities are high, which correspond to extra-cranial structures.
If these areas are removed, it is possible that more of the pixel movement would be focused on resolving areas of higher differences in the brain. When considering all validation groups and metrics, the traditional iterative-based registration methods perform well over many of the structural integrity metrics such as $\Delta V_{wml}$, $\Delta PV_{vent}$, $\Delta PV_{wml}$, indicating that these methods do not deform the WML and ventricles as much as the CNN-based methods. In terms of Demons, although it performs the best according to the $\Delta V$ and $\Delta PV$ metrics, when visually examining several volumes as depicted in Figure 7, the cerebral tissue appears warped in a manner that is uncharacteristic of FLAIR MRI. Further, structural changes are visible around the edges of the sulci, and the WML themselves have been smeared and blended with the remainder of the gray matter of the brain. A limitation of the design is noted in the FlowReg-A registration when it comes to $\Delta V$ and $\Delta PV$. As FlowReg-A operates on the volume using an affine matrix, a global transformation that warps every voxel in the volume equally, the proportional and volumetric difference of a structure before and after registration should remain unchanged. In our experiments we nevertheless observed differences for FlowReg-A and reported them. This outcome is likely due to slice thickness, limited pixel resolution, and the slice gap of $\sim 3$ mm during image acquisition. ANTs maintains moderate distortion over all structural integrity metrics and the best for the ventricular metrics. One reason for this could be the Symmetric Normalization, where both the moving and the fixed volumes are warped symmetrically to a similarity “mid-point” (Avants et al., 2008).
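The affine invariance argument above can be sanity-checked numerically: a global affine map $x \mapsto Ax + b$ scales the volume of every region by the same factor $|\det A|$, so the structure-to-brain ratio is unchanged in the continuum. The matrix and volumes below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# A global affine map x -> A @ x + b scales the volume of every region by
# |det(A)|, so the structure-to-brain volume ratio cancels (hypothetical A).
A = np.array([[1.10, 0.20, 0.00],
              [0.00, 0.90, 0.10],
              [0.05, 0.00, 1.05]])
scale = abs(np.linalg.det(A))

vol_s, vol_b = 30.0, 1500.0  # hypothetical structure / brain volumes (mm^3)
pv_before = vol_s / vol_b
pv_after = (scale * vol_s) / (scale * vol_b)  # both volumes scale identically
```

On a finite voxel grid, resampling and the $\sim 3$ mm slice gap break this exact cancellation, which is consistent with the small $\Delta V$ and $\Delta PV$ differences reported for FlowReg-A.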
SimpleElastix, another iterative-based registration method, performs well for the Head Angle metric, likely due to its two-step process of an affine alignment followed by a B-spline transform (Klein et al., 2010). One major downside of using iterative-based methods for image registration is the lengthy computation time for 3D neuroimaging volumes (as seen in Table 3) and the inability to transfer this knowledge to new image pairs. Medical image registration is a preprocessing tool that can map two images to the same geometric space. Once images are registered, direct spatial comparisons can be made between the two images to quantify disease progression, treatment efficacy, pathology changes, and age-specific anatomical changes. The proposed FlowReg model is able to warp a moving image to a fixed image space in an unsupervised manner, computed in a two-phase approach: initially for gross alignment in 3D, followed by fine-tuning in 2D on an image-by-image basis, which is a novel approach. Alongside the registration framework, several clinically relevant validation metrics are proposed that we hope will be used by researchers in the future.

## 5 Conclusion

In this work we propose FlowReg, a deep learning-based framework that performs unsupervised image registration for multicentre FLAIR MRI. The system is composed of two architectures: FlowReg-A, which affinely corrects for gross differences between moving and fixed volumes in 3D, followed by FlowReg-O, which performs pixelwise deformations on a slice-by-slice basis for fine-tuning in 2D. Using 464 testing volumes, with 70 of the imaging volumes having ground truth manual delineations for ventricles and lesions, the proposed method was compared to ANTs, Demons, SE, and Voxelmorph. To quantitatively assess the performance of the registration tools, several proposed validation metrics were used. These metrics focused on structural integrity of tissues, spatial alignment, and intensity similarity.
Tissue integrity was analyzed using volumetric and structural measures: volumetric ratio, proportional volume (PV), and surface-to-surface distance (SSD). Spatial alignment was analyzed with the Head Angle with respect to the sagittal plane, Pixelwise Agreement, and Brain DSC. The intensity metrics measured the similarity in intensities and intensity distributions of the moving and fixed volumes with Mutual Information (MI), correlation (R), and Mean Absolute Intensity Difference (MAID). Experimental results show FlowReg (FlowReg-A+O) performs better than iterative-based registration algorithms for intensity and spatial alignment metrics, indicating that FlowReg delivers optimal intensity and spatial alignment between moving and fixed volumes. Among the deep learning frameworks evaluated, FlowReg-A or FlowReg-A+O provided the highest performance on all but one of the metrics. In terms of structural integrity metrics, FlowReg provided moderate (or best) performance for the brain, ventricle, and WML objects. The success of the proposed work can be attributed to: 1) the two-step registration process that consists of affine followed by optical flow deformations, and 2) the three-component loss function in optical flow that encourages global intensity similarity while minimizing large deformations. Finally, the novel validation metrics to assess medical image registration provide the necessary context when compared to other registration methods.

Acknowledgments

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through the NSERC Discovery Grant program. We would also like to acknowledge the research mentorship received for this work from Dr. Konstantinos Derpanis (Ryerson University, Computer Science Dept.) and Jason J. Yu (York University, Computer Science Dept.), which included methodological input on the optical flow network and on the use of the penalty function, in addition to editing of the manuscript.
The Canadian Atherosclerosis Imaging Network (CAIN) was established through funding from a Canadian Institutes of Health Research Team Grant for Clinical Research Initiatives (CIHR-CRI 88057). Funding for the infrastructure was received from the Canada Foundation for Innovation (CFI-CAIN 20099), with matching funds provided by the governments of Alberta, Ontario, and Quebec. Data collection and sharing for this project was partially funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defence award number W81XWH-12-2-0012). Ethical Standards The work follows appropriate ethical standards in conducting research and writing the manuscript, following applicable laws and regulations. Conflicts of Interest There are no conflicts of interest. ## References * Abadi et al. (2015) Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015\. URL https://www.tensorflow.org/. Software available from tensorflow.org. * Alber et al. (2019) Jessica Alber, Suvarna Alladi, Hee-Joon Bae, David A Barton, Laurel A Beckett, Joanne M Bell, Sara E Berman, Geert Jan Biessels, Sandra E Black, Isabelle Bos, et al. White matter hyperintensities in vascular contributions to cognitive impairment and dementia (vcid): knowledge gaps and opportunities. 
_Alzheimer’s & Dementia: Translational Research & Clinical Interventions_, 5:107–117, 2019. * Avants et al. (2008) Brian B Avants, Charles L Epstein, Murray Grossman, and James C Gee. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. _Medical Image Analysis_ , 12(1):26–41, 2008\. * Badji and Westman (2020) Atef Badji and Eric Westman. Cerebrovascular pathology in alzheimer’s disease: Hopes and gaps. _Psychiatry Research: Neuroimaging_ , page 111184, 2020. * Balakrishnan et al. (2018) Guha Balakrishnan, Amy Zhao, Mert R Sabuncu, John Guttag, and Adrian V Dalca. An unsupervised learning model for deformable medical image registration. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 9252–9260, 2018. * Cao et al. (2017) Xiaohuan Cao, Jianhua Yang, Jun Zhang, Dong Nie, Minjeong Kim, Qian Wang, and Dinggang Shen. Deformable image registration based on similarity-steered cnn regression. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 300–308. Springer, 2017. * Chollet et al. (2015) François Chollet et al. Keras. https://keras.io, 2015. * Csapo et al. (2012) Istvan Csapo, Brad Davis, Yundi Shi, Mar Sanchez, Martin Styner, and Marc Niethammer. Longitudinal image registration with non-uniform appearance change. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 280–288. Springer, 2012. * Dalca et al. (2018) Adrian V Dalca, Guha Balakrishnan, John Guttag, and Mert R Sabuncu. Unsupervised learning for fast probabilistic diffeomorphic registration. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 729–738. Springer, 2018. * de Vos et al. (2019) Bob D de Vos, Floris F Berendsen, Max A Viergever, Hessam Sokooti, Marius Staring, and Ivana Išgum. 
A deep learning framework for unsupervised affine and deformable image registration. _Medical Image Analysis_ , 52:128–143, 2019. * Debette and Markus (2010) Stéphanie Debette and HS Markus. The clinical importance of white matter hyperintensities on brain magnetic resonance imaging: systematic review and meta-analysis. _Bmj_ , 341:c3666, 2010. * Dosovitskiy et al. (2015) Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick Van Der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In _Proceedings of the IEEE International Conference on Computer Vision_ , pages 2758–2766, 2015. * El-Gamal et al. (2016) Fatma El-Zahraa Ahmed El-Gamal, Mohammed Elmogy, and Ahmed Atwan. Current trends in medical image registration and fusion. _Egyptian Informatics Journal_ , 17(1):99–124, 2016. * Fan et al. (2018) Jingfan Fan, Xiaohuan Cao, Zhong Xue, Pew-Thian Yap, and Dinggang Shen. Adversarial similarity network for evaluating image alignment in deep learning based registration. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 739–746. Springer, 2018. * Frey et al. (2019) Benedikt M Frey, Marvin Petersen, Carola Mayer, Maximilian Schulz, Bastian Cheng, and Goetz Thomalla. Characterization of white matter hyperintensities in large-scale mri-studies. _Frontiers in Neurology_ , 10:238, 2019. * Garg et al. (2016) Ravi Garg, Vijay Kumar Bg, Gustavo Carneiro, and Ian Reid. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In _European Conference on Computer Vision_ , pages 740–756. Springer, 2016. * Griffanti et al. (2018) Ludovica Griffanti, Mark Jenkinson, Sana Suri, Enikő Zsoldos, Abda Mahmood, Nicola Filippini, Claire E Sexton, Anya Topiwala, Charlotte Allan, Mika Kivimäki, et al. Classification and characterization of periventricular and deep white matter hyperintensities on mri: a study in older adults. 
_Neuroimage_ , 170:174–181, 2018. * Guimiot et al. (2008) F Guimiot, C Garel, C Fallet-Bianco, F Menez, S Khung-Savatovsky, J-F Oury, G Sebag, and A-L Delezoide. Contribution of diffusion-weighted imaging in the evaluation of diffuse white matter ischemic lesions in fetuses: correlations with fetopathologic findings. _American journal of neuroradiology_ , 29(1):110–115, 2008. * Horn and Schunck (1981) Berthold KP Horn and Brian G Schunck. Determining optical flow. _Artificial Intelligence_ , 17(1-3):185–203, 1981\. * Hunt et al. (1989) AL Hunt, WW Orrison, RA Yeo, KY Haaland, RL Rhyne, PJ Garry, and GA Rosenberg. Clinical significance of mri white matter lesions in the elderly. _Neurology_ , 39(11):1470–1470, 1989. * Iglesias and Sabuncu (2015) Juan Eugenio Iglesias and Mert R Sabuncu. Multi-atlas segmentation of biomedical images: a survey. _Medical image analysis_ , 24(1):205–219, 2015\. * Ilg et al. (2017) Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 2462–2470, 2017. * Jaderberg et al. (2015) Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In _Advances in Neural Information Processing Systems_ , pages 2017–2025, 2015. * Johnson et al. (2013) Hans J. Johnson, M. McCormick, L. Ibáñez, and The Insight Software Consortium. _The ITK Software Guide_. Kitware, Inc., third edition, 2013. URL http://www.itk.org/ItkSoftwareGuide.pdf. In press. * Khademi et al. (2011) April Khademi, Anastasios Venetsanopoulos, and Alan R Moody. Robust white matter lesion segmentation in flair mri. _IEEE Transactions on Biomedical Engineering_ , 59(3):860–871, 2011. * Khademi et al. (2020) April Khademi, Brittany Reiche, Justin DiGregorio, Giordano Arezza, and Alan R Moody. 
Whole volume brain extraction for multi-centre, multi-disease flair mri datasets. _Magnetic Resonance Imaging_ , 66:116–130, 2020. * Kim et al. (2008) Ki Woong Kim, James R MacFall, and Martha E Payne. Classification of white matter lesions on magnetic resonance imaging in elderly persons. _Biological psychiatry_ , 64(4):273–280, 2008\. * Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014. * Klein et al. (2010) Stefan Klein, Marius Staring, Keelin Murphy, Max A Viergever, Josien PW Pluim, et al. Elastix: a toolbox for intensity-based medical image registration. _IEEE Transactions on Medical Imaging_ , 29(1):196–205, 2010. * Kobayashi et al. (2002) K Kobayashi, M Hayashi, H Nakano, Y Fukutani, K Sasaki, M Shimazaki, and Y Koshino. Apoptosis of astrocytes with enhanced lysosomal activity and oligodendrocytes in white matter lesions in alzheimer’s disease. _Neuropathology and applied neurobiology_ , 28(3):238–251, 2002. * Lao et al. (2008) Zhiqiang Lao, Dinggang Shen, Dengfeng Liu, Abbas F Jawad, Elias R Melhem, Lenore J Launer, R Nick Bryan, and Christos Davatzikos. Computer-assisted segmentation of white matter lesions in 3d mr images using support vector machine. _Academic radiology_ , 15(3):300–313, 2008. * Liu et al. (2001) Yanxi Liu, Robert T Collins, and William E Rothfus. Robust midsagittal plane extraction from normal and pathological 3-d neuroradiology images. _IEEE Transactions on Medical Imaging_ , 20(3):175–192, 2001. * Maes et al. (1997) Frederik Maes, Andre Collignon, Dirk Vandermeulen, Guy Marchal, and Paul Suetens. Multimodality image registration by maximization of mutual information. _IEEE transactions on Medical Imaging_ , 16(2):187–198, 1997. * Malloy et al. (2007) Paul Malloy, Stephen Correia, Glenn Stebbins, and David H Laidlaw. Neuroimaging of white matter in aging and dementia. _The Clinical Neuropsychologist_ , 21(1):73–109, 2007. 
* Mani and Arivazhagan (2013) VRS Mani and S Arivazhagan. Survey of medical image registration. _Journal of Biomedical Engineering and Technology_ , 1(2):8–25, 2013. * Marstal et al. (2016) Kasper Marstal, Floris Berendsen, Marius Staring, and Stefan Klein. Simpleelastix: A user-friendly, multi-lingual library for medical image registration. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , pages 134–142, 2016. * Mueller et al. (2005) Susanne G Mueller, Michael W Weiner, Leon J Thal, Ronald C Petersen, Clifford Jack, William Jagust, John Q Trojanowski, Arthur W Toga, and Laurel Beckett. The alzheimer’s disease neuroimaging initiative. _Neuroimaging Clinics_ , 15(4):869–877, 2005\. * Oppedal et al. (2015) Ketil Oppedal, Trygve Eftestøl, Kjersti Engan, Mona K Beyer, and Dag Aarsland. Classifying dementia using local binary patterns from different regions in magnetic resonance images. _Journal of Biomedical Imaging_ , 2015:1–14, 2015. * Pennec et al. (1999) Xavier Pennec, Pascal Cachier, and Nicholas Ayache. Understanding the “demon’s algorithm”: 3d non-rigid registration by gradient descent. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 597–605. Springer, 1999. * Phellan et al. (2014) Renzo Phellan, Alexandre X Falcao, and Jayaram Udupa. Improving atlas-based medical image segmentation with a relaxed object search. In _International Symposium Computational Modeling of Objects Represented in Images_ , pages 152–163. Springer, 2014. * Piguet et al. (2005) Olivier Piguet, LJ Ridley, DA Grayson, HP Bennett, H Creasey, TC Lye, and G Anthony Broe. Comparing white matter lesions on t2 and flair mri in the sydney older persons study. _European journal of neurology_ , 12(5):399–402, 2005. * Rehman and Lee (2018) Hafiz Rehman and Sungon Lee. An efficient automatic midsagittal plane extraction in brain mri. _Applied Sciences_ , 8(11):2203, 2018. * Sarbu et al. 
(2016) Nicolae Sarbu, Robert Y Shih, Robert V Jones, Iren Horkayne-Szakaly, Laura Oleaga, and James G Smirniotopoulos. White matter diseases with radiologic-pathologic correlation. _Radiographics_ , 36(5):1426–1447, 2016. * Sdika and Pelletier (2009) Michaël Sdika and Daniel Pelletier. Nonrigid registration of multiple sclerosis brain images using lesion inpainting for morphometry or lesion mapping. _Human Brain Mapping_ , 30(4):1060–1067, 2009\. * Sokooti et al. (2017) Hessam Sokooti, Bob De Vos, Floris Berendsen, Boudewijn PF Lelieveldt, Ivana Išgum, and Marius Staring. Nonrigid image registration using multi-scale 3d convolutional neural networks. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 232–239. Springer, 2017. * Tardif et al. (2013) Jean-Claude Tardif, J David Spence, Therese M Heinonen, Alan Moody, Josephine Pressacco, Richard Frayne, Philippe L’Allier, Benjamin JW Chow, Matthias Friedrich, Sandra E Black, et al. Atherosclerosis imaging and the canadian atherosclerosis imaging network. _Canadian Journal of Cardiology_ , 29(3):297–303, 2013. * Thirion (1998) J-P Thirion. Image matching as a diffusion process: an analogy with maxwell’s demons. _Medical Image Analysis_ , 2(3):243–260, 1998\. * Thirion (1995) Jean-Philippe Thirion. Fast non-rigid matching of 3d medical images. 1995\. * Trip and Miller (2005) S A Trip and D H Miller. Imaging in multiple sclerosis. _Journal of Neurology, Neurosurgery & Psychiatry_, 76(suppl 3):11–18, 2005. ISSN 0022-3050. doi: 10.1136/jnnp.2005.073213. URL https://jnnp.bmj.com/content/76/suppl_3/iii11. * Udaka et al. (2002) Fukashi Udaka, Hideyuki Sawada, and Masakuni Kameyama. White matter lesions and dementia mri-pathological correlation. _Annals of the New York Academy of Sciences_ , 977(1):411–415, 2002. * Uzunova et al. (2017) Hristina Uzunova, Matthias Wilms, Heinz Handels, and Jan Ehrhardt. 
Training cnns for image registration from few samples with model-based data augmentation. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 223–231. Springer, 2017. * Wardlaw et al. (2013) Joanna M Wardlaw, Eric E Smith, Geert J Biessels, Charlotte Cordonnier, Franz Fazekas, Richard Frayne, Richard I Lindley, John T O’Brien, Frederik Barkhof, Oscar R Benavente, et al. Neuroimaging standards for research into small vessel disease and its contribution to ageing and neurodegeneration. _The Lancet Neurology_ , 12(8):822–838, 2013. * Winkler et al. AM Winkler, P Kochunov, and DC Glahn. FLAIR templates, Brainder. * Yu et al. (2016) Jason J Yu, Adam W Harley, and Konstantinos G Derpanis. Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness. In _European Conference on Computer Vision_ , pages 3–10. Springer, 2016. * Zhao et al. (2019) Shengyu Zhao, Tingfung Lau, Ji Luo, I Eric, Chao Chang, and Yan Xu. Unsupervised 3d end-to-end medical image registration with volume tweening network. _IEEE Journal of Biomedical and Health Informatics_ , 24(5):1394–1404, 2019. * Zhu et al. (2020) Zhenyu Zhu, Yiqin Cao, Chenchen Qin, Yi Rao, Dong Ni, and Yi Wang. Unsupervised 3d end-to-end deformable network for brain mri registration. In _2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)_, pages 1355–1359. IEEE, 2020.

## Appendix A

### Imaging Dataset Details

A summary of the datasets used and their demographic information is shown in Table 4. The ADNI database consisted of images from three MR scanner manufacturers: General Electric ($n=1075$), Phillips Medical Systems ($n=848$), and Siemens ($n=2076$), with $18$ different models in total. In CAIN there are five different models across three vendors: General Electric ($n=181$), Phillips Medical Systems ($n=230$), and Siemens ($n=289$).
The number of cases per scanner model and vendor is shown in Table 5 along with the ranges of the acquisition parameters. As can be seen, this is a diverse multicentre dataset, with varying scanners, diseases, voxel resolutions, imaging acquisition parameters, and pixel resolutions. Therefore, this dataset will give insight into how well each registration method generalizes to multicentre data.

Table 4: Experimental datasets used in this work (CAIN and ADNI).

Dataset | # Subjects | # Volumes | # Slices | # Centers | Age | Sex F/M ($\%$)
---|---|---|---|---|---|---
CAIN | 400 | 700 | 31,500 | 9 | $73.87\pm 8.29$ | 38.0/58.6
ADNI | 900 | 4263 | 250,00 | 60 | $73.48\pm 7.37$ | 46.5/53.5

Table 5: Detailed acquisition parameters and information for each dataset.

Dataset | Vendor | Model | TR ($ms$) | TE ($ms$) | TI ($ms$) | Magnetic Field ($B$) | Pixel Size ($mm^{2}$) | Slice Thickness ($mm$) | N
---|---|---|---|---|---|---|---|---|---
ADNI | GE Medical Systems | Discovery MR750/w | 11000 | 149.42 - 153.13 | 2250 | 3 | 0.7386 | 5 | 614
 | | Signa HDxt | 10002 - 11002 | 149.10 - 192.65 | 2200 - 2250 | 1.5 - 3 | 0.7386 - 0.8789 | 5 - 6 | 461
 | Phillips Medical Systems | Ingenia | 9000 | 90 | 2500 | 3 | 0.7386 | 5 | 83
 | | Achieva | 9000 | 90 | 2500 | 3 | 0.6104 - 0.7386 | 5 | 520
 | | Gemini | 9000 | 90 | 2500 | 3 | 0.7386 | 5 | 35
 | | Ingenuity | 9000 | 90 | 2500 | 3 | 0.7386 | 5 | 19
 | | Intera | 6000 - 9000 | 90 - 140 | 2000 - 2500 | 1.5 - 3 | 0.7386 - 0.8789 | 5 | 191
 | Siemens | Biograph mMR | 9000 | 91 | 2500 | 3 | 0.7386 | 5 | 13
 | | Prisma | 9000 | 91 | 2500 | 3 | 0.7386 | 5 | 5
 | | Skyra | 9000 | 91 | 2500 | 3 | 0.7386 | 5 | 213
 | | SymphonyTim | 10000 | 125 | 2200 | 1.5 | 0.8789 | 5 | 2
 | | TrioTim | 9000 - 11000 | 90 - 149 | 2250 - 2800 | 3 | 0.7386 - 1 | 2 - 5 | 1332
 | | Verio | 9000 | 91 | 2500 | 3 | 0.7386 - 1.0315 | 5 | 511
CAIN | GE Medical Systems | Discovery MR750 | 8000 - 9995 | 140.84 - 150.24 | 2200 - 2390 | 3 | 0.7386 - 0.8789 | 3 | 181
 | Phillips Medical Systems | Achieva | 9000 - 11000 | 125 | 2800 | 3 | 0.1837 | 3 | 230
 | Siemens | InteraMR | 9000 | 119 | 2500 | 3 | 1 | 3 | 14
 | | Skyra | 9000 | 119 | 2500 | 3 | 1 | 3 | 162
 | | TrioTim | 9000 | 117 - 122 | 2500 | 3 | 1 | 3 | 113

### Supplemental Tables and Figures

Table 6: FlowReg-A model, details of architecture in Figure 2.
Layer | Filters | Kernel | Stride | Activation
---|---|---|---|---
fixedInput | - | - | - | -
movingInput | - | - | - | -
concatenate | - | - | - | -
conv3D | 16 | 7x7x7 | 2, 2, 1 | ReLu
conv3D | 32 | 5x5x5 | 2, 2, 1 | ReLu
conv3D | 64 | 3x3x3 | 2, 2, 2 | ReLu
conv3D | 128 | 3x3x3 | 2, 2, 2 | ReLu
conv3D | 256 | 3x3x3 | 2, 2, 2 | ReLu
conv3D | 512 | 3x3x3 | 2, 2, 2 | ReLu
flatten | - | - | - | -
dense | 12 | - | - | Linear

Table 7: FlowReg-O model, details of architecture in Figure 3.

Layer | Filters | Kernel | Strides | Activation
---|---|---|---|---
fixedInput | - | - | - | -
movingInput | - | - | - | -
concatenate | - | - | - | -
conv2D | 64 | 7x7 | 2, 2 | L-ReLu
conv2D | 128 | 5x5 | 2, 2 | L-ReLu
conv2D | 256 | 5x5 | 2, 2 | L-ReLu
conv2D | 256 | 3x3 | 1, 1 | L-ReLu
conv2D | 512 | 3x3 | 2, 2 | L-ReLu
conv2D | 512 | 3x3 | 1, 1 | L-ReLu
conv2D | 512 | 3x3 | 2, 2 | L-ReLu
conv2D | 512 | 3x3 | 1, 1 | L-ReLu
conv2D | 1024 | 3x3 | 2, 2 | L-ReLu
conv2D | 1024 | 3x3 | 1, 1 | L-ReLu
conv2D | 2 | 3x3 | 1, 1 | -
upconv2D | 2 | 4x4 | 2, 2 | -
upconv2D | 512 | 4x4 | 2, 2 | L-ReLu
conv2D | 2 | 3x3 | 1, 1 | -
upconv2D | 2 | 4x4 | 2, 2 | -
upconv2D | 256 | 4x4 | 2, 2 | L-ReLu
conv2D | 2 | 3x3 | 1, 1 | -
upconv2D | 2 | 4x4 | 2, 2 | -
upconv2D | 128 | 4x4 | 2, 2 | L-ReLu
conv2D | 2 | 3x3 | 1, 1 | -
upconv2D | 2 | 4x4 | 2, 2 | -
upconv2D | 64 | 4x4 | 2, 2 | L-ReLu
conv2D | 2 | 3x3 | 1, 1 | -
upconv2D | 2 | 4x4 | 2, 2 | -
upconv2D | 32 | 4x4 | 2, 2 | L-ReLu
conv2D | 2 | 3x3 | 1, 1 | -
upconv2D | 2 | 4x4 | 2, 2 | -
upconv2D | 16 | 4x4 | 2, 2 | L-ReLu
conv2D | 2 | 3x3 | 2, 2 | -
resampler | - | - | - | -

Figure 18: The total loss values during training for FlowReg-O at different $\alpha$ values in the Charbonnier penalty function (Eqn. 5).

Figure 19: Single slices from five volumes registered using FlowReg-O at various $\alpha$ values for the Charbonnier penalty.

Figure 20: Flow magnitudes of deformation fields from FlowReg-O at various $\alpha$ values.
Images correspond to slices in Figure 19. Blue indicates areas of low flow vector magnitude and red indicates larger vector magnitude.

## Appendix B

Here we describe in detail the calculations of each registration metric. The block diagram used to compute the metrics is shown in Figure 21.

Figure 21: Registration metrics extraction process. HA = Head Angle, MI = Mutual Information, R = Correlation Coefficient, PV = Proportional Volume, SSD = Surface to Surface Distance.

### Structural Integrity: Proportional Volume — PV

If a registration scheme maintains the structural integrity of anatomical and pathological objects, the relative volume of objects should remain approximately the same after registration. Using binary masks of anatomical or pathological objects of interest, the proportional volume (PV) is proposed to measure the volume of a structure ($vol_{s}$) relative to the total brain volume ($vol_{b}$), as in: $PV=\frac{vol_{s}}{vol_{b}}.$ (8) The volume of an object in physical dimensions is found by multiplying its number of voxels by the voxel resolution: $vol=V_{x}\times V_{y}\times V_{z}\times n_{p},$ (9) where $vol$ is the volume in $mm^{3}$, $V_{x}$ and $V_{y}$ are the pixel width and height, $V_{z}$ is the slice thickness, and $n_{p}$ is the number of voxels in the object. The difference between the PV before and after registration can be investigated to analyze how registration changes the proportion of each structure with respect to the brain. This difference is measured by $\Delta PV=PV_{orig}-PV_{reg},$ (10) where $PV_{orig}$ and $PV_{reg}$ are the PV computed before and after registration. In this work, two structures of interest are examined for $vol_{s}$: white matter lesions (WML) and ventricles, since these objects are important for disease evaluation.
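Equations (8)-(10) translate directly into code. A minimal sketch from binary masks; the default voxel dimensions are hypothetical placeholders, not the acquisition values of any particular dataset.

```python
import numpy as np

def proportional_volume(struct_mask, brain_mask, voxel=(1.0, 1.0, 3.0)):
    """PV = vol_s / vol_b (Eqn. 8), with each volume in mm^3 via Eqn. 9:
    vol = V_x * V_y * V_z * n_p, where n_p is the voxel count of the mask."""
    vox_mm3 = voxel[0] * voxel[1] * voxel[2]
    vol_s = struct_mask.sum() * vox_mm3
    vol_b = brain_mask.sum() * vox_mm3
    return vol_s / vol_b

def delta_pv(struct_orig, brain_orig, struct_reg, brain_reg,
             voxel=(1.0, 1.0, 3.0)):
    """Delta PV = PV_orig - PV_reg (Eqn. 10)."""
    return (proportional_volume(struct_orig, brain_orig, voxel)
            - proportional_volume(struct_reg, brain_reg, voxel))
```

Note that the voxel size cancels in the PV ratio; it only matters when the moving and registered volumes have different resolutions, which is exactly the registration setting.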
Ideally, the ratio of object volumes to the total brain volume would stay the same before and after registration.

### Structural Integrity: Volume Ratio — $\Delta V_{s}$

In addition to $\Delta PV$, which looks at proportional changes in volumes before and after registration, the volume ratio $\Delta V_{s}$ is also defined on a per-object basis. The volume ratio compares the volumes of objects before and after registration, as in $\Delta V_{s}=\frac{vol_{orig}}{vol_{reg}},$ (11) where $vol_{orig}$ and $vol_{reg}$ are the volumes of object $s$ before and after registration, respectively. The volume ratio is computed for three objects $s$: the brain, WML, and ventricles. This metric quantifies the overall change in volume caused by registration (the best ratio is a value equal to 1).

### Structural Integrity: Surface to Surface Distance — SSD

A third structural integrity measure, SSD, is proposed to assess the integrity of objects in the brain before and after registration. In particular, the SSD measures how the overall shape of a structure changes after registration. To compute the SSD, binary masks of the brain $B(x,y,z)$ and structure $S(x,y,z)$ of interest are obtained, and edge maps $E_{B}(x,y,z)$ and $E_{S}(x,y,z)$ are extracted from them, respectively. For every non-zero pixel coordinate $(x,y,z)$ in the boundary of the structure, i.e. $E_{S}(x,y,z)=1$, the minimum Euclidean distance from the structure’s boundary to the brain edge is found.
This closest surface-to-surface distance between pixels in the objects’ boundaries is averaged over the entire structure of interest to compute the average SSD $SSD=\frac{1}{N}\left[\sum_{i=1}^{N}\min_{(x_{b},y_{b},z_{b})\in E_{B}}\left(\sqrt{p+q+r}\right)\right],$ (12) where $p=(x_{s}-x_{b})^{2}$, $q=(y_{s}-y_{b})^{2}$, $r=(z_{s}-z_{b})^{2}$ are differences between points in the edge maps $E_{S}$ and $E_{B}$, the sum runs over the $N$ points $(x_{s},y_{s},z_{s})$ in the structure’s boundary, and the minimum is taken over the points $(x_{b},y_{b},z_{b})$ in the brain’s edge map. The overall structural integrity of objects should be maintained with respect to the brain structure after registration, i.e. the distance between the objects and the brain shape should not be significantly altered. This metric can be used to investigate the extent to which anatomical shapes are maintained or distorted by registration by examining the relative difference in SSD before and after registration, $\Delta SSD=\frac{SSD_{orig}-SSD_{reg}}{SSD_{orig}},$ (13) where $SSD_{orig}$ and $SSD_{reg}$ are the SSD before and after registration, respectively.

### Spatial Alignment: Head Angle — HA

Head Angle (HA) is an orientation metric that measures the extent to which the head is rotated with respect to the midsagittal plane. For properly registered data (especially data being aligned to an atlas), the HA should be close to zero, with the head aligned along the midsagittal plane. To measure the HA, a combination of Principal Component Analysis (PCA) and an angle sweep as described in Rehman and Lee (2018); Liu et al. (2001) is adopted to find the head orientation of registered and unregistered data. The MRI volume is first binarized using a combination of adaptive thresholding and opening and closing techniques to approximately segment the head.
The coordinates of each non-zero pixel in this 2D mask are stored in two vectors (one for each coordinate) and the eigenvectors of their covariance matrix are found through PCA. The directions of the eigenvectors specify the orientation of the major axes of the approximated ellipse for the head region in a slice with respect to the vertical sagittal plane. The eigenvalues are the magnitudes of the eigenvectors (or lengths of the axes). The largest eigenvalue dictates the direction of the longest axis, which approximately gives the head angle $\theta_{1}$. For improved robustness to outliers and to improve the precision of the estimated angles, a secondary refinement step is utilized to compute the refined HA $\theta_{2}$. Every second slice from the middle (axial) slice to the top of the head is used, and the three smallest angles over all slices are taken as candidates for further refinement. The lowest angles are selected as they are the most representative of head orientation. Each selected slice is mirrored and rotated according to an angle sweep from $-2\times\theta_{1}<\theta_{2}<2\times\theta_{1}$ at $0.5\degree$ angle increments. At every increment of the rotation, the cross-correlation between the mirrored rotating image and the original is calculated and a score is recorded. The angle with the highest score is selected as the optimized value $\theta_{2}$. The final HA is obtained by summing the respective coarse and fine angle estimates, i.e. $\theta=\theta_{1}+\theta_{2}$.

### Spatial Alignment: Pixelwise Agreement — PWA

Physical alignment in 3D means that within a registered dataset, each subject should have high correspondence between slices, i.e. the same slice from each subject should account for the same anatomy across patients. To measure this effect, a metric called Pixelwise Agreement (PWA) is proposed.
It considers the same slice across all the registered volumes in a dataset and compares them to the same slice from an atlas template (the fixed volume) through the mean-squared error. The error is computed for each slice, to obtain a slice-wise estimate of the difference between a slice in the atlas and each of the corresponding slices from the registered data: $PWA(z)=\frac{1}{N_{j}}\frac{1}{N_{xy}}\sum_{j\in J}\sum_{(x,y)}(M_{j}(x,y,z)-F(x,y,z))^{2}$ (14) where $z$ is the slice number for which PWA is computed, $M_{j}(x,y,z)$ is the moving test volume $j$ from a total of $N_{j}$ volumes in the dataset, $N_{xy}$ is the number of voxels in slice $z$, and $F(x,y,z)$ is the atlas. Thus, at each slice, for the entire dataset, the PWA compares every slice to the same slice of the atlas. A low PWA indicates a high degree of correspondence between all the slices from the registered dataset and those of the atlas, and considers both spatial and intensity alignment. If there is poor spatial alignment, there will be poor intensity alignment, since different tissues will be overlapping during averaging. The slice-based PWA may also be averaged over slices to get a total volume PWA, i.e. $\frac{1}{N_{z}}\sum_{z}PWA(z)$ where $N_{z}$ is the number of slices.

### Spatial Alignment: Dice Similarity Coefficient — DSC

To further examine spatial alignment, manually delineated brain masks from the moving volumes $M(x,y,z)$ were warped with the calculated deformation and compared to the brain mask of the atlas template $F(x,y,z)$ through the Dice Similarity Coefficient (DSC): $DSC=\frac{2|b_{M}\cap b_{F}|}{|b_{M}|+|b_{F}|},$ (15) where $b_{M}$ is the registered, moving brain mask and $b_{F}$ is the brain mask of the atlas template. The DSC is higher when there is a high degree of overlap between the brain regions from the atlas and the moving volume. For visual inspection of overlap, all registered brain masks were averaged to summarize alignment accuracy as a heatmap.
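The DSC of Eqn. 15 is straightforward to compute from binary masks; a minimal numpy sketch (the toy masks and function name are illustrative assumptions):

```python
import numpy as np

def dice(b_m, b_f):
    """Dice Similarity Coefficient (Eqn. 15) between two binary masks."""
    b_m, b_f = b_m.astype(bool), b_f.astype(bool)
    denom = np.count_nonzero(b_m) + np.count_nonzero(b_f)
    # Convention: two empty masks are treated as perfectly overlapping.
    return 2.0 * np.count_nonzero(b_m & b_f) / denom if denom else 1.0

# Toy 2D masks: two 8-pixel bands overlapping in 4 pixels.
a = np.zeros((4, 4), dtype=bool)
a[:2, :] = True            # rows 0-1
b = np.zeros((4, 4), dtype=bool)
b[1:3, :] = True           # rows 1-2
score = dice(a, b)         # 2*4 / (8+8) = 0.5
```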
### Intensity Similarity: Mutual Information — MI

The first intensity-based metric used to investigate registration performance is the widely adopted Mutual Information (MI) metric, which describes the statistical dependence between two random variables. If there is excellent alignment between the moving and fixed images, there will be tight clustering in the joint probability mass function. The MI of two volumes $M(x,y,z)$ and $F(x,y,z)$ with PMFs $p_{M}(m)$ and $p_{F}(f)$ is calculated as follows: $I(M;F)=\sum_{f\in F}\sum_{m\in M}p_{(M,F)}(m,f)\log\left(\frac{p_{(M,F)}(m,f)}{p_{M}(m)p_{F}(f)}\right)$ (16) where $p_{(M,F)}(m,f)$ is the joint probability mass function of the intensities of the moving and fixed volumes, $p_{M}(m)$ is the marginal probability of the moving volume intensities, and $p_{F}(f)$ is the marginal probability for the fixed volume.

### Intensity Similarity: Pearson Correlation Coefficient — $r$

The Pearson Correlation Coefficient, $r$, is used as the second intensity measure, which quantifies the correlation between the intensities in the moving $M(x,y,z)$ and fixed $F(x,y,z)$ volumes: $r(M,F)=\frac{\sum_{i=1}^{n}(M_{i}-\overline{M})(F_{i}-\overline{F})}{\sqrt{\sum_{i=1}^{n}(M_{i}-\overline{M})^{2}}\sqrt{\sum_{i=1}^{n}(F_{i}-\overline{F})^{2}}}$ (17) where $n$ is the number of voxels in a volume, $M_{i}$ and $F_{i}$ are the voxels from the moving and fixed volumes, and $\overline{F}$ and $\overline{M}$ are the respective volume mean intensities.

### Intensity Similarity: Mean Intensity Difference — MAID

The last registration performance metric considered is the mean absolute intensity difference (MAID), which measures the quality of a newly generated atlas $A(x,y,z)$ compared to the original atlas (fixed volume). To create the new atlas, the moving volumes $M(x,y,z)$ from a dataset are registered to the original atlas $F(x,y,z)$ and then the same slices are averaged across the registered dataset, generating the atlas $A(x,y,z)$.
The intensity histograms of the original atlas $F(x,y,z)$ and the newly generated atlas $A(x,y,z)$ are compared through the mean absolute error to get the MAID, as in $MAID(A,F)=\frac{1}{N_{i}}\sum_{i}|p_{A}(i)-p_{F}(i)|$ (18) where $p_{F}$ and $p_{A}$ are the probability distributions of the intensities for the original atlas and the newly generated atlas, respectively, and $N_{i}$ is the number of intensity levels. Changes to the intensity distribution of registered images could arise from poor slice alignment. The higher the similarity between the new atlas and the original, the lower the error between the two.
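The intensity metrics above are all computable from histograms; as one example, the MI of Eqn. 16 can be sketched in numpy from a joint intensity histogram (the bin count, random test volumes, and function name are assumptions for illustration):

```python
import numpy as np

def mutual_information(m, f, bins=32):
    """MI (Eqn. 16) estimated from a joint intensity histogram of two volumes."""
    joint, _, _ = np.histogram2d(m.ravel(), f.ravel(), bins=bins)
    p_mf = joint / joint.sum()          # joint PMF p_(M,F)(m, f)
    p_m = p_mf.sum(axis=1)              # marginal of the moving volume
    p_f = p_mf.sum(axis=0)              # marginal of the fixed volume
    nz = p_mf > 0                       # skip empty bins (0 log 0 = 0)
    return float(np.sum(p_mf[nz] * np.log(p_mf[nz] / np.outer(p_m, p_f)[nz])))

rng = np.random.default_rng(0)
vol = rng.random((8, 8, 8))
# A volume carries maximal information about itself; independent noise does not.
mi_self = mutual_information(vol, vol)
mi_noise = mutual_information(vol, rng.random((8, 8, 8)))
```

With finite samples the plug-in estimate for independent volumes is small but positive, so MI is best compared between methods rather than against an absolute zero.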
# A Dual-branch Network for Infrared and Visible Image Fusion

Yu Fu, Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi, China, 214122 Email: yu fu<EMAIL_ADDRESS>

Xiao-Jun Wu, Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi, China, 214122 Email:xiaojun wu<EMAIL_ADDRESS>

###### Abstract

In recent years, deep learning has been used extensively in the field of image fusion. In this article, we propose a new image fusion method by designing a new structure and a new loss function for a deep learning model. Our backbone network is an autoencoder, in which the encoder has a dual-branch structure. We input infrared images and visible light images to the encoder to extract detailed information and semantic information respectively. The fusion layer fuses the two sets of features to get fused features. The decoder reconstructs the fused features to obtain the fused image. We design a new loss function to reconstruct the image effectively. Experiments show that our proposed method achieves state-of-the-art performance.

## I Introduction

Infrared and visible light image fusion is an important subject in the field of image processing. Infrared images and visible light images come from different signal sources. Generally, visible light images have high spatial resolution and rich detail information, while infrared images contain rich thermal radiation information and global structure information. How to extract the important information from source images of different modalities is a problem that needs to be solved. A high-quality fused image can be used for video surveillance, object recognition, tracking, remote sensing, etc. Traditional image fusion methods have achieved impressive results. Some classic or popular image fusion algorithms are introduced by Li et al. [1].
Several mature multi-scale transform fusion methods exist: source images are decomposed into a set of multi-scale representation features and fused by pyramid[2], curvelet[3], contourlet[4], etc., and the inverse multi-scale transform is used to obtain the fused image. In the low-rank representation domain, Li and Wu proposed MDLatLRR[5], based on deep decomposition with latent LRR, which extracts image features in the low-rank domain. In the subspace domain, there are successful algorithms such as PCA[6], ICA[7], and NMF[8]. With the development of deep learning in recent years, the powerful feature extraction capabilities of deep neural networks have become increasingly prominent. Liu et al. summarized many outstanding CNN-based image fusion methods[9]. For example, features are extracted using the dense block (DenseFuse) proposed by Li and Wu [10] or the PCA filters of PCANet[11] proposed by Song and Wu. The extracted features are then fused with specific fusion strategies. In addition to using neural networks as feature extraction tools, there are also end-to-end fused image generation networks such as FusionGan[12] proposed by Ma et al. Compared with end-to-end networks, we think that using deep neural networks to extract features reasonably can better perform image fusion tasks. Moreover, deep neural networks have more powerful feature extraction capabilities than traditional methods. However, the existing fusion networks have only a few layers. We try to design a more reasonable network structure to extract the important features from infrared and visible light images. In this paper, we propose a novel and effective autoencoder in which the encoder has a dual-branch structure. One branch is the detail branch, which uses dense connections to extract shallow and edge information. The other is the semantic branch, which uses rapid downsampling to extract semantic and structural information.
The infrared and visible light images are input to the encoder to obtain two sets of features containing the original image information. The two sets of features are fused by the fusion layer. The decoder reconstructs the fused features to obtain the fused image. This paper is structured as follows. In Section II, we introduce related image fusion algorithms. Section III presents the network structure, loss function, and fusion strategy. Section IV introduces the training details and the results of comparisons with state-of-the-art methods. Section V concludes the paper.

Figure 1: The backbone of our network in this paper. The orange blocks are the detail branch, and the green blocks are the semantic branch. The encoding results of the two branches are concatenated to form our latent layer.

## II Related Work

In recent years, the development of deep learning has promoted new methods of image fusion. In 2017, Liu et al. proposed a multi-focus image fusion method based on CNN[13], which divides the image into many small blocks; the network is utilized to predict whether each block is clear or blurry, and multi-focus fusion based on the predicted decision map is performed. This method simply uses the classification capabilities of CNNs and was the first to use a CNN in the field of image fusion, but it cannot be used for infrared and visible light images. Ma et al. proposed an end-to-end image generation network, FusionGan[12], in 2019. They use the generator to generate the fused image, and use the discriminator for adversarial learning. However, the images generated by GANs are unstable and sometimes contain a lot of noise. In ICCV 2017, Prabhakar et al. proposed Deepfuse[14] for multi-exposure image fusion. They proposed a simple CNN-structured autoencoder with a total of five layers. They use the encoder to extract features and the decoder to reconstruct the fused image. Huang and Liu et al. proposed DenseNet[15].
In the network, the input of each layer is concatenated with the outputs of all previous layers. The denseblock strengthens the forward propagation and reuse of features, so that deep networks can also obtain shallow information. Li and Wu proposed Densefuse[10] for infrared and visible light image fusion in 2019. Compared with the network structure of Deepfuse, they increased the number of network layers, added a denseblock structure, and designed a new fusion strategy.

## III The Proposed Fusion Method

In this section, we introduce the designed deep neural network in detail, giving details of the network, loss function, and fusion strategy.

### III-A Network Structure

We design an autoencoder network to reconstruct the image. This network includes two parts: an encoder and a decoder. The proposed network structure is shown in Fig. 1. In the training phase, we input infrared and visible light images $I(I_{1},I_{2},I_{3}...I_{k})$ into the network. The input images are resized to a fixed size. Then, the images are processed by the encoder and decoder. The different types of images share the same encoder and decoder. Our encoder has two branches, the detail branch and the semantic branch. In the encoder, we first perform a convolution on the input images $I$ to obtain a set of features. Then this set of features goes through the detail branch and the semantic branch at the same time. The purpose of the detail branch of the encoder is to extract more detailed and textural spatial information from the original image. Therefore, we design the detail branch as a four-layer denseblock structure, as shown in the upper branch of the encoder in Fig. 1. The input of each convolution layer is the concatenation of the outputs of all previous layers. That is, the output of the $p_{th}$ layer is $X_{p}=F_{p}([X_{0},X_{1},X_{2},...,X_{p-1}])$.
Here $F_{p}$ represents the nonlinear operation of the $p_{th}$ layer, including normalization, activation, and convolution layers, and $[X_{0},X_{1},X_{2},...,X_{p-1}]$ represents the concatenation of the outputs of all prior layers. Such a densely connected network enables the detail branch to better learn the shallow features of the original images. This branch can focus on low-level details. In addition to detail information, we of course also need global structural semantic information. In order to extract the global information of the features, we design a rapid downsampling network structure, also called the semantic branch, as shown in the lower branch of the encoder in Fig. 1. We quickly downsample the features three times and expand the number of channels of the features. Then the semantic features are upsampled to the original size so that they can be concatenated with the detail features.

Figure 2: The network during the test phase.

During the test phase, we input a pair of infrared and visible light images into the weight-sharing siamese encoder at the same time. Then, we obtain two sets of features separately. We design a fusion layer in the test phase to fuse these features. Finally, we input the fused features into the decoder to reconstruct the fused image, as shown in Fig. 2. The details of the fusion layer will be given in later sections.

### III-B The details of the network

The input images are pre-registered and resized to 128$\times$128. Firstly, we perform a convolution on the images and obtain 32 channels of features. Then we make a copy of these features and input them into the detail branch and semantic branch respectively. In the detail branch, we perform four convolution and dense connection operations. The size of the features is unchanged, and the number of channels accumulates to 16, 32, 48 and 64. In the semantic branch, we perform three convolution operations with a stride of 2 to achieve the purpose of downsampling.
The size of the features is reduced to 64, 32 and 16. The number of channels is first increased to 64 and 128, and then decreased to 64, so that the number of channels can be the same as in the detail branch. At the end of the semantic branch, we upsample the features using the bilinear method, expanding their size by 8 times. Finally, after four convolutions in the decoder, the number of channels is gradually reduced to one, and we obtain the reconstructed or fused images. All the above convolution operations use 3$\times$3 kernels with padding. The parameters of the network are shown in Table I.

TABLE I: The parameters of the network.

| Block | Layer | Channel (input) | Channel (output) | Size (input) | Size (output) |
|---|---|---|---|---|---|
| Encoder | Conv_1 | 1 | 32 | 128$\times$128 | 128$\times$128 |
| Detail Branch | Conv_d1 | 32 | 16 | 128$\times$128 | 128$\times$128 |
| | Conv_d2 | 16 | 16 | 128$\times$128 | 128$\times$128 |
| | Conv_d3 | 32 | 16 | 128$\times$128 | 128$\times$128 |
| | Conv_d4 | 48 | 16 | 128$\times$128 | 128$\times$128 |
| Semantic Branch | Conv_s1 | 32 | 64 | 128$\times$128 | 64$\times$64 |
| | Conv_s2 | 64 | 128 | 64$\times$64 | 32$\times$32 |
| | Conv_s3 | 128 | 64 | 32$\times$32 | 16$\times$16 |
| | Upsample | 64 | 64 | 16$\times$16 | 128$\times$128 |
| Decoder | Conv_1 | 128 | 64 | 128$\times$128 | 128$\times$128 |
| | Conv_2 | 64 | 32 | 128$\times$128 | 128$\times$128 |
| | Conv_3 | 32 | 16 | 128$\times$128 | 128$\times$128 |
| | Conv_4 | 16 | 1 | 128$\times$128 | 128$\times$128 |

The ReLU function discards negative activations and loses half of the activation information. It may be that the sparseness induced by the ReLU function is suitable for image classification, but not for image reconstruction. LeakyReLU solves this problem to a certain extent and can retain certain negative activations. We choose the Mish function[16] as the activation function, which is more sensitive to small errors and has a suppressive effect on large errors.
The formula is as follows: $Mish(x)=x\cdot\tanh(\ln(1+e^{x}))$ (1)

### III-C Loss Function

In order to better extract detailed information and semantic information, we design multiple losses in $L$: pixel loss, gradient loss, color loss, and perceptual loss. The formula is presented as follows: $L=L_{pixel}+\alpha L_{gradient}+\beta L_{color}+\gamma L_{perceptual}$ (2) where $\alpha$, $\beta$, $\gamma$ are hyperparameters used to balance the weights of the four losses. The $L_{pixel}$ loss calculates the pixel error between the reconstructed image and the input image: $L_{pixel}=MSE(I_{re},I_{in})$ (3) $MSE(X,Y)=\frac{1}{N}\sum_{n=1}^{N}(X_{n}-Y_{n})^{2}$ (4) where $I_{re}$ is the reconstructed image, $I_{in}$ is the input image, and $MSE(X,Y)$ is the mean square error of $X$ and $Y$. The $L_{gradient}$ loss calculates the edge information loss between the reconstructed image and the input image: $L_{gradient}=MSE(Gradient(I_{re}),Gradient(I_{in}))$ (5) where $Gradient(x)$ is the image sharpening using the Laplace operator to obtain the gradient map. The Laplace operator is applied as a convolution. Its definition and approximate discrete expression are given as follows: $\displaystyle\nabla^{2}f(x,y)=\frac{\partial^{2}f(x,y)}{\partial x^{2}}+\frac{\partial^{2}f(x,y)}{\partial y^{2}}$ (6) $\displaystyle\approx[f(x+1,y)+f(x-1,y)+f(x,y+1)+f(x,y-1)]-4f(x,y)$ The $L_{color}$ loss calculates the color histogram error between the reconstructed image and the input image, because we consider the brightness of the visible light image and the infrared radiation information of the infrared image: $L_{color}=\frac{1}{255}\|Histogram(I_{re})-Histogram(I_{in})\|_{2}$ (7) where $Histogram(x)$ is the color histogram of $x$.
We set the number of histogram bins to 255; the maximum and minimum values of the histogram calculation are the maximum and minimum values between $I_{re}$ and $I_{in}$. We calculate the norm of the difference between the two histograms. The $L_{perceptual}$ loss[17] calculates the errors of features between the reconstructed image and the input image: $L_{perceptual}=\sum_{i=1}^{4}MSE(\phi_{i}(I_{re}),\phi_{i}(I_{in}))$ (8) where $\phi_{i}(x)$ represents the features of the $i$th layer obtained by inputting the images into a specific trained network. In this paper, we choose the pre-trained VGG19 network as the feature extraction network.

Figure 3: The two fusion strategies used in the test phase.

### III-D Fusion Strategy

We choose two feature fusion strategies, one is the addition strategy and the other is the channel strategy, as shown in Fig. 3:

#### III-D1 Addition Strategy

This is a simple and effective fusion method. Two sets of features are added element-wise as follows: $F_{fuse}(x,y)=F_{ir}(x,y)+F_{vi}(x,y)$ (10) where $F_{fuse}$, $F_{ir}$ and $F_{vi}$ represent the fused features, infrared features and visible light features, respectively. $(x,y)$ denotes the corresponding position in the features.

#### III-D2 Channel Strategy

The features of infrared images and visible light images have different importance on different channels. The visible light images contain more detail information, while the infrared images contain more semantic structural information. Moreover, the extracted features of different channels generated by different kernels are also different. We therefore design a channel selection strategy. As shown in (b) of Fig. 3, the infrared and visible image features $F_{ir},F_{vi},F\in\mathbb{R}^{H\times W\times C}$ are global-average-pooled to $u_{ir},u_{vi},u\in\mathbb{R}^{1\times 1\times C}$. We calculate the probabilities $\widetilde{u}_{ir},\widetilde{u}_{vi}$ based on $u_{ir},u_{vi}$.
Multiplying $F_{ir}$ by $\widetilde{u}_{ir}$ and $F_{vi}$ by $\widetilde{u}_{vi}$ gives the enhanced channel features $\widetilde{F}_{ir},\widetilde{F}_{vi}$ respectively. Element-wise adding $\widetilde{F}_{ir}$ and $\widetilde{F}_{vi}$ gives the final fused features ${{F}}_{fuse}$. The formulas are given as follows: $\displaystyle{u}_{i}=\frac{1}{H\times W}\sum_{m=1}^{H}\sum_{n=1}^{W}x_{i}(m,n)$ (11) $\displaystyle\widetilde{{u}}_{ir}=\frac{e^{u_{ir}}}{e^{u_{ir}}+e^{u_{vi}}}~{},~{}\widetilde{{u}}_{vi}=1-\widetilde{{u}}_{ir}$ $\displaystyle\widetilde{{F}}_{ir}=\widetilde{{u}}_{ir}{F_{ir}}~{},~{}~{}~{}~{}\widetilde{{F}}_{vi}=\widetilde{{u}}_{vi}{F_{vi}}$ $\displaystyle{F}_{fuse}=\widetilde{{F}}_{ir}+\widetilde{{F}}_{vi}$

## IV Experiments and Analysis

### IV-A Training and Testing Details

For the selection of hyperparameters, we make the values of the four losses as close to the same order of magnitude as possible. So, in formula 2, we set $\alpha=1,\beta=0.001,\gamma=1000$ by cross validation. We select 59 pairs of images in the TNO[18] dataset as training and test data: 10 pairs as the test set and 49 pairs as the training set. Since 49 pairs of images are not enough for a training set, we crop each image to a size of 128$\times$128 with a step size of 14. At the same time, the cropped images are expanded symmetrically. Finally we obtain 6,720 images as the training set. In the training phase, a batch size of 64 images is used. We choose Adam [19] as the optimizer and an adaptive learning rate decay method as the learning rate scheduler. The initial learning rate is 1e-3, the patience is 10 epochs, the adjustment factor is 0.5, and the minimum learning rate is 1e-10. We trained the network for 100 epochs. In the test phase, since the proposed network is a fully convolutional network, we input the source images without cropping into the network to obtain the final fused image.
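The channel strategy of Eqn. 11 amounts to a two-way softmax over globally pooled channel descriptors. A minimal numpy sketch with $H\times W\times C$ feature maps (the array shapes, random inputs, and function name are illustrative assumptions):

```python
import numpy as np

def channel_fusion(f_ir, f_vi):
    """Channel strategy (Eqn. 11): softmax-weighted sum over pooled channels."""
    # Global average pooling over the spatial dims -> shape (1, 1, C)
    u_ir = f_ir.mean(axis=(0, 1), keepdims=True)
    u_vi = f_vi.mean(axis=(0, 1), keepdims=True)
    # Two-way softmax per channel: weights sum to 1
    w_ir = np.exp(u_ir) / (np.exp(u_ir) + np.exp(u_vi))
    w_vi = 1.0 - w_ir
    # Channel-weighted features, then element-wise addition
    return w_ir * f_ir + w_vi * f_vi

rng = np.random.default_rng(0)
f_ir = rng.random((4, 4, 3))   # toy infrared features, H=W=4, C=3
f_vi = rng.random((4, 4, 3))   # toy visible light features
fused = channel_fusion(f_ir, f_vi)
```

Since the per-channel weights lie in (0, 1) and sum to 1, the fused features are a convex combination of the two inputs at every position.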
The experiment is conducted on an NVIDIA TITAN Xp GPU with 128 GB of CPU memory.

### IV-B Fusion Evaluation

Image quality evaluation is still a problem that has not been solved well. In this paper, we use both subjective and objective evaluation to assess image quality. We compare the proposed method with some classic and recent fusion methods, including the cross bilateral filter fusion method (CBF)[20], Laplacian Pyramid (LP)[21], Ratio of Low-pass Pyramid (RP)[22], Dual-Tree Complex Wavelet Transform (DTCWT)[23], Curvelet Transform (CVT)[24], Multiresolution Singular Value Decomposition (MSVD)[25], gradient transfer and total variation minimization (GTF)[26], CNN[13], FusionGan[12], Deepfuse[14] and DenseFuse[10].

Figure 4: Subjective evaluation of fused images of courtyard. Infrared image and visible light image are our input images. The other 11 images are the results of the comparison methods and our proposed method.

#### IV-B1 Subjective Evaluation:

We follow human observation standards for images, such as brightness, contrast, color and other naturalness indicators. Generally speaking, subjective evaluation measures how satisfactory the image appears to people. In this paper, we list and compare the results of multiple algorithms. As shown in Fig. 4, the channel strategy retains more detailed information, such as the texture of the bushes, and more global information, such as the contrast of the sky and buildings, than the addition strategy. Compared with other methods, our proposed method also retains the details of visible light and the infrared radiation information. At the same time, no noise or artifacts are introduced in the results; for example, there are noise points in the RP method and artifacts in CBF and FusionGan. The results of DenseFuse are already quite good, but there is more white noise in the four corners.

#### IV-B2 Objective Evaluation:

However, there is no single standard for subjective evaluation.
People in different industries and different fields have different requirements for images. Therefore, we introduce six objective evaluation indicators: EI[27], SF[28], EN[29], SSIM[30], $N_{abf}$[31] and MI[32]. EI represents the contrast intensity of adjacent pixels, SF represents the degree of mutation in the image, EN represents the amount of information contained in the image, SSIM represents the structural similarity between two images, $N_{abf}$ represents the ratio of noise artificially added to the final image, and MI represents the correlation of two images.

TABLE II: Objective evaluation of classic and latest fusion algorithms.

| | EI | SF | EN | SSIM | Nabf | MI |
|---|---|---|---|---|---|---|
| CBF | 52.3045 | 13.0921 | 6.8275 | 0.6142 | 0.2669 | 13.6550 |
| LP | 44.7055 | 11.5391 | 6.6322 | 0.7037 | 0.1328 | 13.2644 |
| RP | 44.9054 | 12.7249 | 6.5397 | 0.6705 | 0.1915 | 13.0794 |
| GTF | 35.0073 | 9.5022 | 6.5781 | 0.6798 | 0.0710 | 13.1562 |
| DTCWT | 42.6159 | 11.1913 | 6.4799 | 0.6939 | 0.1428 | 12.9599 |
| CVT | 43.1558 | 11.2006 | 6.5005 | 0.6907 | 0.1644 | 13.0011 |
| MSVD | 27.9727 | 8.9758 | 6.2869 | 0.7219 | 0.0378 | 12.5738 |
| Deepfuse | 33.8768 | 8.3500 | 6.6102 | 0.7135 | 0.0610 | 13.2205 |
| CNN | 44.8334 | 11.6483 | 7.0629 | 0.6955 | 0.1286 | 14.1259 |
| FusionGan | 30.6847 | 7.5492 | 6.5299 | 0.6211 | 0.1344 | 13.0597 |
| Densefuse | 36.4838 | 9.3238 | 6.8526 | 0.7108 | 0.0890 | 13.7053 |
| ours-addition | 25.3765 | 6.2758 | 6.2691 | 0.7593 | 0.0010 | 12.5382 |
| ours-channel | 59.2286 | 21.9070 | 6.8793 | 0.5990 | 0.1958 | 13.7586 |

It can be seen in Table II that the result of the addition strategy obtains the best value in SSIM and $N_{abf}$, which means that this strategy retains a higher structural similarity and less noise. The result of the channel strategy obtains the best value in EI and SF, which means that the image contains more detailed edge information.
The result of the channel strategy also has the second best value in EN and MI, which means that the result contains a lot of information from the source images. With either the addition or the channel strategy, our algorithm has the best or second best value among the six indicators, which indicates that our proposed algorithm is effective for the fusion of infrared and visible light images.

## V CONCLUSION

In this paper, we propose a novel and effective deep learning network for infrared and visible light image fusion. The network we proposed has three parts: the encoder, the fusion layer, and the decoder. After the two images are encoded, the extracted features are fused by the fusion layer, and the result is then input to the decoder to reconstruct the fused image. The encoder we designed has a dual-branch structure: one branch is the detail branch, which uses dense connections to extract shallow-layer information, and the other is the semantic branch, which uses fast downsampling to extract global information. We design a new fusion strategy to fuse two sets of features according to the importance of each channel. Our proposed method obtains the best or the second best value on the six objective indicators, which shows that it has advantages for the fusion of infrared and visible light images. Although our research focuses on infrared and visible light image fusion, our proposed fusion strategy can be applied to a series of image fusion tasks including but not limited to medical image fusion and hyperspectral fusion. The proposed loss function can be used for a variety of image reconstruction or image generation tasks, and we hope the proposed network structure can be used for other image processing tasks.

## References

* [1] S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, “Pixel-level image fusion: A survey of the state of the art,” _Information Fusion_ , vol. 33, pp. 100–112, 2017. * [2] T. Mertens, J. Kautz, and F.
Van Reeth, “Exposure fusion: A simple and practical alternative to high dynamic range photography,” in _Computer Graphics Forum_, vol. 28, no. 1. Wiley Online Library, 2009, pp. 161–171. * [3] Z. Zhang and R. S. Blum, “A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application,” _Proceedings of the IEEE_, vol. 87, no. 8, pp. 1315–1326, 1999. * [4] K. P. Upla, M. V. Joshi, and P. P. Gajjar, “An edge preserving multiresolution fusion: Use of contourlet transform and mrf prior,” _IEEE Transactions on Geoscience and Remote Sensing_, vol. 53, no. 6, pp. 3210–3220, 2014. * [5] H. Li, X.-J. Wu, and J. Kittler, “Mdlatlrr: A novel decomposition method for infrared and visible image fusion,” _IEEE Transactions on Image Processing_, 2020. * [6] D. P. Bavirisetti, G. Xiao, and G. Liu, “Multi-sensor image fusion based on fourth order partial differential equations,” in _2017 20th International Conference on Information Fusion (Fusion)_. IEEE, 2017, pp. 1–9. * [7] N. Cvejic, D. Bull, and N. Canagarajah, “Region-based multimodal image fusion using ica bases,” _IEEE Sensors Journal_, vol. 7, no. 5, pp. 743–751, 2007. * [8] J. Mou, W. Gao, and Z. Song, “Image fusion based on non-negative matrix factorization and infrared feature extraction,” in _2013 6th International Congress on Image and Signal Processing (CISP)_, vol. 2. IEEE, 2013, pp. 1046–1050. * [9] Y. Liu, X. Chen, Z. Wang, Z. J. Wang, R. K. Ward, and X. Wang, “Deep learning for pixel-level image fusion: Recent advances and future prospects,” _Information Fusion_, vol. 42, pp. 158–173, 2018. * [10] H. Li and X.-J. Wu, “Densefuse: A fusion approach to infrared and visible images,” _IEEE Transactions on Image Processing_, vol. 28, no. 5, pp. 2614–2623, 2018. * [11] X. Song and X.-J. Wu, “Multi-focus image fusion with pca filters of pcanet,” in _IAPR Workshop on Multimodal Pattern Recognition of Social Signals in Human-Computer Interaction_.
Springer, 2018, pp. 1–17. * [12] J. Ma, W. Yu, P. Liang, C. Li, and J. Jiang, “Fusiongan: A generative adversarial network for infrared and visible image fusion,” _Information Fusion_, vol. 48, pp. 11–26, 2019. * [13] Y. Liu, X. Chen, H. Peng, and Z. Wang, “Multi-focus image fusion with a deep convolutional neural network,” _Information Fusion_, vol. 36, pp. 191–207, 2017. * [14] K. R. Prabhakar, V. S. Srikar, and R. V. Babu, “Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs,” in _ICCV_, 2017, pp. 4724–4732. * [15] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 4700–4708. * [16] D. Misra, “Mish: A self regularized non-monotonic neural activation function,” _arXiv: Learning_, 2019. * [17] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in _European conference on computer vision_. Springer, 2016, pp. 694–711. * [18] A. Toet _et al._, “Tno image fusion dataset,” _Figshare. data_, 2014. * [19] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” _arXiv: Learning_, 2014. * [20] B. S. Kumar, “Image fusion based on pixel significance using cross bilateral filter,” _Signal, Image and Video Processing_, vol. 9, no. 5, pp. 1193–1204, 2015. * [21] P. J. Burt and E. H. Adelson, “The laplacian pyramid as a compact image code,” _IEEE Transactions on Communications_, vol. 31, no. 4, pp. 671–679, 1983. * [22] A. Toet, “Image fusion by a ratio of low-pass pyramid,” _Pattern Recognition Letters_, vol. 9, no. 4, pp. 245–253, 1989. * [23] J. J. Lewis, R. J. O’Callaghan, S. G. Nikolov, D. R. Bull, and N. Canagarajah, “Pixel- and region-based image fusion with complex wavelets,” _Information Fusion_, vol. 8, no. 2, pp. 119–130, 2007. * [24] F. Nencini, A. Garzelli, S. Baronti, and L.
Alparone, “Remote sensing image fusion using the curvelet transform,” _Information Fusion_, vol. 8, no. 2, pp. 143–156, 2007. * [25] V. Naidu, “Image fusion technique using multi-resolution singular value decomposition,” _Defence Science Journal_, vol. 61, no. 5, p. 479, 2011. * [26] J. Ma, C. Chen, C. Li, and J. Huang, “Infrared and visible image fusion via gradient transfer and total variation minimization,” _Information Fusion_, vol. 31, pp. 100–109, 2016. * [27] C. Xydeas and V. Petrovic, “Objective image fusion performance measure,” _Electronics Letters_, vol. 36, no. 4, pp. 308–309, 2000. * [28] A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” _IEEE Transactions on Communications_, vol. 43, no. 12, pp. 2959–2965, 1995. * [29] J. W. Roberts, J. A. van Aardt, and F. B. Ahmed, “Assessment of image fusion procedures using entropy, image quality, and multispectral classification,” _Journal of Applied Remote Sensing_, vol. 2, no. 1, p. 023522, 2008. * [30] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” _IEEE Transactions on Image Processing_, vol. 13, no. 4, pp. 600–612, 2004. * [31] B. S. Kumar, “Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform,” _Signal, Image and Video Processing_, vol. 7, no. 6, pp. 1125–1143, 2013. * [32] G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” _Electronics Letters_, vol. 38, no. 7, pp. 313–315, 2002.
# Protecting qubit coherence by spectrally engineered driving of the spin environment Maxime Joos Department of Physics, University of California, Santa Barbara, California 93106, USA Dolev Bluvstein Department of Physics, University of California, Santa Barbara, California 93106, USA Department of Physics, Harvard University, Cambridge, MA 02138, USA Yuanqi Lyu Department of Physics, University of California, Santa Barbara, California 93106, USA Department of Physics, University of California, Berkeley, CA 94720, USA David Weld Department of Physics, University of California, Santa Barbara, California 93106, USA Ania Bleszynski Jayich Department of Physics, University of California, Santa Barbara, California 93106, USA ###### Abstract Modern quantum technologies rely crucially on techniques to mitigate quantum decoherence; these techniques can be either passive, achieved for example via materials engineering, or active, typically achieved via pulsed monochromatic driving fields applied to the qubit. Using a solid-state defect spin coupled to a microwave-driven spin bath, we experimentally demonstrate a decoherence mitigation method based on spectral engineering of the environmental noise with a polychromatic drive waveform, and show that it outperforms monochromatic techniques. Results are in agreement with quantitative modeling, and open the path to active decoherence protection using custom-designed waveforms applied to the environment rather than the qubit. Quantum decoherence, resulting from the unavoidable coupling between a qubit and its environment, underlies fundamental descriptions of the quantum- classical transition [1, 2] and poses a major challenge to a variety of quantum technologies [3]. Several methods exist to mitigate environment- induced decoherence. Materials-based approaches aim to perfect the environment through techniques like surface passivation and optimized synthesis [4, 5, 6, 7]. 
Dynamical decoupling instead relies on manipulating the qubit rapidly enough to average out deleterious environmental fluctuations at particular frequencies [8, 9, 10, 11]. However, the flexibility and power of this technique come at some cost to the qubit’s utility, as the decoupling pulses need to be interleaved with gate operations or sensing sequences [12]. An alternative and complementary approach aims at manipulating the noise frequency spectrum of the bath itself, a technique referred to as spin-bath driving [13], originally developed in nuclear magnetic resonance, where it is known as spin decoupling [14]. Recent work has shown that monochromatic driving of a spin bath can extend the coherence of bulk [15] and near-surface NV centers [16, 17]. An intrinsic limitation of this monochromatic approach is that it does not efficiently address the spectrally broad classes of spins that naturally emerge in inhomogeneous environments and interacting spin baths [18]. In this work, we introduce a new polychromatic drive scheme and demonstrate experimentally that spectrally engineered driving of a spin bath enhances qubit coherence beyond the limit of what can be achieved with monochromatic driving. The drive operates in analogy to motional narrowing and broad-band decoupling [18] in nuclear magnetic resonance: driving the spin bath accelerates incoherent bath fluctuations, reducing the integrated phase acquired from the spin bath. Our driving scheme not only enables significantly increased power efficiency for protecting coherence but also paves the way toward more complex and powerful techniques of tailored dynamical bath engineering. Figure 1: Coherence extension by driving the spin bath. (a) A shallow NV center (qubit) is dephased by the magnetic noise originating from a bath of surface spins. (b) Hahn-echo sequence used to measure coherence during bath driving with linewidth $\Delta\nu$ and Rabi frequency $\Omega_{ss}$.
(c) Coherence decay without driving (grey), with monochromatic driving (blue) and polychromatic/stochastic driving (orange), described in the text. Solid lines are fits to $\exp{\left[-(\tau/T_{2})^{n}\right]}$. Fig. 1(a) shows a model of the system we investigate. The polychromatic drive is applied to the spin environment of a shallow nitrogen-vacancy (NV) center in diamond (see [19] for details on the diamond sample preparation). The NV center is a solid-state qubit that exhibits long coherence in ambient conditions and is used in a variety of applications ranging from networking to quantum sensing. This work uses NV centers located just a few nanometers below the diamond surface. Recent studies have identified magnetic noise from the surface spins as a major contributor to shallow NV decoherence [20, 21, 16, 22, 23, 24]. Radio-frequency (rf) fields, delivered by a single free-space antenna, enable coherent control of the NV qubit states $\{\ket{m_{s}=0},\ket{m_{s}=-1}\}$ and the surface spin qubit states $\{\ket{\uparrow},\ket{\downarrow}\}$. NV centers are addressed individually by means of a home-built confocal microscope operating in ambient conditions, and are subjected to a static magnetic field $B_{0}\approx 315\,\mathrm{G}$ aligned along the NV axis. We use green (532-nm wavelength) laser pulses for initialization and readout of the NV spin state. The NV coherence $C(\tau)$ is probed using the Hahn-echo sequence represented in Fig. 1(b); simultaneously, the surface spins are driven with spectrally engineered rf fields. Specifically, we apply drives with Lorentzian spectral line-shapes characterized by a tunable full-width at half maximum (FWHM) $\Delta\nu$, generated by stochastic phase modulation of a carrier wave with Rabi frequency $\Omega_{ss}$ on the surface spin transition, denoted as a stochastic drive (see [19] for details).
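Such a stochastic drive can be pictured as pure phase modulation of a resonant carrier. The sketch below is an illustration, not the waveform-generation code of [19]: the sample rate, duration, and linewidth are assumed values, and the recipe uses the textbook phase-diffusion result that a Wiener (random-walk) phase yields an approximately Lorentzian line.

```python
import numpy as np

rng = np.random.default_rng(0)
fs  = 1.0e9     # sample rate in Hz (assumed)
dur = 10e-6     # waveform duration in s (assumed)
f_c = 885.3e6   # carrier at the surface-spin resonance (from the DEER spectrum)
dnu = 17e6      # target Lorentzian FWHM in Hz

n  = int(fs * dur)
dt = 1.0 / fs
# Random-walk phase with per-step variance 2*pi*dnu*dt broadens the
# carrier into an approximately Lorentzian line of FWHM dnu, while the
# drive amplitude stays strictly constant (pure phase modulation).
phase = np.cumsum(rng.normal(0.0, np.sqrt(2 * np.pi * dnu * dt), n))
t = np.arange(n) * dt
drive = np.cos(2 * np.pi * f_c * t + phase)
```

Constant amplitude is the point of this construction: unlike amplitude modulation, the waveform does not produce fluctuating AC Stark shifts on the NV.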
Spectrum engineering via phase modulation (as opposed to amplitude modulation) is a key feature of the experiment: it ensures that the driving power remains constant, whereas amplitude modulation could cause fluctuating AC Stark shifts of the NV spin states and decohere the NV [16]. The NV coherence time $T_{2}$ is extracted from a stretched exponential fit ($\exp{\left[-(\tau/T_{2})^{n}\right]}$) to the measured coherence decay shown in Fig. 1(c). To understand the effect of bath driving on the qubit’s coherence, we first give a quantitative description of NV decoherence based on a model of the surrounding bath. The NV coherence $C(\tau)$ is determined by the overlap between a filter function and the noise spectral density [25, 26, 27, 28]: $C(\tau)=\exp\left[-\frac{1}{\pi}\int_{0}^{\infty}d\omega S(\omega)F(\tau,\omega)\right],$ (1) where $S(\omega)=\gamma^{2}\int_{-\infty}^{\infty}e^{-i\omega t}\langle B(t^{\prime}+t)B(t^{\prime})\rangle dt,$ (2) is the magnetic noise spectral density experienced by the NV (see Fig. 2(c) and (d)), $\gamma$ is the gyromagnetic ratio, $B$ is the field component along the NV axis, $F(\tau,\omega)=8/\omega^{2}\sin^{4}{(\omega\tau/4)}$ is the qubit Hahn-echo filter function, and $\omega$ is the angular frequency. This decoherence model assumes a random Gaussian distribution for the noise amplitudes and is valid in the regime of pure dephasing. Our strategy to decouple the qubit from its noise environment relies on actively reshaping the spectral density so that it minimally overlaps with the filter function. This decoupling mechanism and its underlying microscopic model are illustrated in Fig. 2. The upper panels are time-domain representations of the magnetic noise $B(t)$ produced by the surface spins, and the lower panels are the corresponding frequency domain representations $S(\omega)$. We now illustrate three cases of driving: no driving, monochromatic, and stochastic. In the absence of driving (grey curve in Fig.
2), intrinsic spin relaxations at a rate $\Gamma$ lead to a random process for the noise, characterized by a correlation time $\tau_{c}=1/\Gamma$ in the time domain (Fig. 2(a) and (b)) and a Lorentzian spectrum with half width at half maximum (HWHM) $\Gamma$ centered at $\omega=0$ (Fig. 2(c) and (d)). The overlap of the noise spectrum with the Hahn-echo filter function sets the contribution of surface spins to NV decoherence. In the case of resonant monochromatic bath driving with a Rabi frequency $\Omega_{ss}$, the surface spins undergo Rabi flopping in addition to their intrinsic relaxation dynamics, leading to oscillations of the field $B(t)$ in the time domain (blue curve in Fig. 2(a)). In the frequency domain (Fig. 2(c)), monochromatic driving shifts the center of the spectral density from 0 to $\Omega_{ss}$; the overlap with the qubit filter function (represented by the blue shaded area) is consequently reduced from the undriven case. Figure 2: Principle of monochromatic and stochastic bath driving. Time domain representations of the magnetic noise $B(t)$ produced by a spin bath, (a) for monochromatic driving (blue) and (b) for stochastic driving (orange). For simplicity, all the spins are assumed to have the same resonance frequency. Frequency domain representations of the noise spectral density $S(\omega)$, (c) for monochromatic driving (blue) and (d) for stochastic driving (orange). Both driving methods reduce the overlap between the filter function $F(\tau,\omega)$ (dashed line) and the noise spectrum, as compared to the undriven case (grey curve), resulting in increased coherence. In contrast, stochastic driving consists of a continuum of frequency tones with random phase relations. Each tone drives partial Rabi oscillations of the surface spins and the incoherent sum of these Rabi oscillations results in random evolution as shown in Fig. 2(b). 
These incoherent dynamics lead to an effective relaxation rate $R$ induced by stochastic driving, on top of the intrinsic relaxation rate $\Gamma$ [19]. In the case of a broad Lorentzian driving spectrum $\Delta\nu\gg\Omega_{ss},\Gamma$, $R=\Gamma+2\frac{\Omega_{ss}^{2}}{\Delta\nu}.$ (3) In the frequency domain (Fig. 2(d)), the effect of the stochastic drive is to broaden and flatten the noise spectral density. The overlap with the filter function is reduced, leading to an extension of coherence. Compared to monochromatic driving, the advantage of stochastic driving is that it more efficiently addresses a broad spectral range of spins: the spectral range over which spins are decoupled scales linearly (quadratically) with the Rabi frequency $\Omega_{ss}$ for monochromatic (stochastic) driving [18]. Having discussed the predicted spin dynamics under stochastic excitation and its potential for extending qubit coherence, we now experimentally characterize the frequency and time response of the surface-spin bath and the effects of stochastic driving. We use the NV to probe the surface-spin dynamics with the double electron-electron resonance (DEER) sequence shown in Fig. 3(a). The Hahn-echo sequence on the NV cancels out low frequency noise from the environment, but the DEER pulse (duration $t_{ss}$, center frequency $f_{ss}$ and spectral width $\Delta\nu$) selectively recouples the surface spins and reveals their dynamics. Figure 3(b) shows the NV coherence as a function of $f_{ss}$, where the fixed pulse duration $t_{ss}=60\text{ ns}$ is chosen to be a $\pi\text{-pulse}$ on resonance with the surface spins. The acquired spectrum exhibits a resonance at $885.3\pm 0.4\,\mathrm{MHz}$, which is the magnetic signature of a $g=2$ spin-$1/2$ particle in the applied static field of $315\,\mathrm{G}$.
The line-shape is best modeled by assuming inhomogeneous broadening of the surface spins and accounting for the finite pulse width $t_{ss}$ (see [19] for details). From the fit, we extract the FWHM of the underlying Gaussian distribution of the spins, $2\Gamma_{2}=15.7\pm 1.3\,\mathrm{MHz}$. Having confirmed the electronic nature of the surface spins and identified their inhomogeneous broadening, we now investigate the temporal response of the bath. Figure 3(c) shows the measured NV coherence as a function of $t_{ss}$ for different line-widths $\Delta\nu$ of the drive and a fixed sequence duration $\tau=16\,\mathrm{\mu s}$. When $\Delta\nu=0$, the NV coherence exhibits damped oscillations as a function of $t_{ss}$. The oscillations reflect the Rabi flopping of the surface spins, while the damping is due to the inhomogeneous broadening and the intrinsic relaxation. For $\Delta\nu>0$, the stochastic DEER pulse decorrelates the field produced by the surface spins between the first and second halves of the DEER sequence, leading to increased damping of the Rabi oscillations. For $\Delta\nu/2\pi=48\text{ MHz}>2\Gamma_{2}$, the measured NV coherence decays exponentially at a rate given by Eq. (3), as expected for broad Lorentzian excitation. The solid curves in Fig. 3(c) are fits using our model of inhomogeneously broadened surface spins subjected to a stochastic drive [19]. From these fits, we extract an independent measure of the inhomogeneous broadening, $2\Gamma_{2}=14.9\pm 1.2\,\mathrm{MHz}$, in agreement with the DEER spectrum in Fig. 3(b), and a Rabi frequency $\Omega_{ss}/2\pi=8.7\pm 0.1\,\mathrm{MHz}$. To investigate the noise spectrum generated by the measured surface spin dynamics, Figure 3(d) shows the Fourier transform of the fitted bath correlations of Fig. 3(c), obtained after symmetrization with respect to $t_{ss}=0$. We constrain the integrated power of the spectra to be constant.
As expected from the time domain dynamics, the spectrum under monochromatic driving ($\Delta\nu=0$) exhibits a peak at $\Omega_{ss}/2\pi$, but still has a remaining static component due to the detuned Rabi oscillations of the inhomogeneously broadened spins (the simplified Fig. 2 schematic does not account for this inhomogeneous broadening). In the case of stochastic driving, the noise spectrum is flattened and broadened, and the overlap with $F(\tau,\omega)$ is considerably reduced. Figure 3: Probing bath correlations to quantify the noise spectrum under drive. (a) Pulse sequence used to probe the surface spin bath, and (b) corresponding spectrum with $\Delta\nu=0$, $t_{ss}=60\text{ ns}$, $\tau=20\,\mathrm{\mu s}$, $\Omega_{ss}/2\pi=9.7\text{ MHz}$, and an NV $\pi\text{-pulse}$ of $80\,\mathrm{ns}$. (c) Experimental spin bath dynamics, obtained by measuring NV coherence as a function of pulse length $t_{ss}$ for different excitation linewidths $\Delta\nu$ and for $\Omega_{ss}/2\pi=8.7\text{ MHz}$. Increasing $\Delta\nu$ accelerates the damping of the Rabi oscillations, eventually reaching an overdamped regime. (d) Noise spectral densities obtained by Fourier transformation of the fitted DEER signals in (c) and normalized so that $2\int_{0}^{\infty}S(\omega)d\omega/2\pi=1$. The grey shaded area represents the Hahn-echo filter function corresponding to a sequence duration of $\tau=10\,\mathrm{\mu s}$. We now demonstrate that stochastic driving extends NV coherence beyond the gains of monochromatic bath driving. We perform the sequence shown in Fig. 1(b) and measure the NV coherence time $T_{2}\left(\Omega_{ss},\Delta\nu\right)$. We observe that stochastic driving outperforms monochromatic driving over a broad range of drive parameters.
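The decoherence model of Eq. (1) and the stretched-exponential extraction of $T_{2}$ can be reproduced numerically. The Lorentzian spectrum, its parameters, and the linearized fit below are illustrative assumptions, not the paper's analysis code:

```python
import numpy as np

def filter_fn(tau, w):
    # Hahn-echo filter function F(tau, w) = 8/w^2 * sin^4(w*tau/4)
    return 8.0 / w**2 * np.sin(w * tau / 4.0) ** 4

def S_lorentzian(w, b_rms, rate):
    # Spectral density of exponentially correlated noise: a Lorentzian of
    # HWHM `rate` centered at w = 0 (undriven bath); b_rms in rad/s.
    return b_rms**2 * 2.0 * rate / (rate**2 + w**2)

def coherence(tau, b_rms, rate):
    # Eq. (1): C(tau) = exp[-(1/pi) * int_0^inf S(w) F(tau, w) dw]
    w = np.linspace(1.0, 2e8, 200_000)   # rad/s grid; avoid w = 0
    chi = np.sum(S_lorentzian(w, b_rms, rate) * filter_fn(tau, w)) * (w[1] - w[0]) / np.pi
    return np.exp(-chi)

# Assumed bath parameters, chosen only to give a visible decay
b_rms, rate = 2 * np.pi * 20e3, 2 * np.pi * 0.03e6
taus = np.linspace(2e-6, 60e-6, 25)
C = np.array([coherence(t, b_rms, rate) for t in taus])

# Extract T2 and n from C = exp[-(tau/T2)^n] via the linearization
# ln(-ln C) = n*ln(tau) - n*ln(T2)
n_fit, intercept = np.polyfit(np.log(taus), np.log(-np.log(C)), 1)
T2_fit = np.exp(-intercept / n_fit)
```

The fitted exponent $n$ interpolates between the slow-noise ($n\approx 3$) and motionally narrowed ($n\approx 1$) limits, as expected for a Lorentzian bath.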
Figure 4 shows the coherence time $T_{2}(\Omega_{ss},\Delta\nu)$, where the bare coherence time $T_{2}(\Omega_{ss}=0)=33.1\pm 1.5\,\mathrm{\mu s}$. Figures 4(a) and (b) highlight the $\Delta\nu$ and $\Omega_{ss}$ dependences, respectively. For example, we measure a 2.7-fold increase in coherence time with stochastic driving at $(\Omega_{ss}/2\pi=4.9\text{ MHz},\Delta\nu/2\pi=17.4\text{ MHz})$, as compared to a 1.8-fold increase with monochromatic driving at the same power. Our experimental data are well-captured by Eq. (1), where $S(\omega)$ includes contributions from two $g=2$ spin baths (both are driven, but have different bath parameters) and a residual bath which is undriven (see [19] for details). Figure 4: Coherence time for different driving parameters. (a) NV $T_{2}$ as a function of $\Delta\nu$ for different bath Rabi frequencies $\Omega_{ss}$. The stochastic drive outperforms the monochromatic drive ($\Delta\nu=0$). (b) $T_{2}$ as a function of drive Rabi frequency $\Omega_{ss}$ for monochromatic driving ($\Delta\nu=0$) and stochastic driving with $\Delta\nu=17\text{ MHz}$. Solid lines are fits using our model [19]. Our results suggest that, for a given driving power, an optimal $\Delta\nu$ exists that maximizes the coherence. The non-monotonic dependence on $\Delta\nu$ can be understood intuitively as follows: starting from $\Delta\nu=0$, increasing $\Delta\nu$ enables a larger fraction of the inhomogeneously broadened surface spins to be resonantly addressed and therefore decoupled from the NV center. When $\Delta\nu$ exceeds the inhomogeneous broadening of the surface spins (reported above as $2\Gamma_{2}\sim 16\text{ MHz}$), drive energy is spread over non-resonant frequencies and the power efficiency of stochastic driving decreases. Figure 4(b) also shows that stochastic driving reduces coherence at the lowest Rabi frequencies, an effect also captured by our model.
Intuitively, as $\Omega_{ss}$ first increases from zero, the noise spectral density broadens, increasing its overlap with the filter function (see Fig. 2(d)) and decreasing the coherence, before broadening well beyond the peak and flattening out. In [19] we derive the condition for stochastic driving to increase coherence: $\sqrt{\Gamma R}\gtrsim 2\pi/T_{2}$. Incorporating two driven baths with long- and short-lived correlations (as suggested by [29]) in our model is essential to quantitatively capture the features of Fig. 4(a) and (b), including the drop of coherence discussed above, as detailed in the supplemental material [19]. From the fit in Fig. 4(b), we extract the correlation times of the two driven baths: $68\pm 17\,\mathrm{\mu s}$ and $0.37\pm 0.05\,\mathrm{\mu s}$. Finally, we provide a qualitative explanation for the gains of stochastic driving relative to monochromatic driving. In order to completely decouple an ensemble of inhomogeneously broadened spins, monochromatic driving relies on power broadening and requires $\Omega_{ss}\gg\Gamma_{2}$. On the other hand, stochastic driving naturally addresses all the spins as long as $\Delta\nu\gtrsim\Gamma_{2}$. Combining this relation with Eq. (3), the condition for decoupling all the spins in the case of stochastic driving is $\Omega_{ss}\gg\sqrt{\Gamma\Gamma_{2}}$. Therefore, stochastic driving outperforms monochromatic driving as long as $\Gamma<\Gamma_{2}$, which is often satisfied and, for example, is satisfied by several orders of magnitude for near-surface NV qubits. For the surface spin bath investigated here, we observe $\Gamma_{2}\sim 10\,\mathrm{MHz}$ and expect $\Gamma\sim 0.03\,\mathrm{MHz}$ as reported in [30] (consistent with our observations), explaining the better performance of stochastic driving since $\Gamma\ll\Gamma_{2}$.
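Equation (3) and the condition $\sqrt{\Gamma R}\gtrsim 2\pi/T_{2}$ can be checked with representative numbers quoted in the text (treated here as assumptions about the relevant regime):

```python
import numpy as np

# Representative values from the text, as angular frequencies in rad/s
Gamma    = 2 * np.pi * 0.03e6   # intrinsic bath relaxation rate
Omega_ss = 2 * np.pi * 4.9e6    # stochastic-drive Rabi frequency
Delta_nu = 2 * np.pi * 17.4e6   # Lorentzian drive FWHM
T2_bare  = 33.1e-6              # undriven coherence time (s)

# Eq. (3), valid in the broad-drive limit Delta_nu >> Omega_ss, Gamma
R = Gamma + 2 * Omega_ss**2 / Delta_nu

# Condition for stochastic driving to increase coherence [19]
helps = np.sqrt(Gamma * R) > 2 * np.pi / T2_bare
print(f"R/2pi = {R / (2 * np.pi) / 1e6:.2f} MHz, condition satisfied: {bool(helps)}")
```

With these numbers the drive-induced rate dominates the intrinsic one by roughly two orders of magnitude, consistent with the strong coherence extension reported at this operating point.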
A practical advantage of stochastic driving is that it reduces the driving power required to achieve a given coherence extension. AC Stark shifts of the qubit energy levels and heating at high rf driving powers are experimental limitations to the coherence time that scale with drive power ($\propto|\Omega_{ss}|^{2}$) [16]. For example, Fig. 4(b) shows that doubling the coherence necessitates $\Omega_{ss}/2\pi\approx 3\,\mathrm{MHz}$ for stochastic driving and $\approx 8\,\mathrm{MHz}$ for monochromatic driving, corresponding to a 7-fold power reduction when stochastic driving is used. This advantage may prove crucial for avoiding heating and other undesired effects, in particular for cryogenic systems like superconducting qubits [31], which could also benefit from spin bath driving techniques. While coherent driving of surface electric dipoles that decohere trapped ion qubits [32, 33, 34, 35] or NV qubits [36] may be infeasible, this work suggests that such dipoles could possibly be incoherently driven and motionally narrowed. In this work we have considered only driving with a Lorentzian spectrum, but other types of polychromatic driving could be even more advantageous. The drive could, for example, be engineered to precisely cover the exact spectral shape of the noise sources, and engineering alternative spectral shapes could also enable maximization of the coherence at a desired Rabi frequency. In the supplemental material [19], we show that a Gaussian spectrum for the drive performs better than a Lorentzian spectrum at specific Rabi frequencies. Ultimately, our method may be extended to polychromatic driving with non-random phase relations between different frequencies. In conclusion, we demonstrate the extension of coherence of a qubit by driving its spin environment with spectrally engineered fields.
We show that stochastic driving of the surface-spin bath of shallow NV centers is more efficient than monochromatic driving, and corroborate our findings with a quantitative model. Further improvements, especially for mesoscopic baths, may be possible using customized structured drive waveforms. We thank Shreyas Parthasarathy and Esat Kondakci for careful reading of the manuscript. We gratefully acknowledge support from the US Department of Energy (BES grant No. DE-SC0019241) for surface spin studies and the DARPA DRINQS program (Agreement No. D18AC00014) for driving protocols. D.B. acknowledges support from the NSF Graduate Research Fellowship Program (grant DGE1745303) and The Fannie and John Hertz Foundation. ## References * Zurek [2007] W. H. Zurek, “Decoherence and the transition from quantum to classical — revisited,” in _Quantum Decoherence: Poincaré Seminar 2005_, edited by B. Duplantier, J.-M. Raimond, and V. Rivasseau (Birkhäuser Basel, Basel, 2007) pp. 1–31. * Schlosshauer [2019] M. Schlosshauer, Physics Reports 831, 1 (2019), quantum decoherence. * Acín _et al._ [2018] A. Acín, I. Bloch, H. Buhrman, T. Calarco, C. Eichler, J. Eisert, D. Esteve, N. Gisin, S. J. Glaser, F. Jelezko, S. Kuhr, M. Lewenstein, M. F. Riedel, P. O. Schmidt, R. Thew, A. Wallraff, I. Walmsley, and F. K. Wilhelm, New Journal of Physics 20, 080201 (2018). * Quintana _et al._ [2014] C. M. Quintana, A. Megrant, Z. Chen, A. Dunsworth, B. Chiaro, R. Barends, B. Campbell, Y. Chen, I.-C. Hoi, E. Jeffrey, J. Kelly, J. Y. Mutus, P. J. J. O’Malley, C. Neill, P. Roushan, D. Sank, A. Vainsencher, J. Wenner, T. C. White, A. N. Cleland, and J. M. Martinis, Applied Physics Letters 105, 062601 (2014). * Kumar _et al._ [2016] P. Kumar, S. Sendelbach, M. A. Beck, J. W. Freeland, Z. Wang, H. Wang, C. C. Yu, R. Q. Wu, D. P. Pappas, and R. McDermott, Phys. Rev. Applied 6, 041001 (2016). * Ohno _et al._ [2012] K. Ohno, F. Joseph Heremans, L. C. Bassett, B. A. Myers, D. M. Toyli, A. C. 
# Relational Type Theory (All Proofs) Aaron Stump Computer Science The University of Iowa Iowa City, Iowa, 52242 Email<EMAIL_ADDRESS>Benjamin Delaware Computer Science Purdue University West Lafayette, Indiana, 47907 Email<EMAIL_ADDRESS>Christopher Jenkins Computer Science The University of Iowa Iowa City, Iowa, 52242 Email<EMAIL_ADDRESS> ###### Abstract This paper introduces Relational Type Theory (RelTT), a new approach to type theory with extensionality principles, based on a relational semantics for types. The type constructs of the theory are those of System F plus relational composition, converse, and promotion of application of a term to a relation. A concise realizability semantics is presented for these types. The paper shows how a number of constructions of traditional interest in type theory are possible in RelTT, including $\eta$-laws for basic types, inductive types with their induction principles, and positive-recursive types. A crucial role is played by a lemma called Identity Inclusion, which refines the Identity Extension property familiar from the semantics of parametric polymorphism. The paper concludes with a type system for RelTT, paving the way for implementation. ## I Introduction Modern constructive type theories have long wish lists of features, from inductive and coinductive types, to type-specific extensionality principles, quotient types, higher-order datatypes, and more. In tension with this, there are excellent reasons to seek to keep the core type theory small and trustworthy. This has been done, in different ways, for Lean [1] and recently Coq [2]. Both those systems implement (variants of) the Calculus of Inductive Constructions, which lacks type-specific extensionality principles. The present paper proposes Relational Type Theory (RelTT) for deriving expressive type constructs, with type-specific extensionality principles, from a formally small core theory.
The approach followed is, to the authors’ knowledge, novel: RelTT is based on a semantics for types as binary relations on untyped terms. For example, the semantics for a function type $R\to R^{\prime}$ makes it the set of pairs of terms $(t_{1},t_{2})$ that jointly map inputs related by the meaning of $R$ to outputs related by the meaning of $R^{\prime}$ (the semantics familiar from the field of logical relations). The notion that a term “is” a function is expressed only by saying that it is related to itself at function type. So relations between terms are the primary concern of the theory, and expression of program behavior in isolation (i.e., traditional typing) is secondary. This commitment to relational semantics leads us in an unexplored direction: we extend the set of type constructs with relational constructs. We may use these to express asymmetric relations, which are crucial for developing reasoning principles, like induction principles, within the theory. Interestingly, dependent types are unnecessary for this. The relational semantics already gives us a form of dependency which is all we need for inductive reasoning about terms. So RelTT is an extension, with relational type constructs, of System F, not the Calculus of Constructions. Avoiding dependent types notably simplifies the semantics. The power of System F is needed because the terms that the theory (relationally) types are those of pure lambda calculus, so we adopt impredicative lambda encodings to represent inductive types. The contributions of the paper are: * • The syntax and semantics of relational types (Section II) * • Basic properties of the semantics, crucially including $\beta\eta$-closure (Section III) * • Interesting derived type forms and examples (Section IV), and basic type-specific extensionality principles (Section V). * • Classes of types whose interpretations are proved to be, respectively, symmetric (Section VI) and transitive (Section VII).
The proof of transitivity crucially relies on a novel theorem dubbed Identity Inclusion (Lemma 32). The intricate proof of this makes use of duality between types where all quantifiers occur only positively ($\forall^{+}$ types), and ones where they occur only negatively. * • Derivation of induction principles from types for Church-encodings (Section X). This covers any inductive type definable by a type scheme which is positive and, due to a critical use of Identity Inclusion, $\forall^{+}$. Positive-recursive types are also derived (Section XI). * • A proof system called RelPf (Section VIII) and type system RelTy (Section XII), which are proven sound with respect to the semantics, and are intended as the starting point for implementation of RelTT as a proof assistant. We will reference lemmas and theorems by name, with the theorem number following in braces (e.g., “$\beta\eta$-Closure {2}”). We will label assumptions and proven local facts with numbers (e.g., “(1)”), and goals with capital letters (e.g., “(A)”). ## II Relational types and their semantics $\begin{array}[]{lll}\textit{terms }t&::=&x\ |\ \lambda\,x.\,t\ |\ t\,t^{\prime}\\\ \textit{types }R&::=&X\ |\ R\to R^{\prime}\ |\ \forall\,X.\,R\ |\ R^{\cup}\ |\ R\cdot R^{\prime}\ |\ t\end{array}$ Figure 1: Syntax for relational types ($X$ ranges over type variables) The syntax of relational types $R$ is given in Figure 1. Terms $t$ are those of pure untyped lambda calculus. Relational type constructs include those of System F, plus $R^{\cup}$ for converse of a relation, $R\cdot R^{\prime}$ for composition of relations, and promotion of terms $t$ to relations, to be explained shortly. Usual parsing precedences from type theory are followed; additionally, $R^{\cup}$ binds most tightly, $R\cdot R^{\prime}$ second most tightly, and the other constructs after these. 
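For readers who prefer code, the grammar of Figure 1 transcribes directly into an abstract syntax tree. The following Python sketch is our own illustration (the constructor names and the printer are not part of RelTT); the printer encodes the parsing precedences just stated.

```python
# Sketch (ours, not the paper's) of the relational-type grammar of Figure 1
# as an abstract syntax tree, with a printer following the stated
# precedences: converse tightest, then composition, then arrow/quantifier.
from dataclasses import dataclass

class Type:
    pass

@dataclass(frozen=True)
class Var(Type):      # type variable X
    name: str

@dataclass(frozen=True)
class Promote(Type):  # promotion of a term t to a relation
    term: str         # terms kept abstract as strings in this sketch

@dataclass(frozen=True)
class Conv(Type):     # converse R^u
    arg: Type

@dataclass(frozen=True)
class Comp(Type):     # composition R . R'
    left: Type
    right: Type

@dataclass(frozen=True)
class Arrow(Type):    # R -> R'
    dom: Type
    cod: Type

@dataclass(frozen=True)
class Forall(Type):   # forall X. R
    var: str
    body: Type

def show(r: Type, ctx: int = 0) -> str:
    """Render with minimal parentheses; ctx is the precedence demanded by
    the surrounding context (higher = tighter). Nested compositions are
    parenthesized; this is harmless since composition is associative."""
    if isinstance(r, Var):
        s, p = r.name, 4
    elif isinstance(r, Promote):
        s, p = r.term, 4
    elif isinstance(r, Conv):
        s, p = show(r.arg, 4) + "^u", 3
    elif isinstance(r, Comp):
        s, p = show(r.left, 3) + " . " + show(r.right, 3), 2
    elif isinstance(r, Arrow):
        s, p = show(r.dom, 2) + " -> " + show(r.cod, 1), 1
    else:  # Forall
        s, p = "forall " + r.var + ". " + show(r.body, 0), 0
    return "(" + s + ")" if p < ctx else s
```

For instance, `show(Forall("X", Arrow(Var("X"), Var("X"))))` yields `forall X. X -> X`, while `show(Conv(Comp(Var("R"), Var("S"))))` yields `(R . S)^u`, reflecting that converse binds more tightly than composition.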
We also follow the usual convention that distinct meta-variables ranging over variables denote distinct variables (so $x$ and $y$ denote different variables), and treat terms and types up to $\alpha$-equivalence. Capture-avoiding substitution of $R$ for $X$ in $R^{\prime}$ is denoted $[R/X]R^{\prime}$ (similarly $[t/x]t^{\prime}$ for terms). The set of free variables of any syntactic entity $e$ is denoted $\textit{FV}(e)$. The obvious definitions of these syntactic notions are omitted. ###### Definition 1. A relation $r$ on terms of pure untyped $\lambda$-calculus is $\beta\eta$-closed iff $t_{1}\ [r]\ t_{2}$, $t_{1}^{\prime}=_{\beta\eta}t_{1}$, and $t_{2}^{\prime}=_{\beta\eta}t_{2}$ imply $t_{1}^{\prime}\ [r]\ t_{2}^{\prime}$. Write $\mathcal{R}$ for the set of all such relations, and use meta-variable $r$ to range over $\mathcal{R}$. The relational semantics of types is defined in Figure 2, where environment $\gamma$ is a function mapping a finite set of type variables to elements of $\mathcal{R}$. We use infix notation for application of a relation to a pair of terms and, following [3], we sometimes put square brackets around the relation for readability; for example, in the three equations at the bottom of the figure. In those equations, operators like “$\to$” on the right-hand sides have their standard meaning in the background meta-logic. The interpretation $\llbracket R\rrbracket_{\gamma}$ is defined iff $\gamma$ is defined for all free type variables of $R$. When referencing $\llbracket R\rrbracket_{\gamma}$ in theorems, we assume it is defined. The semantics extends $\gamma$ from type variables to arbitrary types. Promotion of term $t$ is the graph of the meta-level operation mapping $t^{\prime}$ to $t\ t^{\prime}$. Many examples are below.
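Ignoring $\beta\eta$-closure, the operators interpreting $R^{\cup}$, $R\cdot R^{\prime}$, and term promotion behave like ordinary operations on binary relations. The Python sketch below is our own finite approximation over sets of pairs, not the actual model over lambda terms; it checks instances of equations established later (Symmetry Properties {3}, Relational Laws {5}, and a Deapplication-style equation).

```python
# Finite-relation analogues (ours, not the paper's) of converse,
# composition, and promotion-as-graph. Beta-eta-closure is ignored:
# relations are plain Python sets of pairs over a small carrier.

def conv(r):
    """Relational converse R^u."""
    return {(b, a) for (a, b) in r}

def comp(r1, r2):
    """Relational composition R . R'."""
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

def promote(f, carrier):
    """Graph of f on the carrier: the analogue of promoting a term."""
    return {(x, f(x)) for x in carrier}

r1 = {(1, 2), (2, 3)}
r2 = {(2, 4), (3, 5)}
r3 = {(4, 0), (5, 1)}

# converse of a composition (cf. Symmetry Properties {3}, part 3):
assert conv(comp(r1, r2)) == comp(conv(r2), conv(r1))
# composition is associative (cf. Relational Laws {5}, part 1):
assert comp(r1, comp(r2, r3)) == comp(comp(r1, r2), r3)
# a Deapplication-style instance: x relates to y at (promote f) . r
# exactly when (f(x), y) is in r.
f = lambda x: x + 1
hat = promote(f, {1, 2})
assert comp(hat, r2) == {(x, y) for x in {1, 2} for (fx, y) in r2 if fx == f(x)}
```

Here `promote(f, carrier)` plays the role of the promotion $\hat{t}$, restricted to a finite carrier.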
$\begin{array}[]{lll}\llbracket X\rrbracket_{\gamma}&=&\gamma(X)\\\ \llbracket R\to R^{\prime}\rrbracket_{\gamma}&=&\llbracket R\rrbracket_{\gamma}\to\llbracket R^{\prime}\rrbracket_{\gamma}\\\ \llbracket\forall\,X.\,R\rrbracket_{\gamma}&=&\bigcap_{r\in\mathcal{R}}\,\llbracket R\rrbracket_{\gamma[X\mapsto r]}\\\ \llbracket R^{\cup}\rrbracket_{\gamma}&=&\llbracket R\rrbracket_{\gamma}^{\cup}\\\ \llbracket R\cdot R^{\prime}\rrbracket_{\gamma}&=&\llbracket R\rrbracket_{\gamma}\cdot\llbracket R^{\prime}\rrbracket_{\gamma}\\\ \llbracket\hat{t}\rrbracket_{\gamma}&=&\\{(t,t^{\prime})\ |\ \hat{t}\,t\,=_{\beta\eta}\,t^{\prime}\\}\\\ \\\ \lx@intercol\textit{where:}\hfil\lx@intercol\\\ t\,[r_{1}\to r_{2}]\,t^{\prime}&=&\forall\,a.\,\forall\,a^{\prime}.\,a\,[r_{1}]\,a^{\prime}\to t\,a\,[r_{2}]\,t^{\prime}\,a^{\prime}\\\ t\,[r^{\cup}]\,t^{\prime}&=&t^{\prime}\,[r]\,t\\\ t\,[r_{1}\cdot r_{2}]\,t^{\prime}&=&\exists\,t^{\prime\prime}.\,t\,[r_{1}]\,t^{\prime\prime}\,\wedge\,t^{\prime\prime}\,[r_{2}]\,t^{\prime}\par\end{array}$ Figure 2: Semantics for relational types; relational operators $\to$, ∪, and $\cdot$ While the semantics of universal types quantifies (at the meta-level) over all relations in $\mathcal{R}$, we will restrict ourselves in all examples below to instantiating such quantifiers only with definable relations (i.e., ones of the form $\llbracket R\rrbracket_{\gamma}$). In Sections VIII and XII below, we will consider deductive systems for RelTT where this restriction will be enforced. ## III Basic properties ###### Lemma 2 ($\beta\eta$-Closure). $\llbracket R\rrbracket_{\gamma}\in\mathcal{R}$. ###### Proof. The proof is by induction on $R$. Suppose $t_{1}=_{\beta\eta}t_{1}^{\prime}$ and $t_{2}=_{\beta\eta}t_{2}^{\prime}$, assume (1) $t_{1}\ \llbracket R\rrbracket_{\gamma}\ t_{2}$, and show $t_{1}^{\prime}\ \llbracket R\rrbracket_{\gamma}\ t_{2}^{\prime}$. Case $X$: $\gamma(X)\in\mathcal{R}$ by specification of $\gamma$. 
Case $R\to R^{\prime}$: assume (2) $a\ \llbracket R\rrbracket_{\gamma}\ a^{\prime}$, and show $t_{1}^{\prime}\ a\ \llbracket R^{\prime}\rrbracket_{\gamma}\ t_{2}^{\prime}\ a^{\prime}$. From (1) and (2) we have $t_{1}\ a\ \llbracket R^{\prime}\rrbracket_{\gamma}\ t_{2}\ a^{\prime}$. From this, the IH gives us the required conclusion, as $t_{1}\ a=_{\beta\eta}t_{1}^{\prime}\ a$ and $t_{2}\ a^{\prime}=_{\beta\eta}t_{2}^{\prime}\ a^{\prime}$. Case $\forall\,X.\,R$: assume $r\in\mathcal{R}$, and show $t_{1}^{\prime}\ \llbracket R\rrbracket_{\gamma[X\mapsto r]}\ t_{2}^{\prime}$. By (1), we have $t_{1}\ \llbracket R\rrbracket_{\gamma[X\mapsto r]}\ t_{2}$, from which the IH then yields the desired conclusion. Case $R^{\cup}$: this follows by the IH (using symmetry of $=_{\beta\eta}$). Case $R\cdot R^{\prime}$: from (1), there exists $t$ such that $t_{1}\ \llbracket R\rrbracket_{\gamma}\ t$ and $t\ \llbracket R^{\prime}\rrbracket_{\gamma}\ t_{2}$. By the IH, $t_{1}^{\prime}\ \llbracket R\rrbracket_{\gamma}\ t$ and $t\ \llbracket R^{\prime}\rrbracket_{\gamma}\ t_{2}^{\prime}$. These imply the desired conclusion. Case $t$: from (1), we have $t\ t_{1}=_{\beta\eta}t_{2}$; $t\ t_{1}^{\prime}=_{\beta\eta}t_{2}^{\prime}$ then follows. ∎ ###### Lemma 3 (Symmetry Properties). 1. 1. $\llbracket(\forall\,X.\,R)^{\cup}\rrbracket_{\gamma}=\llbracket\forall\,X.\,R^{\cup}\rrbracket_{\gamma}$ 2. 2. $\llbracket(R_{1}\to R_{2})^{\cup}\rrbracket_{\gamma}=\llbracket R_{1}^{\cup}\to R_{2}^{\cup}\rrbracket_{\gamma}$ 3. 3. $\llbracket(R_{1}\cdot R_{2})^{\cup}\rrbracket_{\gamma}=\llbracket R_{2}^{\cup}\cdot R_{1}^{\cup}\rrbracket_{\gamma}$ ###### Proof. (1): assume $t\ \llbracket(\forall\,X.\,R)^{\cup}\rrbracket_{\gamma}\ t^{\prime}$, and hence $t^{\prime}\ \llbracket(\forall\,X.\,R)\rrbracket_{\gamma}\ t$. For any $r\in\mathcal{R}$, $t^{\prime}\ \llbracket R\rrbracket_{\gamma[X\mapsto r]}\ t$, hence $t\ \llbracket R^{\cup}\rrbracket_{\gamma[X\mapsto r]}\ t^{\prime}$. 
From this, $t\ \llbracket\forall\,X.\,R^{\cup}\rrbracket_{\gamma}\ t^{\prime}$ as required. Conversely, assume $t\ \llbracket\forall\,X.\,R^{\cup}\rrbracket_{\gamma}\ t^{\prime}$, and $r\in\mathcal{R}$. Then $t\ \llbracket R^{\cup}\rrbracket_{\gamma[X\mapsto r]}\ t^{\prime}$, hence $t^{\prime}\ \llbracket R\rrbracket_{\gamma[X\mapsto r]}\ t$. From this, $t^{\prime}\ \llbracket\forall\,X.\,R\rrbracket_{\gamma}\ t$, hence the required $t\ \llbracket(\forall\,X.\,R)^{\cup}\rrbracket_{\gamma}\ t^{\prime}$. (2): Assume $t\ \llbracket(R_{1}\to R_{2})^{\cup}\rrbracket_{\gamma}\ t^{\prime}$ and $a\ \llbracket R_{1}^{\cup}\rrbracket_{\gamma}\ a^{\prime}$. From these, $t^{\prime}\ \llbracket(R_{1}\to R_{2})\rrbracket_{\gamma}\ t$ and $a^{\prime}\ \llbracket R_{1}\rrbracket_{\gamma}\ a$, which yield $t^{\prime}\ a^{\prime}\ \llbracket R_{2}\rrbracket_{\gamma}\ t\ a$. Thus, $t\ a\ \llbracket R_{2}^{\cup}\rrbracket_{\gamma}\ t^{\prime}\ a^{\prime}$ as required. Conversely, assume $t\ \llbracket R_{1}^{\cup}\to R_{2}^{\cup}\rrbracket_{\gamma}\ t^{\prime}$ and $a\ \llbracket R_{1}\rrbracket_{\gamma}\ a^{\prime}$. From the latter, $a^{\prime}\ \llbracket R_{1}^{\cup}\rrbracket_{\gamma}\ a$, so $t\ a^{\prime}\ \llbracket R_{2}^{\cup}\rrbracket_{\gamma}\ t^{\prime}\ a$. From this, $t^{\prime}\ a\ \llbracket R_{2}\rrbracket_{\gamma}\ t\ a^{\prime}$, as required. (3): assume $t\ \llbracket(R_{1}\cdot R_{2})^{\cup}\rrbracket_{\gamma}\ t^{\prime}$, hence $t^{\prime}\ \llbracket R_{1}\cdot R_{2}\rrbracket_{\gamma}\ t$. So there exists $t^{\prime\prime}$ with $t^{\prime}\ \llbracket R_{1}\rrbracket_{\gamma}\ t^{\prime\prime}$ and $t^{\prime\prime}\ \llbracket R_{2}\rrbracket_{\gamma}\ t$. From these, $t^{\prime\prime}\ \llbracket R_{1}^{\cup}\rrbracket_{\gamma}\ t^{\prime}$ and $t\ \llbracket R_{2}^{\cup}\rrbracket_{\gamma}\ t^{\prime\prime}$; thus, $t\ \llbracket R_{2}^{\cup}\cdot R_{1}^{\cup}\rrbracket_{\gamma}\ t^{\prime}$.
Conversely, assume $t\ \llbracket R_{2}^{\cup}\cdot R_{1}^{\cup}\rrbracket_{\gamma}\ t^{\prime}$. So there exists $t^{\prime\prime}$ with $t\ \llbracket R_{2}^{\cup}\rrbracket_{\gamma}\ t^{\prime\prime}$ and $t^{\prime\prime}\ \llbracket R_{1}^{\cup}\rrbracket_{\gamma}\ t^{\prime}$. From these, $t^{\prime\prime}\ \llbracket R_{2}\rrbracket_{\gamma}\ t$ and $t^{\prime}\ \llbracket R_{1}\rrbracket_{\gamma}\ t^{\prime\prime}$. So $t\ \llbracket(R_{1}\cdot R_{2})^{\cup}\rrbracket_{\gamma}\ t^{\prime}$. ∎ ###### Lemma 4 (Deapplication). 1. 1. $t_{1}\ \llbracket t\cdot R\rrbracket_{\gamma}\ t_{2}=t\ t_{1}\ \llbracket R\rrbracket_{\gamma}\ t_{2}$ 2. 2. $t_{1}\ \llbracket R\cdot t^{\cup}\rrbracket_{\gamma}\ t_{2}=t_{1}\ \llbracket R\rrbracket_{\gamma}\ t\ t_{2}$ ###### Proof. For the first fact: first, assume $t_{1}\ \llbracket t\cdot R\rrbracket_{\gamma}\ t_{2}$. The semantics gives $t^{\prime}$ such that (1) $t_{1}\ \llbracket t\rrbracket_{\gamma}\ t^{\prime}$ and (2) $t^{\prime}\ \llbracket R\rrbracket_{\gamma}\ t_{2}$. But (1) is equivalent to $t\ t_{1}=_{\beta\eta}t^{\prime}$. Applying $\beta\eta$-Closure {2}, $t\ t_{1}\ \llbracket R\rrbracket_{\gamma}\ t_{2}$ as required. Next, assume $t\ t_{1}\ \llbracket R\rrbracket_{\gamma}\ t_{2}$. Then there is a $t^{\prime}$, namely $t\ t_{1}$, such that $t_{1}\ \llbracket t\rrbracket_{\gamma}\ t^{\prime}$ and $t^{\prime}\ \llbracket R\rrbracket_{\gamma}\ t_{2}$. Hence $t_{1}\ \llbracket t\cdot R\rrbracket_{\gamma}\ t_{2}$ as required. For the second: assuming $t_{1}\ \llbracket R\cdot t^{\cup}\rrbracket_{\gamma}\ t_{2}$, the semantics gives $t^{\prime}$ such that (1) $t_{1}\ \llbracket R\rrbracket_{\gamma}\ t^{\prime}$ and (2) $t^{\prime}\ \llbracket t^{\cup}\rrbracket_{\gamma}\ t_{2}$. But (2) is equivalent to $t\ t_{2}=_{\beta\eta}t^{\prime}$. Applying $\beta\eta$-Closure {2}, $t_{1}\ \llbracket R\rrbracket_{\gamma}\ t\ t_{2}$ as required. Next, assume $t_{1}\ \llbracket R\rrbracket_{\gamma}\ t\ t_{2}$. 
Then there is a $t^{\prime}$, namely $t\ t_{2}$, such that $t_{1}\ \llbracket R\rrbracket_{\gamma}\ t^{\prime}$ and $t^{\prime}\ \llbracket t^{\cup}\rrbracket_{\gamma}\ t_{2}$. Hence $t_{1}\ \llbracket R\cdot t^{\cup}\rrbracket_{\gamma}\ t_{2}$ as required. ∎ We make use of a few definitions for terms in Figure 3. $\begin{array}[]{lll}I&:=&\lambda\,x.\,x\\\ K&:=&\lambda\,x.\,\lambda\,y.\,x\\\ t\circ t^{\prime}&:=&\lambda\,x.\,t\ (t^{\prime}\ x)\end{array}$ Figure 3: Some standard definitions and notations for terms, used below ###### Lemma 5 (Relational Laws). 1. 1. $\llbracket R_{1}\cdot(R_{2}\cdot R_{3})\rrbracket_{\gamma}=\llbracket(R_{1}\cdot R_{2})\cdot R_{3}\rrbracket_{\gamma}$ 2. 2. $\llbracket(R^{\cup})^{\cup}\rrbracket_{\gamma}=\llbracket R\rrbracket_{\gamma}$ 3. 3. $\llbracket R\cdot I\rrbracket_{\gamma}=\llbracket I\cdot R\rrbracket_{\gamma}=\llbracket R\rrbracket_{\gamma}$ ###### Proof. (1) follows from the semantics of $\cdot$ as relational composition, (2) from the semantics of $\cup$ as relational converse, and (3) from Deapplication {4} (applying also $\beta\eta$-Closure {2}). ∎ We may observe that Symmetry Properties {3} part (3) and Relational Laws {5} validate the complement- and union-free axioms of the Calculus of Relations (RelTT omits complement and union) [4]. ###### Lemma 6 (Interpretation Over Substitution). $\llbracket[R/X]R^{\prime}\rrbracket_{\gamma}=\llbracket R^{\prime}\rrbracket_{\gamma[X\mapsto\llbracket R\rrbracket_{\gamma}]}$ ###### Proof. The proof is by induction on $R^{\prime}$. Let $\gamma^{\prime}$ denote $\gamma[X\mapsto\llbracket R\rrbracket_{\gamma}]$. 
Case $X$: $\llbracket[R/X]X\rrbracket_{\gamma}=\llbracket R\rrbracket_{\gamma}=\llbracket X\rrbracket_{\gamma^{\prime}}$ Case $Y$: $\llbracket[R/X]Y\rrbracket_{\gamma}=\gamma(Y)=\llbracket Y\rrbracket_{\gamma^{\prime}}$ Case $R_{1}\to R_{2}$: $\begin{array}[]{l}\llbracket[R/X](R_{1}\to R_{2})\rrbracket_{\gamma}=\\\ \llbracket[R/X]R_{1}\rrbracket_{\gamma}\to\llbracket[R/X]R_{2}\rrbracket_{\gamma}=\\\ \llbracket R_{1}\rrbracket_{\gamma^{\prime}}\to\llbracket R_{2}\rrbracket_{\gamma^{\prime}}=\\\ \llbracket R_{1}\to R_{2}\rrbracket_{\gamma^{\prime}}\end{array}$ Case $\forall\,Y.\,R_{1}$: $\begin{array}[]{l}\llbracket[R/X]\forall\,Y.\,R_{1}\rrbracket_{\gamma}=\\\ \bigcap_{r\in\mathcal{R}}\ \llbracket[R/X]R_{1}\rrbracket_{\gamma[Y\mapsto r]}=\\\ \bigcap_{r\in\mathcal{R}}\ \llbracket R_{1}\rrbracket_{\gamma^{\prime}[Y\mapsto r]}=\\\ \llbracket\forall\,Y.\,R_{1}\rrbracket_{\gamma^{\prime}}\end{array}$ Case $R_{1}^{\cup}$: $\begin{array}[]{l}\llbracket[R/X](R_{1}^{\cup})\rrbracket_{\gamma}=\\\ \llbracket[R/X]R_{1}\rrbracket_{\gamma}^{\cup}=\\\ \llbracket R_{1}\rrbracket_{\gamma^{\prime}}^{\cup}=\\\ \llbracket R_{1}^{\cup}\rrbracket_{\gamma^{\prime}}\end{array}$ Case $R_{1}\cdot R_{2}$: $\begin{array}[]{l}\llbracket[R/X](R_{1}\cdot R_{2})\rrbracket_{\gamma}=\\\ \llbracket[R/X]R_{1}\rrbracket_{\gamma}\cdot\llbracket[R/X]R_{2}\rrbracket_{\gamma}=\\\ \llbracket R_{1}\rrbracket_{\gamma^{\prime}}\cdot\llbracket R_{2}\rrbracket_{\gamma^{\prime}}=\\\ \llbracket R_{1}\cdot R_{2}\rrbracket_{\gamma^{\prime}}\end{array}$ Case $\hat{t}$: $\llbracket[R/X]\hat{t}\rrbracket_{\gamma}=\llbracket\hat{t}\rrbracket_{\gamma}=\llbracket\hat{t}\rrbracket_{\gamma^{\prime}}$ ∎ ###### Lemma 7 (Environment Extension). 1. 1. If $X\not\in\textit{FV}(R)$, then $\llbracket R\rrbracket_{\gamma[X\mapsto r]}=\llbracket R\rrbracket_{\gamma}=\llbracket[X/Y]R\rrbracket_{\gamma[X\mapsto\gamma(Y)]}$ 2. 2. If $R$ is closed, then $\llbracket R\rrbracket_{\gamma}=\llbracket R\rrbracket_{\gamma^{\prime}}$.
###### Proof. The first fact is by an obvious induction on $R$. The second follows by iterating the first one to shrink $\gamma$ to the empty environment, and then build it back up to $\gamma^{\prime}$ (recall that environments map a finite set of type variables). ∎ ## IV Basic examples and definitions ###### Lemma 8 (Identity). $I\ \llbracket X\to X\rrbracket_{\gamma}\ I$ ###### Proof. Assume (1) $t\ [\gamma(X)]\ t^{\prime}$ and show $I\ t\ [\gamma(X)]\ I\ t^{\prime}$. But this follows from (1) by $\beta\eta$-Closure {2}. ∎ ###### Definition 9. 1. 1. $[t]R:=(K\ t)\cdot R$ 2. 2. $R[t]:=R\cdot(K\ t)^{\cup}$ We can express within the theory the property of being related to term $t$ by $R$ with the relational types $[t]R$ and $R[t]$. In particular, this gives us a form of internalized typing: for example, we may use the type $[I]\forall\,X.\,X\to X[I]$ to express the property that $I$ has the expected polymorphic type. These notations are to be parsed with highest precedence. ###### Lemma 10 (Internalized Typing). 1. 1. $t_{1}\ \llbracket[t]R\rrbracket_{\gamma}\ t_{2}=t\ \llbracket R\rrbracket_{\gamma}\ t_{2}$ 2. 2. $t_{1}\ \llbracket R[t]\rrbracket_{\gamma}\ t_{2}=t_{1}\ \llbracket R\rrbracket_{\gamma}\ t$ ###### Proof. For (1), use $\beta\eta$-Closure {2}: $(t_{1}\ \llbracket[t]R\rrbracket_{\gamma}\ t_{2})=(K\ t\ t_{1}\ \llbracket R\rrbracket_{\gamma}\ t_{2})=(t\ \llbracket R\rrbracket_{\gamma}\ t_{2})$ For (2), use $\beta\eta$-Closure {2} and also Deapplication {4}: $(t_{1}\ \llbracket R[t]\rrbracket_{\gamma}\ t_{2})=(t_{1}\ \llbracket R\rrbracket_{\gamma}\ K\ t\ t_{2})=(t_{1}\ \llbracket R\rrbracket_{\gamma}\ t)$ ∎ The following operations are reminiscent of conjugation in group theory: ###### Definition 11. $t_{1}.R.t_{2}:=t_{1}\cdot R\cdot t_{2}^{\cup}$ ###### Definition 12. $t\mathbin{\ast}R:=t.R.t$ ###### Lemma 13 (Conjugation). 1. 1. $t_{1}\ \llbracket t.R.t^{\prime}\rrbracket_{\gamma}\ t_{2}=t\ t_{1}\ \llbracket R\rrbracket_{\gamma}\ t^{\prime}\ t_{2}$. 2. 
2. $t_{1}\ \llbracket t\mathbin{\ast}R\rrbracket_{\gamma}\ t_{2}=t\ t_{1}\ \llbracket R\rrbracket_{\gamma}\ t\ t_{2}$. ###### Proof. Apply Deapplication {4}. ∎ We may internalize inclusion of relations as a type, using term promotions: ###### Definition 14. $R\subseteq R^{\prime}:=(K\ I)\mathbin{\ast}(R\to R^{\prime})$ ###### Lemma 15 (Subset). $t_{1}\ \llbracket R\subseteq R^{\prime}\rrbracket_{\gamma}\ t_{2}$ iff $\llbracket R\rrbracket_{\gamma}\subseteq\llbracket R^{\prime}\rrbracket_{\gamma}$. ###### Proof. Making use of Conjugation {13}, deduce $\begin{array}[]{l}t_{1}\ \llbracket R\subseteq R^{\prime}\rrbracket_{\gamma}\ t_{2}=\\\ K\ I\ t_{1}\ \llbracket R\to R^{\prime}\rrbracket_{\gamma}\ K\ I\ t_{2}=\\\ I\ \llbracket R\to R^{\prime}\rrbracket_{\gamma}\ I\end{array}$ The semantics (Figure 2) states that this latter relational typing is true in environment $\gamma$ iff for all $(x,x^{\prime})\in\llbracket R\rrbracket_{\gamma}$, $(I\ x,I\ x^{\prime})\in\llbracket R^{\prime}\rrbracket_{\gamma}$, which by $\beta\eta$-Closure {2} is equivalent to $(x,x^{\prime})\in\llbracket R^{\prime}\rrbracket_{\gamma}$. ∎ Term promotions also enable us to derive implicit products [5]. In traditional type theories, implicit products are used to express quantifications without corresponding $\lambda$-abstractions in the subject. One may think of them as describing specificational (or “ghost”) inputs to functions. In RelTT, we express this by stating that the subject has a function type but erases its input; i.e., it is of the form $K\,t$ for some $t$. ###### Definition 16. $R\Rightarrow R^{\prime}:=K\mathbin{\ast}(R\to R^{\prime})$ Note in the following theorem the essential feature of implicit products: we conclude by relating (with $R^{\prime}$) just $t_{1}$ and $t_{2}$, not their applications to $x$ and $x^{\prime}$ respectively. ###### Lemma 17 (Implicit Product). 
$t_{1}\ \llbracket R\Rightarrow R^{\prime}\rrbracket_{\gamma}\ t_{2}$ iff for all $(x,x^{\prime})\in\llbracket R\rrbracket_{\gamma}$, $t_{1}\ \llbracket R^{\prime}\rrbracket_{\gamma}\ t_{2}$. ###### Proof. $\begin{array}[]{l}t_{1}\ \llbracket R\Rightarrow R^{\prime}\rrbracket_{\gamma}\ t_{2}=\\\ K\ t_{1}\ \llbracket R\to R^{\prime}\rrbracket_{\gamma}\ K\ t_{2}\end{array}$ And the latter holds iff for all $(x,x^{\prime})\in\llbracket R\rrbracket_{\gamma}$, $K\ t_{1}\ x\ \llbracket R^{\prime}\rrbracket_{\gamma}\ K\ t_{2}\ x^{\prime}$. By $\beta\eta$-Closure {2}, this is equivalent to $t_{1}\ \llbracket R^{\prime}\rrbracket_{\gamma}\ t_{2}$. ∎ Finally, using internalized inclusion, we may neatly express equality of relations as a type: ###### Definition 18. $R\topdoteq R^{\prime}:=(R\subseteq R^{\prime})\cdot(R^{\prime}\subseteq R)$ ###### Lemma 19 (Relational Equality). $t_{1}\ \llbracket R\topdoteq R^{\prime}\rrbracket_{\gamma}\ t_{2}$ iff $\llbracket R\rrbracket_{\gamma}=\llbracket R^{\prime}\rrbracket_{\gamma}$. ###### Proof. First, suppose $t_{1}\ \llbracket R\topdoteq R^{\prime}\rrbracket_{\gamma}\ t_{2}$. Then by semantics of composition, there exists some $t$ such that * • $t_{1}\ \llbracket R\subseteq R^{\prime}\rrbracket_{\gamma}\ t$, and * • $t\ \llbracket R^{\prime}\subseteq R\rrbracket_{\gamma}\ t_{2}$. Applying Subset {15}, these facts are equivalent to * • $\llbracket R\rrbracket_{\gamma}\subseteq\llbracket R^{\prime}\rrbracket_{\gamma}$, and * • $\llbracket R^{\prime}\rrbracket_{\gamma}\subseteq\llbracket R\rrbracket_{\gamma}$. This proves the two relations are equal. Next, suppose $\llbracket R\rrbracket_{\gamma}=\llbracket R^{\prime}\rrbracket_{\gamma}$. Then similarly, applying Subset {15}, we may arbitrarily choose $I$ for $t$ to satisfy * • $t_{1}\ \llbracket R\subseteq R^{\prime}\rrbracket_{\gamma}\ t$, and * • $t\ \llbracket R^{\prime}\subseteq R\rrbracket_{\gamma}\ t_{2}$. which suffices, again by the semantics of composition.
∎ ###### Lemma 20 (Substitutivity Of Relational Equality). If $t_{1}\ \llbracket R\topdoteq R^{\prime}\rrbracket_{\gamma}\ t_{2}$, then $\llbracket[R/X]R^{\prime\prime}\rrbracket_{\gamma}=\llbracket[R^{\prime}/X]R^{\prime\prime}\rrbracket_{\gamma}$ ###### Proof. The proof is by induction on $R^{\prime\prime}$, making use of Environment Extension {7} as we induct on the bodies of universal types (in extended environments). We omit the details, as all cases are obvious thanks to the compositionality of the semantics (Figure 2). ∎ ## V Extensionality principles We prove a few examples of standard type-specific extensionality principles. ###### Lemma 21 ($\eta$-Unit). If $t\,\llbracket\forall\,X.\,X\to X\rrbracket_{\gamma}\,t^{\prime}$, then $t\,\llbracket\forall\,X.\,X\to X\rrbracket_{\gamma}\,I$. ###### Proof. Assume (1) $t\,\llbracket\forall\,X.\,X\to X\rrbracket_{\gamma}\,t^{\prime}$. Next, assume $r\in\mathcal{R}$ with $y\,[r]\,y^{\prime}$. Instantiate (1) with $\llbracket X[y^{\prime}]\rrbracket_{[X\mapsto r]}$ (note this is a definable relation) to get $t\,\llbracket X\to X\rrbracket_{\gamma[X\mapsto\llbracket X[y^{\prime}]\rrbracket_{[X\mapsto r]}]}\,t^{\prime}$ Simplifying using Interpretation Over Substitution {6} and also Environment Extension {7}, this gives us $t\,\llbracket X[y^{\prime}]\to X[y^{\prime}]\rrbracket_{\gamma[X\mapsto r]}\,t^{\prime}$ We may apply this to $y\,[X[y^{\prime}]]\,y^{\prime}$ which we have from the assumption $y\,[r]\,y^{\prime}$ by Internalized Typing {10}. This application yields $t\,y\,\llbracket X[y^{\prime}]\rrbracket_{\gamma[X\mapsto r]}\,t^{\prime}\,y^{\prime}$ Again applying Internalized Typing {10}, this gives us $t\,y\,[r]\,y^{\prime}$, as required. ∎ ###### Definition 22.
$\begin{array}[]{lll}R\times R^{\prime}&:=&\forall\,X.\,(R\to R^{\prime}\to X)\to X\\\ \textit{pair}&:=&\lambda\,x.\,\lambda\,y.\,\lambda\,c.\,c\,x\,y\\\ (t,t^{\prime})&:=&\textit{pair}\ t\,t^{\prime}\\\ t.1&:=&t\ \lambda\,x.\,\lambda\,y.\,x\\\ t.2&:=&t\ \lambda\,x.\,\lambda\,y.\,y\end{array}$ ###### Lemma 23 (Surjective Pairing). If $t\,\llbracket R\times R^{\prime}\rrbracket_{\gamma}\,t^{\prime}$, then $(t.1,t.2)\,\llbracket R\times R^{\prime}\rrbracket_{\gamma}\,t^{\prime}$ ###### Proof. Assume (1) $t\,[R\times R^{\prime}]\,t^{\prime}$. Then assume $r\in\mathcal{R}$ and (2) $c\,\llbracket R\to R^{\prime}\to X\rrbracket_{\gamma[X\mapsto r]}\,c^{\prime}$, and show $\textit{pair}\,(t.1)\,(t.2)\,c\,[r]\,t^{\prime}\,c^{\prime}$ (A) Instantiate (1) with $\llbracket\lambda\,x.\,x\,c\cdot X\rrbracket_{[X\mapsto r]}$ (note this is a definable relation). Then (A) follows from $\textit{pair}\,\llbracket R\to R^{\prime}\to X\rrbracket_{\gamma[X\mapsto\llbracket\lambda\,x.\,x\,c\cdot X\rrbracket_{[X\mapsto r]}]}\,c^{\prime}$ (B) Let us apply Environment Extension {7} implicitly to simplify environments. To prove (B), assume (3) $r_{1}\,\llbracket R\rrbracket_{\gamma}\,r_{1}^{\prime}$ and (4) $r_{2}\,\llbracket R^{\prime}\rrbracket_{\gamma}\,r_{2}^{\prime}$, and show $\textit{pair}\ r_{1}\,r_{2}\llbracket\lambda\,x.\,x\,c\cdot X\rrbracket_{[X\mapsto r]}\,c^{\prime}\,r_{1}^{\prime}\,r_{2}^{\prime}$ By Deapplication {4}, this is equivalent to $\textit{pair}\ r_{1}\,r_{2}\,c\,[r]\,c^{\prime}\,r_{1}^{\prime}\,r_{2}^{\prime}$ By $\beta\eta$-Closure {2}, this is equivalent to $c\,r_{1}\,r_{2}[r]\,c^{\prime}\,r_{1}^{\prime}\,r_{2}^{\prime}$ This follows from (2), (3), and (4) by the semantics. ∎ ## VI Symmetric types ###### Definition 24. Call a type _symmetric_ iff it does not use $R\cdot R^{\prime}$, and every occurrence of a promotion of a term $t$ either has $t=_{\beta\eta}I$ or occurs as $t$ in subexpressions of the form $t\mathbin{\ast}R$. 
Use $S$ as a metavariable for symmetric types. ###### Definition 25. $\gamma^{\cup}(X)=(\gamma(X))^{\cup}$; i.e., the converse of relation $\gamma(X)$. ###### Lemma 26. $(\gamma^{\cup})^{\cup}=\gamma$. ###### Theorem 27 (Symmetric types). $\llbracket S\rrbracket_{\gamma}=\llbracket S^{\cup}\rrbracket_{\gamma^{\cup}}$ ###### Proof. The proof is by induction on $S$. Case $X$: $\llbracket X\rrbracket_{\gamma}=\gamma(X)=(\gamma^{\cup})^{\cup}(X)=\llbracket X^{\cup}\rrbracket_{\gamma^{\cup}}$. Case $S\to S^{\prime}$: assume $t\ \llbracket S\to S^{\prime}\rrbracket_{\gamma}\ t^{\prime}$. To show $t^{\prime}\ \llbracket S\to S^{\prime}\rrbracket_{\gamma^{\cup}}\ t$, assume $a\ \llbracket S\rrbracket_{\gamma^{\cup}}\ a^{\prime}$. By the IH, $a^{\prime}\ \llbracket S\rrbracket_{\gamma}\ a$, so $t\ a^{\prime}\ \llbracket S^{\prime}\rrbracket_{\gamma}\ t^{\prime}\ a$. By the IH again, $t^{\prime}\ a\ \llbracket S^{\prime}\rrbracket_{\gamma^{\cup}}\ t\ a^{\prime}$, as required. Conversely, assume $t\ \llbracket S\to S^{\prime}\rrbracket_{\gamma^{\cup}}\ t^{\prime}$, and assume $a\ \llbracket S\rrbracket_{\gamma}\ a^{\prime}$. By the IH, $a^{\prime}\ \llbracket S\rrbracket_{\gamma^{\cup}}\ a$, so $t\ a^{\prime}\ \llbracket S^{\prime}\rrbracket_{\gamma^{\cup}}\ t^{\prime}\ a$. By the IH again, $t^{\prime}\ a\ \llbracket S^{\prime}\rrbracket_{\gamma}\ t\ a^{\prime}$, as required. Case $\forall\,X.\,S$: assume $t\ \llbracket\forall\,X.\,S\rrbracket_{\gamma}\ t^{\prime}$, and $r\in\mathcal{R}$. So $t\ \llbracket S\rrbracket_{\gamma[X\mapsto r^{\cup}]}\ t^{\prime}$, and by the IH, $t^{\prime}\ \llbracket S\rrbracket_{\gamma^{\cup}[X\mapsto r]}\ t$, as required. Conversely, assume $t\ \llbracket\forall\,X.\,S\rrbracket_{\gamma^{\cup}}\ t^{\prime}$, and $r\in\mathcal{R}$. So $t\ \llbracket S\rrbracket_{\gamma^{\cup}[X\mapsto r^{\cup}]}\ t^{\prime}$, and by the IH, $t^{\prime}\ \llbracket S\rrbracket_{\gamma[X\mapsto r]}\ t$, as required.
Case $S^{\cup}$: assume $t\ \llbracket S^{\cup}\rrbracket_{\gamma}\ t^{\prime}$. By the IH, $t\ \llbracket S\rrbracket_{\gamma^{\cup}}\ t^{\prime}$ as required. Conversely, assume $t\ \llbracket S\rrbracket_{\gamma^{\cup}}\ t^{\prime}$. Then by the IH, $t\ \llbracket S^{\cup}\rrbracket_{\gamma}\ t^{\prime}$, as required. Case $\hat{t}\mathbin{\ast}S$: assume $t\ \llbracket\hat{t}\mathbin{\ast}S\rrbracket_{\gamma}\ t^{\prime}$. By Conjugation {13}, this is equivalent to $\hat{t}\ t\ \llbracket S\rrbracket_{\gamma}\ \hat{t}\ t^{\prime}$. By the IH, $\hat{t}\ t^{\prime}\ \llbracket S\rrbracket_{\gamma}\ \hat{t}\ t$, which is then similarly equivalent to the desired typing. The converse follows similarly, applying Symmetry Properties {3} and Relational Laws {5}. Case $\hat{t}=_{\beta\eta}I$: $\begin{array}[]{l}(t\ \llbracket\hat{t}\rrbracket_{\gamma}\ t^{\prime})=(\hat{t}\ t=_{\beta\eta}t^{\prime})=(t=_{\beta\eta}t^{\prime})=(t=_{\beta\eta}\hat{t}\ t^{\prime})=\\\ (t^{\prime}\ \llbracket\hat{t}\rrbracket_{\gamma}\ t)\end{array}$ ∎ ## VII Transitive types ###### Definition 28. Use metavariable $p$ to range over the set $\\{-,+\\}$ of _polarities_. $\bar{p}$ denotes the other polarity from $p$. The following notion extends a similar one due to Krivine [6, Section 8.5], put also to good use in other works like [7]. ###### Definition 29 ($\forall^{p}$). Define a property $\forall^{p}$ of types inductively by the following clauses. Type variables $X$ are $\forall^{p}$. If $R$ is $\forall^{\bar{p}}$ and $R^{\prime}$ is $\forall^{p}$, then $R\to R^{\prime}$ is $\forall^{p}$. If $R$ is $\forall^{+}$ then so is $\forall\,X.\,R$. If $R$ is $\forall^{p}$, then so is $R^{\cup}$. If $t=_{\beta\eta}I$, then the promotion of $t$ to a type is $\forall^{p}$. Note that $\forall^{p}$ types are symmetric types (Definition 24). We let $P$ range over $\forall^{+}$ types, and $N$ over $\forall^{-}$ types. Recall the following fact from classical lambda calculus (e.g., Chapter 7 of [8]). 
###### Lemma 30 (Zeta). If $t\ x=_{\beta\eta}t^{\prime}\ x$ and $x\not\in\textit{FV}(t\,t^{\prime})$, then $t=_{\beta\eta}t^{\prime}$. ###### Proof. From the assumption, deduce $\lambda\,x.\,t\ x=_{\beta\eta}\lambda\,x.\,t^{\prime}\ x$. The sides of this equation are $\eta$-equal to $t$ and $t^{\prime}$, respectively. ∎ ###### Definition 31. Let $e$ denote the environment where $e(X)$ is the relation $=_{\beta\eta}$, for all type variables $X$. As discussed further in Section XIII, RelTT by design does not satisfy Identity Extension (a property proposed originally by Reynolds [9]). The following is a partial refinement: ###### Theorem 32 (Identity Inclusion). 1. 1. $\llbracket P\rrbracket_{e}\subseteq\ =_{\beta\eta}$. 2. 2. $=_{\beta\eta}\ \subseteq\llbracket N\rrbracket_{e}$. ###### Proof. Proceed by induction on the derivation that $R$ is $\forall^{p}$. Case $X\in\forall^{p}$: $\llbracket X\rrbracket_{e}=e(X)$, which is $=_{\beta\eta}$. Case $R\to R^{\prime}\in\forall^{+}$: assume (1) $t\ \llbracket R\to R^{\prime}\rrbracket_{e}\ t^{\prime}$. By Zeta {30}, it suffices to prove $t\ x=_{\beta\eta}t^{\prime}\ x$ for a variable $x\not\in\textit{FV}(t\,t^{\prime})$. Since $R\in\forall^{-}$, the IH applies to $x=_{\beta\eta}x$ to yield $x\ \llbracket R\rrbracket_{e}\ x$. Combining this with (1) gives $t\ x\ \llbracket R^{\prime}\rrbracket_{e}\ t^{\prime}\ x$. Then by the IH, $t\ x=_{\beta\eta}t^{\prime}\ x$, as required. Case $R\to R^{\prime}\in\forall^{-}$: assume (1) $t=_{\beta\eta}t^{\prime}$ and (2) $a\ \llbracket R\rrbracket_{e}\ a^{\prime}$, and show $t\ a\ \llbracket R^{\prime}\rrbracket_{e}\ t^{\prime}\ a^{\prime}$. Since $R\in\forall^{+}$, the IH applies to (2) yielding $a=_{\beta\eta}a^{\prime}$. Combining this with (1) gives $t\ a=_{\beta\eta}t^{\prime}\ a^{\prime}$, from which the IH yields $t\ a\ \llbracket R^{\prime}\rrbracket_{e}\ t^{\prime}\ a^{\prime}$. Case $\forall\,X.\,R\in\forall^{+}$: assume (1) $t\ \llbracket\forall\,X.\,R\rrbracket_{e}\ t^{\prime}$, and show $t=_{\beta\eta}t^{\prime}$. 
From (1), we have $t\ \llbracket R\rrbracket_{e[X\mapsto\ =_{\beta\eta}]}\ t^{\prime}$. By the IH, this yields $t=_{\beta\eta}t^{\prime}$, as required. Case $R^{\cup}\in\forall^{+}$: assume $t\ \llbracket R^{\cup}\rrbracket_{e}\ t^{\prime}$, which implies $t^{\prime}\ \llbracket R\rrbracket_{e}\ t$. By the IH, $t^{\prime}=_{\beta\eta}t$, hence $t=_{\beta\eta}t^{\prime}$ as required. Case $R^{\cup}\in\forall^{-}$: assume $t=_{\beta\eta}t^{\prime}$, hence $t^{\prime}=_{\beta\eta}t$. By the IH, $t^{\prime}\ \llbracket R\rrbracket_{e}\ t$, which equals the required $t\ \llbracket R^{\cup}\rrbracket_{e}\ t^{\prime}$. Case $\hat{t}\in\forall^{p}$: $\llbracket\hat{t}\rrbracket_{e}$ is then just $=_{\beta\eta}$. ∎ Using the terminology of [10], Identity Inclusion {32} identifies $\forall^{+}$ types as _extensive_ (they are included in the equality relation), and $\forall^{-}$ types as _parametric_ (the equality relation is included in them). ###### Lemma 33 (Transitivity For $\forall^{+}$-Types). $I\ \llbracket P\cdot P\to P\rrbracket_{e}\ I$. ###### Proof. Assume (1) $x\ \llbracket P\rrbracket_{e}\ y$ and (2) $y\ \llbracket P\rrbracket_{e}\ z$, and show $x\ \llbracket P\rrbracket_{e}\ z$. By Identity Inclusion {32}, (1) implies $x\ =_{\beta\eta}\ y$. From this and (2), $\beta\eta$-Closure {2} yields the desired conclusion. ∎ ###### Corollary 34 ($\forall^{+}$ Per). If $R$ is $\forall^{+}$ and closed, then $\llbracket R\rrbracket_{\gamma}$ is a partial equivalence relation (i.e., symmetric and transitive; abbreviated per). ###### Proof. Since $R$ is closed, $\llbracket R\rrbracket_{\gamma}=\llbracket R\rrbracket_{e}$ by Environment Extension {7}. Transitivity For $\forall^{+}$-Types {33} then implies transitivity. Symmetry follows from Symmetric Types {27}, since $\forall^{+}$ types are symmetric types (Definition 24). ∎ ###### Definition 35 (simple transitive types). 
Simple transitive types $T$ are defined by the following grammar: $T\ ::=\ P\ |\ P\to T\ |\ N\to T\ |\ t\mathbin{\ast}T$ ###### Lemma 36 (transitivity for simple transitive types). $I\ \llbracket T\cdot T\to T\rrbracket_{e}\ I$ ###### Proof. The proof is by induction on $T$, in each case assuming (1) $x\ \llbracket T\rrbracket_{e}\ y$ and (2) $y\ \llbracket T\rrbracket_{e}\ z$. Case $P$: Transitivity For $\forall^{+}$-Types {33}. Case $P\to T$: assume (3) $a\ \llbracket P\rrbracket_{e}\ a^{\prime}$. By Symmetric Types {27}, $a^{\prime}\ \llbracket P\rrbracket_{e}\ a$ (as $e^{\cup}=e$). By Transitivity For $\forall^{+}$-Types {33}, this can be combined with (3) to obtain $a\ \llbracket P\rrbracket_{e}\ a$. Using this with (1), $x\ a\ \llbracket T\rrbracket_{e}\ y\ a$. Using (3) with (2), $y\ a\ \llbracket T\rrbracket_{e}\ z\ a^{\prime}$. By the induction hypothesis, $x\ a\ \llbracket T\rrbracket_{e}\ z\ a^{\prime}$ as required. Case $N\to T$: assume (3) $a\ \llbracket N\rrbracket_{e}\ a^{\prime}$. By Identity Inclusion {32}, since $N$ is $\forall^{-}$, $a\ \llbracket N\rrbracket_{e}\ a$ (since $a=_{\beta\eta}a$). Using this with (1), $x\ a\ \llbracket T\rrbracket_{e}\ y\ a$. Then as in the previous case, we obtain $y\ a\ \llbracket T\rrbracket_{e}\ z\ a^{\prime}$ using (3) with (2), and the required $x\ a\ \llbracket T\rrbracket_{e}\ z\ a^{\prime}$ by the induction hypothesis. Case $\hat{t}\mathbin{\ast}T$: by Conjugation {13}, it suffices to show $\hat{t}\ x\ \llbracket T\rrbracket_{e}\ \hat{t}\ z$. This follows by the IH from assumptions (1) and (2), since these are equivalent to $\hat{t}\ x\ \llbracket T\rrbracket_{e}\ \hat{t}\ y$ and $\hat{t}\ y\ \llbracket T\rrbracket_{e}\ \hat{t}\ z$ by Conjugation {13}. ∎ ## VIII A relational proof system Figure 6 presents a proof system, RelPf, for judgments of the form $\Gamma\vdash t\ [R]\ t^{\prime}$. 
(Here, the square brackets are part of the syntax for the judgment; in our meta-language, we are using them for application of a mathematical relation.) RelPf Soundness {40} below shows that this system is sound with respect to the semantics of Figure 2 (extended for contexts). In Section XII, we will develop a type theory based on RelPf, but introduce the proof system here because the fragment for System F types will be useful in Section X on inductive types. A few details: * • typing contexts $\Gamma$ are described by the grammar $\Gamma\ ::=\ \cdot\ |\ \Gamma,t\,[R]\,t^{\prime}$ We may elide $\cdot$ in examples. * • There is an introduction and elimination rule for each connective. * • The introduction rule for term promotions is the axiom $\Gamma\vdash t\,[t^{\prime}]\,t^{\prime}\,t$. This states that $t$ is related to $t^{\prime}\,t$ by the relation (i.e., term promotion) $t^{\prime}$. * • The rule that allows changing the sides of the relational typing to $\beta\eta$-equal terms is called _conversion_. While $\beta\eta$-equality is undecidable in general, we may view the side conditions on conversion as license for an implementation to check reductions up to any desired finite depth. So we view reduction as being implicitly bounded in applications of this rule, making type-checking decidable. We do not formalize bounded reduction. $\begin{array}[]{ll}\Gamma\vdash\textit{tt}\,[\textit{Bool}]\,\textit{ff}&\text{(assumption)}\\ \Gamma\vdash\textit{tt}\,[R\to R\to R]\,\textit{ff}&\text{(forall elimination)}\\ \Gamma\vdash\textit{tt}\,x\,[R\to R]\,\textit{ff}\,x^{\prime}&\text{(arrow elimination, with }\Gamma\vdash x\,[R]\,x^{\prime}\text{)}\\ \Gamma\vdash\textit{tt}\,x\,y\,[R]\,\textit{ff}\,x^{\prime}\,y^{\prime}&\text{(arrow elimination, with }\Gamma\vdash y\,[R]\,y^{\prime}\text{)}\\ \Gamma\vdash x\,[R]\,y^{\prime}&\text{(conversion)}\end{array}$ Figure 4: Derivation of True Different From False {38}. 
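The derivation in Figure 4 turns on the Church booleans _tt_ and _ff_ (given formally in Definition 37 below). As a side illustration, here is a minimal Haskell sketch of these encodings — the names `CBool`, `tt`, and `ff` are ours, not part of the formal development — exhibiting the two $\beta\eta$-equalities the conversion step relies on: $\textit{tt}\,x\,y$ computes to $x$, and $\textit{ff}\,x\,y$ to $y$.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church booleans, as in Definition 37: Bool := forall X. X -> X -> X.
type CBool = forall x. x -> x -> x

tt :: CBool
tt = \x _ -> x    -- tt x y  =βη  x

ff :: CBool
ff = \_ y -> y    -- ff x y  =βη  y

main :: IO ()
main = do
  print (tt 1 2 :: Int)   -- 1
  print (ff 1 2 :: Int)   -- 2
```

Since _tt_ and _ff_ disagree on every pair of distinct arguments, an assumption that they are related at _Bool_ trivializes any relation $R$, which is exactly what the derivation exploits.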
The final inference is by the conversion rule, noting $\textit{tt}\,x\,y=_{\beta\eta}x$ and $\textit{ff}\,x^{\prime}\,y^{\prime}=_{\beta\eta}y^{\prime}$. $\begin{array}[]{ll}\Gamma\vdash x:T&\text{if }x:T\in\Gamma\\ \Gamma\vdash\lambda\,x.\,t:T\to T^{\prime}&\text{if }\Gamma,x:T\vdash t:T^{\prime}\\ \Gamma\vdash t\ t^{\prime}:T&\text{if }\Gamma\vdash t:T^{\prime}\to T\text{ and }\Gamma\vdash t^{\prime}:T^{\prime}\\ \Gamma\vdash t:\forall\,X.\,T&\text{if }\Gamma\vdash t:T\text{ and }X\not\in\textit{FV}(\Gamma)\\ \Gamma\vdash t:[T^{\prime}/X]T&\text{if }\Gamma\vdash t:\forall\,X.\,T\end{array}$ Figure 5: Typing rules for Curry-style System F $\begin{array}[]{ll}\Gamma\vdash t\,[R]\,t^{\prime}&\text{if }t\,[R]\,t^{\prime}\in\Gamma\\ \Gamma\vdash\lambda\,x.\,t\,[R\to R^{\prime}]\,\lambda\,x^{\prime}.\,t^{\prime}&\text{if }\Gamma,x\,[R]\,x^{\prime}\vdash t\,[R^{\prime}]\,t^{\prime}\ (*)\\ \Gamma\vdash t\,t_{1}\,[R^{\prime}]\,t^{\prime}\,t_{2}&\text{if }\Gamma\vdash t\,[R\to R^{\prime}]\,t^{\prime}\text{ and }\Gamma\vdash t_{1}\,[R]\,t_{2}\\ \Gamma\vdash t\,[[R/X]R^{\prime}]\,t^{\prime}&\text{if }\Gamma\vdash t\,[\forall\,X.\,R^{\prime}]\,t^{\prime}\\ \Gamma\vdash t\,[\forall\,X.\,R]\,t^{\prime}&\text{if }\Gamma\vdash t\,[R]\,t^{\prime}\text{ and }X\not\in\textit{FV}(\Gamma)\\ \Gamma\vdash t_{1}^{\prime}\,[R]\,t_{2}^{\prime}&\text{if }\Gamma\vdash t_{1}\,[R]\,t_{2},\ t_{1}=_{\beta\eta}t_{1}^{\prime},\text{ and }t_{2}=_{\beta\eta}t_{2}^{\prime}\\ \Gamma\vdash t\,[R^{\cup}]\,t^{\prime}&\text{if }\Gamma\vdash t^{\prime}\,[R]\,t\\ \Gamma\vdash t^{\prime}\,[R]\,t&\text{if }\Gamma\vdash t\,[R^{\cup}]\,t^{\prime}\\ \Gamma\vdash t\,[t^{\prime}]\,t^{\prime}\ t&\text{(axiom)}\\ \Gamma\vdash[t^{\prime\prime}\,t/x]t_{1}\,[R]\,[t^{\prime\prime}\,t/x]t_{2}&\text{if }\Gamma\vdash t\,[t^{\prime\prime}]\,t^{\prime}\text{ and }\Gamma\vdash[t^{\prime}/x]t_{1}\,[R]\,[t^{\prime}/x]t_{2}\\ \Gamma\vdash t_{1}\,[R^{\prime\prime}]\,t_{2}&\text{if }\Gamma\vdash t\,[R\cdot R^{\prime}]\,t^{\prime}\text{ and }\Gamma,t\,[R]\,x,x\,[R^{\prime}]\,t^{\prime}\vdash t_{1}\,[R^{\prime\prime}]\,t_{2}\ (**)\\ \Gamma\vdash t\,[R\cdot R^{\prime}]\,t^{\prime}&\text{if }\Gamma\vdash t\,[R]\,t^{\prime\prime}\text{ and }\Gamma\vdash t^{\prime\prime}\,[R^{\prime}]\,t^{\prime}\end{array}$ Side condition (*) is $x\not\in\textit{FV}(\Gamma,R,R^{\prime})$. Side condition (**) is $x\not\in\textit{FV}(\Gamma,t_{1},t_{2},t,t^{\prime},R,R^{\prime},R^{\prime\prime})$. Figure 6: Proof system for relational typing. Here is an example in RelPf, deriving a form of inconsistency from an assumption that different constructors of an inductive type are equal. It states that if _tt_ and _ff_ are equal as booleans, then any relation $R$ is trivial in the sense that $R=\textit{dom}(R)\times\textit{ran}(R)$. ###### Definition 37. $\begin{array}[]{lll}\textit{Bool}&:=&\forall\,X.\,X\to X\to X\\ \textit{tt}&:=&\lambda\,x.\,\lambda\,y.\,x\\ \textit{ff}&:=&\lambda\,x.\,\lambda\,y.\,y\end{array}$ ###### Lemma 38 (True Different From False). For any type $R$, let $\Gamma$ be a context with the following assumptions: 1. 1. $\textit{tt}\,[\textit{Bool}]\,\textit{ff}$ 2. 2. $x\,[R]\,x^{\prime}$ 3. 3. $y\,[R]\,y^{\prime}$ Then $\Gamma\vdash x\,[R]\,y^{\prime}$. ###### Proof. A derivation is in Figure 4. ∎ Turning now to meta-theory: let $\sigma$ range over term substitutions (finite functions from term variables to terms). Denote capture-avoiding application of a substitution $\sigma$ to a term $t$ as $\sigma\ t$. Apply substitutions $\sigma$ to types $R$ by applying them to all terms contained in $R$. Now we will define an interpretation of contexts $\Gamma$ as sets of substitutions satisfying the context's constraints. ###### Definition 39. 
$\llbracket\Gamma\rrbracket_{\gamma}$ is defined by recursion on $\Gamma$: $\begin{array}[]{lll}\sigma\in\llbracket\Gamma,t\,[R]\,t^{\prime}\rrbracket_{\gamma}&=&\sigma\in\llbracket\Gamma\rrbracket_{\gamma}\ \wedge\ \sigma t\,\llbracket\sigma R\rrbracket_{\gamma}\,\sigma t^{\prime}\\ \sigma\in\llbracket\cdot\rrbracket_{\gamma}&=&\textit{True}\end{array}$ ###### Theorem 40 (RelPf Soundness). Suppose $\gamma$ is defined on all free type variables of $\Gamma$ and $R$. If $\Gamma\vdash t\,[R]\,t^{\prime}$, and $\sigma\in\llbracket\Gamma\rrbracket_{\gamma}$, then $\sigma\ t\,\llbracket\sigma\,R\rrbracket_{\gamma}\,\sigma\ t^{\prime}$. ###### Proof. The proof is by induction on the RelPf derivation. In each case we assume arbitrary $\sigma\in\llbracket\Gamma\rrbracket_{\gamma}$. Case (assumption): $\Gamma\vdash t\,[R]\,t^{\prime}$, where $t\,[R]\,t^{\prime}\in\Gamma$. From $t\,[R]\,t^{\prime}\in\Gamma$ we obtain the desired $\sigma\,t\,\llbracket\sigma\,R\rrbracket_{\gamma}\,\sigma\,t^{\prime}$ from the semantics of contexts. Case (arrow introduction): $\Gamma\vdash\lambda\,x.\,t\,[R\to R^{\prime}]\,\lambda\,x^{\prime}.\,t^{\prime}$, from $\Gamma,x\,[R]\,x^{\prime}\vdash t\,[R^{\prime}]\,t^{\prime}$ with side condition (*). Assume arbitrary $t_{1}$ and $t_{2}$ with (1) $t_{1}\,\llbracket\sigma\,R\rrbracket_{\gamma}\,t_{2}$. Let $\sigma^{\prime}$ denote $\sigma[x\mapsto t_{1},x^{\prime}\mapsto t_{2}]$. By the IH, $\sigma^{\prime}\,t\,\llbracket\sigma^{\prime}\,R^{\prime}\rrbracket_{\gamma}\,\sigma^{\prime}\,t^{\prime}$. By side condition (*), $\sigma^{\prime}\,R^{\prime}=\sigma\,R^{\prime}$. Then applying $\beta\eta$-Closure {2}, we have $(\sigma\,\lambda\,x.\,t)\,t_{1}\,\llbracket\sigma\,R^{\prime}\rrbracket_{\gamma}\,(\sigma\,\lambda\,x^{\prime}.\,t^{\prime})\,t_{2}$. By the semantics of arrow types, the fact that this holds for all $t_{1}$ and $t_{2}$ satisfying (1) implies the desired $\sigma\,\lambda\,x.\,t\,\llbracket\sigma\,(R\to R^{\prime})\rrbracket_{\gamma}\,\sigma\,\lambda\,x^{\prime}.\,t^{\prime}$. 
Case (arrow elimination): $\Gamma\vdash t\,t_{1}\,[R^{\prime}]\,t^{\prime}\,t_{2}$, from $\Gamma\vdash t\,[R\to R^{\prime}]\,t^{\prime}$ and $\Gamma\vdash t_{1}\,[R]\,t_{2}$. By the IH, $\sigma\,t\,\llbracket\sigma\,(R\to R^{\prime})\rrbracket_{\gamma}\,\sigma\,t^{\prime}$ and $\sigma\,t_{1}\,\llbracket\sigma\,R\rrbracket_{\gamma}\,\sigma\,t_{2}$. The semantics of arrow types then gives the desired $\sigma\,(t\,t_{1})\,\llbracket\sigma\,R^{\prime}\rrbracket_{\gamma}\,\sigma\,(t^{\prime}\,t_{2})$. Case (forall elimination): $\Gamma\vdash t\,[[R/X]R^{\prime}]\,t^{\prime}$, from $\Gamma\vdash t\,[\forall\,X.\,R^{\prime}]\,t^{\prime}$. By the IH, we have (1) $\sigma\,t\,\llbracket\sigma\,\forall\,X.\,R^{\prime}\rrbracket_{\gamma}\,\sigma\,t^{\prime}$. By the condition on $\gamma$, $\llbracket R\rrbracket_{\gamma}$ is defined, and we use it to instantiate (1). This gives $\sigma\,t\,\llbracket\sigma\,R^{\prime}\rrbracket_{\gamma[X\mapsto\llbracket R\rrbracket_{\gamma}]}\,\sigma\,t^{\prime}$. By Interpretation Over Substitution {6}, this implies the desired $\sigma\,t\,\llbracket\sigma\,[R/X]R^{\prime}\rrbracket_{\gamma}\,\sigma\,t^{\prime}$. Case (forall introduction): $\Gamma\vdash t\,[\forall\,X.\,R]\,t^{\prime}$, from $\Gamma\vdash t\,[R]\,t^{\prime}$ with $X\not\in\textit{FV}(\Gamma)$. Assume arbitrary $r\in\mathcal{R}$. Then by the IH, $\sigma\,t\,\llbracket\sigma\,R\rrbracket_{\gamma[X\mapsto r]}\,\sigma\,t^{\prime}$. The desired $\sigma\,t\,\llbracket\sigma\,\forall\,X.\,R\rrbracket_{\gamma}\,\sigma\,t^{\prime}$ then follows by the semantics of universal quantification. Case (conversion): $\Gamma\vdash t_{1}^{\prime}\,[R]\,t_{2}^{\prime}$, from $\Gamma\vdash t_{1}\,[R]\,t_{2}$ with $t_{1}=_{\beta\eta}t_{1}^{\prime}$ and $t_{2}=_{\beta\eta}t_{2}^{\prime}$. This case follows easily by the IH and $\beta\eta$-Closure {2}. Case (converse introduction): $\Gamma\vdash t\,[R^{\cup}]\,t^{\prime}$, from $\Gamma\vdash t^{\prime}\,[R]\,t$. By the IH, $\sigma\,t^{\prime}\,\llbracket\sigma\,R\rrbracket_{\gamma}\,\sigma\,t$. 
By the semantics of converse, this implies the required $\sigma\,t\,\llbracket\sigma\,R^{\cup}\rrbracket_{\gamma}\,\sigma\,t^{\prime}$. Case (converse elimination): $\Gamma\vdash t^{\prime}\,[R]\,t$, from $\Gamma\vdash t\,[R^{\cup}]\,t^{\prime}$. By the IH, $\sigma\,t\,\llbracket\sigma\,R^{\cup}\rrbracket_{\gamma}\,\sigma\,t^{\prime}$. By the semantics of converse, this implies the required $\sigma\,t^{\prime}\,\llbracket\sigma\,R\rrbracket_{\gamma}\,\sigma\,t$. Case (promotion axiom): $\Gamma\vdash t\,[t^{\prime}]\,t^{\prime}\ t$. The desired conclusion is equivalent to $\sigma\,(t^{\prime}\,t)=_{\beta\eta}\sigma\,(t^{\prime}\ t)$, which holds. Case (promotion elimination): $\Gamma\vdash[t^{\prime\prime}\,t/x]t_{1}\,[R]\,[t^{\prime\prime}\,t/x]t_{2}$, from $\Gamma\vdash t\,[t^{\prime\prime}]\,t^{\prime}$ and $\Gamma\vdash[t^{\prime}/x]t_{1}\,[R]\,[t^{\prime}/x]t_{2}$. By the IH, we have * • $\sigma\,(t^{\prime\prime}\,t)=_{\beta\eta}\sigma\,t^{\prime}$ * • $\sigma\,[t^{\prime}/x]t_{1}\,\llbracket\sigma\,R\rrbracket_{\gamma}\,\sigma\,[t^{\prime}/x]t_{2}$ Using basic properties of $\beta\eta$-equivalence and substitution, these facts imply the desired $\sigma\,[t^{\prime\prime}\,t/x]t_{1}\,\llbracket\sigma\,R\rrbracket_{\gamma}\,\sigma\,[t^{\prime\prime}\,t/x]t_{2}$ Case (composition elimination): $\Gamma\vdash t_{1}\,[R^{\prime\prime}]\,t_{2}$, from $\Gamma\vdash t\,[R\cdot R^{\prime}]\,t^{\prime}$ and $\Gamma,t\,[R]\,x,x\,[R^{\prime}]\,t^{\prime}\vdash t_{1}\,[R^{\prime\prime}]\,t_{2}$ with side condition (**). By the IH and the semantics of composition, there exists $t^{\prime\prime}$ such that * (1) $\sigma\,t\,\llbracket\sigma\,R\rrbracket_{\gamma}\,t^{\prime\prime}$ * (2) $t^{\prime\prime}\,\llbracket\sigma\,R^{\prime}\rrbracket_{\gamma}\,\sigma\,t^{\prime}$ Let $\sigma^{\prime}$ denote $\sigma[x\mapsto t^{\prime\prime}]$. Using (1) and (2), we may prove that $\sigma^{\prime}$ is in the interpretation of the context in the right premise of the inference. 
Side condition (**) is used to deduce that $\sigma^{\prime}$ satisfies the two constraints added to $\Gamma$ in that context, from (1) and (2) (where only $\sigma$ appears). Then by the IH and (**), we have the required $\sigma\,t_{1}\,\llbracket\sigma\,R^{\prime\prime}\rrbracket_{\gamma}\,\sigma\,t_{2}$. Case (composition introduction): $\Gamma\vdash t\,[R\cdot R^{\prime}]\,t^{\prime}$, from $\Gamma\vdash t\,[R]\,t^{\prime\prime}$ and $\Gamma\vdash t^{\prime\prime}\,[R^{\prime}]\,t^{\prime}$. By the IH, we have * • $\sigma\,t\,\llbracket\sigma\,R\rrbracket_{\gamma}\,\sigma\,t^{\prime\prime}$ * • $\sigma\,t^{\prime\prime}\,\llbracket\sigma\,R^{\prime}\rrbracket_{\gamma}\,\sigma\,t^{\prime}$ These imply the desired $\sigma\,t\,\llbracket\sigma\,(R\cdot R^{\prime})\rrbracket_{\gamma}\,\sigma\,t^{\prime}$ by the semantics of composition. ∎ ## IX Embedding System F Similar to the Abstraction Theorem of Reynolds [9], we may prove that each term typable in System F is related to itself by the relational interpretation of its type. Figure 5 recalls the typing rules of Curry-style System F (also known as $\lambda 2$-Curry [11]). We consider the set of types of System F to be a subset of the set of relational types (Figure 1). We first show that typing derivations in System F can be translated to RelPf in the obvious way. Then we may appeal to RelPf Soundness {40}. ###### Definition 41. Partition the set of variables by an injection $\dot{-}$. Assume $t$ does not contain any variables of the form $\dot{x}$ with $x\in\textit{FV}(t)$. Then let $\dot{t}$ be the term where every variable $x$ (free or bound) is renamed to $\dot{x}$. ###### Definition 42. Define $\ulcorner-\urcorner$ recursively on typing contexts $\Gamma$ of System F by: $\begin{array}[]{lll}\ulcorner\cdot\urcorner&=&\cdot\\ \ulcorner\Gamma,x:T\urcorner&=&\ulcorner\Gamma\urcorner,x\,[T]\,\dot{x}\end{array}$ ###### Theorem 43 (Soundness Of System F). 
If $\Gamma\vdash t:T$ (in System F), then $\ulcorner\Gamma\urcorner\vdash t\ [T]\ \dot{t}$ (in RelPf), assuming $\dot{t}$ is defined. ###### Proof. The proof is by induction on the typing derivation in System F. Case (variable): $\Gamma\vdash x:T$, where $x:T\in\Gamma$. From $x:T\in\Gamma$ we derive $x\,[T]\,\dot{x}\in\ulcorner\Gamma\urcorner$, and conclude using the assumption rule of RelPf. Case (abstraction): $\Gamma\vdash\lambda\,x.\,t:T\to T^{\prime}$, from $\Gamma,x:T\vdash t:T^{\prime}$. By the IH, we have $\ulcorner\Gamma\urcorner,x\,[T]\,\dot{x}\vdash t\,[T^{\prime}]\,\dot{t}$ From this, use arrow introduction (of RelPf) to derive the desired $\ulcorner\Gamma\urcorner\vdash\lambda\,x.\,t\,[T\to T^{\prime}]\,\lambda\,\dot{x}.\,\dot{t}$ Case (application): $\Gamma\vdash t\ t^{\prime}:T$, from $\Gamma\vdash t:T^{\prime}\to T$ and $\Gamma\vdash t^{\prime}:T^{\prime}$. By the IH we have $\begin{array}[]{l}\ulcorner\Gamma\urcorner\vdash t\,[T^{\prime}\to T]\,\dot{t}\\ \ulcorner\Gamma\urcorner\vdash t^{\prime}\,[T^{\prime}]\,\dot{t^{\prime}}\end{array}$ Use arrow elimination (of RelPf) to deduce the desired $\ulcorner\Gamma\urcorner\vdash t\,t^{\prime}\,[T]\,\dot{t}\,\dot{t^{\prime}}$ Case (forall introduction): $\Gamma\vdash t:\forall\,X.\,T$, from $\Gamma\vdash t:T$ with $X\not\in\textit{FV}(\Gamma)$. By the IH, we have $\ulcorner\Gamma\urcorner\vdash t\,[T]\,\dot{t}$. Apply forall introduction (of RelPf) to conclude the desired $\ulcorner\Gamma\urcorner\vdash t\,[\forall\,X.\,T]\,\dot{t}$ Case (forall elimination): $\Gamma\vdash t:[T^{\prime}/X]T$, from $\Gamma\vdash t:\forall\,X.\,T$. By the IH, we have $\ulcorner\Gamma\urcorner\vdash t\,[\forall\,X.\,T]\,\dot{t}$. Apply forall elimination (of RelPf) to conclude the desired $\ulcorner\Gamma\urcorner\vdash t\,[[T^{\prime}/X]T]\,\dot{t}$. ∎ ###### Corollary 44 (Soundness Of System F For Closed Terms). If $\cdot\vdash t:T$ (in System F), then $t\ \llbracket T\rrbracket_{\gamma}\ t$. ###### Proof. Use Soundness Of System F {43} (noting that $t=_{\alpha}\dot{t}$ since $t$ is closed), and then RelPf Soundness {40}. 
∎ Below we will also need this basic syntactic property: ###### Proposition 45 (Weakening for System F). If $\Gamma_{1},\Gamma_{2}\vdash t:T$, then $\Gamma_{1},x:R,\Gamma_{2}\vdash t:T$ where $x$ is not declared in $\Gamma_{1},\Gamma_{2}$. ## X Inductive types Following a relational and functorial generalization of [10], this section shows how to derive a relational form of induction within RelTT. For this section, except as noted in Section X-A, let $R$ be a type of System F, possibly containing specified variable $X$ free. Under the usual requirement of positivity, we prove the following two relational types equal, where in the second one, we make use of our notation for internalized typing (Definition 9): ###### Definition 46. * • $D_{\textit{param}}:=\forall\,X.\,(R\to X)\to X$ * • $D_{\textit{ind}}:=\forall\,X.\,([\textit{in}_{X,R}]\,(R\to X)\,[\textit{in}_{X,R}])\Rightarrow X$ $\textit{in}_{X,R}$ represents the constructors of the inductive datatype in a standard way, and is defined below (Definition 54). ### X-A Variable Polarity and Monotonicity The first step to proving equality of $D_{\textit{param}}$ and $D_{\textit{ind}}$ is to extend the usual notion of a type variable's occurring free only positively or only negatively, to relational types (recall Definition 28 for polarities $p$). For inductive types, our results hold only for $\forall^{+}$ types of System F. For positive-recursive types, however (Section XI), our derivation works for any relational type $R$. So we begin by defining when a variable occurs only with polarity $p$ ($X\in^{p}R$) generally for any relational type $R$: ###### Definition 47. 
Define $X\in^{p}R$ inductively by the clauses: * • $X\in^{+}X$ * • $X\in^{p}Y$ (for $Y\neq X$) * • $X\in^{p}(R\to R^{\prime})$ iff $X\in^{\bar{p}}R$ and $X\in^{p}R^{\prime}$ * • $X\in^{p}\forall\,Y.\,R$ iff $X\in^{p}R$ * • $X\in^{p}(R\cdot R^{\prime})$ iff $X\in^{p}R$ and $X\in^{p}R^{\prime}$ * • $X\in^{p}(R^{\cup})$ iff $X\in^{p}R$ * • $X\in^{p}t$ (The intention is that $X\in^{+}R$ means $X$ occurs only positively in $R$, and $X\in^{-}R$ only negatively.) The following form of monotonicity then holds for any relational type. The statement of the lemma using a polarity meta-variable $p$ consolidates many dual cases in the proof (cf. [12]). ###### Lemma 48 (Monotonicity). Suppose $r_{+}$ and $r_{-}$ are in $\mathcal{R}$, with $r_{+}\subseteq r_{-}$. If $X\in^{p}R$, then $\llbracket R\rrbracket_{\gamma[X\mapsto r_{p}]}\subseteq\llbracket R\rrbracket_{\gamma[X\mapsto r_{\bar{p}}]}$. ###### Proof. The proof is by induction on $X\in^{p}R$, assuming (1) $r_{+}\subseteq r_{-}$ and (2) $t_{1}\ \llbracket R\rrbracket_{\gamma[X\mapsto r_{p}]}\ t_{2}$. Case $X\in^{+}X$: by (1). Case $X\in^{p}Y$: by (2), as $\llbracket Y\rrbracket_{\gamma[X\mapsto r_{p}]}=\llbracket Y\rrbracket_{\gamma}=\llbracket Y\rrbracket_{\gamma[X\mapsto r_{\bar{p}}]}$. Case $X\in^{p}(R_{1}\to R_{2})$: assume (3) $t_{a}\ \llbracket R_{1}\rrbracket_{\gamma[X\mapsto r_{\bar{p}}]}\ t_{b}$. From this, the IH for $R_{1}$ gives $t_{a}\ \llbracket R_{1}\rrbracket_{\gamma[X\mapsto r_{p}]}\ t_{b}$ (instantiating the quantified polarity in the IH with $\bar{p}$). Combine this with (2) to obtain $t_{1}\ t_{a}\ \llbracket R_{2}\rrbracket_{\gamma[X\mapsto r_{p}]}\ t_{2}\ t_{b}$. From this, the IH for $R_{2}$ gives $t_{1}\ t_{a}\ \llbracket R_{2}\rrbracket_{\gamma[X\mapsto r_{\bar{p}}]}\ t_{2}\ t_{b}$, as required. Case $X\in^{p}\forall\,Y.\,R^{\prime}$: assume $r\in\mathcal{R}$, and instantiate (2) with $r$. 
Then apply the IH to obtain the required $t_{1}\ \llbracket R^{\prime}\rrbracket_{\gamma[X\mapsto r_{\bar{p}},Y\mapsto r]}\ t_{2}$. Case $X\in^{p}(R_{1}\cdot R_{2})$: (2) implies that there exists $t$ such that $t_{1}\ \llbracket R_{1}\rrbracket_{\gamma[X\mapsto r_{p}]}\ t$ and $t\ \llbracket R_{2}\rrbracket_{\gamma[X\mapsto r_{p}]}\ t_{2}$. Applying the IH, we obtain $t_{1}\ \llbracket R_{1}\rrbracket_{\gamma[X\mapsto r_{\bar{p}}]}\ t$ and $t\ \llbracket R_{2}\rrbracket_{\gamma[X\mapsto r_{\bar{p}}]}\ t_{2}$, which suffices. Case $X\in^{p}(R_{a}^{\cup})$: (2) implies $t_{2}\ \llbracket R_{a}\rrbracket_{\gamma[X\mapsto r_{p}]}\ t_{1}$. From this, the IH gives $t_{2}\ \llbracket R_{a}\rrbracket_{\gamma[X\mapsto r_{\bar{p}}]}\ t_{1}$, which suffices. Case $X\in^{p}t$: by (2), as $\llbracket t\rrbracket_{\gamma[X\mapsto r_{p}]}=\llbracket t\rrbracket_{\gamma}=\llbracket t\rrbracket_{\gamma[X\mapsto r_{\bar{p}}]}$. ∎ ### X-B Fmap, Fold, and In Following a standard approach to derivation of inductive types (cf. [13]), we will define operations $\textit{fmap}_{X,R}$, fold, and finally $\textit{in}_{X,R}$, and prove relational typings about them. Because we will be considering terms related to themselves, it is convenient to introduce notation $t::r$: ###### Definition 49. $(t::r):=t\ [r]\ t$ ###### Definition 50. Define a term $\textit{fmap}_{X,R}$ by recursion on types $R$ of System F (also, recall Figure 3): $\begin{array}[]{lll}\textit{fmap}_{X,X}&=&I\\ \textit{fmap}_{X,Y}&=&K\ I\\ \textit{fmap}_{X,R\to R^{\prime}}&=&\lambda\,f.\,\lambda\,a.\,\textit{fmap}_{X,R^{\prime}}\ f\circ a\circ\textit{fmap}_{X,R}\ f\\ \textit{fmap}_{X,\forall\,Y.\,R}&=&\lambda\,f.\,\textit{fmap}_{X,R}\ f\end{array}$ Note that as we treat expressions up to $\alpha$-equivalence, we do not need a case for $\textit{fmap}_{X,\forall\,X.\,R}$, as this will be handled as $\textit{fmap}_{X,\forall\,Y.\,[Y/X]R}$. ###### Lemma 51 (Fmap (System F)). Suppose $X_{+}$ and $X_{-}$ are type variables. 
Suppose $X_{p}\not\in\textit{FV}(R)$, for all $p$. If $X\in^{p}R$, then in System F we have $\cdot\vdash\textit{fmap}_{X,R}:(X_{+}\to X_{-})\to[X_{p}/X]R\to[X_{\bar{p}}/X]R$ ###### Proof. The proof is by induction on $X\in^{p}R$, implicitly applying Weakening for System F {45}. Case $X\in^{+}X$: the goal is $\cdot\vdash I:(X_{+}\to X_{-})\to(X_{+}\to X_{-})$ which is derivable. Case $X\in^{p}Y$: the goal is $\cdot\vdash K\ I:(X_{+}\to X_{-})\to(Y\to Y)$ which is derivable. Case $X\in^{p}(R_{1}\to R_{2})$: let $\Gamma$ be the context $f:(X_{+}\to X_{-}),a:[X_{p}/X](R_{1}\to R_{2}),x:[X_{\bar{p}}/X]R_{1}$ Using the typing rules of System F, it suffices to show $\Gamma\vdash\textit{fmap}_{X,R_{2}}\ f\ (a\ (\textit{fmap}_{X,R_{1}}\ f\ x)):[X_{\bar{p}}/X]R_{2}$ By the IH, since $X\in^{\bar{p}}R_{1}$, we have $\cdot\vdash\textit{fmap}_{X,R_{1}}:(X_{+}\to X_{-})\to([X_{\bar{p}}/X]R_{1}\to[X_{p}/X]R_{1})$ Hence we may derive $\Gamma\vdash(\textit{fmap}_{X,R_{1}}\ f\ x):[X_{p}/X]R_{1}$ and then $\Gamma\vdash a\ (\textit{fmap}_{X,R_{1}}\ f\ x):[X_{p}/X]R_{2}$. From this, using the IH with $X\in^{p}R_{2}$, we obtain the desired goal. Case $X\in^{p}\forall\,Y.\,R$: by the IH we have $\cdot\vdash\textit{fmap}_{X,R}:(X_{+}\to X_{-})\to[X_{p}/X]R\to[X_{\bar{p}}/X]R$ From this we obtain $f:(X_{+}\to X_{-})\vdash\textit{fmap}_{X,R}\ f:[X_{p}/X]R\to[X_{\bar{p}}/X]R$ Applying $\forall$-introduction, we get $f:(X_{+}\to X_{-})\vdash\textit{fmap}_{X,R}\ f:\forall\,Y.\,[X_{p}/X]R\to[X_{\bar{p}}/X]R$ Applying $\to$-introduction gives the desired conclusion (note we needed the $\eta$-expanded definition of $\textit{fmap}_{X,\forall\,Y.\,R}$). ∎ ###### Definition 52 (Fold). $\textit{fold}\ :=\ \lambda\,a.\,\lambda\,x.\,x\ a$ ###### Lemma 53 (Fold). Let $X$ be possibly free in $R$. Then in System F: $\cdot\vdash\textit{fold}:\forall\,X.\,(R\to X)\to D_{\textit{param}}\to X$ ###### Proof. Let $\Gamma$ be the context $a:R\to X,x:D_{\textit{param}}$. It suffices to prove $\Gamma\vdash x\ a:X$. 
Instantiating the type variable in $D_{\textit{param}}$ with $X$, we obtain $\Gamma\vdash x:(R\to X)\to X$ So applying $x$ to $a$ indeed has type $X$ in context $\Gamma$. ∎ ###### Definition 54. $\textit{in}_{X,R}\ :=\ \lambda\,x.\,\lambda\,a.\,a\ (\textit{fmap}_{X,R}\ (\textit{fold}\ a)\ x)$ ###### Lemma 55 (In For $D_{\textit{param}}$ (System F)). If $X\in^{+}R$, then in System F we have $\cdot\vdash\textit{in}_{X,R}:[D_{\textit{param}}/X]R\to D_{\textit{param}}$ ###### Proof. Let $\Gamma$ be the context $x:[D_{\textit{param}}/X]R,a:R\to X$. Applying the typing rules of System F, it suffices to show $\Gamma\vdash a\ (\textit{fmap}_{X,R}\ (\textit{fold}\ a)\ x):X$ This holds if $\Gamma\vdash\textit{fmap}_{X,R}\ (\textit{fold}\ a)\ x:R$. Using the assumption that $X\in^{+}R$, instantiate Fmap (System F) {51} with $D_{\textit{param}}$ for $X_{+}$ and $X$ for $X_{-}$ to obtain: $\cdot\vdash\textit{fmap}_{X,R}:(D_{\textit{param}}\to X)\to[D_{\textit{param}}/X]R\to R$ The desired typing follows using $\Gamma\vdash\textit{fold}\ a:D_{\textit{param}}\to X$, which holds by Fold {53}. ∎ ###### Lemma 56 (In For $D_{\textit{param}}$ (RelTT)). If $X\in^{+}R$, then $\textit{in}_{X,R}::\llbracket[D_{\textit{param}}/X]R\to D_{\textit{param}}\rrbracket_{\gamma}$ ###### Proof. Apply Soundness Of System F For Closed Terms {44} to In For $D_{\textit{param}}$ (System F) {55}. ∎ We can prove a similar lemma about $\textit{in}_{X,R}$ and $D_{\textit{ind}}$, but since $D_{\textit{ind}}$ is not a System F type we cannot use Soundness Of System F {43}. We first need: ###### Lemma 57 ($D_{\textit{ind}}$ Containment). If $\textit{in}_{X,R}::\llbracket R\to X\rrbracket_{\gamma[X\mapsto r]}$, then $\llbracket D_{\textit{ind}}\rrbracket_{\gamma}\subseteq r$. ###### Proof. Call the hypothesis of the lemma (1), and suppose also (2) $t\ \llbracket D_{\textit{ind}}\rrbracket_{\gamma}\ t^{\prime}$. We must show $t\ [r]\ t^{\prime}$. 
Instantiating $X$ in $D_{\textit{ind}}$ with $r$, by Implicit Product {17}, (1) indeed implies $t\ [r]\ t^{\prime}$. ∎ ###### Lemma 58 (In for $D_{\textit{ind}}$ (RelTT)). If $X\in^{+}R$, then $\textit{in}_{X,R}::\llbracket[D_{\textit{ind}}/X]R\to D_{\textit{ind}}\rrbracket_{\gamma}$ ###### Proof. Assume (1) $t\ \llbracket[D_{\textit{ind}}/X]R\rrbracket_{\gamma}\ t^{\prime}$ and show $\textit{in}_{X,R}\ t\ \llbracket D_{\textit{ind}}\rrbracket_{\gamma}\ \textit{in}_{X,R}\ t^{\prime}$ Unfolding the definition of $D_{\textit{ind}}$ and applying Internalized Typing {10} and Implicit Product {17}, it suffices to assume $r\in\mathcal{R}$ with $\textit{in}_{X,R}\,\llbracket R\to X\rrbracket_{\gamma[X\mapsto r]}\,\textit{in}_{X,R}$ (2) and show $\textit{in}_{X,R}\ t\ [r]\ \textit{in}_{X,R}\ t^{\prime}$ This will follow from (2) if we can show (A) $t\ \llbracket R\rrbracket_{\gamma[X\mapsto r]}\ t^{\prime}$. To derive this, first instantiate Monotonicity {48} with $\llbracket D_{\textit{ind}}\rrbracket_{\gamma}$ for $r_{+}$ and $r$ for $r_{-}$. That tells us that if (B) $\llbracket D_{\textit{ind}}\rrbracket_{\gamma}\subseteq r$, then also (applying Interpretation Over Substitution {6}) $\llbracket[D_{\textit{ind}}/X]R\rrbracket_{\gamma}\subseteq\llbracket R\rrbracket_{\gamma[X\mapsto r]}$ This together with (1) proves (A). And (B) follows from (2) by $D_{\textit{ind}}$ Containment {57}. ∎ ### X-C Reflection Next, we prove a property known as _reflection_ (cf. [14]). For the specific case of natural numbers, a similar result is Proposition 14 of [10]. Recall the definitions of fold and in from Section X-B. ###### Definition 59. $\textit{rebuild}_{X,R}:=\textit{fold}\ \textit{in}_{X,R}$ ###### Lemma 60 (Reflection). If $X\in^{+}R$, then $\textit{rebuild}_{X,R}\ \llbracket D_{\textit{param}}\to D_{\textit{param}}\rrbracket_{\gamma}\ I$ Before we can prove this, we need: ###### Lemma 61 (Fmap Fold). Suppose $Y\not\in\textit{FV}(R)$. 
Let $r_{+}=\llbracket f\cdot X\rrbracket_{\gamma}$ and $r_{-}=\gamma(X)$. If $X\in^{p}R$, then, letting $\gamma^{\prime}=\gamma[Y\mapsto r_{p},X\mapsto r_{\bar{p}}]$, we have $\textit{fmap}_{X,R}\ f\ \llbracket[Y/X]R\to R\rrbracket_{\gamma^{\prime}}\ I$ ###### Proof. The proof is by induction on the derivation of $X\in^{p}R$. We simplify implicitly using $\beta\eta$-Closure {2}. Case $X\in^{+}X$: since $\textit{fmap}_{X,X}=I$, the goal becomes $I\ f\ [r_{+}\to r_{-}]\ I$ So assume $t_{1}\ [r_{+}]\ t_{2}$, which is equivalent (by Deapplication {4}) to (1) $f\ t_{1}\ [\gamma(X)]\ t_{2}$; and show $I\ f\ t_{1}\ [\gamma(X)]\ t_{2}$ but this simplifies to (1). Case $X\in^{p}Z$: since $\textit{fmap}_{X,Z}=K\ I$, the goal becomes $K\ I\ f\ \llbracket Z\to Z\rrbracket_{\gamma^{\prime}}\ I$ Further simplifying, it becomes $I\ \llbracket Z\to Z\rrbracket_{\gamma^{\prime}}\ I$ which holds obviously (Identity {8}). Since $Y\not\in\textit{FV}(R)$ by assumption, this concludes the variable cases. Case $X\in^{p}(R_{1}\to R_{2})$: the goal becomes $\lambda\,a.\,(\textit{fmap}_{X,R_{2}}\ f)\circ a\circ(\textit{fmap}_{X,R_{1}}\ f)\ \llbracket[Y/X]R\to R\rrbracket_{\gamma^{\prime}}\ I$ So assume (1) $a\,\llbracket[Y/X](R_{1}\to R_{2})\rrbracket_{\gamma^{\prime}}\ a^{\prime}$, and show $(\textit{fmap}_{X,R_{2}}\ f)\circ a\circ(\textit{fmap}_{X,R_{1}}\ f)\ \llbracket R\rrbracket_{\gamma^{\prime}}\ a^{\prime}$ Next, assume (2) $b\,\llbracket R_{1}\rrbracket_{\gamma^{\prime}}\,b^{\prime}$, and show $\textit{fmap}_{X,R_{2}}\ f\ (a\ (\textit{fmap}_{X,R_{1}}\ f\ b))\ \llbracket R_{2}\rrbracket_{\gamma^{\prime}}\ a^{\prime}\ b^{\prime}$ Since $X\in^{p}R_{2}$, this follows by the IH from $a\ (\textit{fmap}_{X,R_{1}}\ f\ b)\ \llbracket[Y/X]R_{2}\rrbracket_{\gamma^{\prime}}\ a^{\prime}\ b^{\prime}$ In turn, this follows by (1) from $\textit{fmap}_{X,R_{1}}\ f\ b\ \llbracket[Y/X]R_{1}\rrbracket_{\gamma^{\prime}}\ b^{\prime}$ Since $X\in^{\bar{p}}R_{1}$, this follows by the IH from (2).
Case $X\in^{+}\forall\,Z.\,R^{\prime}$: the goal becomes $\textit{fmap}_{X,R^{\prime}}\ f\ \llbracket[Y/X]R\to R\rrbracket_{\gamma^{\prime}}\ I$ So assume (1) $a\,\llbracket\forall\,Z.\,[Y/X]R^{\prime}\rrbracket_{\gamma^{\prime}}\,a^{\prime}$, and show $\textit{fmap}_{X,R^{\prime}}\ f\ a\ \llbracket\forall\,Z.\,R^{\prime}\rrbracket_{\gamma^{\prime}}\ a^{\prime}$ For this, assume $r^{\prime}\in\mathcal{R}$, and show $\textit{fmap}_{X,R^{\prime}}\ f\ a\ \llbracket R^{\prime}\rrbracket_{\gamma^{\prime}[Z\mapsto r^{\prime}]}\ a^{\prime}$ Since $X\in^{p}R^{\prime}$, this follows by the IH from $a\,\llbracket[Y/X]R^{\prime}\rrbracket_{\gamma^{\prime}[Z\mapsto r^{\prime}]}\,a^{\prime}$ But this follows by instantiating (1) with $r^{\prime}$. ∎ We may now return to: ###### Proof of Reflection {60}. Assuming (1) $t\ \llbracket D_{\textit{param}}\rrbracket_{\gamma}\ t^{\prime}$, it suffices (applying $\beta\eta$-Closure {2}) to show $t\ \textit{in}_{X,R}\ \llbracket D_{\textit{param}}\rrbracket_{\gamma}\ t^{\prime}$ For this, assume $r\in\mathcal{R}$ and (2) $a\ \llbracket R\to X\rrbracket_{\gamma[X\mapsto r]}\ a^{\prime}$, and show $t\ \textit{in}_{X,R}\ a\ [r]\ t^{\prime}\ a^{\prime}$ (A) The key idea (generalizing Wadler’s Proposition 14 already mentioned) is to instantiate (1) with the asymmetric relation $\llbracket\textit{fold}\ a\cdot X\rrbracket_{[X\mapsto r]}$ Let us call this $r_{a}$. 
(A) will follow from that instantiation if we can prove $\textit{in}_{X,R}\ \llbracket R\to X\rrbracket_{\gamma[X\mapsto r_{a}]}\ a^{\prime}$ So assume (3) $t_{1}\ \llbracket R\rrbracket_{\gamma[X\mapsto r_{a}]}\ t_{2}$, and show $\textit{in}_{X,R}\ t_{1}\ [r_{a}]\ a^{\prime}\ t_{2}$ This follows, by Deapplication {4} and $\beta\eta$-Closure {2}, from $\textit{in}_{X,R}\ t_{1}\ a\ [r]\ a^{\prime}\ t_{2}$ Further applying $\beta\eta$-Closure {2}, this follows from $a\ (\textit{fmap}_{X,R}\ (\textit{fold}\ a)\ t_{1})\ [r]\ a^{\prime}\ t_{2}$ By (2), this follows from $(\textit{fmap}_{X,R}\ (\textit{fold}\ a)\ t_{1})\ \llbracket R\rrbracket_{\gamma[X\mapsto r]}\ t_{2}$ which follows from (3) by Fmap Fold {61}, applying also Environment Extension {7} to get the contexts and types in the required form; and using $X\in^{+}R$. ∎ ### X-D Equating $D_{\textit{param}}$ and $D_{\textit{ind}}$ ###### Theorem 62 (Inductive Types). Suppose $\textit{FV}(R)=\\{X\\}$ and $X\in^{+}R$ . 1. i. $t\,\llbracket D_{\textit{ind}}\subseteq D_{\textit{param}}\rrbracket_{\gamma}\,t^{\prime}$ 2. ii. If $R$ is $\forall^{+}$, then $t\,\llbracket D_{\textit{param}}\subseteq D_{\textit{ind}}\rrbracket_{\gamma}\,t^{\prime}$ 3. iii. If $R$ is $\forall^{+}$, then $t\,\llbracket D_{\textit{ind}}\topdoteq D_{\textit{param}}\rrbracket_{\gamma}\ t^{\prime}$ ###### Proof. Recall the definitions: $\begin{array}[]{lll}D_{\textit{param}}&:=&\forall\,X.\,(R\to X)\to X\\\ D_{\textit{ind}}&:=&\forall\,X.\,([\textit{in}_{X,R}]\,(R\to X)\,[\textit{in}_{X,R}])\Rightarrow X\end{array}$ For this proof, let us apply Subset {15} implicitly. (iii) follows from (i) and (ii). To show (i), assume $t\,\llbracket D_{\textit{ind}}\rrbracket_{\gamma}\,t^{\prime}$, and instantiate $X$ in this assumption with $D_{\textit{param}}$. 
This implies the required $t\,\llbracket D_{\textit{param}}\rrbracket_{\gamma}\,t^{\prime}$, as long as (applying Interpretation Over Substitution {6}) $\textit{in}_{X,R}\,\llbracket[D_{\textit{param}}/X]R\to D_{\textit{param}}\rrbracket_{\gamma}\ \textit{in}_{X,R}$ But this is exactly In For $D_{\textit{param}}$ {56}. To show (ii), assume (1) $t\,\llbracket D_{\textit{param}}\rrbracket_{\gamma}\,t^{\prime}$, and instantiate $X$ in this assumption with $D_{\textit{ind}}$ to get $t\,\llbracket([D_{\textit{ind}}/X]R\to D_{\textit{ind}})\to D_{\textit{ind}}\rrbracket_{\gamma}\ t^{\prime}$ (Here we again applied Interpretation Over Substitution {6}.) From this and In For $D_{\textit{ind}}$ {58}, we obtain (2) $t\,\textit{in}_{X,R}\ \llbracket D_{\textit{ind}}\rrbracket_{\gamma}\ t^{\prime}\ \textit{in}_{X,R}$ This is close to what we want. Applying Reflection {60} to (1), we obtain $t\,\textit{in}_{X,R}\ \llbracket D_{\textit{param}}\rrbracket_{\gamma}\ t^{\prime}$ Since $\textit{FV}(R)=\\{X\\}$, $D_{\textit{param}}$ is closed, so we may change $\gamma$ to $e$ here and in (1), by Environment Extension {7}. Then since $R$ is $\forall^{+}$, $D_{\textit{param}}$ is also, and we can apply Identity Inclusion {32} to get: $\begin{array}[]{l}t\,\textit{in}_{X,R}=_{\beta\eta}t^{\prime}\\\ t=_{\beta\eta}t^{\prime}\end{array}$ Using these facts with $\beta\eta$-Closure {2}, we may simplify (2) to the desired $t\ \llbracket D_{\textit{ind}}\rrbracket_{\gamma}\ t^{\prime}$. ∎ In light of this result, we denote $D_{\textit{param}}$ for particular $X$ and $R$ as $D_{X,R}$, and freely change between it and $D_{\textit{ind}}$ as long as $R$ is $\forall^{+}$. ### X-E Example: Nat In this section, we consider the basic example of natural numbers.
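As a concrete reference point for the encoding developed below (Definitions 63 and 64), here is a minimal Haskell sketch of $D_{X,1+X}$, with `Either () x` playing the role of $1+X$; the names `Nat`, `runNat`, `inN`, `suc`, and `toInt` are ours, introduced only for illustration.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A sketch of Nat := D_{X,1+X}, with Either () x playing the role of 1 + X.
newtype Nat = Nat { runNat :: forall x. (Either () x -> x) -> x }

-- Mirrors in_{X,1+X} := \x. \a. a (fmap (fold a) x), where fold a d = d a
-- and fmap is supplied by the Functor instance of Either ().
inN :: Either () Nat -> Nat
inN v = Nat (\a -> a (fmap (\n -> runNat n a) v))

zero :: Nat
zero = inN (Left ())        -- in (inl unit)

suc :: Nat -> Nat
suc = inN . Right           -- in . inr

-- add := \n. \m. n <m, succ>: iterate over n, using m as the base case.
-- (Haskell's `either` plays the role of the case eliminator <.,.>.)
add :: Nat -> Nat -> Nat
add n m = runNat n (either (const m) suc)

-- Interpret a numeral as an Int by folding with the evident algebra.
toInt :: Nat -> Int
toInt n = runNat n (either (const 0) (+ 1))
```

For instance, `toInt (add (suc (suc zero)) (suc zero))` evaluates to 3.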
To express this type using the parameter $R$ of $D_{X,R}$, we first need some standard types (namely $A+B$ and $1$) and associated term definitions: for $A+B$, constructors inl and inr, and eliminator $\langle n,m\rangle$; and for $1$, constructor unit. ###### Definition 63. $\begin{array}[]{lll}A+B&:=&\forall\,X.\,(A\to X)\to(B\to X)\to X\\\ 1&:=&\forall\,X.\,X\to X\\\ \textit{inl}&:=&\lambda\,a.\,\lambda\,x.\,\lambda\,y.\,x\ a\\\ \textit{inr}&:=&\lambda\,b.\,\lambda\,x.\,\lambda\,y.\,y\ b\\\ \langle n,m\rangle&:=&\lambda\,c.\,c\ n\ m\\\ \textit{unit}&:=&I\end{array}$ Now we define Nat and its constructors as expected, with addition as an example operation: ###### Definition 64. $\begin{array}[]{lll}\textit{Nat}&:=&D_{X,1+X}\\\ \textit{zero}&:=&\textit{in}_{X,1+X}\ (\textit{inl}\ \textit{unit})\\\ \textit{succ}&:=&\textit{in}_{X,1+X}\circ\textit{inr}\\\ \textit{add}&:=&\lambda\,n.\,\lambda\,m.\,n\ \langle m,\textit{succ}\rangle\end{array}$ Thanks to Soundness of System F For Closed Terms {44} and the usual System F typings of the above term definitions (including In For $D_{\textit{param}}$ (System F) {55}), we have the following relational typings: ###### Lemma 65 (Nat Operations). $\begin{array}[]{lll}\textit{zero}&::&\llbracket\textit{Nat}\rrbracket_{\gamma}\\\ \textit{succ}&::&\llbracket\textit{Nat}\to\textit{Nat}\rrbracket_{\gamma}\\\ \textit{add}&::&\llbracket\textit{Nat}\to\textit{Nat}\to\textit{Nat}\rrbracket_{\gamma}\end{array}$ Following a development very similar to that for Inductive Types {62}, we may also equate $A+B$ and $1$ with inductive variants: ###### Definition 66. $\begin{array}[]{lll}A+_{i}B&:=&\forall\,X.\,[\textit{inl}]\,(A\to X)\,[\textit{inl}]\Rightarrow\\\ &&\ \ \ \ \ \ [\textit{inr}]\,(B\to X)\,[\textit{inr}]\Rightarrow X\\\ 1_{i}&:=&\forall\,X.\,[\textit{unit}]X[\textit{unit}]\Rightarrow X\end{array}$ Recall the notation $R\topdoteq R^{\prime}$ (Definition 18). ###### Proposition 67.
$\begin{array}[]{l}t_{1}\ \llbracket A+B\topdoteq A+_{i}B\rrbracket_{\gamma}\ t_{2}\\\ t_{1}\ \llbracket 1\topdoteq 1_{i}\rrbracket_{\gamma}\ t_{2}\end{array}$ Finally, let us prove a basic inductive property of add, as an example. ###### Lemma 68. $\lambda\,n.\,\textit{add}\ n\ \textit{zero}\ \llbracket\textit{Nat}\to\textit{Nat}\rrbracket_{\gamma}\ I$ ###### Proof. Assume (1) $n\ \llbracket\textit{Nat}\rrbracket_{\gamma}\ n^{\prime}$, and show $\textit{add}\ n\ \textit{zero}\ \llbracket\textit{Nat}\rrbracket_{\gamma}\ n^{\prime}$ (A) Applying Inductive Types {62} to (1) allows us to reason inductively; we instantiate the type variable $X$ in $D_{\textit{ind}}$ with the interpretation of $r:=\lambda\,n.\,\textit{add}\ n\ \textit{zero}\cdot\textit{Nat}$ We must show this is preserved by $\textit{in}_{X,1+X}$; that is $\textit{in}_{X,1+X}::\llbracket(1+r)\to r\rrbracket_{\gamma}$ (B) By Deapplication {4} this suffices for (A). For (B), assume (2) $v\ \llbracket 1+r\rrbracket_{\gamma}\ v^{\prime}$, and show $\textit{in}_{X,1+X}\ v\ \llbracket r\rrbracket_{\gamma}\ \textit{in}_{X,1+X}\ v^{\prime}$ Switch to the inductive view of $1+r$ in (2), and induct using the interpretation of $r^{\prime}:=\textit{in}_{X,1+X}\mathbin{\ast}r$ By Deapplication {4}, this is sufficient for (B).
We must prove * • $\textit{inl}\ \textit{unit}::\llbracket r^{\prime}\rrbracket_{\gamma}$ * • $\textit{inr}::\llbracket r^{\prime}\to r^{\prime}\rrbracket_{\gamma}$ Unfolding definitions of $r^{\prime}$ and $r$ using Deapplication {4}, we confirm the following using $\beta\eta$-Closure {2} and Nat Operations {65} * • $\textit{add}\ (\textit{in}_{X,1+X}\ (\textit{inl}\ \textit{unit}))\ \textit{zero}\ \llbracket\textit{Nat}\rrbracket_{\gamma}\ (\textit{in}_{X,1+X}\ (\textit{inl}\ \textit{unit}))$ * • $\textit{add}\ (\textit{in}_{X,1+X}\ (\textit{inr}\ x))\ \textit{zero}\ \llbracket\textit{Nat}\rrbracket_{\gamma}\ (\textit{in}_{X,1+X}\ (\textit{inr}\ x^{\prime}))$ from $\textit{add}\ x\ \textit{zero}\ \llbracket\textit{Nat}\rrbracket_{\gamma}\ x^{\prime}$ ∎ ### X-F Discussion Wadler proves a result similar to Inductive Types {62} for the special case of the natural numbers, in Section 5 of [10]. He shows, as a theorem of a second-order logic, that being related by the relational interpretation of $\textit{Nat}_{\textit{param}}$ is the same as being equal natural numbers that satisfy a predicate of unary induction. The result here is more general, covering any inductive datatype defined by a positive type scheme $R$. The equivalence is expressed not in a second-order logic, but in RelTT. So the proof is in terms only of binary relations, including a binary-relational form of induction (instead of using unary induction). Another technical difference is that the proof here relies on Identity Inclusion {32}. This does not show up in Wadler’s proof, but only because he considers just the simple example of natural numbers, with the type $\forall\,X.\,(X\to X)\to X\to X$. One may confirm that a categorical version, as we consider here, would require an analogous property for the proof of his Proposition 14 [10]. Thanks to Inductive Types {62}, we can transport properties between the denotations of $D_{\textit{ind}}$ and $D_{\textit{param}}$.
For a simple example: ###### Lemma 69. Suppose $R$ is $\forall^{+}$. Then $\llbracket D_{\textit{ind}}\rrbracket_{\gamma}$ is a per. ###### Proof. If $R$ is $\forall^{+}$, then so is $D_{\textit{param}}$, and hence $\llbracket D_{\textit{param}}\rrbracket_{\gamma}$ is a per by $\forall^{+}$ Per {34}. This implies $\llbracket D_{\textit{ind}}\rrbracket_{\gamma}$ is also a per, by Inductive Types {62}. ∎ Proving this lemma directly is not hard, but Inductive Types {62} makes it unnecessary. Richer examples are enabled thanks to Substitutivity Of Relational Equality {20}. ## XI Positive-recursive types A very useful type form from standard type theory is the recursive type $\textit{rec}\,X.\,R$, where $X$ is bound in $R$, and $X$ occurs only positively in $R$. The type should be isomorphic to its unfolding $[\textit{rec}\,X.\,R/X]R$, where we desire that the functions witnessing the isomorphism are identity functions. (This form of recursive type can be seen as unifying the standardly distinguished _isorecursive_ and _equirecursive_ forms.) This section shows how a relational version of this type can be derived in RelTT. The development is a (nontrivial) adaptation of ideas from [15] to our relational setting. It is built on the derivations of subset type and implicit product from Section III, and makes crucial use of Monotonicity {48}. Let us assume that type $R$ may contain type variable $X$ free. ###### Definition 70. $\textit{rec}\,X.\,R:=\forall\,X.\,(R\subseteq X)\Rightarrow X$ ###### Lemma 71 (Rec Body). If $\llbracket R\rrbracket_{\gamma[X\mapsto r]}\subseteq r$, then $\llbracket\textit{rec}\,X.\,R\rrbracket_{\gamma}\subseteq r$. ###### Proof. Assume (1) $t_{1}\ \llbracket\textit{rec}\,X.\,R\rrbracket_{\gamma}\ t_{2}$, and instantiate this with $r$, to obtain $t_{1}\ \llbracket(R\subseteq X)\Rightarrow X\rrbracket_{\gamma[X\mapsto r]}\ t_{2}$ From this, applying Subset {15} and Implicit Product {17}, we have the desired $t_{1}\ [r]\ t_{2}$, as long as $\llbracket R\rrbracket_{\gamma[X\mapsto r]}\subseteq r$.
But the latter is a condition of the lemma. ∎ ###### Lemma 72 (Rec Fold). If $X\in^{+}R$, then $t_{1}\ \llbracket[\textit{rec}\,X.\,R/X]R\subseteq\textit{rec}\,X.\,R\rrbracket_{\gamma}\ t_{2}$. ###### Proof. By Subset {15}, it suffices to show $\llbracket[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}\subseteq\llbracket\textit{rec}\,X.\,R\rrbracket_{\gamma}$. So assume (1) $t\ \llbracket[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}\ t^{\prime}$, and show $t\ \llbracket\textit{rec}\,X.\,R\rrbracket_{\gamma}\ t^{\prime}$. Applying the semantics, Implicit Product {17}, and Subset {15}, it suffices to assume $r\in\mathcal{R}$ and (2) $\llbracket R\rrbracket_{\gamma[X\mapsto r]}\subseteq r$, and show $t\ [r]\ t^{\prime}$. Applying Interpretation Over Substitution {6} to (1), we have (3) $t\ \llbracket R\rrbracket_{\gamma[X\mapsto\llbracket\textit{rec}\,X.\,R\rrbracket_{\gamma}]}\ t^{\prime}$. By Rec Body {71} with (2), $\llbracket\textit{rec}\,X.\,R\rrbracket_{\gamma}\subseteq r$. By Monotonicity {48}, (3) implies $t\ \llbracket R\rrbracket_{\gamma[X\mapsto r]}\ t^{\prime}$. Combining this with (2), we obtain the desired $t\ [r]\ t^{\prime}$. ∎ ###### Lemma 73 (Rec Unfold). If $X\in^{+}R$, then $t_{1}\ \llbracket\textit{rec}\,X.\,R\subseteq[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}\ t_{2}$. ###### Proof. By Subset {15}, it suffices to show $\llbracket\textit{rec}\,X.\,R\rrbracket_{\gamma}\subseteq\llbracket[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}$. So assume (1) $t\ \llbracket\textit{rec}\,X.\,R\rrbracket_{\gamma}\ t^{\prime}$ and show $t\ \llbracket[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}\ t^{\prime}$.
Instantiate (1) with $\llbracket[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}$ to obtain $t\ \llbracket(R\subseteq X)\Rightarrow X\rrbracket_{\gamma[X\mapsto\llbracket[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}]}\ t^{\prime}$ Applying Interpretation Over Substitution {6}, this is equivalent to $\begin{array}[]{l}t\ \llbracket([[\textit{rec}\,X.\,R/X]R/X]R\subseteq[\textit{rec}\,X.\,R/X]R)\Rightarrow\\\ \hskip 156.49014pt[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}\ t^{\prime}\end{array}$ By Implicit Product {17} and Subset {15}, this implies the desired typing as long as $\llbracket[[\textit{rec}\,X.\,R/X]R/X]R\rrbracket_{\gamma}\subseteq\llbracket[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}$ But this follows by Monotonicity {48} (since $X\in^{+}R$) from $\llbracket[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}\subseteq\llbracket\textit{rec}\,X.\,R\rrbracket_{\gamma}$ And this follows (by Subset {15}) directly from Rec Fold {72}. ∎ ###### Theorem 74 (Recursive Types). If $X\in^{+}R$, then $t_{1}\ \llbracket\textit{rec}\,X.\,R\topdoteq[\textit{rec}\,X.\,R/X]R\rrbracket_{\gamma}\ t_{2}$ ###### Proof. Using Relational Equality {19}, this follows from Rec Fold {72} and Rec Unfold {73}. ∎ ## XII A relational type system Having considered now some of the expressive power of RelTT, in its ability to derive types which are often taken as primitive – for example, inductive types are derived here, but primitive for the Calculus of Inductive Constructions [16] – let us turn to the question of an implementable type system for RelTT. We follow the approach suggested by the Curry-Howard correspondence to devise a system of proof terms for derivations in RelPf. Figure 7 gives the syntax for contexts $\Gamma$ and proof terms $p$ of RelTy, together with an erasure function mapping these back to pure $\lambda$-calculus. Proof terms $(p,p^{\prime})$ and $\pi\,p-x.u.v.p^{\prime}$ are used for composition; the $\pi$-term is like an existential elimination.
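The erasure of Figure 7 is a straightforward structural recursion, which can be sketched in Haskell over a proof-term AST. The constructor names below are ours, the type and term annotations that erasure discards are elided, and we assume the standard Church pairing $(t,t^{\prime})=\lambda\,c.\,c\ t\ t^{\prime}$ for the pairs of Definition 22.

```haskell
-- Pure lambda-terms and RelTy proof terms (constructor names are ours;
-- annotations that erasure discards are elided).
data Tm = V String | L String Tm | A Tm Tm
  deriving (Eq, Show)

data Pf
  = U String                 -- assumption u
  | PLam String Pf           -- lambda u:T. p
  | PApp Pf Pf               -- p p'
  | PInst Pf                 -- p{T}
  | PTLam Pf                 -- Lambda X. p
  | PConv Pf                 -- t <| p |> t'
  | PCupI Pf                 -- converse introduction
  | PCupE Pf                 -- converse elimination
  | PIota                    -- iota{t,t'}
  | PRho Pf Pf               -- rho{x.t1,t2} p - p'
  | PPair Pf Pf              -- (p, p')
  | PPi Pf String String Pf  -- pi p - x.u.v.p'

-- The erasure |.| of Figure 7, with Church pairs (t,t') = \c. c t t'.
erase :: Pf -> Tm
erase (U u)           = V u
erase (PLam u p)      = L u (erase p)
erase (PApp p q)      = A (erase p) (erase q)
erase (PInst p)       = erase p
erase (PTLam p)       = erase p
erase (PConv p)       = erase p
erase (PCupI p)       = erase p
erase (PCupE p)       = erase p
erase PIota           = L "x" (V "x")                            -- I
erase (PRho _ q)      = erase q                                  -- |p'|
erase (PPair p q)     = L "c" (A (A (V "c") (erase p)) (erase q))
erase (PPi p _ u v q) = A (erase p) (L u (L v (erase q)))
```

Every construct that only manipulates types or conversions erases transparently, so the computational content of a proof is carried entirely by its assumption, abstraction, application, pairing, and projection structure.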
Erasure will indeed treat proofs of relational typings by compositions as pairs (Definition 22). The typing rules for RelTy are given in Figure 8. Given a context $\Gamma$ and a proof term $p$, the rules may be read bottom-up as an algorithm to compute the relational typing $t\,[T]\,t^{\prime}$ (if any) proved by the proof term. Proofs are organized in natural-deduction style: each type construct has introduction and elimination forms. For example, the introduction form for an identity $t\,[t^{\prime}]\,t^{\prime}\,t$ is $\iota\\{t,t^{\prime}\\}$. The elimination is more complicated, unfortunately, as we must describe substitution, using a proven identity $t\,[t^{\prime\prime}]\,t^{\prime}$, into the terms in some other relational typing. The syntax for the elimination form uses a “guide” $\\{x.t_{1},t_{2}\\}$ to give a mechanism for locating instances of $t$ in the left and right terms of the relational typing, to be rewritten to $t^{\prime}$. The variable $x$ in terms $t_{1}$ and $t_{2}$ marks these locations. By design, RelTy exactly follows the structure of RelPf. Define $\llcorner\Gamma\lrcorner$ by $\begin{array}[]{lll}\llcorner\cdot\lrcorner&=&\cdot\\\ \llcorner\Gamma,x:t\,[R]\,t^{\prime}\lrcorner&=&\llcorner\Gamma\lrcorner,t\,[R]\,t^{\prime}\end{array}$ This maps RelTy contexts to RelPf contexts. A reverse mapping $\langle\Gamma\rangle$ can be defined as $\langle\Gamma\rangle_{k}$ where $k$ is the length of $\Gamma$, and the helper function is defined as follows, using a canonical ordering $x_{1},x_{2},\ldots$ for assumption variables: $\begin{array}[]{lll}\langle\cdot\rangle_{k}&=&\cdot\\\ \langle\Gamma,t\,[R]\,t^{\prime}\rangle_{k}&=&\langle\Gamma\rangle_{k-1},x_{k}:t\,[R]\,t^{\prime}\end{array}$ ###### Theorem 75 (RelTy-RelPf Isomorphism). 1. i. If $\Gamma\vdash p:t\,[R]\,t^{\prime}$ in RelTy, then $\llcorner\Gamma\lrcorner\vdash t\,[R]\,t^{\prime}$ in RelPf. 2. ii. 
If $\Gamma\vdash t\,[R]\,t^{\prime}$ in RelPf, then there exists $p$ such that $\langle\Gamma\rangle\vdash p:t\,[R]\,t^{\prime}$ in RelTy. ###### Proof. For (i): because RelTy just expands RelPf with proof terms, the proof amounts to erasing all proof terms (including assumptions $u$ in contexts) from RelTy derivations. For (ii): by design, RelTy has proof-term constructs corresponding to all proof rules of RelPf, so the proof amounts to recursively adding in those terms. ∎ If we project even further, we can map from RelTy to System F. Recall the definition of pairs (Definition 22), which are used in projecting composition. ###### Definition 76. Define $|R|$ recursively by: $\begin{array}[]{lll}|X|&=&X\\\ |R\to R^{\prime}|&=&|R|\to|R^{\prime}|\\\ |\forall\,X.\,R|&=&\forall\,X.\,|R|\\\ |R^{\cup}|&=&|R|\\\ |R\cdot R^{\prime}|&=&|R|\times|R^{\prime}|\\\ |t|&=&\forall\,X.\,X\to X\end{array}$ Extend this to contexts by recursively defining $|\Gamma|$: $\begin{array}[]{lll}|\cdot|&=&\cdot\\\ |\Gamma,u:t\,[R]\,t^{\prime}|&=&|\Gamma|,u:|R|\end{array}$ ###### Theorem 77 (RelTy Projection). If $\Gamma\vdash p:t\,[R]\,t^{\prime}$ then $|\Gamma|\vdash|p|:|R|$ in System F. ###### Proof. The proof is by induction on the assumed RelTy derivation. Case: $\dfrac{x:t\,[R]\,t^{\prime}\in\Gamma}{\Gamma\vdash x:t\,[R]\,t^{\prime}}$ From $x:t\,[R]\,t^{\prime}\in\Gamma$ we get $x:|R|\in|\Gamma|$ and hence the desired conclusion. Case: $\dfrac{\Gamma,u:x\,[R]\,x^{\prime}\vdash p:t\,[R^{\prime}]\,t^{\prime}\quad(*)}{\Gamma\vdash\lambda\,u:R.\,p:\lambda\,x.\,t\,[R\to R^{\prime}]\,\lambda\,x^{\prime}.\,t^{\prime}}$ By the IH we have $|\Gamma|,u:|R|\vdash|p|:|R^{\prime}|$, from which we deduce the desired $|\Gamma|\vdash\lambda\,u.\,|p|:|R\to R^{\prime}|$.
Case: $\dfrac{\Gamma\vdash p_{1}:t_{1}\,[R\to R^{\prime}]\,t_{1}^{\prime}\quad\Gamma\vdash p_{2}:t_{2}\,[R]\,t_{2}^{\prime}}{\Gamma\vdash p_{1}\,p_{2}:t_{1}\,t_{2}\,[R^{\prime}]\,t_{1}^{\prime}\,t_{2}^{\prime}}$ By the IH we have $|\Gamma|\vdash|p_{1}|:|R\to R^{\prime}|$ and $|\Gamma|\vdash|p_{2}|:|R|$, from which we deduce the desired $|\Gamma|\vdash|p_{1}\,p_{2}|:|R^{\prime}|$. Case: $\dfrac{\Gamma\vdash p:t\,[\forall\,X.\,R^{\prime}]\,t^{\prime}}{\Gamma\vdash p\\{R\\}:t\,[[R/X]R^{\prime}]\,t^{\prime}}$ By the IH we have $|\Gamma|\vdash|p|:\forall\,X.\,|R^{\prime}|$, from which the desired $|\Gamma|\vdash|p|:|[R/X]R^{\prime}|$ follows. Case: $\dfrac{\Gamma\vdash p:t\,[R]\,t^{\prime}\quad X\not\in\textit{FV}(\Gamma)}{\Gamma\vdash\Lambda\,X.\,p:t\,[\forall\,X.\,R]\,t^{\prime}}$ By the IH we have $|\Gamma|\vdash|p|:|R|$, from which the desired $|\Gamma|\vdash|p|:\forall\,X.\,|R|$ follows. Case: $\dfrac{\Gamma\vdash p:t_{1}\,[R]\,t_{2}\quad t_{1}=_{\beta\eta}t_{1}^{\prime}\quad t_{2}=_{\beta\eta}t_{2}^{\prime}}{\Gamma\vdash t_{1}^{\prime}\blacktriangleleft p\blacktriangleright t_{2}^{\prime}:t_{1}^{\prime}\,[R]\,t_{2}^{\prime}}$ The erasure of $t_{1}^{\prime}\blacktriangleleft p\blacktriangleright t_{2}^{\prime}$ is $|p|$, so the desired conclusion is just $|\Gamma|\vdash|p|:|R|$, which we have by the IH. Case: $\dfrac{\Gamma\vdash p:t\,[R^{\cup}]\,t^{\prime}}{\Gamma\vdash\cup_{e}\,p:t^{\prime}\,[R]\,t}$ Similar to the previous case. Case: $\dfrac{\Gamma\vdash p:t\,[R]\,t^{\prime}}{\Gamma\vdash\cup_{i}\,p:t^{\prime}\,[R^{\cup}]\,t}$ Similar to the previous case. Case: $\Gamma\vdash\iota\\{t,t^{\prime}\\}:t\,[t^{\prime}]\,t^{\prime}\,t$ (an axiom). Here $|\iota\\{t,t^{\prime}\\}|$ is $I$, and the erasure of the term promotion $t^{\prime}$ is $\forall\,X.\,X\to X$. So this inference translates to the familiar typing of the identity function in System F.
Case: $\dfrac{\Gamma\vdash p:t\,[t^{\prime\prime}]\,t^{\prime}\quad\Gamma\vdash p^{\prime}:[t^{\prime\prime}\,t/x]t_{1}\,[R]\,[t^{\prime\prime}\,t/x]t_{2}}{\Gamma\vdash\rho\\{x.t_{1},t_{2}\\}\ p-p^{\prime}:[t^{\prime}/x]t_{1}\,[R]\,[t^{\prime}/x]t_{2}}$ By the IH we have $|\Gamma|\vdash|p^{\prime}|:|R|$. Since the $\rho$-proof erases to just the erasure of its subproof $p^{\prime}$, this suffices for the desired conclusion. Case: $\dfrac{\Gamma\vdash p:t\,[R\cdot R^{\prime}]\,t^{\prime}\quad\Gamma,u:t\,[R]\,x,v:x\,[R^{\prime}]\,t^{\prime}\vdash p^{\prime}:t_{1}\,[R^{\prime\prime}]\,t_{2}\quad(**)}{\Gamma\vdash\pi\,p-x.u.v.p^{\prime}:t_{1}\,[R^{\prime\prime}]\,t_{2}}$ By the IH we have $|\Gamma|\vdash|p|:|R|\times|R^{\prime}|$ and $|\Gamma|,u:|R|,v:|R^{\prime}|\vdash|p^{\prime}|:|R^{\prime\prime}|$. By the definition of product types in System F, from these derivations we may easily establish $|\Gamma|\vdash|p|\,\lambda\,u.\,\lambda\,v.\,|p^{\prime}|:|R^{\prime\prime}|$, which suffices since $|\pi\,p-x.u.v.p^{\prime}|=|p|\ \lambda\,u.\,\lambda\,v.\,|p^{\prime}|$. Case: $\dfrac{\Gamma\vdash p:t\,[R]\,t^{\prime\prime}\quad\Gamma\vdash p^{\prime}:t^{\prime\prime}\,[R^{\prime}]\,t^{\prime}}{\Gamma\vdash(p,p^{\prime}):t\,[R\cdot R^{\prime}]\,t^{\prime}}$ By the IH we have $|\Gamma|\vdash|p|:|R|$ and $|\Gamma|\vdash|p^{\prime}|:|R^{\prime}|$. With these we may deduce $|\Gamma|\vdash(|p|,|p^{\prime}|):|R|\times|R^{\prime}|$ by the definition of product types in System F. ∎ This result is interesting, because it shows that any valid RelTy proof term proves a property of its own erasure: ###### Proposition 78 (RelTy Self). If $\Gamma\vdash p:t\,[R]\,t^{\prime}$, then $\Gamma\vdash p:|p|\,[R]\,|p|$. ###### Proof sketch. From the assumed RelTy derivation we obtain $\Gamma\vdash|p|\,[R]\,|p|$ using RelTy Projection {77} and Soundness of System F {43}.
We need then just a somewhat more informative version of part (ii) of RelTy-RelPf Isomorphism {75}, which maps RelPf derivations to particular proof terms $p$ (not just showing that some such $p$ exists) in correspondence with the RelPf derivations. ∎ $\begin{array}[]{lll}\Gamma&::=&\cdot\ |\ \Gamma,u:t\,[R]\,t^{\prime}\\\ \\\ \textit{proof terms }p&::=&u\ |\ \lambda\,u:T.\,p\ |\ p\ p^{\prime}\ |\\\ &&p\\{T\\}\ |\ \Lambda\,X.\,p\ |\\\ &&t\blacktriangleleft p\blacktriangleright t^{\prime}\ |\\\ &&\cup_{i}\,p\ |\ \cup_{e}\,p\ |\\\ &&\iota\\{t,t^{\prime}\\}\ |\ \rho\\{x.t_{1},t_{2}\\}\ p-p^{\prime}\ |\\\ &&(p,p^{\prime})\ |\ \pi\,p-x.u.v.p^{\prime}\\\ \\\ |u|&=&u\\\ |\lambda\,u:T.\,p|&=&\lambda\,u.\,|p|\\\ |p\ p^{\prime}|&=&|p|\ |p^{\prime}|\\\ |p\\{T\\}|&=&|p|\\\ |\Lambda\,X.\,p|&=&|p|\\\ |t\blacktriangleleft p\blacktriangleright t^{\prime}|&=&|p|\\\ |\cup_{i}\,p|&=&|p|\\\ |\cup_{e}\,p|&=&|p|\\\ |\iota\\{t,t^{\prime}\\}|&=&I\\\ |\rho\\{x.t_{1},t_{2}\\}\ p-p^{\prime}|&=&|p^{\prime}|\\\ |(p,p^{\prime})|&=&(|p|,|p^{\prime}|)\\\ |\pi\,p-x.u.v.p^{\prime}|&=&|p|\,\lambda\,u.\,\lambda\,v.\,|p^{\prime}|\end{array}$ Figure 7: Syntax for proof terms of RelTy, and erasure to pure $\lambda$-calculus $\dfrac{x:t\,[R]\,t^{\prime}\in\Gamma}{\Gamma\vdash x:t\,[R]\,t^{\prime}}\qquad\dfrac{\Gamma,u:x\,[R]\,x^{\prime}\vdash p:t\,[R^{\prime}]\,t^{\prime}\quad(*)}{\Gamma\vdash\lambda\,u:R.\,p:\lambda\,x.\,t\,[R\to R^{\prime}]\,\lambda\,x^{\prime}.\,t^{\prime}}\qquad\dfrac{\Gamma\vdash p_{1}:t_{1}\,[R\to R^{\prime}]\,t_{1}^{\prime}\quad\Gamma\vdash p_{2}:t_{2}\,[R]\,t_{2}^{\prime}}{\Gamma\vdash p_{1}\,p_{2}:t_{1}\,t_{2}\,[R^{\prime}]\,t_{1}^{\prime}\,t_{2}^{\prime}}$ $\dfrac{\Gamma\vdash p:t\,[\forall\,X.\,R^{\prime}]\,t^{\prime}}{\Gamma\vdash p\\{R\\}:t\,[[R/X]R^{\prime}]\,t^{\prime}}\qquad\dfrac{\Gamma\vdash p:t\,[R]\,t^{\prime}\quad X\not\in\textit{FV}(\Gamma)}{\Gamma\vdash\Lambda\,X.\,p:t\,[\forall\,X.\,R]\,t^{\prime}}\qquad\dfrac{\Gamma\vdash p:t_{1}\,[R]\,t_{2}\quad t_{1}=_{\beta\eta}t_{1}^{\prime}\quad t_{2}=_{\beta\eta}t_{2}^{\prime}}{\Gamma\vdash t_{1}^{\prime}\blacktriangleleft p\blacktriangleright t_{2}^{\prime}:t_{1}^{\prime}\,[R]\,t_{2}^{\prime}}$ $\dfrac{\Gamma\vdash p:t\,[R^{\cup}]\,t^{\prime}}{\Gamma\vdash\cup_{e}\,p:t^{\prime}\,[R]\,t}\qquad\dfrac{\Gamma\vdash p:t\,[R]\,t^{\prime}}{\Gamma\vdash\cup_{i}\,p:t^{\prime}\,[R^{\cup}]\,t}\qquad\dfrac{}{\Gamma\vdash\iota\\{t,t^{\prime}\\}:t\,[t^{\prime}]\,t^{\prime}\,t}\qquad\dfrac{\Gamma\vdash p:t\,[t^{\prime\prime}]\,t^{\prime}\quad\Gamma\vdash p^{\prime}:[t^{\prime\prime}\,t/x]t_{1}\,[R]\,[t^{\prime\prime}\,t/x]t_{2}}{\Gamma\vdash\rho\\{x.t_{1},t_{2}\\}\ p-p^{\prime}:[t^{\prime}/x]t_{1}\,[R]\,[t^{\prime}/x]t_{2}}$ $\dfrac{\Gamma\vdash p:t\,[R\cdot R^{\prime}]\,t^{\prime}\quad\Gamma,u:t\,[R]\,x,v:x\,[R^{\prime}]\,t^{\prime}\vdash p^{\prime}:t_{1}\,[R^{\prime\prime}]\,t_{2}\quad(**)}{\Gamma\vdash\pi\,p-x.u.v.p^{\prime}:t_{1}\,[R^{\prime\prime}]\,t_{2}}\qquad\dfrac{\Gamma\vdash p:t\,[R]\,t^{\prime\prime}\quad\Gamma\vdash p^{\prime}:t^{\prime\prime}\,[R^{\prime}]\,t^{\prime}}{\Gamma\vdash(p,p^{\prime}):t\,[R\cdot R^{\prime}]\,t^{\prime}}$ Side condition (*) is $x\not\in\textit{FV}(\Gamma,R,R^{\prime})$. Side condition (**) is $x\not\in\textit{FV}(\Gamma,t_{1},t_{2},t,t^{\prime},R,R^{\prime},R^{\prime\prime})$. Figure 8: RelTy typing rules ## XIII Related work RelTT’s semantics (Figure 2) is a relational realizability semantics, where realizers are terms of untyped lambda calculus (cf. [17, 18]). Relational semantics for types has been studied extensively in the context of logical relations; see Chapter 8 of [19]. An influential branch of this work was initiated by Reynolds, on what is now called parametricity [9]. [20] frames some recent results, using categorical semantics. [21] proposes a similar realizability semantics, for the Calculus of Constructions plus an extensional equality type.
The major difference is that in RelTT, we propose a notation for asymmetric relations, which is lacking in [21]. Instead, constructions based on the semantics are done at the meta-level (where asymmetric relations can be described). Indeed, the denotable relations of [21] are partial equivalences – albeit of a modified form due to basing the semantics on “zig-zag complete” relations. In contrast, we have seen above some families of types whose denotations are partial equivalences (unmodified) in RelTT. But by design, not all types denote partial equivalences in RelTT, since reasoning about terms generally involves asymmetric relations; an important example we saw is Reflection {60}. Observational Type Theory (OTT) is an approach to type-specific extensionality principles in an intensional dependent type theory, based on a primitive heterogeneous equality type and associated operators [22]. RelTT is similar in deriving extensionality principles, but more radical in design: where OTT extends a traditional (i.e., unary) type theory including $W$-types with an extensional form of equality, RelTT takes a binary view of all types, and does not use dependent types at all. The resulting system is hence formally quite a bit simpler. Unlike [9] and subsequent works like [23], RelTT lacks Identity Extension. This property states that when free type variables are interpreted by identity relations, the relational meaning of a type $T$ is the identity relation on the unary (or “object”) interpretation of $T$. This is a very strong property, showing that the object interpretation of types gives canonical forms for the equivalence defined by the relational interpretation of types. But it rules out expression of asymmetric relations as types. RelTT preserves this possibility, at the cost of weakening Identity Extension to Identity Inclusion {32}.
In [24], Plotkin and Abadi introduce a second-order logic for reasoning about (typable) terms of System F by quantification over relations, and using a parametricity axiom. In contrast, RelTT uses relational types to express relations in a more compact way. A parametricity axiom would not make sense here, for there is no separate notion of unary typing from which relational typing could be stated to follow. The only typings are relational. RelTT may be compared with previous work of Stump et al. on Cedille [25, 26, 27]. Both systems aim at a minimalistic extension of a small pure type system as a foundation for type theory. Cedille extends the Curry-style Calculus of Constructions with dependent intersections, implicit products, and an equality type over untyped terms. RelTT extends System F with three relational operators based on a relational semantics. While the systems are roughly equivalent in formal complexity – with RelTT having the simplifying advantage of eschewing dependent types – RelTT delivers type-specific extensionality principles, which Cedille lacks. [28] considers how parametricity results can be embedded in constructive type theory, by elaborating types into corresponding theorems in the logic of so-called “reflective” pure type systems. Subsequent work built an extended PTS internalizing these theorems [29]. These papers consider fairly rich Church-style lambda calculi, in contrast to the more compact Curry-style calculus of RelTT. Finally, RelTT may be compared with Homotopy Type Theory (HoTT), a line too active in recent years to summarize here [30]. Both theories support functional extensionality. The two approaches have different origins: logical relations and parametricity for RelTT, homotopy theory and higher category theory for HoTT. A major point of difference is univalence: while RelTT allows one to express and derive relational equalities within the theory, these are based on semantic inclusions, not isomorphisms (as in univalence).
Thus, transporting results between isomorphic types as done in HoTT is not (in an obvious way) directly possible in RelTT. Another point of comparison is the compactness of the theory. RelTT is based on a very compact semantics for a small number of relational type forms. In contrast, systems like Cubical Agda, to take one notable example, are based on a larger array of primitives [31]. Whereas the free theorems provided by parametricity allow proofs to be transported to observationally equivalent terms, HoTT uses explicit equivalences between terms for this purpose. Only very recent work has considered how to combine these two complementary approaches inside univalent type theories [32]. ## XIV Conclusion and future work Based on a binary relational semantics, RelTT is a new minimalistic extensional type theory, where inductive and positive-recursive types are derivable. The theory does not have dependent types, and indeed, an indirect conclusion of the paper is that type theory does not require dependent types for reasoning about programs. Just passing from the traditional unary semantics to a binary-relational one opens the possibility for formal (extensional) reasoning about programs. Future work includes direct support for existential types, for deriving coinductive types; the standard double-negation encoding of existentials is problematic due to the requirement of forall-positivity for Identity Inclusion {32}. ## Acknowledgments We gratefully acknowledge NSF support under award 1524519, and DoD support under award FA9550-16-1-0082 (MURI program). First author: St. Jer., AMDG. ## References * [1] L. M. de Moura, S. Kong, J. Avigad, F. van Doorn, and J. von Raumer, “The Lean Theorem Prover (System Description),” in _Automated Deduction - CADE-25 - 25th International Conference on Automated Deduction, Berlin, Germany, August 1-7, 2015, Proceedings_ , ser. Lecture Notes in Computer Science, A. P. Felty and A. Middeldorp, Eds., vol. 9195. Springer, 2015, pp.
378–388. [Online]. Available: https://doi.org/10.1007/978-3-319-21401-6_26 * [2] M. Sozeau, S. Boulier, Y. Forster, N. Tabareau, and T. Winterhalter, “Coq coq correct! verification of type checking and erasure for coq, in coq,” _Proc. ACM Program. Lang._ , vol. 4, no. POPL, pp. 8:1–8:28, 2020. [Online]. Available: https://doi.org/10.1145/3371076 * [3] C. Hermida, U. S. Reddy, and E. P. Robinson, “Logical relations and parametricity - A reynolds programme for category theory and programming languages,” _Electron. Notes Theor. Comput. Sci._ , vol. 303, pp. 149–180, 2014. [Online]. Available: https://doi.org/10.1016/j.entcs.2014.02.008 * [4] S. Givant, “The calculus of relations as a foundation for mathematics,” _J. Autom. Reason._ , vol. 37, no. 4, p. 277–322, Nov. 2006. [Online]. Available: https://doi.org/10.1007/s10817-006-9062-x * [5] A. Miquel, “The Implicit Calculus of Constructions,” in _Typed Lambda Calculi and Applications_ , ser. Lecture Notes in Computer Science, vol. 2044. Springer, 2001, pp. 344–359. * [6] J. Krivine, _Lambda-calculus, types and models_ , ser. Ellis Horwood series in computers and their applications. Masson, 1993, available from Krivine’s web page. * [7] P. Pistone, “On completeness and parametricity in the realizability semantics of system F,” _Log. Methods Comput. Sci._ , vol. 15, no. 4, 2019. [Online]. Available: https://doi.org/10.23638/LMCS-15(4:6)2019 * [8] J. R. Hindley and J. P. Seldin, _Lambda-Calculus and Combinators: An Introduction_ , 2nd ed. USA: Cambridge University Press, 2008. * [9] J. C. Reynolds, “Types, Abstraction and Parametric Polymorphism,” in _Information Processing 83, Proceedings of the IFIP 9th World Computer Congress, Paris, France, September 19-23, 1983_ , R. E. A. Mason, Ed. North-Holland/IFIP, 1983, pp. 513–523. * [10] P. Wadler, “The Girard-Reynolds isomorphism (second edition),” _Theor. Comput. Sci._ , vol. 375, no. 1-3, pp. 201–226, 2007. [Online].
Available: https://doi.org/10.1016/j.tcs.2006.12.042 * [11] H. P. Barendregt, “Lambda Calculi with Types,” in _Handbook of Logic in Computer Science (Vol. 2)_ , S. Abramsky, D. M. Gabbay, and S. E. Maibaum, Eds. New York, NY, USA: Oxford University Press, Inc., 1992, pp. 117–309. * [12] H. D. Eades III, A. Stump, and R. McCleeary, “Dualized simple type theory,” _Log. Methods Comput. Sci._ , vol. 12, no. 3, 2016. [Online]. Available: https://doi.org/10.2168/LMCS-12(3:2)2016 * [13] P. Wadler, “Recursive types for free!” 1990, available at https://homepages.inf.ed.ac.uk/wadler/papers/free-rectypes/free-rectypes.txt. * [14] T. Uustalu and V. Vene, “Primitive (Co)Recursion and Course-of-Value (Co)Iteration, Categorically,” _Informatica, Lith. Acad. Sci._ , vol. 10, no. 1, pp. 5–26, 1999. [Online]. Available: http://www.mii.lt/informatica/htm/INFO141.htm * [15] C. Jenkins and A. Stump, “Monotone recursive types and recursive data representations in cedille,” _CoRR_ , vol. abs/2001.02828, 2020, in second round of reviewing as of November, 2020. [Online]. Available: http://arxiv.org/abs/2001.02828 * [16] B. Werner, “Une Théorie des Constructions Inductives,” Ph.D. dissertation, Université Paris-Diderot - Paris VII, 1994. [Online]. Available: https://tel.archives-ouvertes.fr/tel-00196524 * [17] J. van Oosten, “Realizability: A historical essay,” _Math. Struct. Comput. Sci._ , vol. 12, no. 3, pp. 239–263, 2002. [Online]. Available: https://doi.org/10.1017/S0960129502003626 * [18] A. Troelstra, “Chapter vi - realizability,” in _Handbook of Proof Theory_ , ser. Studies in Logic and the Foundations of Mathematics, S. R. Buss, Ed. Elsevier, 1998, vol. 137, pp. 407 – 473. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0049237X98800219 * [19] J. C. Mitchell, _Foundations for programming languages_ , ser. Foundation of computing series. MIT Press, 1996. * [20] K. Sojakova and P.
Johann, “A general framework for relational parametricity,” in _Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2018, Oxford, UK, July 09-12, 2018_ , A. Dawar and E. Grädel, Eds. ACM, 2018, pp. 869–878. * [21] N. R. Krishnaswami and D. Dreyer, “Internalizing Relational Parametricity in the Extensional Calculus of Constructions,” in _Computer Science Logic 2013 (CSL 2013), CSL 2013, September 2-5, 2013, Torino, Italy_ , ser. LIPIcs, S. R. D. Rocca, Ed., vol. 23. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2013, pp. 432–451. * [22] T. Altenkirch, C. McBride, and W. Swierstra, “Observational equality, now!” in _Proceedings of the ACM Workshop Programming Languages meets Program Verification, PLPV 2007, Freiburg, Germany, October 5, 2007_ , A. Stump and H. Xi, Eds., 2007, pp. 57–68. * [23] R. Atkey, “Relational parametricity for higher kinds,” in _Computer Science Logic (CSL’12) - 26th International Workshop/21st Annual Conference of the EACSL, CSL 2012, September 3-6, 2012, Fontainebleau, France_ , ser. LIPIcs, P. Cégielski and A. Durand, Eds., vol. 16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2012, pp. 46–61. [Online]. Available: http://drops.dagstuhl.de/opus/portals/extern/index.php?semnr=12009 * [24] G. D. Plotkin and M. Abadi, “A Logic for Parametric Polymorphism,” in _Typed Lambda Calculi and Applications, International Conference on Typed Lambda Calculi and Applications, TLCA ’93, Utrecht, The Netherlands, March 16-18, 1993, Proceedings_ , ser. Lecture Notes in Computer Science, M. Bezem and J. F. Groote, Eds., vol. 664. Springer, 1993, pp. 361–375. [Online]. Available: https://doi.org/10.1007/BFb0037093 * [25] A. Stump, C. Jenkins, S. Spahn, and C. McDonald, “Strong functional pearl: Harper’s regular-expression matcher in cedille,” _Proc. ACM Program. Lang._ , vol. 4, no. ICFP, pp. 122:1–122:25, 2020. [Online]. Available: https://doi.org/10.1145/3409004 * [26] D. Firsov, R. Blair, and A. 
Stump, “Efficient mendler-style lambda-encodings in cedille,” in _Interactive Theorem Proving_ , J. Avigad and A. Mahboubi, Eds. Cham: Springer International Publishing, 2018, pp. 235–252. * [27] A. Stump, “From realizability to induction via dependent intersection,” _Ann. Pure Appl. Logic_ , vol. 169, no. 7, pp. 637–655, 2018. * [28] J.-P. Bernardy, P. Jansson, and R. Paterson, “Parametricity and dependent types,” in _Proceedings of the 15th ACM SIGPLAN International Conference on Functional Programming_ , ser. ICFP ’10. New York, NY, USA: Association for Computing Machinery, 2010, p. 345–356. [Online]. Available: https://doi.org/10.1145/1863543.1863592 * [29] J.-P. Bernardy and G. Moulin, “A computational interpretation of parametricity,” in _Proceedings of the 2012 27th Annual IEEE/ACM Symposium on Logic in Computer Science_ , ser. LICS ’12. USA: IEEE Computer Society, 2012, p. 135–144. [Online]. Available: https://doi.org/10.1109/LICS.2012.25 * [30] The Univalent Foundations Program, _Homotopy Type Theory: Univalent Foundations of Mathematics_. Institute for Advanced Study: https://homotopytypetheory.org/book, 2013. * [31] A. Vezzosi, A. Mörtberg, and A. Abel, “Cubical agda: a dependently typed programming language with univalence and higher inductive types,” _Proc. ACM Program. Lang._ , vol. 3, no. ICFP, pp. 87:1–87:29, 2019. [Online]. Available: https://doi.org/10.1145/3341691 * [32] N. Tabareau, E. Tanter, and M. Sozeau, “The marriage of univalence and parametricity,” _J. ACM_ , vol. 68, no. 1, Jan. 2021. [Online]. Available: https://doi.org/10.1145/3429979
# Explanation as a Defense of Recommendation Aobo Yang1, Nan Wang1, Hongbo Deng2, Hongning Wang1 1University of Virginia, Charlottesville, USA 2Alibaba Group, Hangzhou, China ay6gv<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> (2021) ###### Abstract. Textual explanations have been shown to improve user satisfaction with machine-made recommendations. However, current mainstream solutions loosely connect the learning of explanation with the learning of recommendation: for example, they are often separately modeled as rating prediction and content generation tasks. In this work, we propose to strengthen their connection by enforcing the idea of sentiment alignment between a recommendation and its corresponding explanation. At training time, the two learning tasks are joined by a latent sentiment vector, which is encoded by the recommendation module and used to make word choices for explanation generation. At both training and inference time, the explanation module is required to generate explanation text that matches the sentiment predicted by the recommendation module. Extensive experiments demonstrate that our solution outperforms a rich set of baselines in both recommendation and explanation tasks, especially in the quality of its generated explanations. More importantly, our user studies confirm that our generated explanations help users better recognize the differences between recommended items and understand why an item is recommended.
Keywords: Explainable Recommendation, Natural Language Generation, Sentiment Alignment. Published in Proceedings of the Fourteenth ACM International Conference on Web Search and Data Mining (WSDM ’21), March 8–12, 2021, Virtual Event, Israel. DOI: 10.1145/3437963.3441726. ISBN: 978-1-4503-8297-7/21/03. CCS Concepts: Information systems (Recommender systems; Sentiment analysis); Computing methodologies (Natural language generation; Neural networks; Multi-task learning). ## 1\. Introduction After extensive research effort devoted to advancing recommendation algorithms (Koren et al., 2009; He et al., 2017; Sarwar et al., 2001; Rendle, 2010; Aggarwal et al., 2016), solutions that explain the machine-made decisions have recently come under the spotlight (Herlocker et al., 2000; Zhang and Chen, 2020). Numerous studies have shown that explanations help users make more accurate decisions (Bilgic and Mooney, 2005), improve their acceptance of recommendations (Herlocker et al., 2000), and build up confidence in the recommender system (Sinha and Swearingen, 2002). Textual explanations have been identified as a preferred medium for explaining recommendations (Zhang and Chen, 2020), e.g., “This restaurant’s decoration is unique and its sandwich is the best”. But due to the lack of explicit training data, most existing solutions appeal to user reviews as a proxy (Zhang et al., 2014a; Wang et al., 2018b, a; Tao et al., 2019; Chen et al., 2018; Truong and Lauw, 2019; Sun et al., 2020): a good explanation should overlap with user-provided reviews.
This is backed by extensive prior research in sentiment analysis (Pang and Lee, 2007) showing a strong correlation between opinion ratings and associated review content. But the approximation also inadvertently shifts the objective of explanation learning to generating or even memorizing reviews, in a verbatim manner. This unfortunately drives the current practice in explainable recommendation toward decoupling the learning of recommendation and explanation into two loosely linked sub-problems with their own objectives (e.g., rating prediction vs. content reconstruction) (Zhang et al., 2014a; Li et al., 2017; Wang et al., 2018b). But we have to emphasize that content generated in fairly fluent language is not sufficient to qualify as an explanation, as a good explanation must elaborate on why the recommendation is particular to the user. Ideally, based on the provided explanations, a user should reach the same conclusion as the system does about why an item is recommended, i.e., explanation as a defense of the recommendation.

Table 1. Case study on two explainable recommendation algorithms’ output. Two restaurants are evaluated by the two algorithms, with corresponding recommendation scores and explanation output. We manually labeled attribute words in italic and sentiment words in bold.

| Item | Score (Algorithm 1: our proposed model) | Explanation (Algorithm 1) | Score (Algorithm 2: NRT (Li et al., 2017)) | Explanation (Algorithm 2) |
|---|---|---|---|---|
| A | 4.2 | the _sushi_ is **good**, the _rolls_ are **fresh** and the _service_ is **excellent**. | 4.1 | their _prices_ are **decent**, but the _portions_ are pretty **small**. |
| B | 2.1 | it was a bit **loud** and the _service_ was **slow**. | 2.2 | **great** _food_, **clean**, and **nice** _atmosphere_. |

To tie the loose ends in explainable recommendation, one needs to understand how users perceive and utilize the system-provided explanations.
A recent user behavior study based on eye-tracking (Chen et al., 2019) finds that opinionated explanations at a detailed attribute level stimulate users to compare across related recommendations, which in turn significantly increases users’ product knowledge, preference certainty, and perceived recommendation transparency and quality. Motivated by this finding, we believe the sentiment delivered by the explanation text needs to reveal the details of how items are scored and ranked differently by the system. We formulate this as _sentiment alignment_ between the explanation text and the system’s corresponding recommendation. To demonstrate the importance of sentiment alignment, we compare example output from two explainable recommendation algorithms (one proposed in this work, and another from (Li et al., 2017)) in Table 1. Both algorithms strongly recommended restaurant A over B, as suggested by the corresponding large margins in their recommendation scores. Note that such scores are not presented to users in practice; even if presented, they do not carry any detail about why an item is preferred by the algorithm. With Algorithm 1’s explanations, one can easily recognize that restaurant A is recommended because of the better quality of its food and service. On the contrary, it is much harder to comprehend the recommendations based on Algorithm 2’s explanations, as the presented differences become subtle, though their readability is comparable to Algorithm 1’s. Two major factors cause the misaligned explanations in the second algorithm: 1) at training time, it only uses a text reconstruction loss for explanation learning; 2) at inference time, the explanation is generated largely independently from the recommendation (as it only uses the predicted rating as an initial input for text generation).
The failure to align the sentiment conveyed in the explanation text with the recommendations not only fails to help users make informed decisions, but also makes them confused about, or even doubt, the recommendations, which is totally against the purpose of explainable recommendation. We propose to enforce sentiment alignment at _both_ training and inference time for improved explainable recommendation. In particular, the learning of recommendation is modeled as a neural collaborative filtering problem (He et al., 2017), and the learning of explanation is modeled as a neural text generation problem (Sutskever et al., 2011). We force the recommendation module to directly influence the learning of explanations by two means. First, we introduce two gated networks into our neural language model to fuse the intermediate output from the recommendation module, affecting the word choice at every position of an explanation. Using the examples in Table 1 again: given the currently generated content, the explanation module should properly choose the attribute words and corresponding sentiment modifiers (e.g., adjectives) to make their conveyed sentiment consistent with the recommendation module’s prediction on this user-item pair. Second, a stand-alone sentiment regressor is added between the two modules’ output, such that its predicted sentiment score on the explanation text should be close to the given recommendation score. When a discrepancy occurs, the explanation module is pushed to minimize the difference. At inference time, all our treatments for sentiment alignment are kept. But since the explanation module has already been learnt, the sentiment score gap is minimized by solving a constrained decoding problem. Because the sentiment regressor can only be applied to a complete text sequence, we use the Monte Carlo Tree Search algorithm (Kocsis and Szepesvári, 2006) for efficient decoding.
Enforcing the alignment at inference time is vital, as it avoids the issue of decoupled output in existing explainable recommendation solutions. We evaluate the proposed solution on both recommendation and explanation tasks, with particular focus on the text quality, attribute personalization, and sentiment alignment of the generated explanations. The experiments are performed on the Yelp and Ratebeer datasets in comparison with a rich set of popular baseline solutions. Empirical results show that our solution improves the performance on both tasks, with particularly improved explanation quality via its enhanced sentiment alignment. We also scrutinize our solution in extensive user studies against competitive baselines. Positive user feedback suggests our explanations greatly help users gain a clearer understanding of the recommendations and make more accurate decisions. ## 2\. Related Work User-provided reviews have been popularly used as a proxy for explanations in explainable recommendation (Chen et al., 2018; Wang et al., 2018a; Li et al., 2019a). One typical class of solutions directly extracts representative text segments from existing reviews as explanations. For example, NARRE (Chen et al., 2018) uses attention to aggregate reviews to represent users and items for recommendation, in order to choose the most attentive reviews as explanations for each particular item. CARP (Li et al., 2019a) adopts the capsule network instead of attention for the same purpose. Wang et al. (Wang et al., 2018a) extend the idea with reinforcement learning to extract the most relevant review text segments that match a given recommender system’s rating prediction. However, such explanations are limited to an item’s existing reviews, some of which may not even qualify as explanations (e.g., describing a personal experience).
Moreover, these models only focus on selecting reviews to identify the items’ characteristics, instead of addressing the reasons for a particular recommendation provided by the system. The lack of relevance hurts users’ trust in both system-provided explanations and recommendations, and thus undermines the value of explainable recommendation.

Figure 1. Model architecture of SAER. Sentiment alignment is explicitly enforced through three channels. First, SAER uses a shared sentiment vector to connect the recommender and explanation generator by the sentiment gate and attribute gate. Second, the sentiment regularizer samples generated explanations with Gumbel softmax and requires their carried sentiment (calculated by a pre-trained sentiment regressor) to match the recommender’s output score. Third, at inference time, constrained decoding is performed to ensure the alignment in the generated explanation. SAER also uses adversarial training to improve the explanations’ readability in its sentiment regularizer.

Another family of solutions learn to generate explanations from reviews. Many of them learn to predict informative elements retrieved from reviews as explanations (Wang et al., 2018b; Tao et al., 2019; He et al., 2015; Ai et al., 2018). As a typical example, MTER (Wang et al., 2018b) predicts items’ attribute words and corresponding users’ opinion words along with its recommendations. Its explanations are generated by placing the predicted words into predefined templates, which however lack the expressiveness and diversity of natural language. Such robotic-style explanations are usually less appreciated by users. To address this deficiency, neural language models have been applied to synthesize natural language explanations (Li et al., 2019b, 2017; Ni et al., 2019; Truong and Lauw, 2019).
For example, NRT (Li et al., 2017) models explanation generation and item recommendation with a shared user-item embedding space, where its predicted recommendation rating is used as part of the initial state for the corresponding explanation generation. MRG (Truong and Lauw, 2019) integrates multiple modalities from user reviews, including ratings, text, and associated images, for explanation modeling, by treating them as parallel learning tasks. Neither the template-based nor the generation-based solutions paid enough attention to the sentiment alignment issue between recommendations and explanations. Although they jointly model recommendation and explanation (e.g., by sharing embeddings), the objectives of training each module are still isolated. DualPC (Sun et al., 2020) recognizes the importance of consistency between the two learning tasks, and introduces a duality regularization based on the joint probability of explanations and recommendations. However, the correlation imposed by duality does not have any explicit semantic meaning to the end users. In contrast, we require the output of the models to be consistent in their carried sentiment, which is perceivable by an end user. Moreover, due to the required duality, DualPC has to use an ad-hoc approximation to break the coupling between the two models’ output at inference time, which unfortunately hurts the learnt consistency between the two models. Our solution treats the explanation as a dependent of the recommendation, and solves a constrained decoding problem to infer the most aligned explanation at testing time accordingly. ## 3\. Sentiment Aligned Explainable Recommendation The problem of explainable recommendation can be formulated as follows: for a given pair of user $u$ and item $i$, the model outputs a personalized recommendation based on its computed score $r_{u,i}$ and a word sequence $x_{u,i}=\\{w_{1},w_{2},\dots,w_{n}\\}$ as its explanation.
To learn such a model, we assume an existing training dataset, which includes a set of users $\mathcal{U}$, items $\mathcal{I}$, ratings $\mathcal{R}$, attributes $\mathcal{A}$, and explanation text $\mathcal{X}$, denoted as $\\{\mathcal{U},\mathcal{I},\mathcal{R},\mathcal{A},\mathcal{X}\\}$. The attributes and explanations can be prepared from user-provided review corpora; we will introduce the procedure we adopted for this purpose later in the experiment section. We also define a vocabulary set $\mathcal{V}=\\{w_{1},w_{2},...,w_{|\mathcal{V}|}\\}$ for explanation generation. We define attributes as items’ popular properties mentioned in the review text, and thus they are a subset of the vocabulary, $\mathcal{A}\subset\mathcal{V}$. Our model architecture for addressing explainable recommendation is shown in Figure 1. It consists of three major components: 1) the recommender, which takes a user and item pair $(u,i)$ as input to predict a recommendation score $\hat{r}_{u,i}$, which measures the affinity between $u$ and $i$; 2) the explanation generator, which takes the $(u,i)$ pair as input and generates a word sequence $\hat{x}_{u,i}=\\{w_{1},w_{2},\dots,w_{n}\\}$ as the corresponding explanation; and 3) the sentiment regularizer, which measures sentiment alignment between the generated explanation and the recommendation. All three components closely interact with each other at _both_ training and inference time for improved explanation generation, especially for enhanced sentiment alignment. We name our solution Sentiment Aligned Explainable Recommendation, or SAER in short. Next, we will zoom into each component to introduce its design principle and technical details. ### 3.1. Personalized Recommendation As our focus in this work is not to design yet another recommendation algorithm, we adopt a recent neural collaborative filtering solution for this purpose (He et al., 2017).
Arguably, any latent factor model that explicitly learns user and item representations (Koren et al., 2009; Zhang et al., 2014a; Wang et al., 2018b) can be adopted. In this section, we only cover the most important technical details of our recommender’s design, and refer interested readers to the original paper for more details. We stack two Multi-Layer Perceptron (MLP) networks to predict the recommendation score $\hat{r}_{u,i}$ for a given $(u,i)$ pair. The first MLP encodes the $(u,i)$ pair into a latent sentiment vector $\mathbf{s}_{u,i}\in\mathbb{R}^{d^{r}_{s}}$, and the second MLP maps the sentiment vector $\mathbf{s}_{u,i}$ into the numerical rating $\hat{r}_{u,i}$. We refer to the first MLP as the _sentiment encoder_ and the second one as the _rating regressor_. Instead of using the predicted score $\hat{r}_{u,i}$ to influence explanation generation, we choose to inform the explanation generator by the encoded sentiment vector $\mathbf{s}_{u,i}$. We defer the details of this design to the next section. In the recommendation module, we define the latent embedding matrices for users and items as $P^{r}\in\mathbb{R}^{d^{r}\times|\mathcal{U}|}$ and $Q^{r}\in\mathbb{R}^{d^{r}\times|\mathcal{I}|}$ respectively, where $d^{r}$ is the dimension of the embedding vectors. The sentiment encoder concatenates the embedding vectors $p^{r}_{u}$ and $q^{r}_{i}$ as its input and passes them through multiple layers with leaky ReLU activation to get the encoded sentiment vector $\mathbf{s}_{u,i}$. Besides its use in the explanation generator, $\mathbf{s}_{u,i}$ is then mapped by the rating regressor through another set of multi-layer leaky ReLUs to get the final recommendation score ${\hat{r}}_{u,i}$. In addition to the popularly used Mean Squared Error (MSE) loss (Chen et al., 2018; Li et al., 2017) to train our recommender, we also introduce a pairwise hinge loss to improve the trained recommender’s ranking performance.
Specifically, for each user $u$, we collect a set of personalized item pairs $\mathcal{B}_{u}=\\{(i,j)|r_{u,i}>r_{u,j}\\}$, where $i$ and $j$ are two items rated by user $u$ and one is preferred over the other as observed in the training dataset. We did not use the popular BPR loss (Rendle et al., 2012), because it tends to push ratings to extreme values, which is inconsistent with our sentiment regularizer’s requirement, to be explained later. Based on the rating set $\mathcal{R}$ and the personalized item pair sets $\\{\mathcal{B}_{u}\\}_{u\in\mathcal{U}}$, the loss for recommender training is defined as: $L^{r}=\frac{1}{|\mathcal{R}|}\sum_{r_{u,i}\in\mathcal{R}}\big{(}\hat{r}_{u,i}-r_{u,i}\big{)}^{2}+\sum_{u\in\mathcal{U}}\frac{\lambda_{h}}{|\mathcal{B}_{u}|}\sum_{(i,j)\in\mathcal{B}_{u}}\max\big{(}0,\beta-(\hat{r}_{u,i}-\hat{r}_{u,j})\big{)}$ where $\beta>0$ is a hyper-parameter to control the separation margin, i.e., it penalizes the model when the predicted difference between $\hat{r}_{u,i}$ and $\hat{r}_{u,j}$ is smaller than $\beta$, and $\lambda_{h}$ is the coefficient to control the balance between the MSE loss and the pairwise hinge loss. ### 3.2. Explanation Generation Motivated by the success of neural language generation, we appeal to a Recurrent Neural Network (RNN) model with Gated Recurrent Units (GRUs) (Chung et al., 2014) for explanation generation. To make the generation specific to the user and item, we first map the input user $u$ and item $i$ to their embeddings $p^{x}_{u}$ and $q^{x}_{i}$ with the latent matrices $P^{x}\in\mathbb{R}^{d^{x}\times|\mathcal{U}|}$ and $Q^{x}\in\mathbb{R}^{d^{x}\times|\mathcal{I}|}$ learnt by the explanation generator. We should note that this set of embeddings is different from those used in the recommender (i.e., $P^{r}$ and $Q^{r}$), as they should characterize different semantic aspects of users and items (ratings vs. text). We hence use the superscript $x$ to indicate variables and parameters related to the explanation generator.
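As a concrete illustration of the Section 3.1 training objective $L^{r}$ (MSE over observed ratings plus the per-user pairwise hinge term), the following is a minimal sketch; all function and variable names here are illustrative assumptions, not part of SAER's implementation.

```python
# Hedged sketch of the recommender loss L^r from Section 3.1: MSE over observed
# ratings plus a pairwise hinge term over each user's preference pairs B_u.
# Names (recommender_loss, ratings, pairs, predict) are illustrative only.

def recommender_loss(ratings, pairs, predict, beta=0.5, lambda_h=1.0):
    """ratings: dict (u, i) -> observed rating r_{u,i};
    pairs: dict u -> list of (i, j) with r_{u,i} > r_{u,j} in the training data;
    predict: callable (u, i) -> predicted score r_hat_{u,i}."""
    # Mean squared error over all observed ratings.
    mse = sum((predict(u, i) - r) ** 2 for (u, i), r in ratings.items()) / len(ratings)
    # Pairwise hinge: penalize a predicted margin smaller than beta,
    # averaged over each user's pair set B_u and weighted by lambda_h.
    hinge = 0.0
    for u, b_u in pairs.items():
        if b_u:
            hinge += (lambda_h / len(b_u)) * sum(
                max(0.0, beta - (predict(u, i) - predict(u, j))) for i, j in b_u
            )
    return mse + hinge
```

Note that when every pair's predicted margin already exceeds $\beta$, the hinge term vanishes and the loss reduces to plain MSE, which is the intended behavior of the margin hyper-parameter.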
To generate the explanation text, the embeddings are concatenated and linearly transformed into the initial RNN hidden state; the GRU then generates the hidden state $\mathbf{h}^{x}_{t}\in\mathbb{R}^{d^{x}_{h}}$ at position $t$ from the previous state $\mathbf{h}^{x}_{t-1}$ and input word $w_{t}$, and predicts the next word $w_{t+1}$ recursively. We initialize the RNN with pretrained GloVe word embeddings $V\in\mathbb{R}^{d^{x}_{\mathbf{v}}\times|\mathcal{V}|}$ (Pennington et al., 2014). Though a similar model design has been used for explanation generation (Li et al., 2017; Sun et al., 2020), this straightforward application of an RNN can hardly generate satisfactory explanations, leaving two issues open. First, a good explanation is expected to be personalized and specific about the recommendation; generic content, such as “ _this is a nice restaurant_ ”, can never be informative. It is important to explain the recommended item in terms of the attributes the user is most concerned with (Wang et al., 2010). Second, the sentiment carried in the explanation, especially on the mentioned attributes, should be explicit and consistent with the recommendation (as shown in our case study in Table 1). There is no guarantee that a simple RNN can satisfy both requirements. We enhance our generator design with two gated sub-networks upon the GRU to address the aforementioned issues. First, we design a sub-network, named the attribute gate, to guide attribute word generation with respect to the input user-item pair and the predicted recommendation sentiment. The attribute gate is built on a pointer network (or copy mechanism) (See et al., 2017; Zeng et al., 2016), which decides whether the current position should mention an attribute word, and the corresponding distribution of attribute words, based on the generation context.
To make the choice of attribute word specific to the item, for each item $i$ we build an attribute set with all attribute words that appear in $i$’s associated training explanation text: $\mathcal{A}_{i}=\big{\\{}a_{k}|a_{k}\in\\{x_{u,i}|u\in\mathcal{U}\\}\big{\\}}$. To make the attribute choice depend on the already generated content, we attend on the concatenation of the current position’s RNN hidden state $\mathbf{h}^{x}_{t}$ and sentiment vector $\mathbf{s}_{u,i}$ to compute the distribution of these attribute words, (1) $\mathbf{z}_{t,k}=[\mathbf{h}^{x}_{t},\mathbf{s}_{u,i}]^{\top}W^{x}_{z}\mathbf{v}_{a_{k}},\forall k,a_{k}\in\mathcal{A}_{i};~{}~{}\mathbf{\zeta}_{t}=softmax(\mathbf{z}_{t}),$ where $W^{x}_{z}\in\mathbb{R}^{(d^{x}_{h}+d^{r}_{s})\times d^{x}_{\mathbf{v}}}$ and $\mathbf{v}_{a_{k}}$ is the word embedding of attribute $a_{k}$. $\mathbf{z}_{t,k}$ is computed for every $a_{k}$ in $\mathcal{A}_{i}$, i.e., $\mathbf{z}_{t}=\\{z_{t,1},z_{t,2},\dots,z_{t,|\mathcal{A}_{i}|}\\}$. $\mathbf{\zeta}_{t}$ is the resulting attribute word distribution at position $t$. For better performance, an extra linear transformation can be applied to $\mathbf{h}^{x}_{t}$ to compress it into a lower dimension before computing the attention, which helps prevent the attention from overfitting to the text generation context while ignoring the sentiment context. To decide whether to generate an attribute word using Eq (1) at position $t$, we compute the copy probability with respect to the current context $\mathbf{h}^{x}_{t}$ by $c^{x}_{t}=\sigma(W^{x}_{c}\mathbf{h}^{x}_{t}+b^{x}_{c})$, where $\sigma(\cdot)$ is the sigmoid function, $W^{x}_{c}\in\mathbb{R}^{d^{x}_{h}}$ and $b^{x}_{c}\in\mathbb{R}$. $c^{x}_{t}$ allows us to mix the vocabulary distribution predicted by the GRU and the attribute word choice to get our final word distribution at position $t$. Second, we design a sentiment gate to fuse the sentiment vector $\mathbf{s}_{u,i}$ to align sentiment in the generated explanation text.
Our key insight is that not all words convey sentiment; we need to choose the right word at the right place to express consistent sentiment as needed by the recommender. Similar to our attribute gate design, we apply a soft gate to decide how each position is related to the intended sentiment. At position $t$, the sentiment gate calculates a ratio $g^{x}_{t}$ with respect to the RNN’s hidden state $\mathbf{h}^{x}_{t}$. The sentiment vector $\mathbf{s}_{u,i}$ is then weighted and merged with $\mathbf{h}^{x}_{t}$, (2) $g^{x}_{t}=\sigma(W^{x}_{g}\mathbf{h}^{x}_{t}+b^{x}_{g}),\quad\mathbf{m}^{x}_{t}=\tanh\big(\mathbf{h}^{x}_{t}+g^{x}_{t}(W^{x}_{m}\mathbf{s}_{u,i}+b^{x}_{m})\big)$ where $W^{x}_{g}\in\mathbb{R}^{d^{x}_{h}}$ and $b^{x}_{g}\in\mathbb{R}$ produce the scalar $g^{x}_{t}$. $\mathbf{m}^{x}_{t}$ is the sentiment-fused latent vector used to predict the vocabulary distribution for position $t$. Because not all words are about sentiment, to better differentiate the positions where the intended sentiment needs to be expressed from the rest, we impose sparsity on the learned gate value $g^{x}_{t}$ using L1 regularization at training time. In other words, the gate is open only when necessary. We compute the final word distribution by consolidating the outputs of the two gated sub-networks (Eq (1) and (2)). First, the sentiment-fused latent vector $\mathbf{m}^{x}_{t}$ is fed through a linear layer to calculate the vocabulary distribution $\mathbf{\eta}_{t}=softmax(W^{x}_{\mathbf{v}}\mathbf{m}^{x}_{t}+b^{x}_{\mathbf{v}})$, where $W^{x}_{\mathbf{v}}\in\mathbb{R}^{|\mathcal{V}|\times d^{x}_{h}}$ and $b^{x}_{\mathbf{v}}\in\mathbb{R}^{|\mathcal{V}|}$.
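The sentiment gate of Eq (2) and the subsequent vocabulary prediction can be sketched similarly. Again, this is a toy numpy sketch with illustrative dimensions and random weights, not the actual trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sentiment_gate(h_t, s_ui, W_g, b_g, W_m, b_m):
    """Eq (2): a scalar gate in (0, 1) controls how much of the sentiment
    vector s_ui is fused into the hidden state at this position."""
    g_t = sigmoid(W_g @ h_t + b_g)               # scalar gate
    m_t = np.tanh(h_t + g_t * (W_m @ s_ui + b_m))
    return g_t, m_t

# toy run: d_h=4, d_s=2, |V|=6
rng = np.random.default_rng(3)
h_t, s_ui = rng.normal(size=4), rng.normal(size=2)
g_t, m_t = sentiment_gate(h_t, s_ui,
                          rng.normal(size=4), 0.0,       # W_g, b_g
                          rng.normal(size=(4, 2)), 0.0)  # W_m, b_m
eta_t = softmax(rng.normal(size=(6, 4)) @ m_t)           # vocabulary dist.
```

During training, an L1 penalty on the collected `g_t` values (as described above) keeps the gate closed except at positions that actually carry sentiment.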
Second, the vocabulary distribution $\mathbf{\eta}_{t}$ and attribute word distribution $\mathbf{\zeta}_{t}$ are merged to obtain the final word distribution with respect to the copy probability $c^{x}_{t}$, i.e., $\mathbf{y}_{t}=(1-c^{x}_{t})\mathbf{\eta}_{t}+c^{x}_{t}\mathbf{\zeta}_{t}$, where the value of $w_{k}$ in $\mathbf{\zeta}_{t}$ is $0$ if $w_{k}$ is not an attribute word. The objective for explanation generation is to minimize the negative log-likelihood (NLL) loss on the training explanation set $\mathcal{X}$, $L^{x}=-\sum_{x\in\mathcal{X}}\sum_{w_{t}\in x}\log{\mathbf{y}_{t}(w_{t})}+\lambda_{g}\sum_{x\in\mathcal{X}}\sum_{w_{t}\in x}|g^{x}_{t}|$ where $\mathbf{y}_{t}(w_{t})$ is the resulting probability of word $w_{t}$ and $\lambda_{g}$ is the coefficient for the L1 regularization of the sentiment gate values.

### 3.3. Sentiment Alignment

Though our sentiment gate design (Eq (2)) introduces the predicted sentiment from the recommender into the explanation generator, it is still insufficient to guarantee sentiment alignment, for three major reasons. First, word-based NLL training cannot maintain the whole sentence’s sentiment. For example, as the number of sentiment words in an explanation is smaller than the number of non-sentiment words, the training is affected more by those non-sentiment words, which weakens its prediction quality on sentiment words. Second, the explanation generator might utilize the sentiment vector differently than the recommender does, so that the recommendation rating might diverge from the sentiment carried by the explanation. Third, the generation process at the inference stage works differently from the training stage (Ranzato et al., 2015): at inference time, the previously decoded word is used as the input for the next word prediction, instead of the ground-truth word as at training time. Hence, the learnt text pattern might not be fully exploited at inference time.
We introduce the sentiment regularizer to close the loop between the recommender and the explanation generator. It uses a stand-alone sentiment regressor to predict the sentiment rating $\hat{r}^{x}$ on the generated explanation text $\hat{x}_{u,i}$ for user-item pair $(u,i)$, and requires the explanation generator to match the rating $\hat{r}_{u,i}$ from the recommender accordingly. We do not make any particular assumption about the sentiment regressor; any state-of-the-art regression model can be leveraged (Pang and Lee, 2007). In this work, we employed an MLP on top of a bidirectional RNN text encoder with inner attention for rating regression, and denote it as $f^{R}(x)\to r^{x}$. We pre-train this regressor on the ground-truth $\{\mathcal{R},\mathcal{X}\}$ in the training set, and fix the learnt model thereafter. To enforce sentiment alignment by the predicted ratings, we introduce a new loss to the training of our explanation generator, (3) $L^{a}=\sum_{u\in\mathcal{U},i\in\mathcal{I}}\mathbb{E}_{P(\hat{x}|u,i)}\big[(\hat{r}_{u,i}-f^{R}(\hat{x}))^{2}\big]$ where $P(\hat{x}|u,i)$ is the probability of generating $\hat{x}$ for the given $u$ and $i$. We should note that this loss is not necessarily restricted to the observed $(u,i)$ pairs in the training set; instead, it can be computed on any $(u,i)$ pair, since both the recommender and the explanation generator can produce output for any given pair. It thus enables data augmentation for sentiment alignment. However, because the word distribution is categorical, the generation of $\hat{x}$ is not differentiable, which makes direct optimization of Eq (3) infeasible. As a result, we appeal to Gumbel softmax (Jang et al., 2016) to obtain an approximated gradient of sampling from a categorical distribution. Briefly, Gumbel softmax reparameterizes the randomness in sampling by a Gumbel distribution and simulates a relaxed one-hot vector with softmax.
As we need a strict one-hot vector to represent each single word, we adopt the Straight-Through (ST) Gumbel softmax estimator (Jang et al., 2016). For each $(u,i)$ pair in Eq (3), we back-propagate the gradient from $L^{a}$ to the explanation generator to improve the quality of sentiment alignment on the whole sequence. Unfortunately, this new sentiment alignment loss might also drive the generation process to produce unreadable sequences that nevertheless match the intended sentiment ratings. For example, the sentiment regressor may give a very positive rating to an unnatural sentence like “good good good good” when the recommender also happens to predict a high rating for the item. Giving a higher weight to the NLL loss $L^{x}$ in explanation learning cannot address this issue, as it cannot indicate why a particular sequence should not be generated. To improve the readability of our generated explanations, we introduce a text discriminator $f^{D}$, which learns to differentiate the authentic explanations from the generated ones, to guide the explanation generation as well. Our design allows any text classifier; in this work, we used an MLP binary classifier on top of a bidirectional RNN encoder for this purpose. We train the discriminator using cross-entropy loss with the ground-truth explanations $x$ as positive and the generated explanations $\hat{x}$ as negative, $L^{D}=-\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}\log f^{D}(x)-\sum_{u\in\mathcal{U},i\in\mathcal{I}}\mathbb{E}_{P(\hat{x}|u,i)}\big[\log(1-f^{D}(\hat{x}))\big]$ Correspondingly, another objective of explanation generation is to fool the discriminator, i.e., the adversarial loss, $L^{c}=-\sum_{u\in\mathcal{U},i\in\mathcal{I}}\mathbb{E}_{P(\hat{x}|u,i)}\big[\log f^{D}(\hat{x})\big]$ This loss also requires sampled explanations $\hat{x}$ as input, like the alignment loss defined in Eq (3). The same Gumbel softmax sampling technique is used for end-to-end training.
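The Straight-Through Gumbel softmax step can be sketched as below. In plain numpy we can only show the forward pass; in an autograd framework, the gradient would flow through the soft relaxation while the hard one-hot vector is used downstream:

```python
import numpy as np

def st_gumbel_softmax(logits, tau=1.0, rng=None):
    """Sample a word as a strict one-hot vector while keeping a
    differentiable soft surrogate (Jang et al., 2016)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                # Gumbel(0, 1) noise
    y_soft = np.exp((logits + g) / tau)
    y_soft = y_soft / y_soft.sum()         # relaxed (soft) one-hot
    y_hard = np.zeros_like(y_soft)
    y_hard[np.argmax(y_soft)] = 1.0        # strict one-hot, used forward
    # ST estimator: forward pass uses y_hard; the backward pass uses
    # y_soft's gradient (in PyTorch: y = (y_hard - y_soft).detach() + y_soft)
    return y_hard, y_soft

hard, soft = st_gumbel_softmax(np.log(np.array([0.7, 0.2, 0.1])),
                               rng=np.random.default_rng(0))
```

Lowering the temperature `tau` makes the soft sample approach the hard one-hot vector, at the cost of higher gradient variance.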
As we pointed out before, addressing the sentiment alignment issue at training time alone is still insufficient; we therefore introduce a constraint-driven decoding strategy to enhance the alignment at the inference stage as well. As in training, we use MSE to quantify the difference between the rating predicted from the explanation text and that from the recommender. But at the inference stage, since the explanation generator has been trained and fixed, the discrepancy can only be minimized by varying the generation of explanation text, e.g., trial and error. Because the sentiment regressor can only be applied to a complete sequence, the search space is too large for the generator to enumerate. Hence, we treat generating explanation $\hat{x}$ at inference time as a sequence of decisions, where each action is to generate a word $w_{t}$ at position $t$, given its already generated prefix as the state. But we do not have feedback on the actions until we complete $\hat{x}$; the return for taking the series of actions can be measured by $Q(\hat{x};\hat{r}_{u,i})=[\hat{r}_{u,i}-f^{R}(\hat{x})]^{2}$. To find a policy that minimizes the return (since we want to reduce the discrepancy), we need to estimate the value function under each state. This is a well studied problem in reinforcement learning, and it can be effectively addressed by Monte Carlo Tree Search (MCTS) (Kocsis and Szepesvári, 2006). Basically, we estimate the value function using our trained explanation generator for roll-out. When at position $t$ for generating $\hat{x}_{u,i}$, we sample $n$ complete sequences for every action $w$ using the current prefix $\{w_{1},w_{2},\dots,w_{t-1}\}$, following the distribution specified by the explanation generator: $\hat{X}_{u,i,t}(w)=\big\{\hat{x}_{k}=MCTS_{u,i}(w_{1},w_{2},\dots,w_{t-1},w)\big\}^{n}_{k=1}$.
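The roll-out-based value estimation just described can be sketched as follows. The `rollout` and `regressor` stand-ins here are purely hypothetical toys, standing in for the trained generator and sentiment regressor:

```python
import numpy as np

def estimate_action_values(prefix, candidates, rollout, regressor,
                           r_hat, n=5):
    """For each candidate next word w, sample n complete sequences by
    rolling out the (fixed) generator from prefix + [w], then average the
    squared discrepancy between the regressor's rating and the
    recommender's rating r_hat."""
    values = {}
    for w in candidates:
        completions = [rollout(prefix + [w]) for _ in range(n)]
        values[w] = float(np.mean([(r_hat - regressor(x)) ** 2
                                   for x in completions]))
    best = min(values, key=values.get)  # action minimizing the discrepancy
    return best, values

# toy stand-ins: the "regressor" counts one positive word, and the
# "rollout" completes the sentence deterministically
rollout = lambda p: p + ["food", "."]
regressor = lambda x: float(x.count("great"))
best, vals = estimate_action_values(["the"], ["great", "bad"],
                                    rollout, regressor, r_hat=1.0, n=2)
# here best == "great": its completions carry the intended sentiment
```

In practice each rollout is a full sampling pass through the RNN, which is why the gate-triggered shortcut described below the value formula matters for efficiency.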
Then the value of taking action $w$ at position $t$ can be estimated by $Q(w_{1},w_{2},\dots,w_{t-1},w;\hat{r}_{u,i})=\frac{1}{|\hat{X}_{u,i,t}(w)|}\sum_{\hat{x}_{k}\in\hat{X}_{u,i,t}(w)}Q(\hat{x}_{k};\hat{r}_{u,i})$ Based on the estimated values, we take the action that minimizes the value. A recent study (Holtzman et al., 2019) suggests that top-k sampling oftentimes avoids the bland and repetitive content produced by more commonly used greedy decoding strategies, such as top-1 or beam search. Therefore, we integrate our MCTS with top-k sampling: at each decoding position $t$, we sample the $k$ most likely words according to the word distribution $\mathbf{y}_{t}$ and then use MCTS to select the one that minimizes the estimated value under the given state. A vanilla implementation of MCTS would be expensive and slow in our problem, as it needs to complete the sequence at each position with an RNN model multiple times. Fortunately, our sentiment gate design provides a short path for efficient sampling: as sentiment is only carried by a small number of words, there is no need to conduct such an expensive sampling procedure at every position. Instead, we only need to perform MCTS at positions where sentiment is expressed. Hence, we set a threshold on the sentiment gate’s value to decide when to perform MCTS. When the gate’s value is below the threshold, we directly sample from the top-k words of the explanation generator’s prediction.

### 3.4.
End-to-End Model Training

Putting together the three components of our proposed explainable recommendation solution SAER, the overall objective of our model training is formulated as: $J=\min_{\Theta}\big(\lambda_{r}L^{r}+\lambda_{x}L^{x}+\lambda_{a}L^{a}+\lambda_{c}L^{c}+\lambda_{n}||\Theta||^{2}\big)$ where $\Theta$ is the complete set of model parameters, and $\{\lambda_{r},\lambda_{x},\lambda_{a},\lambda_{c}\}$ are the corresponding coefficients that control the relative importance of each component in model training. We also include an L2 regularization on the model parameters $\Theta$, weighted by its coefficient $\lambda_{n}$. The parameters are then estimated end-to-end with the Adam stochastic gradient optimizer (Kingma and Ba, 2014). However, due to our model’s complex structure, it is challenging for the optimizer to fully unleash its potential on its own. Therefore, we split the whole training process into five stages. First, estimate the sentiment regressor on $\{\mathcal{X},\mathcal{R}\}$, as it does not depend on the other parts of our model. Second, pre-train the recommender on $\{\mathcal{U},\mathcal{I},\mathcal{R}\}$ until convergence. This step is essential to learn a good sentiment encoder whose output will be used to inform the explanation generator. Third, freeze the recommender and train the generator on $\{\mathcal{U},\mathcal{I},\mathcal{A},\mathcal{X}\}$ with the negative log-likelihood loss only. We found in our experiments that generation learning was more difficult than recommendation learning; first training the explanation generator separately helps align the training of both modules later. Fourth, after the separate training converges, start joint training of the recommender and explanation generator. This step allows the model to align the sentiment representation from both modules. At last, freeze the recommender and turn on the sentiment regularizer to further improve the explanation generator.
At this stage, the explanation discriminator and generator are trained in turn.

## 4\. Experimental Evaluation

We quantitatively evaluate our model’s performance on personalized recommendation and explanation generation in two different domains: restaurant recommendation on Yelp reviews (https://www.yelp.com/dataset) and beer recommendation on Ratebeer reviews (McAuley et al., 2012). Our model is compared against a set of state-of-the-art baselines on both offline data and user studies, where encouraging improvements in both recommendation and explanation tasks are obtained.

### 4.1. Experiment Setup

#### 4.1.1. Data Pre-Processing

As the attributes are not directly provided in these two review datasets, we use the Sentires toolkit (Zhang et al., 2014b) to extract attribute words from reviews and manually filter out inappropriate ones based on domain knowledge. Although reviews are directly treated as explanations in many previous studies (Chen et al., 2018; Wang et al., 2018a), a recent work (Ni et al., 2019) suggests that a large portion of review content is only about subjective emotion and thus does not qualify as explanations, e.g., “I love the food”. An informative explanation should depict the details of items, e.g., their attributes, to help users perceive the exact reason behind recommendations, e.g., “the fish is fresh”. Therefore, we restrict ourselves to sentences containing attribute words as explanations in our experiments. On top of the crafted explanations, we select the 20,000 most frequent words and map the others to an unknown token to build the vocabulary. Finally, as many users and items have only very few reviews in the datasets, we apply recursive filtering as in (Wang et al., 2018b) to refine the datasets and alleviate this sparsity issue. The resulting statistics of the datasets are summarized in Table 2.

Table 2. Statistics of the processed datasets.
Dataset | # Users | # Items | # Reviews | # Attributes
---|---|---|---|---
Yelp | 15,642 | 21,525 | 1,108,971 | 498
Ratebeer | 3,895 | 6,993 | 1,073,762 | 333

#### 4.1.2. Baselines

To evaluate the personalized recommendation performance, we used the following recommendation baselines:

* - NMF: Non-negative Matrix Factorization (Lee and Seung, 2001). A widely used latent factor model, which decomposes the rating matrix into lower dimensional matrices with non-negative factors.
* - SVD: Singular Value Decomposition (Koren, 2008). It utilizes the rating matrix as input for learning user and item representations.
* - NCF: Neural Collaborative Filtering (He et al., 2017). A modified matrix factorization solution which adopts neural networks to model the nonlinear vector operations.

We also include two explainable recommendation baselines that output natural language sentences as explanations, for comparing both the recommendation and explanation quality:

* - NARRE: Neural Attentional Regression model with Review-level Explanations (Chen et al., 2018). It learns the usefulness of the existing reviews through attention, and incorporates the reviews to enrich user and item representations for rating prediction. To fit our evaluation, we select sentences from its most attentive reviews as explanations.
* - NRT: Neural Rating and Tips Generation (Li et al., 2017). A multi-task learning solution for rating regression and content generation. It uses the predicted recommendation score to create the initial states for content generation.

Table 3. Evaluation of personalized recommendation in terms of rating prediction (RMSE, MAE) and item ranking (NDCG).
Yelp:

Model | RMSE | MAE | NDCG@3 | NDCG@5 | NDCG@10
---|---|---|---|---|---
NMF | 1.1034 | 0.8164 | 0.3777 | 0.5067 | 0.7344
SVD | 1.0286 | 0.7975 | 0.3924 | 0.5246 | 0.7519
NCF | 1.0532 | 0.8251 | 0.3850 | 0.5150 | 0.7420
NARRE | 1.0275 | 0.8035 | 0.3918 | 0.5230 | 0.7509
NRT | 1.0254 | 0.8017 | 0.3947 | 0.5262 | 0.7540
SAER | 1.0190 | 0.7948 | 0.3953 | 0.5278 | 0.7553

Ratebeer:

Model | RMSE | MAE | NDCG@3 | NDCG@5 | NDCG@10
---|---|---|---|---|---
NMF | 2.2228 | 1.6609 | 0.5143 | 0.6334 | 0.7766
SVD | 2.2942 | 1.6474 | 0.4952 | 0.6120 | 0.7593
NCF | 2.0857 | 1.5002 | 0.5421 | 0.6621 | 0.8004
NARRE | 2.0714 | 1.4975 | 0.5464 | 0.6641 | 0.8030
NRT | 2.0743 | 1.4922 | 0.5436 | 0.6620 | 0.8008
SAER | 2.0628 | 1.4842 | 0.5468 | 0.6648 | 0.8034

### 4.2. Quality of Personalized Recommendations

We evaluate the recommendation quality both in terms of rating prediction (by RMSE and MAE) and item ranking performance (by NDCG@{3,5,10} (Järvelin and Kekäläinen, 2017)). The results are shown in Table 3. SAER demonstrates better performance on all metrics on both datasets. In particular, thanks to the introduced hinge loss for pairwise ranking, SAER demonstrates improved ranking performance against all baselines, which only modeled recommendation as a rating prediction task. The performance difference among NCF, NRT and SAER is worth noting: although their rating prediction modules all use an MLP, NRT and SAER additionally leverage the content information for improved recommendation quality. The improvements of SAER over NARRE and NRT demonstrate that our sentiment vector and the corresponding soft gate design better distill and exploit review data for joint learning. Again, our focus in this work is not on improving recommendation quality, but on explanation; next, we dive into our extensive evaluations of the generated explanations.

### 4.3. Quality of Generated Explanations

We evaluate the quality of our generated explanations from three perspectives: text quality, attribute personalization, and sentiment alignment.
We introduce two variants of our model to better analyze the effects of our sentiment regularizer and constrained decoding strategy: 1) SAER (topk) removes sentiment regularization and decodes by top-k sampling, such that sentiment alignment is only introduced by the soft gates, without the alignment loss or the constrained decoding; 2) SAER (reg + topk) uses sentiment regularization (i.e., the alignment loss) and decodes by top-k sampling, such that sentiment alignment is only enforced at training time.

Table 4. BLEU scores of generated explanations.

Dataset | Model | BLEU-1 | BLEU-2 | BLEU-4
---|---|---|---|---
Yelp | NARRE | 20.46 | 5.72 | 2.12
Yelp | NRT | 26.25 | 8.84 | 2.97
Yelp | SAER (topk) | 27.43 | 9.53 | 3.18
Yelp | SAER (reg + topk) | 28.69 | 10.29 | 3.37
Yelp | SAER | 28.88 | 10.44 | 3.44
Ratebeer | NARRE | 29.78 | 9.47 | 3.27
Ratebeer | NRT | 42.16 | 17.54 | 5.63
Ratebeer | SAER (topk) | 43.92 | 19.60 | 6.56
Ratebeer | SAER (reg + topk) | 45.69 | 21.09 | 7.02
Ratebeer | SAER | 46.01 | 21.60 | 7.32

#### 4.3.1. Quality of Generated Text

We measure the quality of generated explanation text with BLEU (Papineni et al., 2002), and report the results in Table 4. The extraction-based NARRE performed clearly worse than the generation-based models. This is because synthesized natural language explanations are not limited to the existing review content and are more flexible to customize for a particular user-item pair. NRT uses the predicted ratings in the initial state for content generation, in comparison to the sentiment vectors used in SAER. The performance gap between NRT and SAER (topk) suggests that our sentiment vectors are more expressive, and that the two soft gates can better guide explanation generation throughout the process than only affecting the RNN’s initial state. The additional gain brought by the sentiment regularizer in SAER (reg + topk) and by the constrained decoding in SAER highlights the benefits of sentiment alignment at both training and inference time.

Table 5.
Performance of attribute prediction in generated explanations.

Model | Yelp Precision | Yelp Recall | Ratebeer Precision | Ratebeer Recall
---|---|---|---|---
NARRE | 0.1415 | 0.1906 | 0.2176 | 0.2245
NRT | 0.1791 | 0.1997 | 0.3443 | 0.1720
SAER (topk) | 0.2024 | 0.2297 | 0.3523 | 0.2554
SAER (reg + topk) | 0.1992 | 0.2319 | 0.3549 | 0.2614
SAER | 0.2115 | 0.2391 | 0.3702 | 0.2677

#### 4.3.2. Attribute Personalization

An informative explanation should cover the user’s most concerned aspects; we evaluate this as attribute personalization performance. For each user-item pair, we evaluate the precision and recall of attribute words in each algorithm’s explanations against the ground-truth explanations. The results in Table 5 show the improvement brought by our attribute gate, which proves effective in predicting users’ most concerned attributes. As the two baselines do not pay attention to items’ attributes when generating the explanations, their quality in providing attribute-level explanations is much worse.

Table 6. Sentiment alignment evaluation of decoded explanations by RMSE. PD is the RMSE between the explanation rating and the predicted rating, and GT is the RMSE between the explanation rating and the ground-truth rating.

Model | Yelp PD | Yelp GT | Ratebeer PD | Ratebeer GT
---|---|---|---|---
NARRE | 1.0932 | 1.4950 | 2.0996 | 2.9641
NRT | 0.6676 | 1.2086 | 2.3302 | 3.1304
SAER (topk) | 0.6908 | 1.2216 | 2.1727 | 3.0026
SAER (reg + topk) | 0.6242 | 1.1849 | 1.6985 | 2.6769
SAER | 0.5505 | 1.1503 | 1.5911 | 2.6042

#### 4.3.3. Sentiment Alignment Between Ratings and Explanations

Offline evaluation of sentiment alignment is not easy, since it should be evaluated by the end users who receive the recommendation and explanation. In addition to user studies for this aspect (reported in the next section), we also use our pre-trained sentiment regressor for an approximated offline evaluation. For a generated explanation, we infer its carried sentiment with our sentiment regressor.
We then compute the RMSE between the rating inferred from the explanation and that predicted by the recommendation module (marked as PD). This measures the sentiment difference between the recommendation and the corresponding explanation. We also compare the inferred rating against the ground-truth rating (marked as GT) as a reference. The results are presented in Table 6. Without our sentiment regularizer, SAER (topk) can already significantly outperform the baselines on Yelp, which demonstrates the utility of our two-gated network design for sentiment alignment. The alignment loss and constrained decoding further push SAER’s explanations closer to its recommendations. Compared to the ground-truth rating, the sentiment carried by the explanation is closer to the recommender’s prediction. We hypothesize that this is caused by the difficulty of predicting the ground-truth rating: as reported in Table 3, the accuracy of the recommender’s rating prediction is at around the same level.

Table 7. Agreement rate between the model’s predicted item ranking and the users’ perceived ranking based on the provided explanations.

Model | gap > 0.5 | gap $\leq$ 0.5
---|---|---
NRT | 64.76% | 52.62%
SAER | 73.10% | 61.90%

## 5\. User Study

We conduct extensive user studies on Amazon Mechanical Turk to evaluate our explanations’ utility to real users. We chose the restaurant recommendation task based on the Yelp dataset, as it is more familiar to general users. We design two separate tasks. The first task focuses on evaluating whether the generated explanations can help users make more informed decisions about the recommendations. In this task, we randomly pair items with different ratings predicted by a tested algorithm, and ask participants to read the corresponding explanations before choosing the item they perceive as the better one. We then evaluate the agreement rate between participants’ choices and the algorithm’s predictions.
Specifically, without showing the actual predicted scores to participants, we present the corresponding explanations and require them to answer the following question:

* “After reading the provided explanations, which restaurant would you like to visit? You are expected to judge the quality of the recommended restaurant based on the provided explanations, and then choose the one with better quality.”

In this experiment, we only adopted NRT as the baseline, because NARRE’s explanations are item-based and thus not personalized for individual users. To demonstrate the explanations’ sentiment sensitivity towards recommendations, i.e., whether a user can correctly tell the difference between the two recommended items by reading the explanations, we group the results by the gap between the two items’ predicted scores, choosing 0.5 as the threshold. We collected 420 responses for each model in each group, resulting in 1,680 responses in total. The results are presented in Table 7. Both models’ explanations are reasonably discriminative when the rating gap is large. But it is more challenging to explain the difference when the recommendation scores are close: when the gap is smaller than $0.5$, the agreement rate on NRT’s results dropped to around 50%, which suggests users can barely perceive the differences by reading its explanations. In contrast, users can better tell the difference from SAER’s explanations for making informed decisions.

Table 8. Up-vote rate of explanations’ helpfulness.

Model | Positive | Negative
---|---|---
NARRE | 23.33% | 42.86%
NRT | 50.69% | 26.98%
GT | 46.77% | 46.76%
SAER | 57.58% | 41.76%

Paired t-test | Positive | Negative
---|---|---
SAER v.s. NARRE | 0 | 0.6786
SAER v.s. NRT | 0.0046 | 0
SAER v.s. GT | 0 | 0.9810

The second task studies whether the explanations can help users comprehend the reason for a recommended item.
In particular, we ask the participants to compare explanations of the same recommended item provided by different algorithms, and then select the most useful ones. We categorize the items as recommended (top ranked items) or not recommended (bottom ranked items) to study whether the model can provide correct explanations for both categories. For each item, we shuffle the explanations from different models for participants to select from. To help participants better judge the explanation quality, we also provide the restaurant’s name and cuisine type. Specifically, we ask one of the following questions according to whether the item is recommended:

* - Positive recommendation: “Which of the following explanations helps you the most to understand why you should pay attention to the recommended restaurant?”
* - Negative recommendation: “Which of the following explanations helps you the most to understand why our system believes the restaurant is NOT a good fit for you?”

We choose NARRE, NRT and the ground-truth explanations for comparison, and compare them by their received helpfulness votes. We collected 904 responses for positive recommendations and 752 for negative ones. Table 8 reports the up-vote rates of the explanations from different models and the results of a paired t-test. For positive recommendations, the generation-based methods, i.e., SAER and NRT, are preferred, and SAER significantly outperforms the others. This reveals that the common and concise syntax and vocabulary of synthesized language are preferred in the explainable recommendation scenario, because users can more easily understand the explanations. On negative recommendations, however, the results are mixed: SAER is still preferred over NRT, but worse than NARRE and the ground truth. The key reason is the inherent data bias: the Yelp dataset contains many more positive reviews than negative ones. Such imbalance makes SAER reluctant to generate negative explanations and less trained on negative content.
Hence, its generated explanations cannot strongly justify the negative recommendations. From a different perspective, this result also echoes the importance of aligned sentiment in explainable recommendation.

## 6\. Conclusion and Future Work

In this paper, we present a new explainable recommendation solution which synthesizes sentiment-aligned natural language explanations to defend its recommendations. The alignment is obtained at the word level by two customized soft gates, and at the sequence level by a content-based sentiment regularizer, at both training and inference time. Offline experiments and user studies demonstrate our model’s advantages in both the personalized recommendation and explanation tasks. This work initiates the exploration of the critical role of sentiment in explainable recommendation, and it leaves several valuable paths forward. Our sentiment regularizer design enables semi-supervised explainable recommendation via data augmentation; considering the extreme sparsity of recommendation data, it can exploit the dominant amount of unobserved data for improved performance. Besides, recommendation is ultimately a list-wise ranking problem; thus, it is vital to offer explanations that can reveal the relative order among the recommended items, i.e., a list-wise explanation.

###### Acknowledgements.

We thank the anonymous reviewers for their insightful comments and suggestions. This work is partially supported by the National Science Foundation under grants SCH-1838615, IIS-1553568, and IIS-2007492, and by Alibaba Group through the Alibaba Innovative Research Program.

## References

* Aggarwal et al. (2016) Charu C Aggarwal et al. 2016\. _Recommender systems_. Vol. 1. Springer.
* Ai et al. (2018) Qingyao Ai, Vahid Azizi, Xu Chen, and Yongfeng Zhang. 2018\. Learning heterogeneous knowledge base embeddings for explainable recommendation. _Algorithms_ 11, 9 (2018), 137.
* Bilgic and Mooney (2005) Mustafa Bilgic and Raymond J Mooney. 2005.
Explaining recommendations: Satisfaction vs. promotion. In _Beyond Personalization Workshop, IUI_ , Vol. 5. * Chen et al. (2018) Chong Chen, Min Zhang, Yiqun Liu, and Shaoping Ma. 2018\. Neural attentional rating regression with review-level explanations. In _Proceedings of the 2018 World Wide Web Conference_. 1583–1592. * Chen et al. (2019) Li Chen, Dongning Yan, and Feng Wang. 2019. User Evaluations on Sentiment-based Recommendation Explanations. _ACM Transactions on Interactive Intelligent Systems (TiiS)_ 9, 4 (2019), 1–38. * Chung et al. (2014) Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. _arXiv preprint arXiv:1412.3555_ (2014). * He et al. (2015) Xiangnan He, Tao Chen, Min-Yen Kan, and Xiao Chen. 2015\. Trirank: Review-aware explainable recommendation by modeling aspects. In _Proceedings of the 24th ACM International on Conference on Information and Knowledge Management_. 1661–1670. * He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017\. Neural collaborative filtering. In _Proceedings of the 26th international conference on world wide web_. 173–182. * Herlocker et al. (2000) Jonathan L Herlocker, Joseph A Konstan, and John Riedl. 2000\. Explaining collaborative filtering recommendations. In _Proceedings of the 2000 ACM conference on Computer supported cooperative work_. ACM, 241–250. * Holtzman et al. (2019) Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. _arXiv preprint arXiv:1904.09751_ (2019). * Jang et al. (2016) Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. _arXiv preprint arXiv:1611.01144_ (2016). * Järvelin and Kekäläinen (2017) Kalervo Järvelin and Jaana Kekäläinen. 2017. IR evaluation methods for retrieving highly relevant documents. In _ACM SIGIR Forum_ , Vol. 51. 
ACM New York, NY, USA, 243–250. * Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ (2014). * Kocsis and Szepesvári (2006) Levente Kocsis and Csaba Szepesvári. 2006. Bandit based monte-carlo planning. In _European conference on machine learning_. Springer, 282–293. * Koren (2008) Yehuda Koren. 2008\. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In _Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining_. 426–434. * Koren et al. (2009) Yehuda Koren, Robert Bell, and Chris Volinsky. 2009\. Matrix factorization techniques for recommender systems. _Computer_ 42, 8 (2009), 30–37. * Lee and Seung (2001) Daniel D Lee and H Sebastian Seung. 2001. Algorithms for non-negative matrix factorization. In _Advances in neural information processing systems_. 556–562. * Li et al. (2019a) Chenliang Li, Cong Quan, Li Peng, Yunwei Qi, Yuming Deng, and Libing Wu. 2019a. A Capsule Network for Recommendation and Explaining What You Like and Dislike. In _Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval_. 275–284. * Li et al. (2019b) Piji Li, Zihao Wang, Lidong Bing, and Wai Lam. 2019b. Persona-Aware Tips Generation?. In _The World Wide Web Conference_. 1006–1016. * Li et al. (2017) Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, and Wai Lam. 2017. Neural rating regression with abstractive tips generation for recommendation. In _Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval_. 345–354. * McAuley et al. (2012) Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012\. Learning Attitudes and Attributes from Multi-Aspect Reviews. In _Proceedings of the 2012 IEEE 12th International Conference on Data Mining_ _(ICDM ’12)_. IEEE Computer Society, USA, 1020–1025. * Ni et al. 
(2019) Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_. 188–197. * Pang and Lee (2007) Bo Pang and Lillian Lee. 2007. Opinion Mining and Sentiment Analysis. _Found. Trends Inf. Retr._ 2, 1-2 (2007), 1–135. * Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002\. BLEU: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting on association for computational linguistics_. Association for Computational Linguistics, 311–318. * Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_. 1532–1543. * Ranzato et al. (2015) Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. _arXiv preprint arXiv:1511.06732_ (2015). * Rendle (2010) Steffen Rendle. 2010\. Factorization machines. In _2010 IEEE International Conference on Data Mining_. IEEE, 995–1000. * Rendle et al. (2012) Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2012. BPR: Bayesian personalized ranking from implicit feedback. _arXiv preprint arXiv:1205.2618_ (2012). * Sarwar et al. (2001) Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. 2001. Item-based collaborative filtering recommendation algorithms. In _Proceedings of the 10th international conference on World Wide Web_. 285–295. * See et al. (2017) Abigail See, Peter J Liu, and Christopher D Manning. 2017\. Get To The Point: Summarization with Pointer-Generator Networks. 
In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. 1073–1083. * Sinha and Swearingen (2002) Rashmi Sinha and Kirsten Swearingen. 2002. The role of transparency in recommender systems. In _CHI’02 extended abstracts on Human factors in computing systems_. ACM, 830–831. * Sun et al. (2020) Peijie Sun, Le Wu, Kun Zhang, Yanjie Fu, Richang Hong, and Meng Wang. 2020\. Dual Learning for Explainable Recommendation: Towards Unifying User Preference Prediction and Review Generation. In _Proceedings of The Web Conference 2020_. 837–847. * Sutskever et al. (2011) Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011\. Generating text with recurrent neural networks. In _Proceedings of the 28th International Conference on International Conference on Machine Learning_. 1017–1024. * Tao et al. (2019) Yiyi Tao, Yiling Jia, Nan Wang, and Hongning Wang. 2019\. The FacT: Taming Latent Factor Models for Explainability with Factorization Trees. In _Proceedings of the 42Nd International ACM SIGIR Conference on Research and Development in Information Retrieval_. ACM, New York, NY, USA, 295–304. * Truong and Lauw (2019) Quoc-Tuan Truong and Hady Lauw. 2019. Multimodal Review Generation for Recommender Systems. In _The World Wide Web Conference_. 1864–1874. * Wang et al. (2010) Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: a rating regression approach. In _Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining_. 783–792. * Wang et al. (2018b) Nan Wang, Hongning Wang, Yiling Jia, and Yue Yin. 2018b. Explainable recommendation via multi-task learning in opinionated text data. In _The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval_. 165–174. * Wang et al. (2018a) Xiting Wang, Yiru Chen, Jie Yang, Le Wu, Zhengtao Wu, and Xing Xie. 2018a. 
A reinforcement learning framework for explainable recommendation. In _2018 IEEE International Conference on Data Mining (ICDM)_. IEEE, 587–596. * Zeng et al. (2016) Wenyuan Zeng, Wenjie Luo, Sanja Fidler, and Raquel Urtasun. 2016\. Efficient summarization with read-again and copy mechanism. _arXiv preprint arXiv:1611.03382_ (2016). * Zhang and Chen (2020) Yongfeng Zhang and Xu Chen. 2020. Explainable recommendation: A survey and new perspectives. _Foundations and Trends® in Information Retrieval_ 14, 1 (2020), 1–101. * Zhang et al. (2014a) Yongfeng Zhang, Guokun Lai, Min Zhang, Yi Zhang, Yiqun Liu, and Shaoping Ma. 2014a. Explicit factor models for explainable recommendation based on phrase-level sentiment analysis. In _Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval_. 83–92. * Zhang et al. (2014b) Yongfeng Zhang, Haochen Zhang, Min Zhang, Yiqun Liu, and Shaoping Ma. 2014b. Do users rate or review? Boost phrase-level sentiment labeling with review-level sentiment classification. In _Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval_. 1027–1030.
N. Sinhababu: Subir Chowdhury School of Quality and Reliability, IIT Kharagpur. Mobile: +91-8617826402. Email: <EMAIL_ADDRESS>M. Sarma: Subir Chowdhury School of Quality and Reliability, IIT Kharagpur. D. Samanta: Department of Computer Science and Engineering, IIT Kharagpur. # Computational Intelligence Approach to Improve the Classification Accuracy of Brain Neoplasm in MRI Data Nilanjan Sinhababu Monalisa Sarma Debasis Samanta ###### Abstract Automatic detection of brain neoplasm in Magnetic Resonance Imaging (MRI) is gaining importance in many medical diagnostic applications. This report presents two improvements for brain neoplasm detection in MRI data: an advanced preprocessing technique that improves the area of interest in the MRI data, and a hybrid technique using a Convolutional Neural Network (CNN) for feature extraction followed by a Support Vector Machine (SVM) for classification. The learning algorithm of the SVM is modified with the addition of a cost function to minimize false negative predictions, addressing the asymmetric cost of errors in MRI data diagnosis. The proposed approach can effectively detect the presence of a neoplasm and also predict whether it is cancerous (malignant) or non-cancerous (benign). To check the effectiveness of the proposed preprocessing technique, it is inspected visually and evaluated using training performance metrics. A comparison study between the proposed classification technique and existing techniques was performed. The results show that the proposed approach outperforms the existing approaches in terms of accuracy and handles classification errors better. ###### Keywords: Medical imaging · Classification of brain tumor data · Brain neoplasm diagnosis · MRI data classification ## 1 Introduction Brain neoplasm is known as the deadliest of all forms of cancer.
Any abnormal growth of neural cells within the skull or the spinal cord leads to a brain neoplasm, more generally known as a brain tumor. Around two-thirds of adults diagnosed with aggressive brain cancer die within 2 years of diagnosis chinot2014bevacizumab ; gilbert2014randomized . The high mortality associated with brain neoplasm is primarily due to the unavailability of any special cure after a certain stage aldape2019challenges . It is worth mentioning that any kind of neoplasm must be detected as early as possible. Diagnosis of a brain tumor is done by a neurological test, for example, a CT scan or Magnetic Resonance Imaging (MRI) osborn2015diagnostic ; armstrong2004imaging . MRI is one of the advanced medical imaging techniques and is used to produce high-quality images liang2000principles for studying tumors in the soft tissues of the brain. A computationally intelligent approach to the classification of brain neoplasm is required for the following reasons: * • MRI provides the location and size of a tumor efficiently but has no way of classifying its type aronen1994cerebral ; young2006brain . * • Classifying the tumor from MRI requires a biopsy; this process is painful and time-consuming, and it has limitations such as sampling error and variability in interpretation aronen1994cerebral ; de1997proliferative . * • Visual inspection and detection may result in misdiagnosis due to human errors caused by visual fatigue de1997proliferative . * • The daily reporting of a large number of new cases implies a high chance of misdiagnosis due to time limitations, limited interaction, and the low doctor-to-patient ratio ford1996doctor ; bagcchi2015india . * • Understanding the tumor type automatically aids the medical fraternity, reducing workload and saving time.
* • Computerized analysis has proven superior and is being actively researched for predicting cancerous cells in brain MRI data. To achieve the goal of developing an automatic brain neoplasm detection system, various techniques are used in the literature. The three most important stages in these systems are data preprocessing, feature optimization, and classification. Existing preprocessing techniques consider noise removal while preserving texture information. Image preprocessing techniques such as cropping, size normalization, histogram equalization based contrast enhancement amin2012brain , Gaussian filter-based noise removal mahajanidetection , and Center Weighted Median (CWM) based techniques george2012mri are used. Principal Component Analysis (PCA) amin2012brain ; othman2011probabilistic ; singh2012classification and Gray Level Co-occurrence Matrix (GLCM) joshi2010classification ; mahajanidetection based techniques are used in the majority of the works for feature extraction. Some works kharrat2010hybrid also used the Spatial Gray Level Dependence Method (SGLDM) to extract wavelet-based textures. Some methodologies used manual intervention to extract texture and intensity features jafari2012hybrid . For classification, the majority of the works use the Multi-Layer Perceptron (MLP), Probabilistic Neural Network (PNN), Support Vector Machine (SVM), etc. amin2012brain ; singh2012classification ; othman2011probabilistic ; kharrat2010hybrid . There exist works jafari2012hybrid ; mahajanidetection that combine multiple classifiers to improve the classification accuracy, such as the Back Propagation Neural Network (BPNN) and K-Nearest Neighbours (KNN). Since stacked MRI image slices can be used as a 3D representation of the MRI data, a few techniques use a 3D-CNN for classification of the neoplasm chen2018mri ; 6165309 .
After a detailed analysis of the existing techniques, the following issues are found: * • In the literature, very few preprocessing techniques have been used, mainly noise removal and contrast optimization. Further, many works in the literature skip this phase entirely. * • Many approaches in the literature extract texture and intensity-based features from MRI data to improve classification performance, but these techniques assume that the feature requirements are already known. Although these features provide improvement, there may be unknown features that have not been considered. * • PCA is a good feature selection algorithm, as it tries to cover the maximum variance among the features in a dataset. However, an incorrect choice of the number of principal components can lead to information loss uyeda2015comparative . Information loss in this domain can be costly and can affect the overall model performance. * • Classification algorithms are optimized for generalized performance, but due to differences in the requirements of various domains, these generalized models may not be suitable, as in the case of brain neoplasm MRI data classification pendharkar2003evolutionary . * • Some papers in the literature used the 3D-CNN model for MRI data. 3D-CNN is good for 3D image classification, but CNNs in general suffer when the dataset size is not adequate gal2017deep . Further, implementing an additional CNN is computationally costly and may suffer from the drawbacks of vanishing or exploding gradients. Edge identification is a crucial task in brain neoplasm MRI classification, but 3D-CNN-based methods have a severe over-smoothing problem on edge boundaries DBLP:journals/corr/abs-1803-08669 . * • The significance of model performance metrics varies greatly with the domain of the work, and accuracy may not be the ultimate evaluation metric for MRI data classification.
Preprocessing techniques in the literature consider noise removal while preserving texture information but do not consider edge detection; a technique that also considers edges is missing in the literature. The majority of the papers published in these areas have extracted mainly textural and wavelet-based features, but there is a possibility of other unrecognized features that may improve model performance. Further, these techniques require manual intervention and can be challenging when considering new data. Not many hybrid techniques have been used or compared in this field of research. For classification, the majority of the papers have tried only to maximize accuracy, assuming that false positives and false negatives are equally bad. However, in the medical domain, these two types of errors have very different associated costs. Concerning these limitations, three main objectives are drawn: 1. 1. Propose a hybrid preprocessing stage to improve the area of interest in the MRI image. 2. 2. Propose a CSVM model harnessing the ability of CNN for feature extraction and SVM for classification. 3. 3. Increase model performance taking into consideration the different costs associated with false positives and false negatives. ## 2 Related Work In this section, a few important works related to brain neoplasm detection are briefly surveyed. ### 2.1 Preprocessing techniques MRI data may contain noise and defects that can hamper model performance. To deal with such noisy data, various preprocessing techniques have been proposed in the literature. Amin and Mageed amin2012brain proposed image preprocessing using image cutting, image enhancement, and size normalization. They used histogram equalization to increase the contrast range by increasing the dynamic range of grey levels, and they showed improvement in the distinction of features in the MRI image.
Gadpayle and Mahajani mahajanidetection used a Gaussian filter to remove granular noise and resized the MRI data due to large size variations in the dataset used. They used manual cropping for some MRI data for convenience. George and Karnan george2012mri used a modified tracking algorithm to remove film artifacts and then applied Histogram Equalization and Center Weighted Median (CWM) filtering separately to enhance the MRI images. ### 2.2 Feature optimization techniques MRI data contains a large number of features, and using all of them does not necessarily increase the performance of a classification algorithm. Reducing the number of features in the dataset usually improves accuracy as well as training and testing time. An efficient way to select important features is PCA (Principal Component Analysis). The PCA technique can remove multicollinearity in the dataset, improve algorithm performance, and reduce over-fitting. Further, texture is an important aspect of MRI data, and to extract texture features, the GLCM (Gray Level Co-occurrence Matrix) is widely used, especially in medical image analysis. Amin and Mageed amin2012brain proposed a neural network and segmentation-based system to automatically detect the tumor in brain MRI images. In their approach, PCA is used for feature extraction. Othman et al. othman2011probabilistic proposed a probabilistic neural network technique for brain tumor classification, with features extracted using PCA. Daljit Singh et al. singh2012classification proposed a hybrid technique for automatic classification of an MRI image, extracting features using PCA and GLCM. Joshi et al. joshi2010classification proposed a brain tumor detection and classification system in MR images that first extracts the tumor portion from the brain image and then extracts the texture features of the detected tumor using GLCM.
Gadpayle and Mahajani mahajanidetection proposed a brain tumor detection and classification system in which the tumor is extracted using segmentation and texture features are then extracted using GLCM. Kharrat kharrat2010hybrid proposed a hybrid approach for the classification of brain tissues based on a Genetic Algorithm (GA) and a Support Vector Machine (SVM). The features considered in their work are wavelet-based texture features, extracted by the Spatial Gray Level Dependence Method (SGLDM) and given as input to the SVM classifier. ### 2.3 Classification techniques Many techniques are used in the literature for MRI brain neoplasm classification. The objective of these approaches is to detect the presence and/or severity of brain neoplasm. Amin and Mageed amin2012brain used a Multi-Layer Perceptron (MLP) to classify the extracted features of MRI brain images; the average recognition rate is 88.2% and the peak recognition rate is 96.7%. Othman et al. othman2011probabilistic performed brain tumor presence classification using a Probabilistic Neural Network (PNN). Daljit Singh et al. singh2012classification used a Support Vector Machine (SVM) classifier that classifies an image as either normal or abnormal. Joshi et al. joshi2010classification detected the presence of a brain tumor using a neuro-fuzzy classifier. Gadpayle and Mahajani mahajanidetection combined the BPNN and KNN classifiers to classify MRI brain images as normal or abnormal, with an accuracy of 70% using the KNN classifier and 72.5% using the BPNN classifier. Kharrat kharrat2010hybrid used a binary SVM classifier with an RBF kernel to detect the presence of a brain tumor, with an accuracy rate that varies from 94.44% to 98.14%. Jafari and Shafaghi jafari2012hybrid proposed a hybrid approach for brain tumor detection in MR images based on Support Vector Machines (SVM).
An accuracy of about 83.22% was achieved, and they claimed their approach is robust in detection. ## 3 Proposed Methodology A computational intelligence approach is proposed to provide better and more reliable classification of brain neoplasms in MRI data. An overview of the proposed method is shown in Fig. 1. The major computational steps in the proposed approach are image preprocessing, feature engineering, and classification: 1. 1. Preprocessing: The preprocessing step consists of two sub-steps. First, basic preprocessing techniques such as resizing, noise removal, and contrast optimization are performed. Next, a preprocessing technique based on Sobel edge detection and contours is proposed to improve the area of interest in the MRI image. 2. 2. Feature engineering: The proposed feature engineering includes feature identification and feature extraction followed by feature optimization. We propose an automatic approach to feature identification and extraction: a convolutional neural network model is built to generate important features automatically from the brain neoplasm MRI data. Following this, the LASSO regularization technique is used to remove unnecessary features and select the most suitable and dominant ones. 3. 3. Classification: We propose a two-stage classification approach using two classification models. The first stage predicts whether an input MRI scan contains a tumor. If an image contains a brain neoplasm, it is passed to the second-stage classifier, which classifies the severity of the tumor as benign or malignant. In both stages, an SVM model with a modified cost optimization constraint is used to minimize false negative classifications. In the following subsections, the above-mentioned three steps are discussed in detail.
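The two-stage decision flow in step 3 can be sketched as follows. This is a minimal sketch with hypothetical function names; `preprocess`, `extract_features`, `stage1`, and `stage2` stand in for the trained pipeline components and are not names from the paper.

```python
def classify_mri(image, preprocess, extract_features, stage1, stage2):
    """Two-stage decision flow: stage 1 predicts tumor presence; only
    tumor-positive scans reach stage 2, which grades severity."""
    x = extract_features(preprocess(image))   # steps 1-2 of the pipeline
    if stage1(x) == "no_tumor":               # first-stage classifier
        return "no_tumor"
    return stage2(x)                          # "benign" or "malignant"

# Usage with trivial stand-in components:
result = classify_mri(
    image=[[0.0]],
    preprocess=lambda img: img,
    extract_features=lambda img: img,
    stage1=lambda x: "tumor",
    stage2=lambda x: "malignant",
)
print(result)  # malignant
```

Note that the second classifier is never consulted for tumor-free scans, which is exactly the gating behavior described above.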
Figure 1: Overview of the proposed brain neoplasm detection method. ### 3.1 Data preprocessing There are two levels of data preprocessing: regular (basic) preprocessing and special preprocessing. The regular preprocessing deals with poor-quality MR images. The improved MR image is then used in the next level of preprocessing, which specifies a region of interest and is proposed to improve the prediction accuracy. #### 3.1.1 Basic preprocessing The basic preprocessing includes resizing, noise removal, and contrast enhancement. The methodologies proposed for these activities are discussed below. Resizing: This is the process of changing the scale of the images so that all samples in the training and testing data sets have the same resolution. This is required because the classification models must receive inputs of the same size. The images are resized to $150\times 150$ pixels using the NumPy squeeze function for each MRI image, as shown in Fig. 2. Figure 2: Example of resizing an MRI image. Noise removal: The Gaussian filter is a well-known method used to smooth noise by averaging the values surrounding a noisy pixel kumar2017noise . In this experiment, noise removal is performed using a Gaussian filter. Essentially, a Gaussian filter is a non-uniform low-pass filter; moreover, its kernel is symmetric, which can reduce direction bias, if any. Applying a Gaussian filter requires convolving the image with a 2D Gaussian distribution (see Eqn. 2). It may be noted that the 2D Gaussian distribution is the product of two 1D Gaussian functions (see Eqn. 1). The kernel coefficients are sampled from Eqn. 2. An example showing a noisy image and the same image after noise removal is shown in Fig. 3.
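The Gaussian smoothing just described can be sketched in plain NumPy, with the kernel coefficients sampled from the 2D Gaussian distribution as the text specifies. The kernel size and $\sigma$ below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Sample the 2D Gaussian of Eqn. (2) on a size x size grid.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return k / k.sum()  # normalize so overall brightness is preserved

def gaussian_smooth(img, size=5, sigma=1.0):
    # Direct 2D convolution with edge padding; fine for small kernels.
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + size, j:j + size] * k)
    return out

# A random 150x150 "image": smoothing reduces pixel-to-pixel variation.
img = np.random.default_rng(0).integers(0, 256, (150, 150)).astype(float)
smooth = gaussian_smooth(img)
print(smooth.shape)  # (150, 150)
```

In practice a library routine would replace the explicit loops, but the kernel construction mirrors Eqn. (2) directly.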
$G\left(x\right)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{x^{2}}{2\sigma^{2}}}$ (1) $G\left(x,y\right)=\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}$ (2) Figure 3: Example of a noisy MR image and the same image after noise removal. Contrast enhancement: A Gaussian filter might hamper the image brightness and contrast; hence, a contrast optimization procedure is required. The contrast of an image is the difference in luminance between various sections of the image. An optimally contrast-enhanced image shows every object present in the image. Contrast enhancement techniques are of utmost importance for medical images kotkar2013review . Local and global transformation techniques kabir2010brightness prove to be the best options in this regard, as they tend to maintain the mean brightness and do not inject undesirable artifacts kotkar2013review . Global Transformation Histogram Equalization is used for contrast enhancement in this work, via the global transformation function formulated in Eqn. 3. Here, $T(g)$ is the global transformation function, $g$ denotes intensity, $g_{min}$ and $g_{max}$ define the lower and upper bounds of the histogram partition, and $h(x)$ denotes the histogram count at intensity value $x$. $T\left(g\right)=g_{min}+\left(g_{max}-g_{min}\right)\left(\frac{\sum_{x=g_{min}}^{g}h\left(x\right)}{\sum_{x=g_{min}}^{g_{max}}h\left(x\right)}\right)$ (3) An example of contrast enhancement using Global Transformation Histogram Equalization is shown in Fig. 4. Figure 4: Example of contrast enhancement in an MRI image. #### 3.1.2 Special preprocessing Apart from the regular preprocessing techniques, an extra filtering mechanism is proposed, as shown in Fig. 5. The proposed technique starts by loading an MRI image that has already been preprocessed using the basic techniques described in Section 3.1.1.
Then a contour plotting operation is performed on the MRI data. In the next step, the Sobel edge detection technique is applied to the grayscale image to generate edges. Further, the Sobel edge MRI data matrix is subtracted from the grayscale image data matrix. This image is matched with the contour plot from the first step to preserve brightness, and the final image is stored in grayscale format. This preprocessing helped in reducing already present artifacts and proved to improve the features of the region of interest in the MRI data. The contour plot, the Sobel edge detector, and the differencing operation are described in detail below. Figure 5: The proposed Sobel edge detection based preprocessing. Contour calibration: For MRI images, it is desirable to have higher contrast in the area of interest and lower contrast elsewhere. The images analyzed so far are in grayscale; hence, the different variations in texture are not visible. Contour lines can be used to know the depth or height of a 3D surface: a contour line of a function of two variables is a curve along which the function has a constant value, so the curve joins points of equal value. A grayscale image can also be seen as a 3D representation of values ranging from 0 to 255 for each pixel, over which a contour can be plotted kim2014relationship . In this experiment, the MRI has a size of $150\times 150$, and contour lines can be used to assign various colors to the image over the varying gray levels. An example showing the contour plot for an MRI sample is shown in Fig. 6(a). (a) Contour plot for gray scale MRI image. (b) Sobel edge detection for gray scale MRI image. (c) Difference contour for gray scale MRI image Figure 6: An example of the special preprocessing technique. Sobel edge detection: The Sobel edge detection technique is highly efficient in the detection of edges in images if the image is free of granular noise.
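The Sobel edge step can be sketched as follows, using the standard 3×3 Sobel kernels; the edge-padding choice is an assumption, as the paper does not specify how the image border is handled.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # vertical-gradient kernel

def conv3(img, kernel):
    # 3x3 correlation with edge padding (padding mode is an assumption).
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * kernel)
    return out

def sobel_edges(img):
    # Gradient magnitude from the horizontal and vertical responses.
    return np.hypot(conv3(img, SOBEL_X), conv3(img, SOBEL_Y))

flat = np.full((8, 8), 50.0)                                  # no edges
step = np.hstack([np.zeros((8, 4)), np.ones((8, 4)) * 255])   # vertical edge
print(sobel_edges(flat).max(), sobel_edges(step).max())
```

A constant region produces zero response (the kernels sum to zero), while the intensity step produces a strong edge, which is the behavior exploited by the differencing step of the special preprocessing.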
Modified Sobel edge detection techniques gao2010improved have proven to perform better in certain conditions. In this experiment, granular noise has already been removed in the Gaussian filtering step of the basic preprocessing. Hence, we can proceed with the basic Sobel edge detection technique vijayarani2013performance ; vincent2009descriptive to detect the edges present in the MRI images. An example showing Sobel edge detection for an MRI sample is shown in Fig. 6(b). Differencing: In this phase, the Sobel edge MRI image matrix is subtracted from the grayscale MRI image matrix on a pixel-by-pixel basis. If different contour intensities are present in the original contour and the Sobel edge contour, the higher intensity value is taken. This phase suppressed the less relevant regions and properly uplifted the intensity in the regions of interest. An example showing the contour matching operation over the difference image for an MRI sample is shown in Fig. 6(c). ### 3.2 Feature engineering Due to the lack of a huge amount of MRI data in a single place, it is very difficult to analyze and classify images related to brain neoplasm. Machine learning algorithms are therefore combined with feature extraction guyon2008feature techniques so that they perform well even with a very small amount of data. Automatic feature extraction for the brain neoplasm MRI data is performed using a convolutional feature extractor. Convolutional neural networks comprise two parts: the first performs convolution using various kernels to extract features, and the second performs classification through dense layers. In the hybrid CSVM model, the first phase of a normal CNN is used for feature extraction, and the features are then fed to an SVM for classification. Feature extraction using convolutional neural networks has been performed in various fields of research due to its flexible and reliable results simard1999boxlets ; hammad2019novel .
CNN model-based feature extractors also prove to be superior in extracting textures from images dewaele1988texture ; zhang2000all , which is a very important aspect of MRI classification as well. The convolutional feature extractor model for detecting the presence and severity of neoplasms in the MRI dataset is discussed below. Model: sequential --- Layer Name_Type | Dimension | Parameters
Block-1_Conv-1 (Conv2D) | $(None,148,148,32)$ | 320
Block-1_MP-1 (MaxPooling2D) | $(None,74,74,32)$ | 0
Block-2_Conv-1 (Conv2D) | $(None,72,72,64)$ | 18496
Block-2_Conv-2 (Conv2D) | $(None,70,70,64)$ | 36928
Block-2_MP-1 (MaxPooling2D) | $(None,35,35,64)$ | 0
Block-3_Conv-1 (Conv2D) | $(None,33,33,96)$ | 55392
Block-3_MP-1 (MaxPooling2D) | $(None,16,16,96)$ | 0
Block-4_Conv-1 (Conv2D) | $(None,14,14,96)$ | 83040
Block-4_MP-1 (MaxPooling2D) | $(None,7,7,96)$ | 0
Block-5_Conv-1 (Conv2D) | $(None,5,5,64)$ | 55360
Block-5_MP-1 (MaxPooling2D) | $(None,2,2,64)$ | 0
Total params: 249,536 | |
Trainable params: 249,536 | |
Non-trainable params: 0 | |
Table 1: Layer details of the convolution model for feature extraction. The model is trained on the MRI dataset, which is made up of $150\times 150$ grayscale images; that is, each image has shape (150, 150, 1) (grayscale = 1 channel). The sequential convolutional feature extraction model is detailed in Table 1, listing layer names, dimensions, and parameter counts. Further, note that the baseline CNN used in the experiment has the same architecture, except that the SVM classifier is replaced by a fully connected neural network. An example of the features extracted by a particular convolution layer on a particular slice is given in Fig. 7. The generated feature vector is then used in the feature selection phase to reduce unnecessary features: LASSO regularization is used to select the optimal set of features and remove features that may hamper model performance.
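The parameter counts in Table 1 can be checked with the standard Conv2D formula, (kernel_h · kernel_w · in_channels + 1 bias) per output channel. The channel pairs below are read off the table, assuming 3×3 kernels throughout (consistent with the dimension column, which shrinks by 2 at each convolution):

```python
def conv2d_params(k, in_ch, out_ch):
    # (k*k*in_ch weights + 1 bias) per output channel
    return (k * k * in_ch + 1) * out_ch

# (in_channels, out_channels) of each 3x3 Conv2D layer in Table 1;
# MaxPooling layers contribute no parameters.
convs = [(1, 32), (32, 64), (64, 64), (64, 96), (96, 96), (96, 64)]
counts = [conv2d_params(3, i, o) for i, o in convs]
print(counts)       # [320, 18496, 36928, 55392, 83040, 55360]
print(sum(counts))  # 249536, matching "Total params" in Table 1
```

Every per-layer count and the total agree with Table 1, confirming the architecture is fully convolutional with no other trainable parameters.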
Convolutional feature extraction provides 249,536 features from the input data, which are significantly reduced to 5,240 features after LASSO feature selection is performed. Figure 7: Features extracted from a single slice in the Block-1_Conv-1 layer. ### 3.3 MRI data classification Detecting the presence of a brain tumor, or classifying its severity as benign or malignant, is a binary classification problem. The SVM (Support Vector Machine) is a widely accepted binary classifier for linear data and, using special kernels, for non-linear data. SVM is based on finding the best hyperplane that separates the data into two classes. SVM is a popular choice for classification across application domains, from text nilanjan17 to medical image classification machhale2015mri ; singh2012classification . Further, SVM tends to perform well even on small datasets huang2017svm , whereas deep learning requires a huge amount of data gal2017deep . The proposed approach (CSVM) using CNN feature extraction and SVM classification is shown in Fig. 8. Figure 8: CSVM model for automatic feature extraction and classification. The standard SVM optimizes the hyperplane for the best classification of the classes without considering the different costs associated with false positives and false negatives. In general, this is not a problem, as for many tasks there is no significant cost imbalance between classes. But in a medical domain such as the classification of brain MRI data for the detection of neoplasm, false negatives carry a higher cost than false positives. To deal with this issue, the optimization function of the SVM needs to be modified. The sample space is $\{(x_{i},y_{i})\}$, $i=1,2,\dots,m$, where $x_{i}\in\{F_{1},F_{2},\dots,F_{n}\}$ and $y_{i}\in\{+1,-1\}$.
The sample space is separated by a hyperplane $g(x)=w^{T}x+b=0$, where $w$ is an $n$-dimensional coefficient vector normal to the hyperplane and $b$ (the bias) is its offset. Criteria for classification: the positive class indicates the presence of a neoplasm for the first classifier model and a malignant neoplasm for the severity classification model; the negative class indicates the absence of a neoplasm for the first model and a benign neoplasm for the severity model. Let $C_{s}$ denote the unequal misclassification cost associated with the false negative class. Since the $x_{i}$ are not perfectly separable by the hyperplane, some samples are allowed to lie at a distance $\xi_{i}$ from their correct margin boundary. The primal problem of the SVM is thus modified into the following optimization task:

$f\left(x_{i},y_{i}\right)=\min\frac{1}{2}\left\|w\right\|^{2}+C_{s}\sum_{i}\xi_{i}$

$\text{subject to}\quad y_{i}\left(w^{T}x_{i}+b\right)\geq 1-\xi_{i},$

$\text{where}\quad i=1,2,\dots,m\quad\text{and}\quad\xi_{i}\geq 0.$ (4)

The penalty term $C$ controls the strength of the penalty on $\xi$: $C_{s}|_{y_{i}=+1}=C$ and $C_{s}|_{y_{i}=-1}=C^{r}$, where $C^{r}=C_{s}/C$; since $C_{s}>C$, it follows that $C^{r}>1$.
Since $f\left(x_{i},y_{i}\right)$ is a constrained optimization problem, it is converted into an unconstrained one using the Lagrange multipliers $\alpha\geq 0$ and $\beta\geq 0$:

$L\left(w,b,C_{i},\xi_{i}\right)=\frac{1}{2}\left\|w\right\|^{2}+\sum{C_{i}\xi_{i}}+\sum{\alpha_{i}}-\sum{\alpha_{i}\xi_{i}}-\sum{\alpha_{i}y_{i}\left(w^{T}x_{i}\right)}-\sum{\alpha_{i}y_{i}b}-\sum{\beta_{i}\xi_{i}}$

$C_{i}=\begin{cases}C&y_{i}=+1\\C^{r}\cdot C&y_{i}=-1\end{cases}$ (5)

Optimizing the Lagrangian (5) involves taking its derivatives with respect to $w$, $b$, and $\xi_{i}$ and setting them to zero, which yields the dual problem:

$L=\sum_{i}{\alpha_{i}}-\frac{1}{2}\sum_{i}\sum_{j}{\alpha_{i}\alpha_{j}y_{i}y_{j}K\left(x^{(i)},x^{(j)}\right)}$

subject
to

$\sum_{i}{\alpha_{i}y_{i}}=0\quad\text{and}\quad 0\leq\alpha_{i}\leq C_{i}.$ (6)

Since MRI brain neoplasm classification $f:X\to Y$ is a nonlinear classification problem, SVM kernels need to be used. A kernel function maps the nonlinear data into an $n$-dimensional space in which it becomes linearly separable by a hyperplane. SVM offers several kernels for the optimization, such as the linear, polynomial, and radial basis function (RBF) kernels, and the choice of kernel produces significant variation in model performance [41]. The RBF kernel maps nonlinear vectors into a high-dimensional space where the best hyperplane can be drawn, and it has proved to be the preferred choice for image classification problems [7, 27, 31, 41]. Therefore, the RBF kernel is used in the experiment:

$K\left(x_{i},x_{j}\right)=\exp\left(-g\left\|x_{i}-x_{j}\right\|^{2}\right),$ (7)

where $\left\|x_{i}-x_{j}\right\|^{2}$ is the squared two-norm distance and $g=\frac{1}{2\sigma^{2}}$ is the kernel parameter. For an unknown feature vector $z$, the classification decision with the RBF kernel (7) is

$D(z)=w\cdot z+b=\sum^{m}_{i=1}{\alpha_{i}y_{i}K\left(x_{i},z\right)}+b.$ (8)

The kernel parameter $g$ and the penalty $C$ need to be tuned for the model to perform optimally across the dataset. For tuning the SVM parameters, $k$-fold cross-validation with $k=10$ is used.
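The cost-sensitive RBF SVM of Eqns. (4)-(8) can be sketched with scikit-learn, where per-class penalties are expressed through class weights (the effective penalty for a class is C times its weight, so weighting the positive class harder penalizes false negatives more). The data, the cost ratio, and the candidate grids below are illustrative assumptions, not the paper's.

```python
# Sketch of a cost-sensitive RBF SVM with 10-fold cross-validated tuning
# of the penalty C and the RBF parameter g (called `gamma` in scikit-learn).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))              # stand-in for selected features
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # +1: neoplasm, -1: normal

C_r = 3.0  # assumed cost ratio for the false-negative-prone (positive) class
svm = SVC(kernel="rbf", class_weight={1: C_r, -1: 1.0})

# 10-fold cross-validation over candidate (C, g) pairs, as in the text
grid = GridSearchCV(svm, {"C": [1, 9, 13], "gamma": [0.1, 2, 3]}, cv=10)
grid.fit(X, y)
print(grid.best_params_)
```

With the paper's data this kind of search yielded the parameter values reported next; here the best pair depends entirely on the synthetic data.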
Cross-validation gave the desired parameter values $C=13$ and $g=2$ for detecting the presence of a neoplasm, and $C=9$ and $g=3$ for severity classification; these values are used for the rest of the experiment.

## 4 Experiments and Experimental Results

To establish the efficacy of the proposed approach, several experiments have been carried out. In this section, the objectives of the experiments, the experimental environment, the datasets used, and the results observed are discussed.

### 4.1 Objectives of the experiments

The objectives of our experiments are the following:
1. Effectiveness of the proposed preprocessing: visual inspection and training evaluation of the proposed preprocessing model.
2. Effectiveness of the proposed cost optimization: performance difference of CSVM with and without cost optimization.
3. Classification accuracy: performance evaluation of CSVM, CNN, SVM, and Random Forest on the MRI data, considering the cost imbalance of the classes.
4. Comparison with the existing approaches: performance comparison of CSVM with the existing works.

### 4.2 Experimental environment

The specification of the system used is as follows: O.S.: Windows 10 Professional; CPU: AMD Ryzen 7-3700X; RAM: 32 GB DDR4; GPU: NVIDIA GeForce GTX 1080 Ti. The 64-bit Windows version of Python [46] with the IPython notebook editor [39] is used throughout the experiment. Important modules used in the experiment include TensorFlow [1] and packages from NumPy [35], SciPy [49], and Matplotlib [21].

### 4.3 Dataset description

The objective of our work is to classify MRI images to predict the presence of neoplasms and their severity. In this experiment, an MRI dataset is collected from the online repository of The Whole Brain Atlas [24].
This website is maintained by Harvard Medical School and has proved to be very authentic in terms of medical imaging data [44]. T2-weighted images are used in the experiment, as shown in Fig. 9. The malignant brain neoplasms in the dataset are glioma and sarcoma; the benign brain neoplasms are metastatic bronchogenic carcinoma (M.B.C.), metastatic adenocarcinoma (M.A.), and meningioma. The shape of an MRI image is h $\times$ w $\times$ n, where n is the number of slices in the image and h and w are the height and width, both equal to 256. Figure 9: Example of stacked brain MRI slices.

### 4.4 Results observed

Vis-à-vis the objectives of the experiments, the results we observed are presented in the following subsections.

#### 4.4.1 Effectiveness of the proposed preprocessing

Visual inspection: An overview of the preprocessing is shown in Fig. 10 for a normal MRI, in Fig. 11 for a benign MRI, and in Fig. 12 for a malignant MRI. To demonstrate the preprocessing, three MRI images are considered: a normal one, without any neoplasm, shown in Fig. 10(a); one with a benign neoplasm, shown in Fig. 11(a); and one with a malignant neoplasm, shown in Fig. 12(a). Contours are plotted for the three images: normal (Fig. 10(b)), benign (Fig. 11(b)), and malignant (Fig. 12(b)). Similarly, Sobel edges are shown for the three images: normal (Fig. 10(c)), benign (Fig. 11(c)), and malignant (Fig. 12(c)). The final result after taking the difference is shown for the three images: normal (Fig. 10(d)), benign (Fig. 11(d)), and malignant (Fig. 12(d)). (a) Normal GS (b) Normal CT (c) Normal SE (d) Normal SD Figure 10: Proposed preprocessing applied over normal MRI images.
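The Sobel-edge (SE) panels referenced above can be approximated by the standard 3x3 Sobel gradient magnitude. The numpy sketch below is illustrative only and may differ in detail from the authors' pipeline; the synthetic "slice" is a placeholder.

```python
# Minimal Sobel edge magnitude: correlate the image with the horizontal
# and vertical 3x3 Sobel kernels and combine the two gradients.
import numpy as np

def sobel_magnitude(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude from the two 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img.astype(float), 1, mode="edge")   # replicate-pad the border
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                              # accumulate the 3x3 window
        for j in range(3):
            win = p[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

# A synthetic "slice": a bright square on a dark background; the edge
# magnitude lights up along the square's boundary and is zero elsewhere.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
edges = sobel_magnitude(img)
```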
To compare the preprocessed MRI images by visual inspection, the image outline and the colour of the various regions need to be examined. Green represents low intensity, yellow signifies higher intensity, and brown lines indicate large variation in intensity over a region. Figure 10 shows the transition through the various stages of the proposed preprocessing for a normal brain. The contour of the normal brain produces many yellow lines, as seen in Fig. 10(b). In the difference contour, by contrast, the regions are evenly coloured without any yellow lines, and no brown edge appears in the final image (Fig. 10(d)). This means the proposed preprocessing can normalize the lower ranges of intensity difference (the yellow lines). (a) Benign GS (b) Benign CT (c) Benign SE (d) Benign SD Figure 11: Proposed preprocessing applied over benign MRI images. Figure 11 shows the transition through the various stages of the proposed preprocessing for a benign brain neoplasm. The contour of the benign brain produces many yellow regions, as shown in Fig. 11(b). In the difference contour, the less interesting regions are coloured green with a brown border, and a brown edge with yellow is seen in the final image where the benign neoplasm is present (Fig. 11(d)). This means the proposed preprocessing intensifies the regions with a benign neoplasm relative to the other regions. (a) Malignant GS (b) Malignant CT (c) Malignant SE (d) Malignant SD Figure 12: Proposed preprocessing applied over malignant MRI images. Figure 12 shows the transition through the various stages of the proposed preprocessing for a malignant brain neoplasm. The contour of the malignant brain produces a centralized yellow region, as seen in Fig. 12(b). In the difference contour, the less interesting regions are coloured green with a brown border, as previously seen with the benign MRI.
Further, a brown edge with yellow is seen in the final image where the malignant neoplasm is present, as seen in Fig. 12(d). The proposed preprocessing can outline the portion with the brain tumor effectively, even if the tumor is discretely spaced. Further, many preprocessing techniques tend to over-soften the image during noise removal to obtain better accuracy, but this alters the original image features: that is beneficial when detecting the presence of a neoplasm, but it fails to give good results when classifying severity. With the proposed preprocessing, by contrast, the final output MRI image is free of over-softening, so the features are better preserved. Figure 13: CSVM: accuracy versus training loss with and without proposed preprocessing. Figure 14: CNN: accuracy versus training loss with and without proposed preprocessing. Training evaluation: A loss function is used to optimize a machine learning algorithm by calculating the loss on the training MRI dataset images. Its value reflects how well the model is performing on the dataset: it is the sum of the errors made on each training data point, and it indicates how well or poorly the model performs after each iteration of optimization (epoch). The training accuracy metric, on the other hand, measures the algorithm's performance as the proportion of correctly classified data instances during training. For any model, the target is always to minimize the loss while still achieving good accuracy. To assess the benefit of the proposed preprocessing technique, four models are considered in the experiment: the proposed CSVM model, CNN, SVM, and Random Forest (RF). These models are chosen for their popularity in this field of research and because they tend to provide decent results for image classification tasks.
To test the effectiveness of the proposed preprocessing technique, the models are trained on the same dataset twice: once with the proposed preprocessing applied, and once with only the basic preprocessing steps. Note also that the feature selection process of Section 3.2 applies only to CSVM; the other models use the general image features from the preprocessing step. Figure 15: SVM: accuracy versus training loss with and without proposed preprocessing. Figure 16: RF: accuracy versus training loss with and without proposed preprocessing. From the experimental results shown in Figs. 13, 14, 15, and 16, it is observed that the training accuracy with the proposed preprocessing is significantly better in most epochs than in the same setting without it. As far as the training loss is concerned, the loss decreases much faster with each epoch when the proposed preprocessing is applied. In some epochs the accuracy or loss may be close, but the final training accuracy with the proposed preprocessing is higher and the loss is lower. The superiority of the hybrid CSVM model can also be seen in Fig. 13, as its training accuracy is consistently better than that of the other models. Further, the proposed preprocessing technique proves beneficial with all the models, both in maximizing accuracy and in minimizing loss. The poor performance of the CNN model is probably related to the very small number of items in the dataset: neural network models perform better on large collections of data, and in this case the hybrid CSVM demonstrates the capability of SVM to produce competitive results even with little data. For SVM without convolutional feature extraction, the training accuracy decreases. Random Forest performed worst in this case, with the lowest training accuracy.
A more detailed comparison of the models on the test data is given in a later section. For the rest of the experiment, the proposed preprocessing technique is used regardless of the model.

#### 4.4.2 Effect of the proposed cost optimization

From the experimental CSVM model, confusion matrices are generated for each classification case, that is, for detecting the presence of a neoplasm and for classifying its type. Confusion matrices by themselves do not provide much insight into how a model performs, but they do provide a fast, at-a-glance view of the classification, and many performance metrics, such as accuracy, precision, recall, and F-measure, are derived from them. These metrics in turn provide a better understanding of the model. The metrics used for performance evaluation are given in Eqns. 9-12.

$Accuracy(A)=\frac{TP+TN}{TP+TN+FP+FN}$ (9)

$Precision(P)=\frac{TP}{TP+FP}$ (10)

$Recall(R)=\frac{TP}{TP+FN}$ (11)

$F1\text{-}Score(F1)=2\times\frac{P\times R}{P+R}$ (12)

where: True Positives (TP): correct classification of presence of neoplasm/malignant; True Negatives (TN): correct classification of absence of neoplasm/benign; False Positives (FP): incorrect classification of absence of neoplasm/benign; False Negatives (FN): incorrect classification of presence of neoplasm/malignant. For detecting the presence of a neoplasm, a false negative means that some MRIs were classified as neoplasm-free when neoplasms are in fact present; no diagnosis would follow, which could be fatal. A false positive, by contrast, may lead to over-diagnosis but is not fatal. We therefore analyze the results under these conditions. The confusion matrices for the detection of a neoplasm in an MRI with the CSVM approaches, with and without cost optimization, are represented in Fig. 17.
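Eqns. (9)-(12) can be computed directly from the four confusion-matrix counts. In the sketch below the counts are not read from the paper's matrices; they are inferred to be consistent with the CSVM presence-detection row of Table 2 (a recall of 1 implies zero false negatives, and precision 0.9815 over 106 positives implies 2 false positives).

```python
# Confusion-matrix metrics of Eqns. (9)-(12).
def metrics(tp: int, tn: int, fp: int, fn: int):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # the metric most sensitive to false negatives
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# 106 neoplasm and 26 normal test MRIs; 2 false positives, 0 false negatives:
# recall stays 1 even though accuracy is below 1.
a, p, r, f1 = metrics(tp=106, tn=24, fp=2, fn=0)
print(round(a, 4), round(p, 4), r, round(f1, 4))  # 0.9848 0.9815 1.0 0.9907
```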
From the confusion matrices, the total numbers of misclassifications by CSVM with and without cost optimization are 3 and 2, respectively. Further, as already discussed, false positives are more tolerable than false negatives in this situation. There are 2 false negative classifications with the CSVM model without cost optimization, and none with cost optimization, which shows the advantage of the cost-optimized CSVM model in this regard. Figure 17: Confusion matrix for detecting presence of brain neoplasm. For detecting the severity of a neoplasm, a false negative means that some MRIs were classified as benign when they are in fact malignant; in this case, if the MRI is not re-evaluated quickly, there is a risk of fatality. A false positive in this case may cause shock, but it is still more acceptable than a false negative. The confusion matrices for the detection of the severity of the neoplasm in an MRI using the CSVM approaches are represented in Fig. 18. From these confusion matrices, the total numbers of misclassifications by CSVM with and without cost optimization are 2 and 3, respectively. There are 2 false negative classifications with the CSVM model without cost optimization and 1 with cost optimization, which again shows the advantage of the cost-optimized CSVM model over the regular CSVM model for medical domain-specific data. Figure 18: Confusion matrix for detecting severity of neoplasm.

#### 4.4.3 Classification accuracy

The MRI dataset of human brain images is split in an 80:20 ratio for training and testing of the models, and the experiments are then performed. The experimental overview with the evaluation scores for detecting the presence of neoplasms using CSVM, CNN, SVM, and Random Forest is presented in Table 2. The training and testing columns give the numbers of normal and neoplasm data points, denoted N and B+M.
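The per-split counts in Table 2 are consistent with a stratified 80:20 split of the 658 MRIs. The sketch below assumes scikit-learn and uses placeholder feature vectors; only the class proportions come from the paper.

```python
# Stratified 80:20 split preserving the normal (N) vs neoplasm (B+M) ratio.
import numpy as np
from sklearn.model_selection import train_test_split

y = np.array([0] * 130 + [1] * 528)        # 130 normal, 528 neoplasm MRIs
X = np.arange(len(y)).reshape(-1, 1)       # placeholder feature vectors

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

print(len(y_tr), len(y_te))                  # 526 train, 132 test
print((y_te == 0).sum(), (y_te == 1).sum())  # 26 normal, 106 neoplasm in test
```

These counts match the N = 26 and B+M = 106 test columns of Table 2.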
As seen from the results, CSVM achieves the maximum scores in all respects for detecting the presence of a neoplasm.

Classifier | Total Data | Training N | Training B+M | Testing N | Testing B+M | Accuracy | Precision | Recall | F1-Score
---|---|---|---|---|---|---|---|---|---
CSVM | 658 | 104 | 422 | 26 | 106 | 0.9848 | 0.9815 | 1 | 0.9907
CNN | 658 | 104 | 422 | 26 | 106 | 0.9242 | 0.9615 | 0.9434 | 0.9524
SVM | 658 | 104 | 422 | 26 | 106 | 0.8181 | 0.9271 | 0.8396 | 0.8812
RF | 658 | 104 | 422 | 26 | 106 | 0.6894 | 0.8652 | 0.7264 | 0.7898

Table 2: Evaluation scores for detecting presence of neoplasms.

The evaluation scores for detecting the severity of neoplasms are presented in Table 3. The training and testing columns give the numbers of benign and malignant data points, denoted B and M. As seen from the results, CSVM again achieves the maximum scores in all respects for severity classification of brain neoplasms. Further, cost optimization in the CSVM model shows a significant improvement in recall over the other models.

Classifier | Total Data | Training B | Training M | Testing B | Testing M | Accuracy | Precision | Recall | F1-Score
---|---|---|---|---|---|---|---|---|---
CSVM | 528 | 216 | 205 | 55 | 52 | 0.972 | 0.9623 | 0.9808 | 0.9714
CNN | 528 | 216 | 205 | 55 | 52 | 0.9346 | 0.9412 | 0.9231 | 0.932
SVM | 528 | 216 | 205 | 55 | 52 | 0.8598 | 0.8491 | 0.8654 | 0.8571
RF | 528 | 216 | 205 | 55 | 52 | 0.7477 | 0.7451 | 0.7308 | 0.7379

Table 3: Evaluation scores for severity classification.

#### 4.4.4 Comparison with the existing approaches

To further evaluate the effectiveness of the CSVM approach, the final results are compared with other procedures that use the same MRI dataset. The compared approaches omit the proposed preprocessing step and do not consider the unequal classification costs associated with MRI brain neoplasm classification.
The compared models use various techniques, such as Wavelet Transformation (WT), the Spatial Gray Level Dependence Method (SGLDM), Genetic Algorithms (GA), and Principal Component Analysis (PCA). The results are shown in Table 4. From the comparison, CSVM performed best among the compared models on both classification tasks, i.e., neoplasm presence detection and severity classification. For presence detection, CSVM shows a 15.26% improvement over the compared technique. For severity detection, the CSVM model is 1% and 3% more accurate than the compared procedures. As already discussed, maximizing recall is essential in this domain, and CSVM improves recall by 3.5% and 6.2% over the respective compared procedures.

Classifier | Classification Task | Accuracy | Precision | Recall | F1-Score
---|---|---|---|---|---
CSVM | Detect Neoplasm Presence | 0.9848 | 0.9815 | 1 | 0.9907
CSVM | Detect Neoplasm Severity | 0.9720 | 0.9623 | 0.9808 | 0.9714
WT+SGLDM+GA+SVM | Detect Neoplasm Severity | 0.9629 | N/A | 0.9460 | N/A
SGLDM+GA+SVM | Detect Neoplasm Severity | 0.9444 | N/A | 0.9187 | N/A
PCA+GA+SVM | Detect Neoplasm Presence | 0.8322 | N/A | N/A | N/A

Table 4: Comparison of proposed model with other procedures.

## Conclusions

A hybrid preprocessing method using contour plots and Sobel edges is proposed, and results are observed both including and excluding this preprocessing step. The results show the benefit of the proposed preprocessing technique, as model performance increases in every case. Further, the proposed CSVM consistently performed better than the regular CNN, SVM, and Random Forest models, both in classifying normal versus abnormal brain MRIs and in distinguishing benign from malignant neoplasms. Cost optimization also helped improve model performance and reduced false negative predictions. Comparison with other procedures on the same dataset further demonstrates the efficacy of the overall proposed techniques.
## References * (1) Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015). URL https://www.tensorflow.org/. Software available from tensorflow.org * (2) Aldape, K., Brindle, K.M., Chesler, L., Chopra, R., Gajjar, A., Gilbert, M.R., Gottardo, N., Gutmann, D.H., Hargrave, D., Holland, E.C., et al.: Challenges to curing primary brain tumours. Nature Reviews Clinical Oncology 16(8), 509–520 (2019) * (3) Amin, S.E., Megeed, M.: Brain tumor diagnosis systems based on artificial neural networks and segmentation using mri. In: 2012 8th International Conference on Informatics and Systems (INFOS), pp. MM–119. IEEE (2012) * (4) Armstrong, T.S., Cohen, M.Z., Weinberg, J., Gilbert, M.R.: Imaging techniques in neuro-oncology. In: Seminars in oncology nursing, vol. 20, pp. 231–239. Elsevier (2004) * (5) Aronen, H.J., Gazit, I.E., Louis, D.N., Buchbinder, B.R., Pardo, F.S., Weisskoff, R.M., Harsh, G.R., Cosgrove, G., Halpern, E.F., Hochberg, F.H.: Cerebral blood volume maps of gliomas: comparison with tumor grade and histologic findings. Radiology 191(1), 41–51 (1994) * (6) Bagcchi, S.: India has low doctor to patient ratio, study finds (2015) * (7) Cao, H., Naito, T., Ninomiya, Y.: Approximate rbf kernel svm and its applications in pedestrian classification (2008) * (8) Chang, J., Chen, Y.: Pyramid stereo matching network. CoRR abs/1803.08669 (2018). 
URL http://arxiv.org/abs/1803.08669 * (9) Chen, L., Wu, Y., DSouza, A.M., Abidin, A.Z., Wismüller, A., Xu, C.: Mri tumor segmentation with densely connected 3d cnn. In: Medical Imaging 2018: Image Processing, vol. 10574, p. 105741F. International Society for Optics and Photonics (2018) * (10) Chinot, O.L., Wick, W., Mason, W., Henriksson, R., Saran, F., Nishikawa, R., Carpentier, A.F., Hoang-Xuan, K., Kavan, P., Cernea, D., et al.: Bevacizumab plus radiotherapy–temozolomide for newly diagnosed glioblastoma. New England Journal of Medicine 370(8), 709–722 (2014) * (11) De Wolde, H., Pruim, J., Mastik, M.F., Koudstaal, J., Molenaar, W.M.: Proliferative activity in human brain tumors: Comparison of histopathology and l-[l-11c] tyrosine pet. Journal of Nuclear Medicine 38(9), 1369–1374 (1997) * (12) Dewaele, P., Van Gool, L., Wambacq, A., Oosterlinck, A.: Texture inspection with self-adaptive convolution filters. In: 9th International Conference on Pattern Recognition, pp. 56–57. IEEE Computer Society (1988) * (13) Ford, S., Fallowfield, L., Lewis, S.: Doctor-patient interactions in oncology. Social science & medicine 42(11), 1511–1519 (1996) * (14) Gal, Y., Islam, R., Ghahramani, Z.: Deep bayesian active learning with image data. arXiv preprint arXiv:1703.02910 (2017) * (15) Gao, W., Zhang, X., Yang, L., Liu, H.: An improved sobel edge detection. In: 2010 3rd International conference on computer science and information technology, vol. 5, pp. 67–71. IEEE (2010) * (16) George, E.B., Karnan, M.: Mri brain image enhancement using filtering techniques. International Journal of Computer Science & Engineering Technology (IJCSET) 3(9), 399–403 (2012) * (17) Gilbert, M.R., Dignam, J.J., Armstrong, T.S., Wefel, J.S., Blumenthal, D.T., Vogelbaum, M.A., Colman, H., Chakravarti, A., Pugh, S., Won, M., et al.: A randomized trial of bevacizumab for newly diagnosed glioblastoma. 
New England Journal of Medicine 370(8), 699–708 (2014) * (18) Guyon, I., Gunn, S., Nikravesh, M., Zadeh, L.A.: Feature extraction: foundations and applications, vol. 207. Springer (2008) * (19) Hammad, M., Zhang, S., Wang, K.: A novel two-dimensional ecg feature extraction and classification algorithm based on convolution neural network for human authentication. Future Generation Computer Systems 101, 180–196 (2019) * (20) Huang, M.W., Chen, C.W., Lin, W.C., Ke, S.W., Tsai, C.F.: Svm and svm ensembles in breast cancer prediction. PloS one 12(1), e0161501 (2017) * (21) Hunter, J.D.: Matplotlib: A 2d graphics environment. Computing in Science & Engineering 9(3), 90–95 (2007). DOI 10.1109/MCSE.2007.55 * (22) Jafari, M., Shafaghi, R.: A hybrid approach for automatic tumor detection of brain mri using support vector machine and genetic algorithm. Global journal of science, engineering and technology 3, 1–8 (2012) * (23) Ji, S., Xu, W., Yang, M., Yu, K.: 3d convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(1), 221–231 (2013) * (24) Johnson, K.A., Becker, J.A.: The whole brain atlas. URL http://www.med.harvard.edu/AANLIB/ * (25) Joshi, D.M., Rana, N., Misra, V.: Classification of brain cancer using artificial neural network. In: 2010 2nd International Conference on Electronic Computer Technology, pp. 112–116. IEEE (2010) * (26) Kabir, M.H., Abdullah-Al-Wadud, M., Chae, O.: Brightness preserving image contrast enhancement using weighted mixture of global and local transformation functions. Int. Arab J. Inf. Technol. 7(4), 403–410 (2010) * (27) Kharrat, A., Gasmi, K., Messaoud, M.B., Benamrane, N., Abid, M.: A hybrid approach for automatic classification of brain mri using genetic algorithm and support vector machine. 
Leonardo journal of sciences 17(1), 71–82 (2010) * (28) Kim, J.S., Ha, J., Kim, B.K., Shin, D.H., Ko, Y.G., Choi, D., Jang, Y., Hong, M.K.: The relationship between post-stent strut apposition and follow-up strut coverage assessed by a contour plot optical coherence tomography analysis. JACC: Cardiovascular Interventions 7(6), 641–651 (2014) * (29) Kotkar, V.A., Gharde, S.S.: Review of various image contrast enhancement techniques. International journal of innovative research in Science, Engineering and Technology 2(7) (2013) * (30) Kumar, N., Nachamai, M.: Noise removal and filtering techniques used in medical images. Orient. J. Comput. Sci. Technol 10(1), 103–113 (2017) * (31) Kuo, B.C., Ho, H.H., Li, C.H., Hung, C.C., Taur, J.S.: A kernel-based feature selection method for svm with rbf kernel for hyperspectral image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 7(1), 317–326 (2013) * (32) Liang, Z.P., Lauterbur, P.C.: Principles of magnetic resonance imaging: a signal processing perspective. SPIE Optical Engineering Press (2000) * (33) Machhale, K., Nandpuru, H.B., Kapur, V., Kosta, L.: Mri brain cancer classification using hybrid classifier (svm-knn). In: 2015 International Conference on Industrial Instrumentation and Control (ICIC), pp. 60–65. IEEE (2015) * (34) Mahajani, P.P.P.: Detection and classification of brain tumor in mri images. International Journal of Emerging Trends in Electrical and Electronics (IJETEE – ISSN: 2320-9569) 5(1), 45–49 (2013) * (35) Oliphant, T.E.: A guide to NumPy, vol. 1. Trelgol Publishing USA (2006) * (36) Osborn, A.G., Salzman, K.L., Jhaveri, M.D., Barkovich, A.J.: Diagnostic imaging: brain E-book. Elsevier Health Sciences (2015) * (37) Othman, M.F., Basri, M.A.M.: Probabilistic neural network for brain tumor classification. In: 2011 Second International Conference on Intelligent Systems, Modelling and Simulation, pp. 136–138. 
IEEE (2011) * (38) Pendharkar, P.C., Nanda, S., Rodger, J.A., Bhaskar, R.: An evolutionary misclassification cost minimization approach for medical diagnosis. In: Managing data mining technologies in organizations: techniques and applications, pp. 32–44. IGI Global (2003) * (39) Pérez, F., Granger, B.E.: IPython: a system for interactive scientific computing. Computing in Science and Engineering 9(3), 21–29 (2007). DOI 10.1109/MCSE.2007.53. URL https://ipython.org * (40) Sarkar, B., Sinhababu, N., Roy, M., Dutta Pramanik, P., Choudhury, P.: Mining multilingual and multiscript twitter data: Unleashing the language and script barrier. International Journal of Business Intelligence and Data Mining 16, 107–127 (2019). DOI 10.1504/IJBIDM.2020.103847 * (41) Schölkopf, B., Smola, A.J., Bach, F., et al.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press (2002) * (42) Simard, P., Bottou, L., Haffner, P., LeCun, Y.: Boxlets: a fast convolution algorithm for signal processing and neural networks. In: Advances in neural information processing systems, pp. 571–577 (1999) * (43) Singh, D., Kaur, K.: Classification of abnormalities in brain mri images using glcm, pca and svm. International Journal of Engineering and Advanced Technology (IJEAT) 1(6), 243–248 (2012) * (44) Summers, D.: Harvard whole brain atlas: www. med. harvard. edu/aanlib/home. html. Journal of Neurology, Neurosurgery & Psychiatry 74(3), 288–288 (2003) * (45) Uyeda, J.C., Caetano, D.S., Pennell, M.W.: Comparative analysis of principal components can be misleading. Systematic Biology 64(4), 677–689 (2015) * (46) Van Rossum, G., Drake, F.L.: Python 3 Reference Manual. CreateSpace, Scotts Valley, CA (2009) * (47) Vijayarani, S., Vinupriya, M.: Performance analysis of canny and sobel edge detection algorithms in image mining. 
## Authors’ Biographies Nilanjan Sinhababu received his B. Tech. degree in Computer Science & Engineering from West Bengal Technical University, India. He is currently pursuing his Master of Science (by research) degree at the Subir Chowdhury School of Quality and Reliability, Indian Institute of Technology Kharagpur. His research interests include recommendation systems, data analytics, machine learning, deep learning, and artificial intelligence. Monalisa Sarma received her Ph.D. degree in Computer Science & Engineering from Indian Institute of Technology Kharagpur, India. She holds M.S. (by research) and B. Tech.
degrees both in Computer Science & Engineering from Indian Institute of Technology Kharagpur, India, and North Hill University, India, respectively. Presently, she is an Assistant Professor at the Reliability Engineering Centre, Indian Institute of Technology Kharagpur. Prior to joining Indian Institute of Technology Kharagpur, she worked in the Department of Computer Science & Engineering, Indian Institute of Technology Indore, and at Siemens Research and Development, Bangalore, India. Her current research includes human reliability, big data security, and biometric-based cryptography. Debasis Samanta received his Ph.D. degree in Computer Science & Engineering from Indian Institute of Technology Kharagpur, India. He holds M. Tech. and B. Tech. degrees in Computer Science & Engineering from Jadavpur University, Kolkata, India, and Calcutta University, India, respectively. Presently, he is an Associate Professor in the Department of Computer Science & Engineering, Indian Institute of Technology Kharagpur. His current research includes Human-Computer Interaction, Brain-Computer Interaction, Biometric-based System Security, and Data Analytics.
###### Abstract _In continuation of the paper [3], we discuss various consequences of the Hahn-Banach theorem for bounded b-linear functionals in linear n-normed spaces and describe the notion of reflexivity of a linear n-normed space with respect to bounded b-linear functionals. The concepts of strong convergence and weak convergence of a sequence of vectors with respect to bounded b-linear functionals in a linear n-normed space are introduced and some of their properties are discussed._ Reflexivity of linear $n$-normed space with respect to $b$-linear functional Prasenjit Ghosh Department of Pure Mathematics, University of Calcutta, 35, Ballygunge Circular Road, Kolkata, 700019, West Bengal, India e-mail<EMAIL_ADDRESS> T. K. Samanta Department of Mathematics, Uluberia College, Uluberia, Howrah, 711315, West Bengal, India e-mail<EMAIL_ADDRESS> Keywords: _Hahn-Banach theorem, Reflexivity of normed linear space, Weak and strong convergence, Linear n-normed space, n-Banach space._ 2010 Mathematics Subject Classification: 46A22, 46B07, 46B25. ## 1 Introduction The dual space of a normed linear space is the set of all bounded linear functionals on the space. In some cases, the dual of the dual space, i.e., the second dual space of a normed space, is isometrically isomorphic to the original space under a specific mapping called the natural embedding. Such normed spaces are known as reflexive spaces. This concept was introduced by H. Hahn in 1927 and named reflexivity by E. R. Lorch in 1939. Hahn recognized the importance of reflexivity in his study of linear equations in normed spaces. Weak convergence of sequences of vectors in a normed space is a certain kind of interplay between a normed space and its dual space. This concept reflects a fundamental principle of functional analysis: the investigation of normed spaces is generally linked with that of their dual spaces.
Weak convergence has various applications in the calculus of variations, the general theory of differential equations and, in fact, plays an important role in many problems of analysis. The notion of linear $2$-normed space was introduced by S. Gahler [2]. A survey of the theory of linear $2$-normed spaces can be found in [1]. The concept of $2$-Banach space is briefly discussed in [7]. H. Gunawan and Mashadi [4] generalized linear $2$-normed spaces to linear $n$-normed spaces for $n\,\geq\,2$. In this paper, some important consequences of the Hahn-Banach theorem for bounded $b$-linear functionals in linear $n$-normed spaces are discussed. We shall introduce the notion of $b$-reflexivity of linear $n$-normed space and see that a closed subspace of a $b$-reflexive $n$-Banach space is also $b$-reflexive. Finally, $b$-weak convergence and $b$-strong convergence of a sequence of vectors in a linear $n$-normed space in terms of bounded $b$-linear functionals are introduced and characterized. ## 2 Preliminaries ###### Theorem 2.1. [5] Let $\left\\{\,T_{k}\,\right\\}$ be a sequence of bounded linear operators $T_{k}\,:\,Y\,\to\,Z$ from a Banach space $Y$ into a normed space $Z$ such that $\left\\{\,\left\|\,T_{k}\,(\,x\,)\,\right\|\,\right\\}$ is bounded for every $x\,\in\,Y$. Then the sequence of norms $\left\\{\,\left\|\,T_{k}\,\right\|\,\right\\}$ is bounded. ###### Definition 2.2. [4] Let $X$ be a linear space over the field $\mathbb{K}$, where $\mathbb{K}$ is the field of real or complex numbers, with $\text{dim}\,X\,\geq\,n$, where $n$ is a positive integer.
A real valued function $\left\|\,\cdot\,,\,\cdots\,,\,\cdot\,\right\|\,:\,X^{\,n}\,\to\,\mathbb{R}$ is called an n-norm on $X$ if * (N1) $\left\|\,x_{\,1}\,,\,x_{\,2}\,,\,\cdots\,,\,x_{\,n}\,\right\|\,=\,0$ if and only if $x_{\,1},\,\cdots,\,x_{\,n}$ are linearly dependent, * (N2) $\left\|\,x_{\,1}\,,\,x_{\,2}\,,\,\cdots\,,\,x_{\,n}\,\right\|$ is invariant under permutations of $x_{\,1},\,x_{\,2},\,\cdots,\,x_{\,n}$, * (N3) $\left\|\,\alpha\,x_{\,1}\,,\,x_{\,2}\,,\,\cdots\,,\,x_{\,n}\,\right\|\,=\,|\,\alpha\,|\,\left\|\,x_{\,1}\,,\,x_{\,2}\,,\,\cdots\,,\,x_{\,n}\,\right\|\;\;\;\forall\;\;\alpha\,\in\,\mathbb{K}$, * (N4) $\left\|\,x\,+\,y\,,\,x_{\,2}\,,\,\cdots\,,\,x_{\,n}\,\right\|\,\leq\,\left\|\,x\,,\,x_{\,2}\,,\,\cdots\,,\,x_{\,n}\,\right\|\,+\,\left\|\,y\,,\,x_{\,2}\,,\,\cdots\,,\,x_{\,n}\,\right\|$ hold for all $x,\,y,\,x_{\,1},\,x_{\,2},\,\cdots,\,x_{\,n}\,\in\,X$. The pair $\left(\,X\,,\,\left\|\,\cdot\,,\,\cdots\,,\,\cdot\,\right\|\,\right)$ is then called a linear n-normed space. For the particular value $n\,=\,2$, the space $X$ is said to be a linear 2-normed space [2]. Throughout this paper, $X$ will denote a linear $n$-normed space over the field $\mathbb{K}$ ( $=\,\mathbb{R}\,\;\text{or}\,\;\mathbb{C}$ ) associated with the $n$-norm $\|\,\cdot\,,\,\cdots\,,\,\cdot\,\|$. ###### Definition 2.3. [4] A sequence $\\{\,x_{\,k}\,\\}\,\subseteq\,X$ is said to converge to $x\,\in\,X$ if $\lim\limits_{k\to\infty}\,\left\|\,x_{\,k}\,-\,x\,,\,x_{\,2}\,,\,\cdots\,,\,x_{\,n}\,\right\|\,=\,0$ for every $x_{\,2},\,\cdots,\,x_{\,n}\,\in\,X$ and it is called a Cauchy sequence if $\lim\limits_{l\,,\,k\to\infty}\,\left\|\,x_{\,l}\,-\,x_{\,k}\,,\,x_{\,2}\,,\,\cdots\,,\,x_{\,n}\,\right\|\,=\,0$ for every $x_{\,2},\,\cdots,\,x_{\,n}\,\in\,X$. The space $X$ is said to be complete, or an n-Banach space, if every Cauchy sequence in this space converges in $X$. A 2-Banach space [7] is a particular case of an n-Banach space for $n\,=\,2$. ###### Definition 2.4.
[6] We define the following open and closed balls in $X$: $B_{\,\\{\,e_{\,2}\,,\,\cdots\,,\,e_{\,n}\,\\}}\,(\,a\,,\,\delta\,)\,=\,\left\\{\,x\,\in\,X\,:\,\left\|\,x\,-\,a\,,\,e_{\,2}\,,\,\cdots\,,\,e_{\,n}\,\right\|\,<\,\delta\,\right\\}\;\text{and}$ $B_{\,\\{\,e_{\,2}\,,\,\cdots\,,\,e_{\,n}\,\\}}\,[\,a\,,\,\delta\,]\,=\,\left\\{\,x\,\in\,X\,:\,\left\|\,x\,-\,a\,,\,e_{\,2}\,,\,\cdots\,,\,e_{\,n}\,\right\|\,\leq\,\delta\,\right\\},\hskip 14.22636pt$ where $a,\,e_{\,2},\,\cdots,\,e_{\,n}\,\in\,X$ and $\delta$ is a positive number. ###### Definition 2.5. [6] A subset $G$ of $X$ is said to be open in $X$ if for all $a\,\in\,G$, there exist $e_{\,2},\,\cdots,\,e_{\,n}\,\in\,X$ and $\delta\,>\,0$ such that $B_{\,\\{\,e_{\,2}\,,\,\cdots\,,\,e_{\,n}\,\\}}\,(\,a\,,\,\delta\,)\,\subseteq\,G$. ###### Definition 2.6. [6] Let $A\,\subseteq\,X$. Then the closure of $A$ is defined as $\overline{A}\,=\,\left\\{\,x\,\in\,X\;|\;\,\exists\,\;\\{\,x_{\,k}\,\\}\,\subseteq\,A\;\;\textit{with}\;\lim\limits_{k\,\to\,\infty}x_{\,k}\,=\,x\,\right\\}.$ The set $A$ is said to be closed if $A\,=\,\overline{A}$. ###### Definition 2.7. [3] Let $W$ be a subspace of $X$, let $b_{\,2},\,b_{\,3},\,\cdots,\,b_{\,n}$ be fixed elements in $X$, and let $\left<\,b_{\,i}\,\right>$ denote the subspace of $X$ generated by $b_{\,i}$, for $i\,=\,2,\,3,\,\cdots,\,n$. Then a map $T\,:\,W\,\times\,\left<\,b_{\,2}\,\right>\,\times\,\cdots\,\times\,\left<\,b_{\,n}\,\right>\,\to\,\mathbb{K}$ is called a b-linear functional on $W\,\times\,\left<\,b_{\,2}\,\right>\,\times\,\cdots\,\times\,\left<\,b_{\,n}\,\right>$ if, for every $x,\,y\,\in\,W$ and $k\,\in\,\mathbb{K}$, the following conditions hold: * (I) $T\,(\,x\,+\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,+\,T\,(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)$ * (II) $T\,(\,k\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,k\;T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)$.
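As a concrete illustration of the two preceding definitions (a standard example, not taken from this paper; all names in the sketch are illustrative), take $n\,=\,2$, $X\,=\,\mathbb{R}^{3}$ and $\|\,x\,,\,y\,\|\,=\,|\,x\,\times\,y\,|$, the area of the parallelogram spanned by $x$ and $y$. For a fixed vector $u$, the map $T\,(\,x\,,\,b_{\,2}\,)\,=\,u\,\cdot\,(\,x\,\times\,b_{\,2}\,)$ satisfies (I) and (II) by linearity of the cross product in its first slot, and it is bounded with $M\,=\,|\,u\,|$ by the Cauchy-Schwarz inequality. A minimal numerical spot-check:

```python
import math
import random

def cross(a, b):
    # cross product in R^3
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm2(x, y):
    # standard 2-norm on R^3: ||x, y|| = |x x y| (parallelogram area)
    return math.sqrt(sum(c*c for c in cross(x, y)))

def T(x, b2, u):
    # illustrative b-linear functional T(x, b2) = u . (x x b2)
    return sum(ui*ci for ui, ci in zip(u, cross(x, b2)))

b2 = (0.0, 0.0, 1.0)   # fixed element spanning <b2> (arbitrary choice)
u = (1.0, 2.0, 3.0)    # arbitrary fixed vector defining T
mod_u = math.sqrt(sum(c*c for c in u))

rng = random.Random(0)
for _ in range(100):
    x = tuple(rng.uniform(-1, 1) for _ in range(3))
    y = tuple(rng.uniform(-1, 1) for _ in range(3))
    k = rng.uniform(-2, 2)
    xy = tuple(a + b for a, b in zip(x, y))
    kx = tuple(k*a for a in x)
    # (I) additivity and (II) homogeneity in the first slot
    assert abs(T(xy, b2, u) - (T(x, b2, u) + T(y, b2, u))) < 1e-9
    assert abs(T(kx, b2, u) - k*T(x, b2, u)) < 1e-9
    # boundedness: |T(x, b2)| <= |u| * ||x, b2||
    assert abs(T(x, b2, u)) <= mod_u * norm2(x, b2) + 1e-9
print("all checks passed")
```

Here $W\,=\,X\,=\,\mathbb{R}^{3}$; the block only spot-checks the defining conditions on random inputs and is, of course, not a proof.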
A b-linear functional is said to be bounded if there exists a real number $M\,>\,0$ such that $\left|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\,\leq\,M\;\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\;\;\forall\;x\,\in\,W.$ The norm of the bounded b-linear functional $T$ is defined by $\|\,T\,\|\,=\,\inf\,\left\\{\,M\,>\,0\,:\,\left|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\,\leq\,M\;\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\;\forall\;x\,\in\,W\,\right\\}.$ The norm of $T$ can be expressed by any one of the following equivalent formulas: * (I) $\|\,T\,\|\,=\,\sup\,\left\\{\,\left|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\;:\;\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\leq\,1\,\right\\}$. * (II) $\|\,T\,\|\,=\,\sup\,\left\\{\,\left|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\;:\;\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,1\,\right\\}$. * (III) $\|\,T\,\|\,=\,\sup\,\left\\{\,\dfrac{\left|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|}{\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|}\;:\;\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\neq\,0\,\right\\}$. Also, we have $\left|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\,\leq\,\|\,T\,\|\,\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\;\forall\;x\,\in\,W$. Let $X_{F}^{\,\ast}$ denote the Banach space of all bounded b-linear functionals defined on $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ with respect to the above norm. ###### Definition 2.8.
[3] A set $\mathcal{A}$ of bounded b-linear functionals defined on $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ is said to be pointwise bounded if for each $x\,\in\,X$, the set $\left\\{\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,:\,T\,\in\,\;\mathcal{A}\,\right\\}$ is a bounded set in $\mathbb{K}$, and uniformly bounded if there exists $K\,>\,0$ such that $\|\,T\,\|\,\leq\,K\;\;\;\forall\;T\,\in\,\mathcal{A}$. ###### Theorem 2.9. [3] Let $X$ be an n-Banach space over the field $\mathbb{K}$. If a set $\mathcal{A}$ of bounded b-linear functionals on $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\,\cdots\,\times\,\left<\,b_{\,n}\,\right>$ is pointwise bounded, then it is uniformly bounded. ###### Theorem 2.10. [3] Let $X$ be a linear n-normed space over the field $\mathbb{R}$ and $W$ be a subspace of $\,X$. Then each bounded b-linear functional $T_{\,W}$ defined on $W\,\times\,\left<\,b_{\,2}\,\right>\,\times\,\cdots\,\times\,\left<\,b_{\,n}\,\right>$ can be extended onto $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\,\cdots\,\times\,\left<\,b_{\,n}\,\right>$ with preservation of the norm. In other words, there exists a bounded b-linear functional $T$ defined on $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\,\cdots\,\times\,\left<\,b_{\,n}\,\right>$ such that $T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T_{\,W}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\forall\;x\,\in\,W\,\;\;\&\;\;\left\|\,T_{\,W}\,\right\|\,=\,\|\,T\,\|.$ ###### Theorem 2.11. [3] Let $X$ be a linear n-normed space over the field $\mathbb{R}$ and $x_{\,0}$ be an arbitrary non-zero element in $X$. Then there exists a bounded b-linear functional $T$ defined on $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\,\cdots\,\times\,\left<\,b_{\,n}\,\right>$ such that $\|\,T\,\|\,=\,1\;\;\;\text{and}\;\;\;T\,(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,\left\|\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|.$ ###### Theorem 2.12.
[3] Let $X$ be a linear n-normed space over the field $\mathbb{R}$ and $\,x\,\in\,X$. Then $\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,\sup\,\left\\{\,\dfrac{\left|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|}{\|\,T\,\|}\,:\,T\,\in\,X^{\,\ast}_{F}\;,\;T\,\neq\,0\,\right\\}.$ ## 3 Consequences of Hahn-Banach theorem in linear $n$-normed space In this section, we shall consider some immediate corollaries and important consequences of the Hahn-Banach extension theorem for bounded $b$-linear functionals [3] in linear $n$-normed spaces. ###### Theorem 3.1. Let $X$ be a linear n-normed space over the field $\mathbb{R}$ and let $x,\,y$ be two distinct points of $X$ such that the set $\left\\{\,x,\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\\}$ or $\left\\{\,y,\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\\}$ is linearly independent. Then $\exists$ $T\,\in\,X_{F}^{\,\ast}$ such that $T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\neq\,T\,(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,).$ ###### Proof. Consider $z\,=\,x\,-\,y$. Then $\theta\,\neq\,z\,\in\,X$ and therefore by Theorem (2.11), $\exists\,\;T\,\in\,X^{\,\ast}_{F}$ such that $T\,(\,z\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,\left\|\,z\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\;\;\text{and}\;\;\|\,T\,\|\,=\,1$ $\Rightarrow\;T\,(\,x\,-\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,\left\|\,x\,-\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\neq\,0\hskip 14.22636pt$ $\Rightarrow\;T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,-\,T\,(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\neq\,0\hskip 56.9055pt$ $\Rightarrow\;T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\neq\,T\,(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,).\hskip 75.11525pt$ ∎ ###### Corollary 3.2. If $X\,\neq\,\\{\,\theta\,\\}$ is a linear n-normed space, then there are always non-trivial bounded b-linear functionals on $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\,\cdots\,\times\,\left<\,b_{\,n}\,\right>$, i.
e., $X\,\neq\,\\{\,\theta\,\\}\,\Rightarrow\,X^{\,\ast}_{F}\,\neq\,\\{\,O\,\\},\,O$ being the null operator. ###### Proof. This is an immediate consequence of Theorem (2.11). ∎ ###### Corollary 3.3. Let $X$ be a linear n-normed space. If $T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,0$ for all $T\,\in\,X^{\,\ast}_{F}$, then $x\,=\,\theta.$ ###### Proof. If possible, let $x\,\neq\,\theta$. Then by Theorem (2.11), $\exists\,\,T\,\in\,X^{\,\ast}_{F}$ such that $T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\neq\,0$. This contradicts the hypothesis. Hence the result follows. ∎ We now proceed to present another implication of the Hahn-Banach theorem for bounded $b$-linear functionals and establish that there are always sufficiently many bounded $b$-linear functionals on a linear $n$-normed space to separate points from proper subspaces. ###### Theorem 3.4. Let $X$ be a linear n-normed space over the field $\mathbb{R}$ and $W$ be a subspace of $X$ and let $x_{\,0}\,\in\,X$ such that $x_{\,0},\,b_{\,2},\,\cdots,\,b_{\,n}$ are linearly independent and suppose $d\,=\,\inf\limits_{x\,\in\,W}\,\left\|\,x_{\,0}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,>\,0$. Then $\exists$ $T\,\in\,X^{\,\ast}_{F}$ such that * (I) $T\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,1$, * (II) $T\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W\;\;\;\text{and}\;\;\;\|\,T\,\|\,=\,\dfrac{1}{d}$. ###### Proof. Let $W_{\,0}\,=\,W\,+\,\left<\,x_{\,0}\,\right>$ be the space spanned by $W$ and $x_{\,0}$. Since $d\,>\,0$, we have $x_{\,0}\,\not\in W$. Therefore, each $x\,\in\,W_{0}$ can be expressed uniquely in the form $x\,=\,y\,+\,\alpha\,x_{\,0}$, with $y\,\in\,W$ and $\alpha\,\in\,\mathbb{R}$.
We define a functional as follows: $T_{\,1}\,:\,W_{0}\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>\,\to\,\mathbb{R},\;T_{\,1}\,\left(\,y\,+\,\alpha\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,\alpha.$ Then clearly $T_{\,1}$ is a $b$-linear functional on $W_{0}\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ satisfying $T_{\,1}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W\;\,\;\text{and}\,\;\,T_{\,1}\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,1.$ Also, for each $x\,\in\,W_{0}$ with $\alpha\,\neq\,0$ (for $\alpha\,=\,0$ we have $T_{\,1}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0$ and the bound below holds trivially), we have $\left|\,T_{\,1}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|\,=\,\left|\,T_{\,1}\,\left(\,y\,+\,\alpha\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|\,=\,\,|\,\alpha\,|\hskip 113.81102pt$ $\,=\,\dfrac{|\,\alpha\,|\,\left\|\,x,\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\|}{\left\|\,x,\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\|}=\,\dfrac{|\,\alpha\,|\,\left\|\,x,b_{\,2},\,\cdots,\,b_{\,n}\,\right\|}{\left\|\,y\,+\,\alpha\,x_{\,0},\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\|}\,=\,\dfrac{|\,\alpha\,|\,\left\|\,x,\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\|}{|\,\alpha\,|\,\left\|\,\dfrac{y}{\alpha}\,+\,x_{\,0},\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\|}$ $\,=\,\dfrac{\left\|\,x,\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\|}{\left\|\,x_{\,0}\,-\,\left(\,-\,\dfrac{y}{\alpha}\,\right)\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|}\,\leq\,\dfrac{\left\|\,x,\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\|}{d}\;\;\left[\;\text{since}\;\,-\,\dfrac{y}{\alpha}\,\in\,W\right].\hskip 28.45274pt$ This shows that $T_{\,1}$ is a bounded $b$-linear functional with $\|\,T_{\,1}\,\|\,\leq\,\dfrac{1}{d}$.
To prove $\|\,T_{\,1}\,\|\,\geq\,\dfrac{1}{d}$, we consider a sequence $\left\\{\,x_{\,k}\,\right\\},\,x_{\,k}\,\in\,W$ such that $\lim\limits_{k\,\to\,\infty}\,\left\|\,x_{\,0}\,-\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,d.$ $\text{Now},\hskip 8.5359pt1\,=\,\left|\,T_{\,1}\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,-\,T_{\,1}\,\left(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|\hskip 56.9055pt$ $\hskip 28.45274pt\,=\,\left|\,T_{\,1}\,\left(\,x_{\,0}\,-\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|\,\leq\,\|\,T_{\,1}\,\|\,\left\|\,x_{\,0}\,-\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|$ $\Rightarrow\,1\,\leq\,\|\,T_{\,1}\,\|\,\lim\limits_{k\,\to\,\infty}\,\left\|\,x_{\,0}\,-\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,\|\,T_{\,1}\,\|\;d\,\Rightarrow\,\|\,T_{\,1}\,\|\,\geq\,\dfrac{1}{d}\;.$ Thus, we have established that $\exists$ a bounded $b$-linear functional $T_{\,1}$ on $W_{0}\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ such that $T_{\,1}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W,\;\;T_{\,1}\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,1\;\;\text{and}\;\,\|\,T_{\,1}\,\|\,=\,\dfrac{1}{d}.$ Applying the Theorem (2.10), we obtain a $b$-linear functional $T\,\in\,X^{\,\ast}_{F}$ such that $T\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,T_{\,1}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\;\;\forall\;x\,\in\,W_{0}\;\;\text{and}\;\,\|\,T\,\|\,=\,\|\,T_{\,1}\,\|\,=\,\dfrac{1}{d}.$ $\text{So},\hskip 8.5359ptT\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,T_{\,1}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W\;\;\text{and}$ $T\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,T_{\,1}\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,1.$ Hence, the proof of the theorem is 
complete. ∎ ###### Remark 3.5. The Theorem (3.4) is a generalization of the Theorem (2.11), which can be derived from it as follows: Consider $W\,=\,\\{\,\theta\,\\}$ and $d\,=\,\left\|\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|$; then by the Theorem (3.4), there exists a bounded $b$-linear functional $T_{\,0}\,\in\,X^{\,\ast}_{F}$ such that $\left\|\,T_{\,0}\,\right\|\,=\,\dfrac{1}{d}\,=\,\dfrac{1}{\left\|\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|}\;\;\text{and}\;\;T_{\,0}\,(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,1.$ Now, for all $x\,\in\,X$, we define $T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,\left\|\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\cdot\,T_{\,0}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,),\;\text{then}$ $T\,(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,\left\|\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\;T_{\,0}\,(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,\left\|\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|$ $\text{Also},\;\|\,T\,\|\,=\,\sup\left\\{\,\dfrac{|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,|}{\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\|}\,:\,\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\|\,\neq\,0\,\right\\}$ $\,=\,\sup\left\\{\,\dfrac{\left|\,\left\|\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\;T_{\,0}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|}{\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\|}\,:\,\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\|\,\neq\,0\,\right\\}$ $\,=\,\left\|\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\;\sup\left\\{\,\dfrac{|\,T_{\,0}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,|}{\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\|}\,:\,\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\|\,\neq\,0\,\right\\}$ $\,=\,\left\|\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\;\|\,T_{\,0}\,\|\,=\,1.\hskip 210.55022pt$ ###### Corollary 3.6.
Let $X$ be a linear n-normed space over the field $\mathbb{R}$ and $W$ be a subspace of $X$ and let $x_{\,0}\,\in\,X$ such that $x_{\,0},\,b_{\,2},\,\cdots,\,b_{\,n}$ are linearly independent and suppose $d\,=\,\inf\limits_{x\,\in\,W}\,\left\|\,x_{\,0}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,>\,0$. Then * (I) $T\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,d$, * (II) $T\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W\;\;\;\&\;\;\;\|\,T\,\|\,=\,1$, for some $T\,\in\,X^{\,\ast}_{F}$. ###### Proof. By Theorem (3.4), $\exists$ $T_{\,1}\,\in\,X^{\,\ast}_{F}$ such that $T_{\,1}\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,1,\;T_{\,1}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W\;\;\text{and}\;\,\|\,T_{\,1}\,\|\,=\,\dfrac{1}{d}.$ Define the bounded $b$-linear functional $T$ on $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ by $T\,=\,d\;T_{\,1}$. Then $T\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,d\;T_{\,1}\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,d$, $T\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,d\;T_{\,1}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W$ with $\|\,T\,\|\,=\,d\;\|\,T_{\,1}\,\|\,=\,\dfrac{d}{d}\,=\,1$. This completes the proof. ∎ ###### Corollary 3.7. Let $X$ be a linear n-normed space over the field $\mathbb{R}$ and $W$ be a closed linear subspace of $X$ and let $x_{\,0}\,\in\,X\,-\,W$ such that $x_{\,0},\,b_{\,2},\,\cdots,\,b_{\,n}$ are linearly independent and suppose $d\,=\,\inf\limits_{x\,\in\,W}\,\left\|\,x_{\,0}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|$. 
Then $\exists$ $T\,\in\,X^{\,\ast}_{F}$ such that * (I) $T\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,1$, * (II) $T\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W\;\;\text{and}\;\;\|\,T\,\|\,=\,\dfrac{1}{d}$. ###### Proof. It can be easily verified that $\inf\limits_{x\,\in\,W}\,\left\|\,x_{\,0}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,0$ if and only if $x_{\,0}\,\in\,\overline{\,W}$. Since $x_{\,0}\,\not\in\,W$ and $W\,=\,\overline{\,W}$, it follows that $x_{\,0}\,\not\in\,\overline{\,W}$. Hence $d\,=\,\inf\limits_{x\,\in\,W}\,\left\|\,x_{\,0}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,>\,0.$ Now, the proof of this corollary follows from Theorem (3.4). ∎ ###### Corollary 3.8. Let $X$ be a linear n-normed space over the field $\mathbb{R}$ and $W$ be a closed linear subspace of $X$ and let $x_{\,0}\,\in\,X\,-\,W$ such that $x_{\,0},\,b_{\,2},\,\cdots,\,b_{\,n}$ are linearly independent. Then $\exists$ $T\,\in\,X^{\,\ast}_{F}$ such that $T\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\neq\,0\;\;\text{and}\;\;T\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W.$ ###### Proof. The proof of this corollary follows directly from that of Corollary (3.7). ∎ The Hahn-Banach Theorem for bounded $b$-linear functionals and its consequences can be used to reveal much about the properties of a linear $n$-normed space and its dual space. The next theorem relates the separability of the dual space to that of the original space. ###### Theorem 3.9. Let $X$ be a linear n-normed space over the field $\mathbb{R}$ and $X^{\,\ast}_{F}$ be the Banach space of all bounded b-linear functionals defined on $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$. Then the space $X$ is separable if $X^{\,\ast}_{F}$ is separable. ###### Proof.
Since $X^{\,\ast}_{F}$ is separable, $\exists$ a countable set $S\,=\,\left\\{\;T_{k}\,\in\,X^{\,\ast}_{F}\,:\,k\,\in\,\mathbb{N}\;\right\\}$ such that $S$ is dense in $X^{\,\ast}_{F}$, i. e., $\overline{S}\,=\,X^{\,\ast}_{F}$. For each $k\,\in\,\mathbb{N}$, choose $x_{\,k}\,\in\,X$ such that $\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,1$ and $\left|\,T_{k}\left(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|\,\geq\,\dfrac{1}{2}\,\left\|\,T_{k}\,\right\|$. Let $W$ be the closed subspace of $X$ generated by the sequence $\left\\{\,x_{\,k}\,\right\\}_{k\,=\,1}^{\,\infty}$, i. e., $W\,=\,\overline{\,span}\,\left\\{\,x_{\,k}\,\in\,X\,:\,k\,\in\,\mathbb{N}\,\right\\}$. Suppose $W\,\neq\,X$. Let $x_{\,0}\,\in\,X\,-\,W$ such that $x_{\,0},\,b_{\,2},\,\cdots,\,b_{\,n}$ are linearly independent. By Corollary (3.8), $\exists$ $0\,\neq\,T\,\in\,X^{\,\ast}_{F}$ such that $T\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\neq\,0\;\;\text{and}\;\;T\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0\;\;\;\forall\;x\,\in\,W.$ Since $\left\\{\,x_{\,k}\,\right\\}_{k\,=\,1}^{\,\infty}\,\subseteq\,W$, $T\,\left(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0,\;k\,\in\,\mathbb{N}$.
Thus $\dfrac{1}{2}\,\left\|\,T_{k}\,\right\|\,\leq\,\left|\,T_{k}\left(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|\,=\,\left|\,T_{k}\left(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,-\,T\,\left(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|$ $\hskip 99.58464pt\,\leq\,\left\|\,T_{k}\,-\,T\,\right\|\,\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|$ $\hskip 167.87108pt=\,\left\|\,T_{k}\,-\,T\,\right\|\;[\;\text{since}\;\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,1\;].$ Again, since $\overline{\,S}\,=\,X^{\,\ast}_{F}$, for each $T\,\in\,X^{\,\ast}_{F}$, $\exists\,\;\text{a sequence}\;\,\left\\{\,T_{k}\,\right\\}$ in $S$ such that $\lim\limits_{k\,\to\,\infty}\,T_{k}\,=\,T$. Therefore, $\|\,T\,\|\,\leq\,\left\|\,T_{k}\,-\,T\,\right\|\,+\,\left\|\,T_{k}\,\right\|\,\leq\,3\,\left\|\,T_{k}\,-\,T\,\right\|\;\;\forall\,k\,\in\,\mathbb{N}.$ Taking the limit as $k\,\to\,\infty$, it follows that $T\,=\,0$, which contradicts the fact that $T\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\neq\,0$. Hence, $W\,=\,X$, and since $X$ is then the closed linear span of a countable set, $X$ is separable. ∎ ## 4 Reflexivity of linear $n$-normed space Recall that given a linear $n$-normed space $X\,\neq\,\\{\,\theta\,\\}$, the dual space $X_{F}^{\,\ast}$ is a normed space with respect to the norm $\|\,\cdot\,\|\,:\,X_{F}^{\,\ast}\,\to\,\mathbb{R}$ defined by $\|\,T\,\|\,=\,\sup\,\left\\{\,\left|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\,:\,x\,\in\,X,\,\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,1\,\right\\}.$ Furthermore, $X_{F}^{\,\ast}$ is a Banach space.
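The dual norm above can be made concrete in a small numerical sketch (the example is illustrative and not taken from the paper; the vectors $u$ and $b_{\,2}$ are arbitrary choices): take $X\,=\,\mathbb{R}^{3}$ with the standard $2$-norm $\|\,x\,,\,y\,\|\,=\,|\,x\,\times\,y\,|$ and the bounded $b$-linear functional $T\,(\,x\,,\,b_{\,2}\,)\,=\,u\,\cdot\,(\,x\,\times\,b_{\,2}\,)$. Since $x\,\mapsto\,x\,\times\,b_{\,2}$ maps onto the plane orthogonal to $b_{\,2}$, the supremum defining $\|\,T\,\|$ equals the length of the component of $u$ orthogonal to $b_{\,2}$; a random search over vectors with $\|\,x\,,\,b_{\,2}\,\|\,=\,1$ approaches this value from below:

```python
import math
import random

def cross(a, b):
    # cross product in R^3
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm2(x, y):
    # 2-norm on R^3: ||x, y|| = |x x y|
    return math.sqrt(sum(c*c for c in cross(x, y)))

b2 = (0.0, 0.0, 1.0)   # fixed second slot (arbitrary choice)
u = (1.0, 2.0, 3.0)    # defines T(x, b2) = u . (x x b2)

def T(x):
    return sum(ui*ci for ui, ci in zip(u, cross(x, b2)))

# closed form for this example: ||T|| is the length of the
# component of u orthogonal to b2, i.e. sqrt(u1^2 + u2^2) here
exact = math.sqrt(u[0]**2 + u[1]**2)

# sample ratios |T(x)| / ||x, b2||; each is <= ||T||,
# and with many samples the supremum is essentially attained
rng = random.Random(1)
best = 0.0
for _ in range(20000):
    x = tuple(rng.uniform(-1, 1) for _ in range(3))
    n = norm2(x, b2)
    if n > 1e-6:   # skip x nearly parallel to b2
        best = max(best, abs(T(x)) / n)

assert best <= exact + 1e-9   # every ratio is bounded by ||T||
assert exact - best < 1e-2    # and the bound is essentially attained
print(round(best, 3), round(exact, 3))
```

The sketch mirrors formula (III) for the dual norm; it is a plausibility check under the stated assumptions, not a computation of the norm in general.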
Also, by Corollary (3.2), $X_{F}^{\,\ast}\,\neq\,\\{\,O\,\\}$ and, therefore, as a normed space $X_{F}^{\,\ast}$ has its own dual space $\left(\,X_{F}^{\,\ast}\,\right)^{\,\ast}$, denoted by $X_{F}^{\,\ast\,\ast}$ and called the second dual space of $X$; it is again a Banach space under the norm $\|\,\varphi\,\|\,=\,\sup\,\left\\{\,\left|\,\varphi\,(\,T\,)\,\right|\,\;:\;T\,\in\,X_{F}^{\,\ast}\,,\,\|\,T\,\|\,\leq\,1\,\right\\}\,,\,\varphi\,\in\,X_{F}^{\,\ast\,\ast}.$ ###### Theorem 4.1. Let $X$ be a real linear n-normed space. Given $x\,\in\,X$, let $\varphi_{(\,x\,,\,F\,)}\,(\,T\,)\,=\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\;\forall\;T\,\in\,X_{F}^{\,\ast}.$ (1) Then $\varphi_{(\,x\,,\,F\,)}$ is a bounded linear functional on $X_{F}^{\,\ast}$. Furthermore, the mapping $\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\to\,\varphi_{(\,x\,,\,F\,)}$ is an isometric isomorphism of $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ onto the subspace $\left\\{\,\varphi_{(\,x\,,\,F\,)}\,:\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\in\,X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>\,\right\\}$ of $X_{F}^{\,\ast\,\ast}$. ###### Proof. Let $\alpha,\,\beta\,\in\,\mathbb{R}$. Then $\varphi_{(\,x\,,\,F\,)}\,\left(\,\alpha\,T_{\,1}\,+\,\beta\,T_{\,2}\,\right)\,=\,\left(\,\alpha\,T_{\,1}\,+\,\beta\,T_{\,2}\,\right)\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\hskip 62.59596pt$ $\hskip 122.34692pt=\,\alpha\,T_{\,1}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,+\,\beta\,T_{\,2}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)$ $\hskip 136.5733pt=\,\alpha\,\varphi_{(\,x\,,\,F\,)}\,(\,T_{\,1}\,)\,+\,\beta\,\varphi_{(\,x\,,\,F\,)}\,(\,T_{\,2}\,)\;\;\forall\;T_{\,1}\,,\,T_{\,2}\,\in\,X_{F}^{\,\ast}.$ So, $\varphi_{(\,x\,,\,F\,)}$ is a linear functional.
Also, for all $T\,\in\,X_{F}^{\,\ast}$, we have $\left|\,\varphi_{(\,x\,,\,F\,)}\,(\,T\,)\,\right|\,=\,\left|\,T\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|\,\leq\,\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\|\,T\,\|.$ Consequently, $\varphi_{(\,x\,,\,F\,)}\,\in\,X_{F}^{\,\ast\,\ast}$ with $\left\|\,\varphi_{(\,x\,,\,F\,)}\,\right\|\,\leq\,\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|$. Moreover, such $\varphi_{(\,x\,,\,F\,)}$ is unique. So, for every fixed $x\,\in\,X$ there corresponds a unique bounded linear functional $\varphi_{(\,x\,,\,F\,)}\,\in\,X_{F}^{\,\ast\,\ast}$ given by (1). This defines a function $J\,:\,X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>\,\to\,X_{F}^{\,\ast\,\ast}$ given by $J\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,\varphi_{(\,x\,,\,F\,)}$. We now verify that $J$ is an isomorphism between $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ and the range of $J$, which is a subspace of $X_{F}^{\,\ast\,\ast}$. * (I) Let $x,\,y\,\in\,X\;\;\text{and}\;\;\alpha,\,\beta\,\in\,\mathbb{R}$. 
Then for all $T\,\in\,X_{F}^{\,\ast}$, we have $\left[\,J\,\left(\,\alpha\,x\,+\,\beta\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right]\,(\,T\,)\,=\,\varphi_{(\,\alpha\,x\,+\,\beta\,y\,,\,F\,)}\,(\,T\,)$ $=\,T\,\left(\,\alpha\,x\,+\,\beta\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,\alpha\;T\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,+\,\beta\;T\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)$ $=\,\alpha\;\varphi_{(\,x\,,\,F\,)}\,(\,T\,)\,+\,\beta\;\varphi_{(\,y\,,\,F\,)}\,(\,T\,)\,=\,\left(\,\alpha\;\varphi_{(\,x\,,\,F\,)}\,+\,\beta\;\varphi_{(\,y\,,\,F\,)}\,\right)\,(\,T\,)\hskip 51.21504pt$ $=\,\left[\,\alpha\;J\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,+\,\beta\;J\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right]\,(\,T\,)\;\;\;\forall\;T\,\in\,X_{F}^{\,\ast}.\hskip 39.83368pt$ $\Rightarrow\;J\,\left(\,\alpha\,x\,+\,\beta\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,\alpha\;J\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,+\,\beta\;J\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right).$ This shows that $J$ is a $b$-linear operator. 
* (II) $J$ preserves the norm: For each $\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\in\,X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$, we have $\left\|\,J\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right\|\,=\,\left\|\,\varphi_{(\,x\,,\,F\,)}\,\right\|\,=\,\sup\left\\{\,\dfrac{\left|\,\varphi_{(\,x\,,\,F\,)}\,(\,T\,)\,\right|}{\|\,T\,\|}\,:\,T\,\in\,X_{F}^{\,\ast}\,,\,T\,\neq\,0\,\right\\}$ $\hskip 99.58464pt=\,\sup\left\\{\,\dfrac{\left|\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|}{\|\,T\,\|}\,:\,T\,\in\,X_{F}^{\,\ast}\,,\,T\,\neq\,0\,\right\\}$ $\hskip 56.9055pt=\,\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\;\;[\;\text{by Theorem (\ref{th1.3})}\;].$ (2) * (III) $J$ is injective: Let $x,\,y\,\in\,X$ with $x\,\neq\,y$ such that the set $\left\\{\,x,\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\\}$ or $\left\\{\,y,\,b_{\,2},\,\cdots,\,b_{\,n}\,\right\\}$ is linearly independent. Then by (2), $\left\|\,x\,-\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\neq\,0\,\Rightarrow\,\left\|\,J\,\left(\,x\,-\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right\|\,\neq\,0$ $\Rightarrow\,\left\|\,J\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,-\,J\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right\|\,\neq\,0$ $\Rightarrow\,J\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\neq\,J\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right).\hskip 28.45274pt$ We thus conclude that $J$ is an isometric isomorphism of $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ onto the subspace of $X_{F}^{\,\ast\,\ast}$. This completes the proof. ∎ ###### Definition 4.2. Let $X$ be a linear n-normed space over the field $\mathbb{R}$. 
The isometric isomorphism $J\,:\,X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>\,\to\,X_{F}^{\,\ast\,\ast}$ defined by $J\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,\varphi_{(\,x\,,\,F\,)}\;\;\forall\;x\,\in\,X\;\;\&\;\;\varphi_{(\,x\,,\,F\,)}\,\in\,X_{F}^{\,\ast\,\ast}$ is called the b-natural embedding or the b-canonical mapping of $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ into the second dual space $X_{F}^{\,\ast\,\ast}$. ###### Definition 4.3. A linear n-normed space $X$ is said to be b-reflexive if the b-natural embedding $J$, maps the space $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ onto its second dual space $X_{F}^{\,\ast\,\ast}$, i. e., $J\,\left(\,X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>\,\right)\,=\,X_{F}^{\,\ast\,\ast}$. ###### Theorem 4.4. Let $\left\\{\,x_{k}\,\right\\}_{k\,=\,1}^{\,\infty}$ be a sequence in a linear n-normed space $X$. Suppose $\sup\limits_{1\,\leq\,k\,<\,\infty}\,\left|\,T\left(\,x_{k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|\,<\,\infty\;\;\;\forall\;T\,\in\,X_{F}^{\,\ast}.\;\text{Then}$ $\sup\limits_{1\,\leq\,k\,<\,\infty}\,\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,<\,\infty.$ ###### Proof. Consider the $b$-natural embedding $\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\to\,\varphi_{(\,x\,,\,F\,)},\;\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\in\,X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>.$ Since $\left\\{\,x_{k}\,\right\\}_{k\,=\,1}^{\,\infty}$ is a sequence of vectors in $X$, $\left\\{\,\varphi_{(\,x_{k}\,,\,F\,)}\,\right\\}_{k\,=\,1}^{\,\infty}$ is a sequence of bounded linear functionals in $X_{F}^{\,\ast\,\ast}$. 
Also, $\left|\,\varphi_{(\,x_{k}\,,\,F\,)}\,(\,T\,)\,\right|\,=\,\left|\,T\left(\,x_{k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|\,\leq\,\sup\limits_{1\,\leq\,k\,<\,\infty}\left|\,T\left(\,x_{k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right|.$ Therefore, $\left\\{\,\varphi_{(\,x_{k}\,,\,F\,)}\,(\,T\,)\,\right\\}_{k\,=\,1}^{\,\infty}$ is bounded for each $T\,\in\,X_{F}^{\,\ast}$. Applying the Principle of Uniform Boundedness ( Theorem (2.1) ), to the family $\left\\{\,\varphi_{(\,x_{k}\,,\,F\,)}\,\right\\}_{k\,=\,1}^{\,\infty}$, we conclude that $\left\\{\,\left\|\,\varphi_{(\,x_{k}\,,\,F\,)}\,\right\|\,\right\\}_{k\,=\,1}^{\,\infty}$ is bounded and hence by (2), $\left\\{\,\left\|\,x_{k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\right\\}_{k\,=\,1}^{\,\infty}$ is bounded. This proves the theorem. ∎ ###### Theorem 4.5. A closed subspace of a b-reflexive n-Banach space is b-reflexive. ###### Proof. Let $X$ be a $b$-reflexive $n$-Banach space and $Y$ be a closed subspace of $X$. Let $T\,:\,X_{F}^{\,\ast}\,\to\,Y_{F}^{\,\ast}$ be an operator defined by $\left(\,T\,f\,\right)\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,f\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\;\;\forall\;y\,\in\,Y,\;f\,\in\,X_{F}^{\,\ast},$ where $Y_{F}^{\,\ast}$ denotes the Banach space of all bounded $b$-linear functionals defined on $Y\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$. Then for $f\,\in\,X_{F}^{\,\ast}$, $\left\|\,T\,f\,\right\|\,=\,\sup\left\\{\,\dfrac{\left|\,f\,(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|}{\left\|\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|}\,:\,\left\|\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\neq\,0\,\right\\}\,=\,\|\,f\,\|.$ Let $J_{Y}$ be the $b$-natural embedding of $Y\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ into $Y_{F}^{\,\ast\,\ast}$. 
That is, $J_{Y}\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,\psi_{(\,y\,,\,F\,)}\;\;\forall\;y\,\in\,Y,\;\psi_{(\,y\,,\,F\,)}\,\in\,Y_{F}^{\,\ast\,\ast}.\;\text{Define}$ $T_{\,1}\,:\,Y_{F}^{\,\ast\,\ast}\,\to\,X_{F}^{\,\ast\,\ast}$ by $\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,(\,f\,)\,=\,\psi_{(\,y\,,\,F\,)}\,(\,T\,f\,),\;f\,\in\,X_{F}^{\,\ast}$. We now verify that $T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\in\,X_{F}^{\,\ast\,\ast}$. * (I) $T_{\,1}\,\psi_{(\,y\,,\,F\,)}$ is linear functional: Let $\alpha,\,\beta\,\in\,\mathbb{R}$. Then for every $f,\,g\,\in\,X^{\,\ast}_{F}$ and $y\,\in\,Y$, we have $\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,\left(\,\alpha\,f\,+\,\beta\,g\,\right)\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\hskip 170.71652pt$ $\,=\,\psi_{(\,y\,,\,F\,)}\,\left[\,T\,\left(\,\alpha\,f\,+\,\beta\,g\,\right)\,\right]\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\hskip 128.0374pt$ $\,=\,\psi_{(\,y\,,\,F\,)}\,\left[\,\alpha\;T\,\left(\,f\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right)\,+\,\beta\;T\,\left(\,g\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right)\,\right]\hskip 19.91684pt$ $\,=\,\alpha\;\psi_{(\,y\,,\,F\,)}\,(\,T\,f\,)\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,+\,\beta\;\psi_{(\,y\,,\,F\,)}\,(\,T\,g\,)\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)$ $=\,\left[\,\alpha\;\psi_{(\,y\,,\,F\,)}\,(\,T\,f\,)\,+\,\beta\;\psi_{(\,y\,,\,F\,)}\,(\,T\,g\,)\,\,\right]\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\hskip 68.28644pt$ $=\,\left[\,\alpha\;\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,(\,f\,)\,+\,\beta\;\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,(\,g\,)\,\right]\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right).\hskip 17.07182pt$ 
$\Rightarrow\,\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,\left(\,\alpha\,f\,+\,\beta\,g\,\right)\,=\,\alpha\;\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,(\,f\,)\,+\,\beta\;\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,(\,g\,).$ * (II) $T_{\,1}\,\psi_{(\,y\,,\,F\,)}$ is bounded: Since $\psi_{(\,y\,,\,F\,)}$ preserves the norm, $\left\|\,\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,(\,f\,)\,\right\|\,=\,\left\|\,\psi_{(\,y\,,\,F\,)}\,(\,T\,f\,)\,\right\|\,=\,\|\,T\,f\,\|\,=\,\|\,f\,\|.$ So, $T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\in\,X_{F}^{\,\ast\,\ast}$ and hence $T_{\,1}$ is well-defined. Since $X$ is $b$-reflexive, the $b$-natural embedding $J_{X}\,:\,X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>\,\to\,X_{F}^{\,\ast\,\ast}$ defined by $J_{X}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,\varphi_{(\,x\,,\,F\,)}\;,\;\varphi_{(\,x\,,\,F\,)}\,\in\,X_{F}^{\,\ast\,\ast}$ is such that $J_{X}\,\left(\,X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>\,\right)\,=\,X_{F}^{\,\ast\,\ast}$. Therefore, $T_{\,1}\psi_{(\,y\,,\,F\,)}\,\in\,X_{F}^{\,\ast\,\ast}$ implies that $J_{X}^{\,-\,1}\,\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,\in\,X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$. Write $\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,J_{X}^{\,-\,1}\,\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)$ so that $J_{X}\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}$. We need to prove that $x\,\in\,Y$. Let, if possible, $x\,\in\,X\,-\,Y$ such that $x,\,b_{\,2},\,\cdots,\,b_{\,n}$ are linearly independent. Then by Corollary (3.8), $\exists$ a bounded $b$-linear functional $f\,\in\,X_{F}^{\,\ast}$ such that $f\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\neq\,0$ and $f\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0$ for all $y\,\in\,Y$. 
Consequently, $T\,f\,=\,0$ and as such $\psi_{(\,y\,,\,F\,)}\,(\,T\,f\,)\,=\,0$. This leads to $\varphi_{(\,x\,,\,F\,)}\,(\,f\,)\,=\,0$ and hence $f\,\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,0$, which is a contradiction. Thus, we conclude that $\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,J_{X}^{\,-\,1}\,\left(\,T_{\,1}\,\psi_{(\,y\,,\,F\,)}\,\right)\,\in\,Y\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$. This verifies that $J_{X}^{\,-\,1}\,\left(\,T_{\,1}\,\left(\,Y_{F}^{\,\ast\,\ast}\,\right)\,\right)\,\subset\,Y\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$. Now, let $\psi\,\in\,Y_{F}^{\,\ast\,\ast}$. Set $\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,J_{X}^{\,-\,1}\,\left(\,T_{\,1}\,\psi\,\right)$ so that $\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\in\,Y\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$. Let $g\,\in\,Y_{F}^{\,\ast}$. 
Then there exists a $b$-linear functional $f\,\in\,X_{F}^{\,\ast}$ such that $f\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,g\,\left(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\;\;\forall\;y\,\in\,Y\;\;\text{and}\;\;g\,=\,T\,f.$ $\text{Therefore,}\;\;\psi\,(\,g\,)\,=\,\psi\,(\,T\,f\,)\,=\,\left(\,T_{\,1}\,\psi\,\right)\,(\,f\,)\,=\,\left[\,J_{X}\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right]\,(\,f\,)\hskip 28.45274pt$ $\hskip 71.13188pt=\,\varphi_{(\,x_{\,0}\,,\,F\,)}\,(\,f\,)\,=\,f\,\left(\,x_{\,0},\,b_{\,2},\,\cdots,\,b_{\,n}\,\right)\,=\,g\,\left(\,x_{\,0},\,b_{\,2},\,\cdots,\,b_{\,n}\,\right).$ This proves that $J_{Y}\,\left(\,x_{\,0}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,=\,\psi_{(\,x_{\,0}\,,\,F\,)}$ and hence $J_{Y}\,\left(\,Y\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>\,\right)\,=\,Y_{F}^{\,\ast\,\ast}.\;\text{This proves that \,$Y$\, is $b$-reflexive.}$ ∎ ## 5 $b$-weak convergence and $b$-strong convergence in linear $n$-normed space In this section, we shall introduce $b$-weak convergence and $b$-strong convergence relative to bounded $b$-linear functionals in linear $n$-normed space and establish that these two types of convergence are equivalent in case of finite dimensional linear $n$-normed space. ###### Definition 5.1. A sequence $\\{\,x_{\,k}\,\\}$ in a linear n-normed space $X$ is said to be b-weakly convergent if $\exists$ an element $x\,\in\,X$ such that for every $T\,\in\,X^{\,\ast}_{F}$, $\lim\limits_{k\,\to\,\infty}\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,).$ The vector $x$ is called the b-weak limit of the sequence $\\{\,x_{\,k}\,\\}$ and we say that $\\{\,x_{\,k}\,\\}$ converges b-weakly to $x$. Note that, for each $T\,\in\,X^{\,\ast}_{F},\;\left\\{\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right\\}$ is a sequence of scalars in $\mathbb{K}$. 
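To make Definition 5.1 concrete, consider $\mathbb{R}^{3}$ with the standard $2$-norm, where $\left\|\,u\,,\,v\,\right\|$ is the area of the parallelogram spanned by $u$ and $v$. The sketch below is illustrative only and not part of the paper's argument; the functional $T$ and the weight vector $w$ are our own choices. It checks numerically that the sequence $x_{\,k}\,=\,x\,+\,\frac{1}{k}\,e_{\,3}$ converges $b$-strongly to $x$, so the scalars $T\,(\,x_{\,k}\,,\,b_{\,2}\,)$ converge to $T\,(\,x\,,\,b_{\,2}\,)$ as well (cf. Theorem 5.6).

```python
from math import sqrt

def cross(u, v):
    """Cross product in R^3."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def two_norm(u, v):
    """Standard 2-norm on R^3: the area of the parallelogram spanned by u
    and v, i.e. the Euclidean length of the cross product."""
    return sqrt(sum(c * c for c in cross(u, v)))

def T(x, b2, w=(1.0, 2.0, 3.0)):
    """A bounded b-linear functional (our own choice): the scalar triple
    product <x x b2, w>.  Since |T(x, b2)| <= |w| * ||x, b2||, T is bounded."""
    return sum(c * wc for c, wc in zip(cross(x, b2), w))

b2 = (0.0, 1.0, 0.0)
x = (1.0, 0.0, 0.0)
for k in (1, 10, 100, 1000):
    xk = (1.0, 0.0, 1.0 / k)                     # x_k = x + (1/k) e_3
    diff = tuple(a - b for a, b in zip(xk, x))
    print(k, two_norm(diff, b2), T(xk, b2))
# ||x_k - x, b2|| = 1/k -> 0 (b-strong convergence), and T(x_k, b2) -> T(x, b2) = 3.
```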
Therefore, b-weak convergence means convergence of the sequence of scalars $\left\\{\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right\\}$ for every $T\,\in\,X^{\,\ast}_{F}$. ###### Theorem 5.2. Let $\\{\,x_{\,k}\,\\}$ be b-weakly convergent sequence in $X$. Then * (I) the b-weak limit of $\\{\,x_{\,k}\,\\}$ is unique. * (II) $\left\\{\,\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\right\\}$ is bounded sequence in $\mathbb{K}$. ###### Proof. (I) Suppose that $\\{\,x_{\,k}\,\\}$ converges $b$-weakly to $x$ as well as to $y$. Then $T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,\lim\limits_{k\;\to\;\infty}\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\forall\;T\,\in\,X^{\,\ast}_{F}$ $\Rightarrow\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,-\,T\,(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,0\;\;\;\forall\;\,T\,\in\,X^{\,\ast}_{F}$ $\Rightarrow\;T\,(\,x\,-\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,0\;\;\;\forall\;\,T\,\in\,X^{\,\ast}_{F}.\hskip 76.82234pt$ Hence, by Corollary (3.3), $x\,=\,y$. Proof of (II) Since $\\{\,x_{\,k}\,\\}$ converges $b$-weakly to $x$, we have $\lim\limits_{k\;\to\;\infty}\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\;\forall\;\,T\,\in\,X^{\,\ast}_{F}.$ Therefore, for each $T\,\in\,X^{\,\ast}_{F}\;,\;\left\\{\,T\left(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\right\\}$ is a convergent sequence in $\mathbb{K}$ and so the sequence $\left\\{\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right\\}$ is bounded. Consequently, $\exists$ a constant $K_{T}$ ( depending on $T$ ) such that $\left|\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\,\leq\,K_{T}\,\;\forall\;k\,\in\,\mathbb{N}$. 
Let $\left(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)\,\to\,\varphi_{(\,x\,,\,F\,)}$ be the $b$-natural embedding of $X\,\times\,\left<\,b_{\,2}\,\right>\,\times\cdots\,\times\,\left<\,b_{\,n}\,\right>$ into $X^{\,\ast\,\ast}_{F}$. Then for each $k\,\in\,\mathbb{N}\,,\;\left\|\,\varphi_{(\,x_{\,k}\,,\,F\,)}\,\right\|\,=\,\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\;\;[\;\text{by (\ref{eq2})}\;]$, and $\left|\,\varphi_{(\,x_{k}\,,\,F\,)}\,(\,T\,)\,\right|\,=\,\left|\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\,\leq\,K_{T}\;\;\;\forall\;k\,\in\,\mathbb{N}.$ Thus, $\left\\{\,\varphi_{(\,x_{k}\,,\,F\,)}\,(\,T\,)\,\right\\}$ is bounded for each $T\,\in\,X^{\,\ast}_{F}$. But the space $X^{\,\ast}_{F}$ being a Banach space, by the Principle of Uniform Boundedness ( Theorem (2.1) ), it follows that $\left\\{\,\left\|\,\varphi_{(\,x_{\,k}\,,\,F\,)}\,\right\|\,\right\\}$ is bounded and hence $\left\\{\,\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\right\\}_{k\,=\,1}^{\,\infty}$ is bounded. ∎ ###### Theorem 5.3. Let $\\{\,x_{\,k}\,\\}$ and $\\{\,y_{\,k}\,\\}$ be two sequences in a linear n-normed space $X$. If $\\{\,x_{\,k}\,\\}$ and $\\{\,y_{\,k}\,\\}$ converges b-weakly to $x$ and $y$, respectively then for any scalar $\alpha\,\;\text{and}\;\,\beta$, $\\{\,\alpha\,x_{\,k}\,+\,\beta\,y_{\,k}\,\\}$ converges b-weakly to $\alpha\,x\,+\,\beta\,y$. ###### Proof. 
Since $\\{\,x_{\,k}\,\\}$ and $\\{\,y_{\,k}\,\\}$ converges $b$-weakly to $x$ and $y$, $\lim\limits_{k\;\to\;\infty}\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\text{and}$ $\lim\limits_{k\;\to\;\infty}\,T\,(\,y_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\;\forall\;T\,\in\,X^{\,\ast}_{F}.$ Now, for all $T\,\in\,X^{\,\ast}_{F}$, $\lim\limits_{k\;\to\;\infty}\,T\,(\,\alpha\,x_{\,k}\,+\,\beta\,y_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)$ $\,=\,\lim\limits_{k\;\to\;\infty}\,\left[\,T\,(\,\alpha\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,+\,T\,(\,\beta\,y_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right]\hskip 102.43008pt$ $\,=\,\lim\limits_{k\;\to\;\infty}\,\alpha\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,+\,\lim\limits_{k\;\to\;\infty}\,\beta\,T\,(\,y_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\hskip 85.35826pt$ $=\;\alpha\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,+\,\beta\,T\,(\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,\alpha\,x\,+\,\beta\,y\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,).$ This shows that $\\{\,\alpha\,x_{\,k}\,+\,\beta\,y_{\,k}\,\\}$ converges $b$-weakly to $\alpha\,x\,+\,\beta\,y$. ∎ ###### Theorem 5.4. A sequence $\\{\,x_{\,k}\,\\}$ in $X$ converges b-weakly to $x\,\in\,X$ if and only if * (I) the sequence $\left\\{\,\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\right\\}$ is bounded and * (II) $\lim\limits_{k\;\to\;\infty}\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\forall\;T\,\in\,M$, where $M$ is fundamental or total subset of $X^{\,\ast}_{F}$. ###### Proof. In the case of $b$-weak convergence, (I) follows from the Theorem (5.2) and since $M\,\subset\,X^{\,\ast}_{F}$, (II) follows from the definition of $b$-weak convergence of $\\{\,x_{\,k}\,\\}$. Conversely, suppose that (I) and (II) hold . 
By (I), $\exists$ a constant $L$ such that $\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\leq\,L\;\;\forall\;k\,\in\,\mathbb{N}\;\;\;\text{and}\;\;\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,\leq\,L.$ Since $\overline{span\,M}\,=\,X^{\,\ast}_{F}$, for each $T\,\in\,X^{\,\ast}_{F}$, $\exists$ a sequence $\left\\{\,T_{\,m}\,\right\\}$ in $span\,M$ such that $\lim\limits_{m\;\to\;\infty}\,T_{\,m}\,=\,T$. Hence, for any given $\epsilon\,>\,0,\,\;\exists\;\,T_{\,m}\,\in\,span\,M$ such that $\left\|\,T_{\,m}\,-\,T\,\right\|\,<\,\dfrac{\epsilon}{3\,L}$. Furthermore, by the hypothesis (II), $\exists\;K\,\in\,\mathbb{N}$ such that $\left|\,T_{\,m}\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,-\,T_{\,m}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\,<\,\dfrac{\epsilon}{3}\;\,\;\forall\;\,k\,>\,K.$ Now, for $k\,>\,K$, $\left|\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,-\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|$ $\leq\,\left|\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,-\,T_{\,m}\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\,+\,$ $\hskip 36.98866pt\left|\,T_{\,m}\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,-\,T_{\,m}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|$ $\hskip 48.36958pt+\;\left|\,T_{\,m}\,(\;x\;,\;b_{\,2}\,,\,\cdots\,,\,b_{\,n}\;)\;-\;T\,(\;x\;,\;b_{\,2}\,,\,\cdots\,,\,b_{\,n}\;)\,\right|$ $<\,\left\|\,T_{\,m}\,-\,T\,\right\|\left\|\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,+\,\dfrac{\epsilon}{3}\,+\,\left\|\,T_{\,m}\,-\,T\,\right\|\left\|\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|$ $<\,\dfrac{\epsilon}{3\,L}\,\cdot\,L\,+\,\dfrac{\epsilon}{3}\,+\,\dfrac{\epsilon}{3\,L}\,\cdot\,L\,=\,\dfrac{\epsilon}{3}\,+\,\dfrac{\epsilon}{3}\,+\,\dfrac{\epsilon}{3}\,=\,\epsilon\hskip 113.81102pt$ 
$\Rightarrow\,\lim\limits_{k\;\to\;\infty}\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\;\forall\;T\,\in\,X^{\,\ast}_{F}.\hskip 28.45274pt$ Hence, $\\{\,x_{\,k}\,\\}$ converges $b$-weakly to $x\,\in\,X$. ∎ ###### Definition 5.5. A sequence $\\{\,x_{\,k}\,\\}$ in $X$ is said to be b-strongly convergent if $\exists$ a vector $x\,\in\,X$ such that $\lim\limits_{k\to\infty}\,\left\|\,x_{\,k}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,0$. The vector $x$ is called b-strong limit and we say that $\\{\,x_{\,k}\,\\}$ converges b-strongly to $x$. ###### Theorem 5.6. If a sequence $\\{\,x_{\,k}\,\\}$ in $X$ converges b-strongly to $x$, then $\\{\,x_{\,k}\,\\}$ converges b-weakly to $x$ in $X$. ###### Proof. Suppose $\\{\,x_{\,k}\,\\}$ converges $b$-strongly to $x$. Then for every $T\,\in\,X^{\,\ast}_{F}$, we have $\left|\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,-\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|\,=\,\left|\,T\,(\,x_{\,k}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,\right|$ $\leq\,\|\,T\,\|\left\|\,x_{\,k}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\hskip 22.76228pt$ $\hskip 99.58464pt\,\to\,0\;\;\text{as}\;\,k\,\to\,\infty\;\;[\;\text{since $\\{\,x_{\,k}\,\\}$\, converges \,$b$-strongly to \,$x$}\;]$ $\Rightarrow\,\lim\limits_{k\;\to\;\infty}\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\forall\;T\,\in\,X^{\,\ast}_{F}.$ Hence, $\\{\,x_{\,k}\,\\}$ converges $b$-weakly to $x$ in $X$. ∎ ###### Theorem 5.7. In a finite dimensional linear n-normed space, b-weak convergence implies b-strong convergence. ###### Proof. Let $X$ be a linear $n$-normed space with $\text{dim}\,X\,=\,d\,\geq\,n$. Then, $\exists$ a basis $\left\\{\,e_{\,1}\,,\,e_{\,2}\,,\,\cdots\,,\,e_{\,d}\,\right\\}$ for $X$. Let $\\{\,x_{\,k}\,\\}$ be a sequence in $X$ such that $\\{\,x_{\,k}\,\\}$ converges $b$-weakly to $x$. 
Now, we can write $x_{\,k}\,=\,a_{\,k\,,\,1}\,e_{\,1}\,+\,a_{\,k\,,\,2}\,e_{\,2}\,+\,\,\cdots\,+\,a_{\,k\,,\,d}\,e_{\,d}\;,\;(\,k\,=\,1\,,\,2\,,\,\cdots\,)\,\;\text{and}$ $x\,=\,a_{\,1}\,e_{\,1}\,+\,a_{\,2}\,e_{\,2}\,+\,\cdots\,\cdots\,+\,a_{\,d}\,e_{\,d},$ where $a_{\,k\,,\,1},\,a_{\,k\,,\,2},\,\,\cdots\,,\,a_{\,k\,,\,d}\,,\,a_{\,1},\,a_{\,2},\,\,\cdots\,,\,a_{\,d}\,\in\,\mathbb{R}$. Consider the $b$-linear functionals $\left\\{\,T_{\,1}\,,\,T_{\,2}\,,\,\cdots\,,\,T_{\,d}\,\right\\}$ in $X^{\,\ast}_{F}$ such that $T_{\,i}\,(\,e_{\,j}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,\begin{cases}1&\text{if\;\;}\;i\;=\;j\\\ 0&\text{if\;\;}\;i\;\neq\;j\,,\;1\;\leq\;i,\,j\;\leq\;d\end{cases}$ Now, for $1\,\leq\,i\,\leq\,d$, we have $T_{\,i}\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T_{\,i}\,\left(\,\sum\limits_{j\,=\,1}^{\,d}\;a_{\,k\,,\,j}\,e_{\,j}\;,\;b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right)$ $=\,\sum\limits_{j\,=\,1}^{\,d}\;a_{\,k\,,\,j}\,T_{\,i}\,(\,e_{\,j}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,a_{\,k\,,\,i}$ and similarly, $T_{\,i}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,a_{\,i},\,(\,1\,\leq\,i\,\leq\,d\,)$. 
Since $\lim\limits_{k\;\to\;\infty}\,T\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\;\;\;\forall\;T\,\in\,X^{\,\ast}_{F},$ in particular, we have $\lim\limits_{k\;\to\;\infty}\,T_{\,i}\,(\,x_{\,k}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,)\,=\,T_{\,i}\,(\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,),\;(\,1\,\leq\,i\,\leq\,d\,)$ $\Rightarrow\;\lim\limits_{k\;\to\;\infty}\,a_{k\,,\,i}\,=\,a_{\,i}\,,\;(\,1\,\leq\,i\,\leq\,d\,).$ (3) Therefore, $\left\|\,x_{\,k}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,\left\|\,\sum\limits_{i\,=\,1}^{\,d}\,\left(\,a_{k\,,\,i}\,-\,a_{\,i}\,\right)\,e_{\,i}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|$ $\hskip 119.50148pt\leq\;\sum\limits_{i\,=\,1}^{\,d}\,\left|\,a_{k\,,\,i}\,-\,a_{\,i}\,\right|\,\left\|\,e_{\,i}\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|$ $\hskip 73.97733pt\to\;0\;\;\text{as}\;\;k\;\to\;\infty\;\;[\;\text{by}\;\;(\,\ref{eq3}\,)\;]$ $\Rightarrow\;\lim\limits_{k\;\to\;\infty}\,\left\|\,x_{\,k}\,-\,x\,,\,b_{\,2}\,,\,\cdots\,,\,b_{\,n}\,\right\|\,=\,0$ and hence $\\{\,x_{\,k}\,\\}$ converges $b$-strongly to $x$ in $X$. ∎ ## References * [1] R. Freese,Y.J. Cho, _Geometry of Linear 2-normed Spaces_ , Nova Science Publishers, New York (2001). * [2] S. Gahler, _Lineare 2-normierte raume_ , Math. Nachr. 28 (1964), 1-43. * [3] P. Ghosh, T. K Samanta, _Representation of Uniform Boundedness Principle and Hahn-Banach Theorem in linear $n$-normed space_, Submitted, arXiv: 2101.04555. * [4] H. Gunawan, Mashadi, _On $n$-normed spaces_, Int. J. Math. Math. Sci., 27 (2001), 631-639. * [5] E. Kreyszig, _Introductory Functional Analysis with applications_ , John Wiley & Sons, 1978. * [6] A. L. Soenjaya, _The Open Mapping Theorem in $n$-Banach space_, International Journal of Pure and Applied Mathematics, Vol. 76, No. 4, 2012, 593-597. * [7] A. White, _2-Banach spaces_ , Math. Nachr., 42 (1969), 43-60.
# Medical Information Retrieval and Interpretation: A Question-Answer based Interaction Model Nilanjan Sinhababu<EMAIL_ADDRESS>Rahul Saxena <EMAIL_ADDRESS>Monalisa Sarma<EMAIL_ADDRESS>Debasis Samanta<EMAIL_ADDRESS>Subir Chowdhury School of Quality and Reliability, IIT Kharagpur, India Department of Chemical Engineering, IIT Kharagpur, India Department of Computer Science and Engineering, IIT Kharagpur, India ###### Abstract The Internet has become a very powerful platform where diverse medical information is shared daily. Recently, a huge growth has been seen in searches for symptoms, diseases, medicines, and many other health-related queries around the globe. Search engines typically populate results using the single query provided by the user, so reaching the final result may require a lot of manual filtering on the user’s end. Current search engines and recommendation systems still lack the real-time interaction that could produce more precise results. This paper proposes an intelligent and interactive system tied to the vast medical big-data repository on the web and illustrates its potential in finding medical information. ###### keywords: Information Processing , Computerised Interaction , Question Answering System , BiLSTM , Attention Mechanism ††journal: Expert Systems with Applications ## 1 Introduction ### 1.1 Context A large amount of health-related information is streamed through the Internet. The Internet gives users quick access to data that can help in the diagnosis of medical issues or the development of appropriate treatments. It enables consumers to gather health-related information themselves, from the comfort of their homes. The Internet can likewise provide various kinds of health-related information beyond the immediate provision of care, and it requires no administrative or financial overhead. Given these benefits, many consumers tend to use the Internet as a source of medical information. 
However, searching through this huge amount of data and collecting health-related information is a difficult, if not impossible, task. There is also always a chance of mistakes when manually checking the results provided by search engines. Although advancements in search-engine technology have made searching far superior to before, search engines are built to provide immediate results based on a single query. Hence, such systems are suitable only for advanced users, at least in medical domains. The process of finding information on the Internet can be divided into two major steps. The first is the extraction of data present on the web for a particular query provided by the user; this is the step that search engines are well optimised to perform. The second is narrowing down the large amount of data collected in the extraction phase through some interaction mechanism, finally providing the user with well-filtered data that requires no further manual filtering on the user’s end. To perform this kind of intelligent searching, the two most important techniques required are information retrieval and interactive natural language question generation. Information discovery and interaction systems have been closely connected for many years and are two aspects of computation that have proven effective across various domains. Recent advances in these techniques have provided a way of enabling truly “intelligent” medical information retrieval systems. The presence of a huge volume of data on the Internet, especially unstructured data, has made it quite difficult to discover crucial information. Information retrieval is key in many fields that aim to collect information or develop knowledge databases. Earlier, this unstructured information required manual intervention for any kind of knowledge discovery. 
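The two-step search process described above can be sketched in miniature. In the sketch below, the toy corpus, its attribute sets, and the question-selection rule are all hypothetical placeholders, not the system proposed in this paper: step 1 is assumed to have already retrieved a handful of candidate documents, and step 2 repeatedly asks the yes/no question that best splits the remaining candidates, filtering the corpus after each answer.

```python
# Step 1 output (hypothetical): retrieved candidate documents with attribute sets.
DOCS = {
    "common cold": {"cough", "sneezing", "runny nose"},
    "influenza":   {"cough", "fever", "body ache"},
    "allergy":     {"sneezing", "itchy eyes", "runny nose"},
    "migraine":    {"headache", "nausea"},
}

def best_question(candidates):
    """Pick the attribute that splits the remaining candidates most evenly,
    so each yes/no answer removes as many candidates as possible."""
    attrs = set().union(*(DOCS[d] for d in candidates))
    return min(attrs, key=lambda a: abs(
        sum(a in DOCS[d] for d in candidates) - len(candidates) / 2))

def narrow(candidates, answer):
    """Step 2: interactively ask about one attribute at a time and filter."""
    while len(candidates) > 1:
        q = best_question(candidates)
        has = {d for d in candidates if q in DOCS[d]}
        if not has or has == candidates:        # question cannot discriminate
            break
        candidates = has if answer(q) else candidates - has
    return candidates

# Simulated user whose condition involves sneezing and itchy eyes:
user = {"sneezing", "itchy eyes", "runny nose"}
print(narrow(set(DOCS), lambda q: q in user))   # → {'allergy'}
```

Each answer here roughly halves the candidate set, which is the point of interactive narrowing: the user never has to scan the full retrieved corpus manually.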
But with the advancement of machine learning techniques, the process of extracting information has become automatic and has therefore gained a lot of importance. With extensive research in machine learning for text data, various methods for information discovery have been proposed. Techniques such as embedding and clustering have provided the community with ways of dealing with unlabeled text data. On the other hand, question-answer modelling systems are becoming one of the most important and emerging areas of interest in the research community. Earlier, interaction with a computer was limited to rule-based methods only. But with improvements in deep learning models, computer interactions are becoming more intelligent and accurate. These systems are able to help teachers and students by generating intelligent questions such as fill-in-the-blank questions, MCQs, and subjective questions from paragraphs, sentences, and words. Furthermore, these systems have already proved beneficial for artificial intelligence assistants and robots where interaction is required from both parties [HCI-005]. Question answering systems can be beneficial for casual users who may ask simple factual questions [heilman2011automatic], for doctors who seek quick answers on medicine or medical equipment, and for patients who seek medical treatment information online [saito1988medical]. This paper presents a novel way of extracting unstructured information from the Internet and, through interaction, providing users with medical queries the most useful results. ### 1.2 State of the Art There are various research works that focus on converting unstructured data into some structured form for information retrieval purposes [mccallum2005information, augenstein2012lodifier, zainol2018visualurtext]. They generally use rule-based methods for known data, while for unknown data, general machine learning and statistical methods are applied to extract certain information [mccallum2005information, ahern2007world, nesi2014ge]. 
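As a minimal illustration of the embedding idea mentioned above (not the model used later in this paper), the following sketch maps raw text snippets to sparse TF-IDF vectors and compares them with cosine similarity; documents sharing discriminative terms end up close together, which is the property clustering methods exploit. The snippets are made-up examples.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Embed each document as a sparse TF-IDF vector (term -> weight)."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))   # document frequency
    n = len(docs)
    return [{t: (c / len(toks)) * math.log(n / df[t])
             for t, c in Counter(toks).items()} for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

snippets = [
    "fever cough influenza",      # made-up medical snippets
    "pollen allergy sneezing",
    "fever headache migraine",
]
vecs = tfidf_vectors(snippets)
pairs = {(i, j): cosine(vecs[i], vecs[j]) for i in range(3) for j in range(i + 1, 3)}
print(max(pairs, key=pairs.get))   # → (0, 2): the two snippets sharing "fever"
```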
Regarding question generation models, the existing techniques greatly depend on the context of the input text documents and the required outputs. General RNN models do not capture enough context information to generate grammatically and contextually correct questions [du2017learning]. To overcome this issue, LSTM models have been proposed, and bi-LSTMs in general perform best on the question generation problem. To further improve question generation models, attention and coverage mechanisms are utilized, outperforming other models in this domain [du2017learning, chali-baghaee-2018-automatic]. ### 1.3 Motivation and Scope Existing information retrieval methodologies are prone to unreliability due to the increasing variety of enormous unstructured data [INR-012]. These models are generally static and serve only a specific type of data [mccallum2005information, nesi2014ge]. In our literature survey, we were unable to find a research work that focuses on dynamic information retrieval. Further, information retrieval that reduces an unstructured web data corpus through an interaction mechanism is entirely missing from the literature. Our objective is to dynamically filter data so as to provide users with minimal, focused data using interaction mechanisms. For this task, the existing works on information retrieval are not sufficient. Further, bi-LSTM with attention and coverage suffers the most when the input sequence is too short [du2017learning]. That means model performance is greatly dependent on the length of the input sequences provided. This particular work requires the question generation model to generate a valid context-based question even from a single word, but the current state-of-the-art techniques are prone to error for shorter input sequences. A model that can generate proper context-based questions even from short (single-word) input sequences is missing and needs to be investigated. 
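The attention mechanism referred to above can be reduced, in its simplest dot-product form, to three steps: score each encoder state against a decoder query, normalise the scores with a softmax, and return the score-weighted context vector. The sketch below uses toy two-dimensional vectors and is a generic illustration, not the bi-LSTM model developed in this paper.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # shift by max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def dot_product_attention(query, states):
    """Weight encoder states by their dot-product similarity to the query;
    return (attention weights, weighted context vector)."""
    weights = softmax([sum(q * s for q, s in zip(query, st)) for st in states])
    dim = len(states[0])
    context = [sum(w * st[i] for w, st in zip(weights, states)) for i in range(dim)]
    return weights, context

query = [2.0, 0.0]                                  # toy decoder state
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]       # toy encoder outputs
weights, context = dot_product_attention(query, states)
print([round(w, 3) for w in weights])               # → [0.468, 0.063, 0.468]
```

The two states aligned with the query share almost all the attention mass, which is exactly how attention lets a generator focus on the relevant part of a short input.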
### 1.4 Objectives

The importance of question generation models is taken into consideration to address the man-machine interaction problem and how intelligent questions can be formulated in order to retrieve relevant context information. Further, this context information can be interpreted using the answer(s) provided by the user to derive a decision. To overcome the limitations of the existing models, this research paper has three main objectives, as listed below:

1. Identifying a questionable sentence that has enough context information for generating a decision.
2. Generating contextually correct, medically related natural language questions from the questionable entity, even with smaller input sequences.
3. Analysing the answers provided by the user to reduce the corpus volume.

The remainder of this paper is arranged in the following sequence. First, the related work and state of the art are identified. Then the task definition is outlined, followed by a description of the model structure. Next, the experimental parameters are discussed, and finally the results, discussion and validity of the model are presented.

## 2 Related Work

In this section, the related work is divided into different subsections by category, which makes the content of the research papers easier to categorize and improves readability. The related work is divided into information retrieval in unstructured data and question answer generation techniques.

### 2.1 Information retrieval in unstructured data

Andrew McCallum in 2005 [mccallum2005information] discussed different ways to convert raw unstructured data from web search into structured data, such as rule-based models, hidden Markov models, conditional probability models and text classification. Johanna Fulda et al.
in 2015 [fulda2015timelinecurator] developed a browser-based authoring tool that automatically extracts event data from temporal references in unstructured text documents using natural language processing and encodes them along a visual timeline; it uses context-dependent semantic parsing for entity extraction. Isabelle Augenstein et al. in 2012 [augenstein2012lodifier] modelled an approach (LODifier) that combines deep semantic analysis with named entity recognition, word sense disambiguation and controlled Semantic Web vocabularies in order to extract named entities and the relations between them from text and to convert them into an RDF representation linked to DBpedia and WordNet. Rabeah Al-Zaidy et al. in 2012 [al2012mining] provide an end-to-end solution to automatically discover, analyze and visualize criminal communities from unstructured textual data. Their system discovers all prominent communities and measures the closeness among their members, generates all indirect relationship hypotheses up to a maximum user-specified depth, and efficiently prunes the non-prominent communities while examining the closeness of those that can potentially be prominent. They used a named entity tagger, the Apriori algorithm, and the open and closed discovery algorithms described in Srinivasan (2004). Zuraini Zainol et al. in 2017 [zainol2018visualurtext] discuss the development of a text analytics tool that is proficient in extracting, processing and analyzing unstructured text data and in visualizing the cleaned text data in multiple forms such as a Document Term Matrix (DTM), frequency graph, network analysis graph (based on co-occurrence), word cloud and dendrogram (tree-structured graph). Byung-Kwon Park and Il-Yeol Song in 2011 [park2011toward] performed text mining operations and used XML-OLAP, DocCube and Topic Cube to extract structured data from text documents, based on the schema of the structured data.
The architecture is based on the concept of consolidation OLAP, which integrates two heterogeneous OLAP sources: relational OLAP and text OLAP. Consolidation OLAP enables users to explore structured and unstructured data at the same time to obtain total business intelligence, and to find information that could otherwise be missed by relational or text OLAP when processed separately. Shane Ahern et al. in 2007 [ahern2007world] developed a sample application that generates aggregate location-based knowledge from unstructured text associated with geographic coordinates. They used the k-Means clustering algorithm based on the photos' latitude and longitude, together with a TF-IDF approach that assigns a higher score to tags that have a larger frequency within a cluster compared to the rest of the area under consideration. K. L. Sumathy and M. Chidambaram in 2013 [sumathy2013text] give an overview of the concepts, applications, issues and tools used for text mining; the steps involved in text mining are discussed and a list of tools is also given. Paolo Nesi et al. in 2014 [nesi2014ge] used part-of-speech tagging, pattern recognition and annotation for extracting the addresses and geographical coordinates of companies and organizations from their web domains. Ammar Ismael Kadhim et al. in 2014 [kadhim2014text] proposed a pipeline for processing unstructured data. Their methodology includes preprocessing (stemming, removing stop words, removing highly frequent and least frequent words); TF-IDF was used for document representation, and SVD was used to reduce the high-dimensional dataset to a lower dimension.

### 2.2 Question Answer generation techniques

The Intel AI Developer Program in 2019 [intelnlp2018] developed a system that can generate from a given text input the kind of logical questions that previously only humans were capable of producing.
Their process involved selecting important sentences, using a parser to extract NP and ADJP phrases from those sentences as candidate gaps, and then generating fill-in-the-blank and brief-answer questions using the NLTK parser and grammar syntax logic. They succeeded in forming two types of questions: fill-in-the-blank statements and fully stated questions. Alec Kretch in 2018 [medium2018] developed T2Q for various types of questions using a US history dataset. A list of subjects and corresponding phrases was built from the tokens and POS tags using text chunking, Stanford Dependencies and a multiclass classifier. Universal Dependencies were extracted using Stanford CoreNLP's UDFeatureAnnotator, and some words were converted to synonyms using Princeton WordNet or GloVe. T2Q also matches common patterns that identify the date/year of a given event. Questions of each question type were formed for each subject/phrase using the basic pattern of that question type as a guide; then similar, but false, answers for each question were formed based on the question type, using patterns of good false answers as a guide. Athira P. M. et al. in 2013 [athira2013architecture] describe the architecture of a Natural Language Question Answering (NLQA) system for a specific domain based on ontological information, a step towards semantic web question answering. Their main steps include syntactic analysis, semantic analysis, question classification and query reformulation; they also performed answer filtering and answer ranking. Their results showed that they were able to achieve 94% accuracy of natural language question answering in their implementation. Ming Liu et al. in 2012 [liu2012using] proposed a semi-automatic question generation approach to support academic writing. Key concepts were identified using an unsupervised algorithm that extracts key phrases from an academic paper.
The system then classifies each key phrase based on a Wikipedia article matched to the key phrase by a rule-based approach, with Wikipedia used as the domain knowledge base. Knowledge from a single article is used to build conceptual graphs from which questions are generated. To evaluate the quality of the generated questions, a bystander Turing test was conducted, which showed a good system rating. Eiichiro S. et al. in 2005 [sumita2005measuring] proposed a technique for the automatic generation of fill-in-the-blank questions (FBQs), together with testing based on Item Response Theory (IRT), to measure English proficiency. A method based on item information was proposed to estimate the proficiency of the test-taker using as few items as possible. The results suggest that the generated questions combined with IRT can estimate the English proficiency of non-native speakers, while native speakers can complete the test almost perfectly; the number of questions can be reduced by using item information in IRT. Yllias Chali and Tina Baghaee in 2018 [chali-baghaee-2018-automatic] proposed a sequence-to-sequence model that uses attention and coverage mechanisms to address the question generation problem at the sentence level. The attention and coverage mechanisms prevent language generation systems from generating the same word over and over again, and have been shown to improve a system's output. They used a simple RNN encoder-decoder architecture with the global attention model and further applied a coverage mechanism, which prevents the word repetition problem. Experimental results on the Amazon question/answer dataset showed an improvement in automatic evaluation metrics as well as human evaluations over state-of-the-art question generation systems. Xinya Du et al. in 2017 [du2017learning] framed the task of question generation as a sequence-to-sequence learning problem that directly maps a sentence from a text passage to a question.
Their approach is entirely data-driven and requires no manually generated rules. They modelled the conditional probability using an RNN encoder-decoder architecture and adopted the global attention mechanism to make the model focus on certain elements of the input when generating each word during decoding. They investigated two variations of their models: one that encodes only the sentence, and another that encodes both sentence- and paragraph-level information. Automatic evaluation results showed that their system significantly outperforms the state-of-the-art rule-based system, and in human evaluations the questions generated by this method are also rated as more natural.

## 3 Proposed Methodology

### 3.1 Overview of the Work

Before describing the detailed outline of this architecture, we shall use a real example to show how exactly it corresponds to the way a human tackles such problems. Imagine a scenario where a student has to find a particular word in a book without an appendix section. The most effective way to perform this search is to know which domain the term belongs to, and hence to relate it to a chapter name. After reaching that chapter, the student can judge in which particular section the term may occur and selects that section. If the student is unable to find the term there, they repeat the process for the other related sections until the term is found. The alternative, non-human way would be to search for the term throughout the book either serially or randomly. To solve this type of problem on computers, advanced and interactive techniques must be used to narrow down to the final result from a huge collection of unstructured data gathered for a particular query.

Figure 1: The proposed model framework

To carry out the interaction-based information retrieval, a model is proposed as shown in Figure-1.
The two main objectives of this model are to find a word or a sentence from which a question can be developed, and to build a specialized natural language question generator that can generate questions from either a word or a sentence. The first phase of this model deals with the identification of a questionable entity. To identify such an entity, the unstructured raw data can be clustered into various sections and, using interaction mechanisms, converged towards the particular information the user is looking for. The model contains a section that retrieves unstructured data from the web following a query provided by the user. That raw data contains many unnecessary tokens that need to be either removed or changed in order to make the later processes more accurate. The preprocessed tokens (here, words) are fed into the clustering section. The K-Means clustering algorithm is selected as per the experiment in Section-3.2, and research shows K-Means to be beneficial for text data clustering [kadhim2014text]. After clustering is complete, a technique is proposed to identify the relations between the clusters in a semantic way; this technique is named the Cluster Relationship Matrix (CRM). Using the CRM, a semantic distance is calculated and compared against a hyper-parameter threshold $\delta$. A mechanism is developed to assess cluster quality and decide whether any further re-clustering is required. If re-clustering is required, a cluster is selected using Algorithm 2 (Step-6); otherwise, a cluster is selected according to Algorithm 2 (Step-5) for ranking. A ranking mechanism is then used to rank the items present in the selected cluster. After the cluster items are ranked, the sentence containing that particular item is selected and passed to the question generation phase. The above describes the process used to identify a questionable entity.
Once identified, this entity (basically a word or a sentence from which a question can be formed) is passed to a sequence-to-sequence [Sutskever:2014:SSL:2969033.2969173] encoder-decoder RNN model [cho-al-emnlp14] to generate questions. A hierarchical attention model [Yang2016HierarchicalAN] is introduced into the attention layer of this question generation model to provide both word-based and sentence-based attention discovery. This helps in the generation of quality questions for both word and sentence inputs. Later, the provided answer is analysed to estimate whether the expected conclusion of the task has been reached and whether to terminate or continue the process. After termination, the final output result is provided to the user.

### 3.2 Selection of Clustering Algorithm

Clustering is an integral part of the machine learning process for unlabelled data, and the selection of a suitable clustering algorithm plays an important role in making the process of data discovery more accurate and efficient. The data to be clustered in this paper has two important properties: first, it is of text and numerical type, and second, it is collected with respect to a particular query. As far as clustering is concerned, words are selected as the tokens to be clustered. To test the capability of clustering algorithms on text data relevant to this work, a basic experiment was performed with a text corpus created by combining three labelled datasets. Clustering algorithms can be broadly divided into three types, so for the experiment one of the most widely used algorithms in this particular domain was selected from each of the following three categories.

1. Partition-based clustering - In partition clustering, each and every point in the dataset is assigned to exactly one of the K clusters formed by the algorithm, where K is a parameter passed to the algorithm; various methods are available to find the optimal K [kadhim2014text].
The most widely used algorithm in this category is K-Means. K-Means scales very well with large data [scikit-learn, scikit:2.3] and basically tries to minimize the WCSS (within-cluster sum of squares). K-Means with cosine similarity is spherical clustering [banerjee2005clustering].

2. Hierarchical clustering - Hierarchical clustering generates a tree structure of nested groups of clusters [kadhim2014text]. Agglomerative clustering is a widely used algorithm in this category. It is a bottom-up algorithm: it initially treats each data point as a cluster, and clusters are then successively merged together. For the merge strategy, a metric known as the linkage criterion is used [scikit-learn, scikit:2.3].

3. Density-based clustering - The idea behind density-based clustering is to find clusters as regions of high point density separated by areas of low point density, which allows clusters of arbitrary shape. High-density points are considered to lie within clusters, and low-density points are considered noise. One of the most widely used algorithms in this category is the GMM. A GMM assumes there are K Gaussian distributions, with unknown parameters, from which each data point was generated. A GMM can be thought of as an extension of K-Means in which information about the covariance structure of the data and the centres of the latent Gaussians is also incorporated into the algorithm [scikit:2.1].

For the clustering evaluation, four algorithms were selected, namely spherical clustering, K-Means with Euclidean distance, agglomerative clustering and Gaussian Mixture Model (GMM) clustering, due to their high accuracy on text data. The steps associated with selecting the best algorithm for this experiment, namely preprocessing, embedding and finally the selection itself, are discussed in the sub-sections below.
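The partition-based family above can be illustrated with a minimal, pure-Python sketch of Lloyd's K-Means, the alternating loop that minimizes the WCSS (the point data and k value below are illustrative only, not from the experiment):

```python
import random

def dist2(a, b):
    # squared Euclidean distance between two points
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's K-Means: alternate an assignment step and a centre
    update step, which monotonically reduces the within-cluster sum of
    squares (WCSS)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest centre
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[j].append(p)
        # update step: recompute each centre as the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centers, clusters
```

Spherical clustering would differ only in replacing `dist2` with a cosine distance on length-normalized vectors.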
#### 3.2.1 Preprocessing

The web sources used to obtain labelled data had the data arranged in tabular or list format, so not much preprocessing was required. Therefore, only basic preprocessing was done before using Sentence Transformers for embedding, following the steps below.

Algorithm 1 Preprocessing for clustering algorithm selection stage
1: Replace "_" with spaces
2: Convert all characters to lowercase
3: Replace tabs with white spaces
4: Remove all special characters
5: Remove all multiple spaces
6: Apply lemmatisation

After preprocessing is complete, three lists are generated: one containing symptoms with 312 items, one containing diseases with 315 items, and one containing medicines with 300 items.

#### 3.2.2 Embedding

An embedding is a representation of text data in a multidimensional space in which semantic meaning and relations are preserved. Embeddings are very important in training machine learning models, as they provide the model's source of knowledge about unseen data. An embedding can be trained on our own data, or a pre-trained model can be used. In this paper, the well-known pre-trained Sentence Transformers embedding model "distilroberta-base-paraphrase-v1" is used [DBLP:journals/corr/abs-1908-10084]; it gives a 768-dimensional embedding of the data. Sentence Transformers were used because most symptoms, medicines and even diseases are not single words but combinations of multiple words. The distilroberta-base-paraphrase-v1 model is trained on millions of paraphrase examples and produces very good results for various similarity and retrieval tasks. Further, PCA was used to reduce the dimension of the data from 768 to 200, as high-dimensional data is not suitable for clustering.
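Steps 1-5 of Algorithm 1 can be sketched in a few lines of Python (the lemmatisation step is omitted here, since it needs an external NLP library such as NLTK or spaCy; the sample input is illustrative):

```python
import re

def preprocess(text):
    """Basic cleanup mirroring Algorithm 1, steps 1-5."""
    text = text.replace("_", " ")            # step 1: underscores to spaces
    text = text.lower()                      # step 2: lowercase
    text = text.replace("\t", " ")           # step 3: tabs to white spaces
    text = re.sub(r"[^a-z0-9 ]", "", text)   # step 4: drop special characters
    text = re.sub(r" +", " ", text).strip()  # step 5: collapse multiple spaces
    return text                              # step 6 (lemmatisation) omitted
```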
#### 3.2.3 Selection

Four clustering algorithms were selected as discussed, namely K-Means, spherical clustering (K-Means with cosine similarity), agglomerative clustering and GMM clustering. The experiment for selecting among the algorithms used the parameters and considerations provided below.

* K-Means - K-Means was used with the number of clusters set to 3 and with 'k-means++' as the method of initialization to speed up convergence.
* Spherical clustering - All parameters were the same as for K-Means, but this uses cosine similarity whereas K-Means uses Euclidean distance.
* Agglomerative clustering - The number of clusters was taken as 3. Ward was chosen as the linkage criterion, which minimizes the variance of the clusters being merged; the Euclidean metric was used to compute the linkage.
* GMM clustering - The number of mixture components was taken as 3, with the covariance type set to 'full', which means each component has its own general covariance matrix.

### 3.3 Identification of Questionable Entity

#### 3.3.1 Multilayered Clustering

An algorithm is proposed to retrieve useful information from an unstructured, preprocessed text corpus. The input to this algorithm is unstructured text, and the target output is a list of ranked words (ranking is defined in subsequent sections) from the cluster having the maximum average distance to all other clusters. Here, the distance is calculated via a novel cluster distance metric developed specifically for the type of data this paper deals with. The algorithm for multilayered clustering is provided below:

Algorithm 2 Multilayered Clustering
1: First, the given text corpus is clustered into N (= 3) initial clusters using K-Means. We chose N=3 initially because the main focus of our study is on medical text corpora, which generally consist of three groups of words: medicines/treatments, diseases, and symptoms.
2: Then a cluster relationship matrix (CRM) of size N×N is created, where CRM[i][j] is the distance metric of the cluster pair (i, j). This distance metric indicates how distinct two clusters are with respect to each other: the higher the distance metric, the farther apart the two clusters are. A detailed description of the CRM is provided in Section-3.3.2.
3: Then an undirected graph is created with each cluster as a node (0 to N-1) and M[i][j] as the weight of the edge connecting the two nodes (clusters) $i$, $j$. For each node, the sum of the weights of all edges directly connected to it is calculated (there are N-1 such edges per node). Out of all N nodes, the two nodes for which this value is maximum and minimum are selected; we call these nodes $maxDisNode$ and $minDisNode$, and the corresponding clusters $maxDisCluster$ and $minDisCluster$.
4: Then a hyper-parameter threshold $\delta$ is carefully chosen (this threshold can be dynamically updated and tuned with each iteration of the algorithm). If each and every value M[i][j] in the $N\times{N}$ cluster relationship matrix is less than the threshold, then reclustering (Step 6) is required; otherwise ranking (Step 5) is selected.
5: Ranking - In this phase, $maxDisCluster$ is picked, the algorithm sorts the words in that cluster in increasing order of the Euclidean distance between each word's vector representation and the centre of the cluster, and then the algorithm ends.
6: Reclustering - Here, $minDisCluster$ is picked and the algorithm divides that cluster into two, so the total number of clusters changes from N to N+1. Accordingly, the dimension of the cluster relationship matrix changes to (N+1)×(N+1) and its values are updated. Now go to Step 4.

#### 3.3.2 Cluster Relationship Matrix

The proposed algorithm takes $N$ clusters as input and gives the cluster relationship matrix as output.
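The node-selection part of Algorithm 2 (Step 3) can be sketched as follows; since the graph is complete, each node's edge-weight sum is just its row sum in the CRM (the matrix values below are illustrative):

```python
def pick_nodes(crm):
    """Return (maxDisNode, minDisNode): indices of the clusters with the
    largest / smallest total distance to all other clusters (Algorithm 2,
    Step 3). `crm` is a symmetric N x N cluster relationship matrix with
    zeros on the diagonal."""
    n = len(crm)
    totals = [sum(crm[i][j] for j in range(n) if j != i) for i in range(n)]
    return totals.index(max(totals)), totals.index(min(totals))
```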
It is an $N\times{N}$ matrix, where $N$ is the total number of clusters present at the given moment and $M[i][j]$ is the distance metric of the cluster pair (i, j). This distance metric indicates how distinct two clusters are with respect to each other: the higher the distance metric, the more distinct the two clusters are (it varies between zero and one). To calculate $M[i][j]$, a special distance, the Word Mover's Distance (WMD), is computed between each pair of words from cluster $i$ and cluster $j$, and the values are summed. Finally, the whole matrix is normalized to have values between 0 and 1.

##### Word Mover's Distance (WMD)

Term frequency-inverse document frequency (TF-IDF) and bag of words (BOW) are the two most common ways to represent documents; however, these representations are frequently near-orthogonal, so they are not suitable for document distances, and they do not capture the distance between individual words. Many attempts have been made to solve this problem, most of which learn a latent low-dimensional representation of documents. Although these attempts produce more reasonable representations of documents than BOW, in many cases they do not improve on BOW's performance in distance-based tasks. Due to these limitations, WMD was introduced to capture the distance between two documents in a much better way. Since, in our case, we are trying to keep words with small mutual distances together in the same cluster, the WMD metric is suitable for our use case. WMD's performance was evaluated using KNN on 8 document datasets and compared with BOW, TF-IDF, BM25, LSI, LDA, mSDA and CCG; the results were impressive. WMD performed best on six of the eight datasets, and even on the two datasets on which it was not the best performer, its error rates were very close to those of the top performers.
WMD uses word embeddings such as word2vec, GloVe, etc., which learn semantically meaningful vector representations of words from their local co-occurrences in sentences. WMD measures the semantic distance between two documents, i.e. the minimum distance that the embedded words of one document need to "travel" to reach the embedded words of the other document. It requires no hyperparameters, is highly interpretable, and has high retrieval accuracy. The WMD distance between two documents is defined by the three main parts described below:

1. Document representation - The document is represented as an $n$-dimensional vector $d\in R^{n}$, where $n$ is the vocabulary size. Each element of this vector is a word's normalized frequency in the document. This vector is also known as the normalised bag of words (nBOW) vector. If a word $i$ appears $C_{i}$ times in the document, then $d_{i}$ is found using Equation-1. This vector is usually very sparse. $d_{i}=\frac{C_{i}}{\sum_{j=1}^{n}C_{j}}$ (1)

2. Semantic similarity metric - The travel cost $C(i,j)$ from word $i$ in one document to word $j$ in another document is defined in Equation-2. $C(i,j)=\left|\left|X_{i}-X_{j}\right|\right|_{2}$ (2) where $X_{i}$ and $X_{j}$ are the embedding representations of words $i$ and $j$ respectively.

3. Flow matrix - The flow matrix $T\in R^{n\times{n}}$, where $n$ is the vocabulary size, is a sparse matrix in which $T_{ij}\geq 0$ denotes how much of word $i$ in one document travels to word $j$ in the other document.

##### Ranking

This algorithm takes $maxDisCluster$ as input and gives a ranked list of words from that cluster as output. The ranking is an effort to find the most representative words of a cluster: words with a higher ranking are better representatives of the cluster than words with a lower ranking.
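The first two ingredients of WMD (Equations 1 and 2) can be sketched directly in Python; the vocabulary and the 2-d toy embeddings here are hypothetical:

```python
import math
from collections import Counter

def nbow(tokens, vocab):
    """Normalised bag-of-words vector d (Equation-1): each entry is a
    word's count divided by the document's total word count."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return [counts[w] / total for w in vocab]

def travel_cost(emb, wi, wj):
    """Euclidean travel cost C(i, j) between two embedded words (Equation-2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb[wi], emb[wj])))
```

The flow matrix $T$ is then obtained by solving a transportation problem over these costs, which is the expensive part of computing the full WMD.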
Once no reclustering is required, the algorithm selects $maxDisCluster$ (defined in Section-3.3.1) and sorts the words in it (words appearing first have a higher rank). Sorting is done using Algorithm-3. $D_{i}=\sqrt{\sum_{j=1}^{K}(C_{j}-E_{ij})^{2}}$ (3) where $K$ is the dimension of the embedding used.

Algorithm 3 Semantic ranking of elements in the selected cluster
1: Let the centre of $maxDisCluster$ be $C$.
2: Let the cluster have $M$ words in it, let the $i^{th}$ word be $W_{i}$, and let its embedding representation be $E_{i}$.
3: For each $i^{th}$ word $W_{i}$ in $maxDisCluster$, the Euclidean distance between $E_{i}$ and $C$ is calculated; let this value be $D_{i}$, as shown in Equation-3.
4: All $M$ words in $maxDisCluster$ are sorted by the distance $D_{i}$, i.e. the words having the smallest $D_{i}$ come first.
5: The final sorted list is returned.

### 3.4 Natural Language Question Generation

#### 3.4.1 Overview

Natural language question generation is an important aspect of the Natural Language Processing field in computer science due to recent advancements in education systems and interaction-based autonomous systems. This section of the paper focuses on the methodology followed in generating questions relevant to the medical domain when provided with either a word or a sentence $D=\{W_{n}\}$, where $n-1$ is the total number of words present in the sentence and $n=0$ represents a single word. Here, the main objective is to maximize the accuracy of the conditional probability $P(Q|W_{n})$, where $Q$ is the question and $W_{n}$ is the input provided to the system from which a question needs to be generated (Equation-4).
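Algorithm 3 amounts to a single sort keyed on Equation-3; a minimal sketch (the toy embeddings and centre are illustrative):

```python
import math

def rank_cluster(words, embeddings, centre):
    """Algorithm 3: rank words by the Euclidean distance D_i (Equation-3)
    between each word's embedding E_i and the cluster centre C; the closest
    (most representative) words come first."""
    def dist(w):
        return math.sqrt(sum((e - c) ** 2 for e, c in zip(embeddings[w], centre)))
    return sorted(words, key=dist)
```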
$p(Q|W_{n})=\prod_{1}^{n}p(q_{t}|q_{1:t-1},W_{n})$ (4) For the generation of natural language questions, a sequence-to-sequence learning [Sutskever:2014:SSL:2969033.2969173] encoder-decoder model for Recurrent Neural Networks (RNNs) [Sutskever:2014:SSL:2969033.2969173] is used, along with an improved coverage mechanism [chali-baghaee-2018-automatic], to generate natural language questions for the input data $D$. A generic attention layer provides attention discovery over multiple words and usually suffers when a sentence consists of a small number of words, or is a single word. To overcome this issue, a Hierarchical Attention Network (HAN) [Yang2016HierarchicalAN] is used alongside the general attention layer; this combined attention model ensures classification at both the word level and the sentence level, so that it can identify words that carry a higher weight in the context of the overall sentence. Since HAN does not consider long-term information about previous sentences, it loses information for longer sentences; hence, the combined attention model can select the highest attention weights whenever applicable, in order to preserve the context through longer sentences. The well-known pre-trained GloVe embedding is used, and the model is trained, validated and tested on both the AmazonQA dataset and the PubMed dataset.

#### 3.4.2 Hierarchical Attention Network

The content provided for training contains samples of both more and less important data. Knowledge of this content is usually captured using attention mechanisms over the sequence of data, but attention models usually have difficulty identifying the contribution of words for smaller input sequences. To overcome the issue of attention discovery for smaller input sequences, and even for single words, a hierarchical attention network (HAN) [Yang2016HierarchicalAN] proves to be a good option.
It has two levels of attention mechanisms, applied at the word and sentence level, enabling it to attend differently to more and less important content. HAN calculates attention in terms of both words and sentences, hence two context vectors are used: one for word-level attention and another for sentence-level attention. The HAN process starts with a word-level encoder followed by a word-only attention mechanism (Equation-7); those attention vectors are then passed to the sentence-level encoders followed by the sentence attention (Equation-9). $u_{it}=tanh(W_{w}h_{it}+b_{w})$ (5) where $W_{w}$ are the weights associated with the concatenated hidden states $h_{it}$ and $b_{w}$ is the bias. $\alpha_{it}=\frac{exp(u_{it}^{T}u_{w})}{\sum_{t}exp(u_{it}^{T}u_{w})}$ (6) where $u_{it}$ is the hidden representation of $h_{it}$, $u_{w}$ is the word-level context vector, and the superscript $T$ denotes the transpose. The value of $u_{it}$ is calculated from Equation-5. $s_{i}=\sum_{t}\alpha_{it}h_{it}$ (7) where $s_{i}$ denotes the word-level attention, $\alpha_{it}$ is calculated from Equation-6, and $h_{it}$ are the concatenated hidden states of the network for a bi-directional model. For sentence-level attention, the general attention model (Equation-9) is selected.

#### 3.4.3 Encoder Modelling

The input and output of a question generation model are dynamic in nature, because the input can have a varying number of words, and the same holds for the output question. To deal with this kind of input, and to convert the variable-length input data into a fixed-length output, an encoder is used. It is a set of neural network layers that maps the input data $D$ into word vectors, which are propagated through the network as hidden states $H=\{h_{1},h_{2},...,h_{n}\}$, where $D=\{W_{n}\}$. For this experiment, the selected encoder is a bidirectional LSTM layer [hochreiter1997long].
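For scalar toy inputs, Equations 5-7 reduce to the following sketch (the weight values are illustrative; in the real HAN the hidden states are vectors and $W_{w}$ is a matrix):

```python
import math

def word_attention(hidden, W_w, b_w, u_w):
    """Word-level HAN attention for scalar toy hidden states:
    u_it = tanh(W_w * h_it + b_w)   (Equation-5)
    alpha_it = softmax(u_it * u_w)  (Equation-6)
    s = sum_t alpha_it * h_it       (Equation-7)"""
    u = [math.tanh(W_w * h + b_w) for h in hidden]
    scores = [math.exp(ui * u_w) for ui in u]
    z = sum(scores)
    alpha = [s / z for s in scores]          # softmax normalization
    s_vec = sum(a * h for a, h in zip(alpha, hidden))
    return s_vec, alpha
```

The output is a convex combination of the hidden states, so the attended value always lies between the smallest and largest hidden state.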
Two layers of bidirectional LSTM are used in this experiment, as this has already been shown to work well in a similar context [chali-baghaee-2018-automatic]. The bidirectional LSTM layer has two hidden states, a forward state $f_{i}$ and a backward state $b_{i}$ for an index $i$, enabling the network to understand a sentence and its formation from both the beginning and the end. For an input index $i$, the hidden states are concatenated to form a longer sequence of hidden states such that $h_{i}=\{f_{i},b_{i}\}$. The sequence of hidden states $h_{i}$ is used during decoding for the generation of the output vector $q_{t}$, as a method of identifying the source and predicting the next target word. The target word in the question, $q_{t}$, is calculated as the weighted sum of the hidden states $h_{i}$, as expressed in Equation-8. $q_{t}=max[\sum_{i=1}^{n}a_{t}(n)H(n),\sum_{i=1}^{n}s_{i}(n)H(n)]$ (8) where $a_{t}$ and $s_{i}$ are shift vectors calculated according to the general attention model and the hierarchical attention model (described in Section-3.4.2) respectively. $a_{t}(i)=\frac{exp(h_{t}^{T}W_{a}h_{i})}{\sum_{j}exp(h_{t}^{T}W_{a}h_{j})}$ (9)

#### 3.4.4 Decoder Modelling

A sequence-to-sequence model requires a decoder to convert the fixed-length context output provided by the encoder into a variable-length output. Two layers of LSTM are used as the decoder in this experiment. To avoid attending to repetitive words, the attention model provides a coverage vector for each input at each time step. The attention model must attend to the next input taking the previous inputs into consideration, using the coverage vector $c$ at time step $t$.
$c_{t}=\sum_{t^{\prime}=0}^{t-1}a_{t^{\prime}}$ (10) Further, the coverage vector needs to be integrated with the concatenated hidden states. This is done using a $tanh$ operation over the hidden states $h_{i}$ and a point-wise addition of the coverage vector $c_{t}$ scaled by the coverage weights $w_{c}$ to be learned. $h_{i}=tanh(h_{i}+w_{c}c_{t}(i))$ (11) The context vector $s_{t}$ and the outputs of the previous layers $[z_{1},z_{2},...,z_{(t-1)}]$ in the decoder are fed to the next layer in the current time step to make the prediction. The prediction is made using a fully connected layer with a softmax classifier. The attention hidden state $\tilde{h_{t}}$ is calculated as a $tanh$ activation applied, with the learned weights $W_{x}$, to the concatenation of the source context vector $s_{t}$ and the next target state $q_{t}$. $\tilde{h_{t}}=tanh(W_{x}[s_{t};q_{t}])$ (12) Finally, the next hidden state is formed using an LSTM cell with the previous hidden state $h_{t-1}$ and the output from the previous state $z_{t-1}$ as input. $h_{t}=LSTM(z_{t-1},h_{t-1})$ (13) #### 3.4.5 Training In the training phase, the generated model is trained using the dataset corpus containing sample questions and their respective answers. With the total number of training samples denoted $T$, the corpus data is defined as $C_{d}=(a_{i},q_{i})_{1}^{T}$. The model is trained by providing the answers as input. For a particular sample answer, the model predicts a question, and the model is optimised by minimizing the negative log-likelihood over the training corpus (Equation-14). For word-level training, the same model is retrained taking each single word in the sample answer (excluding common words) as input (Equation-15). $M_{t}=\sum_{i=1}^{T}-logp(q_{i}|a_{i})$ (14) where $M_{t}$ is the training objective and the rest carry their usual meaning. 
$M_{t}=\sum_{i=1}^{T}\sum_{j=1}^{G_{i}}-logp(q_{i}|a_{ij})$ (15) where, $M_{t}$ is the training objective and $G_{i}$ is the maximum number of words considered as input for the $i^{th}$ corpus answer. To generate natural language questions from the model, prediction vector mapping is performed. Because the embedding corpus is limited, many corpus words will be new to the model; these unknown tokens are substituted using the average attention weight of the words from the source sentence. ### 3.5 Elimination and Termination In this phase, after a question is generated, the user is required to provide an answer to it. The answer needs to be evaluated by the system so that the information and context related to the question are either kept or eliminated. This phase is also responsible for the continuation or termination of the current session of information retrieval. To perform these operations, the following algorithm is designed. Algorithm 4 Elimination and Termination 1: First, the user input $A_{u}$ (answer) is preprocessed to remove unnecessary and irrelevant tokens. 2: Then, a sentence concatenation operation of the answer and the question is performed by removing question words like what, when, where, etc. to get a modified $A_{u}$. 3: Then, a similarity measurement $WMD(a_{i},A_{u})$ is performed for the $i^{th}$ prediction. 4: Considering the same hyper-parameter threshold $\delta$ as defined in Section-3.3.1, if $WMD(a_{i},A_{u})\geq\delta$ then go to step-5, else go to step-6. 5: Corpus data $a_{i}$ is kept intact, the steps continue with the selected list, and the rest of the clusters are removed. Go to step-6. 6: Corpus data $a_{i}$ is removed and step $Reclustering$ (Algorithm-2) is performed, eliminating the row and column corresponding to the removed cluster. 
If re-clustering is required, the steps continue with the rest of the clusters after removing the CRM values (refer to Section-3.3.1); otherwise, the process is terminated. ## 4 Experiments and Results ### 4.1 Objective of the Experiment This paper has various experiments associated with the different sections of the work. Hence, after a brief description of the experimental environment and the dataset, the results section is divided into a few categories to improve readability: 1. 1. Selection of clustering algorithm: The clustering algorithm plays a vital role in the entity selection phase of the work. To select a particular clustering algorithm in the context of our work, a separate experiment is performed to evaluate the performance of various clustering algorithms. 2. 2. Results for Question Generation: Results associated with medical question generation are presented in this section. Expert opinion is also taken into consideration for question quality evaluation. 3. 3. Comparison with the Existing Approaches: A performance comparison of the proposed system with existing works using the same data is performed. ### 4.2 Experimental Environment The specification of the system used is as follows: O.S. - Windows 10 Professional, CPU - AMD® Ryzen™ 7-3700X Processor, RAM - 32GB DDR4, GPU - NVIDIA GeForce® GTX 1080 Ti. The Windows 64-bit version of Python 10.5555/1593511 with the IPython notebook editor PER-GRA:2007 is used throughout the experiment. Important modules used in the experiment include tensorflow tensorflow2015-whitepaper, and packages from NumPy oliphant2006guide, SciPy 2020SciPy-NMeth, scikit-learn scikit-learn and Matplotlib Hunter:2007. 
### 4.3 Dataset Description Two different datasets are used in this section, namely the PubMed 200K dataset dernoncourt-lee-2017-pubmed and the Amazon QA dataset McAuley:2016:ACS:2872427.2883044, DBLP:journals/corr/abs-1809-00934, to train the Recurrent Neural Network models at two different levels. PubMed has a very rich repository of medical data and research articles; in contrast, the Amazon QA dataset has a good collection of opinion question and answer data for the actual natural language question formation phase of the experiment. To test the model, the labelled Amazon QA dataset is used. After training and testing the model with varying parameters, the model that provided the highest accuracy under K-fold cross-validation is selected. This model is then fed with the output word or sentence from the proposed method for identification of questionable entities as discussed in Section-3.3; in turn, the model is capable of producing relevant questions from the sentences of the article. Dataset | Training | Validation | Testing ---|---|---|--- Pubmed | 184472 | 46654 | 34195 Amazon QA | 215325 | 65487 | 45574 Table 1: Dataset Statistics ### 4.4 Results for selection of clustering algorithm Four top-performing clustering algorithms for text-related data are selected from the available categories of algorithms. Testing is then performed to select the algorithm most suitable for this work. The visualization of each of the clustering algorithms is shown in Fig-2. From the visualization, it is seen that all of the clustering algorithms are able to detect the clusters with minor variations. Figure 2: Clustering Visualisation — (a) Sphere clustering, (b) KMeans clustering, (c) Agglomerative, (d) GMM. BCubed precision, recall, and F-score were used for evaluation. The BCubed metric is widely used as a standard metric for text clustering problems. It satisfies four formal constraints: cluster homogeneity, cluster completeness, rag bag, and cluster size vs. quantity. 
The average BCubed precision is the average precision of all items in the distribution; similarly, the average BCubed recall is the average recall of all items in the distribution. The BCubed precision of an item is the ratio of the number of items in its cluster that have the same category as the item (including itself) to the total number of items in its cluster. The BCubed recall of an item is the ratio of the number of items in its cluster that have the same category as the item (including itself) to the total number of items in its category. For performance evaluation, the general precision, recall, and F-score metrics used are given in Equations 16, 17, and 18. $Precision(P)=\frac{TP}{TP+FP}$ (16) $Recall(R)=\frac{TP}{TP+FN}$ (17) $F1-Score(F1)=2\times\frac{P\times R}{P+R}$ (18) Algorithm | Precision | Recall | F score ---|---|---|--- Sphere clustering (KMeans with cosine similarity) | 0.71 | 0.71 | 0.71 KMeans with Euclidean | 0.74 | 0.75 | 0.75 Agglomerative Clustering | 0.68 | 0.70 | 0.69 GMM Clustering | 0.71 | 0.72 | 0.71 Table 2: Evaluation of clustering algorithm on medical texts The evaluation results in Table-2 show K-Means (with Euclidean distance) to be the best performing model, and it is therefore selected for the later stages of the work. ### 4.5 Results for Question Generation #### 4.5.1 Model training and testing performance The test is performed for both word-level and sentence-level question formation. For word-level testing of model performance, the most important word from the sentence is selected and fed into the system. For sentence-level testing, the whole sentence is passed as input to the system. For the training and validation statistics, model accuracy and perplexity are considered. Accuracy is measured as the total number of words correctly predicted and matched exactly with the labelled data. 
On the other hand, model perplexity reflects the errors present in the prediction model. The objective is always to optimize the model by minimizing perplexity while maximizing accuracy. The word-level training statistics are provided in Figure-3 and the sentence-level training statistics in Figure-4. The optimal number of epochs is found to be 20, and hence the model is trained and validated with 20 epochs as represented in the figures. The training and validation statistics confirm that longer inputs have a higher chance of getting close to the actual data. However, many factors associated with text evaluation, such as semantic relations, grammar, and synonyms, are not captured by this measure. To further evaluate the model, metrics-based evaluation is also performed. Figure 3: Model perplexity vs accuracy statistics for single words — (a) training, (b) validation. Figure 4: Model perplexity vs accuracy statistics for sentences — (a) training, (b) validation. #### 4.5.2 Metrics based evaluation Evaluation of natural language machine learning models goes far beyond matching the exact similarity of the prediction to the labelled data, because natural language can be phrased in various ways and still be correct even when it does not match the original labelled data. This makes it difficult to analyse the models using general metrics such as accuracy or F1 score. To overcome this issue, metrics such as METEOR and BLEU have been proposed in the literature and have proven quite effective in this type of task. The Bilingual Evaluation Understudy (BLEU) is a score for comparing the predicted text to a list of reference labelled texts papineni2002bleu. It is used to evaluate a wide range of text generation models in natural language processing tasks. 
Although these evaluation metrics are not perfect, they give a much more reliable evaluation than the general metrics. BLEU is computationally inexpensive and widely adopted. Further, it is language independent, which makes model evaluation easier. In this experiment the medical corpus has a higher frequency of bi-gram data, and hence considering bi-grams in the evaluation is required. We therefore used the cumulative n-gram BLEU score with n=1 (BLEU-1) and n=2 (BLEU-2). METEOR banerjee2005meteor is similar to BLEU in many aspects but, in addition, is designed to consider synonyms and word stems for a more realistic evaluation, and it correlates highly with human evaluations. The model prediction statistics with the evaluation metrics for words and sentences are provided in Table-3. From the results, it is clearly visible that a model always benefits from a sentence rather than a single word, due to the lack of context in the latter. This does not mean, however, that the model fails to generate accurate questions in spite of this. The METEOR score for words indicates that the sentence formation is good, but the BLEU scores indicate that the output does not match the labelled data closely. As a side note, the results are produced using GloVe embeddings with six billion tokens and 100 dimensions. Increasing this embedding data and dimension would help by reducing the unknown tokens generated and hence provide much better results. All the experiments performed have been modelled using the same data corpus and embeddings for better evaluation. 
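To make the cumulative n-gram BLEU score concrete, the following is a minimal single-reference sketch in pure Python (the standard formulation with clipped n-gram precisions and a brevity penalty; the example sentences are invented, and production work would use an established implementation such as NLTK's):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(reference, candidate, n):
    """Clipped n-gram precision: candidate counts capped by reference counts."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

def cumulative_bleu(reference, candidate, max_n):
    """Cumulative BLEU-n: brevity penalty times the geometric mean
    of the modified 1..n-gram precisions (single-reference sketch)."""
    precisions = [modified_precision(reference, candidate, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    log_mean = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_mean)

ref = "what is the recommended dosage of the medicine".split()
cand = "what is the dosage of the medicine".split()
print(round(cumulative_bleu(ref, cand, 1), 3))  # BLEU-1, ~0.867
print(round(cumulative_bleu(ref, cand, 2), 3))  # BLEU-2, ~0.791
```

BLEU-2 penalizes the dropped word more than BLEU-1 does, because the omission also breaks the surrounding bi-grams.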
Input Sequence | METEOR | BLEU-1 | BLEU-2 ---|---|---|--- Word | 3.65 | 7.61 | 4.87 Sentence | 5.24 | 9.86 | 4.56 Table 3: Model performance evaluation with standard metrics for words and sentences #### 4.5.3 Comparison with the Existing Approaches: The state-of-the-art existing approaches for question generation consider a Bi-directional LSTM model with general attention applied at the hidden states and a coverage mechanism for removing redundant information (BiLSTM+GA+C). In contrast, our model utilizes a word- and sentence-level hierarchical attention mechanism (BiLSTM+HA+C). The experiment is performed separately for words and sentences to evaluate the model performance at both levels. Table-4 shows the results for the word-level model comparison. The hierarchical attention mechanism with word-level embedding shows a much higher score for all the evaluation metrics. Algorithm | METEOR | BLEU-1 | BLEU-2 ---|---|---|--- BiLSTM+GA+C | 2.58 | 5.91 | 2.13 BiLSTM+HA+C | 3.65 | 7.61 | 4.87 Table 4: Model comparison with standard metrics for words Table-5 shows the results for the sentence-level model comparison. In this experiment, the results are comparable and do not differ much. Although the model is similar for sentence-level question generation, the small difference in the scores is mainly due to the fact that many questions can be formed from a single input, and hence the sequences may vary. Overall, our model performs much better when considering both the word-level and sentence-level evaluations. Algorithm | METEOR | BLEU-1 | BLEU-2 ---|---|---|--- BiLSTM+GA+C | 5.26 | 9.67 | 4.51 BiLSTM+HA+C | 5.24 | 9.86 | 4.56 Table 5: Model comparison with standard metrics for sentences #### 4.5.4 Human Evaluations Question generation is the process of generating a question from either a single word or a sentence. 
The testing so far is based on labelled data, which is sufficient to compare various models in the machine learning field. However, in order to identify the true capability of the question generation model and to confirm that it provides accurate, error-free, and context-related questions, human evaluations are also considered. We selected 15 volunteers in our lab with a good English background for the offline evaluation of the model and 100 volunteers for an online evaluation, making a total of 115 volunteers. For the human evaluation of model performance, three categories are selected, scored in the range 0 to 1, as described below: 1. 1. Question Selection \- This parameter is scored according to the system's ability to pick correct questions based on the initial input query provided to the system. It is in fact the relevancy of the question with respect to the input context. 2. 2. Question Formation \- This score reflects the quality of the question generation portion of the system. The quality parameters considered are sentence formation and grammatical correctness. 3. 3. Responsiveness \- Responsiveness is the system's ability to interact with the user. It is defined as the time taken by the system to respond to the user once an input is provided. This does not include the overall duration of the session; rather, it is defined for each interaction made by the system. The results of the human evaluation are provided in Table-6. The human evaluation shows that the system was able to generate semantically correct questions with appropriate context for both word- and sentence-level inputs. Further, the responsiveness of the system is also acceptable. 
Human Evaluation --- Parameters | Word Rating | Sentence Rating | Overall Rating Question Selection | 0.5687 | 0.6470 | 0.7457 Question Formation | 0.3765 | 0.7352 | 0.7857 Responsiveness | 0.9665 | 0.7789 | 0.8595 Table 6: Human evaluation for question generation model ## 5 Conclusion and future scope This paper presents a novel approach to information retrieval using an interaction-based mechanism and motivates the need for such systems. A new framework is built for the identification of questionable entities, and the available evaluation methods confirm that it performs as intended. Further, the addition of a word-level attention mechanism in the question generation phase has also proven effective in improving the overall performance of the model. Comparison with existing techniques for the question generation phase also demonstrates the performance benefit of the proposed model. In addition, human evaluation is used as a qualitative assessment. Although the proposed model meets its objectives, there is still scope for improvement, such as using a manually trained embedding rather than a pre-trained model.
# Grad-CAM guided channel-spatial attention module for Fine-grained visual classification ###### Abstract Fine-grained visual classification (FGVC) is becoming an important research field, due to its wide applications and the rapid development of computer vision technologies. The current state-of-the-art (SOTA) methods in the FGVC usually employ attention mechanisms to first capture the semantic parts and then discover their subtle differences between distinct classes. The channel- spatial attention mechanisms, which focus on the discriminative channels and regions simultaneously, have significantly improved the classification performance. However, the existing attention modules are poorly guided since part-based detectors in the FGVC depend on the network learning ability without the supervision of part annotations. As obtaining such part annotations is labor-intensive, some visual localization and explanation methods, such as gradient-weighted class activation mapping (Grad-CAM), can be utilized for supervising the attention mechanism. We propose a Grad-CAM guided channel-spatial attention module for the FGVC, which employs the Grad-CAM to supervise and constrain the attention weights by generating the coarse localization maps. To demonstrate the effectiveness of the proposed method, we conduct comprehensive experiments on three popular FGVC datasets, including CUB-$200$-$2011$, Stanford Cars, and FGVC-Aircraft datasets. The proposed method outperforms the SOTA attention modules in the FGVC task. In addition, visualizations of feature maps also demonstrate the superiority of the proposed method against the SOTA approaches. 
Index Terms— Fine-grained visual classification, gradient-weighted class activation mapping, channel-spatial attention mechanism ## 1 Introduction Fine-grained visual classification (FGVC) aims to distinguish fine-grained classes under the same coarse class labels, _e.g._ , birds [1], airplanes [2], cars [3], and flowers [4], _etc._ The main challenge of the FGVC task is the tiny inter-class difference along with significant intra-class diversity. For example, it is difficult to distinguish a redhead woodpecker from a pileated woodpecker or a downy woodpecker because the sub-categories are highly similar; meanwhile, with changes of pose, scale, and rotation, the redhead woodpecker can be photographed in very different visual views. To generate discriminative features more precisely, we need the ability to capture the key characteristics of the red head and ignore the background and other irrelevant regions, which is an obvious way to overcome the challenge. Fig. 1: Motivation of the Grad-CAM guided channel-spatial attention module. The blue part is the general pipeline of the previous attention mechanisms. The yellow line is our proposed supervision mechanism: the weights obtained by the gradient backpropagation in the Grad-CAM are used to guide the attention weights, with which the attention mechanisms focus on parts that contribute significantly to classification. The existing approaches can be roughly divided into two classes: ($1$) searching for the informative regions that contribute the most to the classification task [5, 6, 7] and ($2$) paying more attention to extracting high-order features from the images [8, 9, 10, 11]. For the former, previous approaches [5] usually employed prior location information such as part-level bounding boxes and segmented masks to generate the discriminative parts. 
Meanwhile, for the latter, powerful deep networks [8, 9] were employed for feature extraction, and different loss functions [10, 11] were designed to constrain these networks and improve the discrimination of the extracted features. Recently, the attention mechanisms [12, 13, 14], which only require image labels for training, have gradually replaced the manual annotation methods, since part annotations are time-consuming and laborious, which limits the flexibility and versatility of real-world FGVC applications. Compared with the approaches that introduce complex structures, the attention-based methods add fewer learned parameters for model training, which efficiently reduces the computation cost. The attention mechanisms simulate the observation habits of human eyes, which always concentrate on the most distinctive regions. For example, we can easily pay attention to the head and the wings of a bird and ignore the other common regions to identify its species. Based on this motivation, many methods have been proposed that utilize attention mechanisms to detect discriminative information from the images, including channel attention [6, 12], spatial attention [14, 15], and channel-spatial attention [16]. Specifically, SENet [12] introduced “squeeze-and-excitation” (SE) blocks to adaptively recalibrate the feature maps channel-wise by modeling the interactions between channels. The trilinear attention sampling network [6] generated attention maps by integrating feature channels with their relationship matrix and highlighted the attended parts at high resolution. The recurrent attention convolutional neural network (RA-CNN) [14] introduced an attention proposal network (APN) to capture region relevance information based on the extracted features, and then amplified the attended region crops to make the network gradually focus on the key areas. 
The convolutional block attention module (CBAM) [16] is a channel-spatial attention method that utilizes both channel-level and region-level information. It can effectively improve the feature expression ability of the networks. The existing methods [6, 12, 14, 16] usually utilize different attention mechanisms to adjust the distributions of the attention weights for balancing the contributions of the feature maps extracted from each part. Although these methods for obtaining the weights differ, they are all constructed based on the original feature maps only, without part information supervision. Obviously, if the feature maps focus on non-significant parts such as backgrounds and distractions, the attention mechanism is meaningless under such unsupervised conditions. To address this issue, we propose a weakly-supervised guideline for discriminative part mining and informative feature learning. It drives the networks to focus on the parts which carry specific characteristic information, such as the head and the beak of a bird. In addition, as each channel of the feature maps can also be considered a semantic part [13], supervision on discriminative parts can be transferred to supervision on channels. Gradient-weighted class activation mapping (Grad-CAM) [17] is usually introduced to illustrate the attention of the networks with heat maps and to visualize the attention on each part by weighted averaging of channels, which can be used to guide the networks to focus on more effective parts and discard the redundant information for classification. In this paper, we introduce a Grad-CAM guided channel-spatial attention mechanism, in which the channel-weighted feature maps are pooled along the channel dimension and multiplied by the original feature maps to obtain the channel-spatial attention maps. 
Meanwhile, a novel Grad-CAM guided channel-spatial attention mechanism loss (GGAM-Loss) is applied for guiding the learning process of the channel weights and forcing the attention module to focus on the parts that contribute most to the classification. As shown in Figure 1, we employ the channel weights obtained from the gradient backpropagation in the Grad-CAM to constrain the channel weights of the forward propagation. Our contributions can be summarized as follows: * • We address the challenge of the FGVC by proposing a Grad-CAM guided channel- spatial attention module, which constrains the channel-spatial attention mechanism to focus on the parts that contribute most to the classification. * • We propose a Grad-CAM guided channel-spatial attention mechanism loss (GGAM- Loss) which employs the Grad-CAM to supervise and constrain the attention weights. Moreover, it is not limited to a specific network architecture. * • We conduct comprehensive experiments on the three commonly used FGVC datasets, _i.e._ , CUB-$200$-$2011$ [1], FGVC-Aircraft [2], and Stanford Cars [3] datasets. The results show the effectiveness of our method. ## 2 Methodology Fig. 2: The framework of our attention module. The upper line (from left to right) and the bottom line (from right to left) present the forward and the gradient backpropagation processes, respectively. A symmetrical Kullback- Leibler (KL) divergence between the weights of each channel in forward propagation and the weights of each feature map in the Grad-CAM is utilized as the loss function in backpropagation to supervise the channel-spatial attention. ### 2.1 Channel-spatial Attention Mechanism The channel-spatial attention is a module that combines both the spatial attention and the channel attention. 
Specifically, as shown in Figure 2, the input image is processed through a series of convolution and pooling operations $F_{cp}$, and feature maps denoted as $A=[a_{1},a_{2},...,a_{C}]\in R^{C\times W\times H}$ are obtained, with height $H$, width $W$, and channel number $C$, respectively. Then we apply a global average pooling $F_{cg}$ to downsample each feature map in $A$ and a two-layer fully connected (FC) network $F_{cr}$ with a softmax function to calculate the weight of each channel as the channel attention weights $S=[s_{1},s_{2},...,s_{C}]\in R^{C}$, according to [12]. We rescale the original feature maps $A$ by $S$, which yields the weighted feature maps $B=[b_{1},b_{2},...,b_{C}]\in R^{C\times W\times H}$ through $F_{cm}$ as $b_{c}=F_{cm}(a_{c},s_{c})=a_{c}\cdot F_{cr}\left(F_{cg}(A)\right)_{c},$ (1) where $c=1,\cdots,C$. After obtaining the channel attention-weighted feature maps $B$, spatial attention is applied. Through the operation $F_{fa}$, which combines a channel-wise summation and a $2$D softmax function, the feature maps in $B$ are flattened along the channel dimension to obtain the spatial attention weights $T\in R^{W\times H}$. Then the channel-spatial attention-weighted feature maps $D=[d_{1},d_{2},...,d_{C}]\in R^{C\times W\times H}$ are obtained by rescaling $A$ with $T$ as $d_{c}=F_{sm}(a_{c},T)=a_{c}\odot F_{fa}(B),$ (2) where $\odot$ is the Hadamard product and $T=F_{fa}(B)=\frac{\sum_{c=1}^{C}b_{c}}{\sum_{i=1}^{W}\sum_{j=1}^{H}\sum_{c=1}^{C}b_{c,i,j}}.$ (3) The classification is then performed according to $D$; $F_{tc}$ is the classifier with multiple FC layers and a softmax function. ### 2.2 Grad-CAM The Grad-CAM uses the class-specific gradient information flowing into the final convolutional layer of a CNN to generate a heat map, which shows the main regions the CNN concentrates on. Specifically, as illustrated in Figure 2, given an input image, we obtain the score $y^{k}$ for the predicted class $k$ before the last softmax function. 
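The channel-spatial attention of Equations (1)-(3) can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' code: the internal structure of the two-layer FC network $F_{cr}$ (layer sizes, ReLU between the layers) is an assumption, and $T$ is obtained by the plain normalization written in Equation (3).

```python
import numpy as np

def channel_spatial_attention(A, W1, b1, W2, b2):
    """Channel-spatial attention sketch, Equations (1)-(3).

    A : (C, W, H) feature maps.
    W1, b1, W2, b2 : parameters of an assumed two-layer FC network F_cr
    that maps the pooled (C,) vector to C channel logits.
    """
    C = A.shape[0]
    g = A.mean(axis=(1, 2))               # F_cg: global average pooling -> (C,)
    z = np.maximum(W1 @ g + b1, 0.0)      # FC layer 1 (ReLU assumed)
    logits = W2 @ z + b2                  # FC layer 2 -> C logits
    e = np.exp(logits - logits.max())
    S = e / e.sum()                       # softmax channel weights s_c
    B = A * S[:, None, None]              # Eq. (1): b_c = s_c * a_c
    T = B.sum(axis=0)                     # channel-wise summation, (W, H)
    T = T / T.sum()                       # Eq. (3): normalize to spatial weights
    D = A * T[None, :, :]                 # Eq. (2): d_c = a_c ⊙ T
    return D, S, T
```

Each output $d_c$ keeps the shape of $a_c$, so the classifier $F_{tc}$ can consume $D$ exactly as it would the raw feature maps.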
Then, $y^{k}$ is propagated back to the elements of $A$ through the bottom line and we obtain the gradient $\frac{{\partial{y^{k}}}}{{\partial A_{c,i,j}}}$. The weights $\beta_{c}^{k}$ of the Grad-CAM, which represent the importance of feature map $c$ for the predicted class $k$, can be defined as $\beta_{c}^{k}=\frac{1}{W\times H}\sum_{i=1}^{W}\sum_{j=1}^{H}\frac{{\partial{y^{k}}}}{{\partial A_{c,i,j}}}.$ (4) Table 1: The statistics of the three FGVC datasets. #Class, #Training, and #Test are class number, training sample number, and test sample number, respectively. Dataset | #Class | #Training | #Test ---|---|---|--- CUB-$200$-$2011$ [1] | $200$ | $5994$ | $5794$ FGVC-Aircraft [2] | $100$ | $6667$ | $3333$ Stanford Cars [3] | $196$ | $8144$ | $8041$ Table 2: Classification accuracies (%) on the CUB-$200$-$2011$, the FGVC-Aircraft, and the Stanford Cars datasets. The best results on each dataset are in bold, and the second best results are in underline. Datasets | Base Model | CUB-$200$-$2011$ | FGVC-Aircraft | Stanford Cars ---|---|---|---|--- RA-CNN (CVPR$17$ [14]) | VGG$19$ | $85.30$ | $88.20$ | $92.50$ MA-CNN (ICCV$17$ [13]) | VGG$19$ | $84.92$ | $90.35$ | $92.80$ SENet (CVPR$18$ [12]) | VGG$19$ | $84.75$ | $90.12$ | $89.75$ SENet (CVPR$18$ [12]) | ResNet$50$ | $86.78$ | $91.37$ | $93.10$ CBAM (ECCV$18$ [16]) | VGG$19$ | $84.92$ | $90.32$ | $91.12$ CBAM (ECCV$18$ [16]) | ResNet$50$ | $86.99$ | $91.91$ | $93.35$ DFL (CVPR$18$ [18]) | ResNet$50$ | $87.40$ | $91.73$ | $93.11$ NTS (ECCV$18$ [7]) | ResNet$50$ | $87.52$ | $91.48$ | $93.90$ TASN (CVPR$2019$ [6]) | VGG$19$ | $86.10$ | $90.83$ | $92.40$ TASN (CVPR$2019$ [6]) | ResNet$50$ | $87.90$ | $92.56$ | $93.80$ DCL (CVPR$2019$ [19]) | ResNet$50$ | $87.80$ | $\underline{93.00}$ | $\underline{94.50}$ ACNet (CVPR$2020$ [20]) | ResNet$50$ | $\underline{88.10}$ | $92.40$ | $\boldsymbol{94.60}$ Ours | VGG$19$ | $87.34$ | $91.55$ | $93.32$ Ours | ResNet$50$ | $\boldsymbol{88.45}$ | $\boldsymbol{93.42}$ | $94.41$ ### 2.3 
Grad-CAM guided channel-spatial attention loss In the FGVC, the attention mechanism is introduced to ensure the CNN focuses mainly on the more effective parts, so as to improve the classification accuracy. As mentioned above, Grad-CAM can extract the key parts of the input image. In this section, we follow the same motivation and propose the Grad-CAM guided channel-spatial attention loss to enhance the discriminative part searching and feature extraction abilities of CNNs in the FGVC. In Figure 2, after the $F_{cr}$ operation, we can obtain the weights $S$ of each channel in $A$. Through the operation $F_{si}$, we apply a sigmoid function to $\beta_{c}^{k},c=1,\cdots,C$, to scale their intervals and obtain $\tilde{\beta}^{k}=[\tilde{\beta}_{1}^{k},\cdots,\tilde{\beta}_{C}^{k}]\in R^{C}$, where $\tilde{\beta}_{c}^{k}=\text{sigmoid}(\beta_{c}^{k})$. As $\tilde{\beta}^{k}$ reflects the contribution of each channel to the classification, we constrain the channel attention weights $S$ with it. Here, we propose the Grad-CAM guided channel-spatial attention mechanism loss, GGAM-Loss in short, to construct the regularization term. The GGAM-Loss ($L_{\text{GGAM}}$), which is a symmetrical Kullback-Leibler (KL) divergence between $S$ and $\tilde{\beta}^{k}$, can be defined as $L_{\text{GGAM}}=\frac{1}{2}\left(\text{KL}(S||\tilde{\beta}^{k})+\text{KL}(\tilde{\beta}^{k}||S)\right),$ (5) where $\text{KL}(x||y)$ is the KL divergence from $x$ to $y$. Moreover, as we also use the original cross-entropy (CE) loss $L_{\text{CE}}$ for training the model, the total loss function $Loss$ of the whole network can be defined as $Loss=L_{\text{CE}}+\lambda L_{\text{GGAM}},$ (6) where $\lambda$ is a nonnegative multiplier. ## 3 Experimental Results and Discussions ### 3.1 Datasets We evaluate our method on three challenging FGVC datasets: the CUB-$200$-$2011$ [1], FGVC-Aircraft [2], and Stanford Cars [3] datasets. 
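Before turning to the experiments, the loss of Section 2.3 can be made concrete. The following NumPy sketch combines the Grad-CAM channel weights of Equation (4) with the symmetrical KL divergence of Equation (5); it is illustrative only (the gradients are assumed to be given by the framework's autograd, and the small epsilon for numerical stability is an added detail, as is the fact that $\tilde{\beta}^{k}$ is left unnormalized, following the definition in the text):

```python
import numpy as np

def ggam_loss(S, grads):
    """GGAM-Loss sketch: Eq. (4) channel weights, then Eq. (5) symmetric KL.

    S     : (C,) channel attention weights from F_cr (a probability vector)
    grads : (C, W, H) gradients dy^k / dA of the class score w.r.t. A
    """
    beta = grads.mean(axis=(1, 2))         # Eq. (4): spatial average of gradients
    beta = 1.0 / (1.0 + np.exp(-beta))     # F_si: sigmoid rescaling -> beta~
    eps = 1e-12                            # stability guard (added detail)
    p, q = S + eps, beta + eps
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))
    return 0.5 * (kl_pq + kl_qp)           # Eq. (5): symmetrical KL divergence

# Per Eq. (6), training would minimize: total = cross_entropy + lam * ggam_loss(...)
```

Note that the symmetrized form is always nonnegative, since each pointwise term $(p_c-q_c)\log(p_c/q_c)$ is nonnegative, and it vanishes exactly when $S$ matches $\tilde{\beta}^{k}$.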
The statistics of the datasets mentioned above, including class numbers and the training/testing sample numbers, are shown in Table 1. We followed the same train/test splits as presented in Table 1. For model training, we did not use artificially marked bounding boxes or part annotations. ### 3.2 Implementation Details In order to compare the proposed method with other attention mechanisms, we resized every image to $448\times 448$, which is standard in the literature [19, 20]. The backbones we used for extracting features were VGG$19$ and ResNet$50$, which were pre-trained on the ImageNet dataset. We used a stochastic gradient descent optimizer. The weight decay value and the momentum were kept at $1\times 10^{-4}$ and $0.9$, respectively, with $100$ epochs. The learning rate of the FC layers was initially set at $0.1$, and we used the cosine annealing schedule update strategy [21]. The learning rate of the pre-trained feature extraction layers was one-tenth of that of the FC layers. Fig. 3: Visualizations of the ablation models in Section 3.4. The first column represents the original image. The following four columns show visualization results of the baseline, the channel attention, the spatial attention, and the channel-spatial attention, respectively. The top row is trained without the GGAM-Loss, while the bottom row is trained with the GGAM-Loss. The red box indicates the visualization result of our proposed method. ### 3.3 Experimental Results According to the aforementioned implementation details, the detailed results are listed in Table 2. Our method achieves significant performance improvements on all three datasets, and the evaluation results can be summarized as follows: * • On the CUB-$200$-$2011$ dataset, our method achieves the best result with both VGG$19$ and ResNet$50$, compared with the corresponding referred methods. Our method exceeds the second best method, TASN, by $1.24\%$ with the VGG$19$.
In addition, compared with the leading result achieved by the ACNet, our method improves the accuracy by $0.35\%$ with the ResNet$50$. * • On the FGVC-Aircraft dataset, our method also obtains the best accuracy of $93.42\%$ with the ResNet$50$, around a $0.4\%$ improvement over the DCL. With the VGG$19$, the result of our method also improves slightly. * • On the Stanford Cars dataset, our method outperforms most of the compared methods, especially with the same VGG$19$ backbone. The accuracy of the ACNet with the ResNet$50$ turns out to be $0.19\%$ better than ours. Note that the ACNet depends mainly on additional network parameters and a complex training process to improve the accuracy, which makes it much more complex than our method. Table 3: Ablation study of our method on classification accuracies (%). Key modules of the proposed method, including the channel attention, the spatial attention, and the Grad-CAM, are compared. “$\checkmark$” indicates the module is included, and “$\times$” that it is not. The best result is in bold. Spatial attention | Channel attention | GGAM-Loss | Accuracy ---|---|---|--- $\times$ | $\times$ | $\times$ | $85.10$ $\checkmark$ | $\times$ | $\times$ | $85.61$ $\times$ | $\checkmark$ | $\times$ | $85.39$ $\checkmark$ | $\checkmark$ | $\times$ | $86.86$ $\times$ | $\times$ | $\checkmark$ | $85.30$ $\checkmark$ | $\times$ | $\checkmark$ | $86.58$ $\times$ | $\checkmark$ | $\checkmark$ | $86.26$ $\checkmark$ | $\checkmark$ | $\checkmark$ | $\boldsymbol{88.45}$ ### 3.4 Ablation Study Attention mechanisms and Grad-CAM are the major modules of our method, and the attention mechanisms include the channel and spatial attention mechanisms. We analyze the influence of each module through the experimental results. The ablation experiments are all conducted on the CUB-$200$-$2011$ dataset, and we use the ResNet$50$ as the base model if not particularly mentioned. The experimental results are shown in Table 3. * • Effectiveness of the attention mechanisms.
Compared with the baseline model, the spatial attention improves performance by $0.51\%$, and the channel attention also yields a slight improvement. In particular, the combination of channel and spatial attention obtains a $1.76\%$ increase in accuracy. This enhancement is obvious and shows that the channel-spatial attention is useful for the FGVC. * • Effectiveness of the GGAM-Loss. It can be seen that the classification accuracy of each attention mechanism model is improved after adding the GGAM-Loss as the constraint for the attention mechanism. The above results demonstrate the effectiveness of the GGAM-Loss. ### 3.5 Visualizations In order to better explain the improvement of our method, Figure 3 shows the visualizations of each model in Section 3.4, which were generated by the Grad-CAM. The baseline cannot clearly focus on the right region of the object. With the addition of the attention mechanisms, the models tend to pay attention to the beak and the neck of the bird, which are discriminative parts. After adding the GGAM-Loss, the models can focus on more accurate discriminative characteristics and pay less attention to background information. ### 3.6 Sensitivity Study of ${\lambda}$ Fig. 4: Sensitivity study of $\lambda$ for our model on the CUB-$200$-$2011$ dataset. In order to evaluate the robustness of our method, we conduct a sensitivity study of the hyperparameter $\lambda$ to see whether the network performance changes much with a change of $\lambda$. We conduct this study on the CUB-$200$-$2011$ dataset, using the ResNet$50$ as the base model. We run the proposed model with $\lambda$ varying from $1$ to $9$ with a step size of $1$. The classification accuracies are shown in Figure 4.
From Figure 4, it can be observed that the performance of our method is consistently better than the channel-spatial attention (without the GGAM-Loss) and does not change much with the value of $\lambda$, which demonstrates the effectiveness and robustness of our method. ## 4 Conclusions In this paper, we proposed a Grad-CAM guided channel-spatial attention mechanism loss (GGAM-Loss) for the FGVC task, which constrains the channel-spatial attention module to focus on the most discriminative parts of the images. Note that the proposed GGAM-Loss can also be applied to other network architectures. The performance of our method was evaluated in the FGVC task, and superior performance was achieved on three FGVC datasets (CUB-$200$-$2011$, Stanford Cars, and FGVC-Aircraft). The effectiveness of the key modules of the proposed method was also evaluated. Visualizations of the feature maps illustrate the validity of the proposed method. ## References * [1] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, “The Caltech-UCSD Birds-200-2011 dataset,” California Institute of Technology, 2011. * [2] S. Maji, E. Rahtu, J. Kannala, M. B. Blaschko, and A. Vedaldi, “Fine-grained visual classification of aircraft,” CoRR, vol. abs/1306.5151, 2013. * [3] J. Krause, M. Stark, J. Deng, and F.-F. Li, “3D object representations for fine-grained categorization,” in Proceedings of the International Conference on Computer Vision, 2013. * [4] M. Nilsback and A. Zisserman, “Automated flower classification over a large number of classes,” in Proceedings of the 2008 Sixth Indian Conference on Computer Vision, Graphics and Image Processing, 2008. * [5] N. Zhang, J. Donahue, R. Girshick, and T. Darrell, “Part-based R-CNNs for fine-grained category detection,” in Proceedings of the European Conference on Computer Vision, 2014. * [6] H. Zheng, J. Fu, Z. Zha, and J.
Luo, “Looking for the devil in the details: Learning trilinear attention sampling network for fine-grained image recognition,” in Proceedings of the Computer Vision and Pattern Recognition, 2019. * [7] Z. Yang, T. Luo, D. Wang, Z. Hu, J. Gao, and L. Wang, “Learning to navigate for fine-grained classification,” in Proceedings of the European Conference on Computer Vision, 2018. * [8] C. Yu, X. Zhao, Q. Zheng, P. Zhang, and X. You, “Hierarchical bilinear pooling for fine-grained visual recognition,” in Proceedings of the European Conference on Computer Vision, 2018. * [9] H. Zheng, J. Fu, Z. Zha, and J. Luo, “Learning deep bilinear transformation for fine-grained image representation,” in Advances in Neural Information Processing Systems, 2019. * [10] M. Sun, Y. Yuan, F. Zhou, and E. Ding, “Multi-attention multi-class constraint for fine-grained image recognition,” in Proceedings of the European Conference on Computer Vision, 2018. * [11] D. Chang, Y. Ding, J. Xie, A. K. Bhunia, X. Li, Z. Ma, M. Wu, J. Guo, and Y.-Z. Song, “The devil is in the channels: Mutual-channel loss for fine-grained image classification,” IEEE Transactions on Image Processing, vol. 29, pp. 4683–4695, 2020. * [12] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the Computer Vision and Pattern Recognition, 2018. * [13] H. Zheng, J. Fu, T. Mei, and J. Luo, “Learning multi-attention convolutional neural network for fine-grained image recognition,” in Proceedings of the International Conference on Computer Vision, 2017. * [14] J. Fu, H. Zheng, and T. Mei, “Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition,” in Proceedings of the Computer Vision and Pattern Recognition, 2017. * [15] H. Zheng, J. Fu, Z.-J. Zha, J. Luo, and T. Mei, “Learning rich part hierarchies with progressive attention networks for fine-grained image recognition,” IEEE Transactions on Image Processing, vol. 29, pp. 476–488, 2020.
* [16] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “CBAM: Convolutional block attention module,” in Proceedings of the European Conference on Computer Vision, 2018. * [17] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the International Conference on Computer Vision, 2017. * [18] Y. Wang, V. I. Morariu, and L. S. Davis, “Learning a discriminative filter bank within a CNN for fine-grained recognition,” in Proceedings of the Computer Vision and Pattern Recognition, 2018. * [19] Y. Chen, Y. Bai, W. Zhang, and T. Mei, “Destruction and construction learning for fine-grained image recognition,” in Proceedings of the Computer Vision and Pattern Recognition, 2019. * [20] R. Ji, L. Wen, L. Zhang, D. Du, Y. Wu, C. Zhao, X. Liu, and F. Huang, “Attention convolutional binary neural tree for fine-grained visual categorization,” in Proceedings of the Computer Vision and Pattern Recognition, 2020. * [21] G. Huang, Y. Li, G. Pleiss, Z. Liu, J. E. Hopcroft, and K. Q. Weinberger, “Snapshot ensembles: Train 1, get M for free,” in Proceedings of the International Conference on Learning Representations, 2017.
# Automatic Monitoring Social Dynamics During Big Incidences: A Case Study of COVID-19 in Bangladesh
Fahim Shahriar¹ and Md Abul Bashar²
¹ Comilla University, Cumilla, Bangladesh, <EMAIL_ADDRESS>
² Queensland University of Technology, Brisbane, Australia, <EMAIL_ADDRESS>
###### Abstract Newspapers are a trustworthy medium where people get the most reliable and credible information compared with other sources. On the other hand, social media often spread rumors and misleading news to attract more traffic and attention. Careful characterization, evaluation, and interpretation of newspaper data can provide insight into intricate and emotionally charged social issues and support the monitoring of any big social incidence. This study analyzed a large set of spatio-temporal Bangladeshi newspaper data related to the COVID-19 pandemic. The methodology included volume analysis, topic analysis, automated classification, and sentiment analysis of news articles to get insight into the COVID-19 pandemic in different sectors and regions of Bangladesh over a period of time. This analysis will help the government and other organizations to identify the challenges that have arisen in society due to this pandemic, to decide what steps should be taken immediately and in the post-pandemic period, and to plan how the government and its allies can come together to address similar crises in the future. ###### Keywords: Topic Analysis LDA Topic Model Dynamic Topic Modeling Time Series Decomposition Bengali Text Dataset Newspaper Text Classification RNN LSTM Sentiment Analysis CNN-BiLSTM ## 1 Introduction The outbreak of COVID-19 has brought serious health and economic consequences to society. It triggered one of the largest recessions in the world. Travel and currency companies lost billions of dollars, global stock markets plummeted, schools were closed, and the health care system was exhausted.
Mental and social problems arose as people started to worry about infection, losing friends and family, losing their jobs, or isolation. Bangladesh has not been spared from this terrible virus. The virus has had major impacts on people's lives and significantly degraded quality of life. There were significant numbers of infections and deaths. Hospitals did not have adequate treatment facilities, including doctors, beds, and emergency supplies. Besides the health crisis, people have suffered enormous economic losses. Many people have lost their jobs; companies lost revenues, and many of them went bankrupt. The most affected were day-laborers and low-income workers. The lockdown in the pandemic suppressed their income. Many workers starved since their livelihood was cut off. Working people took to the streets in search of their livelihood. They started protesting in the streets for relief. Seeing their plight, many people, including the government, came forward to help them. Because of the lockdown, the international transport system was shut down, which stopped imports and exports. As a result, the country's industry suffered miserably. Objective monitoring and analysis of social dynamics during such a big incident can help the government and other authorities decide and take initiatives where required. This research proposes utilizing articles published in newspapers to objectively monitor and analyze social dynamics during a big incidence, such as the COVID-19 pandemic. Newspapers are one of the most popular mass media in our daily life. Newspapers provide information on all aspects of the country: financial, political, social, environmental, and more. Whether it is a public campaign, an emergency, or a provocation, newspapers are a great resource for keeping track of internal and external events and stories. This mass medium generally provides authentic information, whereas social media such as Facebook and Twitter often spread rumors and cannot be relied upon for authentic news.
Effective classification, analysis, and interpretation of newspaper data can provide a deep understanding of any big incident in a society. In this research, we analyzed a large spatio-temporal dataset of Bangladeshi daily newspapers related to COVID-19. The approach incorporated volume analysis, topic analysis, automatic classification of news articles, and sentiment analysis to better understand the COVID-19 pandemic in Bangladesh's divisions and districts over time. The experimental results and analysis give an objective insight into the COVID-19 pandemic in Bangladesh that will benefit the government and other authorities in disseminating resources. This paper especially shows how to utilize automatic techniques for monitoring social dynamics in big incidents such as a pandemic, a natural disaster, or social unrest. This research makes the following main contributions. (1) It collects, manually classifies, and publishes a large collection of COVID-19 related Bangladeshi news articles in Bengali and English. (2) It investigates the topics discussed during the COVID-19 pandemic in Bangladesh and how they have changed over time using manual and automatic techniques. (3) It designs a CNN-BiLSTM architecture for analyzing sentiment in Bengali text. (4) It analyzes COVID-19 related sentiments in the community over time and space. (5) It automatically categorizes documents into classes of observational interest for monitoring social interests. The rest of the paper is organized as follows: Section 2 discusses related work, Section 3 presents the methodology and data collection, Section 4 presents experimental results, and Section 5 concludes the paper. ## 2 Related Work In this section, we discuss related work by different researchers, divided into four parts: Static Topic Modeling, Dynamic Topic Modeling, Text Classification, and Sentiment Analysis.
### 2.1 Static Topic Modeling Topic modeling is a process of discovering hidden topics in a collection of texts Bashar et al. (2020a); Balasubramaniam et al. (2020). It can be considered a statistical representation of topics obtained through text mining. One of the most popular topic modeling techniques, Latent Dirichlet Allocation (LDA) (Blei et al., 2003; Bashar et al., 2020a), discovers topics based on word recurrence in a set of documents. LDA is very useful for finding a reasonably accurate mixture of topics within a given document. Topic modeling has been well studied for English text mining. For instance, Zhao et al. (2011) used unsupervised topic modeling in their research and compared the content of Twitter with the traditional news media _“New York Times”_. They used the Twitter-LDA model to find topics from a representative sample of the entire Twitter and then used text mining techniques to compare these Twitter topics with the _New York Times_' topics, taking into account the topic category and type. Wang and Blei (2011) developed an algorithm to recommend scientific articles to users in online communities. Their method combines the advantages of traditional collaborative filtering and probabilistic topic modeling. They applied collaborative topic modeling for recommending scientific articles. Wayasti et al. (2018) applied Latent Dirichlet Allocation in their research and extracted topics based on ride-hailing customers' posts on Twitter. In their research, they used 40 parameter combinations of LDA to obtain the best combination of topics. According to the perplexity value, the customers discussed 9 topics in the posts, including keywords for each topic. Tong and Zhang (2016) presented two experiments that build topic models on Wikipedia articles and Twitter users' tweets. However, topic modeling has not been well studied for Bengali text mining, unlike English text mining. Das and Bandyopadhyay (2010b) used topic-wise opinion summarization from Bengali text.
They applied K-Means clustering and a _document-level theme relational graph_ representation. However, they did not use any topic modeling technique, such as LDA. Rakshit et al. (2015) applied a multi-class SVM classifier for analyzing Bengali poetry and poet relations. They performed a subject-wise classification of poems into predetermined categories. Hasan et al. (2019) compared the performance of the LDA and LDA2vec topic models on Bengali newspapers. Al Helal and Mouhoub (2018) used LDA for detecting the primary topics from a Bengali news corpus. However, they did not directly apply LDA to the Bengali text. Instead, they translated the Bengali text into English and then applied LDA to detect the topics. Rahman et al. (2019) used lexical analysis for sentence-wise topic modeling. Their topic modeling was based on sentiment analysis. None of the existing works used Bengali text topic modeling for monitoring a pandemic or a major event. In addition to English and Bengali, topic modeling in various other languages has also been studied. De Santis et al. (2020) analyzed a system that uses NLP pipelines, a theoretical framework of content aging to determine the qualitative parameters of tweets, and co-occurrence analysis to build topic maps to identify topics related to posts from Italian Twitter users. Han et al. (2020) extracted topics related to COVID-19 from a Sina Weibo (Chinese microblogging website) text dataset through the LDA topic model. ### 2.2 Dynamic Topic Modeling The dynamic topic model is a cumulative model that can be used to analyze changes in a document collection over time Bashar et al. (2020a). There are many studies on dynamic topic modeling for the English language. For example, AlSumait et al. (2008) showed that the LDA model could be extended to an online version by gradually updating the current model with new data, and that the model has the ability to capture the dynamic changes of the topics. Dieng et al.
(2019) evaluated D-ETM on three data sets and discovered the word probabilities of eight different topics that D-ETM learned over time. Nguyen et al. (2020) discovered latent topics from the financial reports of listed companies in the United States and studied the evolution of the discovered themes through dynamic topic modeling methods. Marjanen et al. (2020) discussed the role of humanistic interpretation in analyzing discourse dynamics through topic models of historical newspapers. Bashar et al. (2020a) extracted five COVID-19 related topics from a Twitter dataset through LDA topic modeling, and they showed the changes in the extracted topics over time. Nevertheless, for the Bengali language, so far, there is no research on dynamic topic modeling. In this study, we study the evolution of the extracted COVID-19 related topics over time using dynamic topic modeling. ### 2.3 Text Classification Text classification, also known as text labeling or text categorization, is the process of categorizing content into organized groups Bashar et al. (2020b); Bashar and Nayak (2020); Bashar et al. (2018); Bashar and Nayak (2020). By utilizing NLP, classifiers can automatically label text and then assign a set of predefined labels or categories based on its content. Many researchers have worked on text classification in English. For example, Patil and Pawar (2012) used the Naive Bayes algorithm to classify website content. They divided the website content into ten categories, and the average accuracy over the ten categories was almost 80%. Bijalwan et al. (2014) used K-Nearest Neighbors, Naive Bayes, and Term-gram to classify text. They showed that in their research, K-Nearest Neighbors' accuracy was better than that of Naive Bayes and Term-gram. Tam et al. (2002) showed that K-Nearest Neighbors was superior to NNet and Naive Bayes for English documents.
Pawar and Gawande (2012) showed that the performance of Support Vector Machines is far superior to Decision Trees, Naive Bayes, K-Nearest Neighbors, Rocchio's algorithm, and Backpropagation networks. Liu et al. (2010) showed that Support Vector Machines are better than K-Nearest Neighbors and Naive Bayes. In addition to English text classification, some researchers have also classified Bengali text. For example, Mandal and Sen (2014) applied four supervised learning methods (Naive Bayes, K-Nearest Neighbors, Decision Tree classifier, and Support Vector Machine) to labeled web documents. They classified the documents into five categories: Business, Sports, Health, Technology, and Education. Chy et al. (2014) applied a Naive Bayes classifier to categorize Bengali news. Pal et al. (2015) described a Naive Bayes classifier for Bengali sentence classification. They used over 1747 sentences in their experiment and got an accuracy of 84%. Kabir et al. (2015) used a Stochastic Gradient Descent (SGD) classifier to categorize Bengali documents. Eshan and Hasan (2017) created an application that identifies abusive texts in Bengali. They applied Naive Bayes, Random Forest, and Support Vector Machine (SVM) with Radial Basis Function (RBF), linear, polynomial, and sigmoid kernels to classify the texts and compared the results among them. Islam et al. (2017) applied SVM, Naive Bayes, and Stochastic Gradient Descent (SGD) to classify Bengali documents and compared the results of those classifiers. However, none of the existing works used Bengali text classification for monitoring a pandemic or a major event. ### 2.4 Sentiment Analysis Sentiment analysis refers to computationally recognizing and categorizing opinions expressed in a piece of text. It is successfully used in commerce, where businesses use it to track online discussions to identify public sentiment about their brand, product, or service. A lot of research work has been done in sentiment analysis for the English language. For example, Cui et al.
(2006) reviewed about 100,000 product reviews from various websites. They divided reviews into two main categories: positive and negative. Jagtap and Dhotre (2014) applied the Support Vector Machine and Hidden Markov Model and showed that the hybrid classification model is well suited for extracting teacher feedback and evaluating sentiments. Alm et al. (2005) divided seven emotional words into three polarity categories: positive emotion, negative emotion, and neutral, and the Winnow parameter adjustment method they used can reach 63% accuracy. For extracting Twitter sentiment, Agarwal et al. (2011) applied unigram, tree, and feature-based models. Bashar et al. (2020a) used a Convolutional Neural Network to extract sentiments related to COVID-19 from a Twitter dataset. Some research used sentiment analysis in Bengali texts. For instance, Das and Bandyopadhyay (2010a) classified emotions into six categories: Happy, Sad, Anger, Disgust, Fear, and Surprise. Chowdhury and Chowdhury (2014) used sentiment analysis in Bangla microblog posts. They applied a semi-supervised bootstrapping method utilizing SVM and Maximum Entropy. Hasan et al. (2014) proposed a strategy to identify sentiments in Bengali texts by Contextual Valency Analysis. They employed a POS tagger in their approach. Hassan et al. (2016) used recurrent neural networks to analyze sentiments in Romanized Bengali texts. In their experiments, they used the Bangla and Romanized Bangla Text (BRBT) dataset. For sentiment analysis of Bangla microblogs, Asimuzzaman et al. (2017) used an adaptive neuro-fuzzy inference framework to predict polarity and utilized fuzzy parts of speech in semantic rules. Mahtab et al. (2018) designed a model for sentiment analysis on Bangladesh cricket news. They applied TF-IDF and SVM (Support Vector Machine) in their model and found 64.596% accuracy. Tripto and Ali (2018) used sentiment analysis on YouTube comments.
Their research built a model based on deep learning that classifies a Bengali text into three sentiment classes as well as five sentiment classes. Tabassum and Khan (2019) used the Random Forest algorithm to classify the sentiments in Bengali texts. Tuhin et al. (2019) applied Naive Bayes and a topic modeling approach to design an automated system for sentiment analysis in Bengali text. Their system classifies emotions into six categories: happy, sad, tender, excited, angry, and scared. However, none of the existing works used Bengali text sentiment analysis for monitoring a pandemic or a major event. ## 3 Experimental Methodology ### 3.1 Methodology This pandemic situation has changed society and the country by a significant margin. The whole face of the country has changed completely. Significant sectors of the nation, such as the economic, social, and political sectors, have been affected massively. The education system has been hit particularly hard. This research aims to automatically analyze the daily newspapers of Bangladesh to reveal what is going on in society and to gain the knowledge needed to comprehend the fundamental topics (or subjects) and sentiments arising and evolving in the discussion. This study conducts a topic and sentiment analysis on a large collection of COVID-19 related news articles published in Bangladesh in both Bengali and English. The study focuses the analysis on both spatial and temporal dimensions. In the topic analysis, we used LDA-based topic modeling and dynamic topic modeling to find the topics, their evolution over time, and their time and space (location). We also analyzed what impact each topic had on particular areas. Then we analyzed the sentiment distribution over time and space to identify social sentiment in space and time. The experimental workflow of this study is shown in Figure 1. First, we manually gathered a large collection of COVID-19 related news articles from the six most circulated Bangladeshi daily newspapers.
Along with the news, the collection contains geospatial and temporal information about the news. The dataset was then preprocessed by removing HTML, markers, and other non-relevant information such as adverts. Then, we manually organized the news articles into a set of classes and sub-classes. Then we extracted the topics and the subtopics from the dataset. We used these classes and sub-classes to perform basic analysis, such as comparing similarity and diversity in the news. These classes and sub-classes have also been used to qualitatively evaluate the accuracy of the topics discovered by LDA and the labels predicted by classifiers, before LDA and the classifiers are employed for detailed analysis. Figure 1: Experimental Workflow ### 3.2 Data Collection and Preparation These publicly available news articles related to COVID-19 were collected from the six most popular newspapers in Bangladesh, covering 21 January 2020 to 19 May 2020. The six newspapers are _The Daily Prothom Alo_, _Bangladesh Pratidin_, _Kaler Kantho_, _The Daily Star_, _Newage_, and _The Daily Observer_. A total of 15,565 news articles were collected from these six newspapers. From every news article, we extracted the news title, the main body of the news, a summary of the news (i.e., the first few lines of the news body), the published date, and the location of the news incident. We used Python's _BeautifulSoup_ and _Newspaper3k_ tools for extracting the news content. _BeautifulSoup_ is a popular Python package for parsing HTML and XML documents and one of the most popular web scraping tools. _Newspaper3k_ is a user-friendly library for scraping news articles and other related data from newspaper portals. It is built on top of the _requests_ library and uses _lxml_ for parsing. This module is an improved version of the _Newspaper_ module and is used for the same purpose. Table 1 summarizes the statistics of the collection. We call this collection the _Comilla University COVID-19 News Collection_ (CoU-CNC).
We made it available online (CoU-CNC Dataset: https://cutt.ly/djGILi2) for further analysis.

Table 1: CoU-CNC Dataset Statistics
Article Source | Language | Article Count
---|---|---
The Daily Prothom Alo | Bengali | 4169
Bangladesh Pratidin | Bengali | 5584
Kaler Kantho | Bengali | 1160
The Daily Star | English | 1278
The Daily Observer | English | 1191
New Age | English | 2183

Out of these six newspapers, the news articles in three (_The Daily Prothom Alo_, _Bangladesh Pratidin_, _Kaler Kantho_) are composed in the Bengali language, and in the other three (_The Daily Star_, _The Daily Observer_, _New Age_) the articles are composed in the English language. There are 10,913 news articles in the Bengali language, and the remaining 4,652 news articles are in the English language. As the dataset has 4,652 articles in English and we wanted all articles in the same language for better parsing, we translated the English articles into Bengali via Python's _googletrans_ module. As a result, after translating these articles, all the articles are in the Bengali language. Then, we applied tokenization to split a string of text into smaller tokens. The news articles are split into sentences, and sentences are tokenized into words. Then, we applied noise removal (e.g., removing HTML tags, extra white spaces, special characters, and numbers) to clean up the text. Then, we removed the stopwords from the documents. As there is no built-in stopwords list for Bengali in NLTK, we manually created a stopword list and made it available online (Bengali-Stopwords: https://cutt.ly/2jXbDRB). Then, we expanded contractions. We set the minimum letter length to 6 and removed all the words that were below this minimum length. There are no good resources for stemming and lemmatization in the Bengali language, so we applied stemming and lemmatization to the tokens through our own process.
After removing all the stopwords and other noise, a total of 80,693 tokens remained. The Bengali language has a number of characteristic suffixes, and suffix removal was also performed with Python. We used the _Bangla_Steamer.Steamer_ library of Python to improve accuracy; however, it did not produce the expected results, as the library is effective only for a small number of Bengali words. To increase the accuracy of this 80,693-token lemmatized dictionary, we manually verified about 30,000 of the most frequent tokens. During this manual check of 30,000 words, we lemmatized tokens where needed and corrected incorrect and misspelled words, so many more words were manually lemmatized and corrected. We have published the verified Bengali words online under the title _"Modified Bengali Words"_ (https://cutt.ly/8jE6GIC) for further analysis. To compare the number of news articles published with the COVID-19 cases of Bangladesh, we collected an open-source dataset (https://www.worldometers.info/coronavirus/country/bangladesh/) of confirmed COVID-19 cases and deaths in Bangladesh from March 8 to May 19. We also collected another open-source dataset (https://data.humdata.org/dataset/district-wise-quarantine-for-covid-19) of confirmed cases by division and district of Bangladesh over the same period.

#### 3.2.1 Class Distribution in News Articles

After collecting the news articles, we first analyzed them manually. In this process, we extracted eight classes (shown in Table 2) and 19 sub-classes from the news articles. The extracted eight classes and the hierarchical organization of the sub-classes are shown in Figure 2. The distribution of the extracted classes over the news articles is shown in Figure 3, and the distribution of the extracted sub-classes in Figure 4.
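The rule-based suffix stripping described above can be sketched as follows. The suffix list here is a small illustrative set of common Bengali plural/genitive/classifier markers, not the paper's hand-verified rules:

```python
# Illustrative Bengali suffixes (plural/genitive/classifier markers);
# the paper's actual suffix rules were hand-verified and more extensive.
SUFFIXES = sorted(["গুলো", "দের", "ের", "টি", "টা"], key=len, reverse=True)

def strip_suffix(word, min_stem=2):
    """Remove one matching suffix, longest first, keeping a minimal stem length."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= min_stem:
            return word[: -len(suf)]
    return word

print(strip_suffix("বইগুলো"))  # "books" -> stem "বই" ("book")
```

Matching longest suffixes first avoids stripping a short suffix that is merely a prefix of a longer one; the `min_stem` guard prevents reducing a short word to an empty stem.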
Table 2: Eight Classes Extracted from the Collected News Articles
(1) Statistics, (2) Social Information, (3) COVID-19 Effects, (4) COVID-19 Responses and Preventive Measures, (5) Government Announcement and Responses, (6) Solidarity and Cooperation, (7) International Information, and (8) Health Organization Responses

Figure 2: Manually Extracted Classes and Sub-classes
Figure 3: Distribution of Manually Extracted Classes over News Articles
Figure 4: Distribution of Manually Extracted Sub-classes over News Articles

### 3.3 Spatio-Temporal Analysis

Time series (temporal) analysis of the newspaper articles is used to observe how coverage evolved during the pandemic. Time series decomposition considers a series as a combination of components along the time dimension: level, trend, seasonality, and noise. Level refers to the average value in the series, trend to the increasing or decreasing value, seasonality to the repeating short-term cycle, and noise to the random variation in the series. Decomposition provides a useful framework for reasoning about a time series and supports analysis and decision making. The additive model (Dagum, 2010) combines the components by addition:

$y(x)=l(x)+t(x)+s(x)+n(x)$ (1)

where $y(x)$ represents the additive model, $l(x)$ the observed level, $t(x)$ the trend, $s(x)$ the seasonality, and $n(x)$ the noise or residual in the signal $x$. This model is linear: changes over time are of a consistent magnitude, a linear trend is a straight line, and linear seasonality has a constant frequency and amplitude.
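The additive decomposition of Eq. (1) can be sketched in plain Python, assuming a centered moving average for the level/trend and per-phase means for the seasonality (a simplified version of what library routines such as statsmodels' `seasonal_decompose` compute):

```python
def decompose_additive(series, period):
    """Classical additive decomposition of Eq. (1): y = trend + seasonality + noise.
    Trend via a centered moving average; seasonality via per-phase means of the
    detrended series. A minimal sketch, not the library implementation."""
    n, half = len(series), period // 2
    trend = [None] * n
    for i in range(half, n - half):
        window = series[i - half : i + half + 1]
        trend[i] = sum(window) / len(window)
    detrended = [y - t if t is not None else None for y, t in zip(series, trend)]
    seasonal_means = []
    for phase in range(period):
        vals = [detrended[i] for i in range(phase, n, period) if detrended[i] is not None]
        seasonal_means.append(sum(vals) / len(vals) if vals else 0.0)
    seasonal = [seasonal_means[i % period] for i in range(n)]
    residual = [y - t - s if t is not None else None
                for y, t, s in zip(series, trend, seasonal)]
    return trend, seasonal, residual

# Hypothetical weekly counts: a flat level of 10 with period-3 seasonality.
y = [10, 12, 8, 10, 12, 8, 10, 12, 8]
trend, seasonal, residual = decompose_additive(y, period=3)
```

On this toy series the recovered trend is flat at 10, the seasonal component repeats (0, +2, -2), and the residual vanishes, as expected for noise-free data.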
On the other hand, a multiplicative model (Dagum, 2010) combines the components by multiplication:

$y(x)=l(x)\times t(x)\times s(x)\times n(x)$ (2)

where $y(x)$ represents the multiplicative model, $l(x)$ the observed level, $t(x)$ the trend, $s(x)$ the seasonality, and $n(x)$ the noise in the signal $x$. A multiplicative model is nonlinear: the series may grow or shrink at an increasing rate (e.g., exponentially or quadratically), a nonlinear trend is a curved line, and the seasonality may have varying frequency or amplitude. In this study, we decomposed the time series using the multiplicative model. For the spatial analysis, we used Tableau Software to compare the number of news articles published and the number of confirmed COVID-19 cases geographically.

### 3.4 Static Topic Analysis

Analyzing the topics of news articles published during a major incident or a pandemic like COVID-19 can help monitor the situation and understand public concerns, which is critical for government authorities and charity organizations when distributing resources and aid. However, in such a situation a large number of news articles are published in various newspapers; we observed that as the situation deteriorated during the pandemic, newspapers published a large amount of news on various topics. Manually analyzing the topics by reading so many articles is time-consuming and expensive. We utilized two unsupervised machine learning techniques: (a) LDA (Blei et al., 2003), a popular topic modeling technique, as static topic modeling to automatically find the topics of articles published in newspapers, and (b) dynamic topic modeling (Blei and Lafferty, 2006) to see how those topics evolve over time. LDA is a Bayesian probabilistic model that discovers topics and provides a topic distribution over documents and a word distribution over topics.
It has two phases: (a) the first phase models each document as a mixture of topics, and (b) the second phase models each topic as a mixture of words. LDA uses word co-occurrences within documents to discover topics in a document collection: words occurring in the same document are likely to come from the same topics, and documents containing similar words are likely to cover similar topics. In this research, the _Gensim_ package in Python was used to implement the LDA model, with every news article treated as a document. Before applying the LDA topic model, we manually assigned documents to general classes and sub-classes so that we could assess the quality of the fine-grained topics extracted by LDA. We then analyzed each LDA-extracted topic's temporal trend to see when the topic was discussed or published more in the newspapers, and each topic's spatial distribution to see what effect it had in a particular place, using Tableau software for the spatial analysis.

### 3.5 Dynamic Topic Analysis

Static topic modeling treats words as exchangeable and, indeed, treats documents as exchangeable as well. However, the assumption of exchangeable documents is unrealistic for collections that accumulate over time, for example tweets, news articles, and scholarly articles, whose content evolves. The topics in a newspaper article collection develop over time, and it is important to model the dynamics of the underlying topics explicitly. Dynamic topic modeling extends the static model to capture the evolution of topics in a sequentially organized collection of news articles. In this research, the articles are grouped by week, and we used the dynamic topic model to analyze the discussion topics and how they change over time.
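The word co-occurrence mechanism that LDA relies on (Section 3.4) can be illustrated with a minimal collapsed Gibbs sampler. This is a toy sketch of the principle only, not Gensim's actual (online variational Bayes) implementation, and the corpus is hypothetical:

```python
import random

def lda_gibbs(docs, num_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA. Returns the vocabulary and
    the topic-word count matrix; a sketch of the principle, not Gensim."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    wid = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    n_dk = [[0] * num_topics for _ in docs]       # doc-topic counts
    n_kw = [[0] * V for _ in range(num_topics)]   # topic-word counts
    n_k = [0] * num_topics                        # topic totals
    z = []                                        # topic assignment per token
    for d, doc in enumerate(docs):                # random initialization
        zs = []
        for w in doc:
            k = rng.randrange(num_topics)
            zs.append(k)
            n_dk[d][k] += 1; n_kw[k][wid[w]] += 1; n_k[k] += 1
        z.append(zs)
    for _ in range(iters):                        # Gibbs sweeps
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                       # remove current assignment
                n_dk[d][k] -= 1; n_kw[k][wid[w]] -= 1; n_k[k] -= 1
                weights = [(n_dk[d][t] + alpha) * (n_kw[t][wid[w]] + beta)
                           / (n_k[t] + V * beta) for t in range(num_topics)]
                k = rng.choices(range(num_topics), weights=weights)[0]
                z[d][i] = k                       # resample and restore counts
                n_dk[d][k] += 1; n_kw[k][wid[w]] += 1; n_k[k] += 1
    return vocab, n_kw

# Two hypothetical themes: health words vs. economy words.
docs = [["virus", "hospital", "virus"], ["economy", "market", "economy"],
        ["hospital", "virus"], ["market", "economy"]]
vocab, n_kw = lda_gibbs(docs, num_topics=2)
```

The sampling weight is the usual collapsed-Gibbs posterior factor: a topic is favored if it is already common in the document and if the word is already common in the topic, which is exactly the co-occurrence intuition stated above.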
### 3.6 Text Classification

We then built a text classifier to verify performance and to predict the class, sub-class, and topic of unknown (upcoming) news articles. Such classification is important when a specific category (class or group) of news needs to be monitored. We built Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) models in Python using the Keras deep learning library for text classification. An RNN is a special kind of neural network in which the output of the previous step is used as input to the current step. In a traditional neural network, all inputs and outputs are assumed to be independent of each other; however, such interdependence is an important property of text data. To predict the next word given the previous words, the model must remember the previous words, and RNNs solve this problem with the help of hidden layers. The key feature of an RNN is its _hidden state_, which remembers information about the sequence; the RNN is a feedback network that operates on this internal memory. Because the RNN applies the same function at each step and the output of the current step depends on the previous computation, the network is essentially recurrent: each output is copied and fed back into the network, and the prediction of the next word takes into account both the current input and the output of the previous step. Unlike feed-forward neural networks, RNNs can use their internal state (memory) to model the interdependence of the input elements, which makes them useful for text data, handwriting recognition, and speech recognition. The architecture of an unrolled recurrent neural network is shown in Figure 5.

Figure 5: Unrolled Recurrent Neural Network

In Figure 5, the model first receives $x_{0}$ from the input sequence and produces $h_{0}$, which is fed into the next step of the model along with $x_{1}$.
That is, both $h_{0}$ and $x_{1}$ become inputs to the next step; then $h_{1}$ and $x_{2}$ are input to the step after that, and so on. In this way, the RNN keeps summarizing the context in the hidden state while training and then uses the summarized hidden state to classify the sequence (Bashar et al., 2020b).

### 3.7 Sentiment Detection

We propose a hybrid neural network model based on a Convolutional Neural Network (CNN) and LSTM for sentiment analysis of Bengali texts. Hybrid models are used to solve various vision and NLP problems and improve on the performance of a single model. The following subsections provide an overview of the LSTM and CNN components. LSTM was described in subsection 3.6; in this research, we used a two-layer Bi-LSTM over word embeddings of the words in the news articles to predict sentiment. The other part of our proposed architecture is based on a CNN. CNNs have been very successful in various image processing and NLP tasks in recent years; they are powerful at learning local correlations and patterns in data. To use a CNN for text classification, the word vectors of a sentence (or paragraph) are stacked to form a two-dimensional matrix, and convolutional filters of different lengths are applied over windows of words to produce new feature representations. Max-pooling is then applied to these features, and the pooled features from the different filters are concatenated to form a hidden representation. Fully connected layers follow these representations for the final prediction. The architecture of our CNN-BiLSTM hybrid network model is shown in Figure 6.

Figure 6: The architecture of the CNN-BiLSTM Hybrid network for Sentiment Identification

We created a sequential model that includes an LSTM layer.
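Keras implements the recurrence internally; as an illustration of the unrolled computation in Figure 5, where each $h_t$ is produced from $x_t$ and $h_{t-1}$, a scalar vanilla RNN step (hypothetical weights, LSTM gating omitted) looks like:

```python
import math

def rnn_unroll(xs, w_x=0.5, w_h=0.8, b=0.0, h0=0.0):
    """Unrolled vanilla RNN over a scalar sequence:
        h_t = tanh(w_x * x_t + w_h * h_{t-1} + b).
    Each hidden state feeds the next step, summarizing the context so far;
    an LSTM adds gating on top of this same recurrence."""
    h, states = h0, []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)  # same function applied at every step
        states.append(h)
    return states

states = rnn_unroll([1.0, 0.0, 0.0])
# h_0 depends on x_0; later states still carry a decaying trace of x_0
# even though the later inputs are zero.
```

This is exactly the "copy the output back into the network" behavior described above: the only way information about $x_0$ reaches step 2 is through the hidden state.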
We built the model by adding layers sequentially. In the first layer, we applied a Conv1D with 200 filters for the CNN. After that, we applied two Bi-LSTMs as the second and third layers, each with a dropout of 0.5, followed by a dense network on the remaining layers. We used _Adam_ as the optimizer with finely tuned hyperparameters and applied _L2_ regularization to reduce overfitting as much as possible. We used only five epochs, as more epochs resulted in overfitting, and kept a batch size of 256, which worked very well.

## 4 Experimental Results

### 4.1 Volume Analysis

#### 4.1.1 Temporal Analysis of Volume

The time series volume analysis of the newspapers is shown in Figure 7. The figure has four plots: observed level, trend, seasonal, and noise (residual). The first plot, Figure 7a, shows the original volume, i.e., the number of COVID-19 related news articles at each time point. The curve began to rise in January, when COVID-19 cases were found in China and other countries, and rose sharply in early March, when the first instances of COVID-19 were identified in Bangladesh; it remained high thereafter with some fluctuations. The second plot, Figure 7b, shows the trend of the COVID-19 related news publication volume: COVID-19 related news started trending by the end of January, the trend increased significantly in early March, and it stayed high for the rest of the period with some fluctuations. The third plot, Figure 7c, shows the seasonal, cyclical change in the volume, and the fourth plot, Figure 7d, shows the residual or random variation in the volume.

(a) Observed (b) Trend (c) Seasonal (d) Residual
Figure 7: Time Series Decomposition

To see how the newspapers reacted during the COVID-19 pandemic, we tracked COVID-19 cases, deaths from COVID-19, and COVID-19 news volume in Figure 8. The figure shows that the newspapers were vigilant from the beginning of the pandemic.
Newspaper journalists increased COVID-19 related news coverage sharply as soon as COVID-19 cases were found in Bangladesh in early March, and the news volume continued increasing until the last quarter of March. This part of the news volume shows that the newspapers responded to COVID-19 from the very early stage of the pandemic and covered it heavily during the early period of the outbreak. The number of identified cases increased significantly by the second quarter of April and continued to increase; however, the number of COVID-19 related news articles did not increase during this time, and in some cases the article volume even decreased marginally. Possible reasons are: (a) Bangladesh is a developing country, and to survive at this point people had to think more about earning than about the pandemic, so pandemic news did not attract more attention and newspapers did not increase their COVID-19 related coverage; (b) other major incidents gained more attention than COVID-19; (c) the newspapers had already reached the space they allocate to pandemic news.

Figure 8: Comparison of Daily News Article Counts and Daily Cases (21 January 2020 - 19 May 2020)

#### 4.1.2 Spatial Analysis of Volume

The spatial distribution of the news articles is shown in Figure 9a. The articles were concentrated on the central part of Bangladesh, mainly Dhaka, Narayanganj, and Gazipur: more than 6000 COVID-19 related news articles published in Bangladeshi newspapers related to the central part of the country, and more than 2000 related to the southern part, mainly Chittagong and Cox's Bazar.

(a) Number of News Published Up to 19 May 2020 (b) Confirmed Cases Up to 19 May 2020
Figure 9: Spatial Analysis of News Article Volume

The spatial distribution of confirmed cases of COVID-19 is shown in Figure 9b. The central part of Bangladesh is the most affected area.
More than 10,000 COVID-19 patients were identified in Dhaka during this time, and outbreaks were reported in the surrounding areas, mainly Narayanganj and Gazipur. After the Dhaka division, the highest infection rate is in the southern part of Bangladesh, mainly Chittagong. Figure 9 shows a correlation between the number of confirmed COVID-19 cases in an area and the volume of published news related to that area. This means that automatic monitoring of news article volume can give a clear view of the severity of a pandemic or other major incident in a society.

Figure 10: District-wise Distribution of News Articles
Figure 11: Division-wise Distribution of News Articles
Figure 12: Geo-spatial and Temporal Distributions of News Articles Published over Time. The horizontal axis shows consecutive weeks in the duration and the vertical axis shows the volume (count of news articles).

The district-wise breakdown of published news articles with significant volume is shown in Figure 10 and the division-wise breakdown in Figure 11. The figures show that most of the published news was related to the Dhaka district and Dhaka division: more than 57% of the published news was related to the Dhaka division. After Dhaka, the most news was published on Chittagong, with more than 19% of the news related to the Chittagong division. The geospatial and temporal distributions of the newspaper articles are shown in Figure 12. The volume of published news articles related to each location changed significantly over time: it was lower before and at the beginning of the pandemic and increased significantly during the pandemic.

### 4.2 Topic Analysis

#### 4.2.1 Topic Extraction

For topic analysis with the LDA topic model, it is essential to determine the optimal number of topics. We gave much thought to choosing an appropriate number of LDA topics in order to examine the relationship between the COVID-19 emergency and the news articles.
We used the coherence score and the perplexity score to assess the choice of an appropriate number of topics. After preprocessing the data, we applied the LDA model to discover hidden topics in the news articles. To determine the optimal number of topics, we inspected the coherence score and perplexity score graphs shown in Figure 13: Figure 13a shows the coherence score graph and Figure 13b the perplexity score graph.

(a) Coherence Score (b) Perplexity Score
Figure 13: Determining the optimal number of topics

From the coherence score graph, we obtained the highest coherence score (0.5077) when the number of topics was set to 9 (Figure 13a), and from the perplexity score graph, the highest perplexity score (-7.59) when the number of topics was set to 24 (Figure 13b). Between the two, we chose the coherence score: the optimal number of topics by coherence is 9, which is very close to the 8 manually extracted classes shown in Table 2. We therefore set the number of topics for LDA topic extraction to 9. The word clouds of the top words (i.e., keywords) in each of the nine topics are shown in Figure 14, and the weights and appearance counts of the keywords in each topic in Figure 15. A visualization of the clusters of documents in a 2D space using the t-SNE (t-distributed stochastic neighbor embedding) algorithm is shown in Figure 16. Figure 17 displays the inter-topic distance map and the 30 most relevant keywords for each topic. The nine discovered topics are listed in Table 3.
Table 3: Nine Topics Discovered by LDA
(1) Economic Crisis and Incentives, (2) Epidemic Situation and Outbreak, (3) Vaccine and Treatment, (4) Demonstration for Wages and Relief, (5) Medical Care and Health Organization Responses, (6) Repatriation and International Situations, (7) Daily Infected, Death, and Recovered Cases, (8) Strategic Preparedness, and (9) Government Announcement and Responses

Figure 18 shows the topic frequency ratio in the document collection (news articles). Topic 8 (Strategic Preparedness) is the most frequent of the nine topics discovered by LDA, accounting for 26.3% of the total. The second most frequent is Topic 2 (Epidemic Situation and Outbreak) at 20.1%. Topic 9 (Government Announcement and Responses) and Topic 7 (Daily Infected, Death, and Recovered Cases), at 13.6% and 11.7% respectively, are the third and fourth most frequent topics. Topic 5 (Medical Care and Health Organization Responses), Topic 3 (Vaccine and Treatment), and Topic 4 (Demonstration for Wages and Relief) are in fifth, sixth, and seventh positions, accounting for 9.8%, 5.7%, and 5.2%, respectively. Finally, Topic 6 (Repatriation and International Situations) and Topic 1 (Economic Crisis and Incentives) are the least frequent topics, each with a proportion below 5%. By reviewing these topics and analyses, we can gain insight into the pandemic or any important incident in a society.
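The coherence score used to pick nine topics (Figure 13a) can be illustrated with a UMass-style measure computed from document co-occurrence counts. This is a hedged sketch with a toy corpus, not Gensim's `CoherenceModel`:

```python
import math
from itertools import combinations

def umass_coherence(top_words, docs):
    """UMass-style topic coherence: sum over ordered word pairs of
    log((D(w_i, w_j) + 1) / D(w_j)), where D counts documents containing
    the given words. Higher (closer to 0) means the topic's top words
    co-occur more often, i.e., a more coherent topic."""
    doc_sets = [set(d) for d in docs]
    def df(*words):
        return sum(1 for s in doc_sets if all(w in s for w in words))
    score = 0.0
    for w_j, w_i in combinations(top_words, 2):  # pairs in top-word order
        score += math.log((df(w_i, w_j) + 1) / df(w_j))
    return score

# Toy corpus: "virus"/"hospital" co-occur; "virus"/"economy" never do.
docs = [["virus", "hospital", "doctor"], ["virus", "hospital"], ["market", "economy"]]
coherent = umass_coherence(["virus", "hospital"], docs)
incoherent = umass_coherence(["virus", "economy"], docs)
```

Sweeping the number of topics and plotting this score against it is the selection procedure behind Figure 13a; here the co-occurring pair scores strictly higher than the non-co-occurring one.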
Figure 14: Word Clouds for the nine topics (panels a-i, one per topic)
Figure 15: Word Counts for the nine topics (panels a-i, one per topic)
Figure 16: t-SNE clustering chart
Figure 17: Interactive Topic Visualization
Figure 18: Proportion of Topic Frequency Distribution in the News Collection
Figure 19: Topic Evolution of the nine topics (panels a-i, one per topic)

#### 4.2.2 Dynamic Topic Modeling: Temporal Trends of Topics

To show the evolution of the topics over time during the pandemic, we used dynamic topic modeling (Blei and Lafferty, 2006). Figure 19 shows the evolution of the nine topics over the weeks of the pandemic: how the popularity of each topic and its top words (i.e., keywords) changed over time. The overall temporal trend of the topics is shown in Figure 20.

Topic 1: "Economic Crisis and Incentives" climbed from early March and stopped rising at the end of March. The curve then declined until the beginning of April, rose slowly in the middle of April, declined again until the beginning of May, and finally reached a peak in the middle of May.

Topic 2: "Epidemic Situation and Outbreak" climbed from the beginning of March and stopped rising at the end of April. The curve then steadily declined until the beginning of April and rose slowly to the middle of April.
It then decreased again until the beginning of May and finally rose to a peak in the middle of May.

Topic 3: "Vaccine and Treatment" climbed from the end of February and stopped growing at the beginning of March. The curve then steadily declined until the middle of March, rose again to a peak at the end of March, declined until the middle of May, and fluctuated after that until the end.

Topic 4: "Demonstration for Wages and Relief" climbed from the beginning of March and stopped rising in the middle of March. The curve then fluctuated between the middle of March and the middle of April, reached a peak in the middle of April, steadily declined until the beginning of May, and rose slowly in the middle of May.

Topic 5: "Medical Care and Health Organization Responses" climbed from the beginning of February and stopped rising in early March. The curve then started to rise again from the beginning of March and reached its peak in the middle of April, after which it gradually declined.

Topic 6: "Repatriation and International Situations" climbed from the beginning of February and stopped rising early in March. The curve then steadily declined, rose again from early March to the middle of March, stayed steady for a while, and peaked at the end of March. It then steadily declined until the beginning of April and fluctuated until the end.

Topic 7: "Daily Infected, Death, and Recovered Cases" climbed from the beginning of February. As the numbers of infected and death cases grew every day, the curve kept rising, reaching its peak at the end of April.
The curve then declined until the beginning of May and started to rise again toward the end of the period.

Topic 8: "Strategic Preparedness" climbed from the beginning of March and stopped rising at the end of March. After declining, the curve started to rise again from the end of March, dipped briefly, rose once more, and peaked. It then steadily declined until the beginning of May and rose slowly in the middle of May.

Topic 9: "Government Announcement and Responses" climbed from the beginning of March and reached a peak in the middle of March. The curve then declined until the middle of April, fluctuated for a while, steadily declined until the end of April, and finally rose slowly until the end.

Figure 20: Overall Temporal Trends of Topics (panels a-i, one per topic)

#### 4.2.3 Spatial Distribution of Topics

This subsection details the experimental results for the spatial distribution of the topics.

Topic 1: "Economic Crisis and Incentives" is mainly concentrated on the Dhaka division, as shown in Figure 21a. Most people in the Dhaka division lost their jobs in that period, and the same happened in Chittagong to a smaller extent. The government and various agencies provided relief and incentives to the victims; the areas that received the most relief are the Dhaka and Chittagong divisions.

Topic 2: "Epidemic Situation and Outbreak" mainly focused on the central and southern parts of Bangladesh, as shown in Figure 21b. Dhaka was the most affected city in Bangladesh at that particular time.
Most COVID-19 infected patients were identified in Dhaka, more deaths were reported there, and the situation in Dhaka was much worse, with a much higher prevalence, than in other districts and divisions at the time. Apart from Dhaka, the districts of Narayanganj, Gazipur, and Chittagong were also heavily affected.

Topic 3: "Vaccine and Treatment" mainly focused on the Dhaka, Chittagong, Gazipur, and Narayanganj areas, as shown in Figure 21c. Since the prevalence of COVID-19 was higher in Dhaka, its adjoining districts Narayanganj and Gazipur, and Chittagong, treatment was sought there comparatively more than in other districts. The COVID-19 vaccine was also being studied in Dhaka.

Topic 4: "Demonstration for Wages and Relief" is spread across the whole country, as shown in Figure 21d. The situation was dire everywhere due to COVID-19: day laborers lost their jobs and became destitute, and they had to leave their homes and take to the streets to provide food for their families. In every division (Dhaka, Chittagong, Rajshahi, Barisal, Khulna, Mymensingh, Sylhet, and Rangpur), people took to the streets to protest for survival.

Topic 5: "Medical Care and Health Organization Responses" is also spread across the whole country, like Topic 4, as shown in Figure 21e. The state of the health system across the country was deplorable, and health organizations were in a very critical situation.

Topic 6: "Repatriation and International Situations" is mainly concentrated on China, the USA, Italy, Russia, and other countries, as shown in Figure 21f. This topic covers the situation in foreign countries and the immigrants who wanted to return to Bangladesh.

Topic 7: "Daily Infected, Death, and Recovered Cases" mainly focused on the central region, as shown in Figure 21g. The Dhaka division had the highest numbers of COVID-19 infections and deaths.
The Dhaka division, which includes Dhaka city, Narayanganj, and Gazipur, had the most cases; after Dhaka, most cases were found in the Chittagong division.

Topic 8: "Strategic Preparedness" is spread across the whole country, as shown in Figure 21h. Lockdown, isolation, home quarantine, and social distancing were imposed across the country.

Topic 9: "Government Announcement and Responses" is shown in Figure 21i; it was prominent almost everywhere, especially in Dhaka.

Figure 21: Spatial Distribution of Topics (panels a-i, one per topic)

### 4.3 Automatic Classification of News Articles

We built a text classifier to automatically categorize upcoming news articles into classes, sub-classes, and topics. We implemented an LSTM recurrent neural network model in Python using the Keras (https://keras.io/) deep learning library for this classification. We split our data into 80% for training, 10% for validation, and 10% for testing. In the data preparation, we first cleaned the text by removing unnecessary characters and stopwords. After cleaning the text, we tokenized the data using the Keras Tokenizer and built a word index from it, then vectorized the Bengali text by turning each text into a vector. We limited the dataset to the top 50,000 words and set the maximum number of words in each article to 1000. We then padded and truncated the data so that the input sequences have a uniform length for modeling. After cleaning the data, we selected pre-trained word embeddings (Bengali Word Embeddings: https://cutt.ly/KjXmEio). A word embedding maps each word in the vocabulary to a vector of real numbers; we used these pre-trained embeddings in the embedding layer of our LSTM model. In our classification model, the first layer is the embedding layer, which uses 300-dimensional vectors to represent each word. The second layer is an LSTM layer with 100 hidden units.
The final layer is a dense layer, also known as the output layer. This final layer has a length of 8, 19, and 9 for the classes, sub-classes, and LDA-discovered topics, respectively. _Softmax_ is used as the activation function for multi-class classification in the final layer. We used _categorical cross-entropy_ as the loss function, _Adam_ as the optimizer, and a batch size of 32, with only five epochs, as this worked quite well. The experimental results for Precision, Accuracy, F1 score, and Recall are given in Table 4: 47.80%, 44.39%, 45.13%, and 42.80%, respectively, for the eight classes; 47.33%, 38.51%, 37.20%, and 30.82% for the 19 sub-classes; and 81.37%, 79.55%, 79.67%, and 78.10% for the 9 LDA topics.

Criteria | Classes | Sub-classes | Topics
---|---|---|---
Precision | 47.80% | 47.33% | 81.37%
Accuracy | 44.39% | 38.51% | 79.55%
F1 Score | 45.13% | 37.20% | 79.67%
Recall | 42.80% | 30.82% | 78.10%

Table 4: Performance in Classes, Sub-classes, and Topics

### 4.4 Sentiment Analysis

We analyzed the sentiment of the COVID-19 related news articles to see how positively and negatively society was affected by COVID-19 (or any big incident), and we evaluated the effectiveness of a hybrid CNN-BiLSTM model in identifying sentiment in Bengali texts. First, we manually labeled each news article as carrying positive or negative sentiment; we then trained the CNN-BiLSTM to detect the sentiment of upcoming new articles. After labeling the articles, we visualized the results in Figure 22, which shows that there were more negative-sentiment news articles than positive ones.
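The Precision, Accuracy, F1 score, and Recall figures reported in Table 4 are standard multi-class metrics. A minimal sketch of their macro-averaged computation over hypothetical predictions (scikit-learn's `classification_report` provides the same) is:

```python
def macro_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1 over all classes."""
    labels = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec); recalls.append(rec); f1s.append(f1)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    n = len(labels)
    return acc, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Hypothetical predictions over three of the manually extracted classes.
y_true = ["Statistics", "Effects", "Effects", "Responses", "Statistics"]
y_pred = ["Statistics", "Effects", "Responses", "Responses", "Effects"]
acc, prec, rec, f1 = macro_metrics(y_true, y_pred)
```

Macro averaging gives each class equal weight regardless of its frequency, which matters here because the class distribution over the news articles is imbalanced.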
Figure 22: Proportion of Positive and Negative Sentiments Figure 23: Spatial and Temporal Distribution of the Number of Positive and Negative Sentiment News Articles We split our labeled news collection into 80% for training, 10% for validation, and 10% for testing for sentiment analysis. Using the Keras Tokenizer, we tokenized the data after cleaning the dataset. Then we built a word index and vectorized each text. We restricted the dataset to the top 60,000 words and set the maximum number of words in each article at 200 using feature selection. We added padding and truncated the data to make the input sequences uniform and of the same length for modeling. After data preparation, we built our model. The first layer of the model is the embedding layer; we set the embedding dimension to 300 for embedding each word. In the second layer, we used a Conv1D layer with 200 filters for the CNN. Then in the third and fourth layers, we applied two Bi-LSTM layers with a dropout of 0.5. In the final layer, we used a dense network. We used _Adam_ as the optimizer with finely tuned hyperparameters and applied _L2_ regularization to reduce overfitting. We kept the batch size at 256, as it worked quite well. We used only five epochs, which gave us reasonably good results. After calculating the Precision, Accuracy, F1 score, and Recall, the performance of the sentiment classification is presented in Table 5. The Precision, Accuracy, F1 score, and Recall are 74.89%, 74.94%, 74.88%, and 74.89%, respectively, for sentiment analysis.

Criteria | Performance
---|---
Precision | 74.89%
Accuracy | 74.94%
F1 Score | 74.88%
Recall | 74.89%

Table 5: Performance of Sentiment Analysis

Figure 23 shows the spatial and temporal distribution of the number of positive and negative sentiment news articles during the pandemic. It shows how sentiments changed over the eight divisions for 20 weeks. ## 5 Conclusions This study took an in-depth look at Bangladeshi daily newspaper reports from the onset of the COVID-19 pandemic.
After collecting the news articles, we investigated and manually classified them into eight classes and nineteen sub-classes. We used LDA to extract nine COVID-19 related topics from the news articles. We used the dynamic topic model to see the evolution of topics over time. We also provided the spatial distribution of the topics. We created a text classifier that automatically sorts incoming articles into classes, sub-classes, and topics. We also carried out a spatial and temporal analysis of news article volume. In the temporal analysis of volume, we decomposed the time series into four components: observed, trend, seasonal, and residual. Besides, we analyzed daily news article counts and daily infected and death cases in the temporal and spatial dimensions. Finally, we analyzed the sentiment of the news articles related to COVID-19, using a CNN-BiLSTM architecture, to understand the positive and negative impacts of events and initiatives during the COVID-19 pandemic. During a major social incident, continuous analysis of newspaper articles is essential to ensure public well-being, maintain social consensus, and save lives. The automatic analysis techniques and the analysis outcomes presented in this study will help government and crisis-response personnel improve public understanding and assessment, accelerate emergency response, and support post-incident administration.
# Deformations and rigidity in varieties of Lie algebras Josefina Barrionuevo ${\dagger}$ and Paulo Tirao ${\dagger}$ ${\ddagger}$ (appendix by Diego Sulca ${\dagger}$) ${\dagger}$ CIEM-FaMAF, CONICET- Universidad Nacional de Córdoba Ciudad Universitaria, 5000 Córdoba, Argentina ${\ddagger}$ Guangdong Technion Israel Institute of Technology 241 Daxue Road, Jinping District, Shantou, Guangdong Province, China (Date: July, 2022) ###### Abstract. We present a novel construction of linear deformations for Lie algebras and use it to prove the non-rigidity of several classes of Lie algebras in different varieties. In particular, we address the problem of $k$-rigidity for $k$-step nilpotent Lie algebras and $k$-solvable Lie algebras. We show that Lie algebras with an abelian factor are not rigid, even in the case of a 1-dimensional abelian factor. This holds in the more restricted setting of $k$-rigidity. We also prove that the free $k$-step nilpotent Lie algebras are $k$-rigid but not $(k+1)$-rigid. ###### Key words and phrases: Lie algebra varieties, deformations, rigidity. MSC 2020: Primary 17B30; Secondary 17B56; Tertiary 17B99. ## 1\. Introduction Let $\mathbb{K}$ be an algebraically closed field of characteristic zero. The variety $\mathcal{L}_{n}$ of $n$-dimensional Lie algebras over $\mathbb{K}$ is the affine algebraic variety of all antisymmetric bilinear maps $\mu:\mathbb{K}^{n}\times\mathbb{K}^{n}\to\mathbb{K}^{n}$ which satisfy the Jacobi identity, called Lie brackets over $\mathbb{K}$. The orbits of the natural action of $\operatorname{GL}(\mathbb{K}^{n})$ by change of basis are the isomorphism classes of Lie brackets. A Lie bracket $\mu$ is called _rigid_ if its orbit is Zariski open; equivalently, $\mu$ is not rigid if and only if every Zariski neighborhood of it contains a non-isomorphic Lie bracket.
Determining all $n$-dimensional rigid Lie algebras is an enormous and highly relevant problem that is out of reach today. There are finitely many of them, and the closure of the orbit of a rigid bracket is an irreducible component of $\mathcal{L}_{n}$. Different problems concerning the variety of Lie algebras have been addressed quite extensively for a long time. Determining their irreducible components and their rigid points, as well as understanding degenerations and deformations, have been among the goals of many authors. The reader may look at the following short list, which is far from exhaustive in any sense, and the references therein: [BS, C, GA, GH1, GT1, GT2, S, TV, V]. The general picture becomes even more interesting if one looks at different subvarieties of $\mathcal{L}_{n}$ and addresses the same problems there. We consider with special interest the subvariety $\mathcal{N}_{n}$ of $n$-dimensional nilpotent Lie algebras and the descending chain of subvarieties $\mathcal{N}_{n,k}$, for $k=n-1,\dots,1$, of $n$-dimensional nilpotent Lie algebras with nilpotency index less than or equal to $k$. Notice that $\mathcal{N}_{n,n-1}=\mathcal{N}_{n}$ and that the complement of $\mathcal{N}_{n,n-2}$ inside $\mathcal{N}_{n}$ is the open subvariety of $n$-dimensional filiform Lie algebras introduced and studied by M. Vergne [V]. We also consider the subvariety $\mathcal{S}_{n}$ of $n$-dimensional solvable Lie algebras and the corresponding chain $\mathcal{S}_{n,k}$ of $n$-dimensional solvable Lie algebras with solvability index less than or equal to $k$. A classical theorem by Nijenhuis and Richardson states that if the second Chevalley-Eilenberg adjoint cohomology group vanishes ($H^{2}(\mu,\mu)=0$), then $\mu$ is rigid in $\mathcal{L}_{n}$; the first example showing that the converse does not hold was provided in [R]. This result can be adapted and extended to other varieties of Lie algebras. A general strategy is discussed by Remm in [RE, Sections 2.2 and 2.3].
We include a self-contained proof of this fact in the Appendix and make explicit the corresponding statements for the varieties $\mathcal{N}_{n,k}$ and $\mathcal{S}_{n,k}$. For $\mu\in\mathcal{N}_{n,k}$, the vanishing of the space $\displaystyle H^{2}_{k\textrm{-nil}}(\mu,\mu)$ $\displaystyle=\frac{Z_{N_{n,k}}^{2}(\mu,\mu)}{B^{2}(\mu,\mu)}$ $\displaystyle=\frac{Ker(\delta)\bigcap Ker(\eta_{k})}{Im(\delta^{1})},$ with $\delta:\Lambda^{2}({\mathbb{K}^{n}}^{*})\rightarrow\Lambda^{3}({\mathbb{K}^{n}}^{*})$ given by: $\delta\omega(x,y,z):=\circlearrowleft\mu(\omega(x,y),z)+\circlearrowleft\omega(\mu(x,y),z),$ $\delta^{1}:\Lambda^{1}({\mathbb{K}^{n}}^{*})\rightarrow\Lambda^{2}({\mathbb{K}^{n}}^{*})$ given by: $\delta^{1}(f)(x,y)=\mu(f(x),y)+\mu(x,f(y))-f(\mu(x,y)),$ and $\eta_{k}$ given by: $\eta_{k}(\omega)=\sum_{j=0}^{k-1}\mu^{k-1-j}\circ\omega\circ\mu^{j},$ implies the rigidity of $\mu$ in $\mathcal{N}_{n,k}$. This description of $H^{2}_{k\textrm{-nil}}(\mu,\mu)$ was given in [BCC] in a slightly different form and in a differential geometry context. In [GR1] the instances for $k=2$ and $k=3$ were also discussed. Semisimple Lie algebras are rigid by Whitehead’s Lemma, and a semisimple Lie algebra $\mathfrak{g}$ plus a 1-dimensional abelian factor $\mathfrak{a}$ is also rigid, since its second cohomology group vanishes. This fact follows from the Hochschild-Serre spectral sequence associated with the ideal $\mathfrak{g}$ of $\mathfrak{g}\oplus\mathfrak{a}$. Also, Borel subalgebras of semisimple Lie algebras have null second cohomology group [LL] and hence are rigid. A natural and very interesting open question is: * • Are there nilpotent rigid Lie algebras in $\mathcal{L}_{n}$? This question, known since 1970 as Vergne’s conjecture, has not been answered yet. We believe that the answer is _no_. This paper, in particular, adds support to our beliefs. 
The following stronger versions of this question are also challenging: * • Are there $k$-step nilpotent rigid Lie algebras in $\mathcal{N}_{n,k}$? * • Are there $k$-step nilpotent rigid Lie algebras in $\mathcal{N}_{n,k+1}$? The 3-dimensional Heisenberg Lie algebra is rigid in $\mathcal{N}_{3,2}=\mathcal{N}_{3}$. Besides this small-dimensional example, we prove that the free $k$-step nilpotent Lie algebra on $m$ generators, $L_{(k)}(m)$, is rigid in $\mathcal{N}_{n,k}$, where $n=\dim L_{(k)}(m)$. The result follows by proving that their second nil-cohomology vanishes. In an analogous way one can show that the $(2n+1)$-dimensional Heisenberg Lie algebra is rigid in $\mathcal{N}_{2n+1,2}$, something that was proved in [GR1] and [A] by different means. Regarding the second question, to our knowledge and based on existing examples, the answer is (in general) _no_. In this paper, we provide further classes of examples by constructing non-trivial linear deformations. In particular, we show that the free $k$-step nilpotent Lie algebra $L_{(k)}(m)$ is not rigid in $\mathcal{N}_{n,k+1}$, where $n=\dim L_{(k)}(m)$, and similarly the $(2n+1)$-dimensional Heisenberg Lie algebra is not rigid in $\mathcal{N}_{2n+1,3}$. The construction of non-trivial deformations is done by using a novel construction of linear deformations that we present in Section 3. With this tool, we tackle the rigidity problem for Lie algebras with an abelian factor, showing that in general they are non-rigid. More precisely, all $k$-solvable Lie algebras plus an abelian factor are non-rigid in the corresponding variety $\mathcal{S}_{n,k}$, and all $k$-nilpotent Lie algebras plus an abelian factor are non-rigid in the corresponding variety $\mathcal{N}_{n,k}$, with the only exception of the 3-dimensional Heisenberg Lie algebra plus a 1-dimensional abelian factor. ## 2\. Some preliminaries In this paper, $n$ is a fixed natural number and $\mathbb{K}$ an algebraically closed field of characteristic zero.
We consider $n$-dimensional Lie $\mathbb{K}$-algebras. Throughout the paper, we will refer to a Lie algebra $\mathfrak{g}$ or to its Lie bracket $\mu$ indistinctly, according to which notation fits the exposition better. We shall mainly use $\mu$, and use $\mathfrak{g}$ when the underlying vector space is relevant, for instance, to refer to a subalgebra. We may also write $(\mathfrak{g},\mu)$. ### 2.1. Multilinear maps Let $V$ be a vector space and let $C^{i}(V)=\operatorname{Hom}(V^{\otimes i},V)$ be the space of $i$-multilinear maps from $V\times\cdots\times V$ to $V$. Given $\varphi\in C^{i}(V)$ and $\psi\in C^{j}(V)$ let $\varphi\circ\psi\in C^{i+j-1}(V)$ be the multilinear map defined by $\varphi\circ\psi(x_{1},\dots,x_{i+j-1})=\varphi(\psi(x_{1},\dots,x_{j}),x_{j+1},\dots,x_{i+j-1}).$ We also define inductively $\varphi^{k}=\varphi\circ\varphi^{k-1}.$ Notice that if $f:V\rightarrow V$ is linear and $\varphi\in C^{i}(V)$, then $f\circ\varphi\in C^{i}(V)$. In particular, if $\mu\in C^{2}(V)$, then $\mu\circ\mu$ is the trilinear map given by $\mu\circ\mu(x,y,z)=\mu(\mu(x,y),z).$ For a trilinear map $\varphi$ we write $\circlearrowleft\varphi(x,y,z)=\varphi(x,y,z)+\varphi(y,z,x)+\varphi(z,x,y).$ Hence, for a bilinear map $\mu$, we have that $\circlearrowleft\mu\circ\mu(x,y,z)=\mu(\mu(x,y),z)+\mu(\mu(y,z),x)+\mu(\mu(z,x),y).$ So the Jacobi identity for $\mu$ is $\circlearrowleft\mu\circ\mu=0.$ Given $f,g,h\in V^{*}$, the dual linear space of $V$, $f\cdot g:V\times V\rightarrow\mathbb{K}$ and $f\cdot g\cdot h:V\times V\times V\rightarrow\mathbb{K}$ are the bilinear and trilinear maps defined by $f\cdot g\ (x,y)=f(x)g(y)\quad\text{and}\quad f\cdot g\cdot h\ (x,y,z)=f(x)g(y)h(z).$ A direct computation yields the following result that we shall use in the next section. ###### Lemma 2.1.
Given $f,g,h\in V^{*}$, it holds that $\circlearrowleft(f\cdot g-g\cdot f)\cdot f=0$, $\circlearrowleft(f\cdot g-g\cdot f)\cdot g=0$ and $\circlearrowleft(f\cdot g-g\cdot f)\cdot h=-\circlearrowleft(f\cdot h-h\cdot f)\cdot g$. ### 2.2. Varieties of Lie algebras Let $V=\mathbb{K}^{n}$. The variety $\mathcal{L}_{n}$ of $n$-dimensional Lie $\mathbb{K}$-algebras is the affine algebraic variety of all the antisymmetric maps $\mu\in C^{2}(V)$ satisfying $\circlearrowleft\mu\circ\mu=0$. The subvariety $\mathcal{N}_{n}$ of $n$-dimensional nilpotent Lie $\mathbb{K}$-algebras consists of those Lie brackets $\mu$ such that $\mu^{j}=0$, for some $j\geq 1$. A Lie algebra $\mu$ is said to be $k$-step nilpotent, for $k\geq 2$, if $\mu^{k}=0$ and $\mu^{k-1}\neq 0$. We consider the abelian Lie algebra $\mu=0$ as 1-step nilpotent. The subvariety $\mathcal{N}_{n,k}$ of $n$-dimensional nilpotent Lie algebras that are at most $k$-step nilpotent then consists of all Lie brackets $\mu$ such that $\mu^{k}=0$. Notice that $\mathcal{N}_{n,k}\subset\mathcal{N}_{n,k+1}$ and that $\mathcal{N}_{n,n-1}=\mathcal{N}_{n}$. The subvarieties of solvable and $k$-step solvable Lie algebras are defined analogously by considering $\mu^{(k)}$ instead of $\mu^{k}$, where $\mu^{(1)}=\mu$ and for $k\geq 2$ $\mu^{(k)}=\mu\big{(}\mu^{(k-1)},\mu^{(k-1)}\big{)}.$ #### _Orbits and rigidity_ The orbit $\mathcal{O}(\mu)$ in $\mathcal{L}_{n}$ of a Lie bracket $\mu$ under the action of $\operatorname{GL}_{n}(\mathbb{K})$ given by _change of basis_ is the isomorphism class of $\mu$. Clearly, if $\mu$ is in any of the subvarieties described above, its orbit $\mathcal{O}(\mu)$ is contained in it. A Lie algebra $\mu$ in a subvariety $\mathcal{L}^{\prime}$ of these is said to be rigid in $\mathcal{L}^{\prime}$ if its orbit is open in $\mathcal{L}^{\prime}$.
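Lemma 2.1 is a pointwise identity between scalars, so it can be sanity-checked numerically on random data. Note that, with the definitions of Section 2.1, the third identity holds with a minus sign, $\circlearrowleft(f\cdot g-g\cdot f)\cdot h=-\circlearrowleft(f\cdot h-h\cdot f)\cdot g$, which is the form that the cancellation of $S_{24}$ and $S_{35}$ in the proof of Theorem 3.1 requires. A quick check (code and helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
fv, gv, hv = rng.standard_normal((3, n))  # f, g, h as covectors: f(x) = fv @ x

def cyc(phi, x, y, z):
    """Cyclic sum of a scalar-valued trilinear map."""
    return phi(x, y, z) + phi(y, z, x) + phi(z, x, y)

def wedge_dot(av, bv, cv):
    """The trilinear map (a.b - b.a).c of Section 2.1."""
    return lambda x, y, z: ((av @ x) * (bv @ y) - (bv @ x) * (av @ y)) * (cv @ z)

x, y, z = rng.standard_normal((3, n))
lhs1 = cyc(wedge_dot(fv, gv, fv), x, y, z)   # cyclic sum of (f.g - g.f).f
lhs2 = cyc(wedge_dot(fv, gv, gv), x, y, z)   # cyclic sum of (f.g - g.f).g
# third identity, with the sign made explicit:
# cyclic (f.g - g.f).h  +  cyclic (f.h - h.f).g  =  0
lhs3 = cyc(wedge_dot(fv, gv, hv), x, y, z) + cyc(wedge_dot(fv, hv, gv), x, y, z)
```

All three quantities vanish up to floating-point error, for any choice of covectors and arguments.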
#### _$k$ -rigidity for nilpotent Lie brackets_ Given a nilpotent Lie bracket $\mu$ of $\mathbb{K}^{n}$, we say that it is _$k$ -rigid_ if it is rigid in $\mathcal{N}_{n,k}$. Given a $k$-step nilpotent Lie bracket $\mu$ of $\mathbb{K}^{n}$ we may ask for the smallest $j$, $j\geq k$, such that $\mu$ is not rigid in $\mathcal{N}_{n,j}$, if such a $j$ exists. According to the available evidence, in general the smallest such $j$ is just $k+1$. That is, there is no known $k$-step nilpotent Lie algebra which is $(k+1)$-rigid. This paper adds more evidence to the negative answer of the question: * • Are there $k$-step nilpotent Lie brackets which are $(k+1)$-rigid? To prove $k$-rigidity, we will use Corollary 7.4 of the Appendix. To prove non-$k$-rigidity, we will construct non-trivial linear deformations. ### 2.3. Linear deformations For a general presentation of the theory of deformations, including Lie algebras, the reader may refer to [MM]. Given a Lie bracket $\mu$ of $\mathbb{K}^{n}$, we shall consider _linear deformations_ of $\mu$, that is, deformations of the form $\mu_{t}=\mu+t\varphi$, where $t$ is a parameter in $\mathbb{K}$. It is straightforward to verify that $\mu_{t}$ is a Lie algebra for all $t$ if and only if $\varphi$ is a Lie bracket and a 2-cocycle for $\mu$, that is $\circlearrowleft\left(\mu\circ\varphi+\varphi\circ\mu\right)=0$. #### _Grunewald-O’Halloran construction_ Given a Lie algebra $\mathfrak{g}$ with bracket $\mu$, an ideal $\mathfrak{h}$ of codimension 1 and a derivation $D$ of $\mathfrak{h}$, Grunewald and O’Halloran [GH2] considered the linear deformation of $\mu$ $\mu_{t}=\mu+t\varphi_{D},$ where $\varphi_{D}$ is the (Dixmier) 2-cocycle defined by $\varphi_{D}(x,h)=D(h)=-\varphi_{D}(h,x),\quad\varphi_{D}(h,h^{\prime})=0,$ for $h,h^{\prime}\in\mathfrak{h}$ and $x$ a fixed element outside of $\mathfrak{h}$. Notice that $\mathfrak{h}$ remains an ideal of $\mu_{t}$, for all $t$. ## 3\.
A novel construction of linear deformations In what follows, we construct linear deformations of a given Lie algebra $\mathfrak{g}$ with Lie bracket $\mu$ starting from a subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ of codimension 2. Fix $a_{1},a_{2}\in\mathfrak{g}$ such that $\langle a_{1},a_{2}\rangle$ is a complementary subspace to $\mathfrak{h}$, i.e. $\mathfrak{g}=\langle a_{1},a_{2}\rangle\oplus\mathfrak{h}.$ For a basis $\\{h_{1},\dots,h_{n-2}\\}$ of $\mathfrak{h}$, $B=\\{a_{1},a_{2},h_{1},\dots,h_{n-2}\\}$ is a basis of $\mathfrak{g}$. Let $B^{*}=\\{a_{1}^{*},a_{2}^{*},h_{1}^{*},\dots,h_{n-2}^{*}\\}$ be the dual basis of $B$. Hence, given $x\in\mathfrak{g}$, there is a unique $x_{\mathfrak{h}}\in\mathfrak{h}$ such that $x=a_{1}^{*}(x)a_{1}+a_{2}^{*}(x)a_{2}+x_{\mathfrak{h}}.$ We denote the linear map $x\mapsto x_{\mathfrak{h}}$ by $\pi_{\mathfrak{h}}$. By $\operatorname{ad}_{x}$ we denote the adjoint of $x\in\mathfrak{g}$, $\operatorname{ad}_{x}:\mathfrak{g}\to\mathfrak{g}$. For $h\in\mathfrak{h}$, we denote by $\operatorname{ad}_{h}^{\mathfrak{h}}$ the adjoint of $h$ restricted to $\mathfrak{h}$, $\operatorname{ad}_{h}^{\mathfrak{h}}:\mathfrak{h}\to\mathfrak{h}$. In addition, for $y\in\mathfrak{g}$, we shall consider the antisymmetric bilinear map $\varphi=a_{1}^{*}\wedge a_{2}^{*}\otimes y$. Recall that, for all $u,v\in\mathfrak{g}$, $\displaystyle(a_{1}^{*}\wedge a_{2}^{*}\otimes y)(u,v)$ $\displaystyle=$ $\displaystyle(a_{1}^{*}\cdot a_{2}^{*}-a_{2}^{*}\cdot a_{1}^{*})(u,v)y$ $\displaystyle=$ $\displaystyle(a_{1}^{*}(u)a_{2}^{*}(v)-a_{2}^{*}(u)a_{1}^{*}(v))y.$ In particular $\varphi(\mathfrak{h},\mathfrak{g})=0$. Finally, notice that $\varphi$ is isomorphic to a $3$-dimensional Heisenberg Lie algebra plus an $(n-3)$-dimensional abelian Lie algebra, so that $\varphi$ is a Lie bracket. ###### Theorem 3.1. Let $(\mathfrak{g},\mu)$ be a Lie algebra and $\mathfrak{h}\subseteq\mathfrak{g}$ a subalgebra of codimension 2.
Fix $a_{1},a_{2}\in\mathfrak{g}$ such that $\mathfrak{g}=\langle a_{1},a_{2}\rangle\oplus\mathfrak{h}$ and $a_{1}^{*}\circ\operatorname{ad}_{a_{1}}+a_{2}^{*}\circ\operatorname{ad}_{a_{2}}=0$ in $\mathfrak{h}$. Then for any $y\in Z_{\mathfrak{g}}(\mathfrak{h})$, $\mu_{t}=\mu+t(a_{1}^{*}\wedge a_{2}^{*}\otimes y)$ is a linear deformation of $\mu$. ###### Proof. Let us denote $\mu=[\ ,\ ]$ and $a_{1}^{*}\wedge a_{2}^{*}\otimes y=\varphi$. Since $\varphi$ is already a Lie bracket, it remains to show that $\varphi$ is a 2-cocycle for $\mu$, that is $\circlearrowleft\left([\ ,\ ]\circ\varphi+\varphi\circ[\ ,\ ]\right)=0.$ We show that moreover $\circlearrowleft[\ ,\ ]\circ\varphi=0$ and $\circlearrowleft\varphi\circ[\ ,\ ]=0$. On the one hand, let $u,v,w\in\mathfrak{g}$. $\left([\ ,\ ]\circ\varphi\right)(u,v,w)=[\varphi(u,v),w]$ and $\displaystyle[\varphi(u,v),w]$ $\displaystyle=$ $\displaystyle\left[\ (a_{1}^{*}a_{2}^{*}-a_{2}^{*}a_{1}^{*})(u,v)\ y\ ,\ a_{1}^{*}(w)a_{1}+a_{2}^{*}(w)a_{2}+w_{\mathfrak{h}}\ \right]$ $\displaystyle=$ $\displaystyle((a_{1}^{*}a_{2}^{*}-a_{2}^{*}a_{1}^{*})a_{1}^{*})(u,v,w)\ [y,a_{1}]$ $\displaystyle\qquad+((a_{1}^{*}a_{2}^{*}-a_{2}^{*}a_{1}^{*})a_{2}^{*})(u,v,w)\ [y,a_{2}]$ $\displaystyle\qquad+(a_{1}^{*}a_{2}^{*}-a_{2}^{*}a_{1}^{*})(u,v)\ [y,w_{\mathfrak{h}}]$ $\displaystyle=$ $\displaystyle 0.$ The first two terms are equal to $0$ by Lemma 2.1 and the third one is $0$ because $y\in Z_{\mathfrak{g}}(\mathfrak{h})$. On the other hand, let $u,v,w\in\mathfrak{g}$, $(\varphi\circ[,])(u,v,w)=\varphi([u,v],w)$. 
Writing $\displaystyle u=a_{1}^{*}(u)a_{1}+a_{2}^{*}(u)a_{2}+u_{\mathfrak{h}}$ $\displaystyle v=a_{1}^{*}(v)a_{1}+a_{2}^{*}(v)a_{2}+v_{\mathfrak{h}},$ it follows that $\displaystyle[u,v]$ $\displaystyle=$ $\displaystyle\left(a_{1}^{*}(u)a_{2}^{*}(v)-a_{2}^{*}(u)a_{1}^{*}(v)\right)[a_{1},a_{2}]+[u_{\mathfrak{h}},v_{\mathfrak{h}}]+a_{1}^{*}(u)[a_{1},v_{\mathfrak{h}}]$ $\displaystyle\qquad+a_{2}^{*}(u)[a_{2},v_{\mathfrak{h}}]+a_{1}^{*}(v)[u_{\mathfrak{h}},a_{1}]+a_{2}^{*}(v)[u_{\mathfrak{h}},a_{2}],$ and since $[u_{\mathfrak{h}},v_{\mathfrak{h}}]\in\mathfrak{h}$ we have that $\displaystyle\varphi([u,v],w)$ $\displaystyle=$ $\displaystyle(a_{1}^{*}(u)a_{2}^{*}(v)-a_{2}^{*}(u)a_{1}^{*}(v))\varphi([a_{1},a_{2}],w)+a_{1}^{*}(u)\varphi([a_{1},v_{\mathfrak{h}}],w)$ $\displaystyle+$ $\displaystyle a_{2}^{*}(u)\varphi([a_{2},v_{\mathfrak{h}}],w)-a_{1}^{*}(v)\varphi([a_{1},u_{\mathfrak{h}}],w)-a_{2}^{*}(v)\varphi([a_{2},u_{\mathfrak{h}}],w).$ The first term is equal to $\displaystyle S_{1}(u,v,w)$ $\displaystyle=$ $\displaystyle((a_{1}^{*}\cdot a_{2}^{*}-a_{2}^{*}\cdot a_{1}^{*})\cdot a_{2}^{*})(u,v,w)\ a_{1}^{*}([a_{1},a_{2}])\ y$ $\displaystyle\qquad-((a_{1}^{*}\cdot a_{2}^{*}-a_{2}^{*}\cdot a_{1}^{*})\cdot a_{1}^{*})(u,v,w)\ a_{2}^{*}([a_{1},a_{2}])\ y;$ the sum of the second and fourth terms is equal to $\displaystyle S_{24}(u,v,w)$ $\displaystyle=$ $\displaystyle(a_{1}^{*}\cdot(a_{1}^{*}\circ\operatorname{ad}_{a_{1}}\circ\pi_{\mathfrak{h}})-(a_{1}^{*}\circ\operatorname{ad}_{a_{1}}\circ\pi_{\mathfrak{h}})\cdot a_{1}^{*})\cdot a_{2}^{*}\ (u,v,w)\ y+$ $\displaystyle\qquad((a_{2}^{*}\circ\operatorname{ad}_{a_{1}}\circ\pi_{\mathfrak{h}})\cdot a_{1}^{*}-a_{1}^{*}\cdot(a_{2}^{*}\circ\operatorname{ad}_{a_{1}}\circ\pi_{\mathfrak{h}}))\cdot a_{1}^{*}\ (u,v,w)\ y;$ and the sum of the third and fifth terms is equal to $\displaystyle S_{35}(u,v,w)$ $\displaystyle=$
$\displaystyle(a_{2}^{*}\cdot(a_{1}^{*}\circ\operatorname{ad}_{a_{2}}\circ\pi_{\mathfrak{h}})-(a_{1}^{*}\circ\operatorname{ad}_{a_{2}}\circ\pi_{\mathfrak{h}})\cdot a_{2}^{*})\cdot a_{2}^{*}\ (u,v,w)\ y+$ $\displaystyle\qquad((a_{2}^{*}\circ\operatorname{ad}_{a_{2}}\circ\pi_{\mathfrak{h}})\cdot a_{2}^{*}-a_{2}^{*}\cdot(a_{2}^{*}\circ\operatorname{ad}_{a_{2}}\circ\pi_{\mathfrak{h}}))\cdot a_{1}^{*}\ (u,v,w)\ y.$ Now we have that, by Lemma 2.1, $\circlearrowleft S_{1}(u,v,w)=0.$ Also by the same lemma, $\displaystyle\circlearrowleft S_{24}(u,v,w)$ $\displaystyle=$ $\displaystyle\circlearrowleft(a_{1}^{*}\cdot(a_{1}^{*}\circ\operatorname{ad}_{a_{1}}\circ\pi_{\mathfrak{h}})-(a_{1}^{*}\circ\operatorname{ad}_{a_{1}}\circ\pi_{\mathfrak{h}})\cdot a_{1}^{*})\cdot a_{2}^{*}\ (u,v,w)\ y$ $\displaystyle\circlearrowleft S_{35}(u,v,w)$ $\displaystyle=$ $\displaystyle\circlearrowleft((a_{2}^{*}\circ\operatorname{ad}_{a_{2}}\circ\pi_{\mathfrak{h}})\cdot a_{2}^{*}-a_{2}^{*}\cdot(a_{2}^{*}\circ\operatorname{ad}_{a_{2}}\circ\pi_{\mathfrak{h}}))\cdot a_{1}^{*}\ (u,v,w)\ y.$ Finally, by hypothesis $a_{1}^{*}\circ\operatorname{ad}_{a_{1}}\circ\pi_{\mathfrak{h}}=-a_{2}^{*}\circ\operatorname{ad}_{a_{2}}\circ\pi_{\mathfrak{h}}$. This implies, by using Lemma 2.1, that $\circlearrowleft(S_{24}+S_{35})=0$ and therefore $\circlearrowleft\varphi\circ[\ ,\ ]=0$ as we wanted to prove. ∎ ###### Corollary 3.2. Let $(\mathfrak{g},\mu)$ be a nilpotent Lie algebra and $\mathfrak{h}\subseteq\mathfrak{g}$ a subalgebra of codimension 2. Fix $a_{1},a_{2}\in\mathfrak{g}$ such that $\mathfrak{g}=\langle a_{1},a_{2}\rangle\oplus\mathfrak{h}$. Then for any $y\in Z_{\mathfrak{g}}(\mathfrak{h})$, $\mu_{t}=\mu+t(a_{1}^{*}\wedge a_{2}^{*}\otimes y)$ is a linear deformation of $\mu$. ###### Proof.
Both $\mathfrak{g}$ and $\mathfrak{h}$ are nilpotent Lie algebras, hence $\operatorname{tr}(\operatorname{ad}_{x})=0$, for all $x\in\mathfrak{g}$ and $\operatorname{tr}(\operatorname{ad}_{h}^{\mathfrak{h}})=0$, for all $h\in\mathfrak{h}$. Since $\operatorname{tr}(\operatorname{ad}_{h})=a_{1}^{*}([h,a_{1}])+a_{2}^{*}([h,a_{2}])+\operatorname{tr}(\operatorname{ad}_{h}^{\mathfrak{h}}),$ it follows that $0=a_{1}^{*}\circ\operatorname{ad}_{a_{1}}(h)+a_{2}^{*}\circ\operatorname{ad}_{a_{2}}(h).$ Thus the hypotheses of Theorem 3.1 are satisfied. ∎ ###### Remark 3.3. If $\mathfrak{g}$ is nilpotent, this construction is a particular case of the Grunewald-O’Halloran construction [GH2]. This follows from two easy arguments. First, all subalgebras of codimension $2$ can be extended to an ideal of codimension $1$. [Given $\langle a_{1},a_{2}\rangle$ a direct linear complement of $\mathfrak{h}$ in $\mathfrak{g}$, either $a_{1}$ or $a_{2}$ is not in $[\mathfrak{g},\mathfrak{g}]$. In fact, if both are in $[\mathfrak{g},\mathfrak{g}]$, since $\mathfrak{g}$ is nilpotent and $\mathfrak{h}$ is a subalgebra, then $a_{1}=[a_{2},h_{1}]$ and $a_{2}=[a_{1},h_{2}]$ for some $h_{1},h_{2}\in\mathfrak{h}$. But this is not possible for $\mathfrak{g}$ nilpotent.] Then we can assume that $\mathfrak{g}=\langle a_{1}\rangle\oplus I$, with $\mathfrak{h}\subseteq I$. Finally, the linear function on $I$ that sends $a_{2}$ to $y$ and the remaining basis elements to $0$ is a derivation of $I$. In general, our construction is different from that in [GH2], as the following two examples show. ###### Example 3.4. Let $(\mathfrak{g}=\langle a_{1},a_{2},y\rangle,\mu)$ with Lie bracket $\displaystyle\mu(a_{1},y)=2a_{1}$ $\displaystyle\mu(a_{2},y)=-2a_{2}$ $\displaystyle\mu(a_{1},a_{2})=0.$ If we take $\mathfrak{h}=\langle y\rangle$, it satisfies the hypotheses of Theorem 3.1. It follows that $\mu_{t}$ is isomorphic to $\mathfrak{sl}_{2}(\mathbb{K})$ for all $t\neq 0$.
Since $\mathfrak{sl}_{2}(\mathbb{K})$ has no proper nonzero ideals, the deformation $\mu_{t}$ is not of Grunewald-O’Halloran type. ###### Example 3.5. Let $(\mathfrak{h},\nu)$ be a non-perfect Lie algebra with non-trivial center and let $f:\mathfrak{h}\rightarrow\mathbb{K}$ be a non-zero linear map such that $f(\nu(\mathfrak{h},\mathfrak{h}))=0$. Define the Lie algebra $(\mathfrak{g},\mu)$ by taking $\mathfrak{g}=\langle a_{1},a_{2}\rangle\oplus\mathfrak{h}$ with $\mu$ defined by: $\displaystyle\mu_{|_{\mathfrak{h}\times\mathfrak{h}}}=\nu,$ $\displaystyle\mu(a_{1},h)=f(h)a_{2},$ $\displaystyle\mu(a_{2},h)=f(h)a_{1},$ $\displaystyle\mu(a_{1},a_{2})=0,$ for all $h\in\mathfrak{h}$. By taking $0\neq y\in Z(\mathfrak{h})$, the hypotheses of Theorem 3.1 are satisfied. The corresponding linear deformation $\mu_{t}$ is not of Grunewald-O’Halloran type. Indeed, if $\mu_{t}=\mu+t(a_{1}^{*}\wedge a_{2}^{*}\otimes y)$ were of that type, there would exist an ideal $I\triangleleft\mathfrak{g}$ of codimension one, $x\in\mathfrak{g}$ and $D\in\operatorname{Der}(I)$ such that $\mathfrak{g}=\langle x\rangle\oplus I$ and (3.1) $\displaystyle\mu_{t}(i_{1},i_{2})=\mu(i_{1},i_{2})$ (3.2) $\displaystyle\mu_{t}(x,i)=\mu(x,i)+tD(i)$ for all $i,i_{1},i_{2}\in I$. Since $I\triangleleft\mathfrak{g}$ has codimension one, $\mu(\mathfrak{g},\mathfrak{g})\subseteq I$ and then $a_{1},a_{2}\in I$. Now from (3.1) we get that $\mu_{t}(a_{1},a_{2})=\mu(a_{1},a_{2}).$ However, by the definition of $\mu_{t}$, we have that $\mu_{t}(a_{1},a_{2})=(\mu+t(a_{1}^{*}\wedge a_{2}^{*}\otimes y))(a_{1},a_{2})=\mu(a_{1},a_{2})+ty,$ which is not possible for $t\neq 0$. Therefore $\mu_{t}$ is not of Grunewald-O’Halloran type. ### 3.1. 2-step nilpotent graph Lie algebras As an example of the previous construction, we deform 2-step nilpotent graph Lie algebras (see [AAA, BT] for 2-step graph Lie algebras in the degenerations and deformations framework).
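The 2-cocycle condition of Theorem 3.1 lends itself to a direct numerical check. The sketch below is our own minimal instance of Example 3.5 (take $\mathfrak{h}=\langle h_{1}\rangle$ one-dimensional abelian, $f(h_{1})=1$ and $y=h_{1}$; the structure-constant encoding is purely illustrative) and verifies the Jacobi identity for the deformed bracket $\mu_{t}$:

```python
import itertools

def bracket(C):
    """Bilinear map from structure constants C[i][j][k] = coefficient
    of e_k in [e_i, e_j]."""
    n = len(C)
    def br(u, v):
        return [sum(u[i] * v[j] * C[i][j][k]
                    for i in range(n) for j in range(n))
                for k in range(n)]
    return br

def jacobi_defect(C):
    """Largest entry of [[x,y],z] + [[y,z],x] + [[z,x],y] over basis triples."""
    br = bracket(C)
    n = len(C)
    e = [[float(i == k) for k in range(n)] for i in range(n)]
    return max(abs(s)
               for x, y, z in itertools.product(e, repeat=3)
               for s in (a + b + c for a, b, c in
                         zip(br(br(x, y), z), br(br(y, z), x), br(br(z, x), y))))

# Basis (a1, a2, h1): [a1,h1] = a2 and [a2,h1] = a1, so that
# a1*∘ad_{a1} + a2*∘ad_{a2} = 0 on h, plus the deformation term
# [a1,a2]_t = t*h1 with y = h1 central in h.
def mu_t(t):
    C = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
    C[0][2][1], C[2][0][1] = 1.0, -1.0   # [a1, h1] = a2
    C[1][2][0], C[2][1][0] = 1.0, -1.0   # [a2, h1] = a1
    C[0][1][2], C[1][0][2] = t, -t       # deformation a1*∧a2*⊗h1
    return C

defect = max(jacobi_defect(mu_t(t)) for t in (0.0, 1.0, -2.5))
```

Since $\operatorname{ad}_{h_{1}}$ acts invertibly on $\langle a_{1},a_{2}\rangle$, this particular algebra is not nilpotent for any $t$; the check only illustrates that the cocycle hypothesis produces genuine Lie brackets along the whole line $\mu_{t}$.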
Let $G$ be the graph with vertices $V=\\{v_{1},\dots,v_{m}\\}$ and edges $A=\\{a_{ij}:(i,j)\in I\\}$, $I\subseteq\\{1,\dots,m\\}\times\\{1,\dots,m\\}$. The graph Lie algebra associated to $G$, $\mathfrak{g}_{G}$, is the $\mathbb{K}$-vector space generated by $V\cup A$, where the non-zero brackets of basis elements are $\mu(v_{i},v_{j})=a_{ij},\quad\text{if }(i,j)\in I.$ Notice that if $A=\emptyset$, then $\mathfrak{g}_{G}=\mathfrak{a}_{m}$, the $m$-dimensional abelian Lie algebra. But in general, if $A\neq\emptyset$, it is a $2$-step nilpotent Lie algebra, i.e. $\mathfrak{g}_{G}\in\mathcal{N}_{n,2}$, $n=|V|+|A|$. ###### Example 3.6. The Lie algebra associated with the graph with two vertices and one edge is the 3-dimensional Heisenberg Lie algebra $\mathfrak{h}_{1}$. In general, 2-step nilpotent graph Lie algebras are not rigid in $\mathcal{N}_{n,3}$, as stated precisely in the following theorem. ###### Theorem 3.7. If $\mathfrak{g}$ is an $n$-dimensional graph Lie algebra not isomorphic to $\mathfrak{h}_{1}$, $\mathfrak{a}_{1}$ or $\mathfrak{a}_{2}$, then $\mathfrak{g}$ is non-rigid in $\mathcal{N}_{n,3}$. ###### Proof. Let $\mu$ be the bracket of $\mathfrak{g}$. In all cases, we construct a non-trivial 3-step nilpotent deformation of $\mu$. If $A=\emptyset$, then $\mathfrak{g}\simeq\mathfrak{a}_{n}$ with $n\geq 3$ and $\mu=0$, and [Section 6, item (3)] provides a non-trivial $2$-step deformation of $\mu$. If $|A|=1$, since $\mathfrak{g}\ncong\mathfrak{h}_{1}$, we have that $m>2$. We may assume, by relabeling the vertices if necessary, that $A=\\{a_{12}\\}$. The hypotheses of Corollary 3.2 are fulfilled for $a_{1}=v_{1}$, $a_{2}=a_{12}$, $\mathfrak{h}=\langle V-\\{v_{1}\\}\rangle$ and $y=v_{3}$. Then $\mu_{t}=\mu+t(v_{1}^{*}\wedge a_{12}^{*}\otimes v_{3})$ is a linear deformation of $\mu$, which is $3$-step nilpotent for all $t\neq 0$. If $|A|>1$, we can assume that $a_{12}\in A$ and that there is another edge $a\in A$.
The hypotheses of Corollary 3.2 are fulfilled for $a_{1}=v_{1}$, $a_{2}=a_{12}$, $\mathfrak{h}=\langle(V-\\{v_{1}\\})\cup(A-\\{a_{12}\\})\rangle$ and $y=a$. Then $\mu_{t}=\mu+t(v_{1}^{*}\wedge a_{12}^{*}\otimes a)$ is a linear deformation of $\mu$, which is $3$-step nilpotent for all $t\neq 0$. ∎ ## 4\. Free nilpotent Lie algebras Let $L_{(k)}(m)$ be the free $k$-step nilpotent Lie algebra on $m$ generators, where $m\geq 2$, and let $n$ be its dimension. In this section, we explore the rigidity of $L_{(k)}(m)$ in the varieties $\mathcal{N}_{n,k}$ and $\mathcal{N}_{n,k+1}$, showing that it is rigid in the first one, but not in the second one. There is a single exception: $L_{(2)}(2)$; this Lie algebra is isomorphic to the $3$-dimensional Heisenberg Lie algebra, which is rigid in $\mathcal{N}_{3,2}=\mathcal{N}_{3,3}=\mathcal{N}_{3}$. Let us recall briefly the construction of $L_{(k)}(m)$ and some well-known facts to fix notation. Given a set of generators $X$, one constructs, one after the other, the free magma $M(X)$, the free algebra $A(X)$, the free Lie algebra $L(X)$ and finally the $k$-step free nilpotent Lie algebra on $X$, $L_{(k)}(X)$. The free magma on $X$ is the set $M(X)$ with an operation ‘$\centerdot$’. $M(X)=\bigcup_{i\in\mathbb{N}}M_{i}(X),$ where $M_{1}(X)=X$ and for $i\geq 2$, $M_{i}(X)=\\{p\centerdot q\ |\ p\in M_{s}(X),q\in M_{t}(X),s+t=i\\}$. The elements of $M_{i}(X)$ are said to be of length $i$. The free algebra on $X$ is the algebra $A(X)$ built on the linear space generated by the set $M(X)$ with the bilinear product induced by ‘$\centerdot$’. Clearly, it is naturally graded $A(X)=\bigoplus_{i=1}^{\infty}A_{i}(X),$ where $A_{i}(X)=\langle M_{i}(X)\rangle$. Notice that $A_{i}(X)\centerdot A_{j}(X)\subseteq A_{i+j}(X)$. The elements of $A_{i}(X)$ are said to be of degree $i$.
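The length grading of the free magma can be made concrete in a few lines. In this sketch (the nested-pair encoding of magma elements is our own, purely illustrative choice), magma elements are binary trees and the length of a product is additive:

```python
def length(p):
    """Length of a free-magma element: number of generator occurrences."""
    if isinstance(p, str):          # generators are encoded as strings
        return 1
    left, right = p
    return length(left) + length(right)

def dot(p, q):
    """The free (non-associative) product p ∙ q."""
    return (p, q)

x1, x2 = "x1", "x2"
p = dot(dot(x1, x2), x1)   # an element of M_3(X)
q = dot(x2, x1)            # an element of M_2(X)

# The product is graded: M_s(X) ∙ M_t(X) ⊆ M_{s+t}(X).
graded = length(dot(p, q)) == length(p) + length(q)
```

The same additivity is what underlies the degree grading $A_{i}(X)\centerdot A_{j}(X)\subseteq A_{i+j}(X)$ of the free algebra, and it survives both quotients taken below.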
The free Lie algebra on $X$ is the Lie algebra $(L(X),\lambda)$, constructed as the quotient algebra $L(X)=\frac{A(X)}{I},$ where $I$ is the ideal generated by the set $\\{a\centerdot b+b\centerdot a\ |\ a,b\in A(X)\\}\cup\\{\circlearrowleft\centerdot^{2}(a,b,c)\ |\ a,b,c\in A(X)\\}.$ The quotient projection is $\pi:A(X)\rightarrow L(X)$. Since the ideal $I$ is homogeneous, $L(X)$ is naturally graded $L(X)=\bigoplus_{i=1}^{\infty}L_{i}(X),$ where $L_{i}(X)=\pi(A_{i}(X))$. The $k$-step free nilpotent Lie algebra on $X$ is the Lie algebra $(L_{(k)}(X),\mu)$, constructed as the quotient algebra $L_{(k)}(X)=\frac{L(X)}{L^{k+1}(X)},$ where $L^{k+1}(X)$ is the $(k+1)$-th term of the central descending series of $L(X)$, $L^{k+1}(X)=\bigoplus_{i=k+1}^{\infty}L_{i}(X).$ The quotient projection is $\overline{\phantom{I}}:L(X)\rightarrow L_{(k)}(X)$. Since the ideal $L^{k+1}(X)$ is homogeneous, $L_{(k)}(X)$ is naturally graded $L_{(k)}(X)=\bigoplus_{i=1}^{k}L_{i}(X),$ where, for $i\leq k$, we identify $L_{i}(X)$ with its image in the quotient. Finally, we denote by $\pi_{k}=\overline{\phantom{I}}\circ\pi$ the composition $\pi_{k}:A(X)\longrightarrow L(X)\longrightarrow L_{(k)}(X).$ Sometimes for convenience we will write $[\ ,\ ]$ instead of $\mu$. Hall bases. Let $X=\\{x_{1},\cdots,x_{m}\\}$. Since $L_{i}(X)=\pi(A_{i}(X))$ and $A_{i}(X)=\langle M_{i}(X)\rangle$, it follows that $L_{i}(X)=\langle\pi(M_{i}(X))\rangle.$ Linear bases for each $L_{i}(X)$, and hence for the free Lie algebra $L(X)$ and for the $k$-step free nilpotent Lie algebras, can be chosen from $\pi(M_{i}(X))$. Very well-known bases of this kind are the _Hall bases_. 1. (1) Starting from an ordered basis $B_{1}$ of $L_{1}(X)$ one constructs recursively ordered bases $B_{i}$ of $L_{i}(X)$ as follows: 1. (i) Let $B_{1}$ be the set of generators $X$ ordered by $x_{1}<x_{2}<\dots<x_{m}.$ 2.
(ii) Given ordered bases $B_{1},\dots,B_{k}$ of $L_{1}(X),\dots,L_{k}(X)$ respectively, $u\in B_{i}$ and $v\in B_{j}$, with $i\neq j$, are ordered by $u<v,\text{\qquad if $i<j$}.$ 3. (iii) Now $B_{k+1}$ is formed by all brackets $\lambda(u,v)$, with $u\in B_{i}$, $v\in B_{j}$ with $i+j=k+1$, subject to the following restrictions: $u>v\text{\qquad and \qquad if\quad}u=\lambda(w,z),\quad v\geq z.$ The elements of $B_{k+1}$ are ordered lexicographically; that is $\lambda(u,v)>\lambda(w,z)$ if $u>w$, or $u=w$ and $v>z$. 4. (iv) Given an order for the generators, the basis so constructed is uniquely determined. 2. (2) Observe that each element $v$ in the basis $B_{i}$ has a multidegree $D_{v}=(d_{1},\dots,d_{m})$, where $d_{j}$ is the number of occurrences of $x_{j}$ in $v$. Clearly $d_{1}+\dots+d_{m}=i$ and moreover the bracket is multigraded, that is $D_{[v,w]}=D_{v}+D_{w}$. From the construction and the last observation, three facts, which we record for later use, follow: 1. (i) The element $c_{k}=\lambda^{k-1}(x_{2},x_{1},\dots,x_{1})$, of multidegree $D=(k-1,1,0,\dots,0)$, is in $B_{k}$, for every $k\geq 2$. 2. (ii) Given $v\in B_{i}$ and $w\in B_{j}$, $\lambda(v,w)\in L_{i+j}(X)$ is a linear combination of those elements of $B_{i+j}$ with multidegree equal to $D_{v}+D_{w}$. The coefficient of $c_{k}$, with $k=i+j$, in this linear combination is almost always zero. The only exception is the case $v=c_{k-1}$ and $w=x_{1}$, in which case $\lambda(v,w)=c_{k}$. 3. (iii) In general $\dim L_{i}(X)\geq 2$, with the only exception of $L_{2}(X)$ if $m=2$, in which case $\dim L_{2}(X)=1$. ### 4.1. Main results Given $X=\\{x_{1},...,x_{m}\\}$, let us denote $M(X)$, $A(X)$, $L(X)$ and $L_{(k)}(X)$ by $M(m)$, $A(m)$, $L(m)$ and $L_{(k)}(m)$ respectively. ###### Theorem 4.1. The $n$-dimensional free $k$-step nilpotent Lie algebra on $m$ generators $L_{(k)}(m)$ is non-rigid in $\mathcal{N}_{n,k+1}$ if it is not isomorphic to $L_{(2)}(2)$. ###### Proof.
We construct a $(k+1)$-step linear deformation of $L_{(k)}(m)$ using Corollary 3.2. Let $B$ be the Hall basis (described above) associated to the set of generators $\\{x_{1},...,x_{m}\\}$ ordered by $x_{1}<\dots<x_{m}$. Let $a_{1}=x_{1}\qquad\text{and}\qquad a_{2}=c_{k}=[[\dots[[x_{2},x_{1}],x_{1}],\dots],x_{1}],$ and $\mathfrak{h}=\langle B-\\{a_{1},a_{2}\\}\rangle$. The subspace $\mathfrak{h}$ is in fact a subalgebra, as follows from item (2)(ii) of the previous facts. Also $B_{k}$ has at least 2 elements, as follows from item (2)(iii) and the fact that $L_{(k)}(m)\not\simeq L_{(2)}(2)$. Then there exists $y\in B_{k}$ linearly independent from $a_{2}\in B_{k}$. Since $B_{k}\subseteq Z(L_{(k)}(m))$, then $y\in Z_{L_{(k)}(m)}(\mathfrak{h})$. Hence, by Corollary 3.2, we can consider the linear deformation of $L_{(k)}(m)$ given by $[\ ,\ ]_{t}=[\ ,\ ]+t(a_{1}^{*}\wedge a_{2}^{*}\otimes y).$ If $t\neq 0$, $[a_{1},a_{2}]_{t}=ty\neq 0$, so that the deformed algebra $L_{t}$ satisfies $L_{t}^{k+1}=\langle y\rangle\neq 0$ and therefore $L_{t}$ is a $(k+1)$-step nilpotent Lie algebra for all $t\neq 0$. ∎ The following fact is needed in our proof of the next theorem. ###### Remark 4.2. Given $x_{i_{1}},\dots,x_{i_{k}}$ in $X$, consider the element $\lambda^{k-1}(x_{i_{1}},\dots,x_{i_{k}})=[[\dots[[x_{i_{1}},x_{i_{2}}],x_{i_{3}}]\dots],x_{i_{k}}]$ in $L_{k}(m)$. The set formed by these elements will be denoted by $\lambda^{k-1}(X^{k})$. In general $\lambda^{k-1}(X^{k})$ is not a linearly independent set, but it generates the homogeneous component $L_{k}(m)$. This is clear for $L_{1}(m)$ and $L_{2}(m)$ and follows inductively for $L_{k}(m)$, by the Jacobi identity. ###### Theorem 4.3. The free $k$-step nilpotent Lie algebra $L_{(k)}(m)$ is rigid in $\mathcal{N}_{n,k}$. ###### Proof. According to Corollary 7.4 (see the Appendix), it suffices to prove that $H_{k\textrm{-nil}}^{2}(\mu,\mu)=0$.
Given $\sigma\in Z_{k\textrm{-nil}}^{2}(\mu,\mu)$, we construct a linear function $f:L_{(k)}(m)\rightarrow L_{(k)}(m)$ such that $\sigma=\delta f$, so that $\sigma\in B_{k\textrm{-nil}}^{2}(\mu,\mu)$. We start by considering the linear function $f:A(m)\rightarrow L_{(k)}(m)$ defined in the basis $M(m)$ of $A(m)$ recursively by $\displaystyle f(x)=0,\text{\ for all\ }x\in X=M_{1}(m)$ $\displaystyle f(p\centerdot q)=[f(p),\pi_{k}(q)]+[\pi_{k}(p),f(q)]-\sigma(\pi_{k}(p),\pi_{k}(q)).$ A direct calculation shows that $f$ satisfies, for all $a,b,c\in A(m)$, $\displaystyle f(a\centerdot b+b\centerdot a)=0,$ $\displaystyle f(\circlearrowleft\centerdot^{2}(a,b,c))=0.$ Hence it induces a linear map $f:L(m)=A(m)/I\rightarrow L_{(k)}(m)$. This map satisfies, for $u,v\in L(m)$, (4.1) $f(\lambda(u,v))=[f(u),\bar{v}]+[\bar{u},f(v)]-\sigma(\bar{u},\bar{v}).$ Recall that $\bar{\ }:L(m)\rightarrow L_{(k)}(m)$ is the quotient projection. Moreover, we will prove that $f(L^{k+1}(m))=0$ and therefore it induces a linear map $f:L_{(k)}(m)\rightarrow L_{(k)}(m)$ satisfying, for all $u,v\in L_{(k)}(m)$, $f([u,v])=[f(u),v]+[u,f(v)]-\sigma(u,v).$ Thus $\sigma=\delta f$. It remains to see that $f(L^{k+1}(m))=0$. To prove this, it suffices to show that $f\circ\lambda^{i}(X^{i+1})=0$, for all $i\geq k$, because $\lambda^{i}(X^{i+1})$ generates $L^{i+1}(m)$ (see Remark 4.2). That $f\circ\lambda^{i}(X^{i+1})=0$, for all $i\geq k$, is a consequence of the following identity of $(i+1)$-multilinear functions in $X^{i+1}$: (4.2) $f\circ\lambda^{i}=-\sum_{j=0}^{i-1}{\mu}^{i-1-j}\circ\sigma\circ\mu^{j},$ where $\mu$ is the Lie bracket of $L_{(k)}(m)$. We prove this identity by induction.
If $i=1$ and $(x,y)\in X^{2}$, since $\overline{x}=x$, $\overline{y}=y$, and $f(x)=0=f(y)$, by using (4.1) we have that $\displaystyle f\circ\lambda^{1}(x,y)$ $\displaystyle=$ $\displaystyle f(\lambda(x,y))$ $\displaystyle=$ $\displaystyle[f(x),y]+[x,f(y)]-\sigma(x,y)$ $\displaystyle=$ $\displaystyle-\sum_{j=0}^{0}\mu^{0-j}\circ\sigma\circ\mu^{j}(x,y).$ If $f\circ\lambda^{i}=-\sum_{j=0}^{i-1}\mu^{i-1-j}\circ\sigma\circ\mu^{j}$ and $\mathbf{x}=(\mathbf{x}^{\prime},x)\in X^{i+2}$, for some $\mathbf{x}^{\prime}\in X^{i+1}$ and $x\in X$, we have that $\displaystyle f\circ\lambda^{i+1}(\mathbf{x})$ $\displaystyle=$ $\displaystyle f(\lambda(\lambda^{i}(\mathbf{x}^{\prime}),x))$ $\displaystyle=$ $\displaystyle[f(\lambda^{i}(\mathbf{x}^{\prime})),\overline{x}]+[\overline{\lambda^{i}(\mathbf{x}^{\prime})},f(x)]-\sigma(\overline{\lambda^{i}(\mathbf{x}^{\prime})},\overline{x})$ $\displaystyle=$ $\displaystyle[f\circ\lambda^{i}(\mathbf{x}^{\prime}),x]-\sigma(\mu^{i}(\mathbf{x}^{\prime}),x);$ the second identity follows from (4.1) and the third one follows because $\overline{x}=x$, $\overline{\lambda^{i}(\mathbf{x}^{\prime})}=\mu^{i}(\overline{\mathbf{x}^{\prime}})=\mu^{i}(\mathbf{x}^{\prime})$ and $f(x)=0$. Now by the inductive hypothesis, we have that $\displaystyle f\circ\lambda^{i+1}(\mathbf{x})$ $\displaystyle=$ $\displaystyle\big{[}-\sum_{j=0}^{i-1}\mu^{i-1-j}\circ\sigma\circ\mu^{j}(\mathbf{x}^{\prime}),x\big{]}-\sigma(\mu^{i}(\mathbf{x}^{\prime}),x)$ $\displaystyle=$ $\displaystyle-\sum_{j=0}^{i-1}\mu^{i-j}\circ\sigma\circ\mu^{j}(\mathbf{x})-\mu^{i-i}\circ\sigma\circ\mu^{i}(\mathbf{x})$ $\displaystyle=$ $\displaystyle-\sum_{j=0}^{i}\mu^{i-j}\circ\sigma\circ\mu^{j}(\mathbf{x}).$ Finally, we show that $f\circ\lambda^{i}=0$, for all $i\geq k$. If $i\geq k$, then $i=k+c$, for some $c\geq 0$.
So that $\displaystyle f\circ\lambda^{i}$ $\displaystyle=-\sum_{j=0}^{i-1}\mu^{i-1-j}\circ\sigma\circ\mu^{j}$ $\displaystyle=-\sum_{j=0}^{k+c-1}\mu^{k+c-1-j}\circ\sigma\circ\mu^{j}$ $\displaystyle=-\sum_{j=0}^{k-1}\mu^{k+c-1-j}\circ\sigma\circ\mu^{j}-\sum_{j=k}^{k+c-1}\mu^{k+c-1-j}\circ\sigma\circ\mu^{j}$ $\displaystyle=-\mu^{c}\circ(\sum_{j=0}^{k-1}\mu^{k-1-j}\circ\sigma\circ\mu^{j})-\sum_{j=k}^{k+c-1}\mu^{k+c-1-j}\circ\sigma\circ\mu^{j}$ $\displaystyle=0.$ The last identity holds because both terms are zero. The first one is zero because $\sigma\in Z_{k\textrm{-nil}}^{2}(\mu,\mu)$ and hence $\sum_{j=0}^{k-1}\mu^{k-1-j}\circ\sigma\circ\mu^{j}=0$. The second one is equal to zero because $\mu$ is a $k$-step nilpotent Lie bracket and hence $\mu^{j}=0$, for all $j\geq k$. ∎ ## 5\. Heisenberg Lie algebras The $(2m+1)$-dimensional Heisenberg Lie algebra is $\mathfrak{h}_{m}=V\oplus Z=\langle x_{1},y_{1},\dots,x_{m},y_{m}\rangle\oplus\langle z\rangle$, where the non-zero brackets of basis elements are $\mu(x_{i},y_{i})=z,\text{\ for all }1\leq i\leq m.$ $\mathfrak{h}_{m}$ is a $2$-step nilpotent Lie algebra with center $Z$. Sometimes for convenience we will write $[\ ,\ ]$ instead of $\mu$ and $n$ instead of $2m+1$. Let $\\{x_{1}^{*},y_{1}^{*},\dots,x_{m}^{*},y_{m}^{*},z^{*}\\}$ be the dual basis of the given one and for any $x\in\mathfrak{h}_{m}$, denote by $x_{V}$ the unique element in $V$ such that $x=x_{V}+z^{*}(x)z$. ###### Remark 5.1. Given $x,y\in\mathfrak{h}_{m}$, $\operatorname{ad}_{x}=\operatorname{ad}_{y}$ if and only if $x_{V}=y_{V}$ or, equivalently, if there is $v\in V$ such that $x=v+z^{*}(x)z$ and $y=v+z^{*}(y)z$. ###### Theorem 5.2. The $(2m+1)$-dimensional Heisenberg Lie algebra $\mathfrak{h}_{m}$ is non-rigid in $\mathcal{N}_{2m+1,3}$, for all $m>1$. ###### Proof. We will give a $3$-step linear deformation of $\mathfrak{h}_{m}$ using Corollary 3.2. Let $a_{1}=x_{1},\,a_{2}=x_{2},\,\mathfrak{h}=\langle B-\\{x_{1},x_{2}\\}\rangle$ and $y=y_{1}$, where $B=\\{x_{1},y_{1},\dots,x_{m},y_{m},z\\}$ is the given basis.
Hence, we can consider the linear deformation of $\mathfrak{h}_{m}$ $[\ ,\ ]_{t}=[\ ,\ ]+t(a_{1}^{*}\wedge a_{2}^{*}\otimes y),$ which is $3$-step nilpotent, for all $t\neq 0$. ∎ ###### Theorem 5.3. The $(2m+1)$-dimensional Heisenberg Lie algebra $\mathfrak{h}_{m}$ is rigid in $\mathcal{N}_{2m+1,2}$, for all $m\in\mathbb{N}$. ###### Proof. According to Corollary 7.4, it is enough to prove that $H_{2\textrm{-nil}}^{2}(\mu,\mu)=0$. Given $\sigma\in Z_{2\textrm{-nil}}^{2}(\mu,\mu)$, we will prove that there exists a linear function $f:\mathfrak{h}_{m}\rightarrow\mathfrak{h}_{m}$ such that $\sigma=\delta f$. Since $[\ ,\ ]\circ\sigma=-\sigma\circ[\ ,\ ]$, on the one hand (5.1) $\operatorname{ad}_{\sigma(x_{i},y_{i})}=-\sigma(z,\cdot)=\operatorname{ad}_{\sigma(x_{j},y_{j})},$ for all $i,j\in\\{1,...,m\\}$, and hence there exists $v\in V$ such that $\sigma(x_{i},y_{i})=v+z^{*}(\sigma(x_{i},y_{i}))z,$ for all $i=1,...,m$ (see Remark 5.1). On the other hand (5.2) $\text{if }[x,y]=0,\text{ then }\sigma(x,y)\in Z.$ Now, $\delta f=\sigma$ if and only if $f$ satisfies the following system of linear equations: $\displaystyle\sigma(x_{i},y_{i})$ $\displaystyle=$ $\displaystyle[f(x_{i}),y_{i}]+[x_{i},f(y_{i})]-f(z),$ $\displaystyle\text{for $i=1,\dots,m$};$ $\displaystyle\sigma(x_{i},y_{j})$ $\displaystyle=$ $\displaystyle[f(x_{i}),y_{j}]+[x_{i},f(y_{j})],$ $\displaystyle\text{for $1\leq i,j\leq m,i\neq j$};$ $\displaystyle\sigma(x_{i},x_{j})$ $\displaystyle=$ $\displaystyle[f(x_{i}),x_{j}]+[x_{i},f(x_{j})],$ $\displaystyle\text{for $1\leq i<j\leq m$ };$ $\displaystyle\sigma(y_{i},y_{j})$ $\displaystyle=$ $\displaystyle[f(y_{i}),y_{j}]+[y_{i},f(y_{j})],$ $\displaystyle\text{for $1\leq i<j\leq m$ };$ $\displaystyle\sigma(z,x_{i})$ $\displaystyle=$ $\displaystyle[f(z),x_{i}],$ $\displaystyle\text{for $i=1,\dots,m$ };$ $\displaystyle\sigma(z,y_{i})$ $\displaystyle=$ $\displaystyle[f(z),y_{i}],$ $\displaystyle\text{for $i=1,\dots,m$}.$ We start by defining $f(z)=-v$.
Then the last two sets of equations are satisfied because $\sigma$ satisfies (5.1). To define $f$ on $V$, we set $z^{*}(f(v))=0$, for all $v\in V$, so that $\displaystyle f(x_{i})$ $\displaystyle=$ $\displaystyle\sum_{k=1}^{m}x_{k}^{*}(f(x_{i}))x_{k}+\sum_{k=1}^{m}y_{k}^{*}(f(x_{i}))y_{k},$ $\displaystyle f(y_{i})$ $\displaystyle=$ $\displaystyle\sum_{k=1}^{m}x_{k}^{*}(f(y_{i}))x_{k}+\sum_{k=1}^{m}y_{k}^{*}(f(y_{i}))y_{k}.$ It remains to determine $a_{k}^{i}=x_{k}^{*}(f(x_{i}))$, $b_{k}^{i}=y_{k}^{*}(f(x_{i}))$, $c_{k}^{i}=x_{k}^{*}(f(y_{i}))$ and $d_{k}^{i}=y_{k}^{*}(f(y_{i}))$, for $k,i\in\\{1,\dots,m\\}$, which now must satisfy the system of linear equations $\displaystyle\sigma(x_{i},y_{i})-v$ $\displaystyle=(a_{i}^{i}+d_{i}^{i})z,$ $\displaystyle\text{for $i=1,\dots,m$};$ $\displaystyle\sigma(x_{i},y_{j})$ $\displaystyle=(a_{j}^{i}+d_{i}^{j})z,$ $\displaystyle\text{for $1\leq i,j\leq m,i\neq j$};$ $\displaystyle\sigma(x_{i},x_{j})$ $\displaystyle=(-b_{j}^{i}+b_{i}^{j})z,$ $\displaystyle\text{for $1\leq i<j\leq m$ };$ $\displaystyle\sigma(y_{i},y_{j})$ $\displaystyle=(c_{j}^{i}-c_{i}^{j})z,$ $\displaystyle\text{for $1\leq i<j\leq m$ }.$ Since this system is clearly consistent, we are done. ∎ ###### Remark 5.4. It is worth mentioning that the same result can be found in [GR2]. ## 6\. Lie algebras with an abelian factor In this section, we explore the rigidity and non-rigidity of Lie algebras with an abelian factor. Given a Lie algebra $(\mathfrak{g},\lambda)$ and an abelian Lie algebra $(\mathfrak{a},\mu)$, the question we address in this section is whether $\mathfrak{g}\oplus\mathfrak{a}$ is rigid or not ($\mathfrak{g}\oplus\mathfrak{a}$ endowed with the Lie bracket of the direct sum of Lie algebras). Denote by $\lambda\oplus\mu$ the bracket of $\mathfrak{g}\oplus\mathfrak{a}$. The answer, which depends on the size of $\mathfrak{a}$ and the framework variety, is in general no.
However, the situation for small abelian factors, in particular for one-dimensional factors, is quite interesting. It is worth recalling that for Lie algebras with an abelian ideal or subalgebra that is not an abelian factor, the situation is different. In fact, in [R] rigid Lie algebras of the form $\mathfrak{g}\ltimes\mathfrak{a}$ have been constructed and in [AG] rigid solvable Lie algebras of the form $\mathfrak{g}\rtimes\mathfrak{a}$ have been constructed. Given an $m$-dimensional Lie algebra $(\mathfrak{g},\lambda)$ and the $l$-dimensional abelian Lie algebra, denoted by $(\mathfrak{a}_{l},\mu)$, we observe that for any non-abelian Lie bracket $\nu$, (6.1) $\mu_{t}=t\nu$ is a non-trivial linear deformation of $\mu$. This gives rise to the non-trivial deformation $(\lambda\oplus\mu)_{t}=\lambda\oplus t\nu,$ of the $(m+l)$-dimensional Lie algebra $\lambda\oplus\mu$. From this observation it follows that: 1. (1) If $l\geq 2$, then for any $\mathfrak{g}$, $\mathfrak{g}\oplus\mathfrak{a}_{l}$ is non-rigid in $\mathcal{L}_{m+l}$. 2. (2) If $l\geq 2$, then for any $m$-dimensional solvable Lie algebra $\mathfrak{s}$, $\mathfrak{s}\oplus\mathfrak{a}_{l}$ is non-rigid in $\mathcal{S}_{m+l}$. Moreover, if $\mathfrak{s}$ is $k$-step solvable, $\mathfrak{s}\oplus\mathfrak{a}_{l}$ is non-rigid in $\mathcal{S}_{m+l,k}$. 3. (3) If $l\geq 3$, then for any $m$-dimensional nilpotent Lie algebra $\mathfrak{n}$, $\mathfrak{n}\oplus\mathfrak{a}_{l}$ is non-rigid in $\mathcal{N}_{m+l}$. Moreover, if $\mathfrak{n}$ is $k$-step nilpotent, $\mathfrak{n}\oplus\mathfrak{a}_{l}$ is non-rigid in $\mathcal{N}_{m+l,k}$. Hence, the only cases remaining to consider are: 1. (1’) $\mathfrak{g}\oplus\mathfrak{a}_{l}\in\mathcal{L}_{m+l}$, where $\mathfrak{g}$ is any Lie algebra and $l=1$. 2. (2’) $\mathfrak{s}\oplus\mathfrak{a}_{l}\in\mathcal{S}_{m+l}$, where $\mathfrak{s}$ is any $k$-step solvable Lie algebra and $l=1$.
And its stronger version for $\mathfrak{s}\oplus\mathfrak{a}_{l}\in\mathcal{S}_{m+l,k}$. 3. (3’) $\mathfrak{n}\oplus\mathfrak{a}_{l}\in\mathcal{N}_{m+l}$, where $\mathfrak{n}$ is any $k$-step nilpotent Lie algebra and $l=1$ or $l=2$. And its stronger version for $\mathfrak{n}\oplus\mathfrak{a}_{l}\in\mathcal{N}_{m+l,k}$. We will show that the answer for (2’) and for (3’) is the general one, namely, they are not rigid, even in their strongest forms. The answer for (1’) turns out to be more intricate. Even though there is no unified answer independent of $\mathfrak{g}$, we shall see that whether $\mathfrak{g}$ is perfect or not plays a role. Recall that $\mathfrak{g}$ is called _perfect_ if $\lambda(\mathfrak{g},\mathfrak{g})=\mathfrak{g}$. ###### Example 6.1. Given $\mathfrak{g}$ an $m$-dimensional semisimple Lie algebra, let $\overline{\mathfrak{g}}=\mathfrak{g}\oplus\mathfrak{a}_{1}$. Then $H^{2}(\overline{\mathfrak{g}},\overline{\mathfrak{g}})=0$ and hence $\overline{\mathfrak{g}}$ is rigid in $\mathcal{L}_{m+1}$. The fact that the second adjoint cohomology group of $\overline{\mathfrak{g}}$ vanishes follows directly from the Hochschild-Serre spectral sequence associated to the ideal $\mathfrak{g}$ of $\overline{\mathfrak{g}}$ and the fact that $H^{1}(\mathfrak{g},\mathfrak{g})=H^{2}(\mathfrak{g},\mathfrak{g})=0$. We now address questions (1’), (2’) and (3’), where the abelian factor is small, one at a time. For the sake of completeness, the statements are given for arbitrary abelian factors. ###### Notation 6.2. For convenience, we will denote by $[\ ,\ ]$ the bracket of $\mathfrak{g}\oplus\mathfrak{a}_{l}$. ### 6.1. Non-perfect Lie algebras ###### Theorem 6.3. If $\mathfrak{g}\in\mathcal{L}_{m}$ is non-perfect, then $\mathfrak{g}\oplus\mathfrak{a}_{l}$ is non-rigid in $\mathcal{L}_{m+l}$, for all $l\in\mathbb{N}$. ###### Proof. The only remaining case is $l=1$.
We will give a linear deformation of $\mathfrak{g}\oplus\mathfrak{a}_{1}$ in $\mathcal{L}_{m+1}$, using Theorem 3.1. Let $A=\\{a\\}$ be a basis of $\mathfrak{a}_{1}$. Since $\mathfrak{g}$ is non-perfect, we may choose $b\notin[\mathfrak{g},\mathfrak{g}]$ and a basis $B$ of $\mathfrak{g}$ containing $b$ such that $[\mathfrak{g},\mathfrak{g}]\subseteq\langle B-\\{b\\}\rangle$. Take $a_{1}=a,a_{2}=b,\mathfrak{h}=\langle(A\cup B)-\\{a,b\\}\rangle$ and $y=a$. Since $a$ is central, $a^{*}\circ\operatorname{ad}_{a}=0$; moreover $b^{*}\circ\operatorname{ad}_{b}=0$ on $\mathfrak{h}$ because $[b,\mathfrak{h}]\subseteq[\mathfrak{g},\mathfrak{g}]\subseteq\langle B-\\{b\\}\rangle$. Hence, by Theorem 3.1, we can consider the linear deformation of $\mathfrak{g}\oplus\mathfrak{a}_{1}$ $[\ ,\ ]_{t}=[\ ,\ ]+t(a_{1}^{*}\wedge a_{2}^{*}\otimes y).$ This deformation is non-trivial, since the dimension of the commutator corresponding to $t\neq 0$ is larger than the dimension of the original one. ∎ ### 6.2. Solvable Lie algebras ###### Theorem 6.4. If $\mathfrak{s}\in\mathcal{S}_{m,k}$, then $\mathfrak{s}\oplus\mathfrak{a}_{l}$ is non-rigid in $\mathcal{S}_{m+l,k}$, for all $l\in\mathbb{N}$. ###### Proof. The only remaining case is $l=1$. Since $\mathfrak{s}$ is solvable, it is non-perfect. The deformation given in the proof of Theorem 6.3 is a $k$-step solvable linear deformation. ∎ ### 6.3. Nilpotent Lie algebras In this section, we complete the proof of the fact that any $m$-dimensional $k$-step nilpotent Lie algebra plus the $l$-dimensional abelian Lie algebra is not rigid, even in the smaller subvariety $\mathcal{N}_{m+l,k}$, by considering the remaining cases where the dimension of the abelian factor is $l=2$ or $l=1$. It is worth saying that there is a single exception, namely $\mathfrak{h}_{1}\oplus\mathfrak{a}_{1}$. ###### Remark 6.5. There are only two non-isomorphic nilpotent Lie algebras of dimension 4 in $\mathcal{N}_{4,2}$: $\mathfrak{a}_{4}$ and $\mathfrak{h}_{1}\oplus\mathfrak{a}_{1}$. Therefore, $\mathfrak{h}_{1}\oplus\mathfrak{a}_{1}$ is rigid in $\mathcal{N}_{4,2}$. It is worth mentioning that $\mathfrak{h}_{1}\oplus\mathfrak{a}_{1}$ is not rigid in $\mathcal{N}_{4,3}=\mathcal{N}_{4}$.
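The deformation in the proof of Theorem 6.3 can be tested on a concrete instance. In the sketch below (our own choice: $\mathfrak{g}=\mathfrak{h}_{1}$ with basis $x,y,z$, $[x,y]=z$, and $b=x\notin[\mathfrak{g},\mathfrak{g}]=\langle z\rangle$), we check the Jacobi identity of the deformed bracket on $\mathfrak{h}_{1}\oplus\mathfrak{a}_{1}$ and the jump in the dimension of the commutator:

```python
import itertools

# Basis (x, y, z, a) of h1 ⊕ a1; structure constants C[i][j][k] give the
# coefficient of e_k in [e_i, e_j]. The deformation adds [a, x]_t = t·a.
def mu_t(t):
    n = 4
    C = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    C[0][1][2], C[1][0][2] = 1.0, -1.0   # [x, y] = z
    C[3][0][3], C[0][3][3] = t, -t       # [a, x]_t = t a
    return C

def br(C, u, v):
    n = len(C)
    return [sum(u[i] * v[j] * C[i][j][k]
                for i in range(n) for j in range(n))
            for k in range(n)]

def jacobi_ok(C):
    n = len(C)
    e = [[float(i == k) for k in range(n)] for i in range(n)]
    return all(abs(s) < 1e-12
               for u, v, w in itertools.product(e, repeat=3)
               for s in (p + q + r for p, q, r in
                         zip(br(C, br(C, u, v), w),
                             br(C, br(C, v, w), u),
                             br(C, br(C, w, u), v))))

def commutator_dim(C):
    # Valid here because every bracket lands on a single basis vector.
    n = len(C)
    return len({k for i in range(n) for j in range(n)
                for k in range(n) if C[i][j][k] != 0.0})

ok = jacobi_ok(mu_t(1.0))
dims = (commutator_dim(mu_t(0.0)), commutator_dim(mu_t(1.0)))   # (1, 2)
```

For $t\neq 0$ the element $x$ acts on $a$ with eigenvalue $-t$, so the deformed algebra is not nilpotent; this is exactly the obstacle addressed in Section 6.3 below.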
Nilpotent Lie algebras are non-perfect, so by Theorem 6.3, $\mathfrak{n}\oplus\mathfrak{a}_{l}$ is non-rigid in $\mathcal{L}_{m+l}$, for all $l\in\mathbb{N}$. But to prove that $\mathfrak{n}\oplus\mathfrak{a}_{l}$ is non-rigid in $\mathcal{N}_{m+l,k}$ we must work harder, since the deformation constructed in the proof of that theorem is not nilpotent. ###### Notation 6.6. We fix some notation for what follows. The bases for $\mathfrak{a}_{1}$ and $\mathfrak{a}_{2}$ will be $A=\\{c_{1}\\}$ and $A=\\{c_{1},c_{2}\\}$ respectively. For $\mathfrak{n}\in\mathcal{N}_{m,k}$, we choose $B_{k}\subseteq...\subseteq B_{1}$ such that $B_{i}$ is a basis of $\mathfrak{n}^{i}$, the $i$-th term of the descending central series of $\mathfrak{n}$, for all $i=1,\dots,k$. We denote $B_{i}-B_{i+1}=\\{x_{1}^{i},\dots,x_{n_{i}}^{i}\\}$, for $i=1,\dots,k-1$. Notice that $B=B_{1}$ is a basis of $\mathfrak{n}$ and $B\cup A$ is a basis of $\mathfrak{n}\oplus\mathfrak{a}_{l}$. ###### Proposition 6.7. If $\mathfrak{n}\in\mathcal{N}_{m,k}$, then $\mathfrak{n}\oplus\mathfrak{a}_{2}$ is non-rigid in $\mathcal{N}_{m+2,k}$. ###### Proof. We will construct a non-trivial linear deformation of $\mathfrak{n}\oplus\mathfrak{a}_{2}$ in $\mathcal{N}_{m+2,k}$ using Corollary 3.2. Take $a_{1}=c_{1}$, $a_{2}=x_{1}^{1}$, $\mathfrak{h}=\langle(A\cup B)-\\{c_{1},x_{1}^{1}\\}\rangle$ and $y=c_{2}$. Hence we can consider the linear deformation of $\mathfrak{n}\oplus\mathfrak{a}_{2}$ given by $[\ ,\ ]_{t}=[\ ,\ ]+t(a_{1}^{*}\wedge a_{2}^{*}\otimes y).$ It is easy to see that this deformation is $k$-step nilpotent. It is non-trivial, since the dimension of the commutator corresponding to $t\neq 0$ is larger than the dimension of the original one. ∎ We come now to the most difficult case, that of $l=1$. We look at the 2-step nilpotent quotient $\tilde{\mathfrak{n}}=\dfrac{\mathfrak{n}}{\mathfrak{n}^{3}}$; recall that $\mathfrak{n}^{3}=\lambda(\lambda(\mathfrak{n},\mathfrak{n}),\mathfrak{n})$.
We split the proof into two propositions, according to whether this quotient is isomorphic to a free 2-step nilpotent Lie algebra or not. In general, given a nilpotent Lie algebra $\mathfrak{g}$ with an adapted basis $B$ as in Notation 6.6, by taking $a_{1},a_{2}\in B_{1}-B_{2}$, $\mathfrak{h}=\langle B-\\{a_{1},a_{2}\\}\rangle$ and $y$ a central element, the hypotheses of our construction are trivially fulfilled. In the case we are dealing with, in which $\mathfrak{g}=\mathfrak{n}\oplus\mathfrak{a}_{1}$, we may take $y=c_{1}$. Assuming that $\tilde{\mathfrak{n}}\ncong L_{(2)}(m)$, we are able to prove that the resulting deformation is non-trivial. In the case $\tilde{\mathfrak{n}}\cong L_{(2)}(m)$, we do something different. ###### Proposition 6.8. If $\mathfrak{n}\in\mathcal{N}_{m,k}$ and $\tilde{\mathfrak{n}}\ncong L_{(2)}(m)$, then $\mathfrak{n}\oplus\mathfrak{a}_{1}$ is non-rigid in $\mathcal{N}_{m+1,k}$. ###### Proof. We will construct a non-trivial linear deformation of $\mathfrak{n}\oplus\mathfrak{a}_{1}$ in $\mathcal{N}_{m+1,k}$, using Corollary 3.2. Since $n_{2}<\genfrac{(}{)}{0.0pt}{2}{n_{1}}{2}$, the projection of the set $\\{[x_{j}^{1},x_{i}^{1}]:1\leq i<j\leq n_{1}\\}$ onto $\tilde{\mathfrak{n}}$ is a linearly dependent set. Hence, after relabeling if necessary, we may assume that (6.2) $[x_{2}^{1},x_{1}^{1}]\in\left\langle\Big{(}\big{\\{}[x_{j}^{1},x_{i}^{1}]:1\leq i<j\leq n_{1}\big{\\}}-\big{\\{}[x_{2}^{1},x_{1}^{1}]\big{\\}}\Big{)}\cup B_{3}\right\rangle.$ Taking $a_{1}=x_{1}^{1},a_{2}=x_{2}^{1},\mathfrak{h}=\langle A\cup(B-\\{x_{1}^{1},x_{2}^{1}\\})\rangle$ and $y=c_{1}$, by Corollary 3.2 we may consider the linear deformation of $\mathfrak{n}\oplus\mathfrak{a}_{1}$ given by $[\ ,\ ]_{t}=[\ ,\ ]+t(a_{1}\wedge a_{2}\otimes y).$ It is easy to see that this deformation is $k$-step nilpotent. It is also non-trivial, because the dimension of the commutator corresponding to $t\neq 0$ is larger than the dimension of the original one, by (6.2).
∎ For the last case, we shall assume, without loss of generality, that $\mathfrak{n}$ has no abelian factor. In fact, if $\mathfrak{n}$ has an abelian factor, then $\mathfrak{n}\oplus\mathfrak{a}_{1}$ falls into the case covered by Proposition 6.7. ###### Proposition 6.9. If $\mathfrak{n}\in\mathcal{N}_{m,k}$ has no abelian factor, $\mathfrak{n}\ncong\mathfrak{h}_{1}$ and $\tilde{\mathfrak{n}}\cong L_{(2)}(m)$, then $\mathfrak{n}\oplus\mathfrak{a}_{1}$ is non-rigid in $\mathcal{N}_{m+1,k}$. ###### Proof. We will construct a non-trivial linear deformation of $\mathfrak{n}\oplus\mathfrak{a}_{1}$ in $\mathcal{N}_{m+1,k}$, using Corollary 3.2. Consider the sets $S$ and $R$, $\displaystyle S=\big{\\{}x\in\mathfrak{n}:\,[x,\mathfrak{n}^{2}]=0\text{ and }\dim([x,\mathfrak{n}])\leq 1\big{\\}},$ $\displaystyle R=\big{\\{}r\in\\{1,\dots,k\\}:\,S\cap(\mathfrak{n}^{r}-\mathfrak{n}^{r+1})\neq\emptyset\big{\\}}.$ Let $r_{0}=\min R$. Notice that if $r_{0}=1$, then $m=2$. If $r_{0}\geq 2$, there exists $y_{0}\neq 0$ such that $y_{0}\in S\cap\mathfrak{n}^{2}$. Let us consider separately the cases $r_{0}=1$ and $r_{0}\geq 2$. Case 1: If $r_{0}=1$, then $\tilde{\mathfrak{n}}\cong L_{(2)}(2)$. Since $\mathfrak{n}\ncong L_{(2)}(2)\simeq\mathfrak{h}_{1}$, we have $k\geq 3$. We may choose $B=\\{x_{1}^{1},x_{2}^{1}\\}\cup\\{[x_{1}^{1},x_{2}^{1}]\\}\cup B_{3},$ with $x_{2}^{1}\in S$. We take $a_{1}=x_{2}^{1}$, $a_{2}=[x_{1}^{1},x_{2}^{1}]$, $\mathfrak{h}=\langle\\{x_{1}^{1}\\}\cup B_{3}\rangle$ and $y=c_{1}$. Hence, using Corollary 3.2, we may consider the linear deformation of $\mathfrak{n}\oplus\mathfrak{a}_{1}$ given by $[\ ,\ ]_{t}=[\ ,\ ]+t(a_{1}\wedge a_{2}\otimes y).$ It is easy to see that this deformation is $k$-step nilpotent. It is also non-trivial, because $\dim((\mathfrak{n}\oplus\mathfrak{a}_{1})_{t}^{2})=\dim((\mathfrak{n}\oplus\mathfrak{a}_{1})^{2})+1$, for $t\neq 0$. Case 2: If $r_{0}\geq 2$, let $0\neq y_{0}\in S\cap(\mathfrak{n}^{r_{0}}-\mathfrak{n}^{r_{0}+1})$.
Choose $B$ such that $\displaystyle[y_{0},b]=0,\text{\ for all\ }b\in B-\\{x_{1}^{1}\\},$ and take $a_{1}=x_{1}^{1}$, $a_{2}=c_{1}$, $\mathfrak{h}=\langle B-\\{x_{1}^{1}\\}\rangle$ and $y=y_{0}$. Thus, by Corollary 3.2, we may consider the linear deformation of $\mathfrak{n}\oplus\mathfrak{a}_{1}$ given by $[\ ,\ ]_{t}=[\ ,\ ]+t(a_{1}\wedge a_{2}\otimes y).$ This deformation is $k$-step nilpotent, because $y_{0}\in\mathfrak{n}^{2}$. In order to prove that this deformation is non-trivial, assume instead that for arbitrarily small $t$, $(\mathfrak{n}\oplus\mathfrak{a}_{1})_{t}\cong\mathfrak{n}\oplus\mathfrak{a}_{1}$. Hence $(\mathfrak{n}\oplus\mathfrak{a}_{1})_{t}$ has an abelian factor $\langle z\rangle$. Then (6.3) $\displaystyle[z,b]_{t}=0,\text{\ for all\ }b\in B;$ (6.4) $\displaystyle z\notin(\mathfrak{n}\oplus\mathfrak{a}_{1})_{t}^{2}=\mathfrak{n}^{2}.$ Writing $z=z_{\mathfrak{n}}+\alpha c_{1}$, (6.3) implies that (6.5) $\displaystyle[z_{\mathfrak{n}},b]=0,\text{ for all }b\in B-\\{x_{1}^{1}\\};$ (6.6) $\displaystyle[z_{\mathfrak{n}},x_{1}^{1}]=[z_{\mathfrak{n}},x_{1}^{1}]_{t}=t\alpha y_{0}.$ On the one hand, if $\alpha=0$, then $z_{\mathfrak{n}}\in Z(\mathfrak{n})$ and $z_{\mathfrak{n}}\notin\mathfrak{n}^{2}$ by (6.4), and therefore $z_{\mathfrak{n}}$ is an abelian factor of $\mathfrak{n}$, contradicting that $\mathfrak{n}$ has no abelian factor. On the other hand, if $\alpha\neq 0$, then $z_{\mathfrak{n}}\in S$ by (6.5), and by (6.6) there exists $r\in R$ with $r<r_{0}$, contradicting the minimality of $r_{0}$. ∎ Summarizing all we have proved, it follows that $k$-step nilpotent Lie algebras with an abelian factor are never $k$-rigid, except for $\mathfrak{h}_{1}\oplus\mathfrak{a}_{1}$. ###### Theorem 6.10. If $\mathfrak{n}\in\mathcal{N}_{m,k}$ and $l\geq 1$, then $\mathfrak{n}\oplus\mathfrak{a}_{l}$ is rigid in $\mathcal{N}_{m+l,k}$ if and only if $\mathfrak{n}\cong\mathfrak{h}_{1}$ and $l=1$. ### 6.4. The exceptional case The only exceptional case, for which there is no unified answer on whether $\mathfrak{g}\oplus\mathfrak{a}_{l}$ is rigid or not in $\mathcal{L}_{m+l}$, is that of $\mathfrak{g}$ a perfect Lie algebra and $l=1$. Example 6.1 shows that the answer might be “rigid”. The following example shows that the answer might be “non-rigid”. ###### Example 6.11. Let $\mathfrak{g}$ be the complex 5-dimensional Lie algebra with basis $\\{a,b,c,d,e\\}$ and bracket defined by: $\begin{gathered}{}\lambda(a,b)=2b,\qquad\lambda(a,c)=-2c,\qquad\lambda(b,c)=a,\\\ \lambda(a,d)=d,\qquad\lambda(a,e)=-e,\qquad\lambda(b,e)=d,\qquad\lambda(c,d)=e.\end{gathered}$ Notice that $\mathfrak{g}=\mathfrak{sl}_{2}\ltimes\mathbb{C}^{2}$, where the semidirect product is given by the $2$-dimensional irreducible representation of $\mathfrak{sl}_{2}$. It holds that $H^{2}(\mathfrak{g},\mathfrak{g})=0$ and hence $\mathfrak{g}$ is rigid. Let $\overline{\mathfrak{g}}=\mathfrak{g}\oplus\mathfrak{a}_{1}$, where $\mathfrak{a}_{1}$ is an abelian factor, let its Lie bracket be denoted by $[\ ,\ ]$ and let $\\{f\\}$ be a basis for $\mathfrak{a}_{1}$. Consider the linear deformation of $\overline{\mathfrak{g}}$ given by $[\ ,\ ]_{t}=[\ ,\ ]+t\varphi,$ where $\varphi$ is the 2-cocycle $\varphi:=d^{*}\wedge e^{*}\otimes f.$ Then $[\ ,\ ]_{t}$ is given by $\begin{gathered}{}[a,b]_{t}=2b,\qquad[a,c]_{t}=-2c,\qquad[b,c]_{t}=a,\\\ [a,d]_{t}=d,\qquad[a,e]_{t}=-e,\qquad[b,e]_{t}=d,\qquad[c,d]_{t}=e,\qquad[d,e]_{t}=tf.\end{gathered}$ Clearly $\overline{\mathfrak{g}}_{t}$ is perfect for every $t\neq 0$, so that it has no abelian factor and the deformation is non-trivial. Thus $\overline{\mathfrak{g}}$ is non-rigid. ## 7\. Appendix by Diego Sulca The classical Nijenhuis-Richardson theorem asserts that an $n$-dimensional Lie algebra $\mathfrak{g}$ for which the second Cartan-Eilenberg cohomology $H^{2}(\mathfrak{g},\mathfrak{g})$ is zero must be rigid in the variety of $n$-dimensional Lie algebras [NR].
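The claims in Example 6.11 can be verified numerically: $[\ ,\ ]_{t}$ satisfies the Jacobi identity for every $t$ (i.e., $\varphi$ is a 2-cocycle), and the derived algebra has dimension $6$ for $t\neq 0$, so $\overline{\mathfrak{g}}_{t}$ is perfect. A sketch with structure constants, basis ordered $a,b,c,d,e,f$ (the helper names are ours):

```python
import numpy as np

names = ["a", "b", "c", "d", "e", "f"]
idx = {s: i for i, s in enumerate(names)}

def bracket_t(t):
    """Structure constants C[i, j, k] of [ , ]_t in Example 6.11."""
    C = np.zeros((6, 6, 6))
    rels = [("a", "b", "b", 2), ("a", "c", "c", -2), ("b", "c", "a", 1),
            ("a", "d", "d", 1), ("a", "e", "e", -1), ("b", "e", "d", 1),
            ("c", "d", "e", 1), ("d", "e", "f", t)]
    for x, y, z, coef in rels:
        C[idx[x], idx[y], idx[z]] += coef
        C[idx[y], idx[x], idx[z]] -= coef
    return C

def jacobi_defect(C):
    """Max |cyclic sum [[x,y],z] + [[y,z],x] + [[z,x],y]| over basis triples."""
    J = (np.einsum('ijl,lkm->ijkm', C, C)
         + np.einsum('jkl,lim->ijkm', C, C)
         + np.einsum('kil,ljm->ijkm', C, C))
    return np.abs(J).max()

def derived_dim(C):
    """Dimension of the derived algebra: rank of all bracket vectors."""
    return np.linalg.matrix_rank(C.reshape(36, 6), tol=1e-12)

print(jacobi_defect(bracket_t(1.0)))  # 0.0 -> [ , ]_t is a Lie bracket
print(derived_dim(bracket_t(0.0)))    # 5: the abelian factor <f> survives
print(derived_dim(bracket_t(1.0)))    # 6: the deformed algebra is perfect
```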
The proof given in [NR] can be easily adapted to show analogous results for other classes of algebras. The general strategy is discussed by Remm in [RE, Sections 2.2 and 2.3]. We provide full details and apply this generalization to the variety of $n$-dimensional $k$-step nilpotent Lie algebras and the variety of $n$-dimensional $k$-step solvable Lie algebras. We make use of the language of schemes. For background on algebraic groups acting on schemes, we refer to [M, Chapter 7]. Throughout, $\mathbb{K}$ denotes any field of characteristic zero. Let $\mathbb{A}_{\mathbb{K}}^{m}$ be the affine $m$-space over $\mathbb{K}$ and let $X\subset\mathbb{A}_{\mathbb{K}}^{m}$ be a closed subscheme. Given a rational point $x\in X(\mathbb{K})$, the Zariski tangent space $T_{x}X$ of $X$ at $x$ can be computed as $T_{x}X=\\{y\in\mathbb{K}^{m}:x+\varepsilon y\in X(\mathbb{K}[\varepsilon])\\},$ where $\mathbb{K}[\varepsilon]=\mathbb{K}+\mathbb{K}\varepsilon$ is the $\mathbb{K}$-algebra of dual numbers ($\varepsilon^{2}=0$). We have $\dim_{x}X\leq\dim T_{x}X$, where $\dim_{x}X$ is the local dimension of $X$ at $x$ (i.e., the dimension of the local ring of $X$ at $x$) and $\dim T_{x}X$ is the dimension of $T_{x}X$ as a vector space over $\mathbb{K}$. Equality holds if and only if $x$ is a non-singular point of $X$. Fix now $n\in\mathbb{N}$. We think of $\mathbb{A}_{\mathbb{K}}^{n^{3}}$ as representing the functor $R\mapsto\\{R\mbox{-bilinear maps}\ R^{n}\times R^{n}\to R^{n}\\}=\\{\mathbb{K}\mbox{-bilinear maps}\ \mathbb{K}^{n}\times\mathbb{K}^{n}\to R^{n}\\}$ from commutative $\mathbb{K}$-algebras to the category of sets.
The linear group $GL_{n}$ (viewed as an affine group scheme over $\mathbb{K}$) acts on $\mathbb{A}_{\mathbb{K}}^{n^{3}}$ as follows: given a commutative $\mathbb{K}$-algebra $R$, a matrix $g\in GL_{n}(R)$ and an $R$-bilinear map $\mu:R^{n}\times R^{n}\to R^{n}$, we define $g\cdot\mu:R^{n}\times R^{n}\to R^{n}$ by setting $(g\cdot\mu)(x,y):=g(\mu(g^{-1}(x),g^{-1}(y))),\quad x,y\in R^{n}.$ Let ${X}\subset\mathbb{A}_{\mathbb{K}}^{n^{3}}$ be a closed subscheme that is invariant under the action of $GL_{n}$. Fix a rational point $\mu\in{X}(\mathbb{K})$ (if any exists). The image of the orbit map $GL_{n}\to{X}$, $g\mapsto g\cdot\mu$, is locally closed in ${X}$. The orbit $O(\mu)$ of $\mu$ is this image equipped with its structure of a reduced subscheme of ${X}$. It is smooth over $\mathbb{K}$. The isotropy group $G_{\mu}$ at $\mu$ is a closed subgroup of $GL_{n}$, and for all commutative $\mathbb{K}$-algebras $R$, $G_{\mu}(R)=\\{g\in GL_{n}(R):g\cdot\mu_{R}=\mu_{R}\\},$ where $\mu_{R}\in{X}(R)$ denotes the image of $\mu\in{X}(\mathbb{K})$ in ${X}(R)$.
The orbit map $GL_{n}\to O(\mu)$ induces a $\mathbb{K}$-isomorphism (7.1) $\displaystyle GL_{n}/G_{\mu}\cong O(\mu).$ For $\mu\in{X}(\mathbb{K})$ we define $\displaystyle Z_{{X}}^{2}(\mu,\mu)$ $\displaystyle:=T_{\mu}({X})=\\{\omega:\mathbb{K}^{n}\times\mathbb{K}^{n}\to\mathbb{K}^{n}\ |\ \mu+\varepsilon\omega\in{X}(\mathbb{K}[\varepsilon])\\},$ $\displaystyle B^{2}(\mu,\mu)$ $\displaystyle:=\\{\delta g:\mathbb{K}^{n}\times\mathbb{K}^{n}\to\mathbb{K}^{n}\ |\ g\in\mathfrak{gl}_{n}(\mathbb{K})\\},$ where for $g\in\mathfrak{gl}_{n}(\mathbb{K})$, $\delta g$ is the $\mathbb{K}$-bilinear map $\delta g(x,y)=g(\mu(x,y))-\mu(g(x),y)-\mu(x,g(y)),\ x,y\in\mathbb{K}^{n}.$ Notice that $B^{2}(\mu,\mu)\subseteq Z_{{X}}^{2}(\mu,\mu).$ Indeed, given $g\in\mathfrak{gl}_{n}(\mathbb{K})$, we have $I+\varepsilon g\in GL_{n}(\mathbb{K}[\varepsilon])$, and it is easy to check that (7.2) $\displaystyle\mu+\varepsilon\delta g=(I+\varepsilon g)\cdot\mu_{\mathbb{K}[\varepsilon]}\in{X}(\mathbb{K}[\varepsilon]),$ which shows that $\delta g\in Z_{X}^{2}(\mu,\mu)$. Finally, set $\displaystyle H_{{X}}^{2}(\mu,\mu):=\frac{Z_{X}^{2}(\mu,\mu)}{B^{2}(\mu,\mu)}.$ ###### Theorem 7.1. Let $\mathbb{K}$ be a field of characteristic zero. For $\mu\in{X}(\mathbb{K})$ the following conditions are equivalent. 1. (1) $H_{X}^{2}(\mu,\mu)=0$. 2. (2) $O(\mu)$ is an open subscheme of ${X}$. 3. (3) $O(\mu)$ is an open subset of $X$ and ${X}$ is reduced at $\mu$. ###### Proof. We first review the fact that $T_{\mu}O(\mu)$ is isomorphic to $B^{2}(\mu,\mu)$. By (7.1), there are $\mathbb{K}$-linear isomorphisms $T_{\mu}O(\mu)\cong T_{e}(GL_{n}/G_{\mu})\cong\mathfrak{gl}_{n}(\mathbb{K})/\operatorname{Lie}(G_{\mu}),$ where $e\in GL_{n}/G_{\mu}$ denotes the image of the identity of $GL_{n}$. Thus, it is enough to show that $\operatorname{Lie}(G_{\mu})$ is the kernel of the surjective map $\delta:\mathfrak{gl}_{n}(\mathbb{K})\to B^{2}(\mu,\mu)$.
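Over $\mathbb{R}$, the identity (7.2) says that $\delta g$ is the derivative at the identity of the orbit map $g\mapsto g\cdot\mu$. This can be checked numerically on the Heisenberg bracket by comparing $\delta g$ with a finite-difference derivative of the honest $GL_{n}$-action (a sketch; the structure-constant encoding and helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Heisenberg bracket h1: [e0, e1] = e2; C[p, q, l] is the e_l-coefficient.
C = np.zeros((3, 3, 3))
C[0, 1, 2], C[1, 0, 2] = 1.0, -1.0

g = rng.standard_normal((3, 3))

def act(h, C):
    """(h . mu)(x, y) = h mu(h^{-1} x, h^{-1} y), on structure constants."""
    hinv = np.linalg.inv(h)
    # C'[i, j, m] = sum_{p,q,l} hinv[p,i] hinv[q,j] C[p,q,l] h[m,l]
    return np.einsum('pi,qj,pql,ml->ijm', hinv, hinv, C, h)

# delta g(x, y) = g(mu(x, y)) - mu(g x, y) - mu(x, g y)
delta_g = (np.einsum('ijl,ml->ijm', C, g)
           - np.einsum('li,ljm->ijm', g, C)
           - np.einsum('lj,ilm->ijm', g, C))

# Central difference of s |-> (I + s g) . mu at s = 0 should recover delta g,
# mirroring (7.2) with the dual number eps replaced by a small real step.
s = 1e-6
numeric = (act(np.eye(3) + s * g, C) - act(np.eye(3) - s * g, C)) / (2 * s)
print(np.abs(numeric - delta_g).max() < 1e-6)  # True
```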
Now $\displaystyle\operatorname{Lie}(G_{\mu})$ $\displaystyle=\\{g\in\mathfrak{gl}_{n}(\mathbb{K}):I+\varepsilon g\in G_{\mu}(\mathbb{K}[\varepsilon])\\}$ $\displaystyle=\\{g\in\mathfrak{gl}_{n}(\mathbb{K}):(I+\varepsilon g)\cdot\mu_{\mathbb{K}[\varepsilon]}=\mu_{\mathbb{K}[\varepsilon]}\\}.$ As observed in (7.2), we have $(I+\varepsilon g)\cdot\mu_{\mathbb{K}[\varepsilon]}=\mu+\varepsilon\delta g$. Hence, $(I+\varepsilon g)\cdot\mu_{\mathbb{K}[\varepsilon]}=\mu_{\mathbb{K}[\varepsilon]}$ if and only if $\delta g=0$, as was to be shown. We now proceed with the proof of the equivalences. Note first that (7.3) $\displaystyle\dim B^{2}(\mu,\mu)$ $\displaystyle=\dim T_{\mu}O(\mu)=\dim_{\mu}O(\mu)\leq\dim_{\mu}{X}$ $\displaystyle\leq\dim T_{\mu}{X}_{\textrm{red}}\leq\dim T_{\mu}{X}=\dim Z_{{X}}^{2}(\mu,\mu),$ where in the first equality we use the isomorphism of the above paragraph and in the second one we use the fact that $O(\mu)$ is smooth. The inequalities are clear. If $H_{{X}}^{2}(\mu,\mu)=0$, then all the inequalities in (7.3) become equalities. The equality $\dim_{\mu}{X}=\dim T_{\mu}{X}$ implies that $\mu$ is a non-singular point of ${X}$. Since the set of non-singular points in a scheme of finite type over a field is open, the schemes ${X}$ and $O(\mu)$ are regular of the same dimension in a neighborhood of $\mu$. As $O(\mu)$ is a locally closed subscheme of ${X}$, we deduce that $O(\mu)$ and ${X}$ coincide locally at $\mu$. Given another rational point $\mu^{\prime}\in O(\mu)(\mathbb{K})$, clearly $O(\mu^{\prime})=O(\mu)$ and $H_{{X}}^{2}(\mu^{\prime},\mu^{\prime})\cong H_{{X}}^{2}(\mu,\mu)$, which is assumed to be zero. By applying the above reasoning to each such $\mu^{\prime}\in O(\mu)(\mathbb{K})$, we find that ${X}$ and $O(\mu)$ coincide as schemes locally at each rational point of $O(\mu)$. Now, as $\mathbb{K}$ is infinite, $GL_{n}(\mathbb{K})$ is dense in $GL_{n}$, hence $O(\mu)(\mathbb{K})$ is dense in $O(\mu)$. It follows that $O(\mu)$ is an open subscheme of ${X}$.
This completes the proof of (1)$\Rightarrow$(2). (2)$\Rightarrow$(3) is obvious. We finally show that (3)$\Rightarrow$(1). The first hypothesis implies that $O(\mu)$ is an open subscheme of ${X}_{\textrm{red}}$, hence the first two inequalities of (7.3) are indeed equalities. Since in addition ${X}$ is reduced at $\mu$, the last inequality is also an equality. Summarizing, all the inequalities in (7.3) are equalities, hence $H_{{X}}^{2}(\mu,\mu)=0$. ∎ We compute $Z_{X}^{2}(\mu,\mu)$ for three examples of $X$. ### The scheme $L_{n}$ of Lie brackets on $\mathbb{K}^{n}$ [NR] Let $L_{n}\subset\mathbb{A}_{\mathbb{K}}^{n^{3}}$ be the closed subscheme such that for all commutative $\mathbb{K}$-algebras $R$, $L_{n}(R)$ is the set of $R$-bilinear maps $\mu:R^{n}\times R^{n}\to R^{n}$ that are alternating (i.e., $\mu(x,x)=0$ for all $x\in R^{n}$) and satisfy the Jacobi identity $\displaystyle\circlearrowleft\mu(\mu(x,y),z)=0,\quad\forall x,y,z\in R^{n},$ where $\circlearrowleft$ denotes the cyclic sum over $x,y,z$. Given $\mu\in L_{n}(\mathbb{K})$, by definition $Z_{L_{n}}^{2}(\mu,\mu)$ is the set of bilinear maps $\omega:\mathbb{K}^{n}\times\mathbb{K}^{n}\to\mathbb{K}^{n}$ such that $\mu+\varepsilon\omega$ is alternating and satisfies the Jacobi identity. In other words, for all $x,y,z\in\mathbb{K}^{n}$, $\displaystyle(\mu+\varepsilon\omega)(x,x)$ $\displaystyle=0,$ $\displaystyle\circlearrowleft\mu(\mu(x,y)+\varepsilon\omega(x,y),z)+\circlearrowleft\varepsilon\omega(\mu(x,y)+\varepsilon\omega(x,y),z)$ $\displaystyle=0.$ As $\mu$ is alternating, the first equality is equivalent to saying that $\omega$ is alternating.
Since $\varepsilon^{2}=0$ and $\mu$ satisfies the Jacobi identity, the left-hand side of the second equality is simply $\varepsilon\delta\omega(x,y,z)$, where $\delta\omega(x,y,z):=\circlearrowleft\mu(\omega(x,y),z)+\circlearrowleft\omega(\mu(x,y),z).$ We conclude that $\displaystyle Z_{L_{n}}^{2}(\mu,\mu)=\\{\omega:\mathbb{K}^{n}\times\mathbb{K}^{n}\to\mathbb{K}^{n}\ |\ \omega\ \mbox{is }\mathbb{K}\mbox{-bilinear, alternating and }\delta\omega=0\\}.$ It follows that $H_{L_{n}}^{2}(\mu,\mu)$ is the usual Cartan-Eilenberg cohomology of the Lie algebra $(\mathbb{K}^{n},\mu)$, denoted simply by $H^{2}(\mu,\mu)$. ###### Corollary 7.2. Let $\mathbb{K}$ be an algebraically closed field of characteristic zero and let $\mathcal{L}_{n}=L_{n}(\mathbb{K})$, with the structure of affine algebraic variety. Given $\mu\in\mathcal{L}_{n}$, if $H^{2}(\mu,\mu)=0$, then $\mu$ is rigid in $\mathcal{L}_{n}$. ### The scheme of $k$-solvable Lie brackets on $\mathbb{K}^{n}$ Let $S_{n,k}\subset L_{n}$ be the closed subscheme such that for all commutative $\mathbb{K}$-algebras $R$, $S_{n,k}(R)$ is the set of those $\mu\in L_{n}(R)$ such that $\mu^{(k)}(x_{1},\ldots,x_{2^{k}})=0,\quad\forall x_{1},\ldots,x_{2^{k}}\in R^{n},$ where $\mu^{(i)}:\underbrace{R^{n}\times\cdots\times R^{n}}_{2^{i}}\to R^{n}$ is defined inductively by setting $\mu^{(0)}=\operatorname{id}_{R^{n}}$ and $\mu^{(i)}(x_{1},\ldots,x_{2^{i}})=\mu(\mu^{(i-1)}(x_{1},\ldots,x_{2^{i-1}}),\mu^{(i-1)}(x_{2^{i-1}+1},\ldots,x_{2^{i}}))$ for $i\geq 1$. Given $\mu\in S_{n,k}(\mathbb{K})$, by definition $Z_{S_{n,k}}^{2}(\mu,\mu)$ is the set of those $\omega\in Z_{L_{n}}^{2}(\mu,\mu)$ satisfying the additional condition $(\mu+\varepsilon\omega)^{(k)}=0$.
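Returning to $L_{n}$: the inclusion $B^{2}(\mu,\mu)\subseteq Z_{L_{n}}^{2}(\mu,\mu)$ means that every coboundary $\delta g$ is killed by the map $\delta\omega$ above. This is quickly confirmed numerically on the Heisenberg bracket (a sketch; the structure-constant encoding and helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# Heisenberg bracket: [e0, e1] = e2
C = np.zeros((3, 3, 3))
C[0, 1, 2], C[1, 0, 2] = 1.0, -1.0

def cyc(T):
    """Cyclic sum over the first three indices of a rank-4 tensor."""
    return T + np.transpose(T, (1, 2, 0, 3)) + np.transpose(T, (2, 0, 1, 3))

def delta2(mu, w):
    """delta w(x,y,z) = cyc mu(w(x,y),z) + cyc w(mu(x,y),z), as a 4-tensor."""
    return cyc(np.einsum('ijl,lkm->ijkm', w, mu)    # mu(w(e_i,e_j), e_k)
               + np.einsum('ijl,lkm->ijkm', mu, w)) # w(mu(e_i,e_j), e_k)

# A coboundary: w = delta g for a random g in gl_3 (as in the appendix).
g = rng.standard_normal((3, 3))
w = (np.einsum('ijl,ml->ijm', C, g)
     - np.einsum('li,ljm->ijm', g, C)
     - np.einsum('lj,ilm->ijm', g, C))

print(np.abs(w + np.transpose(w, (1, 0, 2))).max() < 1e-12)  # True: alternating
print(np.abs(delta2(C, w)).max() < 1e-12)  # True: coboundaries are cocycles
```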
One easily checks that $\displaystyle(\mu+\varepsilon\omega)^{(k)}(x_{1},\ldots,x_{2^{k}})=\mu^{(k)}(x_{1},\ldots,x_{2^{k}})+\varepsilon\sigma_{k}\omega(x_{1},\ldots,x_{2^{k}}),$ where $\sigma_{i}\omega:\underbrace{R^{n}\times\cdots\times R^{n}}_{2^{i}}\to R^{n}$ is defined inductively as follows: $\sigma_{1}\omega=\omega$, and $\displaystyle\sigma_{i}\omega(x_{1},\ldots,x_{2^{i}})=$ $\displaystyle\mu(\mu^{(i-1)}(x_{1},\ldots,x_{2^{i-1}}),\sigma_{i-1}\omega(x_{2^{i-1}+1},\ldots,x_{2^{i}}))$ $\displaystyle+\mu(\sigma_{i-1}\omega(x_{1},\ldots,x_{2^{i-1}}),\mu^{(i-1)}(x_{2^{i-1}+1},\ldots,x_{2^{i}}))$ $\displaystyle+\omega(\mu^{(i-1)}(x_{1},\ldots,x_{2^{i-1}}),\mu^{(i-1)}(x_{2^{i-1}+1},\ldots,x_{2^{i}})),$ for $i\geq 2$. Since $\mu^{(k)}(x_{1},\ldots,x_{2^{k}})=0$, we obtain that $\displaystyle Z_{S_{n,k}}^{2}(\mu,\mu)=\\{\omega\in Z_{L_{n}}^{2}(\mu,\mu)\ |\ \sigma_{k}\omega=0\\}.$ We shall use the notation $H_{k{\textit{-sol}}}^{2}(\mu,\mu):=H_{S_{n,k}}^{2}(\mu,\mu)$. ###### Corollary 7.3. Let $\mathbb{K}$ be an algebraically closed field of characteristic zero and let $\mathcal{S}_{n,k}=S_{n,k}(\mathbb{K})$, with the structure of affine algebraic variety. Given $\mu\in\mathcal{S}_{n,k}$, if $H^{2}_{k{\textit{-sol}}}(\mu,\mu)=0$, then $\mu$ is rigid in $\mathcal{S}_{n,k}$.
### The scheme of $k$-step nilpotent Lie brackets on $\mathbb{K}^{n}$ Let $1\leq k\leq n-1$ and let $N_{n,k}\subset L_{n}$ be the closed subscheme such that for all commutative $\mathbb{K}$-algebras $R$, $N_{n,k}(R)$ is the set of those $\mu\in L_{n}(R)$ such that $\mu^{k}(x_{1},\ldots,x_{k+1})=0,\quad\forall x_{1},\ldots,x_{k+1}\in R^{n},$ where $\mu^{i}:\underbrace{R^{n}\times\cdots\times R^{n}}_{i+1}\to R^{n}$ is defined inductively by setting $\mu^{0}=\operatorname{id}_{R^{n}}$ and $\displaystyle\mu^{i}(x_{1},\ldots,x_{i+1}):=\mu(\mu^{i-1}(x_{1},\ldots,x_{i}),x_{i+1})\quad\mbox{for}\quad i\geq 1.$ Given $\mu\in N_{n,k}(\mathbb{K})$, by definition $Z_{N_{n,k}}^{2}(\mu,\mu)$ is the set of those $\omega\in Z_{L_{n}}^{2}(\mu,\mu)$ satisfying the additional condition $(\mu+\varepsilon\omega)^{k}(x_{1},\ldots,x_{k+1})=0$ for all $x_{1},\ldots,x_{k+1}\in\mathbb{K}^{n}.$ One easily checks by induction that $\displaystyle(\mu+\varepsilon\omega)^{k}(x_{1},\ldots,x_{k+1})=\mu^{k}(x_{1},\ldots,x_{k+1})+\varepsilon\eta_{k}\omega(x_{1},\ldots,x_{k+1}),$ where $\eta_{k}\omega:\underbrace{\mathbb{K}^{n}\times\cdots\times\mathbb{K}^{n}}_{k+1}\to\mathbb{K}^{n}$ is the $\mathbb{K}$-multilinear map $\displaystyle\eta_{k}\omega(x_{1},\ldots,x_{k+1})=\sum_{i=1}^{k}\mu^{k-i}(\omega(\mu^{i-1}(x_{1},\ldots,x_{i}),x_{i+1}),x_{i+2},\ldots,x_{k+1}).$ Since $\mu^{k}(x_{1},\ldots,x_{k+1})=0$ we obtain that $\displaystyle Z_{N_{n,k}}^{2}(\mu,\mu)=\\{\omega\in Z_{L_{n}}^{2}(\mu,\mu):\eta_{k}\omega=0\\}.$ We shall denote $H_{k\textrm{-nil}}^{2}(\mu,\mu):=H_{N_{n,k}}^{2}(\mu,\mu)$.
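The operators $\mu^{i}$ are easy to realize on structure constants, which gives a direct numerical test of $k$-step nilpotency; for the Heisenberg algebra $\mathfrak{h}_{1}$ one gets $\mu^{1}\neq 0$ and $\mu^{2}=0$. A sketch (helper names are ours):

```python
import numpy as np

# Heisenberg: [e0, e1] = e2 (2-step nilpotent)
C = np.zeros((3, 3, 3))
C[0, 1, 2], C[1, 0, 2] = 1.0, -1.0

def mu_power(C, i):
    """Structure constants of mu^i, the left-normed iterated bracket
    mu^i(x1, ..., x_{i+1}) = mu(mu^{i-1}(x1, ..., x_i), x_{i+1})."""
    n = C.shape[0]
    T = np.eye(n)  # mu^0 = id, as an (n, n) tensor
    for _ in range(i):
        # contract the output slot of T with the first slot of C
        T = np.einsum('...l,lkm->...km', T, C)
    return T

print(np.abs(mu_power(C, 1)).max() != 0)  # True: mu^1 = mu is nonzero
print(np.abs(mu_power(C, 2)).max() == 0)  # True: mu^2 = 0, i.e. 2-step nilpotent
```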
By using the notation from Section 2.1, we can rewrite it as follows: $\displaystyle H^{2}_{k\textrm{-nil}}(\mu,\mu)$ $\displaystyle=\frac{Z_{N_{n,k}}^{2}(\mu,\mu)}{B^{2}(\mu,\mu)}$ $\displaystyle=\frac{\operatorname{Ker}(\delta)\cap\operatorname{Ker}(\eta_{k})}{\operatorname{Im}(\delta^{1})},$ with $\delta:\Lambda^{2}({\mathbb{K}^{n}}^{*})\rightarrow\Lambda^{3}({\mathbb{K}^{n}}^{*})$ as before: $\delta\omega(x,y,z):=\circlearrowleft\mu(\omega(x,y),z)+\circlearrowleft\omega(\mu(x,y),z),$ $\delta^{1}:\Lambda^{1}({\mathbb{K}^{n}}^{*})\rightarrow\Lambda^{2}({\mathbb{K}^{n}}^{*})$: $\delta^{1}(f)(x,y)=\mu(f(x),y)+\mu(x,f(y))-f(\mu(x,y)),$ and $\eta_{k}$ given by: $\eta_{k}(\omega)=\sum_{j=0}^{k-1}\mu^{k-1-j}\circ\omega\circ\mu^{j}.$ Note that this is the $k$-nil cohomology introduced in [BCC]. ###### Corollary 7.4. Let $\mathbb{K}$ be an algebraically closed field of characteristic zero and let $\mathcal{N}_{n,k}=N_{n,k}(\mathbb{K})$, with the structure of affine algebraic variety. Given $\mu\in\mathcal{N}_{n,k}$, if $H^{2}_{k\textit{-nil}}(\mu,\mu)=0$, then $\mu$ is rigid in $\mathcal{N}_{n,k}$. ###### Remark 7.5. If $\mathbb{K}=\mathbb{R}$ and $H^{2}(\mu,\mu)=0$, then $O(\mu)$ is open in $(L_{n})_{\textrm{red}}$, hence $O(\mu)(\mathbb{R})$ is open in $L_{n}(\mathbb{R})$ if we view $L_{n}(\mathbb{R})$ as an $\mathbb{R}$-analytic space. A similar observation holds for $S_{n,k}$ and $N_{n,k}$. In particular, we recover [BCC, Theorem 2.1]. ###### Acknowledgements. The authors would like to thank an anonymous referee for a thorough reading of this paper, whose comments led to significant improvements over its first version. This paper is part of the Ph.D. thesis of Josefina Barrionuevo, being carried out thanks to a Doctoral Fellowship from CONICET, Argentina. ## References * [A] Alvarez M.A., _On rigid 2-step nilpotent Lie algebras_, Algebra Colloq. 25, No. 2 (2018), 349-360. * [BCC] Brega O., Cagliero L.
and Chaves-Ochoa A., _The Nash–Moser theorem of Hamilton and rigidity of finite dimensional nilpotent Lie algebras_ , Journal of Pure and Applied Algebra 221, (2017), 2250-2265. * [AAA] Arancibia B. Alfaro, Alvarez M. A. and Anza Y., _Degenerations of graph Lie algebras_ , Linear and Multilinear Algebra (2020), DOI: 10.1080/03081087.2020.1712317 * [AG] Ancochea Bermudez J.M. and Goze M., _The rank of a linear system of roots of a complex solvable rigid Lie algebra_ (French), Commun. Algebra 20 No. 3, (1992), 875–887. * [BS] Burde D. and Steinhoff C., _Classification of Orbit Closures of 4-Dimensional Complex Lie Algebras_ , Journal of Algebra Volume 214, Issue 2, (1999), 729-739. * [BT] Barrionuevo J. and Tirao P., _Rigid 2-step graph Lie algebras_ , arXiv 2206.10572. * [C] Carles R., _Sur la structure des algèbres de Lie rigides_ , Annales de l’institut Fourier, 34(1984), 65-82. * [GA] Goze M. and Ancochea Bermudez, _On the varieties of nilpotent Lie algebras of dimension 7 and 8_ , J. of Pure and Applied Algebra 77 (1992), 131–140. * [GR1] Goze M. and Remm E., _$k$ -step nilpotent Lie algebras_, Georgian Math. J. 22, No. 2 (2015), 219-234. * [GR2] Goze M. and Remm E., _Lie algebras with associative structures. Applications to the study of 2-step nilpotent Lie algebras_ , arXiv 1201.2674v3 (2013). * [GH1] Grunewald F. and O’Halloran J., _Varieties of nilpotent Lie algebras of dimension less than six_ , J. Algebra 112 (1988), 31–325. * [GH2] Grunewald F. and O’Halloran J., _Deformations of Lie Algebras_ , Journal of Algebra 162, (1993), 210-224. * [GT1] Granada-Herrera F. and Tirao P., _Filiform Lie algebras of dimension 8 as degenerations_ , Journal of algebras and its applications 13, (2014). * [GT2] Granada-Herrera F. and Tirao P., _The Grunewald-O’Halloran conjecture for nilpotent Lie algebras of rank $\geq 1$_, Comm. Alg. vol. 4 (2016), 2180-2192 * [H] Hamilton R.S., _The inverse function theorem of Nash and Moser_ , Bull. Amer. Math. Soc., 7 (1982), 65-222. 
* [LL] Leger and Lucks, _Cohomology of nilradicals of Borel subalgebras_, Trans. Am. Math. Soc., 195 (1974), 305-316. * [MM] Markl M., _Deformation Theory of Algebras and Their Diagrams_, CBMS 116, AMS, 2012. * [M] Milne J. S., _Algebraic Groups: The Theory of Affine Group Schemes of Finite Type over a Field_, Cambridge University Press, 2017. * [NR] Nijenhuis A. and Richardson R.W., _Deformations of Lie Algebra Structures_, Journal of Mathematics and Mechanics 17 (1), (1967), 89-105. * [R] Richardson R.W., _On the rigidity of semi-direct products of Lie algebras_, Pacific J. Math., Volume 22, Number 2 (1967), 339-344. * [RE] Remm E., _Rigid Lie algebras and algebraicity_, Rev. Roumaine Math. Pures Appl. 65, (2020), 491-510. * [S] Seeley C., _Degenerations of 6–dimensional nilpotent Lie algebras over $\mathbb{C}$_, Comm. Alg. 18 (1990), 3493–3505. * [TV] Tirao P. and Vera S., _There are no rigid filiform Lie algebras of low dimension_, Journal of Lie Theory vol. 29 (2019), 391-412. * [V] Vergne M., _Cohomologie des algèbres de Lie nilpotentes. Application à l'étude de la variété des algèbres de Lie nilpotentes_, Bulletin de la S. M. F., 98 (1970), 81-116.
# Towards Overfitting Avoidance: Tuning-free Tensor-aided Multi-user Channel Estimation for 3D Massive MIMO Communications Lei Cheng and Qingjiang Shi L. Cheng is with the Shenzhen Research Institute of Big Data, Shenzhen, Guangdong, P. R. China (e-mail: leicheng@sribd.cn). Q. Shi is with the School of Software Engineering at Tongji University, Shanghai, 201804, China. He is also with the Shenzhen Research Institute of Big Data, Shenzhen, 518172, China. Email<EMAIL_ADDRESS> ###### Abstract Channel estimation has long been deemed one of the most critical problems in three-dimensional (3D) massive multiple-input multiple-output (MIMO), which is recognized as the leading technology that enables 3D spatial signal processing in the fifth-generation (5G) wireless communications and beyond. Recently, by exploiting the angular channel model and tensor decompositions, the accuracy of single-user channel estimation for 3D massive MIMO communications has been significantly improved given a limited number of pilot signals. However, these existing approaches cannot be straightforwardly extended to the multi-user channel estimation task, where the base station (BS) aims at acquiring the channels of multiple users at the same time. The difficulty is that the coupling among multiple users’ channels makes the channel estimation deviate from widely-used tensor decompositions. It gives a non-standard tensor decomposition format that has not been well tackled. To overcome this challenge, besides directly fitting the new tensor model for channel estimation to the wireless data via the block coordinate descent (BCD) method, which is prone to overfitting the noise or requires regularization parameter tuning, we further propose a novel tuning-free channel estimation algorithm that can automatically control the channel model complexity and thus effectively avoid the overfitting.
Numerical results are presented to demonstrate the excellent performance of the proposed algorithm in terms of both estimation accuracy and overfitting avoidance. ###### Index Terms: Joint model-and-data-driven wireless communications, 3D massive MIMO, channel estimation, tensor methods, tuning-free. ## I Introduction In recent years, massive multiple-input multiple-output (MIMO) has gradually evolved from a theoretical concept into a leading practical technology for the next generation of wireless communications [1, 2]. To further embrace the forthcoming era of the Internet of Things (IoT), advanced three-dimensional (3D) spatial signal processing techniques (e.g., 3D beamforming) [5, 6] are called for to allow high-quality communications among multiple users (including unmanned aerial vehicles (UAVs) in the sky [3] and unmanned ground vehicles (UGVs) on roads [4]). To achieve this, rather than mechanically tilting conventional antenna arrays, 3D massive MIMO, in which the BS is equipped with a 3D antenna array, has emerged as an enabling technique to broaden the coverage of the BS [7, 8, 9]. However, its promise can only be fulfilled when accurate channel state information (CSI) of multiple users is available at the BS. As one of the most critical problems in wireless communications, channel estimation has been continuously studied for many decades, drawing on nearly all branches of signal processing, including optimization [10], statistics [11], algebra [12], and machine learning [13]. Its ultimate goal is to estimate wireless channels as accurately as possible given a limited number of pilot signals, while its challenges vary significantly under different channel models and wireless systems, including millimeter-wave massive MIMO [47], IoT [48] and machine-type communications (MTC) [49]. That is why the “no free lunch” theorem [14] in machine learning also holds in channel estimation research, in the sense that there is no panacea that suits every wireless scenario.
In particular, for 3D massive MIMO communications, it is widely recognized that there is a unique challenge in exploiting the inherent 3D spatial structure inside the channel coefficients [15, 16, 17], which calls for tailored algorithm designs. To reveal the underlying 3D spatial structure, an emerging trend is to leverage the angular channel model, which has been validated by real-world measurements [8, 9]. In this model, the channel coefficient is modelled as a summation over different propagation paths, each of which is specified by angle parameters and fading parameters. Since this channel model mimics signal propagation in the physical world, its parameters have clear interpretations. On the other hand, the mathematical form of the angular channel model shares a lot of similarities with the models in array signal processing, and thus has triggered tremendous research progress on massive MIMO channel estimation from an array signal processing perspective [18]. In particular, many fundamental ideas in array signal processing, including the discrete Fourier transform (DFT) [19], multiple signal classification (MUSIC) [20], and estimation of signal parameters via rotational invariance technique (ESPRIT) [21], have been brought into the algorithm design of massive MIMO channel estimation and yielded significant performance improvement [22, 23, 24]. Furthermore, these inspiring ideas have been integrated with advanced tensor methods to achieve more accurate 3D massive MIMO channel estimation even with limited pilot signals [27, 17, 28, 25, 26]. However, previous works on tensor-aided 3D massive MIMO communications [17, 28, 27, 25, 26] mainly investigated single-user channel estimation. By equivalently formulating channel estimation problems as standard tensor decompositions, a vast number of off-the-shelf tensor decomposition tools can be utilized [29].
One might contemplate a straightforward extension of existing works to the scenario where the BS estimates multiple users’ channels simultaneously. Unfortunately, the coupling among different users’ channels makes the problem formulation deviate from widely-used tensor decompositions. Instead, it gives a non-standard tensor decomposition format that has not been well tackled. This unique challenge requires a novel tensor-aided multi-user channel estimation algorithm design for 3D massive MIMO communications. The most straightforward approach is to fit the new tensor decomposition model to the observation data via solving an optimization problem. In particular, with the widely adopted least-squares (LS) model fitting criterion, it can be shown that the optimization problem enjoys a block multi-convex property [30], in the sense that although the original problem is non-convex, it becomes convex in one block of variables once all the other variables are fixed. This motivates leveraging the block coordinate descent (BCD) method [31] to solve the model fitting problem. This approach can be interpreted as a maximum-likelihood (ML) approach under the assumption that the signals are corrupted by additive white Gaussian noises (AWGNs) [14]. From the viewpoint of machine learning, it is well known that the ML solution is prone to overfitting the noise if the model complexity is not set correctly [14]. In the angular channel model, the model complexity is determined by the number of independent propagation paths, which however is unknown in practice [8, 9]. To mitigate the overfitting, a typical method is to introduce an additive regularization term that penalizes complicated channel models [14]. However, for the best channel estimation performance, this approach requires tuning the regularization parameters carefully to balance the data fitting and the model complexity control, which inevitably consumes enormous computation resources.
Therefore, in this paper, we aim at answering the following question: _could we develop a tuning-free channel estimation algorithm that can automatically learn the optimal channel model complexity from the wireless data?_ This question invites a data-driven approach to wireless research, letting the wireless data reveal its desired channel model complexity. This goal coincides with the fundamental philosophy of Bayesian methods [32]. In particular, the Bayesian Occam's razor principle states that the multiple integrations in the Bayes rule will automatically drive the inferred model to the simplest one that can still explain the data well. This has enabled tuning-free algorithm designs for Bayesian neural networks [33], sparse Bayesian learning [34], and more recently Bayesian structured tensor decompositions [35, 36]. Its great success in automatic model complexity control inspires us to rethink the multi-user 3D massive MIMO channel estimation problem from a Bayesian perspective. In particular, by establishing the probabilistic model and designing an efficient inference algorithm, in this paper, we propose a novel tuning-free tensor-aided multi-user channel estimation algorithm for 3D massive MIMO communications. Numerical results corroborate its excellent performance in terms of both channel estimation accuracy and overfitting avoidance. The remainder of this paper is organized as follows. In Section II, after introducing the system model, the multi-user channel estimation problem is formulated as a non-standard tensor decomposition problem. To fit the new tensor model to the wireless data, a BCD-based method is briefly introduced in Section III, which however is prone to overfitting the noise. To avoid the overfitting via a tuning-free approach, a novel algorithm based on Bayesian modelling and inference is proposed in Section IV. Simulation results are presented in Section V to show the effectiveness of the proposed algorithm.
Finally, conclusions are drawn in Section VI. Notation: Boldface lowercase and uppercase letters will be used for vectors and matrices, respectively. Tensors are written as calligraphic letters. $\mathbb{E}[\cdot]$ denotes the expectation of its argument and $j\triangleq\sqrt{-1}$. Superscripts $T$, $*$ and $H$ denote transpose, conjugate and Hermitian, respectively. $\bm{A}^{-1}$ denotes the inverse of a matrix $\bm{A}$. The operator $\textrm{Tr}\left({\bm{A}}\right)$ denotes the trace of a matrix $\bm{A}$. $\bigparallel\cdot{\bigparallel}_{F}$ represents the Frobenius norm of the argument. $\mathcal{CN}(\bm{x}|\bm{u},\bm{R})$ stands for the probability density function of a circularly-symmetric complex Gaussian vector $\bm{x}$ with mean $\bm{u}$ and covariance matrix $\bm{R}$. The operator $\mathfrak{Re}\\{\cdot\\}$ represents the real part of the argument. The symbol $\propto$ represents a linear scalar relationship between two real-valued functions. The $N\times N$ diagonal matrix with diagonal elements $a_{1}$ through $a_{N}$ is represented as $\mathrm{diag}\\{a_{1},a_{2},...,a_{N}\\}$, while $\bm{I}_{M}$ represents the $M\times M$ identity matrix. The $(i,j)^{th}$ element, the $i^{th}$ row, and the $j^{th}$ column of a matrix $\bm{A}$ are represented by $\bm{A}_{i,j}$, $\bm{A}_{i,:}$ and $\bm{A}_{:,j}$, respectively. ## II System Model And Problem Formulation: When Angular Channel Model Meets Multi-user Massive MIMO Consider a massive MIMO system where the BS is equipped with a 3D uniform cuboid antenna array (UCA), as shown in Figure 1, and each user is equipped with a single antenna. Let $M$ and $N$ denote the number of antennas at the BS and the number of users, respectively. At the BS, with the first antenna taken as the origin of the coordinate system, the numbers of antennas in the x-direction, y-direction and z-direction are $I_{1}$, $I_{2}$ and $I_{3}$, respectively (i.e., $M=I_{1}I_{2}I_{3}$).
Obviously, the UCA includes the uniform rectangular array (URA) and the uniform linear array (ULA) as its special cases by setting some of $\\{I_{1},I_{2},I_{3}\\}$ to one. In this paper, we consider the uplink transmission where all the users simultaneously transmit their pilot signals to the BS through narrow-band non-line-of-sight (NLOS) channels. (The discussion on incorporating the LOS path is presented in _Remark 1_ at the end of Section IV-A.) Each user is assigned a unique pilot sequence $\bm{s}_{n}=[s_{n}(1),...,s_{n}(L)]^{T}$ with length $L$, which is assumed to be smaller than the channel coherence length. The channel state information (CSI) from the $n^{th}$ user to the $m^{th}$ antenna at the BS is modeled as a complex coefficient $h_{m}^{n}$. Then, the received discrete-time complex baseband signal at the BS can be modeled as $\displaystyle\bm{Y}$ $\displaystyle=\sum_{n=1}^{N}\bm{s}_{n}\bm{h}_{n}^{T}+\bm{W}=\bm{S}\bm{H}+\bm{W},$ (1) where vector $\bm{h}_{n}=[h_{1}^{n},h_{2}^{n},...,h_{M}^{n}]^{T}$ collects the channel coefficients of the $n^{th}$ user, and each element $w_{l,m}$ in the noise matrix $\bm{W}$ denotes the additive white Gaussian noise (AWGN) at the BS, i.e., $w_{l,m}\sim\mathcal{CN}(0,\beta^{-1})$ is spatially and temporally independent. The pilot matrix $\bm{S}\in\mathbb{C}^{L\times N}$ has its $n^{th}$ column equal to $\bm{s}_{n}$, and the channel matrix $\bm{H}\in\mathbb{C}^{N\times M}$ has its $n^{th}$ row equal to $\bm{h}_{n}^{T}$. The goal of multi-user channel estimation is to estimate the channel matrix $\bm{H}$ from the received data $\bm{Y}$ at the BS with the help of the pilot matrix $\bm{S}$. From data model (1), a standard least-squares (LS) solution can be obtained immediately: $\displaystyle\hat{\bm{H}}^{\mathrm{LS}}=(\bm{S}^{H}\bm{S})^{-1}\bm{S}^{H}\bm{Y}.$ (2) When using the LS estimator, since no prior information is incorporated, it is well known that the estimation accuracy heavily relies on the pilot length $L$ [39].
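As a concrete illustration, the LS estimator (2) can be sketched in a few lines of numpy; the dimensions and pilot values below are toy assumptions, and the noise term is omitted so that the recovery is exact:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, M = 16, 4, 8   # toy pilot length, user count, antenna count (assumed)

# Random complex pilots S (L x N) and a ground-truth channel H (N x M).
S = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

Y = S @ H   # received signal, data model (1) with the noise term W set to zero

# LS estimator (2): H_hat = (S^H S)^{-1} S^H Y.
H_ls = np.linalg.solve(S.conj().T @ S, S.conj().T @ Y)

print(np.allclose(H_ls, H))   # exact recovery in the noiseless case
```

With noise added, the same estimator degrades as $L$ shrinks, which is the dependence on pilot length noted above.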
That is, to ensure accurate channel estimation, long pilot sequences are required to be transmitted at the user side, which however consumes invaluable spectral resources. This is not desirable in practical massive MIMO systems, and thus calls for alternative solutions that can significantly improve the accuracy of channel estimation even with limited pilots.

Figure 1: A massive MIMO system where the base station (BS) is equipped with a three dimensional (3D) uniform cuboid antenna array (UCA). Under both spatial and frequency narrow-band assumptions, the relative delays among different non-line-of-sight (NLOS) propagation paths and the effect of different subcarriers are assumed to be negligible [46].

To achieve this, recent research works have repeatedly shown glimmers of hope from channel model structure exploitation. In particular, a vast amount of research works [18, 12, 22, 23, 24, 17, 28, 27, 25, 26] have shown the effectiveness of the angular channel model, which has been validated by real-world measurements [8, 9]. Not only does it depict the signal propagations in the physical world via angle parameters and fading parameters, it also bridges the design of massive MIMO systems and array signal processing techniques. More specifically, it assumes that for the $n^{th}$ user, the channel model consists of $R^{n}$ propagation paths, each of which is determined by path gain $\xi_{r^{n}}$, elevation angle $\theta_{r^{n}}$ and azimuth angle $\phi_{r^{n}}$, i.e., $\displaystyle h_{x_{m},y_{m},z_{m}}^{n}=\sum_{r=1}^{R^{n}}\xi_{r^{n}}\mathrm{exp}\Big{\\{}j\frac{2\pi}{\lambda_{c}}\big{[}x_{m}\sin\theta_{r^{n}}\cos\phi_{r^{n}}$ $\displaystyle+y_{m}\sin\theta_{r^{n}}\sin\phi_{r^{n}}+z_{m}\cos\theta_{r^{n}}\big{]}\Big{\\}},$ (3) where $\lambda_{c}$ is the wavelength of the carrier signal and $(x_{m},y_{m},z_{m})$ is the coordinate of the $m^{th}$ antenna.
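To make the model concrete, a minimal numpy sketch of (3) for a single user follows; the wavelength, path count, and random angles are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
lam_c = 0.1   # assumed carrier wavelength (metres)
R = 3         # assumed number of NLOS paths for this user

xi    = rng.standard_normal(R) + 1j * rng.standard_normal(R)  # path gains
theta = rng.uniform(0.0, np.pi, R)                            # elevation angles
phi   = rng.uniform(0.0, 2.0 * np.pi, R)                      # azimuth angles

def channel_coeff(x, y, z):
    """Channel coefficient at antenna position (x, y, z), following model (3)."""
    phase = (2.0 * np.pi / lam_c) * (x * np.sin(theta) * np.cos(phi)
                                     + y * np.sin(theta) * np.sin(phi)
                                     + z * np.cos(theta))
    return np.sum(xi * np.exp(1j * phase))

# At the reference antenna (the origin) all phases vanish, so the
# coefficient reduces to the sum of the path gains.
print(np.isclose(channel_coeff(0.0, 0.0, 0.0), xi.sum()))
```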
Notice that in (3), under both spatial and frequency narrow-band assumptions, the relative delays among different propagation paths and the effect of different subcarriers are assumed to be negligible [46]. Although the channel model (3) shares many similarities with the array signal processing models of [20], [21], there is a slight difference. Due to the block-fading assumption, the path gain $\xi_{r^{n}}$ is assumed to be unchanged during the channel estimation. In contrast, in most array signal processing applications [20], [21], the source signals are assumed to be time-varying. Using the angular channel model (3), instead of directly estimating the channel matrix $\bm{H}$ with $MN$ unknown parameters, one could estimate the model parameters $\\{\\{\xi_{r^{n}},\theta_{r^{n}},\phi_{r^{n}}\\}_{r=1}^{R^{n}}\\}_{n=1}^{N}$ and then reconstruct the channel coefficients. With this approach, only $\sum_{n=1}^{N}3R^{n}$ unknown parameters need to be estimated. Since the path number $R^{n}$ is usually much smaller than the antenna number $M$, the adoption of the angular channel model significantly reduces the number of unknown parameters, and thus allows more accurate channel estimation. However, estimating the unknown parameters $\\{\\{\xi_{r^{n}},\theta_{r^{n}},\phi_{r^{n}}\\}_{r=1}^{R^{n}}\\}_{n=1}^{N}$ from the observation data $\bm{Y}$ is quite challenging, since they are nonlinearly coupled in the channel model (3).
In particular, motivated by the AWGN assumption, the following LS-based optimization problem can be formulated: $\displaystyle\min_{\\{\\{\xi_{r^{n}},\theta_{r^{n}},\phi_{r^{n}}\\}_{r=1}^{R^{n}}\\}_{n=1}^{N}}\sum_{l=1}^{L}\sum_{m=1}^{M}\bigparallel\bm{Y}_{l,m}-\sum_{n=1}^{N}s_{n}(l)$ $\displaystyle\qquad\times\sum_{r=1}^{R^{n}}\xi_{r^{n}}\mathrm{exp}\Big{\\{}j\frac{2\pi}{\lambda_{c}}\big{[}x_{m}\sin\theta_{r^{n}}\cos\phi_{r^{n}}$ $\displaystyle\qquad+y_{m}\sin\theta_{r^{n}}\sin\phi_{r^{n}}+z_{m}\cos\theta_{r^{n}}\big{]}\Big{\\}}{\bigparallel}_{F}^{2}.$ (4) Similar optimization problems have been investigated in the array signal processing community for many decades [19], and it is widely agreed that directly optimizing over the variables $\\{\\{\xi_{r^{n}},\theta_{r^{n}},\phi_{r^{n}}\\}_{r=1}^{R^{n}}\\}_{n=1}^{N}$ is computationally prohibitive. Instead, subspace methods (e.g., MUSIC [20] and ESPRIT [21]) have emerged as the main tools to enable the accurate estimation of these unknown parameters in a computationally efficient manner. Their key idea is to recast the parameter estimation problem as a low-dimensional signal subspace recovery problem, for which an array of dimensionality reduction tools (e.g., low-rank matrix decompositions [14]) are off-the-shelf. Inspired by this idea and further exploiting the 3D structure of the antenna array at the BS, recent studies leverage low-rank tensor decompositions to achieve better signal subspace recovery and subsequently more accurate channel estimation [27, 17, 28, 25, 26].
However, these works are limited to the single-user case, and their extensions to the multi-user scenario are not straightforward. To see this, following the tensor modelling in previous works [27, 17, 28, 25, 26], we re-organize the channel coefficients $\\{h_{x_{m},y_{m},z_{m}}^{n}\\}_{m=1}^{M}$ into a 3D tensor $\mathcal{H}^{n}\in\mathbb{C}^{I_{1}\times I_{2}\times I_{3}}$. In particular, let set $S_{x}$ collect all the antennas’ x-axis coordinates $\\{x_{m}\\}_{m=1}^{M}$, with repeated values eliminated and remaining values sorted in ascending order (i.e., $S_{x}(i_{1})$ is the $i_{1}^{th}$ smallest number in $\\{x_{m}\\}_{m=1}^{M}$). Similarly, let $S_{y}$ and $S_{z}$ collect ordered coordinates on the y-axis and z-axis, respectively. Then, the channel coefficient $h_{x_{m},y_{m},z_{m}}^{n}$ can be equivalently re-indexed as $h_{i_{1},i_{2},i_{3}}^{n}$, where the index $(i_{1},i_{2},i_{3})$ satisfies $S_{x}(i_{1})=x_{m}$, $S_{y}(i_{2})=y_{m}$ and $S_{z}(i_{3})=z_{m}$. Using the new indexing scheme, the channel model (3) can be equivalently re-expressed as $\displaystyle h_{i_{1},i_{2},i_{3}}^{n}=\sum_{r=1}^{R^{n}}\xi_{r^{n}}\mathrm{exp}\Big{\\{}j\frac{2\pi}{\lambda_{c}}\big{[}S_{x}(i_{1})\sin\theta_{r^{n}}\cos\phi_{r^{n}}$ $\displaystyle+S_{y}(i_{2})\sin\theta_{r^{n}}\sin\phi_{r^{n}}+S_{z}(i_{3})\cos\theta_{r^{n}}\big{]}\Big{\\}}$ $\displaystyle=\sum_{r=1}^{R^{n}}\xi_{r^{n}}\mathrm{exp}(S_{x}(i_{1})u_{r^{n}})\mathrm{exp}(S_{y}(i_{2})v_{r^{n}})\mathrm{exp}(S_{z}(i_{3})p_{r^{n}}),$ (5) where $u_{r^{n}}=j\frac{2\pi}{\lambda_{c}}\sin\theta_{r^{n}}\cos\phi_{r^{n}}$, $v_{r^{n}}=j\frac{2\pi}{\lambda_{c}}\sin\theta_{r^{n}}\sin\phi_{r^{n}}$ and $p_{r^{n}}=j\frac{2\pi}{\lambda_{c}}\cos\theta_{r^{n}}$.
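The re-indexing from the flat antenna index $m$ to the triple $(i_{1},i_{2},i_{3})$ can be sketched as follows; the grid size and spacing are toy assumptions:

```python
import numpy as np

# Toy UCA: an I1 x I2 x I3 grid with spacing d along each axis (assumed).
I1, I2, I3, d = 2, 3, 4, 0.05
coords = np.array([(x * d, y * d, z * d)
                   for x in range(I1) for y in range(I2) for z in range(I3)])
M = coords.shape[0]   # M = I1 * I2 * I3 antennas

# Ordered coordinate sets S_x, S_y, S_z: unique values, ascending order.
Sx, Sy, Sz = (np.unique(coords[:, k]) for k in range(3))

rng = np.random.default_rng(2)
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # flat channel vector

# Re-index h_m -> h_{i1,i2,i3} with S_x(i1)=x_m, S_y(i2)=y_m, S_z(i3)=z_m.
H3 = np.empty((len(Sx), len(Sy), len(Sz)), dtype=complex)
for m, (x, y, z) in enumerate(coords):
    H3[np.searchsorted(Sx, x), np.searchsorted(Sy, y), np.searchsorted(Sz, z)] = h[m]

print(H3.shape)   # (2, 3, 4)
```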
Comparing expression (5) to the definition of tensor canonical polyadic decomposition (CPD) [40], it is easy to identify that each 3D channel tensor $\mathcal{H}^{n}$ follows a rank-$R^{n}$ tensor CPD format: $\displaystyle\mathcal{H}^{n}$ $\displaystyle\triangleq\llbracket\bm{U}^{(n)},\bm{V}^{(n)},\left[\bm{\xi}^{n}\right]^{T}\diamond\bm{P}^{(n)}\rrbracket$ $\displaystyle=\sum_{r=1}^{R^{n}}\bm{u}^{(n)}_{r}\circ\bm{v}^{(n)}_{r}\circ\bm{p}^{(n)}_{r}\xi_{r^{n}},$ (6) where $\bm{U}^{(n)}\in\mathbb{C}^{I_{1}\times R^{n}}$ is with its $(i_{1},r)^{th}$ element being $\mathrm{exp}(S_{x}(i_{1})u_{r^{n}})$; $\bm{V}^{(n)}\in\mathbb{C}^{I_{2}\times R^{n}}$ is with its $(i_{2},r)^{th}$ element being $\mathrm{exp}(S_{y}(i_{2})v_{r^{n}})$; and $\bm{P}^{(n)}\in\mathbb{C}^{I_{3}\times R^{n}}$ is with its $(i_{3},r)^{th}$ element being $\mathrm{exp}(S_{z}(i_{3})p_{r^{n}})$. $\bm{u}^{(n)}_{r}$, $\bm{v}^{(n)}_{r}$ and $\bm{p}^{(n)}_{r}$ are the $r^{th}$ columns in matrix $\bm{U}^{(n)}$, $\bm{V}^{(n)}$ and $\bm{P}^{(n)}$, respectively. Symbol $\circ$ denotes vector outer product, $\diamond$ denotes Khatri-Rao product, and vector $\bm{\xi}^{n}=[\xi_{1^{n}},\xi_{2^{n}},...,\xi_{R^{n}}]^{T}\in\mathbb{C}^{R^{n}\times 1}$. 
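A quick numerical check of the CPD format (6), with randomly drawn toy factors standing in for the structured steering matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
I1, I2, I3, R = 4, 5, 6, 2   # toy dimensions and path number (assumed)

U  = rng.standard_normal((I1, R)) + 1j * rng.standard_normal((I1, R))
V  = rng.standard_normal((I2, R)) + 1j * rng.standard_normal((I2, R))
P  = rng.standard_normal((I3, R)) + 1j * rng.standard_normal((I3, R))
xi = rng.standard_normal(R) + 1j * rng.standard_normal(R)   # path gains

# Rank-R CPD as in (6): the sum of R rank-1 outer products, the r-th
# one scaled by the path gain xi_r.
H_sum = sum(xi[r] * np.einsum('i,j,k->ijk', U[:, r], V[:, r], P[:, r])
            for r in range(R))

# The same tensor in one shot, absorbing xi into the third factor
# (mirroring the [xi]^T Khatri-Rao P term in (6)).
H_ein = np.einsum('ir,jr,kr->ijk', U, V, P * xi)

print(np.allclose(H_sum, H_ein))
```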
Then, optimization problem (4) can be equivalently formulated as: $\displaystyle\min_{\\{\\{\xi_{r^{n}},\theta_{r^{n}},\phi_{r^{n}}\\}_{r=1}^{R^{n}}\\}_{n=1}^{N}}\sum_{l=1}^{L}\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)$ $\displaystyle\qquad\times\llbracket\bm{U}^{(n)},\bm{V}^{(n)},\left[\bm{\xi}^{n}\right]^{T}\diamond\bm{P}^{(n)}\rrbracket{\bigparallel}_{F}^{2}.$ (7) In (7), $\mathcal{Y}_{l}\in\mathbb{C}^{I_{1}\times I_{2}\times I_{3}}$ is a 3D tensor that collects the measurements $\\{\bm{Y}_{l,m}\\}_{m=1}^{M}$ according to $\left[\mathcal{Y}_{l}\right]_{i_{1},i_{2},i_{3}}=\bm{Y}_{l,m^{*}}$, where $x_{m^{*}}=S_{x}(i_{1})$, $y_{m^{*}}=S_{y}(i_{2})$ and $z_{m^{*}}=S_{z}(i_{3})$. Finally, inspired by the subspace methods, rather than exhaustively searching over the parameters $\\{\\{\xi_{r^{n}},\theta_{r^{n}},\phi_{r^{n}}\\}_{r=1}^{R^{n}}\\}_{n=1}^{N}$, it is viable to first estimate the factor matrices $\\{\bm{U}^{(n)},\bm{V}^{(n)},\left[\bm{\xi}^{n}\right]^{T}\diamond\bm{P}^{(n)}\\}_{n=1}^{N}$ from the data $\\{\mathcal{Y}_{l}\\}_{l=1}^{L}$ and then reconstruct each channel tensor $\mathcal{H}^{n}$ via $\llbracket\bm{U}^{(n)},\bm{V}^{(n)},\left[\bm{\xi}^{n}\right]^{T}\diamond\bm{P}^{(n)}\rrbracket$. For notational brevity, let the factor matrices $\\{\bm{U}^{(n)},\bm{V}^{(n)},\left[\bm{\xi}^{n}\right]^{T}\diamond\bm{P}^{(n)}\\}_{n=1}^{N}$ simply be denoted by $\\{\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\\}_{n=1}^{N}$.
Then, the channel estimation problem can be formulated as: $\displaystyle\min_{\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}}\sum_{l=1}^{L}\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)$ $\displaystyle\qquad\times\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket{\bigparallel}_{F}^{2}.$ (8) If $N=1$ (i.e., the single-user case), problem (8) can be decomposed into a set of standard tensor CPD problems that enjoy an appealing uniqueness property (see Appendix A), for which there are abundant “out of the box” algorithms [29]. However, when the BS serves multiple users simultaneously, the summation inside the Frobenius norm prohibits the straightforward utilization of standard tensor decomposition tools, and thus makes the multi-user channel estimation problem much more challenging than its single-user counterpart [27, 17, 28, 25, 26]. In particular, when $N>1$, the factor matrices $\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}$ are intricately coupled together after expanding the Frobenius norm (as elaborated in Appendix B). This coupling is much more complicated than those appearing in existing single-user channel estimation works [27, 17, 28, 25, 26], making their extensions (either using optimizations or Bayesian methods) to the multi-user scenario not straightforward. This paper makes the first attempt to tackle this challenge. ## III Direct Fitting via Block Coordinate Descent: How to Avoid Overfitting?
It is not difficult to show the non-convexity of problem (8), since all the factor matrices $\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}$ are coupled together via multi-linear products. However, a closer inspection reveals its appealing block multi-convex property [30], based on which the BCD method [31] can be leveraged. More specifically, although problem (8) is not convex with respect to $\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}$, after fixing all the variables to their latest updates other than a single factor matrix $\bm{\Xi}^{(k),n}$, in iteration $t+1$, the remaining subproblem can be formulated as: $\displaystyle\min_{\bm{\Xi}^{(k),n}}\sum_{l=1}^{L}\bigparallel\left[\mathcal{B}_{l,p\neq n}\right]^{\kappa}(k)-s_{n}(l)\bm{\Xi}^{(k),n}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]^{\kappa}\right)^{T}{\bigparallel}_{F}^{2},$ (9) where $\displaystyle\left[\mathcal{B}_{l,p\neq n}\right]^{\kappa}$ $\displaystyle\triangleq\mathcal{Y}_{l}-\sum_{p=1,p\neq n}^{N}s_{p}(l)\Big{\llbracket}\left[\bm{\Xi}^{(1),p}\right]^{\kappa},\left[\bm{\Xi}^{(2),p}\right]^{\kappa},\left[\bm{\Xi}^{(3),p}\right]^{\kappa}\Big{\rrbracket},$ (10) and $\kappa$ denotes the most recent update index, i.e., $\kappa=t+1$ when $j<k$ or $p<n$, and $\kappa=t$ otherwise. $\left[\mathcal{B}_{l,p\neq n}\right]^{\kappa}(k)$ is a matrix obtained by unfolding the tensor $\left[\mathcal{B}_{l,p\neq n}\right]^{\kappa}$ along its $k^{th}$ dimension, and the multiple Khatri-Rao products are defined as $\mathop{\diamond}\limits_{n=1,n\neq k}^{N}{\bm{A}}^{(n)}={\bm{A}}^{(N)}\diamond{\bm{A}}^{(N-1)}\diamond\cdots\diamond{\bm{A}}^{(k+1)}\diamond{\bm{A}}^{(k-1)}\diamond\cdots\diamond{\bm{A}}^{(1)}$. After checking the positive semi-definiteness of the Hessian matrix, subproblem (9) can be shown to be convex.
Then, by setting the derivative of the objective function in (9) to zero, the closed-form optimal solution can be obtained as follows: $\displaystyle\left[\bm{\Xi}^{(k),n}\right]^{t+1}=\left[\sum_{l=1}^{L}\left[\mathcal{B}_{l,p\neq n}\right]^{\kappa}(k)s_{n}(l)^{*}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]^{\kappa}\right)^{*}\right]$ $\displaystyle\times\left[\sum_{l=1}^{L}|s_{n}(l)|^{2}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]^{\kappa}\right)^{T}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]^{\kappa}\right)^{*}\right]^{-1}.$ (11) Since each subproblem is convex, after iteratively updating each $\left[\bm{\Xi}^{(k),n}\right]^{t+1}$ via (11), the resultant BCD algorithm, which is summarized in Algorithm 1 at the top of this page, is guaranteed to converge to a critical point of the objective function of (8) [31]. Algorithm 1: BCD Based Multi-user Channel Estimation Initializations: Choose path number estimates $\\{\hat{R}^{n}\\}_{n=1}^{N}$ and initial values $\\{\\{\left[\bm{\Xi}^{(k),n}\right]^{0}\\}_{k=1}^{3}\\}_{n=1}^{N}$.
Iterations: For iteration $t+1$ ($t\geq 0$), update each factor matrix $\left[\bm{\Xi}^{(k),n}\right]^{t+1}$ via $\displaystyle\left[\bm{\Xi}^{(k),n}\right]^{t+1}=\left[\sum_{l=1}^{L}\left[\mathcal{B}_{l,p\neq n}\right]^{\kappa}(k)s_{n}(l)^{*}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]^{\kappa}\right)^{*}\right]$ $\displaystyle\times\left[\sum_{l=1}^{L}|s_{n}(l)|^{2}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]^{\kappa}\right)^{T}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]^{\kappa}\right)^{*}\right]^{-1},$ where $\left[\mathcal{B}_{l,p\neq n}\right]^{\kappa}$ is computed using (10), and $\kappa$ denotes the most recent update index, i.e., $\kappa=t+1$ when $j<k$ or $p<n$, and $\kappa=t$ otherwise. Until Convergence Channel Estimation: $\hat{\mathcal{H}}^{n}=\Big{\llbracket}\left[\bm{\Xi}^{(1),n}\right]^{t+1},\left[\bm{\Xi}^{(2),n}\right]^{t+1},\left[\bm{\Xi}^{(3),n}\right]^{t+1}\Big{\rrbracket},\forall n.$ However, to implement Algorithm 1, prior knowledge about the path numbers $\\{R^{n}\\}_{n=1}^{N}$ is required, which is difficult to acquire in practice. On the other hand, as seen in (6), the path number $R^{n}$ controls the number of rank-1 components in the CPD model, and thus controls the channel model complexity. In [40], it has been shown that $R^{n}$ is generally non-deterministic polynomial-time hard (NP-hard) to estimate. With over-estimated path numbers $\\{\hat{R}^{n}\\}_{n=1}^{N}$ (or equivalently, overly complicated channel models), directly fitting the tensor channel models $\\{\mathcal{H}^{n}\\}_{n=1}^{N}$ to the observation data $\\{\mathcal{Y}_{l}\\}_{l=1}^{L}$ via Algorithm 1 will be prone to overfitting the noise, and thus will cause performance deterioration in channel estimation.
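Stripped down to a single user with a unit pilot and real-valued data, the closed-form update in Algorithm 1 reduces to the classical alternating least-squares step of CPD: regress the mode-$k$ unfolding on the Khatri-Rao product of the other factors. A hedged toy sketch of the mode-1 update under those simplifying assumptions:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two matrices."""
    return np.stack([np.kron(A[:, r], B[:, r]) for r in range(A.shape[1])], axis=1)

rng = np.random.default_rng(4)
I1, I2, I3, R = 4, 5, 6, 2
U = rng.standard_normal((I1, R))
V = rng.standard_normal((I2, R))
P = rng.standard_normal((I3, R))

# Noiseless rank-R tensor; with this row-major unfolding convention its
# mode-1 unfolding satisfies Y1 = U @ khatri_rao(V, P).T.
Y  = np.einsum('ir,jr,kr->ijk', U, V, P)
Y1 = Y.reshape(I1, I2 * I3)

# LS update of U with V and P held fixed, mirroring the structure of (11).
K = khatri_rao(V, P)                           # (I2*I3) x R
U_new = Y1 @ K @ np.linalg.inv(K.T @ K)

print(np.allclose(U_new, U))   # the noiseless update recovers U exactly
```

In the full multi-user algorithm, the residual tensor of (10) replaces $\mathcal{Y}_l$, and the pilot weights and conjugates in (11) reappear.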
To avoid the overfitting, a widely-adopted approach is to introduce an additional regularization term that penalizes the model complexity as follows [14]: $\displaystyle\min_{\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}}\sum_{l=1}^{L}\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)$ $\displaystyle\times\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket{\bigparallel}_{F}^{2}+\sum_{n=1}^{N}\sum_{k=1}^{3}\gamma_{k}^{n}g(\bm{\Xi}^{(k),n}),$ (12) where the regularization function $g(\cdot)$ (e.g., the $l_{1}$ norm or the $l_{2}$ norm) is pre-selected. For the best channel estimation performance, the regularization parameters $\\{\\{\gamma_{k}^{n}\\}_{k=1}^{3}\\}_{n=1}^{N}$ need to be finely tuned, which is however computationally demanding. An immediate question is then: could we develop a tuning-free approach such that the model complexity can be optimally learnt from the data? This is fundamentally important in achieving overfitting avoidance for channel estimation. ## IV Towards a Tuning-free Approach: A Bayesian Perspective This question has been partially answered in the research on Bayesian modelling and inference. In the early pioneering works of MacKay [33] and Tipping [34] on Bayesian neural networks and the relevance vector machine, sparsity-enhancing priors were employed to encode an over-parameterized model. Together with the Bayesian Occam's razor principle, which indicates that Bayesian inference will automatically seek the simplest model that can still explain the data adequately, the inference algorithm drives redundant model parameters to zero and thus effectively controls the model complexity. This idea has triggered flourishing research on Bayesian compressive sensing [37], sparse Bayesian learning [38], and more recently Bayesian structured tensor decompositions [35, 36].
However, for the tensor-aided multi-user channel estimation problem in (8), since it does not follow a standard tensor decomposition format, there is no existing Bayesian solution. Therefore, in this paper, we develop such an algorithm from the first principles of Bayesian methods. ### IV-A Sparsity-promoting Probabilistic Modelling Firstly, the probabilistic model, which encodes the knowledge of problem (8), needs to be established. Motivated by the LS cost function in (8) (and equivalently the AWGN assumption in data model (1)), a Gaussian likelihood function is adopted as follows: $\displaystyle p(\\{\mathcal{Y}_{l}\\}_{l=1}^{L}|\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N})$ $\displaystyle\propto\exp\Bigg{\\{}-\beta\sum_{l=1}^{L}\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket{\bigparallel}_{F}^{2}\Bigg{\\}},$ (13) where $\beta^{-1}$ is the noise power. To reflect the non-informativeness of the noise power, a gamma distribution $p(\beta)=\mathrm{gamma}(\beta|\epsilon,\epsilon)$ with $\epsilon$ very small (e.g., $10^{-6}$) is employed as its prior. For the factor matrices $\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}$, since $\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}$ determines the $n^{th}$ user’s channel tensor $\mathcal{H}^{n}$ and the channels of different users are assumed to be statistically independent, we have $p(\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N})=\prod_{n=1}^{N}p(\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3})$. In the channel model (6), it can be observed that the channel tensor $\mathcal{H}^{n}$ is the summation of $R^{n}$ rank-1 tensors, each of which is determined by the $r^{th}$ columns of the three factor matrices, i.e., $\\{\bm{\Xi}^{(k),n}_{:,r}\\}_{k=1}^{3}$. By treating each $\\{\bm{\Xi}^{(k),n}_{:,r}\\}_{k=1}^{3}$ as an independent building block of the channel model, we have $p(\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3})=\prod_{k=1}^{3}\prod_{r=1}^{R^{n}}p(\bm{\Xi}^{(k),n}_{:,r})$.
Since the exact path number $R^{n}$ is unknown, an upper bound $\bar{R}^{n}$ on its value is assumed, giving an over-parameterized model. Then, inspired by previous Bayesian sparsity modelling works [33, 34], a sparsity-promoting Gaussian-gamma prior distribution is utilized to model $\\{\bm{\Xi}^{(k),n}_{:,r}\\}_{r=1}^{\bar{R}^{n}}$. Finally, the sparsity-promoting prior for all the factor matrices is: $\displaystyle p(\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}|\\{\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N})$ $\displaystyle=\prod_{n=1}^{N}p(\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}|\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}})$ $\displaystyle=\prod_{n=1}^{N}\prod_{k=1}^{3}\prod_{r=1}^{\bar{R}^{n}}p(\bm{\Xi}^{(k),n}_{:,r}|\gamma_{r}^{n})$ $\displaystyle=\prod_{n=1}^{N}\prod_{k=1}^{3}\prod_{r=1}^{\bar{R}^{n}}\mathcal{CN}\left(\bm{\Xi}^{(k),n}_{:,r}|\bm{0}_{I_{k}},(\gamma_{r}^{n})^{-1}\bm{I}_{I_{k}}\right),$ (14) $\displaystyle p(\\{\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N})=\prod_{n=1}^{N}\prod_{r=1}^{\bar{R}^{n}}\mathrm{gamma}(\gamma_{r}^{n}|\epsilon,\epsilon),$ (15) where $\epsilon$ is a very small number (e.g., $10^{-6}$) that indicates the non-informativeness of the prior model. Consequently, the proposed probabilistic model is a three-layer Bayes network, as illustrated in Figure 2. _Remark 1:_ In (14), due to the NLOS assumption adopted in this paper, there is no need to explicitly model significant power differences among paths. If the LOS path is considered (with its power much larger than those of the other paths), order statistics [51] might be exploited to model this structural information, which is an interesting future direction to investigate. Figure 2: Probabilistic model for 3D massive MIMO multi-user channel estimation.
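To illustrate how the Gaussian-gamma prior in (14)-(15) prunes redundant components, the sketch below draws factor columns under hypothetical learned precisions: components whose $\gamma_{r}^{n}$ has grown large collapse to near-zero columns, effectively reducing the rank. The precision values here are illustrative assumptions, not inferred quantities:

```python
import numpy as np

rng = np.random.default_rng(5)
I_k, R_bar = 8, 5   # column length and over-estimated path number (assumed)

# Hypothetical precisions after inference: the last three components
# have been driven to huge gamma_r, i.e., they are being pruned.
gamma = np.array([1.0, 1.0, 1e8, 1e8, 1e8])

def draw_column(g):
    """Draw a column from CN(0, g^{-1} I), as in prior (14)."""
    s = np.sqrt(0.5 / g)   # per-component real/imaginary std of CN(0, 1/g)
    return s * (rng.standard_normal(I_k) + 1j * rng.standard_normal(I_k))

Xi = np.stack([draw_column(g) for g in gamma], axis=1)   # I_k x R_bar factor

col_energy = np.linalg.norm(Xi, axis=0)
print(col_energy)   # the last three columns are effectively zero: rank pruned to 2
```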
### IV-B Variational Inference: Block Coordinate Descent Over Functional Space

Table I: Optimal variational probability density functions

Variational pdfs | Remarks
---|---
$Q^{\dagger}\left(\bm{\Xi}^{(k),n}\right)=\mathcal{CMN}(\bm{\Xi}^{(k),n}|\bm{M}^{(k),n},\bm{I}_{I_{k}},\bm{\Sigma}^{(k),n}),\forall n,k$ | Circularly-symmetric complex matrix normal distribution with mean $\bm{M}^{(k),n}$ and covariance matrix $\bm{\Sigma}^{(k),n}$
$Q^{\dagger}\left(\gamma_{r}^{n}\right)=\mathrm{gamma}(\gamma_{r}^{n}|a_{r}^{n},b_{r}^{n}),\forall n,r$ | Gamma distribution with shape $a_{r}^{n}$ and rate $b_{r}^{n}$
$Q^{\dagger}\left(\beta\right)=\mathrm{gamma}(\beta|c,d)$ | Gamma distribution with shape $c$ and rate $d$

Let $\bm{\Theta}$ be the set that collects all the unknown variables in the probabilistic model, i.e., $\bm{\Theta}=\\{\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N},\\{\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N},\beta\\}$. The goal of Bayesian inference is to infer the posterior distribution of each unknown variable. Following the Bayes rule, the posterior distribution $p(\bm{\Theta}_{i}|\\{\mathcal{Y}_{l}\\}_{l=1}^{L})=\int\frac{p(\bm{\Theta},\\{\mathcal{Y}_{l}\\}_{l=1}^{L})}{\int p(\bm{\Theta},\\{\mathcal{Y}_{l}\\}_{l=1}^{L})d\bm{\Theta}}d\bm{\Theta}_{j\neq i}$, where $\bm{\Theta}_{i}$ is part of $\bm{\Theta}$ with $\cup_{i=1}^{I}\bm{\Theta}_{i}=\bm{\Theta}$ and $\cap_{i=1}^{I}\bm{\Theta}_{i}=\emptyset$. However, the intricacy of the probabilistic model does not allow a tractable solution of the multiple integrations involved [14]. Fortunately, this challenge is not totally new, and is common in modern Bayesian inference tasks such as Bayesian deep learning [41] and Bayesian tensor methods [35, 36]. In these works, variational inference (VI) is advocated since it scales well to complicated models with a large number of parameters [43, 44]. In essence, VI recasts the intractable multiple integration problem into a functional optimization problem.
In particular, it solves the following problem [44]: $\displaystyle\min_{Q(\bm{\Theta})}\mathrm{KL}\big{(}Q\left(\bm{\Theta}\right)\parallel p\left(\bm{\Theta}\mid\\{\mathcal{Y}_{l}\\}_{l=1}^{L}\right)\big{)}$ $\displaystyle\triangleq-\mathbb{E}_{Q\left(\bm{\Theta}\right)}\left\\{\ln\frac{p\left(\bm{\Theta}\mid\\{\mathcal{Y}_{l}\\}_{l=1}^{L}\right)}{Q\left(\bm{\Theta}\right)}\right\\}$ $\displaystyle\mathrm{s.t.}\quad Q(\bm{\Theta})\in\mathcal{F},$ (16) where $\mathrm{KL}(\cdot||\cdot)$ denotes the Kullback-Leibler (KL) divergence between its two arguments, and $\mathcal{F}$ is a pre-selected family of probability density functions (pdfs). The rationale behind problem (16) is: although the exact posterior distribution $p\left(\bm{\Theta}\mid\\{\mathcal{Y}_{l}\\}_{l=1}^{L}\right)$ has no closed form, we can still seek a tractable variational probability distribution $Q(\bm{\Theta})$ in a family $\mathcal{F}$ that is the closest to the true posterior distribution $p\left(\bm{\Theta}\mid\\{\mathcal{Y}_{l}\\}_{l=1}^{L}\right)$ in terms of the KL divergence. The choice of the probability distribution family $\mathcal{F}$ is an art, since it needs to be flexible enough to ensure the freedom of the variational pdfs, while simple enough to enable efficient functional optimization. The mean-field family is such a choice, as evidenced by many Bayesian inference works [42, 14]. It assumes that $Q(\bm{\Theta})=\prod_{i=1}^{I}Q(\bm{\Theta}_{i})$. Then, inspired by its factorized structure, the idea of BCD can be migrated to the functional space.
More specifically, after fixing other variational pdfs $\\{Q(\bm{\Theta}_{j})\\}_{j\neq i}$ to their latest update results, the pdf $Q(\bm{\Theta}_{i})$ can be updated via solving the following problem: $\displaystyle\min_{Q(\bm{\Theta}_{i})}\int Q(\bm{\Theta}_{i})\Big{(}-\mathbb{E}_{\prod_{j\neq i}Q(\bm{\Theta}_{j})}\left[\ln p(\bm{\Theta},\\{\mathcal{Y}_{l}\\}_{l=1}^{L})\right]$ $\displaystyle+\ln Q(\bm{\Theta}_{i})\Big{)}d{\bm{\Theta}_{i}}.$ (17) Using variational calculus, the optimal solution of subproblem (17) can be shown to be [43, 44]: $\displaystyle Q^{\dagger}\left(\bm{\Theta}_{i}\right)=\frac{\exp\left(\mathbb{E}_{\prod_{j\neq i}Q\left(\bm{\Theta}_{j}\right)}\left[\ln{p\left(\bm{\Theta},\\{\mathcal{Y}_{l}\\}_{l=1}^{L}\right)}\right]\right)}{\int\exp\left(\mathbb{E}_{\prod_{j\neq i}Q\left(\bm{\Theta}_{j}\right)}\left[\ln p\left(\bm{\Theta},\\{\mathcal{Y}_{l}\\}_{l=1}^{L}\right)\right]\right)d\bm{\Theta}_{i}},$ (18) where $\displaystyle\ln p\left(\bm{\Theta},\\{\mathcal{Y}_{l}\\}_{l=1}^{L}\right)=\left(\prod_{k=1}^{3}I_{k}L\right)\ln\beta-\beta\sum_{l=1}^{L}\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)$ $\displaystyle\times\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket{\bigparallel}_{F}^{2}+(\epsilon-1)\ln\beta-\epsilon\beta$ $\displaystyle-\sum_{n=1}^{N}\sum_{k=1}^{3}\left[\mathrm{Tr}\left(\bm{\Xi}^{(k),n}\bm{\Gamma}^{n}\left[\bm{\Xi}^{(k),n}\right]^{H}\right)+I_{k}\sum_{r=1}^{\bar{R}^{n}}\ln\gamma_{r}^{n}\right]$ $\displaystyle+\sum_{n=1}^{N}\sum_{r=1}^{\bar{R}^{n}}\left[(\epsilon-1)\ln\gamma_{r}^{n}-\epsilon\gamma_{r}^{n}\right],$ (19) and $\bm{\Gamma}^{n}=\mathrm{diag}\\{\gamma_{1}^{n},...,\gamma_{\bar{R}^{n}}^{n}\\}$. ### IV-C Tuning-free Algorithm Derivation After substituting (19) into (18), the optimal solutions $\\{Q^{\dagger}\left(\bm{\Theta}_{i}\right)\\}_{i=1}^{I}$ can be obtained. 
Straightforward as it may seem, the multiple integrations involved in (18) and the complicated tensor algebra in (19) make the derivations technically challenging and tedious. Moreover, the coupling among different users’ channel parameters causes the derivations to deviate from those developed in related works on single-user channel estimation [27, 17, 28, 25, 26]. Consequently, considerable effort is needed to derive the optimal variational pdfs for the multi-user channel estimation problem. To keep the main body brief, we move the lengthy derivations to Appendix C and only present the final optimal solutions $\\{Q^{\dagger}\left(\bm{\Theta}_{i}\right)\\}_{i=1}^{I}$ in Table I at the top of this page. In Table I, the optimal variational pdf for each factor matrix $Q(\bm{\Xi}^{(k),n})$ is a circularly-symmetric complex matrix normal distribution [45], where the covariance matrix $\displaystyle\bm{\Sigma}^{(k),n}=$ $\displaystyle\Bigg{[}\sum_{l=1}^{L}|s_{n}(l)|^{2}\mathbb{E}\left[\beta\right]\mathbb{E}\Big{[}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\bm{\Xi}^{(j),n}\right)^{T}$ $\displaystyle\times\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\bm{\Xi}^{(j),n}\right)^{*}\Big{]}+\mathbb{E}\left[\bm{\Gamma}^{n}\right]\Bigg{]}^{-1},$ (20) and mean matrix $\displaystyle\bm{M}^{(k),n}$ $\displaystyle=\sum_{l=1}^{L}\mathfrak{B}_{l,p\neq n}(k)s_{n}(l)^{*}\mathbb{E}\left[\beta\right]\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\mathbb{E}\left[\bm{\Xi}^{(j),n}\right]\right)^{*}\bm{\Sigma}^{(k),n},$ (21) with $\displaystyle\mathfrak{B}_{l,p\neq n}$ $\displaystyle\triangleq\mathcal{Y}_{l}\\!-\\!\sum_{p=1,p\neq n}^{N}s_{p}(l)\Big{\llbracket}\mathbb{E}\left[\bm{\Xi}^{(1),p}\right],\mathbb{E}\left[\bm{\Xi}^{(2),p}\right],\mathbb{E}\left[\bm{\Xi}^{(3),p}\right]\Big{\rrbracket}.$ (22) On the other hand, the optimal variational distributions for each $\gamma_{r}^{n}$ and $\beta$ are gamma distributions, with parameters $\displaystyle 
a_{r}^{n}=\epsilon+\sum_{k=1}^{3}I_{k},$ (23) $\displaystyle b_{r}^{n}=\epsilon+\sum_{k=1}^{3}\mathbb{E}\left[\left[\bm{\Xi}^{(k),n}_{:,r}\right]^{H}\bm{\Xi}^{(k),n}_{:,r}\right],$ (24) $\displaystyle c=\epsilon+\prod_{k=1}^{3}I_{k}L,$ (25) $\displaystyle d=\epsilon+\sum_{l=1}^{L}\mathbb{E}\left[\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket{\bigparallel}_{F}^{2}\right].$ (26) In (20)-(26), there are several expectations that need to be computed. Some of them can be directly obtained from their parameters. In particular, $\mathbb{E}[\bm{\Xi}^{(k),n}]=\bm{M}^{(k),n}$, $\mathbb{E}\left[\gamma_{r}^{n}\right]=\frac{a_{r}^{n}}{b_{r}^{n}}$ and $\mathbb{E}\left[\beta\right]=\frac{c}{d}$. Some of them have already been computed in previous works [35, 17]: $\displaystyle\mathbb{E}\left[\left[\bm{\Xi}^{(k),n}_{:,r}\right]^{H}\bm{\Xi}^{(k),n}_{:,r}\right]=\left[\bm{M}^{(k),n}_{:,r}\right]^{H}\bm{M}^{(k),n}_{:,r}+I_{k}\bm{\Sigma}^{(k),n}_{r,r},$ $\displaystyle\mathbb{E}\Big{[}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\bm{\Xi}^{(j),n}\right)^{T}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\bm{\Xi}^{(j),n}\right)^{*}\Big{]}$ (27) $\displaystyle=\mathop{\odot}\limits_{j=1,j\neq k}^{3}\left[\left[\bm{M}^{(j),n}\right]^{T}\left[\bm{M}^{(j),n}\right]^{*}+I_{j}\left[\bm{\Sigma}^{(j),n}\right]^{*}\right],$ (28) where the multiple Hadamard product is defined as $\mathop{\odot}\limits_{n=1,n\neq k}^{N}{\bm{A}}^{(n)}={\bm{A}}^{(N)}\odot{\bm{A}}^{(N-1)}\odot\cdots\odot{\bm{A}}^{(k+1)}\odot{\bm{A}}^{(k-1)}\odot\cdots\odot{\bm{A}}^{(1)}$. However, due to the coupling among different users’ channel parameters, there is one complicated expectation in (26) that has not been tackled so far. 
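The deterministic core of identity (28) is the classical Khatri-Rao/Hadamard relation $(\bm{A}\diamond\bm{B})^{H}(\bm{A}\diamond\bm{B})=(\bm{A}^{H}\bm{A})\odot(\bm{B}^{H}\bm{B})$; the $I_{j}\bm{\Sigma}^{(j),n}$ terms in (28) come from the factor-matrix covariances. A quick numerical check of the relation, assuming nothing beyond NumPy:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of I×R and J×R matrices."""
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
B = rng.normal(size=(5, 3)) + 1j * rng.normal(size=(5, 3))

KR = khatri_rao(A, B)
lhs = KR.conj().T @ KR                       # (A ⋄ B)^H (A ⋄ B)
rhs = (A.conj().T @ A) * (B.conj().T @ B)    # (A^H A) ⊙ (B^H B)
print(np.allclose(lhs, rhs))                 # → True
```

This identity is what lets the expensive Gram matrix of a tall Khatri-Rao product be computed from the small per-factor Gram matrices.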
In Appendix D, we show that $\displaystyle\mathbb{E}\Bigg{[}\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket{\bigparallel}_{F}^{2}\Bigg{]}$ $\displaystyle=$ $\displaystyle\bigparallel\mathcal{Y}_{l}{\bigparallel}_{F}^{2}-\mathrm{Tr}\Bigg{(}2\mathfrak{Re}\Big{(}\mathcal{Y}_{l}(1)\sum_{n=1}^{N}s_{n}(l)^{*}\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{M}^{(j),n}\right)^{*}$ $\displaystyle\times\left[\bm{M}^{(1),n}\right]^{H}\Big{)}-\sum_{n=1}^{N}\sum_{p=1,p\neq n}^{N}s_{n}(l)s_{p}(l)^{*}\bm{M}^{(1),n}$ $\displaystyle\times\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{M}^{(j),n}\right)^{T}\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{M}^{(j),p}\right)^{*}\left[\bm{M}^{(1),p}\right]^{H}\Bigg{)}$ $\displaystyle+\mathrm{Tr}\Bigg{(}\sum_{n=1}^{N}|s_{n}(l)|^{2}\left[\left[\bm{M}^{(1),n}\right]^{H}\bm{M}^{(1),n}+I_{1}\bm{\Sigma}^{(1),n}\right]$ $\displaystyle\mathop{\odot}\limits_{k=2}^{3}\left[\left[\bm{M}^{(k),n}\right]^{H}\bm{M}^{(k),n}+I_{k}\bm{\Sigma}^{(k),n}\right]^{*}\Bigg{)}.$ (29) From (20)-(29), it is easy to see that the parameters of each optimal variational pdf $Q^{\dagger}\left(\bm{\Theta}_{i}\right)$ rely on the statistics of the other variational pdfs $\\{Q^{\dagger}\left(\bm{\Theta}_{j}\right)\\}_{j\neq i}$. By alternately updating these variational pdfs, a tuning-free iterative channel estimation algorithm is obtained, summarized as Algorithm 2 at the top of this page. Algorithm 2: VI Based Multi-user Channel Estimation Initializations: Choose $\bar{R}^{n}>R^{n},\forall n$, initial values $\\{\\{\left[\bm{M}^{(k),n}\right]^{0},\left[\bm{\Sigma}^{(k),n}\right]^{0}\\}_{k=1}^{3}\\}_{n=1}^{N}$ and $\epsilon$. Let $\\{[a_{r}^{n}]^{0},[b_{r}^{n}]^{0},c^{0},d^{0}\\}=\epsilon,\forall r,n$. 
Iterations: For the iteration $t+1$ ($t\geq 0$), Update the parameters of $Q(\bm{\Xi}^{(k),n})^{t+1}$: $\displaystyle\left[\bm{\Sigma}^{(k),n}\right]^{t+1}\\!\\!=\\!\\!\Bigg{[}\\!\sum_{l=1}^{L}|s_{n}(l)|^{2}\frac{c^{t}}{d^{t}}\\!\\!\mathop{\odot}\limits_{j=1,j\neq k}^{3}\\!\\!\Bigg{[}\left[\bm{M}^{(j),n}\right]^{\kappa,T}\\!\\!\left[\bm{M}^{(j),n}\right]^{\kappa,*}$ $\displaystyle+I_{j}\left[\bm{\Sigma}^{(j),n}\right]^{\kappa,*}\Bigg{]}+\mathrm{diag}\left\\{\frac{[a^{n}_{1}]^{t}}{[b^{n}_{1}]^{t}},...,\frac{[a^{n}_{\bar{R}^{n}}]^{t}}{[b^{n}_{\bar{R}^{n}}]^{t}}\right\\}\Bigg{]}^{-1},$ (30) $\displaystyle\left[\bm{M}^{(k),n}\right]^{t+1}\\!\\!=\\!\\!\sum_{l=1}^{L}\Bigg{(}\mathcal{Y}_{l}\\!-\\!\\!\\!\sum_{p=1,p\neq n}^{N}s_{p}(l)\Big{\llbracket}\left[\bm{M}^{(1),p}\right]^{\kappa},\left[\bm{M}^{(2),p}\right]^{\kappa}\\!\\!\\!\\!,$ $\displaystyle\left[\bm{M}^{(3),p}\right]^{\kappa}\Big{\rrbracket}\Bigg{)}(k)s_{n}(l)^{*}\frac{c^{t}}{d^{t}}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{M}^{(j),n}\right]^{\kappa}\right)^{*}\left[\bm{\Sigma}^{(k),n}\right]^{t+1}\\!\\!\\!\\!\\!\\!,$ (31) where $\kappa$ denotes the most recent update index, i.e., $\kappa=t+1$ when $j<k$ or $p<n$, and $\kappa=t$ otherwise. 
Update the parameters of $Q(\gamma_{r}^{n})^{t+1}$: $\displaystyle\left[a_{r}^{n}\right]^{t+1}=\epsilon+\sum_{k=1}^{3}I_{k},$ (32) $\displaystyle\left[b_{r}^{n}\right]^{t+1}=\epsilon+\sum_{k=1}^{3}\left[\bm{M}^{(k),n}_{:,r}\right]^{t+1,H}\left[\bm{M}^{(k),n}_{:,r}\right]^{t+1}$ $\displaystyle\quad+I_{k}\left[\bm{\Sigma}^{(k),n}_{r,r}\right]^{t+1}.$ (33) Update the parameters of $Q(\beta)^{t+1}$: $\displaystyle c^{t+1}=\epsilon+\prod_{k=1}^{3}I_{k}L,$ (34) $\displaystyle d^{t+1}=\epsilon+\sum_{l=1}^{L}\mathfrak{f}_{l}^{t+1},$ (35) where $\mathfrak{f}_{l}^{t+1}$ is computed using (29) with $\\{\bm{M}^{(k),n},\bm{\Sigma}^{(k),n}\\}$ being replaced by $\\{\left[\bm{M}^{(k),n}\right]^{t+1},\left[\bm{\Sigma}^{(k),n}\right]^{t+1}\\},\forall n,k$. Until Convergence Channel Estimation: $\hat{\mathcal{H}}^{n}=\Big{\llbracket}\left[\bm{M}^{(1),n}\right]^{t+1},\left[\bm{M}^{(2),n}\right]^{t+1},\left[\bm{M}^{(3),n}\right]^{t+1}\Big{\rrbracket},\forall n.$ ### IV-D Intuitive Interpretation of Updating Equations #### IV-D1 Intuitive interpretation of (20) and (21) The covariance matrix $\bm{\Sigma}^{(k),n}$ of the approximate posterior distribution $Q(\bm{\Xi}^{(k),n})$ computed in (20) combines the prior information from $\mathbb{E}\left[\bm{\Gamma}^{n}\right]$ and the information from the other factor matrices. It is then used as the rotation matrix in the estimation of the factor matrix mean $\bm{M}^{(k),n}$ in (21), which takes a linear combination of the observation data and the other factor matrices. 
If there were no prior information $\mathbb{E}\left[\bm{\Gamma}^{n}\right]$ and no noise precision estimate $\mathbb{E}\left[\beta\right]$, the update equation (21) would be very similar to the BCD update in (11); indeed, VI essentially performs BCD steps over the functional space. #### IV-D2 Intuitive interpretation of (23)-(26) From (23) and (24), it can be seen that $\mathbb{E}\left[\gamma_{r}^{n}\right]=\frac{a_{r}^{n}}{b_{r}^{n}}$ is inversely proportional to the sum of the $r^{th}$ column powers in all three factor matrices. Therefore, if the $r^{th}$ columns are learnt to be nearly zero, $\mathbb{E}\left[\gamma_{r}^{n}\right]=\frac{a_{r}^{n}}{b_{r}^{n}}$ becomes very large, which further encourages the sparsity of the $r^{th}$ columns in (21). On the other hand, it is straightforward to see that (25) is related to the number of observations and (26) approximates the model fitting error. ### IV-E Further Discussions and Insights To gain more insights into the proposed algorithm, discussions on its automatic model complexity control, convergence property, and computational complexity are presented in this subsection. #### IV-E1 Automatic model complexity control In Algorithm 2, although the initial channel model is over-parameterized, there is no need to manually tune any parameter to control the model complexity for overfitting avoidance, since the parameters $\\{\\{\frac{a_{r}^{n}}{b_{r}^{n}}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N}$, which are the expectations of $\\{\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N}$, effectively shrink the values of redundant columns in the factor matrices. In particular, if $\frac{a_{r}^{n}}{b_{r}^{n}}$ is learnt to be very large, it dominates the corresponding entry in the covariance update of the factor matrix (as seen in (30)) and then rescales the $r^{th}$ column of the factor matrix toward zero (as seen in (31)). 
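The shrinkage mechanism described above can be seen in a small real-valued sketch (hypothetical sizes; `Z` stands in for the Khatri-Rao design matrix, and the update mirrors the ridge-like structure of (20)-(21) with fixed precision estimates):

```python
import numpy as np

rng = np.random.default_rng(2)
I_k, J, R = 8, 40, 4
Z = rng.normal(size=(J, R))                  # stand-in for the Khatri-Rao design
X_true = rng.normal(size=(I_k, R))
Y = X_true @ Z.T + 0.01 * rng.normal(size=(I_k, J))

beta = 1e4                                   # noise precision estimate

def posterior(gamma):
    """Ridge-like factor update: Sigma as in (20), M = beta * Y Z Sigma as in (21)."""
    Sigma = np.linalg.inv(beta * Z.T @ Z + np.diag(gamma))
    return beta * Y @ Z @ Sigma, Sigma

M_flat, _ = posterior(np.ones(R))            # weak, uniform ARD precisions
gamma = np.ones(R); gamma[2] = 1e9           # huge precision on column 3
M_ard, _ = posterior(gamma)

print(np.linalg.norm(M_flat[:, 2]), np.linalg.norm(M_ard[:, 2]))
```

With uniform weak precisions the update essentially returns the least-squares fit, while a single huge precision entry drives the corresponding column of the mean matrix to (near) zero, which is the automatic pruning of redundant columns.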
On the other hand, the parameters $\\{\\{\frac{a_{r}^{n}}{b_{r}^{n}}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N}$ are updated together with the other parameters in the algorithm, following the principle of the employed Bayesian framework. #### IV-E2 Convergence property The algorithm is developed under the framework of mean-field VI, which inherently performs BCD steps over the functional space. Its convergence result has been established in [43]. In particular, it has been shown that when the variational pdf is optimized using (18) in each iteration (as is done in this paper), the limit point generated by the BCD steps over the functional space of variational pdfs is guaranteed to be at least a stationary point of the KL divergence in (16) under the mean-field family assumption [43]. #### IV-E3 Computational complexity In each iteration, the computational complexity of Algorithm 2 is dominated by the steps of updating the factor matrices, costing $O(\sum_{n=1}^{N}\prod_{k=1}^{3}3I_{k}(\bar{R}^{n})^{2}+\sum_{n=1}^{N}\sum_{k=1}^{3}(\bar{R}^{n})^{3})$. The overall complexity is about $O(q(\sum_{n=1}^{N}\prod_{k=1}^{3}3I_{k}(\bar{R}^{n})^{2}+\sum_{n=1}^{N}\sum_{k=1}^{3}(\bar{R}^{n})^{3}))$, where $q$ is the number of iterations required for convergence. Thus the complexity is comparable to that of Algorithm 1, whose computational complexity is $O(q^{\prime}(\sum_{n=1}^{N}\prod_{k=1}^{3}3I_{k}(\hat{R}^{n})^{2}+\sum_{n=1}^{N}\sum_{k=1}^{3}(\hat{R}^{n})^{3}))$, where $q^{\prime}$ is the number of iterations until convergence. ## V Numerical Results and Discussions In this section, numerical results are presented to assess the channel estimation performance of the proposed tuning-free algorithm (i.e., Algorithm 2). Consider a UCA with $M=512$ antenna elements, deployed in a 3D grid with dimensions $I_{1}=8$, $I_{2}=8$, $I_{3}=8$ and inter-grid spacing $d_{x}=d_{y}=d_{z}=\lambda_{c}/2$. 
Assume that there are $N=5$ users simultaneously transmitting signals to the BS. For each user, there are $R^{n}=3$ propagation paths, with elevation angles randomly selected from $[-\pi/2,\pi/2]$ and azimuth angles randomly selected from $[-\pi,\pi]$. The pilot length is $L=10$, and each pilot symbol is sampled from a zero-mean circularly-symmetric complex Gaussian distribution with unit variance. The path gains $\\{\xi_{r^{n}}\\}_{r,n}$ are drawn from a zero-mean circularly-symmetric complex Gaussian distribution with unit variance, with no correlation across $r$ and $n$. The signal-to-noise ratio (SNR) is defined as $10\log_{10}\left(\frac{\sum_{l=1}^{L}\bigparallel\sum_{n=1}^{N}s_{n}(l)\llbracket\bm{U}^{(n)},\bm{V}^{(n)},\left[\bm{\xi}^{n}\right]^{T}\diamond\bm{P}^{(n)}\rrbracket{\bigparallel}_{F}^{2}}{\parallel\mathcal{W}\parallel_{F}^{2}}\right)$, where $\mathcal{W}\in\mathbb{C}^{I_{1}\times I_{2}\times I_{3}\times L}$ is a tensor collecting all the noise samples. For the proposed tuning-free algorithm, the initial mean $\left[\bm{M}^{(k),n}\right]^{0}$ for each matrix $\bm{\Xi}^{(k),n}$ is drawn from a zero-mean circularly-symmetric complex matrix normal distribution with an identity covariance matrix, and the initial covariance matrix is set as $\left[\bm{\Sigma}^{(k),n}\right]^{0}=\bm{I}_{\bar{R}^{n}\times\bar{R}^{n}}$. The upper bound on the channel path number is set as $\bar{R}^{n}=\min\\{I_{1},I_{2},I_{3}\\}=8$ unless stated otherwise, which is a common practice in Bayesian tensor decompositions [17],[27],[35],[36]. Each point in the following figures is an average of 100 Monte-Carlo trials. ### V-A Convergence Property and Automatic Channel Model Complexity Learning The convergence behavior of the proposed tuning-free algorithm is shown in Figure 3 under two different SNRs, where the mean-square-error (MSE) of channel estimation $\frac{1}{MN}\sum_{n=1}^{N}||\hat{\mathcal{H}}^{n}-\mathcal{H}^{n}||_{F}^{2}$ is adopted as the measure. 
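As a loose illustration of this setup (an assumption-laden sketch only: generic random factor matrices replace the structured steering factors $\bm{U}^{(n)},\bm{V}^{(n)},\bm{P}^{(n)}$ of the paper's channel model), the received signal tensor and the SNR above could be formed as:

```python
import numpy as np

rng = np.random.default_rng(3)
I1, I2, I3, L, N, R = 8, 8, 8, 10, 5, 3

def cp_tensor(A, B, C):
    """⟦A, B, C⟧ = sum_r a_r ∘ b_r ∘ c_r."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

signal = np.zeros((I1, I2, I3, L), dtype=complex)
for n in range(N):
    # hypothetical stand-ins for the per-user steering/gain factors
    A, B, C = (rng.normal(size=(d, R)) + 1j * rng.normal(size=(d, R))
               for d in (I1, I2, I3))
    H = cp_tensor(A, B, C)                        # user-n channel tensor
    s = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)
    signal += H[..., None] * s                    # add s_n(l) * H^n for each l

# unit-variance circularly-symmetric complex Gaussian noise tensor W
W = (rng.normal(size=signal.shape) + 1j * rng.normal(size=signal.shape)) / np.sqrt(2)
snr_db = 10 * np.log10(np.sum(np.abs(signal)**2) / np.sum(np.abs(W)**2))
print(snr_db)
```

In the actual experiments the noise would be scaled to hit a target SNR; the sketch only shows how the signal power and noise power enter the definition.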
From Figure 3, it can be seen that the MSEs of the proposed algorithm decrease significantly in the first few tens of iterations and then gradually converge to stable values. Figure 3: The convergence behavior of the proposed algorithm under SNR = 10 dB and SNR = 20 dB ($\bar{R}^{n}$ = 8, $R^{n}=3,L=10$). Figure 4: Performance of channel estimation versus different channel path upper bound values $\bar{R}^{n}$ (SNR = 20 dB, $R^{n}=3,L=10$). To see whether the proposed algorithm is sensitive to the initial upper bound value $\bar{R}^{n}$, under SNR = 20 dB, the MSEs of the proposed tuning-free algorithm (labeled as VI-$\bar{R}^{n}$) are presented in Figure 4, in which the MSEs of the LS method (labeled as LS), the BCD method (i.e., Algorithm 1) with incorrect path numbers $\\{\bar{R}^{n}\\}_{n=1}^{N}$ (labeled as BCD-$\bar{R}^{n}$), and the genie-aided BCD method with exact path numbers $\\{{R}^{n}\\}_{n=1}^{N}$ (labeled as BCD-$R^{n}$) serve as benchmarks. From Figure 4, it can be seen that the proposed algorithm with different values of $\bar{R}^{n}\in\\{6,8,10,30,50\\}$ shows channel estimation performance indistinguishable from that of the genie-aided BCD-$R^{n}$ method. Notice that $\bar{R}^{n}\in\\{30,50\\}$ is much larger than the true path number (tensor rank) $R^{n}=3$. This shows that with different values of the upper bound $\bar{R}^{n}$, the proposed tuning-free algorithm can still learn the model complexity well and give accurate channel estimation results. On the other hand, with much larger $\bar{R}^{n}\in\\{30,50\\}$, the BCD-$\bar{R}^{n}$ algorithm overfits the noise heavily, and cannot even outperform the LS method in channel estimation. Figure 5: The estimates of $\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}$ for user 1 and user 3 in different Monte-Carlo trials ($\bar{R}^{n}=8$, $R^{n}=3$, $L=10$, SNR = 20 dB). 
Figure 6: The estimates of $\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}$ for user 1 and user 3 in different Monte-Carlo trials ($\bar{R}^{n}=30$, $R^{n}=3$, $L=10$, SNR = 20 dB). As discussed in Section IV-E, the estimation results of $\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}$ under different ${\bar{R}}^{n}$s determine the channel model complexity learning performance of the proposed method. Since $\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}$ in different Monte-Carlo trials may have different sparsity patterns (i.e., the very small values may appear at different indices $r$), averaging them over Monte-Carlo trials is not informative. Therefore, in Figure 5 and Figure 6, we present the estimation results of $\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}$ for user 1 and user 3 in three independent Monte-Carlo trials under ${\bar{R}}^{n}=8$ and ${\bar{R}}^{n}=30$, respectively. From these two figures, it can be seen that under both ${\bar{R}}^{n}=8$ and ${\bar{R}}^{n}=30$, only three $\gamma_{r}^{n}$s were estimated to be very small, indicating that there are three significant channel paths for each user. Since the exact path number $R^{n}=3$, this shows that the proposed method can accurately recover the channel model complexity and thus avoid overfitting. ### V-B Channel Estimation Performance To assess the channel estimation performance at different SNRs, the MSEs of different algorithms are shown in Figure 7. From Figure 7, it is obvious that the three tensor-aided methods (VI-$\bar{R}^{n}$, BCD-$\bar{R}^{n}$, and BCD-$R^{n}$) achieve much more accurate channel estimation than the LS method, due to the exploitation of tensor structures in the adopted angular channel model. It can also be observed that the genie-aided BCD-$R^{n}$ method achieves the lowest MSE for a wide range of SNRs, since it fits the channel coefficients to the observation data under the exact channel model complexity, which, however, is not available in practice. 
With an incorrect guess of the path numbers, the MSEs of the BCD-$\bar{R}^{n}$ method are much higher than those of the BCD-$R^{n}$ algorithm, due to noise overfitting. In contrast, although the proposed VI-$\bar{R}^{n}$ algorithm also starts from an incorrect guess of the path numbers, its MSEs are nearly the same as those of the genie-aided BCD-$R^{n}$ method. This shows the effectiveness of the Bayesian method in automatic model complexity control and overfitting avoidance. Figure 7: Performance of channel estimation versus SNRs ($\bar{R}^{n}$ = 8, $R^{n}=3,L=10$). On the other hand, we present the running time of the three iterative tensor-aided channel estimation algorithms (VI-$\bar{R}^{n}$, BCD-$\bar{R}^{n}$, and BCD-$R^{n}$) in Table II. From Table II, it can be observed that the proposed algorithm has a running time comparable to that of the BCD-$\bar{R}^{n}$ approach, which validates the complexity analysis in Section IV-E. Notice that these two approaches cost much more time than the genie-aided BCD-$R^{n}$ algorithm, since they need to update an over-parameterized set of model parameters. Table II: Running time (second) of different channel estimation algorithms ($\bar{R}^{n}=8$, ${R}^{n}=3$, $L=10$) SNR | BCD-$R^{n}$ | BCD-$\bar{R}^{n}$ | VI-$\bar{R}^{n}$ ---|---|---|--- 10 dB | 1.7942 | 6.5550 | 6.6890 20 dB | 1.5422 | 4.1581 | 4.9155 To see how the model complexity of the channels affects different algorithms, under SNR = 20 dB, the MSEs of channel estimation versus different path numbers in the channel model are presented in Figure 8. In previous simulation studies, $R^{n}=3,\forall n$ was considered. Here we further consider different path number values $R^{n}\in\\{2,3,4,5,6\\}$, each of which corresponds to a different channel model complexity. With a higher $R^{n}$, there are more unknown channel coefficients. It is then expected that the channel estimation performance would degrade given the same amount of observation data. 
This conjecture is validated by Figure 8, in which the MSEs indeed increase as $R^{n}$ increases. On the other hand, it can be seen that the proposed VI-$\bar{R}^{n}$ algorithm achieves performance indistinguishable from that of the genie-aided BCD-$R^{n}$ method. This shows that the proposed algorithm can learn a wide range of model complexities and effectively shrink redundant channel model parameters for overfitting avoidance. Figure 8: Performance of channel estimation versus different path numbers (SNR = 20 dB, $\bar{R}^{n}=8,L=10$). Figure 9: Performance of channel estimation versus different pilot lengths (SNR = 20 dB, $\bar{R}^{n}=8,R^{n}=3$). Finally, in Figure 9, we show how the proposed VI-$\bar{R}^{n}$ algorithm saves pilot resources for channel estimation, compared to the standard LS method. From Figure 9, it is clear that to reach an MSE of $10^{-3}$, the proposed VI-$\bar{R}^{n}$ algorithm needs only around $20$ pilot signals while the LS method requires about $50$ pilot signals. The gain comes from both the exploitation of the tensor structure of the channel model and the Bayesian philosophy of overfitting avoidance. ## VI Conclusions and Future Research In this paper, the multi-user channel estimation problem for 3D massive MIMO communications was investigated through the lens of Bayesian tensor methods. The channel estimation problem was first recast as a factor matrix learning problem for a non-standard tensor decomposition model, which calls for a novel learning algorithm design with built-in overfitting avoidance. To achieve this goal, a tuning-free channel estimation algorithm was proposed in this paper under the framework of Bayesian modelling and inference. Numerical studies have shown the excellent channel estimation performance of the proposed method in terms of both accuracy and overfitting avoidance. 
In future research, the exploitation of the shift-invariance property [21], [50] of the cubic antenna array at the base station might give a new tensor model for massive MIMO communications, which will also require novel tuning-free channel estimation algorithm designs. _We believe that the integration of tensor models, array signal processing and Bayesian methods will bring us closer to the era of “Joint Model-and-Data-Driven Wireless Communications”._ ## Appendix A Uniqueness Property of Tensor CPD In [40], a sufficient condition for the uniqueness of tensor CPD is stated as follows. Uniqueness condition for CPD [40]. _If $\llbracket\bm{A}^{(1)},\bm{A}^{(2)},...,\bm{A}^{(N)}\rrbracket$ = $\llbracket\bm{\Xi}^{(1)},\bm{\Xi}^{(2)},...,\bm{\Xi}^{(N)}\rrbracket$ and $\sum_{n=1}^{N}k_{n}\geq 2L+(N-1)$, where $k_{n}$ denotes the k-rank of matrix $\bm{A}^{(n)}$ and $L$ is the tensor rank, then the following equations hold: $\bm{\Xi}^{(1)}=\bm{A}^{(1)}\bm{\Delta}\bm{\Lambda}^{(1)}$, $\bm{\Xi}^{(2)}=\bm{A}^{(2)}\bm{\Delta}\bm{\Lambda}^{(2)}$, …, $\bm{\Xi}^{(N)}=\bm{A}^{(N)}\bm{\Delta}\bm{\Lambda}^{(N)}$, where $\bm{\Delta}$ is a permutation matrix and the diagonal matrices $\bm{\Lambda}^{(n)}$ satisfy $\prod_{n=1}^{N}\bm{\Lambda}^{(n)}=\bm{I}_{L}$._ Thus tensor CPD is unique up to trivial scaling and permutation ambiguities. 
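The k-rank appearing in Kruskal's condition (the largest $k$ such that every set of $k$ columns is linearly independent) can be computed by brute force for small matrices; a sketch assuming nothing beyond NumPy:

```python
import numpy as np
from itertools import combinations

def k_rank(A, tol=1e-10):
    """Largest k such that every set of k columns of A is linearly independent."""
    R = A.shape[1]
    for k in range(R, 0, -1):
        if all(np.linalg.matrix_rank(A[:, list(c)], tol=tol) == k
               for c in combinations(range(R), k)):
            return k
    return 0

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 4))        # generic matrix: k-rank = min(5, 4) = 4
B = A.copy(); B[:, 3] = B[:, 0]    # a duplicated column drops the k-rank to 1
print(k_rank(A), k_rank(B))        # → 4 1
```

For generic (e.g., randomly drawn) factor matrices the k-rank equals $\min(I,R)$, which is why the over-parameterized CPD models considered in the paper are still generically identifiable when the condition $\sum_{n}k_{n}\geq 2L+(N-1)$ holds.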
## Appendix B Complicated Coupling after Expanding the Frobenius norm in (8) Since $||\mathcal{X}||_{F}^{2}=||\mathcal{X}(k)||_{F}^{2}$ where $\mathcal{X}(k)$ is the $k^{th}$ unfolding matrix of the tensor $\mathcal{X}$, after using the unfolding property of tensor CPD [40], problem (8) is equivalent to $\displaystyle\min_{\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}}\sum_{l=1}^{L}\bigparallel\mathcal{Y}_{l}(k)-$ $\displaystyle\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \sum_{n=1}^{N}s_{n}(l)\bm{\Xi}^{(k),n}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]\right)^{T}{\bigparallel}_{F}^{2}.$ (36) Further using the fact $||\mathbf{X}||_{F}^{2}=\mathrm{Tr}(\mathbf{X}\mathbf{X}^{H})$ to expand the Frobenius norm in (36), we have the following problem: $\displaystyle\min_{\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}}\sum_{l=1}^{L}\mathrm{Tr}\Bigg{(}\Big{[}\mathcal{Y}_{l}(k)-\sum_{n=1}^{N}s_{n}(l)\bm{\Xi}^{(k),n}$ $\displaystyle\times\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]\right)^{T}\Big{]}\Big{[}\mathcal{Y}_{l}(k)-\sum_{n=1}^{N}s_{n}(l)\bm{\Xi}^{(k),n}$ $\displaystyle\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \times\left(\mathop{\diamond}\limits_{j=1,j\neq 
k}^{3}\left[\bm{\Xi}^{(j),n}\right]\right)^{T}\Big{]}^{H}\Bigg{)}.$ (37) $\displaystyle\min_{\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}}\sum_{l=1}^{L}\mathrm{Tr}\Bigg{(}\mathcal{Y}_{l}(k)\mathcal{Y}_{l}(k)^{H}-2\mathfrak{Re}\Big{(}\mathcal{Y}_{l}(k)\sum_{n=1}^{N}s_{n}^{*}(l)\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]\right)^{*}\big{[}\bm{\Xi}^{(k),n}\big{]}^{H}\Big{)}$ $\displaystyle+\underbrace{\Big{[}\sum_{n=1}^{N}s_{n}(l)\bm{\Xi}^{(k),n}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]\right)^{T}\Big{]}\Big{[}\sum_{n=1}^{N}s_{n}(l)\bm{\Xi}^{(k),n}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]\right)^{T}\Big{]}^{H}}_{\mathfrak{t}}\Bigg{)}.$ (38) After some algebraic manipulation, problem (37) becomes (38) at the top of the next page. In the term $\mathfrak{t}$, it is clear that the product of the two summation terms, i.e., $\displaystyle\Big{[}\sum_{n=1}^{N}s_{n}(l)\bm{\Xi}^{(k),n}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]\right)^{T}\Big{]}$ $\displaystyle\times\Big{[}\sum_{n=1}^{N}s_{n}(l)\bm{\Xi}^{(k),n}\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\left[\bm{\Xi}^{(j),n}\right]\right)^{T}\Big{]}^{H}$ (39) will result in complicated coupling among the factor matrices $\\{\\{\bm{\Xi}^{(k),n}\\}_{k=1}^{3}\\}_{n=1}^{N}$. Although problem (38) seems complicated, if we optimize only a single factor matrix $\bm{\Xi}^{(k),n}$ while fixing the other variables, problem (38) reduces to problem (9) in Section III, which is a convex problem and can be easily solved. 
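The norm-preserving unfolding identity $||\mathcal{X}||_{F}^{2}=||\mathcal{X}(k)||_{F}^{2}$ and the CPD unfolding used above can be checked numerically. One caveat: unfolding conventions differ; with NumPy's row-major reshape, the mode-1 unfolding of $\llbracket\bm{A},\bm{B},\bm{C}\rrbracket$ matches $\bm{A}(\bm{B}\diamond\bm{C})^{T}$ as implemented below, whereas column-major conventions swap the Khatri-Rao order:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of I×R and J×R matrices."""
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

rng = np.random.default_rng(5)
I1, I2, I3, R = 4, 5, 6, 3
A, B, C = (rng.normal(size=(d, R)) for d in (I1, I2, I3))
X = np.einsum('ir,jr,kr->ijk', A, B, C)   # CP tensor ⟦A, B, C⟧

X1 = X.reshape(I1, -1)                    # mode-1 unfolding (row-major columns)
print(np.allclose(X1, A @ khatri_rao(B, C).T))   # → True
print(np.isclose((X**2).sum(), (X1**2).sum()))   # → True
```

Since unfolding is just a reindexing, the Frobenius norm is unchanged, which is exactly what allows the tensor-valued objective (8) to be rewritten as the matrix-valued objective (36).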
## Appendix C The Derivations of The Optimal Variational Pdfs in Table I After substituting (19) into (18) and keeping only the terms relevant to $\bm{\Xi}^{(k),n}$, we have $\displaystyle Q^{\dagger}\left(\bm{\Xi}^{(k),n}\right)$ $\displaystyle\propto\exp\Bigg{\\{}\mathbb{E}\Bigg{[}-\beta\sum_{l=1}^{L}\bigparallel\mathcal{Y}_{l}-s_{n}(l)\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},$ $\displaystyle\bm{\Xi}^{(3),n}\rrbracket-\sum_{p=1,p\neq n}^{N}s_{p}(l)\llbracket\bm{\Xi}^{(1),p},\bm{\Xi}^{(2),p},\bm{\Xi}^{(3),p}\rrbracket{\bigparallel}_{F}^{2}$ $\displaystyle-\mathrm{Tr}\left(\bm{\Xi}^{(k),n}\bm{\Gamma}^{n}\left[\bm{\Xi}^{(k),n}\right]^{H}\right)\Bigg{]}\Bigg{\\}}.$ (40) Then, we utilize the result $\parallel\bm{A}\parallel_{F}^{2}=\mathrm{Tr}(\bm{A}\bm{A}^{H})$ to expand the Frobenius norm. After a series of algebraic manipulations, $Q^{\dagger}\left(\bm{\Xi}^{(k),n}\right)$ can be organized as $\displaystyle Q^{\dagger}\left(\bm{\Xi}^{(k),n}\right)$ $\displaystyle\propto\exp\Bigg{\\{}\mathbb{E}\Bigg{[}\mathrm{Tr}\Bigg{(}-\bm{\Xi}^{(k),n}\Bigg{(}\sum_{l=1}^{L}|s_{n}(l)|^{2}\beta\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\bm{\Xi}^{(j),n}\right)^{T}$ $\displaystyle\times\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\bm{\Xi}^{(j),n}\right)^{*}+\bm{\Gamma}^{n}\Bigg{)}\left[\bm{\Xi}^{(k),n}\right]^{H}+2\mathfrak{Re}\Bigg{(}\bm{\Xi}^{(k),n}$ $\displaystyle\times\sum_{l=1}^{L}s_{n}(l)\beta\left(\mathop{\diamond}\limits_{j=1,j\neq k}^{3}\bm{\Xi}^{(j),n}\right)^{T}\Bigg{(}\mathcal{Y}_{l}(k)\\!-\\!\sum_{p=1,p\neq n}^{N}s_{p}(l)$ $\displaystyle\times\Big{\llbracket}\bm{\Xi}^{(1),p},\bm{\Xi}^{(2),p},\bm{\Xi}^{(3),p}\Big{\rrbracket}(k)\Bigg{)}\Bigg{)}\Bigg{)}\Bigg{]}\Bigg{\\}}.$ (41) After distributing the expectations and comparing the functional form of (41) to that of the circularly-symmetric complex matrix normal distribution [45], it can be concluded that 
$Q^{\dagger}\left(\bm{\Xi}^{(k),n}\right)=\mathcal{CMN}(\bm{\Xi}^{(k),n}|\bm{M}^{(k),n},\bm{I}_{I_{k}},\bm{\Sigma}^{(k),n})$ with its covariance matrix $\bm{\Sigma}^{(k),n}$ and mean $\bm{M}^{(k),n}$ defined in (20) and (21), respectively. Similarly, after substituting (19) into (18) and fixing all the variables other than $\\{\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N}$, we have $\displaystyle Q^{\dagger}\left(\\{\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N}\right)\propto\exp\Bigg{\\{}\mathbb{E}\Bigg{[}\sum_{n=1}^{N}\sum_{k=1}^{3}-\mathrm{Tr}\Big{(}\bm{\Xi}^{(k),n}\bm{\Gamma}^{n}$ $\displaystyle\left[\bm{\Xi}^{(k),n}\right]^{H}\Big{)}+I_{k}\sum_{r=1}^{\bar{R}^{n}}\ln\gamma_{r}^{n}+(\epsilon-1)\ln\gamma_{r}^{n}-\epsilon\gamma_{r}^{n}\Bigg{]}\Bigg{\\}}.$ (42) Using the fact that $\mathrm{Tr}\Big{(}\bm{\Xi}^{(k),n}\bm{\Gamma}^{n}\left[\bm{\Xi}^{(k),n}\right]^{H}\Big{)}=\sum_{r=1}^{\bar{R}^{n}}\gamma_{r}^{n}\left[\bm{\Xi}^{(k),n}\right]_{:,r}^{H}\left[\bm{\Xi}^{(k),n}\right]_{:,r}$, it can be shown that $\displaystyle Q^{\dagger}\left(\\{\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N}\right)\propto\prod_{n=1}^{N}\prod_{r=1}^{\bar{R}^{n}}\exp\Bigg{\\{}\mathbb{E}\Bigg{[}-\gamma_{r}^{n}\sum_{k=1}^{3}\left[\bm{\Xi}^{(k),n}\right]_{:,r}^{H}$ $\displaystyle\left[\bm{\Xi}^{(k),n}\right]_{:,r}+\sum_{k=1}^{3}I_{k}\ln\gamma_{r}^{n}+(\epsilon-1)\ln\gamma_{r}^{n}-\epsilon\gamma_{r}^{n}\Bigg{]}\Bigg{\\}}.$ (43) It is easy to conclude that $Q^{\dagger}\left(\\{\\{\gamma_{r}^{n}\\}_{r=1}^{\bar{R}^{n}}\\}_{n=1}^{N}\right)=\prod_{n=1}^{N}\prod_{r=1}^{\bar{R}^{n}}Q^{\dagger}(\gamma_{r}^{n})$, where $\displaystyle Q^{\dagger}(\gamma_{r}^{n})\propto\exp\Bigg{\\{}\left(\sum_{k=1}^{3}I_{k}+\epsilon-1\right)\ln\gamma_{r}^{n}$ $\displaystyle-\gamma_{r}^{n}\left(\epsilon+\sum_{k=1}^{3}\mathbb{E}\left[\left[\bm{\Xi}^{(k),n}\right]_{:,r}^{H}\left[\bm{\Xi}^{(k),n}\right]_{:,r}\right]\right)\Bigg{\\}}.$ (44) By comparing (44) to the functional form of the gamma distribution, we have 
$Q^{\dagger}\left(\gamma_{r}^{n}\right)=\mathrm{gamma}(\gamma_{r}^{n}|a_{r}^{n},b_{r}^{n})$, where $a_{r}^{n}$ and $b_{r}^{n}$ are defined in (23) and (24), respectively. Finally, we use (18) and (19) again to derive the optimal variational pdf $Q^{\dagger}\left(\beta\right)$. It can be shown that $\displaystyle Q^{\dagger}\left(\beta\right)\propto\exp\Bigg{\\{}\left(\prod_{k=1}^{3}I_{k}L+\epsilon-1\right)\ln\beta$ $\displaystyle-\beta\Bigg{(}\epsilon+\sum_{l=1}^{L}\mathbb{E}\Bigg{[}\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket{\bigparallel}_{F}^{2}\Bigg{]}\Bigg{)}\Bigg{\\}}.$ (45) After comparing (45) to the functional form of the gamma distribution, it is easy to identify $Q^{\dagger}\left(\beta\right)=\mathrm{gamma}(\beta|c,d)$, where $c$ and $d$ are expressed in (25) and (26), respectively. ## Appendix D Expectation Computation for (26) In (26), computing the expectation $\mathbb{E}\left[\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket{\bigparallel}_{F}^{2}\right]$ is quite involved. 
We use the result $\parallel\bm{A}\parallel_{F}^{2}=\mathrm{Tr}(\bm{A}\bm{A}^{H})$ and the tensor unfolding property [40] to expand the Frobenius norm: $\displaystyle\mathbb{E}\left[\bigparallel\mathcal{Y}_{l}-\sum_{n=1}^{N}s_{n}(l)\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket{\bigparallel}_{F}^{2}\right]$ $\displaystyle=\mathbb{E}\Bigg{[}\mathrm{Tr}\Bigg{(}\mathcal{Y}_{l}(1)\mathcal{Y}_{l}(1)^{H}-2\mathfrak{Re}\Bigg{(}\mathcal{Y}_{l}(1)\sum_{n=1}^{N}s_{n}(l)^{*}$ $\displaystyle\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{\Xi}^{(j),n}\right)^{*}\left[\bm{\Xi}^{(1),n}\right]^{H}\Bigg{)}+\mathcal{G}_{l}(1)\mathcal{G}_{l}(1)^{H}\Bigg{)}\Bigg{]},$ (46) where $\displaystyle\mathcal{G}_{l}=\sum_{n=1}^{N}s_{n}(l)\llbracket\bm{\Xi}^{(1),n},\bm{\Xi}^{(2),n},\bm{\Xi}^{(3),n}\rrbracket.$ (47) After distributing the expectations, the most complicated term is $\mathrm{Tr}\left(\mathbb{E}\left[\mathcal{G}_{l}(1)\mathcal{G}_{l}(1)^{H}\right]\right)$. Using the tensor unfolding property [40] again, we have $\displaystyle\mathrm{Tr}\left(\mathbb{E}\left[\mathcal{G}_{l}(1)\mathcal{G}_{l}(1)^{H}\right]\right)$ $\displaystyle=\mathbb{E}\Bigg{[}\mathrm{Tr}\Bigg{(}\Bigg{[}\sum_{n=1}^{N}s_{n}(l)\bm{\Xi}^{(1),n}\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{\Xi}^{(j),n}\right)^{T}\Bigg{]}$ $\displaystyle\quad\times\Bigg{[}\sum_{n=1}^{N}s_{n}(l)\bm{\Xi}^{(1),n}\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{\Xi}^{(j),n}\right)^{T}\Bigg{]}^{H}\Bigg{)}\Bigg{]}$ $\displaystyle=\mathrm{Tr}\Bigg{(}\sum_{n=1}^{N}\sum_{p=1}^{N}s_{n}(l)s_{p}(l)^{*}\mathbb{E}\Bigg{[}\bm{\Xi}^{(1),n}\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{\Xi}^{(j),n}\right)^{T}$ $\displaystyle\quad\times\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{\Xi}^{(j),p}\right)^{*}\left[\bm{\Xi}^{(1),p}\right]^{H}\Bigg{]}\Bigg{)}.$ (48) Further using the results in (27) and (28), we have 
$\displaystyle\mathbb{E}\left[\mathcal{G}_{l}(1)\mathcal{G}_{l}(1)^{H}\right]$ $\displaystyle=\mathrm{Tr}\Bigg{(}\sum_{n=1}^{N}\sum_{p=1,p\neq n}^{N}s_{n}(l)s_{p}(l)^{*}\bm{M}^{(1),n}\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{M}^{(j),n}\right)^{T}$ $\displaystyle\times\left(\mathop{\diamond}\limits_{j=2}^{3}\bm{M}^{(j),p}\right)^{*}\left[\bm{M}^{(1),p}\right]^{H}\Bigg{)}$ $\displaystyle+\mathrm{Tr}\Bigg{(}\sum_{n=1}^{N}|s_{n}(l)|^{2}\left[\left[\bm{M}^{(1),n}\right]^{H}\bm{M}^{(1),n}+I_{1}\bm{\Sigma}^{(1),n}\right]$ $\displaystyle\mathop{\odot}\limits_{k=2}^{3}\left[\left[\bm{M}^{(k),n}\right]^{H}\bm{M}^{(k),n}+I_{k}\bm{\Sigma}^{(k),n}\right]^{*}\Bigg{)}.$ (49) Substituting (49) into (46) yields the result in (29). ## References * [1] E. G. Larsson, O. Edfors, F. Tufvesson and T. L. Marzetta, “Massive MIMO for next generation wireless systems,” _IEEE Communications Magazine_ , vol. 52, no. 2, pp. 186-195, Feb. 2014. * [2] E. Bjornson, E. G. Larsson and T. Marzetta, “Massive MIMO: Ten myths and one critical question,” _IEEE Communications Magazine_ , vol. 54, no. 10, pp. 114-123, Feb. 2016. * [3] L. Li, T.-H. Chang and S. Cai, “UAV positioning and power control for two-way wireless relaying,” _IEEE Trans. on Wireless Communications_ , vol. 19, no. 2, pp. 1008-1024, Feb. 2020. * [4] S. Wang, M. Xia, and Y-C. Wu, “Backscatter data collection with unmanned ground vehicle: mobility management and power allocation,” _IEEE Trans. on Wireless Communications_ , vol. 18, no. 4, pp. 2314-2328, Apr. 2019. * [5] X. Li, Z. Liu, N. Qin, and S. Jin, “FFR based joint 3D beamforming interference coordination for multi-cell FD-MIMO downlink transmission systems,” _IEEE Trans. on Vehicular Technology_ , vol. 69, no. 3, pp. 3105-3118, Mar. 2020. * [6] Y. Huang, Q. Wu, T. Wang, G. Zhou, and R. Zhang, “3D beam tracking for cellular-connected UAV,” _IEEE Wireless Communications Letters_ , vol. 9, no. 5, pp. 736-740, May 2020. * [7] S. M. Razavizadeh, M. Ahn, and I. 
Lee, “Three-dimensional beamforming: A new enabling technology for 5G wireless networks,” _IEEE Signal Processing Magazine_ , vol. 31, no. 6, pp. 94-101, Nov. 2014. * [8] Y. H. Nam, B. L. Ng, K. Sayana, Y. Li, J. Zhang, Y. Kim, and J. Lee, “Full-dimension MIMO (FD-MIMO) for next generation cellular technology,” _IEEE Communications Magazine_ , vol. 51, no. 6, pp. 172-179, 2013. * [9] Y. Kim, H. Ji, J. Lee, Y. H. Nam, B. L. Ng, I. Tzanidis, and J. Zhang, “Full dimension MIMO (FD-MIMO): The next evolution of MIMO in LTE systems,” _IEEE Wireless Communications_ , vol. 21, no. 2, pp. 26-33, 2014. * [10] T.-H. Chang, W.-C. Chiang, Y.-W. Peter Hong, and C.-Y. Chi, “Training sequence design for discriminatory channel estimation in wireless MIMO systems,” _IEEE Trans. on Signal Processing_ , vol. 58, no. 12, pp. 6223-6237, Dec. 2010. * [11] C. K. Wen, S. Jin, K. K. Wong, J. C. Chen, and P. Ting, “Channel estimation for massive MIMO using Gaussian-mixture Bayesian learning,” _IEEE Trans. on Wireless Communications_ , vol. 14, no. 3, pp. 1356-1368, 2014. * [12] C. Qian, X. Fu, and N. D. Sidiropoulos, “Algebraic channel estimation algorithms for FDD massive MIMO systems,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 13, no. 5, pp. 961-973, Jun. 2019. * [13] Y. Yang, F. Gao, Z. Zhong, B. Ai, and A. Alkhateeb, “Deep transfer learning based downlink channel prediction for FDD massive MIMO systems,” arXiv preprint arXiv:1912.12265, 2019. * [14] K. P. Murphy, _Machine learning: a probabilistic perspective_ , MIT press, 2012. * [15] R. Shafin, L. Liu, Y. Li, A. Wang, and J. Zhang, “Joint angle and delay estimation for 3D massive MIMO systems based on parametric channel modelling”, _IEEE Trans. on Wireless Communications_ , vol. 16, no. 8, pp. 5370-5383, Aug. 2017. * [16] J. Kaleva, N. J. Myers, A. Tölli, R. W. Heath, and U. 
Madhow, “Short range 3D MIMO mmWave channel reconstruction via geometry-aided AoA estimation,” _in 2019 IEEE Asilomar Conference on Signals, Systems, and Computers_ , pp. 427-431, 2019. * [17] L. Cheng, C. Xing, and Y-C. Wu, “Irregular array manifold aided channel estimation in massive MIMO communications,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 13, no. 5, pp. 974-988, Sep. 2019. * [18] F. Gao, Z. Tian, E. G. Larsson, M. Pesavento, and S. Jin, “Introduction to the special issue on array signal processing for angular models in massive MIMO communications,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 13, no. 5, pp. 882-885, Sep. 2019. * [19] H. L. Van Trees, _Optimum Array Processing: Part IV of Detection, Estimation, and Modulation Theory_ , John Wiley & Sons, 2004. * [20] M. Pesavento, A. B. Gershman, and M. Haardt, “Unitary root-MUSIC with a real-valued eigen-decomposition: A theoretical and experimental performance study,” _IEEE Trans. on Signal Processing_ , vol. 48, no. 5, pp. 1306-1314, 2000. * [21] R. Roy and T. Kailath, “ESPRIT-estimation of signal parameters via rotational invariance techniques,” _IEEE Trans. on Acoustics, Speech, and Signal Processing_ , vol. 37, no. 7, pp. 984-995, 1989. * [22] D. Fan, F. Gao, G. Wang, Z. Zhong, and A. Nallanathan, “Angle domain signal processing aided channel estimation for indoor 60GHz TDD/FDD massive MIMO systems,” _IEEE Journal on Selected Areas in Communications_ , vol. 35, no. 9, pp. 1948-1961, 2017. * [23] Z. Guo, X. Wang and W. Heng, “Millimeter-wave channel estimation based on 2-D beamspace MUSIC method,” _IEEE Trans. on Wireless Communications_ , vol. 16, no. 8, pp. 5384-5394, 2017. * [24] R. Shafin, L. Liu, and J. Zhang, “DoA Estimation and RMSE characterization for 3D massive-MIMO/FD-MIMO OFDM system,” in _2015 IEEE Global Communications Conference (GLOBECOM)_ , pp. 1-6, Dec. 2015. * [25] D. C. Araújo, A. L. De Almeida, J. P. Da Costa, and R. T. 
de Sousa, “Tensor-based channel estimation for massive MIMO-OFDM systems,” _IEEE Access_ , vol. 7, pp. 42133-42147, 2019. * [26] F. Wen, and C. Liang, “Improved tensor-MODE based direction-of-arrival estimation for massive MIMO systems,” _IEEE Communications Letters_ , vol. 19, no. 12, pp. 2182-2185, 2015. * [27] L. Cheng, Y-C. Wu, J. Zhang, and L. Liu, “Subspace identification for DOA estimation in massive/full-dimension MIMO system: bad data mitigation and automatic source enumeration,” _IEEE Trans. on Signal Processing_ , vol. 63, no. 22, pp. 5897-5909, Nov. 2015. * [28] L. Cheng, Y-C. Wu, S. Ma, J. Zhang and L. Liu, “Channel estimation in full-dimensional massive MIMO system using one training symbol,” _in Proceedings of the IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)_ , Hokkaido, Japan, July 2017. * [29] N. D. Sidiropoulos, L. D. Lathauwer, X. Fu, K. Huang, E. E. Papalexakis and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” _IEEE Trans. on Signal Processing_ , vol. 65, no. 13, pp. 3551-3582, 2017. * [30] Y. Xu and W. Yin, “A block coordinate descent method for regularized multi-convex optimization with applications to nonnegative tensor factorization and completion,” _SIAM Journal on Imaging Sciences_ , vol. 6, no. 3, pp. 1758-1789, 2013. * [31] H. Shi, S. Tu, Y. Xu and W. Yin, “A primer on coordinate descent algorithms,” 2016, arXiv preprint arXiv:1610.00040. * [32] M. J. Beal, _Variational algorithms for approximate Bayesian inference_ , London, University of London, 2003. * [33] D. J. MacKay, “Probable networks and plausible predictions-a review of practical Bayesian methods for supervised neural networks,” _Computation in Neural Systems_ , vol. 6, no. 3, pp. 469-505, 1995. * [34] M. E. Tipping, “Sparse Bayesian learning and the relevance vector machine,” _Journal of Machine Learning Research_ , vol. 1, pp. 211-244, Jun. 2001. * [35] L. Cheng, Y-C. Wu, and H. V. 
Poor, “Probabilistic tensor canonical polyadic decomposition with orthogonal factors,” _IEEE Trans. on Signal Processing_ , vol. 65, no. 3, pp. 663-676, Feb. 2017. * [36] L. Cheng, X. Tong, S. Wang, Y-C. Wu, and H. V. Poor, “Learning nonnegative factors from tensor data: probabilistic modelling and inference algorithm,” _IEEE Trans. on Signal Processing_ , accepted, Feb. 2020. * [37] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” _IEEE Trans. on Signal Processing_ , vol. 56, no. 6, pp. 2346-2356, Jun. 2008. * [38] D. Wipf, and B. Rao, “Sparse Bayesian learning for basis selection,” _IEEE Trans. on Signal processing_ , vol. 52, no. 8, pp. 2153-2164, Jun. 2004. * [39] S. M. Kay, “Fundamentals of statistical signal processing, volume i: Estimation theory,” PTR Prentice-Hall, Englewood Cliffs, 1993. * [40] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” _SIAM Review_ , vol. 51, no. 3, pp. 455-500, Aug. 2009. * [41] Q. Liu and D. Wang, “Stein variational gradient descent: A general purpose Bayesian inference algorithm,” _in Advances in Neural Information Processing Systems (NeurIPS)_ , pp. 2378-2386, 2016. * [42] M. Hoffman, D. Blei, J. Paisley, and C. Wang, “Stochastic variational inference,” _Journal of Machine Learning Research_ , vol. 14, pp. 1303-1347, 2013. * [43] M. J. Wainwright and M. I. Jordan, “Graphical models, exponential families, and variational inference,” _Foundations and Trends in Machine Learning_ , vol. 1, no. 102, pp. 1-305, Jan. 2008. * [44] C. Zhang, J. Butepage, H. Kjellstrom and S. Mandt, “Advances in variational inference,” _IEEE Trans. on Pattern Analysis and Machine Intelligence_ , vol. 41, no. 8, pp. 2008-2026, Aug. 2019. * [45] A. K. Gupta and D. K. Nagar, _Matrix Variate Distributions_ , CRC Press, 1999. * [46] M. Wang, F. Gao, S. Jin, and H. Lin, “An overview of enhanced massive MIMO with array signal processing techniques,” _IEEE J. Sel. Topics Signal Process._ , vol. 13, no. 5, pp. 886-901, Sep. 2019. 
* [47] X. Gao, L. Dai, S. Zhou, A. M. Sayeed, and L. Hanzo, “Wideband beamspace channel estimation for millimeter-wave MIMO systems relying on lens antenna arrays,” _IEEE Trans. Signal Process._ , vol. 67, no. 18, pp. 4809-4824, Sep. 2019. * [48] Z. Ding, L. Dai, and H. V. Poor, “MIMO-NOMA design for small packet transmission in the Internet of things,” _IEEE Access_ , vol. 4, pp. 1393-1405, Apr. 2016. * [49] L. Liu, E. G. Larsson, W. Yu, P. Popovski, C. Stefanovic, and E. de Carvalho, “Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the Internet of Things,” _IEEE Signal Process. Mag._ , vol. 35, no. 5, pp. 88-99, Sep. 2018. * [50] S. Sahnoun and P. Comon, “Joint source estimation and localization,” _IEEE Trans. Signal Process._ , vol. 63, no. 10, pp. 2485-2495, May 2015. * [51] B. C. Arnold, N. Balakrishnan, and H. N. Nagaraja, _A First Course in Order Statistics_ , Society for Industrial and Applied Mathematics, 2008.
[AAMAS ’21] Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), May 3–7, 2021, London, UK. U. Endriss, A. Nowé, F. Dignum, A. Lomuscio (eds.) 2021 University of Luxembourg, SnT - Interdisciplinary Centre for Security, Reliability and Trust # Incentive Mechanism Design for Federated Learning: Hedonic Game Approach Cengis Hasan<EMAIL_ADDRESS> ###### Abstract. Incentive mechanism design is crucial for enabling federated learning. We deal with the clustering problem of agents contributing to a federated learning setting. Assuming agents behave selfishly, we model their interaction as a stable coalition partition problem using hedonic games, where agents and clusters are the players and coalitions, respectively. We address the following question: is there a family of hedonic games that ensures a Nash-stable coalition partition? We propose the Nash-stable set, which determines the family of hedonic games possessing at least one Nash-stable partition, and analyze the conditions for the Nash-stable set to be non-empty. Besides, we deal with decentralized clustering: we formulate the problem as a non-cooperative game and prove the existence of a potential game. ###### Key words and phrases: Federated Learning, Hedonic Games, Optimal Clustering ## 1\. Introduction Data protection is a major concern. If we do not trust someone to hold our data, we may opt for federated learning, privately developing intelligent systems to create privacy-preserving AI. Federated learning enables privacy-preserving machine learning in a decentralized way Li et al. (2020). It is used in situations where data is distributed among different agents and centralized training is impractical because the data is difficult to collect in one place. All data is kept on device, while a shared (global) learning model is trained on each device and aggregated (combined) centrally. 
Formally, we consider the following setting: i) data owner agents, which locally train the shared learning model, and ii) a model aggregating entity (MAE), which combines its own learning model with those of the agents. The MAE and agents contribute to the same shared learning model. Federated learning has been identified as a distributed machine learning framework that sees rapid advances and broad adoption in next generation networking and edge systems Elgabli et al. (2020); Li et al. (2020); Kairouz and McMahan (2021); Zheng et al. (2020); Samarakoon et al. (2018); Khan et al. (2020); Hosseinalipour et al. (2020); Yang et al. (2020). Obviously, the motivation to implement federated learning is to reduce the variance in a learned model by accessing more data. A crucial question is how the MAE would motivate the agents to participate in federated learning. The agents' incentive mechanism can be designed using various frameworks, such as game theory and auction theory Samarakoon et al. (2018). Any clustering among agents (players) able to make strategic decisions becomes a coalition formation game when the players, for various individual reasons, may wish to belong to a relatively small coalition rather than the grand coalition, the set of all players. Players' moves from one coalition to another are governed by a set of rules. Basically, an agent (player) will move to a new coalition when it can obtain a better gain from this coalition. We shall not consider any permission requirements, which means that a player is always accepted by any coalition it is willing to join. Based on those rules, the crucial question in the game context is whether a stable partition exists. This is essential to enable federated learning. We study the hedonic coalition formation game model of the agents and analyze the Nash stability Hajduková (2006). 
A coalition formation game is called hedonic if each player's preferences over partitions of players depend only on the members of his/her coalition. Finding a stable coalition partition is the main question in a coalition formation game. We refer to Aziz and Brandl (2012) for a discussion of the stability concepts associated with hedonic conditions. In the sequel, we concentrate on the Nash stability. The definition of Nash stability is quite simple: a partition of players is Nash stable whenever no player benefits from deviating from its coalition to another coalition in the partition. In this work, we deal with the following problem: given the gain associated with each coalition, how must the coalition gain be allocated to the players in order to obtain a stable coalition partition? Clearly, the fundamental question is to determine which gain allocation methods may ensure a Nash-stable partition. Note that answering this enables us to find the family of hedonic games that possess at least one Nash-stable partition of players. We first propose the definition of the Nash-stable set, which is the set of all possible allocation methods resulting in Nash-stable partitions. We show that additively separable and symmetric gain allocation always ensures Nash-stable partitions. Moreover, our work also aims at finding the partitions in a decentralized setting, which basically corresponds to finding stable decentralized clustering. We model this problem as a non-cooperative game and show that such a game is a potential game. A recent work that considers the clustering of agents in the form of hedonic games can be found in Donahue and Kleinberg (2021), where the authors study the agents' decisions to participate in a federated learning setting in the case of a biased global model. In Lim et al. (2020), a federated learning based privacy-preserving approach is proposed to facilitate collaborative machine learning among multiple model owners in mobile crowdsensing. 
Another work, Kim (2020), implements mechanism design and differential privacy, where an objectives-first approach is taken to designing incentives toward desired objectives; differential privacy can provide a theoretical guarantee for users' privacy in federated learning participation. In Le et al. (2021), the incentive mechanism between a base station and mobile users is formulated as an auction game, where the base station is the auctioneer and the mobile users are the sellers. In Ding et al. (2020), the authors consider a multidimensional contract-theoretic approach to optimal incentive mechanism design in the presence of users' multi-dimensional private information, including training cost and communication delay. The work in Zhan and Zhang (2020) develops a deep reinforcement learning based approach to design the incentive mechanism and find the optimal trade-off between model training time and the parameter server's payment. The authors in Sarikaya and Ercetin (2020) analyze the influence of heterogeneous clients on federated learning convergence, and propose an incentive mechanism to balance the time delay of each iteration. For a recent survey on mechanism design for federated learning, we refer to Zhan et al. (5555). ## 2\. Motivation and Problem Description Figure 1. Federated learning framework. We consider a set of agents denoted $N=\\{1,2,\ldots,n\\}$ that can participate in the federated learning setting, and a model aggregating entity (MAE) which aggregates (combines) its own learning model with those of the agents. The MAE and agents contribute to the same global learning model. 
The parameters of the learning models of the MAE and agent $i$ are represented by $\bm{\theta}_{\text{MAE}}=(\theta_{\text{MAE},1},\theta_{\text{MAE},2},\ldots,\theta_{\text{MAE},M})$ and $\bm{\theta}_{i}=(\theta_{i,1},\theta_{i,2},\ldots,\theta_{i,M})$, respectively, where $M$ is the number of trainable weights (variables) of the learning model, and every agent $i$ has $m_{i}$ data samples. We also consider that the communication link between the MAE and agent $i$ can be characterized by a probability of reliable transmission (e.g., low bit error rate) denoted $p_{i}$. The vector of probabilities of reliable transmission of all agents is denoted by $\bm{p}=(p_{1},p_{2},\ldots,p_{n})$. If a cluster, say $S\subseteq N$, of agents agree to be federated, then the aggregated (combined) learning model is found by using an aggregation method given by $(\bm{\theta}^{S};\bm{x})=\mathsf{A}(\bm{\theta}_{1},\bm{\theta}_{2},\ldots,\bm{\theta}_{n_{S}};\bm{x}),\quad(n_{S}=|S|)$ (1) where $\mathsf{A}(\cdot)$ denotes the aggregation function given $\bm{\theta}_{1},\bm{\theta}_{2},\ldots,\bm{\theta}_{n_{S}}$ and $\bm{x}=(x_{1},x_{2},\ldots,x_{n})$, in which $x_{i}=1$ if the MAE successfully receives $\bm{\theta}_{i}$. This is nothing more than choosing agent $i$ with probability $p_{i}$. ### 2.1. Incentives of Agents We assume that agents may not agree to be in the same cluster, depending on their preferences. Thus, multiple disjoint clusters may occur. In Figure 1, we illustrate an example scenario in which two disjoint clusters, i.e. $\\{1,2,3\\}$ and $\\{4,5\\}$, create two different aggregated learning models, i.e. $(\bm{\theta}^{\\{1,2,3\\}};\bm{x})$ and $(\bm{\theta}^{\\{4,5\\}};\bm{x})$, respectively. From the perspective of the agents, we assume that the MAE assigns a gain to every possible cluster. Note that from the MAE's point of view, this is the cost that must be paid to the cluster. 
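The aggregation in (1), with reception modeled as Bernoulli draws with probabilities $p_{i}$, can be sketched as follows. This is a minimal illustration that uses the sample-count-weighted average (the paper's choice in (2)) as $\mathsf{A}(\cdot)$; the function and the numbers below are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def aggregate(thetas, m, p, rng):
    """Weighted-average aggregation over agents whose update is received.

    thetas: (n, M) array of agent model parameters theta_i
    m:      (n,)  sample counts m_i
    p:      (n,)  reception probabilities p_i
    """
    x = rng.random(len(p)) < p           # x_i = 1 with probability p_i
    w = x * m                            # weight received agents by m_i
    if w.sum() == 0:                     # nothing received this round
        return None
    return (w[:, None] * thetas).sum(axis=0) / w.sum()

thetas = rng.standard_normal((5, 3))     # 5 agents, M = 3 parameters
m = np.array([10, 20, 30, 40, 50])
p = np.array([0.9, 0.8, 0.95, 0.7, 0.99])
theta_S = aggregate(thetas, m, p, rng)
print(theta_S)
```

With all $p_{i}=1$ this reduces to the deterministic weighted average of the agents' models.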
In this way, the MAE evaluates the contribution to the aggregated model. However, we shall formulate the problem using the “gain” term, since from the agents' point of view this corresponds to the earnings of the agents. The question then arises of how to design the incentives so that the agents are willing to participate in the federated learning setting, taking into account their preferences. In this work, we consider the following linear aggregation method: $(\bm{\theta}^{S};\bm{x})=\frac{\sum_{i\in S}x_{i}m_{i}\bm{\theta}_{i}}{\sum_{i\in S}x_{i}m_{i}},\quad x_{i}=\begin{cases}1,&\mbox{reception successful},\\\ 0,&\mbox{otherwise},\end{cases}$ (2) which essentially corresponds to the weighted average of the learning models within the cluster. Furthermore, we represent by $\mathcal{L}(\bm{\theta}^{S};\bm{x})$ the loss of the learning model with parameters $\bm{\theta}^{S}$. The expected value of the loss function is given by $\displaystyle\mathbb{E}_{\bm{x}}[\mathcal{L}(\bm{\theta}^{S})]=\sum_{\bm{x}\in\mathcal{X}}\mathcal{L}(\bm{\theta}^{S};\bm{x})\mathbb{P}[{\bm{x}}]$ (3) $\displaystyle\mathbb{P}[{\bm{x}}]=\prod_{i\in N}p_{i}^{x_{i}}(1-p_{i})^{1-x_{i}}$ (4) where $\mathcal{X}$ with $|\mathcal{X}|=2^{n}$ is the set of all possible combinations of $\bm{x}$ vectors. On the other hand, given $\bm{x}$ and cluster $S$, the loss of the aggregated model due to $S$ is lower than the loss averaged over the agents in $S$: $\mathcal{L}(\bm{\theta}^{S};\bm{x})\leq\frac{\sum_{i\in S}x_{i}\mathcal{L}(\bm{\theta}_{i})}{\sum_{i\in S}x_{i}}$ (5) because when the disjoint agents are merged, the amount of data used to train the model increases, which results in lower training error. If two disjoint clusters $S$ and $T$, i.e. $S\cap T=\emptyset$, are federated, then we denote the new parameters as $\bm{\theta}^{S\cup T}$. 
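The expectation in (3)-(4) sums over all $2^{n}$ reception vectors. For small $n$ it can be computed exactly by enumeration, as sketched below with a stand-in loss function (the actual loss depends on the trained model, so the `loss` used here is purely illustrative):

```python
import itertools
import numpy as np

def expected_loss(loss_fn, p):
    """E_x[L(theta^S)]: enumerate all 2^n reception vectors x, each
    weighted by prod_i p_i^{x_i} (1 - p_i)^{1 - x_i} (eqs. (3)-(4))."""
    n = len(p)
    total = 0.0
    for x in itertools.product([0, 1], repeat=n):
        prob = np.prod([p[i] if x[i] else 1 - p[i] for i in range(n)])
        total += loss_fn(x) * prob
    return total

# Illustrative loss: fewer received updates -> higher loss.
p = [0.9, 0.8, 0.7]
loss = lambda x: 1.0 / (1 + sum(x))
print(expected_loss(loss, p))
```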
It is reasonable to assume that the loss of $\bm{\theta}^{S\cup T}$ is lower than the weighted average of the losses of $\bm{\theta}^{S}$ and $\bm{\theta}^{T}$: $\mathcal{L}(\bm{\theta}^{S\cup T};\bm{x})\leq\tfrac{\sum_{i\in S}x_{i}}{\sum_{i\in S\cup T}x_{i}}\mathcal{L}(\bm{\theta}^{S};\bm{x})+\tfrac{\sum_{i\in T}x_{i}}{\sum_{i\in S\cup T}x_{i}}\mathcal{L}(\bm{\theta}^{T};\bm{x}).$ (6) Moreover, we consider that there exists a communication cost, denoted $c$, when the MAE receives the learning model parameters' data; note that this data increases with the size of the cluster. On the other hand, the MAE earns a monetary gain by utilizing the aggregated model and commits a monetary value which can be paid to the agents. Then, the agents deduct the communication cost $c$ from what they earn from the MAE. We represent by $u$ the gain function, which assigns a real value to every subset of $N$, i.e. $u:2^{N}\rightarrow\mathbb{R}$, where $2^{N}$ is the collection of all subsets of $N$ including the empty set $\emptyset$, and we set $u(\emptyset)=0$. Thus, the gain of cluster $S\in 2^{N}$ is given by $u(S)=f\left(\tfrac{1}{\mathbb{E}_{\bm{x}}[\mathcal{L}(\bm{\theta}^{S})]}\right)-c(S),$ (7) where $f:\mathbb{R}\rightarrow\mathbb{R}$ is a monotonically increasing function of $\tfrac{1}{\mathbb{E}_{\bm{x}}[\mathcal{L}(\bm{\theta}^{S})]}$, meaning the lower the loss, the higher the gain. Note that this is the monetary value that cluster $S$ earns. Any agent $i$ can join a cluster if guaranteed to be paid at least $u(i)=\pi_{i}$, which is the minimal price given by $\pi_{i}=f\left(\tfrac{p_{i}}{\mathcal{L}(\bm{\theta}_{i})}\right).$ (8) Note that $p_{i}$ is a second parameter which has an impact on the price asked by the agent: the poorer the value of $p_{i}$, the lower the price the agent asks to participate in the federation. ### 2.2. Optimal Clustering Consider that the MAE aims at finding the clustering that results in minimal cost. 
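The gain in (7) and the minimal price in (8) can be sketched directly; the choice of $f$, the loss values, and the cost below are illustrative assumptions, not values from the paper:

```python
def gain(f, expected_loss_S, cost_S):
    """u(S) = f(1 / E[L(theta^S)]) - c(S), eq. (7)."""
    return f(1.0 / expected_loss_S) - cost_S

def minimal_price(f, p_i, loss_i):
    """pi_i = f(p_i / L(theta_i)), eq. (8)."""
    return f(p_i / loss_i)

f = lambda z: 10.0 * z       # assumed monotonically increasing mapping
u_S = gain(f, expected_loss_S=0.25, cost_S=5.0)
pi_1 = minimal_price(f, p_i=0.9, loss_i=0.5)
print(u_S, pi_1)
```

A less reliable link (smaller $p_{i}$) lowers `pi_1`, matching the remark that such an agent asks a lower price.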
The optimal clustering problem is defined through the agents $N$ and a _clustering_ set $\Pi$ which partitions the agents' set $N$ such that $\bigcup_{S\in\Pi}S=N$. All clusters in $\Pi$ are disjoint, i.e., $S\cap T=\emptyset$ for all distinct $S,T\in\Pi$. Given $\mathcal{P}$, the set of all possible clustering structures, the optimal clustering problem is to find a clustering $\Pi\in\mathcal{P}$ which minimizes the objective while satisfying the constraints of the agents: $\displaystyle\min_{\Pi\in\mathcal{P}}\sum_{S\in\Pi}u(S)\mbox{ subject to}$ $\displaystyle\sum_{i\in S}\pi_{i}\leq u(S),\quad\forall S\in\Pi,$ (9) where the constraints in (9) ensure that the demands of the agents are satisfied. The optimization only guarantees that the agents are paid their minimal price; it does not specify how much more they receive if the gain of the cluster allows it. ### 2.3. Clustering under Selfishness As agents can behave selfishly, the fundamental question is to find clusters which are stable under selfishness. Let us consider that agent $i$ gets a monetary gain by joining cluster $S$ as follows: $\mbox{gain of agent }i=\pi_{i}+\phi_{i}^{S}$ (10) where $\phi_{i}^{S}\in\mathbb{R}$ is the clustering gain of agent $i$ from joining cluster $S$, and we set $\phi_{i}^{i}=0$ for all $i\in N$. Such a setting enables us to deal with the clustering gains. Thus, the fundamental problem becomes to > find clustering gains $\phi$ so that the agents agree not to change their > cluster Obviously, this is the stable clustering problem under selfishness, where the agents strategically decide which cluster to join; thus, we can define the problem as a coalition formation game. We then restate the problem in game-theoretic terms, i.e. agents $\rightarrow$ players cluster $\rightarrow$ coalition clustering $\rightarrow$ partition In the sequel, we deal with figuring out the family of coalition formation games that ensures stable clusterings. ## 3\. 
Hedonic Game A hedonic coalition formation game (in short, a hedonic game) is given by a pair $\langle N,\succ\rangle$, where $\succ:=(\succeq_{1},\succeq_{2},\ldots,\succeq_{n})$ denotes the preference profile, specifying for each player $i\in N$ his preference relation $\succeq_{i}$, i.e. a reflexive, complete and transitive binary relation. Given a coalition partition $\Pi$ and a player $i$, $S_{\Pi}(i)$ denotes the set $S\in\Pi$ such that $i\in S$. Moreover, $\mathcal{P}$ is the set of all possible coalition partitions over $N$. In its partition form, a coalition formation game is defined on the set $N$ by associating a gain $u(S|\Pi)$ to each subset of any partition $\Pi$ of $N$. The gain of a set is independent of the other coalitions, and therefore $u(S|\Pi)=u(S)$. Games of this form are more restrictive but present interesting properties for reaching stability. Practically speaking, this assumption means that the gain of a group is independent of the players outside the group. Hedonic games fall into this category with an additional assumption: ###### Definition . A coalition formation game is hedonic if * • _the gain of any player depends solely on the members of the coalition to which the player belongs_ , and * • _the coalitions arise as a result of the preferences of the players over their possible coalitions' set_. ### 3.1. Preference Relation The preference relation of a player can be defined through a _preference function_. We consider the case where the preference relation is given by the gain allocated to the player in a coalition. Thus, player $i$ prefers coalition $S$ to $T$ iff $\phi_{i}^{S}\geq\phi_{i}^{T}\Leftrightarrow S\succeq_{i}T.$ (11) ### 3.2. The Nash Stability Various stability concepts exist for hedonic games. In the literature, a hedonic game could be individually stable, Nash stable, core stable, strict core stable, Pareto optimal, strong Nash stable, or strict strong Nash stable. 
We refer to Aziz and Brandl (2012) for a thorough definition of these different stability concepts. In this paper, we are only interested in the Nash stability, because the players do not cooperate to take their decisions jointly. ###### Definition (Nash Stability). A partition of players is Nash-stable whenever no player has an incentive to unilaterally change its coalition to another coalition in the partition, which can be mathematically formulated as follows: partition $\Pi^{\text{NS}}$ is said to be Nash-stable if no player can benefit from moving from his coalition $S_{\Pi^{\text{NS}}}(i)$ to another existing coalition $T\in\Pi^{\text{NS}}$, i.e.: $S_{\Pi^{\text{NS}}}(i)\succeq_{i}T\cup i,\quad\forall T\in\Pi^{\text{NS}}\cup\emptyset;\forall i\in N,$ (12) which can equivalently be stated over the preference function as follows: $\phi_{i}^{S_{\Pi^{\text{NS}}}(i)}\geq\phi_{i}^{T\cup i},\quad\forall T\in\Pi^{\text{NS}}\cup\emptyset;\forall i\in N.$ (13) Nash-stable partitions are immune to individual movements even when a player who wants to change does not need permission to join or leave an existing coalition Bogomolnaia and Jackson (2002). ###### Remark 3.0. The stability concepts immune to individual deviation are _Nash stability, individual stability, contractual individual stability_ ; Nash stability is the strongest among them. The notion of _core stability_ has been used in models where immunity to coalition deviation is required Hajduková (2006). ###### Remark 3.0. In Barber and Gerber (2007), the authors propose a set of axioms: non-emptiness, symmetry, Pareto optimality, and self-consistency; and they analyze the existence of any stability concept that can satisfy these axioms. It is proven that for any game with $|N|>2$, there does not exist any solution which satisfies these axioms. ### 3.3. 
Aggregated Learning Model Parameters When a stable partition exists, this means that all the players (agents) have agreed to participate in the federation. As a result, the MAE utilizes the following aggregation of learning model parameters: $\bm{\theta}^{F}=w\bm{\theta}_{\text{MAE}}+(1-w)\frac{\sum_{i\in N}x_{i}m_{i}\bm{\theta}_{i}}{\sum_{i\in N}x_{i}m_{i}}$ (14) where $0\leq w\leq 1$ is a weighting parameter balancing the MAE's local model against the aggregated learning model parameters of the agents (players), and $\bm{\theta}_{\text{MAE}}$ denotes the learning parameters of the MAE's local model. In summary, we have the following procedure: while Nash-stable partition $\Pi^{\text{NS}}$ exists 1. Player (agent) $i$ sends information of $\bm{\theta}_{i}$, for all $i\in N$ 2. MAE calculates aggregated learning model parameters $\bm{\theta}^{F}$ end Given $\bm{\theta}^{F}$, the expected value of the loss function in the federation can be calculated as follows: $\displaystyle\mathbb{E}_{\bm{x}}[\mathcal{L}(\bm{\theta}^{F})]$ $\displaystyle=\sum_{\bm{x}\in\mathcal{X}}\mathcal{L}(\bm{\theta}^{F};\bm{x})\mathbb{P}[\bm{x}]$ $\displaystyle\geq\mathcal{L}(\mathbb{E}_{\bm{x}}[\bm{\theta}^{F};\bm{x}])\qquad(\text{Jensen's inequality})$ (15) where $\mathbb{E}_{\bm{x}}[\bm{\theta}^{F};\bm{x}]=w\bm{\theta}_{\text{MAE}}+(1-w)\sum_{\bm{x}\in\mathcal{X}}\frac{\sum_{i\in N}x_{i}m_{i}\bm{\theta}_{i}}{\sum_{i\in N}x_{i}m_{i}}\mathbb{P}[\bm{x}].$ (16) Note that calculating $\mathbb{E}_{\bm{x}}[\mathcal{L}(\bm{\theta}^{F})]$ may be more difficult than $\mathbb{E}_{\bm{x}}[\bm{\theta}^{F};\bm{x}]$. Therefore, it is also an option to define the gain of a cluster using $\mathbb{E}_{\bm{x}}[\bm{\theta}^{F};\bm{x}]$ in eq. (7). ## 4\. The Nash-stable Set As the gain $u$ associated with all possible coalitions is known, we are interested in finding a gain distribution that ensures Nash stability. 
We thus define an allocation method $\bm{\phi}\in\mathbb{R}^{\kappa}$, where $\kappa=n2^{n-1}$, as follows: $\bm{\phi}=\\{\phi_{i}^{S}:\forall i\in S,\forall S\in 2^{N}\\}$ (17) which directly sets up a preference profile. The set of all possible allocation methods is denoted by $\mathcal{F}\subset\mathbb{R}^{\kappa}$. We define the mapping $\mathsf{M}$ which, for any allocation method $\bm{\phi}$, finds all corresponding Nash-stable partitions, i.e. $\mathsf{M}(\bm{\phi})\subset\mathcal{P}$. We define the Nash-stable set, which includes all allocation methods that admit a Nash-stable partition: $\mathscr{N}_{\text{stable}}=\left\\{\bm{\phi}\in\mathbb{R}^{\kappa}:\exists\Pi\in\mathsf{M}(\bm{\phi})|S_{\Pi}(i)\succeq_{i}T\cup i,\right.\\\ \left.\forall T\in\Pi\cup\emptyset;\forall i\in N\right\\}.$ (18) Essentially, the Nash-stable set comprises > the family of hedonic games, each one having a different preference profile > that derives from a different allocation method. Thus, before finding a > Nash-stable partition, we need to find the hedonic game (i.e., an allocation > method) for which a Nash-stable partition exists. Let us define the set of constraints stemming from the preference function in order to check if the Nash-stable set is non-empty. 
Due to the gain bound, for any allocation method $\bm{\phi}$, we have $\sum_{i\in S}(\pi_{i}+\phi_{i}^{S})\leq u(S),\quad\forall S\in 2^{N}$ called the budget-balanced gain allocation, which can further be written as $\sum_{i\in S}f\left(\tfrac{p_{i}}{\mathcal{L}(\bm{\theta}_{i})}\right)+\sum_{i\in S}\phi_{i}^{S}\leq f\left(\tfrac{1}{\mathbb{E}_{\bm{x}}[\mathcal{L}(\bm{\theta}^{S})]}\right)-c(S),\quad\forall S\in 2^{N}.$ (19) For simplicity, let us define the marginal gain as follows: $\Delta_{\bm{\theta}}(S)=\begin{cases}f\left(\tfrac{1}{\mathbb{E}_{\bm{x}}[\mathcal{L}(\bm{\theta}^{S})]}\right)-c(S)-\sum_{i\in S}f\left(\tfrac{p_{i}}{\mathcal{L}(\bm{\theta}_{i})}\right),&\forall S\in 2^{N}\setminus i,\\\ 0,&\forall i\in N.\end{cases}$ (20) This results in the following constraints: $\mathscr{C}_{\bm{\theta}}^{1}(\bm{\phi}):=\left\\{\sum_{i\in S}\phi_{i}^{S}\leq\Delta_{\bm{\theta}}(S),\forall S\in 2^{N}\right\\}.$ (21) which stem from budget balancedness. On the other hand, for any $\bm{\phi}$, the constraints that ensure Nash stability are given by $\mathscr{C}_{\bm{\theta}}^{2}(\bm{\phi}):=\left\\{\exists\Pi\in\mathsf{M}(\bm{\phi})\left|\phi_{i}^{S_{\Pi}(i)}\geq\phi_{i}^{T\cup i},\forall T\in\Pi\cup\emptyset;\forall i\in N\right.\right\\}.$ (22) Based on these two sets of constraints, represented by $\mathscr{C}_{\bm{\theta}}^{1}(\bm{\phi})$ and $\mathscr{C}_{\bm{\theta}}^{2}(\bm{\phi})$, we can define the Nash-stable set: $\mathscr{N}_{\text{stable}}(\bm{\theta})=\left\\{\bm{\phi}\in\mathbb{R}^{\kappa}:\mathscr{C}_{\bm{\theta}}^{1}(\bm{\phi})\mbox{ and }\mathscr{C}_{\bm{\theta}}^{2}(\bm{\phi})\right\\}.$ (23) The non-emptiness of the Nash-stable set is then crucial. The theorem below states a condition for the non-emptiness of the Nash-stable set.

###### Theorem .

The Nash-stable set can be non-empty.

###### Proof.
We can check whether the Nash-stable set is non-empty by solving the following optimization problem: $\displaystyle\max_{\bm{\phi}}\sum_{S\in 2^{N}}\sum_{i\in S}\phi_{i}^{S}\mbox{ subject to }\mathscr{C}_{\bm{\theta}}^{1}(\bm{\phi})\mbox{ and }\mathscr{C}_{\bm{\theta}}^{2}(\bm{\phi}).$ If there exists any feasible solution of this problem, then we conclude that there is at least one allocation method which provides a Nash-stable partition. ∎

However, an exhaustive search over all partitions is NP-hard, as the number of partitions grows according to the Bell number. For example, with only $10$ players, the number of partitions is as large as $115,975$.

### 4.1. Superadditive Gain

If the gain function $u$ is superadditive, then it is straightforward to check that the marginal gain is also superadditive: $\Delta_{\bm{\theta}}(S\cup T)\geq\Delta_{\bm{\theta}}(S)+\Delta_{\bm{\theta}}(T)$, for all $S$ and $T$ such that $S\cap T=\emptyset$. Due to eq. (19), we have $\displaystyle\sum_{i\in S\cup T}\phi_{i}^{S\cup T}=\sum_{i\in S}\phi_{i}^{S\cup T}+\sum_{i\in T}\phi_{i}^{S\cup T}\leq\Delta_{\bm{\theta}}(S\cup T)$ $\displaystyle\sum_{i\in S}\phi_{i}^{S}+\sum_{i\in T}\phi_{i}^{T}\leq\Delta_{\bm{\theta}}(S)+\Delta_{\bm{\theta}}(T)$ $\displaystyle\Rightarrow\sum_{i\in S}\phi_{i}^{S\cup T}+\sum_{i\in T}\phi_{i}^{S\cup T}\geq\sum_{i\in S}\phi_{i}^{S}+\sum_{i\in T}\phi_{i}^{T}.$ This result means that any player is better off in a larger coalition; ultimately, all players gain the most in the grand coalition. This is evident from eq. (22), where for every player $i\in N$, $\phi_{i}^{N}\geq\phi_{i}^{S}$ for all $S\in 2^{N}$.

### 4.2. Additively Separable and Symmetric Gain

The preferences of a player are _additively separable_ whenever they can be stated with a function characterizing how a player values each other player; the player's preference for a coalition is then based on these individual valuations.
This can be formalized as follows: ###### Definition . The preferences of a player are said to be additively separable if there exists a function $v_{i}:N\rightarrow\mathbb{R}$ such that $\sum_{j\in S}v_{i}(j)\geq\sum_{j\in T}v_{i}(j)\Leftrightarrow S\succeq_{i}T,\quad\forall S,T\in 2^{N}.$ (24) The value $v_{i}(i)$ is normalized to $v_{i}(i)=0$. A profile of additively separable preferences satisfies _symmetry_ if $v_{i}(j)=v_{j}(i)=v(i,j)$, for all $i,j\in N$. Here $v(i,j)$ is the mutual gain of players $i$ and $j$ when they are in the same coalition. Let $\mathcal{V}(S)$ denote the set of all pairs of players that can occur within coalition $S$: $\mathcal{V}(S):=\\{(i,j)\in S:j>i\\},\quad|S|\geq 2.$ We then define ${\bf v}\in\mathbb{R}^{|\mathcal{V}(N)|}$, which shall serve as an allocation method generating additively separable and symmetric preferences: ${\bf v}=\left\\{v(i,j):\forall(i,j)\in\mathcal{V}(N)\right\\}$ and the mapping $\mathsf{M}$ shall find all possible Nash-stable partitions, i.e., $\mathsf{M}({\bf v})\subset\mathcal{P}$. The constraints that define the Nash-stable set are then defined over ${\bf v}$: $\mathscr{C}_{\bm{\theta}}^{1}(\bm{\phi})\rightarrow\mathscr{C}_{\bm{\theta}}^{1}({\bf v})\mbox{ and }\mathscr{C}_{\bm{\theta}}^{2}(\bm{\phi})\rightarrow\mathscr{C}_{\bm{\theta}}^{2}({\bf v})$ Further, note that the gain that player $i$ has in coalition $S$ is given by $\displaystyle\pi_{i}+\phi_{i}^{S}=f\left(\tfrac{1}{\mathcal{L}(\bm{\theta}_{i})}\right)+\sum_{j\in S}v(i,j)$ $\displaystyle\Rightarrow\phi_{i}^{S}=\sum_{j\in S}v(i,j)$ (25) On the other hand, due to the symmetry property of the mutual gain, we have the following: $\sum_{i,j\in S}v(i,j)=2\sum_{(i,j)\in\mathcal{V}(S)}v(i,j).$ For example, if $S=(1,2,3)$, then $\sum_{i,j\in S}v(i,j)=2[v(1,2)+v(1,3)+v(2,3)]$.

###### Theorem .

Additively separable and symmetric preferences always admit a Nash-stable partition.
Therefore, the constraints in $\mathscr{C}^{2}_{\bm{\theta}}({\bf v})$ are always satisfied Hajduková (2006). Based on this theorem, we only need to satisfy the constraints given by $\mathscr{C}_{\bm{\theta}}^{1}({\bf v})$. Thus, we define the Nash-stable set that generates additively separable and symmetric preferences, $\mathscr{N}_{\text{stable}}^{\text{A}}(\bm{\theta})\subset\mathscr{N}_{\text{stable}}(\bm{\theta})$, as follows: $\mathscr{N}_{\text{stable}}^{\text{A}}(\bm{\theta})=\Bigg{\\{}{\bf v}\in\mathbb{R}^{|\mathcal{V}(N)|}:\underbrace{\sum_{(i,j)\in\mathcal{V}(S)}v(i,j)\leq\tfrac{\Delta_{\bm{\theta}}(S)}{2},\forall S\in 2^{N}}_{\mathscr{C}^{1}_{\bm{\theta}}({\bf v})}\Bigg{\\}}$ (26) Finding values of $v(i,j)$ in eq. (26) satisfying the $\mathscr{C}_{\bm{\theta}}^{1}({\bf v})$ conditions can be done straightforwardly. However, we propose formulating the search for the values of $v(i,j)$ as an optimization problem. A feasible solution of the following linear program guarantees the non-emptiness of $\mathscr{N}_{\text{stable}}^{\text{A}}(\bm{\theta})$: $\displaystyle\max_{{\bf v}}\sum_{(i,j)\in\mathcal{V}(N)}v(i,j)\mbox{ subject to }$ $\displaystyle\sum_{(i,j)\in\mathcal{V}(S)}v(i,j)\leq\tfrac{\Delta_{\bm{\theta}}(S)}{2},\quad\forall S\in 2^{N},$ (27) where note that any feasible solution ${\bf v}^{*}$ is upper bounded by $\sum_{(i,j)\in\mathcal{V}(N)}v^{*}(i,j)\leq{\Delta_{\bm{\theta}}(N)}/{2}$. Furthermore, the coalition partition that stems from ${\bf v}^{*}$, given by $\Pi^{\text{NS}}\in\mathsf{M}({\bf v}^{*})$, is Nash-stable.

## 5\. Decentralized Clustering

In this section, we study finding a Nash-stable partition in a decentralized setting, which corresponds to stable decentralized clustering. In fact, we model the problem of finding a Nash-stable partition as a non-cooperative game; a hedonic coalition formation game is equivalent to a non-cooperative game. Denote by $\Sigma$ the set of strategies.
We assume that the number of strategies equals the number of players, i.e. $|\Sigma|=n$; this is sufficient to represent all possible choices. Indeed, the players that select the same strategy are interpreted as a coalition. For example, if every player chooses a different strategy, then this corresponds to the coalition partition comprised of singletons. Consider the best-reply dynamics, where in each step only one player chooses its best strategy. A strategy tuple is represented as $\bm{\sigma}=\\{\sigma_{1},\sigma_{2},\ldots,\sigma_{n}\\}$, where $\sigma_{i}\in\Sigma$ is the strategy of player $i$. In every step, only one dimension of $\bm{\sigma}$ is changed. We further define $\displaystyle S_{\bm{\sigma}}(i)=\\{j\in N:\sigma_{i}=\sigma_{j}\\}$ (28) $\displaystyle\Pi(\bm{\sigma})=\\{S_{\bm{\sigma}}(i),\forall i\in N\\}$ (29) where (28) is the set of players that share the same strategy as player $i$, and (29) is the partition of players with respect to the strategy tuple $\bm{\sigma}$. Note that $\cup_{i\in N}S_{\bm{\sigma}}(i)=N$ at each step. The gain of player $i$ under strategy tuple $\bm{\sigma}$ is denoted by $\phi_{i}(\bm{\sigma})$, which satisfies the following relation: $\phi_{i}(\bm{\sigma})\geq\phi_{i}(\bm{\sigma}^{\prime})\Leftrightarrow S_{\bm{\sigma}}(i)\succeq_{i}S_{\bm{\sigma}^{\prime}}(i),$ (30) Any sequence of strategy tuples in which each tuple differs from the preceding one in only one coordinate is called a path; if the unique deviator in each step strictly increases the gain he receives, the path is an improvement path. Clearly, any maximal improvement path (an improvement path that cannot be extended) terminates in a stable partition. ### 5.1.
Equilibrium Analysis

The Nash equilibrium is defined as follows: $\sigma_{i}^{\text{NE}}\in\arg\max_{\sigma_{i}\in\Sigma}\phi_{i}(\sigma_{i},\sigma_{-i}),\quad\forall i\in N.$ (31) which essentially corresponds to a Nash-stable partition in the original hedonic game, given by $\displaystyle S_{\bm{\sigma}^{\text{NE}}}(i)=\\{j\in N:\sigma^{\text{NE}}_{i}=\sigma_{j}\\},\quad\forall i\in N$ $\displaystyle\Pi^{\text{NE}}=\\{S_{\bm{\sigma}^{\text{NE}}}(i),\forall i\in N\\}.$ (32) In the sequel, we prove that additively separable and symmetric gains result in a potential game, in which all players have an incentive to change their strategy according to a single global function called the potential function.

###### Theorem .

Any additively separable and symmetric gain results in a potential game with potential function: $P_{{\bf v}}(\bm{\sigma})=\sum_{S\in\Pi(\bm{\sigma})}\sum_{(i,j)\in\mathcal{V}(S)}v(i,j).$ (33)

###### Proof.

A non-cooperative game is a potential game whenever there exists a function $P_{{\bf v}}$ such that: $P_{{\bf v}}(\sigma_{i},\sigma_{-i})-P_{{\bf v}}(\sigma^{\prime}_{i},\sigma_{-i})=\phi(\sigma_{i},\sigma_{-i})-\phi(\sigma^{\prime}_{i},\sigma_{-i})$ where $(\sigma_{i},\sigma_{-i})=\bm{\sigma}$ and $\sigma_{-i}$ denotes the strategies of the players other than $i$. This means that when player $i$ switches from strategy $\sigma_{i}$ to $\sigma^{\prime}_{i}$, the difference in its gain is given by the difference of a function $P$. We choose the following potential function: $P_{{\bf v}}(\bm{\sigma})=\sum_{S\in\Pi(\sigma)}\sum_{(i,j)\in\mathcal{V}(S)}v(i,j)$ (34) Let $S$ (with $i\in S$) and $S^{\prime}$ (with $i\not\in S^{\prime}$) denote the coalitions before and after player $i$ switches from strategy $\sigma_{i}$ to $\sigma^{\prime}_{i}$, respectively.
The potential function before and after the switch is given by $\displaystyle P_{{\bf v}}(\sigma_{i},\sigma_{-i})=\sum_{(i,j)\in\mathcal{V}(S)}v(i,j)+\sum_{(k,j)\in\mathcal{V}(S^{\prime})}v(k,j)$ $\displaystyle+\sum_{T\in\Pi(\bm{\sigma})\setminus\\{S,S^{\prime}\\}}\sum_{(k,j)\in\mathcal{V}(T)}v(k,j)$ $\displaystyle P_{{\bf v}}(\sigma^{\prime}_{i},\sigma_{-i})=\sum_{(k,j)\in\mathcal{V}(S\setminus i)}v(k,j)+\sum_{(k,j)\in\mathcal{V}(S^{\prime}\cup i)}v(k,j)$ $\displaystyle+\sum_{T\in\Pi(\sigma)\setminus\\{S,S^{\prime}\\}}\sum_{(k,j)\in\mathcal{V}(T)}v(k,j)$ where note that we have $S\rightarrow S\setminus i$ and $S^{\prime}\rightarrow S^{\prime}\cup i$ after switching. Thus, we have $\displaystyle P_{{\bf v}}(\sigma_{i},\sigma_{-i})-P_{{\bf v}}(\sigma^{\prime}_{i},\sigma_{-i})=$ $\displaystyle\sum_{(i,j)\in\mathcal{V}(S)}v(i,j)+\sum_{(k,j)\in\mathcal{V}(S^{\prime})}v(k,j)$ $\displaystyle-\sum_{(k,j)\in\mathcal{V}(S\setminus i)}v(k,j)-\sum_{(k,j)\in\mathcal{V}(S^{\prime}\cup i)}v(k,j)=$ $\displaystyle\sum_{j\in S}v(i,j)-\sum_{j\in S^{\prime}\cup i}v(i,j)$ On the other hand, the gain shift before and after the strategy switch is given by $\displaystyle\phi(\sigma_{i},\sigma_{-i})-\phi(\sigma^{\prime}_{i},\sigma_{-i})=\sum_{j\in S}v(i,j)-\sum_{j\in S^{\prime}\cup i}v(i,j)$ which concludes the proof that $P_{{\bf v}}(\sigma_{i},\sigma_{-i})-P_{{\bf v}}(\sigma^{\prime}_{i},\sigma_{-i})=\phi(\sigma_{i},\sigma_{-i})-\phi(\sigma^{\prime}_{i},\sigma_{-i})$. ∎ In a potential game, a Nash equilibrium corresponds to an optimum of the potential $P_{{\bf v}}$. Therefore, $\bm{\sigma}^{*}\in\arg\max_{\bm{\sigma}}P_{{\bf v}}(\bm{\sigma})$ corresponds to a coalition partition $\Pi(\sigma^{*})\in\mathsf{M}({\bf v})$ which is Nash-stable.

## 6\. Conclusions

We analyzed the stable clustering problem in the federated learning setting. Clusters are made up of the agents contributing to federated learning. We allowed every agent to switch from one cluster to another whenever it is better off doing so.
We modeled the decisions of agents in the framework of hedonic games, a widely used cooperative game model for this type of problem. A fundamental question in hedonic games is to analyze the conditions under which stable coalition partitions can occur. We studied the existence of stable coalition partitions by introducing the Nash-stable set, and analyzed the existence of decentralized coalition partitions. As future work, it may be interesting to carry out stability analysis for different stability notions and to study other types of preference profiles ensuring Nash stability. On the other hand, it is essential to conduct experiments with real data and specific learning models, as well as realistic gain and communication cost functions.

## References

* Aziz and Brandl (2012) H. Aziz and F. Brandl. 2012. Existence of stability in hedonic coalition formation games. In _Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012)_. * Barber and Gerber (2007) S. Barber and A. Gerber. 2007. A note on the impossibility of a satisfactory concept of stability for coalition formation games. _Economics Letters_ 95 (2007), 85–90. * Bogomonlaia and Jackson (2002) A. Bogomonlaia and M. Jackson. 2002. The stability of hedonic coalition structures. _Games and Economic Behavior_ 38 (jan 2002), 201–230. * Kairouz and McMahan (2021) Peter Kairouz and H. Brendan McMahan (Eds.). 2021. Advances and Open Problems in Federated Learning. _Foundations and Trends in Machine Learning_ 14, 1 (2021). https://doi.org/10.1561/2200000083 * Ding et al. (2020) Ningning Ding, Zhixuan Fang, and Jianwei Huang. 2020. Incentive Mechanism Design for Federated Learning with Multi-Dimensional Private Information. In _2020 18th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOPT)_. 1–8. * Donahue and Kleinberg (2021) K. Donahue and J. Kleinberg. 2021 (AAAI 2021).
Model-sharing Games: Analyzing Federated Learning Under Voluntary Participation. * Elgabli et al. (2020) Anis Elgabli, Jihong Park, Amrit S Bedi, Mehdi Bennis, and Vaneet Aggarwal. 2020. GADMM: fast and communication efficient framework for distributed machine learning. _Journal of Machine Learning Research_ 21, 76 (2020), 1–39. * Hajduková (2006) J. Hajduková. 2006\. Coalition formation games: A survey. _International Game Theory Review_ 8, 4 (2006), 613–641. * Hosseinalipour et al. (2020) S. Hosseinalipour, C. G. Brinton, V. Aggarwal, H. Dai, and M. Chiang. 2020. From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks. _IEEE Communications Magazine_ 58, 12 (2020), 41–47. https://doi.org/10.1109/MCOM.001.2000410 * Khan et al. (2020) L. U. Khan, S. R. Pandey, N. H. Tran, W. Saad, Z. Han, M. N. H. Nguyen, and C. S. Hong. 2020. Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism. _IEEE Communications Magazine_ 58, 10 (2020), 88–93. https://doi.org/10.1109/MCOM.001.1900649 * Kim (2020) Sungwook Kim. 2020\. Incentive Design and Differential Privacy Based Federated Learning: A Mechanism Design Perspective. _IEEE Access_ 8 (2020), 187317–187325. https://doi.org/10.1109/ACCESS.2020.3030888 * Le et al. (2021) Tra Huong Thi Le, Nguyen H. Tran, Yan Kyaw Tun, Minh N. H. Nguyen, Shashi Raj Pandey, Zhu Han, and Choong Seon Hong. 2021\. An Incentive Mechanism for Federated Learning in Wireless Cellular network: An Auction Approach. _IEEE Transactions on Wireless Communications_ (2021), 1–1. https://doi.org/10.1109/TWC.2021.3062708 * Li et al. (2020) T. Li, A. K. Sahu, A. Talwalkar, and V. Smith. 2020\. Federated Learning: Challenges, Methods, and Future Directions. _IEEE Signal Processing Magazine_ 37, 3 (2020), 50–60. https://doi.org/10.1109/MSP.2020.2975749 * Lim et al. (2020) Wei Yang Bryan Lim, Zehui Xiong, Chunyan Miao, Dusit Niyato, Qiang Yang, Cyril Leung, and H. Vincent Poor. 2020. 
Hierarchical Incentive Mechanism Design for Federated Machine Learning in Mobile Networks. _IEEE Internet of Things Journal_ 7, 10 (2020), 9575–9588. https://doi.org/10.1109/JIOT.2020.2985694 * Samarakoon et al. (2018) S. Samarakoon, M. Bennis, W. Saad, and M. Debbah. 2018\. Federated Learning for Ultra-Reliable Low-Latency V2V Communications. In _2018 IEEE Global Communications Conference (GLOBECOM)_. 1–7. https://doi.org/10.1109/GLOCOM.2018.8647927 * Sarikaya and Ercetin (2020) Yunus Sarikaya and Ozgur Ercetin. 2020. Motivating Workers in Federated Learning: A Stackelberg Game Perspective. _IEEE Networking Letters_ 2, 1 (2020), 23–27. https://doi.org/10.1109/LNET.2019.2947144 * Yang et al. (2020) K. Yang, T. Jiang, Y. Shi, and Z. Ding. 2020\. Federated Learning via Over-the-Air Computation. _IEEE Transactions on Wireless Communications_ 19, 3 (2020), 2022–2035. https://doi.org/10.1109/TWC.2019.2961673 * Zhan and Zhang (2020) Yufeng Zhan and Jiang Zhang. 2020. An Incentive Mechanism Design for Efficient Edge Learning by Deep Reinforcement Learning Approach. In _IEEE INFOCOM 2020 - IEEE Conference on Computer Communications_. 2489–2498. https://doi.org/10.1109/INFOCOM41043.2020.9155268 * Zhan et al. (5555) Y. Zhan, J. Zhang, Z. Hong, L. Wu, P. Li, and S. Guo. 5555. A Survey of Incentive Mechanism Design for Federated Learning. _IEEE Transactions on Emerging Topics in Computing_ 1 (mar 5555), 1–1. https://doi.org/10.1109/TETC.2021.3063517 * Zheng et al. (2020) S. Zheng, C. Shen, and X. Chen. 2020. Design and Analysis of Uplink and Downlink Communications for Federated Learning. _IEEE Journal on Selected Areas in Communications_ (2020), 1–1. https://doi.org/10.1109/JSAC.2020.3041388
# Efficient and accurate computation of the $\varphi$-function and its action on a vector Siyu Yang Dongping Li<EMAIL_ADDRESS>Department of Mathematics, Changchun Normal University, Changchun 130032, PR China Department of Mathematics, Jilin University, Changchun 130012, PR China ###### Abstract In this paper, we develop efficient and accurate algorithms for evaluating $\varphi(A)$ and $\varphi(A)b$, where $A$ is an $N\times N$ matrix, $b$ is an $N$-dimensional vector and $\varphi$ is the function defined by $\varphi(z)\equiv\sum\limits^{\infty}_{k=0}\frac{z^{k}}{(1+k)!}$. Such a matrix function (the so-called $\varphi$-function) plays a key role in a class of numerical methods known as exponential integrators. The algorithms use the scaling and modified squaring procedure combined with truncated Taylor series. A backward error analysis is presented to find the optimal value of the scaling and the degree of the Taylor approximation. Some useful techniques are employed to reduce the computational cost. Numerical comparisons with state-of-the-art algorithms show that the algorithms perform well in both accuracy and efficiency.
###### keywords: $\varphi$-function, Truncated Taylor series, Scaling and modified squaring method, Backward error, Paterson-Stockmeyer method ###### MSC: [2010] 65L05, 65F10, 65F30

## 1 Introduction

In this work, we consider numerical methods for approximating the first matrix exponential related function and its action on a vector, that is, $\varphi(A)~{}~{}\text{and}~{}~{}\varphi(A)b,$ (1) where $\varphi(z)=\sum\limits^{\infty}_{k=0}\frac{z^{k}}{(1+k)!},~{}A\in\mathbb{C}^{N\times N},~{}b\in\mathbb{C}^{N}.$ (2) The $\varphi$-function satisfies the relation $\varphi(z)=\frac{e^{z}-1}{z}.$ (3) The problem of numerically approximating such a matrix function is of great importance; it is commonly encountered in the solution of linear systems of ordinary differential equations with a constant inhomogeneous term and in exponential integrators for solving semi-linear problems. For example, the well-known exponential Euler method for solving the autonomous semi-linear problems of the form $y^{\prime}(t)=Ay(t)+N(y(t)),~{}~{}y(t_{n})=y_{n}$ (4) yields $y_{n+1}=e^{hA}y_{n}+h\varphi(hA)N(y_{n}).$ (5) If Eq. (4) has a constant inhomogeneous term, i.e., $N(y(t))\equiv b$, then the scheme (5) reproduces the exact solution of (4). Utilizing the relation (3), it can be shown that (5) is equivalent to $y_{n+1}=y_{n}+h\varphi(hA)\left(Ay_{n}+N(y_{n})\right).$ (6) The main cost in the scheme (6) originates from the need to accurately evaluate the $\varphi$-function at each time step. For a detailed overview of exponential integrators, see [13, 17]. Over the past few years, there has been a tremendous effort to develop efficient approaches for such matrix functions, see, e.g., [2, 3, 4, 12, 25, 16, 14, 20, 26]. These methods are generally divided into two classes. The first class of methods computes $\varphi(A)$ explicitly. Among them, the scaling and modified squaring method combined with Padé approximation [12, 26] is perhaps the most popular choice for small and medium-sized $A$.
The method is a variant of the well-known scaling and squaring approach for computing the matrix exponential [1, 9]. An alternative computation is based on the formula [22]: $e^{\mathbb{A}}=\begin{pmatrix}e^{A}&\varphi(A)\\\ 0&I\end{pmatrix},~{}\text{where}~{}\mathbb{A}=\begin{pmatrix}A&I\\\ 0&0\end{pmatrix}\in\mathbb{C}^{2N\times 2N}.$ (7) Thus the computation of $\varphi(A)$ can be reduced to that of the matrix exponential. The effective evaluation of the matrix exponential, which arises in many areas of science and engineering, has been extensively investigated in the literature; see, e.g., [1, 6, 7, 9, 10, 18, 22, 24, 27] and the references given therein. Some applications require the matrix-function vector product $\varphi(A)b$ rather than $\varphi(A)$ itself. When $A$ is very large, it is prohibitive to explicitly compute $\varphi(A)$ and then form the product with the vector $b$. The second class of methods evaluates $\varphi(A)b$ using matrix-vector products and avoids the explicit computation of the generally dense matrix $\varphi(A)$. This type of method is especially well suited to large and sparse $A$. We mention two typical strategies in such an approach: Krylov subspace methods [25, 20] and the scaling-and-squaring method [2]. The former are iterative, and it is difficult to determine a reasonable convergence criterion that guarantees a sufficiently accurate approximation. The latter evaluates $\varphi(A)b$ by computing the action of a matrix exponential $e^{\mathbb{A}}$ of dimension $N+1$ on a vector. The method is numerically stable and can achieve near machine accuracy. In the present paper we focus on the direct approach and develop the scaling and modified squaring method in combination with Taylor series to efficiently and accurately evaluate $\varphi(A)$ and $\varphi(A)b,$ respectively.
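The block-matrix identity (7) is easy to check numerically. The sketch below uses plain truncated Taylor series for both exponentials (adequate only for small matrices of modest norm, not a production method); the function names are illustrative.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Plain Taylor-series matrix exponential; adequate only for small ||M||."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def phi_series(A, terms=40):
    """phi(A) = sum_{k>=0} A^k / (k+1)! by direct summation."""
    term = np.eye(A.shape[0])
    P = term.copy()
    for k in range(1, terms):
        term = term @ A / (k + 1)
        P = P + term
    return P

# Identity (7): phi(A) is the top-right block of exp([[A, I], [0, 0]]).
n = 3
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((n, n))
Abig = np.block([[A, np.eye(n)], [np.zeros((n, n)), np.zeros((n, n))]])
phi_block = expm_taylor(Abig)[:n, n:]
```

The two routes agree to floating-point accuracy for this small example.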
The backward error is used to determine the scaling value $s$ and the Taylor degree $m$. Numerical experiments against other state-of-the-art MATLAB routines illustrate that a direct implementation of the scaling and modified squaring algorithm may be the most efficient. This paper is organized as follows. Section 2 presents two algorithms for computing $\varphi(A)$. Section 3 deals with the algorithm for evaluating $\varphi(A)b.$ Numerical experiments are given to illustrate the benefits of the algorithms in Section 4. Finally, conclusions are given in Section 5. Throughout the paper, we use $\|\cdot\|$ to denote an induced matrix norm, and in particular $\|\cdot\|_{1}$, the 1-norm. Let $I$ be the identity and 0 be the zero matrix or vector whose dimensions are clear from the context. $e_{i}$ denotes the $i$-th coordinate vector of appropriate size. $\lfloor x\rfloor$ denotes the largest integer not exceeding $x$ and $\lceil x\rceil$ denotes the smallest integer not less than $x$. Standard MATLAB notations are used whenever necessary.

## 2 Computing $\varphi(A)$

For a given matrix $A\in\mathbb{C}^{N\times N},$ the scaling and modified squaring method exploits the identity [12, 26] $\varphi(A)=\frac{1}{2}\varphi(\frac{1}{2}A)(e^{\frac{1}{2}A}+I).$ (8) Applying (8) recursively $s$ times yields $\displaystyle\varphi(A)=(\frac{1}{2})^{s}\varphi(X)(e^{X}+I)(e^{2X}+I)\ldots(e^{2^{s-1}X}+I),~{}~{}s\in\mathbb{N},$ (9) where $X=2^{-s}A$. Then $\varphi(A)$ can be evaluated by using rational polynomials to approximate $\varphi(X)$ and $e^{X}$ and employing the following coupled recurrences: $\left\\{\begin{array}[]{l}\varphi(2X)=\frac{1}{2}\varphi(X)(e^{X}+I),\vspace{1ex}\\\ e^{2X}=e^{X}\cdot e^{X}.\end{array}\right.$ (10) The scaling parameter $s$ is chosen such that $\|X\|$ is sufficiently small and the method can achieve a prescribed accuracy.
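The coupled recurrences (10) can be sketched in a few lines of numpy. This is a naive illustration (fixed `s` and Taylor degree, no parameter selection), not the paper's optimized algorithm; the function names are illustrative.

```python
import numpy as np

def phi_series(A, terms=40):
    """Reference: phi(A) = sum_{k>=0} A^k/(k+1)! by direct summation."""
    term = np.eye(A.shape[0])
    P = term.copy()
    for k in range(1, terms):
        term = term @ A / (k + 1)
        P = P + term
    return P

def phi_scaled(A, s=5, m=10):
    """Recurrences (10): phi(2X) = phi(X)(e^X + I)/2 and e^{2X} = (e^X)^2,
    starting from a short Taylor sum for phi(X) at X = 2^{-s} A and the
    companion approximation e^X ~ X*T_m(X) + I as in (12)."""
    n = A.shape[0]
    X = A / 2.0**s
    P = phi_series(X, m + 1)           # T_m(X) ~ phi(X)
    E = X @ P + np.eye(n)              # e^X approximation
    for _ in range(s):
        P = 0.5 * P @ (E + np.eye(n))  # phi argument doubles
        E = E @ E                      # exponential argument doubles
    return P
```

After $s$ doublings the argument returns from $X=2^{-s}A$ to $A$, which is exactly identity (9).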
In our algorithm, we use the truncated Taylor series $T_{m}(X)$ to approximate $\varphi(X)$, i.e., $T_{m}(X):=\sum\limits^{m}_{k=0}\frac{X^{k}}{(1+k)!}.$ (11) Then, the approximation to $e^{X}$ is naturally chosen as $\tilde{T}_{m}:=XT_{m}(X)+I.$ (12) Here $e^{X}=\tilde{T}_{m}(X)+\mathcal{O}(X^{m+2})$. The computation of $\tilde{T}_{m}$ only requires one matrix multiplication and one matrix summation. In practice the truncated Taylor series $T_{m}(X)$ in (11) can be computed efficiently by using the Paterson-Stockmeyer method [21]. The expression is $T_{m}=\sum\limits^{r}_{k=0}B_{k}\cdot(X^{q})^{k},~{}~{}~{}r=\lfloor m/q\rfloor,$ (13) where $q$ is a positive integer and $B_{k}=\left\\{\begin{array}[]{l}\sum\limits^{q-1}_{i=0}\frac{1}{(1+qk+i)!}X^{i},~{}~{}~{}k=0,1,\ldots,r-1,\vspace{1ex}\\\ \sum\limits^{m-qr}_{i=0}\frac{1}{(1+qr+i)!}X^{i},~{}~{}~{}k=r.\\\ \end{array}\right.$ (14) Applying Horner’s method to (13), the number of matrix multiplications for computing $T_{m}(X)$ is minimized by taking $q$ as either $\lfloor\sqrt{m}\rfloor$ or $\lceil\sqrt{m}\rceil$; both choices yield the same computational cost. To obtain a more accurate approximation to $\varphi(X)$, we compute $T_{m}$ only when $m$ belongs to the index sequence $\mathbb{M}=\\{2,4,6,9,12,16,20,25,30,36,\ldots\\}$. Assume that $m_{i}$ is the $i$-th element of the set $\mathbb{M}$; it is shown in [10, Table 4.1], [23] that the number of matrix multiplications for computing $T_{m_{i}}(X)$ is the same as that for $T_{k}(X)$ with $m_{i-1}<k<m_{i}$. Then the number of matrix multiplications for computing $T_{m}(X)$ is $\pi_{m}=\lceil\sqrt{m}\rceil+\lfloor m/\lceil\sqrt{m}\rceil\rfloor-2.$ (15) Table 1 lists the corresponding number of matrix multiplications $\pi_{m}$ to evaluate $T_{m}$ for the first 12 values of $m$ belonging to $\mathbb{M}$. A brief sketch of the algorithm for solving $\varphi(A)$ is given in Algorithm 1.
Table 1: Number of matrix multiplications $\pi_{m}$ required to evaluate $T_{m}$ for the first 12 optimal values of $m$.

$m$ | $2$ | $4$ | $6$ | $9$ | $12$ | $16$ | $20$ | $25$ | $30$ | $36$ | $42$ | $49$ ---|---|---|---|---|---|---|---|---|---|---|---|--- $\pi_{m}$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$ | $8$ | $9$ | $10$ | $11$ | $12$

Algorithm 1 Given $A\in\mathbb{C}^{N\times N},$ this algorithm computes $\varphi(A)$ by the scaling and modified squaring method based on Taylor series.

1. Select optimal values of $m$ and $s.$
2. Compute $X=2^{-s}A.$
3. Compute $T=\sum\limits^{m}_{k=0}\frac{X^{k}}{(1+k)!}$ by the PS method.
4. Compute $\tilde{T}:=XT+I.$
5. Compute $Y=\tilde{T}+I.$
6. for $i=1:s-1$ do
7. Compute $\tilde{T}=\tilde{T}^{2}.$
8. Compute $Y=\frac{1}{2}Y(\tilde{T}+I).$
9. end for
10. Compute $Y=\frac{1}{2}TY.$
11. return $Y$

Now we consider the concrete choice of $m$ and $s$. We formulate two approaches for choosing the scaling value $s$ and the Taylor degree $m$, which were similarly introduced in [1, 2].
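The Paterson-Stockmeyer evaluation of eqs. (13)-(14), used in step 3 of Algorithm 1, can be sketched as follows. This is an illustrative numpy implementation (names are hypothetical); it builds $X^{0},\ldots,X^{q-1}$ and $X^{q}$ once and then applies Horner's rule in powers of $X^{q}$.

```python
import math
import numpy as np

def phi_taylor_ps(X, m, q):
    """T_m(X) = sum_{k=0}^m X^k/(k+1)! via Paterson-Stockmeyer, eqs. (13)-(14)."""
    n = X.shape[0]
    c = [1.0 / math.factorial(k + 1) for k in range(m + 1)]
    pows = [np.eye(n)]                 # X^0 .. X^{q-1}
    for _ in range(q - 1):
        pows.append(pows[-1] @ X)
    Xq = pows[-1] @ X                  # X^q
    r = m // q

    def block(k):                      # B_k of eq. (14)
        hi = q - 1 if k < r else m - q * r
        return sum(c[q * k + i] * pows[i] for i in range(hi + 1))

    T = block(r)
    for k in range(r - 1, -1, -1):     # Horner's rule in X^q, eq. (13)
        T = T @ Xq + block(k)
    return T

def phi_taylor_direct(X, m):
    """Reference: term-by-term summation of the same truncated series."""
    term = np.eye(X.shape[0])
    P = term.copy()
    for k in range(1, m + 1):
        term = term @ X / (k + 1)
        P = P + term
    return P
```

For $m=9$, $q=3$ the Horner loop uses only the precomputed powers plus $r$ multiplications by $X^{q}$, matching the operation count behind eq. (15).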
Define the function $h_{m+2}(X)=\log(e^{-X}\tilde{T}_{m}(X))$; then $\tilde{T}_{m}(X)=e^{X+h_{m+2}(X)}$ (16) and $\displaystyle\begin{aligned} \varphi(A)&\approx(\frac{1}{2})^{s}T_{m}(\tilde{T}_{m}+I)(\tilde{T}_{m}^{2}+I)\cdots(\tilde{T}_{m}^{2^{s-1}}+I)\\\ &=(\frac{1}{2})^{s}\frac{\tilde{T}_{m}-I}{X}(\tilde{T}_{m}+I)(\tilde{T}_{m}^{2}+I)\cdots(\tilde{T}_{m}^{2^{s-1}}+I)\\\ &=\frac{e^{2^{s}X+2^{s}h_{m+2}(X)}-I}{2^{s}X}\\\ &=\frac{e^{A+\Delta A}-I}{A},\end{aligned}$ (17) where $\Delta A=2^{s}h_{m+2}(X)$ is the backward error resulting from the approximation of $\varphi(A).$ Let $X\in\Omega_{m}:=\\{X\in C^{N\times N}:~{}~{}\rho(e^{-X}\tilde{T}_{m}-I)<1\\},$ then the function $h_{m+2}(X)$ has a power series expansion $h_{m+2}(X)=\sum\limits^{\infty}_{k=m+2}c_{k}X^{k}.$ (18) By Theorem 4.2 of [1] we have $\frac{\|\Delta A\|}{\|A\|}=\frac{\|h_{m+2}(X)\|}{\|X\|}\leq\tilde{h}_{m+2}(2^{-s}\alpha_{p}(A)),~{}~{}p(p-1)\leq m+2,$ (19) where $\tilde{h}_{m+2}(x)=\sum\limits^{\infty}_{k=m+2}|c_{k}|x^{k-1}$ and $\alpha_{p}(A)=\max(\|A^{p}\|^{1/p},\|A^{p+1}\|^{1/(p+1)}).$ Given a tolerance Tol, one can compute $\theta_{m}=\max\\{\theta:{\tilde{h}_{m+2}(\theta)}\leq\text{Tol}\\}.$ (20) Table 2 presents the maximal values $\theta_{m}$ satisfying the backward error bound (20) for $\text{Tol}=2^{-53}$ for the first 12 values of $m$ in $\mathbb{M}$. Thus, once the scaling $s$ is chosen such that $2^{-s}\alpha_{p}(A)\leq\theta_{m},~{}~{}p(p-1)\leq m+2,$ (21) it follows that $\|\Delta A\|\leq\|A\|\cdot\text{Tol}.$ (22) A straightforward computation of inequality (21) yields $s\geq\lceil\log_{2}(\alpha_{p}(A)/\theta_{m})\rceil.$ (23) We naturally choose the smallest $s$ so that the inequality (21) holds.
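The parameter choice implied by (23) can be combined with the per-degree multiplication count into a simple selection rule: for each candidate degree, compute the smallest admissible scaling and the resulting cost, then keep the cheapest pair. A minimal sketch (the `THETA` values are the $\theta_{m}$ entries of Table 2, and a single `alpha` stands in for the relevant bound $\alpha_{p}(A)$; the function name is hypothetical):

```python
import math

# theta_m values from Table 2 (tolerance 2^-53) for the candidate degrees.
THETA = {2: 1.39e-5, 4: 2.40e-3, 6: 2.38e-2, 9: 1.44e-1,
         12: 4.00e-1, 16: 9.31e-1, 20: 1.62, 25: 2.64}

def choose_m_s(alpha):
    """Pick (m, s) minimizing ceil(sqrt m)+floor(sqrt m)-2+2s,
    with s = max(ceil(log2(alpha/theta_m)), 0) as in (23)."""
    best = None
    for m, theta in THETA.items():
        s = max(math.ceil(math.log2(alpha / theta)), 0)
        cost = math.ceil(math.sqrt(m)) + math.floor(math.sqrt(m)) - 2 + 2 * s
        if best is None or cost < best[0]:
            best = (cost, m, s)
    return best[1], best[2]
```

For a tiny `alpha` the rule picks the smallest degree with no scaling; for `alpha` around 1 it prefers a larger degree over extra squarings.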
The total number of matrix multiplications $C_{m}$ to evaluate $\varphi(A)$ is then $C_{m}=\pi_{m}+2s=\lceil\sqrt{m}\rceil+\lfloor\sqrt{m}\rfloor-2+2\max(\lceil\log_{2}(\alpha_{p}(A)/\theta_{m})\rceil,~{}0).$ (24)

Table 2: Maximal values $\theta_{m}$ such that the backward error bound (20) does not exceed $\text{tol}=2^{-53}$ for the first 12 optimal values of $m$.

$m$ | $2$ | $4$ | $6$ | $9$ | $12$ | $16$ | $20$ | $25$ | $30$ | $36$ | $42$ | $49$ ---|---|---|---|---|---|---|---|---|---|---|---|--- $\theta_{m}$ | $1.39\text{e-}5$ | $2.40\text{e-}3$ | $2.38\text{e-}2$ | $1.44\text{e-}1$ | $4.00\text{e-}1$ | $9.31\text{e-}1$ | $1.62$ | $2.64$ | $3.77$ | $5.22$ | $6.73$ | $8.55$

In Figure 1 we have plotted $C_{m}$ as a function of $m$ for ten different values of $\alpha_{p}(A)$. We see that the first optimal value of $m$, that is, the first value that minimizes $C_{m}$, is no more than 25. Thus we consider $m\in\\{2,4,6,9,12,16,20,25\\}$ in the remainder of the section.

Figure 1: $m$ versus cost $C_{m}$ with different $\alpha_{p}.$

In order to get the optimal value of $m$, we consider the following two strategies: $\bullet$ Choose the first $m\in\\{2,4,6,9,12,16,20,25\\}$ such that $\eta_{m}\leq\theta_{m}$, where $\eta_{m}=\min\\{\alpha_{p}(A),p(p-1)\leq m+2\\}$, and set $s=0$. When $\eta_{25}>\theta_{25}$, set $m=25$ and $s=\lceil\log_{2}(\eta_{25}/\theta_{25})\rceil$. To reduce the computational cost, in the practical implementation the bounds $\|A^{p}\|^{1/p}$ are estimated using products of bounds or norms of matrices that have already been computed. The details of the process are summarized in Algorithm 2. $\bullet$ Select the parameters $m$ and $s$ such that the total computational cost (24) is the lowest. This requires pre-evaluating the first six 1-norms of matrix powers, i.e., $\|A^{k}\|$, $k=1,2,\cdots,6$. The full procedure is given in Algorithm 3.
Algorithm 2 Given $A\in\mathbb{C}^{N\times N},$ this algorithm computes the parameters $m,~{}s$ and the powers $A_{i}=A^{i}.$
1: $s=0.$
2: $A_{1}=A,$ $A_{2}=A^{2},$ $d_{1}=\|A_{1}\|_{1},$ $d_{2}=\|A_{2}\|_{1},$ $d_{3}=d_{1}d_{2}.$
3: $\alpha_{1}=d_{1},$ $\alpha_{2}=\max(d_{2}^{1/2},d_{3}^{1/3}),$ $\eta_{1}=\alpha_{2}.$
4: if $\eta_{1}\leq\theta_{2}$ then $m=2,$ return.
5: end if
6: if $\eta_{1}\leq\theta_{4}$ then $m=4,$ return.
7: end if
8: $A_{3}=A_{1}A_{2},$ $d_{3}=\|A_{3}\|_{1},$ $d_{4}=\min(d_{1}d_{3},d_{2}^{2}),$ $\alpha_{2}=\max(d_{2}^{1/2},d_{3}^{1/3}),$ $\alpha_{3}=\max(d_{3}^{1/3},d_{4}^{1/4}).$
9: $\eta_{2}=\min(\alpha_{2},\alpha_{3}).$
10: if $\eta_{2}\leq\theta_{6}$ then $m=6,$ return.
11: end if
12: if $\eta_{2}\leq\theta_{9}$ then $m=9,$ return.
13: end if
14: $A_{4}=A_{2}^{2},$ $d_{4}=\|A_{4}\|_{1},$ $d_{5}=\min(d_{1}d_{4},d_{2}d_{3}),$ $\alpha_{3}=\max(d_{3}^{1/3},d_{4}^{1/4}),$ $\alpha_{4}=\max(d_{4}^{1/4},d_{5}^{1/5}).$
15: $\eta_{3}=\min(\alpha_{2},\alpha_{3},\alpha_{4}).$
16: if $\eta_{3}\leq\theta_{12}$ then $m=12,$ return.
17: end if
18: if $\eta_{3}\leq\theta_{16}$ then $m=16,$ return.
19: end if
20: $A_{5}=A_{1}A_{4},$ $d_{5}=\|A_{5}\|_{1},$ $d_{6}=\min(d_{1}d_{5},d_{2}d_{4},d_{3}^{2}),$ $\alpha_{4}=\max(d_{4}^{1/4},d_{5}^{1/5}),$ $\alpha_{5}=\max(d_{5}^{1/5},d_{6}^{1/6}).$
21: $\eta_{4}=\min(\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5}).$
22: if $\eta_{4}\leq\theta_{20}$ then $m=20,$ return.
23: end if
24: if $\eta_{4}\leq\theta_{25}$ then $m=25,$ return.
25: end if
26: $m=25,$ $s=\lceil\log_{2}(\eta_{4}/\theta_{25})\rceil.$
27: $A_{i}=2^{-is}A_{i},$ $i=1,\cdots,5.$
Algorithm 3 Given $A\in\mathbb{C}^{N\times N},$ this algorithm computes the parameters $m$ and $s$ based on the number of matrix-matrix products.
1: $M=[2,~{}4,~{}6,~{}9,~{}12,~{}16,~{}20,~{}25].$
2: $p_{max}=5,$ $m_{max}=8.$
3: $A_{1}=A,$ $d_{1}=\|A\|_{1}.$
4: for $p=2:p_{max}+1$ do
5:  $c=\text{normest}(A,p).$
6:  $d_{p}=c^{1/p}.$
7: end for
8: $\alpha_{1}=d_{1}.$
9: for $p=2:p_{max}$ do
10:  $\alpha_{p}=\max(d_{p},d_{p+1}).$
11: end for
12: $\eta_{1}=\alpha_{2}.$
13: for $p=2:p_{max}$ do
14:  $\eta_{p}=\min(\eta_{p-1},\alpha_{p}).$
15: end for
16: for $m=[2,~{}4,~{}6,~{}9,~{}12,~{}16,~{}20,~{}25]$ do
17:  if $m=2$ then
18:   $s_{m}=\max(\lceil\log_{2}(\eta_{2}/\theta_{m})\rceil,0).$
19:  else if $m=[4,6,9]$ then
20:   $s_{m}=\max(\lceil\log_{2}(\eta_{3}/\theta_{m})\rceil,0).$
21:  else if $m=[12,16]$ then
22:   $s_{m}=\max(\lceil\log_{2}(\eta_{4}/\theta_{m})\rceil,0).$
23:  else
24:   $s_{m}=\max(\lceil\log_{2}(\eta_{5}/\theta_{m})\rceil,0).$
25:  end if
26:  $q_{m}=\sqrt{m},$ $C_{m}=\lceil q_{m}\rceil+\lfloor q_{m}\rfloor-2+2s_{m}.$
27: end for
28: $m=\text{argmin}_{m\in M}C_{m},$ $s=s_{m},$ $q=\lceil q_{m}\rceil.$
29: $A_{i}=A_{i-1}A,$ $i=2:q.$
30: $A_{i}=2^{-is}A_{i},$ $i=2:q.$
## 3 Computing $\varphi(A)b$ We now focus our attention on accurately and efficiently evaluating $\varphi(A)b$ for a large sparse matrix $A$. Following an idea of Al-Mohy and Higham [2], we use the scaling part of the scaling and modified squaring method in combination with truncated Taylor series to approximate the function. The computational cost of the method is dominated by matrix-vector products.
We start by recalling the following general recurrence [26]: $\varphi((\alpha+\beta)z)=\frac{1}{(\alpha+\beta)}[\beta e^{\alpha z}\varphi(\beta z)+\alpha\varphi(\alpha z)],~{}~{}~{}\alpha,\beta\in\mathbb{R},~{}z\in\mathbb{C}.$ (25) As a special case of (25), we have $\varphi(sz)=\frac{1}{s}[e^{(s-1)z}\varphi(z)+(s-1)\varphi((s-1)z)],~{}~{}s\in\mathbb{N}.$ (26) Taking $Y=\frac{1}{s}A$ and applying (26) repeatedly, it follows that $\displaystyle\begin{aligned} \varphi(A)=&\frac{1}{s}[e^{(s-1)Y}\varphi(Y)+(s-1)\varphi((s-1)Y)]\\\ =&\frac{1}{s}[e^{(s-1)Y}\varphi(Y)+e^{(s-2)Y}\varphi(Y)+(s-2)\varphi((s-2)Y)]\\\ &\vdots\\\ =&\frac{1}{s}\varphi(Y)[e^{(s-1)Y}+e^{(s-2)Y}+\cdots+e^{2Y}+e^{Y}+I].\end{aligned}$ (27) Choose the integers $m$ and $s$ such that $\varphi(Y)$ and $e^{Y}$ can be well approximated by the truncated Taylor series $T_{m}(Y)$ and $\tilde{T}_{m}(Y):=YT_{m}(Y)+I$ defined by (11) and (12). Then $\varphi(A)b$ can be approximated by first evaluating the recurrence $\displaystyle b_{1}=T_{m}(Y)b~{}~{}~{}\text{and}~{}~{}~{}b_{i+1}=\tilde{T}_{m}(Y)b_{i},~{}~{}i=1,2,\cdots,s-1,$ (28) and then computing $\frac{1}{s}$ times the sum of $b_{i},$ $i=1,2,\cdots,s.$ This process requires $s$ multiplications of a matrix polynomial with a vector, $s-1$ vector additions, and one scalar multiplication. The number of matrix-vector products for evaluating $\varphi(A)b$ by recurrence (28) is $C_{m}=s(m+1)-1.$
Algorithm 4 Given $A\in\mathbb{C}^{N\times N},~{}b\in\mathbb{C}^{N\times n_{0}},$ this algorithm computes $\varphi(A)b$ by the scaling and modified squaring method based on Taylor series.
1: Select optimal values of $m$ and $s.$
2: Compute $Y=A/s.$
3: Compute $b_{1}=\sum\limits^{m}_{k=0}\frac{Y^{k}}{(1+k)!}b$ based on matrix-vector products.
4: Compute $f=b_{1}.$
5: for $i=1:s-1$ do
6:  Compute $b_{i+1}=\sum\limits^{m+1}_{k=0}\frac{Y^{k}}{k!}b_{i}$ based on matrix-vector products.
7:  Compute $f=f+b_{i+1}.$
8: end for
9: Compute $f=\frac{1}{s}f.$
10: return $f.$
The procedure described above involves two key parameters: the degree $m$ of the matrix polynomial $T_{m}(Y)$ and the scaling parameter $s$. We use the backward error analysis combined with the computational cost to choose optimal parameters $m$ and $s$. The backward error analysis of the method is the same as in the previous section; the only difference is the scaling coefficient, which is $2^{-s}$ there and $\frac{1}{s}$ here. The relative backward error of the method satisfies $\frac{\|\Delta A\|}{\|A\|}\leq\tilde{h}_{m+2}(\frac{1}{s}\alpha_{p}(A)),$ (29) where $\tilde{h}_{m+2}(x)$ and $\alpha_{p}(A)$ are defined exactly as in (19). Given a tolerance Tol and an integer $m,$ the parameter $s$ is chosen so that $s^{-1}\alpha_{p}(A)\leq\theta_{m},$ i.e., $s\geq\lceil\alpha_{p}(A)/\theta_{m}\rceil.$ (30) The cost of the algorithm in matrix-vector products is $C_{m}=(m+1)\lceil\alpha_{p}(A)/\theta_{m}\rceil-1.$ (31) Let $p_{max}$ denote the largest positive integer $p$ such that $p(p-1)\leq m_{max}+2.$ Then the optimal cost is $C_{m^{*}}=\min\\{(m+1)\lceil\alpha_{p}(A)/\theta_{m}\rceil-1:~{}~{}2\leq p\leq p_{max},p(p-1)-2\leq m\leq m_{max}\\},$ (32) where $m^{*}$ denotes the smallest value of $m$ at which the minimum is attained; the corresponding optimal scaling parameter is $s=\lceil\alpha_{p}(A)/\theta_{m^{*}}\rceil.$ The cost of computing $\alpha_{p}(A)$ for $p=2:p_{max}$ is approximately $2lp_{max}(p_{max}+3),~{}~{}l=1~{}\text{or}~{}2$ matrix-vector products. If the cost in matrix-vector products of evaluating $\varphi(A)b$ with $m$ determined by using $\|A\|_{1}$ in place of $\alpha_{p}(A)$ in (31) is no larger than the cost of computing the $\alpha_{p}(A)$, i.e., if $\|A\|_{1}\leq[4p_{max}(p_{max}+3)+1]\cdot[\theta_{m_{max}}/(m_{max}+1)],$ (33) then we should certainly use $\|A\|_{1}$ in place of $\alpha_{p}(A)$. The details of the method are summarized in Algorithms 4 and 5.
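For illustration, the recurrence (28) can be realized as the following NumPy sketch (a minimal illustration, not the authors' MATLAB routine; the truncated series are applied by plain term-by-term matrix-vector products):

```python
import numpy as np

def phi_action(A, b, m=16, s=8):
    """Approximate phi(A) b via (28): with Y = A/s,
    b_1 = T_m(Y) b,  b_{i+1} = (Y T_m(Y) + I) b_i,  i = 1,...,s-1,
    and then phi(A) b ~ (1/s) (b_1 + ... + b_s)."""
    Y = A / s

    def Tm(v):                     # T_m(Y) v = sum_{k=0}^{m} Y^k v / (k+1)!
        out, term, c = v.copy(), v.copy(), 1.0
        for k in range(1, m + 1):
            term = Y @ term
            c /= k + 1             # c = 1/(k+1)!
            out = out + c * term
        return out

    bi = Tm(b)                     # b_1
    f = bi.copy()
    for _ in range(s - 1):
        bi = Y @ Tm(bi) + bi       # b_{i+1} = T~_m(Y) b_i
        f += bi
    return f / s
```

With $\|Y\|=\|A\|/s$ kept below $\theta_{m}$, each step costs $m$ or $m+1$ matrix-vector products, which gives the total $s(m+1)-1$ stated above.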
Algorithm 5 Given $A\in\mathbb{C}^{N\times N},~{}b\in\mathbb{C}^{N\times n_{0}},$ $p_{max}$ and $m_{max},$ this algorithm computes the parameters $m$ and $s$ based on the number of matrix-vector products.
1: $M=1:m_{max}.$
2: $d_{1}=\|A\|_{1}.$
3: if $d_{1}\leq\theta_{m_{max}}(4p_{max}(p_{max}+3)+1)/(m_{max}+1)$ then
4:  $m=\arg\min\\{(m+1)\lceil d_{1}/\theta_{m}\rceil-1,~{}1\leq m\leq m_{max}\\}.$
5:  $s=\lceil d_{1}/\theta_{m}\rceil,$ return.
6: end if
7: for $p=2:p_{max}+1$ do
8:  $c=\text{normest}(A,p).$
9:  $d_{p}=c^{1/p}.$
10: end for
11: $\alpha_{1}=d_{1}.$
12: for $p=2:p_{max}$ do
13:  $\alpha_{p}=\max(d_{p},d_{p+1}).$
14: end for
15: $[m,p]=\arg\min\\{(m+1)\lceil\alpha_{p}/\theta_{m}\rceil-1:2\leq p\leq p_{max},p(p-1)-2\leq m\leq m_{max}\\}.$
16: $s=\max(\lceil\alpha_{p}/\theta_{m}\rceil,1).$
17: return $m,s.$
## 4 Numerical experiments In this section we perform two numerical experiments to test the performance of the approach presented in the previous sections. All tests are performed under Windows 10 and MATLAB R2018b running on a laptop with an Intel Core i7 processor at 1.8 GHz and 8 GB of RAM. We use Algorithm 1 in combination with Algorithm 2 or Algorithm 3 to evaluate $\varphi(A),$ and Algorithm 4 combined with Algorithm 5 to compute $\varphi(A)b.$ The three combined algorithms are denoted phitay1, phitay2 and phimv, respectively. ###### Experiment 1. In this experiment we compare the algorithms phitay1 and phitay2 with the existing MATLAB routine phipade13 from [26]. The function phipade13 employs the scaling and modified squaring method based on the [13/13] Padé approximation to evaluate $\varphi(A).$ We use a total of 201 matrices, divided into two sets, to test these algorithms. The two test sets are described as follows: $\bullet$ The first test set contains 62 test matrices as in [9] and [24, sec. 4.1]. The first 48 are $8\times 8$ matrices obtained from the subroutine matrix in the Matrix Computation Toolbox [11] (the subroutine can generate fifty-two matrices; matrices 17, 42 and 44 are excluded from the test because they overflow in double precision, and matrix 43 because it is repeated as matrix 49). The other fourteen test matrices, of dimension $2$-$20$, come from [5, Ex. 2], [7, Ex. 3.10], [15, p. 655], [19, p. 370], [27, Test Cases 1-4]. $\bullet$ The second test set is essentially the same as in [6]; it consists of 139 matrices of dimension $n=128.$ The first 39 matrices are obtained from the MATLAB routine matrix of the Matrix Computation Toolbox [11]. The remaining 100 matrices are generated randomly, half of them diagonalizable and half non-diagonalizable. In this implementation, we evaluate the relative errors in the 1-norm of the computed solutions $Y$, i.e., $Error=\frac{\|Y-\varphi(A)\|_{1}}{\|\varphi(A)\|_{1}}.$ (34) The "exact" $\varphi(A)$ is computed using the MATLAB built-in function expm of [9, 1] by evaluating the augmented matrix exponential (7) at 100-digit precision using MATLAB's Symbolic Math Toolbox.
Figure 2: Results for test matrix set 1 in Experiment 1: (a) normwise relative errors; (b) performance profiles of the errors; (c) ratios of execution times; (d) performance profiles of the execution times.
In Figs. 2 and 3, we present the relative errors, the performance profiles of the relative errors, the ratios of execution times, and the performance profiles of the execution times for each test. Figs. 2(a) and 3(a) display the relative errors of the algorithms on our test sets, sorted by decreasing relative condition number of $\varphi(A)$ at $A$. The solid black line represents the unit roundoff multiplied by the relative condition number, which is estimated by the MATLAB routine funm condest1 in the Matrix Function Toolbox [11]. Figs. 2(b) and 3(b) show the performance profiles of the three solvers on the same error data.
For a given $\alpha,$ the corresponding value of $p$ on each performance curve is the probability that the algorithm has a relative error lower than or equal to $\alpha$ times the smallest error over all the methods involved [8]. The results show that the methods based on Taylor series are more accurate than the implementation based on Padé approximation, and phitay2 is slightly more accurate than phitay1. Figs. 2(c) and 3(c) show the ratios of execution times of the three solvers with respect to phipade13. The execution times of the three methods are compared in Figs. 2(d) and 3(d). We notice that phitay1 and phitay2 have lower execution times than phipade13, and the execution time of phitay1 is slightly lower than that of phitay2 on test set 1, but the opposite is true on test set 2. This can be attributed to the choice of the key parameters $m$ and $s$ in the methods. phitay2 determines the optimal parameters with a minimal amount of computation by estimating $\|A^{k}\|_{1}^{1/k}$ for a few values of $k$ using a matrix norm estimator. Although this requires some extra work to estimate the norms of matrix powers, the computational advantage grows as the dimension of the matrix increases.
Figure 3: Results for test matrix set 2 in Experiment 1: (a) normwise relative errors; (b) performance profiles of the errors; (c) ratios of execution times; (d) performance profiles of the execution times.
###### Experiment 2. This experiment uses the same tests as [20]. There are four different sparse test matrices, described as follows: $\bullet$ The first matrix orani678 is an unsymmetric sparse matrix of order $N=2,529$ with $nnz=90,158$ nonzero elements; its 1-norm is 1.04e+03. $\bullet$ The second matrix bcspwr10 is a symmetric sparse matrix of order $N=5,300$ with $nnz=21,842$ nonzero elements; its 1-norm is 14.
$\bullet$ The third matrix gr 30 30 is a symmetric sparse matrix of order $N=900$ with $nnz=7,744$ nonzero elements; its 1-norm is 16. $\bullet$ The fourth matrix helm2d03 is a sparse matrix of order $N=392,257$ with $nnz=2,741,935$ nonzero elements; its 1-norm is 10.72. We compare our algorithm phimv with two other popular MATLAB routines, phiv of [25] and phipm of [20], to evaluate $\varphi(tA)b$ and $\varphi_{0}(tA)b_{0}+t\varphi_{1}(tA)b_{1}$ with $t=10$ for the first test matrix and $t=2$ for the other three, respectively. As in [20], we choose the vectors $b=b_{0}=b_{1}=[1,1,...,1,1]^{T}$ except for the second test matrix, where $b=[1,0,...,0,1]^{T}.$ The MATLAB routines phiv and phipm are run with their default parameters, and the uniform convergence tolerance is Tol=eps (eps is the unit roundoff) in our experiments. In these tests, we assess the accuracy of the computed solution $y$ by the relative error $Error=\frac{\|y-y_{exa}\|_{2}}{\|y_{exa}\|_{2}},$ (35) where $y_{exa}$ is a reference solution obtained by computing the action of the augmented matrix exponential using the MATLAB routine expmv of [2]. We measure the average ratio of execution times ($t_{ratio}$) of each of the codes relative to phimv by running the comparisons 100 times. Tables 3 and 4 show the numerical results. All three methods deliver almost the same accuracy, but phimv is the fastest. Table 3: Comparison of the average speedups of phiv and phipm with respect to phimv and the relative errors for solving $\varphi(A)b$.
method | orani678 error | orani678 $t_{ratio}$ | bcspwr10 error | bcspwr10 $t_{ratio}$ | gr 30 30 error | gr 30 30 $t_{ratio}$ | helm2d03 error | helm2d03 $t_{ratio}$
---|---|---|---|---|---|---|---|---
phiv | 1.1966e-15 | 1.38 | 4.8688e-16 | 45.90 | 1.8016e-14 | 9.92 | 1.0842e-13 | 20.98
phipm | 1.7612e-15 | 0.79 | 6.6655e-16 | 7.4 | 3.7391e-15 | 5.54 | 1.3550e-13 | 1.59
phimv | 1.1682e-15 | 1 | 3.6051e-16 | 1 | 1.2622e-15 | 1 | 6.2692e-14 | 1
Table 4: Comparison of the average speedups of phiv and phipm with respect to phimv and the relative errors for solving $\varphi_{0}(tA)b_{0}+t\varphi_{1}(tA)b_{1}$.
method | orani678 error | orani678 $t_{ratio}$ | bcspwr10 error | bcspwr10 $t_{ratio}$ | gr 30 30 error | gr 30 30 $t_{ratio}$ | helm2d03 error | helm2d03 $t_{ratio}$
---|---|---|---|---|---|---|---|---
phiv | 4.7206e-16 | 1.06 | 7.4407e-16 | 43.63 | 2.1790e-15 | 15.58 | 1.5239e-14 | 20.73
phipm | 1.0408e-15 | 0.63 | 1.2216e-15 | 8.73 | 7.4383e-15 | 5.58 | 8.7267e-15 | 1.77
phimv | 1.8024e-15 | 1 | 7.6561e-16 | 1 | 8.7257e-16 | 1 | 8.7682e-15 | 1
## 5 Conclusion The computation of $\varphi$-functions can impose a large computational burden on exponential integrators. In this work three accurate algorithms, phitay1, phitay2 and phimv, have been developed to compute the first $\varphi$-function and its action on a vector. The first two are used for computing $\varphi(A)$ and the last one for $\varphi(A)b$. These algorithms employ the scaling and modified squaring procedure based on the truncated Taylor series of the $\varphi$-function and are backward stable in exact arithmetic. For phitay1 and phitay2, the Paterson-Stockmeyer technique with optimal Horner evaluation has been applied to reduce the computational cost. The main difference between the two is the estimation of the matrix powers $\|A^{k}\|_{1}^{1/k}$ for a few values of $k$, which allows phitay2 to determine the optimal scaling and degree of the Taylor approximation with a minimal amount of computation.
phimv takes a similar approach to phitay2 to determine the key parameters. Its computational cost is dominated by matrix-vector products, which makes it especially well suited to large sparse matrices. Numerical comparisons with other state-of-the-art MATLAB routines illustrate that the proposed methods are efficient and reliable. In the future we hope to generalize these methods further to general exponential-related functions and their linear combinations. ## Acknowledgements This work was supported in part by the Jilin Scientific and Technological Development Program (Grant Nos. 20200201276JC and 20180101224JC) and the Natural Science Foundation of Jilin Province (Grant No. 20200822KJ), and the Scientific Startup Foundation for Doctors of Changchun Normal University (Grant No. 002006059). ## References * [1] A.H. Al-Mohy, N.J. Higham, _A new scaling and squaring algorithm for the matrix exponential,_ SIAM J. Matrix Anal. Appl., 31 (3) (2009), 970-989. * [2] A. Al-Mohy and N. Higham, _Computing the action of the matrix exponential, with an application to exponential integrators,_ SIAM J. Sci. Comput., 33 (2011), pp. 488-511. * [3] G. Beylkin, J.M. Keiser, L. Vozovoi, _A new class of time discretization schemes for the solution of nonlinear PDEs,_ J. Comput. Phys., 147 (1998), 362-387. * [4] M. Caliari, M. Vianello, L. Bergamaschi, _Interpolating discrete advection-diffusion propagators at Leja sequences_ , J. Comput. Appl. Math. 172 (1)(2004), pp.79-99. * [5] I. Davies and N. J. Higham, _A Schur-Parlett algorithm for computing matrix functions,_ SIAM J. Matrix Anal. Appl., 25 (2003), pp. 464-485. * [6] E. Defez, J. Ibáñez, J. Sastre, J. Peinado and P. Alonso, _A new efficient and accurate spline algorithm for the matrix exponential computation,_ J. Comput. Appl. Math., 337 (2018), pp. 354-365. * [7] L. Dieci and A. Papini, _Padé approximation for the exponential of a block triangular matrix,_ Linear Algebra Appl., 308 (2000), pp. 183-202. * [8] E. D.
Dolan and J. J. Moré, _Benchmarking optimization software with performance profiles,_ Math. Program., 91 (2002), pp. 201-213. * [9] N. J. Higham, _The scaling and squaring method for the matrix exponential revisited,_ SIAM J. Matrix Anal. Appl., 26 (2005), pp. 1179-1193. * [10] N. J. Higham, _Functions of matrices: theory and computation,_ SIAM, Philadelphia, 2008. * [11] N. J. Higham, _The Matrix Computation Toolbox,_ http://www.ma.man.ac.uk/ higham/mctoolbox. * [12] M. Hochbruck, C. Lubich, H. Selhofer, _Exponential integrators for large systems of differential equations,_ SIAM J. Sci. Comput. 19 (1998), pp. 1552-1574. * [13] M. Hochbruck and A. Ostermann, _Exponential Integrators,_ Acta Numer., 19 (2010), pp. 209-286. * [14] A.K. Kassam and L.N. Trefethen, _Fourth-order time stepping for stiff PDEs,_ SIAM J. Sci. Comput., 26 (2005), pp. 1214-1233. * [15] C. S. Kenney and A. J. Laub, _A Schur-Fréchet algorithm for computing the logarithm and exponential of a matrix,_ SIAM J. Matrix Anal. Appl., 19 (1998), pp. 640-663. * [16] Y.Y. Lu, _Computing a matrix function for exponential integrators,_ J. Comput. Appl. Math. 161 (1) (2003), pp. 203-216. * [17] B.V. Minchev and W.M. Wright, _A review of exponential integrators for first order semi-linear problems,_ Tech. report 2/05, Department of Mathematics, NTNU, 2005. * [18] C. Moler, C.V. Loan, _Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later,_ SIAM Review, 45 (2003), pp. 3-49. * [19] I. Najfeld and T. F. Havel, _Derivatives of the matrix exponential and their computation,_ Adv. in Appl. Math., 16 (1995), pp. 321-375. * [20] J. Niesen, W. Wright, _Algorithm 919: A Krylov subspace algorithm for evaluating the phi-functions appearing in exponential integrators_ , ACM Trans. Math. Software, 38 (3) (2012), Article 22. * [21] M.S. Paterson, L.J. Stockmeyer, _On the number of nonscalar multiplications necessary to evaluate polynomials,_ SIAM J. Comput. 2 (1) (1973), pp.60-66. * [22] Y.
Saad, _Analysis of some Krylov subspace approximations to the matrix exponential operator,_ SIAM J. Numer. Anal., 29 (1992), pp. 209-228. * [23] J. Sastre, J. Ibáñez, E. Defez, P. Ruiz, _Efficient orthogonal matrix polynomial based method for computing matrix exponential,_ Appl. Math. Comput., 217 (14) (2011), pp. 6451-6463. * [24] J. Sastre, J. Ibáñez, E. Defez, and P. Ruiz, _New Scaling-Squaring Taylor Algorithms for Computing the Matrix Exponential,_ SIAM J. Sci. Comput., 37 (1) (2015), pp. 439-455. * [25] R.B. Sidje, _Expokit: A software package for computing matrix exponentials,_ ACM Trans. Math. Softw., 24 (1998), pp. 130-156. * [26] B. Skaflestad and W.M. Wright, _The scaling and modified squaring method for matrix functions related to the exponential,_ Applied Numerical Mathematics, 59 (2009), pp. 783-799. * [27] R. C. Ward, _Numerical computation of the matrix exponential with accuracy estimate,_ SIAM J. Numer. Anal., 14 (1977), pp. 600-610.
# A symmetric fractional-order reduction method for direct nonuniform approximations of semilinear diffusion-wave equations††thanks: This work was partially supported by the Fundamental Research Funds for the Central Universities (JBK2102010), the National Natural Science Foundation of China (12071373), The Science and Technology Development Fund, Macau SAR (File no. 0005/2019/A) and the grant MYRG2018-00047-FST from University of Macau. Pin Lyu, School of Economic Mathematics, Southwestern University of Finance and Economics, Chengdu, China<EMAIL_ADDRESS>Seakweng Vong (corresponding author), Department of Mathematics, University of Macau, Macao, China.<EMAIL_ADDRESS>
###### Abstract We introduce a symmetric fractional-order reduction (SFOR) method to construct numerical algorithms on general nonuniform temporal meshes for semilinear fractional diffusion-wave equations. By using the novel order reduction method, the governing problem is transformed into an equivalent coupled system, where the explicit orders of the time-fractional derivatives involved are all $\alpha/2$ $(1<\alpha<2)$. The linearized L1 scheme and Alikhanov scheme are then proposed on general time meshes. Under some reasonable regularity assumptions and weak restrictions on the meshes, optimal convergence is derived for the two kinds of difference schemes by the $H^{2}$ energy method. An adaptive time-stepping strategy based on the (fast linearized) L1 and Alikhanov algorithms is designed for the semilinear diffusion-wave equations. Numerical examples are provided to confirm the accuracy and efficiency of the proposed algorithms.
###### keywords: diffusion-wave equation, weak singularity, nonuniform mesh, adaptive mesh
MSC codes: 65M06, 65M12, 35B65, 35R11
## 1 Introduction In this paper, we consider numerical methods for the semilinear diffusion-wave equation: (1) $\displaystyle{\cal D}_{t}^{\alpha}u=\nu^{2}\Delta u+f(u,{\bf x},t),\quad{\bf x}\in\Omega,~{}t\in(0,T],$ subject to the initial conditions $u({\bf x},0)=\varphi({\bf x})$ and $u_{t}({\bf x},0)=\tilde{\varphi}({\bf x})$ for ${\bf x}\in\Omega$, and the homogeneous boundary condition $u({\bf x},t)=0$ for ${\bf x}\in\partial\Omega$; where $\Omega=(x_{l},x_{r})\times(y_{l},y_{r})$, $1<\alpha<2$, $\nu$ is a constant, and ${\cal D}_{t}^{\delta}$ denotes the Caputo derivative of order $\delta$: ${\cal D}_{t}^{\delta}u(t):=({\cal I}^{n-\delta}u^{(n)})(t)\quad\mbox{for}~{}t>0~{}\mbox{and}~{}n-1<\delta<n,$ in which ${\cal I}^{\beta}$ represents the Riemann-Liouville fractional integral of order $\beta$: ${\cal I}^{\beta}u(t):=\int_{0}^{t}\omega_{\beta}(t-s)u(s)\,\mathrm{d}s\quad\mbox{with}\quad\omega_{\beta}(t)=\frac{t^{\beta-1}}{\Gamma(\beta)}.$ The diffusion-wave equation, also called the time-fractional wave equation, can be applied to describe evolution processes intermediate between diffusion and wave propagation. For example, it governs the propagation of mechanical waves in viscoelastic media [24, 25]. The practical applications of equation (1) span many disciplines, such as image processing [39, 3] and the universal electromagnetic, acoustic and mechanical response [31]. It is well known that the solutions of sub-diffusion equations (also called time-fractional diffusion equations) typically exhibit weak initial singularities [9, 35, 34], which causes traditional time-stepping methods to fail to preserve their desired convergence rates [9]. The same phenomenon occurs for diffusion-wave equations.
For example, Jin, Lazarov and Zhou [10, Theorem A.4] show that the solution of the linear diffusion-wave equation ($f=f({\bf x},t)$) satisfies $\|\partial_{t}^{m}u\|_{L^{2}(\Omega)}\leq C_{T}t^{\alpha-m}\|f\|_{W^{m-1,\infty}(0,T;L^{2}(\Omega))}$, $m=1,2$, if $f\in W^{1,\infty}(0,T;L^{2}(\Omega))$ and $\varphi={\tilde{\varphi}}=0$. Other studies on regularity can be found in [10, 26, 34]. Recently, some excellent work has been done on the numerical approximation of linear diffusion-wave equations taking the weak initial singularities into account. The convolution quadrature methods generated by backward difference formulas are rigorously discussed in [10], where first- and second-order temporal convergence rates are obtained under proper assumptions on the given data, and their discrete maximal regularities are further studied by Jin, Li and Zhou [11]. Lately, for the problem with nonsmooth data, a Petrov-Galerkin method and a time-stepping discontinuous Galerkin method are proposed in [22] (Luo, Li and Xie) and [14] (Li, Wang and Xie), where the temporal convergence rates are of order $(3-\alpha)/2$ and about first order, respectively. Numerical schemes with the classical L1 approximation in time and the standard P1 element in space are also implemented in [13], achieving temporal accuracies of ${\cal O}(\tau^{3-\alpha})$ and ${\cal O}(\tau^{2})$ provided that the ratio $\tau^{\alpha}/h^{2}_{\min}$ is uniformly bounded. We note that the numerical methods in the above works [10, 11, 22, 14, 13] are implemented on uniform temporal steps. On the other hand, Mustapha & McLean [29] and Mustapha & Schötzau [30] considered time-stepping discontinuous Galerkin methods on nonuniform temporal meshes to solve the following kind of fractional wave equation: (2) $\displaystyle u_{t}+{\cal I}^{\beta}Au(t)=f(t),\quad\mbox{for}\quad\beta\in(0,1)\quad\mbox{and}\quad t\in(0,T],$ where $A$ is a self-adjoint linear elliptic spatial operator.
It can be observed that the above integro-differential problem is (mathematically) equivalent to the linear case of (1) under suitable assumptions on $f$ and the initial data. Their methods are illuminating and efficient with good temporal accuracies. Laplace transform methods and convolution quadrature methods on uniform temporal steps are also discussed respectively by McLean & Thomée [27, 28] and Cuesta et al. [4, 5, 6] for the above integro-differential problem, where the reference [4] treats the semilinear case $f(t)=f(u,\nabla u,x,t)$. However, the above numerical methods for solving (2) may not be easily extended to the semilinear problem (1) due to the nonlinearity ${\cal D}_{t}^{\beta}f(u,t)$. To the best of our knowledge, there remain challenges for numerical methods of the diffusion-wave equation. In this paper, we will address the following issues: (i) establishing and analyzing difference schemes by the classical L1 [32] and Alikhanov [1] approximations on nonuniform temporal meshes (especially on more general meshes) for the semilinear diffusion-wave equation with typical weakly singular solutions; (ii) studying efficient numerical algorithms, such as the adaptive time-stepping algorithm, for the semilinear diffusion-wave equation in order to deal with the highly oscillatory variations in time, since the problem (1) exhibits a mixed behavior of diffusion and wave propagation. Before introducing our main approach, we review two classical and popular algorithms. The first one is the L1 algorithm [32], generated by the Lagrange linear interpolation formula; it is a direct and convenient approximation formula for constructing numerical methods for sub-diffusion problems, e.g., [36, 21, 38], where it was employed on uniform temporal grids.
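For reference, the classical uniform-mesh L1 formula for a Caputo derivative of order $\delta\in(0,1)$ can be sketched as follows. This is a minimal NumPy illustration of the uniform-grid case only; the schemes constructed in this paper are its nonuniform generalizations.

```python
import math
import numpy as np

def l1_caputo(u, tau, delta):
    """Classical L1 approximation of the Caputo derivative D_t^delta u(t_n),
    0 < delta < 1, on the uniform grid t_k = k*tau:
    D_t^delta u(t_n) ~ tau^{-delta}/Gamma(2-delta) *
        sum_{j=0}^{n-1} a_j (u_{n-j} - u_{n-j-1}),
    with kernel weights a_j = (j+1)^{1-delta} - j^{1-delta},
    obtained by integrating the piecewise-linear interpolant of u exactly."""
    n = len(u) - 1
    j = np.arange(n)
    a = (j + 1.0)**(1 - delta) - j**(1 - delta)
    du = u[1:] - u[:-1]                  # forward differences u_{k+1} - u_k
    # a_j is paired with the difference over the interval [t_{n-j-1}, t_{n-j}]
    return (a * du[::-1]).sum() * tau**(-delta) / math.gamma(2 - delta)
```

For a smooth function such as $u(t)=t^{2}$ the formula reproduces the exact value $\Gamma(3)t^{2-\delta}/\Gamma(3-\delta)$ up to an ${\cal O}(\tau^{2-\delta})$ truncation error, the rate discussed throughout this introduction.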
Recently, the L1 method on graded temporal meshes, with monotonically increasing step sizes, was analyzed in Stynes, O'Riordan & Gracia [35] and Kopteva [12] to resolve sub-diffusion equations with weakly singular solutions. The other one is the Alikhanov algorithm, which was first proposed by Alikhanov [1], skillfully combining linear and quadratic interpolations at an offset time point on a uniform mesh for the sub-diffusion problem with sufficiently smooth solutions. Implementation of this algorithm on graded meshes was discussed by Chen and Stynes [2], and second-order convergence in the presence of weak initial singularities was established. In particular, Liao, Li and Zhang [15] presented a novel and technical framework to derive the optimal convergence result of the nonuniform L1 scheme. The techniques were then generalized in Liao, McLean and Zhang [17], and further extended to a linearized scheme for semilinear sub-diffusion equations [20] and to the Alikhanov scheme on more general nonuniform meshes [16]. We remark that the above methods [35, 12, 2, 15, 17, 20, 16] on nonuniform meshes are all for sub-diffusion problems. In view of the high efficiency and broad potential applications of the L1 and Alikhanov algorithms, it is of high scientific value to consider their nonuniform versions for resolving (at least) the weak initial singularities of the diffusion-wave problem. In [36], Sun and Wu utilized a fixed-order reduction method, i.e., by taking an auxiliary function $v=u_{t}$, to rewrite a linear case of equation (1) as the following coupled equations: (3) $\displaystyle{\cal D}_{t}^{\alpha-1}v=\nu^{2}\Delta u+f({\bf x},t),$ (4) $\displaystyle v=u_{t},$ for ${\bf x}\in\Omega,~{}t\in(0,T]$. We note that the time-fractional derivative on the auxiliary function $v$ in equation (3) is of order $\alpha-1$, which belongs to $(0,1)$, so the system (3)-(4) is not structurally consistent from the viewpoint of time-derivative orders.
The diffusion-wave equation can then be solved following the standard framework of the L1 method on uniform temporal meshes. Although it is easy to extend the above order reduction method to the corresponding nonuniform L2 scheme (see [36] for its uniform version), we find that it may be difficult to establish its stability and convergence on more general time meshes. Therefore, for the first time, we present a new order reduction method by introducing a novel auxiliary function ${\bf v}={\cal D}_{t}^{\frac{\alpha}{2}}{\bf u},$ where ${\bf u}=u-t{\tilde{\varphi}}$. This is a non-fixed-order reduction technique, which we call the symmetric fractional-order reduction (SFOR) method. The semilinear diffusion-wave equation (1) is then rewritten as a structurally consistent coupled system, i.e., (8)-(9); see Section 2 for more details. Based on this equivalent formulation, we can construct the implicit and linearized L1 and Alikhanov algorithms on possibly nonuniform time partitions $0=t_{0}<t_{1}<\cdots<t_{N}=T$ for a given positive integer $N$, and discuss their unconditional convergence by utilizing the framework of [15, 17, 16, 20]. Throughout this paper we assume that the solution satisfies the following regularity: (5) $\displaystyle\|\partial_{t}^{(k)}u\|_{H^{4}(\Omega)}\leq C_{u}(1+t^{\sigma_{1}-k})\quad\mbox{and}\quad\|\partial_{t}^{(k)}{\bf v}\|_{H^{4}(\Omega)}\leq C_{u}(1+t^{\sigma_{2}-k}),\quad k=0,1,2,3,$ for $t\in(0,T]$, where $\sigma_{1}\in(1,2)\cup(2,3)$ and $\sigma_{2}\in(\alpha/2,1)\cup(1,2)$ are two regularity parameters. Our analysis is carried out under the following weak mesh assumption: * MA.
There is a constant $C_{\gamma}>0$ such that $\tau_{k}\leq C_{\gamma}\tau\min\\{1,t_{k}^{1-1/\gamma}\\}$ for $1\leq k\leq N$, with $t_{k}\leq C_{\gamma}t_{k-1}$ and $\tau_{k}/t_{k}\leq C_{\gamma}\tau_{k-1}/t_{k-1}$ for $2\leq k\leq N$, where $\gamma\geq 1$ is the mesh parameter, $\tau_{k}:=t_{k}-t_{k-1}$ denotes the $k$-th time step size for $1\leq k\leq N$ and $\tau:=\max_{1\leq k\leq N}\\{\tau_{k}\\}$. We prove that our methods achieve the desired optimal temporal convergence orders (see Theorem 3.6), namely ${\cal O}(\tau^{\min\\{2-\frac{\alpha}{2},\gamma\sigma_{1},\gamma\sigma_{2}\\}})$ for the nonuniform L1 algorithm and ${\cal O}(\tau^{\min\\{2,\gamma\sigma_{1},\gamma\sigma_{2}\\}})$ for the nonuniform Alikhanov algorithm. We note that the sum-of-exponentials approximations [8, 19] can also be directly applied to the proposed nonuniform L1 and Alikhanov algorithms to reduce the memory storage and computational costs. We further design an adaptive time-stepping strategy for the two kinds of algorithms, which is robust and accurate in dealing with not only the weak initial singularities but also the rapid temporal oscillations of the semilinear diffusion-wave problem. The main contributions of this paper are summarized below: * • We propose a novel order reduction method (SFOR) which enables the nonuniform L1 and Alikhanov algorithms for the semilinear diffusion-wave equation to be constructed and analyzed. * • Based on some reasonable regularity assumptions and weak mesh restrictions, we obtain the optimal convergence orders: the temporal convergence rate is up to $(2-\alpha/2)$-order for the L1 algorithm and second-order for the Alikhanov algorithm. * • An adaptive time-stepping strategy is designed for the semilinear diffusion-wave equation to efficiently resolve possible oscillations of the solution. The rest of the paper is organized as follows.
In Section 2, we present the novel SFOR method and equivalently rewrite the semilinear diffusion-wave equation as coupled equations. In Section 3, we construct and analyze the linearized nonuniform L1 and nonuniform Alikhanov algorithms, and establish their optimal convergence unconditionally by the $H^{2}$ energy method. Furthermore, we design an adaptive time-stepping method by combining the proposed nonuniform (fast linearized) L1 and Alikhanov algorithms. Numerical examples are provided in Section 4 to demonstrate the accuracy and efficiency. A brief conclusion is drawn in Section 5, and the analysis of the truncation errors is given in Section 7. Throughout the paper, we use $C$ to denote a generic constant which may depend on the data of the governing problem but is independent of the time and space step sizes (or nodes). ## 2 The SFOR method In this section, we propose a symmetric fractional-order reduction (SFOR) method such that the technical framework proposed in [15, 17, 16, 20] can be adopted to analyze implicit numerical schemes for solving the diffusion-wave equation (1) on nonuniform temporal meshes. The basic idea of the SFOR method is presented in the following lemma. ###### Lemma 2.1. For $\alpha\in(1,2)$ and $u(t)\in{\cal C}^{1}([0,T])\cap{\cal C}^{2}((0,T])$, it holds that (6) $\displaystyle{\cal D}_{t}^{\alpha}u(t)={\cal D}_{t}^{\frac{\alpha}{2}}\left({\cal D}_{t}^{\frac{\alpha}{2}}u(t)\right)-u^{\prime}(0)\omega_{2-\alpha}(t).$ Moreover, if we take ${\bf u}(t):=u(t)-tu^{\prime}(0)$, then (7) $\displaystyle{\cal D}_{t}^{\alpha}{u}(t)={\cal D}_{t}^{\alpha}{\bf u}(t)={\cal D}_{t}^{\frac{\alpha}{2}}\left({\cal D}_{t}^{\frac{\alpha}{2}}{\bf u}(t)\right).$ ###### Proof 2.2.
Taking $v(t):={\cal D}_{t}^{\frac{\alpha}{2}}u(t)$, one has $\displaystyle v={\cal D}_{t}^{\frac{\alpha}{2}}u(t)=({\cal I}^{1-\frac{\alpha}{2}}u^{\prime})(t)=$ $\displaystyle\int_{0}^{t}\omega_{1-\frac{\alpha}{2}}(t-s)u^{\prime}(s)\,\mathrm{d}s$ $\displaystyle=$ $\displaystyle-u^{\prime}(s)\omega_{2-\frac{\alpha}{2}}(t-s)|_{0}^{t}+\int_{0}^{t}\omega_{2-\frac{\alpha}{2}}(t-s)u^{\prime\prime}(s)\,\mathrm{d}s$ $\displaystyle=$ $\displaystyle u^{\prime}(0)\omega_{2-\frac{\alpha}{2}}(t)+\int_{0}^{t}\omega_{2-\frac{\alpha}{2}}(t-s)u^{\prime\prime}(s)\,\mathrm{d}s,$ where integration by parts has been used. Then $\displaystyle v_{t}=\frac{\,\mathrm{d}}{\,\mathrm{d}t}{\cal D}_{t}^{\frac{\alpha}{2}}u(t)=u^{\prime}(0)\omega_{1-\frac{\alpha}{2}}(t)+\int_{0}^{t}\omega_{1-\frac{\alpha}{2}}(t-s)u^{\prime\prime}(s)\,\mathrm{d}s=u^{\prime}(0)\omega_{1-\frac{\alpha}{2}}(t)+({\cal I}^{1-\frac{\alpha}{2}}u^{\prime\prime})(t).$ Hence, using the composition property ${\cal I}^{p}{\cal I}^{q}g(t)={\cal I}^{p+q}g(t)~{}(p,q>0)$ [33, p. 59], we get $\displaystyle{\cal D}_{t}^{\frac{\alpha}{2}}\left({\cal D}_{t}^{\frac{\alpha}{2}}u(t)\right)={\cal D}_{t}^{\frac{\alpha}{2}}v(t)=({\cal I}^{1-\frac{\alpha}{2}}v^{\prime})(t)=$ $\displaystyle u^{\prime}(0)({\cal I}^{1-\frac{\alpha}{2}}\omega_{1-\frac{\alpha}{2}})(t)+{\cal I}^{1-\frac{\alpha}{2}}{\cal I}^{1-\frac{\alpha}{2}}u^{\prime\prime}(t)$ $\displaystyle=$ $\displaystyle u^{\prime}(0)\omega_{2-\alpha}(t)+{\cal I}^{2-\alpha}u^{\prime\prime}(t)$ $\displaystyle=$ $\displaystyle u^{\prime}(0)\omega_{2-\alpha}(t)+{\cal D}_{t}^{\alpha}u(t),$ which implies (6). The equality (7) can be obtained directly by taking ${\bf v}:={\cal D}_{t}^{\frac{\alpha}{2}}{\bf u}(t)$ in the above derivations, noting that ${\bf u}^{\prime}(0)=0$ and ${\cal D}_{t}^{\alpha}\left(tu^{\prime}(0)\right)=0$ for $\alpha\in(1,2)$.
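The identity (7) can be checked in closed form for power functions via the Caputo power rule ${\cal D}_{t}^{\beta}t^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+1-\beta)}t^{\mu-\beta}$. The sketch below (with illustrative values of $\alpha$ and $t$) verifies it for $u(t)=t^{2}$, which satisfies $u^{\prime}(0)=0$.

```python
from math import gamma, isclose

def caputo_power(mu, beta, t):
    """Caputo derivative of t**mu via the power rule (valid for mu > beta - 1):
    D^beta t^mu = Gamma(mu+1)/Gamma(mu+1-beta) * t**(mu-beta)."""
    return gamma(mu + 1) / gamma(mu + 1 - beta) * t ** (mu - beta)

alpha, t = 1.6, 0.7          # illustrative values with alpha in (1, 2)
# u(t) = t**2 has u'(0) = 0, so (7) predicts D^alpha u = D^{alpha/2}(D^{alpha/2} u)
lhs = caputo_power(2.0, alpha, t)
c1 = gamma(3.0) / gamma(3.0 - alpha / 2)   # D^{alpha/2} t^2 = c1 * t^(2 - alpha/2)
rhs = c1 * caputo_power(2.0 - alpha / 2, alpha / 2, t)
assert isclose(lhs, rhs)
```

For $u(t)=t$ instead, $u^{\prime}(0)=1$ and the extra term $u^{\prime}(0)\omega_{2-\alpha}(t)$ in (6) is exactly what separates the two sides.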
Now we take ${\bf u}({\bf x},t):=u({\bf x},t)-t{\tilde{\varphi}}({\bf x})\quad\mbox{and}\quad{\bf v}({\bf x},t):={\cal D}_{t}^{\frac{\alpha}{2}}{\bf u}({\bf x},t).$ From (5) and the Sobolev embedding theorem, we have $\|{\bf u}_{t}({\bf x},t)\|_{\infty}\leq C(1+t^{\sigma_{1}-1})$ for $t\in(0,T]$. Then, utilizing the comparison theorem for integrals (see pp. 400–401 in [40]), one has $\displaystyle|{\bf v}({\bf x},0)|=$ $\displaystyle\left|\lim_{t\rightarrow 0}{\cal D}_{t}^{\frac{\alpha}{2}}{\bf u}({\bf x},t)\right|\leq\frac{1}{\Gamma(1-\frac{\alpha}{2})}\lim_{t\rightarrow 0}\int_{0}^{t}(t-s)^{-\frac{\alpha}{2}}\left|{\bf u}_{t}({\bf x},s)\right|\,\mathrm{d}s$ $\displaystyle\leq$ $\displaystyle C\lim_{t\rightarrow 0}\int_{0}^{t}(t-s)^{-\frac{\alpha}{2}}(1+s^{\sigma_{1}-1})\,\mathrm{d}s\leq C\lim_{t\rightarrow 0}(t^{1-\frac{\alpha}{2}}+t^{\sigma_{1}-\frac{\alpha}{2}})=0,$ which gives ${\bf v}({\bf x},0)=0.$ Thus, by Lemma 2.1, the equation (1) can be equivalently solved via the following coupled equations: (8) $\displaystyle{\cal D}_{t}^{\frac{\alpha}{2}}{\bf v}=\nu^{2}\Delta{\bf u}+{f}(u,{\bf x},t)+t\Delta{\tilde{\varphi}},\quad{\bf x}\in\Omega,~{}t\in(0,T],$ (9) $\displaystyle{\bf v}={\cal D}_{t}^{\frac{\alpha}{2}}{\bf u},\quad{\bf x}\in\Omega,~{}t\in(0,T],$ together with $u={\bf u}+t{\tilde{\varphi}}$, the initial conditions ${\bf u}({\bf x},0)=\varphi({\bf x}),~{}{\bf v}({\bf x},0)=0$ for ${\bf x}\in\Omega$, and the boundary conditions ${\bf u}({\bf x},t)={\bf v}({\bf x},t)=0$ for ${\bf x}\in\partial\Omega$. One can observe that, thanks to the proposed SFOR method, the orders of the time-fractional derivatives in the resulting coupled equations (8) and (9) are both $\alpha/2$. Therefore, they can be discretized by the same strategy (e.g., the L1 or Alikhanov approximations). ###### Remark 2.3.
We observe from our numerical experiments that, by extracting the singular term $u^{\prime}(0)\omega_{2-\alpha}(t)$ in (6), the proposed algorithms achieve more robust accuracy, owing to the improved regularity of the remaining part. This is the reason why we define the auxiliary function ${\bf v}={\cal D}_{t}^{\frac{\alpha}{2}}{\bf u}$ with ${\bf u}=u-t{\tilde{\varphi}}$, instead of ${v}={\cal D}_{t}^{\frac{\alpha}{2}}{u}$. ## 3 Numerical algorithms ### 3.1 Preliminary Our main concern is the time approximation of (1). Here and hereafter, $g^{k}$ and $g_{h}^{k}$ denote the numerical approximations of $g(t_{k})$ and $g({\bf x}_{h},t_{k})$, respectively. Define the off-set time points and grid functions $t_{n-\theta}:=\theta t_{n-1}+(1-\theta)t_{n}\quad\mbox{and}\quad g^{n-\theta}:=\theta g^{n-1}+(1-\theta)g^{n},\quad 1\leq n\leq N.$ Denote $\beta:=\alpha/2$. The Caputo derivative ${\cal D}_{t}^{\beta}g(t_{n-\theta})$ can be formally approximated by the following discrete Caputo derivative with convolution structure: (10) $\displaystyle({\cal D}_{\tau}^{\beta}g)^{n-\theta}:=\sum_{k=1}^{n}A_{n-k}^{(n)}\nabla_{\tau}g^{k},\quad\mbox{where}~{}\nabla_{\tau}g^{k}=g^{k}-g^{k-1}.$ The general discretization (10) includes two practical cases: it reduces to the L1 formula when $\theta=0$ (see (11)) and to the Alikhanov formula when $\theta=\beta/2$ (see (13)). To efficiently solve the semilinear diffusion-wave equation with possibly weakly singular or more complicated solutions, we next give more explicit formulations of these two classical approximations on general nonuniform meshes, which have also been rigorously studied in [15, 17, 16]. Nonuniform L1 formula.
The L1 formula on a general mesh for the approximation of the Caputo derivative ${\cal D}_{t}^{\beta}g(t_{n})$ is given as: (11) $\displaystyle({\cal D}_{\tau}^{\beta}g)^{n}:=\sum_{k=1}^{n}\int_{t_{k-1}}^{t_{k}}\omega_{1-\beta}(t_{n}-s)(\Pi_{1,k}g(s))^{\prime}\,\mathrm{d}s=\sum_{k=1}^{n}A_{n-k}^{(n)}\nabla_{\tau}g^{k},$ where $\Pi_{1,k}$ represents the linear interpolation operator, and (12) $\displaystyle A_{n-k}^{(n)}:=\int_{t_{k-1}}^{t_{k}}\frac{\omega_{1-\beta}(t_{n}-s)}{\tau_{k}}\,\mathrm{d}s.$ Nonuniform Alikhanov formula. Denote $\theta:=\beta/2=\alpha/4$, and define the discrete coefficients $\displaystyle a_{n-k}^{(n)}:=\frac{1}{\tau_{k}}\int_{t_{k-1}}^{\min\\{t_{k},t_{n-\theta}\\}}\omega_{1-\beta}(t_{n-\theta}-s)\,\mathrm{d}s,~{}1\leq k\leq n;$ $\displaystyle b_{n-k}^{(n)}:=\frac{2}{\tau_{k}(\tau_{k}+\tau_{k+1})}\int_{t_{k-1}}^{t_{k}}\omega_{1-\beta}(t_{n-\theta}-s)(s-t_{k-\frac{1}{2}})\,\mathrm{d}s,~{}1\leq k\leq n-1.$ Referring to [16], the Alikhanov formula on a general mesh for the approximation of the Caputo derivative ${\cal D}_{t}^{\beta}g(t_{n-\theta})$ is $\displaystyle({\cal D}_{\tau}^{\beta}g)^{n-\theta}:=$ $\displaystyle\sum_{k=1}^{n-1}\int_{t_{k-1}}^{t_{k}}\omega_{1-\beta}(t_{n-\theta}-s)(\Pi_{2,k}g(s))^{\prime}\,\mathrm{d}s+\int_{t_{n-1}}^{t_{n-\theta}}\omega_{1-\beta}(t_{n-\theta}-s)(\Pi_{1,n}g(s))^{\prime}\,\mathrm{d}s$ (13) $\displaystyle=$ $\displaystyle\sum_{k=1}^{n}A_{n-k}^{(n)}\nabla_{\tau}g^{k},$ where $\Pi_{2,k}$ denotes the quadratic interpolation operator, and the discrete convolution kernels $A_{n-k}^{(n)}$ here are given as follows: $A_{0}^{(1)}:=a_{0}^{(1)}$ for $n=1$, and (17) $\displaystyle A_{n-k}^{(n)}:=\left\\{\begin{array}[]{ll}a_{0}^{(n)}+\rho_{n-1}b_{1}^{(n)},&k=n,\\\ a_{n-k}^{(n)}+\rho_{k-1}b_{n-k+1}^{(n)}-b_{n-k}^{(n)},&2\leq k\leq n-1,\\\ a_{n-1}^{(n)}-b_{n-1}^{(n)},&k=1,\end{array}\right.\quad\mbox{for}~{}n\geq 2,$ with $\rho_{k}:=\tau_{k}/\tau_{k+1}$ and $\rho:=\max_{k}\\{\rho_{k}\\}$ being the local time step-size
ratios and the maximum ratio, respectively. ###### Remark 3.1. In the rest of this paper, we will use the general form (10) to represent both the nonuniform L1 formula and the Alikhanov formula. The discrete coefficients $A_{n-k}^{(n)}$ and the related properties studied later refer to those of the nonuniform L1 formula and the Alikhanov formula when $\theta=0$ and $\theta=\beta/2$, respectively. The following two basic properties have been verified in [17, 16] for the discrete coefficients of the nonuniform L1 formula (with $\pi_{A}=1$) and the nonuniform Alikhanov formula (with $\pi_{A}=11/4$ under the step-ratio restriction $\rho\leq 7/4$), which are required in the numerical analysis of the corresponding algorithms: * A1. The discrete kernels are positive and monotone: $A_{0}^{(n)}\geq A_{1}^{(n)}\geq\cdots\geq A_{n-1}^{(n)}>0$; * A2. There is a constant $\pi_{A}>0$ such that $A_{n-k}^{(n)}\geq\frac{1}{\pi_{A}}\int_{t_{k-1}}^{t_{k}}\frac{\omega_{1-\beta}(t_{n}-s)}{\tau_{k}}\,\mathrm{d}s$ for $1\leq k\leq n\leq N$. With A1–A2, the following natural and important property holds for the nonuniform L1 formula [15, proof of Theorem 2.1] and the nonuniform Alikhanov formula [16, Corollary 2.3]: (18) $\displaystyle\left\langle({\cal D}_{\tau}^{\beta}g)^{n-\theta},g^{n-\theta}\right\rangle\geq\frac{1}{2}\sum_{k=1}^{n}A_{n-k}^{(n)}\nabla_{\tau}(\|g^{k}\|^{2})\quad\mbox{for}~{}1\leq n\leq N.$ A discrete fractional Grönwall inequality proposed in [17, Theorem 3.1] is a crucial tool in the numerical analysis of fractional problems. As required in the analysis later, we present a slightly modified version in the following. It is easy to trace the proof of [17, Theorem 3.1] to justify the modification, so we skip the trivial derivations. ###### Lemma 3.2. Let $(g^{n})_{n=1}^{N}$ and $(\lambda_{l})_{l=0}^{N-1}$ be given nonnegative sequences.
Assume that there exists a constant $\Lambda$ (independent of the step sizes) such that $\Lambda\geq\sum_{l=0}^{N-1}\lambda_{l}$, and that the maximum step size satisfies $\max_{1\leq n\leq N}\tau_{n}\leq\frac{1}{\sqrt[\beta]{4\pi_{A}\Gamma(2-\beta)\Lambda}}.$ Then, for any nonnegative sequences $(v^{k})_{k=0}^{N}$ and $(w^{k})_{k=0}^{N}$ satisfying $\displaystyle\sum_{k=1}^{n}A_{n-k}^{(n)}\nabla_{\tau}\left[(v^{k})^{2}+(w^{k})^{2}\right]\leq\sum_{k=1}^{n}\lambda_{n-k}\left(v^{k-\theta}+w^{k-\theta}\right)^{2}+(v^{n-\theta}+w^{n-\theta})g^{n},~{}1\leq n\leq N,$ it holds that (19) $\displaystyle v^{n}+w^{n}\leq 4E_{\beta}(4\max(1,\rho)\pi_{A}\Lambda t_{n}^{\beta})\left(v^{0}+w^{0}+\max_{1\leq k\leq n}\sum_{j=1}^{k}P_{k-j}^{(k)}g^{j}\right)\quad\mbox{for}~{}1\leq n\leq N,$ where $E_{\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(1+k\beta)}$ is the Mittag-Leffler function. The coefficients $P_{n-j}^{(n)}$ in (19) are called the complementary discrete kernels ([17]), which are defined via the convolution coefficients $A_{n-k}^{(n)}$: (20) $\displaystyle P_{0}^{(n)}:=\frac{1}{A_{0}^{(n)}},\quad P_{n-j}^{(n)}:=\frac{1}{A_{0}^{(j)}}\sum_{k=j+1}^{n}\left(A_{k-j-1}^{(k)}-A_{k-j}^{(k)}\right)P_{n-k}^{(n)},\quad 1\leq j\leq n-1.$ It has been shown in [17, Lemma 2.1] that the kernels satisfy (21) $\displaystyle 0\leq P_{n-j}^{(n)}\leq\pi_{A}\Gamma(2-\beta)\tau_{j}^{\beta},\quad\sum_{j=1}^{n}P_{n-j}^{(n)}\omega_{1-\beta}(t_{j})\leq\pi_{A},\quad 1\leq j\leq n\leq N.$ ### 3.2 Nonuniform L1 and Alikhanov algorithms We now implement linearized algorithms on nonuniform temporal meshes to solve the coupled equations (8)–(9) based on the nonuniform L1 and Alikhanov formulas. Some basic notations in the spatial direction are needed. The uniform spatial step sizes are denoted by $h_{x}:=(x_{r}-x_{l})/M_{x}$ and $h_{y}:=(y_{r}-y_{l})/M_{y}$, respectively, where $M_{x},M_{y}$ are positive integers.
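Since $\omega_{\gamma}^{\prime}=\omega_{\gamma-1}$, the L1 kernels in (12) admit the closed form $A_{n-k}^{(n)}=\big(\omega_{2-\beta}(t_{n}-t_{k-1})-\omega_{2-\beta}(t_{n}-t_{k})\big)/\tau_{k}$ on any mesh. The following sketch (the mesh and $\beta$ are illustrative choices) computes them and checks the monotonicity property A1.

```python
from math import gamma

def omega(g, t):
    # omega_g(t) = t**(g-1) / Gamma(g); note omega_{2-beta}(0) = 0 for beta < 1
    return t ** (g - 1) / gamma(g)

def l1_kernels(t, n, beta):
    """Discrete L1 kernels A_{n-k}^{(n)} of (12) on an arbitrary mesh t[0..N],
    via the exact antiderivative of omega_{1-beta}; A[k-1] stores A_{n-k}^{(n)}."""
    return [(omega(2 - beta, t[n] - t[k - 1]) - omega(2 - beta, t[n] - t[k]))
            / (t[k] - t[k - 1]) for k in range(1, n + 1)]

# property A1 on a sample graded mesh: A_0^{(n)} >= ... >= A_{n-1}^{(n)} > 0,
# i.e. the list below increases from A_{n-1}^{(n)} (k = 1) up to A_0^{(n)} (k = n)
t = [(k / 10) ** 2 for k in range(11)]
A = l1_kernels(t, 10, beta=0.8)
assert A[0] > 0 and all(A[j] <= A[j + 1] for j in range(len(A) - 1))
```

The Alikhanov coefficients $a_{n-k}^{(n)}$ and $b_{n-k}^{(n)}$ can be tabulated in the same closed-form fashion, with the extra moment integral handled analytically.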
The mesh space is given by $\bar{\Omega}_{h}:=\\{{\bf x}_{h}=(x_{l}+ih_{x},y_{l}+jh_{y})|0\leq i\leq M_{x},0\leq j\leq M_{y}\\}$. For any grid function $u_{h}:=\\{u_{i,j}=u(x_{i},y_{j})|(x_{i},y_{j})\in\bar{\Omega}_{h}\\}$, we employ the standard five-point finite difference operator $\Delta_{h}:=\delta_{x}^{2}+\delta_{y}^{2}$ on $\bar{\Omega}_{h}$ to discretize the Laplacian operator $\Delta$, where $\delta_{x}^{2}u_{i,j}:=(u_{i+1,j}-2u_{i,j}+u_{i-1,j})/{h_{x}^{2}}$ and $\delta_{y}^{2}u_{i,j}$ is defined similarly. Denote $F(u_{h}^{n-\theta}):=f(u_{h}^{n-1},{\bf x}_{h},t_{n-\theta})+(1-\theta)\partial_{u}f(u_{h}^{n-1},{\bf x}_{h},t_{n-\theta})(u_{h}^{n}-u_{h}^{n-1}),\quad 1\leq n\leq N.$ The linearized and implicit difference schemes, based on the L1 and the Alikhanov approximations on general nonuniform temporal meshes, for the problem (8)–(9) (and hence for the problem (1)) are constructed as follows: (22) $\displaystyle({\cal D}_{\tau}^{\beta}{\bf v}_{h})^{n-\theta}=\nu^{2}\Delta_{h}{\bf u}_{h}^{n-\theta}+F({u}_{h}^{n-\theta})+t_{n-\theta}\Delta{\tilde{\varphi}}_{h},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ (23) $\displaystyle{\bf v}_{h}^{n-\theta}=({\cal D}_{\tau}^{\beta}{\bf u}_{h})^{n-\theta},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ (24) $\displaystyle u_{h}^{n}={\bf u}_{h}^{n}+t_{n}{\tilde{\varphi}}_{h},\quad{\bf x}_{h}\in\Omega_{h},~{}0\leq n\leq N,$ equipped with the initial conditions ${\bf u}_{h}^{0}=\varphi({\bf x}_{h})$ and ${\bf v}_{h}^{0}=0$ for ${\bf x}_{h}\in\Omega_{h}$, and the boundary conditions ${\bf u}_{h}^{n}={\bf v}_{h}^{n}=0$ for ${\bf x}_{h}\in\partial\Omega_{h},~{}1\leq n\leq N$. ###### Remark 3.3. The equations (22)–(24) represent two different numerical algorithms for solving the semilinear diffusion-wave equation: the nonuniform L1 algorithm when $\theta=0$ and the nonuniform Alikhanov algorithm when $\theta=\beta/2=\alpha/4$. In order to analyze the two proposed algorithms, we consider an equivalent form of (22)–(24).
Firstly, denote $w:={\cal D}_{t}^{\beta}{\bf v}-f(u,{\bf x},t)-t\Delta{\tilde{\varphi}}$ with the initial condition $w({\bf x},0):=\nu^{2}\Delta\varphi$ and the boundary value $w({\bf x},t):=-f(0,{\bf x},t)$. Then (8)–(9) can be rewritten as $\displaystyle w={\cal D}_{t}^{\beta}{\bf v}-f(u,{\bf x},t)-t\Delta{\tilde{\varphi}},\quad{\bf x}\in\Omega,~{}t\in(0,T];$ $\displaystyle w=\nu^{2}\Delta{\bf u},\quad{\bf x}\in\Omega,~{}t\in(0,T];$ $\displaystyle{\bf v}={\cal D}_{t}^{\beta}{\bf u},\quad{\bf x}\in\Omega,~{}t\in(0,T].$ Utilizing the nonuniform L1 and Alikhanov formulas and the linearization technique to approximate the above equations, we obtain an auxiliary system of (22)–(24): (25) $\displaystyle w_{h}^{n-\theta}=({\cal D}_{\tau}^{\beta}{\bf v}_{h})^{n-\theta}-F(u_{h}^{n-\theta})-t_{n-\theta}\Delta{\tilde{\varphi}}_{h},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ (26) $\displaystyle w_{h}^{n}=\nu^{2}\Delta_{h}{\bf u}_{h}^{n},\quad{\bf x}_{h}\in\Omega_{h},~{}0\leq n\leq N;$ (27) $\displaystyle{\bf v}_{h}^{n-\theta}=({\cal D}_{\tau}^{\beta}{\bf u}_{h})^{n-\theta},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ (28) $\displaystyle u_{h}^{n}={\bf u}_{h}^{n}+t_{n}{\tilde{\varphi}}_{h},\quad{\bf x}_{h}\in\Omega_{h},~{}0\leq n\leq N.$ As $w_{h}^{n-\theta}=(1-\theta)w_{h}^{n}+\theta w_{h}^{n-1}$, one can see that equations (22)–(24) are equivalent to (25)–(28) by eliminating the functions $w_{h}^{n-\theta}$ and $w_{h}^{n}$. ### 3.3 Unconditional convergence In this subsection, we show the unconditional convergence of the proposed nonuniform L1 and Alikhanov algorithms (22)–(24) via their auxiliary system (25)–(28). Take $\Omega_{h}=\bar{\Omega}_{h}\cap\Omega$ and $\partial\Omega_{h}=\bar{\Omega}_{h}\cap\partial\Omega$.
For $u_{h},v_{h}$ belonging to the space of grid functions which vanish on $\partial\Omega_{h}$, we introduce the discrete inner product $\langle u,v\rangle:=h_{x}h_{y}\sum_{{\bf x}_{h}\in\Omega_{h}}u_{h}v_{h}$, the discrete $L_{2}$-norm $\|u\|:=\sqrt{\langle u,u\rangle}$, the discrete $L_{\infty}$-norm $\|u\|_{\infty}:=\max\\{|u_{h}|\\}$, the discrete $H^{1}$ seminorms $\|\delta_{x}u\|$ and $\|\delta_{y}u\|$, and $\|\nabla_{h}u\|:=\sqrt{\|\delta_{x}u\|^{2}+\|\delta_{y}u\|^{2}}$, where $\delta_{x}u_{i-\frac{1}{2},j}:=(u_{i,j}-u_{i-1,j})/h_{x}$ and $\delta_{y}u_{i,j-\frac{1}{2}}$ is defined similarly. One can easily check that $\langle\Delta_{h}u,u\rangle=-\|\nabla_{h}u\|^{2}$, and, for some positive constants ${\tilde{C}}_{\Omega},{\hat{C}}_{\Omega}$, the embedding inequalities are valid ([18]): $\|u\|\leq{\tilde{C}}_{\Omega}\|\nabla_{h}u\|$ and $\max\\{\|\nabla_{h}u\|,\|u\|_{\infty}\\}\leq{\hat{C}}_{\Omega}\|\Delta_{h}u\|$. For simplicity of presentation, we take $h:=\max\\{h_{x},h_{y}\\}$ and $C_{\Omega}=\max\\{{\tilde{C}}_{\Omega},{\hat{C}}_{\Omega}\\}$.
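The summation-by-parts identity $\langle\Delta_{h}u,u\rangle=-\|\nabla_{h}u\|^{2}$ can be verified directly on a small grid. The sketch below (grid sizes are purely illustrative) checks it for a random grid function vanishing on the boundary.

```python
import random

def check_green_identity(Mx=6, My=5, hx=0.2, hy=0.25, seed=1):
    """Check <Delta_h u, u> = -||grad_h u||^2 for a random grid function
    vanishing on the boundary; the parameters here are illustrative."""
    random.seed(seed)
    u = [[0.0] * (My + 1) for _ in range(Mx + 1)]
    for i in range(1, Mx):
        for j in range(1, My):
            u[i][j] = random.uniform(-1.0, 1.0)

    inner = 0.0                 # <Delta_h u, u> over the interior nodes
    for i in range(1, Mx):
        for j in range(1, My):
            lap = ((u[i + 1][j] - 2 * u[i][j] + u[i - 1][j]) / hx ** 2
                   + (u[i][j + 1] - 2 * u[i][j] + u[i][j - 1]) / hy ** 2)
            inner += hx * hy * lap * u[i][j]

    grad_sq = 0.0               # ||grad_h u||^2 from half-point differences
    for i in range(1, Mx + 1):
        for j in range(1, My):
            grad_sq += hx * hy * ((u[i][j] - u[i - 1][j]) / hx) ** 2
    for i in range(1, Mx):
        for j in range(1, My + 1):
            grad_sq += hx * hy * ((u[i][j] - u[i][j - 1]) / hy) ** 2
    return inner, grad_sq

inner, grad_sq = check_green_identity()
assert abs(inner + grad_sq) <= 1e-9 * (1.0 + grad_sq)
```

The identity is exact in exact arithmetic (summation by parts in each direction), so only rounding error appears in the check.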
For $\vartheta\in(0,1]$, let $U_{h}^{n-\vartheta}:=u({\bf x}_{h},t_{n-\vartheta})$ and denote the solution errors ${\tilde{u}}_{h}^{n}:=U_{h}^{n}-u_{h}^{n}={\bf u}({\bf x}_{h},t_{n})-{\bf u}_{h}^{n},\quad{\tilde{v}}_{h}^{n}:={\bf v}({\bf x}_{h},t_{n})-{\bf v}_{h}^{n},\quad\mbox{and}\quad{\tilde{w}}_{h}^{n}:=w({\bf x}_{h},t_{n})-w_{h}^{n}.$ One can obtain the error system of (25)–(28): (29) $\displaystyle{\tilde{w}}_{h}^{n-\theta}=({\cal D}_{\tau}^{\beta}{\tilde{v}}_{h})^{n-\theta}-{\cal N}_{h}^{n-\theta}-({\cal T}_{f})_{h}^{n-\theta}+({\cal T}_{v1})_{h}^{n-\theta}-({\cal T}_{w})_{h}^{n-\theta},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ (30) $\displaystyle{\tilde{w}}_{h}^{n}=\nu^{2}\Delta_{h}{\tilde{u}}_{h}^{n}+\nu^{2}{\cal S}_{h}^{n},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ (31) $\displaystyle{\tilde{v}}_{h}^{n-\theta}=({\cal D}_{\tau}^{\beta}{\tilde{u}}_{h})^{n-\theta}+({\cal T}_{u})_{h}^{n-\theta}-({\cal T}_{v2})_{h}^{n-\theta},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ $\displaystyle{\tilde{u}}_{h}^{0}={\tilde{v}}_{h}^{0}={\tilde{w}}_{h}^{0}=0,\quad{\bf x}_{h}\in\bar{\Omega}_{h};\qquad{\tilde{u}}_{h}^{n}={\tilde{v}}_{h}^{n}=0,\quad{\bf x}_{h}\in\partial\Omega_{h},~{}1\leq n\leq N,$ where $({\cal T}_{f})_{h}^{n-\theta}$, $({\cal T}_{v1})_{h}^{n-\theta}$, $({\cal T}_{w})_{h}^{n-\theta}$, $({\cal T}_{u})_{h}^{n-\theta}$, $({\cal T}_{v2})_{h}^{n-\theta}$ and ${\cal S}_{h}^{n}$ are the temporal and spatial truncation errors (see the Appendix, Section 7, for more details), and $\displaystyle{\cal N}_{h}^{n-\theta}:=$ $\displaystyle F(U_{h}^{n-\theta})-F(u_{h}^{n-\theta})$ $\displaystyle=$ $\displaystyle(1-\theta)\left[\partial_{u}f(u_{h}^{n-1},{\bf x}_{h},t_{n-\theta})\nabla_{\tau}{\tilde{u}}_{h}^{n}+{\tilde{u}}_{h}^{n-1}\nabla_{\tau}U_{h}^{n}\int_{0}^{1}\partial^{2}_{u}f(sU_{h}^{n-1}+(1-s)u_{h}^{n-1},{\bf x}_{h},t_{n-\theta})\,\mathrm{d}s\right]$ $\displaystyle+{\tilde{u}}_{h}^{n-1}\int_{0}^{1}\partial_{u}f(sU_{h}^{n-1}+(1-s)u_{h}^{n-1},{\bf x}_{h},t_{n-\theta})\,\mathrm{d}s.$ It can be deduced from (30) that $\sum_{k=1}^{n}A_{n-k}^{(n)}\nabla_{\tau}{\tilde{w}}_{h}^{k}=\nu^{2}\sum_{k=1}^{n}A_{n-k}^{(n)}\nabla_{\tau}\left(\Delta_{h}{\tilde{u}}_{h}^{k}+{\cal S}_{h}^{k}\right)$. Then, applying the operator $\Delta_{h}$ to (29) and (31), it follows that $\displaystyle\Delta_{h}{\tilde{w}}_{h}^{n-\theta}=({\cal D}_{\tau}^{\beta}\Delta_{h}{\tilde{v}}_{h})^{n-\theta}+\Delta_{h}{\cal N}_{h}^{n-\theta}-\Delta_{h}({\cal T}_{f})_{h}^{n-\theta}+\Delta_{h}({\cal T}_{v1})_{h}^{n-\theta}-\Delta_{h}({\cal T}_{w})_{h}^{n-\theta},$ $\displaystyle\qquad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ (32) $\displaystyle({\cal D}_{\tau}^{\beta}{\tilde{w}}_{h})^{n-\theta}=\nu^{2}({\cal D}_{\tau}^{\beta}\Delta_{h}{\tilde{u}}_{h})^{n-\theta}+\nu^{2}({\cal D}_{\tau}^{\beta}{\cal S}_{h})^{n-\theta},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ (33) $\displaystyle\Delta_{h}{\tilde{v}}_{h}^{n-\theta}=({\cal D}_{\tau}^{\beta}\Delta_{h}{\tilde{u}}_{h})^{n-\theta}+\Delta_{h}({\cal T}_{u})_{h}^{n-\theta}-\Delta_{h}({\cal T}_{v2})_{h}^{n-\theta},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ $\displaystyle{\tilde{u}}_{h}^{0}={\tilde{v}}_{h}^{0}={\tilde{w}}_{h}^{0}=0,\quad{\bf x}_{h}\in\bar{\Omega}_{h};\qquad{\tilde{u}}_{h}^{n}={\tilde{v}}_{h}^{n}=0,\quad{\bf x}_{h}\in\partial\Omega_{h},~{}1\leq n\leq N.$ By eliminating the term $({\cal D}_{\tau}^{\beta}\Delta_{h}{\tilde{u}}_{h})^{n-\theta}$ in (32) and (33), we get $\displaystyle\Delta_{h}{\tilde{w}}_{h}^{n-\theta}=({\cal D}_{\tau}^{\beta}\Delta_{h}{\tilde{v}}_{h})^{n-\theta}+\Delta_{h}{\cal N}_{h}^{n-\theta}-\Delta_{h}({\cal T}_{f})_{h}^{n-\theta}+\Delta_{h}({\cal T}_{v1})_{h}^{n-\theta}-\Delta_{h}({\cal T}_{w})_{h}^{n-\theta},$ (34)
$\displaystyle\qquad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ (35) $\displaystyle\frac{1}{\nu^{2}}({\cal D}_{\tau}^{\beta}{\tilde{w}}_{h})^{n-\theta}=\Delta_{h}{\tilde{v}}_{h}^{n-\theta}-\Delta_{h}({\cal T}_{u})_{h}^{n-\theta}+\Delta_{h}({\cal T}_{v2})_{h}^{n-\theta}+({\cal D}_{\tau}^{\beta}{\cal S}_{h})^{n-\theta},\quad{\bf x}_{h}\in\Omega_{h},~{}1\leq n\leq N;$ $\displaystyle{\tilde{u}}_{h}^{0}={\tilde{v}}_{h}^{0}={\tilde{w}}_{h}^{0}=0,\quad{\bf x}_{h}\in\bar{\Omega}_{h};\qquad{\tilde{u}}_{h}^{n}={\tilde{v}}_{h}^{n}=0,\quad{\bf x}_{h}\in\partial\Omega_{h},~{}1\leq n\leq N.$ ###### Lemma 3.4. Let ${\cal F}(\psi({\bf x}),{\bf x})\in C^{2}(\mathbb{R}\times\Omega)$, and let $\\{\psi_{h}\\}$ be a grid function which satisfies $\max\\{\|\psi\|_{\infty},\|\nabla_{h}\psi\|,\|\Delta_{h}\psi\|\\}\leq C_{\psi}$. Then there is a constant $C_{F}>0$, depending on $C_{\psi}$ and $C_{\Omega}$, such that $\|\Delta_{h}[{\cal F}(\psi,{\bf x})v]\|\leq C_{F}\|\Delta_{h}v\|.$ ###### Proof 3.5. The proof follows that of [20, Lemma 4.1] by routine computations on $\delta_{x}{\cal F}(\psi_{i-\frac{1}{2},j},(x_{i-\frac{1}{2}},y_{j}))$, $\delta_{y}{\cal F}(\psi_{i,j-\frac{1}{2}},(x_{i},y_{j-\frac{1}{2}}))$, $\delta_{x}^{2}{\cal F}(\psi_{i,j},(x_{i},y_{j}))$ and $\delta_{y}^{2}{\cal F}(\psi_{i,j},(x_{i},y_{j}))$, using the Taylor formula with integral remainder. We next show the unconditional convergence of the proposed linearized schemes (22)–(24) based on the $H^{2}$ energy method ([20, 18]). ###### Theorem 3.6. Let $f\in C^{(4,2,0)}({\mathbb{R}}\times\Omega\times[0,T])$.
If the assumptions in (5) and the mesh assumption MA hold, then the linearized schemes (22)–(24) are unconditionally convergent with (36) $\|\Delta_{h}{\tilde{u}}^{n}\|+\|\nabla_{h}{\tilde{v}}^{n}\|\leq\left\\{\begin{array}[]{ll}\vskip 3.0pt plus 1.0pt minus 1.0ptC(\tau^{\min\\{2-\beta,\gamma\sigma_{1},\gamma\sigma_{2}\\}}+h^{2}),\quad\mbox{if}\quad\theta=0;\\\ C(\tau^{\min\\{2,\gamma\sigma_{1},\gamma\sigma_{2}\\}}+h^{2}),\quad\mbox{if}\quad\theta=\frac{\beta}{2};\end{array}\quad\mbox{for}\quad 1\leq n\leq N.\right.$ ###### Proof 3.7. Taking the inner products of equations (34) and (35) with ${\tilde{v}}_{h}^{n-\theta}$ and ${\tilde{w}}_{h}^{n-\theta}$ respectively, we have $\displaystyle\left\langle\Delta_{h}{\tilde{w}}^{n-\theta},{\tilde{v}}^{n-\theta}\right\rangle=$ $\displaystyle\left\langle({\cal D}_{\tau}^{\beta}\Delta_{h}{\tilde{v}})^{n-\theta},{\tilde{v}}^{n-\theta}\right\rangle$ (37) $\displaystyle+\left\langle\Delta_{h}{\cal N}^{n-\theta}-\Delta_{h}({\cal T}_{f})^{n-\theta}+\Delta_{h}({\cal T}_{v1})^{n-\theta}-\Delta_{h}({\cal T}_{w})^{n-\theta},{\tilde{v}}^{n-\theta}\right\rangle$ and (38) $\displaystyle\left\langle\frac{1}{\nu^{2}}({\cal D}_{\tau}^{\beta}{\tilde{w}})^{n-\theta},{\tilde{w}}^{n-\theta}\right\rangle=\left\langle\Delta_{h}{\tilde{v}}^{n-\theta},{\tilde{w}}^{n-\theta}\right\rangle+\left\langle-\Delta_{h}({\cal T}_{u})^{n-\theta}+\Delta_{h}({\cal T}_{v2})^{n-\theta}+({\cal D}_{\tau}^{\beta}{\cal S})^{n-\theta},{\tilde{w}}^{n-\theta}\right\rangle.$ With the identity $\left\langle\Delta_{h}{\tilde{w}}^{n-\theta},{\tilde{v}}^{n-\theta}\right\rangle=\left\langle\Delta_{h}{\tilde{v}}^{n-\theta},{\tilde{w}}^{n-\theta}\right\rangle$ and the zero boundary conditions of ${\tilde{v}}_{h}^{n-\theta}$, it follows from (37)–(38) that $\displaystyle\left\langle\frac{1}{\nu^{2}}({\cal D}_{\tau}^{\beta}{\tilde{w}})^{n-\theta},{\tilde{w}}^{n-\theta}\right\rangle+\left\langle({\cal
D}_{\tau}^{\beta}\nabla_{h}{\tilde{v}})^{n-\theta},\nabla_{h}{\tilde{v}}^{n-\theta}\right\rangle$ $\displaystyle=$ $\displaystyle\left\langle\Delta_{h}{\cal N}^{n-\theta}-\Delta_{h}({\cal T}_{f})^{n-\theta}+\Delta_{h}({\cal T}_{v1})^{n-\theta}-\Delta_{h}({\cal T}_{w})^{n-\theta},{\tilde{v}}^{n-\theta}\right\rangle$ $\displaystyle+\left\langle-\Delta_{h}({\cal T}_{u})^{n-\theta}+\Delta_{h}({\cal T}_{v2})^{n-\theta}+({\cal D}_{\tau}^{\beta}{\cal S})^{n-\theta},{\tilde{w}}^{n-\theta}\right\rangle.$ Utilizing (18) and the Cauchy-Schwarz inequality, the above equation leads to $\displaystyle\sum_{k=1}^{n}A_{n-k}^{(n)}\nabla_{\tau}(\|{\tilde{w}}^{n}\|^{2}+\nu^{2}\|\nabla_{h}{\tilde{v}}^{n}\|^{2})$ $\displaystyle\leq$ $\displaystyle 2\nu^{2}\left(\|\Delta_{h}{\cal N}^{n-\theta}\|+\|\Delta_{h}({\cal T}_{f})^{n-\theta}\|+\|\Delta_{h}({\cal T}_{v1})^{n-\theta}\|+\|\Delta_{h}({\cal T}_{w})^{n-\theta}\|\right)\|{\tilde{v}}^{n-\theta}\|$ $\displaystyle+2\nu^{2}\left(\|\Delta_{h}({\cal T}_{u})^{n-\theta}\|+\|\Delta_{h}({\cal T}_{v2})^{n-\theta}\|+\|({\cal D}_{\tau}^{\beta}{\cal S})^{n-\theta}\|\right)\|{\tilde{w}}^{n-\theta}\|$ (39) $\displaystyle\leq$ $\displaystyle 2\nu(1+\nu)(1+C_{\Omega})\left(\|\Delta_{h}{\cal N}^{n-\theta}\|+{\cal T}^{n-\theta}\right)\left(\|{\tilde{w}}^{n-\theta}\|+\nu\|\nabla_{h}{\tilde{v}}^{n-\theta}\|\right),$ where ${\cal T}^{n-\theta}:=\|\Delta_{h}({\cal T}_{f})^{n-\theta}\|+\|\Delta_{h}({\cal T}_{v1})^{n-\theta}\|+\|\Delta_{h}({\cal T}_{w})^{n-\theta}\|+\|\Delta_{h}({\cal T}_{u})^{n-\theta}\|+\|\Delta_{h}({\cal T}_{v2})^{n-\theta}\|+\|({\cal D}_{\tau}^{\beta}{\cal S})^{n-\theta}\|.$ From (45) and (47)–(52), there exists a positive constant $C_{r}$ such that (40) $\sum_{j=1}^{n}P_{n-j}^{(n)}{\cal T}^{j-\theta}\leq\left\\{\begin{array}[]{ll}\vskip 3.0pt plus 1.0pt minus 1.0ptC_{r}(\tau^{\min\\{2-\beta,\gamma\sigma_{1},\gamma\sigma_{2}\\}}+h^{2}),\quad\mbox{for}\quad\theta=0;\\\
C_{r}(\tau^{\min\\{2,\gamma\sigma_{1},\gamma\sigma_{2}\\}}+h^{2}),\quad\mbox{for}\quad\theta=\frac{\beta}{2}.\end{array}\right.$ From the regularity assumptions in (5), we introduce the following constant (41) $\displaystyle C_{0}=\max_{0\leq n\leq N}\\{\|U^{n}\|_{\infty},\|\nabla_{h}U^{n}\|,\|\Delta_{h}U^{n}\|\\}.$ The mathematical induction method will be applied to show that (42) $\displaystyle\|{\tilde{w}}^{n}\|+\nu\|\nabla_{h}{\tilde{v}}^{n}\|\leq{\cal E}_{n}{\tilde{\cal T}}^{n-\theta},\quad 1\leq n\leq N,$ where ${\cal E}_{n}:=4E_{\beta}(2\max(1,\rho)\pi_{A}\Lambda t_{n}^{\beta})$ with $\Lambda=2\nu(1+\nu)C_{f}(1+C_{\Omega})$ and ${\tilde{\cal T}}^{n-\theta}:=2\nu(1+\nu)(1+C_{\Omega})\times\left\\{\begin{array}[]{ll}\vskip 3.0pt plus 1.0pt minus 1.0ptC_{r}(\tau^{\min\\{2-\beta,\gamma\sigma_{1},\gamma\sigma_{2}\\}}+h^{2})+\pi_{A}\Gamma(1-\beta)C_{f}C_{u}t_{n}^{\beta}h^{2},\quad\mbox{for}\quad\theta=0;\\\ C_{r}(\tau^{\min\\{2,\gamma\sigma_{1},\gamma\sigma_{2}\\}}+h^{2})+\pi_{A}\Gamma(1-\beta)C_{f}C_{u}t_{n}^{\beta}h^{2},\quad\mbox{for}\quad\theta=\frac{\beta}{2},\end{array}\right.$ in which $C_{f}=\max\\{(1-\theta)C_{1},C_{2}[1+((1-\theta)(C_{2}+1)+1)/\theta]\\}$ with $C_{1}$ and $C_{2}$ being two suitable positive constants which depend on $C_{0}$ and $C_{\Omega}$. When $n=1$, it holds that ${\tilde{u}}_{h}^{0}=0$ and $u_{h}^{0}=U_{h}^{0}\leq C_{0}$. For $f\in C^{(3,2,0)}({\mathbb{R}}\times\Omega\times[0,T])$, by Lemma 3.4 and (46) there exists a positive constant $C_{1}$ such that (43) $\displaystyle\|\Delta_{h}{\cal N}^{1-\theta}\|=(1-\theta)\|\Delta_{h}f_{u}^{\prime}(u_{h}^{0},{\bf x},t_{1-\theta}){\tilde{u}}_{h}^{1}\|\leq(1-\theta)C_{1}\|\Delta_{h}{\tilde{u}}_{h}^{1}\|\leq(1-\theta)C_{1}(\|{\tilde{w}}^{1}\|+C_{u}h^{2}).$ For simplicity, denote $\|{\tilde{w}}^{(n-\theta)}\|:=(1-\theta)\|{\tilde{w}}^{n}\|+\theta\|{\tilde{w}}^{n-1}\|$. Similarly, we define $\|\nabla_{h}{\tilde{v}}^{(n-\theta)}\|$.
The triangle inequality gives $\|{\tilde{w}}^{n-\theta}\|\leq\|{\tilde{w}}^{(n-\theta)}\|$ and $\|\nabla_{h}{\tilde{v}}^{n-\theta}\|\leq\|\nabla_{h}{\tilde{v}}^{(n-\theta)}\|$. Then, it follows from (39) and (43) that $\displaystyle A_{0}^{(1)}\nabla_{\tau}(\|{\tilde{w}}^{1}\|^{2}+\nu^{2}\|\nabla_{h}{\tilde{v}}^{1}\|^{2})\leq 2\nu(1+\nu)C_{1}(1+C_{\Omega})\left(\|{\tilde{w}}^{(1-\theta)}\|+\nu\|\nabla_{h}{\tilde{v}}^{(1-\theta)}\|\right)^{2}$ $\displaystyle+2\nu(1+\nu)(1+C_{\Omega})\left({\cal T}^{1-\theta}+(1-\theta)C_{1}C_{u}h^{2}\right)\left(\|{\tilde{w}}^{(1-\theta)}\|+\nu\|\nabla_{h}{\tilde{v}}^{(1-\theta)}\|\right).$ Thus, applying Lemma 3.2 to the above inequality, and utilizing (40), we get $\displaystyle\|{\tilde{w}}^{1}\|+\nu\|\nabla_{h}{\tilde{v}}^{1}\|\leq{\cal E}_{1}\left[2\nu(1+\nu)(1+C_{\Omega})P_{0}^{(1)}\left({\cal T}^{1-\theta}+(1-\theta)C_{1}C_{u}h^{2}\right)\right]\leq{\cal E}_{1}{\tilde{\cal T}}^{1-\theta},$ which means that (42) holds for $n=1$. Assume that (42) is valid for $1\leq k\leq n-1~{}(n\geq 2)$. Equation (30) and the discrete embedding inequalities imply that $\displaystyle\max\\{\|{\tilde{u}}^{k}\|_{\infty},\|\nabla_{h}{\tilde{u}}^{k}\|,\|\Delta_{h}{\tilde{u}}^{k}\|\\}\leq$ $\displaystyle\max\\{1,C_{\Omega}\\}\left(\frac{1}{\nu^{2}}\|{\tilde{w}}^{k}\|+\|{\cal S}^{k}\|\right)$ $\displaystyle\leq$ $\displaystyle\max\\{1,C_{\Omega}\\}\left(\frac{1}{\nu^{2}}{\cal E}_{k}{\tilde{\cal T}}^{k-\theta}+C_{u}h^{2}\right)\leq 1,$ for $1\leq k\leq n-1$ and sufficiently small step sizes. So according to (41), the numerical solutions satisfy $\max\\{\|{u}^{k}\|_{\infty},\|\nabla_{h}{u}^{k}\|,\|\Delta_{h}{u}^{k}\|\\}\leq C_{0}+1.$ Now for $k=n$, suppose $f\in C^{(4,2,0)}({\mathbb{R}}\times\Omega\times[0,T])$.
By Lemma 3.4, there exists a positive constant $C_{2}$ such that $\displaystyle\|\Delta_{h}{\cal N}^{n-\theta}\|\leq$ $\displaystyle(1-\theta)\bigg{(}\|\Delta_{h}[f_{u}^{\prime}(u^{n-1},{\bf x},t_{n-\theta})\nabla_{\tau}{\tilde{u}}^{n}]\|$ $\displaystyle+\int_{0}^{1}\|\Delta_{h}[f_{u}^{\prime\prime}(sU^{n-1}+(1-s)u^{n-1},{\bf x},t_{n-\theta}){\tilde{u}}^{n-1}\nabla_{\tau}U^{n}]\|\,\mathrm{d}s\bigg{)}$ $\displaystyle+\int_{0}^{1}\|\Delta_{h}[f_{u}^{\prime}(sU^{n-1}+(1-s)u^{n-1},{\bf x},t_{n-\theta}){\tilde{u}}^{n-1}]\|\,\mathrm{d}s$ $\displaystyle\leq$ $\displaystyle(1-\theta)C_{2}\left(\|\Delta_{h}(\nabla_{\tau}{\tilde{u}}^{n})\|+C_{2}\|\Delta_{h}{\tilde{u}}^{n-1}\|\right)+C_{2}\|\Delta_{h}{\tilde{u}}^{n-1}\|$ $\displaystyle\leq$ $\displaystyle C_{f}\left[(1-\theta)\|\Delta_{h}{\tilde{u}}^{n}\|+\theta\|\Delta_{h}{\tilde{u}}^{n-1}\|\right]$ (44) $\displaystyle\leq$ $\displaystyle C_{f}\|{\tilde{w}}^{(n-\theta)}\|+C_{f}C_{u}h^{2}.$ So (39) and (44) lead to $\displaystyle\sum_{k=1}^{n}A_{n-k}^{(n)}\nabla_{\tau}(\|{\tilde{w}}^{n}\|^{2}+\nu^{2}\|\nabla_{h}{\tilde{v}}^{n}\|^{2})\leq 2\nu(1+\nu)C_{f}(1+C_{\Omega})\left(\|{\tilde{w}}^{(n-\theta)}\|+\nu\|\nabla_{h}{\tilde{v}}^{(n-\theta)}\|\right)^{2}$ $\displaystyle+2\nu(1+\nu)(1+C_{\Omega})\left({\cal T}^{n-\theta}+C_{f}C_{u}h^{2}\right)\left(\|{\tilde{w}}^{(n-\theta)}\|+\nu\|\nabla_{h}{\tilde{v}}^{(n-\theta)}\|\right).$ Applying Lemma 3.2 and utilizing (40) again yields $\displaystyle\|{\tilde{w}}^{n}\|+\nu\|\nabla_{h}{\tilde{v}}^{n}\|\leq{\cal E}_{n}\left[2\nu(1+\nu)(1+C_{\Omega})\max_{1\leq k\leq n}\sum_{j=1}^{k}P_{k-j}^{(k)}\left({\cal T}^{j-\theta}+C_{f}C_{u}h^{2}\right)\right]\leq{\cal E}_{n}{\tilde{\cal T}}^{n-\theta}.$ Therefore (42) is verified. Finally, the desired result (36) is reached by (30) and unifying the constants. ###### Remark 3.8. 
The memory- and storage-saving SOE approximation investigated in [8] (see also [8, Theorem 2.5] or [19, Lemma 5.1]) for computing the discrete Caputo derivative can be applied directly to the nonuniform L1 and Alikhanov formulas. The corresponding coefficients of the fast L1 and fast Alikhanov formulas preserve the properties A1–A2 [17, 19], which in turn guarantees that the theoretical analysis carries over to the associated fast schemes. Therefore, in our later implementation of the adaptive time-stepping strategy and in the numerical tests, we always use the fast L1 formula [17, Example 2] and the fast Alikhanov formula [19, eq. (5.3)] when applying the proposed algorithms (22)–(24) with $\theta=0$ and $\theta=\beta/2$, respectively.

### 3.4 Adaptive time-stepping strategy

The time mesh assumption in Theorem 3.6 permits us to build an adaptive time-stepping strategy on the fast L1 and fast Alikhanov algorithms so as to reduce the computational cost of solving the semilinear diffusion-wave equation, especially when the solution of the governing problem is highly oscillatory in time. Following [7, 19], we design an adaptive time-stepping algorithm for the semilinear diffusion-wave equation (1); the strategy is presented in Algorithm 1. 
Algorithm 1 Adaptive time-stepping strategy

Given: $u^{n}$, $v^{n}$ and time step $\tau_{n+1}$
1: Compute $u_{1}^{n+1}$ by the (fast) L1 scheme ((22)–(24) for $\theta=0$) with time step $\tau_{n+1}$;
2: Compute $u_{2}^{n+1}$ by the (fast) Alikhanov scheme ((22)–(24) for $\theta=\beta/2$) with time step $\tau_{n+1}$;
3: Calculate $e^{n+1}=\|u_{2}^{n+1}-u_{1}^{n+1}\|/\|u_{2}^{n+1}\|$;
4: if $e^{n+1}<tol$ or $\tau_{n+1}=\frac{2}{3}\tau_{n}$ then
5:     Update the time-step size $\tau_{n+2}\leftarrow\min\\{\max\\{\tau_{\min},\tau_{ada}\\},\tau_{\max}\\}$;
6: else
7:     Recalculate the time-step size $\tau_{n+1}\leftarrow\max\\{\min\\{\max\\{\tau_{\min},\tau_{ada}\\},\tau_{\max}\\},\frac{2}{3}\tau_{n+1}\\}$;
8:     Goto 1;
9: end if

The adaptive time-step size in Algorithm 1 is updated by $\tau_{ada}(e,\tau)=S\left(\frac{tol}{e}\right)^{\frac{1}{2}}\tau,$ where $S$ and $tol$ denote the safety coefficient and the tolerance, respectively.

## 4 Numerical experiments

Numerical examples are carried out in this section to show the accuracy and efficiency of the proposed algorithms. The absolute tolerance error $\epsilon$ and the cut-off time $\Delta t$ of the fast L1 formula [17, Example 2] and the fast Alikhanov formula [19, Lemma 5.1] are set as $\epsilon=10^{-12}$ and $\Delta t=\tau_{1}$ in all of the following tests.

###### Example 4.1.

We first consider the problem (1) with $\Omega=(0,1)^{2}$, $T=1$, $\nu=1$ and $f(u,{\bf x},t)=-u^{3}+[\sin(\pi x)\sin(\pi y)(1+t+t^{\alpha})]^{3}+\sin(\pi x)\sin(\pi y)\left[\Gamma(\alpha+1)+2\pi^{2}(1+t+t^{\alpha})\right].$ In this situation, the exact solution is $u=\sin(\pi x)\sin(\pi y)(1+t+t^{\alpha})$. One may notice that the regularity parameters in (5) are $\sigma_{1}=\alpha$ and $\sigma_{2}=\alpha/2$ for Example 4.1. Therefore, according to Theorem 3.6, the optimal mesh parameter is $\gamma_{opt}=(4-\alpha)/\alpha$ for the nonuniform L1 scheme and takes the value $\gamma_{opt}=4/\alpha$ for the nonuniform Alikhanov scheme. 
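For concreteness, the optimal grading parameters just stated, the graded mesh used in the experiments below, and the empirical order computation can be sketched in a few lines of Python (the helper names and sample error values are our own illustrations, not code from the paper):

```python
import numpy as np

def gamma_opt(alpha, scheme="L1"):
    # Optimal grading parameter from Theorem 3.6 for Example 4.1,
    # where sigma_1 = alpha and sigma_2 = alpha / 2.
    return (4.0 - alpha) / alpha if scheme == "L1" else 4.0 / alpha

def graded_mesh(T, N, gamma):
    # Graded time mesh t_k = T * (k / N)^gamma, k = 0, ..., N.
    return T * (np.arange(N + 1) / N) ** gamma

def observed_order(err_coarse, err_fine):
    # Temporal order estimate: Order = log2(e(N/2) / e(N)).
    return np.log2(err_coarse / err_fine)
```

For instance, `gamma_opt(1.5)` returns $(4-1.5)/1.5\approx 1.67$ and `gamma_opt(1.2, scheme="Alikhanov")` returns $4/1.2\approx 3.33$, matching the headers of Tables 2 and 4 below.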
Both optimal grading parameters are bounded for $\alpha\in(1,2)$. These bounded mesh parameters preserve the robustness of the algorithms in practical implementations when the graded mesh $t_{k}=T(k/N)^{\gamma}$ is imposed to resolve the weak initial singularity. By contrast, the optimal grading parameters of the nonuniform schemes in [12, 15, 16, 19, 20, 35] grow without bound as the fractional order becomes small, since they all take the form $\gamma_{opt}=r/\alpha$, where $\alpha\in(0,1)$ for the sub-diffusion problems and $r$ is the optimal temporal rate; this generally leads to practical limitations. Since the spatial error ${\cal O}(h^{2})$ is standard, we only display the temporal accuracy of the fast L1 and fast Alikhanov schemes. For Example 4.1, we fix a fine spatial mesh with $M=1000$ so that the temporal errors dominate the spatial errors. In each test, the time interval $[0,T]$ is divided into two parts $[0,T_{0}]$ and $(T_{0},T]$ with $N$ time nodes in total. A graded mesh with $t_{k}=T_{0}(k/N_{0})^{\gamma}$ in the first interval $[0,T_{0}]$ is utilized to resolve the weak initial singularity, where $T_{0}=\min\\{1/\gamma,T\\}$. For the second interval $(T_{0},T]$, we use random time-step sizes $\tau_{N_{0}+k}=(T-T_{0})\epsilon_{k}/\sum_{k=1}^{N_{1}}\epsilon_{k}$ for $N_{1}=N-N_{0}$, where the $\epsilon_{k}$ are drawn randomly from $(0,1)$. For this example, we take $N_{0}=\lceil\frac{N}{T+1-\gamma^{-1}}\rceil$. The discrete $H^{2}$-norm errors $e_{H^{2}}(N)=\max_{1\leq n\leq N}\|U^{n}-u^{n}\|_{H^{2}}$ are recorded in each run, and the temporal convergence order is given by $\mbox{Order}=\log_{2}\left[\frac{e_{H^{2}}(N/2)}{e_{H^{2}}(N)}\right].$

Table 1: Numerical accuracy in temporal direction of fast L1 scheme for Example 4.1, where $\alpha=1.1$. 
          $\gamma=1$              $\gamma_{opt}=(4-\alpha)/\alpha\approx 2.64$   $\gamma=\frac{9}{8}\gamma_{opt}\approx 2.97$
$N$       $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order
$16$      4.2668e-02  $\ast$      1.4285e-01  $\ast$      2.4493e-01  $\ast$
$32$      3.3723e-02  0.34        3.5519e-02  2.01        5.1210e-02  2.26
$64$      2.2386e-02  0.59        1.0731e-02  1.73        1.2611e-02  2.02
$128$     1.3688e-02  0.71        2.4621e-03  2.12        3.8861e-03  1.70
Theoretical Order     0.55                    1.45                    1.45

Table 2: Numerical accuracy in temporal direction of fast L1 scheme for Example 4.1, where $\alpha=1.5$.

          $\gamma=1$              $\gamma_{opt}=(4-\alpha)/\alpha\approx 1.67$   $\gamma=\frac{9}{8}\gamma_{opt}\approx 1.88$
$N$       $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order
$16$      3.4875e-02  $\ast$      2.2507e-01  $\ast$      2.5657e-01  $\ast$
$32$      1.2196e-02  1.52        3.6991e-02  2.61        3.4006e-02  2.92
$64$      8.7566e-03  0.48        1.2921e-02  1.52        1.3962e-02  1.28
$128$     5.5637e-03  0.65        4.2362e-03  1.61        3.8805e-03  1.85
Theoretical Order     0.75                    1.25                    1.25

Table 3: Numerical accuracy in temporal direction of fast L1 scheme for Example 4.1, where $\alpha=1.9$.

          $\gamma=1$              $\gamma_{opt}=(4-\alpha)/\alpha\approx 1.11$   $\gamma=\frac{9}{8}\gamma_{opt}\approx 1.24$
$N$       $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order
$16$      7.4641e-02  $\ast$      7.4027e-02  $\ast$      1.3225e-01  $\ast$
$32$      3.7415e-02  1.00        3.4234e-02  1.11        3.4250e-02  1.95
$64$      1.7990e-02  1.06        1.6886e-02  1.02        1.6737e-02  1.03
$128$     8.4156e-03  1.10        8.0910e-03  1.06        8.0567e-03  1.05
Theoretical Order     0.95                    1.05                    1.05

Table 4: Numerical accuracy in temporal direction of fast Alikhanov scheme for Example 4.1, where $\alpha=1.2$. 
          $\gamma=1$              $\gamma_{opt}=4/\alpha\approx 3.33$   $\gamma=\frac{9}{8}\gamma_{opt}\approx 3.75$
$N$       $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order
$16$      5.2656e-02  $\ast$      1.2494e-01  $\ast$      .5496e-01  $\ast$
$32$      3.2671e-02  0.69        3.3236e-02  1.91        4.1575e-02  1.90
$64$      2.0683e-02  0.66        8.5962e-03  1.95        1.0801e-02  1.94
$128$     1.1645e-02  0.83        2.1990e-03  1.97        2.8352e-03  1.93
Theoretical Order     0.60                    2.00                    2.00

Table 5: Numerical accuracy in temporal direction of fast Alikhanov scheme for Example 4.1, where $\alpha=1.5$.

          $\gamma=1$              $\gamma_{opt}=4/\alpha\approx 2.67$   $\gamma=\frac{9}{8}\gamma_{opt}=3$
$N$       $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order
$16$      3.0823e-02  $\ast$      7.0440e-02  $\ast$      8.7416e-02  $\ast$
$32$      1.3857e-02  1.15        1.8560e-02  1.92        2.3212e-02  1.91
$64$      6.2024e-03  1.16        4.7736e-03  1.96        5.9919e-03  1.95
$128$     2.6236e-03  1.24        1.2150e-03  1.97        1.5269e-03  1.97
Theoretical Order     0.75                    2.00                    2.00

Table 6: Numerical accuracy in temporal direction of fast Alikhanov scheme for Example 4.1, where $\alpha=1.8$.

          $\gamma=1$              $\gamma_{opt}=4/\alpha\approx 2.22$   $\gamma=\frac{9}{8}\gamma_{opt}=2.50$
$N$       $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order   $e_{H^{2}}(N)$  Order
$16$      1.9521e-02  $\ast$      3.5560e-02  $\ast$      4.3938e-02  $\ast$
$32$      6.7203e-03  1.54        9.3089e-03  1.93        1.1590e-02  1.92
$64$      2.6309e-03  1.35        2.3828e-03  1.97        2.9755e-03  1.96
$128$     1.1487e-03  1.20        6.0470e-04  1.98        7.5559e-04  1.98
Theoretical Order     0.90                    2.00                    2.00

Tables 1–3 record the numerical results of the proposed fast L1 scheme with different grading parameters when solving the example for different $\alpha$. One can observe that the L1 scheme is accurate and attains the optimal temporal convergence of ${\cal O}(\tau^{\min\\{2-\frac{\alpha}{2},\gamma\frac{\alpha}{2}\\}})$. Similar numerical tests of the fast Alikhanov scheme are carried out for the example, and the results are listed in Tables 4–6. 
The temporal convergence of ${\cal O}(\tau^{\min\\{2,\gamma\frac{\alpha}{2}\\}})$ is well reflected, and the optimal second-order convergence is apparent when $\gamma\geq\gamma_{opt}={4}/{\alpha}$.

###### Example 4.2.

Consider the semilinear problem (1) with $\Omega=(-1,1)^{2}$, $\nu=1$ and $f(u,{\bf x},t)=-u^{3}$. The initial data are given as $\varphi({\bf x})=(x^{2}-1)(y^{2}-1)\\{\exp[-10((x+0.4)^{2}+y^{2})]+\exp[-10((x-0.4)^{2}+y^{2})]\\},\quad{\tilde{\varphi}}({\bf x})=0.$

Figure 1: The numerical solution in maximum-norm of Algorithm 1 and the Graded-Uniform scheme for Example 4.2 with $\alpha=1.5$.
Figure 2: Contour plots of the solutions of Algorithm 1 for Example 4.2 at different times ((a) t=0, (b) t=0.5, (c) t=2, (d) t=10) with $\alpha=1.5$.
Figure 3: The variation of time-step sizes of Algorithm 1 and the Graded-Uniform scheme for Example 4.2 with $\alpha=1.5$.

For Example 4.2, we choose the spatial mesh parameter $M=100$ and again divide the time interval $[0,T]$ into two parts $[0,T_{0}]$ and $(T_{0},T]$ with $T_{0}=0.02$. The Alikhanov algorithm on a graded mesh with $t_{k}=T_{0}(k/N_{0})^{\gamma}$ ($\gamma=\alpha/4$) in the first interval $[0,T_{0}]$ is utilized to resolve the possible weak initial singularity, where $N_{0}=30$. For the remaining interval $(T_{0},T]$, we employ the proposed adaptive time-stepping strategy (Algorithm 1) to compute the numerical solution until $T=10$. The parameters of the adaptive algorithm for solving this example are $tol=10^{-3},~{}S=0.9,~{}\tau_{\min}=10^{-3},~{}\tau_{\max}=10^{-1},~{}\tau_{N_{0}+1}=\tau_{N_{0}}.$ To demonstrate the efficiency of the adaptive algorithm, the fast linearized Alikhanov scheme is applied at the same time to find the solution in the interval $(T_{0},T]$. Its temporal mesh is graded (with $\gamma=\alpha/4$) in $[0,T_{0}]$ and is uniform in $(T_{0},T]$. In the following, we use ‘Graded-Uniform’ to denote this scheme. 
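The accepted-step branch of Algorithm 1 (lines 3–5), with the parameters listed above, can be sketched as follows; the function names are ours, and the rejected-step recomputation (line 7) is omitted for brevity:

```python
def tau_ada(e, tau, tol=1e-3, S=0.9):
    # Step-size proposal tau_ada(e, tau) = S * sqrt(tol / e) * tau.
    return S * (tol / e) ** 0.5 * tau

def next_step(e, tau, tau_min=1e-3, tau_max=1e-1, tol=1e-3, S=0.9):
    # Accepted-step update (line 5 of Algorithm 1):
    # clip the proposal to the admissible range [tau_min, tau_max].
    return min(max(tau_min, tau_ada(e, tau, tol, S)), tau_max)
```

For example, a relative error `e` equal to `tol` shrinks the step only by the safety factor (`next_step(1e-3, 0.05)` gives `0.045`), while a very small error lets the step grow until it hits `tau_max`.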
Figure 1 displays the numerical solutions in maximum-norm computed by Algorithm 1 and by the Graded-Uniform scheme for $\alpha=1.5$. The adaptive mesh matches the dense uniform mesh in $(T_{0},T]$ well, while requiring only 277 time nodes in the remaining interval $(T_{0},T]$, whereas the uniform mesh needs 970 time nodes. Figure 2 gives contour plots of the solutions obtained by the adaptive strategy, which show the wave interactions of the example at different times. The variation of the temporal step sizes of the adaptive strategy is presented in Figure 3, together with a comparison to those of the Graded-Uniform scheme. The results indicate that the adaptive time-stepping strategy is efficient and robust in the long-time simulation of semilinear diffusion-wave equations, especially when the solution exhibits high oscillations in time.

## 5 Concluding remarks

We proposed a novel order reduction method to equivalently rewrite the semilinear diffusion-wave equation as coupled equations in which the explicit time-fractional derivative orders are all $\alpha/2$. The L1 and Alikhanov schemes, combined with linearized approximations, were constructed for the equivalent problem. By using the $H^{2}$ energy method, unconditional convergence (Theorem 3.6) was obtained for the two proposed algorithms under reasonable regularity assumptions and weak mesh restrictions. An adaptive time-stepping strategy was then designed for the semilinear problem to deal with possible temporal oscillations of the solution. The theoretical results were well demonstrated by our numerical experiments. 
We finally point out several relevant issues that deserve further study: (i) deriving the regularity of the linear and semilinear diffusion-wave equations assumed by the difference schemes; (ii) studying the energy properties of the nonlinear diffusion-wave equations in both the continuous and discrete settings, noting that corresponding properties were investigated recently for nonlinear sub-diffusion problems [37]; (iii) extending the proposed methods to related problems, such as the multi-term time-fractional wave equation [23].

## 6 Acknowledgement

The authors are very grateful to Prof. Hong-lin Liao for his great help with the design of the SFOR method and his valuable suggestions on other parts of the paper.

## 7 Appendix: Truncation error analysis

The truncation errors in (29)–(31) are defined as $\displaystyle({\cal T}_{f})_{h}^{n-\theta}:=f(U_{h}^{n-\theta},{\bf x}_{h},t_{n-\theta})-\left[f(U_{h}^{n-1},{\bf x}_{h},t_{n-\theta})+(1-\theta)f^{\prime}(U_{h}^{n-1},{\bf x}_{h},t_{n-\theta})\nabla_{\tau}U_{h}^{n}\right],$ $\displaystyle({\cal T}_{v1})_{h}^{n-\theta}:={\cal D}_{t}^{\beta}{\bf v}({\bf x}_{h},t_{n-\theta})-({\cal D}_{\tau}^{\beta}{\bf v}_{h})^{n-\theta},$ $\displaystyle({\cal T}_{w})_{h}^{n-\theta}:=w({\bf x}_{h},t_{n-\theta})-w_{h}^{n-\theta},$ $\displaystyle({\cal T}_{u})_{h}^{n-\theta}:={\cal D}_{t}^{\beta}{\bf u}({\bf x}_{h},t_{n-\theta})-({\cal D}_{\tau}^{\beta}{\bf u}_{h})^{n-\theta},$ $\displaystyle({\cal T}_{v2})_{h}^{n-\theta}:={\bf v}({\bf x}_{h},t_{n-\theta})-{\bf v}_{h}^{n-\theta},$ $\displaystyle{\cal S}_{h}^{n}:=\Delta u({\bf x}_{h},t_{n})-\Delta_{h}u_{h}^{n}.$ According to [16, Lemma 3.8 and Theorem 3.9], we have the following lemma estimating the time-weighted approximation. ###### Lemma 7.1. Assume that $g\in C^{2}((0,T])$ and there exists a constant $C_{g}>0$ such that $|g^{\prime\prime}(t)|\leq C_{g}(1+t^{\sigma-2}),\quad 0<t\leq T,$ where $\sigma\in(0,1)\cup(1,2)$ is a regularity parameter. 
Denote the local truncation error of $g^{n-\vartheta}$ (here $\vartheta=\beta/2$) by ${\tilde{\cal R}}^{n-\vartheta}=g(t_{n-\vartheta})-g^{n-\vartheta},\quad 1\leq n\leq N.$ If the mesh assumption MA holds, then $\displaystyle\sum_{j=1}^{n}P_{n-j}^{(n)}|{\tilde{\cal R}}^{j-\vartheta}|\leq C_{g}\left(\tau_{1}^{\sigma+\beta}/\sigma+t_{n}^{\beta}\max_{2\leq k\leq n}t_{k-1}^{\sigma-2}\tau_{k}^{2}\right)\leq C\tau^{\min\\{\gamma\sigma,2\\}}.$ The following lemma is provided to analyze $({\cal T}_{f})_{h}^{n-\theta}$, which is analogous to Lemma 3.4 in [20]. ###### Lemma 7.2. Assume that $\eta\in C([0,T])\cap C^{2}((0,T])$ and there exists a constant $C_{u}>0$ such that $|\eta^{(k)}(t)|\leq C_{u}(1+t^{\sigma-k}),\quad 0<t\leq T,\quad k=1,2,$ where $\sigma\in(0,1)\cup(1,2)$ is a regularity parameter. Assume further that the nonlinear function $f(u,x,t)\in C^{4}({\mathbb{R}})$ with respect to $u$. Denote $\eta^{n}=\eta(t_{n})$ and the local truncation error $\displaystyle{\cal R}_{f}^{n-\theta}:=f(\eta(t_{n-\theta}),x,t)-\left[f(\eta^{n-1},x,t)+(1-\theta)f^{\prime}_{u}(\eta^{n-1},x,t)\nabla_{\tau}\eta^{n}\right].$ If the assumption MA holds, then $\sum_{j=1}^{n}P_{n-j}^{(n)}|{\cal R}_{f}^{j-\theta}|\leq\left\\{\begin{array}[]{ll}C\tau^{\min\\{2\gamma\sigma,2\\}},\quad\theta=0;\\\ C\tau^{\min\\{\gamma\sigma,2\\}},\quad\theta=\frac{\beta}{2};\end{array}\quad 1\leq n\leq N.\right.$ ###### Proof 7.3. Denote ${\cal R}_{\eta}^{n-\theta}:=\eta(t_{n-\theta})-\eta^{n-\theta}$. We have ${\cal R}_{\eta}^{n-\theta}=0$ when $\theta=0$. By the Taylor expansion, we have $\displaystyle{\cal R}_{f}^{n-\theta}=$ $\displaystyle f_{u}^{\prime}(\eta^{n-1}){\cal R}_{\eta}^{n-\theta}$ $\displaystyle+\left((1-\theta)\nabla_{\tau}\eta^{n}+{\cal R}_{\eta}^{n-\theta}\right)^{2}\int_{0}^{1}f^{\prime\prime}_{u}\left(\eta^{n-1}+s(\eta(t_{n-\theta})-\eta^{n-1}),x,t\right)(1-s)\,\mathrm{d}s.$ Following the proof of [20, Lemma 3.4] and using Lemma 7.1, the desired result follows immediately. 
For ${\bf x}\in\Omega$, let $\xi^{n}({\bf x})$ be a spatially continuous function and denote $\xi^{n}_{h}:=\xi^{n}({\bf x}_{h})$. One may apply the Taylor expansion to get $\displaystyle\Delta_{h}\xi^{n}_{h}=$ $\displaystyle\int_{0}^{1}\left[\partial_{xx}\xi^{n}(x_{i}-sh_{x},y_{j})+\partial_{xx}\xi^{n}(x_{i}+sh_{x},y_{j})\right](1-s)\,\mathrm{d}s$ $\displaystyle+\int_{0}^{1}\left[\partial_{yy}\xi^{n}(x_{i},y_{j}-sh_{y})+\partial_{yy}\xi^{n}(x_{i},y_{j}+sh_{y})\right](1-s)\,\mathrm{d}s,\quad 1\leq n\leq N.$ Then we define a function ${\cal T}_{f}^{n}({\bf x})$ by $({\cal T}_{f})_{h}^{n}={\cal T}_{f}^{n}({\bf x}_{h})$. If the assumptions in (5) and MA are satisfied, then by Lemma 7.2 and the chain rule for composite functions, we can obtain (45) $\sum_{j=1}^{n}P_{n-j}^{(n)}\|\Delta_{h}({\cal T}_{f})^{n-\theta}\|\leq\left\\{\begin{array}[]{ll}C\tau^{\min\\{2\gamma\sigma_{1},2\\}},\quad\theta=0;\\\ C\tau^{\min\\{\gamma\sigma_{1},2\\}},\quad\theta=\frac{\beta}{2};\end{array}\quad 1\leq n\leq N.\right.$ For the spatial error, based on the regularity condition, it is easy to see that (46) $\displaystyle\|{\cal S}^{n}\|\leq C_{u}h^{2},\quad 1\leq n\leq N.$ Then (47) $\displaystyle\sum_{j=1}^{n}P_{n-j}^{(n)}\|({\cal D}_{\tau}^{\beta}{\cal S})^{j}\|\leq\sum_{j=1}^{n}P_{n-j}^{(n)}\sum_{k=1}^{j}A_{j-k}^{(j)}\|\nabla_{\tau}{\cal S}^{k}\|=\sum_{k=1}^{n}\|\nabla_{\tau}{\cal S}^{k}\|\leq C_{u}(1+t_{n}^{\sigma_{1}-1})h^{2}.$ We now consider the temporal truncation errors $({\cal T}_{v1})_{h}^{n-\theta}$, $({\cal T}_{w})_{h}^{n-\theta}$, $({\cal T}_{u})_{h}^{n-\theta}$ and $({\cal T}_{v2})_{h}^{n-\theta}$ in two situations: $\theta=0$ and $\theta=\beta/2$. For a function $g(t)$, define the global error ${\cal R}^{n-\theta}:=({\cal D}_{t}^{\beta}g)(t_{n-\theta})-({\cal D}_{\tau}^{\beta}g)^{n-\theta},\quad 1\leq n\leq N.$ $\bullet$ For L1 approximation ($\theta=0$): We have $({\cal T}_{w})_{h}^{n}=({\cal T}_{v2})_{h}^{n}=0$ in this situation. 
According to [15, Lemma 3.3] and [20, Lemma 3.3], the global consistency error of the L1 approximation can be presented in the following lemma. ###### Lemma 7.4. Assume that $g\in C^{2}((0,T])$ and there exists a constant $C_{g}>0$ such that $|g^{\prime\prime}(t)|\leq C_{g}(1+t^{\sigma-2}),\quad 0<t\leq T,$ where $\sigma\in(0,1)\cup(1,2)$ is a regularity parameter. If the assumption MA holds, it follows that $\displaystyle\sum_{j=1}^{n}P_{n-j}^{(n)}|{\cal R}^{j}|\leq C_{g}\left(\tau_{1}^{\sigma}/\sigma+\frac{1}{1-\beta}\max_{2\leq k\leq n}(t_{k}-t_{1})^{\beta}t_{k-1}^{\sigma-2}\tau_{k}^{2-\beta}\right)\leq C\tau^{\min\\{2-\beta,\gamma\sigma\\}}.$ Define the functions ${\cal T}_{v1}^{n}({\bf x})$ and ${\cal T}_{u}^{n}({\bf x})$ by $({\cal T}_{v1})_{h}^{n}:={\cal T}_{v1}^{n}({\bf x}_{h})$ and $({\cal T}_{u})_{h}^{n}:={\cal T}_{u}^{n}({\bf x}_{h})$ respectively. Using techniques similar to those for (45), together with Lemma 7.4 and the assumptions in (5) and MA, we have (48) $\displaystyle\sum_{j=1}^{n}P_{n-j}^{(n)}\|\Delta_{h}{\cal T}_{v1}^{n}\|\leq C\tau^{\min\\{2-\beta,\gamma\sigma_{2}\\}}\quad\mbox{and}\quad\sum_{j=1}^{n}P_{n-j}^{(n)}\|\Delta_{h}{\cal T}_{u}^{n}\|\leq C\tau^{\min\\{2-\beta,\gamma\sigma_{1}\\}}.$ $\bullet$ For Alikhanov approximation ($\theta=\beta/2$): The global consistency error of the Alikhanov approximation is estimated in the next lemma. ###### Lemma 7.5. ([16, Lemma 3.6]) Assume that $g\in C^{3}((0,T])$ and there exists a constant $C_{g}>0$ such that $|g^{\prime\prime\prime}(t)|\leq C_{g}(1+t^{\sigma-3}),\quad 0<t\leq T,$ where $\sigma\in(0,1)\cup(1,2)$ is a regularity parameter. 
Then $\displaystyle\sum_{j=1}^{n}P_{n-j}^{(n)}|{\cal R}^{j-\theta}|\leq C_{g}\left(\tau_{1}^{\sigma}/\sigma+t_{1}^{\sigma-3}\tau_{2}^{3}+\frac{1}{1-\beta}\max_{2\leq k\leq n}t_{k}^{\beta}t_{k-1}^{\sigma-3}\tau_{k}^{3}/\tau_{k-1}^{\beta}\right).$ By Lemma 7.5, Lemma 7.1, and the assumptions in (5) and MA, it is easy to obtain (49) $\displaystyle\sum_{j=1}^{n}P_{n-j}^{(n)}\|\Delta_{h}({\cal T}_{v1})_{h}^{j-\theta}\|\leq C\left(\tau_{1}^{\sigma_{2}}+\tau_{2}^{3}\tau_{1}^{\sigma_{2}-3}+\max_{2\leq k\leq n}(t_{k}-t_{1})^{\beta}t_{k-1}^{\sigma_{2}-3}\tau_{k}^{3-\beta}\right)\leq C\tau^{\min\\{3-\beta,\gamma\sigma_{2}\\}},$ (50) $\displaystyle\sum_{j=1}^{n}P_{n-j}^{(n)}\|\Delta_{h}({\cal T}_{u})_{h}^{j-\theta}\|\leq C\left(\tau_{1}^{\sigma_{1}}+\tau_{2}^{3}\tau_{1}^{\sigma_{1}-3}+\max_{2\leq k\leq n}(t_{k}-t_{1})^{\beta}t_{k-1}^{\sigma_{1}-3}\tau_{k}^{3-\beta}\right)\leq C\tau^{\min\\{3-\beta,\gamma\sigma_{1}\\}},$ (51) $\displaystyle\sum_{j=1}^{n}P_{n-j}^{(n)}\|\Delta_{h}({\cal T}_{w})_{h}^{j-\theta}\|\leq C\left(\tau_{1}^{\sigma_{1}+\beta}+\max_{2\leq k\leq n}t_{k-1}^{\sigma_{1}-2}\tau_{k}^{2}\right)\leq C\tau^{\min\\{2,\gamma\sigma_{1}\\}},$ (52) $\displaystyle\sum_{j=1}^{n}P_{n-j}^{(n)}\|\Delta_{h}({\cal T}_{v2})_{h}^{j-\theta}\|\leq C\left(\tau_{1}^{\sigma_{2}+\beta}+\max_{2\leq k\leq n}t_{k-1}^{\sigma_{2}-2}\tau_{k}^{2}\right)\leq C\tau^{\min\\{2,\gamma\sigma_{2}\\}}.$

## References

* [1] A. A. Alikhanov, A new difference scheme for the time fractional diffusion equation, J. Comput. Phys., 280 (2015), pp. 424–438, https://doi.org/10.1016/j.jcp.2014.09.031. * [2] H. Chen and M. Stynes, Error analysis of a second-order method on fitted meshes for a time-fractional diffusion problem, J. Sci. Comput., 79 (2019), pp. 624–647, https://doi.org/10.1007/s10915-018-0863-y. * [3] E. Cuesta, M. Kirane, and S. A. Malik, Image structure preserving denoising using generalized fractional time integrals, Signal Processing, 92 (2012), pp. 
553–563, https://doi.org/10.1016/j.sigpro.2011.09.001. * [4] E. Cuesta, C. Lubich, and C. Palencia, Convolution quadrature time discretization of fractional diffusion-wave equations, Math. Comput., 75 (2006), pp. 673–696, https://doi.org/10.1090/S0025-5718-06-01788-1. * [5] E. Cuesta and C. Palencia, A fractional trapezoidal rule for integro-differential equations of fractional order in Banach spaces, Appl. Numer. Math., 45 (2003), pp. 139–159, https://doi.org/10.1016/S0168-9274(02)00186-1. * [6] E. Cuesta and C. Palencia, A numerical method for an integro-differential equation with memory in Banach spaces: qualitative properties, SIAM J. Numer. Anal., 41 (2003), pp. 1232–1241, https://doi.org/10.1137/S0036142902402481. * [7] H. Gomez and T. J. R. Hughes, Provably unconditionally stable, second-order time-accurate, mixed variational methods for phase-field models, J. Comput. Phys., 230 (2011), pp. 5310–5327, https://doi.org/10.1016/j.jcp.2011.03.033. * [8] S. Jiang, J. Zhang, Q. Zhang, and Z. Zhang, Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations, Commun. Comput. Phys., 21 (2017), pp. 650–678, https://doi.org/10.4208/cicp.OA-2016-0136. * [9] B. Jin, R. Lazarov, and Z. Zhou, An analysis of the L1 scheme for the subdiffusion equation with nonsmooth data, IMA J. Numer. Anal., 36 (2016), pp. 197–221, https://doi.org/10.1093/imanum/dru063. * [10] B. Jin, R. Lazarov, and Z. Zhou, Two fully discrete schemes for fractional diffusion and diffusion-wave equations with nonsmooth data, SIAM J. Sci. Comput., 38 (2016), pp. A146–A170, https://doi.org/10.1137/140979563. * [11] B. Jin, B. Li, and Z. Zhou, Discrete maximal regularity of time-stepping schemes for fractional evolution equations, Numer. Math., 138 (2018), pp. 101–131, https://doi.org/10.1007/s00211-017-0904-8. * [12] N. Kopteva, Error analysis of the L1 method on graded and uniform meshes for a fractional-derivative problem in two and three dimensions, Math. 
Comput., 88 (2019), pp. 2135–2155, https://doi.org/10.1090/mcom/3410. * [13] B. Li, T. Wang, and X. Xie, Analysis of the L1 scheme for fractional wave equations with nonsmooth data, arXiv: 1908.09145v2 [math.NA]. * [14] B. Li, T. Wang, and X. Xie, Analysis of a time-stepping discontinuous Galerkin method for fractional diffusion-wave equations with nonsmooth data, J. Sci. Comput., 82 (2020), https://doi.org/10.1007/s10915-019-01118-7. * [15] H. L. Liao, D. Li, and J. Zhang, Sharp error estimate of a nonuniform L1 formula for time-fractional reaction-subdiffusion equations, SIAM J. Numer. Anal., 56 (2018), pp. 1112–1133, https://doi.org/10.1137/17M1131829. * [16] H. L. Liao, W. McLean, and J. Zhang, A second-order scheme with nonuniform time steps for a linear reaction-subdiffusion problem, arXiv:1803.09873v2 [math.NA]. * [17] H. L. Liao, W. McLean, and J. Zhang, A discrete Grönwall inequality with applications to numerical schemes for subdiffusion problems, SIAM J. Numer. Anal., 57 (2019), pp. 218–237, https://doi.org/10.1137/16M1175742. * [18] H. L. Liao and Z. Z. Sun, Maximum norm error bounds of ADI and compact ADI methods for solving parabolic equations, Numer. Meth. Part Differ. Equ., 26 (2010), pp. 37–60, https://doi.org/10.1002/num.20414. * [19] H. L. Liao, T. Tang, and T. Zhou, A second-order and nonuniform time-stepping maximum-principle preserving scheme for time-fractional Allen-Cahn equations, J. Comput. Phys., 414 (2020), https://doi.org/10.1016/j.jcp.2020.109473. * [20] H. L. Liao, Y. Yan, and J. Zhang, Unconditional convergence of a two-level linearized fast algorithm for semilinear subdiffusion equations, J. Sci. Comput., 80 (2019), pp. 1–25, https://doi.org/10.1007/s10915-019-00927-0. * [21] Y. Lin and C. Xu, Finite difference/spectral approximations for the time-fractional diffusion equation, J. Comput. Phys., 225 (2007), pp. 1533–1552, https://doi.org/10.1016/j.jcp.2007.02.001. * [22] H. Luo, B. Li, and X. 
Xie, Convergence analysis of a Petrov-Galerkin method for fractional wave problems with nonsmooth data, J. Sci. Comput., 80 (2019), pp. 957–992, https://doi.org/10.1007/s10915-019-00962-x. * [23] P. Lyu, Y. Liang, and Z. Wang, A fast linearized finite difference method for the nonlinear multi-term time-fractional wave equation, Appl. Numer. Math., 151 (2020), pp. 448–471, https://doi.org/10.1016/j.apnum.2019.11.012. * [24] F. Mainardi, Fractional Calculus and Waves in Linear Viscoelasticity, Imperial College Press, London, 2010. * [25] F. Mainardi and P. Paradisi, Fractional diffusive waves, J. Comput. Acoust., 9 (2001), pp. 1417–1436, https://doi.org/10.1142/S0218396X01000826. * [26] W. McLean and K. Mustapha, A second-order accurate numerical method for a fractional wave equation, Numer. Math., 105 (2007), pp. 481–510, https://doi.org/10.1007/s00211-006-0045-y. * [27] W. McLean and V. Thomée, Time discretization of an evolution equation via Laplace transforms, IMA J. Numer. Anal., 24 (2004), pp. 439–463, https://doi.org/10.1093/imanum/24.3.439. * [28] W. McLean and V. Thomée, Maximum-norm error analysis of a numerical solution via Laplace transformation and quadrature of a fractional-order evolution equation, IMA J. Numer. Anal., 30 (2010), pp. 208–230, https://doi.org/10.1093/imanum/drp004. * [29] K. Mustapha and W. McLean, Superconvergence of a discontinuous Galerkin method for fractional diffusion and wave equations, SIAM J. Numer. Anal., 51 (2013), pp. 491–515, https://doi.org/10.1137/120880719. * [30] K. Mustapha and D. Schötzau, Well-posedness of hp-version discontinuous Galerkin methods for fractional diffusion wave equations, IMA J. Numer. Anal., 34 (2014), pp. 1426–1446, https://doi.org/10.1093/imanum/drt048. * [31] R. R. Nigmatullin, To the theoretical explanation of the “Universal Response”, Phys. Status Solidi B, 123 (1984), pp. 739–745, https://doi.org/10.1002/pssb.2221230241. * [32] K. Oldham and J. 
Spanier, The Fractional Calculus, Academic Press, New York, London, 1974. * [33] I. Podlubny, Fractional Differential Equations, Academic Press, San Diego, London, 1999. * [34] K. Sakamoto and M. Yamamoto, Initial value/boundary value problems for fractional diffusion-wave equations and applications to some inverse problems, J. Math. Anal. Appl., 382 (2011), pp. 426–447, https://doi.org/10.1016/j.jmaa.2011.04.058. * [35] M. Stynes, E. O’Riordan, and J. L. Gracia, Error analysis of a finite difference method on graded meshes for a time-fractional diffusion equation, SIAM J. Numer. Anal., 55 (2017), pp. 1057–1079, https://doi.org/10.1137/16M1082329. * [36] Z. Z. Sun and X. N. Wu, A fully discrete difference scheme for a diffusion-wave system, Appl. Numer. Math., 56 (2006), pp. 193–209, https://doi.org/10.1016/j.apnum.2005.03.003. * [37] T. Tang, H. Yu, and T. Zhou, On energy dissipation theory and numerical stability for time-fractional phase-field equations, SIAM J. Sci. Comput., 41 (2019), pp. A3757–A3778, https://doi.org/10.1137/18M1203560. * [38] Y. Yan, M. Khan, and N. J. Ford, An analysis of the modified L1 scheme for time-fractional partial differential equations with nonsmooth data, SIAM J. Numer. Anal., 56 (2018), pp. 210–227, https://doi.org/10.1137/16M1094257. * [39] W. Zhang, J. Li, and Y. Yang, A fractional diffusion-wave equation with non-local regularization for image denoising, Signal Processing, 103 (2014), pp. 6–15, https://doi.org/10.1016/j.sigpro.2013.10.028. * [40] V. A. Zorich, Mathematical Analysis I, Springer, Berlin, 2004.
On the Automorphism Group of Polar Codes

Marvin Geiselhart, Ahmed Elkelesh, Moustafa Ebada, Sebastian Cammerer and Stephan ten Brink
Institute of Telecommunications, Pfaffenwaldring 47, University of Stuttgart, 70569 Stuttgart, Germany

The authors would like to thank Florian Euchner for his help with proving Theorem 1 by proposing the crazy paternoster algorithm. An extended version of this paper is provided online (arXiv:2101.09679).

The automorphism group of a code is the set of permutations of the codeword symbols that map the whole code onto itself. 
For polar codes, only a part of the automorphism group was known, namely the LTA, which is solely based upon the partial order of the code's synthetic channels. Depending on the design, however, polar codes can have a richer set of automorphisms. In this paper, we extend the LTA to a larger subgroup of the GA, namely the BLTA, and show that it is contained in the automorphism group of polar codes. Furthermore, we provide a low-complexity algorithm for finding this group for a given information/frozen set and determining its size. Most importantly, we apply these findings in automorphism-based decoding of polar codes and report an error-rate performance comparable to that of SCL decoding with significantly lower complexity. § INTRODUCTION Polar codes are the first channel codes that are theoretically proven to asymptotically achieve the channel capacity under SC decoding [1]. In the short-length regime, CRC-aided polar codes under SCL decoding [2] achieve an outstanding performance and were thus selected as the channel code for the uplink and downlink control channels of the 5G standard [3]. Due to the highly symmetric structure of the polar code factor graph, decoders using the concept of factor graph permutations are proposed in [4], [5] and [6]. A different approach is to use the symmetries in the code itself, i.e., its automorphism group. To this end, polar codes are viewed as decreasing monomial codes [7]. In [7], it is shown that the automorphism group of decreasing monomial codes (and, thus, polar codes) is at least the LTA, solely based on a partial order of the synthetic channels. This proved to be sufficient for minimum-weight codeword enumeration. However, in general, we expect decreasing monomial codes to have more automorphisms. This is easily verified by the fact that RM codes can be seen as a special case of decreasing monomial codes with an automorphism group known to be the GA [8], which is much larger than the LTA.
Automorphism-based decoding has been successfully applied to RM codes [9, 10] and BCH codes [11]. However, it was not yet possible to use the automorphism group in SC-based decoding of polar codes. The reason for this is that LTA-based automorphisms cannot result in any gains under SC-based (ensemble) decoding, as proven in <cit.>. Therefore, it is crucial to find automorphisms outside the LTA to enable efficient parallel ensemble decoding of polar codes. Further potential applications include analysis of some post-quantum cryptography schemes [12]. The main contribution of this work is the introduction of a larger automorphism group of decreasing monomial codes, namely the BLTA. We provide efficient algorithms for finding this group and sampling from it. The concept applies to polar codes, RM codes and the recently proposed PSMC [13]. § PRELIMINARIES §.§ Polar Codes Polar codes are constructed based on the concept of channel polarization [1]. $ N $ identical DMCs are converted, via the channel transform, into $ N $ synthetic channels that show a polarization behavior. This means that a fraction of the bit-channels become very reliable (i.e., noiseless), while the rest of the synthetic bit-channels become totally noisy. Information is transmitted only on the $ K $ most reliable channels (information channels), while the poor channels are set to “0” (frozen channels). This is equivalent to selecting $ K $ rows from the Hadamard matrix $ \mathbf{G}_N = \left[ \begin{smallmatrix}1 & 0 \\ 1 & 1 \end{smallmatrix}\right]^{\otimes n} $ with $ N = 2^n $ to form the generator matrix $\mathbf{G}$ of the code. Alternatively, polar codes can be viewed as monomial codes [7]. In this perspective, each synthetic channel corresponds to a monomial in $ n $ binary variables $ x_i $. The set of all monomials in $ n $ variables is defined as $ \mathcal{M}_n $, and a polar code is defined by a specific subset $ I \subseteq \mathcal{M}_n $, called the information set of the polar code.
Every monomial can be written as \begin{equation}\label{eq:monomial} f = \prod_{i \in \ind(f)} x_i \end{equation} where $ \ind(f) $ is an ordered subset of the variable indices $ \Omega = [0, n-1] \triangleq \{0,1, \dots,n-1\} $ and directly corresponds to the $ \ell $-th row of the generator matrix as \begin{equation}\label{eq:row} \ell = \sum_{i \in \Omega \setminus \ind(f)} 2^{i}. \end{equation} In other words, the monomial $ f $ corresponds to the row whose binary representation has zeros exactly in the bit-positions of the variables contained in $ f $. A message is a polynomial \begin{equation}\label{eq:polynomial} u(x_0, \dots, x_{n-1}) = \sum_{f \in I} u_f \cdot f(x_0, \dots, x_{n-1}) \end{equation} with $ K $ coefficients $ u_f \in \FF_2 $. The respective codeword is given by the evaluation of $ u(\xv) $ in all $ N $ points $ \xv \in \FF_2^n $. As a convention, we assume the $ j $-th codeword symbol is obtained from the point $ \xv $ equal to the binary expansion of $ j $. §.§ Partial Order It was shown in [7] and [14] that the synthetic channels exhibit a partial order “$ \preccurlyeq $” with respect to their reliability, i.e., $ f \preccurlyeq g $ means that the synthetic channel corresponding to monomial $ f $ is more reliable than the one corresponding to $ g $. For monomials of equal degree this partial order is defined as \begin{equation}\label{def:partial_order1} f \preccurlyeq g \Leftrightarrow \ind(f)_j \leq \ind(g)_j \quad \forall j = 0, \dots, \deg(f)-1 \end{equation} and for monomials of different degree \begin{equation}\label{def:partial_order2} f \preccurlyeq g \Leftrightarrow \exists g^* | g \text{ with } \operatorname{deg}(g^*) = \operatorname{deg}(f) \text{ and } f \preccurlyeq g^*. \end{equation} §.§ Decreasing Monomial Codes A decreasing monomial code is a polar code whose monomial selection obeys the partial order [7].
More precisely, if a synthetic channel is selected as an information channel, all stronger channels w.r.t. “$\preccurlyeq $” are also information channels. Mathematically, this can be written as \begin{equation}\label{def:dec_mon_code} \forall g \in I, \forall f \in \mathcal{M}_n \text{ with } f \preccurlyeq g \Rightarrow f\in I. \end{equation} Almost all practical polar code constructions result in decreasing monomial codes. A decreasing monomial code can be fully specified by a minimal information set $ I_\mathrm{min} $ containing only a small number of monomials called generators. All other monomials are implied by the partial order: \begin{equation} I = \bigcup_{g \in I_\mathrm{min}} \left\{f \in \mathcal{M}_n, f\preccurlyeq g\right\}. \end{equation} Moreover, the RM code of order $ r $ and length $N=2^n$ (i.e., the RM$\left(r,n\right)$-code) is a special case of a decreasing monomial code with $ I_\mathrm{min} = \left\{x_{n-r} \cdots x_{n-1}\right\}$. In this paper, we write $ I_\mathrm{min} $ in terms of numerical row indices, according to Eq. (<ref>). §.§ Automorphisms of Decreasing Monomial Codes The automorphism group $ \operatorname{Aut}(\mathcal{C}) $ of a code $\mathcal{C}$ is the group of codeword symbol permutations that leave the code unchanged, i.e., map each codeword onto a codeword that is not necessarily different. It was shown in [7] that the automorphism group of a decreasing monomial code contains at least $ \operatorname{LTA}(2,n) $, i.e., affine transformations of the variables $ x_i $ of the form $ \xv' = \Am \xv + \bv $, with $ \Am\in \FF_2^{n\times n}$ being a lower triangular matrix with a unit diagonal and arbitrary $ \bv\in \FF_2^n $.
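To make the monomial formalism concrete, the following sketch (our own Python helpers, not part of the paper; monomials are represented as sorted tuples of variable indices) computes the row index of Eq. (<ref>), evaluates the partial order “$\preccurlyeq$”, and expands a minimal information set $ I_\mathrm{min} $ into the full decreasing monomial code:

```python
from itertools import chain, combinations

def row_index(ind_f, n):
    # Eq. (row): l = sum of 2^i over i in Omega \ ind(f)
    return sum(2 ** i for i in range(n) if i not in ind_f)

def precedes(f, g):
    # partial order f <= g on monomials given as sorted tuples of indices
    if len(f) == len(g):
        return all(fi <= gi for fi, gi in zip(f, g))
    if len(f) > len(g):
        return False
    # different degrees: f <= g iff g has a divisor g* of deg(f) with f <= g*
    return any(precedes(f, g_star) for g_star in combinations(g, len(f)))

def decreasing_code(I_min, n):
    # all monomials weaker than some generator w.r.t. the partial order
    monomials = chain.from_iterable(combinations(range(n), d) for d in range(n + 1))
    return {m for m in monomials if any(precedes(m, g) for g in I_min)}
```

For example, with $ n=3 $ and the single generator $ x_1 x_2 $ (the RM(2,3) code), `decreasing_code({(1, 2)}, 3)` yields the seven monomials of degree at most two, and `row_index((), 3)` returns 7, the all-ones row of the generator matrix.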
§ STABILIZERS OF THE MONOMIAL SET It was shown in [15] that the stage-shuffling of the polar factor graph corresponds to a bit-index permutation of both the codeword vector $ \cv $ and the message vector $ \uv $ (including the frozen bits). When viewing such permutations from a monomial code perspective, they exactly correspond to permuting the variables $ x_i $ of the monomials from $ \mathcal{M}_n $. Depending on the polar code construction (i.e., information/frozen set), there may exist permutations that keep the information set $ I $ unchanged, i.e., they stabilize it. Such a permutation is directly related to the automorphism of the code in which $ \Am $ in $ \xv' = \Am \xv + \bv $ is the corresponding permutation matrix. Definition (Stabilizer): Let $ S(\Omega) $ be the set of all permutations of $ \Omega $. Then a permutation $ \pi \in S(\Omega)$ stabilizes a monomial set $ I $ if and only if \begin{equation}\label{eq:stab} \forall f \in I: \; f' = \pi(f) \triangleq \prod_{i\in \ind(f)}{x_{\pi(i)}} \in I. \end{equation} In other words, $ I $ remains unchanged when permuting the variable indices in the monomials according to $ \pi $. Furthermore, let $ \Stab(I) $ denote the set of all permutations with this property. Note that $ \Stab(I) $ is a subgroup of $ S(\Omega) $. We call the stabilizer trivial if it only contains the identity permutation. In the following, we seek to find $ \Stab(I) $ for a given $ I $ and derive some useful properties. Definition (Minimum and Maximum of a Permutation): Let $ \pi \in S(\Omega) $ be some permutation. The minimum $ \min(\pi) $ and maximum $ \max(\pi) $ are defined by the smallest and largest element not fixed by $ \pi $, i.e., \begin{align}\label{eq:minmax} \min(\pi) &\triangleq \min \left\{ i \; \middle| \; i \in \Omega, \pi(i)\ne i \right\},\\ \max(\pi) &\triangleq \max \left\{ i \; \middle| \; i \in \Omega, \pi(i)\ne i \right\}.
\end{align} Definition (Interval Disjoint and Interlocked Permutations): Two permutations $ \pi_1 $ and $ \pi_2 $ are interval disjoint if the intervals $ \Omega_{\pi_1} = [\min(\pi_1), \max(\pi_1)] $ and $ \Omega_{\pi_2} = [\min(\pi_2), \max(\pi_2)] $ are disjoint. Note that there may be elements of $ \Omega_{\pi_i} $ which are not affected by $ \pi_i $. Permutations are said to be interlocked if they are not interval disjoint and do not share elements. Let $ C(\pi) $ be the cycle decomposition of $ \pi $. By merging all interlocked cycles $ \sigma \in C(\pi) $ into the same sub-permutations $ \rho_i $, we obtain the interval disjoint decomposition $ T(\pi) = \left\{\rho_0, \dots, \rho_{d-1} \right\} $ as the unique set of pairwise interval disjoint permutations $ \rho_i $ such that $ \pi = \rho_0 \circ \cdots \circ \rho_{d-1} $. Theorem 1 (Stabilizers): Let $ I $ be the monomial set of a decreasing monomial code with a non-trivial stabilizer $ \Stab(I) $. Then the following statement holds: If a non-trivial permutation $\pi$ stabilizes $ I $, then all permutations of the disjoint intervals of $ \pi $ stabilize $I$ as well, i.e., \begin{equation}\label{eq:stabilizer_thm} \pi \in \Stab(I) \Rightarrow \left\langle S\left(\left[\min(\rho),\max(\rho)\right]\right) \right\rangle_{\rho \in T(\pi)} \subseteq \Stab(I), \end{equation} where $ \langle \cdot \rangle $ denotes the join of subgroups. Proof: The proof is given in Appendix <ref>. Theorem 1 has a useful corollary revealing the structure of $ \Stab(I) $: $ \Stab(I) $ can be written as the join of permutation groups $ S(\Omega_k) $ of partitions of $ \Omega $, i.e., \begin{align}\label{eq:stab_corollary} \Stab(I) &= \left\langle S\left(\Omega_0\right), \dots, S\left(\Omega_{m-1}\right)\right\rangle \nonumber\\ \text{ with } \bigcup_{k=0}^{m-1}\Omega_k &= \Omega \; \text{ and } \; \Omega_k \cap \Omega_l = \emptyset \text{ for } k\ne l.
\end{align} In other words, every permutation $ \pi \in \Stab(I) $ can be written as a product of (potentially trivial) permutations of the intervals $ \Omega_k $ and vice versa. Note that $ \Omega_k $ may contain only a single element when $ S(\Omega_k) $ does not contribute to any non-trivial permutation. Proof: Assume the sub-intervals are not disjoint, i.e., there exist two sub-intervals $ \Omega_k $ and $ \Omega_l $ with $ S(\Omega_k) \subseteq \Stab(I) $ and $ S(\Omega_l) \subseteq \Stab(I) $ but $ \Omega_k \cap \Omega_l \ne \emptyset $ and neither $ \Omega_k \subseteq \Omega_l $ nor $ \Omega_l \subseteq \Omega_k $. Then one can pick two permutations (e.g., extremal transpositions) $ \pi_1 \in S(\Omega_k) $ and $ \pi_2 \in S(\Omega_l) $ which are not interval disjoint and $ \pi = \pi_1 \circ \pi_2 $ is either a single cycle or the product of interlocked cycles. In both cases, $ \pi $ stabilizes $ I $ and, thus, $ S(\Omega_k \cup \Omega_l) \subseteq \Stab(I) $. Therefore, every permutation in $ \Stab(I) $ either falls into an existing sub-interval or expands or merges sub-intervals, keeping the partition property. $ \qed $ The partition (and therefore $ \Stab(I) $) is fully described by the list of interval sizes $ \sv = [s_k] $ of the $ m $ sub-intervals $ \Omega_k $, i.e., \begin{equation}\label{eq:sk} s_k = |\Omega_k| = \max(\Omega_k) - \min(\Omega_k) + 1. \end{equation} The corollary gives us an algorithm for finding the sub-intervals $ \Omega_k $ for an arbitrary decreasing monomial code with information set $ I $. We know that all permutations in $ S(\Omega_k) $ are contained in $ \Stab(I) $, as we can pick trivial permutations for the other sub-intervals. In particular, the transposition $ \pi = (\min(\Omega_k),\max(\Omega_k)) $ also stabilizes $ I $. Therefore, we can find the borders of the sub-intervals by systematically searching for pairs $ i_0, i_1 $ with maximal distance.
Algorithm <ref> provides a pseudo-code for this procedure. The algorithm has a worst case runtime of $ \mathcal{O}(K \cdot n^2) $, with the check $ \pi(I)=I $ requiring $ K $ comparisons.

Algorithm 1 (Finding $ \Stab(I) $ in terms of the partition of $ \Omega $ into sub-intervals of size $ s_k $):
Input: information set $ I $ of a decreasing monomial code in $ n $ variables
Output: list of sub-interval sizes $ \sv $
  $ \sv \leftarrow [\,],\quad i_0 \leftarrow 0 $
  while $ i_0 < n $:
    $ i_1 \leftarrow n-1 $
    while $ i_1 \ge i_0 $:
      $ \pi \leftarrow (i_0,i_1) $
      if $ \pi(I) = I $: append $ i_1 - i_0 + 1 $ to $ \sv $; $ i_0 \leftarrow i_1 + 1 $; exit inner loop
      else: $ i_1 \leftarrow i_1 - 1 $

We can represent $ \Stab(I) $ as a set of $ n \times n $ permutation matrices $ P_\sv(n) $, where $ \sv = [s_k] $ defines a block diagonal structure with blocks of sizes $ s_k \times s_k $. Except for the block diagonal elements, all matrix elements are zero. For a non-trivial stabilizer, we hereby find automorphisms outside LTA, as no permutation matrix besides the identity is lower-triangular. § THE AUTOMORPHISM GROUP OF POLAR CODES In the following, we combine both LTA and the newly found stabilizer group into a larger group, namely the block lower-triangular affine group (BLTA). Definition (Block Indices): We denote a partition of the interval $ \Omega = [0,n-1] $ by a sequence of $ m $ positive integers $ s_k > 0 $ for the sizes of the sub-intervals. The interval start $ \gamma_k $ is the first element of the $ k$-th sub-interval and is defined as the cumulative sum \begin{equation}\label{gamma} \gamma_k = \sum_{i=0}^{k-1} s_i. \end{equation} The index function $ k(i) $ returns the index of the sub-interval that contains $ i $ and is defined as $ k(i) = \max\left\{k: \; i \ge \gamma_k \right\} $. Structure of a block lower-triangular matrix with block sizes $ s_k $ and block starts $ \gamma_k $.
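Algorithm <ref> translates directly into Python; the sketch below (our own data representation, not from the paper: $ I $ is a set of monomials, each given as a tuple of variable indices) returns the block sizes $ \sv $:

```python
def transpose_monomial(mono, i0, i1):
    # apply the transposition (i0, i1) to the variable indices of a monomial
    swap = {i0: i1, i1: i0}
    return frozenset(swap.get(i, i) for i in mono)

def stabilizer_block_sizes(I, n):
    # Algorithm 1: block sizes s_k of the partition of Omega describing Stab(I);
    # worst-case runtime O(K * n^2), one set comparison per transposition check
    I = {frozenset(f) for f in I}
    s, i0 = [], 0
    while i0 < n:
        i1 = n - 1
        while i1 >= i0:
            if {transpose_monomial(f, i0, i1) for f in I} == I:
                s.append(i1 - i0 + 1)  # transposition (i0, i1) stabilizes I
                i0 = i1 + 1
                break
            i1 -= 1
    return s
```

For the RM(1,3) information set $ \{1, x_0, x_1, x_2\} $ every transposition stabilizes $ I $, so the routine returns $ [3] $; for $ I = \{1, x_0\} $ with $ n=3 $ it returns $ [1, 2] $.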
Definition (Block Lower-Triangular Matrix): An $ n \times n $ matrix $ \mathbf{A} $ over an arbitrary field is block lower-triangular with block sizes $ \sv = [s_k], 0 \le k \le m-1 $, if all elements to the right of the block diagonal are zero, i.e., $ a_{i,j} = 0 \; \forall j \ge \gamma_{k(i)}+ s_{k(i)} $. The blocks of the matrix are denoted by $ \Dm_{k,l} $. Fig. <ref> shows the general structure of a block lower-triangular matrix. As square block matrices naturally extend conventional matrices, we have the following properties: * The product of two block lower-triangular matrices is also a block lower-triangular matrix with the same block structure. * A block lower-triangular matrix is non-singular if and only if all blocks on the main diagonal $ \Dm_{k,k} $ are non-singular. * The inverse of a block lower-triangular matrix is also a block lower-triangular matrix with the same block structure as the original matrix. As a consequence, non-singular block lower-triangular matrices form a group under matrix multiplication. Note that associativity is inherited from matrix multiplication and the identity matrix $ \mathbf{I} $ is always block lower-triangular. The size of this group can be easily computed in terms of $ \sv $. For this, observe that in row $ i $, there are $ \gamma_{k(i)}+s_{k(i)} $ elements that can be 0 or 1 each. However, one has to subtract the number of cases where the row is a linear combination of the $ i $ previous rows. Therefore, the number of invertible block lower-triangular matrices is \begin{align} N_\mathrm{IBLT}(\sv) &= \prod_{i=0}^{n-1}\left(2^{\gamma_{k(i)}+s_{k(i)}} - 2^i\right) \label{eq:num_whole_perspective}\\ &= \prod_{k=0}^{m-1}\prod_{i'=0}^{s_k-1}\left(2^{\gamma_k+s_k} - 2^{\gamma_k+i'}\right) \nonumber\\ &= \prod_{k=0}^{m-1}\left( 2^{\gamma_k \cdot s_k} \prod_{i'=0}^{s_k-1}\left(2^{s_k} - 2^{i'}\right) \right).\label{eq:num_block_perspective} \end{align} While Eq. (<ref>) expresses the number from a whole matrix perspective, Eq.
(<ref>) views the same count from a block matrix perspective. In particular, the inner product gives the number of non-singular diagonal blocks $ \Dm_{k,k} $, while $ 2^{\gamma_k \cdot s_k} $ is the number of arbitrary rectangular matrices to the left of the block diagonal $ \Dm_{k,l} $ with $ l<k $, for each block row $ k $. Definition (Block Lower-Triangular Affine Group, BLTA): The block lower-triangular affine group $ \operatorname{BLTA}(\sv, n) $ is the set of affine transformations $ \xv' = \Am \xv + \bv $ over $ \FF_2^n $ with $ \Am \in \FF_2^{n\times n}$ non-singular block lower-triangular with block structure $ \sv $ and an arbitrary $ \bv \in \FF_2^n $. From the discussion of block lower-triangular matrices above, it is easy to see that BLTA is indeed a group, in particular a subgroup of $ \operatorname{GA}(2,n) $. Moreover, it can be seen that $ \operatorname{LTA}(2,n) $ and $ \operatorname{GA}(2,n) $ are themselves special cases of $ \operatorname{BLTA}(\sv,n) $, with $ \sv = [1,\dots,1] $ and $ \sv = [n] $, respectively. Lemma 1: The join of the group of block-permutation transformations $ P_\sv(n) $ and the group of lower-triangular affine transformations $ \operatorname{LTA}(2,n) $ is exactly $ \operatorname{BLTA}(\sv,n) $, i.e., \begin{equation}\label{eq:lemma_join} \operatorname{BLTA}(\sv,n) = \langle P_\sv(n), \operatorname{LTA}(2,n) \rangle. \end{equation} In other words, any composition of transformations from $ \operatorname{LTA}(2,n) $ and $ P_\sv(n) $ is a transformation from $ \operatorname{BLTA}(\sv,n) $ and vice versa. Proof: “$ \Rightarrow $”: Obviously, $ P_\sv(n) \subseteq \operatorname{BLTA}(\sv,n) $, as permutation matrices are non-singular and the block structure is given. Similarly, $ \operatorname{LTA}(2,n) \subseteq \operatorname{BLTA}(\sv,n) $, where again we have a special case of affine transformations.
Also, as $ \operatorname{BLTA}(\sv,n) $ is closed, a composition of any transformations will not generate any elements outside $ \operatorname{BLTA}(\sv,n) $. “$ \Leftarrow $”: We can show this by observing that any block lower-triangular matrix $ \Am $ may be decomposed as $ {\Am = \Pm_1\cdot \Lm_1 \cdot \Pm_2 \cdot \Lm_2 \cdot \Pm_3} $, with $ \Pm_i \in P_\sv(n) $ and $ \Lm_i \in \operatorname{LTA}(2,n) $. For this, consider the LUP decomposition of $ \Am $, i.e., $ \Pm \Am = \Lm \Um $ [16]. The block lower-triangular structure of $ \Am $ ensures that also $ \Um $ and $ \Pm $ are block lower-triangular. One can now transform $ \Um $ into a conventional lower-triangular matrix by reversing the order of the rows and columns within each block. This can be written as $ \Lm_2 = \Pm_{\mathrm{BR}}(\sv) \cdot \Um \cdot \Pm_{\mathrm{BR}}(\sv) $, with $ \Pm_{\mathrm{BR}}(\sv) = [p_{i,j}] $ and \begin{equation}\label{eq:pbr} p_{i,j} = \begin{cases} 1 & \text{for } j = 2\gamma_{k(i)}+s_{k(i)} - 1 - i\\ 0 & \text{else} \end{cases}. \end{equation} Finally, as $ \Pm_{\mathrm{BR}}(\sv) = \Pm_{\mathrm{BR}}^{-1}(\sv) $, we have $ \Pm_1 = \Pm^{-1} $, $ \Pm_2 = \Pm_3 = \Pm_{\mathrm{BR}}(\sv) $ and $ \Lm_1 = \Lm $. The additive term $ \bv $ may be included (i.e., also properly permuted) in any of the LTA transformations. $ \qed $ Theorem 2 (Automorphisms of Polar Codes): Let $ \mathcal{C} $ be a decreasing monomial code in $ n $ variables with information set $ I $. Then \begin{equation}\label{eq:blta_is_aut} \operatorname{BLTA}(\sv,n) \subseteq \operatorname{Aut}(\mathcal{C}) \end{equation} with $ \sv $ being the block structure of $ \Stab(I) $. Proof: The proof directly follows from Lemma 1, as both LTA and $ \Stab(I) $ correspond to automorphisms of the code. $ \qed $ We furthermore conjecture that Eq. (<ref>) holds with equality when considering only affine automorphisms. However, we were not yet able to find a rigorous proof. 
To prove it, one would need to show that for every nonzero element $ a_{i,j} $ with $ i<j $ in an affine transformation $ (\Am, \bv) \in \operatorname{Aut}(\mathcal{C}) $, the variable permutation $ \pi = (i,j) $ must be contained in $ \Stab(I) $.[This conjecture has been proven in [17] while this paper was under review.] §.§ Number of Automorphisms The number of automorphisms is (at least) the size of $ \operatorname{BLTA}(\sv,n) $ for a code with block structure $ \sv $. Clearly, this is the number of non-singular block lower-triangular matrices times the number of affine translations $ \bv $. Using Eq. (<ref>), we have \begin{equation} |\operatorname{BLTA}(\sv,n)| = N_\mathrm{IBLT}(\sv)\cdot 2^{n} = 2^n \cdot \prod_{k=0}^{m-1}\left( 2^{\gamma_k \cdot s_k} \prod_{i'=0}^{s_k-1}\left(2^{s_k} - 2^{i'}\right) \right). \end{equation} Note that this equates to the sizes of $ \operatorname{LTA}(2,n) $ and $ \operatorname{GA}(2,n) $ for the special cases $ \sv = [1,\dots,1] $ and $ \sv = [n] $, respectively. Number of automorphisms of $ (128,64) $ polar codes with Bhattacharyya-based construction versus BEC design erasure probability $ \epsilon $. Maximum, average and minimum number of automorphisms of all $ (128,64) $ decreasing monomial codes versus their number of generators $ |I_\mathrm{min}| $. Fig. <ref> shows the sizes of the automorphism groups for polar codes with $ N=128 $ and $ K=64 $ designed according to the Bhattacharyya parameter of the synthetic channels. This construction assumes a BEC with erasure probability $ \epsilon $. It can be seen that for low erasure probability, this construction generates the RM(3,7)-code. The larger the values of $ \epsilon $, the fewer the automorphisms featured by the code. In Fig. <ref>, we evaluate the influence of the number of generators of a code $ |I_\mathrm{min}| $ on the size of the automorphism group, also for the case of $ N=128 $ and $ K=64 $. 
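The automorphism-group sizes shown in these figures follow directly from the formula above; a small helper (our own, for illustration) evaluates $ |\operatorname{BLTA}(\sv,n)| $:

```python
def num_blta(s):
    # |BLTA(s, n)| = 2^n * prod_k ( 2^{gamma_k * s_k} * prod_{i'} (2^{s_k} - 2^{i'}) )
    n = sum(s)
    count = 2 ** n  # affine translations b
    gamma = 0       # block start gamma_k (cumulative sum of previous block sizes)
    for sk in s:
        count *= 2 ** (gamma * sk)          # free entries left of the diagonal block
        for i in range(sk):
            count *= 2 ** sk - 2 ** i       # non-singular s_k x s_k diagonal block
        gamma += sk
    return count
```

As sanity checks, $ \sv=[1,1] $ gives $ |\operatorname{LTA}(2,2)|=8 $, $ \sv=[2] $ gives $ |\operatorname{GA}(2,2)|=24 $, and $ \sv=[5,3] $ with $ n=8 $ gives approximately $ 1.41\cdot 10^{16} $ (the same value as $ \sv=[3,5] $).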
Since there usually exist many codes with the same number of generators, we plot the minimum, average and maximum automorphism group sizes for each value of $ |I_\mathrm{min}| $. To obtain these numbers, we enumerated all 1007 $ (128,64) $ decreasing monomial codes using a tree search. It can be seen that a smaller size of $ I_\mathrm{min} $ generally results in a larger number of automorphisms. We find the RM code on the very left of the plot, while typical polar codes lie more towards the right edge. It is worth mentioning that, from a code design perspective, several code constructions can be viewed as lying between polar and RM codes (e.g., [13], [18], [19] and [20]). §.§ Sampling Automorphisms For some practical applications such as automorphism ensemble decoding [10], it is required to sample from the automorphism group, i.e., to pick a permutation from $ \operatorname{BLTA}(\sv,n) $ at random. In general, it is difficult to ensure that a random matrix is invertible. If the fraction of non-singular matrices out of all matrices is sufficiently large, one can generate random matrices and test for invertibility.
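Such generate-and-test sampling can be sketched as follows (our own implementation, not from the paper; rows are stored as integer bitmasks and invertibility is checked by Gaussian elimination over GF(2)):

```python
import random

def rank_gf2(rows, n):
    # Gaussian elimination over GF(2); each row is an n-bit integer bitmask
    rows, rank = list(rows), 0
    for col in range(n):
        pivot = next((r for r in range(rank, len(rows)) if rows[r] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r] >> col & 1:
                rows[r] ^= rows[rank]  # clear this column in all other rows
        rank += 1
    return rank

def sample_gl(n, rng=random):
    # rejection sampling: draw uniform binary matrices until one is non-singular
    while True:
        rows = [rng.getrandbits(n) for _ in range(n)]
        if rank_gf2(rows, n) == n:
            return rows
```

Restricting the draws to the diagonal blocks $ \Dm_{k,k} $ and filling the area below the block diagonal uniformly at random yields the block-structured sampler discussed next.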
For binary matrices (GL for general linear), this probability is lower bounded [21] as \begin{align} p_{\mathrm{succ,GL}} &= \frac{\prod_{i=0}^{n-1}\left(2^n-2^i\right)}{2^{(n^2)}} = \prod_{i'=1}^{n}\left(1-2^{-i'}\right) \nonumber \\ &\ge \lim_{n\to\infty} \prod_{i'=1}^{n}\left(1-2^{-i'}\right) = 0.28878\dots \label{eq:psucc_gl} \end{align} However, the same expression for block lower-triangular (BLT) matrices, i.e., \begin{align} p_{\mathrm{succ,BLT}} & = \frac{\prod_{i=0}^{n-1}\left(2^{\gamma_{k(i)}+s_{k(i)}} - 2^i\right) }{\prod_{i=0}^{n-1}2^{\gamma_{k(i)}+s_{k(i)}} } = \prod_{i=0}^{n-1}\left(1-2^{i-\gamma_{k(i)}-s_{k(i)}}\right) \nonumber \\ &\ge \prod_{i=0}^{n-1}\left(1-2^{-1}\right) = 2^{-n} \xrightarrow{n \to \infty} 0, \label{eq:psucc_blt} \end{align} cannot be bounded away from zero, since the last line holds with equality for the case $ \sv = [1,\dots,1] $. We therefore propose a different method, based on the fact that only the blocks on the diagonal must be non-singular: * For $k = 0,1,\cdots, m-1$, sample the square blocks $ \Dm_{k,k} $ on the main diagonal from $ \operatorname{GL}(2,s_k) $, i.e., generate random $ s_k \times s_k $ binary matrices until a non-singular one is found, with success probability $ p_{\mathrm{succ},k} = \prod_{i=1}^{s_k}\left(1-2^{-i}\right) $. * Select all elements below the block diagonal (i.e., $ a_{i,j} $ with $ j < \gamma_{k(i)} $, or blocks $ \Dm_{k,l} $ with $ l<k $) randomly uniformly from $ \{0,1\} $. This method has the advantage that each block on the diagonal can be independently sampled, resulting in total in the same lower bound, Eq. (<ref>), which is fulfilled with equality for the worst case of $ \sv = [n] $. § POLAR CODES UNDER AUTOMORPHISM SC DECODING Comparison of $\left(N=256,K=128\right)$ polar codes under SC, Aut-SC and SCL decoding; BI-AWGN channel. Appendix <ref> gives BLER results for (128,64) polar codes.
Design | $\sv$ | $|\operatorname{Aut}(\mathcal{C})|$ | $d_\mathrm{min}$ | $A_{d_\mathrm{min}}$
Bhat. @1 dB | $[2, 1, 1, 1, 1, 1, 1]$ | $2.06\cdot10^{11}$ | 8 | 96
$I_\mathrm{min}=\{31,99\}$ | $[5, 3]$ | $1.41\cdot10^{16}$ | 16 | 69936
$I_\mathrm{min}=\{31,57\}$ | $[3, 5]$ | $1.41\cdot10^{16}$ | 16 | 69936
Properties of the compared (256,128) polar codes ($ d_\mathrm{min} $: minimum distance of the code; $ A_{d_\mathrm{min}} $: number of minimum-weight codewords).
As an application, we now evaluate polar codes under automorphism SC (Aut-SC) decoding. As proposed in [10], we use $ M=8 $ parallel independent SC decoders, each decoding a permuted version of the received sequence $ \yv $. The codeword estimates of each SC decoder are un-permuted and the ML-in-the-list method is applied to select the final codeword estimate. The permutations are conducted by automorphisms randomly sampled from the BLTA group of the particular code, found using Algorithm <ref>. Note that this decoder is similar to SCL; however, no sorting of the path metrics is required, as the constituent decoders are independent. We assume an AWGN channel with BPSK modulation. Fig. <ref> shows the BLER performance of (256,128) polar codes under SC-based decoding. In particular, we compare plain SC decoding [1] with SCL with list size 8 (SCL-8) decoding [2] and Aut-8-SC decoding [10]. First, we see that while being the best code under SC decoding, the Bhattacharyya construction at design SNR of 1 dB ($ I_\mathrm{min}=\{59, 79, 105, 149, 163, 224\} $) does not show any gains under Aut-SC decoding, as expected given its few automorphisms outside LTA. Next, in order to have a large automorphism group, we designed codes by selecting two generators $ I_\mathrm{min}=\{31,57\} $ and $ I_\mathrm{min}=\{31,99\} $, under the constraint of $ K=128 $. Both codes can be viewed as examples of PSMC [13]. While the SC performance degrades, now a significant performance gain is achieved by both Aut-SC and SCL.
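A minimal sketch of this ensemble decoder (our own code, not the authors' implementation; `sc_decode` is a black-box constituent decoder, permutations are given as index lists, and the un-permutation convention is one plausible choice):

```python
def aut_ensemble_decode(llr, perms, sc_decode):
    # Aut-SC: decode each permuted LLR vector independently, un-permute the
    # estimates, and pick the ML-in-the-list codeword (max correlation metric)
    best, best_metric = None, float("-inf")
    for perm in perms:
        c_perm = sc_decode([llr[p] for p in perm])  # estimate of permuted codeword
        c_hat = [0] * len(perm)
        for i, p in enumerate(perm):                # un-permute: c_hat[perm[i]] = c_perm[i]
            c_hat[p] = c_perm[i]
        # BPSK correlation metric between candidate codeword and channel LLRs
        metric = sum((1 - 2 * c) * l for c, l in zip(c_hat, llr))
        if metric > best_metric:
            best, best_metric = c_hat, metric
    return best
```

The ML-in-the-list selection maximizes the correlation $\sum_i (1-2\hat{c}_i)\,L_i$ over the (un-permuted) candidate codewords, so no path-metric sorting across the parallel decoders is needed.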
However, the two constructions show a very different behavior. While for the code with $ I_\mathrm{min}=\{31,99\} $ SCL shows a very good performance, Aut-SC shows only small gains. The code with $ I_\mathrm{min}=\{31,57\} $ can, however, outperform SCL. Therefore, a strict correlation between SCL and Aut-SC decoding performance for partially symmetric codes cannot be inferred, and code design for both decoders remains an open problem. Table <ref> lists the parameters and properties of the compared codes.[An interactive demo of the code properties is provided online: <http://webdemo.inue.uni-stuttgart.de/webdemos/08_research/polar/index.php?id=12>] In Appendix <ref> we provide more BLER results for the case of $ N=128 $ and $ K=64 $. We want to emphasize again that using only LTA permutations would result in the BLER performance curves of Aut-SC coinciding with plain SC decoding, as depicted and discussed in [10]. § CONCLUSION We show that decreasing monomial codes have at least BLTA as their automorphism group, which is in most cases larger than the previously known subgroup LTA, and propose an algorithm to find this group. While the automorphisms from LTA were proven to yield no error-rate performance gains under automorphism-based SC decoding when compared to plain SC decoding, the newly found BLTA permutations show significant gains, outperforming the state-of-the-art SCL in some scenarios, with a strictly lower complexity. [1] E. Arıkan, “Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels,” IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051–3073, Jul. 2009. [2] I. Tal and A. Vardy, “List Decoding of Polar Codes,” IEEE Trans. Inf. Theory, vol. 61, no. 5, pp. 2213–2226, May 2015. [3] “Technical Specification Group Radio Access Network,” 3GPP, 2018, TS 38.212 V.15.1.1. [Online].
Available: [4] S. A. Hashemi, N. Doan, M. Mondelli, and W. J. Gross, “Decoding Reed-Muller and Polar Codes by Successive Factor Graph Permutations,” in IEEE 10th Inter. Symp. on Turbo Codes Iterative Inf. Process. (ISTC), Dec. 2018. [5] A. Elkelesh, M. Ebada, S. Cammerer, and S. ten Brink, “Belief Propagation List Decoding of Polar Codes,” IEEE Commun. Lett., vol. 22, no. 8, pp. 1536–1539, Aug. 2018. [6] M. Kamenev, Y. Kameneva, O. Kurmaev, and A. Maevskiy, “Permutation Decoding of Polar Codes,” in XVI Inter. Symp. “Problems of Redundancy in Information and Control Systems” (REDUNDANCY), 2019, pp. [7] M. Bardet, V. Dragoi, A. Otmani, and J. Tillich, “Algebraic Properties of Polar Codes From a New Polynomial Formalism,” in IEEE Inter. Symp. Inf. Theory (ISIT), Jul. 2016, pp. 230–234. [8] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, ser. North-Holland Mathematical Library. North-Holland Pub. Co., 1977, no. 16. [9] N. Stolte, “Rekursive Codes mit der Plotkin-Konstruktion und ihre Decodierung,” Ph.D. dissertation, Technische Universität Darmstadt, Jan. 2002. [Online]. Available: [10] M. Geiselhart, A. Elkelesh, M. Ebada, S. Cammerer, and S. ten Brink, “Automorphism Ensemble Decoding of Reed-Muller Codes,” ArXiv e-prints, arXiv:2012.07635, Dec. 2020. [11] T. Hehn, O. Milenkovic, S. Laendner, and J. B. Huber, “Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes,” IEEE Trans. Inf. Theory, vol. 54, no. 12, 2008. [12] M. Bardet, J. Chaulet, V. Dragoi, A. Otmani, and J. Tillich, “Cryptanalysis of the McEliece Public Key Cryptosystem Based on Polar Codes,” in Post-Quantum Cryptography, 2016, pp. 118–143. [13] K. Ivanov and R. Urbanke, “Partially symmetric monomial codes,” ArXiv e-prints, arXiv:2001.03790, Jan. 2020. [14] C. Schürch, “A Partial Order For the Synthesized Channels of a Polar Code,” in IEEE Inter. Symp. Inf. Theory (ISIT), Jul. 2016, pp. [15] N. Doan, S. A.
Hashemi, M. Mondelli, and W. J. Gross, “On the Decoding of Polar Codes on Permuted Factor Graphs,” in IEEE Global Commun. Conf. (GLOBECOM), Dec. 2018. [16] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 2nd ed. The MIT Press, [17] Y. Li, H. Zhang, R. Li, J. Wang, W. Tong, G. Yan, and Z. Ma, “The Complete Affine Automorphism Group of Polar Codes,” ArXiv e-prints, arXiv:2103.14215, Mar. 2021. [18] B. Li, H. Shen, and D. Tse, “A RM-Polar Codes,” ArXiv e-prints, arXiv:1407.5483, Jul. 2014. [19] M. Mondelli, S. H. Hassani, and R. L. Urbanke, “From Polar to Reed-Muller Codes: A Technique to Improve the Finite-Length Performance,” IEEE Trans. Commun., vol. 62, no. 9, pp. 3084–3091, Sep. 2014. [20] A. Elkelesh, M. Ebada, S. Cammerer, and S. ten Brink, “Decoder-Tailored Polar Code Design Using the Genetic Algorithm,” IEEE Trans. Commun., vol. 67, no. 7, pp. 4521–4534, Jul. 2019. [21] N. J. A. Sloane, “The Encyclopedia of Integer Sequences, Sequence A048651.” [Online]. Available: <http://oeis.org/A048651> [22] M. Fossorier and S. Lin, “Soft-Decision Decoding of Linear Block Codes Based on Ordered Statistics,” IEEE Trans. Inf. Theory, vol. 41, no. 5, pp. 1379–1396, Sep. 1995. §.§ Proof of Theorem 1 Let $ \sigma \in C(\pi) $ be a cycle of the cycle decomposition of $ \pi $ and $ \Omega_\sigma = [\min(\sigma), \max(\sigma)] $ be the interval $ \sigma $ acts on. The index set $ \Omega_\pi $ is the union of the intervals of the cycle decomposition of $ \pi $, i.e., \begin{equation}\label{eq:operating interval} \Omega_\pi = \bigcup_{\sigma \in C(\pi)} \Omega_\sigma, \end{equation} and $ \Omega_\pi^c = \Omega \setminus \Omega_\pi $ its complement. We can factor every monomial $ f\in I $ into a part $ f_\pi $ corresponding to $ \pi $ and a residual: \begin{equation} f = \prod_{i\in \ind(f) \cap \Omega_\pi} x_i \prod_{i\in \ind(f) \cap \Omega_\pi^c} x_i = f_\pi \cdot f_{\pi^c}.
\end{equation} Now, partition $ I $ into subsets $ I_f $ of the same degree and equal residual $ f_{\pi^c} $, i.e., under the following equivalence relation: \begin{align} &f \sim f' \Leftrightarrow \deg(f) = \deg(f') \text{ and } f_{\pi^c} = f_{\pi^c}' \\ &I/{\sim} = \left\{[f]_{\sim} \; \middle| \; f \in I\right\}. \end{align} We focus on some monomial $ f $ with subset $ I_f = [f]_{\sim} $. From <cit.>, we know that within each subset, all elements are comparable under the partial order and it is sufficient to look at the part of $ f $ that is not shared by the elements in $ I_f $, i.e., $ f_\pi $. We will now show that $ I_f $ contains all monomials that share $ f_{\pi^c} $ and have the same degrees in intervals of the interval disjoint decomposition of $ \pi $, i.e., \begin{equation}\label{eq:whatweshow} I_f = \left\{g\cdot f_{\pi^c} \in \mathcal{M}_n, \; \deg(g_\rho)=\deg(f_\rho) \; \forall \rho \in T(\pi)\right\}, \end{equation} by repeatedly applying $ \pi $ and using the partial order. First, assume $ \pi = \sigma $ is just a single cycle and $ \Omega_\sigma = [i_0,i_1] $. Let $ d=\deg(f_\sigma) $. If $ d=0 $, Eq. (<ref>) is already fulfilled, as $ f $ is the only such monomial. If $ d>0 $, observe: * After a maximum of $ \ord(\sigma) $ steps, we can transform $ f $ into some $ f' $ with the property $ i_1 \in \ind(f') $. * After a maximum of $ 2d\cdot\ord(\sigma) $ steps, we can transform $ f $ into $ \hat{f} = x_{i_1-d+1} \cdots x_{i_1}$ which is the maximum monomial (w.r.t. the partial order) in $ \Omega_\sigma $. In both scenarios, a step refers to one application of the partial order (i.e., transforming $ f $ into some $ f'\preccurlyeq f $ with $ f' \sim f $) followed by one application of the permutation $ \pi $ (i.e., transforming $ f $ into $ f' = \pi(f) $; $ f'\sim f $ implicitly fulfilled). 
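The interval decomposition and monomial factorization above are easy to sketch in code. The following is our own illustration (not the authors' implementation), representing a monomial by its index set $\ind(f)$ and a permutation by its list of cycles:

```python
# Sketch (not the authors' code): the operating interval Omega_pi and the
# factorization f = f_pi * f_{pi^c}, with a monomial represented by ind(f).

def operating_interval(cycles):
    """Union of the intervals [min(sigma), max(sigma)] over all cycles of pi."""
    omega = set()
    for sigma in cycles:
        omega |= set(range(min(sigma), max(sigma) + 1))
    return omega

def factor_monomial(ind_f, cycles):
    """Split ind(f) into the part inside Omega_pi and the residual part."""
    omega = operating_interval(cycles)
    f_pi = {i for i in ind_f if i in omega}
    f_res = set(ind_f) - f_pi
    return f_pi, f_res

# The permutation pi = (1,5,2)(3,7)(6,10)(8,9) used as an example in the proof:
cycles = [(1, 5, 2), (3, 7), (6, 10), (8, 9)]
f_pi, f_res = factor_monomial({2, 4, 11}, cycles)   # f_pi = {2, 4}, f_res = {11}
```

Here $\Omega_\pi = [1,10]$, so the index $11$ lands in the residual $f_{\pi^c}$.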
Both operations will map $ f $ to another monomial $ f' $ that is contained in $ I $, as we assume $ \pi \in \Stab(I) $ and $ I $ belongs to a decreasing monomial code. Observations 1) and 2) can be verified by looking at the following algorithm. We denote the monomial at step $ j $ by $ f_j $. Assume we know some upper limit of $ Q $ steps within which we can certainly arrive at the desired target state $ \hat{f}=f_Q $. At each step $ j $, find the positions required by the target (after $ \pi $ is applied another $ Q-j $ times) that are not yet present in $ \ind(f_j) $, i.e., \begin{equation}\label{eq:fixset} F_j = \left\{ i \in \ind(\pi^{Q-j}(\hat{f})), i\notin \ind(f_j) \right\}. \end{equation} After each set of two revolutions of $ \pi $ (or $ 2\cdot\ord(\pi) $ steps), we can remove (at least) one element from $ F_j $, since, as long as we have not yet arrived at the target state, there is an “empty place” in the positions affected by $ \pi $ which is permuted to $ i_0 $ at some point. The partial order allows us to move one of the $ x_i $ that are not yet in the target position into $ i_0 $, as $ x_{i_0} \preccurlyeq x_i $. After another maximum of $ \ord(\pi)-1 $ steps, the respective $ x_i $ has moved to position $ i_1 $, and can be placed in the target position $ \hat{i} $ by the partial order, again, because $ x_{\hat{i}} \preccurlyeq x_{i_1} $. Therefore, no more than $ Q=2d \cdot \ord(\pi) $ steps are required. Note that this is a very loose upper bound, but to prove Eq. (<ref>), we only need the algorithm to be deterministic and stop after a finite number of steps. To summarize, we can move any variable in $ f $ to any other place in $ \Omega_\pi $, as we can move it to $ i_0 $ using the partial order, rotate it to $ i_1 $ using the cycle permutation $ \pi $, and then place it in the target position, as all indices are reachable by the partial order from $ i_1 $.
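The fix-set bookkeeping of Eq. (fixset) can be sketched as follows; the helper names are ours, not the paper's, and a permutation is represented as a dictionary $i \mapsto \pi(i)$:

```python
# Sketch (assumed helper names): applying a permutation to a monomial, and
# the fix set F_j, i.e. the positions required by the backtracked target
# pi^(Q-j)(f_hat) that are not yet present in f_j.

def apply_perm(perm, ind_f):
    """Apply pi (a dict i -> pi(i), identity on missing indices) to ind(f)."""
    return {perm.get(i, i) for i in ind_f}

def apply_perm_power(perm, ind_f, k):
    """Apply pi exactly k times."""
    for _ in range(k):
        ind_f = apply_perm(perm, ind_f)
    return ind_f

def fix_set(perm, ind_fj, ind_fhat, Q, j):
    """F_j = ind(pi^(Q-j)(f_hat)) \\ ind(f_j)."""
    target = apply_perm_power(perm, ind_fhat, Q - j)
    return target - ind_fj

# Toy example: pi = (1,2,3), target f_hat = x_3, current monomial f_j = x_1.
perm = {1: 2, 2: 3, 3: 1}
F1 = fix_set(perm, {1}, {3}, Q=3, j=1)   # backtracked target is pi^2({3}) = {2}
```

In the toy example the fix set is $\{2\}$: the single variable still has to be moved into position $2$ before the remaining applications of $\pi$ carry it to the target.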
Due to the resemblance of the cycle $ \pi $ to an irregular paternoster elevator running around a building with floors $ \Omega_\pi $, we call this algorithm the crazy paternoster algorithm. [Figure: Visualization of interlocked cycles in the example permutation $ {\pi = (1,5,2)(3,7)(6,10)(8,9)} $.] The described method can be extended to the case where $ \pi $ is a product of interlocked cycles. We can classify all cycles as either transit cycles or parking cycles. A parking cycle $ \sigma $ is fully enclosed by another cycle, i.e., $ \exists \sigma' \in C(\pi) $ with $ \Omega_{\sigma} \subset \Omega_{\sigma'} $; transit cycles, in contrast, partially overlap with one another. It is easy to see that there exists a chain of transit cycles $ \sigma_1, \cdots, \sigma_t $ with the properties $ \min(\sigma_1) = i_0 $, $ \max(\sigma_j) > \min(\sigma_{j+1}) $ and $ \max(\sigma_t)=i_1 $. Fig. <ref> shows the interlocked cycles for the example of $ {\pi = (1,5,2)(3,7)(6,10)(8,9)} $. Here, the cycles $ \sigma_1=(1,5,2) $, $ \sigma_2=(3,7) $ and $ \sigma_3=(6,10) $ form the chain of transit cycles, while $ (8,9) $ is fully enclosed by $ (6,10) $ and therefore is classified as a parking cycle. Using this chain of transit cycles, we can again move any variable in $ f $ to any other position within $ \Omega_\pi $, as the overlap of the cycles allows $ x_i $ to “change” from one cycle to the next higher cycle. Note that, depending on the degrees of the monomials, the order in which the $ x_i $ are moved must be adjusted. The maximum number of steps remains upper bounded, however, by the same number, namely \begin{equation}\label{eq:q_ext1} Q' = 2d\cdot |C(\pi)| \cdot \max_{\sigma \in C(\pi)} \left\{\ord(\sigma)\right\}.
\end{equation} For the most general case, i.e., if $ \pi $ is a product of multiple interval disjoint permutations, we can use the upper bound \begin{equation}\label{eq:q_ext2} Q'' = \max_{\rho \in T(\pi)} \left\{ 2d\cdot |C(\rho)| \cdot \max_{\sigma \in C(\rho)} \left\{\ord(\sigma)\right\}\right\}, \end{equation} as each interval disjoint region $ \Omega_\rho = [i_{\rho,0}, i_{\rho,1}] $ can be optimized independently according to the procedure above. For all regions, the same backtracking $ \pi^{Q''-j}(\hat{f}) $ is used in each step $ j $. Therefore, Eq. (<ref>) holds for all cases of monomials $ f $. This means that all permutations of the intervals $ \Omega_\rho $ stabilize $ I_f $. As this holds for all $ I_f $ individually, it also holds for their union $ I $. $ \qed $ §.§ Error-Rate Performance for (128,64) Codes [Figure: Comparison of $\left(N=128,K=64\right)$ polar codes under SC, Aut-SC and SCL decoding; BI-AWGN channel.]

Design | $ \sv $ | $ |\operatorname{Aut}(\mathcal{C})| $ | $ d_\mathrm{min} $ | $ A_{d_\mathrm{min}} $
Bhat. @1 dB | $[2, 2, 1, 1, 1]$ | $2.42\cdot10^9$ | 8 | 688
$I_\mathrm{min}=\{27,56\}$ | $[3, 4]$ | $1.78\cdot10^{12}$ | 8 | 240
$I_\mathrm{min}=\{23,112\}$ | $[4, 3]$ | $1.78\cdot10^{12}$ | 8 | 16
RM(3,7) | $[7]$ | $2.10\cdot10^{16}$ | 16 | 94488

Table: Properties of the compared (128,64) polar codes. Fig. <ref> shows the BLER performance of (128,64) polar codes under SC-based decoding. In particular, we compare plain SC decoding [1] with SCL decoding with list size 8 (SCL-8) [2] and Aut-8-SC decoding [10]. Furthermore, OSD-4 results serve as an upper bound on the ML performance of each code [22]. Again, the Bhattacharyya construction at design SNR of 1 dB ($ I_\mathrm{min}=\{31, 45, 51, 71, 84, 97\} $) does not show any gains for Aut-SC decoding. Note that the gains of SCL-8 over SC are also smaller than 0.2 dB.
Next, we designed codes by selecting two generators each, $ I_\mathrm{min}=\{27,56\} $ and $ I_\mathrm{min}=\{23,112\} $, in order to have a large automorphism group under the constraint of $ K=64 $. Both codes can be viewed as examples of PSMC [13]. While the SC performance degrades, a significant performance gain is now achieved by both Aut-SC and SCL decoding, with comparable performance between the two. In particular, Aut-SC is within 0.1 dB to 0.2 dB of the SCL performance. For completeness, we also included performance results for the RM code construction ($ I_\mathrm{min}=\{15\} $). As previously reported in [10], in the RM case, automorphism-based decoding can even outperform SCL decoding. Table <ref> lists the parameters and properties of the compared (128,64) codes.
# $\kappa$-Poincaré-comodules, Braided Tensor Products and Noncommutative Quantum Field Theory Fedele <EMAIL_ADDRESS> and Flavio <EMAIL_ADDRESS> a Dipartimento di Fisica “Ettore Pancini”, Università di Napoli Federico II, Napoli, Italy; b INFN, Sezione di Napoli; c Departament de Física Quàntica i Astrofísica and Institut de Cíencies del Cosmos (ICCUB), Universitat de Barcelona, Barcelona, Spain ###### Abstract We discuss the obstruction to the construction of a multiparticle field theory on a $\kappa$-Minkowski noncommutative spacetime: the existence of multilocal functions which respect the deformed symmetries of the problem. This construction is only possible for a light-like version of the commutation relations, if one requires invariance of the tensor product algebra under the coaction of the $\kappa$-Poincaré group. This necessitates a _braided_ tensor product. We study the representations of this product, and prove that $\kappa$-Poincaré-invariant N-point functions belong to an Abelian subalgebra, and are therefore commutative. We use this construction to define the 2-point Wightman and Pauli–Jordan functions, which turn out to be identical to the undeformed ones. We finally outline how to construct a free scalar $\kappa$-Poincaré-invariant quantum field theory, and identify some open problems. ## 1 Introduction The $\kappa$-Minkowski spacetime [1, 2, 3] is a deformation of the algebra of complex-valued functions on Minkowski spacetime, $\mathbbm{C}[\mathbbm{R}^{3,1}]$, into the noncommutative *-algebra $\mathcal{A}$ generated by the coordinate functions (see [4, 5, 6] for another example) $[x^{\mu},x^{\nu}]=\frac{i}{\kappa}(v^{\mu}x^{\nu}-v^{\nu}x^{\mu})\,,\qquad\mu=0,\dots,3\,,\qquad(x^{\mu})^{\dagger}=x^{\mu}\,,$ (1.1) where $v^{\mu}$ are four arbitrary real numbers, and the $x^{\mu}$ operators generalize the Cartesian coordinate functions.
The constant $\kappa$ has the dimensions of an inverse length, supposedly identified with (or at least related to) the Planck energy. From now on, we will work in units in which $\kappa=1$. The above relations close a Lie algebra, known as $\mathfrak{an}(3)$, of which $\mathcal{A}$ is the universal enveloping algebra. Notice that all these algebras, for any choice of $v^{\mu}$, are isomorphic to each other. This can be seen by observing that the following linear redefinition of generators: $x^{i}\to v^{0}x^{i}-v^{i}x^{0}$, $x^{0}\to v_{i}x^{i}+\frac{1-\|\vec{v}\|^{2}}{v^{0}}x^{0}$ puts the algebra in the form: $[x^{0},x^{i}]=\frac{i}{\kappa}x^{i}\,,\qquad\left[x^{i},x^{j}\right]=0\,,\qquad i,j=1,2,3\,,$ (1.2) which is the original [3] and best-known form of the $\kappa$-Minkowski algebra. The algebra (1.1) is invariant under the following Hopf algebra: $\displaystyle\Delta[\Lambda^{\mu}{}_{\nu}]$ $\displaystyle=\Lambda^{\mu}{}_{\alpha}\otimes\Lambda^{\alpha}{}_{\nu},$ $\displaystyle[\Lambda^{\mu}{}_{\nu},\Lambda^{\alpha}{}_{\beta}]$ $\displaystyle=0,$ (1.3) $\displaystyle\Delta[a^{\mu}]$ $\displaystyle=\Lambda^{\mu}{}_{\nu}\otimes a^{\nu}+a^{\mu}\otimes\mathbbm{1},$ $\displaystyle[\Lambda^{\mu}{}_{\nu},a^{\gamma}]$ $\displaystyle=i\left[\left(\Lambda^{\mu}{}_{\alpha}\,v^{\alpha}-v^{\mu}\right)\Lambda^{\gamma}{}_{\nu}+\left(\Lambda^{\alpha}{}_{\nu}\tilde{g}_{\alpha\beta}-\tilde{g}_{\nu\beta}\right)v^{\beta}g^{\mu\gamma}\right],$ $\displaystyle S[\Lambda]$ $\displaystyle=\Lambda^{-1},\quad S[a^{\mu}]=-a^{\mu},$ $\displaystyle[a^{\mu},a^{\nu}]$ $\displaystyle=i\left(v^{\mu}\,a^{\nu}-v^{\nu}\,a^{\mu}\right),$ $\displaystyle\varepsilon[\Lambda^{\mu}{}_{\nu}]$ $\displaystyle=\delta^{\mu}{}_{\nu},\quad\varepsilon[a^{\mu}]=0,$ $\displaystyle\Lambda^{\mu}{}_{\alpha}\Lambda^{\nu}{}_{\beta}g^{\alpha\beta}$ $\displaystyle=g^{\mu\nu},\quad
\Lambda^{\rho}{}_{\mu}\Lambda^{\sigma}{}_{\nu}g_{\rho\sigma}=g_{\mu\nu}.$ where $g_{\mu\nu}$ is any symmetric invertible matrix, and $g^{\mu\nu}$ is its inverse. While the metric is usually taken to be the standard Minkowski one, $\eta_{\mu\nu}=\text{diag}(-1,+1,+1,+1)$, other choices are possible, including some degenerate cases [7]. When $g_{\mu\nu}=\eta_{\mu\nu}$, this Hopf algebra (or quantum group [8]) is called $\kappa$-Poincaré [1, 2, 9, 10, 3, 11, 12, 13, 14]. In this case the relations (1.3) are to be understood as the deformation of the algebra of functions on the Poincaré group, $\mathbbm{C}[ISO(3,1)]$, into a noncommutative Hopf algebra $\mathcal{P}_{\kappa}$, in which the coproduct $\Delta$, antipode $S$ and counit $\varepsilon$ are undeformed, and simply codify the Lie group structure of $ISO(3,1)$, while the commutation relations acquire a dependence on $\kappa$, and make the algebra of functions noncommutative. The operators $a^{\mu}$ (translations) and $\Lambda^{\mu}{}_{\nu}$ (Lorentz matrices) are coordinate functions on the group, and the matrices $\Lambda^{\mu}{}_{\nu}$ leave $g^{\mu\nu}$ and its inverse invariant in the ordinary, algebraic sense expressed by the last line of the equation above. Moreover, Equations (1.3) leave the commutation relations (1.2) invariant, in the sense that the following left coaction: $x^{\prime\mu}=\Lambda^{\mu}{}_{\nu}\otimes x^{\nu}+a^{\mu}\otimes 1\,,$ (1.4) is an algebra homomorphism for the relations (1.2); in other words, $\kappa$-Minkowski is a $\kappa$-Poincaré-comodule algebra [8]. This coaction can be seen as the rule to transform a $\kappa$-Minkowski coordinate into a $\kappa$-Poincaré-transformed one, which is an object that lives in the tensor product $\mathcal{P}_{\kappa}\otimes\mathcal{A}$, the noncommutative version of the algebra of functions on $ISO(3,1)\times\mathbbm{R}^{3,1}$.
Depending on the choice of eigenvalues of the matrix $g_{\mu\nu}$, the Hopf algebra (1.3) might be a quantum-group deformation of the Poincaré, Euclidean or $ISO(2,2)$ groups. Moreover, there are also degenerate cases in which $g_{\mu\nu}$ is not invertible, but the algebra is still well-defined, and it might, for example, correspond to a deformation of the Carroll group [7]. According to the particular form of $g_{\mu\nu}$, _i.e._ in which directions its eigenvectors are pointing, the coordinates $x^{0}$, $x^{1}$, $x^{2}$ and $x^{3}$ might have a different nature. In the Poincaré case $g_{\mu\nu}=\eta_{\mu\nu}$, for example, $x^{0}$ is the timelike direction and $x^{i}$ are the spacelike ones. But any other choice is possible (and linear combinations thereof). Similarly, the vector $v^{\mu}$ in Eq. (1.1) could take any form, and if it is pointing in the $0$ direction, $v^{\mu}=\delta^{\mu}_{0}$, then the commutation relations reduce to (1.2), in which $x^{0}$ is the only noncommuting coordinate. In all other cases, the direction of $v^{\mu}$ indicates which linear combination of $x^{\mu}$ coordinates is the noncommuting one. Of course, one can act on the generators $x^{\mu}$ with any (commutative) linear transformation, and obtain an algebra with a different $v^{\mu}$ vector, invariant under a quantum group (1.3) with a different matrix $g^{\mu\nu}$. In the end, when $g^{\mu\nu}$ is invertible, what counts is the relative orientation of $v^{\mu}$ with respect to the eigenvectors of $g^{\mu\nu}$. In the degenerate cases things are more complicated. For a complete treatment of all the physically-inequivalent models, and the corresponding momentum spaces, see [7]. Once we have a generalization of the algebra of functions on a manifold, the natural context to look for physical applications of the model is field theory, whose basic ontology is that of fields, which are multiplets of functions on the spacetime manifold.
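The isomorphism between the algebras with different $v^{\mu}$, implemented by the linear redefinition of generators quoted in the Introduction, can be checked numerically on the structure constants. The sketch below is ours, not the paper's: it assumes $v_{i}=v^{i}$ in the redefinition (spatial indices contracted with the Euclidean metric) and drops the overall factor of $i$, so that $[x^{\mu},x^{\nu}]=v^{\mu}x^{\nu}-v^{\nu}x^{\mu}$:

```python
# Sketch (our conventions, factor of i dropped): verify that the redefined
# generators y^0 = v_i x^i + (1 - |v|^2)/v^0 x^0, y^i = v^0 x^i - v^i x^0
# obey [y^0, y^i] = y^i and [y^i, y^j] = 0, i.e. the form (1.2).

def bracket(X, Y, v):
    """Bilinear extension of [x^mu, x^nu] = v^mu x^nu - v^nu x^mu."""
    out = [0.0] * 4
    for mu in range(4):
        for nu in range(4):
            c = X[mu] * Y[nu]
            if c:
                out[nu] += c * v[mu]
                out[mu] -= c * v[nu]
    return out

v = [2.0, 0.5, -1.0, 0.3]                 # generic deformation vector, v^0 != 0
norm2 = sum(vi * vi for vi in v[1:])      # ||vec v||^2

y0 = [(1.0 - norm2) / v[0], v[1], v[2], v[3]]   # y^0
y1 = [-v[1], v[0], 0.0, 0.0]                    # y^1
y2 = [-v[2], 0.0, v[0], 0.0]                    # y^2
```

With these definitions, `bracket(y0, y1, v)` reproduces `y1` and `bracket(y1, y2, v)` vanishes; in fact `bracket(X, Y, v)` equals $(X\!\cdot\!v)\,Y-(Y\!\cdot\!v)\,X$, and each $y^{i}$ is orthogonal to $v$ while $y^{0}\!\cdot\!v=1$.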
Classical (in the sense of unquantized, $\hbar\to 0$ limit) noncommutative models based on action functionals and equations of motion are fairly well understood [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 12, 29, 30, 31, 32]. There is, however, no current agreement in the literature on the correct formulation of noncommutative Quantum Field Theory (QFT), although there is a sizeable literature on the subject [33, 34, 35, 36, 37, 38, 39, 40, 41]. Recently, there has been a resurgence of interest in QFT on $\kappa$-Minkowski [42, 43, 44, 45, 46, 47, 48, 41], and perhaps the most important difference between approaches regards the basic ontology. Most approaches are based on a commutative algebra of functions, over which a non-local “star” product, involving an infinite number of derivatives of the fields, is defined. This star product provides a representation of the basic commutation relations (1.2) or (1.1), and the theory is treated as a commutative-but-nonlocal QFT, defined, for example, through a regular path integral. Assuming such an ontology might be problematic from the operational point of view [49], and it is not clear whether $\kappa$-Poincaré symmetries can be implemented as symmetries of the theory. But most importantly, such an ontology naturally leads one to define the QFT in terms of “commutative” N-point functions (defined, _e.g._ through the functional derivatives of a partition function with respect to the commutative fields) that do not address the issue of multilocal functions, which we describe in the following. In this paper we want to attack the main obstruction that prevented the full development of a QFT based on a truly noncommutative ontology: the fact that, in order to work with QFTs, it is necessary to have a good notion of multilocal functions, because the theory is entirely determined by its N-point functions. From an algebraic point of view, we would like to have, to begin with, a notion of “function of two points”.
This is a function on the Cartesian product of two copies of Minkowski space, $\mathbbm{R}^{3,1}\times\mathbbm{R}^{3,1}$. The commutative algebra of such functions is $\mathbbm{C}[\mathbbm{R}^{3,1}\times\mathbbm{R}^{3,1}]$, which, under the canonical isomorphism, can be identified with the tensor product algebra $\mathbbm{C}[\mathbbm{R}^{3,1}]\otimes\mathbbm{C}[\mathbbm{R}^{3,1}]$, which is canonically defined as generated by the coordinate functions: $x_{1}^{\mu}=x^{\mu}\otimes 1\,,\qquad x_{2}^{\mu}=1\otimes x^{\mu}\,,$ (1.5) with the identity $1^{\otimes 2}=1\otimes 1$, and the product is simply $x^{\mu}_{1}x^{\nu}_{2}=x^{\nu}_{2}x^{\mu}_{1}$. So, in the noncommutative setting, it appears natural to refer to the tensor product algebra $\mathcal{A}^{\otimes 2}$ generated by (1.5), where $[x_{1}^{\mu},x_{1}^{\nu}]=i(v^{\mu}x_{1}^{\nu}-v^{\nu}x_{1}^{\mu})\,,\qquad[x_{2}^{\mu},x_{2}^{\nu}]=i(v^{\mu}x_{2}^{\nu}-v^{\nu}x_{2}^{\mu})\,,\qquad[x_{1}^{\mu},x_{2}^{\nu}]=0\,.$ (1.6) This algebra is a good Lie algebra (it satisfies the Jacobi rules), and gives rise to a perfectly legitimate universal enveloping algebra. It also makes sense that the coordinates $x^{\mu}_{1}$ and $x^{\mu}_{2}$ are the operators that generalize to the noncommutative setting the coordinates of point 1 and point 2, which are distinct points that we should be able to choose independently. By “choosing a point”, in the noncommutative setting, we mean choosing a state on the algebra, which can provide a degree of localization. In fact, classical points can be described through the commutative algebra of functions on a manifold as limits of functions peaked around a choice of coordinates (_e.g._ Gaussians), which tend to a Dirac delta. In the noncommutative setting this limit is unattainable except for special points (_e.g._ the time axis, [14]), because of uncertainty relations. However, one can introduce a notion of “fuzzy points”, corresponding to maximally-localized states [47, 48, 14].
Since, by construction, the states on the tensor product algebra allow us to localize $x^{\mu}_{1}$ and $x^{\mu}_{2}$ independently around arbitrary classical coordinates, without interference of the state of one point on the other, we could be quite satisfied with this formulation. However, there is a big problem: extending the $\kappa$-Poincaré coaction (1.4) to $\mathcal{A}^{\otimes 2}$ in the canonical way: $\displaystyle{x^{\prime}}_{1}^{\mu}=\Lambda^{\mu}{}_{\nu}\otimes x_{1}^{\nu}+a^{\mu}\otimes 1^{\otimes 2}=\Lambda^{\mu}{}_{\nu}\otimes x^{\nu}\otimes 1+a^{\mu}\otimes 1\otimes 1\,,$ (1.7) $\displaystyle{x^{\prime}}_{2}^{\mu}=\Lambda^{\mu}{}_{\nu}\otimes x_{2}^{\nu}+a^{\mu}\otimes 1^{\otimes 2}=\Lambda^{\mu}{}_{\nu}\otimes 1\otimes x^{\nu}+a^{\mu}\otimes 1\otimes 1\,,$ the algebra (1.6) is not left invariant by it. In technical terms, (1.6) is not a $\kappa$-Poincaré-comodule. Specifically, it is the commutation relations between $x^{\mu}_{1}$ and $x^{\mu}_{2}$ that are not covariant. In fact, $[{x^{\prime}}_{1}^{\mu},{x^{\prime}}_{2}^{\nu}]=[\Lambda^{\mu}{}_{\rho},a^{\nu}]\otimes\left(x_{1}^{\rho}-x_{2}^{\rho}\right)+[a^{\mu},a^{\nu}]\otimes 1^{\otimes 2}\neq 0\,.$ (1.8) There is a way out of this problem: relax the commutativity of the two sides of the tensor product algebra, $[x_{1}^{\mu},x_{2}^{\nu}]=0$, in order to make Eq. (1.8) covariant. The structure we end up dealing with is a “braided tensor product”, first introduced by Majid in the 1990s [50, 8]. A similar concept has been used in [51, 52, 53, 54, 4, 55, 56] to properly define QFT on the Moyal/canonical spacetime. In [37], the necessity to extend the $\kappa$-Minkowski algebra to multiple points in a nontrivial way was recognized. The novelty in our work is that we require that the proposed solution of the problem provide a coherent $\kappa$-Poincaré comodule. In the following, we will address the issue and find the conditions under which it can be solved.
A related alternative, which we will not pursue here, would be to enforce the symmetry via a Drinfeld twist, and coherently generate a deformed tensor product, deformed star product and other structures, along the lines of [57]. Twists for $\kappa$-Minkowski symmetries have been studied, but they are not exempt from problems [58]; a recent review, with references, is [59]. ## 2 The braided tensor product algebra Let us first consider the algebra of two points. We require that it contain two copies of the $\kappa$-Minkowski algebra (1.1) as subalgebras: $\displaystyle\left[x_{1}^{\mu},x_{1}^{\nu}\right]=i\left[v^{\mu}x_{1}^{\nu}-v^{\nu}x_{1}^{\mu}\right]\,,\qquad\left[x_{2}^{\mu},x_{2}^{\nu}\right]=i\left[v^{\mu}x_{2}^{\nu}-v^{\nu}x_{2}^{\mu}\right]\,,$ (2.1) with yet-to-be-determined cross-commutators: $\left[x_{1}^{\mu},x_{2}^{\nu}\right]=if^{\mu\nu}(x_{1},x_{2},v)\,,$ (2.2) and that it form a left-comodule under the following left coaction: $x_{a}^{\prime\mu}=\Lambda^{\mu}{}_{\nu}x_{a}^{\nu}+a^{\mu}\,,\quad a=1,2\,,$ (2.3) of the $\kappa$-Poincaré group (1.3). Finally, we require that the commutators (2.2) satisfy the Jacobi rules. In addition to these defining requirements, we can make a few reasonable assumptions: the function $f^{\mu\nu}(x_{1},x_{2},v)$ should go to zero when $v^{\mu}\to 0$, and we can assume it is polynomial in $x^{\mu}_{a}$. Under this ansatz, we can expand it in powers of $v^{\mu}$: $\left[x_{1}^{\mu},x_{2}^{\nu}\right]=ia^{\mu\nu}_{\rho\sigma}v^{\rho}v^{\sigma}+iv^{\rho}\left(b^{\mu\nu}_{\rho\sigma}x_{1}^{\sigma}+c^{\mu\nu}_{\rho\sigma}x_{2}^{\sigma}\right)\,,$ (2.4) where $a^{\mu\nu}_{\rho\sigma}$, $b^{\mu\nu}_{\rho\sigma}$ and $c^{\mu\nu}_{\rho\sigma}$ are numbers.
Imposing the comodule condition on this commutator: $\displaystyle\left[x_{1}^{\prime\mu},x_{2}^{\prime\nu}\right]=$ $\displaystyle ia^{\mu\nu}_{\rho\sigma}v^{\rho}v^{\sigma}+iv^{\rho}\left(b^{\mu\nu}_{\rho\sigma}x_{1}^{\prime\sigma}+c^{\mu\nu}_{\rho\sigma}x_{2}^{\prime\sigma}\right)\,,$ (2.5) $\displaystyle\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma}[x_{1}^{\rho},x_{2}^{\sigma}]+\left[\Lambda^{\mu}{}_{\rho},a^{\nu}\right]x_{1}^{\rho}+\left[a^{\mu},\Lambda^{\nu}{}_{\sigma}\right]x_{2}^{\sigma}+\left[a^{\mu},a^{\nu}\right]=$ $\displaystyle ia^{\mu\nu}_{\rho\sigma}v^{\rho}v^{\sigma}+iv^{\rho}\left(b^{\mu\nu}_{\rho\sigma}\Lambda^{\sigma}{}_{\lambda}x_{1}^{\lambda}+c^{\mu\nu}_{\rho\sigma}\Lambda^{\sigma}{}_{\lambda}x_{2}^{\lambda}\right)$ $\displaystyle+iv^{\rho}\left(b^{\mu\nu}_{\rho\sigma}a^{\sigma}+c^{\mu\nu}_{\rho\sigma}a^{\sigma}\right)\,,$ $\displaystyle i\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma}a^{\rho\sigma}_{\lambda\tau}v^{\lambda}v^{\tau}+i\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma}v^{\lambda}\left(b^{\rho\sigma}_{\lambda\tau}x_{1}^{\tau}+c^{\rho\sigma}_{\lambda\tau}x_{2}^{\tau}\right)=$ $\displaystyle ia^{\mu\nu}_{\rho\sigma}v^{\rho}v^{\sigma}+iv^{\rho}\left(b^{\mu\nu}_{\rho\sigma}\Lambda^{\sigma}{}_{\lambda}x_{1}^{\lambda}+c^{\mu\nu}_{\rho\sigma}\Lambda^{\sigma}{}_{\lambda}x_{2}^{\lambda}\right)$ $\displaystyle+\left[\Lambda^{\mu}{}_{\rho},a^{\nu}\right]x_{1}^{\rho}+\left[a^{\mu},\Lambda^{\nu}{}_{\sigma}\right]x_{2}^{\sigma}+\left[a^{\mu},a^{\nu}\right]\leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak\ \leavevmode\nobreak$ $\displaystyle+iv^{\rho}\left(b^{\mu\nu}_{\rho\sigma}a^{\sigma}+c^{\mu\nu}_{\rho\sigma}a^{\sigma}\right)\,.$ The different powers of $v^{\mu}$ in the above equation have to vanish separately. 
The quadratic term gives $\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma}a^{\rho\sigma}_{\lambda\tau}v^{\lambda}v^{\tau}=a^{\mu\nu}_{\rho\sigma}v^{\rho}v^{\sigma}\,,$ (2.6) which cannot hold for arbitrary $\Lambda$ if $a^{\mu\nu}_{\rho\sigma}\neq 0$, so we must set it to zero. We then split the terms that are linear in $x_{a}^{\mu}$ from the term that does not depend on them, which reads: $\left[a^{\mu},a^{\nu}\right]\equiv i\left(v^{\mu}\,a^{\nu}-v^{\nu}\,a^{\mu}\right)=iv^{\rho}\left(b^{\mu\nu}_{\rho\sigma}+c^{\mu\nu}_{\rho\sigma}\right)a^{\sigma}\,.$ (2.7) This is solved by $b^{\mu\nu}_{\rho\sigma}+c^{\mu\nu}_{\rho\sigma}=\delta^{\mu}{}_{\rho}\delta^{\nu}{}_{\sigma}-\delta^{\nu}{}_{\rho}\delta^{\mu}{}_{\sigma}\,.$ (2.8) The two terms that are linear in $x^{\mu}_{1}$ and, respectively, in $x^{\mu}_{2}$ vanish iff $i\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma}v^{\lambda}b^{\rho\sigma}_{\lambda\tau}+\left[\Lambda^{\mu}{}_{\rho},a^{\nu}\right]\delta^{\rho}{}_{\tau}=iv^{\rho}b^{\mu\nu}_{\rho\sigma}\Lambda^{\sigma}{}_{\tau}\,,\qquad i\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma}v^{\lambda}c^{\rho\sigma}_{\lambda\tau}+\left[a^{\mu},\Lambda^{\nu}{}_{\sigma}\right]\delta^{\sigma}{}_{\tau}=iv^{\rho}c^{\mu\nu}_{\rho\sigma}\Lambda^{\sigma}{}_{\tau}\,.$ (2.9) Using the $\kappa$-Poincaré relations $[\Lambda^{\mu}{}_{\nu},a^{\gamma}]=i\left[\left(\Lambda^{\mu}{}_{\alpha}\,v^{\alpha}-v^{\mu}\right)\Lambda^{\gamma}{}_{\nu}+\left(\Lambda^{\alpha}{}_{\nu}\eta_{\alpha\beta}-\eta_{\nu\beta}\right)v^{\beta}\eta^{\mu\gamma}\right]$ we can write these two equations as $\displaystyle\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma}v^{\lambda}b^{\rho\sigma}_{\lambda\tau}-v^{\rho}b^{\mu\nu}_{\rho\sigma}\Lambda^{\sigma}{}_{\tau}+\left[\left(\Lambda^{\mu}{}_{\alpha}\,v^{\alpha}-v^{\mu}\right)\Lambda^{\nu}{}_{\rho}+\left(\Lambda^{\alpha}{}_{\rho}\eta_{\alpha\beta}-\eta_{\rho\beta}\right)v^{\beta}\eta^{\mu\nu}\right]\delta^{\rho}{}_{\tau}=0\,,$ (2.10)
$\displaystyle\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma}v^{\lambda}c^{\rho\sigma}_{\lambda\tau}-v^{\rho}c^{\mu\nu}_{\rho\sigma}\Lambda^{\sigma}{}_{\tau}-\left[\left(\Lambda^{\nu}{}_{\alpha}\,v^{\alpha}-v^{\nu}\right)\Lambda^{\mu}{}_{\sigma}+\left(\Lambda^{\alpha}{}_{\sigma}\eta_{\alpha\beta}-\eta_{\sigma\beta}\right)v^{\beta}\eta^{\nu\mu}\right]\delta^{\sigma}{}_{\tau}=0\,.$ To solve these equations, we should recall that $\Lambda^{\mu}{}_{\nu}$ is an $SO(3,1)$ matrix, which can therefore be expanded in powers of an antisymmetric matrix $\epsilon_{\alpha\beta}$ as $\Lambda^{\mu}{}_{\nu}=\delta^{\mu}{}_{\nu}+\eta^{\mu\rho}\epsilon_{\rho\nu}+\mathcal{O}(\epsilon^{2})\,.$ (2.11) Eq. (2.10) reads, at first order in $\epsilon_{\alpha\beta}$: $\displaystyle\epsilon_{\alpha\beta}v^{\lambda}\left(\eta^{\mu\alpha}b^{\beta\nu}_{\lambda\tau}+\eta^{\nu\alpha}b^{\mu\beta}_{\lambda\tau}-\delta^{\beta}{}_{\tau}\eta^{\rho\alpha}b^{\mu\nu}_{\lambda\rho}+\delta^{\beta}{}_{\lambda}\delta^{\nu}{}_{\tau}\eta^{\mu\alpha}+\delta^{\beta}{}_{\tau}\delta^{\alpha}{}_{\lambda}\eta^{\mu\nu}\right)=0\,,$ (2.12) $\displaystyle\epsilon_{\alpha\beta}v^{\lambda}\left(\eta^{\mu\alpha}c^{\beta\nu}_{\lambda\tau}+\eta^{\nu\alpha}c^{\mu\beta}_{\lambda\tau}-\delta^{\beta}{}_{\tau}\eta^{\rho\alpha}c^{\mu\nu}_{\lambda\rho}-\delta^{\beta}{}_{\lambda}\delta^{\nu}{}_{\tau}\eta^{\mu\alpha}-\delta^{\beta}{}_{\tau}\delta^{\alpha}{}_{\lambda}\eta^{\mu\nu}\right)=0\,,$ which are equivalent to $\displaystyle\eta^{\mu[\alpha}b^{\beta]\nu}_{\lambda\tau}+b^{\mu[\beta}_{\lambda\tau}\eta^{\alpha]\nu}-\delta^{[\beta}{}_{\tau}\eta^{\alpha]\rho}b^{\mu\nu}_{\lambda\rho}+\eta^{\mu[\alpha}\delta^{\beta]}{}_{\lambda}\delta^{\nu}{}_{\tau}+\delta^{[\beta}{}_{\tau}\delta^{\alpha]}{}_{\lambda}\eta^{\mu\nu}=0\,,$ (2.13)
$\displaystyle\eta^{\mu[\alpha}c^{\beta]\nu}_{\lambda\tau}+c^{\mu[\beta}_{\lambda\tau}\eta^{\alpha]\nu}-\delta^{[\beta}{}_{\tau}\eta^{\alpha]\rho}c^{\mu\nu}_{\lambda\rho}-\eta^{\mu[\alpha}\delta^{\beta]}{}_{\lambda}\delta^{\nu}{}_{\tau}-\delta^{[\beta}{}_{\tau}\delta^{\alpha]}{}_{\lambda}\eta^{\mu\nu}=0\,.$ The two equations above are satisfied by $b^{\mu\nu}_{\rho\sigma}=\delta^{\mu}{}_{\rho}\delta^{\nu}{}_{\sigma}-\eta^{\mu\nu}\eta_{\rho\sigma}\,,\qquad c^{\mu\nu}_{\rho\sigma}=-\delta^{\nu}{}_{\rho}\delta^{\mu}{}_{\sigma}+\eta^{\mu\nu}\eta_{\rho\sigma}\,,$ (2.14) which also satisfies Eq. (2.8). A quick calculation reveals that this perturbative solution is exact at all orders in $\epsilon_{\mu\nu}$. In fact, substituting (2.14) into Eq. (2.10), the two equations reduce to $v^{\lambda}\left(\eta^{\mu\nu}-\Lambda^{\mu}{}_{\rho}\Lambda^{\nu}{}_{\sigma}\eta^{\rho\sigma}\right)=0$, which is of course satisfied as long as $\Lambda^{\mu}{}_{\nu}\in SO(3,1)$. We have thus found a general solution of the comodule problem: $\left[x_{1}^{\mu},x_{2}^{\nu}\right]=i\left[v^{\mu}x_{1}^{\nu}-v^{\nu}x_{2}^{\mu}-\eta^{\mu\nu}\eta_{\rho\sigma}v^{\rho}\left(x_{1}^{\sigma}-x_{2}^{\sigma}\right)\right]\,.$ (2.15) Notice now that the above commutators can be written in the following form: $\left[x_{a}^{\mu},x_{b}^{\nu}\right]=i\left[v^{\mu}x_{a}^{\nu}-v^{\nu}x_{b}^{\mu}-\eta^{\mu\nu}\eta_{\rho\sigma}v^{\rho}\left(x_{a}^{\sigma}-x_{b}^{\sigma}\right)\right]\,,$ (2.16) which reduce to the usual (generalized) $\kappa$-Minkowski commutators when $a=b$: $\left[x_{a}^{\mu},x_{a}^{\nu}\right]=i\left(v^{\mu}x_{a}^{\nu}-v^{\nu}x_{a}^{\mu}\right)\,,$ (2.17) and, moreover, remain consistent even if we let the indices $a,b$ run on an arbitrarily large set of labels. We have a comodule regardless of the number of points we are considering.
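The brackets (2.16) are simple enough to probe numerically. The sketch below is ours (overall factors of $i$ are dropped and $\eta=\mathrm{diag}(-1,+1,+1,+1)$): it encodes (2.16) as structure constants on generators $x^{\mu}_{a}$ and evaluates the Jacobi combination, which turns out to vanish for a lightlike $v^{\mu}$ but not for a generic one:

```python
# Sketch (our conventions, factors of i dropped): Jacobi combination for the
# braided brackets (2.16), with elements stored as {(site, index): coeff}.

ETA = [-1.0, 1.0, 1.0, 1.0]   # eta = diag(-1, +1, +1, +1)

def gen_bracket(a, mu, b, nu, v):
    """[x_a^mu, x_b^nu]/i from Eq. (2.16)."""
    out = {}
    def add(key, c): out[key] = out.get(key, 0.0) + c
    add((a, nu), v[mu]); add((b, mu), -v[nu])
    if mu == nu:                       # eta^{mu nu} is diagonal
        for s in range(4):
            add((a, s), -ETA[mu] * ETA[s] * v[s])   # -eta^{mu nu} v_sigma x_a
            add((b, s), +ETA[mu] * ETA[s] * v[s])   # +eta^{mu nu} v_sigma x_b
    return out

def bracket(X, Y, v):
    """Bilinear extension of gen_bracket to linear combinations."""
    out = {}
    for (a, mu), cx in X.items():
        for (b, nu), cy in Y.items():
            for key, c in gen_bracket(a, mu, b, nu, v).items():
                out[key] = out.get(key, 0.0) + cx * cy * c
    return out

def jacobi(mu, nu, rho, v):
    """[x_1,[x_2,x_3]] + [x_2,[x_3,x_1]] + [x_3,[x_1,x_2]] (over i^2)."""
    X, Y, Z = {(1, mu): 1.0}, {(2, nu): 1.0}, {(3, rho): 1.0}
    total = {}
    for A, B, C in ((X, Y, Z), (Y, Z, X), (Z, X, Y)):
        for key, c in bracket(A, bracket(B, C, v), v).items():
            total[key] = total.get(key, 0.0) + c
    return total

v_light = [1.0, 1.0, 0.0, 0.0]   # v.v = -1 + 1 = 0
v_time = [1.0, 0.0, 0.0, 0.0]    # v.v = -1 != 0
```

Looping `jacobi` over all index triples gives identically zero coefficients for `v_light`, while, e.g., `jacobi(0, 1, 1, v_time)` does not vanish, in line with the proportionality of the Jacobi combination to $v^{\alpha}v_{\alpha}$.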
In order to have a proper (associative) comodule algebra, our commutators must also satisfy the Jacobi identity: $[x_{a}^{\mu},[x_{b}^{\nu},x_{c}^{\rho}]]+[x_{b}^{\nu},[x_{c}^{\rho},x_{a}^{\mu}]]+[x_{c}^{\rho},[x_{a}^{\mu},x_{b}^{\nu}]]=0\,.$ (2.18) A straightforward, but tedious, calculation reveals that $[x_{a}^{\mu},[x_{b}^{\nu},x_{c}^{\rho}]]+[x_{b}^{\nu},[x_{c}^{\rho},x_{a}^{\mu}]]+[x_{c}^{\rho},[x_{a}^{\mu},x_{b}^{\nu}]]=-v^{\alpha}v_{\alpha}\left[\eta^{\nu\rho}(x_{c}^{\mu}-x_{b}^{\mu})+\eta^{\rho\mu}(x_{a}^{\nu}-x_{c}^{\nu})+\eta^{\mu\nu}(x_{b}^{\rho}-x_{a}^{\rho})\right]\,,$ (2.19) and the only way the right-hand side can vanish is to have $v^{\alpha}v_{\alpha}=0$. We have obtained a significant result: the only $\kappa$-Minkowski-like algebra that admits a braided tensor product construction as a $\kappa$-Poincaré comodule is the _lightlike_ one, in which the deformation parameters $v^{\mu}$ form a lightlike vector. Our result is consistent with the one found by Jurić, Meljanac and Pikutić [59] using a Drinfeld twist. They too found that a covariant deformation of the tensor product can only be obtained in the lightlike case. This particular choice for the vector $v$ is remarkable for several other reasons, and the result we just derived makes it the only viable algebra, in the $\kappa$-Minkowski family, on which to construct a well-defined quantum field theory. ## 3 Representation of the braided $\kappa$-Minkowski algebra Let us review the results obtained so far. 
The following algebra, which we will call $\mathcal{A}^{\underline{\otimes}N}$, generated by the identity together with $4N$ generators $x^{\mu}_{a}$, $a=1,\dots,N$: $\left[x_{a}^{\mu},x_{b}^{\nu}\right]=i\left[v^{\mu}x_{a}^{\nu}-v^{\nu}x_{b}^{\mu}-\eta^{\mu\nu}\eta_{\rho\sigma}v^{\rho}\left(x_{a}^{\sigma}-x_{b}^{\sigma}\right)\right]\,,\qquad x^{\mu}_{a}\in\mathcal{A}^{\underline{\otimes}N}\,,$ (3.1) is a left comodule for the $\kappa$-Poincaré group: $\begin{gathered}\left[a^{\mu},a^{\nu}\right]=i\left(v^{\mu}a^{\nu}-v^{\nu}a^{\mu}\right)\,,\qquad[\Lambda^{\mu}{}_{\nu},\Lambda^{\rho}{}_{\sigma}]=0\,,\\\ \left[a^{\alpha},\Lambda^{\mu}{}_{\nu}\right]=i\left[\left(v^{\beta}\Lambda^{\mu}{}_{\beta}-v^{\mu}\right)\Lambda^{\alpha}{}_{\nu}+\left(\Lambda^{\beta}{}_{\nu}\eta_{\beta\rho}-\eta_{\nu\rho}\right)v^{\rho}\eta^{\alpha\mu}\right]\,,\end{gathered}$ (3.2) with respect to the coaction $x_{a}^{\prime\mu}=\Lambda^{\mu}{}_{\nu}x_{a}^{\nu}+a^{\mu}$, if the vector $v^{\mu}$ is light-like ($v^{\mu}v^{\nu}\eta_{\mu\nu}=0$). We now proceed to study the representations of the algebra (3.1). To start, notice that the relative positions: $\Delta x_{ab}^{\mu}=x_{a}^{\mu}-x_{b}^{\mu}\,,$ (3.3) close an Abelian subalgebra: $[\Delta x^{\mu}_{ab},\Delta x^{\nu}_{cd}]=0\qquad\forall\leavevmode\nobreak\ a,b,c,d\,.$ (3.4) These, however, are highly redundant. If we are interested in identifying a maximal Abelian subalgebra, we should introduce the ‘centre of mass’ coordinates: $x_{\text{cm}}^{\mu}=\frac{1}{N}\sum_{a=1}^{N}x_{a}^{\mu}\,,\qquad y^{\mu}_{a}=x^{\mu}_{a}-x_{\text{cm}}^{\mu}\,,$ (3.5) and it is then easy to show that $[y^{\mu}_{a},y^{\nu}_{b}]=0\qquad\forall\leavevmode\nobreak\ a,b\,.$ (3.6) The $y^{\mu}_{a}$ are $4N$ variables, but $4$ of these are redundant, because they satisfy the linear relation $\sum_{a=1}^{N}y_{a}^{\mu}=0$. So we have identified a $4(N-1)$-dimensional Abelian subalgebra. What about the remaining four variables, $x_{\text{cm}}^{\mu}$? 
Their commutators with $y_{a}^{\nu}$ give a linear combination of $y_{a}^{\nu}$, and they close a $\kappa$-Minkowski subalgebra with each other: $[x_{\text{cm}}^{\mu},y_{a}^{\nu}]=i\left(\eta^{\mu\nu}\eta_{\rho\sigma}v^{\rho}y^{\sigma}_{a}-v^{\nu}y^{\mu}_{a}\right)\,,\leavevmode\nobreak\ \leavevmode\nobreak\ [x_{\text{cm}}^{\mu},x_{\text{cm}}^{\nu}]=i\left(v^{\mu}x_{\text{cm}}^{\nu}-v^{\nu}x_{\text{cm}}^{\mu}\right)\,,$ (3.7) however the component of $x_{\text{cm}}^{\mu}$ along $v^{\mu}$ commutes with all the $y_{a}^{\nu}$: $w=\eta_{\mu\nu}v^{\mu}x_{\text{cm}}^{\nu}\leavevmode\nobreak\ \leavevmode\nobreak\ \Rightarrow\leavevmode\nobreak\ \leavevmode\nobreak\ [w,y^{\mu}_{a}]=0\,,\leavevmode\nobreak\ \leavevmode\nobreak\ [x_{\text{cm}}^{\mu},w]=i\,v^{\mu}w\,,$ (3.8) so we have identified a $(4N-3)$-dimensional Abelian subalgebra, generated by $y_{a}^{\mu}$ and $w$, while the three components of $x_{\text{cm}}^{\mu}$ perpendicular to $v^{\mu}$ are irreducibly noncommutative. Without loss of generality, we may assume $v^{\mu}=(1,1,0,0)$ (taking $v^{\mu}$ lightlike necessarily selects a special spatial direction). 
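These algebraic statements lend themselves to a direct symbolic check. The sketch below (sympy; the symbol names, the choice $N=3$, and the mostly-plus signature $\eta=\mathrm{diag}(-1,1,1,1)$ are our own assumptions, not the paper's notation) implements the bracket (3.1) bilinearly, verifies that the Jacobiator vanishes for the lightlike choice $v^{\mu}=(1,1,0,0)$ but not for a timelike $v^{\mu}$, and checks the subalgebra relations (3.6)-(3.8):

```python
# Symbolic sketch (sympy): the braided bracket (3.1) satisfies the Jacobi
# identity only for lightlike v, and Eqs. (3.6)-(3.8) hold.  The names,
# the choice N = 3 and the signature diag(-1,1,1,1) are our own assumptions.
import sympy as sp
from itertools import product

N, DIM = 3, 4
eta = sp.diag(-1, 1, 1, 1)
basis = [(a, mu) for a in range(N) for mu in range(DIM)]

def e(a, mu):                       # commutative placeholder for x_a^mu
    return sp.Symbol(f'x_{a}_{mu}')

def make_bracket(v):
    v_low = [sum(eta[s, r]*v[r] for r in range(DIM)) for s in range(DIM)]

    def brk_basis(a, mu, b, nu):    # [x_a^mu, x_b^nu], Eq. (3.1)
        out = v[mu]*e(a, nu) - v[nu]*e(b, mu)
        out -= eta[mu, nu]*sum(v_low[s]*(e(a, s) - e(b, s)) for s in range(DIM))
        return sp.I*sp.expand(out)

    def brk(X, Y):                  # bilinear extension to linear combinations
        X, Y = sp.expand(X), sp.expand(Y)
        out = sp.S.Zero
        for a, mu in basis:
            ca = X.coeff(e(a, mu))
            if ca == 0:
                continue
            for b, nu in basis:
                cb = Y.coeff(e(b, nu))
                if cb != 0:
                    out += ca*cb*brk_basis(a, mu, b, nu)
        return sp.expand(out)

    return brk, v_low

def jacobiators(v):
    brk, _ = make_bracket(v)
    return [sp.expand(brk(e(0, m), brk(e(1, n), e(2, r)))
                    + brk(e(1, n), brk(e(2, r), e(0, m)))
                    + brk(e(2, r), brk(e(0, m), e(1, n))))
            for m, n, r in product(range(DIM), repeat=3)]

v_null = [sp.S(1), sp.S(1), sp.S(0), sp.S(0)]   # lightlike in diag(-1,1,1,1)
v_time = [sp.S(1), sp.S(0), sp.S(0), sp.S(0)]   # timelike

jac_null = jacobiators(v_null)                  # expected: all zero
jac_time = jacobiators(v_time)                  # expected: not all zero

brk, v_low = make_bracket(v_null)
x_cm = [sum(e(a, mu) for a in range(N))/N for mu in range(DIM)]
y = lambda a, mu: e(a, mu) - x_cm[mu]
w = sum(v_low[mu]*x_cm[mu] for mu in range(DIM))

rel_ok = all(brk(y(0, m), y(1, n)) == 0 for m in range(DIM) for n in range(DIM))
w_ok = all(brk(w, y(0, m)) == 0 for m in range(DIM))
cm_ok = all(sp.expand(brk(x_cm[m], x_cm[n])
            - sp.I*(v_null[m]*x_cm[n] - v_null[n]*x_cm[m])) == 0
            for m in range(DIM) for n in range(DIM))
```

Note that the check of $[x_{\text{cm}}^{\mu},w]=i\,v^{\mu}w$ in (3.8) implicitly uses $v^{\alpha}v_{\alpha}=0$; with a non-null $v$ an extra $-i(v\cdot v)\,x_{\text{cm}}^{\mu}$ term would survive.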
Then the only noncommutative coordinates are $z=x_{\text{cm}}^{0}+x_{\text{cm}}^{1}$, $u=x_{\text{cm}}^{2}$ and $v=x_{\text{cm}}^{3}$, and the braided tensor product algebra is described by the following relations: $\displaystyle[y^{\mu}_{a},y^{\nu}_{b}]=[y^{\mu}_{a},w]=0\,,\qquad\sum_{a=1}^{N}y^{\mu}_{a}=0\,,$ (3.9) $\displaystyle[z,u]=2i\,u\,,\qquad[z,v]=2i\,v\,,\qquad[z,w]=2i\,w\,,\qquad[u,w]=[v,w]=[u,v]=0\,,$ $\displaystyle[u,y_{a}^{\nu}]=i\left(\eta^{2\nu}(y^{1}_{a}-y^{0}_{a})-\left(\delta^{\nu}_{0}+\delta^{\nu}_{1}\right)y^{2}_{a}\right)\,,\leavevmode\nobreak\ \leavevmode\nobreak\ [v,y_{a}^{\nu}]=i\left(\eta^{3\nu}(y^{1}_{a}-y^{0}_{a})-\left(\delta^{\nu}_{0}+\delta^{\nu}_{1}\right)y^{3}_{a}\right)\,,$ $\displaystyle[z,y_{a}^{\nu}]=i\left((\delta^{\nu}_{1}-\delta^{\nu}_{0})(y^{1}_{a}-y^{0}_{a})-\left(\delta^{\nu}_{0}+\delta^{\nu}_{1}\right)(y^{0}_{a}+y^{1}_{a})\right)\,.$ We can write a representation of the above algebra. The operators $y^{\mu}_{a}$, $a=1,\dots,N-1$ and $w$ are multiplicative with real spectrum, while the $N$-th coordinate is a linear combination of the others: $y^{\mu}_{N}=-\sum_{a=1}^{N-1}y^{\mu}_{a}$. 
Finally, $u$, $v$ and $z$ can be represented as the following Hermitian operators: $\displaystyle\hat{u}\,\psi(y^{\mu}_{1},\dots,y^{\mu}_{N-1},w)$ $\displaystyle=i\sum_{a=1}^{N-1}\left(y^{1}_{a}\frac{\partial}{\partial y^{2}_{a}}-y^{2}_{a}\frac{\partial}{\partial y^{1}_{a}}-y^{0}_{a}\frac{\partial}{\partial y^{2}_{a}}-y^{2}_{a}\frac{\partial}{\partial y^{0}_{a}}\right)\psi(y^{\mu}_{1},\dots,y^{\mu}_{N-1},w)\,,$ (3.10) $\displaystyle\hat{v}\,\psi(y^{\mu}_{1},\dots,y^{\mu}_{N-1},w)$ $\displaystyle=i\sum_{a=1}^{N-1}\left(y^{1}_{a}\frac{\partial}{\partial y^{3}_{a}}-y^{3}_{a}\frac{\partial}{\partial y^{0}_{a}}-y^{0}_{a}\frac{\partial}{\partial y^{3}_{a}}-y^{3}_{a}\frac{\partial}{\partial y^{1}_{a}}\right)\psi(y^{\mu}_{1},\dots,y^{\mu}_{N-1},w)\,,$ $\displaystyle\hat{z}\,\psi(y^{\mu}_{1},\dots,y^{\mu}_{N-1},w)$ $\displaystyle=-2i\left(\sum_{a=1}^{N-1}\left(y^{0}_{a}\frac{\partial}{\partial y^{1}_{a}}+y^{1}_{a}\frac{\partial}{\partial y^{0}_{a}}\right)-w\frac{\partial}{\partial w}-\frac{1}{2}\right)\psi(y^{\mu}_{1},\dots,y^{\mu}_{N-1},w)\,.$ Introducing the operators that generate the simultaneous Lorentz transformations of the $N-1$ coordinates $y^{\mu}_{a}$: $M^{\mu\nu}=i\,\sum_{a=1}^{N-1}\left(y^{\mu}_{a}\eta^{\nu\rho}\frac{\partial}{\partial y^{\rho}_{a}}-y^{\nu}_{a}\eta^{\mu\rho}\frac{\partial}{\partial y^{\rho}_{a}}\right)\,,$ (3.11) we notice that we are representing $u$, $v$ and $z$ as: $u=M^{12}-M^{02}\,,\qquad v=M^{13}-M^{03}\,,\qquad z=2M^{10}+2\,i\,w\frac{\partial}{\partial w}+i\,,$ (3.12) which reproduce the algebra $[z,u]=2iu$, $[z,v]=2iv$, $[u,v]=0$, as can be immediately verified by using the Lorentz algebra commutators $[M^{\mu\nu},M^{\rho\sigma}]=i\left(\eta^{\nu\rho}M^{\mu\sigma}-\eta^{\mu\rho}M^{\nu\sigma}-\eta^{\nu\sigma}M^{\mu\rho}+\eta^{\mu\sigma}M^{\nu\rho}\right)$. This is not the first time that the $\kappa$-Minkowski algebra is represented as linear combinations of Lorentz generators, see for example [60, 7]. 
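The commutators quoted after (3.12) can be verified in the fundamental $4\times 4$ vector representation of the Lorentz generators; the explicit matrices below, and the mostly-plus signature, are our own assumptions (the $w$-dilatation part of $z$ commutes with $u$ and $v$ and is dropped):

```python
# Check of the Lorentz-algebra commutators and of [z,u], [z,v], [u,v]
# in the 4x4 vector representation (M^{mu nu})^a_b = i(eta^{mu a} d^nu_b
# - eta^{nu a} d^mu_b); the representation and the signature
# diag(-1,1,1,1) are our own assumptions, not stated in the text.
import sympy as sp
from itertools import product

eta = sp.diag(-1, 1, 1, 1)

def M(mu, nu):
    return sp.Matrix(4, 4, lambda a, b: sp.I*(eta[mu, a]*int(nu == b)
                                              - eta[nu, a]*int(mu == b)))

def comm(A, B):
    return A*B - B*A

# the Lorentz algebra exactly as written in the text
lorentz_ok = all(
    comm(M(m, n), M(r, s)) == sp.I*(eta[n, r]*M(m, s) - eta[m, r]*M(n, s)
                                    - eta[n, s]*M(m, r) + eta[m, s]*M(n, r))
    for m, n, r, s in product(range(4), repeat=4))

u_op = M(1, 2) - M(0, 2)
v_op = M(1, 3) - M(0, 3)
z_op = 2*M(1, 0)    # w-dilatation part of z commutes with u_op, v_op

zu_ok = comm(z_op, u_op) == 2*sp.I*u_op
zv_ok = comm(z_op, v_op) == 2*sp.I*v_op
uv_ok = comm(u_op, v_op) == sp.zeros(4, 4)
```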
Our braided algebra admits a representation in terms of Lorentz generators acting on the $4(N-1)$-dimensional space of $N-1$ spacetime points, and a dilatation operator on the real line of $w$. Accordingly, the natural Hilbert space for the representation (3.10) is $L^{2}(\mathbbm{R}^{4N-3})$, with inner product: $\langle\varphi|\psi\rangle=\int d^{4}y_{1}\dots d^{4}y_{N-1}\,dw\,\bar{\varphi}(y^{\mu}_{1},\dots,y^{\mu}_{N-1},w)\psi(y^{\mu}_{1},\dots,y^{\mu}_{N-1},w)\,,$ (3.13) and all our operators are self-adjoint with respect to this Hilbert space. ## 4 $\kappa$-Poincaré-invariant Quantum Field Theory We will now lay the ground for a consistent construction of a QFT on the $\kappa$-Minkowski noncommutative spacetime. The first step is to define what we mean by QFT in this context. As is well known, a QFT on a commutative spacetime (in particular Minkowski) is entirely defined in terms of all $N$-point functions [61]. We can import this definition into our noncommutative setting; however, now the $N$-point functions have to be replaced with elements of our braided $N$-point algebra, which are noncommutative operators. However, the $\kappa$-Poincaré invariance that characterizes our theory comes to our aid. It turns out that all $\kappa$-Poincaré invariant elements of our braided algebra (as $N$-point functions should be) are elements of the abelian subalgebra of coordinate separations $x^{\mu}_{a}-x^{\mu}_{b}$ (or, equivalently, the coordinates $y^{\mu}_{a}$ relative to the centre of mass). As operators, therefore, they can all be simultaneously localized arbitrarily well, and they can be effectively treated as _bona fide_ commutative functions. It is clear that all the commutative Poincaré-invariant polynomials remain invariant under the coaction (2.3). 
These are the functions of the (squared) proper distances: $\eta_{\mu\nu}(x_{a}^{\prime\mu}-x_{b}^{\prime\mu})(x_{a}^{\prime\nu}-x_{b}^{\prime\nu})=\eta_{\mu\nu}(x_{a}^{\mu}-x_{b}^{\mu})(x_{a}^{\nu}-x_{b}^{\nu})\qquad\forall a,b=1,\dots,N\,.$ (4.1) It would be interesting to check whether these are the _only_ Poincaré-invariant polynomials in the noncommutative case; however, we do not have a proof of this at the moment. If we focus on functions that can be Fourier transformed (which is what we are interested in, if we want to define the $N$-point functions), in the commutative case one can see that $f(x^{\prime\mu}_{a})=\int d^{4}k^{1}\dots d^{4}k^{N}\tilde{f}(k_{\mu}^{a})e^{i\sum_{a=1}^{N}k^{a}_{\mu}\Lambda^{\mu}{}_{\nu}x^{\nu}_{a}}e^{i\sum_{a=1}^{N}k^{a}_{\mu}a^{\mu}}\,,$ (4.2) is equal to $f(x^{\mu}_{a})$ only if $\tilde{f}(k^{a}_{\mu})\propto\delta^{4}\left(\sum_{a=1}^{N}k^{a}_{\mu}\right)$ (translation invariance), and $\tilde{f}(\Lambda_{\nu}{}^{\mu}k^{a}_{\mu})=\tilde{f}(k^{a}_{\nu})$ (Lorentz invariance). In the noncommutative case, the coordinate algebra is replaced by a Lie algebra (3.1). Therefore plane waves, _i.e._ exponentials of the generators, are replaced by Lie group elements, and Fourier transforms admit a definition in terms of a group average [47, 14]. We can represent a generic group element once we choose a factorization, _i.e._ an ordering choice. For example: $e^{ik^{1}_{\mu}x^{\mu}_{1}}\dots e^{ik^{N}_{\mu}x^{\mu}_{N}}$ (4.3) covers all group elements, upon varying the $k^{a}_{\mu}$ over all of $\mathbbm{R}^{4N}$. 
Once this ordering prescription has been introduced, all Fourier-transformable functions can be represented as a linear combination of group elements: $f(x^{\mu}_{a})=\int d^{4}k^{1}\dots d^{4}k^{N}\,\tilde{f}(k_{\mu}^{a})\,e^{ik^{1}_{\mu}x^{\mu}_{1}}\dots e^{ik^{N}_{\mu}x^{\mu}_{N}}\,.$ (4.4) It is now convenient to split the coordinates into the relative coordinates $y_{a}^{\mu}$, which are translation-invariant and commutative, and the coordinates of the center of mass $x^{\mu}_{\text{cm}}$. We already found the algebra that these coordinates close, Eq. (3.7), and the main feature we would like to highlight is that the algebra realizes an action of the $x^{\mu}_{\text{cm}}$ generators on the $y_{a}^{\mu}$ ones, because the $x^{\mu}_{\text{cm}}$ generators close a subalgebra, and their commutators with $y_{b}^{\nu}$ give a linear combination of $y_{b}^{\nu}$. Consider now this fact: $e^{ik^{a}_{\mu}(y^{\mu}_{a}+x^{\mu}_{\text{cm}})}=e^{iq^{a}_{\mu}y^{\mu}_{a}}e^{ik^{a}_{\mu}x^{\mu}_{\text{cm}}}$ (4.5) where $q^{a}_{\mu}=q^{a}_{\mu}(k^{a}_{0},k^{a}_{1},k^{a}_{2},k^{a}_{3})$ is a certain function of $k^{a}_{\mu}$. The above equation is always true, and is a consequence of the Baker–Campbell–Hausdorff formula when $k^{a}_{\mu}x^{\mu}_{\text{cm}}$ belongs to the subalgebra acting on $k^{a}_{\mu}y^{\mu}_{a}$. Now consider this other identity, which is always true for Lie groups: $e^{ik^{a}_{\mu}x^{\mu}_{\text{cm}}}e^{iq^{b}_{\mu}y^{\mu}_{b}}=e^{i(k^{a}\triangleright q^{b})_{\mu}y^{\mu}_{b}}e^{ik^{a}_{\mu}x^{\mu}_{\text{cm}}}$ (4.6) where $\triangleright$ is the adjoint action of the group on itself. 
Finally, the subgroup properties imply the existence of an associative deformed sum of momenta $\boxplus:\mathbbm{R}^{4}\times\mathbbm{R}^{4}\to\mathbbm{R}^{4}$ which realizes the product of the subgroup generated by $x^{\mu}_{\text{cm}}$: $e^{ip_{\mu}x^{\mu}_{\text{cm}}}e^{iq_{\mu}x^{\mu}_{\text{cm}}}=e^{i(p\boxplus q)_{\mu}x^{\mu}_{\text{cm}}}\,.$ (4.7) Armed with the three identities listed above, we can rewrite Eq. (4.4) in the following form: $\displaystyle f(x^{\mu}_{a})$ $\displaystyle=\int d^{4}k^{1}\dots d^{4}k^{N}\,\tilde{f}(k_{\mu}^{a})\,e^{iq^{1}_{\mu}y^{\mu}_{1}}e^{ik^{1}_{\mu}x^{\mu}_{\text{cm}}}e^{iq^{2}_{\mu}y^{\mu}_{2}}e^{ik^{2}_{\mu}x^{\mu}_{\text{cm}}}\dots e^{iq^{N}_{\mu}y^{\mu}_{N}}e^{ik^{N}_{\mu}x^{\mu}_{\text{cm}}}$ (4.8) $\displaystyle=\int d^{4}k^{1}\dots d^{4}k^{N}\,\tilde{f}(k_{\mu}^{a})\,e^{iq^{1}_{\mu}y^{\mu}_{1}}e^{i(k^{1}\triangleright q^{2})_{\mu}y^{\mu}_{2}}\dots e^{i\left((k^{1}\boxplus k^{2}\boxplus\dots\boxplus k^{N-1})\triangleright q^{N}\right)_{\mu}y^{\mu}_{N}}e^{i(k^{1}\boxplus k^{2}\boxplus\dots\boxplus k^{N})_{\mu}x^{\mu}_{\text{cm}}}\,.$ This proves that, if $\tilde{f}(k_{\mu}^{a})\propto\delta^{(4)}(k^{1}\boxplus k^{2}\boxplus\dots\boxplus k^{N})$, the dependence on $x^{\mu}_{\text{cm}}$ completely disappears. A necessary condition for $f(x^{\mu}_{a})$ to be $\kappa$-Poincaré-invariant is therefore that $k^{1}\boxplus k^{2}\boxplus\dots\boxplus k^{N}=0$, so that the dependence on $x^{\mu}_{\text{cm}}$ drops. 
In fact, transforming all coordinates according to the coaction (2.3) we get $\displaystyle f({x^{\prime}}^{\mu}_{a})=\int d^{4}k^{1}\dots d^{4}k^{N}\,\tilde{f}(k_{\mu}^{a})\,e^{iq^{1}_{\mu}\Lambda^{\mu}{}_{\nu}y^{\nu}_{1}}e^{i(k^{1}\triangleright q^{2})_{\mu}\Lambda^{\mu}{}_{\nu}y^{\nu}_{2}}\dots e^{i\left((k^{1}\boxplus k^{2}\boxplus\dots\boxplus k^{N-1})\triangleright q^{N}\right)_{\mu}\Lambda^{\mu}{}_{\nu}y^{\nu}_{N}}$ (4.9) $\displaystyle\cdot e^{i(k^{1}\boxplus k^{2}\boxplus\dots\boxplus k^{N})_{\mu}\Lambda^{\mu}{}_{\nu}x^{\nu}_{\text{cm}}}e^{i(k^{1}\boxplus k^{2}\boxplus\dots\boxplus k^{N})_{\mu}a^{\mu}}\,,$ and the $a^{\mu}$-dependent exponential disappears only if $k^{1}\boxplus k^{2}\boxplus\dots\boxplus k^{N}=0$. Therefore translation invariance alone ensures that $N$-point functions are commutative, because they are elements of the Abelian subalgebra generated by $y^{\mu}_{a}$. Let us now take a deep dive into the structures that are necessary to build a consistent QFT on $\kappa$-Minkowski. We will begin with the properties of plane waves, which, as we already remarked, are Lie group elements, and can be mapped into points on a pseudo-Riemannian manifold, _momentum space_. We will study all the structures that spacetime noncommutativity induces on said momentum space, and their relation. We will focus in particular on the issue of ordering and coordinate systems on momentum space: each ordering prescription of polynomials of noncommutative coordinates corresponds to a choice of coordinates on momentum space, and changes of ordering coincide with diffeomorphisms on momentum space. One of the guiding principles of our analysis will be that all physical quantities (and, in particular, $N$-point functions) will have to be independent of the ordering choice, and therefore they will have to be Riemannian invariants on momentum space. 
From now on, we will focus on 1+1 spacetime dimensions, which significantly simplifies the calculations, although everything we say can be generalized to arbitrary dimensions. ### 4.1 Plane waves paraphernalia In the $1+1$-dimensional case, it is convenient to rewrite the algebra (3.1) in lightcone coordinates: $x_{a}^{\pm}=x_{a}^{0}\pm x_{a}^{1}\,,\qquad x^{0}_{a}=\frac{x_{a}^{+}+x_{a}^{-}}{2}\,,\leavevmode\nobreak\ \leavevmode\nobreak\ x^{1}_{a}=\frac{x_{a}^{+}-x_{a}^{-}}{2}\,,$ (4.10) the commutation relations now take the form $[x^{+}_{a},x^{+}_{b}]=2i(x^{+}_{a}-x^{+}_{b})\,,\leavevmode\nobreak\ \leavevmode\nobreak\ [x^{+}_{a},x^{-}_{b}]=2ix^{-}_{b}\,,\leavevmode\nobreak\ \leavevmode\nobreak\ [x^{-}_{a},x^{+}_{b}]=-2ix^{-}_{a}\,,\leavevmode\nobreak\ \leavevmode\nobreak\ [x^{-}_{a},x^{-}_{b}]=0\,.$ (4.11) When $a=b$, the coordinates of a single point close the algebra $[x^{+}_{a},x^{-}_{a}]=2ix^{-}_{a}\,,$ (4.12) which is identical (up to a normalization factor) to the timelike 1+1-dimensional $\kappa$-Minkowski algebra $\mathfrak{an}(1)$. The natural ordering prescriptions for polynomials then involve putting the $x^{+}_{a}$ coordinate to the right (resp. 
left) of $x^{-}_{a}$: $:(x^{+}_{a})^{n}(x^{-}_{a})^{m}:_{\text{R}}=(x^{-}_{a})^{m}(x^{+}_{a})^{n}\,,$ (4.13) $:(x^{+}_{a})^{n}(x^{-}_{a})^{m}:_{\text{L}}=(x^{+}_{a})^{n}(x^{-}_{a})^{m}\,,$ (4.14) or symmetrizing them: $:(x^{+}_{a})^{n}(x^{-}_{a})^{m}:_{\text{S}}=\frac{1}{2}(x^{+}_{a})^{n}(x^{-}_{a})^{m}+\frac{1}{2}(x^{-}_{a})^{m}(x^{+}_{a})^{n}\,,$ (4.15) or, also, Weyl-ordering them: $:(x^{+}_{a})^{n}(x^{-}_{a})^{m}:_{\text{W}}=\frac{1}{2^{m}}\sum_{r=0}^{m}(x^{+}_{a})^{m-r}(x^{-}_{a})^{r}\,.$ (4.16) The linear maps $:\underline{\leavevmode\nobreak\ }:_{\text{R}}$, $:\underline{\leavevmode\nobreak\ }:_{\text{L}}$, $:\underline{\leavevmode\nobreak\ }:_{\text{S}}$ and $:\underline{\leavevmode\nobreak\ }:_{\text{W}}$ are _Weyl maps_, which go from the algebra of commutative polynomials to the algebra (4.11), see, _e.g._ [18]. These maps are isomorphisms between our noncommutative algebra of functions and the commutative one, and the commutation relations (4.12) allow one to translate from one map to the other, _e.g._ : $:x^{+}_{a}x^{-}_{a}:_{\text{R}}=:\left(x^{+}_{a}x^{-}_{a}+2ix^{-}_{a}\right):_{\text{L}}=:\left(x^{+}_{a}x^{-}_{a}+ix^{-}_{a}\right):_{\text{W}}\,.$ (4.17) The linear nature of Fourier theory allows us to use these Weyl maps to map commutative Fourier-transformable functions (understood as functions on momentum space) to noncommutative functions with a certain ordering. As we will show, the same noncommutative function will admit different Fourier transforms, one for each choice of ordering, and these momentum space functions are related to each other by general coordinate transformations, _i.e._, diffeomorphisms of momentum space [47]. #### 4.1.1 Plane waves of a single coordinate For illustrative purposes, from now on we will work with right-ordered and Weyl-ordered functions, showing at each step how to translate one description into the other. Again, keep in mind that our guiding principle is that no physical quantity should depend on the ordering choice. 
Let us introduce the right-ordered plane waves, which provide a basis for Fourier theory: ${\,\scaleobj{1.3}{\bm{e}}}_{a}[k]=e^{ik_{-}x_{a}^{-}}e^{ik_{+}x_{a}^{+}}\,,$ (4.18) they are labeled by $(k_{-},k_{+})\in\mathbbm{R}^{2}$, and are closed under Hermitian conjugation: ${\,\scaleobj{1.3}{\bm{e}}}_{a}^{\dagger}[k]={\,\scaleobj{1.3}{\bm{e}}}_{a}[S(k)]\,,\qquad S(k)=(-e^{2k_{+}}k_{-},-k_{+})\,.$ (4.19) The map $S:\mathbbm{R}^{2}\to\mathbbm{R}^{2}$ is an involution ($S\circ S=\text{id}$), called _antipode_. Since, as we remarked earlier, ${\,\scaleobj{1.3}{\bm{e}}}_{a}[k]$ span the whole group $AN(1)$ associated to the Lie algebra $[x^{+}_{a},x^{-}_{a}]=2ix^{-}_{a}$, the map $S$ realizes the group inverse, and its properties follow from it. Another group axiom that can be represented as a map on the coordinates $k_{\pm}$ is the product: ${\,\scaleobj{1.3}{\bm{e}}}_{a}[k]{\,\scaleobj{1.3}{\bm{e}}}_{a}[q]={\,\scaleobj{1.3}{\bm{e}}}_{a}[k\oplus q]\,,\qquad k\oplus q=(k_{-}+e^{-2k_{+}}q_{-},k_{+}+q_{+})\,,$ (4.20) now the map $\oplus:\mathbbm{R}^{2}\times\mathbbm{R}^{2}\to\mathbbm{R}^{2}$ will be referred to as _coproduct_, or _momentum composition law_. Its properties follow from the axioms of Lie groups: $(k\oplus q)\oplus p=k\oplus(q\oplus p)=k\oplus q\oplus p\,,\qquad k\oplus S(k)=S(k)\oplus k=o\,,\qquad S(k\oplus q)=S(q)\oplus S(k)\,.$ (4.21) The first rule expresses the associativity of $\oplus$, the second is the fact that $S$ is a bilateral inverse for $\oplus$, where $o=(0,0)$ are the coordinates of the origin of momentum space, and the third expresses the antihomomorphism property of the group inverse. 
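The group axioms (4.21) can be checked numerically with plain floats; the sample momenta below are arbitrary choices of ours:

```python
# Numerical check (plain floats) of the group axioms (4.21) for the
# composition law (4.20) and the antipode (4.19); the sample momenta
# are arbitrary choices of ours.
import math

def oplus(k, q):                          # Eq. (4.20)
    return (k[0] + math.exp(-2*k[1])*q[0], k[1] + q[1])

def antipode(k):                          # Eq. (4.19)
    return (-math.exp(2*k[1])*k[0], -k[1])

def close(a, b):
    return all(math.isclose(x, y, abs_tol=1e-12) for x, y in zip(a, b))

k, q, p = (0.3, -0.7), (-1.1, 0.4), (0.25, 0.9)
o = (0.0, 0.0)

assoc_ok   = close(oplus(oplus(k, q), p), oplus(k, oplus(q, p)))
inverse_ok = close(oplus(k, antipode(k)), o) and close(oplus(antipode(k), k), o)
invol_ok   = close(antipode(antipode(k)), k)
antihom_ok = close(antipode(oplus(k, q)), oplus(antipode(q), antipode(k)))
neutral_ok = close(oplus(o, q), q) and close(oplus(q, o), q)
```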
The momentum-space origin $o$ is the neutral element for the composition law/coproduct: $o\oplus q=q\oplus o=q\,,$ (4.22) and the plane waves with momentum $o$ are the identity element of the algebra: ${\,\scaleobj{1.3}{\bm{e}}}_{a}[o]=1\,.$ (4.23) #### 4.1.2 Translation-invariant products of two-point plane waves The product of two plane waves of different coordinates ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]$, ${\,\scaleobj{1.3}{\bm{e}}}_{2}[q]$ lies within the Abelian subalgebra of the functions of coordinate differences, $x^{\mu}_{1}-x^{\mu}_{2}$ (_i.e._, it is translation-invariant), if the momenta of the two waves are the antipode of each other. There are four ways to combine two such waves: ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]\,,\leavevmode\nobreak\ \leavevmode\nobreak\ {\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[k]\,,\leavevmode\nobreak\ \leavevmode\nobreak\ {\,\scaleobj{1.3}{\bm{e}}}_{2}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[k]\,,\leavevmode\nobreak\ \leavevmode\nobreak\ {\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]\,,$ (4.24) the first and the third expressions are Hermitian conjugates, as are the second and fourth. Let us now calculate explicitly the functional form of the product of two waves: ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[q]=e^{ik_{-}x_{1}^{-}}e^{ik_{+}x_{1}^{+}}e^{iq_{-}x_{2}^{-}}e^{iq_{+}x_{2}^{+}}\,,$ (4.25) we want to order the expression by having all $x_{a}^{-}$ coordinates to the left, and all $x_{a}^{+}$ to the right. We need to commute $e^{ik_{+}x_{1}^{+}}$ and $e^{iq_{-}x_{2}^{-}}$, where the coordinates $x_{1}^{+}$, $x_{2}^{-}$ close the subalgebra $[x_{1}^{+},x_{2}^{-}]=2ix_{2}^{-}$ of the algebra (4.11), so, using the well-known $\kappa$-Minkowski commutation rule between exponentials [47] (see Appendix A Eq. 
(A.6) with $\kappa\to 1/2$): $e^{ik_{+}x_{1}^{+}}e^{iq_{-}x_{2}^{-}}=e^{ie^{-2k_{+}}q_{-}x_{2}^{-}}e^{ik_{+}x_{1}^{+}}\,.$ (4.26) The coordinates $x_{a}^{-}$ commute with each other (4.11), therefore our expression takes the form ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[q]=e^{i\left(k_{-}x_{1}^{-}+e^{-2k_{+}}q_{-}x_{2}^{-}\right)}e^{ik_{+}x_{1}^{+}}e^{iq_{+}x_{2}^{+}}\,.$ (4.27) Since the coordinates $x_{a}^{+}$ close the subalgebra $[x_{1}^{+},x_{2}^{+}]=2i(x_{1}^{+}-x_{2}^{+})$ (4.11), it is convenient to make the linear redefinition $X=\frac{x_{1}^{+}-x_{2}^{+}}{4}\,,\qquad T=-\frac{x_{1}^{+}+x_{2}^{+}}{4}\,,\qquad x^{+}_{1}=2(X-T)\,,\qquad x^{+}_{2}=-2(X+T)\,,$ (4.28) so that we have another copy of the timelike $\kappa$-Minkowski algebra $[T,X]=iX$, and use the rule to combine two Weyl-ordered $\kappa$-Minkowski waves (A.15) (recall that we are using the convention $\kappa=1$): $e^{i\left(\alpha T+\beta X\right)}e^{i\left(\gamma T+\delta X\right)}=e^{i(\alpha+\gamma)T+i\left(\frac{(\alpha+\gamma)}{1-e^{-(\alpha+\gamma)}}\right)\left[\left(\frac{1-e^{-\alpha}}{\alpha}\right)\beta+e^{-\alpha}\left(\frac{1-e^{-\gamma}}{\gamma}\right)\delta\right]X}\,,$ (4.29) since $e^{ik_{+}x_{1}^{+}}e^{iq_{+}x_{2}^{+}}=e^{i2k_{+}(X-T)}e^{-i2q_{+}(X+T)}$ we can obtain our desired expression by making the replacements $\alpha=-2k_{+}$, $\beta=2k_{+}$, $\gamma=-2q_{+}$, $\delta=-2q_{+}$. 
The result is: $\displaystyle e^{ik_{+}x_{1}^{+}}e^{iq_{+}x_{2}^{+}}$ $\displaystyle=e^{-2i(k_{+}+q_{+})T+2i\left(\frac{k_{+}+q_{+}}{1-e^{2(k_{+}+q_{+})}}\right)\left[\left(1-e^{2k_{+}}\right)-e^{2k_{+}}\left(1-e^{2q_{+}}\right)\right]X}$ (4.30) $\displaystyle=e^{2i(k_{+}+q_{+})\left[\left(\frac{1-2e^{2k_{+}}+e^{2(k_{+}+q_{+})}}{1-e^{2(k_{+}+q_{+})}}\right)X-T\right]}\,.$ Replacing the expressions for $X$ and $T$ we obtain the final expression: ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[q]=e^{i\left(k_{-}x_{1}^{-}+e^{-2k_{+}}q_{-}x_{2}^{-}\right)}e^{i\frac{k_{+}+q_{+}}{1-e^{2(k_{+}+q_{+})}}\left[\left(1-e^{2k_{+}}\right)x^{+}_{1}+e^{2k_{+}}\left(1-e^{2q_{+}}\right)x^{+}_{2}\right]}\,,$ (4.31) For $q=S(k)=(-e^{2k_{+}}k_{-},-k_{+})$, the whole expression turns into: ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]=e^{i\xi_{-}\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\xi_{+}\left(x^{+}_{1}-x^{+}_{2}\right)}\,,$ (4.32) where we introduce the notation (which will be useful later): $\xi_{-}=k_{-}\,,\qquad\xi_{+}=\left(\frac{e^{2k_{+}}-1}{2}\right)\,.$ (4.33) The expression above depends only on $(x^{\mu}_{1}-x^{\mu}_{2})$, as anticipated. Had we chosen to put the dagger on ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]$, we would have obtained: ${\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[k]=e^{i\chi_{-}\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\chi_{+}\left(x^{+}_{1}-x^{+}_{2}\right)}\,,$ (4.34) where $\chi_{-}=-e^{2k_{+}}k_{-}=S(\xi_{-})\,,\qquad\chi_{+}=\left(\frac{e^{-2k_{+}}-1}{2}\right)=S(\xi_{+})\,.$ (4.35) Notice how the functions $\xi_{\pm}(k_{\pm})$ map $\mathbbm{R}^{2}$ into a half-plane of $\mathbbm{R}^{2}$, because $\xi_{+}>-\frac{1}{2}$. This has significant consequences, which we will comment upon below. 
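The Weyl-ordered combination rule (4.29) used in this derivation can be tested in a faithful $2\times 2$ matrix representation of $[T,X]=iX$; the matrices, the truncated series exponential, and the numerical values below are our own choices:

```python
# Check of the combination rule (4.29) in a faithful 2x2 matrix
# representation of [T, X] = iX (here T -> i*N_, X -> E_, which satisfy
# T X - X T = i X); the representation and sample values are our own.
import numpy as np

N_ = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
E_ = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
T_, X_ = 1j*N_, E_

def expm(A, terms=60):          # truncated exponential series
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

def f(a):                       # (1 - e^{-a}) / a
    return (1 - np.exp(-a))/a

a, b, c, d = 0.7, -0.4, 0.3, 1.1
lhs = expm(1j*(a*T_ + b*X_)) @ expm(1j*(c*T_ + d*X_))
# coefficient of X in the combined exponent, read off from Eq. (4.29):
g = ((a + c)/(1 - np.exp(-(a + c)))) * (f(a)*b + np.exp(-a)*f(c)*d)
rhs = expm(1j*((a + c)*T_ + g*X_))
```

Since the representation is faithful on the corresponding $AN(1)$ group, equality of the matrices is equivalent to equality of the group elements.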
Here we just observe that, if one multiplies the two plane waves ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]$ and ${\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]$, which have arbitrary frequencies $k_{\mu}\in\mathbbm{R}^{2}$, the resulting translation-invariant wave (4.34) cannot have an arbitrary frequency. The coordinate differences $x_{1}^{\mu}-x_{2}^{\mu}$ in (4.34) are multiplied by frequencies that belong to a sub-region of $\mathbbm{R}^{2}$. The other two possible translation-invariant products of plane waves can be obtained from scratch with an analogous calculation, or, equivalently, by taking the Hermitian conjugate of the expressions (4.32) and (4.34), which simply amounts to changing the sign of the exponents (or swapping coordinates 1 and 2), because the coordinate differences $(x^{\mu}_{1}-x^{\mu}_{2})$ commute with each other: ${\,\scaleobj{1.3}{\bm{e}}}_{2}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[k]=\left({\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]\right)^{\dagger}=e^{-i\xi_{-}\left(x_{1}^{-}-x_{2}^{-}\right)}e^{-i\xi_{+}\left(x^{+}_{1}-x^{+}_{2}\right)}\,,$ (4.36) ${\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]=\left({\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[k]\right)^{\dagger}=e^{-i\chi_{-}\left(x_{1}^{-}-x_{2}^{-}\right)}e^{-i\chi_{+}\left(x^{+}_{1}-x^{+}_{2}\right)}\,.$ (4.37) #### 4.1.3 Plane waves in different bases/orderings Let us now define the Weyl-ordered plane waves as ${\,\scaleobj{1.3}{\bm{f}}}_{a}[q_{-},q_{+}]=e^{iq_{-}x^{-}_{a}+iq_{+}x^{+}_{a}}={\,\scaleobj{1.3}{\bm{e}}}_{a}\left[\textstyle\left(\frac{1-e^{-2q_{+}}}{2q_{+}}\right)q_{-},q_{+}\right]\,,$ (4.38) where the last expression comes from Eq. (A.12) with $\kappa\to 1/2$. 
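The change of basis in (4.38) can likewise be checked in a faithful $2\times 2$ representation, now of $[x^{+},x^{-}]=2ix^{-}$; the matrices and sample frequencies below are our own choices:

```python
# Check of Eq. (4.38): a Weyl-ordered wave equals a right-ordered wave
# with rescaled k_-, in a 2x2 representation of [x^+, x^-] = 2i x^-
# (x^+ -> 2i*N_, x^- -> E_); representation and values are our own.
import numpy as np

N_ = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
E_ = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
xp, xm = 2j*N_, E_              # satisfy xp xm - xm xp = 2i xm

def expm(A, terms=60):          # truncated exponential series
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

qm, qp = 0.8, -0.45
weyl = expm(1j*(qm*xm + qp*xp))                  # f_a[q_-, q_+]
km = (1 - np.exp(-2*qp))/(2*qp) * qm             # rescaling in Eq. (4.38)
right = expm(1j*km*xm) @ expm(1j*qp*xp)          # e_a[k_-, k_+]
```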
If $k_{\pm}$ are the frequencies of a right-ordered plane wave ${\,\scaleobj{1.3}{\bm{e}}}(k_{\pm})$, and $q_{\pm}$ are those of a Weyl-ordered wave ${\,\scaleobj{1.3}{\bm{f}}}(q_{\pm})$, their relation is: $k_{-}=\left(\frac{1-e^{-2q_{+}}}{2q_{+}}\right)q_{-}\qquad k_{+}=q_{+}\,,\qquad q_{-}=\left(\frac{2k_{+}}{1-e^{-2k_{+}}}\right)k_{-}\qquad q_{+}=k_{+}\,,$ (4.39) which implies that ${\,\scaleobj{1.3}{\bm{e}}}(k_{\pm})={\,\scaleobj{1.3}{\bm{f}}}(q_{\pm})$. Let us now look at the composition law of Weyl-ordered waves: ${\,\scaleobj{1.3}{\bm{f}}}_{1}[k]{\,\scaleobj{1.3}{\bm{f}}}_{1}[q]={\,\scaleobj{1.3}{\bm{f}}}_{1}[k\oplus^{\prime}q]\,,$ (4.40) the map $\oplus^{\prime}$ is explicitly calculated in Appendix A: replacing $\kappa$ with $1/2$ in Eq. (A.15) we get $\displaystyle(k\oplus^{\prime}q)_{-}$ $\displaystyle=\left(\frac{2(k_{+}+q_{+})}{1-e^{-2(k_{+}+q_{+})}}\right)\left[\left(\frac{1-e^{-2k_{+}}}{2k_{+}}\right)k_{-}+e^{-2k_{+}}\left(\frac{1-e^{-2q_{+}}}{2q_{+}}\right)q_{-}\right]\,,$ (4.41) $\displaystyle(k\oplus^{\prime}q)_{+}$ $\displaystyle=k_{+}+q_{+}\,.$ The antipode map: ${\,\scaleobj{1.3}{\bm{f}}}^{\dagger}[q]={\,\scaleobj{1.3}{\bm{f}}}[S^{\prime}(q)]\leavevmode\nobreak\ \Rightarrow\leavevmode\nobreak\ S^{\prime}[k]=-k\,,$ (4.42) can also be calculated by combining the right-ordered antipode, $S(k)=(-e^{2k_{+}}k_{-},-k_{+})$, with the coordinate change (4.39), which we will call $\phi$: $\displaystyle S^{\prime}(q)$ $\displaystyle=(\phi^{-1}\circ S\circ\phi)(q)=\phi^{-1}\left[\left(\frac{2S(k)_{+}}{1-e^{-2S(k)_{+}}}\right)S(k)_{-},S(k)_{+}\right]$ (4.43) $\displaystyle=\left(-\left(\frac{2k_{+}}{1-e^{-2k_{+}}}\right)k_{-},-k_{+}\right)\Big{|}_{\tiny\begin{array}[]{l}k_{-}\to q_{-}\left(1-e^{-2q_{+}}\right)/2q_{+}\\\ k_{+}\to q_{+}\end{array}}=(-q_{-},-q_{+})\,,$ which confirms the above result that the $S^{\prime}$ map just puts a minus in front of both components of $q$. 
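As a consistency check, the Weyl-ordered composition law (4.41) and antipode (4.42) must coincide with the right-ordered ones (4.19)-(4.20) transported through the coordinate change (4.39); a numerical sketch (sample momenta our own):

```python
# Numerical consistency check: oplus' (4.41) equals phi^{-1}(phi(k) oplus
# phi(q)), and phi^{-1}(S(phi(k))) = -k as in Eq. (4.42); the function
# names and sample momenta are our own choices.
import math

def f(p):                                 # (1 - e^{-2p}) / (2p)
    return (1 - math.exp(-2*p))/(2*p)

def phi(q):                               # Weyl -> right-ordered, Eq. (4.39)
    return (f(q[1])*q[0], q[1])

def phi_inv(k):
    return (k[0]/f(k[1]), k[1])

def oplus(k, q):                          # right-ordered law, Eq. (4.20)
    return (k[0] + math.exp(-2*k[1])*q[0], k[1] + q[1])

def oplus_w(k, q):                        # Weyl-ordered law, Eq. (4.41)
    s = k[1] + q[1]
    return ((f(k[1])*k[0] + math.exp(-2*k[1])*f(q[1])*q[0])/f(s), s)

def S(k):                                 # right-ordered antipode, Eq. (4.19)
    return (-math.exp(2*k[1])*k[0], -k[1])

k, q = (0.6, -0.35), (-1.2, 0.5)
lhs = oplus_w(k, q)
rhs = phi_inv(oplus(phi(k), phi(q)))
sprime = phi_inv(S(phi(k)))               # expected: (-k_-, -k_+)

law_ok = all(math.isclose(x, y, abs_tol=1e-12) for x, y in zip(lhs, rhs))
antipode_ok = all(math.isclose(x, y, abs_tol=1e-12)
                  for x, y in zip(sprime, (-k[0], -k[1])))
```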
Finally, we can check where the origin of momentum space is mapped by the coordinate change: $o^{\prime}=\phi^{-1}(o)=(0,0)$, which is consistent with the fact that ${\,\scaleobj{1.3}{\bm{f}}}_{a}[q]\to 1$ when $q_{-}=q_{+}=0$. We are now ready to study the products of plane waves of different points. The calculation is similar to that for right-ordered waves: $\displaystyle{\,\scaleobj{1.3}{\bm{f}}}_{1}[k]{\,\scaleobj{1.3}{\bm{f}}}_{2}[q]$ $\displaystyle=e^{ik_{-}x^{-}_{1}+ik_{+}x^{+}_{1}}e^{iq_{-}x^{-}_{2}+iq_{+}x^{+}_{2}}=e^{i\left(\frac{1-e^{-2k_{+}}}{2k_{+}}\right)k_{-}x^{-}_{1}}e^{ik_{+}x^{+}_{1}}e^{i\left(\frac{1-e^{-2q_{+}}}{2q_{+}}\right)q_{-}x^{-}_{2}}e^{iq_{+}x^{+}_{2}}$ (4.44) $\displaystyle=e^{i\left(\frac{1-e^{-2k_{+}}}{2k_{+}}\right)k_{-}x^{-}_{1}}e^{ie^{-2k_{+}}\left(\frac{1-e^{-2q_{+}}}{2q_{+}}\right)q_{-}x^{-}_{2}}e^{ik_{+}x^{+}_{1}}e^{iq_{+}x^{+}_{2}}$ $\displaystyle=e^{i\left(\frac{1-e^{-2k_{+}}}{2k_{+}}\right)k_{-}x^{-}_{1}+ie^{-2k_{+}}\left(\frac{1-e^{-2q_{+}}}{2q_{+}}\right)q_{-}x^{-}_{2}}e^{i\frac{k_{+}+q_{+}}{1-e^{2(k_{+}+q_{+})}}\left[\left(1-e^{2k_{+}}\right)x^{+}_{1}+e^{2k_{+}}\left(1-e^{2q_{+}}\right)x^{+}_{2}\right]}\,,$ and setting $q=S^{\prime}(k)$ gives a translation-invariant product of waves: ${\,\scaleobj{1.3}{\bm{f}}}_{1}[k]{\,\scaleobj{1.3}{\bm{f}}}_{2}^{\dagger}[k]=e^{i\left(\frac{1-e^{-2k_{+}}}{2k_{+}}\right)k_{-}(x^{-}_{1}-x^{-}_{2})+\frac{i}{2}\left(e^{2k_{+}}-1\right)(x_{1}^{+}-x_{2}^{+})}\,.$ (4.45) #### 4.1.4 Lorentz transformations of momenta and plane waves Under the Poincaré coaction (2.3), our plane waves transform in the following way: ${\,\scaleobj{1.3}{\bm{e}}}^{\prime}_{a}[k]={\,\scaleobj{1.3}{\bm{e}}}_{a}[\lambda(k,\Lambda)]{\,\scaleobj{1.3}{\bm{a}}}[k]\,,$ (4.46) where $\lambda$ is a nonlinear representation of the Lorentz group: $\lambda:\mathbbm{R}^{2}\times SO(1,1)\to\mathbbm{R}^{2}\,,\leavevmode\nobreak\ \leavevmode\nobreak\ 
\lambda(\lambda(k,\Lambda),\Lambda^{\prime})=\lambda(k,\Lambda^{\mu}{}_{\rho}\Lambda^{\prime\rho}{}_{\nu})\,,\leavevmode\nobreak\ \leavevmode\nobreak\ \lambda(k,\delta^{\mu}{}_{\nu})=k\,,\leavevmode\nobreak\ \leavevmode\nobreak\ \lambda(o,\Lambda)=0\,.$ (4.47) and ${\,\scaleobj{1.3}{\bm{a}}}[k]=e^{ik_{-}a^{-}}e^{ik_{+}a^{+}}\,,$ (4.48) is an ordered plane wave of the translation parameters (the notation $a^{\pm}=a^{0}\pm a^{1}$ should be clear at this point). In order to calculate $\lambda(k,\Lambda)$, we could exploit the homomorphism property of the coaction and Poincaré-transform the two sides of Eq. (4.32), which depends only on the difference between coordinates and therefore is translation-invariant: ${\,\scaleobj{1.3}{\bm{e}}}_{1}^{\prime}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}^{\prime\dagger}[k]=e^{i\xi_{-}\left(x_{1}^{\prime-}-x_{2}^{\prime-}\right)}e^{i\xi_{+}\left(x_{1}^{\prime+}-x_{2}^{\prime+}\right)}=e^{i\lambda(k,\Lambda)_{-}\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{e^{2\lambda(k,\Lambda)_{+}}-1}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}\,,$ (4.49) since $\left(x_{1}^{\prime-}-x_{2}^{\prime-}\right)=e^{-\omega}\left(x_{1}^{-}-x_{2}^{-}\right)\,,\qquad\left(x_{1}^{\prime+}-x_{2}^{\prime+}\right)=e^{+\omega}\left(x_{1}^{+}-x_{2}^{+}\right)\,,$ (4.50) where $\omega$ is the rapidity, $\Lambda^{0}{}_{0}=\Lambda^{1}{}_{1}=\cosh\omega$, $\Lambda^{0}{}_{1}=\Lambda^{1}{}_{0}=\sinh\omega$. Consistency demands that $\lambda(k,\xi)_{-}=e^{-\omega}\xi_{-}\,,\qquad\frac{e^{2\lambda(k,\xi)_{+}}-1}{2}=e^{+\omega}\xi_{+}\,,$ (4.51) which admits the solution $\lambda(k,\xi)_{-}=e^{-\omega}\xi_{-}=e^{-\omega}k_{-}\,,\qquad\lambda(k,\xi)_{+}=\frac{1}{2}\log\left(1+2e^{\omega}\xi_{+}\right)=\frac{1}{2}\log\left[1+e^{\omega}\left(e^{2k_{+}}-1\right)\right]\,.$ (4.52) The ‘$-$’ component transforms in an undeformed way, while the transformation of the ‘$+$’ component is nonlinear. 
Expanding in a power series in the momentum: $\lambda(k,\xi)_{+}=k_{+}e^{\xi}-k_{+}^{2}e^{\xi}\left(e^{\xi}-1\right)+\mathcal{O}(k_{+}^{3})\,.$ (4.53) One can easily verify that $\lambda(k,\xi)$ is a representation of the Lorentz group: $\lambda(k,\xi+\xi^{\prime})=\lambda(\lambda(k,\xi),\xi^{\prime})\,,\qquad\lambda(k,0)=k\,,$ (4.54) and that it leaves the origin unchanged: $\lambda(o,\xi)=0$. We could be content with having found the map $\lambda$ from Eq. (4.32), but in doing so we assumed Eq. (4.46), which dictates the form of the Poincaré transformation of a plane wave, and therefore we do not yet have a proof. We need to go through the pain of proving it with a direct calculation, which however will allow us to derive the form of $\lambda(k,\Lambda)$ directly, providing a check that the Poincaré coaction is indeed a homomorphism of the coordinate algebra, showing that the two sides of Eq. (4.32) transform in the same way. First of all, we need to write the lightlike $\kappa$-Poincaré commutation relations (3.2) in a more convenient way, adapted to the $1+1$-dimensional lightlike case: $[a^{+},\omega]=2i\left(e^{\omega}-1\right)\,,\qquad[a^{-},\omega]=0\,,$ (4.55) where $a^{\pm}=a^{0}\pm a^{1}$, and $\omega$, again, is the rapidity $\Lambda^{0}{}_{0}=\cosh\omega$. We are first interested in the adjoint action of exponentials of the translation parameters ${\,\scaleobj{1.3}{\bm{a}}}[k]=e^{ik_{-}a^{-}}e^{ik_{+}a^{+}}$ on arbitrary functions of the rapidity. 
Since $a^{-}$ commutes with $\omega$ it will simply go through, whereas $a^{+}$ is canonically conjugate to the coordinate $\rho=\frac{1}{2}\log\left(e^{-\omega}-1\right)$: $[a^{+},\frac{1}{2}\log\left(e^{-\omega}-1\right)]=[a^{+},\rho]=i\,,$ (4.56) therefore it acts like a translation for $\rho$: $e^{ik_{+}a^{+}}f(\rho)e^{-ik_{+}a^{+}}=f(\rho-k_{+})\,,$ (4.57) which corresponds to a nonlinear action on the coordinate $\omega$: $e^{ik_{+}a^{+}}f(\omega)e^{-ik_{+}a^{+}}=f\left[-\log\left(1+(e^{-\omega}-1)e^{-2k_{+}}\right)\right]\,.$ (4.58) This has been sometimes described [3, 27] as a ‘backreaction’ of the momenta on the Lorentz sector, part of the ‘bicrossproduct’ structure of the $\kappa$-Poincaré group, represented as a right action: $\triangleleft:SO(1,1)\times\mathbbm{R}^{2}\to SO(1,1)\,,\qquad\omega\triangleleft k=-\log\left(1+(e^{-\omega}-1)e^{-2k_{+}}\right)\,.$ (4.59) From the definition of $\triangleleft$ (4.57), it follows that it is a coalgebra homomorphism for the coproduct, _i.e._ $(f(\Lambda)\triangleleft p)\triangleleft q=f(\Lambda)\triangleleft(p\oplus q)\,,\qquad f(\Lambda)\triangleleft o=f(\Lambda)\,,$ (4.60) and the adjoint action of translations on rapidities can be written: ${\,\scaleobj{1.3}{\bm{a}}}[k]f(\omega){\,\scaleobj{1.3}{\bm{a}}}^{\dagger}[k]=f(\omega\triangleleft k)\,,\qquad{\,\scaleobj{1.3}{\bm{a}}}^{\dagger}[k]f(\omega){\,\scaleobj{1.3}{\bm{a}}}[k]=f(\omega\triangleleft S[k])\,.$ (4.61) The next step is to calculate the opposite action, the adjoint action of any function $f(\omega)$ on the translation parameters $a^{\mu}$. The $a^{-}$ parameter commutes with $\omega$ and will therefore be invariant. 
Regarding $a^{+}$, from the commutation relations with $\omega$, and its immediate consequence $[a^{+},g(\omega)]=2i(e^{\omega}-1)g^{\prime}(\omega)$, we deduce $e^{f(\omega)}a^{+}=\left(a^{+}-2i\left(e^{\omega}-1\right)f^{\prime}(\omega)\right)e^{f(\omega)}\,,$ (4.62) iterating the procedure: $\displaystyle e^{f(\omega)}(a^{+})^{2}$ $\displaystyle=\left(a^{+}-2i\left(e^{\omega}-1\right)f^{\prime}(\omega)\right)e^{f(\omega)}a^{+}=\left(a^{+}-2i\left(e^{\omega}-1\right)f^{\prime}(\omega)\right)^{2}e^{f(\omega)}\,,$ (4.63) $\displaystyle e^{f(\omega)}(a^{+})^{3}$ $\displaystyle=\left(a^{+}-2i\left(e^{\omega}-1\right)f^{\prime}(\omega)\right)^{3}e^{f(\omega)}\,,$ $\displaystyle\vdots$ $\displaystyle e^{f(\omega)}(a^{+})^{n}$ $\displaystyle=\left(a^{+}-2i\left(e^{\omega}-1\right)f^{\prime}(\omega)\right)^{n}e^{f(\omega)}\,,$ by induction, we get $e^{f(\omega)}e^{ik_{+}a^{+}}=e^{ik_{+}\left(a^{+}-2i\left(e^{\omega}-1\right)f^{\prime}(\omega)\right)}e^{f(\omega)}\,.$ (4.64) Now consider the plane wave ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]$ and apply a $\kappa$-Poincaré transformation to it: ${\,\scaleobj{1.3}{\bm{e}}}_{1}^{\prime}[k]=e^{ik_{-}(e^{-\omega}x^{-}+a^{-})}e^{ik_{+}(e^{\omega}x^{+}+a^{+})}\,,$ (4.65) using the commutativity of $a^{-}$ and $\omega$, $e^{ik_{-}(e^{-\omega}x^{-}+a^{-})}e^{ik_{+}(e^{\omega}x^{+}+a^{+})}=e^{ik_{-}e^{-\omega}x^{-}}e^{ik_{-}a^{-}}e^{ik_{+}(e^{\omega}x^{+}+a^{+})}\,.$ (4.66) Consider now Eq. (4.64), for the following choice of function: $f(\omega)=\frac{i}{2}\log\left(e^{\omega}-1\right)x^{+}$. 
It takes the form: $e^{ik_{+}\left(a^{+}+e^{\omega}x^{+}\right)}=e^{\frac{i}{2}\log\left(e^{\omega}-1\right)x^{+}}e^{ik_{+}a^{+}}e^{-\frac{i}{2}\log\left(e^{\omega}-1\right)x^{+}}\,,$ (4.67) which can be immediately substituted in (4.66): ${\,\scaleobj{1.3}{\bm{e}}}_{1}^{\prime}[k]=e^{ik_{-}e^{-\omega}x^{-}}e^{ik_{-}a^{-}}e^{\frac{i}{2}\log\left(e^{\omega}-1\right)x^{+}}e^{ik_{+}a^{+}}e^{-\frac{i}{2}\log\left(e^{\omega}-1\right)x^{+}}\,.$ (4.68) Now we want to bring the exponential $e^{ik_{+}a^{+}}$ to the right, with the help of Eq. (4.58): $\begin{gathered}e^{ik_{-}e^{-\omega}x^{-}}e^{ik_{-}a^{-}}e^{\frac{i}{2}\log\left(e^{\omega}-1\right)x^{+}}e^{ik_{+}a^{+}}e^{-\frac{i}{2}\log\left(e^{\omega}-1\right)x^{+}}=\\\ e^{ik_{-}e^{-\omega}x^{-}}e^{ik_{-}a^{-}}e^{\frac{i}{2}\log\left(e^{\omega}-1\right)x^{+}}e^{-\frac{i}{2}\log\left(\frac{(1-e^{-\omega})e^{-2k_{+}}}{1+(e^{-\omega}-1)e^{-2k_{+}}}\right)x^{+}}e^{ik_{+}a^{+}}=\\\ e^{ik_{-}e^{-\omega}x^{-}}e^{\frac{i}{2}\log\left(e^{\omega}-1\right)x^{+}}e^{-\frac{i}{2}\log\left(\frac{(1-e^{-\omega})e^{-2k_{+}}}{1+(e^{-\omega}-1)e^{-2k_{+}}}\right)x^{+}}e^{ik_{-}a^{-}}e^{ik_{+}a^{+}}=\\\ e^{ik_{-}e^{-\omega}x^{-}}e^{-\frac{i}{2}\log\left(\frac{e^{-\omega}e^{-2k_{+}}}{1+(e^{-\omega}-1)e^{-2k_{+}}}\right)x^{+}}e^{ik_{-}a^{-}}e^{ik_{+}a^{+}}=\\\ e^{ik_{-}e^{-\omega}x^{-}}e^{\frac{i}{2}\log\left[1+e^{\omega}(e^{2k_{+}}-1)\right]x^{+}}e^{ik_{-}a^{-}}e^{ik_{+}a^{+}}={\,\scaleobj{1.3}{\bm{e}}}_{1}[\lambda(k,\omega)]{\,\scaleobj{1.3}{\bm{a}}}[k]\,,\end{gathered}$ (4.69) which reproduces the formula (4.52) for $\lambda(k,\omega)$. Consider the transformation rule of the translation-invariant products of two waves (4.32), ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]$. 
As we have seen above, the coordinate differences $x^{\mu}_{1}-x^{\mu}_{2}$ transform following an undeformed, linear Lorentz transformation, ${x^{\prime}}^{\mu}_{1}-{x^{\prime}}^{\mu}_{2}=\Lambda^{\mu}{}_{\nu}(x^{\nu}_{1}-x^{\nu}_{2})$, and the functions $\xi_{\mu}(k)$ appearing in front of $x^{\mu}_{1}-x^{\mu}_{2}$ transform according to the (inverse) undeformed Lorentz transformation $\xi_{\mu}^{\prime}(k)=\Lambda^{\nu}{}_{\mu}\xi_{\nu}(k)$, which can also be written as a transformation of the momentum parameter $k^{\mu}$, but in this case the transformation is nonlinear. In other words, $\xi_{\mu}$ provides a linear representation of the Lorentz group: $\xi_{\mu}[\lambda(k,\xi)]=\xi_{\nu}[k]\Lambda^{\nu}{}_{\mu}\,.$ (4.70) If we now consider the transformation law of the other translation-invariant product of waves, that is not just the Hermitian conjugate of the first one, Eq. (4.34), we obtain $\displaystyle{{\,\scaleobj{1.3}{\bm{e}}}_{1}^{\prime}}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}^{\prime}[k]$ $\displaystyle={\,\scaleobj{1.3}{\bm{a}}}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[\lambda(k,\omega)]{\,\scaleobj{1.3}{\bm{e}}}_{2}[\lambda(k,\omega)]{\,\scaleobj{1.3}{\bm{a}}}[k]$ (4.71) $\displaystyle={\,\scaleobj{1.3}{\bm{a}}}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}[S[\lambda(k,\omega)]]{\,\scaleobj{1.3}{\bm{e}}}_{2}[\lambda(k,\omega)]{\,\scaleobj{1.3}{\bm{a}}}[k].$ Notice the following important identity: $\lambda(k,\omega)=S[\lambda(S[k],\omega\triangleleft k)]$ (4.72) which implies also $S[\lambda(k,\omega)]=\lambda(S[k],\omega\triangleleft k)$, which can be used in the expression above: $\displaystyle{\,\scaleobj{1.3}{\bm{a}}}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}[S[\lambda(k,\omega)]]{\,\scaleobj{1.3}{\bm{e}}}_{2}[\lambda(k,\omega)]{\,\scaleobj{1.3}{\bm{a}}}[k]$ $\displaystyle={\,\scaleobj{1.3}{\bm{a}}}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}[\lambda(S[k],\omega\triangleleft k)]{\,\scaleobj{1.3}{\bm{e}}}_{2}[\lambda(k,\omega)]{\,\scaleobj{1.3}{\bm{a}}}[k]$ (4.73) $\displaystyle={\,\scaleobj{1.3}{\bm{e}}}_{1}[\lambda(S[k],\omega\triangleleft k\triangleleft S[k])]{\,\scaleobj{1.3}{\bm{a}}}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[\lambda(k,\omega)]{\,\scaleobj{1.3}{\bm{a}}}[k]$ $\displaystyle={\,\scaleobj{1.3}{\bm{e}}}_{1}[\lambda(S[k],\omega)]{\,\scaleobj{1.3}{\bm{e}}}_{2}[\lambda(k,\omega\triangleleft S[k])]{\,\scaleobj{1.3}{\bm{a}}}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{a}}}[k]$ $\displaystyle={\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[\lambda(k,\omega\triangleleft S[k])]{\,\scaleobj{1.3}{\bm{e}}}_{2}[\lambda(k,\omega\triangleleft S[k])]\,.$ Notice that the transformation rule of $k_{\mu}$ is not $k_{\mu}\to\lambda_{\mu}(k,\omega)$, as it is in the case of ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]$. The transformation rule is instead $k_{\mu}\to\lambda_{\mu}(k,\omega\triangleleft S[k])\,,$ (4.74) the novelty being in the transformed rapidity which is now $\omega\triangleleft S[k]$. A direct calculation confirms that this particular rule makes the function $\chi_{\mu}[k]$ transform linearly: $\chi_{\mu}[\lambda(k,\omega\triangleleft S[k])]=\chi_{\nu}[k]\Lambda^{\nu}{}_{\mu}\,,$ (4.75) so that ${{\,\scaleobj{1.3}{\bm{e}}}_{1}^{\prime}}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}^{\prime}[k]=e^{i\chi_{\mu}[k]\Lambda^{\mu}{}_{\nu}(x^{\nu}_{1}-x^{\nu}_{2})}=e^{i\chi_{\mu}[\lambda(k,\omega\triangleleft S[k])](x^{\mu}_{1}-x^{\mu}_{2})}={\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[\lambda(k,\omega\triangleleft S[k])]{\,\scaleobj{1.3}{\bm{e}}}_{2}[\lambda(k,\omega\triangleleft S[k])]\,.$ (4.76) 
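The ‘$+$’ components close among themselves, so the identity (4.72) and the right-action property (4.60) can be spot-checked numerically. The sketch below assumes the antipode component $S[k]_{+}=-k_{+}$ for right-ordered waves (introduced earlier in the paper, so an assumption here):

```python
import math

def tri(w, kp):
    # Backreaction of a momentum on the rapidity, omega ◁ k, Eq. (4.59).
    return -math.log(1.0 + (math.exp(-w) - 1.0) * math.exp(-2.0 * kp))

def lam_plus(kp, w):
    # '+' component of the deformed boost, Eq. (4.52).
    return 0.5 * math.log(1.0 + math.exp(w) * (math.exp(2.0 * kp) - 1.0))

kp, w = 0.5, 0.8
# '+' component of the identity (4.72), lambda(k,w)_+ = S[lambda(S[k], w ◁ k)]_+,
# with the assumed antipode S[k]_+ = -k_+:
lhs = lam_plus(kp, w)
rhs = -lam_plus(-kp, tri(w, kp))
assert abs(lhs - rhs) < 1e-12

# Right-action property of ◁ for the coproduct (p ⊕ q)_+ = p_+ + q_+, Eq. (4.60):
p, q = 0.3, -0.1
assert abs(tri(tri(w, p), q) - tri(w, p + q)) < 1e-12
```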
For our purposes, however, it is sufficient to observe that the coordinates $\xi_{\pm}$ transform linearly (like light-cone coordinates) under momentum-space Lorentz transformations, and therefore said transformations will leave invariant the following light-cone-coordinates Minkowski metric: $ds^{2}=d\xi_{-}d\xi_{+}\,.$ (4.77) In right-ordered coordinates $k_{\pm}$, which are related to $\xi_{\pm}$ by Eq. (4.33), this metric reads $ds^{2}=e^{2k_{+}}\,dk_{-}dk_{+}\,.$ (4.78) As we observed after showing Eq. (4.33), the functions $\xi_{\pm}$ do not represent a map from $\mathbbm{R}^{2}$ to $\mathbbm{R}^{2}$. They rather map $\mathbbm{R}^{2}$ to the semiplane $\xi_{+}>-\frac{1}{2}$. The border $\xi_{+}=-1/2$ of our coordinate system coincides with a lightlike line. Figure 1: The momentum space of $1+1$-dimensional lightlike $\kappa$-Minkowski. The presence of a finite border implies that our momentum space, despite being _locally_ Lorentz-invariant, is not so globally. This is also reflected in the form of the Lorentz transformations of $k_{\pm}$, Eq. (4.52), which become singular at a finite value of $\xi$ when $e^{2k_{+}}<1$, and the argument of the logarithm in $\lambda_{+}(k,\xi)=\frac{1}{2}\log\left[1+e^{\xi}\left(e^{2k_{+}}-1\right)\right]$ is negative for all values of $\xi$ above $-\log\left(1-e^{2k_{+}}\right)$. The situation is perfectly analogous to that of ‘timelike’ $\kappa$-Minkowski, with its half-de Sitter momentum space whose border can be reached with a finite Lorentz transformation. In that model, a way out of this Lorentz-breaking feature was to assume a different global topology for momentum space, by quotienting it by a reflection in the ambient space, thereby obtaining an _elliptic_ de Sitter momentum space, which is closed under Lorentz transformations. It is not obvious whether we can do something like that here. It is easy to verify that the transformation $k_{\pm}\to\lambda(k,\xi)_{\pm}$ is an isometry of $ds^{2}$. 
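Numerically, the isometry is immediate once one checks that $\xi_{\pm}$ rescale by $e^{\mp\omega}$ under the boost; a minimal sketch (helper names ours, with $\xi_{-}=k_{-}$, $\xi_{+}=(e^{2k_{+}}-1)/2$ as read off from Eq. (4.49)):

```python
import math

def lam(k, w):
    # Deformed boost on right-ordered momenta, Eq. (4.52).
    km, kp = k
    return (math.exp(-w) * km,
            0.5 * math.log(1.0 + math.exp(w) * (math.exp(2.0 * kp) - 1.0)))

def xi(k):
    # Linearizing coordinates: xi_- = k_-, xi_+ = (e^{2 k_+} - 1)/2.
    km, kp = k
    return (km, 0.5 * (math.exp(2.0 * kp) - 1.0))

k, w = (0.9, 0.35), 0.7
xm, xp = xi(k)
ym, yp = xi(lam(k, w))
# xi_± rescale like light-cone coordinates under the boost ...
assert abs(ym - math.exp(-w) * xm) < 1e-12
assert abs(yp - math.exp(w) * xp) < 1e-12
# ... so ds^2 = d xi_- d xi_+ = e^{2 k_+} dk_- dk_+ is left invariant.
assert abs(ym * yp - xm * xp) < 1e-12
```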
The geodesics of momentum space are obviously straight lines in the coordinates $\xi_{\pm}$, which in coordinates $k_{\pm}$ are: $k_{-}(s)=\alpha\,s+k_{-}^{0}\,,\qquad k_{+}(s)=\frac{1}{2}\log\left(\beta\,s+e^{2k_{+}^{0}}\right)\,.$ (4.79) The geodesic distance between the origin $o=(0,0)$ and the point $(k^{1}_{-},k^{1}_{+})$, along the geodesic connecting $o$ to $k^{1}_{\mu}$, $k_{-}(s)=k^{1}_{-}\,s$, $k_{+}(s)=\frac{1}{2}\log\left[(e^{2k_{+}^{1}}-1)\,s+1\right]$, is then: $\int_{0}^{1}\sqrt{2k^{1}_{-}(e^{2k_{+}^{1}}-1)}ds=\sqrt{2k^{1}_{-}(e^{2k_{+}^{1}}-1)}\,.$ (4.80) We can define a mass-shell operator as any function of the geodesic distance (the difference between different choices of the function will amount to a nonlinear redefinition of the mass): $\mathcal{C}=\frac{1}{2}k_{-}(e^{2k_{+}}-1)=\xi_{-}\xi_{+}\,,$ (4.81) it is easy to see that $\mathcal{C}$ is Lorentz-invariant. #### 4.2.1 Mass shells The $k_{\pm}$ coordinates are deformations of light-cone coordinates. In special relativity, if we want to describe the mass shells through dispersion relations, light-cone coordinates are a bit different from the familiar energy-momentum ones. In terms of energy $E$ and momentum $p$, the mass-shell condition reads $E^{2}-p^{2}=m^{2}$, and solving this with respect to $E$ gives the two dispersion relations of positive- and negative-frequency waves: $E=\pm\sqrt{m^{2}+p^{2}}$. In light-cone coordinates the mass shell is $p_{+}p_{-}=m^{2}\,,$ (4.82) and solving with respect to one of the coordinates, _e.g._ $p_{-}$, gives one single solution: $p_{-}=\frac{m^{2}}{p_{+}}$. This is sufficient to describe both positive- and negative-frequency solutions with one single function: the former correspond to positive values of $p_{+}$, and the latter to $p_{+}<0$. In the case of imaginary mass, $m^{2}$ changes sign in Eq. (4.82), and the mass shells we are describing are the tachyonic ones. 
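The fact that a single branch of the light-cone dispersion relation covers both frequency signs can be made concrete in a few lines (a sketch; with $p_{\pm}=E\pm p$, so that $E=(p_{+}+p_{-})/2$):

```python
import math

m = 1.0

def p_minus(p_plus):
    # The light-cone mass shell p_+ p_- = m^2 of Eq. (4.82), solved for p_-.
    return m**2 / p_plus

# One branch covers both signs of the frequency: the energy E = (p_+ + p_-)/2
# is positive for p_+ > 0 and negative for p_+ < 0.
for pp in (0.1, 2.0, 7.5):
    assert (pp + p_minus(pp)) / 2.0 > 0.0
    assert (-pp + p_minus(-pp)) / 2.0 < 0.0
```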
Our $\kappa$-deformation does not change the basic qualitative picture: the mass-shell relation is (the normalization is chosen in order to match Eq. (4.82) in the $\kappa\to 0$ limit): $\mathcal{C}(k)=\frac{1}{2}k_{-}\left(e^{2k_{+}}-1\right)=m^{2}\,,$ (4.83) which can be solved for $k_{-}$ as: $k_{-}=\omega_{r}(k_{+})=\frac{2m^{2}}{e^{2k_{+}}-1}\,,$ (4.84) and the positive-frequency mass-shells correspond to values of $k_{+}>0$, while the negative-frequency ones correspond to $k_{+}<0$. Notice that $\omega_{r}\in(-\infty,-2m^{2})\cup(0,\infty)$. If we decided to use Weyl-ordered waves instead of right-ordered ones, the mass-shell function would take a different form. Using the relation (4.39) between these two coordinate systems, we get the following form for the mass-shell function: $\mathcal{C}^{\prime}(q)=\frac{1}{4}\left(e^{2q_{+}}-1\right)^{2}\frac{q_{-}}{q_{+}}=m^{2}\,,$ (4.85) and the dispersion relation now takes the form $q_{-}=\omega_{w}(q_{+})=\frac{4m^{2}\,q_{+}}{\left(e^{2q_{+}}-1\right)^{2}}\,,$ (4.86) again, $q_{+}>0$ describes positive-frequency waves, and $q_{+}<0$ negative-frequency ones. Figure 2: The dispersion relations (4.83) (left) and (4.85) (right). ### 4.3 The $\kappa$-Klein–Gordon equation Consider the following equation: $\mathcal{C}\triangleright\phi(x_{a})=m^{2}\phi(x_{a})\,,$ (4.87) where the Casimir operator’s action on noncommutative functions is defined in Fourier-transform as $\mathcal{C}\triangleright f(x_{a})=\int d^{2}k\sqrt{-g(k)}\,\tilde{f}(k)\,\mathcal{C}(k){\,\scaleobj{1.3}{\bm{e}}}_{a}[k]\,,\qquad f(x_{a})=\int d^{2}k\sqrt{-g(k)}\,\tilde{f}(k)\,{\,\scaleobj{1.3}{\bm{e}}}_{a}[k]\,.$ (4.88) The generic solution to Eq. 
(4.87) is $\displaystyle\phi(x_{a})$ $\displaystyle=\int d^{2}k\sqrt{-g(k)}\ \delta\left(\mathcal{C}(k)-m^{2}\right)\tilde{\phi}(k){\,\scaleobj{1.3}{\bm{e}}}_{a}[k]=\int d^{2}k\sqrt{-g(k)}\frac{\delta\left(k_{-}-\omega_{r}(k_{+})\right)}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}\,\tilde{\phi}(k){\,\scaleobj{1.3}{\bm{e}}}_{a}[k]\,,$ (4.89) we can now split the function $\tilde{\phi}(k)$ according to its values on the two mass-shells, the Lorentz-invariant one with $k_{+}>0$ and the other one with $k_{+}<0$: $\tilde{\phi}(k)=a(k_{+})\Theta(k_{+})+\bar{b}(k_{+})\Theta(-k_{+})\,,$ (4.90) and we get: $\phi(x_{a})=\int_{0}^{+\infty}dk_{+}\frac{e^{2k_{+}}}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}\,a(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{a}(k_{+})+\int_{-\infty}^{0}dk_{+}\frac{e^{2k_{+}}}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}\,\bar{b}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{a}(k_{+})\,,$ (4.91) where we called the on-shell waves ${\,\scaleobj{1.3}{\mathbbm{e}}}_{a}(k_{+})={\,\scaleobj{1.3}{\bm{e}}}_{a}\left[\omega_{r}(k_{+}),k_{+}\right]\,.$ (4.92) Notice now that ${\,\scaleobj{1.3}{\mathbbm{e}}}_{a}^{\dagger}(k_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{a}(-k_{+})$, so that $\displaystyle\phi(x_{a})=$ $\displaystyle\int_{0}^{+\infty}dk_{+}\frac{e^{2k_{+}}}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}\left[a(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{a}(k_{+})+e^{-2k_{+}}\,\bar{b}(-k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{a}^{\dagger}(k_{+})\right]\,.$ (4.93) In the following, we will need to commute on-shell plane waves at different points. The key identity we need to calculate these commutators is Eq. 
(4.31), which we reproduce here for convenience: ${\,\scaleobj{1.3}{\bm{e}}}_{a}[k]{\,\scaleobj{1.3}{\bm{e}}}_{b}[q]=e^{i\left(k_{-}x_{a}^{-}+e^{-2k_{+}}q_{-}x_{b}^{-}\right)}e^{i\frac{k_{+}+q_{+}}{1-e^{2(k_{+}+q_{+})}}\left[\left(1-e^{2k_{+}}\right)x^{+}_{a}+e^{2k_{+}}\left(1-e^{2q_{+}}\right)x^{+}_{b}\right]}\,.$ We can then ask whether commuting two on-shell waves gives again a product of on-shell waves, _i.e.:_ ${\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(q_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{2}\left(q_{+}^{\prime}\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}\left(k_{+}^{\prime}\right)\,,$ (4.94) which is solved by $k_{+}^{\prime}=\frac{1}{2}\log\left(\frac{e^{2(k_{+}+q_{+})}}{e^{2k_{+}}\left(e^{2q_{+}}-1\right)+1}\right)\,,\qquad q_{+}^{\prime}=\frac{1}{2}\log\left(e^{2k_{+}}\left(e^{2q_{+}}-1\right)+1\right)\,.$ (4.95) We can find similar relations for Hermitian conjugate on-shell waves. The whole algebra is summarized here: $\begin{gathered}\textstyle{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(q_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{2}\left(\frac{1}{2}\log\left[e^{2k_{+}}\left(e^{2q_{+}}-1\right)+1\right]\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}\left(k_{+}+q_{+}-\frac{1}{2}\log\left[e^{2k_{+}}\left(e^{2q_{+}}-1\right)+1\right]\right)\,,\\\ \textstyle{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}\left(q_{+}-\frac{1}{2}\log\left[e^{2k_{+}}\left(1-e^{2q_{+}}\right)+e^{2q_{+}}\right]\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}\left(k_{+}-\frac{1}{2}\log\left[e^{2k_{+}}\left(1-e^{2q_{+}}\right)+e^{2q_{+}}\right]\right)\,,\\\ 
\textstyle{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(q_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{2}\left(\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-1\right]-k_{+}\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}\left(\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-1\right]-q_{+}\right)\,,\\\ \textstyle{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}\left(\frac{1}{2}\log\left[1-e^{2q_{+}}\left(1-e^{2k_{+}}\right)\right]\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}\left(k_{+}+q_{+}-\frac{1}{2}\log\left[1-e^{2q_{+}}\left(1-e^{2k_{+}}\right)\right]\right)\,.\end{gathered}$ (4.96) ### 4.4 Two-point functions We are ready to study in full generality two-point functions, built from the elements of the noncommutative two-point algebra $\mathcal{A}^{\underline{\otimes}2}$ that can be written as Fourier transforms, that are $\kappa$-Poincaré invariant and that solve the $\kappa$-Klein–Gordon equation. A reasonable proposal for such a function, based on what we know from commutative QFT, is something like this: $\int d^{2}k{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]\,f(k)\,\delta\left(\mathcal{C}(k)-m^{2}\right)\,,$ (4.97) where of course $f(k)$ is supposed to be a Lorentz-invariant function of the momentum. However, the above function is not Lorentz-invariant. 
In fact: $\displaystyle\int d^{2}k{\,\scaleobj{1.3}{\bm{e}}}_{1}^{\prime}[k]{{\,\scaleobj{1.3}{\bm{e}}}^{\prime}}^{\dagger}_{2}[k]\,f(k)\,\delta\left(\mathcal{C}(k)-m^{2}\right)$ $\displaystyle=\int d^{2}k{\,\scaleobj{1.3}{\bm{e}}}_{1}[\lambda(k,\omega)]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[\lambda(k,\omega)]\,f(k)\,\delta\left(\mathcal{C}(k)-m^{2}\right)$ (4.98) $\displaystyle=\int d^{2}q\left|\det\left(\frac{\partial\lambda(k,-\omega)_{\mu}}{\partial k_{\nu}}\right)\right|{\,\scaleobj{1.3}{\bm{e}}}_{1}[q]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[q]\,f(k)\,\delta\left(\mathcal{C}(q)-m^{2}\right)$ $\displaystyle\neq\int d^{2}k{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]\,f(k)\,\delta\left(\mathcal{C}(k)-m^{2}\right)\,.$ Instead, inserting the square root of minus the determinant of the momentum- space metric: $\displaystyle F(x^{\mu}_{1}-x^{\mu}_{2})$ $\displaystyle=\int d^{2}k\sqrt{-g(k)}{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]\,f(k)\,\delta\left(\mathcal{C}(k)-m^{2}\right)$ (4.99) $\displaystyle=\int d^{2}k\sqrt{-g(k)}{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]\,f(k)\,\frac{\delta\left(k_{-}-\omega_{r}(k_{+})\right)}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}\,,$ where $\sqrt{-g(k)}=e^{2k_{+}}$, makes the integral Lorentz invariant. Now we worry about another issue: ordering dependence. 
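Before addressing this issue in general, the ordering independence of the mass shell itself can be spot-checked numerically, using the coordinate change $q_{+}=k_{+}$, $q_{-}=2k_{+}k_{-}/(e^{2k_{+}}-1)$ employed below in Eq. (4.101) (a sketch; helper names ours):

```python
import math

m = 0.5

def C_right(km, kp):
    # Right-ordered mass-shell function, Eq. (4.83).
    return 0.5 * km * (math.exp(2.0 * kp) - 1.0)

def C_weyl(qm, qp):
    # Weyl-ordered mass-shell function, Eq. (4.85).
    return 0.25 * (math.exp(2.0 * qp) - 1.0) ** 2 * qm / qp

# Pick a point on the right-ordered shell via the dispersion relation (4.84) ...
kp = 0.8
km = 2.0 * m**2 / (math.exp(2.0 * kp) - 1.0)
assert abs(C_right(km, kp) - m**2) < 1e-12

# ... and map it with q_+ = k_+, q_- = 2 k_+ k_- / (e^{2 k_+} - 1):
qp = kp
qm = 2.0 * kp * km / (math.exp(2.0 * kp) - 1.0)
# It lands exactly on the Weyl-ordered shell, i.e. on the curve (4.86):
assert abs(C_weyl(qm, qp) - m**2) < 1e-12
assert abs(qm - 4.0 * m**2 * qp / (math.exp(2.0 * qp) - 1.0) ** 2) < 1e-12
```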
We could have used the Weyl-ordered basis of plane waves to construct the function: $\int d^{2}q\sqrt{-g^{\prime}(q)}{\,\scaleobj{1.3}{\bm{f}}}_{1}[q]{\,\scaleobj{1.3}{\bm{f}}}^{\dagger}_{2}[q]\,f^{\prime}(q)\,\delta\left(\mathcal{C^{\prime}}(q)-m^{2}\right)=\int d^{2}q\sqrt{-g^{\prime}(q)}{\,\scaleobj{1.3}{\bm{f}}}_{1}[q]{\,\scaleobj{1.3}{\bm{f}}}^{\dagger}_{2}[q]\,f^{\prime}(q)\,\frac{\delta\left(q_{-}-\omega_{w}(q_{+})\right)}{\left|\frac{\left(e^{2q_{+}}-1\right)^{2}}{4q_{+}}\right|}\,,$ (4.100) where $\sqrt{-g^{\prime}(q)}=e^{2q_{+}}\frac{e^{2q_{+}}-1}{2q_{+}}$, $\mathcal{C^{\prime}}(q)=\mathcal{C}\left[q_{+},q_{-}\left(\frac{e^{2q_{+}}-1}{2q_{+}}\right)\right]$, $f^{\prime}(q)=f\left[q_{+},q_{-}\left(\frac{e^{2q_{+}}-1}{2q_{+}}\right)\right]$. However, we can prove that the two functions are identical. In fact, under the coordinate change $q_{+}=k_{+}$, $q_{-}=\frac{2k_{+}k_{-}}{e^{2k_{+}}-1}$, one has: $d^{2}q\sqrt{-g^{\prime}(q)}=d^{2}k\sqrt{-g(k)}\,,\leavevmode\nobreak\ \leavevmode\nobreak\ {\,\scaleobj{1.3}{\bm{f}}}_{1}[q]={\,\scaleobj{1.3}{\bm{e}}}_{1}[k]\,,\leavevmode\nobreak\ \leavevmode\nobreak\ {\,\scaleobj{1.3}{\bm{f}}}^{\dagger}_{2}[q]={\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]\,,\qquad f^{\prime}(q)=f(k)\,,\leavevmode\nobreak\ \leavevmode\nobreak\ \mathcal{C}^{\prime}(q)=\mathcal{C}(k)\,,$ (4.101) and therefore $\displaystyle\int d^{2}q\sqrt{-g^{\prime}}{\,\scaleobj{1.3}{\bm{f}}}_{1}[q]{\,\scaleobj{1.3}{\bm{f}}}^{\dagger}_{2}[q]\,f^{\prime}(q)\,\delta\left(\mathcal{C^{\prime}}(q)-m^{2}\right)$ $\displaystyle=\int d^{2}k\sqrt{-g}{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]f(k)\frac{\delta\left(\frac{2k_{+}k_{-}}{e^{2k_{+}}-1}-\omega_{w}(k_{+})\right)}{\left|\frac{\left(e^{2k_{+}}-1\right)^{2}}{4k_{+}}\right|}$ (4.102) $\displaystyle=\int d^{2}k\sqrt{-g}{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]f(k)\frac{\delta\left(k_{-}-\frac{e^{2k_{+}}-1}{2k_{+}}\omega_{w}(k_{+})\right)}{\left|\frac{2k_{+}}{e^{2k_{+}}-1}\right|\left|\frac{\left(e^{2k_{+}}-1\right)^{2}}{4k_{+}}\right|}$ $\displaystyle=\int d^{2}k\sqrt{-g}{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]f(k)\frac{\delta\left(k_{-}-\omega_{r}(k_{+})\right)}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}\,.$ The function $f(k)$ appearing in our two-point function should be Lorentz-invariant, and the functions that are used for commutative QFT two-point functions, _e.g._ Feynman propagators, Wightman functions and Pauli-Jordan functions, are all constants on the forward and backward light cones in momentum space. In our case we can write: $f(k)=f_{-}\,\Theta(-k_{+})+f_{+}\,\Theta(k_{+})\,,$ (4.103) where $f_{-}$ and $f_{+}$ are constants. This function gives $f_{+}$ on the forward light cone and $f_{-}$ on the backwards one, and it is easy to see that it is Lorentz invariant, because the sign of $k_{+}$ is not changed by on-shell Lorentz transformations. This expression, however, is not _globally_ Lorentz-covariant: the backwards light cone is not closed under Lorentz transformations, and this will make the $f_{-}$ term non-invariant. 
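Both statements — that on-shell boosts preserve the sign of $k_{+}$ on the forward cone, and that the backward cone is pushed off momentum space at the finite rapidity $-\log(1-e^{2k_{+}})$ noted in Sec. 4.2 — are easy to illustrate numerically (a sketch):

```python
import math

def lam_plus(kp, w):
    # '+' component of the deformed boost, Eq. (4.52); only defined while the
    # argument of the logarithm stays positive.
    arg = 1.0 + math.exp(w) * (math.exp(2.0 * kp) - 1.0)
    return None if arg <= 0.0 else 0.5 * math.log(arg)

# Forward cone: the sign of k_+ is preserved for every rapidity.
for w in (-3.0, -1.0, 0.0, 1.0, 3.0):
    assert lam_plus(0.4, w) > 0.0

# Backward cone: the boost becomes singular at w = -log(1 - e^{2 k_+}).
kp = -0.4
w_max = -math.log(1.0 - math.exp(2.0 * kp))
assert lam_plus(kp, w_max - 0.1) < 0.0
assert lam_plus(kp, w_max + 0.1) is None
```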
Let us now calculate explicitly the form of $F(x^{\mu}_{1}-x^{\mu}_{2})$ that is implied by the choice (4.103): $\displaystyle F(x^{\mu}_{1}-x^{\mu}_{2})$ $\displaystyle=\int_{\mathbbm{R}}dk_{+}\frac{e^{2k_{+}}}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}e^{i\left(\frac{2m^{2}}{e^{2k_{+}}-1}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{e^{2k_{+}}-1}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}f\left(k_{+},\frac{2m^{2}}{e^{2k_{+}}-1}\right)$ (4.104) $\displaystyle=\int_{0}^{\infty}\frac{dy}{\left|y-1\right|}e^{i\left(\frac{2m^{2}}{y-1}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{y-1}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}f\left(\frac{1}{2}\log y,\frac{2m^{2}}{y-1}\right)$ $\displaystyle=\int_{-1}^{\infty}\frac{dz}{\left|z\right|}e^{i\left(\frac{2m^{2}}{z}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{z}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}f\left(\frac{1}{2}\log(z+1),\frac{2m^{2}}{z}\right)$ $\displaystyle=f_{-}\int_{0}^{1}\frac{du}{u}e^{-i\left(\frac{2m^{2}}{u}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{-i\left(\frac{u}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}+f_{+}\int_{0}^{\infty}\frac{dz}{z}e^{i\left(\frac{2m^{2}}{z}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{z}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}\,,$ Reintroducing $\kappa$, the expression above becomes: $\displaystyle F(x^{\mu}_{1}-x^{\mu}_{2})=$ $\displaystyle f_{-}\int_{0}^{1}\frac{du}{u}e^{-i\left(\frac{2m^{2}}{\kappa\,u}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{-i\left(\frac{\kappa\,u}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}+f_{+}\int_{0}^{\infty}\frac{dz}{z}e^{i\left(\frac{2m^{2}}{\kappa\,z}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{\kappa\,z}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}$ (4.105) $\displaystyle=$ $\displaystyle 
f_{-}\int_{0}^{\frac{\kappa}{2m}}\frac{du}{u}e^{-im\left(\frac{1}{u}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{-imu\left(x_{1}^{+}-x_{2}^{+}\right)}+f_{+}\int_{0}^{\infty}\frac{dz}{z}e^{im\left(\frac{1}{z}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{imz\left(x_{1}^{+}-x_{2}^{+}\right)}$ $\displaystyle=$ $\displaystyle f_{-}\int_{-\infty}^{\log\frac{\kappa}{2m}}d\chi e^{-im\left(\cosh\chi-\sinh\chi\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{-im\left(\cosh\chi+\sinh\chi\right)\left(x_{1}^{+}-x_{2}^{+}\right)}$ $\displaystyle\leavevmode\nobreak\ +f_{+}\int_{-\infty}^{\infty}d\chi e^{im\left(\cosh\chi-\sinh\chi\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{im\left(\cosh\chi+\sinh\chi\right)\left(x_{1}^{+}-x_{2}^{+}\right)}$ $\displaystyle=$ $\displaystyle f_{-}\int_{-\infty}^{m\sinh\left(\log\frac{\kappa}{2m}\right)}\frac{dp}{\sqrt{p^{2}+m^{2}}}e^{-i\left(\sqrt{p^{2}+m^{2}}-p\right)\left(x_{1}^{-}-x_{2}^{-}\right)-i\left(\sqrt{p^{2}+m^{2}}+p\right)\left(x_{1}^{+}-x_{2}^{+}\right)}$ $\displaystyle\leavevmode\nobreak\ +f_{+}\int_{-\infty}^{\infty}\frac{dp}{\sqrt{p^{2}+m^{2}}}e^{i\left(\sqrt{p^{2}+m^{2}}-p\right)\left(x_{1}^{-}-x_{2}^{-}\right)+i\left(\sqrt{p^{2}+m^{2}}+p\right)\left(x_{1}^{+}-x_{2}^{+}\right)}$ $\displaystyle=$ $\displaystyle f_{-}\int_{-\infty}^{m\sinh\left(\log\frac{\kappa}{2m}\right)}\frac{dp}{\sqrt{p^{2}+m^{2}}}e^{-2i\left[\sqrt{p^{2}+m^{2}}(x^{0}_{1}-x^{0}_{2})+p(x^{1}_{1}-x^{1}_{2})\right]}$ $\displaystyle\leavevmode\nobreak\ +f_{+}\int_{-\infty}^{\infty}\frac{dp}{\sqrt{p^{2}+m^{2}}}e^{2i\left[\sqrt{p^{2}+m^{2}}(x^{0}_{1}-x^{0}_{2})+p(x^{1}_{1}-x^{1}_{2})\right]}\,.$ The expression above is identical to the integrals appearing in the undeformed 2-point functions (written in light-cone coordinates), except for the Lorentz-breaking integration boundary $m\sinh\left(\log\frac{\kappa}{2m}\right)$ in the first integral. 
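The last two substitutions in the chain above — the rapidity parametrization $p=m\sinh\chi$ and the recombination of the light-cone phase into $2[\sqrt{p^{2}+m^{2}}\,x^{0}+p\,x^{1}]$ — can be double-checked numerically:

```python
import math

m, p, x0, x1 = 0.7, 1.3, 0.5, -0.2
E = math.sqrt(p * p + m * m)
xm, xp = x0 - x1, x0 + x1   # light-cone coordinates x^± = x^0 ± x^1

# Rapidity parametrization p = m sinh(chi): m e^{∓chi} = sqrt(p^2 + m^2) ∓ p.
chi = math.asinh(p / m)
assert abs(m * math.exp(-chi) - (E - p)) < 1e-12
assert abs(m * math.exp(chi) - (E + p)) < 1e-12

# The light-cone phase recombines into 2 (E x^0 + p x^1):
phase = (E - p) * xm + (E + p) * xp
assert abs(phase - 2.0 * (E * x0 + p * x1)) < 1e-12
```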
So, our conclusion is that, in order to have a $\kappa$-Poincaré-invariant function of type $F(x^{\mu}_{1}-x^{\mu}_{2})$, we have to set $f_{-}=0$. We have found a first $\kappa$-Poincaré-invariant two-point function, based on the translation-invariant wave combination (4.32), ${\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]$: $F=\int d^{2}k\sqrt{-g(k)}{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]\,\Theta(k_{+})\,\delta\left(\mathcal{C}(k)-m^{2}\right)=\int_{-\infty}^{\infty}\frac{dp}{\sqrt{p^{2}+m^{2}}}e^{2i\left[\sqrt{p^{2}+m^{2}}(x^{0}_{1}-x^{0}_{2})+p(x^{1}_{1}-x^{1}_{2})\right]}\,.$ (4.106) We could have instead used the translation invariant combination of plane waves introduced in Eq. (4.36), ${\,\scaleobj{1.3}{\bm{e}}}_{2}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[k]$, but this is just the Hermitian conjugate of the wave combination used before. Moreover, the two-point function built with it coincides with $F$ with $x^{\mu}_{1}$ and $x^{\mu}_{2}$ exchanged, because: $F^{\dagger}(x^{\mu}_{1}-x^{\mu}_{2})=F(x^{\mu}_{2}-x^{\mu}_{1})\,.$ (4.107) The wave combination (4.34), ${\,\scaleobj{1.3}{\bm{e}}}_{1}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[k]$ is not obviously related to (4.32), so we need to check what we get if we use it to define our two-point function: $H(x^{\mu}_{1}-x^{\mu}_{2})=\int d^{2}k{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[k]\,h(k)\,\delta\left(\mathcal{C}(k)-m^{2}\right)\,,$ (4.108) which is Lorentz-invariant because, from Eq. (4.76), ${\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}_{2}[k]={\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{1}[\lambda(k,\omega\triangleleft S[k])]{\,\scaleobj{1.3}{\bm{e}}}_{2}[\lambda(k,\omega\triangleleft S[k])]$, and the Jacobian of the transformation $q_{\mu}=\lambda_{\mu}(k,\omega\triangleleft S[k])$ is one. 
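That the Jacobian of $q_{\mu}=\lambda_{\mu}(k,\omega\triangleleft S[k])$ is one can be confirmed by a quick finite-difference computation (a sketch; it assumes the antipode component $S[k]_{+}=-k_{+}$, which is not restated in this section):

```python
import math

def tri(w, kp):
    # Backreaction omega ◁ k, Eq. (4.59).
    return -math.log(1.0 + (math.exp(-w) - 1.0) * math.exp(-2.0 * kp))

def lam(k, w):
    # Deformed boost, Eq. (4.52).
    km, kp = k
    return (math.exp(-w) * km,
            0.5 * math.log(1.0 + math.exp(w) * (math.exp(2.0 * kp) - 1.0)))

def transform(km, kp, w):
    # k -> lambda(k, omega ◁ S[k]), with the assumed antipode S[k]_+ = -k_+.
    return lam((km, kp), tri(w, -kp))

km, kp, w, h = 0.7, 0.2, 0.3, 1e-6
# Jacobian determinant by central finite differences:
d = lambda f, i: (f[0][i] - f[1][i]) / (2.0 * h)
fkm = (transform(km + h, kp, w), transform(km - h, kp, w))
fkp = (transform(km, kp + h, w), transform(km, kp - h, w))
det = d(fkm, 0) * d(fkp, 1) - d(fkp, 0) * d(fkm, 1)
assert abs(det - 1.0) < 1e-6
```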
Again, the plane wave combination (4.37), ${\,\scaleobj{1.3}{\bm{e}}}_{2}^{\dagger}[k]{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]$ is just the Hermitian conjugate of (4.34), and again, the two-point function built with it coincides with $H$ with $x^{\mu}_{1}$ and $x^{\mu}_{2}$ exchanged, because: $H^{\dagger}(x^{\mu}_{1}-x^{\mu}_{2})=H(x^{\mu}_{2}-x^{\mu}_{1})\,.$ (4.109) An explicit calculation of $H$ gives: $\displaystyle H(x^{\mu}_{1}-x^{\mu}_{2})$ $\displaystyle=\int_{\mathbbm{R}}dk_{+}\frac{1}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}e^{-ie^{2k_{+}}\left(\frac{2m^{2}}{e^{2k_{+}}-1}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{-i\left(\frac{1-e^{-2k_{+}}}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}h\left(k_{+},\frac{2m^{2}}{e^{2k_{+}}-1}\right)$ (4.110) $\displaystyle=\int_{\mathbbm{R}}dk_{+}\frac{e^{-2k_{+}}}{\frac{1}{2}\left|e^{-2k_{+}}-1\right|}e^{i\left(\frac{2m^{2}}{e^{-2k_{+}}-1}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{e^{-2k_{+}}-1}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}h\left(k_{+},\frac{2m^{2}}{e^{2k_{+}}-1}\right)$ $\displaystyle=\int_{0}^{\infty}\frac{dy}{\left|y-1\right|}e^{i\left(\frac{2m^{2}}{y-1}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{y-1}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}h\left(-\frac{1}{2}\log y,\frac{2m^{2}}{y-1}\right)$ $\displaystyle=\int_{-1}^{\infty}\frac{dz}{\left|z\right|}e^{i\left(\frac{2m^{2}}{z}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{z}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}h\left(-\frac{1}{2}\log(z+1),\frac{2m^{2}}{z}\right)\,,$ and, if $h(k)=h_{-}\Theta(-k_{+})+h_{+}\Theta(k_{+})$: $\displaystyle H(x^{\mu}_{1}-x^{\mu}_{2})$ $\displaystyle=h_{+}\int_{0}^{1}\frac{du}{u}e^{-i\left(\frac{2m^{2}}{u}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{-i\left(\frac{u}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}+h_{-}\int_{0}^{\infty}\frac{dz}{z}e^{i\left(\frac{2m^{2}}{z}\right)\left(x_{1}^{-}-x_{2}^{-}\right)}e^{i\left(\frac{z}{2}\right)\left(x_{1}^{+}-x_{2}^{+}\right)}$ (4.111) $\displaystyle=$ 
$\displaystyle h_{+}\int_{-\infty}^{m\sinh\left(\log\frac{\kappa}{2m}\right)}\frac{dp}{\sqrt{p^{2}+m^{2}}}e^{-2i\left[\sqrt{p^{2}+m^{2}}(x^{0}_{1}-x^{0}_{2})+p(x^{1}_{1}-x^{1}_{2})\right]}$ $\displaystyle\leavevmode\nobreak\ +h_{-}\int_{-\infty}^{\infty}\frac{dp}{\sqrt{p^{2}+m^{2}}}e^{2i\left[\sqrt{p^{2}+m^{2}}(x^{0}_{1}-x^{0}_{2})+p(x^{1}_{1}-x^{1}_{2})\right]}\,,$ so, if we set $h_{+}=0$, we have a genuinely Lorentz-invariant function. This function, however, turns out to be identical to $F$ (modulo a constant factor). We conclude that we can use $F(x^{\mu}_{1}-x^{\mu}_{2})$ and its Hermitian conjugate to define all two-point functions that we need, which will have the appropriate commutative limit and invariance properties. Moreover, these two-point functions will be indistinguishable from their commutative counterparts. For example, the Wightman function can be defined as: $\Delta_{\text{W}}(x^{\mu}_{1}-x^{\mu}_{2})=\int d^{2}k\sqrt{-g(k)}\,{\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]\,\Theta(k_{+})\,\delta\left(\mathcal{C}(k)-m^{2}\right)\,,$ (4.112) and the associated Pauli-Jordan function will be the anti-Hermitian part of $\Delta_{\text{W}}$: $\Delta_{\text{PJ}}(x^{\mu}_{1}-x^{\mu}_{2})=\int d^{2}k\sqrt{-g(k)}\left({\,\scaleobj{1.3}{\bm{e}}}_{1}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{2}[k]-{\,\scaleobj{1.3}{\bm{e}}}_{2}[k]{\,\scaleobj{1.3}{\bm{e}}}^{\dagger}_{1}[k]\right)\Theta(k_{+})\,\delta\left(\mathcal{C}(k)-m^{2}\right)\,.$ (4.113) ### 4.5 Field quantization We can use the Pauli-Jordan function to define a quantization, _i.e._ $[\hat{\phi}(x_{1}),\hat{\phi}^{\dagger}(x_{2})]=i\Delta_{\text{PJ}}(x^{\mu}_{1}-x^{\mu}_{2})\,,\leavevmode\nobreak\ \leavevmode\nobreak\ [\hat{\phi}(x_{1}),\hat{\phi}(x_{2})]=0\,,\leavevmode\nobreak\ \leavevmode\nobreak\ [\hat{\phi}^{\dagger}(x_{1}),\hat{\phi}^{\dagger}(x_{2})]=0\,,$ (4.114) where now the Fourier coefficients of our on-shell field are assumed to be not necessarily commuting
operators, which however commute with $x^{\mu}_{a}$: $\hat{\phi}(x_{a})=\int_{0}^{+\infty}dk_{+}\frac{e^{2k_{+}}}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}\left(\hat{a}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{a}(k_{+})+e^{-2k_{+}}\,\hat{b}^{\dagger}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{a}^{\dagger}(k_{+})\right)\,,$ (4.115) and the Hermitian conjugate field will be: $\hat{\phi}^{\dagger}(x_{a})=\int_{0}^{+\infty}dk_{+}\frac{e^{2k_{+}}}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}\left(\hat{a}^{\dagger}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{a}^{\dagger}(k_{+})+e^{-2k_{+}}\,\hat{b}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{a}(k_{+})\right)\,.$ (4.116) Consider first the equation $[\hat{\phi}(x_{1}),\hat{\phi}^{\dagger}(x_{2})]=i\Delta_{\text{PJ}}(x^{\mu}_{1}-x^{\mu}_{2})$, which implies: $\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}dk_{+}dq_{+}\frac{e^{2(k_{+}+q_{+})}}{\frac{1}{4}\left|e^{2k_{+}}-1\right|\left|e^{2q_{+}}-1\right|}\left\\{\hat{a}(k_{+})\hat{a}^{\dagger}(q_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+})-\hat{a}^{\dagger}(q_{+})\hat{a}(k_{+})\leavevmode\nobreak\ {\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+})\right\\}$ (4.117) $\displaystyle=\int_{0}^{\infty}dk_{+}\frac{e^{2k_{+}}}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(k_{+})\,,$ $\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}dk_{+}dq_{+}\frac{e^{2k_{+}}}{\frac{1}{4}\left|e^{2k_{+}}-1\right|\left|e^{2q_{+}}-1\right|}\left\\{\hat{a}(k_{+})\hat{b}(q_{+})\,{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(q_{+})-\hat{b}(q_{+})\hat{a}(k_{+})\,{\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(q_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+})\right\\}=0\,,$
$\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}dk_{+}dq_{+}\frac{e^{2q_{+}}}{\frac{1}{4}\left|e^{2k_{+}}-1\right|\left|e^{2q_{+}}-1\right|}\left\\{\hat{b}^{\dagger}(k_{+})\hat{a}^{\dagger}(q_{+})\,{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+})-\hat{a}^{\dagger}(q_{+})\hat{b}^{\dagger}(k_{+})\,{\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}(k_{+})\right\\}=0\,,$ $\displaystyle\int_{0}^{\infty}\int_{0}^{\infty}dk_{+}dq_{+}\frac{1}{\frac{1}{4}\left|e^{2k_{+}}-1\right|\left|e^{2q_{+}}-1\right|}\left\\{\hat{b}^{\dagger}(k_{+})\hat{b}(q_{+})\,{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(q_{+})-\hat{b}(q_{+})\hat{b}^{\dagger}(k_{+})\,{\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(q_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}(k_{+})\right\\}$ $\displaystyle=-\int_{0}^{\infty}dk_{+}\frac{e^{2k_{+}}}{\frac{1}{2}\left|e^{2k_{+}}-1\right|}{\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}(k_{+})\,,$ recall Eq. 
(4.96), and rewrite it according to our present needs: $\begin{gathered}\textstyle{\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{1}\left(\frac{1}{2}\log\left[e^{2q_{+}}+e^{2k_{+}}-1\right]-q_{+}\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}\left(\frac{1}{2}\log\left[e^{2q_{+}}+e^{2k_{+}}-1\right]-k_{+}\right)\,,\\\ \textstyle{\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(q_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{1}\left(\frac{1}{2}\log\left[e^{2q_{+}}\left(e^{2k_{+}}-1\right)+1\right]\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}\left(q_{+}+k_{+}-\frac{1}{2}\log\left[e^{2q_{+}}\left(e^{2k_{+}}-1\right)+1\right]\right)\,,\\\ \textstyle{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}\left(\frac{1}{2}\log\left[1-e^{2q_{+}}\left(1-e^{2k_{+}}\right)\right]\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}\left(k_{+}+q_{+}-\frac{1}{2}\log\left[1-e^{2q_{+}}\left(1-e^{2k_{+}}\right)\right]\right)\,,\\\ \textstyle{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}(q_{+})={\,\scaleobj{1.3}{\mathbbm{e}}}_{2}\left(\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-1\right]-k_{+}\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}^{\dagger}\left(\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-1\right]-q_{+}\right)\,.\end{gathered}$ (4.118) Consider the first line of Eq. (4.117). 
Using (4.118), we can rewrite it as $\displaystyle\int_{\mathbbm{R}_{+}^{2}}dk_{+}dq_{+}\frac{e^{2(k_{+}+q_{+})}}{\frac{1}{4}\left|e^{2k_{+}}-1\right|\left|e^{2q_{+}}-1\right|}\Bigg{\\{}\hat{a}(k_{+})\hat{a}^{\dagger}(q_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+})$ (4.119) $\displaystyle-\hat{a}^{\dagger}(q_{+})\hat{a}(k_{+})\leavevmode\nobreak\ {\,\scaleobj{1.3}{\mathbbm{e}}}_{1}\left(\frac{1}{2}\log\left[e^{2q_{+}}+e^{2k_{+}}-1\right]-q_{+}\right){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}\left(\frac{1}{2}\log\left[e^{2q_{+}}+e^{2k_{+}}-1\right]-k_{+}\right)\Bigg{\\}}\,,$ and then, inverting the relations $k_{+}^{\prime}=\frac{1}{2}\log\left[e^{2q_{+}}+e^{2k_{+}}-1\right]-q_{+}\,,\qquad q_{+}^{\prime}=\frac{1}{2}\log\left[e^{2q_{+}}+e^{2k_{+}}-1\right]-k_{+}\,,$ (4.120) we get: $q_{+}=q^{\prime}_{+}-\frac{1}{2}\log\left[e^{2k^{\prime}_{+}}+e^{2q^{\prime}_{+}}-e^{2(k^{\prime}_{+}+q^{\prime}_{+})}\right]\,,\qquad k_{+}=k^{\prime}_{+}-\frac{1}{2}\log\left[e^{2k^{\prime}_{+}}+e^{2q^{\prime}_{+}}-e^{2(k^{\prime}_{+}+q^{\prime}_{+})}\right]\,,$ (4.121) and, taking into account the Jacobian of the transformation $\left|e^{-2k_{+}^{\prime}}+e^{-2q_{+}^{\prime}}-1\right|^{-1}$, $\displaystyle\int_{\mathbbm{R}_{+}^{2}}\frac{dk_{+}dq_{+}e^{2(k_{+}+q_{+})}}{\frac{1}{4}\left|e^{2k_{+}}-1\right|\left|e^{2q_{+}}-1\right|}{\,\scaleobj{1.3}{\mathbbm{e}}}_{1}(k_{+}){\,\scaleobj{1.3}{\mathbbm{e}}}_{2}^{\dagger}(q_{+})\Bigg{\\{}\hat{a}(k_{+})\hat{a}^{\dagger}(q_{+})-\frac{\frac{1}{2}\left|e^{2k_{+}}-1\right|}{e^{2k_{+}}}\delta(k_{+}-q_{+})$ (4.122) $\displaystyle-\frac{\hat{a}^{\dagger}\left(q_{+}-\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-e^{2(k_{+}+q_{+})}\right]\right)\hat{a}\left(k_{+}-\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-e^{2(k_{+}+q_{+})}\right]\right)}{\left|1-e^{-2k_{+}}-e^{-2q_{+}}\right|}\Bigg{\\}}=0\,,$ which imposes the following deformed commutators for the creation and annihilation operators:
$\displaystyle\hat{a}(k_{+})\hat{a}^{\dagger}(q_{+})-\frac{\hat{a}^{\dagger}\left(q_{+}-\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-e^{2(k_{+}+q_{+})}\right]\right)\hat{a}\left(k_{+}-\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-e^{2(k_{+}+q_{+})}\right]\right)}{\left|1-e^{-2k_{+}}-e^{-2q_{+}}\right|}=$ (4.123) $\displaystyle\frac{1}{2}\left|1-e^{-2k_{+}}\right|\delta(k_{+}-q_{+})\,.$ All the other commutators forming the bosonic oscillator algebra can be similarly derived. Notice now that, upon commuting $\hat{a}(k_{+})$ and $\hat{a}^{\dagger}(q_{+})$, we get creation and annihilation operators labeled by momentum coordinates that diverge, or become complex, for certain values of $k_{+}$ and $q_{+}$: $q_{+}^{\prime\prime}=q_{+}-\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-e^{2(k_{+}+q_{+})}\right]\,,\qquad k_{+}^{\prime\prime}=k_{+}-\frac{1}{2}\log\left[e^{2k_{+}}+e^{2q_{+}}-e^{2(k_{+}+q_{+})}\right]\,,$ (4.124) When $e^{2k_{+}}=\frac{1}{1-e^{-2q_{+}}}$, both $q_{+}^{\prime\prime}$ and $k_{+}^{\prime\prime}$ diverge. This has to do with the fact that the maps that send the momenta of the on-shell waves in Eq. (4.96) to the momenta of the commuted waves are not maps of $\mathbbm{R}_{+}^{2}$ onto itself. Specifically, when we reached Eq. (4.119), we had to make the coordinate transformation (4.121), which, as a real map, sends the region $e^{2k^{\prime}_{+}}>\frac{1}{1-e^{-2q_{+}^{\prime}}}\,,$ (4.125) into the region $e^{2k_{+}}>1-e^{2q_{+}}\,.$ (4.126) We need to consider what happens beyond those regions, which can be accessed by Lorentz-transforming the momenta, and cannot therefore be ignored if we want to preserve Lorentz invariance. This issue deserves further investigation. ## 5 Conclusions We solved the main problem that obstructed the definition of a genuine $\kappa$-Poincaré-invariant QFT on $\kappa$-Minkowski, defined in terms of “noncommutative” N-point functions.
This was the problem of defining in a $\kappa$-Poincaré-covariant way the algebra of functions of more than one point, which we called $\mathcal{A}^{\bar{\otimes}N}$. We did this at the expense of generality: a covariant algebra can be defined only for the “lightlike” $\kappa$-Minkowski algebra $v^{\mu}v^{\nu}g_{\mu\nu}=0$. We introduced a natural representation of the algebra $\mathcal{A}^{\bar{\otimes}N}$, and found that translation-invariant coordinate differences belong to the maximal Abelian subalgebra of $\mathcal{A}^{\bar{\otimes}N}$, and therefore they are, for all practical purposes, equivalent to commutative functions. This result has a consequence that hugely simplifies the interpretational framework of the QFT: all N-point functions are translation-invariant, and they are therefore commutative. A QFT on $\kappa$-Minkowski can then be defined in terms of a set of standard $N$-point functions, just like any QFT on the ordinary, commutative Minkowski space. We studied explicitly the possible 2-point functions, defined by requiring that they solve the $\kappa$-Klein–Gordon equation and that they are $\kappa$-Poincaré invariant. This gives a Wightman function that is equivalent to the commutative one, with all the dependence on the deformation parameter $\kappa$ disappearing from the theory. All $2$-point functions that can be built from it, like the Pauli–Jordan function, will be therefore undeformed and independent of $\kappa$. With the Pauli–Jordan function, we can impose quantization rules for free complex $\kappa$-Klein–Gordon fields, and look for a representation of the quantum fields in terms of a bosonic oscillator algebra. One finds that the algebra of bosonic oscillators is deformed, similarly to other results in the $\kappa$-QFT literature (_e.g._ [37, 36]). However, the commutation relations of our creation and annihilation operators seem to involve divergent/complex momenta, an issue whose investigation we leave to future works. 
The fact that our $2$-point functions are undeformed motivates the conjecture that all $N$-point functions of the free theory might turn out to be undeformed and independent of $\kappa$, which would make the theory completely indistinguishable from the ordinary, commutative free scalar QFT on Minkowski space. Indeed, this is what happened in [51, 52, 53, 54, 4, 55, 56] (see in particular [56]) for the free scalar QFT on the Moyal–Weyl noncommutative spacetime. In these works, extending the noncommutative algebra of coordinates to a deformed tensor product algebra which is covariant under noncommutative Poincaré transformations resulted in a mostly-commutative algebra, in which all translation-invariant coordinate differences are commutative, just like our result. Both the free and the interacting scalar QFT turn out to be equivalent to the commutative/undeformed one [55, 56]. We proved a similar result only for the free theory, and only for $2$-point functions. One of the first priorities for further work in this direction will be to investigate whether the same holds for all $N$-point functions in the free theory, which seems likely. Then, the next step will be to investigate an interacting theory, and check whether a dependence on $\kappa$ finally appears in interaction vertices. Another interesting issue is the relation of our construction with the approaches based on star products [33, 34, 35, 36, 38, 39, 40, 45, 46, 44, 42]. In particular, [41] focuses on the lightlike $\kappa$-Minkowski spacetime, and, despite being based on a star-product approach whose fundamental ontology is that of commutative functions, it derives some results that are in line with ours so far: the free scalar QFT is undeformed, and a dependence on $\kappa$ seems to be confined to the interacting theory.
An approach based on star products is not obviously related to ours, which is based on a covariant braided $N$-point algebra of coordinates, but it would be very interesting if one could prove a relation between the two. In the case of QFT on the Moyal noncommutative spacetime, the two approaches are fundamentally different and lead to different predictions for the $N$-point functions [55]. We have shown how to obtain, for the free two-point function, a $\kappa$-Poincaré-invariant _on-shell_ theory, by entirely avoiding the Lorentz-breaking parts of the mass shell. This workaround might not work in the interacting theory, which requires loop integrations of off-shell momenta. If these parts of momentum space cannot be avoided, perhaps a breaking of Lorentz symmetry can still be prevented by incorporating into our theory the plane waves that are obtained by boosting the waves belonging to the “Lorentz-breaking” mass-shell beyond the patch of momentum space that is covered by our coordinates. Then, as can be seen in relation (4.52) and the like, one gets logarithms of negative numbers, _i.e._ complex frequencies. This might indicate some sort of damping, and deserves further scrutiny. ## Appendix A: some $\kappa$-Minkowski algebraic calculations In this appendix we explicitly derive some useful identities. We reintroduce $\kappa$, as it will be expedient in some cases to have it be a different constant.
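All the identities derived in this appendix descend from a single commutator, and therefore hold in any representation of the corresponding Lie algebra. This gives a cheap numerical cross-check: in the faithful $2\times 2$ upper-triangular representation $T=(i/\kappa)\,\mathrm{diag}(1,0)$, $X=E_{12}$ (a choice of ours for checking purposes, not part of the construction), the reordering laws (A.5), (A.6) and the Weyl-ordering relation (A.12) can be verified with scipy:

```python
import numpy as np
from scipy.linalg import expm

kappa = 1.3  # any nonzero value of the deformation parameter works here
T = (1j/kappa) * np.diag([1.0, 0.0]).astype(complex)  # "time" generator
X = np.array([[0, 1], [0, 0]], dtype=complex)         # nilpotent "space" generator

# Defining relation (A.1): [T, X] = (i/kappa) X
assert np.allclose(T @ X - X @ T, (1j/kappa)*X)

p0, p1, q0, q1 = 0.7, -1.1, 0.4, 2.3  # arbitrary test momenta

# Reordering law (A.5): e^{ip0 T} e^{ip1 X} = e^{i e^{-p0/kappa} p1 X} e^{ip0 T}
assert np.allclose(expm(1j*p0*T) @ expm(1j*p1*X),
                   expm(1j*np.exp(-p0/kappa)*p1*X) @ expm(1j*p0*T))

# Product of two right-ordered plane waves, Eq. (A.6)
lhs = expm(1j*p1*X) @ expm(1j*p0*T) @ expm(1j*q1*X) @ expm(1j*q0*T)
rhs = expm(1j*(p1 + np.exp(-p0/kappa)*q1)*X) @ expm(1j*(p0 + q0)*T)
assert np.allclose(lhs, rhs)

# Weyl-ordered vs right-ordered waves, Eq. (A.12)
c = (1 - np.exp(-q0/kappa)) / (q0/kappa)
assert np.allclose(expm(1j*(q0*T + q1*X)),
                   expm(1j*c*q1*X) @ expm(1j*q0*T))
```

Agreement in one representation does not by itself prove the algebraic identities; it only detects errors, since any identity following from (A.1) must hold in every representation.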
We start with the commutation rule $[T,X]=\frac{i}{\kappa}X\,,$ (A.1) which can be used repeatedly to prove inductively that $\begin{gathered}T\,X=X(T+i/\kappa)\,,\\\ T^{2}\,X=X(T+i/\kappa)^{2}\,,\\\ \vdots\\\ T^{n}\,X=X(T+i/\kappa)^{n}\,,\end{gathered}$ (A.2) therefore $e^{ip_{0}T}\,X=\sum_{n=0}^{\infty}\frac{(ip_{0})^{n}}{n!}T^{n}\,X=X\sum_{n=0}^{\infty}\frac{(ip_{0})^{n}}{n!}(T+i/\kappa)^{n}=X\,e^{ip_{0}T-p_{0}/\kappa}\,,$ (A.3) and $\begin{gathered}e^{ip_{0}T}\,X=e^{-p_{0}/\kappa}X\,e^{ip_{0}T}\,,\\\ e^{ip_{0}T}\,X^{2}=e^{-2p_{0}/\kappa}X^{2}\,e^{ip_{0}T}\,,\\\ \vdots\\\ e^{ip_{0}T}\,X^{n}=e^{-np_{0}/\kappa}X^{n}\,e^{ip_{0}T}\,,\end{gathered}$ (A.4) so we conclude that $e^{ip_{0}T}e^{ip_{1}X}=e^{ip_{0}T}\sum_{n=0}^{\infty}\frac{(ip_{1})^{n}}{n!}X^{n}=\sum_{n=0}^{\infty}\frac{(ip_{1})^{n}}{n!}e^{-np_{0}/\kappa}X^{n}\,e^{ip_{0}T}=e^{ie^{-p_{0}/\kappa}p_{1}X}e^{ip_{0}T}\,.$ (A.5) The product of two right-ordered plane waves is then $e^{ip_{1}X}e^{ip_{0}T}e^{iq_{1}X}e^{iq_{0}T}=e^{ip_{1}X}e^{ie^{-p_{0}/\kappa}q_{1}X}e^{ip_{0}T}e^{iq_{0}T}=e^{i(p_{1}+e^{-p_{0}/\kappa}q_{1})X}e^{i(p_{0}+q_{0})T}\,.$ (A.6) Similarly, left-ordered plane waves combine in the following way: $e^{ip_{0}T}e^{ip_{1}X}e^{iq_{0}T}e^{iq_{1}X}=e^{i(p_{0}+q_{0})T}e^{i(e^{+q_{0}/\kappa}p_{1}+q_{1})X}\,.$ (A.7) Weyl-ordered waves are a bit trickier. First we need to find their relation with right-ordered waves. To do so, expand Eq.
(A.5) to first order in $p_{0}$: $e^{ip_{1}X}T=\left(T+\frac{p_{1}}{\kappa}X\right)e^{ip_{1}X}\,,$ (A.8) by induction, $e^{ip_{1}X}T^{n}=\left(T+\frac{p_{1}}{\kappa}X\right)^{n}e^{ip_{1}X}\,,$ (A.9) and so $e^{ip_{1}X}e^{ip_{0}T}=e^{ip_{0}\left(T+\frac{p_{1}}{\kappa}X\right)}e^{ip_{1}X}\,.$ (A.10) Multiply now both sides by $e^{-ip_{1}X}$ from the right, and reorder the left-hand side with $T$ to the right: $\begin{gathered}e^{ip_{1}X}e^{ip_{0}T}e^{-ip_{1}X}=e^{ip_{0}\left(T+\frac{p_{1}}{\kappa}X\right)}\,,\\\ e^{i\left(1-e^{-p_{0}/\kappa}\right)p_{1}X}e^{ip_{0}T}=e^{ip_{0}\left(T+\frac{p_{1}}{\kappa}X\right)}\,,\end{gathered}$ (A.11) if we now rename $p_{0}=q_{0}$ and $\frac{p_{1}p_{0}}{\kappa}=q_{1}$, we get the desired expression: $e^{i\left(q_{0}T+q_{1}X\right)}=e^{i\left(\frac{1-e^{-q_{0}/\kappa}}{q_{0}/\kappa}\right)q_{1}X}e^{iq_{0}T}\,.$ (A.12) Now that we know how to translate Weyl-ordered waves into right-ordered ones, we can use the combination law of the latter to derive that of the former. Consider, in fact, the following rewriting of Eq.
(A.6): $e^{i\left(\frac{1-e^{-\frac{p_{0}}{\kappa}}}{p_{0}/\kappa}\right)p_{1}X}e^{ip_{0}T}e^{i\left(\frac{1-e^{-\frac{q_{0}}{\kappa}}}{q_{0}/\kappa}\right)q_{1}X}e^{iq_{0}T}=e^{i\left[\left(\frac{1-e^{-\frac{p_{0}}{\kappa}}}{p_{0}/\kappa}\right)p_{1}+e^{-\frac{p_{0}}{\kappa}}\left(\frac{1-e^{-\frac{q_{0}}{\kappa}}}{q_{0}/\kappa}\right)q_{1}\right]X}e^{i(p_{0}+q_{0})T}\,,$ (A.13) converting the right-hand side into a Weyl-ordered wave through the inverse relation to (A.12), $e^{ik_{1}X}e^{ik_{0}T}=e^{ik_{0}T+i\left(\frac{k_{0}/\kappa}{1-e^{-k_{0}/\kappa}}\right)k_{1}X}\,,$ (A.14) we get: $e^{i\left(p_{0}T+p_{1}X\right)}e^{i\left(q_{0}T+q_{1}X\right)}=e^{i(p_{0}+q_{0})T+i\left(\frac{(p_{0}+q_{0})/\kappa}{1-e^{-(p_{0}+q_{0})/\kappa}}\right)\left[\left(\frac{1-e^{-\frac{p_{0}}{\kappa}}}{p_{0}/\kappa}\right)p_{1}+e^{-\frac{p_{0}}{\kappa}}\left(\frac{1-e^{-\frac{q_{0}}{\kappa}}}{q_{0}/\kappa}\right)q_{1}\right]X}\,.$ (A.15) #### Acknowledgments F.L. acknowledges support from the INFN Iniziativa Specifica GeoSymQFT, the Spanish MINECO under Project No. MDM-2014-0369 of ICCUB (Unidad de Excelencia ‘Maria de Maeztu’), Grant No. FPA2016-76005-C2-1-P. 67985840. F.M. thanks the Action CA18108 QG-MM from the European Cooperation in Science and Technology (COST) and the Foundational Questions Institute (FQXi). ## References * [1] J. Lukierski, A. Nowicki, and H. Ruegg, “Real forms of complex quantum anti-De Sitter algebra U-q(Sp(4:C)) and their contraction schemes,” Phys. Lett. B271 (1991) 321–328, arXiv:hep-th/9108018 [hep-th]. * [2] J. Lukierski, A. Nowicki, and H. Ruegg, “New quantum Poincaré algebra and k deformed field theory,” Phys. Lett. B293 (1992) 344–352, arXiv:hep-th/9108018 [hep-th]. * [3] S. Majid and H. Ruegg, “Bicrossproduct structure of kappa Poincaré group and noncommutative geometry,” Phys. Lett. B334 (1994) 348–354, arXiv:hep-th/9405107 [hep-th]. * [4] P. Aschieri, M. Dimitrijevic, F. Meyer, and J. Wess, “Noncommutative geometry and gravity,” Class. Quant.
Grav. 23 (2006) 1883–1912, arXiv:hep-th/0510059. * [5] G. Amelino-Camelia, G. Gubitosi, and F. Mercati, “Discreteness of area in noncommutative space,” Phys. Lett. B 676 no. 4-5, (Jun, 2009) 180–183, arXiv:0812.3663 [hep-th]. * [6] P. Martinetti, F. Mercati, and L. Tomassini, “Minimal length in quantum space and integrations of the line element in Noncommutative Geometry,” Rev. Math. Phys. 24 (2012) 1250010, arXiv:1106.0261 [math-ph]. * [7] F. Lizzi, M. Manfredonia, and F. Mercati, “The momentum spaces of $\kappa$-minkowski noncommutative spacetime,” Nucl. Phys. B958 (2020) 115117, arXiv:1811.08409 [hep-th]. * [8] S. Majid, Foundations of Quantum Group Theory. Cambridge University Press, 1995. * [9] J. Lukierski, H. Ruegg, A. Nowicki, and V. N. Tolstoy, “q-deformation of poincaré algebra,” Physics Letters B 264 no. 3, (1991) 331 – 338. http://www.sciencedirect.com/science/article/pii/037026939190358W. * [10] S. Majid, “Hopf algebras for physics at the Planck scale,” Classical and Quantum Gravity 5 (Dec, 1988) 1587–1606. https://ui.adsabs.harvard.edu/abs/1988CQGra...5.1587M. * [11] S. Majid, “Quantum groups and noncommutative geometry,” J. Math. Phys. 41 (2000) 3892–3942, arXiv:hep-th/0006167 [hep-th]. * [12] J. Lukierski, Z. Skoda, and M. Woronowicz, “$\kappa$-deformed covariant quantum phase spaces as Hopf algebroids,” Phys. Lett. B750 (2015) 401–406, arXiv:1507.02612 [hep-th]. * [13] F. Lizzi, M. Manfredonia, and F. Mercati, “Localizability in $\kappa$-Minkowski spacetime,” Int. J. Geom. Meth. Mod. Phys. 17 no. supp01, (2020) 2040010, arXiv:1912.07098 [hep-th]. * [14] F. Lizzi, M. Manfredonia, F. Mercati, and T. Poulain, “Localization and Reference Frames in $\kappa$-Minkowski Spacetime,” Phys. Rev. 99 (2019) 085003, arXiv:1811.08409 [hep-th]. * [15] J. Lukierski and H. Ruegg, “Quantum $\kappa$-Poincaré in any dimension,” Phys. Lett. B329 (1994) 189–194, arXiv:hep-th/9310117 [hep-th]. * [16] J. Kowalski-Glikman and S.
Nowak, “Noncommutative space-time of doubly special relativity theories,” Int. J. Mod. Phys. D12 (2003) 299–316, arXiv:hep-th/0204245 [hep-th]. * [17] A. Agostini, G. Amelino-Camelia, and M. Arzano, “Dirac spinors for doubly special relativity and kappa Minkowski noncummutative space-time,” Class. Quant. Grav. 21 (2004) 2179–2202, arXiv:gr-qc/0207003 [gr-qc]. * [18] A. Agostini, F. Lizzi, and A. Zampini, “Generalized Weyl systems and kappa Minkowski space,” Mod. Phys. Lett. A17 (2002) 2105–2126, arXiv:hep-th/0209174 [hep-th]. * [19] J. Kowalski-Glikman, “De sitter space as an arena for doubly special relativity,” Phys. Lett. B547 (2002) 291–296, arXiv:hep-th/0207279 [hep-th]. * [20] J. Kowalski-Glikman and S. Nowak, “Doubly special relativity and de Sitter space,” Class. Quant. Grav. 20 (2003) 4799–4816, arXiv:hep-th/0304101 [hep-th]. * [21] A. Agostini, “kappa-Minkowski representations on Hilbert spaces,” J. Math. Phys. 48 (2007) 052305, arXiv:hep-th/0512114 [hep-th]. * [22] G. Amelino-Camelia, G. Gubitosi, A. Marciano, P. Martinetti, and F. Mercati, “A No-pure-boost uncertainty principle from spacetime noncommutativity,” Phys. Lett. B 671 (2009) 298–302, arXiv:0707.1863 [hep-th]. * [23] G. Amelino-Camelia, G. Gubitosi, A. Marciano, P. Martinetti, F. Mercati, D. Pranzetti, and R. A. Tacchi, “First results of the Noether theorem for Hopf-algebra spacetime symmetries,” Prog. Theor. Phys. Suppl. 171 (2007) 65–78, arXiv:0710.1219 [gr-qc]. * [24] J. M. Carmona, J. L. Cortes, D. Mazon, and F. Mercati, “About locality and the relativity principle beyond special relativity,” Phys. Rev. D84 (2011) 085010, arXiv:1107.0939 [hep-th]. * [25] J. M. Carmona, J. L. Cortes, and F. Mercati, “Relativistic kinematics beyond special relativity,” Phys. Rev. D86 (2012) 084032, arXiv:1206.5961 [hep-th]. * [26] G. Amelino-Camelia, N. Loret, G. Mandanici, and F. Mercati, “Gravity in quantum spacetime,” Int. J Mod. Phys. D 19 no. 14, (Dec, 2010) 2385–2392, arXiv:1007.0851 [gr-qc]. * [27] G. 
Gubitosi and F. Mercati, “Relative Locality in $\kappa$-Poincaré,” Class. Quant. Grav. 30 (2013) 145002, arXiv:1106.5710 [gr-qc]. * [28] F. Mercati, “Quantum $\kappa$-deformed differential geometry and field theory,” Int. J. Mod. Phys. D25 no. 05, (2016) 1650053, arXiv:1112.2426 [math.QA]. * [29] S. Meljanac, D. Meljanac, F. Mercati, and D. Pikutić, “Noncommutative spaces and Poincaré symmetry,” Phys. Lett. B766 (2017) 181–185, arXiv:1610.06716 [hep-th]. * [30] N. Loret, S. Meljanac, F. Mercati, and D. Pikutić, “Vectorlike deformations of relativistic quantum phase-space and relativistic kinematics,” Int. J. Mod. Phys. D26 no. 11, (2017) 1750123, arXiv:1610.08310 [hep-th]. * [31] J. Lukierski, “Kappa-Deformations: Historical Developments and Recent Results,” J. Phys. Conf. Ser. 804 no. 1, (2017) 012028, arXiv:1611.10213 [hep-th]. * [32] F. Mercati and M. Sergola, “Physical Constraints on Quantum Deformations of Spacetime Symmetries,” Nucl. Phys. B933 (2018) 320–339, arXiv:1802.09483 [hep-th]. * [33] P. Kosinski, J. Lukierski, and P. Maslanka, “Local D = 4 field theory on kappa deformed Minkowski space,” Phys. Rev. D 62 (2000) 025004, arXiv:hep-th/9902037. * [34] P. Kosinski, J. Lukierski, and P. Maslanka, “kappa deformed Wigner construction of relativistic wave functions and free fields on kappa-Minkowski space,” Nucl. Phys. B Proc. Suppl. 102 (2001) 161–168, arXiv:hep-th/0103127. * [35] P. Kosinski, P. Maslanka, J. Lukierski, and A. Sitarz, “Generalized kappa deformations and deformed relativistic scalar fields on noncommutative Minkowski space,” in Conference on Topics in Mathematical Physics, General Relativity, and Cosmology on the Occasion of the 75th Birthday of Jerzy F. Plebanski, pp. 255–277. 7, 2003. arXiv:hep-th/0307038. * [36] M. Arzano and A. Marciano, “Fock space, quantum fields and kappa-Poincare symmetries,” Phys. Rev. D 76 (2007) 125005, arXiv:0707.1329 [hep-th]. * [37] M. Daszkiewicz, J. Lukierski, and M. 
Woronowicz, “kappa-deformed statistics and classical fourmomentum addition law,” Mod. Phys. Lett. A 23 (2008) 653–665, arXiv:hep-th/0703200. * [38] L. Freidel, J. Kowalski-Glikman, and S. Nowak, “Field theory on kappa-Minkowski space revisited: Noether charges and breaking of Lorentz symmetry,” Int. J. Mod. Phys. A23 (2008) 2687–2718, arXiv:0706.3658 [hep-th]. * [39] M. Arzano, J. Kowalski-Glikman, and A. Walkus, “Lorentz invariant field theory on kappa-minkowski space,” Class. Quant. Grav. 27 (2010) 025012, arXiv:0908.1974 [hep-th]. * [40] M. Arzano and J. Kowalski-Glikman, “Non-commutative fields and the short-scale structure of spacetime,” Phys. Lett. B771 (2017) 222–226, arXiv:1704.02225 [hep-th]. * [41] T. Jurić, S. Meljanac, and A. Samsarov, “Light-like $\kappa$-deformations and scalar field theory via Drinfeld twist,” J. Phys. Conf. Ser. 634 no. 1, (2015) 012005, arXiv:1506.02475 [hep-th]. * [42] P. Mathieu and J.-C. Wallet, “Gauge theories on $\kappa$-Minkowski spaces: twist and modular operators,” JHEP 05 (2020) 112, arXiv:2002.02309 [hep-th]. * [43] T. Poulain and J. C. Wallet, “$\kappa$-Poincaré invariant quantum field theories with KMS weight,” Phys. Rev. D 98 no. 2, (2018) 025002, arXiv:1801.02715 [hep-th]. * [44] T. Poulain and J.-C. Wallet, “$\kappa$-Poincaré invariant orientable field theories at one-loop,” JHEP 01 (2019) 064, arXiv:1808.00350 [hep-th]. * [45] T. Jurić, T. Poulain, and J.-C. Wallet, “Vacuum energy and the cosmological constant problem in $\kappa$-Poincaré invariant field theories,” Phys. Rev. D 99 no. 4, (2019) 045004, arXiv:1805.09027 [hep-th]. * [46] M. Arzano and L. T. Consoli, “Signal propagation on $\kappa$-Minkowski spacetime and nonlocal two-point functions,” Phys. Rev. D 98 no. 10, (2018) 106018, arXiv:1808.02241 [hep-th]. * [47] F. Mercati and M. Sergola, “Pauli-Jordan function and scalar field quantization in $\kappa$-Minkowski noncommutative spacetime,” Phys. Rev. D98 no. 4, (2018) 045017, arXiv:1801.01765 [hep-th]. 
* [48] F. Mercati and M. Sergola, “Light Cone in a Quantum Spacetime,” Phys. Lett. B787 (2018) 105–110, arXiv:1810.08134 [hep-th]. * [49] N. Huggett, F. Lizzi, and T. Menon, “Missing the point in noncommutative geometry,” Synthese (2021) , arXiv:2006.13035. https://doi.org/10.1007/s11229-020-02998-1. * [50] S. Majid, “Algebras and hopf algebras in braided categories,” arXiv:q-alg/9509023 [q-alg]. * [51] R. Oeckl, “Untwisting noncommutative $R^{d}$ and the equivalence of quantum field theories,” Nucl. Phys. B 581 (2000) 559–574, arXiv:hep-th/0003018. * [52] J. Wess, “Deformed coordinate spaces: Derivatives,” in 1st Balkan Workshop on Mathematical, Theoretical and Phenomenological Challenges Beyond the Standard Model: Perspectives of Balkans Collaboration, pp. 122–128. 2003\. arXiv:hep-th/0408080. * [53] M. Chaichian, P. Kulish, K. Nishijima, and A. Tureanu, “On a Lorentz-invariant interpretation of noncommutative space-time and its implications on noncommutative QFT,” Phys. Lett. B 604 (2004) 98–102, arXiv:hep-th/0408069. * [54] F. Koch and E. Tsouchnika, “Construction of theta-Poincare algebras and their invariants on Mu(theta),” Nucl. Phys. B 717 (2005) 387–403, arXiv:hep-th/0409012. * [55] G. Fiore and J. Wess, “On full twisted Poincare’ symmetry and QFT on Moyal-Weyl spaces,” Phys. Rev. D 75 (2007) 105022, arXiv:hep-th/0701078. * [56] G. Fiore, “Can QFT on Moyal-Weyl spaces look as on commutative ones?,” Prog. Theor. Phys. Suppl. 171 (2007) 54–60, arXiv:0705.1120 [hep-th]. * [57] P. Aschieri, F. Lizzi, and P. Vitale, “Twisting all the way: From Classical Mechanics to Quantum Fields,” Phys. Rev. D 77 (2008) 025037, arXiv:0708.3002 [hep-th]. * [58] A. Borowiec, J. Lukierski, and A. Pachoł, “Twisting and $\kappa$-Poincaré,” J. Phys. A 47 no. 40, (2014) 405203, arXiv:1312.7807 [math-ph]. * [59] T. Juric, S. Meljanac, and D. Pikutic, “Realizations of $\kappa$-Minkowski space, Drinfeld twists and related symmetry algebras,” Eur. Phys. J. C 75 no. 
11, (2015) 528, arXiv:1506.04955 [hep-th]. * [60] A. Blaut, M. Daszkiewicz, J. Kowalski-Glikman, and S. Nowak, “Phase spaces of doubly special relativity,” Phys. Lett. B582 (2004) 82–85, arXiv:hep-th/0312045 [hep-th]. * [61] M. E. Peskin and D. V. Schroeder, An Introduction to quantum field theory. Addison-Wesley, Reading, USA, 1995. http://www.slac.stanford.edu/~mpeskin/QFT.html.
# On some components of Hilbert schemes of curves Flaminio Flamini, Paola Supino F. Flamini, Dipartimento di Matematica, Università degli Studi di Roma “Tor Vergata”, Viale della Ricerca Scientifica 1, 00133 Roma – Italy<EMAIL_ADDRESS>P. Supino, Dipartimento di Matematica e Fisica, Università degli Studi “Roma Tre”, Largo S. L. Murialdo 1, 00146 Roma – Italy<EMAIL_ADDRESS> ###### Abstract. Let $\mathcal{I}_{d,g,R}$ be the union of irreducible components of the Hilbert scheme whose general points parametrize smooth, irreducible curves of degree $d$, genus $g$, which are non–degenerate in the projective space $\mathbb{P}^{R}$. Under some numerical assumptions on $d$, $g$ and $R$, we construct irreducible components of $\mathcal{I}_{d,g,R}$ other than the so-called distinguished component, which dominates the moduli space $\mathcal{M}_{g}$ of smooth genus–$g$ curves; our components are generically smooth and turn out to be of dimension higher than the expected one. The general point of any such component corresponds to a curve $X\subset\mathbb{P}^{R}$ which is a suitable ramified $m$–cover of an irrational curve $Y\subset\mathbb{P}^{R-1}$, $m\geqslant 2$, lying in a surface cone over $Y$. The paper extends some of the results in [12, 13]. ###### Key words and phrases: Hilbert scheme of curves, Brill–Noether theory, ruled surfaces, cones, coverings, Gaussian–Wahl maps. ###### 2010 Mathematics Subject Classification: Primary 14C05; Secondary 14E20, 14F05, 14J10, 14J26, 14H10 This collaboration has benefited from funding from the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata (CUP: E83-C18000100006) and from the MIUR Excellence Department Project awarded to the Department of Mathematics and Physics, University Roma Tre. Both authors are members of INdAM–GNSAGA. ## Introduction Projective varieties are distributed in families, obtained by suitably varying the coefficients of their defining equations.
The study of these families and, in particular, of the properties of their parameter spaces is a central theme in Algebraic Geometry and rests on technical tools, like flatness, base change, etc., as well as on the existence (due to Grothendieck, with refinements by Mumford) of the so-called Hilbert scheme, a closed, projective scheme parametrizing closed projective subschemes with fixed numerical/projective invariants (i.e. the Hilbert polynomial), and having fundamental universal properties. Hilbert schemes have interested several authors over the decades, owing also to deep connections with several other subjects in Algebraic Geometry (cf. e.g. bibliography in [39] for an overview). Indeed, results and techniques in the “projective domain” of the Hilbert schemes have frequently built bridges towards other topics in Algebraic Geometry, both by improving already known results and by providing new ones. The interplay between Hilbert schemes of curves in projective spaces and the Brill–Noether theory of line bundles on curves is one of the milestones of Algebraic Geometry (cf. e.g. [1, 17, 30]). The construction of the moduli space $\mathcal{M}_{g}$ of smooth, genus–$g$ curves (and its generalizations ${\mathcal{M}}_{g,n}$ of moduli spaces of smooth, $n$–pointed, genus–$g$ curves), the proof of its irreducibility and the construction of a natural compactification of it deeply rely on the use of Hilbert schemes of curves (cf. e.g. [2, 19, 31]). Similarly, together with the Deligne–Mumford compactification of ${\mathcal{M}}_{g}$ in [19], the use of Hilbert schemes of curves has also been fundamental in the construction of suitable compactifications of the universal Picard variety (cf. e.g. [10, Theorem, p. 592]).
Besides these examples, the use of Hilbert schemes has been fundamental for several other issues in Algebraic Geometry: unirationality and/or Torelli-type theorems for cubic hypersurfaces and for prime Fano threefolds of given genus have been proved via the use of Hilbert schemes of lines and planes contained in such varieties (cf. e.g. [26, 18, 7, 23, 41, 27, 34]). Important connections between Hilbert schemes parametrizing $k$–linear spaces contained in complete intersections of hyperquadrics and intermediate Jacobians (cf. [22]) are worth mentioning, whereas in [8, 9] the Hilbert schemes of projective scroll surfaces have been related to families of rank–$2$ vector bundles as well as to moduli spaces of (semi)stable ones. Surjectivity of Gaussian–Wahl maps on curves with general moduli ([15, 16]) has deep reflections both on suitable Hilbert schemes of associated cones and on the extendability of such curves (especially in the $K3$–case). Finally, Hilbert schemes parametrizing lines in suitable complete intersections are used in [4] to deduce upper bounds on the minimal gonality of a family of curves covering a very general projective hypersurface of high degree, and in [5, 6] to deduce new results concerning either enumerative properties or a certain “algebraic hyperbolicity” behavior. In the present paper we focus on Hilbert schemes of smooth, irreducible projective curves of given degree and genus, the study of which is classical and goes back to Castelnuovo, Halphen (Castelnuovo bounds and the gap problem) and Severi. Given non-negative integers $d$, $g$ and $R\geqslant 3$, we denote by $\mathcal{I}_{d,g,R}$ the union of all irreducible components of the Hilbert scheme whose general points parametrize smooth, irreducible, non–degenerate curves of degree $d$ and genus $g$ in the projective space $\mathbb{P}^{R}$.
A component of $\mathcal{I}_{d,g,R}$ is said to be regular if it is generically smooth and of the expected dimension; otherwise it is said to be superabundant (cf. § 1.1 for more details). Under suitable numerical assumptions, involving the so-called Brill–Noether number, it is well known that $\mathcal{I}_{d,g,R}$ has a unique irreducible component which dominates the moduli space ${\mathcal{M}}_{g}$ parametrizing (isomorphism classes of) smooth, irreducible genus–$g$ curves (cf. [30] and § 1.1 below). This is called the distinguished component of the Hilbert scheme. In [40], Severi claimed the irreducibility of $\mathcal{I}_{d,g,R}$ when $d{\geqslant}g+R$, and this was actually proved by Ein for $R=3,\,4$ (cf. [24, 25]); further sufficient conditions on $d$ and $g$ ensuring the irreducibility of some $\mathcal{I}_{d,g,R}$ for $R\geqslant 5$ have been found e.g. in [3]. On the other hand, in several cases examples of additional non–distinguished components of $\mathcal{I}_{d,g,R}$ have also been given. Some of these extra components have been constructed by using $m$–sheeted covers of $\mathbb{P}^{1}$ (cf. e.g. [35], [37], etc.), double covers of irrational curves (cf. e.g. [12], [13], etc.), or non–linearly normal curves in projective space (Harris, 1984 unpublished, see e.g. [17, Ch. IV]). In this paper we prove the following: ###### Main Theorem. Let $\gamma\geqslant 10$, $e\geqslant 2\gamma-1$, $R=e-\gamma+1$ and $m\geqslant 2$ be integers.
Set $d:=me\;\;\;{\rm and}\;\;\;g:=m(\gamma-1)+\frac{m(m-1)}{2}e+1.$ Then $\mathcal{I}_{d,g,R}$ contains an irreducible component which is generically smooth and superabundant, having dimension $\lambda_{d,g,R}+\sigma_{d,g,R},$ where $\lambda_{d,g,R}:=(R+1)me-(R-3)\left(m(\gamma-1)+\frac{m(m-1)}{2}e\right)$ is the expected dimension of $\mathcal{I}_{d,g,R}$ whereas the positive integer $\sigma_{d,g,R}:=(R-4)\left[(\gamma-1)(m-1)+1+e+\frac{m(m-3)}{2}e\right]+4(e+1)+em(m-5)$ is the superabundance summand for the dimension of such a component. As an additional result, we explicitly describe a general point of the aforementioned superabundant component (cf. Proposition 2.5 and § 3 below). We want to stress that Main Theorem extends some of the results in [12, 13], which deal with the case $m=2$. The paper consists of three sections. In Section 1 we recall some generalities concerning Hilbert schemes of curves and the associated Brill–Noether theory (cf. § 1.1), Gaussian–Wahl maps and Hilbert schemes of cones (cf. § 1.2) and ramified coverings of curves (cf. § 1.3), which will be used for our analysis. Section 2 deals with the construction of curves $X$ which fill–up an open dense subscheme of the superabundant component of $\mathcal{I}_{d,g,R}$ mentioned in Main Theorem above. Precisely, in § 2.1 we consider more generally, for any $\gamma\geqslant 1$ and $e\geqslant 2\gamma-1$, curves $Y$ of genus $\gamma$ and degree $e$ with general moduli, which are non–special and projectively normal in $\mathbb{P}^{R-1}$ and which fill–up the distinguished component of the related Hilbert scheme ${\mathcal{I}}_{e,\gamma,R-1}$. Then in § 2.2 we consider cones $F=F_{Y}\subset\mathbb{P}^{R}$ extending curves $Y$ as above; we describe abstract resolutions of the cones $F$, together with further cohomological properties (see Proposition 2.2), as well as an explicit parametric description of the parameter space of such cones, as $Y$ varies in the distinguished component of ${\mathcal{I}}_{e,\gamma,R-1}$.
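The numerical content of Main Theorem can be cross-checked directly. The short script below is an editorial sanity check, not part of the paper (the function name `invariants` is ours): over a sample of parameters in the admissible range it verifies that the stated $\lambda_{d,g,R}$ agrees with the general expected-dimension formula $(R+1)d-(R-3)(g-1)$ recalled later in § 1.1, and that the superabundance summand $\sigma_{d,g,R}$ is indeed positive.

```python
# Editorial sanity check of the dimension formulas in Main Theorem
# (not part of the paper; all names below are ours).

def invariants(gamma, e, m):
    """Return (R, d, g, lambda, sigma) for the data of Main Theorem."""
    R = e - gamma + 1
    d = m * e
    g = m * (gamma - 1) + m * (m - 1) // 2 * e + 1
    lam = (R + 1) * m * e - (R - 3) * (m * (gamma - 1) + m * (m - 1) // 2 * e)
    sigma = ((R - 4) * ((gamma - 1) * (m - 1) + 1 + e + m * (m - 3) // 2 * e)
             + 4 * (e + 1) + e * m * (m - 5))
    return R, d, g, lam, sigma

# Sample the admissible range gamma >= 10, e >= 2*gamma - 1, m >= 2.
# (m*(m-1) and m*(m-3) are always even, so the integer divisions are exact.)
for gamma in range(10, 14):
    for e in range(2 * gamma - 1, 2 * gamma + 8):
        for m in range(2, 6):
            R, d, g, lam, sigma = invariants(gamma, e, m)
            # lambda_{d,g,R} matches the general formula (R+1)d - (R-3)(g-1)
            assert lam == (R + 1) * d - (R - 3) * (g - 1)
            # the superabundance summand is a positive integer
            assert sigma > 0
```

For instance, $(\gamma,e,m)=(10,19,2)$ gives $(R,d,g)=(10,38,38)$, $\lambda_{d,g,R}=159$ and $\sigma_{d,g,R}=26$.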
In § 2.3 we construct the desired curves $X$ as curves sitting in cones $F$ as $m$–sheeted ramified covers $\varphi:X\to Y$, where the map $\varphi$ is given by the projection from the vertex of the cone. We prove that such curves $X$ are non–degenerate and linearly normal in $\mathbb{P}^{R}$; moreover, we compute their genus $g$ and establish some other useful cohomological properties (cf. Proposition 2.5). We also prove Lemma 2.6, a technical result which deals with a more general situation involving projections and ramified covers of possibly reducible, connected, nodal curves and which is needed for a certain inductive procedure used in proving Main Theorem (see Lemma 3.3 and the proof of Claim 3.5). Finally, Section 3 focuses on the proof of Main Theorem, which also involves surjectivity of suitable Gaussian–Wahl maps (cf. proof of Claim 3.5). This explains why in this last section, as well as in Main Theorem, the hypothesis $\gamma\geqslant 10$ is required (cf. Proposition 3.2). ### Notation and terminology We work throughout over the field $\mathbb{C}$ of complex numbers. All schemes will be endowed with the Zariski topology. By _variety_ we mean an integral algebraic scheme and by _curve_ we mean a variety of dimension 1. We say that a property holds for a _general_ point $x$ of a variety $X$ if it holds for any point in a Zariski open non–empty subset of $X$. We will interchangeably use the terms rank-$r$ vector bundle on a variety $X$ and rank-$r$ locally free sheaf. To ease notation and when no confusion arises, we sometimes identify line bundles with Cartier divisors, using additive notation in place of multiplicative notation and tensor products; we moreover denote by $\sim$ the linear equivalence of divisors and by $\equiv$ their numerical equivalence. If $\mathcal{P}$ is either a parameter space of a flat family of closed subschemes of a variety $X$, as e.g.
$\mathcal{P}$ a Hilbert scheme, or a moduli space parametrizing geometric objects modulo a given equivalence relation, as e.g. the moduli space of smooth genus–$g$ curves, we will denote by $[Y]$ the parameter point (resp., the moduli point) corresponding to the subscheme $Y\subset X$ (resp., associated to the equivalence class of $Y$). For terminology not recalled here, we refer the reader to [33]. ## 1\. Generalities We briefly recall some generalities and results which will be used in the next sections. ### 1.1. Hilbert schemes and Brill-Noether theory of curves Let $C$ be a smooth, irreducible, projective curve of genus $g>0$. Given positive integers $d$ and $r$, the Brill–Noether locus, $W^{r}_{d}(C)\subseteq{\rm Pic}^{d}(C)$, when not empty, parametrizes degree–$d$ line bundles $L$ on $C$ such that $h^{0}(C,L)\geqslant r+1$. Its expected dimension is given by the so-called Brill–Noether number $\rho(g,r,d):=g-(r+1)(g+r-d).$ (1.1) It is well known that if $C$ has general moduli (i.e. when $C$ corresponds to a general point of the moduli space ${\mathcal{M}}_{g}$ parametrizing isomorphism classes of smooth, genus–$g$ curves), then $W^{r}_{d}(C)$ is empty if $\rho(g,r,d)<0$, whereas it is generically smooth, of the expected dimension $\rho(g,r,d)$, otherwise. Moreover, when $\rho(g,r,d)>0$, $W^{r}_{d}(C)$ is also irreducible and, for a general $L$ parametrized by $W^{r}_{d}(C)$, one has $h^{0}(C,L)=r+1$ (cf. [1, Ch. IV, V, VI]). Brill–Noether theory of line bundles on abstract projective curves $C$ is intimately related to the study of Hilbert schemes parametrizing projective embeddings of such curves.
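As a quick illustration (ours, not from the paper; the helper name `rho` is hypothetical), the Brill–Noether number (1.1) can be checked against a few classical facts about curves with general moduli.

```python
# A small check of the Brill-Noether number (1.1) against classical facts
# about a general curve of genus g (editorial, not part of the paper).

def rho(g, r, d):
    """Brill-Noether number rho(g, r, d) = g - (r+1)(g + r - d)."""
    return g - (r + 1) * (g + r - d)

# A general genus-3 curve is non-hyperelliptic: no g^1_2, i.e. rho < 0.
assert rho(3, 1, 2) == -1
# A general genus-4 curve carries finitely many g^1_3's: rho = 0.
assert rho(4, 1, 3) == 0
# The gonality of a general genus-g curve is floor((g+3)/2), i.e. the
# least d with rho(g, 1, d) >= 0.
for g in range(2, 30):
    gon = min(d for d in range(1, g + 2) if rho(g, 1, d) >= 0)
    assert gon == (g + 3) // 2
```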
Indeed, assume for simplicity that $L\in W^{r}_{d}(C)$ is very ample and such that $h^{0}(C,L)=r+1$; hence one has an embedding $C\stackrel{{\scriptstyle\phi_{|L|}}}{{\hookrightarrow}}\mathbb{P}^{r}$ induced by the complete linear system $|L|$ determined by $L$, whose image $Y:=\phi_{|L|}(C)$ is a smooth, irreducible curve of degree $d$, genus $g$ which is non–degenerate in $\mathbb{P}^{r}$. If we denote by $Hilb_{d,g,r}$ the Hilbert scheme parametrizing closed subschemes of ${\mathbb{P}}^{r}$ with Hilbert polynomial $P(t)=dt+(1-g)$, then $Y$ corresponds to a point of $Hilb_{d,g,r}$. If we denote by $\mathcal{I}_{d,g,r}$ the union of all irreducible components of $Hilb_{d,g,r}$ whose general points parametrize smooth, irreducible, non–degenerate curves in $\mathbb{P}^{r}$, then $Y$ represents a point $[Y]\in\mathcal{I}_{d,g,r}$. When $[Y]$ is a smooth point of $\mathcal{I}_{d,g,r}$, then $Y$ is said to be unobstructed in ${\mathbb{P}}^{r}$. If $N_{Y/{\mathbb{P}}^{r}}$ denotes the normal bundle of $Y$ in ${\mathbb{P}}^{r}$, one has $T_{[Y]}(\mathcal{I}_{d,g,r})\cong H^{0}(Y,N_{Y/{\mathbb{P}}^{r}})\;\;\;\;{\rm and}\;\;\;\;\chi(Y,N_{Y/{\mathbb{P}}^{r}})\leqslant\dim_{[Y]}\,\mathcal{I}_{d,g,r}\leqslant h^{0}(Y,N_{Y/{\mathbb{P}}^{r}}),$ (1.2) where the integer $\chi(Y,N_{Y/{\mathbb{P}}^{r}})=h^{0}(Y,N_{Y/{\mathbb{P}}^{r}})-h^{1}(Y,N_{Y/{\mathbb{P}}^{r}})$ in (1.2) is the so–called expected dimension of $\mathcal{I}_{d,g,r}$ at $[Y]$, and the right-hand equality in (1.2) holds if and only if $Y$ is unobstructed in ${\mathbb{P}}^{r}$ (for full details, cf. e.g. [39, Cor. 3.2.7, Thm. 4.3.4, 4.3.5]). The expected dimension of $\mathcal{I}_{d,g,r}$, given by $\chi(Y,N_{Y/{\mathbb{P}}^{r}})$, can be easily computed with the use of the normal and Euler sequences for $Y\subset\mathbb{P}^{r}$, and it turns out to be $\lambda_{d,g,r}:=\chi(Y,N_{Y/{\mathbb{P}}^{r}})=(r+1)d-(r-3)(g-1).$ (1.3) A component of $\mathcal{I}_{d,g,r}$ is said to be regular if it is both reduced (i.e.
generically smooth) and of the expected dimension $\lambda_{d,g,r}$; otherwise it is said to be superabundant. By the above, any component $\mathcal{I}$ of $\mathcal{I}_{d,g,r}$ has a natural rational map $\mu_{g}:{\mathcal{I}}\dasharrow\mathcal{M}_{g},$ which sends a general $[Y]\in{\mathcal{I}}$ to the moduli point $[C]\in\mathcal{M}_{g}$ as above. The map $\mu_{g}$ is called the modular morphism of $\mathcal{I}$; with the same terminology as in [38, Introduction], the dimension of ${\rm Im}(\mu_{g})$ is called the number of moduli of $\mathcal{I}$. The expected dimension of ${\rm Im}(\mu_{g})$ is ${\rm min}\\{3g-3,\,3g-3+\rho(g,r,d)\\}$, where $\rho(g,r,d)$ is as in (1.1), and it is called the expected number of moduli of $\mathcal{I}$. The expression of the expected number of moduli of $\mathcal{I}$ is the obvious postulation which comes from the well–known interpretation, in terms of maps between vector bundles on the Picard scheme, of the existence of special line bundles on $C$ (cf. [1, Ch. IV, V, VI]). In this set–up, we recall the following result due to Sernesi: ###### Theorem 1.1. (cf. [38, Theorem, p.26]) For any integers $r\geqslant 2,\;d$ and $g$ such that $d\geqslant r+1\;\;\;{\rm and}\;\;\;d-r\leqslant g\leqslant\frac{r(d-r)-1}{r-1}$ there exists a component $\mathcal{I}$ of ${\mathcal{I}}_{d,g,r}$ which has the expected number of moduli. Moreover, $[Y]\in\mathcal{I}$ general corresponds to an unobstructed curve $Y\subset\mathbb{P}^{r}$ such that $h^{1}(Y,N_{Y/{\mathbb{P}}^{r}})=0$ and whose embedding in $\mathbb{P}^{r}$ is given by a complete linear system. ###### Remark 1.2. We want to stress the “geometric counter–part” of the numerical hypotheses appearing in Theorem 1.1. For $Y$ as in Theorem 1.1, let $(C,L)$ be the pair consisting of a smooth, irreducible, abstract projective curve $C$ of genus $g$ and of $L\in{\rm Pic}^{d}(C)$ such that $Y=\phi_{|L|}(C)$.
Then, the condition $d\geqslant r+1$ simply means that the curve $Y$ is of positive genus and non–degenerate in $\mathbb{P}^{r}$, whereas $d-r\leqslant g$, i.e. $g+r-d\geqslant 0$, translates, via Riemann–Roch, into the condition that the index of speciality $i(L):=g+r-d$ of $L$ is non–negative. Finally, the condition $g\leqslant\frac{r(d-r)-1}{r-1}$ reads $g-rg+rd-r^{2}-1\geqslant 0$, which is nothing but $\rho(g,r,d)+(r+g-d)=\rho(g,r,d)+i(L)\geqslant 1$, i.e. it is a “Brill–Noether type” condition on the pair $(C,L)$. It is well known (cf. e.g. [30, p. 70]) that, when $\rho(g,r,d)\geqslant 0$, $\mathcal{I}_{d,g,r}$ has a unique component with a dominant modular morphism $\mu_{g}$, i.e. dominating $\mathcal{M}_{g}$; thus such a component has the maximal number of moduli $3g-3$. It is called the distinguished component of $\mathcal{I}_{d,g,r}$ and, in the sequel, we will denote it by $\widehat{\mathcal{I}_{d,g,r}}$ or simply by $\widehat{\mathcal{I}}$, if no confusion arises. As a direct consequence of the uniqueness of $\widehat{\mathcal{I}}$ and of Theorem 1.1, one has: ###### Corollary 1.3. For any integers $r\geqslant 2,\;d$ and $g$ such that $d\geqslant r+1\;\;\;{\rm and}\;\;\;d-r\leqslant g\leqslant\frac{(r+1)(d-r)-1}{r}$ the distinguished component $\widehat{\mathcal{I}}$ of $\mathcal{I}_{d,g,r}$ is not empty. Its general point $[Y]$ corresponds to an unobstructed curve $Y$ in $\mathbb{P}^{r}$ with $h^{1}(Y,N_{Y/{\mathbb{P}}^{r}})=0$ and whose embedding in $\mathbb{P}^{r}$ is given by a complete linear system. Furthermore $\widehat{\mathcal{I}}$ is regular, i.e. generically smooth and of the expected dimension $\lambda_{d,g,r}$. ###### Proof. The condition $g\leqslant\frac{(r+1)(d-r)-1}{r}$ is equivalent to $\rho(g,r,d)\geqslant 1$. Thus we conclude by applying Theorem 1.1, taking into account the discussion in Remark 1.2, and by applying [30, p.
70] and (1.2), as the condition $h^{1}(Y,N_{Y/{\mathbb{P}}^{r}})=0$ implies both the unobstructedness of $Y$ in $\mathbb{P}^{r}$ and the regularity of $\widehat{\mathcal{I}}$. ∎ In [40], Severi claimed the irreducibility of $\mathcal{I}_{d,g,r}$ when $d{\geqslant}g+r$. Severi’s claim was proved by Ein for $r=3,\,4$ (cf. [24, 25]); further sufficient conditions on $d$ and $g$ ensuring the irreducibility of some $\mathcal{I}_{d,g,R}$ for $R\geqslant 5$ have been found e.g. in [3]. On the other hand, in several cases examples of additional non–distinguished components of $\mathcal{I}_{d,g,r}$ have also been given, even in the range $\rho(g,r,d)\geqslant 0$. Some of these extra components have been constructed by using $m$–sheeted covers of $\mathbb{P}^{1}$ (cf. e.g. [35], [37], etc.), double covers of irrational curves (cf. e.g. [12], [13], etc.), or non–linearly normal curves in projective space (the latter approach is contained in a series of examples due to Harris, 1984 unpublished, fully described e.g. in [17, Ch. IV]). In some cases, these extra components have also been proved to be regular (cf. e.g. [17, Ch. IV], [13]). ### 1.2. Gaussian-Wahl maps and cones Let $C$ be a smooth, irreducible projective curve of positive genus $g$ and $L$ be a very–ample line bundle of degree $d$ on $C$. Let $Y\subset\mathbb{P}^{r}$ be the embedding of $C$ by the complete linear system $|L|$. Let $F_{Y}$ (equiv., $F_{C,L}$) denote the cone in $\mathbb{P}^{r+1}$ over $Y$ with vertex at a point $v\in\mathbb{P}^{r+1}\setminus\mathbb{P}^{r}$ (if no confusion arises, in the sequel we simply write $F$). Fundamental properties of such cones are related to the so–called Gaussian–Wahl maps (cf. e.g. [15, 16, 42]), as we briefly recall.
If $\omega_{C}$ denotes the canonical bundle of $C$, one sets $R(\omega_{C},L):={\rm Ker}\left[H^{0}(C,\omega_{C})\otimes H^{0}(C,L)\longrightarrow H^{0}(C,\omega_{C}\otimes L)\right],$ where the previous map is the natural multiplication map among global sections. One can consider the map $\Phi_{\omega_{C},L}:R(\omega_{C},L)\to H^{0}(\omega_{C}^{\otimes 2}\otimes L),$ (1.4) defined locally by $\Phi_{\omega_{C},L}(s\otimes t):=s\,dt-t\,ds$, which is called the Gaussian–Wahl map. As customary, one sets $\gamma_{C,L}:={\rm cork}(\Phi_{\omega_{C},L})=\dim\,{\rm Coker}(\Phi_{\omega_{C},L}).$ (1.5) For the reader’s convenience, we recall here the statement of [16, Prop. 2.1], limiting ourselves to its part $(2.8)$, which will be used in Section 3; indeed, the full statement of [16, Prop. 2.1] is quite long, with many exceptions, and also dwells on curves of low genus, whereas Section 3 will focus on curves of genus at least $10$. ###### Proposition 1.4. (cf. [16, Proposition 2.1–(2.8)]) Let $g\geqslant 6$ be an integer. Assume that $C$ is a smooth, projective curve of genus $g$ with general moduli and that $L\in{\rm Pic}^{d}(C)$ is general. Then, $\gamma_{C,L}=0$ (i.e. $\Phi_{\omega_{C},L}$ is surjective) if $d\geqslant\left\\{\begin{array}[]{cc}g+12&{\rm for}\;\;6\leqslant g\leqslant 8\\\ g+9&{\rm for}\;\;g\geqslant 9\end{array}\right..$ Gaussian–Wahl maps can be used to compute the dimension of the tangent space to the Hilbert scheme of surfaces in $\mathbb{P}^{r+1}$ at points representing cones $F$ as above (cf. e.g. [16]). Indeed, let $\mathcal{W}$ be any irreducible component of the Hilbert scheme of curves ${\mathcal{I}}_{d,g,r}$ and let $\mathcal{H}(\mathcal{W})$ be the variety which parametrizes the family of cones $F\subset\mathbb{P}^{r+1}$ over curves $Y\subset\mathbb{P}^{r}$ representing points in $\mathcal{W}$. Then, one has: ###### Proposition 1.5. (cf. [16, Cor. 2.20–(c), Prop. 2.12–(2.13) and (2.15)]) Set notation and conditions as in Proposition 1.4.
Let $r=d-g$, $Y\subset\mathbb{P}^{r}$ and $\mathcal{W}$ be any component of ${\mathcal{I}}_{d,g,r}$ s.t. $[Y]\in\mathcal{W}$ is general. Then: (i) The Gaussian–Wahl map $\Phi_{\omega_{Y},{\mathcal{O}}_{Y}(1)}$ is surjective, i.e. $\gamma_{Y,{\mathcal{O}}_{Y}(1)}=0$. (ii) $h^{0}(Y,N_{Y/\mathbb{P}^{r}}\otimes{\mathcal{O}}_{Y}(-1))=r+1$. (iii) $h^{0}(Y,N_{Y/\mathbb{P}^{r}}\otimes{\mathcal{O}}_{Y}(-j))=0$, for any $j\geqslant 2$. (iv) $\mathcal{H}(\mathcal{W})$ is a generically smooth component of the Hilbert scheme parametrizing surfaces of degree $d$ and sectional genus $g$ in $\mathbb{P}^{r+1}$. Moreover, $\dim\,\mathcal{H}(\mathcal{W})=(r+1)(d+1)-(r-3)(g-1)=\lambda_{d,g,r}+(r+1)$ (1.6) and, for $[F]\in\mathcal{H}(\mathcal{W})$ general, the associated cone $F=F_{Y}$ is unobstructed in $\mathbb{P}^{r+1}$. ### 1.3. Ramified coverings of curves Let $Y$ be a scheme. A morphism $\varphi:X\to Y$ is called a _covering map of degree_ $m$ if $\varphi_{*}\mathcal{O}_{X}$ is a locally free ${\mathcal{O}}_{Y}$–sheaf of rank $m$. A map $\varphi$ is a covering map (or simply a cover) if and only if it is finite and flat. In particular, if $Y$ is smooth and irreducible and $X$ is Cohen–Macaulay, then every finite, surjective morphism $\varphi:X\to Y$ is a covering map (cf. e.g. [11, p. 1361]). When $\varphi:X\to Y$ is a covering map of degree $m$, one has a natural exact sequence $0\to\mathcal{O}_{Y}\xrightarrow{\varphi^{\sharp}}\varphi_{\ast}\mathcal{O}_{X}\to\mathcal{T}^{\vee}_{\varphi}\to 0\,,$ where $\mathcal{T}^{\vee}_{\varphi}:=Coker(\varphi^{\sharp})$ is the so-called _Tschirnhausen bundle_ associated to the covering map $\varphi$, which is of rank $m-1$ on $Y$. Since $\textrm{Char}\;(\mathbb{C})=0$, the trace map $\textrm{tr}\colon\varphi_{*}\mathcal{O}_{X}\rightarrow\mathcal{O}_{Y}$ gives rise to a splitting of the previous exact sequence, so that one has $\varphi_{*}\mathcal{O}_{X}=\mathcal{O}_{Y}\oplus\mathcal{T}^{\vee}_{\varphi}$ (cf. e.g.
[11, 13, 20, 21]). If $X$ and $Y$ are in particular smooth, irreducible curves and $\varphi:X\to Y$ is a covering map of degree $m$, according to [33, Ex. IV.2.6–(d), p. 306], the branch divisor $B_{\varphi}$ of $\varphi$ is such that $\left(\bigwedge^{m}(\varphi_{*}\mathcal{O}_{X})\right)^{\otimes 2}\cong\mathcal{O}_{Y}(-B_{\varphi}).$ (1.7) If moreover $X$ (resp., $Y$) has genus $g$ (resp., $\gamma$), then one has $\deg B_{\varphi}=-2\deg\left(\bigwedge^{m}(\varphi_{*}\mathcal{O}_{X})\right)=2(g-1)-2m(\gamma-1)$. As for the ramification divisor $R_{\varphi}$, for which $\varphi(R_{\varphi})=B_{\varphi}$, the Riemann–Hurwitz formula gives $\omega_{X}={\varphi}^{*}(\omega_{Y})+\mathcal{O}_{X}(R_{\varphi}).$ (1.8) In this set–up, we recall the pinching construction described in [21, § 3.1]. Let $\varphi:X\to Y$ be a degree–$m$ covering map between smooth irreducible curves $X$ and $Y$. Let $Z$ be the reduced, reducible nodal curve $Z:=X\cup Y,$ where $X$ and $Y$ are attached nodally at $\delta$ distinct points as follows: let $y_{i}\in Y$ and $x_{i}\in X$ be points such that $\varphi(x_{i})=y_{i}$, $1\leqslant i\leqslant\delta$. Set $D:=\sum_{i=1}^{\delta}y_{i}$, let $\mathcal{O}_{D}$ be the structure sheaf of $D$ and let $\mathcal{J}$ be the kernel of the map $\varphi_{*}\mathcal{O}_{X}\oplus\mathcal{O}_{Y}\to\mathcal{O}_{D},$ defined around each $y_{i}$ as $(f,g)\mapsto f(x_{i})-g(y_{i}),\;\;\forall\;\;1\leqslant i\leqslant\delta.$ Then $\mathcal{J}\subset\varphi_{*}\mathcal{O}_{X}\oplus\mathcal{O}_{Y}$ is an $\mathcal{O}_{Y}$-subalgebra of $\varphi_{*}\mathcal{O}_{X}\oplus\mathcal{O}_{Y}$ and $\mathrm{Spec}_{Y}(\mathcal{J})=Z=X\cup Y$. $D$ is called the set of nodes of $Z$. Let $\psi:Z\to Y$ be the natural induced finite and surjective map. Since $Y$ is smooth and irreducible and $Z$ is l.c.i. (so in particular Cohen–Macaulay), from what was recalled above the map $\psi$ is a covering map of degree $m+1$. In this set–up, one has the following: ###### Proposition 1.6. (cf.
[21, Lemma 3.2]) Let $\varphi:X\to Y$ be a covering map of degree $m$ between smooth irreducible curves $X$ and $Y$. Let $\psi:Z\to Y$ be the covering map of degree $m+1$ induced by the pinching construction whose set of nodes is $D$. Then, the following exact sequence of vector bundles on $Y$ $0\to\mathcal{T}_{\varphi}\to\mathcal{T}_{\psi}\to\mathcal{O}_{Y}(D)\to 0$ holds, where $\mathcal{T}^{\vee}_{\varphi}$ and $\mathcal{T}^{\vee}_{\psi}$ are the Tschirnhausen bundles associated to the covering maps $\varphi$ and $\psi$, respectively. ## 2\. Curves and cones In this section we first construct families of non–special curves $Y$ of any positive genus $\gamma$ and of degree $e\geqslant 2\gamma-1$ in a projective space, which turn out to fill–up the distinguished component $\widehat{\mathcal{I}}$ of the related Hilbert scheme (cf. § 2.1). After that, we deal with the family ${\mathcal{H}}(\widehat{\mathcal{I}})$, as in § 1.2, which parametrizes cones extending curves in $\widehat{\mathcal{I}}$, i.e. cones having curves in $\widehat{\mathcal{I}}$ as hyperplane sections. We describe an abstract resolution of a general point of ${\mathcal{H}}(\widehat{\mathcal{I}})$, and compute $\dim\;{\mathcal{H}}(\widehat{\mathcal{I}})$ via an explicit parametric description (cf. § 2.2). To conclude the section, for cones $F$ parametrized by ${\mathcal{H}}(\widehat{\mathcal{I}})$ we construct smooth, irreducible curves $X\subset F$, of suitable degree $d$ and genus $g$, which turn out to be $m$–sheeted ramified covers of curves $Y$ varying in the distinguished component $\widehat{\mathcal{I}}$ (cf. § 2.3). ### 2.1. Curves in distinguished components Let $\gamma>0$ and $e\geqslant 2\gamma-1$ be integers. Let $C$ be a smooth, irreducible, projective curve of genus $\gamma$ and let ${\mathcal{O}}_{C}(E)\in{\rm Pic}^{e}(C)$ be a general line bundle. Thus, ${\mathcal{O}}_{C}(E)$ is very–ample and non–special (i.e. $h^{1}(C,{\mathcal{O}}_{C}(E))=0$).
By Riemann–Roch, we set $R:=h^{0}(C,{\mathcal{O}}_{C}(E))=e-\gamma+1,$ (2.1) so that $|{\mathcal{O}}_{C}(E)|$ defines an embedding $C\stackrel{{\scriptstyle\phi_{|E|}}}{{\hookrightarrow}}\mathbb{P}^{R-1}$, whose image we denote from now on by $Y:=\phi_{|E|}(C)$. Taking into account [29, Thm. 1], we therefore have: $Y\;\mbox{is a smooth, projective curve of genus $\gamma>0$, degree $e\geqslant 2\gamma-1$, which is projectively normal in $\mathbb{P}^{R-1}$. }$ (2.2) As in § 1.1, one has in particular that $[Y]\in{\mathcal{I}}_{e,\gamma,R-1}$. If we let $[C]$ vary in $\mathcal{M}_{\gamma}$ and, for any such $C$, let ${\mathcal{O}}_{C}(E)$ vary in ${\rm Pic}^{e}(C)$, the next result shows that the corresponding curves $Y\subset\mathbb{P}^{R-1}$ fill–up the distinguished component $\widehat{\mathcal{I}}:=\widehat{\mathcal{I}_{e,\gamma,R-1}}$, which also turns out to be regular. ###### Proposition 2.1. Let $\gamma>0$ and $e\geqslant 2\gamma-1$ be integers. Let $C$ be a smooth, projective curve of genus $\gamma$ with general moduli, and let ${\mathcal{O}}_{C}(E)\in{\rm Pic}^{e}(C)$ be a general line bundle. Let $Y:=\phi_{|E|}(C)\subset\mathbb{P}^{R-1}$, where $R=e-\gamma+1$. Then, $Y$ is a smooth, irreducible curve of degree $e$ and genus $\gamma$ which is projectively normal in $\mathbb{P}^{R-1}$, as an embedding of $C$ via the complete linear system $|E|$, and such that $h^{1}(Y,N_{Y/\mathbb{P}^{R-1}})=0$. It corresponds to a general point of the distinguished component $\widehat{\mathcal{I}}:=\widehat{\mathcal{I}_{e,\gamma,R-1}}$ of the Hilbert scheme ${\mathcal{I}}_{e,\gamma,R-1}$, which is regular of dimension $\dim\,\widehat{\mathcal{I}}=\lambda_{e,\gamma,R-1}=Re-(R-4)(\gamma-1).$ (2.3) ###### Proof. The numerical assumptions and [29, Thm. 1] imply that $E$ is very–ample, non–special and that $Y\subset\mathbb{P}^{R-1}$ is projectively normal, the equality $R=e-\gamma+1$ simply following from the non–speciality of $E$ and from Riemann–Roch.
Under our assumptions, the numerical hypotheses of Corollary 1.3 hold true. Indeed, as explained in Remark 1.2 we have the following: since $Y\subset\mathbb{P}^{R-1}$ is non–degenerate and of positive genus $\gamma$, the condition $e\geqslant R$ is certainly satisfied; concerning the condition $\gamma\geqslant e-(R-1)$, i.e. $i(E)\geqslant 0$, it certainly holds by the non–speciality of $E$; finally, the non–speciality of $E$ gives $\rho(\gamma,R-1,e)+i(E)=\rho(\gamma,R-1,e)=\gamma\geqslant 1$, therefore $\gamma\leqslant\frac{R(e-(R-1))-1}{R-1}$ as in Corollary 1.3 certainly holds (cf. Remark 1.2). Thus, by Corollary 1.3, $[Y]$ corresponds to a point in the distinguished component $\widehat{\mathcal{I}}$ of $\mathcal{I}_{e,\gamma,R-1}$ and is such that $h^{1}(Y,N_{Y/\mathbb{P}^{R-1}})=0$, i.e. $\widehat{\mathcal{I}}$ is generically smooth and of the expected dimension $\lambda_{e,\gamma,R-1}$, which equals $Re-(R-4)(\gamma-1)$, as follows from (1.3). ∎ ### 2.2. Cones extending curves in $\widehat{\mathcal{I}}$ With notation as in § 1.2, here we will deal with the family of cones ${\mathcal{H}}(\widehat{\mathcal{I}})$, where $\widehat{\mathcal{I}}=\widehat{\mathcal{I}_{e,\gamma,R-1}}$ is the distinguished component in Proposition 2.1 above. For $[Y]\in\widehat{\mathcal{I}}$ general, we will denote by $F:=F_{Y}\subset\mathbb{P}^{R}$ a cone over $Y$ with general vertex $v\in\mathbb{P}^{R}\setminus\mathbb{P}^{R-1}$. In order to describe a suitable smooth, abstract resolution of the cones $F$, we recall the following general facts. Let $C$ be a smooth, irreducible projective curve of genus $\gamma>0$ and let ${\mathcal{O}}_{C}(E)\in{\rm Pic}^{e}(C)$ be a general line bundle of degree $e\geqslant 2\gamma-1$. Consider the rank–two, normalized vector bundle $\mathfrak{F}:=\mathcal{O}_{C}\oplus\mathcal{O}_{C}(-E)$ on $C$ and let $S:=\mathbb{P}(\mathfrak{F})=\mathrm{Proj}_{C}(\mathrm{Sym}(\mathfrak{F}))$ be the associated geometrically ruled surface over $C$.
One has the structural morphism $\rho:S\to C$ such that $\rho^{-1}(p)=f_{p}$, for any $p\in C$, where $f_{p}\cong\mathbb{P}^{1}$ denotes the fiber of the ruling of $S$ over the point $p\in C$. A general fiber of the ruling of $S$ will be simply denoted by $f$. $S$ is endowed with two natural sections, $C_{0}$ and $C_{1}$, both isomorphic to $C$, and such that $C_{0}\cdot C_{1}=0$, $C_{0}^{2}=-C_{1}^{2}=-e$. The section $C_{0}$ (resp., $C_{1}$) corresponds to the exact sequence $0\to\mathcal{O}_{C}\to\mathfrak{F}\to\mathcal{O}_{C}(-E)\to 0\;\;\;({\rm resp.,}\;\;\;0\to\mathcal{O}_{C}(-E)\to\mathfrak{F}\to\mathcal{O}_{C}\to 0).$ Moreover, one has ${\rm Pic}(S)\cong\mathbb{Z}[\mathcal{O}_{S}(C_{0})]\oplus\rho^{*}({\rm Pic}(C))$ and ${\rm Num}(S)\cong\mathbb{Z}\oplus\mathbb{Z}$ (cf. e.g. [33, V.2]). To ease notation, for any $D\in{\rm Div}(C)$, we will simply set $\rho^{*}(D):=D\,f$. If $K_{S}$ (resp., $K_{C}$) denotes a canonical divisor of $S$ (resp., of $C$), one has (cf. e.g. [33, V.2]): $C_{1}\sim C_{0}+E\;f\;\;\;{\rm and}\;\;\;K_{S}\sim-2C_{0}+(K_{C}-E)\,f.$ (2.4) ###### Proposition 2.2. Let $C$ be a smooth, irreducible projective curve of genus $\gamma>0$ and ${\mathcal{O}}_{C}(E)\in{\rm Pic}^{e}(C)$ be a general line bundle of degree $e\geqslant 2\gamma-1$. Consider the normalized, rank–two vector bundle $\mathfrak{F}:=\mathcal{O}_{C}\oplus\mathcal{O}_{C}(-E)$ on $C$ and let $S:=\mathbb{P}(\mathfrak{F})$, together with the natural sections $C_{0}$ and $C_{1}$, where $C_{1}\sim C_{0}+E\;f,\;C_{0}\cdot C_{1}=0,\;C_{0}^{2}=-C_{1}^{2}=-e$. Then: (i) The linear system $|{\mathcal{O}_{S}}(C_{1})|$ is base–point–free and not composed with a pencil. It induces a morphism $\Psi:=\Psi_{|{\mathcal{O}}_{S}(C_{1})|}:S\to\mathbb{P}^{R},$ where $R=e-\gamma+1$. (ii) $\Psi$ is an isomorphism, outside the section $C_{0}\subset S$, onto its image $F:=\Psi(S)\subset\mathbb{P}^{R}$, whereas it contracts $C_{0}$ at a point $v\in\mathbb{P}^{R}$. 
(iii) $F$ is a cone with vertex $v$ over $Y:=\Psi(C_{1})\cong C$, where $Y\subset\mathbb{P}^{R-1}$ is a hyperplane section of $F$ not passing through $v$, which is smooth, irreducible, non–degenerate, of degree $e$ and genus $\gamma$, and is also projectively normal in $\mathbb{P}^{R-1}$. (iv) The cone $F\subset\mathbb{P}^{R}$ is projectively normal, of degree $\deg F=e$, of sectional genus and speciality $\gamma$. In particular, $h^{0}(F,{\mathcal{O}}_{F}(1))=R+1=e-\gamma+2$ and $h^{1}(F,{\mathcal{O}}_{F}(1))=\gamma$. (v) For any $m\geqslant 2$, one has $h^{0}(F,\mathcal{O}_{F}(m))=\frac{m(m+1)}{2}e-m(\gamma-1)+1$. ###### Proof. (i) From [33, Ex. V.2.11 (a), p. 386] one deduces that $|{\mathcal{O}}_{S}(C_{1})|$ is base–point–free and not composed with a pencil. Therefore $\Psi$ is a morphism and its image is a surface. Now, from (2.4), we have $h^{0}(S,{\mathcal{O}}_{S}(C_{1}))=h^{0}(S,{\mathcal{O}}_{S}(C_{0}+Ef))=h^{0}(C,{\mathcal{O}}_{C}(E))+h^{0}(C,{\mathcal{O}}_{C})=(e-\gamma+1)+1=R+1,$ where the second equality follows from Leray’s isomorphism, the projection formula and the fact that $\rho_{*}({\mathcal{O}}_{S}(C_{0}+Ef))=\rho_{*}({\mathcal{O}}_{S}(C_{0})\otimes{\mathcal{O}}_{S}(Ef))={\mathfrak{F}}\otimes{\mathcal{O}}_{C}(E)=\left(\mathcal{O}_{C}\oplus\mathcal{O}_{C}(-E)\right)\otimes{\mathcal{O}}_{C}(E)=\mathcal{O}_{C}(E)\oplus\mathcal{O}_{C},$ whereas the third and the last equality follow, respectively, from the fact that $E$ is non–special of degree $e$ on $C$ of genus $\gamma$ and from (2.1). (ii) Since $E$ is very–ample on $C$, [28, Prop. 23] implies that the morphism $\Psi$ is an isomorphism onto its image $F$ outside the section $C_{0}$ of $S$. On the other hand, $C_{1}\cdot C_{0}=(C_{0}+Ef)\cdot C_{0}=-e+e=0$, i.e. $|C_{1}|$ contracts the section $C_{0}$ to a point $v\in\mathbb{P}^{R}$ which is off $Y=\Psi(C_{1})$, the isomorphic image of the section $C_{1}\cong C$. (iii) All the fibers of the ruling of $S$ are embedded as lines, as $C_{1}\cdot f=1$.
Since $C_{0}$ is contracted to a point $v$ and since $C_{0}\cdot f=1$, for any fiber $f$ of the ruling of $S$, it follows that any line $\ell:=\Psi(f)$ passes through $v$; thus $F=\Psi(S)$ is a cone over $Y$, of vertex $v\in\mathbb{P}^{R}$. From the isomorphism $C_{1}\cong C$, one also deduces $\Psi_{|_{C_{1}}}\cong\phi_{|\mathcal{O}_{C}(E)|}$, as the following diagram summarizes: ${S}$${F\subset\mathbb{P}^{R}}$${C\;\;\;\;}$${\;\;\;\;\;Y\subset\mathbb{P}^{R-1},}$$\scriptstyle{\Psi_{|\mathcal{O}_{S}(C_{1})|}}$$\scriptstyle{\rho}$$\scriptstyle{\pi_{v}}$$\scriptstyle{\phi_{|\mathcal{O}_{C}(E)|}}$ (2.5) where $\pi_{v}$ denotes the projection from the vertex point $v$. Since ${\mathcal{O}}_{F}(1)$ is induced by ${\mathcal{O}}_{S}(C_{1})$, it is clear that $Y$ is a hyperplane section of $F$ so $Y\subset\mathbb{P}^{R-1}$ is of degree $e$, genus $\gamma$ and it is projectively normal in $\mathbb{P}^{R-1}$, as it follows from [29, Thm. 1]. (iv) One has $\deg\,F=Y^{2}=C_{1}^{2}=e$; moreover, since $Y\cong C$ is a hyperplane section, then $F$ has sectional genus $\gamma$. Now, $h^{0}(F,{\mathcal{O}}_{F}(1))=h^{0}(S,{\mathcal{O}}_{S}(C_{1}))=R+1=e-\gamma+2$, as computed in (i); whereas $h^{1}(F,{\mathcal{O}}_{F}(1))=h^{1}(S,{\mathcal{O}}_{S}(C_{1}))$ so, from the exact sequence $0\to{\mathcal{O}}_{S}\to{\mathcal{O}}_{S}(C_{1})\stackrel{{\scriptstyle r_{C_{1}}}}{{\longrightarrow}}{\mathcal{O}}_{C_{1}}(C_{1})\cong{\mathcal{O}}_{C}(E)\to 0$ one gets $h^{0}(S,{\mathcal{O}}_{S}(C_{1}))=R+1,\;\,\;h^{0}(C_{1},{\mathcal{O}}_{C_{1}}(C_{1}))=h^{0}(C,{\mathcal{O}}_{C}(E))=R,\;\;\;h^{1}(C_{1},{\mathcal{O}}_{C_{1}}(C_{1}))=h^{1}(C,{\mathcal{O}}_{C}(E))=0,$ as $E$ is non–special. By Leray’s isomorphism and projection formula one also gets $h^{1}(S,{\mathcal{O}}_{S})=h^{1}(C,{\mathcal{O}}_{C})=\gamma$. Thus the map $H^{0}(r_{C_{1}})$, induced in cohomology by the map $r_{C_{1}}$, is surjective hence, from the above exact sequence, one gets $h^{1}(S,{\mathcal{O}}_{S}(C_{1}))=\gamma$. 
At last, since the general hyperplane section of $F$ is a projectively normal curve in $\mathbb{P}^{R-1}$, it follows that $F$ is projectively normal in $\mathbb{P}^{R}$ (cf. e.g. [8, Proof of Lemma 5.7, Rem. 5.8]). (v) By the very definition of $F$, one has $h^{0}(F,\mathcal{O}_{F}(m))=h^{0}(S,\mathcal{O}_{S}(mC_{1}))$. From [33, Ex. III.8.3, p. 253] it follows that $h^{0}(S,\mathcal{O}_{S}(mC_{1}))=h^{0}(S,\mathcal{O}_{S}(mC_{0}+mEf))=\sum_{k=0}^{m}h^{0}(C,\mathcal{O}_{C}((m-k)E))=\sum_{j=0}^{m}h^{0}(C,\mathcal{O}_{C}(jE)).$ Since $E$ is non–special on $C$, so is any divisor $jE$, for any integer $1\leqslant j\leqslant m$. Therefore, by Riemann–Roch on $C$, one has $h^{0}(F,\mathcal{O}_{F}(m))=\sum_{j=0}^{m}h^{0}(C,\mathcal{O}_{C}(jE))=1+\left[1+2+3+\ldots+(m-1)+m\right]e-m\gamma+m=\frac{m(m+1)}{2}e-m(\gamma-1)+1,$ as stated. ∎ Proposition 2.2 allows us to give an explicit parametric description of the family of cones ${\mathcal{H}}(\widehat{\mathcal{I}})$, where $\widehat{\mathcal{I}}:=\widehat{\mathcal{I}_{e,\gamma,R-1}}$ is the distinguished component of the Hilbert scheme $\mathcal{I}_{e,\gamma,R-1}$ as in Proposition 2.1. For the reader’s convenience, we first report here a special case of [9, Lemma 6.3], which is needed for the parametric description of ${\mathcal{H}}(\widehat{\mathcal{I}})$. ###### Lemma 2.3. With notation and assumptions as in Proposition 2.2, assume further that ${\rm Aut}(C)=\{Id\}$ (this, in particular, happens when e.g. $C$ has general moduli). Let $G_{F}\subset{\rm PGL}(R+1,\mathbb{C})$ denote the projective stabilizer of $F$, i.e. the subgroup of projectivities of $\mathbb{P}^{R}$ which fix $F$ as a cone. Then $G_{F}\cong{\rm Aut}(S)$ and $\dim\,G_{F}=h^{0}(C,{\mathcal{O}}_{C}(E))+1=R+1=e-\gamma+2$. ###### Proof. There is an obvious inclusion $G_{F}\hookrightarrow{\rm Aut}(S)$; we want to show that this is actually a group isomorphism. Let $\sigma\in{\rm Aut}(S)$ be any automorphism of $S$.
Since $C_{0}$ is the unique section of $S$ with negative self–intersection, $\sigma(C_{0})=C_{0}$, i.e. $\sigma$ induces an automorphism of $C_{0}\cong C$. The assumption ${\rm Aut}(C)=\{Id\}$ implies that $\sigma$ fixes $C_{0}$ pointwise. Now, from the fact that $C_{1}\sim C_{0}+E\,f$, it follows that $\sigma^{*}(C_{1})\sim\sigma^{*}(C_{0})+\sigma^{*}(E\,f)=C_{0}+E\,f\sim C_{1}$. Therefore, since $|C_{1}|$ corresponds to the hyperplane linear system of $F=\Psi(S)$, one deduces that any automorphism $\sigma\in{\rm Aut}(S)$ is induced by a projective transformation of $F$. The rest of the proof directly follows from cases [36, Theorem 2–(2) and (3)] and from [36, Lemma 6]: indeed the condition ${\rm Aut}(C)=\{Id\}$ implies that ${\rm Aut}(S)\cong{\rm Aut}_{C}(S)$; furthermore, since $C_{0}$ is the unique section of negative self–intersection on $S$, $\dim\,G_{F}=h^{0}(C,{\mathcal{O}}_{C}(E))+1$ follows by using the description of ${\rm Aut}_{C}(S)$ in [36, Theorem 2]. ∎ With the use of Proposition 2.2 and of Lemma 2.3, one can explicitly describe the family of cones ${\mathcal{H}}(\widehat{\mathcal{I}})$, where $\widehat{\mathcal{I}}$ is the distinguished component in Proposition 2.1, and also compute its dimension. Parametric description of ${\mathcal{H}}(\widehat{\mathcal{I}})$: letting $[C]$ vary in ${\mathcal{M}}_{\gamma}$ and, for any such $C$, letting ${\mathcal{O}}_{C}(E)$ vary in ${\rm Pic}^{e}(C)$, cones $F$ arising as in Proposition 2.2 fill up the component ${\mathcal{H}}(\widehat{\mathcal{I}})$, which depends on the following parameters: * • $3\gamma-3$, since $[C]$ varies in ${\mathcal{M}}_{\gamma}$, plus * • $\gamma$, which are the parameters on which ${\mathcal{O}}_{C}(E)\in{\rm Pic}^{e}(C)$ depends, plus * • $(R+1)^{2}-1=\dim\;{\rm PGL}(R+1,\mathbb{C}),$ minus * • $\dim\,G_{F}$, which is the dimension of the group of projectivities of $\mathbb{P}^{R}$ fixing a general $F$ arising from this construction.
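The parameter count above can be double-checked symbolically. The following sympy snippet is only a verification aid for the reader (the use of a computer algebra system is our addition, not part of the paper): it forms the sum $3\gamma-3+\gamma+\big((R+1)^{2}-1\big)-(R+1)$, using $\dim G_{F}=R+1$ from Lemma 2.3, and confirms that, after substituting $R=e-\gamma+1$, it equals $R(e+1)-(R-4)(\gamma-1)=\lambda_{e,\gamma,R-1}+R$, with $\lambda_{e,\gamma,R-1}=Re-(R-4)(\gamma-1)$:

```python
# Symbolic check of the parameter count for H(I^): with dim G_F = R + 1
# (Lemma 2.3) and R = e - gamma + 1, the count
#   (3*gamma - 3) + gamma + ((R+1)^2 - 1) - (R+1)
# should agree with R(e+1) - (R-4)(gamma-1) = lambda_{e,gamma,R-1} + R.
from sympy import symbols, expand

e, gamma = symbols('e gamma')
R = e - gamma + 1

param_count = (3*gamma - 3) + gamma + ((R + 1)**2 - 1) - (R + 1)
closed_form = R*(e + 1) - (R - 4)*(gamma - 1)
lam = R*e - (R - 4)*(gamma - 1)   # expected dimension of I_{e,gamma,R-1}

assert expand(param_count - closed_form) == 0
assert expand(closed_form - (lam + R)) == 0
```

Both differences expand to the zero polynomial in $e$ and $\gamma$, in agreement with (2.6) and (2.7) below.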
From Lemma 2.3 it follows that $\dim\,G_{F}=R+1$, so $\dim\;{\mathcal{H}}(\widehat{\mathcal{I}})=4\gamma-3+(R+1)^{2}-(R+2).$ (2.6) From Proposition 2.1 we know that $[Y]\in\widehat{\mathcal{I}}$ general has $h^{1}(Y,N_{Y/\mathbb{P}^{R-1}})=0$; moreover, since $R=e-\gamma+1$, it is a straightforward computation to check that (2.6) equals the expression in (1.6) with the choice ${\mathcal{W}}=\widehat{\mathcal{I}}$, $r=R-1$, $d=e$ and $g=\gamma$, namely $\dim\;{\mathcal{H}}(\widehat{\mathcal{I}})=R(e+1)-(R-4)(\gamma-1)=\lambda_{e,\gamma,R-1}+R.$ (2.7) ###### Remark 2.4. The previous parametric description of ${\mathcal{H}}(\widehat{\mathcal{I}})$ can be formalized by taking into account the schematic construction of ${\mathcal{H}}(\widehat{\mathcal{I}})$, which deals with universal Picard varieties over $\mathcal{M}_{\gamma}$. To do so, we follow the procedure of [14, § 2]. Let ${\mathcal{M}}^{0}_{\gamma}$ be the Zariski open subset of the moduli space ${\mathcal{M}}_{\gamma}$ whose points correspond to isomorphism classes of curves of genus $\gamma$ without non-trivial automorphisms. By definition, ${\mathcal{M}}^{0}_{\gamma}$ is a fine moduli space, i.e. it has a universal family $p:{\mathcal{C}}\to{\mathcal{M}}^{0}_{\gamma}$, where ${\mathcal{C}}$ and ${\mathcal{M}}^{0}_{\gamma}$ are smooth schemes and $p$ is a smooth morphism. ${\mathcal{C}}$ can be identified with the Zariski open subset ${\mathcal{M}}^{0}_{\gamma,1}$ of the moduli space ${\mathcal{M}}_{\gamma,1}$ of smooth, $1$–pointed, genus–$\gamma$ curves, whose points correspond to isomorphism classes of pairs $[(C,x)]$, with $x\in C$ a point and $C$ a smooth curve of genus $\gamma$ without non-trivial automorphisms. On ${\mathcal{M}}^{0}_{\gamma,1}$ there is again a universal family $p_{1}:{\mathcal{C}}_{1}\to{\mathcal{M}}^{0}_{\gamma,1}$, where ${\mathcal{C}}_{1}={\mathcal{C}}\times_{{\mathcal{M}}^{0}_{\gamma}}{\mathcal{C}}$.
The family $p_{1}$ has a natural regular global section $\delta$ whose image is the diagonal. By means of $\delta$, for any integer $k$, we have the universal family of Picard varieties of order $k$ over ${\mathcal{M}}^{0}_{\gamma,1}$, i.e. $p_{1}^{(k)}:{\mathcal{P}ic}^{(k)}\to{\mathcal{M}}^{0}_{\gamma,1}$ (cf. [14, § 2]) and, setting $\mathcal{Z}_{k}:={\mathcal{C}}_{1}\times_{{\mathcal{M}}^{0}_{\gamma,1}}{\mathcal{P}ic}^{(k)},$ we have a Poincaré line-bundle ${\mathcal{L}}_{k}$ on $\mathcal{Z}_{k}$ (cf. a relative version of [1, p. 166-167]). For any closed point $[(C,x)]\in{\mathcal{M}}^{0}_{\gamma,1}$, its fibre via $p_{1}^{(k)}$ is isomorphic to ${\rm Pic}^{(k)}(C)$. Take $k=e\geqslant 2\gamma-1$ and let $\pi_{2}:\mathcal{Z}_{e}\to{\mathcal{P}ic}^{(e)}$ be the projection onto the second factor. For a general point $u:=[(C,x),{\mathcal{O}}_{C}(E)]\in{\mathcal{P}ic}^{(e)}$, the restriction of ${\mathcal{L}}_{e}$ to $\pi_{2}^{-1}(u)$ is isomorphic to ${\mathcal{O}}_{C}(E)\in{\rm Pic}^{e}(C)$ for $[(C,x)]\in{\mathcal{M}}^{0}_{\gamma,1}$ general; one has $\mathcal{E}_{e}:={\mathcal{O}}_{\mathcal{Z}_{e}}\oplus{\mathcal{L}}_{e}$ as a rank–two vector bundle on $\mathcal{Z}_{e}$. The fibre of $\mathcal{E}_{e}$ over $u=[(C,x),{\mathcal{O}}_{C}(E)]\in{\mathcal{P}ic}^{(e)}$ is the rank–two vector bundle $\mathfrak{E}_{u}=\mathfrak{F}_{u}(E):={\mathcal{O}}_{C}\oplus{\mathcal{O}}_{C}(E)$ on $C$, where ${\mathfrak{F}}_{u}={\mathcal{O}}_{C}(-E)\oplus{\mathcal{O}}_{C}$ is as in § 2.2 and where $[(C,x)]\in{\mathcal{M}}^{0}_{\gamma,1}$ is general. Moreover, the sheaf $(\pi_{2})_{*}(\mathcal{E}_{e})$ is free of rank $R+1=e-\gamma+2$ on a suitable dense, open subset $\mathcal{U}$ of ${\mathcal{P}ic}^{(e)}$; therefore, on $\mathcal{U}$, we have functions $s_{0},\ldots,s_{R}$ such that, for each point $u\in\mathcal{U}$, $s_{0},\ldots,s_{R}$ computed at $u=[(C,x),{\mathcal{O}}_{C}(E)]$ span the space of sections of the corresponding vector bundle $\mathfrak{E}_{u}=\mathfrak{F}_{u}(E)$.
There is a natural morphism $\Psi_{e}:{\mathcal{P}ic}^{(e)}\times{\rm PGL}(R+1,\mathbb{C})\to{\rm Hilb}(e,\gamma,R),$ where ${\rm Hilb}(e,\gamma,R)$ denotes the Hilbert scheme of surfaces in ${\mathbb{P}}^{R}$ of degree $e$ and sectional genus $\gamma$: given a pair $(u,\omega)$, embed $S_{u}:=\mathbb{P}(\mathfrak{E}_{u})$ into $\mathbb{P}^{R}$ via the sections $s_{0},\ldots,s_{R}$ computed at $u$, compose with the projectivity $\omega$ and take the image. Since ${\mathcal{P}ic}^{(e)}\times{\rm PGL}(R+1,\mathbb{C})$ is irreducible, by Proposition 2.2, ${\mathcal{H}}(\widehat{\mathcal{I}})$ is the closure of the image of the above map in the Hilbert scheme. By construction, ${\mathcal{H}}(\widehat{\mathcal{I}})$ dominates ${\mathcal{M}}_{\gamma}$ and its general point represents a cone $F\subset\mathbb{P}^{R}$ as in Proposition 2.2. From the previous construction, for $[F]\in{\mathcal{H}}(\widehat{\mathcal{I}})$ general, one has $\dim\;\Psi^{-1}_{e}([F])=\dim G_{F}+1$. From Lemma 2.3, one has $\dim\,G_{F}=R+1$ so $\dim\;{\mathcal{H}}(\widehat{\mathcal{I}})=4\gamma-3+(R+1)^{2}-(R+2)$ as in (2.6). ### 2.3. Curves on cones and ramified coverings In this section, we construct suitable ramified $m$–covers of $Y\subset\mathbb{P}^{R-1}$, for $[Y]\in\widehat{\mathcal{I}}$ general in the distinguished component $\widehat{\mathcal{I}}$, with the use of cones $F$ parametrized by ${\mathcal{H}}(\widehat{\mathcal{I}})$. Our approach extends the strategy used in [13], which deals with double covers. Using notation and assumptions as in Proposition 2.2, for any integer $m\geqslant 1$, let $C_{m}\in|\mathcal{O}_{S}(mC_{1})|$ be a general member of the linear system on $S$ and let $X_{m}:=\Psi(C_{m})\subset F$ denote its image. ###### Proposition 2.5. For any integer $m\geqslant 1$, one has: (i) $X_{m}$ is a smooth, irreducible curve of degree $\deg\,X_{m}=me$, which is non–degenerate and linearly normal in $\mathbb{P}^{R}$.
(ii) $X_{m}$ is obtained as the intersection of the cone $F$ with a hypersurface of degree $m$ in $\mathbb{P}^{R}$. (iii) The projection $\pi_{v}$ from the vertex $v\in F$ gives rise to a morphism $\varphi_{m}:X_{m}\to Y$, which is a degree–$m$ covering map induced on $X_{m}$ by the ruling of the cone $F$. (iv) The geometric genus of $X_{m}$ is $g_{m}:=g(X_{m})=m(\gamma-1)+\frac{m(m-1)}{2}e+1.$ (2.8) (v) For any $j\geqslant m$, the line bundle ${\mathcal{O}}_{X_{m}}(j)$ is non–special and such that $h^{0}(X_{m},{\mathcal{O}}_{X_{m}}(j))=jme-m(\gamma-1)-\frac{m(m-1)}{2}e,\;\;\forall\;\;j\geqslant m.$ (2.9) ###### Proof. For $m=1$, $X_{1}=Y$ as in Proposition 2.2 and there is nothing else to prove. Therefore, from now on we will focus on $m\geqslant 2$. (i) Since $C_{m}$ is a smooth, irreducible curve on $S$ and since $C_{m}\cdot C_{0}=mC_{1}\cdot C_{0}=0$, then $C_{m}$ is isomorphically embedded via $\Psi$ onto its image $X_{m}\subset F\subset\mathbb{P}^{R}$, which does not pass through the vertex $v\in F$. Moreover, $\deg\,X_{m}=C_{m}\cdot C_{1}=mC_{1}\cdot C_{1}=mC_{1}^{2}=me$. Tensoring the exact sequence defining $C_{m}$ on $S$ by $\mathcal{O}_{S}(C_{1})$, we get $0\to{\mathcal{O}}_{S}((1-m)C_{1})\to{\mathcal{O}}_{S}(C_{1})\stackrel{{\scriptstyle r_{C_{1}}}}{{\longrightarrow}}{\mathcal{O}}_{C_{m}}(C_{1})\cong{\mathcal{O}}_{X_{m}}(1)\to 0.$ Since $m\geqslant 2$, then $h^{0}({\mathcal{O}}_{S}((1-m)C_{1}))=0$. Moreover, by Serre duality, $h^{1}(S,{\mathcal{O}}_{S}((1-m)C_{1}))=h^{1}(S,\omega_{S}\otimes{\mathcal{O}}_{S}((m-1)C_{1})).$ From the facts that $\Psi$ is birational, $C_{1}^{2}=e>0$ and $(m-1)>0$, it follows that ${\mathcal{O}}_{S}((m-1)C_{1})$ is big and nef, so $h^{1}(S,\omega_{S}\otimes{\mathcal{O}}_{S}((m-1)C_{1}))=0$, by the Kawamata–Viehweg vanishing theorem.
Thus, $H^{0}(X_{m},{\mathcal{O}}_{X_{m}}(1))\cong H^{0}(C_{m},{\mathcal{O}}_{C_{m}}(C_{1}))\cong H^{0}(S,{\mathcal{O}}_{S}(C_{1}))$, which implies that $X_{m}$ is non–degenerate and linearly normal, as it follows from Proposition 2.2–(iv). (ii) Since $C_{m}\sim mC_{1}$ on $S$ and since $C_{1}$ induces the hyperplane section of $F$, it follows that $X_{m}\in|{\mathcal{O}}_{F}(m)|$. (iii) Taking into account diagram (2.5), the projection from the vertex $v$ induces the morphism $\varphi_{m}$. Since $C_{m}\cdot f=mC_{1}\cdot f=m$ and since any fiber $f$ is embedded by $\Psi$ as a line of $F$, $\varphi_{m}$ is induced by the ruling of the cone. As $Y$ is smooth, irreducible and all the fibers of $\varphi_{m}$ have constant length $m$, then $\varphi_{m}$ is a finite, flat morphism from $X_{m}$ to $Y$ (cf. e.g. [39]). Therefore, $\varphi_{m}$ is a covering map of degree $m$ as in § 1.3. (iv) The genus of $X_{m}$ equals the genus of $C_{m}$. Therefore, to compute $g_{m}$ we can apply the adjunction formula on $S$ and the Riemann-Hurwitz formula as in (1.8) to the map $\varphi_{m}:C_{m}\to C_{1}$ induced by the fibers of the ruling of $S$ (to ease notation, we use the same symbol as for the map $\varphi_{m}:X_{m}\to Y$ induced by the projection from the vertex $v$ of the cone $F$). If $R_{\varphi_{m}}$ denotes the ramification divisor of $\varphi_{m}$ on $C_{m}$, by Riemann–Hurwitz (1.8) one has ${\mathcal{O}}_{C_{m}}(R_{\varphi_{m}})\cong{\mathcal{O}}_{C_{m}}(K_{C_{m}}-\varphi_{m}^{*}(K_{C_{1}}))$.
By the adjunction formula, for $j=1,\,m$, the canonical divisor $K_{C_{j}}$ is induced on $C_{j}$ by the divisor $K_{S}+C_{j}$ on $S$, which is $K_{S}+C_{j}\sim(j-2)C_{0}+(j-1)Ef+K_{C}f,\;\;\;j=1,m.$ Therefore one has ${\mathcal{O}}_{C_{m}}(R_{\varphi_{m}})\cong{\mathcal{O}}_{C_{m}}((m-1)C_{1})\;\;\;{\rm and}\;\;\;\deg\,R_{\varphi_{m}}=(m-1)C_{1}\cdot C_{m}=m(m-1)C_{1}^{2}=m(m-1)e.$ (2.10) Using the Riemann–Hurwitz formula (1.8), one therefore gets $2g_{m}-2=m(2\gamma-2)+m(m-1)e$ which gives (2.8). (v) Since $\deg\,X_{m}=me$, then $\deg\;{\mathcal{O}}_{X_{m}}(j)=jme$ whereas, from above, $\deg\;\omega_{X_{m}}=2g_{m}-2=2m(\gamma-1)+m(m-1)e$. Since $e\geqslant 2\gamma-1$, it is a straightforward computation to check that if $j\geqslant m$ then $\deg\;{\mathcal{O}}_{X_{m}}(j)>\deg\;\omega_{X_{m}},$ which implies the non–speciality of ${\mathcal{O}}_{X_{m}}(j)$ for any $j\geqslant m$. The computation of $h^{0}(X_{m},{\mathcal{O}}_{X_{m}}(j))$ then reduces to a direct application of Riemann–Roch on the curve $X_{m}$. ∎ We conclude the section with a general result, involving covering maps and projections, which in particular applies to smooth, irreducible curves $X_{m}$ on $F$ as above and which extends [13, Lemma 4, Cor. 5] to the reducible, connected case. This will be used in the proof of our Main Theorem (cf. proof of Claim 3.5). ###### Lemma 2.6. Let $Z\subset\mathbb{P}^{R}$ be a non–degenerate, connected, projective curve, which is possibly reducible and which has at most nodes as singularities. Let $D=\mathrm{Sing}(Z)$ denote its scheme of nodes, whose cardinality we denote by $\delta$ (in particular $\delta=0$ and $D=\emptyset$ if e.g. $Z$ is smooth and irreducible). Let $H$ be a hyperplane in $\mathbb{P}^{R}$ and $v\in\mathbb{P}^{R}\setminus(H\cup Z)$ be a point. Assume that the projection $\pi_{v}:Z\to H\cong\mathbb{P}^{R-1}$ from the point $v$ is such that $Y:=\pi_{v}(Z)\subset H$ is smooth and irreducible. Let $R_{\pi_{v}}$ be the ramification divisor of $\pi_{v}$.
Then $R_{\pi_{v}}$ is a Cartier divisor on $Z$ and the following exact sequence holds $0\to\mathcal{L}_{Z}\to N_{Z/\mathbb{P}^{R}}\to\pi^{*}_{v}(N_{Y/\mathbb{P}^{R-1}})\to 0,$ (2.11) where $N_{Z/\mathbb{P}^{R}}$ denotes the normal sheaf of $Z$, which is locally free on $Z$, and $\mathcal{L}_{Z}$ is a line bundle on $Z$ such that $\deg\,\mathcal{L}_{Z}=\deg\,Z+\deg\;R_{\pi_{v}}+\delta.$ If, in particular, $Z=X_{m}$ as in Proposition 2.5, for some $m\geqslant 2$, and $v$ is the vertex of the cone $F=F_{Y}$, then $\pi_{v}(X_{m})=Y$, where $Y$ is a hyperplane section of $F$ not passing through $v$ as in Proposition 2.2, and (2.11) reads $0\to{\mathcal{O}}_{X_{m}}(R_{\pi_{v}})\otimes{\mathcal{O}}_{X_{m}}(1)\to N_{X_{m}/\mathbb{P}^{R}}\to\pi^{*}_{v}(N_{Y/\mathbb{P}^{R-1}})\to 0.$ (2.12) ###### Proof. If $\mathcal{I}_{Z/\mathbb{P}^{R}}$ denotes the ideal sheaf of $Z$ in $\mathbb{P}^{R}$ then, since $Z$ is at most nodal, $N_{Z/\mathbb{P}^{R}}:={\mathcal{H}om}({\mathcal{I}}_{Z/\mathbb{P}^{R}},\mathcal{O}_{Z})$ and $T_{\mathbb{P}^{R}}|_{Z}:={\mathcal{H}om}(\Omega^{1}_{\mathbb{P}^{R}},\mathcal{O}_{Z})$ are locally free of ranks $R-1$ and $R$, respectively (cf. [38, page 30]).
If we take into account the projection $\pi_{v}:Z\to Y\subset H$, one has $\pi^{*}_{v}(\mathcal{O}_{Y})\cong\mathcal{O}_{Z}$ and $\pi^{*}_{v}(\mathcal{O}_{Y}(1))\cong\mathcal{O}_{Z}(1)$; thus, considering the Euler sequences of $Z$ and $Y$ $\displaystyle 0$ $\displaystyle\to\mathcal{O}_{Z}\to\mathcal{O}_{Z}(1)^{\oplus(R+1)}\to T_{\mathbb{P}^{R}}|_{Z}\to 0$ $\displaystyle 0$ $\displaystyle\to\mathcal{O}_{Y}\to\mathcal{O}_{Y}(1)^{\oplus R}\to T_{\mathbb{P}^{R-1}}|_{Y}\to 0$ and pulling back to $Z$ via $\pi_{v}$ the second Euler sequence, one deduces the following exact diagram ${0}$${0}$${0}$${\mathcal{O}_{Z}(1)}$${\mathcal{K}er\,\alpha}$${0}$${\mathcal{O}_{Z}}$${\mathcal{O}_{Z}(1)^{\oplus(R+1)}}$${T_{\mathbb{P}^{R}}|_{Z}}$${0}$${0}$${\mathcal{O}_{Z}}$${\mathcal{O}_{Z}(1)^{\oplus R}}$${\pi_{v}^{*}(T_{\mathbb{P}^{R-1}}|_{Y})}$${0}$${0}$${0}$${0}$$\scriptstyle{\alpha}$ where the map $\alpha$ is surjective by the Snake Lemma. The exactness of the diagram implies that $\mathcal{K}er\,\alpha\cong\mathcal{O}_{Z}(1)$. Hence, from the right–most exact column of the diagram, we get $\displaystyle 0\to\mathcal{O}_{Z}(1)\to T_{\mathbb{P}^{R}}|_{Z}\to\pi_{v}^{*}(T_{\mathbb{P}^{R-1}}|_{Y})\to 0.$ (2.13) If $\Omega^{1}_{Z}$ denotes the cotangent sheaf (or the sheaf of Kähler differentials) on $Z$, then its dual $\Theta_{Z}:={\mathcal{H}om}(\Omega^{1}_{Z},\mathcal{O}_{Z})$ is not locally free, but it is torsion free (cf. [38]) and it is called the sheaf of derivations of $\mathcal{O}_{Z}$ (when $Z$ is smooth and irreducible, $\Omega_{Z}^{1}$ coincides with the canonical bundle, whereas $\Theta_{Z}$ coincides with the tangent bundle). At last, $T^{1}_{Z}:={\mathcal{E}xt}^{1}(\Omega^{1}_{Z},\mathcal{O}_{Z})$ is called the first cotangent sheaf of $Z$, which is a torsion sheaf supported on ${\rm Sing}(Z)$. Since by assumption $Z$ is at most nodal, one has $T^{1}_{Z}=0$ (i.e. the zero sheaf) when $Z$ is smooth, and $T^{1}_{Z}\cong{\mathcal{O}}_{D}$, where $D$ is the set of nodes of $Z$, otherwise.
By [38, page 30, (1.2)], one has the exact sequence $0\to\Theta_{Z}\to T_{\mathbb{P}^{R}}|_{Z}\to N_{Z/\mathbb{P}^{R}}\to T^{1}_{Z}\to 0.$ (2.14) Putting together (2.13) and (2.14) and taking into account the map $\pi_{v}:Z\to Y$, one gets the following exact diagram: ${0}$${0}$${0}$${\mathcal{K}er\,\pi_{v*}}$${\mathcal{O}_{Z}(1)}$${\mathcal{K}er\,\beta}$${0}$${\Theta_{Z}}$${T_{\mathbb{P}^{R}}|_{Z}}$${N_{Z/\mathbb{P}^{R}}}$${T^{1}_{Z}}$${0}$${0}$${\pi^{*}_{v}(T_{Y})}$${\pi^{*}_{v}(T_{\mathbb{P}^{R-1}}|_{Y})}$${\pi^{*}_{v}(N_{Y|\mathbb{P}^{R-1}})}$${0}$${\mathcal{C}oker\,\pi_{v*}}$${0}$${0}$${0}$$\scriptstyle{\pi_{v*}}$$\scriptstyle{\beta}$ where $\beta$ is defined by the diagram. From [38], the sequence (2.14) splits in two exact sequences $\displaystyle 0\to\Theta_{Z}\to T_{\mathbb{P}^{R}}|_{Z}\to N^{\prime}_{Z}\to 0$ $\displaystyle 0\to N^{\prime}_{Z}\to N_{Z/\mathbb{P}^{R}}\to T^{1}_{Z}\to 0,$ where $N^{\prime}_{Z}$ is the equi–singular sheaf. Hence, the previous exact diagram gives rise to the following: ${0}$${0}$${0}$${\mathcal{K}er\,\pi_{v*}}$${\mathcal{O}_{Z}(1)}$${\mathcal{K}er\,\beta^{\prime}}$${0}$${\Theta_{Z}}$${T_{\mathbb{P}^{R}}|_{Z}}$${N^{\prime}_{Z}}$${0}$${0}$${\pi^{*}_{v}(T_{Y})}$${\pi^{*}_{v}(T_{\mathbb{P}^{R-1}}|_{Y})}$${\pi^{*}_{v}(N_{Y/\mathbb{P}^{R-1}})}$${0}$${\mathcal{C}oker\,\pi_{v*}}$${0}$${0}$${0}$$\scriptstyle{\pi_{v*}}$$\scriptstyle{\beta^{\prime}}$ (2.15) where $\beta^{\prime}$ is induced by $\beta$ from the previous diagram. By the Snake Lemma, one has therefore $\displaystyle 0\to\mathcal{K}er\,\pi_{v*}\to\mathcal{O}_{Z}(1)\to\mathcal{K}er\,\beta^{\prime}\to\mathcal{C}oker\,\pi_{v*}\to 0.$ (2.16) Since $\pi_{v}^{*}(T_{Y})$ is a line bundle on $Z$ and $\Theta_{Z}$ has generically rank $1$ on $Z$, if it were $\mathcal{K}er\,\pi_{v*}\neq 0$ then it would be a torsion sheaf, which is a contradiction by (2.16) and the fact that $\mathcal{O}_{Z}(1)$ is a line bundle on $Z$. 
Therefore (2.16) gives $\displaystyle 0\to\mathcal{O}_{Z}(1)\to\mathcal{K}er\,\beta^{\prime}\to\mathcal{C}oker\,\pi_{v*}\to 0.$ (2.17) Since by the right–most column of diagram (2.15) the sheaf $\mathcal{K}er\,\beta^{\prime}$ has generically rank $1$ and, by the left–most column of diagram (2.15), $\mathcal{C}oker\,\pi_{v*}$ is a torsion sheaf, from (2.17) it follows that $\mathcal{K}er\,\beta^{\prime}$ is a line bundle, whereas $\mathcal{C}oker\,\pi_{v*}\cong\mathcal{O}_{R_{\pi_{v}}}$, where $R_{\pi_{v}}$ is the (effective, Cartier) ramification divisor of the projection $\pi_{v}$. Thus, $\mathcal{K}er\,\beta^{\prime}$ is a line bundle on $Z$ which, by (2.17), is isomorphic to $\mathcal{O}_{Z}(R_{\pi_{v}})\otimes\mathcal{O}_{Z}(1)$. Therefore, the sequence (2.17) reads as $\displaystyle 0\to\mathcal{O}_{Z}(1)\to\mathcal{O}_{Z}(R_{\pi_{v}})\otimes\mathcal{O}_{Z}(1)\to\mathcal{O}_{R_{\pi_{v}}}\to 0.$ (2.18) From diagram (2.15) we deduce ${0}$${0}$${0}$${0}$${\mathcal{O}_{Z}(R_{\pi_{v}})\otimes\mathcal{O}_{Z}(1)}$${\mathcal{K}er\,\beta}$${T^{1}_{Z}\cong\mathcal{O}_{D}}$${0}$${0}$${N^{\prime}_{Z}}$${N_{Z/\mathbb{P}^{R}}}$${T^{1}_{Z}}$${0}$${0}$${\pi^{*}_{v}(N_{Y/\mathbb{P}^{R-1}})}$${\pi^{*}_{v}(N_{Y/\mathbb{P}^{R-1}})}$${0}$${0}$${0}$${0}$${0}$$\scriptstyle{\beta^{\prime}}$$\scriptstyle{\beta}$ By the Snake Lemma again, $\mathcal{K}er\,\beta=:\mathcal{L}_{Z}$ is a line bundle too, for which $\displaystyle 0\to\mathcal{O}_{Z}(R_{\pi_{v}})\otimes\mathcal{O}_{Z}(1)\to\mathcal{L}_{Z}\to T_{Z}^{1}\cong\mathcal{O}_{D}\to 0$ (2.19) holds. In particular, by (2.19) one has $\deg\;\mathcal{L}_{Z}=\deg\;\left(\mathcal{O}_{Z}(R_{\pi_{v}})\otimes\mathcal{O}_{Z}(1)\right)+\delta=\deg\,Z+\deg\,R_{\pi_{v}}+\delta$ as stated. Moreover, from the middle column of the above diagram, one also has $0\to\mathcal{L}_{Z}\to N_{Z/\mathbb{P}^{R}}\to\pi^{*}_{v}(N_{Y/\mathbb{P}^{R-1}})\to 0,$ and this concludes the first part of the statement.
At last, if $Z=X_{m}\subset F$ as in Proposition 2.5 and if $v$ is the vertex of the cone $F$, the projection $\pi_{v}$ induces an $m$-sheeted ramified cover of the base curve $Y$, which is a hyperplane section of $F$ not passing through the vertex $v$; in this case $\delta=0$, $D=\emptyset$ and $\mathcal{L}_{X_{m}}\cong\mathcal{O}_{X_{m}}(R_{\pi_{v}})\otimes\mathcal{O}_{X_{m}}(1)$, so (2.11) becomes (2.12), as stated. ∎ ## 3\. Superabundant components of Hilbert schemes This section is entirely devoted to the construction of superabundant components of Hilbert schemes and to the proof of our Main Theorem. To do so, we will need to deal with the surjectivity of the Gaussian–Wahl map $\Phi_{\omega_{Y},{\mathcal{O}}_{Y}(1)}$ for $Y\subset\mathbb{P}^{R-1}$ as in § 2.1 (cf. Claim 3.5 below). ###### Remark 3.1. Recall that ${\mathcal{O}}_{Y}(1)\cong{\mathcal{O}}_{C}(E)$ is of degree $e\geqslant 2\gamma-1$. Taking into account the numerical assumptions of Proposition 1.4, for $6\leqslant\gamma\leqslant 8$, the condition $2\gamma-1\geqslant\gamma+12$ cannot hold since it would give $\gamma\geqslant 13$, contradicting that $6\leqslant\gamma\leqslant 8$; similarly, the condition $2\gamma-1\geqslant\gamma+9$ does not hold for $\gamma=9$. On the contrary, $\gamma\geqslant 10$ ensures that $\deg{\mathcal{O}}_{C}(E)=e\geqslant 2\gamma-1\geqslant\gamma+9$ holds true, so we can apply Proposition 1.4 to the pair $(C,{\mathcal{O}}_{C}(E))$ giving rise to $Y$ to prove the next result. ###### Proposition 3.2. Let $\gamma\geqslant 10$, $e\geqslant 2\gamma-1$ and $R=e-\gamma+1$ be integers. Let $\widehat{{\mathcal{I}}_{e,\gamma,R-1}}$ be the distinguished component of $\mathcal{I}_{e,\gamma,R-1}$ and let $[Y]\in\widehat{{\mathcal{I}}_{e,\gamma,R-1}}$ be general.
Then: $(a)$ the Gaussian-Wahl map $\Phi_{\omega_{Y},{\mathcal{O}}_{Y}(1)}$ is surjective, $(b)$ ${\mathcal{H}}(\widehat{{\mathcal{I}}_{e,\gamma,R-1}})$ is generically smooth of dimension $\dim\,{\mathcal{H}}(\widehat{{\mathcal{I}}_{e,\gamma,R-1}})=\lambda_{e,\gamma,R-1}+R=R(e+1)-(R-4)(\gamma-1),$ (3.1) and $(c)$ the cone $F$, corresponding to $[F]\in{\mathcal{H}}(\widehat{{\mathcal{I}}_{e,\gamma,R-1}})$ general, is unobstructed in $\mathbb{P}^{R}$. ###### Proof. As observed in Remark 3.1, $\gamma\geqslant 10$ and $e\geqslant 2\gamma-1$ imply that the numerical assumptions of Proposition 1.4 certainly hold. Moreover, since the pair $(C,{\mathcal{O}}_{C}(E))$, giving rise to $Y$, is such that $C$ is with general moduli and ${\mathcal{O}}_{C}(E)\in{\rm Pic}^{e}(C)$ is general, we are in a position to apply Propositions 1.4 and 1.5, from which $(a)$, $(b)$ and $(c)$ directly follow. ∎ Notice that (3.1) coincides with the expression (2.7), which has been independently found via the parametric description of ${\mathcal{H}}(\widehat{{\mathcal{I}}_{e,\gamma,R-1}})$ in § 2.2. From Remark 3.1 and Proposition 3.2, we therefore fix from now on the following numerical assumptions: $\gamma\geqslant 10,\;\;\;e\geqslant 2\gamma-1,\;\;\;R=e-\gamma+1,\;\;\;m\geqslant 2.$ (3.2) Furthermore, to ease notation, we simply set: $d:=me,\;\;\;X:=X_{m},\;\;\;g:=g_{m},$ (3.3) where $X_{m}$ and $g_{m}$ are as in Proposition 2.5. In this set–up we have that $[X]\in{\mathcal{I}}_{d,g,R}$; we now show that, as $[F]$ varies in ${\mathcal{H}}(\widehat{{\mathcal{I}}_{e,\gamma,R-1}})$, curves $X$ fill up an irreducible locus in ${\mathcal{I}}_{d,g,R}$ as follows.
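The numerology collected in (3.2)–(3.3) can be sanity-checked symbolically. The sympy sketch below is a reader's aid only (no computer algebra is used in the paper): it verifies that $g_{m}$ from (2.8) satisfies the Riemann–Hurwitz relation for the degree-$m$ cover $\varphi_{m}:X_{m}\to Y$, with $\deg R_{\varphi_{m}}=m(m-1)e$ as in (2.10), and that Riemann–Roch for the non-special line bundle ${\mathcal{O}}_{X_{m}}(j)$ reproduces (2.9):

```python
# Check of (2.8) and (2.9): g_m satisfies Riemann-Hurwitz,
#   2 g_m - 2 = m(2*gamma - 2) + m(m-1)e,
# and Riemann-Roch h^0 = deg - g + 1 for non-special O_{X_m}(j) gives (2.9).
from sympy import symbols, Rational, expand

e, gamma, m, j = symbols('e gamma m j')

g_m = m*(gamma - 1) + Rational(1, 2)*m*(m - 1)*e + 1   # (2.8)

# Riemann-Hurwitz with ramification divisor of degree m(m-1)e, cf. (2.10)
assert expand((2*g_m - 2) - (m*(2*gamma - 2) + m*(m - 1)*e)) == 0

# Riemann-Roch for O_{X_m}(j) of degree jme, assumed non-special (j >= m)
h0 = j*m*e - g_m + 1
claimed = j*m*e - m*(gamma - 1) - Rational(1, 2)*m*(m - 1)*e   # (2.9)
assert expand(h0 - claimed) == 0
```

Both differences vanish identically in $e$, $\gamma$, $m$ and $j$; the non-speciality hypothesis $j\geqslant m$ enters only in justifying the Riemann–Roch step, not in the algebra.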
With notation as in § 2.2, set first $\displaystyle\mathcal{U}_{e,\gamma,R}:=$ $\displaystyle\Big\{u:=\left([C],\;\mathcal{O}_{C}(E),\;S,\;C_{1}\right)\;\;|\;\;[C]\in\mathcal{M}_{\gamma}\;\text{general},\;\mathcal{O}_{C}(E)\in\mathrm{Pic}^{e}(C)\;\text{general},$ $\displaystyle\;\;\;\mathfrak{F}=\mathcal{O}_{C}\oplus\mathcal{O}_{C}(-E),\;S=\mathbb{P}(\mathfrak{F}),\;\;C_{1}\in|\mathcal{O}_{S}(C_{0}+Ef)|\;\text{general}\Big\};$ by construction $\mathcal{U}_{e,\gamma,R}$ is obviously irreducible. Then, for any $m\geqslant 2$, consider $\mathcal{W}_{d,g,R}:=\Big\{(u,C_{m})|\,u\in\mathcal{U}_{e,\gamma,R},\,C_{m}\in|\mathcal{O}_{S}(m(C_{0}+Ef))|\;\;{\rm general}\;\;\Big\}\stackrel{{\scriptstyle\pi}}{{\longrightarrow}}\mathcal{U}_{e,\gamma,R},\;\;\;(u,C_{m})\mapsto u,$ where the natural projection map $\pi$ endows $\mathcal{W}_{d,g,R}$ with the structure of a non–empty, open dense subset of a projective bundle over $\mathcal{U}_{e,\gamma,R}$, hence $\mathcal{W}_{d,g,R}$ is irreducible too (recall that $d=me$ and $g=g_{m}$ depend on $m$). By the very definition of $\mathcal{W}_{d,g,R}$, one has a natural Hilbert morphism $\displaystyle h:\mathcal{W}_{d,g,R}$ $\displaystyle\longrightarrow$ $\displaystyle\mathcal{I}_{d,g,R}$ $\displaystyle(u,C_{m})$ $\displaystyle\mapsto$ $\displaystyle[X_{m}]:=[\Psi(C_{m})],$ where $\Psi$ is the morphism of Proposition 2.2, and one defines ${\mathcal{S}}_{d,g,R}:=h(\mathcal{W}_{d,g,R})\subset\mathcal{I}_{d,g,R}.$ (3.4) ###### Lemma 3.3. ${\mathcal{S}}_{d,g,R}$ is irreducible and has dimension $\dim\;{\mathcal{S}}_{d,g,R}=\lambda_{d,g,R}+\sigma_{d,g,R},$ (3.5) where $\lambda_{d,g,R}=(R+1)me-(R-3)\left(m(\gamma-1)+\frac{m(m-1)}{2}e\right)$ is the expected dimension of ${\mathcal{I}}_{d,g,R}$ as in (1.3), whereas the positive integer $\sigma_{d,g,R}:=(R-4)\left[(\gamma-1)(m-1)+1+e+\frac{m(m-3)}{2}e\right]+4(e+1)+em(m-5)$ is called the superabundance summand of the dimension of ${\mathcal{S}}_{d,g,R}$.
Furthermore, ${\mathcal{S}}_{d,g,R}$ is generically smooth. ###### Proof. By construction of ${\mathcal{S}}_{d,g,R}$, it is irreducible and $\dim\;{\mathcal{S}}_{d,g,R}=\dim\;{\mathcal{H}}(\widehat{{\mathcal{I}}_{e,\gamma,R-1}})+\dim\;|\mathcal{O}_{F}(m)|,$ where $[F]\in{\mathcal{H}}(\widehat{{\mathcal{I}}_{e,\gamma,R-1}})$ is general. Thus, from (3.1) (equivalently, from (2.7)) and from Proposition 2.2 (v), the latter reads $\begin{array}[]{ccl}\dim\;{\mathcal{S}}_{d,g,R}&=&R(e+1)-(R-4)(\gamma-1)+\frac{m(m+1)}{2}e-m(\gamma-1)=\\\ &=&\lambda_{e,\gamma,R-1}\,+R+\frac{m(m+1)}{2}e-m(\gamma-1).\end{array}$ (3.6) Taking into account (1.3) which, in our notation, reads $\lambda_{d,g,R}=(R+1)me-(R-3)\left(m(\gamma-1)+\frac{m(m-1)}{2}e\right),$ to prove the first part of the statement it suffices to show that $\dim\;{\mathcal{S}}_{d,g,R}-\lambda_{d,g,R}=\sigma_{d,g,R}$ and that the latter integer is positive. To do so, observe that $\dim\;{\mathcal{S}}_{d,g,R}-\lambda_{d,g,R}=R(e+1)-(R-4)(\gamma-1)+\frac{m(m+1)}{2}e-m(\gamma-1)-\left[(R+1)me-(R-3)\left(m(\gamma-1)+\frac{m(m-1)}{2}e\right)\right]=$ $=R\left(\gamma(m-1)-m+2+e\right)+Re\frac{m(m-3)}{2}-4(\gamma-1)(m-1)-em(m-1)=$ $=(R-4)\left[(\gamma-1)(m-1)+1+e+\frac{m(m-3)}{2}e\right]+4(e+1)+em(m-5).$ Notice that, since $R=e-\gamma+1$, $e\geqslant 2\gamma-1$ and $\gamma\geqslant 10$, then $R-4\geqslant 6$; moreover, since $m\geqslant 2$, the summands in square brackets add up to a positive integer: this is clear for $m\geqslant 3$, whereas for $m=2$ one has $\left[(\gamma-1)(m-1)+1+e+\frac{m(m-3)}{2}e\right]=\gamma\geqslant 10$. Concerning the summand $4(e+1)$, in our assumptions it is $4(e+1)\geqslant 8\gamma\geqslant 80$. The last summand $em(m-5)$ is non–negative for $m\geqslant 5$, whereas for $m=2,3,4$ it is, respectively, $-6e,-6e,-4e$; in all the latter three sporadic cases, the negativity of the summand $em(m-5)$ does not affect the positivity of the total expression.
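The chain of equalities above is elementary but error-prone; the following sympy sketch (a verification aid for the reader, not part of the proof) expands $\dim\mathcal{S}_{d,g,R}-\lambda_{d,g,R}-\sigma_{d,g,R}$ and confirms that it vanishes identically, with $R$, $e$, $\gamma$ and $m$ treated as independent symbols (so the identity does not even use $R=e-\gamma+1$):

```python
# Symbolic verification that dim S_{d,g,R} - lambda_{d,g,R} equals the
# superabundance summand sigma_{d,g,R}, as an identity in R, e, gamma, m.
from sympy import symbols, Rational, expand

R, e, gamma, m = symbols('R e gamma m')

# (3.6): dim S_{d,g,R}
dim_S = (R*(e + 1) - (R - 4)*(gamma - 1)
         + Rational(1, 2)*m*(m + 1)*e - m*(gamma - 1))
# expected dimension lambda_{d,g,R} from (1.3)
lam = (R + 1)*m*e - (R - 3)*(m*(gamma - 1) + Rational(1, 2)*m*(m - 1)*e)
# superabundance summand sigma_{d,g,R}
sigma = ((R - 4)*((gamma - 1)*(m - 1) + 1 + e + Rational(1, 2)*m*(m - 3)*e)
         + 4*(e + 1) + e*m*(m - 5))

assert expand(dim_S - lam - sigma) == 0
```

A numerical spot check with $\gamma=10$, $e=19$ (so $R=10$) and $m=2$ agrees as well, consistently with the positivity discussion above.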
The previous computations show that $\dim\;{\mathcal{S}}_{d,g,R}-\lambda_{d,g,R}=(R-4)\left[(\gamma-1)(m-1)+1+e+\frac{m(m-3)}{2}e\right]+4(e+1)+em(m-5)=\sigma_{d,g,R}$ and the first part of the statement is proved. Concerning the generic smoothness of ${\mathcal{S}}_{d,g,R}$, we have first the following: ###### Claim 3.4. For $[X]\in{\mathcal{S}}_{d,g,R}$ general, one has $h^{0}(X,N_{X/\mathbb{P}^{R}})=\lambda_{e,\gamma,R-1}+h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}^{\vee}_{\varphi})+\frac{m(m+1)}{2}e-m(\gamma-1),$ where $\mathcal{T}^{\vee}_{\varphi}$ is the Tschirnhausen bundle associated to the degree–$m$ covering map $\varphi:X\to Y$ induced by the projection $\pi_{v}$ from the vertex $v$ of the cone $F$ as in diagram (2.5). ###### Proof of Claim 3.4. Consider the exact sequence (2.12) in Lemma 2.6 which, in the present notation, reads $0\to{\mathcal{O}}_{X}(R_{\pi_{v}})\otimes{\mathcal{O}}_{X}(1)\to N_{X/\mathbb{P}^{R}}\to\pi^{*}_{v}(N_{Y/\mathbb{P}^{R-1}})\to 0.$ From Proposition 2.5 (iii), the degree–$m$ covering map $\varphi:X\to Y$ is induced by the projection $\pi_{v}$ from the vertex of the cone $F$ and, by (2.10), we have ${\mathcal{O}}_{X}(R_{\pi_{v}})={\mathcal{O}}_{X}(R_{\varphi})\cong{\mathcal{O}}_{X}(m-1)$. Therefore, the previous exact sequence gives $0\to{\mathcal{O}}_{X}(m)\to N_{X/\mathbb{P}^{R}}\to\varphi^{*}(N_{Y/\mathbb{P}^{R-1}})\to 0.$ From Proposition 2.5 (v), ${\mathcal{O}}_{X}(m)$ is non–special on $X$, so $h^{0}(X,N_{X/\mathbb{P}^{R}})=h^{0}(X,\varphi^{*}(N_{Y/\mathbb{P}^{R-1}}))+h^{0}(X,{\mathcal{O}}_{X}(m)).$ Since $\varphi$ is a finite morphism, using Leray’s isomorphism and the projection formula, we get $h^{0}(X,\varphi^{*}(N_{Y/\mathbb{P}^{R-1}}))=h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\varphi_{*}\;\mathcal{O}_{X}).$ Moreover, from § 1.3, one has $\varphi_{*}\mathcal{O}_{X}=\mathcal{O}_{Y}\oplus\mathcal{T}^{\vee}_{\varphi}$, where $\mathcal{T}^{\vee}_{\varphi}$ is the Tschirnhausen bundle associated to $\varphi$.
Thus, $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\varphi_{*}\;\mathcal{O}_{X})=h^{0}(Y,N_{Y/\mathbb{P}^{R-1}})+h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}^{\vee}_{\varphi}).$ To sum up, one has $h^{0}(X,N_{X/\mathbb{P}^{R}})=h^{0}(Y,N_{Y/\mathbb{P}^{R-1}})+h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}^{\vee}_{\varphi})+h^{0}(X,{\mathcal{O}}_{X}(m)).$ (3.7) By (2.9) with $j=m$, one has $h^{0}(X,{\mathcal{O}}_{X}(m))=\frac{m(m+1)}{2}e-m(\gamma-1).$ From (1.3) and Corollary 1.3, it follows that $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}})=\lambda_{e,\gamma,R-1},$ since $Y$ corresponds to a general point in the distinguished component $\widehat{\mathcal{I}_{e,\gamma,R-1}}$. ∎ To conclude that ${\mathcal{S}}_{d,g,R}$ is generically smooth, we are left with the following: ###### Claim 3.5. For any $m\geqslant 2$, one has $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}^{\vee}_{\varphi})=R.$ (3.8) ###### Proof of Claim 3.5. To prove the statement, we will use an inductive approach. Assume first $m=2$, so $X=X_{2}$ and $\varphi:=\varphi_{2}:X\to Y$ is the double cover of the curve $Y$, as in Proposition 2.5 (iii). In this case, the Tschirnhausen bundle $\mathcal{T}^{\vee}_{\varphi}$ is a line bundle on $Y$ which, from (1.7) and (2.10), equals ${\mathcal{O}}_{Y}(-E)\cong{\mathcal{O}}_{Y}(-1)$. Since $\gamma\geqslant 10$ and $e\geqslant 2\gamma-1$, the assumptions of Proposition 1.4 are satisfied. Therefore, from Proposition 1.5 (ii), we have $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{O}_{Y}(-1))=R$ and (3.8) holds true in this case. Take now $m\geqslant 3$ and assume that (3.8) holds for a degree–$(m-1)$ covering map $\varphi:=\varphi_{m-1}:X:=X_{m-1}\to Y$, where $X_{m-1}\in|{\mathcal{O}}_{F}(m-1)|$ is general as in Proposition 2.5. To ease notation, the associated Tschirnhausen bundle $\mathcal{T}^{\vee}_{\varphi}$ will be simply denoted by $\mathcal{T}^{\vee}_{m-1}$.
Let $Y^{\prime}\in|\mathcal{O}_{F}(1)|$ be general and consider the projective, connected, non–degenerate reducible curve $Z:=X\cup Y^{\prime}\subset F$ which, as a Cartier divisor on $F$, is such that $Z\in|\mathcal{O}_{F}(m)|$. The singular locus of $Z$ is $D:=X\cap Y^{\prime}$ and consists of $\delta$ nodes, where $\delta:=(m-1)H^{2}=(m-1)e$, $H$ denoting the hyperplane section of $F$. As in § 1.3, the curve $Z$ is endowed with a natural degree–$m$ covering map $\psi:Z\to Y$, whose Tschirnhausen bundle $\mathcal{T}^{\vee}_{\psi}$ on $Y$ will be simply denoted by $\mathcal{T}^{\vee}_{m}$. From Proposition 1.6, passing to duals, we get the exact sequence $0\to\mathcal{O}_{Y}(-D)\to\mathcal{T}_{m}^{\vee}\to\mathcal{T}_{m-1}^{\vee}\to 0$ of vector bundles on $Y$. Tensoring this exact sequence with $N_{Y/\mathbb{P}^{R-1}}$ gives $0\to N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{O}_{Y}(-D)\to N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{m}^{\vee}\to N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{m-1}^{\vee}\to 0.$ (3.9) By induction, since $m-1\geqslant 2$, one has $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{m-1}^{\vee})=R$. Moreover, since $D$ is cut–out on the irreducible component $Y^{\prime}$ by a hypersurface of degree $m-1$ in $\mathbb{P}^{R}$ and since $Y^{\prime}\cong Y$, then ${\mathcal{O}}_{Y}(D)\cong\mathcal{O}_{Y}(m-1)$ and one has $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{O}_{Y}(-D))=h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{O}_{Y}(-(m-1)))=0,$ as it follows from Proposition 1.5 (iii) and from the fact that $m-1\geqslant 2$. 
By (3.9), we deduce that $H^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{m}^{\vee})$ injects into $H^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{m-1}^{\vee})$, so in particular $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{m}^{\vee})\leqslant R.$ (3.10) On the other hand, since $[Z]\in\overline{{\mathcal{S}}}_{d,g,R}$, where $\overline{{\mathcal{S}}}_{d,g,R}$ denotes the closure in $\mathcal{I}_{d,g,R}$ of ${\mathcal{S}}_{d,g,R}$, by (1.2) one must have $h^{0}(Z,N_{Z/\mathbb{P}^{R}})=\dim\;T_{[Z]}(\mathcal{I}_{d,g,R})\geqslant\dim\;{\mathcal{S}}_{d,g,R}=\lambda_{e,\gamma,R-1}\,+R+\frac{m(m+1)}{2}e-m(\gamma-1),$ (3.11) as it follows from (3.6). Since $Z$ satisfies the assumptions of Lemma 2.6, we can consider the exact sequence (2.11). The line bundle $\mathcal{L}_{Z}$ therein has degree $\deg\;\mathcal{L}_{Z}=\deg\;Z+\deg\;R_{\psi}+\delta=me+\deg\;R_{\psi}+(m-1)e.$ By definition of $\psi:Z\to Y$, the ramification of this map is supported on the irreducible component $X=X_{m-1}$ of $Z$, namely $R_{\psi}=R_{\varphi_{m-1}}$ where $\varphi_{m-1}:X\to Y$. From (2.10) we therefore have $\deg\;R_{\varphi_{m-1}}=(m-1)^{2}e$, so $\deg\;\mathcal{L}_{Z}=me+(m-1)^{2}e+(m-1)e=m^{2}e.$ Hence, $\mathcal{L}_{Z}$ is a non–special line bundle on $Z$, $Z$ being a reduced, connected and nodal curve of arithmetic genus $p_{a}(Z)=g=g_{m}$ as in (2.8) (the non–speciality of $\mathcal{L}_{Z}$ can be proved by applying the same numerical computation as in the proof of Proposition 2.5 (v), replacing the canonical bundle with the dualizing sheaf $\omega_{Z}$). Thus, from (2.11) one gets $h^{0}(Z,N_{Z/\mathbb{P}^{R}})=h^{0}(Z,\psi^{*}(N_{Y/\mathbb{P}^{R-1}}))+h^{0}(Z,\mathcal{L}_{Z})$ where $h^{0}(Z,\mathcal{L}_{Z})=\chi(Z,\mathcal{L}_{Z})=\frac{m(m+1)}{2}e-m(\gamma-1),$ both equalities following from the non–speciality of $\mathcal{L}_{Z}$.
As for the summand $h^{0}(Z,\psi^{*}(N_{Y/\mathbb{P}^{R-1}}))$, we can apply the projection formula and Leray’s isomorphism, which gives $h^{0}(Z,\psi^{*}(N_{Y/\mathbb{P}^{R-1}}))=h^{0}(Y,N_{Y/\mathbb{P}^{R-1}})+h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{m}^{\vee}).$ Since $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}})=\lambda_{e,\gamma,R-1}$ as $[Y]\in\widehat{\mathcal{I}_{e,\gamma,R-1}}$ is general (cf. Corollary 1.3), then comparing with (3.11) we deduce that $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{m}^{\vee})\geqslant R$. Thus, using the previous inequality (3.10), we get $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{m}^{\vee})=R$. By semi–continuity, for the general element $[X_{m}]\in{\mathcal{S}}_{d,g,R}$, with its degree–$m$ covering map $\varphi_{m}:X_{m}\to Y$ and its associated Tschirnhausen bundle $\mathcal{T}_{\varphi_{m}}^{\vee}$, we deduce that $h^{0}(Y,N_{Y/\mathbb{P}^{R-1}}\otimes\mathcal{T}_{\varphi_{m}}^{\vee})\leqslant R.$ On the other hand, replacing $Z$ with $X_{m}$ in the previous computations, since $h^{0}(X_{m},N_{X_{m}/\mathbb{P}^{R}})=\dim\;T_{[X_{m}]}(\mathcal{I}_{d,g,R})\geqslant\dim\;{\mathcal{S}}_{d,g,R}=\lambda_{e,\gamma,R-1}\,+R+\frac{m(m+1)}{2}e-m(\gamma-1),$ one can conclude by applying (2.12), with ${\mathcal{O}}_{X_{m}}(R_{\varphi_{m}})\cong{\mathcal{O}}_{X_{m}}(m-1)$ as in (2.10), and reasoning as we did for $Z$ above. ∎ The previous computations show that, for $[X]\in{\mathcal{S}}_{d,g,R}$ general, one has $\dim{\mathcal{S}}_{d,g,R}=\lambda_{d,g,R}+\sigma_{d,g,R}=\dim T_{[X]}({\mathcal{S}}_{d,g,R})=\dim T_{[X]}(\mathcal{I}_{d,g,R}),$ (3.12) which therefore implies that ${\mathcal{S}}_{d,g,R}$ is generically smooth. ∎ We are finally in position to prove our Main Theorem. ###### Proof of Main Theorem.
The first part of Lemma 3.3 ensures that any irreducible component of $\mathcal{I}_{d,g,R}$ containing ${\mathcal{S}}_{d,g,R}$ has to be superabundant, having dimension at least $\dim{\mathcal{S}}_{d,g,R}=\lambda_{d,g,R}+\sigma_{d,g,R}$. On the other hand, the proofs of Claims 3.4 and 3.5 show that ${\mathcal{S}}_{d,g,R}$ is contained in a unique component of $\mathcal{I}_{d,g,R}$; more precisely, it fills up an open, dense subset of an irreducible component of $\mathcal{I}_{d,g,R}$ which is generically smooth, superabundant, of dimension $\lambda_{d,g,R}+\sigma_{d,g,R}$. Indeed, by (3.12), for $[X]\in{\mathcal{S}}_{d,g,R}$ general we have that $\dim T_{[X]}({\mathcal{S}}_{d,g,R})=h^{0}(X,N_{X/\mathbb{P}^{R}})=\dim T_{[X]}({\mathcal{I}}_{d,g,R})=\dim{\mathcal{S}}_{d,g,R}=\lambda_{d,g,R}+\sigma_{d,g,R}.$ ∎ ###### Remark 3.6. It is clear from the construction that $\mathcal{S}_{d,g,R}$ lies in a component of $\mathcal{I}_{d,g,R}$ which cannot dominate $\mathcal{M}_{g}$. Indeed, the modular morphism of such a component maps to the Hurwitz space $\mathcal{H}_{\gamma,m,g}$ parametrizing isomorphism classes of genus–$g$ curves arising as $m$-sheeted, ramified covers of irrational curves of genus $\gamma$.
# Families of infinite parabolic IFS with overlaps: the approximating method Liangang Ma Dept. of Mathematical Sciences, Binzhou University, Huanghe 5th Road No. 391, Binzhou 256600, Shandong, P. R. China<EMAIL_ADDRESS> ###### Abstract. This work is devoted to the study of families of infinite parabolic iterated function systems (PIFS) on a closed interval parametrized by vectors in $\mathbb{R}^{d}$ with overlaps. We show that the Hausdorff dimension and absolute continuity of ergodic projections through a family of infinite PIFS are decided _a.e._ by the growth rate of the entropy and Lyapunov exponents of the families of truncated PIFS with respect to the concentrating measures, essentially under transversality of the family. We also give an estimate of the upper bound of the Hausdorff dimension of the set of parameters where the corresponding ergodic projections admit a certain dimension drop. The setwise topology on the space of finite measures enables us to approximate a family of infinite systems by families of their finite truncated sub-systems, which plays the key role throughout our work. The work is supported by ZR2019QA003 from SPNSF and 12001056 from NSFC. ## 1\. Introduction This work should be understood against the background of families of overlapping iterated function systems. A typical family of overlapping _iterated function systems_ (IFS) on a compact metric space consists of a flow of finitely many strictly contractive endomorphisms on the space, parametrized by some (time) parameter. The dimension of attractors and of measures supported on them (especially the invariant or ergodic ones) is the focus in the theory of families of IFS.
Besides calculating the dimension of attractors and of measures supported on the attractors, another important problem is to decide whether the projective measures on the attractors, induced from measures on the symbolic spaces, are singular or absolutely continuous with respect to the Lebesgue measure, refer to [BRS, Hoc1, Hoc3, PSS, Shm2, Shm3, SS1, SSS]. See also [Fur] by Furstenberg, [Hoc4] by Hochman, [PS1] by Peres-Solomyak and [Shm4] by Shmerkin for more interesting questions on families of overlapping IFS. These problems are already very difficult in the case of Bernoulli convolutions, see for example [BV1, BV2, Hoc1, LPS, Shm1, Sol3, SS2, Var1, Var2, Var3] for recent progress on the topic. Note that most of the above research is done on families of finite IFS, that is, every individual IFS in the family is constituted by finitely many maps. In this work we try to deal with families of infinite IFS. We choose the families of parabolic iterated function systems investigated by K. Simon, B. Solomyak and M. Urbański in [SSU1, SSU2] to demonstrate our ideas, which are probably applicable to some other families of infinite IFS. Our technique here is to utilize the sequence of concentrating measures on the attractors of families of finite sub-systems to approximate the measures on the attractors of families of infinite systems. Let $X\subset\mathbb{R}$ be a closed interval with its Borel $\sigma$-algebra $\mathcal{B}$. Let $\mathcal{\hat{M}}(X)$ be the collection of all the finite Borel measures on $(X,\mathcal{B})$. For a set $A\subset\mathbb{R}^{d}$ with $d\in\mathbb{N}$, let $HD(A)$ be its Hausdorff dimension [Fal2]. ###### 1.1 Definition.
For a measure $\nu\in\mathcal{\hat{M}}(X)$, the _lower_ and _upper Hausdorff dimension_ of $\nu$ are defined respectively to be: $dim_{*}\nu=\inf\\{HD(A):\nu(A)>0,A\in\mathcal{B}\\},$ and $dim^{*}\nu=\inf\\{HD(A):\nu(A)=\nu(X),A\in\mathcal{B}\\}.$ The two values indicate the distribution of a measure on $X$ from two complementary points of view, with the possibility of the lower one being strictly less than the upper one. However, they coincide with each other for ergodic measures with respect to transformations of proper regularity (for example the $C^{1}$ diffeomorphisms) on $X$. See for example [HS, SimS2, You] on calculation of the Hausdorff dimension of measures in various circumstances. Following P. Mattila, M. Morán and J. M. Rey [MMR], the author discovers some semi-continuity of the measure-dimension mappings $dim_{*}$ and $dim^{*}$ on $\mathcal{\hat{M}}(X)$ equipped with the setwise topology [FKZ, Las], which is crucial in our approximating process. Now we gradually introduce the families of parabolic iterated function systems. For $\vartheta\in(0,1]$, a point $v\in X$ is called an _indifferent point_ of a $C^{1+\vartheta}$ map $s:X\rightarrow X$ if $|s^{\prime}(v)|=1$. ###### 1.2 Definition. A $C^{1+\vartheta}$ map $s:X\rightarrow X$ is called _parabolic_ if it satisfies the following conditions: * • $s$ has only one indifferent point $v$ and it is fixed, that is, $s(v)=v$. * • $s$ is _contractive_ on $X\setminus\\{v\\}$, that is, $0<|s^{\prime}(x)|<1$ for any $x\in X\setminus\\{v\\}$. * • $s$ is _well-behaved_ around $v$, that is, $s^{\prime}(x)$ is monotone on each component of $X\setminus\\{v\\}$. * • There exist $L_{1}\geq 1$ and $\beta<\cfrac{\vartheta}{1-\vartheta}$ such that $\cfrac{1}{L_{1}}\leq\liminf_{x\rightarrow v}\cfrac{|s^{\prime}(x)-s^{\prime}(v)|}{|x-v|^{\beta}}\leq\limsup_{x\rightarrow v}\cfrac{|s^{\prime}(x)-s^{\prime}(v)|}{|x-v|^{\beta}}\leq L_{1}$. It is called _hyperbolic_ if $0<|s^{\prime}(x)|<1$ for any $x\in X$.
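A standard concrete instance of this definition (our illustration, not an example taken from this paper) is $s(x)=x/(1+x)$ on $X=[0,1]$: it fixes the indifferent point $v=0$, is contractive off $v$, has monotone derivative, and satisfies the last condition with $\beta=1$ and $L_{1}=2$, so that $\beta<\vartheta/(1-\vartheta)$ for any Hölder exponent $\vartheta\in(1/2,1)$. A quick numerical check:

```python
# An illustration of Definition 1.2 (a standard example, not taken from this paper):
# s(x) = x / (1 + x) on X = [0, 1] is parabolic with indifferent fixed point v = 0.

def s(x):
    return x / (1 + x)

def ds(x):
    # derivative s'(x) = 1 / (1 + x)^2, monotone (decreasing) on X
    return 1 / (1 + x) ** 2

assert s(0) == 0 and ds(0) == 1                            # v = 0 is fixed and indifferent
assert all(0 < ds(x) < 1 for x in (0.001, 0.1, 0.5, 1.0))  # contractive on X \ {v}

# |s'(x) - s'(v)| / |x - v| -> 2 as x -> v, so the last condition of
# Definition 1.2 holds with beta = 1 and L_1 = 2.
for x in (1e-2, 1e-3, 1e-4):
    assert abs(abs(ds(x) - ds(0)) / x - 2) < 3 * x
```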
Parabolic maps with finitely many indifferent points can be handled in our context, with some technical modifications of the proofs. ###### 1.3 Definition. For $\vartheta\in(0,1]$ and a countable index set $I$ (with at least two elements), a _parabolic iterated function system_ (PIFS) is a collection of $C^{1+\vartheta}$ maps $S=\\{s_{i}:X\rightarrow X\\}_{i\in I}$ satisfying the following conditions: * • There is one and only one $i_{1}\in I$ such that the map $s_{i_{1}}$ is a parabolic map with its indifferent point $v$. * • The other maps $\\{s_{i}\\}_{i\in I\setminus\\{i_{1}\\}}$ are all hyperbolic. The collection of PIFS $S=\\{s_{i}:X\rightarrow X\\}_{i\in I}$ satisfying $\cup_{i\in I\setminus\\{i_{1}\\}}s_{i}(X)\subset X^{o}\setminus\\{v\\}$ is termed $\Gamma_{X}(\vartheta)$. For an integer $d\in\mathbb{N}$ and an open set $U\subset\mathbb{R}^{d}$, we focus on families of PIFS instead of a single system in $\Gamma_{X}(\vartheta)$ parametrized by the vector-time parameter $\mathbb{t}\in U$. The parabolic map is always fixed at any time in the family. Since families of finite PIFS (that is, $\\#I<\infty$) have been considered by Simon-Solomyak-Urbański, we focus on the families of infinite PIFS ($\\#I=\infty$) in the following. That is, we focus on families of PIFS $S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}$, such that $S^{\mathbb{t}}\in\Gamma_{X}(\vartheta)$ with the parabolic map $s_{1}^{\mathbb{t}}=s_{1}$ remaining the same at any time $\mathbb{t}\in U$. Let $\pi_{\mathbb{t}}:I^{\infty}\rightarrow X$ be the projection map and $J_{\mathbb{t}}=\pi_{\mathbb{t}}(I^{\infty})$ be the attractor ([Hut]) at time $\mathbb{t}\in U$. To achieve some reasonable conclusions on the parametrized family, obviously some control on the dependence of the family with respect to $\mathbb{t}$ is necessary.
A family of PIFS $S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}$ is said to satisfy the _continuity condition_ if for any fixed $i\in\mathbb{N}$, the map $s_{i}^{\mathbb{t}}$ depends continuously on the parameter $\mathbb{t}\in U$ in the Banach space $C^{1+\vartheta}$ equipped with the supreme norm. Let $\mathfrak{L}^{d}$ be the Lebesgue measure on $\mathbb{R}^{d}$ for $d\in\mathbb{N}$. The family is said to satisfy the _transversality condition_ if there exists a constant $C_{1}$ such that $\mathfrak{L}^{d}\\{\mathbb{t}\in U:|\pi_{\mathbb{t}}(\omega)-\pi_{\mathbb{t}}(\tau)|\leq r\\}\leq C_{1}r$ for any $\omega,\tau\in\mathbb{N}^{\infty},\omega_{1}\neq\tau_{1}$ and any $r>0$. While the continuity condition comes more naturally, the transversality condition has been recognized to be an effective condition to achieve some uniform conclusions in various contexts of flows of dynamical systems. For example, see [Fal1, PolS, PS2, PS3, SimS1, Sol1, Sol2]. The following are some classical notions on the symbolic dynamics on $\mathbb{N}^{\infty}$. For an infinite word $\omega=\omega_{1}\omega_{2}\cdots\in\mathbb{N}^{\infty}$, let $\sigma(\omega)=\omega_{2}\omega_{3}\cdots$ be the shift map. For a finite $k$-word $\tau\in\mathbb{N}^{k}$, let $[\tau]=\\{\omega\in\mathbb{N}^{\infty}:\omega|_{k}=\tau\\}$ be a _cylinder set_ , in which $\omega|_{k}=\omega_{1}\omega_{2}\cdots\omega_{k}$ is the $k$-th restriction of the infinite word $\omega$. Let $\mathbb{N}^{*}=\cup_{k=1}^{\infty}\mathbb{N}^{k}$ be the collection of all the finite words. Now consider the $\sigma$-algebra $\mathcal{B}_{\mathbb{N}^{\infty}}$ generated by all the cylinder sets on $\mathbb{N}^{\infty}$. Let $h_{\mu}(\sigma)$ be the entropy of the shift map $\sigma$ with respect to the partition of $\mathbb{N}^{\infty}$ by cylinder sets $\\{[i]\\}_{i=1}^{\infty}$ for a measure $\mu\in\mathcal{\hat{M}}(\mathbb{N}^{\infty})$ ([Hoc2, Wal]). 
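To see how $h_{\mu}(\sigma)$ can already be infinite when $\\#I=\infty$ (a standard example, ours and not taken from this paper): for an i.i.d. measure $\mu$ on $\mathbb{N}^{\infty}$ whose marginal puts mass $p_{i}$ proportional to $w_{i}=1/(i(\log i)^{2})$ on $i\geqslant 2$, the weights $w_{i}$ are summable, yet $\sum_{i}w_{i}\log(1/w_{i})$ diverges like $\log\log N$, so the entropy of the marginal, and hence $h_{\mu}(\sigma)$, is infinite. The slow divergence can be witnessed numerically:

```python
import math

# A standard example (not from this paper) of a summable weight sequence on N
# with divergent entropy sum: w_i = 1/(i (log i)^2), i >= 2.  Then sum_i w_i < oo
# but sum_i w_i log(1/w_i) grows roughly like log log N, hence diverges.

def partial_entropy(N):
    # partial sum of w_i * log(1/w_i) over 2 <= i <= N
    return sum(w * math.log(1 / w)
               for i in range(2, N + 1)
               for w in [1 / (i * math.log(i) ** 2)])

assert partial_entropy(10**2) < partial_entropy(10**3) < partial_entropy(10**5)
# the gap between N = 10^2 and N = 10^5 is already substantial, witnessing divergence:
assert partial_entropy(10**5) - partial_entropy(10**2) > 0.5
```

Since the normalizing constant $\sum_{i}w_{i}$ is finite, passing from the weights $w_{i}$ to the probabilities $p_{i}$ changes the entropy sum only by a bounded factor and an additive constant, so the divergence persists.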
One can treat $\mu$ as the law of an infinite discrete-time _stochastic process_ $\mathbb{Y}=\\{Y_{i}\\}_{i\in\mathbb{N}}$, in which $Y_{i}$ is a discrete random variable on $\mathbb{N}$ for $i\in\mathbb{N}$ under the natural projection with law $\mu(\mathbb{N}\times\mathbb{N}\times\cdots\times Y_{i}\times\mathbb{N}\times\cdots)$. The sequence of infinite random variables is called _independent_ if the finite variables $\\{Y_{1},Y_{2},\cdots,Y_{n}\\}$ are independent for any $n\in\mathbb{N}$, see [Tsi, Definition 3a(1),(b)]. The stochastic process $\mathbb{Y}$ is a _Markov process_ under the hypothesis of independence, refer to [BHP, FKZ, Hai, HL]. In the following we focus on the measures on $J_{\mathbb{t}}$ projected from probability measures on $\mathbb{N}^{\infty}$ through $\pi_{\mathbb{t}}$ for $\mathbb{t}\in U$. For a measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$, consider its projection under $\pi_{\mathbb{t}}$: $\nu_{\mathbb{t}}=\mu\circ\pi_{\mathbb{t}}^{-1}$ for any $\mathbb{t}\in U$. We are particularly interested in the cases when $\mu$ is invariant or ergodic with respect to the shift map $\sigma$ on $\mathbb{N}^{\infty}$. If $\mu$ is invariant then $\nu_{\mathbb{t}}$ is of pure type, see for example [JW]. Let $\lambda^{\mathbb{t}}_{\mu}(\sigma)=-\int_{\mathbb{N}^{\infty}}\log|s_{\omega_{1}}^{\prime}(\pi_{\mathbb{t}}\circ\sigma(\omega))|d\mu(\omega)$ be the average _Lyapunov exponent_ of the family of PIFS $S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}$ at time $\mathbb{t}\in U$. Our first main result deals with the two basic questions regarding the projective measures $\\{\nu_{\mathbb{t}}\\}_{\mathbb{t}\in U}$, that is, deciding their dimensions $dim_{*}\nu_{\mathbb{t}},dim^{*}\nu_{\mathbb{t}}$ and deciding whether they are absolutely continuous or singular with respect to the Lebesgue measure for $\mathbb{t}\in U$. Let $\mathbb{N}_{n}=\\{1,2,\cdots,n\\}$ be the $n$-th truncation of $\mathbb{N}$ for $n\in\mathbb{N}$.
We leave the technical concept of the $n$-th concentrating measure $\mu_{n}$ (supported on $\mathbb{N}_{n}^{\infty}$) of a finite measure $\mu$ (supported on $\mathbb{N}^{\infty}$) to Section 4. The following result generalizes [SSU1, Theorem 2.3] from families of finite PIFS to families of infinite PIFS. ###### 1.4 Theorem. Let $\big{\\{}S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}\in\Gamma_{X}(\vartheta)\big{\\}}_{\mathbb{t}\in U}$ be a family of infinite parabolic iterated function systems satisfying the continuity and transversality condition with respect to the vector-time parameter $\mathbb{t}\in U$. For an ergodic probability measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$ with positive entropy $h_{\mu}(\sigma)$, let $\mu_{n}$ be its $n$-th concentrating measure for any $n\in\mathbb{N}$. If the sequence of infinite random variables in $\mathbb{Y}$ under the law $\mu$ is independent and $\mu(\cup_{n=1}^{\infty}\mathbb{N}_{n}^{\infty})=1$, then 1. (i). For Lebesgue _a.e._ $\mathbb{t}\in U$, $dim_{*}\nu_{\mathbb{t}}=dim^{*}\nu_{\mathbb{t}}=\lim_{n\rightarrow\infty}\min\Big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)},1\Big{\\}}$, 2. (ii). $\nu_{\mathbb{t}}$ is absolutely continuous for Lebesgue _a.e._ $\mathbb{t}\in\Big{\\{}\mathbb{t}:\limsup_{n\rightarrow\infty}\big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)}\big{\\}}>1\Big{\\}}$, in which $\nu_{\mathbb{t}}=\mu\circ\pi_{\mathbb{t}}^{-1}$ is the projective measure at time $\mathbb{t}\in U$. ###### 1.5 Remark. We are not sure whether the hypothesis of independence of the sequence of infinite random variables under the law $\mu$ can be removed or not, although it seems so alluring to us. ###### 1.6 Remark.
As to the approximation of $\mu$, better results are possible if one does the approximation by the sequence of restrictions $\\{\mu|_{\mathbb{N}_{n}^{\infty}}\\}_{n=1}^{\infty}$ instead of the sequence of concentrating measures $\\{\mu_{n}\\}_{n=1}^{\infty}$. However, the restrictions are not probabilities, while the concentrating measures have the advantage that they are all probability measures. ###### 1.7 Remark. Similar to [SSU1, Theorem 2.3], we can only guarantee that the result holds _a.e._ instead of everywhere in Theorem 1.4, as we still cannot rule out exact overlapping between images of $\\{s^{\mathbb{t}}_{\omega}\\}_{\omega\in\mathbb{N}^{*}}$ for fixed $\mathbb{t}\in U$. There is an important conjecture that a dimension drop implies exact overlapping, which has been confirmed in the case of finite Bernoulli convolutions, see for example [Hoc1, Var3]. Note that the proof of [SSU1, Theorem 2.3] does not apply to Theorem 1.4. There are two main obstacles to applying Simon-Solomyak-Urbański’s techniques to the families of infinite PIFS. The first one is that $h_{\mu}(\sigma)$ can explode when $\\#I=\infty$, see [Hoc2]. If $h_{\mu}(\sigma)=\infty$ for some ergodic $\mu\in\mathcal{\hat{M}}(\mathbb{N}^{\infty})$ with respect to the shift map, the Shannon-McMillan-Breiman Theorem is not known to hold [Hay]. The second one is the regularity of the families of infinite PIFS. The collection of PIFS $S=\\{s_{i}:X\rightarrow X\\}_{i\in I}$ satisfying the following conditions is called $\Gamma_{X}(\vartheta,V,\gamma,u,M)$ in [SSU1]. 1. (a) There exists an open connected neighbourhood $V$ of $v$, such that $\cup_{i\in I\setminus\\{i_{1}\\}}s_{i}(X)\cap V=\emptyset$. 2. (b) There exists some $\gamma\in(0,1)$, such that $\sup_{i\in I\setminus\\{i_{1}\\}}\\{\|s_{i}^{\prime}\|\\}\leq\gamma$. 3. (c) There exists some $u\in(0,1)$, such that $\inf_{i\in I,x\in X}\\{|s_{i}^{\prime}(x)|\\}\geq u$. 4.
(d) There exists $M>0$ such that $\|S^{\prime}\|_{\vartheta}=\sup_{i\in I}\sup_{x,y\in X}\Big{\\{}\cfrac{|s_{i}^{\prime}(x)-s_{i}^{\prime}(y)|}{|x-y|^{\vartheta}}\Big{\\}}\leq M$. For a finite PIFS, $S\in\Gamma_{X}(\vartheta)$ is enough to guarantee $S\in\Gamma_{X}(\vartheta,V,\gamma,u,M)$ for some parameters $V,\gamma,u,M$. However, this is not true for an infinite PIFS. Our technique of approximating measures on the infinite symbolic space by the sequence of concentrating measures on its finite subspaces successfully overcomes these obstacles. Our next main result deals with local dimension of the exceptional parameters of a family of infinite PIFS $\\{S^{\mathbb{t}}\\}_{\mathbb{t}\in U}$. Let $\mu$ be a measure on the symbolic space $\mathbb{N}^{\infty}$ with its concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$. For a subset $G\subset U$ and $0<\alpha<1$, in case the sequence $\Big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)}\Big{\\}}_{n=1}^{\infty}$ admits a limit for any $\mathbb{t}\in G$, let $E_{\alpha,G}:=\Big{\\{}\mathbb{t}\in G:dim^{*}\nu_{\mathbb{t}}<\lim_{n\rightarrow\infty}\min\big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)},\alpha\big{\\}}\Big{\\}}$ be the level-$\alpha$ exceptional parameters in $G$. Let $K_{\alpha,G}=\min\Big{\\{}\sup_{\mathbb{t}\in G}\lim_{n\rightarrow\infty}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)},\alpha\Big{\\}}+d-1$. For a parameter set $A\subset U\subset\mathbb{R}^{d}$, let $N_{r}(A)$ be the least number of open balls of radius $r>0$ needed to cover the set $A$. A family of PIFS $S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}$ is said to satisfy the _strong transversality condition_ if there exists a constant $C_{2}>0$ such that $N_{r}\\{\mathbb{t}\in U:|\pi_{\mathbb{t}}(\omega)-\pi_{\mathbb{t}}(\tau)|\leq r\\}\leq C_{2}r^{1-d}$ for any $\omega,\tau\in\mathbb{N}^{\infty},\omega_{1}\neq\tau_{1}$ and any $r>0$. 
Obviously this condition is stronger than the transversality condition. We have the following estimate for the upper bound of the Hausdorff dimension of the local level-$\alpha$ exceptional set. ###### 1.8 Theorem. Let $\big{\\{}S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}\in\Gamma_{X}(\vartheta)\big{\\}}_{\mathbb{t}\in U}$ be a family of infinite parabolic iterated function systems satisfying the continuity and strong transversality condition with respect to the vector-time parameter $\mathbb{t}\in U$. For an ergodic probability measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$ with $\mu(\cup_{n=1}^{\infty}\mathbb{N}_{n}^{\infty})=1$, if the sequence of infinite random variables in $\mathbb{Y}$ under the law $\mu$ is independent, then (1.1) $HD(E_{\alpha,G})\leq K_{\alpha,G}$ for any $0<\alpha<1$ and $G\subset U$ such that $\lim_{n\rightarrow\infty}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)}$ exists for any $\mathbb{t}\in G$, in which $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ are the concentrating measures. This result is a generalization of [SSU1, Theorem 5.3] from families of finite PIFS to families of infinite PIFS. See also [Kau, Theorem]. ## 2\. The setwise topology and semi-continuity of the measure-dimension mappings under it Since we are considering families of IFS in this work, it would be very helpful if the measure-dimension mappings $dim_{*}:\mathcal{\hat{M}}(X)\rightarrow[0,\infty)$ and $dim^{*}:\mathcal{\hat{M}}(X)\rightarrow[0,\infty)$ admit some continuity property. To discuss the continuity problem, some appropriate topology on $\mathcal{\hat{M}}(X)$ is needed.
Since the measures $\\{\nu_{\mathbb{t}}\\}_{\mathbb{t}\in U}\subset\mathcal{\hat{M}}(X)$ in Theorem 1.4 inherit the parametrization naturally from the parametrization of the PIFS, one may try to endow the Euclidean metric on $\\{\nu_{\mathbb{t}}\\}_{\mathbb{t}\in U}$ through the parameter $\mathbb{t}\in U\subset\mathbb{R}^{d}$ to discuss the continuity of the measure-dimension mappings. In general, however, the measure-dimension mappings are not always continuous under the Euclidean metric on $\\{\nu_{\mathbb{t}}\\}_{\mathbb{t}\in U}$, although their lower semi-continuity can be expected. One can see this from the family of Bernoulli convolutions $\\{\nu_{\lambda}\\}_{\lambda}\subset\mathcal{\hat{M}}([0,1])$ with contraction ratio $\lambda\in(0,1)$. The measure-dimension mappings $dim_{*}=dim^{*}:(0,1)\rightarrow[0,1]$ are not always continuous since there are dimension drops from $1$ at the reciprocals of the Pisot numbers. However, they are lower semi-continuous [HS, Theorem 1.8]. Since we are dealing with measures originating from families of the infinite systems and their finite sub-systems simultaneously in this work, we will abandon the Euclidean metric on $\\{\nu_{\mathbb{t}}\\}_{\mathbb{t}\in U}\subset\mathcal{\hat{M}}(X)$ to discuss the continuity problem. A well-known topology on $\mathcal{\hat{M}}(X)$ is the weak topology. ###### 2.1 Definition. A sequence of bounded measures $\\{\nu_{n}\in\mathcal{\hat{M}}(X)\\}_{n=1}^{\infty}$ is said to converge _weakly_ to $\nu\in\mathcal{\hat{M}}(X)$, if $\lim_{n\rightarrow\infty}\int_{X}f(x)d\nu_{n}=\int_{X}f(x)d\nu$ for any bounded continuous $f:X\rightarrow\mathbb{R}$. Denote the convergence in this sense by $\nu_{n}\stackrel{{\scriptstyle w}}{{\rightarrow}}\nu$ as $n\rightarrow\infty$. However, the measure-dimension mappings $dim^{*}$ and $dim_{*}$ do not admit any semi-continuity under the weak topology on $\mathcal{\hat{M}}(X)$ [Ma, Theorem 3.1].
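As a concrete illustration of this failure (our example, not taken from [Ma]): the uniform measures $\nu_{n}$ on the grids $\\{k/n:0\leq k\leq n-1\\}$ are purely atomic, so $dim^{*}\nu_{n}=0$ for every $n$, yet $\nu_{n}$ converges weakly to Lebesgue measure on $[0,1]$, whose upper Hausdorff dimension is $1$; hence $\liminf_{n\rightarrow\infty}dim^{*}\nu_{n}=0<1=dim^{*}\nu$, and no lower semi-continuity of $dim^{*}$ is available under the weak topology. A minimal numerical check of the weak convergence:

```python
# Illustration (ours, not from the paper): the uniform measures nu_n on the
# grids {k/n : 0 <= k < n} converge weakly to Lebesgue measure on [0,1],
# yet each nu_n is purely atomic.  So dim* nu_n = 0 for every n, while the
# weak limit has dimension 1: lower semi-continuity of dim* fails.

def integrate_discrete(f, n):
    """Integral of f against nu_n = uniform measure on {0, 1/n, ..., (n-1)/n}."""
    return sum(f(k / n) for k in range(n)) / n

f = lambda x: x * x          # a bounded continuous test function
exact = 1 / 3                # integral of x^2 against Lebesgue measure on [0,1]

for n in (10, 100, 1000):
    print(n, abs(integrate_discrete(f, n) - exact))  # error shrinks with n
```

Note that setwise convergence fails for the same sequence: every $\nu_{n}$ gives full mass to the rationals, a Lebesgue-null Borel set, so $\nu_{n}(B)$ does not converge to $\mathfrak{L}^{1}(B)$ for that $B$; this is consistent with Theorem 2.3 below applying only to setwise limits.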
It turns out that the setwise topology is the ideal topology for discussing the continuity of the measure-dimension mappings. ###### 2.2 Definition. A sequence of measures $\\{\nu_{n}\in\mathcal{\hat{M}}(X)\\}_{n=1}^{\infty}$ is said to converge _setwisely_ to $\nu\in\mathcal{\hat{M}}(X)$, if $\lim_{n\rightarrow\infty}\nu_{n}(B)=\nu(B)$ for any $B\in\mathcal{B}$. The reader is referred to [Doo, FKZ, GR, HL, Las, LY] for equivalent descriptions of the setwise topology on $\mathcal{\hat{M}}(X)$ and its various applications. Denote the convergence in this sense by $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $n\rightarrow\infty$. The measure-dimension mappings $dim^{*}$ and $dim_{*}$ are respectively lower and upper semi-continuous under the setwise topology on $\mathcal{\hat{M}}(X)$. ###### 2.3 Theorem. The measure-dimension mapping $dim^{*}$ is lower semi-continuous, while $dim_{*}$ is upper semi-continuous under the setwise topology on $\mathcal{\hat{M}}(X)$, that is, if $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ in $\mathcal{\hat{M}}(X)$ as $n\rightarrow\infty$, then (2.1) $\liminf_{n\rightarrow\infty}dim^{*}\nu_{n}\geq dim^{*}\nu$ while (2.2) $\limsup_{n\rightarrow\infty}dim_{*}\nu_{n}\leq dim_{*}\nu.$ ###### 2.4 Remark. Similar results appear in [Ma, Theorem 2.8, 2.9] when one considers the measure-dimension mappings $dim^{*}$ and $dim_{*}$ on the probability space $\mathcal{M}(X)=\\{\nu:\nu(X)=1,\nu\in\mathcal{\hat{M}}(X)\\}$ with $X$ being an arbitrary metric space. The proofs apply to the measure-dimension mappings on the space of finite measures $\mathcal{\hat{M}}(X)$ here, or even on the space of infinite measures.
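A simple example (ours, added for illustration) of Theorem 2.3, assuming the dimensions of Definition 1.1 are the usual $dim_{*}\nu=\inf\\{HD(A):\nu(A)>0\\}$ and $dim^{*}\nu=\inf\\{HD(A):\nu(X\setminus A)=0\\}$: on $X=[0,1]$ take

```latex
\nu_{n}\;=\;\Bigl(1-\tfrac{1}{n}\Bigr)\,\mathfrak{L}^{1}|_{[0,1]}\;+\;\tfrac{1}{n}\,\delta_{\{1/2\}},
\qquad
\nu_{n}(B)\;=\;\Bigl(1-\tfrac{1}{n}\Bigr)\mathfrak{L}^{1}(B)+\tfrac{1}{n}\mathbf{1}_{B}(1/2)
\;\longrightarrow\;\mathfrak{L}^{1}(B)
```

for every Borel $B$, so $\nu_{n}\stackrel{s}{\rightarrow}\mathfrak{L}^{1}|_{[0,1]}$. Each $\nu_{n}$ charges the point $1/2$, so $dim_{*}\nu_{n}=0$, while every set of full $\nu_{n}$-measure has positive Lebesgue measure, so $dim^{*}\nu_{n}=1$. The limit satisfies $dim_{*}=dim^{*}=1$; thus (2.1) holds with equality while the inequality (2.2) is strict.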
Considering generalizations of measures, Theorem 2.3 is still true on the space of _finitely additive measures_ (a finitely additive measure $\nu$ on $(X,\mathcal{B})$ is a function from $\mathcal{B}$ to $[0,\infty)$ satisfying finite additivity instead of $\sigma$-additivity), but not true for a _signed measure_ on $(X,\mathcal{B})$ under Definition 1.1. Obviously a new notion of dimension for signed measures is needed. Let $\mathcal{\hat{M}}_{s}(X)$ be the collection of all the finite signed Borel measures on $X$. ###### 2.5 Definition. For a finite signed measure $\nu\in\mathcal{\hat{M}}_{s}(X)$, the _lower_ and _upper Hausdorff dimension_ of $\nu$ are defined respectively to be: $dim_{s}\nu=\inf\\{HD(A):\nu(A)>0\mbox{ or }\nu(A)<0,A\in\mathcal{B}\\},$ and $dim^{s}\nu=\max\Big{\\{}\inf\big{\\{}HD(A):\nu(A)=\max\\{\nu(B)\\}_{B\in\mathcal{B}}\big{\\}},\inf\big{\\{}HD(A):\nu(A)=\min\\{\nu(B)\\}_{B\in\mathcal{B}}\big{\\}}\Big{\\}}.$ Under this modified definition the measure-dimension mappings $dim_{s}$ and $dim^{s}$ from $\mathcal{\hat{M}}_{s}(X)$ to $[0,\infty)$ still enjoy some semi-continuity. ###### 2.6 Proposition. The measure-dimension mapping $dim^{s}$ is lower semi-continuous, while $dim_{s}$ is upper semi-continuous under the setwise topology on $\mathcal{\hat{M}}_{s}(X)$, that is, if $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ in $\mathcal{\hat{M}}_{s}(X)$ as $n\rightarrow\infty$, then (2.3) $\liminf_{n\rightarrow\infty}dim^{s}\nu_{n}\geq dim^{s}\nu$ while (2.4) $\limsup_{n\rightarrow\infty}dim_{s}\nu_{n}\leq dim_{s}\nu.$ ###### Proof. The claim that $dim_{s}$ is upper semi-continuous follows by an argument similar to the proof of [Ma, Theorem 2.9]. Now we show $dim^{s}$ is lower semi-continuous under the setwise topology on $\mathcal{\hat{M}}_{s}(X)$.
Without loss of generality we assume $\inf\big{\\{}HD(A):\nu(A)=\max\\{\nu(B)\\}_{B\in\mathcal{B}}\big{\\}}\geq\inf\big{\\{}HD(A):\nu(A)=\min\\{\nu(B)\\}_{B\in\mathcal{B}}\big{\\}}$ and $\inf\big{\\{}HD(A):\nu(A)=\max\\{\nu(B)\\}_{B\in\mathcal{B}}\big{\\}}>0$, $\max\\{\nu(B)\\}_{B\in\mathcal{B}}>0$. Suppose to the contrary that (2.3) does not hold. Then we can find a subsequence $\\{n_{i}\\}_{i=1}^{\infty}$ and a real $0<a<\inf\big{\\{}HD(A):\nu(A)=\max\\{\nu(B)\\}_{B\in\mathcal{B}}\big{\\}}$ such that (2.5) $\lim_{i\rightarrow\infty}dim^{s}\nu_{n_{i}}<a<dim^{s}\nu=\inf\big{\\{}HD(A):\nu(A)=\max\\{\nu(B)\\}_{B\in\mathcal{B}}\big{\\}}.$ Now apply the Hahn decomposition theorem to the measure $\nu$: let $X^{+}\subset X$ be a positive set of $\nu$ with $\nu(X^{+})=\max\\{\nu(B)\\}_{B\in\mathcal{B}}$. Under the above assumption we have $HD(X^{+})\geq dim^{s}\nu>a>dim^{s}\nu_{n_{i}}$ for any $i$ large enough. Since $\lim_{i\rightarrow\infty}\nu_{n_{i}}(X^{+})=\nu(X^{+})>0$, considering (2.5), for any $i$ large enough, there must exist $X^{+}_{i}\subset X^{+}$, such that $\nu_{n_{i}}(X^{+}_{i})=0$ and $HD(X^{+}_{i})=HD(X^{+})>a>HD(X^{+}\setminus X^{+}_{i})$. Now let $X^{+}_{\infty}=\cup_{i=N}^{\infty}(X^{+}\setminus X^{+}_{i})$ for some integer $N$ large enough. One can see that $X^{+}\setminus X^{+}_{\infty}\neq\emptyset$ as (2.6) $HD(X^{+}_{\infty})\leq a<HD(X^{+}).$ Moreover, we have $\nu(X^{+}\setminus X^{+}_{\infty})=\lim_{i\rightarrow\infty}\nu_{n_{i}}(X^{+}\setminus X^{+}_{\infty})=0$, which forces (2.7) $\nu(X^{+}_{\infty})=\nu(X^{+}).$ Now (2.6) together with (2.7) contradict the fact that $dim^{s}\nu>a$, which finishes the proof. ∎ ## 3\. Absolute continuity of convergent sequences of measures under the setwise topology In this section we discuss the relationship between absolute continuity of a sequence of measures and absolute continuity of its limit measure (under the weak or setwise topology, in case the sequence converges).
We start from the following basic result. ###### 3.1 Proposition. For a sequence of measures $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ in $\mathcal{\hat{M}}(X)$ as $n\rightarrow\infty$, if there exists a subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$ such that $\nu_{n_{i}}$ is absolutely continuous with respect to some $\varrho\in\mathcal{\hat{M}}(X)$ for any $1\leq i<\infty$, then $\nu$ is absolutely continuous with respect to $\varrho$. ###### Proof. We argue by contradiction. Suppose that $\nu$ is not absolutely continuous with respect to $\varrho$. Then there exists $A\in\mathcal{B}$ such that $\nu(A)>0$ while $\varrho(A)=0$. Since $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $n\rightarrow\infty$, there exists $N$ large enough such that $\nu_{n_{N}}(A)>0$. Considering $\varrho(A)=0$, this contradicts the fact that $\nu_{n_{N}}$ is absolutely continuous with respect to $\varrho$. ∎ Due to Proposition 3.1, we make the following definition formally. ###### 3.2 Definition. A sequence of measures $\\{\nu_{n}\\}_{n=1}^{\infty}$ is said to be _absolutely continuous_ with respect to some $\varrho$ on $\mathcal{\hat{M}}(X)$ if it contains a subsequence such that every measure in the subsequence is absolutely continuous with respect to $\varrho$. In fact Proposition 3.1 holds for any topological ambient space $X$. However, it is possible that a sequence of absolutely continuous measures converges weakly to a measure which is not absolutely continuous, as one can see from Example 3.3. Let $\mathfrak{L}^{d}|_{A}$ be the restriction of $\mathfrak{L}^{d}$ to a set $A\subset\mathbb{R}^{d}$. ###### 3.3 Example. Consider the Cantor middle-third set $J$, which appears as the attractor of the homogeneous IFS $S=\big{\\{}s_{1}(x)=\frac{1}{3}x,s_{2}(x)=\frac{1}{3}x+\frac{2}{3}\big{\\}}$ on $X=[0,1]$. Let $\nu_{n}=\big{(}\frac{3}{2}\big{)}^{n}\mathfrak{L}^{1}|_{\cup_{\omega\in\mathbb{N}_{2}^{n}}s_{\omega}(X)}$.
Let $\nu$ be the unique probability such that $\nu=\frac{1}{2}\nu\circ s_{1}^{-1}+\frac{1}{2}\nu\circ s_{2}^{-1}$. Then each $\nu_{n}$ is absolutely continuous with respect to $\mathfrak{L}^{1}$, while $\nu_{n}$ converges weakly to the Cantor measure $\nu$, which is singular with respect to $\mathfrak{L}^{1}$. Unfortunately, the converse of Proposition 3.1 is not always true, as one can see from the following example. Let $\delta_{\\{x\\}}$ be the Dirac probability measure at the point $x\in X$. ###### 3.4 Example. For $X=[0,1]$, let $\nu_{n}=\mathfrak{L}^{1}|_{X}+\frac{1}{n}\delta_{\\{1\\}}$ be a sequence of measures in $\mathcal{\hat{M}}(X)$. It is easy to justify that $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\mathfrak{L}^{1}|_{X}$ as $n\rightarrow\infty$ in Example 3.4. However, there is no measure in the sequence $\\{\nu_{n}\\}_{n=1}^{\infty}$ that is absolutely continuous with respect to $\mathfrak{L}^{1}|_{X}$. One can see that in Example 3.4 the measures in the sequence are all of mixed type. In fact this is the only obstruction to the converse of Proposition 3.1. ###### 3.5 Proposition. For a sequence of measures $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ in $\mathcal{\hat{M}}(X)$ as $n\rightarrow\infty$ with $\nu(X)>0$, if $\nu$ is absolutely continuous with respect to some $\varrho\in\mathcal{\hat{M}}(X)$ and the measures in the sequence are all of pure type with respect to $\varrho$, then there exists $N\in\mathbb{N}$ large enough such that $\nu_{n}$ is absolutely continuous with respect to $\varrho\in\mathcal{\hat{M}}(X)$ for any $n\geq N$. ###### Proof. Suppose to the contrary that the conclusion is not true. Then we can find a subsequence $\\{n_{i}\\}_{i=1}^{\infty}\subset\mathbb{N}$ such that any measure in the sequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$ is singular with respect to $\varrho$. Denote an essential support of $\nu_{n_{i}}$ by $A_{i}\subset X$ with $\varrho(A_{i})=0$ for any $1\leq i<\infty$.
Let $A=\cup_{i=1}^{\infty}A_{i}$. By the $\sigma$-additivity of $\varrho$, we have (3.1) $\varrho(A)=0.$ Note that since $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $n\rightarrow\infty$, we have (3.2) $\nu(A)=\lim_{i\rightarrow\infty}\nu_{n_{i}}(A)=\lim_{i\rightarrow\infty}\nu_{n_{i}}(X)=\nu(X)>0.$ Now (3.1) together with (3.2) contradict the fact that $\nu$ is absolutely continuous with respect to $\varrho$. ∎ Proposition 3.5 also holds for any topological ambient space $X$, but it is not true for convergent sequences of measures under the weak topology on $\mathcal{\hat{M}}(X)$, as one can also see from Example 3.3. The above results show that, under the setwise topology on $\mathcal{\hat{M}}(X)$, the absolute continuity of a convergent sequence of measures and the absolute continuity of its limit measure are equivalent to each other, while this relationship is not true under the weak topology. This is another advantage of the setwise topology over the weak topology on $\mathcal{\hat{M}}(X)$. ## 4\. The $n$-th concentrating measures of probability on the symbolic spaces and some inherited properties In this section we approximate a probability measure $\mu$ on the infinite symbolic space $\mathbb{N}^{\infty}$ by the sequence of its concentrating measures $\\{\mu_{n}\\}_{n=1}^{\infty}$ supported on $\\{\mathbb{N}_{n}^{\infty}\\}_{n=1}^{\infty}$ respectively. We will prove that some properties are inherited by the concentrating measures from $\mu$, under the hypothesis that the sequence of infinite random variables in the infinite stochastic process $\mathbb{Y}=\\{Y_{i}\\}_{i\in\mathbb{N}}$ is independent under the law $\mu$.
As indicated in Remark 1.6, one may be able to remove the independent hypothesis in due course if one tries the approximation of $\mu$ by its restrictions $\\{\mu|_{\mathbb{N}_{n}^{\infty}}\\}_{n=1}^{\infty}\subset\mathcal{\hat{M}}(\mathbb{N}_{n}^{\infty})$ instead of the concentrating measures $\\{\mu_{n}\\}_{n=1}^{\infty}\subset\mathcal{M}(\mathbb{N}_{n}^{\infty})$. ###### 4.1 Definition. For a probability measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$, define its $n$-th _concentrating measure_ $\mu_{n}$ for any $n\in\mathbb{N}$ to be the unique probability measure supported on $\mathbb{N}_{n}^{\infty}$ satisfying the following conditions. 1. (1) On the first level cylinders, $\mu_{n}([i])=\mu([i])$ for $1\leq i\leq n-1$, $\mu_{n}([n])=\sum_{i=n}^{\infty}\mu([i])$. 2. (2) On the second level cylinders, $\begin{array}[]{l}\mu_{n}([ij])=\mu([ij])\mbox{ for }1\leq i,j\leq n-1,\\\ \mu_{n}([in])=\sum_{k=n}^{\infty}\mu([ik])\mbox{ for }1\leq i\leq n-1,\\\ \mu_{n}([nj])=\sum_{k=n}^{\infty}\mu([kj])\mbox{ for }1\leq j\leq n-1,\\\ \mu_{n}([nn])=\sum_{l=n}^{\infty}\sum_{k=n}^{\infty}\mu([kl]).\end{array}$ 3. (3) For any $q$-th cylinder $[i_{1}i_{2}\cdots i_{q}]\subset\mathbb{N}_{n}^{\infty}$ with $i_{1},i_{2},\cdots,i_{q}\in\mathbb{N}_{n}$ and $q\in\mathbb{N}$, let $\\#\\{1\leq k\leq q:i_{k}=n\\}=l$. Number the indexes in $\\{1\leq k\leq q:i_{k}=n\\}$ in increasing order as $j_{1}<j_{2}<\cdots<j_{l}$, that is, $\\{j_{1},j_{2},\cdots,j_{l}\\}=\\{1\leq k\leq q:i_{k}=n\\}$. 
Then $\begin{array}[]{ll}&\mu_{n}([i_{1}i_{2}\cdots i_{q}])\\\ =&\sum_{n\leq r_{1},r_{2},\cdots,r_{l}<\infty}\mu([i_{1}\cdots i_{j_{1}-1}r_{1}i_{j_{1}+1}\cdots i_{j_{2}-1}r_{2}i_{j_{2}+1}\cdots i_{j_{l}-1}r_{l}i_{j_{l}+1}\cdots i_{q}]).\end{array}$ Since for any $q$-level cylinder $[i_{1}i_{2}\cdots i_{q}]\subset\mathbb{N}_{n}^{\infty}$ with $i_{1},i_{2},\cdots i_{q}\in\mathbb{N}_{n}$ and $q\in\mathbb{N}$, the conditions in Definition 4.1 (1)-(3) imply $\mu_{n}([i_{1}i_{2}\cdots i_{q}])=\sum_{j=1}^{n}\mu_{n}([i_{1}i_{2}\cdots i_{q}j])$ between cylinders of successive levels, the existence and uniqueness of the measure $\mu_{n}$ are guaranteed by the _Kolmogorov extension theorem_ , see for example [Tao, Theorem 2.4.3]. Moreover, $\mu_{n}$ is invariant for any $n\in\mathbb{N}$ if $\mu$ is invariant with respect to the shift map $\sigma$ according to [Ber, Theorem 2]. In fact, more properties are inherited from $\mu$ as the structure of the original measure is suitably preserved considering the structure of the concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ on the cylinder sets, under the independent hypothesis of $\mathbb{Y}$ with respect to $\mu$. ###### 4.2 Lemma. For a measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$, if the sequence of infinite random variables in $\mathbb{Y}$ under the law $\mu$ is independent, then it is also independent under the concentrating law $\mu_{n}$ for any $n\in\mathbb{N}$. ###### Proof. For any $q$-word $\omega=i_{1}i_{2}\cdots i_{q}\in\mathbb{N}_{n}^{q}$, let $\\#\\{1\leq k\leq q:i_{k}=n\\}=l\geq 0$ and $\\{j_{1},j_{2},\cdots,j_{l}\\}=\\{1\leq k\leq q:i_{k}=n\\}$. 
Since $\mathbb{Y}$ is independent under the law $\mu$, we have $\begin{array}[]{ll}&\mu_{n}([\omega])\\\ =&\sum_{n\leq r_{1}<\infty,n\leq r_{2}<\infty,\cdots,n\leq r_{l}<\infty}\mu([i_{1}\cdots i_{j_{1}-1}r_{1}i_{j_{1}+1}\cdots i_{j_{2}-1}r_{2}i_{j_{2}+1}\cdots i_{j_{l}-1}r_{l}i_{j_{l}+1}\cdots i_{q}])\\\ =&\mu([i_{1}])\cdots\mu([i_{j_{1}-1}])\big{(}\sum_{n\leq r_{1}<\infty}\mu([r_{1}])\big{)}\mu([i_{j_{1}+1}])\cdots\mu([i_{j_{2}-1}])\big{(}\sum_{n\leq r_{2}<\infty}\mu([r_{2}])\big{)}\mu([i_{j_{2}+1}])\\\ &\cdots\mu([i_{j_{l}-1}])\big{(}\sum_{n\leq r_{l}<\infty}\mu([r_{l}])\big{)}\mu([i_{j_{l}+1}])\cdots\mu([i_{q}])\\\ =&\mu_{n}([i_{1}])\cdots\mu_{n}([i_{j_{1}-1}])\mu_{n}([i_{j_{1}}])\mu_{n}([i_{j_{1}+1}])\cdots\mu_{n}([i_{j_{2}-1}])\mu_{n}([i_{j_{2}}])\mu_{n}([i_{j_{2}+1}])\cdots\\\ &\mu_{n}([i_{j_{l}-1}])\mu_{n}([i_{j_{l}}])\mu_{n}([i_{j_{l}+1}])\cdots\mu_{n}([i_{q}]),\end{array}$ which justifies the independence of the random variables in $\mathbb{Y}$ under the law $\mu_{n}$ for any $n\in\mathbb{N}$. ∎ The ergodicity is also inherited by the concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ under the independent hypothesis of $\mathbb{Y}$. ###### 4.3 Lemma. For an ergodic measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$, if the sequence of infinite random variables in $\mathbb{Y}$ under the law $\mu$ is independent, then the concentrating measure $\mu_{n}$ is also ergodic on $\mathbb{N}_{n}^{\infty}$ for any $n\in\mathbb{N}$ with respect to the shift map $\sigma$. ###### Proof. This is a standard proof in classical probability techniques, with all the above preparations. First, due to Lemma 4.2 and the independence assumption, the sequence of random variables $\\{Y_{i}\\}_{i=1}^{\infty}$ in $\mathbb{Y}$ under the law $\mu_{n}$ is independent for any $n\in\mathbb{N}$. Now let $\mathcal{B}_{\mathbb{N}_{n}^{\infty}}$ be the $\sigma$-algebra generated by the cylinder sets in $\mathbb{N}_{n}^{\infty}$ for some $n\in\mathbb{N}$. 
Let $\mathcal{B}_{k}=\\{\mathbb{N}_{n}^{k}\times B:B\in\mathcal{B}_{\mathbb{N}_{n}^{\infty}}\\}$. The tail $\sigma$-algebra is defined to be $\mathcal{B}_{T}=\cap_{k\in\mathbb{N}}\mathcal{B}_{k}$. Obviously the events $\\{A\in\mathcal{B}_{\mathbb{N}_{n}^{\infty}}:\sigma^{-1}(A)=A\\}\subset\mathcal{B}_{T}$ are all tail events. So according to Kolmogorov’s $0$-$1$ law (see for example [Tsi, Proposition 3b9] or [Shi]), $\mu_{n}(A)=0$ or $1$ for any event in $\\{A\in\mathcal{B}_{\mathbb{N}_{n}^{\infty}}:\sigma^{-1}(A)=A\\}$. ∎ ###### 4.4 Remark. Similar to Remark 1.5, we are wondering whether the result holds without the hypothesis of independence on the sequence of infinite random variables with law $\mu$, which essentially confines our main results to this setting. Considering the limit behaviour of the sequence of concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$, we have the following simple but important result. ###### 4.5 Proposition. For a probability measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$ such that $\mu(\cup_{n=1}^{\infty}\mathbb{N}_{n}^{\infty})=1$, let $\mu_{n}$ be its $n$-th concentrating measure on $\mathbb{N}_{n}^{\infty}$ for any $n\in\mathbb{N}$. Then $\mu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\mu$ as $n\rightarrow\infty$. ###### Proof. It is obvious that for any finite word $\omega\in\mathbb{N}^{*}$, we have $\lim_{n\rightarrow\infty}\mu_{n}([\omega])=\mu([\omega])$, as $\mu_{n}([\omega])=\mu([\omega])$ for any $n$ large enough. Since $\mu$ is supported on $\cup_{n=1}^{\infty}\mathbb{N}_{n}^{\infty}$, the convergence extends to all measurable sets in $\mathbb{N}^{\infty}$ due to regularity of the measures, or [FKZ, Theorem 2.3]. ∎ ###### 4.6 Remark. The convergence of the sequence of concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ of a measure $\mu$ is usually not true under the total variation (TV) topology on $\mathcal{M}(\mathbb{N}^{\infty})$, except in some trivial cases.
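As a concrete special case (our choice, not forced by the paper), let $\mu$ be the Bernoulli measure with first-level weights $p_{i}=2^{-i}$. By Definition 4.1 and Lemma 4.2, the concentrating measure $\mu_{n}$ is then again Bernoulli, with weights $q_{i}=p_{i}$ for $1\leq i\leq n-1$ and $q_{n}=\sum_{k\geq n}p_{k}$. A short sketch checking that the weights of $\mu_{n}$ sum to one and that the entropies $h_{\mu_{n}}(\sigma)$ approach $h_{\mu}(\sigma)=2\log 2$, anticipating Lemma 4.7 below:

```python
import math

# Sketch of an assumed example: mu is the Bernoulli (i.i.d.) measure on
# N^infty with first-level weights p_i = 2^{-i}.  Its n-th concentrating
# measure mu_n is then Bernoulli with the tail mass folded into the index n.

def concentrating_weights(n):
    """First-level cylinder masses of mu_n when p_i = 2**(-i)."""
    q = [2.0 ** (-i) for i in range(1, n)]   # q_i = p_i for i < n
    q.append(2.0 ** (-(n - 1)))              # q_n = sum_{k >= n} 2^{-k}
    return q

def bernoulli_entropy(weights):
    """Entropy h(sigma) of a Bernoulli measure with the given weights."""
    return -sum(w * math.log(w) for w in weights if w > 0)

h_mu = 2 * math.log(2)  # h_mu(sigma) = sum_i i * 2^{-i} * log 2 = 2 log 2

for n in (2, 5, 10, 20):
    q = concentrating_weights(n)
    print(n, sum(q), h_mu - bernoulli_entropy(q))  # masses sum to 1; gap -> 0
```

The choice $p_{i}=2^{-i}$ is purely illustrative; any summable weight sequence with finite entropy behaves in the same way.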
The entropy is also inherited by the sequence of concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ from the original measure $\mu$, considering Proposition 4.5 as well as the following result. ###### 4.7 Lemma. For a finite measure $\mu\in\mathcal{\hat{M}}(\mathbb{N}^{\infty})$ and its concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$, we have $\lim_{n\rightarrow\infty}h_{\mu_{n}}(\sigma)=h_{\mu}(\sigma)$ with respect to $(\mathbb{N}^{\infty},\mathcal{B}_{\mathbb{N}^{\infty}})$. ###### Proof. We justify the result in the cases $h_{\mu}(\sigma)=\infty$ and $h_{\mu}(\sigma)<\infty$ respectively. If $h_{\mu}(\sigma)=\liminf_{k\rightarrow\infty}-\frac{1}{k}\sum_{\omega\in\mathbb{N}^{k}}\mu([\omega])\log\mu([\omega])=\infty$, we have $-\frac{1}{k}\sum_{\omega\in\mathbb{N}^{k}}\mu([\omega])\log\mu([\omega])=\infty$ for any $k\in\mathbb{N}$. Then for any $a>0$ and $k\in\mathbb{N}$, we can find $N_{a,k}$ large enough, such that $-\frac{1}{k}\sum_{\omega\in\mathbb{N}_{n}^{k}}\mu_{n}([\omega])\log\mu_{n}([\omega])>a$ for any $n>N_{a,k}$. This is enough to force $\lim_{n\rightarrow\infty}h_{\mu_{n}}(\sigma)=h_{\mu}(\sigma)=\infty$ by contradiction. Now if $h_{\mu}(\sigma)=\liminf_{k\rightarrow\infty}-\frac{1}{k}\sum_{\omega\in\mathbb{N}^{k}}\mu([\omega])\log\mu([\omega])<\infty$, then for any small $\epsilon>0$, we can find $K_{\epsilon}$ large enough, such that $h_{\mu}(\sigma)\leq-\frac{1}{k}\sum_{\omega\in\mathbb{N}^{k}}\mu([\omega])\log\mu([\omega])<h_{\mu}(\sigma)+\epsilon$ for any $k>K_{\epsilon}$. Now for fixed $k>K_{\epsilon}$, since $-\sum_{\omega\in\mathbb{N}^{k}}\mu([\omega])\log\mu([\omega])<\infty$, we can find $N_{k,\epsilon}$ large enough such that $\big{|}\sum_{\omega\in\mathbb{N}^{k}}\mu([\omega])\log\mu([\omega])-\sum_{\omega\in\mathbb{N}_{n}^{k}}\mu_{n}([\omega])\log\mu_{n}([\omega])\big{|}<\epsilon$ for any $n>N_{k,\epsilon}$. This is enough to force $\lim_{n\rightarrow\infty}h_{\mu_{n}}(\sigma)=h_{\mu}(\sigma)$. ∎ ###### 4.8 Remark.
In fact Lemma 4.7 holds for any sequence of measures converging to the finite measure $\mu$ under the setwise topology in $\mathcal{\hat{M}}(X)$, not only for the sequence of its concentrating measures, and for any topological ambient space $X$. The present version is enough for our purposes in this work. ## 5\. Setwise approximation of projective measures on the attractors of families of infinite PIFS In this section we aim at proving Theorem 1.4. After the proof we formulate some interesting applications of Theorem 1.4. We first show a result which links the upper and lower dimensions of the projective measure $\nu$ on the attractor $J$ of a single PIFS. ###### 5.1 Lemma. Let $S=\\{s_{i}:X\rightarrow X\\}_{i\in I}$ be a parabolic iterated function system with a countable index set $I$. For an ergodic measure $\mu$ on $I^{\infty}$, let $\nu=\mu\circ\pi^{-1}$ be its projective measure. Then $dim_{*}\nu=dim^{*}\nu$. ###### Proof. To prove the result, it suffices to prove (5.1) $dim_{*}\nu\geq dim^{*}\nu.$ Let $J$ be the attractor of the PIFS. For a set $A\subset J$ with $\nu(A)=\mu\circ\pi^{-1}(A)>0$, consider the set $\pi\circ\sigma^{-1}\circ\pi^{-1}(A)=\pi(\cup_{i\in I}i\pi^{-1}(A))=\cup_{i\in I}\pi(i\pi^{-1}(A))$. Note that for any $i\in I$, due to $\pi(\omega)=s_{\omega|_{n}}\circ\pi\circ\sigma^{n}(\omega)$ for any $\omega\in I^{\infty}$, we have $\pi(i\pi^{-1}(A))=s_{i}(A)$, in which $i\pi^{-1}(A):=\\{\omega\in I^{\infty}:\omega_{1}=i,\sigma(\omega)\in\pi^{-1}(A)\\}$. Since $0<s_{i}^{\prime}(x)\leq 1$ for any $i\in I$, we have $HD\big{(}\pi(i\pi^{-1}(A))\big{)}=HD(A)$ for any $i\in I$. Since $I$ is countable, this forces $HD(\pi\circ\sigma^{-1}\circ\pi^{-1}(A))=HD(A)$. Successively we can show $HD(\pi\circ\sigma^{-n}\circ\pi^{-1}(A))=HD(A)$ for any $n\in\mathbb{N}$, which forces $HD(\cup_{n\in\mathbb{N}}\pi\circ\sigma^{-n}\circ\pi^{-1}(A))=HD(A)$.
Since $\mu$ is ergodic with respect to $\sigma$, we have $\begin{array}[]{ll}&\nu\big{(}\cup_{n\in\mathbb{N}}\pi\circ\sigma^{-n}\circ\pi^{-1}(A)\big{)}\\\ =&\mu\circ\pi^{-1}\big{(}\cup_{n\in\mathbb{N}}\pi\circ\sigma^{-n}\circ\pi^{-1}(A)\big{)}\\\ \geq&\mu\big{(}\cup_{n\in\mathbb{N}}\sigma^{-n}\circ\pi^{-1}(A)\big{)}\\\ =&1.\end{array}$ This implies the inequality (5.1). ∎ Lemma 5.1 may not be true for some invariant measure $\mu$ on the symbolic space. Applying it to families of PIFS, we instantly obtain the following result. ###### 5.2 Corollary. Let $\big{\\{}S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in I}\big{\\}}_{\mathbb{t}\in U}$ be a family of parabolic iterated function systems with a countable index set $I$. Let $\mu$ be an ergodic measure on $I^{\infty}$ and $\nu_{\mathbb{t}}$ be the projective measure under $\pi_{\mathbb{t}}$ at time $\mathbb{t}$. Then $dim_{*}\nu_{\mathbb{t}}=dim^{*}\nu_{\mathbb{t}}$ for any $\mathbb{t}\in U$. Note that the equality holds everywhere instead of Lebesgue _a.e._ with respect to $\mathbb{t}\in U$. Now we consider setwisely approximating the projective measure of $\mu$ by the sequence of projective measures of its concentrating measures on the ambient space $X$. The following result is an immediate corollary of Proposition 4.5. ###### 5.3 Corollary. Let $S=\\{s_{i}:X\rightarrow X\\}_{i\in\mathbb{N}}$ be an infinite parabolic iterated function system with its attractor $J$. For a probability measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$ such that $\mu(\cup_{n=1}^{\infty}\mathbb{N}_{n}^{\infty})=1$, let $\mu_{n}$ be its $n$-th concentrating measure on $\mathbb{N}_{n}^{\infty}$ for any $n\in\mathbb{N}$. Then their projections $\\{\nu_{n}=\mu_{n}\circ\pi^{-1}\\}_{n\in\mathbb{N}}$ and $\nu=\mu\circ\pi^{-1}$ satisfy $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $n\rightarrow\infty$ on $J$. As to absolute continuity of the projective measures on the ambient space, we have the following result.
###### 5.4 Lemma. Let $S=\\{s_{i}:X\rightarrow X\\}_{i\in\mathbb{N}}$ be an infinite parabolic iterated function system with attractor $J$. For an ergodic probability measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$, with $\mu(\cup_{n=1}^{\infty}\mathbb{N}_{n}^{\infty})=1$ and its $n$-th concentrating measure $\mu_{n}$ also being ergodic on $\mathbb{N}_{n}^{\infty}$ for any $n\in\mathbb{N}$, consider their projections $\\{\nu_{n}=\mu_{n}\circ\pi^{-1}\\}_{n\in\mathbb{N}}$ and $\nu=\mu\circ\pi^{-1}$ on $J$. The sequence of measures $\\{\nu_{n}\\}_{n\in\mathbb{N}}$ is absolutely continuous with respect to $\mathfrak{L}^{1}$ if and only if $\nu$ is absolutely continuous with respect to $\mathfrak{L}^{1}$. ###### Proof. First note that all the projective measures $\\{\nu_{n}\\}_{n\in\mathbb{N}}$ and $\nu$ are of pure type since they are all projected from ergodic measures on the symbolic spaces. By Corollary 5.3 we have $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $n\rightarrow\infty$ on $J$. Then the absolute continuity of the sequence of measures $\\{\nu_{n}\\}_{n\in\mathbb{N}}$ and the absolute continuity of $\nu$ with respect to $\mathfrak{L}^{1}$ are equivalent to each other in virtue of Proposition 3.1 and Proposition 3.5. ∎ In the following we will only use the implication that absolute continuity of the sequence of measures $\\{\nu_{n}\\}_{n\in\mathbb{N}}$ implies the absolute continuity of $\nu$ with respect to $\mathfrak{L}^{1}$. This is true, by Proposition 3.1, even if $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ are not ergodic. Equipped with all the above results, we are now ready to prove Theorem 1.4. Proof of Theorem 1.4: ###### Proof.
First note that since $\big{\\{}S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}\in\Gamma_{X}(\vartheta)\big{\\}}_{\mathbb{t}\in U}$, there exist sequences of positive real numbers $\\{0<\gamma_{n},u_{n}<1\\}_{n\in\mathbb{N}}$, $\\{M_{n}>0\\}_{n\in\mathbb{N}}$ (it is possible that these sequences satisfy $\sup\\{\gamma_{n}\\}_{n\in\mathbb{N}}=1$, $\inf\\{u_{n}\\}_{n\in\mathbb{N}}=0$ and $\sup\\{M_{n}\\}_{n\in\mathbb{N}}=\infty$) and a sequence of open neighbourhoods $V_{n}$ of $v$ (it is also possible that $\cap_{n=1}^{\infty}V_{n}=\\{v\\}$), such that its $n$-th family of truncates $\big{\\{}S_{n}^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}_{n}}\in\Gamma_{X}(\vartheta,V_{n},\gamma_{n},u_{n},M_{n})\big{\\}}_{\mathbb{t}\in U}$ for any fixed $n\in\mathbb{N}$. Denote their attractors by $\\{J_{n,\mathbb{t}}\subset J_{\mathbb{t}}\\}_{n\in\mathbb{N},\mathbb{t}\in U}$. Consider the projections of the concentrating measures $\nu_{n,\mathbb{t}}=\mu_{n}\circ\pi_{\mathbb{t}}^{-1}$ on the limit sets for $\mathbb{t}\in U,n\in\mathbb{N}$. Since the sequence of infinite random variables in $\mathbb{Y}$ is independent under the ergodic law $\mu$, the concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ are also ergodic with respect to the shift map $\sigma$ in virtue of Lemma 4.3. Moreover, according to Lemma 4.7, we have $h_{\mu_{n}}(\sigma)>0$ for $n$ large enough. 
Now applying [SSU1, Theorem 2.3 (i)] and Corollary 5.2 to the ergodic projections $\\{\nu_{n,\mathbb{t}}\\}_{\mathbb{t}\in U}$ originating from the family of finite PIFS $\big{\\{}S_{n}^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}_{n}}\in\Gamma_{X}(\vartheta,V_{n},\gamma_{n},u_{n},M_{n})\big{\\}}_{\mathbb{t}\in U}$ satisfying the continuity and transversality condition, we have $dim_{*}\nu_{n,\mathbb{t}}=dim^{*}\nu_{n,\mathbb{t}}=\min\Big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)},1\Big{\\}}$ for Lebesgue _a.e._ $\mathbb{t}\in U$ and any $n\in\mathbb{N}$ large enough. In virtue of Corollary 5.3, we have $\nu_{n,\mathbb{t}}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu_{\mathbb{t}}$ as $n\rightarrow\infty$ on $J$ for any $\mathbb{t}\in U$. Now applying the semi-continuity result Theorem 2.3 to the setwisely convergent sequence of projective measures $\\{\nu_{n,\mathbb{t}}\\}_{\mathbb{t}\in U}$, we have (5.2) $\begin{array}[]{ll}\limsup_{n\rightarrow\infty}\min\Big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)},1\Big{\\}}=\limsup_{n\rightarrow\infty}dim_{*}\nu_{n,\mathbb{t}}\leq dim_{*}\nu_{\mathbb{t}}\\\ =dim^{*}\nu_{\mathbb{t}}\leq\liminf_{n\rightarrow\infty}dim^{*}\nu_{n,\mathbb{t}}=\liminf_{n\rightarrow\infty}\min\Big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)},1\Big{\\}}\end{array}$ for Lebesgue _a.e._ $\mathbb{t}\in U$ and any $n\in\mathbb{N}$ large enough. Then Theorem 1.4 (i) is proved by letting $n\rightarrow\infty$ in (5.2), since the Lebesgue measure of unions of countably many Lebesgue null sets is still null.
To prove Theorem 1.4 (ii), for fixed $n\in\mathbb{N}$, applying [SSU1, Theorem 2.3 (ii)] to the ergodic projections $\\{\nu_{n,\mathbb{t}}\\}_{\mathbb{t}\in U}$ originating from the $n$-th truncated family of PIFS $\\{S_{n}^{\mathbb{t}}\\}_{\mathbb{t}\in U}$ satisfying the continuity and transversality condition, we see that $\nu_{n,\mathbb{t}}$ is absolutely continuous for _a.e._ $\mathbb{t}\in\Big{\\{}\mathbb{t}\in U:\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)}>1\Big{\\}}$. So in virtue of Lemma 5.4, $\nu_{\mathbb{t}}$ is absolutely continuous for _a.e._ $\mathbb{t}$ in $\cup_{\\{n_{j}\\}_{j=1}^{\infty}\mbox{ is an infinite subsequence of }\\{n\\}_{n=1}^{\infty}}\cap_{j=1}^{\infty}\Big{\\{}\mathbb{t}\in U:\cfrac{h_{\mu_{n_{j}}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n_{j}}}(\sigma)}>1\Big{\\}}:=U_{a}$. It is easy to see that $\Big{\\{}\mathbb{t}\in U:\limsup_{n\rightarrow\infty}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)}>1\Big{\\}}\subset U_{a}$. This justifies Theorem 1.4 (ii). ∎ One can see from the above proof that the limit of the sequence $\Big{\\{}\min\big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)},1\big{\\}}\Big{\\}}_{n=1}^{\infty}$ always exists for Lebesgue _a.e._ $\mathbb{t}\in U$. More interesting results are possible according to different _a.e._ limit behaviours of the sequence $\Big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)}\Big{\\}}_{\mathbb{t}\in U,n\in\mathbb{N}}$. In the remainder of this section we apply Theorem 1.4 to various families of PIFS to deduce some interesting results. The first application concerns the Hausdorff dimension of the attractors of a family of infinite PIFS. ###### 5.5 Corollary. 
Let $\big{\\{}S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}\in\Gamma_{X}(\vartheta)\big{\\}}_{\mathbb{t}\in U}$ be a family of infinite parabolic iterated function systems satisfying the continuity and transversality condition with respect to the vector-time parameter $\mathbb{t}\in U$. For an ergodic probability measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$ with positive entropy $h_{\mu}(\sigma)$ and $\mu(\cup_{n=1}^{\infty}\mathbb{N}_{n}^{\infty})=1$, let $\mu_{n}$ be its $n$-th concentrating measure for any $n\in\mathbb{N}$. If the sequence of infinite random variables in $\mathbb{Y}$ under the law $\mu$ is independent, then 1. (i). $HD(J_{\mathbb{t}})\geq\lim_{n\rightarrow\infty}\min\Big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)},1\Big{\\}}$ for Lebesgue _a.e._ $\mathbb{t}\in U$. 2. (ii). $\mathfrak{L}^{1}(J_{\mathbb{t}})>0$ for Lebesgue _a.e._ $\mathbb{t}\in\Big{\\{}\mathbb{t}:\limsup_{n\rightarrow\infty}\big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)}\big{\\}}>1\Big{\\}}$. ###### Proof. These results follow directly from Theorem 1.4 and the fact that $supp(\nu_{\mathbb{t}})\subset J_{\mathbb{t}}$ for any $\mathbb{t}\in U$. ∎ More interesting corollaries on the Hausdorff dimension of the attractors can be formulated from Theorem 1.4, considering different limit behaviours of the sequence $\Big{\\{}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma)}\Big{\\}}_{\mathbb{t}\in U,n\in\mathbb{N}}$ and the fact that $supp(\nu_{\mathbb{t}})$ is always contained in $J_{\mathbb{t}}$. Now we apply Theorem 1.4 or Corollary 5.5 to some families of infinite PIFS with _exploding_ measures $\mu$ on $\mathbb{N}^{\infty}$, that is, measures $\mu$ satisfying $h_{\mu}(\sigma)=\infty$. Some stronger results will be available in due course. ###### 5.6 Corollary. 
Let $\big{\\{}S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}\in\Gamma_{X}(\vartheta,V,\gamma,u,M)\big{\\}}_{\mathbb{t}\in U}$ be a family of infinite parabolic iterated function systems satisfying the continuity and transversality condition with respect to the vector-time parameter $\mathbb{t}\in U$ for some fixed open neighbourhood $V$ of $v$ and some fixed parameters $0<\vartheta,\gamma,u<1,M>0$. For an ergodic exploding probability measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$ with $\mu(\cup_{n=1}^{\infty}\mathbb{N}_{n}^{\infty})=1$, if the sequence of infinite random variables in $\mathbb{Y}$ under the law $\mu$ is independent, then 1. (i). For Lebesgue _a.e._ $\mathbb{t}\in U$, $dim_{*}\nu_{\mathbb{t}}=dim^{*}\nu_{\mathbb{t}}=1$, 2. (ii). $\nu_{\mathbb{t}}$ is absolutely continuous for Lebesgue _a.e._ $\mathbb{t}\in U$, in which $\nu_{\mathbb{t}}=\mu\circ\pi_{\mathbb{t}}^{-1}$ is the projective measure at time $\mathbb{t}$. ###### Proof. Note that since (5.3) $\inf_{n\in\mathbb{N},x\in X}\\{|(s_{n}^{\mathbb{t}})^{\prime}(x)|\\}\geq u$ for any $\mathbb{t}\in U$, we have (5.4) $\lambda^{\mathbb{t}}_{\mu}(\sigma)\leq-\log u.$ Applying Theorem 1.4 to the projective measures $\\{\nu_{\mathbb{t}}\\}_{\mathbb{t}\in U}$ originating from the family of infinite parabolic iterated function systems $\big{\\{}S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}\in\Gamma_{X}(\vartheta,V,\gamma,u,M)\big{\\}}_{\mathbb{t}\in U}$ satisfying the continuity and transversality condition (note that $\Gamma_{X}(\vartheta,V,\gamma,u,M)\subset\Gamma_{X}(\vartheta)$), the two conclusions follow instantly in virtue of (5.4), $h_{\mu}(\sigma)=\infty$ and Lemma 4.7. ∎ ## 6\. Dimensional estimates of the exceptional parameters This section is devoted to estimating an upper bound on the local (global) Hausdorff dimension of the exceptional parameters. Our aim here is to prove Theorem 1.8. 
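Before turning to the proofs, the central quantity above — the entropy-over-Lyapunov ratio $\min\\{h_{\mu_{n}}(\sigma)/\lambda^{\mathbb{t}}_{\mu_{n}}(\sigma),1\\}$ of the concentrating measures — can be illustrated numerically. The following sketch uses a toy Bernoulli system with assumed geometric weights and summable contraction ratios, not one of the parabolic families considered here:

```python
import math

# Toy Bernoulli setting (an illustrative assumption): symbol i carries weight
# p_i and contraction ratio r_i, so h_mu(sigma) = -sum p_i log p_i and
# lambda_mu(sigma) = -sum p_i log r_i.
def dimension_estimate(p, r):
    """min{ h_mu(sigma) / lambda_mu(sigma), 1 } for a Bernoulli measure."""
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    lam = -sum(pi * math.log(ri) for pi, ri in zip(p, r))
    return min(h / lam, 1.0)

def concentrating(p, n):
    """n-th concentrating measure: restrict the weights to the first n
    symbols and renormalize, mirroring the truncation mu -> mu_n."""
    head, z = p[:n], sum(p[:n])
    return [pi / z for pi in head]

N = 200
weights = [2.0 ** -(i + 1) for i in range(N)]
total = sum(weights)
p = [w / total for w in weights]           # geometric weights (toy choice)
r = [0.4 ** (i + 1) for i in range(N)]     # summable contraction ratios

full = dimension_estimate(p, r)
truncs = [dimension_estimate(concentrating(p, n), r[:n]) for n in (2, 5, 20, 100)]
# for this toy choice, truncs approaches `full` as the truncation level grows
```

This mirrors the limit behaviour exploited in the proofs: the truncated estimates converge to the full-system value as $n\rightarrow\infty$.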
We start by showing several preliminary results on the limit behaviours of the (families of) truncated finite sub-systems of a (family of) infinite PIFS. ###### 6.1 Lemma. Let $S=\\{s_{i}:X\rightarrow X\\}_{i\in\mathbb{N}}\in\Gamma_{X}(\vartheta)$ be a PIFS. Let $\mu$ be a finite measure on $\mathbb{N}^{\infty}$ with its concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ on $\\{\mathbb{N}_{n}^{\infty}\\}_{n\in\mathbb{N}}$ respectively. Considering the sequence of Lyapunov exponents $\\{\lambda_{\mu_{n}}(\sigma)\\}_{n=1}^{\infty}$ of the truncated systems $\big{\\{}S_{n}=\\{s_{i}:X\rightarrow X\\}_{i\in\mathbb{N}_{n}}\big{\\}}_{n\in\mathbb{N}}$, we have $\lim_{n\rightarrow\infty}\lambda_{\mu_{n}}(\sigma)=\lambda_{\mu}(\sigma)$. ###### Proof. First we represent $\lambda_{\mu}(\sigma)$ as $\lambda_{\mu}(\sigma)=\sum_{i=1}^{\infty}-\int_{[i]}\log|s_{i}^{\prime}(\pi\circ\sigma(\omega))|d\mu(\omega)$. Note that (6.1) $-\int_{[i]}\log|s_{i}^{\prime}(\pi\circ\sigma(\omega))|d\mu(\omega)<\infty$ for any $1\leq i<\infty$. We distinguish the case $\lambda_{\mu}(\sigma)=\infty$ from the case $\lambda_{\mu}(\sigma)<\infty$. If $\lambda_{\mu}(\sigma)=\infty$, we claim that for any $a>0$, there exists an integer $N_{a}$ large enough, such that for any $n>N_{a}$, we have $\lambda_{\mu_{n}}(\sigma)>a$. This means $\lim_{n\rightarrow\infty}\lambda_{\mu_{n}}(\sigma)=\infty$. To see this, since $\lambda_{\mu}(\sigma)=\infty$, for any $a>0$, there exists $N_{*}$ large enough, such that $\sum_{i=1}^{N_{*}}-\int_{[i]}\log|s_{i}^{\prime}(\pi\circ\sigma(\omega))|d\mu(\omega)>2a$. Now choose some small $0<\epsilon<a$; due to (6.1), for any $1\leq i\leq N_{*}$, we can find an integer $N_{i}$ large enough, such that $a_{i}:=\big{|}\int_{[i]}\log|s_{i}^{\prime}(\pi\circ\sigma(\omega))|d\mu(\omega)-\int_{[i]}\log|s_{i}^{\prime}(\pi\circ\sigma(\omega))|d\mu_{n}(\omega)\big{|}<\cfrac{\epsilon}{2^{i}}$ for any $n>N_{i}$. 
Now let $N_{a}=\max\\{N_{i}\\}_{i=1}^{N_{*}}$; then we have $\begin{array}[]{ll}&\sum_{i=1}^{N_{*}}-\int_{[i]}\log|s_{i}^{\prime}(\pi\circ\sigma(\omega))|d\mu_{n}(\omega)\\\ \geq&\sum_{i=1}^{N_{*}}-\int_{[i]}\log|s_{i}^{\prime}(\pi\circ\sigma(\omega))|d\mu(\omega)-\sum_{i=1}^{N_{*}}a_{i}\\\ \geq&2a-\sum_{i=1}^{N_{*}}\cfrac{\epsilon}{2^{i}}\\\ >&a\end{array}$ for any $n>N_{a}$, which justifies the claim. The case $\lambda_{\mu}(\sigma)<\infty$ can be dealt with in a similar way, which is left to the interested reader. ∎ Similar to Remark 4.8, Lemma 6.1 holds for any sequence of measures converging to the measure $\mu$ under the setwise topology on $\mathcal{\hat{M}}(X)$, not only for the sequence of its concentrating measures, for any topological ambient space $X$. ###### 6.2 Lemma. Let $G\subset U$ be a subset. Consider a sequence of real functions $\\{f_{n}:G\rightarrow\mathbb{R}\\}_{n=1}^{\infty}$ such that $f_{n}(\mathbb{t})\rightarrow f(\mathbb{t})$ as $n\rightarrow\infty$ for some $f:G\rightarrow\mathbb{R}$ at any $\mathbb{t}\in G$. Now if $HD(\\{\mathbb{t}\in G:f_{n}(\mathbb{t})<0\\})\leq h_{n}$ for a sequence of reals $\\{h_{n}\\}_{n=1}^{\infty}$, then we have $HD(\\{\mathbb{t}\in G:f(\mathbb{t})<0\\})\leq\limsup_{n\rightarrow\infty}h_{n}$. ###### Proof. Since $f_{n}(\mathbb{t})\rightarrow f(\mathbb{t})$ as $n\rightarrow\infty$ at any time $\mathbb{t}\in G$, we have $\\{\mathbb{t}\in G:f(\mathbb{t})<0\\}\subset\cap_{k=1}^{\infty}\cup_{n=k}^{\infty}\\{\mathbb{t}\in G:f_{n}(\mathbb{t})<0\\}$. This means $\begin{array}[]{ll}&HD(\\{\mathbb{t}\in G:f(\mathbb{t})<0\\})\\\ \leq&\inf\big{\\{}HD(\cup_{n=k}^{\infty}\\{\mathbb{t}\in G:f_{n}(\mathbb{t})<0\\})\big{\\}}_{k=1}^{\infty}\\\ \leq&\inf\big{\\{}\sup\\{h_{n}\\}_{n=k}^{\infty}\big{\\}}_{k=1}^{\infty}\\\ =&\limsup_{n\rightarrow\infty}h_{n}.\end{array}$ ∎ Note that Lemma 6.2 fails if both occurrences of $<$ are replaced by $\leq$. Now we are well prepared to prove Theorem 1.8. 
Proof of Theorem 1.8: ###### Proof. Since $\mu$ is ergodic and $\mathbb{Y}$ is independent under the law $\mu$, the concentrating measures $\\{\mu_{n}\\}_{n\in\mathbb{N}}$ of $\mu$ are ergodic according to Lemma 4.3; consider the sequence of measures $\\{\nu_{n,\mathbb{t}}\\}_{\mathbb{t}\in U,n\in\mathbb{N}}$ projected from them. For fixed $n\in\mathbb{N}$, as the family of $n$-th truncated finite PIFS $\big{\\{}S_{n}^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}_{n}}\in\Gamma_{X}(\vartheta,V_{n},\gamma_{n},u_{n},M_{n})\big{\\}}_{\mathbb{t}\in U}$ satisfies the continuity and strong transversality condition with respect to the time parameter $\mathbb{t}\in U$ for some open neighbourhood $V_{n}$ of $v$, some $0<\gamma_{n},u_{n}<1$ and $M_{n}>0$, applying [SSU1, Theorem 5.3] to the family of truncated finite PIFS $S_{n}^{\mathbb{t}}$ yields (6.2) $HD\Big{(}\big{\\{}\mathbb{t}\in G:dim^{*}\nu_{n,\mathbb{t}}<\min\\{\frac{h_{\mu_{n}}(\sigma)}{\lambda_{\mu_{n}}^{\mathbb{t}}(\sigma)},\alpha\\}\big{\\}}\Big{)}\leq\min\Big{\\{}\sup_{\mathbb{t}\in G}\frac{h_{\mu_{n}}(\sigma)}{\lambda_{\mu_{n}}^{\mathbb{t}}(\sigma)},\alpha\Big{\\}}+d-1$ for any $n\in\mathbb{N}$. Note that $dim^{*}\nu_{\mathbb{t}}=\lim_{n\rightarrow\infty}dim^{*}\nu_{n,\mathbb{t}}$ since $\nu_{n,\mathbb{t}}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu_{\mathbb{t}}$ as $n\rightarrow\infty$, and $\lim_{n\rightarrow\infty}\min\\{\cfrac{h_{\mu_{n}}(\sigma)}{\lambda_{\mu_{n}}^{\mathbb{t}}(\sigma)},\alpha\\}$ exists for any $\mathbb{t}\in G$ since $\lim_{n\rightarrow\infty}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda_{\mu_{n}}^{\mathbb{t}}(\sigma)}$ exists everywhere on $G$. Moreover, $\lim_{n\rightarrow\infty}\Big{(}\min\big{\\{}\sup_{G}\cfrac{h_{\mu_{n}}(\sigma)}{\lambda_{\mu_{n}}^{\mathbb{t}}(\sigma)},\alpha\big{\\}}+d-1\Big{)}=K_{\alpha,G}$. Applying Lemma 6.2, we get (6.3). ∎ In the non-exploding case, Theorem 1.8 reduces to a simpler version, as follows. ###### 6.3 Corollary. 
Let $\big{\\{}S^{\mathbb{t}}=\\{s_{i}^{\mathbb{t}}:X\rightarrow X\\}_{i\in\mathbb{N}}\in\Gamma_{X}(\vartheta)\big{\\}}_{\mathbb{t}\in U}$ be a family of infinite parabolic iterated function systems satisfying the continuity and strong transversality condition with respect to the vector-time parameter $\mathbb{t}\in U$. For an ergodic probability measure $\mu$ on the symbolic space $\mathbb{N}^{\infty}$ satisfying $\mu(\cup_{n=1}^{\infty}\mathbb{N}_{n}^{\infty})=1$ and $0<h_{\mu}(\sigma),\lambda^{\mathbb{t}}_{\mu}(\sigma)<\infty$ at any time $\mathbb{t}\in U$, if the sequence of infinite random variables in $\mathbb{Y}$ under the law $\mu$ is independent, then (6.3) $HD\Big{(}\Big{\\{}\mathbb{t}\in G:dim^{*}\nu_{\mathbb{t}}<\min\big{\\{}\cfrac{h_{\mu}(\sigma)}{\lambda^{\mathbb{t}}_{\mu}(\sigma)},\alpha\big{\\}}\Big{\\}}\Big{)}\leq\min\Big{\\{}\sup_{\mathbb{t}\in G}\cfrac{h_{\mu}(\sigma)}{\lambda^{\mathbb{t}}_{\mu}(\sigma)},\alpha\Big{\\}}+d-1$ for any $0<\alpha<1$ and $G\subset U$. ###### Proof. Under the assumption $0<h_{\mu}(\sigma),\lambda^{\mathbb{t}}_{\mu}(\sigma)<\infty$ at any time $\mathbb{t}\in U$, considering Lemma 4.7 and Lemma 6.1, the result follows instantly from Theorem 1.8. ∎ In the exploding case, that is, either $h_{\mu}(\sigma)=\infty$ or $\lambda^{\mathbb{t}}_{\mu}(\sigma)=\infty$ for some $\mathbb{t}\in U$, the result depends heavily on the asymptotic behaviour of the sequence of concentrating measures of $\mu$; however, some estimates of the Hausdorff dimension of some form of exceptional parameters may still be possible upon reformulating Lemma 6.2. Note that most of the notions and results in this work apply to families of _hyperbolic iterated function systems_ , which are families of iterated function systems consisting only of contractive hyperbolic maps. ## References * [Ber] V. Berthé, Symbolic dynamics and representations, Les cours du C.I.R.M. Vol. 5 no 1 (2017) 1-16, Cours $n^{o}$ I. * [BHP] R. Basu, J. Hermon and Y. 
Peres, Characterization of cutoff for reversible Markov chains, Ann. Probab. Volume 45, Number 3 (2017), 1448-1487. * [BRS] B. Bárány, M. Rams and K. Simon. On the dimension of self-affine sets and measures with overlaps. Proceedings of the American Mathematical Society, 144(10):4427-4440, 2016. * [BV1] E. Breuillard and P. Varjú, On the dimension of Bernoulli convolutions, Ann. Probab. Volume 47, Number 4 (2019), 2582-2617. * [BV2] E. Breuillard and P. Varjú, Entropy of Bernoulli convolutions and uniform exponential growth for linear groups, Journal d’Analyse Mathématique volume 140, pages 443-481(2020). * [Doo] J. Doob, Measure Theory, Graduate Texts in Mathematics, 143, Springer-Verlag New York, 1994. * [Fal1] K. Falconer, The Hausdorff dimension of self-affine fractals, Math. Proc. Cambridge Philos. Soc. 103, 1988, 339-350. * [Fal2] K. Falconer, Fractal Geometry, Mathematical Foundations and Applications, 3rd Edition, John Wiley & Sons, 2014. * [FKZ] E. Feinberg, P. Kasyanov and M. Zgurovsky, Convergence of probability measures and Markov decision models with incomplete information, Proceedings of the Steklov Institute of Mathematics, December 2014, Volume 287, Issue 1, pp 96-117. * [Fur] H. Furstenberg, Intersections of Cantor sets and transversality of semigroups, In Problems in analysis (Sympos. Salomon Bochner, Princeton Univ., Princeton, N.J., 1969), pages 41-59. Princeton Univ. Press, Princeton, N.J., 1970. 3, 4, 10. * [GR] J. K. Ghosh and R. V. Ramamoorthi, Bayesian Nonparametrics, Springer, New York, 2003. * [Hai] M. Hairer, Ergodic Properties of Markov Processes, Lecture given at The University of Warwick in Spring 2006, http://www.hairer.org/notes/Markov.pdf, 2018. * [Hay] N. Haydn, The Central Limit Theorem for uniformly strong mixing measures, Stochastics and Dynamics, Vol. 12, No. 4 (2012). * [HL] O. Hernandez-Lerma and J. Lasserre, Markov Chains and Invariant Probabilities, Progress in Mathematics, Birkhäuser Basel, 2003. * [Hoc1] M. 
Hochman, On self-similar sets with overlaps and inverse theorems for entropy, Ann. of Math. (2) 180, 773-822, 2014. * [Hoc2] M. Hochman, Lectures on dynamical systems and entropy, http://math.huji.ac.il/~mhochman/courses/dynamics2014/notes.5.pdf, June 27, 2014. * [Hoc3] M. Hochman, Self similar sets, entropy and additive combinatorics, Geometry and Analysis of Fractals, Volume 88, 2014, pp 225-252. * [Hoc4] M. Hochman, Some problems on the boundary of fractal geometry and additive combinatorics, Recent Developments in Fractals and Related Fields, pp 129-174, Editors: J. Barral and S. Seuret, Conference on Fractals and Related Fields III, île de Porquerolles, France, 2015. * [HS] M. Hochman and P. Shmerkin, Local entropy averages and projections of fractal measures, Ann. of Math. (2) 175 1001-1059, 2012. * [Hut] J. Hutchinson, Fractals and self-similarity, Indiana Univ. Math. J. 30 (1981), no. 5, 713-747. * [JW] B. Jessen and A. Wintner, Distribution functions and the Riemann zeta function, Trans. Amer. Math. Soc., 38(1):48-88, 1935. * [Kau] R. Kaufman, On Hausdorff dimension of projections, Mathematika 15 (1968), 153-155. * [Las] J. Lasserre, On the setwise convergence of sequences of measures, Journal of Applied Mathematics and Stochastic Analysis, 10:2 (1997), 131-136. * [LPS] E. Lindenstrauss, Y. Peres and W. Schlag, Bernoulli convolutions and an intermediate value theorem for entropy of K-partitions, J. d’Analyse Math 87 (2002), 337-367. * [LY] T. Linder and S. Yüksel, Optimization and convergence of observation channels in stochastic control, SIAM J. Control Optim. Vol. 50, No. 2, pp. 864-887. * [Ma] L. Ma, Continuity of measure-dimension mappings, arXiv:2105.05491 [math.DS], 2021. * [MMR] P. Mattila, M. Morán and J. M. Rey, Dimension of a measure, Studia Math. 142 (2000), no. 3, 219-233. * [PS1] Y. Peres and B. 
Solomyak, Problems on self-similar and self-affine sets, an update, in ‘Fractals and Stochastics II, Proceedings of the Greifswald conference’ (Aug 1998), (2000), 95-106. * [PS2] Y. Peres and B. Solomyak, Absolute continuity of Bernoulli convolutions, a simple proof, Math. Research Letters 3:2 (1996), 231-239. * [PS3] Y. Peres and B. Solomyak, Self-similar measures and intersections of Cantor sets, Trans. Amer. Math. Soc. 350, no. 10 (1998), 4065-4087. * [PolS] M. Pollicott and K. Simon, The Hausdorff dimension of $\lambda$-expansions with deleted digits, Trans. Amer. Math. Soc. 347 (1995), 967-983. * [PSS] Y. Peres, K. Simon and B. Solomyak, Absolute continuity for random iterated function systems with overlaps, Journal of the London Math. Soc. (2) 74 (2006), 739-756. * [Shi] P. Shields, The theory of Bernoulli Shifts, Web Edition 1.01, https://www.impan.pl/~gutman/The%20Theory%20of%20Bernoulli%20Shifts.pdf. * [Shm1] P. Shmerkin, On the exceptional set for absolute continuity of Bernoulli convolutions, Geom. Funct. Anal. 24, no. 3 (2014), 946-958. * [Shm2] P. Shmerkin, A modified multifractal formalism for a class of self-similar measures with overlap, Asian J. Math., 9(3):323-348, 2005. * [Shm3] P. Shmerkin, Overlapping self-affine sets, Indiana Univ. Math. J., 55(4):1291-1331, 2006. * [Shm4] P. Shmerkin, Projections of self-similar and related fractals: a survey of recent developments, Fractal Geometry and Stochastics V, pages 53-74. Editors: Christoph Bandt, Kenneth J. Falconer, and Martina Zähle, Springer, 2015. * [SimS1] K. Simon and B. Solomyak, Hausdorff dimension for horseshoes in $\mathbb{R}^{3}$, Ergodic Th. and Dynam. Sys. 19 (1999), 1343-1363. * [SimS2] K. Simon and B. Solomyak, On the dimension of self-similar sets, Fractals 10 (2002), 59-65. * [Sol1] B. Solomyak, On the random series $\sum\pm\lambda^{i}$ (an Erdös problem), Annals of Math. 142 (1995), 611-625. * [Sol2] B. Solomyak, Measure and dimension for some fractal families, Math. Proc. 
Cambridge Phil. Soc. 124, no. 3 (1998), 531-546. * [Sol3] B. Solomyak, Notes on Bernoulli convolutions, In Fractal Geometry and Applications: A Jubilee of Benoît Mandelbrot. Part 1. Proc. Sympos. Pure Math. 72 (2004) 207-230. Amer. Math. Soc., Providence, RI. * [SS1] P. Shmerkin and B. Solomyak, Absolute continuity of self-similar measures, their projections and convolutions, Transactions of the American Math. Society 368 (2016), 5125-5151. * [SS2] P. Shmerkin and B. Solomyak, Absolute continuity of complex Bernoulli convolutions, Math. Proc. Cambridge Philos. Soc. 161 (2016), no. 3, 435-453. * [SSS] S. Saglietti, P. Shmerkin and B. Solomyak, Absolute continuity of non-homogeneous self-similar measures, Adv. Math., 335:60-110, 2018. * [SSU1] K. Simon, B. Solomyak and M. Urbański, Invariant Measures for Parabolic IFS with Overlaps and Random Continued Fractions, Transactions of the American Mathematical Society, Vol. 353, No. 12 (Dec., 2001), pp. 5145-5164. * [SSU2] K. Simon, B. Solomyak and M. Urbański, Hausdorff dimension of limit sets for parabolic IFS with overlaps, Pacific J. Math. 201 (2001), 441-478. * [Tao] T. Tao, An introduction to measure theory, Graduate Studies in Mathematics, Volume: 126, American Math. Society, 2011. * [Tsi] B. Tsirelson, Lecture notes for Probability for mathematicians, Part A: Independence, 3, Infinite independent sequences, http://www.math.tau.ac.il/~tsirel/Courses/ProbForMath/A3.pdf. * [Var1] P. P. Varjú, Recent progress on Bernoulli convolutions, In European Congress of Mathematics: Berlin, July 18-22, 2016. Eur. Math. Soc., Zurich. * [Var2] P. P. Varjú, Absolute continuity of Bernoulli convolutions for algebraic parameters, J. Amer. Math. Soc. 32 (2019) 351-397. * [Var3] P. P. Varjú, On the dimension of Bernoulli convolutions for all transcendental parameters, Annals of Mathematics 189 (2019), 1001-1011. * [Wal] P. Walters, An introduction to ergodic theory, Graduate Texts in Mathematics, 79. Springer, New York-Berlin, 1982. 
* [You] L. Young, Dimension, entropy and Lyapunov exponents, Ergodic Theory Dynam. Systems 2(1982), no. 1, 109-124.
# Optical evidence of local and itinerant states in Ce- and Yb-heavy-fermion compounds Shin-ichi Kimura1,2,3, Yong Seung Kwon4,1, Cornelius Krellner5, and Jörg Sichelschmidt6,1 1 Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka 565-0871, Japan 2 Department of Physics, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan 3 Department of Electronic Structure, Institute for Molecular Science, Okazaki, Aichi 444-8585, Japan 4 Department of Emerging Materials Science, DGIST, Daegu 711-873, Republic of Korea 5 Kristall- und Materiallabor, Physikalisches Institut, Goethe-Universität Frankfurt, Max-von-Laue Straße 1, D-60438, Frankfurt am Main, Germany 6 Max Planck Institute for Chemical Physics of Solids, Nöthnitzer Straße 40, 01187 Dresden, Germany<EMAIL_ADDRESS> ###### Abstract The electronic properties of cerium (Ce) and ytterbium (Yb) intermetallic compounds may display a more local or more itinerant character depending on the interplay of the exchange interactions among the $4f$ electrons and the Kondo coupling between the $4f$ and conduction electrons. In the more itinerant case, the materials form heavy fermions once the Kondo effect develops at low temperatures. Hence, a temperature variation occurs in the electronic structure that can be traced by investigating the optical conductivity ($\sigma(\omega)$) spectra. Remarkably, the temperature variation in the $\sigma(\omega)$ spectrum is still present in the more localized case, even though the Kondo effect is strongly suppressed. Here, we clarify the local and itinerant character in the electronic structure by investigating the temperature dependence of the $\sigma(\omega)$ spectra of various Ce and Yb compounds with a tetragonal ThCr2Si2-type crystal structure. We explain the temperature change in a unified manner. 
Above temperatures of about 100 K, the temperature dependence of the $\sigma(\omega)$ spectra is mainly due to the electron-phonon interaction, while the temperature dependence below is due to the Kondo effect. ## 1 Introduction Intermetallic compounds with cerium (Ce) and ytterbium (Yb) ions are known as heavy-fermion (HF) materials, in which the effective carrier mass ($m^{*}$) increases up to a thousand times that of free carriers with decreasing temperature owing to the Kondo effect, which develops below a characteristic temperature called the Kondo temperature $T_{\rm K}$ [1]. Since the effective carrier mass can be written as $m^{*}=\hbar^{2}k\,[dE(k)/dk]^{-1}|_{k=k_{\rm F}}$ in terms of the dispersion curve of the conduction band $E(k)$, where $k$ is the wavenumber, the band dispersion is expected to be modified with temperature. Whereas the change of the density of states of the conduction band is contained in optical conductivity ($\sigma(\omega)$) and scanning tunneling spectroscopy measurements [2], the band dispersions $E(k)$ can be directly detected by angle-resolved photoelectron spectroscopy (ARPES), a method now widely used for HF materials [3, 4, 5, 6, 7]. However, since ARPES is very sensitive to solid surfaces owing to the detection of low-energy electrons, the results are sometimes inconsistent with bulk properties such as transport. A $\sigma(\omega)$ spectrum is a photon-in, photon-out measurement and thus reflects the band shape due to direct transitions with momentum transfer $q=0$ using low-energy photons in the IR and THz regions [8]. Although the $4f$ states and conduction bands of Ce and Yb compounds are renormalized by many-body effects, the $\sigma(\omega)$ peaks in the middle-infrared region (“mid-IR peak”) are consistent with the unoccupied (occupied) $4f$ peak positions in Ce (Yb) compounds derived from density functional theory (DFT) band calculations with a self-energy shift from the Fermi level ($E_{\rm F}$) [9, 10]. 
This result suggests that the DFT band calculation is a good approximation for the $\sigma(\omega)$ spectrum, unlike for ARPES of HF systems, which is explained by band calculations including many-body effects within dynamical mean-field theory (DFT+DMFT) [11, 12, 13]. The reason is that photoemission spectroscopy, including ARPES, observes the state with one electron extracted from the ground state, whereas $\sigma(\omega)$ spectra probe the ground state with a matched number of electrons, i.e., the charge is conserved in the material, so the many-body effect after photo-excitation is presumably small. Figure 1: (a, b) Schematic spatial image of localized (a) and itinerant (b) $4f$ states in Ce and Yb intermetallic compounds. In the localized $4f$ states (a), the overlap between $4f$ states and conduction electrons is small, but in the itinerant case (b), the $c$-$f$ hybridization is developed, and the overlap becomes large. (c, d) Schematic band structure of Ce (c) and Yb (d) compounds. The dashed and solid lines indicate the localized and itinerant band structures, respectively. The $c$-$f$ hybridization state appears in the itinerant band structure. The spin-orbit splitting energies $\Delta$ in Ce and Yb compounds are about 250 meV and 1.5 eV, respectively. The split band at the $4f$ level is a simple example of hybridization; in reality, the hybridization intensity depends on the symmetry of the bands. Vertical arrows indicate possible optical transitions. (e) Schematic optical conductivity ($\sigma(\omega)$) spectra from the localized band structure (dashed line) and the itinerant $c$-$f$ hybridized one (solid line). The horizontal scale is linear. The difference in the spectrum between Ce and Yb compounds appears as a double peak with a splitting energy of 250 meV in Ce compounds and a single peak in Yb compounds, owing to the spin-orbit splitting being larger than the IR energy region below 1 eV. 
Figure 1 illustrates how the $\sigma(\omega)$ spectroscopy can be useful in the study of HF systems for characterizing the bulk ground state [14, 15, 16, 17, 18, 19, 20, 21]. A schematic image of localized and itinerant $4f$ electrons in real space is shown in Figs. 1(a,b), and a schematic diagram of the band structure of Ce and Yb compounds is shown in Figs. 1(c,d). The localized $4f$ electrons are bound at the atomic position in the real space with a small spatial overlap with the spatially extended conduction ($c$) electrons (Fig. 1(a)). On the other hand, in the momentum space, the localized $4f$ electrons have a narrow energy width and are spread over the whole momentum space, while the $c$ band is hardly hybridized with the $4f$ electrons (Figs. 1(c,d)). In Ce compounds, the Ce3+ $4f^{1}$ is stable in the localized state, and the $4f$ state and the $c$ band are hardly hybridized (dashed line in Fig. 1(c)), whereas in Yb compounds, the Yb3+ $4f^{13}$ ($4f^{1}_{h}$: $4f$ hole) state is localized (Fig. 1(d)). As a result, the overlap of the wave functions of the $4f$ and $c$ states becomes small, and the optical transition probabilities of $|4f^{n}c^{m}\rangle+\hbar\omega\to|4f^{n+1}c^{m-1}\rangle$ for Ce compounds or $|4f^{n}c^{m}\rangle+\hbar\omega\to|4f^{n-1}c^{m+1}\rangle$ for Yb compounds are weak. Now, when the $4f$ electrons are hybridized with the conduction electrons ($c$-$f$ hybridization) (Fig. 1(b)), the hybridization band expands spatially. In other words, the overlap between the wave functions of $4f$ electrons and conduction electrons becomes large. The optical transition probabilities of $|4f^{n}c^{m}\rangle+\hbar\omega\to|4f^{n\pm 1}c^{m\mp 1}\rangle$ (actually, the optical transitions between the bonding and antibonding orbitals of the $c$-$f$ hybridization band) become larger. 
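The enlarged optical matrix elements go hand in hand with the textbook two-band hybridization picture. As a minimal numerical sketch (a generic mean-field model with assumed toy parameters, not a calculation for any specific compound), hybridizing a conduction band $\epsilon_{c}(k)$ with a flat $f$ level $\epsilon_{f}$ through a matrix element $V$ gives bands $E_{\pm}(k)$ whose direct ($q=0$) gap is smallest, and equal to $2V$, where the bare bands cross:

```python
import math

def hybridized_bands(eps_c, eps_f, V):
    """Eigenvalues E-(k), E+(k) of the 2x2 mean-field Hamiltonian
    [[eps_c, V], [V, eps_f]] at one momentum point."""
    avg = 0.5 * (eps_c + eps_f)
    half = math.sqrt((0.5 * (eps_c - eps_f)) ** 2 + V ** 2)
    return avg - half, avg + half

# assumed toy parameters: cosine conduction band, f level at the Fermi level
W, eps_f, V = 2.0, 0.0, 0.05
ks = [i * math.pi / 400 for i in range(401)]
bands = [hybridized_bands(-W * math.cos(k), eps_f, V) for k in ks]

# direct optical gap E+(k) - E-(k); its minimum over k is 2V, attained where
# eps_c(k) crosses eps_f (here k = pi/2); the lower band flattens around this
# crossing, the band-structure counterpart of the heavy quasiparticle mass
direct_gaps = [hi - lo for lo, hi in bands]
min_gap = min(direct_gaps)
```

In this picture the far-IR shoulder corresponds to transitions across the $2V$ gap, while the mass enhancement comes from the flattened dispersion near the crossing.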
In heavy-fermion systems, $4f$ electrons have a localized character at temperatures higher than $T_{\rm K}$, but, with decreasing temperature, some of the $4f$ states hybridize with the $c$ band ($c$-$f$ hybridization) and behave itinerantly (solid line in Fig. 1(c)). Then, the curvature of the bare $c$-band at $E_{\rm F}$ becomes flat at low temperature owing to the appearance of the $c$-$f$ hybridization band, resulting in heavier quasiparticles. Besides, since the $c$-$f$ hybridization band has an energy gap of twice the hybridization strength ($2V$) at $E_{\rm F}$, where $V$ is the hybridization energy, a shoulder structure of several meV to several tens of meV appears, corresponding to the size of the $c$-$f$ hybridization gap. Furthermore, in HF systems, a characteristic peak called the mid-IR peak appears. The origin of the mid-IR peak is also attributed to the $c$-$f$ hybridization. A schematic diagram of the change in the $\sigma(\omega)$ spectrum due to $c$-$f$ hybridization is shown in Fig. 1(e). In the localized state, at high temperature, a Drude structure from the bare $c$-band appears (dashed line, without the $c$-$f$ hybridization). As the temperature decreases, this simple form changes to a complex shape with a renormalized Drude peak, a far-IR peak due to the transition across the $c$-$f$ hybridization gap, and a mid-IR peak structure due to the development of the $c$-$f$ hybridization. The mid-IR peak is assigned to the optical transition $|4f^{n}c^{m}\rangle+\hbar\omega\to|4f^{n+1}c^{m-1}\rangle$ for Ce compounds and $|4f^{n}c^{m}\rangle+\hbar\omega\to|4f^{n-1}c^{m+1}\rangle$ for Yb compounds. The spin-orbit splitting of the $4f$ states of Ce and Yb ions must also appear in the $\sigma(\omega)$ spectra. The spin-orbit splitting in Ce compounds is about 250 meV, which corresponds to the peak spacing of the double-peak structure in the mid-IR peak. 
On the other hand, in Yb compounds, since the spin-orbit splitting energy of the $4f$ states is about 1.5 eV, only one peak appears in the mid-IR region, while the other peak is expected to appear in the visible-light region. However, it is difficult to observe because it overlaps with other absorption structures. As for the mid-IR peak, Okamura et al. found that the peak energy is proportional to the effective mass of the heavy quasiparticle, which is another feature of heavy-fermion systems [22]. On the other hand, Kimura et al. showed that the changes of the mid-IR peak could be explained by DFT calculations in the itinerant phase [9, 10], even though unoccupied states cannot be fully trusted in a DFT calculation because they cannot be optimized or adjusted during the iteration to self-consistency. Across a quantum critical point (QCP), however, the behavior changes to one that is explained by the local character with an on-site Coulomb repulsion in the $4f$ states [23]. In other words, by investigating the appearance of the mid-IR peak, we can study the localized and itinerant characters of the $4f$ states, as well as the typical electronic states of HF systems.

Table 1: Magnetic ordering temperatures ($T_{\rm N}$, $T_{\rm C}$) and Kondo temperature ($T_{\rm K}$) of the samples used in this paper [30, 31, 32, 33, 34].

Sample | CeAg2Ge2 | CeRh2Ge2 | CeRu2Ge2 | CeCu2Ge2 | CeNi2Ge2 | CeCu2Si2
---|---|---|---|---|---|---
$T_{\rm N}$ | 5–8 K | 15 K | 8.5 K | 4.1 K | – | –
$T_{\rm C}$ | | | 8.0 K | | |
$T_{\rm K}$ | – | – | – | – | several K | 10 K

Sample | | Yb(CoxRh1-x)2Si2 | | YbIr2Si2
---|---|---|---|---
 | $x=1$ | $x=0.27$ | $x=0$ |
$T_{\rm N}$ | 1.65 K | 1.3 K | 0.072 K | –
$T_{\rm K}$ | – | 7 K | 25 K | 40 K

Many $\sigma(\omega)$ studies have shown that this mid-IR peak is related to the $c$-$f$ hybridization [27, 25, 26, 27, 28, 29]. 
However, the relationship between the temperature at which the mid-IR peak forms and $T_{\rm K}$ has not been investigated systematically. In this study, we investigated the temperature dependence of the $\sigma(\omega)$ spectra in the mid-IR region across $T_{\rm K}$ in HF materials with the same tetragonal ThCr2Si2-type crystal structure: the Ce compounds Ce$M_{2}$Ge2 ($M$ = Ag, Rh, Ru, Cu, Ni) and CeCu2Si2, and the Yb compounds Yb(CoxRh1-x)2Si2 ($x=0,0.27,1$) and YbIr2Si2. The magnetic ordering and Kondo temperatures of these materials are listed in Table 1 [30, 31, 32, 33, 34]. These materials cover the overall change in physical properties from the local to the itinerant character in the Doniach phase diagram [35]. As a result, no mid-IR peaks were observed in the materials with strongly localized $4f$ levels (CeAg2Ge2 and YbCo2Si2 [36, 37]); instead, peaks were observed corresponding to absorption into $4f$ levels shifted to higher energy by the on-site Coulomb interaction $U$ acting on the $4f$ electrons. The slight temperature dependence of the $\sigma(\omega)$ structure of these materials is concluded to originate from the thermal effect of the electron-phonon interaction. On the other hand, for materials with strong itinerancy, the mid-IR peaks are visible even at room temperature, much higher than $T_{\rm K}$, and the temperature at which they begin to appear does not scale with $T_{\rm K}$. This result suggests that the mid-IR peak reflects the growth of the $c$-$f$ hybridization intensity with decreasing temperature due to the Kondo effect, but that the formation of the $c$-$f$ hybridized band is not directly tied to $T_{\rm K}$.
## 2 Method

Polycrystalline samples of Ce$M_{2}$Ge2 ($M=$ Ag, Rh, Ru, Cu, Ni) and single-crystalline samples of Yb(CoxRh1-x)2Si2 ($x=0,0.27$) were synthesized by the tetra-arc melting and flux methods, respectively, and the surfaces were well polished using 0.3 $\mu$m grain-size Al2O3 lapping film sheets to obtain shiny surfaces for the optical reflectivity ($R(\omega)$) measurements. Near-normal-incidence $R(\omega)$ spectra were accumulated over a wide photon-energy range of 2 meV – 30 eV using a Martin-Puplett-type FTIR for the THz region ($\hbar\omega=2-30$ meV, FARIS-1, JASCO Co., Ltd.), a Michelson-type FTIR for the IR region (20 meV – 1.5 eV, FT/IR-6100, JASCO Co., Ltd.), and beamline 7B of the synchrotron radiation facility UVSOR (1.2 – 30 eV) to ensure an accurate Kramers-Kronig analysis (KKA) [20]. To obtain $\sigma(\omega)$ via KKA of $R(\omega)$, the spectra were extrapolated using the Hagen-Rubens function below the lowest energy measured, and the free-electron approximation $R(\omega)\propto\omega^{-4}$ above the highest energy [38]. $\sigma(\omega)$ spectra of Ce$M_{2}$Ge2 at 8–10 K have already been reported [23]. In this paper, we report the temperature-dependent $\sigma(\omega)$ spectra of Ce$M_{2}$Ge2 and Yb(CoxRh1-x)2Si2 ($x=0,0.27$) and compare them with those of CeCu2Si2 [39], YbRh2Si2 [40], and YbIr2Si2 [41] to check the relation of the Kondo temperature to the temperature-dependent spectral change.

## 3 Results

### 3.1 Ce compounds

Figure 2: Temperature-dependent optical reflectivity ($R(\omega)$) spectra (a) and optical conductivity ($\sigma(\omega)$) spectra (b) of Ce$M_{2}$Ge2 ($M=$ Ag, Rh, Ru, Cu, Ni) and CeCu2Si2. The spectrum of CeCu2Si2 was taken from Ref. [39]. The baselines of the $R(\omega)$ and $\sigma(\omega)$ spectra are offset by $0.15$ and $4.5\times 10^{3}~{}\Omega^{-1}{\rm cm}^{-1}$, respectively. Solid triangles and squares indicate the mid-IR peaks shown in Fig.
1(e) and the optical transitions to the unoccupied $4f$ states with the on-site Coulomb interaction ($U$-activated peak), respectively [23]. Figure 2 shows the temperature dependence of the $R(\omega)$ and $\sigma(\omega)$ spectra of the Ce compounds. The $R(\omega)$ spectra (Fig. 2(a)) change with temperature, more or less clearly, for all the samples shown in this study. It should be noted that, since it is difficult to compare the $R(\omega)$ spectrum directly with the electronic state, we use the $\sigma(\omega)$ spectrum for the detailed discussion. Among these materials, CeCu2Si2 and CeNi2Ge2, which host heavy electrons, and CeCu2Ge2, which orders antiferromagnetically and is located very close to a QCP, show a slight double-peak structure (marked by solid triangles) in the mid-IR region. For the more $4f$-localized CeRu2Ge2, CeRh2Ge2, and CeAg2Ge2, however, the double-peak structure is not visible; instead, a broad peak appears and shifts to higher energies as the localization increases (marked by solid squares) [23]. A clear temperature dependence is also observed in the $\sigma(\omega)$ spectra of all materials (Fig. 2(b)). In particular, the mid-IR double-peak intensity increases with decreasing temperature and shifts to higher energy with increasing itinerant character in CeCu2Ge2, CeNi2Ge2, and CeCu2Si2 (solid triangles), and the temperature dependence of $\sigma(\omega)$ becomes more pronounced at lower temperatures. On the other hand, the peak structure marked by solid squares, which indicates localization, varies only in peak width, with little change in peak intensity.

### 3.2 Yb compounds

Figure 3: Temperature-dependent optical reflectivity ($R(\omega)$) spectra (a) and optical conductivity ($\sigma(\omega)$) spectra (b) of Yb(CoxRh1-x)2Si2 ($x=0,0.27,1$) and YbIr2Si2. The spectra of YbRh2Si2 and YbIr2Si2 were taken from Refs. [40, 41].
The baselines of the $R(\omega)$ and $\sigma(\omega)$ spectra are offset by $0.17$ and $6.0\times 10^{3}~{}\Omega^{-1}{\rm cm}^{-1}$, respectively. Solid triangles and squares indicate the mid-IR peaks shown in Fig. 1(e) and the optical transitions from the occupied $4f$ states with the on-site Coulomb interaction ($U$-activated peak), respectively. Figure 3 shows the temperature dependence of the $R(\omega)$ and $\sigma(\omega)$ spectra of the Yb compounds. As for the Ce compounds, YbIr2Si2 and YbRh2Si2, which are in the itinerant phase and very close to the QCP, respectively, show a clear mid-IR peak (marked by solid triangles), indicating a robust $c$-$f$ hybridization. A small mid-IR peak also appears in Yb(CoxRh1-x)2Si2 ($x=0.27$), located in the slightly localized region, which suggests that the $c$-$f$ hybridization also works on the slightly localized side of the QCP. This is consistent with the behavior of the Ce compounds shown above. In YbCo2Si2, however, which is located further toward the localized side, the mid-IR peak disappears. Instead, an on-site-$U$-activated peak (marked by a solid square) appears, as also observed in the localized Ce compounds, and the temperature change in the $\sigma(\omega)$ spectra becomes more pronounced. The mid-IR peak of YbIr2Si2 and Yb(CoxRh1-x)2Si2 ($x=0,0.27$) (solid triangles) develops with decreasing temperature, while the on-site-$U$-activated peak of YbCo2Si2 (solid square) only becomes narrower, with no change in peak intensity. This behavior is the same as in the Ce compounds and suggests that the mid-IR peak is related to the development of the Kondo effect at low temperatures.

## 4 Discussion

Figure 4: Relative temperature dependence of the change of the center of gravity ($\Delta\langle\omega\rangle$) (a, c) and of the spectral weight ($\Delta SW$) (b, d) of the Ce (a, b) and Yb (c, d) compounds. These values were normalized to their room-temperature values.
It is commonly observed in Ce and Yb HF compounds that the mid-IR peak (solid triangles in Figs. 2 and 3) due to $c$-$f$ hybridization develops toward low temperatures in itinerant materials. In contrast, the on-site-$U$-activated peak (solid squares) in localized materials does not change in intensity on lowering the temperature but instead shows a decreasing line width. Here, we try to separate the effect of the $c$-$f$ hybridization from other effects in these results. In general, the strength of the electron-phonon interaction changes with temperature. When it does, the peak position in the spectrum changes little while the width does change, because the peak position and the width originate from the $c$-$f$ hybridization and the electron-phonon interaction, respectively. An unchanged peak position means that the center of gravity ($\Delta\langle\omega\rangle$) of the spectrum does not change with temperature, whereas a changing line width appears as a change of the spectral weight ($\Delta SW$). It should be noted that Drude peaks also become sharp with decreasing temperature, which causes a low-energy shift of both $\Delta\langle\omega\rangle$ and $\Delta SW$. However, this effect is confined to the lowest-energy region of the $\sigma(\omega)$ spectra, where the renormalized Drude peak appears, so its influence on $\Delta\langle\omega\rangle$ and $\Delta SW$ is small.
To check these expectations, $\Delta\langle\omega\rangle$ and $\Delta SW$, normalized to their room-temperature values, were evaluated using the following expressions: $\Delta\langle\omega\rangle=\left[\langle\omega(T)\rangle-\langle\omega({\rm 300~{}K})\rangle\right]/\langle\omega({\rm 300~{}K})\rangle,$ where $\langle\omega(T)\rangle=\int_{\omega_{1}}^{\omega_{2}}\sigma(\omega,T)\omega d\omega/\int_{\omega_{1}}^{\omega_{2}}\sigma(\omega,T)d\omega,$ and $\Delta SW=\int_{\omega_{1}}^{\omega_{2}}|\sigma(\omega,T)-\sigma(\omega,{\rm 300~{}K})|/\sigma(\omega,{\rm 300~{}K})d\omega.$ The integration range was set to $\omega_{1}=0~{}{\rm eV}\leq\omega\leq\omega_{2}=0.8~{}{\rm eV}$, above which the spectral change in the lower-energy region has almost recovered. The obtained temperature dependence of these parameters is shown in Fig. 4. First, we discuss the behavior of the strongly localized CeAg2Ge2, CeRh2Ge2, and YbCo2Si2. According to Figs. 4(a) and 4(c), $\Delta\langle\omega\rangle$ of these materials is almost zero or negative at all temperatures, which suggests that the electron-phonon interaction is the primary origin of the temperature variation. The negative $\Delta\langle\omega\rangle$ suggests that the Drude peak becomes narrow with decreasing temperature, indicating an increasing relaxation time [38]. On the other hand, $\Delta SW$, shown in Figs. 4(b) and 4(d), is temperature dependent at $T\geq 100~{}{\rm K}$ ($\equiv T_{\rm el\text{-}ph}$) and almost constant at $T\leq T_{\rm el\text{-}ph}$. Although this temperature of 100 K is not the Debye temperature [42], the observation of a similar temperature dependence in La- or Lu-based reference materials [43] shows that it is not related to $4f$ magnetism. Therefore, the temperature variation in these materials originates mainly from the electron-phonon interaction and not from the Kondo effect.
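As a numerical illustration, the two diagnostics defined above can be evaluated on synthetic spectra. The Lorentzian line shapes below are assumptions chosen only to mimic a sharpening, shifting mid-IR peak; they are not measured data:

```python
import numpy as np

# Hedged numerical sketch (synthetic Lorentzian spectra, not measured data)
# of the two diagnostics defined in the text: the normalized shift of the
# spectral center of gravity, Delta<w>, and the spectral-weight change, DSW.
w = np.linspace(1e-3, 0.8, 800)           # photon energy grid (eV)

def trapz(y, x):
    # simple trapezoidal integration (independent of NumPy version)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def lorentz(w, w0, gamma, a):
    return a * gamma / ((w - w0) ** 2 + gamma ** 2)

sigma_300K = lorentz(w, 0.25, 0.10, 1.0)  # broad mid-IR peak at 300 K
sigma_10K = lorentz(w, 0.30, 0.05, 1.5)   # sharper, stronger, shifted peak

def center_of_gravity(sigma):
    return trapz(sigma * w, w) / trapz(sigma, w)

def d_mean_omega(sigma_T, sigma_ref):
    c_T, c_ref = center_of_gravity(sigma_T), center_of_gravity(sigma_ref)
    return (c_T - c_ref) / c_ref

def d_sw(sigma_T, sigma_ref):
    return trapz(np.abs(sigma_T - sigma_ref) / sigma_ref, w)

print(d_mean_omega(sigma_10K, sigma_300K))  # > 0: high-energy shift
print(d_sw(sigma_10K, sigma_300K))          # > 0: spectral weight rearranged
```

A developing mid-IR peak yields a positive $\Delta\langle\omega\rangle$ together with a growing $\Delta SW$, whereas pure line-width narrowing changes $\Delta SW$ with little effect on $\Delta\langle\omega\rangle$, which is the separation criterion used in the text.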
On the other hand, for CeCu2Si2, CeNi2Ge2, YbIr2Si2, and YbRh2Si2, which are in the itinerant regime or in the localized regime close to the QCP, both $\Delta\langle\omega\rangle$ and $\Delta SW$ increase with decreasing temperature even below $T_{\rm el\text{-}ph}$. This result implies that the $c$-$f$ hybridization develops at low temperatures. It should be noted that above $T_{\rm el\text{-}ph}$ the effect of the electron-phonon interaction must also be taken into account and cannot be separated from the $c$-$f$ hybridization effect. In these itinerant materials, the temperature dependences of both $\Delta\langle\omega\rangle$ and $\Delta SW$ start at or above room temperature, which appears to be independent of $T_{\rm K}$. A similar thermal effect has been reported not only in other $\sigma(\omega)$ studies [26, 27, 29] but also in ARPES results on CeNi2Ge2 [7], YbRh2Si2 [44], and CeCoIn5 [11]. Therefore, the change of the electronic structure due to the Kondo effect is considered to start at a temperature much higher than $T_{\rm K}$. In CeRu2Ge2, located between these two groups of materials, $\Delta\langle\omega\rangle$ increases toward lower temperatures, representing the development of the $c$-$f$ hybridization discussed above, while $\Delta SW$ remains nearly constant below $T_{\rm el\text{-}ph}$, probably because of the weak $c$-$f$ hybridization strength. A mid-IR double-peak structure is not visible even at low temperature, but an on-site-$U$-activated peak is (Fig. 2, solid squares). On the other hand, for Yb(CoxRh1-x)2Si2 ($x=0.27$), $\Delta\langle\omega\rangle$ is almost constant (or rather decreasing) with decreasing temperature, while $\Delta SW$ is temperature dependent only above $T_{\rm el\text{-}ph}$, similar to YbCo2Si2.
This result implies that Yb(CoxRh1-x)2Si2 ($x=0.27$) remains essentially localized, even though a weak mid-IR peak is visible at low temperature. Thus, for the Ce and Yb compounds, the behaviors of $\Delta\langle\omega\rangle$ and $\Delta SW$ roughly correspond to the localized and itinerant electronic structures across the QCP. Very close to the QCP, the behaviors of $\Delta\langle\omega\rangle$ and $\Delta SW$ of the Ce and Yb compounds are similar to each other, but they gradually change across the QCP, suggesting a competition between the local and itinerant characters.

## 5 Summary

In conclusion, we investigated the transition from the local to the itinerant state of Ce- and Yb-based heavy-fermion compounds with the same tetragonal ThCr2Si2-type crystal structure using the temperature dependence of the $\sigma(\omega)$ spectrum. The itinerant electronic structure appears not only in the mid-IR peak but also in the temperature dependence of the center of gravity of the IR $\sigma(\omega)$ spectra and of the change of the spectral weight. The spectral-weight change also indicates that the electron-phonon interaction dominates at high temperatures in all materials.

## Acknowledgements

We would like to acknowledge Hiroko Yokoyama for her help in the IR experiments, the UVSOR staff members for the synchrotron radiation experiments, Christoph Klingner for providing YbCo2Si2 samples, and Christoph Geibel for his fruitful comments. Part of this work was supported by the Use-of-UVSOR Facility Program (BL7B) of the Institute for Molecular Science. J. S. and Y. S. K. acknowledge support from the International Collaboration Program of Osaka University.

## References

* [1] Hewson A C 1993 The Kondo Problem to Heavy Fermions (Cambridge University Press, Cambridge)
* [2] Kirchner S, Paschen S, Chen Q, Wirth S, Feng D, Thompson J D and Si Q 2020 Rev. Mod.
Phys. 92 011002
* [3] Im H J, Ito T, Kim H-D, Kimura S, Lee K E, Hong J B, Kwon Y S, Yasui A and Yamagami H 2008 Phys. Rev. Lett. 100 176402
* [4] Koitzsch A, Borisenko S V, Inosov D, Geck J, Zabolotnyy V B, Shiozawa H, Knupfer M, Fink J, Büchner B, Bauer E D, Sarrao J L and Follath R 2008 Phys. Rev. B 77 155128
* [5] Okane T, Ohkochi T, Takeda Y, Fujimori S-i, Yasui A, Saitoh Y, Yamagami H, Fujimori A, Matsumoto Y, Sugi M, Kimura N, Komatsubara T and Aoki H 2009 Phys. Rev. Lett. 102 216401
* [6] Vyalikh D V, Danzenbächer S, Kucherenko Yu, Kummer K, Krellner C, Geibel C, Holder M G, Kim T K, Laubschat C, Shi M, Patthey L, Follath R and Molodtsov S L 2010 Phys. Rev. Lett. 105 237601
* [7] Nakatani Y, Aratani H, Fujiwara H, Mori T, Tsuruta A, Tachibana S, Yamaguchi T, Kiss T, Yamasaki A, Yasui A, Yamagami H, Miyawaki J, Ebihara T, Saitoh Y and Sekiyama A 2018 Phys. Rev. B 97 115160
* [8] Degiorgi L Rev. Mod. Phys. 71 687
* [9] Kimura S, Iizuka T and Kwon Y S 2009 J. Phys. Soc. Japan 78 013710
* [10] Kimura S 2009 Phys. Rev. B 80 073103
* [11] Jang S, Denlinger J D, Allen J W, Zapf V S, Maple M B, Kim J N, Jang B G and Shim J H 2020 PNAS 117 23467
* [12] Zhang Y, Lu H, Zhu X, Tan S, Liu Q, Chen Q, Feng W, Xie D, Luo L, Liu Y, Song H, Zhang Z and Lai X 2016 Sci. Rep. 6 33613
* [13] Choi H C, Haule K, Kotliar G, Min B I and Shim J H 2013 Phys. Rev. B 88 125111
* [14] Kimura S, Iizuka T, Miyazaki H, Irizawa A, Muro Y and Takabatake T 2011 Phys. Rev. Lett. 106 056404
* [15] Kimura S, Iizuka T, Miyazaki H, Hajiri T, Matsunami M, Mori T, Irizawa A, Muro Y, Kajino J and Takabatake T 2011 Phys. Rev. B 84 165125
* [16] Kimura S, Muro Y and Takabatake T 2011 J. Phys. Soc. Japan 80 033702
* [17] Iizuka T, Mizuno T, Min B H, Kwon Y S and Kimura S 2012 J. Phys. Soc. Japan 81 043703
* [18] Kimura S, Arai F and Ikezawa M 2000 J. Phys. Soc. Japan 69 3451
* [19] Okamura H, Michizawa T, Nanba T, Kimura S, Iga F and Takabatake T 2005 J. Phys. Soc. Japan 74 1954
* [20] Kimura S and Okamura H 2013 J.
Phys. Soc. Japan 82 021004
* [21] Kimura S, Takao H, Kawabata J, Yamada Y and Takabatake T 2016 J. Phys. Soc. Japan 85 123705
* [22] Okamura H, Watanabe T, Matsunami M, Nishihara T, Tsujii N, Ebihara T, Sugawara H, Sato H, Ōnuki Y, Isikawa Y, Takabatake T and Nanba T 2007 J. Phys. Soc. Japan 76 023703
* [23] Kimura S, Kwon Y S, Matsumoto Y, Aoki H and Sakai O 2016 J. Phys. Soc. Japan 85 083702
* [24] Singley E J, Basov D N, Bauer E D and Maple M B 2002 Phys. Rev. B 65 161101(R)
* [25] Okamura H, Michizawa T, Nanba T and Ebihara T 2004 J. Phys. Soc. Japan 73 2045
* [26] Mena F P, van der Marel D and Sarrao J L 2005 Phys. Rev. B 72 045119
* [27] Singley E J, Basov D N, Bauer E D and Maple M B 2002 Phys. Rev. B 65 161101(R)
* [28] Chen R Y and Wang N L 2016 Rep. Prog. Phys. 79 064502
* [29] Bachar N, Stricker D, Muleady S, Wang K, Mydosh J A, Huang Y K and van der Marel D 2016 Phys. Rev. B 94 235101
* [30] Endstra T, Nieuwenhuys G J and Mydosh J A 1993 Phys. Rev. B 48 9595
* [31] Matsumoto Y, Sugi M, Aoki K, Shimizu Y, Kimura N, Komatsubara T, Aoki H, Kimata M, Terashima T and Uji S 2011 J. Phys. Soc. Japan 80 074715
* [32] Gegenwart P, Custers J, Geibel C, Neumaier K, Tayama T, Tenya K, Trovarelli O and Steglich F 2002 Phys. Rev. Lett. 89 056402
* [33] Hossain Z, Geibel C, Weickert F, Radu T, Tokiwa Y, Jeevan H, Gegenwart P and Steglich F 2005 Phys. Rev. B 72 094411
* [34] Klingner C, Krellner C, Brando M, Geibel C, Steglich F, Vyalikh D V, Kummer K, Danzenbächer S, Molodtsov S L, Laubschat C, Kinoshita T, Kato Y and Muro T 2011 Phys. Rev. B 83 144405
* [35] Doniach S 1977 Physica 91B 231
* [36] Gruner T, Sichelschmidt J, Klingner C, Krellner C, Geibel C and Steglich F 2012 Phys. Rev. B 85 035119
* [37] Güttler M, Kummer K, Patil S, Höppner M, Hannaske A, Danzenbächer S, Shi M, Radovic M, Rienks E, Laubschat C, Geibel C and Vyalikh D V 2014 Phys. Rev. B 90 195138
* [38] Dressel M and Grüner G 2002 Electrodynamics of Solids (Cambridge University Press, Cambridge)
* [39] Sichelschmidt J, Herzog A, Jeevan H S, Geibel C, Steglich F, Iizuka T and Kimura S 2013 J. Phys.: Condens. Matter 25 065602
* [40] Kimura S, Sichelschmidt J, Ferstl J, Krellner C, Geibel C and Steglich F 2006 Phys. Rev. B 74 132408
* [41] Iizuka T, Kimura S, Herzog A, Sichelschmidt J, Krellner C, Geibel C and Steglich F 2010 J. Phys. Soc. Japan 79 123703
* [42] Hartmann S, Oeschler N, Krellner C, Geibel C and Steglich F 2009 150 042049
* [43] Kimura S, Nanba T, Kunii S and Kasuya T 1994 Phys. Rev. B 50 1406
* [44] Kummer K, Patil S, Chikina A, Güttler M, Höppner M, Generalov A, Danzenbächer S, Seiro S, Hannaske A, Krellner C, Kucherenko Y, Shi M, Radovic M, Rienks E, Zwicknagl G, Matho K, Allen J W, Laubschat C, Geibel C and Vyalikh D V 2015 Phys. Rev. X 5 011028
# Fast Sequence Generation with Multi-Agent Reinforcement Learning

Longteng Guo, Jing Liu, Xinxin Zhu, and Hanqing Lu

Longteng Guo, Jing Liu, Xinxin Zhu and Hanqing Lu are with the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, and also with the School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China. E-mail: {longteng.guo, jliu, xinxin.zhu<EMAIL_ADDRESS>(Corresponding author: Jing Liu) This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

###### Abstract

Autoregressive sequence Generation models have achieved state-of-the-art performance in areas like machine translation and image captioning. These models are autoregressive in that they generate each word by conditioning on previously generated words, which leads to heavy latency during inference. Recently, non-autoregressive decoding has been proposed in machine translation to speed up the inference time by generating all words in parallel. Typically, these models use the word-level cross-entropy loss to optimize each word independently. However, such a learning process fails to consider the sentence-level consistency, thus resulting in inferior generation quality of these non-autoregressive models. In this paper, we propose a simple and efficient model for Non-Autoregressive sequence Generation (NAG) with a novel training paradigm: Counterfactuals-critical Multi-Agent Learning (CMAL). CMAL formulates NAG as a multi-agent reinforcement learning system where element positions in the target sequence are viewed as agents that learn to cooperatively maximize a sentence-level reward. On the MSCOCO image captioning benchmark, our NAG method achieves a performance comparable to state-of-the-art autoregressive models, while bringing a 13.9$\times$ decoding speedup.
On the WMT14 EN-DE machine translation dataset, our method outperforms the cross-entropy-trained baseline by $6.0$ BLEU points while achieving the greatest decoding speedup of 17.46$\times$.

###### Index Terms:

Non-autoregressive sequence generation, image captioning, machine translation, multi-agent reinforcement learning

## 1 Introduction

Sequence generation tasks aim at generating a target sequence conditioned on a source input. Standard sequence generation tasks include neural machine translation [1, 2], image captioning [3], and automatic speech recognition [4], _etc_., where the targets are often sequences of words/tokens and the source inputs can be sentences, images, speeches, _etc_. Recent sequence generation models typically follow the encoder-decoder paradigm, where an encoder encodes the input into vectorial representations and a sequence decoder, _e.g_., recurrent neural networks (RNNs) or Transformer [2], generates a sentence given the outputs of the encoder. Most of these models use autoregressive decoders that require sequential execution: they generate each word conditioned on the sequence of previously generated words. However, this process is not parallelizable and thus results in high inference latency, which is sometimes unaffordable for real-time industrial applications. Recently, non-autoregressive decoding was proposed in neural machine translation [5] to significantly improve the inference speed by predicting all target words in parallel. A non-autoregressive model takes basically the same structure as the autoregressive Transformer network [2]. But instead of conditioning the decoder on the previously generated words as in autoregressive models, it generates all words independently, as illustrated in Figure 1. Such models are typically optimized with the cross-entropy (XE) losses of individual words.

Figure 1: Given a source input, _e.g_.
an image, an Autoregressive sequence Generation (AG) model generates a target sentence word by word, while a Non-Autoregressive sequence Generation (NAG) model outputs all words in parallel. However, existing non-autoregressive models still have a large gap in generation quality compared to their autoregressive counterparts, mainly due to their severe decoding inconsistency problem. For example, in Figure 1, the caption generated by the non-autoregressive model has repeated words and incomplete content. A major reason for such performance degradation is that the word-level XE-based training objective cannot guarantee sentence-level consistency. That is, the XE loss encourages the model to generate the golden word in each position but does not consider the global consistency of the whole sentence. Moreover, although the models are trained with the XE loss, at test time they are typically evaluated using non-differentiable metrics such as CIDEr [6] or BLEU [7]. This inconsistency between training-time and test-time objectives is undesirable and can cause suboptimal performance. To simultaneously reduce the inference time and improve the decoding consistency of sequence generation models, in this paper, we propose a Non-Autoregressive sequence Generation (NAG) method with a novel training paradigm: Counterfactuals-critical Multi-Agent Learning (CMAL). We formulate NAG as a multi-agent reinforcement learning system, where element positions in the target sequence are viewed as agents that act cooperatively to maximize a team reward reflecting the quality of the whole sentence. CMAL optimizes the model with the policy gradient algorithm and introduces a counterfactual baseline to address the multi-agent credit assignment [8] problem. We further boost the generation quality by augmenting the training data with massive unlabeled inputs. In the following, we introduce each component.
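The contrast between the two decoding schemes described above can be sketched with a toy example. The `step` function below stands in for a decoder forward pass, and its bigram table is entirely made up; the point is only that sequential decoding needs one call per word while parallel decoding predicts every position independently, which can produce the repeated words mentioned above:

```python
import numpy as np

# Toy contrast (illustrative only) between autoregressive and
# non-autoregressive decoding.  `step` stands in for a decoder forward pass;
# its deterministic bigram table is an assumption, not a trained model.
vocab = ["<bos>", "a", "dog", "runs"]
bigram = {"<bos>": "a", "a": "dog", "dog": "runs", "runs": "runs"}

def step(prev_word):
    # fake decoder: next-word logits determined by the previous word
    logits = np.full(len(vocab), -1.0)
    logits[vocab.index(bigram[prev_word])] = 1.0
    return logits

def decode_autoregressive(length):
    out, prev = [], "<bos>"
    for _ in range(length):              # `length` sequential decoder calls
        prev = vocab[int(np.argmax(step(prev)))]
        out.append(prev)
    return out

def decode_non_autoregressive(length):
    # one parallel pass: every position predicts independently,
    # conditioned only on the source context (here: <bos>)
    logits = np.stack([step("<bos>") for _ in range(length)])
    return [vocab[i] for i in np.argmax(logits, axis=-1)]

print(decode_autoregressive(3))       # ['a', 'dog', 'runs']
print(decode_non_autoregressive(3))   # ['a', 'a', 'a'] -- repeated words
```

The repeated output of the parallel decoder is exactly the sentence-level inconsistency that a word-level loss cannot penalize, motivating the sentence-level reward introduced next.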
Specifically, our NAG model’s architecture is based on Transformer [2] with the minimum necessary modifications. We consider the NAG decoder as a cooperative multi-agent reinforcement learning (MARL) [9] system. In this system, each agent observes the environment (the encoded visual context) and communicates with the other agents through the self-attention layers in Transformer. After several rounds of environment observation and agent communication, the agents reach an agreement about the content planning of the target sentence and separately take actions to predict the words in their corresponding positions, which forms a joint action. The agents then receive a common sentence-level team reward, for instance the BLEU [7] score of the generated sentence. The goal of the agents is to maximize the team reward, and policy gradient is used to update their parameters. This training paradigm has two benefits. First, the non-differentiable test metrics, _e.g_., BLEU [7] or CIDEr [6], can be directly optimized. Second, by optimizing the agents toward a common sentence-level objective, the decoding consistency can be substantially improved. A crucial challenge in the above MARL training paradigm is multi-agent credit assignment [8]: the shared team reward makes it difficult for each agent to deduce its own contribution to the team’s success. This could impede multi-agent learning and lead to decoding inconsistency. To address this challenge, we compute an agent-specific advantage function that compares the team reward for the joint action against an agent-wise counterfactual baseline [10, 11]. The individual counterfactual baseline of an agent is the expected reward when marginalizing out that agent’s action while keeping the other agents’ actions fixed. As a result, only actions from an agent that outperform the counterfactual baseline are given positive weight, and inferior actions are suppressed.
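A minimal sketch of this individual counterfactual baseline, with made-up per-position policies and a toy matching-based reward standing in for BLEU/CIDEr, might look as follows:

```python
import numpy as np

# Hedged toy sketch (not the paper's full model): three "agents" (sentence
# positions) each hold a categorical policy over a tiny vocabulary.  The team
# reward scores the whole joint action; the individual counterfactual
# baseline of agent i marginalizes out agent i's own action under its policy
# while freezing the other agents' actions.  All numbers are made up.
vocab = ["a", "cat", "sits"]
target = ["a", "cat", "sits"]

# toy per-position policies (rows: agents, cols: vocabulary entries)
probs = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7]])

def team_reward(actions):
    # sentence-level reward: fraction of positions matching the target
    return sum(vocab[a] == t for a, t in zip(actions, target)) / len(target)

joint = [int(np.argmax(p)) for p in probs]   # greedy joint action
R = team_reward(joint)                       # shared team reward

advantages = []
for i in range(len(joint)):
    b_i = 0.0                                # counterfactual baseline of agent i
    for a_prime, p in enumerate(probs[i]):
        cf = list(joint)
        cf[i] = a_prime                      # replace only agent i's action
        b_i += p * team_reward(cf)
    advantages.append(R - b_i)

# each agent's policy-gradient weight is advantages[i] * grad log pi_i
print(R, advantages)   # 1.0, approximately [0.133, 0.100, 0.100]
```

Each advantage isolates how much agent $i$'s chosen word beat its own expected contribution, so credit is no longer shared uniformly across positions.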
To achieve more precise credit assignment among agents, we further introduce a compositional counterfactual baseline as a supplement to the individual counterfactual baseline. In the compositional counterfactual baseline, we treat two neighboring agents as a compositional agent to take into account the joint influence of their actions on the team reward. The counterfactual baseline fully exploits the distinctive features of the multi-agent NAG system, _i.e_., extremely short episodes and a large action space, and is thereby simple and easy to implement. To further boost generation quality, we propose to augment the training data with massive unlabeled inputs. These unlabeled data can be easily acquired without costly human annotation. Specifically, we use an autoregressive teacher model to produce target sequences for these unlabeled inputs, which are then used as pseudo paired data for training the non-autoregressive models. Compared to previous non-autoregressive models that often rely on specially designed submodules [12, 13], our NAG method adds almost no extra components to the Transformer architecture. Owing to this structural simplicity, our model can maximize the decoding speedup. We evaluate the proposed non-autoregressive sequence generation method on two distinctive sequence generation tasks, _i.e_., image captioning and neural machine translation. On the MSCOCO image captioning benchmark, our method brings a $13.9\times$ decoding speedup relative to the autoregressive counterpart, while achieving performance comparable to state-of-the-art autoregressive models. On the more challenging WMT14 EN-DE machine translation dataset, our method significantly outperforms the cross-entropy-trained baseline by $6.0$ BLEU points while achieving the greatest decoding speedup of $17.46\times$.
To summarize, the main contributions of this paper are four-fold:

* • We propose a non-autoregressive sequence generation method with a novel training paradigm: Counterfactuals-critical Multi-Agent Learning. To the best of our knowledge, we are the first to formulate non-autoregressive sequence generation as a cooperative multi-agent learning problem.
* • We design individual and compositional counterfactual baselines to disentangle the contribution of each agent from the team reward.
* • We propose to utilize massive unlabeled data to boost the performance of non-autoregressive models.
* • Extensive experiments on image captioning and machine translation tasks show that our method significantly outperforms previous cross-entropy-trained non-autoregressive models while bringing the greatest decoding speedup.

## 2 Related Work

### 2.1 Autoregressive Sequence Generation

Sequence generation in deep learning, _e.g_., neural machine translation [14, 1] and image captioning [15], has largely focused on autoregressive modeling. These autoregressive models often follow the encoder-decoder framework with different choices of architectures such as RNNs [16, 17], convolutional neural networks (CNNs) [18], and full attention networks without recurrence and convolution (Transformer) [2]. RNN-based models have a sequential architecture that prevents them from being parallelized during both training and testing. Although CNN- and Transformer-based models avoid recurrence at training time with highly parallelized architectures, the use of autoregressive decoding means they still have to generate sequences token by token during inference. Transformer has refreshed state-of-the-art performance on several sequence generation tasks, including machine translation [2] and image captioning [19, 20].
### 2.2 Non-Autoregressive Sequence Generation

Non-Autoregressive sequence Generation (NAG) [5] has recently been proposed to reduce the generation latency by parallelizing the decoding process. A basic NAG model takes the same encoder-decoder architecture as Transformer. Non-autoregressive models often perform considerably worse than their autoregressive counterparts. Several methods have been proposed to narrow the performance gap between autoregressive and non-autoregressive models, including knowledge distillation [5], auxiliary regularization terms [21], well-designed decoder inputs [22], and iterative refinement [12, 23], _etc_. Among them, [23] and [24] are the two published works on non-autoregressive image captioning. However, these models typically use the conventional word-level cross-entropy loss to optimize each word independently, which fails to consider the sentence-level consistency and results in poor generation quality. Unlike these works, we propose to apply multi-agent reinforcement learning in non-autoregressive models to optimize a sentence-level objective.

### 2.3 Multi-Agent Reinforcement Learning (MARL)

In a reinforcement learning system, agents take actions in an environment in order to maximize the cumulative reward. Some works on autoregressive sequence generation incorporated single-agent reinforcement learning to tackle the exposure bias problem [14] and to optimize non-differentiable score functions. [14] introduced the REINFORCE [25] algorithm to sequence training with RNNs. [26] lowered the variance of the policy gradient by introducing a baseline, namely the reward of the sequence generated by the inference algorithm. [27] proposed actor-critic algorithms for sequence prediction. However, applying reinforcement learning to autoregressive machine translation has only reported marginal improvements [1, 28, 29].
While previous works mainly focus on autoregressive models with single-agent reinforcement learning, we study non-autoregressive models and are the first to formulate sequence generation models as a multi-agent reinforcement learning system. Multi-Agent Reinforcement Learning (MARL) [9] considers a system of multiple agents that interact within a common environment. It is often designed to deal with complex reinforcement learning problems that require decentralized policies, where each agent selects its own action. Compared to well-studied MARL game tasks, our NAG model has a much larger action space and much shorter episodes. Our counterfactual baseline draws its intuition from [10], which requires training an additional critic network to estimate the Q value for each possible action. Learning such a critic network increases the model complexity and is not practical due to the high-dimensional action space of NAG. Instead, we turn to the simple yet powerful REINFORCE [25] algorithm, in which the actual return is used to replace the Q function directly. ### 2.4 Our Previous Work A preliminary version of this work was published in [30]. Compared with the previous version, this work makes two significant improvements. First, we introduce compositional agents to take into account the joint influence of the actions of two neighboring agents on the team reward. With compositional agents, a compositional counterfactual baseline can be calculated as a supplement to the individual counterfactual baseline, thereby promoting more precise credit assignment among agents. Second, while the previous version focused only on the image captioning task, in this paper we extend the model to more general sequence generation tasks and conduct extensive experiments on classic sequence generation tasks, including image captioning and machine translation. 
Compared to image captioning, machine translation often has much longer target sentences and is thus more challenging for non-autoregressive models. We verify that our model, with only minor modifications, can perform quite well on non-autoregressive machine translation. ## 3 Background ### 3.1 The Transformer Model Neural sequence generation models typically follow an encoder-decoder structure [17, 16, 3]. The encoder maps a source input to context representations. Given the context, the decoder then generates an output sequence one token at a time. The decoder is autoregressive [31], consuming the generated partial sequence as input and outputting the next token. Transformer [2] is among the most successful neural sequence transduction models. Both the encoder and decoder of Transformer are built with stacked multi-head attention and point-wise feed-forward layers. ##### Multi-Head Attention At the core of Transformer are two types of multi-head attention, _i.e_. self-attention and inter-attention. A general attention mechanism can be formulated as the weighted sum of the value vectors $V$ using the similarities between query vectors $Q$ and key vectors $K$: $\text{Attention}(Q,K,V)=\operatorname{softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right)\cdot V,$ (1) where $d$ is the dimension of hidden representations. For self-attention, $Q$, $K$ and $V$ are projected hidden representations of the preceding layer. For inter-attention, $Q$ refers to projected hidden representations of the preceding layer, whereas $K$ and $V$ are projected context vectors from the encoder outputs. ##### Feed-Forward Network A position-wise feed-forward network is applied after each multi-head attention layer. It consists of a two-layer linear transformation with a ReLU activation in between: $\operatorname{FFN}(x)=\max\left(0,xW_{1}+b_{1}\right)W_{2}+b_{2}.$ (2) Both the Transformer encoder and decoder consist of a stack of 6 identical blocks. 
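As a concrete reference, the attention of Eqn. (1) and the feed-forward network of Eqn. (2) can be sketched in NumPy. This is a minimal single-head sketch: the learned $Q$/$K$/$V$ projections, multi-head splitting, residual connections, and layer normalization of the full model are omitted.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention of Eqn. (1): softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # similarity logits, shape (n_q, n_k)
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of value vectors

def ffn(x, W1, b1, W2, b2):
    """Position-wise feed-forward network of Eqn. (2): max(0, x W1 + b1) W2 + b2."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
```

For self-attention, `Q`, `K`, and `V` would all come from the preceding layer; for inter-attention, `K` and `V` would come from the encoder outputs.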
Each encoder block has two layers: a self-attention and a feed-forward network. Each decoder block has three layers: a self-attention, an inter-attention, and a feed-forward network. Causal attention masking is applied in the self-attention of the decoder stack to prevent each position from attending to subsequent positions. Figure 2: Illustration of our non-autoregressive sequence generation model. It is based on Transformer and consists of an encoder and a decoder. The encoder takes as input a source input, and the decoder generates a target sequence given the encoder outputs. The source input could be, for example, the vectorial CNN features of an image in the image captioning task or the token embeddings of a sentence in the machine translation task. On the rightmost, we cast the non-autoregressive decoder in the multi-agent reinforcement learning terminology, where element positions in the target sequence are viewed as agents that learn to cooperatively maximize a sentence-level team reward. ### 3.2 Autoregressive Decoding Given an input $x=(x_{1},...,x_{n})$ and a target sentence $y=(y_{1},...,y_{T})$, autoregressive sequence generation models are based on a chain of conditional probabilities with a left-to-right causal structure: $p(y|x;\theta)=\prod_{i=1}^{T}p\left(y_{i}|y_{<i},x;\theta\right),$ (3) where $\theta$ denotes the model’s parameters and $y_{<i}$ represents the words before the $i$-th word of target $y$. The inference process is not parallelizable under such an autoregressive factorization, as the sentence is generated word by word sequentially. ### 3.3 Non-Autoregressive Decoding Recently, non-autoregressive sequence models were proposed to reduce the inference latency by removing the sequential dependencies within the target sentence. 
A NAG model generates all words independently: $p(y|x;\theta)=\prod_{i=1}^{T}p\left(y_{i}|x;\theta\right).$ (4) During inference, all words can be decoded in parallel in one pass, so the inference speed is significantly improved. ### 3.4 Maximum Likelihood Training Typically, a non-autoregressive sequence model straightforwardly adopts maximum likelihood training with a cross-entropy (XE) loss applied at each decoding position $i$ of the sentence: $\mathcal{L}_{XE}(\theta)=-\sum_{i=1}^{T}\log\left(p\left(y_{i}|x;\theta\right)\right)$ (5) However, such a word-level XE loss only encourages the model to generate the golden word at each position, while the sentence-level consistency of the whole sentence is not modeled. This problem becomes more serious in non-autoregressive decoding and causes non-autoregressive models to perform significantly worse than their autoregressive counterparts. ## 4 Approach In this section, we first present the architecture of our NAG model, and then introduce our Counterfactuals-critical Multi-Agent Learning (CMAL) algorithm for model optimization. Finally, we describe how we utilize unlabeled data to boost generation quality. ### 4.1 Transformer-Based NAG Model Given an input, _i.e_. the vectorial image features in image captioning or the sentence tokens in machine translation, NAG generates a sentence about the input in a non-autoregressive manner. The architecture of our NAG model is based on the well-known Transformer network, which consists of an encoder and a decoder, as shown in Figure 2. We make only the minimum necessary modifications to Transformer so as to maximize the decoding speed. ##### Inputs For the machine translation task, the inputs are simply the tokens in the source sentence, which are converted to vectors of dimension $d$ with learned embeddings. 
For the image captioning task, the input image is first represented as a set of vectorial features that are either grid features extracted from a pre-trained CNN [15] or region features extracted from a pre-trained object detector [32]. Each vector corresponds to a spatial position in the feature map when using grid features, and to an object in the image when using region features. We then feed the visual features through an input embedding layer to reduce the channel dimension to $d$. This embedding layer consists of a fully-connected layer followed by a ReLU and a dropout layer. ##### Encoder The encoder of NAG is basically the same as the Transformer encoder, which takes the vectors as inputs and generates the context representation. ##### Decoder Since the sequential dependency is removed in a non-autoregressive decoder, previous works often introduce additional components, _e.g_. well-designed decoder architectures [13, 33] and decoder inputs [22], which add extra inference time. Different from these methods, we choose a design that simplifies the decoder as much as possible yet proves to work well in our experiments. We keep the decoder architecture almost the same as the Transformer decoder, and simply use a sequence of sinusoidal positional encodings [2] as the decoder inputs, each of which represents a position in the target sequence. The sequence length is either fixed to $N$ (see Sec. 5.1.2) or predicted by a target length predictor (see Sec. 6.1.2). We remove the causal attention mask from the self-attention layers of the decoder, allowing each position to attend over all positions in the decoder. ### 4.2 NAG as a MARL System To address the decoding inconsistency problem caused by the word-level XE loss, we propose to formulate the NAG model as a fully cooperative Multi-Agent Reinforcement Learning (MARL) system, which optimizes sentence-level rewards and thus improves the decoding consistency. We now formally cast NAG in MARL terminology. 
##### Agent Each word position in the target sequence is viewed as an agent that interacts with a common environment (context from the encoder output) and with the other agents. There are $N$ agents in total, identified by $a\in A\equiv\\{a_{1},a_{2},\ldots,a_{N}\\}$. We denote joint quantities over all agents in bold, _e.g_. $\mathbf{u}$, $\boldsymbol{\pi}$. ##### State The hidden states in the NAG decoder layers naturally represent the states of the agents, which are updated in each decoder layer. The agents observe the environment through the inter-attention layer, where they attend to the context. The agents communicate with each other through the self-attention layer, where messages are passed between every two agents. After $L$ rounds of observation and communication, the final state of each agent is denoted as $s_{a}$. ##### Action After obtaining $s_{a}$, each agent simultaneously chooses an action $u_{a}\in U$, which is a word from the whole vocabulary $U$. The actions of all agents form a joint action $\mathbf{u}\in\mathbf{U}\equiv U^{N}$. To transform the joint action into a sentence, we truncate the word sequence at the first end-of-sentence token ($<$eos$>$) when the number of agents is fixed, and directly decode a sentence when the number of agents is decided by a target length predictor. ##### Policy The parameters of the network, $\theta$, define a stochastic policy $\pi_{a}$ for each agent, from which its action is sampled, _i.e_. $u_{a}\sim\pi_{a}=\textit{softmax}(s_{a})$. We speed up learning and reduce model complexity by sharing parameters among agents. ##### Reward After all agents take their actions (words), they receive a shared team reward $R(\mathbf{u})$. The reward is computed with an evaluation metric (_e.g_. CIDEr) by comparing the generated sentence to the corresponding ground-truth sequences. Compared to typical MARL applications, NAG has two notable features. First, NAG has a much larger action space (_i.e_. 
the whole vocabulary, which is near 10,000 words). Second, NAG has much shorter episodes (_i.e_. the episode length is 1). In fact, agents in NAG perform a one-step Markov Decision Process (MDP), since all words are generated in one pass. (a) Counterfactual replacements of an individual agent. (b) Counterfactual replacements of a compositional agent. Figure 3: Illustration of counterfactual replacements. We replace the action of an individual agent or a compositional agent while keeping the other agents’ actions fixed. ### 4.3 Multi-Agent Policy Gradient The goal of multi-agent learning is to maximize the expected team reward: $\mathcal{L}(\theta)=-\mathbb{E}_{\boldsymbol{\pi}}\left[R(\mathbf{u})\right].$ (6) By the policy gradient theorem, the expected gradient for the agents can be computed as follows: $\nabla_{\theta}\mathcal{L}(\theta)=-\mathbb{E}_{\boldsymbol{\pi}}\left[\sum_{a}R(\mathbf{u})\nabla_{\theta}\log\pi_{a}\left(u_{a}|s_{a};\theta\right)\right].$ (7) In particular, using the REINFORCE [25] algorithm, the above equation can be approximated using a single sample $\mathbf{u}\sim\boldsymbol{\pi}$ from the agents: $\nabla_{\theta}\mathcal{L}(\theta)\approx\sum_{a}R(\mathbf{u})\nabla_{\theta}\log\pi_{a}\left(u_{a}|s_{a};\theta\right).$ (8) However, such a gradient estimate suffers from high variance, which leads to unstable and slow learning of the optimal policy. To reduce the variance, a reference reward or baseline $b$ can be subtracted from the reward: $\nabla_{\theta}\mathcal{L}(\theta)\approx\sum_{a}(R(\mathbf{u})-b)\nabla_{\theta}\log\pi_{a}\left(u_{a}|s_{a};\theta\right).$ (9) The baseline still yields an unbiased estimate and, importantly, results in lower variance of the gradient estimate [34]. The baseline can be an arbitrary function, as long as it does not depend on the action $u_{a}$. ### 4.4 Individual Counterfactual Baseline The above approach, however, fails to address a key multi-agent credit assignment problem. 
That is, because each agent receives the same team reward, it is unclear how a specific agent’s action contributes to that team reward. The consequences of this problem are inefficient multi-agent learning and decoding inconsistency. For example, as illustrated in Figure 3(a), suppose a generated sentence (joint action), “a girl girl riding a bike”, gets a relatively high reward. Then the word “girl” taken by the third agent is likely to be pushed up because it receives a positive reward; however, it should actually be suppressed and replaced with “is”. To address this problem, we compute a separate advantage function $A_{a}(s_{a},\mathbf{u})$ for each agent. It is computed by subtracting an agent-specific counterfactual baseline $B_{a}(s_{a},\mathbf{u}_{-a})$ from the common team reward, _i.e_.: $A_{a}(s_{a},\mathbf{u})=R(\mathbf{u})-B_{a}(s_{a},\mathbf{u}_{-a}),$ (10) where $\mathbf{u}_{-a}$ denotes the joint action of all the agents other than agent $a$. $A_{a}(s_{a},\mathbf{u})$ measures the increase (or decrease) in the expected return of a joint action $\mathbf{u}$ due to agent $a$ having chosen action $u_{a}$ under state $s_{a}$. The gradient in Eqn. (9) then becomes: $\nabla_{\theta}\mathcal{L}(\theta)\approx\sum_{a}A_{a}(s_{a},\mathbf{u})\nabla_{\theta}\log\pi_{a}\left(u_{a}|s_{a};\theta\right).$ (11) Since $B_{a}(s_{a},\mathbf{u}_{-a})$ does not depend on the action of agent $a$, as described above, it will not change the expected gradient. 
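This unbiasedness-with-lower-variance property of a baseline can be checked numerically on a toy one-agent policy. The logits and per-word rewards below are hypothetical, chosen purely for illustration; the paper applies the same single-sample estimator per agent with a sentence-level reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-position setup: a softmax policy over 3 "words" with
# hypothetical logits theta and a fixed reward per word.
theta = np.array([0.2, -0.1, 0.4])
rewards = np.array([1.0, 0.0, 2.0])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_log_pi(u, pi):
    """d/dtheta log pi(u) for a softmax policy: one_hot(u) - pi."""
    g = -pi.copy()
    g[u] += 1.0
    return g

def reinforce_estimates(baseline, n=20000):
    """Single-sample gradient estimates as in Eqn. (8) (baseline=0) / Eqn. (9)."""
    pi = softmax(theta)
    samples = rng.choice(3, size=n, p=pi)
    return np.stack([(rewards[u] - baseline) * grad_log_pi(u, pi) for u in samples])

g_plain = reinforce_estimates(baseline=0.0)
g_base = reinforce_estimates(baseline=float(rewards @ softmax(theta)))

# Both estimators share the same mean (the baseline leaves the gradient
# unbiased), but subtracting the expected reward lowers the variance.
```

Comparing `g_plain.mean(axis=0)` with `g_base.mean(axis=0)` shows matching means up to Monte-Carlo noise, while `g_base.var(axis=0).sum()` is noticeably smaller than `g_plain.var(axis=0).sum()`.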
Formally, the counterfactual baseline $B_{a}$ is calculated by marginalizing the rewards when agent $a$ traverses all possible actions while keeping the other agents’ actions $\mathbf{u}_{-a}$ fixed: $B_{a}(s_{a},\mathbf{u}_{-a})=\mathbb{E}_{u^{\prime}_{a}\sim\pi_{a}}\left[R([\mathbf{u}_{-a},u^{\prime}_{a}])\right].$ (12) The key insight of using this counterfactual baseline for NAG is the following: given a sampled sequence/joint action, if we replace the chosen word/action of a target position/agent with all possible words/actions and see how such counterfactual replacements affect the resulting reward, then the expected reward can act as a baseline that reveals the actual influence of the chosen word/action. As a result, for each agent, only actions that outperform its counterfactual baseline are pushed up, and inferior actions are suppressed. Because the action space of each agent is quite large, we approximate the expectation in the above equation by considering only the $k$ actions with the highest probability: $\displaystyle\pi_{a}^{\prime}\left(u_{a}|s_{a};\theta\right)$ $\displaystyle=\frac{\pi_{a}\left(u_{a}|s_{a};\theta\right)}{\sum_{u^{\prime}_{a}\in\mathcal{T}_{a}}\pi_{a}\left(u^{\prime}_{a}|s_{a};\theta\right)},$ (13) $\displaystyle B_{a}(s_{a},\mathbf{u}_{-a})$ $\displaystyle\approx\sum_{u^{\prime}_{a}\in\mathcal{T}_{a}}\pi_{a}^{\prime}\left(u^{\prime}_{a}|s_{a};\theta\right)R([\mathbf{u}_{-a},u^{\prime}_{a}]),$ where $\pi_{a}^{\prime}\left(u_{a}|s_{a};\theta\right)$ is the re-normalized probability of action $u_{a}$, and $\mathcal{T}_{a}$ is the set of words with the top-$k$ probabilities in $\pi_{a}$. Experimentally, we found this approximation to be quite accurate even with a relatively small $k$, because the top-ranking words often have dominating probabilities. 
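The top-$k$ approximation of Eqn. (13) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: `reward_fn` stands in for the sentence-level metric (e.g. CIDEr against the ground-truth sentences), and the toy reward in the usage below is hypothetical.

```python
import numpy as np

def counterfactual_baseline(pi_a, u_joint, a, reward_fn, k=2):
    """Top-k approximation of the counterfactual baseline in Eqn. (13):
    replace agent a's word with each of its k most probable words, re-normalize
    their probabilities, and take the expected reward of the resulting sequences."""
    topk = np.argsort(pi_a)[::-1][:k]      # T_a: the k most probable actions
    p = pi_a[topk] / pi_a[topk].sum()      # re-normalized pi'_a over T_a
    baseline = 0.0
    for w, pw in zip(topk, p):
        u_cf = list(u_joint)
        u_cf[a] = int(w)                   # counterfactual replacement [u_{-a}, u'_a]
        baseline += pw * reward_fn(u_cf)
    return baseline
```

For example, with `pi_a = [0.1, 0.6, 0.3]`, a sampled joint action `[2, 1, 0]`, and a toy reward counting distinct tokens, the baseline for agent 0 with $k=2$ averages the rewards of the two replacement sequences `[1, 1, 0]` and `[2, 1, 0]` under the re-normalized weights 2/3 and 1/3.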
Thanks to the one-step Markov Decision Process (MDP) nature of our NAG model, the counterfactual replacements can be made effortlessly by simply choosing new words from $\pi_{a}\left(u_{a}|s_{a};\theta\right)$, without the need for time-consuming Monte-Carlo rollouts as in common multi-step MDP problems. ### 4.5 Compositional Counterfactual Baseline In the previous section we analyzed an agent’s contribution by counterfactual replacements of that single agent’s action while keeping the other agents’ actions fixed. To take into account the joint influence of two neighboring agents, we introduce the compositional agent (CA). Specifically, we treat two neighboring agents as a single compositional agent, and see how its compositional action affects the team reward through the aforementioned counterfactual replacements of its action. Take Figure 3(b) as an example: for the generated sentence “a girl girl riding a bike”, we can consider the third and fourth agents as a compositional agent, and perform counterfactual replacements on its compositional action, _e.g_. replacing “girl riding” with “is on” or “is riding”. We denote the compositional agent of agent $a_{i}$ and its subsequent agent $a_{i+1}$ as $\tilde{a}_{i}=[a_{i};a_{i+1}]$. Similar to Eqn. (12), for compositional agent $\tilde{a}_{i}$, its compositional counterfactual baseline $\widetilde{B}_{\tilde{a}_{i}}$ is calculated by marginalizing the rewards when agent $\tilde{a}_{i}$ traverses all possible compositional actions $u^{\prime}_{\tilde{a}_{i}}=[u_{a_{i}};u_{a_{i+1}}]$ while keeping the other agents’ actions $\mathbf{u}_{-{\tilde{a}_{i}}}$ fixed: $\widetilde{B}_{\tilde{a}_{i}}(s_{\tilde{a}_{i}},\mathbf{u}_{-{\tilde{a}_{i}}})=\mathbb{E}_{u^{\prime}_{\tilde{a}_{i}}\sim\pi_{\tilde{a}_{i}}}\left[R([\mathbf{u}_{-{\tilde{a}_{i}}},u^{\prime}_{\tilde{a}_{i}}])\right].$ (14) Also similar to Eqn. 
(13), we approximate the expectation in the above equation by considering only the $k$ compositional actions with the highest probability: $\displaystyle\pi_{\tilde{a}_{i}}\left(u_{\tilde{a}_{i}}|s_{\tilde{a}_{i}};\theta\right)$ $\displaystyle=\pi_{{a}_{i}}\left(u_{{a}_{i}}|s_{{a}_{i}};\theta\right)\cdot\pi_{{a}_{i+1}}\left(u_{{a}_{i+1}}|s_{{a}_{i+1}};\theta\right),$ (15) $\displaystyle\pi_{\tilde{a}_{i}}^{\prime}\left(u_{\tilde{a}_{i}}|s_{\tilde{a}_{i}};\theta\right)$ $\displaystyle=\frac{\pi_{\tilde{a}_{i}}\left(u_{\tilde{a}_{i}}|s_{\tilde{a}_{i}};\theta\right)}{\sum_{u^{\prime}_{\tilde{a}_{i}}\in\mathcal{T}_{{\tilde{a}_{i}}}}\pi_{\tilde{a}_{i}}\left(u^{\prime}_{\tilde{a}_{i}}|s_{\tilde{a}_{i}};\theta\right)},$ $\displaystyle\widetilde{B}_{\tilde{a}_{i}}(s_{\tilde{a}_{i}},\mathbf{u}_{-{\tilde{a}_{i}}})$ $\displaystyle\approx\sum_{u^{\prime}_{\tilde{a}_{i}}\in\mathcal{T}_{{\tilde{a}_{i}}}}\pi_{\tilde{a}_{i}}^{\prime}\left(u^{\prime}_{\tilde{a}_{i}}|s_{\tilde{a}_{i}};\theta\right)R([\mathbf{u}_{-{\tilde{a}_{i}}},u^{\prime}_{\tilde{a}_{i}}]),$ where $\pi_{\tilde{a}_{i}}\left(u_{\tilde{a}_{i}}|s_{\tilde{a}_{i}};\theta\right)$ and $\pi_{\tilde{a}_{i}}^{\prime}\left(u_{\tilde{a}_{i}}|s_{\tilde{a}_{i}};\theta\right)$ are the joint probability and the re-normalized probability of compositional action $u_{\tilde{a}_{i}}$, and $\mathcal{T}_{{\tilde{a}_{i}}}$ is the set of compositional actions with the top-$k$ probabilities in $\pi_{\tilde{a}_{i}}$. Since an agent $a_{i}$ has two neighboring agents, _i.e_. its preceding agent $a_{i-1}$ and its subsequent agent $a_{i+1}$, it is involved in two compositional agents, $\tilde{a}_{i-1}=[a_{i-1};a_{i}]$ and $\tilde{a}_{i}=[a_{i};a_{i+1}]$. 
Therefore, for agent $a_{i}$, its compositional counterfactual baseline is the average of $\widetilde{B}_{\tilde{a}_{i-1}}$ and $\widetilde{B}_{\tilde{a}_{i}}$: $\displaystyle\widetilde{B}_{a_{i}}=\frac{\widetilde{B}_{\tilde{a}_{i-1}}+\widetilde{B}_{\tilde{a}_{i}}}{2}.$ (16) The final counterfactual baseline for agent $a_{i}$ is the weighted sum of two parts: the individual baseline $B_{a_{i}}$ in Eqn. (13) and the compositional baseline $\widetilde{B}_{a_{i}}$ in Eqn. (16): $\widehat{B}_{a_{i}}=(1-\lambda)B_{a_{i}}+\lambda\widetilde{B}_{a_{i}},$ (17) where $\lambda$ is a hyper-parameter that balances the two terms. We empirically found that setting $\lambda$ to $0.5$ performs well. We only consider compositional agents of two neighboring agents, since in our preliminary experiments we did not observe further improvement when using more adjacent agents. Finally, by considering the compositional agent (CA), the gradient in Eqn. (9) becomes: $\displaystyle\nabla_{\theta}\mathcal{L}(\theta)$ $\displaystyle\approx\sum_{a}\left(R(\mathbf{u})-\widehat{B}_{a}\right)\nabla_{\theta}\log\pi_{a}\left(u_{a}|s_{a};\theta\right).$ (18) The individual baseline measures the expected effect of an agent on the team reward by considering that individual agent’s actions, while the compositional baseline further considers how the agent’s actions interact with its neighboring agents’ actions. The individual and compositional baselines are complementary in that their combination can more precisely measure the expected reward of an agent and hence more accurately measure how much gain the agent brings by taking the sampled action. TABLE I: Generation Quality, Latency, and Speedup on MSCOCO Image Captioning Dataset. “${\dagger}$” Indicates the Model is Based on LSTM Architecture While the Others are Based on Transformer. 
AIC is Our Implementation of the Transformer-Based Autoregressive Model, Which Has the Same Structure as NAIC Models and is Used as the Teacher Model for Knowledge Distillation (KD). “/” Denotes That the Results are not Reported or Not Directly Comparable. “bw” denotes the Beam Width Used for Beam Search. Latency is the Time to Decode a Single Image without Minibatching, Averaged Over the Whole Test Split, and is Tested on a GeForce GTX 1080 Ti GPU. The Speedup Values of the Compared Models are from the Corresponding Papers. Models | BLEU-1 | BLEU-4 | METEOR | ROUGE | SPICE | CIDEr | Latency | Speedup ---|---|---|---|---|---|---|---|--- Autoregressive models NIC-v2† [3] | / | 32.1 | 25.7 | / | / | 99.8 | / | / Up-Down† [32] | 79.8 | 36.3 | 27.7 | 56.9 | 21.4 | 120.1 | / | / CAVP† [35] | / | 38.6 | 28.3 | 58.5 | 21.6 | 126.3 | / | / AoANet† [36] | 80.2 | 38.9 | 29.2 | 58.8 | 22.4 | 129.8 | / | / ETA [37] | 81.5 | 39.3 | 28.8 | 58.9 | 22.7 | 126.6 | / | / ORT [38] | 80.5 | 38.6 | 28.7 | 58.4 | 22.6 | 128.3 | / | / NG-SAN [20] | / | 39.9 | 29.3 | 59.2 | 23.3 | 132.1 | / | / X-Transformer [39] | 80.9 | 39.7 | 29.5 | 59.1 | 23.4 | 132.8 | / | / AIC ($\text{bw}=1$) | 79.8 | 38.4 | 29.0 | 58.7 | 22.8 | 126.6 | 134ms | 1.66$\times$ AIC ($\text{bw}=3$) | 80.3 | 38.9 | 29.1 | 58.9 | 22.9 | 128.8 | 222ms | 1.00$\times$ Non-autoregressive models MNIC [23] | 75.4 | 30.9 | 27.5 | 55.6 | 21.0 | 108.1 | / | 2.80$\times$ FNIC [24] | / | 36.2 | 27.1 | 55.3 | 20.2 | 115.7 | / | 8.15$\times$ Non-autoregressive models (Ours) NAIC-base | 60.7 | 15.9 | 18.2 | 45.9 | 11.9 | 60.6 | 16ms | 13.90$\times$ \+ weight-init | 62.3 | 17.1 | 19.0 | 46.8 | 12.6 | 64.6 \+ KD | 78.5 | 35.3 | 27.3 | 56.9 | 20.8 | 115.5 \+ CMAL | 80.3 | 37.3 | 28.1 | 58.0 | 21.8 | 124.0 \+ unlabel | 80.5 | 38.0 | 28.3 | 58.2 | 22.0 | 125.5 \+ CA | 80.6 | 38.2 | 28.4 | 58.4 | 22.1 | 126.4 \+ postprocess | 80.7 | 38.4 | 28.4 | 58.5 | 22.1 | 126.9 ### 4.6 Unlabeled Data Augmented Knowledge Distillation Following previous 
works [5], we use sequence-level knowledge distillation (KD) [40] during training, where the sentences produced by a pre-trained autoregressive Transformer teacher model are considered as pseudo target sentences for training the non-autoregressive student model. This strategy has been shown to be an effective way to alleviate the multimodality problem [5]. While previously only labeled data were used for training non-autoregressive models, we propose to utilize freely available unlabeled inputs to boost the performance of sequence generation with the KD strategy. Specifically, we use the KD teacher model to produce target sentences for extra unlabeled data, which are used as pseudo paired data during XE training. These unlabeled data can be seen as a data augmentation technique that helps to better transfer the teacher model’s knowledge to the student model. Before starting CMAL training, we first pre-train the NAG models with the XE loss (Eqn. (5)), during which we use both the labeled and unlabeled data and their corresponding pseudo target sentences as training data. Then, during CMAL training (Eqn. (11) and Eqn. (18)), we use the real labeled data from the original dataset. There are two advantages of using real data instead of pseudo data for CMAL training: first, the reward computation at training time is consistent with the evaluation metric computation at test time, _i.e_. the generated sentences are compared against the real target sentences; second, unlike previous works on NAG, the performance of our method is not limited by that of the KD teacher model. ## 5 Experiments on Image Captioning We first validate the proposed non-autoregressive sequence generation method on the image captioning task. We denote our Non-Autoregressive Image Captioning models as NAIC. ### 5.1 Experimental Settings #### 5.1.1 MSCOCO Dataset MSCOCO [41] is the most popular benchmark for image captioning. 
We use the “Karpathy” splits [42] that have been used extensively for reporting results in prior works. This split contains 113,287 training images with 5 captions each, and 5,000 images each for the validation and test splits. The vocabulary size is 9,487 words. We use the officially released MSCOCO unlabeled images as unlabeled data. To be consistent with previous works, we pre-extract image features for all the images following [32]. We use standard automatic evaluation metrics to evaluate the quality of captions, including BLEU-1/4 [7], METEOR [43], ROUGE [44], SPICE [45], and CIDEr [6], denoted as B1/4, M, R, S, and C, respectively. #### 5.1.2 Implementation Details We train an autoregressive image captioning model as the teacher model, namely AIC. Both the NAIC and AIC models closely follow the model hyper-parameters of Transformer-Base [2]. Specifically, the number of stacked blocks $L$ is 6. The AIC model is trained first with the XE loss and then with SCST [26]. Beam search with a beam width of 3 is used during decoding of the AIC model. Our best NAIC model is trained according to the following process. We use a fixed number of $N=16$ agents because most of the captions are not longer than this length. We first initialize the weights of the NAIC model with the pre-trained AIC teacher model. We then pre-train the NAIC model with the XE loss for 30 epochs. During this stage, we use a warm-up learning rate of $\min(t\times 10^{-4},3\times 10^{-4})$, where $t$ is the current epoch number starting at 1. After 6 epochs, the learning rate is decayed by 0.5 every 3 epochs. After that, we run CMAL training to optimize the CIDEr metric for about 70 epochs. To enhance training efficiency, we sample 5 sentences for each input. At this training stage, we use an initial learning rate of $7.5\times 10^{-5}$ and decay it by 0.8 every 10 epochs. Both training stages use the Adam [46] optimizer with a batch size of 50. 
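For concreteness, one plausible reading of the XE-stage warm-up and decay schedule described above is the following sketch (the exact epoch at which the first decay step applies is our assumption, since the text leaves it ambiguous):

```python
def xe_stage_lr(epoch):
    """Sketch of the XE pre-training schedule: warm up as min(t * 1e-4, 3e-4),
    then after epoch 6 decay by 0.5 every 3 epochs. `epoch` starts at 1.
    The decay offset ((epoch - 6) // 3) is one possible reading of the text."""
    if epoch <= 6:
        return min(epoch * 1e-4, 3e-4)
    return 3e-4 * 0.5 ** ((epoch - 6) // 3)
```

Under this reading, epochs 1-3 warm up linearly from $10^{-4}$ to $3\times 10^{-4}$, epochs 4-8 hold at $3\times 10^{-4}$, and the rate halves at epochs 9, 12, 15, and so on.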
By default, we use the $k=2$ top-ranking words in CMAL, and use $100,000$ unlabeled images for training. ### 5.2 Main Results We first compare the performance of our methods against other non-autoregressive models and state-of-the-art autoregressive models. Among the compared models, ETA [37], ORT [38], X-Transformer [39], NG-SAN [20], MNIC [23], FNIC [24], and AIC are based on a Transformer architecture similar to ours, while the others are based on LSTM [47]. MNIC [23] and FNIC [24] are two published non-autoregressive image captioning models. MNIC adopts an iterative refinement strategy, while FNIC uses an RNN to order words detected in the image. As shown in Table I, our best model (the last row) achieves significant improvements over the previous non-autoregressive models across all metrics, strikingly narrowing their performance gap with AIC from $13.1$ CIDEr points down to only $1.9$ CIDEr points. Furthermore, we achieve comparable performance with state-of-the-art autoregressive models. Comparing speedups, our method obtains a significant speedup of more than a factor of 10 over the autoregressive counterpart, with latency (excluding the time for image feature extraction) reduced to about $16ms$. In Table II we further compare our NAIC model with state-of-the-art autoregressive models by submitting our single-model results to the online MSCOCO evaluation server. c5 and c40 denote evaluating the models with 5 and 40 reference captions, respectively. The results show that our method achieves very competitive results compared to state-of-the-art autoregressive models and even surpasses some of them. TABLE II: Comparison with State-Of-the-Art, Single-Model Autoregressive Methods on the Online MSCOCO Image Captioning Test Server. 
Model | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | ROUGE-L | CIDEr-D ---|---|---|---|---|---|---|--- | c5 | c40 | c5 | c40 | c5 | c40 | c5 | c40 | c5 | c40 | c5 | c40 | c5 | c40 Up-Down∗ [32] | 80.2 | 95.2 | 64.1 | 88.8 | 49.1 | 79.4 | 36.9 | 68.5 | 27.6 | 36.7 | 57.1 | 72.4 | 117.9 | 120.5 CAVP [35] | 80.1 | 94.9 | 64.7 | 88.8 | 50.0 | 79.7 | 37.9 | 69.0 | 28.1 | 37.0 | 58.2 | 73.1 | 121.6 | 123.8 SGAE [48] | 80.6 | 95.0 | 65.0 | 88.9 | 50.1 | 79.6 | 37.8 | 68.7 | 28.1 | 37.0 | 58.2 | 73.1 | 122.7 | 125.5 ETA [37] | 81.2 | 95.0 | 65.5 | 89.0 | 50.9 | 80.4 | 38.9 | 70.2 | 28.6 | 38.0 | 58.6 | 73.9 | 122.1 | 124.4 NG-SAN [20] | 80.8 | 95.0 | 65.4 | 89.3 | 50.8 | 80.6 | 38.8 | 70.2 | 29.0 | 38.4 | 58.7 | 74.0 | 126.3 | 128.6 Ours | 80.3 | 94.4 | 64.5 | 87.8 | 49.6 | 78.2 | 37.5 | 67.4 | 28.1 | 36.8 | 58.0 | 72.8 | 121.1 | 122.6 ### 5.3 The Contribution of Each Component We conduct an extensive ablation study with the proposed NAIC model. The results are shown at the bottom of Table I. NAIC-base is the naive NAIC model trained from scratch with the XE loss (Eqn. (5)); weight-init denotes initializing the weights of NAIC with the AIC model; KD represents using sentence-level knowledge distillation with AIC as the teacher model; CMAL denotes further applying our proposed CMAL algorithm with the individual counterfactual baseline (Eqn. (11)) for CIDEr optimization; unlabel means using an additional 100,000 unlabeled images during XE training; CA indicates applying the compositional counterfactual baseline with Eqn. (18); and postprocess is a simple postprocessing step which removes consecutive identical tokens following [12]. From Table I, we have the following observations. (1) Initializing the NAIC model’s weights with the AIC teacher consistently improves the performance. (2) NAIC-base performs extremely poorly compared to AIC. (3) Training on the distillation data during XE training improves the CIDEr score to 115.5. 
However, there still remains a large performance gap between this model and the AIC teacher. (4) Applying our CMAL training on top of the above XE-trained model significantly improves the performance by $8.5$ CIDEr points. (5) Augmenting the knowledge distillation data with extra unlabeled data during XE training further boosts the performance by $1.5$ CIDEr points. (6) Considering the influence of compositional agents (CA) on CMAL’s counterfactual baseline further increases the CIDEr score by $0.9$ points. (7) Removing consecutive identical tokens (postprocess) slightly improves the CIDEr score by $0.5$ points, resulting in a final CIDEr score of $126.9$, which is only $1.9$ points behind the AIC teacher. TABLE III: Comparison of Using Various Baselines $b$ (in Eqn. (9)) on MSCOCO Image Captioning Dataset. XE: The Performance After Pre-Training With Cross-Entropy Loss. None: Not Using a Baseline. MA: Moving Average Baseline. SC: Self-Critical Baseline. CF: The Proposed Counterfactual Baseline. Baseline $b$ | B1 | B4 | M | R | S | C ---|---|---|---|---|---|--- w/o weight-init: | | | | | | XE | 77.7 | 34.8 | 26.9 | 56.3 | 20.3 | 113.9 None | 65.6 | 19.4 | 22.7 | 48.9 | 15.8 | 91.4 MA | 75.6 | 28.7 | 24.4 | 53.6 | 17.9 | 103.3 SC | 79.0 | 34.6 | 26.9 | 56.2 | 20.6 | 118.1 CF | 79.9 | 36.5 | 27.7 | 57.4 | 21.4 | 122.1 w/ weight-init: | | | | | | XE | 78.5 | 35.3 | 27.3 | 56.9 | 20.8 | 115.5 None | 78.6 | 33.7 | 26.5 | 56.1 | 20.2 | 115.2 MA | 79.0 | 34.1 | 26.6 | 56.3 | 20.2 | 116.1 SC | 79.6 | 36.5 | 27.6 | 57.4 | 21.4 | 121.2 CF | 80.3 | 37.3 | 28.1 | 58.0 | 21.8 | 124.0 ### 5.4 Comparison of Various Reward Baselines $b$ To evaluate the effectiveness of our counterfactual (CF) baseline, we compare it with two widely-used baselines in policy gradient methods, _i.e_. Moving Average [49] (MA) and Self-Critical [26] (SC), as well as with not using a baseline (None), _i.e_. $b=0$. Specifically, the CF baseline is the proposed individual counterfactual baseline (Eqn. (13)). 
MA baseline is the accumulated sum of the previous rewards with exponential decay. SC baseline is the reward received when all agents directly take greedy actions. The results are shown in Table III. We specifically consider the case without weight-init because it may not be possible to find an autoregressive model that has the same structure as a newly designed non-autoregressive model. As we can see, our CF baseline consistently outperforms all the other compared methods. It is noteworthy that the performance gaps between our CF baseline and the other baselines become larger when training starts from a low-performing model (_i.e_. the XE model w/o weight-init). That is, our method is less sensitive to model initialization, suggesting its ability to enable more robust and stable reinforcement learning. None and MA severely degrade the performance compared to the XE model when not using weight-init, but they perform similarly to the XE model when using weight-init. While SC considerably outperforms the XE model, it is still inferior to CF. The reason is that both MA and SC are agent-agnostic global baselines that fail to address the multi-agent credit assignment problem. Different from these methods, our CF baseline is agent-specific and can disentangle the individual contribution of each agent from the team reward so that the agents can be trained more efficiently.

Figure 4: Examples of the generated image captions on MSCOCO image captioning dataset. GT is a ground-truth caption. NAIC-XE and NAIC-CMAL are our NAIC model after XE and CMAL training, respectively. Two failure cases are highlighted in red.

TABLE IV: Effect of Top-$k$ Size (in Eqn. (13)) on MSCOCO Image Captioning Dataset.

top-$k$ | B1 | B4 | M | R | S | C
---|---|---|---|---|---|---
1 | 80.1 | 37.4 | 28.0 | 57.9 | 21.7 | 123.7
2 | 80.3 | 37.3 | 28.1 | 58.0 | 21.8 | 124.0
5 | 80.1 | 37.3 | 28.0 | 58.0 | 21.7 | 123.7

TABLE V: Performance of Our Method as a Function of Training Metric on MSCOCO Image Captioning Dataset.
Training Metric | Evaluation Metric
---|---
B1 | B4 | M | R | S | C
XE | 78.5 | 35.3 | 27.3 | 56.9 | 20.8 | 115.5
BLEU-4 | 78.4 | 38.3 | 27.6 | 58.0 | 21.1 | 118.4
METEOR | 77.9 | 35.5 | 28.6 | 57.9 | 22.2 | 119.5
ROUGE | 78.6 | 37.5 | 27.6 | 58.6 | 20.9 | 118.9
CIDEr | 80.3 | 37.3 | 28.1 | 58.0 | 21.8 | 124.0

### 5.5 Effect of Top-$k$ Size

As shown in Table IV, the model is not sensitive to the choice of top-$k$ size. Using a small $k$ of 2 achieves fairly good performance. This is because, after pre-training the NAIC model with cross-entropy loss, the top-ranking words often have dominating probabilities, which makes it feasible to approximate the expectation over all words in the vocabulary with these $k$ top-ranking words.

TABLE VI: The Results After XE and CMAL Training When Using Different Numbers of Unlabeled Images on MSCOCO Image Captioning Dataset.

#unlabel | stage | B1 | B4 | M | R | S | C
---|---|---|---|---|---|---|---
0 | XE | 78.5 | 35.3 | 27.3 | 56.9 | 20.8 | 115.5
CMAL | 80.3 | 37.3 | 28.1 | 58.0 | 21.8 | 124.0
50k | XE | 78.8 | 36.2 | 27.6 | 57.2 | 21.1 | 118.1
CMAL | 80.2 | 37.6 | 28.1 | 58.1 | 21.9 | 124.8
100k | XE | 79.0 | 36.2 | 27.7 | 57.3 | 21.2 | 118.3
CMAL | 80.5 | 38.0 | 28.3 | 58.2 | 22.0 | 125.5

### 5.6 Training on Different Metrics

In Table V we show the results of using different evaluation metrics as the reward function in CMAL. We do not experiment with the SPICE metric because its computation is too slow. As expected, optimizing for a given metric during training generally leads to the best performance on that same metric at test time. Optimizing for CIDEr gives more balanced improvements on all the evaluation metrics. We have experimented with optimizing various mixed metrics, but no noticeable improvement was observed.

### 5.7 Number of Unlabeled Images

In Table VI, we show the results after XE and CMAL training when using 0, 50,000 and 100,000 unlabeled images respectively.
Generally, using more unlabeled images leads to better performance. XE training benefits more from the unlabeled images than CMAL training because we directly use the unlabeled images during XE training but not during CMAL.

TABLE VII: Two Examples Comparing Translations Produced by the Autoregressive Transformer, the Non-Autoregressive NAT-XE, and Our NAT-CMAL without Postprocessing. Repeated Words Are Highlighted in Gray.

Source: | NSA siphons data from Google and Yahoo - Snowden wants to help
---|---
Target: | NSA saugt Daten von Google und Yahoo ab - Snowden will helfen
Transformer: | NSA sickert Daten von Google und Yahoo - Snowden will helfen
NAT-XE: | NSA - phdaten von Google Google Yahoo Yahoo SnowSnowden will helfen
NAT-CMAL: | NSA sionen Daten von Google und Yahoo - Snowden will helfen
Source: | Helpers can include parents , grandparents , friends , organisations and , of course , the teachers and children themselves .
Target: | Helfer können Eltern , Großeltern , Freunde , Vereine und natürlich die Erzieher und Kinder selbst sein .
Transformer: | Helfer können Eltern , Großeltern , Freunde , Organisationen und natürlich die Lehrer und Kinder selbst sein .
NAT-XE: | Zu fer können Eltern GroßGroßeltern , , , Organisationen Organisationen natürlich natürlich Lehrer Lehrer Kinder Kinder eingehören .
NAT-CMAL: | Zu Helfer können Eltern , Großeltern , Freunde , Organisationen und natürlich die Lehrer und Kinder selbst einschließen .

### 5.8 Qualitative Analysis

We present some examples of generated image captions in Figure 4. As can be seen, repeated words and incomplete content are most prevalent in the XE-trained NAIC model, _e.g_. "playing playing on on", "zebra standing … zebra standing", and "brushing brushing teeth teeth a a". This confirms that the word-level XE training often results in the decoding inconsistency problem. With our CMAL training, the sentences often become far more consistent and fluent.
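The postprocess step referred to above, which removes consecutive identical tokens following the idea in [12], amounts to a one-line filter; the function name here is ours, not the paper's:

```python
from itertools import groupby

def remove_consecutive_duplicates(tokens):
    # Keep one representative from each run of consecutive identical tokens.
    return [token for token, _ in groupby(tokens)]

caption = "a man brushing brushing teeth teeth a a bathroom".split()
print(" ".join(remove_consecutive_duplicates(caption)))
# → a man brushing teeth a bathroom
```

Note that only consecutive repeats are collapsed; a word that legitimately recurs elsewhere in the sentence is kept.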
We also show two failure cases in the last row, highlighted in red. In the first case, our model outputs the unfluent phrase "and on a beach", which might be due to its failure to recognize the "chair". In the second case, our model outputs "in to of a mirror", which should be "in front of a mirror".

TABLE VIII: Statistics of MSCOCO Image Captioning Dataset and WMT14 EN-DE Machine Translation Dataset. Under "Tokens-Per-Sentence" are the Mean, Standard Deviation, and Median of the Target Sentence Length.

Dataset | #Examples | Tokens-Per-Sentence
---|---|---
Mean | StdDev | Median
MSCOCO | 0.6M | 10.5 | 2.4 | 10.0
WMT14 EN-DE | 4.5M | 29.4 | 17.0 | 26.0

## 6 Experiments on Machine Translation

To validate the generalization ability of our non-autoregressive sequence generation method, we further perform experiments on the machine translation task. Compared to image captioning, machine translation often has much longer target sentences (shown in Table VIII) and is thus more challenging for non-autoregressive decoding models. We denote our Non-Autoregressive machine Translation models as NAT.

### 6.1 Experimental Settings

#### 6.1.1 WMT14 EN-DE Dataset

We use the widely adopted machine translation dataset, WMT14 En-De (http://www.statmt.org/wmt14/translation-task), to evaluate the effectiveness of our proposed method. WMT14 En-De has 4.5 million English-to-German sentence pairs. Following previous practices, we employ newstest2013 and newstest2014 as the validation and test sets respectively. All the data are tokenized and segmented into subword units using byte-pair encoding (BPE) [50], leading to a vocabulary of size 40k that is shared between the source and target languages. We evaluate the translation quality with BLEU [7].

#### 6.1.2 Implementation Details

As shown in Table VIII, the WMT14 En-De dataset's target sequence lengths vary greatly, and its average length is about 3 times that of the MSCOCO image captioning dataset.
Therefore, instead of using a fixed number of agents in the decoder, we let the number of agents adapt to the input sentence. That is, we train a target length predictor to predict the target sequence length and set the number of agents to be the predicted length during inference. During training, we use the ground-truth target sentence length and the length predictor is not used. We formulate the target length prediction as a classification problem, predicting the length offset between the target and source lengths. Specifically, we first perform global pooling on the hidden vectors from the last layer of the encoder, and then feed the resulting vector to a linear layer followed by a softmax function. The length predictor is jointly trained with the whole network during training. We use the default network architecture of the Transformer-Base [2] model. We first train an autoregressive Transformer teacher model with standard cross-entropy loss, which achieves a BLEU score of $27.20$. During decoding of the teacher model, beam search with a beam width of 4 is used. Our models are implemented based on the open-sourced fairseq [51] sequence modeling toolkit. For our non-autoregressive models, we first pre-train the NAT model with XE loss for 300 epochs, exactly following the Transformer's training settings. The knowledge distillation data produced by the Transformer teacher model is used in this process. After that, we run our CMAL training to optimize the GLEU score [1] for 30 more epochs with a fixed learning rate of $1\times 10^{-5}$. The BLEU score was designed to be a corpus measure and has some undesirable properties when used for calculating per-sentence rewards. Therefore, we use a slightly different score, the sentence-level "GLEU score" [1], as our reward objective. GLEU is designed to be a per-sentence metric that, at the same time, correlates quite well with BLEU on a corpus level. To enhance training efficiency, we sample 5 sentences for each input.
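As a concrete reference, here is a minimal sketch of the sentence-level GLEU score described above. It reflects our reading of the metric from [1] (pool all n-grams of order 1 to 4, then take the minimum of clipped n-gram precision and recall), not the authors' exact implementation:

```python
from collections import Counter

def ngram_counts(tokens, max_n=4):
    # Multiset of all n-grams with 1 <= n <= max_n.
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def gleu(hypothesis, reference, max_n=4):
    hyp, ref = ngram_counts(hypothesis, max_n), ngram_counts(reference, max_n)
    matches = sum((hyp & ref).values())          # clipped n-gram matches
    precision = matches / max(1, sum(hyp.values()))
    recall = matches / max(1, sum(ref.values()))
    return min(precision, recall)

target = "NSA saugt Daten von Google und Yahoo ab".split()
assert gleu(target, target) == 1.0  # a perfect translation scores 1
```

Because it takes the minimum of per-sentence precision and recall, GLEU penalizes both spurious repeated tokens and dropped content, which makes it a sensible per-sentence reward where corpus-level BLEU degenerates.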
By default, we use the $k=1$ top-ranking word in CMAL. We do not average multiple check-pointed models.

TABLE IX: BLEU Score, Latency, and Speedup on Official Test Set of WMT14 En-De Machine Translation Dataset. All of the Compared Non-Autoregressive Models Are Based on Transformer Architecture. The Transformer [2] Results Are Based on Our Own Reproduction, and It Is Used as the Teacher Model for Knowledge Distillation. "/" Denotes That the Results Are Not Reported or Not Directly Comparable. Latency Is the Time to Decode a Single Sentence without Minibatching, Averaged Over the Whole Test Split, and Is Tested on a Tesla V100 GPU. The Speedup Values of the Compared Models Are from the Corresponding Papers.

Models | Decoding | En$\to$De | Latency | Speedup
---|---|---|---|---
Iterations | BLEU
Autoregressive models
LSTM-based S2S [1] | $N$ | 24.60 | / | /
ConvS2S [18] | $N$ | 26.43 | / | /
Transformer [2] | $N$ | 27.20 | 251.4ms | 1.00$\times$
Non-autoregressive models
NAT with Fertility [5] | 1 | 17.69 | / | 15.6$\times$
LT [13] | 1 | 19.80 | / | 5.78$\times$
Iterative Refinement [12] | 1 | 13.91 | / | 11.39$\times$
Iterative Refinement [12] | 2 | 16.95 | / | 8.77$\times$
Iterative Refinement [12] | 10 | 21.61 | / | 2.01$\times$
CTC [33] | 1 | 17.68 | / | 3.42$\times$
Auxiliary Regularization [21] | 1 | 20.65 | / | /
FlowSeq [52] | 1 | 21.45 | / | /
Bag-of-ngrams Loss [53] | 1 | 20.90 | / | 10.77$\times$
FCL-NAT [54] | 1 | 21.70 | / | /
Non-autoregressive models (Ours)
NAT-XE | 1 | 18.46 | 14.4ms | 17.46$\times$
NAT-CMAL | 1 | 23.69
NAT-CMAL-CA | 1 | 23.87
NAT-CMAL-CA (postprocess) | 1 | 24.46

### 6.2 Main Results

We compare our model with the following non-autoregressive methods. NAT with fertility [5] is the first to propose non-autoregressive neural machine translation, which predicts input token fertilities as a latent variable. Latent Transformer (LT) [13] incorporates an autoregressive module into NAT to predict a sequence of discrete latent variables.
Iterative Refinement [12] trains extra decoders to iteratively refine the translation output over multiple iterations. CTC [33] uses connectionist temporal classification. Auxiliary Regularization [21] adds two auxiliary regularization terms on the decoder representations. FlowSeq [52] introduces a method based on generative flow. Bag-of-ngrams Loss [53] minimizes the bag-of-ngrams difference between the model output and the reference sentence. FCL-NAT [54] introduces curriculum learning into the fine-tuning of NAT. We also compare with strong autoregressive methods that are based on LSTM [1], CNN [18] and self-attention [2]. The inference latency is the average per-sentence decoding latency over the newstest2014 test set, measured on a single Tesla V100 GPU. The speedup relative to the autoregressive Transformer is also reported. Results are shown in Table IX. The proposed multi-agent reinforcement learning method with the individual counterfactual baseline (NAT-CMAL, Eqn. (11)) outperforms the cross-entropy loss based baseline (NAT-XE, Eqn. (5)) by a significant margin of $5.23$ BLEU points. Considering the influence of compositional agents (CA, Eqn. (18)) on the counterfactual baseline slightly increases the BLEU score by $0.18$ points. NAT-CMAL-CA achieves state-of-the-art performance with significant improvements over the compared non-autoregressive models. Specifically, NAT-CMAL-CA outperforms NAT with fertility by $6.18$ BLEU points and Iterative Refinement with 10 iterations by $2.26$ BLEU points. Compared to autoregressive models, NAT-CMAL-CA is only $3.33$ BLEU points behind its Transformer teacher, and is even comparable to the state-of-the-art LSTM-based S2S baseline. By removing the consecutive identical tokens as a simple postprocessing step following [12] (postprocess), the BLEU score of NAT-CMAL-CA is further improved by $0.59$. Our final model shrinks the gap between the autoregressive teacher and the NAT model from $8.74$ ($27.20$ vs.
$18.46$) to only $2.74$ ($27.20$ vs. $24.46$) BLEU points. The promising results demonstrate that the proposed multi-agent reinforcement learning method can produce higher quality translations by enforcing sentence-level consistency and optimizing the non-differentiable test metric. Comparing speedups, our method obtains the greatest speedup of $17.46\times$ over the autoregressive counterpart, with latency significantly reduced from $251.4ms$ to only $14.4ms$. Previous works often rely on adding extra components, _e.g_. iterative refinement [12], an autoregressive submodule [13], or a CTC decoder [33], to the Transformer to achieve better generation quality, which, however, inevitably sacrifices the decoding speed. Our method adds no extra components except the lightweight length predictor, and is therefore able to maximize the decoding speedup.

### 6.3 Comparison of Various Reward Baselines $b$

Similar to Section 5.4, we also compare our counterfactual (CF) baseline with two widely-used baselines in policy gradient, _i.e_. Moving Average [49] (MA) and Self-Critical [26] (SC), and with not using a baseline (None), _i.e_. $b=0$. The results are shown in Figure 7. The MA and SC baselines lead to extremely poor performance on the WMT14 EN-DE dataset. Not using a baseline (None) brings a $1.66$ BLEU improvement over the XE model. Our CF baseline significantly outperforms all the other compared baselines by large margins. In particular, CF outperforms None by $3.57$ BLEU points. The results demonstrate that the proposed counterfactual baseline can efficiently assign credit to each agent and thus stabilize the training process.

### 6.4 Effect of Top-$k$ Size

As shown in Table X, the model is insensitive to the top-$k$ size. Simply setting $k$ to 1 achieves fairly good performance.

Figure 5: The translation latencies for each sentence in the official test set of WMT14 En-De machine translation dataset.
Latency is the time used to decode a single sentence without minibatching, averaged over three separate runs, and is tested on a GeForce GTX 1080 Ti GPU.

### 6.5 The Effects of Sentence Lengths

We compare the translation quality of Transformer [2], NAT-XE and the proposed NAT-CMAL with regard to different sentence lengths. We divide the sentence pairs in the WMT14 En-De test set into different length buckets according to the length of the source sentence (number of subword tokens). The results are shown in Figure 6. It can be seen that as sentence length increases, the accuracy of NAT-XE drops quickly and the gap between Transformer and NAT-XE also enlarges. Our NAT-CMAL has stable performance over various sentence lengths and, in particular, performs much better than NAT-XE on long sentences. NAT-XE degrades because the drawback of the word-level cross-entropy loss, _i.e_. its inability to model sentence-level consistency, magnifies as sentence length grows. Our CMAL method optimizes the sentence-level objective with cooperative agents and is therefore better at maintaining the global consistency of long sentences. In Figure 5, we compare the per-sentence decoding latency of the autoregressive Transformer and the proposed NAT-CMAL. We conduct the analysis on the WMT14 En-De test set without minibatching and report the average latency of three separate runs. We can see that the latency of Transformer is linear in the decoding length, while that of our NAT-CMAL is nearly constant for various lengths. When the target sentence length is close to $100$, Transformer requires a decoding time close to 1 second even on a high-performance GPU, while our NAT-CMAL uses only $16.4ms$ to decode the sentence.

TABLE X: Effect of Top-$k$ Size (in Eqn. (13)) on Official Test Set of WMT14 En-De Translation Dataset.
top-$k$ | 1 | 2 | 5
---|---|---|---
BLEU | 23.69 | 23.67 | 23.66

Figure 6: The BLEU score comparison between Transformer, NAT-XE, and our NAT-CMAL over sentences in different length buckets on the official test set of WMT14 En-De machine translation dataset.

Figure 7: Comparison of using various baselines $b$ (in Eqn. (9)) on WMT14 En-De machine translation dataset. XE: the performance after pre-training with cross-entropy loss. None: not using a baseline. MA: moving average baseline. SC: self-critical baseline. CF: the proposed counterfactual baseline.

### 6.6 Qualitative Analysis

In Table VII, we provide two examples of translations from Transformer, NAT-XE, NAT-CMAL, as well as the ground-truth translation. Consecutive identical words, highlighted in gray, are most prevalent in the translations of the NAT-XE model, especially for the relatively complex second example sentence. Translations from NAT-CMAL are often more consistent than those from NAT-XE, and are of similar quality to the outputs produced by the autoregressive Transformer teacher. The examples suggest that CMAL training is better at maintaining the global consistency of sentences than the XE loss.

## 7 Conclusion

We have proposed a non-autoregressive sequence generation model and a novel Counterfactuals-critical Multi-Agent Learning (CMAL) algorithm. The decoding inconsistency problem in non-autoregressive models is well addressed by the combined effect of the cooperative agents, the sentence-level team reward, and the individual/compositional counterfactual baselines. The generation quality is further boosted by augmenting the training data with unlabeled inputs. Extensive experiments on the MSCOCO image captioning benchmark and the WMT14 EN-DE machine translation dataset have shown that our non-autoregressive model significantly outperforms previous cross-entropy trained non-autoregressive models while at the same time enjoying the greatest decoding speedup.
In particular, our non-autoregressive image captioning model even achieves performance comparable to state-of-the-art autoregressive models. There are two promising future directions. First, we can apply our method to more sequence generation tasks, _e.g_. speech recognition and dialog response generation. Second, it will be interesting to extend our method to generating very long sequences (_e.g_. hundreds of words) in a non-autoregressive manner. For example, we can design a hierarchical reinforcement learning system, in which a level of manager agents assigns goals to their worker agents and the worker agents simultaneously perform actions to generate the whole sequence.

## Acknowledgments

This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106400), National Natural Science Foundation of China (61922086, 61872366), and Beijing Natural Science Foundation (4192059).

## References

* [1] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey _et al._ , “Google’s neural machine translation system: Bridging the gap between human and machine translation,” _arXiv preprint arXiv:1609.08144_ , 2016.
* [2] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in _Advances in Neural Information Processing Systems_ , 2017, pp. 6000–6010.
* [3] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: Lessons learned from the 2015 mscoco image captioning challenge,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 39, no. 4, pp. 652–663, 2017.
* [4] M. Benzeghiba, R. D. Mori, O. Deroo, S. Dupont, T. Erbes, D. Jouvet, L. Fissore, P. Laface, A. Mertins, and C. a. Ris, “Automatic speech recognition and speech variability: A review,” _Speech Communication_ , vol. 49, no. 10–11, pp. 763–786, 2007.
* [5] J. Gu, J. Bradbury, C. Xiong, V. O. Li, and R.
Socher, “Non-autoregressive neural machine translation,” in _International Conference on Learning Representations_ , 2018. * [6] R. Vedantam, C. L. Zitnick, and D. Parikh, “Cider: Consensus-based image description evaluation,” _Computer Science_ , pp. 4566–4575, 2015. * [7] K. Papineni, S. Roukos, T. Ward, and W. J. Zhu, “Bleu: a method for automatic evaluation of machine translation,” _Meeting on Association for Computational Linguistics_ , pp. 311–318, 2002. * [8] Y.-H. Chang, T. Ho, and L. P. Kaelbling, “All learning is local: Multi-agent learning in global reward games,” in _Advances in neural information processing systems_ , 2004, pp. 807–814. * [9] L. Buşoniu, R. Babuška, and B. De Schutter, “Multi-agent reinforcement learning: An overview,” in _Innovations in multi-agent systems and applications-1_. Springer, 2010, pp. 183–221. * [10] J. N. Foerster, G. Farquhar, T. Afouras, N. Nardelli, and S. Whiteson, “Counterfactual multi-agent policy gradients,” in _Thirty-Second AAAI Conference on Artificial Intelligence_ , 2018. * [11] L. Chen, H. Zhang, J. Xiao, X. He, S. Pu, and S.-F. Chang, “Counterfactual critic multi-agent training for scene graph generation,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 4613–4623. * [12] J. Lee, E. Mansimov, and K. Cho, “Deterministic non-autoregressive neural sequence modeling by iterative refinement,” in _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Brussels, Belgium: Association for Computational Linguistics, Oct.-Nov. 2018, pp. 1173–1182. * [13] L. Kaiser, S. Bengio, A. Roy, A. Vaswani, N. Parmar, J. Uszkoreit, and N. Shazeer, “Fast decoding in sequence models using discrete latent variables,” in _Proceedings of the 35th International Conference on Machine Learning_ , ser. Proceedings of Machine Learning Research, J. Dy and A. Krause, Eds., vol. 80. Stockholmsmässan, Stockholm Sweden: PMLR, 10–15 Jul 2018, pp. 2390–2399. * [14] M. 
Ranzato, S. Chopra, M. Auli, and W. Zaremba, “Sequence level training with recurrent neural networks,” in _4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings_ , Y. Bengio and Y. LeCun, Eds., 2016. * [15] K. Xu, J. L. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” _international conference on machine learning_ , pp. 2048–2057, 2015. * [16] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” _neural information processing systems_ , pp. 3104–3112, 2014. * [17] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” _international conference on learning representations_ , 2015. * [18] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin, “Convolutional sequence to sequence learning,” in _Proceedings of the 34th International Conference on Machine Learning_ , ser. Proceedings of Machine Learning Research, D. Precup and Y. W. Teh, Eds., vol. 70. International Convention Centre, Sydney, Australia: PMLR, 06–11 Aug 2017, pp. 1243–1252. * [19] J. Yu, J. Li, Z. Yu, and Q. Huang, “Multimodal transformer with multi-view visual representation for image captioning,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. PP, pp. 1–1, 10 2019. * [20] L. Guo, J. Liu, X. Zhu, P. Yao, S. Lu, and H. Lu, “Normalized and geometry-aware self-attention network for image captioning,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 10 327–10 336. * [21] Y. Wang, F. Tian, D. He, T. Qin, C. Zhai, and T. 
Liu, “Non-autoregressive machine translation with auxiliary regularization,” in _The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019_. AAAI Press, 2019, pp. 5377–5384. * [22] J. Guo, X. Tan, D. He, T. Qin, L. Xu, and T.-Y. Liu, “Non-autoregressive neural machine translation with enhanced decoder input,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 33, 2019, pp. 3723–3730. * [23] J. Gao, X. Meng, S. Wang, X. Li, S. Wang, S. Ma, and W. Gao, “Masked non-autoregressive image captioning,” _arXiv preprint arXiv:1906.00717_ , 2019. * [24] Z.-c. Fei, “Fast image caption generation with position alignment,” _arXiv preprint arXiv:1912.06365_ , 2019. * [25] R. J. Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” _Machine Learning_ , vol. 8, no. 3-4, pp. 229–256, 1992. * [26] S. J. Rennie, E. Marcheret, Y. Mroueh, J. Ross, and V. Goel, “Self-critical sequence training for image captioning,” _computer vision and pattern recognition_ , 2017. * [27] D. Bahdanau, P. Brakel, K. Xu, A. Goyal, R. Lowe, J. Pineau, A. C. Courville, and Y. Bengio, “An actor-critic algorithm for sequence prediction,” in _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net, 2017. * [28] L. Wu, F. Tian, T. Qin, J. Lai, and T.-Y. Liu, “A study of reinforcement learning for neural machine translation,” in _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Brussels, Belgium: Association for Computational Linguistics, Oct.-Nov. 2018, pp. 3612–3621. * [29] S. Sun, H. Hou, N. Wu, and Z. 
Guo, “Neural machine translation based on prioritized experience replay,” in _International Conference on Artificial Neural Networks_. Springer, 2020, pp. 358–368. * [30] L. Guo, J. Liu, X. Zhu, X. He, J. Jiang, and H. Lu, “Non-autoregressive image captioning with counterfactuals-critical multi-agent learning,” in _Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20_ , C. Bessiere, Ed. International Joint Conferences on Artificial Intelligence Organization, 7 2020, pp. 767–773, main track. * [31] A. Graves, “Generating sequences with recurrent neural networks,” _arXiv preprint arXiv:1308.0850_ , 2013. * [32] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, “Bottom-up and top-down attention for image captioning and visual question answering,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 6077–6086. * [33] J. Libovický and J. Helcl, “End-to-end non-autoregressive neural machine translation with connectionist temporal classification,” in _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Brussels, Belgium: Association for Computational Linguistics, Oct.-Nov. 2018, pp. 3016–3021. * [34] R. S. Sutton and A. G. Barto, _Reinforcement learning: An introduction_. MIT press Cambridge, 1998, vol. 1, no. 1. * [35] Z.-J. Zha, D. Liu, H. Zhang, Y. Zhang, and F. Wu, “Context-aware visual policy network for fine-grained image captioning,” _IEEE transactions on pattern analysis and machine intelligence_ , 2019. * [36] L. Huang, W. Wang, J. Chen, and X.-Y. Wei, “Attention on attention for image captioning,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 4634–4643. * [37] G. Li, L. Zhu, P. Liu, and Y. Yang, “Entangled transformer for image captioning,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2019, pp. 8928–8937. * [38] S. Herdade, A. Kappeler, K. 
Boakye, and J. Soares, “Image captioning: Transforming objects into words,” in _Advances in Neural Information Processing Systems_ , vol. 32. Curran Associates, Inc., 2019, pp. 11 137–11 147. * [39] Y. Pan, T. Yao, Y. Li, and T. Mei, “X-linear attention networks for image captioning,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 10 971–10 980. * [40] Y. Kim and A. M. Rush, “Sequence-level knowledge distillation,” in _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Nov. 2016, pp. 1317–1327. * [41] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick, “Microsoft coco captions: Data collection and evaluation server,” _arXiv preprint arXiv:1504.00325_ , 2015. * [42] A. Karpathy and L. Feifei, “Deep visual-semantic alignments for generating image descriptions,” _computer vision and pattern recognition_ , pp. 3128–3137, 2015. * [43] M. Denkowski and A. Lavie, “Meteor universal: Language specific translation evaluation for any target language,” in _The Workshop on Statistical Machine Translation_ , 2014, pp. 376–380. * [44] C.-Y. Lin, “Rouge: A package for automatic evaluation of summaries,” _Text Summarization Branches Out_ , 2004. * [45] P. Anderson, B. Fernando, M. Johnson, and S. Gould, “Spice: Semantic propositional image caption evaluation,” 2016, pp. 382–398. * [46] D. P. Kingma and J. L. Ba, “Adam: A method for stochastic gradient descent,” in _ICLR: International Conference on Learning Representations_ , 2015, pp. 1–15. * [47] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” _Neural computation_ , vol. 9, no. 8, pp. 1735–1780, 1997. * [48] X. Yang, K. Tang, H. Zhang, and J. Cai, “Auto-encoding scene graphs for image captioning,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 10 685–10 694. * [49] L. Weaver and N. 
Tao, “The optimal reward baseline for gradient-based reinforcement learning,” in _Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence_. Morgan Kaufmann Publishers Inc., 2001, pp. 538–545. * [50] R. Sennrich, B. Haddow, and A. Birch, “Neural machine translation of rare words with subword units,” in _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Berlin, Germany: Association for Computational Linguistics, Aug. 2016, pp. 1715–1725. * [51] M. Ott, S. Edunov, A. Baevski, A. Fan, S. Gross, N. Ng, D. Grangier, and M. Auli, “fairseq: A fast, extensible toolkit for sequence modeling,” in _Proceedings of NAACL-HLT 2019: Demonstrations_ , 2019. * [52] X. Ma, C. Zhou, X. Li, G. Neubig, and E. Hovy, “FlowSeq: Non-autoregressive conditional sequence generation with generative flow,” in _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_. Hong Kong, China: Association for Computational Linguistics, Nov. 2019, pp. 4282–4292. * [53] C. Shao, J. Zhang, Y. Feng, F. Meng, and J. Zhou, “Minimizing the bag-of-ngrams difference for non-autoregressive neural machine translation,” in _The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020_. AAAI Press, 2020, pp. 198–205. * [54] J. Guo, X. Tan, L. Xu, T. Qin, E. Chen, and T.-Y. Liu, “Fine-tuning by curriculum learning for non-autoregressive neural machine translation,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 34, no. 05, 2020, pp. 7839–7846. | Longteng Guo received the B.E. degree from Xi’an Jiaotong University, Shaanxi, China, in 2016. 
He is currently pursuing the Ph.D. degree at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China. His current research interests include deep learning, image content analysis and multimodal deep learning.

Jing Liu received the Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences, Beijing, in 2008. She is a Professor with the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. Her current research interests include deep learning, image content analysis and classification, multimedia understanding and retrieval.

Xinxin Zhu received the B.E. degree from Hebei Normal University, Hebei, China, in 2013, and the Ph.D. degree from Beijing University of Posts and Telecommunications, Beijing, China, in 2019. He is an assistant professor with the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. His current research interests include deep learning, image content analysis and multimodal deep learning.

Hanqing Lu received the Ph.D. degree from the Department of Electronic and Information Science, Huazhong University of Science and Technology. He is a Professor with the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences. His current research interests include image similarity measure, video analysis, multimedia technology and system.
Functional Pearl

Longest Segment of Balanced Parentheses: an Exercise in Program Inversion in a Segment Problem

Shin-Cheng Mu, Tsung-Ju Chiang. Institute of Information Science, Academia Sinica, Taiwan

Given a string of parentheses, the task is to find the longest consecutive segment that is balanced, in linear time. We find this problem interesting because it involves a combination of techniques: the usual approach for solving segment problems, and a theorem for constructing the inverse of a function — through which we derive an instance of shift-reduce parsing.

§ INTRODUCTION

Given a string of parentheses, the task is to find a longest consecutive segment that is balanced. For example, for input the output should be . We also consider a reduced version of the problem in which we return only the length of the segment. While there is no direct application of this problem [However, the length-only version was possibly used as an interview problem collected in, for example, <https://leetcode.com/problems/longest-valid-parentheses/>.], the authors find it interesting because it involves two techniques. Firstly, derivation for such optimal segment problems (those whose goal is to compute a segment of a list that is optimal up to certain criteria) usually follows a certain pattern (e.g. [Bird, 1987, Gibbons, 1997, Zantema, 1992]). We would like to see how well that works for this case. Secondly, at one point we will need to construct the right inverse of a function. It will turn out that we will discover an instance of shift-reduce parsing.

*Specification  Balanced parentheses can be captured by a number of grammars, for example , or and . After trying some of them, the authors decided on

    S → ε | '(' S ')' S  ,

because it is unambiguous and the most concise. Other grammars have worked too, albeit leading to lengthier algorithms.
The parse tree of the chosen grammar can be represented in Haskell as below, with a function specifying how a tree is printed:

    data Tree = Nul | Bin Tree Tree

    pr Nul       = ""
    pr (Bin t u) = "(" ++ pr t ++ ")" ++ pr u  .

For example, letting and , we have and "(()())()" (parentheses are colored to aid the reader). Function is injective but not surjective: it does not yield unbalanced strings. Therefore its right inverse, that is, the function such that , is partial; its domain is the set of balanced parenthesis strings. We implement it by a function that is made total by using the Maybe monad. This function builds a parse tree — should return such that if is balanced, and return otherwise. While this defines already, a direct definition of will be presented in Section <ref>. The problem can then be specified as below, where stands for “longest balanced segment (of parentheses)”:

    lbs         = maxBy size . filtJust . map parse . segments  ,
    segments    = concat . map inits . tails  ,
    filtJust ts = [ t | Just t <- ts ]  ,
    size t      = length (pr t)  .

The function segments returns all segments of a list, with inits and tails respectively computing all prefixes and suffixes of their input lists. The result is passed to filtJust, which collects only those elements wrapped by Just. For example, . [filtJust is called catMaybes in the standard Haskell libraries. The authors think the name filtJust is more informative.] For this problem filtJust always returns a non-empty list, because the empty string, which is a member of for any , can always be parsed to . Given where is a type that is ordered, maxBy picks a maximum element from the input. Finally, size computes the length of . The length-only problem can be specified by .

§ THE PREFIX-SUFFIX DECOMPOSITION

It is known that many optimal segment problems can be solved by following a fixed pattern [Bird, 1987, Gibbons, 1997, Zantema, 1992], which we refer to as prefix-suffix decomposition.
In the first step, finding an optimal segment is factored into finding, for each suffix, an optimal prefix. For our problem, the calculation goes: [7]maxBy size·filtJust·map parse·segments[E] [7]maxBy size·filtJust·map parse·concat·map inits·tails[E] [7]maxBy size·filtJust·concat·map (map parse·inits)·tails[E] [7]maxBy size·concat·map (filtJust·map parse·inits)·tails[E] [7]maxBy size·map (maxBy size·filtJust·map parse·inits)·tails  .[E] For each suffix returned by , the program above computes its longest prefix of balanced parentheses by . We abbreviate the latter to (for “longest balanced prefix”). Generating every suffix and computing for each of them is rather costly. The next step is to try to apply the following scan lemma, which says that if a function can be expressed as right fold, there is a more efficient algorithm to compute : , where [B]scanr::(a→b→b)→b→[1.5mu a1.5mu]→[1.5mu b1.5mu][E] [B]scanr (⊕) e [1.5mu 1.5mu][24] [24]=[1.5mu e1.5mu][E] [B]scanr (⊕) e (x:xs)[24] [24]=𝐥𝐞𝐭 (y:ys)=scanr (⊕) e xs 𝐢𝐧 (x⊕y):y:ys  .[E] If can be written in the form , we do not need to compute of each suffix from scratch; each optimal prefix can be computed, in , from the previous optimal prefix by . If is a constant-time operation, we get a linear-time algorithm. The next challenge is therefore to express as a right fold. Since can be expressed as a right fold — , a reasonable attempt is to fuse with , to form a single . Recall the -fusion theorem: if . The antecedent will be referred to as the fusion condition. To fuse and using Theorem <ref>, we calculate from the LHS of the fusion condition (with and ): [7]map parse ([1.5mu 1.5mu]:map (x:) xss)[E] [7]Just Nul:map (parse·(x:)) xss[E] [7]Just Nul:g' x (map parse xss)[E] [7]g x (map parse xss)[28] [28]  .[E] We can construct if (and only if) there is a function such that . Is that possible? It is not hard to see that the answer is no. Consider and . Since both strings in are not balanced, gives us . 
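To see the scan lemma at work, here is a toy instance of our own (with (+) and 0 standing in for the abstract operator and seed; these names are illustrative, not from the pearl):

```haskell
import Data.List (tails)

-- The scan lemma: map (foldr f e) . tails = scanr f e.
-- We check it on the instance f = (+), e = 0: the sum of every suffix.
sumsOfSuffixes :: [Int] -> [Int]
sumsOfSuffixes = map (foldr (+) 0) . tails

sumsByScan :: [Int] -> [Int]
sumsByScan = scanr (+) 0
```

Both yield `[10,9,7,4,0]` on `[1,2,3,4]`: the scan computes each suffix's value from the previous one rather than from scratch, which is exactly the speed-up we want for computing a longest balanced prefix of every suffix.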
However, , a list of balanced strings. Therefore has to produce something from nothing — an impossible task. We have to generalise our problem such that receives inputs that are more informative.

§ PARTIALLY BALANCED STRINGS

A string of parentheses is said to be left-partially balanced if it may possibly be balanced by adding zero or more parentheses to the left. For example, "(())()))()" is left-partially balanced because is balanced — again we use coloring to help the reader parse the string. Note that prepending "(()(" also yields a balanced string. For a counterexample, the string "()(()" is not left-partially balanced — due to the unmatched '(' in the middle, there is no string that can be prepended to it to make it balanced. While parsing a fully balanced string cannot be expressed as a right fold, it is possible to parse left-partially balanced strings using a right fold. In this section we consider what data structure such a string should be parsed to. We discuss how to parse it in the next section. A left-partially balanced string can always be uniquely factored into a sequence of fully balanced substrings, separated by one or more right parentheses. For example, can be factored into two balanced substrings, "(())()" and "()", separated by "))". One of the possible ways to represent such a string is by a list of trees — a Forest, where the trees are supposed to be separated by a ')'. That is, such a forest can be printed by:

    type Forest = [Tree]   -- non-empty

    prF [t]    = pr t
    prF (t:ts) = pr t ++ ")" ++ prF ts  .

For example, "(())()))()" can be represented by a forest containing three trees:

    ts = [ Bin (Bin Nul Nul) (Bin Nul Nul), Nul, Bin Nul Nul ]  ,

where the first tree prints to "(())()", the last prints to "()", and there is a Nul between them due to the consecutive right parentheses "))" (Nul itself prints to ""). One can verify that prF ts = "(())()))()" indeed. Note that we let the type Forest be non-empty lists of trees.
[We can let the non-emptiness be more explicit by letting . Presentation-wise, both representations have their pros and cons, and we eventually decided on using a list.] The empty string can be represented by , since . The aim now is to construct the right inverse of , such that a left-partially balanced string can be parsed using a right fold. § PARSING PARTIALLY BALANCED STRINGS Given a function , the converse-of-a-function theorem [Bird and de Moor, 1997, de Moor and Gibbons, 2000] constructs the relational converse — a generalised notion of inverse — of . The converse is given as a relational fold whose input type is , which can be any inductively-defined datatype with a polynomial base functor. We specialise the general theorem to our needs: we use it to construct only functions, not relations, and only for the case where is a list type. Given , if we have and satisfying: \begin{align*} \ensuremath{\Varid{f}\;\Varid{base}} &= \ensuremath{[\mskip1.5mu \mskip1.5mu]} \quad\wedge \\ \ensuremath{\Varid{f}\;(\Varid{step}\;\Varid{x}\;\Varid{t})} &= \ensuremath{\Varid{x}\mathbin{:}\Varid{f}\;\Varid{t}} \mbox{~~,} \end{align*} then is a partial right inverse of . That is, we have for all in the range of . While the general version of the theorem is not trivial to prove, the version above, specialised to functions and lists, can be verified by an easy induction on the input list. Recall that we wish to construct the right inverse of using Theorem <ref>. It will be easier if we first construct a new definition of , one that is inductive, does not use , and does not rely on . For a base case, . It is also immediate that . 
When the list contains more than one tree and the first tree is not , we calculate: [7]prF (Bin t u:ts)[E] [7]34 (34++pr t++34 )34++pr u++34 )34++prF ts[E] [7]'(':prF (t:u:ts)  .[E] We have thus derived the following new definition of : [B]prF [1.5mu Nul1.5mu][20] [20]=34 34[E] [B]prF (Nul:ts)[20] [20]=')':prF ts[E] [B]prF (Bin t u:ts)[20] [20]='(':prF (t:u:ts)  .[E] We are now ready to invert by Theorem <ref>, which amounts to finding and such that and for or . With the inductive definition of in mind, we pick , and the following meets the requirement: [B]step ')' ts[24] [B]step '(' (t:u:ts)[24] [27]Bin t u:ts  .[E] We have thus constructed . If we expand the definitions, we have [B]prF^0.5-1 34 34[16] [16]=[1.5mu Nul1.5mu][E] [B]prF^0.5-1 (')':xs)[16] [16]=Nul:prF^0.5-1 xs[E] [B]prF^0.5-1 ('(':xs)[16] [16]=𝐜𝐚𝐬𝐞 prF^0.5-1 xs 𝐨𝐟 (t:u:ts)→Bin t u:ts  ,[E] which is pleasingly symmetrical to the inductive definition of . For an operational explanation, a right parenthesis indicates starting a new tree, thus we start freshly with a ; a left parenthesis ought to be the leftmost symbol of some , thus we wrap the two most recent siblings into one tree. When there are no such two siblings (that is, in the expression evaluates to a singleton list), the input is not a left-partially balanced string — appears too early, and the result is undefined. Readers may have noticed the similarity to shift-reduce parsing, in which, after reading a symbol we either "shift" the symbol by pushing it onto a stack, or "reduce" the symbol against a top segment of the stack. Here, the forest is the stack. The input is processed right-to-left, as opposed to left-to-right, which is more common when talking about parsing. We shall discuss this issue further in Section <ref>. We could proceed to work with for the rest of this pearl but, for clarity, we prefer to make the partiality explicit. 
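The inverse just derived can be transcribed into runnable Haskell as follows (a sketch: `prFInv` is our name for the inverse, and the datatypes repeat the earlier definitions; the inverse is partial, so applying it to a string that is not left-partially balanced fails at run time):

```haskell
data Tree = Nul | Bin Tree Tree

pr :: Tree -> String
pr Nul       = ""
pr (Bin t u) = "(" ++ pr t ++ ")" ++ pr u

type Forest = [Tree]   -- non-empty by convention

prF :: Forest -> String
prF [t]    = pr t
prF (t:ts) = pr t ++ ")" ++ prF ts

-- Right inverse of prF, processing the input right-to-left:
-- ')' starts a fresh tree; '(' wraps the two most recent siblings.
prFInv :: String -> Forest
prFInv ""       = [Nul]
prFInv (')':xs) = Nul : prFInv xs
prFInv ('(':xs) = case prFInv xs of (t:u:ts) -> Bin t u : ts
```

Running `prF (prFInv "(())()))()")` reproduces the input, matching the three-tree forest shown earlier.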
Let be the monadified version of , given by: [B]parseF::String→Maybe Forest[E] [B]parseF 34 34[16] [16]=Just [1.5mu Nul1.5mu][E] [B]parseF (x:xs)[16] [16]=parseF xs0.7>>=stepM x  ,[E] [3]𝐰𝐡𝐞𝐫𝐞 [10] [10]stepM ')' ts[34] [34]=Just (Nul:ts)[E] [10]stepM '(' [1.5mu t1.5mu][34] [10]stepM '(' (t:u:ts)[34] [34]=Just (Bin t u:ts)  ,[E] where is monadified — for the case missing in we return . To relate to , notice that . We therefore have [B]parse::String→Maybe Tree[E] [B]parse=unwrapM0.7<=<parseF  ,[E] [B]unwrapM [1.5mu t1.5mu][14] [14]=Just t[E] [B]unwrapM [14] [14]=Nothing  .[E] where is (reversed) Kleisli composition. That is, calls , and declares success only when the input can be parsed into a single tree. § LONGEST BALANCED PREFIX IN A FOLD Recall our objective at the close of Section <ref>: to compute in a right fold, in order to obtain a faster algorithm using the scan lemma. Now that we have where is a right fold, we perform some initial calculation whose purpose is to factor the postprocessing out of the main computation: [7]maxBy size·filtJust·map parse·inits[E] [7]maxBy size·filtJust·map (unwrapM0.7<=<parseF)·inits[E] [7]maxBy size·filtJust·map (unwrapM0.7=<<)·map parseF·inits[E] [7]maxBy size·map unwrap·filtJust·map parseF·inits[E] [7]unwrap·maxBy (size·unwrap)·filtJust·map parseF·inits  .[E] In the penultimate step is moved leftwards past and becomes , defined by: [B]unwrap [1.5mu t1.5mu][13] [B]unwrap [13] [13]=Nul  .[E] Recall that . The aim now is to fuse , , and with . 
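Before carrying out the fusion, the parser just defined can be transcribed into self-contained, runnable form (our rendering of the definitions above; `Tree` and `pr` repeat the earlier declarations):

```haskell
import Control.Monad ((<=<))

data Tree = Nul | Bin Tree Tree

pr :: Tree -> String
pr Nul       = ""
pr (Bin t u) = "(" ++ pr t ++ ")" ++ pr u

type Forest = [Tree]

-- parse a left-partially balanced string into a forest, right-to-left
parseF :: String -> Maybe Forest
parseF ""     = Just [Nul]
parseF (x:xs) = parseF xs >>= stepM x
  where
    stepM ')' ts       = Just (Nul : ts)
    stepM '(' (t:u:ts) = Just (Bin t u : ts)
    stepM '(' _        = Nothing   -- '(' with fewer than two trees: fail

-- succeed only when the whole string parses to a single tree
parse :: String -> Maybe Tree
parse = unwrapM <=< parseF
  where
    unwrapM [t] = Just t
    unwrapM _   = Nothing
```

For instance, `parse "(()())()"` yields a tree that prints back to the input, while `parse "())"` yields `Nothing`.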
By Theorem <ref>, to fuse with , we need to construct that meets the fusion condition:

    map parseF ([] : map (x:) xss) = g x (map parseF xss)  .

Now that we know that parseF is a fold, the calculation goes:

      map parseF ([] : map (x:) xss)
    = Just [Nul] : map (parseF . (x:)) xss
    = Just [Nul] : map (\ts -> parseF ts >>= stepM x) xss
    = Just [Nul] : map (>>= stepM x) (map parseF xss)  .

Therefore we have

    map parseF . inits :: String -> [Maybe Forest]
    map parseF . inits =
      foldr (\x tss -> Just [Nul] : map (>>= stepM x) tss) [Just [Nul]]  .

Next, we fuse filtJust with this fold by Theorem <ref>. After some calculations, we get:

    filtJust . map parseF . inits :: String -> [Forest]
    filtJust . map parseF . inits = foldr (\x tss -> [Nul] : extend x tss) [[Nul]]  ,
      where extend ')' tts = map (Nul:) tts
            extend '(' tts = [ Bin t u : ts | (t:u:ts) <- tts ]  .

After the fusion we need not keep the Maybe entries in the fold; the computation returns a collection of forests. If the next character is ')', we add a Nul to every forest. If the next character is '(', we choose those forests having at least two trees, and combine them — the list comprehension keeps only the forests that match the pattern and throws away those that do not. Note that [Nul], to which the empty string is parsed, is always added to the collection of forests. To think about how to deal with , we consider an example. Figure <ref> shows the results of and for prefixes of , where , , and are respectively abbreviated to , , and . The function chooses between and , the two parses resulting in single trees, and returns . However, notice that is also the head of , the last forest returned by . In general, the largest singleton parse tree will also be present in the head of the last forest returned by .
One can intuitively see why: if we print them both, the former is a prefix of the latter. Therefore, can be replaced by . To fuse with by Theorem <ref>, we need to construct a function that satisfies the fusion condition

    last ([Nul] : extend x tss) = step x (last tss)  ,

where is a non-empty list of forests. The case when is easy — choosing will do the job. For the case when we need to analyse the result of , and use the property that forests in are ordered in ascending lengths.

* If , a forest having only one tree, there is no forest in that contains two or more trees. Therefore returns an empty list, and .
* Otherwise, would not be empty, and . We may then combine the first two trees, as would do.

In summary, we have

    last . filtJust . map parseF . inits :: String -> Forest
    last . filtJust . map parseF . inits = foldr step [Nul]  ,
      where step ')' ts       = Nul : ts
            step '(' [t]      = [Nul]
            step '(' (t:u:ts) = Bin t u : ts  ,

which is now a total function on strings of parentheses. The function step derived above turns out to be with one additional case (). What we have done in this section can be seen as justifying this extra case (which is a result of case (1) in the fusion of ), which is not as trivial as one might think.

§ WRAPPING UP

We can finally resume the main derivation in Section <ref>:

      maxBy size . map (maxBy size . filtJust . map parse . inits) . tails
    = maxBy size . map (head . foldr step [Nul]) . tails
    = maxBy size . map head . scanr step [Nul]  .

We have therefore derived:

    lbs = maxBy size . map head . scanr step [Nul]  ,

where step is as defined in the end of Section <ref>.
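As a sanity check, the derived algorithm can be run directly. This is our own runnable sketch: since `maxBy` is not given a concrete definition in the pearl, we stand in `maximumBy (comparing size)` for it, and repeat the datatype declarations so the fragment is self-contained:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

data Tree = Nul | Bin Tree Tree

pr :: Tree -> String
pr Nul       = ""
pr (Bin t u) = "(" ++ pr t ++ ")" ++ pr u

-- the fold body derived above, including the extra total-making case
step :: Char -> [Tree] -> [Tree]
step ')' ts       = Nul : ts
step '(' [_]      = [Nul]
step '(' (t:u:ts) = Bin t u : ts

-- longest balanced segment, via one right-to-left scan
lbs :: String -> Tree
lbs = maximumBy (comparing size) . map head . scanr step [Nul]
  where size = length . pr
```

For example, printing `lbs ")()())"` gives `"()()"`, and an input with no balanced segment yields the empty tree.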
To avoid recomputing the sizes in , we can annotate each tree by its size: letting , resulting in an algorithm that runs in linear time:

    lbs = fst . maxBy snd . map head . scanr step [(Nul, 0)]  ,
      where step ')' ts               = (Nul, 0) : ts
            step '(' [t]              = [(Nul, 0)]
            step '(' ((t,m):(u,n):ts) = (Bin t u, 2 + m + n) : ts  .

Finally, the size-only version can be obtained by fusing into . It turns out that we do not need to keep the actual trees, but only their sizes —

    lbsl = maximum . map head . scanr step [0]  ,
      where step ')' ts       = 0 : ts
            step '(' [t]      = [0]
            step '(' (m:n:ts) = (2 + m + n) : ts  .

Figure <ref>: measured user time (seconds) for inputs of 1, 2, 4, 6, 8, and 10 million parentheses.

We ran some simple experiments to measure the efficiency of the algorithm. The test machine was a laptop computer with an Apple M1 chip (8 cores, 3.2GHz) and 16GB RAM. We ran on randomly generated inputs containing 1, 2, 4, 6, 8, and 10 million parentheses, and measured the user times. The results, shown in Figure <ref>, confirmed the linear-time behaviour.

§ CONCLUSIONS AND DISCUSSIONS

So we have derived a linear-time algorithm for solving the problem. We find it an interesting journey because it relates two techniques: prefix-suffix decomposition for solving segment problems, and the converse-of-a-function theorem for program inversion. In Section <ref> we generalised from trees to forests. Generalisations are common when applying the converse-of-a-function theorem. It was observed that the trees in a forest are those along the left spine of the final tree, therefore such a generalisation is referred to as switching to a “spine representation” [Mu and Bird, 2003]. What we derived in Section <ref> and <ref> is a compacted form of shift-reduce parsing, where the input is processed right-to-left.
The forest serves as the stack, but we do not need to push the parser state to the stack, as is done in shift-reduce parsing. If we were to process the input in the more conventional left-to-right order, the corresponding grammar would be . It is an SLR(1) grammar whose parse table contains 5 states. Our program is much simpler. A possible reason is that consecutive shifting and reducing are condensed into one step. It is likely that parsing SLR(1) languages can be done in a fold. The relationship between LR parsing and the converse-of-a-function theorem awaits further investigation. There are certainly other ways to solve the problem. For example, one may interpret a '(' as a $-1$, and a ')' as a $+1$. A left-partially balanced string would be a list whose right-to-left running sum is never negative. One may then apply the method in [Zantema, 1992] to find the longest such prefix for each suffix. The result will be an algorithm that maintains the sum in a loop — an approach that might be more commonly adopted by imperative programmers. The problem can also be seen as an instance of maximum-marking problems — choosing elements in a data structure that meet a given criterion while maximising a cost function — to which methods of [Sasano et al., 2001] can be applied.

The problem was suggested by Yi-Chia Chen. The authors would like to thank our colleagues in the PLFM group in IIS, Academia Sinica, in particular Hsiang-Shang `Josh' Ko, Liang-Ting Chen, and Ting-Yan Lai, for valuable discussions. Also thanks to Chung-Chieh Shan and Kim-Ee Yeoh for their advice on earlier drafts of this paper. We are grateful to the reviewers of previous versions of this article, who gave detailed and constructive criticisms that helped a lot in improving this work.

[Bird, 1987] R. S. Bird. An introduction to the theory of lists. In M. Broy, editor, Logic of Programming and Calculi of Discrete Design, number 36 in NATO ASI Series F, pages 5–42. Springer, 1987.
[Bird and de Moor, 1997] R. S. Bird and O. de Moor. Algebra of Programming. Prentice Hall, 1997. ISBN 0-13-507245-X.
[de Moor and Gibbons, 2000] O. de Moor and J. Gibbons. Pointwise relational programming. In T. Rus, editor, Algebraic Methodology and Software Technology, number 1816 in LNCS, pages 371–390. Springer, 2000.
[Gibbons, 1997] J. Gibbons. Calculating functional programs. In Proceedings of ISRG/SERG Research Colloquium. Oxford Brookes University, November 1997.
[Mu and Bird, 2003] S.-C. Mu and R. S. Bird. Theory and applications of inverting functions as folds. Science of Computer Programming (Special Issue for Mathematics of Program Construction), 51:87–116, 2003.
[Sasano et al., 2001] I. Sasano, Z. Hu, and M. Takeichi. Generation of efficient programs for solving maximum multi-marking problems. In ACM SIGPLAN Workshop on Semantics, Applications and Implementation of Program Generation (SAIG'01), number 2196 in LNCS, pages 72–91. Springer, 2001.
[Zantema, 1992] H. Zantema. Longest segment problems. Science of Computer Programming, 18(1):39–66, 1992.
Functional Pearl

A Greedy Algorithm for Dropping Digits

Richard Bird^1 Shin-Cheng Mu^2

Consider the puzzle: given a number, remove digits such that the resulting number is as large as possible. Various techniques were employed to derive a linear-time solution to the puzzle: predicate logic was used to justify the structure of a greedy algorithm, a dependently-typed proof assistant was used to give a constructive proof of the greedy condition, and equational reasoning was used to calculate the greedy step as well as the final, linear-time optimisation.

§ INTRODUCTION

Greedy algorithms abound in computing. Well-known examples include Huffman coding, minimum cost spanning trees, and the coin-changing problem; but there are many others. This pearl adds yet another problem to this collection. However, as has been said before, greedy algorithms can be tricky things. The trickiness is not in the algorithm itself, which is usually quite short and easy to understand, but in the proof that it does produce a best possible result. The mathematical theory of matroids, see [Lawler, 1976], and its generalisation to greedoids, see [Korte et al., 1991], have been developed to explain why and when many greedy algorithms work, although the theory does not cover all possible cases. In practice, greedy algorithms are usually verified directly rather than by extracting the underlying matroid or greedoid. [Curtis, 2003] discusses four basic ways in which a greedy algorithm can be proved to work, one of which will be followed with our problem. The problem is to remove digits from a number containing at least digits, so that the result is as large as possible. For example, removing one digit from the number gives as the largest possible result, while removing three digits yields .
Given that a number can be seen as a list of digits, the problem can be generalised to removing, from a list whose elements are drawn from a linearly ordered type, elements so that the result is largest under lexicographic ordering. While the problem was invented out of curiosity rather than for a pressing application, it has apparently been used as an interview question for candidates seeking jobs in computing. [The problem is listed on LeetCode as <https://leetcode.com/problems/remove-k-digits/>, where the objective is to find the smallest number rather than the largest, but the principles are the same.] The hope is that we can discover an algorithm that takes linear time in the number of elements. The first task is to give a formal specification of the problem. Consider the function that removes a single element from a non-empty list in all possible ways: [We use notations similar to Haskell, with slight variations. For example, the type for lists is denoted by , and we allow as a pattern in function definitions, to match our inductive proofs. Laziness is not needed. [B]drops::List a→List (List a)[E] [B]drops [1.5mu x1.5mu][15] [18][1.5mu [1.5mu 1.5mu]1.5mu][E] [B]drops (x:xs)[15] [18]xs:map (x:) (drops xs)[E] For example, . The function for computing a solution can be defined by a simple exhaustive search: [B]solve::Ord a⇒Nat→List a→List a[E] [B]solve k[10] [13]maximum·.apply k step·.wrap  ,[E] [B]step::List (List a)→List (List a)[E] [B]step=concat·.map drops  .[E] The function converts a given input into a singleton list, applies the function exactly times to produce all possible candidates, and computes the lexical maximum of the result. is the type of natural numbers. The function is lifted to a list of candidates. It computes, for each candidate, all the ways to drop element. 
Functions wrap and apply are respectively defined by

    wrap :: a -> List a
    wrap x = [x]  ,

    apply 0       f = id
    apply (1 + k) f = apply k f . f  ,

For brevity, for the rest of the pearl we will write as . Since a sequence of length has drops, and computing the larger of two lists of length takes steps, this method for computing the answer takes steps. We aim to do better.

§ A GREEDY ALGORITHM

To obtain a greedy algorithm, one would wish that the best way to remove digits can be computed by removing digit times, and each time we greedily remove the digit that makes the current result as large as possible. That is, letting

    gstep :: Ord a => List a -> List a
    gstep = maximum . drops  ,

we wish to have \begin{align} \ensuremath{\Varid{maximum}\hsdot{\cdot }{.}{\Varid{step}}^{\Varid{k}}\hsdot{\cdot }{.}\Varid{wrap}\mathrel{=}{\Varid{gstep}}^{\Varid{k}}~~.} \label{eq:k-gsteps-correct} \end{align} One cannot just claim that this strategy works without proper reasoning, however. It can be shown that (<ref>) is true if the following monotonicity condition holds (we denote lexicographic ordering on lists by , and ordering on individual elements by ): \begin{align} \ensuremath{\Varid{xs}\unlhd\Varid{ys}~\Rightarrow~(\forall \Varid{xs'}\hsforall \in\Varid{drops}\;\Varid{xs}\mathbin{:}(\exists \Varid{ys'}\hsexists \in\Varid{drops}\;\Varid{ys}\mathbin{:}\Varid{xs'}\unlhd\Varid{ys'}))~~,} \label{eq:mono1} \end{align} where is overloaded to denote membership for lists. That is, if is no worse than , whatever candidate we can obtain from , we can compute a candidate from that is no worse either. Unfortunately, (<ref>) does not hold for our problem. Consider , is a possible result of , but the best we can do by removing one digit from is . Note that (<ref>) does not hold even if we restrict and to lists that can be obtained from the same source — certainly and are both results of removing two digits from, say, .
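The exhaustive specification can be transcribed and run on small inputs (our rendering, using Int for Nat and built-in lists; the digit strings in the test are our own examples):

```haskell
-- remove a single element from a non-empty list in all possible ways
drops :: [a] -> [[a]]
drops [_]    = [[]]
drops (x:xs) = xs : map (x:) (drops xs)

-- extend every candidate by dropping one more element
step :: [[a]] -> [[a]]
step = concat . map drops

wrap :: a -> [a]
wrap x = [x]

apply :: Int -> (a -> a) -> a -> a
apply 0 _ = id
apply k f = apply (k - 1) f . f

-- exhaustive search: drop k elements in all possible ways, take the max
solve :: Ord a => Int -> [a] -> [a]
solve k = maximum . apply k step . wrap
```

On digit strings the lexicographic maximum coincides with the numerically largest result, since all candidates have the same length: for example `solve 1 "3142"` drops the 1.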
In the terminology of [Curtis, 2003], the Better-Global principle — which says that if one first step is no worse than another, then there is a global solution using the former that is no worse than one using the latter — does not hold for this problem. What does hold is a weaker property, the Best-Global principle: a globally optimal solution can be obtained by starting out with the best possible step. Formally, what we do have is that for all and : \begin{equation} \begin{split} &\ensuremath{(\forall \Varid{xs'}\hsforall \in{\Varid{step}}^{\mathrm{1}\mathbin{+}\Varid{k}}\;[\mskip1.5mu \Varid{xs}\mskip1.5mu]\mathbin{:}} \\ & \qquad \ensuremath{(\exists \Varid{zs}\hsexists \in({\Varid{step}}^{\Varid{k}}\cdot\Varid{wrap}\cdot\Varid{gstep})\;\Varid{xs}\mathbin{:}\Varid{xs'}\unlhd\Varid{zs}))~~.} \end{split} \label{eq:step-gstep} \end{equation} That is, letting be an arbitrary result of dropping elements from , one can always obtain a result that is no worse than by greedily dropping the best element (by ) and then dropping arbitrary elements. Property (<ref>) will be proved in Section <ref>. For now, let us see how (<ref>) helps to prove (<ref>), that is, The proof proceeds by induction on . For both sides reduce to . For the inductive case we need the universal property of : for all and , and for total order on : We restrict our discussion to total orders to ensure that returns one unique result. More general scenarios are discussed in [Bird and de Moor, 1997]. \begin{align*} \begin{split} \ensuremath{\Varid{s}\mathrel{=}\Varid{maximum}\cdot\Varid{p}~\mathrel{\equiv}~}& \ensuremath{(\forall \Varid{x}\hsforall \mathbin{:}\Varid{s}\;\Varid{x}\in\Varid{p}\;\Varid{x})\mathrel{\wedge}} \\ &\quad \ensuremath{(\forall \Varid{x}\hsforall ,\Varid{y}\mathbin{:}\Varid{y}\in\Varid{p}\;\Varid{x}\Rightarrow\Varid{y}\unlhd\Varid{s}\;\Varid{x})~~.} \end{split} \end{align*} To prove we need to show that 1. for all , is a member of , which is a routine proof, and 2.
for all and for all , we have We reason: [6]ysgstep^1+k xs[E] [6]ysgstep^k (gstep xs)[E] [6]ysmaximum (step^k [1.5mu gstep xs1.5mu])[E] [6](∃xs∈step^k [1.5mu gstep xs1.5mu]:ysxs)[E] [6]ys∈step^1+k [1.5mu xs1.5mu]  ,[E] which is our assumption. We have thus proved (<ref>). Remark: the proof above was carried out using predicate logic. There is a relational counterpart, in the style of [Bird and de Moor, 1997], that is slightly more concise and more general, which we unfortunately cannot present without adding a section introducing the notations and rules. § REFINING THE GREEDY STEP We will prove the greedy condition (<ref>) in Section <ref>. It will turn out that the proof makes use of properties of that will be evident from its inductive definition. Therefore we calculate an inductive definition of in this section. It is easy to derive . For the inductive case we reason [7]gstep (x:y:xs)[E] [7]maximum (drops (x:y:xs))[E] [7]maximum ((y:xs):map (x:) (drops (y:xs)))[E] [7]max (y:xs) (maximum (map (x:) (drops (y:xs))))[E] [7]max (y:xs) (x:maximum (drops (y:xs)))[E] [7]max (y:xs) (x:gstep (y:xs))[E] [7]𝐢𝐟 x<y 𝐭𝐡𝐞𝐧 y:xs[E] [8]𝐞𝐥𝐬𝐞 𝐢𝐟 xy 𝐭𝐡𝐞𝐧 x:max xs (gstep (y:xs))[E] [9]𝐞𝐥𝐬𝐞 x:gstep (y:xs)[E] [7]𝐢𝐟 x<y 𝐭𝐡𝐞𝐧 y:xs 𝐞𝐥𝐬𝐞 x:gstep (y:xs)  .[E] Hence we have [B]gstep [1.5mu x1.5mu][17] [20][1.5mu 1.5mu][E] [B]gstep (x:y:xs)[17] [20]𝐢𝐟 x<y 𝐭𝐡𝐞𝐧 y:xs 𝐞𝐥𝐬𝐞 x:gstep (y:xs)  .[E] It turns out that deletes the last element of the longest descending prefix of . For easy reference, we will refer to this element as the hill foot of the list. For example, , where the hill foot, the element deleted, is the third . 
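The calculated greedy step runs as expected (our transcription of the derivation above; the test strings are our own examples):

```haskell
-- greedily drop one element: delete the "hill foot", the last element
-- of the longest descending prefix
gstep :: Ord a => [a] -> [a]
gstep [_]      = []
gstep (x:y:xs) = if x < y then y : xs else x : gstep (y : xs)
```

For instance, `gstep "4431"` deletes the '1' closing the descending prefix, giving `"443"`, and two greedy steps on `"3142"` give `"42"`, agreeing with the exhaustive search.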
§ PROVING THE GREEDY CONDITION In this section we aim to prove (<ref>), recited here: \begin{align*} \ensuremath{(\forall \Varid{xs'}\hsforall \in{\Varid{step}}^{\mathrm{1}\mathbin{+}\Varid{k}}\;[\mskip1.5mu \Varid{xs}\mskip1.5mu]\mathbin{:}(\exists \Varid{zs}\hsexists \in({\Varid{step}}^{\Varid{k}}\cdot\Varid{wrap}\cdot\Varid{gstep})\;\Varid{xs}\mathbin{:}\Varid{xs'}\unlhd\Varid{zs}))~~.} \end{align*} Proving a proposition containing universal and existential quantification can be thought of as playing a game. The opponent challenges us by providing and a way of removing elements to obtain . We win by presenting a way of removing elements from , such that the result satisfies . Equivalently, we present a way of removing elements from , while making sure that the hill foot of is among the elements removed. To prove (<ref>) is to come up with a strategy to always win the game. We could just invent the strategy and argue for its correctness. However, we experimented with another approach: could a proof assistant offer some help? Can we conjecture the existence of a function that, given the opponent's input, computes , and try to develop the function and the proof that at the same time, letting their developments mutually guide each other? It would be a modern realisation of Dijkstra's belief that a program and its proof should be developed hand-in-hand [Dijkstra, 1974]. *The datatypes. We modeled the problem in the dependently typed language/proof assistant Agda. For the setting-up, we need to define a number of datatypes. Firstly, given a type with a total ordering (which derives a strict ordering ), lexicographic ordering for is defined by: To be consistent with earlier parts of this pearl, we use Haskell-like notations for the Agda code. Typing relation is denoted by and list cons by . The two constructors of are and . Universally quantified implicit arguments in constructor and function declarations are omitted.
  data _⊴_ :: List a → List a → Set where
    nil :: [] ⊴ xs
    lt  :: x < y → (x:xs) ⊴ (y:ys)
    eq  :: xs ⊴ ys → (x:xs) ⊴ (x:ys)  ,

that is, [] is no larger than any list, (x:xs) ⊴ (y:ys) if x < y, and two lists having the same head are compared by their tails. Secondly, rather than actually deleting elements of a list, in proofs it helps to remember which elements are deleted. The following datatype can be seen as instructions on how k elements are deleted from xs:

  data Dels :: Nat → List a → Set where
    end  :: Dels 0 []
    keep :: Dels k xs → Dels k (x:xs)
    del  :: Dels k xs → Dels (1 + k) (x:xs)  .

For example, keep (del (keep (del end))) says that the 2nd and 4th elements of a list of length 4 (counting from 1) are to be deleted. The function dels actually carries out the instruction:

  dels :: (xs :: List a) → Dels k xs → List a
  dels []     end       = []
  dels (x:xs) (del ds)  = dels xs ds
  dels (x:xs) (keep ds) = x : dels xs ds  .

For example, dels [a,b,c,d] (keep (del (keep (del end)))) = [a,c]. Thirdly, the predicate HFoot i xs holds if the i-th element in xs is the hill foot, that is, the element that would be removed by gstep:

  data HFoot :: Nat → List a → Set where
    last :: HFoot 0 (x:[])
    this :: x < y → HFoot 0 (x:y:xs)
    next :: x ≥ y → HFoot i (y:xs) → HFoot (1 + i) (x:y:xs)  .

For example, next x≥y (this y<z) has type HFoot 1 (x:y:z:zs), since the element at position 1 is the last in the descending prefix [x,y]. Finally, we define a datatype such that, for all i and ds, the relation IsDel i ds holds if ds instructs that the i-th element of xs is to be deleted. Its definition is repetitive and thus omitted. *The function and the proofs. The aim is to construct the following function alter:

  alter :: Dels (1 + k) xs → HFoot i xs → Dels (1 + k) xs  .

It takes an instruction, given by the opponent, that deletes 1+k elements from xs, and an evidence that the i-th element of xs is its hill foot, and produces a possibly altered instruction that also deletes 1+k elements.
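As a quick sanity check, the deletion instructions and `dels` can be transcribed into plain Python (our own sketch, not part of the Agda development; encoding an instruction as a list of booleans is our choice):

```python
def dels(xs, ds):
    """Carry out a deletion instruction on xs.

    ds is a list of booleans, one per element of xs:
    True means "delete this element", False means "keep it".
    This mirrors the Agda datatype Dels, whose numeric index
    counts the number of True entries in the instruction.
    """
    assert len(xs) == len(ds)
    return [x for x, d in zip(xs, ds) if not d]
```

For instance, `dels(['a', 'b', 'c', 'd'], [False, True, False, True])` yields `['a', 'c']`, matching the example in the text.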
Recalling the discussion in the beginning of this section, alter should satisfy two properties:

  (ds :: Dels (1 + k) xs) → (ft :: HFoot i xs) → dels xs ds ⊴ dels xs (alter ds ft)  ,
  (ds :: Dels (1 + k) xs) → (ft :: HFoot i xs) → IsDel i (alter ds ft)  .

Given ds and ft, the first property says that alter always produces a list that is not worse than that produced by ds, while the second says that alter does delete the hill foot. The goal now is to develop alter and the two proofs together. The reader is invited to give it a try — it is more fun trying it yourself! [The Agda code can be downloaded from <https://scm.iis.sinica.edu.tw/home/2020/dropping-digits/>.] In most of the steps, the type and proof constraints leave us with only one reasonable choice, while in one case we are led to discover a lemma. The cases to consider are: [Curly brackets are used in Agda to mention implicit arguments. In each case here we pattern match such that the readers know what input list we are dealing with.] * alter (keep ds) (this x<y) — the opponent keeps x, which is the hill foot because x < y. We have to delete x; a simple way to preserve ⊴ is to keep the rest. Thus we return del (keep (deleteAny ds)), where deleteAny ds can be any instruction that deletes k elements in the tail — it doesn't matter how it does it! * alter (keep ds) (next x≥y ft) — the opponent keeps x, and we have not reached the hill foot yet. In this case it is safe to imitate the opponent and keep x too, before recursively calling alter to generate the rest of the instruction. * alter (del end) ft — the opponent deletes the sole element. In this case we delete it too, returning del end. * alter (del ds) (this x<y) — the element x is the hill foot, and is deleted by the opponent. In this case we can do exactly the same. We end up returning the same instruction as the opponent's but it is fine, since both properties are satisfied. * alter (del ds) (next x≥y ft) — the opponent deletes x, which is in the descending prefix but is not the hill foot. This turns out to be the most complex case. One may try to imitate and delete x as well, returning del applied to a recursively computed instruction. However, ds, having type Dels k (y:xs), cannot be fed to a recursive call to alter, whose argument type is Dels (1 + k') (y:xs).
It could be the case that k is 0 and, since we have not deleted the hill foot yet, returning del ds would violate the requirement that the hill foot be deleted. The lesson learnt from the type is that we can only delete 1+k elements, and we have to save at least one deletion for the hill foot, which is yet to come. We thus have to further distinguish between two cases: * k = 1 + k' for some k'. In this case we still have room to delete more elements, thus we can safely imitate the opponent, delete x, and recursively call alter. * k = 0. In this case we keep x, returning keep (delfoot ft), where delfoot ft computes an instruction that deletes exactly one element, the hill foot. What is left to prove to establish the ⊴-property for this case can be extracted to be a lemma:

  x ≥ y → (ft :: HFoot i (y:ys)) →
  (y:ys) ⊴ (x : dels (y:ys) (delfoot ft))  ,

whose proof is an induction on the list, keeping x ≥ y as an invariant. If the list is a singleton or y is the hill foot, we are done. Otherwise we inductively inspect the tail. Without Agda, it would not be easy to discover this lemma.

  alter :: Dels (1 + k) xs → HFoot i xs → Dels (1 + k) xs
  alter (keep ds) (this x<y)                = del (keep (deleteAny ds))
  alter (keep ds) (next x≥y ft)             = keep (alter ds ft)
  alter (del end) _                         = del end
  alter (del ds)  (this x<y)                = del ds
  alter {k = 0}     (del ds) (next x≥y ft)  = keep (delfoot ft)
  alter {k = 1 + k} (del ds) (next x≥y ft)  = del (alter ds ft)  .

[Figure: the function alter, where delfoot generates a Dels 1 instruction, and a graphical summary. Elements with dotted outlines are those that are considered already; the one with a thick outline is the current element. It is the hill foot if it is smaller than the element to the right, or if it is the last. Deleted elements are marked with a cross.] In summary, the function we have constructed, and a graphical summary, are shown in Figure <ref>.
Remark: We may also tuple alter and the properties together, and try to construct:

  (ds :: Dels (1 + k) xs) → (ft :: HFoot i xs) →
  ∃ (λ (ds' :: Dels (1 + k) xs) → (dels xs ds ⊴ dels xs ds') × IsDel i ds')  .

An advantage is that the code of each case of alter is next to its proof. A disadvantage is that having to pattern-match the result of a recursive call psychologically discourages one from making the call when only the instruction is needed. It is up to personal preference which style one prefers. § IMPROVING EFFICIENCY Back to our code. We have proved that solve k = gstep^k, with gstep given by:

  gstep [x]      = []
  gstep (x:y:xs) = if x < y then y:xs else x : gstep (y:xs)  .

Each time gstep is called, it takes up to linearly many steps to go through the descending prefix and find the hill foot, before the next invocation of gstep starts from the beginning of the list again. Therefore, solve k takes O(k·n) steps over all, where n is the length of the list. This is certainly not necessary — to find the next hill foot, the next invocation could start from where the previous one left off. The way to implement this idea is to bring in an accumulating parameter. Suppose we generalise to a function gsolve, defined by

  gsolve k xs ys = solve k (xs ++ ys)  ,

with the proviso that the argument xs is constrained to be a descending sequence. In particular, solve k xs = gsolve k [] xs. We aim to develop a recursive definition of gsolve. Clearly,

  gsolve 0 xs ys = xs ++ ys  .

Recalling that gstep drops the last element of a descending list, we know that k repetitions of gstep on a descending list will drop the last k elements:

  gsolve k xs [] = dropLast k xs  ,

where dropLast k drops the last k elements of a list. We will not give a formal definition of dropLast as it will be replaced by another function in a moment. That deals with the two base cases.
For the recursive case, it is easy to prove the following property of gstep:

  gstep (xs ++ y:ys)
    | null xs ∨ last xs ≥ y  = gstep ((xs ++ [y]) ++ ys)
    | otherwise              = init xs ++ y:ys  ,

which can be used to construct the following case of gsolve:

  gsolve (1+k) xs (y:ys)
    | null xs ∨ last xs ≥ y  = gsolve (1+k) (xs ++ [y]) ys
    | otherwise              = gsolve k (init xs) (y:ys)  .

The second optimisation is simply to replace the list xs in the definition of gsolve by reverse xs, to avoid adding elements at the end of a list. That leads to our final algorithm:

  solve k xs = gsolve k [] xs  ,

  gsolve 0 xs ys = reverse xs ++ ys
  gsolve k xs [] = reverse (drop k xs)
  gsolve k xs (y:ys)
    | null xs ∨ head xs ≥ y  = gsolve k (y:xs) ys
    | otherwise              = gsolve (k-1) (tail xs) (y:ys)  ,

where drop is a standard Haskell function that drops the first k elements from a list. For an operational explanation, gsolve traverses through the list, keeping looking for the next hill foot to delete — the otherwise case is when a hill foot is found. The reversed list xs is the traversed part — the two lists form a zipper. The head of xs is a possible candidate for the hill foot. While the algorithm looks simple once understood, without calculation it is not easy to get the details right. The authors have come up with several versions that are wrong, before sitting down to calculate it! To time the program, note that at each step either y:ys is reduced to ys or k is reduced to k-1. Hence gsolve takes O(n + k) steps, where n is the length of the input list. § CONCLUSION To construct a linear-time algorithm for solving the puzzle, various techniques were employed. The structure of the greedy algorithm was proved using predicate logic, and the proof was simplified from relational program calculus. Agda was used to give a constructive proof of the greedy condition, and equational reasoning was used to derive the greedy step as well as the final, linear-time optimisation. [Bird and de Moor, 1997] R. S. Bird and O. de Moor. Algebra of Programming.
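The calculation can be checked on small inputs by transcribing the two programs into Python (our own sketch; the names follow the pearl, but the imperative zipper loop is our rendering of gsolve):

```python
def gstep(xs):
    # One greedy step: delete the "hill foot" -- the last element
    # of the longest weakly descending prefix of xs.
    if len(xs) <= 1:
        return []
    if xs[0] < xs[1]:
        return xs[1:]
    return xs[:1] + gstep(xs[1:])

def solve(k, xs):
    # Linear-time version: stack plays the role of the reversed,
    # descending traversed prefix; together with the remaining
    # input it forms a zipper.
    stack = []
    for y in xs:
        # Pop hill feet strictly smaller than y while deletions remain.
        while k > 0 and stack and stack[-1] < y:
            stack.pop()
            k -= 1
        stack.append(y)
    # The surviving prefix is descending: drop its last k elements.
    return stack[:len(stack) - k] if k else stack
```

A loop comparing `solve(k, xs)` against k-fold iteration of `gstep` on a handful of lists agrees, which is exactly the equation `solve k = gstep^k` the derivation establishes.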
International Series in Computer Science. Prentice Hall, 1997. ISBN 0-13-507245-X. [Curtis, 2003] S. Curtis. The classification of greedy algorithms. Science of Computer Programming, 49(1–3):125–157, 2003. [Dijkstra, 1974] E. W. Dijkstra. Programming as a discipline of mathematical nature. American Mathematical Monthly, 81(6):608–612, May 1974. EWD 361. [Korte et al., 1991] B. Korte, L. Lovász, and R. Schrader. Greedoids. Springer-Verlag, 1991. [Lawler, 1976] E. L. Lawler. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart, and Winston, 1976.
# Topological description of the Borel probability space Liangang Ma Dept. of Mathematical Sciences, Binzhou University, Huanghe 5th Road No. 391, Binzhou 256600, Shandong, P. R. China<EMAIL_ADDRESS> ###### Abstract. We study properties of some popular topologies on the space of Borel probabilities on a topological ambient space. We show that the two popular types of vague topology are equivalent to each other in case the ambient space is LCH. The two types of setwise topology, induced from two equivalent descriptions of setwisely sequential convergence of probability measures, are also equivalent to each other regardless of the topology on the ambient space. We give explicit conditions for the two types of vague topology and the two types of setwise topology to be separable or metrizable on the space of Borel probabilities. These conditions are either in terms of the cardinality of the elementary events in the Borel $\sigma$-algebra or direct topological assumptions on the ambient space. We give a necessary and sufficient condition for families of probability measures to be setwisely relatively compact in case the ambient space is a compact metric space. There are some extending problems and heuristic schemes on formulating new topologies on the space of Borel probabilities at the end of the work. The work is supported by ZR2019QA003 from SPNSF and 12001056 from NSFC ## 1\. Introduction This work addresses the following question: Let $X$ be a topological space along with its Borel $\sigma$-algebra $\mathcal{B}$, and consider the collection $\mathcal{M}(X)$ of all the probability measures on $(X,\mathcal{B})$; how can one describe the topological structure of $\mathcal{M}(X)$? The subtlety of the topology on $\mathcal{M}(X)$ is revealed when one considers problems related to regularity (for example, continuity) of mappings on the probability space $\mathcal{M}(X)$, and is even crucial in attacking such problems in some cases.
The importance of the weak topology has long been recognized through its various applications; see for example its application to interacting particle systems by J. T. Cox, A. Klenke and E. A. Perkins on a locally compact Polish space in [CK, CKP]. The solutions of problems arising from applications of various other topologies likewise indicate the importance of these topologies on $\mathcal{M}(X)$. While various other topologies on $\mathcal{M}(X)$ are highlighted in their applications, we limit our attention to the vague, weak, setwise and TV topologies in this work. The vague topology on $\mathcal{M}(X)$ is the coarsest topology among the four. We focus on two types of vague topology on $\mathcal{M}(X)$: one in the sense of Kallenberg [Kallen1, Kallen4], the other in the sense of Folland [Fol]. We will show the equivalence of the two types of vague topology in case the ambient space $X$ is locally compact Hausdorff (LCH). We also study some topological properties of $\mathcal{M}(X)$ under the vague topology, such as its separability and metrizability. The setwise topology is a finer topology than the weak topology on $\mathcal{M}(X)$; sequential convergence under this topology is a more demanding property than under the weak topology. See its applications to Markov decision processes by E. Feinberg, P. Kasyanov and M. Zgurovsky [FK, FKL, FKZ3, FKZ4], and to Markov chains by O. Hernández-Lerma and J. Lasserre [HL1, HL3]. The notion is further exploited by the author in the dimension theory of (families of) iterated function systems (IFS), which extends some important results in this field [Ma1, Ma2]. We derive two types of setwise topology, which follow naturally from two equivalent descriptions of setwisely sequential convergence of probability measures on $\mathcal{M}(X)$. The two types of setwise topology are always equivalent to each other.
The separability and metrizability of $\mathcal{M}(X)$ under the setwise topology are determined by the cardinality of the elementary events in the Borel $\sigma$-algebra $\mathcal{B}$. We consider the setwisely relative compactness of families of probabilities in $\mathcal{M}(X)$ following Billingsley and Kallenberg. There are three notes for the readers. The first one is on the ambient space $X$. We try to set our results on general topological ambient spaces, while some results do require assumptions on the ambient space. Notable assumptions are those requiring $X$ to be LCH, compact, metrizable, separable, normal or complete. The second note is that although we pay main attention to the vague and setwise topologies on $\mathcal{M}(X)$ (partly because the weak topology is well studied in various circumstances), we hope the work will shed some light on properties of other topologies, as well as on comparisons of different types of topology induced from equivalent descriptions of convergence of sequences of probability measures on $\mathcal{M}(X)$. The last one is on the scope of measures to which our results apply. Let $\hat{\mathcal{M}}(X)$ be the collection of all finite measures on $(X,\mathcal{B})$. Some notions and results extend naturally from $\mathcal{M}(X)$ to $\hat{\mathcal{M}}(X)$ by normalization, or even to the space of all measures (including infinite ones) in some cases. The organization of the paper is as follows. In Section 2 we introduce the vague, weak, setwise and TV topologies on $\mathcal{M}(X)$. These definitions are induced naturally from the corresponding sequential convergence of probability measures in $\mathcal{M}(X)$. The main results of the work are presented in this section: Theorems 2.3, 2.4, 2.5, 2.13, 2.14 and 2.15. Section 3 is devoted to the proofs of Theorems 2.3 and 2.13, which compare different types of vague and setwise topology on $\mathcal{M}(X)$.
In Section 4 we prove Theorems 2.4, 2.5 and 2.14, which give conditions for the separability and metrizability of $\mathcal{M}(X)$ under the vague or setwise topology in due course. Section 5 is devoted to the proof of Theorem 2.15 on the setwisely relative compactness of families of probabilities in $\mathcal{M}(X)$. In Section 6 we indicate some further problems on the notable topologies on $\mathcal{M}(X)$. In the last section we propose some open schemes for defining more subtle topologies according to one's needs. ## 2\. The vague, weak, setwise, TV topology on $\mathcal{M}(X)$ and the main results We introduce the concepts of the vague, weak, setwise and total-variation (TV) topologies on $\mathcal{M}(X)$ in this section. These notions differ from each other in fineness. One can expect more desirable properties for convergent sequences of probability measures under finer topologies. For example, the _uniform Fatou lemma_ holds for convergent sequences of finite measures under the TV topology, but it does not hold for convergent sequences of finite measures under the setwise topology on $\mathcal{M}(X)$ [FKZ2]. In general we merely assume the ambient space $X$ is a topological space with its Borel $\sigma$-algebra $\mathcal{B}$. Some assumptions on the ambient space $X$ are required in order to deduce some particular results. For integration on a general measure space, see [Li] or [Tay, Chapter 3]. A function $f:X\rightarrow\mathbb{R}$ is said to _vanish at infinity_ if $f^{-1}\big{(}(-\infty,-\epsilon]\cup[\epsilon,\infty)\big{)}$ is a compact set in $X$ for any $\epsilon>0$. The _support_ of a function $f:X\rightarrow\mathbb{R}$ is defined to be the closure $Supp(f):=\overline{f^{-1}\big{(}(-\infty,0)\cup(0,\infty)\big{)}}$. We highlight the following families of testing functions in this work. * • $C(X)=\\{f:f\mbox{ is a continuous function from }X\mbox{ to }\mathbb{R}\\}.$ * • $C_{b}(X)=\\{f:f\mbox{ is a bounded continuous function from }X\mbox{ to }\mathbb{R}\\}$.
* • $C_{0}(X)=\\{f:f\mbox{ is a continuous function from }X\mbox{ to }\mathbb{R}\mbox{ vanishing at infinity}\\}$. * • $C_{sc}(X)=\\{f:f\mbox{ is a continuous function from }X\mbox{ to }\mathbb{R}\mbox{ with compact support}\\}$. * • $C_{sb}(X)=\\{f:f\in C(X)\mbox{ has bounded support in case }X\mbox{ is metrizable}\\}$. * • $M_{b}(X)=\\{f:f\mbox{ is a bounded measurable function from }X\mbox{ to }\mathbb{R}\\}$. * • $M_{1}(X)=\\{f:f\mbox{ is a measurable function from }X\mbox{ to }[-1,1]\\}$. Obviously, (2.1) $M_{1}(X)\cup C_{b}(X)\subset M_{b}(X)\mbox{ and }C_{sc}(X)\cup C_{sb}(X)\subset C_{0}(X)\subset C_{b}(X)\subset C(X).$ If the ambient space $X$ is a metric space satisfying the _Heine-Borel Property_ [JW], then $C_{sc}(X)=C_{sb}(X)$. Refer to [Kallen1] for testing functions in the family $C_{sb}(X)$. We pay attention to the following two types of vague topology on $\mathcal{M}(X)$ in this work, following two popular notions of vaguely sequential convergence of probability measures in $\mathcal{M}(X)$. ###### 2.1 Definition. The _Type-I vague topology_ $\mathfrak{W}_{v1}$ on $\mathcal{M}(X)$ is the topology with basis $W_{v1}(\nu,f,\epsilon)=\\{\varrho\in\mathcal{M}(X):|\int_{X}f(x)d\varrho-\int_{X}f(x)d\nu|<\epsilon\\}$ for any $f\in C_{sc}(X)$ and any real $\epsilon>0$. This type of vague topology is induced from the vaguely sequential convergence of probability measures in [Kallen1, Chapter 4] and [Kle, Definition 13.12]. ###### 2.2 Definition. The _Type-II vague topology_ $\mathfrak{W}_{v2}$ on $\mathcal{M}(X)$ is the topology with basis $W_{v2}(\nu,f,\epsilon)=\\{\varrho\in\mathcal{M}(X):|\int_{X}f(x)d\varrho-\int_{X}f(x)d\nu|<\epsilon\\}$ for any $f\in C_{0}(X)$ and any real $\epsilon>0$. This type of vague topology is induced from the vaguely sequential convergence of probability measures in [Fol] and [Las1]. It is obvious that the Type-II vague topology is finer than the Type-I vague topology since $C_{sc}(X)\subset C_{0}(X)$.
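To see concretely how the choice of testing family matters, consider the Dirac measures $\delta_{n}$ on $\mathbb{R}$ (our own illustration, not from the cited sources): integration against $\delta_{n}$ is evaluation at $n$, so one can check numerically that $\int f\,d\delta_{n}\rightarrow 0$ for an $f\in C_{0}(\mathbb{R})$, while $\int g\,d\delta_{n}\equiv 1$ for the bounded function $g\equiv 1$; the sequence thus converges vaguely to the zero measure (which is not a probability measure), showing that mass may escape to infinity under the vague topologies.

```python
def integrate_dirac(f, x):
    # Integration against the Dirac measure delta_x is evaluation at x.
    return f(x)

f = lambda x: 1.0 / (1.0 + x * x)   # f lies in C_0(R): continuous, vanishing at infinity
g = lambda x: 1.0                   # g lies in C_b(R) but not in C_0(R)

vals_f = [integrate_dirac(f, n) for n in range(1, 1001)]
vals_g = [integrate_dirac(g, n) for n in range(1, 1001)]
# vals_f tends to 0, while vals_g is constantly 1: delta_n converges
# vaguely (tested against C_0) to the zero measure, but not weakly.
```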
Our first result shows that the fineness is not strict in some cases. ###### 2.3 Theorem. If the ambient space $X$ is an LCH space, then the Type-II vague topology $\mathfrak{W}_{v2}$ is equivalent to the Type-I vague topology $\mathfrak{W}_{v1}$ on $\mathcal{M}(X)$. Now rename the space $\mathcal{M}(X)$ as $\mathcal{M}_{v1}(X)$ and $\mathcal{M}_{v2}(X)$ under the topologies $\mathfrak{W}_{v1}$ and $\mathfrak{W}_{v2}$ respectively. Our next result describes the separability and metrizability of the probability space $\mathcal{M}(X)$ under the vague topology. A set $A\in\mathcal{B}$ is called an _elementary event_ if it does not contain any non-empty proper subset $B\in\mathcal{B}$. ###### 2.4 Theorem. For an LCH space $X$, $\mathcal{M}_{v1}(X)$ ($\mathcal{M}_{v2}(X)$) is separable and metrizable if the Borel $\sigma$-algebra $\mathcal{B}$ admits at most countably many elementary events. The conclusion is not true for non-LCH spaces $X$, see Remark 4.5. It is natural to ask about the separability and metrizability of $\mathcal{M}_{v1}(X)$ or $\mathcal{M}_{v2}(X)$ in case the Borel $\sigma$-algebra $\mathcal{B}$ has uncountably many elementary events. We are not able to provide a conclusive answer to this question, even if $X$ is LCH. However, we do have some results when $X$ is a compact metric space. The separability of the Borel probability space under the vague topology can be found in [Tao, Exercise 1.10.22.], see also Proposition 4.8. ###### 2.5 Theorem. The probability space $\mathcal{M}_{v1}(X)$ ($\mathcal{M}_{v2}(X)$) is separable and metrizable if the ambient space $X$ is a compact metric space. ###### 2.6 Remark. The readers are strongly referred to [Kallen1, Theorem 4.2] for a stronger result on the separability, metrizability and completeness of the space $\mathcal{M}_{v1}(X)$ ($\mathcal{M}_{v2}(X)$), with $X$ being a separable and complete metric space (a compact metric space is separable and complete).
The author found Kallenberg's result only after obtaining Theorem 2.5, but we retain our statement in its present form, as the proof differs completely from Kallenberg's techniques and may benefit readers by offering a different point of view. See also Problem 6.5. Now we turn to the weak topology on $\mathcal{M}(X)$. ###### 2.7 Definition. The _Type-I weak topology_ $\mathfrak{W}_{w1}$ on $\mathcal{M}(X)$ is the topology with basis $W_{w1}(\nu,f,\epsilon)=\\{\varrho\in\mathcal{M}(X):|\int_{X}f(x)d\varrho-\int_{X}f(x)d\nu|<\epsilon\\}$ for any $f\in C_{b}(X)$ and any real $\epsilon>0$. This topology is obviously finer than the two types of vague topology $\mathfrak{W}_{v1}$ and $\mathfrak{W}_{v2}$ on $\mathcal{M}(X)$. There is a detailed study of some finer forms of the weak topology and their various applications in [Kal]. The topology $\mathfrak{W}_{w1}$ is metrizable if $X$ is metrizable, for example, by the Prohorov metric (see [Bil1, Section 6]). The Type-I weak topology is well understood, so it will not be our focus in this work. See Problem 6.4 on comparing it with other types of weak topology on $\mathcal{M}(X)$. In the following we introduce two types of setwise topology on $\mathcal{M}(X)$. Differently from the introduction of the vague and weak topology above, we would like to do this starting from the setwisely sequential convergence of probability measures in $\mathcal{M}(X)$. ###### 2.8 Definition. A sequence of probability measures $\\{\nu_{n}\in\mathcal{M}(X)\\}_{n=1}^{\infty}$ is said to converge _setwisely_ to $\nu\in\mathcal{M}(X)$, if $\lim_{n\rightarrow\infty}\nu_{n}(A)=\nu(A)$ for any $A\in\mathcal{B}$. See for example [Doo, GR, HL1, Las1]. Denote the sequential convergence in this sense by $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $n\rightarrow\infty$. ###### 2.9 Definition.
The induced topology $\mathfrak{W}_{s1}$ with subbasis $W_{s1}(\nu,A,\epsilon)=\\{\varrho\in\mathcal{M}(X):|\varrho(A)-\nu(A)|<\epsilon\\}$ for $A\in\mathcal{B}$ and any real $\epsilon>0$ is called the Type-I _setwise topology_ on $\mathcal{M}(X)$. An equivalent way of describing the setwisely sequential convergence of measures stems from treating each measure as a functional on the space of bounded Borel-measurable functions on $X$. ###### 2.10 Definition. A sequence of probability measures $\\{\nu_{n}\in\mathcal{M}(X)\\}_{n=1}^{\infty}$ is said to converge _setwisely_ to $\nu\in\mathcal{M}(X)$, if $\lim_{n\rightarrow\infty}\int_{X}f(x)d\nu_{n}=\int_{X}f(x)d\nu$ for any $f\in M_{b}(X)$. This description is equivalent to Definition 2.8 because the simple functions are dense among the bounded Borel-measurable functions on $X$ under the supremum norm. In this way one can define a topology $\mathfrak{W}_{s2}$ on $\mathcal{M}(X)$ as follows. ###### 2.11 Definition. The Type-II _setwise topology_ $\mathfrak{W}_{s2}$ on $\mathcal{M}(X)$ is the topology with basis $W_{s2}(\nu,f,\epsilon)=\\{\varrho\in\mathcal{M}(X):|\int_{X}f(x)d\varrho-\int_{X}f(x)d\nu|<\epsilon\\}$ for any $f\in M_{b}(X)$ and any real $\epsilon>0$. Feinberg, Kasyanov and Zgurovsky gave some equivalent conditions for verifying the setwisely sequential convergence of probability measures in $\mathcal{M}(X)$ with $X$ being metrizable, as follows (refer to [FKZ1, Theorem 2.3]). ###### 2.12 Theorem (Feinberg-Kasyanov-Zgurovsky). For a sequence of measures $\\{\nu_{n}\in\mathcal{M}(X)\\}_{n=1}^{\infty}$ and $\nu\in\mathcal{M}(X)$ with a metric space $X$, the following conditions are equivalent to each other: 1. (I). $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $n\rightarrow\infty$. 2. (II). $\lim_{n\rightarrow\infty}\nu_{n}(A)=\nu(A)$ for any open set $A\subset X$. 3. (III). $\lim_{n\rightarrow\infty}\nu_{n}(A)=\nu(A)$ for any closed set $A\subset X$.
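A standard example separating weak and setwise convergence can be checked mechanically (our own illustration): $\nu_{n}=\delta_{1/n}$ converges weakly to $\delta_{0}$ on $\mathbb{R}$, yet setwise convergence fails already on the Borel set $A=\\{0\\}$, since $\nu_{n}(A)=0$ for every $n$ while $\delta_{0}(A)=1$.

```python
def measure_of_set(atom, A):
    # A Dirac measure delta_atom applied to a set A, given as a predicate.
    return 1.0 if A(atom) else 0.0

singleton_zero = lambda x: x == 0.0

# nu_n = delta_{1/n}: bounded continuous test functions cannot separate
# 1/n from 0 in the limit (weak convergence to delta_0), but on the
# Borel set A = {0} the measures never move:
vals = [measure_of_set(1.0 / n, singleton_zero) for n in range(1, 101)]
limit_measure_of_A = measure_of_set(0.0, singleton_zero)
# vals is constantly 0.0 while the limit measure assigns 1.0 to {0},
# so nu_n does not converge setwisely to delta_0.
```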
There are some sufficient conditions guaranteeing setwisely sequential convergence of measures in certain contexts by Feinberg-Kasyanov-Zgurovsky and J. Lasserre, see [FKZ1, Section 3] and [Las1, Lemma 4.1(i)(ii)]. See also the Vitali-Hahn-Saks Theorem (refer to [Doo] or [HL2]) on setwisely sequential convergence of measures in $\mathcal{M}(X)$. In view of the equivalence of Definitions 2.8 and 2.10, it is natural to ask whether the two types of topology $\mathfrak{W}_{s1}$ and $\mathfrak{W}_{s2}$ are equivalent to each other on $\mathcal{M}(X)$. ###### 2.13 Theorem. The Type-I setwise topology $\mathfrak{W}_{s1}$ is always equivalent to the Type-II setwise topology $\mathfrak{W}_{s2}$ on the probability space $\mathcal{M}(X)$. Note that two topologies on the same space admitting the same convergent sequences need not be equivalent, since there are examples of spaces with inequivalent topologies that admit the same convergent sequences (see an example by J. Schur [Sch]). Especially, one needs to be careful in the case when the two types of setwise topology are not metrizable (refer to Theorem 2.14). In the following we pay attention to the separability and metrizability of the probability space $\mathcal{M}(X)$ under the setwise topology. Rename the space $\mathcal{M}(X)$ as $\mathcal{M}_{s1}(X)$ equipped with the topology $\mathfrak{W}_{s1}$, and $\mathcal{M}_{s2}(X)$ equipped with the topology $\mathfrak{W}_{s2}$. Part of the following result is due to J. K. Ghosh and R. V. Ramamoorthi [GR, Proposition 2.2.1]. ###### 2.14 Theorem. The topological space $\mathcal{M}_{s1}(X)$ ($\mathcal{M}_{s2}(X)$) is separable or metrizable if and only if the Borel $\sigma$-algebra $\mathcal{B}$ admits at most countably many elementary events. Now we consider setwisely relative compactness of families of probability measures in $\mathcal{M}(X)$.
A family of probability measures $\Xi\subset\mathcal{M}(X)$ is said to be _setwisely relatively compact_ if every sequence of probability measures in $\Xi$ admits a setwisely convergent subsequence. We give a necessary and sufficient condition for families of probability measures to be setwisely relatively compact when the ambient space $X$ is a compact metric space. ###### 2.15 Theorem. For a compact metric space $X$, a family of probability measures $\Xi\subset\mathcal{M}(X)$ is setwisely relatively compact if and only if for any sequence of probability measures $\\{\nu_{n}\\}_{n=1}^{\infty}\subset\Xi$, there is a subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$, such that (2.2) $\limsup_{i\rightarrow\infty}\nu_{n_{i}}(U)=\sup_{K\subset U,K\mbox{ is closed}}\limsup_{i\rightarrow\infty}\nu_{n_{i}}(K)$ for any open set $U\subset X$. In case the ambient space $X$ is not a compact metric space, it seems more complicated to judge setwisely relative compactness of a family of probability measures in $\mathcal{M}(X)$. See Problem 6.7. The last topology we consider in this work is the total-variation (TV) topology on $\mathcal{M}(X)$. It is induced by the total-variation (TV) metric on $\mathcal{M}(X)$. ###### 2.16 Definition. The _total-variation metric_ is defined by $\|\nu-\varrho\|_{TV}=\sup_{A\in\mathcal{B}}\\{|\nu(A)-\varrho(A)|\\}$ between two measures $\nu,\varrho\in\mathcal{M}(X)$. One is referred to [Doo, FKZ1, HL1, PS] for more related topics on the TV topology on $\mathcal{M}(X)$. The Type-I (Type-II) setwise topology is coarser than the TV topology, and this comparison is strict in some cases. In fact, there are examples of sequences of Borel probability measures converging under the setwise topology but diverging under the TV topology on compact metric ambient spaces. ## 3\.
Comparison of the vague topology $\mathfrak{W}_{v1}$ with $\mathfrak{W}_{v2}$ and the setwise topology $\mathfrak{W}_{s1}$ with $\mathfrak{W}_{s2}$ on $\mathcal{M}(X)$ This section is devoted to the proofs of Theorem 2.3 and Theorem 2.13 on the comparison of different types of vague and setwise topology on $\mathcal{M}(X)$. To prove Theorem 2.3, we need a suitable continuous approximation for characteristic functions of compact sets in LCH spaces. The following result is a locally compact version of Urysohn's Lemma (refer to [Fol, p.131]). ###### 3.1 Urysohn's Lemma. Let $X$ be an LCH space. For any compact set $K\subset X$, let $U\supset K$ be an open neighbourhood of $K$; then there exists a continuous map $g_{K}:X\rightarrow[0,1]$ with compact support $Supp(g_{K})\subset U$, such that $g_{K}(x)=1$ for any $x\in K$. Now we recall the following separation axioms in general topological spaces. * • A topological space $X$ is called _Hausdorff_ or $T_{2}$ if for any two points $x,y\in X$, there are two disjoint open sets containing $x$ and $y$ respectively. * • A topological space $X$ is called _regular_ if for any point $x\in X$ and any closed set $C\subset X$ not containing $x$, there are two disjoint open sets containing $x$ and $C$ respectively. * • A topological space $X$ is called $T_{3}$ if it is both Hausdorff and regular. Note that some people use the terminology 'regular' for 'regular and Hausdorff' ($T_{3}$) spaces. A topological space $X$ is called _second-countable_ if its topology admits a countable basis. The equivalence of the Type-I and Type-II vague topology on $\mathcal{M}(X)$ with an LCH space $X$ follows essentially from Urysohn's Lemma. Proof of Theorem 2.3: ###### Proof. It is enough for us to show that the Type-I vague topology is finer than the Type-II vague topology in case $X$ is an LCH space.
For a measure $\nu\in\mathcal{M}(X)$, consider its neighbourhood $W_{v2}(\nu,f,\epsilon)$ for some continuous function $f:X\rightarrow\mathbb{R}$ vanishing at infinity and some $\epsilon>0$ under the Type-II vague topology. Consider the following two sets in $X$, $K=f^{-1}\big{(}(-\infty,-\cfrac{\epsilon}{4}]\cup[\cfrac{\epsilon}{4},\infty)\big{)}$ and $U=f^{-1}\big{(}(-\infty,-\cfrac{\epsilon}{8})\cup(\cfrac{\epsilon}{8},\infty)\big{)}$. Obviously $K\subset U$ while $K$ is compact and $U$ is open. So according to Urysohn's Lemma, there exists a continuous map $g_{K}:X\rightarrow[0,1]$ with compact support $Supp(g_{K})\subset U$, such that $g_{K}(x)=1$ for any $x\in K$. Let $g=f\cdot g_{K}$. It is a continuous map on $X$ with compact support. We claim that the neighbourhood (3.1) $W_{v1}(\nu,g,\cfrac{\epsilon}{8})\subset W_{v2}(\nu,f,\epsilon).$ To see this, for any probability measure $\varrho\in W_{v1}(\nu,g,\cfrac{\epsilon}{8})$, we have $\begin{array}[]{ll}&|\int_{X}f(x)d\varrho-\int_{X}f(x)d\nu|\\\ =&|\int_{X}\big{(}f(x)-g(x)\big{)}d\varrho+\int_{X}g(x)d\varrho-\int_{X}g(x)d\nu-\int_{X}\big{(}f(x)-g(x)\big{)}d\nu|\\\ \leq&|\int_{X}\big{(}f(x)-g(x)\big{)}d\varrho|+|\int_{X}g(x)d\varrho-\int_{X}g(x)d\nu|+|\int_{X}\big{(}f(x)-g(x)\big{)}d\nu|\\\ =&|\int_{K^{\prime}}\big{(}1-g_{K}(x)\big{)}f(x)d\varrho|+|\int_{X}g(x)d\varrho-\int_{X}g(x)d\nu|+|\int_{K^{\prime}}\big{(}1-g_{K}(x)\big{)}f(x)d\nu|\\\ <&\cfrac{5\epsilon}{8}<\epsilon,\end{array}$ in which $K^{\prime}=X\setminus K$ is the residual set; the two boundary terms are each at most $\cfrac{\epsilon}{4}$ since $|f(x)|<\cfrac{\epsilon}{4}$ on $K^{\prime}$, while the middle term is less than $\cfrac{\epsilon}{8}$. The inclusion (3.1) guarantees that the Type-I vague topology is finer than the Type-II vague topology for LCH ambient spaces. ∎ It would be an interesting question to ask whether there exists some non-LCH space $X$, such that the Type-II vague topology $\mathfrak{W}_{v2}$ is strictly finer than the Type-I vague topology $\mathfrak{W}_{v1}$ on $\mathcal{M}(X)$.
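The truncation used in the proof can be visualised numerically (our own sketch on $X=\mathbb{R}$, which is LCH; the piecewise-linear cutoff below plays the role of $g_{K}$ from Urysohn's Lemma, with $f(x)=1/(1+x^{2})$ and $\epsilon=0.4$, so that $K=\\{|f|\geq\epsilon/4\\}=[-3,3]$):

```python
def f(x):
    return 1.0 / (1.0 + x * x)      # f lies in C_0(R)

def bump(x):
    # Urysohn-style cutoff g_K: equal to 1 on K = [-3, 3],
    # 0 outside [-4, 4], linear in between (compact support).
    ax = abs(x)
    if ax <= 3.0:
        return 1.0
    if ax >= 4.0:
        return 0.0
    return 4.0 - ax

def g(x):
    return f(x) * bump(x)           # compactly supported approximation of f

# f - g vanishes on K and satisfies |f - g| <= sup_{|x|>3} |f| = 1/10 = eps/4
# elsewhere, so every probability measure integrates f and g within eps/4:
grid = [i / 100.0 for i in range(-1000, 1001)]
gap = max(abs(f(x) - g(x)) for x in grid)
```

The computed `gap` stays below $\epsilon/4=0.1$, which is exactly the bound the proof uses for the two boundary terms.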
We cannot give an example of a non-LCH space $X$ for which the strict fineness holds between the two types of vague topology, although there do exist examples of non-LCH spaces $X$ on which Urysohn’s Lemma fails. Since Urysohn’s Lemma holds on normal spaces, the following result follows from the proof of Theorem 2.3. ###### 3.2 Corollary. The Type-II vague topology $\mathfrak{W}_{v2}$ is equivalent to the Type-I vague topology $\mathfrak{W}_{v1}$ on $\mathcal{M}(X)$ with $X$ being a normal space. Now we show the equivalence of the Type-I and Type-II setwise topology on $\mathcal{M}(X)$. Proof of Theorem 2.13: ###### Proof. Obviously the topology $\mathfrak{W}_{s2}$ is finer than $\mathfrak{W}_{s1}$ since any characteristic function $1_{A}$ of a measurable set $A\in\mathcal{B}$ is a bounded measurable function. In the following we show the converse is also true. For a probability measure $\nu\in\mathcal{M}(X)$, a function $f\in M_{b}(X)$ and a small $\epsilon>0$, consider the neighbourhood $W_{s2}(\nu,f,\epsilon)$ of $\nu$. We will find an open neighbourhood $U_{\nu}$ of $\nu$ under $\mathfrak{W}_{s1}$, such that $U_{\nu}\subset W_{s2}(\nu,f,\epsilon)$; this is enough to show that the Type-I setwise topology $\mathfrak{W}_{s1}$ is finer than the Type-II setwise topology $\mathfrak{W}_{s2}$. First, choose an integer $N\in\mathbb{N}$ large enough such that $\cfrac{4\|f\|_{\infty}}{N}<\epsilon$. For $1\leq i\leq N-1$, let $A_{i}=f^{-1}\big{(}[-\|f\|_{\infty}+\frac{2(i-1)\|f\|_{\infty}}{N},-\|f\|_{\infty}+\frac{2i\|f\|_{\infty}}{N})\big{)}$. Let $A_{N}=f^{-1}\big{(}[\|f\|_{\infty}-\frac{2\|f\|_{\infty}}{N},\|f\|_{\infty}]\big{)}$. Note that $A_{i}\in\mathcal{B}$ for any $1\leq i\leq N$ since $f$ is measurable, and $\\{A_{i}\\}_{1\leq i\leq N}$ is a disjoint partition of the ambient space $X$. Now for every $1\leq i\leq N$, consider the open neighbourhood $W_{s1}(\nu,A_{i},\frac{\epsilon}{2N\|f\|_{\infty}})$ of $\nu$. 
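Numerically, the mechanism behind the partition just defined is that replacing a bounded $f$ by a constant on each level set $A_{i}$ perturbs its integral by at most one slab height $2\|f\|_{\infty}/N$. The toy measure (uniform weights on random points) and the test function below are illustrative choices of ours, not from the paper.

```python
import math
import random

random.seed(0)

# A toy probability measure: uniform weights on finitely many sample points.
pts = [random.random() for _ in range(1000)]
w = 1.0 / len(pts)

def f(x):
    # A bounded measurable function (illustrative), with ||f||_inf <= 1.
    return math.sin(6 * x)

fmax = 1.0
N = 50
width = 2 * fmax / N   # height of each slab A_i

integral = sum(w * f(x) for x in pts)
# Replace f by the lower endpoint of its slab, i.e. a constant on each
# level set A_i = f^{-1}([-fmax + (i-1)*width, -fmax + i*width)):
approx = sum(
    w * (-fmax + width * math.floor((f(x) + fmax) / width)) for x in pts
)

# The integral moves by at most one slab height 2*||f||_inf / N:
assert abs(integral - approx) <= width
```

Choosing $N$ with $4\|f\|_{\infty}/N<\epsilon$ makes this slab error smaller than $\epsilon/2$, matching the second error term in the estimate of the proof.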
Let $U_{\nu}=\cap_{1\leq i\leq N}W_{s1}(\nu,A_{i},\frac{\epsilon}{2N\|f\|_{\infty}})$ be the open neighbourhood of $\nu$ under $\mathfrak{W}_{s1}$. We claim that (3.2) $U_{\nu}\subset W_{s2}(\nu,f,\epsilon).$ To see this, for any probability measure $\varrho\in U_{\nu}$, comparing the integrals of $f$ with respect to $\nu$ and $\varrho$ over $X$, we have $\begin{array}[]{ll}&|\int_{X}fd\nu-\int_{X}fd\varrho|\vspace{2mm}\\\ \leq&\sum_{1\leq i\leq N}|\int_{A_{i}}fd\nu-\int_{A_{i}}fd\varrho|\vspace{2mm}\\\ \leq&\max\Big{\\{}\sum_{1\leq i\leq N}\Big{|}\big{(}(-\|f\|_{\infty}+\frac{2i\|f\|_{\infty}}{N})\nu(A_{i})-(-\|f\|_{\infty}+\frac{2(i-1)\|f\|_{\infty}}{N})\varrho(A_{i})\big{)}\Big{|},\\\ &\sum_{1\leq i\leq N}\Big{|}\big{(}(-\|f\|_{\infty}+\frac{2(i-1)\|f\|_{\infty}}{N})\nu(A_{i})-(-\|f\|_{\infty}+\frac{2i\|f\|_{\infty}}{N})\varrho(A_{i})\big{)}\Big{|}\Big{\\}}\vspace{2mm}\\\ \leq&\sum_{1\leq i\leq N}\Big{|}\Big{(}\big{(}-\|f\|_{\infty}+\frac{2(i-1)\|f\|_{\infty}}{N}\big{)}\big{(}\nu(A_{i})-\varrho(A_{i})\big{)}\Big{)}\Big{|}+\frac{2\|f\|_{\infty}}{N}\vspace{2mm}\\\ <&\sum_{1\leq i\leq N}\|f\|_{\infty}\frac{\epsilon}{2N\|f\|_{\infty}}+\frac{2\|f\|_{\infty}}{N}\vspace{2mm}\\\ \leq&\frac{\epsilon}{2}+\frac{\epsilon}{2}\vspace{2mm}\\\ =&\epsilon.\end{array}$ ∎ ###### 3.3 Remark. Note that the Type-I setwise topology is defined by a basis in Definition 2.9, while the Type-II setwise topology is defined by a subbasis in Definition 2.11 on $\mathcal{M}(X)$. Due to Theorem 2.13 we will usually not distinguish the two types of setwise topology. However, it is sometimes technically more convenient to work with one type of setwise topology than the other. The same applies to the two types of vague topology on $\mathcal{M}(X)$ with LCH ambient spaces, by virtue of Theorem 2.3. ## 4\. 
Separability and metrizability of the probability space $\mathcal{M}(X)$ under the vague topology and setwise topology This section is devoted to properties of the probability space $\mathcal{M}(X)$ under the vague topology and setwise topology, especially separability and metrizability. We aim to prove Theorems 2.4, 2.5 and 2.14 in this section. Our strategy is to prove Theorem 2.14 first by establishing some separation and countability properties of the probability space $\mathcal{M}_{s1}(X)$ ($\mathcal{M}_{s2}(X)$). Since the setwise topology is finer than the vague topology, some separation and countability properties of the probability space $\mathcal{M}_{s1}(X)$ ($\mathcal{M}_{s2}(X)$) are inherited naturally by the space $\mathcal{M}_{v1}(X)$ ($\mathcal{M}_{v2}(X)$) in some cases. These properties are then applied to the proofs of Theorems 2.4 and 2.5. To prove Theorem 2.14, we need several preceding results on the separation and countability of the topological space $\mathcal{M}_{s1}(X)$ ($\mathcal{M}_{s2}(X)$). By virtue of Theorem 2.13, all the separation and countability properties are shared by the two spaces $\mathcal{M}_{s1}(X)$ and $\mathcal{M}_{s2}(X)$. ###### 4.1 Lemma. The topological space $\mathcal{M}_{s1}(X)$ ($\mathcal{M}_{s2}(X)$) is Hausdorff. ###### Proof. Without loss of generality, suppose that $X$ is not endowed with the trivial topology $\\{\emptyset,X\\}$. Now for two probability measures $\nu,\varrho\in\mathcal{M}_{s1}(X)$, if $\nu\neq\varrho$, there must exist some $A\in\mathcal{B}$, such that $\nu(A)\neq\varrho(A)$. Without loss of generality suppose $\nu(A)>\varrho(A)$. Then we have $\nu\in W_{s1}(\nu,A,\cfrac{\nu(A)-\varrho(A)}{4})$ and $\varrho\in W_{s1}(\varrho,A,\cfrac{\nu(A)-\varrho(A)}{4})$ while $W_{s1}(\nu,A,\cfrac{\nu(A)-\varrho(A)}{4})\cap W_{s1}(\varrho,A,\cfrac{\nu(A)-\varrho(A)}{4})=\emptyset$. ∎ ###### 4.2 Lemma. The topological space $\mathcal{M}_{s2}(X)$ ($\mathcal{M}_{s1}(X)$) is regular. ###### Proof. 
Let $\nu\in\mathcal{M}_{s2}(X)$ and $\Xi_{1}\subset\mathcal{M}_{s2}(X)$ be a closed set such that $\nu\notin\Xi_{1}$. So the residual set $\Xi_{1}^{\prime}=\mathcal{M}_{s2}(X)\setminus\Xi_{1}$ is an open set such that $\nu\in\Xi_{1}^{\prime}$. Then there must exist $f\in M_{b}(X)$ and $\epsilon>0$, such that $W_{s2}(\nu,f,\epsilon)\subset\Xi_{1}^{\prime}$. Since $W_{s2}(\nu,f,\cfrac{\epsilon}{4})\subset W_{s2}(\nu,f,\cfrac{\epsilon}{2})$ and the closure $\overline{W_{s2}(\nu,f,\epsilon/2)}\subset W_{s2}(\nu,f,\epsilon)$, we have $\nu\in W_{s2}(\nu,f,\cfrac{\epsilon}{4})$ while $\Xi_{1}\subset\overline{W_{s2}(\nu,f,\epsilon/2)}^{\prime}$ and $W_{s2}(\nu,f,\cfrac{\epsilon}{4})\cap\overline{W_{s2}(\nu,f,\epsilon/2)}^{\prime}=\emptyset$. ∎ ###### 4.3 Lemma. The topological space $\mathcal{M}_{s1}(X)$ ($\mathcal{M}_{s2}(X)$) is second-countable if the $\sigma$-algebra $\mathcal{B}$ has at most countably many elementary events. ###### Proof. If $\\#\mathcal{B}$ is finite, let $\\{A_{1},A_{2},\cdots,A_{n}\\}\subset\mathcal{B}$ be the collection of all the elementary events. Then the set of probability measures $\\{\nu:\nu(A_{i})\in\mathbb{Q},0\leq\nu(A_{i})\leq 1\mbox{\ for any\ }1\leq i\leq n\mbox{\ and\ }\sum_{1\leq i\leq n}\nu(A_{i})=1\\}$ is a countable dense subset of $\mathcal{M}_{s1}(X)$. Every measure in the set has a countable neighbourhood basis, which can be used to build a countable basis of $\mathcal{M}_{s1}(X)$. Now suppose $\mathcal{B}$ has countably many elementary events $\\{A_{i}\\}_{i=1}^{\infty}$. Consider the collection of finite unions of these elementary events, $\\{B_{F}=\cup_{i\in F}A_{i}\\}_{F\subset\mathbb{N},\\#F<\infty}$. It is a countable set. 
We claim that the countable set of measures $\prod_{1}=\cup_{F\subset\mathbb{N},\\#F<\infty,j\in(\mathbb{N}\setminus F)}\\{\nu:\nu(B_{F})\in\mathbb{Q}\cap[0,1],\nu(A_{j})=1-\nu(B_{F})\\}$ is a countable dense subset in $\mathcal{M}_{s1}(X)$ (one has the freedom to distribute mass among the elementary events in $B_{F}$, but we take only one such measure for each individual $B_{F}$ in $\prod_{1}$). To see this, let $\varrho\in\mathcal{M}_{s1}(X)$ be a probability measure. For any measurable set $A\in\mathcal{B}$ and any $\epsilon>0$, consider the neighbourhood $W_{s1}(\varrho,A,\epsilon)$. Without loss of generality suppose $A\neq X$ in the following. Now if $A=B_{F}$ for some $F\subset\mathbb{N}$ and $\\#F<\infty$, obviously there exists some $\nu\in\prod_{1}$, such that $\nu\in W_{s1}(\varrho,A,\epsilon)$. If $A=\cup_{i=1}^{\infty}A_{n_{i}}$ for some $\\{n_{i}\\}_{i=1}^{\infty}\subset\mathbb{N}$, then there exists $k\in\mathbb{N}$ large enough, such that $0<\varrho(A)-\varrho(\cup_{i=1}^{k}A_{n_{i}})<\cfrac{\epsilon}{4}$. Let $F^{*}=\\{n_{i}\\}_{i=1}^{k}$, so $B_{F^{*}}=\cup_{i=1}^{k}A_{n_{i}}$. Then there exists $\nu_{F^{*}}\in\prod_{1}$, such that $|\nu_{F^{*}}(B_{F^{*}})-\varrho(B_{F^{*}})|<\cfrac{\epsilon}{4}$ and $\nu_{F^{*}}(A\setminus B_{F^{*}})=0$. So $\begin{array}[]{ll}|\nu_{F^{*}}(A)-\varrho(A)|&=|\nu_{F^{*}}(B_{F^{*}})-\varrho(B_{F^{*}})+\nu_{F^{*}}(A\setminus B_{F^{*}})-\varrho(A\setminus B_{F^{*}})|\\\ &\leq|\nu_{F^{*}}(B_{F^{*}})-\varrho(B_{F^{*}})|+|\nu_{F^{*}}(A\setminus B_{F^{*}})-\varrho(A\setminus B_{F^{*}})|\\\ &=|\nu_{F^{*}}(B_{F^{*}})-\varrho(B_{F^{*}})|+|\varrho(A)-\varrho(\cup_{i=1}^{k}A_{n_{i}})|\\\ &<\epsilon/2.\end{array}$ This implies $\nu_{F^{*}}\in W_{s1}(\varrho,A,\epsilon)$, and thus justifies our claim. To see that the topological space $\mathcal{M}_{s1}(X)$ is second-countable, let $\mathcal{B}_{s}=\\{W_{s1}(\nu,A_{i},\frac{1}{j}):\nu\in\prod_{1},1\leq i,j<\infty\\}$ be the countable family of open sets. 
The proof that it can serve as a subbasis of the topological space $\mathcal{M}_{s1}(X)$ is left to the interested reader. ∎ Now we are in a position to prove Theorem 2.14. Proof of Theorem 2.14: ###### Proof. If the Borel $\sigma$-algebra $\mathcal{B}$ admits at most countably many elementary events, then there is a countable dense subset of $\mathcal{M}_{s1}(X)$ according to the proof of Lemma 4.3. If the $\sigma$-algebra $\mathcal{B}$ has uncountably many elementary events, consider the collection of all open neighbourhoods of the Dirac measures $\\{\delta_{A}:A\mbox{\ is an elementary event in\ }\mathcal{B}\\}$. One can see that $\\{W_{s1}(\delta_{A},A,1/2):A\mbox{\ is an elementary event in\ }\mathcal{B}\\}$ is an uncountable family of pairwise disjoint open sets. So $\mathcal{M}_{s1}(X)$ is not separable in this case. As for metrizability, if $\mathcal{B}$ has at most countably many elementary events, then considering Lemmas 4.1, 4.2 and 4.3, $\mathcal{M}_{s1}(X)$ is metrizable by the _Urysohn Metrization Theorem_ [Mun, Theorem 34.1]. If the $\sigma$-algebra $\mathcal{B}$ has uncountably many elementary events, it supports a continuous measure, which does not admit a countable neighbourhood basis according to [GR, Proposition 2.2.1(ii)], so $\mathcal{M}_{s1}(X)$ is not metrizable in this case. ∎ Now we turn to the proof of Theorem 2.4. Again we first establish some separation properties of the spaces $\mathcal{M}_{v1}(X)$ and $\mathcal{M}_{v2}(X)$. ###### 4.4 Lemma. The topological space $\mathcal{M}_{v1}(X)$ ($\mathcal{M}_{v2}(X)$) is Hausdorff if $X$ is LCH. ###### Proof. For two probability measures $\nu,\varrho\in\mathcal{M}_{v1}(X)$, if $\nu\neq\varrho$, according to the Riesz Representation Theorem (see for example [Kallen3, Theorem 2.22] or [Rud, Theorem 2.14]), there exists some $f\in C_{sc}(X)$, such that $\int_{X}fd\nu\neq\int_{X}fd\varrho$. Without loss of generality suppose $\int_{X}fd\nu<\int_{X}fd\varrho$. Now let $\epsilon=\cfrac{\int_{X}fd\varrho-\int_{X}fd\nu}{4}$. 
One can easily check that $W_{v1}(\nu,f,\epsilon)$ and $W_{v1}(\varrho,f,\epsilon)$ are two disjoint open neighbourhoods of $\nu$ and $\varrho$ respectively. ∎ ###### 4.5 Remark. Lemma 4.4 need not hold without the assumption of $X$ being LCH. For example, consider the affine space $\mathbb{A}^{n}$ endowed with the Zariski topology, which admits infinitely many closed sets. Since any continuous map on $\mathbb{A}^{n}$ is a constant map in this case, $\int_{\mathbb{A}^{n}}fd\varrho-\int_{\mathbb{A}^{n}}fd\nu=0$ for any $f\in C(\mathbb{A}^{n})$ and any two probability measures $\nu,\varrho\in\mathcal{M}(\mathbb{A}^{n})$. In this case a neighbourhood $W_{v1}(\nu,f,\epsilon)$ of $\nu$ is always the whole space $\mathcal{M}(\mathbb{A}^{n})$ for any $f\in C(\mathbb{A}^{n})$ and $\epsilon>0$. ###### 4.6 Lemma. The topological space $\mathcal{M}_{v1}(X)$ ($\mathcal{M}_{v2}(X)$) is regular. ###### Proof. The proof of Lemma 4.2 applies in these cases, with the role of the measurable function $f\in M_{b}(X)$ played by a continuous function $f\in C_{sc}(X)$ (or $f\in C_{0}(X)$). ∎ Note that Lemma 4.6 holds for any topological space $X$ instead of only for LCH spaces, in contrast to Lemma 4.4. ###### 4.7 Lemma. The probability spaces $\mathcal{M}_{v1}(X)$ and $\mathcal{M}_{v2}(X)$ are second-countable if the $\sigma$-algebra $\mathcal{B}$ admits at most countably many elementary events. ###### Proof. First, if the $\sigma$-algebra $\mathcal{B}$ has at most countably many elementary events, one can check that the set $\prod_{2}=\cup_{F\subset\mathbb{N},\\#F<\infty,j\in(\mathbb{N}\setminus F)}\big{\\{}\nu:\nu(A_{i})\in\mathbb{Q}\cap[0,1]\mbox{ for any }i\in F\cup\\{j\\}\mbox{ and }\sum_{i\in F\cup\\{j\\}}\nu(A_{i})=1\big{\\}}$ is still a dense subset in $\mathcal{M}_{v1}(X)$ and $\mathcal{M}_{v2}(X)$, by a similar argument as in the proof of Lemma 4.3. 
Note that the setwise topology $\mathfrak{W}_{s1}$ is finer than the Type-I and Type-II vague topology on $\mathcal{M}(X)$, so $\mathcal{M}_{v1}(X)$ and $\mathcal{M}_{v2}(X)$ are both second-countable in this case. ∎ Now we are in a position to prove Theorem 2.4. Proof of Theorem 2.4: ###### Proof. The separability follows from the fact that $\prod_{2}$ is a countable dense subset of $\mathcal{M}_{v1}(X)$ and $\mathcal{M}_{v2}(X)$. The metrizability follows from a combination of Lemmas 4.4, 4.6 and 4.7, by virtue of the Urysohn Metrization Theorem. ∎ Now we turn to our final goal in this section: the proof of Theorem 2.5. Together with Theorem 2.14, these results provide some incisive distinctions between the vague (weak) topology and the setwise topology on the probability space $\mathcal{M}(X)$ with a compact metric space $X$. A locally compact and $\sigma$-compact metric space $X$ admits a countable dense subset, say $X_{d}=\\{x_{1},x_{2},\cdots\\}$. Let $\Xi_{2}\subset\mathcal{M}(X)$ be the collection of all the discrete probability measures supported on finitely many points of $X_{d}$. One can easily check that $\Xi_{2}$ is separable under either of the two types of vague topology on it. ###### 4.8 Proposition (Tao). If $X$ is a locally compact and $\sigma$-compact metric space endowed with a metric $\rho$, then $\Xi_{2}$ is a dense subset of $\mathcal{M}(X)$ under the Type-I or Type-II vague topology. ###### Proof. It suffices for us to show that $\Xi_{2}$ is a dense subset of $\mathcal{M}(X)$ under the Type-II vague topology. Consider a probability measure $\nu\in\mathcal{M}(X)$. First, for any $\epsilon>0$, since $X$ is a locally compact and $\sigma$-compact metric space, there exists some compact set $X_{c}\subset X$, such that $\nu(X\setminus X_{c})<\epsilon$. Without loss of generality we assume $X_{c}\neq X$. 
Since $X_{d}$ is dense in $X$ and the residual set $X_{c}^{\prime}$ of the compact (closed) set $X_{c}$ is open, we choose some point $x_{*}\in X_{d}\cap X_{c}^{\prime}$. Let $I\subset\mathbb{N}$ be the collection of all indices $i$ such that $x_{i}\in X_{c}$. For any $f\in C_{0}(X)$ and any $\epsilon>0$, there exists some $\delta>0$, such that (4.1) $|f(x)-f(y)|<\epsilon$ for any $\rho(x,y)<\delta$ and $x,y\in X_{c}$. For any $x\in X$ and $r>0$, let $B(x,r)$ be the open ball in $X$ centred at $x$ with radius $r$. Let $N_{1}\in\mathbb{N}$ be large enough such that $\frac{2}{N_{1}}<\delta$. Since $\cup_{i\in I}B(x_{i},\frac{1}{N_{1}})$ covers $X_{c}$ and $X_{c}$ is compact, there exists a collection of finitely many indices $I_{N_{2}}=\\{i_{1},i_{2},\cdots,i_{N_{2}}\\}\subset I$ for some $N_{2}\in\mathbb{N}$, such that $\cup_{i\in I_{N_{2}}}B(x_{i},\frac{1}{N_{1}})$ covers $X_{c}$. Note that for any $i\in I_{N_{2}}$ we have (4.2) $\max_{x\in B(x_{i},\frac{1}{N_{1}})\cap X_{c}}f(x)-\min_{x\in B(x_{i},\frac{1}{N_{1}})\cap X_{c}}f(x)<\epsilon$ considering (4.1). Now define a discrete measure $\nu_{\epsilon}\in\Xi_{2}$ supported on $\cup_{i\in I_{N_{2}}}x_{i}\cup\\{x_{*}\\}$ as follows. * • $\nu_{\epsilon}(\\{x_{i_{1}}\\})=\nu\big{(}B(x_{i_{1}},\frac{1}{N_{1}})\cap X_{c}\big{)}$. * • $\nu_{\epsilon}(\\{x_{i_{2}}\\})=\nu\Big{(}\big{(}B(x_{i_{2}},\frac{1}{N_{1}})\cap X_{c}\big{)}\setminus B(x_{i_{1}},\frac{1}{N_{1}})\Big{)}$. * • $\nu_{\epsilon}(\\{x_{i_{3}}\\})=\nu\Big{(}\big{(}B(x_{i_{3}},\frac{1}{N_{1}})\cap X_{c}\big{)}\setminus\cup_{j=1}^{2}B(x_{i_{j}},\frac{1}{N_{1}})\Big{)}$. $\cdots$ * • $\nu_{\epsilon}(\\{x_{i_{N_{2}}}\\})=\nu\Big{(}\big{(}B(x_{i_{N_{2}}},\frac{1}{N_{1}})\cap X_{c}\big{)}\setminus\cup_{j=1}^{N_{2}-1}B(x_{i_{j}},\frac{1}{N_{1}})\Big{)}$. * • $\nu_{\epsilon}(\\{x_{*}\\})=\nu(X\setminus X_{c})$. 
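The discrete measure $\nu_{\epsilon}$ just constructed moves the mass of each piece of the ball cover onto a single point. A minimal numerical sketch of this kind of approximation, with $X=[0,1]$, a toy measure given by uniform weights on random sample points, a net of spacing $1/N_{1}$ in place of the ball centres, and an illustrative Lipschitz test function (all of these are assumptions of ours, not from the paper):

```python
import math
import random

random.seed(1)

# A toy measure nu on X = [0, 1]: uniform weights on random sample points.
sample = [random.random() for _ in range(2000)]
w = 1.0 / len(sample)

def f(x):
    # A continuous (4-Lipschitz) test function, illustrative choice.
    return math.cos(4 * x)

N1 = 100   # net spacing 1/N1, standing in for the balls B(x_i, 1/N1)

# nu_eps: move the mass near each net point x_i = i/N1 onto that point,
# mimicking the disjointified-ball construction above.
mass = {}
for x in sample:
    i = round(x * N1)   # nearest net point, within 1/(2*N1) of x
    mass[i] = mass.get(i, 0.0) + w

lhs = sum(w * f(x) for x in sample)                 # integral of f against nu
rhs = sum(m * f(i / N1) for i, m in mass.items())   # integral against nu_eps

# Each unit of mass moves by at most 1/(2*N1), and f is 4-Lipschitz:
assert abs(lhs - rhs) <= 4.0 / (2 * N1) + 1e-12
```

Shrinking the net spacing plays the role of (4.1)-(4.2): the oscillation of $f$ on each piece controls the gap between the two integrals.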
Comparing the integrals of $f$ with respect to $\nu$ and $\nu_{\epsilon}$ over $X$, we have $\begin{array}[]{ll}&\int_{X}fd\nu-\int_{X}fd\nu_{\epsilon}\\\ =&\int_{X_{c}}fd\nu-\int_{X_{c}}fd\nu_{\epsilon}+\int_{X\setminus X_{c}}fd\nu-\int_{X\setminus X_{c}}fd\nu_{\epsilon}\\\ =&\int_{B(x_{i_{1}},\frac{1}{N_{1}})\cap X_{c}}fd(\nu-\nu_{\epsilon})+\int_{\big{(}B(x_{i_{2}},\frac{1}{N_{1}})\cap X_{c}\big{)}\setminus B(x_{i_{1}},\frac{1}{N_{1}})}fd(\nu-\nu_{\epsilon})+\cdots\\\ &+\int_{\big{(}B(x_{i_{N_{2}}},\frac{1}{N_{1}})\cap X_{c}\big{)}\setminus\cup_{j=1}^{N_{2}-1}B(x_{i_{j}},\frac{1}{N_{1}})}fd(\nu-\nu_{\epsilon})+\int_{X\setminus X_{c}}fd(\nu-\nu_{\epsilon})\\\ \leq&\epsilon\nu_{\epsilon}(\\{x_{i_{1}}\\})+\epsilon\nu_{\epsilon}(\\{x_{i_{2}}\\})+\cdots+\epsilon\nu_{\epsilon}(\\{x_{i_{N_{2}}}\\})+2\epsilon\|f\|_{\infty}\\\ \leq&(1+2\|f\|_{\infty})\epsilon.\end{array}$ By symmetry the same bound holds with $\nu$ and $\nu_{\epsilon}$ interchanged, so $|\int_{X}fd\nu-\int_{X}fd\nu_{\epsilon}|\leq(1+2\|f\|_{\infty})\epsilon$. This means that $\nu_{\epsilon}\in W_{v2}\big{(}\nu,f,(1+2\|f\|_{\infty})\epsilon\big{)}$, which implies $\Xi_{2}$ is a dense subset of $\mathcal{M}(X)$ under the Type-II vague topology. ∎ Equipped with Proposition 4.8, we are ready to prove Theorem 2.5. Proof of Theorem 2.5: ###### Proof. First note that if $X$ is a compact metric space, then according to Theorem 2.3, it suffices for us to prove the separability and metrizability of either space $\mathcal{M}_{v1}(X)$ or $\mathcal{M}_{v2}(X)$. If $X$ is a compact metric space, all the probability measures in $\mathcal{M}(X)$ are Radon measures. A compact metric space is of course locally compact and $\sigma$-compact, so the separability of $\mathcal{M}_{v1}(X)$ follows directly from Proposition 4.8. In the following we show $\mathcal{M}_{v1}(X)$ is metrizable in case $X$ is a compact metric space. Considering Lemma 4.4 and Lemma 4.6, by virtue of the Urysohn Metrization Theorem, it suffices for us to show $\mathcal{M}_{v1}(X)$ is second-countable. Let $\Pi_{3}\subset\Xi_{2}$ be a countable dense subset of $\mathcal{M}_{v1}(X)$. 
Since $C(X)=C_{sc}(X)=C_{0}(X)$ on any compact metric space $X$, the space $C_{sc}(X)$ is separable according to [Tao, Proposition 1.10.20.]. Let $\Xi_{3}\subset C_{sc}(X)$ be a countable dense subset (note that [Tao, Proposition 1.10.20.] actually asserts that $C_{sc}(X)$ is separable under the supremum norm on it, which obviously implies separability under the $L^{1}$ norm on it). One can easily check that the following collection of open sets is a countable basis under $\mathfrak{W}_{v1}$, $\cup_{\nu\in\Pi_{3},f\in\Xi_{3},n\in\mathbb{N}}W_{v1}(\nu,f,\frac{1}{n})$. This justifies that $\mathcal{M}_{v1}(X)$ is second-countable. ∎ Theorem 2.5 together with Theorem 2.14 has some interesting applications to probability spaces over some popular compact metric ambient spaces. ###### 4.9 Corollary. Let $X=[0,1]$ or $X=\overline{B(\mathbb{0},1)}\subset\mathbb{R}^{n}$ be endowed with the Euclidean metric on it. Then the probability space $\mathcal{M}(X)$ is separable and metrizable under the Type-I or Type-II vague topology, while it is neither separable nor metrizable under the Type-I or Type-II setwise topology. ## 5\. Relative compactness of families of probabilities in $\mathcal{M}(X)$ In this section we deal with the relative compactness of families of probability measures in $\mathcal{M}(X)$, especially under the setwise topology. A general condition for families of probability measures to be relatively compact seems difficult to obtain when we merely assume that the ambient space $X$ is a topological space, so we limit our attention to metric ambient spaces in this section. We first recall some results on the relative compactness of families of probability measures in $\mathcal{M}(X)$ under the vague and weak topology. In the same way as we defined setwise relative compactness, we can define vague, weak or TV relative compactness of families of probability measures in $\mathcal{M}(X)$. 
A family $\Xi$ of probability measures in $\mathcal{M}(X)$ is said to be _tight_ if for any small $\epsilon>0$, there exists some compact set $K\subset X$ such that $\nu(K)>1-\epsilon$ for any $\nu\in\Xi$. In case of $X$ being a metric space, Prohorov gave the following condition for weak relative compactness of families of probability measures in $\mathcal{M}(X)$, see [Bil1, Theorem 5.1, Theorem 5.2]. ###### 5.1 Prohorov’s Theorem. Let $X$ be a metric space. If a family of probability measures $\Xi\subset\mathcal{M}(X)$ is tight, then it is weakly relatively compact. Conversely, if $X$ is a separable and complete metric space, then $\Xi$ is tight if it is weakly relatively compact. For the vague relative compactness of families of locally finite measures, see [Kallen1, Theorem 4.2]. As to the setwise relative compactness of a family $\Xi\subset\mathcal{M}(X)$, the condition of tightness is obviously inadequate, even if we require $X$ to be the best-behaved topological space in our consideration. ###### 5.2 Example. Let $X=[0,1]$ endowed with the Euclidean metric on it. Let $\Xi_{4}=\\{\delta_{\frac{1}{n}}\\}_{n=1}^{\infty}$ be the sequence of Dirac measures supported on $\\{\frac{1}{n}\\}$ for individual $n\in\mathbb{N}$. In the above example the ambient space $X$ is a compact, separable, complete metric space, so $\Xi_{4}$ is tight, while one cannot find any setwisely convergent subsequence in $\Xi_{4}$. The obstruction to a setwisely convergent subsequence in $\Xi_{4}$ is that mass leaks between some open sets and their closed (compact) subsets when taking the limit superior along the sequence: for the open set $U=(0,1)\subset X$ in Example 5.2, we have $\limsup_{i\rightarrow\infty}\delta_{\frac{1}{n_{i}}}(U)=1>\sup_{K\subset U,K\mbox{ is closed}}\limsup_{i\rightarrow\infty}\delta_{\frac{1}{n_{i}}}(K)=0$ for any subsequence $\\{n_{i}\\}_{i=1}^{\infty}\subset\mathbb{N}$. 
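Example 5.2 can be checked numerically: against any continuous test function the Dirac measures $\delta_{1/n}$ behave like $\delta_{0}$ in the limit, yet their masses on the open set $U=(0,1)$ stay at $1$ while $\delta_{0}(U)=0$, so no subsequence can converge setwise. The test function $\cos$ below is an arbitrary illustrative choice.

```python
import math

def f(x):
    # An arbitrary continuous test function on [0, 1].
    return math.cos(x)

# Vague/weak behaviour: integrals of f against delta_{1/n} tend to f(0),
# so delta_{1/n} converges vaguely to delta_0.
vals = [f(1.0 / n) for n in range(1, 10001)]
assert abs(vals[-1] - f(0.0)) < 1e-3

# Setwise behaviour on the open set U = (0, 1): delta_{1/n}(U) = 1 for all
# n >= 2, while the vague limit delta_0 satisfies delta_0(U) = 0.
def in_U(x):
    return 0.0 < x < 1.0

masses = [1.0 if in_U(1.0 / n) else 0.0 for n in range(2, 100)]
assert all(m == 1.0 for m in masses)
assert not in_U(0.0)
```

The gap between $\limsup_{n}\delta_{1/n}(U)=1$ and the supremum over closed $K\subset U$ is exactly the mass-leak phenomenon displayed above.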
This motivates condition (5.8) on the setwise relative compactness of families of probability measures in $\mathcal{M}(X)$. ###### 5.3 Lemma. A compact metric space $X$ admits a countable base $\mathcal{B}_{c}$ such that if $x\in U$ for some open set $U\subset X$, then there is some $B\in\mathcal{B}_{c}$ such that $x\in B\subset\bar{B}\subset U$, in which $\bar{B}$ is the closure of $B$. ###### Proof. First, since a compact metric space is second-countable, we can find a countable base $\mathcal{B}_{c1}$. Then according to [Bil1, p237, Theorem], there is a countable collection $\mathcal{B}_{c2}$ of open sets such that if $x\in U$ for some open set $U\subset X$, then there is some $B\in\mathcal{B}_{c2}$ such that $x\in B\subset\bar{B}\subset U$. Then the countable base $\mathcal{B}_{c}=\mathcal{B}_{c1}\cup\mathcal{B}_{c2}$ satisfies the requirement of the lemma. ∎ Proof of Theorem 2.15: ###### Proof. First, if a family of probability measures $\Xi$ is setwisely relatively compact, then for any sequence $\\{\nu_{n}\\}_{n=1}^{\infty}\subset\Xi$, we can find a subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$, such that $\nu_{n_{i}}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $i\rightarrow\infty$ for some probability measure $\nu\in\mathcal{M}(X)$. Since $\lim_{i\rightarrow\infty}\nu_{n_{i}}(K)=\nu(K)$ and $\lim_{i\rightarrow\infty}\nu_{n_{i}}(U)=\nu(U)$ for any open set $U\subset X$ and any closed set $K\subset U$ (any closed set is compact, since the ambient space $X$ is now assumed to be compact), we have $\lim_{i\rightarrow\infty}\nu_{n_{i}}(U)=\sup_{K\subset U,K\mbox{ is closed}}\lim_{i\rightarrow\infty}\nu_{n_{i}}(K)$ as $\nu$ is a regular measure on the metric space $X$. This of course guarantees that the subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$ satisfies (5.8). Now we show the converse is also true. 
Suppose for any sequence of probability measures $\\{\nu_{n}\\}_{n=1}^{\infty}\subset\Xi$, we can find a subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$ satisfying (5.8) for any open set $U\subset X$. Building on the technique used to construct a weakly convergent subsequence of probability measures in the proof of Prohorov’s Theorem [Bil1, p60], we will find a setwisely convergent subsequence of the sequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$. Since $X$ is a compact metric space, it is second-countable. By virtue of Lemma 5.3, let $\mathcal{B}_{c}=\\{B_{i}\\}_{i=1}^{\infty}$ be a countable basis such that if $x\in U$ for some open set $U\subset X$, there is some $j\in\mathbb{N}$ such that $x\in B_{j}\subset\bar{B}_{j}\subset U$, in which $\bar{B}_{j}$ is the closure of $B_{j}$. Let $\mathcal{\bar{B}}_{c}=\big{\\{}B:B=\cup_{j=1}^{n}\bar{B}_{i_{j}}\mbox{ with }\\{i_{j}\\}_{j=1}^{n}\subset\mathbb{N}\big{\\}}$. Now consider the sequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$ satisfying (5.8). Since $\mathcal{\bar{B}}_{c}$ is countable, we can find a subsequence $\\{\nu_{n_{i_{j}}}\\}_{j=1}^{\infty}$ of $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$, such that $\lim_{j\rightarrow\infty}\nu_{n_{i_{j}}}(B)$ exists for any $B\in\mathcal{\bar{B}}_{c}$. Then we can find a probability measure $\nu\in\mathcal{M}(X)$, such that (5.1) $\nu(U)=\sup_{B\subset U,B\in\mathcal{\bar{B}}_{c}}\lim_{j\rightarrow\infty}\nu_{n_{i_{j}}}(B)\leq\liminf_{j\rightarrow\infty}\nu_{n_{i_{j}}}(U)$ for any open set $U\subset X$. We claim now that the limit $\lim_{j\rightarrow\infty}\nu_{n_{i_{j}}}(U)$ exists for any open set $U\subset X$; moreover, we have (5.2) $\nu(U)=\lim_{j\rightarrow\infty}\nu_{n_{i_{j}}}(U)$ for any open set $U\subset X$. This is enough to show that $\nu_{n_{i_{j}}}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $j\rightarrow\infty$ by virtue of Theorem 2.12, which completes the proof. 
To show the claim, note that the sequence $\\{\nu_{n_{i_{j}}}\\}_{j=1}^{\infty}$ satisfies (5.3) $\limsup_{j\rightarrow\infty}\nu_{n_{i_{j}}}(U)=\sup_{K\subset U,K\mbox{ is closed}}\limsup_{j\rightarrow\infty}\nu_{n_{i_{j}}}(K)$ for any open set $U\subset X$. Since any metric space is normal [Tao], for any closed set $K\subset U$, we can find an open set $U_{K}$, such that $K\subset U_{K}\subset\bar{U}_{K}\subset U$. As $U_{K}$ can be written as a union of sets in $\mathcal{B}_{c}$ whose closures are all in $U$, and $K$ is compact, $K$ can be covered by a finite union of such sets. This means that there exists some $B\in\mathcal{\bar{B}}_{c}$ such that $K\subset B\subset U$, which guarantees that (5.4) $\sup_{B\subset U,B\in\mathcal{\bar{B}}_{c}}\lim_{j\rightarrow\infty}\nu_{n_{i_{j}}}(B)=\sup_{K\subset U,K\mbox{ is closed}}\limsup_{j\rightarrow\infty}\nu_{n_{i_{j}}}(K)$ for any open set $U\subset X$. Now combining (5.1), (5.3) and (5.4), we have $\limsup_{j\rightarrow\infty}\nu_{n_{i_{j}}}(U)=\nu(U)\leq\liminf_{j\rightarrow\infty}\nu_{n_{i_{j}}}(U)$ for any open set $U\subset X$, which implies our claim and (5.2). ∎ Checking that condition (5.8) holds for every open set $U\subset X$ in Theorem 2.15 may seem a rather tough job; in fact, we only need to check that it holds for countably many open sets $U\subset X$. ###### 5.4 Corollary. For a compact metric space $X$, there exists a countable collection $\mathcal{B}_{cf}$ of open sets in $X$, such that a family of probability measures $\Xi\subset\mathcal{M}(X)$ is setwisely relatively compact if and only if for any sequence of probability measures $\\{\nu_{n}\\}_{n=1}^{\infty}\subset\Xi$, there is a subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$, such that (5.8) holds for any $U\in\mathcal{B}_{cf}$. ###### Proof. It suffices for us to show the sufficiency. In case of $X$ being a compact metric space, let $\mathcal{B}_{c1}$ be a countable base. 
Now let $\mathcal{B}_{cf}=\\{B:B\mbox{ is a finite union of sets in }\mathcal{B}_{c1}\\}$. $\mathcal{B}_{cf}$ is a countable set. Now suppose that for any sequence of probability measures $\\{\nu_{n}\\}_{n=1}^{\infty}\subset\Xi$, (5.8) holds for some subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$ on any open set in $\mathcal{B}_{cf}$. Since $\mathcal{B}_{cf}$ is countable, we can find a subsequence $\\{\nu_{n_{i_{j}}}\\}_{j=1}^{\infty}$ of $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$, such that $\lim_{j\rightarrow\infty}\nu_{n_{i_{j}}}(B)$ exists for any $B\in\mathcal{B}_{cf}$. For any open set $U\subset X$, let $U=\cup_{j=1}^{\infty}A_{j}$ with $A_{j}\in\mathcal{B}_{c1}$ for any $j\in\mathbb{N}$. One can show that (5.5) $\limsup_{i\rightarrow\infty}\nu_{n_{i}}(U)=\lim_{m\rightarrow\infty}\limsup_{i\rightarrow\infty}\nu_{n_{i}}(\cup_{j=1}^{m}A_{j}).$ Note that $\cup_{j=1}^{m}A_{j}\in\mathcal{B}_{cf}$. According to the assumption, we have (5.6) $\begin{array}[]{ll}&\lim_{m\rightarrow\infty}\limsup_{i\rightarrow\infty}\nu_{n_{i}}(\cup_{j=1}^{m}A_{j})\\\ =&\lim_{m\rightarrow\infty}\sup_{K\subset\cup_{j=1}^{m}A_{j},K\mbox{ is closed}}\limsup_{i\rightarrow\infty}\nu_{n_{i}}(K)\\\ \leq&\sup_{K\subset U,K\mbox{ is closed}}\limsup_{i\rightarrow\infty}\nu_{n_{i}}(K).\end{array}$ Now combining (5.5), (5.6) together with the following obvious fact (5.7) $\limsup_{i\rightarrow\infty}\nu_{n_{i}}(U)\geq\sup_{K\subset U,K\mbox{ is closed}}\limsup_{i\rightarrow\infty}\nu_{n_{i}}(K),$ we conclude that (5.8) holds for the subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$ and any open set $U\subset X$. This finishes the proof by virtue of Theorem 2.15. ∎ The reader is encouraged to compare Corollary 5.4 with [FKZ1, Theorem 3.3.] and [FKZ1, Lemma 3.4.]. Theorem 2.15 and Corollary 5.4 can of course be applied to test the setwise convergence of sequences of probability measures, with the aid of the following result. ###### 5.5 Lemma. 
For any topological space $X$, a sequence of probability measures $\\{\nu_{n}\\}_{n=1}^{\infty}\subset\mathcal{M}(X)$ converges vaguely, setwisely or TV to some $\nu\in\mathcal{M}(X)$ as $n\rightarrow\infty$ if and only if every subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$ of $\\{\nu_{n}\\}_{n=1}^{\infty}$ contains a further subsequence $\\{\nu_{n_{i_{j}}}\\}_{j=1}^{\infty}$, such that $\\{\nu_{n_{i_{j}}}\\}_{j=1}^{\infty}$ converges vaguely, setwisely or TV to the probability measure $\nu$ as $j\rightarrow\infty$ respectively. ###### Proof. The necessity is obvious, while the sufficiency follows from a simple argument by contradiction, see [Bil1, Theorem 2.6.]. ∎ Theorem 2.15 together with Lemma 5.5 yields the following criterion for the setwise convergence of sequences of probability measures in $\mathcal{M}(X)$ when the ambient space is a compact metric space. ###### 5.6 Corollary. Let $X$ be a compact metric space. Now if for a sequence of probability measures $\\{\nu_{n}\\}_{n=1}^{\infty}\subset\mathcal{M}(X)$, there is a subsequence $\\{\nu_{n_{i}}\\}_{i=1}^{\infty}$, such that (5.8) $\limsup_{i\rightarrow\infty}\nu_{n_{i}}(U)=\sup_{K\subset U,K\mbox{ is closed}}\limsup_{i\rightarrow\infty}\nu_{n_{i}}(K)$ for any open set $U\subset X$, and every setwisely convergent subsequence converges to the same probability measure $\nu\in\mathcal{M}(X)$, then $\nu_{n}\stackrel{{\scriptstyle s}}{{\rightarrow}}\nu$ as $n\rightarrow\infty$. ## 6\. Some further discussions on the topology on $\mathcal{M}(X)$ In this section we discuss further kinds of topology on $\mathcal{M}(X)$, their relationships and their topological properties. We will formulate some open problems along the way. One problem concerns the comparison between various other topologies on $\mathcal{M}(X)$. 
In the above sections we made a comparison between the two types of vague topology, as well as the two types of setwise topology, on the probability space $\mathcal{M}(X)$. Similar questions arise for the comparison between other kinds of topology on $\mathcal{M}(X)$. For example, considering the Portmanteau theorem (see for example [Kle, Theorem 13.16], [Bil1, Theorem 2.1] and [HL1, Theorem 1.4.16]), we define the following three types of weak topology on $\mathcal{M}(X)$, with the ambient space $X$ being a general topological space. ###### 6.1 Definition. The _Type-II weak topology_ $\mathfrak{W}_{w2}$ on $\mathcal{M}(X)$ is the topology with subbasis $W_{w2}(\nu,A,\epsilon)=\\{\varrho\in\mathcal{M}(X):|\varrho(A)-\nu(A)|<\epsilon\\}$ for $A\in\mathcal{B}$ with $\nu(\partial A)=0$ and any real $\epsilon>0$. ###### 6.2 Definition. The _Alexandrov topology_ $\mathfrak{W}_{w3}$ on $\mathcal{M}(X)$ is the topology with subbasis $W_{w3}(\nu,A,\epsilon)=\\{\varrho\in\mathcal{M}(X):\varrho(A)>\nu(A)-\epsilon\\}$ for any open set $A\in\mathcal{B}$ and any real $\epsilon>0$. Refer to [Ale], [Bla] and [Kal] for the Alexandrov topology on $\mathcal{M}(X)$. ###### 6.3 Definition. The _Type-IV weak topology_ $\mathfrak{W}_{w4}$ on $\mathcal{M}(X)$ is the topology with subbasis $W_{w4}(\nu,A,\epsilon)=\\{\varrho\in\mathcal{M}(X):\varrho(A)<\nu(A)+\epsilon\\}$ for any closed set $A\in\mathcal{B}$ and any real $\epsilon>0$. These types of weak topology are induced from equivalent descriptions of weak sequential convergence of measures in $\mathcal{M}(X)$ with a metric ambient space $X$; refer to [Bil1, Bil2, Kallen2, Kallen5, Kle, Las2, Mat]. ###### 6.4 Problem. How do the Type-I, Type-II, Alexandrov and Type-IV weak topologies on $\mathcal{M}(X)$ compare in fineness when $X$ is a general topological space?
Another concern is the behaviour of $\mathcal{M}(X)$ under the various topologies, for example its separation properties, metrizability or completeness. Our Theorems 2.4 and 2.5, together with [Kallen1, Theorem 4.2], provide some conditions on the separability and metrizability of the vague topology on $\mathcal{M}(X)$ under certain assumptions on the ambient space. The following problem remains. ###### 6.5 Problem. In case the ambient space $X$ is not a separable and complete metric space but its Borel $\sigma$-algebra $\mathcal{B}$ admits uncountably many elementary events, is the probability space $\mathcal{M}_{v1}(X)$ or $\mathcal{M}_{v2}(X)$ separable or metrizable? It is well known that the Type-II weak topology on the probability space $\mathcal{M}(X)$ can be induced by the _Prohorov metric_ if the ambient space $X$ is a separable and complete metric space, see [Bil1, p. 72]. According to Theorem 2.14, the probability space $\mathcal{M}(X)$ is metrizable in case the Borel $\sigma$-algebra $\mathcal{B}$ admits at most countably many elementary events; it is then natural to try to give an explicit metric which induces the setwise topology on $\mathcal{M}(X)$ in this case. ###### 6.6 Problem. In the context of Theorem 2.14, can one give an explicit metric which induces the setwise topology in case the probability space $\mathcal{M}(X)$ is metrizable? How does such a metric relate to the Prohorov metric and the TV metric on $\mathcal{M}(X)$? We have obtained a necessary and sufficient condition for setwise relative compactness of families of probability measures in $\mathcal{M}(X)$ when the ambient space $X$ is a compact metric space. The question for non-compact metric spaces, or even non-metrizable spaces, is still open. ###### 6.7 Problem.
Can one give a necessary and sufficient condition for a family of probability measures in $\mathcal{M}(X)$ to be setwise relatively compact in case the ambient space $X$ is a non-compact metric space or even a non-metrizable space? We suspect that, on a non-compact metric space, tightness of the family together with the existence, for any sequence in the family, of a subsequence satisfying (5.8) on every open $U\subset X$ may be sufficient for setwise relative compactness of the family. ## 7\. General $F$-topology and $S$-topology on $\mathcal{M}(X)$ We studied topological properties of $\mathcal{M}(X)$ under the vague, weak, setwise and TV topologies in the above sections; however, these four kinds of topology may still not be sensitive enough for particular problems one encounters. The concepts of $F$-topology and $S$-topology on $\mathcal{M}(X)$ allow one to work with a topology on $\mathcal{M}(X)$ tailored to one's needs. ###### 7.1 Definition. Let $F$ be some family of functions from $X$ to $\mathbb{R}$. The $F$-topology $\mathfrak{W}_{F}$ on $\mathcal{M}(X)$ is the topology with subbasis $W_{F}(\nu,f,\epsilon)=\\{\varrho\in\mathcal{M}(X):|\int_{X}f(x)d\varrho-\int_{X}f(x)d\nu|<\epsilon\\}$ for any $f\in F$ and any real $\epsilon>0$. ###### 7.2 Definition. Let $S\subset\mathcal{B}$ be some family of Borel sets. The $S$-topology $\mathfrak{W}_{S}$ on $\mathcal{M}(X)$ is the topology with subbasis $W_{S}(\nu,A,\epsilon)=\\{\varrho\in\mathcal{M}(X):|\varrho(A)-\nu(A)|<\epsilon\\}$ for any $A\in S$ and any real $\epsilon>0$. By suitable choices of the family of functions $F$ or sets $S$, one can define the induced topology on $\mathcal{M}(X)$, which may prove decisive in solving particular problems. Theoretically, it is also interesting to ask how to interpret the $F$-topology in terms of the $S$-topology, or vice versa, for particular families $F$ or $S$.
Some algebraic structures (such as lattice structures) are possible on the collections of $F$-topologies and $S$-topologies, in view of the natural algebraic structures on the collections of families $F$ and $S$, respectively. Separability, metrizability and relative compactness will vary with the choice of the family of functions $F$ or sets $S$. ## References * [Ale] A. D. Alexandrov, Additive set functions in abstract spaces, Mat. Sbornik, N.S., (1940). * [Bil1] P. Billingsley, Convergence of Probability Measures, 2nd Edition, John Wiley & Sons, 1999. * [Bil2] P. Billingsley, Probability and measure, 3rd Edition, John Wiley & Sons, 1995. * [Bla] J. H. Blau, The space of measures on a given set, Fundamenta Mathematicae, 38 (1951). * [CK] J. T. Cox and A. Klenke, Recurrence and ergodicity of interacting particle systems, Probability Theory and Related Fields, Volume 116 (2000), pp. 239-255. * [CKP] J. T. Cox, A. Klenke and E. A. Perkins, Convergence to equilibrium and linear systems duality, In Stochastic Models (Ottawa, ON, 1998), CMS Conference Proceedings 26, 41-66, Amer. Math. Soc., Providence, RI. * [Doo] J. Doob, Measure Theory, Graduate Texts in Mathematics, 143, Springer-Verlag, New York, 1994. * [FK] E. Feinberg and P. Kasyanov, MDPs with Setwise Continuous Transition Probabilities, arXiv:2011.01325 [math.OC], 2021. * [FKL] E. Feinberg, P. Kasyanov and Y. Liang, Fatou’s Lemma in Its Classical Form and Lebesgue’s Convergence Theorems for Varying Measures with Applications to Markov Decision Processes, Theory Probab. Appl., 65(2), 270-291. * [FKZ1] E. Feinberg, P. Kasyanov and M. Zgurovsky, Convergence of probability measures and Markov decision models with incomplete information, Proceedings of the Steklov Institute of Mathematics, December 2014, Volume 287, Issue 1, pp. 96-117. * [FKZ2] E. Feinberg, P. Kasyanov and M. Zgurovsky, Uniform Fatou’s lemma, J. Math. Anal. Appl. 444 (2016), 550-567. * [FKZ3] E. Feinberg, P. Kasyanov and M.
Zgurovsky, Partially observable total-cost Markov decision processes with weakly continuous transition probabilities, Mathematics of Operations Research, Volume 41, Issue 2, Pages 377-744, May 2016. * [FKZ4] E. Feinberg, P. Kasyanov and M. Zgurovsky, Markov Decision Processes with Incomplete Information and Semi-Uniform Feller Transition Probabilities, arXiv:2103.13256 [math.OC], 2021. * [Fol] G. B. Folland, Real Analysis: Modern Techniques and their Applications, 2nd ed., John Wiley, 2007. * [GR] J. K. Ghosh and R. V. Ramamoorthi, Bayesian Nonparametrics, Springer, New York, 2003. * [HL1] O. Hernández-Lerma and J. Lasserre, Markov Chains and Invariant Probabilities, Progress in Mathematics, Birkhäuser Basel, 2003. * [HL2] O. Hernández-Lerma and J. Lasserre, An Extension of the Vitali-Hahn-Saks Theorem, Proceedings of the American Mathematical Society, Vol. 124, No. 12 (Dec., 1996), pp. 3673-3676. * [HL3] O. Hernández-Lerma and J. Lasserre, Criteria for Positive Harris Recurrence of Markov Chains, Proceedings of the American Mathematical Society, Vol. 129, No. 5 (May, 2001), pp. 1521-1524. * [JW] L. Janos and R. Williamson, Constructing metrics with the Heine-Borel property, Proc. Amer. Math. Soc., Vol. 100 (1987), 567-573. * [Kallen1] O. Kallenberg, Random Measures, Theory and Applications. In: Probability Theory and Stochastic Modelling, vol. 77, Springer-Verlag, New York, 2017. * [Kallen2] O. Kallenberg, Characterization and Convergence of Random Measures and Point Processes, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, volume 27, pages 9-21 (1973). * [Kallen3] O. Kallenberg, Foundations of Modern Probability, 2nd ed., New York: Springer, 2002. * [Kallen4] O. Kallenberg, On Symmetrically Distributed Random Measures, Transactions of the American Mathematical Society, Vol. 202 (Feb., 1975), pp. 105-121. * [Kallen5] O. Kallenberg, Splitting at Backward Times in Regenerative Sets, Annals of Probability, Vol. 9, No. 5 (Oct., 1981), pp. 781-799.
* [Kal] G. Kallianpur, The Topology of Weak Convergence of Probability Measures, Journal of Mathematics and Mechanics, Vol. 10, No. 6 (1961), pp. 947-969. * [Kle] A. Klenke, Probability Theory: A Comprehensive Course, 2nd Edition, Springer, 2013. * [Las1] J. Lasserre, On the setwise convergence of sequences of measures, Journal of Applied Mathematics and Stochastic Analysis, 10:2 (1997), 131-136. * [Las2] J. Lasserre, Weak Convergences of Probability Measures: A Uniform Principle, Proceedings of the American Mathematical Society, Vol. 126, No. 10 (Oct., 1998), pp. 3089-3096. * [Li] X. Li, Measure and integration, Lecture notes, http://www.xuemei.org/Measure-Integration.pdf, 2020. * [Ma1] L. Ma, On the Hausdorff dimensions of exploding measures originated from IFS, arXiv:1909.03390 [math.DS], 2020. * [Ma2] L. Ma, Families of infinite parabolic IFS with overlaps: the approximating method, arXiv:2101.09695 [math.DS], 2021. * [Mat] P. Mattila, Geometry of sets and measures in Euclidean spaces, Fractals and rectifiability, Cambridge Studies in Advanced Mathematics, 44, Cambridge University Press, Cambridge, England, 1995. * [Mun] J. Munkres, Topology (2nd Edition), Pearson Modern Classics for Advanced Mathematics Series, 2000. * [PS] K. Parthasarathy and T. Steerneman, A Tool in Establishing Total Variation Convergence, Proceedings of the American Mathematical Society, Vol. 95, No. 4 (Dec., 1985), pp. 626-630. * [Rud] W. Rudin, Real and complex analysis, 3rd ed., McGraw-Hill, New York, 1987. * [Sch] J. Schur, Über lineare Transformationen in der Theorie der unendlichen Reihen, Journal für die reine und angewandte Mathematik, Volume 151 (1921), 79-111. * [Tao] T. Tao, An Epsilon of Room, I: Real Analysis: pages from year three of a mathematical blog, Graduate Studies in Mathematics, Volume 117, 2010. * [Tay] M. Taylor, Measure theory and integration, Graduate Studies in Mathematics, Volume 76, American Math. Society, 2006.
# Deformed Morse-like potential I. A. Assi<EMAIL_ADDRESS>Department of Physics and Physical Oceanography, Memorial University of Newfoundland, St. John’s, Newfoundland & Labrador, A1B 3X7, Canada A. D. Alhaidari Saudi Center for Theoretical Physics, P.O. Box 32741, Jeddah 21438, Saudi Arabia H. Bahlouli Physics Department, King Fahd University of Petroleum Minerals, Dhahran 31261, Saudi Arabia (August 27, 2024) ###### Abstract We introduce an exactly solvable one-dimensional potential that supports bound and/or resonance states. This potential is a generalization of the well-known 1D Morse potential, in which we introduce a deformation that preserves the finite-spectrum property. In the limit of zero deformation, on the other hand, the potential reduces to the exponentially confining potential well introduced recently by A. D. Alhaidari. The latter potential supports an infinite spectrum, which means that the zero-deformation limit is a critical point at which the system transitions from a finite to an infinite spectrum. We solve the corresponding Schrödinger equation and obtain the energy spectrum and the eigenstates using the tridiagonal representation approach. ## I Introduction Studying exactly solvable potentials has been a topic of great interest since the birth of quantum mechanics. Many techniques have been designed with the aim of obtaining exact solutions of the Schrödinger equation. These methods include, but are not limited to, the factorization method, point canonical transformations, the supersymmetry approach, shape invariance, the Darboux transformation, second quantization, the asymptotic iteration method, group-theoretical approaches, path-integral transformations and the Nikiforov-Uvarov method. For a brief description of these methods the reader can consult the corresponding literature.F. Cooper, A. Khare and U. Sukhatme (1995); M. Bander, and C. Itzykson (1966); Y. Alhassid, F. Iachello, and F. Gürsey (1983); L. Infeld, and T.
E. Hull (1951); H. Ciftci, R. L. Hall, and N. Saad (2005); R. De, R. Dutt and U. Sukhatme (1992); A. F. Nikiforov and V. B. Uvarov (1988) Our group has devised a new algebraic method called the “Tridiagonal Representation Approach” (TRA), which has enabled us to enlarge the class of exactly solvable potentials. For more details on the TRA, the reader is encouraged to refer to the recent summary of this method and references therein.Alhaidari and Bahlouli (2019) In this approach, the exact solution of the corresponding Schrödinger equation is expressed as a bounded series in a suitably selected square-integrable basis set. Requiring a tridiagonal representation of the wave equation in this basis set results in a three-term recursion relation for the expansion coefficients of the series, which is then solved in terms of orthogonal polynomials in the energy and physical parameter spaces. In the present work we introduce the following exactly solvable 1D deformed Morse-like potential ${{V}_{q}}(x)=A{{\left({{e}^{\lambda x}}+q\right)}^{-2}}+B\left({{e}^{\lambda x}}+q\right)^{-1}+C{{e}^{\lambda x}}-\frac{A+qB}{q^{2}}$ (1) where $x\in(-\infty,+\infty)$ and the potential parameters $\\{q,A,B,C,\lambda\\}$ are real such that $q>0$ and $C>0$, with $q$ the deformation parameter. The potential is designed so that it tends to zero as $x\to-\infty$ (the last constant term in (1) ensures this) and blows up as $x\to+\infty$. With these choices the potential can support resonances and/or bound states, or neither, depending on the range of the potential parameters.
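As a quick numerical sanity check of this design, one can verify that the subtracted constant in (1) is exactly what makes the potential vanish on the left. The sketch below is illustrative only; the function name and the parameter values (borrowed from the bound-state examples discussed later) are our own choices.

```python
import math

def V_q(x, A, B, C, q, lam=1.0):
    """Deformed Morse-like potential of Eq. (1)."""
    z = math.exp(lam * x) + q
    return A / z**2 + B / z + C * math.exp(lam * x) - (A + q * B) / q**2

# Illustrative parameters (a bound-state case); any q > 0, C > 0 will do.
A, B, C, q = 2.0, -12.0, 1.0, 0.2
assert abs(V_q(-40.0, A, B, C, q)) < 1e-8   # tends to zero as x -> -infinity
assert V_q(40.0, A, B, C, q) > 1e10         # blows up as x -> +infinity
```

As $x\to-\infty$ the first two terms tend to $A/q^{2}+B/q$, which the constant $-(A+qB)/q^{2}$ cancels exactly.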
For instance, for $C=q=0$, this potential reduces to the well-known 1D Morse potential with a finite energy spectrum,Morse (1929) and when $q=0$ (with $C>0$), it becomes the exponentially confining potential well introduced recently by Alhaidari, with an infinite energy spectrum.Alhaidari (2021) For other values of the potential parameters we will see that different situations can arise, in which only resonances, only bound states, both bound and resonance states, or none are allowed. In fact, we were able to generate an illuminating spectral phase diagram that shows the different regions of the potential parameter space where bound states and/or resonances can occur. The rest of this manuscript is organized as follows: In section II we investigate the analytical properties of the deformed Morse-like potential (1). In section III we present the theoretical derivation of the solvable potential (1) using the tridiagonal representation approach. In section IV we compute the energy spectrum associated with this potential using the potential parameter spectrum (PPS) method,A. D. Alhaidari, and H. Bahlouli (2020) which is a highly nontraditional eigenvalue problem. In section V we confirm the validity of our generated energy spectrum using a direct numerical diagonalization technique in a suitable $L^{2}$ basis, and the asymptotic iteration method.H. Ciftci, R. L. Hall, and N. Saad (2005); Sous (2021) In section VI we present and discuss illustrative numerical results, and in section VII we conclude and discuss possible extensions of this work and its potential applications. ## II The Potential Structure In this section we analyze all possible potential configurations that can sustain bound and/or resonance states. Starting with the potential (1), we first look for possible extrema, which are given by the roots of $\frac{dV_{q}}{dx}=0$ at $x=x_{0}$; we then get $2Az_{0}^{3}+Bz_{0}^{2}-C=0$ (2) where $z_{0}^{-1}={{e}^{\lambda x_{0}}}+q$.
If $A>0$ and $B<0$, then Descartes’ rule of signs dictates that this cubic equation has a single real positive root.D. R. Curtiss (1918) Now, if $0<z_{0}<1/q$, then the potential has a minimum and it can support only bound states. The second scenario holds when $B>0$ and $A<0$; Descartes’ sign rule then guarantees that this cubic equation has either two real positive roots or none. If both roots lie in the interval $(0,1/q)$ and are distinct, then the potential has a minimum and a maximum, and it can then support either resonances alone or a mix of resonance and bound states. Thirdly, if $A>0$ and $B>0$, then Descartes’ sign rule implies that the cubic equation (2) has only one positive real root; as in the first scenario, this case can support only bound states. Lastly, when $A<0$ and $B<0$, there is no real positive root and the potential can support neither bound states nor resonances. Obviously, in all cases the potential supports scattering states. These different situations are summarized in the five self-explanatory panels of Fig. 1. We have also created a video animation of the potential that shows the continuous transition from Fig. 1a to Fig. 1b as the potential parameters $A/C$ and $B/C$ are varied continuously. In addition, we plotted the spectral phase diagram in Fig. 2, which shows the different physical situations that the potential (1) can support based on the values of the parameters $\\{A,B\\}$, scaled in units of $C$. In Fig. 2(a), we took $q=0.5$ and set the scaling parameter $\lambda$ to unity. The blue region represents the part of the spectral phase diagram (SPD) where only bound (B) states can exist. In the green region a mix of bound (B) and resonance (R) states occurs, whereas in the red region only resonances occur. Neither bound nor resonance states can occur in the grey region. On the other hand, in Fig. 2(b), we varied $q$ and plotted the SPD as indicated.
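The Descartes root-counting argument above is easy to check numerically. The sketch below (the function name and the parameter values are our own illustrative choices) extracts the real roots of Eq. (2) that lie in the physical interval $(0,1/q)$:

```python
import numpy as np

def extrema_roots(A, B, C, q):
    """Real roots of 2*A*z0^3 + B*z0^2 - C = 0 (Eq. (2)) inside (0, 1/q)."""
    roots = np.roots([2.0 * A, B, 0.0, -C])        # note the missing linear term
    real = roots[np.abs(roots.imag) < 1e-10].real
    return sorted(r for r in real if 0.0 < r < 1.0 / q)

# A > 0, B < 0: a single positive root -> one minimum, bound states only
assert len(extrema_roots(2.0, -12.0, 1.0, 0.2)) == 1
# A < 0, B < 0: no positive root -> neither bound states nor resonances
assert len(extrema_roots(-2.0, -12.0, 1.0, 0.2)) == 0
```

Counting sign changes in the coefficient list $(2A,\,B,\,0,\,-C)$ reproduces the four scenarios discussed above.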
We find that increasing $q$ from $0.2$ to $0.8$ increases the size of the blue and green regions and suppresses the red region, while the border between the red and grey regions remains fixed (independent of $q$). Thus, the deformation parameter $q$ can change the state of the system at fixed parameters $\\{A,B,C\\}$, for example transforming regions bearing only resonances into regions allowing a mix of resonance and bound states. We have also provided a video animation showing how the SPD changes as $q$ is varied from $0.01$ to $0.8$ in equal steps of $0.01$. Finally, we find the equations of the boundaries in the SPD as follows. The boundary of the blue region is obtained by setting $V_{q}(x)=V^{\prime}_{q}(x)=0$ and $z_{0}=1/q$, giving $2A=q^{3}C-qB$ (3) The boundary between the grey region and the red region is obtained by solving $V^{\prime}_{q}(x)=V^{\prime\prime}_{q}(x)=0$, $B^{3}=27A^{2}C$ (4) with $A<0$ and $B>0$, and is independent of $q$. The third boundary, between the green and red regions, is obtained by solving $V_{q}(x)=V^{\prime}_{q}(x)=0$ while allowing only one of the roots $z_{0}$ to differ from $1/q$; we obtain $A=q^{2}\sqrt{BC}-q\left(B+q^{2}C\right)$ (5) Figure 1: Different possible configurations of the potential given by Eq. 1, which might support (a) only resonances, (b) bound and resonance states, (c) only bound states, and (d) & (e) none. All configurations support scattering states. Figure 2: (a) Snapshot of a video animation of the spectral phase diagram showing several regions: only bound (B) states can exist in the blue region, only resonances (R) in the red region, a mixture of resonance and bound states in the green region, and neither type of state in the gray region. The dashed line represents the TRA lower limit on the potential parameter $A$ scaled in units of $C$.
Here we took $\lambda=1$ and $q=0.5$. (b) The spectral phase diagram for several values of $q$; from top left to bottom right, $q=0.2,0.4,0.6,$ and $0.8$. ## III TRA Formulation Starting with the 1D Schrödinger equation (in units of $\hbar=m=1$), $\left[-\frac{1}{2}\frac{d^{2}}{dx^{2}}+V(x)-E\right]\psi(x)=0$ (6) we make the change of variable $y=\frac{2}{q}e^{\lambda x}+1\geq 1$, giving $\left[{{\left(y-1\right)}^{2}}\frac{{{d}^{2}}}{d{{y}^{2}}}+\left(y-1\right)\frac{d}{dy}+\varepsilon-U(y)\right]\psi\left(y\right)=0$ (7) where $\varepsilon=2E/\lambda^{2}$ and $U(y)=2V(y)/\lambda^{2}$. Following the TRA formalism, we expand the wavefunction as $\psi(y)=\sum_{n}f_{n}(E,\mathcal{P})\phi_{n}(y)$, where the expansion coefficients $f_{n}(E,\mathcal{P})$ are generally functions of the energy and of the potential parameters, which are lumped together in $\mathcal{P}$. In our case $\mathcal{P}=\\{A,B,C,q\\}$, and a suitable square-integrable basis function $\phi_{n}(y)$ is Alhaidari and Bahlouli (2019) ${{\phi}_{n}}(x)={{A}_{n}}{{\left(y-1\right)}^{\frac{\mu}{2}}}{{\left(y+1\right)}^{\frac{\nu+1}{2}}}P_{n}^{(\mu,\,\nu)}(y)$ (8) where $P_{n}^{(\mu,\,\nu)}(y)$ is the finite Jacobi polynomial defined in Appendix B, $n=0,1,\cdots,N$ for some non-negative integer $N$, $A_{n}$ is a normalization constant defined in Eq. 32, and $\\{\mu,\nu\\}$ are real basis parameters with $\mu>-1$ and $\mu+\nu<-2N-1$. Using the differential equation of the finite Jacobi polynomials (Appendix B), the action of the operator in Eq.
7 on the basis Eq. 8 is given by $\left[(y-1)^{2}\frac{d^{2}}{dy^{2}}+(y-1)\frac{d}{dy}+\varepsilon-U(y)\right]\phi_{n}(y)=A_{n}(y-1)^{\frac{\mu}{2}+1}(y+1)^{\frac{\nu-1}{2}}\Bigg{\\{}n(n+\mu+\nu+1)+\frac{\mu^{2}}{4}\frac{y+1}{y-1}\\\ +\frac{\nu^{2}-1}{4}\frac{y-1}{y+1}+\frac{(\mu+1)(\nu+1)}{2}+(\varepsilon-U(y))\frac{y+1}{y-1}\Bigg{\\}}P_{n}^{(\mu,\,\nu)}(y)$ (9) Requiring that the representation of this operator in the basis set $\phi_{n}(x)$ be tridiagonal, and looking for energy-independent potential solutions, we end up with the requirements $\varepsilon=-\frac{\mu^{2}}{4}$ and $\frac{1}{4}\left({{\nu}^{2}}-1\right)\frac{y-1}{y+1}-U(y)\frac{y+1}{y-1}=-Fy-D$ (10) This last requirement gives rise to the solvable deformed Morse-like potential (1) announced at the beginning of this manuscript, where the parameters $\\{\nu,F,D\\}$ are related to $\\{A,B,C,q,\lambda\\}$ as indicated in Table 1. By construction, this potential gives rise to a tridiagonal representation of the wave operator $(\hat{H}-E)$ in this basis, $-\frac{2}{{{\lambda}^{2}}}\left\langle m|(H-E)\left.|n\right\rangle\right.=a_{n}{{\delta}_{n,m}}-F\langle m|y\left.|n\right\rangle$ (11) where we defined $\langle m|(...)|n\rangle:=\lambda\int_{-\infty}^{+\infty}dx\phi_{m}(x)(...)\phi_{n}(x)$, $a_{n}=n(n+\mu+\nu+1)+\frac{1}{2}\left(\mu+1\right)\left(\nu+1\right)-D$, and $\langle m|y|n\rangle$ is the tridiagonal matrix defined in Eq. 33.

Table 1: The parameters of $U(x)$ in terms of those of $V_{q}(x)$.

$U(x)$ | $V_{q}(x)$
---|---
$\nu$ | $-\sqrt{\frac{8A}{\lambda^{2}q^{2}}+1}$
$D$ | $\frac{qC}{\lambda^{2}}-\frac{2B}{q\lambda^{2}}-\frac{4A}{q^{2}\lambda^{2}}$
$F$ | $\frac{qC}{\lambda^{2}}$

It is clear that the above representation is diagonal when $F=0$ (i.e. $C=0$), resulting in the following spectrumN. Rosen, and P. M. Morse (1932); H. Eğrifes, D. Demirhan, and F.
Büyükkiliç (1999) $2{{E}_{n}}/{{\lambda}^{2}}=-\frac{1}{4}{{\left\\{\frac{D+\frac{1}{4}\left({{\nu}^{2}}-1\right)}{\left[n+\frac{1}{2}\left(\nu+1\right)\right]}-\left[n+\frac{1}{2}\left(\nu+1\right)\right]\right\\}}^{2}}$ (12) and the associated wavefunctions, ${{\psi}_{n}}(x)=\mathcal{A}_{n}{{\left(y-1\right)}^{\mu_{n}/2}}{{\left(y+1\right)}^{\frac{\nu+1}{2}}}P_{n}^{(\mu_{n},\,\nu)}(y)$ (13) where $\mathcal{A}_{n}$ is a normalization constant. To find the solution when $F\neq 0$ (i.e. $C>0$), we use Eq. 11 to express the expansion coefficients of the wavefunction $\\{f_{n}(E,\mathcal{P})\\}$ through a three-term recursion relation, $\left[FQ_{n}+a_{n}\right]{{f}_{n}}(E)=F\left[{{S}_{n-1}}{{f}_{n-1}}(E)+{{S}_{n}}{{f}_{n+1}}(E)\right]$ (14) where $\\{Q_{n},S_{n}\\}$ are defined in Eqs. 34 & 35. Making the substitution $f_{n}=\frac{A_{n}f_{0}\xi_{n}}{A_{0}}$, we obtain the following three-term recursion relation for $\xi_{n}$, $\displaystyle\frac{1}{F}a_{n}{{\xi}_{n}}(E)=\tfrac{2(n+\mu)(n+\nu)}{\left(2n+\mu+\nu\right)\left(2n+\mu+\nu+1\right)}{{\xi}_{n-1}}(E)$ $\displaystyle+\tfrac{2\left(n+1\right)(n+\mu+\nu+1)}{(2n+\mu+\nu+1)\left(2n+\mu+\nu+2\right)}{{\xi}_{n+1}}(E)-{{Q}_{n}}{{\xi}_{n}}(E)$ (15) We deduce that $\xi_{n}=\bar{H}_{n}^{(\mu,\nu)}(-F^{-1};\ell,\pi/2)$, where $\bar{H}^{\mu,\nu}(z^{-1};\ell,\theta)$ is an orthogonal polynomial introduced recently by Alhaidari,Assche (2019) and $\ell=\frac{1}{2}\left(\mu+1\right)\left(\nu+1\right)-{{\left(\frac{\mu+\nu+1}{2}\right)}^{2}}-D$. This polynomial is defined by its three-term recursion relation, and some of its properties were addressed by W. Van Assche.Assche (2019) However, its important analytical properties, such as its weight function, generating function, asymptotics, orthogonality relations and zeros, are not yet known. We hope that experts in the field of orthogonal polynomials will be able to study this polynomial and derive its analytical properties.
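Returning to the diagonal case $F=0$ (i.e. $C=0$), the closed-form spectrum of Eq. (12) can be evaluated directly with the parameter map of Table 1. The sketch below is illustrative: the parameter values and the level cutoff `n_max` are our own choices (Eq. (12) produces a finite ladder of levels).

```python
import math

def diagonal_spectrum(A, B, q, lam=1.0, n_max=6):
    """Bound-state energies from Eq. (12) for C = 0, using the map of Table 1."""
    nu = -math.sqrt(8.0 * A / (lam * q) ** 2 + 1.0)
    D = -2.0 * B / (q * lam**2) - 4.0 * A / (q * lam) ** 2   # Table 1 with C = 0
    levels = []
    for n in range(n_max):
        b = n + (nu + 1.0) / 2.0
        levels.append(-(lam**2 / 8.0) * ((D + (nu**2 - 1.0) / 4.0) / b - b) ** 2)
    return levels

E = diagonal_spectrum(A=2.0, B=-12.0, q=0.2)
assert all(e < 0.0 for e in E)                           # bound-state energies
assert all(E[n] < E[n + 1] for n in range(len(E) - 1))   # ordered ladder
```

The ladder terminates once the bracket in Eq. (12) changes sign, which caps the number of bound states.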
In addition, the analytical properties of this polynomial will also be useful when considering the exponentially confining potential well problem (Ref. Alhaidari, 2021) obtained from our potential (1) in the zero-deformation limit. Although this system supports an infinite spectrum while the Jacobi basis used in this work is finite, in the limit $q\to 0$ the Jacobi basis becomes infinite! This can be seen from $\nu=-\sqrt{\frac{8A}{\lambda^{2}q^{2}}+1}$ together with $N<-\left(\mu+\nu+1\right)/2$. This is also in agreement with the fact that the polynomial $\bar{H}^{\mu,\nu}(z^{-1};\ell,\theta)$ has a discrete infinite spectrum.Assche (2019) Consequently, the wavefunction associated with the potential (1) is $\psi_{n}(x)=R_{n}(y-1)^{\frac{\mu}{2}}(y+1)^{\frac{\nu+1}{2}}\times$ $\sum_{k=0}^{N}c_{k}\bar{H}_{k}^{(\mu,\nu)}(-F^{-1};\ell,\pi/2)P_{k}^{(\mu,\,\nu)}(y)$ (16) where $R_{n}$ is a normalization constant. We should mention that the TRA imposes the constraint $A\geq-\lambda^{2}q^{2}/8$, as follows from the results in Table 1. The critical limit $A=-\lambda^{2}q^{2}/8$ is represented by the dashed lines in Fig. 2. It is clear that at some values of $q$, the blue triangle below this critical limit may contain bound states that cannot be accounted for by our TRA solution but can be treated, together with resonance states, by other numerical means.J. Aguilar, & J. M. Combes (1971) ## IV Potential Parameter Spectrum The potential parameter spectrum (PPS) is a nontraditional numerical eigenvalue method that originated in the context of the TRA for cases where the basis set is energy dependent.A. D. Alhaidari, and H. Bahlouli (2020) This happens, for example, when the TRA constraints dictate that one or more of the basis parameters are energy dependent. Such is the case in our current problem, where the basis parameter $\mu$ is required to equal $2\sqrt{-2E/\lambda^{2}}$. Below, we show how to obtain the TRA solution (energy spectrum and wavefunction) in this case.
The fundamental TRA equation in the basis $\\{{{\phi}_{n}}(x)\\}$ is $\hat{J}{{\phi}_{n}}(x)=\omega(x)\left[c_{n}{{\phi}_{n}}(x)+b_{n-1}{{\phi}_{n-1}}(x)+b_{n}{{\phi}_{n+1}}(x)\right]$ (17) where $\hat{J}=\hat{H}-E$ is the wave operator, $\omega(x)$ is a node-less entire function, and $\\{b_{n},c_{n}\\}$ are real coefficients. We write $c_{n}=a_{n}-z$, where $z$ is a proper parameter chosen such that $\\{a_{n},b_{n}\\}$ are independent of $z$. Writing the wavefunction as the series $\psi(x)=\sum_{n}f_{n}\phi_{n}(x)$ with $f_{n}=f_{0}P_{n}(z)$, the wave equation $\hat{J}\psi(x)=0$ gives the following three-term recursion relation $zP_{n}(z)=a_{n}P_{n}(z)+b_{n-1}P_{n-1}(z)+b_{n}P_{n+1}(z)$ (18) Favard’s theorem asserts that the solution of such a three-term recursion relation is a set of orthogonal polynomials $\\{P_{n}(z)\\}$ of degree $n$ in the variable $z$. These polynomials can be computed to any desired degree using the above three-term recursion relation along with the initial conditions. However, their full analytical properties will be known only if the associated weight function, generating function and asymptotic limits are known. In our current situation, however, we face two difficulties: (1) the analytic properties of $P_{n}(z)$ are not known, and (2) the basis and $\\{a_{n},b_{n}\\}$ are energy dependent. Under these circumstances we resort to an indirect numerical approach, the potential parameter spectrum (PPS), to evaluate the corresponding energy spectrum. In brief, the main steps of the PPS approach are as follows: 1. The polynomial argument $z$ must be chosen such that it does not depend on the energy and contains at least one of the system’s parameters that does not appear in $\\{a_{n},b_{n}\\}$. Let us call that parameter $\gamma$ and write $z$ as $z(\gamma)$. 2. Write Eq. 18 as $z(\gamma)\ket{P}=T\ket{P}$ and take the tridiagonal matrix $T$ to be of finite size $N\times N$. 3.
We choose a value for $E$ from a proper range and calculate $\\{a_{n},b_{n}\\}_{n=0}^{N-1}$. 4. Calculate the eigenvalues of $z(\gamma)\ket{P}=T\ket{P}$ as $\\{z_{n}(\gamma)\\}_{n=0}^{N-1}$. 5. Repeat steps 3 and 4 for another $E$ in the chosen range until the whole range is covered. 6. Designate $\\{\gamma_{n}^{k}\\}_{n=0}^{N-1}$ as the eigenvalues resulting from the $k$th step, which corresponds to the energy $E_{k}$. This set is called the “potential parameter spectrum” for $\gamma$ at energy $E_{k}$. 7. Sort the sets $\\{\gamma_{n}^{k}\\}$ and, for each fixed $n$, fit the energies $\\{E_{k}\\}$ as a function of $\\{\gamma_{n}^{k}\\}$; call this function $G(n,\gamma)$. 8. The energy spectrum of the system corresponding to a given physical parameter $\gamma$ is then $\\{G(m,\gamma)\\}_{m=0}^{M}$, where $M$ is the largest integer such that $G(m,\gamma)<0$. Now, the wavefunction $\psi_{m}(x)$ at the energy eigenvalue $G(m,\gamma)$ is obtained as follows. First, calculate $\\{a_{n},b_{n}\\}_{n=0}^{N-1}$ at the energy $G(m,\gamma)$ and then solve Eq. 18 for $P_{n}(z(\gamma))$. Consequently, we obtain $\psi_{m}(x)=f_{0}(z)\sum_{n}P_{n}(z(\gamma))\phi_{n}(x)$, where the energy eigenvalue $G(m,\gamma)$ is implicit in the basis and in the energy polynomial. ## V Numerical Diagonalization For completeness, and to give an independent verification of our results, we choose a numerical diagonalization approach that falls within the spirit of the TRA. We start by writing our original Hamiltonian in the form $\hat{H}=\hat{H}_{0}+U_{q}(x)$ (19) where $\hat{H}_{0}=-\frac{1}{2}\frac{d^{2}}{dx^{2}}+Ce^{\lambda x}$ and $U_{q}(x)=V_{q}(x)-Ce^{\lambda x}$. This selection of $\hat{H}_{0}$ enables us to treat the $e^{\lambda x}$ term exactly and leaves only the short-range part of the potential for numerical approximation. We then select a basis set in which $\hat{H}_{0}$ can be represented by a tridiagonal matrix.
In our case, the Laguerre basis will do the job, $\phi_{n}(x)=A_{n}y^{\frac{\gamma+1}{2}}e^{-y/2}L_{n}^{\gamma}(y),$ (20) where $A_{n}=\sqrt{n!/\Gamma(n+\gamma+1)}$, $L_{n}^{\gamma}(y)$ is the associated Laguerre polynomial with $\gamma>-1$, and $y=\rho e^{\lambda x/2}$ with $\rho=4\sqrt{2C}/\lambda$, giving $16\lambda^{-2}(\hat{H}_{0})_{n,m}=\alpha_{n}\delta_{n,m}-\beta_{n}\delta_{n,m+1}-\beta_{n+1}\delta_{n,m-1}$ (21) where $\alpha_{n}=\left(2n+\gamma+1\right)^{2}-\frac{1}{2}\left(\gamma^{2}-1\right),$ (22) $\beta_{n}=\left(2n+\gamma\right)\sqrt{n(n+\gamma)}.$ (23) Thus, in the above basis set the matrix elements of the seed Hamiltonian $\hat{H}_{0}$ can be computed exactly; the potential term $U_{q}(x)$, however, is calculated in this basis using the numerical Gauss quadrature approach, $U_{q}=\Lambda.D.\Lambda^{T},$ (24) where $\Lambda$ is the normalized eigenvector matrix associated with the following quadrature matrix $\langle m|y|n\rangle=c_{n}\delta_{n,m}-d_{n}\delta_{n,m+1}-d_{n+1}\delta_{n,m-1}$ (25) where $\langle m|y|n\rangle=A^{2}_{n}\int_{0}^{\infty}dx\,x^{\gamma+1}e^{-x}L^{\gamma}_{m}(x)L^{\gamma}_{n}(x)$, $c_{n}=2n+\gamma+1$, $d_{n}=\sqrt{n(n+\gamma)}$, and $D_{ij}=\left[\frac{A}{\left[(e_{i}/\rho)^{2}+q\right]^{2}}+\frac{B}{(e_{i}/\rho)^{2}+q}-\frac{A+qB}{q^{2}}\right]\delta_{ij}$ (26) where $e_{i}$ is the associated eigenvalue of $\langle m|y|n\rangle$. ## VI Results and discussion In this section, we present some illustrative calculations of the energy spectrum and the wavefunction. First, we considered the potential parameters $A=2.0$, $B=\\{-12,-14\\}$, $C=1.0$, and $q=0.2$, and we summarize the results of the energy spectrum in Table 2. We observe a good match between the results obtained via the numerical Hamiltonian diagonalization (NHD), the potential parameter spectrum (PPS) method, and the asymptotic iteration method (AIM) (Ref. Sous, 2021).
On the other hand, we considered a larger spectrum size by taking the potential parameters as $A=2.0$, $B=-10.0$, $C=1.0$, and $q=0.2$, for which there are eight bound states, as shown in Table 3. As we can see, the results obtained by NHD and the AIM are in good agreement, while errors in the PPS results increase for the higher states. This issue arises from the fact that the basis we have chosen in Eq. 8 is finite, with the constraint $N<-(\mu+\nu+1)/2$, where $\mu$ is energy dependent. This limits the number of data points available to the PPS fit, degrading its accuracy for the higher excited states. Had we known the analytic properties of the orthogonal polynomials $\bar{H}_{n}^{(\mu,\nu)}(-F^{-1};\ell,\pi/2)$ introduced earlier, we would have been able to obtain exact results. Alternative procedures such as NHD and AIM are therefore necessary to accurately study the higher excited states. Aside from the energy spectrum, we also plotted the lowest four eigenstates of the potential (1) for the parameter choice $B=-12$ of Table 2, as shown in Fig. 3. Table 2: The complete bound state energy spectrum (in atomic units and with an overall negative sign) for the potential (1) obtained using the potential parameter spectrum (PPS) technique. For comparison, we present a numerical Hamiltonian diagonalization (NHD) in the Laguerre basis of size 200, with the Laguerre polynomial index $\gamma$ chosen from the plateau of stability as indicated, along with results from the asymptotic iteration method (AIM) with 360 iterations and seed point $x_{0}$ chosen to be at the local minimum of our potential.Sous (2021) The potential parameters were chosen as $\\{A,C,q,\lambda\\}=\\{2.0,1.0,0.20,1.0\\}$ and we varied the parameter $B$ as shown.
| $B=-12$ | | | $B=-14$ | | |
|---|---|---|---|---|---|
| NHD ($\gamma=2.0$) | PPS | AIM | NHD ($\gamma=3.0$) | PPS | AIM |
| 6.725966329 | 6.725966329 | 6.725966329 | 3.438724142 | 3.438724142 | 3.438724142 |
| 4.602821791 | 4.602821791 | 4.602821791 | 1.746928179 | 1.746928179 | 1.746928106 |
| 2.795104002 | 2.795104002 | 2.795104007 | 0.550245228 | 0.550245231 | 0.5501964482 |
| 1.348987620 | 1.348987619 | 1.348983460 | 0.008709768 | 0.008711308 | 0.026629576 |
| 0.354453319 | 0.354453301 | 0.3556443488 | | | |

Table 3: Reproduction of Table 2 with the same parameters except for $B=-10.0$.

| NHD ($\gamma=1.0$) | PPS | AIM |
|---|---|---|
| 11.092470042 | 11.092470042 | 11.092470042 |
| 8.789641222 | 8.789641170 | 8.789641222 |
| 6.721176457 | 6.721163320 | 6.721176456 |
| 4.883397609 | 4.883236520 | 4.883397612 |
| 3.275961756 | 3.275108205 | 3.275961745 |
| 1.909788824 | 1.882238563 | 1.909788706 |
| 0.824167396 | 0.757075573 | 0.8240517209 |
| 0.126148628 | 0.047743887 | 0.1463293492 |

Figure 3: The lowest four bound states of the potential (1) as generated by the TRA wavefunction (14) for the parameter choices $A=2.0$, $B=-12$, $C=1.0$, and $q=0.20$.

## VII Conclusions

In this work, we have studied an interesting and exactly solvable 1D potential (1) which can support bound and/or resonance states depending on the choice of potential parameters, as shown in the spectral phase diagrams of Fig. 2. We obtained the bound state solution of this potential via the tridiagonal representation approach. The wavefunction is expressed as a finite series in terms of the finite Jacobi polynomials, with expansion coefficients expressed in terms of the dipole polynomials $\bar{H}^{\mu,\nu}(z^{-1};\ell,\theta)$. Since the analytic properties of the latter polynomials remain an open problem, we relied on numerical means to evaluate the energy spectrum.
We have used the potential parameter spectrum, the numerical Hamiltonian diagonalization, and the asymptotic iteration method as three independent techniques and obtained the eigenvalues of our problem for several choices of the potential parameters. The results obtained by the three methods are in good agreement, except for the PPS, whose accuracy degrades as the spectrum size grows. On the other hand, resonance states can be studied using other well-known methods such as complex scaling. Finally, potential (1) can be used to model diatomic molecules and interactions between atoms and surfaces. It also adds more freedom than the well-known 1D Morse potential, since it has more fitting parameters, and is thus more practical when fitting experimental data such as spectroscopic data. ###### Acknowledgements. The authors would like to thank Dr. A. J. Sous, Al-Quds Open University, Palestine, for calculating the energy spectrum via the asymptotic iteration method as given in Tables 2 & 3. ## Appendix A Reviewing the TRA The TRA is a method for solving the wave equation by expressing its solutions in terms of orthogonal polynomials. The basic idea is that, starting from the Schrödinger equation $\left(\mathcal{H}-E\right)\ket{\psi}=0$, we expand the solution as an infinite (or finite) bounded series $\ket{\psi}=\sum\limits_{n}{{f}_{n}}\left(E\right)\ket{\phi_{n}}$, where $\\{f_{n}(E)\\}$ are expansion coefficients, which are generally functions of the energy and potential parameters, and $\\{\ket{\phi_{n}}\\}$ are square integrable basis states. The basis elements usually have the following form $\langle x|\phi_{n}\rangle=A_{n}w(x)P_{n}(x)$ (27) where $A_{n}$ is a normalization constant, $w(x)$ is a weight function that vanishes on the boundary of the configuration space, and $P_{n}(x)$ is an orthogonal polynomial.
For example, when $x\geq 0$, one could use the Laguerre basis, in which $w(x)=x^{\alpha}e^{\beta x}$ and $P_{n}(x)=L_{n}^{\nu}(x)$, for some parameters $\\{\alpha,\beta,\nu\\}$ with $\nu>-1$. Requiring the matrix representation of the wave operator $\mathcal{H}-E$ to be tridiagonal in the chosen basis $\\{\ket{\phi_{n}}\\}$, the wave equation becomes a three-term recursion relation for the expansion coefficients, $a_{n}f_{n}+b_{n}f_{n-1}+b_{n+1}f_{n+1}=0$ (28) In some cases, it was found that these expansion coefficients can be related to well-known orthogonal polynomials, while in other situations a new family of orthogonal polynomials arises, whose further study is beyond the scope of this work.A. D. Alhaidari (2019) Knowing the analytic properties of these polynomials would help us find the analytic properties of the system, such as the bound state energies and the phase shift. In our situation, we have come across a new family of orthogonal polynomials that were introduced by Alhaidari in Ref. A. D. Alhaidari, 2019, and since we still do not know their analytic properties, we had to switch to numerical means using the PPS method within the context of the TRA. ## Appendix B Properties of the finite Jacobi polynomials on the semi-infinite interval These Jacobi polynomials $P_{n}^{(\mu,\nu)}(x)$ with $x\geq 1$ satisfy the following second-order differential equation $\displaystyle(x^{2}-1)\frac{d^{2}P_{n}^{(\mu,\nu)}(x)}{dx^{2}}=[\nu-\mu-(\mu+\nu+2)x]\times$ $\displaystyle\frac{dP_{n}^{(\mu,\nu)}(x)}{dx}+n(n+\mu+\nu+1)P_{n}^{(\mu,\nu)}(x)$ (29) where $n=0,1,\cdots,N$, $\mu>-1$, and $\mu+\nu<-2N-1$.
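As a sanity check on Eq. (29), the sketch below constructs $P_{n}^{(\mu,\nu)}$ from the standard degree recursion for Jacobi polynomials (valid for generic real parameters) and evaluates the residual of the differential equation on $x\geq 1$; the values $\mu=0.3$, $\nu=-6$, $n=2$ are illustrative choices satisfying $\mu>-1$ and $\mu+\nu<-2N-1$ for $N=2$:

```python
import numpy as np

def jacobi_coeffs(n, a, b):
    """Coefficients (highest power first) of the Jacobi polynomial P_n^{(a,b)},
    built from the standard three-term recursion in the degree for generic a, b."""
    polys = [np.array([1.0]), np.array([(a + b + 2) / 2.0, (a - b) / 2.0])]
    for k in range(2, n + 1):
        c = 2 * k + a + b
        lin = (c - 1) * np.array([c * (c - 2), a * a - b * b])  # (c-1)[c(c-2)x + a^2-b^2]
        term1 = np.polymul(lin, polys[k - 1])
        term2 = 2 * (k + a - 1) * (k + b - 1) * c * polys[k - 2]
        polys.append(np.polysub(term1, term2) / (2 * k * (k + a + b) * (c - 2)))
    return polys[n]

# Illustrative parameters: mu > -1 and mu + nu < -2N - 1 with N = 2.
mu, nu, n = 0.3, -6.0, 2
p = jacobi_coeffs(n, mu, nu)
dp, d2p = np.polyder(p), np.polyder(p, 2)

# Residual of Eq. (29): (x^2-1) P'' - [nu - mu - (mu+nu+2)x] P' - n(n+mu+nu+1) P
x = np.linspace(1.0, 3.0, 9)
res = ((x**2 - 1) * np.polyval(d2p, x)
       - (nu - mu - (mu + nu + 2) * x) * np.polyval(dp, x)
       - n * (n + mu + nu + 1) * np.polyval(p, x))
print(np.abs(res).max())  # machine-precision level: Eq. (29) is satisfied
```

The same recursion reduces to the familiar Legendre polynomials at $\mu=\nu=0$, which gives a quick independent check of the coefficients.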
These polynomials satisfy the following properties $\displaystyle\left(x+Q_{n}\right)P_{n}^{(\mu,\nu)}(x)=\tfrac{2(n+\mu)(n+\nu)}{(2n+\mu+\nu)(2n+\mu+\nu+1)}\times$ (30) $\displaystyle{}P_{n-1}^{(\mu,\nu)}(x)+\tfrac{2(n+1)(n+\mu+\nu+1)}{(2n+\mu+\nu+1)(2n+\mu+\nu+2)}P_{n+1}^{(\mu,\nu)}(x)$ $\int\limits_{1}^{\infty}dx{{{\left(x-1\right)}^{\mu}}{{\left(x+1\right)}^{\nu}}P_{k}^{(\mu,\,\nu)}(x)}P_{l}^{(\mu,\,\nu)}(x)=A_{k}^{-2}{{\delta}_{kl}}$ (31) where ${{A}_{k}}=\sqrt{\frac{(2k+\mu+\nu+1)\Gamma(k+1)\Gamma(k+\mu+\nu+1)}{{{2}^{\mu+\nu+1}}\Gamma(k+\nu+1)\Gamma(k+\mu+1)}}\times\\\ \sqrt{\frac{\sin\pi\left(\mu+\nu+1\right)}{\sin\pi\nu}}$ (32) Finally, $\langle n|y|m\rangle=-Q_{n}\delta_{nm}+S_{n}\delta_{n,m-1}+S_{n-1}\delta_{n,m+1}$ (33) where $Q_{n}=\frac{\mu^{2}-\nu^{2}}{(2n+\mu+\nu)(2n+\mu+\nu+2)}$ (34) $S_{n}=\frac{2}{2n+\mu+\nu+2}\times\\\ \sqrt{\frac{(n+1)(n+\mu+1)(n+\nu+1)(n+\mu+\nu+1)}{(2n+\mu+\nu+1)(2n+\mu+\nu+3)}}$ (35) These polynomials satisfy other relations, but we have listed here only those relevant to this work; for more details see, for example, Ref. Ming-Po Chen and H.M. Srivastava, 1995. ## Data Availability Statement Data sharing is not applicable to this article as no new data were created or analyzed in this study. ## References * F. Cooper, A. Khare and U. Sukhatme (1995) F. Cooper, A. Khare and U. Sukhatme, Phys. Rep. 251, 267 (1995). * M. Bander, and C. Itzykson (1966) M. Bander and C. Itzykson, Rev. Mod. Phys. 38, 330 (1966). * Y. Alhassid, F. Iachello, and F. Gürsey (1983) Y. Alhassid, F. Iachello, and F. Gürsey, Chem. Phys. Lett. 99, 27 (1983). * L. Infeld, and T. E. Hull (1951) L. Infeld and T. E. Hull, Rev. Mod. Phys. 23, 21 (1951). * H. Ciftci, R. L. Hall, and N. Saad (2005) H. Ciftci, R. L. Hall, and N. Saad, J. Phys. A: Math. & Gen. 38, 1147 (2005). * R. De, R. Dutt and U. Sukhatme (1992) R. De, R. Dutt and U. Sukhatme, J. Phys. A: Math. & Gen. 25, L843 (1992). * A. F. Nikiforov V. B. Uvarov (1988) A. F. Nikiforov and V. B.
Uvarov, _Special functions of mathematical physics_, Vol. 205 (Basel: Birkhäuser, 1988). * Alhaidari and Bahlouli (2019) A. D. Alhaidari and H. Bahlouli, Phys. Scrip. 94, 125206 (2019). * Morse (1929) P. M. Morse, Phys. Rev. 34, 57 (1929). * Alhaidari (2021) A. D. Alhaidari, Theor. Math. Phys. 206, 84 (2021). * A. D. Alhaidari, and H. Bahlouli (2020) A. D. Alhaidari and H. Bahlouli, J. Math. Phys. 61, 062103 (2020). * Sous (2021) A. J. Sous, Private Communication (2021). * D. R. Curtiss (1918) D. R. Curtiss, Ann. Math. 19, 251 (1918). * N. Rosen, and P. M. Morse (1932) N. Rosen and P. M. Morse, Phys. Rev. 42, 210 (1932). * H. Eğrifes, D. Demirhan, and F. Büyükkiliç (1999) H. Eğrifes, D. Demirhan, and F. Büyükkiliç, Phys. Scrip. 60, 195 (1999). * Assche (2019) W. V. Assche, SIGMA 15, 005 (2019). * J. Aguilar, & J. M. Combes (1971) J. Aguilar and J. M. Combes, Comm. Math. Phys. 22, 269 (1971). * A. D. Alhaidari (2019) A. D. Alhaidari, Rep. Math. Phys. 84, 393 (2019). * Ming-Po Chen and H.M. Srivastava (1995) M.-P. Chen and H. M. Srivastava, App. Math. Comp. 68, 153 (1995).
# Zeeman-driven parity transitions in an Andreev quantum dot A. M. Whiticar [Present address: D-Wave Systems, Burnaby, British Columbia, Canada] A. Fornieri Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen and Microsoft Quantum Lab–Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark A. Banerjee Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen and Microsoft Quantum Lab–Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark A. C. C. Drachmann Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen and Microsoft Quantum Lab–Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark S. Gronin Department of Physics and Astronomy and Microsoft Quantum Lab–Purdue, Purdue University, West Lafayette, Indiana 47907 USA Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907 USA G. C. Gardner Department of Physics and Astronomy and Microsoft Quantum Lab–Purdue, Purdue University, West Lafayette, Indiana 47907 USA Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907 USA T. Lindemann Department of Physics and Astronomy and Microsoft Quantum Lab–Purdue, Purdue University, West Lafayette, Indiana 47907 USA Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907 USA M. J. Manfra Department of Physics and Astronomy and Microsoft Quantum Lab–Purdue, Purdue University, West Lafayette, Indiana 47907 USA Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907 USA School of Materials Engineering, Purdue University, West Lafayette, Indiana 47907 USA School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907 USA C. M.
Marcus<EMAIL_ADDRESS>Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen and Microsoft Quantum Lab–Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark ###### Abstract The Andreev spectrum of a quantum dot embedded in a hybrid semiconductor- superconductor interferometer can be modulated by electrostatic gating, magnetic flux through the interferometer, and Zeeman splitting from in-plane magnetic field. We demonstrate parity transitions in the embedded quantum dot system, and show that the Zeeman-driven transition is accompanied by a 0-$\pi$ transition in the superconducting phase across the dot. We further demonstrate that flux through the interferometer modulates both dot parity and 0-$\pi$ transitions. ## I Introduction The interplay of confinement, spin, and superconductivity leads to a rich variety of mesoscopic phenomena Eschrig (2011); De Franceschi _et al._ (2010); Klapwijk (2004) that can be investigated in semiconductor- superconductor hybrid materials coupled via the proximity effect Klapwijk (2004); Ryazanov _et al._ (2001). Recent advances in epitaxial growth of such hybrids have demonstrated highly transparent heterointerfaces in several material platforms Krogstrup _et al._ (2015); Shabani _et al._ (2016); Kjaergaard _et al._ (2017); Lutchyn _et al._ (2018). An important application is semiconducting Josephson junctions (JJs), where a semiconducting normal (N) region is bounded by two superconductors (S), giving rise to a spectrum of Andreev bound states (ABSs) in the N region at energies below the gap, $\Delta$, of the superconductors Beenakker (1991). ABS energies $E$ depend on the superconducting phase difference, $\varphi$, across the junction and generate a supercurrent, $I_{\rm s}(\varphi)=-(2e/h)\,dE/d\varphi$, where $e$ is the unit of charge and $h$ is Planck’s constant Yokoyama _et al._ (2014); Nichele _et al._ (2020). 
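For orientation, the energy-to-supercurrent relation quoted above can be illustrated with the textbook short-junction ABS energy $E(\varphi)=\Delta\sqrt{1-\tau\sin^{2}(\varphi/2)}$, which is a generic single-channel formula and not a model of the quantum-dot junction studied here; a brief sketch in natural units ($\Delta=1$, $2e/h=1$), with an assumed transmission $\tau$:

```python
import numpy as np

# Sketch: supercurrent-phase relation I_s = -(2e/h) dE/dphi for a single
# short-junction ABS with E(phi) = Delta*sqrt(1 - tau*sin^2(phi/2)).
# This is the standard short-junction formula, not the paper's QD model;
# Delta = 1 and 2e/h = 1 (natural units), tau is an assumed transmission.
Delta, tau = 1.0, 0.8
phi = np.linspace(0.0, 2 * np.pi, 2001)
E = Delta * np.sqrt(1.0 - tau * np.sin(phi / 2.0) ** 2)
I = -np.gradient(E, phi)   # numerical I_s(phi) = -(2e/h) dE/dphi
```

The resulting current-phase relation is $2\pi$-periodic and odd in $\varphi$, vanishing at $\varphi=0$ and $\pi$, and reduces to a sinusoidal $I_{\rm s}\propto\sin\varphi$ in the low-transmission limit.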
Semiconducting S-N-S junctions have been used as voltage-controlled transmons (gatemons) de Lange _et al._ (2015); Larsen _et al._ (2015); Casparis _et al._ (2018); Kringhøj _et al._ (2020); Bargerbos _et al._ (2020) and Andreev qubits Zazunov _et al._ (2003); Janvier _et al._ (2015); Hays _et al._ (2018); Tosi _et al._ (2019). A quantum dot (QD) embedded in a Josephson junction (S-QD-S) can result in a competition between superconductivity and spin in a confined system De Franceschi _et al._ (2010); Eichler _et al._ (2007); Sand-Jespersen _et al._ (2007); Kiršanskas _et al._ (2015); He _et al._ (2020). The charging energy of a weakly coupled QD typically stabilizes one of two spin states at zero magnetic field, depending on dot occupancy: a spin-zero singlet $\ket{S}$ or a spin-$\frac{1}{2}$ doublet $\ket{D}$Kiršanskas _et al._ (2015); Žitko _et al._ (2015); Meng _et al._ (2009). For even dot occupancy, the ground state (GS) is typically a singlet for all coupling strengths; for odd occupancy, the GS is either a $\ket{D}$ state for weak coupling, or a delocalized singlet, where the spin of the dot is hybridized with spins in the leads Žitko _et al._ (2015). The hybridized odd-parity subgap spectrum corresponds to Yu-Shiba-Rusinov states Jellinggaard _et al._ (2016); Žitko _et al._ (2015); Delagrange _et al._ (2016), and the crossover between even and odd parity Kiršanskas _et al._ (2015); Meng _et al._ (2009); Vecino _et al._ (2003); Rozhkov and Arovas (1999) is marked by a zero-energy crossing Chang _et al._ (2013); Lee _et al._ (2014); Jellinggaard _et al._ (2016); van Gerven Oei _et al._ (2017). Bound-state (BS) excitations at energies $E_{\rm{BS}}$ correspond to the differences between GS and first excited-state (ES) energies Lee _et al._ (2014). When $E_{\rm{BS}}=0$, the GS and ES are degenerate and a fermionic parity transition occurs Vecino _et al._ (2003); Žitko _et al._ (2015). 
The even and odd GS parities can be distinguished by the phase dependence of $E_{\rm{BS}}$. An odd GS leads to a superconducting phase difference of $\pi$ across the JJ, shifting the phase dependence of the subgap excitations by $\pi$, which results in a negative supercurrent Rozhkov and Arovas (1999); Van Dam _et al._ (2006). The GS parity transition from even to odd is commonly referred to as a $0{\text{-}}\pi$ phase transition and can be identified spectroscopically by the observation of zero-bias crossings (ZBCs) and the $\pi$-shifted phase dependence of the subgap spectrum via tunneling of single electrons into the junction from a weakly coupled normal lead Kiršanskas _et al._ (2015); Vecino _et al._ (2003); Pillet _et al._ (2010); Chang _et al._ (2013). In this Article, we investigate an S-QD-S junction embedded in a superconducting quantum interference device (SQUID). The device allowed control of the superconducting phase across the QD by threading magnetic flux through the SQUID loop, while BS energies were simultaneously measured via tunneling spectroscopy into the QD with a third (normal) lead. The system was fabricated from an epitaxial InAs-Al heterostructure patterned using electrostatic gates. We investigate spectra of subgap excitations under the influence of flux and Zeeman field across parity transitions identified by ZBCs. We observe that the subgap spectrum acquires a $\pi$-shifted energy dependence in the odd-parity GS, as expected for a $0{\text{-}}\pi$ phase transition. We find that these transitions are controlled via magnetic field, gate voltage, and superconducting phase difference, which demonstrates precise control of Andreev states in semiconductor Josephson junctions. Magnetic-field-driven parity transitions in S-QD-S junctions have previously been observed as zeros of a reentrant critical current Estrada Saldaña _et al._ (2019) or via direct spectroscopy of the QD Pillet _et al._ (2010); Chang _et al._ (2013).
Related measurements in N-QD-S devices showed gate-voltage- and field-driven parity transitions as ZBCs Lee _et al._ (2014); Jellinggaard _et al._ (2016). Associated $0{\text{-}}\pi$ transitions were detected as supercurrent reversals when the S-QD-S junction was embedded in a SQUID Van Dam _et al._ (2006); Delagrange _et al._ (2016); Li _et al._ (2019); Razmadze _et al._ (2020). In the present study, we spectroscopically interrogate the ABS spectrum of an S-QD-S junction, which reveals how gate voltage and magnetic field work in concert with the superconducting phase difference to cause parity transitions. ## II Device Figure 1: (a) False-color electron micrograph of an S-QD-S device, and (b) device schematic. The device consists of a loop (blue) of epitaxial Al, and Ti/Au electrostatic gates (yellow). The junction is formed by two Al leads defined by gate voltage $V_{\rm S}$ and confined into a quantum dot (QD) by gate voltage $V_{\rm{pg}}$. The tunnel barriers to the normal (N) lead and superconducting leads are controlled by gate voltage $V_{\rm{t}}$. An ac+dc bias voltage $V_{\rm{sd}}$ is applied to the normal lead with the superconducting loop grounded. Magnetic field directions $B_{\parallel}$ and $B_{\perp}$ are shown, where $B_{\perp}$ is used to apply magnetic flux through the superconducting loop. Devices were fabricated from an InAs two-dimensional electron gas (2DEG) heterostructure grown on InP with 8 nm of epitaxial Al deposited _in-situ_. Details of the heterostructure stack and device fabrication are given in the Appendix. Previous measurements on similar material revealed near-unity transmission of the ABS in an S-N-S JJ Kjaergaard _et al._ (2017); Nichele _et al._ (2020). We study two lithographically similar devices A and B. Figure 1 shows a micrograph of device A. The superconducting loop was selectively wet etched from the Al film. A 15 nm HfO2 dielectric layer, grown by atomic layer deposition, was then deposited over the entire device.
Ti/Au top-gates, patterned by electron-beam lithography, were then evaporated. The two S leads were defined by a negative gate voltage $V_{\rm S}$, forming a ballistic JJ of length 200 nm and connected by an Al loop to form a single-junction SQUID. The QD between the two S leads was defined and controlled with a negative voltage $V_{\rm{pg}}$. Typical QD charging energies varied between $U$ = 0.7 and 1 meV, giving $U/\Delta\sim 4$ (see Supplementary Fig. S.2). Tunnel barriers to the N and S leads were controlled by gate voltage $V_{\rm{t}}$ applied to both barriers, providing tunneling spectroscopy of the S-QD-S junction. A voltage bias $V_{\rm{sd}}$ consisting of ac and dc components was applied to the normal semiconductor (N) lead, and the resulting current $I$ and four-terminal voltage $V_{\rm 4T}$ were measured using conventional lock-in techniques with the S loop grounded. The in-plane magnetic field $B_{\parallel}$ and the perpendicular field $B_{\perp}$ were applied using a three-axis vector magnet. The superconducting phase difference $\varphi$ across the junction was controlled by threading magnetic flux through the S loop, which has an area of 1.8 $\mu\rm{m}^{2}$, so that $\sim 1.2$ mT corresponds to one flux quantum, $\Phi_{0}=h/2e=2.07$ mT$\,\mu$m${}^{2}$. ## III Model Figure 2: Subgap excitation spectrum of an S-QD-S Josephson junction (JJ) as a function of detuning, $\varepsilon$, centered on odd occupancy at $\varepsilon=0$. (a) Bound-state energy $E_{\rm{BS}}(\varepsilon)$ normalized by the gap $\Delta$, for strong ($g>1$, dashed) and weak ($g<1$, solid) coupling to the superconductors. Grey shaded region indicates odd-parity ground state. (b) $E_{\rm{BS}}$ as a function of superconducting phase difference $\varphi$ for an even (dashed) and odd (solid) parity ground state. (c) Dependence of $E_{\rm{BS}}$ on Zeeman energy $E_{\rm{Z}}=|g^{*}|\mu_{\rm B}B_{\parallel}$ for an even-parity ground state and for $\varphi=0,\pi$ (solid and dashed lines, respectively).
A ground-state parity transition, from even to odd parity, occurs when $E_{\rm{BS}}=0$ for a critical Zeeman energy $E_{\rm Z,c}$. (d) $E_{\rm{BS}}(\varphi)$ for an intermediate Zeeman energy ($E_{\rm Z,c}(\pi)<E_{\rm{Z}}<E_{\rm Z,c}(0)$) where two zero-energy crossings occur near $\varphi=\pi$. Excitations are calculated based on a model introduced in Ref. Kiršanskas _et al._ (2015) with an asymmetric lead coupling of $\theta\sim\pi/3$. Blue/red denote spin-resolved subgap states. Black denotes spin-degenerate states. Before presenting experimental results, we first discuss the expected dependence of bound-state energies, $E_{\rm{BS}}$, of the QD on level detuning, $\varepsilon$, normalized Zeeman energy, $E_{\rm Z}/\Delta$, and phase difference, $\varphi$, across the dot, including parity and $0{\text{-}}\pi$ transitions (see Model Details for further information). Figure 2(a) shows that for an odd-occupied QD, $E_{\rm{BS}}$ is lowered as $\varepsilon$ is tuned away from $\pm 1$, the QD charge degeneracy points. At the particle-hole symmetry point, $\varepsilon=0$, the bound state energy depends on the parameter $g$, which describes the exchange coupling strength between the spin impurity and the S lead as Kiršanskas _et al._ (2015); Žitko _et al._ (2015), $E_{\rm{BS}}=\Delta\frac{1-g^{2}}{1+g^{2}}\quad.$ (1) For strong coupling ($g>1$, dashed curves in Fig. 2a), $E_{\rm{BS}}$ does not reach zero for any value of $\varepsilon$, preserving an even-parity GS. Physically, this is a consequence of strong screening of the unpaired spin of the QD by a quasiparticle in the S leads, which together form a delocalized singlet Žitko _et al._ (2015). An increase in Coulomb interaction reduces $g$, which leads to a lowering of $E_{\rm{BS}}$. For $g<1$, $E_{\rm{BS}}$ crosses zero energy, signalling a transition to an odd GS parity within the shaded region of Fig. 2 Žitko _et al._ (2015). The phase dependences of the even- and odd-parity GSs are shown in Fig. 2b.
Dashed curves show the phase dependence for the even-parity GS (a zero-junction, denoted 0-JJ), showing a $2\pi$ periodicity with $E_{\rm{BS}}$ minima at $\varphi=\pi$. For the odd-parity GS, the phase dependence acquires a $\pi$ shift, with energy minima at $\varphi=0$ (solid curves), yielding a $\pi$ junction (denoted $\pi$-JJ). Notably, the $0{\text{-}}\pi$ transitions can be identified from this characteristic phase dependence of $E_{\rm{BS}}$. Zeeman coupling, for instance from an in-plane magnetic field, can induce parity transitions by splitting an excited bound state doublet by the Zeeman energy, $E_{\rm Z}=|g^{*}|\mu_{\rm B}B_{\parallel}$, giving $E_{\rm{BS}}(B_{\parallel})=E_{\rm{BS}}(0)\pm E_{\rm Z}/2$ Meden (2019); Lee _et al._ (2014); He _et al._ (2020), where $g^{*}$ is the effective g-factor and $\mu_{\rm B}$ is the Bohr magneton Žitko _et al._ (2015); Yokoyama _et al._ (2014); Vecino _et al._ (2003); Jellinggaard _et al._ (2016). In this scenario, at a critical Zeeman energy $E_{\rm Z,c}$, a zero-energy crossing results in a transition to an odd-parity GS, as shown in Fig. 2c Lee _et al._ (2014); Jellinggaard _et al._ (2016). Increasing the magnetic field further reopens a gap that stabilizes a magnetic doublet GS with a $\pi$-shifted phase dispersion Wentzell _et al._ (2016). Note that Zeeman-induced parity transitions occur at reduced $E_{\rm Z}$ for nonzero $\varphi$ (dashed curves in Fig. 2c). Figure 2d shows the phase dependence of an even-parity GS for Zeeman coupling in the range $E_{\rm Z,c}(\pi)<E_{\rm{Z}}<E_{\rm Z,c}(0)$. In this intermediate range, $E_{\rm{BS}}$ is lowered such that a gap opens in the vicinity of $\varphi=\pi$, marked by two zero-energy crossings, indicating an odd-parity GS. 
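The critical field implied by this Zeeman splitting follows from simple arithmetic: the lower branch $E_{\rm{BS}}(0)-E_{\rm Z}/2$ reaches zero when $E_{\rm Z}=2E_{\rm{BS}}(0)$, i.e. $B_{\rm c}=2E_{\rm{BS}}(0)/(|g^{*}|\mu_{\rm B})$. A quick check with representative numbers of the order reported for device B below ($E_{\rm{BS}}(0)\approx 75~\mu$eV at $\varphi=\pi$, $g^{*}\approx 4$):

```python
# Critical in-plane field for the Zeeman-driven parity transition,
# B_c = 2 E_BS(0) / (|g*| mu_B).  Inputs are representative numbers of the
# order measured for device B at phi = pi, not fitted model output.
mu_B = 57.883818e-6    # Bohr magneton in eV/T
E_BS0 = 75e-6          # zero-field bound-state energy in eV
g_star = 4.0           # effective g-factor
B_c = 2.0 * E_BS0 / (g_star * mu_B)
print(f"B_c ~ {B_c:.2f} T")   # ~0.65 T, close to the observed ~0.6 T crossing
```

The estimate is consistent with the field scale at which the zero-bias crossing appears in the measurements discussed below.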
The size of the resulting gap increases with increasing Zeeman energy until $E_{\rm{Z}}$ reaches $E_{\rm Z,c}(0)$, resulting in a dispersion with two minima, one at $\varphi=0$ and one at $\varphi=\pi$, denoted $0^{\prime}$-JJ or $\pi^{\prime}$-JJ, depending on which minimum is deeper Rozhkov and Arovas (1999); Kiršanskas _et al._ (2015). ## IV Experiment ### IV.1 Gate voltage dependence Tunneling spectroscopy of the S-QD-S junction was performed by creating a tunnel barrier to the normal lead ($V_{\rm{t}}=-1.87$ V). In Fig. 3a, the differential conductance $G=dI/dV_{\rm 4T}$ is shown as a function of bias voltage $V_{\rm{sd}}$ and gate voltage $V_{\rm{pg}}$ used to tune the occupancy of the QD. When $V_{\rm{pg}}$ was varied, a gap of $V_{\rm{sd}}\sim\pm 140~{}\mu$V was observed, along with two subgap features. The first feature occurred at $V_{\rm{pg}}\sim-5.74$ V (green marker in Fig. 3a), where the gap is reduced to $\sim 100~{}\mu$V, indicating an odd QD occupancy with an even GS parity due to a strong coupling $g$ (see dashed line in Fig. 2a). At the stronger subgap feature in Fig. 3a, around $V_{\rm{pg}}\sim$ -5.85 V, the gap closed entirely, resulting in two ZBCs, indicated by purple markers. Increasing $V_{\rm{t}}$ merged the ZBCs (black circle in Fig. 3b), then removed the crossings entirely (Fig. 3c). The sequence demonstrates a gate-voltage-induced GS parity transition of the type illustrated in Fig. 2a, where $g$ is controlled by gate voltage $V_{\rm{t}}$. The positions in $V_{\rm{pg}}$ of the two subgap features shift with $V_{\rm{t}}$ due to cross coupling. Figure 3d shows the dependence of the ZBC on $B_{\parallel}$ at the merging point, $V_{\rm{t}}=-1.82$ V. The splitting is roughly linear in $B_{\parallel}$, yielding an effective g-factor $g^{*}\sim 5$. Figure 3: Parity transition induced by voltage $V_{\rm{pg}}$ (device A).
Differential conductance $G$ as a function of bias $V_{\rm{sd}}$ and gate voltage $V_{\rm{pg}}$ for tunnel-barrier gate voltage $V_{\rm{t}}=$ -1.87 V (a), -1.82 V (b), and -1.8 V (c). (d) Zero-bias conductance, $G$, as a function of in-plane magnetic field $B_{\parallel}$ and $V_{\rm{pg}}$ for $V_{\rm{t}}=-1.82$ V. In panels a-c, $B_{\parallel}=\varphi=0$. ### IV.2 Phase dependence Figure 4: Dependence of an even-parity ground state on superconducting phase difference $\varphi$ (device A). Differential conductance $G$ as a function of $\varphi$ and bias voltage $V_{\rm{sd}}$ for in-plane magnetic field (a) $B_{\parallel}=0$ T, (b) $B_{\parallel}=0.4$ T, (c) $B_{\parallel}=0.6$ T, (d) $B_{\parallel}=0.8$ T. (e) Zero-bias $G$ as a function of $\varphi$ and $B_{\parallel}$ with vertically offset line-cuts shown in (f). The red marker indicates the $V_{\rm{pg}}$ position in Fig. 3 for reference. We next examine the phase dependence of the even-parity GS at the location of the red marker in Fig. 3b ($V_{\rm{pg}}=-5.69$ V, $V_{\rm{t}}=-1.82$ V) by measuring the differential conductance $G$ as a function of $\varphi$ and $V_{\rm{sd}}$, as shown in Fig. 4a. Tuning $\varphi$ from 0 to $2\pi$ lowers $E_{\rm{BS}}$, eventually inducing a ZBC at $\varphi=\pi$. Applying an in-plane field $B_{\parallel}$ caused a gap to open in the vicinity of $\varphi=\pi$, as shown in Fig. 4b. This gap increased with $B_{\parallel}$, while the gap at $\varphi=0$ decreased, as shown in Figs. 4b-d. Although the induced superconducting gap is suppressed at $B_{\parallel}=0.8$ T (Fig. 4d), it is evident that the minimum $E_{\rm{BS}}$ occurs at $\varphi=0$. We interpret the gap opening at $\varphi=\pi$ as indicating $0^{\prime}$-JJ behavior of the type illustrated in Fig. 2d. The position of the ZBC in both $\varphi$ and $B_{\parallel}$ is captured by measuring $G(V_{\rm{sd}}=0)$, as shown in Fig. 4e.
Increasing the field causes the crossing at $\varphi=\pi$ to split and move towards $\varphi=0$, as shown in Fig. 4f. The splitting, roughly linear at low fields, yields a g-factor $g^{*}\sim 5$, consistent with the value found from Fig. 3d. Cuts in Fig. 4f at $B_{\parallel}=0$ and 0.8 T show minima shifted by $\pi$, indicating a $0{\text{-}}\pi$ transition driven by $B_{\parallel}$. Away from the feature marked by the red dot in Fig. 3b, where the full gap is observed ($V_{\rm{pg}}=-5.65$ V), ABSs do not cross zero-bias at $\varphi=\pi$ at zero magnetic field (see Supplemental Fig. S.1). ### IV.3 Magnetic field dependence Figure 5: Magnetic field dependence of an even-parity ground state (device B). (a) Differential conductance $G$ as a function of gate voltage $V_{\rm{pg}}$ and bias voltage $V_{\rm{sd}}$ for $\varphi=B_{\parallel}=0$. (b) Conductance $G$ at zero bias as a function of $\varphi$ and $B_{\parallel}$. $G$ as a function of bias $V_{\rm{sd}}$ and in-plane field $B_{\parallel}$ for superconducting phase difference (c) $\varphi=\pi$, (d) $\pi/2$, (e) $0$. The plots in c-e are reconstructed from line cuts of Supplementary Fig. S.3 for fixed $\varphi$ values. Dashed lines are guides to the eye. We next investigate bound-state bias spectra as a continuous function of $B_{\parallel}$, rather than for the discrete values of $B_{\parallel}$ shown in Figs. 4a-d. Focusing now on device B, Fig. 5a shows a dip in $E_{\rm{BS}}$ as a function of gate voltage $V_{\rm{pg}}$ without ZBCs, measured at $B_{\parallel}=0$. This indicates an even-parity GS and a doublet ES throughout this range of $V_{\rm{pg}}$. Figure 5b shows that a ZBC is first observed for $B_{\parallel}\sim 0.5$ T at a phase difference of $\pi$. Increasing $B_{\parallel}$ further causes the ZBC to split and merge at $\varphi=0$, similar to Fig. 4e. 
Comparing the phase dependence of the ZBC between $B_{\parallel}=0.5$ T and $0.9$ T, it is clear that the position of the ZBC in $\varphi$ is $\pi$-shifted, indicating a magnetic-field-induced $0{\text{-}}\pi$ transition. We attribute the finite field needed to induce a parity transition in device B, in contrast to device A, to a different coupling $g$ resulting from a different charging energy (see Fig. 8). Figures 5c-e show $E_{\rm{BS}}(B_{\parallel})$ for fixed $\varphi$. At $\varphi=\pi$ (Fig. 5c), $E_{\rm{BS}}$ splits from its zero-field value of $V_{\rm{sd}}=\pm 75~{}\mu{\rm V}$, moving linearly towards zero and crossing zero bias at $B_{\parallel}=0.6$ T, indicating a field-driven GS parity transition. At $\varphi=\pi/2$ (Fig. 5d) and $\varphi=0$ (Fig. 5e), the larger zero-field splittings push the zero-bias crossing point to larger $B_{\parallel}$. A g-factor $g^{*}\sim 4$, extracted from the slope of the lower ES, is insensitive to $\varphi$. The dependence of the zero-bias crossing field on $\varphi$ is consistent with expectations in Fig. 2c. Supplementary Fig. S.3 shows spectroscopy of $E_{\rm{BS}}(\varphi)$ for fixed $B_{\parallel}$, displaying a continuous evolution of the $0{\text{-}}\pi$ transition.

### IV.4 Odd-parity ground state

Figure 6: Phase dependence of the odd-parity ground state (device A). Differential conductance $G$ as a function of superconducting phase difference $\varphi$ and bias $V_{\rm{sd}}$ for (a) $B_{\parallel}=0$, (b) $0.2$ T, (c) $0.4$ T. Black marker indicates the $V_{\rm{pg}}$ position in Fig. 3 for reference.

The odd-parity transition indicated by the black marker in Fig. 3b is investigated in Fig. 6. At zero field, $E_{\rm{BS}}(\varphi)$ is $\pi$-shifted compared to the even-parity case (Fig. 4), with a minimum at $\varphi=0$, indicating a $\pi$ junction.
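The g-factor quoted above can be reproduced from the Fig. 5c numbers with a short back-of-the-envelope calculation. This is a sketch under the assumption that the bound state shifts as $E_{\rm{BS}}=E_{0}-\frac{1}{2}g^{*}\mu_{B}B_{\parallel}$, i.e. the Zeeman term of Eq. 2 in the Appendix:

```python
# Back-of-the-envelope g-factor from the Fig. 5c values (a sketch,
# assuming E_BS = E_0 - (1/2) g* mu_B B, the Zeeman term of Eq. 2).
MU_B_EV_PER_T = 57.88e-6  # Bohr magneton in eV/T

def effective_g_factor(zero_field_energy_ev, crossing_field_t):
    # Slope of the lower excited state in eV per tesla.
    slope = zero_field_energy_ev / crossing_field_t
    # The factor 2 undoes the 1/2 in the Zeeman term.
    return 2.0 * slope / MU_B_EV_PER_T

g_star = effective_g_factor(75e-6, 0.6)  # 75 ueV splitting closes at 0.6 T
```

With these inputs the estimate is $g^{*}\approx 4.3$, consistent with the value $g^{*}\sim 4$ reported in the text.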
Near the odd-parity transition, $E_{\rm{BS}}(\varphi)$ shows a reduced dependence on $\varphi$, as reported previously Chang _et al._ (2013); Pillet _et al._ (2010). Increasing $B_{\parallel}$ increases $E_{\rm{BS}}$, opening a gap for all $\varphi$ that grows with field while retaining the $\pi$ phase shift. This behavior is consistent with theory (see Fig. 2) Wentzell _et al._ (2016).

### IV.5 Zero-bias crossings

The combined effect of in-plane magnetic field, phase difference, and gate voltage on GS parity and $0{\text{-}}\pi$ transitions is identified by measuring $G(V_{\rm{pg}},\varphi)$ at zero bias in device A (see Fig. 7). This allows the phase dependence of the ZBCs to be highlighted at specific $V_{\rm{pg}}$ values. Figure 7a shows two distinct values of $V_{\rm{pg}}$ where ZBCs occur at $B_{\parallel}=0$ (see red and black markers). At $V_{\rm{pg}}=-5.7$ V, ZBCs are observed at $\varphi=\pi$, marking the position of the even-parity GS investigated in Fig. 4. We interpret the limited range of this ZBC in $V_{\rm{pg}}$ to reflect the energy dependence of $E_{\rm{BS}}(\varepsilon)$ illustrated in Fig. 2a, with the red marker signifying $\varepsilon=0$. For increasing magnetic field, the ZBC at $\varphi=\pi$ splits in both $\varphi$ and $V_{\rm{pg}}$, stabilizing an odd-parity GS (see Fig. 7d).

Figure 7: Evolution of zero-bias crossings (device A). (a) Differential conductance $G$ at zero bias as a function of phase difference $\varphi$ and gate voltage $V_{\rm{pg}}$ for $B_{\parallel}=0$. Red and black markers show positions of line cuts in (b). Note the opposite behavior along the two line cuts, as discussed in the text. (b) Line-cuts of (a) for the even-parity (red) and odd-parity (black) ground states. (c) Same as (a) for $B_{\parallel}=0.2$ T. (d) Same as (a) for $B_{\parallel}=0.6$ T.

For $V_{\rm{pg}}=-5.8$ V, a bright vertical band is observed that indicates the odd-parity GS examined in Fig. 6. In Fig.
7b, the phase dependence of the two GS locations is compared, revealing a $\pi$-shifted $E_{\rm{BS}}(\varphi)$ dependence for the odd-parity GS. Increasing $B_{\parallel}$ to $0.2$ T causes the odd-occupancy ZBC to split in $V_{\rm{pg}}$ while retaining a $\pi$-shifted phase dependence with $g^{*}\sim 5$. The results of Fig. 7 reveal how the combination of gate voltage, magnetic field, and phase difference can control subgap excitations of the system and induce GS parity transitions.

## V Conclusion

To summarize, we have measured the subgap spectrum of an S-QD-S Josephson junction under the influence of gate voltage, in-plane magnetic field, and superconducting phase difference. We found that odd QD occupancies were not always accompanied by parity transitions or a $\pi$-shifted Andreev spectrum. However, by controlling either the coupling, magnetic field, or phase difference, subgap excitations could be lowered to zero bias, inducing a parity transition. Furthermore, we showed that by applying a finite phase difference across the junction, parity transitions can occur at lower magnetic fields. These results may have important implications for semiconductor-based superconducting qubits Casparis _et al._ (2018), where it was recently shown that an unintentional QD resonance resulted in a suppressed charge dispersion Kringhøj _et al._ (2020); Bargerbos _et al._ (2020). We demonstrate that highly tunable QDs can be intentionally placed in the weak link, which may enable an alternative mechanism for charge-noise suppression while retaining large qubit anharmonicity. Moreover, these results introduce novel means of manipulating the spin of the ABS that could be used for controlling Andreev qubits Hays _et al._ (2018); Tosi _et al._ (2019). Our results demonstrate both the high material quality and the device design flexibility offered by the InAs-Al heterostructure material platform.
The S-QD-S device design studied here is a promising candidate for investigating the hybridization of a QD with Majorana zero modes in the pursuit of parity readout of a topological qubit Karzig _et al._ (2017). ## References * Eschrig (2011) M. Eschrig, Phys. Today 64, 43 (2011). * De Franceschi _et al._ (2010) S. De Franceschi, L. Kouwenhoven, C. Schönenberger, and W. Wernsdorfer, Nature Nanotechnology 5, 703 (2010). * Klapwijk (2004) T. Klapwijk, Journal of Superconductivity 17, 593 (2004). * Ryazanov _et al._ (2001) V. V. Ryazanov, V. A. Oboznov, A. Y. Rusanov, A. V. Veretennikov, A. A. Golubov, and J. Aarts, Phys. Rev. Lett. 86, 2427 (2001). * Krogstrup _et al._ (2015) P. Krogstrup, N. Ziino, W. Chang, S. Albrecht, M. Madsen, E. Johnson, J. Nygård, C. Marcus, and T. Jespersen, Nature Materials 14, 400 (2015). * Shabani _et al._ (2016) J. Shabani, M. Kjaergaard, H. J. Suominen, Y. Kim, F. Nichele, K. Pakrouski, T. Stankevic, R. M. Lutchyn, P. Krogstrup, R. Feidenhans’l, S. Kraemer, C. Nayak, M. Troyer, C. M. Marcus, and C. J. Palmstrøm, Phys. Rev. B 93, 155402 (2016). * Kjaergaard _et al._ (2017) M. Kjaergaard, H. J. Suominen, M. P. Nowak, A. R. Akhmerov, J. Shabani, C. J. Palmstrøm, F. Nichele, and C. M. Marcus, Phys. Rev. Applied 7, 034029 (2017). * Lutchyn _et al._ (2018) R. M. Lutchyn, E. P. Bakkers, L. P. Kouwenhoven, P. Krogstrup, C. M. Marcus, and Y. Oreg, Nature Reviews Materials 3, 52 (2018). * Beenakker (1991) C. W. J. Beenakker, Phys. Rev. Lett. 67, 3836 (1991). * Yokoyama _et al._ (2014) T. Yokoyama, M. Eto, and Y. V. Nazarov, Phys. Rev. B 89, 195407 (2014). * Nichele _et al._ (2020) F. Nichele, E. Portolés, A. Fornieri, A. M. Whiticar, A. C. C. Drachmann, S. Gronin, T. Wang, G. C. Gardner, C. Thomas, A. T. Hatke, M. J. Manfra, and C. M. Marcus, Phys. Rev. Lett. 124, 226801 (2020). * de Lange _et al._ (2015) G. de Lange, B. van Heck, A. Bruno, D. J. van Woerkom, A. Geresdi, S. R. Plissard, E. P. A. M. Bakkers, A. R. Akhmerov, and L. DiCarlo, Phys. 
Rev. Lett. 115, 127002 (2015). * Larsen _et al._ (2015) T. W. Larsen, K. D. Petersson, F. Kuemmeth, T. S. Jespersen, P. Krogstrup, J. Nygård, and C. M. Marcus, Phys. Rev. Lett. 115, 127001 (2015). * Casparis _et al._ (2018) L. Casparis, M. R. Connolly, M. Kjaergaard, N. J. Pearson, A. Kringhøj, T. W. Larsen, F. Kuemmeth, T. Wang, C. Thomas, S. Gronin, G. C. Gardner, M. J. Manfra, C. M. Marcus, and K. D. Petersson, Nature Nanotechnology 13, 915 (2018). * Kringhøj _et al._ (2020) A. Kringhøj, B. van Heck, T. W. Larsen, O. Erlandsson, D. Sabonis, P. Krogstrup, L. Casparis, K. D. Petersson, and C. M. Marcus, Phys. Rev. Lett. 124, 246803 (2020). * Bargerbos _et al._ (2020) A. Bargerbos, W. Uilhoorn, C.-K. Yang, P. Krogstrup, L. P. Kouwenhoven, G. de Lange, B. van Heck, and A. Kou, Phys. Rev. Lett. 124, 246802 (2020). * Zazunov _et al._ (2003) A. Zazunov, V. S. Shumeiko, E. N. Bratus’, J. Lantz, and G. Wendin, Phys. Rev. Lett. 90, 087003 (2003). * Janvier _et al._ (2015) C. Janvier, L. Tosi, L. Bretheau, Ç. Girit, M. Stern, P. Bertet, P. Joyez, D. Vion, D. Esteve, M. Goffman, _et al._ , Science 349, 1199 (2015). * Hays _et al._ (2018) M. Hays, G. de Lange, K. Serniak, D. J. van Woerkom, D. Bouman, P. Krogstrup, J. Nygård, A. Geresdi, and M. H. Devoret, Phys. Rev. Lett. 121, 047001 (2018). * Tosi _et al._ (2019) L. Tosi, C. Metzger, M. F. Goffman, C. Urbina, H. Pothier, S. Park, A. L. Yeyati, J. Nygård, and P. Krogstrup, Phys. Rev. X 9, 011010 (2019). * Eichler _et al._ (2007) A. Eichler, M. Weiss, S. Oberholzer, C. Schönenberger, A. Levy Yeyati, J. C. Cuevas, and A. Martín-Rodero, Phys. Rev. Lett. 99, 126602 (2007). * Sand-Jespersen _et al._ (2007) T. Sand-Jespersen, J. Paaske, B. M. Andersen, K. Grove-Rasmussen, H. I. Jørgensen, M. Aagesen, C. B. Sørensen, P. E. Lindelof, K. Flensberg, and J. Nygård, Phys. Rev. Lett. 99, 126603 (2007). * Kiršanskas _et al._ (2015) G. Kiršanskas, M. Goldstein, K. Flensberg, L. I. Glazman, and J. Paaske, Phys. Rev. B 92, 235422 (2015). 
* He _et al._ (2020) J. He, D. Pan, G. Yang, M. Liu, J. Ying, Z. Lyu, J. Fan, X. Jing, G. Liu, B. Lu, D. E. Liu, J. Zhao, L. Lu, and F. Qu, Phys. Rev. B 102, 075121 (2020). * Žitko _et al._ (2015) R. Žitko, J. S. Lim, R. López, and R. Aguado, Phys. Rev. B 91, 045441 (2015). * Meng _et al._ (2009) T. Meng, S. Florens, and P. Simon, Phys. Rev. B 79, 224521 (2009). * Jellinggaard _et al._ (2016) A. Jellinggaard, K. Grove-Rasmussen, M. H. Madsen, and J. Nygård, Phys. Rev. B 94, 064520 (2016). * Delagrange _et al._ (2016) R. Delagrange, R. Weil, A. Kasumov, M. Ferrier, H. Bouchiat, and R. Deblock, Phys. Rev. B 93, 195437 (2016). * Vecino _et al._ (2003) E. Vecino, A. Martín-Rodero, and A. L. Yeyati, Phys. Rev. B 68, 035105 (2003). * Rozhkov and Arovas (1999) A. V. Rozhkov and D. P. Arovas, Phys. Rev. Lett. 82, 2788 (1999). * Chang _et al._ (2013) W. Chang, V. E. Manucharyan, T. S. Jespersen, J. Nygård, and C. M. Marcus, Phys. Rev. Lett. 110, 217005 (2013). * Lee _et al._ (2014) E. J. Lee, X. Jiang, M. Houzet, R. Aguado, C. M. Lieber, and S. De Franceschi, Nature Nanotechnology 9, 79 (2014). * van Gerven Oei _et al._ (2017) W.-V. van Gerven Oei, D. Tanasković, and R. Žitko, Phys. Rev. B 95, 085115 (2017). * Van Dam _et al._ (2006) J. A. Van Dam, Y. V. Nazarov, E. P. Bakkers, S. De Franceschi, and L. P. Kouwenhoven, Nature 442, 667 (2006). * Pillet _et al._ (2010) J. Pillet, C. Quay, P. Morfin, C. Bena, A. L. Yeyati, and P. Joyez, Nature Physics 6, 965 (2010). * Estrada Saldaña _et al._ (2019) J. C. Estrada Saldaña, R. Žitko, J. P. Cleuziou, E. J. H. Lee, V. Zannier, D. Ercolani, L. Sorba, R. Aguado, and S. De Franceschi, Science Advances 5 (2019), 10.1126/sciadv.aav1235. * Li _et al._ (2019) C. Li, B. de Ronde, J. de Boer, J. Ridderbos, F. Zwanenburg, Y. Huang, A. Golubov, and A. Brinkman, Phys. Rev. Lett. 123, 026802 (2019). * Razmadze _et al._ (2020) D. Razmadze, E. C. T. O’Farrell, P. Krogstrup, and C. M. Marcus, Phys. Rev. Lett. 125, 116803 (2020). * Meden (2019) V. 
Meden, Journal of Physics: Condensed Matter (2019). * Wentzell _et al._ (2016) N. Wentzell, S. Florens, T. Meng, V. Meden, and S. Andergassen, Phys. Rev. B 94, 085151 (2016). * Karzig _et al._ (2017) T. Karzig, C. Knapp, R. M. Lutchyn, P. Bonderson, M. B. Hastings, C. Nayak, J. Alicea, K. Flensberg, S. Plugge, Y. Oreg, C. M. Marcus, and M. H. Freedman, Phys. Rev. B 95, 235305 (2017).

## Acknowledgments

This work was supported by Microsoft Corporation, the Danish National Research Foundation, and the Villum Foundation. We thank Karsten Flensberg, Jens Paaske, and Jens Schulenborg for useful discussions.

## Appendix

### V.1 Wafer structure

The wafers used for fabricating the devices were grown by molecular beam epitaxy (MBE). The material stack consists of an InP substrate with a 100-nm-thick $\rm{In_{0.52}Al_{0.48}As}$ lattice-matched buffer, a 1-$\mathrm{\mu m}$-thick step-graded buffer realized with alloy steps from $\rm{In_{0.52}Al_{0.48}As}$ to $\rm{In_{0.89}Al_{0.11}As}$ (20 steps, 50 nm/step), a $58~{}\rm{nm}$ $\rm{In_{0.82}Al_{0.18}As}$ layer, a $4~{}\rm{nm}$ $\rm{In_{0.75}Ga_{0.25}As}$ bottom barrier, a $7~{}\rm{nm}$ InAs quantum well, a $10~{}\rm{nm}$ $\rm{In_{0.75}Ga_{0.25}As}$ top barrier, two monolayers of GaAs, and a $7~{}\rm{nm}$ film of epitaxial Al deposited in situ without breaking the MBE chamber vacuum. Hall bar device geometries (where the Al was removed) were used to characterize the two-dimensional electron gas and revealed a peak electron mobility $\mu=43,000~{}\rm{cm^{2}V^{-1}s^{-1}}$ at an electron density $n=8\times 10^{11}~{}\rm{cm^{-2}}$, corresponding to an electron mean free path of $l_{\rm e}\sim 600$ nm.

### V.2 Fabrication Details

Devices were fabricated using standard electron beam lithography and wet etching techniques.
The devices were electrically isolated using a two-step mesa etch, first removing the top Al film with Al etchant Transene D, and then performing a deep $\sim 300$ nm III-V chemical wet etch in $\rm{H_{2}O:C_{6}H_{8}O_{7}:H_{3}PO_{4}:H_{2}O_{2}}$ (220:55:3:3). In a subsequent lithography step, the Al film on the mesa was selectively etched into a SQUID with Al etchant Transene D at $50^{\circ}\mathrm{C}$. A $15~{}\rm{nm}$-thick layer of insulating $\rm{HfO_{2}}$ was grown over the entire sample by atomic layer deposition at a temperature of $90^{\circ}\mathrm{C}$. Finally, top gates of Ti/Au (5/25$~{}\rm{nm}$) were deposited by electron beam evaporation and connected to bonding pads with leads of Ti/Au (5/300$~{}\rm{nm}$).

### V.3 Measurement Details

Electrical measurements were performed in a dilution refrigerator at a base temperature of $20~{}\rm{mK}$. Using conventional lock-in techniques at 166 Hz, an ac excitation voltage of 3 $\mu\rm{V}$ and a variable dc bias voltage $V_{\rm{sd}}$ were applied to the normal-lead ohmic contact as shown in Fig. 1. The resulting current across the device was recorded by grounding the superconducting-loop ohmic contact via a low-impedance current-to-voltage converter, and the four-terminal voltage was measured by an ac voltage amplifier with an input impedance of $500~{}\rm{M\Omega}$.

### V.4 Model Details

The energy of the subgap excitations in an S-QD-S system was theoretically examined under the influence of QD level detuning $\varepsilon$, coupling to the superconductor $g$, magnetic field $B_{\parallel}$, and superconducting phase difference $\varphi$ with the model proposed by Kiršanskas et al. Kiršanskas _et al._ (2015). This is an Anderson-type model describing a single Coulomb-blockaded QD level that is coupled to two S leads with superconducting gaps $\Delta\exp(\pm i\varphi/2)$ in the limit of $U\gg\Delta$. The excitations in this model are Yu-Shiba-Rusinov states resulting from spinful odd QD occupancies. In Fig.
2 we examine the energy of bound-state excitations $E_{\rm{BS}}$ in an S-QD-S JJ. The bound-state energies are calculated following Kiršanskas _et al._ (2015),

$\begin{split}&E_{\pm,\sigma}=\frac{1}{2}\sigma E_{\rm{Z}}-\frac{\sigma c_{\pm}\Delta}{\sqrt{(1+u)^{2}+4g^{2}}}\Big{[}(1+u)(1+\chi u)\\\ &+2g^{2}\pm 2g\sqrt{g^{2}+u(1-\chi)(1+\chi u)}\Big{]}^{1/2}\end{split}$ (2)

where the following shorthand notation is used,

$\begin{split}&\chi=1-\sin^{2}(2\theta)\sin^{2}(\varphi/2),\quad u=w^{2}-g^{2},\quad c_{+}=1,\\\ &c_{-}=\rm{sign}(1+\chi u),\quad\tan(\theta)=t_{R}/t_{L}.\end{split}$ (3)

The exchange scattering amplitude $g$ and the potential scattering amplitude $w$ both depend on the position of the QD level detuning $\varepsilon$. The spin of the bound states, $\sigma=\pm 1$, is either aligned or anti-aligned with respect to the spin of the QD. An angle $\theta$ is introduced to account for an asymmetry between the left and right tunnel couplings ($t_{R/L}$) to the superconducting leads, where $\theta=\pi/4$ represents symmetric coupling. In Ref. Kiršanskas _et al._ (2015), these bound-state energies are used to calculate the conductance with a weakly coupled normal lead (a setup similar to Fig. 1b), and good agreement between the simulated conductance and the subgap spectra shown in Fig. 2 is found. In Figs. 8(a-d), the dependence on a varying charging energy $U$ is shown. Decreasing the charging energy opens a gap at $\varphi=\pi$ due to an increased $g$. This shifts the critical Zeeman energy to higher fields (see Figs. 8b,d). Figures 8(e,f) show the effect of coupling asymmetry on the phase dispersion. Asymmetric left/right coupling can open a gap at $\varphi=\pi$, which can be closed by symmetrizing the coupling or applying a magnetic field.
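For reference, Eqs. (2)-(3) can be evaluated directly. The sketch below (the $g$ and $w$ values are illustrative, not fitted to the data; energies are in units of $\Delta$) reproduces the qualitative behavior discussed above: for symmetric coupling at zero field, the even-parity bound state sits at the gap edge at $\varphi=0$ and drops towards zero at $\varphi=\pi$.

```python
import math

# Direct evaluation of Eqs. (2)-(3) of the Kirsanskas et al. model;
# energies in units of Delta. The g and w values used below are
# illustrative only, not extracted from the measured devices.
def bound_state_energy(sigma, branch, g, w, E_Z, phi, theta=math.pi / 4):
    chi = 1.0 - math.sin(2.0 * theta) ** 2 * math.sin(phi / 2.0) ** 2
    u = w * w - g * g
    # c_+ = 1, c_- = sign(1 + chi * u)
    c = 1.0 if branch > 0 else math.copysign(1.0, 1.0 + chi * u)
    inner = math.sqrt(g * g + u * (1.0 - chi) * (1.0 + chi * u))
    bracket = (1.0 + u) * (1.0 + chi * u) + 2.0 * g * g + branch * 2.0 * g * inner
    prefactor = sigma * c / math.sqrt((1.0 + u) ** 2 + 4.0 * g * g)
    return 0.5 * sigma * E_Z - prefactor * math.sqrt(bracket)

# Phase dispersion of the lower excitation at zero field (symmetric coupling):
e_0 = bound_state_energy(+1, +1, g=0.5, w=0.8, E_Z=0.0, phi=0.0)      # gap edge
e_pi = bound_state_energy(+1, +1, g=0.5, w=0.8, E_Z=0.0, phi=math.pi)  # closer to zero
```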
Figure 8: Dependence of bound-state spectra on superconducting phase difference $\varphi$ and Zeeman energy $E_{\rm{Z}}$ for charging energy (a,b) $U=2$ and (c,d) $U=4$, with asymmetric coupling to the superconducting leads ($\theta=\pi/3$). (e,f) Dependence of the phase dispersion and Zeeman energy of the bound-state excitations on the coupling asymmetry $\theta$.

In the model of Kiršanskas et al. Kiršanskas _et al._ (2015), a polarized-spin approximation on the QD is employed to derive Eq. 2. Therefore, the Zeeman energy does not influence the QD but induces spin splitting in the superconducting leads. Experimentally, we interpret the observed magnetic field dependence to reflect Zeeman splitting of the doublet ground states, as discussed theoretically in Refs. Jellinggaard _et al._ (2016); Žitko _et al._ (2015) and experimentally in Ref. Lee _et al._ (2014). It is experimentally challenging to differentiate between the two Zeeman splitting mechanisms since they contribute different g-factor values, as discussed in Ref. van Gerven Oei _et al._ (2017). We therefore assume an effective g-factor $g^{*}$ that accounts for a contribution from both mechanisms.

## VI Supplementary Information

Figure S.1: Dependence of an even-parity ground state on the superconducting phase difference $\varphi$ for intermediate values of $V_{\rm{pg}}$ in device A. (a-d) Differential conductance $G$ as a function of $\varphi$ and bias voltage $V_{\rm{sd}}$ for magnetic field $B_{\parallel}=0$ T (a), 0.2 T (b), 0.4 T (c), 0.6 T (d) at $V_{\rm{pg}}=-5.73$ V. (e) $G$ as a function of $V_{\rm{sd}}$ and $V_{\rm{pg}}$ for tunnel barrier gate voltage $V_{\rm{t}}=$ -1.82 V. (f-i) $G$ as a function of $\varphi$ and $V_{\rm{sd}}$ for $B_{\parallel}=0$ T (f), 0.4 T (g), 0.6 T (h), 0.8 T (i) at $V_{\rm{pg}}=-5.65$ V.

Figure S.2: Coulomb blockade in device A.
(a-d) Differential conductance $G$ as a function of bias voltage $V_{\rm{sd}}$ and $V_{\rm{pg}}$ in the normal state (a,c) and in the superconducting state (b,d). Panels (a) and (b) are measured in a gate configuration similar to that shown in Fig. 3b. Panels (c,d) are measured with more negative $V_{\rm S}$ gate voltages to allow for clearer Coulomb blockade features.

Figure S.3: Evolution of phase dispersion in magnetic field for device B. (a-l) Differential conductance $G$ as a function of $\varphi$ and bias voltage $V_{\rm{sd}}$ for increasing magnetic field $B_{\parallel}$.
# Neuromorphic adaptive spiking CPG towards bio-inspired locomotion of legged robots

Pablo Lopez-Osorio<EMAIL_ADDRESS>Alberto Patino-Saucedo, Juan P. Dominguez-Morales, Horacio Rostro-Gonzalez, Fernando Perez-Peña<EMAIL_ADDRESS>School of Engineering, Universidad de Cádiz, Spain. Department of Electronics, DICIS-University of Guanajuato, Mexico. Robotics and Technology of Computers Lab., Universidad de Sevilla, Spain.

###### Abstract

In recent years, locomotion mechanisms exhibited by vertebrate animals have been the inspiration for improving the performance of robotic systems. These mechanisms include the adaptability of their locomotion to any change registered in the environment through their biological sensors. In this regard, we aim to replicate such adaptability in legged robots through a spiking central pattern generator (sCPG). This sCPG generates different locomotion (rhythmic) patterns which are driven by an external stimulus, that is, the output of a force sensitive resistor (FSR) connected to the robot to provide feedback. The sCPG consists of a network of five populations of leaky integrate-and-fire (LIF) neurons designed with a specific topology in such a way that the rhythmic patterns can be generated and driven by the aforementioned external stimulus. Therefore, the locomotion of the end robotic platform (any legged robot) can be adapted to the terrain by using any sensor as input. The sCPG with adaptive learning has been numerically validated at software and hardware level, using the Brian 2 simulator and the SpiNNaker neuromorphic platform for the latter. In particular, our experiments clearly show an adaptation in the oscillation frequencies between the spikes produced in the populations of the sCPG while the input stimulus varies. To validate the robustness and adaptability of the sCPG, we have performed several tests by varying the output of the sensor.
These experiments were carried out in Brian 2 and SpiNNaker; both implementations showed a similar behavior, with a Pearson correlation coefficient of 0.905.

###### keywords: Neurorobotics, SpiNNaker, Central Pattern Generator, Spiking Neural Network, Neuromorphic Hardware, Adaptive learning. Journal: Neural Networks

## 1 Introduction

It is well known that, in biology, rhythmic locomotion is produced by a neural structure called Central Pattern Generator (CPG) [1]. This structure is located at the spinal cord and it usually comprises two neural populations which produce an alternating output of spikes. Eventually, these output spikes are used to activate the muscle fibers. This approach of using CPGs can be borrowed to create locomotion in robotics. There are several possibilities to implement a CPG: using coupled oscillators, using Artificial Neural Networks, or using Spiking Neural Networks (SNNs). The implementation closest to biology is to use an SNN. These networks are based on neuron models and synaptic connections that implement biological features. The field of research called neuromorphic engineering aims to implement these networks in electronics, mimicking the way living beings have solved complex problems by using both analog and digital circuits. The neuromorphic robotics field brings together the neuromorphic engineering and robotics communities [2]. The use of neural structures made of spiking neurons, coming from the neuromorphic engineering field, within robotics results in the need for fewer resources, lower power consumption, and simpler algorithms [3]. One of these neural structures is the CPG. This structure generates a rhythmic pattern at its output, which can be used within robotics to generate locomotion. Thus, a CPG creates gaits that are suitable for use in a robotic platform. These structures can generate a very stable pattern even without sensory information or brain activity [4].
As briefly shown, there is a growing community of researchers exploring the possibility of using a CPG to create locomotion in robots [1]. Most of the previous works in the literature present an open-loop CPG which does not include any sensory information: in [5], the actuation of a lamprey-like robot is done by using an open-loop CPG and neuromorphic hardware. Then, in [6] and [7], the authors proposed a CPG implemented on a Field Programmable Gate Array (FPGA) and on the SpiNNaker platform [8]; these three works do not offer the possibility of changing the originally produced pattern in real time. However, they showed that implementing CPGs using spiking neurons requires less power. A more recent paper allows both real-time operation and pattern variation, but without including any sensory information [9]. In [10], an open-loop CPG is implemented on Loihi [11] using an astrocytic network, producing two different gaits with 24 motor neurons. However, the most recent work [12] proposes the implementation of a CPG with the possibility of changing the amplitude, frequency, and phase online, without any sensory input required. The authors also pointed out that the architecture should include sensory feedback to modify the behavior of the CPG. That is the objective presented in this paper: we propose an SNN that, using the sensory information, adapts the behavior of the CPG. Regarding works that include sensory information, in [13] the CPG is built using coupled oscillators instead of spiking neurons, and the feedback to the CPG is included in the control loop of the CPG equations. In [14], 12 simulated neurons modulated by sensory feedback are used to build the CPG. In contrast to the adaptation SNN we propose, they achieve different gaits by either moving the location or increasing the number of neural structures.
The neuron model used there is the Izhikevich model, whereas in this work the Leaky Integrate-and-Fire (LIF) model is used to reduce the computational resources required. The sensory information is used to adapt the time duration of each phase of the frequency switching, enabling the actuators to reach the commanded position. Finally, in [15], a neuromorphic sensor has been used to select which predefined gait of the CPG should be activated. In most of these works, the use of an external input to the CPG changes its performance. This effect can be described as neuromodulation. It has been shown that this modulation is essential to alter the behavior of a neural structure by modifying the synaptic connections [16]. Finally, there are works where the main focus is on the learning process of the robotic platform: in [17], the authors proposed a reward-based learning process to teach a hexapod robot how to walk without any previous knowledge. A couple of sensors (a standard camera and a gyroscope) are used to provide the reward signal to the neural network based on a CPG. Although they used a digital version of the LIF neuron model, they did not use neuromorphic hardware to implement the neural architecture; a Raspberry Pi was used instead. A similar approach based on reinforcement learning, but without using spiking neurons, was proposed to improve the locomotion of the NAO robot [18]. Another approach is used in [19], where the authors have two hexapod robots: an expert and a student. The student learns or imitates the gait of the expert by using a one-layer feedforward SNN and a Dynamic Vision Sensor (DVS) camera as input. However, the possibility for the robot to adapt to its environment is not implemented. The objective of this paper is to design and deploy a spiking architecture that makes the interaction of a spiking CPG with its environment possible. An external agent, i.e., a force sensitive resistor (FSR), is introduced as the feedback stimulus to the network.
This agent can modify the gait generated by the CPG. Therefore, the locomotion of the end robotic platform (any legged robot) can be adapted to the terrain. Furthermore, the spiking network presented in this paper allows the introduction of the feedback sensory information in the loop of the CPG to provide adaptation. This adaptation SNN could be used with any sensor as stimulus. The rest of the paper is structured as follows: section 2.1 introduces the materials used in this work, including the simulator and hardware used. The implemented methods are described in section 2.2, together with the SNN model. Then, the results obtained are presented and discussed in section 3. This section is divided into two subsections: first, the experiments run using the software simulator and then, the same experiments run on the hardware platform. Finally, the conclusions are presented.

## 2 Materials and Methods

### 2.1 Materials

This section describes both the software and hardware used to perform the experiments.

#### 2.1.1 Brian 2

Brian 2 [20] is a neural simulator for SNNs written in the Python programming language. Thus, it is cross-platform and available on different operating systems. In contrast to other SNN simulators such as NEURON [21] or PyNN [22], Brian 2 is highly flexible and easily extensible with new non-standard neuron models and synapses. Brian 2 can be used to model and simulate complex problems faced by neuroscientists, as well as to obtain faster and more robust results before implementing the solution on a hardware platform.

#### 2.1.2 SpiNNaker

SpiNNaker [23, 8, 24] is a massively parallel, multi-core computing system designed by the Advanced Processor Technologies (APT) Research Group at the University of Manchester. It was designed under the Human Brain Project (HBP) [25] for simulating parts of the brain using SNNs. SpiNNaker machines consist of SpiNNaker chips, each of which has eighteen 200-MHz ARM968 processor cores [26].
This allows an asynchronous communication infrastructure for sending short packets (each of them representing a particular neuron firing) [27], identified using Address Event Representation (AER) [28]. Different SpiNNaker machines were built and commercialized, including SpiNN-3 and SpiNN-5, with 4 and 48 SpiNNaker chips, respectively. They also include the spinnlinks [29], which allow real-time input/output interfacing with neuromorphic sensors and other neuromorphic platforms such as FPGAs [30, 31, 32]. A PyNN-based [22] software package called sPyNNaker [33] can be used to design and implement SNNs on these machines. The recently built million-core machine, located at the School of Computer Science at the University of Manchester, can be used through the HBP portal. In this work, a SpiNN-5 machine was used to run the proposed simulations.

### 2.2 Methodology

We simulated our Spiking Central Pattern Generator (sCPG) model on a standard computer using the Brian 2 simulator to characterize the network dynamics, to analyze the number of neurons needed per population, and to adjust the network parameters. The simulation results guided the subsequent neuromorphic implementation on the SpiNNaker platform.

#### 2.2.1 Neuron Model

The LIF model is used to implement the neurons on both the software simulator and the SpiNNaker hardware platform [34]. The model is defined by equations (1) and (2).

$\tau_{m}\frac{dV}{dt}=-(V-V_{r})+RI(t)$ (1)

$if\>V(t)=V_{th}\quad then\quad\lim_{\delta\rightarrow 0;\delta>0}V(t+\delta)=V_{r}$ (2)

where $V$ is the membrane potential of the neuron, $R$ represents the resistance of the membrane, $\tau_{m}$ is the time constant of the neuron, $V_{r}$ is the resting potential, $V_{th}$ is the threshold, and $I(t)$ is the stimulus.

#### 2.2.2 Network Model

The SNN model depicted in Figure 1 was designed based on the neuron model presented in section 2.2.1.
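Before describing the network, the single-neuron dynamics of Eqs. (1)-(2) can be sketched with a minimal forward-Euler integration using the Table 1 parameters. Note that the membrane resistance is not listed in Table 1; $R=\tau_{m}/c_{m}=32~{}\rm{M\Omega}$ is our assumption, derived from the listed $\tau_{m}$ and $c_{m}$.

```python
# Forward-Euler LIF (Eqs. 1-2) with the Table 1 parameters.
# R = tau_m / c_m = 32 MOhm is an assumption (not listed in Table 1);
# with it, voltages are in mV when currents are in nA.
def lif_spike_times(i_na, t_stop_ms=200.0, dt_ms=0.1):
    tau_m, c_m = 6.0, 0.1875                     # ms, nF
    r_mohm = tau_m / c_m                         # 32 MOhm
    v_rest, v_reset, v_th = -55.0, -55.0, 15.0   # mV
    v, spikes = v_rest, []
    for step in range(int(t_stop_ms / dt_ms)):
        v += (dt_ms / tau_m) * (-(v - v_rest) + r_mohm * i_na)
        if v >= v_th:
            spikes.append(step * dt_ms)
            v = v_reset
    return spikes
```

With these assumed values, a constant 2.2 nA input drives the steady-state voltage to $-55 + 32\cdot 2.2 = 15.4$ mV, just above the 15 mV threshold, so the neuron fires, which is consistent with the $I_{st}=2.2$ nA stimulus used for populations A and B below; 2.0 nA leaves the neuron silent.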
The main objective of the proposed model is to generate a constant oscillation between the spikes produced in ensembles A and B, whose frequency varies depending on the value read from the FSR sensor. Thus, the proposed CPG is able to automatically adapt its behavior depending on the input stimulus. The SNN consists of different parts, which are described next. It is important to note that each of the ensembles (also called populations) has the same number of neurons. The number of neurons in each population was set based on different experiments, which are presented in section 3.

Figure 1: Diagram of the proposed spiking neural network architecture.

The main block of the architecture is the so-called $CPG_{AB}$, which consists of the populations A and B represented in Figure 1. These populations are self-excited and self-inhibited with probabilities of 25% and 75% and weights of 4 nA and 1.5 nA, respectively. Moreover, a 75% probability of having inhibitory synapses between neurons from the two aforementioned populations is also present. Thus, when one of the populations is producing spikes, the opposite one is inhibited, generating the desired oscillation pattern. Populations A and B are injected with a constant external current in order to start the oscillation. Therefore, for these two populations, equation (3) is used instead of equation (1),

$\frac{dV}{dt}=\frac{V_{r}-V+R(I_{exc}-I_{inh}+I_{st})}{\tau_{m}}$ (3)

where $I_{st}$ is the current injected to the neurons in populations A and B. This value was set to 2.2 nA, which is sufficient for producing the desired rhythmic pattern. Furthermore, populations 1 and 2 ($CPG_{12}$) both have the same number of neurons as those in $CPG_{AB}$, and are also interconnected in the same way. The projections between $CPG_{12}$ and $CPG_{AB}$ are depicted in Figure 1, and follow the same aforementioned probabilities (25% for excitatory and 75% for inhibitory projections).
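The probabilistic projections described above can be instantiated as random weight matrices. The sketch below follows the stated connection probabilities and weights; the population size and RNG seed are illustrative choices, not values from the paper.

```python
import numpy as np

# Random connectivity following the stated probabilities and weights:
# excitatory projections with p = 0.25 and w = 4 nA, inhibitory
# projections with p = 0.75 and w = -1.5 nA. N and the seed are
# illustrative, not values from the paper.
def projection(n_pre, n_post, p_connect, weight_na, rng):
    mask = rng.random((n_pre, n_post)) < p_connect
    return np.where(mask, weight_na, 0.0)

rng = np.random.default_rng(42)
N = 100
w_self_exc = projection(N, N, 0.25, 4.0, rng)    # A -> A self-excitation
w_self_inh = projection(N, N, 0.75, -1.5, rng)   # A -> A self-inhibition
w_cross_inh = projection(N, N, 0.75, -1.5, rng)  # A -> B cross-inhibition
```

In a Brian 2 or sPyNNaker implementation, the same probabilities would typically be passed directly to the simulator's probabilistic connector rather than built as dense matrices; the matrices here only make the stated statistics explicit.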
The weights between the different populations of the proposed model are specified in the same figure. Finally, the reference population ($Ref$) was implemented as a Poisson distribution with variable frequency. In contrast to the populations from $CPG_{AB}$ and $CPG_{12}$, the number of neurons in $Ref$ was set to 50. These neurons are connected to populations 1 and 2 following the scheme presented in Figure 1. This number of neurons was found to be optimal for producing biologically-plausible spiking rates in the population, with maximum and minimum frequencies close to the biological counterpart [35]. The spiking rate of the Poisson distribution depends on the values obtained from the FSR sensor used as input to population $Ref$. This sensor, which should be placed at the end of the robot's leg, provides values between 0 and 5 V. Since the spinal cord ventral horn alpha motor neuron, which is the biological neuron taken as reference, has a spike rate between 10 and 171 Hz [35], a linear regression was established in order to match the frequency of the Poisson distribution with the voltage value read from the sensor. Equation (4), where $V_{s}$ is the voltage value provided by the sensor, shows this relation. $f_{Ref}=\frac{280V_{s}-950}{3}$ (4)

Parameter | Value
---|---
$u_{reset}$ | -55.0 mV
$u_{rest}$ | -55.0 mV
$u_{th}$ | 15.0 mV
$\tau_{m}$ | 6.0 ms
$\tau_{syn_{e}}$ | 5.0 ms
$\tau_{syn_{i}}$ | 8.75 ms
$c_{m}$ | 0.1875 nF
$\Delta_{t}$ | 1.0 ms
$I_{bias}$ | 2.2 nA

Table 1: Neuron parameters for the proposed CPG in both the Brian 2 simulator and the SpiNNaker hardware platform.

Based on these three blocks ($CPG_{AB}$, $CPG_{12}$ and $Ref$) and the connections between them, the proposed behavior explained at the beginning of this section was achieved. Therefore, following Fig. 1, different scenarios can be analyzed.
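Equation (4) above is a direct linear map from sensor voltage to Poisson rate; a one-line sketch follows. Note that, as written, the formula can go negative for small $V_{s}$, so any clamping to the biological 10-171 Hz range would be an extra assumption not stated in the text.

```python
def ref_rate_hz(v_s):
    """Target rate (Hz) of the Ref Poisson population for an FSR
    reading v_s in volts, following Eq. (4)."""
    return (280.0 * v_s - 950.0) / 3.0
```

For instance, the maximum 5 V reading maps to (280*5 - 950)/3 = 150 Hz under this formula.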
In the case where the spike rate of population $Ref$ is greater than the oscillation frequency of $CPG_{AB}$, population 1 will be inhibited and population 2 will be excited. Since population 1 is excited by A, but receives fewer excitatory spikes than the inhibitory ones arriving from $Ref$, it will have very low activity. The opposite occurs in population 2, which is inhibited by B but more strongly excited by $Ref$ and thus has considerably more activity than population 1. The activity from population 2 will excite $CPG_{AB}$, increasing its oscillation frequency. Conversely, the opposite happens when the spike rate of $Ref$ is lower than the oscillation frequency of $CPG_{AB}$: population 1 will have more activity than population 2 and will therefore inhibit $CPG_{AB}$ in order to reduce its frequency. As a result, the proposed network is able to adapt the frequency of the CPG based on an input stimulus. This model was simulated in Brian 2 and emulated on SpiNNaker, and the results can be seen in section 3.

## 3 Results and Discussion

### 3.1 Brian 2 simulations

The first experiment was performed to determine the number of neurons per population of the CPG needed to achieve a stable rate value along the simulations. The parameters used for all the experiments are shown in Table 1. Figure 2 shows the maximum, minimum and mean rate values obtained across the simulations performed. One thousand simulations were run for each number of neurons. As the number of neurons increases, the rate achieved is more stable and the standard deviation becomes lower. Starting from 40 neurons per population in the $CPG_{AB}$, the standard deviation is less than 1.5 and the behavior is more stable. A hundred neurons per population shows the most stable output rate with the least deviation. Figure 2: Average, maximum and minimum rate values obtained for each number of neurons per population.
The trace shows the mean and, as a shadow, the maximum and minimum rates over all one thousand simulations performed. Once the number of neurons per population was fixed, to verify whether the architecture presented in Section 2.2 behaved as expected, the operation of the $CPG_{AB}$ in isolation was analyzed. This experiment ensured that it was able to produce a constant oscillation. Then, the operation of the same CPG was examined once interconnected with the $CPG_{12}$, performing tests with different stimuli to analyze the results obtained. Finally, the entire architecture was connected and analyzed in different scenarios based on the external input from the FSR sensor.

#### 3.1.1 $CPG_{AB}$ analysis

The topology of the $CPG_{AB}$ can be seen in Figure 1, where green arrows denote excitatory connections and red arrows denote inhibitory connections. As mentioned in section 2.2.2, $I_{St}$ is a constant current injected into all neurons in populations A and B, with a fixed value of 2.2 nA.

Figure 3: Simulation of the $CPG_{AB}$ with an $I_{St}$ value of 10 nA. An increased oscillation frequency can be observed, along with an increase in the amount of noise, when compared to Figure 4.

Figure 4: Simulation of the $CPG_{AB}$ with an $I_{St}$ value of 2.2 nA. A lower oscillation frequency and almost total noise removal can be observed when compared to Figure 3.

This current is the minimum value required to produce the oscillatory pattern in the CPG. While the proposed topology can work with higher values of $I_{St}$, this generates a higher oscillation frequency and a noticeable increase in the noise introduced in the simulation (see figures 3 and 4). Thus, this value was set to 2.2 nA in order to more easily appreciate the effect of feedback on the CPG. The results of the simulation performed are shown in Figure 4, where a frequency of approximately 5.8 Hz was obtained for each population, while the total frequency of the CPG was 11.6 Hz.
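Oscillation frequencies such as the 5.8 Hz per population reported above can be measured from a raster by binning the population's spikes and counting rising edges of the binned rate. The following is a stdlib sketch with made-up spike times, not the authors' analysis code.

```python
def population_rate(spike_times_ms, t_stop_ms, bin_ms=10.0):
    """Histogram of population spike counts in fixed bins: the first
    step in measuring the CPG oscillation frequency from a raster."""
    counts = [0] * int(t_stop_ms / bin_ms)
    for t in spike_times_ms:
        b = int(t / bin_ms)
        if b < len(counts):
            counts[b] += 1
    return counts

def count_bursts(counts, thresh):
    """Rising edges of the binned rate above `thresh`: one edge per
    oscillation cycle (a burst already active in bin 0 is missed)."""
    return sum(1 for a, b in zip(counts, counts[1:]) if a < thresh <= b)
```

Three synthetic bursts in a 300 ms window yield three rising edges, i.e. an estimated oscillation frequency of 3 / 0.3 s = 10 Hz.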
Figure 5: Results of the simulation of the proposed SNN model when having a 5 V input from the FSR sensor for 1000 ms between $t=1000ms$ and $t=2000ms$ and between $t=3000ms$ and $t=4000ms$ (generating a Poisson distribution of 171 Hz in $Ref$). For the rest of the simulation time, the sensor's output was set to 0 V.

Figure 6: Results obtained when simulating random voltage readings from the FSR sensor.

Figure 7: Results obtained when simulating a continuous increase in the frequency of $Ref$.

#### 3.1.2 Analysis of the effect of the sensor when used as input to the SNN

In order to study the robustness of the network against sudden changes in the oscillation frequency, an experiment was performed where the values read from the sensor alternated between maximum and minimum voltage peaks. Initially, a 5 V input was simulated starting at 1 s and at 3 s, each with a duration of 1 s. This made the neurons in population $Ref$ fire at a frequency of approximately 171 Hz during these periods. Before 1 s, between 2 s and 3 s, and after 4 s the sensor readings corresponded to 0 V. Figure 5 shows the results of this simulation. It can be observed that, at time zero, since no information was being received from the sensor, $Ref$ had no activity. Therefore, population 1 was excited by $CPG_{AB}$ and population 2 was inhibited. In turn, population 1 slightly inhibited $CPG_{AB}$. At 1 s, $Ref$ started firing at a frequency of approximately 171 Hz, exciting population 2 and inhibiting population 1. During this period, the former excited $CPG_{AB}$, increasing its oscillation frequency. After 2 s, population 1 started dominating population 2 again, slightly inhibiting $CPG_{AB}$. Exactly the same behavior can be seen again starting at 3 s. In this figure it can be observed how the frequency of $CPG_{AB}$ increased considerably between 1 s and 2 s and between 3 s and 4 s, reaching minimum frequencies of 8 Hz and maximum frequencies of 15 Hz.
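The variable-rate $Ref$ population is driven as a homogeneous Poisson process. One standard way to realize such a spike train is by drawing exponential inter-spike intervals, sketched below; this is a generic illustration, not the internals of Brian 2's `PoissonGroup`.

```python
import random

def poisson_spike_times(rate_hz, t_start_s, t_stop_s, rng):
    """Homogeneous Poisson spike train on [t_start_s, t_stop_s):
    inter-spike intervals are exponential with mean 1/rate_hz."""
    spikes = []
    t = t_start_s + rng.expovariate(rate_hz)
    while t < t_stop_s:
        spikes.append(t)
        t += rng.expovariate(rate_hz)
    return spikes

rng = random.Random(0)
train = poisson_spike_times(171.0, 1.0, 2.0, rng)  # 5 V reading -> ~171 Hz
```

Over a 1 s window the number of spikes fluctuates around the requested rate, as expected for a Poisson process.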
Finally, different simulations of more realistic cases were performed. Initially, 10 random voltage values were used in order to simulate different readings from the FSR sensor. These values were updated every 500 ms. Specifically, the frequency values for the Poisson distribution in $Ref$ were (171, 40, 80, 30, 5, 130, 50, 76, 20, 150) Hz. In Figure 6 it can be seen how $CPG_{AB}$ adapts its oscillation frequency based on the input stimuli, obtaining maximum and minimum frequency peaks of 15 Hz and 8 Hz, respectively. After this, a constant increase in the values of the readings from the sensor was simulated. In particular, increases in steps of 20 Hz per 500 ms were introduced in the frequency of $Ref$. To check the performance limits of the $CPG_{AB}$, the last injected frequency value exceeded the maximum theoretical value of 171 Hz by up to 17%. Figure 7 shows the results of this experiment. As can be seen in the figure, although there is a significant increase in the amount of noise, the oscillation frequency of $CPG_{AB}$ does not increase, reaching a maximum frequency of 15 Hz.

### 3.2 Running the model on SpiNNaker

The proposed sCPG model was tested on the SpiNNaker neuromorphic hardware. The neuronal model is defined in sPyNNaker [33], a PyNN-based software interface that allows quick prototyping and implementation of spiking neural networks on the SpiNNaker platform. The spiking neuron model is the LIF with fixed threshold and decaying-exponential post-synaptic current, whose parameters are given in Table 1. These parameters were chosen so as to emulate the neuron dynamics of the simulations in Brian 2. In order to show that the model achieves a good performance on SpiNNaker, we performed three tests: first, by verifying that the constant oscillations of the $CPG_{AB}$ were observed and matched the rates presented in Brian 2. Then, by implementing the whole sCPG with increasing rates of the input sensor represented by the reference population.
Finally, by testing the sCPG under random stimuli.

#### 3.2.1 $CPG_{AB}$ implementation

Figure 8: SpiNNaker simulation of the $CPG_{AB}$.

Figure 9: SpiNNaker simulation of the $CPG$ for extreme values of the stimulus spiking rate (0.1 Hz and 171 Hz).

For the implementation of the CPG on SpiNNaker (see Figure 8), 100 neurons were used, each with a constant current $I_{St}$ equal to 2.2 nA, as in the equivalent Brian experiment. The measured rate of the oscillatory pattern of populations A and B was 11.62 Hz, which matches the rate measured in the Brian simulation. Although some spikes were lost in the raster plot compared to Brian, which can be attributed to limited support for floating-point calculations in sPyNNaker, the pattern appears consistent and with little noise.

Figure 10: SpiNNaker simulation of the $CPG$ for ten increasing values of the stimulus rate (from 0 Hz to 50 Hz).

#### 3.2.2 $CPG$ under different stimuli

The SpiNNaker implementation of the full CPG, including the feedback network, was tested under different conditions of the stimulus. First, we simulated a sudden change of the value of the sensor, represented by the rate of the Poisson generator in the $Ref$ population (see figure 9). This rate was set to 0.1 Hz for the first 2 seconds of the simulation and oscillated between 171 Hz and 0.1 Hz for the next four seconds, in order to appreciate both regimes of the CPG. It can be seen how, with a low $Ref$ frequency, the first feedback population dominates and the rate of the output oscillatory pattern is low, at around 12.5 Hz. With a high $Ref$ frequency, it is the second feedback population which dominates, and the measured oscillatory pattern displays a peak frequency of 27.5 Hz.

Figure 11: SpiNNaker simulation of the $CPG$ for ten random values of the stimulus rate (from 0 Hz to 50 Hz).

Figures 10 and 11 show the spiking response of the SpiNNaker CPG to increasing rates of $Ref$ and to random values of $Ref$, respectively.
The two regimes can be clearly observed in the spiking response of populations 1 and 2 and in the measured oscillatory rates of populations A and B, proving that the feedback mechanism works correctly.

### 3.3 Comparison between the results obtained in Brian 2 and SpiNNaker

Figure 12 shows the comparison between both approaches: the simulations in Brian 2 and the implementation on the SpiNNaker platform. In this experiment, the same input stimulus from the sensor was used for both approaches, ranging from 0 Hz to 180 Hz. The rates generated by both sCPGs were very similar. In the case of Brian 2, the operating frequency of the sCPG ranged from 9.5 Hz to 14.9 Hz, and from 11.62 Hz to 26.66 Hz in the case of the SpiNNaker platform. The calculated Pearson correlation coefficient between both is 0.905.

Figure 12: Comparison of the results obtained in both the Brian 2 simulation and the SpiNNaker implementation. The plot shows the rate generated by the sCPG when the input stimulus (population $Ref$) is changed from 0 Hz to 180 Hz. The blue trace shows the Brian 2 results (left y-axis) and the red trace the SpiNNaker results (right y-axis).

## 4 Conclusions

In this paper, we have presented what is, to the best of our knowledge, the first sCPG that incorporates feedback in the loop to modify the locomotion frequencies of a legged robot through an adaptive learning mechanism. The feedback was provided by an FSR (although any sensor might be used) to simulate the force exerted on each limb of a future robot. With the use of this sensor, we injected different stimuli into the sCPG. Firstly, we performed some tests to find the optimal (minimum) input value at which the feedback effect could be easily observed. Then, some tests were performed to observe the effect on the oscillation frequencies under different stimuli.
Finally, we carried out some experiments where the values read from the sensor alternated between maximum and minimum voltage peaks in order to study the robustness of the sCPG against sudden changes in the oscillation frequency. From these experiments, we conclude that our sCPG presents a robust behavior in the adaptability of the oscillation frequencies. These frequencies can be used further in the generation of locomotion gaits for legged robots, with the advantage that they can be modified depending on the terrain conditions. The implementation of the sCPG was first done using the Brian 2 simulator and then, considering the same parameters, it was migrated to SpiNNaker. The results in both cases are virtually identical, as shown in Figure 12, which demonstrates the reproducibility of our architecture on any platform. Compared to the biological locomotion mechanism, we consider our approach biologically plausible in two senses. The first is that the proposed network model is based on spiking neurons, which are considered the neuron models that best mimic the behavior of biological neurons. The second is that the implementation on SpiNNaker allowed us to improve the power consumption, and the hardware itself is intended as an artificial representation of the brain. As future work, we propose to embed the SpiNNaker system into different legged robots (e.g. biped, quadruped and hexapod) to validate our approach on a real robotic platform. Also, by incorporating a neuromorphic sensor, the same sCPG could directly process spatiotemporal patterns.
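The cross-platform agreement discussed in Section 3.3 was quantified with the Pearson correlation coefficient (0.905). For reference, a stdlib implementation of that statistic is sketched below; the rate curves fed to it would come from Figure 12 and are not reproduced here.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient between two equal-length
    sequences (e.g. the Brian 2 and SpiNNaker rate curves)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly linear increasing or decreasing relationships give +1 and -1, respectively.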
## Acknowledgements This work was partially supported by the Interreg Atlantic Area Programme through the European Regional Development Fund (TIDE - Atlantic network for developing historical maritime tourism, EAPA_630/2018), by the Spanish grant (with support from the European Regional Development Fund) MIND-ROB (PID2019-105556GB-C33) and by the EU H2020 project CHIST-ERA SMALL (PCI2019-111841-2). ## References * Ijspeert [2008] A. J. Ijspeert, Central pattern generators for locomotion control in animals and robots: a review, Neural networks 21 (2008) 642–653. * Krichmar and Wagatsuma [2011] J. L. Krichmar, H. Wagatsuma, Neuromorphic and brain-based robots, Cambridge University Press, 2011. * Indiveri et al. [2011] G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. Van Schaik, R. Etienne-Cummings, T. Delbruck, S.-C. Liu, P. Dudek, P. Häfliger, S. Renaud, et al., Neuromorphic silicon neuron circuits, Frontiers in neuroscience 5 (2011) 73. * Vogelstein et al. [2008] R. J. Vogelstein, F. V. Tenore, L. Guevremont, R. Etienne-Cummings, V. K. Mushahwar, A silicon central pattern generator controls locomotion in vivo, IEEE transactions on biomedical circuits and systems 2 (2008) 212–222. * Donati et al. [2014] E. Donati, F. Corradi, C. Stefanini, G. Indiveri, A spiking implementation of the lamprey’s Central Pattern Generator in neuromorphic VLSI, in: IEEE 2014 Biomedical Circuits and Systems Conference, BioCAS 2014 - Proceedings, pp. 512–515. * Rostro-Gonzalez et al. [2015] H. Rostro-Gonzalez, P. A. Cerna-Garcia, G. Trejo-Caballero, C. H. Garcia-Capulin, M. A. Ibarra-Manzano, J. G. Avina-Cervantes, C. Torres-Huitzil, A CPG system based on spiking neurons for hexapod robot locomotion, Neurocomputing 170 (2015) 47–54. * Cuevas-Arteaga et al. [2017] B. Cuevas-Arteaga, J. P. Dominguez-Morales, H. Rostro-Gonzalez, A. Espinal, A. F. Jimenez-Fernandez, F. Gomez-Rodriguez, A. 
Linares-Barranco, A SpiNNaker application: design, implementation and validation of SCPGs, in: International Work-Conference on Artificial Neural Networks, Springer, pp. 548–559. * Furber et al. [2014] S. B. Furber, F. Galluppi, S. Temple, L. A. Plana, The spinnaker project, Proceedings of the IEEE 102 (2014) 652–665. * Gutierrez-Galan et al. [2020] D. Gutierrez-Galan, J. P. Dominguez-Morales, F. Perez-Peña, A. Jimenez-Fernandez, A. Linares-Barranco, NeuroPod: a real-time neuromorphic spiking CPG applied to robotics, Neurocomputing 381 (2020) 10–19. * Polykretis et al. [2020] I. Polykretis, G. Tang, K. P. Michmizos, An astrocyte-modulated neuromorphic central pattern generator for hexapod robot locomotion on intel’s loihi, in: International Conference on Neuromorphic Systems 2020, pp. 1–9. * Davies et al. [2018] M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, S. H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain, et al., Loihi: A neuromorphic manycore processor with on-chip learning, IEEE Micro 38 (2018) 82–99. * Strohmer et al. [2020] B. Strohmer, P. Manoonpong, L. B. Larsen, Flexible spiking cpgs for online manipulation during hexapod walking, Frontiers in neurorobotics 14 (2020) 41. * Sartoretti et al. [2018] G. Sartoretti, S. Shaw, K. Lam, N. Fan, M. Travers, H. Choset, Central pattern generator with inertial feedback for stable locomotion and climbing in unstructured terrain, in: 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE, pp. 1–5. * Spaeth et al. [2020] A. Spaeth, M. Tebyani, D. Haussler, M. Teodorescu, Neuromorphic closed-loop control of a flexible modular robot by a simulated spiking central pattern generator, in: 2020 3rd IEEE International Conference on Soft Robotics (RoboSoft), IEEE, pp. 46–51. * Gutierrez-Galan et al. [2019] D. Gutierrez-Galan, J. P. Dominguez-Morales, F. Perez-Pena, A. Jimenez-Fernandez, A. 
Linares-Barranco, Live Demonstration: Neuromorphic Robotics, from Audio to Locomotion Through Spiking CPG on SpiNNaker, in: 2019 IEEE International Symposium on Circuits and Systems (ISCAS), IEEE, pp. 1–1. * Harris-Warrick [2011] R. M. Harris-Warrick, Neuromodulation and flexibility in central pattern generator networks, Current opinion in neurobiology 21 (2011) 685–692. * Lele et al. [2020] A. S. Lele, Y. Fang, J. Ting, A. Raychowdhury, Learning to walk: Spike based reinforcement learning for hexapod robot central pattern generation, in: 2020 2nd IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), IEEE, pp. 208–212. * Li et al. [2013] C. Li, R. Lowe, T. Ziemke, Humanoids learning to walk: a natural CPG-actor-critic architecture, Frontiers in neurorobotics 7 (2013) 5. * Ting et al. [2020] J. Ting, Y. Fang, A. S. Lele, A. Raychowdhury, Bio-inspired gait imitation of hexapod robot using event-based vision sensor and spiking neural network, arXiv preprint arXiv:2004.05450 (2020). * Stimberg et al. [2019] M. Stimberg, R. Brette, D. F. Goodman, Brian 2, an intuitive and efficient neural simulator, Elife 8 (2019) e47314. * Carnevale and Hines [2006] N. T. Carnevale, M. L. Hines, The NEURON book, Cambridge University Press, 2006. * Davison et al. [2009] A. P. Davison, D. Brüderle, J. M. Eppler, J. Kremkow, E. Muller, D. Pecevski, L. Perrinet, P. Yger, PyNN: a common interface for neuronal network simulators, Frontiers in neuroinformatics 2 (2009) 11. * Furber et al. [2012] S. B. Furber, D. R. Lester, L. A. Plana, J. D. Garside, E. Painkras, S. Temple, A. D. Brown, Overview of the spinnaker system architecture, IEEE Transactions on Computers 62 (2012) 2454–2467. * Furber and Bogdan [2020] S. Furber, P. Bogdan, SpiNNaker: A Spiking Neural Network Architecture, Boston-Delft: now publishers, 2020. * Markram [2012] H. Markram, The human brain project, Scientific American 306 (2012) 50–55. * Painkras et al. [2013] E. Painkras, L. A. Plana, J. 
Garside, S. Temple, F. Galluppi, C. Patterson, D. R. Lester, A. D. Brown, S. B. Furber, SpiNNaker: A 1-W 18-core system-on-chip for massively-parallel neural network simulation, IEEE Journal of Solid-State Circuits 48 (2013) 1943–1953. * Plana et al. [2007] L. A. Plana, S. B. Furber, S. Temple, M. Khan, Y. Shi, J. Wu, S. Yang, A GALS infrastructure for a massively parallel multiprocessor, IEEE Design & Test of Computers 24 (2007). * Mahowald [1992] M. Mahowald, VLSI analogs of neuronal visual processing: a synthesis of form and function, Ph.D. thesis, California Institute of Technology Pasadena, 1992. * Plana et al. [2020] L. A. Plana, J. Garside, J. Heathcote, J. Pepper, S. Temple, S. Davidson, M. Luján, S. Furber, spiNNlink: FPGA-Based Interconnect for the Million-Core SpiNNaker System, IEEE Access 8 (2020) 84918–84928. * Yousefzadeh et al. [2017] A. Yousefzadeh, M. Jabłoński, T. Iakymchuk, A. Linares-Barranco, A. Rosado, L. A. Plana, S. Temple, T. Serrano-Gotarredona, S. B. Furber, B. Linares-Barranco, On multiple AER handshaking channels over high-speed bit-serial bidirectional LVDS links with flow-control and clock-correction on commercial FPGAs for scalable neuromorphic systems, IEEE transactions on biomedical circuits and systems 11 (2017) 1133–1147. * Dominguez-Morales et al. [2016] J. P. Dominguez-Morales, A. Jimenez-Fernandez, A. Rios-Navarro, E. Cerezuela-Escudero, D. Gutierrez-Galan, M. J. Dominguez-Morales, G. Jimenez-Moreno, Multilayer spiking neural network for audio samples classification using spinnaker, in: International conference on artificial neural networks, Springer, pp. 45–53. * Schoepe et al. [2019] T. Schoepe, D. Gutierrez-Galan, J. P. Dominguez-Morales, A. Jimenez-Fernandez, A. Linares-Barranco, E. Chicca, Neuromorphic sensory integration for combining sound source localization and collision avoidance, in: 2019 IEEE Biomedical Circuits and Systems Conference (BioCAS), IEEE, pp. 1–4. * Rhodes et al. [2018] O. Rhodes, P. A. Bogdan, C. 
Brenninkmeijer, S. Davidson, D. Fellows, A. Gait, D. R. Lester, M. Mikaitis, L. A. Plana, A. G. Rowley, et al., sPyNNaker: a software package for running PyNN simulations on SpiNNaker, Frontiers in neuroscience 12 (2018) 816. * Gerstner and Kistler [2002] W. Gerstner, W. Kistler, Spiking Neuron Models. Single Neurons, Populations, Plasticity, Cambridge University Press, 2002. * Tripathy et al. [2014] S. J. Tripathy, J. Savitskaya, S. D. Burton, N. N. Urban, R. C. Gerkin, NeuroElectro: a window to the world's neuron electrophysiology data, Frontiers in neuroinformatics 8 (2014) 40.
# Design and Analysis of Wideband In-Band-Full-Duplex FR2-IAB Networks Junkai Zhang, , Haifeng Luo, Navneet Garg, Abhijeet Bishnu, Mark Holm, and Tharmalingam Ratnarajah Manuscript received March 30, 2021; revised August 26, 2021, and November 3, 2021; accepted November 4, 2021. The work of J. Zhang, H. Luo, N. Garg, and A. Bishnu was supported by the research grant from Huawei Technologies (Sweden) AB. The work of T. Ratnarajah was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) under Grant EP/P009549/1. The associate editor coordinating the review of this article and approving it for publication was Trung Q. Duong. (Corresponding author: Junkai Zhang.)J. Zhang, H. Luo, N. Garg, A. Bishnu and T. Ratnarajah are with Institute for Digital Communications, The University of Edinburgh, Edinburgh, EH9 3FG, UK (e-mail: {jzhang15, hluo2, ngarg, abishnu, T. Ratnarajah}@ed.ac.uk.)M. Holm is with Radio Basestation Systems Department, Huawei Technologies (Sweden) AB, Gothenburg, Sweden. (e-mail: mark.holm@huawei.com). ###### Abstract This paper develops a 3GPP-inspired design for the in-band-full-duplex (IBFD) integrated access and backhaul (IAB) networks in the frequency range 2 (FR2) band, which can enhance the spectral efficiency (SE) and coverage while reducing the latency. However, the self-interference (SI), which is usually more than 100 dB higher than the signal-of-interest, becomes the major bottleneck in developing these IBFD networks. We design and analyze a subarray-based hybrid beamforming IBFD-IAB system with the RF beamformers obtained via RF codebooks given by a modified Linde-Buzo-Gray (LBG) algorithm. The SI is canceled in three stages, where the first stage of antenna isolation is assumed to be successfully deployed. The second stage consists of the optical domain (OD)-based RF cancellation, where cancelers are connected with the RF chain pairs. 
The third stage comprises the digital cancellation via successive interference cancellation followed by a minimum mean-squared error baseband receiver. Multiuser interference in the access link is canceled by zero-forcing at the IAB-node transmitter. Simulations show that under 400 MHz bandwidth, our proposed OD-based RF cancellation can achieve around 25 dB of cancellation with 100 taps. Moreover, the higher the hardware impairment and channel estimation error, the worse the digital cancellation that can be obtained.

###### Index Terms: Wideband in-band-full-duplex millimeter wave (FR2 band), subarray hybrid beamforming, integrated access and backhaul, codebook design, self-interference cancellation.

## I Introduction

Frequency range 2 (FR2) band (i.e., millimeter wave) communications have been identified as the key technology for beyond fifth-generation (5G) wireless communications, providing much larger bandwidth, narrower beams, and high data rate services. Different from the FR1 band ($\leq 7.225$ GHz), in the FR2 band ($\geq 24.250$ GHz), high path loss and blockages become the major obstacles to broader coverage. However, the short wavelengths at the FR2 frequencies facilitate the deployment of large-scale antenna arrays, which can compensate for such high losses with highly directional narrow beamforming and provide reliable transmission quality [1, 2]. In the recent 3rd Generation Partnership Project (3GPP) technical report TR 38.874 (Rel. 16) [3], integrated access and backhaul (IAB) networks have been proposed for FR2 band communications, where only IAB donors connect with the core network by fiber. IAB-nodes can communicate wirelessly over both the access and the backhaul links as well as perform IAB-specific tasks such as resource allocation, route selection, and optimization [4]. This novel architecture enables cheap and dense deployment while extending the coverage in FR2 bands.
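As a toy illustration of the zero-forcing idea mentioned above for the access link, for a square (here 2x2) multiuser channel the ZF precoder is simply the channel inverse, so each user receives only its own stream. Power normalization is omitted, and this sketch does not capture the paper's full subarray hybrid design.

```python
def zf_precoder_2x2(h):
    """Zero-forcing precoder W = H^{-1} for a 2x2 channel matrix h
    (given as a list of rows); then H @ W = I and inter-user
    interference vanishes."""
    (a, b), (c, d) = h
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul_2x2(x, y):
    # Helper: 2x2 matrix product, used to verify H @ W = I.
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Applying the precoder to any invertible channel yields the identity as the effective end-to-end channel.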
Despite the visible advantages of this architecture, the study of IAB networks is still in its infancy. In-band-full-duplex (IBFD) transmission, which has been treated as another breakthrough for beyond 5G wireless communications, breaks the rule that downlink and uplink communications should occur in different time/frequency slots. In IAB networks, IAB-nodes are preferred to run in the IBFD mode [5]. Compared with half-duplex (HD), thanks to simultaneous transmissions, the IBFD mode can almost double the spectral efficiency (SE) without the need for the large guard time/band arranged in standard time-division duplex and frequency-division duplex systems [6, 7]. However, the major obstacle of IBFD communications is the existence of strong self-interference (SI), which is usually more than 100 dB stronger than the signal of interest [8]. Therefore, finding efficient SI cancellation (SIC) techniques is important for IBFD operation and has recently been a popular research topic. Through hardware prototype measurements at 28 GHz, the authors in [9] evaluate the framework's link-level SI reduction in the propagation domain and system-level performance to verify the feasibility of IBFD-IAB systems; however, large-scale antenna arrays and hybrid precoding were not considered. For wideband IBFD-FR2 communications, we propose a three-stage SIC, which consists of the antenna isolation stage (i.e., isolating the transceiver antennas electromagnetically for passive cancellation) [10], the analog cancellation (A-SIC) stage (i.e., establishing a circuit canceler between each transceiver pair to replicate the SI channel as accurately as possible) [11], and the digital cancellation (D-SIC) stage (i.e., handling the residual SI (RSI) left by previous stages by designing efficient beamformers) [8, 12, 13]. In the A-SIC, the conventional micro-strip analog canceler requires a huge number of taps for wideband SIC.
However, due to the large insertion losses and the difficulty of realizing hundreds of taps, such wideband SIC becomes infeasible in practice. Besides, it is challenging for the micro-strip analog canceler to be directly extended to FR2 band scenarios due to hardware limitations (i.e., RF components usually do not have such processing properties in the FR2 band). Thus, a hardware-efficient optical domain (OD)-based analog canceler has been investigated in [14] for the single-antenna system. However, OD-based A-SIC for multi-antenna systems or IAB networks is lacking in the literature. Due to the use of large-scale array systems, the traditional fully digital beamforming scheme of the FR1 band becomes expensive to implement in the FR2 band. Thus, to meet the need for cost-effective system design, hybrid beamforming has become a powerful and economical tool in large-scale array systems, which reduces the required number of RF chains and simplifies the system complexity [15]. Based on an extension of the standard Orthogonal Matching Pursuit (OMP) algorithm, a novel hybrid beamforming design was proposed in [16]. Compared with the fully connected hybrid beamforming structure [2], to reduce the deployment cost while guaranteeing similar system performance, the authors in [1, 17], and [18] develop a subarray hybrid beamforming structure, where one RF chain only connects with a portion of the antenna arrays. However, wideband IBFD multi-user IAB networks with subarray hybrid beamforming in the FR2 band still need more investigation. The hybrid beamforming design algorithms in [1, 2, 15, 16, 17] need access to the large and sparse channel matrix, which is hard to acquire in reality. Although compressed sensing-based channel estimation approaches are presented in [19], they are difficult to realize in practical scenarios.
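The tap count discussion above can be illustrated with a toy tapped-delay-line model: a canceler that exactly replicates the first $K$ taps of the SI impulse response removes that portion of the SI energy, and the achieved cancellation grows with $K$. The exponentially decaying taps below are synthetic and real-valued for brevity; the paper's 25 dB with 100 taps over 400 MHz is a simulated result of the OD-based canceler, not reproduced here.

```python
import math

def sic_db(si_taps, n_canceler_taps):
    """Cancellation (dB) when an idealized tapped-delay-line canceler
    matches the first n_canceler_taps taps of the SI impulse response;
    the residual SI power is that of the uncancelled tail."""
    p_si = sum(h * h for h in si_taps)
    p_res = sum(h * h for h in si_taps[n_canceler_taps:])
    return 10.0 * math.log10(p_si / p_res)

# Synthetic exponentially decaying SI impulse response (illustrative).
si = [0.9 ** k for k in range(300)]
```

More canceler taps monotonically increase the cancellation under this idealized model.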
Instead, the RF effective channel is estimated using standard estimation methods in practice, where the RF precoding and combining matrices are selected from pre-defined codebooks. In [2], the RF codebook is designed by a Lloyd-type algorithm. A $K$-means-based beam codebook is proposed by Mo et al., whose codewords are defined by maximizing the beamforming gain [20]. Unfortunately, their vector-wise codebooks may lead to a low-rank beamforming matrix, which directly amounts to a loss in the degrees of freedom, especially when the number of RF chains is more than one. Further, the hardware impairments (HWI), which take into account imperfections in the hardware, such as oscillator noise, amplifier noise, non-linearities in the digital-to-analog converters (DACs) and the analog-to-digital converters (ADCs), etc., have not been considered in most of the studies yet. The authors in [21] have mentioned that the independent Gaussian model can optimally capture those combined non-ideal hardware effects. Based on the above motivations, in this paper, we investigate the design and analysis of multiuser FR2-IBFD-IAB networks with subarray-based hybrid beamforming. The contributions of this work are given as follows: * • RF codebook design and RF effective channel estimation: For the subarray hybrid beamforming scheme, the RF precoders and combiners are selected by scanning the matrix-wise codebooks, designed with our modified mean squared error (MSE)-based Linde-Buzo-Gray (LBG) algorithm, and the RF effective channel can then be estimated with standard estimation methods. Simulations show that, with the proposed codebooks, we can achieve a similar SE to that with infinite-resolution phase shifters (PSs) without suffering from low-rank quantized beamforming matrices.
* • Staged SIC: We propose a staged SIC scheme in this paper, where the A-SIC is realized by the OD-based canceler connected to the RF chain pairs on the IAB-node to reduce space and cost. Compared with the conventional micro-strip analog canceler, our canceler can provide a significant number of true delay lines for wideband operation and exhibits good frequency flatness. Simulations show that with our OD-based canceler, 25 dB of A-SIC can be achieved with about 100 taps over 400 MHz bandwidth. * • System Analysis with RSI: In order to explore how the RSI caused by the HWI and RF effective channel uncertainties can affect the performance of the IBFD system, we analyze the SE of the backhaul link by varying the HWI factors and SI RF effective channel estimation errors. Simulation results show that as SNR increases, the system becomes more vulnerable to the RSI; however, the tolerance improves as the codebook size increases. It is also shown that at lower RSI values, IBFD operation doubles the SE compared to that of the HD. The rest of the paper is organized as follows. In Section II, the system model and channel models are identified, followed by introducing the OD-based analog canceler design in Section III. Then, the modified LBG algorithm for the RF codebook design is proposed in Section IV, where RF effective channels are estimated with selected RF beamformer pairs. Next, D-SIC is processed in Section V. In Section VI, the SE expressions are evaluated, followed by the design of BB beamformers for both backhaul and access links. Finally, some simulation results and a brief conclusion are shown in Section VII and Section VIII, respectively. Notations: $\mathcal{B},\mathbf{B},\mathbf{b}$, $b$ represent a set, a matrix, a vector, and a scalar, respectively. $\mathbf{B}^{H},\mathbf{B}^{-1}$, and $\mathbf{B}^{T}$ are the Hermitian, inverse, and transpose of $\mathbf{B}$, respectively. $|\mathcal{B}|$ is the cardinality of $\mathcal{B}$.
$\lVert\mathbf{B}\rVert_{F}$, $|\mathbf{B}|_{mn}$, $\mathrm{det}\\{\mathbf{B}\\}$, and $\mathrm{tr}[\mathbf{B}]$ are the Frobenius norm, absolute value of the ($m,n$)th entry, determinant, and trace of $\mathbf{B}$, respectively. $\lVert\mathbf{b}\rVert_{2}$ is the L2-norm of $\mathbf{b}$. $\lVert b\rVert$ is the norm of $b$. $\arg(\mathbf{B})$ takes the angle of each entry of $\mathbf{B}$. $\mathrm{diag}[\mathbf{B}]$ takes the diagonal elements of the matrix. $\mathrm{blkdiag}[\mathbf{B}_{1},\mathbf{B}_{2}]$ is the block diagonal matrix formed by matrices $\mathbf{B}_{1}$ and $\mathbf{B}_{2}$. $[\mathbf{B}]_{:,1:n}$ and $[\mathbf{B}]_{m,n}$ denote the first $n$ columns and the $(m,n)$th entry of $\mathbf{B}$, respectively. $\mathrm{Cov}[\mathbf{B}]$ is the covariance matrix, i.e., $\mathbb{E}\\{\mathbf{B}\mathbf{B}^{H}\\}$. $\odot$ indicates the Hadamard product. $d(\cdot,\cdot)$ is the distance measurement. $\mathcal{CN}(m,n)$ denotes a complex Gaussian distribution with mean value of $m$ and variance $n$, and $\mathbf{I}_{K}$ is the $K\times K$ identity matrix. ## II System and Channel Models ### II-A System Model In this subsection, the system model is described for the wideband FR2-IBFD-IAB multiuser networks. According to the technical specifications–TR 38.874 (Rel. 16) provided by the 3GPP, standalone (SA) and non-standalone (NSA) are two typical deployments considered for IAB networks [3]. In this work, we consider the downlink of a single-cell FR2-IBFD-IAB multiuser network with SA deployment111The reason why the SA structure is considered in this work is that the NSA architecture permits IAB-nodes and UEs to communicate with both 4G base stations (i.e., eNBs) as well as 5G base stations (i.e., gNBs); however, SA only allows connections with 5G base stations, which is envisioned for future wireless communication network environments.
With minor modifications, the present design and analysis can be used for NSA as well., which consists of the following parts: * • an IAB donor, also called gNB, which is a single logical node and acts as the base station; * • an IBFD-IAB-node, which contributes SI from its transmitter to its receiver; * • $U$ downlink user-equipments (UEs). The IAB donor connects to the 5G next-generation core (NGC) network by fiber and communicates with the IAB-node through a wireless backhaul link. The IAB-node serves the users by wireless access links. Note that, in this work, the IAB donor only provides backhaul link service. An illustration of this IBFD-IAB multiuser network used in this work is depicted in Fig. 1, and more information about the 3GPP architecture can be found in our recent work [4]. Figure 1: Illustration of a single-cell IBFD-IAB multiuser network under the SA deployment. The IAB donor and IAB-node are equipped with the subarray-based hybrid beamforming structure [17], where each RF chain only connects with a portion of antenna elements. Compared with the fully connected structure [17], the subarray structure provides a cost-efficient solution for connecting RF chains to the antenna arrays. The number of subarrays (RF chains) at the IAB donor and IAB-node is assumed to be the same as the number of devices at the UEs node, i.e., $U$. Meanwhile, the number of data streams transmitted from the IAB donor and IAB-node is assumed to be $U$ as well. However, since each user is assumed to have one RF chain receiving one data stream, only analog beamforming is required. For the FR2 band, the Orthogonal Frequency Division Multiplexing (OFDM) system is adopted, where we assume i) the length of the data block is the same as the number of subcarriers, i.e., ${K}$; ii) the RF beamformers are frequency-flat and the same for all subcarriers. In contrast, the baseband (BB) beamformers are different for different subcarriers [2].
The beamforming structure for this wideband FR2-IBFD-IAB network is shown in Fig. 2. At the IAB donor, $U$ data streams are transmitted through $U$ RF chains and ${N_{T}}$ transmit antenna arrays. The total number of antenna arrays is equally divided into $U$ subarrays, each with one RF chain. Hence, the transmitted signal at the $k$th subcarrier, $k=1,2,\dotsc,{K}$, from the IAB donor is given by $\mathbf{x}_{\mathrm{D}}[k]=\mathbf{F}_{{\rm RF}\mathrm{D}}\left(\underbrace{\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{s}_{\mathrm{D}}[k]}_{\widetilde{\mathbf{x}}_{\mathrm{D}}[k]}+\mathbf{e}_{\mathrm{D}}[k]\right),$ (1) where $\mathbf{F}_{{\rm RF}\mathrm{D}}=\mathrm{blkdiag}\left[\mathbf{f}_{{\rm RF}\mathrm{D},1},\mathbf{f}_{{\rm RF}\mathrm{D},2},\ldots,\mathbf{f}_{{\rm RF}\mathrm{D},U}\right]\in\mathbb{C}^{N_{T}\times U}$ is the block diagonal RF precoder matrix with $\mathbf{f}_{{\rm RF}\mathrm{D},u}\in\mathbb{C}^{\frac{N_{T}}{U}\times 1},\forall u\in\\{1,2,\ldots,U\\}$ representing the RF precoder vector of the $u$th subarray. $\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\in\mathbb{C}^{U\times U}$ represents the BB precoder matrix. The transmit data vector $\mathbf{s}_{\mathrm{D}}[k]\in\mathbb{C}^{U\times 1}$ at the subcarrier $k$ has the covariance matrix of $\mathbb{E}\left\\{\mathbf{s}_{\mathrm{D}}[k]\mathbf{s}^{H}_{\mathrm{D}}[k]\right\\}=\frac{P_{t}}{KU}\mathbf{I}_{U}$, where $P_{t}$ is the average total transmit power across all subcarriers. By applying the transmit power constraint with equal power allocation, we get the constraint on the precoder as $\lVert\mathbf{F}_{{\rm RF}\mathrm{D}}\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\rVert_{F}^{2}=U$ for all subcarriers.
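To make the transmit model in (1) concrete, the sketch below builds a block-diagonal RF precoder and an HWI-distorted transmit signal for one subcarrier. The array sizes, the identity BB precoder, and the HWI factor $\rho$ are illustrative assumptions, not the paper's simulation values.

```python
import numpy as np

rng = np.random.default_rng(0)
N_T, U = 16, 4              # transmit antennas and RF chains (assumed sizes)
Ns = N_T // U               # antennas per subarray

# Block-diagonal RF precoder: unit-modulus phase-shifter weights per subarray.
F_RF = np.zeros((N_T, U), dtype=complex)
for u in range(U):
    phases = rng.uniform(0, 2 * np.pi, Ns)
    F_RF[u * Ns:(u + 1) * Ns, u] = np.exp(1j * phases) / np.sqrt(Ns)

F_BB = np.eye(U, dtype=complex)                       # identity BB precoder
s = (rng.standard_normal(U) + 1j * rng.standard_normal(U)) / np.sqrt(2)
x_tilde = F_BB @ s

# Transmitter HWI: zero-mean Gaussian noise whose per-entry variance is
# rho times the per-entry power of the distorted signal (rho << 1).
rho = 1e-2
std = np.sqrt(rho * np.abs(x_tilde) ** 2 / 2)
e = std * (rng.standard_normal(U) + 1j * rng.standard_normal(U))

x_D = F_RF @ (x_tilde + e)                            # transmit signal, eq. (1)
print(x_D.shape)
```

Note that each column of `F_RF` touches only its own subarray's rows, which is exactly the $\mathrm{blkdiag}$ structure that later permits one RF chain per subarray.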
The vector $\mathbf{e}_{\mathrm{D}}[k]\in\mathbb{C}^{U\times 1}\sim\mathcal{CN}\left(\mathbf{0},\rho\mathrm{diag}\left[\mathrm{Cov}\left[\widetilde{\mathbf{x}}_{\mathrm{D}}[k]\right]\right]\right)$ captures the transmitter HWI at the IAB donor with $\rho\ll 1$, where the transmitter HWI is uncorrelated with the transmit signal. At the IBFD-IAB-node, separate antennas are configured for transmission and reception (i.e., there are ${n}_{{T}}$ transmit antenna arrays and $U$ RF chains for transmitting to the UEs node; and ${n}_{{R}}$ antenna arrays with $U$ RF chains for receiving data from the IAB donor). Similarly, the subarray structure divides those antenna arrays into $U$ equal panels, each with one RF chain. Without SIC, the decoded signal at the IAB-node for subcarrier $k$ is expressed in (2) as $\displaystyle\mathbf{y}_{\mathrm{N}}[k]=\mathbf{W}_{{\rm BB}\mathrm{N}}^{H}[k]\left[\underbrace{\mathbf{W}_{{\rm RF}\mathrm{N}}^{H}\left(\mathbf{H}_{\mathrm{ND}}[k]\mathbf{x}_{\mathrm{D}}[k]+\mathbf{H}_{\mathrm{SI}}[k]\mathbf{x}_{\mathrm{N}}[k]+\mathbf{z}_{\mathrm{N}}[k]\right)}_{\widetilde{\mathbf{y}}_{\mathrm{N}}[k]}+\mathbf{g}_{\mathrm{N}}[k]\right]$ (2) where $\mathbf{W}_{{\rm RF}\mathrm{N}}=\mathrm{blkdiag}\left[\mathbf{w}_{{\rm RF}\mathrm{N},1},\mathbf{w}_{{\rm RF}\mathrm{N},2},\ldots,\mathbf{w}_{{\rm RF}\mathrm{N},U}\right]\in\mathbb{C}^{{n}_{R}\times U}$ represents the RF combiner matrix with $\mathbf{w}_{{\rm RF}\mathrm{N},u}\in\mathbb{C}^{\frac{n_{R}}{U}\times 1},\forall u\in\\{1,2,\ldots,U\\}$ denoting the RF combiner vector of subarray $u$. $\mathbf{W}_{{\rm BB}\mathrm{N}}[k]\in\mathbb{C}^{U\times U}$ is the BB combiner matrix. $\mathbf{H}_{\mathrm{ND}}[k]\in\mathbb{C}^{{n}_{{R}}\times{N}_{{T}}}$ and $\mathbf{H}_{\mathrm{SI}}[k]\in\mathbb{C}^{{n}_{{R}}\times{n}_{{T}}}$ are the ideal backhaul channel matrix and SI channel matrix at the subcarrier $k$, respectively.
$\mathbf{z}_{\mathrm{N}}[k]\in\mathbb{C}^{n_{R}\times 1}\sim\mathcal{CN}(\mathbf{0},\sigma^{2}_{\mathrm{N}}\mathbf{I}_{n_{R}})$ is the circularly symmetric Gaussian noise. The vector $\mathbf{g}_{\mathrm{N}}[k]\in\mathbb{C}^{U\times 1}\sim\mathcal{CN}(\mathbf{0},\beta\mathrm{diag}[\mathrm{Cov}[\widetilde{\mathbf{y}}_{\mathrm{N}}[k]]])$ accounts for the receiver HWI at the IAB-node, which is uncorrelated with the received signal, with $\beta\ll 1$. Figure 2: Illustration of a wideband FR2-IBFD-IAB multiuser system with subarray hybrid beamforming. The vector $\mathbf{x}_{\mathrm{N}}[k]$ in (2) denotes the signal transmitted from the IAB-node at the $k$th subcarrier, given as $\mathbf{x}_{\mathrm{N}}[k]=\mathbf{F}_{{\rm RF}\mathrm{N}}\left(\underbrace{\mathbf{F}_{{\rm BB}\mathrm{N}}[k]\mathbf{s}_{\mathrm{N}}[k]}_{\widetilde{\mathbf{x}}_{\mathrm{N}}[k]}+\mathbf{e}_{\mathrm{N}}[k]\right),$ (3) where $\mathbf{F}_{{\rm RF}\mathrm{N}}=\mathrm{blkdiag}\left[\mathbf{f}_{{\rm RF}\mathrm{N},1},\mathbf{f}_{{\rm RF}\mathrm{N},2},\ldots,\mathbf{f}_{{\rm RF}\mathrm{N},U}\right]\in\mathbb{C}^{n_{T}\times U}$ is the RF precoder matrix with $\mathbf{f}_{{\rm RF}\mathrm{N},u}\in\mathbb{C}^{\frac{n_{T}}{U}\times 1},\forall u\in\\{1,2,\ldots,U\\}$ denoting the RF precoder vector of the $u$th subarray. $\mathbf{F}_{{\rm BB}\mathrm{N}}[k]=\left[\mathbf{f}_{{\rm BB}\mathrm{N},1}[k],\mathbf{f}_{{\rm BB}\mathrm{N},2}[k],\ldots,\mathbf{f}_{{\rm BB}\mathrm{N},U}[k]\right]\in\mathbb{C}^{U\times U}$ represents the BB precoder matrix with $\mathbf{f}_{{\rm BB}\mathrm{N},u}[k]\in\mathbb{C}^{U\times 1},\forall u\in\\{1,2,\ldots,U\\}$. $\mathbf{s}_{\mathrm{N}}[k]\in\mathbb{C}^{U\times 1}$ is the transmit data vector with covariance matrix of $\mathbb{E}\left\\{\mathbf{s}_{\mathrm{N}}[k]\mathbf{s}^{H}_{\mathrm{N}}[k]\right\\}=\frac{P_{t}}{KU}\mathbf{I}_{U}$ and is uncorrelated with $\mathbf{s}_{\mathrm{D}}[k]$.
The vector $\mathbf{e}_{\mathrm{N}}[k]\in\mathbb{C}^{U\times 1}\sim\mathcal{CN}\left(\mathbf{0},\rho\mathrm{diag}\left[\mathrm{Cov}\left[\widetilde{\mathbf{x}}_{\mathrm{N}}[k]\right]\right]\right)$ denotes the transmitter HWI at the IAB node, which is uncorrelated with the transmit signal. In addition, for all subcarriers, the precoder per subarray has to satisfy the constraint of $\lVert\mathbf{F}_{{\rm RF}\mathrm{N}}\mathbf{f}_{{\rm BB}\mathrm{N},u}[k]\rVert_{F}^{2}=1,\forall u\in\\{1,2,\ldots,U\\}$ for sending the data stream to the $u$th UE. At the UEs node, there are $U$ devices, each equipped with $N_{R}$ receive antennas and a single RF chain. Thus, the received signal at all UEs can be jointly written as $\mathbf{y}_{\mathrm{E}}[k]=\underbrace{\mathbf{W}_{{\rm RF}\mathrm{E}}^{H}\left(\mathbf{H}_{\mathrm{EN}}[k]\mathbf{x}_{\mathrm{N}}[k]+\mathbf{z}_{\mathrm{E}}[k]\right)}_{\widetilde{\mathbf{y}}_{\mathrm{E}}[k]}+\mathbf{g}_{\mathrm{E}}[k],$ (4) where $\mathbf{y}_{\mathrm{E}}[k]=\left[y_{\mathrm{E},1}[k],y_{\mathrm{E},2}[k],\ldots,y_{\mathrm{E},U}[k]\right]^{T}\in\mathbb{C}^{U\times 1}$ with $y_{\mathrm{E},u}[k],\forall u\in\\{1,2,\ldots,U\\}$ denoting the decoded signal at the $u$th UE. $\mathbf{W}_{{\rm RF}\mathrm{E}}=\mathrm{blkdiag}\left[\mathbf{w}_{{\rm RF}\mathrm{E},1},\mathbf{w}_{{\rm RF}\mathrm{E},2},\ldots,\mathbf{w}_{{\rm RF}\mathrm{E},U}\right]\in\mathbb{C}^{UN_{R}\times U}$ is the RF combiner matrix with $\mathbf{w}_{{\rm RF}\mathrm{E},u}\in\mathbb{C}^{{N_{R}}\times 1},\forall u\in\\{1,2,\ldots,U\\}$ being the RF combiner vector of the $u$th UE. $\mathbf{H}_{\mathrm{EN}}[k]=\left[\mathbf{H}_{\mathrm{EN},1}^{T}[k],\mathbf{H}_{\mathrm{EN},2}^{T}[k],\ldots,\mathbf{H}_{\mathrm{EN},U}^{T}[k]\right]^{T}\in\mathbb{C}^{UN_{R}\times{n_{T}}}$ is the ideal access link channel matrix, where $\mathbf{H}_{\mathrm{EN},u}[k]\in\mathbb{C}^{{N}_{R}\times{n}_{T}}$ represents the access link channel matrix from the IAB-node to the $u$th UE.
$\mathbf{z}_{\mathrm{E}}[k]=\left[\mathbf{z}_{\mathrm{E},1}^{T}[k],\mathbf{z}_{\mathrm{E},2}^{T}[k],\ldots,\mathbf{z}_{\mathrm{E},U}^{T}[k]\right]^{T}\in\mathbb{C}^{{UN}_{R}\times 1}\sim\mathcal{CN}(\mathbf{0},\sigma^{2}_{\mathrm{E}}\mathbf{I}_{UN_{R}})$ is the Gaussian noise vector with $\mathbf{z}_{\mathrm{E},u}[k]\in\mathbb{C}^{N_{R}\times 1},\forall u\in\\{1,2,\ldots,U\\}$ being the Gaussian noise vector at the $u$th UE. The receiver HWI vector $\mathbf{g}_{\mathrm{E}}[k]=[g_{\mathrm{E},1}[k],g_{\mathrm{E},2}[k],\ldots,$ $g_{\mathrm{E},U}[k]]^{T}\in\mathbb{C}^{U\times 1}\sim\mathcal{CN}(\mathbf{0},\beta\mathrm{diag}[\mathrm{Cov}[\widetilde{\mathbf{y}}_{\mathrm{E}}[k]]])$ with $g_{\mathrm{E},u}[k],\forall u\in\\{1,2,\ldots,U\\}$ denoting the receiver HWI at the $u$th UE, which is uncorrelated with the received signal. ### II-B General Channel For the wideband FR2 communications with the OFDM system, a cyclic prefix of length $D$ is added to each OFDM symbol, which is equal to the number of delay taps for the wideband channel. Due to the scattering effect, the FR2 signals are likely to arrive in ${N_{C}}$ clusters, with ${N_{L}}$ paths reflected by different obstacles in each cluster. A raised-cosine pulse shaping filter $p(dT_{s}-\tau_{c,l})$, for $d=0,1,\ldots,D-1$, with $T_{s}$-spaced signaling is utilized, where the delay $\tau_{c,l}$ is defined for the $l$th path in the $c$th cluster [22]. Assuming uniform planar arrays (UPAs) with half-wavelength spaced elements, the transmit and receive steering vectors can be written as $\mathbf{a}_{\mathrm{t}}(\theta_{c,l}^{t},\phi_{c,l}^{t})$ and $\mathbf{a}_{\mathrm{r}}(\theta_{c,l}^{r},\phi_{c,l}^{r})$, respectively, where the azimuth $\theta_{c,l}^{r}/\theta_{c,l}^{t}$ and elevation $\phi_{c,l}^{r}/\phi_{c,l}^{t}$ angles correspond to the angles of arrival/departure (AoAs/AoDs) for each path in their clusters.
Hence, at subcarrier $k$, a typical FR2 channel model between two nodes can be expressed as $\mathbf{H}[k]=\mathbf{A}_{r}\mathbf{\Pi}[k]\mathbf{A}_{t}^{H},$ (5) where $\mathbf{A}_{r}=[\mathbf{a}_{\mathrm{r}}(\theta_{1,1}^{r},\phi_{1,1}^{r}),\ldots,\mathbf{a}_{\mathrm{r}}(\theta_{c,l}^{r},\phi_{c,l}^{r}),\ldots,\mathbf{a}_{\mathrm{r}}(\theta_{{N}_{C},{N}_{L}}^{r},\phi_{{N}_{C},{N}_{L}}^{r})],$ (6) $\mathbf{A}_{t}=[\mathbf{a}_{\mathrm{t}}(\theta_{1,1}^{t},\phi_{1,1}^{t}),\ldots,\mathbf{a}_{\mathrm{t}}(\theta_{c,l}^{t},\phi_{c,l}^{t}),\ldots,\mathbf{a}_{\mathrm{t}}(\theta_{{N}_{C},{N}_{L}}^{t},\phi_{{N}_{C},{N}_{L}}^{t})],$ (7) $\mathbf{\Pi}[k]=\sqrt{\tfrac{{N}_{r}{N}_{t}}{{N}_{C}{N}_{L}\bar{PL}}}\begin{bmatrix}\begin{smallmatrix}\alpha_{1,1}\chi_{1,1}[k]&&\ldots&&0\\\ \vdots&&\ddots&&\vdots\\\ 0&\ldots&\alpha_{c,l}\chi_{c,l}[k]&\ldots&0\\\ \vdots&&\ddots&&\vdots\\\ 0&&\ldots&&\alpha_{{N}_{C},{N}_{L}}\chi_{{N}_{C},{N}_{L}}[k]\end{smallmatrix}\end{bmatrix},$ (8) and $\chi_{c,l}[k]=\sum_{d=0}^{D-1}p(dT_{s}-\tau_{c,l})e^{(-j\frac{2{\pi}kd}{K})}$. ${N}_{t}$ and ${N}_{r}$ denote the number of transmit and receive antennas, respectively. $\alpha_{c,l}$ is the complex gain. $\bar{PL}$ denotes the average path loss due to the high attenuation of the FR2 band channel. The close-in path loss model is adopted rather than the free-space path loss model [23], given as $\displaystyle\bar{PL}=\left(\frac{4\pi r_{0}}{\lambda}\right)^{2}\left(\frac{r}{r_{0}}\right)^{\mu},$ (9) where $r_{0}$, $r$, $\lambda$, and $\mu$ represent the reference distance, distance between the transmitter and receiver, wavelength, and path loss exponent, respectively. Moreover, in arbitrary transmission networks, the line-of-sight (LOS) component has a high probability of being blocked by obstacles; therefore, a non-line-of-sight (NLOS) path loss exponent is preferred.
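As a quick numerical illustration of the close-in model in (9), the sketch below evaluates the path loss for a 28 GHz link. The reference distance, link distance, and NLOS-style path loss exponent are illustrative assumptions, not the paper's simulation settings.

```python
import math

def close_in_path_loss(r, wavelength, r0=1.0, mu=3.2):
    # Linear-scale close-in path loss of eq. (9): free-space loss at the
    # reference distance r0, times (r/r0)^mu. The defaults r0 = 1 m and
    # mu = 3.2 are assumed NLOS-style values.
    return (4 * math.pi * r0 / wavelength) ** 2 * (r / r0) ** mu

lam = 3e8 / 28e9                          # ~10.7 mm wavelength at 28 GHz
pl_db = 10 * math.log10(close_in_path_loss(50.0, lam))
print(f"path loss at 50 m: {pl_db:.1f} dB")
```

The model reduces to free-space loss when $\mu = 2$; larger exponents capture the heavier NLOS attenuation expected in the FR2 band.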
Furthermore, the steering vector is defined as $\mathbf{a}(\theta,\phi)=\tfrac{1}{\sqrt{N}}\big{[}1,a_{1}(\theta,\phi),\dotsc,a_{N-1}(\theta,\phi)]^{T},$ (10) where $a_{n}(\theta,\phi)=e^{j\frac{2\pi}{\lambda}\mathbf{r}_{n}^{T}\mathbf{u}(\theta,\phi)}$; $N$ is the number of antenna arrays in the UPA; $\mathbf{r}_{n}=[x_{n},y_{n},z_{n}]^{T}$ is the coordinate of the $n$th antenna element; $\mathbf{u}(\theta,\phi)=[\cos\theta\cos\phi,\sin\theta\cos\phi,\sin\phi]^{T}$ is the unit-norm direction vector. In this work, the arrays are placed in the XY-plane, and the elevation angles are measured from the XY-plane. Besides, the $z$-axis indicates the array height measured from the UPA plane, which is assumed to be negligible, i.e., $z_{n}\approx 0$. ### II-C Self-Interference Channel The most important issue in the IBFD transmission is the introduction of the SI on the IAB-node. Due to the proximity of the transceiver on the IAB-node, the attenuation of the SI channel is significantly less than that of the typical communication channels, which contributes high-power SI to the backhaul link and degrades its SE. In order to reduce the effect of the SI, a staged SIC scheme will be introduced in later sections. Since the distinct SI channel model for the FR2 band is still unknown, most works have considered hypothetical SI channels for narrowband communications [6, 24]. Fortunately, a hypothetical model is proposed for the wideband SI channel in [25]. According to [24, 25], after some minor modifications, we model the hypothetical wideband SI channel as follows. Unlike the general channel in the previous subsection, the SI channel is likely to be modeled as a Rician-like channel with Rician factor $\kappa$. The LOS part, $\mathbf{H}_{\mathrm{SI,L}}$, follows a near-field model with a spherical wavefront and is assumed to be frequency-flat.
The frequency response of the LOS component is given as $\displaystyle\mathbf{H}_{\mathrm{SI,L}}=\left[\mathbf{a}_{r}(\theta^{r},\phi^{r})\mathbf{a}^{H}_{t}(\theta^{t},\phi^{t})\right]\odot\mathbf{R},$ (11) where only one AoA/AoD is assumed for the LOS link. The entries of $\mathbf{R}$ are $[\mathbf{R}]_{p,q}=\frac{\gamma}{{r}_{pq}}e^{-j2\pi\frac{{r}_{pq}}{\lambda}}$ with ${r}_{pq}$ denoting the distance between the $p$th element of the receive antenna and the $q$th element of the transmit antenna at the IAB-node. $\gamma=\sqrt{n_{R}n_{T}}$ is the normalization factor ensuring that the norm of $\mathbf{H}_{\mathrm{SI,L}}$ remains the same before and after multiplying with the steering vectors. The NLOS part, $\mathbf{H}_{\mathrm{SI,N}}$, is expressed similarly to the general channel model in (5), but with only a few clusters and rays. Consequently, the entire SI channel for subcarrier $k$ can be expressed as $\mathbf{H}_{\mathrm{SI}}[k]=\sqrt{\frac{\kappa}{\kappa+1}}\mathbf{H}_{\mathrm{SI,L}}+\sqrt{\frac{1}{\kappa+1}}\mathbf{H}_{\mathrm{SI,N}}[k].$ (12) ## III Analog Self-Interference Cancellation In this section, the working principle and limiting factors of the conventional A-SIC idea are presented first. Then, the OD-based canceler is described, followed by the implementation details of such canceler design for the FR2-IBFD-IAB networks. In this work, we assume the antenna isolation has already been deployed before A-SIC. ### III-A Working Principle and Limitations A-SIC is essential to avoid receiver saturation. Otherwise, the signal-of-interest cannot be quantized precisely [26, 11]. Active A-SIC is based on a subtraction idea, i.e., a replica of the received SI signal generated by the analog canceler is inserted into the receiver chain to subtract the received SI.
The canceler is made up of a limited number of tunable delay lines to capture the multi-path nature of the SI channel, where passive components are utilized to construct tunable delay lines to minimize the non-linearity effects. With a multi-tap RF canceler, one can cancel the SI from reflection paths in addition to the direct path. By considering the hardware insertion losses, the frequency response of a single multi-tap RF canceler can be given as $h_{\mathrm{can}}[\omega]=\hat{\alpha}\sum_{m=1}^{M}\alpha_{m}\beta_{m}\left(w_{I,m}+jw_{Q,m}\right)e^{-j\omega\tau_{m}},$ (13) where $\hat{\alpha}$ is the attenuation introduced by coupling the RF signal into the canceler; $\alpha_{m}$ is the propagation loss of each delay line; $\beta_{m}$ denotes the tap coupling factor [14, (4)]; $w_{I,m}$ and $w_{Q,m}$ are tunable weights; and $\tau_{m}$ is the delay. The optimal weights are tuned to minimize the difference between the frequency components of the canceler and the SI channel within the band of interest (BoI) (for details, see [11]). Equation (13) suggests that the number of taps $M$ determines the available degrees of freedom for this optimization. The key factor for efficient wideband A-SIC is the realization of a sufficient number of taps (i.e., delay lines) [27]. For wider operational bandwidth, more frequency components need to be optimized, and more degrees of freedom, i.e., taps, are required. Figure 3: Illustration of the OD-based analog canceler. ### III-B OD-Based A-SIC For the conventional canceler, the insertion losses increase with an increasing number of taps (i.e., $\alpha_{m}$ and $\beta_{m}$ are small for large $m$ in (13)), which results in a large difference between the signal power at the first and the later taps. Therefore, the signals coupled into the later taps cannot replicate the desired signal level and degrade the cancellation performance.
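To make the tap model in (13) concrete, the sketch below evaluates a multi-tap canceler response across the band of interest. The tap count, delays, loss factors, and weights are illustrative assumptions; the weight optimization itself is treated later.

```python
import numpy as np

def canceler_response(omega, w_I, w_Q, tau, alpha_hat, alpha, beta):
    # Eq. (13): per-tap complex weight (w_I + j*w_Q), scaled by the tap
    # losses alpha_m * beta_m, delayed by tau_m, summed over the M taps,
    # and attenuated by the coupling factor alpha_hat.
    taps = alpha * beta * (w_I + 1j * w_Q) * np.exp(-1j * np.outer(omega, tau))
    return alpha_hat * taps.sum(axis=1)

M = 8                                                # taps (assumed)
tau = np.arange(1, M + 1) * 1e-9                     # 1 ns tap spacing (assumed)
omega = 2 * np.pi * np.linspace(27.8e9, 28.2e9, 101) # band of interest
h = canceler_response(omega, np.full(M, 0.1), np.zeros(M), tau,
                      alpha_hat=0.9, alpha=np.ones(M), beta=np.ones(M))
print(h.shape)
```

Letting `alpha` and `beta` decay with the tap index would emulate the conventional canceler's growing insertion losses, which is exactly what limits its effective number of taps.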
Conventionally, electrical attenuators and micro-strips or cables can be used for constructing the tunable delay lines. However, it is demonstrated that these electrical components have significant propagation loss and coupling loss that limit the number of effective taps [14], thus limiting the operational bandwidth and cancellation performance. Therefore, to overcome these drawbacks, an OD-based analog canceler has recently been investigated in [14], whose structure is illustrated in Fig. 3. Regarding the OD-based canceler mechanism, the RF reference signal is first converted to the optical domain by modulating onto optical carriers through the Mach–Zehnder modulator (MZM). These optical carriers are generated by tunable lasers according to the grating wavelengths, and the power of these carriers is adjusted by variable optical attenuators (VOAs). Then, $M$ optical carriers are combined by a multiplexer (MUX) for propagating into a single fiber according to the obtained weights. The reference signal modulated on the optical carrier at wavelength $\lambda_{B,m}$ will be reflected at the $m$th grating while propagating through the fiber-Bragg-grating (FBG). These reflections at different gratings cause different time delays in the coupled reference signal. Next, the reflected signals are detected by photo-diodes to remove the optical carriers. Finally, the canceler yields an accumulation of multiple weighted and delayed versions of the input reference signal as the canceler output [14]. The weights are realized by attenuators and can only be real and non-negative; however, the SI channel is complex. Thus, four FBGs are needed to realize the complex response of the canceler. The OD-based canceler can also be described by (13). Compared with the conventional canceler, the OD-based canceler has smaller insertion losses, i.e., $\alpha_{m}$ and $\beta_{m}$ are almost constant with increasing $m$.
Theoretically, almost constant insertion losses in the OD-based canceler allow hundreds of effective taps to be implemented to enlarge the operational bandwidth. ### III-C Proposed OD-Based A-SIC In order to realize the OD-based canceler design in the MIMO system, ${n_{R}}\times{n_{T}}$ cancelers are traditionally required to match the ${n_{R}}\times{n_{T}}$ SI channel matrix, where each canceler is constructed and tuned as described above. However, such a canceler deployment will be extremely costly for the FR2 communications, especially for the OD-based canceler. In order to reduce the cost, we tap off the SI signal from the RF chains before the RF precoder at the IAB-node transmitter and insert the outputs of these analog cancelers back to the RF chains at the IAB-node receiver after the RF combiner (see Fig 2) [28]. With this architecture, the required number of analog cancelers can be reduced from ${n_{R}}\times{n_{T}}$ to $U\times U$, which is of great benefit to the cost and practical implementation. Since a single canceler can be tuned by adjusting the weights to imitate the estimated RF SI channel $\hat{h}_{\mathrm{SI},pq}[\omega]$ between the $p$th transmitter’s RF chain to the $q$th receiver’s RF chain, where $p,q\in\\{1,2,\ldots,U\\}$, the following optimization problem will need to be run for each canceler established between the $pq$th RF chain pair over the BoI, which is cast as $\displaystyle\arg\min_{\left\\{w_{I,m}^{pq},\;w_{Q,m}^{pq}\right\\}_{m=1}^{M}}\,\,\sum_{p=1}^{U}\sum_{q=1}^{U}\left\\{\left\|\hat{h}_{\mathrm{SI},pq}[\omega]-h_{\mathrm{can},pq}[\omega]\right\|^{2}\right\\}_{\omega=\omega_{0}}^{\omega_{1}}$ (14) $\displaystyle\text{ s.t.}\quad-1\leq w_{I,m}^{pq}\leq 1,\,-1\leq w_{Q,m}^{pq}\leq 1,$ where $[{\omega_{0}},{\omega_{1}}]$ spans the BoI, i.e., [27.8 GHz, 28.2 GHz] in this work. 
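Since (13) is linear in the complex weights $w_{I,m}+jw_{Q,m}$, the per-pair fit in (14) reduces, for fixed delays and loss factors, to a linear least-squares problem. The sketch below solves it for one RF chain pair against a synthetic channel, clipping the result to the VOA range as a simple heuristic; all sizes and channel samples are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 16                                            # number of taps (assumed)
omega = 2 * np.pi * np.linspace(27.8e9, 28.2e9, 64)   # sampled BoI
tau = np.arange(1, M + 1) * 0.5e-9                # tap delays (assumed)
alpha_hat = 0.9                                   # coupling attenuation

# Design matrix: column m is the response of tap m across the sampled BoI
# (unit alpha_m, beta_m here for simplicity).
A = alpha_hat * np.exp(-1j * np.outer(omega, tau))

# Synthetic "estimated RF SI channel" lying in the canceler's span.
h_SI = A @ (0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)))

# Unconstrained complex LS fit, then project onto the [-1, 1] VOA range.
w, *_ = np.linalg.lstsq(A, h_SI, rcond=None)
w_I = np.clip(w.real, -1, 1)
w_Q = np.clip(w.imag, -1, 1)
residual = np.linalg.norm(h_SI - A @ (w_I + 1j * w_Q))
print(f"fit residual: {residual:.2e}")
```

A real SI channel will not lie exactly in the canceler's span, so the residual, rather than vanishing, measures the achievable A-SIC over the band.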
The operator $\\{\left\|\cdot\right\|^{2}\\}_{\omega=\omega_{0}}^{\omega_{1}}$ means the sum of the squared error across all frequency components within $[{\omega_{0}},{\omega_{1}}]$ since the sampled version of the BoI is considered [27]. $h_{\mathrm{can},pq}[\omega]$ is the canceler response for mitigating the SI between the $pq$th transceiver RF chain pair, which is represented by (13). The constraints come from passive VOAs. With the channel state information (CSI) of the estimated RF SI channel and the frequency response of the canceler without VOA effects known a priori, the optimal weights can be obtained by the least-squares (LS) method. Since the A-SIC performance mainly depends on the frequency selectivity of the SI channel and the number of taps in the canceler, and due to the fact that the RF beamformers do not affect the frequency selectivity of the SI channel, we assume the amount of cancellation for the RF SI channel to be the same as that for the SI channel. Thus, we obtain the A-SIC performance through simulating with the SI channel instead of the RF SI channel and reflect the A-SIC effect by simply scaling the SI signal with a power attenuation factor. In this work, we assume antenna isolation also attenuates the SI signal in a frequency-flat manner [29]. Thus, after A-SIC, the term $\mathbf{H}_{\mathrm{SI}}[k]\mathbf{x}_{\mathrm{N}}[k]$ in (2) is scaled by $\sqrt{\eta}$ with the scalar $\eta$ being the amount of SI signal strength attenuated by both the antenna isolation and A-SIC. ## IV RF Codebook Design and RF Effective Channel Estimation In practice, the RF precoders/combiners are usually implemented using finite-resolution PSs, i.e., they are selected from the pre-defined RF codebooks. Besides, the estimation of the large and sparse mmWave channel is difficult in reality.
Motivated by these, in this section, a modified LBG algorithm will be introduced for designing the RF codebook, followed by the estimation of the RF effective channels after A-SIC. ### IV-A Modified MSE-Based LBG Algorithm for RF Codebook Design The LBG algorithm is a popular vector quantization scheme and is treated as an extension of the Lloyd-Max scalar quantization algorithm [30]. Conventionally, for matrix quantization, existing codebooks that work by vector-wise comparison can lead to low-rank behavior in the quantized matrix222Suppose each subarray has multiple RF chains. With a vector-wise codebook, the columns of the RF beamforming matrix of a certain subarray may likely be assigned to the same vector codeword, which can result in a low-rank matrix and the loss of degrees of freedom.. Therefore, to avoid that, we modify the LBG algorithm to yield the $B$ bits codebook with matrix codewords directly, whose steps are described as follows. * • Step 1 (Initialization): Given the training set $\mathcal{F}=\big{\\{}\mathbf{F}_{{\rm RF},t}|t=1,2,\ldots,T,\left|\mathbf{F}_{{\rm RF},t}\right|_{pq}=1$ if $\left[\mathbf{F}_{{\rm RF},t}\right]_{p,q}\neq 0\big{\\}}$ with $T$ entries, each entry of which is a block diagonal matrix whose blocks are formed from the phases of complex Gaussian random numbers with zero mean and unit variance333Ideally, the training set would consist of the optimal RF precoders/combiners, which are derived by the angle of the dominant eigenvector(s) corresponding to the eigenvalue decomposition (EVD) of the channel correlation matrix (i.e., the sample covariance matrix) [17]. However, as aforementioned, the mmWave channel is hard to estimate. Therefore, by exploring the distribution of the RF precoders/combiners, i.e., the values in the RF precoder/combiner matrix are isotropically (uniformly) distributed [31, 32, Lemma 1, 2], we construct the entries of the training set by the angle of $\mathcal{CN}(0,1)$ random numbers.
The codebook $\mathcal{C}$ is initialized with an entry $\mathbf{C}_{1}(0)$, obtained by the angle of the mean value of the training set as $\mathbf{C}_{1}(0)=e^{j\arg\left(\frac{\sum_{t=1}^{T}\mathbf{F}_{{\rm RF},t}}{T}\right)}.$ (15) * • Step 2 (Splitting): This step splits each entry of the $b$ bits codebook $\mathcal{C}$ into two new ones to initialize the $b+1$ bits codebook, where $b=0,1,\ldots,B-1$. To achieve that, we perturb each entry $\mathbf{C}_{i}(b)$ as $\displaystyle\mathbf{C}^{(0)}_{i+2^{b}}(b+1)=e^{j\arg\left(\sqrt{1-\epsilon^{2}}\mathbf{C}_{i}(b)+\epsilon\mathbf{P}_{i}(b)\right)},$ $\displaystyle\mathbf{C}^{(0)}_{i}(b+1)=e^{j\arg\left(\sqrt{1-\epsilon^{2}}\mathbf{C}_{i}(b)-\epsilon\mathbf{P}_{i}(b)\right)},$ (16) where $i=1,2,\ldots,2^{b}$, $\epsilon$ is a small positive value (e.g., $10^{-3}$), and $\mathbf{P}_{i}(b)$ is a block diagonal matrix, each block of which is drawn from the angle of $\mathcal{CN}(0,1)$ random numbers. * • Step 3 (Cluster Assignment): In this step, using the nearest neighbor routine based on MSE, the training set is divided into $2^{b+1}$ (i.e., $|\mathcal{C}|$) clusters, where the centroid of cluster $j$ is given by $\mathbf{C}_{j}^{(v)}(b+1)$, where $v=0,1,\ldots,V-1$ with $V$ being the maximum number of iterations of Step 5. For example, $\mathbf{F}_{{\rm RF},t}$ is assigned to cluster 1 if $d\left(\mathbf{F}_{{\rm RF},t},\mathbf{C}_{1}^{(v)}(b+1)\right)\leq d\left(\mathbf{F}_{{\rm RF},t},\mathbf{C}_{j}^{(v)}(b+1)\right)$, $\forall j=1,2,\ldots,|\mathcal{C}|$, where $d\left(\mathbf{X},\mathbf{Y}\right)=\frac{1}{PQ}\sum_{p=1}^{P}\sum_{q=1}^{Q}\left|[\mathbf{X}]_{p,q}-[\mathbf{Y}]_{p,q}\right|^{2}$, and $P$, $Q$ denote the number of rows and columns of the matrix, respectively. * • Step 4 (Centroid Update): Each entry of the codebook is updated with the centroid of the corresponding cluster.
The centroid is computed as the solution of the following optimization problem: $\mathbf{{C}}_{j}^{(v)}(b+1)=\arg\underset{\mathbf{C}_{j}^{(v)}(b+1)}{\min}{\underset{\mathbf{F}_{{\rm RF},t}\in j}{\sum}d\left(\mathbf{F}_{{\rm RF},t},\mathbf{C}_{j}^{(v)}(b+1)\right)}.$ (17) Thus, the new centroid $\mathbf{{C}}_{j}^{(v)}(b+1)$ is given by the phase of the mean of all $\mathbf{F}_{{\rm RF},t}$ in the $j$th cluster. * • Step 5 (Inner loop): Return to Step 3 until the maximum number of iterations $V$ is reached (e.g., $V=50$). * • Step 6 (Outer loop): Return to Step 2 until the codebook resolution $b+1$ equals the desired resolution $B$. ### IV-B RF Effective Channel Estimation Given the RF codebooks, we can estimate the RF effective channels for designing the BB beamformers. Note that the RF effective channels estimated in this section are those after A-SIC, assuming the BB beamformers to be identity matrices [13]. The RF effective channel estimation has two phases: i) RF precoder-combiner pair selection; ii) RF effective channel estimation. The RF beamformers are designed to maximize the desired signal in their corresponding links. We treat whole OFDM symbols as pilots and assume that only the IAB donor or the IAB-node can transmit data in a given time slot. Moreover, the identity BB beamformer matrices are omitted here.
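The six steps of the modified LBG algorithm in Section IV-A can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical sizes (2 blocks of size $4\times 1$, 64 training matrices), not the authors' implementation; the names `block_diag_mask`, `phase_matrix`, and `lbg_codebook` are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_diag_mask(num_blocks, block_rows, block_cols):
    """Support pattern of a block-diagonal RF beamformer."""
    P, Q = num_blocks * block_rows, num_blocks * block_cols
    mask = np.zeros((P, Q), dtype=bool)
    for u in range(num_blocks):
        mask[u*block_rows:(u+1)*block_rows, u*block_cols:(u+1)*block_cols] = True
    return mask

def phase_matrix(mask):
    """Unit-modulus entries (phases of CN(0,1) samples) on the support."""
    z = rng.standard_normal(mask.shape) + 1j * rng.standard_normal(mask.shape)
    return np.where(mask, np.exp(1j * np.angle(z)), 0.0)

def centroid(mats, mask):
    """Phase of the element-wise mean, kept on the block-diagonal support."""
    m = np.mean(mats, axis=0)
    return np.where(mask, np.exp(1j * np.angle(m)), 0.0)

def mse(X, Y):
    """MSE distance between two matrices, cf. the distance d(X, Y) in Step 3."""
    return np.mean(np.abs(X - Y) ** 2)

def lbg_codebook(train, mask, B, eps=1e-3, V=20):
    C = [centroid(train, mask)]                       # Step 1: initialization
    for _ in range(B):                                # Step 6: outer loop
        new = []                                      # Step 2: split every codeword
        for Ci in C:
            P = phase_matrix(mask)
            new.append(np.where(mask, np.exp(1j*np.angle(np.sqrt(1-eps**2)*Ci + eps*P)), 0))
            new.append(np.where(mask, np.exp(1j*np.angle(np.sqrt(1-eps**2)*Ci - eps*P)), 0))
        C = new
        for _ in range(V):                            # Step 5: inner loop
            # Step 3: nearest-neighbor cluster assignment under MSE
            labels = [int(np.argmin([mse(F, Cj) for Cj in C])) for F in train]
            # Step 4: update each centroid from its cluster members
            for j in range(len(C)):
                members = [train[t] for t, l in enumerate(labels) if l == j]
                if members:
                    C[j] = centroid(members, mask)
    return C

mask = block_diag_mask(num_blocks=2, block_rows=4, block_cols=1)
train = [phase_matrix(mask) for _ in range(64)]
codebook = lbg_codebook(train, mask, B=2)
print(len(codebook))  # 2^B codewords
```

Doubling the codeword count at every outer iteration is what makes the final codebook contain exactly $2^{B}$ matrix codewords.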
#### IV-B1 Phase 1 (RF precoder-combiner pair selection) The received backhaul link signal of the $k$th pilot subcarrier at the IAB-node, which uses the $p$th codeword of the codebook $\mathcal{F}_{\mathrm{D}}$ as the RF precoder and the $q$th codeword of the codebook $\mathcal{W}_{\mathrm{N}}$ as the RF combiner, is given by $\displaystyle\mathbf{Y}_{\mathrm{N}}[k](p,q)$ $\displaystyle=\mathbf{W}_{{\rm RF}\mathrm{N},q}^{H}[\mathbf{H}_{\mathrm{ND}}[k]\mathbf{F}_{{\rm RF}\mathrm{D},p}\left(\mathbf{S}_{\mathrm{D}}[k]+\mathbf{E}_{\mathrm{D}}[k]\right)$ $\displaystyle+\mathbf{Z}_{\mathrm{N}}[k]]+\mathbf{G}_{\mathrm{N}}[k],$ (18) where $\mathbf{S}_{\mathrm{D}}[k]\in\mathbb{C}^{U\times U}$ is the matrix of orthogonal pilot signals with $\mathbf{S}_{\mathrm{D}}[k]\mathbf{S}_{\mathrm{D}}^{H}[k]=\frac{P_{t}}{KU}\mathbf{I}_{U}$. $\mathbf{E}_{\mathrm{D}}[k]\in\mathbb{C}^{U\times U}$, $\mathbf{G}_{\mathrm{N}}[k]\in\mathbb{C}^{U\times U}$, and $\mathbf{Z}_{\mathrm{N}}[k]\in\mathbb{C}^{U\times U}$ are the noise matrices caused by the transmitter HWI, the receiver HWI, and the Gaussian noise, respectively, with the same statistics as in (1) and (2). Similarly, the jointly received access link signal of the $k$th pilot subcarrier across all UEs, which uses the $p$th codeword of the codebook $\mathcal{F}_{\mathrm{N}}$ as the RF precoder and the $q$th codeword of the codebook $\mathcal{W}_{\mathrm{E}}$ as the RF combiner, is given by $\displaystyle\mathbf{Y}_{\mathrm{E}}[k](p,q)$ $\displaystyle=\mathbf{W}_{{\rm RF}\mathrm{E},q}^{H}[\mathbf{H}_{\mathrm{EN}}[k]\mathbf{F}_{{\rm RF}\mathrm{N},p}\left(\mathbf{S}_{\mathrm{N}}[k]+\mathbf{E}_{\mathrm{N}}[k]\right)$ $\displaystyle+\mathbf{Z}_{\mathrm{E}}[k]]+\mathbf{G}_{\mathrm{E}}[k],$ (19) where the matrix of orthogonal pilot signals $\mathbf{S}_{\mathrm{N}}[k]\in\mathbb{C}^{U\times U}$ satisfies $\mathbf{S}_{\mathrm{N}}[k]\mathbf{S}_{\mathrm{N}}^{H}[k]=\frac{P_{t}}{KU}\mathbf{I}_{U}$.
$\mathbf{E}_{\mathrm{N}}[k]\in\mathbb{C}^{U\times U}$, $\mathbf{G}_{\mathrm{E}}[k]\in\mathbb{C}^{U\times U}$, and $\mathbf{Z}_{\mathrm{E}}[k]\in\mathbb{C}^{U\times U}$ are the transmitter HWI, receiver HWI, and Gaussian noise matrix, respectively, with the same statistics in (3) and (4). According to the beam management [33], each time, a codeword is chosen from their corresponding codebook and the RF precoder and combiner pairs that can maximize the received power among all pilot subcarriers are selected, given as $\displaystyle\left\\{\mathbf{F}_{{\rm RF}\mathrm{D}},\mathbf{W}_{{\rm RF}\mathrm{N}}\right\\}=\arg\underset{p,q}{\max}{\sum_{k=1}^{K}\left\|\mathbf{Y}_{\mathrm{N}}[k](p,q)\right\|_{F}^{2}}$ (20a) $\displaystyle\text{subject to}\quad\mathbf{F}_{{\rm RF}\mathrm{D},p}\in\mathcal{F}_{\mathrm{D}},\quad\mathbf{W}_{{\rm RF}\mathrm{N},q}\in\mathcal{W}_{\mathrm{N}}.$ (20b) $\displaystyle\left\\{\mathbf{F}_{{\rm RF}\mathrm{N}},\mathbf{W}_{{\rm RF}\mathrm{E}}\right\\}=\arg\underset{p,q}{\max}{\sum_{k=1}^{K}\left\|\mathbf{Y}_{\mathrm{E}}[k](p,q)\right\|_{F}^{2}}$ (21a) $\displaystyle\text{subject to}\quad\mathbf{F}_{{\rm RF}\mathrm{N},p}\in\mathcal{F}_{\mathrm{N}},\quad\mathbf{W}_{{\rm RF}\mathrm{E},q}\in\mathcal{W}_{\mathrm{E}}.$ (21b) In this work, the RF beamformers for all nodes can be selected from the same isotropic RF codebook derived from the last subsection. #### IV-B2 Phase 2 (RF effective channel estimation) Given the RF precoder/combiner, one can estimate the RF effective channel with the help of pilot signal by standard estimation methods, such as, the LS. 
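The pair selection in (20a)-(21b) is an exhaustive sweep over codeword pairs. A minimal sketch under hypothetical small dimensions, with toy random codebooks and the noise/HWI terms omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

K, U, nT, nR = 4, 2, 8, 8          # hypothetical sizes, not the paper's defaults
H = [rng.standard_normal((nR, nT)) + 1j*rng.standard_normal((nR, nT)) for _ in range(K)]
# toy codebooks of unit-modulus RF precoders/combiners
F_cb = [np.exp(1j*2*np.pi*rng.random((nT, U))) for _ in range(4)]
W_cb = [np.exp(1j*2*np.pi*rng.random((nR, U))) for _ in range(4)]
S = np.eye(U)                      # orthogonal pilots (power normalization omitted)

def received_power(p, q):
    # sum_k || W_q^H H[k] F_p S ||_F^2, cf. the objective in (20a)
    return sum(np.linalg.norm(W_cb[q].conj().T @ H[k] @ F_cb[p] @ S, 'fro')**2
               for k in range(K))

# sweep all (p, q) pairs and keep the one maximizing the received pilot power
p_best, q_best = max(((p, q) for p in range(len(F_cb)) for q in range(len(W_cb))),
                     key=lambda pq: received_power(*pq))
print(p_best, q_best)
```

The same sweep applies to the access link in (21a)-(21b) with the corresponding channel and codebooks.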
Consequently, after estimation, we can write the ideal RF effective channel matrix as the sum of the estimated RF effective channel matrix $\hat{\mathbf{H}}^{eff}_{(\cdot)}[k]$ and the estimation error matrix $\mathbf{\Delta}_{(\cdot)}[k]$, given as $\mathbf{W}_{{\rm RF}\mathrm{N}}^{H}\mathbf{H}_{\mathrm{ND}}[k]\mathbf{F}_{{\rm RF}\mathrm{D}}=\hat{\mathbf{H}}^{eff}_{\mathrm{ND}}[k]+\mathbf{\Delta}_{\mathrm{ND}}[k],$ (22) $\sqrt{\eta}\mathbf{W}_{{\rm RF}\mathrm{N}}^{H}\mathbf{H}_{\mathrm{SI}}[k]\mathbf{F}_{{\rm RF}\mathrm{N}}=\hat{\mathbf{H}}^{eff}_{\mathrm{SI}}[k]+\mathbf{\Delta}_{\mathrm{SI}}[k],$ (23) $\mathbf{W}_{{\rm RF}\mathrm{E}}^{H}\mathbf{H}_{\mathrm{EN}}[k]\mathbf{F}_{{\rm RF}\mathrm{N}}=\hat{\mathbf{H}}^{eff}_{\mathrm{EN}}[k]+\mathbf{\Delta}_{\mathrm{EN}}[k],$ (24) where we assume the channel estimation errors $\mathbf{\Delta}_{\mathrm{ND}}[k]$, $\mathbf{\Delta}_{\mathrm{SI}}[k]$, and $\mathbf{\Delta}_{\mathrm{EN}}[k]$ have the covariance matrices $\mathrm{Cov}\left[\mathbf{\Delta}_{\mathrm{ND}}[k]\right]=\sigma_{e,\mathrm{ND}}^{2}\mathbf{I}_{M}$, $\mathrm{Cov}\left[\mathbf{\Delta}_{\mathrm{SI}}[k]\right]=\sigma_{e,\mathrm{SI}}^{2}\mathbf{I}_{M}$, and $\mathrm{Cov}\left[\mathbf{\Delta}_{\mathrm{EN}}[k]\right]=\sigma_{e,\mathrm{EN}}^{2}\mathbf{I}_{M}$ [34, 35]. ## V Digital Self-Interference Cancellation After A-SIC, the RSI left by the previous stages is processed in the digital domain of the IAB-node receiver. In practice, the IAB-node knows its transmitted symbol vector $\mathbf{s}_{\mathrm{N}}[k]$, and the estimated RF effective SI channel $\hat{\mathbf{H}}^{eff}_{\mathrm{SI}}[k]$ is available from the process in Section IV. Then, with the help of successive interference cancellation, we can cancel out $\hat{\mathbf{H}}^{eff}_{\mathrm{SI}}[k]\mathbf{F}_{\mathrm{{\rm BB}\mathrm{N}}}[k]\mathbf{s}_{\mathrm{N}}[k]$.
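The digital cancellation step amounts to subtracting the reconstructed, known SI term; what remains is governed by the estimation error $\mathbf{\Delta}_{\mathrm{SI}}[k]$ alone. A toy sketch for one subcarrier, assuming hypothetical $4\times 4$ dimensions and an identity BB precoder:

```python
import numpy as np

rng = np.random.default_rng(2)
U = 4
# true and estimated RF effective SI channels differ by an error term, cf. (23)
H_si_true = rng.standard_normal((U, U)) + 1j*rng.standard_normal((U, U))
Delta = 1e-3 * (rng.standard_normal((U, U)) + 1j*rng.standard_normal((U, U)))
H_si_hat = H_si_true - Delta

F_bb = np.eye(U)                    # identity BB precoder during this stage
s_n = rng.standard_normal(U) + 1j*rng.standard_normal(U)  # known own symbols

rx = H_si_true @ F_bb @ s_n         # SI component reaching the digital domain
rsi = rx - H_si_hat @ F_bb @ s_n    # subtract the reconstructed, known part

# the residual SI is set by the channel estimation error only
print(np.linalg.norm(rsi) / np.linalg.norm(rx))
```

The printed ratio shrinks with the estimation error variance, which is why the RSI after this stage is attributed to $\mathbf{\Delta}_{\mathrm{SI}}[k]$ and the HWI terms in the subsequent SE analysis.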
Consequently, after subtraction, the decoded signal at the IAB-node in (2) can be reconstructed as $\displaystyle\hat{\mathbf{y}}_{\mathrm{N}}[k]=\mathbf{W}_{{\rm BB}\mathrm{N}}^{H}[k]\left({\widetilde{\hat{\mathbf{y}}}_{\mathrm{N}}[k]}+\mathbf{g}_{\mathrm{N}}[k]\right),$ (25) where $\mathbf{W}_{{\rm BB}{\mathrm{N}}}[k]$ is designed to act as the minimum mean-squared error (MMSE) BB combiner, which will be described in the next section, and $\widetilde{\hat{\mathbf{y}}}_{\mathrm{N}}[k]=\mathbf{W}_{{\rm RF}\mathrm{N}}^{H}\left(\mathbf{H}_{\mathrm{ND}}[k]\mathbf{x}_{\mathrm{D}}[k]+\sqrt{\eta}\mathbf{H}_{\mathrm{SI}}[k]\mathbf{F}_{{\rm RF}\mathrm{N}}\mathbf{e}_{\mathrm{N}}[k]+\mathbf{z}_{\mathrm{N}}[k]\right)+\mathbf{\Delta}_{\mathrm{SI}}[k]\mathbf{F}_{{\rm BB}\mathrm{N}}[k]\mathbf{s}_{\mathrm{N}}[k]$. ## VI Spectral Efficiency and Baseband Beamforming Design ### VI-A Spectral Efficiency Defining $\zeta=\frac{P_{t}}{KU}$ and substituting (1), (22), and (23) into (25), the SE of the backhaul link is expressed as $\displaystyle\mathcal{R}_{b}$ $\displaystyle=\frac{1}{K}\sum_{k=1}^{K}\log_{2}\mathrm{det}\Big{\\{}\mathbf{I}_{U}+{\mathbf{W}_{{\rm BB}{\mathrm{N}}}^{H}[k]\mathbf{\Phi}_{b}[k]\mathbf{W}_{{\rm BB}{\mathrm{N}}}[k]}$ $\displaystyle\times\left(\mathbf{W}_{{\rm BB}{\mathrm{N}}}^{H}[k]\mathbf{\Omega}_{b}[k]\mathbf{W}_{{\rm BB}{\mathrm{N}}}[k]\right)^{-1}\Big{\\}},$ (26) where $\mathbf{\Phi}_{b}[k]$ is the covariance matrix of the known part of the desired signal, and $\mathbf{\Omega}_{b}[k]$ is the covariance matrix of the noise contributed by the channel estimation error, the transceiver HWI, and the Gaussian noise.
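For a single subcarrier, the determinant expression inside (26) is straightforward to evaluate once $\mathbf{\Phi}_{b}[k]$ and $\mathbf{\Omega}_{b}[k]$ are available. A sketch with hypothetical covariance inputs (the average over the $K$ subcarriers is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

def backhaul_se(Phi, Omega, W):
    """Per-subcarrier SE term of (26): log2 det(I + (W^H Phi W)(W^H Omega W)^{-1})."""
    A = W.conj().T @ Phi @ W
    B = W.conj().T @ Omega @ W
    n = A.shape[0]
    return np.log2(np.linalg.det(np.eye(n) + A @ np.linalg.inv(B)).real)

U = 4
X = rng.standard_normal((U, U)) + 1j*rng.standard_normal((U, U))
Phi = X @ X.conj().T       # a hypothetical PSD "signal" covariance
Omega = np.eye(U)          # white "noise plus impairment" covariance
W = np.eye(U)              # trivial BB combiner for illustration
se = backhaul_se(Phi, Omega, W)
print(se)
```

With a positive semidefinite $\mathbf{\Phi}_{b}[k]$, every eigenvalue factor in the determinant is at least one, so the per-subcarrier SE term is nonnegative.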
$\displaystyle\mathbf{\Phi}_{b}[k]=\zeta\hat{\mathbf{H}}_{\mathrm{ND}}^{eff}[k]\mathbf{F}_{\mathrm{{\rm BB}\mathrm{D}}}[k]\mathbf{F}_{\mathrm{{\rm BB}\mathrm{D}}}^{H}[k]\left(\hat{\mathbf{H}}_{\mathrm{ND}}^{eff}[k]\right)^{H}.$ (27) $\displaystyle\mathbf{\Omega}_{b}[k]$ $\displaystyle=\mathbf{\Omega}_{b}^{(1)}[k]+\mathbf{\Omega}_{b}^{(2)}[k]+\mathbf{\Omega}_{b}^{(3)}[k]+\underbrace{\sigma_{\mathrm{N}}^{2}\mathbf{W}_{{\rm RF}\mathrm{N}}^{H}\mathbf{W}_{{\rm RF}\mathrm{N}}}_{\text{Gaussian noise}}$ $\displaystyle\overset{(a)}{=}\mathbf{\Omega}_{b}^{(1)}[k]+\mathbf{\Omega}_{b}^{(2)}[k]+\mathbf{\Omega}_{b}^{(3)}[k]+\sigma_{\mathrm{N}}^{2}\frac{n_{R}}{U}\mathbf{I}_{U},$ (28) where $(a)$ follows from the property of the RF beamformers, i.e., $\mathbf{W}_{{\rm RF}\mathrm{N}}^{H}\mathbf{W}_{{\rm RF}\mathrm{N}}=\frac{n_{R}}{U}\mathbf{I}_{U}$. $\displaystyle\mathbf{\Omega}_{b}^{(1)}[k]$ $\displaystyle=\mathrm{Cov}\bigg{[}\underbrace{\hat{\mathbf{H}}_{\mathrm{ND}}^{eff}[k]\mathbf{e}_{\mathrm{D}}[k]+\mathbf{\Delta}_{\mathrm{ND}}[k]\mathbf{e}_{\mathrm{D}}[k]}_{\text{backhaul channel transmitter HWI}}$ $\displaystyle\quad+\underbrace{\mathbf{\Delta}_{\mathrm{ND}}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{s}_{\mathrm{D}}[k]}_{\text{backhaul channel estimation error}}\bigg{]}$ $\displaystyle\overset{(b)}{=}\zeta\rho\hat{\mathbf{H}}_{\mathrm{ND}}^{eff}[k]\mathrm{diag}\left[\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}^{H}[k]\right]\left(\hat{\mathbf{H}}_{\mathrm{ND}}^{eff}[k]\right)^{H}$ $\displaystyle\quad+\sigma_{e,\mathrm{ND}}^{2}\zeta(\rho+1)\mathrm{tr}\left[\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}^{H}[k]\right]\mathbf{I}_{U},$ (29a) where $(b)$ is obtained by the following simplifications: $\displaystyle\left[\mathrm{Cov}\left[\mathbf{\Delta}_{\mathrm{ND}}[k]\mathbf{e}_{\mathrm{D}}[k]\right]\right]_{m,n}$
$\displaystyle=\sum_{p}\left[\mathbb{E}\left\\{\left[\mathbf{\Delta}_{\mathrm{ND}}[k]\right]_{m,p}\left[\mathbf{\Delta}_{\mathrm{ND}}^{H}[k]\right]_{p,n}\left[\left\|\mathbf{e}_{\mathrm{D}}[k]\right\|^{2}\right]_{p}\right\\}\right]_{m,n}$ $\displaystyle=\sigma_{e,\mathrm{ND}}^{2}\sum_{p}\left[\mathbb{E}\left\\{\left[\left\|\mathbf{e}_{\mathrm{D}}[k]\right\|^{2}\right]_{p}\right\\}\right]_{m,n}\delta_{m,n}$ $\displaystyle=\sigma_{e,\mathrm{ND}}^{2}\delta_{m,n}\mathrm{tr}\left[\mathbb{E}\left\\{\mathbf{e}_{\mathrm{D}}[k]\mathbf{e}_{\mathrm{D}}^{H}[k]\right\\}\right]$ $\displaystyle=\sigma_{e,\mathrm{ND}}^{2}\zeta\rho\mathrm{tr}\left[\mathrm{diag}\left[\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}^{H}[k]\right]\right]\delta_{m,n}$ $\displaystyle=\sigma_{e,\mathrm{ND}}^{2}\zeta\rho\mathrm{tr}\left[\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}^{H}[k]\right]\delta_{m,n},$ (29b) $\displaystyle\left[\mathrm{Cov}\left[\mathbf{\Delta}_{\mathrm{ND}}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{s}_{\mathrm{D}}[k]\right]\right]_{m,n}$ $\displaystyle=\zeta\sum_{p,q}\left[\mathbb{E}\left\\{\left[\mathbf{\Delta}_{\mathrm{ND}}[k]\right]_{m,p}\left[\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}^{H}[k]\right]_{p,q}\left[\mathbf{\Delta}_{\mathrm{ND}}^{H}[k]\right]_{q,n}\right\\}\right]_{m,n}$ $\displaystyle=\sigma_{e,\mathrm{ND}}^{2}\zeta\sum_{p,q}\left[\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}^{H}[k]\right]_{p,q}\delta_{m,n}\delta_{p,q}$ $\displaystyle=\sigma_{e,\mathrm{ND}}^{2}\zeta\mathrm{tr}\left[\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}^{H}[k]\right]\delta_{m,n}.$ (29c) $\displaystyle\mathbf{\Omega}_{b}^{(2)}[k]$ $\displaystyle=\mathrm{Cov}\bigg{[}\underbrace{\hat{\mathbf{H}}_{\mathrm{SI}}^{eff}[k]\mathbf{e}_{\mathrm{N}}[k]+\mathbf{\Delta}_{\mathrm{SI}}[k]\mathbf{e}_{\mathrm{N}}[k]}_{\text{SI channel transmitter HWI}}$
$\displaystyle\quad+\underbrace{\mathbf{\Delta}_{\mathrm{SI}}[k]\mathbf{F}_{{\rm BB}\mathrm{N}}[k]\mathbf{s}_{\mathrm{N}}[k]}_{\text{SI channel estimation error}}\bigg{]}$ $\displaystyle\overset{(c)}{=}\zeta\rho\hat{\mathbf{H}}_{\mathrm{SI}}^{eff}[k]\mathrm{diag}\left[\mathbf{F}_{{\rm BB}\mathrm{N}}[k]\mathbf{F}_{{\rm BB}\mathrm{N}}^{H}[k]\right]\left(\hat{\mathbf{H}}_{\mathrm{SI}}^{eff}[k]\right)^{H}$ $\displaystyle\quad+\sigma_{e,\mathrm{SI}}^{2}\zeta(\rho+1)\mathrm{tr}\left[\mathbf{F}_{{\rm BB}\mathrm{N}}[k]\mathbf{F}_{{\rm BB}\mathrm{N}}^{H}[k]\right]\mathbf{I}_{U},$ (30) where $(c)$ is derived using simplification processes similar to those in (29b) and (29c). $\displaystyle\mathbf{\Omega}_{b}^{(3)}[k]$ $\displaystyle=\underbrace{\beta\mathrm{diag}\left[\mathrm{Cov}\left[\widetilde{\hat{\mathbf{y}}}_{\mathrm{N}}[k]\right]\right]}_{\text{receiver HWI}}$ $\displaystyle=\beta\mathrm{diag}\left[\mathbf{\Phi}_{b}[k]+\mathbf{\Omega}_{b}^{(1)}[k]+\mathbf{\Omega}_{b}^{(2)}[k]+\sigma_{\mathrm{N}}^{2}\frac{n_{R}}{U}\mathbf{I}_{U}\right].$ (31) Next, we derive the sum SE expression of the access link across all users. The decoded signal at the $u$th user is given as ${y}_{\mathrm{E},u}[k]=\underbrace{\mathbf{w}_{{\rm RF}\mathrm{E},u}^{H}\left(\mathbf{H}_{\mathrm{EN,u}}[k]\mathbf{x}_{\mathrm{N}}[k]+\mathbf{z}_{\mathrm{E,u}}[k]\right)}_{\widetilde{{y}}_{\mathrm{E},u}[k]}+g_{\mathrm{E},u}[k].$ (32) By substituting (3) and (24) into (32), we obtain the sum SE expression of the access link as $\mathcal{R}_{a}=\sum_{u=1}^{U}\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left(1+\frac{{\Phi}_{a,u}[k]}{{\Omega}_{a,u}[k]}\right),$ (33) where ${\Phi}_{a,u}[k]$ denotes the covariance of the known part of the $u$th user's desired signal and ${\Omega}_{a,u}[k]$ represents the covariance of the noise contributed by the multiuser interference, the channel estimation error, the transceiver HWI, and the Gaussian noise at the $u$th user.
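Once the per-user scalars ${\Phi}_{a,u}[k]$ and ${\Omega}_{a,u}[k]$ are available, combining them into the sum SE of (33) is a one-liner. A sketch with hypothetical scalar inputs (2 users, 3 subcarriers):

```python
import numpy as np

def access_sum_se(Phi, Omega):
    """Sum SE per (33); Phi[u][k] and Omega[u][k] are per-user, per-subcarrier scalars."""
    Phi, Omega = np.asarray(Phi), np.asarray(Omega)
    # average log2(1 + SINR) over subcarriers (axis 1), then sum over users
    return float(np.sum(np.mean(np.log2(1.0 + Phi / Omega), axis=1)))

# hypothetical SINR numerators/denominators
Phi = [[3.0, 1.0, 7.0], [1.0, 1.0, 1.0]]
Omega = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
print(access_sum_se(Phi, Omega))  # → 3.0
```

User 1 averages $\log_{2}4$, $\log_{2}2$, $\log_{2}8$ to 2 bit/s/Hz and user 2 contributes 1 bit/s/Hz, matching the order of summation in (33).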
${\Phi}_{a,u}[k]=\zeta\hat{\mathbf{h}}_{\mathrm{EN},u}^{eff}[k]\mathbf{f}_{{\rm BB}\mathrm{N},u}[k]\mathbf{f}_{{\rm BB}\mathrm{N},u}^{H}[k]\left(\hat{\mathbf{h}}_{\mathrm{EN},u}^{eff}[k]\right)^{H},$ (34) where $\hat{\mathbf{H}}_{\mathrm{EN}}^{eff}[k]=\left[\left(\hat{\mathbf{h}}_{\mathrm{EN},1}^{eff}[k]\right)^{T},\left(\hat{\mathbf{h}}_{\mathrm{EN},2}^{eff}[k]\right)^{T},\ldots,\left(\hat{\mathbf{h}}_{\mathrm{EN},U}^{eff}[k]\right)^{T}\right]^{T}$ with $\left\\{\hat{\mathbf{h}}_{\mathrm{EN},u}^{eff}[k]\right\\}_{u=1}^{U}\in\mathbb{C}^{1\times U}$. $\displaystyle{\Omega}_{a,u}[k]$ $\displaystyle={\Omega}_{a,u}^{(1)}[k]+{\Omega}_{a,u}^{(2)}[k]+{\Omega}_{a,u}^{(3)}[k]+\underbrace{\sigma_{\mathrm{E}}^{2}\mathbf{w}_{{\rm RF}\mathrm{E},u}^{H}\mathbf{w}_{{\rm RF}\mathrm{E},u}}_{\text{Gaussian noise}}$ $\displaystyle\overset{(d)}{=}{\Omega}_{a,u}^{(1)}[k]+{\Omega}_{a,u}^{(2)}[k]+{\Omega}_{a,u}^{(3)}[k]+\sigma_{\mathrm{E}}^{2}N_{R},$ (35) where $(d)$ comes from the property of RF beamformers, i.e., $\mathbf{w}_{{\rm RF}\mathrm{E},u}^{H}\mathbf{w}_{{\rm RF}\mathrm{E},u}=N_{R}$. 
$\displaystyle{\Omega}_{a,u}^{(1)}[k]$ $\displaystyle=\mathrm{Cov}\bigg{[}\underbrace{\sum_{v=1,v\neq u}^{U}\hat{\mathbf{h}}_{\mathrm{EN},u}^{eff}[k]\mathbf{f}_{{\rm BB}\mathrm{N},v}[k]{s}_{\mathrm{N},v}[k]}_{\text{multiuser interference}}+\underbrace{\hat{\mathbf{h}}_{\mathrm{EN},u}^{eff}[k]\mathbf{e}_{\mathrm{N}}[k]}_{\text{transmitter HWI}}\bigg{]}$ $\displaystyle=\zeta\sum_{v=1,v\neq u}^{U}\hat{\mathbf{h}}_{\mathrm{EN},u}^{eff}[k]\mathbf{f}_{{\rm BB}\mathrm{N},v}[k]\mathbf{f}_{{\rm BB}\mathrm{N},v}^{H}[k]\left(\hat{\mathbf{h}}_{\mathrm{EN},u}^{eff}[k]\right)^{H}$ $\displaystyle\quad+\zeta\rho\hat{\mathbf{h}}_{\mathrm{EN},u}^{eff}[k]\mathrm{diag}\left[\mathbf{F}_{{\rm BB}\mathrm{N}}[k]\mathbf{F}_{{\rm BB}\mathrm{N}}^{H}[k]\right]\left(\hat{\mathbf{h}}_{\mathrm{EN},u}^{eff}[k]\right)^{H},$ (36) where $\mathbf{s}_{\mathrm{N}}[k]=\left[{s}_{\mathrm{N},1}[k],{s}_{\mathrm{N},2}[k],\ldots,{s}_{\mathrm{N},U}[k]\right]^{T}$. $\displaystyle{\Omega}_{a,u}^{(2)}[k]$ $\displaystyle=\mathrm{Cov}\bigg{[}\underbrace{\mathbf{\Delta}_{\mathrm{EN},u}[k]\mathbf{e}_{\mathrm{N}}[k]}_{\text{transmitter HWI}}+\underbrace{\mathbf{\Delta}_{\mathrm{EN},u}[k]\mathbf{F}_{{\rm BB}\mathrm{N}}[k]\mathbf{s}_{\mathrm{N}}[k]}_{\text{channel estimation error}}\bigg{]}$ $\displaystyle\overset{(e)}{=}\sigma_{e,\mathrm{EN}}^{2}\zeta(\rho+1)\mathrm{tr}\left[\mathbf{F}_{{\rm BB}\mathrm{N}}[k]\mathbf{F}_{{\rm BB}\mathrm{N}}^{H}[k]\right],$ (37) where $\mathbf{\Delta}_{\mathrm{EN}}[k]=\left[\left(\mathbf{\Delta}_{\mathrm{EN},1}[k]\right)^{T},\left(\mathbf{\Delta}_{\mathrm{EN},2}[k]\right)^{T},\ldots,\left(\mathbf{\Delta}_{\mathrm{EN},U}[k]\right)^{T}\right]^{T}$ with $\left\\{\mathbf{\Delta}_{\mathrm{EN},u}[k]\right\\}_{u=1}^{U}\in\mathbb{C}^{1\times U}$, and $(e)$ is obtained by adopting simplifications similar to those in (29b) and (29c).
${\Omega}_{a,u}^{(3)}[k]=\underbrace{\beta\left\|\widetilde{{y}}_{\mathrm{E},u}[k]\right\|^{2}}_{\text{receiver HWI}}=\beta\left({\Phi}_{a,u}[k]+{\Omega}_{a,u}^{(1)}[k]+{\Omega}_{a,u}^{(2)}[k]+\sigma_{\mathrm{E}}^{2}N_{R}\right).$ (38) ### VI-B Baseband Beamforming Design Given the RF beamformers and RF effective channels derived in Section IV, we aim to design the BB beamformers for both the backhaul and access links. For the backhaul link, the $k$th BB precoder that maximizes the SE is obtained from the right singular vectors $\mathbf{V}_{\mathrm{ND}}[k]$ of the $k$th estimated RF effective backhaul link channel matrix $\hat{\mathbf{H}}^{eff}_{\mathrm{ND}}[k]$, that is, $\mathbf{F}_{{\rm BB}\mathrm{D}}[k]=\left[\mathbf{V}_{\mathrm{ND}}[k]\right]_{:,1:U}.$ (39) Due to the precoder power constraint, the BB precoder is updated as $\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\leftarrow\frac{\sqrt{U}\mathbf{F}_{{\rm BB}\mathrm{D}}[k]}{\left|\left|\mathbf{F}_{{\rm RF}\mathrm{D}}\mathbf{F}_{{\rm BB}\mathrm{D}}[k]\right|\right|_{F}}$. Next, the BB precoder $\mathbf{F}_{{\rm BB}\mathrm{N}}[k]$ at the IAB-node transmitter is designed to null the multiuser interference via zero forcing, which gives $\mathbf{F}_{{\rm BB}\mathrm{N}}[k]=\left(\hat{\mathbf{H}}_{\mathrm{EN}}^{eff}[k]\right)^{H}\left[\hat{\mathbf{H}}_{\mathrm{EN}}^{eff}[k]\left(\hat{\mathbf{H}}_{\mathrm{EN}}^{eff}[k]\right)^{H}\right]^{-1}.$ (40) Similarly, this BB precoder should be normalized as $\mathbf{f}_{{\rm BB}\mathrm{N},u}[k]\leftarrow\frac{\mathbf{f}_{{\rm BB}\mathrm{N},u}[k]}{\left|\left|\mathbf{F}_{{\rm RF}\mathrm{N}}\mathbf{f}_{{\rm BB}\mathrm{N},u}[k]\right|\right|_{F}}\;\forall u\in\\{1,2,\ldots,U\\}$. Finally, note that the channel estimation error is uncorrelated with the data vector, and we assume that the strengths of the HWI and the channel estimation error are known a priori.
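The BB precoder designs in (39) and (40), together with their power normalizations, can be sketched as follows. This is a hedged illustration with hypothetical dimensions and a trivial RF stage; the function names are our own.

```python
import numpy as np

rng = np.random.default_rng(5)
U, M = 4, 4                                 # hypothetical effective-channel size

def backhaul_bb_precoder(H_eff, F_rf):
    """Right singular vectors of the effective channel, cf. (39), then normalization."""
    _, _, Vh = np.linalg.svd(H_eff)
    F = Vh.conj().T[:, :U]
    return np.sqrt(U) * F / np.linalg.norm(F_rf @ F, 'fro')

def access_bb_precoder(H_eff, F_rf):
    """Zero-forcing precoder, cf. (40), with per-user column normalization."""
    F = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T)
    for u in range(F.shape[1]):
        F[:, u] /= np.linalg.norm(F_rf @ F[:, u])
    return F

H = rng.standard_normal((U, M)) + 1j*rng.standard_normal((U, M))
F_rf = np.eye(M)                            # trivial RF stage for illustration
F_bb_d = backhaul_bb_precoder(H, F_rf)
F_bb_n = access_bb_precoder(H, F_rf)
# ZF nulls the multiuser interference: H @ F_bb_n is diagonal
print(np.round(np.abs(H @ F_bb_n), 6))
```

The printed matrix is diagonal, which is exactly the interference-nulling property the ZF design targets; the per-column scaling enforces the same power constraint as the normalization after (40).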
The MMSE BB combiner for the $k$th subcarrier, $\mathbf{W}_{{\rm BB}\mathrm{N}}[k]$, is designed by solving the following optimization problem: $\arg\underset{\mathbf{W}_{{\rm BB}\mathrm{N}}[k]}{\min}{\mathbb{E}\left\\{\left\|\mathbf{s}_{\mathrm{D}}[k]-\hat{\mathbf{y}}_{\mathrm{N}}[k]\right\|^{2}_{2}\right\\}}.$ (41) By solving $\frac{\partial\mathbb{E}\left\\{\left\|\mathbf{s}_{\mathrm{D}}[k]-\hat{\mathbf{y}}_{\mathrm{N}}[k]\right\|^{2}_{2}\right\\}}{\partial\mathbf{W}_{{\rm BB}\mathrm{N}}^{H}[k]}=0$ (see Appendix A), we have $\displaystyle\mathbf{W}_{{\rm BB}\mathrm{N}}[k]$ $\displaystyle=\mathbb{E}\left\\{\left({\widetilde{\hat{\mathbf{y}}}_{\mathrm{N}}[k]}+\mathbf{g}_{\mathrm{N}}[k]\right)\left({\widetilde{\hat{\mathbf{y}}}_{\mathrm{N}}[k]}+\mathbf{g}_{\mathrm{N}}[k]\right)^{H}\right\\}^{-1}$ $\displaystyle\quad\times\mathbb{E}\left\\{\left({\widetilde{\hat{\mathbf{y}}}_{\mathrm{N}}[k]}+\mathbf{g}_{\mathrm{N}}[k]\right)\mathbf{s}_{\mathrm{D}}^{H}[k]\right\\}$ $\displaystyle=\zeta\left(\mathbf{\Phi}_{b}[k]+\mathbf{\Omega}_{b}[k]\right)^{-1}\hat{\mathbf{H}}_{\mathrm{ND}}^{eff}[k]\mathbf{F}_{{\rm BB}\mathrm{D}}[k].$ (42)

TABLE I: System Parameters and Default Values

Notation | Physical Meaning | Values
---|---|---
$K$ | Number of subcarriers | 512
$D$ | Number of cyclic prefixes | 128
$W$ | Bandwidth | 400 MHz
$f_{c}$ | Carrier frequency | 28 GHz
$U$ | Number of users (subarrays, RF chains, data streams) | 4
${N_{T}},{n_{T}}$ | Numbers of transmit antennas at the IAB donor and the IAB-node, respectively | $16\times 16$, $16\times 16$
${n_{R}},{N_{R}}$ | Numbers of receive antennas at the IAB-node and each user, respectively | $16\times 16$, $16\times 4$
$\sigma_{\mathrm{N}},\sigma_{\mathrm{E}}$ | Gaussian noise power at the IAB-node and each user, respectively | $-174\;\text{dBm}+10\log_{10}W+10$ dB
$T_{s}$ | Sampling time | $1/W$
$\tau_{c,l}$ | Path delay | $\mathcal{U}(0,DT_{s})$
$\alpha_{c,l}$ | Complex gain | $\mathcal{CN}(0,1)$
$r_{0}$ | Reference distance | 1 m
$r$ | Distance between transceivers | 100 m (0.1 m)4
$\mu$ | Path loss exponent | 3.4 [23]
$\lambda$ | Wavelength | $3\times 10^{8}/f_{c}$
$\kappa$ | Rician factor | 10 dB
$\eta$ | Power of the SI signal attenuated by antenna isolation and A-SIC | $-80$ dB

* 4 $r=100$ m for the backhaul and access link channels; $r=0.1$ m ($\approx 10\lambda$ [36]) for the SI channel.

## VII Simulations In this section, simulation results are presented to analyze the performance of the designed network. Each subarray (user) has a 16$\times$4 UPA with 1 RF chain. The roll-off factor of the pulse-shaping filter is 1. Both communication links have $\mathrm{N_{C}}=8$ clusters, each with $\mathrm{N_{L}}=10$ rays, whereas the NLOS component of the SI channel has $\mathrm{N_{C}}=2$ clusters, each with $\mathrm{N_{L}}=8$ rays. Both azimuth and elevation AOAs/AODs can be expressed as the sum of the mean angle of each cluster and the angle shifts within the cluster. The mean azimuth and elevation AOAs/AODs of each cluster are assumed uniformly distributed in $[-\pi,\pi]$ and $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, respectively. Within each cluster, the AOAs/AODs follow a Laplacian distribution with an angle spread of $5^{\circ}$. The transceiver arrays at the IBFD-IAB-node have a separation angle of $\frac{\pi}{6}$. Assuming $\sigma^{2}_{\mathrm{N}}=\sigma^{2}_{\mathrm{E}}=\sigma^{2}$, we define $\mathrm{SNR}\triangleq\frac{P_{\mathrm{r}}}{\sigma^{2}KU}$, where $P_{\mathrm{r}}=\frac{P_{\mathrm{t}}}{\bar{PL}}$ is the ratio between the transmit power and the average path loss according to Friis' law. We let the HWI factors $\rho=\beta$ be identical for all channels. The backhaul link SE of the HD scheme is obtained by removing the SI-related part of (26), owing to non-simultaneous transmission and reception.
Moreover, for both links, the (sum) SE expressions for HD transmission need to be scaled by 0.5 since separate time-frequency signaling channels are used for the backhaul and access links. Other parameters and their default values used in the simulations are summarized in Table I. Figure 4: Comparison between the performance of (a) the traditional micro-strip analog canceler; (b) the OD-based analog canceler (SI channel with a delay spread of 200 ns). ### VII-A Performance of OD-Based Analog Canceler We assume the propagation loss of the FBG (coiled into 2 cm) is 0.461 dB/m, and that of the micro-strip is 2.967 dB/m [14]. The OD-based design uses a 20 dB hybrid coupler to couple the RF reference signal into the OD-based canceler, while the conventional electrical canceler uses a 0 dB coupler. Besides, to explore the best achievable performance, the tap delay varies with the number of taps so as to cover the delay spread. Fig. 4 shows the A-SIC abilities (in dB) of the traditional micro-strip canceler (Fig. 4(a)) and the OD-based canceler (Fig. 4(b)) for different bandwidths and numbers of taps. Simulations are run with a significant SI channel delay spread of 200 ns, which reflects a bad channel condition. Although the SI channel delay spread has been measured in [37], a general value is still lacking in the literature. Fig. 4(a) shows that creating a large number of taps with conventional electrical components (e.g., cables or micro-strips) degrades the performance rather than improving it, due to significant insertion losses. It can be seen that less than 15 dB of cancellation is achieved under 200 MHz bandwidth. Fig. 4(b) shows that under 400 MHz bandwidth, the OD-based canceler can achieve around 25 dB of cancellation in the FR2 wideband with 100 taps, which is also demonstrated in [14]. Note that this result shows the cancellation ability that can be achieved between a single RF chain pair.
In this work, we assume that antenna isolation and our A-SIC can attenuate the SI signal power by 55 dB [10] and 25 dB, respectively. Figure 5: Comparison of the (sum) SE of the 4-subarray hybrid beamforming structure with different kinds of codebooks for (a) the backhaul link; (b) the access link with 4 users. Each subarray (user) is equipped with a $16\times 4$ UPA and 1 RF chain (perfect CSI without HWI). ### VII-B Performance of the Proposed Codebook Design Fig. 5 compares, for the subarray structure, the (sum) SE of the backhaul and access links with RF precoders/combiners selected from our proposed matrix-wise codebooks and from vector-wise codebooks designed by the conventional MSE-based LBG algorithm in [30], respectively. For a fair comparison, a $b$-bit vector codebook should be compared with an $N_{{\rm RF}}b$-bit matrix codebook, where $N_{\rm RF}$ is the number of RF chains555For vector quantization, since each column of the RF beamformer matrix selects one codeword from a $b$-bit vector codebook, we can obtain $2^{N_{{\rm RF}}b}$ different candidate matrices, which equals the number of codewords in an $N_{{\rm RF}}b$-bit matrix codebook.. We assume perfect CSI and hardware. The RF precoders/combiners with infinite-resolution PSs are designed according to [17]. It can be seen that, for both kinds of codebooks, as the codebook size increases, the performance approaches the ideal one (i.e., infinite resolution). Clearly, our matrix codebook provides better performance than the vector codebook designed by the conventional LBG algorithm, which demonstrates the effectiveness of our modified MSE-based LBG codebook design. However, there is still a small gap between the ideal curve and those derived with the 8-bit matrix codebook for both links. A larger codebook can be used to reduce the gap. Moreover, the HD operation yields a lower (sum) SE than that of the IBFD scheme.
Figure 6: SE of the backhaul link for different beamforming schemes. Each subarray (user) has a $16\times 4$ UPA. The IAB donor and IAB-node have $16\times 16$ UPAs for the fully connected structure. ($\rho=\beta=-80$ dB, $\sigma_{e,\mathrm{ND}}^{2}=\sigma_{e,\mathrm{EN}}^{2}=\sigma_{e,\mathrm{SI}}^{2}=-120$ dB) ### VII-C Performance of Different Beamforming Schemes Fig. 6 shows the SE of the backhaul link for different beamforming schemes. The ideal curves are plotted by assuming perfect CSI and SIC without HWI. The design of the RF precoders/combiners for the ideal fully connected and subarray structures follows the process in [17], with infinite resolution. The non-ideal curves are obtained by our proposed design algorithm with the 8-bit RF codebook and the settings $\rho=\beta=-80$ dB, $\sigma_{e,\mathrm{ND}}^{2}=\sigma_{e,\mathrm{EN}}^{2}=\sigma_{e,\mathrm{SI}}^{2}=-120$ dB. It can be observed that, for the IBFD scheme, the three beamforming schemes evaluated in the figure are separated by significant rate losses. Although the rate loss is evident, the subarray structure can significantly reduce the hardware complexity and provide computationally inexpensive precoders, which is beneficial for industrial implementations. Further, with our staged SIC, the SE of the subarray structure is very close to its ideal counterpart; however, it shows some loss of degrees of freedom at high SNR due to the RSI caused by HWI and RF effective channel uncertainties. Fortunately, the losses in degrees of freedom and SE are further reduced by increasing the number of RF chains at the IAB-node receiver from 4 to 8 (see the green and orange curves in Fig. 6).
Figure 7: SE of the backhaul link at $\text{SNR}=-5,0,5$ dB with the 4-subarray hybrid beamforming structure, where the RF beamformers are selected from codebooks of different sizes, in the presence of different values of (a) SI RF effective channel estimation error ($\rho=\beta=-80$ dB, $\sigma_{e,\mathrm{ND}}^{2}=\sigma_{e,\mathrm{EN}}^{2}=-120$ dB); (b) HWI ($\rho=\beta$, $\sigma_{e,\mathrm{ND}}^{2}=\sigma_{e,\mathrm{EN}}^{2}=\sigma_{e,\mathrm{SI}}^{2}=-120$ dB). ### VII-D Effect of RSI on the SE of the Backhaul Link In Fig. 7, with RF precoders/combiners selected from 1-, 4-, and 8-bit codebooks, respectively, we study how the RSI caused by the RF effective SI channel estimation error and the HWI affects the SE performance of the backhaul link at different SNR values. With $\rho=\beta=-80$ dB and $\sigma_{e,\mathrm{ND}}^{2}=\sigma_{e,\mathrm{EN}}^{2}=-120$ dB, we plot the SE performance of the backhaul link in Fig. 7(a) by varying the estimation error of the SI RF effective channel. Interestingly, as the size of the RF codebook increases, the intersection point (i.e., the point where the IBFD and HD have the same performance) shifts to the right at a fixed SNR, which means the system can tolerate more RSI caused by channel estimation error. Conversely, when the codebook size is fixed, the intersection point shifts to the left as the SNR increases. Assuming all effective channels have the same estimation error of $-120$ dB, Fig. 7(b) shows the backhaul link SE performance with varying HWI factors. Similar to the trend in Fig. 7(a), with the same codebook size, the system can tolerate less RSI caused by HWI as the SNR increases. The tolerance improves at a fixed SNR when the codebook size increases. Moreover, an almost doubled SE can be achieved by the IBFD compared with the HD when the HWI factors (and channel estimation errors) are small enough, as can be seen in Fig. 7(b).
## VIII Conclusion In this paper, we have studied FR2 wideband IBFD-IAB networks under subarray structures, which are simpler to deploy and more cost-effective than fully connected ones. For this system, we have proposed an RF codebook design for the subarray structure with hybrid beamforming. Compared with the traditional vector-wise codebook, our matrix-wise codebook avoids low-rank RF beamforming matrices and the associated loss of degrees of freedom. We have also introduced a staged SIC scheme. To reduce the deployment cost, we have placed a canceler on each RF chain pair and utilized the OD-based analog canceler to reduce the effect of insertion loss. The RSI left by the A-SIC is handled in the digital domain by successive interference cancellation and an MMSE BB combiner. Simulations have shown that under 400 MHz bandwidth, our OD-based canceler can achieve about 25 dB of cancellation with 100 taps while experiencing nearly constant insertion loss, which cannot be realized by the traditional micro-strip canceler. With large HWI and RF effective SI channel uncertainties, IBFD transmission experiences a performance limitation in the backhaul link; however, for small HWI and uncertainties, IBFD promises almost double the SE of HD. Future work includes investigating multicell IBFD-IAB systems, optimal power allocation, and efficient antenna cancellation. Besides, the SI channel model will also be studied via real-world measurements or other reliable mathematical models.
## Appendix A MMSE BB Combiner $\displaystyle\frac{\partial\mathbb{E}\left\\{\left\|\mathbf{s}_{\mathrm{D}}[k]-\hat{\mathbf{y}}_{\mathrm{N}}[k]\right\|^{2}_{2}\right\\}}{\partial\mathbf{W}_{{\rm BB}\mathrm{N}}^{H}[k]}$ $\displaystyle=\frac{\partial\mathbb{E}\left\\{(\mathbf{s}_{\mathrm{D}}[k]-\hat{\mathbf{y}}_{\mathrm{N}}[k])(\mathbf{s}_{\mathrm{D}}[k]-\hat{\mathbf{y}}_{\mathrm{N}}[k])^{H}\right\\}}{\partial\mathbf{W}_{{\rm BB}\mathrm{N}}^{H}[k]}$ $\displaystyle=\frac{\partial\mathbb{E}\left\\{\mathbf{s}_{\mathrm{D}}[k]\mathbf{s}^{H}_{\mathrm{D}}[k]-\mathbf{s}_{\mathrm{D}}[k]\hat{\mathbf{y}}_{\mathrm{N}}^{H}[k]-\hat{\mathbf{y}}_{\mathrm{N}}[k]\mathbf{s}^{H}_{\mathrm{D}}[k]+\hat{\mathbf{y}}_{\mathrm{N}}[k]\hat{\mathbf{y}}_{\mathrm{N}}^{H}[k]\right\\}}{\partial\mathbf{W}_{{\rm BB}\mathrm{N}}^{H}[k]}.$ (43) By substituting (25) into (43), we have $\displaystyle\frac{\partial\mathbb{E}\left\\{\left\|\mathbf{s}_{\mathrm{D}}[k]-\hat{\mathbf{y}}_{\mathrm{N}}[k]\right\|^{2}_{2}\right\\}}{\partial\mathbf{W}_{{\rm BB}\mathrm{N}}^{H}[k]}=-\left({\widetilde{\hat{\mathbf{y}}}_{\mathrm{N}}[k]}+\mathbf{g}_{\mathrm{N}}[k]\right)\mathbf{s}^{H}_{\mathrm{D}}[k]$ $\displaystyle+\left({\widetilde{\hat{\mathbf{y}}}_{\mathrm{N}}[k]}+\mathbf{g}_{\mathrm{N}}[k]\right)\left({\widetilde{\hat{\mathbf{y}}}_{\mathrm{N}}[k]}+\mathbf{g}_{\mathrm{N}}[k]\right)^{H}\mathbf{W}_{{\rm BB}\mathrm{N}}[k].$ (44) Setting (44) equal to zero yields the MMSE BB combiner given in (42). ## References * [1] Y. Chen, D. Chen, and T. Jiang, “Non-uniform quantization codebook-based hybrid precoding to reduce feedback overhead in millimeter wave MIMO systems,” _IEEE Trans. Commun._ , vol. 67, no. 4, pp. 2779–2791, Dec. 2019. * [2] A. Alkhateeb and R. W. Heath, “Frequency selective hybrid precoding for limited feedback millimeter wave systems,” _IEEE Trans. Commun._ , vol. 64, no. 5, pp. 1801–1818, May 2016. * [3] 3GPP, “NR; Study on Integrated Access and Backhaul,” _TR 38.874 (Rel. 16)_ , Dec. 2018. * [4] J.
Zhang, N. Garg, M. Holm, and T. Ratnarajah, “Design of full duplex millimeter-wave integrated access and backhaul networks,” _IEEE Wireless Commun. Mag._ , vol. 28, no. 1, pp. 60–67, Feb. 2021. * [5] T. Zhang, S. Biswas, and T. Ratnarajah, “An analysis on wireless edge caching in in-band full-duplex FR2-IAB networks,” _IEEE Access_ , vol. 8, pp. 164 987–165 002, Sep. 2020. * [6] Z. Xiao, P. Xia, and X. Xia, “Full-duplex millimeter-wave communication,” _IEEE Wireless Commun. Mag._ , vol. 24, no. 6, pp. 136–143, Dec. 2017. * [7] P. Aquilina, A. C. Cirik, and T. Ratnarajah, “Weighted sum rate maximization in full-duplex multi-user multi-cell MIMO networks,” _IEEE Trans. Commun._ , vol. 65, no. 4, pp. 1590–1608, Jan. 2017. * [8] L. Song, Y. Li, and Z. Han, “Resource allocation in full-duplex communications for future wireless networks,” _IEEE Wireless Commun. Mag._ , vol. 22, no. 4, pp. 88–96, Aug. 2015. * [9] G. Y. Suk, S.-M. Kim, J. Kwak, S. Hur, E. Kim, and C.-B. Chae, “Full duplex integrated access and backhaul for 5G NR: Analyses and prototype measurements,” June 2021. [Online]. Available: https://arxiv.org/pdf/2007.03272.pdf * [10] E. Everett, A. Sahai, and A. Sabharwal, “Passive self-interference suppression for full-duplex infrastructure nodes,” _IEEE Trans. Wireless Commun._ , vol. 13, no. 2, pp. 680–694, Jan. 2014. * [11] Y. Liu, D. Liu, X. Li, and C. Huang, “Multi-tap analog MIMO self-interference cancellation for full-duplex communications,” in _Proc. 9th International Conference on Wireless Communications and Signal Processing (WCSP)_ , Nanjing, China, Oct. 2017. * [12] A. C. Cirik, S. Biswas, S. Vuppala, and T. Ratnarajah, “Beamforming design for full-duplex MIMO interference channels–QoS and energy-efficiency considerations,” _IEEE Trans. Commun._ , vol. 64, no. 11, pp. 4635–4651, Sep. 2016. * [13] A. Bishnu, M. Holm, and T. Ratnarajah, “Performance evaluation of full-duplex IAB multi-cell and multi-user network for FR2 band,” _IEEE Access_ , vol. 9, pp. 
72 269–72 283, May 2021. * [14] K. E. Kolodziej, S. Yegnanarayanan, and B. T. Perry, “Fiber bragg grating delay lines for wideband self-interference cancellation,” _IEEE Trans. Microwave Theory Tech._ , vol. 67, no. 10, pp. 4005–4014, Aug. 2019. * [15] Y. Sun, Z. Gao, H. Wang, and D. Wu, “Wideband hybrid precoding for next-generation backhaul/fronthaul based on mmWave FD-MIMO,” in _Proc. IEEE Globecom Workshops (GC Wkshps)_ , Abu Dhabi, United Arab Emirates, Dec. 2018. * [16] J. Mirza, B. Ali, S. Saud Naqvi, and S. Saleem, “Hybrid precoding via successive refinement for millimeter wave MIMO communication systems,” _IEEE Commun. Lett._ , vol. 21, no. 5, pp. 991–994, Jan. 2017. * [17] S. Park, A. Alkhateeb, and R. W. Heath, “Dynamic subarrays for hybrid precoding in wideband mmWave MIMO systems,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 5, pp. 2907–2920, May 2017. * [18] S. Payami, M. Ghoraishi, M. Dianati, and M. Sellathurai, “Hybrid beamforming with a reduced number of phase shifters for massive MIMO systems,” _IEEE Trans. Veh. Technol._ , vol. 67, no. 6, pp. 4843–4851, Feb. 2018. * [19] K. Venugopal, A. Alkhateeb, N. González Prelcic, and R. W. Heath, “Channel estimation for hybrid architecture-based wideband millimeter wave systems,” _IEEE J. Sel. Areas Commun._ , vol. 35, no. 9, pp. 1996–2009, June 2017. * [20] J. Mo, B. L. Ng, S. Chang, P. Huang, M. N. Kulkarni, A. Alammouri, J. C. Zhang, J. Lee, and W. Choi, “Beam codebook design for 5G mmwave terminals,” _IEEE Access_ , vol. 7, pp. 98 387–98 404, July 2019\. * [21] B. P. Day, A. R. Margetts, D. W. Bliss, and P. Schniter, “Full-duplex MIMO relaying: Achievable rates under limited dynamic range,” _IEEE J. Sel. Areas Commun._ , vol. 30, no. 8, pp. 1541–1553, Aug 2012. * [22] P. Schniter and A. Sayeed, “Channel estimation and precoder design for millimeter-wave communications: The sparse way,” in _Proc. 48th Asilomar Conference on Signals, Systems and Computers_ , Pacific Grove, CA, USA, April 2014. 
* [23] S. Sun, G. R. MacCartney, and T. S. Rappaport, “Millimeter-wave distance-dependent large-scale propagation measurements and path loss models for outdoor and indoor 5G systems,” in _Proc. 10th European Conference on Antennas and Propagation (EuCAP)_ , Davos, Switzerland, April 2016\. * [24] S. Han, Y. Zhang, W. Meng, and Z. Zhang, “Precoding design for full-duplex transmission in millimeter wave relay backhaul,” _Mobile Networks and Applications_ , vol. 23, pp. 1416–1426, Oct. 2018. * [25] I. P. Roberts, H. B. Jain, and S. Vishwanath, “Frequency-selective beamforming cancellation design for millimeter-wave full-duplex,” in _Proc. IEEE International Conference on Communications (ICC)_ , Dublin, Ireland, June 2020. * [26] D. Liu, Y. Shen, S. Shao, Y. Tang, and Y. Gong, “On the analog self-interference cancellation for full-duplex communications with imperfect channel state information,” _IEEE Access_ , vol. 5, pp. 9277–9290, May 2017\. * [27] H. Luo, M. Holm, and T. Ratnarajah, “Wideband active analog self-interference cancellation for 5G and beyond full-duplex systems,” in _Proc. 54th Asilomar Conference on Signals, Systems and Computers_ , Pacific Grove, CA, USA, Nov. 2020. * [28] I. P. Roberts, H. B. Jain, and S. Vishwanath, “Equipping millimeter-wave full-duplex with analog self-interference cancellation,” in _Proc. IEEE International Conference on Communications (ICC)_ , Dublin, Ireland, June 2020\. * [29] D. Lee and B. Min, “Demonstration of self-interference antenna suppression and rf cancellation for full duplex mimo communications,” in _Proc. IEEE Wireless Communications and Networking Conference Workshops (WCNCW)_ , Seoul, Korea (South), 2020. * [30] Y. Linde, A. Buzo, and R. Gray, “An algorithm for vector quantizer design,” _IEEE Trans. Commun._ , vol. 28, no. 1, pp. 84–95, Jan. 1980\. * [31] S. Sun, T. S. Rappaport, M. Shafi, and H. Tataria, “Analytical framework of hybrid beamforming in multi-cell millimeter-wave systems,” _IEEE Trans. 
Wireless Commun._ , vol. 17, no. 11, pp. 7528–7543, Sep. 2018\. * [32] D. J. Love, R. W. Heath, and T. Strohmer, “Grassmannian beamforming for multiple-input multiple-output wireless systems,” _IEEE Trans. Inform. Theory_ , vol. 49, no. 10, pp. 2735–2747, Oct. 2003. * [33] S. Noh, M. D. Zoltowski, and D. J. Love, “Multi-resolution codebook and adaptive beamforming sequence design for millimeter wave beam alignment,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 9, pp. 5689–5701, June 2017\. * [34] A. D. Dabbagh and D. J. Love, “Multiple antenna MMSE based downlink precoding with quantized feedback or channel mismatch,” _IEEE Trans. Commun._ , vol. 56, no. 11, pp. 1859–1868, Nov. 2008. * [35] A. A. Hutter, E. de Carvalho, and J. M. Cioffi, “On the impact of channel estimation for multiple antenna diversity reception in mobile ofdm systems,” in _Proc. Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154)_ , Pacific Grove, CA, USA, Nov. 2000. * [36] I. P. Roberts and S. Vishwanath, “Beamforming cancellation design for millimeter-wave full-duplex,” in _Proc. IEEE Global Communications Conference (GLOBECOM)_ , Waikoloa, HI, USA, Dec. 2019. * [37] B. Lee, J. Lim, C. Lim, B. Kim, and J. Seol, “Reflected self-interference channel measurement for mmwave beamformed full-duplex system,” in _Proc. IEEE Globecom Workshops (GC Wkshps)_ , San Diego, CA, USA, Dec. 2015. | Junkai Zhang (Student Member, IEEE) received the B.Eng. degree in communication engineering from Shenyang Ligong University, Shenyang, China, in 2018, and the M.Sc. degree in 2019, in signal processing and communications (with Distinction) from The University of Edinburgh, Edinburgh, United Kingdom, where he is currently working toward the Ph.D. degree with the Institute for Digital Communications. 
His research interests include 5G and beyond wireless networks, millimeter-wave communications, in-band full-duplex radio, stochastic geometry, integrated sensing and communications, and massive MIMO.

Haifeng Luo received the Bachelor's degree from the Civil Aviation University of China, Tianjin, China, in 2018, and the Master's degree from The University of Edinburgh, Edinburgh, United Kingdom, in 2019. He is currently pursuing the Ph.D. degree in the Institute for Digital Communications, The University of Edinburgh. His research interests include signal processing aspects of beyond-5G wireless networks, full-duplex radios, and machine-learning-aided communications.

Navneet Garg (Member, IEEE) received the B.Tech. degree in electronics and communication engineering from the College of Science & Engineering, Jhansi, India, in 2010, and the M.Tech. degree in digital communications from the ABV-Indian Institute of Information Technology and Management, Gwalior, in 2012. He completed the Ph.D. degree in June 2018 in the Department of Electrical Engineering at the Indian Institute of Technology Kanpur, India. From July 2018 to January 2019, he visited The University of Edinburgh, Edinburgh, United Kingdom. From February 2019 to February 2020, he was a research associate at Heriot-Watt University, Edinburgh, United Kingdom. Since February 2020, he has been a research associate at The University of Edinburgh. His main research interests include wireless communications, signal processing, optimization, and machine learning.

Abhijeet Bishnu (Member, IEEE) received the B.Eng. degree in electronics and communication engineering from the Technocrat Institute of Technology, Bhopal, India, in 2010, the M.Eng. degree in electronics and telecommunication engineering from S.G.S.I.T.S. Indore, India, in 2013, and the Ph.D.
degree in Electrical Engineering from the Indian Institute of Technology Indore, India, in 2019. He was also a visiting research scholar at The University of Edinburgh, Edinburgh, United Kingdom, in 2019. He is currently a postdoctoral research associate at The University of Edinburgh. His research interests include channel estimation, cognitive radio, MIMO-OFDM systems, full-duplex communication, and 5G-and-beyond communication. He has served as a reviewer for many IEEE and Springer journals.

Mark Holm (Member, IEEE) received the B.S. degree (Hons.) in laser physics and optoelectronics and the Ph.D. degree in physics from the University of Strathclyde, in 1997 and 2001, respectively. He currently works as the Technical Lead and a Hardware System Architect with Huawei Technologies (Sweden) AB, with interests in microwave radio, phased-array antennas, full-duplex radio systems, and photonic radios. In the past, he was the Microwave Lead on AESA radar systems, a Senior Engineer responsible for GaAs pHEMT modeling, and a Laser and Package Design Engineer for SFP/XENPAK fiber modules. He has published in the fields of laser design and GaAs device modeling.

Tharmalingam Ratnarajah (Senior Member, IEEE) is currently with the Institute for Digital Communications, The University of Edinburgh, Edinburgh, UK, as a Professor in Digital Communications and Signal Processing. He was Head of the Institute for Digital Communications during 2016-2018. His research interests include signal processing and information-theoretic aspects of beyond-5G wireless networks, full-duplex radio, mmWave communications, random matrix theory, interference alignment, statistical and array signal processing, and quantum information theory. He has published over 400 publications in these areas and holds four U.S. patents. He has supervised 16 Ph.D. students and 21 postdoctoral research fellows and has raised $11+ million USD of research funding.
He was the coordinator of the EU projects ADEL (3.7M €) in the area of licensed shared access for 5G wireless networks and HARP (4.6M €) in the area of highly distributed MIMO, as well as the EU Future and Emerging Technologies projects HIATUS (3.6M €) in the area of interference alignment and CROWN (3.4M €) in the area of cognitive radio networks. Dr. Ratnarajah was an Associate Editor of the IEEE Transactions on Signal Processing (2015-2017) and Technical Co-Chair of the 17th IEEE International Workshop on Signal Processing Advances in Wireless Communications, Edinburgh, UK, July 3-6, 2016. Dr. Ratnarajah is a Fellow of the Higher Education Academy (FHEA).
# Nonlinear Quantum Electrodynamics in Dirac materials

Aydın Cem Keser
School of Physics, University of New South Wales, Sydney, NSW 2052, Australia
Australian Research Council Centre of Excellence in Low-Energy Electronics Technologies, The University of New South Wales, Sydney 2052, Australia

Yuli Lyanda-Geller
Department of Physics and Astronomy, Purdue University, West Lafayette, IN 47907, USA

Oleg P. Sushkov
School of Physics, University of New South Wales, Sydney, NSW 2052, Australia
Australian Research Council Centre of Excellence in Low-Energy Electronics Technologies, The University of New South Wales, Sydney 2052, Australia

###### Abstract

Classical electromagnetism is linear. However, fields can polarize the vacuum Dirac sea, causing quantum nonlinear electromagnetic phenomena, e.g., scattering and splitting of photons, that occur only in the very strong fields found in neutron stars or heavy-ion colliders. We show that strong nonlinearity arises in Dirac materials at much lower fields, $\sim 1\>\text{T}$, allowing us to explore the nonperturbative, extremely-high-field limit of quantum electrodynamics in solids. We explain recent experiments in a unified framework and predict a new class of nonlinear magneto-electric effects, including a magnetic enhancement of the dielectric constant of insulators and a strong electric modulation of magnetization. We propose experiments and discuss applications in novel materials.

Classical electromagnetism is linear and hence supports the principle of superposition. It was pointed out by Heisenberg and Euler in 1936 that, due to quantum mechanical effects and the presence of the Dirac sea, linearity ceases to hold in strong fields Heisenberg and Euler (1936). Quantum electrodynamics (QED) is therefore nonlinear, as the electromagnetic field polarizes the Dirac sea as though it were a material medium.
This effect becomes significant at electric and magnetic fields $E_{\star}\simeq 1.3\times 10^{16}\>\text{V}/\text{cm}$, $B_{\star}\simeq 4.4\times 10^{9}\>\text{T}$, at which the Zeeman splitting and the electric potential drop over the Compton wavelength become comparable to the electron rest energy. These are the so-called Schwinger critical values Schwinger (1951), and they are enormous on the laboratory scale. Such fields exist only in exotic environments, e.g., neutron stars Kaspi and Beloborodov (2017) and heavy-ion colliders Tuchin (2013). Nevertheless, some low-order nonlinear QED effects, such as scattering or splitting of photons, have been observed in the laboratory Marklund and Shukla (2006); Akhmadaliev _et al._ (1998), and probing strong-field effects is an active area of research Fedeli _et al._ (2021). Dirac materials have been known for decades Cohen and Blount (1960); Lax _et al._ (1960); Wolff (1964). Nevertheless, their recently understood topological properties and surface excitations have led to a surge of interest Qi _et al._ (2008); Zhang _et al._ (2009); Hasan and Kane (2010); Ando (2013); Wehling _et al._ (2014); Culcer _et al._ (2020). The nonlinear electromagnetic response of Dirac materials has been studied Sodemann and Fu (2015); Matsyshyn and Sodemann (2019); Culcer _et al._ (2020); Rostami and Cappelluti (2020a, b); Bhalla _et al._ (2020); Du _et al._ (2020); Morimoto and Nagaosa (2016) for its transport properties (e.g., rectification) and possible applications in photovoltaics. In this letter, rather than transport, we study the dielectric and magnetization response of three-dimensional (3D) Dirac insulators and semimetals due to the Dirac vacuum, i.e., the filled valence band.
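For orientation, the Schwinger critical values quoted above follow directly from the fundamental constants. The following is a quick numerical sketch (not part of the original text) in SI units, where the critical field takes the standard form $E_{\star}=m_{e}^{2}c^{3}/(e\hbar)$ with $B_{\star}=E_{\star}/c$:

```python
# Sketch (SI units): the QED Schwinger critical fields quoted in the text,
# E* = m_e^2 c^3 / (e hbar) and B* = E*/c, from CODATA constants.
m_e  = 9.1093837015e-31   # electron mass [kg]
c    = 2.99792458e8       # speed of light [m/s]
e    = 1.602176634e-19    # elementary charge [C]
hbar = 1.054571817e-34    # reduced Planck constant [J s]

E_star = m_e**2 * c**3 / (e * hbar)   # critical electric field [V/m]
B_star = E_star / c                   # critical magnetic field [T]
print(f"E* = {E_star/100:.2e} V/cm")  # ~1.3e16 V/cm
print(f"B* = {B_star:.2e} T")         # ~4.4e9 T
```

The values reproduce the $1.3\times 10^{16}\>\text{V/cm}$ and $4.4\times 10^{9}\>\text{T}$ quoted in the text.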
We include nonlinear contributions to all orders by nonperturbatively analyzing the Heisenberg-Euler action Heisenberg and Euler (2006); Akhiezer and Berestetskii (1965); Berestetskii _et al._ (1982); Peskin and Schroeder (2005); Dunne, going beyond both known results of QED and the general framework in condensed matter physics. (High-order nonlinear electromagnetic effects can also emerge in two-dimensional materials Katsnelson _et al._ (2013), but they are unrelated to the effects we discuss here, due to dimensionality considerations. For applications to SU(N) gauge fields see Ref. Redlich (1984).)

Figure 1: Diagrammatic representation of the Heisenberg-Euler action $\delta L_{1}+\delta L_{HE}$ (Eqs. (6) and (9)). The dashed line corresponds to the constant external electromagnetic fields $B$, $E$, and the solid line is the Green function of a Dirac sea electron.

In known Dirac materials, we find that the typical values of the Schwinger fields, $E_{\star}\sim 10^{5}\>\text{V}/\text{cm}$, $B_{\star}\sim 1\>\text{T}$, are easily accessible, providing a platform to explore the strong-field regime of QED and to observe quantum nonlinear electromagnetic effects in the laboratory. The nonlinear effects contribute to the experimentally observed high-field magnetization in recent work on the Weyl semimetal TaAs Zhang _et al._ (2019) and the Dirac semimetal Bi Iwasa _et al._ (2019), but the importance of this observation and its origin in the Heisenberg-Euler effect had not been recognized. In the present work we demonstrate this connection and show that the data Zhang _et al._ (2019); Iwasa _et al._ (2019) agree with our predictions. More importantly, we predict a new class of magneto-electric effects. The most significant is a magnetic-field-tunable, very large enhancement of the dielectric constant, reaching up to $\delta\epsilon_{r}\sim 10$ per every $1\>\text{T}$ of applied magnetic field.
We also predict an electric-field-modulated magnetization. Both these effects are highly anisotropic, that is, they depend on the relative orientation of the E- and B-fields and on their crystallographic orientation. In a material, the classical Lagrangian of the electromagnetic field is (in CGS units; in an anisotropic material $\bm{\epsilon},\bm{\mu}$ are symmetric $3\times 3$ matrices; some QED textbooks use the convention opposite to the condensed matter one, see the footnote in ${\S}$ 129 of Ref. Berestetskii _et al._ (1982); in this paper $\bm{B}=\bm{\mu}\bm{H}$ is the magnetic flux density and $\bm{E}=\bm{\epsilon}^{-1}\bm{D}$ is the screened electric field)

$\displaystyle L_{cl}=\frac{1}{8\pi}(\bm{E}^{T}\bm{\epsilon}\bm{E}-\bm{B}^{T}\bm{\mu}^{-1}\bm{B}).$ (1)

One of our principal results is the quantum one-loop, nonlinear, nonperturbative contribution to the Lagrangian

$\delta L_{HE}\to\frac{\Delta}{24\pi^{2}\lambdabar_{D}^{3}}\left[({\bf b}\cdot{\bf e})^{2}|{\bf b}|^{-1}+|\mathbf{b}|^{2}\ln|{\bf b}|\right],$ (2)

in a strong B- and weak E-field (see further below and also Sec. S6 of the supplement). Here, the dimensionless vectors $\mathbf{e}$ and $\mathbf{b}$ depend on the fine structure constant $\alpha_{D}=e^{2}/\hbar v$ of the Dirac material:

${\bf e}(\alpha_{D})=\frac{\bm{\mathcal{U}E}}{E_{\star}(\alpha_{D})},\quad{\bf b}(\alpha_{D})=\frac{\bm{\mathcal{U}^{-1}B}}{B_{\star}(\alpha_{D})},$ (3)

and the critical 'Schwinger' electric $E_{\star}(\alpha_{D})$ and magnetic $B_{\star}(\alpha_{D})$ fields in the material are defined by

$E_{\star}^{2}(\alpha_{D})=\frac{v^{2}}{c^{2}}B_{\star}^{2}(\alpha_{D})=\frac{\Delta}{\alpha_{D}\lambdabar_{D}^{3}}=\left(\frac{\Delta^{2}}{e\hbar v}\right)^{2}.$ (4)

Eqs. (2), (3), and (4) account for the anisotropy of real materials Aronov and Pikus (1967), for which the velocity tensor is a $3\times 3$ symmetric matrix Wolff (1964), $\bm{\mathcal{V}}=v\bm{\mathcal{U}}$, with $\text{det}(\bm{\mathcal{U}})=1$.
The terms $\bm{\mathcal{U}E}$ and $\bm{\mathcal{U}^{-1}B}$ are linear transformations of $\bm{E}$ and $\bm{B}$, respectively (see Supplement Sec. S7 Sup ).

| | $2\Delta$ [meV] | ${\alpha_{D}}/{\alpha}$ | $E_{\star}$ [V/cm] | $B_{\star}$ [mT] |
|---|---|---|---|---|
| QED $(\Delta=m_{e}c^{2})$ | $10^{9}$ Thomson (1897); Millikan; Tiesinga _et al._ (2021) | 1 | $1.3\times 10^{16}$ | $4.4\times 10^{12}$ |
| $\text{Pb}_{0.5}\text{Sn}_{0.5}\text{Te}$ | $63$ Dziawa _et al._ (2012) | 580 | $2.9\times 10^{4}$ | $5.6\times 10^{3}$ |
| $\text{Bi}_{0.9}\text{Sb}_{0.1}$ | $15.5$ Liu and Allen (1995); Hsieh _et al._ (2008) | 188 | 571 | 36 |
| TaAs | $0$ | 357 | 0 | 0 |

Table 1: Comparison of parameters, including the band gap ($2\Delta$), the effective fine structure constant $\alpha_{D}=\frac{e^{2}}{\hbar v}$ expressed through the ratio $\alpha_{D}/\alpha=c/v$, and the Schwinger fields of Eq. (4). We have defined the symbols in Eqs. (3), (4) according to the conventions of QED. The "Dirac wavelength" $\lambdabar_{D}=\frac{\hbar v}{\Delta}$ and the "Dirac magneton" $\mu_{D}=\frac{e\hbar v^{2}}{2\Delta c}$ replace the Compton wavelength and the Bohr magneton, respectively.

When the fields reach the 'Schwinger scale', the Zeeman splitting and the potential difference over $\lambdabar_{D}$ equal half of the Dirac band gap:

$\displaystyle 2\mu_{D}B_{\star}=\lambdabar_{D}eE_{\star}=\Delta,$ (5)

and the nonlinearity becomes relevant. In Table 1 we list the material parameters considered in this work. For more details see Sec. S2 and Table S2 of the Supplement Sup .

Figure 2: a) Nonlinear diamagnetic susceptibility $\chi_{ref}-\chi$ versus magnetic field, with $\chi_{ref}=\chi(50\>\text{T})$ in Bi and $\chi_{ref}=\chi(30\>\text{T})$ in TaAs. In TaAs (blue) the field is along the crystal c-direction, and in Bi there are two directions, binary (red) and bisectrix (green). The points represent numerical differentiation of the TaAs and Bi magnetization data from Refs.
Zhang _et al._ (2019) and Iwasa _et al._ (2019), respectively. The experimental points are connected by dashed lines for guidance. Solid lines represent our theory. The red solid line is our prediction for $\text{Bi}_{0.9}\text{Sb}_{0.1}$. b) Predicted variation of the dielectric constant in a magnetic field, for parallel (perpendicular) field configurations shown by solid (dashed) lines. c) Predicted electric-field-modulated magnetization as a function of applied magnetic field, along the magnetic field direction, at $E=0.3E_{\star}$.

The quantum contribution to the Lagrangian can be viewed as the sum of the infinite chain of one-loop diagrams in Fig. 1 that represent the polarization of the Dirac sea of electrons by external electric and magnetic fields. In this work we consider only non-magnetic crystals with inversion symmetry Fu and Kane (2007); Bansil _et al._ (2016). (TaAs is a non-centrosymmetric crystal, but since it is gapless we take $E=0$ to avoid transport. TaAs is non-magnetic, and time reversal symmetry requires that the effective action contain even powers of B only.) We assume the static/quasistatic approximation, $\omega,kv\ll\Delta$, where $\omega$ and $k$ are the frequency and the wave number of the external fields. Therefore our diagrams, Fig. 1, have only even numbers of external E-lines and B-lines. Besides the diagrams in Fig. 1, there are also multi-loop diagrams, suppressed by a factor of $\alpha_{D}/\epsilon\sim 0.03$ per additional loop, where $\epsilon$ is the large dielectric constant mainly due to the lattice and intra-ionic polarization. For a discussion of the suppression of the multi-loop diagrams in the context of the phenomena considered here, see Sec. S3 in the supplement and also Refs. Sham (1966); Sham and Rice (1966). In Fig.
1, the first diagram, quadratic in the external fields, is ultraviolet divergent and is equal to Akhiezer and Berestetskii (1965)

$\delta L_{1}=\frac{\Delta}{12\pi^{2}\lambdabar_{D}^{3}}\ln\left(\frac{\Lambda}{\Delta}\right)\left(|\bf{e}|^{2}-|\bf{b}|^{2}\right).$ (6)

Here the subscript '1' indicates the contribution from the first diagram in Fig. 1, and $\Lambda\sim v\frac{\hbar\pi}{a}\sim 1\>\text{eV}$ is the ultraviolet cut-off energy, where $a$ is the lattice spacing. In QED this diagram describes the electric permittivity and magnetic permeability of the vacuum, and thus it is included in the definitions of the electric charge and the electromagnetic fields. As a result, $\delta L_{1}$ does not appear explicitly in QED. However, for Dirac materials $\delta L_{1}$ is an explicit contribution that has to be added to the classical Lagrangian Eq. (1). Indeed, this is the contribution of the Dirac sea (valence band) to the dielectric constant and the magnetic susceptibility. Equating $(E^{2}-B^{2})/(8\pi)+\delta L_{1}$ to the classical Lagrangian (1), we find the linear dielectric constant $\bm{\epsilon_{D}}$ and the linear magnetic susceptibility $\bm{\chi_{D}}$ ($\bm{\mu}=\bm{1}+4\pi\bm{\chi}$):

$\displaystyle\bm{\epsilon_{D}}=\bm{1}+\frac{2\alpha_{D}}{3\pi}\ln\left(\frac{\Lambda}{\Delta}\right)\bm{\mathcal{U}}^{2},\quad\epsilon_{D}\sim 3,$ (7)

$\displaystyle\bm{\chi_{D}}=-\frac{\alpha_{D}}{6\pi^{2}}\frac{v^{2}}{c^{2}}\ln\left(\frac{\Lambda}{\Delta}\right)\bm{\mathcal{U}}^{-2},\quad\chi_{D}\sim-10^{-6},$ (8)

where the estimates are given for the diagonalized tensors. Eqs. (7) and (8) define the Dirac contributions to the total dielectric and magnetic susceptibilities. The contribution (7) is relatively small compared to the total relative permittivity $\bm{\epsilon}$ in Eq. (1), typically $\epsilon\sim 100$, which is primarily due to the ionic (lattice) and intra-ionic contributions (see Supplement Table S3).
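The quoted orders of magnitude $\epsilon_{D}\sim 3$ and $\chi_{D}\sim-10^{-6}$ can be checked numerically. The following is an isotropic sketch (assuming $\bm{\mathcal{U}}=\bm{1}$, $\Lambda\sim 1\>\text{eV}$ as stated above, and the $\text{Bi}_{0.9}\text{Sb}_{0.1}$ parameters of Table 1, $2\Delta=15.5$ meV and $\alpha_{D}/\alpha=188$); it is not from the paper itself:

```python
import math

# Isotropic evaluation of Eqs. (7)-(8) for Bi0.9Sb0.1 (assumed U = 1).
alpha    = 1 / 137.035999            # fine structure constant
alpha_D  = 188 * alpha               # alpha_D = (c/v) * alpha, Table 1
v_over_c = 1 / 188                   # Dirac velocity in units of c
log_cut  = math.log(1.0 / 7.75e-3)   # ln(Lambda/Delta), Lambda ~ 1 eV, Delta = 7.75 meV

eps_D = 1 + (2 * alpha_D / (3 * math.pi)) * log_cut            # Eq. (7)
chi_D = -(alpha_D / (6 * math.pi**2)) * v_over_c**2 * log_cut  # Eq. (8)
print(f"eps_D ~ {eps_D:.1f}")    # same order as the quoted eps_D ~ 3
print(f"chi_D ~ {chi_D:.1e}")    # same order as the quoted chi_D ~ -1e-6
```

Both numbers come out at the orders of magnitude stated in Eqs. (7) and (8).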
The magnetic response (8) constitutes a significant part of the diamagnetic susceptibility, which also has contributions from lower bands and core electrons. For bismuth, the Dirac valence band contribution (8) has been previously considered in Ref. Fuseya _et al._ (2015). We now describe the nonlinear effects. The diagrams in Fig. 1 beyond the first one ($n\geq 2$) are convergent at arbitrarily large $\mathbf{|e|,|b|}$ (as long as $\mathbf{|e|,|b|}\ll\Lambda^{2}/\Delta^{2}$, otherwise the effective Dirac Hamiltonian no longer applies; see Sec. S5) and are resummed exactly (the exact result is obtained by directly summing the energies of Landau levels Akhiezer and Berestetskii (1965); Berestetskii _et al._ (1982), or, in modern language, by zeta-function regularization Dunne ) to yield the one-loop, nonperturbative Heisenberg-Euler action

$\displaystyle\delta L_{HE}$ $\displaystyle=\sum_{n=2}^{\infty}\delta L_{n}\equiv\frac{-\Delta}{8\pi^{2}\lambdabar_{D}^{3}}\int_{0}^{\infty}\frac{d\eta e^{-\eta}}{\eta}$ $\displaystyle\times\bigg{[}A_{-}$ $\displaystyle\cot(\eta A_{-})A_{+}\cot(\eta A_{+})-\frac{1}{\eta^{2}}+\frac{1}{3}(A_{-}^{2}+A_{+}^{2})\bigg{]},$ $\displaystyle A_{\mp}$ $\displaystyle=-\frac{i}{2}\left[\sqrt{({\bf b}+i{\bf e})^{2}}\mp\sqrt{({\bf b}-i{\bf e})^{2}}\right],$ (9)

which accounts for crystal anisotropy, cf. Eq. (3), as well as the strong-field behavior. The imaginary part of Eq. (9), obtained via its analytic continuation, captures the electric breakdown, which can be avoided in weak electric fields $|{\bf e}|<1$ ($E<E_{\star}$). Then Eq. (9) can be expanded in powers of $\mathbf{e}$. However, the magnetic field can be much larger than $B_{\star}$, leading to the asymptotic expression Eq. (2) (see Supplement Sec. S6). At weak magnetic fields, $|{\bf e}|,|{\bf b}|\ll 1$, Eq.
(9) reduces to the second diagram in Fig. 1,

$\displaystyle\delta L_{2}=\frac{\Delta}{360\pi^{2}\lambdabar_{D}^{3}}\left[\left(|\mathbf{e}|^{2}-|\mathbf{b}|^{2}\right)^{2}+7\left({\bf e}\cdot{\bf b}\right)^{2}\right].$ (10)

At $E=0$, the nonlinear magnetic susceptibility is

$\displaystyle\bm{\delta\chi}=\frac{\partial^{2}\delta L_{HE}}{\partial\bm{B}\partial\bm{B}}=\bm{\mathcal{U}}^{-2}\frac{\alpha_{D}}{12\pi^{2}}\frac{v^{2}}{c^{2}}F(|{\bf b}|);$ $\displaystyle F(|{\bf b}|)=\frac{2}{5}|{\bf b}|^{2},\ \ |{\bf b}|\ll 1$ $\displaystyle F(|{\bf b}|)=\ln|{\bf b}|,\ \ \ |{\bf b}|\gg 1.$ (11)

The dimensionless function $F(|{\bf b}|)$ over the full range of magnetic fields, obtained by numerical integration of Eq. (9), is shown in Supplement Fig. S2. The strong- and weak-field limits of $F$ follow from the actions given by Eqs. (2) and (10), respectively. The total magnetic susceptibility of the Dirac valence band is the sum of the linear susceptibility, Eq. (8), and the nonlinear contribution, $\bm{\chi}=\bm{\chi_{D}}+\bm{\delta\chi}$. When $|{\bf b}|\gg 1$ we have

$\bm{\chi}=-\bm{\mathcal{U}}^{-2}\frac{\alpha_{D}}{12\pi^{2}}\frac{v^{2}}{c^{2}}\ln\left(\frac{c\Lambda^{2}}{e|\bm{\mathcal{U}^{-1}B}|\hbar v^{2}}\right).$ (12)

Here $\bm{\chi}$ depends on $B$ but not on $\Delta$, and is well defined in the limit $\Delta=0$, as in the Weyl semimetal TaAs Zhang _et al._ (2019). According to Eqs. (11) and (12), the magnetic susceptibility is nonlinear, i.e., it depends on the magnetic field. Remarkably, this Dirac nonlinearity has recently been observed, but its connection to nonlinear electrodynamics was not identified. Here we show its origin in the Heisenberg-Euler effect. The magnetization of the Weyl semimetal TaAs has been measured up to $B=30$ T, Ref. Zhang _et al._ (2019), and the magnetization of the Dirac semimetal Bi has been measured up to $B=60$ T, Ref.
Iwasa _et al._ (2019). In Zhang et al. Zhang _et al._ (2019) the valence band contribution to the magnetization at $\bm{E}=0$ was considered (however, our approach, from the vantage point afforded by the transformation of the Heisenberg-Euler action under dilation and contraction of space (see Sec. S1), leads to a more accurate overall factor), and in the high magnetic field limit the magnetization quasi-linear in the applied B-field was investigated. Here we study the universal nonlinear susceptibility and eliminate all uncertainties such as the choice of the ultraviolet cut-off $\Lambda$, subleading terms, and contributions from other bands or core electrons. Both TaAs and Bi have nonzero chemical potential and hence have conduction electrons. Therefore, at weak magnetic fields both compounds show magnetic oscillations. The conduction electrons freeze out and the oscillations disappear at $B>5$ T in Bi Iwasa _et al._ (2019) and $B>10$-$13$ T in TaAs Zhang _et al._ (2019). In these ranges of B we can compare the data with our predictions. In Fig. 2(a) the points show the magnetic susceptibilities of TaAs (c-direction) and Bi (binary and bisectrix directions). The points have significant spread as they are obtained by numerical differentiation of the experimental magnetizations from Refs. Zhang _et al._ (2019); Iwasa _et al._ (2019). To focus on the nonlinearity, we plot $\chi_{ref}-\chi$, where $\chi_{ref}=\chi(B=30\>\text{T})$ for TaAs and $\chi_{ref}=\chi(B=50\>\text{T})$ for Bi. Solid curves present our theoretical predictions, which are manifestly consistent with the data. For a discussion of material-specific details, anisotropy, etc., see Supplement Sec. S8. Interestingly, the $\text{Bi}_{0.9}\text{Sb}_{0.1}$ alloy has a band structure very close to that of Bi, but with no conduction electrons Liu and Allen (1995); Hsieh _et al._ (2008), and could be an ideal test platform for our theory. The susceptibility of this compound has not been measured yet, and the solid red curve in Fig.
2(a) shows our theoretical prediction. We now consider novel magneto-electric effects. The nonlinear dielectric constant is

$\displaystyle\bm{\delta\epsilon_{D}}=4\pi\frac{\partial^{2}\delta L_{HE}}{\partial\bm{E}\partial\bm{E}}=\bm{\mathcal{U}}^{2}\frac{\alpha_{D}}{3\pi}G_{i}(|{\bf b}|);$ (13) $\displaystyle|{\bf b}|\ll 1:\ \ G_{||}(|{\bf b}|)=\frac{1}{3}|{\bf b}|^{2}\ ,\ \ \ \ G_{\perp}(|{\bf b}|)=-\frac{2}{15}|{\bf b}|^{2}$ $\displaystyle|{\bf b}|\gg 1:\ \ G_{||}(|{\bf b}|)=|{\bf b}|\ ,\ \ \ \ \ \ \ G_{\perp}(|{\bf b}|)=-\ln(|{\bf b}|).$

Here the index $i=||,\perp$ denotes the relative orientation of $\mathbf{e}$ and $\mathbf{b}$. The dimensionless functions $G_{i}(|{\bf b}|)$ over the whole range of $\bf{b}$, obtained by numerical integration of Eq. (9), are plotted in Supplement Fig. S2. Their strong- and weak-field limits follow from the actions given by Eqs. (2) and (10), respectively. The dependence of the dielectric constant on the applied magnetic field is a novel magneto-electric effect. For ${\mathbf{b}}\parallel{\mathbf{e}}$ the contribution $\delta\epsilon_{D}$ is positive and can be very large, while for ${\mathbf{b}}\perp{\mathbf{e}}$ the contribution $\delta\epsilon_{D}$ is negative. The expressions for an arbitrary angle between ${\mathbf{b}}$ and ${\mathbf{e}}$, and the relation to the angle between the applied fields $\bm{B}$ and $\bm{E}$, which is in general different due to the properties of the anisotropy transformation, are given in Sec. S7. Furthermore, according to Eqs. (1) and (10), there is a nonlinear contribution quadratic in the electric field,

$\bm{\delta\epsilon_{D}}(\bm{E})=\bm{\mathcal{U}}^{2}\frac{2\alpha_{D}}{15\pi}|\mathbf{e}|^{2},\quad|\mathbf{b}|=0,$ (14)

which is suppressed by $|\mathbf{e}|^{2}/|\mathbf{b}|$ when $|\mathbf{b}|\gg 1$. Notably, at $|\mathbf{e}|,|\mathbf{b}|\ll 1$, the contributions (13) and (14) add up.
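To gauge the size of the effect, one can evaluate the strong-field limits of Eq. (13) at $B=1\>\text{T}$. The following is an isotropic sketch (assuming $\bm{\mathcal{U}}=\bm{1}$ and taking $B_{\star}=36$ mT for $\text{Bi}_{0.9}\text{Sb}_{0.1}$ from Table 1); the full anisotropic result differs by the $\bm{\mathcal{U}}^{2}$ factor:

```python
import math

# Strong-field limits of Eq. (13): G_par(|b|) = |b|, G_perp(|b|) = -ln|b|.
alpha_D = 188 / 137.035999   # Bi0.9Sb0.1, from Table 1
b = 1.0 / 0.036              # |b| = B/B* at B = 1 T, with B* = 36 mT (|b| >> 1)

d_eps_par  = (alpha_D / (3 * math.pi)) * b             # e parallel to b
d_eps_perp = -(alpha_D / (3 * math.pi)) * math.log(b)  # e perpendicular to b
print(f"delta_eps (parallel)      ~ {d_eps_par:.1f}")
print(f"delta_eps (perpendicular) ~ {d_eps_perp:.2f}")
```

This isotropic estimate gives an enhancement of a few units per tesla in the parallel configuration, consistent in order of magnitude with the $\delta\epsilon_{D}\sim 10$ per tesla quoted for the favorable orientation, and a small negative shift in the perpendicular one.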
The magnetic field induced variation of the dielectric constant in Eq. (13) scales as $\delta\epsilon_{D}\propto 1/B_{\star}\propto\Delta^{-2}$. Thus, the effect is most significant in small band-gap Dirac insulators. In Fig. 2(b) we plot our predictions for $\text{Bi}_{0.9}\text{Sb}_{0.1}$. For $\mathbf{e}||\mathbf{b}$ the effect is enormous, $\delta\epsilon_{D}\sim 10$/Tesla. For $\bm{E}\perp\bm{B}$ the effect is smaller and has a negative sign. In the same Fig. 2(b) we also plot predictions for $\delta\epsilon_{D}$ in $\text{Pb}_{0.5}\text{Sn}_{0.5}\text{Te}$. This compound has a larger gap, and therefore the effect is smaller, but still observable. One more novel magneto-electric effect is the dependence of the magnetization on the applied electric field. The electric field dependent magnetization ${\bm{M}^{(\mathbf{e})}}=\frac{\partial\delta L}{\partial{\bm{B}}}$, in units of “Dirac magnetons” per “Dirac volume”, reads $\displaystyle 4\pi{\bm{M}^{(\mathbf{e})}}=\frac{\bm{\mathcal{U}}^{-1}\mathbf{b}}{|\mathbf{b}|}\frac{\mu_{D}}{3\pi\lambdabar_{D}^{3}}|{\bf e}|^{2}D_{i}(|{\bf b}|)$ $\displaystyle|{\bf b}|\ll 1:\ \ \ D_{||}(|{\bf b}|)=\frac{2}{3}|{\bf b}|\ ,\ \ \ D_{\perp}(|{\bf b}|)=-\frac{4}{15}|{\bf b}|$ $\displaystyle|{\bf b}|\gg 1:\ \ \ D_{||}(|{\bf b}|)=1\ ,\ \ \ \ \ \ \ D_{\perp}(|{\bf b}|)=-\frac{1}{|{\bf b}|}.$ The direction of the magnetization in a Dirac crystal is defined by the vector ${\bf b}$ and depends on the crystal anisotropy as described by Eq. (3). The dimensionless functions $D_{i}(|{\bf b}|)$ over the whole range of $\bf{b}$, obtained by numerical integration of the one-loop effective action, are plotted in Fig. S2 of the Supplementary material. For ${\mathbf{b}}\parallel{\mathbf{e}}$ the magnetization is large and paramagnetic, while for ${\mathbf{b}}\perp{\mathbf{e}}$ the magnetization is diamagnetic.
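The $D_{i}$ asymptotics admit the same kind of sketch (illustrative only; the crossover at $|\mathbf{b}|\sim 1$ again requires the full one-loop integral referenced above):

```python
# Asymptotic limits of D_i(|b|) from the magnetization formula above;
# the switch at b = 1 is schematic, not the true crossover.
def D_parallel(b):
    # paramagnetic branch: grows linearly, saturates at 1 for b >> 1
    return 2 * b / 3 if b < 1 else 1.0

def D_perp(b):
    # diamagnetic branch: opposite sign, decays as 1/b for b >> 1
    return -4 * b / 15 if b < 1 else -1.0 / b

for b in (0.1, 0.5, 5.0, 50.0):
    assert D_parallel(b) > 0 > D_perp(b)   # para- vs diamagnetic signs
assert D_parallel(50.0) == 1.0             # saturation at large |b|
```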
The magnetization above is quadratic in the applied electric field and, as a function of the magnetic field, saturates when $|{\bf b}|\gg 1$. To enhance the magnetization one needs as strong an electric field as possible. However, the field is limited by the dielectric strength $E_{d}$ of the material, beyond which dielectric breakdown occurs. The breakdown probability (the rate of Zener tunneling by the electric field per unit volume) is obtained from the effective action Berestetskii _et al._ (1982) and found to be $P\propto|{\mathbf{e}}|^{2}e^{-\pi/|{\mathbf{e}}|}$ (see Sec. S4). Most important here is the exponential dependence, which applies universally to both the Dirac spectrum and a quadratic dispersion. Thus, one expects that $E_{d}$ is proportional to $E_{\star}$. Taking two band insulators, diamond ($2\Delta\approx 5.5$ eV, $E_{d}\approx 10^{7}$ V/cm) and silicon ($2\Delta\approx 1.14$ eV, $E_{d}\approx 3\times 10^{5}$ V/cm), as reference materials, we observe that the dielectric strength scales as $E_{d}\propto\Delta^{2}$. Therefore $E_{d}$ is a fixed fraction of $E_{\star}$. Significant $E$-dependent magnetic effects can then be observed for $|\mathbf{e}|=0.1\text{-}0.3$ (there are, of course, other factors, like the dependence of the dielectric strength on impurities, the size of the sample, etc.). Furthermore, as usual in solids, setups with huge built-in electric fields in the insulating regime can be explored Calawa _et al._ (1960); Rediker and Calawa (1961). For a fixed ${\bf e}=E/E_{\star}$, the electric-field-modulated magnetization obeys $M^{(\mathbf{e})}\propto B_{\star}\propto\Delta^{2}$, so materials with a large gap are preferable, unlike for the dependence of the dielectric constant on the magnetic field. In Fig.
2(c) we plot the predicted magnetization for $\text{Pb}_{0.5}\text{Sn}_{0.5}\text{Te}$ versus the magnetic field at $E=10^{4}\>\text{V/cm}$, which corresponds to ${\bf e}\approx 0.3$. For both fields ${\bf e}$ and ${\bf b}$ parallel to the c-axis, the electric field driven magnetization is $4\pi M^{(\mathbf{e})}\approx 0.2\>\mu\text{T}$ at $B=1\>\text{T}$. When $\bm{E\perp B}$, the magnetization changes sign, see Fig. 2(c). In the same figure we also plot the magnetization in $\text{Bi}_{0.9}\text{Sb}_{0.1}$ for ${\bf e}\approx 0.3$. Here the effect is smaller due to the smaller Dirac gap. The electric field driven magnetization in $\text{Bi}_{0.9}\text{Sb}_{0.1}$ ($4\pi M^{(\bf e)}\sim 10^{-8}$ T) and in $\text{Pb}_{0.5}\text{Sn}_{0.5}\text{Te}$ ($4\pi M^{(\mathbf{e})}\sim 2\times 10^{-7}$ T) can feasibly be detected in lock-in experiments, in an applied electric field having a constant and an AC component (with frequency $\omega$). The induced magnetization is then characterized by contributions modulated at frequencies $\omega$ and $2\omega$. Of course, the condition $\hbar\omega\ll 2\Delta$ is assumed fulfilled. Experiments aimed at observing $M^{(\mathbf{e})}$ could also take advantage of SQUID magnetometry, sensitive to magnetizations as low as $10^{-15}\>\text{T}/\sqrt{\text{Hz}}$ Gramolin _et al._ (2021), much lower than the predicted values. In conclusion, based on the Heisenberg-Euler theory of the physical vacuum, we develop a theory of nonlinear electromagnetic effects in Dirac materials. We explain the results of two recent experiments on the nonlinear contribution to the magnetization of Dirac materials. We predict two novel magneto-electric effects and discuss possible experiments and materials for their observation. ###### Acknowledgements. We thank M. O’Brien, H. Takagi, A. O. Sushkov, V. M. Shabaev, J. Seidel, A. R. Hamilton, U. Zuelicke and Y. Ashim for useful comments. YLG was supported by the U.S.
Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE- SC0010544. He also acknowledges the Gordon Godfrey bequest for the support of his visit to UNSW. ACK and OS acknowledge the support from the Australian Research Council Centre of Excellence in Future Low Energy Electronics Technologies (CE170100039). ### References * Heisenberg and Euler (1936) W. Heisenberg and H. Euler, Zeitschr. Phys. , 714 (1936). * Heisenberg and Euler (2006) W. Heisenberg and H. Euler, “Consequences of dirac theory of the positron,” (2006), arXiv:physics/0605038 [physics.hist-ph] . * Schwinger (1951) J. Schwinger, Phys. Rev. 82, 664 (1951). * Kaspi and Beloborodov (2017) V. M. Kaspi and A. M. Beloborodov, Annual Review of Astronomy and Astrophysics 55, 261 (2017), https://doi.org/10.1146/annurev-astro-081915-023329 . * Reisenegger (2001) A. Reisenegger, “Magnetic fields of neutron stars: an overview,” (2001), arXiv:astro-ph/0103010 [astro-ph] . * Tuchin (2013) K. Tuchin, Advances in High Energy Physics 2013, 490495 (2013). * Marklund and Shukla (2006) M. Marklund and P. K. Shukla, Rev. Mod. Phys. 78, 591 (2006). * Akhmadaliev _et al._ (1998) S. Z. Akhmadaliev, G. Y. Kezerashvili, S. G. Klimenko, V. M. Malyshev, A. L. Maslennikov, A. M. Milov, A. I. Milstein, N. Y. Muchnoi, A. I. Naumenkov, V. S. Panin, S. V. Peleganchuk, V. G. Popov, G. E. Pospelov, I. Y. Protopopov, L. V. Romanov, A. G. Shamov, D. N. Shatilov, E. A. Simonov, and Y. A. Tikhonov, Phys. Rev. C 58, 2844 (1998). * Fedeli _et al._ (2021) L. Fedeli, A. Sainte-Marie, N. Zaim, M. Thévenet, J. L. Vay, A. Myers, F. Quéré, and H. Vincenti, Phys. Rev. Lett. 127, 114801 (2021). * Cohen and Blount (1960) M. H. Cohen and E. I. Blount, The Philosophical Magazine: A Journal of Theoretical Experimental and Applied Physics 5, 115 (1960), https://doi.org/10.1080/14786436008243294 . * Lax _et al._ (1960) B. Lax, J. G. Mavroides, H. J. Zeiger, and R. J. Keyes, Phys. Rev. Lett. 
5, 241 (1960). * Wolff (1964) P. Wolff, Journal of Physics and Chemistry of Solids 25, 1057 (1964). * Qi _et al._ (2008) X.-L. Qi, T. L. Hughes, and S.-C. Zhang, Phys. Rev. B 78, 195424 (2008). * Zhang _et al._ (2009) H. Zhang, C.-X. Liu, X.-L. Qi, X. Dai, Z. Fang, and S.-C. Zhang, Nature Physics 5, 438 (2009). * Hasan and Kane (2010) M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010). * Ando (2013) Y. Ando, Journal of the Physical Society of Japan 82, 102001 (2013), https://doi.org/10.7566/JPSJ.82.102001 . * Wehling _et al._ (2014) T. Wehling, A. Black-Schaffer, and A. Balatsky, Advances in Physics 63, 1 (2014), https://doi.org/10.1080/00018732.2014.927109 . * Culcer _et al._ (2020) D. Culcer, A. C. Keser, Y. Li, and G. Tkachov, 2D Materials 7, 022007 (2020). * Sodemann and Fu (2015) I. Sodemann and L. Fu, Phys. Rev. Lett. 115, 216806 (2015). * Matsyshyn and Sodemann (2019) O. Matsyshyn and I. Sodemann, Phys. Rev. Lett. 123, 246602 (2019). * Rostami and Cappelluti (2020a) H. Rostami and E. Cappelluti, “Dominant role of two-photon vertex in nonlinear response of dirac materials,” (2020a), arXiv:2007.08282 [cond-mat.mes-hall] . * Rostami and Cappelluti (2020b) H. Rostami and E. Cappelluti, “Many-body effects in third harmonic generation of graphene,” (2020b), arXiv:2011.03824 [cond-mat.mes-hall] . * Bhalla _et al._ (2020) P. Bhalla, A. H. MacDonald, and D. Culcer, Phys. Rev. Lett. 124, 087402 (2020). * Du _et al._ (2020) Z. Z. Du, C. M. Wang, H.-P. Sun, H.-Z. Lu, and X. C. Xie, “Quantum theory of the nonlinear hall effect,” (2020), arXiv:2004.09742 [cond-mat.mes-hall] . * Morimoto and Nagaosa (2016) T. Morimoto and N. Nagaosa, Phys. Rev. B 93, 125125 (2016). * Akhiezer and Berestestskii (1965) A. Akhiezer and V. Berestestskii, _Quantum electrodynamics_ , Interscience Monographs and Texts in Physics and Astronomy No. v. XI (Interscience Publishers [1965], 1965). * Berestetskii _et al._ (1982) V. Berestetskii, E. Lifshitz, and L. 
Pitaevskii, _Quantum Electrodynamics: Volume 4_, Course of theoretical physics (Elsevier Science, 1982). * Peskin and Schroeder (2005) M. Peskin and D. Schroeder, _An Introduction to Quantum Field Theory_, Advanced book program (Levant Books, 2005). * (29) G. V. Dunne, “Heisenberg–euler effective lagrangians: Basics and extensions,” in _From Fields to Strings: Circumnavigating Theoretical Physics_, pp. 445–522. * Dunne (2004) G. V. Dunne, “Heisenberg–euler effective lagrangians: Basics and extensions,” (2004), arXiv:arXiv:hep-th/0406216 [hep-th] . * Note (1) The high order nonlinear electromagnetic effects can also emerge in two-dimensional materials Katsnelson _et al._ (2013), but is unrelated to the effects we discuss here, due to dimensionality considerations. For applications to SU(N) gauge fields see Ref. Redlich (1984). * Zhang _et al._ (2019) C.-L. Zhang, C. M. Wang, Z. Yuan, X. Xu, G. Wang, C.-C. Lee, L. Pi, C. Xi, H. Lin, N. Harrison, H.-Z. Lu, J. Zhang, and S. Jia, Nature Communications 10, 1028 (2019). * Iwasa _et al._ (2019) A. Iwasa, A. Kondo, S. Kawachi, K. Akiba, Y. Nakanishi, M. Yoshizawa, M. Tokunaga, and K. Kindo, Scientific Reports 9, 1672 (2019). * Note (2) In CGS units. In an anisotropic material $\bm{\epsilon},\bm{\mu}$ are symmetric $3\times 3$ matrices. Some QED textbooks are opposite to the condensed matter convention, see the footnote in ${\S}$ 129 of Ref. Berestetskii _et al._ (1982). In this paper $\bm{B}=\bm{\mu}\bm{H}$ is the magnetic flux density and, $\bm{E}=\bm{\epsilon}^{-1}\bm{D}$ is the screened electric field. * Aronov and Pikus (1967) A. G. Aronov and G. E. Pikus, JETP 24, 188 (1967). * (36) See Supplemental Material at [URL will be inserted by publisher] for details of the derivation of anisotropic 1-loop effective action and the calculation of nonlinear susceptibilites and a detailed comparison of our theory in different materials and QED. * Thomson (1897) J. J. Thomson, Philosophical Magazine 44, 293 (1897). * Millikan. 
(1913) R. A. Millikan., Phys. Rev. 2, 109 (1913). * Tiesinga _et al._ (2021) E. Tiesinga, P. J. Mohr, D. B. Newell, and B. N. Taylor, Journal of Physical and Chemical Reference Data 50, 033105 (2021), https://doi.org/10.1063/5.0064853 . * Dziawa _et al._ (2012) P. Dziawa, B. J. Kowalski, K. Dybko, R. Buczko, A. Szczerbakow, M. Szot, E. Łusakowska, T. Balasubramanian, B. M. Wojek, M. H. Berntsen, O. Tjernberg, and T. Story, Nature Materials 11, 1023 (2012). * Liu and Allen (1995) Y. Liu and R. E. Allen, Phys. Rev. B 52, 1566 (1995). * Hsieh _et al._ (2008) D. Hsieh, D. Qian, L. Wray, Y. Xia, Y. S. Hor, R. J. Cava, and M. Z. Hasan, Nature 452, 970 (2008). * (43) When the band gap is inverted, as in topological insulators, our analysis applies to $|\Delta|$. The additional boundary term in the effective action of the electromagnetic field, proportional to $\bm{E}\cdot\bm{B}$ in topological insulators Fröhlich (2018) and Weyl semimetals Burkov (2018); Raines and Galitski (2017); Rylands _et al._ (2021) does not affect the nonlinear response which we investigate in this letter. * Fu and Kane (2007) L. Fu and C. L. Kane, Phys. Rev. B 76, 045302 (2007). * Bansil _et al._ (2016) A. Bansil, H. Lin, and T. Das, Rev. Mod. Phys. 88, 021004 (2016). * Note (3) TaAs is non-centrosymmetric crystal, but since it is gapless, we take $E=0$ to avoid transport. TaAs is non-magnetic and the time reversal symmetry requires that the effective action contains even powers of B only. * Sham (1966) L. J. Sham, Phys. Rev. 150, 720 (1966). * Sham and Rice (1966) L. J. Sham and T. M. Rice, Phys. Rev. 144, 708 (1966). * Fuseya _et al._ (2015) Y. Fuseya, M. Ogata, and H. Fukuyama, Journal of the Physical Society of Japan 84, 012001 (2015), https://doi.org/10.7566/JPSJ.84.012001 . * Note (4) $\mathbf{|e|,|b|}\ll\Lambda^{2}/\Delta^{2}$, otherwise the effective Dirac Hamiltonian no longer applies. See Sec. S5. 
* Note (5) The exact result is obtained by directly summing the energies of Landau levels Akhiezer and Berestestskii (1965); Berestetskii _et al._ (1982), or in modern language, zeta-function regularization Dunne ; *Dunne_arxiv. * Note (6) However, our approach from the vantage point afforded by the transformation of Heisenberg-Euler action under dilation and contraction of space (See Sec. S1) leads to a more accurate overall factor. * (53) The angle between $\mathbf{e}$ and $\mathbf{b}$ in general is different to the one between applied fields $\bm{B}$ and $\bm{E}$. The transformation of the angle is given in Sec. S1. The susceptibilities for arbitrary angle between ${\mathbf{b}},{\mathbf{e}}$, are given in Sec. S6 and S7. * Note (7) Of course, there are other factors like the dependence of dielectric strength on impurities, the size of the sample, etc. * Calawa _et al._ (1960) A. R. Calawa, R. H. Rediker, B. Lax, and A. L. McWhorter, Phys. Rev. Lett. 5, 55 (1960). * Rediker and Calawa (1961) R. H. Rediker and A. R. Calawa, Journal of Applied Physics 32, 2189 (1961), https://doi.org/10.1063/1.1777040 . * Gramolin _et al._ (2021) A. V. Gramolin, D. Aybas, D. Johnson, J. Adam, and A. O. Sushkov, Nature Physics 17, 79 (2021). * Katsnelson _et al._ (2013) M. Katsnelson, G. Volovik, and M. Zubkov, Annals of Physics 331, 160 (2013). * Redlich (1984) A. N. Redlich, Phys. Rev. D 29, 2366 (1984). * Fröhlich (2018) J. Fröhlich, Reviews in Mathematical Physics 30, 1840007 (2018), https://doi.org/10.1142/S0129055X1840007X . * Burkov (2018) A. Burkov, Annual Review of Condensed Matter Physics 9, 359 (2018), https://doi.org/10.1146/annurev-conmatphys-033117-054129 . * Raines and Galitski (2017) Z. M. Raines and V. M. Galitski, Phys. Rev. B 96, 161115(R) (2017). * Rylands _et al._ (2021) C. Rylands, A. Parhizkar, A. A. Burkov, and V. Galitski, Phys. Rev. Lett. 126, 185303 (2021). * Birrell _et al._ (1984) N. Birrell, N. Birrell, and P. 
Davies, _Quantum Fields in Curved Space_, Cambridge Monographs on Mathematical Physics (Cambridge University Press, 1984). * (65) L. D. Landau, A. A. Abrikosov, and I. M. Khalatnikov, Doklady Akad. Nauk S.S.S.R. . * Suslov (2009) I. M. Suslov, Journal of Experimental and Theoretical Physics 108, 980 (2009). * Gell-Mann and Brueckner (1957) M. Gell-Mann and K. A. Brueckner, Phys. Rev. 106, 364 (1957). * Giuliani and Vignale (2005) G. Giuliani and G. Vignale, _Quantum Theory of the Electron Liquid_, Masters Series in Physics and Astronomy (Cambridge University Press, 2005). * Lippmann and Schwinger (1950) B. A. Lippmann and J. Schwinger, Phys. Rev. 79, 469 (1950). * Salpeter and Bethe (1951) E. E. Salpeter and H. A. Bethe, Phys. Rev. 84, 1232 (1951). * Wannier (1937) G. H. Wannier, Phys. Rev. 52, 191 (1937). * Mott (1938) N. F. Mott, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 167, 384 (1938), https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1938.0137 . * Schrödinger (1926) E. Schrödinger, Annalen der Physik 384, 361 (1926), https://onlinelibrary.wiley.com/doi/pdf/10.1002/andp.19263840404 . * Abrikosov (2017) A. Abrikosov, _Fundamentals of the Theory of Metals_ (Dover Publications, 2017). * Dyson (1952) F. J. Dyson, Phys. Rev. 85, 631 (1952). * HUET _et al._ (2012) I. HUET, M. R. DE TRAUBENBERG, and C. SCHUBERT, International Journal of Modern Physics: Conference Series 14, 383 (2012), https://doi.org/10.1142/S2010194512007507 . * Lin and Kleinman (1966) P. J. Lin and L. Kleinman, Phys. Rev. 142, 478 (1966). * Burstein _et al._ (1968) E. Burstein, S. Perkowitz, and M. Brodsky, Journal de Physique Colloques 29, C4 (1968). * Svane _et al._ (2010) A. Svane, N. E. Christensen, M. Cardona, A. N. Chantis, M. van Schilfgaarde, and T. Kotani, Phys. Rev. B 81, 245120 (2010). * Kanai and Shohno (1963) Y. Kanai and K. Shohno, Japanese Journal of Applied Physics 2, 6 (1963). * Suzuki and Adachi (1995) N. Suzuki and S. 
Adachi, Japanese Journal of Applied Physics 34, 5977 (1995). * Herman _et al._ (1968) F. Herman, R. L. Kortum, I. B. Ortenburger, and J. P. Van Dyke, J. Phys. Colloques C4, 62 (1968). * Hayasaka and Fuseya (2016) H. Hayasaka and Y. Fuseya, Journal of Physics: Condensed Matter 28, 31LT01 (2016). * Lefebvre _et al._ (1987) I. Lefebvre, M. Lannoo, G. Allan, A. Ibanez, J. Fourcade, J. C. Jumas, and E. Beaurepaire, Phys. Rev. Lett. 59, 2471 (1987). * Spr (a) “Antimony telluride (sb2te3) dielectric constants: Datasheet from landolt-börnstein \- group iii condensed matter · volume 41c: “non-tetrahedrally bonded elements and binary compounds i” in springermaterials (https://doi.org/10.1007/10681727_1045),” (a), copyright 1998 Springer-Verlag Berlin Heidelberg. * Spr (b) “Bismuth telluride (bi2te3) optical properties, dielectric constant: Datasheet from landolt-börnstein - group iii condensed matter · volume 41c: “non-tetrahedrally bonded elements and binary compounds i” in springermaterials (https://doi.org/10.1007/10681727_963),” (b), copyright 1998 Springer-Verlag Berlin Heidelberg. * Spr (c) “Bismuth selenide (bi2se3) optical properties, dielectric constants: Datasheet from landolt-börnstein - group iii condensed matter · volume 41c: “non-tetrahedrally bonded elements and binary compounds i” in springermaterials (https://doi.org/10.1007/10681727_945),” (c), copyright 1998 Springer-Verlag Berlin Heidelberg. * Jain _et al._ (2013) A. Jain, S. P. Ong, G. Hautier, W. Chen, W. D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, and K. a. Persson, APL Materials 1, 011002 (2013). * Schwerdtfeger and Nagle (2019) P. Schwerdtfeger and J. K. Nagle, Molecular Physics 117, 1200 (2019), https://doi.org/10.1080/00268976.2018.1535143 . * Rudolph _et al._ (1980) R. Rudolph, H. Krüger, B. Fellmuth, and R. Herrmann, physica status solidi (b) 102, 295 (1980), https://onlinelibrary.wiley.com/doi/pdf/10.1002/pssb.2221020127 . * Weng _et al._ (2015) H. Weng, C. Fang, Z. 
Fang, B. A. Bernevig, and X. Dai, Phys. Rev. X 5, 011029 (2015). ## Supplemental Material ### S1 Dirac Cone Anisotropy Here, for the sake of completeness, we provide a prescription for treating anisotropic crystals, adapted from Ref. Aronov and Pikus (1967). In real crystals the velocity is a tensor, represented by a $3\times 3$ real symmetric matrix $\bm{\mathcal{V}}$, so that the Dirac Hamiltonian reads $\bm{H}=\bm{\beta}\Delta-\bm{1}|e|{\phi}+\sum_{i,j}\bm{\alpha}_{i}\mathcal{V}_{ij}(p_{j}+|e|A_{j}/c),\quad i,j=1,2,3,$ (S16) where $\bm{\alpha}_{i},\bm{\beta}$ are $4\times 4$ Dirac matrices and $\bm{1}$ is the $4\times 4$ identity matrix. We define the Dirac velocity $v$ and the anisotropy matrix $\bm{\mathcal{U}}$ as $v^{3}=|\text{det}(\bm{\mathcal{V}})|,\quad\bm{\mathcal{U}}=\bm{\mathcal{V}}/v.$ (S17) #### S1.1 Dilation/contractions of space that render the cone isotropic The matrix $\bm{\mathcal{U}}$, being symmetric with $\text{det}(\bm{\mathcal{U}})=1$, induces a volume preserving transformation of the coordinates, $\bm{\tilde{x}}=\bm{\mathcal{U}}^{-1}\bm{x},\quad\bm{\tilde{p}}=\bm{\mathcal{U}}\bm{p},\quad\bm{\tilde{A}}=\bm{\mathcal{U}}\bm{A},\quad d^{3}\tilde{x}=d^{3}x{,}$ (S18) with which the Dirac Hamiltonian assumes its isotropic form $\bm{H}=v\bm{{\alpha}}\cdot(\bm{\tilde{p}}+|e|\bm{\tilde{A}}/c)+\bm{\beta}\Delta-\bm{1}|e|{\phi}{.}$ (S19) The Lagrangian, being a scalar, is invariant under this volume preserving transformation.
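As a quick numerical sanity check (a sketch with an arbitrary made-up anisotropy matrix, not one from the paper), one can verify that a symmetric unit-determinant $\bm{\mathcal{U}}$ preserves volume, and that the curl computed from the rescaled potential and coordinates returns $\bm{\mathcal{U}}^{-1}\bm{B}$, in line with Eq. (S21) of this subsection:

```python
import numpy as np

rng = np.random.default_rng(1)
# symmetric positive matrix, rescaled to unit determinant (volume preserving)
A = rng.normal(size=(3, 3))
S = A @ A.T + np.eye(3)
U = S / np.linalg.det(S) ** (1.0 / 3.0)
assert abs(np.linalg.det(U) - 1.0) < 1e-10          # d^3 x~ = d^3 x

eps = np.zeros((3, 3, 3))                           # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

M = rng.normal(size=(3, 3))                         # linear potential A(x) = M x
curl = lambda m: np.einsum('nms,sm->n', eps, m)     # B_n = eps_{nms} dA_s/dx_m

B = curl(M)
# tilde frame: x = U x~, so A~(x~) = U A(x) = (U M U) x~
B_tilde = curl(U @ M @ U)
assert np.allclose(U @ B_tilde, B)                  # i.e. B~ = U^{-1} B
```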
Meanwhile, the scaled electric and magnetic fields are $\bm{\tilde{E}}=\frac{\partial\phi}{\partial\bm{\tilde{x}}}=\bm{\mathcal{U}E},\quad\bm{\tilde{B}}=\bm{\mathcal{U}^{-1}B}{,}$ (S20) where the equation for $\bm{\tilde{B}}$ follows from the identity (repeated indices are summed) $\mathcal{U}_{nk}\tilde{B}_{k}=\mathcal{U}_{nk}\frac{\partial\tilde{A}_{i}}{\partial\tilde{x}_{j}}\epsilon_{ijk}=\frac{\partial A_{s}}{\partial x_{m}}\mathcal{U}_{im}\mathcal{U}_{js}\mathcal{U}_{nk}\epsilon_{ijk}=B_{n}{.}$ (S21) Finally, the physical susceptibilities $\chi,\chi^{e}$ in the anisotropic crystal, as functions of the applied fields $\bm{E},\bm{B}$, are $\displaystyle\bm{\chi}(\bm{E},\bm{B})=\frac{\partial^{2}L}{\partial\bm{B}\partial\bm{B}}=\bm{\mathcal{U}}^{-1}[\bm{\tilde{\chi}}(\bm{\tilde{E}},\bm{\tilde{B}})]\bm{\mathcal{U}}^{-1}$ $\displaystyle\bm{\chi^{e}}(\bm{E},\bm{B})=\frac{\partial^{2}L}{\partial\bm{E}\partial\bm{E}}=\bm{\mathcal{U}}[\bm{\tilde{\chi}^{e}}(\bm{\tilde{E}},\bm{\tilde{B}})]\bm{\mathcal{U}}{.}$ (S22) The magnetization and polarization vectors, and the higher order susceptibility tensors, transform in the analogous way. We reiterate a point already made in the main text: even when the crystal anisotropy is not taken into account, that is, $\bm{\mathcal{U}}=\bm{1}$, the susceptibilities are intrinsically anisotropic, since they depend on the mutual alignment of the electric and magnetic fields. Crystal anisotropy is an additional source of directional dependence that is relevant in the experimental context. #### S1.2 Invariance of $E\cdot B$ and the transformation of the angle between $E$ and $B$ To account for anisotropy, the normalized fields are defined as in Eq. (3).
We note that the scalar product of the externally applied fields $\mathbf{E}\cdot\mathbf{B}$ transforms as $\bm{E}\cdot\bm{B}=\mathcal{U}^{-1}_{ik}\tilde{E}_{k}\mathcal{U}_{il}\tilde{B}_{l}=B_{\star}E_{\star}\mathbf{e}\cdot\mathbf{b},$ (S23) i.e., it is proportional to the scalar product of the transformed fields. However, $\bm{\mathcal{U}}$ is not a simple rotation. Hence, while scalar products are preserved up to a factor, angles and norms are not preserved separately. We see this when we express $\mathbf{\hat{e}}\cdot\mathbf{\hat{b}}$ in terms of $\bm{\hat{E}}\cdot\bm{\hat{B}}$ as $\mathbf{\hat{e}}\cdot\mathbf{\hat{b}}=\frac{\bf{e}\cdot\bf{b}}{|\bf{e}||\bf{b}|}=\frac{\bm{E}\cdot\bm{B}}{|\bm{\mathcal{U}E}||\bm{\mathcal{U}}^{-1}\bm{B}|}=\frac{|\bm{E}||\bm{B}|}{|\bm{\mathcal{U}E}||\bm{\mathcal{U}}^{-1}\bm{B}|}\bm{\hat{E}}\cdot\bm{\hat{B}}.$ (S24) Therefore $\bm{E}\perp\bm{B}\iff\mathbf{e}\perp\mathbf{b}.$ (S25) We can work in a reference frame where $\bm{\mathcal{U}}$ is diagonal. Even when $\bm{E}\parallel\bm{B}$, for a general direction we have $\bm{E}\parallel\bm{B}\implies\mathbf{\hat{e}}\cdot{\mathbf{\hat{b}}}=\frac{1}{|\bm{\mathcal{U}\hat{E}}||\bm{\mathcal{U}}^{-1}\bm{\hat{E}}|}\leq 1.$ (S26) The equality is attained when $\bm{E}$ and $\bm{B}$ are directed along the principal directions of the frame that diagonalizes $\bm{\mathcal{U}}$, e.g. $\bm{\hat{E}}=\bm{\hat{B}}=(1,0,0)^{T}$. For a general direction, however, the vectors $\mathbf{e}$ and $\mathbf{b}$ are no longer parallel; for example, if $\bm{\mathcal{U}}=\text{diag}(u,u,1/u^{2}),\quad\bm{\hat{E}}=\bm{\hat{B}}=\frac{1}{\sqrt{3}}(1,1,1)^{T},$ (S27) we obtain $\mathbf{\hat{e}}\cdot{\mathbf{\hat{b}}}=\left(\frac{9}{{\left(\frac{2}{u^{2}}+u^{4}\right)}\,{\left(2\,u^{2}+\frac{1}{u^{4}}\right)}}\right)^{1/2}.$ (S28) For exemplary values $u=1.2,1.5,2$, we get the angle between $\mathbf{\hat{e}}$ and ${\mathbf{\hat{b}}}$ as $28^{\circ},55^{\circ},75^{\circ}$, respectively.
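The quoted angles follow directly from Eq. (S28); a minimal numerical check:

```python
import math

def e_dot_b(u):
    # Eq. (S28): cosine of the angle between e-hat and b-hat for
    # U = diag(u, u, 1/u^2) and E-hat = B-hat = (1, 1, 1)/sqrt(3)
    return math.sqrt(9.0 / ((2.0 / u**2 + u**4) * (2.0 * u**2 + 1.0 / u**4)))

for u in (1.2, 1.5, 2.0):
    angle = math.degrees(math.acos(e_dot_b(u)))
    print(f"u = {u}: {angle:.0f} degrees")   # 28, 55 and 75 degrees
```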
In terms of the transformed vectors $\mathbf{e}$ and $\mathbf{b}$, Dirac materials are described by a Hamiltonian of the usual isotropic form, which simplifies calculations. In particular, for perpendicular fields $\mathbf{e}$ and $\mathbf{b}$, one can apply the Lorentz transformation that eliminates the smaller of the fields, resulting in the purely electric field or the purely magnetic field case. When the fields $\mathbf{e}$ and $\mathbf{b}$ are not orthogonal, a particular choice of Lorentz transformation leads to a Hamiltonian with a new set of electric and magnetic fields parallel to each other, for which the calculations of the dielectric and magnetic susceptibilities are simplified (Section S6). Once the results are obtained in the frame where the fields are parallel, the inverse Lorentz transformation allows one to express them in terms of the original non-orthogonal fields $\mathbf{e}$ and $\mathbf{b}$. Finally, the inverse transformation to the initial anisotropic system gives the results in terms of the applied fields $\bm{E}$ and $\bm{B}$. In Sec. S7 we derive the susceptibilities for an arbitrary angle between $\mathbf{e}$ and $\mathbf{b}$; see Table S6 for quick reference. Figure S1: Multi-loop and multi-leg diagrams for the effective action. The dashed lines correspond to the external electromagnetic fields $B$, $E$, the blue wavy lines are Coulomb interactions, and the solid line is the Green function of a Dirac sea electron. Only the 1-loop, 2-leg diagram, given by (S40), is ultraviolet divergent; the others are convergent. The divergence is renormalized into the dielectric constant and relative permeability. The multi-loop diagrams contain Coulomb interactions (blue) that are suppressed by the relative permittivity, hence carry the factor $\alpha_{D}/\epsilon\sim 0.03$.
The 1-loop diagrams with more than 4 legs are given by (S45); when the expansion parameters $|\mathbf{e}|^{2}=\frac{\alpha_{D}E^{2}}{\Delta/\lambdabar_{D}^{3}}$ or $|\mathbf{b}|^{2}=\frac{\alpha_{D}B^{2}v^{2}/c^{2}}{\Delta/\lambdabar_{D}^{3}}$ are large, the diagrams can be summed exactly to yield Eq. (S4). Parameter | Dirac Insulator | QED ---|---|--- | symbol | expression | value$\sim$ | symbol | expression | value$\sim$ Energy gap | $2\Delta$ | | $100\>\text{meV}$ (footnote: this estimate is based on the tunable-gap compound $\text{Pb}_{1-x}\text{Sn}_{x}\text{Se}$ Dziawa _et al._ (2012); there is no gap in Weyl and Dirac semimetals (see TaAs Zhang _et al._ (2019)), and the appropriate limit has to be taken; the gap is tunable between $\pm 10\>\text{meV}$ in Bi and Sb alloys; a typical gap for the materials in Table S3 is $200\>\text{meV}$) | $2m_{e}c^{2}$ | | $1\>\text{MeV}$ Dirac velocity | $v$ | $c/400$ | $10^{6}\>\text{m}/\text{s}$ | $c$ | | $3\times 10^{8}\>\text{m/s}$ Wavelength (reduced) | $\lambdabar_{D}$ | $\frac{\hbar v}{\Delta}$ | $10\>\text{nm}$ | $\lambdabar_{C}$ | $\frac{\hbar}{m_{e}c}$ | $3.86\times 10^{-4}\>\text{nm}$ Fine structure constant | $\alpha_{D}$ | $\frac{c\alpha}{v}$ | $3$ | $\alpha$ | $\frac{e^{2}}{\hbar c}$ | $1/137$ Magnetic moment | $\mu_{D}$ | $\frac{e\hbar v^{2}}{2\Delta c}$ | $3.7\>\text{meV/T}$ | $\mu_{B}$ | $\frac{e\hbar}{2m_{e}c}$ | $5.8\times 10^{-2}\>\text{meV}/\text{T}$ Ultraviolet cut-off | $\Lambda$ | | $2\>\text{eV}$ | $\Lambda_{QED}$ | | $10^{28}\text{-}10^{286}\>\text{eV}$ (footnote: the low estimate is based on the Planck energy Birrell _et al._ (1984), where gravitational effects come into play; the large estimate is the Landau pole, where the coupling constant diverges)
Landau _et al._; Suslov (2009). Dielectric screening (due to Dirac sea) | $\epsilon_{D}$ | $1+\frac{2\alpha_{D}}{3\pi}\log\frac{\Lambda}{\Delta}$ | $3$ | $\epsilon_{\Lambda}$ | $1+\frac{2\bar{e}^{2}}{3\pi\hbar c}\log\frac{\Lambda_{QED}}{m_{e}c^{2}}$ (footnote: see footnote g) | Relative permittivity (renormalized) | $\epsilon$ | $\sim\epsilon_{D}+\epsilon_{i}$ (footnote: $\epsilon_{i}$ is the lattice and intra-ionic contribution) | $100$ (footnote: see Table S3) | $\epsilon$ | | $1$ (footnote: by definition, the vacuum has a permittivity of unity, or $\epsilon_{0}$ in SI units) Electric charge in use | bare: $e$ | | $1.6\times 10^{-19}\>\text{C}$ | RG: $e_{0}=e$ | $\bar{e}/\sqrt{\epsilon_{\Lambda}}$ (footnote: the quantity $\bar{e}$ is the running charge defined at the ultraviolet cutoff energy Peskin and Schroeder (2005)) | $1.6\times 10^{-19}\>\text{C}$ Field definition in use | screened: $E$ | $D/\epsilon$ (footnote: $D$ is the macroscopic electric displacement, due to free charges only) | | RG: $\bar{E}=E$ | $\sqrt{\epsilon_{\Lambda}}E_{0}$ | Schwinger E-Field | $E_{\star}$ | $\frac{\Delta^{2}}{e\hbar v}$ | $5\times 10^{4}\>\text{V/cm}$ | $E_{S}$ | $\frac{m_{e}^{2}c^{3}}{e\hbar}$ | $1.3\times 10^{16}\>\text{V/cm}$ Schwinger B-Field | $B_{\star}$ | $\frac{\Delta^{2}c}{e\hbar v^{2}}$ | $6.8\>\text{T}$ | $B_{S}$ | $\frac{m_{e}^{2}c^{3}}{e\hbar}$ | $4.4\times 10^{9}\>\text{T}$ 1-loop expansion parameter 1 | $|\bf{e}|^{2}$ | $\frac{\alpha_{D}E^{2}}{\Delta/\lambdabar_{D}^{3}}$ | $<0.1$ (footnote: dielectric breakdown occurs beyond $|\bf{e}|\sim 0.1$, which is captured by our nonperturbative theory, see Eq. (S41)) | $|\bf{e}|^{2}$ | $\frac{\alpha E^{2}}{m_{e}c^{2}/\lambdabar_{c}^{3}}$ | $<0.1$ (footnote: the electric field accelerates free particle-hole pairs beyond this point) 1-loop expansion parameter 2 | $|\bf{b}|^{2}$ | $\frac{\alpha_{D}B^{2}}{\Delta/\lambdabar_{D}^{3}}\frac{v^{2}}{c^{2}}$ | $<10^{3}$ (footnote: the first Landau level is ejected beyond the cut-off energy past this point in an external magnetic field)
Therefore our theory breaks down. Note that perturbation theory is not valid beyond $|b|>1$, but the nonperturbative resummation in Eq. (S4) is still valid. | $|\mathbf{b}|^{2}$ | $\frac{\alpha B^{2}}{m_{e}c^{2}/\lambdabar_{c}^{3}}$ | $<10^{44}$ (footnote: the magnetic field energy curves space-time beyond this point) N-loop expansion parameter (interaction probability) | | $\alpha_{D}/\epsilon$ | $0.03$ (footnote: the 1-loop nonperturbative theory is valid as long as $\alpha_{D}<\epsilon\sim 100$, that is, $v\gtrsim 10^{-4}c$) | $\alpha$ | | $\approx 0.01$ Invariant 1: $|\bf{e}|^{2}-|\bf{b}|^{2}$ | $\propto\mathcal{F}^{\mu\nu}\mathcal{F}_{\mu\nu}$ (footnote: $\mathcal{F}_{\mu\nu}=\tilde{\partial}_{\mu}\mathcal{A}_{\nu}-\tilde{\partial}_{\nu}\mathcal{A}_{\mu}$; this is different from the true relativistic field tensor $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$, see Eq. (S30)) | $E^{2}-\frac{v^{2}}{c^{2}}B^{2}$ | | $\propto{F}^{\mu\nu}{F}_{\mu\nu}$ | $E^{2}-B^{2}$ | Invariant 2: $\bf{e}\cdot\bf{b}$ | $\propto\mathcal{F}_{\alpha\beta}\mathcal{F}_{\mu\nu}\varepsilon^{\alpha\beta\mu\nu}$ | $\frac{v}{c}\bm{E}\cdot\bm{B}$ | | $\propto{F}_{\alpha\beta}{F}_{\mu\nu}\varepsilon^{\alpha\beta\mu\nu}$ | $\bm{E}\cdot\bm{B}$ | Table S2: Comparison of the Dirac insulator with Quantum Electrodynamics (QED). For simplicity, the isotropic case is considered; for the general case see Sec. S1. ### S2 Comparison to the relativistic quantum electrodynamics The Dirac equation that follows from Eq.
(S19), after multiplying from the left by $\bm{\beta}$, is $[i\bm{\gamma}^{0}(\partial_{t}-i|e|{\phi})-v\bm{{\gamma}}\cdot(\bm{\tilde{p}}+|e|\bm{\tilde{A}}/c)-\Delta]\psi=0,\quad\bm{\gamma}^{0}=\bm{\beta},\>\bm{\gamma}^{i}=\bm{\beta}\bm{\alpha}^{i}.$ (S29) If we define the effective and the true relativistic 4-position and electromagnetic 4-potential, respectively, as effective: $\displaystyle\quad\tilde{x}^{\mu}=(vt,\bm{\tilde{x}}),\quad\mathcal{A}^{\mu}=\frac{1}{v}(\phi,v\tilde{\bm{A}}/c)$ (S30) true-relativistic: $\displaystyle\quad{x}^{\mu}=(ct,\bm{{x}}),\quad{A}^{\mu}=\frac{1}{c}(\phi,\bm{A}),\quad\text{where }\eta^{\mu\nu}=(+,-,-,-),$ (S31) we can write down the Dirac action coupled to classical electromagnetism in the (anisotropic) material as $\text{Dirac material:}\quad S=\int dtd^{3}\tilde{x}\>\left(\bar{\psi}[iv\gamma^{\mu}(\tilde{\partial}_{\mu}-i|e|\mathcal{A}_{\mu})-\Delta]\psi+\frac{1}{8\pi}(\bm{E}^{T}\bm{\epsilon}\bm{E}-\bm{B}^{T}\bm{\mu}^{-1}\bm{B})\right).$ (S32) In addition, when the band gap is inverted, as in topological insulators, there is an additional bulk-boundary term proportional to $\bm{E}\cdot\bm{B}$ Fröhlich (2018), which does not affect the nonlinear response that we investigate in this letter. For this reason we assume $\Delta>0$ without loss of generality. For comparison, the true relativistic QED action is $\text{QED:}\quad S=\int dtd^{3}x\>\left(\bar{\psi}[ic\gamma^{\mu}({\partial}_{\mu}-i|e|{A}_{\mu})-m_{e}c^{2}]\psi+\frac{1}{8\pi}({E}^{2}-{B}^{2})\right).$ (S33) The first difference one notices is the speed of the Dirac fermion, which is $c$ in QED.
In the material with a gap $2\Delta$, the Dirac velocity is related to the effective mass $m^{*}$ (measured in units of the electron mass) and satisfies $\displaystyle E=\sqrt{\Delta^{2}+p^{2}v^{2}},\quad E\approx\Delta+\frac{v^{2}p^{2}}{2\Delta}\implies m^{*}m_{e}v^{2}=\Delta,$ $\displaystyle\Delta\approx 0.1\>\text{eV},\>m^{*}=0.01\text{-}0.5\implies 1000\gtrsim c/v\gtrsim 100{.}$ (S34) For this reason, the effective fine structure constant of the insulator is $\alpha_{D}=\frac{e^{2}}{\hbar v}\sim\alpha\times 400\approx 3{.}$ (S35) The minimally coupled 4-potentials are different, as outlined in Eq. (S30). The classical actions of the electromagnetic field in the two cases are obviously different. In the material case, $\bm{\epsilon},\bm{\mu}$ are symmetric matrices that define the relative permittivity and the relative permeability of the material, respectively. In vacuum they satisfy $\epsilon=\mu=1$ in CGS units. A more subtle difference between QED and the Dirac insulator is the definition of the electric charge and the electric field, when seen from a renormalization point of view. In QED the charge is normalized at zero momentum transfer, $e_{0}=e_{q=0}$ (referred to as the running electric charge Peskin and Schroeder (2005)), which is related to the bare charge $\bar{e}$ through the ultraviolet (UV) cutoff dependent vacuum dielectric screening, $e_{0}=\bar{e}/\sqrt{\epsilon_{\Lambda}}$. However, in condensed matter, the charge is defined independently of scale and is given by $e\approx 1.6\times 10^{-19}\>\text{C}$. Nevertheless, due to RPA dielectric screening, the apparent charge at the macroscopic scale is $e/\sqrt{\epsilon}$, hence different from the bare $e$ that applies at the lattice spacing scale $q\sim\hbar\pi/a$. Definitions of the electric field are also different.
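The scaling in Eqs. (S34)-(S35) is easy to check numerically. A minimal sketch, where $\Delta=0.1$ eV and $m^{*}=0.05$ are illustrative values chosen here (the text quotes the range $m^{*}=0.01$-$0.5$):

```python
import math

# Numerical sketch of Eqs. (S34)-(S35): m* m_e v^2 = Delta gives
# c/v = sqrt(m* m_e c^2 / Delta), with m_e c^2 = 511 keV.
ME_C2_EV = 0.511e6    # electron rest energy in eV
ALPHA = 1.0 / 137.0   # QED fine structure constant

def c_over_v(delta_ev, m_star):
    """Ratio c/v from the band-edge relation m* m_e v^2 = Delta."""
    return math.sqrt(m_star * ME_C2_EV / delta_ev)

ratio = c_over_v(0.1, 0.05)     # a few hundred, inside the 100-1000 window
alpha_D = ALPHA * ratio         # effective coupling e^2/(hbar v), order 1
suppression = alpha_D / 100.0   # interaction-line factor alpha_D/eps for eps ~ 100
```

With these inputs the ratio comes out to a few hundred and $\alpha_{D}$ of order a few, consistent with $\alpha_{D}\sim\alpha\times 400\approx 3$.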
In QED, the field is normalized to $\bm{\bar{E}}=\sqrt{\epsilon_{\Lambda}}\bm{E}$ so that the energy density of the field is fixed as $\bar{E}^{2}/(8\pi)$, and the dielectric constant of vacuum is by definition equal to unity. However, in condensed matter, the Lagrangian of the free field is already written in terms of the screened field $\bm{E}$, as $\epsilon E^{2}/(8\pi)$. The coupling term is also written in terms of the bare charge and the screened field $e\bm{E}$, hence $\epsilon$ does not appear in the definition of the effective fine structure constant $\alpha_{D}=e^{2}/(\hbar v)$. Importantly, the coupling term is independent of the definition, $e_{0}{\bar{\bm{E}}}=e{\bm{E}}$, allowing us to use the 1-loop renormalized effective action of QED in the condensed matter setting. A detailed comparison of parameters in the Dirac insulator to their counterparts in QED is given in Table S2.

### S3 Effective action for Dirac insulator

The effective action can be schematically organized in terms of multi-leg and multi-loop diagrams as in Fig. S1. The multi-loop diagrams contain at least one internal interaction line, and therefore in QED each such line contains the factor $\alpha\sim 0.01$. We now justify restricting the calculation of the effective action to the 1-loop approximation and neglecting multi-loop diagrams in Dirac materials, particularly insulators. In the Dirac insulator, we first note that multi-loop diagrams are UV convergent, hence the polarization is dictated mostly by the low energy-momentum sector where the static dielectric constant $\epsilon$ applies Sham (1966); Sham and Rice (1966). The Coulomb interaction is therefore screened due to $\epsilon$, which is dominated by the lattice and intra-ionic polarization contributions. In a wide range of practical situations, as seen in Table S3, $\epsilon\sim 100$ and therefore the interaction lines are suppressed by the factor $\frac{\alpha_{D}}{\epsilon}\sim 0.03$.
We note that, no matter how small the interaction parameter $\alpha_{D}/\epsilon$, the multi-loop diagrams may come with large combinatorial factors and can render the perturbation series divergent. In both QED and condensed matter physics several examples of divergences are known. In some of these divergences, the coupling constant $\alpha$ gets enhanced by an additional parameter $R$, $\alpha\rightarrow R\alpha$. For example, in an electron gas in metals, there is the Random Phase Approximation (RPA) infrared divergence, where $\alpha$ is enhanced by a large length scale. This diverging series of diagrams eventually converges and can be re-summed, as was shown by Gell-Mann and Brueckner. Gell-Mann and Brueckner (1957) However, in the case of the Dirac insulator, the electron-gas infrared divergence does not arise. Consider, for example, the expansion of the interaction contribution to the energy, $E=E_{0}+\langle 0|H_{ee}|0\rangle+\sum_{n}\frac{\langle 0|H_{ee}|n\rangle\langle n|H_{ee}|0\rangle}{E_{n}-E_{0}}+....,$ (S36) where $E_{0}$ is the interaction-independent contribution to the energy, $H_{ee}$ is the interaction Hamiltonian, the index zero denotes the ground state of the system and the index $n$ corresponds to excited states. In an electron gas, the denominator $E_{n}-E_{0}=\frac{\hbar^{2}}{m}(q^{2}+({\bm{k}}_{1}-{\bm{k}}_{2})\cdot{\bm{q}})$, where ${\bm{k}}_{1}$ and ${\bm{k}}_{2}$ are the momenta (wave vectors) of the two holes inside the Fermi sphere, ${\bm{q}}$ is the transferred momentum, and ${\bm{k}}_{1}+{\bm{q}}$ and ${\bm{k}}_{2}-{\bm{q}}$ are the momenta of the electrons outside the Fermi sphere. The resulting sum is divergent in second-order perturbation theory, and higher-order terms are even more divergent. For the electron gas, these perturbation series can be summed. However, for the Dirac insulator, where excitations arise only in transitions between the full valence band and the empty conduction band, every denominator in the series Eq.
(S36) contains the extra constant $2\Delta$, and divergences do not appear. Furthermore, as we consider high magnetic fields, if both spin states of the highest occupied Landau level are fully filled, excitation involves a cyclotron energy gap. Then, if the cyclotron energy is much larger than the characteristic Coulomb interaction, the mixing of excited states is negligible, and the non-interacting "closed-shell configuration" is essentially the ground state Giuliani and Vignale (2005). Due to the large $\epsilon$, this happens at rather small magnetic fields in our case. Such a situation is relevant for our consideration of Bi, where we consider the limit $\Delta=0$ and zero electric field. It becomes clear that electron-gas RPA-like contributions need not be included in our consideration. Another type of divergence often arising due to interactions is related to Lippmann-Schwinger Lippmann and Schwinger (1950) or Bethe-Salpeter Salpeter and Bethe (1951) effects, or, in the condensed matter context, to the exciton bound state. Wannier (1937); Mott (1938); Schrödinger (1926) For the exciton, when $\epsilon$ is large, the binding energy is small and the size is effectively large. A small binding energy means closeness in energy to the bottom of the conduction band. For this bound state to manifest itself in susceptibilities, excitations have to be generated at electromagnetic-wave frequencies close to $2\Delta$. Thus, the related contributions can be separated from the effects at the small frequencies that we mostly consider. In susceptibilities, divergences resulting from the summation of series of interaction contributions may also indicate phase transitions, such as ferromagnetism. In the presence of the gap $2\Delta$ for excitations in Dirac insulators, such a phase transition is not expected. Similarly, a superconducting transition, while emerging due to the binding of electrons into Cooper pairs associated with the presence of a Fermi surface (see, e.g.
Abrikosov Abrikosov (2017)), is unlikely to arise in the presence of the excitation gap in insulators. We note that the one-loop Heisenberg-Euler contribution at large $B$ that we calculated, Eq. 2 of the main text, also stems from a divergence of the general renormalization-group type, where $\alpha$ is enhanced by the first power of a logarithm. Finally, there is potentially a divergence stemming from the loop expansion that differs from the cases discussed above. It contains the fine structure constant $\alpha$ only, and is asymptotic, with combinatorial coefficients growing fast with increasing power $n$ of the fine structure constant in diagrams with $n$ interaction lines and vertices of electromagnetic interactions. These so-called asymptotic series were first considered by Dyson Dyson (1952) (also see an excellent review by Huet and co-authors HUET _et al._ (2012)). It is now widely accepted that for asymptotic series a summation up to infinite order in small $\alpha$ does not make sense, and one must terminate the series after a few terms when doing practical calculations with QED, although a rigorous mathematical justification for using 1-loop QED is an open problem. Thus, we can safely restrict ourselves to the 1-loop approximation since $\frac{(K+1)\text{-loops},2n\text{-legs}}{K\text{-loops},2n\text{-legs}}\sim\frac{\alpha_{D}}{\epsilon}\approx 0.03,\quad\text{see Eq.~{}(S35) and Table~{}S3}{.}$ (S37) This applies to QED, where $\alpha\approx 1/137$, as considered by Heisenberg and Euler, and in condensed matter contexts, where the interaction parameter $e^{2}/(\hbar v\epsilon)$ is small at large $\epsilon$, fixed by lattice properties. We note that our predictions pass the test of comparison with available experimental results (see Fig. 2 of the main text), even in the strong field regime which cannot be directly probed in QED.

Compound | $2\Delta$[eV] | $\epsilon$ | Refs.
---|---|---|---
PbTe | $0.19$ | $450$ | Lin and Kleinman (1966); Burstein _et al._ (1968); Svane _et al._ (2010); Kanai and Shohno (1963)
PbSe | $0.16$ | $280$ | Lin and Kleinman (1966); Burstein _et al._ (1968); Svane _et al._ (2010)
PbS | $0.28$ | $190$ | Lin and Kleinman (1966); Burstein _et al._ (1968); Svane _et al._ (2010)
SnTe | $0.27$ | $1770$ | Suzuki and Adachi (1995); Herman _et al._ (1968); Hayasaka and Fuseya (2016)
Sb2Te3 | $0.21$ | $168,\,36$ | Lefebvre _et al._ (1987); Spr (a)
Bi2Te3 | $0.17$ | $290,\,75$ | Ando (2013); Spr (b)
Bi2Se3 | $0.30$ | $113$ | Ando (2013); Spr (c)
GeBi${}_{4\text{-}x}$SbxTe7 | $0.10$ | $448,\,30$ (at $x=0$, i.e. GeBi4Te7) | Ando (2013); Jain _et al._ (2013)
GeBi2Te4 | $0.18$ | $315,\,19$ | Ando (2013); Jain _et al._ (2013)
$\text{Bi}_{1-x}\text{Sb}_{x}$ | $0.01$ (the gap vanishes at $x=0.04$) | $\sim 100$ (at $x=0.07$, where $2\Delta\approx 0.01\>\text{eV}$) | Schwerdtfeger and Nagle (2019); Rudolph _et al._ (1980)

Table S3: The Dirac gap and dielectric constants of Dirac materials. When multiple numbers for $\epsilon$ are provided, anisotropy is signified (see references for more information). The linear dielectric contribution from the valence bands coming from Eq. (S40) is, by definition, already contained in these measurements.

### S4 1-Loop effective action

The 1-loop effective action of the material follows as $S_{eff}=-i\ln\mathrm{Det}\left[v\gamma^{\mu}(\tilde{\partial}_{\mu}-i|e|\mathcal{A}_{\mu})+i\Delta\right]{.}$ (S38) This determinant is calculated exactly in the QED case Dunne. By comparing the QED and Dirac insulator actions in Eqs. (S32) and (S33), we can write the 1-loop, nonperturbative, renormalized Heisenberg-Euler action in terms of the normalized fields defined in Eqs.
(3) and (4) as Akhiezer and Berestestskii (1965); Berestetskii _et al._ (1982) $\displaystyle\delta L_{HE}$ $\displaystyle=$ $\displaystyle\frac{\Delta}{8\pi^{2}\lambdabar_{D}^{3}}\int_{0}^{\infty}\frac{d\eta e^{-\eta}}{\eta^{3}}\left[-\eta A\cot(\eta A)\,\eta C\coth(\eta C)+1-\frac{1}{3}\eta^{2}(A^{2}-C^{2})\right],$ $\displaystyle A$ $\displaystyle=$ $\displaystyle-\frac{i}{2}\left[\sqrt{({\bf b}+i{\bf e})^{2}}-\sqrt{({\bf b}-i{\bf e})^{2}}\right],$ $\displaystyle C$ $\displaystyle=$ $\displaystyle\frac{1}{2}\left[\sqrt{({\bf b}+i{\bf e})^{2}}+\sqrt{({\bf b}-i{\bf e})^{2}}\right]{}.$ (S39) In addition to this, there is also the cut-off dependent part of the action that is due to the UV-divergent diagram (shown in Fig. S1 in a red box), which evaluates to $\delta L_{1}=\frac{\Delta}{12\pi^{2}\lambdabar_{D}^{3}}\ln\left(\frac{\Lambda}{\Delta}\right)\left(|\mathbf{e}|^{2}-|\mathbf{b}|^{2}\right).$ (S40) In QED, this quantity has the same form as the classical electromagnetic Lagrangian, hence it is renormalized into the definition of the electric charge. However, in the material case, it gives rise to the magnetic and electric susceptibilities, discussed below, that are absorbed into $\bm{\epsilon},\bm{\mu}$ of the material. Eq. (S4) is valid for both strong and weak fields. The contour in the complex $\eta$ plane shall be chosen so that the poles in the integrand are avoided. The imaginary part of the action gives the breakdown probability (the volume rate of particle-hole creation in QED). For example, when $|\mathbf{b}|=0$, we have to leading order Berestetskii _et al._ (1982) $P=2\text{Im}\>\delta L_{HE}=\frac{\Delta}{4\pi^{3}\hbar\lambdabar_{D}^{3}}|{\bf e}|^{2}e^{-\pi/|{\bf e}|}{.}$ (S41) Meanwhile, another way to write Eq.
(S38) is the standard form $S_{eff}=-i\ln\mathrm{Det}\left[v\gamma^{\mu}(\tilde{\partial}_{\mu}-i|e|\mathcal{A}_{\mu})+i\Delta\right]=-i\text{Tr}\ln[1+|e|vG\gamma^{\mu}\mathcal{A}_{\mu}]-i\ln\mathrm{Det}\left[-iG^{-1}\right]{.}$ (S42) Carrying out the formal diagrammatic expansion of the logarithm, we have $1\text{-loop},2n\text{-legs}=\frac{i}{2n}\text{Tr}[(|e|v\gamma^{\mu}\mathcal{A}_{\mu}G)^{2n}],\quad n=1,2,...$ (S43) For example, the 2-leg diagram after taking the Fourier transform $\mathcal{A}(q)=\int d^{4}\tilde{x}\mathcal{A}(\tilde{x})e^{iq^{\mu}\tilde{x}_{\mu}}$ reads $1\text{-loop},2\text{-legs}=\frac{ie^{2}v^{2}}{4}\int\frac{d^{4}qd^{4}k}{(2\pi)^{8}}\text{tr}\left[\not{\mathcal{A}}(q)\frac{1}{v\not{k}-\Delta}\not{\mathcal{A}}(-q)\frac{1}{v(\not{k}+\not{q})-\Delta}\right],\quad k^{\mu}=(\omega/v,\bm{\tilde{k}}),\>\not{k}=\gamma^{\mu}k_{\mu}{,}$ (S44) which gives the UV divergent term Eq. (S40). The higher order diagrams can be identified by formally expanding the action Eq. (S4) in the fields. For example, if we choose $\mathbf{e}\parallel\mathbf{b}$, we have Dunne $1\text{-loop},2n\geq 4\text{-legs}=-\frac{\Delta}{8\pi^{2}\lambdabar_{D}^{3}}(2n-3)!\sum_{k=0}^{n}\frac{\mathcal{B}_{2k}\mathcal{B}_{2n-2k}}{(2k)!(2n-2k)!}(2|\mathbf{b}|)^{2n-2k}(2i|\mathbf{e}|)^{2k}{,}$ (S45) where $\mathcal{B}_{2n}$ are Bernoulli numbers. The expansion parameters are $|\mathbf{e}|^{2}=\alpha_{D}E^{2}\frac{\lambdabar_{D}^{3}}{\Delta},\quad|\mathbf{b}|^{2}=\alpha_{D}B^{2}\frac{v^{2}}{c^{2}}\frac{\lambdabar_{D}^{3}}{\Delta}{.}$ (S46) Therefore, when the fields or the coupling constant are strong, the diagrammatic expansion is only schematic, because the series Eq. (S45) should be summed exactly back into the original form Eq. (S4), where the general (non-parallel) $\bm{E}$ and $\bm{B}$ case is also taken into account.
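The $k$-sum in Eq. (S45) at $n=2$ can be checked against the quartic weak-field structure $(|\mathbf{e}|^{2}-|\mathbf{b}|^{2})^{2}+7(\mathbf{e}\cdot\mathbf{b})^{2}$ for parallel fields; a sketch with exact rational arithmetic (the first Bernoulli numbers are hardcoded, and the overall prefactor is left out):

```python
from fractions import Fraction as F
from math import factorial

# Bernoulli numbers B_0, B_2, B_4 as exact fractions
B = {0: F(1), 2: F(1, 6), 4: F(-1, 30)}

# Coefficient of b^(4-2k) e^(2k) in the n = 2 term of the k-sum in Eq. (S45):
# -(2n-3)! * B_{2k} B_{4-2k} / ((2k)!(4-2k)!) * (2)^(4-2k) * (2i)^(2k),
# where (2i)^(2k) contributes (-1)^k * 2^(2k), so the powers of 2 combine to 2^4.
coeff = {}
for k in range(3):
    c = B[2 * k] * B[4 - 2 * k] * 2**4 * (-1) ** k
    coeff[(4 - 2 * k, 2 * k)] = -c / (factorial(2 * k) * factorial(4 - 2 * k))

# Weak-field structure for e || b: (e^2 - b^2)^2 + 7 (e.b)^2
# = b^4 + 5 b^2 e^2 + e^4, all divided by 45.
expected = {(4, 0): F(1, 45), (2, 2): F(5, 45), (0, 4): F(1, 45)}
```

The resulting coefficients $(1/45,\,5/45,\,1/45)$ match the expansion of the Euler-Heisenberg weak-field Lagrangian for collinear fields.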
### S5 Regime of validity of 1-loop effective action for a Dirac material

From the previous sections, we first have to ensure that the 1-loop approximation holds by keeping the interaction strength under control, $\alpha_{D}\ll\epsilon\sim 100,\quad\text{(1-loop approximation holds)}{.}$ (S47) When the magnetic field is too large, the Landau level separation energy becomes comparable to the cut-off and the Dirac description breaks down, but at $|\bm{B}|\ll\frac{\Lambda^{2}c}{e\hbar v^{2}},$ (S48) the Dirac theory is valid. Finally, when the $E$-field is strong, a formal asymptotic expression can be easily derived from Eq. (S4). However, in our case this is a somewhat academic question, because, as seen from Eq. (S41), the electric breakdown probability increases significantly with strong fields, which we do not consider. At small fields we have the conditions $\displaystyle|\bm{E}|$ $\displaystyle\ll\frac{\Lambda^{2}}{e\hbar v},\quad\text{(The Dirac theory is valid)}$ (S49) $\displaystyle|\mathbf{e}|$ $\displaystyle<0.3,\quad\text{(avoids electric breakdown)}.$ (S50)

### S6 Asymptotic expansion of the Heisenberg-Euler Action

In order to avoid electric breakdown of an insulator, we are always in the weak-$E$ limit, where the parallel-field Lagrangian, being quadratic in $|\mathbf{e}|^{2}$ and $\hat{\bf{e}}\cdot\hat{\mathbf{b}}$, can be expanded as in Eq. (S51).

$f_{n}^{m}(b,x)$ | $n=0$ | $n=1$ | $n=2$
---|---|---|---
$m=0$ | $b^{2}\left(\frac{1}{x^{2}}+\frac{1}{3}-\frac{\coth(x)}{x}\right)$ | $-\left(1+\frac{x}{2b}\right)\left(\frac{1}{x^{2}}+\frac{1}{3}-\frac{\coth(x)}{x}\right)$ | $\frac{x(x+b)}{8b^{4}}\left(\frac{1}{x^{2}}+\frac{1}{3}-\frac{\coth(x)}{x}\right)$
$m=1$ | | $\frac{1}{2\sinh^{2}(x)}-\frac{3-2x^{2}}{6x}\coth(x)$ | $\left(\frac{1}{b^{2}}-\frac{x}{2b^{3}}\right)\left(\frac{1}{2\sinh^{2}(x)}-\frac{3-2x^{2}}{6x}\coth(x)\right)$
$m=2$ | | | $-\frac{6x\coth(x)+4x^{2}+9}{24b^{2}\sinh^{2}(x)}+\coth(x)\frac{225-60x^{2}+8x^{4}}{360b^{2}x}$
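The common factor $1/x^{2}+1/3-\coth(x)/x$ appearing in Table S4 vanishes as $x^{2}/45$ at the origin, which is what makes the integrands of Eq. (S52) regular there; a quick numerical sketch (note the severe floating-point cancellation at very small $x$, so a moderate $x$ is used):

```python
import math

# Small-x behavior of the common factor in Table S4:
# h(x) = 1/x^2 + 1/3 - coth(x)/x ~ x^2/45 as x -> 0,
# so the integrands e^{-x/b} f_n^m / x in Eq. (S52) are regular at the origin.
def h(x):
    coth = math.cosh(x) / math.sinh(x)
    return 1.0 / x**2 + 1.0 / 3.0 - coth / x

x = 0.1
leading = x * x / 45.0  # leading term of the Laurent-series cancellation
```

In a numerical evaluation of $I_{n}^{m}(b)$ one would switch to this series form below some small $x$ to avoid the $1/x^{2}$ cancellation.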
Table S4: The functions $f_{n}^{m}(b,x)$ appearing in the integrands of $I^{m}_{n}(b)$ (Eq. (S52)) in the perturbation expansion of the renormalized Lagrangian Eq. (S51).

$I_{n}^{m}(b)$ | $n=0$ | $n=1$ | $n=2$
---|---|---|---
$m=0$ | $\ln(b)\left(\frac{1}{3}b^{2}+b+\frac{1}{2}\right)+b+\frac{3}{4}+\frac{\pi^{2}}{72b}-\frac{\zeta(3)}{48b^{2}}$ | $-\ln(b)\left(\frac{1}{3}+\frac{1}{2b}\right)-\frac{1}{6}-\frac{1}{b}-\frac{1}{4b^{2}}$ | $\frac{1}{12b^{2}}$
$m=1$ | | $\frac{b}{3}+\frac{1}{2b}\left(\ln(b)+1\right)+\frac{\zeta(3)+3}{12b^{2}}$ | $\frac{1}{6b}$
$m=2$ | | | $0$

Table S5: Strong $B$-field expansion (up to order $1/b^{2}$) of the integrals $I^{m}_{n}(b)$ (Eq. (S52)) in the perturbation expansion of the renormalized Lagrangian Eq. (S51).

$\delta L_{HE}=\frac{\Delta}{8\pi^{2}\lambdabar_{D}^{3}}\sum_{n=0}^{\infty}\sum_{m=0}^{n}|\mathbf{e}|^{2n}(\mathbf{\hat{e}}\cdot\mathbf{\hat{b}})^{2m}I_{n}^{m}(b),\quad b=|\mathbf{b}|{.}$ (S51) The functions $I_{n}^{m}$ are integrals of the form $I_{n}^{m}(b)=\int_{0}^{\infty}dx\frac{e^{-x/b}}{x}f_{n}^{m}(b,x){,}$ (S52) where the functions $f_{n}^{m}(b,x)$ are tabulated in Table S4. Making the change of variables $x\to bx^{\prime}$, we can recheck the low-field limit of the Lagrangian in Eq. (10) of the main text $\frac{8\pi^{2}\lambdabar_{D}^{3}}{\Delta}\delta L_{HE}\to\frac{1}{45}\int_{0}^{\infty}dx\>xe^{-x}\left(b^{4}+7(\mathbf{e}\cdot\mathbf{b})^{2}-|\mathbf{e}|^{2}b^{2}\frac{x+2}{2}+|\mathbf{e}|^{4}\frac{x+x^{2}}{8}\right)=\frac{1}{45}[(|\mathbf{e}|^{2}-|\mathbf{b}|^{2})^{2}+7(\mathbf{e}\cdot\mathbf{b})^{2}]{.}$ (S53) The strong-$B$ limit is obtained if we use the fact that for a bounded function $g(x)$ with $g(x)\to 0$ as $x\to 0$, the integral converges to $\int_{0}^{\infty}\frac{dx}{x}e^{-x\delta}g(x)\to g(\infty)\ln(1/\delta),\quad\delta\to 0{.}$ (S54) Taking the derivative or integral with respect to $1/b$, we can generate the subleading or super-logarithmic terms, respectively.
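The integrals entering the low-field limit are elementary Gamma functions; a quick sketch confirming that the bracket integrated against $x\,e^{-x}$ assembles into the Euler-Heisenberg weak-field combination $(|\mathbf{e}|^{2}-|\mathbf{b}|^{2})^{2}+7(\mathbf{e}\cdot\mathbf{b})^{2}$:

```python
from math import gamma

# Moments of the weight x e^{-x}: int_0^inf x^n * x e^{-x} dx = Gamma(n + 2)
def m(n):
    return gamma(n + 2)

# The b^4 and 7(e.b)^2 terms pick up m(0) = 1, the e^2 b^2 term picks up
# int x e^{-x} (x+2)/2 dx, and the e^4 term picks up int x e^{-x} (x+x^2)/8 dx.
c_b4 = m(0)                      # coefficient of b^4
c_e2b2 = (m(1) + 2 * m(0)) / 2   # coefficient of e^2 b^2
c_e4 = (m(1) + m(2)) / 8         # coefficient of e^4
# giving b^4 + 7(e.b)^2 - 2 e^2 b^2 + e^4 = (e^2 - b^2)^2 + 7 (e.b)^2
```

The coefficients come out as $(1,\,2,\,1)$, reproducing $b^{4}-2e^{2}b^{2}+e^{4}=(e^{2}-b^{2})^{2}$ plus the untouched $7(\mathbf{e}\cdot\mathbf{b})^{2}$ term.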
Performing this procedure, we obtain the strong-$B$ expansions of the integrals in Eqs. (S51) and (S52), as tabulated in Table S5. Reading off the leading-order terms from Table S5 and substituting into Eq. (S51), we obtain Eq. (2) of the main text $\frac{8\pi^{2}\lambdabar_{D}^{3}}{\Delta}\delta L_{HE}\to\frac{1}{3}|\mathbf{b}|^{2}\log|\mathbf{b}|+\frac{1}{3}|\mathbf{e}|^{2}|\mathbf{b}|(\mathbf{\hat{e}}\cdot\mathbf{\hat{b}})^{2}.$ (S55)

Figure S2: Dimensionless functions $F$, $G$, $D$ of Eq. (13) and the corresponding equations of the main text versus the dimensionless magnetic field. The functions $G$ and $D$ depend on the relative orientation of the electric and magnetic fields. The left (right) panel corresponds to $\mathbf{b}\parallel\mathbf{e}$ ($\mathbf{b}\perp\mathbf{e}$). For an arbitrary angle between $\mathbf{b},\mathbf{e}$ see Table S6.

Quantity | Prefactor | $|\mathbf{b}|\ll 1$ | $|\mathbf{b}|\gg 1$
---|---|---|---
$\bm{\delta\chi}$ | $\frac{v^{2}}{c^{2}}\frac{\alpha_{D}}{12\pi^{2}}\bm{\mathcal{U}}^{-2}$ | $2|\mathbf{b}|^{2}/5$ | $\log(|\mathbf{b}|)$
$\bm{\delta\epsilon(B)}$ | $\frac{\alpha_{D}}{3\pi}\bm{\mathcal{U}}^{2}$ | $\frac{7}{15}(\mathbf{\hat{e}}\cdot\mathbf{\hat{b}})^{2}-\frac{2}{15}|\mathbf{b}|^{2}$ | $-\log(|\mathbf{b}|)+(\mathbf{\hat{e}}\cdot\mathbf{\hat{b}})^{2}|\mathbf{b}|$
$4\pi\bm{M^{(e)}}$ | $\frac{\mu_{D}}{3\pi\lambdabar_{D}^{3}}|\mathbf{e}|^{2}\bm{\mathcal{U}}^{-1}$ | $\frac{2}{15}\left(7\mathbf{\hat{e}}(\mathbf{\hat{e}}\cdot\mathbf{b})-2\mathbf{b}\right)$ | $-\hat{\mathbf{b}}(\mathbf{{\mathbf{\hat{e}}}}\cdot\mathbf{\hat{b}})^{2}+2{\mathbf{\hat{e}}}(\mathbf{{\hat{e}}}\cdot\mathbf{\hat{b}})-\frac{\mathbf{\hat{b}}}{|\mathbf{b}|}$

Table S6: Asymptotic expressions for the nonlinear magnetic susceptibility $\bm{\delta\chi}$, the magnetically modulated dielectric constant $\bm{\delta\epsilon(B)}$ and the electric modulated magnetization
$4\pi\bm{M^{(e)}}$ tensors for an arbitrary angle between $\mathbf{\hat{e}}$ and $\mathbf{\hat{b}}$.

### S7 Derivation of Susceptibility tensors

Inspecting the classical Lagrangian of the electromagnetic field, $L_{0}=(E^{2}-B^{2})/(8\pi)$, we can derive the (linear and nonlinear) contributions to the susceptibilities from Eq. (S40) and Eq. (S4). If the quantum part of the action is called $\delta L$, the differential magnetic and electric susceptibilities are $\displaystyle\chi_{ij}$ $\displaystyle=\frac{\partial^{2}\delta L}{\partial B_{j}\partial B_{i}}=\frac{v^{2}\alpha_{D}\lambdabar_{D}^{3}}{c^{2}\Delta}\frac{\partial^{2}\delta L}{\partial\text{b}_{l}\partial\text{b}_{k}}\mathcal{U}^{-1}_{ik}\mathcal{U}^{-1}_{jl}{,}$ (S56a) $\displaystyle\chi_{ij}^{e}$ $\displaystyle=\frac{\partial^{2}\delta L}{\partial{E}_{j}\partial{E}_{i}}=\frac{\alpha_{D}\lambdabar_{D}^{3}}{\Delta}\frac{\partial^{2}\delta L}{\partial\text{e}_{l}\partial\text{e}_{k}}\mathcal{U}_{ik}\mathcal{U}_{jl}{.}$ (S56b)

#### S7.1 Linear and low order nonlinear susceptibilities

The cutoff-dependent UV contribution is $\delta L_{1}=\frac{\Delta}{12\pi^{2}\lambdabar_{D}^{3}}\ln\left(\frac{\Lambda}{\Delta}\right)\left(|\mathbf{e}|^{2}-|\mathbf{b}|^{2}\right).$ (S57) In vacuum quantum electrodynamics, this quantity has the same form as the classical electromagnetic Lagrangian, hence it is absorbed into the definition of the electric charge.
However in the material case, it gives rise to the isotropic valence band contribution to the magnetic and electric susceptibilities, respectively $(\tilde{\chi}_{1})_{ij}=-\frac{v^{2}}{c^{2}}\frac{\alpha_{D}}{6\pi^{2}}\ln\left(\frac{\Lambda}{\Delta}\right)\delta_{ij}=-\frac{v^{2}}{c^{2}}(\tilde{\chi}^{e}_{1})_{ij}{.}$ (S58) In the anisotropic case, these become $\bm{\chi_{D}}=-\frac{v^{2}}{c^{2}}\frac{\alpha_{D}}{6\pi^{2}}\ln\left(\frac{\Lambda}{\Delta}\right)\bm{\mathcal{U}}^{-2},\quad\bm{\epsilon_{D}}=\bm{1}+4\pi\bm{\chi_{D}^{e}}=\bm{1}+\frac{2\alpha_{D}}{3\pi^{2}}\ln\left(\frac{\Lambda}{\Delta}\right)\bm{\mathcal{U}}^{2}{.}$ (S59) as in Eqs. (7) and (8) of the main text. The weak field limit of Eq. (S4) contains the lowest order nonlinear terms that are quadratic in $|\mathbf{e}|^{2}-|\mathbf{b}|^{2}$ and $\mathbf{e}\cdot\mathbf{b}$ $\delta L_{2}=\frac{\Delta}{360\pi^{2}\lambdabar_{D}^{3}}\left[\left(|\mathbf{e}|^{2}-|\mathbf{b}|^{2}\right)^{2}+7\left({\bf e}\cdot{\bf b}\right)^{2}\right],$ (S60) from which the nonlinear differential susceptibility tensors for an isotropic system follow as $\displaystyle\delta\tilde{\chi}_{ij}$ $\displaystyle\to\frac{v^{2}}{c^{2}}\frac{\alpha_{D}}{180\pi^{2}}\left[2|\mathbf{b}|^{2}(2\hat{b}_{i}\hat{b}_{j}+\delta_{ij})+|\mathbf{e}|^{2}(7\hat{e}_{i}\hat{e}_{j}-2\delta_{ij})\right],$ (S61a) $\displaystyle\delta\tilde{\epsilon}_{ij}$ $\displaystyle\to\frac{\alpha_{D}}{45\pi}\left[|\mathbf{b}|^{2}(7\hat{b}_{i}\hat{b}_{j}-2\delta_{ij})+2|\mathbf{e}|^{2}(2\hat{e}_{i}\hat{e}_{j}+\delta_{ij})\right],\quad\hat{e}_{i}=\frac{\text{e}_{i}}{|\mathbf{e}|},\>\hat{b}_{i}=\frac{\text{b}_{i}}{|\mathbf{b}|}$ (S61b) These quantities can be re-expressed in the anisotropic case by using Eq. 
(S1.1) as $\displaystyle\bm{\delta\chi}$ $\displaystyle=\frac{v^{2}}{c^{2}}\frac{\alpha_{D}}{180\pi^{2}}\left(4(\bm{\mathcal{U}^{-1}}\mathbf{b})\otimes(\bm{\mathcal{U}^{-1}}\mathbf{b})+2|\mathbf{b}|^{2}\bm{\mathcal{U}}^{-2}+7(\bm{\mathcal{U}^{-1}}\mathbf{e})\otimes(\bm{\mathcal{U}^{-1}}\mathbf{e})-2\bm{\mathcal{U}}^{-2}|\mathbf{e}|^{2}\right),$ (S62a) $\displaystyle\bm{\delta\epsilon}$ $\displaystyle=\frac{\alpha_{D}}{45\pi}\left(4(\bm{\mathcal{U}}\mathbf{e})\otimes(\bm{\mathcal{U}}\mathbf{e})+2|\mathbf{e}|^{2}\bm{\mathcal{U}}^{2}+7(\bm{\mathcal{U}}\mathbf{b})\otimes(\bm{\mathcal{U}}\mathbf{b})-2\bm{\mathcal{U}}^{2}|\mathbf{b}|^{2}\right).$ (S62b) The electric modulated magnetization is then simply $\bm{M^{(e)}}=\bm{\delta\chi(E)B}=B_{\star}\bm{\delta\chi(E)\mathcal{U}}\mathbf{b}$ and reads $4\pi\bm{M^{(e)}}=\frac{2\mu_{D}}{45\pi\lambdabar_{D}^{3}}\bm{\mathcal{U}}^{-1}\left(7\mathbf{e}(\mathbf{e}\cdot\mathbf{b})-2\mathbf{b}|\mathbf{e}|^{2}\right).$ (S63) To obtain the expressions in the main text we work in coordinates where $\bm{\mathcal{U}}=\text{diag}(U_{xx},U_{yy},U_{zz}).$ (S64) For simplicity we assume $\bm{E}$ and $\bm{B}$ are directed along principal directions, e.g. $\bm{E}\parallel\bm{B}\parallel\mathbf{\hat{x}}$ for the parallel field configuration and $\bm{E}\parallel\mathbf{\hat{x}},\>\>\bm{B}\parallel\mathbf{\hat{y}}$ for the perpendicular field configuration. When these conditions are met, we can reduce, for example, the parallel weak-field magnetic susceptibility to $(\bm{\delta\chi_{\parallel}(E)})_{xx}=\bm{\mathcal{U}}^{-2}_{xx}\frac{v^{2}}{c^{2}}\frac{\alpha_{D}}{36\pi^{2}}|\mathbf{e}|^{2},\quad 4\pi(\bm{M^{(e)}_{\parallel}})_{x}=(\bm{\mathcal{U}}^{-1}\mathbf{b})_{x}\frac{2\mu_{D}}{9\pi\lambdabar_{D}^{3}}|\mathbf{e}|^{2}.$ (S65) For a summary of the susceptibilities for an arbitrary mutual orientation of $\bm{E}$ and $\bm{B}$ see Table S6.
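As a sanity check, in the isotropic limit $\bm{\mathcal{U}}=\bm{1}$ the bracket of Eq. (S62a) must collapse componentwise to the isotropic tensor of Eq. (S61a); a minimal sketch with illustrative field values (plain Python, no external libraries):

```python
# Compare the brackets of Eqs. (S62a) and (S61a) with U = identity,
# for arbitrary illustrative field vectors e and b.
e = (0.1, 0.2, 0.0)
b = (0.3, 0.1, 0.5)
e2 = sum(x * x for x in e)
b2 = sum(x * x for x in b)

def s62a(i, j):
    """4 b_i b_j + 2|b|^2 d_ij + 7 e_i e_j - 2|e|^2 d_ij  (U = 1)."""
    d = 1.0 if i == j else 0.0
    return 4 * b[i] * b[j] + 2 * b2 * d + 7 * e[i] * e[j] - 2 * e2 * d

def s61a(i, j):
    """2|b|^2 (2 bh_i bh_j + d_ij) + |e|^2 (7 eh_i eh_j - 2 d_ij)."""
    d = 1.0 if i == j else 0.0
    bh = [x / b2 ** 0.5 for x in b]
    eh = [x / e2 ** 0.5 for x in e]
    return 2 * b2 * (2 * bh[i] * bh[j] + d) + e2 * (7 * eh[i] * eh[j] - 2 * d)

# the two brackets agree for all index pairs (i, j)
mismatch = max(abs(s62a(i, j) - s61a(i, j)) for i in range(3) for j in range(3))
```

The overall prefactor $\frac{v^{2}}{c^{2}}\frac{\alpha_{D}}{180\pi^{2}}$ is common to both forms and is omitted from the comparison.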
As we discussed in section S1, using the inverse anisotropy transformation, the results for the susceptibilities can be re-written in terms of the applied external fields $\bm{E}$ and $\bm{B}$. The angle between $\mathbf{e}$ and $\mathbf{b}$ as a function of the angle between $\bm{E}$ and $\bm{B}$ is given in Eq. (S24).

#### S7.2 Higher nonlinear corrections to susceptibilities

Now that we have the renormalized Lagrangian in the form of Eq. (S51), with the integrals $I^{m}_{n}$ tabulated in Table S5, we can calculate the susceptibilities according to Eq. (S56) in notable cases. If the electric field is zero, we have $\frac{8\pi^{2}\lambdabar_{D}^{3}}{\Delta}\delta L_{HE}(|\mathbf{e}|=0)\to\frac{|\mathbf{b}|^{2}}{3}\ln(|\mathbf{b}|)+|\mathbf{b}|\ln(|\mathbf{b}|)+...$ (S66) consistent with Eq. (2) of the main text. Therefore the contribution to the magnetic susceptibility due to the applied magnetic field is $\bm{\delta\chi}(|\mathbf{e}|=0)\to\frac{v^{2}}{c^{2}}\frac{\alpha_{D}}{12\pi^{2}}\bm{\mathcal{U}}^{-2}\ln|\mathbf{b}|{,}$ (S67) as in the corresponding equation of the main text. The term in the Lagrangian that is second order in the electric field is $\frac{8\pi^{2}\lambdabar_{D}^{3}}{\Delta}\frac{\partial\delta L_{HE}}{\partial|\mathbf{e}|^{2}}\bigg{|}_{|\mathbf{e}|=0}=-\ln(|\mathbf{b}|)\left(\frac{1}{3}+\frac{1}{2|\mathbf{b}|}\right)+(\mathbf{\hat{\mathbf{e}}}\cdot\mathbf{\hat{b}})^{2}\left(\frac{|\mathbf{b}|}{3}+\frac{1}{2|\mathbf{b}|}\left(\ln|\mathbf{b}|+1\right)\right)+...$ (S68) Then the leading-order magnetization contribution is obtained from $M=\partial L/\partial B$ as $4\pi\bm{M}^{(\bm{e})}\to\frac{\mu_{D}}{3\pi\lambdabar_{D}^{3}}|\mathbf{e}|^{2}\bm{\mathcal{U}^{-1}}\left(-\hat{\mathbf{b}}(\mathbf{{\mathbf{\hat{e}}}}\cdot\mathbf{\hat{b}})^{2}+2{\mathbf{\hat{e}}}(\mathbf{{\hat{e}}}\cdot\mathbf{\hat{b}})-\frac{\mathbf{\hat{b}}}{|\mathbf{b}|}+...\right),$ (S69) as in the corresponding equation of the main text.
The dielectric response, linear in the electric field, is due to the term in the Lagrangian that is second order in the electric field, from which we obtain the magnetic field contribution in the isotropic case as $\bm{\delta{\epsilon}}\to\frac{\alpha_{D}}{3\pi}\left(-\ln(|\mathbf{b}|)(\bm{\mathcal{U}}\mathbf{\hat{e}})\otimes(\bm{\mathcal{U}}\mathbf{\hat{e}})+(\bm{\mathcal{U}}\mathbf{\hat{b}})\otimes(\bm{\mathcal{U}}\mathbf{\hat{b}})|\mathbf{b}|\right){.}$ (S70) The leading-order electric field contribution to the dielectric tensor comes from the term that is $\sim|\mathbf{e}|^{4}$ in the Lagrangian, $\delta{\tilde{\epsilon}}_{ij}\to\frac{\alpha_{D}}{6\pi}\frac{|\mathbf{e}|^{2}}{\mathbf{|b|}^{2}}\left(\delta_{ij}+2\hat{e}_{i}\hat{e}_{j}+|\mathbf{b}|\delta_{ij}(\mathbf{\hat{e}}\cdot\mathbf{\hat{b}})^{2}+2|\mathbf{b}|[\hat{e}_{i}\hat{b}_{j}+\hat{e}_{j}\hat{b}_{i}](\mathbf{\hat{e}}\cdot\mathbf{\hat{b}})+\mathbf{|b|}\hat{e}_{i}\hat{e_{j}}\right){,}$ (S71) hence it is small in the parameter $|\mathbf{e}|^{2}/\mathbf{|b|}\ll 1$.

### S8 Material Applications

#### S8.1 Bismuth and $\text{Bi}_{0.9}\text{Sb}_{0.1}$

While our consideration is primarily for insulators, we demonstrated, for example, that the calculated magnetic susceptibility remains valid for valence-band contributions in gapless materials. However, real experiments often include, particularly for insulators with a small gap, a certain density of free electrons. Then the analysis of experimental settings must take into account contributions from free electrons, or experimental conditions must be found under which these contributions are suppressed. Bismuth has a band gap of $2\Delta=15.5\>\text{meV}$ at the $L$-point and a Fermi level of $\mu=35\>\text{meV}$ Fuseya _et al._ (2015) measured from the midgap point of the Dirac bands. The electronic Fermi surface is composed of 3 electron ellipsoids that lie in the binary-bisectrix (x-y) plane perpendicular to the trigonal (z) axis. There is also a hole pocket along the trigonal axis.
The hole Fermi level is about $-196\>\text{meV}$ measured from the midgap point of the hole Dirac bands. The hole band gap is about $2\Delta=370\>\text{meV}$. Liu and Allen (1995) The hole contribution to the susceptibility in the binary-bisectrix plane is small, $\chi\sim-10^{-7}$, due to the large hole band gap and the alignment of the hole ellipsoid. The conduction bands are polarized in a magnetic field, so that only the e1 ellipsoid is populated when a field is applied in the binary direction Iwasa _et al._ (2019). Furthermore, the e1 conduction electrons can exhibit de Haas-van Alphen oscillations, which are suppressed above $B=5\>\text{T}$. At higher magnetic fields the nonlinear diamagnetism is identical to that of the insulating alloy $\text{Bi}_{0.9}\text{Sb}_{0.1}$, which is identical to bismuth except for the absence of free carriers at the Fermi level. Based on the band structure calculations Liu and Allen (1995), we write the diagonalized velocity tensor as $\bm{\mathcal{V}}=\text{diag}(1.7,\>1.5,\>0.4)v,\quad c/v=188{.}$ (S72) The Dirac cone is located at the L-point; therefore the velocity operator in the crystal frame is obtained by using an appropriate rotation-reflection operator. The critical field is about $B_{\star}=36\>\text{mT}$. We find the total susceptibility by adding the contributions from each L-point located symmetrically about the trigonal axis, $\bm{\chi_{1}}+\bm{\delta\chi}(\bm{B})=\sum_{n=1}^{3}(\bm{R}^{T})^{n}\left[\bm{\chi_{1}}+\bm{\delta\chi}\left(\frac{\bm{\mathcal{U}}^{-1}\bm{R}^{n}\bm{B}}{B_{\star}}\right)\right]^{(L-point\>n)}\bm{R}^{n},$ (S73) where $R$ implements a rotation by $120^{\circ}$ about the trigonal axis. The first order contribution is $\bm{\chi_{D}}=-10^{-5}\begin{pmatrix}3&0&0\\\ 0&3&0\\\ 0&0&0.5\end{pmatrix}.$ (S74) If we apply the magnetic field in the binary ($x$) direction we have $\delta\chi_{xx}(B_{x}=5\>\text{T})=13\times 10^{-6}.$ (S75) For the full field range see Fig. 2. The other quantities (dielectric enhancement, magnetization, etc.)
are calculated in a similar fashion. For example, for the dielectric enhancement when $E\parallel B$ along the $x$-direction we have $\delta\epsilon_{\parallel}(B_{x})\to 10B_{x}.$ (S76) Given that the dielectric constant of the bismuth alloy is about $\epsilon\sim 100$ (see Table S3), arising mostly from the ionic crystal, the magnetic-field-induced nonlinear contribution due to the Dirac band is enormous even at relatively weak $B$-fields. To compare the nonlinear behavior of our theory with the experiment by Iwasa et al. Iwasa _et al._ (2019), we take the reference level of susceptibility to be $\chi_{ref}=-\chi(50\>\text{T})$. According to our theory $\chi_{theory}^{binary}(50\>\text{T})=-5.5\times 10^{-6},\quad\chi_{theory}^{bisectrix}(50\>\text{T})=-8.0\times 10^{-6}.$ (S77) The Iwasa experiment has the reference points $\chi_{exp}^{binary}(50\>\text{T})=-18.7\times 10^{-6},\quad\chi_{exp}^{bisectrix}(50\>\text{T})=-15.4\times 10^{-6}.$ (S78) The difference in the reference levels is due to the additional contributions to the linear magnetic susceptibility that are not contained in the Dirac theory. Although the logarithmic Dirac contribution in Eq. (8) constitutes a significant part of the linear diamagnetic response, non-Dirac core-shell electron contributions can be equally important. Furthermore, there can be additional errors due to the choice of UV cut-off, and subleading terms in the summation over the Landau levels can contribute to the total. None of the above-mentioned factors affect the nonlinear response, which is dominated by the Dirac bands. Therefore, once the overall constant offset due to the linear susceptibility is adjusted, we get excellent agreement in the nonlinear behavior of the measured susceptibility in bismuth, as seen in Fig. 2(a).

#### S8.2 TaAs

TaAs has 12 pairs of Weyl fermions Weng _et al._ (2015), 4 pairs forming hole pockets $2\>\text{meV}$ below the Fermi level and 8 forming electron pockets $21\>\text{meV}$ above the Fermi level.
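The Fermi-pocket contribution becomes unimportant above the threshold field $B\gg 4E_{F}^{2}/(v_{1}v_{2}e\hbar)$. A rough back-of-the-envelope sketch in SI units, where the choice of principal velocity components $v_{1}=1.7v$ and $v_{2}=0.35v$ entering the threshold is our assumption:

```python
# Threshold field B >> 4 E_F^2 / (v1 v2 e hbar), with E_F = 21 meV and c/v = 447
E_F = 21e-3 * 1.602e-19       # Fermi level in joules
c = 2.998e8                   # speed of light, m/s
v = c / 447                   # isotropic Dirac velocity scale
v1, v2 = 1.7 * v, 0.35 * v    # assumed principal velocity components
e_ch = 1.602e-19              # elementary charge, C
hbar = 1.055e-34              # reduced Planck constant, J s

B_threshold = 4 * E_F ** 2 / (v1 * v2 * e_ch * hbar)  # on the order of 10 T
```

With these inputs the threshold evaluates to roughly 10 T, matching the estimate quoted in the text.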
The Fermi-level contribution at $E_{F}=21\>\text{meV}$, which is not accounted for in our theory, becomes unimportant for $B\gg\frac{4E_{F}^{2}}{v_{1}v_{2}e\hbar}\approx 10\>\text{T}.$ We represent each pair by a single massless Dirac fermion with a velocity tensor comparable to the calculations and measurements Zhang _et al._ (2019); Weng _et al._ (2015) $\bm{\mathcal{V}}=\text{diag}(1.7,\>1.7,\>0.35)v,\quad c/v=447,$ (S79) where the $z$-component is the velocity along the $c$-axis. Since the system is gapless we consider the situation where $E=0$. The nonlinear susceptibility in the gapless limit is $\bm{\chi_{D}+\delta\chi}\to-\frac{\alpha_{D}}{12\pi^{2}}\frac{v^{2}}{c^{2}}\ln\left(\frac{c\Lambda^{2}}{eB\hbar v^{2}}\right)\bm{\mathcal{U}}^{-2}.$ (S80) To compare the nonlinear behavior of our theory with the experiment by Zhang et al. Zhang _et al._ (2019), we take the reference level of susceptibility to be $\chi_{ref}=-\chi(30\>\text{T})=1.56\times 10^{-5}$. #### S8.3 $\text{Pb}_{0.8}\text{Sn}_{0.2}\text{Te}$ The gap as a function of the Sn content is Hayasaka and Fuseya (2016) $2\Delta[\text{meV}]=182-480x.$ (S81) At $x=0.235$ we have the inverse masses $m^{-1}_{\perp}=100m^{-1}_{e}$ and $m^{-1}_{\parallel}=11.25m_{e}^{-1}$, where the gap is $2\Delta=69.1\>\text{meV}$. At $x=0.510$ we have the same inverse masses, where the gap is of similar magnitude, $2\Delta=-62.8\>\text{meV}$ (TI phase). We will base our estimates on these values. The parallel direction denotes the $z$-axis, aligned along the [001] direction. The velocity tensor is then $\bm{\mathcal{V}}=\text{diag}(1.4,\>1.4,\>0.5)v,\quad c/v=580.$ (S82) There are a total of 4 Dirac fermions located at the L-points symmetric about the $z$-axis [001] direction of the rock salt structure. Due to the relatively large gap, the most striking property is the electric field modulated magnetization.
If we apply $\bm{B}\parallel\bm{E}$ in the $x$-direction with $B\sim 5\>\text{T}$ and $E=E_{\star}/3$, we have $4\pi({M^{e}_{\parallel}})_{x}(B_{x}=5\>\text{T},E_{x}=10^{4}\>\text{V/cm})\approx 0.5\>\mu\text{T}.$ (S83)
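The valley summation of Eq. (S73) can be sketched numerically. The snippet below is a minimal illustration of the linear (field-independent) part only: it rotates a single-valley susceptibility tensor into each of the three L-point orientations (related by $120^{\circ}$ rotations about the trigonal $z$-axis) and adds them. The single-valley tensor used here is an arbitrary example, not a fitted value. Note how the in-plane anisotropy of a single valley cancels in the sum, consistent with the in-plane isotropic form of Eq. (S74).

```python
import numpy as np

# Rotation by 120 degrees about the trigonal (z) axis.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

def symmetrize(chi_valley):
    """Linear part of the valley sum: rotate the single-valley tensor
    into each of the three symmetry-related orientations and add."""
    total = np.zeros((3, 3))
    for n in range(3):
        Rn = np.linalg.matrix_power(R, n)
        total += Rn.T @ chi_valley @ Rn
    return total

# Arbitrary single-valley tensor with in-plane anisotropy.
chi_valley = np.diag([-2e-5, -1e-5, -5e-6])
chi_total = symmetrize(chi_valley)
# The xx and yy components come out equal: the trigonal symmetry of
# the valley sum restores in-plane isotropy, as in Eq. (S74).
print(np.round(chi_total, 10))
```

The same construction, with four valleys and the appropriate rock-salt symmetry operations, would apply to the L-points of $\text{Pb}_{1-x}\text{Sn}_{x}\text{Te}$.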
# 2020 Conference on Self-Organising Systems at Christian-Albrechts University in Kiel Combining SimTrust and Weighted Simple Exponential Smoothing Tobias Michel Latta Intelligent Systems Christian-Albrechts University Kiel, Germany <EMAIL_ADDRESS> ###### Abstract In the domain of Autonomic and Organic Computing, the entities of a distributed system vary, as do the efficiency and the intention of their work. Therefore, a scalable mechanism is needed to incentivise/sanction entities which contribute towards/against the system goal. Trust is a suitable metric for finding benevolent entities. In this paper, we focus on the one hand on the SimTrust model, which establishes trust between entities when they share interests and opinions, using tagging information. The second model is the Weighted Simple Exponential Smoothing Trust metric (WSES), which operates on explicitly rated items. WSES follows two basic rules which ensure a logical rating mechanism. When putting these two models in context, SimTrust has advantages on items that have not been rated yet or cannot easily be rated. WSES is a trust metric which returns good results on explicit rating values. We propose concepts for combining both approaches and state in which cases they are incompatible. ###### Index Terms: ubiquitous, Autonomic computing, Organic computing, trust, tagging, keywords, keywording, distributed, self-organising systems ## I Introduction In every open system where multiple entities interact with each other, trust plays a meaningful part. In self-organising systems (SOS), a successful interaction between two entities may only be performed when both parties can estimate the behaviour of one another. Because these systems are becoming more accessible, the number of interactable entities is growing. With that, the entities are increasingly unfamiliar with each other [1]. To establish successful cooperation, trust metrics are integrated into the architecture of the SOS.
This is often done via Collaborative Filtering (CF). CF takes into account the behaviour of every entity in the SOS. Similar entities get clustered, and recommendations are created based on the behaviour of the other entities in the cluster [2], [3]. This implies that with increasing user activity the quality of clustering and recommendations improves. On the other hand, for entities with little interaction history or for novel entities, the quality of recommendations is not sufficient. This is typically referred to as the cold-start problem [4]. SimTrust addresses this issue by having each entity provide personalized tags upon registration. This way, no explicit rating is needed to find quantified similarities between entities [5]. Besides the chosen fundamentals of trust derivation (e.g. CF or SimTrust), it is also important which metric is used. An intuitive metric might be rating all interactions on a certain scale. The Weighted Simple Exponential Smoothing metric (WSES) takes into account a set of rules which need to be followed to ensure an intuitive and fair metric [6]. However, there are many different approaches to this, and the question arises which approach is best suited for a typical SOS. The contribution of this paper is comparing the performance of SimTrust and WSES on certain use cases [7], [8], [9] and defining possible combinations where they can or cannot benefit from each other. In Section 2, the use of trust in SOS is explained, with a focus on deriving it from self-tagging by entities and from explicit ratings by third parties. In Section 3, trust from tagging is introduced in connection with its implementation via SimTrust. Afterwards, this is done analogously for trust from rating with the WSES metric in Section 4. After both approaches for establishing trust have been described, they are compared in Section 5. SimTrust and WSES are applied in theory to different use cases as well as combined to explore possible beneficial outcomes.
In Section 6, a conclusion as well as a final discussion is provided. Figure 1: Visualization of the user-tag-keyword system ## II Trust in Self-organising systems Self-organising systems are characterized by the varying participation and behaviour of entities. Due to the lack of central control and surveillance over participants, a decentralized concept is needed that stimulates the contribution of entities towards a common system goal. In particular, the entities have to take over the responsibility of finding and interacting with suitable others, as they are completely autonomous and may belong to different authorities. They have to find cooperation partners that reach a certain level of efficiency and share the intention of being benevolent towards a common system goal [10]. To support this process, malevolent or inefficient entities have to be isolated. This can be done via computational trust, which serves as a framework for entities to select and interact with others [10]. A standard approach to quantify trust for the system and the entities is CF. Similarities in previous activities and the known characteristics of two entities imply trust among them [11]. This approach is applied, for example, to item recommendation in e-commerce [12], [13]. Besides this technique, there are many more ways of identifying trust. By providing tags, entities enter information about themselves, as is done in content-based filtering [4]. This information can be used to group similar entities, which implies trust among them, as will be explained in the following section. ## III Trust from Tagging - SimTrust The first approach to trust addressed in this paper is presented in this section. To build a basic understanding, trust derived from tagging is introduced in the first subsection. Building on that, the implementation via SimTrust will be covered in the second subsection.
### III-A Tagging as a Trust indicator In content-based filtering, when the user of an e-commerce platform has selected an item with a certain set of characteristics, they are likely to get another item recommended that has similar characteristics. So when entities have to define features for themselves, it is possible to group them by these. This can be expressed via multiple short labels or keywords, called tags [14]. Research shows that there is a correlation between similar entity behaviour and trust [15]. In the context of SOS, this can be helpful for entities to solve problems in cooperation [11]. ### III-B Implementation via SimTrust SimTrust is an approach by Bhuiyan et al. [5]. It uses trust for recommending items to entities based on other entities they trust. Similar entities get recommended items that are preferred or have already been acquired by the entities they trust. These similarities are quantified in the form of shared tags. Because the item tags are set by the entities, they lack standardization and unification. Therefore, the first step of SimTrust is to derive the semantic meaning of a tag by connecting it with keywords derived from the item description. In Fig. 1, the connections between entities (users in this case), tags and keywords are presented. A user $u_{i}$ (referenced by $i$) defines themselves with a set of tags $T_{i}$. The semantics of the tags are derived from a text-mining procedure called tf-idf [16]. It connects a tag $t_{ij}$ with a set of keywords $w_{ij}$ and their frequencies $v_{ij}$. The $i$ references the user while the $j$ references the specific tag. The frequency of a keyword is a measure of its importance in defining the tag. Tags $t_{ij}$ and $t_{ik}$ (another tag from the same user) are considered similar when a function $sim(v_{ij},v_{ik})$ reaches a certain threshold. The question arises how it is possible to calculate a trust value between two users $u_{i}$ and $u_{j}$.
This is solved by calculating the similarity of interest for every keyword $k$. These values are summed and averaged to express the overall similarity of $u_{i}$ and $u_{j}$. This concept has been tested on a dataset with 2,200 users and 18,663 books, in comparison to a classic CF algorithm that would quantify trust among users based on the overlap of previous ratings using Jaccard’s coefficient [17]. To compare the algorithms, all users were assigned personal lists of preferred items $T_{i}$. Later, the recommendations made by the algorithms $P_{i}$ were compared to $T_{i}$ via a popular method called ”Precision and Recall” by Cleverdon [18] in combination with a performance metric called F1 by Sarwar et al. [19]. Fig. 2 from Bhuiyan et al. [5] shows an improvement of 200% when comparing the performance of traditional collaborative filtering (CF) to SimTrust (ST). Figure 2: Recommender evaluation of SimTrust (ST) and Collaborative Filtering (CF) by Bhuiyan et al. ## IV Trust from Rating - WSES The second approach to trust is presented in this section. After the basics of trust derived from ratings are explained, a specific implementation via WSES will be covered in the second subsection. ### IV-A Rating as a Trust indicator In comparison to the previous tag-based approach, using explicit ratings seems to be a more intuitive way of establishing trust between entities. From school grades to IMDb ratings to sports disciplines: linear number-based ratings, like variants of the Likert scale [20], aim for explicitness and comprehensibility even for novel participants of a system. Because of their quantifiability, ratings serve as a suitable base for CF. The algorithm can easily group entities with similar ratings on similar items [21]. On the other hand, ratings are very subjective due to the different requirements entities set for the items [22], [11]. Another issue is the rating metric itself. The value ranges are typically binary or float values.
But how the individual ratings of an entity are aggregated into an overall rating of an item is not as intuitive as it seems. Therefore, Kantert et al. [6] have developed a rating metric that fulfills two requirements which ensure a fair and realistic rating value for an item or entity. This metric is presented in the following subsection. ### IV-B Implementation via Weighted SES Metric In the context of the previously mentioned Autonomic Computing [23] and Organic Computing [24] initiatives, trust has been considered as a means to assess the expected behaviour of participants in open, self-organising system constellations. Kantert et al. [6] have developed a metric for accumulating explicit ratings which prevents two major problems that intuitive metrics have. For example, normalizing the sum of ratings is faulty because it does not incorporate the number of ratings an entity has received. The following example by Kantert et al. [6] illustrates this behaviour: $R_{A}:=(1,1,1)$ (1) $F(R_{A}):=\frac{1+1+1}{3}=1$ (2) $R_{B}:=(1,1,1,0.5,0.5,0.5)$ (3) $F(R_{B}):=\frac{3*1+3*0.5}{6}=0.75$ (4) In this exemplary SOS, entities A and B perform tasks for which they receive a real number rating between 0 and 1 ($\mathfrak{R}_{+}$), depending on the complexity or length of the task. Entity A has completed three tasks that are rewarded with three good ratings (1). The normalized average value is therefore also a good value (2). Another entity B has completed twice as many operations, with three good ratings and three medium ratings (3). Intuitively, B should have a better overall rating than A because it fulfilled twice as many operations as A did and also received three good ratings. On the contrary, the normalized average value is lower than for A (4). From this, two requirements are derived: R1 describes the behaviour of a metric when two ratings $r_{1}$ and $r_{2}$ are added to a set of ratings $R_{n}$. For adding, the function $\mathfrak{U}$ is introduced.
If $r_{1}$ is bigger than $r_{2}$, the resulting reputation $\mathfrak{T}(\mathfrak{U}(R_{n},r_{1}))$ has to be higher than $\mathfrak{T}(\mathfrak{U}(R_{n},r_{2}))$, unless the latter is already the maximum value. The second requirement R2 ensures that every positive rating increases the reputation until the maximum value is reached. This prevents the faulty behaviour of the initial example. $\begin{gathered}\forall r_{1},r_{2}\in\mathfrak{R}_{+}:\mathfrak{T}(\mathfrak{U}(R_{n},r_{1}))>\mathfrak{T}(\mathfrak{U}(R_{n},r_{2}))\vee\\ \mathfrak{T}(\mathfrak{U}(R_{n},r_{2}))=1,\\ r_{1}>r_{2}\Rightarrow|\mathfrak{R}_{+}|>1\end{gathered}$ (R1) $\forall r\in\mathfrak{R}_{+}:\mathfrak{T}(\mathfrak{U}(R_{n},r))>\mathfrak{T}(R_{n})\vee\mathfrak{T}(R_{n})=1$ (R2) Binary and continuous trust metrics have been checked against R1 and R2. They failed on at least one requirement. The weighted trust metric is the first proposal by Kantert et al. [6]. The ratings are represented by float values from $[-1;1]$ (5), and only in a small border case, where the per-entity rating storage is exceeded, does (R2) not hold. The second and final proposal includes Simple Exponential Smoothing (SES) [25] to form the Weighted SES Trust metric. It is an ”advanced version of a rolling average, but it does not have to remember historic values” [6]. Newer ratings have more impact. This can be adjusted via the $\alpha$ value in (7). The memory usage is reduced to a fixed pair of values (6), because the semantic values (weights) for positive and negative ratings have to be stored independently (case distinction in (7)). The trust value $\tau^{s}$ is then calculated according to (8).
$\mathfrak{R}^{S}:=[-1,1],r^{S}\in\mathfrak{R}^{S}$ (5) ${R}^{S}:=[0,1]^{2}$ (6) $\begin{gathered}R^{S}_{n+1}:=\mathfrak{U}^{S}(R^{S}_{n},r)\\ :=\begin{cases}(p_{1}*\alpha+(1-\alpha)*r,p_{2}*\alpha)&r>0,(p_{1},p_{2})\in R^{S}_{n}\\ (p_{1}*\alpha,p_{2}*\alpha-(1-\alpha)*r)&r<0,(p_{1},p_{2})\in R^{S}_{n}\\ R^{S}_{n}&otherwise\end{cases}\end{gathered}$ (7) $\tau^{s}:=\mathfrak{T}^{s}(R^{S}_{n}):=\frac{p_{1}-p_{2}}{p_{1}+p_{2}},(p_{1},p_{2})\in R^{S}_{n}$ (8) A proof that WSES fulfills R1 and R2 is given in [6]. To challenge the metrics on performance, a Trusted Desktop Grid (TDG) [26] is set up with 100 benevolent entities which operate on tasks; at the 50000th system tick, 100 new entities are added to the system. The new entities behave maliciously. The performance is tested in terms of how quickly and how strongly the malevolent entities are isolated by the initial 100 benevolent entities. Fig. 3 from Kantert et al. [6] shows the average reputation of the agent types over time (ticks) when the WSES trust metric was used. The attack at tick 50000 is clearly visible. While the reputation of the egoistic agents falls from $0.05$ to $-0.65$, the adaptive agents rise from $0.8$ to $1$. Fig. 4 shows a direct comparison between the continuous ($\mathfrak{T}^{c}$), weighted ($\mathfrak{T}^{w}$) and weighted simple exponential smoothed metric ($\mathfrak{T}^{s}$). The weighted and WSES metrics are significantly better at rating well-behaving entities than the continuous metric. This is reflected by the reputation averages being close to $1$. All three metrics penalize malicious behaviour by the attackers very strictly, with negative reputations averaged around $-0.76$. Figure 3: Reputation for metric $\tau_{s}$. 1 is the best and −1 the worst reputation. By Kantert et al. [6]. Figure 4: Reputation averaged per metric, by Kantert et al. [6].
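Read literally, the update rule (7) and the reputation (8) can be sketched in a few lines. This is a toy reading of the printed formulas, not the reference implementation of Kantert et al.; the initial state $(0,0)$ and the choice $\alpha=0.9$ are assumptions made here for illustration. The sketch also reproduces the averaging pitfall of Eqs. (1)-(4):

```python
def naive_average(ratings):
    """The faulty metric of Eqs. (1)-(4): a plain mean ignores how many
    tasks an entity has completed."""
    return sum(ratings) / len(ratings)

# Entity B did twice the work of A yet ends up with a lower score.
assert naive_average([1, 1, 1]) == 1.0
assert naive_average([1, 1, 1, 0.5, 0.5, 0.5]) == 0.75

def wses_update(state, r, alpha=0.9):
    """One application of U^S in Eq. (7); state = (p1, p2) stores the
    decayed weights of positive and negative ratings separately."""
    p1, p2 = state
    if r > 0:
        return (p1 * alpha + (1 - alpha) * r, p2 * alpha)
    if r < 0:
        return (p1 * alpha, p2 * alpha - (1 - alpha) * r)
    return state

def wses_trust(state):
    """Reputation tau^s of Eq. (8)."""
    p1, p2 = state
    return (p1 - p2) / (p1 + p2)

state = wses_update((0.0, 0.0), 1.0)   # first (good) rating
state = wses_update(state, -0.5)       # one bad rating pulls tau down
before = wses_trust(state)
state = wses_update(state, 0.8)        # requirement R2: any positive
assert wses_trust(state) > before      # rating raises the reputation
```

Note that $\tau^{s}$ is undefined for the empty state $(0,0)$; the sketch therefore only evaluates it after at least one rating has been stored.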
Because the WSES metric fulfills both requirements (R1) and (R2) unconditionally and has the overall best incentivising/penalizing behaviour, it will serve as the comparison to the SimTrust metric. Differences and possibilities to combine these two approaches are presented in the following section. ## V Comparing SimTrust and WSES In the following section, SimTrust and WSES will be compared. Because of the different fundamentals of the two, it is not trivial to find a common benchmark. Therefore, both systems are compared on different use cases to point out individual advantages in the first subsection. Subsequently, concepts for combining the advantages are proposed in the second subsection. Figure 5: Approach for combining WSES and SimTrust on a database for interactable items. Figure 6: Approach for combining WSES and SimTrust to get additional information about entity rating. ### V-A Approaches on different use cases Both of the discussed metrics aim at establishing trust among entities. While SimTrust was ultimately developed for item recommendation based on grouped entities, the main field of application for the WSES metric would be the mere isolation of malevolent entities in a SOS that follows a common goal. Finding a mutual transformation is therefore not trivial. Both metrics assume multiple entities that benefit from being grouped with entities that they trust. In SimTrust, trust among entities is derived from tagging information. The tag semantics are derived from item descriptions. Therefore, the cold-start problem is not as severe as it is with the WSES metric [5]. So SimTrust might have advantages in freshly established systems that do not have a large database of item ratings. Another advantage of SimTrust over WSES might be in use cases where explicit rating is not common, e.g. social networks or dating platforms like Tinder [27]. Here it might be more socially acceptable to group entities based on their tags than to use explicit ratings.
Also, the overall experience for the entity might be improved when it is not only grouped with other entities based on geographical properties but also on common features. WSES does not hold any items. Entities fulfil tasks to gain an explicit rating. In comparison to SimTrust, this might be more accurate and reasonable for entities of these systems. They get fair ratings, ensured by R1 and R2. SimTrust might be incompatible with an evaluation against R1 and R2 due to the lack of explicit trust values. Trust from tagging mostly relies on numeric values for calculating the similarity of tags, not for an entity’s rating. So in SimTrust, neither explicit ratings are used nor is any comparable metric involved. In WSES, entities that do not follow the common system goal will get strictly isolated from the rest of the system. In this case, isolation means that no more tasks are delegated to the entity, but it might still be able to submit new work units. It will be inactive as a working unit, though. In SimTrust this would result in not receiving any recommendations. Compared to the strength of the penalty in WSES, this penalty in SimTrust would be less severe, because an entity can still be active in the SimTrust environment even though it does not get any recommendations. After outlining advantages and disadvantages of the systems over each other, combining them for better overall results will be approached in the next subsection. ### V-B Different combinations of WSES and SimTrust A first idea for merging the two approaches might be using SimTrust as a base system for trust among entities, but with a modified ranking for items in a database via the WSES metric. This is presented in Fig. 5. First, entities are grouped via their tags. When they have interacted with items, they can rate them via the WSES metric. For example, an item should be recommended to an entity A when similar entities have already interacted with the item and given it a good rating.
This would help distinguish entities with their different requirements on an item. Consider, for example, an e-commerce system where entities can purchase items. Entity A characterizes itself with the tags $\{quality, haptic, material\}$. It rates a football P with 2 out of 5 possible points. Entity B characterizes itself with the tags $\{look, beauty, appearance\}$. It rates P with 5 out of 5 possible points. The item will now only be recommended to entities that are similar to B, because they might set the same requirements and be as satisfied as B is. Entity-dependent relevance of ratings can also be achieved differently. Enriching the numeric rating with a set of tags will make the meaning of a rating more precise. From this, multiple use cases are possible: for example, only entities that share tags with a tagged rating can view it, or items with sufficient ratings and shared tags get recommended only to fitting entities. A second idea for combining SimTrust with WSES could be applied to use cases similar to the TDG. Even though WSES has a fair metric for trust presented in one value, it does not hold any further information for interpretation. This could be inefficient when there are entities that have different abilities. From a single trust value alone, it is not possible to differentiate the special skills of the individual entities when they have similar trust values. They would only be rated on a scale from ”good” to ”bad”. Tags could enrich the information about an entity, describing its abilities. Fig. 6 visualizes the idea. A possible use case could be a SOS with the goal of solving mathematical equations. These may include integrations or matrix multiplications. The entities in the SOS have individual skills in solving the equations. User $u_{i}$ is able to solve integrations, while user $u_{j}$ is only able to solve matrix multiplications.
When $u_{i}$ solves the task $i_{1}$, which is an integration task, it will receive a good rating because it is specialized for this type of task and can perform it in a satisfactory way. It will not receive a good rating on task $i_{2}$ because it does not have the skills to solve it. User $u_{j}$ performs well on $i_{2}$ but not well on $i_{1}$. The task type on which a user performed well is saved in its tags, so that it will be matched with, or can itself select, further tasks ($i_{3}$) on which it is likely to receive good ratings. This behaviour encourages specialization of users, even more so if the users are able to improve their performance at runtime. To establish the credibility of ratings and abilities, a network of trust could be set up based on the transitivity of trust. In other words, entities can also mediate between others that do not share an interaction history. ## VI Conclusion SimTrust and WSES have different ways of approaching trust among entities. While WSES relies on explicit ratings from third parties, SimTrust is based on entity tagging. This causes WSES to perform well in systems that encourage explicit rating, while SimTrust has advantages in fresh systems because the cold-start problem can be reduced. It is difficult to compare the systems on their performance because of these different implementations. Even though both aim for trust, the later usage in the applications is different, so the requirements on the trust metrics vary heavily. WSES is mainly used for collaborative problem solving, while SimTrust has been developed for item recommendation. However, two ideas for combining the metrics could be stated. Both ideas use the tagging information to enrich explicit ratings. The first concept aims at recommending items with good ratings to entities that trust the reviewers. The other concept recommends tasks to users who are likely to perform well on them, derived from the required abilities.
The initial question, which of the presented systems should be used if trust has to be implemented in a typical SOS, has evolved into a comparison of SimTrust and WSES. The different nature of the two has been described, and it could be seen that different advantages and disadvantages arise from this. Therefore, it was not possible to find a common use case on which they could have been ranked in terms of performance. Both systems have their advantages depending on the detailed implementation of the respective SOS. It is not realistic to compare SimTrust and WSES on a so-called ”default” SOS. Every open SOS already brings features that put one of the metrics at an advantage. However, elaborating the advantages on different use cases exposed benefits which could be combined into new metric concepts. In the future, it would be interesting to evaluate these concepts in a real scenario as well as to research a common scenario for SimTrust and WSES to obtain an explicit comparison between the two. ## References * [1] S. Tomforde, J. Hähner, and B. Sick, “Interwoven systems,” _Inform. Spektrum_, vol. 37, no. 5, pp. 483–487, 2014. [Online]. Available: https://doi.org/10.1007/s00287-014-0827-z * [2] J. B. Schafer, D. Frankowski, J. Herlocker, and S. Sen, _Collaborative Filtering Recommender Systems_. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007, pp. 291–324. [Online]. Available: https://doi.org/10.1007/978-3-540-72079-9_9 * [3] P. Massa and P. Avesani, “Trust-aware recommender systems,” in _Proceedings of the 2007 ACM Conference on Recommender Systems_, ser. RecSys ’07. New York, NY, USA: Association for Computing Machinery, 2007, pp. 17–24. [Online]. Available: https://doi.org/10.1145/1297231.1297235 * [4] C. Jr and M. Armada, “Recommender systems in social networks,” _JISTEM - Journal of Information Systems and Technology Management_, vol. 8, pp. 681–716, 12 2011. * [5] T. Bhuiyan, Y. Xu, and A.
Jøsang, “Simtrust: A new method of trust network generation,” in _2010 IEEE/IFIP International Conference on Embedded and Ubiquitous Computing_ , 2010, pp. 718–722. * [6] J. Kantert, S. Edenhofer, S. Tomforde, and C. Müller-Schloer, “Representation of trust and reputation in self-managed computing systems,” in _15th IEEE International Conference on Computer and Information Technology, CIT 2015; 14th IEEE International Conference on Ubiquitous Computing and Communications, IUCC 2015; 13th IEEE International Conference on Dependable, Autonomic and Secure Computing, DASC 2015; 13th IEEE International Conference on Pervasive Intelligence and Computing, PICom 2015, Liverpool, United Kingdom, October 26-28, 2015_ , Y. Wu, G. Min, N. Georgalas, J. Hu, L. Atzori, X. Jin, S. A. Jarvis, L. C. Liu, and R. A. Calvo, Eds. IEEE, 2015, pp. 1827–1834. [Online]. Available: https://doi.org/10.1109/CIT/IUCC/DASC/PICOM.2015.273 * [7] S. Sterlin, A. Sandhya, S. Merlin, and B. Sam, “A review on e-commerce recommender applications,” 03 2017, pp. 387–391. * [8] B. Schafer, J. Konstan, and J. Riedl, “E-commerce recommendation applications,” vol. 5, 08 2000. * [9] R. Burke, A. Felfernig, and M. H. Göker, “Recommender systems: An overview,” _AI Magazine_ , vol. 32, no. 3, pp. 13–18, Jun. 2011. [Online]. Available: https://www.aaai.org/ojs/index.php/aimagazine/article/view/2361 * [10] G. Anders, H. Seebach, J.-P. Steghöfer, W. Reif, E. André, J. Hähner, C. Müller-Schloer, and T. Ungerer, _The Social Concept of Trust as Enabler for Robustness in Open Self-Organising Systems_. Cham: Springer International Publishing, 2016, pp. 1–16. [Online]. Available: https://doi.org/10.1007/978-3-319-29201-4_1 * [11] T. Bhuiyan, _Trust for Intelligent Recommendation_ , 09 2013. * [12] J. F. Rayport and B. J. Jaworski, _Introduction to E-Commerce_ , 2nd ed. USA: McGraw-Hill, Inc., 2003. * [13] G. Linden, B. Smith, and J. 
York, “Amazon.com recommendations: item-to-item collaborative filtering,” _IEEE Internet Computing_ , vol. 7, no. 1, pp. 76–80, 2003. * [14] T. Bhuiyan, _SimTrust: The Algorithm for Similarity-Based Trust Network Generation_. New York, NY: Springer New York, 2013, pp. 63–73. [Online]. Available: https://doi.org/10.1007/978-1-4614-6895-0_5 * [15] C.-N. Ziegler and J. Golbeck, “Investigating interactions of trust and interest similarity,” _Decision Support Systems_ , vol. 43, pp. 460–475, 03 2007. * [16] W. Uther, D. Mladenić, M. Ciaramita, B. Berendt, A. Kołcz, M. Grobelnik, M. Witbrock, J. Risch, S. Bohn, S. Poteet, A. Kao, L. Quach, J. Wu, E. Keogh, R. Miikkulainen, P. Flener, U. Schmid, F. Zheng, G. Webb, and S. Nijssen, _TF–IDF_ , 01 2010. * [17] P.-N. Tan, M. Steinbach, A. Karpatne, and V. Kumar, _Introduction to Data Mining_ , 2nd ed. Upper Saddle River, NJ: Pearson, 2017. * [18] C. Cleverdon, J. Mills, and E. Keen, “Factors determining the performance of indexing systems,” vol. I, 01 1966. * [19] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl, “Recommender systems for large-scale e-commerce scalable neighborhood formation using clustering,” vol. 50, 01 2002. * [20] A. Joshi, S. Kale, S. Chandel, and D. Pal, “Likert scale: Explored and explained,” _British Journal of Applied Science & Technology_, vol. 7, pp. 396–403, 01 2015. * [21] J. Herlocker, J. Konstan, and J. Riedl, “An empirical analysis of design choices in neighborhood-based collaborative filtering algorithms,” _Information Retrieval_ , vol. 5, pp. 287–310, 01 2002. * [22] Y. Ruan and A. Durresi, “A survey of trust management systems for online social communities – trust modeling, trust inference and attacks,” _Knowledge-Based Systems_ , vol. 106, 05 2016. * [23] J. O. Kephart and D. M. Chess, “The vision of autonomic computing,” _Computer_ , vol. 36, no. 1, pp. 41–50, 2003. * [24] C. Müller-Schloer and S. Tomforde, _Organic Computing - Technical Systems for Survival in the Real World_. 
Birkhäuser, 2017. [Online]. Available: https://doi.org/10.1007/978-3-319-68477-2 * [25] R. G. Brown, “Exponential smoothing for predicting demand,” p. 145, 1957. * [26] J. Kantert, H. Spiegelberg, S. Tomforde, J. Hähner, and C. Müller-Schloer, “Distributed rendering in an open self-organised trusted desktop grid,” 07 2015, pp. 267–272. * [27] J. Degen and A. Kleeberg-Niepage, “The more we tinder: Subjects, selves and society,” _Human Arenas_ , 08 2020.
# Introducing piXedfit \- a Spectral Energy Distribution Fitting Code Designed for Resolved Sources Abdurro’uf Institute of Astronomy and Astrophysics, Academia Sinica, 11F of AS/NTU Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C. Yen-Ting Lin Institute of Astronomy and Astrophysics, Academia Sinica, 11F of AS/NTU Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C. Po-Feng Wu National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Masayuki Akiyama Astronomical Institute, Tohoku University, Aramaki, Aoba, Sendai 980-8578, Japan ###### Abstract We present `piXedfit`, pixelized spectral energy distribution (SED) fitting, a Python package that provides tools for analyzing spatially resolved properties of galaxies using multiband imaging data alone or in combination with integral field spectroscopy (IFS) data. `piXedfit` has six modules that can handle all tasks in spatially resolved SED fitting. The SED fitting module uses the Bayesian inference technique with two kinds of posterior sampling methods: Markov Chain Monte Carlo (MCMC) and random dense sampling of parameter space (RDSPS). We test the performance of the SED fitting module using mock SEDs of simulated galaxies from IllustrisTNG. The SED fitting with both posterior sampling methods can recover the physical properties and star formation histories of the IllustrisTNG galaxies well. We further test the performance of `piXedfit` modules by analyzing 20 galaxies observed by the CALIFA and MaNGA surveys. The data comprise 12-band imaging data from GALEX, SDSS, 2MASS, and WISE, and the IFS data from CALIFA or MaNGA. `piXedfit` can spatially match (in resolution and sampling) the imaging and IFS data. By fitting only the photometric SEDs, `piXedfit` can predict the spectral continuum, $\text{D}_{\rm n}4000$, $H_{\alpha}$, and $H_{\beta}$ well.
The star formation rate (SFR) derived by `piXedfit` is consistent with that derived from $H_{\alpha}$ emission. The RDSPS method gives fitting results as good as those of the MCMC method and is much faster. `piXedfit` is a versatile tool equipped with a parallel computing module for efficient analysis of large datasets, and will be made publicly available at https://github.com/aabdurrouf/piXedfit. methods: data analysis – methods: statistical – galaxies: evolution – galaxies: fundamental parameters ††journal: ApJS††facilities: GALEX, Sloan, CTIO:2MASS, FLWO:2MASS, WISE, Sloan (BOSS, MaNGA survey), CAO:3.5m (PMAS/PPAK, CALIFA survey)††software: Astropy (Astropy Collaboration et al., 2013), Photutils (Bradley et al., 2019), reproject (Robitaille, 2018), SExtractor (Bertin & Arnouts, 1996), sewpy, FSPS (Conroy et al., 2009), python-FSPS (Foreman-Mackey et al., 2014), emcee (Foreman-Mackey et al., 2013), matplotlib (Hunter, 2007), SciPy (Virtanen et al., 2020), NumPy (Harris et al., 2020), specutils (Earl et al., 2020) ## 1 Introduction The multiwavelength photometric and spectroscopic observations accumulated over the past decades have played a crucial role in our current understanding of galaxy formation and evolution. To interpret the multiwavelength data, modeling of the galaxy spectral energy distribution (SED) is required. Motivated by this need, stellar population synthesis modeling has been systematically developed since the pioneering work of Tinsley (1972) and Searle et al. (1973). Since then, numerous efforts from various groups have been made to improve the methods (Buzzoni, 1989; Bruzual A. & Charlot, 1993; Bruzual & Charlot, 2003; Maraston, 1998, 2005; Conroy et al., 2009; Eldridge & Stanway, 2009). Recently, extensive developments have been made to include more physical components in the SED modeling, to account for the complexity of the physics underlying the SED of a galaxy. 
These components include nebular emission (e.g., Ferland et al., 1998, 2013), dust emission (Burgarella et al., 2005; Draine & Li, 2007; da Cunha et al., 2008; Groves et al., 2008; Noll et al., 2009; Leja et al., 2017), dusty torus emission from an active galactic nucleus (AGN; e.g., Nenkova et al., 2008a; Stalevski et al., 2012), and synchrotron radio emission (e.g., Boquien et al., 2019). In parallel with the development of the SED modeling, the statistical method for comparing observed and model SEDs, the so-called SED fitting, has been extensively developed over the past few decades (see reviews by Walcher et al., 2011; Conroy, 2013). Traditionally, SED fitting was treated as an optimization problem, in which a $\chi^{2}$ minimization technique is used to find the model that best reproduces the observed SED (e.g., Sawicki & Yee, 1998; Arnouts et al., 1999; Cid Fernandes et al., 2005; Kriek et al., 2009; Sawicki, 2012). As the number of parameters in the SED modeling has grown (due to the incorporation of various physical components, as described above), introducing more opportunities for degeneracy among the parameters, the Bayesian inference technique has come into wide use. This technique infers the parameters from posterior probability distributions produced by taking into account the likelihoods of all models. Pioneered by Kauffmann et al. (2003), the Bayesian framework for SED fitting has been applied widely in the literature (e.g., Burgarella et al., 2005; Salim et al., 2007; da Cunha et al., 2008; Noll et al., 2009; Boquien et al., 2019). 
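As a concrete illustration (a minimal sketch, not `piXedfit`'s actual implementation; all function and variable names here are for illustration only), the contrast between the two approaches can be shown for a discrete grid of model SEDs: $\chi^{2}$ minimization picks the single best-fitting model, while the Bayesian estimate weights every model by its likelihood $\propto\exp(-\chi^{2}/2)$.

```python
import numpy as np

def chi2(f_obs, f_err, f_mod):
    """Chi-square between one observed SED and one model SED."""
    return np.sum(((f_obs - f_mod) / f_err) ** 2)

def best_fit_index(f_obs, f_err, models):
    """Traditional approach: index of the chi-square-minimizing model."""
    return int(np.argmin([chi2(f_obs, f_err, m) for m in models]))

def posterior_mean(f_obs, f_err, models, param):
    """Bayesian approach: likelihood-weighted mean of a parameter over
    the whole model grid, weighting each model by exp(-chi2/2)."""
    chis = np.array([chi2(f_obs, f_err, m) for m in models])
    w = np.exp(-0.5 * (chis - chis.min()))  # subtract min for numerical stability
    return np.sum(w * np.asarray(param)) / np.sum(w)
```

With well-separated models the posterior mean reduces to the best-fit value; when several models fit comparably well, it averages over the degeneracy instead of arbitrarily picking one.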
Currently, Bayesian inference with state-of-the-art posterior sampling techniques, such as Markov Chain Monte Carlo (MCMC) and nested sampling, has become standard practice in SED fitting (e.g., Acquaviva et al., 2011; Serra et al., 2011; Johnson et al., 2013; Han & Han, 2014; Chevallard & Charlot, 2016; Calistro Rivera et al., 2016; Leja et al., 2017; Carnall et al., 2018; Zhou et al., 2020). Despite the fact that galaxies are extended objects, the majority of studies over the past decades have utilized only their integrated light, particularly for SED fitting; in the case of spectroscopic studies, the integrated spectrum of a galaxy is obtained with single-fiber spectroscopy through a small aperture centered on the galaxy (e.g., the Sloan Digital Sky Survey, SDSS, and the Galaxy and Mass Assembly survey, GAMA; York et al., 2000; Driver et al., 2009, respectively). These observations have revealed many important evolutionary trends and correlations among physical properties of galaxies that shaped our current understanding of galaxy evolution. Despite the huge amount of information obtained from the above surveys, we have not made full use of the available information: the spatially resolved SEDs, from which physical properties of individual regions within a galaxy can be derived, have been largely omitted. As spatially extended objects, galaxies have properties that vary across their bodies. The advent of integral field spectroscopy (IFS) surveys has revolutionized the study of galaxy formation and evolution: in the local universe, we have SAURON (de Zeeuw et al., 2002), $\text{ATLAS}^{\rm 3D}$ (Cappellari et al., 2011), CALIFA (Sánchez et al., 2012), SAMI (Croom et al., 2012), and MaNGA (Bundy et al., 2015); at high redshift, KMOS3D (Wisnioski et al., 2015) and SINS/zC-SINF (Förster Schreiber et al., 2018). 
Thanks to these surveys, spatially resolved properties of galaxies have recently been studied, allowing for a better understanding of galaxy evolution. While the SED fitting technique has been widely applied to the integrated SEDs of galaxies over a wide range of redshifts, its potential for application to spatially resolved SEDs has been explored by only a limited number of studies. Abraham et al. (1999) fitted spectral synthesis models to spatially resolved multicolor photometry of 32 galaxies at $0.4<z<1$ in the Hubble Deep Field (HDF) to study the ages and evolutionary histories of the stellar populations in the galaxies. Lanyon-Foster et al. (2007, 2012) analyzed the pixel-by-pixel multicolor photometry of galaxies at $z<1$ using pixel color-magnitude diagrams (pCMDs; a similar method was also implemented by Bothun 1986) to study the structural parameters of galaxies across the Hubble sequence. Zibetti et al. (2009) used spatially resolved optical/near-infrared (NIR) colors to infer spatially resolved mass-to-light ratios (M/L), which were then multiplied by the surface brightness to obtain maps of the stellar mass surface density ($\Sigma_{*}$) of 9 nearby galaxies. Wuyts et al. (2012, 2013) applied the standard SED fitting technique to the spatially resolved broad-band SEDs (from the Hubble Space Telescope, HST) of $0.5<z<2.5$ star-forming galaxies in the GOODS-South field. They used the resulting maps of stellar population properties to analyze the variations in rest-frame color, $\Sigma_{*}$, age, and dust attenuation as a function of galactocentric radius, and to measure structural parameters of the galaxies. 
Recently, Sorba & Sawicki (2015, 2018) used multiband images covering the rest-frame ultraviolet (UV)–optical to conduct pixel-by-pixel SED fitting of 67 nearby galaxies and 1222 galaxies at high redshift (up to $z\sim 2.5$) to study the systematic effect introduced by integrated SED fitting on the total stellar mass ($M_{*}$) estimate. By comparing the total $M_{*}$ from summing up the spatially resolved mass estimates with that obtained from the integrated SED fitting (i.e., spatially unresolved $M_{*}$), they found that $M_{*}$ can be severely underestimated using the integrated SED, especially for star-forming galaxies. They argue that this systematic effect is caused by the outshining effect of young stars: young stars (which have low $\text{M}/\text{L}$) are so bright that their light dominates the galaxy’s SED at optical wavelengths, masking the contribution from old stars (which have high $\text{M}/\text{L}$). (However, the discrepancy between the two total $M_{*}$ estimates is not observed by Wuyts et al. 2012 and Smith & Hayward 2018; the latter used synthetic galaxy images covering FUV–FIR that were constructed by performing dust radiative transfer on a 3D hydrodynamical simulation of an isolated disk galaxy.) In our previous studies (Abdurro’uf & Akiyama, 2017, 2018), we conducted spatially resolved SED fitting of 93 local ($0.01<z<0.02$) and 152 high-redshift ($0.8<z<1.8$) massive disk galaxies to study the evolution of the spatially resolved star formation main sequence (SFMS) and the radial trends of disk growth and quenching. Overall, we found that massive disk galaxies tend to build their stellar masses and quench their star formation activities in an inside-out fashion. Until recently, the wide-area IFS surveys (mentioned previously) have mostly targeted local galaxies because such large surveys for high-redshift galaxies are prohibitively expensive. 
The spatially resolved SED fitting method can serve as a powerful alternative for studying the spatially resolved stellar population properties of galaxies across a wide range of redshifts, as shown by the previous studies mentioned above. Some advantages of this method over the IFS surveys are the following: (1) the current and future abundance of high spatial resolution and deep multiband imaging data, particularly from space missions such as Euclid, JWST, and the Roman Space Telescope, which allow us to apply this method to a large number of galaxies across a wide range of redshifts; (2) the recent developments in SED modeling and fitting methods, which enable a robust and rapid estimation of galaxy properties; (3) the use of a single method to study galaxies over a wide range of redshifts, which can reduce systematic biases (such as would arise when different methods are used for different redshifts) in the study of evolutionary trends of galaxy properties. Motivated by these considerations, in this study we develop `piXedfit`, pixelized SED fitting, a Python package that provides a self-contained set of tools for analyzing spatially resolved properties of galaxies from imaging data alone or from the combination of imaging data and IFS data. The structure of this paper is as follows. We describe the data sets used for the analysis of this paper in Section 2. In Section 3, we explain the `piXedfit` design, including descriptions of 4 out of the 6 modules. The description of the SED fitting approach and the 2 modules associated with it is given in Section 4. In Section 5, we test the SED fitting performance of `piXedfit` using mock SEDs of simulated galaxies from IllustrisTNG. In Section 6, we empirically test the `piXedfit` modules using spatially resolved spectrophotometric data of local galaxies. Finally, we summarize the analysis of this paper in Section 7. 
As Sections 2 to 4 are primarily technical, describing the architecture of `piXedfit`, readers who are more interested in the performance can start from Section 5 while referring to Table 1. Throughout this paper, the cosmological parameters $\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$, and $H_{0}=70\text{ km}\text{ s}^{-1}\text{ Mpc}^{-1}$, the AB magnitude system, and the Chabrier (2003) initial mass function (IMF) are assumed. ## 2 Data In the analysis throughout this paper, two kinds of data sets are used: imaging data ranging from the far-ultraviolet (FUV) to the near-infrared (NIR) and IFS data. Each of the data sets is briefly described in the following. ### 2.1 Broad-band Imaging Data #### 2.1.1 GALEX The Galaxy Evolution Explorer (GALEX; Martin et al., 2005) is a space mission equipped with a $0.5$-m telescope with a field of view of $1.13$ $\text{deg}^{2}$, a pixel resolution of $1.5^{\prime\prime}$, and a point spread function (PSF) full width at half maximum (FWHM) of $4.2^{\prime\prime}$ and $5.3^{\prime\prime}$ in the FUV and near-ultraviolet (NUV) bands (effective wavelengths: $1538.6$ and $2315.7\text{\AA}$), respectively. The imaging survey has three modes: the all-sky imaging survey (AIS), the medium imaging survey (MIS), and the deep imaging survey (DIS). The typical integration times per tile of these three survey modes are $200$ s, $1500$ s, and $30000$ s, respectively. The $5\sigma$ limiting magnitudes in FUV (NUV) of the three survey modes are $19.9$ ($20.8$), $22.6$ ($22.7$), and $24.8$ ($24.4$), respectively (Morrissey et al., 2007). In this paper, we use imaging data from the DIS whenever available. Otherwise, imaging data from the MIS are used. #### 2.1.2 SDSS The SDSS (York et al., 2000) and its follow-up surveys provide the largest dataset combining imaging and spectroscopic data, using a dedicated $2.5$-m telescope at Apache Point Observatory. 
The imaging survey has five filters ($u$, $g$, $r$, $i$, and $z$) with central wavelengths ranging from $3551$ to $8932\text{\AA}$ and a pixel resolution of $0.396^{\prime\prime}$. The SDSS imaging is 95$\%$ complete to $u=22.0$ mag, $g=22.2$ mag, $r=22.2$ mag, $i=21.3$ mag, and $z=20.5$ mag (Abazajian et al., 2004). The median seeing of all SDSS imaging data is $1.32^{\prime\prime}$ in the $r$ band (see Ross et al., 2011). #### 2.1.3 2MASS The Two Micron All Sky Survey (2MASS; Skrutskie et al., 2006) is an imaging survey of the whole sky in the NIR. The survey uses two $1.3$-m telescopes, one at Mt. Hopkins, Arizona, United States, and the other at Cerro Tololo, Chile. The telescopes observe the sky in the $J$ (1.24 $\mu$m), $H$ (1.66 $\mu$m), and $K_{s}$ (2.16 $\mu$m) bands. The image product is resampled to $1.0^{\prime\prime}\text{ pixel}^{-1}$. The point-source sensitivities at a signal-to-noise ratio of S/N $=10$ are $15.8$, $15.1$, and $14.3$ mag for $J$, $H$, and $K_{s}$, respectively. The seeing is $\sim 2.5-3.5^{\prime\prime}$ (Skrutskie et al., 2006). #### 2.1.4 WISE The Wide-field Infrared Survey Explorer (WISE; Wright et al., 2010) mapped the whole sky in four infrared bands: $3.4$, $4.6$, $12$, and $22\mu$m ($W1$, $W2$, $W3$, and $W4$, respectively). In this paper, we use the imaging data product from the AllWISE data release. The four wavelength bands ($W1$, $W2$, $W3$, and $W4$) have spatial resolutions (PSF FWHM) of $6.1^{\prime\prime}$, $6.4^{\prime\prime}$, $6.5^{\prime\prime}$, and $12.0^{\prime\prime}$, respectively. The spatial sampling of the imaging product in the four wavelength bands is $1.375^{\prime\prime}\text{ pixel}^{-1}$. WISE achieved $5\sigma$ point-source sensitivities better than $0.08$, $0.11$, $1$, and $6.0$ mJy in unconfused regions on the ecliptic in the four bands (Wright et al., 2010). In the analysis of this paper, we use only the data in the $W1$ and $W2$ bands. 
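The survey depths above are quoted in magnitudes for some surveys and in mJy for others; under the AB system assumed in this paper they are related by $m_{\rm AB}=-2.5\log_{10}(f_{\nu}/3631\,\text{Jy})$. A minimal converter (illustrative helper names, not part of any survey toolkit):

```python
import math

def fnu_to_abmag(fnu_jy):
    """AB magnitude from flux density in Jy: m_AB = -2.5 log10(f_nu / 3631 Jy)."""
    return -2.5 * math.log10(fnu_jy / 3631.0)

def abmag_to_fnu(mab):
    """Inverse conversion: flux density in Jy from an AB magnitude."""
    return 3631.0 * 10 ** (-0.4 * mab)
```

For example, the WISE $W1$ $5\sigma$ depth of $0.08$ mJy corresponds to roughly $19.1$ AB mag.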
### 2.2 Integral Field Spectroscopy (IFS) Data #### 2.2.1 CALIFA The Calar Alto Legacy Integral Field Area (CALIFA) survey (Sánchez et al., 2012) is an IFS survey designed to obtain spatially resolved spectra of around 600 galaxies in the local universe ($0.005<z<0.03$). The observations were carried out with the Potsdam Multi Aperture Spectrograph (PMAS; Roth et al., 2005) —in the PPak configuration— mounted on the $3.5$-m telescope at the Calar Alto observatory. Each galaxy is observed with two different overlapping setups. The low-resolution setup (V500; $R\sim 850$) covers $3745-7500\text{\AA}$, while the medium-resolution setup (V1200; $R\sim 1650$) covers $3400-4840\text{\AA}$. The observations with the V500 and V1200 setups reached $3\sigma$ surface brightness limits of $\sim 23.0\text{ mag}\text{ arcsec}^{-2}$ and $\sim 22.7\text{ mag}\text{ arcsec}^{-2}$, respectively (Sánchez et al., 2012). In the analysis of this paper, we use the combined data product, the so-called COMB data cubes, from the DR3 release (Sánchez et al., 2016a). The COMB data product is a collection of data cubes that combines the spectra from the two observation setups. The COMB spectra cover $3701-7501\text{\AA}$ with a spectral resolution (FWHM) of $6.0\text{\AA}$. The mean spatial resolution (PSF FWHM) of the data cubes is $\sim 2.5^{\prime\prime}$, with a spatial sampling of $1.0^{\prime\prime}\text{ spaxel}^{-1}$. #### 2.2.2 MaNGA Mapping Nearby Galaxies at Apache Point Observatory (MaNGA; Bundy et al., 2015), a part of SDSS-IV (Blanton et al., 2017), is a wide-area IFS survey targeting $\sim 10,000$ local galaxies at $0.01<z<0.15$. The MaNGA hexagonal fiber bundles make use of the BOSS spectrographs (Smee et al., 2013). The observed spectra cover $3600-10,300\text{\AA}$ with a spectral resolution of $R\sim 1100-2200$. 
After dithering, MaNGA data cubes have an effective spatial resolution (FWHM) of $2.5^{\prime\prime}$ (Law et al., 2015) and a spatial sampling of $0.5^{\prime\prime}\text{ spaxel}^{-1}$. In the analysis of this paper, we use the `LOGCUBE` data cubes from the data reduction pipeline (DRP; Law et al., 2016). The data cubes reach a typical $10\sigma$ limiting continuum surface brightness of $23.5\text{ mag}\text{ arcsec}^{-2}$ in a five-arcsecond-diameter aperture in the $g$ band (Law et al., 2016). Detailed descriptions of the survey design and observing strategy are given in Law et al. (2015), Yan et al. (2016), and Wake et al. (2017). ## 3 piXedfit design `piXedfit` is designed to be modular, and each module can be run independently of the others. Owing to this modularity, users can use a particular module in `piXedfit` without needing the other modules. For instance, it is possible to use the SED fitting module to fit the integrated SED of a galaxy (not only spatially resolved SEDs) without using the image processing module. In this way `piXedfit` can be beneficial for various applications. Figure 1 shows the design of `piXedfit`. `piXedfit` has six modules: (1) `piXedfit_images` is for image processing, (2) `piXedfit_spectrophotometric` is for spatially matching multiband imaging data with IFS data to obtain spatially resolved spectrophotometric SEDs of a galaxy, (3) `piXedfit_bin` is for pixel binning to maximize the $\text{S}/\text{N}$ ratio, (4) `piXedfit_model` is for generating model SEDs, (5) `piXedfit_fitting` is for performing the SED fitting, and (6) `piXedfit_analysis` is for visualizing the fitting results. In this section we describe the first four modules, leaving the last two modules to Section 4. Figure 1: The piXedfit design. 
piXedfit has six modules: (1) piXedfit_images is for image processing, (2) piXedfit_spectrophotometric is for spatially matching multiband imaging data with IFS data to obtain spatially resolved spectrophotometric SEDs of a galaxy, (3) piXedfit_bin is for pixel binning, (4) piXedfit_model is for generating model SEDs, (5) piXedfit_fitting is for performing SED fitting, and (6) piXedfit_analysis is for visualizing SED fitting results. ### 3.1 piXedfit_images: Image Processing In the pixel-by-pixel SED fitting process, it is very important to make sure that the multiband images are all matched to the same spatial resolution and spatial sampling, so that a given pixel represents the same region on the sky in all the images used. This image processing task in `piXedfit` is done by the `piXedfit_images` module. The `piXedfit_images` module is a Python scripting module that combines various image processing functions in `Astropy` (https://www.astropy.org/; Astropy Collaboration et al., 2013), `Photutils` (https://photutils.readthedocs.io/en/stable/; Bradley et al., 2019), and `SExtractor` (Bertin & Arnouts, 1996) such that an image processing task for any combination of imaging data can be done automatically. The user only needs to specify a set of photometric bands, the names of the input FITS files for the science image associated with each band, the names of the input FITS files for the variance image (the square of the uncertainty image) associated with each band, and the coordinates (right ascension, RA, and declination, DEC) of the target galaxy. Using a specific function in `piXedfit_images`, the variance image is calculated for each band (the description of how to estimate the uncertainty of a pixel value and derive the variance image is given at https://pixedfit.readthedocs.io/en/latest/list_imaging_data.html). 
The current version of `piXedfit` can perform image processing on imaging data from the following: GALEX, SDSS, 2MASS, WISE, Spitzer, Herschel, and the Hubble Space Telescope (HST). The workflow of the image processing is shown in Figure 1. In the following, each of the image processing tasks is described. #### 3.1.1 Background Subtraction In `piXedfit_images`, the background estimation is done using the `Background2D` function from `Photutils`. The `Background2D` function estimates the background by first dividing an image into a grid of cells and then estimating the background level in each cell using the sigma-clipping method. In `piXedfit_images`, the grid size is required as an input. The background subtraction is applied only to the science images. After the background subtraction process, the background and RMS images are stored in FITS files. #### 3.1.2 PSF Matching In order to obtain an accurate multiwavelength photometric SED from a set of multiband images, it is important that all the images are brought to the same PSF size. Commonly, PSF matching between two images is done by convolving the higher-resolution image (i.e., the one with the smaller PSF) with a pre-calculated kernel. The matching kernel between the two PSFs is derived from the ratio of their Fourier transforms (see e.g., Gordon et al., 2008; Aniano et al., 2011). Previous studies have constructed convolution kernels for matching the PSFs of imaging data from various telescopes, including both space-based and ground-based ones. Gordon et al. (2008) constructed convolution kernels for matching the PSFs of the Spitzer/IRAC and Spitzer/MIPS images (available at https://irsa.ipac.caltech.edu/data/SPITZER/docs/dataanalysistools/tools/contributed/general/convkern/). Aniano et al. (2011) constructed convolution kernels for matching the PSFs of imaging data from various space-based and ground-based telescopes, including GALEX, Spitzer, WISE, and Herschel. In addition, Aniano et al. 
(2011) also constructed convolution kernels for some analytical PSFs, including a Gaussian, a sum of Gaussians, and a Moffat profile (PSFs and convolution kernels are available at https://www.astro.princeton.edu/~ganiano/Kernels.html). The analytical PSF forms are expected to represent the net (i.e., effective) PSFs of ground-based telescopes. We use the convolution kernels from Aniano et al. (2011) for the PSF matching process in the `piXedfit_images` module. Since the PSFs of SDSS and 2MASS are not explicitly covered in the list of PSFs analyzed by Aniano et al. (2011), to find the analytical PSFs representative of those imaging data, we construct empirical PSFs of the 5 SDSS bands and the 3 2MASS bands, then compare them with the analytical PSFs of Aniano et al. (2011). We present this analysis in Appendix A. In short, we find that the empirical PSFs of the SDSS $u$, $g$, and $r$ bands are best represented by a double Gaussian with a FWHM of $1.5^{\prime\prime}$, while the other bands (i.e., $i$ and $z$) are best represented by a double Gaussian with a FWHM of $1.0^{\prime\prime}$. The two Gaussian components have a fixed common center, relative weights of $0.9$ and $0.1$, and the FWHM of the second component is twice that of the first (Aniano et al., 2011). For 2MASS, all three bands ($J$, $H$, and $K_{s}$) are best represented by a Gaussian with a FWHM of $3.5^{\prime\prime}$. For consistency, we use those analytical PSFs to represent the PSFs of SDSS and 2MASS and use the convolution kernels associated with them whenever needed (more information on the kernels and demonstrations of their performance can be found at https://pixedfit.readthedocs.io/en/latest/list_kernels_psf.html). In `piXedfit_images`, the convolution of an image with a kernel is done using the `convolve_fft` function in `Astropy`. Before convolving an image with a kernel, the kernel should be spatially resampled to match the spatial sampling of the image, which is done using the `resize_psf` function in `Photutils`. 
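Conceptually, the PSF matching step amounts to convolving each higher-resolution image with a normalized kernel. `piXedfit` does this with Astropy's `convolve_fft` and Photutils' `resize_psf`; the stand-alone sketch below reproduces the idea with SciPy's FFT convolution and an analytical Gaussian kernel (function names and parameters here are illustrative, not `piXedfit`'s API).

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(fwhm_pix, size=25):
    """Normalized 2D Gaussian kernel with the given FWHM in pixels
    (FWHM = 2*sqrt(2 ln 2) * sigma ~ 2.3548 * sigma)."""
    sigma = fwhm_pix / 2.3548
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return k / k.sum()  # normalize so convolution conserves flux

def match_psf(image, kernel):
    """Convolve an image with a matching kernel via FFT
    (cf. convolve_fft in Astropy)."""
    return fftconvolve(image, kernel, mode="same")
```

Because the kernel is normalized, the total flux of a source well inside the image is conserved while its light is spread over the broader target PSF.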
Originally, the kernels provided by Aniano et al. (2011) are all sampled at $0.25^{\prime\prime}\text{ pixel}^{-1}$. The PSF matching process is applied to both the science images and the variance images. #### 3.1.3 Spatial Resampling and Reprojection After the PSF matching, all images are brought to a uniform spatial sampling and projection. The final spatial sampling is chosen to be the lowest spatial sampling (i.e., largest pixel size) among the imaging data being analyzed. The spatial resampling and reprojection task in `piXedfit_images` is done using the `reproject_exact` function from the `reproject` package (Robitaille, 2018). `reproject_exact` reprojects an image onto a new projection using the flux-conserving spherical polygon intersection method. Because the reprojection involves regridding and interpolation, the pixel values of the image should be in surface brightness units, not flux units. Therefore, before reprojection and resampling, the images are converted into surface brightness whenever needed. If the original unit of an image is flux, it is converted back to flux units after the resampling process. The next step is cropping around the target galaxy. This is done using the `wcs_world2pix` and `Cutout2D` functions available in `Astropy`. The size of the final cropped images, which retain correct WCS information, can be defined by the user. The spatial resampling, reprojection, and cropping are applied to the science images and the variance images. #### 3.1.4 Image Segmentation and Defining the Galaxy’s Region of Interest In `piXedfit_images`, image segmentation using SExtractor is done to obtain an initial estimate of the region of the target galaxy (as it is often difficult to define the boundary of a galaxy, here we refer to the area of the target galaxy to be fit simply as the “region” of the galaxy). 
The segmentation is done in all imaging bands (on the science images only), and the segmentation maps from all bands are then merged (i.e., combined) to get a single segmentation map from which the galaxy’s region will be determined. Because of background noise, the segmentation map of a galaxy can have an irregular (i.e., filamentary) structure at the outskirts. To remove such features, an elliptical aperture cropping is applied to the galaxy’s segmentation region. The ellipticity, position angle, and maximum radius (along the semi-major axis) for the elliptical aperture cropping can be specified when providing input to the `piXedfit_images` module. If those parameters are not provided by the user, elliptical isophote fitting will be done on the final stamp image of a band near the middle of the rest-frame optical (such as the $r$ band) using the `Ellipse` class in `Photutils`. In `Ellipse`, the isophotes in the galaxy’s image are measured using an iterative method described in Jedrzejewski (1987). From the set of isophotes (as a function of radius) produced by `Ellipse`, the ellipse closest to the desired maximum radius is chosen. #### 3.1.5 Extracting SEDs of Pixels The tasks described above give the final stamps of the reduced science and variance images, and the pixel coordinates associated with the galaxy’s region of interest. The next step is calculating the fluxes and flux uncertainties of the pixels within the galaxy’s region in the multiband images. The end product of this process is the photometric SED of every pixel of interest. The conversion of a pixel value into the flux density unit of $\text{erg }\text{s}^{-1}\text{cm}^{-2}\text{\AA}^{-1}$ (which is the default flux unit of the data product produced by `piXedfit_images`) depends on the unit of the pixel value in the original image. The flux uncertainty of a pixel is obtained by first taking the square root of the pixel value in the variance image and then converting it into the flux density unit. 
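As an example of such a unit conversion (an illustrative sketch, not `piXedfit` code): SDSS pixel values are given in nanomaggies, where 1 nanomaggy $=3.631\times 10^{-6}$ Jy, and a flux density $f_{\nu}$ converts to $f_{\lambda}$ in $\text{erg s}^{-1}\text{cm}^{-2}\text{\AA}^{-1}$ via $f_{\lambda}=f_{\nu}\,c/\lambda^{2}$.

```python
C_ANGSTROM = 2.99792458e18  # speed of light in Angstrom / s
JY_TO_CGS = 1.0e-23         # 1 Jy in erg s^-1 cm^-2 Hz^-1
NMGY_TO_JY = 3.631e-6       # 1 nanomaggy in Jy

def nmgy_to_flam(pixval_nmgy, wave_angstrom):
    """Convert an SDSS pixel value in nanomaggies to f_lambda
    in erg s^-1 cm^-2 A^-1 at the band's effective wavelength."""
    fnu = pixval_nmgy * NMGY_TO_JY * JY_TO_CGS  # erg s^-1 cm^-2 Hz^-1
    return fnu * C_ANGSTROM / wave_angstrom**2  # erg s^-1 cm^-2 A^-1
```

The wavelength used (e.g., the $r$-band effective wavelength, roughly $6166\,\text{\AA}$) is supplied by the caller; each survey in the input list needs its own such conversion, as described in the text.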
The pixel values of the imaging data used in our analysis come in a variety of units. To convert the pixel value of an image to flux density in $\text{erg }\text{s}^{-1}\text{cm}^{-2}\text{\AA}^{-1}$ and estimate the uncertainty of the pixel value, we follow the relevant information from the literature and the documentation files on the websites of the surveys from which the imaging data were obtained. The variance images associated with the science images that are input to `piXedfit_images` (see Section 3.1) are constructed following that information (the units of the pixel values in the imaging data that can be analyzed with the current version of piXedfit, and how to convert the pixel values into fluxes and estimate the flux uncertainties, are described at https://pixedfit.readthedocs.io/en/latest/list_imaging_data.html). The next step is to correct the pixel-wise SEDs for the foreground Galactic dust extinction. For this, we estimate $E(B-V)$ from the reddening ($A_{\lambda}$) in the SDSS bands, obtained from the NASA/IPAC Extragalactic Database (NED; https://ned.ipac.caltech.edu/), which is based on the map of Schlafly & Finkbeiner (2011), a recalibration of Schlegel et al. (1998). Then we use the Fitzpatrick (1999) dust reddening law with $R_{V}=3.1$ to correct for the foreground Galactic extinction. The final step in the image processing is to crop out regions associated with foreground stars. This step is only done if bright stars are found within the galaxy’s region of interest. In the current version of the `piXedfit_images` module, this step is done manually using a specific function. The user only needs to input the central coordinate and an estimate of the radius (in pixels) of each star. The derived maps of fluxes and flux uncertainties (in multiple photometric bands) of the target galaxy are then saved into one multi-extension FITS file. Figure 2 shows an example of the maps of multiband fluxes produced by the `piXedfit_images` module. 
The target galaxy for this example is NGC 309. Imaging data in 12 bands ranging from the FUV to $W2$ are used to obtain the spatially resolved SED data cube. As can be seen from the flux maps, the 2MASS bands are the shallowest among the photometric bands used in the analysis. This is the reason we add the WISE bands ($W1$ and $W2$, which are deeper than the 2MASS bands) even though their spatial resolution is lower than that of the UV and optical bands. The inclusion of the WISE bands provides a stronger constraint in the NIR regime. Figure 2: Example of the maps of multiband fluxes produced by the piXedfit_images module. The target galaxy in this example is NGC 309. The leftmost panel in the first row shows the $gri$ composite image. The 12 panels in the second to fourth rows show maps of the flux in 12 wavelength bands from the FUV to $W2$. The 12-band images are brought to the spatial resolution of the $W2$ band and the spatial sampling of the FUV/NUV bands. The missing pixels in the outskirts of the 2MASS bands and $W2$ are caused by negative fluxes, which do not appear in the logarithmic plot. ### 3.2 piXedfit_spectrophotometric: Extracting Spatially Resolved Spectrophotometric SEDs of a Galaxy In analyses of the integrated SED of a galaxy (i.e., treating the galaxy as one object), there have been several attempts at combining rest-frame optical spectra (particularly those covering the $4000\text{\AA}$ break) and broad-band photometry covering a wider wavelength range into a so-called spectrophotometric SED fitting (see e.g., Newman et al., 2014; Dressler et al., 2018; Morishita et al., 2019; Abramson et al., 2020; Chen et al., 2020). By combining the rest-frame optical spectrum and the broad-band photometry, it is expected that the constraining power of the SED fitting can be enhanced, potentially breaking the existing degeneracies among the parameters in the fitting process. 
The availability of the FUV–NIR broad-band imaging and IFS datasets for local galaxies (thanks to the CALIFA, MaNGA, and SAMI surveys) gives us the opportunity to conduct spatially resolved spectrophotometric SED analyses. However, for a self-consistent analysis we need to spatially match (in spatial resolution and sampling) the broad-band imaging and IFS datasets. `piXedfit` provides a new capability of combining broad-band imaging data with IFS data to obtain spatially resolved spectrophotometric SEDs of a galaxy. The tasks for this process are in the `piXedfit_spectrophotometric` module. In the current version, the `piXedfit_spectrophotometric` module can only analyze the combination of broad-band imaging data from GALEX, SDSS, 2MASS, and WISE and IFS data from CALIFA/COMB and MaNGA/DRP. The final product of this module is a data cube that contains spatially matched pixel-wise spectrophotometric SEDs of a galaxy. Our analysis presented here is the first attempt of this kind. To spatially match the three-dimensional IFS data with the broad-band imaging data, first, a two-dimensional image (i.e., an imaging layer) is made for every wavelength grid point in the IFS data. Before creating images out of the IFS data, the spectra are smoothed by convolving them with a Gaussian kernel whose sigma follows the spectral resolution of the IFS data ($\sim 2.6\text{\AA}$ for CALIFA and $\sim 3.5\text{\AA}$ for MaNGA, adopting the median value of the spectral resolution across the whole wavelength range). After the two-dimensional images are created out of the IFS data, the PSF matching and spatial resampling are applied to each image in the same way as for a broad-band image. 
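The spectral smoothing step described above can be sketched as follows (a minimal stand-in assuming a uniform wavelength sampling; `piXedfit`'s own implementation may differ). The Gaussian sigma given in Angstrom is converted to pixels using the wavelength sampling and applied with SciPy's `gaussian_filter1d`:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_spectrum(flux, sigma_angstrom, dispersion_angstrom_per_pix):
    """Smooth a 1D spectrum with a Gaussian kernel whose sigma is
    given in Angstrom, converted to pixels via the wavelength sampling."""
    sigma_pix = sigma_angstrom / dispersion_angstrom_per_pix
    return gaussian_filter1d(flux, sigma_pix)
```

For example, a CALIFA-like spectrum sampled at $2\,\text{\AA}\,\text{pixel}^{-1}$ with $\sigma\sim 2.6\,\text{\AA}$ would use `sigma_pix = 1.3`.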
When matching IFS data from MaNGA or CALIFA with the 12-band imaging data from the FUV to $W2$, the final product has the spatial resolution of $W2$ ($6.37^{\prime\prime}$ FWHM) and the spatial sampling of the FUV/NUV ($1.5^{\prime\prime}\text{ pixel}^{-1}$). The PSF matching for an imaging layer is done by convolving the image with a pre-calculated kernel. Since the effective PSFs of MaNGA and CALIFA have a FWHM of $2.5^{\prime\prime}$, we use the corresponding convolution kernel from Aniano et al. (2011). The convolution kernel was created for matching a Gaussian PSF with a FWHM of $2.5^{\prime\prime}$ to the PSF size of $W2$. We have compared the reconstructed PSFs of the MaNGA DRP data cube in the $g$, $r$, $i$, and $z$ bands (provided in the FITS file containing the data cube of one galaxy) with the Gaussian PSF with a FWHM of $2.5^{\prime\prime}$ from Aniano et al. (2011). The MaNGA empirical PSFs match well with the Gaussian PSF in all these bands. After PSF matching, all the imaging layers are spatially resampled and reprojected to match the spatial sampling and projection of the broad-band imaging data cube produced by `piXedfit_images`. This task is done in the same way as the image processing described in Section 3.1.3. The next step is correcting the spatially resolved spectra for the foreground Galactic dust extinction. This step is only done for the MaNGA data cubes (Law et al., 2016), as such a correction has already been applied to the CALIFA cubes (Sánchez et al., 2016a). For this task, we use the $E(B-V)$ value obtained from the header (keyword: `EBVGAL`) of the MaNGA DRP FITS file and then apply the dust extinction correction adopting the Fitzpatrick (1999) reddening law with $R_{V}=3.1$. We have found that the normalizations of the IFS spectra and the photometric SEDs are often slightly offset from each other. There appears to be no general pattern in the flux offsets.
In addition to flux offsets that vary across wavelength in the SED of a pixel, the flux offset also varies spatially. To get a simplified pattern of the variation of the flux offsets, we first reconstruct $g$, $r$, and $i$ ($g$ and $r$) images from the post-processed IFS data from MaNGA (CALIFA) by convolving them with the broad-band filters. We then compare the reconstructed images with the real images. For MaNGA, the mean $\log(f_{\rm obs}/f_{\rm recons})$ in $g$, $r$, and $i$ are $-0.014\pm 0.085$, $-0.044\pm 0.092$, and $-0.032\pm 0.090$, respectively. For CALIFA, the $\log(f_{\rm obs}/f_{\rm recons})$ in $g$ and $r$ are $-0.065\pm 0.141$ and $-0.043\pm 0.133$, respectively, where $f_{\rm obs}$ and $f_{\rm recons}$ are the fluxes from the real image and the reconstructed image, respectively. These values are derived using the sample of 20 galaxies that will be used in the analysis of Section 6. The mismatch between the spectrum and the photometric SED can be caused by at least two factors: systematics in the data processing (PSF matching, spatial resampling, reprojection, etc.) of the broad-band imaging data and the IFS data, and the uncertainty in the flux calibration of the photometric and IFS data. For detailed descriptions of the flux calibration in the MaNGA and CALIFA surveys, please refer to Yan et al. (2016) and García-Benito et al. (2015), respectively. To overcome the photometry–spectroscopy offset, we multiply the spectrum by a wavelength-dependent smooth factor obtained from a third-order Legendre polynomial fit, such that the spectrum normalization becomes consistent with the normalization of the photometric SED. A polynomial order of $3$ is low enough to prevent introducing spectral breaks or artificial features into the spectrum.
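A minimal sketch of this rescaling with `numpy`'s Legendre utilities; the spectra below are synthetic stand-ins, whereas the actual `piXedfit` fit is done against the best-fit model spectrum:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
wave = np.linspace(3800.0, 7000.0, 500)   # wavelength grid (Angstrom)
f_mod = 1.0 + 0.1 * np.sin(wave / 500.0)  # model spectrum on the photometric scale
f_obs = 0.9 * f_mod * (1.0 + 0.02 * rng.standard_normal(wave.size))  # offset IFS spectrum

# Map wavelength onto [-1, 1], the natural domain of Legendre polynomials
x = 2.0 * (wave - wave.min()) / (wave.max() - wave.min()) - 1.0

# Fit a third-order Legendre polynomial to the model/observed flux ratio
coef = legendre.legfit(x, f_mod / f_obs, deg=3)
smooth_factor = legendre.legval(x, coef)

# Rescale the IFS spectrum onto the photometric normalization
f_corr = f_obs * smooth_factor
```

Because the correction is a smooth low-order polynomial of wavelength, it shifts the overall normalization without adding narrow spectral features.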
To find the smooth multiplicative factor, we first obtain a model spectrum that best describes the photometric SED using a $\chi^{2}$ minimization technique applied to a set of pre-calculated model SEDs (to be described in Section 4.2.2), then fit a third-order Legendre polynomial to the ratio between the model spectrum and the observed (IFS) spectrum. This method adopts the typical technique used in spectral fitting, in which a multiplicative polynomial function of a certain order ($\sim 2-8$) makes a model spectrum template fit the overall spectral shape of the observed spectrum (see e.g., Kelson et al., 2000; Koleva et al., 2009; Emsellem et al., 2004; Newman et al., 2014; Cappellari, 2017; Westfall et al., 2019; Belfiore et al., 2019). Figure 3 shows examples of spectrophotometric SED data cubes of the galaxy NGC 309, which is observed by the CALIFA survey (first row), and another galaxy, PLATE-IFU:8934-12702, observed by the MaNGA survey (second row). Regions in the galaxies that are covered by the IFU fiber bundle are shown by the transparent hexagonal regions overlaid on top of the $gri$ composite images (left panel in each row). Outside of these regions, we still have spatially resolved broad-band photometry data. In each row, the right panel shows the SEDs of 4 randomly chosen pixels: three spectrophotometric SEDs and one photometric SED. The $gri$ composite images are made using the `make_lupton_rgb` function in Astropy (Lupton et al., 2004).

Figure 3: Examples of the spectrophotometric SED data cubes obtained with the piXedfit_spectrophotometric module. The two galaxies are: NGC 309 (first row), observed by the CALIFA survey, and a galaxy with PLATE-IFU:8934-12702 (bottom row), observed by the MaNGA survey. The region covered by the IFU fiber bundle is plotted transparently on top of the $gri$ composite image (left panel in each row).
In each row, the right panel shows the SEDs of 4 randomly chosen pixels: three spectrophotometric SEDs and one photometric SED (shown by the purple colored points).

### 3.3 piXedfit_bin: Pixel Binning

In most cases, fluxes measured in individual pixels have a low $\text{S}/\text{N}$ ratio. It is also common to find pixels with missing or negative fluxes. In order to get an accurate inference of the parameters in the SED fitting, one typically needs an observed SED with a sufficient $\text{S}/\text{N}$ ratio. For this reason, we do not apply the SED fitting analysis to pixel-wise SEDs. Instead, we bin the data locally before conducting further analysis. Previous studies have applied pixel binning in spatially resolved SED fitting analyses (e.g., Wuyts et al., 2013; Belfiore et al., 2019; Sánchez et al., 2018). A popular pixel binning scheme is the Voronoi binning by Cappellari & Copin (2003), who showed that, with the Voronoi tessellation technique, the bins can be made as ‘compact’ as possible, non-overlapping, and with similar $\text{S}/\text{N}$ ratios (in a particular band). In Abdurro’uf & Akiyama (2017), we developed a new pixel binning scheme that takes into account the similarity in SED shape among pixels. This new criterion is especially important for spatially resolved SED fitting analyses, because it is expected to preserve important information in the SED at the pixel scale. While pixel binning is done to achieve a certain minimum S/N, at the cost of degrading the spatial resolution, we can still preserve important information in the SED at the pixel scale with this binning scheme. In conventional pixel binning schemes that do not consider the similarity of the SED shape, it is possible that neighboring pixels with different SED shapes (likely having different properties) are binned together. This could smooth out the spatial variation of the stellar population properties.
`piXedfit_bin` is a module designed for performing such a binning scheme, and is built upon what was developed in Abdurro’uf & Akiyama (2017). There are four requirements in the pixel binning scheme: (1) proximity, such that only neighboring connected pixels are binned together; (2) similarity of SED shape; (3) an S/N threshold in each band; and (4) a smallest diameter of a bin ($D_{\rm min,bin}$ in pixels). The last requirement is a new parameter introduced in the current version of `piXedfit_bin`. This parameter prevents the binning process from picking a single bright pixel as a bin. In some cases, a single bright pixel (typically around the central region) can exceed the S/N threshold such that further binning with other pixels is not needed. The smallest diameter of a bin can be thought of as the FWHM of the PSF, although the user is free to define the diameter. The pixel binning scheme adopted in `piXedfit_bin` is a simple empirical one. Briefly speaking, a spatial bin is obtained by first selecting the brightest pixel in a reference band defined by the user (a band around the middle of the rest-frame optical regime is recommended, e.g., the $r$ band). Then pixels enclosed within a diameter of $D_{\rm min,bin}$ from the brightest pixel are joined together and the total S/N of the bin (in each band) is checked. If the total S/N in each band is higher than the S/N threshold, the bin size is not expanded and the first bin is established. Otherwise, the bin’s radius is increased by $dr=2$ pixels and the pixels within the new annulus are examined to see if they have a similar SED shape to the brightest pixel. Pixels that have a similar SED shape are added into the bin and the total S/N in each band is checked. If the total S/N in each band is above the S/N threshold, the expansion of the bin is terminated. Otherwise, the expansion continues until the S/N threshold in each band is reached.
To proceed to the next bin, the brightest pixel among the remaining pixels is selected as the starting pixel, and the same procedure is applied again. The above procedure is repeated until no more bins can be made with the remaining pixels. In most cases, pixels around the outskirts are left unbinned. This is likely because too few of those outskirt pixels (which typically have low S/N) are left over by the previous binning process, so that binning those of them with a similar SED (within a certain $\chi^{2}$, to be described later) cannot reach the required S/N threshold. In this case, all the remaining pixels are finally binned into one bin. The similarity of the SED shape of a pixel with index $m$ to that of the brightest pixel with index $b$ is evaluated with the following $\chi^{2}$ formula $\chi^{2}=\sum_{i}\frac{(f_{m,i}-s_{mb}f_{b,i})^{2}}{\sigma_{m,i}^{2}+\sigma_{b,i}^{2}}.$ (1) Here, $i$ indexes the photometric bands, and $f_{m,i}$ and $f_{b,i}$ are the $i$-th band fluxes of pixels $m$ and $b$, respectively. $\sigma_{m,i}$ and $\sigma_{b,i}$ are the $i$-th band flux uncertainties of pixels $m$ and $b$. $s_{mb}$ is a scaling factor that brings the two SEDs to a similar normalization, and it can be calculated using $s_{mb}=\frac{\sum_{i}\frac{f_{m,i}f_{b,i}}{\sigma_{m,i}^{2}+\sigma_{b,i}^{2}}}{\sum_{i}\frac{f_{b,i}^{2}}{\sigma_{m,i}^{2}+\sigma_{b,i}^{2}}}.$ (2) If $\chi^{2}$ is smaller than a certain value ($\chi^{2}_{\rm max,bin}$, which is defined by the user), pixels $m$ and $b$ are considered to have a similar SED shape. Figure 4 shows two pixel binning results for NGC 309 obtained with binning requirements that only differ in the $\text{S}/\text{N}$ thresholds for the three 2MASS bands. The pixel binning results in the top and bottom panels use 2MASS $\text{S}/\text{N}$ thresholds of $1$ and $3$, respectively.
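Equations (1) and (2) translate directly into code; a minimal sketch:

```python
import numpy as np

def sed_similarity_chi2(f_m, sig_m, f_b, sig_b):
    """Chi-square similarity of the SED of pixel m to that of the
    brightest pixel b (Equations 1 and 2). Inputs are per-band flux
    and flux-uncertainty arrays."""
    var = sig_m**2 + sig_b**2
    # Scaling factor that brings the two SEDs to a similar normalization
    s_mb = np.sum(f_m * f_b / var) / np.sum(f_b**2 / var)
    chi2 = np.sum((f_m - s_mb * f_b)**2 / var)
    return chi2, s_mb
```

Two SEDs that differ only by a constant scale give $\chi^{2}=0$, so the criterion compares SED shapes, not normalizations, as intended.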
The $\text{S}/\text{N}$ threshold for the rest of the photometric bands is set to $10$ (see Figure 2 for the set of photometric bands). The other requirements are the same for the two binning runs: a $D_{\rm min,bin}$ of $4$ pixels and a reduced $\chi^{2}_{\rm max,bin}$ limit of $3.3$ in the SED shape similarity check. The $\text{S}/\text{N}$ ratios in the FUV and $J$ bands of the original pixels and bins are shown on the right side of each panel. The blue lines show the $\text{S}/\text{N}$ thresholds. The pixel binning scheme is able to meet the minimum $\text{S}/\text{N}$ requirement. A general trend is that the bin size increases with radius from the galaxy’s center, which can be understood because the $\text{S}/\text{N}$ of pixels decreases with radius, and thus more pixels are needed in a bin to reach the $\text{S}/\text{N}$ threshold. In this example, the 2MASS bands determine the overall result of the pixel binning because they are the shallowest (i.e., having the lowest $\text{S}/\text{N}$) among the photometric bands used in this analysis. Due to the SED shape similarity requirement, the pixel binning map roughly reconstructs the spiral arm structure (where the young stellar populations are), especially in the first binning analysis (top left panel). For binning a spectrophotometric data cube, we use the pixel binning map obtained with the multi-band images (described above) as a reference to bin the spectrophotometric SEDs of pixels, so that the spectroscopy and photometry of a bin are consistent. Since some member pixels of a bin may lack a spectroscopic SED, we only assign a spectrophotometric SED to a bin in which at least $90\%$ of the member pixels have a spectroscopic SED. The derived spatial binning map together with the fluxes and flux uncertainties are then saved into a multi-extension FITS file.

Figure 4: Results of pixel binning for NGC 309 obtained with the piXedfit_bin module.
The top and bottom panels show results of pixel binning with requirements that only differ in the $\text{S}/\text{N}$ thresholds for the three 2MASS bands. The top (bottom) panel uses $\text{S}/\text{N}$ thresholds of $1$ ($3$) for the 2MASS bands. The two binning runs use the same $\text{S}/\text{N}$ threshold of $10$ for all other bands. The other requirements are the same for the two runs: a $D_{\rm min,bin}$ of $4$ pixels and a reduced $\chi^{2}_{\rm max,bin}$ limit of $3.3$ in the SED shape similarity check.

### 3.4 piXedfit_model: Generating Model SEDs

`piXedfit_model` is a module designed for generating a model SED of a Composite Stellar Population (CSP) from a given set of input parameters.

#### 3.4.1 Generating Rest-frame Model Spectra

For generating model spectra, the Flexible Stellar Population Synthesis (`FSPS`; https://github.com/cconroy20/fsps) package is used (Conroy et al., 2009; Conroy & Gunn, 2010). As an interface to the Python environment, the `python-fsps` (http://dfm.io/python-fsps/current/) package is used (Foreman-Mackey et al., 2014). The `FSPS` package provides self-consistent modeling of a galaxy’s SED through careful modeling of the physical components that produce the total luminosity output of a galaxy. Those components consist of stellar emission, nebular emission, dust emission, and emission from the dusty torus heated by the AGN. We refer the reader to Conroy et al. (2009), Conroy & Gunn (2010), and Leja et al. (2017, 2018) for detailed descriptions of the SED modeling within `FSPS`. For brevity, we do not describe the ingredients of the SED modeling in detail in this paper but present the parameters in the SED modeling and fitting in Table 1 (a more detailed description of the ingredients in the SED modeling and the parameters associated with it is available at https://pixedfit.readthedocs.io/en/latest/ingredients_model.html).
In generating spectra of the Simple Stellar Population (SSP), `piXedfit_model` uses an option in FSPS that allows interpolation of SSP spectra between the $Z$ grids available in the isochrone and spectral libraries. The nebular emission modeling uses the CLOUDY code (Ferland et al., 1998, 2013), which was implemented in FSPS by Byler et al. (2017). For the dust attenuation modeling, `piXedfit_model` allows two options: Calzetti et al. (2000) and the two-component dust model of Charlot & Fall (2000). The dust emission modeling in FSPS assumes the energy balance principle, where the amount of energy attenuated by the dust is equal to the amount of energy re-emitted in the infrared (da Cunha et al., 2008). FSPS uses the Draine & Li (2007) dust emission templates to describe the shape of the infrared SED. For the modeling of emission from the dusty torus heated by the AGN, FSPS uses AGN templates from the Nenkova et al. (2008a, b) CLUMPY models. Because high spatial resolution infrared imaging data are rarely available, the dust emission and AGN dusty torus emission components are not applicable in most spatially resolved SED fitting implementations. We still include dust emission and AGN dusty torus emission in `piXedfit_model` because this module, together with `piXedfit_fitting`, can be used for fitting the integrated SED of a galaxy, not only spatially resolved SEDs. When sufficiently high spatial resolution infrared imaging data are available and the AGN component is necessary in the SED modeling, it is possible to include the AGN component when fitting only the SED of the central bin of a galaxy. Using the $D_{\rm min,bin}$ parameter in the pixel binning (see Section 3.3), the minimum diameter of a bin can be set to be similar to the PSF FWHM of the images (which is implemented in the binning result shown in the top left panel of Figure 4).
Thus, the central bin is always larger than the PSF size, and is therefore expected to enclose the AGN dusty torus component of the galaxy. Figure 5 shows an example of a rest-frame model spectrum (in black) generated using the `piXedfit_model` module. The model spectrum is broken down into its components: stellar emission (orange), nebular emission (blue), AGN dusty torus emission (green), and dust emission (red). Please refer to the caption for the values of the parameters used to generate the model spectrum.

Figure 5: Example of a rest-frame model spectrum (black line) generated with piXedfit_model. The decomposition of the spectrum into its components is shown with different colors: orange (stellar emission), cyan (nebular emission), red (dust emission), and green (AGN dusty torus emission). The model spectrum is generated assuming a delayed tau SFH with $\log(\tau[\text{Gyr}])=0.115$, $\log(\text{age}_{\rm sys}[\text{Gyr}])=0.637$, $\log(Z/Z_{\odot})=-1.208$, the Calzetti et al. (2000) dust attenuation law with $\hat{\tau}_{2}=1.384$, $\log(M_{*}/M_{\odot})=10.746$, $\log(f_{\rm AGN})=-0.809$, $\log(\tau_{\rm AGN})=0.957$, $\log(Q_{\rm PAH})=0.375$, $\log(U_{\rm min})=0.953$, and $\log(\gamma_{e})=-2.035$. For the meaning of the parameters please refer to Table 1.

#### 3.4.2 Choices for the Star Formation History (SFH)

In SED fitting, the assumed SFH is one of the fundamental components, yet it is difficult to constrain. As a fundamental component, the assumed SFH and associated priors strongly influence the inferred physical properties of galaxies, such that the robustness of the inferred parameters depends on whether or not the assumed SFH is flexible enough to reflect the true SFH of the galaxies (see e.g., Lee et al., 2009; Maraston et al., 2010; Michałowski et al., 2012, 2014; Conroy, 2013; Iyer & Gawiser, 2017; Carnall et al., 2019; Leja et al., 2019a; Lower et al., 2020).
Recent developments in SED fitting enable the inference of the SFH (i.e., the SFH is not only an assumption in the fitting). There have been many attempts to infer the SFHs of galaxies using SED fitting (e.g., Dye, 2008; Smith & Hayward, 2015; Pacifici et al., 2016; Iyer & Gawiser, 2017; Iyer et al., 2019; Carnall et al., 2018; Dressler et al., 2018; Leja et al., 2019a; Morishita et al., 2019). In terms of the SFH modeling approach, SED fitting techniques can be classified into two main categories: parametric and non-parametric SFH. The former assumes a functional form for the SFH (e.g., Han & Han, 2014; Carnall et al., 2018; Boquien et al., 2019; Zhou et al., 2020), while the latter does not; instead, the look-back time (i.e., stellar age) is gridded and the SFR in each time grid is left free in the fitting (e.g., VESPA, Tojeiro et al. 2007; Dressler et al. 2016; prospector, Leja et al. 2017; Chauke et al. 2018; gsf, Morishita et al. 2019; Dense Basis, Iyer & Gawiser 2017, Iyer et al. 2019), or, alternatively, a set of SSPs with various ages and metallicities is used to fit the observed SED (typically a spectrum, e.g., STARLIGHT, Cid Fernandes et al. 2005; STECMAP, Ocvirk et al. 2006; FIREFLY, Wilkinson et al. 2017). The parametric SFH approach has the advantage of fewer free parameters in the fitting and unlimited stellar age sampling (i.e., time resolution in the SFH) compared to the non-parametric approach. The non-parametric approach is expected to be more flexible in reflecting the real SFHs of galaxies (compared to the parametric one). However, this approach has the disadvantages of a cruder sampling of stellar ages and possibly complex degeneracies in the fitting due to the large number of parameters involved. Recently, Carnall et al. (2018) have shown that the double power law SFH model can recover the SFHs of simulated galaxies from the `MUFASA` suite of cosmological hydrodynamical simulations.
The double power law form has also been applied to fit the evolution of the cosmic SFR density (Behroozi et al., 2013). Another study, by Diemer et al. (2017), showed that the log-normal SFH model can produce good fits to the SFHs of simulated galaxies from the cosmological simulation Illustris. In `piXedfit_model`, we adopt the parametric SFH approach, with 5 choices: exponentially declining (i.e., tau model), delayed tau, log-normal, Gaussian, and double power law SFHs (the functional forms of the SFH models are described in detail at https://pixedfit.readthedocs.io/en/latest/ingredients_model.html). The double power law SFH has the following form, $SFR(t)\propto\left[\left(\frac{t}{\tau}\right)^{\alpha}+\left(\frac{t}{\tau}\right)^{-\beta}\right]^{-1},$ (3) where $\alpha$ and $\beta$ are the falling and rising slopes, respectively. The $\tau$ parameter controls the peak time, and $t$ represents the time since the start of star formation (i.e., the age of the system, $\text{age}_{\text{sys}}$).

#### 3.4.3 IGM Absorption, Cosmological Redshifting, and Integrating through Photometric Filters

The rest-frame model spectra generated in the previous step are then attenuated further to account for the absorption due to the intergalactic medium (IGM) between the galaxy and the observer. `piXedfit_model` has two options for the IGM absorption: Madau (1995) and Inoue et al. (2014). The effect of cosmological redshifting and dimming is then applied to the model spectra. This transforms the spectra (which are still in units of luminosity density, $L_{\lambda}$) into observer-frame flux density ($f_{\lambda}$). For this operation, the redshift of the galaxy is needed. However, if the redshift is unknown, it will be a free parameter in the fitting. The calculation of the luminosity distance uses the `cosmology` package in `Astropy`.
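As an illustration, the double power law SFH of Equation (3) can be coded, up to its normalization, as:

```python
import numpy as np

def sfr_double_power_law(t, tau, alpha, beta):
    """Unnormalized SFR(t) of the double power law SFH (Equation 3):
    alpha is the falling slope, beta the rising slope, and tau sets
    the peak time; t is the time since the start of star formation."""
    t = np.asarray(t, dtype=float)
    return 1.0 / ((t / tau)**alpha + (t / tau)**(-beta))
```

Note that at $t=\tau$ the unnormalized SFR equals $0.5$ regardless of the slopes, which makes $\tau$ a convenient anchor for the peak epoch.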
The last step in generating model photometric SEDs is to convolve the model spectra with the set of filter transmission functions. The current version of `piXedfit` has a library of transmission functions for 163 photometric filters of ground-based and space-based telescopes. The user can also add a filter transmission function using a dedicated function in `piXedfit`. Please refer to Table 1 for a compilation of the parameters involved in the SED modeling and fitting.

Table 1: Description of the parameters involved in the SED modeling and fitting

Parameter | Description
---|---
$M_{*}$ | Stellar mass
$Z$ | Stellar metallicity
$t$ | Evolving age ($\text{age}_{\text{sys}}$) of the stellar population
$\tau$ | A parameter in the SFH that controls the duration of star formation
$T_{0}$ | A parameter in the log-normal and Gaussian SFHs that controls the peak time
$\alpha$ | A parameter in the double power law SFH that controls the slope of the falling star formation episode
$\beta$ | A parameter in the double power law SFH that controls the slope of the rising star formation episode
$\hat{\tau}_{1}$ | Dust optical depth of the birth cloud in the Charlot & Fall (2000) dust attenuation law
$\hat{\tau}_{2}$ | Dust optical depth of the diffuse ISM in the Calzetti et al. (2000) and Charlot & Fall (2000) dust attenuation laws
$n$ | Power law index of the dust attenuation curve for the diffuse ISM in the Charlot & Fall (2000) dust attenuation law
$U$ | Ionization parameter in the nebular emission modeling
$U_{\rm min}$ | Minimum starlight intensity that illuminates the dust
$\gamma_{e}$ | Fraction of the total dust mass that is exposed to this minimum starlight intensity
$Q_{\rm PAH}$ | Fraction of the total dust mass that is in polycyclic aromatic hydrocarbons (PAHs)
$f_{\rm AGN}$ | AGN luminosity as a fraction of the galaxy bolometric luminosity
$\tau_{\rm AGN}$ | Optical depth of the AGN dusty torus

## 4 SED Fitting Approach in piXedfit

The SED fitting in `piXedfit` is done by the `piXedfit_fitting` module. This module can perform SED fitting to a photometric SED as well as to a spectrophotometric SED. The SED fitting approach adopted in `piXedfit` is described in the following sections.

### 4.1 Bayesian Inference Method

The `piXedfit_fitting` module uses the Bayesian inference technique for estimating the underlying parameters of a galaxy’s SED. Two important components in the Bayesian inference process are the likelihood (i.e., $P(X|\theta)$, the probability of observing the data $X$ given the model $\theta$) and the prior (i.e., $P(\theta)$, the hypothesis on the probability of model $\theta$ before fitting with the data). In SED fitting, the likelihood is commonly given by a Gaussian function because of the assumption of a Gaussian form of the noise. The Gaussian likelihood form is used by the majority of Bayesian SED fitting implementations, e.g., Kauffmann et al. (2003), MAGPHYS (da Cunha et al., 2008), BayeSED (Han & Han, 2014), BAGPIPES (Carnall et al., 2018), and CIGALE (Burgarella et al., 2005; Noll et al., 2009; Boquien et al., 2019). In Abdurro’uf & Akiyama (2017), we implemented a different likelihood function that makes use of the Student’s t function.
The new likelihood function has been shown to give a better recovery of the SFR in fitting tests using mock SEDs and better matching to the SFR derived from the Spitzer/MIPS $24\mu$m flux (see Appendix A of Abdurro’uf & Akiyama 2017). Motivated by this result, we implement two kinds of likelihood functions in piXedfit: (1) the Gaussian function as mentioned above, and (2) the Student’s t function, which has the following form $P(X|\theta)=\prod_{i=1}^{n}\frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\Gamma\left(\frac{\nu}{2}\right)}\left(1+\frac{\chi_{i}^{2}}{\nu}\right)^{-\frac{\nu+1}{2}},$ (4) where $\chi_{i}$ is given by $\chi_{i}=\frac{f_{X,i}-sf_{\theta,i}}{\sigma_{X,i}}.$ (5) Here, $n$ represents the number of bands (in the case of a photometric SED) or wavelength points (photometric bands and wavelength grids of the spectrum, in the case of a spectrophotometric SED), while $f_{X,i}$ and $\sigma_{X,i}$ represent the observed flux and its associated uncertainty in a given band or wavelength $i$, respectively. When fitting a spectrum (or spectrophotometric SED), only the spectral continuum (or spectral continuum and photometric SED) is fitted. A certain window (default of $\pm 10\text{\AA}$) around all possible emission lines (based on the list of emission line wavelengths from FSPS) is used to exclude emission lines in the fitting. $f_{\theta,i}$ is the flux of the model SED in band or wavelength point $i$, and $s$ is a scaling factor that brings the model SED to an overall normalization similar to that of the observed SED. Since a model SED generated with FSPS is normalized to $1M_{\odot}$, $s$ corresponds to the stellar mass. $\nu$ represents the degrees of freedom, which should be specified by the user. A large value of $\nu$ gives a likelihood function similar to the Gaussian one, while a small value of $\nu$ gives heavier tails in the likelihood distribution (compared to the Gaussian one).
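A sketch of the log of the Student's t likelihood of Equations (4) and (5), written directly from the formulas rather than taken from the `piXedfit` source:

```python
import numpy as np
from scipy.special import gammaln

def ln_student_t_likelihood(f_obs, f_mod, sigma, s, nu):
    """Log of Equation (4), with the residuals chi_i of Equation (5).
    f_obs, f_mod, sigma are arrays over bands/wavelength points; s is
    the model normalization and nu the degrees of freedom."""
    chi = (f_obs - s * f_mod) / sigma
    ln_norm = (gammaln(0.5 * (nu + 1.0)) - gammaln(0.5 * nu)
               - 0.5 * np.log(nu * np.pi))
    return np.sum(ln_norm - 0.5 * (nu + 1.0) * np.log1p(chi**2 / nu))
```

The `log1p` term is exactly the log of the $(1+\chi_{i}^{2}/\nu)^{-(\nu+1)/2}$ factor, so each band contributes the log-density of a Student's t distribution evaluated at $\chi_{i}$.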
In Appendix B, we compare the performances of various fitting approaches and determine the best value for $\nu$. We find that $\nu\sim 1-3$ gives overall robust and stable inference of the parameters. The flux uncertainty ($\sigma_{X,i}$) is not just taken from the observational error, which is often an underestimation, but also considers the systematic uncertainties that come from the observational procedure (e.g., associated with image processing) and the SED modeling procedures. We assume that the bulk of the systematic uncertainties is a multiplicative factor of the observed fluxes such that $\sigma_{sys,i}=\text{err}_{sys}\times\sigma_{X,i}$, following Han & Han (2019). We do not set $\text{err}_{sys}$ as a free parameter in the fitting, considering that it could add a degeneracy to the fitting process; instead, we fix it to a value obtained from a fitting test that can be done either on each individual galaxy or on one galaxy representative of the whole sample. In practice, in the fitting test we vary $\text{err}_{sys}$ such that the reduced $\chi^{2}$ of the best-fit model SED is below $\sim 2.0$. Without adding such systematic uncertainties, it is quite common to find cases where the reduced $\chi^{2}$ of the best-fit model SED is large while the flux residuals are actually very small. From the analysis of 20 local galaxies (to be presented in Section 6), we find that $\text{err}_{sys}\lesssim 0.15$ is enough to reach the required reduced $\chi^{2}$ mentioned above. In the default setting and in the analysis throughout this paper, a flat prior over a certain range is assumed for each parameter. For versatility, `piXedfit_fitting` can also accept priors supplied by the user in array or text file format.

### 4.2 Posterior Sampling Method

The main task in Bayesian parameter inference is to solve for the posterior probability distribution function of each parameter. Commonly, a sampling method is used to reconstruct the posteriors.
In the SED fitting application, there are at least three approaches adopted for the posterior sampling: the gridding method (e.g., Boquien et al., 2019; Chen et al., 2020), MCMC (e.g., Acquaviva et al., 2011; Leja et al., 2017; Morishita et al., 2019), and nested sampling (e.g., Han & Han, 2014; Carnall et al., 2018; Leja et al., 2019b). In the gridding method, each parameter space is divided into a number of grids, then model SEDs are generated for all possible combinations of the parameter grids. One of the advantages of the gridding method is that it can fit a large number of SEDs quickly, especially if the set of model SEDs (with many redshift grids) is generated before the fitting. The disadvantage of this method is that it typically requires a large number of parameter grids (and thus of model SEDs) in order for the sampling to be complete, especially for a high dimensional parameter space. In MCMC fitting, the $N$-dimensional parameter space is explored by random walks of sampler chains. Over time, the frequency of visited locations can in principle be representative of the posterior probability function. The disadvantage of this method is that it is computationally expensive and typically slow. In `piXedfit_fitting`, we adopt two different posterior sampling methods: MCMC and random densely-sampling of parameter space (hereafter RDSPS). Each of those methods is described in the following.

#### 4.2.1 Fitting with MCMC

For the MCMC sampling, we use the `emcee` (https://github.com/dfm/emcee) package by Foreman-Mackey et al. (2013, 2018, 2019). Before running the MCMC sampling, an initial fitting is done using the $\chi^{2}$ minimization technique to get an initial guess and set the initial positions of the MCMC walkers. For this fitting, a set of pre-calculated model SEDs (to be described in Section 4.2.2) is used.
The initial positions of the MCMC walkers are defined by a small asymmetric Gaussian “ball” with a $\sigma=0.08\times W$ around the best-fit parameters obtained from the initial fitting, where $W$ is the width (i.e., prior range) of a parameter space. The next step is running the MCMC. The number of MCMC walkers and steps should be defined by the user. While the MCMC is running, a model likelihood has to be supplied for each ensemble of $N$ parameter values that is generated. In this case, we use the Gaussian likelihood function for calculating the model likelihood. The MCMC sampling finishes when every walker has completed the specified number of steps. The result of the MCMC sampling is the set of sampler chains, which record the locations in the parameter space visited by the walkers throughout the process. From these sampler chains, the posterior probability distribution of each parameter can be constructed. The inferred value of each parameter is then obtained from the median of the posterior, while the uncertainty is defined by the range given by the 16th and 84th percentiles. In order to make the calculation efficient, the parallelization module in `emcee` is used.

#### 4.2.2 Random Densely-sampling of Parameter Space (RDSPS)

The second sampling method we adopt is the RDSPS method, a simple sampling method inspired by the gridding method described previously. Unlike the gridding method, which defines fixed grids of values for each parameter, the RDSPS method draws random values uniformly within the prior range of each parameter. For generating $N_{\rm mod}$ model SEDs with $N$ parameters, $N_{\rm mod}$ random values are generated for each parameter. Then, those $N$ arrays of parameters are randomly combined with each other to construct the library of model SEDs. The reason for using the RDSPS method over the gridding method is its efficiency.
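Drawing the RDSPS library of random parameter sets can be sketched as follows; the uniform priors follow the text, while the function name is hypothetical:

```python
import numpy as np

def build_rdsps_library(prior_lo, prior_hi, n_mod, rng=None):
    """Draw n_mod parameter sets, each value uniform within the prior
    range of its parameter; drawing every parameter independently
    realizes the random combination of the parameter arrays."""
    rng = np.random.default_rng(rng)
    lo = np.asarray(prior_lo, dtype=float)
    hi = np.asarray(prior_hi, dtype=float)
    return rng.uniform(lo, hi, size=(n_mod, lo.size))
```

Each row of the returned array is one model's parameter vector, ready to be fed to the SED generator and stored in the pre-calculated library.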
With a smaller number of generated models (e.g., $\sim$500000 for 9 free parameters), sub-regions along each parameter axis can be represented by at least several models. To reduce the computation time, a large number of model SEDs is calculated in advance and stored in FITS files. The models are calculated on a grid of redshifts with an increment of $0.002$, and the set of model SEDs at a given redshift is stored in one FITS file. This library of model SEDs can then be used for fitting all the galaxies in a sample. When the redshift of a galaxy is known, model SEDs are calculated at that redshift by applying cubic-spline interpolation to the set of pre-calculated model SEDs; otherwise, the redshift is set as a free parameter. In the spatially resolved SED fitting application, for higher accuracy, it is also possible to generate a set of model SEDs for each galaxy based on its known redshift; this set of model SEDs is then used for fitting all the spatial bins of the galaxy. The next step in the fitting is to calculate the posterior probability of each model. For fitting with the RDSPS method, we allow two kinds of likelihood functions: Gaussian and Student's t. In the calculation of the model likelihood, the normalization ($s$) of a model SED is obtained from the analytic solution that minimizes the $\chi^{2}$ (see e.g., Eq. 7 in Sawicki 2012). We do not treat $s$ as a free parameter, for the sake of efficiency. After calculating the posterior probability of each model, the inferred value of each parameter is obtained by weighted averaging, with the model posteriors serving as the weights, and the uncertainty is estimated from the weighted standard deviation. For fast fitting performance, we have incorporated parallel processing via the message passing interface (MPI) in this SED fitting module.
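The posterior-weighted inference with the analytic normalization can be sketched as (a minimal Gaussian-likelihood version for one parameter; names are hypothetical):

```python
import numpy as np

def rdsps_inference(obs_flux, obs_err, model_fluxes, param_values):
    """Posterior-weighted parameter inference for the RDSPS method.
    The normalization s of each model follows the analytic chi^2-minimizing
    solution (cf. Sawicki 2012, Eq. 7); a Gaussian likelihood is used here."""
    s = (np.sum(obs_flux * model_fluxes / obs_err**2, axis=1)
         / np.sum(model_fluxes**2 / obs_err**2, axis=1))
    chi2 = np.sum(((obs_flux - s[:, None] * model_fluxes) / obs_err)**2, axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))  # Gaussian likelihood as weight
    w /= w.sum()
    mean = np.sum(w * param_values)                      # weighted average
    std = np.sqrt(np.sum(w * (param_values - mean)**2))  # weighted std dev
    return mean, std
```

With flat priors, the normalized likelihoods act directly as posterior weights, so no explicit prior factor appears in this sketch.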
### 4.3 piXedfit_analysis: Visualization of Fitting Results The output of the fitting process with the `piXedfit_fitting` module is a FITS file containing sampler chains (in the case of fitting with MCMC) or posterior probabilities of model SEDs (in the case of fitting with the RDSPS method). The FITS file can then be used for further analysis, such as deriving inferred values of parameters and visualizing the fitting results; the latter task can be done with the `piXedfit_analysis` module. For visualizing fitting results from MCMC, three kinds of plots can be made using the `piXedfit_analysis` module: a corner plot, an SED plot, and an SFH plot. The corner plot shows the posterior probability distributions (constructed from the sampler chains) of individual parameters (as 1D histograms) as well as the joint posterior probability distributions of every pair of parameters (in 2D). In the corner plot, the inferred values of the parameters (medians of the posteriors) and the uncertainties (16th–84th percentile ranges of the posteriors) are shown as black vertical lines and gray shaded areas in the 1D histograms, respectively. To produce the SED plot, an ensemble of $200$ samples is randomly picked from the full MCMC sampler chains and their spectra are generated. The median posterior model SED (spectrum as well as photometric SED) and its uncertainty are then obtained by taking the median and the 16th and 84th percentiles of the ensemble of spectra. The residual, $(f_{X,i}-f_{\theta,i})/f_{X,i}$, is also shown in the SED plot (see Section 4.1 for the definitions of $f_{X,i}$ and $f_{\theta,i}$). To produce the SFH plot, the inferred SFH is derived by first randomly picking $200$ samples from the full MCMC sampler chains and then calculating the SFHs associated with those samples. The median and the 16th and 84th percentiles are then calculated from the ensemble of SFHs at each time step.
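The percentile summary used for both the SED and SFH plots can be sketched as:

```python
import numpy as np

def summarize_ensemble(curves):
    """Given an ensemble of model curves (spectra or SFHs) with shape
    (n_samples, n_points), return the median curve and the 16th/84th
    percentile envelope, computed independently at each point."""
    p16, med, p84 = np.percentile(curves, [16, 50, 84], axis=0)
    return med, p16, p84
```

The same helper works for spectra (points are wavelengths) and SFHs (points are time steps).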
The median is used as the inferred SFH, while the area between the 16th and 84th percentiles is used as the associated uncertainty. For fitting with the RDSPS method, currently only the SED plot can be produced, in which the best-fit model SED is the model with the lowest $\chi^{2}$. Examples of the corner plot, SED plot, and SFH plot can be seen in Figures 6 and 13. ## 5 Testing the SED Fitting Performance Using Mock SEDs of IllustrisTNG Galaxies In this section, we use FUV–NIR mock SEDs of IllustrisTNG (hereafter TNG) galaxies to test the performance of the `piXedfit_fitting` module in terms of parameter inference and SFH reconstruction. We leave the fitting experiment using mock FUV–FIR SEDs for future work. ### 5.1 Generating Mock SEDs of TNG Galaxies The IllustrisTNG simulations (http://www.tng-project.org; Marinacci et al., 2018; Naiman et al., 2018; Nelson et al., 2018; Pillepich et al., 2018; Springel et al., 2018; Nelson et al., 2019) are a suite of cosmological hydrodynamical simulations that model a range of physical processes involved in the formation of galaxies. To test the performance of SED fitting with `piXedfit_fitting` in inferring galaxy properties, we generate mock SEDs of TNG galaxies and then fit them with the `piXedfit_fitting` module to see whether the inferred parameters recover the true properties of the TNG galaxies. Furthermore, having realistic SFHs from the TNG galaxies, we can also test the performance of the `piXedfit_fitting` module in reconstructing the SFH of a galaxy. For this fitting test, we use the fiducial TNG100 simulation, which has a volume of $\sim 100^{3}$ comoving $\text{Mpc}^{3}$ and a baryon mass resolution of $1.4\times 10^{6}M_{\odot}$. We select 300 galaxies from the TNG100 simulation.
More specifically, we select 100, 80, 60, and 40 galaxies in every $0.5$ dex bin in $M_{*}$ between $10^{9}$ and $10^{11}M_{\odot}$, and another 20 galaxies more massive than $10^{11.5}M_{\odot}$. These numbers are somewhat arbitrary, chosen simply to reflect that there are more low-mass galaxies than high-mass ones. In each mass bin, we first rank all TNG galaxies by their sSFR and choose the target number of galaxies equally spaced in sSFR percentile. In this way, the selected galaxies cover the entire sSFR range. The mock spectra of TNG galaxies are created by regarding each stellar particle as a simple stellar population (SSP) and generating the spectrum of each stellar particle using FSPS. In generating the SSP spectra, the Padova isochrones (Girardi et al., 2000; Marigo & Girardi, 2007; Marigo et al., 2008), the MILES stellar spectral library (Sánchez-Blázquez et al., 2006; Falcón-Barroso et al., 2011), and the Chabrier (2003) IMF are assumed. The integrated spectrum of a galaxy is then obtained by summing up the spectra of the gravitationally bound stellar particles in the subhalo associated with the galaxy. We assume a redshift of $0.001$. To mimic the effect of dust attenuation, we assign each galaxy a random value of the dust optical depth ($\hat{\tau}_{2}$) and then apply the Calzetti et al. (2000) dust attenuation law to the galaxy's spectrum. The random values of $\hat{\tau}_{2}$ are uniformly distributed between $0$ and $2.5$. For this fitting test, we generate two kinds of mock SEDs: photometric and spectrophotometric. The photometric SEDs are obtained by convolving the synthetic spectra with 12 broad-band filters: GALEX (FUV, NUV), SDSS ($u$, $g$, $r$, $i$, $z$), 2MASS ($J$, $H$, $K_{s}$), and WISE ($W1$, $W2$). For the spectrophotometric SED, the photometric SED is created with the above procedure, while the mock spectrum is created to match the characteristics of a MaNGA spectrum.
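The synthetic photometry step, convolving a spectrum with a filter transmission curve, can be sketched as follows (a minimal illustration in the photon-counting convention; piXedfit's own filter routine is not shown in the text and may differ in conventions):

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def filter_flux(wave, flux, trans_wave, trans):
    """Bandpass-averaged flux density of a spectrum f_lambda through a
    filter transmission curve T(lambda), photon-counting convention:
    f_band = int(f T lambda dlambda) / int(T lambda dlambda)."""
    T = np.interp(wave, trans_wave, trans, left=0.0, right=0.0)
    return _trapz(flux * T * wave, wave) / _trapz(T * wave, wave)
```

Repeating this for each of the 12 filters turns a synthetic spectrum into a mock photometric SED.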
To mimic observational noise, Gaussian noise is injected into the SEDs (both photometric and spectroscopic) by randomly perturbing each flux point from its original value, drawing from a Gaussian distribution with a standard deviation dictated by the flux uncertainty. We assign each SED (either photometric or spectroscopic) an $\text{S}/\text{N}$ ratio of $10$. We create the mock FUV–NIR SEDs with settings similar to those provided in `piXedfit_model` because we only focus on testing the performance of the fitting algorithm of the `piXedfit_fitting` module. ### 5.2 SED Fitting Analysis of TNG Galaxies We fit the synthetic SEDs with the `piXedfit_fitting` module using the same assumptions for the IMF, spectral library, isochrones, and dust attenuation law as those used for creating the synthetic SEDs. For the SFH, we use the double power law model, chosen because of its flexibility in the rising and falling phases. Since the wavelength of the mock SEDs ranges from FUV to NIR, we turn off the AGN dusty torus emission and the dust emission modeling in the fitting. This leaves us with seven free parameters: $Z$, $\tau$, $t$ ($\text{age}_{\text{sys}}$), $\hat{\tau}_{2}$, $\alpha$, $\beta$, and $M_{*}$. Flat priors within a given range are assumed for all the parameters. Logarithmic sampling is applied to all the parameters except $\hat{\tau}_{2}$. The assumed parameter ranges for the priors are as follows: $\log(Z/Z_{\odot})=[-0.5,0.42]$, $\log(\tau)=[-3.0,0.6]$, $\log(t)=[0.9,1.14]$, $\hat{\tau}_{2}=[0.0,3.0]$, $\log(\alpha)=[-2.0,1.0]$, and $\log(\beta)=[-2.0,1.0]$. For $M_{*}$, we use a flat prior in logarithmic scale within a range of $\log(M_{*})=[\log(s_{\text{best}})-2,\log(s_{\text{best}})+2]$, where $s_{\text{best}}$ is the normalization obtained from the initial fitting with the $\chi^{2}$ minimization technique (see Section 4.2.1).
To compare the performance of the various fitting approaches provided within `piXedfit_fitting`, we perform the SED fitting with 8 different fitting approaches. These cover the two posterior sampling methods (MCMC and RDSPS), the two likelihood functions (Gaussian and Student's t) in the RDSPS method, and 6 values of the degrees of freedom ($\nu$) for the Student's t likelihood function: $0.3$, $1.0$, $2.0$, $3.0$, $5.0$, and $10.0$. In the MCMC fitting, the numbers of walkers and steps are 100 and 1000, respectively. Figure 6 shows fitting results for a mock photometric SED (left) and a spectrophotometric SED (right) of a TNG galaxy. The fitting uses the MCMC technique. On each side, three plots are shown: a corner plot, an SED plot, and an SFH plot. The three plots are made using the `piXedfit_analysis` module (see Section 4.3). In the corner plot, the black vertical dashed lines and the shaded areas represent the median and the 16th–84th percentile range, while the red vertical lines show the true values. The SED plot shows the mock photometric SED (blue squares), the mock spectrum (red line, in the case of the right panel), the median posterior model spectrum (black line), and the median posterior model photometric SED (gray squares). The residual is given by $(f_{X,i}-f_{\theta,i})/f_{X,i}$ (see Section 4.3). In the SFH plot, the black line and gray shaded area represent the inferred SFH and its uncertainty, while the red line shows the true SFH. The figure shows that SED fitting with the `piXedfit_fitting` module can recover the true properties and the overall trend of the SFH of the TNG galaxy well. The addition of the synthetic spectrum in the so-called spectrophotometric SED adds constraining power to the fitting process and results in better constraints on $Z$ and the SFH parameters (i.e., $\alpha$, $\beta$, and $\tau$). The fitting with the spectrophotometric SED also reveals a bimodality in the posterior probability distribution of the metallicity.
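The Student's t likelihood used in some of these fitting approaches can be sketched as follows (the standard Student's t density over the normalized residuals; the exact form used internally by piXedfit may differ, e.g., in normalization):

```python
import numpy as np
from scipy.special import gammaln

def ln_likelihood_student_t(chi, nu):
    """Student's t log-likelihood over the normalized residuals
    chi_i = (f_obs_i - f_mod_i) / sigma_i, with nu degrees of freedom.
    Smaller nu gives heavier tails, i.e., more tolerance to outliers."""
    chi = np.asarray(chi)
    lnp = (gammaln(0.5 * (nu + 1.0)) - gammaln(0.5 * nu)
           - 0.5 * np.log(np.pi * nu)
           - 0.5 * (nu + 1.0) * np.log1p(chi**2 / nu))
    return np.sum(lnp)
```

As $\nu\to\infty$ this tends to the Gaussian likelihood, so $\nu$ controls how strongly outlying data points are down-weighted.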
Figure 6: Examples of fitting results for the mock photometric SED (left) and spectrophotometric SED (right) of a TNG galaxy. The fitting uses the MCMC method. For each fitting result, three kinds of plots are shown: a corner plot, an SED plot, and an SFH plot. See the text for a description of the symbols in each plot. ### 5.3 Recovering Physical Properties of the TNG Galaxies For the first test, in this section we compare the inferred parameters obtained from the fitting with the true properties of the TNG galaxies. The metallicity and age of a TNG galaxy are obtained by mass-weighted averaging over the metallicities and ages of its stellar particles, respectively. The SFR of a TNG galaxy is estimated from the amount of stellar mass formed over the last $50$ Myr, based on the formation times of individual stellar particles. Figure 7 shows direct comparisons between the inferred parameters obtained from fitting the mock photometric SEDs and the true values for two fitting approaches: RDSPS with the Student's t likelihood and $\nu=2.0$ (first and third columns) and MCMC (second and fourth columns). We discuss the fitting results obtained with the other 6 fitting approaches in Appendix B. In brief, we find that all 8 fitting approaches can recover the true properties of the TNG galaxies well. However, we find an indication that the RDSPS approach with the Student's t likelihood function and $\nu\sim 1-3$ can outperform RDSPS with other likelihood functions and gives results broadly consistent with the MCMC method, but with much faster performance. In Figure 7, the scattered data are color-coded by the sSFR of the TNG galaxies. The inferred mass-weighted age is calculated by weighting the ages of stars (i.e., look-back times in the SFH) by their stellar masses at birth (i.e., the amount of stellar mass produced at each look-back time).
To assess the goodness of the parameter recovery, we calculate the mean offset ($\mu$), the scatter (i.e., standard deviation, $\sigma$), and the Spearman rank-order correlation coefficient ($\rho$, calculated using the SciPy package; Virtanen et al. 2020). The coefficient $\rho$ is a nonparametric measure of the monotonicity of the relationship between two datasets. The histogram of the logarithmic ratio and the associated $\mu$, $\sigma$, and $\rho$ values are shown along with the scattered data. Figure 7: Comparisons between the inferred parameters from fitting and the true properties of the TNG galaxies. Results from fitting with two approaches (RDSPS with the Student's t likelihood and $\nu=2.0$, and MCMC) are shown. The color-coding is based on the sSFR of the TNG galaxies. In each panel, a histogram of the logarithmic ratio between the inferred values from fitting and the true values, along with its summary statistics ($\mu$, $\sigma$, $\rho$), is shown. The figure shows that, overall, the parameters inferred from fitting the photometric SEDs with the two approaches recover the true values quite well, except for $Z$, for which the true values are only broadly followed by the inferred values, though with a small median offset ($\sim 0.1$ dex) and scatter ($\sim 0.2$ dex). Among the parameters, $M_{*}$ is the best recovered, corroborated by the small offset (absolute value of $\mu$ of $\lesssim 0.01$ dex), small scatter ($\sim 0.09$ dex), and high value (close to unity) of $\rho$ ($\sim 0.98$). The $\hat{\tau}_{2}$ and SFR are also successfully recovered by the fitting. While the mass-weighted age is overall well recovered, there is a trend of increasing scatter toward galaxies with young stellar populations. The color-coding indicates that the relatively larger scatter in the low mass-weighted age region is associated with increasing sSFR.
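The recovery statistics $\mu$, $\sigma$, and $\rho$ described above can be computed as, for example:

```python
import numpy as np
from scipy.stats import spearmanr

def recovery_stats(true_vals, inferred_vals):
    """Mean offset, scatter, and Spearman rank-order correlation of the
    logarithmic ratio between inferred and true parameter values."""
    dlog = np.log10(np.asarray(inferred_vals) / np.asarray(true_vals))
    mu = np.mean(dlog)
    sigma = np.std(dlog)
    rho, _ = spearmanr(true_vals, inferred_vals)
    return mu, sigma, rho
```

A perfect recovery would give $\mu=0$, $\sigma=0$, and $\rho=1$.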
The outshining effect of young stars (which are abundant in galaxies with high sSFR) may be playing a role in this. A high-sSFR galaxy tends to have a stellar population dominated by young stars, so the light from the young bright stars dominates that from the older ones, making it relatively easy to “hide” old stellar populations; consequently, it is more difficult to infer the SFH of the galaxy (see e.g., Sawicki & Yee, 1998; Papovich et al., 2001; Maraston et al., 2010; Conroy, 2013). However, we also find that the $Z$ inference tends to be better for the high-sSFR galaxies than for the low-sSFR ones. The difficulty of inferring metallicities from SED fitting with photometric data alone has also been reported in the literature: e.g., Pacifici et al. (2012, their Fig. 11), who fit mock optical photometry with a set of model SEDs of galaxies drawn from a semi-analytical model, which exhibit complex SFHs; Han & Han (2014, their Fig. 18), who fit mock FUV–NIR SEDs with BayeSED, which uses the nested sampling method; and Smith & Hayward (2018, their Fig. 11), who employed MAGPHYS to perform pixel-by-pixel SED fitting on a set of FUV–FIR synthetic images constructed from a zoom-in simulation of an isolated disk galaxy. Despite the wide wavelength coverage (which can be expected to break the well-known age–metallicity–dust attenuation degeneracy) implemented in Smith & Hayward (2018), the inferred metallicity from SED fitting is systematically underestimated compared to the true values. Michałowski et al. (2014) evaluated the $M_{*}$ inference of various SED fitting codes, which assume various SFH models, using synthetic FUV–FIR photometric SEDs of simulated galaxies. The median offsets and scatters in the $M_{*}$ comparisons have ranges of $0.01-0.2$ dex and $0.09-0.4$ dex, respectively. Lower et al.
(2020) applied a non-parametric SFH model with prospector to fit synthetic FUV–FIR photometric SEDs of simulated galaxies and obtained a median offset of $0.02$ dex and a scatter of $0.13$ dex. Our $M_{*}$ inference has a smaller offset and scatter than those obtained in the above studies, although the more comprehensive (i.e., more realistic) simulation of the dust component (through the radiative transfer technique) in the construction of the mock SEDs used in those studies might add more complexity to the fitting test. To investigate the effect of including the spectrum in the SED on the performance of the parameter inference, we carry out the same fitting tests on the mock spectrophotometric SEDs of the TNG galaxies. In fitting the spectrophotometric SED of a galaxy, only the spectral continuum is fitted simultaneously with the photometric SED (see Section 4.1). Figure 8 shows the comparison between the inferred parameters obtained from the fitting and the true values. The format of this figure is the same as that of Figure 7. Overall, we see improvements in the inference of all the parameters (with the two fitting approaches) over what is obtained with the photometric data only, corroborated by the smaller scatters and higher $\rho$ values, though with slightly larger offsets. The significant increase in the $\rho$ value of $Z$ suggests that the inferred $Z$ becomes better aligned with the true $Z$. This happens simultaneously with the decreasing scatter in $\hat{\tau}_{2}$, which suggests that the inclusion of the spectrum can potentially break degeneracies in the fitting process. This result agrees with Pacifici et al. (2012), who found that SED fitting using mock optical spectroscopy significantly improves the parameter inference over fitting that uses only photometry. Figure 8: Similar to Figure 7, but now the fitting is done to the mock spectrophotometric SEDs of the TNG galaxies.
### 5.4 Recovering SFHs of the IllustrisTNG Galaxies In this section, we test the performance of the `piXedfit_fitting` module in terms of its ability to infer the SFHs of galaxies. We do this by comparing the inferred SFHs with the true SFHs of the TNG galaxies. Figure 9 shows examples of SFHs (black lines and gray shaded areas in the first row) inferred by the MCMC fitting with the `piXedfit_fitting` module to the spectrophotometric SEDs of three TNG galaxies. In each panel of the figure, the black line represents the median, while the gray shaded area represents the uncertainty. The true SFHs of the TNG galaxies are shown by the red lines. The SFH of a TNG galaxy is calculated with time steps of $100$ Myr. The second and third rows show the histories of the stellar mass growth ($M_{*}(t)$) and the sSFR ($\text{sSFR}(t)$), respectively, derived from the inferred SFHs. As in the first row, the red and black lines represent the true and inferred histories, respectively. The vertical red dashed lines in the $M_{*}(t)$ plots are the true look-back times at which the galaxies had $M_{*}$ of $30\%$ ($lbt_{30\%M}$), $50\%$ ($lbt_{50\%M}$), $70\%$ ($lbt_{70\%M}$), and $90\%$ ($lbt_{90\%M}$) of the current $M_{*}$, while the vertical black lines are the values inferred from the median $M_{*}(t)$. The figure shows that the inferred SFH, $M_{*}(t)$, and $\text{sSFR}(t)$ recover the overall shape (i.e., the rising and falling phases) of the true histories of these three TNG galaxies well. Figure 9: Comparison of the inferred SFH (first row), $M_{*}(t)$ (second row), and $\text{sSFR}(t)$ (third row) obtained from fitting the spectrophotometric SEDs of 3 TNG galaxies using the piXedfit_fitting module with the true histories. The vertical red dashed lines and black lines are the true and inferred look-back times at which the galaxies had $M_{*}$ of $30\%$, $50\%$, $70\%$, and $90\%$ of the current $M_{*}$.
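The look-back times $lbt_{30\%M}$, $lbt_{50\%M}$, $lbt_{70\%M}$, and $lbt_{90\%M}$ can be derived from an SFH as sketched below (mass loss through stellar evolution is ignored in this illustration, and the function name is hypothetical):

```python
import numpy as np

def lookback_mass_fractions(lbt, sfr, fractions=(0.3, 0.5, 0.7, 0.9)):
    """Look-back times at which the galaxy had assembled the given
    fractions of its total formed mass. lbt is a grid of look-back times
    in Gyr (increasing into the past); sfr is in M_sun/yr on that grid."""
    lbt = np.asarray(lbt, dtype=float)
    sfr = np.asarray(sfr, dtype=float)
    m = sfr * np.gradient(lbt) * 1e9        # mass formed per time step (Gyr -> yr)
    order = np.argsort(lbt)[::-1]           # from the earliest epoch to the present
    frac_assembled = np.cumsum(m[order]) / np.sum(m)
    lbt_past_to_now = lbt[order]
    # interpolate the (increasing) assembled fraction onto the target fractions
    return {f: float(np.interp(f, frac_assembled, lbt_past_to_now))
            for f in fractions}
```

For a constant SFR over 10 Gyr, for instance, half of the mass is in place at a look-back time of about 5 Gyr.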
To quantitatively assess the performance of the `piXedfit_fitting` module in inferring the SFH, we compare the inferred and true values of $lbt_{30\%M}$, $lbt_{50\%M}$, $lbt_{70\%M}$, and $lbt_{90\%M}$. Results from the fitting with the photometric SEDs are shown in Figure 10. This figure shows that, overall, the true look-back time episodes in $M_{*}(t)$ can be recovered well using the `piXedfit_fitting` module with the two fitting approaches. The earlier look-back time episodes seem to be more difficult to recover than the later ones, corroborated by the increasing $\rho$ from $lbt_{30\%M}$ to $lbt_{90\%M}$. The color-coding suggests that it is more difficult to infer the SFHs of galaxies with high sSFR than those of galaxies with low sSFR. This may be caused in part by the outshining effect of young bright stars, which are abundant in galaxies with high sSFR. By fitting synthetic FUV–FIR photometric SEDs of simulated galaxies using a modified version of MAGPHYS, which assumes a tau SFH model with random bursts superposed, Smith & Hayward (2015) tried to reconstruct the true SFHs of the galaxies. They found that the median-likelihood SFH (obtained by marginalizing over the model libraries, which is similar to what is done in our work) can recover well the smoothly declining SFH of isolated disk galaxies, while it fails to recover the bursty episodes in the SFHs of merging galaxies. This is likely caused by the assumed tau SFH model, which is not flexible enough to represent the general (i.e., realistic) SFHs of galaxies. In line with our results, Carnall et al. (2018) showed that the more flexible double power law SFH model can recover the overall shape of the true SFHs of simulated galaxies from MUFASA, despite the narrower wavelength coverage (optical–NIR) of the mock SEDs used.
While the `piXedfit_fitting` module can recover the overall trend of rising and falling episodes (i.e., the low-frequency variation) in the true SFHs of the TNG galaxies, it cannot recover the high-frequency variation in the true SFHs. Figure 10: Comparison of the inferred $lbt_{30\%M}$, $lbt_{50\%M}$, $lbt_{70\%M}$, and $lbt_{90\%M}$ from fitting the photometric SEDs of the TNG galaxies using the piXedfit_fitting module with the true values. Two fitting approaches are used: RDSPS with the Student's t likelihood and $\nu=2.0$, and MCMC. The scattered data are color-coded by the sSFR of the TNG galaxies. It is interesting to see how the inclusion of the spectrum in the SED affects the SFH inference. Figure 11 shows the comparison between the inferred $lbt_{30\%M}$, $lbt_{50\%M}$, $lbt_{70\%M}$, and $lbt_{90\%M}$ obtained from fitting the mock spectrophotometric SEDs. The format of this figure is the same as that of Figure 10. Overall, we see improvements made by the fitting with the spectrophotometric SEDs, such that the scatters in the one-to-one comparisons become smaller and the $\rho$ values become higher compared to those obtained from the fitting with photometric SEDs only; however, the offsets become slightly larger. We notice a flattening around the highest and lowest ends of the correlation in the case of the fitting with the RDSPS approach. This flattening can be caused by multimodal posterior distributions. Figure 11: Similar to Figure 10, but now the fitting is done to the spectrophotometric SEDs of the TNG galaxies. ## 6 Testing the Performance of piXedfit Using Spatially Resolved Spectrophotometric Data of Local Galaxies In this section, we analyze spatially resolved spectrophotometric data of 10 galaxies observed by the CALIFA survey and 10 galaxies observed by the MaNGA survey.
The goals of this analysis are (1) to demonstrate the ability of `piXedfit` to spatially match the FUV–$W2$ broad-band imaging data and the IFS data, (2) to demonstrate the SED fitting analysis of the spatially resolved spectrophotometric dataset with `piXedfit`, and (3) to test the reliability of the SED fitting module by comparing the SFR inferred from the fitting with the SFR derived from $H_{\alpha}$ emission. A more science-oriented discussion using larger samples is left for future work. ### 6.1 Sample Selection and Data Reduction First, we construct a catalog of galaxies that are observed by the medium imaging survey (MIS) of GALEX, SDSS, 2MASS, and WISE. We start from the catalog of unique GALEX GR5 sources (i.e., eliminating repeated measurements) that has been matched with the SDSS DR7 catalog by Bianchi et al. (2011) (available at http://dolomiti.pha.jhu.edu/uvsky), then cross-match it with the MPA-JHU (Max-Planck-Institut für Astrophysik-Johns Hopkins University) value added galaxy catalog (available at https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/; Kauffmann et al., 2003; Tremonti et al., 2004; Brinchmann et al., 2004) to select only galaxies and get their stellar masses. After that, we cross-match the catalog with the 2MASS extended source catalog (available at https://irsa.ipac.caltech.edu/Missions/2mass.html; Jarrett et al., 2000). Considering the all-sky coverage of the WISE survey and its depth compared to 2MASS, we do not cross-match the catalog further with the WISE catalog. Once we have the catalog, we cross-match it with the CALIFA DR3 (available at https://califaserv.caha.es/CALIFA_WEB/public_html/?q=content/califa-3rd-data-release) and MaNGA DRPALL (from SDSS DR15, available at https://www.sdss.org/dr15/manga/manga-data/catalogs/) catalogs separately. As a result, we get 41 galaxies matched with the CALIFA catalog and 395 galaxies matched with the MaNGA catalog.
Then we randomly select 10 galaxies with $\log(M_{*}[M_{\odot}])>10.0$ and $z<0.05$ from each of the two catalogs. We download the multiband images and the IFS data from the relevant survey websites, using the galaxy coordinates from the merged catalog. We require the galaxies to be covered by the GALEX MIS because the survey has a relatively long exposure time (typically $1500$ s), so that we have a sufficient S/N ratio in the UV. Among the imaging datasets used, the 2MASS imaging data are the shallowest. However, these data are still important to include because they complement the two WISE bands in placing strong constraints in the NIR regime. In total, the photometric data consist of 12 bands ranging from FUV to $W2$. We use the `piXedfit_images` module to spatially match (in resolution and sampling) the imaging data, then use the `piXedfit_spectrophotometric` module to spatially match the reduced imaging data cubes with the IFS data. This process produces spatially resolved spectrophotometric data cubes that have a spatial resolution similar to that of the $W2$ band and a spatial sampling similar to that of the FUV/NUV. Figure 12 shows the spatially resolved spectrophotometric data cubes of 18 galaxies from the sample; the data cubes of the other 2 galaxies are shown in Figure 3. The 9 galaxies from CALIFA are shown on the left side, while the 9 galaxies from MaNGA are shown on the right side. For each galaxy, the $gri$ composite image and example SEDs of four pixels are shown in the left and right panels, respectively. In the $gri$ composite image, the transparent hexagonal area shows the area covered by the IFU fiber bundle of the CALIFA and MaNGA surveys. In the data cubes, only pixels covered within the region of the IFU fiber bundle have spectra, while pixels outside this region only have photometric SEDs. The spectrophotometric data cubes are then passed to the pixel binning process. The pixel binning is done using the `piXedfit_bin` module.
See Section 3.3 for a description of the pixel binning scheme. In this analysis, the criteria for the binning are: an S/N ratio threshold of $10.0$ in the GALEX, SDSS, and WISE bands, an S/N ratio threshold of $1.0$ in the 2MASS bands, a $D_{\rm min,bin}$ of $4$ pixels, and a reduced $\chi^{2}$ limit ($\chi^{2}_{\rm max,bin}$) of $3.3$ for the SED shape similarity test. In short, the pixel binning is done by growing the size of the bins, including more pixels with similar SED shapes, until the S/N thresholds in all bands are achieved. Pixel binning maps of the 20 galaxies analyzed in this work are shown in the leftmost panels of Figures 14 and 15. Figure 12: Compilation of the spatially resolved spectrophotometric data cubes of 18 galaxies from the sample analyzed in this paper. The data cubes of the other 2 galaxies are shown in Figure 3. The left and right sides show 9 galaxies from CALIFA and 9 galaxies from MaNGA, respectively. For each galaxy, the $gri$ composite image (left panel) and the SEDs of four pixels (right panel) are shown. In the $gri$ composite images, the area covered by the IFU fiber bundle is shown by the transparent hexagonal area. ### 6.2 SED Fitting Analysis The reduced spectrophotometric data cubes (after pixel binning) are then passed to the SED fitting process. The SED fitting is done using the `piXedfit_fitting` module (see Section 4) with the MCMC approach. The SED fitting setup (IMF, isochrone, spectral library, SFH, and dust attenuation law) is the same as that for the fitting with the mock SEDs of the TNG galaxies (see Section 5.2), except for the priors. Flat priors are assumed for all parameters, within the following ranges: $\log(Z/Z_{\odot})=[-2.0,0.2]$, $\log(\tau)=[-1.0,1.14]$, $\log(\text{age})=[-1.0,1.14]$, $\hat{\tau}_{2}=[0.0,3.0]$, $\log(\alpha)=[-2.0,1.5]$, and $\log(\beta)=[-2.0,1.5]$. The $M_{*}$ prior is defined in the same way as in the fitting with the mock SEDs of the TNG galaxies.
The reason for using a different set of priors from those used for fitting the mock SEDs of the TNG galaxies is that here we analyze spatially resolved SEDs, which come from stellar populations with a wide range of ages ($\text{age}_{\text{sys}}$). In the MCMC fitting, we set the numbers of walkers and steps to 100 and 1000, respectively. For spatial bins that have a spectrophotometric SED (see Section 3.3 for the definition of spatial bins with spectrophotometric SEDs), two kinds of fitting are done: a fit to the photometric SED only and a fit to the spectrophotometric SED. By default, `piXedfit_fitting` fits both the photometric SED and the spectrum (i.e., the spectral continuum) simultaneously whenever it is fed spectrophotometric data. The aim of fitting only the photometric SED is to conduct tests, which include reconstructing the observed spectral continuum, $\text{D}_{\rm n}4000$, $H_{\alpha}$ emission, and $H_{\beta}$ emission using the model spectra obtained from fitting the photometry. These analyses are discussed in the next two sections. Figure 13: Examples of fitting results using MCMC for two spatial bins of NGC 309, one located around the center and the other in the spiral arms. The fitting for the centrally located bin is done to the spectrophotometric SED, while the fitting for the outer bin is done to the photometric SED. The symbols in the corner plot, SED plot, and SFH plot are the same as in Figure 6. Figure 13 shows fitting results using MCMC for two spatial bins in NGC 309, one located around the center and the other in the spiral arms. For the centrally located bin, the fitting is done to both the spectrum and the photometric SED simultaneously. The symbols in the corner plot, SED plot, and SFH plot are the same as in Figure 6.
An obvious difference in the SED shape between the spatial bin around the galaxy’s center (a red SED typical of an old stellar population) and that in the spiral arms (a blue SED typical of a young stellar population) is evident in the SED plots. In the SED plot, the black spectrum and the gray shaded area around it represent the median posterior model spectrum and its associated uncertainty. The small residuals in the SED plot indicate that the observed continuum and photometric SED can be recovered well. The inferred SFH of the spatial bin located in the spiral arms indicates a steeply increasing SFR toward the observational time, while the inferred SFH of the spatial bin located around the galaxy’s center indicates a gradual increase of the SFR from $\sim 10$ Gyr ago, reaching a peak around $\sim 5$ Gyr ago, after which the SFR gradually decreases toward the observational time. The SED fitting is done to the SEDs of the spatial bins in a galaxy. For parameters that scale linearly with the flux in a certain band, such as $M_{*}$ (which scales with the NIR bands) and SFR (which scales with the UV bands), the inferred value of a bin is then divided among the pixels belonging to the bin by assuming that $M_{*}$ is proportional to the $W2$ flux and SFR is proportional to the FUV flux. This way, we get higher spatial resolution in the $M_{*}$ and SFR maps. The other parameters are kept in the spatial bin space. Figures 14 and 15 show the maps of the stellar population properties of the 10 galaxies from CALIFA and the 10 galaxies from MaNGA, respectively. In both figures, the first 6 columns from the left show maps of the pixel binning, $Z$, mass-weighted age, $A_{V}$ dust attenuation, SFR surface density ($\Sigma_{\rm SFR}$), and $M_{*}$ surface density ($\Sigma_{*}$). The mass-weighted age is derived from the inferred SFH, while $A_{V}$ can be calculated as $A_{V}=1.086\times\hat{\tau}_{2}$.
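The flux-weighted division of a bin's value among its member pixels described above (e.g., $M_{*}$ weighted by the $W2$ flux, SFR by the FUV flux) can be sketched as follows; this is a simplified illustration, not `piXedfit`'s internal code:

```python
import numpy as np

def divide_bin_to_pixels(bin_value, pixel_fluxes):
    """Distribute a quantity inferred for a whole bin among its member
    pixels, in proportion to each pixel's flux in a tracer band
    (e.g., W2 for stellar mass, FUV for SFR)."""
    pixel_fluxes = np.asarray(pixel_fluxes, dtype=float)
    weights = pixel_fluxes / pixel_fluxes.sum()  # weights sum to 1
    return bin_value * weights

# Example: a bin with M* = 1e9 Msun and three pixels with W2 fluxes 1, 2, 5
pixel_mstar = divide_bin_to_pixels(1.0e9, [1.0, 2.0, 5.0])
# By construction the pixel values sum back to the bin value.
```

Because the weights sum to unity, the total within the bin is conserved while the spatial distribution follows the tracer band.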
For comparison, in the rightmost columns of both figures, we show $\Sigma_{*}$ from the PyCASSO database (available at http://pycasso.iaa.es/; de Amorim et al., 2017), which is derived from the CALIFA data alone (in the case of Figure 14), and $\Sigma_{*}$ from the Pipe3D value added catalog (available at https://www.sdss.org/dr14/manga/manga-data/manga-pipe3d-value-added-catalog/; Sánchez et al., 2018), which is based on the MaNGA data alone (in the case of Figure 15). The dimensions of the plotted maps in these rightmost columns correspond to the same physical sizes as the dimensions of the maps in the other 6 columns. Because our data cubes have a lower spatial sampling (i.e., a larger pixel size; $1.5^{\prime\prime}\text{ pixel}^{-1}$) than the CALIFA ($1.0^{\prime\prime}\text{ pixel}^{-1}$) and MaNGA ($0.5^{\prime\prime}\text{ pixel}^{-1}$) data cubes, our maps have a smaller total number of pixels. Figure 14: Maps of the stellar population properties of the 10 galaxies from the CALIFA survey analyzed in this work. The SED fitting uses the MCMC technique. The first 6 columns from the left show maps of the pixel binning, $Z$, mass-weighted age, $A_{V}$ dust attenuation, $\Sigma_{\rm SFR}$, and $\Sigma_{*}$. The rightmost column shows $\Sigma_{*}$ from the PyCASSO database (de Amorim et al., 2017), which is derived from the CALIFA data alone. The dimension of this map corresponds to the same physical size as the dimensions of the maps in the other 6 columns. Figure 15: Maps of the stellar population properties of the 10 galaxies from the MaNGA survey analyzed in this work. The SED fitting uses the MCMC technique. The first 6 columns from the left show maps of the pixel binning, $Z$, mass-weighted age, $A_{V}$ dust attenuation, $\Sigma_{\rm SFR}$, and $\Sigma_{*}$. The rightmost column shows $\Sigma_{*}$ from the Pipe3D value added catalog (Sánchez et al., 2018), which is based on the MaNGA data alone.
The dimension of this map corresponds to the same physical size as the dimensions of the maps in the other 6 columns. ### 6.3 Reconstructing Observed Spectral Continuum with Model Spectra Obtained from Fitting to Photometry In this section and the next section, for spatial bins that have spectrophotometric SEDs, we fit the photometric SEDs and then compare the median posterior model spectra with the observed spectra (see Section 5.2 for the description of how the median posterior model spectra are obtained). We make the comparison by calculating the residual in the spectral continuum (in this section) and by directly comparing the $\text{D}_{\rm n}4000$ strength and the $H_{\alpha}$ and $H_{\beta}$ luminosities (in the next section). This analysis can serve as an excellent test for `piXedfit` in terms of its SED modeling (based on FSPS) and its fitting performance. A similar exercise has been carried out by Leja et al. (2017) with `prospector`, but for galaxies as a whole. We collected the spatial bins that have spectrophotometric SEDs in the CALIFA (560 bins) and MaNGA (145 bins) samples. To get the continuum from the observed spectra and the median posterior spectra, we remove regions within $\pm 10\text{\AA}$ of the central wavelengths of all possible emission lines (based on the list of emission-line wavelengths from FSPS). Figure 16 shows the residuals (in dex) between the observed spectra and the median posterior model spectra obtained from fitting to the photometric SEDs of the spatial bins in the CALIFA (top panel) and MaNGA (bottom panel) samples. The merged residuals are brought to the rest-frame wavelength. The black lines show the median of the residuals, while the gray shaded areas show the 16th–84th percentiles. The vertical cyan bands in the two panels show the regions in the spectra that are removed. The residuals are relatively flat over the whole wavelength range.
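The emission-line masking used above to isolate the continuum — removing $\pm 10\,$Å around each line center — can be sketched as follows (a minimal illustration; the line list shown is a hypothetical two-line example, not the full FSPS list):

```python
import numpy as np

def continuum_mask(wave, line_centers, half_width=10.0):
    """Boolean mask that is True where `wave` (Angstrom) lies farther
    than `half_width` from every emission-line center."""
    wave = np.asarray(wave, dtype=float)
    mask = np.ones_like(wave, dtype=bool)
    for center in line_centers:
        mask &= np.abs(wave - center) > half_width
    return mask

# Example: mask H-beta and [OIII] 5007 in a short wavelength grid
wave = np.arange(4800.0, 5100.0, 1.0)
mask = continuum_mask(wave, line_centers=[4861.3, 5006.8])
continuum_wave = wave[mask]  # wavelengths with line regions removed
```

Applying the same mask to both the observed and the model spectra makes the continuum residuals insensitive to emission lines absent from the photometry-only models.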
The means and standard deviations of the residuals are $0.004$ and $0.037$ (for CALIFA) and $-0.005$ and $0.031$ (for MaNGA). Larger residuals appear around the $4000\text{\AA}$ break. This could be caused by the lack of photometric sampling around that region, which is currently covered only by the $u$ and $g$ bands. Figure 16: Merged residuals between the median posterior model spectra obtained from fitting to the photometric SEDs and the observed spectra for the CALIFA (top panel) and MaNGA (bottom panel) samples. The black lines and gray shaded areas are the medians and the 16th–84th percentiles of the residuals. The vertical cyan bands in the two panels show the regions in the spectra that are removed. ### 6.4 Predicting $H_{\alpha}$, $H_{\beta}$, and $\text{D}_{\rm n}4000$ with Model Spectra Obtained from Fitting to Photometry In this section, we try to predict the $H_{\alpha}$ luminosity, $H_{\beta}$ luminosity, and $\text{D}_{\rm n}4000$ of the observed spectra through fitting with `piXedfit_fitting` using only the photometric SED. To measure the luminosities of the $H_{\alpha}$ and $H_{\beta}$ emission lines from the observed spectra, we first subtract the continuum of the median posterior model spectra (generated from the model posteriors) from the observed spectra. Then we fit the $H_{\alpha}$ and $H_{\beta}$ emission lines with Gaussian functions using the `fit_lines` function in the `specutils` Python package (Earl et al., 2020; https://specutils.readthedocs.io/en/stable/). We visually inspect all the spatial bins to make sure the fitting works well. The uncertainties of the $H_{\alpha}$ and $H_{\beta}$ luminosities are estimated based on the average $\text{S}/\text{N}$ ratio of the observed spectral fluxes within $\pm 10\text{\AA}$ of the mean wavelengths. The $H_{\alpha}$ and $H_{\beta}$ luminosities of the median posterior model spectra are derived from the posterior distributions obtained from the MCMC fitting.
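The Gaussian emission-line measurement on a continuum-subtracted spectrum can be illustrated with a small `scipy` stand-in (the paper itself uses `specutils`' `fit_lines`; the simulated $H_{\alpha}$ profile and all numbers below are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mean, sigma):
    return amp * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Simulated continuum-subtracted H-alpha profile (amp=5, sigma=3 A)
rng = np.random.default_rng(0)
wave = np.linspace(6543.0, 6583.0, 200)
flux = gaussian(wave, 5.0, 6563.0, 3.0) + rng.normal(0.0, 0.05, wave.size)

# Fit a single Gaussian; p0 is the initial guess
popt, pcov = curve_fit(gaussian, wave, flux, p0=[1.0, 6563.0, 5.0])
amp, mean, sigma = popt
# Integrated Gaussian flux (abs guards against a sign-flipped sigma)
line_flux = amp * abs(sigma) * np.sqrt(2.0 * np.pi)
```

Converting this integrated line flux to a luminosity then only requires the luminosity distance of the galaxy.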
The median, 16th, and 84th percentiles are calculated from the posterior distributions. The median is then used as the mean luminosity, while the 16th–84th percentiles are used as the uncertainty. The $\text{D}_{\rm n}4000$ of both the observed spectra and the median posterior model spectra is measured following the Balogh et al. (1999) definition, which is the ratio of the average flux density $f_{\nu}$ in the narrow wavelength band $4000$–$4100\text{\AA}$ to that in $3850$–$3950\text{\AA}$. To estimate the uncertainty of the $\text{D}_{\rm n}4000$ of the observed spectra, we use the bootstrap method: the spectral fluxes within the two bands are randomly perturbed following Gaussian distributions with the spectral fluxes as the means and the flux uncertainties as the standard deviations. This is performed 100 times, and for each iteration, $\text{D}_{\rm n}4000$ is measured. Then from the distribution, a standard deviation is calculated and used as the $\text{D}_{\rm n}4000$ uncertainty. In the first row of Figure 17, we show the comparison between the observed $H_{\alpha}$ luminosity (left), $H_{\beta}$ luminosity (middle), and $\text{D}_{\rm n}4000$ (right) and the predictions by `piXedfit_fitting` through fitting with photometric SEDs. In each panel, the histogram in the bottom right corner shows the distribution of the logarithmic ratio between the model predictions and the observed values. The mean (or offset in dex, $\mu$), scatter ($\sigma$), and Spearman rank-order coefficient ($\rho$) of the distribution are shown in the top left corner. There is a good agreement between the models and observations, especially for $H_{\alpha}$ and $H_{\beta}$, corroborated by the small offsets ($0.105$ and $0.131$ dex for $H_{\alpha}$ and $H_{\beta}$, respectively) and scatters ($0.333$ and $0.342$ dex for $H_{\alpha}$ and $H_{\beta}$, respectively).
The $\rho$ values for $H_{\alpha}$ and $H_{\beta}$ are relatively high ($0.678$ and $0.606$, respectively), confirming the good agreement between the models and observations. Despite the small offset ($-0.024$ dex) and scatter ($0.036$ dex), the $\rho$ value for $\text{D}_{\rm n}4000$ is small ($0.304$), which is caused by the deviation (from the one-to-one relation) at intermediate $\text{D}_{\rm n}4000$ and the larger scatter at the low end of $\text{D}_{\rm n}4000$. This trend in the $\text{D}_{\rm n}4000$ comparison is understandable given the non-flat residuals around the $4000\text{\AA}$ break between the predicted and observed spectral continua, as shown in Figure 16. Figure 17: Comparison between the observed $H_{\alpha}$ luminosities, $H_{\beta}$ luminosities, and $\text{D}_{\rm n}4000$, and the model predictions from fitting to photometric SEDs (first row) and spectrophotometric SEDs (second row). In each panel, the histogram in the bottom right corner shows the distribution of the logarithmic ratio between the models and the observed values. The offset ($\mu$), scatter ($\sigma$), and Spearman rank-order correlation coefficient ($\rho$) are shown in the top left corner. In order to see whether including the observed spectral continuum in the fitting can improve the model predictions, we fit the spectrophotometric SEDs of the spatial bins and derive the model predictions for $H_{\alpha}$, $H_{\beta}$, and $\text{D}_{\rm n}4000$ with the same procedure as described previously. The comparisons between the model predictions and the observed values are shown in the second row of Figure 17. A better agreement between the models and the observations is obtained with this fitting compared to the previous one that uses only photometric SEDs. This is indicated by smaller offsets, smaller scatters, and higher $\rho$ values in all three comparisons.
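The $\text{D}_{\rm n}4000$ measurement (Balogh et al. 1999 definition) and the bootstrap uncertainty estimate used in the comparisons above can be sketched as follows (a simplified illustration; it assumes the fluxes are already in $f_{\nu}$ units on a rest-frame wavelength grid):

```python
import numpy as np

def dn4000(wave, flux_nu):
    """Ratio of mean f_nu in 4000-4100 A to that in 3850-3950 A
    (Balogh et al. 1999 definition)."""
    red = (wave >= 4000.0) & (wave <= 4100.0)
    blue = (wave >= 3850.0) & (wave <= 3950.0)
    return flux_nu[red].mean() / flux_nu[blue].mean()

def dn4000_bootstrap_err(wave, flux_nu, flux_err, n_iter=100, seed=0):
    """Bootstrap uncertainty: perturb the fluxes with Gaussian noise
    (flux uncertainties as standard deviations) and re-measure Dn4000."""
    rng = np.random.default_rng(seed)
    values = [dn4000(wave, rng.normal(flux_nu, flux_err))
              for _ in range(n_iter)]
    return np.std(values)

# Example: a flat spectrum has Dn4000 = 1 by construction
wave = np.arange(3800.0, 4200.0, 1.0)
flux = np.ones_like(wave)
d4000 = dn4000(wave, flux)
```

Averaging $f_{\nu}$ within each band before taking the ratio makes the index robust against individual noisy pixels, which is also why the bootstrap scatter is small for high-S/N spectra.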
### 6.5 Comparison of SFR from piXedfit_fitting with the SFR Derived from $H_{\alpha}$ The Balmer emission lines, especially the $H_{\alpha}$ line (which is the strongest), are a good indicator of the instantaneous SFR. In addition, the Balmer decrement (i.e., the ratio of the $H_{\alpha}$ and $H_{\beta}$ emission-line fluxes) provides a good indicator of the dust attenuation in stellar birth clouds. The $H_{\alpha}$-based SFR estimate has been widely used in the analysis of the IFS data in the CALIFA (e.g., Sánchez et al., 2016b) and MaNGA (e.g., Sánchez et al., 2018; Belfiore et al., 2019) surveys. In this section, for spatial bins with spectrophotometric SEDs and $H_{\alpha}$ and $H_{\beta}$ $\text{S}/\text{N}>2.0$ (resulting in 527 bins), we compare the SFR derived with the `piXedfit_fitting` module and the SFR derived from the observed $H_{\alpha}$ emission. Here, we use the $H_{\alpha}$ and $H_{\beta}$ measurements from the analysis in the previous section (Section 6.4). We do not use the publicly available value added data cubes from the CALIFA and MaNGA surveys because of the differences in spatial resolution and spatial sampling between their data cubes and our reduced data cubes. Spatially matching the SFR map or the $H_{\alpha}$ and $H_{\beta}$ maps from their data cubes to our data cubes would introduce systematics that could dominate the uncertainties in the comparison analysis. Moreover, this way the comparison is more self-consistent. In deriving the SFR from the $H_{\alpha}$ emission, we first correct the $H_{\alpha}$ luminosity for the dust attenuation associated with the birth clouds.
The Balmer color excess is related to the ratio of the observed Balmer decrement $(L_{H_{\alpha}}/L_{H_{\beta}})_{\rm obs}$ to its intrinsic value $(L_{H_{\alpha}}/L_{H_{\beta}})_{\rm int}$ through the following equation: $E(H_{\beta}-H_{\alpha})=2.5\log\left(\frac{(L_{H_{\alpha}}/L_{H_{\beta}})_{\rm obs}}{(L_{H_{\alpha}}/L_{H_{\beta}})_{\rm int}}\right).$ (6) Here, $L_{H_{\alpha}}$ and $L_{H_{\beta}}$ are the luminosities of $H_{\alpha}$ and $H_{\beta}$, respectively. The intrinsic Balmer decrement has a value of $2.86$ for case B recombination (Osterbrock, 1989). Once we have the Balmer color excess, the attenuation toward $H_{\alpha}$ can be calculated as: $A_{H_{\alpha}}=\frac{E(H_{\beta}-H_{\alpha})}{k(\lambda_{H_{\beta}})-k(\lambda_{H_{\alpha}})}\times k(\lambda_{H_{\alpha}}).$ (7) Here, $k(\lambda_{H_{\alpha}})$ and $k(\lambda_{H_{\beta}})$ are the attenuation values at the wavelengths of $H_{\alpha}$ and $H_{\beta}$, respectively. To get these values, we assume the Calzetti et al. (2000) attenuation curve with $R_{V}=3.1$. It is important to note that Calzetti et al. (2000) used two different attenuation curves for the nebular emission and the stellar continuum. The two attenuation curves have similar shapes but different normalizations: $R_{V}=3.1$ for the nebular emission and $R_{V}=4.05$ for the continuum.
Once we have $A_{H_{\alpha}}$, the dust-corrected $H_{\alpha}$ luminosity can be calculated via: $L_{H_{\alpha},\text{corr}}=L_{H_{\alpha},\text{obs}}\times 10^{0.4A_{H_{\alpha}}}.$ (8) To derive the SFR from the dust-corrected $H_{\alpha}$ luminosity, we use the Kennicutt (1998) prescription converted to the Chabrier (2003) IMF: $\text{SFR}[M_{\odot}\text{yr}^{-1}]=4.65\times 10^{-42}L_{H_{\alpha},\text{corr}}[\text{erg}\text{ s}^{-1}].$ (9) A division by $1.7$ has been applied to the original Kennicutt (1998) prescription (which assumed a Salpeter 1955 IMF) to account for the additional low-mass stars in the Salpeter IMF (see e.g., Speagle et al., 2014; Nelson et al., 2016; Leja et al., 2017). The uncertainty of the SFR from $H_{\alpha}$ is estimated using the bootstrap method. Figure 18, top panel, shows the comparison between the SFR obtained from fitting to the photometric SEDs of the spatial bins and the SFR derived from the $H_{\alpha}$ emission. There is a good agreement between the two SFR measurements, with a small offset of $0.127$ dex and a scatter of $0.389$ dex. The Spearman $\rho$ value is high ($0.694$), confirming the good agreement between the two SFR measurements. To see whether including the spectral continuum in the fitting improves the result, we fit the spectrophotometric SEDs of the spatial bins. The result is shown in the bottom panel of Figure 18. Now, the offset is significantly reduced, to $-0.005$ dex. However, the scatter becomes slightly larger ($0.467$ dex) and the Spearman $\rho$ value is reduced to $0.480$. Figure 18: Comparison between the SFR derived from fitting with the MCMC method and the SFR derived from the $H_{\alpha}$ emission. In the top panel, the fitting is done to the photometric SEDs of the spatial bins, while in the bottom panel, the fitting is done to the spectrophotometric SEDs.
In each panel, the histogram in the bottom right corner shows the distribution of the logarithmic ratio between the two SFR estimates. The offset ($\mu$), scatter ($\sigma$), and Spearman $\rho$ values are shown in the top left corner. The above results are obtained using the MCMC fitting method. Next, we explore the performance of fitting to the photometric SEDs using the RDSPS method with the Student’s t likelihood function with $\nu=2.0$. Figure 19 shows the comparison between the SFR from the fitting and the SFR from the $H_{\alpha}$ emission. There is a good agreement between the two SFR estimates, as indicated by the small offset ($0.093$ dex), small scatter ($0.363$ dex), and high Spearman $\rho$ value ($0.710$). This result suggests that the RDSPS method with the Student’s t likelihood function with $\nu=2.0$ can give a good estimate of the SFR, as good as that of the MCMC method. Figure 19: Comparison between the SFR derived from the $H_{\alpha}$ emission and the SFR from fitting using the RDSPS method with the Student’s t likelihood function with $\nu=2.0$. The histogram in the bottom right corner shows the distribution of the logarithmic ratio between the two SFR estimates. The offset ($\mu$), scatter ($\sigma$), and Spearman $\rho$ values are shown in the top left corner. Overall, the results of this analysis suggest that SED fitting to broad-band photometry covering the FUV–NIR is capable of inferring the instantaneous SFR of a galaxy. More interestingly, this analysis shows that this holds on spatially resolved ($\sim 1$ kpc) scales within a galaxy. While fitting with MCMC is computationally expensive, the RDSPS approach (which is $\sim 40$ times faster than MCMC, using the same number of cores) provides a great opportunity for applications to spatially resolved SED fitting analyses. In a future work, we will apply `piXedfit` to a large sample of galaxies.
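Equations (6)–(9) chain into one short routine. The sketch below hardcodes approximate Calzetti et al. (2000) $k(\lambda)$ values at $H_{\alpha}$ and $H_{\beta}$ ($2.53$ and $3.61$); these specific numbers are assumptions for illustration, not necessarily the exact values used in the paper:

```python
import numpy as np

# Approximate Calzetti et al. (2000) attenuation-curve values at the
# wavelengths of H-alpha and H-beta (assumed numbers for illustration).
K_HALPHA, K_HBETA = 2.53, 3.61

def sfr_from_halpha(L_ha_obs, L_hb_obs, balmer_int=2.86):
    """SFR [Msun/yr] from observed H-alpha/H-beta luminosities [erg/s],
    following Eqs. (6)-(9): Balmer color excess -> A_Halpha ->
    dust-corrected L_Halpha -> Kennicutt (1998) / Chabrier (2003) SFR."""
    # Eq. (6): Balmer color excess from the observed vs intrinsic decrement
    ebha = 2.5 * np.log10((L_ha_obs / L_hb_obs) / balmer_int)
    # Eq. (7): attenuation toward H-alpha
    a_ha = ebha / (K_HBETA - K_HALPHA) * K_HALPHA
    # Eq. (8): dust-corrected H-alpha luminosity
    L_ha_corr = L_ha_obs * 10.0 ** (0.4 * a_ha)
    # Eq. (9): Kennicutt (1998) calibration converted to a Chabrier IMF
    return 4.65e-42 * L_ha_corr

# Example: a dust-free bin (observed decrement equals the intrinsic 2.86)
sfr = sfr_from_halpha(1.0e41, 1.0e41 / 2.86)
```

For a dust-free bin the color excess vanishes and the SFR reduces to $4.65\times10^{-42}\,L_{H_\alpha,\rm obs}$; a decrement above 2.86 boosts the corrected luminosity and hence the SFR.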
## 7 Summary In this paper, we present `piXedfit`, a Python package that provides tools for analyzing the spatially resolved properties (including stellar and dust components) of galaxies from broad-band imaging data or a combination of broad-band imaging and IFS data. `piXedfit` is designed to be modular and consists of six main modules: (1) `piXedfit_images` for image processing, (2) `piXedfit_spectrophotometric` for spatial matching between the imaging data and IFS data, (3) `piXedfit_bin` for pixel binning to maximize the $\text{S}/\text{N}$ ratio of the spatially resolved SEDs, (4) `piXedfit_model` for generating model SEDs, (5) `piXedfit_fitting` for performing SED fitting, and (6) `piXedfit_analysis` for visualizing the fitting results. We test the capabilities of `piXedfit` with two analyses in this paper: testing the SED fitting performance using mock FUV–NIR SEDs of IllustrisTNG galaxies and testing the `piXedfit` modules using spatially resolved spectrophotometric data of local galaxies. Overall, the testing results are summarized as follows: 1. 1. We test the performance of the `piXedfit_fitting` module by fitting mock FUV–NIR SEDs (photometric as well as spectrophotometric SEDs) of IllustrisTNG galaxies and then comparing the inferred parameters from the fitting with the true parameters. We implement the various fitting approaches (MCMC, and RDSPS with Gaussian and Student’s t likelihoods with various values of $\nu$) provided within `piXedfit_fitting` to compare their performances. With a photometric SED that covers the FUV–NIR, `piXedfit_fitting` can well recover the mass-weighted ages, dust optical depth, $M_{*}$, and SFR of the IllustrisTNG galaxies, for all of the fitting approaches (see Section 5 and Appendix B). The fitting to mock spectrophotometric SEDs improves the parameter inference, especially for the metallicity. 2. 2.
Using the mock SEDs and SFHs of the IllustrisTNG galaxies, we test the performance of `piXedfit_fitting` in inferring the SFH of a galaxy. We quantitatively assess the performance by comparing the true and inferred values of the lookback times when the galaxies’ $M_{*}$ were only $30\%$ ($lbt_{30\%M}$), $50\%$ ($lbt_{50\%M}$), $70\%$ ($lbt_{70\%M}$), and $90\%$ ($lbt_{90\%M}$) of the current values. With FUV–NIR photometric SEDs, `piXedfit_fitting` can well recover $lbt_{30\%M}$, $lbt_{50\%M}$, $lbt_{70\%M}$, and $lbt_{90\%M}$ using all of the fitting approaches. The fitting to mock spectrophotometric SEDs improves the SFH inference. 3. 3. We demonstrate the performances of the `piXedfit` modules using spatially resolved spectrophotometric data of 20 galaxies observed by the CALIFA and MaNGA surveys. `piXedfit_images` and `piXedfit_spectrophotometric` are capable of spatially matching (in resolution and sampling) the 12-band imaging data from GALEX, SDSS, 2MASS, and WISE with the IFS data from CALIFA and MaNGA. `piXedfit_bin` is capable of binning neighboring pixels with similar SED shapes, reaching the target $\text{S}/\text{N}$ ratios in all bands. 4. 4. By fitting to the photometric SED only, `piXedfit` can predict the observed spectral continuum, $\text{D}_{\rm n}4000$, $H_{\alpha}$ emission, and $H_{\beta}$ emission. The residuals between the spectral continuum of the median posterior model and that of the observed spectra are flat over a wide range of wavelengths, in both the CALIFA and MaNGA samples. The predicted $H_{\alpha}$, $H_{\beta}$, and $\text{D}_{\rm n}4000$ are consistent with the observed values, with offsets of $0.105$, $0.131$, and $-0.024$ dex, respectively. 5. 5. Using the $H_{\alpha}$ and $H_{\beta}$ luminosities of the observed spectra, we derive the SFR. A dust attenuation correction based on the Balmer decrement is applied. Then we compare that SFR with the SFR derived from the SED fitting.
The SFR derived from the SED fitting with `piXedfit_fitting` is consistent with the SFR derived from the $H_{\alpha}$ emission. 6. 6. While most of the fitting approaches in `piXedfit_fitting` give good inferences of the stellar population properties and SFH, there are indications that the RDSPS approach with the Student’s t likelihood with $\nu\sim 2$, with proper priors, can give robust (and stable) parameter inferences, as good as the MCMC method. With its relatively fast fitting performance ($\sim 40$ times faster than MCMC), this fitting approach can be a good option for performing spatially resolved SED fitting for a large sample of galaxies. `piXedfit` is a powerful tool for analyzing the spatially resolved properties of galaxies across a wide range of redshifts in the future era of big photometric data from deep, high spatial resolution multiband imaging surveys. `piXedfit` will be made publicly available on GitHub (piXedfit codebase: https://github.com/aabdurrouf/piXedfit), archived in Zenodo (Abdurro’uf et al., 2021), and documented at https://pixedfit.readthedocs.io/en/latest/index.html. We thank an anonymous referee for providing useful comments that helped to improve this paper. We are grateful for support from the Ministry of Science & Technology of Taiwan under the grants MOST 108-2112-M-001-011 and MOST 109-2112-M-001-005, and a Career Development Award from Academia Sinica (AS-CDA-106-M01). P.F.W. acknowledges the support of the fellowship from the East Asian Core Observatories Association. The computations in this research were run on the TIARA and SuMIRe clusters at ASIAA. This research made use of Astropy (http://www.astropy.org), a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018). This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources (Bradley et al., 2019).
This work is based on observations made with the NASA Galaxy Evolution Explorer (GALEX), which is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This study uses data provided by the Calar Alto Legacy Integral Field Area (CALIFA) survey (http://califa.caha.es/). Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck-Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). ## Appendix A Comparison of the Empirical PSF of the SDSS and 2MASS with the Analytical PSF from Aniano et al. (2011) We construct empirical PSFs of the SDSS and 2MASS using the PSF modeling functions provided by `Photutils`. The `Photutils` package provides tools for building an effective PSF, which can represent the net PSF of a given camera. The effective PSF is built based on the prescription of Anderson & King (2000). First, several images of random fields are downloaded from the SDSS and 2MASS websites. Then background subtraction is done, especially for the 2MASS images (the SDSS image product is background free). After that, bright stars are collected using the `find_peaks` function. The `extract_stars` function is used to extract cutouts of the stars. Then visual inspection is done to exclude “bad stars”, such as multiple stars in one cutout image and saturated stars. Finally, the effective PSFs are constructed using the `EPSFBuilder` function.
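The core idea behind the effective PSF — stacking flux-normalized star cutouts — can be illustrated without `Photutils` (a deliberately naive sketch with synthetic Gaussian “stars”; the real `EPSFBuilder` additionally recenters, oversamples, and iterates following Anderson & King 2000):

```python
import numpy as np

def naive_effective_psf(cutouts):
    """Average of flux-normalized star cutouts: a crude stand-in for
    Photutils' EPSFBuilder, which also recenters the stars, oversamples
    the grid, and iterates the solution."""
    normalized = [c / c.sum() for c in cutouts]  # unit total flux each
    return np.mean(normalized, axis=0)           # stays unit-sum

# Example: synthetic 15x15 Gaussian "stars" of different brightness
y, x = np.mgrid[-7:8, -7:8]
star = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))
cutouts = [amp * star for amp in (0.5, 1.0, 2.0)]
epsf = naive_effective_psf(cutouts)
```

Normalizing each cutout before averaging means bright and faint stars contribute equally, so the stack traces the PSF shape rather than the brightness distribution of the selected stars.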
In building the effective PSFs, the numbers of stars selected for $u$, $g$, $r$, $i$, $z$, $J$, $H$, and $K_{s}$ are 103, 123, 143, 170, 268, 102, 118, and 94, respectively. The constructed effective PSFs of the SDSS and 2MASS are shown in Figure 20. We compare the empirical PSFs with the analytical PSFs from Aniano et al. (2011). We find that the PSFs of $u$, $g$, and $r$ are best represented by the double Gaussian with a FWHM of $1.5^{\prime\prime}$; the PSFs of $i$ and $z$ are best represented by the double Gaussian with a FWHM of $1.0^{\prime\prime}$; and the PSFs of 2MASS are best represented by the Gaussian with a FWHM of $3.5^{\prime\prime}$. This comparison is shown in the third and fourth rows of the figure. We also construct empirical PSFs of the FUV and NUV bands of GALEX using the same procedure. The constructed empirical PSFs of FUV and NUV are consistent with the PSFs from Aniano et al. (2011). The empirical PSFs from this analysis can be found at https://github.com/aabdurrouf/empPSFs_GALEXSDSS2MASS. Figure 20: Comparison of the empirical PSFs of the SDSS and 2MASS with the analytical PSFs from Aniano et al. (2011). In the first and second rows, the empirical PSFs of the SDSS and 2MASS are shown. The comparison of those empirical PSFs with the analytical PSFs is shown in the third and fourth rows. ## Appendix B Comparison of the performances of various fitting approaches provided in piXedfit_fitting module In Section 5, we fit the mock FUV–NIR photometric SEDs of the TNG galaxies using the `piXedfit_fitting` module with 8 different fitting approaches: the 2 posterior sampling methods (MCMC and RDSPS), the 2 likelihood functions (Gaussian and Student’s t) in the RDSPS method, and 6 values of $\nu$ for the Student’s t likelihood function: $0.3$, $1.0$, $2.0$, $3.0$, $5.0$, and $10.0$. The purpose of this fitting experiment is to compare the performances of the various fitting approaches provided within the `piXedfit_fitting` module.
In this analysis, we compare the performances of those fitting approaches in inferring 5 key parameters: $Z$, dust optical depth ($\hat{\tau}_{2}$), mass-weighted age, $M_{*}$, and SFR. Similar to what we do in Sections 5.3 and 5.4, for each parameter obtained with each fitting approach, we calculate the offset ($\mu$), scatter ($\sigma$), and Spearman $\rho$ coefficient of the 1D distribution of the logarithmic ratios between the inferred values from the fitting and the true values. Then, for each parameter, we compare the goodness of the recovery among the fitting approaches by directly comparing the $\mu$, $\sigma$, and $\rho$ values. Figure 21, left panel, shows a compilation of the values of $\mu$ (first row), $\sigma$ (second row), and Spearman $\rho$ (third row). Different fitting parameters are shown with different symbols. The horizontal axis shows the various fitting approaches. From these plots, overall, we can see that all the fitting approaches perform well, as indicated by the low absolute $\mu$ values ($\lesssim 0.13$ dex) for all the parameters, the low scatter ($\lesssim 0.3$ dex) for all the parameters except the SFR derived with the RDSPS that uses the Student’s t $\nu=0.3$ likelihood, and the high $\rho$ ($\gtrsim 0.6$) for all the parameters except $Z$. The average absolute $\mu$, $\sigma$, and $\rho$ for [gauss, stdt_dof03, stdt_dof1, stdt_dof2, stdt_dof3, stdt_dof5, stdt_dof10, mcmc] are [$0.0530$, $0.0489$, $0.0298$, $0.0350$, $0.0396$, $0.0441$, $0.0483$, $0.0347$], [$0.1765$, $0.2344$, $0.1841$, $0.1725$, $0.1714$, $0.1720$, $0.1735$, $0.1755$], and [$0.7460$, $0.7685$, $0.7714$, $0.7677$, $0.7629$, $0.7569$, $0.7510$, $0.7516$], respectively. For ease of comparison among the fitting approaches, in the right panel of Figure 21, the $\mu$, $\sigma$, and $\rho$ values associated with each parameter are sorted and ranked from smallest (ranked as $0$) to largest (ranked as $7$). For $\mu$, the absolute value is considered.
Different parameters are shown with circles of different colors and sizes. This plot indicates that the RDSPS method with the Student’s t likelihood function with $\nu\sim 1.0-3.0$ can possibly outperform the other fitting approaches. However, more fitting experiments, such as those using more realistic panchromatic mock SEDs or comparisons of the inferred parameters with other independent empirical indicators, are needed to verify this finding. A better inference given by the Student’s t likelihood over the Gaussian one is possibly caused by the heavier tails of the Student’s t function, which can better accommodate all the model SEDs (despite having large $\chi^{2}$) in the Bayesian inference process compared to the Gaussian function. Figure 21: Compilation of the $\mu$, $\sigma$, and $\rho$ values for the 1D distributions of the logarithmic ratios between the inferred values from fitting to the mock photometric SEDs of the TNG galaxies and the true values. On the left side, the actual values of $\mu$, $\sigma$, and $\rho$ associated with the parameters and the fitting approaches are shown. Different symbols represent different parameters. On the right side, for each parameter, the $\mu$, $\sigma$, and $\rho$ values associated with the various fitting approaches are sorted and ranked from smallest (ranked as 0) to largest (ranked as 7). Different parameters are shown with circles of different colors and sizes. ## References * Abazajian et al. (2004) Abazajian, K., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 2004, AJ, 128, 502, doi: 10.1086/421365 * Abdurro’uf & Akiyama (2017) Abdurro’uf, & Akiyama, M. 2017, MNRAS, 469, 2806, doi: 10.1093/mnras/stx936 * Abdurro’uf & Akiyama (2018) —. 2018, MNRAS, 479, 5083, doi: 10.1093/mnras/sty1771 * Abdurro’uf et al. (2021) Abdurro’uf, Lin, Y.-T., Akiyama, M., & Wu, P.-F. 2021, aabdurrouf/piXedfit v0.1-alpha, v0.1-alpha, Zenodo, doi: 10.5281/zenodo.4427650 * Abraham et al. (1999) Abraham, R. G., Ellis, R. S., Fabian, A.
C., Tanvir, N. R., & Glazebrook, K. 1999, MNRAS, 303, 641, doi: 10.1046/j.1365-8711.1999.02059.x * Abramson et al. (2020) Abramson, L. E., Brammer, G. B., Schmidt, K. B., et al. 2020, MNRAS, 493, 952, doi: 10.1093/mnras/staa276 * Acquaviva et al. (2011) Acquaviva, V., Gawiser, E., & Guaita, L. 2011, ApJ, 737, 47, doi: 10.1088/0004-637X/737/2/47 * Anderson & King (2000) Anderson, J., & King, I. R. 2000, PASP, 112, 1360, doi: 10.1086/316632 * Aniano et al. (2011) Aniano, G., Draine, B. T., Gordon, K. D., & Sandstrom, K. 2011, PASP, 123, 1218, doi: 10.1086/662219 * Arnouts et al. (1999) Arnouts, S., Cristiani, S., Moscardini, L., et al. 1999, MNRAS, 310, 540, doi: 10.1046/j.1365-8711.1999.02978.x * Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068 * Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f * Balogh et al. (1999) Balogh, M. L., Morris, S. L., Yee, H. K. C., Carlberg, R. G., & Ellingson, E. 1999, ApJ, 527, 54, doi: 10.1086/308056 * Behroozi et al. (2013) Behroozi, P. S., Wechsler, R. H., & Conroy, C. 2013, ApJ, 770, 57, doi: 10.1088/0004-637X/770/1/57 * Belfiore et al. (2019) Belfiore, F., Westfall, K. B., Schaefer, A., et al. 2019, AJ, 158, 160, doi: 10.3847/1538-3881/ab3e4e * Bertin & Arnouts (1996) Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393, doi: 10.1051/aas:1996164 * Bianchi et al. (2011) Bianchi, L., Efremova, B., Herald, J., et al. 2011, MNRAS, 411, 2770, doi: 10.1111/j.1365-2966.2010.17890.x * Blanton et al. (2017) Blanton, M. R., Bershady, M. A., Abolfathi, B., et al. 2017, AJ, 154, 28, doi: 10.3847/1538-3881/aa7567 * Boquien et al. (2019) Boquien, M., Burgarella, D., Roehlly, Y., et al. 2019, A&A, 622, A103, doi: 10.1051/0004-6361/201834156 * Bothun (1986) Bothun, G. D. 1986, AJ, 91, 507, doi: 10.1086/114029 * Bradley et al. 
(2019) Bradley, L., Sipőcz, B., Robitaille, T., et al. 2019, astropy/photutils: v0.6, doi: 10.5281/zenodo.2533376 * Brinchmann et al. (2004) Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004, MNRAS, 351, 1151, doi: 10.1111/j.1365-2966.2004.07881.x * Bruzual & Charlot (2003) Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000, doi: 10.1046/j.1365-8711.2003.06897.x * Bruzual A. & Charlot (1993) Bruzual A., G., & Charlot, S. 1993, ApJ, 405, 538, doi: 10.1086/172385 * Bundy et al. (2015) Bundy, K., Bershady, M. A., Law, D. R., et al. 2015, ApJ, 798, 7, doi: 10.1088/0004-637X/798/1/7 * Burgarella et al. (2005) Burgarella, D., Buat, V., & Iglesias-Páramo, J. 2005, MNRAS, 360, 1413, doi: 10.1111/j.1365-2966.2005.09131.x * Buzzoni (1989) Buzzoni, A. 1989, ApJS, 71, 817, doi: 10.1086/191399 * Byler et al. (2017) Byler, N., Dalcanton, J. J., Conroy, C., & Johnson, B. D. 2017, ApJ, 840, 44, doi: 10.3847/1538-4357/aa6c66 * Calistro Rivera et al. (2016) Calistro Rivera, G., Lusso, E., Hennawi, J. F., & Hogg, D. W. 2016, ApJ, 833, 98, doi: 10.3847/1538-4357/833/1/98 * Calzetti et al. (2000) Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, ApJ, 533, 682, doi: 10.1086/308692 * Cappellari (2017) Cappellari, M. 2017, MNRAS, 466, 798, doi: 10.1093/mnras/stw3020 * Cappellari & Copin (2003) Cappellari, M., & Copin, Y. 2003, MNRAS, 342, 345, doi: 10.1046/j.1365-8711.2003.06541.x * Cappellari et al. (2011) Cappellari, M., Emsellem, E., Krajnović, D., et al. 2011, MNRAS, 413, 813, doi: 10.1111/j.1365-2966.2010.18174.x * Carnall et al. (2019) Carnall, A. C., Leja, J., Johnson, B. D., et al. 2019, ApJ, 873, 44, doi: 10.3847/1538-4357/ab04a2 * Carnall et al. (2018) Carnall, A. C., McLure, R. J., Dunlop, J. S., & Davé, R. 2018, MNRAS, 480, 4379, doi: 10.1093/mnras/sty2169 * Chabrier (2003) Chabrier, G. 2003, PASP, 115, 763, doi: 10.1086/376392 * Charlot & Fall (2000) Charlot, S., & Fall, S. M. 2000, ApJ, 539, 718, doi: 10.1086/309250 * Chauke et al. 
(2018) Chauke, P., van der Wel, A., Pacifici, C., et al. 2018, ApJ, 861, 13, doi: 10.3847/1538-4357/aac324 * Chen et al. (2020) Chen, X., Akiyama, M., Ichikawa, K., et al. 2020, ApJ, 900, 51, doi: 10.3847/1538-4357/aba599 * Chevallard & Charlot (2016) Chevallard, J., & Charlot, S. 2016, MNRAS, 462, 1415, doi: 10.1093/mnras/stw1756 * Cid Fernandes et al. (2005) Cid Fernandes, R., Mateus, A., Sodré, L., Stasińska, G., & Gomes, J. M. 2005, MNRAS, 358, 363, doi: 10.1111/j.1365-2966.2005.08752.x * Conroy (2013) Conroy, C. 2013, ARA&A, 51, 393, doi: 10.1146/annurev-astro-082812-141017 * Conroy & Gunn (2010) Conroy, C., & Gunn, J. E. 2010, ApJ, 712, 833, doi: 10.1088/0004-637X/712/2/833 * Conroy et al. (2009) Conroy, C., Gunn, J. E., & White, M. 2009, ApJ, 699, 486, doi: 10.1088/0004-637X/699/1/486 * Croom et al. (2012) Croom, S. M., Lawrence, J. S., Bland-Hawthorn, J., et al. 2012, MNRAS, 421, 872, doi: 10.1111/j.1365-2966.2011.20365.x * da Cunha et al. (2008) da Cunha, E., Charlot, S., & Elbaz, D. 2008, MNRAS, 388, 1595, doi: 10.1111/j.1365-2966.2008.13535.x * de Amorim et al. (2017) de Amorim, A. L., García-Benito, R., Cid Fernandes, R., et al. 2017, MNRAS, 471, 3727, doi: 10.1093/mnras/stx1805 * de Zeeuw et al. (2002) de Zeeuw, P. T., Bureau, M., Emsellem, E., et al. 2002, MNRAS, 329, 513, doi: 10.1046/j.1365-8711.2002.05059.x * Diemer et al. (2017) Diemer, B., Sparre, M., Abramson, L. E., & Torrey, P. 2017, ApJ, 839, 26, doi: 10.3847/1538-4357/aa68e5 * Draine & Li (2007) Draine, B. T., & Li, A. 2007, ApJ, 657, 810, doi: 10.1086/511055 * Dressler et al. (2018) Dressler, A., Kelson, D. D., & Abramson, L. E. 2018, ApJ, 869, 152, doi: 10.3847/1538-4357/aaedbe * Dressler et al. (2016) Dressler, A., Kelson, D. D., Abramson, L. E., et al. 2016, ApJ, 833, 251, doi: 10.3847/1538-4357/833/2/251 * Driver et al. (2009) Driver, S. P., Norberg, P., Baldry, I. K., et al. 2009, Astronomy and Geophysics, 50, 5.12, doi: 10.1111/j.1468-4004.2009.50512.x * Dye (2008) Dye, S. 
2008, MNRAS, 389, 1293, doi: 10.1111/j.1365-2966.2008.13639.x * Earl et al. (2020) Earl, N., Tollerud, E., Jones, C., et al. 2020, astropy/specutils: v1.0, v1.0, Zenodo, doi: 10.5281/zenodo.3718589 * Eldridge & Stanway (2009) Eldridge, J. J., & Stanway, E. R. 2009, MNRAS, 400, 1019, doi: 10.1111/j.1365-2966.2009.15514.x * Emsellem et al. (2004) Emsellem, E., Cappellari, M., Peletier, R. F., et al. 2004, MNRAS, 352, 721, doi: 10.1111/j.1365-2966.2004.07948.x * Falcón-Barroso et al. (2011) Falcón-Barroso, J., Sánchez-Blázquez, P., Vazdekis, A., et al. 2011, A&A, 532, A95, doi: 10.1051/0004-6361/201116842 * Ferland et al. (1998) Ferland, G. J., Korista, K. T., Verner, D. A., et al. 1998, PASP, 110, 761, doi: 10.1086/316190 * Ferland et al. (2013) Ferland, G. J., Porter, R. L., van Hoof, P. A. M., et al. 2013, Rev. Mexicana Astron. Astrofis., 49, 137. https://arxiv.org/abs/1302.4485 * Fitzpatrick (1999) Fitzpatrick, E. L. 1999, PASP, 111, 63, doi: 10.1086/316293 * Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067 * Foreman-Mackey et al. (2014) Foreman-Mackey, D., Sick, J., & Johnson, B. 2014, Python-Fsps: Python Bindings To Fsps (V0.1.1), v0.1.1, Zenodo, doi: 10.5281/zenodo.12157 * Foreman-Mackey et al. (2018) Foreman-Mackey, D., Meierjurgen Farr, W., Tollerud, E., et al. 2018, Dfm/Emcee: Emcee V3.0Rc2, v3.0rc2, Zenodo, doi: 10.5281/zenodo.1436565 * Foreman-Mackey et al. (2019) Foreman-Mackey, D., Farr, W., Sinha, M., et al. 2019, The Journal of Open Source Software, 4, 1864, doi: 10.21105/joss.01864 * Förster Schreiber et al. (2018) Förster Schreiber, N. M., Renzini, A., Mancini, C., et al. 2018, ApJS, 238, 21, doi: 10.3847/1538-4365/aadd49 * García-Benito et al. (2015) García-Benito, R., Zibetti, S., Sánchez, S. F., et al. 2015, A&A, 576, A135, doi: 10.1051/0004-6361/201425080 * Girardi et al. (2000) Girardi, L., Bressan, A., Bertelli, G., & Chiosi, C. 
2000, A&AS, 141, 371, doi: 10.1051/aas:2000126 * Gordon et al. (2008) Gordon, K. D., Engelbracht, C. W., Rieke, G. H., et al. 2008, ApJ, 682, 336, doi: 10.1086/589567 * Groves et al. (2008) Groves, B., Dopita, M. A., Sutherland, R. S., et al. 2008, ApJS, 176, 438, doi: 10.1086/528711 * Han & Han (2014) Han, Y., & Han, Z. 2014, ApJS, 215, 2, doi: 10.1088/0067-0049/215/1/2 * Han & Han (2019) —. 2019, ApJS, 240, 3, doi: 10.3847/1538-4365/aaeffa * Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2 * Hunter (2007) Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90, doi: 10.1109/MCSE.2007.55 * Inoue et al. (2014) Inoue, A. K., Shimizu, I., Iwata, I., & Tanaka, M. 2014, MNRAS, 442, 1805, doi: 10.1093/mnras/stu936 * Iyer & Gawiser (2017) Iyer, K., & Gawiser, E. 2017, ApJ, 838, 127, doi: 10.3847/1538-4357/aa63f0 * Iyer et al. (2019) Iyer, K. G., Gawiser, E., Faber, S. M., et al. 2019, ApJ, 879, 116, doi: 10.3847/1538-4357/ab2052 * Jarrett et al. (2000) Jarrett, T. H., Chester, T., Cutri, R., et al. 2000, AJ, 119, 2498, doi: 10.1086/301330 * Jedrzejewski (1987) Jedrzejewski, R. I. 1987, MNRAS, 226, 747, doi: 10.1093/mnras/226.4.747 * Johnson et al. (2013) Johnson, S. P., Wilson, G. W., Tang, Y., & Scott, K. S. 2013, MNRAS, 436, 2535, doi: 10.1093/mnras/stt1758 * Kauffmann et al. (2003) Kauffmann, G., Heckman, T. M., White, S. D. M., et al. 2003, MNRAS, 341, 33, doi: 10.1046/j.1365-8711.2003.06291.x * Kelson et al. (2000) Kelson, D. D., Illingworth, G. D., van Dokkum, P. G., & Franx, M. 2000, ApJ, 531, 159, doi: 10.1086/308445 * Kennicutt (1998) Kennicutt, Robert C., J. 1998, ARA&A, 36, 189, doi: 10.1146/annurev.astro.36.1.189 * Koleva et al. (2009) Koleva, M., Prugniel, P., Bouchard, A., & Wu, Y. 2009, A&A, 501, 1269, doi: 10.1051/0004-6361/200811467 * Kriek et al. (2009) Kriek, M., van Dokkum, P. G., Labbé, I., et al. 
2009, ApJ, 700, 221, doi: 10.1088/0004-637X/700/1/221 * Lanyon-Foster et al. (2007) Lanyon-Foster, M. M., Conselice, C. J., & Merrifield, M. R. 2007, MNRAS, 380, 571, doi: 10.1111/j.1365-2966.2007.12132.x * Lanyon-Foster et al. (2012) —. 2012, MNRAS, 424, 1852, doi: 10.1111/j.1365-2966.2012.21287.x * Law et al. (2015) Law, D. R., Yan, R., Bershady, M. A., et al. 2015, AJ, 150, 19, doi: 10.1088/0004-6256/150/1/19 * Law et al. (2016) Law, D. R., Cherinka, B., Yan, R., et al. 2016, AJ, 152, 83, doi: 10.3847/0004-6256/152/4/83 * Lee et al. (2009) Lee, S.-K., Idzi, R., Ferguson, H. C., et al. 2009, ApJS, 184, 100, doi: 10.1088/0067-0049/184/1/100 * Leja et al. (2019a) Leja, J., Carnall, A. C., Johnson, B. D., Conroy, C., & Speagle, J. S. 2019a, ApJ, 876, 3, doi: 10.3847/1538-4357/ab133c * Leja et al. (2018) Leja, J., Johnson, B. D., Conroy, C., & van Dokkum, P. 2018, ApJ, 854, 62, doi: 10.3847/1538-4357/aaa8db * Leja et al. (2017) Leja, J., Johnson, B. D., Conroy, C., van Dokkum, P. G., & Byler, N. 2017, ApJ, 837, 170, doi: 10.3847/1538-4357/aa5ffe * Leja et al. (2019b) Leja, J., Johnson, B. D., Conroy, C., et al. 2019b, ApJ, 877, 140, doi: 10.3847/1538-4357/ab1d5a * Lower et al. (2020) Lower, S., Narayanan, D., Leja, J., et al. 2020, ApJ, 904, 33, doi: 10.3847/1538-4357/abbfa7 * Lupton et al. (2004) Lupton, R., Blanton, M. R., Fekete, G., et al. 2004, PASP, 116, 133, doi: 10.1086/382245 * Madau (1995) Madau, P. 1995, ApJ, 441, 18, doi: 10.1086/175332 * Maraston (1998) Maraston, C. 1998, MNRAS, 300, 872, doi: 10.1046/j.1365-8711.1998.01947.x * Maraston (2005) —. 2005, MNRAS, 362, 799, doi: 10.1111/j.1365-2966.2005.09270.x * Maraston et al. (2010) Maraston, C., Pforr, J., Renzini, A., et al. 2010, MNRAS, 407, 830, doi: 10.1111/j.1365-2966.2010.16973.x * Marigo & Girardi (2007) Marigo, P., & Girardi, L. 2007, A&A, 469, 239, doi: 10.1051/0004-6361:20066772 * Marigo et al. (2008) Marigo, P., Girardi, L., Bressan, A., et al. 
2008, A&A, 482, 883, doi: 10.1051/0004-6361:20078467 * Marinacci et al. (2018) Marinacci, F., Vogelsberger, M., Pakmor, R., et al. 2018, MNRAS, 480, 5113, doi: 10.1093/mnras/sty2206 * Martin et al. (2005) Martin, D. C., Fanson, J., Schiminovich, D., et al. 2005, ApJ, 619, L1, doi: 10.1086/426387 * Michałowski et al. (2012) Michałowski, M. J., Dunlop, J. S., Cirasuolo, M., et al. 2012, A&A, 541, A85, doi: 10.1051/0004-6361/201016308 * Michałowski et al. (2014) Michałowski, M. J., Hayward, C. C., Dunlop, J. S., et al. 2014, A&A, 571, A75, doi: 10.1051/0004-6361/201424174 * Morishita et al. (2019) Morishita, T., Abramson, L. E., Treu, T., et al. 2019, ApJ, 877, 141, doi: 10.3847/1538-4357/ab1d53 * Morrissey et al. (2007) Morrissey, P., Conrow, T., Barlow, T. A., et al. 2007, ApJS, 173, 682, doi: 10.1086/520512 * Naiman et al. (2018) Naiman, J. P., Pillepich, A., Springel, V., et al. 2018, MNRAS, 477, 1206, doi: 10.1093/mnras/sty618 * Nelson et al. (2018) Nelson, D., Pillepich, A., Springel, V., et al. 2018, MNRAS, 475, 624, doi: 10.1093/mnras/stx3040 * Nelson et al. (2019) Nelson, D., Springel, V., Pillepich, A., et al. 2019, Computational Astrophysics and Cosmology, 6, 2, doi: 10.1186/s40668-019-0028-x * Nelson et al. (2016) Nelson, E. J., van Dokkum, P. G., Momcheva, I. G., et al. 2016, ApJ, 817, L9, doi: 10.3847/2041-8205/817/1/L9 * Nenkova et al. (2008a) Nenkova, M., Sirocky, M. M., Ivezić, Ž., & Elitzur, M. 2008a, ApJ, 685, 147, doi: 10.1086/590482 * Nenkova et al. (2008b) Nenkova, M., Sirocky, M. M., Nikutta, R., Ivezić, Ž., & Elitzur, M. 2008b, ApJ, 685, 160, doi: 10.1086/590483 * Newman et al. (2014) Newman, A. B., Ellis, R. S., Andreon, S., et al. 2014, ApJ, 788, 51, doi: 10.1088/0004-637X/788/1/51 * Noll et al. (2009) Noll, S., Burgarella, D., Giovannoli, E., et al. 2009, A&A, 507, 1793, doi: 10.1051/0004-6361/200912497 * Ocvirk et al. (2006) Ocvirk, P., Pichon, C., Lançon, A., & Thiébaut, E. 
2006, MNRAS, 365, 46, doi: 10.1111/j.1365-2966.2005.09182.x * Osterbrock (1989) Osterbrock, D. E. 1989, Astrophysics of gaseous nebulae and active galactic nuclei * Pacifici et al. (2012) Pacifici, C., Charlot, S., Blaizot, J., & Brinchmann, J. 2012, MNRAS, 421, 2002, doi: 10.1111/j.1365-2966.2012.20431.x * Pacifici et al. (2016) Pacifici, C., Oh, S., Oh, K., Lee, J., & Yi, S. K. 2016, ApJ, 824, 45, doi: 10.3847/0004-637X/824/1/45 * Papovich et al. (2001) Papovich, C., Dickinson, M., & Ferguson, H. C. 2001, ApJ, 559, 620, doi: 10.1086/322412 * Pillepich et al. (2018) Pillepich, A., Nelson, D., Hernquist, L., et al. 2018, MNRAS, 475, 648, doi: 10.1093/mnras/stx3112 * Robitaille (2018) Robitaille, T. 2018, reproject: astronomical image reprojection in Python, v0.4, Zenodo, doi: 10.5281/zenodo.1162674 * Ross et al. (2011) Ross, A. J., Ho, S., Cuesta, A. J., et al. 2011, MNRAS, 417, 1350, doi: 10.1111/j.1365-2966.2011.19351.x * Roth et al. (2005) Roth, M. M., Kelz, A., Fechner, T., et al. 2005, PASP, 117, 620, doi: 10.1086/429877 * Salim et al. (2007) Salim, S., Rich, R. M., Charlot, S., et al. 2007, ApJS, 173, 267, doi: 10.1086/519218 * Salpeter (1955) Salpeter, E. E. 1955, ApJ, 121, 161, doi: 10.1086/145971 * Sánchez et al. (2012) Sánchez, S. F., Kennicutt, R. C., Gil de Paz, A., et al. 2012, A&A, 538, A8, doi: 10.1051/0004-6361/201117353 * Sánchez et al. (2016a) Sánchez, S. F., García-Benito, R., Zibetti, S., et al. 2016a, A&A, 594, A36, doi: 10.1051/0004-6361/201628661 * Sánchez et al. (2016b) Sánchez, S. F., Pérez, E., Sánchez-Blázquez, P., et al. 2016b, Rev. Mexicana Astron. Astrofis., 52, 171. https://arxiv.org/abs/1602.01830 * Sánchez et al. (2018) Sánchez, S. F., Avila-Reese, V., Hernandez-Toledo, H., et al. 2018, Rev. Mexicana Astron. Astrofis., 54, 217. https://arxiv.org/abs/1709.05438 * Sánchez-Blázquez et al. (2006) Sánchez-Blázquez, P., Peletier, R. F., Jiménez-Vicente, J., et al. 
2006, MNRAS, 371, 703, doi: 10.1111/j.1365-2966.2006.10699.x * Sawicki (2012) Sawicki, M. 2012, PASP, 124, 1208, doi: 10.1086/668636 * Sawicki & Yee (1998) Sawicki, M., & Yee, H. K. C. 1998, AJ, 115, 1329, doi: 10.1086/300291 * Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103 * Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525, doi: 10.1086/305772 * Searle et al. (1973) Searle, L., Sargent, W. L. W., & Bagnuolo, W. G. 1973, ApJ, 179, 427, doi: 10.1086/151882 * Serra et al. (2011) Serra, P., Amblard, A., Temi, P., et al. 2011, ApJ, 740, 22, doi: 10.1088/0004-637X/740/1/22 * Skrutskie et al. (2006) Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163, doi: 10.1086/498708 * Smee et al. (2013) Smee, S. A., Gunn, J. E., Uomoto, A., et al. 2013, AJ, 146, 32, doi: 10.1088/0004-6256/146/2/32 * Smith & Hayward (2015) Smith, D. J. B., & Hayward, C. C. 2015, MNRAS, 453, 1597, doi: 10.1093/mnras/stv1727 * Smith & Hayward (2018) —. 2018, MNRAS, 476, 1705, doi: 10.1093/mnras/sty311 * Sorba & Sawicki (2015) Sorba, R., & Sawicki, M. 2015, MNRAS, 452, 235, doi: 10.1093/mnras/stv1235 * Sorba & Sawicki (2018) —. 2018, MNRAS, 476, 1532, doi: 10.1093/mnras/sty186 * Speagle et al. (2014) Speagle, J. S., Steinhardt, C. L., Capak, P. L., & Silverman, J. D. 2014, ApJS, 214, 15, doi: 10.1088/0067-0049/214/2/15 * Springel et al. (2018) Springel, V., Pakmor, R., Pillepich, A., et al. 2018, MNRAS, 475, 676, doi: 10.1093/mnras/stx3304 * Stalevski et al. (2012) Stalevski, M., Fritz, J., Baes, M., Nakos, T., & Popović, L. Č. 2012, MNRAS, 420, 2756, doi: 10.1111/j.1365-2966.2011.19775.x * Tinsley (1972) Tinsley, B. M. 1972, A&A, 20, 383 * Tojeiro et al. (2007) Tojeiro, R., Heavens, A. F., Jimenez, R., & Panter, B. 2007, MNRAS, 381, 1252, doi: 10.1111/j.1365-2966.2007.12323.x * Tremonti et al. (2004) Tremonti, C. A., Heckman, T. M., Kauffmann, G., et al. 
2004, ApJ, 613, 898, doi: 10.1086/423264 * Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2 * Wake et al. (2017) Wake, D. A., Bundy, K., Diamond-Stanic, A. M., et al. 2017, AJ, 154, 86, doi: 10.3847/1538-3881/aa7ecc * Walcher et al. (2011) Walcher, J., Groves, B., Budavári, T., & Dale, D. 2011, Ap&SS, 331, 1, doi: 10.1007/s10509-010-0458-z * Westfall et al. (2019) Westfall, K. B., Cappellari, M., Bershady, M. A., et al. 2019, AJ, 158, 231, doi: 10.3847/1538-3881/ab44a2 * Wilkinson et al. (2017) Wilkinson, D. M., Maraston, C., Goddard, D., Thomas, D., & Parikh, T. 2017, MNRAS, 472, 4297, doi: 10.1093/mnras/stx2215 * Wisnioski et al. (2015) Wisnioski, E., Förster Schreiber, N. M., Wuyts, S., et al. 2015, ApJ, 799, 209, doi: 10.1088/0004-637X/799/2/209 * Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868, doi: 10.1088/0004-6256/140/6/1868 * Wuyts et al. (2012) Wuyts, S., Förster Schreiber, N. M., Genzel, R., et al. 2012, ApJ, 753, 114, doi: 10.1088/0004-637X/753/2/114 * Wuyts et al. (2013) Wuyts, S., Förster Schreiber, N. M., Nelson, E. J., et al. 2013, ApJ, 779, 135, doi: 10.1088/0004-637X/779/2/135 * Yan et al. (2016) Yan, R., Tremonti, C., Bershady, M. A., et al. 2016, AJ, 151, 8, doi: 10.3847/0004-6256/151/1/8 * York et al. (2000) York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, AJ, 120, 1579, doi: 10.1086/301513 * Zhou et al. (2020) Zhou, S., Mo, H. J., Li, C., Boquien, M., & Rossi, G. 2020, MNRAS, 497, 4753, doi: 10.1093/mnras/staa2337 * Zibetti et al. (2009) Zibetti, S., Charlot, S., & Rix, H.-W. 2009, MNRAS, 400, 1181, doi: 10.1111/j.1365-2966.2009.15528.x
# Collatz convergence is a Hydra Game Alexander Rahn, Nuremberg Institute of Technology, Keßlerpl. 12, 90489 Nuremberg, Germany. Eldar Sultanow and Idriss J. Aberkane, Potsdam University, Chair of Business Informatics, Processes and Systems, Karl-Marx-Straße 67, 14482 Potsdam, Germany; Capgemini, Bahnhofstraße 30, 90402 Nuremberg, Germany; Unesco-Unitwin Complex Systems Digital Campus, Chair of Prof. Pierre Collet, ICUBE - UMR CNRS 7357, 4 rue Kirschleger, 67000 Strasbourg, France. Corresponding author<EMAIL_ADDRESS> (January 23rd, 2021) ###### Abstract The Collatz dynamic is known to generate a complex quiver of sequences over the natural numbers whose inflation propensity remains so unpredictable that it could be used to generate reliable proof-of-work algorithms for the cryptocurrency industry. Here we establish an ad hoc equivalent of modular arithmetic for Collatz sequences in order to automatically demonstrate the convergence of infinite quivers of numbers, based on five arithmetic rules that we prove apply to the entire Collatz dynamic and which we further simulate to gain insight into their graph geometry and computational properties. We then formally demonstrate that these rules define an automaton that is playing a Hydra game on the graph of undecided numbers, which we also prove is embedded in $24\mathbb{N}-7$, proving that in ZFC the Collatz conjecture is true, before giving a promising direction to also prove it in Peano arithmetic. ## 1 Introduction The dynamical system generated by the $3n+1$ problem is known to create complex quivers over $\mathbb{N}$, one of the most picturesque being the so-called "Collatz Feather" or "Collatz Seaweed", a name popularized by Clojure programmer Oliver Caldwell in 2017. 
The inflation propensity of Collatz orbits remains so unpredictable that it can form the core of a reliable proof-of-work algorithm for Blockchain solutions [1], with groundbreaking applications to the field of number-theoretical cryptography, as such algorithms are unrelated to primes and yet, being based on the class of congruential graphs, still allow for a wide diversity of practical variants. If Bocart thus demonstrated that graph-theoretical approaches to the $3n+1$ problem can be very fertile for applied mathematics, the authors have also endeavored to demonstrate its pure number-theoretical interest prior to this work [2], [3], [4], [5]. The purpose of this article, however, is to establish fundamental properties of the "Collatz Feather" and infer provable consequences of those properties to achieve a positive proof of the Collatz conjecture. Our methodology consists of using the complete binary tree and the complete ternary tree over $2\mathbb{N}+1$ as a general coordinate system for each node of the Feather; here the complete binary tree over odd numbers is defined as $2\mathbb{N}+1$ endowed with the two linear applications $\{\cdot 2-1;\cdot 2+1\}$, and the complete ternary tree over odd numbers is defined as $2\mathbb{N}+1$ endowed with the operations $\{\cdot 3-2;\cdot 3;\cdot 3+2\}$. We owe this strategy to earlier discussions with Feferman [6] on his investigations of the continuum hypothesis, as it is known that the complete binary tree over natural numbers is one way of generating the real numbers. The last author's discussions with Feferman suggested that morphisms, sections and origamis of n-ary trees over $\mathbb{N}$ could be a promising strategy to define objects of intermediate cardinality between $\aleph_{n}$ and $\aleph_{n+1}$, in a manner inspired by Conway's construction of the surreal numbers [7], which itself began by investigating the branching factor of the game of Go. 
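The two coordinate trees just described can be generated level by level, which makes it easy to check that each one enumerates every odd number. The sketch below is ours (the helper name `tree_levels` is not from the paper); note that $1$ is a fixed point of $\cdot 2-1$ and of $\cdot 3-2$, so it reappears at every level.

```python
def tree_levels(root, ops, depth):
    """Breadth-first levels of the tree obtained by applying every
    operation in `ops` to each node of the previous level."""
    level, levels = [root], [[root]]
    for _ in range(depth):
        level = [op(n) for n in level for op in ops]
        levels.append(level)
    return levels

# Complete binary tree over odd numbers: 2N+1 with {.2-1, .2+1}.
binary = tree_levels(1, [lambda n: 2 * n - 1, lambda n: 2 * n + 1], 3)
# Complete ternary tree over odd numbers: 2N+1 with {.3-2, .3, .3+2}.
ternary = tree_levels(1, [lambda n: 3 * n - 2,
                          lambda n: 3 * n,
                          lambda n: 3 * n + 2], 2)
print(binary[-1])  # → [1, 3, 5, 7, 9, 11, 13, 15]
```

Level $k$ of the binary tree contains exactly the odd numbers below $2^{k+1}$, which is the sense in which these trees serve as a coordinate system on $2\mathbb{N}+1$.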
The initial interest, therefore, was to investigate the branching factor of the Collatz Feather and to define the cardinality of the set of its branches. Here we begin by identifying and proving five arithmetical rules that apply anywhere on the Collatz dynamic, with the purpose of demonstrating that applying them from the number $1$ generates the entire Feather. ## 2 Five essential rules of Collatz dynamics ###### Note 2.1. For all intents and purposes we will define Syr(x), or the "Syracuse action", as "the next odd number in the forward Collatz orbit of $x$". Whenever two numbers $a$ and $b$ have a common number in their orbits, we will also write $a\equiv b$, a relation that is self-evidently transitive: $(a\equiv b)\land(b\equiv c)\Rightarrow a\equiv c$. The choice of the symbol "$\equiv$" is a deliberate one, to acknowledge a kinship between our method and modular arithmetic. ###### Definition 2.1. Actions G, V and S are specified, for any natural number $a$, as follows: 1. $G(a):=2a-1$ 2. $S(a):=2a+1$ 3. $V(a):=4a+1=G\circ S(a)$ The rank of $a$ is its number of consecutive end digits $1$ in base $2$. ###### Definition 2.2. Types A, B and C 1. a number $a$ is of type A if its base 3 representation ends with the digit 2 2. a number $b$ is of type B if its base 3 representation ends with the digit 0 3. a number $c$ is of type C if its base 3 representation ends with the digit 1 To remember which is which, one need only remember the order of ABC: $a+1$ is divisible by 3, and so is $c-1$; thus A is on the left of B and C is on the right. Figure 1: Quiver connecting all odd numbers from 1 to 31 with the arrows of the actions S, V and G. The set $2\mathbb{N}+1$ is thus endowed with three non-commutative unary operations without a general inverse, with $G\circ S=V$. Whenever we mention the inverses of these operations, it will be under the assumption that they exist on $\mathbb{N}$. 
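The definitions above are all directly computable; here is a minimal Python sketch (the function names are ours): the rank counts trailing $1$ bits in base 2, the type reads the last base-3 digit, and the Syracuse action divides out every factor of 2 after the $3n+1$ step.

```python
def G(a): return 2 * a - 1
def S(a): return 2 * a + 1
def V(a): return 4 * a + 1          # V = G(S(a))

def rank(a):
    """Number of consecutive end digits 1 in base 2."""
    r = 0
    while a & 1:
        a >>= 1
        r += 1
    return r

def collatz_type(a):
    """'A', 'B' or 'C' according to the last base-3 digit (2, 0, 1)."""
    return {2: "A", 0: "B", 1: "C"}[a % 3]

def syr(x):
    """Syracuse action: next odd number in the forward Collatz orbit."""
    y = 3 * x + 1
    while y % 2 == 0:
        y //= 2
    return y

print(V(5), G(S(5)))     # → 21 21
print(rank(31), syr(7))  # → 5 11
```

For instance $31 = 11111_2$ has rank 5, and $5 = 12_3$ ends with digit 2, so it is of type A.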
In Figure 1, type A numbers are circled in teal, B in gold and C in purple. ###### Theorem 2.2. The following arithmetic rules apply anywhere over the system $2\mathbb{N}+1$ endowed with the Collatz dynamic. * • Rule One: for all odd $x$, $V(x)\equiv x$ * • Rule Two: for all odd $x$ and odd $k$, $S^{k}V(x)\equiv S^{k+1}V(x)$, and for all even $x$ and even $k$, $S^{k}V(x)\equiv S^{k+1}V(x)$ * • Rule Three: $\forall\{n;y\}\in\mathbb{N}^{2}$, $\forall x$ odd non B, $3^{n}x\equiv y\Rightarrow\bigwedge\limits_{i=1}^{n}(V(4^{i}3^{n-i}x))\wedge S(V(4^{i}3^{n-i}x))\equiv y$ * • Rule Four: $\forall\{n;y\}\in\mathbb{N}^{2}$, $\forall x$ odd non B, $S(3^{n}x)\equiv y\Rightarrow\bigwedge\limits_{i=1}^{n}(S(4^{i}3^{n-i}x)\wedge S^{2}(4^{i}3^{n-i}x))\equiv y$ * • Rule Five: $\forall n\in\mathbb{N}$, $\forall y\in\mathbb{N}$, $\forall x$ odd non B where $3^{n}x$ is of rank 1, if $a=G(3^{n}x)$ and $a\equiv y$, then $\bigwedge\limits_{i=0}^{n}(S^{i}(G(3^{n-i}x))\wedge S^{i+1}(G(3^{n-i}x)))\equiv y$ In the following we prove these five rules. ###### Note 2.3. In reference to Figure 1, we will call "vertical even" a number that can be written $V(e)$ where $e$ is even, and "vertical odd" one that can be written $V(o)$ where $o$ is odd. For example, $9$ is the first vertical even number and $5$ is the first vertical odd one. ### 2.1 Proving Rule One If $a$ is written $4b+1$ then $3a+1=12b+4=4(3b+1)$, therefore $a\equiv b$. ### 2.2 Proving Rule Two ###### Lemma 2.4. Let $a$ be a number of rank $1$, so that $a=G(p)$ for some odd number $p$; then $Syr(S(a))=G(3\cdot p)$. Let $a$ be a number of rank $n$ so that $S^{-(n-1)}(a)=G(p)$; then $Syr^{n-1}(a)=G(3^{n-1}\cdot p)$. ###### Proof. If $a=2p-1$ with $p$ odd, then it follows: ${\begin{array}[]{l}S(a)=4p-1\\ \frac{3\cdot S(a)+1}{2}=\frac{12p-2}{2}=6p-1=G(3\cdot p)\\ Syr(S(a))=G(3\cdot p)\end{array}}$ Let us now generalize to rank $n$. 
If $Syr(S(a))$ can be written $G(3\cdot p)$ it is also of rank 1, whereas $S(a)$ was of rank 2; therefore the Syracuse action has made it lose one rank. All we have to prove now is that $Syr(S^{2}(a))=S(Syr(S(a)))$ under those conditions: ${\begin{array}[]{l}\frac{3\cdot S^{2}(a)+1}{2}=6a+5\\ S(Syr(S(a)))=S(3a+2)=6a+5=Syr(S^{2}(a))\end{array}}$ If $a$ is of rank $n>1$, then $Syr(a)$ is of rank $n-1$, and $Syr(S(a))=S(Syr(a))$ ∎ ###### Note 2.5. The $3n+1$ action on an odd number, since it necessarily yields an even one, is ultimately equivalent to adding $1$ to the number, adding half of the result to itself, and then subtracting $1$. How many times one can repeat this on an odd number depends directly on its base 2 representation, and in particular on its number of consecutive end digits 1. Take the Mersenne numbers, defined as $2^{n}-1$, for example. One can transform them consecutively in this way a number of times equal to their rank minus one: indeed, $31$, which is written $11111$ in base 2, is of rank $5$ (because $32=2^{5}$), so repeating the action "add to the number $+1$ the half of itself" yields an even result exactly four consecutive times. Thus any strictly ascending Collatz orbit concerns only numbers $a$ of rank $n>1$, and is defined by $(a+1)\cdot\left(\frac{3}{2}\right)^{n-1}-1$. ###### Lemma 2.6. Let $a$ be an odd number of rank $1$ that is vertical even; then $3a$ is of rank 2 or more, and $9a$ is vertical even. Let $a$ be an odd number of rank $1$ that is vertical odd; then $3a$ is of rank $2$ or more, and $9a$ is vertical odd. ###### Proof. If $a$ is vertical even it can be written $8k+1$, and $\forall k:3a=24k+3$; this number admits an $S^{-1}$, namely $12k+1$, which is an odd number, therefore $3a$ is at least of rank $2$. Moreover, $9a=72k+9$, and this number admits a $V^{-1}$, namely $18k+2$, an even number. Now if $a$ is vertical odd, it can be written $8k+5$, and $\forall k:3a=24k+15$ and $9a=72k+45$. 
It follows that $3a$ admits an $S^{-1}$ and $9a$ admits a $V^{-1}$, respectively $12k+7$ and $18k+11$, and they are both odd. ∎ ###### Lemma 2.7. Let $a$ be a number that is vertical even; then $a\equiv S(a)$ and $S^{k}(a)\equiv S^{k+1}(a)$ for any even $k$. Let $a$ be a number that is vertical odd; then $S(a)\equiv S^{2}(a)$ and $S^{k}(a)\equiv S^{k+1}(a)$ for any odd $k$. ###### Proof. If $a$ is vertical even then it can be written as $G(p)$ where $p$ is necessarily vertical (odd or even). We proved that $3p$ is then of rank $2$ or more, and also that $Syr(S(a))=G(3p)$, so it is necessarily vertical odd (since $3p$ is of rank $2$ or more); hence $Syr(a)=V^{-1}(Syr(S(a)))$ and therefore $a\equiv S(a)$. This behavior we can now generalize to any $n$: if $a$ is vertical even with $a=G(p)$, then the lemmas we used also provide that $Syr^{n}(S^{n}(a))=G(3^{n}\cdot p)$, and therefore $Syr^{n}(S^{n}(a))$ will be vertical even for any even $n$, because $3^{n}\cdot p$ will be vertical (even or odd, depending on $p$ only) for any even $n$. Now if $a$ is vertical odd it can be written $G(p)$ where $p$ is necessarily of rank $2$ or more, because $G\circ S=V$. Thus $3p$ is vertical (even or odd) and therefore $Syr(S(a))=G(3p)$ is vertical even. ∎ ###### Note 2.8. Observe that in the process of proving Rule Two we have also demonstrated that any number of rank $2$ or more is finitely turned into a rank $1$ number of type A by the Collatz dynamic, and that any number $x$ of rank $2$ or more such that $x\equiv S(x)$ under Rule Two is finitely mapped to a type A number that is vertical even; therefore proving the convergence of such numbers is enough to prove the Collatz Conjecture. ### 2.3 Proving Rules Three and Four ###### Lemma 2.9. Let $a$ be a vertical even number with $a=G^{n+2}(S(b))$ where $n$ and $b$ are odd; then $a\equiv 3^{\frac{n+1}{2}}(b)$. 
Let $a$ be a vertical even number with $a=G^{m+2}(S(b))$ where $m$ is even (zero included) and $b$ is odd; then $a\equiv S(3^{\frac{m}{2}}(b))$. ###### Proof. If $a=G^{n+2}(S(b))$ then by definition $a=2^{n+3}b+1$. Then $3\cdot a+1=3(2^{n+3}b+1)+1=2^{n+3}\cdot(3b)+4.$ As this expression can be divided by $2$ no more than twice, we have $Syr(a)=2^{n+1}\cdot 3b+1=G^{n}(S(3b))$. Note that if $n=1$ then $V^{-1}(Syr(a))=V^{-1}(2^{2}\cdot(3b)+1)=2^{2}\cdot\frac{1}{4}\cdot(3b)=3b$, which is of course an odd number. Therefore $Syr(a)$ is vertical odd and $V^{-1}(Syr(a))=3b$, thus we have proven that $a\equiv 3b$. If $n=0$ then $a=2^{3}\cdot b+1$, so $3a+1=2^{3}\cdot 3b+4$, therefore $Syr(a)=S(3b)$ and thus $a\equiv S(3b)$. From this we can generalize the progression of numbers that can be written $G^{n}(x)$ where $x$ is of rank $2$ or more. Let $b$ be any odd number: * • All "Variety S" numbers above $b$ are written $V(b\cdot 2^{2k-1})\;\;\text{or}\;\;S(b\cdot 2^{2k})=2^{2k+1}\cdot b+1$, and * • all "Variety V" numbers above $b$ are written $V(b\cdot 4^{k})$ or equivalently $S(b\cdot 2^{2k+1})=4^{k+1}\cdot b+1$. Any number $g$ that can be written $G^{n}(V(x))$ with $x$ odd and $n>0$ may thus be finitely reduced under the Collatz dynamic to a number that can be written either $S(3^{m}x)$ or $V(3^{m}x)$ by repeating the following transformation: $(g-1)\cdot\biggl{(}\frac{3}{4}\biggr{)}^{k}+1$ Therefore we have indeed that, * • for Variety S numbers: $2^{2k+1}\cdot b\cdot\left(\frac{3}{4}\right)^{k}+1=2b\cdot 3^{k}+1=S(b\cdot 3^{k})$, which proves Rule Four; * • for Variety V numbers: $4\cdot 4^{k}\cdot b\cdot\left(\frac{3}{4}\right)^{k}+1=4b\cdot 3^{k}+1=V(b\cdot 3^{k})$, which proves Rule Three, because Rule One already provides that $V(b\cdot 3^{k})\equiv b\cdot 3^{k}$. ∎ ### 2.4 Proving Rule Five Any type A number of rank 1 can be written $a=G(b)$ where $b$ is of type B. 
In proving Rule Two we showed that any number of rank $n>1$ is finitely mapped by the Collatz dynamics to $G(3^{n-1}\cdot G^{-1}(S^{-(n-1)}(a)))$, which combined with Rule Two itself gives Rule Five.
Figure 2: Just a few applications of Rules Three, Four and Five starting from $1\equiv 3\equiv 5$ are plotted here in gold. Rules One and Two are plotted in black. Whenever a number is connected to $1$ by a finite path of black and/or gold edges it is proven to converge to $1$.
## 3 The Golden Automaton
###### Definition 3.1. On $\{2\mathbb{N}+1;G,S\}$, where Rules One and Two are considered pre-computed (the black edges in Figure 2), the systematic computation of Rules Three, Four and Five from number $1$ onward is called the "Golden Automaton".
### 3.1 "Golden Arithmetic"
Our purpose is to develop an ad hoc unary algebra that could found a congruence arithmetic specifically made to prove the Collatz conjecture, and which we intend as an epistemological extension of modular arithmetic; hence our use of the symbol $\equiv$ in this article rather than the usual $\thicksim$, which is seen more frequently in the Collatz-related literature. This "Golden arithmetic" involves words over the alphabet $\{G;S;V;3\}$, read in their order of application, as in turtle graphics.
For example, VGS3 means $3\cdot S\circ G\circ V$. Rules Three, Four and Five may now be reformulated as follows, without loss of generality, as long as Rules One and Two are still assumed:
* • Rule Three: let $b$ be of type B; then $b\equiv VGS3^{-1}(b)$. We will call this action $R_{b}(x)=\frac{16x}{3}+1$.
* • Rule Four: let $c$ be of type C; then $c\equiv GS3^{-1}(c)$. We will call this action $R_{c}(x)=\frac{4x-1}{3}$.
* • Rule Five: let $a$ be of type A; then $a\equiv G3^{-1}(a)$. We will call this action $R_{a}(x)=\frac{2x-1}{3}$.
As Rules One and Two ensure that the quiver generated by the Golden Automaton is branching, with each type B number that is vertical even providing both a new type A and a new type B number to which Rules 5 and 3 can respectively keep being applied, we may follow only the pathway of type A numbers to define a single non-branching series of arrows, forming a single infinite branch of the quiver. The latter, if computed from number 15, leads straight to 31 and 27, solving a great deal of other numbers on the way:
$\begin{array}{rll}
15&\equiv 81&\quad\text{Rule 3}\\
81&\equiv 1025&\quad\text{First type A reached by Rule 3}\\
1025&\equiv 303&\quad\textbf{Rule 5}\\
303&\equiv 607&\quad\text{Rule 2}\\
607&\equiv 809&\quad\text{Rule 4}\\
809&\equiv 159&\quad\textbf{Rule 5}\\
159&\equiv 319&\quad\text{Rule 2}\\
319&\equiv 425&\quad\text{Rule 4}\\
425&\equiv 283&\quad\textbf{Rule 5}\\
283&\equiv 377&\quad\text{Rule 4}\\
377&\equiv 111&\quad\textbf{Rule 5}\\
111&\equiv 593&\quad\text{Rule 3}\\
593&\equiv 175&\quad\textbf{Rule 5}\\
175&\equiv 233&\quad\text{Rule 4}\\
233&\equiv 103&\quad\textbf{Rule 5}\\
103&\equiv 137&\quad\text{Rule 4}\\
137&\equiv 91&\quad\textbf{Rule 5}\\
91&\equiv 161&\quad\text{Rule 4}\\
161&\equiv 31&\quad\textbf{Rule 5}\\
31&\equiv 41&\quad\text{Rule 4}\\
41&\equiv 27&\quad\textbf{Rule 5}
\end{array}$
Again, it is in no way a problem, but rather a powerful property of the Golden Automaton, that this particular quiver branch already covers 19 steps (and actually more), because each of them branches into other solutions. We may follow another interesting sequence to show that, in the same way that Mersenne number 15 finitely solves Mersenne number 31, Mersenne number 7 solves Mersenne number 127. This time we will follow a B branch up to $Syr^{6}(127)$, which we know can be written $G(3^{6})$ because 127 is the Mersenne number of rank 7. By Rule 4 we have the first equivalence $\boldsymbol{7\equiv 9}$ and $\boldsymbol{9\equiv 25\equiv 49}$. So by Rule 2 we also have $\boldsymbol{25\equiv 51}$. Rule 3 gives $\boldsymbol{51\equiv 273}$ and again $\boldsymbol{273\equiv 1457=G(729)\equiv 127}$. The cases of $15$ proving the convergence of $31$ and $27$, and of $7$ proving that of $127$, naturally lead us to the following conjecture:
###### Conjecture 3.1. Suppose all odd numbers up to $2^{n}$ are proven to converge to $1$ under the Collatz dynamic; then the Golden Automaton finitely proves the convergence of those up to $2^{n+1}$.
And indeed we already have that the Golden Automaton starting with $1$ proves $3$ by Rule One; then $3$ proves all numbers from $5$ to $15$, which in turn prove all numbers from $33$ to $127$. In the next subsection we render larger quivers generated by the Golden Automaton to provide a better understanding of their geometry and fundamental properties, and to demonstrate why this is so and, more generally, why it can be proven in ZFC that they can reach any number in $2\mathbb{N}+1$.
### 3.2 Computational Scale-up
The purpose of this subsection is to identify provable fundamental properties of the Golden Automaton by scaling it up on the full binary tree over $2\mathbb{N}+1$.
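Before scaling up, the equivalence chain of Subsection 3.1 can be replayed mechanically with the three actions $R_a$, $R_b$, $R_c$, using $S(x)=2x+1$ for Rule Two ($S$ is defined in earlier sections of the paper and is assumed here). Note that several lines of the chain compress consecutive applications of the same action (for instance $1025\equiv 303$ is three successive Rule 5 steps, and $91\equiv 161$ is two Rule 4 steps); a sketch:

```python
def R_a(x): return (2 * x - 1) // 3     # Rule Five action
def R_b(x): return 16 * x // 3 + 1      # Rule Three action
def R_c(x): return (4 * x - 1) // 3     # Rule Four action
def S(x):  return 2 * x + 1             # Rule Two ("S" assumed to be 2x + 1)

def iterate(f, x, n):
    """Apply the action f to x exactly n times."""
    for _ in range(n):
        x = f(x)
    return x

assert R_b(15) == 81                        # 15 ≡ 81      (Rule 3)
assert iterate(R_a, 1025, 3) == 303         # 1025 ≡ 303   (Rule 5, 3 steps)
assert S(303) == 607                        # 303 ≡ 607    (Rule 2)
assert R_c(607) == 809                      # 607 ≡ 809    (Rule 4)
assert iterate(R_a, 809, 4) == 159          # 809 ≡ 159    (Rule 5, 4 steps)
assert S(159) == 319 and R_c(319) == 425    # 159 ≡ 319 ≡ 425
assert R_a(425) == 283 and R_c(283) == 377  # 425 ≡ 283 ≡ 377
assert iterate(R_a, 377, 3) == 111          # 377 ≡ 111    (Rule 5, 3 steps)
assert R_b(111) == 593                      # 111 ≡ 593    (Rule 3)
assert iterate(R_a, 593, 3) == 175          # 593 ≡ 175    (Rule 5, 3 steps)
assert R_c(175) == 233                      # 175 ≡ 233    (Rule 4)
assert iterate(R_a, 233, 2) == 103          # 233 ≡ 103    (Rule 5, 2 steps)
assert R_c(103) == 137 and R_a(137) == 91   # 103 ≡ 137 ≡ 91
assert iterate(R_c, 91, 2) == 161           # 91 ≡ 161     (Rule 4, 2 steps)
assert iterate(R_a, 161, 4) == 31           # 161 ≡ 31     (Rule 5, 4 steps)
assert R_c(31) == 41 and R_a(41) == 27      # 31 ≡ 41 ≡ 27
```

Every step checks out, so the chain is internally consistent with the stated actions.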
To streamline its algorithmic scaling, we use the simplified rules we defined in the previous subsection, again without loss of generality. Our precise purpose is to pave the way for a formal demonstration that proving the convergence of odd numbers up to $n$ is always isomorphic to a Hydra Game. In the next figures we color all the elements of $24\mathbb{N}-7$ (for example $\{17,41,65,\ldots\}$) in red, as we demonstrate in the next section that they precisely form the "heads" in the Hydra Game.
Figure 3: Golden Automaton confined to numbers smaller than 32
Figure 4: Golden Automaton confined to numbers smaller than 64
Figure 5: Golden Automaton confined to numbers smaller than 128
Figure 6: Golden Automaton confined to numbers smaller than 256
## 4 Proving the Collatz conjecture
In this section we prove that the Collatz conjecture holds in Zermelo-Fraenkel set theory with the axiom of choice (abbreviated ZFC), and we outline how the Collatz conjecture could also be established in Peano Arithmetic.
### 4.1 ZFC proves the Collatz conjecture
###### Theorem 4.1. In ZFC, the Collatz conjecture is true.
###### Definition 4.1. A hydra is a rooted tree with arbitrarily many and arbitrarily long finite branches. Leaf nodes are called heads. A head is short if the immediate parent of the head is the root, and long if it is neither short nor the root. The object of the Hydra game is to cut the hydra down to its root. At each step, one can cut off one of the heads, after which the hydra grows new heads according to the following rules:
* • if the head was long, grow $n$ copies of the subtree of its parent node minus the cut head, rooted in the grandparent node;
* • if the head was short, grow nothing.
###### Lemma 4.2. The Golden Automaton reaching any natural number is a Hydra Game over a finite subtree of the complete binary tree over $24\mathbb{N}-7$.
###### Proof.
The essential questions to answer, in demonstrating either a homomorphism between a Hydra game and the Golden Automaton reaching any odd number, or that the Golden Automaton is at worst playing a Hydra Game, are:
* • What are the Hydra's heads?
* • How do they grow?
* • Does the Golden Automaton cut them according to the rules (at worst)?
###### Definition 4.2. A type A number that is vertical even is called an $A_{g}$. The set of $A_{g}$ numbers is $24\mathbb{N}-7$. Type B numbers that verify $b\equiv S(b)$ and type C numbers that verify $c\equiv S(c)$ under Rule Two are called Bups and Cups respectively.
What are the Hydra's heads? $A_{g}$ numbers are the heads of the Hydra. They are 12 points apart on $2\mathbb{N}+1$ (24 apart in nominal value, e.g. 17 to 41), and any Bup or Cup of rank $>1$ they represent under Rule Five is smaller than them, since the action $R_{a}$ is strictly decreasing; so up to the $n^{th}$ $A_{g}$ there are $2n$ Bups and Cups of rank 2 or more, and half of them are equivalent to these $A_{g}$ (e.g. between 17 and 41, Bup 27 is equivalent to $A_{g}$ 41, which is equivalent to Cup 31 by Rule Four).
How do they grow? Between any two consecutive $A_{g}$ in $2\mathbb{N}+1$ there are:
* • 8 non-A numbers;
* • at most 1 of them is mapped to the second $A_{g}$;
* • at most 3 are "ups" (Bups or Cups) of rank 2 or more.
Besides, we also have in general that:
* • let $b$ be of type B; there are $\frac{2b}{3}$ numbers of type $A_{g}$ that are smaller than $V^{2}(b)$;
* • let $c$ be of type C; there are $\frac{S(c)}{3}$ numbers of type $A_{g}$ that are smaller than $V^{2}(c)$;
* • let $3c$ be of type B where $c$ is of type C; there are $\frac{S(c)}{3}$ numbers of type $A_{g}$ up to $R_{b}(3c)$ included;
* • let $3a$ be of type B where $a$ is of type A; there are $\frac{G(a)}{3}$ numbers of type $A_{g}$ smaller than $R_{b}(3a)$.
This defines the growth of the heads.
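Definition 4.2's claim that the $A_g$ numbers form the set $24\mathbb{N}-7$ can be checked numerically. In the sketch below, "type A" is read, as the divisibility of the action $R_a(x)=\frac{2x-1}{3}$ suggests, as odd $x\equiv 2\pmod 3$, and "vertical even" as $x=V(y)=4y+1$ with $y$ even; both readings are inferences from this excerpt rather than the paper's official definitions.

```python
def is_Ag(x):
    """Type A (assumed: odd, x ≡ 2 mod 3) and vertical even (x = 4y+1, y even)."""
    return (x % 2 == 1 and x % 3 == 2
            and x % 4 == 1 and ((x - 1) // 4) % 2 == 0)

computed = [x for x in range(1, 1000, 2) if is_Ag(x)]
expected = [24 * k - 7 for k in range(1, 42)]   # 17, 41, 65, ..., 977
assert computed == expected                     # matches Definition 4.2
```

Under these readings the set is exactly $\{17, 41, 65, \ldots\}$, i.e. numbers $\equiv 17 \pmod{24}$, in agreement with Definition 4.2.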
Indeed, any supposedly diverging $A_{g}$ forms a Hydra, as we have proven that $24\mathbb{N}-7$ contains an image of all undecided Collatz numbers and that any non-decreasing trajectory identifies a subtree within this set. Does the Golden Automaton play a Hydra game? It could be demonstrated that the Golden Automaton is playing an even simpler game, as it is branching and thus cutting several heads at a time, and in particular cutting some long heads without them doubling. (In fact, the reason the Golden Automaton dominates $24\mathbb{N}-7$ so fast is that it is playing a much simpler game one could call a "Hecatonchire v. Hydra" game, namely a Hydra game where Herakles' number of arms is also multiplying at each step; but as this is needless for the final proof, we simply demonstrate that even under the worst possible assumptions it follows at least the rules of a regular Hydra game.) The computation of $15\equiv\ldots\equiv 27$ that we detailed in Subsection 3.1 is one instance of the Golden Automaton playing a Hydra Game; we highlighted each use of Rule 5 specifically, so that the reader can refer back to it more easily, because each time this rule is used, a head ($A_{g}$) has just been cut. The demonstration that $27$ and $31$ converge is the cutting of heads $41$ and $161$ respectively. This single branch of the Automaton, having first cut head $17$, reaches head $1025$ via the B-type numbers $15$ and $81$. It is therefore playing a Hydra with $\frac{1025+7}{24}=43$ heads, of which one ($17$) is already cut at this point, and of which at least $8$ are rooted (so cutting them does not multiply any number of heads). This process being independent of the targeted number, we now have that the reaching of any number by the Golden Automaton is at least equivalent to playing a Hydra with $n$ heads, of which $0<m<n$ are rooted.
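For concreteness, one step of the Hydra game from Definition 4.1 can be sketched with trees as nested lists (each node is the list of its children; a head is an empty list). This only illustrates the game's cutting and regrowth rules, not the Automaton itself.

```python
import copy

def cut_head(root, path, n):
    """Cut the head at `path` (child indices from the root); regrow per the rules."""
    if len(path) == 1:                 # short head: its parent is the root
        del root[path[0]]              # cut it; nothing regrows
        return
    grand = root
    for i in path[:-2]:                # walk down to the grandparent node
        grand = grand[i]
    p_idx, h_idx = path[-2], path[-1]
    del grand[p_idx][h_idx]            # cut the long head
    for _ in range(n):                 # grow n copies of the parent's subtree
        grand.append(copy.deepcopy(grand[p_idx]))   # (minus the cut head)

# One long branch carrying a head, plus one short head.
hydra = [[[]], []]
cut_head(hydra, [0, 0], n=2)           # cutting the long head regrows 2 copies
assert hydra == [[], [], [], []]       # only short heads remain
while hydra:                           # short heads never regrow
    cut_head(hydra, [0], n=2)
assert hydra == []                     # the hydra is reduced to its root
```

As the Goodstein/Kirby-Paris results cited below guarantee, this process always terminates regardless of the cutting order and of $n$.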
Even without demonstrating more precise limit theorems for the factors $n$ and $m$ (which could still be a fascinating endeavor), the road is now open for a final resolution of the Collatz conjecture. ∎
From there indeed, we know from Goodstein [8] and Kirby and Paris [9] that, in any system strong enough to prove that $\epsilon_{0}$ is well-ordered (and hence that Peano arithmetic is consistent), no Hydra game can be lost. Since the reaching of any number $n$ is a Hydra Game for the Golden Automaton, the Golden Automaton cannot fail to finitely reach any natural number.
### 4.2 Can Peano Arithmetic also prove the Collatz conjecture?
While it is now certain that any system strong enough to prove the convergence of Goodstein's series also proves the Collatz conjecture, it could very well be possible to prove it from Peano arithmetic alone. In this final subsection, we outline a strategy towards such a demonstration by defining a game different from the Hydra one: a zero-player game that is significantly simpler than John Conway's Game of Life, played on the complete binary tree $\{2\mathbb{N}+1;G,S\}$. In this cellular automaton, each cell is identified by a unique odd number and can adopt only three states:
* • Black, meaning the odd number is not (yet) proven to converge under the iterated Collatz transformation, or equivalently that it is only equivalent to other black numbers;
* • Gold, meaning the odd number is proven to converge and the consequences of its convergence have not yet been computed, i.e. it can have an offspring;
* • Blue, meaning the number is proven to converge and the consequences of its convergence have been computed, i.e. its offspring has already been turned gold.
In this ad hoc yet simpler game of life, each gold cell yields an offspring and then turns blue, and whenever a cell is blue or gold its odd number is proven to converge.
Starting with one cell colored gold at position $1$, it applies the following algorithm to each gold cell in the natural order of odd numbers:
1. Rule 1: if the cell on $x$ is gold, color the cell on $V(x)$ gold.
2. Rule 2: if the cell on $x$ is gold, color the cell on $S(x)$ gold, depending on the precise conditions of Rule 2.
3. If the cell on $a$ of type A is gold, then color the cell on $R_{a}(a)$ gold.
4. If the cell on $c$ of type C is gold, then color the cell on $R_{c}(c)$ gold.
5. After applying the previous rules to a gold cell, turn it blue.
Note that since applying $R_{b}$ to a type B number is equivalent to applying Rule 1 and then $R_{c}$, the algorithm need not implement a separate $R_{b}$. Whenever a complete series of odd numbers between $2^{n}+1$ and $2^{n+1}-1$ has been colored gold, the automaton ticks and returns what we will call its computational "expense", namely all the numbers colored blue and gold that are higher than $2^{n+1}-1$, thus giving a clear measurement of the algorithmic time it takes the Golden Automaton to prove the convergence of each complete level of the binary tree over $2\mathbb{N}+1$. We then plot the evolution of this expense on a linear and a logarithmic scale.
Figure 7: Case $n=6$, illustrating the principles of the game we defined. In the middle image, row $\{5;7\}$ has been solved, with an "expense" of 8 numbers also solved above it. In the right image, row $\{9;11;13;15\}$ has been solved with an expense of 6.
As number $1$ is the neutral element of operation $R_{c}$, we leave it gold throughout the simulation.
Figure 8: Case 12, seventh row completed
Figure 9: Case 12, eighth row completed
To facilitate the observation of each row of the binary tree being covered by the Golden Automaton, we zoom into each of them individually:
Figure 10: Zoom on row 11 (going from 1025 to 2047; each line has about 100 dots)
Figure 11: Row 10 (513 to 1023)
Figure 12: Zoom on row 9 (257 to 511)
Figure 13: Row 8 (129 to 255)
Figure 14: Charting the expense of proving the convergence of each row of the binary tree over $2\mathbb{N}+1$. For example, the number of gold and blue dots above the row going from 2049 to 4095 (row 12) is slightly below 175000. The total expense emerges from the generated tree's height. The same plot on a logarithmic scale (right) indicates a line-like shape.
From there we can provide two strategies to finalize a proof of the Collatz conjecture without the axiom of choice (which is needed to demonstrate that no Hydra Game can be lost), and precisely within Peano arithmetic. The first strategy would consist of using automated theorem proving to single out the linear behavior we expose in Figure 14 as a provable property of our game. The second, and we believe the most promising, would consist of analyzing the average reproductive rate of gold dots, and demonstrating that at any level $n$ of the binary tree they cannot fail to finitely take over any population of black dots below it.
###### Definition 4.3. (Reproductive Rate)
1. The reproductive rate of any golden dot at coordinate $n$ is the number of golden and blue dots it generates that are at or below coordinate $V(n)$.
2. The average reproductive rate of black dots converges to $3.5$ new black dots generated from $x$ at or below $V(x)$.
Indeed, for any $x$, in the time it takes to reach $V(x)$, only the offspring below $G(x)$ gets to generate new black dots under the rules of the binary tree; $S(x)$ can only reproduce once, by applying $G(S(x))$ and thus generating $V(x)$; and all numbers between $S(x)$ and the next Mersenne number cannot reproduce. More precisely, for any odd number $x$, there are $x+G^{-1}(x)$ odd numbers between itself and $V(x)$ included. We can now count the average or limit reproductive rate of each type of golden dot.
* • For type B numbers we always have $V(b)$ in the offspring, and also $S(b)$ one time out of two, so the average reproductive rate is $1.5$.
* • For type C numbers we always have $V(c)$ and $R_{c}(c)$ in the offspring, and $S(c)$ one time out of two, so the average reproductive rate is $2.5$.
* • For type A numbers we have a more interesting converging series defining the average number of gold dots generated below $V(a)+2$. To easily count the offspring we use the property of the Golden Automaton that it can only output $A_{g}$ numbers through the operations $R_{b}$ or $R_{c}$, and can only output non-Aup numbers through the operations $V$ and $S$. The net offspring below $V(A_{g})+2$ of $A_{g}$ numbers is defined by a branching process:
* – $R_{a}(A_{g})$ outputs either an Aup (in proportion $\frac{1}{2}$) or else, equally, a Bup or a Cup ($\frac{1}{4}$ each);
* – then again, $R_{a}$ applied to an Aup outputs either an Aup ($\frac{1}{2}$) or else, equally, a Bup or a Cup (again, $\frac{1}{4}$ each);
* – how long $R_{a}(A_{g})$ keeps rendering an Aup depends only on $n$, where $A_{g}=G(3^{n}x)$ with $x$ non-B (per Rule 5).
So the average offspring below $V(a)+2$ of all $A_{g}$ numbers that can be written $G(3^{n}x)$ with $x$ non-B is $3n+2.5$.
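The counting claim in the passage above, that there are $x+G^{-1}(x)$ odd numbers between $x$ and $V(x)$ included, can be checked directly. The sketch below reads "between ... included" as excluding $x$ itself and including $V(x)$, and takes $G^{-1}(x)=\frac{x+1}{2}$ as inferred from $G(x)=2x-1$; both readings are assumptions.

```python
def V(x): return 4 * x + 1           # assumed V(x) = 4x + 1
def G_inv(x): return (x + 1) // 2    # inverse of the assumed G(x) = 2x - 1

for x in range(1, 500, 2):
    # odd numbers strictly above x, up to and including V(x) = 4x + 1
    odds = list(range(x + 2, V(x) + 1, 2))
    assert len(odds) == x + G_inv(x)   # count equals x + G^{-1}(x)
```

The count is $\frac{3x+1}{2}=x+\frac{x+1}{2}$, matching the claim under that reading.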
Since one out of two $A_{g}$ can be written $3^{1}x$, a quarter can be written $3^{2}x$, and so on, we have the general formula that $A_{g}$ numbers up to $G(3^{n})$ have an average reproductive rate whose limit we can now determine: $\lim_{n\to\infty}\sum_{i=1}^{n}\frac{1}{2^{i}}\cdot(3i+2.5)=6.5$ And this formula does not even account for the accumulated offspring of all the C and B numbers also colored gold in the process and still below $V(A_{g})$, so the average net reproductive rate of $A_{g}$ numbers converges to a value strictly greater than this limit. As we know that the reproductive rate of black dots below $V(x)+2$ converges to $3.5$, when the average reproductive rate of all type A, B and C numbers generated by the Golden Automaton grows beyond $3.5$ we can be certain it can always finitely finish any row. As we already have $\frac{1.5+2.5+6.5}{3}=3.5$, and since we did not count the offspring of the C and B type numbers solved by $A_{g}$ numbers and still below $V(A_{g})$ in the computation of their birthrate, we can prove that the average birthrate of golden numbers tends to $3.5+\epsilon$ with $\epsilon>0$, which finishes the Peano-arithmetical proof.
## 5 Dedication, Attribution and Acknowledgements
This work was supported by a personal grant to I. Aberkane from Mohammed VI University, Morocco, and by a collaboration between Capgemini, Potsdam University, the Georg Simon Ohm University of Applied Sciences, and Strasbourg University. I. Aberkane created the framework of studying the Collatz dynamic in the coordinate system defined by the intersection of the binary and ternary trees over $2\mathbb{N}+1$, identified and demonstrated the five rules, and predicted they would be isomorphic to a Hydra game over the set of undecided Collatz numbers, which he defined as well, allowing for a final demonstration of the Collatz Conjecture; he also outlined and computed the strategy of using reproductive rates of dots to define a Peano-arithmetical proof.
Contributing equally, E. Sultanow and A. Rahn designed and coded an optimized, highly scalable graphical implementation of the five rules and ran all the simulations, confirming the Hydra game isomorphism and computing the first ever dot plot of the Golden Automaton over odd numbers, which they optimized as well. They were also the first team ever to simulate the five rules to the level achieved in this article, and to confirm their emerging geometric properties on such a scale, including the linearity of their logarithmic scaling and the limit reproductive rates of single dots of the Golden Automaton. I. Aberkane wishes to thank the late Prof. Solomon Feferman and Prof. Alan T. Waterman Jr, along with Prof. Paul Bourgine, Prof. Yves Burnod, Prof. Pierre Collet, Dr. Françoise Cerquetti and Dr. Oleksandra Desiateryk. The authors dedicate this work to the memory of John Horton Conway (1937-2020), Solomon Feferman (1928-2016) and Alan T. Waterman Jr (1918-2008).
## References
* [1] F. Bocart. Inflation propensity of Collatz orbits: a new proof-of-work for blockchain applications. Journal of Risk and Financial Management, 11(4):83, 2018.
* [2] I. Aberkane. On the Syracuse conjecture over the binary tree. HAL, 2017.
* [3] I. Aberkane. Endomorphisms of the Collatz quiver. HAL, 2020.
* [4] C. Koch, E. Sultanow, and S. Cox. Divisions by two in Collatz sequences: A data science approach. International Journal of Pure Mathematical Sciences, 21, 2020.
* [5] E. Sultanow, C. Koch, and S. Cox. Collatz sequences in the light of graph theory. Technical report, Potsdam University, 2020.
* [6] S. Feferman. Is the continuum hypothesis a definite mathematical problem?, 2012.
* [7] D. E. Knuth. Surreal Numbers: How Two Ex-Students Turned on to Pure Mathematics and Found Total Happiness. Addison-Wesley, Boston, MA, 1974.
* [8] R. Goodstein. On the restricted ordinal theorem. Journal of Symbolic Logic, 9(2):33–41, 1944.
* [9] L. Kirby and J. Paris.
Accessible independence results for Peano arithmetic. Bulletin of the London Mathematical Society, 1982.
# Learning Synthetic Environments for Reinforcement Learning with Evolution Strategies Fabio Ferreira1,∗, Thomas Nierhoff1,∗, Frank Hutter1,2 (∗ denotes equal contribution) ###### Abstract This work explores learning agent-agnostic _synthetic environments_ (SEs) for Reinforcement Learning. SEs act as a proxy for target environments and allow agents to be trained more efficiently than when directly trained on the target environment. We formulate this as a bi-level optimization problem and represent an SE as a neural network. By using Natural Evolution Strategies and a population of SE parameter vectors, we train agents in the inner loop on evolving SEs while in the outer loop we use the performance on the target task as a score for meta-updating the SE population. We show empirically that our method is capable of learning SEs for two discrete-action-space tasks (CartPole-v0 and Acrobot-v1) that allow us to train agents more robustly and with up to 60% fewer steps. Not only do we show in experiments with 4000 evaluations that the SEs are robust against hyperparameter changes such as the learning rate, batch sizes and network sizes, we also show that SEs trained with DDQN agents transfer in limited ways to a discrete-action-space version of TD3 and very well to Dueling DDQN. ## Introduction In this paper we consider the intriguing idea of learning a proxy data generating process for Reinforcement Learning (RL) that allows one to train learners more effectively and efficiently on a task, that is, to achieve similar or higher performance more quickly compared to when trained directly on the original data generating process. The relevant literature is multifaceted with works in core-sets (Sener and Savarese 2018), World Models (Ha and Schmidhuber 2018), POET (Wang et al. 2019), Generative Teaching Networks (Such et al.
2020), Generative Playing Networks (Bontrager and Togelius 2020), and Reward Shaping (Zheng, Oh, and Singh 2018), which all constitute contributions addressing this idea. Learning proxy models is a very promising direction, because their higher training and evaluation efficiency allows for new applications in fields such as AutoML (Hutter, Kotthoff, and Vanschoren 2019). Moreover, they can serve as a tool for algorithm and dataset design, since the proxy can yield insights into the importance of passing states that carry a large signal, or identify information and underrepresented dataset classes required for efficient learning (Such et al. 2020). In this work, we focus on learning a data generating process for RL. More precisely, we investigate the question of whether we can learn a Markov Decision Process (MDP), which we refer to as a _synthetic environment_ (SE), that is capable of producing synthetic data that allows for effective and efficient teaching of a target task to an agent (learner) through an informed representation of the target environment. We report results on the (continuous-state and discrete-action-space) CartPole-v0 and Acrobot-v1 target tasks from OpenAI Gym (Brockman et al. 2016), which show that our SEs can train different types of agents to perform well on the target tasks, and also that these agents can be trained more efficiently. We approach this environment generation problem by posing it as a bi-level optimization problem. The inner loop trains the agent on an SE and, since we employ an agent-agnostic method, we can interchangeably adopt standard RL algorithms at will, for example ones based on policy gradient (Sutton et al. 2020) or Q-learning (Watkins 1989). In the outer loop, we assess the agent's performance by evaluating it on the real environment (target task). The collected reward is used as a score to update the SE parameters used in the inner loop.
Here, we use a population of SE parameters and update them with Evolution Strategies (Rechenberg 1973). We employ the same learning strategy as Salimans et al. (2017), which belongs to the family of Natural Evolution Strategies (Wierstra et al. 2008), but instead of optimizing over the agent policy parameter space we optimize over the SE parameter space. We drew inspiration from Generative Teaching Networks (Such et al. 2020). While we similarly use a bi-level optimization scheme to learn a data generator, our approach differs in central aspects. Particularly, we do not use noise vectors as input to our SEs, and we view the posed question directly from the perspective of RL instead of Supervised Learning. Also, we use ES to avoid the need for explicitly computing second-order meta-gradients. While ES has its drawbacks, this is beneficial since explicitly computing second-order meta-gradients can be expensive and unstable (Metz et al. 2019), particularly in the RL setting where the length of the inner loop can be variable and large. ES can furthermore easily be parallelized, and enables our method to be agent-agnostic. Our contributions are as follows: We
* • show that learning synthetic environments as a bi-level optimization problem with NES constitutes a feasible method that is capable of learning SEs for two discrete-action-space Gym tasks, CartPole-v0 and Acrobot-v1.
* • show that SEs trained with DDQN agents are able to transfer to other agents, that is, very well to Dueling DDQN and in limited ways to TD3 (which we adapted to deal with discrete action spaces).
* • provide empirical evidence that SEs, once generated, are efficient and robust in training agents, requiring up to 60% fewer training steps while varying hyperparameters such as the learning rate, batch size and neural network size. Our code and trained SEs are made publicly available (https://github.com/automl/learning_environments).
* • shed some light on what the agents learn from the synthetic environments in a small qualitative study. ## Method ### Problem Statement We consider a Markov Decision Process represented by a 4-tuple $(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R})$ with $\mathcal{S}$ as the set of states, $\mathcal{A}$ as the set of actions, $\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}$ as the transition probabilities between states if a specific action is executed in that state and $\mathcal{R}$ as the immediate rewards. The MDPs we will consider in this work are either human-designed environments $\mathcal{E}_{real}$ (such as Gym environments) or learned synthetic environments $\mathcal{E}_{syn,\psi}$, referred to as _SE_ , represented by a neural network with the parameters $\psi$. Interfacing with the environments is in both cases almost identical: given an input $a\in\mathcal{A}$, the environment outputs a next state $s^{\prime}\in\mathcal{S}$ and a reward $r_{a}(s,s^{\prime})\in\mathcal{R}$. In the case of $\mathcal{E}_{syn,\psi}$, we additionally input the current state $s\in\mathcal{S}$ since we model it to be stateless. The central objective of an RL agent when interacting on an MDP $\mathcal{E}_{real}$ is to find an optimal policy $\pi_{\theta}$ parameterized by $\theta$ that maximizes the expected reward $F(\theta;\mathcal{E}_{real})$. In RL, there exist many different methods to optimize this objective, for example policy gradient (Sutton et al. 2020) or Q-Learning (Watkins 1989). We now consider the following bi-level optimization problem: find the parameters $\psi^{*}$, such that the policy $\pi_{\theta}$ found by an agent parameterized by $\theta$ that trains on $\mathcal{E}_{syn,\psi^{*}}$ will achieve the highest reward on a target environment $\mathcal{E}_{real}$. Formally that is: $\displaystyle\psi^{*}=\underset{\psi}{\text{arg max}}$ $\displaystyle F(\theta^{*}(\psi);\mathcal{E}_{real})$ (1) s.t. 
$\displaystyle\theta^{*}(\psi)=\underset{\theta}{\text{arg max}}$ $\displaystyle F(\theta;\mathcal{E}_{syn,\psi}).$ We can use standard RL algorithms for optimizing the agents on the SE in the inner loop. Although gradient-based optimization methods can be applied in the outer loop, we chose Natural Evolution Strategies (NES) over such methods to allow the optimization to be independent of the choice of the agent in the inner loop and to avoid computing potentially expensive, unstable, and agent- specific meta-gradients. Additional advantages of ES are that it is better suited for long episodes (which often occur in RL), sparse or delayed rewards (Salimans et al. 2017), and parallelization. ### Algorithm Based on the formulated problem statement, let us now explain our method. The overall NES scheme is adopted from Salimans et al. (2017) and depicted in Algorithm 1. We instantiate the search distribution similarly as an isotropic multivariate Gaussian with mean 0 and a covariance $\sigma^{2}I$ yielding the score function estimator $\frac{1}{\sigma}\mathbb{E}_{\epsilon\sim N(0,I)}\\{F(\psi+\sigma\epsilon)\epsilon\\}$. The main difference to Salimans et al. (2017) is that, while they maintain a population over perturbed agent parameter vectors, our population consists of perturbed SE parameter vectors. In contrast to their approach, our NES approach also involves two optimizations, namely that of the agent and the SE parameters instead of only the agent parameters. Our algorithm first stochastically perturbs each population member according to the search distribution resulting in $\psi_{i}$. Then, a new randomly initialized agent is trained in _TrainAgent_ on the SE parameterized by $\psi_{i}$ for $n_{e}$ episodes. The trained agent with fixed parameters is then evaluated on the real environment in _EvaluateAgent_ , yielding the average cumulative reward across 10 test episodes which we use as a score $F_{\psi,i}$ in the above score function estimator. 
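A minimal sketch of this perturb-train-evaluate loop is given below. Here `train_and_evaluate` is a stand-in for _TrainAgent_ followed by _EvaluateAgent_ on the real environment, and the update follows the score-function estimator above; we subtract the population mean as a simple variance-reduction baseline, a common NES practice that differs slightly from the paper's own score transformation. All names and hyperparameter values are illustrative.

```python
import numpy as np

def nes_step(psi, train_and_evaluate, sigma=0.05, alpha=0.01, n_pop=16,
             rng=np.random.default_rng(0)):
    """One outer-loop iteration: perturb psi, score each member, meta-update."""
    eps = rng.standard_normal((n_pop, psi.size))             # epsilon_i ~ N(0, I)
    scores = np.array([train_and_evaluate(psi + sigma * e) for e in eps])
    scores = scores - scores.mean()                          # baseline (an addition)
    grad = (scores[:, None] * eps).sum(axis=0) / (n_pop * sigma)
    return psi + alpha * grad                                # UpdateSE step

# Toy check with a known objective: maximizing F(psi) = -||psi||^2
# should drive psi toward the origin.
psi = np.ones(4)
for _ in range(300):
    psi = nes_step(psi, lambda p: -np.sum(p ** 2))
assert np.linalg.norm(psi) < 0.2
```

In the actual method the expensive part is of course `train_and_evaluate`, which trains a fresh agent on the SE parameterized by each perturbed vector; the population members are therefore evaluated in parallel.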
Finally, we update $\psi$ in _UpdateSE_ with a stochastic gradient estimate based on all member scores via the weighted sum $\psi\leftarrow\psi+\alpha\frac{1}{n_{p}\sigma}\sum^{n_{p}}_{i}F_{i}\epsilon_{i}$. We repeat this process $n_{o}$ times but perform manual early stopping when a resulting SE is capable of training agents that consistently solve the target task. In practice, we use a parallel version of the algorithm, with one worker for each member of the population at the same time.

Algorithm 1: NES for Learning SEs
Input: initial synthetic environment parameters $\psi$, real environment $\mathcal{E}_{real}$, $\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2}I)$, number of episodes $n_{e}$, population size $n_{p}$
repeat
  for each member of the population $i=1,2,\ldots,n_{p}$ do
    $\psi_{i}=\psi+\epsilon_{i}$
    for $n=1,2,\ldots,n_{e}$ do
      $\theta_{i,n}=$ TrainAgent($\theta_{i,n-1},\mathcal{E}_{syn,\psi_{i}}$)
    end for
    $F_{\psi,i}=$ EvaluateAgent($\theta_{i,n_{e}}$, $\mathcal{E}_{real}$)
  end for
  $\psi\leftarrow$ UpdateSE($\psi$, $\{\epsilon_{i}\}_{i}$, $\{F_{\psi,i}\}_{i}$)
until $n_{o}$ steps

### Heuristics for Agent Training and Evaluation
Determining the number of required training episodes $n_{e}$ on an SE is challenging, as the rewards of the SE may not provide information about the current agent's performance on the real environment. Thus, we use a heuristic to early-stop training once the agent's training performance on the _SE_ has converged. Let us refer to the cumulative reward of the $k$-th training episode as $C_{k}$. The two values $\bar{C}_{d}$ and $\bar{C}_{2d}$ maintain non-overlapping moving averages of the cumulative rewards over the last $d$ and $2d$ episodes, respectively, up to episode $k$. Now, if $\frac{|\bar{C}_{d}-\bar{C}_{2d}|}{|\bar{C}_{2d}|}\leq C_{diff}$, the training is stopped. In all experiments we choose $d=10$ and $C_{diff}=0.01$.
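The early-stopping heuristic can be sketched as follows. The two windows are read here as the last $d$ episodes and the $d$ episodes before them (one plausible reading of "non-overlapping"), and the small denominator guard is an addition for numerical safety:

```python
def should_stop(rewards, d=10, c_diff=0.01):
    """Stop once the relative change between the two d-episode windows of
    cumulative rewards falls to c_diff or below."""
    if len(rewards) < 2 * d:
        return False                               # not enough episodes yet
    c_d = sum(rewards[-d:]) / d                    # mean over the last d episodes
    c_2d = sum(rewards[-2 * d:-d]) / d             # mean over the d before that
    return abs(c_d - c_2d) / max(abs(c_2d), 1e-8) <= c_diff

assert should_stop([100.0] * 20)           # converged: stop training
assert not should_stop(list(range(20)))    # still improving: keep training
assert not should_stop([100.0] * 5)        # fewer than 2d episodes: keep going
```

With the paper's defaults ($d=10$, $C_{diff}=0.01$), training stops once twenty episodes have been seen and the two window means differ by at most one percent.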
Training agents on _real environments_ is stopped when the average cumulative reward across the last $d$ test episodes exceeds the solved-reward threshold. If no heuristic is triggered, we train for at most 1000 episodes on both environment types. Independent of which environment ($\mathcal{E}_{real}$ or $\mathcal{E}_{syn}$) we train an agent on, the process to assess the actual agent performance is the same: we run the agent on 10 test episodes from $\mathcal{E}_{real}$ for a fixed number of task-specific steps (i.e. 200 on CartPole and 500 on Acrobot) and use the cumulative reward of each episode as a performance proxy and as evaluation data points for visualization.

### Agents, Hyperparameters, and Sampling

While our agent-agnostic method in principle allows training arbitrary RL agents, in this work we always use DDQN (van Hasselt, Guez, and Silver 2016) for the training of our agents in Algorithm 1. Instead, we study the robustness of SEs to varying agent hyperparameters and the transferability of the SEs to train other agents. For studying transferability, we use Dueling DDQN (Wang et al. 2016) and TD3 (Fujimoto, Hoof, and Meger 2018). The latter is chosen because it does not solely rely on (deep) Q-Learning, as it belongs to the actor-critic and policy gradient family. However, since TD3 is naturally designed to deal with continuous action spaces but both our tasks employ a discrete action space, we equip the actor with a Gumbel-Softmax distribution (Jang, Gu, and Poole 2017) with a learned temperature, which enables us to operate on discrete actions while maintaining differentiability. Due to the known sensitivity of RL to hyperparameters (HPs), we apply hyperparameter optimization for the execution of Algorithm 1. In addition to the inner and outer loop of our algorithm, we use another outer loop to optimize some of the agent and NES HPs with BOHB (Falkner, Klein, and Hutter 2018) to identify stable HPs.
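A minimal NumPy sketch of the Gumbel-Softmax sampling used for the discrete-action TD3 actor; the real agent applies this inside the deep learning framework with a learned temperature and straight-through gradients, and the function name here is ours.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Relaxed one-hot sample: softmax((logits + g) / tau), g ~ Gumbel(0, 1).
    As tau -> 0 the sample approaches a one-hot vector while the expression
    stays differentiable with respect to the logits."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-10, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))              # Gumbel(0, 1) noise
    z = (np.asarray(logits) + g) / tau
    z = z - z.max()                      # numerical stability
    y = np.exp(z)
    return y / y.sum()

y = gumbel_softmax_sample(np.array([2.0, 0.5, -1.0]), tau=0.5,
                          rng=np.random.default_rng(0))
action = int(np.argmax(y))               # discrete action passed to the env
```

The relaxed vector `y` carries the gradient during training, while the `argmax` yields the hard action executed in the environment.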
The optimized HPs are reported in Table 2 in the appendix. We did not optimize some of the HPs that would negatively affect runtime (e.g. population size, number of train and test episodes; see Table 3 in the appendix). Resulting from this optimization, we use mirrored sampling (Brockhoff et al. 2011) similar to Salimans et al. (2017) and a score transformation applied to $F$ that considers only members above the average of the population scores and normalizes these to the range $[0,1]$. In many of our experiments we draw a comparison between varying HPs (denoted as “HP: varying”) and keeping HPs fixed (denoted as “HP: fixed”). For the latter we use the default HPs specified in Table 4 in the appendix. In the case of varying HPs, we focus on a subset of the default HPs given in Table 1 and randomly sample from the specified ranges.

agent hyperparameter | value range | log scale
---|---|---
learning rate | $10^{-3}/3$ - $10^{-3}*3$ | True
batch size | $42$ - $384$ | True
hidden size | $42$ - $384$ | True
hidden layer | $1$ - $3$ | False

Table 1: Ranges used for sampling random agent HP configurations for variation during NES runs and when training on SEs.

## Experiments

After identifying stable DDQN and NES hyperparameters (HPs), we ran Algorithm 1 in parallel with 16 workers for 200 NES outer loop iterations. Each worker had one AMD EPYC 7502 CPU core at its disposal, resulting in an overall runtime of 6-7h on Acrobot and 5-6h on CartPole for 200 NES outer loop iterations. For reference, we note that Salimans et al. (2017) used 80 workers, each having access to 18 cores, for solving the Humanoid task. Let us now consider Figure 1. It becomes evident that the proposed method with the given resources allows us to identify SEs that can teach agents to solve the CartPole and Acrobot tasks, respectively.
Each thin line in the plot corresponds to the average of 16 worker evaluation scores given by _EvaluateAgent_ in Algorithm 1 as a function of the NES outer loop iteration. We repeat this for 40 separate NES optimization runs and visualize the average across the thin lines as a thick line for each task. We note that we only show a random subset of the 40 thin lines for better visualization and randomly sample the seeds at all times in this work. We believe that the stochasticity introduced by this may lead to the variance visible in the plot when searching for good SEs. Both the stochasticity of natural gradients and the sensitivity of RL agents to seeds and parameter initializations may additionally contribute to this effect. Notice that it is often sufficient to run approximately 50 NES outer loops in order to find SEs that solve a task, as can be seen in Figure 1. Besides early-stopping, other regularization techniques (e.g. regularizing the SE parameters) can be applied to address the overfitting which we likely observe late in the training on the Acrobot task. Figure 1: Results from 40 different NES runs with 16 workers each (using random seeds) show that our method is able to identify SEs that allow agents to solve the target tasks. Each thin line corresponds to the average of 16 worker evaluation scores returned by _EvaluateAgent_ in our algorithm as a function of the NES outer loop iterations.

### Evaluation Performance of SE-trained Agents

To answer whether the proposed method can learn SEs that are capable of training agents effectively and efficiently, we conducted the following experiment. First, we generated two sets, each consisting of 40 SEs from individual NES runs, all capable of solving the CartPole task (i.e. a cumulative reward of $\geq 195$). For one of the sets we generated 40 SEs where we varied the HPs by sampling configurations of a subset of the DDQN HPs according to Table 1 before running the inner loop.
For the other set of 40 SEs we did not vary the HPs and used our default DDQN HPs (see Table 4 in the appendix). Second, on each SE of both of the sets we trained 10 DDQN agents with again varying HPs (again in the ranges specified in Table 1). Then, after training, we evaluated each agent on the target task across 10 test episodes, resulting in 4000 evaluations per set overall. Lastly, we used the cumulative rewards from the test episodes for generating the violin plots seen in Figure 2. The center violin corresponds to the set for which we trained the SEs with varying agent HPs, and the right violin corresponds to the set for which we did not vary the HPs. The left violin represents our baseline, which shows 4000 evaluations on CartPole without any involvement of SEs. Note that in all three cases we always vary the agent HPs at test time, and the “HP: fixed” / “HP: varying” in the Figure merely indicates whether we varied the agent HPs _during SE training_. We also report the average reward across the 4000 evaluations (top) and the average number of episodes and training steps required until the heuristics for stopping training (see Method Section) are triggered (bottom). We can see that the DDQN-trained SEs are consistently able to train DDQN agents using $\sim$60% fewer steps on average while being more stable (center violin, smaller std. dev.) than the baseline (left violin), and the SEs also show little sensitivity to HP variations. Training on SEs without varying the agent HPs during the NES optimization degrades the performance noticeably, potentially due to overfitting of the SEs to the specific agent HPs.

### Transferability of Synthetic Environments

Obviously, it is difficult to argue for a speed improvement when many environment observations have gone into training an SE. This is why in this experiment, we investigate whether DDQN-trained SEs are capable of efficiently training other agents as well.
To do this, we reuse the two sets from the previous experiment that consist of 40 DDQN-trained SEs, but this time we train Dueling DDQN agents on the SEs, again with varying HPs according to Table 1. From Figure 3 we conclude that the transfer to the Dueling DDQN agent succeeds and it facilitates a $\sim$50% faster and noticeably more stable training on average. Again, presumably due to overfitting, not varying the DDQN agent HPs during our NES runs negatively affects the Dueling DDQN performance (right violin). Figure 2: We show the distributions of the average cumulative rewards collected by DDQN agents on the CartPole task based on 4000 evaluations per violin and DDQN-trained SEs. We depict three different agent training settings: (left) agents trained on real environments with varying agent HPs, (center) on DDQN-trained SEs when varying agent HPs during NES runs, (right) on DDQN-trained SEs where the agent HPs were fixed during training of the SEs. The DDQN-trained SEs consistently train DDQN agents up to $\sim$60% faster and more stably (mean train steps and std. dev. of center violin) compared to the baseline (left violin), and the agents show little sensitivity to HP variations. Figure 3: Visualization of transferring from DDQN to Dueling DDQN agents on CartPole based on 4000 evaluations per violin. Figure 4: Visualization of transferring from DDQN to discrete-action-space TD3 on the CartPole task. Figure 5: Visualization of transferring from DDQN to Dueling DDQN on the Acrobot task. Figure 6: Histograms of approximate next state $s^{\prime}$ and reward $r$ distributions produced by 10 DDQN agents when trained on an SE (blue) and when afterwards tested for 10 episodes on a real environment (orange) for each task (top: CartPole, bottom: Acrobot). In green we depict the SE responses when the SE is fed with state-action pairs that the agent used during testing on the real environment. 
For CartPole, we show all state dimensions and for Acrobot we only show the joint angles. We conducted another experiment to analyze the transfer to a second agent which is not solely based on (deep) Q-learning. As described above, we chose the actor-critic-based TD3 agent that we adapted to handle discrete action spaces while maintaining differentiability. We can see in Figure 4 that, while the baseline performance (left violin) indicates that our implementation seems to be correct, the performance decreases in the center violin, showing a limited transferability compared to the very good transfer to Dueling DDQN. We believe this may be due to the different learning behavior of actor-critic methods compared to learning with DDQN. We believe this result may indicate that _TrainAgent_ requires even more variation, i.e. instead of varying the HPs and seeds, we may additionally vary the agent types within an NES run. Another way to address this might be to increase the number of evaluations of the same perturbation and to add additional workers. Nevertheless, we also observe that in some cases the SEs are capable of training discrete TD3 agents successfully, and from the low std. dev. (14.29 vs. 198.40) of the average number of episodes we can infer that, when the training succeeds, it remains efficient. All in all, this calls for a deeper analysis of the transfer in the future, for example by identifying and analysing well- and ill-suited SEs, as we were able to observe that well-suited SEs tended to be consistent in their aptitude for training agents. As can be seen in Figure 5, transferring from DDQN to Dueling DDQN is also possible on the Acrobot task (a cumulative reward of -100 solves the task). In this case, the policies learned on the SE (center violin) are in fact substantially _better_ on average than those learned on the real environment (left violin). Also, the SEs facilitate a $\sim$38% faster training on average compared to the baseline.
We note that the default DDQN HPs found for the CartPole task were reused for this task (as well as the HPs and ranges for variation from Table 1). ### Analysing Synthetic Environment Behavior Is it possible to shed some light on the efficacy of the learned SEs? Notice that we are operating on tasks with small state spaces which allow a qualitative visual study. This motivates the following experiment in which we visualized an approximation of the state and reward distributions generated by agents trained on SEs and real environments. First, we randomly chose one trained SE for each task (CartPole and Acrobot). Then, on each SE we trained 10 DDQN agents with default HPs and random seeds until the stopping heuristic specified in the Method Section was triggered (similar to the other experiments). During their training, we logged all $(s,a,r,s^{\prime})$ tuples. Second, we evaluated the SE-trained DDQN agents on the respective real environments for 10 test episodes each and again logged the $(s,a,r,s^{\prime})$ tuples. Lastly, we visualized the histograms of the collected next states and rewards for each task and color-coded them according to their origin (SE or real environment). The result is depicted in Figure 6 for CartPole (top) and Acrobot (bottom). We show all four CartPole state dimensions, but only four of the six Acrobot state dimensions (only the $\sin$ and $\cos$ joint angles) for reasons of brevity. All plots show a strong distributional shift between the SE and the real environment, indicating that the agent is tested on states and provided with rewards it has barely seen during training, yet it is able to solve the environment (average cumulative rewards: 199.28 on CartPole and -90.2 on Acrobot). Furthermore, it can be observed that some of the synthetic _state_ distributions are narrower than the real counterparts. 
We point out that the synthetic _reward_ distribution is wider than the real one, indicating that the sparse reward distribution becomes dense as we get a reward for each action taken. We hypothesize that the SEs produce an informed representation of the target environment by narrowing the _state_ distributions to bias agents towards helpful (i.e. carrying strong signal) and relevant states. The histograms depicted in green were generated to see whether the distribution shifts are caused by the agent or the SE. They show the SE responses when fed with real environment data based on the logged state-action pairs that the agents have seen during testing. For most of the state dimensions, we observe that the green distributions align better with the blue than with the orange ones. Regardless of the origin (SE or real) of the current state and action, the next state and reward of the SE seem again to converge to values seen during training. Thus, we conclude that the shift is more likely generated by the SE than by the agent. While many questions remain unanswered in these preliminary results, they may offer the reader partial explanations for the efficacy of SEs, for example, by understanding them as “guiding systems” for agents.

### Limitations

Aside from its advantages, our approach also has limitations. As can be seen in Salimans et al. (2017), NES methods strongly depend on the number of workers and require a lot of parallel computational resources. We observed this limitation in preliminary experiments when we applied our method to more complex environments, such as the HalfCheetah or Humanoid task. Unsurprisingly, 16 workers were insufficient to learn SEs able to solve them. Moreover, we assume fully observable, Markovian states; partial observability may add further complexity to the optimization.

## Conclusion

We proposed a method that learns synthetic environments that act as proxies for RL target task environments.
By analyzing this method for the two discrete-action-space target tasks CartPole and Acrobot, we provided empirical evidence of their efficacy in experiments with 4000 evaluations and under varying hyperparameters and agents. Our results show that it is possible to significantly reduce the number of training steps while still achieving the same target task performance. Moreover, the results illustrate that the learned SEs are capable of transferring well (Dueling DDQN) or in limited ways (TD3) to new agents not seen during training. While SEs still have to be better understood, we see them as a useful tool in various applications. For example, as agent-agnostic, cheap-to-run environments for AutoML, as a tool for agent and task analysis or as models for efficient agent pre-training. We believe our promising results motivate future research in which we want to investigate how the method performs on more complex environments and better understand the trade-off between their complexity and the required computational resources. ## Acknowledgements The authors acknowledge funding by Robert Bosch GmbH. ## References * Bontrager and Togelius (2020) Bontrager, P.; and Togelius, J. 2020. Fully Differentiable Procedural Content Generation through Generative Playing Networks. _arXiv:2002.05259_ . * Brockhoff et al. (2011) Brockhoff, D.; Auger, A.; Hansen, N.; Arnold, D. V.; and Hohm, T. 2011. Mirrored sampling and sequential selection for evolution strategies. In _Proc. of PPSN’10_. * Brockman et al. (2016) Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. OpenAI Gym. _arXiv:1606.01540_ . * Falkner, Klein, and Hutter (2018) Falkner, S.; Klein, A.; and Hutter, F. 2018. BOHB: Robust and Efficient Hyperparameter Optimization at Scale. In _Proc. of ICML’18_ , 1437–1446. * Fujimoto, Hoof, and Meger (2018) Fujimoto, S.; Hoof, H.; and Meger, D. 2018. Addressing Function Approximation Error in Actor-Critic Methods. In _Proc. 
of ICML’18_ , 1582–1591. * Ha and Schmidhuber (2018) Ha, D.; and Schmidhuber, J. 2018. World Models. _arXiv:1803.10122_ . * Hutter, Kotthoff, and Vanschoren (2019) Hutter, F.; Kotthoff, L.; and Vanschoren, J., eds. 2019. _Automated Machine Learning: Methods, Systems, Challenges_. Springer. * Jang, Gu, and Poole (2017) Jang, E.; Gu, S.; and Poole, B. 2017. Categorical Reparameterization with Gumbel-Softmax. In _Proc. of ICLR’17_. * Metz et al. (2019) Metz, L.; Maheswaranathan, N.; Nixon, J.; Freeman, D.; and Sohl-Dickstein, J. 2019. Understanding and correcting pathologies in the training of learned optimizers. In _Proc. of ICML’19_. * Rechenberg (1973) Rechenberg, I. 1973. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzbog, Stuttgart. * Salimans et al. (2017) Salimans, T.; Ho, J.; Chen, X.; and Sutskever, I. 2017. Evolution Strategies as a Scalable Alternative to Reinforcement Learning. _arXiv:1703.03864_ . * Sener and Savarese (2018) Sener, O.; and Savarese, S. 2018. Active Learning for Convolutional Neural Networks: A Core-Set Approach. In _Proc. of ICLR’18_. * Such et al. (2020) Such, F. P.; Rawal, A.; Lehman, J.; Stanley, K. O.; and Clune, J. 2020. Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data. In _Proc. of ICML’20_. * Sutton et al. (2020) Sutton, R. S.; McAllester, D.; Singh, S.; and Mansour, Y. 2020. Policy Gradient Methods for Reinforcement Learning with Function Approximation. In _Proc. of NeurIPS’20_ , 1057–1063. * van Hasselt, Guez, and Silver (2016) van Hasselt, H.; Guez, A.; and Silver, D. 2016. Deep Reinforcement Learning with Double Q-Learning. In _Proc. of AAAI’16_ , 2094–2100. * Wang et al. (2019) Wang, R.; Lehman, J.; Clune, J.; and Stanley, K. O. 2019. Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions.
_arXiv:1901.01753_ . * Wang et al. (2016) Wang, Z.; Schaul, T.; Hessel, M.; van Hasselt, H.; Lanctot, M.; and de Freitas, N. 2016. Dueling Network Architectures for Deep Reinforcement Learning. In _Proc. of ICML’16_ , 1995–2003. * Watkins (1989) Watkins, C. J. C. H. 1989. _Learning from delayed rewards_. Ph.D. thesis, King’s College. * Wierstra et al. (2008) Wierstra, D.; Schaul, T.; Peters, J.; and Schmidhuber, J. 2008. Natural evolution strategies. In _Proc. of CEC’08_ , 3381–3387. * Zheng, Oh, and Singh (2018) Zheng, Z.; Oh, J.; and Singh, S. 2018. On learning intrinsic rewards for policy gradient methods. In _Proc. of NeurIPS’18_ , 4644–4654.

## Appendix: Agent and NES Hyperparameters

The following tables provide an overview of the hyperparameters used in our experiments.

hyperparameter | symbol | CartPole-v0 | Acrobot-v1 | value range | log. scale
---|---|---|---|---|---
NES step size | $\alpha$ | $0.148$ | $0.727$ | $0.1-1$ | True
NES std. dev. | $\sigma$ | $0.0124$ | $0.0114$ | $0.01-1$ | True
NES mirrored sampling | - | True | True | False/True | -
NES score transformation | - | better avg. | better avg. | (rank transform, linear transform, etc.) | -
NES SE number of hidden layers | - | $1$ | $1$ | $1-2$ | False
NES SE hidden layer size | - | $83$ | $167$ | $48-192$ | True
NES SE activation function | - | LReLU | PReLU | Tanh/ReLU/LReLU/PRelu | -
DDQN initial episodes | - | $1$ | $20$ | $1-20$ | True
DDQN batch size | - | $199$ | $149$ | $64-256$ | False
DDQN learning rate | - | $0.000304$ | $0.00222$ | $0.0001-0.005$ | True
DDQN target network update rate | - | $0.00848$ | $0.0209$ | $0.005-0.05$ | True
DDQN discount factor | - | $0.988$ | $0.991$ | $0.9-0.999$ | True (inv.)
DDQN initial epsilon | - | $0.809$ | $0.904$ | $0.8-1$ | True
DDQN minimal epsilon | - | $0.0371$ | $0.0471$ | $0.005-0.05$ | True
DDQN epsilon decay factor | - | $0.961$ | $0.899$ | $0.8-0.99$ | True (inv.)
DDQN number of hidden layers | - | $1$ | $1$ | $1-2$ | False
DDQN hidden layer size | - | $57$ | $112$ | $48-192$ | True
DDQN activation function | - | Tanh | LReLU | Tanh/ReLU/LReLU/PRelu | -

Table 2: Optimized hyperparameters for experiment depicted in Figure 1

hyperparameter | symbol | CartPole-v0 | Acrobot-v1
---|---|---|---
NES number of outer loops | $n_{o}$ | $200$ | $200$
NES max. number of train episodes | $n_{e}$ | $1000$ | $1000$
NES number of test episodes | $n_{te}$ | $10$ | $10$
NES population size | $n_{p}$ | $16$ | $16$
DDQN replay buffer size | - | $100000$ | $100000$
DDQN early out number | $d$ | $10$ | $10$
DDQN early out difference | $C_{diff}$ | $0.01$ | $0.01$
env. max. episode length | - | $200$ | $500$
env. solved reward | - | $195$ | $-100$

Table 3: Fixed hyperparameters for experiment depicted in Figure 1

NES, SE, DDQN, Dueling DDQN & TD3 hyperparameter | value
---|---
NES step size | $1$
NES std. dev. | $0.05$
NES mirrored sampling | True
NES score transformation | better avg.
NES SE number of hidden layers | $1$
NES SE hidden layer size | $128$
NES SE activation function | LReLU
initial episodes | $10$
batch size | $128$
learning rate (DDQN & D.DDQN / TD3) | $0.001$ / $0.0005$
target network update rate | $0.01$
discount factor | $0.99$
epsilon decay factor | $0.9$
number of hidden layers (agents) | $2$
hidden layer size (agents) | $128$
activation function (DDQN & D. DDQN / TD3) | ReLU / Tanh
replay buffer size | $100000$
max. train episodes | $1000$
Gumbel Softmax start temperature / one-hot-encoded actions (TD3) | $1$ / False

Table 4: Default HPs for experiments used in Fig. 2, 3, 4, 5, and 6. _early out num_ and _early out diff._ are equivalent to Table 3.
# Hafnian of two-parameter matrices Dmitry Efimov Institute of Physics and Mathematics, Komi Science Centre UrD RAS, Syktyvkar, Russia e-mail<EMAIL_ADDRESS> ###### Abstract The concept of the hafnian first appeared in the works on quantum field theory by E. R. Caianiello. However, it also has an important combinatorial property: the hafnian of the adjacency matrix of an undirected weighted graph is equal to the total sum of the weights of perfect matchings in this graph. In general, the use of the hafnian is limited by the complexity of its computation. In this paper, we present an efficient method for the exact calculation of the hafnian of two-parameter matrices. In terms of graphs, we count the total sum of the weights of perfect matchings in graphs whose edge weights take only two values. This method is based on the formula expressing the hafnian of a sum of two matrices through the product of the hafnians of their submatrices. A necessary condition for the application of this method is the ability to count the number of $k$-edge matchings in certain graphs. We consider two special cases in detail using a Toeplitz matrix as the two-parameter matrix. As an example, we propose a new interpretation of some of the sequences from the On-Line Encyclopedia of Integer Sequences and then provide new analytical formulas to count the number of certain linear chord diagrams. ## Introduction Let $A=(a_{ij})$ be a symmetric matrix of even order $n$ over a commutative associative ring. The hafnian of $A$ is defined as $\textrm{Hf}(A)=\sum_{(i_{1}i_{2}|\dots|i_{n-1}i_{n})}a_{i_{1}i_{2}}\cdots a_{i_{n-1}i_{n}},$ where the sum ranges over all unordered partitions of the set $\\{1,2,\dots,n\\}$ into unordered pairs $(i_{1}i_{2})$, $\dots$, $(i_{n-1}i_{n})$. For example, if $n=4$, then $\textrm{Hf}(A)=a_{12}a_{34}+a_{13}a_{24}+a_{14}a_{23}$. The hafnian of the empty matrix is defined to be $1$.
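The definition translates directly into a recursion that pairs the smallest remaining index with every other remaining index; this brute-force sketch (ours, not part of the paper) is only practical for small $n$, but it is convenient for checking the formulas derived below.

```python
def hafnian(A):
    """Hafnian of a symmetric matrix A (list of lists) of even order n:
    the sum, over all partitions of {0, ..., n-1} into unordered pairs,
    of the products a_{i1 i2} * ... * a_{i(n-1) in}."""
    def rec(rest):
        if not rest:
            return 1                     # the hafnian of the empty matrix is 1
        i, rest = rest[0], rest[1:]
        # pair the smallest remaining index i with each remaining index j
        return sum(A[i][j] * rec(rest[:k] + rest[k + 1:])
                   for k, j in enumerate(rest))
    return rec(list(range(len(A))))

# For n = 4 this reproduces a12*a34 + a13*a24 + a14*a23:
A = [[0, 1, 2, 3],
     [1, 0, 4, 5],
     [2, 4, 0, 6],
     [3, 5, 6, 0]]
# hafnian(A) = 1*6 + 2*5 + 3*4 = 28
```

The recursion explores $(n-1)!! = (n-1)(n-3)\cdots 1$ pair partitions, which matches the exponential cost discussed next.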
Note that the elements of the main diagonal are not included in the definition of the hafnian. For the sake of convenience, we assume that all matrices under consideration have a zero main diagonal. A $k$-edge matching in a graph is a set of its $k$ pairwise nonadjacent edges. An $m$-edge matching in a graph with $2m$ vertices is called a perfect matching. If a graph is weighted, then the weight of the matching is a product of the weights of the edges included in this matching. The hafnian has a useful combinatorial property related to an important problem in graph theory and its applications: if $M$ is the adjacency matrix of an undirected weighted graph with an even number of vertices, then $\textrm{Hf}(M)$ equals the total sum of the weights of the perfect matchings in the graph. Unfortunately, the widespread use of the hafnian is limited due to the complexity of its computation in general. For example, one of the fastest exact algorithms to compute the hafnian of an arbitrary complex $n\times n$ matrix runs in $O(n^{3}2^{n/2})$ time, and, as the authors show, it seems to be close to an optimal one [1]. Since computing the hafnian is so costly in general, an important problem is the discovery of efficient analytical formulas expressing the hafnian for special classes of matrices. Let $T_{n}$ be a symmetric $(0,1)$-matrix of order $n$, and let $a$ and $b$ be elements of a ring $R$. We denote by $T_{n}(a,b)$ the symmetric matrix of order $n$ obtained from $T_{n}$ by replacing all instances of $1$ by $a$ and all zero elements outside the main diagonal by $b$.
For example (dots denote zeros), $T_{4}=\left(\begin{array}[]{cccc}\cdot&1&\cdot&1\\\ 1&\cdot&1&\cdot\\\ \cdot&1&\cdot&1\\\ 1&\cdot&1&\cdot\end{array}\right)\ \ \Longrightarrow\ \ T_{4}(a,b)=\left(\begin{array}[]{cccc}\cdot&a&b&a\\\ a&\cdot&a&b\\\ b&a&\cdot&a\\\ a&b&a&\cdot\end{array}\right).$ We can say that $T_{n}(a,b)$ is a two-parameter matrix, and $T_{n}$ is the template for $T_{n}(a,b)$. Note that $T_{n}(1,0)=T_{n}$. In our work, we present an effective method for the exact computation of the hafnian of matrices $T_{n}(a,b)$. In terms of graphs, we count the total sum of weights of perfect matchings in two-parameter weighted graphs (i.e., weights of the edges are only $a$ and $b$). This method is based on the formula expressing the hafnian of a sum of two matrices through sums of products of the hafnians of their submatrices, and it is also closely linked to the combinatorial problem of counting the number of $k$-edge matchings in graphs. In theoretical physics, this problem is known as the monomer-dimer problem [2]. Figure 1: A binary tree and its corresponding arc diagram Recall that an arc diagram is a graph presentation method in which all the vertices are located along a line in the plane, whereas all edges are drawn as arcs (Figure 1). In this work, it will be convenient for us to represent graphs in the form of arc diagrams. Perfect matchings of arc diagrams are often called linear chord diagrams [3, 4]. ## 1 Hafnian of two-parameter matrices To begin with, consider two properties of the hafnian. The first one is quite obvious. ###### Proposition 1. Let $A$ be a symmetric matrix of even order $n$ over a commutative associative ring $R$, and $c\in R$. Then $\mathrm{Hf}(cA)=c^{n/2}\mathrm{Hf}(A).$ (1) Let $Q_{k,n}$ denote the set of all unordered $k$-element subsets of $\\{1,2,\dots,n\\}$. Let $A$ be a matrix of order $n$ and $\alpha\in Q_{k,n}$.
Moreover, $A[\alpha]$ denotes the submatrix of $A$ formed by the rows and columns of $A$ with numbers in $\alpha$, and $A\\{\alpha\\}$ denotes the submatrix of $A$ formed from $A$ by removing the rows and columns with numbers in $\alpha$. The following property was proved in [5]: ###### Proposition 2. Let $A$ and $B$ be symmetric matrices of even order $n$. Then $\mathrm{Hf}(A+B)=\sum_{k=0}^{n/2}\sum_{\alpha\in Q_{2k,n}}\mathrm{Hf}(A[\alpha])\mathrm{Hf}(B\\{\alpha\\}).$ (2) $J_{n}(b)$ denotes a matrix of order $n$ whose elements outside the main diagonal are equal to $b$. From the definition of the hafnian, it follows that $\mathrm{Hf}(J_{2m}(b))=b^{m}\frac{(2m)!}{m!2^{m}}.$ (3) Since $T_{2m}(a,b)=J_{2m}(b)+T_{2m}(a-b,0)$, using formulas (1), (2), and (3), we can write the following: $\begin{split}&\mathrm{Hf}(T_{2m}(a,b))=\mathrm{Hf}(J_{2m}(b)+T_{2m}(a-b,0))=\\\ &=\sum_{k=0}^{m}\sum_{\alpha\in Q_{2k,2m}}\mathrm{Hf}(J_{2m}(b)[\alpha])\mathrm{Hf}(T_{2m}(a-b,0)\\{\alpha\\})=\\\ &=\sum_{k=0}^{m}(a-b)^{m-k}b^{k}\frac{(2k)!}{k!2^{k}}\sum_{\alpha\in Q_{2k,2m}}\mathrm{Hf}(T_{2m}\\{\alpha\\}).\end{split}$ (4) Here, we use the fact that the matrix $J_{2m}(b)[\alpha]$ has the same form as the initial matrix $J_{2m}(b)$, that is, $J_{2m}(b)[\alpha]$ is a matrix of order $2k$ whose elements outside the main diagonal are equal to $b$. If $M$ is a symmetric nonnegative integer matrix, then $\Gamma(M)$ denotes a multigraph with the adjacency matrix $M$. If $\alpha\in Q_{2k,2m}$, then the hafnian $\mathrm{Hf}(T_{2m}\\{\alpha\\})$ equals the cardinality of a set of $(m-k)$-edge matchings in the graph $\Gamma(T_{2m})$; moreover, such sets do not intersect for different $\alpha$, and their union is the set of all $(m-k)$-edge matchings of the graph $\Gamma(T_{2m})$. Given a graph $\Gamma$, let $\mu_{k}(\Gamma)$ denote the number of all its $k$-edge matchings. By definition, we set $\mu_{0}(\Gamma)=1$. 
Thus, $\sum_{\alpha\in Q_{2k,2m}}\mathrm{Hf}(T_{2m}\\{\alpha\\})=\mu_{m-k}(\Gamma(T_{2m})),$ and therefore, $\mathrm{Hf}(T_{2m}(a,b))=\sum_{k=0}^{m}(a-b)^{m-k}b^{k}\frac{(2k)!}{k!2^{k}}\mu_{m-k}(\Gamma(T_{2m})).$ (5) Note that (5) is a special case of Theorems 1W and 3W given in [6] in terms of matching vectors of weighted graphs and their complements. The special case of (5) when $a=0$ and $b=1$ is also given in [7] (Theorem $4$). Thus, to calculate the hafnian of a two-parameter matrix by using formula (5), one needs to determine the number of $k$-edge matchings of graphs corresponding to the matrix, which is a nontrivial task in general. One of the simplest special cases was considered in [8]. In the following section, we consider a few more complicated special cases. ## 2 Hafnian of Toeplitz matrices of the first type Recall that a matrix is called Toeplitz if all its diagonals parallel to the main diagonal consist of the same elements. It is obvious that a symmetric Toeplitz matrix is uniquely determined by its first row. As the template matrix $T_{n}$, consider a symmetric Toeplitz matrix of order $n$ with the first row $\left(\begin{array}[]{cccccc}0&0&1&0&\dots&0\end{array}\right).$ We denote it by $C_{n}$. This matrix is the adjacency matrix of the arc diagram shown in Figure 2. Figure 2: The arc diagram $\Gamma(C_{n})$ ###### Proposition 3. Let $k$ and $n$ be a pair of nonnegative integers. Suppose $n\not=2k$ when $k$ is odd. Then, the inequality $\left\lceil\frac{3k-n}{2}\right\rceil\leq\left\lfloor\frac{k}{2}\right\rfloor$ (6) is equivalent to $k\leq\left\lfloor\frac{n}{2}\right\rfloor.$ (7) ###### Proof. Suppose (7) is false, which means that $n=2k-c$, where $c\geq 1$. On substituting this expression for $n$ into inequality (6), we can see that (6) is false as well. Now suppose (7) is true. Then, $n=2k+c$ and $\left\lceil\frac{3k-n}{2}\right\rceil=\left\lceil\frac{k-c}{2}\right\rceil$, where $c\geq 0$.
If $k$ is even, then $\left\lceil\frac{k-c}{2}\right\rceil$ does not exceed $\left\lfloor\frac{k}{2}\right\rfloor$. If $k$ is odd, then, by the assumption, $n\not=2k$. Therefore, $c\geq 1$, and $\left\lceil\frac{k-c}{2}\right\rceil$ again does not exceed $\left\lfloor\frac{k}{2}\right\rfloor$. Thus, the inequality (6) also holds. ∎ ###### Proposition 4. Let $k$ and $n$ be nonnegative integers. If $k\leq\left\lfloor\frac{n}{2}\right\rfloor$, but $n\not=2k$ when $k$ is odd, then the number of $k$-edge matchings in the arc diagram $\Gamma(C_{n})$ is $\mu_{k}(\Gamma(C_{n}))=\sum\limits_{i=\max{\left(0,\left\lceil\frac{3k-n}{2}\right\rceil\right)}}^{\left\lfloor\frac{k}{2}\right\rfloor}{n-2k+i\choose k-i}{k-i\choose i}.$ (8) Otherwise, $\mu_{k}(\Gamma(C_{n}))=0$. ###### Proof. For the sake of convenience, we use the abbreviated notation $v_{n,k}$ for $\mu_{k}(\Gamma(C_{n}))$. Consider a $k$-edge matching in $\Gamma(C_{n})$. It is obvious that if $k>\left\lfloor\frac{n}{2}\right\rfloor$, then $v_{n,k}=0$. If $n\geq 4$, then the following three cases are possible: the first vertex of the diagram is not incident to any edge of the matching (Figure 3(a)); the first vertex is incident to an edge of the matching, but the second vertex is not (Figure 3(b)); both the first and second vertices are incident to edges of the matching (Figure 3(c)). Figure 3: Possible cases of matchings in the arc diagram $\Gamma(C_{n})$ It follows from the above that $v_{n,k}$ satisfies the recurrence relation $v_{n+4,k+2}=v_{n+3,k+2}+v_{n+1,k+1}+v_{n,k}$ (9) with the initial conditions $v_{n,k}=0$ for $k>\left\lfloor\frac{n}{2}\right\rfloor$; $v_{n,0}=1$ for all $n$; $v_{n,1}=n-2$ for $n\geq 2$.
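Both the recurrence (9) and the closed formula (8) lend themselves to a machine cross-check. The sketch below (our code, not from the paper) computes $v_{n,k}$ from the recurrence and compares it with (8) for small parameters.

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def v(n, k):
    """mu_k(Gamma(C_n)) from recurrence (9), shifted to v(n,k)."""
    if k < 0 or n < 0 or k > n // 2:
        return 0
    if k == 0:
        return 1
    if k == 1:
        return n - 2          # C_n joins i and i+2, so it has n-2 edges
    return v(n - 1, k) + v(n - 3, k - 1) + v(n - 4, k - 2)

def v_closed(n, k):
    """mu_k(Gamma(C_n)) from the closed formula (8) of Proposition 4."""
    if k > n // 2 or (k % 2 == 1 and n == 2 * k):
        return 0
    lo = max(0, (3 * k - n + 1) // 2)   # integer ceil((3k - n)/2)
    return sum(comb(n - 2 * k + i, k - i) * comb(k - i, i)
               for i in range(lo, k // 2 + 1))

assert v(8, 2) == 11 and v(12, 4) == 46   # entries of Table 1 below
assert all(v(n, k) == v_closed(n, k) for n in range(15) for k in range(8))
```

The memoized recurrence runs in $O(nk)$ time, so it also serves as an independent check of the tables computed later from the closed formula.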
Consider the two-parameter generating function $v(x,t)$ for the sequence $v_{n,k}$: $v(x,t)=\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n,k}x^{k}t^{n}.$ By multiplying (9) by $x^{k+3}t^{n+3}$ and summing over all possible $k$ and $n$, we obtain the following equation: $\begin{split}\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n+4,k+2}x^{k+3}t^{n+3}=\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n+3,k+2}x^{k+3}t^{n+3}+\\\ +\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n+1,k+1}x^{k+3}t^{n+3}+\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n,k}x^{k+3}t^{n+3}.\end{split}$ (10) Consider separately the formal power series on the left side of this equation. $\begin{split}\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n+4,k+2}x^{k+3}t^{n+3}=\frac{x}{t}\left(\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n+4,k+2}x^{k+2}t^{n+4}\right)=\\\ =\frac{x}{t}\left(v(x,t)-\sum_{n=0}^{+\infty}v_{n,0}t^{n}-\sum_{n=1}^{+\infty}v_{n,1}xt^{n}\right)=\\\ =\frac{x}{t}\left(v(x,t)-\sum_{n=0}^{+\infty}t^{n}-x\sum_{n=3}^{+\infty}(n-2)t^{n}\right)=\\\ =\frac{x}{t}\left(v(x,t)-\frac{1}{1-t}-xt^{3}\left(\sum_{n=1}^{+\infty}t^{n}\right)^{\prime}\right)=\\\ =\frac{x}{t}\left(v(x,t)-\frac{1}{1-t}-\frac{xt^{3}}{(1-t)^{2}}\right).\end{split}$ In the same way, we get $\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n+3,k+2}x^{k+3}t^{n+3}=x\left(v(x,t)-\frac{1}{1-t}-\frac{xt^{3}}{(1-t)^{2}}\right).$ In addition, $\begin{split}\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n+1,k+1}x^{k+3}t^{n+3}=x^{2}t^{2}\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n+1,k+1}x^{k+1}t^{n+1}=\\\ x^{2}t^{2}\left(v(x,t)-\sum_{n=0}^{+\infty}v_{n,0}t^{n}\right)=x^{2}t^{2}\left(v(x,t)-\frac{1}{1-t}\right).\end{split}$ Finally, 
$\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}v_{n,k}x^{k+3}t^{n+3}=x^{3}t^{3}v(x,t).$ On substituting all of the above into (10), we get $\displaystyle\frac{x}{t}\left(v(x,t)-\frac{1}{1-t}-\frac{xt^{3}}{(1-t)^{2}}\right)=x\left(v(x,t)-\frac{1}{1-t}-\frac{xt^{3}}{(1-t)^{2}}\right)+$ $\displaystyle+x^{2}t^{2}\left(v(x,t)-\frac{1}{1-t}\right)+x^{3}t^{3}v(x,t).$ On solving this equation, we obtain: $\begin{split}v(x,t)=\frac{1}{1-t(1+xt^{2}+x^{2}t^{3})}=\sum_{m=0}^{+\infty}t^{m}(1+xt^{2}+x^{2}t^{3})^{m}=\\\ \sum_{m=0}^{+\infty}t^{m}\sum_{j=0}^{m}{m\choose j}(xt^{2}+x^{2}t^{3})^{j}=\sum_{m=0}^{+\infty}\sum_{j=0}^{m}{m\choose j}x^{j}t^{m+2j}(1+xt)^{j}=\\\ \sum_{m=0}^{+\infty}\sum_{j=0}^{m}{m\choose j}x^{j}t^{m+2j}\sum_{i=0}^{j}{j\choose i}(xt)^{i}=\sum_{m=0}^{+\infty}\sum_{j=0}^{m}\sum_{i=0}^{j}{m\choose j}{j\choose i}x^{j+i}t^{m+2j+i}.\end{split}$ (11) Fix nonnegative integers $k$ and $n$ with $k\leq\left\lfloor\frac{n}{2}\right\rfloor$. From (11), we see that the coefficient of $x^{k}t^{n}$ is equal to $\sum\limits_{i}{n-2k+i\choose k-i}{k-i\choose i}$, taken over all $i$ for which the inequalities $i\geq 0$, $k-i\geq i$, $n-2k+i\geq k-i$ hold. The last two inequalities can be rewritten as $i\leq\left\lfloor\frac{k}{2}\right\rfloor$, $i\geq\left\lceil\frac{3k-n}{2}\right\rceil$. Thus, for the set of acceptable values of $i$ to be nonempty, it is necessary that $\left\lceil\frac{3k-n}{2}\right\rceil\leq\left\lfloor\frac{k}{2}\right\rfloor$. However, according to Proposition 3, this condition is equivalent to $k\leq\left\lfloor\frac{n}{2}\right\rfloor$, except for the case when $k$ is odd and $n=2k$. In that case, it is clear that the inequality $\left\lceil\frac{3k-n}{2}\right\rceil\leq\left\lfloor\frac{k}{2}\right\rfloor$ does not hold, and therefore, the coefficient of $x^{k}t^{n}$ is equal to zero. This completes the proof. ∎ ###### Remark 1. Several initial values $\mu_{k}(\Gamma(C_{n}))$ are presented in Table 1. The empty cells correspond to zero.
Table 1 coincides with the entry $A220074$ in [9] up to sign. $k\backslash n$ | $0$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$ | $8$ | $9$ | $10$ | $11$ | $12$ ---|---|---|---|---|---|---|---|---|---|---|---|---|--- $0$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ $1$ | | | | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$ | $8$ | $9$ | $10$ $2$ | | | | | $1$ | $2$ | $4$ | $7$ | $11$ | $16$ | $22$ | $29$ | $37$ $3$ | | | | | | | | $2$ | $6$ | $13$ | $24$ | $40$ | $62$ $4$ | | | | | | | | | $1$ | $3$ | $9$ | $22$ | $46$ $5$ | | | | | | | | | | | | $3$ | $12$ $6$ | | | | | | | | | | | | | $1$ Table 1: Number of $k$-edge matchings in the graph $\Gamma(C_{n})$ Let $R$ be a commutative associative ring with $1$ and $a,b\in R$. Consider a symmetric two-parameter Toeplitz matrix $C_{2m}(a,b)$ having the first row of the form $\left(\begin{array}[]{cccccc}0&b&a&b&\dots&b\end{array}\right).$ On substituting the value $\mu_{m-k}(\Gamma(C_{2m}))$ in (5), we obtain the following theorem: ###### Theorem 1. If we assume that $0^{0}=1$, then the hafnian of the matrix $C_{2m}(a,b)$ can be calculated using the following formula: $\begin{split}&\mathrm{Hf}(C_{2m}(a,b))=\\\ &\sum_{k=p}^{m}(a-b)^{m-k}b^{k}\frac{(2k)!}{k!2^{k}}\sum\limits_{i=\max{\left(0,\left\lceil\frac{m-3k}{2}\right\rceil\right)}}^{\left\lfloor\frac{m-k}{2}\right\rfloor}{2k+i\choose m-k-i}{m-k-i\choose i}\ ,\end{split}$ (12) where $p=0$ when $m$ is even and $p=1$ when $m$ is odd (for odd $m$ the summand with $k=0$ vanishes by Proposition 4). ###### Remark 2. Equality (12) allows us to calculate $\mathrm{Hf}(C_{2m}(a,b))$ in time $O(m^{3})$. ###### Example 1. Consider the matrix $C_{2m}(0,1)$.
By calculating its hafnian using formula (12) for consecutive $m$’s, we obtain the sequence: $\begin{array}[]{c|c|c|c|c|c|c|c|c|c|c|c}m&1&2&3&4&5&6&7&8&9&10&\dots\\\ \hline\cr\mathrm{Hf}&1&2&7&43&372&4027&51871&773186&13083385&247698481&\dots\end{array}$ In terms of graph theory, its $m$-th term equals the number of perfect matchings in the arc diagram $\Gamma(C_{2m}(0,1))$ (Figure 4). In other words, this is the number of linear chord diagrams with $m$ chords such that the length of each chord does not equal $2$ (see [4]). This sequence has the number $A265229$ in [9], but its description does not contain the interpretation given here. Figure 4: The arc diagram $\Gamma(C_{6}(0,1))$ and all its perfect matchings ## 3 Hafnian of Toeplitz matrices of the second type As the template matrix $T_{n}$, now consider a symmetric Toeplitz matrix of order $n$ with the first row $\left(\begin{array}[]{cccccc}0&1&1&0&\dots&0\end{array}\right).$ We denote it by $D_{n}$. This matrix is the adjacency matrix of the arc diagram shown in Figure 5. Figure 5: The arc diagram $\Gamma(D_{n})$ ###### Theorem 2. Let $k$ and $n$ be nonnegative integers such that $k\leq\left\lfloor\frac{n}{2}\right\rfloor$. Then, the number of $k$-edge matchings in the arc diagram $\Gamma(D_{n})$ is equal to the following: $\mu_{k}(\Gamma(D_{n}))=\sum_{i=0}^{\min(k,\left\lfloor\frac{n-k}{2}\right\rfloor)}\sum\limits_{p=\max(0,i+2k-n)}^{\min(i,k-i)}{n-k-i\choose k-p}{k-p\choose i}{i\choose p}.$ (13) ###### Proof. For convenience, we use the abbreviated notation $w_{n,k}$ for $\mu_{k}(\Gamma(D_{n}))$. Consider a $k$-edge matching in $\Gamma(D_{n})$.
If $n\geq 4$, then the following four cases are possible: the first vertex of the diagram is not incident to any edge of the matching (Figure 6(a)); the first and second vertices are connected by an edge of the matching (Figure 6(b)); the first vertex is incident to an edge of the matching, but the second vertex is not (Figure 6(c)); the first and second vertices are incident to different edges of the matching (Figure 6(d)). Figure 6: Possible cases of matchings in the arc diagram $\Gamma(D_{n})$ It follows from the above that $w_{n,k}$ satisfies the recurrence relation $w_{n+4,k+2}=w_{n+3,k+2}+w_{n+2,k+1}+w_{n+1,k+1}+w_{n,k}$ (14) with the initial conditions $w_{n,k}=0$ for $k>\left\lfloor\frac{n}{2}\right\rfloor$; $w_{n,0}=1$ for all $n$; $w_{n,1}=2n-3$ for $n\geq 2$. Consider the two-parameter generating function for the sequence $w_{n,k}$: $w(x,t)=\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}w_{n,k}x^{k}t^{n}.$ On multiplying (14) by $x^{k+3}t^{n+3}$ and summing over all possible $k$ and $n$, we get the following equation: $\begin{split}\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}w_{n+4,k+2}x^{k+3}t^{n+3}=\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}w_{n+3,k+2}x^{k+3}t^{n+3}+\\\ \sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}w_{n+2,k+1}x^{k+3}t^{n+3}+\sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}w_{n+1,k+1}x^{k+3}t^{n+3}+\\\ \sum_{n=0}^{+\infty}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor}w_{n,k}x^{k+3}t^{n+3}.\end{split}$ (15) On solving this equation, we obtain: $\begin{split}w(x,t)=&\frac{1}{1-t(1+xt+xt^{2}+x^{2}t^{3})}=\\\ &\sum_{m=0}^{+\infty}\sum_{j=0}^{m}\sum_{i=0}^{j}\sum_{p=0}^{i}{m\choose j}{j\choose i}{i\choose p}x^{j+p}t^{m+j+i+p}.\end{split}$ (16) Fix nonnegative integers $k$ and $n\geq 2k$.
From (16), we see that the coefficient of $x^{k}t^{n}$ is equal to the sum $\sum\limits_{i}\sum\limits_{p}{n-k-i\choose k-p}{k-p\choose i}{i\choose p}$, taken over all $i,p$ for which the inequalities $i\geq p\geq 0$, $k-p\geq i$, $n-k-i\geq k-p$ hold. It can be shown that this system of inequalities is equivalent to the following one: $0\leq i\leq\min(k,\left\lfloor\frac{n-k}{2}\right\rfloor)$, $\max(0,i+2k-n)\leq p\leq\min(i,k-i)$. This completes the proof. ∎ ###### Remark 3. Note that the arc diagram $\Gamma(D_{n})$ is isomorphic to the triangular lattice shown in Figure 7. Thus, formula (13) also allows us to calculate the number of $k$-edge matchings in such lattices. Figure 7: The triangular lattice $\Gamma(D_{n})$: (a) $n$ is even; (b) $n$ is odd ###### Remark 4. Several initial values $\mu_{k}(\Gamma(D_{n}))$ are presented in Table 2. The empty cells correspond to zero. Note that the sequence of the first nonzero elements in the rows is the Fibonacci sequence, the sequence of the second nonzero elements in the rows has the number $A023610$ in [9], and the nonzero elements of the third row coincide with the sequence $A130883$, excluding the starting element. $k\backslash n$ | $0$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$ | $7$ | $8$ | $9$ | $10$ | $11$ | $12$ ---|---|---|---|---|---|---|---|---|---|---|---|---|--- $0$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ | $1$ $1$ | | | $1$ | $3$ | $5$ | $7$ | $9$ | $11$ | $13$ | $15$ | $17$ | $19$ | $21$ $2$ | | | | | $2$ | $7$ | $16$ | $29$ | $46$ | $67$ | $92$ | $121$ | $154$ $3$ | | | | | | | $3$ | $15$ | $43$ | $95$ | $179$ | $303$ | $475$ $4$ | | | | | | | | | $5$ | $30$ | $104$ | $271$ | $591$ $5$ | | | | | | | | | | | $8$ | $58$ | $235$ $6$ | | | | | | | | | | | | | $13$ Table 2: Number of $k$-edge matchings in the graph $\Gamma(D_{n})$ Let $R$ be a commutative associative ring with $1$ and $a,b\in R$.
Consider a symmetric two-parameter Toeplitz matrix $D_{2m}(a,b)$ having the first row of the form $\left(\begin{array}[]{cccccc}0&a&a&b&\dots&b\end{array}\right).$ On substituting the value $\mu_{m-k}(\Gamma(D_{2m}))$ in (5), we obtain the following theorem: ###### Theorem 3. If we assume that $0^{0}=1$, then the hafnian of the matrix $D_{2m}(a,b)$ is expressed using the following formula: $\mathrm{Hf}(D_{2m}(a,b))=\sum_{k=0}^{m}(a-b)^{m-k}b^{k}\frac{(2k)!}{k!2^{k}}\mu_{m-k}(\Gamma(D_{2m}))\ ,$ (17) where $\begin{split}&\mu_{m-k}(\Gamma(D_{2m}))=\\\ &\sum_{i=0}^{\min(m-k,\left\lfloor\frac{m+k}{2}\right\rfloor)}\sum\limits_{p=\max(0,i-2k)}^{\min(i,m-k-i)}{m+k-i\choose m-k-p}{m-k-p\choose i}{i\choose p}.\end{split}$ ###### Remark 5. Equality (17) allows us to calculate $\mathrm{Hf}(D_{2m}(a,b))$ in time $O(m^{4})$. ###### Example 2. Consider the matrix $D_{2m}(0,1)$. By calculating its hafnian using formula (17) for consecutive $m$’s, we obtain the sequence: $\begin{array}[]{c|c|c|c|c|c|c|c|c|c|c|c|}m&1&2&3&4&5&6&7&8&9&10&\dots\\\ \hline\cr\mathrm{Hf}&0&0&1&10&99&1146&15422&237135&4106680&79154927&\dots\end{array}$ In terms of graphs, the $m$-th term is equal to the number of perfect matchings in the arc diagram $\Gamma(D_{2m}(0,1))$ (Figure 8). In other words, this is the number of linear chord diagrams with $m$ chords such that the length of every chord is at least $3$ (see also [4], [10]). This sequence has the number $A190823$ in [9]. Figure 8: The arc diagram $\Gamma(D_{6}(0,1))$ and its only perfect matching ## References * [1] Björklund A., Gupt B., Quesada N. A faster hafnian formula for complex matrices and its benchmarking on a supercomputer // ACM Journal of Experimental Algorithmics. 2019. V. 24. 1.11. * [2] Grimson R.C. Enumeration of Dimer (Domino) Configurations // Discrete Mathematics. 1977. V. 18. pp. 167–178. * [3] Krasko E., Omelchenko A.
Enumeration of chord diagrams without loops and parallel chords // The Electronic Journal of Combinatorics. 2017. V. 24(3). #3.43. * [4] Sullivan E. Linear chord diagrams with long chords // The Electronic Journal of Combinatorics. 2017. V. 24(4). pp. 1–8. * [5] Efimov D.B. The hafnian and a commutative analogue of the Grassmann algebra // Electronic Journal of Linear Algebra. 2018. V. 34. pp. 54–60. * [6] Zaslavsky T. Complementary Matching Vectors and the Uniform Matching Extension Property // European Journal of Combinatorics. 1981. V. 2. pp. 91–103. * [7] Young D. The Number of Domino Matchings in the Game of Memory // Journal of Integer Sequences. 2018. V. 21(8). pp. 1–14. * [8] Efimov D.B. The hafnian of Toeplitz matrices of special type, perfect matchings and Bessel polynomials // Bulletin of Syktyvkar University. Series 1: Mathematics. Mechanics. Informatics. 2018. V. 3(28). pp. 56–64. (in Russian). * [9] N.J.A. Sloane, editor. The On-Line Encyclopedia of Integer Sequences, published electronically at https://oeis.org. * [10] Cameron N.T., Killpatrick K. Statistics on Linear Chord Diagrams // Discrete Mathematics and Theoretical Computer Science. 2019. V. 21(2). pp. 1–10.
# Cut–free sequent calculus and natural deduction for the tetravalent modal logic Martín Figallo (Departamento de Matemática. Universidad Nacional del Sur. Bahía Blanca, Argentina) ###### Abstract The tetravalent modal logic ($\cal TML$) is one of the two logics defined by Font and Rius ([13]) (the other is the normal tetravalent modal logic ${\cal TML}^{N}$) in connection with Monteiro’s tetravalent modal algebras. These logics are expansions of the well–known Belnap–Dunn four–valued logic that combine a many-valued character (tetravalence) with a modal character. In fact, $\cal TML$ is the logic that preserves degrees of truth with respect to tetravalent modal algebras. As Font and Rius observed, the connection between the logic $\cal TML$ and the algebras is not as good as in ${\cal TML}^{N}$, but, as a compensation, it has a better proof-theoretic behavior, since it has a strongly adequate Gentzen calculus (see [13]). In this work, we prove that the sequent calculus given by Font and Rius does not enjoy the cut–elimination property. Then, using a general method proposed by Avron, Ben-Naim and Konikowska ([4]), we provide a sequent calculus for $\cal TML$ with the cut–elimination property. Finally, inspired by the latter, we present a natural deduction system, sound and complete with respect to the tetravalent modal logic. ## 1 Introduction The class TMA of tetravalent modal algebras was first considered by Antonio Monteiro (1978), and mainly studied by I. Loureiro, A.V. Figallo, A. Ziliani and P. Landini. Later on, J.M. Font and M. Rius were interested in the logics arising from the algebraic and lattice–theoretical aspects of these algebras. Monteiro believed that, in the future, these algebras would give rise to a four-valued modal logic with significant applications in Computer Science (see [13]).
Although such applications have not yet been developed, the two logics considered in [13] are modal expansions of Belnap–Dunn’s four-valued logic, a logical system that is well–known for the many applications it has found in several fields. In these logics, four epistemic values emerge: 1 (true and not false), 0 (false and not true), n (neither true nor false) and b (both true and false). We may think of them as the four possible ways in which an atomic sentence $P$ can belong to the present state of information: (1) we were told that $P$ is true (and were not told that $P$ is false); (2) we were told that $P$ is false (and were not told that $P$ is true); (3) we were told that $P$ is both true and false (perhaps from different sources, or in different instants of time); (4) we were not told anything about the truth value of $P$. In this interpretation, it makes sense to consider a modal-like unary operator $\square$ of epistemic character, such that for any sentence $P$, the sentence $\square P$ would mean “the available information confirms that $P$ is true”. It is clear that in this setting the sentence $\square P$ can only be true in the case where we have some information saying that $P$ is true and we have no information saying that $P$ is false, while it is simply false in all other cases (i.e., lack of information, or at least some information saying that $P$ is false, disregarding whether at the same time some other information says that $P$ is true); that is, on the set $\\{0,{\bf n},{\bf b},1\\}$ of epistemic values this operator must be defined as $\square 1=1$ and $\square{\bf n}=\square{\bf b}=\square 0=0$. This is exactly the algebra that generates the variety of TMAs. In [13], Font and Rius studied two logics related to TMAs.
One of them is obtained by following the usual “preserving truth” scheme, taking $\\{1\\}$ as designated set, that is, $\psi$ follows from $\psi_{1},\dots,\psi_{n}$ in this logic when every interpretation that sends all the $\psi_{i}$ to $1$ also sends $\psi$ to $1$. The other logic, denoted by ${\cal TML}$ (the logic we are interested in), is defined by using the preserving degrees of truth scheme, that is, $\psi$ follows from $\psi_{1},\dots,\psi_{n}$ when every interpretation assigns to $\psi$ a value greater than or equal to the value it assigns to the conjunction of the $\psi_{i}$’s. These authors proved that ${\cal TML}$ is not algebraizable in the sense of Blok and Pigozzi, but it is finitely equivalential and protoalgebraic. They also showed that its algebraic counterpart is the class of TMAs, although the connection between the logic and the algebras is not as good as in the first logic. As a compensation, this logic has a better proof-theoretic behavior, since it has a strongly adequate Gentzen calculus (Theorems 3.6 and 3.19 of [13]). In [13], it was proved that $\cal TML$ can be characterized as a matrix logic in terms of two logical matrices, but later, in [9], it was proved that $\cal TML$ can be determined by a single logical matrix. Besides, taking advantage of the contrapositive implication introduced by A. V. Figallo and P. Landini ([11]), a sound and complete Hilbert-style calculus for this logic was presented. Finally, the paraconsistent character of $\cal TML$ was also studied from the point of view of the _Logics of Formal Inconsistency_ , introduced by W. Carnielli and J. Marcos in [8] and afterward developed in [7].
## 2 Preliminaries Recall that a De Morgan algebra is a structure $\langle A,\wedge,\vee,\neg,0\rangle$ such that $\langle A,\wedge,\vee,0\rangle$ is a bounded distributive lattice and $\neg$ is a De Morgan negation, i.e., an involution that additionally satisfies De Morgan’s laws: for every $a,b\in A$ $\neg\neg a=a$ $\neg(a\vee b)=\neg a\wedge\neg b.$ A tetravalent modal algebra (TMA) is an algebra $\mathbb{A}=\langle A,\wedge,\vee,\neg,\square,0\rangle$ of type $(2,2,1,1,0)$ such that its non-modal reduct $\langle A,\wedge,\vee,\neg,0\rangle$ is a De Morgan algebra and the unary operation $\square$ satisfies, for all $a\in A$, the two following axioms: $\square a\wedge\neg a=0,$ $\neg\square a\wedge a=\neg a\wedge a.$ Every TMA $\mathbb{A}$ has a top element $1$ which is defined as $\neg 0$. These algebras were studied mainly by I. Loureiro ([14]), and also by A. V. Figallo, P. Landini ([11]) and A. Ziliani, at the suggestion of the late A. Monteiro (see [13]). The class of all tetravalent modal algebras constitutes a variety which is denoted by TMA. Let $M_{4}=\\{0,{\bf n},{\bf b},1\\}$ and consider the lattice in which $0$ is the bottom element, $1$ is the top element, and ${\bf n}$ and ${\bf b}$ are two incomparable intermediate elements. This is a well-known lattice, called ${\bf L4}$ (see [1], p. 516). Then, TMA is generated by this four–element lattice enriched with two unary operators $\neg$ and $\square$ given by $\neg{\bf n}={\bf n}$, $\neg{\bf b}={\bf b}$, $\neg 0=1$ and $\neg 1=0$, and $\square{\bf n}=\square{\bf b}=\square 0=0$ and $\square 1=1$ (see [13]). This tetravalent modal algebra, denoted by $\mathfrak{M}_{4m}$, has two prime filters, namely, $F_{\tiny\mbox{\bf n}}=\\{{\bf n},1\\}$ and $F_{\tiny\mbox{\bf b}}=\\{{\bf b},1\\}$. As we said, $\mathfrak{M}_{4m}$ generates the variety ${\bf TMA}$, i.e., an equation holds in every TMA iff it holds in $\mathfrak{M}_{4m}$.
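Since $\mathfrak{M}_{4m}$ generates ${\bf TMA}$, claims about the whole variety reduce to finite checks in this four-element algebra. A minimal sketch (the pair encoding below is our convention, not from the paper) verifies the two defining axioms of a TMA:

```python
# The generating algebra M_4m on {0, n, b, 1}, with 0 < n, b < 1 and
# n, b incomparable.  Pair encoding (our convention): 0=(0,0), n=(1,0),
# b=(0,1), 1=(1,1); meet and join are then componentwise.
ELEMS = [(0, 0), (1, 0), (0, 1), (1, 1)]
TOP, BOT = (1, 1), (0, 0)

def meet(a, b): return (a[0] & b[0], a[1] & b[1])
def join(a, b): return (a[0] | b[0], a[1] | b[1])
def neg(a):     return (1 - a[1], 1 - a[0])   # fixes n and b, swaps 0 and 1
def box(a):     return TOP if a == TOP else BOT

# The two TMA axioms hold at every element:
for a in ELEMS:
    assert meet(box(a), neg(a)) == BOT                # box(a) ∧ ¬a = 0
    assert meet(neg(box(a)), a) == meet(neg(a), a)    # ¬box(a) ∧ a = ¬a ∧ a
```

The same four-element tables are all that is needed below to decide validity of equations in every TMA.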
###### Lemma 2.1 (See [13]) In every TMA $\mathbb{A}$ and for all $a,b\in A$ the following hold: (i) $\neg\square a\vee a=1$; (ii) $\square a\vee\neg a=a\vee\neg a$; (iii) $\square a\vee\neg\square a=1$; (iv) $\square a\wedge\neg\square a=0$; (v) $\square a\leq a$; (vi) $\square 1=1$; (vii) $\square 0=0$; (viii) $\square\square a=\square a$; (ix) $\square(a\wedge b)=\square a\wedge\square b$; (x) $\square(a\vee\square b)=\square a\vee\square b$; (xi) $\square\neg\square a=\neg\square a$; (xii) $a\wedge\square\neg a=0$; (xiii) $\square(\square a\wedge\square b)=\square a\wedge\square b$; (xiv) $\square(\square a\vee\square b)=\square a\vee\square b$. The next proposition will be needed in what follows. ###### Proposition 2.2 Let $\mathbb{A}$ be a TMA. If $x\leq y\vee z$ and $x\wedge\neg z\leq y$, then $x\leq y\vee\square z$, for every $x,y,z\in A$. Proof. It is a routine task to check that the assertion holds in $\mathfrak{M}_{4m}$. The fact that $\mathfrak{M}_{4m}$ generates the variety ${\bf TMA}$ completes the proof. $\boldsymbol{\blacksquare}$ Let $\mathscr{L}=\\{\vee,\wedge,\neg,\square,\bot\\}$ be a propositional language. From now on, we shall denote by $\mathfrak{Fm}=\langle Fm,\wedge,\vee,\neg,\square,\bot\rangle$ the absolutely free algebra of type (2,2,1,1,0) generated by some denumerable set of variables. We denote by $Fm$ the set of sentential formulas, and we shall refer to them by lowercase Greek letters $\alpha,\beta,\gamma,\dots$ and so on; and we shall denote finite sets of formulas by uppercase Greek letters $\Gamma,\Delta,$ etc.
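The routine verification mentioned in the proof of Proposition 2.2 amounts to checking $4^{3}$ triples in $\mathfrak{M}_{4m}$, which can be done mechanically. In the sketch below, the pair encoding $0=(0,0)$, ${\bf n}=(1,0)$, ${\bf b}=(0,1)$, $1=(1,1)$ is our convention:

```python
# Exhaustive check of Proposition 2.2 in M_4m (64 triples).  Meet and join
# are componentwise in the pair encoding, and a <= b iff a ∧ b = a.
ELEMS = [(0, 0), (1, 0), (0, 1), (1, 1)]
TOP, BOT = (1, 1), (0, 0)
meet = lambda a, b: (a[0] & b[0], a[1] & b[1])
join = lambda a, b: (a[0] | b[0], a[1] | b[1])
neg = lambda a: (1 - a[1], 1 - a[0])
box = lambda a: TOP if a == TOP else BOT
leq = lambda a, b: meet(a, b) == a            # the lattice order

for x in ELEMS:
    for y in ELEMS:
        for z in ELEMS:
            if leq(x, join(y, z)) and leq(meet(x, neg(z)), y):
                assert leq(x, join(y, box(z)))   # x <= y ∨ box(z)
```

The identities of Lemma 2.1 can be confirmed by the same kind of loop over one or two variables.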
###### Definition 2.3 The tetravalent modal logic ${\cal TML}$ defined over $\mathfrak{Fm}$ is the propositional logic $\langle Fm,\models_{{\cal TML}}\rangle$ given as follows: for every finite set $\Gamma\cup\\{\alpha\\}\subseteq Fm$, $\Gamma\models_{\cal TML}\alpha$ if and only if, for every $\mathbb{A}\in{\bf TMA}$ and for every $h\in Hom(\mathfrak{Fm},\mathbb{A})$, $\bigwedge\\{h(\gamma)\ :\ \gamma\in\Gamma\\}\leq h(\alpha)$. In particular, $\emptyset\models_{\cal TML}\alpha$ if and only if $h(\alpha)=1$ for every $\mathbb{A}\in{\bf TMA}$ and for every $h\in Hom(\mathfrak{Fm},\mathbb{A})$. ###### Remark 2.4 Observe that, if $h\in Hom(\mathfrak{Fm},\mathbb{A})$ for any $\mathbb{A}\in{\bf TMA}$, we have that $h(\bot)=0$. This follows from the fact that $\bot$ is the $0$-ary operation in $\mathfrak{Fm}$, that $0$ is the $0$-ary operation in $\mathbb{A}$, and from the definition of homomorphism (in the sense of universal algebra). Let ${\cal M}=\langle{\cal T},{\cal D},{\cal O}\rangle$ be a logical matrix for $\mathscr{L}$, that is, ${\cal T}$ is a finite, non-empty set of truth values, ${\cal D}$ is a non-empty proper subset of ${\cal T}$, and ${\cal O}$ includes a $k$-ary function $\hat{f}:{\cal T}^{k}\to{\cal T}$ for each $k$-ary connective $f\in\mathscr{L}$. Recall that a valuation in ${\cal M}$ is a function $v:Fm\to{\cal T}$ such that $v(f(\psi_{1},\dots,\psi_{k}))=\hat{f}(v(\psi_{1}),\dots,v(\psi_{k}))$ for each $k$-ary connective $f$ and all $\psi_{1},\dots,\psi_{k}\in Fm$. A formula $\alpha\in Fm$ is satisfied by a given valuation $v$, in symbols $v\models\alpha$, if $v(\alpha)\in{\cal D}$. Let $\Gamma,\Delta\subseteq Fm$. We say that $\Delta$ is a consequence of $\Gamma$, denoted $\Gamma\models_{\cal M}\Delta$, iff for every valuation $v$ in ${\cal M}$, either $v$ does not satisfy some formula in $\Gamma$ or $v$ satisfies some formula in $\Delta$. J. M. Font and M.
Rius proved in [13] that the tetravalent modal logic $\cal TML$ is a matrix logic defined in terms of two logical matrices. But later, M. E. Coniglio and M. Figallo proved in [9] that $\cal TML$ can be characterized as a matrix logic in terms of a single logical matrix. Indeed, let ${\cal M}_{4}=\langle{\cal T},{\cal D},{\cal O}\rangle$ be the matrix where the set of truth values is ${\cal T}=\\{0,{\bf n},{\bf b},1\\}$, the set of designated values is ${\cal D}=\\{{\bf b},1\\}$ and ${\cal O}=\\{\tilde{\vee},\tilde{\wedge},\tilde{\neg},\tilde{\square}\\}$ where $\tilde{\vee},\tilde{\wedge}:{\cal T}^{2}\to{\cal T}$ and $\tilde{\neg},\tilde{\square}:{\cal T}\to{\cal T}$ are defined as $x\tilde{\vee}y=\mathop{\rm Sup}\nolimits\\{x,y\\}$, $x\tilde{\wedge}y=\mathop{\rm Inf}\nolimits\\{x,y\\}$ (here we are assuming that the elements of ${\cal T}$ are ordered as in the lattice $M_{4}$), and $\tilde{\neg}$ and $\tilde{\square}$ are given by the following table: $x$ | $\tilde{\neg}x$ | $\tilde{\square}x$ ---|---|--- $0$ | $1$ | $0$ ${\bf n}$ | ${\bf n}$ | $0$ ${\bf b}$ | ${\bf b}$ | $0$ $1$ | $0$ | $1$ Then we have: ###### Proposition 2.5 ([9]) $\cal TML$ is sound and complete w.r.t. ${\cal M}_{4}$. Therefore, given $\Gamma$ and $\Delta$ sets of formulas, $\Delta$ is a consequence of $\Gamma$ in ${\cal TML}$, denoted $\Gamma\models_{\cal TML}\Delta$, iff for every valuation $v$ in ${\cal M}_{4}$, either $v$ does not satisfy some formula in $\Gamma$ or $v$ satisfies some formula in $\Delta$. If $\Delta$ is a singleton, we recover the consequence relation given in Definition 2.3. In order to characterize $\cal TML$ syntactically, that is, by means of a deductive system, J. M. Font and M. Rius introduced in [13] the sequent calculus $\mathfrak{G}$. The sequent calculus $\mathfrak{G}$ is single–conclusion, that is, it deals with sequents of the form $\Delta\Rightarrow\alpha$ such that $\Delta\cup\\{\alpha\\}$ is a finite subset of $Fm$.
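Because ${\cal M}_{4}$ is finite, semantic facts about $\cal TML$ can be verified by brute force over the four truth values. The sketch below (the pair encoding is our convention) checks that $\alpha\vee\neg\square\alpha$ — the Modal axiom of the calculus presented next — always takes the value $1$, and that $\neg\square\alpha\Rightarrow\alpha$ is not valid when $\alpha$ is a propositional variable:

```python
# Truth values of M4 in pair encoding (ours): 0=(0,0), n=(1,0), b=(0,1),
# 1=(1,1); meet/join componentwise, designated values are b and 1.
ELEMS = [(0, 0), (1, 0), (0, 1), (1, 1)]
TOP, BOT, N = (1, 1), (0, 0), (1, 0)
DESIGNATED = [(0, 1), (1, 1)]

meet = lambda a, b: (a[0] & b[0], a[1] & b[1])
join = lambda a, b: (a[0] | b[0], a[1] | b[1])
neg = lambda a: (1 - a[1], 1 - a[0])
box = lambda a: TOP if a == TOP else BOT

for a in ELEMS:
    val = join(a, neg(box(a)))      # value of alpha ∨ ¬□alpha
    assert val == TOP               # so the Modal axiom is valid, and even
    assert box(val) == TOP          # remains valid under □ (cf. Section 3)
    assert val in DESIGNATED

# By contrast, ¬□alpha <= alpha fails at alpha = n (there 1 is not <= n),
# matching Proposition 3.1 below.
assert meet(neg(box(N)), N) != neg(box(N))
```

This degree-preserving reading (checking $\leq$ rather than membership in ${\cal D}$) is exactly the semantics of single-conclusion sequents used in the rest of the paper.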
The axioms and rules of $\mathfrak{G}$ are the following: Axioms $\mbox{(Structural axiom) \, }\displaystyle{\alpha\Rightarrow\alpha}\hskip 56.9055pt\mbox{(Modal axiom) \, }{\Rightarrow\alpha\vee\neg\square\alpha}$ Structural rules $\mbox{(Weakening) \, }\displaystyle\frac{\Delta\Rightarrow\alpha}{\Delta,\beta\Rightarrow\alpha}\hskip 56.9055pt\mbox{(Cut) \, }\displaystyle\frac{\Delta\Rightarrow\alpha\hskip 14.22636pt\Delta,\alpha\Rightarrow\beta}{\Delta\Rightarrow\beta}$ Logic rules $\mbox{($\wedge\Rightarrow$) \, }\displaystyle\frac{\Delta,\alpha,\beta\Rightarrow\gamma}{\Delta,\alpha\wedge\beta\Rightarrow\gamma}\hskip 56.9055pt\mbox{($\Rightarrow\wedge$) \, }\displaystyle\frac{\Delta\Rightarrow\alpha\hskip 14.22636pt\Delta\Rightarrow\beta}{\Delta\Rightarrow\alpha\wedge\beta}$ $\mbox{($\vee\Rightarrow$) \, }\displaystyle\frac{\Delta,\alpha\Rightarrow\gamma\hskip 14.22636pt\Delta,\beta\Rightarrow\gamma}{\Delta,\alpha\vee\beta\Rightarrow\gamma}$ $\mbox{($\Rightarrow\vee$)${}_{1}$ \, }\displaystyle\frac{\Delta\Rightarrow\alpha}{\Delta\Rightarrow\alpha\vee\beta}\hskip 56.9055pt\mbox{($\Rightarrow\vee$)${}_{2}$ \, }\displaystyle\frac{\Delta\Rightarrow\beta}{\Delta\Rightarrow\alpha\vee\beta}$ $\mbox{($\neg$) \, }\displaystyle\frac{\alpha\Rightarrow\beta}{\neg\beta\Rightarrow\neg\alpha}\hskip 56.9055pt\mbox{($\bot$) \,}\frac{\Delta\Rightarrow\bot}{\Delta\Rightarrow\alpha}$ $\mbox{($\neg\neg\Rightarrow$) \, }\displaystyle\frac{\Delta,\alpha\Rightarrow\beta}{\Delta,\neg\neg\alpha\Rightarrow\beta}\hskip 56.9055pt\mbox{($\Rightarrow\neg\neg$)}\,\frac{\Delta\Rightarrow\alpha}{\Delta\Rightarrow\neg\neg\alpha}$ $\mbox{($\square\Rightarrow$) \, }\displaystyle\frac{\Delta,\alpha,\neg\alpha\Rightarrow\beta}{\Delta,\alpha,\neg\square\alpha\Rightarrow\beta}\hskip 56.9055pt\mbox{($\Rightarrow\square$)}\,\frac{\Delta\Rightarrow\alpha\wedge\neg\alpha}{\Delta\Rightarrow\alpha\wedge\neg\square\alpha}$ The notion of derivation in the sequent calculus $\mathfrak{G}$ is the usual. 
Besides, for every finite set $\Gamma\cup\\{\varphi\\}\subseteq Fm$, we write $\Gamma\vdash_{\mathfrak{G}}\varphi$ iff the sequent $\Gamma\Rightarrow\varphi$ has a derivation in $\mathfrak{G}$. We say that the sequent $\Gamma\Rightarrow\varphi$ is provable iff there exists a derivation for it in $\mathfrak{G}$. J. M. Font and M. Rius proved in [13] that $\mathfrak{G}$ is sound and complete with respect to the tetravalent modal logic $\cal TML$. ###### Theorem 2.6 (Soundness and Completeness, [13]) For every finite set $\Gamma\cup\\{\alpha\\}\subseteq Fm$, $\Gamma\models_{\cal TML}\alpha\ \ \textrm{ if and only if}\ \ \Gamma\vdash_{\mathfrak{G}}\alpha.$ Moreover, ###### Proposition 2.7 ([13]) An arbitrary equation $\psi\approx\varphi$ holds in every TMA iff $\psi\dashv\vdash_{\mathfrak{G}}\varphi$ (that is, $\psi\vdash_{\mathfrak{G}}\varphi$ and $\varphi\vdash_{\mathfrak{G}}\psi$). As a consequence, we have: ###### Corollary 2.8 ([13]) * (i) The equation $\psi\approx 1$ holds in every TMA iff $\vdash_{\mathfrak{G}}\psi$. * (ii) For any $\psi,\varphi\in Fm$, $\psi\vdash_{\mathfrak{G}}\varphi$ iff $h(\psi)\leq h(\varphi)$ for every $h\in Hom(\mathfrak{Fm},\mathbb{A})$, for every $\mathbb{A}\in{\bf TMA}$. ## 3 $\mathfrak{G}$ does not admit a cut–elimination theorem Corollary 2.8 is a powerful tool to determine whether a given sequent of $\mathfrak{G}$ is provable or not. For instance, ###### Proposition 3.1 In $\mathfrak{G}$ we have that the sequent $\neg\square\alpha\Rightarrow\alpha$ is provable iff the sequent $\Rightarrow\alpha$ is provable. Proof. Indeed, suppose that the sequent $\neg\square\alpha\Rightarrow\alpha$ is provable in $\mathfrak{G}$. Then, $h(\neg\square\alpha)\leq h(\alpha)$ for all $h\in Hom(\mathfrak{Fm},\mathfrak{M}_{4m})$. But inspecting the four possible values of $h(\alpha)$ shows that this inequality forces $h(\alpha)=1$ (and hence $h(\neg\square\alpha)=0$) for all $h$, and therefore the sequent $\Rightarrow\alpha$ is provable in $\mathfrak{G}$. The converse is straightforward.
$\boldsymbol{\blacksquare}$ Recall that a rule of inference is admissible in a formal system if the set of theorems of the system is closed under the rule; and a rule is said to be derivable in the same formal system if its conclusion can be derived from its premises using the other rules of the system. A well–known rule for readers familiar with modal logic is the Rule of Necessitation, which states that if $\varphi$ is a theorem, so is $\square\varphi$. Formally, $\mbox{(Nec) \,}\frac{\Rightarrow\varphi}{\Rightarrow\square\varphi}$ Then, we have: ###### Lemma 3.2 The Rule of Necessitation is admissible in $\mathfrak{G}$. Proof. From Corollary 2.8, considering the algebra $\mathfrak{M}_{4m}$. $\boldsymbol{\blacksquare}$ From the above lemma, we can obtain a proof of $\Rightarrow\square(\alpha\vee\neg\square\alpha)$ in $\mathfrak{G}$, for any $\alpha\in Fm$. Let $\Pi$ be a proof of $\Rightarrow\square(\alpha\vee\neg\square\alpha)$ and let ($r$) be the last rule application in $\Pi$. Clearly, $\Pi$ makes use of more than one rule, since $\square(\alpha\vee\neg\square\alpha)$ is not an axiom. Then, we have the following two cases: Case 1: the last inference of $\Pi$ is an application of a one-premise rule, $\displaystyle\frac{\Gamma\Rightarrow\varphi}{\Rightarrow\square(\alpha\vee\neg\square\alpha)}\,\mbox{($r$)}$ Case 2: the last inference of $\Pi$ is an application of a two-premise rule, $\displaystyle\frac{\Gamma_{1}\Rightarrow\varphi_{1}\hskip 14.22636pt\Gamma_{2}\Rightarrow\varphi_{2}}{\Rightarrow\square(\alpha\vee\neg\square\alpha)}\,\mbox{($r$)}$ In Case 1, ($r$) has just one premise, and therefore it can be: ($\bot$), weakening, ($\wedge\Rightarrow$), ($\vee\Rightarrow$), ($\Rightarrow\vee$), ($\neg$), ($\neg\neg\Rightarrow$), ($\Rightarrow\neg\neg$), ($\square\Rightarrow$) or ($\Rightarrow\square$). In the case of ($\bot$), the only possibility is having $\Gamma=\emptyset$. But this would imply that the sequent $\Rightarrow\bot$ is provable, which contradicts the soundness of $\mathfrak{G}$. Thus, this case is discarded.
On the other hand, none of the other rules above has the structure of ($r$), so they are also discarded. Therefore, $\Pi$ is of the form depicted in Case 2. Then, ($r$) must be one of the following: the cut rule, ($\Rightarrow\wedge$) or ($\vee\Rightarrow$). It is clear that ($r$) can be neither ($\Rightarrow\wedge$) nor ($\vee\Rightarrow$). Consequently, ($r$) must be the cut rule. We have just proved, therefore, the following assertion. ###### Proposition 3.3 Every proof of $\Rightarrow\square(\alpha\vee\neg\square\alpha)$ in $\mathfrak{G}$ uses the cut rule. Moreover, we have: ###### Lemma 3.4 For every $\varphi\in Fm$ such that $\Rightarrow\varphi$ is provable in $\mathfrak{G}$, we have that $\Rightarrow\square\varphi$ is provable in $\mathfrak{G}$; and every proof of $\Rightarrow\square\varphi$ in $\mathfrak{G}$ makes use of the cut rule. Consequently, ###### Theorem 3.5 $\mathfrak{G}$ does not admit cut–elimination. ## 4 The general method of Avron, Ben-Naim and Konikowska In [3], A. Avron and B. Konikowska use the Rasiowa–Sikorski decomposition methodology to obtain sound and complete proof systems employing $n$-sequents for all propositional logics based on non-deterministic matrices. Later, the same authors, jointly with J. Ben-Naim ([4]), presented a general method to transform a given sound and complete $n$-sequent proof system into an equivalent sound and complete system of ordinary two-sided sequents (for languages satisfying a certain minimal expressiveness condition). In this section we recall both methods, considering ordinary (deterministic) matrices. In what follows, $\mathscr{L}$ is a propositional language and (in this section) $\mathfrak{Fm}$ is the absolutely free algebra over $\mathscr{L}$ generated by some denumerable set of variables, with underlying set (of formulas) $Fm$. Let ${\cal M}=\langle{\cal T},{\cal D},{\cal O}\rangle$ be a logical matrix for $\mathscr{L}$.
As we said, a valuation $v$ in ${\cal M}$ satisfies a given formula $\alpha$ if $v(\alpha)\in{\cal D}$. A sequent $\Gamma\Rightarrow\Delta$ is satisfied by the valuation $v$, in symbols $v\models\,\Gamma\Rightarrow\Delta$, if either $v$ does not satisfy some formula in $\Gamma$ or $v$ satisfies some formula in $\Delta$. A sequent is valid if it is satisfied by all valuations. Now, suppose that ${\cal T}=\\{t_{0},\dots,t_{n-1}\\}$, where $n\geq 2$, and ${\cal D}=\\{t_{d},\dots,t_{n-1}\\}$, where $1\leq d\leq n-1$. ###### Definition 4.1 (see [3]) An $n$–sequent over $\mathscr{L}$ is an expression $\Gamma_{0}\mid\dots\mid\Gamma_{n-1}$ where, for each $i$, $\Gamma_{i}$ is a finite set of formulas. A valuation $v$ satisfies the $n$–sequent $\Gamma_{0}\mid\dots\mid\Gamma_{n-1}$ iff there exist $i$, $0\leq i\leq n-1$, and $\psi\in\Gamma_{i}$ such that $v(\psi)=t_{i}$. An $n$–sequent is valid if it is satisfied by every valuation $v$. Note that a valuation $v$ satisfies an ordinary sequent $\Gamma\Rightarrow\Delta$ iff $v$ satisfies the $n$–sequent $\Gamma_{0}\mid\dots\mid\Gamma_{n-1}$ where $\Gamma_{i}=\Gamma$ for all $0\leq i\leq d-1$ and $\Gamma_{j}=\Delta$ for all $d\leq j\leq n-1$. An alternative presentation of $n$-sequents is by means of sets of signed formulas. A signed formula over the language $\mathscr{L}$ and ${\cal T}$ is an expression of the form $t_{i}:\psi$ where $t_{i}\in{\cal T}$ and $\psi\in Fm$. A valuation $v$ satisfies the signed formula $t_{i}:\psi$ iff $v(\psi)=t_{i}$. If ${\cal U}\subseteq{\cal T}$ and $\Gamma\subseteq Fm$, we denote by ${\cal U}:\Gamma$ the set ${\cal U}:\Gamma=\\{t:\alpha\mid t\in{\cal U},\alpha\in\Gamma\\}$ If ${\cal U}=\\{t\\}$, we write $t:\Gamma$ instead of $\\{t\\}:\Gamma$. A valuation satisfies the set of signed formulas ${\cal U}:\Gamma$ if it satisfies some signed formula of ${\cal U}:\Gamma$; and we say that ${\cal U}:\Gamma$ is valid if it is satisfied by every valuation $v\in{\cal V}$.
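The correspondence just noted between ordinary sequents and $n$-sequents is easy to check mechanically. The sketch below (our own encoding, over a toy three-valued matrix with $d=1$) verifies that a valuation satisfies $\Gamma\Rightarrow\Delta$ exactly when it satisfies the associated $n$-sequent with $\Gamma$ in the non-designated slots and $\Delta$ in the designated ones:

```python
from itertools import product

T = ["t0", "t1", "t2"]          # truth values of a toy 3-valued matrix (our assumption)
D = {"t1", "t2"}                # designated values, so d = 1

def sat_sequent(v, gamma, delta):
    """v |= Gamma => Delta: v fails some Gamma-formula or satisfies some Delta-formula."""
    return any(v[g] not in D for g in gamma) or any(v[x] in D for x in delta)

def sat_nsequent(v, slots):
    """v satisfies Gamma_0 | ... | Gamma_{n-1}: some formula sits in the slot of its value."""
    return any(v[f] == T[i] for i, gs in enumerate(slots) for f in gs)

gamma, delta = ["p"], ["q"]
# associated n-sequent: Gamma in slots 0..d-1, Delta in slots d..n-1
slots = [gamma, delta, delta]
agree = all(sat_sequent(v, gamma, delta) == sat_nsequent(v, slots)
            for vals in product(T, repeat=2)
            for v in [dict(zip(["p", "q"], vals))])
```

The flag `agree` is true for all nine valuations, matching the observation in the text.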
It is clear that the $n$–sequent $\Gamma_{0}\mid\dots\mid\Gamma_{n-1}$ is valid iff the set of signed formulas $\bigcup\limits_{i=0}^{n-1}t_{i}:\Gamma_{i}$ is valid. A. Avron and B. Konikowska developed in [3] a generic $n$-sequent system for any logic based on an $n$-valued matrix. Consider the $n$-valued matrix ${\cal M}=\langle{\cal T},{\cal D},{\cal O}\rangle$ and let $SF_{\cal M}$ be the system defined as follows: for $\Omega$ and $\Omega^{\prime}$ sets of signed formulas * • Axioms: ${\cal T}:\alpha$ * • Structural rules: Weakening: $\displaystyle\frac{\Omega}{\Omega^{\prime}}\hskip 8.5359pt\mbox{ in case }\hskip 5.69046pt\Omega\subseteq\Omega^{\prime}$ * • Logical rules: for each $k$-ary connective $f$ and every $(a_{1},\dots,a_{k})\in{\cal T}^{k}$ $\displaystyle\frac{\Omega,a_{1}:\alpha_{1}\,\,\dots\,\,\Omega,a_{k}:\alpha_{k}}{\Omega,\hat{f}(a_{1},\dots,a_{k}):f(\alpha_{1},\dots,\alpha_{k})}$ ###### Theorem 4.2 ([3]) The system $SF_{\cal M}$ is sound and complete w.r.t. the matrix ${\cal M}$. Let $Fm_{p}$ be the set of all formulas of $Fm$ that have $p$ as their only propositional variable, i.e., $Fm_{p}=\\{\alpha\in Fm:Var(\alpha)=\\{p\\}\\}$. Let ${\cal M}=\langle{\cal T},{\cal D},{\cal O}\rangle$ be a logical matrix and denote by ${\cal N}$ the set ${\cal T}\setminus{\cal D}$.
###### Definition 4.3 ([4]) The language $\mathscr{L}$ is sufficiently expressive for ${\cal M}$ iff for any $i$, $0\leq i\leq n-1$, there exist natural numbers $l_{i},m_{i}$ and formulas $\alpha_{j}^{i},\beta_{k}^{i}\in Fm_{p}$, for $1\leq j\leq l_{i}$ and $1\leq k\leq m_{i}$, such that for any valuation $v$ the following conditions hold: (i) $\alpha_{1}^{i}=p$ if $t_{i}\in{\cal N}$ and $\beta_{1}^{i}=p$ if $t_{i}\in{\cal D}$, (ii) For $\varphi\in Fm$ and $t_{i}\in{\cal T}$ $v(\varphi)=t_{i}\,\Leftrightarrow\ v(\alpha_{1}^{i}[p/\varphi]),\dots,v(\alpha_{l_{i}}^{i}[p/\varphi])\in{\cal N}\ \mbox{ and }\ v(\beta_{1}^{i}[p/\varphi]),\dots,v(\beta_{m_{i}}^{i}[p/\varphi])\in{\cal D}$ where $\alpha_{j}^{i}[p/\varphi]$ ($\beta_{k}^{i}[p/\varphi]$) is the formula obtained by the substitution of $p$ by $\varphi$ in $\alpha_{j}^{i}$ ($\beta_{k}^{i}$). Note that, as mentioned in [4], condition (i) above is not really limiting, since given $\alpha_{j}^{i},\beta_{k}^{i}$ satisfying (ii), we can simply add to them the necessary formula $p$ without violating (ii). Condition (i) will only be used for a backward translation from ordinary sequents to $n$-sequents, and will be disregarded otherwise. If $\Gamma$ is a set of formulas and $\alpha\in Fm_{p}$, we denote by $\alpha[\Gamma]$ the set $\alpha[\Gamma]=\\{\alpha[p/\gamma]\mid\gamma\in\Gamma\\}$ The method is based on replacing each $n$-sequent by a semantically equivalent set of two-sided sequents. Let $\mathscr{L}$ be a sufficiently expressive language and, for $0\leq i\leq n-1$, let $l_{i}$, $m_{i}$, $\alpha_{j}^{i}$ and $\beta_{k}^{i}$ be as in Definition 4.3. Consider the $n$–sequent $\Sigma=\Gamma_{0}\mid\dots\mid\Gamma_{n-1}$ over $\mathscr{L}$.
A partition $\pi$ of the $n$–sequent $\Sigma$ is a tuple $\pi=(\pi_{0},\dots,\pi_{n-1})$ such that, for every $i$, $\pi_{i}$ is a partition of the set $\Gamma_{i}$ of the form: $\pi_{i}=\\{\Gamma^{\prime}_{ij}\mid 1\leq j\leq l_{i}\\}\cup\\{\Gamma^{\prime\prime}_{ik}\mid 1\leq k\leq m_{i}\\}$ Note that $\pi_{i}$ is not a partition in the usual sense, since its components are allowed to be empty. Besides, observe that the number of sets in this partition is exactly the number of formulas corresponding to $i$ in Definition 4.3. Then, given a partition $\pi$ of the $n$-sequent $\Sigma$, we define the two-sided sequent $\Sigma_{\pi}$ determined by $\Sigma$ and the partition $\pi$, as follows: $\bigcup\limits_{j=1}^{l_{0}}\alpha_{j}^{0}[\Gamma^{\prime}_{0j}],\dots,\bigcup\limits_{j=1}^{l_{n-1}}\alpha_{j}^{n-1}[\Gamma^{\prime}_{(n-1)j}]\,\Rightarrow\,\bigcup\limits_{k=1}^{m_{0}}\beta_{k}^{0}[\Gamma^{\prime\prime}_{0k}],\dots,\bigcup\limits_{k=1}^{m_{n-1}}\beta_{k}^{n-1}[\Gamma^{\prime\prime}_{(n-1)k}]$ Let $\Pi$ be the set of all partitions of the $n$–sequent $\Sigma$. Then, the set $TWO(\Sigma)$ is defined as follows: $TWO(\Sigma)=\\{\Sigma_{\pi}\mid\pi\in\Pi\\}$ ###### Theorem 4.4 ([4]) Let $\Sigma$ be an $n$–sequent over $\mathscr{L}$ and $v$ a valuation. Then, $v$ satisfies $\Sigma$ iff $v$ satisfies $\Sigma^{\prime}$, for every $\Sigma^{\prime}\in TWO(\Sigma)$. ###### Definition 4.5 ([4]) Let ${\cal C}$ be an $n$–sequent calculus over $\mathscr{L}$. Then, let $TWO({\cal C})$ be the (ordinary) sequent calculus over $\mathscr{L}$ given by: * Axioms: $TWO(A)$, for every axiom $A$ of $\cal C$, * Inference rules: $\displaystyle\frac{TWO(S)}{\Sigma^{\prime}}$, where $S$ is a finite set of $n$-sequents, $R$ is an $n$-sequent such that $\displaystyle\frac{S}{R}$ is a rule in $\cal C$, and $\Sigma^{\prime}\in TWO(R)$.
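The construction of $TWO(\Sigma)$ lends itself to a small prototype. The sketch below (our own Python encoding, with `~` standing for $\neg$; the translation parts for the four-valued case anticipate the instance worked out in Section 5) enumerates all partitions of an $n$-sequent and, applied to the axiom $\alpha\mid\alpha\mid\alpha\mid\alpha$, confirms that every resulting two-sided sequent extends an identity sequent:

```python
from itertools import product

def two(slots, parts):
    """All two-sided sequents Sigma_pi determined by the n-sequent `slots`.

    parts[i] lists the labeled parts of slot i as pairs (side, f): a formula
    assigned to that part is translated by f and sent to the antecedent ("L")
    or the succedent ("R") -- the roles of alpha_j^i and beta_k^i above."""
    out = set()
    assignments = [list(product(ps, repeat=len(g))) for g, ps in zip(slots, parts)]
    for choice in product(*assignments):
        left, right = set(), set()
        for gamma, assign in zip(slots, choice):
            for formula, (side, f) in zip(gamma, assign):
                (left if side == "L" else right).add(f(formula))
        out.add((frozenset(left), frozenset(right)))
    return out

NOT = lambda x: "~" + x
SAME = lambda x: x
# Translation parts for the four truth values 0, n, b, 1 (cf. Section 5):
M4_PARTS = [
    [("L", SAME), ("R", NOT)],   # value 0: phi to the left, ~phi to the right
    [("L", SAME), ("L", NOT)],   # value n: phi and ~phi to the left
    [("R", SAME), ("R", NOT)],   # value b: phi and ~phi to the right
    [("L", NOT), ("R", SAME)],   # value 1: ~phi to the left, phi to the right
]

axiom = two([["a"], ["a"], ["a"], ["a"]], M4_PARTS)
# every translated axiom is a weakening of  a => a  or of  ~a => ~a
weakens_identity = all(("a" in l and "a" in r) or ("~a" in l and "~a" in r)
                       for l, r in axiom)
```

As a sanity check, with the classical two-valued parts ($\alpha_1^0=p$, $\beta_1^1=p$) the same function collapses $TWO(\Gamma_0\mid\Gamma_1)$ to the single sequent $\Gamma_0\Rightarrow\Gamma_1$.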
Then, ###### Theorem 4.6 ([4]) If an $n$–sequent $\Sigma$ is provable in $\cal C$, then each two-sided sequent $\Sigma^{\prime}\in TWO(\Sigma)$ is provable in $TWO({\cal C})$. ###### Theorem 4.7 ([4]) Let $\mathscr{L}$ be a sufficiently expressive language for ${\cal M}$, and let $\cal C$ be a sound and complete sequent calculus w.r.t. $\cal M$. Then, $TWO({\cal C})$ is sound and complete w.r.t. ${\cal M}$. The analogue of the cut rule for ordinary sequents is the following generalized cut rule for sets of signed formulas: $\displaystyle\frac{\Omega\cup\\{i:\alpha\,|\,i\in I\\}\hskip 14.22636pt\Omega\cup\\{j:\alpha\,|\,j\in J\\}}{\Omega}\hskip 14.22636pt\mbox{ for }I,J\subseteq{\cal T},I\cap J=\emptyset$ ###### Theorem 4.8 ([4]) Under the conditions of Theorem 4.7, the cut rule is admissible in $TWO({\cal C})$. In particular, if $\cal C$ is obtained by the method of [3], then the cut rule is admissible in $TWO({\cal C})$. As observed in [4], the $n$-sequent calculi obtained using the above general method are hardly optimal (the same is true for the two-sided calculi). We can use the three general streamlining principles from [3] to reduce the calculi to a more compact form. The three streamlining principles are: Principle 1: deleting a derivable rule; Principle 2: simplifying a rule by replacing it with one with weaker premises; and Principle 3: combining two context–free rules with the same conclusion into one. Recall that a rule $R$ is context-free if, whenever $\frac{\phi_{1}\dots\phi_{n}}{\Sigma}$ is a valid application of $R$ and $\Sigma^{\prime}$ is a set of signed formulas, then $\frac{\phi_{1}\cup\Sigma^{\prime}\dots\phi_{n}\cup\Sigma^{\prime}}{\Sigma\cup\Sigma^{\prime}}$ is also a valid application of $R$.
A rule $R$ of an ordinary two–sided sequent calculus is context–free if, whenever $\displaystyle\frac{\Gamma_{1}\Rightarrow\Delta_{1},\dots,\Gamma_{k}\Rightarrow\Delta_{k}}{\Gamma\Rightarrow\Delta}$ is a valid application of $R$, then $\displaystyle\frac{\Gamma_{1},\Gamma^{\prime}\Rightarrow\Delta_{1},\Delta^{\prime},\dots,\Gamma_{k},\Gamma^{\prime}\Rightarrow\Delta_{k},\Delta^{\prime}}{\Gamma,\Gamma^{\prime}\Rightarrow\Delta,\Delta^{\prime}}$ is also a valid application of $R$, where $\Gamma^{\prime}$ and $\Delta^{\prime}$ are finite sets of formulas. Of these three, the first and the third decrease the number of rules, while the second simplifies a rule by decreasing the number of its premises. It is worth mentioning that applying Principles 1–3 preserves the cut–elimination property, since cut-elimination is obtained via the completeness result and the principles are designed to retain completeness. ## 5 Cut–free sequent calculus for ${\cal TML}$ Now, we shall use the method exhibited in Section 4 to develop a $4$-sequent calculus for $\cal TML$. In this case, we shall use its alternative presentation provided by sets of $4$-signed formulas. Let ${\cal SF}_{4}$ be the $4$-sequent calculus given by: for $\alpha,\beta\in Fm$, and $\Omega$ and $\Omega^{\prime}$ arbitrary sets of signed formulas Axioms: $\\{0:\alpha,{\bf n}:\alpha,{\bf b}:\alpha,1:\alpha\\}$. Structural rules: Weakening.
$\displaystyle\frac{\Omega}{\Omega^{\prime}}\hskip 8.5359pt\mbox{ in case }\hskip 5.69046pt\Omega\subseteq\Omega^{\prime}$ Logical rules: for $i,j\in M_{4}$ $\mbox{($\vee_{ij}$) \, }\displaystyle\frac{\Omega,i:\alpha\hskip 28.45274pt\Omega,j:\beta}{\Omega,\mathop{\rm Sup}\nolimits\\{i,j\\}:\alpha\vee\beta}\hskip 71.13188pt\mbox{($\wedge_{ij}$) \, }\displaystyle\frac{\Omega,i:\alpha\hskip 28.45274pt\Omega,j:\beta}{\Omega,\mathop{\rm Inf}\nolimits\\{i,j\\}:\alpha\wedge\beta}$ $\mbox{($\neg_{0}$) \, }\displaystyle\frac{\Omega,0:\alpha}{\Omega,1:\neg\alpha}\hskip 28.45274pt\mbox{($\neg_{\bf n}$) \, }\displaystyle\frac{\Omega,{\bf n}:\alpha}{\Omega,{\bf n}:\neg\alpha}\hskip 28.45274pt\mbox{($\neg_{\bf b}$) \, }\displaystyle\frac{\Omega,{\bf b}:\alpha}{\Omega,{\bf b}:\neg\alpha}\hskip 28.45274pt\mbox{($\neg_{1}$) \, }\displaystyle\frac{\Omega,1:\alpha}{\Omega,0:\neg\alpha}$ $\mbox{($\square_{i}$) \, }\displaystyle\frac{\Omega,i:\alpha}{\Omega,0:\square\alpha},\mbox{ for }i\not=1\hskip 42.67912pt\mbox{($\square_{1}$) \, }\displaystyle\frac{\Omega,1:\alpha}{\Omega,1:\square\alpha}$ In rules ($\vee_{ij}$) (resp. ($\wedge_{ij}$)), the supremum (resp. infimum) is taken in the lattice $M_{4}$. Besides, observe that the system ${\cal SF}_{4}$ has forty logical rules and is not optimal. However, at this stage we do not yet apply the principles mentioned in Section 4 to reduce ${\cal SF}_{4}$. ###### Proposition 5.1 * (i) ${\cal SF}_{4}$ is sound and complete w.r.t. the matrix ${\cal M}_{4}$, * (ii) the cut rule is admissible in ${\cal SF}_{4}$. Proof. From Theorem 4.2. $\boldsymbol{\blacksquare}$ Now, we shall apply the method described in Section 4 to translate ${\cal SF}_{4}$ into an ordinary two-sided sequent calculus. ###### Proposition 5.2 The language $\mathscr{L}$ is sufficiently expressive for the semantics determined by the matrix ${\cal M}_{4}$. Proof.
Let $v:Fm\to M_{4}$ be a valuation and let $\alpha\in Fm$ be an arbitrary formula; then we have that $v(\alpha)=0\,\Longleftrightarrow\,v(\alpha)\in{\cal N}\mbox{ and }v(\neg\alpha)\in{\cal D}$ $v(\alpha)={\bf n}\,\Longleftrightarrow\,v(\alpha)\in{\cal N}\mbox{ and }v(\neg\alpha)\in{\cal N}$ $v(\alpha)={\bf b}\,\Longleftrightarrow\,v(\alpha)\in{\cal D}\mbox{ and }v(\neg\alpha)\in{\cal D}$ $v(\alpha)=1\,\Longleftrightarrow\,v(\alpha)\in{\cal D}\mbox{ and }v(\neg\alpha)\in{\cal N}$ where ${\cal N}=M_{4}\setminus{\cal D}=\\{0,{\bf n}\\}$. $\boldsymbol{\blacksquare}$ According to Theorem 4.7, to transform ${\cal SF}_{4}$ into an ordinary calculus, we have to replace every axiom $A$ with the equivalent set of ordinary sequents $TWO(A)$. In terms of $4$-sequents, the only axiom of ${\cal SF}_{4}$ has the form $\alpha\mid\alpha\mid\alpha\mid\alpha$ and it yields the following ordinary two-sided sequents $\alpha,\neg\alpha\Rightarrow\alpha\hskip 28.45274pt\alpha,\neg\alpha\Rightarrow\neg\alpha\hskip 28.45274pt\alpha\Rightarrow\neg\alpha,\alpha\hskip 28.45274pt\alpha,\neg\alpha\Rightarrow\neg\alpha,\alpha\hskip 28.45274pt\neg\alpha\Rightarrow\neg\alpha,\alpha$ All of them can be derived from $\alpha\Rightarrow\alpha$ (or from an instance of it) by the use of weakening. Now, let us focus on rules ($\vee_{ij}$), $i,j\in M_{4}$.
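The Sup and Inf computations that drive the ($\vee_{ij}$) and ($\wedge_{ij}$) rules are easy to tabulate. A Python sketch of our encoding of $M_4$ (names such as `join`, `meet` and `box` are our assumptions) also confirms the count of forty logical rules and, semantically, the validity of $\square(\alpha\vee\neg\square\alpha)$ discussed in Section 3:

```python
# The four truth values of M4: 0 < n < 1 and 0 < b < 1, with n, b incomparable.
VALUES = "0nb1"
NEG = {"0": "1", "n": "n", "b": "b", "1": "0"}   # De Morgan negation
D = {"b", "1"}                                   # designated values

def leq(a, b):
    return a == b or a == "0" or b == "1"

def join(a, b):   # Sup in the lattice M4
    if leq(a, b): return b
    if leq(b, a): return a
    return "1"    # n and b are incomparable

def meet(a, b):   # Inf in M4
    if leq(a, b): return a
    if leq(b, a): return b
    return "0"

def box(a):       # tetravalent modal operator: box(1) = 1, box(x) = 0 otherwise
    return "1" if a == "1" else "0"

# One n-sequent logical rule per connective and per tuple of truth values:
ARITIES = {"or": 2, "and": 2, "neg": 1, "box": 1}
rule_count = sum(len(VALUES) ** k for k in ARITIES.values())   # 16 + 16 + 4 + 4

# Semantic check: box(a v ~box(a)) takes the value 1 at every a, so the
# sequent => box(alpha v ~box alpha) is valid in the matrix.
always_one = all(box(join(a, NEG[box(a)])) == "1" for a in VALUES)
```

The forty instances are exactly the sixteen ($\vee_{ij}$), sixteen ($\wedge_{ij}$), four ($\neg_i$) and four ($\square_i$) rules of ${\cal SF}_{4}$.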
First observe that, if $\varphi\in Fm$ then $TWO(\varphi\mid\hskip 8.5359pt\mid\hskip 8.5359pt\mid\hskip 8.5359pt)=\\{\varphi\Rightarrow\,,\Rightarrow\neg\varphi\\}$ $TWO(\hskip 8.5359pt\mid\varphi\mid\hskip 8.5359pt\mid\hskip 8.5359pt)=\\{\varphi\Rightarrow\,,\neg\varphi\Rightarrow\\}$ $TWO(\hskip 8.5359pt\mid\hskip 8.5359pt\mid\varphi\mid\hskip 8.5359pt)=\\{\Rightarrow\varphi\,,\Rightarrow\neg\varphi\\}$ $TWO(\hskip 8.5359pt\mid\hskip 8.5359pt\mid\hskip 8.5359pt\mid\varphi)=\\{\neg\varphi\Rightarrow\,,\Rightarrow\varphi\\}$ So, after removing the contexts for brevity, the rules ($\vee_{ij}$) are translated to the following thirty-two two-sided sequent rules (each line below abbreviates two rules, one for each of the alternative conclusions separated by a semicolon):

($\vee$)10 \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\beta\Rightarrow\hskip 8.5359pt\Rightarrow\neg\beta}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)1n \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\beta\Rightarrow\hskip 8.5359pt\neg\beta\Rightarrow}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)1b \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\beta\hskip 8.5359pt\Rightarrow\neg\beta}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)11 \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\beta\hskip 8.5359pt\neg\beta\Rightarrow}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)b0 \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\Rightarrow\neg\alpha\hskip 8.5359pt\beta\Rightarrow\hskip 8.5359pt\Rightarrow\neg\beta}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\Rightarrow\neg(\alpha\vee\beta)}$

($\vee$)bn \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\Rightarrow\neg\alpha\hskip 8.5359pt\beta\Rightarrow\hskip 8.5359pt\neg\beta\Rightarrow}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)bb \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\Rightarrow\neg\alpha\hskip 8.5359pt\Rightarrow\beta\hskip 8.5359pt\Rightarrow\neg\beta}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\Rightarrow\neg(\alpha\vee\beta)}$

($\vee$)b1 \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\Rightarrow\neg\alpha\hskip 8.5359pt\Rightarrow\beta\hskip 8.5359pt\neg\beta\Rightarrow}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)n0 \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\beta\Rightarrow\hskip 8.5359pt\Rightarrow\neg\beta}{\alpha\vee\beta\Rightarrow\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)nn \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\beta\Rightarrow\hskip 8.5359pt\neg\beta\Rightarrow}{\alpha\vee\beta\Rightarrow\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)nb \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\beta\hskip 8.5359pt\Rightarrow\neg\beta}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)n1 \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\beta\hskip 8.5359pt\neg\beta\Rightarrow}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)00 \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\neg\alpha\hskip 8.5359pt\beta\Rightarrow\hskip 8.5359pt\Rightarrow\neg\beta}{\alpha\vee\beta\Rightarrow\mbox{ \, ; \, }\Rightarrow\neg(\alpha\vee\beta)}$

($\vee$)0n \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\neg\alpha\hskip 8.5359pt\beta\Rightarrow\hskip 8.5359pt\neg\beta\Rightarrow}{\alpha\vee\beta\Rightarrow\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

($\vee$)0b \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\neg\alpha\hskip 8.5359pt\Rightarrow\beta\hskip 8.5359pt\Rightarrow\neg\beta}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\Rightarrow\neg(\alpha\vee\beta)}$

($\vee$)01 \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\neg\alpha\hskip 8.5359pt\Rightarrow\beta\hskip 8.5359pt\neg\beta\Rightarrow}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$

Here the premises attached to a truth value follow the $TWO$ translations displayed above. At this point, we shall follow the three principles mentioned in the above section in order to reduce the number of rules. Our main tool for this job will be the next proposition. ###### Proposition 5.3 Let $\mathfrak{SC}$ be a sequent calculus in which the cut rule is admissible, let $S$ be a set of sequents and let $\Sigma$ be a sequent such that $\displaystyle\frac{S\cup\\{\Gamma\Rightarrow\Delta,\varphi\\}}{\Sigma}$ and $\displaystyle\frac{S\cup\\{\Gamma,\varphi\Rightarrow\Delta\\}}{\Sigma}$ are two context-free rules of $\mathfrak{SC}$. Then, $\displaystyle\frac{S}{\Sigma}$ is derivable in $\mathfrak{SC}$. Proof. From the fact that the rules are context-free and using the cut rule. $\boldsymbol{\blacksquare}$ Then, from ($\vee$)10, ($\vee$)1n and Proposition 5.3 we get $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\beta\Rightarrow}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$. From ($\vee$)1b, ($\vee$)11 and Proposition 5.3 we get $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\beta}{\Rightarrow\alpha\vee\beta\mbox{ \, ; \, }\neg(\alpha\vee\beta)\Rightarrow}$.
From these rules and Proposition 5.3 we obtain (1) $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow}{\Rightarrow\alpha\vee\beta}$ and (1’) $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow}{\neg(\alpha\vee\beta)\Rightarrow}$. Analogously, from ($\vee$)b0, ($\vee$)bn, ($\vee$)bb, ($\vee$)b1 we obtain (2) $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\Rightarrow\neg\alpha}{\Rightarrow\alpha\vee\beta}$. Finally, from (1), (2) and Proposition 5.3 we get that (3) $\displaystyle\frac{\Rightarrow\alpha}{\Rightarrow\alpha\vee\beta}$ is derivable. On the other hand, following an analogous reasoning we can prove that (4) $\displaystyle\frac{\Rightarrow\beta}{\Rightarrow\alpha\vee\beta}$ is derivable. Then, after combining rules (3) and (4) and restoring the context we get the rule $\mbox{($\Rightarrow\vee$) }\displaystyle\frac{\Gamma\Rightarrow\Delta,\alpha,\beta}{\Gamma\Rightarrow\Delta,\alpha\vee\beta}$ From ($\vee$)n0, ($\vee$)nn, ($\vee$)nb and ($\vee$)n1 we obtain $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\neg\alpha\Rightarrow}{\neg(\alpha\vee\beta)\Rightarrow}$; then, using (1’), Proposition 5.3 and restoring the context, we get (5) $\displaystyle\frac{\Gamma,\neg\alpha\Rightarrow\Delta}{\Gamma,\neg(\alpha\vee\beta)\Rightarrow\Delta}$. In a similar way, it can be proved that (6) $\displaystyle\frac{\Gamma,\neg\beta\Rightarrow\Delta}{\Gamma,\neg(\alpha\vee\beta)\Rightarrow\Delta}$ is derivable.
Then, combining (5) and (6) and restoring the context we get $\mbox{($\neg\vee\Rightarrow$) }\displaystyle\frac{\Gamma,\neg\alpha,\neg\beta\Rightarrow\Delta}{\Gamma,\neg(\alpha\vee\beta)\Rightarrow\Delta}$ From ($\vee$)n0, ($\vee$)nn, ($\vee$)00 and ($\vee$)0n, using Proposition 5.3 and restoring the context, we obtain the rule $\mbox{($\vee\Rightarrow$) }\displaystyle\frac{\Gamma,\alpha\Rightarrow\Delta\hskip 14.22636pt\Gamma,\beta\Rightarrow\Delta}{\Gamma,\alpha\vee\beta\Rightarrow\Delta}$ and, from ($\vee$)00, ($\vee$)0b, ($\vee$)b0 and ($\vee$)bb, we get $\mbox{($\Rightarrow\neg\vee$) }\displaystyle\frac{\Gamma\Rightarrow\Delta,\neg\alpha\hskip 14.22636pt\Gamma\Rightarrow\Delta,\neg\beta}{\Gamma\Rightarrow\Delta,\neg(\alpha\vee\beta)}$ In the same way, we obtain the following rules for the connective $\wedge$: $\mbox{($\wedge\Rightarrow$) \, }\displaystyle\frac{\Gamma,\alpha,\beta\Rightarrow\Delta}{\Gamma,\alpha\wedge\beta\Rightarrow\Delta}\hskip 56.9055pt\mbox{($\Rightarrow\wedge$) \, }\displaystyle\frac{\Gamma\Rightarrow\Delta,\alpha\hskip 14.22636pt\Gamma\Rightarrow\Delta,\beta}{\Gamma\Rightarrow\Delta,\alpha\wedge\beta}$ $\mbox{($\neg\wedge\Rightarrow$) \, }\displaystyle\frac{\Gamma,\neg\alpha\Rightarrow\Delta\hskip 14.22636pt\Gamma,\neg\beta\Rightarrow\Delta}{\Gamma,\neg(\alpha\wedge\beta)\Rightarrow\Delta}\hskip 42.67912pt\mbox{($\Rightarrow\neg\wedge$) \, }\displaystyle\frac{\Gamma\Rightarrow\Delta,\neg\alpha,\neg\beta}{\Gamma\Rightarrow\Delta,\neg(\alpha\wedge\beta)}$ On the other hand, rules ($\neg$)$_{i}$ with $i\in M_{4}$ are translated to (after eliminating the trivial rules)

($\neg$)0 \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\neg\alpha}{\neg\neg\alpha\Rightarrow}$ \hskip 28.45274pt ($\neg$)n \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\neg\alpha\Rightarrow}{\neg\neg\alpha\Rightarrow}$

($\neg$)b \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\Rightarrow\neg\alpha}{\Rightarrow\neg\neg\alpha}$ \hskip 28.45274pt ($\neg$)1 \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow}{\Rightarrow\neg\neg\alpha}$

From ($\neg$)0, ($\neg$)n and Proposition 5.3 on the one hand, and ($\neg$)b, ($\neg$)1 and Proposition 5.3 on the other, we obtain $\mbox{($\neg\neg\Rightarrow$) \, }\displaystyle\frac{\Gamma,\alpha\Rightarrow\Delta}{\Gamma,\neg\neg\alpha\Rightarrow\Delta}\hskip 42.67912pt\mbox{($\Rightarrow\neg\neg$) \, }\displaystyle\frac{\Gamma\Rightarrow\Delta,\alpha}{\Gamma\Rightarrow\Delta,\neg\neg\alpha}$ Finally, rules ($\square$)$_{i}$ are translated to

($\square$)0 \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\Rightarrow\neg\alpha}{\square\alpha\Rightarrow\mbox{ \, ; \, }\Rightarrow\neg\square\alpha}$ \hskip 28.45274pt ($\square$)n \, $\displaystyle\frac{\alpha\Rightarrow\hskip 8.5359pt\neg\alpha\Rightarrow}{\square\alpha\Rightarrow\mbox{ \, ; \, }\Rightarrow\neg\square\alpha}$

($\square$)b \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\Rightarrow\neg\alpha}{\square\alpha\Rightarrow\mbox{ \, ; \, }\Rightarrow\neg\square\alpha}$ \hskip 28.45274pt ($\square$)1 \, $\displaystyle\frac{\Rightarrow\alpha\hskip 8.5359pt\neg\alpha\Rightarrow}{\Rightarrow\square\alpha\mbox{ \, ; \, }\neg\square\alpha\Rightarrow}$

and, from these rules and Proposition 5.3, we obtain $\mbox{($\square\Rightarrow$)${}_{1}$ \, }\displaystyle\frac{\Gamma,\alpha\Rightarrow\Delta}{\Gamma,\square\alpha\Rightarrow\Delta}\hskip 39.83368pt\mbox{($\square\Rightarrow$)${}_{2}$ \, }\displaystyle\frac{\Gamma\Rightarrow\Delta,\neg\alpha}{\Gamma,\square\alpha\Rightarrow\Delta}\hskip 39.83368pt\displaystyle\mbox{($\Rightarrow\square$) \, }\frac{\Gamma\Rightarrow\Delta,\alpha\hskip 8.5359pt\Gamma,\neg\alpha\Rightarrow\Delta}{\Gamma\Rightarrow\Delta,\square\alpha}$ $\mbox{($\neg\square\Rightarrow$) \, }\displaystyle\frac{\Gamma\Rightarrow\Delta,\alpha\hskip 8.5359pt\Gamma,\neg\alpha\Rightarrow\Delta}{\Gamma,\neg\square\alpha\Rightarrow\Delta}\hskip 28.45274pt\mbox{($\Rightarrow\neg\square$)${}_{1}$ \, }\displaystyle\frac{\Gamma,\alpha\Rightarrow\Delta}{\Gamma\Rightarrow\Delta,\neg\square\alpha}\hskip 25.6073pt\mbox{($\Rightarrow\neg\square$)${}_{2}$ \, }\displaystyle\frac{\Gamma\Rightarrow\Delta,\neg\alpha}{\Gamma\Rightarrow\Delta,\neg\square\alpha}$ ###### Definition 5.4 Let ${\bf SC}_{\cal TML}$ be the sequent calculus given by the axiom $\alpha\Rightarrow\alpha$, the structural rules of cut and of left and right weakening $\mbox{\rm($w\Rightarrow$) \, }\displaystyle\frac{\Gamma\Rightarrow\Delta}{\Gamma,\alpha\Rightarrow\Delta}\hskip 42.67912pt\mbox{\rm($\Rightarrow w$) \, }\displaystyle\frac{\Gamma\Rightarrow\Delta}{\Gamma\Rightarrow\Delta,\alpha}$ and the logical rules ($\vee\Rightarrow$), ($\Rightarrow\vee$), ($\neg\vee\Rightarrow$), ($\Rightarrow\neg\vee$), ($\wedge\Rightarrow$), ($\Rightarrow\wedge$), ($\neg\wedge\Rightarrow$), ($\Rightarrow\neg\wedge$), ($\neg\neg\Rightarrow$), ($\Rightarrow\neg\neg$), ($\square\Rightarrow$)$_{i}$, ($\Rightarrow\square$), ($\neg\square\Rightarrow$) and ($\Rightarrow\neg\square$)$_{i}$, for $i=1,2$. We shall write $\Gamma\Leftrightarrow\Delta$ to indicate that both the sequents $\Gamma\Rightarrow\Delta$ and $\Delta\Rightarrow\Gamma$ are provable. Then, it is not difficult to verify that $\alpha\wedge\neg\alpha\Leftrightarrow\alpha\wedge\neg\square\alpha$, for every formula $\alpha$. Besides, the modal axiom $\Rightarrow\alpha\vee\neg\square\alpha$ of $\mathfrak{G}$ is derivable in ${\bf SC}_{\cal TML}$.
Indeed,

$\alpha\Rightarrow\alpha$
($\Rightarrow\neg\square$)${}_{1}$: $\Rightarrow\alpha,\neg\square\alpha$
($\Rightarrow\vee$): $\Rightarrow\alpha\vee\neg\square\alpha$

Moreover, the sequent $\Rightarrow\square(\alpha\vee\neg\square\alpha)$ is derivable in ${\bf SC}_{\cal TML}$ without the cut rule. One branch is the derivation of $\Rightarrow\alpha\vee\neg\square\alpha$ above; the other is

$\neg\alpha\Rightarrow\neg\alpha$
($\square\Rightarrow$)${}_{2}$: $\neg\alpha,\square\alpha\Rightarrow$
($\neg\neg\Rightarrow$): $\neg\alpha,\neg\neg\square\alpha\Rightarrow$
($\neg\vee\Rightarrow$): $\neg(\alpha\vee\neg\square\alpha)\Rightarrow$

and a final application of ($\Rightarrow\square$) to these two premises yields $\Rightarrow\square(\alpha\vee\neg\square\alpha)$. ###### Remark 5.5 In Font and Rius’ system $\mathfrak{G}$, the propositional constant $\bot$ is used. By following Avron, Ben-Naim and Konikowska’s method, we obtained a system in which $\bot$ does not appear. However, it is easy to check that the sequent $\neg\alpha\wedge\square\alpha\Rightarrow$ is provable in ${\bf SC}_{\cal TML}$, for any formula $\alpha$. Then, if we denote by $\bot$ the formula $\neg\alpha\wedge\square\alpha$, for any formula $\alpha$, we have that the rule $(\bot)$ of $\mathfrak{G}$ is derivable in ${\bf SC}_{\cal TML}$. ###### Theorem 5.6 * (i) ${\bf SC}_{\cal TML}$ is sound and complete w.r.t. ${\cal M}_{4}$. * (ii) The cut rule is admissible in ${\bf SC}_{\cal TML}$. Proof. The system ${\bf SC}_{\cal TML}$ was constructed according to the method displayed in Section 4. $\boldsymbol{\blacksquare}$ ###### Corollary 5.7 ${\bf SC}_{\cal TML}$ is a cut-free sequent calculus that provides a syntactical counterpart for $\cal TML$. ## 6 Some applications of the cut elimination theorem In this section, we shall use the cut-free system ${\bf SC}_{\cal TML}$ to give independent proofs of some (known) interesting properties of the logic ${\cal TML}$.
In what follows, $\Gamma$, $\Delta$ are sets of formulas and $\alpha$, $\beta$, $\psi$ are formulas. In the first place, we shall present a new independent proof of Proposition 2.5. To do this, we need the following technical result. ###### Proposition 6.1 If $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\Delta$ then, for every $\mathbb{A}\in{\bf TMA}$ and for every $h\in Hom(\mathfrak{Fm},\mathbb{A})$, $\bigwedge_{\gamma\in\Gamma}h(\gamma)\leq\bigvee_{\delta\in\Delta}h(\delta)$. Proof. Suppose that $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\Delta$ and let $\cal P$ be a cut–free proof of the sequent $\Gamma\Rightarrow\Delta$ in ${\bf SC}_{\cal TML}$. Let $\mathbb{A}\in{\bf TMA}$ and let $h\in Hom(\mathfrak{Fm},\mathbb{A})$. We use induction on the number $n$ of inferences in $\cal P$. If $n=0$, the proposition is obviously valid. (I.H.) Suppose that the proposition holds for $n<k$, $k>0$. Let $n=k$ and let $(r)$ be the last inference in $\cal P$. If $(r)$ is the right/left weakening rule, the proposition holds since $\mathbb{A}$ is, in particular, a lattice. If $(r)$ is one of the rules ($\vee\Rightarrow$), ($\Rightarrow\vee$), ($\neg\vee\Rightarrow$), ($\Rightarrow\neg\vee$), ($\wedge\Rightarrow$), ($\Rightarrow\wedge$), ($\neg\wedge\Rightarrow$), ($\Rightarrow\neg\wedge$), ($\neg\neg\Rightarrow$), ($\Rightarrow\neg\neg$), the proposition holds since $\mathbb{A}$ is, in particular, a De Morgan algebra. Finally, if $(r)$ is one of the rules ($\square\Rightarrow$)$_{i}$, ($\Rightarrow\square$), ($\neg\square\Rightarrow$), ($\Rightarrow\neg\square$)$_{i}$, $i=1,2$, then the proposition holds since $\mathbb{A}$ is a tetravalent modal algebra. For instance, suppose that $(r)$ is $(\Rightarrow\square)$ and the last inference of $\cal P$ is $\displaystyle\frac{\Gamma\Rightarrow\Delta,\alpha\hskip 14.22636pt\Gamma,\neg\alpha\Rightarrow\Delta}{\Gamma\Rightarrow\Delta,\square\alpha}$.
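Setting aside Proposition 2.2 for a moment, the inequality this case needs can also be confirmed by brute force in the four-element algebra (our own Python encoding; $x$ stands for $\bigwedge_{\gamma\in\Gamma}h(\gamma)$ and $d$ for $\bigvee_{\delta\in\Delta}h(\delta)$):

```python
def leq(a, b):  # order of M4: 0 < n, b < 1, with n and b incomparable
    return a == b or a == "0" or b == "1"

def join(a, b):
    if leq(a, b): return b
    if leq(b, a): return a
    return "1"

def meet(a, b):
    if leq(a, b): return a
    if leq(b, a): return b
    return "0"

NEG = {"0": "1", "n": "n", "b": "b", "1": "0"}
box = lambda a: "1" if a == "1" else "0"

# If x <= d v a and x ^ ~a <= d, then x <= d v box(a), for all x, d, a in M4:
holds = all(leq(x, join(d, box(a)))
            for x in "0nb1" for d in "0nb1" for a in "0nb1"
            if leq(x, join(d, a)) and leq(meet(x, NEG[a]), d))
```

The second hypothesis is essential: dropping it breaks the conclusion already at $x=a=\mathbf{n}$, $d=0$.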
By (I.H.), we have (1) $\bigwedge_{\gamma\in\Gamma}h(\gamma)\leq\bigvee_{\delta\in\Delta}h(\delta)\vee h(\alpha)$ and (2) $\bigwedge_{\gamma\in\Gamma}h(\gamma)\wedge h(\neg\alpha)\leq\bigvee_{\delta\in\Delta}h(\delta)$. Then, from (1), (2) and Proposition 2.2 we have $\bigwedge_{\gamma\in\Gamma}h(\gamma)\leq\bigvee_{\delta\in\Delta}h(\delta)\vee h(\square\alpha)$. $\boldsymbol{\blacksquare}$

###### Proposition 6.2

The following conditions are equivalent.

* (i) $\Gamma\models_{\cal TML}\psi$,
* (ii) $\Gamma\models_{{\cal M}_{4}}\psi$.

Proof. (i) implies (ii): immediate. (ii) implies (i): It is a consequence of Theorem 5.6 (i) and Proposition 6.1. $\boldsymbol{\blacksquare}$

Next, we shall prove that the rule ($\neg$) of Font and Rius' system is admissible in ${\bf SC}_{\cal TML}$. Let $X$ be a set of formulas; we shall denote by $\neg X$ the set $\neg X=\\{\neg\gamma:\gamma\in X\\}$.

###### Theorem 6.3

If $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\Delta$, then $\vdash_{{\bf SC}_{\cal TML}}\neg\Delta\Rightarrow\neg\Gamma$.

Proof. Suppose that $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\Delta$ and let $\cal P$ be a cut-free proof of the sequent $\Gamma\Rightarrow\Delta$. We use induction on the number $n$ of inferences in ${\cal P}$. If $n=0$, then $\Gamma\Rightarrow\Delta$ is $\alpha\Rightarrow\alpha$, for some $\alpha$, and $\neg\Delta\Rightarrow\neg\Gamma$ is $\neg\alpha\Rightarrow\neg\alpha$, which is provable in ${\bf SC}_{\cal TML}$. (I.H.) Suppose that the theorem holds for $n<k$, with $k>0$. Let $n=k$ and let $(r)$ be the last inference in $\cal P$. If $(r)$ is left weakening, then the last inference of $\cal P$ is $\displaystyle\frac{\Gamma\Rightarrow\Delta}{\Gamma,\alpha\Rightarrow\Delta}$. By (I.H.), $\neg\Delta\Rightarrow\neg\Gamma$ is provable in ${\bf SC}_{\cal TML}$ and using right weakening we have $\vdash_{{\bf SC}_{\cal TML}}\neg\Delta\Rightarrow\neg\Gamma,\neg\alpha$. If $(r)$ is an instance of right weakening, the treatment is analogous.
Suppose now that $(r)$ is (an instance of) a logical rule. If $(r)$ is $(\Rightarrow\vee)$ and the last inference of $\cal P$ is $\displaystyle\frac{\Gamma\Rightarrow\Delta,\alpha,\beta}{\Gamma\Rightarrow\Delta,\alpha\vee\beta}$, then by (I.H.), $\neg\alpha,\neg\beta,\neg\Delta\Rightarrow\neg\Gamma$ is provable, and using $(\neg\vee\Rightarrow)$ we have that $\neg(\alpha\vee\beta),\neg\Delta\Rightarrow\neg\Gamma$ is provable. The cases where $(r)$ is one of the rules $(\vee\Rightarrow)$, $(\Rightarrow\neg\vee)$, $(\neg\vee\Rightarrow)$, $(\Rightarrow\wedge)$, $(\wedge\Rightarrow)$, $(\Rightarrow\neg\wedge)$, $(\neg\wedge\Rightarrow)$ are left to the reader. If $(r)$ is $(\Rightarrow\neg\neg)$ and the last inference of $\cal P$ is $\displaystyle\frac{\Gamma\Rightarrow\Delta,\alpha}{\Gamma\Rightarrow\Delta,\neg\neg\alpha}$, then by (I.H.), $\neg\alpha,\neg\Delta\Rightarrow\neg\Gamma$ is provable in ${\bf SC}_{\cal TML}$ and using $(\neg\neg\Rightarrow)$ we have that $\neg\neg\neg\alpha,\neg\Delta\Rightarrow\neg\Gamma$ is provable. If $(r)$ is $(\neg\neg\Rightarrow)$ the proof is analogous. If $(r)$ is $(\square\Rightarrow)_{1}$ and the last inference of $\cal P$ is $\displaystyle\frac{\Gamma,\alpha\Rightarrow\Delta}{\Gamma,\square\alpha\Rightarrow\Delta}$, then by (I.H.), $\neg\Delta\Rightarrow\neg\Gamma,\neg\alpha$ is provable in ${\bf SC}_{\cal TML}$. Then, using $(\Rightarrow\neg\square)_{2}$, we have that $\neg\Delta\Rightarrow\neg\Gamma,\neg\square\alpha$ is provable. If $(r)$ is $(\square\Rightarrow)_{2}$ and the last inference of $\cal P$ is $\displaystyle\frac{\Gamma\Rightarrow\Delta,\neg\alpha}{\Gamma,\square\alpha\Rightarrow\Delta}$, then by (I.H.), $\neg\Delta,\neg\neg\alpha\Rightarrow\neg\Gamma$ is provable in ${\bf SC}_{\cal TML}$ and using left weakening we have (1) $\vdash_{{\bf SC}_{\cal TML}}\alpha,\neg\Delta,\neg\neg\alpha\Rightarrow\neg\Gamma$.
On the other hand, one can easily check that $\vdash_{{\bf SC}_{\cal TML}}\alpha\Rightarrow\neg\neg\alpha$ and by means of (right/left) weakening(s) we have (2) $\vdash_{{\bf SC}_{\cal TML}}\alpha,\neg\Delta\Rightarrow\neg\neg\alpha,\neg\Gamma$. From (1), (2) and the cut rule, we have $\vdash_{{\bf SC}_{\cal TML}}\alpha,\neg\Delta\Rightarrow\neg\Gamma$ (recall that the cut rule is admissible in ${\bf SC}_{\cal TML}$). Then, using $(\Rightarrow\neg\square)_{1}$, we have $\vdash_{{\bf SC}_{\cal TML}}\neg\Delta\Rightarrow\neg\Gamma,\neg\square\alpha$. If $(r)$ is $(\Rightarrow\square)$ and the last inference of $\cal P$ is $\displaystyle\frac{\Gamma\Rightarrow\Delta,\alpha\hskip 14.22636pt\Gamma,\neg\alpha\Rightarrow\Delta}{\Gamma\Rightarrow\Delta,\square\alpha}$, then by (I.H.) we have that (3) $\vdash_{{\bf SC}_{\cal TML}}\neg\alpha,\neg\Delta\Rightarrow\neg\Gamma$ and (4) $\vdash_{{\bf SC}_{\cal TML}}\neg\Delta\Rightarrow\neg\neg\alpha,\neg\Gamma$. From (4) and reasoning similar to the above, we have that (5) $\vdash_{{\bf SC}_{\cal TML}}\neg\Delta\Rightarrow\alpha,\neg\Gamma$. From (3), (5) and $(\neg\square\Rightarrow)$ we get $\vdash_{{\bf SC}_{\cal TML}}\neg\Delta,\neg\square\alpha\Rightarrow\neg\Gamma$. The cases where $(r)$ is one of the rules $(\Rightarrow\neg\square)_{1}$, $(\Rightarrow\neg\square)_{2}$ and $(\neg\square\Rightarrow)$ are treated similarly. $\boldsymbol{\blacksquare}$

###### Corollary 6.4

$(\neg)$ is admissible in ${\bf SC}_{\cal TML}$.

Finally,

###### Theorem 6.5

$\vdash_{\cal TML}\square\psi$ iff $\vdash_{\cal TML}\psi$.

Proof. ($\Longrightarrow$) Suppose that $\vdash_{\cal TML}\square\psi$. By Theorem 5.6 and Proposition 2.5 we know that the sequent $\Rightarrow\square\psi$ has a cut-free proof $\cal P$ in ${\bf SC}_{\cal TML}$. Let $(r)$ be the last inference of $\cal P$. By inspecting the rules of ${\bf SC}_{\cal TML}$ we may assert that $(r)$ has to be an instance of the rule ($\Rightarrow\square$).
So, $(r)$ is $\displaystyle\frac{\Rightarrow\psi\hskip 14.22636pt\neg\psi\Rightarrow}{\Rightarrow\square\psi}$ and clearly the sequent $\Rightarrow\psi$ is provable in ${\bf SC}_{\cal TML}$. Therefore $\vdash_{\cal TML}\psi$. ($\Longleftarrow$) Suppose that $\vdash_{\cal TML}\psi$. By Theorem 5.6 (i), we have: (1) $\Rightarrow\psi$ is provable in ${\bf SC}_{\cal TML}$. From (1) and Theorem 6.3, we have that: (2) $\neg\psi\Rightarrow$ is also provable in ${\bf SC}_{\cal TML}$. From (1), (2) and the rule ($\Rightarrow\square)$, we may assert that $\Rightarrow\square\psi$ is provable in ${\bf SC}_{\cal TML}$. Therefore, $\vdash_{\cal TML}\square\psi$. $\boldsymbol{\blacksquare}$

## 7 Natural deduction for ${\cal TML}$

In this section, we shall present a natural deduction system for ${\cal TML}$. We take our inspiration from the construction made before; in particular, it shed some light on how the connective $\square$ behaves. We think that this system shows an interesting example of a rule (different from the usual ones), namely the introduction rule for the connective $\square$, that needs to produce a discharge of hypotheses; and this is related to the intrinsic meaning of the connective. The proof system ${\bf ND}_{\cal TML}$ will be defined following the notational conventions given in [15].

###### Definition 7.1

Deductions in ${\bf ND}_{\cal TML}$ are inductively defined as follows:

Basis: The proof tree with a single occurrence of an assumption $\phi$ with a marker is a deduction with conclusion $\phi$ from the open assumption $\phi$.

Inductive step: Let ${\cal D}$, ${\cal D}_{1}$, ${\cal D}_{2}$, ${\cal D}_{3}$ be deductions. Then, they can be extended by one of the rules below. The classes $[\neg\phi]^{u}$, $[\neg\psi]^{v}$, $[\phi]^{u}$, $[\psi]^{v}$ below contain open assumptions of the deductions of the premises of the final inference, but are closed in the whole deduction.
The rules of ${\bf ND}_{\cal TML}$ are the following (premises listed before the rule name, conclusion after it):

* MA (modal axiom): $\phi\vee\neg\square\phi$ (no premises).
* $\wedge$I: from $\phi$ and $\psi$, infer $\phi\wedge\psi$.
* $\wedge$E$_{1}$: from $\phi\wedge\psi$, infer $\phi$; $\wedge$E$_{2}$: from $\phi\wedge\psi$, infer $\psi$.
* $\neg\wedge$I$_{1}$: from $\neg\phi$, infer $\neg(\phi\wedge\psi)$; $\neg\wedge$I$_{2}$: from $\neg\psi$, infer $\neg(\phi\wedge\psi)$.
* $\neg\wedge$E,$u$,$v$: from $\neg(\phi\wedge\psi)$, a deduction of $\chi$ from $[\neg\phi]^{u}$ and a deduction of $\chi$ from $[\neg\psi]^{v}$, infer $\chi$ (discharging $u$ and $v$).
* $\vee$I$_{1}$: from $\phi$, infer $\phi\vee\psi$; $\vee$I$_{2}$: from $\psi$, infer $\phi\vee\psi$.
* $\vee$E,$u$,$v$: from $\phi\vee\psi$, a deduction of $\chi$ from $[\phi]^{u}$ and a deduction of $\chi$ from $[\psi]^{v}$, infer $\chi$ (discharging $u$ and $v$).
* $\neg\vee$I: from $\neg\phi$ and $\neg\psi$, infer $\neg(\phi\vee\psi)$.
* $\neg\vee$E$_{1}$: from $\neg(\phi\vee\psi)$, infer $\neg\phi$; $\neg\vee$E$_{2}$: from $\neg(\phi\vee\psi)$, infer $\neg\psi$.
* $\neg\neg$I: from $\phi$, infer $\neg\neg\phi$; $\neg\neg$E: from $\neg\neg\phi$, infer $\phi$.
* $\square$I$^{*}$,$u$: from $\psi\vee\phi$ and a deduction of $\psi$ from $[\neg\phi]^{u}$, infer $\psi\vee\square\phi$ (discharging $u$).
* $\square$E: from $\square\phi$, infer $\phi$.
* $\neg\square$I: from $\neg\phi$, infer $\neg\square\phi$.
* $\neg\square$E: from $\neg\square\phi$ and $\phi$, infer $\neg\phi$.
* $\bot$I: from $\neg\phi\wedge\square\phi$, infer $\bot$.
* $\bot$E: from $\bot$, infer $\alpha$.

###### Remark 7.2

If we take $\psi$ as $\bot$ in $\square$I$^{*}$ we get the derived rule $\square$I,$u$: from $\phi$ and a deduction of $\bot$ from $[\neg\phi]^{u}$, infer $\square\phi$ (discharging $u$). Formally, $\square$I is derivable in ${\bf ND}_{\cal TML}$. The intuition behind this rule is the following: "if we have a deduction of $\alpha$ and $\neg\alpha$ is not provable, then we have a deduction of $\square\alpha$". As usual, by application of the rule $\neg\wedge$E a new proof tree is formed from ${\cal D}_{1}$, ${\cal D}_{2}$ and ${\cal D}_{3}$ by adding at the bottom the conclusion $\chi$ while closing the sets $[\neg\phi]^{u}$ and $[\neg\psi]^{v}$ of open assumptions marked by $u$ and $v$, respectively.
Similarly for the rules $\vee$E and $\square$I$^{*}$. Note that we have introduced the symbol $\bot$; it behaves here as an arbitrary unprovable propositional constant. Let $\Gamma\cup\\{\alpha\\}\subseteq Fm$. We say that the conclusion $\alpha$ is derivable from a set $\Gamma$ of premises, denoted $\Gamma\vdash\alpha$, if and only if there is a deduction in ${\bf ND}_{\cal TML}$ of $\alpha$ from $\Gamma$.

###### Theorem 7.3 (Soundness and Completeness)

Let $\Gamma,\Delta\subseteq Fm$, $\Gamma$ finite. The following conditions are equivalent:

* (i) the sequent $\Gamma\Rightarrow\Delta$ is derivable in ${\bf SC}_{\cal TML}$,
* (ii) there is a deduction of the disjunction of the sentences in $\Delta$ from $\Gamma$ in ${\bf ND}_{\cal TML}$.

Proof. (i) implies (ii): Suppose that the sequent $\Gamma\Rightarrow\Delta$ is derivable in ${\bf SC}_{\cal TML}$, that is, there is a formal proof $\cal P$ of $\Gamma\Rightarrow\Delta$ in ${\bf SC}_{\cal TML}$ which does not use the cut rule. We shall show that there is a deduction of the disjunction of the formulas in $\Delta$ (denoted by $\bigvee\Delta$) from $\Gamma$ in ${\bf ND}_{\cal TML}$, using induction on the number $n$ of rule applications in $\cal P$, $n\geq 0$. If $n=0$, then $\Gamma\Rightarrow\Delta$ is $\alpha\Rightarrow\alpha$ and it is clear that $\alpha\vdash\alpha$. Now, (I.H.) suppose that "(i) implies (ii)" holds for $n<k$, with $k>0$. Let $n=k$, that is, $\cal P$ is a derivation in ${\bf SC}_{\cal TML}$ whose last rule (r) has premises $\Gamma_{1}\Rightarrow\Delta_{1},\dots,\Gamma_{t}\Rightarrow\Delta_{t}$ and conclusion $\Gamma\Rightarrow\Delta$. If (r) is left weakening, then the last rule of $\cal P$ has the form $\displaystyle{\rm(r)}\frac{\Gamma^{\prime}\Rightarrow\Delta}{\Gamma^{\prime},\beta\Rightarrow\Delta}$.
By (I.H.), there exists a deduction $\cal D$ of $\bigvee\Delta$ from $\Gamma^{\prime}$. Extending $\cal D$ with the assumption $\beta$ by $\wedge$I to obtain $\bigvee\Delta\wedge\beta$, and then applying $\wedge$E$_{1}$ to recover $\bigvee\Delta$, gives a deduction of $\bigvee\Delta$ from $\Gamma^{\prime}\cup\\{\beta\\}$. If (r) is right weakening, then (r) has the form $\displaystyle{\rm(r)}\frac{\Gamma\Rightarrow\Delta^{\prime}}{\Gamma\Rightarrow\Delta^{\prime},\beta}$; then by (I.H.) there is a deduction $\cal D$ of $\bigvee\Delta^{\prime}$ from $\Gamma$, and applying $\vee$I$_{1}$ yields $\bigvee\Delta^{\prime}\vee\beta$. Now, suppose that (r) is a logical rule; we shall prove the claim just for ($\Rightarrow\vee$), ($\vee\Rightarrow$), ($\Rightarrow\neg\vee$), ($\neg\vee\Rightarrow$). If (r) is ($\Rightarrow\vee$), then we may assume that the last inference of $\cal P$ has the form $\displaystyle\mbox{($\Rightarrow\vee$)}\frac{\Gamma\Rightarrow\Delta^{\prime},\alpha,\beta}{\Gamma\Rightarrow\Delta^{\prime},\alpha\vee\beta}$. Then, by (I.H.) we have a deduction $\cal D$ of $\bigvee\Delta^{\prime}\vee\alpha\vee\beta$ from $\Gamma$ and the proof is complete. If (r) is ($\vee\Rightarrow$) and the last inference of $\cal P$ has the form $\displaystyle\mbox{($\vee\Rightarrow$)}\frac{\Gamma,\gamma_{1}\Rightarrow\Delta\hskip 14.22636pt\Gamma,\gamma_{2}\Rightarrow\Delta}{\Gamma,\gamma_{1}\vee\gamma_{2}\Rightarrow\Delta}$, then by (I.H.) there are deductions ${\cal D}_{i}$, $i=1,2$, of $\bigvee\Delta$ from $\Gamma\cup\\{\gamma_{i}\\}$. Then applying $\vee$E,$u_{1}$,$u_{2}$ to $\gamma_{1}\vee\gamma_{2}$ and the deductions ${\cal D}_{1}$, ${\cal D}_{2}$ yields a deduction of $\bigvee\Delta$ from $\Gamma\cup\\{\gamma_{1}\vee\gamma_{2}\\}$. Note that in this last deduction we have made every assumption $\gamma_{i}$ in ${\cal D}_{i}$ an open assumption with label $u_{i}$.
If (r) is ($\neg\vee\Rightarrow$) then we may assume that the last inference of ${\cal P}$ has the form $\displaystyle\mbox{($\neg\vee\Rightarrow$)}\frac{\Gamma,\neg\gamma_{1}\Rightarrow\Delta}{\Gamma,\neg(\gamma_{1}\vee\gamma_{2})\Rightarrow\Delta}$. By (I.H.), there is a deduction ${\cal D}$ of $\bigvee\Delta$ from $\Gamma\cup\\{\neg\gamma_{1}\\}$, and prefixing ${\cal D}$ with an application of $\neg\vee$E$_{1}$, which derives $\neg\gamma_{1}$ from $\neg(\gamma_{1}\vee\gamma_{2})$, gives a deduction of $\bigvee\Delta$ from $\Gamma\cup\\{\neg(\gamma_{1}\vee\gamma_{2})\\}$. If (r) is ($\Rightarrow\neg\vee$) we proceed analogously. For (r) being any of the rules ($\square\Rightarrow$)$_{i}$, ($\Rightarrow\square$), ($\neg\square\Rightarrow$), ($\Rightarrow\neg\square$)$_{i}$, $i=1,2$, we describe how the deduction corresponding to the lower sequent of (r) is built from the deduction(s) given by (I.H.) for its upper sequent(s):

* ($\square\Rightarrow$)$_{1}$, from $\Gamma,\gamma\Rightarrow\Delta$ to $\Gamma,\square\gamma\Rightarrow\Delta$: given a deduction $\cal D$ of $\bigvee\Delta$ from $\Gamma\cup\\{\gamma\\}$, prefix it with an application of $\square$E deriving $\gamma$ from $\square\gamma$.
* ($\square\Rightarrow$)$_{2}$, from $\Gamma\Rightarrow\Delta,\neg\gamma$ to $\Gamma,\square\gamma\Rightarrow\Delta$: given a deduction $\cal D$ of $\bigvee\Delta\vee\neg\gamma$ from $\Gamma$, apply $\vee$E,$u$,$v$: in the case $[\bigvee\Delta]^{u}$ we are done; in the case $[\neg\gamma]^{v}$, apply $\wedge$I with the hypothesis $\square\gamma$ to obtain $\neg\gamma\wedge\square\gamma$, then $\bot$I followed by $\bot$E yields $\bigvee\Delta$.
* ($\Rightarrow\square$), from $\Gamma\Rightarrow\Delta,\gamma$ and $\Gamma,\neg\gamma\Rightarrow\Delta$ to $\Gamma\Rightarrow\Delta,\square\gamma$: given deductions ${\cal D}_{1}$ of $\bigvee\Delta\vee\gamma$ from $\Gamma$ and ${\cal D}_{2}$ of $\bigvee\Delta$ from $\Gamma\cup\\{\neg\gamma\\}$, apply $\square$I$^{*}$,$u$ (opening the assumptions $\neg\gamma$ of ${\cal D}_{2}$ with label $u$) to obtain $\bigvee\Delta\vee\square\gamma$.
* ($\neg\square\Rightarrow$), from $\Gamma\Rightarrow\Delta,\gamma$ and $\Gamma,\neg\gamma\Rightarrow\Delta$ to $\Gamma,\neg\square\gamma\Rightarrow\Delta$: as in the previous case, $\square$I$^{*}$,$u$ gives $\bigvee\Delta\vee\square\gamma$; from the hypothesis $\neg\square\gamma$, $\vee$I$_{2}$ gives $\bigvee\Delta\vee\neg\square\gamma$, and $\wedge$I gives $(\bigvee\Delta\vee\square\gamma)\wedge(\bigvee\Delta\vee\neg\square\gamma)$. Since $(\gamma\vee\alpha)\wedge(\gamma\vee\beta)\dashv\vdash\gamma\vee(\alpha\wedge\beta)$, this yields $\bigvee\Delta\vee(\square\gamma\wedge\neg\square\gamma)$, and since $\alpha\vee(\square\gamma\wedge\neg\square\gamma)\dashv\vdash\alpha\vee\bot$, we obtain $\bigvee\Delta\vee\bot$ and finally $\bigvee\Delta$.
* ($\Rightarrow\neg\square$)$_{1}$, from $\Gamma,\gamma\Rightarrow\Delta$ to $\Gamma\Rightarrow\Delta,\neg\square\gamma$: given a deduction $\cal D$ of $\bigvee\Delta$ from $\Gamma\cup\\{\gamma\\}$, start from the modal axiom (MA) $\gamma\vee\neg\square\gamma$ and apply $\vee$E,$u$,$v$: in the case $[\gamma]^{v}$, run $\cal D$ and then $\vee$I$_{1}$ to get $\bigvee\Delta\vee\neg\square\gamma$; in the case $[\neg\square\gamma]^{u}$, apply $\vee$I$_{2}$ to get $\bigvee\Delta\vee\neg\square\gamma$.
* ($\Rightarrow\neg\square$)$_{2}$, from $\Gamma\Rightarrow\Delta,\neg\gamma$ to $\Gamma\Rightarrow\Delta,\neg\square\gamma$: given a deduction $\cal D$ of $\bigvee\Delta\vee\neg\gamma$ from $\Gamma$, apply $\vee$E,$u$,$v$: in the case $[\bigvee\Delta]^{u}$, apply $\vee$I$_{1}$; in the case $[\neg\gamma]^{v}$, apply $\neg\square$I to get $\neg\square\gamma$ and then $\vee$I$_{2}$; either way we obtain $\bigvee\Delta\vee\neg\square\gamma$.

(ii) implies (i): Let $\cal D$ be a deduction of the disjunction of the sentences in $\Delta$ from $\Gamma$ in ${\bf ND}_{\cal TML}$. As before, we use induction on the number $n$ of rule instances in the deduction $\cal D$. If $n=0$ the proof is trivial. (I.H.) Suppose that "(ii) implies (i)" holds for $n<k$, $k>0$; and let $(r)$ be the last rule instance in $\cal D$.
If $(r)$ is one of the introduction/elimination rules $\wedge$I, $\wedge$E, $\neg\wedge$I, $\neg\wedge$E, $\vee$I, $\vee$E, $\neg\vee$I, $\neg\vee$E, $\neg\neg$I and $\neg\neg$E, the proof is immediate since these rules are just translations of the corresponding rules of ${\bf SC}_{\cal TML}$. Suppose that $(r)$ is $\square$I$^{*}$; then $\cal D$ ends with an application of $\square$I$^{*}$,$u$ deriving $\psi\vee\square\phi$ from ${\cal D}_{1}$ with conclusion $\psi\vee\phi$ and ${\cal D}_{2}$ with conclusion $\psi$ from $[\neg\phi]^{u}$. Then, by (I.H.), we have that the sequents $\Gamma_{1}\Rightarrow\psi\vee\phi$ and $\Gamma_{2},\neg\phi\Rightarrow\psi$ are provable in ${\bf SC}_{\cal TML}$, where $\Gamma_{1}\cup\Gamma_{2}=\Gamma$. By using weakening(s) and the cut rule we obtain that $\Gamma\Rightarrow\psi,\phi$ and $\Gamma,\neg\phi\Rightarrow\psi$ are provable. Then, using ($\Rightarrow\square$), we have that $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\psi,\square\phi$. If $(r)$ is $\square$E, then $\cal D$ ends with an application of $\square$E deriving $\phi$ from $\square\phi$. By (I.H.), we have $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\square\phi$. From the fact that $\vdash_{{\bf SC}_{\cal TML}}\square\phi\Rightarrow\phi$ and the cut rule, the proof is completed. If $(r)$ is $\neg\square$I, then $\cal D$ ends with an application of $\neg\square$I deriving $\neg\square\phi$ from $\neg\phi$. By (I.H.), we have $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\neg\phi$. By Theorem 6.3, $\vdash_{{\bf SC}_{\cal TML}}\neg\neg\phi\Rightarrow\neg\Gamma$ and, from $\vdash_{{\bf SC}_{\cal TML}}\phi\Rightarrow\neg\neg\phi$ and the cut rule, we have $\vdash_{{\bf SC}_{\cal TML}}\phi\Rightarrow\neg\Gamma$. Using ($\square\Rightarrow$)$_{1}$ we obtain $\vdash_{{\bf SC}_{\cal TML}}\square\phi\Rightarrow\neg\Gamma$ and, by Theorem 6.3, $\vdash_{{\bf SC}_{\cal TML}}\neg\neg\Gamma\Rightarrow\neg\square\phi$. Finally, from $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\neg\neg\Gamma$ and cut(s) (and weakening(s) if necessary) we obtain $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\neg\square\phi$.
If $(r)$ is $\neg\square$E, then $\cal D$ ends with an application of $\neg\square$E deriving $\neg\phi$ from ${\cal D}_{1}$ with conclusion $\neg\square\phi$ and ${\cal D}_{2}$ with conclusion $\phi$. By (I.H.) and using weakening(s) we have that the sequents $\Gamma\Rightarrow\neg\square\phi$ and $\Gamma\Rightarrow\phi$ are provable in ${\bf SC}_{\cal TML}$. Using ($\Rightarrow\wedge$), we obtain $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\phi\wedge\neg\square\phi$ and, since $\phi\wedge\neg\square\phi\Leftrightarrow\phi\wedge\neg\phi$, the cut rule yields $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\phi\wedge\neg\phi$. Finally, taking into account that $\vdash_{{\bf SC}_{\cal TML}}\phi\wedge\neg\phi\Rightarrow\neg\phi$, we have $\vdash_{{\bf SC}_{\cal TML}}\Gamma\Rightarrow\neg\phi$. The cases in which $(r)$ is $\bot$I or $\bot$E are immediate (see Remark 5.5). $\boldsymbol{\blacksquare}$

Since our natural deduction system is strongly inspired by the cut-free sequent calculus ${\bf SC}_{\cal TML}$, one can likely expect normalization to hold for ${\bf ND}_{\cal TML}$.

## 8 Conclusions

In the present paper we focused on the proof-theoretic aspects of the tetravalent modal logic ${\cal TML}$. In the first place, we showed that the strongly adequate Gentzen calculus given by Font and Rius for ${\cal TML}$ does not enjoy the cut-elimination property. Then, by applying a method due to Avron, Ben-Naim and Konikowska, we developed a sequent calculus for $\cal TML$ with the cut-elimination property. This allowed us to provide new independent proofs of some known interesting properties of ${\cal TML}$. Finally, strongly inspired by this cut-free sequent calculus, we presented a natural deduction system, sound and complete with respect to ${\cal TML}$. Despite the fact that ${\cal TML}$ was originally defined as the logic that preserves degrees of truth w.r.t. tetravalent modal algebras, we could use Avron, Ben-Naim and Konikowska's method, and this is because ${\cal TML}$ is also a matrix logic.
An interesting task to be done is to extend this method to logics that preserve degrees of truth w.r.t. some ordered structure but which do not have a matrix semantics.

## 9 Acknowledgments

I would like to thank the anonymous referees for their extremely careful reading, helpful suggestions and constructive comments on this paper.

## References

* [1] Anderson, A. R. and Belnap, N. D. (with contributions by thirteen others), Entailment: the logic of relevance and necessity, volume II, Princeton University Press, 1992.
* [2] Avron, A., Non-deterministic semantics for logics with a consistency operator. Journal of Approximate Reasoning, 45, 271–287, 2007.
* [3] Avron, A. and Konikowska, B., Multi-valued Calculi for Logics Based on Non-determinism. Proceedings COS'04 (Challenge of Semantics Workshop), Vienna 2004, Journal of Interest Group in Pure and Applied Logic, 2005 (10), 365–387.
* [4] Avron, A., Ben-Naim, J. and Konikowska, B., Cut-free ordinary sequent calculi for logics having generalized finite-valued semantics. Logica Universalis, 1, 41–69, 2006.
* [5] Arieli, O. and Avron, A., The value of the four values. Artificial Intelligence, v. 102, n. 1 (1998), 97–141.
* [6] Belnap, N., How computers should think. In: Contemporary Aspects of Philosophy (Editor: G. Ryle). Oriel Press, 30–56, 1976.
* [7] Carnielli, W.A., Coniglio, M.E. and Marcos, J., Logics of Formal Inconsistency. In: Handbook of Philosophical Logic, vol. 14, 15–107. Eds.: D. Gabbay, F. Guenthner. Springer, 2007.
* [8] Carnielli, W.A. and Marcos, J., A taxonomy of C-systems. In W. A. Carnielli, M. E. Coniglio, and I. M. L. D'Ottaviano, editors, Paraconsistency: the logical way to the inconsistent, volume 228 of Lecture Notes in Pure and Applied Mathematics, 1–94. Marcel Dekker, New York, 2002.
* [9] Coniglio, M.E. and Figallo, M., Hilbert-style Presentations of Two Logics Associated to Tetravalent Modal Algebras. Studia Logica, nro. 3, vol. 102 (2014), 525–539.
* [10] Da Costa, N.C.A., Calculs propositionnels pour les systèmes formels inconsistants. Comptes Rendus de l'Académie des Sciences de Paris, série A, vol. 257 (1963), 3790–3792.
* [11] Figallo, A.V. and Landini, P., On generalized I-algebras and 4-valued modal algebras. Reports on Mathematical Logic 29 (1995), 3–18.
* [12] Font, J.M. and Rius, M., A four-valued modal logic arising from Monteiro's last algebras. In Proc. 20th Int. Symp. Multiple-Valued Logic (Charlotte, 1990), The IEEE Computer Society Press, 85–92, 1991.
* [13] Font, J.M. and Rius, M., An abstract algebraic logic approach to tetravalent modal logics. J. Symbolic Logic, v. 65, n. 2 (2000), 481–518.
* [14] Loureiro, I., Algebras modais tetravalentes, Ph.D. Thesis, Faculdade de Ciências de Lisboa, 1983.
* [15] Troelstra, A. S. and Schwichtenberg, H., Basic Proof Theory. Cambridge, UK: Cambridge University Press, 1996.
# Growth of subsolutions to fully nonlinear equations in halfspaces NIKLAS L.P. LUNDSTRÖM Department of Mathematics and Mathematical Statistics, Umeå University, SE-90187 Umeå, Sweden; <EMAIL_ADDRESS> ###### Abstract We characterize lower growth estimates for subsolutions in halfspaces of fully nonlinear partial differential equations of the form $F(x,u,Du,D^{2}u)=0$ in terms of solutions to ordinary differential equations built solely upon a growth assumption on $F$. Using this characterization we derive several sharp Phragmen–Lindelöf-type theorems for certain classes of well known PDEs. The equation need not be uniformly elliptic nor homogeneous, and we obtain results both when the subsolution is bounded and when it is unbounded. Among our results we retrieve classical estimates in the halfspace for $p$-subharmonic functions and extend those to more general equations; we prove sharp growth estimates, in terms of $k$ and the asymptotic behaviour of $\int_{0}^{R}C(s)ds$, for subsolutions of equations allowing for sublinear growth in the gradient of the form $C(|x|)|Du|^{k}$ with $k\geq 1$; we establish a Phragmen–Lindelöf theorem for weak subsolutions of the variable exponent $p$-Laplace equation in halfspaces, $1<p(x)<\infty$, $p(x)\in C^{1}$, whose sharpness we conclude by finding the "slowest growing" $p(x)$-harmonic function together with its corresponding family of $p(x)$-exponents. The paper ends with a discussion of our results from the point of view of a spatially dependent diffusion problem. Mathematics Subject Classification: 35B40, 35B50, 35B53, 35D40, 35J25, 35J60, 35J70. Keywords: Phragmen-Lindelöf; general drift; non standard growth; variable exponent; Laplace; unbounded domain; quasi linear; nonhomogeneous; sublinear; harmonic. ## 1 Introduction We consider fully nonlinear nonhomogeneous elliptic partial differential equations in nondivergence form, $\displaystyle F(x,u,Du,D^{2}u)=0,$ ($\star$) in halfspaces in $\mathbb{R}^{n}$ where $n\geq 1$.
Here, $Du$ is the gradient, $D^{2}u$ the Hessian, $F:\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\rightarrow\mathbb{R}$ and $\mathbb{S}^{n}$ is the set of symmetric $n\times n$ matrices equipped with the positive semi-definite ordering; for $X,Y\in\mathbb{S}^{n}$, we write $X\leq Y$ if $\langle(X-Y)\xi,\xi\rangle\leq 0$ for all $\xi\in\mathbb{R}^{n}$. Without loss of generality we fix the halfspace to $\mathbb{R}^{n}_{+}:=\\{x\in\mathbb{R}^{n}:x_{n}>0\\}$ and assume the following:

* Degenerate ellipticity holds, i.e. $F(x,u,p,X)\geq F(x,v,p,Y)$ whenever $u\geq v$, $X\leq Y$, as well as the growth condition $\displaystyle-F(x,0,p,X)\leq\Phi(|x|,|p|)+\Lambda(x_{n})\text{Tr}(X^{+})-\lambda(x_{n})\text{Tr}(X^{-})$ ($\star\star$) whenever $x,p\in\mathbb{R}^{n},X\in\mathbb{S}^{n}$, $X=X^{+}-X^{-}$, $X^{+}\geq 0$, $X^{-}\geq 0$ and $X^{+}X^{-}=0$. Here, $\Phi:[0,\infty)\times[0,\infty)\to(-\infty,\infty)$ is continuous, nonincreasing in its first argument, and $\lambda,\Lambda:[0,\infty)\to(0,\infty)$ are functions such that ${\lambda}$ is nonincreasing and ${\Lambda}$ is nondecreasing.

Concerning $\Phi$ we will also need the following assumption:

* Either $\Phi$ is nonnegative and, for all $\epsilon,t>0$ (interpreting $1/0=\infty$), $\displaystyle\int_{0}^{\epsilon}\frac{ds}{\Phi(t,s)}=\infty$ ($\star\star\star$) or $\Phi$ is nonpositive and $-\Phi(t,s)$ satisfies ($\star\star\star$) for all $\epsilon,t>0$.

Under assumptions ($\star\star$)–($\star\star\star$) we characterize the growth of viscosity subsolutions of ($\star$) in halfspaces in terms of solutions to ODEs (Theorem 2.1) which are built solely upon the functions $\Phi,\lambda$ and $\Lambda$ in ($\star\star$). Using this characterization we are able to derive sharp growth estimates of Phragmen–Lindelöf type once the solutions to the ODEs are sufficiently understood.
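The role of the Osgood-type condition ($\star\star\star$) can be illustrated numerically. For $\Phi(t,s)=Cs^{k}$ the integral $\int_{0}^{\epsilon}ds/\Phi(t,s)$ diverges precisely when $k\geq 1$. The Python sketch below is a purely illustrative computation (the helper name and the midpoint discretization are our choices, not from the paper): it approximates $\int_{\delta}^{1}s^{-k}\,ds$ and watches its behaviour as $\delta\to 0$.

```python
# Illustration of the Osgood-type condition (***): for Phi(t, s) = C*s^k,
# the integral of 1/Phi over (0, eps) diverges exactly when k >= 1.
# We approximate int_delta^1 s^(-k) ds by the midpoint rule and let
# delta shrink; divergence shows up as unbounded growth of the values.

def tail_integral(k, delta, n=100_000):
    # midpoint rule for the integral of s^(-k) over (delta, 1)
    h = (1.0 - delta) / n
    return sum(h * (delta + (i + 0.5) * h) ** (-k) for i in range(n))

for k in (0.5, 1.0, 2.0):
    vals = [tail_integral(k, 10.0 ** (-m)) for m in (2, 4, 6)]
    print(f"k = {k}:", [round(v, 2) for v in vals])

# k = 0.5: the values stabilise (the integral converges, (***) fails),
# k = 1.0: the values grow like log(1/delta) (divergent, (***) holds),
# k = 2.0: the values grow like 1/delta (divergent, (***) holds).
```

This matches the dichotomy in $k$ appearing in the estimates for gradient terms $C(|x|)|Du|^{k}$ discussed below.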
Indeed, to apply Theorem 2.1 one needs to (1) find functions $\Phi,\lambda$ and $\Lambda$ ensuring assumptions ($\star\star$) and ($\star\star\star$), (2) solve the corresponding ODEs given in (2.1) and (3) find the limit in Theorem 2.1. An estimate is obtained if this limit is positive. Theorem 2.1 applies both when the subsolution is bounded and when it is unbounded, and it can be used to locate the border between these cases. In Section 3 we apply Theorem 2.1 to derive sharp estimates for subsolutions to some well known PDEs for which the corresponding ODEs can be solved explicitly. For example, we retrieve the classical Phragmen–Lindelöf theorem in the halfspace for $p$-subharmonic functions by Lindqvist [29] and show in addition that it holds also for equations of $p$-Laplace type with lower order terms and vanishing ellipticity. We obtain sharp lower estimates of the growth, in terms of $k\geq 1$ and the asymptotic behaviour of $\int_{0}^{R}C(s)\lambda^{-1}(s)ds$, for subsolutions of equations with sublinear growth in the gradient such as $\displaystyle-P^{-}_{\lambda,\Lambda}(D^{2}u)+C(|x|)|Du|^{k}=0$ in which $P^{-}_{\lambda,\Lambda}$ is a Pucci operator (definition recalled below) and $C(t)$ is a nonincreasing function. These results reveal, e.g., the border determining whether a subsolution must grow to infinity or not in terms of $C(t)$ and $k$; see Corollary 3.1 and estimate (3.10). Moreover, Theorem 2.1 applies to nonhomogeneous PDEs including the variable exponent $p$-Laplace equation $\nabla\cdot\left(|Du|^{p(x)-2}Du\right)=0$ and we prove a sharp Phragmen–Lindelöf theorem for weak subsolutions of this equation whenever $1<p(x)<\infty$ is $C^{1}$ regular (Theorem 3.3). It turns out that the growth estimate heavily depends on whether the subsolution ever exceeds $x_{n}$ (the distance to the boundary) or not.
We conclude sharpness by finding the “slowest growing” $p(x)$-harmonic function in the halfspace, for a given ellipticity bound, together with its corresponding family of $p(x)$-exponents (Remark 3.4). In the geometric setting of halfspaces, this theorem sharpens some results of Adamowicz [1]. The proof of Theorem 2.1 relies on comparison with certain classical supersolutions of ($\star$), which we construct in Lemma 2.2 using solutions to the aforementioned ODEs. We stress generality by pointing out that, with the validity of Theorem 2.1 at hand, growth estimates for subsolutions to certain PDEs not considered in Section 3 can be proved mainly by estimating solutions of first order ODEs and limits. We end the paper by discussing the problem under investigation from the point of view of a diffusion problem. Indeed, in Section 4 we briefly explain, through the application of spatially dependent diffusion, why parts of our results presented in Theorem 3.3 should hold. We remark that our main results allow the ellipticity to degenerate or blow up at infinity, as $\lambda(x_{n})$ may vanish and $\Lambda(x_{n})$ may explode there. Moreover, the Osgood-type condition in ($\star\star\star$) is necessary to ensure that subsolutions must continue to grow; such conditions also arise in connection with the strong maximum principle, see Julin [22], Lundström–Olofsson–Toivanen [33] and the remarks below Theorem 2.1. Furthermore, assumption ($\star\star$) can be written, with $\lambda=\lambda(x_{n})$ and $\Lambda=\Lambda(x_{n})$, $\displaystyle-F(x,0,p,X)\leq\Phi(|x|,|p|)-\mathcal{P}^{-}_{\lambda,\Lambda}(X)\quad\text{whenever}\quad x,p\in\mathbb{R}^{n},X\in\mathbb{S}^{n},$ where $\mathcal{P}^{-}_{\lambda,\Lambda}(X)=-\Lambda\text{Tr}(X^{+})+\lambda\text{Tr}(X^{-})$ is the Pucci maximal operator, $X=X^{+}-X^{-}$ with $X^{+}\geq 0$, $X^{-}\geq 0$ and $X^{+}X^{-}=0$.
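Since $\mathcal{P}^{-}_{\lambda,\Lambda}$ acts on $X$ only through $\text{Tr}(X^{+})$ and $\text{Tr}(X^{-})$, that is, through the eigenvalues of $X$, it is easy to evaluate. A minimal Python sketch (the numerical eigenvalue list is an illustrative assumption) computes both Pucci extremal operators from a list of eigenvalues and checks the trace identity displayed above:

```python
# The Pucci extremal operators written directly on the eigenvalues
# e_1, ..., e_n of X (lam = lambda, Lam = Lambda, 0 < lam <= Lam):
#   P^+(X) = -lam*sum_{e_i >= 0} e_i - Lam*sum_{e_i < 0} e_i,
#   P^-(X) = -Lam*sum_{e_i >= 0} e_i - lam*sum_{e_i < 0} e_i.

def pucci_plus(eigs, lam, Lam):
    return -lam * sum(e for e in eigs if e >= 0) \
           - Lam * sum(e for e in eigs if e < 0)

def pucci_minus(eigs, lam, Lam):
    return -Lam * sum(e for e in eigs if e >= 0) \
           - lam * sum(e for e in eigs if e < 0)

# Consistency with the form used in (**): P^-(X) equals
# -Lam*Tr(X^+) + lam*Tr(X^-), where X = X^+ - X^- is the splitting
# into positive and negative parts.
eigs = [2.0, -1.0, 3.0]                      # illustrative spectrum
tr_plus = sum(e for e in eigs if e > 0)      # Tr(X^+) = 5
tr_minus = sum(-e for e in eigs if e < 0)    # Tr(X^-) = 1
assert pucci_minus(eigs, 1.0, 2.0) == -2.0 * tr_plus + 1.0 * tr_minus
assert pucci_minus(eigs, 1.0, 2.0) <= pucci_plus(eigs, 1.0, 2.0)
print("P^- =", pucci_minus(eigs, 1.0, 2.0), " P^+ =", pucci_plus(eigs, 1.0, 2.0))
```

The ordering $\mathcal{P}^{-}\leq\mathcal{P}^{+}$ holds for every spectrum since $\lambda\leq\Lambda$, which is the sense in which the two operators are extremal.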
In particular, if $X\in\mathbb{S}^{n}$ has eigenvalues $e_{1},e_{2},\dots,e_{n}$, the Pucci extremal operators $\mathcal{P}^{+}_{\lambda,\Lambda}$ and $\mathcal{P}^{-}_{\lambda,\Lambda}$ with ellipticity $0<\lambda\leq\Lambda$ are defined by $\mathcal{P}^{+}_{\lambda,\Lambda}(X):=-\lambda\sum_{e_{i}\geq 0}e_{i}-\Lambda\sum_{e_{i}<0}e_{i}\quad\text{and}\quad\mathcal{P}^{-}_{\lambda,\Lambda}(X):=-\Lambda\sum_{e_{i}\geq 0}e_{i}-\lambda\sum_{e_{i}<0}e_{i}.$ For properties of the Pucci operators see e.g. Caffarelli–Cabre [9] or Capuzzo-Dolcetta–Vitolo [10]. We remark also that the above assumption ($\star\star$) is implied by the standard ellipticity assumption $\displaystyle\lambda\text{Tr}(Y)\leq F(x,u,p,X)-F(x,u,p,X+Y)\leq\Lambda\text{Tr}(Y),$ (1.1) whenever $Y$ is positive semi-definite, together with $\displaystyle-F(x,0,p,0)\leq\Phi(|x|,|p|)\quad\text{whenever}\quad x,p\in\mathbb{R}^{n}.$ (1.2) Observe also that ($\star\star$) allows for nonlinear degenerate elliptic operators which do not satisfy (1.1), for example operators of the form $F(X)=-\Lambda\left(\sum_{i=1}^{n}\Gamma(\mu_{i}^{+})\right)+\lambda\left(\sum_{i=1}^{n}\Psi(\mu_{i}^{-})\right)$ where $\mu_{i}$, $i=1,\dots,n$, are the eigenvalues of the matrix $X\in\mathbb{S}^{n}$ and $\Gamma,\Psi:[0,\infty)\to[0,\infty)$ are continuous and nondecreasing functions such that $\Gamma(s)\leq s\leq\Psi(s)$; see Capuzzo-Dolcetta–Vitolo [10]. The Phragmén-Lindelöf principle and results of Phragmén-Lindelöf type, which have connections to elasticity theory (Horgan [20], Quintanilla [38], Leseduarte–Carme–Quintanilla [28]), have been frequently studied during the last century. To mention a few papers (without giving a complete summary), Ahlfors [2] extended results from Phragmén–Lindelöf [37] to the upper half space of $\mathbb{R}^{n}$, while Gilbarg [15], Serrin [39] and Herzog [18] considered more general elliptic equations of second order.
Miller [36] considered uniformly elliptic operators in nondivergence form and unbounded domains contained in cones. Kurta [27] and Jin–Lancaster [21] estimated growth of bounded solutions of quasilinear equations, the latter using solutions to boundary value problems, while Vitolo [40] considered elliptic equations in sectors. Capuzzo-Dolcetta–Vitolo [10] and Armstrong–Sirakov–Smart [3] considered fully nonlinear equations, the latter in certain Lipschitz domains, and Koike–Nakagawa [26] established Phragmén-Lindelöf theorems for subsolutions of fully nonlinear elliptic PDEs with unbounded coefficients and inhomogeneous terms. Adamowicz [1] studied subsolutions of the variable exponent $p$-Laplace equation, while Bhattacharya [7] and Granlund–Marola [16] considered infinity-harmonic functions. Lindqvist [29] established Phragmén–Lindelöf’s theorem for $n$-subharmonic functions when the boundary is an $m$-dimensional hyperplane in $\mathbb{R}^{n}$, $0\leq m\leq n-1$, which was extended to $p$-subharmonic functions, $n-m<p\leq\infty$, in Lundström [32]. We also mention that recently, Braga–Moreira [8] showed that nonnegative solutions to a generalized $p$-Laplace equation in the upper halfplane, vanishing on $\\{x_{n}=0\\}$, are $u(x)=x_{n}$ (modulo normalization), and Lundström–Singh [34] proved a similar result for $p$-harmonic functions in planar sectors as well as a sharp Phragmén–Lindelöf theorem. Lundberg–Weitsman [30] studied the growth of solutions to the minimal surface equation over domains containing a halfplane. The spatial behavior of solutions of the Laplace equation on a semi-infinite cylinder with dynamical nonlinear boundary conditions was investigated in Leseduarte–Carme–Quintanilla [28]. 
Finally, we mention that recently, local estimates such as a sharp Harnack inequality (Julin [22]), boundary Harnack inequalities (Avelin–Julin [6]) as well as strong maximum and minimum principles (Lundström–Olofsson–Toivanen [33]) were established for fully nonlinear PDEs covered by the class of equations considered here. ### Preliminaries For a point $x\in\mathbb{R}^{n}$ we use the notation $x=(x_{1},x_{2},\dots,x_{n-1},x_{n})=(x^{\prime},x_{n})$. By $\Omega$ we denote a domain, that is, an open connected set. For a set $E\subset\mathbb{R}^{n}$ we let $\overline{E}$ denote the closure and $\partial E$ the boundary of $E$. By $c$ we denote a positive constant, not necessarily the same at each occurrence. We write $A\precsim B$ if there exists $c$ such that $A\leq cB$. A function $u:\Omega\to\mathbb{R}$ is a classical subsolution (supersolution) to ($\star$ ‣ 1) in $\Omega$ if it is twice differentiable in $\Omega$ and satisfies $F(x,u,Du,D^{2}u)\leq 0$ ($F(x,u,Du,D^{2}u)\geq 0$). If the inequality holds strictly then $u$ is a strict classical subsolution (supersolution), and if equality holds then it is a classical solution. We choose to present our main results for viscosity subsolutions, of which we recall the definition below in the case where $F:\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\rightarrow\mathbb{R}$ is continuous (continuity is not necessary for our results). 
The following definition is from Crandall–Ishii–Lions [12]: An upper semicontinuous (USC) function $u:\Omega\to\mathbb{R}$ is a viscosity subsolution if for any $\varphi\in C^{2}(\Omega)$ and any $x_{0}\in\Omega$ such that $u-\varphi$ has a local maximum at $x_{0}$ it holds that $\displaystyle F(x_{0},u(x_{0}),D\varphi(x_{0}),D^{2}\varphi(x_{0}))\leq 0.$ A lower semicontinuous (LSC) function $u:\Omega\to\mathbb{R}$ is a viscosity supersolution if for any $\varphi\in C^{2}(\Omega)$ and any $x_{0}\in\Omega$ such that $u-\varphi$ has a local minimum at $x_{0}$ it holds that $\displaystyle F(x_{0},u(x_{0}),D\varphi(x_{0}),D^{2}\varphi(x_{0}))\geq 0.$ A continuous function is a viscosity solution if it is both a viscosity sub- and a viscosity supersolution. Let $u$ be a subsolution and $v$ a supersolution to ($\star$ ‣ 1) and let $a$ and $b$ be constants. As ($\star$ ‣ 1) is not necessarily homogeneous, $a+bu$ and $a+bv$ may fail to be sub- and supersolutions. However, degenerate ellipticity guarantees that $u-c$ is a subsolution, and $u+c$ is a supersolution, whenever $c\geq 0$. We will not discuss the validity of a general comparison principle for viscosity solutions of ($\star$ ‣ 1) since we only need the possibility to compare viscosity subsolutions to classical supersolutions, which is possible. Indeed, let $\Omega$ be a bounded domain, $u$ a viscosity subsolution and $v$ a classical strict supersolution in $\Omega$, and suppose that $u\leq v$ on $\partial\Omega$ and that $u\geq v$ somewhere in $\Omega$. By upper semicontinuity the function $u-v$ attains a maximum at some point $x_{0}\in\Omega$. 
Since $v\in C^{2}(\Omega)$, $u-v$ has a maximum at $x_{0}$ and $u$ is a viscosity subsolution, it follows by the definition of viscosity solutions that $\displaystyle F(x_{0},u(x_{0}),Dv(x_{0}),D^{2}v(x_{0}))\leq 0.$ (1.3) But since $v$ is a classical strict supersolution we have $F(x,v(x),Dv(x),D^{2}v(x))>0$ whenever $x\in\Omega$, and as $u(x_{0})\geq v(x_{0})$ it follows from degenerate ellipticity that $F(x_{0},u(x_{0}),Dv(x_{0}),D^{2}v(x_{0}))\geq F(x_{0},v(x_{0}),Dv(x_{0}),D^{2}v(x_{0}))>0.$ This contradicts (1.3) and hence we have proved the following simple lemma: ###### Lemma 1.1 Let $\Omega$ be a bounded domain, $u\in USC(\overline{\Omega})$ a viscosity subsolution and $v\in LSC(\overline{\Omega})$ a viscosity supersolution of ($\star$ ‣ 1) in $\Omega$ satisfying $u\leq v$ on $\partial\Omega$. Assume degenerate ellipticity. If either $u$ is a strict classical subsolution, or $v$ is a strict classical supersolution, then $u<v$ in $\Omega$. Neither the choice of viscosity solutions nor the assumption that $F$ is continuous is necessary for our results. Any other definition of “weak solutions” can be considered, whenever more appropriate for the equation, as long as such weak subsolutions of ($\star$ ‣ 1) can be compared to classical strict supersolutions of ($\star$ ‣ 1). In particular, our proof relies on the construction of a classical strict supersolution to ($\star$ ‣ 1) and comparison with this barrier function. What is needed is the validity of the following simple comparison result: ###### Lemma 1.2 Let $\Omega$ be a bounded domain, $u\in USC(\overline{\Omega})$ a subsolution (in some weak sense) and $v$ a classical strict supersolution to ($\star$ ‣ 1) in $\Omega$, continuous on $\overline{\Omega}$. If $u\leq v$ on $\partial\Omega$ then $u\leq v$ in $\Omega$. 
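Lemma 1.2 can be illustrated with a minimal one-dimensional sketch, where $F(x,u,p,X)=-\text{Tr}(X)=-u^{\prime\prime}$ is a toy instance of ($\star$ ‣ 1) and the functions $u$, $v$ are illustrative choices of ours, not taken from the text:

```python
# Toy check of Lemma 1.2 for F(x, u, p, X) = -Tr(X) = -u'':
# u(x) = x^2 is a classical subsolution (-u'' = -2 <= 0),
# v(x) = 2x - x^2 is a classical strict supersolution (-v'' = 2 > 0),
# and u <= v on the boundary {0, 1} of Omega = (0, 1).
u = lambda x: x**2
v = lambda x: 2.0*x - x**2

grid = [i / 1000 for i in range(1001)]
assert u(0.0) <= v(0.0) and u(1.0) <= v(1.0)   # boundary comparison
assert all(u(x) <= v(x) for x in grid)          # comparison in all of [0, 1]
```

Of course this verifies only one concrete instance; the lemma itself is what the comparison arguments below rely on.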
## 2 Characterizing growth in terms of solutions to ordinary differential equations We will estimate the growth of subsolutions to ($\star$ ‣ 1) in terms of solutions $f:[0,\infty)\to\mathbb{R}$ to the following initial value problems, originating from assumption ($\star\star$ ‣ 1): If $\Phi(t,s)\geq 0$ for all $t,s\in\mathbb{R}_{+}$ then we will make use of solutions to $\displaystyle\frac{df}{dt}=-\frac{\Phi(t,f(t))}{\lambda(t)}-K(R)\frac{\Lambda(t)}{\lambda(t)}f(t),\quad t\in(0,R)\quad\text{with}\quad f(0)=\nu,$ (2.1) where $\nu\geq 0$ and $R>0$. Throughout the paper, we will denote by $f_{\nu,R}=f_{\nu,R}(t)$ the solution to (2.1) with $K(R)=\frac{n}{\gamma(R)}$, in which $\gamma(R)$ appears in the domain defined in (2.2) below. Further, we denote by $f_{\nu}=f_{\nu}(t)$ the solution of (2.1) when $K\equiv 0$. If $\Phi\leq 0$ for all $t,s\in\mathbb{R}_{+}$ then we use instead solutions of (2.1) but with $\lambda(t)$ replaced by $\Lambda(t)$ in the first term on the right hand side of (2.1). We allow ourselves to simplify notation according to $\lambda=\lambda(\cdot),\Lambda=\Lambda(\cdot),K=K(R)$ and $\gamma=\gamma(R)$ whenever appropriate. Since $\Phi(t,0)=0$ and $\nu\in[0,\infty)$ the solutions $f_{\nu,R}$ and $f_{\nu}$ will be nonnegative. Moreover, if $\Phi$ satisfies the Osgood-type condition ($\star\star\star$ ‣ 1) then the solutions will, for any $\nu>0$, remain positive. (The only nonpositive solutions are the trivial solutions $f_{\nu,R}\equiv f_{\nu}\equiv 0$ starting at $\nu=0$.) This plays a role in our main results, as pointed out in the remarks made below Theorem 2.1. In Figures 2 and 3 several solutions of (2.1) are plotted for some choices of $\Phi$. To proceed we define, for a nondecreasing function $\gamma=\gamma(R)>0$ and $n\geq 1$, the domain $\displaystyle D(R):=\left\\{x\in\mathbb{R}^{n}_{+}:\sum_{i=1}^{n-1}x_{i}^{2}+(x_{n}+\gamma)^{2}<(R+\gamma)^{2}\right\\},$ (2.2) see Figure 1. 
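To make the role of (2.1) concrete, the following minimal Python sketch integrates the initial value problem with classical RK4 and checks the result against a closed form available for this particular data. The choices $\Phi(t,s)=s^{2}$, $\lambda\equiv\Lambda\equiv 1$, $n=2$, $R=10$ and $\gamma(R)=R$ are purely illustrative and ours, not taken from the text:

```python
import math

def solve_ivp_rk4(rhs, f0, t_end, steps=4000):
    """Integrate f'(t) = rhs(t, f) on [0, t_end] with classical RK4."""
    h, t, f = t_end / steps, 0.0, f0
    for _ in range(steps):
        k1 = rhs(t, f)
        k2 = rhs(t + h/2, f + h/2 * k1)
        k3 = rhs(t + h/2, f + h/2 * k2)
        k4 = rhs(t + h, f + h * k3)
        f += h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return f

# With Phi(t, s) = s^2 and lambda = Lambda = 1, (2.1) reads f' = -f^2 - K f,
# where K = n / gamma(R).
n, R = 2, 10.0
K = n / R                      # gamma(R) = R
nu = 1.0
f_R = solve_ivp_rk4(lambda t, f: -f**2 - K * f, nu, R)

# Closed form of f' = -(f^2 + K f), f(0) = nu:
exact = K / ((K / nu + 1.0) * math.exp(K * R) - 1.0)
assert abs(f_R - exact) < 1e-6
```

Replacing the right-hand side by any other admissible $\Phi$, $\lambda$ and $\Lambda$ gives the solutions $f_{\nu,R}$ (and, with $K=0$, the solutions $f_{\nu}$) used throughout.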
Finally, for a subsolution $u$ and for $R>0$ we define $\displaystyle M(R)=\sup_{\partial D(R)}u$ and $\displaystyle M^{\prime}(R)=\liminf_{h\to 0^{+}}\frac{M(R)-M(R-h)}{h}.$ The following theorem characterizes a sharp lower growth estimate of subsolutions to ($\star$ ‣ 1) in terms of solutions $f_{\nu,R}$ and $f_{\nu}$ to the ODE (2.1): ###### Theorem 2.1 Suppose that ($\star\star$ ‣ 1) and ($\star\star\star$ ‣ 1) hold and let $u$ be a subsolution of ($\star$ ‣ 1) in $\mathbb{R}^{n}_{+}$ satisfying $\displaystyle\limsup_{x\to y}\,u(x)\,\leq\,0\quad\textrm{for all}\quad y\in\partial\mathbb{R}^{n}_{+}.$ Then either $u\leq 0$ in $\mathbb{R}^{n}_{+}$ or $M(R)$ is increasing and it holds that $\displaystyle\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{f_{\nu}(R)}\,\geq\,\liminf_{R\to\infty}\,\frac{f_{\nu,R}(R)}{f_{\nu}(R)},$ where $\nu$ satisfies $u(\bar{x})\geq\int_{0}^{\bar{x}_{n}}f_{\nu}(t)\,dt$ for some $\bar{x}$ on the $x_{n}$-axis. Using Theorem 2.1, an “explicit” growth estimate can thus be found by estimating the limit of $f_{\nu,R}(R)/f_{\nu}(R)$ as $R\to\infty$. In Section 3 we will consider certain PDEs for which we can solve the ODE (2.1) explicitly – calculate the limit – and thereby prove Phragmén–Lindelöf type theorems. Let us note that if we can prove $\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{f_{\nu}(R)}>0$ then $M(R)-M(R_{0})\geq c\int_{R_{0}}^{R}f_{\nu}(s)ds$ whenever $R>R_{0}$ for some small $c$ and thus $\liminf_{R\to\infty}\,\frac{M(R)}{\int_{0}^{R}f_{\nu}(s)ds}>0.$ Hence, if the integral $\displaystyle\int_{0}^{\infty}f_{\nu}(s)ds$ converges, then subsolutions may be bounded, but if the integral diverges, then subsolutions must grow to infinity and the conclusion of Theorem 2.1 takes the form of classical Phragmén–Lindelöf theorems. We remark that the assumption “$\bar{x}$ lies on the $x_{n}$-axis” is only for notational simplicity; we may translate coordinates otherwise. 
Note also that Theorem 2.1 holds whenever $\mathbb{R}^{n}_{+}$ is replaced (in the theorem and in (2.2)) with $\Omega\subset\mathbb{R}^{n}_{+}$, and that Theorem 2.1 gives a growth estimate for any initial condition $\nu\geq 0$ in (2.1) as long as $u(\bar{x})\geq\int_{0}^{\bar{x}_{n}}f_{\nu}(t)\,dt$ for some $\bar{x}$. The best estimate corresponds to the largest $\nu$. Moreover, it can be realized from the proof of Lemma 2.2 that the assumption “${\lambda}$ nonincreasing and ${\Lambda}$ nondecreasing” can be replaced by the slightly weaker assumption that ${\lambda}/{\Lambda}$ is nonincreasing and $\lambda$ is nonincreasing ($\Lambda$ is nondecreasing) when $\Phi\geq 0$ ($\Phi\leq 0$). Finally, it will be clear from the proof that we also have $M(R)-M(R-h)\geq\int_{R}^{R+h}f_{\nu,R}(t)dt$ for $R>\bar{x}_{n}$ and any $h>0$. We realize that $M(R)$ must increase as long as $f_{\nu,R}>0$, which happens whenever $\Phi$ satisfies the Osgood-type condition ($\star\star\star$ ‣ 1). Otherwise, the strong maximum principle does not hold and a positive subsolution to ($\star$ ‣ 1) may stop growing and attain an interior maximum, see Julin [22] and Lundström–Olofsson–Toivanen [33, Remark 4.3] for a counterexample. Concerning sharpness of Theorem 2.1 we consider the function $\displaystyle u(x)=\int_{0}^{x_{n}}f_{\nu}(t)dt,$ (2.3) vanishing on $\partial\mathbb{R}_{+}^{n}$, depending only on $x_{n}$ with derivative $u_{x_{n}}(x)=f_{\nu}(x_{n})$. In case $\Phi(t,s)\geq 0$ it holds that $u_{x_{n}x_{n}}(x)=f^{\prime}_{\nu}(x_{n})=-\lambda^{-1}(x_{n})\Phi(x_{n},f_{\nu}(x_{n}))$ and hence we obtain, e.g., that $\displaystyle-F(x,Du,D^{2}u):=\lambda(x_{n})\Delta u+\phi(f_{\nu}(x_{n}))=0,$ (2.4) for some function $\phi(s)$ satisfying ($\star\star\star$ ‣ 1). In case $\Phi(t,s)\leq 0$ the same holds but with $\lambda$ replaced by $\Lambda$. 
Thus, the function defined in (2.3) is a classical solution to an equation of type ($\star$ ‣ 1) satisfying ($\star\star$ ‣ 1) and ($\star\star\star$ ‣ 1). Moreover, it satisfies $M^{\prime}(R)=f_{\nu}(R)$. In conclusion, when the limit in Theorem 2.1 is positive, the growth estimate cannot be improved, apart from the shape of $D(R)$ and the value of the limit. Concerning the shape of $D(R)$ we note the following. If $n=1$ then $D(R)=(0,R)$ independent of $\gamma$, but if $n\geq 2$ then $\gamma=cR$ implies that the spherical segment $D(R)$ preserves its geometric proportions for all $R>0$. If $\gamma(R)/R$ is increasing then $D(R)$ expands faster in the $x^{\prime}$-direction, implying slightly weaker estimates since $\partial D(R)$, on which the supremum is taken, becomes larger. Observe that if the problem is considered in $\Omega\subset\mathbb{R}_{+}^{n}$ this might be of minor importance, especially if e.g. $\Omega$ is bounded in the $x^{\prime}$-directions or contained in a cone with apex at the origin. There is little to gain in taking $\gamma(R)/R$ decreasing since $D(R)$ still expands at rate $R$ in the $x^{\prime}$-directions. The proof of Theorem 2.1 relies on comparison arguments and the following construction of a classical strict supersolution to ($\star$ ‣ 1). ###### Lemma 2.2 Suppose that ($\star\star$ ‣ 1) and ($\star\star\star$ ‣ 1) hold, let $R>0$ and put $\Xi_{R}(x)=\sqrt{\sum_{i=1}^{n-1}x_{i}^{2}+(x_{n}+\gamma)^{2}}-\gamma=|(x^{\prime},x_{n}+\gamma)|-\gamma$ in which $\gamma=\gamma(R)$ is from (2.2). Then the function $\displaystyle V_{R}(x)=\int_{0}^{\Xi_{R}(x)}f_{\nu,R}(t)dt$ is a strict classical supersolution to ($\star$ ‣ 1) in $D(R)$. Figure 1: Geometric definitions and constructions. Proof of Lemma 2.2. For notational simplicity we set $\Xi=\Xi_{R}(x)$, $V=V_{R}(x)$ and $f(t)=f_{\nu,R}(t)$. 
Differentiating yields $\displaystyle\frac{\partial V}{\partial x_{i}}=\frac{x_{i}}{|(x^{\prime},x_{n}+\gamma)|}f\left(\Xi\right),\quad 1\leq i\leq n-1,\quad\frac{\partial V}{\partial x_{n}}=\frac{x_{n}+\gamma}{|(x^{\prime},x_{n}+\gamma)|}f\left(\Xi\right).$ It follows that $\displaystyle|DV|=f\left(\Xi\right).$ (2.5) The second derivatives become $\displaystyle\frac{\partial^{2}V}{\partial x_{i}^{2}}=\left(\frac{x_{i}}{|(x^{\prime},x_{n}+\gamma)|}\right)^{2}f^{\prime}(\Xi)+\left(\frac{1}{|(x^{\prime},x_{n}+\gamma)|}-\frac{x_{i}^{2}}{|(x^{\prime},x_{n}+\gamma)|^{3}}\right)f\left(\Xi\right),$ for $1\leq i\leq n-1$, and $\displaystyle\frac{\partial^{2}V}{\partial x_{n}^{2}}$ $\displaystyle=\left(\frac{x_{n}+\gamma}{|(x^{\prime},x_{n}+\gamma)|}\right)^{2}f^{\prime}(\Xi)+\left(\frac{1}{|(x^{\prime},x_{n}+\gamma)|}-\frac{(x_{n}+\gamma)^{2}}{|(x^{\prime},x_{n}+\gamma)|^{3}}\right)f\left(\Xi\right),$ giving $\displaystyle\text{Tr}(D^{2}V)=f^{\prime}(\Xi)+\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f\left(\Xi\right).$ By construction we have $f^{\prime}(t)=-\frac{\Phi(t,f(t))}{\lambda(t)}-K\frac{\Lambda(t)}{\lambda(t)}f(t)$ and hence $\displaystyle\text{Tr}(D^{2}V)=-\frac{\Phi\left(\Xi,f\left(\Xi\right)\right)}{\lambda(\Xi)}-K\frac{\Lambda(\Xi)}{\lambda(\Xi)}f(\Xi)+\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f\left(\Xi\right).$ We assume from here on that $\Phi\geq 0$ and decompose $D^{2}V=\left(D^{2}V\right)^{+}-\left(D^{2}V\right)^{-}$ so that $\displaystyle\text{Tr}\left(\left(D^{2}V\right)^{+}\right)$ $\displaystyle=\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f\left(\Xi\right)\quad\textrm{and}$ $\displaystyle\text{Tr}\left(\left(D^{2}V\right)^{-}\right)$ $\displaystyle=\frac{\Phi\left(\Xi,f\left(\Xi\right)\right)}{\lambda(\Xi)}+K\frac{\Lambda(\Xi)}{\lambda(\Xi)}f(\Xi)$ Utilizing the structure assumption ($\star\star$ ‣ 1), the fact that $V\geq 0$ and using (2.5) give $\displaystyle F(x,V,DV,D^{2}V)\geq$ $\displaystyle F(x,0,DV,D^{2}V)$ $\displaystyle\geq$ 
$\displaystyle-\Phi\left(|x|,f\left(\Xi\right)\right)-\Lambda(x_{n})\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f\left(\Xi\right)$ $\displaystyle+\frac{\lambda(x_{n})\Phi\left(\Xi,f\left(\Xi\right)\right)}{\lambda(\Xi)}+K\frac{\lambda(x_{n})\Lambda(\Xi)}{\lambda(\Xi)}f(\Xi)$ $\displaystyle\geq$ $\displaystyle-\Lambda(x_{n})\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f(\Xi)+K\frac{\lambda(x_{n})\Lambda(\Xi)}{\lambda(\Xi)}f(\Xi)$ (2.6) since $\lambda(\Xi)\leq\lambda(x_{n})$ and $\Phi\left(|x|,f\left(\Xi\right)\right)\leq\Phi\left(\Xi,f\left(\Xi\right)\right)$ hold. The latter follows since $|x|\geq\Xi_{R}(x)\geq x_{n}$ by geometry, see Figure 1, and since $\lambda$ and $t\mapsto\Phi(t,s)$ are nonincreasing by assumption. To show that $V$ is a strict classical supersolution we need $F(x,V,DV,D^{2}V)>0$ and by (2.6) and the fact that $f(\Xi)>0$ it suffices to ensure $\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}<K\frac{\lambda(x_{n})\Lambda(\Xi)}{\Lambda(x_{n})\lambda(\Xi)}.$ Observing that $\frac{\lambda(x_{n})}{\Lambda(x_{n})}\geq\frac{\lambda(\Xi)}{\Lambda(\Xi)}$ holds since $\lambda/\Lambda$ is nonincreasing, it suffices to ensure $\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}<K.$ We know that $\gamma\leq|(x^{\prime},x_{n}+\gamma)|$ in $\mathbb{R}_{+}^{n}$ so it is enough to have $\frac{n-1}{\gamma(R)}<K$ which holds since we have $K=\frac{n}{\gamma(R)}$. 
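The trace identity $\text{Tr}(D^{2}V)=f^{\prime}(\Xi)+\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f(\Xi)$ underlying these computations can be sanity-checked by finite differences. In the sketch below the profile $f(t)=e^{-t}$, the dimension $n=3$, the value $\gamma=1$ and the test point are hypothetical choices of ours, used only to exercise the formula:

```python
import math

# Check Tr(D^2 V) = f'(Xi) + (n-1)/|(x', x_n + gamma)| * f(Xi) for
# V(x) = int_0^{Xi(x)} f(t) dt, Xi(x) = |(x', x_n + gamma)| - gamma,
# with the hypothetical profile f(t) = e^{-t}, n = 3, gamma = 1.
n, gamma = 3, 1.0

def V(x):
    r = math.sqrt(sum(xi**2 for xi in x[:-1]) + (x[-1] + gamma)**2)
    Xi = r - gamma
    return 1.0 - math.exp(-Xi)          # int_0^Xi e^{-t} dt

def laplacian_fd(x, h=1e-4):
    """Central second differences: sum_i (V(x+h e_i) - 2 V(x) + V(x-h e_i)) / h^2."""
    out = 0.0
    for i in range(n):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        out += (V(xp) - 2*V(x) + V(xm)) / h**2
    return out

x = [0.5, 0.3, 0.7]
r = math.sqrt(x[0]**2 + x[1]**2 + (x[2] + gamma)**2)
Xi = r - gamma
exact = -math.exp(-Xi) + (n - 1) / r * math.exp(-Xi)   # f'(Xi) + (n-1)/r * f(Xi)
assert abs(laplacian_fd(x) - exact) < 1e-5
```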
If $\Phi\leq 0$ then by construction $f^{\prime}(t)=-\frac{\Phi\left(t,f\left(t\right)\right)}{\Lambda(t)}-K\frac{\Lambda(t)}{\lambda(t)}f(t)$ and we obtain $\displaystyle\text{Tr}\left(\left(D^{2}V\right)^{+}\right)$ $\displaystyle=-\frac{\Phi\left(\Xi,f\left(\Xi\right)\right)}{\Lambda(\Xi)}+\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f(\Xi),$ $\displaystyle\text{Tr}\left(\left(D^{2}V\right)^{-}\right)$ $\displaystyle=K\frac{\Lambda(\Xi)}{\lambda(\Xi)}f(\Xi)$ (2.7) and thus instead of (2.6) we end up with $\displaystyle F(x,V,DV,D^{2}V)\geq$ $\displaystyle F(x,0,DV,D^{2}V)$ $\displaystyle\geq$ $\displaystyle-\Phi\left(|x|,f\left(\Xi\right)\right)+\frac{\Lambda(x_{n})}{\Lambda(\Xi)}\Phi\left(\Xi,f\left(\Xi\right)\right)$ $\displaystyle-\Lambda(x_{n})\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f(\Xi)+\lambda(x_{n})K\frac{\Lambda(\Xi)}{\lambda(\Xi)}f(\Xi)$ $\displaystyle\geq$ $\displaystyle-\Lambda(x_{n})\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f(\Xi)+\lambda(x_{n})K\frac{\Lambda(\Xi)}{\lambda(\Xi)}f(\Xi).$ (2.8) Here, the last inequality holds since $x_{n}\leq\Xi_{R}(x)\leq|x|$, $\Lambda$ is nondecreasing and $-\Phi\geq 0$ is nondecreasing in its first argument so that $-\Phi\left(|x|,f\left(\Xi\right)\right)\geq\frac{\Lambda(x_{n})}{\Lambda(\Xi)}\left(-\Phi\left(\Xi,f\left(\Xi\right)\right)\right).$ To ensure that $V$ is a strict supersolution we see from (2.8) that it remains to show that $\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}<K\frac{\lambda(x_{n})}{\Lambda(x_{n})}\frac{\Lambda(\Xi)}{\lambda(\Xi)}$ and we are thus back in the same situation as in the case $\Phi\geq 0$. The proof of Lemma 2.2 is complete. $\hfill\Box$ Proof of Theorem 2.1. Let $u$ be as in the statement of the theorem and denote by $\nu_{0}$ the initial condition in (2.1) for which we want to prove the growth estimate. Let $R>0$ and put $V:=V_{R}(x)$, where $V_{R}(x)$ is the strict supersolution in $D(R)$ guaranteed by Lemma 2.2. 
If $V\geq M(R)$ on $\partial D(R)$ then, if $\Phi$ is nonnegative, it follows that $f_{\nu}\leq\nu$ and we obtain equality by decreasing $\nu$. If $\Phi$ is nonpositive we note that $f^{\prime}_{\nu,R}(t)\leq\widetilde{\Phi}(f_{\nu,R}(t))$ for some $\widetilde{\Phi}(s)\geq 0$ satisfying ($\star\star\star$ ‣ 1). Thus $\int_{\nu}^{f_{\nu,R}(t)}\frac{ds}{\widetilde{\Phi}(s)}\leq t$ which implies that $f_{\nu,R}(t)\searrow 0$ as $\nu\searrow 0$ for all $t\in[0,R]$. Therefore, we obtain equality by decreasing $\nu$ also in this case. If $V<M(R)$ on $\partial D(R)$ then we increase $\nu$. If this does not help (note that we may have $V\leq A$ for all $\nu$ and all $R$, for some $A>0$), then we lift the supersolution by adding a nonnegative constant. Indeed, for $C\geq 0$ it follows from degenerate ellipticity that also $V+C$ is a strict supersolution. We conclude that $V+C\geq C\quad\text{on}\quad\\{x_{n}=0\\}\quad\text{and}\quad V+C=M(R)\quad\text{on}\quad\partial D(R)\cap\mathbb{R}^{n}_{+}.$ We clarify that if $C>0$ then we have taken $\nu>\nu_{0}$. It follows that $\displaystyle\limsup_{x\to z}u(x)\leq V(z)+C\quad\textrm{for all}\quad z\in\partial D(R)$ and the weak comparison principle in Lemma 1.1 implies that $u\leq V+C$ in $D(R)$. We next conclude that $\nu\geq\nu_{0}$. Indeed, assume $\nu<\nu_{0}$. By assumption and by the above we have $V(\bar{x})\geq u(\bar{x})\geq\int_{0}^{\bar{x}_{n}}f_{\nu_{0}}(t)\,dt$ for some $\bar{x}\in D(R)\cap\\{\bar{x}^{\prime}=0\\}$, but on the other hand $\displaystyle V(\bar{x})=\int_{0}^{\Xi_{R}(\bar{x})}f_{\nu,R}\left(t\right)dt=\int_{0}^{\bar{x}_{n}}f_{\nu,R}\left(t\right)dt<\int_{0}^{\bar{x}_{n}}f_{\nu_{0},R}\left(t\right)dt\leq\int_{0}^{\bar{x}_{n}}f_{\nu_{0}}(t)\,dt,$ where the last inequality follows since $f_{\nu,R}\leq f_{\nu}$. Hence, we have a contradiction and we therefore conclude $\nu\geq\nu_{0}$. 
Now let $R>\bar{x}_{n}$, $h\in(0,R)$ and note that by the comparison principle $\displaystyle M(R)-M(R-h)$ $\displaystyle\geq V(0,\dots,0,R)-V(0,\dots,0,R-h)$ $\displaystyle=\int_{R-h}^{R}f_{\nu,R}\left(t\right)dt\geq\int_{R-h}^{R}f_{\nu_{0},R}\left(t\right)dt>0.$ Hence $M(R)$ is increasing and $\displaystyle\frac{M(R)-M(R-h)}{\int_{R-h}^{R}f_{\nu_{0}}\left(t\right)dt}\geq\frac{\int_{R-h}^{R}f_{\nu_{0},R}\left(t\right)dt}{\int_{R-h}^{R}f_{\nu_{0}}\left(t\right)dt}.$ (2.9) Inequality (2.9) holds for any $R>\bar{x}_{n}$ independent of $h$. Taking the limit yields $\displaystyle\frac{M^{\prime}(R)}{f_{\nu_{0}}\left(R\right)}=\liminf_{h\to 0^{+}}\frac{M(R)-M(R-h)}{\int_{R-h}^{R}f_{\nu_{0}}\left(t\right)dt}\geq\frac{f_{\nu_{0},R}\left(R\right)}{f_{\nu_{0}}\left(R\right)}$ and thus $\liminf_{R\to\infty}\frac{M^{\prime}(R)}{f_{\nu_{0}}\left(R\right)}\geq\liminf_{R\to\infty}\frac{f_{\nu_{0},R}\left(R\right)}{f_{\nu_{0}}\left(R\right)}.$ This completes the proof of the theorem. $\hfill\Box$ ## 3 Applications to well-known equations In this section we apply Theorem 2.1 to some well-known PDEs for which we can solve the ODE in (2.1) explicitly – calculate the limit $\liminf_{R\to\infty}\frac{f_{\nu,R}(R)}{f_{\nu}(R)}$ – and conclude explicit growth estimates. When stating corollaries for specific classes of PDEs we would sometimes prefer to consider other types of “weak” solutions than viscosity solutions whenever these are more suitable or more commonly used for such equations in the literature. As the equivalence of different kinds of “weak” solutions often is a nontrivial problem we will in some cases avoid going into these details, but this should not make things unclear. The reason is that we only use comparison between “weak” subsolutions and classical strict supersolutions – i.e. Lemma 1.2. We begin with the simplest case $\Phi\equiv 0$, including e.g. 
the famous $p$-Laplace equation, proceed with PDEs having sublinear growth in the gradient according to $\Phi(t,s)=C(t)s^{k}$ for $k\geq 1$ and end by the variable exponent $p$-Laplace equation, which satisfies assumption ($\star\star$ ‣ 1) with $\Phi(t,s)=C(t)s|\log s|$. ### The case $\Phi\equiv 0$ In this simple case the ODE (2.1) reduces to $\frac{df}{dt}=-K\frac{\Lambda(t)}{\lambda(t)}f(t)\quad\text{with}\quad f(0)=\nu$ and hence $f_{\nu,R}(t)=\nu e^{-K\int_{0}^{t}\Lambda(s)\lambda^{-1}(s)ds}$ and $f_{\nu}(t)\equiv\nu$. We obtain the limit $\displaystyle\liminf_{R\to\infty}\frac{M^{\prime}(R)}{f_{\nu}(R)}\geq\liminf_{R\to\infty}\frac{f_{\nu,R}(R)}{f_{\nu}(R)}=\lim_{R\to\infty}e^{-K\int_{0}^{R}\Lambda(s)\lambda^{-1}(s)ds}$ (3.1) which is positive if $K\int_{0}^{R}\Lambda(s)\lambda^{-1}(s)ds\precsim 1.$ Recalling from the definitions in (2.1) that $K=\frac{n}{\gamma(R)}$, this forces us to choose the function $\gamma(R)$ in the definition (2.2) of $D(R)$ such that $\displaystyle\int_{0}^{R}\frac{\Lambda(s)}{\lambda(s)}ds\precsim\gamma(R).$ (3.2) Following the remark just below Theorem 2.1 we see that (3.1) implies $\displaystyle\liminf_{R\to\infty}\,\frac{M(R)}{R}\,>\,0,$ (3.3) and we thus retrieve the classical form of the Phragmén–Lindelöf theorem. If the PDE is uniformly elliptic, i.e. $\Lambda/\lambda=constant$, then according to (3.2) we can pick $\gamma(R)=R$ and thereby $D(R)$ preserves its geometric proportions for all $R>0$, which also agrees with the classical Phragmén–Lindelöf theorem. If ellipticity blows up at infinity, i.e. $\Lambda(R)/\lambda(R)\to\infty$, then the loss in estimate (3.3) comes only in the shape of $D(R)$ – it expands faster in the $x^{\prime}$-direction since we need to take a larger $\gamma(R)$ according to (3.2). Concerning sharpness of (3.3) we note that with $\phi\equiv 0$ the solution of (2.3) yields $\int_{0}^{x_{n}}\,f_{\nu}(s)\,ds=\nu\,x_{n},$ which shows that the bound in (3.3) is attained. 
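A quick numerical check of the closed form $f_{\nu,R}(t)=\nu e^{-K\int_{0}^{t}\Lambda\lambda^{-1}\,ds}$ in this case can be done as follows. The non-uniform ellipticity $\lambda(t)=1/(1+t)$, $\Lambda(t)=1+t$ is an illustrative choice of ours (then $\Lambda/\lambda=(1+t)^{2}$, and $\gamma(R)=(1+R)^{3}$ is large enough for (3.2)):

```python
import math

# Phi = 0 case: f' = -K * (Lambda(t)/lambda(t)) * f with solution
# f(t) = nu * exp(-K * int_0^t Lambda/lambda ds).
# Illustrative ellipticity: lambda(t) = 1/(1+t), Lambda(t) = 1+t.
n, R = 2, 3.0
gamma = (1.0 + R)**3          # satisfies (3.2) for this ellipticity
K = n / gamma
nu = 1.0

rhs = lambda t, f: -K * (1.0 + t)**2 * f   # Lambda/lambda = (1+t)^2
f, t = nu, 0.0
steps = 4000
h = R / steps
for _ in range(steps):        # classical RK4
    k1 = rhs(t, f)
    k2 = rhs(t + h/2, f + h/2 * k1)
    k3 = rhs(t + h/2, f + h/2 * k2)
    k4 = rhs(t + h, f + h * k3)
    f += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h

# int_0^R (1+s)^2 ds = ((1+R)^3 - 1) / 3
exact = nu * math.exp(-K * ((1.0 + R)**3 - 1.0) / 3.0)
assert abs(f - exact) < 1e-8
```

With $\gamma(R)$ chosen this way, the exponent $K\int_{0}^{R}\Lambda/\lambda\,ds$ stays bounded, which is exactly the positivity condition for the limit (3.1).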
Following ($\star$ ‣ 1) and ($\star\star$ ‣ 1) we see that (3.3) holds, e.g., for subsolutions of the quasilinear equations $\displaystyle-\sum_{i,j=1}^{n}A_{ij}(x)\frac{\partial^{2}u}{\partial x_{i}\partial x_{j}}+f(x,u,Du)=0,$ (3.4) corresponding to $F(x,r,p,X)=-\text{Tr}\left(A(x)X\right)+f(x,r,p)$, where $A(x)\in\mathbb{S}^{n}$ satisfies $\lambda(x_{n})\text{Tr}\left(Y\right)\leq\text{Tr}\left(A(x)Y\right)\leq\Lambda(x_{n})\text{Tr}\left(Y\right)$ for all $Y\geq 0$, and $\displaystyle P^{-}_{\lambda,\Lambda}(D^{2}u)+f(x,u,Du)=0,$ (3.5) whenever $f(x,u,Du)\geq 0$ is nondecreasing in $u$. One such PDE is the following $p$-Laplace equation, $p\in(1,\infty)$, with lower order terms $\displaystyle-\nabla\cdot\left(|Du|^{p-2}Du\right)+f(x,u,Du)=0.$ (3.6) Indeed, with $\displaystyle-F(x,u,Du,D^{2}u)=\Delta u+(p-2)\Delta_{\infty}u-\frac{f(x,u,Du)}{|Du|^{p-2}},$ where $\Delta_{\infty}u=\langle D^{2}u\frac{Du}{|Du|},\frac{Du}{|Du|}\rangle$ denotes the infinity Laplace operator, we see that $F$ satisfies (1.2) with $\Phi\equiv 0$ and $\min\left\\{1,p-1\right\\}\text{Tr}(Y)\leq F(x,u,Du,X)-F(x,u,Du,X+Y)\leq\max\left\\{1,p-1\right\\}\text{Tr}(Y),$ whenever $Y\geq 0$. Hence $F$ satisfies (1.1) with $\lambda=\min\left\\{1,p-1\right\\}$ and $\Lambda=\max\left\\{1,p-1\right\\}$ and Theorem 2.1 applies. Recalling that Lemma 1.2 holds for weak solutions (defined in the usual way) to $p$-Laplace type problems, or that viscosity solutions and weak solutions are equivalent for some $p$-Laplace type equations, see e.g. Juutinen–Lindqvist–Manfredi [24], Julin–Juutinen [23] and Medina–Ochoa [35], we retrieve the well-known Phragmén–Lindelöf results in Lindqvist [29] in the setting of halfspaces. ### The case $\Phi(t,s)=C(t)s^{k}$ We now consider equations satisfying ($\star\star$ ‣ 1) with $\Phi(t,s)=C(t)s^{k}$ for some $k\geq 1$. 
Note that such $\Phi$ satisfies ($\star\star\star$ ‣ 1) and hence we expect Theorem 2.1 to imply that the supremum of a positive subsolution over the boundary of $D(R)$ must keep increasing. The ODE in (2.1) yields $\frac{df}{dt}=-\frac{C(t)f^{k}}{\lambda(t)}-K\frac{\Lambda(t)}{\lambda(t)}f,\quad t\in(0,R),\quad\text{with}\quad f(0)=\nu.$ As $C(t)$ is nonincreasing (by assumptions on $\Phi$) and $\Lambda$ is nondecreasing we can replace the above equation with the separable ODE $\frac{df}{dt}=-A(t)\left(f^{k}+\widetilde{K}f\right),$ where $A(t):=\lambda^{-1}(t)C(t)$ and $\widetilde{K}:=\frac{K\Lambda(R)}{C(R)}$. This is possible since solutions of this ODE approach the origin faster when $t\in(0,R)$, and hence they create a lower bound on the limit in Theorem 2.1. To find the solution for $k>1$ we observe that $\frac{1}{\widetilde{K}}\int_{\nu}^{f(t)}\left(\frac{1}{y}-\frac{y^{k-2}}{y^{k-1}+\widetilde{K}}\right)dy=-\int_{0}^{t}A(s)ds$ and $\frac{1}{k-1}\left[\log y^{k-1}-\log\left(y^{k-1}+\widetilde{K}\right)\right]_{\nu}^{f(t)}=-\widetilde{K}\int_{0}^{t}A(s)ds.$ Thus $f_{\nu,R}(t)=\left\\{\begin{array}[]{ll}\nu e^{-\left(1+\widetilde{K}\right)\int_{0}^{t}A(s)ds}&\text{if $k=1$},\\\ \widetilde{K}^{\frac{1}{k-1}}\left({e^{(k-1)\widetilde{K}\int_{0}^{t}A(s)ds}\left(\frac{\widetilde{K}}{\nu^{k-1}}+1\right)-1}\right)^{\frac{1}{1-k}}&\text{if $k>1$},\end{array}\right.$ and by solving $\frac{df}{dt}=-\lambda(t)^{-1}C(t)f^{k}$ with $f(0)=\nu$ we also obtain $\displaystyle f_{\nu}(t)=\left\\{\begin{array}[]{ll}\nu e^{-\int_{0}^{t}A(s)ds}&\text{if $k=1$},\\\ \left((k-1)\int_{0}^{t}A(s)ds+\nu^{1-k}\right)^{\frac{1}{1-k}}&\text{if $k>1$}.\end{array}\right.$ (3.9) The limit in Theorem 2.1 becomes, for $k=1$, $\displaystyle\liminf_{R\to\infty}\frac{M^{\prime}(R)}{f_{\nu}(R)}\geq\liminf_{R\to\infty}\frac{f_{\nu,R}(R)}{f_{\nu}(R)}=\lim_{R\to\infty}e^{-\widetilde{K}\int_{0}^{R}A(s)ds},$ and for $k>1$, 
$\displaystyle\liminf_{R\to\infty}\frac{M^{\prime}(R)}{f_{\nu}(R)}\geq\liminf_{R\to\infty}\frac{f_{\nu,R}(R)}{f_{\nu}(R)}=\lim_{R\to\infty}\left(\frac{\widetilde{K}(k-1)\int_{0}^{R}A(s)ds+\widetilde{K}\nu^{1-k}}{e^{\widetilde{K}(k-1)\int_{0}^{R}A(s)ds}\left(\widetilde{K}\nu^{1-k}+1\right)-1}\right)^{\frac{1}{k-1}}.$ Observe that if $\widetilde{K}(R)\int_{0}^{R}A(s)ds\precsim 1,$ which is obtained by taking $\frac{\Lambda(R)}{C(R)}\int_{0}^{R}A(s)ds\precsim\gamma(R)$, then the limits are positive and we obtain $\displaystyle\liminf_{R\to\infty}\frac{M^{\prime}(R)}{f_{\nu}(R)}>0.$ (3.10) Thus, we may derive several Phragmén–Lindelöf type results using Theorem 2.1, whose form will depend on the exponent $k$ and the functions $C$, $\lambda$ and $\Lambda$. For example, using (3.9)-(3.10) we have proved: ###### Corollary 3.1 Suppose that ($\star\star$ ‣ 1) holds with $\Phi(t,s)=C(t)s^{k}$, $k\geq 1$. Let $u$ be a subsolution of ($\star$ ‣ 1) in $\mathbb{R}^{n}_{+}$ satisfying $\displaystyle\limsup_{x\to y}\,u(x)\,\leq\,0\quad\textrm{for all}\quad y\in\partial\mathbb{R}^{n}_{+}.$ Assume also that $u(\bar{x})>0$ for some $\bar{x}$ on the $x_{n}$-axis. 
Then the following is true, with $A(t)=C(t)\lambda(t)^{-1}$: * $(i)$ If $\int_{0}^{R}A(s)ds\precsim R^{\alpha(k-1)}$ for $\alpha\geq 0,k>1$ and $\frac{\Lambda(R)}{\gamma(R)C(R)}\precsim R^{-\alpha(k-1)}$ then $\displaystyle\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{R^{-\alpha}}\,>\,0\quad\text{implying}\quad\liminf_{R\to\infty}\,\frac{M(R)}{R^{1-\alpha}}\,>\,0.$ * $(ii)$ If $\int_{0}^{R}A(s)ds\precsim R$ and $\frac{\Lambda(R)}{\gamma(R)C(R)}\precsim\frac{1}{R}$ then $\displaystyle\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{e^{-R}}\,>\,0\quad\text{if $k=1$},\quad\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{R^{\frac{1}{1-k}}}\,>\,0\quad\text{if $k\in(1,2)$},$ $\displaystyle\liminf_{R\to\infty}\,\frac{M(R)}{\log(R)}\,>\,0\quad\text{if $k=2$}\quad\text{and}\quad\liminf_{R\to\infty}\,\frac{M(R)}{R^{\frac{2-k}{1-k}}}\,>\,0\quad\text{if $2<k$}.$ * $(iii)$ If $\int_{0}^{R}A(s)ds\precsim\log(R)$, $\frac{\Lambda(R)}{\gamma(R)C(R)}\precsim\frac{1}{\log(R)}$ and $k=1$ then $\displaystyle\liminf_{R\to\infty}\,\frac{M(R)}{\log(R)}\,>\,0.$ We remark that in Corollary 3.1 we only summarize some examples of growth estimates that take simple forms – the reader may return to conclusion (3.10) for the general case. Note also that all conclusions in Corollary 3.1 are independent of $\nu$, meaning that we only have to use arbitrarily small $\nu>0$ to prove them. Therefore, since $f_{\nu}$ and $f_{\nu,R}$ are nonincreasing functions in this case, we only need the assumption $\Phi(t,s)=C(t)s^{k}$ to hold for arbitrarily small $s$. Conclusion $(i)$ takes the form of the classical Phragmén–Lindelöf theorem and when $\alpha=0$ it applies e.g. when $\displaystyle\Phi(|x|,s)=C(|x|)s^{k}=\frac{c}{(1+|x|)^{a}}s^{k},$ (3.11) $k\geq 1$, $a>1$, ellipticity $\lambda(R)=constant$, $\Lambda(R)=constant$ and $\gamma(R)\succsim R^{a}$. Conclusion $(ii)$ holds e.g. when $A=C/\lambda=constant$, $\Lambda/C=constant$ and $\gamma(R)\equiv R$. 
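The explicit solutions behind Corollary 3.1 can be verified numerically. The sketch below treats $k=2$ with $A(t)=1/(1+t)$ (so $\int_{0}^{t}A\,ds=\log(1+t)$), $\widetilde{K}=0.5$ and $\nu=2$, all illustrative choices of ours, and compares the closed forms for $f_{\nu,R}$ and $f_{\nu}$ with RK4 integrations of the underlying ODEs:

```python
import math

# k = 2 closed forms: f_{nu,R} solves f' = -A (f^2 + Kt f),
# f_nu solves f' = -A f^2, both with f(0) = nu; A(t) = 1/(1+t).
Kt, nu, T = 0.5, 2.0, 5.0
A = lambda t: 1.0 / (1.0 + t)

def rk4(rhs, f0, t_end, steps=4000):
    h, t, f = t_end / steps, 0.0, f0
    for _ in range(steps):
        k1 = rhs(t, f); k2 = rhs(t + h/2, f + h/2*k1)
        k3 = rhs(t + h/2, f + h/2*k2); k4 = rhs(t + h, f + h*k3)
        f += h/6 * (k1 + 2*k2 + 2*k3 + k4); t += h
    return f

num_fR = rk4(lambda t, f: -A(t) * (f**2 + Kt*f), nu, T)
num_f  = rk4(lambda t, f: -A(t) * f**2,          nu, T)

I = math.log(1.0 + T)                              # int_0^T A(s) ds
exact_fR = Kt / (math.exp(Kt * I) * (Kt/nu + 1.0) - 1.0)   # (3.9)-type form, k = 2
exact_f  = 1.0 / (I + 1.0/nu)
assert abs(num_fR - exact_fR) < 1e-8
assert abs(num_f  - exact_f)  < 1e-8
```

The ratio `exact_fR / exact_f` is the quantity whose limit as $R\to\infty$ appears in Theorem 2.1.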
We observe that the exponent $k$ in $\Phi(t,s)=C(t)s^{k}$ has a borderline value at $k=2$. Namely, if $k\in[1,2)$ then subsolutions may be bounded, but if $k\in[2,\infty)$ then any subsolution must grow to infinity. As already mentioned in Section 2, such a borderline is, beyond the assumptions in Corollary 3.1, characterized by the convergence or divergence of $\displaystyle\int_{0}^{\infty}f_{\nu}(s)ds.$ Conclusion $(iii)$ holds e.g. when $a=1$ in (3.11), $\gamma(R)=R\log(R)$ and $\Lambda=constant$. We further remark that upper bounds on $\int_{0}^{R}A(s)ds$ have played an important role for related results in the literature, see e.g. Gilbarg [15], Hopf [19] and Vitolo [40], and that Phragmén–Lindelöf theorems for similar equations in more general domains but with $k=1$ and $k=2$ are proved by Capuzzo-Dolcetta–Vitolo [10] and Koike–Nakagawa [26]. As in the case $\Phi\equiv 0$ the results in this subsection apply to PDEs of type (3.4) and (3.5) but now with a relaxed assumption on $f$, namely $f(x,u,Du)\geq-C(|x|)|Du|^{k}.$ In case of the $p$-Laplace type equation (3.6), $1<p<\infty$, the growth condition on the lower order terms becomes $f(x,u,Du)\geq-C(|x|)|Du|^{\,k+p-2}$. When $A(s)=C(s)/\lambda(s)=constant$ we can explicitly find the classical solution of (2.4), ensuring sharpness, in case of $\Phi(t,s)=C(t)s^{k}$: $\displaystyle u(x_{n})=\int_{0}^{x_{n}}f_{\nu}(t)dt=\frac{1}{A}\left\\{\begin{array}[]{ll}\nu\left(1-e^{-Ax_{n}}\right)&\text{if $k=1$},\\\ \log\left(Ax_{n}\nu+1\right)&\text{if $k=2$},\\\ \frac{\nu^{2-k}}{2-k}\left(1-\left((k-1)Ax_{n}\nu^{k-1}+1\right)^{\frac{2-k}{1-k}}\right)&\text{otherwise}.\end{array}\right.$ Figure 2 shows the solutions $\int_{0}^{x_{n}}f_{\nu}(t)dt$ for some values of $k$, $\nu$ and different functions $A(t)$. Figure 2: The derivative $f_{\nu}$ in (3.9) and the solution $\int_{0}^{x}f_{\nu}(t)dt$. Solid curves (for $u$) approach infinity at speed $\log(x)$ and dashed-dot at speed $x^{8/9}$, while the dashed curve is bounded. 
Corresponding growth estimates are established in Corollary 3.1. In all simulations, $\nu=5.$ ### The case $\Phi(t,s)=C(t)s|\log s|$: variable exponent $p$-Laplace equation We set $\Phi(t,s)=C(t)s|\log s|$ and obtain the ODE $\displaystyle\frac{df}{dt}=-\frac{C(t)f|\log f|}{\lambda(t)}-K\frac{\Lambda(t)}{\lambda(t)}f,\quad t\in(0,R),\quad\text{with}\quad f(0)=\nu.$ By the same argument as in the case $\Phi=C(t)s^{k}$ we replace the above ODE with $\frac{df}{dt}=-A(t)\left(f|\log f|+\widetilde{K}f\right),$ where $A(t)=\lambda^{-1}(t)C(t)$ and $\widetilde{K}=\frac{K\Lambda(R)}{C(R)}$. This equation separates, when $0<f\leq 1$, to $\log\left(\widetilde{K}-\log\nu\right)-\log\left(\widetilde{K}-\log f\right)=-\int_{0}^{t}A(s)ds.$ Thus $f_{\nu,R}(t)=e^{\widetilde{K}\left(1-e^{\int_{0}^{t}A(s)ds}\right)}\nu^{e^{\int_{0}^{t}A(s)ds}}$ and since $f$ must be nonincreasing this holds for $0<\nu\leq 1$. If $f>1$ the solution takes a similar form. In particular $\displaystyle f_{\nu,R}(t)=\left\\{\begin{array}[]{ll}e^{\widetilde{K}\left(1-e^{\int_{0}^{t}A(s)ds}\right)}\nu^{e^{\int_{0}^{t}A(s)ds}}&\text{if $0<\nu\leq 1$},\\\ e^{\widetilde{K}\left(e^{-\int_{0}^{t}A(s)ds}-1\right)}\nu^{e^{-\int_{0}^{t}A(s)ds}}&\text{if $1<f(t)$}\end{array}\right.$ (3.14) and $\displaystyle f_{\nu}(t)=\left\\{\begin{array}[]{ll}\nu^{e^{\int_{0}^{t}A(s)ds}}&\text{if $0<\nu\leq 1$},\\\ \nu^{e^{-\int_{0}^{t}A(s)ds}}&\text{if $1<\nu$}.\end{array}\right.$ (3.17) The limit in Theorem 2.1 becomes, for $0<\nu\leq 1$, $\displaystyle\liminf_{R\to\infty}\frac{M^{\prime}(R)}{f_{\nu}(R)}\geq\liminf_{R\to\infty}\frac{f_{\nu,R}(R)}{f_{\nu}(R)}=\lim_{R\to\infty}e^{\widetilde{K}\left(1-e^{\int_{0}^{R}A(s)ds}\right)}$ (3.18) which is positive if $\widetilde{K}e^{\int_{0}^{R}A(s)ds}\precsim 1$. Therefore, we have to pick $\gamma(R)\succsim\frac{\Lambda(R)}{C(R)}e^{\int_{0}^{R}A(s)ds}$ to achieve a growth estimate. 
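For constant $A$ and $\widetilde{K}$, the branch of (3.14) for $0<\nu\leq 1$ can be checked directly against the ODE it was separated from, $df/dt=-Af|\log f|-A\widetilde{K}f$. A small numerical sketch; the constants are arbitrary choices:

```python
import math

A, Ktil, nu = 0.7, 0.3, 0.5   # arbitrary constants, nu in (0, 1]

def f_closed(t):
    """Branch of (3.14) for 0 < nu <= 1 with constant A and Ktil."""
    E = math.exp(A * t)
    return math.exp(Ktil * (1.0 - E)) * nu ** E

def rhs(f):
    """Right-hand side -A f |log f| - A*Ktil*f of the separated ODE."""
    return -A * f * abs(math.log(f)) - A * Ktil * f

# a centered difference quotient of f_closed matches the right-hand side
h = 1e-5
for t in (0.1, 0.5, 1.0, 2.0):
    dfdt = (f_closed(t + h) - f_closed(t - h)) / (2.0 * h)
    assert abs(dfdt - rhs(f_closed(t))) < 1e-6
```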
When $\nu>1$ we know that $f_{\nu,R}$ in (3.14) stays above 1 if $\left(\widetilde{K}+\log\nu\right)e^{-\int_{0}^{t}A(s)ds}>\widetilde{K}$, which forces us to take $\gamma(R)\succsim\frac{\Lambda(R)}{C(R)}e^{\int_{0}^{R}A(s)ds}$. Then $\displaystyle\liminf_{R\to\infty}\frac{M^{\prime}(R)}{f_{\nu}(R)}\geq\liminf_{R\to\infty}\frac{f_{\nu,R}(R)}{f_{\nu}(R)}=\lim_{R\to\infty}e^{\widetilde{K}\left(e^{-\int_{0}^{R}A(s)ds}-1\right)}\geq\lim_{R\to\infty}e^{-\widetilde{K}}>0$ as $\gamma(R)\succsim\frac{\Lambda(R)}{C(R)}$. If the solution $f_{\nu,R}$ decreases to 1 then the limit can be estimated as in (3.18) since $f_{\nu,R}$ follows the expression for $\nu\in(0,1]$ with $\nu=1$. In summary, since $\Phi(t,s)=C(t)s|\log s|$ satisfies ($\star\star\star$) we can conclude that for a subsolution $u$ satisfying the assumptions in Theorem 2.1 with $\Phi(t,s)=C(t)s|\log s|$, the following is true when $\gamma(R)\succsim\frac{\Lambda(R)}{C(R)}e^{\int_{0}^{R}A(s)ds}$: * • If $u(\bar{x})\geq\check{u}$ for some $\bar{x}$ on the $x_{n}$-axis and $\nu\in(0,1]$, then $M(R)$ may be bounded but $\displaystyle\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{\nu^{e^{\int_{0}^{R}A(s)ds}}}>0.$ (3.19) * • If $u(\bar{x})\geq\check{u}$ for some $\bar{x}$ on the $x_{n}$-axis and $\nu>1$, then $M(R)$ approaches infinity according to $\displaystyle\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{\nu^{e^{-\int_{0}^{R}A(s)ds}}}>0\quad\text{implying}\quad\liminf_{R\to\infty}\,\frac{M(R)}{R}>0.$ (3.20) Thus we retrieve the classical form of a Phragmen–Lindelöf theorem if the subsolution exceeds $\check{u}$ in (3.25) with $\nu\geq 1$, but if the subsolution only exceeds $\check{u}$ with $\nu<1$, it may grow very slowly and need not approach infinity. The border at $\nu=1$ originates from the fact that $\Phi(1)=0$ and thus $f_{1}\equiv 1$, while $\Phi(s)>0$ for all other positive $s$, implying $f_{\nu}\to 0$ if $\nu\in(0,1)$ and $f_{\nu}\to 1$ if $\nu>1$. The solution of (2.4) with $\Phi(t,s)=C(t)s|\log s|$, i.e. 
$\displaystyle C(x_{n})|Du||\log|Du||+\lambda(x_{n})\Delta u=0,$ (3.21) ensuring sharpness, can be calculated analytically when $A(t)=C(t)/\lambda(t)=constant$ and then yields $\displaystyle\check{u}(x)=\int_{0}^{x_{n}}f_{\nu}(s)ds=\frac{1}{A}\left\\{\begin{array}[]{ll}-E_{i}\left(\log\nu\right)+E_{i}\left(e^{Ax_{n}}\log\nu\right)&\text{if $0<\nu<1$},\\\ x_{n}&\text{if $\nu=1$},\\\ E_{i}\left(\log\nu\right)-E_{i}\left(e^{-Ax_{n}}\log\nu\right)&\text{if $1<\nu$}\end{array}\right.$ (3.25) where $E_{i}$ is the Exponential integral. See Figure 3 (upper row) for some illustrations of the functions $f_{\nu}$ in (3.17) and $\check{u}$ in (3.25). #### Variable exponent $p$-Laplace equation The $p(x)$-Laplace equation, which often serves as a model example for PDEs with nonstandard growth, reads $\nabla\cdot(|Du|^{p(x)-2}Du)\,=\,0.$ (3.26) The function $p:\Omega\to(1,\infty)$ is usually called a variable exponent. If $p=constant$, then this equation is the classical $p$-Laplace equation, and if $p=2$ it is the famous Laplace equation. Apart from interesting theoretical considerations, such equations arise in the applied sciences, for instance in fluid dynamics, see e.g. Diening–Růžička [13], in image processing, see e.g. Chen–Levine–Rao [11], and in the study of electro-rheological fluids; we refer the reader to Harjulehto–Hästö–Lê–Nuortio [17] for a recent survey and further references. We recall the following standard definition of weak solutions of (3.26): A function $u\in W^{1,p(x)}_{loc}(\Omega)$ is a weak (sub)solution of (3.26) if $\int_{\Omega}|Du|^{p(x)-2}Du\cdot D\psi\,dx(\leq)=0$ for all (nonnegative) $\psi\in C_{0}^{\infty}(\Omega)$. Similarly, $u$ is a weak supersolution if $-u$ is a weak subsolution. A function which is both a weak subsolution and a weak supersolution is called a weak solution. A (USC/LSC) weak (sub/super)solution is called a $p(x)$-(sub/super)harmonic function. 
We also note that $u\in W^{1,1}_{loc}(\Omega)$ is $p(x)$-harmonic in $\Omega$ if it is a local minimizer of the energy $\int_{\Omega}\frac{1}{p(x)}|Du|^{p(x)}dx,$ where $1<p(x)<\infty$. To proceed we define the operator $\displaystyle\Delta_{p(x)}u:=\Delta u+(p(x)-2)\Delta_{\infty}u+\log|Du|\langle Dp(x),Du\rangle,$ where $\Delta_{\infty}u=\langle D^{2}u\frac{Du}{|Du|},\frac{Du}{|Du|}\rangle$ denotes the infinity Laplace operator. We note that $\Delta_{p(x)}u\geq 0$ implies $\displaystyle-\widehat{F}(x,Du,D^{2}u):=\Lambda(x)\text{Tr}((D^{2}u)^{+})-\lambda(x)\text{Tr}((D^{2}u)^{-})+|Dp(x)||Du||\log|Du||\geq 0$ (3.27) with $\lambda(x)=\min\\{1,p(x)-1\\}$ and $\Lambda(x)=\max\\{1,p(x)-1\\}$. This suggests that $p(x)$-subharmonic functions should be viscosity subsolutions to $\widehat{F}=0$, which is the case. Indeed, following the proof in Julin [22], which expands on Juutinen–Lukkari–Parviainen [25], we can conclude the following slightly generalized version of [22, Lemma 5.2]: ###### Lemma 3.2 Suppose that $p(x)$ is $C^{1}(\mathbb{R}_{+})$, $1<p(x)<\infty$, $\lambda(x)=\min\\{1,p(x)-1\\}$ and $\Lambda(x)=\max\\{1,p(x)-1\\}$. If $u$ is $p(x)$-subharmonic in a domain $\Omega\subset\mathbb{R}^{n}$, $\varphi\in C^{2}(\Omega)$ is such that $\varphi(x_{0})=u(x_{0})$ at $x_{0}\in\Omega$ and $\varphi\geq u$, then $\widehat{F}(x_{0},D\varphi(x_{0}),D^{2}\varphi(x_{0}))\leq-\Delta_{p(x)}\varphi(x_{0})\leq 0.$ To obtain a PDE satisfying the required assumptions we redefine $\widehat{F}$ by replacing ellipticity with $\lambda(x_{n})\leq\min\\{1,p(x)-1\\}$ nonincreasing, $\Lambda(x_{n})\geq\max\\{1,p(x)-1\\}$ nondecreasing and also replacing the nonhomogeneous term with $\Phi(|x|,s)\geq|Dp(x)|s|\log s|$, where $\Phi(|x|,s)$ is nonincreasing in $|x|$. 
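The inequality (3.27) rests on the pointwise matrix bound $\Delta u+(p-2)\Delta_{\infty}u\leq\Lambda(x)\text{Tr}((D^{2}u)^{+})-\lambda(x)\text{Tr}((D^{2}u)^{-})$, which holds because each eigenvalue of $D^{2}u$ enters with a weight $1+(p-2)\langle Du/|Du|,e_{i}\rangle^{2}$ lying between $\min\\{1,p-1\\}$ and $\max\\{1,p-1\\}$. A random-sample verification in two dimensions (a sketch; the sampling ranges are arbitrary):

```python
import math
import random

random.seed(7)

def eigs2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    m = (a + c) / 2.0
    d = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return m - d, m + d

# For X = D^2 u and the unit vector q = Du/|Du|:
#   Tr(X) + (p - 2) <X q, q>  <=  Lam * Tr(X^+) - lam * Tr(X^-),
# with lam = min{1, p-1}, Lam = max{1, p-1}, and X^+, X^- the
# positive/negative parts of X (both positive semidefinite).
for _ in range(1000):
    a, b, c = (random.uniform(-5.0, 5.0) for _ in range(3))
    t = random.uniform(0.0, 2.0 * math.pi)
    q = (math.cos(t), math.sin(t))
    p = random.uniform(1.01, 10.0)
    lam, Lam = min(1.0, p - 1.0), max(1.0, p - 1.0)
    Xqq = a * q[0] ** 2 + 2.0 * b * q[0] * q[1] + c * q[1] ** 2
    lhs = (a + c) + (p - 2.0) * Xqq
    e1, e2 = eigs2(a, b, c)
    tr_plus = max(e1, 0.0) + max(e2, 0.0)
    tr_minus = -(min(e1, 0.0) + min(e2, 0.0))
    assert lhs <= Lam * tr_plus - lam * tr_minus + 1e-9
```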
In particular, we can take $\displaystyle\lambda(t)=$ $\displaystyle p_{\lambda}:=\inf_{y:y_{n}\leq t}\min\\{1,p(y)-1\\},\quad\Lambda(t)=p_{\Lambda}:=\sup_{y:y_{n}\leq t}\max\\{1,p(y)-1\\}\quad\text{and}$ $\displaystyle\Phi(t,s)$ $\displaystyle=||Dp||_{\infty,t}\,s|\log s|,\quad\text{where}\quad||f||_{\infty,t}=\sup_{y:|y|\geq t}|f(y)|.$ (3.28) In conclusion, a weak USC subsolution (a $p(x)$-subharmonic function) to the variable exponent $p$-Laplace equation is a viscosity subsolution of a PDE of type ($\star$) satisfying ($\star\star$) and ($\star\star\star$). We can therefore conclude that deductions (3.19) and (3.20) hold for $p(x)$-subharmonic functions whenever $p(x)$ is $C^{1}(\mathbb{R}^{n}_{+})$ and $1<p(x)<\infty$. We summarize our findings in the following theorem yielding Phragmen–Lindelöf-type results, of which some are sharp, for weak solutions to the variable exponent $p$-Laplace equation: ###### Theorem 3.3 Suppose that $p(x)$ is $C^{1}(\mathbb{R}^{n}_{+})$, $1<p(x)<\infty$, and let $u$ be $p(x)$-subharmonic in $\mathbb{R}^{n}_{+}$ satisfying $\displaystyle\limsup_{x\to y}\,u(x)\,\leq\,0\quad\textrm{for all}\quad y\in\partial\mathbb{R}^{n}_{+}.$ Then $u$ is a viscosity subsolution of an equation of type ($\star$) satisfying ($\star\star$) and ($\star\star\star$) with $\Phi$, $\lambda=p_{\lambda}$ and $\Lambda=p_{\Lambda}$ as in (3.28). 
Moreover, if $\frac{p_{\Lambda}(R)}{||Dp||_{\infty,R}}{\exp\left({\int_{0}^{R}\frac{||Dp||_{\infty,s}}{p_{\lambda}(s)}ds}\right)}\precsim\gamma(R)$ and $\check{u}(x)=\int_{0}^{x_{n}}f_{\nu}(s)ds$ with $f_{\nu}$ from (3.17) then the following is true: * • If $u\geq\check{u}$ somewhere on the $x_{n}$-axis for $\nu\in(0,1]$ then $\displaystyle\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{\nu^{\exp\left({\int_{0}^{R}\frac{||Dp||_{\infty,s}}{p_{\lambda}(s)}ds}\right)}}>0.$ * • If $u\geq\check{u}$ somewhere on the $x_{n}$-axis for $\nu>1$ then $\displaystyle\liminf_{R\to\infty}\,\frac{M(R)}{R}>0.$ We thus retrieve the classical form of a Phragmen–Lindelöf theorem if the subsolution exceeds $\check{u}$ with $\nu\geq 1$, in particular if it exceeds $x_{n}$. On the other hand, if the subsolution only exceeds $\check{u}$ with $\nu<1$, then Theorem 3.3 states that it may grow very slowly or even remain bounded. The sharpness in the case $\nu\geq 1$ follows by observing, e.g., that $\displaystyle u(x)=c\,x_{n}\quad\text{is $p(x)$-harmonic with}\quad p(x)=M_{0}+\sum_{i=1}^{n-1}M_{i}x_{i}^{2},$ (3.29) whenever $c\geq 1$, $M_{0}>1$ and $M_{i}$, $i\in\\{1,\dots,n-1\\}$, are constants. It is worth observing that the conclusion $\displaystyle\liminf_{R\to\infty}\,\frac{M(R)}{R}>0$ follows also in the case $\nu\in(0,1]$ if ${\int_{0}^{R}\frac{||Dp||_{\infty,s}}{p_{\lambda}(s)}ds}\precsim 1$ since then $\liminf_{R\to\infty}\,M^{\prime}(R)>0$. This holds e.g. if the exponent satisfies $p^{-}<p(x)$ and $||Dp||_{\infty,s}\precsim s^{-k}$ for some constants $p^{-},k>1$; a natural conclusion since these assumptions force the equation toward the constant exponent $p$-Laplace equation far away from the origin. Other versions of Theorem 3.3 can be derived from (3.19) and (3.20); e.g., it may be useful to redefine the norm in (3.28) as $||f||_{\infty,t}=\sup_{y:y_{n}=t}|f(y)|$. 
Then, if $||Dp(x)||_{\infty,x_{n}}\neq 0$ for all $x_{n}>0$ we may divide (3.27) by $||Dp(x)||_{\infty,x_{n}}$ and conclude, for $\lambda(x_{n})\leq\frac{\min\\{1,p(x)-1\\}}{||Dp(x)||_{\infty,x_{n}}}$ nonincreasing and $\Lambda(x_{n})\geq\frac{\max\\{1,p(x)-1\\}}{||Dp(x)||_{\infty,x_{n}}}$ nondecreasing, that Theorem 3.3 holds with $\frac{p_{\Lambda}(R)}{||Dp||_{\infty,R}}$ replaced by $\Lambda(R)$ and $\frac{||Dp||_{\infty,s}}{p_{\lambda}(s)}$ replaced by $\lambda^{-1}(s)$. In particular, in the case $\nu\in(0,1]$ the conclusion then reads $\displaystyle\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{\nu^{e^{\int_{0}^{R}\lambda^{-1}(s)ds}}}>0.$ (3.30) We establish the sharpness of this result in Remark 3.4 below, in which we find a family of exponents for which the solution in (3.25), which satisfies $M^{\prime}(R)=\nu^{e^{\int_{0}^{R}\lambda^{-1}(s)ds}}$, is $p(x)$-harmonic. We further remark that Theorem 3.3 sharpens some results of Adamowicz [1] in the geometric setting of halfspaces, and the $C^{1}$-assumption on $p(x)$ should be replaceable with local Lipschitz continuity by approximation arguments. Furthermore, the reader may recall the remarks made below deductions (3.19) and (3.20) and also note that, contrary to the results in the former subsection for $\Phi(t,s)=C(t)s^{k}$, the growth estimates here depend on $\nu$. Our estimates may not be optimal when $\log|Du|\langle Dp(x),Du\rangle$ is negative since then we lose information by our choice of $\Phi$. We can improve by taking $\Phi(s)\equiv 0$, but we still lose information when the gradients of subsolutions are not “close to perpendicular” to $Dp(x)$. This motivates us to derive better estimates under assumptions excluding e.g. the solution in (3.29). We do so by studying a nonpositive $\Phi$, the case $\Phi(s)=-s|\log s|$, in the next subsection. 
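As a numerical complement to the sharpness discussion (made precise in Remark 3.4 below): for constant $\lambda$ and $\nu\in(0,1)$, the profile with derivative $u^{\prime}(x)=\nu^{e^{x/\lambda}}$, i.e. the branch of (3.17) with $A=\lambda^{-1}$, together with the exponent $p(x)=1+Me^{-x/\lambda}$ annihilates the one-dimensional operator $(p-1)u^{\prime\prime}+\log|u^{\prime}|\,p^{\prime}u^{\prime}$. A minimal sketch; the parameter values are arbitrary:

```python
import math

lam, M, nu = 1.0, 0.5, 0.4   # arbitrary: constant lambda, M in (0,1], nu in (0,1)

def up(x):
    """u'(x) = f_nu(x) = nu^(e^(x/lam)), the branch of (3.17) for nu <= 1."""
    return nu ** math.exp(x / lam)

def p(x):
    """Exponent p(x) = 1 + M e^(-x/lam)."""
    return 1.0 + M * math.exp(-x / lam)

def d(g, x, h=1e-6):
    """Centered finite difference approximating g'(x)."""
    return (g(x + h) - g(x - h)) / (2.0 * h)

# The one-dimensional p(x)-Laplace operator (p-1) u'' + log|u'| p' u'
# vanishes along this pair; here d(up, x) approximates u''(x).
for x in (0.2, 0.7, 1.3):
    val = (p(x) - 1.0) * d(up, x) + math.log(up(x)) * d(p, x) * up(x)
    assert abs(val) < 1e-6
```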
We proceed by proving the following result, in which we find the “slowest growing” $p(x)$-harmonic function, for a given ellipticity bound, and the corresponding family of exponents. ###### Remark 3.4 The function $\check{u}(x)=\int_{0}^{x_{n}}f_{\nu}(s)ds$ with $f_{\nu}$ from (3.17) is $p(x)$-harmonic with exponent $\check{p}(x)=1+Me^{-\int_{0}^{x_{n}}A(s)ds}\quad\text{if $\nu\in(0,1]$ and}\quad\check{p}(x)=1+Me^{\int_{0}^{x_{n}}A(s)ds}\quad\text{if $\nu>1$},$ whenever $M\in\mathbb{R}$ is a constant. Suppose that $\nu\in(0,1]$ and $A^{-1}(s)=\lambda(s)$ is nonincreasing. Then $\check{u}(x)$ is the slowest growing $p(x)$-harmonic function in the sense of version (3.30) of Theorem 3.3. In particular, any $p(x)$-subharmonic function with exponent $p(x)\in C^{1}(\mathbb{R}^{n}_{+})$, $1<p(x)<\infty$, $\lambda(x_{n})\leq\frac{\min\\{1,p(x)-1\\}}{||Dp||_{\infty,x}}$ and $\frac{\max\\{1,p(x)-1\\}}{||Dp||_{\infty,x}}\leq\Lambda(x_{n})$, satisfying $\displaystyle\limsup_{x\to y}\,u(x)\,\leq\,0\quad\textrm{for all}\quad y\in\partial\mathbb{R}^{n}_{+}$ that exceeds $\check{u}(x)$ somewhere on the $x_{n}$-axis satisfies $\displaystyle\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{\check{u}(Re_{n})}=\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{\nu^{e^{\int_{0}^{R}\lambda^{-1}(s)ds}}}>0.$ Finally, if $\lambda$ is constant then $\displaystyle\check{u}(x)=\lambda\left\\{\begin{array}[]{ll}-E_{i}\left(\log\nu\right)+E_{i}\left(e^{\lambda^{-1}x_{n}}\log\nu\right)&\text{if $0<\nu<1$},\\\ x_{n}&\text{if $\nu=1$}\end{array}\right.$ where $E_{i}$ is the Exponential integral. Proof. Since $\check{u}$ depends only on $x_{n}$ and solves (3.21) the first statement follows if we prove that the variable exponent $p(x)$-Laplace equation, with exponent $\check{p}(x)=1+Me^{\mp\int_{0}^{x_{n}}A(s)ds}$, reduces to the PDE (3.21) in one dimension. 
Since there are no derivatives in the $x^{\prime}$-directions we have $\displaystyle\Delta_{p(x)}u(x)=(p(x)-1)u^{\prime\prime}_{x_{n}x_{n}}(x)+\log|u_{x_{n}}^{\prime}(x)|p^{\prime}_{x_{n}}(x)u_{x_{n}}^{\prime}(x)=0.$ (3.31) Observe that the exponent $\check{p}(x)$ gives the unique family of $C^{1}(\mathbb{R}_{+}^{n})$ solutions to the ODE $p_{x_{n}}^{\prime}(x)=\mp(p(x)-1)A(x_{n})$ and substituting this equality into (3.31) yields $u^{\prime\prime}_{x_{n}x_{n}}(x)\mp A(x_{n})\log|u_{x_{n}}^{\prime}(x)|u_{x_{n}}^{\prime}(x)=0$ where the “$-$” sign is for $\nu\in(0,1]$ when $\log|\check{u}_{x_{n}}^{\prime}(x)|<0$. Thus $\displaystyle u^{\prime\prime}_{x_{n}x_{n}}(x)+A(x_{n})|\log|u_{x_{n}}^{\prime}(x)||u_{x_{n}}^{\prime}(x)=0$ which is (3.21) in one dimension. To prove the second statement we need to ensure that a weak subsolution of the variable exponent $\check{p}$-Laplace equation, $\check{p}(x)=1+Me^{-\int_{0}^{x_{n}}\lambda(s)^{-1}ds}$ for some $M$, of which $\check{u}$ is a solution, is a viscosity subsolution of ($\star$) where ($\star\star$) holds with the same $\Phi(s)$ and $\lambda(t)$ as in version (3.30) of Theorem 3.3. To do so we observe that, recalling Lemma 3.2, any $p(x)$-subharmonic function is a viscosity solution of $\displaystyle\Delta_{p(x)}u=\Delta u+(p(x)-2)\Delta_{\infty}u+\log|Du|\langle Dp(x),Du\rangle\geq 0,$ and hence of $\displaystyle|Dp||Du||\log|Du||+\max\\{1,p(x)-1\\}Tr((D^{2}u)^{+})-\min\\{1,p(x)-1\\}Tr((D^{2}u)^{-})\geq 0.$ Inserting $\check{p}(x)=1+Me^{-\int_{0}^{x_{n}}\lambda^{-1}(s)ds}$, $|D\check{p}|=(\check{p}(x)-1)\lambda^{-1}(x_{n})$ and assuming that $1<\check{p}(x)\leq 2$, which we may by taking $M\in(0,1]$, we see that $\displaystyle|Du||\log|Du||+\frac{\lambda(x_{n})}{Me^{-\int_{0}^{x_{n}}\lambda^{-1}(s)ds}}Tr((D^{2}u)^{+})-\lambda(x_{n})Tr((D^{2}u)^{-})\geq 0.$ This is a PDE satisfying ($\star\star$) with $\Phi(s)=s|\log s|$ and $\lambda(t)$ as in version (3.30) of Theorem 3.3. 
It remains to show that $\check{p}$ satisfies $\lambda(x_{n})\leq\frac{\min\\{1,\check{p}(x)-1\\}}{||D\check{p}||_{\infty,x}}.$ This holds with equality since $\check{p}(x)-1=Me^{-\int_{0}^{x_{n}}\lambda^{-1}(s)ds},\qquad||D\check{p}||_{\infty,x}=|\check{p}^{\prime}(x)|=\lambda(x_{n})^{-1}Me^{-\int_{0}^{x_{n}}\lambda^{-1}(s)ds}$ and we have assumed $M\in(0,1]$. The proof is complete. $\hfill\Box$ ### The case $\Phi(s)=-s|\log s|$ In this case the ODE (2.1) becomes (we suppress the $t$-dependence in $\Phi$ for simplicity) $\frac{df}{dt}=\frac{f|\log f|}{\Lambda(t)}-K\frac{\Lambda(t)}{\lambda(t)}f,\quad t\in(0,R),\quad\text{with}\quad f(0)=\nu.$ By the same argument as in the former cases we replace this ODE by $\frac{df}{dt}=\Lambda^{-1}(t)\left(f|\log f|-\widehat{K}f\right),$ which separates, and we obtain, with $\widehat{K}=K\frac{\Lambda^{2}(R)}{\lambda(R)}$, $\displaystyle f_{\nu,R}(t)=\left\\{\begin{array}[]{ll}e^{-\widehat{K}(1-e^{-\int_{0}^{t}\Lambda^{-1}(s)ds})}\nu^{e^{-\int_{0}^{t}\Lambda^{-1}(s)ds}}&\text{if $0<\nu\leq 1$},\\\ e^{-\widehat{K}\left(e^{\int_{0}^{t}\Lambda^{-1}(s)ds}-1\right)}\nu^{e^{\int_{0}^{t}\Lambda^{-1}(s)ds}}&\text{if $1<f(t)$}\end{array}\right.$ (3.34) and $\displaystyle f_{\nu}(t)=\left\\{\begin{array}[]{ll}\nu^{e^{-\int_{0}^{t}\Lambda^{-1}(s)ds}}&\text{if $0<\nu\leq 1$},\\\ \nu^{e^{\int_{0}^{t}\Lambda^{-1}(s)ds}}&\text{if $1<\nu$}.\end{array}\right.$ (3.37) The limits become, for $0<\nu\leq 1$, $\displaystyle\liminf_{R\to\infty}\frac{M^{\prime}(R)}{f_{\nu}(R)}\geq\liminf_{R\to\infty}\frac{f_{\nu,R}(R)}{f_{\nu}(R)}\geq\lim_{R\to\infty}e^{-\widehat{K}}$ and we only need $\widehat{K}\precsim 1$. When $\nu>1$ we know that $f_{\nu,R}$ in (3.34) stays above 1 if $\left(\log\nu-\widehat{K}\right)e^{\int_{0}^{t}\Lambda^{-1}(s)ds}>-\widehat{K}$, which forces us to take $\widehat{K}<\log\nu$. 
Then $\displaystyle\liminf_{R\to\infty}\frac{M^{\prime}(R)}{f_{\nu}(R)}\geq\liminf_{R\to\infty}\frac{f_{\nu,R}(R)}{f_{\nu}(R)}=\lim_{R\to\infty}e^{-\widehat{K}\left(e^{\int_{0}^{R}\Lambda^{-1}(s)ds}-1\right)}$ and we also need $\widehat{K}e^{\int_{0}^{R}\Lambda^{-1}(s)ds}\precsim 1$ to achieve a growth estimate. We have defined $\widehat{K}(R)=\frac{n\Lambda(R)^{2}}{\lambda(R)\gamma(R)}$ in this case. However, from (3.34) it can be shown that $f_{\nu,R}(t)$ is nondecreasing if $\widehat{K}\leq|\log(\nu)|$. This means that $\frac{df}{dt}=\Lambda^{-1}(t)\left(f|\log f|-\widehat{K}f\right)\geq 0,\qquad t\in(0,R).$ Now, we let $f_{\nu,R}$ solve this ODE in place of (2.1) and in the proof of Lemma 2.2 we replace (2) with $\displaystyle\text{Tr}\left(\left(D^{2}V\right)^{+}\right)$ $\displaystyle=-\Lambda(\Xi)^{-1}\Phi\left(\Xi,f\left(\Xi\right)\right)-\Lambda(\Xi)^{-1}\widehat{K}f(\Xi)+\frac{n-1}{|(x^{\prime},x_{n}+\gamma)|}f(\Xi),$ $\displaystyle\text{Tr}\left(\left(D^{2}V\right)^{-}\right)$ $\displaystyle=0.$ By tracing the remaining part of the proof of Lemma 2.2 we realize that it is enough to pick $\widehat{K}(R)=\frac{n\Lambda(R)}{\gamma(R)}$. As in the former situation the solution of (2.4) with $\Phi(s)=-s|\log s|$ can be calculated analytically when $\Lambda(t)=constant$: $\displaystyle\check{u}(x)=\int_{0}^{x_{n}}f_{\nu}(s)ds=\Lambda\left\\{\begin{array}[]{ll}E_{i}\left(\log\nu\right)-E_{i}\left(e^{-\Lambda^{-1}x_{n}}\log\nu\right)&\text{if $0<\nu<1$},\\\ x_{n}&\text{if $\nu=1$},\\\ -E_{i}\left(\log\nu\right)+E_{i}\left(e^{\Lambda^{-1}x_{n}}\log\nu\right)&\text{if $1<\nu$}.\end{array}\right.$ (3.41) See Figure 3 (lower row) for functions $f_{\nu}$ in (3.37) and $\check{u}$ in (3.41). Now, using the calculations above (3.27) we see that $\Delta_{p(x)}u\geq 0$ implies $\displaystyle\max\\{1,p(x)-1\\}\text{Tr}((D^{2}u)^{+})-\min\\{1,p(x)-1\\}\text{Tr}((D^{2}u)^{-})+|Dp||Du|\cos\theta\log|Du|\geq 0$ where $\theta=\theta(x)$ is the angle between $Du$ and $Dp$. 
Assume $|Dp||\cos\theta|>0$ and divide the PDE by this factor to obtain $\displaystyle\Lambda(x_{n})\text{Tr}((D^{2}u)^{+})-\lambda(x_{n})\text{Tr}((D^{2}u)^{-})+\frac{\cos\theta}{|\cos\theta|}|Du|\log|Du|\geq 0$ where $\lambda(x_{n})\leq\frac{\min\\{1,p(x)-1\\}}{|Dp||\cos\theta|}$ and $\Lambda(x_{n})\geq\frac{\max\\{1,p(x)-1\\}}{|Dp||\cos\theta|}$ for some nonincreasing function $\lambda$ and nondecreasing function $\Lambda$. Assuming $\cos\theta\log|Du|\leq 0$ leads to $\displaystyle\Lambda(x_{n})\text{Tr}((D^{2}u)^{+})-\lambda(x_{n})\text{Tr}((D^{2}u)^{-})-|Du||\log|Du||\geq 0$ and we can apply the results from (3.37)-(3.41) and Lemma 3.2 to obtain: ###### Corollary 3.5 Suppose that $p(x)$ and $u$ are as in Theorem 3.3. Let $\theta=\theta(x)$ be the angle between $Dp$ and $Du$ and assume that $|Dp||\cos\theta|>0\qquad\text{and}\qquad\cos\theta\log|Du|\leq 0$ holds in $\mathbb{R}^{n}_{+}$ (in a suitable weak sense if $u$ is not $C^{1}$ with $|Du|\neq 0$). Then $u$ is a subsolution of an equation of type ($\star$) satisfying ($\star\star$) with $\Phi(s)=-s|\log s|$, $\lambda(x_{n})\leq\frac{\min\\{1,p(x)-1\\}}{|Dp||\cos\theta|}$ and $\Lambda(x_{n})\geq\frac{\max\\{1,p(x)-1\\}}{|Dp||\cos\theta|}$ for some nonincreasing function $\lambda$ and nondecreasing function $\Lambda$. 
If $\frac{n\Lambda(R)}{|\log\nu|}<\gamma(R)$ and $\check{u}(x)=\int_{0}^{x_{n}}f_{\nu}(s)ds$ with $f_{\nu}$ from (3.37) then the following is true: * • If $u\geq\check{u}$ somewhere on the $x_{n}$-axis and $\nu\in(0,1)$, then $\displaystyle\liminf_{R\to\infty}\,\frac{M(R)}{R}>0.$ * • If $u\geq\check{u}$ somewhere on the $x_{n}$-axis, $\nu>1$ and $\Lambda(R)e^{\int_{0}^{R}\Lambda^{-1}(s)ds}\precsim\gamma(R)$, then $\displaystyle\liminf_{R\to\infty}\,\frac{M^{\prime}(R)}{\nu^{e^{\int_{0}^{R}\Lambda^{-1}(s)ds}}}>0$ which, if $\Lambda=constant$ and $E_{i}$ is the Exponential integral, implies $\displaystyle\liminf_{R\to\infty}\,\frac{M(R)}{E_{i}\left(e^{\Lambda^{-1}R}\log\nu\right)-E_{i}\left(\log\nu\right)}>0.$ In the one-dimensional case Corollary 3.5 shows that if we know that the exponent $p(x)$ is increasing, $p^{\prime}>0$, and that the subsolution satisfies $0<u^{\prime}<1$, then $\liminf_{R\to\infty}u(R)/R>0$, which is much stronger than the growth estimates that can be derived from Theorem 3.3. Similarly, if we know that $p^{\prime}<0$ and $1<u^{\prime}$ then $\liminf_{R\to\infty}u(R)/{\nu^{e^{\int_{0}^{R}\Lambda^{-1}(s)ds}}}>0$. These improvements can be visualized by comparing the right panels in Figure 3; the upper right panel corresponds to Theorem 3.3 while the lower right panel corresponds to the results in Corollary 3.5. Figure 3: Functions $f_{\nu}$ in (3.17) and $\check{u}$ in (3.25) (upper row), $f_{\nu}$ in (3.37) and $\check{u}$ in (3.41) (lower row). In all simulations, $A=\Lambda=1$. ## 4 Connections with nonlinear diffusion problems We follow the presentation in Lundström [31] and let $u$ denote the density of some quantity in equilibrium, let $\Omega$ be a domain and $E\subset\Omega$ be a $C^{1}$-domain so that the divergence theorem can be applied. 
Due to the equilibrium, the net flux of $u$ through $\partial E$ is zero, that is $\displaystyle\oint\limits_{\partial E}\langle\boldsymbol{F},\boldsymbol{n}\rangle\,ds\,=\,0,$ where $\boldsymbol{F}$ denotes the flux density, $\boldsymbol{n}$ the normal to $\partial E$ and $ds$ is the surface measure. The divergence theorem gives $\displaystyle\int\limits_{E}\nabla\cdot\boldsymbol{F}\,dx\,=\,\oint\limits_{\partial E}\langle\boldsymbol{F},\boldsymbol{n}\rangle\,ds\,=\,0.$ Since $E$ was arbitrary, we conclude $\displaystyle\nabla\cdot\boldsymbol{F}\,=\,0\quad\mbox{in}\quad\Omega.$ (4.1) In many situations it is physically reasonable to assume that the flux vector $\boldsymbol{F}$ and the gradient $Du$ are related by a power law of the form $\displaystyle\boldsymbol{F}\,=\,-c\,|Du|^{\,q}\,Du,$ (4.2) for some factor $c$ and exponent $q$, which may depend on space as well. One reason is that flow is usually from regions of higher concentration to regions of lower concentration. From this assumption, with $q=p-2$, and from (4.1), we obtain the $p$-Laplace equation $\displaystyle\nabla\cdot(|Du|^{p-2}Du)\,=\,0\quad\mbox{in}\quad\Omega.$ The linear case $p=2$ in (4.2) arises as a physical law in the following situations: if $u$ denotes a chemical concentration, then it is the well-known Fick's law of diffusion; if $u$ denotes a temperature, then it is Fourier's law of heat conduction; if $u$ denotes electrostatic potential, then it is Ohm's law of electrical conduction; and if $u$ denotes pressure, then it is Darcy's law of fluid flow through a porous medium. A problem involving the nonlinear case $p\neq 2$ is fast/slow diffusion of sandpiles, see Aronsson–Evans–Wu [4]. In that case $p$ is very large and $u$ models the height of a growing sandpile. If $|Du|>1+\delta$ for some $\delta>0$, then $|Du|^{p-2}$ is very large, and hence the transport of sand is also large, and if $|Du|<1-\delta$, then $|Du|^{p-2}$ is very small. 
Therefore, sand particles added to a sandpile accumulate as long as the slope of the pile does not exceed one. If the slope exceeds one, then the sand becomes unstable and instantly slides. Other applications in which (4.2) arises with $p\neq 2$ are Hele-Shaw flow of power-law fluids (Aronsson–Janfalk [5], Fabricius–Manjate–Wall [14]) and electro-rheological fluids (Harjulehto–Hästö–Lê–Nuortio [17]). When properties of the quantity under investigation depend on space, we may model them by a variable exponent $p=p(x)$ in (4.2) and thus arrive at equations of type (3.26) studied in Section 3. We will now discuss the problem under investigation from the point of view of a diffusion problem. Indeed, we will briefly explain, through spatially dependent diffusion, why parts of our results presented in Theorem 3.3 hold true. Suppose that $u$ denotes the density of some quantity at equilibrium in the $n$-dimensional halfspace $\\{x_{n}>0\\}$ and that (4.2) holds with a variable exponent $p(x_{n})$, $1<p(x_{n})<\infty$. Assume also that $u=0$ on the boundary $x_{n}=0$ and that $u(x)>0$ at some $x_{n}=a>0$. We conclude that $u$ then satisfies the $p(x)$-Laplace equation (3.26) in the halfspace and that our results apply. We simplify by further assuming that the concentration $u(x)$ is independent of the $x^{\prime}$-directions. Since $|Du|$ must be positive there is a flux of $u$, independent of $x^{\prime}$, flowing perpendicularly through the plane at $x_{n}=a$ toward the boundary $x_{n}=0$. Due to the equilibrium, the flux must also be independent of $x_{n}$ and is therefore constant through the halfspace. Since the problem is from here on independent of $x^{\prime}$, we drop the index and write $x=x_{n}$ in what follows. Figure 4: Examples of how the concentration $u$ may depend on $x$ for decreasing exponents (left) and increasing exponents (right). 
The slope explodes or vanishes as $p(x)\to 1$: The green dashed curves correspond to an exponent $p(x)$ that approaches 1 as $x$ increases (left) and as $x\to 0$ (right). The slope approaches 1 as $p(x)\to\infty$, i.e. fast/slow diffusion: The red solid curves correspond to an exponent $p(x)$ that becomes very large as $x\to 0$ (left) and as $x$ increases (right). Suppose that $p(x)$ is decreasing. As the flux of $u$, given by assumption (4.2), is constant, the concentration $u$ must be convex (upwards) if $|Du|=u^{\prime}>1$. Indeed, if $u^{\prime}>1$ near the boundary we locally have that (4.2) yields the flux ${\bf F}=-c\,(u^{\prime})^{p(x)-1}$ and since $p(x)-1>0$ is decreasing it follows that $u^{\prime}$ must be increasing. A similar reasoning explains that if $u^{\prime}=1$ somewhere then the flux satisfies $|{\bf F}|=c$, implying $u(x)=x$, and if $u^{\prime}<1$ then $u$ must be concave. Figure 4(left) shows examples of how the concentration $u(x)$ may depend on $x$ for two different decreasing exponents. We remark that if $p(x)$ becomes very large near the boundary then $u^{\prime}$ must be very close to 1 there, otherwise the flux becomes zero or infinity – that is fast/slow diffusion (red solid curve). Similarly, if $p(x)$ comes close to 1 as we move into the domain then $u^{\prime}$ must grow fast, if $u^{\prime}$ was ever larger than 1 along the curve, in order to keep the flux constant (green dashed curve). Finally, we realize that if $p(x)$ becomes constant then $u^{\prime}$ becomes constant (recall that $u(x)=cx$ is $p$-harmonic when $p=constant$). Suppose now instead that $p(x)$ is increasing. Reasoning as before, we realize that we may switch the conclusions made near the boundary in the former case with those made further away into the domain. Thus fast/slow diffusion may occur away from the boundary and in such a case the slope of $u(x)$ must approach 1. If $p(x)$ approaches 1 near the boundary then $u^{\prime}$ must explode there, see Figure 4(right). 
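The qualitative picture above can be reproduced by direct computation: at constant flux of magnitude $F$, assumption (4.2) with $q=p(x)-2$ and the normalization $c=1$ gives $u^{\prime}(x)=F^{1/(p(x)-1)}$, so for a decreasing exponent the slope increases when $F>1$ and decreases when $F<1$. A minimal sketch (the exponent and flux values are illustrative choices):

```python
import math

def slope(F, p):
    """u'(x) from constant flux magnitude F = (u')^(p-1); (4.2) with c = 1, u' > 0."""
    return F ** (1.0 / (p - 1.0))

def p_of_x(x):
    """An illustrative decreasing exponent with p -> 1 far from the boundary."""
    return 1.0 + 2.0 * math.exp(-x)

xs = [0.0, 1.0, 2.0, 3.0]
fast = [slope(2.0, p_of_x(x)) for x in xs]   # F > 1: u' > 1 and increasing (convex u)
slow = [slope(0.5, p_of_x(x)) for x in xs]   # F < 1: u' < 1 and decreasing (concave u)

assert all(s > 1.0 for s in fast) and fast == sorted(fast)
assert all(s < 1.0 for s in slow) and slow == sorted(slow, reverse=True)
assert slope(1.0, p_of_x(0.0)) == 1.0        # F = 1 forces u' = 1, i.e. u(x) = x
```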
Returning to (3.25), (3.41) and (3.31) in Section 3 we find that the one-dimensional $p(x)$-Laplace equation reads $\displaystyle\Delta_{p(x)}u(x)=(p(x)-1)u^{\prime\prime}(x)+\log|u^{\prime}(x)|p^{\prime}(x)u^{\prime}(x)=0,$ and with the decreasing exponent $p(x)=1+Me^{-Ax},$ where $M>0$ and $A>0$ are constants, the solution reads $\displaystyle u(x)=\frac{1}{A}\left\\{\begin{array}[]{ll}-E_{i}\left(\log\nu\right)+E_{i}\left(e^{Ax}\log\nu\right)&\text{if $\nu\neq 1$},\\\ x&\text{if $\nu=1$}.\end{array}\right.$ (4.5) Similarly, with the increasing exponent $p(x)=1+Me^{Ax}$ the solution reads $\displaystyle u(x)=\frac{1}{A}\left\\{\begin{array}[]{ll}E_{i}\left(\log\nu\right)-E_{i}\left(e^{-Ax}\log\nu\right)&\text{if $\nu\neq 1$},\\\ x&\text{if $\nu=1$}.\end{array}\right.$ (4.8) With $A=\lambda=\Lambda=1$, solution curves for the decreasing exponent in (4.5) are plotted in Figure 3 (upper right, below the line $u=x$) and (lower right, above the line $u=x$), and solution curves for the increasing exponent in (4.8) are plotted in Figure 3 (upper right, above the line $u=x$) and (lower right, below the line $u=x$). Compare the structure of these curves to those in Figure 4 with the properties of the exponent $p(x)$ in mind. Acknowledgement. This work was partially supported by the Swedish Research Council grant 2018-03743. ## References * [1] Adamowicz T. _Phragmén-Lindelöf theorems for equations with nonstandard growth_ , Nonlinear Analysis: Theory, Methods and Applications 97, (2014), 169–184. * [2] Ahlfors L., _On Phragmén-Lindelöf’s principle_ , Trans. Amer. Math. Soc. 41, (1937), 1–8. * [3] Armstrong S. N., Sirakov B., Smart C. K. _Singular solutions of fully nonlinear elliptic equations and applications._ Archive for Rational Mechanics and Analysis, 205(2), (2012), 345–394. * [4] Aronsson G., Evans L. C., Wu Y. _Fast/slow diffusion and growing sandpiles._ Journal of Differential Equations, 131 (2), (1996), 304–335. * [5] Aronsson G., Janfalk U. _On Hele-Shaw flow of power-law fluids_. 
European Journal of Applied Mathematics 3.4, (1992) 343–366. * [6] Avelin B., Julin V. _A Carleson type inequality for fully nonlinear elliptic equations with non-Lipschitz drift term._ Journal of Functional Analysis 272.8, (2017), 3176–3215. * [7] Bhattacharya T., _On the behaviour of infinity-harmonic functions on some special unbounded domains_ , Pacific Journal of Mathematics 219, no 2, (2005), 237–253. * [8] Braga J. E. M., Moreira D. _Classification of Nonnegative g-Harmonic Functions in Half-Spaces._ Potential Analysis, (2020), 1–19. * [9] Caffarelli L.A., Cabre X., _Fully nonlinear Elliptic equations_. American Mathematical Society Colloquium Publications, 43. American Mathematical Society, Providence, RI, 1995. * [10] Capuzzo-Dolcetta I., Vitolo A., _A qualitative Phragmén-Lindelöf theorem for fully nonlinear elliptic equations_ , J. Differential Equations 243, no 2, (2007), 578–592. * [11] Chen Y., Levine S., Rao M. _Variable exponent, linear growth functionals in image restoration_. SIAM J. Appl. Math. 66 (4) (2006), 1383–1406 * [12] Crandall M. G., Ishii H., Lions P.-L. _User’s guide to viscosity solutions of second order partial differential equations_ , Bulletin of the American Mathematical Society, 27 (1992), 1–67. * [13] Diening L., Růžička M. _Strong solutions for generalized Newtonian fluids._ J. Math. Fluid Mech. 7 (2005), 413–450 * [14] Fabricius J., Manjate S., Wall P. _On pressure-driven hele-shaw flow of power-law fluids_ preprint 2021. * [15] Gilbarg D., _The Phragmén-Lindelöf theorem for elliptic partial differential equations_ , J. Rational Mech. Anal. 1, (1952), 411–417. * [16] Granlund S., Marola N. _Phragmén-Lindelöf theorem for infinity harmonic functions_ , Commun. Pure Appl. Anal. 14 (1), (2016), 127–132. * [17] Harjulehto P., Hästö P., Lê U. V., Nuortio M. _Overview of differential equations with non-standard growth_. Nonlinear Analysis: Theory, Methods and Applications, 72(12), (2010), 4551–4574. * [18] Herzog J. O. 
_Phragmen–Lindelöf Theorems for Second Order Quasi-Linear Elliptic Partial Differential Equations_ , Proceedings of the American Mathematical Society 15, No. 5, (1964), 721–728. * [19] Hopf E. _Remark on a preceding paper of D. Gilbarg_ , J. Ration. Mech. Anal. 1, (1952), 419–424 * [20] Horgan C. O., _Decay estimates for boundary-value problems in linear and nonlinear continuum mechanics_ , in: Mathematical Problems in Elasticity, in: Ser. Adv. Math. Appl. Sci., 38, World Sci. Publ, River Edge, NJ, (1996), 47–89. * [21] Jin Z., Lancaster K., _A Phragmén-Lindelöf theorem and the behavior at infinity of solutions of non-hyperbolic equations_ , Pacific journal of mathematics 211, no 1, (2003), 101–121. * [22] Julin V. _Generalized Harnack inequality for nonhomogeneous elliptic equations._ Archive for Rational Mechanics and Analysis 2.216 (2015), 673–702. * [23] Julin V., Juutinen P. _A new proof for the equivalence of weak and viscosity solutions for the p-Laplace equation_ , Communications in Partial Differential Equations 37.5 (2012), 934–946. * [24] Juutinen P., Lindqvist P., Manfredi J. J., _On the equivalence of viscosity solutions and weak solutions for a quasi-linear equation_ , SIAM journal on mathematical analysis 33, no 3, (2001), 699–717. * [25] Juutinen P., Lukkari T., Parviainen M. _Equivalence of viscosity and weak solutions for the $p(x)$-Laplacian_, Annales de l’Institut Henri Poincare (C) Non Linear Analysis. 27. No. 6. Elsevier Masson, 2010. 1471–1487. * [26] Koike S., Nakagawa K. _Remarks on the Phragmén-Lindelöf theorem for viscosity solutions of fully nonlinear PDEs with unbounded ingredients_. Electronic Journal of Differential Equations (EJDE)[electronic only] 2009 (2009): Paper-No. * [27] Kurta V. V., _Phragmén-Lindelöf theorems for second-order quasilinear elliptic equations_ , (Russian) Ukrain. Mat. Zh. 44, no 10 (1992), 1376–1381; translation in Ukrainian Math. J. 44, no 10 (1992), 1262–1268 (1993). * [28] M. Leseduarte, M. 
Carme, R. Quintanilla, _Phragmén-Lindelöf alternative for the Laplace equation with dynamic boundary conditions_ , Journal of Applied Analysis and Computation 7.4, (2017), 1323–1335. * [29] Lindqvist P., _On the growth of the solutions of the differential equation $\nabla\cdot(|\nabla u|^{p-2}\nabla u)=0$ in $n$-dimensional space_, Journal of Differential Equations, 58, (1985), 307–317. * [30] Lundberg E., Weitsman A. _On the growth of solutions to the minimal surface equation over domains containing a halfplane._ Calculus of Variations and Partial Differential Equations, 54(4), (2015), 3385–3395. * [31] Lundström N. L. P., _$p$-harmonic functions near the boundary_ , Doctoral Thesis, ISSN 1102-8300, ISBN 978-91-7459-287-0, Umeå 2011. * [32] Lundström N. L. P., _Phragmén-Lindelöf Theorems and p-harmonic Measures for Sets Near Low-dimensional Hyperplanes_ , Potential Analysis, 44, (2016), 313–330. * [33] Lundström N. L. P., Olofsson M., Toivanen O. _Strong maximum principle and boundary estimates for nonhomogeneous elliptic equations._ arXiv preprint arXiv:2005.03338 (2020). * [34] Lundström N. L. P., Singh J. _Estimates of p-harmonic functions in planar sectors._ arXiv preprint arXiv:2111.02721 (2021). * [35] Medina M., Ochoa P. _On viscosity and weak solutions for non-homogeneous p-Laplace equations_ , Advances in Nonlinear Analysis 8 (2017), 468–481. * [36] Miller K. _Extremal barriers on cones with Phragmen–Lindelöf theorems and other applications_. Annali di Matematica Pura ed Applicata, 90(1), (1971), 297–329. * [37] Phragmén E., Lindelöf E., _Sur une extension d’un principe classique de l’analyse et sur quelques propriétés des fonctions monogènes dans le voisinage d’un point singulier_ , Acta Math. 31, no 1, (1908), 381–406. * [38] Quintanilla R., _Some theorems of Phragmén-Lindelöf type for nonlinear partial differential equations_ , Publ. Mat 37, (1993), 443–463. * [39] Serrin J., _On the Phragmén-Lindelöf principle for elliptic differential equations_ , J.
Rational Mech. Anal. 3, (1954), 395–413. * [40] Vitolo A., _On the Phragmén-Lindelöf principle for second-order elliptic equations_ , J. Math. Anal. Appl. 300, no 1, (2004), 244–259.
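As a numerical sanity check on the closed-form solution (4.5) above (a sketch added here, not part of the original analysis): differentiating (4.5) gives $u^{\prime}(x)=\nu^{e^{Ax}}$, and with $p(x)=1+Me^{-Ax}$ the $p(x)$-Laplace equation reduces to $u^{\prime\prime}=Au^{\prime}\log u^{\prime}$, independent of $M$. The snippet below verifies this identity by a central finite difference; the values $A=1$ and $\nu=2$ are arbitrary test parameters.

```python
import math

A, NU = 1.0, 2.0  # arbitrary test parameters; NU plays the role of nu in (4.5)

def du(x):
    """u'(x) = nu**exp(A*x), the derivative of the solution in (4.5)."""
    return NU ** math.exp(A * x)

# Check the reduced ODE u'' = A * u' * log(u') at a sample point,
# approximating u''(x) by a central finite difference of du.
x, h = 0.3, 1e-5
d2u_numeric = (du(x + h) - du(x - h)) / (2 * h)
d2u_ode = A * du(x) * math.log(du(x))
assert abs(d2u_numeric - d2u_ode) / abs(d2u_ode) < 1e-6
```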
# Enhancing the accuracy of a data-driven reconstruction of bivariate jump-diffusion models with corrections for higher orders of the sampling interval Esra Aslim1,2, Thorsten Rings1,2, Lina Zabawa1,2 and Klaus Lehnertz1,2,3 1 Department of Epileptology, University of Bonn Medical Centre, Venusberg Campus 1, 53127 Bonn, Germany 2 Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14–16, 53115 Bonn, Germany 3 Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53175 Bonn, Germany<EMAIL_ADDRESS> ###### Abstract We evaluate the significance of a recently proposed bivariate jump-diffusion model for a data-driven characterization of interactions between complex dynamical systems. For various coupled and non-coupled jump-diffusion processes, we find that the inevitably finite sampling interval of time-series data negatively affects the reconstruction accuracy of higher-order conditional moments that are required to reconstruct the underlying jump-diffusion equations. We derive correction terms for conditional moments in higher orders of the sampling interval and demonstrate their suitability to strongly enhance the data-driven reconstruction accuracy. * Keywords: nonlinear dynamics, stochastic processes, interactions, higher-order corrections ## 1 Introduction The problem of reliably characterizing interactions between complex dynamical systems pervades many scientific fields. Since real-world systems quite often impose restrictions on standard approaches, linear and nonlinear time-series-analysis techniques have been developed that allow one to estimate the strength, the direction, and the functional form of an interaction from pairs of time series of appropriate system observables.
Given that interactions can manifest themselves in various aspects of the dynamics, analysis techniques have been developed in diverse fields such as statistics, synchronization theory, nonlinear dynamics, information theory, and statistical physics (for an overview, see [1, 2, 3, 4, 5, 6]). Most of these techniques specifically concentrate on the (low-dimensional) deterministic part of the dynamics and have shown interactions to impact on amplitudes, phases, frequencies – or even combinations thereof – as well as on trajectories in the respective phase spaces. Many natural systems, however, exhibit both deterministic and stochastic features, even if the underlying dynamics are deterministic. Stochastic features in a system’s dynamics may arise from a high number of degrees of freedom, from random forcing, and/or from nonlinear couplings. If the systems’ dynamics can be described sufficiently by the Langevin equation, the first- and second-order Kramers-Moyal (KM) coefficients estimated from time-series data [7, 8] can serve as an indicator for interactions between stochastic processes [9, 10, 11]. A more general ansatz that accounts for the presence of additive as well as multiplicative diffusive fluctuations and discontinuous jump contributions in the time-series data [12, 13, 14] has been proposed only recently [15], namely a bivariate jump-diffusion model. It consists of two-dimensional diffusion and two-dimensional jumps that can be coupled to one another. Such a model could improve theoretical modeling of time-series data e.g., in the neurosciences [16, 17], condensed matter physics [18, 19, 20], ecology [21, 22], or finance [23]. When analyzing empirical time-series data, one is faced with the issue of an inevitably finite sampling interval $\mathop{}\\!\mathrm{d}\hskip 1.0ptt$, which not only influences the first- and second-order KM coefficients [24] but also causes non-vanishing higher-order ($>2$) ones.
For (one-dimensional) jump-diffusion processes, additional influences need to be taken into account [13]: jump events induce terms of order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ in the conditional moments of even orders and the jump rate and amplitude induce terms of order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2})$ in all conditional moments. Here we extend these studies and investigate the data-driven reconstruction of stochastic dynamical equations underlying interacting jump-diffusion processes with finite sampling interval. We will show that in these cases, corrections for higher orders of the sampling interval strongly enhance reconstruction accuracy. The outline of this paper is as follows. In section 2, we recall the definition of a bivariate jump-diffusion model, and we define our scale-independent measure to assess the accuracy of the reconstruction of conditional moments from time-series data. In section 3, we first illustrate the reconstruction of conditional moments of various jump-diffusion models with and without couplings from time-series data with finite sampling interval, thereby emphasizing the necessity for corrections for higher orders of the sampling interval. We then derive these corrections and demonstrate their suitability to enhance reconstruction accuracy. Finally, in section 4 we draw our conclusions. ## 2 Methods ### 2.1 Bivariate jump-diffusion model A bivariate jump-diffusion process consists of two-dimensional diffusion and two-dimensional jumps that can be coupled to one another.
It can be modeled via [12, 8, 15] $\displaystyle\qquad\quad\begin{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptx_{1}(t)\\\ \mathop{}\\!\mathrm{d}\hskip 1.0ptx_{2}(t)\end{pmatrix}$ $\displaystyle=\begin{pmatrix}h_{1}\\\ h_{2}\end{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptt+\begin{pmatrix}g_{1,1}&g_{1,2}\\\ g_{2,1}&g_{2,2}\end{pmatrix}\begin{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptW_{1}\\\ \mathop{}\\!\mathrm{d}\hskip 1.0ptW_{2}\end{pmatrix}$ $\displaystyle+\begin{pmatrix}\xi_{1,1}&\xi_{1,2}\\\ \xi_{2,1}&\xi_{2,2}\end{pmatrix}\begin{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptJ_{1}\\\ \mathop{}\\!\mathrm{d}\hskip 1.0ptJ_{2}\end{pmatrix}.$ (1) The drift is a two-dimensional vector $\boldsymbol{h}=(h_{1},h_{2})$ with $\boldsymbol{h}\in\mathbb{R}^{2}$, where each dimension of $\boldsymbol{h}$, i.e., $h_{i}$, may depend on state variables $x_{1}(t)$ and $x_{2}(t)$. The diffusion enters through a matrix ${\boldsymbol{g}\in\mathbb{R}^{2\times 2}}$, whose diagonal elements comprise the diffusion coefficients of self-contained stochastic diffusive processes. The off-diagonal elements represent interdependencies between the two Wiener processes $\boldsymbol{W}=(W_{1},W_{2})$, i.e., they result from an interaction between the two processes. The Wiener processes act as independent Brownian noises for the state variables with $\mathopen{}\mathclose{{}\left\langle\mathop{}\\!\mathrm{d}\hskip 1.0ptW_{i}}\right\rangle=0,\mathopen{}\mathclose{{}\left\langle\mathop{}\\!\mathrm{d}\hskip 1.0ptW_{i}^{2}}\right\rangle=\mathop{}\\!\mathrm{d}\hskip 1.0ptt,\forall i$. The discontinuous jump terms are contained in $\boldsymbol{\xi}\in\mathbb{R}^{2\times 2}$ and $\boldsymbol{\mathop{}\\!\mathrm{d}\hskip 1.0ptJ}\in\mathbb{N}^{2}$, where $\boldsymbol{\mathop{}\\!\mathrm{d}\hskip 1.0ptJ}$ represents a two-dimensional Poisson process. These are Poisson-distributed jumps with an average jump rate $\boldsymbol{\lambda}\in\mathbb{R}^{2}$ in unit time $t$.
The expected number of jumps of each jump process $J_{i}$ in a timespan $t$ is $\lambda_{i}t$. The jump amplitudes $\boldsymbol{\xi}$ are Gaussian distributed with zero mean and standard deviation (or size) $s_{i,j}$. We note that elements of vectors $\boldsymbol{h}$ and $\boldsymbol{\mathop{}\\!\mathrm{d}\hskip 1.0ptJ}$ as well as of matrices $\boldsymbol{g}$ and $\boldsymbol{\xi}$ may, in general, be state- and time-dependent; for convenience of notation, we omit these dependencies. The two-dimensional KM coefficients of orders $(\ell,m)$ of a bivariate process read $\displaystyle D^{(\ell,m)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ $\displaystyle=\frac{1}{(\ell+m)!}\lim_{\mathop{}\\!\mathrm{d}\hskip 1.0ptt\to 0}\\!\frac{1}{\mathop{}\\!\mathrm{d}\hskip 1.0ptt}K^{(\ell,m)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ (2) with the conditional moments $\displaystyle K^{(\ell,m)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=\mathopen{}\mathclose{{}\left\langle[x_{1}(t+\mathop{}\\!\mathrm{d}\hskip 1.0ptt)-x_{1}(t)]^{\ell}[x_{2}(t+\mathop{}\\!\mathrm{d}\hskip 1.0ptt)-x_{2}(t)]^{m}}\right\rangle\Big{|}_{\begin{subarray}{c}x_{1}(t)=x_{1}\\\ x_{2}(t)=x_{2}\end{subarray}}$ (3) which can be directly estimated from time-series data [15].
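For illustration, a minimal Euler-Maruyama sketch of equation (1) in plain Python is given below. This is an assumption-laden sketch, not the integrator used to generate the paper's data: the Poisson increments are approximated by Bernoulli draws (valid for $\lambda\,\mathrm{d}t\ll 1$), and the jump amplitudes are drawn as Gaussians with variance $s_{i,j}$, one possible reading of the jump-size weighting discussed in section 3.2.

```python
import math
import random

def simulate(h, g, s, lam, x0=(0.0, 0.0), dt=1e-3, n=1000, seed=42):
    """Euler-Maruyama sketch for the bivariate jump-diffusion model (1).

    h    : callable (x1, x2) -> (h1, h2), the drift vector
    g, s : 2x2 diffusion matrix and jump-size matrix (nested lists)
    lam  : (lambda1, lambda2), rates of the two Poisson jump processes
    Returns the sampled path as a list of (x1, x2) tuples.
    """
    rng = random.Random(seed)
    (x1, x2), path, sq = x0, [x0], math.sqrt(dt)
    for _ in range(n):
        h1, h2 = h(x1, x2)
        dW = (rng.gauss(0.0, sq), rng.gauss(0.0, sq))        # Wiener increments
        dJ = tuple(int(rng.random() < l * dt) for l in lam)  # Bernoulli approx. of dJ_i
        # Gaussian jump amplitudes; s[i][j] interpreted as the variance (assumption)
        xi = [[rng.gauss(0.0, math.sqrt(s[i][j])) for j in (0, 1)] for i in (0, 1)]
        x1 += h1 * dt + g[0][0] * dW[0] + g[0][1] * dW[1] + xi[0][0] * dJ[0] + xi[0][1] * dJ[1]
        x2 += h2 * dt + g[1][0] * dW[0] + g[1][1] * dW[1] + xi[1][0] * dJ[0] + xi[1][1] * dJ[1]
        path.append((x1, x2))
    return path

# Example: the uncoupled variant of the model in section 3.1 (c1 = c2 = 0)
path = simulate(lambda x1, x2: (-x1**3 + x1, -x2),
                g=[[0.1, 0.5], [0.3, 0.2]],
                s=[[0.2, 0.5], [0.3, 0.1]],
                lam=(0.1, 0.3))
```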
They are related to the elements of the drift vector, the elements of the diffusion matrix, and the jump components via: $\displaystyle K^{(1,0)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ $\displaystyle=h_{1}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ (4) $\displaystyle K^{(0,1)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ $\displaystyle=h_{2}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle K^{(1,1)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ $\displaystyle=\Big{[}g_{11}g_{21}+g_{12}g_{22}\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle K^{(2,0)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ $\displaystyle=\Big{[}g_{11}^{2}+s_{11}\lambda_{1}+g_{12}^{2}+s_{12}\lambda_{2}\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle K^{(0,2)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ $\displaystyle=\Big{[}g_{21}^{2}+s_{21}\lambda_{1}+g_{22}^{2}+s_{22}\lambda_{2}\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle K^{(2\ell,2m)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ $\displaystyle=\Big{[}s_{11}^{\ell}s_{21}^{m}\lambda_{1}+s_{12}^{\ell}s_{22}^{m}\lambda_{2}\Big{]}\frac{(2\ell)!}{2^{\ell}\ell!}\frac{(2m)!}{2^{m}m!}\mathop{}\\!\mathrm{d}\hskip 1.0ptt,$ with $(\ell,m)\in\mathbb{N}^{+}$. We omitted the state- and time-dependencies in the drift and diffusion functions and jump components to enhance readability. If such processes are coupled, the elements of the drift vector, the elements of the diffusion matrix, and the jump components may depend on both state variables and thus information about the coupling may be contained in the respective conditional moments. However, apart from the drift functions, there is – to our knowledge – not yet an analytical way to estimate all diffusion functions and jump components in order to characterize interactions in the diffusion and jump part of the dynamics.
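The relations above can be checked empirically in a few lines. The sketch below uses a pure-diffusion special case with constant coefficients (no jumps, so conditioning on the state is unnecessary): it draws i.i.d. increments and compares the sample moments with $h_{1}\,\mathrm{d}t$, $(g_{11}^{2}+g_{12}^{2})\,\mathrm{d}t$ and $(g_{11}g_{21}+g_{12}g_{22})\,\mathrm{d}t$. The parameter values are arbitrary.

```python
import random

random.seed(0)
dt, n = 1e-3, 500_000
h = (0.5, -0.2)               # constant drift vector
g = [[0.3, 0.1], [0.2, 0.4]]  # constant diffusion matrix, jump part switched off

sq = dt ** 0.5
k10 = k20 = k11 = 0.0
for _ in range(n):
    w1, w2 = random.gauss(0.0, sq), random.gauss(0.0, sq)  # Wiener increments
    d1 = h[0] * dt + g[0][0] * w1 + g[0][1] * w2
    d2 = h[1] * dt + g[1][0] * w1 + g[1][1] * w2
    k10 += d1; k20 += d1 * d1; k11 += d1 * d2
k10, k20, k11 = k10 / n, k20 / n, k11 / n

# Theoretical values from the relations (4), leading order in dt, no jumps
t10 = h[0] * dt                                    # K^(1,0)
t20 = (g[0][0]**2 + g[0][1]**2) * dt               # K^(2,0)
t11 = (g[0][0]*g[1][0] + g[0][1]*g[1][1]) * dt     # K^(1,1)
```

The sample moments agree with the theoretical values up to Monte-Carlo error; the first moment converges slowest, since its signal $h_{1}\,\mathrm{d}t$ is of order $\mathrm{d}t$ while the increments themselves fluctuate at order $\sqrt{\mathrm{d}t}$.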
As an alternative, one can use some optimization techniques to estimate these unknown functions [8]. ### 2.2 Assessing the accuracy of a data-driven reconstruction of conditional moments Following Refs. [9, 15], we employ a distance measure to relate theoretical and numerical results and to quantify the deviation of the obtained conditional moments from the functions employed. In order to allow a comparison of the accuracy of the reconstruction of conditional moments of order $(\ell,m)$, we use a scale-independent distance measure, $\mathcal{U}^{(\ell,m)}$, which is based on the bounded relative error of the difference between estimated and theoretical conditional moments (see A for details). $\mathcal{U}^{(\ell,m)}<1$ indicates a sufficient accuracy of the reconstruction of the conditional moment of order $(\ell,m)$. We estimate conditional moments up to orders $\ell=m=6$ from normalized time series (zero mean and unit variance) that we obtain from numerically integrating bivariate jump-diffusion equations (Euler-Maruyama scheme [25] with a sampling interval $\mathop{}\\!\mathrm{d}\hskip 1.0ptt=10^{-3}$). Time series consist of $n=10^{7}$ data points (after eliminating $5\times 10^{3}$ transients), and we ensure that individual jump numbers $n_{j}\simeq\lambda n$ vary by at most 10 % for a constant jump rate. ## 3 Results For our investigations, we consider various interacting jump-diffusion processes. We begin with reconstructing conditional moments of a bivariate jump-diffusion model with uni-directional couplings in the drift and in the diffusion from time-series data. Since interactions between jump-diffusion processes may lead to over- and underrepresented parts of the dynamics [15], we then reconstruct conditional moments of a bivariate jump-diffusion model with a disproportionally weighted drift, diffusion, and jump part from time-series data.
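The scale-independent distance measure $\mathcal{U}^{(\ell,m)}$ of section 2.2 (detailed in A) can be sketched directly from its definition; here the two-dimensional bins are flattened into parallel lists, and `p` is assumed to be a normalized weight.

```python
def u_measure(est, theo, p):
    """Unscaled mean bounded relative absolute error U^(l,m).

    est, theo : estimated and theoretical conditional moments per bin
    p         : normalized probability weights per bin (sums to 1)
    """
    r_b = sum(
        w * abs(e - t) / (abs(e - t) + abs(t))
        for w, e, t in zip(p, est, theo)
        if abs(e - t) + abs(t) > 0.0  # guard: the ratio is undefined when both vanish
    )
    return r_b / (1.0 - r_b)
```

A perfect reconstruction gives $\mathcal{U}^{(\ell,m)}=0$, while overestimating every bin by a factor of two gives exactly $1$, the boundary of "sufficient accuracy".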
Figure 1: a) Reconstruction accuracy ($\mathcal{U}^{(\ell,m)}$) for conditional moments up to order $\ell=m=6$ of a bivariate jump-diffusion model with a uni-directional coupling in the diffusion part (see equation (3.1)) for various values of coupling strength $c_{2}$. $\mathcal{U}^{(\ell,m)}<1$ (horizontal dotted line) indicates a sufficient accuracy. Medians and interquartile ranges (shaded area) derived from 50 time series generated with the respective models using random initial conditions. Lines are for eye guidance only. b) Two-dimensional theoretical conditional moments (equation (4)) up to order $\ell=m=6$ (black grid) and reconstructed ones (gray surface) from exemplary time series generated using equation (3.1) with $c_{1}=0$ and $c_{2}=100$. Reconstruction accuracy is insufficient for conditional moments of orders $(2,2)$, $(0,4)$, $(4,4)$, $(0,6)$, and $(6,6)$ ($\mathcal{U}^{(2,2)}\approx 23$, $\mathcal{U}^{(0,4)}\approx 2\,639$, $\mathcal{U}^{(4,4)}\approx 9$, $\mathcal{U}^{(0,6)}\approx 682$, and $\mathcal{U}^{(6,6)}\approx 7$). 
### 3.1 Jump-diffusion model with uni-directional couplings We examine uni-directional couplings in the drift and in the diffusion of an exemplary bivariate jump-diffusion process modeled via: $\displaystyle\begin{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptx_{1}(t)\\\ \mathop{}\\!\mathrm{d}\hskip 1.0ptx_{2}(t)\end{pmatrix}$ $\displaystyle=$ $\displaystyle\begin{pmatrix}-x_{1}^{3}+x_{1}\\\ -x_{2}+f_{1}(x_{1})\end{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptt+\begin{pmatrix}0.1&0.5\\\ 0.3&0.2+f_{2}(x_{1})\end{pmatrix}\begin{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptW_{1}\\\ \mathop{}\\!\mathrm{d}\hskip 1.0ptW_{2}\end{pmatrix}$ $\displaystyle+$ $\displaystyle\begin{pmatrix}\xi_{1,1}&\xi_{1,2}\\\ \xi_{2,1}&\xi_{2,2}\end{pmatrix}\begin{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptJ_{1}\\\ \mathop{}\\!\mathrm{d}\hskip 1.0ptJ_{2}\end{pmatrix},$ with $\displaystyle\boldsymbol{s}=\begin{pmatrix}0.2&0.5\\\ 0.3&0.1\end{pmatrix},\,\,\boldsymbol{\lambda}=\begin{pmatrix}0.1\\\ 0.3\end{pmatrix}$ and with the coupling terms $f_{1}(x_{1})=c_{1}x_{1}$ and $f_{2}(x_{1})=c_{2}x_{1}$. We vary the coupling strengths $c_{1}$ and $c_{2}$ each over four orders of magnitude and reconstruct conditional moments from 50 time series with random initial conditions and with finite sampling interval. For couplings in the drift part ($c_{1}\in[0.01,100]$; $c_{2}=0$), almost all conditional moments up to order $\ell=m=6$ can be reconstructed with sufficient accuracy (data not shown). An exception is the conditional moment $K^{(0,2)}$ with diffusion contributions of process $x_{2}$. For this moment, we observe an insufficient reconstruction accuracy ($\mathcal{U}^{(0,2)}>1$) in case of strong couplings ($c_{1}>50$). For couplings in the diffusion part ($c_{2}\in[0.01,100]$; $c_{1}=0$; see figure 1), we observe at large values of the coupling strength ($c_{2}>1$) rather strong inaccuracies for some conditional moments.
These include conditional moments with jump contributions of process $x_{2}$, namely $K^{(0,4)}$, $K^{(0,6)}$, and $K^{(i,i)}$ with $i\in\\{2,4,6\\}$ (figure 1a). A visual inspection of these conditional moments for a coupling in the diffusion part with $c_{2}=100$ (figure 1b) indicates that the theoretical moments (equation (4)) fail to account for a dependency on process $x_{1}$. This dependency on process $x_{1}$ appears similar to the dependency of $K^{(0,2)}$ on $x_{1}$, where we note that $K^{(0,2)}$ contains information about the coupling in the diffusion part (see equation (4)). ### 3.2 Jump-diffusion model with disproportionally weighted parts Figure 2: Reconstruction accuracy ($\mathcal{U}^{(\ell,m)}$) of conditional moments up to order $\ell=m=6$ of a bivariate jump-diffusion model with disproportionally weighted parts (see equation (6)) for various values of drift-, diffusion-, and jump-scaling parameters ($\alpha$, $\beta$, $\gamma$). The vertical dotted line at $\beta=g_{2,1}=0.3$ and $\gamma=0.3$ indicates the point where the diffusion-scaling parameter is equal to the jump-scaling parameter. The horizontal dotted line indicates a sufficient accuracy ($\mathcal{U}^{(\ell,m)}<1$). Medians and interquartile ranges (shaded area) derived from 50 time series generated with the respective models using random initial conditions. Lines are for eye guidance only. Next, we examine processes derived from an exemplary bivariate jump-diffusion model with disproportionally weighted parts, which we obtain by rescaling drift, diffusion, and jump dynamics. 
For the latter two, we allow for a mixing of the Wiener processes and a mixing of the Poisson processes (non-vanishing off-diagonal elements in diffusion and jump size matrix): $\displaystyle\qquad\quad\begin{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptx_{1}(t)\\\ \mathop{}\\!\mathrm{d}\hskip 1.0ptx_{2}(t)\end{pmatrix}$ $\displaystyle=\begin{pmatrix}-x_{1}^{3}+x_{1}\\\ -\alpha x_{2}\end{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptt+\begin{pmatrix}0.1&0.5\\\ \beta&0.2\end{pmatrix}\begin{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptW_{1}\\\ \mathop{}\\!\mathrm{d}\hskip 1.0ptW_{2}\end{pmatrix}$ $\displaystyle+\begin{pmatrix}\xi_{1,1}&\xi_{1,2}\\\ \xi_{2,1}&\xi_{2,2}\end{pmatrix}\begin{pmatrix}\mathop{}\\!\mathrm{d}\hskip 1.0ptJ_{1}\\\ \mathop{}\\!\mathrm{d}\hskip 1.0ptJ_{2}\end{pmatrix},$ (6) with $\displaystyle\boldsymbol{s}=\begin{pmatrix}0.2&0.5\\\ \gamma&0.1\end{pmatrix},\,\,\boldsymbol{\lambda}=\begin{pmatrix}0.1\\\ 0.3\end{pmatrix}.$ Here $\alpha$ denotes the drift-, $\beta$ the diffusion- and $\gamma$ the jump-scaling parameter. We rescale the jump size since it equals the variance of the Gaussian-distributed jump amplitudes and thus weighs the jump part of the dynamics. We vary each scaling parameter over four orders of magnitude (while keeping the others fixed) and reconstruct conditional moments of the model from 50 time series with random initial conditions and with finite sampling interval. When rescaling the drift part (with $\beta=0.3$ and $\gamma=0.3$), almost all conditional moments up to order $\ell=m=6$ can be reconstructed with sufficient accuracy (figure 2). The inaccuracy seen for $K^{(0,1)}$ for $\alpha<1$ is to be expected given our chosen bivariate jump-diffusion model. If we rescale the diffusion part (with $\alpha=1$ and $\gamma=0.3$), we observe at large values of the scaling parameter ($\beta>1$) rather strong inaccuracies for conditional moments $K^{(0,4)}$, $K^{(0,6)}$, and $K^{(i,i)}$ with $i\in\\{2,4,6\\}$ (figure 2). 
As with jump-diffusion models with uni-directional couplings (section 3.1), these conditional moments contain jump contributions of process $x_{2}$. Rescaling the jump part (with $\alpha=1$ and $\beta=0.3$) has no effect for the considered range of values here ($\gamma\in[0.01,100]$; see figure 2). ### 3.3 Corrections for higher orders of the sampling interval Our findings presented above demonstrate that uni-directional couplings in the diffusion may have a similar impact on the accuracy of the reconstruction of conditional moments as rescaling the diffusion part. Estimated conditional moments differ from the respective theoretical moments with jump contributions of process $x_{2}$ at large values of either the coupling strength or the scaling parameter. Given these observations and since we know that a finite sampling interval may have a non-negligible impact on the reconstruction of higher-order conditional moments of one-dimensional jump-diffusion models [13] from time-series data, we conjecture that a similar impact can be expected for bivariate jump-diffusion models. To test this conjecture, we derive the theoretical conditional moments for different orders of the sampling interval $\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ of a bivariate jump-diffusion model (the derivation and further expressions of conditional moments can be found in B).
With the abbreviations $\displaystyle A^{(1,0)}$ $\displaystyle=h_{1}$ $\displaystyle A^{(0,1)}$ $\displaystyle=h_{2}$ $\displaystyle B^{(1,1)}$ $\displaystyle=\frac{1}{2}\Big{[}g_{11}g_{21}+g_{12}g_{22}\Big{]}$ $\displaystyle B^{(2,0)}$ $\displaystyle=\frac{1}{2}\Big{[}g_{11}^{2}+s_{11}\lambda_{1}+g_{12}^{2}+s_{12}\lambda_{2}\Big{]}$ $\displaystyle B^{(0,2)}$ $\displaystyle=\frac{1}{2}\Big{[}g_{21}^{2}+s_{21}\lambda_{1}+g_{22}^{2}+s_{22}\lambda_{2}\Big{]}$ $\displaystyle C^{(2\ell,2m)}$ $\displaystyle=\frac{1}{(2\ell+2m)!}\Big{[}s_{11}^{\ell}s_{21}^{m}\lambda_{1}+s_{12}^{\ell}s_{22}^{m}\lambda_{2}\Big{]}\frac{(2\ell)!}{2^{\ell}\ell!}\frac{(2m)!}{2^{m}m!},$ where $(\ell,m)\in\mathbb{N}^{+}$, the theoretical conditional moments of orders $(0,4)$ and $(2,2)$ with correction terms up to order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2})$ read $\displaystyle K^{(0,4)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle 4!C^{(0,4)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}4!\big{(}B^{(0,2)}\big{)}^{2}+4!\big{(}A^{(0,1)}\partial_{x_{2}}C^{(0,4)}+A^{(1,0)}\partial_{x_{1}}C^{(0,4)}\big{)}$ $\displaystyle+4\cdot 4!C^{(0,4)}\partial_{x_{2}}A^{(0,1)}+4!\big{(}B^{(0,2)}\partial_{x_{2}}^{2}C^{(0,4)}+B^{(2,0)}\partial_{x_{1}}^{2}C^{(0,4)}$ $\displaystyle+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}C^{(0,4)}\big{)}+6\cdot 4!\big{(}C^{(0,4)}\partial_{x_{2}}^{2}B^{(0,2)}+C^{(2,2)}\partial_{x_{1}}^{2}B^{(0,2)}\big{)}$ $\displaystyle+4\cdot 5!\big{(}C^{(0,6)}\partial_{x_{2}}^{3}A^{(0,1)}+3C^{(2,4)}\partial_{x_{1}}^{2}\partial_{x_{2}}A^{(0,1)}\big{)}$ $\displaystyle+6\cdot 4!C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(0,4)}+3\cdot 6!C^{(2,4)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}B^{(0,2)}+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ and $\displaystyle K^{(2,2)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle 
4!C^{(2,2)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}\frac{4!}{3}\big{(}B^{(2,0)}B^{(0,2)}+2\big{(}B^{(1,1)}\big{)}^{2}\big{)}$ $\displaystyle+4!\big{(}A^{(1,0)}\partial_{x_{1}}C^{(2,2)}+A^{(0,1)}\partial_{x_{2}}C^{(2,2)}\big{)}$ $\displaystyle+2\cdot 4!\big{(}C^{(2,2)}\partial_{x_{1}}A^{(1,0)}+C^{(2,2)}\partial_{x_{2}}A^{(0,1)}\big{)}$ $\displaystyle+4!\big{(}B^{(2,0)}\partial_{x_{1}}^{2}C^{(2,2)}+B^{(0,2)}\partial_{x_{2}}^{2}C^{(2,2)}+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}C^{(2,2)}\big{)}$ $\displaystyle+4!\big{(}C^{(4,0)}\partial_{x_{1}}^{2}B^{(0,2)}+C^{(2,2)}\partial_{x_{2}}^{2}B^{(0,2)}+C^{(2,2)}\partial_{x_{1}}^{2}B^{(2,0)}+C^{(0,4)}\partial_{x_{2}}^{2}B^{(2,0)}$ $\displaystyle+8C^{(2,2)}\partial_{x_{1}}\partial_{x_{2}}B^{(1,1)}\big{)}+2\cdot\frac{6!}{3!}\big{(}C^{(4,2)}\partial_{x_{1}}^{3}A^{(1,0)}+3C^{(2,4)}\partial_{x_{1}}\partial_{x_{2}}^{2}A^{(1,0)}$ $\displaystyle+C^{(2,4)}\partial_{x_{2}}^{3}A^{(0,1)}+3C^{(4,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}A^{(0,1)}\big{)}+6\cdot 4!C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(2,2)}$ $\displaystyle+\frac{5!}{2!}\big{(}6\big{(}C^{(4,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}B^{(0,2)}+C^{(2,4)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}B^{(2,0)}\big{)}$ $\displaystyle+16\big{(}C^{(4,2)}\partial_{x_{1}}^{3}\partial_{x_{2}}B^{(1,1)}+C^{(2,4)}\partial_{x_{1}}\partial_{x_{2}}^{3}B^{(1,1)}\big{)}\big{)}+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3}).$ For the differential operator, we use the short notation $\partial_{x_{i}}=\frac{\partial}{\partial x_{i}}$. With $\mathcal{O}(\delta)$, we indicate all terms that contain $C^{(\ell,m)}$ of higher-order or derivatives $\partial_{x_{i}}^{j}$, $j>3$, and with $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ all terms that contain higher orders $(\geq 3)$ of the sampling interval $\mathop{}\\!\mathrm{d}\hskip 1.0ptt$. 
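The jump-induced coefficients $C^{(2\ell,2m)}$ defined above are straightforward to evaluate numerically; the sketch below uses the jump-size matrix $\boldsymbol{s}$ and rates $\boldsymbol{\lambda}$ of the model in section 3.1 as example values.

```python
import math

def C(l, m, s, lam):
    """C^(2l,2m) from the abbreviations above (l, m >= 0).

    s   : 2x2 jump-size matrix with s[i][j] = s_{i+1, j+1}
    lam : (lambda1, lambda2), the jump rates
    """
    # (2k)! / (2**k * k!) is an exact integer (the double factorial (2k-1)!!)
    wl = math.factorial(2 * l) // (2**l * math.factorial(l))
    wm = math.factorial(2 * m) // (2**m * math.factorial(m))
    bracket = s[0][0]**l * s[1][0]**m * lam[0] + s[0][1]**l * s[1][1]**m * lam[1]
    return bracket * wl * wm / math.factorial(2 * l + 2 * m)

s = [[0.2, 0.5], [0.3, 0.1]]
lam = (0.1, 0.3)
# The leading-order term of K^(2,2) above is then 4! * C(1, 1, s, lam) * dt.
```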
We note that the terms of order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2})$ can introduce drift, diffusion and jump contributions to each conditional moment. Figure 3: a) Same as figure 1b) but in addition we present theoretical conditional moments, for which we considered terms of $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2})$ (red grid). In this case, we obtain a sufficient quality of the reconstruction of conditional moments of all shown orders except for $(0,6)$, $(4,4)$ and $(6,6)$ ($\mathcal{U}_{\text{corr}}^{(0,6)}\approx 37$, $\mathcal{U}_{\text{corr}}^{(4,4)}\approx 9$ and $\mathcal{U}_{\text{corr}}^{(6,6)}\approx 7$). b) Same as a) but for a bivariate jump-diffusion model with a more weighted diffusion part (see equation (6) with $\alpha=1$, $\beta=100$ and $\gamma=0.3$). By considering terms of order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2})$ in the theoretical conditional moments, we obtain a sufficient quality of the reconstruction of conditional moments of all orders except for $(0,6)$, $(4,4)$ and $(6,6)$ ($\mathcal{U}_{\text{corr}}^{(0,6)}\approx 2\,736\,249$, $\mathcal{U}_{\text{corr}}^{(4,4)}\approx 41$, $\mathcal{U}_{\text{corr}}^{(6,6)}\approx 8$). With these correction terms, we now focus on conditional moments that are affected by a uni-directional coupling in the diffusion or a more weighted diffusion part. Already a visual inspection reveals that considering terms of order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2})$ can clearly improve the reconstruction of some conditional moments (see figure 3; $\mathcal{U}_{\text{corr}}^{(\ell,m)}$ indicates the distance measure for which correction terms were considered). Figure 4: Reconstruction accuracies with ($\mathcal{U}_{\text{corr}}^{(\ell,m)}$; red) and without ($\mathcal{U}^{(\ell,m)}$; black) integrating correction terms of order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2})$ for various conditional moments (cf. figure 1a and figure 2).
a) Bivariate jump-diffusion model with uni-directional couplings (see equation (3.1)) for various values of coupling strength $c_{2}$. b) Bivariate jump-diffusion model (see equation (6)) for various values of the diffusion-scaling parameter $\beta$. The horizontal dotted line indicates a sufficient accuracy. Medians and interquartile ranges (shaded area) derived from 50 time series generated with the respective models using random initial conditions. Lines are for eye guidance only. For the data shown in figure 3a, the correction terms of order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2})$ can be of the same or even greater magnitude than terms of order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ and thus have a non-negligible effect on the accuracy of the reconstruction of conditional moments. Particularly the accuracy of the reconstruction of moments $K^{(0,4)}$ and $K^{(2,2)}$ is considerably improved ($\mathcal{U}_{\text{corr}}^{(0,4)}<1$ and $\mathcal{U}_{\text{corr}}^{(2,2)}<1$; see figure 4) even at large values of the coupling strength or at large values of the diffusion-scaling parameter. However, we still observe inaccuracies in the reconstruction of conditional moments $K^{(0,6)}$, $K^{(4,4)}$, and $K^{(6,6)}$, and we expect that considering terms of higher order of $\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ ($\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{i})$, $i\geq 3$) will further improve the accuracy of the reconstruction of these conditional moments. ## 4 Concluding remarks We evaluate the significance of a bivariate jump-diffusion model for a data-driven characterization of interactions between complex dynamical systems. Investigating various coupled and non-coupled jump-diffusion processes, we observed strong deviations between conditional moments of the underlying jump-diffusion model and those estimated from time-series data and conjectured that these deviations result from the finiteness of the sampling interval.
We derived correction terms for conditional moments in higher orders of the sampling interval and demonstrated that these corrections strongly enhance the accuracy of a data-driven reconstruction of stochastic evolution equations from time-series data in terms of bivariate jump-diffusion models. Our findings demonstrate that drift, diffusion and jumps induce terms of order $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2})$ in all conditional moments; these contributions are most pronounced in conditional moments with jump contributions (orders $\geq 4$). A blending of all parts of the dynamics should thus be taken into account when investigating interacting jump-diffusion processes. To further enhance the significance of the bivariate jump-diffusion model for the analysis of empirical data, future studies should investigate other possible influencing factors such as measurement noise, limited observation time (finite number of data points), or the impact of indirect interactions mediated by observed/unobserved additional processes.

We are grateful to M. Reza Rahimi Tabar for constructive discussions and valuable comments.
## Appendix A Scale-independent measure to assess the accuracy of a data-driven reconstruction of conditional moments

For each two-dimensional conditional moment of order $(\ell,m)$, we consider the unscaled mean bounded relative absolute error [26]

$\displaystyle\mathcal{U}^{(\ell,m)}=\frac{\mathcal{R}_{b}^{(\ell,m)}}{1-\mathcal{R}_{b}^{(\ell,m)}},$

with the weighted average of bounded relative errors

$\displaystyle\mathcal{R}_{b}^{(\ell,m)}=\sum\limits_{i,j=1}^{B}p(x_{1,i},x_{2,j})\frac{\Big{|}\Delta_{ij}^{(\ell,m)}\Big{|}}{\Big{|}\Delta_{ij}^{(\ell,m)}\Big{|}+\Big{|}K^{(\ell,m)}(x_{1,i},x_{2,j},\mathop{}\\!\mathrm{d}\hskip 1.0ptt)\Big{|}},$

where $\Delta_{ij}^{(\ell,m)}=\hat{K}^{(\ell,m)}(x_{1,i},x_{2,j},\mathop{}\\!\mathrm{d}\hskip 1.0ptt)-K^{(\ell,m)}(x_{1,i},x_{2,j},\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ is the difference between the estimated and the theoretical conditional moment, $p(\cdot)$ is the estimated probability density, which is used as a normalized weight, and $B$ is the number of bins per dimension. $\mathcal{U}^{(\ell,m)}<1$ indicates a sufficient accuracy of the reconstruction of conditional moments of order $(\ell,m)$ in the sense that $\Delta_{ij}^{(\ell,m)}$ is on average smaller than $K^{(\ell,m)}(x_{1,i},x_{2,j},\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$. We note that the value of $\mathcal{U}^{(\ell,m)}$ becomes large or undefined if the value of the theoretical conditional moment tends to $0$, which can lead to misinterpretations. In our histogram-based investigations, we used $B=20$ bins for each dimension and considered a range of $\pm\sigma_{i}$ for each $x_{i}$. We refer to Ref. [27] for a discussion of the optimal choice of the number of bins and to Refs. [28, 29] for other, e.g., kernel-based, estimation techniques.

## Appendix B Derivation of conditional moments of bivariate jump-diffusion models for different orders of $\mathop{}\\!\mathrm{d}\hskip 1.0ptt$

We follow Refs.
[13, 8] to derive conditional moments of bivariate jump-diffusion models for different orders of $\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ using the Kramers-Moyal adjoint operator. With the abbreviations

$\displaystyle A^{(1,0)}$ $\displaystyle=h_{1}$ $\displaystyle A^{(0,1)}$ $\displaystyle=h_{2}$ $\displaystyle B^{(1,1)}$ $\displaystyle=\frac{1}{2}\Big{[}g_{11}g_{21}+g_{12}g_{22}\Big{]}$ $\displaystyle B^{(2,0)}$ $\displaystyle=\frac{1}{2}\Big{[}g_{11}^{2}+s_{11}\lambda_{1}+g_{12}^{2}+s_{12}\lambda_{2}\Big{]}$ $\displaystyle B^{(0,2)}$ $\displaystyle=\frac{1}{2}\Big{[}g_{21}^{2}+s_{21}\lambda_{1}+g_{22}^{2}+s_{22}\lambda_{2}\Big{]}$ $\displaystyle C^{(2\ell,2m)}$ $\displaystyle=\frac{1}{(2\ell+2m)!}\Big{[}s_{11}^{\ell}s_{21}^{m}\lambda_{1}+s_{12}^{\ell}s_{22}^{m}\lambda_{2}\Big{]}\frac{(2\ell)!}{2^{\ell}\ell!}\frac{(2m)!}{2^{m}m!},$

where $(\ell,m)\in\mathbb{N}^{+}$, one can find the following corrections for conditional moments up to orders $\ell=m=6$ of a bivariate jump-diffusion model:

$\displaystyle K^{(1,0)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle A^{(1,0)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}A^{(1,0)}\partial_{x_{1}}A^{(1,0)}+A^{(0,1)}\partial_{x_{2}}A^{(1,0)}$ $\displaystyle+B^{(2,0)}\partial_{x_{1}}^{2}A^{(1,0)}+B^{(0,2)}\partial_{x_{2}}^{2}A^{(1,0)}+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}A^{(1,0)}$ $\displaystyle+6C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}A^{(1,0)}+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ $\displaystyle K^{(2,0)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle 2B^{(2,0)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}2\mathopen{}\mathclose{{}\left(A^{(1,0)}}\right)^{2}+2\mathopen{}\mathclose{{}\left(A^{(1,0)}\partial_{x_{1}}B^{(2,0)}+A^{(0,1)}\partial_{x_{2}}B^{(2,0)}}\right)$
$\displaystyle+4\mathopen{}\mathclose{{}\left(B^{(2,0)}\partial_{x_{1}}A^{(1,0)}+B^{(1,1)}\partial_{x_{2}}A^{(1,0)}}\right)$ $\displaystyle+2\mathopen{}\mathclose{{}\left(B^{(2,0)}\partial_{x_{1}}^{2}B^{(2,0)}+B^{(0,2)}\partial_{x_{2}}^{2}B^{(2,0)}+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}B^{(2,0)}}\right)$ $\displaystyle+8\big{(}C^{(4,0)}\partial_{x_{1}}^{3}A^{(1,0)}+3C^{(2,2)}\partial_{x_{1}}\partial_{x_{2}}^{2}A^{(1,0)}\big{)}$ $\displaystyle+12C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}B^{(2,0)}+5!C^{(4,2)}\partial_{x_{1}}^{3}\partial_{x_{2}}^{2}A^{(1,0)}$ $\displaystyle+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ $\displaystyle K^{(4,0)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle 4!C^{(4,0)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}4!\big{(}B^{(2,0)}\big{)}^{2}+4!\big{(}A^{(1,0)}\partial_{x_{1}}C^{(4,0)}+A^{(0,1)}\partial_{x_{2}}C^{(4,0)}\big{)}$ $\displaystyle+4\cdot 4!C^{(4,0)}\partial_{x_{1}}A^{(1,0)}+4!\big{(}B^{(2,0)}\partial_{x_{1}}^{2}C^{(4,0)}+B^{(0,2)}\partial_{x_{2}}^{2}C^{(4,0)}$ $\displaystyle+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}C^{(4,0)}\big{)}+6\cdot 4!\big{(}C^{(4,0)}\partial_{x_{1}}^{2}B^{(2,0)}+C^{(2,2)}\partial_{x_{2}}^{2}B^{(2,0)}\big{)}$ $\displaystyle+4\cdot 5!\big{(}C^{(6,0)}\partial_{x_{1}}^{3}A^{(1,0)}+3C^{(4,2)}\partial_{x_{1}}\partial_{x_{2}}^{2}A^{(1,0)}\big{)}$ $\displaystyle+6\cdot 4!C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(4,0)}+3\cdot 6!C^{(4,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}B^{(2,0)}+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ $\displaystyle K^{(6,0)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle 6!C^{(6,0)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}2\cdot 
6!B^{(2,0)}C^{(4,0)}+6!\big{(}A^{(1,0)}\partial_{x_{1}}C^{(6,0)}+A^{(0,1)}\partial_{x_{2}}C^{(6,0)}\big{)}$ $\displaystyle+6\cdot 6!C^{(6,0)}\partial_{x_{1}}A^{(1,0)}+6!\big{(}B^{(2,0)}\partial_{x_{1}}^{2}C^{(6,0)}+B^{(0,2)}\partial_{x_{2}}^{2}C^{(6,0)}$ $\displaystyle+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}C^{(6,0)}\big{)}+6\cdot 6!\big{(}C^{(4,0)}\partial_{x_{1}}^{2}C^{(4,0)}+C^{(2,2)}\partial_{x_{2}}^{2}C^{(4,0)}\big{)}$ $\displaystyle+\frac{(6!)^{2}}{2!4!}\big{(}C^{(6,0)}\partial_{x_{1}}^{2}B^{(2,0)}+C^{(4,2)}\partial_{x_{2}}^{2}B^{(2,0)}\big{)}$ $\displaystyle+6\cdot\frac{8!}{3!}\big{(}C^{(8,0)}\partial_{x_{1}}^{3}A^{(1,0)}+3C^{(6,2)}\partial_{x_{1}}\partial_{x_{2}}^{2}A^{(1,0)}\big{)}$ $\displaystyle+6\cdot 6!C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(6,0)}+6\cdot\frac{(6!)^{2}}{4!2!}C^{(4,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(4,0)}$ $\displaystyle+30\cdot\frac{8!}{2!2!}C^{(6,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}B^{(2,0)}+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ $\displaystyle K^{(1,1)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle 2B^{(1,1)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}2A^{(1,0)}A^{(0,1)}+2\big{(}A^{(1,0)}\partial_{x_{1}}B^{(1,1)}+A^{(0,1)}\partial_{x_{2}}B^{(1,1)}\big{)}$ $\displaystyle+2\big{(}B^{(2,0)}\partial_{x_{1}}A^{(0,1)}+B^{(1,1)}\partial_{x_{2}}A^{(0,1)}+B^{(1,1)}\partial_{x_{1}}A^{(1,0)}+B^{(0,2)}\partial_{x_{2}}A^{(1,0)}\big{)}$ $\displaystyle+2\big{(}B^{(2,0)}\partial_{x_{1}}^{2}B^{(1,1)}+B^{(0,2)}\partial_{x_{2}}^{2}B^{(1,1)}+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}B^{(1,1)}\big{)}$ $\displaystyle+4\big{(}C^{(4,0)}\partial_{x_{1}}^{3}A^{(0,1)}+3C^{(2,2)}\partial_{x_{1}}\partial_{x_{2}}^{2}A^{(0,1)}+C^{(0,4)}\partial_{x_{2}}^{3}A^{(1,0)}$ 
$\displaystyle+3C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}A^{(1,0)}\big{)}+12C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}B^{(1,1)}$ $\displaystyle+\frac{5!}{2!}\big{(}C^{(4,2)}\partial_{x_{1}}^{3}\partial_{x_{2}}^{2}A^{(0,1)}+C^{(2,4)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{3}A^{(1,0)}\big{)}+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ $\displaystyle K^{(2,2)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle 4!C^{(2,2)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}\frac{4!}{3}\big{(}B^{(2,0)}B^{(0,2)}+2\big{(}B^{(1,1)}\big{)}^{2}\big{)}$ $\displaystyle+4!\big{(}A^{(1,0)}\partial_{x_{1}}C^{(2,2)}+A^{(0,1)}\partial_{x_{2}}C^{(2,2)}\big{)}$ $\displaystyle+2\cdot 4!\big{(}C^{(2,2)}\partial_{x_{1}}A^{(1,0)}+C^{(2,2)}\partial_{x_{2}}A^{(0,1)}\big{)}$ $\displaystyle+4!\big{(}B^{(2,0)}\partial_{x_{1}}^{2}C^{(2,2)}+B^{(0,2)}\partial_{x_{2}}^{2}C^{(2,2)}+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}C^{(2,2)}\big{)}$ $\displaystyle+4!\big{(}C^{(4,0)}\partial_{x_{1}}^{2}B^{(0,2)}+C^{(2,2)}\partial_{x_{2}}^{2}B^{(0,2)}+C^{(2,2)}\partial_{x_{1}}^{2}B^{(2,0)}+C^{(0,4)}\partial_{x_{2}}^{2}B^{(2,0)}$ $\displaystyle+8C^{(2,2)}\partial_{x_{1}}\partial_{x_{2}}B^{(1,1)}\big{)}+2\cdot\frac{6!}{3!}\big{(}C^{(4,2)}\partial_{x_{1}}^{3}A^{(1,0)}+3C^{(2,4)}\partial_{x_{1}}\partial_{x_{2}}^{2}A^{(1,0)}$ $\displaystyle+C^{(2,4)}\partial_{x_{2}}^{3}A^{(0,1)}+3C^{(4,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}A^{(0,1)}\big{)}+6\cdot 4!C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(2,2)}$ $\displaystyle+\frac{5!}{2!}\big{(}6\big{(}C^{(4,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}B^{(0,2)}+C^{(2,4)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}B^{(2,0)}\big{)}$ $\displaystyle+16\big{(}C^{(4,2)}\partial_{x_{1}}^{3}\partial_{x_{2}}B^{(1,1)}+C^{(2,4)}\partial_{x_{1}}\partial_{x_{2}}^{3}B^{(1,1)}\big{)}\big{)}+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 
1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ $\displaystyle K^{(4,4)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle 8!C^{(4,4)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}12\cdot 6!\big{(}B^{(2,0)}C^{(2,4)}+B^{(0,2)}C^{(4,2)}\big{)}$ $\displaystyle+2\cdot 4!4!\big{(}C^{(4,0)}C^{(0,4)}+18\big{(}C^{(2,2)}\big{)}^{2}\big{)}$ $\displaystyle+12\cdot 6!\big{(}C^{(2,4)}B^{(2,0)}+C^{(4,2)}B^{(0,2)}\big{)}+8!\big{(}A^{(1,0)}\partial_{x_{1}}C^{(4,4)}+A^{(0,1)}\partial_{x_{2}}C^{(4,4)}\big{)}$ $\displaystyle+4\cdot 8!\big{(}C^{(4,4)}\partial_{x_{1}}A^{(1,0)}+C^{(4,4)}\partial_{x_{2}}A^{(0,1)}\big{)}$ $\displaystyle+8!\big{(}B^{(2,0)}\partial_{x_{1}}^{2}C^{(4,4)}+B^{(0,2)}\partial_{x_{2}}^{2}C^{(4,4)}+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}C^{(4,4)}\big{)}$ $\displaystyle+3\cdot 4!6!\big{(}C^{(4,0)}\partial_{x_{1}}^{2}C^{(2,4)}+C^{(2,2)}\partial_{x_{2}}^{2}C^{(2,4)}+C^{(0,4)}\partial_{x_{2}}^{2}C^{(4,2)}$ $\displaystyle+C^{(2,2)}\partial_{x_{1}}^{2}C^{(4,2)}\big{)}$ $\displaystyle+12\cdot 6!\big{(}C^{(6,0)}\partial_{x_{1}}^{2}C^{(0,4)}+C^{(4,2)}\partial_{x_{2}}^{2}C^{(0,4)}+C^{(2,4)}\partial_{x_{1}}^{2}C^{(4,0)}+C^{(0,6)}\partial_{x_{2}}^{2}C^{(4,0)}$ $\displaystyle+36\big{(}C^{(4,2)}\partial_{x_{1}}^{2}C^{(2,2)}+C^{(2,4)}\partial_{x_{2}}^{2}C^{(2,2)}\big{)}\big{)}$ $\displaystyle+6\cdot 8!\big{(}C^{(4,4)}\partial_{x_{1}}^{2}B^{(2,0)}+C^{(2,6)}\partial_{x_{2}}^{2}B^{(2,0)}+C^{(6,2)}\partial_{x_{1}}^{2}B^{(0,2)}+C^{(4,4)}\partial_{x_{2}}^{2}B^{(0,2)}$ $\displaystyle+16\cdot\frac{2!}{3!}C^{(4,4)}\partial_{x_{1}}\partial_{x_{2}}B^{(1,1)}\big{)}+6\cdot 8!C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(4,4)}$ $\displaystyle+6\cdot 8!\big{(}C^{(6,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(0,4)}+C^{(2,6)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(4,0)}+36C^{(4,4)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(2,2)}\big{)}$ $\displaystyle+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 
1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ $\displaystyle K^{(6,6)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)=$ $\displaystyle 12!C^{(6,6)}\mathop{}\\!\mathrm{d}\hskip 1.0ptt$ $\displaystyle+\frac{1}{2}\Big{[}30\cdot 10!\big{(}B^{(2,0)}C^{(4,6)}+B^{(0,2)}C^{(6,4)}\big{)}$ $\displaystyle+\frac{6!8!}{2!}\big{(}C^{(4,0)}C^{(2,6)}+C^{(0,4)}C^{(6,2)}+15C^{(2,2)}C^{(4,4)}\big{)}$ $\displaystyle+2\cdot 6!6!\big{(}C^{(6,0)}C^{(0,6)}+\frac{(6!)^{2}}{(2!4!)^{2}}C^{(2,4)}C^{(4,2)}\big{)}$ $\displaystyle+\frac{6!8!}{2!}\big{(}C^{(2,6)}C^{(4,0)}+C^{(6,2)}C^{(0,4)}+15C^{(4,4)}C^{(2,2)}\big{)}$ $\displaystyle+30\cdot 10!\big{(}C^{(4,6)}B^{(2,0)}+C^{(6,4)}B^{(0,2)}\big{)}$ $\displaystyle+12!\big{(}A^{(1,0)}\partial_{x_{1}}C^{(6,6)}+A^{(0,1)}\partial_{x_{2}}C^{(6,6)}\big{)}+6\cdot 12!\big{(}C^{(6,6)}\partial_{x_{1}}A^{(1,0)}+C^{(6,6)}\partial_{x_{2}}A^{(0,1)}\big{)}$ $\displaystyle+12!\big{(}B^{(2,0)}\partial_{x_{1}}^{2}C^{(6,6)}+B^{(0,2)}\partial_{x_{2}}^{2}C^{(6,6)}+2B^{(1,1)}\partial_{x_{1}}\partial_{x_{2}}C^{(6,6)}\big{)}$ $\displaystyle+\frac{6!10!}{2!2!}\big{(}C^{(4,0)}\partial_{x_{1}}^{2}C^{(4,6)}+C^{(2,2)}\partial_{x_{2}}^{2}C^{(4,6)}+C^{(2,2)}\partial_{x_{1}}^{2}C^{(6,4)}+C^{(0,4)}\partial_{x_{2}}^{2}C^{(6,4)}\big{)}$ $\displaystyle+30\cdot\frac{6!8!}{2!}\big{(}C^{(6,0)}\partial_{x_{1}}^{2}C^{(2,6)}+C^{(4,2)}\partial_{x_{2}}^{2}C^{(2,6)}+C^{(2,4)}\partial_{x_{1}}^{2}C^{(6,2)}+C^{(0,6)}\partial_{x_{2}}^{2}C^{(6,2)}$ $\displaystyle+15\big{(}C^{(4,2)}\partial_{x_{1}}^{2}C^{(4,4)}+C^{(2,4)}\partial_{x_{2}}^{2}C^{(4,4)}\big{)}\big{)}$ $\displaystyle+\frac{6!8!}{2!}\Big{(}C^{(8,0)}\partial_{x_{1}}^{2}C^{(0,6)}+C^{(6,2)}\partial_{x_{2}}^{2}C^{(0,6)}+C^{(2,6)}\partial_{x_{1}}^{2}C^{(6,0)}+C^{(0,8)}\partial_{x_{2}}^{2}C^{(6,0)}$ $\displaystyle+\frac{(6!)^{2}}{(4!2!)^{2}}\big{(}C^{(4,4)}\partial_{x_{1}}^{2}C^{(4,2)}+C^{(2,6)}\partial_{x_{2}}^{2}C^{(4,2)}+C^{(6,2)}\partial_{x_{1}}^{2}C^{(2,4)}+C^{(4,4)}\partial_{x_{2}}^{2}C^{(2,4)}\big{)}\Big{)}$ 
$\displaystyle+\frac{6!10!}{2!2!}\big{(}C^{(4,6)}\partial_{x_{1}}^{2}C^{(4,0)}+C^{(2,8)}\partial_{x_{2}}^{2}C^{(4,0)}+C^{(8,2)}\partial_{x_{1}}^{2}C^{(0,4)}+C^{(6,4)}\partial_{x_{2}}^{2}C^{(0,4)}$ $\displaystyle+15\big{(}C^{(6,4)}\partial_{x_{1}}^{2}C^{(2,2)}+C^{(4,6)}\partial_{x_{2}}^{2}C^{(2,2)}\big{)}\big{)}$ $\displaystyle+15\cdot 12!\big{(}C^{(8,4)}\partial_{x_{1}}^{2}B^{(0,2)}+C^{(6,6)}\partial_{x_{2}}^{2}B^{(0,2)}+C^{(6,6)}\partial_{x_{1}}^{2}B^{(2,0)}+C^{(4,8)}\partial_{x_{2}}^{2}B^{(2,0)}$ $\displaystyle+\frac{4!4!}{5!}C^{(6,6)}\partial_{x_{1}}\partial_{x_{2}}B^{(1,1)}\big{)}$ $\displaystyle+6\cdot 12!C^{(2,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(6,6)}$ $\displaystyle+30\cdot 7!8!\big{(}C^{(6,2)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(2,6)}+C^{(2,6)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(6,2)}+15C^{(4,4)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(4,4)}\big{)}$ $\displaystyle+\frac{(6!)^{2}}{(4!2)^{2}}\frac{6!10!}{2!2!}\big{(}C^{(4,6)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(4,2)}+C^{(6,4)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(2,4)}\big{)}$ $\displaystyle+90\cdot 12!\big{(}C^{(4,8)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(4,0)}+C^{(8,4)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(0,4)}+15C^{(6,6)}\partial_{x_{1}}^{2}\partial_{x_{2}}^{2}C^{(2,2)}\big{)}$ $\displaystyle+\mathcal{O}(\delta)\Big{]}\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{2}+\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ For the differential operator, we use the short notation $\partial_{x_{i}}=\frac{\partial}{\partial x_{i}}$. With $\mathcal{O}(\delta)$, we indicate all terms that contain $C^{(\ell,m)}$ of higher-order or derivatives $\partial_{x_{i}}^{j}$, $j>3$, and with $\mathcal{O}(\mathop{}\\!\mathrm{d}\hskip 1.0ptt^{3})$ all terms that contain higher orders $(\geq 3)$ of the sampling interval $\mathop{}\\!\mathrm{d}\hskip 1.0ptt$. 
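As a numerical aid, the combinatorial coefficients $C^{(2\ell,2m)}$ defined above can be evaluated directly. The sketch below is our own transcription of the formula; the nested-list layout of the jump amplitudes $s_{ij}$ and the tuple of jump rates $\lambda_{i}$ are assumptions of this example, not notation from the paper.

```python
from math import factorial

def c_coeff(two_l, two_m, s, lam):
    """Coefficient C^(2l,2m) of Appendix B for even orders (2l, 2m).

    s   : 2x2 nested list with s[i][j] = s_{(i+1)(j+1)}, the squared
          jump amplitudes (hypothetical layout chosen for this sketch).
    lam : pair (lambda_1, lambda_2) of jump rates.
    """
    l, m = two_l // 2, two_m // 2
    # jump part: s11^l s21^m lambda_1 + s12^l s22^m lambda_2
    jump = s[0][0]**l * s[1][0]**m * lam[0] + s[0][1]**l * s[1][1]**m * lam[1]
    # Gaussian moment factors (2l)!/(2^l l!) and (2m)!/(2^m m!)
    gauss = factorial(2 * l) // (2**l * factorial(l)) \
        * factorial(2 * m) // (2**m * factorial(m))
    return jump * gauss / factorial(2 * l + 2 * m)
```

For instance, $C^{(2,0)}$ reduces to $\tfrac{1}{2}(s_{11}\lambda_{1}+s_{12}\lambda_{2})$, matching the jump contribution to $B^{(2,0)}$ above.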
One obtains $K^{(m,\ell)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ from $K^{(\ell,m)}(\boldsymbol{x},t,\mathop{}\\!\mathrm{d}\hskip 1.0ptt)$ by interchanging the indices $i\in\\{1,2\\}$ of the differential operator $\partial_{x_{i}}\equiv\frac{\partial}{\partial x_{i}}$ and by interchanging $\ell$ with $m$ in the orders of $A^{(\ell,m)}$, $B^{(\ell,m)}$, and $C^{(\ell,m)}$.

## References

* [1] Pikovsky A S, Rosenblum M G and Kurths J 2001 Synchronization: A universal concept in nonlinear sciences (Cambridge, UK: Cambridge University Press)
* [2] Kantz H and Schreiber T 2003 Nonlinear Time Series Analysis 2nd ed (Cambridge, UK: Cambridge University Press)
* [3] Reinsel G C 2003 Elements of multivariate time series analysis 2nd ed (New York: Springer)
* [4] Hlaváčková-Schindler K, Paluš M, Vejmelka M and Bhattacharya J 2007 Phys. Rep. 441 1–46
* [5] Marwan N, Romano M C, Thiel M and Kurths J 2007 Phys. Rep. 438 237–329
* [6] Stankovski T, Pereira T, McClintock P V E and Stefanovska A 2017 Rev. Mod. Phys. 89 045001
* [7] Friedrich R, Peinke J, Sahimi M and Tabar M R R 2011 Phys. Rep. 506 87–162
* [8] Tabar M R R 2019 Analysis and Data-Based Reconstruction of Complex Nonlinear Dynamical Systems: Using the Methods of Stochastic Processes (Cham, Switzerland: Springer)
* [9] Prusseit J and Lehnertz K 2008 Phys. Rev. E 77 041914
* [10] Lehle B 2013 J. Stat. Phys. 152 1145–1169
* [11] Scholz T, Raischel F, Lopes V V, Lehle B, Wächter M, Peinke J and Lind P G 2017 Phys. Lett. A 381 194–206
* [12] Anvari M, Tabar M R R, Peinke J and Lehnertz K 2016 Sci. Rep. 6 35435
* [13] Lehnertz K, Zabawa L and Tabar M R R 2018 New J. Phys. 20 113043
* [14] Hashtroud A M, Mirzahossein E, Zarei F and Tabar M R R 2019 J. Stat. Mech.: Theory Exp. 2019 083213
* [15] Rydin Gorjão L, Heysel J, Lehnertz K and Tabar M R R 2019 Phys. Rev. E 100 062127
* [16] Ditlevsen S and Löcherbach E 2017 Stoch. Process. Their Appl.
127 1840–1869
* [17] Lombardi F, Gómez-Extremera M, Bernaola-Galván P, Vetrivelan R, Saper C B, Scammell T E and Ivanov P C 2020 J. Neurosci. 40 171–190
* [18] Scalliet C, Gnoli A, Puglisi A and Vulpiani A 2015 Phys. Rev. Lett. 114 198001
* [19] Plati A, Baldassarri A, Gnoli A, Gradenigo G and Puglisi A 2019 Phys. Rev. Lett. 123 038002
* [20] Plati A and Puglisi A 2020 Phys. Rev. E 102 012908
* [21] Carpenter S R and Brock W A 2011 Ecology 92 2196–2201
* [22] Li D, Cui J and Song G 2015 J. Math. Anal. Appl. 430 438–464
* [23] Aït-Sahalia Y, Cacho-Diaz J and Laeven R J 2015 J. Financ. Econ. 117 585–606
* [24] see, e.g., M. Ragwitz and H. Kantz, Phys. Rev. Lett. 87, 254501 (2001); R. Friedrich, C. Renner, M. Siefert, and J. Peinke, Phys. Rev. Lett. 89, 217 (2002); P. Sura and J. Barsugli, Phys. Lett. A 305, 304 (2002); D. Kleinhans, R. Friedrich, A. Nawroth, and J. Peinke, Phys. Lett. A 346, 42 (2005); J. Gottschall and J. Peinke, New J. Phys. 10, 083034 (2008); S. J. Lade, Phys. Lett. A 373, 3705 (2009); C. Honisch and R. Friedrich, Phys. Rev. E 83, 066701 (2011); C. Honisch, R. Friedrich, F. Hörner, and C. Denz, Phys. Rev. E 86, 026702 (2012); K. Tang, P. Ao, and B. Yuan, EPL (Europhys. Lett.) 102, 40003 (2013); A. Vulpiani and M. Baldovin, J. Stat. Mech.: Theory Exp. 2020, 014003 (2020)
* [25] Kloeden P E and Platen E 1999 Numerical Solution of Stochastic Differential Equations (Berlin, Heidelberg: Springer)
* [26] Chen C, Twycross J and Garibaldi J M 2017 PLoS ONE 12 e0174202
* [27] Knuth K H 2019 Digit. Signal Process. 95 102581
* [28] Lamouroux D and Lehnertz K 2009 Phys. Lett. A 373 3507–3512
* [29] Rydin Gorjão L and Meirinhos F 2019 J. Open Source Softw. 4 1693
# Iterative Greedy Matching for 3D Human Pose Tracking from Multiple Views

Julian Tanke, University of Bonn, <EMAIL_ADDRESS>
Jürgen Gall, University of Bonn, <EMAIL_ADDRESS>

###### Abstract

In this work we propose an approach for estimating the 3D human poses of multiple people from a set of calibrated cameras. Estimating 3D human poses from multiple views has several compelling properties: human poses are estimated within a global coordinate space, and multiple cameras provide an extended field of view, which helps in resolving ambiguities, occlusions and motion blur. Our approach builds upon a real-time 2D multi-person pose estimation system and greedily solves the association problem between multiple views. We utilize bipartite matching to track multiple people over multiple frames. This proves to be especially efficient, as problems associated with greedy matching such as occlusion can be easily resolved in 3D. Our approach achieves state-of-the-art results on popular benchmarks and may serve as a baseline for future work.

## 1 Introduction

3D human pose tracking has applications in surveillance [40] and in the analysis of sport events [7, 23]. Most existing approaches [19, 21, 25, 26, 27, 33, 38, 28, 29] address 3D human pose estimation from single images, while multi-view 3D human pose estimation [7, 23, 3, 4, 12] remains less explored, as obtaining and maintaining a configuration of calibrated cameras is difficult and costly. However, in sports or surveillance, calibrated multi-camera setups are available and can be leveraged for accurate human pose estimation and tracking. Utilizing multiple views has several obvious advantages over monocular 3D human pose estimation: ambiguities introduced by foreshortening as well as body joint occlusions and motion blur can be resolved using other views. Furthermore, human poses are estimated within a global coordinate system when calibrated cameras are used.

Figure 1: Qualitative results on the Shelf [3] dataset.
In this work we propose an iterative greedy matching algorithm based on epipolar geometry to approximately solve the k-partite matching problem of multiple human detections in multiple cameras. To this end we utilize a real-time 2D pose estimation framework and achieve very strong results on challenging multi-camera datasets. The common 3D space proves to be very robust for greedy tracking, resulting in a very efficient and well-performing algorithm. In contrast to previous works [7, 23, 34, 13], our approach does not discretize the solution space but combines triangulation with an efficient pose association approach across camera views and time. Furthermore, our approach does not utilize individual shape models for each person [26]. We make the following contributions: (i) we present a greedy approach for 3D multi-person tracking from multiple calibrated cameras and show that our approach achieves state-of-the-art results; (ii) we provide extensive experiments on both 3D human pose estimation and 3D human pose tracking on various multi-person multi-camera datasets.

## 2 Related Work

Significant progress has been made in pose estimation and pose tracking in recent years [8, 11, 20, 39], and our model builds on advancements in the field of 2D multi-person pose estimation [8, 9, 15, 17, 24, 31, 36, 39]. For instance, part affinity fields [8] are 2D vector fields that represent associations between body joints which form limbs. The method utilizes a greedy bottom-up approach to detect 2D human poses and is robust to early commitment. Furthermore, it decouples the runtime complexity from the number of people in the image, yielding real-time performance. There is extensive research in monocular 3D human pose estimation [19, 21, 25, 27, 33, 38, 28, 29]. For instance, Martinez et al. [27] split the problem of inferring 3D human poses from single images into estimating a 2D human pose and then regressing the 3D pose on the low-dimensional 2D representation.
Though 3D human pose estimation approaches from single images yield impressive results, they do not generalize well to unconstrained data. While multiple views are used in [34, 35] to guide the training for monocular 3D pose estimation, there are also approaches that use multiple views for inference. A common technique to estimate a single 3D human pose from multiple views is to extend the well-known pictorial structure model [14] to 3D [2, 5, 7, 23, 34]. Burenius et al. [7] utilize a 2D part detector based on the HOG descriptor [10], while Kazemi et al. [23] use random forests. Pavlakos et al. [34] outperform all previous models by utilizing the stacked hourglass network [32] to extract human joint confidence maps from the camera views. However, these models have to discretize their solution space, resulting in either a very coarse result or a very large state space, making them impractical for estimating the 3D poses of multiple people. Furthermore, they restrict their solution space to a 3D bounding volume around the subject, which has to be known in advance. Estimating the poses of multiple humans from multiple views was first explored by Belagiannis et al. [3, 4]. Instead of sampling from all possible translations and rotations, they utilize a set of 3D body joint hypotheses obtained by triangulating 2D body part detections from different views. However, these methods rely on a person tracker that localizes a bounding box for each individual in each frame to estimate the number of persons, which otherwise has to be inferred from the common state space. This works well in cases where individuals are completely visible in most frames, but runs into issues when the pose is not completely visible in some cameras, as shown in Figure 2. A CNN-based approach was proposed by Elhayek et al. [12], where articulated skeletons are fitted using 3D sums of Gaussians [37] and body part detections are estimated using CNNs.
However, the Gaussians and skeletons need to be initialized beforehand for each actor in the scene, similar to [26]. Fully connected pairwise conditional random fields [13] utilize approximate inference to extract multiple human poses, where DeeperCut [18] is used as the 2D human pose estimation model. However, the search space has to be discretized and a fully connected graph has to be solved, which throttles inference speed. Our approach does not suffer from any of the aforementioned drawbacks, as our model works off-the-shelf without the need for actor-specific body models or a discretized state space and uses an efficient greedy approach for estimating 3D human poses.

Figure 2: Challenging 3D reconstruction of 6 persons in the CMU Panoptic Dataset [22] with significant occlusion and partial visibility of persons.

## 3 Model

Figure 3: Estimating the poses of multiple people from multiple views can be formulated as k-partite graph partitioning, where 2D human pose detections must be associated across multiple views. We employ a greedy approach to make the partitioning tractable. Given a set of 2D human pose detections in multiple views (a), we greedily match all detections of two images (b), where the weight between two detections is defined by the average epipolar distance of the two poses. Other views are then integrated iteratively, where the weight is the average epipolar distance between the 2D detections in the new view and the already integrated 2D detections (c). 2D detections with the same color represent the same person.

Our model consists of two parts: First, 3D human poses are estimated for each frame. Second, the estimated 3D human poses are greedily matched into tracks, which is described in Section 3.2. To remove outliers and to fill in missing joints in some frames, a simple yet effective smoothing scheme is applied, which is also discussed in Section 3.2.

### 3.1 3D Human Pose Estimation

Figure 4: Epipolar lines for two camera views of the UMPM Benchmark [1].
The blue and the red dot in image (a) are projected as blue and red epipolar lines in the second image (b), while the orange and the light-blue dot from image (b) are projected onto image (a).

First, 2D human poses are extracted for each camera separately. Several strong 2D multi-person pose estimation models [8, 9, 15, 17, 24, 31, 36, 39] have been proposed, but in our baseline we utilize OpenPose [8], as it is well established and offers real-time capabilities. We denote the 2D human pose estimations as $\displaystyle\big{\\{}h_{i,k}\big{\\}}_{i\in[1,N]}^{k\in[1,K_{i}]}$ (1) where $N$ is the number of calibrated cameras and $K_{i}$ is the number of detected human poses for camera $i$. In order to estimate the 3D human poses from multiple cameras, we first associate the detections across all views, as illustrated in Figure 3. We denote the associated 2D human poses as $\mathcal{H}$, where $|\mathcal{H}|$ is the number of detected persons and $\mathcal{H}_{m}=\\{h_{i,k}\\}$ is the set of 2D human poses that are associated to person $m$. Once the poses are associated, we estimate the 3D human poses for all detected persons $m$ with $|\mathcal{H}_{m}|>1$ by triangulating the 2D joint positions. For the association, we select camera $i=1$ as the starting point and choose all 2D human pose detections $h_{1,k}$ in this camera view as person candidates, i.e., $\mathcal{H}=\big{\\{}\\{h_{1,k}\\}\big{\\}}$. We then iterate over the other cameras and greedily match their 2D detections with the current list of person candidates $\mathcal{H}$ using bipartite matching [30].
The cost for assigning a pose $h_{i,k}$ to an existing person candidate $\mathcal{H}_{m}$ is given by $\displaystyle\Phi(h_{i,k},\mathcal{H}_{m})=\frac{1}{|\mathcal{H}_{m}||J_{kl}|}\sum_{h_{j,l}\in\mathcal{H}_{m}}\sum_{\iota\in J_{kl}}\phi(h_{i,k}(\iota),h_{j,l}(\iota))$ (2) where $h_{i,k}(\iota)$ denotes the 2D pixel location of joint $\iota$ of the 2D human pose $h_{i,k}$ and $J_{kl}$ is the set of joints that are visible in both poses $h_{i,k}$ and $h_{j,l}$. Note that the 2D human pose detections might not contain all $J$ joints due to occlusions or truncations. The distance between two joints in the respective cameras is defined by the distances between the epipolar lines and the joint locations: $\displaystyle\phi(p_{i},p_{j})=|p_{j}^{T}F^{i,j}p_{i}|+|p_{i}^{T}F^{j,i}p_{j}|$ (3) where $F^{i,j}$ is the fundamental matrix from camera $i$ to camera $j$. Figure 4 shows the epipolar lines for two joints. Using the cost function $\Phi(h_{i,k},\mathcal{H}_{m})$, we solve the bipartite matching problem for each image $i$: $X^{*}=\underset{X}{\mathrm{argmin}}\sum_{m=1}^{|\mathcal{H}|}\sum_{k=1}^{K_{i}}\Phi(h_{i,k},\mathcal{H}_{m})X_{k,m}$ (4) where $\sum_{k}X_{k,m}=1\;\forall m\quad\text{and}\quad\sum_{m}X_{k,m}=1\;\forall k.$ $X_{k,m}^{*}=1$ if $h_{i,k}$ is associated to an existing person candidate $\mathcal{H}_{m}$, and it is zero otherwise. If $X_{k,m}^{*}=1$ and $\Phi(h_{i,k},\mathcal{H}_{m})<\theta$, the 2D detection $h_{i,k}$ is added to $\mathcal{H}_{m}$. If $\Phi(h_{i,k},\mathcal{H}_{m})\geq\theta$, $\\{h_{i,k}\\}$ is added to $\mathcal{H}$ as a hypothesis for a new person. Algorithm 1 summarizes the greedy approach for associating the human poses across views.
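A single per-view association step, the cost matrix of equation (2) built from the epipolar residual (3) followed by the bipartite matching of equation (4), can be sketched in a few lines. This is a minimal illustration under our own assumptions: the data structures and the `fundamental` accessor are hypothetical, and SciPy's Hungarian solver stands in for the matching step.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def phi(p_i, p_j, F_ij, F_ji):
    """Symmetric epipolar residual of equation (3); p_i, p_j are joint
    locations in homogeneous pixel coordinates, shape (3,)."""
    return abs(p_j @ F_ij @ p_i) + abs(p_i @ F_ji @ p_j)

def associate_view(detections, candidates, fundamental, theta):
    """One outer iteration of the greedy association: merge the 2D pose
    detections of a new camera into the current person candidates.

    detections  : list of (cam_id, pose); a pose maps joint name -> point.
    candidates  : list of person hypotheses, each a list of (cam_id, pose).
    fundamental : fundamental(i, j) returns F^{i,j} (hypothetical accessor).
    theta       : assignment-cost threshold.
    """
    cost = np.zeros((len(detections), len(candidates)))
    for k, (i, h_ik) in enumerate(detections):
        for m, H_m in enumerate(candidates):
            residuals = []
            for j, h_jl in H_m:
                shared = h_ik.keys() & h_jl.keys()  # jointly visible joints
                F_ij, F_ji = fundamental(i, j), fundamental(j, i)
                residuals += [phi(h_ik[t], h_jl[t], F_ij, F_ji) for t in shared]
            cost[k, m] = np.mean(residuals)  # assumes at least one shared joint
    rows, cols = linear_sum_assignment(cost)  # bipartite matching, eq. (4)
    for k, m in zip(rows, cols):
        if cost[k, m] < theta:
            candidates[m].append(detections[k])   # extend existing person
        else:
            candidates.append([detections[k]])    # open a new hypothesis
    for k in set(range(len(detections))) - set(rows):
        candidates.append([detections[k]])        # unmatched detections
    return candidates
```

Running this once per remaining camera, and finally discarding hypotheses seen in only one view, reproduces the overall structure of the association stage.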
Result: Associated 2D poses $\mathcal{H}$
$\mathcal{H}:=\big{\\{}\\{h_{1,k}\\}\big{\\}}$;
for camera $i\leftarrow 2$ to $N$ do
  for pose $k\leftarrow 1$ to $K_{i}$ do
    for hypothesis $m\leftarrow 1$ to $|\mathcal{H}|$ do
      $C_{k,m}=\Phi(h_{i,k},\mathcal{H}_{m})$;
    end for
  end for
  $X^{*}=\underset{X}{\mathrm{argmin}}\sum_{m=1}^{|\mathcal{H}|}\sum_{k=1}^{K_{i}}C_{k,m}X_{k,m}$;
  for $k,m$ where $X_{k,m}^{*}=1$ do
    if $C_{k,m}<\theta$ then
      $\mathcal{H}_{m}=\mathcal{H}_{m}\ \bigcup\ \\{h_{i,k}\\}$;
    else
      $\mathcal{H}=\mathcal{H}\ \bigcup\ \big{\\{}\\{h_{i,k}\\}\big{\\}}$;
    end if
  end for
end for
$\mathcal{H}=\mathcal{H}\setminus\mathcal{H}_{m}\ \forall m$ where $|\mathcal{H}_{m}|=1$;

Algorithm 1: Solving the assignment problem for multiple 2D human pose detections in multiple cameras. $\Phi(h_{i,k},\mathcal{H}_{m})$ (2) is the assignment cost for assigning the 2D human pose $h_{i,k}$ to the person candidate $\mathcal{H}_{m}$. $X^{*}$ is a binary matrix obtained by solving the bipartite matching problem. The last line of the algorithm ensures that all hypotheses that cannot be triangulated are removed.

### 3.2 Tracking

For tracking, we use bipartite matching [30], similar to Section 3.1. Assuming that we have already tracked the 3D human poses up to frame $t-1$, we first estimate the 3D human poses for frame $t$ as described in Section 3.1. The 3D human poses of frame $t$ are then associated to the 3D human poses of frame $t-1$ by bipartite matching. The assignment cost for two 3D human poses is in this case given by the average Euclidean distance between all joints that are present in both poses. In some cases, two poses do not have any overlapping valid joints due to noisy detections or truncations. The assignment cost is then calculated by projecting the mean of all valid joints of each pose onto the $xy$-plane, assuming that the $z$-axis is the normal of the ground plane, and taking the Euclidean distance between the projected points.
As long as the distance between two matched poses is below a threshold $\tau$, they will be integrated into a common track. Otherwise, a new track is created. In our experiments we set $\tau=200$ mm. Due to noisy detections, occlusions or motion blur, some joints or even full poses might be missing or noisy in some frames. We fill in missing joints by temporal averaging and we smooth each joint trajectory by a Gaussian kernel with standard deviation $\sigma$. This simple approach significantly boosts the performance of our model, as we will show in Section 4.4.

## 4 Experiments

| | [7]* | [23]* | [34]* | [3] | [4] | [13] | Ours | Ours+ |
|---|---|---|---|---|---|---|---|---|
| ua | .60 | .89 | 1.0 | .68 | .98 | .97 | .99 | 1.0 |
| la | .35 | .68 | 1.0 | .56 | .72 | .95 | .99 | 1.0 |
| ul | 1.0 | 1.0 | 1.0 | .78 | .99 | 1.0 | .98 | .99 |
| ll | .90 | .99 | 1.0 | .70 | .92 | .98 | .93 | .997 |
| avg | .71 | .89 | 1.0 | .68 | .90 | .98 | .97 | .997 |

Table 1: Quantitative comparison of methods for single human 3D pose estimation from multiple views on the KTH Football II [23] dataset. The numbers are the PCP score in 3D with $\alpha=0.5$. Methods annotated with * can only estimate single human poses, discretize the state space and rely on being provided with a tight 3D bounding box centered at the true 3D location of the person. Ours+ and Ours describe our method with and without track smoothing (Section 3.2). ua and la show the scores for upper and lower arm, respectively, while ul and ll represent upper and lower leg. 
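The missing-joint filling and trajectory smoothing of Section 3.2 can be sketched as below. This is a hedged sketch: linear interpolation stands in for the paper's temporal averaging of missing joints, and the boundary handling of the Gaussian kernel is an assumption; only the Gaussian smoothing with standard deviation $\sigma$ is stated in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_track(track, sigma=2.0):
    """Fill missing joint positions (NaN) over time, then smooth each
    coordinate trajectory with a Gaussian kernel of std sigma.

    track : array of shape (T, J, 3) -- T frames, J joints, 3D positions.
    """
    track = np.array(track, float)  # work on a copy
    T, J, _ = track.shape
    t = np.arange(T)
    for j in range(J):
        for c in range(3):
            y = track[:, j, c]
            ok = ~np.isnan(y)
            if ok.any():
                # interpolation as a stand-in for temporal averaging
                y[~ok] = np.interp(t[~ok], t[ok], y[ok])
    # smooth along the time axis only
    return gaussian_filter1d(track, sigma=sigma, axis=0)
```

The paper uses $\sigma=2$ everywhere except Campus ($\sigma=4.2$), as discussed in Section 4.4.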
Campus dataset ($\alpha=0.5$)

| | [3] | | | [4] | | | [13] | | | Ours | | | Ours+ | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Actor | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 |
| ua | .83 | .90 | .78 | .97 | .97 | .90 | .97 | .94 | .93 | .86 | .97 | .91 | .99 | .98 | .98 |
| la | .78 | .40 | .62 | .86 | .43 | .75 | .87 | .79 | .70 | .74 | .64 | .68 | .91 | .70 | .92 |
| ul | .86 | .74 | .83 | .93 | .75 | .92 | .94 | .99 | .88 | 1.0 | .99 | .99 | 1.0 | .98 | 1.0 |
| ll | .91 | .89 | .70 | .97 | .89 | .76 | .97 | .95 | .81 | 1.0 | .98 | .99 | 1.0 | .98 | .99 |
| avg | .85 | .73 | .73 | .93 | .76 | .83 | .94 | .93 | .85 | .90 | .90 | .89 | .98 | .91 | .98 |
| avg* | .77 | | | .84 | | | .91 | | | .90 | | | .96 | | |

Shelf dataset ($\alpha=0.5$)

| | [3] | | | [4] | | | [13] | | | Ours | | | Ours+ | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Actor | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 |
| ua | .72 | .80 | .91 | .82 | .83 | .93 | .93 | .78 | .94 | .99 | .93 | .97 | 1.0 | .97 | .97 |
| la | .61 | .44 | .89 | .82 | .83 | .93 | .83 | .33 | .90 | .97 | .57 | .95 | .99 | .64 | .96 |
| ul | .37 | .46 | .46 | .43 | .50 | .57 | .96 | .95 | .97 | .998 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| ll | .71 | .72 | .95 | .86 | .79 | .97 | .97 | .93 | .96 | .998 | .99 | 1.0 | 1.0 | 1.0 | 1.0 |
| avg | .60 | .61 | .80 | .73 | .74 | .85 | .92 | .75 | .94 | .99 | .87 | .98 | .998 | .90 | .98 |
| avg* | .67 | | | .77 | | | .87 | | | .95 | | | .96 | | |

Table 2: Quantitative comparison of multi-person 3D pose estimation from multiple views on the evaluation frames of the annotated Campus [16, 3] and Shelf [3] datasets. The numbers are the PCP score in 3D with $\alpha=0.5$. Ours+ and Ours describe our method with and without track smoothing (Section 3.2). We show results for each of the three actors separately as well as averaged over the actors for each method (avg*).

| | Ours+ | |
|---|---|---|
| Actor | 1 | 2 |
| ua | .997 | .98 |
| la | .98 | .996 |
| ul | 1.0 | 1.0 |
| ll | .99 | .997 |
| avg | .99 | .99 |

Table 3: Quantitative comparison of multi-person 3D pose estimation from multiple views on p2_chair_2 of the UMPM benchmark [1]. 
We evaluate our approach on two human pose estimation tasks, single-person 3D pose estimation and multi-person 3D pose estimation, and compare it to state-of-the-art methods. Percentage of correct parts (PCP) in 3D as described in [7] is used for evaluation. We evaluate on the limbs only, as annotated head poses vary significantly across datasets. In all experiments, the order in which the cameras are processed is given by the dataset. We then evaluate the tracking performance. The source code is publicly available at https://github.com/jutanke/mv3dpose.

### 4.1 Single Person 3D Pose Estimation

Naturally, the first works on 3D human pose estimation from multiple views cover only single humans. Typical methods [7, 23, 34] find a solution over the complete discretized state space, which is intractable for multiple persons. However, we report their results for completeness. All models were evaluated on the complete first sequence of the second player of the KTH Football II [23] dataset. Our results are reported in Table 1. Our model outperforms all other multi-person approaches and gets close to the state of the art for single human pose estimation [34], which makes strong assumptions and is much more constrained. Our model has the lowest accuracy for lower legs (ll), which experience strong deformation and high movement speed. This can mostly be attributed to the 2D pose estimation framework, which confuses left and right under motion blur, as can be seen in Figure 7. Smoothing the trajectories (Section 3.2) reduces this kind of error.

### 4.2 Multi-Person 3D Pose Estimation

To evaluate our model on multi-person 3D pose estimation, we utilize the Campus [16, 3], Shelf [3], CMU Panoptic [22] and UMPM [1] datasets. The difficulty of the Campus dataset lies in its low resolution ($360\times 288$ pixels), which makes accurate joint detection hard. 
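The PCP metric used throughout these comparisons can be sketched as follows. This is a common formulation of the 3D PCP criterion and the exact variant in [7] may differ in details: a limb counts as correct if both predicted endpoints lie within $\alpha$ times the ground-truth limb length of the corresponding ground-truth joints.

```python
import numpy as np

def pcp_3d(pred, gt, limbs, alpha=0.5):
    """Fraction of limbs whose predicted endpoints both lie within
    alpha * (ground-truth limb length) of the ground-truth joints.

    pred, gt : (J, 3) arrays of 3D joint positions.
    limbs    : list of (joint_a, joint_b) index pairs.
    """
    correct = 0
    for a, b in limbs:
        L = np.linalg.norm(gt[a] - gt[b])  # ground-truth limb length
        if (np.linalg.norm(pred[a] - gt[a]) <= alpha * L and
                np.linalg.norm(pred[b] - gt[b]) <= alpha * L):
            correct += 1
    return correct / len(limbs)
```

The per-limb-class scores in the tables (ua, la, ul, ll) are this fraction restricted to the corresponding limb pairs.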
Furthermore, small errors in triangulation or detection result in large PCP errors, as the final score is calculated on the 3D joint locations. As in previous works [3, 4], we utilize frames $350-470$ and $650-750$ of the Campus dataset and frames $300-600$ of the Shelf dataset. Clutter and humans occluding each other make the Shelf dataset challenging. Nevertheless, our model achieves state-of-the-art results on both datasets by a large margin, as can be seen in Table 2. Table 3 reports quantitative results on video p2_chair_2 of the UMPM [1] benchmark. A sample frame from this benchmark can be seen in Figure 4. As the background is homogeneous and the human actors maintain a considerable distance from each other, the results of our method are quite strong.

### 4.3 Tracking

| | Ours | Ours+ |
|---|---|---|
| 160422_ultimatum1 [22] | .89 | .89 |
| 160224_haggling1 [22] | .92 | .92 |
| 160906_pizza1 [22] | .92 | .93 |

Table 4: Quantitative evaluation of multi-person 3D pose tracking on the CMU Panoptic dataset [22] using the MOTA [6] score. Ours+ and Ours describe our method with and without track smoothing (Section 3.2).

For evaluating the tracking accuracy, we utilize the MOTA [6] score, which provides a scalar value for the rate of false positives, false negatives, and identity switches of a track. Our model is evaluated on the CMU Panoptic dataset [22], which provides multiple interacting people in close proximity. We use videos 160224_haggling1 with three persons, 160422_ultimatum1 with up to seven persons, and 160906_pizza1 with six persons. For 160422_ultimatum1 we use frames $300$ to $3758$, for 160906_pizza1 frames $1000$ to $4458$, and for 160224_haggling1 frames $4209$ to $5315$ and $6440$ to $8200$. The first five HD cameras are used. Our results are reported in Table 4, which shows that our approach yields strong tracking capabilities. 
### 4.4 Effects of Smoothing

Figure 5: PCP score for different smoothing values $\sigma$ for tracking on KTH Football II, Campus, and Shelf. If $\sigma$ is too small, the smoothing has little effect and coincides with the un-smoothed results. When the joint trajectories are smoothed too much, the PCP score drops as well, since the trajectories no longer follow the original path. (Larger PCP scores are better.)

As can be seen in Table 1 and Table 2, the effects of smoothing can be significant, especially when detection and calibration are noisy, as is the case for the Campus and the KTH Football II datasets. In both datasets, 2D human pose detection is challenging due to low resolution (Campus) or strong motion blur (KTH Football II). Datasets with higher resolution and less motion blur, such as the Shelf dataset, do not suffer from these problems as much and therefore do not benefit from track smoothing in the same way. However, a small gain can still be noted, as smoothing also fills in joint detections that could not be triangulated. Figure 5 explores different $\sigma$ values for smoothing on the KTH Football II, Campus, and Shelf datasets. It can be seen that smoothing improves the performance regardless of the dataset, but that too much smoothing reduces the accuracy. We chose $\sigma=2$ for all our experiments except for the Campus dataset, where we set $\sigma=4.2$. The reason for the higher value of $\sigma$ on the Campus dataset is the very low resolution of its images compared to the other datasets, which increases the noise of the 3D joint positions estimated by triangulation.

### 4.5 Effects of camera order

Figure 6: PCP score averaged over all subjects for all $120$ camera permutations of the Shelf dataset. The vertical line represents the mean value over all permutations while the dots represent each camera permutation. 
So far we have used the camera order given by each dataset, but the order in which views are greedily matched matters, and different orderings might yield different results. To investigate the impact of the camera order, we evaluated our approach using all $120$ permutations of the $5$ cameras of the Shelf dataset. The results in Figure 6 show that the approach is very robust to the order of the camera views.

### 4.6 Early Commitment

Figure 7: Issues with early commitment. As we utilize the 2D pose estimations directly, our method suffers when the predictions are poor. In this example the pose estimation model correctly estimates (a) and (c) but confuses left and right on (b) due to motion blur. The resulting 3D pose estimation (d) collapses into the centre of the person. The red limbs represent the right body side while blue limbs represent the left body side.

A failure case arises from the early commitment of our algorithm with regard to the 2D pose estimation, as can be seen in Figure 7. When the pose estimation model is unsure about a pose, it still fully commits to its output and disregards uncertainty. This problem occurs under motion blur, as the network then has difficulty deciding between left and right. As our pose estimation model has mostly seen forward-facing persons, it will be more inclined to predict a forward-facing person in case of uncertainty. When left and right of a 2D prediction are incorrectly flipped in at least one of the views, the merged 3D prediction collapses to the vertical axis of the person, resulting in a poor 3D pose estimation.

## 5 Conclusion

In this work we presented a simple baseline approach for 3D human pose estimation and tracking from multiple calibrated cameras and evaluated it extensively on several 3D multi-camera datasets. Our approach achieves state-of-the-art results in multi-person 3D pose estimation while remaining sufficiently efficient for fast processing. 
Due to the model's simplicity, some common failure cases can be identified, which future work can build upon. For example, confidence maps provided by the 2D pose estimation model could be utilized to prevent left-right flips. Our approach may serve as a baseline for future work.

## Acknowledgement

The work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) GA 1927/5-1 (FOR 2535 Anticipating Human Behavior) and the ERC Starting Grant ARCA (677650).

## References

* [1] Aa, N.v.d., Luo, X., Giezeman, G., Tan, R., Veltkamp, R.: Utrecht Multi-Person Motion (UMPM) benchmark: A multi-person dataset with synchronized video and motion capture data for evaluation of articulated human motion and interaction. In: Workshop on Human Interaction in Computer Vision (2011)
* [2] Amin, S., Andriluka, M., Rohrbach, M., Schiele, B.: Multi-view Pictorial Structures for 3D Human Pose Estimation. In: British Machine Vision Conference (2013)
* [3] Belagiannis, V., Amin, S., Andriluka, M., Schiele, B., Navab, N., Ilic, S.: 3d pictorial structures for multiple human pose estimation. In: Conference on Computer Vision and Pattern Recognition (2014)
* [4] Belagiannis, V., Amin, S., Andriluka, M., Schiele, B., Navab, N., Ilic, S.: 3d pictorial structures revisited: Multiple human pose estimation. Transactions on Pattern Analysis and Machine Intelligence (2016)
* [5] Bergtholdt, M., Kappes, J., Schmidt, S., Schnörr, C.: A study of parts-based object class detection using complete graphs. International Journal of Computer Vision (2010)
* [6] Bernardin, K., Elbs, A., Stiefelhagen, R.: Multiple object tracking performance metrics and evaluation in a smart room environment. In: Workshop on Visual Surveillance (2006)
* [7] Burenius, M., Sullivan, J., Carlsson, S.: 3D pictorial structures for multiple view articulated pose estimation. 
In: Conference on Computer Vision and Pattern Recognition (2013)
* [8] Cao, Z., Simon, T., Wei, S.E., Sheikh, Y.: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. In: Conference on Computer Vision and Pattern Recognition (2017)
* [9] Chen, Y., Wang, Z., Peng, Y., Zhang, Z., Yu, G., Sun, J.: Cascaded pyramid network for multi-person pose estimation. In: Conference on Computer Vision and Pattern Recognition (2018)
* [10] Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Conference on Computer Vision and Pattern Recognition (2005)
* [11] Doering, A., Iqbal, U., Gall, J.: JointFlow: Temporal Flow Fields for Multi Person Tracking. In: British Machine Vision Conference (2018)
* [12] Elhayek, A., de Aguiar, E., Jain, A., Tompson, J., Pishchulin, L., Andriluka, M., Bregler, C., Schiele, B., Theobalt, C.: Efficient ConvNet-based marker-less motion capture in general scenes with a low number of cameras. In: Conference on Computer Vision and Pattern Recognition (2015)
* [13] Ershadi-Nasab, S., Noury, E., Kasaei, S., Sanaei, E.: Multiple human 3d pose estimation from multiview images. Multimedia Tools and Applications (2018)
* [14] Felzenszwalb, P.F., Huttenlocher, D.P.: Pictorial structures for object recognition. International Journal of Computer Vision (2005)
* [15] Fieraru, M., Khoreva, A., Pishchulin, L., Schiele, B.: Learning to refine human pose estimation. In: Conference on Computer Vision and Pattern Recognition Workshops (2018)
* [16] Fleuret, F., Berclaz, J., Lengagne, R., Fua, P.: Multicamera people tracking with a probabilistic occupancy map. Pattern Analysis and Machine Intelligence (2007)
* [17] Guo, H., Tang, T., Luo, G., Chen, R., Lu, Y., Wen, L.: Multi-Domain Pose Network for Multi-Person Pose Estimation and Tracking. 
In: European Conference on Computer Vision (2018)
* [18] Insafutdinov, E., Pishchulin, L., Andres, B., Andriluka, M., Schiele, B.: Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In: European Conference on Computer Vision (2016)
* [19] Iqbal, U., Doering, A., Yasin, H., Krüger, B., Weber, A., Gall, J.: A dual-source approach for 3D human pose estimation from single images. Computer Vision and Image Understanding (2018)
* [20] Iqbal, U., Milan, A., Gall, J.: PoseTrack: Joint Multi-Person Pose Estimation and Tracking. In: Conference on Computer Vision and Pattern Recognition (2017)
* [21] Iqbal, U., Molchanov, P., Breuel, T., Gall, J., Kautz, J.: Hand pose estimation via latent 2.5D heatmap regression. In: Proceedings of the European Conference on Computer Vision (2018)
* [22] Joo, H., Liu, H., Tan, L., Gui, L., Nabbe, B., Matthews, I., Kanade, T., Nobuhara, S., Sheikh, Y.: Panoptic Studio: A Massively Multiview System for Social Motion Capture. In: International Conference on Computer Vision (2015)
* [23] Kazemi, V., Burenius, M., Azizpour, H., Sullivan, J.: Multi-view body part recognition with random forests. In: British Machine Vision Conference (2013)
* [24] Kocabas, M., Karagoz, S., Akbas, E.: MultiPoseNet: Fast multi-person pose estimation using pose residual network. In: European Conference on Computer Vision (2018)
* [25] Kostrikov, I., Gall, J.: Depth Sweep Regression Forests for Estimating 3D Human Pose from Images. In: British Machine Vision Conference (2014)
* [26] Liu, Y., Stoll, C., Gall, J., Seidel, H.P., Theobalt, C.: Markerless motion capture of interacting characters using multi-view image segmentation. In: Conference on Computer Vision and Pattern Recognition (2011)
* [27] Martinez, J., Hossain, R., Romero, J., Little, J.J.: A simple yet effective baseline for 3d human pose estimation. 
In: International Conference on Computer Vision (2017)
* [28] Mehta, D., Rhodin, H., Casas, D., Fua, P., Sotnychenko, O., Xu, W., Theobalt, C.: Monocular 3D human pose estimation in the wild using improved CNN supervision. In: International Conference on 3D Vision (2017)
* [29] Mehta, D., Sotnychenko, O., Mueller, F., Xu, W., Sridhar, S., Pons-Moll, G., Theobalt, C.: Single-shot multi-person 3D pose estimation from monocular RGB. In: International Conference on 3D Vision (2018)
* [30] Munkres, J.: Algorithms for the assignment and transportation problems. Journal of the Society for Industrial and Applied Mathematics (1957)
* [31] Newell, A., Huang, Z., Deng, J.: Associative embedding: End-to-end learning for joint detection and grouping. In: Advances in Neural Information Processing Systems (2017)
* [32] Newell, A., Yang, K., Deng, J.: Stacked hourglass networks for human pose estimation. In: European Conference on Computer Vision (2016)
* [33] Pavlakos, G., Zhou, X., Derpanis, K.G., Daniilidis, K.: Coarse-to-fine volumetric prediction for single-image 3d human pose. In: Conference on Computer Vision and Pattern Recognition (2017)
* [34] Pavlakos, G., Zhou, X., Derpanis, K.G., Daniilidis, K.: Harvesting Multiple Views for Marker-less 3D Human Pose Annotations. In: Conference on Computer Vision and Pattern Recognition (2017)
* [35] Rhodin, H., Spörri, J., Katircioglu, I., Constantin, V., Meyer, F., Müller, E., Salzmann, M., Fua, P.: Learning Monocular 3D Human Pose Estimation from Multi-view Images. In: Conference on Computer Vision and Pattern Recognition (2018)
* [36] Rogez, G., Weinzaepfel, P., Schmid, C.: Lcr-net++: Multi-person 2d and 3d pose detection in natural images. Transactions on Pattern Analysis and Machine Intelligence (2019)
* [37] Stoll, C., Hasler, N., Gall, J., Seidel, H.P., Theobalt, C.: Fast articulated motion tracking using a sums of gaussians body model. 
In: International Conference on Computer Vision (2011)
* [38] Tome, D., Russell, C., Agapito, L.: Lifting from the deep: Convolutional 3d pose estimation from a single image. In: Conference on Computer Vision and Pattern Recognition (2017)
* [39] Xiao, B., Wu, H., Wei, Y.: Simple baselines for human pose estimation and tracking. In: European Conference on Computer Vision (2018)
* [40] Zheng, L., Shen, L., Tian, L., Wang, S., Wang, J., Tian, Q.: Scalable person re-identification: A benchmark. In: International Conference on Computer Vision (2015)
D. R. J. Chillingworth, Mathematical Sciences, University of Southampton, Southampton SO17 1BJ, UK. Email: <EMAIL_ADDRESS>

M. G. Forest, Departments of Mathematics & Applied Physical Sciences & Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3250, USA. Email: <EMAIL_ADDRESS>

R. Lauterbach, Fachbereich Mathematik, Universität Hamburg, 20146 Hamburg, Germany. Email: <EMAIL_ADDRESS>

C. Wulff (✠ deceased 12 June 2021), Department of Mathematics, University of Surrey, Guildford GU2 7XH, UK, and Free University Berlin, Department of Mathematics, Arnimallee 2-6, 14195 Berlin, Germany.

# Existence and stability of kayaking orbits for nematic liquid crystals in simple shear flow

David Chillingworth · M. Gregory Forest · Reiner Lauterbach · Claudia Wulff

(Received: date / Accepted: date)

###### Abstract

We use geometric methods of equivariant dynamical systems to address a long-standing open problem in the theory of nematic liquid crystals, namely a proof of the existence and asymptotic stability of kayaking periodic orbits in response to steady shear flow. These are orbits for which the principal axis of orientation of the molecular field (the director) rotates out of the plane of shear and around the vorticity axis. With a small parameter attached to the symmetric part of the velocity gradient, the problem can be viewed as a symmetry-breaking bifurcation from an orbit of the rotation group ${\rm SO}(3)$ that contains both logrolling (equilibrium) and tumbling (periodic rotation of the director within the plane of shear) regimes as well as a continuum of neutrally stable kayaking orbits. The results turn out to require expansion to second order in the perturbation parameter.

###### Keywords: Nematic · shear flow · kayaking · bifurcation · periodic orbit · Lyapunov-Schmidt

###### MSC: 37G15 · 37G40 · 37N10 · 76T99

###### Contents
1 Introduction
2 Geometry and symmetries of the system
    2.1 Rotation coordinates: the Veronese map
    2.2 Isotypic decomposition
    2.3 Alignment relative to the flow
    2.4 Tangent and normal vectors to the group orbit ${\mathcal{O}}$

Dedication

The first three named authors dedicate this paper to the fond memory of our late colleague Claudia Wulff, who passed away between the first provisional acceptance of this paper and its eventual publication. Throughout our work her cheerfulness, enthusiasm, clear geometric insight and scrupulous attention to detail have continually inspired us and sustained this project. We are grateful to have had the good fortune to share this collaboration over many years with her.

## 1 Introduction

Nematic liquid crystals, regarded as fluids in which the high aspect ratio, rigid, rod molecules require descriptive variables for orientation as well as position, are observed to exhibit a wide range of prolonged unsteady dynamical responses to steady shear flow. The mathematical study of these phenomena in principle involves the Navier-Stokes equations for fluid flow coupled with equations representing molecular alignment and nonlocal interactions between rod molecules, typically leading to PDE systems currently intractable to rigorous analysis on a global scale and resolved only through local analysis and/or numerical simulation. It becomes appropriate therefore to deal with simpler models as templates for capturing some of the dynamical regimes of interest and their responses to physical parameters. Stability and bifurcation behaviours that are robust for finite-dimensional dynamical systems, and that numerically reflect the same orbits of interest (specifically, kayaking orbits) in infinite-dimensional systems, provide a framework for extension of rigorous results to the infinite-dimensional systems. 
Much of the work on dynamics of liquid crystals (and more generally, rigid large-aspect-ratio polymers) in fluid flow rests on models proposed by Hess Hess76 and Doi Doi81 that consider the evolution of the probability density on the 2-sphere (more accurately, projective space ${\mathbb{R}}{\mathrm{P}}^{2}$) representing unoriented directions of molecular alignment, with the molecules regarded as rigid rods. Extensive theoretical and numerical investigations (BurFul90, FaraoniEtAl1999, FarhoudiRey1993, LarOtt91, MSV00, MaffCres95, OlmLu99, RienHess, RienackerEtAl2002, RienackerEtAl2002b, to cite only a few) of these and related nematic director or orientation tensor models in 2D or 3D reveal a wide range of periodic molecular dynamical regimes with evocative names LarOtt91: logrolling, tumbling, wagging and kayaking, according to the behaviour (steady versus periodic) of the principal axis of molecular orientation (the nematic director) relative to the shear (flow velocity and velocity gradient) plane and vorticity axis (normal to the shear plane). Tumbling orbits, for which the principal axis of molecular orientation rotates periodically in the shear plane, are seen to be stable at low shear rates, but become unstable to out-of-plane perturbations and give way to kayaking orbits, for which the principal molecular axis is transverse to the shear plane and rotates around the vorticity axis, reminiscent of the motion of the paddles propelling a kayak along the shear flow of a calm stream. The limiting case is logrolling, a stationary state where the principal axis of the rod ensemble collapses onto the vorticity axis, while wagging corresponds to oscillations (but not complete rotations) of the molecular orientation in the shear plane about some mean angle, although wagging regimes do not appear in our analysis. 
We note very recent experimental results FoxEtAl2020 coupled with the high-resolution numerical results of the Doi-Hess kinetic theory ForestEtAl2004b that provide overwhelming evidence that the kayaking orbit is responsible for the anomalous shear-thickening response of a high aspect ratio, rodlike, liquid crystal polymer with the acronym PBDT. The papers ForestWang2003, FoxEtAl2020 give extensive lists of literature references. In the particular case of a steady shear flow and spatially homogeneous liquid crystal in a region in ${\mathbb{R}}^{3}$, the PDEs describing the evolution of orientational order can be simplified to an autonomous ODE in the setting of the widely-used ${Q}$-tensor model deGP, MottNewt, SV for nematic liquid crystals. The assumption of spatial homogeneity of course rules out many important applications, to display technology for example, but nevertheless gives a worthwhile approximation in local domains of homogeneity (monodomains) away from boundaries and defects. In this setting the propensity of a molecule to align in any given direction in ${\mathbb{R}}^{3}$ is represented by an order tensor ${Q}$ belonging to the 5-dimensional space $V$ of traceless symmetric $3\times 3$ matrices, $V:=\\{A\in{\mathbb{R}}^{3\times 3}:A^{\mathtt{t}}=A,\ {\rm tr}(A)=0\\}$ (1.1) where $\\{\,\\}^{\mathtt{t}}$ and $\,{\rm tr}\,$ denote transpose and trace respectively. The tensor ${Q}$ is interpreted as the normalised second moment of a more general probability distribution on ${\mathbb{R}}{\mathrm{P}}^{2}$. All such ${Q}$-tensor models can be associated with a moment-closure approximation of the Smoluchowski equation for the full orientational distribution function ForestWang2003. The derivation of the equation yields technical problems concerning the approximation of higher-order moments, a topic of some discussion in the literature: see Feng+98, ForestWang2003, KAC, LeeEtAl06 for example. 
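The statement above that ${Q}$ is the normalised second moment of the orientational distribution can be written out explicitly. With $f$ the probability density of rod directions on the sphere (antipodal directions identified, so $f(\mathbf{n})=f(-\mathbf{n})$), a standard form consistent with (1.1) is:

```latex
Q \;=\; \int_{S^{2}} \Big( \mathbf{n}\otimes\mathbf{n} \;-\; \tfrac{1}{3}\, I \Big)\, f(\mathbf{n})\, \mathrm{d}A(\mathbf{n}),
\qquad Q^{\mathtt{t}} = Q, \quad {\rm tr}(Q) = 0 .
```

Subtracting $\tfrac{1}{3}I$ makes ${Q}$ traceless, so that ${Q}=0$ corresponds to the isotropic (uniformly distributed) state and ${Q}$ indeed lies in the space $V$ of (1.1).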
In this context the dimensionless equation for the evolution of the orientational order takes the general form $\frac{{\mathrm{d}}{Q}}{{\mathrm{d}}t}=F({Q},\beta):=G({Q})+\omega[W,{Q}\,]+\beta\bm{L}({Q})D$ (1.2) as an equation in $V\cong{\mathbb{R}}^{5}$; here $[W,{Q}\,]=W\\!{Q}-{Q}W$. (In this paper we do not use bold face symbols for elements of $V$, but reserve bold face for the higher order tensor $\bm{L}({Q})$ and for vectors in ${\mathbb{R}}^{3}$; this matches the convention adopted by MacMillan in MacM92a; MacM92b. Lower case Greek symbols denote scalars.) On the right hand side of (1.2) the first term represents the molecular interactions in the absence of flow, derived for example from a Maier-Saupe interaction potential or Landau-de Gennes free energy: thus $G$ is a frame-indifferent vector field in $V$. In the second term, $W$ denotes the vorticity tensor, the anti-symmetric part of the (spatially homogeneous) velocity gradient, providing the rotational effect of the flow with constant coefficient $\omega$. In the third term $\bm{L}(Q)$ is a linear transformation $V\to V$ applied to the rate-of-strain tensor $D$, the symmetric part of the velocity gradient, and represents the molecular aligning effect of the flow: the linearity in $D$ is a simplifying assumption. Here $\bm{L}(Q)$ depends (not necessarily linearly) on ${Q}$, and $\bm{L}({Q})D$ is frame-indifferent with respect to simultaneous coordinate choice for the flow and the molecular orientation. The coefficients $\omega$ and $\beta$ are constant scalars that depend on the physical characteristics of the liquid crystal molecule as well as the flow. In this study we take $\omega$ as fixed, and regard $\beta$ as a variable parameter. In the Olmsted-Goldbart model OlmGol used in Chillingworth2001, VAWS2003 the term $\bm{L}({Q})D$ is simply a constant scalar multiple of $D$. 
A more detailed model for $\bm{L}({Q})D$ is the basis of a series of studies by the second author and co-workers ForestEtAl2002–ForestEtAl2004b, LeeEtAl06 as well as by many other authors CRWX, GrossoEtal01, MarrucciMaffettone89, PacZar. We draw attention also to the earlier theoretical work MacM92a; MacM92b assuming a general form for $\bm{L}({Q})D$, where similar methods to ours are used to study equilibrium states (uniaxial or biaxial), although the question of periodic orbits in general and kayaking orbits in particular is hardly addressed, the existence of the latter having yet to be discovered. We remark that although in this paper our underlying assumption is of spatial homogeneity, there have been studies of nematic liquid crystal dynamics in a nonhomogeneous environment: see among others ChoF for analytical results and YWMF for numerical simulations. A particular model of the form (1.2) that ‘combines analytic tractability with physical relevance’ MTZ is the Beris-Edwards model BE, a basis for some more recent investigations DMOY, DHW, MTZ, WXZ in both the PDE and ODE settings. Here $G$ is the negative gradient of a degree four Landau-de Gennes free energy function, while the term $\bm{L}({Q})D$ takes the form $\bm{L}({Q})D=\frac{2}{3}D+[D,Q\,]^{+}-2{\rm tr}(DQ)Q$ (1.3) in which we use the notation $[H,K]^{+}:=HK+KH-\frac{2}{3}{\rm tr}(HK)I$ (1.4) for any matrices $H,K\in V$; here and elsewhere $I$ denotes the $3\times 3$ identity matrix. Observe that (1.3) is a linear combination of a constant, a linear and a quadratic term in ${Q}$, which we denote (without their coefficients) by $\bm{L}^{c}({Q})D,\,\bm{L}^{l}(Q)D,\,\bm{L}^{q}({Q})D$ respectively. In this paper we initially work with an arbitrary choice of smooth field $\bm{L}({Q})D$, subject to a natural assumption of frame-indifference; throughout the paper we take smooth to mean $C^{\infty}$, although the results hold with sufficient finite order of differentiability. 
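Equations (1.3)–(1.4) are easy to check numerically. The sketch below (with an arbitrary shear-like $D$, not a value from the paper) verifies that the Beris-Edwards term maps $V$ into $V$, i.e. that $\bm{L}({Q})D$ stays symmetric and traceless whenever $Q,D\in V$:

```python
import numpy as np

def bracket_plus(H, K):
    """[H,K]^+ = HK + KH - (2/3) tr(HK) I, as in Eq. (1.4)."""
    return H @ K + K @ H - (2.0 / 3.0) * np.trace(H @ K) * np.eye(3)

def L_BE(Q, D):
    """Beris-Edwards alignment term L(Q)D of Eq. (1.3):
    (2/3) D + [D,Q]^+ - 2 tr(DQ) Q."""
    return (2.0 / 3.0) * D + bracket_plus(D, Q) - 2.0 * np.trace(D @ Q) * Q
```

Each of the three terms is individually symmetric and traceless for $Q,D\in V$ (the $-\frac{2}{3}{\rm tr}(HK)I$ correction in (1.4) exists precisely to remove the trace), so their linear combination (1.5) inherits the same property for any coefficients $m_c, m_l, m_q$.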
We then replace this by an arbitrary linear combination $\bm{L}({Q})D=m_{c}\bm{L}^{c}({Q})D+m_{l}\bm{L}^{l}(Q)D+m_{q}\bm{L}^{q}({Q})D$ (1.5) which helps to keep track of the analysis, and also enables the results to apply to simpler models for which one or more of the $m_{i}$ may be zero. For the Beris-Edwards model (1.3) the ratios are $(m_{c}:m_{l}:m_{q})=(2/3:1:-2)$, while for the Olmsted-Goldbart model OlmGol the ratios are $(1:0:0)$ and for the model in MSV00 they are $(\sqrt{3/10}:3/7:0)$. Moreover, in Appendix LABEL:s:genform we pursue the analysis for general $\bm{L}({Q})D$, using the 7-term expression assumed for example in MacM92a; MacM92b, and show that with the exception of one term the results are the same as those for (1.5) albeit with different interpretation of the coefficients $m_{c},m_{l},m_{q}$. The exceptional term (being the symmetric traceless form of $Q^{2}D$) also fits into our overall framework as shown in the expressions (LABEL:e:newLam0) and (LABEL:e:newLam2) with (LABEL:e:7gens). When $\beta=0$ the equation (1.2) represents the co-rotational case or long time regime, as discussed in MTZ. If ${Q}^{*}\in V$ satisfies $G({Q}^{*})=0$ then frame-indifference of $G$, interpreted as equivariance (covariance) of $G$ under the action of the rotation group ${\rm SO}(3)$ on $V$, implies that every element ${Q}$ of the ${\rm SO}(3)$ group orbit ${\mathcal{O}}$ of $Q^{*}$ also satisfies $G({Q})=0$. If moreover $[W,{Q}^{*}]=0$ then $F({Q}^{*},0)=0$ and so ${Q}^{*}$ is an equilibrium for (1.2): the rotational component of the shear flow leaves ${Q}^{*}$ fixed. This implies that ${Q}^{*}$ has two equal eigenvalues, and if these are less than the third (principal) eigenvalue then ${Q}^{*}$ represents a logrolling regime. 
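The logrolling observation above can be checked directly: for a simple shear flow, a uniaxial $Q^{*}$ whose principal axis lies along the vorticity axis commutes with $W$. The numerical sketch below uses arbitrary parameter values ($\gamma$, $s$) chosen only for illustration:

```python
import numpy as np

# Simple shear u = (gamma*y, 0, 0): velocity gradient and its
# antisymmetric part W (the vorticity tensor); vorticity axis = z.
gamma = 1.0
grad_u = np.array([[0.0, gamma, 0.0],
                   [0.0, 0.0,   0.0],
                   [0.0, 0.0,   0.0]])
W = 0.5 * (grad_u - grad_u.T)

# Uniaxial Q* with principal axis along the vorticity (z) axis:
# two equal eigenvalues, third (principal) one larger -- logrolling.
s = 0.8
e3 = np.array([0.0, 0.0, 1.0])
Q_star = s * (np.outer(e3, e3) - np.eye(3) / 3.0)

commutator = W @ Q_star - Q_star @ W  # [W, Q*]
```

Since $[W,Q^{*}]=0$, the rotational term of (1.2) vanishes at $Q^{*}$ when $\beta=0$; tilting the principal axis into the shear plane gives a nonzero commutator, so such states are carried around the group orbit instead.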
Moreover, $[W,{Q}\,]$ is tangent to ${\mathcal{O}}$ for every $Q\in{\mathcal{O}}$ with $Q\neq{Q}^{*}$, and so ${\mathcal{O}}$ (which is topologically a copy of ${\mathbb{R}}{\mathrm{P}}^{2}$) is an invariant manifold for the flow on $V$ generated by (1.2) when $\beta=0$. The dynamical orbit of every such $Q\in{\mathcal{O}}$ is periodic, as it coincides with the group orbit of rotations about the axis orthogonal to the shear plane: in the language of equivariant dynamics CL, FDS, HPF it is a relative equilibrium. All of these periodic orbits represent kayaking regimes, except for a unique orbit representing tumbling, and they are neutrally stable with respect to the dynamics on ${\mathcal{O}}$, as also is the logrolling equilibrium ${Q}^{*}$. We discuss this geometry of the ${\rm SO}(3)$-action on $V$ in more detail below; it plays a central role in what follows, as it must do in any global study of the system (1.2), an observation of course recognised by other authors ForestEtAl2002, MacM92a; MacM92b. There are a few rigorous mathematical proofs of the existence of tumbling limit cycle orbits under restrictive assumptions. By positing 2D rods, both with a tensor model LeeEtAl06 and with the stochastic ODE LelievreLeBris11, proofs follow from the Poincaré-Bendixson theorem; for 3D rods with a tensor model the proof in Chillingworth2001 uses geometric arguments on in-plane tensors. Until now, there has been no proof of existence of (stable) kayaking orbits, and the purpose of this paper is to provide a proof for second-moment tensor models (1.2), (1.5) at low rates of molecular interaction (although not necessarily low shear rates). We thus consider a dynamical regime different from those considered by other authors in numerical simulations such as RienackerEtAl2002b, ForestWang2003. 
A regime analogous to ours is considered in the theoretical work MacM92a; MacM92b using very similar methods, but in that case the molecules are assumed biaxial and it is equilibria rather than periodic orbits that are sought. The approach we take is to regard $\beta$ as a small parameter and view (1.2) as a perturbation of the co-rotational case. This enables us to use tools from equivariant bifurcation theory CL, GSS, HPF, Satt1978; Satt1979 and in particular Lyapunov-Schmidt reduction over the group orbit ${\mathcal{O}}$ to obtain criteria for the persistence or otherwise of the periodic orbits of the co-rotational case after perturbation, and to determine the stability or otherwise of the resulting logrolling, tumbling and kayaking dynamics. Our general results are independent of the choice of the interaction field $G$, provided that it is frame-indifferent and that the logrolling state is an equilibrium: $G({Q}^{*})=0$ (Assumptions 1 and 2 in Section 2), and also that the eigenvalues $\lambda,\mu$ of the linearisation of $G$ at ${Q}^{*}$ normal to ${\mathcal{O}}$ are real and nonzero (Assumption 3 in Section LABEL:s:pert). In addition we require a natural condition of frame-indifference for the perturbing field $\bm{L}({Q})D$ (Assumption 4 in Section LABEL:s:pert). Finally, the stability results require $\lambda,\mu<0$ (Assumption 5 in Section LABEL:s:zeros). However, our methods do not allow us to make deductions when $\beta$ is large compared with the rotational coefficient $\omega$. Other limit cycles are possible, and indeed are routinely observed numerically. Our main result is Theorem LABEL:t:stability1 with Remark LABEL:r:klammu, showing that the existence of a limit cycle kayaking orbit after perturbation depends on the ratio $\lambda/\mu$ as well as the size of the product $\lambda\mu$ relative to the rotation coefficient $\omega$. 
We show also in Corollary LABEL:c:stabsum that for the Beris-Edwards and Olmsted-Goldbart models the kayaking orbit is linearly stable without further assumption. This paper is organised as follows. In Section 2 we discuss symmetries of the model and key features of the action of ${\rm SO}(3)$ on $V$ that it inherits from the usual action on ${\mathbb{R}}^{3}$. Of particular importance are the tangent and normal subspaces to the group orbit ${\mathcal{O}}$. Section LABEL:s:pert gives initial results showing the persistence of logrolling and tumbling regimes after perturbation, and introduces the rotating coordinate system convenient for further analysis. In Section LABEL:s:poincare a natural Poincaré section for the (dynamical) flow near ${\mathcal{O}}$ is described and relevant first-order derivatives of the associated Poincaré map are calculated and shown to vanish. Lyapunov-Schmidt reduction is applied in Section LABEL:s:lsred to obtain a real-valued bifurcation function defined on a meridian of ${\mathcal{O}}$. This function happens to vanish to first order in $\beta$ and so we are obliged to pursue the $\beta$-expansion to second order. In Section LABEL:s:explicit we choose $\bm{L}({Q})D$ explicitly as (1.5) and evaluate these second-order terms. Finally, in Section LABEL:s:zeros the zeros of the bifurcation function are found and the conditions for existence and stability of kayaking motion are determined. For the specific cases of the Beris-Edwards and Olmsted-Goldbart models with Landau-de Gennes free energy the criteria for existence and stability of kayaking orbits are stated explicitly. Following a brief concluding section there are Appendices giving some technical results arising from symmetries that simplify the main calculations, as well as a discussion of how a fully general form of the molecular alignment term $\bm{L}({Q})D$ fits into the framework of our analysis. 
## 2 Geometry and symmetries of the system

The molecular interaction field $G$ is independent of the coordinate frame and therefore equivariant (covariant) with respect to the action of the rotation group ${\rm SO}(3)$ on $V$ by conjugation induced from the natural action on ${\mathbb{R}}^{3}$. Therefore our first working assumption in this paper is the following. Assumption 1: $\widetilde{R}G({Q})=G(\widetilde{R}{Q})$ for all ${Q}\in V$ and $R\in{\rm SO}(3)$ where we use the notation $\widetilde{R}{Q}:=R{Q}R^{-1}.$ Further discussion of equivariant maps, in particular relating to the action of ${\rm SO}(3)$ on $V$ that we shall use extensively in this paper, is given in Appendix LABEL:s:emvf. Choosing coordinates $(x,y,z)\in{\mathbb{R}}^{3}$ so that the shear flow velocity field has the form $k(y,0,0)$ for constant $k\neq 0$, the velocity gradient tensor is $k\begin{pmatrix}0&1&0\\\ 0&0&0\\\ 0&0&0\\\ \end{pmatrix}$ with symmetric and anti-symmetric parts $kD/2$ and $-kW/2$ respectively, where $D=\begin{pmatrix}0&1&0\\\ 1&0&0\\\ 0&0&0\\\ \end{pmatrix}\,,\qquad W=\begin{pmatrix}0&-1&0\\\ 1&0&0\\\ 0&0&0\\\ \end{pmatrix}.$ (2.1) Without loss of generality we take $k=2$ since the coefficients $\omega$ and $\beta$ in (1.2) are at present arbitrary. The rotational component $W$ corresponds to infinitesimal rotation about the $z$-axis. A nonzero matrix ${Q}\in V$ is called uniaxial if it has two equal eigenvalues less than the third, in which case it is invariant under rotations about the axis determined by the third eigenvalue. Matrices with three distinct eigenvalues are biaxial. In this paper an important role is played by the uniaxial matrix ${Q}^{*}:=a\begin{pmatrix}-1&0&\,0\,\\\ 0&-1&\,0\,\\\ 0&0&\,2\,\end{pmatrix}$ (2.2) where $0<a<1/3$, for which the principal axis (largest eigenvalue) is the $z$-axis and about which ${Q}^{*}$ is rotationally invariant. 
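The decomposition of the velocity gradient into its symmetric part $kD/2$ and antisymmetric part $-kW/2$ can be checked in a few lines (NumPy sketch; the value of $k$ is the normalisation adopted in the text):

```python
import numpy as np

k = 2.0  # the normalisation adopted in the text
grad_u = k * np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])  # gradient of k(y,0,0)
D = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

sym = (grad_u + grad_u.T) / 2.0
antisym = (grad_u - grad_u.T) / 2.0
assert np.allclose(sym, k * D / 2.0)       # rate-of-strain part, cf. (2.1)
assert np.allclose(antisym, -k * W / 2.0)  # vorticity part: infinitesimal z-rotation
```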
We take $a>0$ to ensure that ${Q}^{*}$ is uniaxial, and the upper bound on $a$ is imposed for physical reasons since the second moment of the probability distribution defining the ${Q}$-tensor has eigenvalues in the interval $[0,1]$ and so those of ${Q}$ are no greater than $2/3$: see BMJ for example. We exclude $a=1/3$ as we shall need to work in a neighbourhood of ${Q}^{*}$. Our second underlying assumption is that this phase is an equilibrium for the system (1.2) in the absence of flow, that is when $\omega=\beta=0$. In other words Assumption 2: The coefficient $a$ is such that $G({Q}^{*})=0$. With this assumption, the equivariance property of $G$ implies that $G$ vanishes on the entire ${\rm SO}(3)$-orbit ${\mathcal{O}}$ of ${Q}^{*}$ in $V$, and ${\mathcal{O}}$ is an invariant manifold for the flow on $V$ generated by (1.2) with $\beta=0$. The dynamical orbits on ${\mathcal{O}}$ coincide with the group orbits of rotation about the $z$-axis under which ${Q}^{*}$ remains fixed, this being the only fixed point on ${\mathcal{O}}$ since if ${Q}\in{\mathcal{O}}$ and $[W,Q\,]=0$ then ${Q}$ is a scalar multiple of and hence equal to ${Q}^{*}$.

### 2.1 Rotation coordinates: the Veronese map

For calculation purposes it is natural and convenient to take coordinates in $V$ geometrically adapted to ${\mathcal{O}}$. We do this in a standard way by representing the orbit ${\mathcal{O}}$ of ${Q}^{*}$ as the image of the unit sphere ${\mathbb{S}}^{2}\subset{\mathbb{R}}^{3}$ under the map ${\mathcal{V}}:{\mathbb{R}}^{3}\to V:{\mathbf{z}}\mapsto a(3{\mathbf{z}}{\mathbf{z}}^{\mathtt{t}}-|{\mathbf{z}}|^{2}I)$ where again t denotes matrix (or vector) transpose. Here ${\mathcal{V}}$ is the projection to $V$ of the case $n=3$ of the more general Veronese map construction ${\mathbb{R}}^{n}\to{\mathbb{R}}^{m}$ with $m=\binom{n+1}{2}$ and it represents ${\mathcal{O}}$ as a Veronese surface in ${\mathbb{R}}^{5}$: see for example GHAG or HAG. 
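A numerical sketch of the Veronese map (NumPy; the naming is ours) confirms that ${\mathcal{V}}({\mathbf{e}}_{3})={Q}^{*}$, that ${\mathcal{V}}(-{\mathbf{z}})={\mathcal{V}}({\mathbf{z}})$, and that ${\mathcal{V}}$ intertwines the two ${\rm SO}(3)$-actions:

```python
import numpy as np

a = 0.2  # any value in (0, 1/3)

def veronese(z):
    """V(z) = a (3 z z^t - |z|^2 I)."""
    z = np.asarray(z, dtype=float)
    return a * (3.0 * np.outer(z, z) - z.dot(z) * np.eye(3))

Q_star = a * np.diag([-1., -1., 2.])
assert np.allclose(veronese([0., 0., 1.]), Q_star)  # V(e3) = Q*, cf. (2.2)

z = np.array([0.6, 0.0, 0.8])
assert np.allclose(veronese(-z), veronese(z))       # restriction to S^2 double-covers O

# equivariance under a sample rotation R: V(Rz) = R V(z) R^t
t = 0.7
R = np.array([[np.cos(t), -np.sin(t), 0.],
              [np.sin(t),  np.cos(t), 0.],
              [0.,         0.,        1.]])
assert np.allclose(veronese(R @ z), R @ veronese(z) @ R.T)
```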
It is straightforward to check that ${\mathcal{V}}$ is equivariant with respect to the actions of ${\rm SO}(3)$ on ${\mathbb{R}}^{3}$ and $V$, that is if $R\in{\rm SO}(3)$ then ${\mathcal{V}}(R{\mathbf{z}})=\widetilde{R}{\mathcal{V}}({\mathbf{z}})$ (2.3) for all ${\mathbf{z}}\in{\mathbb{R}}^{3}$. Note that ${Q}^{*}={\mathcal{V}}({\mathbf{e}}_{3})$ where $\\{{\mathbf{e}}_{1},{\mathbf{e}}_{2},{\mathbf{e}}_{3}\\}$ is the standard basis in ${\mathbb{R}}^{3}$, and that ${\mathcal{V}}({\mathbf{e}}_{1})$ and ${\mathcal{V}}({\mathbf{e}}_{2})$ are obtained from ${Q}^{*}$ by permutation of the diagonal terms. On $V$ we have a standard inner product given by $\left<H,K\right>={\rm tr}(H^{\mathtt{t}}K)={\rm tr}(HK)$. However, the Veronese map is quadratic and does not preserve inner products. Nevertheless, up to a constant factor, its derivative does preserve inner products on tangent vectors to ${\mathbb{S}}^{2}$. Explicitly ${\mathrm{D}}{\mathcal{V}}({\mathbf{z}}):{\mathbf{u}}\mapsto a(3{\mathbf{z}}{\mathbf{u}}^{\mathtt{t}}+3{\mathbf{u}}{\mathbf{z}}^{\mathtt{t}}-2{\mathbf{z}}\cdot{\mathbf{u}}\,I)$ (2.4) with the dot denoting the usual inner product in ${\mathbb{R}}^{3}$, from which it follows that for ${\mathbf{z}}\in{\mathbb{S}}^{2}$ and ${\mathbf{u}},{\mathbf{v}}\in{\mathbb{R}}^{3}$ orthogonal to ${\mathbf{z}}$ $\displaystyle{\mathrm{D}}{\mathcal{V}}({\mathbf{z}}){\mathbf{u}}\cdot{\mathrm{D}}{\mathcal{V}}({\mathbf{z}}){\mathbf{v}}$ $\displaystyle=a^{2}{\rm tr}\bigl{(}(3{\mathbf{z}}{\mathbf{u}}^{\mathtt{t}}+3{\mathbf{u}}{\mathbf{z}}^{\mathtt{t}}-2{\mathbf{z}}\cdot{\mathbf{u}}\,I)(3{\mathbf{z}}{\mathbf{v}}^{\mathtt{t}}+3{\mathbf{v}}{\mathbf{z}}^{\mathtt{t}}-2{\mathbf{z}}\cdot{\mathbf{v}}\,I)\bigr{)}$ $\displaystyle=18a^{2}{\rm tr}({\mathbf{z}}{\mathbf{u}}^{\mathtt{t}}{\mathbf{v}}{\mathbf{z}}^{\mathtt{t}})=18a^{2}\,{\mathbf{u}}\cdot{\mathbf{v}}.$ (2.5) Observe that the restriction of ${\mathcal{V}}$ to ${\mathbb{S}}^{2}$ is a double cover ${\mathbb{S}}^{2}\to{\mathcal{O}}$ since 
${\mathcal{V}}(-{\mathbf{z}})={\mathcal{V}}({\mathbf{z}})$ for all ${\mathbf{z}}\in{\mathbb{R}}^{3}$. Through ${\mathcal{V}}$ the familiar latitude and longitude coordinates on ${\mathbb{S}}^{2}$ go over to a corresponding coordinate system on ${\mathcal{O}}$. Any ${\mathbf{z}}\neq{\mathbf{e}}_{3}\in{\mathbb{S}}^{2}$ can be written using spherical coordinates as ${\mathbf{z}}=R_{\mathbf{z}}{\mathbf{e}}_{3}=R_{3}(\phi)R_{2}(\theta)\,{\mathbf{e}}_{3}$ (2.6) for unique $\theta\bmod\pi$ and $\phi\bmod 2\pi$, where $R_{j}(\psi)$ denotes rotation by angle $\psi$ around the $j$th axis in ${\mathbb{R}}^{3}$, $j=1,2,3$, so that in particular $R_{2}(\theta)=\begin{pmatrix}\cos\theta&0&\sin\theta\\\ 0&1&0\\\ -\sin\theta&0&\cos\theta\end{pmatrix},\quad R_{3}(\phi)=\begin{pmatrix}\cos\phi&-\sin\phi&0\\\ \sin\phi&\cos\phi&0\\\ 0&0&1\end{pmatrix}.$ Hence by (2.6) and equivariance (2.3) any $Z\in{\mathcal{O}}$ can be written (not uniquely) as $Z={\mathcal{V}}({\mathbf{z}})=\widetilde{R}_{\mathbf{z}}{Q}^{*}=\widetilde{R}_{3}(\phi)\widetilde{R}_{2}(\theta)Q^{*}=:Z(\theta,\phi)$ (2.7) for some ${\mathbf{z}}\in{\mathbb{S}}^{2}$, as the counterpart of (2.6) using rotations $\widetilde{R}$ on $V$ in place of $R$ on ${\mathbb{R}}^{3}$. We shall make frequent use of this notation throughout the paper. By analogy with ${\mathbb{S}}^{2}$ we call each closed curve $\theta={\rm const}\neq 0\bmod\pi$ on ${\mathcal{O}}$ a latitude curve and each curve $\phi={\rm const}$ on ${\mathcal{O}}$ a meridian. It follows from (2.5) that all latitude curves are orthogonal to all meridians. The case $\theta=0\bmod\pi$ corresponds to $Q^{*}$, and so we think of $Q^{*}$ as the north pole of ${\mathcal{O}}\cong{\mathbb{R}}{\mathrm{P}}^{2}$.

###### Remark 1

The expression (2.6) provides the standard spherical coordinates on ${\mathbb{S}}^{2}$. 
Standard Euler angle coordinates on ${\rm SO}(3)$ are obtained as the composition of three rotation matrices; the Veronese coordinates for ${\mathcal{O}}$ provided by (2.7) are obtained by disregarding one of those rotations.

### 2.2 Isotypic decomposition

The rotation symmetry of ${\mathcal{O}}$ about the north pole ${Q}^{*}$ plays a fundamental role in our analysis of (1.2) for sufficiently small nonzero $\beta$, and enables us to choose coordinates in $V$ that are strongly adapted to the inherent geometry of the problem. More generally, for any ${\mathbf{z}}\in{\mathbb{S}}^{2}$ let $\Sigma_{\mathbf{z}}=\\{R\in{\rm SO}(3):R{\mathbf{z}}={\mathbf{z}}\\}\cong{\rm SO}(2)\subset{\rm SO}(3)$ denote the isotropy subgroup of ${\mathbf{z}}$ (namely the group of rotations about the ${\mathbf{z}}$-axis) under the natural action of ${\rm SO}(3)$ on ${\mathbb{R}}^{3}$. Equivariance of ${\mathcal{V}}$ implies that $\Sigma_{\mathbf{z}}$ also fixes $Z={\mathcal{V}}({\mathbf{z}})$ in ${\mathcal{O}}$ under the conjugacy action, and moreover $Z$ is an isolated fixed point of $\Sigma_{\mathbf{z}}$ on ${\mathcal{O}}$ since ${\mathbf{z}}$ is an isolated fixed point of $\Sigma_{\mathbf{z}}$ on ${\mathbb{S}}^{2}$. At this point it is convenient to develop some further machinery from the theory of linear group actions to describe key features of the geometry highly relevant to our analysis. Introductions to the theory of group actions and orbit structures can be found for example in ABS, CHO, MZH. 
We shall make much use of the further fact that corresponding to the action of $\Sigma_{\mathbf{z}}$ on $V$ there is an isotypic decomposition of $V$ (for theoretical background to this notion see for example CL, FDS, GSS) into the direct sum of three $\Sigma_{\mathbf{z}}$-invariant subspaces $V=V_{0}^{Z}\oplus V_{1}^{Z}\oplus V_{2}^{Z}$ (2.8) on each of which $\Sigma_{\mathbf{z}}$ acts differently: the element $R_{\mathbf{z}}(\psi)\in\Sigma_{\mathbf{z}}$ denoting rotation about the ${\mathbf{z}}$-direction through angle $\psi$ acts on $V_{k}^{Z}$ by rotation through $k\psi$ for $k=0,1,2$. In particular, with ${\mathbf{z}}={\mathbf{e}}_{3}$ and $Z=Q^{*}$, writing $V_{k}^{*}=V_{k}^{Q^{*}}$ we have $\displaystyle V_{0}^{*}$ $\displaystyle:={\rm span}\\{E_{0}\\}$ (2.9) $\displaystyle V_{1}^{*}$ $\displaystyle:={\rm span}\\{E_{1}(\alpha)\\}_{\alpha\in[0,2\pi)}$ (2.10) $\displaystyle V_{2}^{*}$ $\displaystyle:={\rm span}\\{E_{2}(\alpha)\\}_{\alpha\in[0,\pi)}$ (2.11) where the mutually orthogonal matrices $E_{0},E_{1}(\alpha),E_{2}(\alpha)$ are given by $\displaystyle E_{0}:=\frac{1}{a\sqrt{6}}Q^{*},$ $\displaystyle E_{1}(\alpha)$ $\displaystyle:=\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&\cos\alpha\\\ 0&0&\sin\alpha\\\ \cos\alpha&\sin\alpha&0\end{pmatrix},\quad E_{2}(\alpha):=\frac{1}{\sqrt{2}}\begin{pmatrix}\cos 2\alpha&\sin 2\alpha&0\\\ \sin 2\alpha&-\cos 2\alpha&0\\\ 0&0&0\end{pmatrix}$ (2.12) and we set $\displaystyle E_{11}=E_{1}(0),\quad E_{12}=E_{1}(\pi/2),\quad E_{21}=E_{2}(0),\quad E_{22}=E_{2}(\pi/4).$ (2.13) Here $R_{3}(\phi)$ acts on $V_{1}^{*}$ and $V_{2}^{*}$ by $\widetilde{R}_{3}(\phi)E_{1}(\alpha)=E_{1}(\alpha+\phi),\quad\widetilde{R}_{3}(\phi)E_{2}(\alpha)=E_{2}(\alpha+\phi)$ (2.14) where we keep in mind that $E_{2}(\alpha)$ is defined in terms of $2\alpha$. 
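The action (2.14) and the orthonormality of the basis matrices in (2.12) are easy to confirm numerically (NumPy sketch; the naming is ours, and the angles are arbitrary test values):

```python
import numpy as np

def E1(al):
    c, s = np.cos(al), np.sin(al)
    return np.array([[0., 0., c], [0., 0., s], [c, s, 0.]]) / np.sqrt(2)

def E2(al):
    c, s = np.cos(2 * al), np.sin(2 * al)
    return np.array([[c, s, 0.], [s, -c, 0.], [0., 0., 0.]]) / np.sqrt(2)

def R3(p):
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

phi, al = 0.9, 0.4
R = R3(phi)
# conjugation by R3(phi) shifts alpha by phi on both V1* and V2*, cf. (2.14)
assert np.allclose(R @ E1(al) @ R.T, E1(al + phi))
assert np.allclose(R @ E2(al) @ R.T, E2(al + phi))
# the basis matrices are orthonormal for <H,K> = tr(HK)
assert np.isclose(np.trace(E1(al) @ E1(al)), 1.0)
assert np.isclose(np.trace(E2(al) @ E2(al)), 1.0)
assert np.isclose(np.trace(E1(al) @ E2(al)), 0.0)
```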
For $Z=Z(\theta,\phi)$ as in (2.7) we use the notation $E_{1}^{Z}(\alpha)=\widetilde{R}_{3}(\phi)\widetilde{R}_{2}(\theta)E_{1}(\alpha),\quad E_{2}^{Z}(\alpha)=\widetilde{R}_{3}(\phi)\widetilde{R}_{2}(\theta)E_{2}(\alpha)$ (2.15) and $E^{Z}_{ij}=\widetilde{R}_{3}(\phi)\widetilde{R}_{2}(\theta)E_{ij},\quad i,j\in\\{1,2\\}$ (2.16) so that $\displaystyle V_{0}^{Z}$ $\displaystyle={\rm span}\\{E_{0}^{Z}\\}$ $\displaystyle V_{1}^{Z}$ $\displaystyle={\rm span}\\{E_{1}^{Z}(\alpha)\\}_{\alpha\in[\,0,2\pi)}={\rm span}\\{E_{11}^{Z},\,E_{12}^{Z}\\}$ $\displaystyle V_{2}^{Z}$ $\displaystyle={\rm span}\\{E_{2}^{Z}(\alpha)\\}_{\alpha\in[\,0,\pi)}={\rm span}\\{E_{21}^{Z},E_{22}^{Z}\\}\,.$ A consequence of ${\rm SO}(3)$-equivariance is that for $Z\in{\mathcal{O}}$ the derivative ${\mathrm{D}}G(Z):V\to V$ respects the decomposition (2.8) and commutes with the $\Sigma_{\mathbf{z}}$-rotations on each component. A further important consequence that simplifies several later calculations is the following.

###### Proposition 2.1

If a differentiable function $f:V\to{\mathbb{R}}$ is invariant under the action of $\Sigma_{\mathbf{z}}$ then its derivative ${\mathrm{D}}f(Z):V\to{\mathbb{R}}$ annihilates $V_{1}^{Z}\oplus V_{2}^{Z}$.

###### Proof

If $f(\widetilde{R}{Q})=f({Q})$ for all $R\in\Sigma_{\mathbf{z}}$ and ${Q}\in V$ then ${\mathrm{D}}f(\widetilde{R}{Q})\widetilde{R}={\mathrm{D}}f({Q})$ and so in particular ${\mathrm{D}}f(Z)\widetilde{R}={\mathrm{D}}f(Z)$ for all $R\in\Sigma_{\mathbf{z}}$. The only linear map $V\to{\mathbb{R}}$ invariant under all rotations of $V_{1}^{Z}$ and of $V_{2}^{Z}$ must be zero on those components. ∎

### 2.3 Alignment relative to the flow

Since the element $R_{3}(\pi)\in{\rm SO}(3)$ acts on $V_{k}^{*}$ by a rotation through $k\pi$ it follows that $V_{0}^{*}\oplus V_{2}^{*}$ is precisely the fixed-point space for the action of $R_{3}(\pi)$ on $V$. 
Thus ${Q}=(q_{ij})\in V$ is fixed by $\widetilde{R}_{3}(\pi)$ if and only if $q_{13}=q_{23}=0$, in which case $q_{33}$ is an eigenvalue with eigenspace the $z$-axis and the other eigenspaces lie in (or coincide with) the $x,y$-plane. It is immediate to check that if ${Q}=pE_{0}+qE_{2}(\alpha)$ then the eigenvalues of ${Q}$ are $2p/\sqrt{6}$ and $(-p\pm\sqrt{3}q)/\sqrt{6}$ and so ${Q}$ has two equal eigenvalues precisely when $q=0\quad\text{or}\quad q=\pm\sqrt{3}p.$ (2.17) In the first case ${Q}=pE_{0}$, while in the second case the eigenvalues are $2p/\sqrt{6}$ (repeated) and $-4p/\sqrt{6}$ so that if $p<0$ then ${Q}$ is uniaxial with principal axis lying in the $x,y$-plane. From the point of view of the liquid crystal orientation relative to the shear flow such matrices ${Q}$ are called in-plane; nonzero matrices which are not in-plane are called out-of-plane. This agrees with standard terminology where tumbling and wagging dynamical regimes are described as in-plane (see FaraoniEtAl1999, RienHess for example), while logrolling and kayaking are out-of-plane. Let $C$ denote the equator $\\{\theta=\pi/2\\}$ of ${\mathbb{S}}^{2}$, and let ${\mathcal{C}}={\mathcal{V}}(C)\subset{\mathcal{O}}$ which we also call the equator of ${\mathcal{O}}$. It is straightforward to check that $\displaystyle{\mathcal{C}}$ $\displaystyle=\\{{\mathcal{V}}(\cos\phi,\sin\phi,0):0\leq\phi<2\pi\\}$ $\displaystyle=a\sqrt{6}\\{\cos\tfrac{2\pi}{3}\,E_{0}+\sin\tfrac{2\pi}{3}\,E_{2}(\phi):0\leq\phi<2\pi\\}\subset{\mathcal{O}}\subset V.$ (2.18)

###### Proposition 2.2

${\mathcal{O}}\cap(V_{0}^{*}\oplus V_{2}^{*})=\\{{Q}^{*}\\}\cup{\mathcal{C}}.$

###### Proof

Since $V_{0}^{*}\oplus V_{2}^{*}$ is the orthogonal complement to $V_{1}^{*}$ we see ${Q}\in V_{0}^{*}\oplus V_{2}^{*}$ if and only if $\left<{Q},E_{1}(\alpha)\right>=0$ for all $\alpha$. 
If $Z={\mathcal{V}}({\mathbf{z}})\in{\mathcal{O}}$ then $\left<Z,E_{1}(\alpha)\right>=3a\,{\rm tr}({\mathbf{z}}{\mathbf{z}}^{\mathtt{t}}E_{1}(\alpha))=3a{\mathbf{z}}\cdot E_{1}(\alpha){\mathbf{z}}.$ With ${\mathbf{z}}=(\cos\phi\sin\theta,\sin\phi\sin\theta,\cos\theta)^{\mathtt{t}}$ in usual spherical coordinates we find ${\mathbf{z}}\cdot E_{1}(\alpha){\mathbf{z}}=(1/\sqrt{2})\sin 2\theta\cos(\phi-\alpha)$ which vanishes for all $\alpha$ just when $\sin 2\theta=0$, that is $\theta=0$ or $\theta=\pi/2$ corresponding to $Z={Q}^{*}$ or $Z\in{\mathcal{C}}$ respectively. ∎ When $\beta=0$ the equation (1.2) reduces on ${\mathcal{O}}$ to $\frac{{\mathrm{d}}{Q}}{{\mathrm{d}}t}=\omega[W,{Q}\,]$ since $G({Q})=0$ for ${Q}\in{\mathcal{O}}$, giving solution curves $t\mapsto\widetilde{R}_{3}(\omega t){Q}$ each of which has least period $2\pi/\omega$ apart from the equilibrium $Q^{*}$ and the equator ${\mathcal{C}}$: this has least period $\pi/\omega$, the equator $C$ of ${\mathbb{S}}^{2}$ being a double cover of ${\mathcal{C}}$ via the Veronese map. A matrix ${Q}\in{\mathcal{C}}$ is in-plane and its dynamical orbit corresponds to steady rotation of period $\pi/\omega$ about the origin in the shear plane, and so ${\mathcal{C}}$ represents a tumbling orbit. All latitude curves of ${\mathcal{O}}$ other than the equator ${\mathcal{C}}$ represent kayaking orbits of period $T_{0}=2\pi/\omega$ and of neutral stability on ${\mathcal{O}}$ and so most of them are unlikely to persist for $\beta\neq 0$. The geometry can be visualised as follows: removing the poles at ${\mathbf{z}}=\pm{\mathbf{e}}_{3}$ from ${\mathbb{S}}^{2}$ leaves an (open) annulus foliated by circles of latitude, so that removing ${Q}^{*}$ from ${\mathcal{O}}$ leaves a Möbius strip foliated by closed latitude curves each of which traverses the strip twice since $Z(\pi/2+\theta,\phi)=Z(\pi/2-\theta,\phi+\pi)$, except for the ‘central curve’ ${\mathcal{C}}$, given by $\theta=0$ in this shifted parameter, which traverses it only once. 
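The eigenvalue computation behind (2.17), and the double-cover origin of the tumbling period $\pi/\omega$, can both be verified directly (NumPy sketch; the numerical values of $a$, $p$, $q$, $\alpha$ are illustrative):

```python
import numpy as np

a, p, q, al = 0.2, -0.5, 0.7, 0.3
E0 = np.diag([-1., -1., 2.]) / np.sqrt(6)
c, s = np.cos(2 * al), np.sin(2 * al)
E2 = np.array([[c, s, 0.], [s, -c, 0.], [0., 0., 0.]]) / np.sqrt(2)

# eigenvalues of Q = p E0 + q E2(alpha) are 2p/sqrt(6) and (-p +/- sqrt(3) q)/sqrt(6)
Q = p * E0 + q * E2
ev = np.sort(np.linalg.eigvalsh(Q))
expected = np.sort([2 * p, -p + np.sqrt(3) * q, -p - np.sqrt(3) * q]) / np.sqrt(6)
assert np.allclose(ev, expected)

# on the equator, rotating z by pi returns the same Q-tensor since V(-z) = V(z),
# so the tumbling orbit has least period pi/omega rather than 2 pi/omega
def veronese(z):
    return a * (3.0 * np.outer(z, z) - np.dot(z, z) * np.eye(3))

z = np.array([np.cos(0.3), np.sin(0.3), 0.0])
assert np.allclose(veronese(-z), veronese(z))
```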
### 2.4 Tangent and normal vectors to the group orbit ${\mathcal{O}}$

The 2-dimensional tangent space ${\mathcal{T}}^{Z}$ to ${\mathcal{O}}$ at $Z\in{\mathcal{O}}$ is spanned by infinitesimal rotations of $Z$, that is ${\mathcal{T}}^{Z}={\rm span}\,\\{\,[W_{i},Z],i=1,2,3\,\\}$ where $\frac{{\mathrm{d}}}{{\mathrm{d}}\theta}\widetilde{R}_{i}(\theta)Q\big{|}_{\theta=0}=[W_{i},Q\,]=W_{i}{Q}-{Q}W_{i}$ with $W_{1}=\begin{pmatrix}0\,&0&0\\\ 0\,&0&{-1}\\\ 0\,&{1}&0\end{pmatrix}\,\quad W_{2}=\begin{pmatrix}\,0&0&\,1\\\ \,0&0&\,0\\\ -1\,&0&\,0\end{pmatrix}\,\quad W_{3}=\begin{pmatrix}0&{-1}&0\\\ {1}&0&0\\\ 0&0&0\end{pmatrix}.$ (2.19) However, for $Z\neq{Q}^{*}$ the tangent space ${\mathcal{T}}^{Z}$ is also spanned by the tangents at $Z$ to the meridian and latitude curve of ${\mathcal{O}}$ through $Z$.

###### Lemma 1

Let $Z\in{\mathcal{O}}$ with $Z\neq{Q}^{*}$. The ($1$-dimensional) tangent spaces at $Z$ to the meridian and latitude curve of ${\mathcal{O}}$ through $Z$ are spanned by $E_{11}^{Z}$ and $E_{12}^{Z}$ respectively.
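As a sanity check on (2.19) near the north pole (NumPy sketch, our naming): the infinitesimal rotation $W_{3}$ fixes ${Q}^{*}$, while the commutator $[W_{2},{Q}^{*}]$, tangent to the meridian at ${Q}^{*}$, is proportional to $E_{11}$:

```python
import numpy as np

a = 0.2
Q_star = a * np.diag([-1., -1., 2.])
W2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
W3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
E11 = np.array([[0., 0., 1.], [0., 0., 0.], [1., 0., 0.]]) / np.sqrt(2)

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(W3, Q_star), 0.0)  # rotation about the z-axis fixes Q*
assert np.allclose(comm(W2, Q_star), 3.0 * a * np.sqrt(2) * E11)  # meridian tangent ~ E11
```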
# Marginal speed confinement resolves the conflict between correlation and control in natural flocks of birds

Andrea Cavagna1,2, Antonio Culla2,1 (corresponding author, e-mail: a.culla@uniroma1.it), Xiao Feng1,2, Irene Giardina2,1,3, Tomas S. Grigera4,5,6, Willow Kion-Crosby1,2, Stefania Melillo1,2, Giulia Pisegna2,1, Lorena Postiglione1,2, Pablo Villegas1,2 1 Istituto Sistemi Complessi, Consiglio Nazionale delle Ricerche, UOS Sapienza, 00185 Rome, Italy 2 Dipartimento di Fisica, Università Sapienza, 00185 Rome, Italy 3 INFN, Unità di Roma 1, 00185 Rome, Italy 4 Instituto de Física de Líquidos y Sistemas Biológicos CONICET - Universidad Nacional de La Plata, La Plata, Argentina 5 CCT CONICET La Plata, Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina 6 Departamento de Física, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, Argentina

###### Abstract

Speed fluctuations of individual birds within natural flocks are moderate, due to the aerodynamic, energetic and biomechanical constraints of flight. Yet the spatial correlations of such fluctuations are scale-free, namely they have a range as wide as the entire group. Scale-free correlations and limited fluctuations set conflicting constraints on the mechanism controlling the speed of each bird, as the factors boosting correlations tend to amplify fluctuations, and vice versa. Here, using a field-theoretical approach, we demonstrate that a marginal speed confinement that ignores small deviations from the natural reference value while ferociously suppressing larger fluctuations is the only mechanism reconciling scale-free correlations with biologically acceptable flocks’ speed, a result that we confirm through numerical simulations of self-propelled particles in three dimensions. 
We validate the theoretical as well as the numerical predictions of this analysis by comparing our results with field experimental data on starling flocks with group sizes spanning an unprecedented interval of over two orders of magnitude. Since the early stages of the effort to formulate a mathematical description of flocking behaviour, the fundamental dynamical rule common to all theoretical models has been that of local mutual imitation: each individual within the group tends to adjust its state of motion to that of its neighbours reynolds1987flocks ; heppner1990stochastic ; huth1992simulation ; vicsek+al_95 ; couzin+al_02 ; Chate_2008 ; romanczuk2012swarming ; grossmann2013self . This type of imitative behaviour can be either explicitly prescribed by the model through a direct interaction between the animals’ velocities reynolds1987flocks ; huth1992simulation ; vicsek+al_95 ; couzin+al_02 , or it may be an effective interaction emerging from simpler positional rules, such as attraction and repulsion heppner1990stochastic ; romanczuk2012swarming ; grossmann2013self ; perna2014duality , depending on the coarse-graining level we decide to work at. In either case, effective imitation of the local neighbours is the cornerstone of organised flocking dynamics. The early models also assumed that all individuals within the group moved with the same constant speed reynolds1987flocks ; heppner1990stochastic ; huth1992simulation ; vicsek+al_95 ; couzin+al_02 ; Chate_2008 . In that case, mutual imitation requires each particle to only adapt the orientation of its velocity to that of its neighbours. In real flocks, however, the individual speeds fluctuate cavagna+al_10 , hence mutual imitation requires a bird to also adjust its speed to that of its neighbours. In contrast to orientation, though, speed control cannot be left just to mutual imitation, as nothing then would prevent birds from moving in sync at unreasonably large (or small) speeds. 
One therefore needs to devise a control mechanism aimed at keeping the individual speed of each bird in the ballpark of some reference biological value, $v_{0}$, set by species-specific aerodynamic and biomechanical constraints rayner2001aerodynamics . The most straightforward way to confine the individual speeds is through a linear restoring force: whenever the speed $v_{i}$ of particle $i$ deviates from the natural reference value $v_{0}$, it gets ‘pushed back’ proportionally to the deviation. Linear speed control is widely used to study the collective behaviour of the most diverse systems, such as bird flocks bialek+al_14 ; hemelrijk2015scale , fish schools Fish_linear , pedestrian collectives Pedestrian_linear_1 ; Pedestrian_linear_2 ; Pedestrian_linear_3 , robot swarms Drones_linear , and vehicle crowds Vehicles_linear , to name just a few examples. Linear control also captures a crucial experimental trait of bird flocks, namely the fact that speed correlations are scale-free cavagna+al_10 : when the stiffness of the linear force is small enough, the correlation length grows linearly with the group’s size, as happens in real flocks bialek+al_14 , thus ensuring that group fragmentation is very low hemelrijk2015scale . Linear speed control therefore lies at the basis of all current theories of collective behaviour. Here, by using field data on starling flocks, numerical simulations of self-propelled particles and statistical field theory, we will prove that linear speed confinement entails an intrinsic conflict between scale-free correlations and the group’s speed control, which is impossible to resolve. Such a conflict has not been uncovered until now, due to the lack of experimental data on animal groups with a wide-enough size spectrum. At its core, the problem is that, to reproduce long-range correlations, linear control requires a weak speed-confining force, so that the animal’s speed is very loosely confined around its reference value, $v_{0}$. 
When this happens, entropic forces push the typical speed of the group to grow unreasonably larger than the reference natural speed, thus completely destroying the agreement between theory and experiments. We will show that, to resolve this conflict, quite a different speed control mechanism is needed. Figure 1: Experimental evidence on starling flocks. a: Probability distribution of the polarization, $\Phi=(1/N)\big|\sum_{i}\mathbf{v}_{i}/v_{i}\big|$, across all recorded flocks; data clearly indicate that these are highly ordered systems, incompatible with the standard notion of near-criticality. b: The equal-time space correlation function of the speed fluctuations (for the precise definition of the correlation function see Methods), plotted against the distance $r$ between the birds rescaled by the flock’s size $L$, for some typical flocks; the fact that all the curves collapse onto each other indicates that the spatial range of the speed correlation, namely the correlation length $\xi_{\mathrm{sp}}$, scales with $L$, i.e. that the system is scale-free (see also Fig.2a and Fig.2c). c: Probability distribution of the mean speed of birds within a flock, $s=(1/N)\sum_{i}v_{i}$, across all recorded flocks. The average mean speed is $11.9$ m/s, with typical fluctuations of $2.3$ m/s.

Experimental evidence from field observations

We consider $3D$ experimental data on natural flocks of starlings (Sturnus vulgaris) in the field. To the data previously reported by our lab in cavagna+al_08 ; cavagna+al_08b ; attanasi+al_14 , we added new data from our most recent campaign of acquisition conducted in 2019-2020 (see Methods for details of the experiments, and Table S1 in the SI for all biological data in each acquisition). The new data expand the span of the group sizes between $N=10$ and $N=3000$ animals, a wider interval than in any previously reported study. As we shall see, this will be crucial in selecting the correct theory. 
The three main experimental results that are of interest for us here are the following: i) Flocks are highly ordered systems. The polarization, $\Phi=(1/N)\big|\sum_{i}^{N}\mathbf{v}_{i}/v_{i}\big|$ (where $N$ is the total number of birds in the flock, $\mathbf{v}_{i}$ is the vector velocity of bird $i$, and $v_{i}$ is its modulus, i.e. speed), is always quite large, typically above $0.9$ (Fig.1a). This observation rules out the possibility that flocks are close to an ordering transition, hence near-criticality in the standard ferromagnetic sense cannot be invoked to explain the phenomenology of flocks. Instead, these are clearly systems deep into their ordered phase. ii) Speed fluctuations are correlated over long distances, namely their spatial correlation functions are scale-free (Fig.1b) cavagna+al_10 . Having ruled out standard near-criticality because of the previous point, the origin of this trait is puzzling. Unlike the case of orientations, scale-free correlations of the speed cannot be explained as the effect of spontaneously breaking a continuous symmetry goldstone1961field ; in fact, in standard statistical physics systems, fluctuations of the modulus of the order parameter are heavily suppressed in the ordered phase, so they are very much short-range correlated patashinskii_book . iii) Flock-to-flock speed fluctuations are moderate (Fig.1c). The average cruising speed of starlings within a flock is about $12$ meters-per-second (m/s), with typical fluctuations of $2$ m/s cavagna+al_10 . This is also the typical cruising speed of an entire flock, namely $s=(1/N)\sum_{i}v_{i}$, whose distribution is reported in (Fig.1c). Hence, neither the individuals, nor the group, ever cruise at a speed much different from the natural reference value, $v_{0}$. As we shall see, this seemingly puny experimental trait may become tremendously difficult to reconcile with scale-free correlations at the theoretical level. 
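For concreteness, the two observables $\Phi$ and $s$ can be computed for a synthetic, highly ordered flock (NumPy sketch; the noise level and group size are arbitrary choices, not fits to the data):

```python
import numpy as np

rng = np.random.default_rng(0)
N, v0 = 500, 12.0  # group size and reference speed (m/s), illustrative values
# velocities of a well-aligned synthetic flock: a common direction plus small noise
v = v0 * (np.tile([1., 0., 0.], (N, 1)) + 0.05 * rng.standard_normal((N, 3)))

speeds = np.linalg.norm(v, axis=1)
Phi = np.linalg.norm(np.sum(v / speeds[:, None], axis=0)) / N  # polarization
s = speeds.mean()                                              # mean group speed

assert Phi > 0.9           # deep in the ordered phase, as in Fig.1a
assert abs(s - v0) < 1.0   # mean speed close to the reference value
```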
General theory

The reference flocking dynamics we will consider here is the Vicsek model vicsek+al_95 ; vicsek_review ; ginelli2016physics , in which the animals’ velocities interact through an explicit velocity coupling, aimed at describing the effective imitation between neighbouring individuals. This kind of dynamics can be written in a compact way as follows, $\displaystyle\frac{d\mathbf{v}_{i}}{dt}$ $\displaystyle=$ $\displaystyle-\frac{\partial H}{\partial\mathbf{v}_{i}}+\bm{\eta}_{i}$ (1) $\displaystyle\frac{d\mathbf{x}_{i}}{dt}$ $\displaystyle=$ $\displaystyle\mathbf{v}_{i}\ ,$ (2) where $\bm{\eta}_{i}$ is a white noise with strength proportional to $T$, a parameter playing the role of an effective temperature in the statistical physics context, namely $\langle\bm{\eta}_{i}(t)\bm{\eta}_{j}(t^{\prime})\rangle=2dT\delta_{ij}\delta(t-t^{\prime})$; $H$ is a cost function (or effective Hamiltonian), whose derivative with respect to $\mathbf{v}_{i}$ represents the social force acting on the particle’s velocity. (The effective friction coefficient in front of $\dot{\mathbf{v}}_{i}$ in (1) can be set to $1$ through an appropriate rescaling of time zwanzig_book .) At a fairly general level, we can write, $H=\frac{1}{2}J\sum_{i,j}^{N}n_{ij}(\mathbf{v}_{i}-\mathbf{v}_{j})^{2}+\sum_{i}^{N}V(\mathbf{v_{i}})\ ,$ (3) where the first term represents the imitation interaction between particles’ velocities, having strength $J$, and the second term is the speed control term, which affects each particle independently. The adjacency matrix, $n_{ij}$, is $1$ for interacting neighbours and $0$ otherwise. The self-propulsion part of the dynamics, (2), implies that the interaction network depends on time, $n_{ij}=n_{ij}(t)$. In the original Vicsek model the speed of the particles is kept constant, $|\mathbf{v}_{i}|=v_{0}$.
Here we want to study speed fluctuations and their correlations, hence we relax the hard Vicsek constraint and use the confining potential, $V(\mathbf{v_{i}})$, to keep the speed of each particle confined around the natural reference value, $v_{\mathrm{0}}$. Note that the actual speed of a bird will be the product of the interplay between the individual confining force and the collective imitation among the birds.

Linear speed control

The simplest control, and indeed the one used in virtually all models with fluctuating speed to date, consists of a Gaussian potential confining the speed, $V(\mathbf{v_{i}})=g\,(v_{i}-v_{0})^{2}\ ,$ (4) where $v_{i}=|\mathbf{v_{i}}|$. This potential generates a linear restoring force acting on the speed in the equation of motion (1), hence it is called linear speed control. The constant $g$ is the stiffness of the restoring force, and it can be interpreted as the elastic constant of a spring keeping the speed around its natural reference value, $v_{0}$. The correlation length $\xi_{\mathrm{sp}}$ of speed fluctuations in the case of linear speed control has been worked out in bialek+al_14 , $\xi_{\mathrm{sp}}=r_{1}\left(\frac{Jn_{c}}{g}\right)^{1/2}\ ,$ (5) where $r_{1}$ is the mean inter-particle distance and $n_{c}$ is the average number of interacting nearest-neighbours. The explanation of (5) is simple: the theory defined by (3) and (4) has a critical point at $g=0$, where the correlation length diverges. Conversely, large values of the speed stiffness $g$ suppress the range of speed correlations bialek+al_14 . To have scale-free correlations with linear control, then, it is sufficient to have $\xi_{\mathrm{sp}}\gg L_{\mathrm{max}}$ (where $L_{\mathrm{max}}$ is the size of the largest flock in the dataset), that is, the stiffness $g$ must be smaller than $1/L_{\mathrm{max}}^{2}$ (see the SI for more details on the bounds on $g$ in the linear theory).
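Equation (5) and the resulting bound $g<1/L_{\mathrm{max}}^{2}$ are easy to check numerically; the sketch below uses illustrative parameter values ($r_{1}=1$, $J=1$, $n_{c}=6$, $L_{\mathrm{max}}=100$), which are assumptions for demonstration, not values fitted to the data.

```python
import numpy as np

def xi_speed_linear(r1, J, n_c, g):
    """Speed correlation length under linear control, eq. (5): xi = r1 sqrt(J n_c / g)."""
    return r1 * np.sqrt(J * n_c / g)

L_max = 100.0
g_weak = 0.5 / L_max**2     # below the 1/L_max^2 bound
g_strong = 100.0 / L_max**2  # above the bound

# Weak stiffness: xi exceeds the largest flock size, i.e. correlations look
# scale-free over the whole dataset; strong stiffness: correlations saturate.
print(xi_speed_linear(1.0, 1.0, 6, g_weak) > L_max)    # True
print(xi_speed_linear(1.0, 1.0, 6, g_strong) > L_max)  # False
```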
This theoretical scenario is confirmed in Fig.2a, where we report the correlation length of the speed fluctuations, $\xi_{\mathrm{sp}}$, vs the system’s size, $L$, in numerical simulations of self-propelled particles (SPP) regulated by linear speed control (colored points - see Methods and SI for details of the SPP simulations): when the speed stiffness $g$ is small enough, namely smaller than $1/L_{\mathrm{max}}^{2}$, the correlation length $\xi_{\mathrm{sp}}$ correctly scales linearly with $L$ over the whole range (dark red points), thus reproducing the scale-free nature of the experimental correlation length (black points). On the contrary, if $g$ is larger than $1/L_{\mathrm{max}}^{2}$, the range of the correlation grows linearly with $L$ only up to a certain size, and then it saturates to its bulk value (5) (orange and yellow points). We conclude that, if correlations were our only experimental concern - as has been the case in the literature up to now - there would be no need to increase the speed stiffness $g$ beyond $1/L_{\mathrm{max}}^{2}$, and everything would be fine. However, when we consider as an observable the mean speed of the flock, $s=(1/N)\sum_{i}v_{i}$, the outlook for the linear theory gets bleak. Empirical data show that the mean speed does not change much from flock to flock and does not have any dependence on the flock’s number of birds, $N$ (black points in Fig.2b). Let us see what the linear theory predicts for the mean speed, $s$.
Calculating the probability distribution, $P(s)$, from equations (1)-(2) is a prohibitive task, due to the time-dependence of the interaction network, $n_{ij}(t)$; however, previous studies have shown that, in the deeply ordered phase in which flocks live, the timescale for rearrangement of $n_{ij}(t)$ is significantly larger than the relaxation time of the velocities, hence one obtains reasonably accurate results by assuming a time-independent form of $n_{ij}$, namely a fixed interaction network mora2016local ; as we shall see from the perfect agreement between off-equilibrium SPP simulations and theory, this approximation works very well. Under this assumption (plus some more bland algebraic approximations - see SI for details) one can calculate the probability distribution of the mean speed, obtaining for $d=3$ the result, $P(s)=\frac{1}{Z}s^{2}\exp\left[-\frac{Ng}{T}(s-v_{0})^{2}\right]\ ,$ (6) where $Z$ is a normalization factor. We can easily evaluate the peak of this distribution, that is, the typical value of the mean speed, $s_{\mathrm{typical}}=\frac{1}{2}v_{0}\left(1+\sqrt{1+\frac{4T}{Ng\,v_{0}^{2}}}\ \right)\ .$ (7) For $N\to\infty$ we get $s_{\mathrm{typical}}=v_{0}$, so all is good for infinitely large flocks, as their typical speed is just the same as the natural reference speed, $v_{0}$. But in finite groups serious troubles emerge, as the typical speed grows for decreasing $N$, eventually becoming absurdly larger than the natural reference value, $v_{0}$; and because in (7) the combination $Ng$ appears, this disastrous effect is all the more serious the smaller the speed stiffness $g$, so that for very weak control even relatively large flocks will have a biologically implausible speed. Yet a weak control $g$ is exactly what we need to grant strong correlations! Hence linear control has a serious problem.

Figure 2: Linear vs Marginal speed control.
a: Natural flocks show a clear scale-free behaviour of the speed correlation length, $\xi_{\mathrm{sp}}$, which scales linearly with $L$ (Pearson coefficient $r_{P}=0.97$, $p<10^{-9}$). SPP simulations with linear speed control yield scale-free correlations over the entire range of $L$ only at the smallest value of the stiffness $g$ (dark red). b: Natural flocks show no detectable dependence of their mean speed on the number of birds in the flock (Spearman coefficient $r_{S}=-0.13,p=0.21$; the black line is the average over all flocks). SPP simulations with linear control give a near-constant speed compatible with experiments only at the largest value of the stiffness $g$ (light yellow); coloured lines represent the theoretical prediction of (6). Linear speed control is therefore unable to reproduce both experimental traits at the same time. c: The correlation length in SPP simulations with marginal speed control scales linearly with $L$ over the full range, provided that the temperature/noise $T$ is low enough to have a polarization equal to the experimental one. d: At the same values of the parameters as in panel c, SPP simulations with marginal control give a mean group speed that depends only very weakly on $N$, fully compatible with the experimental data; the blue line represents the theoretical prediction of (S27). Inset: same data over a smaller range, to appreciate the agreement between theory and simulations. (Numerical and experimental correlation lengths are reported on the same scale by matching the curves at the scale-free value of the parameters; numerical and experimental speeds are reported on the same scale by matching the curves at the largest value of $N$. Colored points correspond to averages over numerical data, error bars to standard deviations. Black points correspond to the median (over time) of experimental data for each individual flocking event, error bars to median absolute deviations.)
The physical reason for this drift of the mean speed in the linear theory is the following. In the absence of the prefactor $s^{2}$ in the distribution (6), a decrease of the stiffness $g$ would increase the flock-to-flock fluctuations of the mean speed, but its typical value would always be equal to the natural one, namely $v_{0}$. The $s^{2}$ prefactor, though, changes this, pushing the maximum of the distribution to larger and larger speeds for decreasing $g$. Where does this prefactor come from? It is essentially the Jacobian of the change of variable between the $d$-dimensional velocity vector and the modulus of the velocity (see SI); in generic dimension, $s^{d-1}ds$ is the volume in phase space of all configurations with the same mean speed, but variable velocity direction (an identical term appears in the Maxwell-Boltzmann speed distribution). This is an entropic term, which boosts the probability of large speeds merely because there are more ways to realise larger rather than smaller velocity vectors. When the imitation force is strong (as it is within a flock) and the speed-confining force is weak (as it must be for the sake of scale-free correlation), the system is allowed to gain entropy by increasing in a coordinated fashion all the individual speeds of the particles; as this entropic push is not suppressed by a strong enough exponential weight, it gives rise to unreasonably fast flocks. This theoretical prediction - and its disastrous consequences - is confirmed by a comparison between numerical SPP simulations ruled by linear speed control and experimental data. Fig.2b shows that, once the reference speeds $v_{0}$ of the theory and of the experiments are matched at the largest sizes, for small values of $N$ and $g$ numerical flocks with linear speed control (dark red points) have a mean speed that is completely incompatible with that of actual experimental flocks (black points), which show no appreciable dependence on $N$.
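The entropic shift of the peak of $P(s)$ in eq. (6) can be seen directly by locating the maximum on a grid and comparing it with the analytic expression (7); the sketch below does both, with $v_{0}=1$ and the single combination $Ng/T$ as a free parameter.

```python
import numpy as np

def peak_of_Ps(Ng_over_T, v0=1.0, smax=20.0, n=200001):
    """Numerically locate the maximum of P(s) ~ s^2 exp[-(Ng/T)(s - v0)^2], eq. (6)."""
    s = np.linspace(1e-6, smax, n)
    logP = 2.0 * np.log(s) - Ng_over_T * (s - v0)**2
    return s[np.argmax(logP)]

def peak_eq7(Ng_over_T, v0=1.0):
    """Analytic peak of eq. (6), i.e. eq. (7) written in terms of Ng/T."""
    return 0.5 * v0 * (1.0 + np.sqrt(1.0 + 4.0 / (Ng_over_T * v0**2)))

# Strong confinement (large Ng/T): the exponential wins, the peak sits at ~v0.
print(peak_of_Ps(1e4))
# Weak confinement: the entropic s^2 factor pushes the peak far above v0.
print(peak_of_Ps(0.01))
# The numeric maximum agrees with the analytic formula (7).
print(abs(peak_of_Ps(0.01) - peak_eq7(0.01)) < 0.01)  # True
```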
To contrast the increase of the typical mean speed in smaller SPP flocks one needs a larger value of the speed stiffness $g$ (light yellow points), but we know from the previous discussion that this depresses the range of the speed correlations, so that one fails to reproduce scale-free correlations, Fig.2a. This is the blanket-too-short dilemma of linear speed control: either we use a speed stiffness $g$ small enough to reproduce scale-free correlations even at the largest observed values of $N$, but in that case we get implausibly large speeds at low $N$ (dark red points), or we increase $g$ to tame the entropic boost of the speed and keep it within the experimental fluctuations at low $N$, but then we lose scale-free correlations at large $N$ (light yellow points). Linear speed control cannot yield both experimental traits at the same time. We must therefore turn to some other mechanism.

Marginal speed control

In statistical physics, the correlation length $\xi$ is connected to the inverse of the quadratic curvature of the (renormalized) potential, calculated at its minimum le1991quantum ; goldenfeld_lectures_1992 ; binney_book ; a very small curvature implies a very large correlation length, so that a divergent $\xi$ is always due to a zero second derivative (or marginal mode) along some direction of the (renormalized) potential. This is also the case for linear speed control (4): the second derivative of the quadratic potential along the speed is proportional to $g$, hence when $g$ is small, the correlation length is large. The problem, however, is that because the function is quadratic, by decreasing $g$ we weaken the whole potential confining the speed, not just its curvature, hence giving a freeway to the entropic boost we have discussed before, ultimately resulting in an implausibly large speed of small flocks. This state of affairs suggests that we must turn to a confining potential that does not vanish entirely when its curvature does.
To find this potential we proceed through general considerations of symmetry and common sense. First, the potential must keep the speed around the reference natural value $v_{0}$ and it must diverge for large values of the speed; secondly, it must be rotationally symmetric in the whole velocity vector; third, it must have the simplest mathematical form compatible with the previous conditions and with the experimental evidence. The most general form of a rotationally symmetric potential that confines the speed around the natural reference value $v_{0}$ is, $V(\mathbf{v_{i}})=\left(\mathbf{v}_{i}\cdot\mathbf{v}_{i}-v_{0}^{2}\right)^{p}$, where the integer power $p\geq 2$ must be even, in order to produce a minimum of the potential at $v_{0}$. For $p=2$ we have the classic $\mathrm{O}(n)$ potential of standard vector ferromagnets patashinskii_book ; le1991quantum ; this theory is not suitable for our purposes, though, as in the low-temperature symmetry-broken phase it cannot develop zero curvature in the speed direction unless we put an amplitude $g$ in front of the whole potential and let $g$ itself go to zero; but this is what we already did for the Gaussian case, and we know it does not work. (Modulus fluctuations in the standard $\mathrm{O}(n)$ theory are different from longitudinal fluctuations; the latter, being coupled to the massless transverse fluctuations, are also scale-free (albeit with a weaker divergence), while the former are truly massive fluctuations, hence they have finite (in fact very small) correlation length and susceptibility brezin1973feynman ; brezin1973critical ; patashinskii1973longitudinal .)
Indeed, for $p=2$ and for $v_{i}\sim v_{0}$, we can write, $V\sim\left(v_{i}-v_{0}\right)^{2}$, which is nothing else than the Gaussian potential that we already took into consideration. (The fact that the ‘linear’ speed control theory is essentially identical to the standard $O(n)$ model shows that that theory is not ‘linear’ at all, nor Gaussian, in the actual degrees of freedom, $\mathbf{v}_{i}$.) The next simplest possibility is $p=4$, which gives the following speed-control potential, $V(\mathbf{v_{i}})=\frac{1}{v_{0}^{6}}\lambda\left(\mathbf{v}_{i}\cdot\mathbf{v}_{i}-v_{0}^{2}\right)^{4}$ (8) where, thanks to the $v_{0}^{-6}$ normalization, the amplitude $\lambda$ has the same physical dimensions as the other coupling constants, $J$ and $g$. This potential was first studied on purely speculative grounds in cavagna2019CRP , although no SPP simulations, nor comparisons with the experiments, were performed there. The crucial feature of the potential in (8) is that its second derivative with respect to the speed is always zero, irrespective of the value of the amplitude $\lambda$; hence we will call this marginal speed control. Higher-order powers in the expansion of the potential are nonzero and very steep, though, thus confining the speed much more effectively to its reference value, $v_{0}$, compared to linear control. Correspondingly, the speed-restoring force is very weak for small deviations from $v_{0}$, but very strong for large deviations (we will discuss later the biological plausibility of this kind of nonlinear speed confinement). The complete absence of a quadratic term in the expansion of (8) seems to suggest that the marginal potential gives rise to an infinite correlation length of speed fluctuations under all physical conditions; in fact, this is not the case. Speed correlations are regulated by the confining potential (i.e. the energy), but also by the fluctuations induced by the noise (i.e.
the entropy): at very low noise the marginal potential dominates, so that speed fluctuations are indeed scale-free, while by increasing the noise the correlation is increasingly suppressed by entropic fluctuations cavagna2019CRP . In field theory terms, what happens is that at finite temperature entropy provides a non-zero second derivative of the renormalized potential, i.e. a non-zero mass of speed fluctuations, and therefore a finite correlation length. The mass goes to zero at $T=0$, where the speed correlation length diverges. As a consequence, the marginal theory has a zero-temperature (or zero-noise) critical point, where the correlation length of the speed fluctuations diverges. A mean-field analysis cavagna2019CRP shows that the speed correlation length diverges as, $\xi_{\mathrm{sp}}\sim\frac{1}{T^{1/2}}\ ,$ (9) where the generalized temperature $T$ is the strength of the noise in (1). This scenario has an interesting and very convenient consequence: in the marginal theory, by simply decreasing the noise strength $T$, we bring home two out of three empirical traits, that is, a large polarization and a large correlation length, a somewhat unusual result within standard statistical systems. But what about the constraint of having a biologically reasonable group speed? The calculation of the distribution of the mean speed of flocks under the marginal potential is more complicated than in the linear case (see SI), but under some reasonable approximations one obtains, $\displaystyle P(s)=\frac{1}{Z}s^{2}\exp{\left[-\frac{N\lambda}{Tv_{0}^{6}}(v_{0}^{2}-s^{2})^{4}\right]}$ (10) which differs from the linear case in two crucial ways: first, the power $4$ in the exponential tames the entropic push of the $s^{2}$ term extremely sharply; secondly, the amplitude $\lambda$, unlike $g$, does not need to be small to grant scale-free correlations, so the exponential weight remains always effective in suppressing large values of the mean speed, $s$.
The maximum of this distribution, i.e. the typical mean speed of the flock, is given by (see SI for details), $\displaystyle s_{\mathrm{typical}}\simeq\begin{cases}v_{0}&\textrm{for}\ N\gg\frac{T}{\lambda v_{0}^{2}}\\ v_{0}\left(\frac{T}{4N\lambda v_{0}^{2}}\right)^{1/8}&\textrm{for}\ N\ll\frac{T}{\lambda v_{0}^{2}}\end{cases}$ As in the linear case, for $N\to\infty$ the typical speed of the flocks becomes the same as the natural reference speed, $v_{0}$, while it increases for smaller sizes. However, in the marginal case this growth is very moderate indeed: the strength $T$ of the noise is small (to have a large correlation length) and the amplitude $\lambda$ is finite, so that the crossover size $N=T/(\lambda v_{0}^{2})$, below which the mean speed increases, is very small, thus shielding this regime; moreover, the exponent of the speed’s increase for small $N$ is $1/8$, significantly smaller than the exponent $1/2$ of linear speed control (see equation (7)). For these two reasons marginal speed control is so much more effective than linear control at small $N$. These theoretical results are fully confirmed by numerical simulations of SPP flocks regulated by marginal speed control (Fig.2c and Fig.2d). Moreover, the comparison with experimental data on real starling flocks in the field indicates that marginal control is indeed the right way to go, as with just one reasonable set of parameters - the only crucial one in fact being the low noise strength, $T$ - numerical simulations of SPP flocks with marginal speed control reproduce the experimental data very well: the correlation length $\xi_{\mathrm{sp}}$ scales linearly with $L$ up to the largest size (Fig.2c), and the mean speed $s$ shows only a very moderate increase at low $N$, well within the scatter of the empirical data (Fig.2d).
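The piecewise expression above, and the mildness of the $1/8$ exponent, can be probed with a small helper; as a sketch, the function below implements the two asymptotic regimes (taking the crossover as a sharp threshold, which is a simplification of the smooth crossover in the full calculation).

```python
def s_typical_marginal(N, lam, T=1.0, v0=1.0):
    """Asymptotic typical mean speed under marginal control (sharp-crossover sketch):
    s ~ v0 for N >> T/(lam v0^2), and s ~ v0 (T/(4 N lam v0^2))^(1/8) below it."""
    N_cross = T / (lam * v0**2)
    if N >= N_cross:
        return v0
    return v0 * (T / (4.0 * N * lam * v0**2)) ** 0.125

# Even two orders of magnitude below the crossover, the boost is modest:
# (100/4)^(1/8) ~ 1.5, to be contrasted with the 1/2 exponent of eq. (7).
print(s_typical_marginal(0.01, lam=1.0))   # ~1.5 v0, a mild increase
print(s_typical_marginal(10.0, lam=1.0))   # exactly v0 above the crossover
```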
We stress once again that all three pieces of phenomenology – large polarization, large correlation length, moderate speed at all group sizes – are achieved by marginal control by doing just one very sensible thing, namely pushing the system into the ordered phase that real flocks naturally belong to. The entropy-triggered conflict between scale-free correlations and a moderate group speed that hinders linear control is therefore completely resolved by the marginal theory.

Figure 3: Qualitative sketch of linear vs marginal speed-restoring force. In the linear case the force pulls the speed back to its natural reference value $v_{0}$ proportionally to the deviation from $v_{0}$. Instead, in the marginal case, the force is extremely weak for small deviations from the reference speed, while it increases very sharply for large deviations, harshly suppressing them.

Biological significance

What is the biological meaning of marginal speed control in the context of avian flight? The highly non-linear marginal potential (8) implies that small speed fluctuations elicit nearly zero restoring force, while larger speed fluctuations are pushed back extremely sharply, in contrast with the constant slope of a linear confining force, Fig.3. Is this reasonable for actual birds in a flock? Small speed fluctuations are certainly not prevented by biomechanical constraints, but they could be depressed by energetic expenditure concerns, as changing the speed requires extra energy consumption; however, starlings prove to be very liberal about their energy expenditure habits while flocking hamilton1967starling ; heppner1974avian ; bajec2009organized : although their metabolic rate is dramatically higher in flight than on the roost hamilton1967starling , these birds will spectacularly wheel every day for half an hour before landing, expending energy at a ferocious rate; this suggests that small extra energy expenditures due to small speed fluctuations may indeed be weaker-than-linearly suppressed.
On the other hand, large speed fluctuations clash against biomechanical and aerodynamic constraints, which are set very stringently by anatomy, physiology and physics pennycuick1986mechanical ; rayner1988form ; rayner1996biomechanical ; therefore, a stronger-than-linear suppression of large speed fluctuations also seems quite reasonable. In view of the general nature of this argument, it seems to us that marginal speed control may not only be a fairly natural mechanism from a biological point of view, but also quite a general one, possibly extending beyond the case study of starling flocks.

## Methods

Experiments. Empirical observations of starling flocks have been performed in Rome, from the terrace of Palazzo Massimo alle Terme, in front of a large roosting site at the Termini Railway Station. The experimental technique used is stereoscopic photography, where multiple synchronized video-sequences of a flocking event are acquired from different observation points with a calibrated multi-camera video-acquisition system Cavagna2015error . Digital images are then analysed with a specifically designed tracking software attanasi2015greta , in order to extract from the raw data the three-dimensional trajectories of the individual birds in the flock. Data have been collected across the years during several experimental campaigns. The very first data were collected in the context of the Starflag project cavagna+al_08 ; cavagna+al_08b , between 2007 and 2010, using Canon D1-Mark II cameras, shooting interlaced at 10 frames-per-second (FPS) with a resolution of $8.2$ Megapixels (MP). A second campaign took place between 2011 and 2012, using faster cameras, namely IDT M5, shooting at $170$FPS with a resolution of $5$MP. A final campaign took place in the last months of 2019 and in January and February 2020. This campaign used state-of-the-art IDT OS10-4K cameras, shooting at $155$FPS with a resolution of $9.2$MP.
Overall, we have data from flocks with sizes ranging between $10$ and $2500$ birds, a span that is essential to differentiate between linear and marginal speed control. All campaigns have been conducted with a three-camera system exploiting trifocal geometry hartley2004multiple . The image analysis - segmentation of individual birds, stereometric matching and dynamical tracking - has been performed using the method of cavagna+al_08 ; cavagna+al_08b for the first campaign, and the more advanced method of attanasi2015greta for the second and third campaigns. We summarise in Table S1 in the SI all the experimental quantities used in our analysis for each flocking event.

Correlation functions and correlation length. The speed spatial connected correlation function is defined as cavagna2018physics $C(r)=\frac{\sum\limits_{i,j}^{N}\delta v_{i}\ \delta v_{j}\ \delta(r-r_{ij})}{\sum\limits_{i,j}^{N}\delta(r-r_{ij})}$ (11) where $N$ is the number of individuals in the system, $r_{ij}=|\bm{r}_{i}-\bm{r}_{j}|$ is the mutual distance between individuals $i$ and $j$, and $\delta v_{i}=v_{i}-\frac{1}{N}\sum\limits_{k}^{N}{v}_{k}$ (12) is the fluctuation of the individual speed $v_{i}=|\bm{v}_{i}|$ with respect to the mean speed of the group $s=(1/N)\sum_{i}v_{i}$, evaluated at a given instant of time. The function $C(r)$ represents the instantaneous average of mutual correlations among all pairs at distance $r$: in systems with local, distance-dependent interactions, and for large enough system sizes, this quantity is a good proxy of the typical correlation at that distance, as computed with the correct theoretical measure. A full discussion of definition (11), its asymptotic limit, finite-size effects, and behaviour in known cases can be found in cavagna2018physics . Here we notice that the correlation function (11) is the only possible definition applicable to experimental data, where no a priori information is available on the true nature of the dynamics.
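In practice, the delta functions in (11) are replaced by a binning in pairwise distance; a minimal sketch of such an estimator is below, restricted to distinct pairs $i\neq j$ (how self-pairs and bin widths are handled in the actual analysis pipeline is not specified here, so those are assumptions of this sketch).

```python
import numpy as np

def speed_correlation(pos, vel, nbins=50):
    """Binned estimate of the connected speed correlation C(r), eqs. (11)-(12):
    within each distance bin, average of dv_i * dv_j over distinct pairs."""
    speeds = np.linalg.norm(vel, axis=1)
    dv = speeds - speeds.mean()                       # eq. (12)
    rij = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)               # distinct pairs only
    r = rij[iu]
    prod = (dv[:, None] * dv[None, :])[iu]
    edges = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r, edges) - 1, 0, nbins - 1)
    num = np.bincount(idx, weights=prod, minlength=nbins)
    den = np.bincount(idx, minlength=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = den > 0                                    # drop empty bins
    return centers[mask], num[mask] / den[mask]
```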
This definition has indeed been used in all the previous analyses of speed correlations mentioned in this paper. We display in Fig.S2 of the SI the correlation function (11) computed, respectively, from experimental data (panel a), and from numerical simulations with a linear speed control model (panel b) and a marginal speed control model (panel c). For each configuration of the system (at a given time) we estimate the correlation length as: $\xi=\frac{\int\limits_{0}^{r_{0}}\ \textrm{d}r\ r\ C(r)}{\int\limits_{0}^{r_{0}}\ \textrm{d}r\ C(r)}$ (13) and then we perform a time average of this quantity over different configurations. The point $r_{0}$ is the first point for which $C(r)=0$. Such a point always exists due to the very definition of the correlation function given in (11) (the integral of $C(r)$ between $0$ and the size of the system $L$ always vanishes). Definition (13) provides a reliable estimate of the correlation length in every regime, both when the system is scale-free with long-range correlations and when the system is far from criticality with short-range correlations (in the linear speed control model we can see all this phenomenology simply by changing the parameter $g$, see Fig. 2a). The reason is that (13) makes use of the information encoded in the zero-crossing point $r_{0}$, together with the shape of the entire function $C(r)$. When correlations are short-range and the correlation function is nearly exponential cavagna2018physics , e.g. $C(r)\sim e^{-r/\hat{\xi}}$, given that it almost vanishes after the point $r_{0}$, we can think of extending the integrals of (13) up to $L$, and we obtain $\xi\sim\hat{\xi}$. Conversely, when correlations are long-range, regardless of the precise shape of $C(r)$, the first point of zero-crossing $r_{0}$ dominates, hence we obtain $\xi\sim r_{0}$.
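On a sampled correlation curve, the estimator (13) reduces to two trapezoidal integrals cut at the first zero-crossing; a minimal sketch:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integral (avoids the np.trapz/np.trapezoid naming change)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def correlation_length(r, C):
    """Estimate xi from eq. (13): integrals of r*C(r) and C(r) up to the first
    zero-crossing r0 of the sampled correlation function."""
    cross = np.where(C <= 0.0)[0]
    if len(cross) == 0:
        raise ValueError("C(r) has no zero-crossing on this range")
    k = cross[0] + 1               # include the first non-positive sample
    return _trapz(r[:k] * C[:k], r[:k]) / _trapz(C[:k], r[:k])

# Sanity check on a function with a known answer: for C(r) = 1 - r, r0 = 1 and
# xi = (1/2 - 1/3) / (1/2) = 1/3.
r = np.linspace(0.0, 2.0, 2001)
print(correlation_length(r, 1.0 - r))  # ~0.333
```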
Other possible definitions are legitimate, but either they are reliable near criticality and fail in the short-range correlation regime, or they capture the correlation range when it is much smaller than the system size, completely failing when the correlation range becomes extensive. For example, if one chooses $r_{0}$ as an estimate for the correlation length, it works efficiently when the system is scale-free, where $r_{0}$ identifies the size of the correlated domains, while it fails when the system has an intrinsic length-scale cavagna+al_10 ; cavagna2018physics . On the other hand, in the phase of short-range correlations, it is easy to determine the correlation length via an exponential fit, because $C(r)$ has an exponential shape cavagna2018physics ; however, this procedure is unfeasible in the critical regime, where the correlation functions do not have an exponential behaviour (see Fig. S2 in the SI).

Numerical simulations of SPP flocks. To investigate the flocking dynamics described by (1) and (2), we perform numerical simulations with a system of self-propelled particles. The flock is modeled as a set of particles moving in three-dimensional space with update rules for positions and velocities, which are a discretized version of (1) and (2).
Following a simple Euler integration scheme rapaport2004 , we get $\displaystyle\bm{v}_{i}(t+\Delta t)=\bm{v}_{i}(t)+\Delta t\,\bm{F}_{i}+\delta\bm{\eta}_{i}$ (14) $\displaystyle\bm{x}_{i}(t+\Delta t)=\bm{x}_{i}(t)+\Delta t\,\bm{v}_{i}(t)$ (15) Here the force $\bm{F}_{i}=\bm{F}_{int}+\bm{F}_{sc}$ acting on particle $i$ contains both an alignment term $\bm{F}_{int}$, $\bm{F}_{int}=-J\sum\limits_{j}^{N}n_{ij}(t)\left(\bm{v}_{i}(t)-\bm{v}_{j}(t)\right)$ (16) and a speed control term $\bm{F}_{sc}$, which can be either linear: $\bm{F}_{sc}=2g\frac{\bm{v}_{i}}{|\bm{v}_{i}|}\left(v_{0}-|\bm{v}_{i}|\right)$ (17) or marginal: $\bm{F}_{sc}=\frac{8\lambda}{v_{0}^{6}}\bm{v}_{i}(v_{0}^{2}-\bm{v}_{i}^{2})^{3}$ (18) The last term in (14) is a white Gaussian noise with zero mean and variance $\sigma_{\eta}^{2}=2dT\Delta t$ (19) where $d=3$ is the space dimension and $T$ is the effective temperature. The matrix $n_{ij}(t)$ is the adjacency matrix that defines which pairs interact; its entries can assume only the values $0$ and $1$, according to a rule of interaction that can be metric (i.e. $n_{ij}\neq 0$ if and only if $r_{ij}<r_{c}$) or topological (i.e. $n_{ij}\neq 0$ if $j$ is one of $i$’s first $n_{c}$ neighbours) ballerini+al_08 ; bialek+al_12 . When working at fixed average density and in the very low temperature region, where density fluctuations are small, there is no great difference between metric and topological interactions. Even though natural flocks are known to have topological interactions ballerini+al_08 ; bialek+al_12 , we therefore decided to perform simulations with the metric rule, which are much less expensive computationally. In this way, we are able to study systems in $d=3$ with $N$ up to $3\times 10^{5}$ particles. We consider a metric connectivity matrix with interaction radius $r_{c}=1.2$, such that the number of nearest neighbours at time $t=0$ is $n_{c}=6$, close to the biological value ballerini+al_08 ; bialek+al_12 .
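A single update of eqs. (14)-(15) can be sketched as below; this is a minimal NumPy version with a metric adjacency rule and either force (17) or (18). It is a simplification of the actual simulation code: periodic boundaries are omitted, and the noise variance of eq. (19) is read here as the variance of each component scaled by $2d$, which is one possible reading of that convention.

```python
import numpy as np

def euler_step(x, v, J, dt, T, r_c, v0, control="marginal",
               g=0.0, lam=0.0, rng=None):
    """One Euler update of eqs. (14)-(15), with the alignment force (16) and
    either linear (17) or marginal (18) speed control. x, v: (N, d) arrays."""
    rng = np.random.default_rng() if rng is None else rng
    N, d = v.shape
    # metric adjacency: n_ij = 1 iff r_ij < r_c, no self-interaction
    rij = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    nij = ((rij < r_c) & ~np.eye(N, dtype=bool)).astype(float)
    # alignment force, eq. (16): -J sum_j n_ij (v_i - v_j)
    F = -J * (nij.sum(axis=1)[:, None] * v - nij @ v)
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    if control == "linear":        # restoring force of eq. (17)
        F += 2.0 * g * (v / speed) * (v0 - speed)
    else:                          # marginal force of eq. (18)
        F += (8.0 * lam / v0**6) * v * (v0**2 - speed**2) ** 3
    # white Gaussian noise, variance 2dT*dt as in eq. (19)
    eta = np.sqrt(2.0 * d * T * dt) * rng.standard_normal((N, d))
    return x + dt * v, v + dt * F + eta
```

A quick consistency check: a perfectly aligned pair of particles cruising at the reference speed $v_{0}$, with zero noise, feels no net force, so the velocities are unchanged and the positions advance ballistically.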
We then check a posteriori that the system remains spatially homogeneous in time, by computing the distribution of the number of nearest neighbours for every simulation and verifying that it is always sharply peaked around the initial value $n_{c}=6$. All the simulations are performed in a cubic box (of linear size $L$) with periodic boundary conditions. Individuals are initialized in a globally polarized configuration on a cubic lattice with lattice spacing (i.e. nearest neighbour distance) $r_{1}=1$, and then evolve off-lattice according to rules (14)-(15). The effective temperature clearly drives the system from a disordered to an ordered state through a phase transition at fixed density. However, since we are considering self-propelled particles, the same configurations can be reached using another control parameter, defined as the ratio between the mean first-neighbour distance $r_{1}$, which directly depends on the density of the system, and the interaction radius $r_{c}$, at fixed noise. We decided to perform all the simulations at constant density $\rho=1$, maintaining $r_{1}/r_{c}$ constant and choosing the temperature according to the desired polarization. We choose the value of the reference speed of the particles $v_{0}$ and of the integration step $\Delta t$ so as to ensure an average displacement $\Delta r\simeq v_{0}\Delta t$ much smaller than the size of the box $L$. In this way there is only a weak rewiring of the interaction network during the simulation time, consistently with the quasi-equilibrium condition of natural flocks mora2016local . The integration step is selected as the maximum value granting a robust numerical integration in terms of errors and stationarity of the energy of the system (absence of trends in time or in size). In simulations with marginal speed control this algorithmic stability is achieved with $\Delta t=0.01$, while linear speed control requires $\Delta t=0.001$.
Every simulation consists of a thermalization run of $N_{steps}=2\times 10^{4}$ steps, followed by an independent production run of $N_{steps}=1.2\times 10^{6}$ steps. From the latter we extract configurations every $1000$ steps in order to compute the quantities needed by our analysis. In Table S2 of the SI we report the values of the other parameters used in the simulations. ## Acknowledgements This work was supported by ERC Advanced Grant RG.BIO (785932) to ACa, and ERANET-CRIB grant to ACa and TSG. TSG was also supported by grants from CONICET, ANPCyT and UNLP (Argentina). The authors acknowledge several illuminating discussions with William Bialek regarding speed control in flocks. ACu wishes to thank Victor Martin-Major for careful advice on the numerics. ACa warmly thanks Frank Heppner for reading the original manuscript and for a decade-long conversation on collective avian behaviour. ## Author contributions ACa, IG and TSG designed the study. XF, WKK, SM, LP, and PV - coordinated by SM - collected the experimental data. SM carried out the 3D dynamical tracking of the raw experimental data. ACa, ACu, and TSG performed the analytic calculation. ACu and GP carried out the numerical simulations. ACu analysed the experimental and numerical data. ACa and IG wrote the manuscript. ## References * (1) C. W. Reynolds, “Flocks, herds and schools: A distributed behavioral model,” in Proceedings of the 14th annual conference on Computer graphics and interactive techniques, pp. 25–34, 1987. * (2) F. Heppner and U. Grenander, “A stochastic nonlinear model for coordinated bird flocks,” The ubiquity of chaos, vol. 233, p. 238, 1990. * (3) A. Huth and C. Wissel, “The simulation of the movement of fish schools,” Journal of theoretical biology, vol. 156, no. 3, pp. 365–385, 1992. * (4) T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet, “Novel type of phase transition in a system of self-driven particles,” Phys Rev Lett, vol. 75, pp. 1226–1229, Aug 1995. * (5) I. D.
Couzin, J. Krause, R. James, G. D. Ruxton, and N. R. Franks, “Collective memory and spatial sorting in animal groups,” Journal of theoretical biology, vol. 218, no. 1, pp. 1–11, 2002. * (6) H. Chaté, F. Ginelli, G. Grégoire, F. Peruani, and F. Raynaud, “Modeling collective motion: Variations on the vicsek model,” Eur. Phys. J. B, vol. 64, pp. 451–456, 08 2008. * (7) P. Romanczuk and L. Schimansky-Geier, “Swarming and pattern formation due to selective attraction and repulsion,” Interface focus, vol. 2, no. 6, pp. 746–756, 2012. * (8) R. Grossmann, L. Schimansky-Geier, and P. Romanczuk, “Self-propelled particles with selective attraction–repulsion interaction: from microscopic dynamics to coarse-grained theories,” New Journal of Physics, vol. 15, no. 8, p. 085014, 2013. * (9) A. Perna, G. Grégoire, and R. P. Mann, “On the duality between interaction responses and mutual positions in flocking and schooling,” Movement ecology, vol. 2, no. 1, pp. 1–11, 2014. * (10) A. Cavagna, A. Cimarelli, I. Giardina, G. Parisi, R. Santagati, F. Stefanini, and M. Viale, “Scale-free correlations in starling flocks,” Proc Natl Acad Sci USA, vol. 107, pp. 11865–70, Jun 2010. * (11) J. M. Rayner, P. W. Viscardi, S. Ward, and J. R. Speakman, “Aerodynamics and energetics of intermittent flight in birds,” American Zoologist, vol. 41, no. 2, pp. 188–204, 2001. * (12) W. Bialek, A. Cavagna, I. Giardina, T. Mora, O. Pohl, E. Silvestri, M. Viale, and A. M. Walczak, “Social interactions dominate speed control in poising natural flocks near criticality,” Proceedings of the National Academy of Sciences, vol. 111, no. 20, pp. 7212–7217, 2014. * (13) C. K. Hemelrijk and H. Hildenbrandt, “Scale-free correlations, influential neighbours and speed control in flocks of birds,” Journal of Statistical Physics, vol. 158, no. 3, pp. 563–578, 2015. * (14) C. K. Hemelrijk and H. Hildenbrandt, “Self-organized shape and frontal density of fish schools,” Ethology, vol. 114, no. 3, pp. 245–254, 2008. * (15) D. 
Helbing and P. Molnár, “Social force model for pedestrian dynamics,” Phys. Rev. E, vol. 51, pp. 4282–4286, May 1995. * (16) D. Helbing, I. Farkas, and T. Vicsek, “Simulating dynamical features of escape panic.,” Nature, vol. 407, pp. 487–490, September 2000. * (17) D. Helbing, I. Farkas, P. Molnar, and T. Vicsek, “Simulation of pedestrian crowds in normal and evacuation situations,” vol. 21, pp. 21–58, 01 2002. * (18) A. Garrell, L. Garza-Elizondo, M. Villamizar, F. Herrero, and A. Sanfeliu, “Aerial social force model: A new framework to accompany people using autonomous flying robots,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 7011–7017, 2017. * (19) D. Yang, U. Ozguner, and K. Redmill, “Social force based microscopic modeling of vehicle-crowd interaction,” in 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1537–1542, 2018. * (20) A. Cavagna, I. Giardina, A. Orlandi, G. Parisi, A. Procaccini, M. Viale, and V. Zdravkovic, “The starflag handbook on collective animal behaviour: 1. empirical methods,” Anim Behav, vol. 76, pp. 217–236, Jan 2008. * (21) A. Cavagna, I. Giardina, A. Orlandi, G. Parisi, and A. Procaccini, “The starflag handbook on collective animal behaviour: 2. three-dimensional analysis,” Anim Behav, vol. 76, pp. 237–248, Jan 2008. * (22) A. Attanasi, A. Cavagna, L. Del Castello, I. Giardina, T. S. Grigera, A. Jelić, S. Melillo, L. Parisi, O. Pohl, E. Shen, et al., “Information transfer and behavioural inertia in starling flocks,” Nature physics, vol. 10, no. 9, pp. 691–696, 2014. * (23) J. Goldstone, “Field theories with superconductor solutions,” Il Nuovo Cimento (1955-1965), vol. 19, no. 1, pp. 154–164, 1961. * (24) A. Z. Patashinskii and V. L. Pokrovskii, Fluctuation Theory of Phase Transitions. Pergamon Press, 1979. * (25) T. Vicsek and A. Zafeiris, “Collective motion,” Physics Reports, vol. 517, no. 3, pp. 71–140, 2012. * (26) F. 
Ginelli, “The physics of the Vicsek model,” The European Physical Journal Special Topics, vol. 225, no. 11-12, pp. 2099–2117, 2016. * (27) R. Zwanzig, Nonequilibrium statistical mechanics. Oxford University Press, USA, 2001. * (28) T. Mora, A. M. Walczak, L. Del Castello, F. Ginelli, S. Melillo, L. Parisi, M. Viale, A. Cavagna, and I. Giardina, “Local equilibrium in bird flocks,” Nature Physics, vol. 12, no. 12, pp. 1153–1157, 2016. * (29) M. Le Bellac, Quantum and Statistical Field Theory. Clarendon Press Oxford, 1991. * (30) N. Goldenfeld, Lectures on Phase Transitions and the Renormalization Group. Reading, Massachusetts: Perseus Books, 1992. * (31) J. J. Binney, N. Dowrick, A. Fisher, and M. Newman, The theory of critical phenomena: an introduction to the renormalization group. Oxford University Press, Inc., 1992. * (32) E. Brézin, D. Wallace, and K. G. Wilson, “Feynman-graph expansion for the equation of state near the critical point,” Physical Review B, vol. 7, no. 1, p. 232, 1973. * (33) E. Brézin and D. Wallace, “Critical behavior of a classical Heisenberg ferromagnet with many degrees of freedom,” Physical Review B, vol. 7, no. 5, p. 1967, 1973. * (34) A. Patashinskii and V. Pokrovskii, “Longitudinal susceptibility and correlations in degenerate systems,” Zh. Eksp. Teor. Fiz, vol. 64, p. 1445, 1973. * (35) A. Cavagna, A. Culla, L. Di Carlo, I. Giardina, and T. S. Grigera, “Low-temperature marginal ferromagnetism explains anomalous scale-free correlations in natural flocks,” Comptes Rendus Physique, vol. 20, pp. 319–328, May-Jun 2019. * (36) W. J. Hamilton III, W. M. Gilbert, F. H. Heppner, and R. J. Planck, “Starling roost dispersal and a hypothetical mechanism regulating rhythmical animal movement to and from dispersal centers,” Ecology, vol. 48, no. 5, pp. 825–833, 1967. * (37) F. H. Heppner, “Avian flight formations,” Bird-banding, vol. 45, no. 2, pp. 160–169, 1974. * (38) I. L. Bajec and F. H. Heppner, “Organized flight in birds,” Animal Behaviour, vol.
78, no. 4, pp. 777–789, 2009. * (39) C. J. Pennycuick, “Mechanical constraints on the evolution of flight,” Memoirs of the California Academy of Sciences, vol. 8, pp. 83–98, 1986. * (40) J. M. Rayner, “Form and function in avian flight,” in Current ornithology, pp. 1–66, Springer, 1988. * (41) J. M. Rayner, “Biomechanical constraints on size in flying vertebrates,” in Symposia of the Zoological Society of London, no. 69, pp. 83–110, London: The Society, 1960-1999., 1996. * (42) A. Cavagna, C. Creato, L. Del Castello, I. Giardina, S. Melillo, L. Parisi, and M. Viale, “Error control in the set-up of stereo camera systems for 3d animal tracking,” The European Physical Journal Special Topics, vol. 224, no. 17, pp. 3211–3232, 2015. * (43) A. Attanasi, A. Cavagna, L. Del Castello, I. Giardina, A. Jelić, S. Melillo, L. Parisi, F. Pellacini, E. Shen, E. Silvestri, et al., “Greta-a novel global and recursive tracking algorithm in three dimensions,” IEEE transactions on pattern analysis and machine intelligence, vol. 37, no. 12, pp. 2451–2463, 2015. * (44) R. Hartley and A. Zisserman, Multiple view geometry in computer vision. Cambridge University Press, 2004. * (45) A. Cavagna, I. Giardina, and T. S. Grigera, “The physics of flocking: Correlation as a compass from experiments to theory,” Physics Reports, vol. 728, pp. 1–62, 2018. * (46) D. C. Rapaport, The art of molecular dynamics simulation. Cambridge University Press, 2nd ed., 2004. * (47) M. Ballerini, N. Cabibbo, R. Candelier, A. Cavagna, E. Cisbani, I. Giardina, V. Lecomte, A. Orlandi, G. Parisi, A. Procaccini, et al., “Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study,” Proceedings of the national academy of sciences, vol. 105, no. 4, pp. 1232–1237, 2008. * (48) W. Bialek, A. Cavagna, I. Giardina, T. Mora, E. Silvestri, M. Viale, and A. M. Walczak, “Statistical mechanics for natural flocks of birds,” Proc Natl Acad Sci USA, vol. 109, pp. 
4786–91, Mar 2012. * (49) F. Dyson, “General theory of spin-wave interactions,” Physical review, vol. 102, no. 5, p. 1217, 1956. * (50) M. Ballerini, N. Cabibbo, R. Candelier, A. Cavagna, E. Cisbani, I. Giardina, A. Orlandi, G. Parisi, A. Procaccini, M. Viale, and V. Zdravkovic, “Empirical investigation of starling flocks: a benchmark study in collective animal behaviour,” Anim Behav, vol. 76, pp. 201–215, Jan 2008. * (51) T. Mora and W. Bialek, “Are biological systems poised at criticality?,” J Stat Phys, vol. 144, pp. 268–302, Jul 2011. ## Supplementary Information ## Distribution of the mean speed: linear speed control In this section we describe how to derive the approximate mean speed distribution of Eq. ($6$) in the main text. The starting point is the pseudo-Hamiltonian with the Gaussian potential: ${H}(\\{\bm{v}_{i}\\})=\frac{J}{2}\sum_{i,j}n_{ij}(\bm{v}_{i}-\bm{v}_{j})^{2}+g\sum_{i}(v_{i}-v_{0})^{2}$ (S1) where all the sums are from $1$ to the number of particles in the system $N$. We are dealing with an active system, hence the matrix $n_{ij}=n_{ij}(t)$ depends on time. However, it has been shown in mora2016local that, due to the large polarization of real flocks, the relaxation time scale of $n_{ij}(t)$ is significantly larger than that of the velocities, so that a quasi-equilibrium approach to the problem is reasonable; from now on we will therefore consider a time-independent $n_{ij}$. The validity of this approach is retrospectively confirmed by the remarkable agreement between the predictions of the approximate equilibrium theory derived below and the results from self-propelled particle simulations, as displayed in Fig.2b and Fig.2d of the main text. In the context of quasi-equilibrium, we can assume a Boltzmann-like distribution for the velocities $\displaystyle P(\\{\bm{v}_{i}\\})=\frac{1}{Z}\exp{(-\beta{H}(\\{\bm{v}_{i}\\}))}\ .$ (S2) where $\beta=1/T$ is the inverse temperature and quantifies the degree of noise in the system.
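As a concrete check, the pseudo-Hamiltonian (S1) can be evaluated directly; the helper function and the test configuration below are illustrative, not taken from the paper:

```python
import numpy as np

# Minimal sketch of the pseudo-Hamiltonian (S1)
def hamiltonian(v, n, J, g, v0):
    diff = v[:, None, :] - v[None, :, :]                    # v_i - v_j
    align = 0.5 * J * np.einsum("ij,ijk,ijk->", n, diff, diff)
    speeds = np.linalg.norm(v, axis=1)
    return align + g * np.sum((speeds - v0) ** 2)

# Two interacting birds, perfectly aligned and flying at the reference speed:
# both terms of (S1) vanish
n = np.array([[0.0, 1.0], [1.0, 0.0]])
v = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(hamiltonian(v, n, J=1.0, g=0.1, v0=1.0))   # -> 0.0
```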
Our aim is now to marginalize (S2) to get a probability distribution for the mean speed (notice that, although the confining potential is Gaussian, it is so in the speed, i.e. the modulus of the velocity, $|\mathbf{v}_{i}|$, which is not a linear function of $\mathbf{v}_{i}$; hence, the model is in fact not Gaussian). It is convenient to rewrite (S1) in terms of the individual speeds $v_{i}=|\bm{v}_{i}|$ and flight directions $\bm{\sigma}_{i}=\bm{v}_{i}/v_{i}$. In the very ordered phase, one can use the “spin-wave approximation” (SW) dyson_56 , as already done in previous analyses of starling flocks bialek+al_12 ; bialek+al_14 . When the polarization is large (enforced in our model by choosing $J\gg 1$), the flight direction of each individual is very close to the polarization vector. Hence: $\displaystyle\bm{v}_{i}=v_{i}\bm{\sigma}_{i}\ \ \ \ \ \textrm{with}\ \ \ \ |\bm{\sigma}_{i}|=1$ (S3) $\displaystyle\bm{\sigma}_{i}\simeq\bm{n}\left(1-\frac{\pi_{i}^{2}}{2}\right)+\bm{\pi}_{i}$ (S4) where $\bm{n}$ is the unit vector along the polarization vector $\bm{\Phi}=\frac{1}{N}\sum_{i}\bm{\sigma}_{i}$, and the $\bm{\pi}_{i}$ are the fluctuations orthogonal to $\bm{n}$. The constraint $\sum_{i}\bm{\pi}_{i}=0$ holds by construction and, in the highly ordered regime, $\pi_{i}^{2}\ll 1$ for every $i$.
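A quick numerical check of the spin-wave expansion (S4): for a small fluctuation $\bm{\pi}$ orthogonal to $\bm{n}$, the reconstructed direction is a unit vector up to $O(\pi^{4})$. The numerical values below are illustrative:

```python
import numpy as np

n_hat = np.array([0.0, 0.0, 1.0])              # polarization direction
pi_vec = np.array([1e-3, -2e-3, 0.0])          # small orthogonal fluctuation
sigma = n_hat * (1.0 - pi_vec @ pi_vec / 2.0) + pi_vec   # Eq. (S4)
err = abs(np.linalg.norm(sigma) - 1.0)
print(err)   # O(pi^4 / 4): tiny, but not exactly zero
```

Indeed $|\bm{\sigma}|^{2}=(1-\pi^{2}/2)^{2}+\pi^{2}=1+\pi^{4}/4$, so the violation of the unit-norm constraint is fourth order in the fluctuation.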
The Hamiltonian (S1) then becomes, up to order $\pi_{i}^{2}$: $\displaystyle{H}(\\{v_{i}\\},\\{\bm{\pi}_{i}\\})=J\sum\limits_{i,j}\Lambda_{ij}v_{i}v_{j}+g\sum\limits_{i}(v_{i}-v_{0})^{2}+J\sum\limits_{i,j}\tilde{\Lambda}_{ij}(\\{v_{k}\\})\bm{\pi}_{i}\cdot\bm{\pi}_{j}\ ,$ (S5) where we defined the matrices: $\displaystyle\Lambda_{ij}=-n_{ij}+\delta_{ij}\sum\limits_{k}n_{ik}\ \ \ \ \textrm{(Discrete Laplacian)}$ (S6) $\displaystyle\tilde{\Lambda}_{ij}(\\{v_{k}\\})=-n_{ij}v_{i}v_{j}+\delta_{ij}\sum\limits_{k}n_{ik}v_{i}v_{k}$ (S7) In terms of the variables $\\{v_{i}\\}$ and $\\{\bm{\pi}_{i}\\}$, the probability density (S2) becomes, $\displaystyle P(\\{v_{i}\\},\\{\bm{\pi}_{i}\\})=\frac{\delta\left(\sum\limits_{k}\bm{\pi}_{k}\right)\prod\limits_{i}v_{i}^{d-1}e^{-\beta{H}}}{\int\textrm{D}v^{\prime}\textrm{D}\bm{\pi}^{\prime}\delta\left(\sum\limits_{k}\bm{\pi}^{\prime}_{k}\right)e^{-\beta{H}}\prod\limits_{i}v_{i}^{\prime d-1}}$ (S8) where $\textrm{D}v^{\prime}\equiv\prod_{k}\textrm{d}v^{\prime}_{k}$, $\textrm{D}\bm{\pi}^{\prime}\equiv\prod_{k}\textrm{d}\bm{\pi}^{\prime}_{k}$ and $d$ is the dimension of the velocity vector. We now need to integrate out the fluctuations $\bm{\pi}_{i}$, to obtain the marginalized distribution of the individual speeds $v_{i}$. Let us define $\displaystyle\Omega(\\{v_{i}\\})\equiv\prod_{j}v_{j}^{d-1}\int\textrm{D}\bm{\pi}\exp{\left[-\beta J\sum\limits_{i,j}\tilde{\Lambda}_{ij}(\\{v_{k}\\})\bm{\pi}_{i}\cdot\bm{\pi}_{j}\right]}\delta\left(\sum\limits_{k}\bm{\pi}_{k}\right)$ (S9) The integral can be easily performed upon a change of integration variables from the $\\{\bm{\pi}_{i}\\}$ to the eigenvectors $\\{\tilde{\bm{\pi}}_{\alpha}\\}$ of the matrix $\tilde{\Lambda}$. Both $\Lambda$ and $\tilde{\Lambda}$ inherit the translational invariance of the original Hamiltonian and have a constant eigenvector corresponding to a zero mode, since $\sum_{j}\Lambda_{ij}=\sum_{j}\tilde{\Lambda}_{ij}=0$. 
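The matrices (S6)-(S7) and their zero row sums (the translational invariance just mentioned) can be verified directly; the random interaction network and speed values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a symmetric 0/1 adjacency matrix with zero diagonal (illustrative)
N = 20
n = (rng.random((N, N)) < 0.3).astype(float)
n = np.triu(n, 1)
n = n + n.T
v = rng.uniform(0.9, 1.1, N)                    # individual speeds

Lam = -n + np.diag(n.sum(1))                    # discrete Laplacian, Eq. (S6)
nvv = n * np.outer(v, v)                        # n_ij v_i v_j
Lam_t = -nvv + np.diag(nvv.sum(1))              # Eq. (S7)

# both matrices annihilate the constant vector: row sums vanish
print(np.abs(Lam.sum(1)).max(), np.abs(Lam_t.sum(1)).max())   # both ~0
```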
The constraint on the $\\{\bm{\pi}_{i}\\}$ becomes a constraint on the zero mode, i.e. $\delta(\tilde{\bm{\pi}}_{0})$, making the integral finite and leaving out only $d-1$ eigenvalues. We get $\displaystyle\Omega(\\{v_{i}\\})=\left[\prod_{j}v_{j}^{d-1}\right]\left[\prod\limits_{\alpha\neq 0}\tilde{\lambda}_{\alpha}(\\{v_{k}\\})\right]^{-\frac{d-1}{2}}$ (S10) where the $\\{\tilde{\lambda}_{\alpha}\\}$ are the eigenvalues of $\tilde{\Lambda}$ and depend on the $\\{v_{i}\\}$ in some complicated way. Since we are interested in the distribution of the mean speed $s=(1/N)\sum_{i}v_{i}$, we will now estimate the behaviour of $\Omega$ to leading order in $s$. Once again, it is convenient to make a change of variables, going from real space to the space of the eigenvectors $\\{\hat{v}_{a}\\}$ of the discrete Laplacian $\Lambda$. Each $v_{i}$ can be decomposed into its $\hat{v}_{a}$ components using the formula $v_{i}=\sum_{a}w_{i}^{(a)}\hat{v}_{a}$, where $w^{(a)}_{i}$ is the change of basis matrix. As mentioned above, the zero mode has constant coefficients $w_{i}^{(0)}=1/\sqrt{N}$ and the zero-mode eigenvector is therefore proportional to the mean speed, i.e. it is exactly $\sqrt{N}s=\left(1/\sqrt{N}\right)\sum_{i}v_{i}$.
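The zero-mode statement can be checked numerically; the ring interaction network and speed profile below are illustrative choices, used only because a connected graph guarantees a unique zero mode:

```python
import numpy as np

# Nearest-neighbour ring adjacency (connected, so the zero mode is unique)
N = 16
idx = np.arange(N)
n = np.zeros((N, N))
n[idx, (idx + 1) % N] = 1.0
n = n + n.T

Lam = -n + np.diag(n.sum(1))                 # discrete Laplacian, Eq. (S6)
eigval, W = np.linalg.eigh(Lam)              # columns of W are eigenvectors w^(a)

v = 1.0 + 0.1 * np.sin(idx)                  # some individual speeds
v_hat = W.T @ v                              # mode amplitudes
s = v.mean()

i0 = np.argmin(np.abs(eigval))               # the zero mode
print(abs(abs(v_hat[i0]) - np.sqrt(N) * s))  # ~0: |v_hat_0| = sqrt(N) * s
```

The absolute value absorbs the arbitrary sign of the eigenvector returned by `eigh`; the reconstruction $v_{i}=\sum_{a}w_{i}^{(a)}\hat{v}_{a}$ also holds exactly.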
This also implies that for each $v_{i}$ we have $\displaystyle v_{i}=s+\delta v_{i}=s+\sum_{a\neq 0}w^{(a)}_{i}\hat{v}_{a}\ .$ (S11) We can now express the function $\Omega$ in terms of this new representation: $\displaystyle\Omega\sim\frac{\prod\limits_{j}v_{j}^{d-1}}{\left[\prod\limits_{\alpha\neq 0}\tilde{\lambda}_{\alpha}(\\{v_{k}\\})\right]^{\frac{d-1}{2}}}=\frac{\prod\limits_{j}\left[s+\sum\limits_{a\neq 0}w_{j}^{(a)}\hat{v}_{a}\right]^{d-1}}{f\left(\left\\{s+\sum\limits_{a\neq 0}w_{k}^{(a)}\hat{v}_{a}\right\\}\right)}=s^{d-1}\frac{\prod\limits_{j}\left[1+\sum\limits_{a\neq 0}\frac{w_{j}^{(a)}\hat{v}_{a}}{s}\right]^{d-1}}{f\left(\left\\{1+\sum\limits_{a\neq 0}\frac{w_{k}^{(a)}\hat{v}_{a}}{s}\right\\}\right)}=s^{d-1}h\left(\left\\{1+\sum\limits_{a\neq 0}\frac{w_{k}^{(a)}\hat{v}_{a}}{s}\right\\}\right)\ .$ (S12) Here $h$ is a generic rational function of its argument. The function $f$ is a generic polynomial of order $(N-1)(d-1)$ in its argument (from dimensional analysis), hence it is safe to extract a factor $s^{(N-1)(d-1)}$, because $s$ is present in the expansion of every $v_{k}$. The term $\Omega$ describes the contribution to the measure coming from the integration of the directional fluctuations. Once we integrate the directional fluctuations, we have a Hamiltonian that depends only on the moduli $\\{v_{i}\\}$. Also in this case, we can express everything in terms of $s$ and the non-zero modes $\\{\hat{v}_{a}\\}$ of $\Lambda$. Remembering that $\sum_{i}w_{i}^{(a)}w_{i}^{(b)}=\delta_{a,b}$, we get: $\displaystyle{H}=J\sum\limits_{i,j}\Lambda_{ij}v_{i}v_{j}+g\sum\limits_{i}(v_{i}-v_{0})^{2}=\sum\limits_{a=1}^{N}(J\lambda_{a}+g)\hat{v}_{a}^{2}+gN(s-v_{0})^{2}$ (S13) where, with a slight abuse of notation, we still indicate with $H$ the marginalised Hamiltonian depending only on the speeds.
After these manipulations we get the distribution $\displaystyle P(\\{s,\hat{v}_{a}\\})=\frac{\Omega(\\{s,\hat{v}_{a}\\})\ e^{-\beta{H}}}{\int\textrm{d}s^{\prime}\ \textrm{D}\hat{v^{\prime}}\ \Omega(\\{s^{\prime},\hat{v^{\prime}}_{b}\\})\ e^{-\beta{H}}}$ (S14) with $a\neq 0$ and $\textrm{D}\hat{v^{\prime}}\equiv\prod_{b\neq 0}\textrm{d}\hat{v^{\prime}}_{b}$. We can now derive the distribution of the mean speed $s=\frac{1}{N}\sum_{i}v_{i}$ by marginalizing over all the non-zero modes $\hat{v}_{a}$. To this end, we note that since $\left|w_{i}^{(a)}\right|<1$ for every $i$ and $a$, we have $\hat{v}_{a}=\sum_{i}w^{(a)}_{i}v_{i}<\sum_{i}v_{i}=Ns$. The domain of the variables appearing in (S14) is therefore: $\displaystyle 0\leq s$ $\displaystyle<\infty$ (S15) $\displaystyle-{N}s\leq\hat{v}_{a}$ $\displaystyle\leq{N}s\ \ \ \ \ \ \ \ \ \ \ \textrm{for}\ a\neq 0$ (S16) We then get $\displaystyle\begin{split}P(s)&=\frac{1}{Z_{s}}\exp{\left[-N\beta g(s-v_{0})^{2}\right]}\int_{-{N}s}^{{N}s}\textrm{D}\hat{v}\ \Omega(s,\\{\hat{v}_{a}\\})\ \exp{\left[-\beta\sum\limits_{a=1}^{N}\left(J\lambda_{a}+g\right)\hat{v}_{a}^{2}\right]}\\\ &=\frac{1}{Z_{s}}s^{d-1}\exp{\left[-N\beta g(s-v_{0})^{2}\right]}\int_{-{N}s}^{{N}s}\textrm{D}\hat{v}\ h\left(\left\\{1+\sum\limits_{a\neq 0}\frac{w_{k}^{(a)}\hat{v}_{a}}{s}\right\\}\right)\ \exp{\left[-\beta\sum\limits_{a=1}^{N}\left(J\lambda_{a}+g\right)\hat{v}_{a}^{2}\right]}\end{split}$ (S17) where $Z_{s}$ is the normalization of the distribution and the integral in $\textrm{D}\hat{v}$ is over all the non-zero modes. We omitted all the irrelevant constants that cancel out through simplification between the distribution and its normalization. In the approximation where the relative fluctuations of the individual speeds are small, we can expand the function $h$ appearing in the above expression and compute the remaining Gaussian integral for large values of $N$. 
We obtain, at leading order, $\displaystyle P(s)=\frac{1}{Z}s^{d-1}\exp{\left[-\frac{Ng}{T}(s-v_{0})^{2}\right]}$ (S18) We stress that the above approximation is quite reasonable in the deeply ordered phase. The quantity $\delta v_{i}=\sum_{a\neq 0}w^{(a)}_{i}\hat{v}_{a}$ indeed represents the fluctuation of the individual speed with respect to the mean speed of the group, $s$, and it must not be confused with the fluctuations of the mean speed itself. At low noise, when mutual adaptation is strong, individuals efficiently coordinate both their directions and speeds, so that we expect individual deviations from the group's mean flight direction (the polarization) and from the mean speed to be small (as confirmed by simulations, see Fig. S1). On the other hand, if the value of $g$ is small, i.e. the control on the individual speeds is loose, the $\\{v_{i}\\}$ can remain coordinated and at the same time wildly fluctuate (e.g. everyone speeds up), giving rise to large fluctuations of $s$, while keeping the relative deviations $\delta v_{i}$ small. Figure S1: Relative fluctuations of the speed. We report in this plot the relative fluctuations of the individual speed $\Delta s/s$ as a function of the mean speed, computed from numerical simulations for different values of $N$, and for $g=10^{-3}$. The fluctuation is defined as $\Delta s=[(1/N)\sum_{i}\delta v_{i}^{2}]^{1/2}$. Each point in the plot corresponds to a distinct configuration, and all points of the same color are drawn from the same simulation performed at a given value of $N$. The big yellow points are averages over all data in the same simulation (i.e. $N$). The black line is a fit of the data with a function $f(x)=a/x$. The vertical dashed line corresponds to $s=v_{0}$, which is the asymptotic value for the mean speed in the thermodynamic limit. The fluctuations themselves are small and depend on $s$ only very weakly (inset), so that the relative fluctuations decay as $1/s$.
Relative fluctuations therefore only increase due to the decrease of the average value of $s$ at large sizes. However, this value is bounded from below ($s>v_{0}$) and the relative fluctuations therefore remain small in the whole range of parameters. The average value of the mean speed computed from distribution (S18) has been plotted in Fig.2b of the main paper: it predicts very nicely the values measured through numerical simulations of the off-lattice linear control model (see next section for details), confirming the validity of the approximations performed in the calculation (i.e. large directional order, quasi-equilibrium, small relative fluctuations of the speed). To get an analytical estimate of the typical speed, we can compute the maximum of the distribution. By imposing $\frac{\partial P}{\partial s}=0$ for $d=3$, we obtain the following equation: $\displaystyle s_{typical}^{2}-s_{typical}v_{0}-\frac{T}{Ng}=0$ (S19) that gives us the expression for the maximum: $\displaystyle s_{typical}=v_{0}\left[\frac{1}{2}+\frac{1}{2}\sqrt{1+\frac{4T}{Ngv_{0}^{2}}}\right]$ (S20) This result confirms the idea that the mean speed is substantially different from $v_{0}$ for small $N$, if $g$ is too small, as clearly shown in Fig.2b of the main paper. We wish to draw the reader’s attention to the fact that, despite the approximations we used to derive them (in particular the fixed network assumption), the analytical results of this section are in perfect agreement with numerical simulations performed using an actual self-propelled particle model (see Fig.2b in the main text). This is not surprising, considering that in the deeply ordered flocking phase the time scale to reshuffle the interaction network is much larger than the time of local relaxation mora2016local . ## Bounds on the stiffness for linear speed control We have seen in the main text that when the stiffness $g$ is small enough, correlations are scale free.
To understand how small is “small enough”, we notice that in order to have scale-free correlations at all observed sizes, one needs $\xi_{\mathrm{sp}}\gg L$ for each $L$, a condition that, together with (see main text), $\xi_{\mathrm{sp}}=r_{1}\left(\frac{Jn_{c}}{g}\right)^{1/2}$ (S21) leads to, $g\ll\frac{a}{L_{\mathrm{max}}^{2}}$ (S22) where $L_{\mathrm{max}}$ is the size of the largest flock in the dataset and $a=r_{1}^{2}Jn_{c}$ collects all size-independent quantities. When the stiffness is so small that (S22) holds, scale-free correlations over the entire range of $L$ are reproduced. On the other hand, we can quantify the conflict between control and correlation within the linear theory by setting a second bound on the speed stiffness $g$. From (S20) we see that, in order to have a typical flock’s speed reasonably close to the natural reference value, $v_{0}$, one must ensure that $T/(Ngv_{0}^{2})\ll 1$ for all observed sizes; if we use the reasonable approximation $N\sim L^{3}/r_{1}^{3}$, where $r_{1}$ is the mean nearest neighbour distance, we obtain the condition, $g\gg\frac{b}{L^{3}_{\mathrm{min}}}$ (S23) where $L_{\mathrm{min}}$ is the size of the smallest flock in the dataset, and where once again we have grouped into the parameter $b=r_{1}^{3}T/v_{0}^{2}$ all size-independent constants. Once the spectrum of observed values of $L$ is wide enough, the two bounds (S22) and (S23) cannot both be satisfied with a single value of the speed control stiffness $g$. The only way to reconcile linear speed control with the empirical observations would be to assume the existence of a tuning mechanism such that the speed stiffness $g$ depends on the size $L$ of the flock, so as to satisfy the following condition, $\frac{b}{L^{3}}\ll g(L)\ll\frac{a}{L^{2}}$ (S24) This is a rather narrow strip for $g(L)$ to live in, so that a biological mechanism fulfilling (S24) would require some very tricky size-dependent fine-tuning.
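The narrowness of the strip (S24) can be illustrated with a few numbers; the values of $J$, $n_{c}$, $r_{1}$, $T$ and $v_{0}$ below are ours, chosen only for illustration, not taken from the paper's tables:

```python
# Constants entering the two bounds (illustrative values)
J, n_c, r1, T, v0 = 18.0, 6.0, 1.0, 0.05, 1.0
a = r1**2 * J * n_c            # size-independent constant in (S22)
b = r1**3 * T / v0**2          # size-independent constant in (S23)

L_min, L_max = 5.0, 70.0       # smallest / largest flock size
lower = b / L_min**3           # a single g must sit well above this ...
upper = a / L_max**2           # ... and well below this, simultaneously
print(lower, upper, upper / lower)
```

With these numbers the window spans less than two decades (`upper / lower` is about 55), leaving essentially no room for both strict inequalities $\gg$ and $\ll$ to hold at once.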
But even that could be insufficient: the two asymptotic inequalities in (S24) require the stiffness $g$ to stay well clear of both boundaries, $b/L^{3}$ and $a/L^{2}$, which for medium-small values of $L$ becomes impossible to achieve. ## Distribution of the mean speed: marginal speed control Let us now consider the marginal speed control model, which has the pseudo-Hamiltonian: $\displaystyle{H}(\\{\bm{v}_{i}\\})=\frac{J}{2}\sum\limits_{i,j}n_{ij}(\bm{v}_{i}-\bm{v}_{j})^{2}+\frac{\lambda}{v_{0}^{6}}\sum\limits_{i}\left(v_{i}^{2}-v_{0}^{2}\right)^{4}$ (S25) We can follow a similar procedure to the one used for linear control, i.e. we apply the SW approximation to deal with directional fluctuations and we decompose into normal modes for the speed fluctuations. We end up with a distribution with the same structure as that of (S14) with $\displaystyle{H}=\sum\limits_{a=1}^{N}J\lambda_{a}\hat{v}_{a}^{2}+\frac{\lambda}{v_{0}^{6}}\sum\limits_{i}\left(\sum\limits_{a,b}w_{i}^{(a)}w_{i}^{(b)}\hat{v}_{a}\hat{v}_{b}-v_{0}^{2}\right)^{4}$ (S26) Integration over the non-zero modes with this effective Hamiltonian is clearly a hard task, due to the non-Gaussian contributions. However, in the approximation where the relative speed fluctuations are small, things simplify: we can easily extract the zero mode contribution $\simeq N\frac{\lambda}{v_{0}^{6}}(s^{2}-v_{0}^{2})^{4}$ in the exponent, while at leading order the integration over the remaining modes (which is non-Gaussian in this case) will produce a constant integral. The distribution for the average speed $s$ will then be: $\displaystyle P(s)=\frac{1}{Z}s^{d-1}\exp{\left[-\frac{N\lambda}{Tv_{0}^{6}}(s^{2}-v_{0}^{2})^{4}\right]}$ (S27) The agreement between theory and simulations is less accurate than in the linear speed control case, but we still have a satisfactory match between the predicted average mean speed and the value measured from numerical simulations (Fig.2d of the main paper).
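A numerical sketch of (S27), with illustrative parameters: the position of the maximum of the mean-speed distribution sits slightly above $v_{0}$ and approaches it as $N$ grows:

```python
import numpy as np

# Illustrative parameters (not from the paper's tables)
lam, T, v0, d = 1.0, 0.05, 1.0, 3

s = np.linspace(0.9, 1.5, 600001)
maxima = []
for N in (10**2, 10**3, 10**4):
    # log of (S27), up to the N-independent normalization
    logP = (d - 1) * np.log(s) - (N * lam / (T * v0**6)) * (s**2 - v0**2) ** 4
    maxima.append(s[np.argmax(logP)])
print(maxima)   # decreasing sequence, converging toward v0 from above
```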
Once again we can compute the maximum of the distribution to estimate the typical mean speed. For $d=3$ we get: $\displaystyle 1-\frac{4N\lambda v_{0}^{2}}{T}\left(\frac{s_{typical}}{v_{0}}\right)^{2}\left(\left(\frac{s_{typical}}{v_{0}}\right)^{2}-1\right)^{3}=0$ (S28) Since we are interested in the behaviour of $s_{typical}$ in $N$ at fixed $T$ and $\lambda$, we can solve this equation in the two limits of large $N$ and small $N$, obtaining: $\displaystyle s_{typical}\simeq\begin{cases}v_{0}\left[1+\left(\frac{T}{32N\lambda v_{0}^{2}}\right)^{1/3}\right]\ \ \ \ \ &\textrm{for}\ N\gg\frac{T}{\lambda v_{0}^{2}}\\\ v_{0}\left(\frac{T}{4N\lambda v_{0}^{2}}\right)^{1/8}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &\textrm{for}\ N\ll\frac{T}{\lambda v_{0}^{2}}\end{cases}$ (S29) ## Polarization dependence on model parameters In equilibrium ferromagnetic models in their low temperature phase, the polarization $\Phi$ only depends on the ratio between ferromagnetic coupling $J$ and temperature $T$ through the relation dyson_56 , $\Phi=1-\alpha\frac{T}{J}$ (S30) where $\alpha$ is a constant of order $1$ whose value depends on the specific structure of the interaction network. This relation, though, is only valid when the vectorial degrees of freedom $\bm{v}_{i}$ have modulus $1$, whereas if they have modulus equal to $v_{0}$ the relation changes to, $\Phi=1-\alpha\frac{T}{v_{0}^{2}J}$ (S31) For out-of-equilibrium models, such as the present one, the polarization depends in principle on all parameters; however, in the deeply ordered phase that we are considering here, the main contribution to $\Phi$ is still given by the ratio between alignment and noise, so that (S31) remains a very useful rule of thumb to fix the parameters of the model so as to obtain a polarization equal to that of natural flocks, namely $\Phi\simeq 0.89\div 0.99$.
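As a small worked example, the rule of thumb (S31) can be inverted to choose the temperature that yields a target polarization; $\alpha=1$ and the couplings below are placeholder values, not those of the actual simulations:

```python
# Invert (S31): T = (1 - Phi) * v0^2 * J / alpha
alpha, J, v0 = 1.0, 18.0, 1.0
for Phi_target in (0.89, 0.95, 0.99):
    T = (1.0 - Phi_target) * v0**2 * J / alpha
    print(Phi_target, T)
```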
## Gauging the values of $N$ and $L$ in numerical simulations In order to compare numerical data with experimental ones, we need to satisfy two criteria in our simulations: we must span a range in group sizes $N$ analogous to the one of real flocks (to reproduce the behaviour of the mean speed as a function of $N$), and we also need to span a range in linear sizes $L$ of the same order as that found in experiments (to reproduce the scaling of the correlation length). However, natural flocks have non-trivial aspect ratios (being flatter in the direction of gravity ballerini+al_08b ), which implies that their spatial extension $L^{\mathrm{bio}}$ (defined as the maximum distance between two individuals) does not scale with the group size $N^{\mathrm{bio}}$ as $L^{\mathrm{bio}}=\left(N^{\mathrm{bio}}\right)^{1/3}$. For this reason, we need to perform two sets of simulations to satisfy the criteria mentioned above. In the first set, we perform simulations with $N=8\div 2744$, which are the minimum and maximum numbers of individuals in the recorded natural flocks. With these numerical data we compute the mean speed distribution and its average, which are then displayed in Fig.2b,d. Then, we perform a second set of simulations with $L/r_{1}$ up to the maximum experimental value of $L^{\mathrm{bio}}/r_{1}^{\mathrm{bio}}=70$, where $r_{1}^{\mathrm{bio}}$ is the average nearest neighbour distance of real flocks. With these data we compute the correlation lengths that are plotted in Fig.2a,c; for this set of simulations we have $N=125\div 343\times 10^{3}$. An alternative strategy would have been to perform simulations in a box reproducing the aspect ratio of natural flocks, rather than in a cubic box. This is however not convenient. The aspect ratio for real flocks is not a stable quantity, but fluctuates from flock to flock and - for the same flock - from time to time. This would require an extremely painful calibration of the chosen simulation box.
On the other hand, analyses of real data show that the aspect ratio does not influence the statistics of speeds and correlations (e.g. correlations of flocks with different aspect ratios rescale very well, save for boundary effects at very large distances cavagna+al_10 ). Choosing a cubic box is therefore perfectly legitimate in terms of the physics, and it very much simplifies the data analysis.

## Is the marginal theory tuned at criticality?

An interesting question is whether or not a theory with marginal speed control requires any tuning of the parameters, and in particular tuning close to criticality mora+al_11 . The success of the marginal theory rests on the fact that the second derivative of the potential is exactly zero at $v_{0}$, a condition that seems to require some tuning. However, a small non-zero quadratic term would still be acceptable in the marginal case, as long as its amplitude is much smaller than $1/L_{\mathrm{max}}^{2}$; and thanks to the steep nonlinear rise of the marginal potential, there is no lower bound on it, so this tuning is rather mild. On the other hand, the marginal theory requires the system to be close to the zero-temperature critical point, so in this sense there is a case for near-criticality. However, a zero-temperature critical point is not shifted by finite-size effects, and it has just one physical side (positive temperature); hence, we can simply push the system to low temperature without any risk of crossing the critical point. As a result, the control parameter does not need to depend on size to keep the system close to criticality. As we have seen, the situation is different in the case of linear control, where the speed stiffness must be carefully tuned in a size-dependent way to remain close, but not too close, to the critical point.

Acquisition | No. of birds $N$ | Flock's size $L$, $m$ | Polarization $\Phi$ | Mean speed $s$, $m/s$ | Correlation length $\xi$, $m$
---|---|---|---|---|---
16-05 | 1548 | 68.1 | 0.961 | 15.5 | 9.1
17-06 | 380 | 40.5 | 0.935 | 10.0 | 5.8
21-06 | 530 | 26.6 | 0.973 | 11.0 | 3.8
25-08 | 1079 | 52.5 | 0.962 | 12.7 | 7.1
25-10 | 696 | 30.1 | 0.991 | 12.6 | 3.3
25-11 | 854 | 33.1 | 0.957 | 10.7 | 3.4
28-10 | 1122 | 32.3 | 0.982 | 11.2 | 3.2
29-03 | 422 | 28.1 | 0.963 | 10.8 | 3.7
31-01 | 1565 | 67.3 | 0.921 | 7.5 | 8.5
32-06 | 690 | 18.4 | 0.981 | 10.0 | 2.6
42-03 | 366 | 27.2 | 0.979 | 10.2 | 3.4
48-17 | 709 | 25.7 | 0.886 | 13.5 | 3.0
49-05 | 636 | 15.1 | 0.995 | 13.7 | 2.0
54-08 | 2548 | 66.6 | 0.971 | 14.2 | 8.9
57-03 | 2559 | 76.1 | 0.978 | 14.3 | 10.7
58-06 | 351 | 19.4 | 0.987 | 10.8 | 2.2
58-07 | 445 | 15.3 | 0.977 | 10.9 | 2.4
63-05 | 712 | 47.2 | 0.978 | 10.3 | 4.1
69-09 | 206 | 13.4 | 0.985 | 11.8 | 1.8
69-10 | 994 | 32.4 | 0.987 | 12.0 | 4.1
69-13 | 1238 | 39.2 | 0.937 | 10.1 | 5.9
69-19 | 617 | 21.0 | 0.975 | 14.3 | 3.6
72-02 | 101 | 7.2 | 0.993 | 13.3 | 1.5
77-07 | 131 | 6.5 | 0.978 | 9.2 | 1.5
20110208_ACQ3 | 178 | 12.9 | 0.983 | 8.8 | 1.7
20110211_ACQ1 | 595 | 23.5 | 0.971 | 8.6 | 2.6
20110217_ACQ2 | 405 | 15.0 | 0.982 | 11.1 | 2.0
20111124_ACQ1 | 125 | 8.1 | 0.993 | 11.0 | 1.5
20111125_ACQ1 | 50 | 8.7 | 0.983 | 12.4 | 2.0
20111125_ACQ2 | 512 | 26.6 | 0.956 | 9.4 | 3.3
20111201_ACQ3_F1 | 133 | 8.2 | 0.973 | 10.2 | 1.0
20111201_ACQ3_F4 | 488 | 16.2 | 0.972 | 10.6 | 1.0
20111207_ACQ1 | 108 | 13.8 | 0.931 | 8.1 | 2.3
20111214_ACQ4_F1 | 154 | 10.3 | 0.992 | 11.4 | 1.8
20111214_ACQ4_F2 | 144 | 13.4 | 0.968 | 11.6 | 2.2
20111215_ACQ1 | 391 | 16.1 | 0.984 | 11.1 | 2.5
20111220_ACQ2 | 198 | 10.2 | 0.985 | 16.6 | 1.2
20191209_ACQ53 | 97 | 9.9 | 0.991 | 11.1 | 1.4
20191209_ACQ55_F1 | 19 | 5.6 | 0.996 | 13.3 | 1.3
20191209_ACQ58_F2 | 53 | 10.5 | 0.988 | 13.2 | 1.3
20191209_ACQ58_F3 | 14 | 1.5 | 0.998 | 17.2 | 1.6
20200129_ACQ3 | 54 | 10.1 | 0.998 | 17.4 | 1.1
20200129_ACQ4_F1 | 11 | 4.7 | 0.988 | 11.8 | 1.5
20200129_ACQ4_F2 | 54 | 13.2 | 0.994 | 16.6 | 1.5
20200211_ACQ7 | 10 | 1.2 | 0.995 | 12.0 | 0.8

Table 1: Experimental data. This table reports all the data required to perform the analysis presented in this paper. Each line corresponds to a different acquisition (i.e. flocking recording), for all three experimental campaigns considered (the acquisition labeling system changed from one campaign to another). Acquisitions belonging to different campaigns are separated by a straight line. For each acquisition we report: the median number of individuals $N$, the median flock's size $L$, the mean polarization $\Phi=1/N\sum_{i}\bm{v}_{i}/v_{i}$, the median of the mean speed $s=1/N\sum_{i}v_{i}$, and the median correlation length $\xi$, computed via eq. (13) of the main text (Methods). Every median (or mean) relative to a particular acquisition is taken over all the frames in that recording. Since the measured polarization value depends on time resolution (higher resolution bringing more noise), acquisitions of the second and third campaigns (which were acquired at much faster rates) have been re-sampled at the same rate as the first campaign, so as to have homogeneous measurements for all the data.

Speed control | $g$ | $\lambda$ | $T$ | $J$
---|---|---|---|---
Linear | $0.001$ | - | $2.5\times 10^{-3}$ | $10$
Linear | $0.03$ | - | $2.5\times 10^{-3}$ | $10$
Linear | $0.1$ | - | $2.5\times 10^{-3}$ | $10$
Linear | $1.0$ | - | $2.5\times 10^{-3}$ | $10$
Marginal | - | $0.001$ | $1.25\times 10^{-4}$ | $1.0$

Table 2: Parameters of simulations. In this table we report the values of the relevant parameters used in the numerical simulations. The other parameters are $r_{c}=1.2$, $v_{0}=0.05$, $\Delta t_{MRG}=0.01$, $\Delta t_{GAUSS}=0.001$.

Figure S2: Some examples of connected correlation functions. We report some examples of speed connected correlation functions, computed by eq. (11) of the main text. The first point of zero-crossing ($r_{0}$) is visible for each function.
All the functions are normalized such that $C(r=0)=1$. a: Example of speed connected correlation function in experimental data. b: Example of speed connected correlation function in linear speed control model simulations. c: Example of speed connected correlation function in marginal speed control model simulations. For b and c the distance $r$ is measured in simulation units.
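The polarization and mean-speed statistics reported in Table 1 are simple functions of the individual velocities. As a minimal illustration (a sketch assuming NumPy and an $N\times 3$ velocity array, not the authors' analysis code):

```python
import numpy as np

def flock_statistics(v):
    """Per-frame order parameters from an (N, 3) array of bird velocities.

    Polarization: Phi = | (1/N) sum_i v_i / |v_i| |
    Mean speed:   s   = (1/N) sum_i |v_i|
    """
    speeds = np.linalg.norm(v, axis=1)                     # |v_i| for each bird
    phi = np.linalg.norm(np.mean(v / speeds[:, None], axis=0))
    return phi, speeds.mean()

# A perfectly aligned flock has polarization exactly 1.
v_aligned = np.tile([10.0, 0.0, 0.0], (50, 1))
phi, s = flock_statistics(v_aligned)
# phi == 1.0 and s == 10.0 for this aligned configuration
```

The medians quoted in Table 1 are then taken over all frames of a recording.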
# Entropy Partial Transport with Tree Metrics: Theory and Practice

Tam Le* (RIKEN AIP), Truyen Nguyen* (The University of Akron)

###### Abstract

Optimal transport (OT) theory provides powerful tools to compare probability measures. However, OT is limited to nonnegative measures having the same mass, and suffers from serious computational and statistical drawbacks. This has led to several proposals of regularized variants of OT in the recent literature. In this work, we consider an entropy partial transport (EPT) problem for nonnegative measures on a tree having different masses. The EPT is shown to be equivalent to a standard complete OT problem on a one-node extended tree. We derive its dual formulation, then leverage this to propose a novel regularization for EPT which admits fast computation and negative definiteness. To our knowledge, the proposed regularized EPT is the first approach that yields a closed-form solution among available variants of unbalanced OT. For practical applications without prior knowledge about the tree structure for measures, we propose tree-sliced variants of the regularized EPT, computed by averaging the regularized EPT between these measures using random tree metrics, built adaptively from support data points. Exploiting the negative definiteness of our regularized EPT, we introduce a positive definite kernel, and evaluate it against other baselines on benchmark tasks such as document classification with word embedding and topological data analysis. In addition, we empirically demonstrate that our regularization also provides effective approximations.

## 1 Introduction

Optimal transport (OT) theory offers powerful tools to compare probability measures [67]. OT has been applied to various tasks in machine learning [9, 14, 51, 54], statistics [49, 68], and computer graphics [39, 63].
However, OT requires input measures to have the same mass, which may limit its applications in practice since one often needs to deal with measures of unequal masses. For instance, in natural language processing, we can view a document as a measure where each word is regarded as a point in the support with a unit mass. Thus, documents with different lengths lead to associated measures with different masses. To tackle the transport problem for measures having different masses, Caffarelli and McCann [10] proposed the partial optimal transport (POT) problem, where one only transports a fixed amount of mass from one measure into another. Later, Figalli [20] extended the theory of POT, notably regarding the uniqueness of solutions. A different approach is to optimize the sum of a transport functional and two convex entropy functionals which quantify the deviation of the marginals of the transport plan from the input measures [46], i.e., the optimal entropy transport (OET) problem. This formulation recovers many previous works. For example, when the entropy is equal to the total variation distance or the $\ell^{2}$ distance, the OET is respectively equivalent to the generalized Wasserstein distance [56, 57] or the unbalanced mass transport [5]. It is worth noting that the generalized Wasserstein distance shares the same spirit as the Kantorovich-Rubinstein discrepancy [25, 26, 45]. Another variant is the unnormalized optimal transport [23], which mixes the Wasserstein distance and the $\ell^{p}$ distance. There are several applications of the transport problem for measures having different masses, such as in machine learning [22, 31], deep learning [69], topological data analysis [37], computational imaging [44], and computational biology [61].
One important case of the OET problem is when the entropy is equal to the Kullback-Leibler (KL) divergence and a particular cost function is used; then OET is equivalent to the Kantorovich-Hellinger distance (i.e., the Wasserstein-Fisher-Rao distance) [12, 46]. In addition, one can apply a Sinkhorn-based algorithm to efficiently solve the OET problem when the entropy is equal to the KL divergence, i.e., the Sinkhorn-based approach for unbalanced optimal transport (Sinkhorn-UOT) [12, 22]. In [55], Pham et al. showed that the complexity of the Sinkhorn-based algorithm for Sinkhorn-UOT is quadratic, similar to the case of entropic regularized OT [15] for probability measures. However, for large-scale applications where the supports of measures contain a large number of points, the computation of Sinkhorn-UOT becomes prohibitive. Following the sliced-Wasserstein (SW) distance [8, 58], which projects supports into a one-dimensional space and employs the closed-form solution of univariate optimal transport (1d-OT), Bonneel and Coeurjolly [7] proposed the sliced partial optimal transport (SPOT) for nonnegative measures having different masses. Unlike standard 1d-OT, there is no closed-form solution for measures of unequal masses supported on a one-dimensional space. With the assumption of a unit mass on each support point, Bonneel and Coeurjolly [7] derived an efficient algorithm to solve the SPOT problem in quadratic complexity in the worst case; in practice, their algorithm is nearly linear. However, as in SW, the SPOT uses one-dimensional projections of the supports, which limits its capacity to capture the structure of a distribution, especially in high-dimensional settings [43, 47]. In this work, we aim to develop an efficient and scalable approach for the transport problem when input measures have different masses.
Inspired by the tree-sliced Wasserstein (TSW) distance [43], which has a fast closed-form computation and remedies the curse of dimensionality of SW, we propose to consider the entropy partial transport (EPT) problem with tree metrics. At a high level, our main contributions are three-fold: * • We establish a relationship between the EPT problem with mass constraint and a formulation with a Lagrangian multiplier. Then, we employ it to transform the EPT problem into a standard complete OT problem on a suitable one-node extended tree. * • We derive a dual formulation for our EPT problem. We then leverage it to propose a novel regularization which admits a closed-form formula and negative definiteness. Consequently, we introduce positive definite kernels for our regularized EPT. We also derive tree-sliced variants of the regularized EPT for applications without prior knowledge about the tree structure for measures. * • We empirically show that (i) our regularization provides both efficient approximations and fast computations, and (ii) the performances of the proposed kernels for our regularized EPT compare favorably with other baselines in applications.

## 2 Preliminaries

Let ${\mathcal{T}}=(V,E)$ be a tree rooted at node $r$ with nonnegative edge lengths $\\{w_{e}\\}_{e\in E}$, where $V$ is the collection of nodes and $E$ is the collection of edges. For convenience, we use ${\mathcal{T}}$ to denote the set of all nodes together with all points on its edges. We then recall the definition of a tree metric as follows: ###### Definition 2.1 (Tree metric [62](§7, p.145–182)). A metric $\texttt{d}:\Omega\times\Omega\rightarrow[0,\infty)$ is called a tree metric on $\Omega$ if there exists a tree $\mathcal{T}$ such that $\Omega\subseteq\mathcal{T}$ and for $x,y\in\Omega$, $\texttt{d}(x,y)$ equals the length of the (unique) path between $x$ and $y$.
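As a concrete illustration of Definition 2.1, on a finite rooted tree the metric can be evaluated by climbing to the lowest common ancestor of the two nodes and summing the edge lengths along the way. A minimal sketch (the tree, node names, and edge lengths below are hypothetical):

```python
def tree_distance(parent, edge_len, x, y):
    """d(x, y) on a rooted tree with parent[v] = parent of v (root maps to
    itself) and edge_len[v] = length of the edge (parent[v], v)."""
    # Record the distance from x to each of its ancestors (x included).
    up = {}
    d, v = 0.0, x
    while True:
        up[v] = d
        if parent[v] == v:          # reached the root
            break
        d += edge_len[v]
        v = parent[v]
    # Climb from y until we meet an ancestor of x: that node is the LCA.
    d, v = 0.0, y
    while v not in up:
        d += edge_len[v]
        v = parent[v]
    return up[v] + d

# Hypothetical tree: r -> a (length 2), r -> b (length 1), a -> c (length 3).
parent = {"r": "r", "a": "r", "b": "r", "c": "a"}
edge_len = {"a": 2.0, "b": 1.0, "c": 3.0}
# d(c, b) follows the path c -> a -> r -> b, of total length 3 + 2 + 1 = 6.
```

This is only a pointwise illustration; the paper's constructions below work with the whole metric space ${\mathcal{T}}$, including interior points of edges.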
Assume that $V$ is a subset of a vector space, and let $d_{\mathcal{T}}(\cdot,\cdot)$ be the tree metric on ${\mathcal{T}}$. Hereafter, the unique shortest path in ${\mathcal{T}}$ connecting $x$ and $y$ is denoted by $[x,y]$. Let $\omega$ be the unique Borel measure (i.e., the length measure) on ${\mathcal{T}}$ satisfying $\omega([x,y])=d_{\mathcal{T}}(x,y)$ for all $x,y\in{\mathcal{T}}$. Given $x\in{\mathcal{T}}$, the set $\Lambda(x)$ stands for the subtree below $x$. Precisely, $\Lambda(x):=\big{\\{}y\in{\mathcal{T}}:\,x\in[r,y]\big{\\}}.$ (1) We use the notation ${\mathcal{M}}({\mathcal{T}})$ to represent the set of all nonnegative Borel measures on ${\mathcal{T}}$ with finite mass. Also, let $C({\mathcal{T}})$ be the set of all continuous functions on ${\mathcal{T}}$, and $L^{\infty}({\mathcal{T}})$ the collection of all Borel measurable functions on ${\mathcal{T}}$ that are bounded $\omega$-a.e. Then, $L^{\infty}({\mathcal{T}})$ is a Banach space under the norm $\|f\|_{L^{\infty}({\mathcal{T}})}:=\inf\\{a\in{\mathbb{R}}:\,|f(x)|\leq a\mbox{ for $\omega$-a.e. }x\in{\mathcal{T}}\\}.$

## 3 Entropy Partial Transport (EPT) with Tree Metrics

Let $b\geq 0$ be a constant, $c:{\mathcal{T}}\times{\mathcal{T}}\to{\mathbb{R}}$ be a continuous cost with $c(x,x)=0$, $F_{1},\,F_{2}:[0,\infty)\to(0,\infty)$ be entropy functions which are convex and lower semicontinuous, and let $w_{1},w_{2}:{\mathcal{T}}\to[0,\infty)$ be two nonnegative weights. For $\mu,\nu\in{\mathcal{M}}({\mathcal{T}})$, consider the region $\Pi_{\leq}(\mu,\nu):=\Big{\\{}\gamma\in{\mathcal{M}}({\mathcal{T}}\times{\mathcal{T}}):\,\gamma_{1}\leq\mu,\,\gamma_{2}\leq\nu\Big{\\}}$ with $\gamma_{i}$ ($i=1,2$) denoting the $i$th marginal of the measure $\gamma$. For $\gamma\in\Pi_{\leq}(\mu,\nu)$, the Radon-Nikodym derivatives of $\gamma_{1}$ with respect to $\mu$ and of $\gamma_{2}$ with respect to $\nu$ exist due to $\gamma_{1}\leq\mu$ and $\gamma_{2}\leq\nu$.
From now on, we let $f_{1}$ and $f_{2}$ respectively denote these Radon-Nikodym derivatives, i.e., $\gamma_{1}=f_{1}\mu$ and $\gamma_{2}=f_{2}\nu$. Then $0\leq f_{1}\leq 1$ $\mu$-a.e. and $0\leq f_{2}\leq 1$ $\nu$-a.e. Throughout the paper, $\bar{m}$ stands for the minimum of the total masses of $\mu$ and $\nu$. That is, $\bar{m}:=\min\\{\mu({\mathcal{T}}),\nu({\mathcal{T}})\\}$. Inspired by [10, 46], we fix a number $m\in[0,\bar{m}]$ and consider the following EPT problem: $\displaystyle{\mathcal{W}}_{c,m}(\mu,\nu):=\inf_{\gamma\in\Pi_{\leq}(\mu,\nu),\,\gamma({\mathcal{T}}\times{\mathcal{T}})=m}\Big{[}{\mathcal{F}}_{1}(\gamma_{1}|\mu)$ $\displaystyle+{\mathcal{F}}_{2}(\gamma_{2}|\nu)+b\,\int_{{\mathcal{T}}\times{\mathcal{T}}}c(x,y)\gamma(dx,dy)\Big{]},$ (2) where ${\mathcal{F}}_{1}(\gamma_{1}|\mu):=\int_{\mathcal{T}}w_{1}(x)F_{1}(f_{1}(x))\mu(dx)$ and ${\mathcal{F}}_{2}(\gamma_{2}|\nu):=\int_{\mathcal{T}}w_{2}(x)F_{2}(f_{2}(x))\nu(dx)$ are the weighted relative entropies. The role of the two entropies in the minimization problem is to force the marginals of $\gamma$ to be close to $\mu$ and $\nu$, respectively. Let us introduce a Lagrange multiplier $\lambda\in{\mathbb{R}}$ conjugate to the constraint $\gamma({\mathcal{T}}\times{\mathcal{T}})=m$. As a result, we instead study the following formulation: $\displaystyle\mathrm{ET}_{c,\lambda}(\mu,\nu):=\inf_{\gamma\in\Pi_{\leq}(\mu,\nu)}\Big{[}{\mathcal{F}}_{1}(\gamma_{1}|\mu)+{\mathcal{F}}_{2}(\gamma_{2}|\nu)$ $\displaystyle+b\,\int_{{\mathcal{T}}\times{\mathcal{T}}}[c(x,y)-\lambda]\gamma(dx,dy)\Big{]}.$ In this paper, we focus on the specific entropy functions $F_{1}(s)=F_{2}(s)=|s-1|$.
Thus, the quantity of interest becomes $\displaystyle\mathrm{ET}_{c,\lambda}(\mu,\nu)=\inf_{\gamma\in\Pi_{\leq}(\mu,\nu)}\mathcal{C}_{\lambda}(\gamma),$ (3) where $\mathcal{C}_{\lambda}(\gamma)$ is defined as follows: $\displaystyle\mathcal{C}_{\lambda}(\gamma):=\int_{\mathcal{T}}w_{1}[1-f_{1}(x)]\mu(dx)+\int_{\mathcal{T}}w_{2}[1-f_{2}(x)]\nu(dx)$ $\displaystyle+b\,\int_{{\mathcal{T}}\times{\mathcal{T}}}[c(x,y)-\lambda]\gamma(dx,dy)$ $\displaystyle=\int_{\mathcal{T}}w_{1}\mu(dx)+\int_{\mathcal{T}}w_{2}\nu(dx)-\int_{\mathcal{T}}w_{1}\gamma_{1}(dx)$ $\displaystyle-\int_{\mathcal{T}}w_{2}\gamma_{2}(dx)+b\int_{{\mathcal{T}}\times{\mathcal{T}}}[c(x,y)-\lambda]\gamma(dx,dy).$ (4) Notice that problem (3) is a generalization of the generalized Wasserstein distance ${\mathcal{W}}_{1}^{a,b}(\mu,\nu)$ introduced in [56, 57]. We next display some relationships between problem (2) with mass constraint $m$ and problem (3) with Lagrange multiplier $\lambda$. For this, let $\Gamma^{0}(\lambda)$ denote the set of all optimal plans (i.e., minimizers $\gamma$) for $\mathrm{ET}_{c,\lambda}(\mu,\nu)$. Then, since $\mathcal{C}_{\lambda}(\gamma)$ is an affine function of $\gamma\in\Pi_{\leq}(\mu,\nu)$, the set $\Gamma^{0}(\lambda)$ is a nonempty convex set. Indeed, for any $\tilde{\gamma},\hat{\gamma}\in\Gamma^{0}(\lambda)$ and for any $t\in[0,1]$ we have $(1-t)\tilde{\gamma}+t\hat{\gamma}\in\Gamma^{0}(\lambda)$ due to $\mathcal{C}_{\lambda}((1-t)\tilde{\gamma}+t\hat{\gamma})=(1-t)\mathcal{C}_{\lambda}(\tilde{\gamma})+t\mathcal{C}_{\lambda}(\hat{\gamma})\leq(1-t)\mathcal{C}_{\lambda}(\gamma)+t\mathcal{C}_{\lambda}(\gamma)=\mathcal{C}_{\lambda}(\gamma)$ for every $\gamma\in\Pi_{\leq}(\mu,\nu)$. The following result extends Corollary 2.1 in [10] and reveals the connection between problem (2) and problem (3).

###### Theorem 3.1.
Let $u(\lambda):=-\mathrm{ET}_{c,\lambda}(\mu,\nu)$ for $\lambda\in{\mathbb{R}}$, and denote $\partial u(\lambda):=\Big{\\{}p\in{\mathbb{R}}:u(t)\geq u(\lambda)+p(t-\lambda),\forall t\in{\mathbb{R}}\Big{\\}}$ for the set of all subgradients of $u$ at $\lambda$. Also, set $\partial u({\mathbb{R}}):=\cup_{\lambda\in{\mathbb{R}}}\partial u(\lambda)$. Then, we have * i) $u$ is a convex function on ${\mathbb{R}}$, and $\partial u(\lambda)=\big{\\{}b\,\gamma({\mathcal{T}}\times{\mathcal{T}}):\gamma\in\Gamma^{0}(\lambda)\big{\\}}\quad\forall\lambda\in{\mathbb{R}}.$ Also if $\lambda_{1}<\lambda_{2}$, then $m_{1}\leq m_{2}$ for every $m_{1}\in\partial u(\lambda_{1})$ and $m_{2}\in\partial u(\lambda_{2})$. * ii) $u$ is differentiable at $\lambda$ if and only if every optimal plan in $\Gamma^{0}(\lambda)$ has the same mass. When this happens, we in addition have $u^{\prime}(\lambda)=b\,\gamma({\mathcal{T}}\times{\mathcal{T}})$ for any $\gamma\in\Gamma^{0}(\lambda)$. * iii) If there exists a constant $M>0$ such that $w_{1}(x)+w_{2}(y)\leq b\,c(x,y)+M$ for all $x,y\in{\mathcal{T}}$, then $\partial u({\mathbb{R}})=[0,b\,\bar{m}]$. Moreover, $u(\lambda)=-\int_{\mathcal{T}}w_{1}\mu(dx)-\int_{\mathcal{T}}w_{2}\nu(dx)$ when $\lambda<-M$, and $u^{\prime}(\lambda)=b\,\bar{m}$ for $\lambda>\|c\|_{L^{\infty}({\mathcal{T}}\times{\mathcal{T}})}$. Proof is placed in the Supplementary (§A.1). For any $m\in[0,\bar{m}]$, part iii) of Theorem 3.1 implies that there exists $\lambda\in{\mathbb{R}}$ such that $b\,m\in\partial u(\lambda)$. It then follows from part i) of this theorem that $m=\gamma^{*}({\mathcal{T}}\times{\mathcal{T}})$ for some $\gamma^{*}\in\Gamma^{0}(\lambda)$. It is also clear that this $\gamma^{*}$ is an optimal plan for ${\mathcal{W}}_{c,m}(\mu,\nu)$, and $\displaystyle{\mathcal{W}}_{c,m}(\mu,\nu)=\mathrm{ET}_{c,\lambda}(\mu,\nu)+\lambda b\,m.$ Thus solving the auxiliary problem (3) gives us a solution to the original problem (2).
When $u$ is differentiable, the relation between $m$ and $\lambda$ is given explicitly as $u^{\prime}(\lambda)=b\,m$. Note that the above selection of $\lambda$ is unique only if the function $u$ is strictly convex. Nevertheless, it enjoys the following monotonicity regardless of the uniqueness: if $m_{1}<m_{2}$, then $\lambda_{1}\leq\lambda_{2}$. Indeed, we have $m_{1}=\gamma^{1}({\mathcal{T}}\times{\mathcal{T}})$ and $m_{2}=\gamma^{2}({\mathcal{T}}\times{\mathcal{T}})$ for some $\gamma^{1}\in\Gamma^{0}(\lambda_{1})$ and $\gamma^{2}\in\Gamma^{0}(\lambda_{2})$. Since $\gamma^{1}({\mathcal{T}}\times{\mathcal{T}})<\gamma^{2}({\mathcal{T}}\times{\mathcal{T}})$, one has $\lambda_{1}\leq\lambda_{2}$ by i) of Theorem 3.1. To investigate problem (3), we recast it as a standard complete OT problem by using an observation in [10]. More precisely, let $\hat{s}$ be a point outside ${\mathcal{T}}$ and consider the set $\hat{\mathcal{T}}:={\mathcal{T}}\cup\\{\hat{s}\\}$. We next extend the cost function to $\hat{\mathcal{T}}\times\hat{\mathcal{T}}$ as follows: $\hat{c}(x,y):=\left\\{\begin{array}[]{lr}\\!\\!b[c(x,y)-\lambda]\hskip 10.00002pt\mbox{ if }x,y\in{\mathcal{T}},\\\ \\!\\!w_{1}(x)\hskip 40.00006pt\mbox{ if }x\in{\mathcal{T}}\mbox{ and }y=\hat{s},\\\ \\!\\!w_{2}(y)\hskip 40.00006pt\mbox{ if }x=\hat{s}\mbox{ and }y\in{\mathcal{T}},\\\ \\!\\!0\hskip 60.00009pt\mbox{ if }x=y=\hat{s}.\end{array}\right.$ The measures $\mu,\nu$ are extended accordingly by adding a Dirac mass at the isolated point $\hat{s}$: $\hat{\mu}=\mu+\nu({\mathcal{T}})\delta_{\hat{s}}$ and $\hat{\nu}=\nu+\mu({\mathcal{T}})\delta_{\hat{s}}$.
As $\hat{\mu},\hat{\nu}$ have the same total mass on $\hat{\mathcal{T}}$, we can consider the standard complete OT problem between $\hat{\mu}$ and $\hat{\nu}$ as follows: $\displaystyle\mathrm{KT}(\hat{\mu},\hat{\nu}):=\inf_{\hat{\gamma}\in\Gamma(\hat{\mu},\hat{\nu})}\int_{\hat{\mathcal{T}}\times\hat{\mathcal{T}}}\hat{c}(x,y)\hat{\gamma}(dx,dy),$ (5) where $\Gamma(\hat{\mu},\hat{\nu}):=\Big{\\{}\hat{\gamma}\in{\mathcal{M}}(\hat{\mathcal{T}}\times\hat{\mathcal{T}}):\hat{\mu}(U)=\hat{\gamma}(U\times\hat{\mathcal{T}}),\,\hat{\nu}(U)=\hat{\gamma}(\hat{\mathcal{T}}\times U)\mbox{ for all Borel sets }U\subset\hat{\mathcal{T}}\Big{\\}}$. A one-to-one correspondence between $\gamma\in\Pi_{\leq}(\mu,\nu)$ and $\hat{\gamma}\in\Gamma(\hat{\mu},\hat{\nu})$ is given by $\displaystyle\hat{\gamma}=\gamma+[(1-f_{1})\mu]\otimes\delta_{\hat{s}}+\delta_{\hat{s}}\otimes[(1-f_{2})\nu]$ $\displaystyle+\gamma({\mathcal{T}}\times{\mathcal{T}})\delta_{(\hat{s},\hat{s})}.$ (6) Indeed, if $\gamma\in\Pi_{\leq}(\mu,\nu)$, then it is clear that $\hat{\gamma}$ defined by (6) satisfies $\hat{\gamma}\in\Gamma(\hat{\mu},\hat{\nu})$. The converse is guaranteed by the next technical result.

###### Lemma 3.2.

For $\hat{\gamma}\in\Gamma(\hat{\mu},\hat{\nu})$, let $\gamma$ be the restriction of $\hat{\gamma}$ to ${\mathcal{T}}\times{\mathcal{T}}$. Then, relation (6) holds and $\gamma\in\Pi_{\leq}(\mu,\nu)$. Proof is placed in the Supplementary (§A.2). These observations in particular display the following connection between the EPT problem and the standard complete OT problem.

###### Proposition 3.3 (EPT versus complete OT).

For every $\mu,\nu\in{\mathcal{M}}({\mathcal{T}})$, we have $\mathrm{ET}_{c,\lambda}(\mu,\nu)=\mathrm{KT}(\hat{\mu},\hat{\nu})$. Moreover, relation (6) gives a one-to-one correspondence between the optimal solution $\gamma$ for the EPT problem (3) and the optimal solution $\hat{\gamma}$ for the standard complete OT problem (5). Proof is placed in the Supplementary (§A.3).
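For discrete measures, the equivalence of Proposition 3.3 can be checked directly: build the extended cost $\hat{c}$ and the extended masses, then solve the balanced problem (5) as a linear program. A minimal sketch using SciPy's LP solver (the two-point tree, masses, and parameter values are hypothetical, and a generic LP is used only for illustration, not as an efficient solver):

```python
import numpy as np
from scipy.optimize import linprog

def balanced_ot(C, a, b):
    """Minimize <C, G> over nonnegative couplings G whose row sums equal a
    and whose column sums equal b (requires a.sum() == b.sum())."""
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # row-sum constraint for G[i, :]
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # column-sum constraint for G[:, j]
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

# Hypothetical discrete example: two support points with tree distances D.
D = np.array([[0.0, 3.0], [3.0, 0.0]])
mu = np.array([1.0, 0.5])                  # total mass 1.5
nu = np.array([0.5, 0.5])                  # total mass 1.0 (unbalanced)
b_coef, lam, w1, w2 = 1.0, 0.2, 1.0, 1.0   # b, lambda and constant weights

n = len(mu)
C_hat = np.zeros((n + 1, n + 1))
C_hat[:n, :n] = b_coef * (D - lam)         # transport within the tree
C_hat[:n, n] = w1                          # cost of leaving mu-mass unmatched
C_hat[n, :n] = w2                          # cost of leaving nu-mass unmatched
mu_hat = np.append(mu, nu.sum())           # extended measures: equal total mass
nu_hat = np.append(nu, mu.sum())

et = balanced_ot(C_hat, mu_hat, nu_hat)    # ET_{c,lambda}(mu, nu) = KT(mu_hat, nu_hat)
# Here the optimum matches both diagonal pairs and destroys the excess
# mu-mass at the virtual node, giving et = -0.1 - 0.1 + 0.5 = 0.3.
```

The same value is obtained from the primal objective (3)-(4) with the plan that transports mass 0.5 along each diagonal, confirming the equivalence on this toy instance.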
### 3.1 Dual Formulations

The relationship given in Proposition 3.3 allows us to obtain the dual formulation of the EPT problem (3) from that of problem (5) proved in [10, Corollary 2.6].

###### Theorem 3.4 (Dual formula for general cost).

For any $\lambda\geq 0$ and nonnegative weights $w_{1}(x),w_{2}(x)$, we have $\mathrm{ET}_{c,\lambda}(\mu,\nu)=\sup_{(u,v)\in{\mathbb{K}}}\Big{[}\int_{{\mathcal{T}}}u(x)\mu(dx)+\int_{{\mathcal{T}}}v(x)\nu(dx)\Big{]},$ where ${\mathbb{K}}:=\Big{\\{}(u,v):\,u\leq w_{1},\,-b\lambda+\inf_{x\in{\mathcal{T}}}[b\,c(x,y)-w_{1}(x)]\leq v(y)\leq w_{2}(y),\,u(x)+v(y)\leq b[c(x,y)-\lambda]\Big{\\}}$. Proof is placed in the Supplementary (§A.4). This dual formula is our main theoretical result and can be rewritten more explicitly when the cost $c$ is the tree distance. Hereafter, we use $c(x,y)=d_{\mathcal{T}}(x,y)$. To ease the notation, we simply write $\mathrm{ET}_{\lambda}(\mu,\nu)$ for $\mathrm{ET}_{d_{\mathcal{T}},\lambda}(\mu,\nu)$.

###### Corollary 3.5 (Dual formula for tree distance).

Assume that $\lambda\geq 0$ and the nonnegative weights $w_{1},w_{2}$ are $b$-Lipschitz w.r.t. $d_{\mathcal{T}}$. Then, we have $\displaystyle\mathrm{ET}_{\lambda}(\mu,\nu)=\sup\left\\{\int_{\mathcal{T}}f(\mu-\nu):\,f\in\mathbb{L}\right\\}$ $\displaystyle-\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})+\nu({\mathcal{T}})\big{]},$ (7) where $\mathbb{L}:=\Big{\\{}f\in C({\mathcal{T}}):\,-w_{2}-\frac{b\lambda}{2}\leq f\leq w_{1}+\frac{b\lambda}{2},\,|f(x)-f(y)|\leq b\,d_{\mathcal{T}}(x,y)\Big{\\}}$. Proof is placed in the Supplementary (§A.5). Corollary 3.5 extends the dual formulation for the generalized Wasserstein distance ${\mathcal{W}}_{1}^{a,b}(\mu,\nu)$ proved in [57, Theorem 2] and [13]. In the next section, we will leverage (7) to propose an effective regularization for computation in practice.

###### Remark 3.6.
An example of a $b$-Lipschitz weight is $w(x)=a_{1}\,d_{\mathcal{T}}(x,x_{0})+a_{0}$ for some $x_{0}\in{\mathcal{T}}$ and for some constants $a_{1}\in[0,b]$ and $a_{0}\in[0,\infty)$. As a consequence of the dual formulation, we obtain the following geometric properties:

###### Proposition 3.7 (Geometric structures of metric d).

Assume that $\lambda\geq 0$ and the weights $w_{1},w_{2}$ are positive and $b$-Lipschitz w.r.t. $d_{\mathcal{T}}$. Define $d(\mu,\nu):=\mathrm{ET}_{\lambda}(\mu,\nu)+\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})+\nu({\mathcal{T}})\big{]}$. Then, we have * i) $d(\mu+\sigma,\nu+\sigma)=d(\mu,\nu)$, $\forall\sigma\in{\mathcal{M}}({\mathcal{T}})$. * ii) $d$ is a divergence and satisfies the triangle inequality $d(\mu,\nu)\leq d(\mu,\sigma)+d(\sigma,\nu)$. * iii) If in addition $w_{1}=w_{2}$, then $({\mathcal{M}}({\mathcal{T}}),d)$ is a complete metric space. Moreover, it is a geodesic space in the sense that for every two points $\mu$ and $\nu$ in ${\mathcal{M}}({\mathcal{T}})$ there exists a path $\varphi:[0,a]\to{\mathcal{M}}({\mathcal{T}})$ with $a:=d(\mu,\nu)$ such that $\varphi(0)=\mu$, $\varphi(a)=\nu$, and $d(\varphi(t),\varphi(s))=|t-s|\quad\mbox{for all }t,s\in[0,a].$ Proof is placed in the Supplementary (§A.6). Let $m\in[0,\bar{m}]$, and choose $\lambda\geq 0$ such that there exists an optimal plan $\gamma^{0}$ for $\mathrm{ET}_{\lambda}(\mu,\nu)$ with $\gamma^{0}({\mathcal{T}}\times{\mathcal{T}})=m$. As pointed out right after Theorem 3.1, this choice of $\lambda$ is possible.
Then, the proof of Lemma A.1 in the Supplementary (§A.6) shows that $\displaystyle\inf_{\gamma\in\Pi_{\leq}(\mu,\nu),\,\gamma({\mathcal{T}}\times{\mathcal{T}})=m}\Big{[}{\mathcal{F}}_{1}(\gamma_{1}|\mu)+{\mathcal{F}}_{2}(\gamma_{2}|\nu)$ $\displaystyle+b\,\int_{{\mathcal{T}}\times{\mathcal{T}}}c(x,y)\gamma(dx,dy)\Big{]}\leq d(\mu,\nu).$ Moreover, equality holds if and only if there exists an optimal plan $\gamma^{0}$ for $\mathrm{ET}_{\lambda}(\mu,\nu)$ such that $m=\gamma^{0}({\mathcal{T}}\times{\mathcal{T}})=\frac{1}{2}[\mu({\mathcal{T}})+\nu({\mathcal{T}})]$. The necessary conditions for the latter to hold are $\mu({\mathcal{T}})=\nu({\mathcal{T}})$ and $m=\bar{m}$.

### 3.2 An Efficient Regularization for Entropy Partial Transport with Tree Metrics

First observe that any $f\in\mathbb{L}$ can be represented by $f(x)=f(r)+\int_{[r,x]}g(y)\omega(dy)$ for some function $g\in L^{\infty}({\mathcal{T}})$ with $\|g\|_{L^{\infty}({\mathcal{T}})}\leq b$. Note that the condition $|f(x)-f(y)|\leq b\,d_{\mathcal{T}}(x,y)$ is equivalent to $\|g\|_{L^{\infty}({\mathcal{T}})}\leq b$. It follows that $\mathbb{L}\subset\mathbb{L}_{0}$, where, for $0\leq\alpha\leq\frac{1}{2}[b\lambda+w_{1}(r)+w_{2}(r)]$, we define $\mathbb{L}_{\alpha}$ as the collection of all functions $f$ of the form $f(x)=s+\int_{[r,x]}g(y)\omega(dy),$ with $s$ a constant in the interval $\Big{[}-w_{2}(r)-\frac{b\lambda}{2}+\alpha,w_{1}(r)+\frac{b\lambda}{2}-\alpha\Big{]}$ and with $\|g\|_{L^{\infty}({\mathcal{T}})}\leq b$.
This leads us to consider the following regularization of $\mathrm{ET}_{\lambda}(\mu,\nu)$: $\displaystyle\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu):=\sup\left\\{\int_{\mathcal{T}}f(\mu-\nu):\,f\in\mathbb{L}_{\alpha}\right\\}$ $\displaystyle-\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})+\nu({\mathcal{T}})\big{]}.$ (8) In particular, when $\alpha=0$, since $\mathbb{L}\subset\mathbb{L}_{0}$, the dual formulation shows that $\widetilde{\mathrm{ET}}_{\lambda}^{0}(\mu,\nu)$ is an upper bound for $\mathrm{ET}_{\lambda}(\mu,\nu)$. The next result gives a closed-form formula for $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)$ and is our main formula used for computation in practice.

###### Proposition 3.8 (Closed form for regularized EPT).

Assume that $\lambda,w_{1}(r),w_{2}(r)$ are nonnegative numbers. Then, for $0\leq\alpha\leq\frac{1}{2}[b\lambda+w_{1}(r)+w_{2}(r)]$, we have $\displaystyle\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)=\int_{{\mathcal{T}}}|\mu(\Lambda(x))-\nu(\Lambda(x))|\,\omega(dx)$ $\displaystyle-\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})+\nu({\mathcal{T}})\big{]}+\big{[}w_{i}(r)+\frac{b\lambda}{2}-\alpha\big{]}|\mu({\mathcal{T}})-\nu({\mathcal{T}})|$ with $i:=1$ if $\mu({\mathcal{T}})\geq\nu({\mathcal{T}})$ and $i:=2$ if $\mu({\mathcal{T}})<\nu({\mathcal{T}})$. In particular, the map $\alpha\longmapsto\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)$ is nonincreasing and $|\widetilde{\mathrm{ET}}_{\lambda}^{\alpha_{1}}(\mu,\nu)-\widetilde{\mathrm{ET}}_{\lambda}^{\alpha_{2}}(\mu,\nu)|=|\alpha_{1}-\alpha_{2}||\mu({\mathcal{T}})-\nu({\mathcal{T}})|.$ Proof is placed in the Supplementary (§A.7). It is also possible to use $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)$ to upper or lower bound the distance $\mathrm{ET}_{\lambda}(\mu,\nu)$ as follows:

###### Proposition 3.9.

Assume that $\lambda\geq 0$ and the weights $w_{1},w_{2}$ are $b$-Lipschitz w.r.t. $d_{\mathcal{T}}$.
Then, $\mathrm{ET}_{\lambda}(\mu,\nu)\leq\widetilde{\mathrm{ET}}_{\lambda}^{0}(\mu,\nu).$ In addition, if $[4L_{{\mathcal{T}}}-\lambda]b\leq w_{1}(r)+w_{2}(r)$ where $L_{{\mathcal{T}}}:=\max_{x\in{\mathcal{T}}}\omega([r,x])$, then $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)\leq\mathrm{ET}_{\lambda}(\mu,\nu),$ for every $2bL_{{\mathcal{T}}}\leq\alpha\leq\frac{1}{2}[b\lambda+w_{1}(r)+w_{2}(r)]$. Proof is placed in the Supplementary (§A.8). Analogous to Proposition 3.7, we obtain:

###### Proposition 3.10 (Geometric structures of regularized metric $d_{\alpha}$).

Assume that $\lambda,w_{1}(r),w_{2}(r)$ are nonnegative numbers. For $0\leq\alpha<\frac{b\lambda}{2}+\min\\{w_{1}(r),w_{2}(r)\\}$, define $d_{\alpha}(\mu,\nu):=\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)+\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})+\nu({\mathcal{T}})\big{]}.$ (9) Then, we have * i) $d_{\alpha}(\mu+\sigma,\nu+\sigma)=d_{\alpha}(\mu,\nu)$, $\forall\sigma\in{\mathcal{M}}({\mathcal{T}})$. * ii) $d_{\alpha}$ is a divergence and satisfies the triangle inequality $d_{\alpha}(\mu,\nu)\leq d_{\alpha}(\mu,\sigma)+d_{\alpha}(\sigma,\nu)$. * iii) If in addition $w_{1}(r)=w_{2}(r)$, then $({\mathcal{M}}({\mathcal{T}}),d_{\alpha})$ is a complete metric space. Moreover, it is a geodesic space in the sense defined in part iii) of Proposition 3.7, with $d_{\alpha}$ replacing $d$. Proof is placed in the Supplementary (§A.9).

###### Proposition 3.11.

With the same assumptions as in Proposition 3.8 for $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}$ and in Proposition 3.10 for $d_{\alpha}$, both $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}$ and $d_{\alpha}$ are negative definite. Proof is placed in the Supplementary (§A.10). From Proposition 3.11 and following Berg et al.
[6] (Theorem 3.2.2, p.74), given $t>0$, the kernels $k_{\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}}(\mu,\nu):=\exp\left(-t\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)\right)$ and $k_{d_{\alpha}}(\mu,\nu):=\exp\left(-td_{\alpha}(\mu,\nu)\right)$ are positive definite.

### 3.3 Tree-sliced Variants by Sampling Tree Metrics

In most practical applications, we do not have prior knowledge about a tree structure for the measures. Therefore, we need to choose or sample tree metrics from the support data points for a given task. We use the tree metric sampling methods of [43]: (i) partition-based tree metric sampling for a low-dimensional space, or (ii) clustering-based tree metric sampling for a high-dimensional space. These tree metric sampling methods are not only fast to compute (e.g., the complexity of the clustering-based tree metric sampling is $\mathcal{O}(H_{\mathcal{T}}m\log\kappa)$ when we set $\kappa$ clusters for the farthest-point clustering [24] and $H_{\mathcal{T}}$ for the predefined deepest tree level, for $m$ input support data points), but also adaptive to the distribution of the supports. We further propose tree-sliced variants of the regularized EPT, computed by averaging the regularized EPT over those randomly sampled tree metrics. One advantage is to reduce the quantization effects or cluster-sensitivity problems (i.e., support data points being quantized into an adjacent hypercube, or assigned to an adjacent cluster, respectively) within the tree metric sampling procedure. Although one can leverage tree metrics to approximate arbitrary metrics [3, 4, 11, 19, 29], our goal is rather to sample tree metrics and use them as ground metrics in the regularized EPT, similar to TSW. Although one-dimensional projections do not have appealing distortion properties, they remain useful for SPOT (and for SW and the sliced Gromov-Wasserstein distance [66]).
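The reason tree-sliced averaging is cheap is that the closed form of Proposition 3.8 evaluates $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}$ in a single pass over the tree: each edge contributes its length times the absolute difference of the subtree masses of $\mu$ and $\nu$. A minimal sketch for measures supported on the nodes of a rooted tree (the tree, masses, and parameter values are hypothetical, and the quadratic subtree-mass accumulation is for clarity, not efficiency):

```python
def regularized_ept(parent, edge_len, mu, nu, b, lam, alpha, w1_r, w2_r):
    """Closed form of the regularized EPT on a rooted tree: sum over edges of
    edge length times |mu(subtree) - nu(subtree)|, plus the boundary terms
    of Proposition 3.8 (w1_r, w2_r are the weights at the root)."""
    # Subtree masses: add each node's mass to every one of its ancestors.
    sub_mu, sub_nu = dict(mu), dict(nu)
    for v in parent:
        u = v
        while parent[u] != u:              # climb until the root
            u = parent[u]
            sub_mu[u] = sub_mu.get(u, 0.0) + mu.get(v, 0.0)
            sub_nu[u] = sub_nu.get(u, 0.0) + nu.get(v, 0.0)
    # Integral of |mu(Lambda(x)) - nu(Lambda(x))| against the length measure.
    edge_term = sum(edge_len[v] * abs(sub_mu.get(v, 0.0) - sub_nu.get(v, 0.0))
                    for v in edge_len)
    m_mu, m_nu = sum(mu.values()), sum(nu.values())
    w_r = w1_r if m_mu >= m_nu else w2_r
    return (edge_term - 0.5 * b * lam * (m_mu + m_nu)
            + (w_r + 0.5 * b * lam - alpha) * abs(m_mu - m_nu))

# Hypothetical tree: r -> a (length 2), r -> b (length 1), a -> c (length 3).
parent = {"r": "r", "a": "r", "b": "r", "c": "a"}
edge_len = {"a": 2.0, "b": 1.0, "c": 3.0}
mu = {"a": 1.0, "c": 0.5}
nu = {"b": 0.5, "c": 0.5}
val = regularized_ept(parent, edge_len, mu, nu,
                      b=1.0, lam=0.2, alpha=0.0, w1_r=1.0, w2_r=1.0)
# edge term 2.5, mass terms -0.25 + 0.55, so val = 2.8
```

A tree-sliced variant would average such values over several randomly sampled trees.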
In the same vein, we believe that trees with high distortion are still useful for EPT, similarly to TSW. Moreover, one may not need to spend excessive effort optimizing $\textnormal{ET}_{\lambda}$ (in Equation (3.5)) for a randomly sampled tree metric, since doing so can lead to overfitting within the computation of the EPT itself. Therefore, the proposed efficient regularization of EPT (e.g., $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ in Equation (3.2)) is not only fast to compute (i.e., closed-form), but also helps overcome this overfitting problem.

## 4 Discussion and Related Work

One can leverage tree metrics to approximate arbitrary metrics for speeding up a computation [3, 4, 11, 19, 29]. For instance, (i) Indyk and Thaper [30] applied tree metrics (e.g., quadtrees) to approximate OT with a Euclidean cost metric for fast image retrieval. (ii) Sato et al. [60] considered a generalized Kantorovich-Rubinstein discrepancy [25, 26, 45] with general weights for unbalanced OT, and used a quadtree as in [30] to approximate the proposed distance via dynamic programming with infinitely many states. They then derived an efficient algorithm with quasi-linear time complexity to speed up the dynamic programming computation by leveraging high-level programming techniques. However, such approximations following the approach of [30] result in large distortions in high-dimensional spaces [53].

## 5 Experiments

In this section, we first illustrate that $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ (Equation (3.2)) is an efficient approximation of $\textnormal{ET}_{\lambda}$ (Equation (3.5)). Then, we evaluate the proposed $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ and $d_{\alpha}$ (Equation (9)) for comparing measures in document classification with word embedding and in topological data analysis (TDA). Experiments are evaluated with an Intel Xeon CPU E7-8891v3 2.80GHz and 256GB RAM.

##### Documents with word embedding.
We consider each document as a measure where each word is regarded as a point in the support with a unit mass. Following [36, 43], we applied the word2vec word embedding [50], pretrained on Google News (https://code.google.com/p/word2vec), containing about 3 million words/phrases. Each word/phrase in a document is mapped into a vector in $\mathbb{R}^{300}$. We removed all SMART stop words [59], and dropped words that are not available in the pretrained word2vec.

##### Geometric structured data via persistence diagrams in TDA.

TDA has recently emerged in the machine learning community as a powerful tool to analyze geometric structured data such as material data or linked twist maps [1, 37, 42]. TDA applies algebraic topology methods (e.g., persistence homology) to extract robust topological features (e.g., connected components, rings, cavities) and outputs a multiset of 2-dimensional points, i.e., a persistence diagram (PD). The coordinates of a 2-dimensional point in a PD correspond to the birth and death times of a particular topological feature. Therefore, each point in a PD summarizes the life span of a topological feature. We can regard PDs as measures where each 2-dimensional point is considered as a point in the support with a unit mass.

##### Tree metric sampling.

In our experiments, we do not have prior knowledge about tree metrics, either for word embeddings in documents or for 2-dimensional points in persistence diagrams (PDs). To compute the EPT, e.g., $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ and its associated $d_{\alpha}$, we considered $n_{s}$ randomized tree metrics. We employed the clustering-based tree metric sampling for word embeddings in documents (i.e., the high-dimensional space $\mathbb{R}^{300}$), while we used the partition-based tree metric sampling for 2-dimensional points in PDs (i.e., the low-dimensional space $\mathbb{R}^{2}$).
Those tree metric sampling methods are built with a predefined deepest level $H_{\mathcal{T}}$ of tree $\mathcal{T}$ as a stopping condition, as in [43].

##### Baselines and setup.

We considered 2 baselines based on OT theory for measures with different masses: (i) Sinkhorn-UOT [12, 22], and (ii) SPOT [7]. Following [43], we apply the kernel approach in the form $\exp(-t\bar{d})$ with SVM for document classification with word embedding, where $\bar{d}$ is a discrepancy between measures and $t>0$. We also employed this kernel approach for various tasks in TDA, e.g., orbit recognition and object shape classification with SVM, as well as change point detection for material data analysis with the kernel Fisher discriminant ratio (KFDR) [27]. While kernels for $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ and $d_{\alpha}$ are positive definite, kernels for Sinkhorn-UOT and SPOT are empirically indefinite (in practice, we observed negative eigenvalues of some Gram matrices corresponding to kernels for Sinkhorn-UOT and SPOT). When kernels are indefinite, we regularized the corresponding Gram matrices by adding a sufficiently large diagonal term, as in [15, 43]. For SVM, we randomly split each dataset into $70\%/30\%$ for training and test with 10 repeats. We choose hyperparameters via cross validation: we choose $1/t$ from $\\{q_{10},q_{20},q_{50}\\}$, where $q_{s}$ is the $s\%$ quantile of a subset of corresponding discrepancies observed on a training set; we use the 1-vs-1 strategy with Libsvm (https://www.csie.ntu.edu.tw/~cjlin/libsvm/) for multi-class classification; and we choose the SVM regularization from $\left\\{10^{-2:1:2}\right\\}$. For Sinkhorn-UOT, we select the entropic regularization from $\left\\{0.01,0.05,0.1,0.5,1\right\\}$. Following Proposition 3.9, we take $\alpha=0$ for $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ and $d_{\alpha}$ in all our experiments.
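The kernel construction and the diagonal regularization for indefinite Gram matrices described above can be sketched as follows. This is a minimal sketch assuming a precomputed pairwise discrepancy matrix; the regularization shift shown here is one common choice, not necessarily the exact one used in [15, 43].

```python
import numpy as np

def gram_matrix(D, t):
    """Gram matrix of the kernel exp(-t * d) from pairwise discrepancies D."""
    return np.exp(-t * np.asarray(D))

def regularize_if_indefinite(K, eps=1e-9):
    """Add a diagonal term large enough to make an (empirically) indefinite
    Gram matrix positive semi-definite."""
    lam_min = np.linalg.eigvalsh(K).min()
    return K + (eps - lam_min) * np.eye(len(K)) if lam_min < 0 else K

def bandwidth_from_quantile(train_discrepancies, s=20):
    """Choose 1/t as the s% quantile q_s of discrepancies seen on training data."""
    return 1.0 / np.quantile(train_discrepancies, s / 100.0)
```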
### 5.1 Efficient Approximation of $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ for $\textnormal{ET}_{\lambda}$

Figure 1: Relative difference between $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and $\textnormal{ET}_{\lambda}$ w.r.t. the Lipschitz constants of $w_{1},w_{2}$.

We randomly sample 500K pairs of documents in the TWITTER dataset. Following Proposition 3.3, we compute $\textnormal{ET}_{\lambda}$ via the corresponding KT (Equation (5)). Our goal is to compare $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ to $\textnormal{ET}_{\lambda}$.

Figure 2: Relative difference between $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and $\textnormal{ET}_{\lambda}$ w.r.t. $\lambda$ when $a_{1}=b$. (LT := $L_{\mathcal{T}}$)

Change Lipschitz constants. We choose $w_{1}(x)=w_{2}(x)=a_{1}d_{\mathcal{T}}(r,x)+a_{0}$, and set $\lambda=b=1$, $a_{0}=1$. In particular, $a_{1}\in[0,b]$ since $w_{1},w_{2}$ are $b$-Lipschitz functions (see Corollary 3.5 and Remark 3.6). We illustrate the relative difference $(\widetilde{\textnormal{ET}}_{\lambda}^{0}-\textnormal{ET}_{\lambda})/\textnormal{ET}_{\lambda}$ as $a_{1}$ varies in $[0,b]$ in Figure 1. We observe that when $a_{1}$ is close to $b$ (i.e., the Lipschitz constants of $w_{1},w_{2}$ are close to $b$), $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ becomes closer to $\textnormal{ET}_{\lambda}$. When $a_{1}=b$, the values of $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ are almost identical to those of $\textnormal{ET}_{\lambda}$.

Change $\lambda$. From the results in Figure 1, we set $a_{1}=b$ to investigate the relative difference between $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and $\textnormal{ET}_{\lambda}$ as $\lambda$ is varied. As illustrated in Figure 2, $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ is almost identical to $\textnormal{ET}_{\lambda}$ regardless of the value of $\lambda$ when $a_{1}=b$.
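The quantities compared in this experiment are simple to reproduce once both discrepancies are available. A minimal sketch follows (with the tree distance to the root passed in as an array, since the actual tree construction is omitted here):

```python
import numpy as np

def weight(d_root, a1, a0=1.0):
    """Weight w(x) = a1 * d_T(r, x) + a0; with a1 in [0, b] this is
    b-Lipschitz along the tree metric (d_root stands for d_T(r, x))."""
    return a1 * np.asarray(d_root) + a0

def relative_difference(approx, exact):
    """Relative difference (approx - exact) / exact, as plotted in
    Figures 1 and 2 with approx = ẼT^0_λ and exact = ET_λ."""
    approx, exact = np.asarray(approx), np.asarray(exact)
    return (approx - exact) / exact
```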
### 5.2 Document Classification with Word Embedding

We consider 4 datasets for document classification with word embedding: TWITTER, RECIPE, CLASSIC and AMAZON. Statistical characteristics of these datasets are summarized in Figure 3.

Figure 3: SVM results on document classification (row 1), and corresponding time consumption of kernel matrices (row 2). For each dataset, the numbers in the parenthesis are respectively the number of classes, the number of documents, and the maximum number of unique words for each document.

### 5.3 Topological Data Analysis (TDA)

Figure 4: SVM results for TDA (row 1), and corresponding time consumption of kernel matrices (row 2). For each dataset, the numbers in the parenthesis are respectively the number of PDs, and the maximum number of points in a PD.

#### 5.3.1 Orbit Recognition

We considered a synthesized dataset, proposed by Adams et al. [1], for linked twist maps, which are discrete dynamical systems that model flows in DNA microarrays [28]. There are 5 classes of orbits. Following [42], we generated 1000 orbits for each class, where each orbit has 1000 points. We used the 1-dimensional topological features for PDs, extracted with the Vietoris-Rips complex filtration [16].

#### 5.3.2 Object Shape Classification

We evaluated our approach for object shape classification on a subset of the MPEG7 dataset [38] containing 10 classes, where each class has 20 samples, as in [42]. For simplicity, we followed [42] to extract $1$-dimensional topological features for PDs with the Vietoris-Rips complex filtration [16] (a more complicated and advanced filtration for this task is considered in [65]).

#### 5.3.3 Change Point Detection for Material Analysis

We applied our approach to change point detection for material analysis with KFDR as a statistical score on the granular packing system (GPS) [21] and SiO2 [52] datasets. Statistical characteristics of these datasets are summarized in Figure 5.
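For concreteness, one orbit of the linked twist map used in §5.3.1 can be synthesized as below. The recurrence and the uniform draw of the initial point in $[0,1]^2$ are our reading of Adams et al. [1]; treat them as assumptions rather than the exact protocol of [42].

```python
import numpy as np

def linked_twist_orbit(r, n_points=1000, seed=None):
    """Iterate the linked twist map
        x_{n+1} = (x_n + r * y_n * (1 - y_n)) mod 1
        y_{n+1} = (y_n + r * x_{n+1} * (1 - x_{n+1})) mod 1
    from a random initial point; the parameter r selects the orbit class."""
    rng = np.random.default_rng(seed)
    x, y = rng.random(), rng.random()
    orbit = np.empty((n_points, 2))
    for i in range(n_points):
        x = (x + r * y * (1.0 - y)) % 1.0
        y = (y + r * x * (1.0 - x)) % 1.0
        orbit[i] = (x, y)
    return orbit
```

Each 1000-point orbit is then summarized by the 1-dimensional persistence diagram of its Vietoris-Rips filtration.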
Following [42], we set the regularization parameter in KFDR to $10^{-3}$, and used the ball model filtration to extract 2-dimensional topological features for PDs in the GPS dataset, and 1-dimensional topological features for PDs in the SiO2 dataset. Note that we omit the baseline kernel for Sinkhorn-UOT in this application since its computation runs out of memory.

Figure 5: KFDR graphs and time consumption of kernel matrices for change point detection. For each dataset, the numbers in the parenthesis are respectively the number of PDs, and the maximum number of points in a PD.

We illustrate the KFDR graphs for both datasets in Figure 5. For the GPS dataset, all kernel approaches detect the change point at index 23, which supports the observation (corresponding id = 23) in [2]. For the SiO2 dataset, all kernel approaches detect the change point in the supported range ($35\leq\text{id}\leq 50$), obtained by a traditional physical approach [17]. The KFDR results of the kernels corresponding to $d_{0}$ and $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ compare favorably with those of the kernel for SPOT.

### 5.4 Results of SVM, Time Consumption and Discussions

Figure 6: SVM results and time consumption for corresponding kernel matrices in the TWITTER dataset w.r.t. the number of (tree) slices.

We illustrate the results of SVM and the time consumption of kernel matrices for document classification with word embedding and for TDA in Figure 3 and Figure 4 respectively. The kernels for $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and $d_{0}$ outperform the kernels for SPOT. They also outperform the kernels for Sinkhorn-UOT on TDA, and are comparable on document classification. The fact that SPOT uses one-dimensional projections for support data points may limit its ability to capture high-dimensional structure in data distributions [43, 47].
The regularized EPT remedies this problem by leveraging tree metrics, which have more flexibility and degrees of freedom (e.g., choosing a tree rather than a line). In addition, while the kernels for $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and $d_{0}$ are positive definite, the kernels for SPOT and Sinkhorn-UOT are empirically indefinite. The indefiniteness of kernels may affect their performances in some applications; e.g., kernels for Sinkhorn-UOT work well for document classification with word embedding, but perform poorly in TDA applications. Similar observations are reported in [43]. Additionally, we illustrate the trade-off between performance and computational time for different numbers of (tree) slices on the TWITTER dataset in Figure 6. Performance usually improves with more slices, at the cost of computational time. In applications, we observed that a good trade-off is about $n_{s}=10$ slices.

Tree metric sampling. Time consumption for the tree metric sampling is negligible in applications. With the predefined tree deepest level $H_{\mathcal{T}}=6$ and tree branches $\kappa=4$ as in [43], it took $1.5,11.0,17.5,20.5$ seconds for the TWITTER, RECIPE, CLASSIC, AMAZON datasets respectively, and $21.0,0.1$ seconds for the Orbit, MPEG7 datasets respectively.

$\widetilde{\textnormal{ET}}_{\lambda}^{0}$ versus $\textnormal{ET}_{\lambda}$. We also compare $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and $\textnormal{ET}_{\lambda}$ (or KT) on the TWITTER dataset for document classification, and on the MPEG7 dataset for object shape recognition in TDA. The performances of $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and $\textnormal{ET}_{\lambda}$ are identical (i.e., their kernel matrices are almost the same for those datasets), but $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ is about 11 times faster than $\textnormal{ET}_{\lambda}$ on the TWITTER dataset, and about 81 times faster on the MPEG7 dataset, when $n_{s}=10$ slices. Further results are placed in the supplementary (§B).
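The tree-sliced quantities used throughout the experiments are plain averages of the regularized EPT over the $n_{s}$ sampled tree metrics. Schematically (with `sample_tree_metric` and `regularized_ept` as placeholders for the paper's sampler and closed-form formula, both assumptions of this sketch):

```python
import numpy as np

def tree_sliced(mu, nu, sample_tree_metric, regularized_ept, n_s=10, seed=0):
    """Average a base discrepancy over n_s randomly sampled tree metrics."""
    rng = np.random.default_rng(seed)
    values = [regularized_ept(mu, nu, sample_tree_metric(rng))
              for _ in range(n_s)]
    return float(np.mean(values))
```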
## 6 Conclusion

We have developed a rigorous theory for the entropy partial transport (EPT) problem for nonnegative measures on a tree having different masses. We showed that the EPT problem is equivalent to a standard complete OT problem on a suitable one-node extended tree, which allowed us to develop its dual formulation. By leveraging the dual problem, we proposed a novel, efficient regularization for EPT which yields a closed-form solution for fast computation and is negative definite, an important property for building the positive definite kernels required in many kernel-dependent frameworks. Moreover, our regularization also provides effective approximations in applications. We further derived tree-sliced variants of the regularized EPT for practical applications without prior knowledge of a tree structure for the measures. The question of sampling efficient tree metrics for the tree-sliced variants from data points is left for future work.

## References

* [1] Henry Adams, Tegan Emerson, Michael Kirby, Rachel Neville, Chris Peterson, Patrick Shipman, Sofya Chepushtanova, Eric Hanson, Francis Motta, and Lori Ziegelmeier. Persistence images: A stable vector representation of persistent homology. Journal of Machine Learning Research, 18(1):218–252, 2017.
* [2] Anonymous. What is random packing? Nature, 239:488–489, 1972.
* [3] Yair Bartal. Probabilistic approximation of metric spaces and its algorithmic applications. In Proceedings of 37th Conference on Foundations of Computer Science, pages 184–193, 1996.
* [4] Yair Bartal. On approximating arbitrary metrics by tree metrics. In ACM Symposium on Theory of Computing (STOC), volume 98, pages 161–168, 1998.
* [5] Jean-David Benamou. Numerical resolution of an “unbalanced” mass transport problem. ESAIM: Mathematical Modelling and Numerical Analysis-Modélisation Mathématique et Analyse Numérique, 37(5):851–868, 2003.
* [6] C. Berg, J. P. R. Christensen, and P. Ressel, editors. Harmonic analysis on semigroups.
Springer-Verlag, New York, 1984.
* [7] Nicolas Bonneel and David Coeurjolly. SPOT: Sliced partial optimal transport. ACM Transactions on Graphics (TOG), 38(4):1–13, 2019.
* [8] Nicolas Bonneel, Julien Rabin, Gabriel Peyré, and Hanspeter Pfister. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 51(1):22–45, 2015.
* [9] Charlotte Bunne, David Alvarez-Melis, Andreas Krause, and Stefanie Jegelka. Learning generative models across incomparable spaces. In International Conference on Machine Learning (ICML), volume 97, 2019.
* [10] Luis A Caffarelli and Robert J McCann. Free boundaries in optimal transport and Monge-Ampère obstacle problems. Annals of Mathematics, pages 673–730, 2010.
* [11] Moses Charikar, Chandra Chekuri, Ashish Goel, Sudipto Guha, and Serge Plotkin. Approximating a finite metric by a small number of tree metrics. In Proceedings 39th Annual Symposium on Foundations of Computer Science (FOCS), pages 379–388, 1998.
* [12] Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. Scaling algorithms for unbalanced optimal transport problems. Mathematics of Computation, 87(314):2563–2609, 2018.
* [13] Nhan-Phu Chung and Thanh-Son Trinh. Duality and quotient spaces of generalized Wasserstein spaces. arXiv preprint arXiv:1904.12461, 2019.
* [14] Nicolas Courty, Rémi Flamary, Amaury Habrard, and Alain Rakotomamonjy. Joint distribution optimal transportation for domain adaptation. In Advances in Neural Information Processing Systems, pages 3730–3739, 2017.
* [15] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pages 2292–2300, 2013.
* [16] Herbert Edelsbrunner and John Harer. Persistent homology–a survey. Contemporary Mathematics, 453:257–282, 2008.
* [17] Stephen Richard Elliott. Physics of amorphous materials. Longman Group, 1983.
* [18] Steven N Evans and Frederick A Matsen.
The phylogenetic Kantorovich–Rubinstein metric for environmental sequence samples. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(3):569–592, 2012.
* [19] Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar. A tight bound on approximating arbitrary metrics by tree metrics. Journal of Computer and System Sciences, 69(3):485–497, 2004.
* [20] Alessio Figalli. The optimal partial transport problem. Archive for Rational Mechanics and Analysis, 195(2):533–560, 2010.
* [21] Nicolas Francois, Mohammad Saadatfar, R Cruikshank, and A Sheppard. Geometrical frustration in amorphous and partially crystallized packings of spheres. Physical Review Letters, 111(14):148001, 2013.
* [22] Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. Learning with a Wasserstein loss. In Advances in Neural Information Processing Systems, pages 2053–2061, 2015.
* [23] Wilfrid Gangbo, Wuchen Li, Stanley Osher, and Michael Puthawala. Unnormalized optimal transport. Journal of Computational Physics, 399:108940, 2019.
* [24] Teofilo F Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293–306, 1985.
* [25] Kevin Guittet. Extended Kantorovich norms: a tool for optimization. INRIA report, 2002.
* [26] Leonid G Hanin. Kantorovich–Rubinstein norm and its application in the theory of Lipschitz spaces. Proceedings of the American Mathematical Society, 115(2):345–352, 1992.
* [27] Zaid Harchaoui, Eric Moulines, and Francis R Bach. Kernel change-point analysis. In Advances in Neural Information Processing Systems, pages 609–616, 2009.
* [28] Jan-Martin Hertzsch, Rob Sturman, and Stephen Wiggins. DNA microarrays: design principles for maximizing ergodic, chaotic mixing. Small, 3(2):202–218, 2007.
* [29] Piotr Indyk. Algorithmic applications of low-distortion geometric embeddings. In Proceedings 42nd IEEE Symposium on Foundations of Computer Science (FOCS), pages 10–33, 2001.
* [30] Piotr Indyk and Nitin Thaper. Fast image retrieval via embeddings. In International Workshop on Statistical and Computational Theories of Vision, volume 2, page 5, 2003.
* [31] Hicham Janati, Marco Cuturi, and Alexandre Gramfort. Wasserstein regularization for sparse multi-task regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1407–1416, 2019.
* [32] Hicham Janati, Boris Muzellec, Gabriel Peyré, and Marco Cuturi. Entropic optimal transport between (unbalanced) Gaussian measures has a closed form. In Advances in Neural Information Processing Systems, 2020.
* [33] Benoît R Kloeckner. A geometric study of Wasserstein spaces: ultrametrics. Mathematika, 61(1):162–178, 2015.
* [34] Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, and Gustavo Rohde. Generalized sliced Wasserstein distances. In Advances in Neural Information Processing Systems, pages 261–272, 2019.
* [35] Genki Kusano, Kenji Fukumizu, and Yasuaki Hiraoka. Kernel method for persistence diagrams via kernel embedding and weight factor. The Journal of Machine Learning Research, 18(1):6947–6987, 2017.
* [36] Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. From word embeddings to document distances. In International Conference on Machine Learning, pages 957–966, 2015.
* [37] Théo Lacombe, Marco Cuturi, and Steve Oudot. Large scale computation of means and clusters for persistence diagrams using optimal transport. In Advances in Neural Information Processing Systems, pages 9770–9780, 2018.
* [38] Longin Jan Latecki, Rolf Lakamper, and T Eckhardt. Shape descriptors for non-rigid shapes with a single closed contour. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 424–429, 2000.
* [39] Hugo Lavenant, Sebastian Claici, Edward Chien, and Justin Solomon. Dynamical optimal transport on discrete surfaces. In SIGGRAPH Asia 2018 Technical Papers, page 250. ACM, 2018.
* [40] Tam Le, Nhat Ho, and Makoto Yamada. Flow-based alignment approaches for probability measures in different spaces. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.
* [41] Tam Le, Viet Huynh, Nhat Ho, Dinh Phung, and Makoto Yamada. On scalable variant of Wasserstein barycenter. arXiv preprint arXiv:1910.04483, 2019.
* [42] Tam Le and Makoto Yamada. Persistence Fisher kernel: A Riemannian manifold kernel for persistence diagrams. In Advances in Neural Information Processing Systems, pages 10007–10018, 2018.
* [43] Tam Le, Makoto Yamada, Kenji Fukumizu, and Marco Cuturi. Tree-sliced variants of Wasserstein distances. In Advances in Neural Information Processing Systems, pages 12283–12294, 2019.
* [44] John Lee, Nicholas P Bertrand, and Christopher J Rozell. Parallel unbalanced optimal transport regularization for large scale imaging problems. arXiv preprint arXiv:1909.00149, 2019.
* [45] Jan Lellmann, Dirk A Lorenz, Carola Schonlieb, and Tuomo Valkonen. Imaging with Kantorovich–Rubinstein discrepancy. SIAM Journal on Imaging Sciences, 7(4):2833–2859, 2014.
* [46] Matthias Liero, Alexander Mielke, and Giuseppe Savaré. Optimal entropy-transport problems and a new Hellinger–Kantorovich distance between positive measures. Inventiones Mathematicae, 211(3):969–1117, 2018.
* [47] Antoine Liutkus, Umut Simsekli, Szymon Majewski, Alain Durmus, and Fabian-Robert Stöter. Sliced-Wasserstein flows: Nonparametric generative modeling via optimal transport and diffusions. In International Conference on Machine Learning, pages 4104–4113, 2019.
* [48] Facundo Mémoli, Axel Munk, Zhengchao Wan, and Christoph Weitkamp. The ultrametric Gromov-Wasserstein distance. arXiv preprint arXiv:2101.05756, 2021.
* [49] Gonzalo Mena and Jonathan Niles-Weed. Statistical bounds for entropic optimal transport: sample complexity and the central limit theorem. In Advances in Neural Information Processing Systems, pages 4541–4551, 2019.
* [50] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
* [51] Kimia Nadjahi, Alain Durmus, Umut Simsekli, and Roland Badeau. Asymptotic guarantees for learning generative models with the sliced-Wasserstein distance. In Advances in Neural Information Processing Systems, pages 250–260, 2019.
* [52] Takenobu Nakamura, Yasuaki Hiraoka, Akihiko Hirata, Emerson G Escolar, and Yasumasa Nishiura. Persistent homology and many-body atomic structure for medium-range order in the glass. Nanotechnology, 26(30):304001, 2015.
* [53] Assaf Naor and Gideon Schechtman. Planar Earthmover is not in $L_1$. SIAM Journal on Computing, 37(3):804–826, 2007.
* [54] Gabriel Peyré and Marco Cuturi. Computational optimal transport. Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019.
* [55] Khiem Pham, Khang Le, Nhat Ho, Tung Pham, and Hung Bui. On unbalanced optimal transport: An analysis of Sinkhorn algorithm. In Proceedings of the International Conference on Machine Learning, 2020.
* [56] Benedetto Piccoli and Francesco Rossi. Generalized Wasserstein distance and its application to transport equations with source. Archive for Rational Mechanics and Analysis, 211(1):335–358, 2014.
* [57] Benedetto Piccoli and Francesco Rossi. On properties of the generalized Wasserstein distance. Archive for Rational Mechanics and Analysis, 222(3):1339–1365, 2016.
* [58] Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In International Conference on Scale Space and Variational Methods in Computer Vision, pages 435–446, 2011.
* [59] Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513–523, 1988.
* [60] Ryoma Sato, Makoto Yamada, and Hisashi Kashima.
Fast unbalanced optimal transport on tree. In Advances in Neural Information Processing Systems, 2020.
* [61] Geoffrey Schiebinger, Jian Shu, Marcin Tabaka, Brian Cleary, Vidya Subramanian, Aryeh Solomon, Joshua Gould, Siyan Liu, Stacie Lin, Peter Berube, et al. Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming. Cell, 176(4):928–943, 2019.
* [62] Charles Semple and Mike Steel. Phylogenetics. Oxford Lecture Series in Mathematics and its Applications, 2003.
* [63] Justin Solomon, Fernando De Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (TOG), 34(4):66, 2015.
* [64] Max Sommerfeld and A. Munk. Inference for empirical Wasserstein distances on finite spaces. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80:219–238, 2016.
* [65] Katharine Turner, Sayan Mukherjee, and Doug M Boyer. Persistent homology transform for modeling shapes and surfaces. Information and Inference: A Journal of the IMA, 3(4):310–344, 2014.
* [66] Titouan Vayer, Rémi Flamary, Romain Tavenard, Laetitia Chapel, and Nicolas Courty. Sliced Gromov-Wasserstein. In Advances in Neural Information Processing Systems, 2019.
* [67] Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
* [68] Jonathan Weed and Quentin Berthet. Estimation of smooth densities in Wasserstein distance. In Proceedings of the Thirty-Second Conference on Learning Theory, volume 99, pages 3118–3119, 2019.
* [69] Karren D. Yang and Caroline Uhler. Scalable unbalanced optimal transport using generative adversarial networks. In International Conference on Learning Representations, 2019.
In the supplementary,

* We give detailed proofs of the theoretical results in the main text for the entropy partial transport (EPT) problem for nonnegative measures on a tree having different masses in §A.
* We provide further experimental results in §B, for example:
  * more setups for the efficient approximation of $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ for $\textnormal{ET}_{\lambda}$,
  * different values of $\alpha$,
  * different numbers of slices,
  * and different parameters in tree metric sampling.
* We next give more details and discussions in §C, for example:
  * more details about the experiments (e.g., software, datasets, more details about the experiment setup),
  * a brief review of kernels, and more referred details (e.g., for tree metric sampling, persistence diagrams and related mathematical definitions in topological data analysis),
  * and further discussion of relations to other work.

## Appendix A Detailed Proofs

In this section, we present detailed proofs of the theoretical results in the main text.

### A.1 Proof for Theorem 3.1 in the main text

###### Proof.

i) Note that $\mathrm{ET}_{c,\lambda}(\mu,\nu)$ is a concave function in $\lambda$ since it is the infimum of a family of concave functions in $\lambda$. Therefore, $u$ is convex on ${\mathbb{R}}$. In particular, $u$ is differentiable almost everywhere on ${\mathbb{R}}$. Let $\lambda\in{\mathbb{R}}$ and recall the definition of $\mathcal{C}_{\lambda}(\gamma)$ in Equation (4) in the main text.
Then for any $\gamma\in\Gamma^{0}(\lambda)$, we have $\displaystyle\mathrm{ET}_{c,\lambda+\delta}(\mu,\nu)\leq\mathcal{C}_{\lambda+\delta}(\gamma)=\mathcal{C}_{\lambda}(\gamma)-b\delta\gamma({\mathcal{T}}\times{\mathcal{T}})=\mathrm{ET}_{c,\lambda}(\mu,\nu)-b\delta\gamma({\mathcal{T}}\times{\mathcal{T}})\,\,\,\forall\delta\in{\mathbb{R}}.$ (10) This implies that $\big{\\{}b\,\gamma({\mathcal{T}}\times{\mathcal{T}}):\gamma\in\Gamma^{0}(\lambda)\big{\\}}\subset\partial u(\lambda).$ We next show that the opposite inclusion is also true, i.e., $\big{\\{}b\,\gamma({\mathcal{T}}\times{\mathcal{T}}):\gamma\in\Gamma^{0}(\lambda)\big{\\}}=\partial u(\lambda)$. This obviously holds if $\partial u(\lambda)$ is a singleton, and hence we only need to consider $\lambda$ for which the convex set $\partial u(\lambda)$ has more than one element. Let $m\in\partial u(\lambda)$; then $m$ can be expressed as a convex combination of extreme points $m_{1},\dotsc,m_{N}$ of $\partial u(\lambda)$, i.e., $m=\sum_{i=1}^{N}t_{i}m_{i}$ with $0\leq t_{i}\leq 1$ and $\sum_{i=1}^{N}t_{i}=1$. As $m_{i}$ is an extreme point of $\partial u(\lambda)$, there exists a sequence $\lambda_{n}\to\lambda$ such that $\lambda_{n}$ is a differentiable point of $u$ and $u^{\prime}(\lambda_{n})\to m_{i}$. Let $\gamma^{n}\in\Gamma^{0}(\lambda_{n})$; then $b\,\gamma^{n}({\mathcal{T}}\times{\mathcal{T}})=u^{\prime}(\lambda_{n})\to m_{i}$. By compactness, there exists a subsequence $\\{\gamma^{n_{k}}\\}$ and $\tilde{\gamma}^{i}\in\Pi_{\leq}(\mu,\nu)$ such that $\gamma^{n_{k}}\to\tilde{\gamma}^{i}$ weakly. It follows that $\gamma^{n_{k}}({\mathcal{T}}\times{\mathcal{T}})\to\tilde{\gamma}^{i}({\mathcal{T}}\times{\mathcal{T}})$, and hence we must have $b\,\tilde{\gamma}^{i}({\mathcal{T}}\times{\mathcal{T}})=m_{i}$.
We have $\displaystyle\mathcal{C}_{\lambda_{n_{k}}}(\gamma^{\lambda_{n_{k}}})=\mathcal{C}_{\lambda}(\gamma^{\lambda_{n_{k}}})+b(\lambda-\lambda_{n_{k}})\gamma^{n_{k}}({\mathcal{T}}\times{\mathcal{T}})$ $\displaystyle\geq\mathrm{ET}_{c,\lambda}(\mu,\nu)+b(\lambda-\lambda_{n_{k}})\gamma^{n_{k}}({\mathcal{T}}\times{\mathcal{T}})$ $\displaystyle\geq\mathrm{ET}_{c,\lambda}(\mu,\nu)-b\bar{m}|\lambda-\lambda_{n_{k}}|$ and for any $\gamma\in\Gamma^{0}(\lambda)$, there holds $\displaystyle\mathcal{C}_{\lambda_{n_{k}}}(\gamma^{\lambda_{n_{k}}})\leq\mathcal{C}_{\lambda_{n_{k}}}(\gamma)=\mathcal{C}_{\lambda}(\gamma)+b(\lambda-\lambda_{n_{k}})\gamma({\mathcal{T}}\times{\mathcal{T}})=\mathrm{ET}_{c,\lambda}(\mu,\nu)+b(\lambda-\lambda_{n_{k}})\gamma({\mathcal{T}}\times{\mathcal{T}}).$ We thus deduce that $\lim_{k\to\infty}\mathcal{C}_{\lambda_{n_{k}}}(\gamma^{\lambda_{n_{k}}})=\mathrm{ET}_{c,\lambda}(\mu,\nu)$. These together with the lower semicontinuity of $\mathcal{C}_{\lambda}$ give $\displaystyle\mathrm{ET}_{c,\lambda}(\mu,\nu)=\liminf_{k\to\infty}\mathcal{C}_{\lambda_{n_{k}}}(\gamma^{\lambda_{n_{k}}})$ $\displaystyle=\liminf_{k\to\infty}\Big{[}\mathcal{C}_{\lambda}(\gamma^{\lambda_{n_{k}}})+b(\lambda-\lambda_{n_{k}})\gamma^{n_{k}}({\mathcal{T}}\times{\mathcal{T}})\Big{]}$ $\displaystyle=\liminf_{k\to\infty}\mathcal{C}_{\lambda}(\gamma^{\lambda_{n_{k}}})\geq\mathcal{C}_{\lambda}(\tilde{\gamma}^{i}).$ Therefore, $\tilde{\gamma}^{i}\in\Gamma^{0}(\lambda)$ with mass $b\,\tilde{\gamma}^{i}({\mathcal{T}}\times{\mathcal{T}})=m_{i}$. Due to the convexity of $\Gamma^{0}(\lambda)$, we have $\bar{\gamma}:=\sum_{i=1}^{N}t_{i}\tilde{\gamma}^{i}\in\Gamma^{0}(\lambda)$ with $b\,\bar{\gamma}({\mathcal{T}}\times{\mathcal{T}})=\sum_{i=1}^{N}t_{i}m_{i}=m$. 
That is, $\partial u(\lambda)\subset\big{\\{}b\,\gamma({\mathcal{T}}\times{\mathcal{T}}):\gamma\in\Gamma^{0}(\lambda)\big{\\}},$ and we thus infer that $\big{\\{}b\,\gamma({\mathcal{T}}\times{\mathcal{T}}):\gamma\in\Gamma^{0}(\lambda)\big{\\}}=\partial u(\lambda)$ for all $\lambda\in{\mathbb{R}}$. In order to prove the second part of i), let $\gamma\in\Gamma^{0}(\lambda_{1})$ and $\tilde{\gamma}\in\Gamma^{0}(\lambda_{2})$ be arbitrary. We have $\displaystyle\mathrm{ET}_{c,\lambda_{2}}(\mu,\nu)=\mathcal{C}_{\lambda_{2}}(\tilde{\gamma})$ $\displaystyle=\mathcal{C}_{\lambda_{1}}(\tilde{\gamma})-b(\lambda_{2}-\lambda_{1})\tilde{\gamma}({\mathcal{T}}\times{\mathcal{T}})$ $\displaystyle\geq\mathrm{ET}_{c,\lambda_{1}}(\mu,\nu)-b(\lambda_{2}-\lambda_{1})\tilde{\gamma}({\mathcal{T}}\times{\mathcal{T}}).$ (11) Hence by combining with (10), we deduce that $\displaystyle\mathrm{ET}_{c,\lambda_{1}}(\mu,\nu)-b(\lambda_{2}-\lambda_{1})\tilde{\gamma}({\mathcal{T}}\times{\mathcal{T}})\leq\mathrm{ET}_{c,\lambda_{2}}(\mu,\nu)\leq\mathrm{ET}_{c,\lambda_{1}}(\mu,\nu)-b(\lambda_{2}-\lambda_{1})\gamma({\mathcal{T}}\times{\mathcal{T}}),$ which yields $\gamma({\mathcal{T}}\times{\mathcal{T}})\leq\tilde{\gamma}({\mathcal{T}}\times{\mathcal{T}})$. This together with the above characterization of $\partial u(\lambda)$ implies the second part of i). ii) If $u$ is differentiable at $\lambda$, then $\partial u(\lambda)$ is a singleton set. However, as $\partial u(\lambda)=\big{\\{}b\,\gamma({\mathcal{T}}\times{\mathcal{T}}):\gamma\in\Gamma^{0}(\lambda)\big{\\}}$ by i), we thus infer that the mass $\gamma({\mathcal{T}}\times{\mathcal{T}})$ must be the same for every $\gamma\in\Gamma^{0}(\lambda)$. Next assume that every element in $\Gamma^{0}(\lambda)$ has the same mass, say $m$. For $\delta\neq 0$, let $\gamma^{\lambda+\delta}\in\Gamma^{0}(\lambda+\delta)$ and $m(\lambda+\delta):=\gamma^{\lambda+\delta}({\mathcal{T}}\times{\mathcal{T}})$. 
Then, we claim that $\lim_{\delta\to 0}m(\lambda+\delta)=m.$ (12) Assume the claim for the moment, and let $\delta>0$. Then, as in (10)–(11), we have $\displaystyle\mathrm{ET}_{c,\lambda+\delta}(\mu,\nu)\leq\mathrm{ET}_{c,\lambda}(\mu,\nu)-b\delta m\quad\mbox{and}\quad\mathrm{ET}_{c,\lambda+\delta}(\mu,\nu)\geq\mathrm{ET}_{c,\lambda}(\mu,\nu)-b\delta m(\lambda+\delta).$ It follows that $-bm(\lambda+\delta)\leq\frac{\mathrm{ET}_{c,\lambda+\delta}(\mu,\nu)-\mathrm{ET}_{c,\lambda}(\mu,\nu)}{\delta}\leq-bm.$ This together with claim (12) gives $\lim_{\delta\to 0^{+}}\frac{\mathrm{ET}_{c,\lambda+\delta}(\mu,\nu)-\mathrm{ET}_{c,\lambda}(\mu,\nu)}{\delta}=-bm$. By the same argument, we also have $\lim_{\delta\to 0^{-}}\frac{\mathrm{ET}_{c,\lambda+\delta}(\mu,\nu)-\mathrm{ET}_{c,\lambda}(\mu,\nu)}{\delta}=-bm$. Thus, we infer that $u$ is differentiable at $\lambda$ with $u^{\prime}(\lambda)=bm$. Therefore, it remains to prove claim (12). Indeed, by compactness there exists a subsequence, still labeled by $\gamma^{\lambda+\delta}$, and $\gamma\in\Pi_{\leq}(\mu,\nu)$ such that $\gamma^{\lambda+\delta}\to\gamma$ weakly as $\delta\to 0$. As in i), we can show that $\gamma\in\Gamma^{0}(\lambda)$. Then, as the mass functional is weakly continuous, we obtain $m(\lambda+\delta)=\gamma^{\lambda+\delta}({\mathcal{T}}\times{\mathcal{T}})\to\gamma({\mathcal{T}}\times{\mathcal{T}})=m$. In fact, we have shown that any subsequence of $\\{m(\lambda+\delta)\\}_{\delta}$ has a further subsequence converging to the same number $m$. Therefore, the full sequence $\\{m(\lambda+\delta)\\}_{\delta}$ must converge to $m$, and hence (12) is proved. iii) For any $\lambda\in{\mathbb{R}}$, we have $\partial u(\lambda)=\big{\\{}b\,\gamma({\mathcal{T}}\times{\mathcal{T}}):\gamma\in\Gamma^{0}(\lambda)\big{\\}}\subset[0,b\,\bar{m}]$. Thus, we only need to prove $[0,b\,\bar{m}]\subset\partial u({\mathbb{R}})$. 
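As a quick numerical sanity check (not part of the proof), the monotonicity of the optimal mass in $\lambda$ from i) and the saturation at $0$ and $\bar{m}=\min\\{\mu({\mathcal{T}}),\nu({\mathcal{T}})\\}$ can be observed on a small discrete instance. The sketch below is a hypothetical example: it assumes the discrete analogue of the cost $\mathcal{C}_{\lambda}(\gamma)=b\int[c-\lambda]\,d\gamma+\int w_{1}(1-f_{1})\,d\mu+\int w_{2}(1-f_{2})\,d\nu$ appearing in Section A.3, with the constant part dropped, and solves the partial transport problem as a linear program.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_mass(mu, nu, c, w1, w2, b, lam):
    """Solve the discrete partial-transport LP and return gamma(T x T).

    Objective (constant terms dropped):
        sum_ij [b*(c_ij - lam) - w1_i - w2_j] * g_ij
    over g >= 0 with row sums <= mu and column sums <= nu.
    """
    m, n = c.shape
    obj = (b * (c - lam) - w1[:, None] - w2[None, :]).ravel()
    A_rows = np.kron(np.eye(m), np.ones((1, n)))   # sum_j g_ij <= mu_i
    A_cols = np.kron(np.ones((1, m)), np.eye(n))   # sum_i g_ij <= nu_j
    res = linprog(obj, A_ub=np.vstack([A_rows, A_cols]),
                  b_ub=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.x.sum()

rng = np.random.default_rng(0)
mu, nu = rng.random(4), rng.random(5)
c = rng.random((4, 5))
w1, w2 = rng.random(4), rng.random(5)
lams = np.linspace(-2.0, 2.0, 9)
masses = [optimal_mass(mu, nu, c, w1, w2, b=1.0, lam=l) for l in lams]
# masses is nondecreasing in lambda: 0 for lambda negative enough,
# and min(mu.sum(), nu.sum()) once all objective coefficients are negative
```

For $\lambda$ very negative every coefficient is positive and the zero plan is optimal, while for $\lambda$ large all coefficients are negative and the plan fills one marginal completely, exactly as in the proof of iii) below.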
First, note that as $\partial u(\lambda)\subset{\mathbb{R}}$ is a compact and convex set, it must be a finite and closed interval. Therefore, if we let $\gamma^{\lambda}_{min}:=\operatorname*{arg\,min}_{\gamma\in\Gamma^{0}(\lambda)}\gamma({\mathcal{T}}\times{\mathcal{T}})\quad\mbox{and}\quad\gamma^{\lambda}_{max}:=\operatorname*{arg\,max}_{\gamma\in\Gamma^{0}(\lambda)}\gamma({\mathcal{T}}\times{\mathcal{T}}),$ then it follows from i) that $\partial u(\lambda)=\big{[}b\,\gamma^{\lambda}_{min}({\mathcal{T}}\times{\mathcal{T}}),b\,\gamma^{\lambda}_{max}({\mathcal{T}}\times{\mathcal{T}})\big{]}$ for every $\lambda\in{\mathbb{R}}$. From Equation (4) in the main text, it is clear that $\partial u(\lambda)=\\{0\\}$ for $\lambda$ negative enough. Indeed, if we take $\lambda<-M$, then as $w_{1}(x)+w_{2}(y)\leq b\,c(x,y)+M$, we have $0<b\,c(x,y)-w_{1}(x)-w_{2}(y)-\lambda$ for all $x,y\in{\mathcal{T}}$. Then, we obtain from Equation (4) in the main text that $\mathcal{C}_{\lambda}(0)\leq\mathcal{C}_{\lambda}(\gamma)$ for every $\gamma\in\Pi_{\leq}(\mu,\nu)$ and the strict inequality holds if $\gamma\neq 0$. Thus, $\Gamma^{0}(\lambda)=\\{0\\}$ which gives $\partial u(\lambda)=\\{0\\}$ and $u(\lambda)=-\int_{\mathcal{T}}w_{1}\mu(dx)-\int_{\mathcal{T}}w_{2}\nu(dx)$. We next show that $\partial u(\lambda)=\\{b\,\bar{m}\\}$ for $\lambda$ positive enough. Since $c(x,y)$ is bounded due to its continuity on ${\mathcal{T}}\times{\mathcal{T}}$, we can choose $\lambda\in{\mathbb{R}}$ such that $c(x,y)-\lambda<0$ for all $x,y\in{\mathcal{T}}$. Let $\gamma\in\Gamma^{0}(\lambda)$. We claim that either $\gamma_{1}=\mu$ or $\gamma_{2}=\nu$. Indeed, otherwise we would have $\gamma_{1}(A_{0})<\mu(A_{0})$ and $\gamma_{2}(B_{0})<\nu(B_{0})$ for some Borel sets $A_{0},B_{0}\subset{\mathcal{T}}$. Let $\tilde{\gamma}:=\gamma+[(\mu-\gamma_{1})\chi_{A_{0}}]\otimes[(\nu-\gamma_{2})\chi_{B_{0}}]$. 
Then, for any Borel set $A\subset{\mathcal{T}}$ we have $\displaystyle\tilde{\gamma}_{1}(A)=\gamma_{1}(A)+\mu(A\cap A_{0})-\gamma_{1}(A\cap A_{0})$ $\displaystyle=\gamma_{1}(A\setminus A_{0})+\mu(A\cap A_{0})$ $\displaystyle\leq\mu(A\setminus A_{0})+\mu(A\cap A_{0})=\mu(A).$ Likewise, $\tilde{\gamma}_{2}(B)\leq\nu(B)$ for any Borel set $B\subset{\mathcal{T}}$. Thus $\tilde{\gamma}\in\Pi_{\leq}(\mu,\nu)$. On the other hand, it is clear from Equation (4) in the main text and the facts $\gamma_{1}\leq\tilde{\gamma}_{1}$, $\gamma_{2}\leq\tilde{\gamma}_{2}$, and $c-\lambda<0$ that $\mathcal{C}_{\lambda}(\tilde{\gamma})<\mathcal{C}_{\lambda}(\gamma)$. This is impossible and so the claim is proved. That is, either $\gamma_{1}=\mu$ or $\gamma_{2}=\nu$. It follows that $\gamma({\mathcal{T}}\times{\mathcal{T}})=\bar{m}$ for every $\gamma\in\Gamma^{0}(\lambda)$, and hence $\partial u(\lambda)=\\{b\,\bar{m}\\}$. This also means that $u$ is differentiable at $\lambda$ with $u^{\prime}(\lambda)=b\,\bar{m}$. Therefore, it remains to show that $(0,b\,\bar{m})\subset\partial u({\mathbb{R}})=\bigcup_{\lambda\in{\mathbb{R}}}\big{[}b\,\gamma^{\lambda}_{min}({\mathcal{T}}\times{\mathcal{T}}),b\,\gamma^{\lambda}_{max}({\mathcal{T}}\times{\mathcal{T}})\big{]}.$ (13) Assume by contradiction that there exists $m\in(0,b\,\bar{m})$ such that $m\not\in\partial u(\lambda)$ for every $\lambda\in{\mathbb{R}}$. For convenience, we adopt the following notation: for sets $A,B\subset{\mathbb{R}}$ and $r\in{\mathbb{R}}$, we write $A<r$ if $a<r$ for every $a\in A$, and $A<B$ if $a<b$ for every $a\in A$ and $b\in B$. Let us consider the following two sets $S_{1}:=\\{\lambda:\partial u(\lambda)<m\\}\quad\mbox{and}\quad S_{2}:=\\{\lambda:\partial u(\lambda)>m\\}.$ Then $\lambda\in S_{1}$ if $\lambda$ is negative enough, and $\lambda\in S_{2}$ if $\lambda$ is positive enough. 
For any $\lambda_{1}\in S_{1}$ and $\lambda_{2}\in S_{2}$, we have $\partial u(\lambda_{1})<m<\partial u(\lambda_{2})$, and hence $\lambda_{1}<\lambda_{2}$ by the monotonicity in i). That is, $S_{1}<S_{2}$ and so we obtain $\lambda^{*}:=\sup\\{\lambda:\lambda\in S_{1}\\}\leq\inf\\{\lambda:\lambda\in S_{2}\\}=:\lambda^{**}.$ If $\lambda^{*}<\lambda^{**}$, then for any $\lambda\in(\lambda^{*},\lambda^{**})$ we have $\lambda\not\in S_{1}$ and $\lambda\not\in S_{2}$. Therefore, $\partial u(\lambda)\not<m$ and $\partial u(\lambda)\not>m$. Hence, we can find $m_{1},m_{2}\in\partial u(\lambda)$ such that $m_{1}\geq m$ and $m_{2}\leq m$. Thus, $m\in[m_{2},m_{1}]\subset\partial u(\lambda)$ due to the convexity of the set $\partial u(\lambda)$. This contradicts our hypothesis, and we conclude that $\lambda^{*}=\lambda^{**}$. We next select sequences $\\{\lambda^{1}_{n}\\}\subset S_{1}$ and $\\{\lambda^{2}_{n}\\}\subset S_{2}$ such that $\lambda^{1}_{n}\to\lambda^{*}$ and $\lambda^{2}_{n}\to\lambda^{**}=\lambda^{*}$. For each $n$, let $\gamma^{n}_{min}:=\operatorname*{arg\,min}_{\gamma\in\Gamma^{0}(\lambda^{1}_{n})}\gamma({\mathcal{T}}\times{\mathcal{T}})\quad\mbox{and}\quad\gamma^{n}_{max}:=\operatorname*{arg\,max}_{\gamma\in\Gamma^{0}(\lambda^{2}_{n})}\gamma({\mathcal{T}}\times{\mathcal{T}}).$ By compactness, there exist subsequences, still labeled as $\\{\gamma^{n}_{min}\\}$ and $\\{\gamma^{n}_{max}\\}$, and $\gamma^{*},\gamma^{**}\in\Pi_{\leq}(\mu,\nu)$ such that $\gamma^{n}_{min}\to\gamma^{*}$ weakly and $\gamma^{n}_{max}\to\gamma^{**}$ weakly. By arguing exactly as in i), we then obtain $\gamma^{*},\gamma^{**}\in\Gamma^{0}(\lambda^{*})$, $\gamma^{n}_{min}({\mathcal{T}}\times{\mathcal{T}})\to\gamma^{*}({\mathcal{T}}\times{\mathcal{T}})$, and $\gamma^{n}_{max}({\mathcal{T}}\times{\mathcal{T}})\to\gamma^{**}({\mathcal{T}}\times{\mathcal{T}})$. 
As $b\,\gamma^{n}_{min}({\mathcal{T}}\times{\mathcal{T}})<m$ due to $\lambda^{1}_{n}\in S_{1}$, we must have $b\,\gamma^{*}({\mathcal{T}}\times{\mathcal{T}})\leq m$. Likewise, we have $b\,\gamma^{**}({\mathcal{T}}\times{\mathcal{T}})\geq m$ as $b\,\gamma^{n}_{max}({\mathcal{T}}\times{\mathcal{T}})>m$ for all $n$. Hence, $m\in[b\,\gamma^{*}({\mathcal{T}}\times{\mathcal{T}}),b\,\gamma^{**}({\mathcal{T}}\times{\mathcal{T}})]$. Since $\gamma^{*},\gamma^{**}\in\Gamma^{0}(\lambda^{*})$, we infer that $m\in\partial u(\lambda^{*})$. This is a contradiction and the proof is complete. We note that since $\lambda^{1}_{n}\leq\lambda^{*}\leq\lambda^{2}_{n}$, we have from the monotonicity in i) that $\gamma^{n}_{min}({\mathcal{T}}\times{\mathcal{T}})\leq\gamma({\mathcal{T}}\times{\mathcal{T}})\leq\gamma^{n}_{max}({\mathcal{T}}\times{\mathcal{T}})$ for every $\gamma\in\Gamma^{0}(\lambda^{*})$. By sending $n$ to infinity, it follows that $\gamma^{*}({\mathcal{T}}\times{\mathcal{T}})\leq\gamma({\mathcal{T}}\times{\mathcal{T}})\leq\gamma^{**}({\mathcal{T}}\times{\mathcal{T}})$ for every $\gamma\in\Gamma^{0}(\lambda^{*})$. That is, $\gamma^{*}=\gamma^{\lambda^{*}}_{min}$ and $\gamma^{**}=\gamma^{\lambda^{*}}_{max}$. ∎ ### A.2 Proof for Lemma 3.2 in the main text ###### Proof. We first observe for any Borel set $A\subset{\mathcal{T}}$ that $\displaystyle\hat{\gamma}(A\times\\{\hat{s}\\})=\hat{\gamma}(A\times\hat{\mathcal{T}})-\hat{\gamma}(A\times{\mathcal{T}})=\hat{\mu}(A)-\gamma(A\times{\mathcal{T}})=\mu(A)-\gamma_{1}(A)=\int_{A}(1-f_{1})\mu(dx).$ For the same reason, we have $\hat{\gamma}(\\{\hat{s}\\}\times B)=\int_{B}(1-f_{2})\nu(dx)$ for any Borel set $B\subset{\mathcal{T}}$. 
Also, $\displaystyle\hat{\gamma}(\\{\hat{s}\\}\times\\{\hat{s}\\})$ $\displaystyle=\hat{\gamma}(\hat{\mathcal{T}}\times\\{\hat{s}\\})-\hat{\gamma}({\mathcal{T}}\times\\{\hat{s}\\})$ $\displaystyle=\hat{\gamma}(\hat{\mathcal{T}}\times\hat{\mathcal{T}})-\hat{\gamma}(\hat{\mathcal{T}}\times{\mathcal{T}})-\big{[}\hat{\gamma}({\mathcal{T}}\times\hat{\mathcal{T}})-\hat{\gamma}({\mathcal{T}}\times{\mathcal{T}})\big{]}$ $\displaystyle=\hat{\mu}(\hat{\mathcal{T}})-\hat{\nu}({\mathcal{T}})-\hat{\mu}({\mathcal{T}})+\gamma({\mathcal{T}}\times{\mathcal{T}})=\gamma({\mathcal{T}}\times{\mathcal{T}}).$ Since Equation (6) in the main text is obviously true for sets of the form $A\times B$ with $A,B\subset{\mathcal{T}}$ being Borel sets, we only need to verify it for sets of the following forms: $(A\cup\\{\hat{s}\\})\times B$, $A\times(B\cup\\{\hat{s}\\})$, $(A\cup\\{\hat{s}\\})\times(B\cup\\{\hat{s}\\})$ for Borel sets $A,B\subset{\mathcal{T}}$. We check it case by case as follows. Case 1: Using the above observation, we have $\displaystyle\hat{\gamma}((A\cup\\{\hat{s}\\})\times B)$ $\displaystyle=\hat{\gamma}(A\times B)+\hat{\gamma}(\\{\hat{s}\\}\times B)=\gamma(A\times B)+\int_{B}(1-f_{2})\nu(dx).$ Therefore, Equation (6) in the main text holds in this case. 
Case 2: Equation (6) in the main text is also true in this case because $\displaystyle\hat{\gamma}(A\times(B\cup\\{\hat{s}\\}))$ $\displaystyle=\hat{\gamma}(A\times B)+\hat{\gamma}(A\times\\{\hat{s}\\})=\gamma(A\times B)+\int_{A}(1-f_{1})\mu(dx).$ Case 3: Equation (6) in the main text is true as well since $\displaystyle\hat{\gamma}((A\cup\\{\hat{s}\\})\times(B\cup\\{\hat{s}\\}))$ $\displaystyle=\hat{\gamma}(A\times B)+\hat{\gamma}(A\times\\{\hat{s}\\})+\hat{\gamma}(\\{\hat{s}\\}\times B)+\hat{\gamma}(\\{\hat{s}\\}\times\\{\hat{s}\\})$ $\displaystyle=\gamma(A\times B)+\int_{A}(1-f_{1})\mu(dx)+\int_{B}(1-f_{2})\nu(dx)+\gamma({\mathcal{T}}\times{\mathcal{T}}).$ Now, as Equation (6) in the main text holds, we obviously have $\gamma(U\times{\mathcal{T}})\leq\hat{\gamma}(U\times{\mathcal{T}})\leq\hat{\gamma}(U\times\hat{\mathcal{T}})=\hat{\mu}(U)=\mu(U)$ for any Borel set $U\subset{\mathcal{T}}$. Likewise, $\gamma({\mathcal{T}}\times U)\leq\nu(U)$ for any Borel set $U\subset{\mathcal{T}}$. Therefore, $\gamma\in\Pi_{\leq}(\mu,\nu)$. ∎ ### A.3 Proof for Proposition 3.3 in the main text ###### Proof. We first show that $\mathrm{KT}(\hat{\mu},\hat{\nu})\leq\mathrm{ET}_{c,\lambda}(\mu,\nu)$. For any $\gamma\in\Pi_{\leq}(\mu,\nu)$, let $\hat{\gamma}$ be given by Equation (6) in the main text. Then, $\hat{\gamma}\in\Gamma(\hat{\mu},\hat{\nu})$ and $\displaystyle\mathrm{KT}(\hat{\mu},\hat{\nu})\leq\int_{\hat{\mathcal{T}}\times\hat{\mathcal{T}}}\hat{c}(x,y)\hat{\gamma}(dx,dy)$ $\displaystyle=b\int_{{\mathcal{T}}\times{\mathcal{T}}}[c(x,y)-\lambda]\gamma(dx,dy)$ $\displaystyle\quad+\int_{\mathcal{T}}w_{1}[1-f_{1}(x)]\mu(dx)+\int_{\mathcal{T}}w_{2}[1-f_{2}(x)]\nu(dx).$ It follows that $\mathrm{KT}(\hat{\mu},\hat{\nu})\leq\mathrm{ET}_{c,\lambda}(\mu,\nu)$. We next show that $\mathrm{KT}(\hat{\mu},\hat{\nu})\geq\mathrm{ET}_{c,\lambda}(\mu,\nu)$. 
To see this, for any $\hat{\gamma}\in\Gamma(\hat{\mu},\hat{\nu})$ we let $\gamma$ be the restriction of $\hat{\gamma}$ to ${\mathcal{T}}$. Then by Lemma 3.2 in the main text, we have $\gamma\in\Pi_{\leq}(\mu,\nu)$ and Equation (6) in the main text holds. Consequently, $\displaystyle\int_{\hat{\mathcal{T}}\times\hat{\mathcal{T}}}\hat{c}(x,y)\hat{\gamma}(dx,dy)$ $\displaystyle=b\int_{{\mathcal{T}}\times{\mathcal{T}}}[c(x,y)-\lambda]\gamma(dx,dy)$ $\displaystyle\quad+\int_{\mathcal{T}}w_{1}[1-f_{1}(x)]\mu(dx)+\int_{\mathcal{T}}w_{2}[1-f_{2}(x)]\nu(dx)$ $\displaystyle\geq\mathrm{ET}_{c,\lambda}(\mu,\nu).$ By taking the infimum over $\hat{\gamma}$, we infer that $\mathrm{KT}(\hat{\mu},\hat{\nu})\geq\mathrm{ET}_{c,\lambda}(\mu,\nu)$. Thus we obtain $\mathrm{KT}(\hat{\mu},\hat{\nu})=\mathrm{ET}_{c,\lambda}(\mu,\nu).$ The relation about the optimal solutions also follows from the above arguments. ∎ ### A.4 Proof for Theorem 3.4 in the main text ###### Proof. From Proposition 3.3 in the main text and the dual formulation for $\mathrm{KT}(\hat{\mu},\hat{\nu})$ proved in [10, Corollary 2.6], we have $\displaystyle\mathrm{ET}_{c,\lambda}(\mu,\nu)=\sup_{\begin{subarray}{c}\hat{u}\in L^{1}(\hat{\mu}),\,\hat{v}\in L^{1}(\hat{\nu})\\\ \hat{u}(x)+\hat{v}(y)\leq\hat{c}(x,y)\end{subarray}}\int_{\hat{\mathcal{T}}}\hat{u}(x)\hat{\mu}(dx)+\int_{\hat{\mathcal{T}}}\hat{v}(x)\hat{\nu}(dx)=:I.$ Therefore, it is enough to prove that $I=J$ where $J:=\sup_{(u,v)\in{\mathbb{K}}}\Big{[}\int_{{\mathcal{T}}}u(x)\mu(dx)+\int_{{\mathcal{T}}}v(x)\nu(dx)\Big{]}.$ For $(u,v)$ satisfying $u\leq w_{1}$, $v\leq w_{2}$ and $u(x)+v(y)\leq b[c(x,y)-\lambda]$, we extend it to $\hat{\mathcal{T}}$ by taking $\hat{u}(\hat{s})=0$ and $\hat{v}(\hat{s})=0$. 
Then, it is clear that $\hat{u}(x)+\hat{v}(y)\leq\hat{c}(x,y)$ for $x,y\in\hat{\mathcal{T}}$, and $\displaystyle I\geq\int_{\hat{\mathcal{T}}}\hat{u}(x)\hat{\mu}(dx)+\int_{\hat{\mathcal{T}}}\hat{v}(x)\hat{\nu}(dx)=\int_{{\mathcal{T}}}u(x)\mu(dx)+\int_{{\mathcal{T}}}v(x)\nu(dx).$ It follows that $I\geq J$. In order to prove the converse, let $(\hat{u},\hat{v})$ be a maximizer for $I$. Then, by considering $(\hat{u}-\hat{u}(\hat{s}),\hat{v}+\hat{u}(\hat{s}))$, we can assume that $\hat{u}(\hat{s})=0$. Also, if we let $v(y):=\inf_{x\in\hat{\mathcal{T}}}[\hat{c}(x,y)-\hat{u}(x)]$, then $(\hat{u},v)$ is still in the admissible class for $I$ and $\hat{v}(y)\leq v(y)$. This implies that $(\hat{u},v)$ is also a maximizer for $I$. For these reasons, we can assume without loss of generality that the maximizer $(\hat{u},\hat{v})$ has the following additional properties: $\hat{u}(\hat{s})=0$ and $\hat{v}(y)=\inf_{x\in\hat{\mathcal{T}}}[\hat{c}(x,y)-\hat{u}(x)]\quad\forall y\in\hat{\mathcal{T}}.$ In particular, $\hat{v}(\hat{s})=\inf_{x\in\hat{\mathcal{T}}}[\hat{c}(x,\hat{s})-\hat{u}(x)]$. For convenience, define $w_{1}(\hat{s})=0$ and consider the following two possibilities. Case 1: $\inf_{x\in\hat{\mathcal{T}}}[w_{1}(x)-\hat{u}(x)]\geq 0$. Then, since $\hat{c}(\hat{s},\hat{s})-\hat{u}(\hat{s})=0$ and $\inf_{x\in{\mathcal{T}}}[\hat{c}(x,\hat{s})-\hat{u}(x)]=\inf_{x\in{\mathcal{T}}}[w_{1}(x)-\hat{u}(x)]\geq 0$, we have $\hat{v}(\hat{s})=0$. Also, $\hat{v}(y)\leq\hat{c}(\hat{s},y)-\hat{u}(\hat{s})\leq w_{2}(y)$ for all $y\in\hat{\mathcal{T}}$. 
For each $y\in{\mathcal{T}}$, by using the facts $\hat{u}\leq w_{1}$ and $\hat{c}(\hat{s},y)-w_{1}(\hat{s})=w_{2}(y)\geq 0$ we get $\displaystyle\hat{v}(y)\geq\inf_{x\in\hat{\mathcal{T}}}[\hat{c}(x,y)-w_{1}(x)]=\inf_{x\in{\mathcal{T}}}\\{b[c(x,y)-\lambda]-w_{1}(x)\\}=-b\lambda+\inf_{x\in{\mathcal{T}}}[b\,c(x,y)-w_{1}(x)].$ Thus $(\hat{u},\hat{v})\in{\mathbb{K}}$ and $\displaystyle I=\int_{\hat{\mathcal{T}}}\hat{u}(x)\hat{\mu}(dx)+\int_{\hat{\mathcal{T}}}\hat{v}(x)\hat{\nu}(dx)$ $\displaystyle=\int_{{\mathcal{T}}}\hat{u}(x)\hat{\mu}(dx)+\int_{{\mathcal{T}}}\hat{v}(x)\hat{\nu}(dx)+\hat{v}(\hat{s})\mu({\mathcal{T}})$ $\displaystyle=\int_{{\mathcal{T}}}\hat{u}(x)\mu(dx)+\int_{{\mathcal{T}}}\hat{v}(x)\nu(dx)\leq J.$ Case 2: $\inf_{x\in\hat{\mathcal{T}}}[w_{1}(x)-\hat{u}(x)]<0$. Then, by arguing as in Case 1, we have $\hat{v}(\hat{s})=\inf_{x\in{\mathcal{T}}}[w_{1}(x)-\hat{u}(x)]<0$ and $\displaystyle I=\int_{{\mathcal{T}}}\hat{v}(x)\nu(dx)+\int_{{\mathcal{T}}}\hat{u}(x)\mu(dx)+\mu({\mathcal{T}})\inf_{{\mathcal{T}}}[w_{1}-\hat{u}].$ (14) Let $\tilde{u}(x):=\min\\{\hat{u}(x),w_{1}(x)\\}$. Then, it is obvious that $\tilde{u}(x)+\hat{v}(y)\leq\hat{c}(x,y)$ and $\tilde{u}(\hat{s})=0$. Since $\inf_{x\in{\mathcal{T}}}[w_{1}(x)-\hat{u}(x)]<0$, there exists $x_{0}\in{\mathcal{T}}$ such that $w_{1}(x_{0})<\hat{u}(x_{0})$. Thus, $\tilde{u}(x_{0})=w_{1}(x_{0})$ and hence $\inf_{{\mathcal{T}}}[w_{1}-\tilde{u}]\leq 0$. As $\tilde{u}\leq w_{1}$, we infer further that $\inf_{{\mathcal{T}}}[w_{1}-\tilde{u}]=0$. 
We also have $\displaystyle\int_{{\mathcal{T}}}\hat{u}(x)\mu(dx)+\mu({\mathcal{T}})\inf_{{\mathcal{T}}}[w_{1}-\hat{u}]$ $\displaystyle=\int_{{\mathcal{T}}}\tilde{u}(x)\mu(dx)+\int_{{\mathcal{T}}:\hat{u}>w_{1}}[\hat{u}(x)-w_{1}(x)]\mu(dx)+\mu({\mathcal{T}})\inf_{{\mathcal{T}}}[w_{1}-\hat{u}]\leq\int_{{\mathcal{T}}}\tilde{u}(x)\mu(dx).$ This together with (14) gives $\displaystyle I\leq\int_{{\mathcal{T}}}\tilde{u}(x)\mu(dx)+\int_{{\mathcal{T}}}\hat{v}(x)\nu(dx).$ Now let $\tilde{v}(y)=\inf_{x\in\hat{\mathcal{T}}}[\hat{c}(x,y)-\tilde{u}(x)]$ for $y\in{\mathcal{T}}$. Then, $\hat{v}(y)\leq\tilde{v}(y)\leq\hat{c}(\hat{s},y)-\tilde{u}(\hat{s})=w_{2}(y)$ for $y\in{\mathcal{T}}$. For each $y\in{\mathcal{T}}$, by using the facts $\tilde{u}\leq w_{1}$ and $\hat{c}(\hat{s},y)-w_{1}(\hat{s})=w_{2}(y)\geq 0$ we also get $\displaystyle\tilde{v}(y)\geq\inf_{x\in\hat{\mathcal{T}}}[\hat{c}(x,y)-w_{1}(x)]=\inf_{x\in{\mathcal{T}}}\\{b[c(x,y)-\lambda]-w_{1}(x)\\}=-b\lambda+\inf_{x\in{\mathcal{T}}}[b\,c(x,y)-w_{1}(x)].$ It follows that $(\tilde{u},\tilde{v})\in{\mathbb{K}}$ and $\displaystyle I\leq\int_{{\mathcal{T}}}\tilde{u}(x)\mu(dx)+\int_{{\mathcal{T}}}\tilde{v}(x)\nu(dx)\leq J.$ Thus we conclude that $I=J$ and the theorem follows. ∎ ### A.5 Proof for Corollary 3.5 in the main text ###### Proof. 
Notice that as $w_{i}$ ($i=1,2$) is $b$-Lipschitz, we have for every $x\in{\mathcal{T}}$ that $-w_{i}(x)\leq\inf_{y\in{\mathcal{T}}}\big{[}b\,d_{\mathcal{T}}(x,y)-w_{i}(y)\big{]}.$ (15) For each $(u,v)\in{\mathbb{K}}$, let $\displaystyle v^{*}(x)$ $\displaystyle:=\inf_{y\in{\mathcal{T}}}\big{\\{}b[d_{\mathcal{T}}(x,y)-\lambda]-v(y)\big{\\}}=-b\lambda+\inf_{y\in{\mathcal{T}}}\big{[}b\,d_{\mathcal{T}}(x,y)-v(y)\big{]}\geq u(x),$ $\displaystyle v^{**}(y)$ $\displaystyle:=\inf_{x\in{\mathcal{T}}}\big{\\{}b[d_{\mathcal{T}}(x,y)-\lambda]-v^{*}(x)\big{\\}}=-b\lambda+\inf_{x\in{\mathcal{T}}}\big{[}b\,d_{\mathcal{T}}(x,y)-v^{*}(x)\big{]}\geq v(y).$ By using $-b\lambda+\inf_{x\in{\mathcal{T}}}[b\,d_{\mathcal{T}}(x,y)-w_{1}(x)]\leq v(y)\leq w_{2}(y)$ and (15), we obtain for every $x\in{\mathcal{T}}$ that $\displaystyle v^{*}(x)$ $\displaystyle\leq-b\lambda-v(x)\leq-\inf_{y\in{\mathcal{T}}}[b\,d_{\mathcal{T}}(x,y)-w_{1}(y)]\leq w_{1}(x),$ $\displaystyle v^{*}(x)$ $\displaystyle\geq-b\lambda+\inf_{y\in{\mathcal{T}}}\big{[}b\,d_{\mathcal{T}}(x,y)-w_{2}(y)\big{]}\geq-b\lambda-w_{2}(x).$ We also claim that $v^{*}$ is $b$-Lipschitz, i.e., $|v^{*}(x_{1})-v^{*}(x_{2})|\leq b\,d_{\mathcal{T}}(x_{1},x_{2})$. Indeed, let $x_{1},x_{2}\in{\mathcal{T}}$. Then for any $\varepsilon>0$, there exists $y_{1}\in{\mathcal{T}}$ such that $b\,d_{\mathcal{T}}(x_{1},y_{1})-v(y_{1})<v^{*}(x_{1})+b\lambda+\varepsilon$. It follows that $v^{*}(x_{2})-v^{*}(x_{1})\leq b\,d_{\mathcal{T}}(x_{2},y_{1})-v(y_{1})+\varepsilon-[b\,d_{\mathcal{T}}(x_{1},y_{1})-v(y_{1})]\leq b\,d_{\mathcal{T}}(x_{1},x_{2})+\varepsilon.$ Since this holds for every $\varepsilon>0$, we get $v^{*}(x_{2})-v^{*}(x_{1})\leq b\,d_{\mathcal{T}}(x_{1},x_{2})$. By interchanging the roles of $x_{1}$ and $x_{2}$, we also obtain $v^{*}(x_{1})-v^{*}(x_{2})\leq b\,d_{\mathcal{T}}(x_{1},x_{2})$. Thus, $|v^{*}(x_{1})-v^{*}(x_{2})|\leq b\,d_{\mathcal{T}}(x_{1},x_{2})$. 
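As a quick numerical sanity check (not part of the proof), the infimal convolution $v^{*}$ and its $b$-Lipschitz property can be observed on a hypothetical instance: points of the real line with $d_{\mathcal{T}}(x,y)=|x-y|$ and an arbitrary, non-Lipschitz potential $v$.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = np.sort(rng.random(30))                 # points of a path metric space
d = np.abs(pts[:, None] - pts[None, :])       # d_T(x, y) = |x - y|
v = rng.standard_normal(30)                   # arbitrary potential, not Lipschitz
b, lam = 2.0, 0.7

# v*(x)  = min_y { b*(d(x,y) - lam) - v(y) }
v_star = np.min(b * (d - lam) - v[None, :], axis=1)
# v**(y) = min_x { b*(d(x,y) - lam) - v*(x) }
v_star2 = np.min(b * (d - lam) - v_star[:, None], axis=0)

# b-Lipschitz bound: |v*(x1) - v*(x2)| <= b * d(x1, x2), for every pair
lip_gap = np.abs(v_star[:, None] - v_star[None, :]) - b * d
```

Here `lip_gap` is entrywise nonpositive (up to rounding), and `v_star2 >= v` holds pointwise, matching the displayed inequalities defining $v^{*}$ and $v^{**}$.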
Hence, we have shown that $v^{*}\in\mathbb{L^{\prime}}$ with $\mathbb{L^{\prime}}:=\Big{\\{}f\in C({\mathcal{T}}):\,-b\lambda-w_{2}\leq f\leq w_{1},\,|f(x)-f(y)|\leq b\,d_{\mathcal{T}}(x,y)\Big{\\}}.$ We next claim $v^{**}=-b\lambda-v^{*}$. For this, it is clear from the definition that $v^{**}(y)\leq-b\lambda-v^{*}(y)$. On the other hand, from the Lipschitz property of $v^{*}$ we obtain $-v^{*}(y)\leq b\,d_{\mathcal{T}}(x,y)-v^{*}(x)\quad\forall x\in{\mathcal{T}},$ which gives $-b\lambda-v^{*}(y)\leq v^{**}(y)$. Thus, we conclude that $v^{**}=-b\lambda-v^{*}$ as claimed. From these, we obtain that $\displaystyle\int_{{\mathcal{T}}}u(x)\mu(dx)+\int_{{\mathcal{T}}}v(x)\nu(dx)$ $\displaystyle\leq\int_{{\mathcal{T}}}v^{*}(x)\mu(dx)+\int_{{\mathcal{T}}}v^{**}(x)\nu(dx)$ $\displaystyle=\int_{{\mathcal{T}}}v^{*}(x)\mu(dx)-\int_{{\mathcal{T}}}v^{*}(x)\nu(dx)-b\lambda\nu({\mathcal{T}})$ $\displaystyle\leq-b\lambda\nu({\mathcal{T}})+\sup\left\\{\int_{\mathcal{T}}f(\mu-\nu):\,f\in\mathbb{L^{\prime}}\right\\}.$ This together with Theorem 3.4 in the main text implies that $\mathrm{ET}_{\lambda}(\mu,\nu)\leq-b\lambda\nu({\mathcal{T}})+\sup\left\\{\int_{\mathcal{T}}f(\mu-\nu):\,f\in\mathbb{L^{\prime}}\right\\}$. To prove the converse, let $f\in\mathbb{L^{\prime}}$. Define $u:=f$ and $v:=-b\lambda-f$. 
Then, we have $u(x)\leq w_{1}(x)$, $v(x)\leq-b\lambda-[-b\lambda-w_{2}(x)]=w_{2}(x)$, and $v(x)\geq-b\lambda- w_{1}(x)\geq-b\lambda+\inf_{y\in{\mathcal{T}}}[b\,d_{\mathcal{T}}(x,y)-w_{1}(y)].$ Also, the Lipschitz property of $f$ gives $u(x)+v(y)=-b\lambda+f(x)-f(y)\leq b[d_{\mathcal{T}}(x,y)-\lambda]\quad\forall x,y\in{\mathcal{T}}.$ Thus $(u,v)\in{\mathbb{K}}$, and hence we obtain from Theorem 3.4 in the main text that $\displaystyle-b\lambda\nu({\mathcal{T}})+\int_{\mathcal{T}}f(\mu-\nu)=\int_{{\mathcal{T}}}u(x)\mu(dx)+\int_{{\mathcal{T}}}v(x)\nu(dx)\leq\mathrm{ET}_{\lambda}(\mu,\nu).$ As this holds for every $f\in\mathbb{L^{\prime}}$, we get $-b\lambda\nu({\mathcal{T}})+\sup\left\\{\int_{\mathcal{T}}f(\mu-\nu):\,f\in\mathbb{L^{\prime}}\right\\}\leq\mathrm{ET}_{\lambda}(\mu,\nu).$ Thus, we have shown that $\mathrm{ET}_{\lambda}(\mu,\nu)=-b\lambda\nu({\mathcal{T}})+\sup\left\\{\int_{\mathcal{T}}f(\mu-\nu):\,f\in\mathbb{L^{\prime}}\right\\}.$ (16) Now consider $f=\tilde{f}-\frac{b\lambda}{2}$. Then, $f\in\mathbb{L^{\prime}}$ if and only if $\tilde{f}\in\mathbb{L}$. Moreover, $\int_{\mathcal{T}}f(\mu-\nu)=-\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})-\nu({\mathcal{T}})\big{]}+\int_{\mathcal{T}}\tilde{f}(\mu-\nu).$ Therefore, the conclusion of the corollary follows from (16). ∎ ### A.6 Proof for Proposition 3.7 in the main text In order to prove Proposition 3.7 in the main text, we need the following auxiliary result. ###### Lemma A.1. Assume that $w_{1}>0$ and $w_{2}>0$. Then, $d(\mu,\nu)=0$ implies that $\mu=\nu$. ###### Proof. Assume that $d(\mu,\nu)=0$. Let $\gamma^{0}$ be an optimal plan for $\mathrm{ET}_{\lambda}(\mu,\nu)$, and set $m:=\gamma^{0}({\mathcal{T}}\times{\mathcal{T}})$. 
Then, $m\leq\min\\{\mu({\mathcal{T}}),\nu({\mathcal{T}})\\}$, and hence we obtain from Problem (3) in the main text that $\displaystyle\int_{\mathcal{T}}w_{1}[1-f_{1}(x)]\mu(dx)+\int_{\mathcal{T}}w_{2}[1-f_{2}(x)]\nu(dx)+b\,\int_{{\mathcal{T}}\times{\mathcal{T}}}d_{\mathcal{T}}(x,y)\gamma^{0}(dx,dy)$ $\displaystyle=\mathrm{ET}_{\lambda}(\mu,\nu)+\lambda bm\leq\mathrm{ET}_{\lambda}(\mu,\nu)+\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})+\nu({\mathcal{T}})\big{]}=d(\mu,\nu)=0.$ Thus, $\displaystyle\int_{\mathcal{T}}w_{1}[1-f_{1}(x)]\mu(dx)=\int_{\mathcal{T}}w_{2}[1-f_{2}(x)]\nu(dx)=\int_{{\mathcal{T}}\times{\mathcal{T}}}d_{\mathcal{T}}(x,y)\gamma^{0}(dx,dy)=0.$ Since $w_{1}$ and $w_{2}$ are positive, it follows in particular that $f_{1}=1$ $\mu$-a.e. and $f_{2}=1$ $\nu$-a.e. That is, $\gamma^{0}_{1}=\mu$ and $\gamma^{0}_{2}=\nu$. Moreover, the last identity above implies that $\gamma^{0}$ is supported on the diagonal $\\{y=x\\}$. Therefore, for any continuous function $\varphi$ on ${\mathcal{T}}$ we have $\int_{\mathcal{T}}\varphi(x)\mu(dx)=\int_{{\mathcal{T}}\times{\mathcal{T}}}\varphi(x)\gamma^{0}(dx,dy)=\int_{{\mathcal{T}}\times{\mathcal{T}}}\varphi(y)\gamma^{0}(dx,dy)=\int_{\mathcal{T}}\varphi(y)\nu(dy).$ We thus conclude that $\mu=\nu$. ∎ ###### Proof. [Of Proposition 3.7 in the main text] i) This follows immediately from Corollary 3.5 in the main text. ii) By Corollary 3.5 in the main text, it is clear that $d(\mu,\nu)\geq 0$ and $d(\mu,\mu)=0$. Also, if $d(\mu,\nu)=0$, then by Lemma A.1, we have $\mu=\nu$. It is obvious that $d$ satisfies the triangle inequality. iii) Due to the assumption $w_{1}=w_{2}$ we have $f\in\mathbb{L}$ if and only if $-f\in\mathbb{L}$. It follows that $d(\mu,\nu)=d(\nu,\mu)$. This together with ii) implies that $({\mathcal{M}}({\mathcal{T}}),d)$ is a metric space. Its completeness follows from [56, Proposition 4]. 
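As a quick numerical sanity check (not part of the proof), these metric properties can be observed directly from the dual formula of Corollary 3.5 in the main text. The sketch below is a hypothetical discrete instance on points of the real line: it assumes the discrete analogue of the constraint set $\mathbb{L}$ with equal weights $w_{1}=w_{2}=w$, evaluates $d$ as a linear program, and checks symmetry, the triangle inequality, and the midpoint identity $d(\mu,\frac{\mu+\nu}{2})=\frac{1}{2}d(\mu,\nu)$.

```python
import numpy as np
from scipy.optimize import linprog

def d_dual(mu, nu, pts, w, b, lam):
    """Discrete analogue of d(mu,nu) = sup_{f in L} int f d(mu - nu), where
    L = { f : -w - b*lam/2 <= f <= w + b*lam/2, |f(x)-f(y)| <= b*d(x,y) }."""
    n = len(pts)
    dm = np.abs(pts[:, None] - pts[None, :])
    A, rhs = [], []
    for i in range(n):                 # Lipschitz constraints f_i - f_j <= b*d_ij
        for j in range(n):
            if i != j:
                row = np.zeros(n)
                row[i], row[j] = 1.0, -1.0
                A.append(row)
                rhs.append(b * dm[i, j])
    res = linprog(nu - mu,             # maximize (mu - nu) . f
                  A_ub=np.array(A), b_ub=np.array(rhs),
                  bounds=[(-w[i] - b * lam / 2, w[i] + b * lam / 2)
                          for i in range(n)],
                  method="highs")
    return -res.fun

rng = np.random.default_rng(2)
pts = np.sort(rng.random(6))
w = 1.0 + rng.random(6)                # equal weights w1 = w2 = w > 0
mu, nu, sg = rng.random(6), rng.random(6), rng.random(6)
b, lam = 1.0, 0.5
```

Symmetry comes from the feasible set being invariant under $f\mapsto-f$, and the midpoint identity from linearity of the objective in $\mu-\nu$.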
As a complete metric space, it is well known that $({\mathcal{M}}({\mathcal{T}}),d)$ is a geodesic space if and only if for every $\mu,\nu\in{\mathcal{M}}({\mathcal{T}})$ there exists $\sigma\in{\mathcal{M}}({\mathcal{T}})$ such that $d(\mu,\sigma)=d(\nu,\sigma)=\frac{1}{2}d(\mu,\nu).$ To verify the latter, take $\sigma:=\frac{\mu+\nu}{2}$. Then using Corollary 3.5 in the main text, we obtain $d(\mu,\sigma)=\frac{1}{2}\sup_{f\in\mathbb{L}}\int_{\mathcal{T}}f(\mu-\nu)=\frac{1}{2}d(\mu,\nu)$ and $d(\nu,\sigma)=\frac{1}{2}\sup_{f\in\mathbb{L}}\int_{\mathcal{T}}f(\nu-\mu)=\frac{1}{2}d(\nu,\mu)=\frac{1}{2}d(\mu,\nu).$ ∎ ### A.7 Proof for Proposition 3.8 in the main text ###### Proof. Observe that $\displaystyle\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)$ $\displaystyle=-\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})+\nu({\mathcal{T}})\big{]}$ $\displaystyle\quad+\sup\Big{\\{}s[\mu({\mathcal{T}})-\nu({\mathcal{T}})]:\,s\in\big{[}-\frac{b\lambda}{2}-w_{2}(r)+\alpha,w_{1}(r)+\frac{b\lambda}{2}-\alpha\big{]}\Big{\\}}$ $\displaystyle\quad+\sup\left\\{\int_{\mathcal{T}}\Big{[}\int_{[r,x]}g(y)\omega(dy)\Big{]}(\mu-\nu)(dx):\,\|g\|_{L^{\infty}({\mathcal{T}})}\leq b\right\\}.$ The first supremum equals $[w_{1}(r)+\frac{b\lambda}{2}-\alpha][\mu({\mathcal{T}})-\nu({\mathcal{T}})]$ if $\mu({\mathcal{T}})\geq\nu({\mathcal{T}})$ and equals $-[w_{2}(r)+\frac{b\lambda}{2}-\alpha][\mu({\mathcal{T}})-\nu({\mathcal{T}})]$ if $\mu({\mathcal{T}})<\nu({\mathcal{T}})$. On the other hand, by the same arguments as in [18, p.575-576], we see that the second supremum equals $\int_{{\mathcal{T}}}|\mu(\Lambda(x))-\nu(\Lambda(x))|\,\omega(dx)$. Putting them together, we obtain the desired formula for $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)$. ∎ ### A.8 Proof for Proposition 3.9 in the main text ###### Proof. 
The inequality $\mathrm{ET}_{\lambda}(\mu,\nu)\leq\widetilde{\mathrm{ET}}_{\lambda}^{0}(\mu,\nu)$ holds due to $\mathbb{L}\subset\mathbb{L}_{0}$ and Corollary 3.5 in the main text. Next, let $2bL({\mathcal{T}})\leq\alpha\leq\frac{1}{2}[b\lambda+w_{1}(r)+w_{2}(r)].$ Then, thanks to Corollary 3.5 in the main text, the stated lower bound will follow if $\mathbb{L}_{\alpha}\subset\mathbb{L}$. This is achieved if we can show that any $f\in\mathbb{L}_{\alpha}$ satisfies $-w_{2}-\frac{b\lambda}{2}\leq f\leq w_{1}+\frac{b\lambda}{2}$. Indeed, for such function $f$ we have $f(x)=s+\int_{[r,x]}g(y)\omega(dy),$ with $s\in\big{[}-w_{2}(r)-\frac{b\lambda}{2}+\alpha,w_{1}(r)+\frac{b\lambda}{2}-\alpha\big{]}$ and $\|g\|_{L^{\infty}({\mathcal{T}})}\leq b$. This together with the $b$-Lipschitz property of $w_{1},w_{2}$ gives for every $x\in{\mathcal{T}}$ that $f(x)\leq s+\|g\|_{L^{\infty}({\mathcal{T}})}\omega([r,x])\leq w_{1}(r)+\frac{b\lambda}{2}-\alpha+bL({\mathcal{T}})\leq w_{1}(x)+\frac{b\lambda}{2}-\alpha+2bL({\mathcal{T}})\leq w_{1}(x)+\frac{b\lambda}{2}$ and $\displaystyle f(x)$ $\displaystyle\geq s-\|g\|_{L^{\infty}({\mathcal{T}})}\omega([r,x])\geq- w_{2}(r)-\frac{b\lambda}{2}+\alpha-bL({\mathcal{T}})$ $\displaystyle\geq- w_{2}(x)-\frac{b\lambda}{2}+\alpha-2bL({\mathcal{T}})\geq- w_{2}(x)-\frac{b\lambda}{2}.$ It follows that $f\in\mathbb{L}$. Thus, $\mathbb{L}_{\alpha}\subset\mathbb{L}$ and we obtain $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)\leq\mathrm{ET}_{\lambda}(\mu,\nu).$ ∎ ### A.9 Proof of Proposition 3.10 in the main text We begin with the following auxiliary result. ###### Lemma A.2. Let $\mu,\nu\in{\mathcal{M}}({\mathcal{T}})$. Then, $\mu=\nu$ if and only if $\mu(\Lambda(x))=\nu(\Lambda(x))$ for every $x$ in ${\mathcal{T}}$. ###### Proof. It is obvious that $\mu=\nu$ implies that $\mu(\Lambda(x))=\nu(\Lambda(x))$ for every $x$ in ${\mathcal{T}}$. Now assume that $\mu(\Lambda(x))=\nu(\Lambda(x))$ for every $x$ in ${\mathcal{T}}$. 
We first claim that $\mu(\\{a\\})=\nu(\\{a\\})$ for any $a\in{\mathcal{T}}$. Indeed, if $a$ is not a node then we have $\Lambda(a)\setminus\Lambda(a_{n})\downarrow\\{a\\}$, where $\\{a_{n}\\}_{n=1}^{\infty}$ is a sequence of distinct points on the same edge as $a$ and converges to $a$ from below. Hence, $\mu(\\{a\\})=\lim_{n\to\infty}\big{[}\mu(\Lambda(a))-\mu(\Lambda(a_{n}))\big{]}=\lim_{n\to\infty}\big{[}\nu(\Lambda(a))-\nu(\Lambda(a_{n}))\big{]}=\nu(\\{a\\}).$ If $a$ is a common node of edges $e_{1},...,e_{k}$, then we have $\Lambda(a)\setminus\cup_{i=1}^{k}\Lambda(a^{i}_{n})\downarrow\\{a\\}$, where $\\{a^{i}_{n}\\}_{n=1}^{\infty}$ is a sequence of distinct points on edge $e_{i}$ that converges to $a$ from below. Then, we obtain $\displaystyle\mu(\\{a\\})=\lim_{n\to\infty}\big{[}\mu(\Lambda(a))-\sum_{i=1}^{k}\mu(\Lambda(a^{i}_{n}))\big{]}=\lim_{n\to\infty}\big{[}\nu(\Lambda(a))-\sum_{i=1}^{k}\nu(\Lambda(a^{i}_{n}))\big{]}=\nu(\\{a\\}).$ Thus, the claim is proved. On the other hand, for any points $x,y$ belonging to the same edge $\mu([x,y))=\mu(\Lambda(x))-\mu(\Lambda(y))=\nu(\Lambda(x))-\nu(\Lambda(y))=\nu([x,y)).$ Thus, combining these, we infer further that $\mu([x,y])=\nu([x,y])$ for any $x,y\in{\mathcal{T}}$. It follows that $\mu=\nu$, and the proof is complete. ∎ ###### Proof. [Of Proposition 3.10 in the main text] We note first that the quantity $d_{\alpha}$ depends only on the values of the weights at the root $r$ of the tree. This comes from the fact that only $w_{1}(r)$ and $w_{2}(r)$ are used in the definition of $\mathbb{L}_{\alpha}$. The proofs of i) and iii) are exactly the same as that of Proposition 3.7 in the main text. For ii), it follows from the fact $d_{\alpha}(\mu,\nu)=\sup\left\\{\int_{\mathcal{T}}f(\mu-\nu):\,f\in\mathbb{L}_{\alpha}\right\\}$ that $d_{\alpha}(\mu,\nu)\geq 0$, $d_{\alpha}(\mu,\mu)=0$, and $d_{\alpha}$ satisfies the triangle inequality. 
Also, if $d_{\alpha}(\mu,\nu)=0$, then by Proposition 3.8 in the main text, we get $\big{[}w_{i}(r)+\frac{b\lambda}{2}-\alpha\big{]}|\mu({\mathcal{T}})-\nu({\mathcal{T}})|+\int_{{\mathcal{T}}}|\mu(\Lambda(x))-\nu(\Lambda(x))|\,\omega(dx)=0.$ As $\big{[}w_{i}(r)+\frac{b\lambda}{2}-\alpha\big{]}>0$ by assumption, we must have $\mu({\mathcal{T}})=\nu({\mathcal{T}})$ and $\int_{{\mathcal{T}}}|\mu(\Lambda(x))-\nu(\Lambda(x))|\,\omega(dx)=0$. Therefore, $\mu(\Lambda(x))=\nu(\Lambda(x))$ for every $x\in{\mathcal{T}}$. By using Lemma A.2, we then conclude that $\mu=\nu$. Alternatively, we can argue as follows. Assume that $d_{\alpha}(\mu,\nu)=0$. Since $\mathbb{L}_{\alpha}\supset\tilde{\mathbb{L}}:=\left\\{f:\,-w_{2}(r)-\frac{b\lambda}{2}+\alpha\leq f(x)\leq w_{1}(r)+\frac{b\lambda}{2}-\alpha,\,\|f\|_{Lip({\mathcal{T}})}\leq b\right\\},$ we have $0\leq\sup_{f\in\tilde{\mathbb{L}}}\int_{\mathcal{T}}f(\mu-\nu)\leq d_{\alpha}(\mu,\nu)=0.$ Thus, $\sup_{f\in\tilde{\mathbb{L}}}\int_{\mathcal{T}}f(\mu-\nu)=0$. Then, by applying Corollary 3.5 in the main text and Lemma A.1 for constant weights $\tilde{w}_{1}:=w_{1}(r)+\frac{b\lambda}{2}-\alpha>0$ and $\tilde{w}_{2}:=w_{2}(r)+\frac{b\lambda}{2}-\alpha>0$, we obtain that $\mu=\nu$. ∎ ### A.10 Proof of Proposition 3.11 in the main text ###### Proof. Let $\mathbf{f}(x_{i},x_{j})=\tilde{a}(x_{i}+x_{j})$ for $\tilde{a},x_{i},x_{j}\in\mathbb{R}$. We first prove that $\mathbf{f}$ is negative definite. Let $n\geq 2$ and let $c_{1},c_{2},\dotsc,c_{n}\in\mathbb{R}$ be such that $\sum_{i=1}^{n}c_{i}=0$. Given $x_{1},x_{2},\dotsc,x_{n}\in\mathbb{R}$, we have $\sum_{i,j}c_{i}c_{j}\mathbf{f}(x_{i},x_{j})=\sum_{i,j}c_{i}c_{j}\tilde{a}x_{i}+\sum_{i,j}c_{i}c_{j}\tilde{a}x_{j}=\tilde{a}\big{(}\sum_{j}c_{j}\big{)}\big{(}\sum_{i}c_{i}x_{i}\big{)}+\tilde{a}\big{(}\sum_{i}c_{i}\big{)}\big{(}\sum_{j}c_{j}x_{j}\big{)}=0\leq 0,$ since $\sum_{i}c_{i}=0$. Therefore, $\mathbf{f}$ is negative definite.
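As a quick numerical sanity check of the argument above (a sketch of ours, not part of the proof), the double sum indeed vanishes whenever the coefficients sum to zero:

```python
import random

def double_sum(a_tilde, cs, xs):
    # computes sum_{i,j} c_i * c_j * a_tilde * (x_i + x_j)
    return sum(ci * cj * a_tilde * (xi + xj)
               for ci, xi in zip(cs, xs)
               for cj, xj in zip(cs, xs))

random.seed(0)
xs = [random.uniform(-5, 5) for _ in range(6)]
cs = [random.uniform(-1, 1) for _ in range(6)]
cs[-1] = -sum(cs[:-1])          # enforce sum(cs) == 0
val = double_sum(2.7, cs, xs)   # a_tilde = 2.7 chosen arbitrarily
assert abs(val) < 1e-9          # the sum is exactly zero, hence <= 0
```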
From Proposition 3.8 in the main text, we have $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)=-\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})+\nu({\mathcal{T}})\big{]}+\big{[}w_{i}(r)+\frac{b\lambda}{2}-\alpha\big{]}|\mu({\mathcal{T}})-\nu({\mathcal{T}})|+\int_{{\mathcal{T}}}|\mu(\Lambda(x))-\nu(\Lambda(x))|\,\omega(dx).$ The first term is negative definite since $\mathbf{f}$ is negative definite. Additionally, the second and third terms correspond to a weighted $\ell_{1}$ distance with nonnegative weights (since $\alpha\leq w_{i}(r)+\frac{b\lambda}{2}$ and the lengths of edges in the tree ${\mathcal{T}}$ are nonnegative). Therefore, the second and third terms are also negative definite. Hence, $\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}$ is negative definite. From Proposition 3.10 in the main text, we have $d_{\alpha}(\mu,\nu)=\widetilde{\mathrm{ET}}_{\lambda}^{\alpha}(\mu,\nu)+\frac{b\lambda}{2}\big{[}\mu({\mathcal{T}})+\nu({\mathcal{T}})\big{]}.$ Both terms are negative definite. Therefore, $d_{\alpha}$ is also negative definite. ∎ ## Appendix B Further Experimental Results In this section, we illustrate further experimental results. ### B.1 Further Results on the Efficient Approximation of $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ for $\textnormal{ET}_{\lambda}$ In this section, we consider some further setups. ##### Change $\lambda$. In Figure 7(a), we use the same setup as in Figure 2 in the main text, but set the Lipschitz constant $a_{1}=\frac{b}{2}=0.5$ for $w_{1},w_{2}$. It shows that when $\lambda$ is increased, $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ is farther from $\textnormal{ET}_{\lambda}$. (a) (b) (c) Figure 7: In (7(a)), an illustration of the relative difference between $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and $\textnormal{ET}_{\lambda}$ w.r.t. $\lambda$. LT is the longest path from the root to a node in tree $\mathcal{T}$ (LT := $L_{\mathcal{T}}$). The Lipschitz constant for the functions $w_{1},w_{2}$ is $a_{1}=0.5$ (where $b=1$).
In (7(b), 7(c)), an illustration of the relative difference between $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and $\textnormal{ET}_{\lambda}$, i.e., $(\widetilde{\textnormal{ET}}_{\lambda}^{0}-\textnormal{ET}_{\lambda})/\left|\textnormal{ET}_{\lambda}\right|$, w.r.t. $b$. For (7(b)), the weight functions $w_{1},w_{2}$ are set to constants ($a_{1}=0$, or $w_{1}=w_{2}=a_{0}$), while for (7(c)), the weight functions $w_{1},w_{2}$ are set with the largest Lipschitz constant ($a_{1}=b$). ##### Change $b$. We consider the following two cases: ##### $\bullet$ For constant functions $w_{1},w_{2}$ (with $a_{1}=0$). We use the same setup as in Figure 1 in the main text, but with constant functions for $w_{1},w_{2}$ (i.e., $a_{1}=0$, or $w_{1}=w_{2}=a_{0}$), and change $b$. We set $\lambda=a_{0}=1$. In Figure 7(b), we illustrate that when the regularization $b$ between entropy and partial matching is farther from $1$ (one of the two terms is weighted more, see Equation (2) in the main text), $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ is farther from $\textnormal{ET}_{\lambda}$. ##### $\bullet$ For functions $w_{1},w_{2}$ with the largest Lipschitz constant $a_{1}=b$. We use the same setup as in Figure 7(b), but with $a_{1}=b$. Figure 7(c) shows similar results as in Figure 7(b) for $a_{1}=0$. For the largest Lipschitz constant for the functions $w_{1},w_{2}$ (i.e., $a_{1}=b$) and $b=a_{0}=1$, $\widetilde{\textnormal{ET}}$ is almost identical to KT, but they differ when the regularization $b$ between entropy and partial matching is farther from $1$ (one of the two terms is weighted more, see Equation (2) in the main text). ### B.2 Further Results w.r.t. $\alpha$ We illustrate further SVM results of $d_{\alpha}$ and $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ w.r.t. the value of $\alpha$ in the TWITTER, RECIPE, CLASSIC, AMAZON datasets in Figure 8(a), and in the Orbit, MPEG7 datasets in Figure 8(b).
The value of $\alpha$ may affect the performance of $d_{\alpha}$ and $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ on some datasets (e.g., the RECIPE, AMAZON datasets for document classification, and the Orbit dataset in TDA), but may not be sensitive on some other datasets (e.g., the TWITTER, CLASSIC datasets for document classification, and the MPEG7 dataset in TDA). Therefore, although $\alpha=0$ gives $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ the good property stated in Proposition 3.9 in the main text (an upper bound for $\textnormal{ET}_{\lambda}$), one may choose a suitable value of $\alpha$ (e.g., via cross-validation) to improve the performance of $d_{\alpha}$ and $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ on certain datasets. (a) In TWITTER, RECIPE, CLASSIC, AMAZON datasets. (b) In Orbit, MPEG7 datasets. Figure 8: SVM results of $d_{\alpha}$ and $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ w.r.t. the value of $\alpha$ with 10 tree slices. ### B.3 Further Results w.r.t. the Number of (Tree) Slices Similarly to Figure 6 in the main text, we illustrate further SVM results and time consumption for the corresponding kernel matrices for document classification (TWITTER, RECIPE, CLASSIC, AMAZON datasets) and TDA (Orbit, MPEG7 datasets) in Figure 9(a) and Figure 9(b), respectively. As a trade-off between performance and time consumption, one can choose about $n_{s}=10$ slices in applications. (a) In TWITTER, RECIPE, CLASSIC, AMAZON datasets. (b) In Orbit, MPEG7 datasets. Figure 9: SVM results and time consumption for the corresponding kernel matrices w.r.t. the number of (tree) slices. ### B.4 Further Results w.r.t. Parameters of Tree Metric Sampling ##### Document classification. * • In Figure 10(a), Figure 10(b), Figure 10(c), Figure 10(d), we illustrate further SVM results and time consumption for the corresponding kernel matrices of $d_{0}$ in the TWITTER, RECIPE, CLASSIC, AMAZON datasets, respectively, w.r.t.
different parameters for clustering-based tree metric sampling, such as the predefined tree deepest level $H_{\mathcal{T}}$ and the number of tree branches $\kappa$, which is the number of clusters in the farthest-point clustering. * • In Figure 11(a), Figure 11(b), Figure 11(c), Figure 11(d), we illustrate further SVM results and time consumption for the corresponding kernel matrices of $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ in the TWITTER, RECIPE, CLASSIC, AMAZON datasets, respectively, w.r.t. different parameters for clustering-based tree metric sampling, such as the predefined tree deepest level $H_{\mathcal{T}}$ and the number of tree branches $\kappa$, which is the number of clusters in the farthest-point clustering. (a) In TWITTER dataset. (b) In RECIPE dataset. (c) In CLASSIC dataset. (d) In AMAZON dataset. Figure 10: SVM results and time consumption for the corresponding kernel matrices of $d_{0}$ w.r.t. different parameters for clustering-based tree metric sampling (predefined tree deepest level $H_{\mathcal{T}}$ and number of tree branches $\kappa$, the number of clusters in the farthest-point clustering). (a) In TWITTER dataset. (b) In RECIPE dataset. (c) In CLASSIC dataset. (d) In AMAZON dataset. Figure 11: SVM results and time consumption for the corresponding kernel matrices of $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ w.r.t. different parameters for clustering-based tree metric sampling (predefined tree deepest level $H_{\mathcal{T}}$ and number of tree branches $\kappa$, the number of clusters in the farthest-point clustering). ##### TDA. * • In Figure 12(a), we illustrate further SVM results and time consumption for the corresponding kernel matrices of $d_{0}$ in the Orbit, MPEG7 datasets w.r.t. different parameters for partition-based tree metric sampling, such as the predefined tree deepest level $H_{\mathcal{T}}$.
* • In Figure 12(b), we illustrate further SVM results and time consumption for the corresponding kernel matrices of $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ in the Orbit, MPEG7 datasets w.r.t. different parameters for partition-based tree metric sampling, such as the predefined tree deepest level $H_{\mathcal{T}}$. (a) For $d_{0}$. (b) For $\widetilde{\textnormal{ET}}_{\lambda}^{0}$. Figure 12: SVM results and time consumption for the corresponding kernel matrices in the Orbit, MPEG7 datasets w.r.t. different parameters for partition-based tree metric sampling (predefined tree deepest level $H_{\mathcal{T}}$). As in [43] (tree metric sampling for tree-sliced Wasserstein in applications), we also observed that the default parameters (e.g., the predefined deepest level $H_{\mathcal{T}}=6$ and $\kappa=4$ tree branches, the number of clusters in the farthest-point clustering) are a reasonable choice to trade off performance against time consumption. With these default parameters, sampled trees contain about $4000$ nodes. ## Appendix C Further Details and Discussions In this section, we give further details about the experiments, briefly review important aspects used in our work, and discuss relations to other work. ### C.1 More Details about Experiments In this section, we give further details about software, datasets, and experimental setups. ##### For software. * • For experiments in topological data analysis, we used the DIPHA toolbox, available at https://github.com/DIPHA/dipha, to extract persistence diagrams. * • For the standard complete optimal transport (OT) problem (e.g., KT in our work, which we used to compute the corresponding $\textnormal{ET}_{\lambda}$), we used a fast OT implementation, available at https://github.com/gpeyre/2017-ot-beginners/tree/master/matlab/mexEMD. It is about 4 times faster than the popular mex-file with Rubner's implementation in C, available at http://robotics.stanford.edu/~rubner/emd/default.htm.
* • For tree metric sampling, we used the MATLAB implementation, available at https://github.com/lttam/TreeWasserstein. We directly used this code for clustering-based tree metric sampling, and adapted it to its special case, partition-based tree metric sampling. * • For the Sinkhorn-based approach for unbalanced OT (Sinkhorn-UOT), we used the MATLAB implementation, available at https://github.com/gpeyre/2017-MCOM-unbalanced-ot. * • For sliced partial optimal transport (SPOT), we adapted the C++ implementation, available at https://github.com/nbonneel/spot, to MATLAB. ##### For datasets. * • The document datasets (TWITTER, RECIPE, CLASSIC, AMAZON) are available at https://github.com/mkusner/wmd. * • For the Orbit dataset, we follow the procedure detailed in [1] to generate the dataset. * • The MPEG7 dataset is available at http://www.imageprocessingplace.com/downloads_V3/root_downloads/image_databases/MPEG7_CE-Shape-1_Part_B.zip; we follow [43] to extract the 10-class subset of the dataset. * • For the granular packing system and SiO2 datasets, one may access them by contacting the corresponding authors. ##### For experimental setups. We further clarify some details about the experimental setup. As mentioned in the main text, for $d_{0}$ and $\widetilde{\textnormal{ET}}_{\lambda}^{0}$, we choose the weight functions $w_{1},w_{2}$ as $w_{1}(x)=w_{2}(x)=a_{1}d_{\mathcal{T}}(r,x)+a_{0},$ where $r$ is the root of tree $\mathcal{T}$; we set $\lambda=b=1$, $a_{0}=1$. Following §5.1 in the main text, we set $a_{1}=b=1$. As in §3.2 in the main text, $\alpha\in\left[0,\frac{1}{2}\left(b\lambda+w_{1}(r)+w_{2}(r)\right)\right]$. Thus, $\alpha\in[0,\frac{3}{2}]$ in our experiments (see more experimental results with different values of $\alpha$ in §B.2). We used $n_{s}=10$ (tree) slices for $d_{0}$, $\widetilde{\textnormal{ET}}_{\lambda}^{0}$ and SPOT.
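The admissible range for $\alpha$ quoted above follows directly from the chosen constants; a minimal sketch (the helper name is ours, not from the paper):

```python
def alpha_upper_bound(b, lam, w1_r, w2_r):
    # upper end of the admissible interval [0, (b*lam + w1(r) + w2(r)) / 2]
    return 0.5 * (b * lam + w1_r + w2_r)

# with lambda = b = 1, a0 = 1, a1 = 1, the weights at the root are
# w1(r) = w2(r) = a1 * d_T(r, r) + a0 = 1, hence alpha lies in [0, 1.5]
assert alpha_upper_bound(b=1, lam=1, w1_r=1, w2_r=1) == 1.5
```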
For tree metric sampling, we used the default hyperparameters: the predefined tree deepest level $H_{\mathcal{T}}=6$ and $\kappa=4$ tree branches (the number of clusters used in the farthest-point clustering). ### C.2 Some Brief Reviews In this section, we give brief reviews of (or pointers to) some important aspects of our work. ##### For kernels. We review some important definitions (e.g., positive/negative definite kernels [6]) and theorems (e.g., Theorem 3.2.2 in [6]) about kernels used in our work. ##### $\bullet$ Positive definite kernels [6, p.66–67]. A kernel function $k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ is positive definite if $\forall n\in\mathbb{N}^{*},\forall x_{1},x_{2},...,x_{n}\in\mathcal{X}$, we have $\sum_{i,j}c_{i}c_{j}k(x_{i},x_{j})\geq 0,\qquad\forall c_{i}\in\mathbb{R}.$ ##### $\bullet$ Negative definite kernels [6, p.66–67]. A kernel function $k:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ is negative definite if $\forall n\geq 2,\forall x_{1},x_{2},...,x_{n}\in\mathcal{X}$, we have $\sum_{i,j}c_{i}c_{j}k(x_{i},x_{j})\leq 0,\qquad\forall c_{i}\in\mathbb{R}\,\,\text{s.t.}\,\sum_{i}c_{i}=0.$ ##### $\bullet$ Theorem 3.2.2 in [6, p.74] for kernels. If $\kappa$ is a negative definite kernel, then $\forall t>0$, the kernel $k_{t}(x,z):=\exp{\left(-t\kappa(x,z)\right)}$ is positive definite. ##### For tree metric sampling. The tree metric sampling is described in detail in [43][S4]. Le et al. [43] also reviewed the details of the farthest-point clustering in §4.2 of their supplementary, and discussed the quantization/clustering sensitivity problems of tree metric sampling in §5 of their supplementary. ##### For persistence diagrams and related mathematical definitions in topological data analysis. We refer the reader to [35, §2] for a review of the mathematical framework for persistence diagrams (e.g., persistence diagrams, filtrations, persistent homology).
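Theorem 3.2.2 reviewed above can be illustrated numerically: the absolute-difference kernel $\kappa(x,z)=|x-z|$ is negative definite, so $\exp(-t\kappa)$ should be positive definite for any $t>0$. The following crude check (a sketch of ours, not from [6]) probes the quadratic form with random coefficient vectors:

```python
import math
import random

def min_quadratic_form(kernel_matrix, trials=200, seed=1):
    # crude positive semi-definiteness probe: c^T K c should be >= 0 for all c
    rng = random.Random(seed)
    n = len(kernel_matrix)
    worst = float("inf")
    for _ in range(trials):
        c = [rng.uniform(-1, 1) for _ in range(n)]
        q = sum(c[i] * c[j] * kernel_matrix[i][j]
                for i in range(n) for j in range(n))
        worst = min(worst, q)
    return worst

xs = [0.0, 0.3, 1.1, 2.5, 4.0]
t = 0.7
# k_t(x, z) = exp(-t * |x - z|), per Theorem 3.2.2 with kappa(x, z) = |x - z|
K = [[math.exp(-t * abs(x - z)) for z in xs] for x in xs]
assert min_quadratic_form(K) >= -1e-12
```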
### C.3 Discussions about Relations to Other Work We note that an ultrametric (i.e., a non-Archimedean metric, or isosceles metric) and its special case, the binary metric, are tree metrics [43]. Additionally, a metric for points on a line (e.g., in 1-dimensional projections for supports in SPOT, or SW), or on a 1-dimensional manifold (e.g., in 1-dimensional manifold projections for supports in generalized SW [34]), is also a tree metric, since the corresponding tree is a chain of these points. We also list some other studies related to the OT problem with tree metrics as follows: (i) Kloeckner [33] derived geometric properties of the OT space for measures on an ultrametric space, (ii) Sommerfeld and Munk [64] studied statistical inference for OT on finite spaces including tree metrics, (iii) the tree-Wasserstein barycenter [41], (iv) alignment problems for probability measures having supports in different spaces (i.e., fast tree variants of Gromov-Wasserstein) [40], and (v) ultrametric Gromov-Wasserstein [48]. We note that we consider discrete measures (e.g., empirical measures) in our work. The closed-form formulation of our regularized entropy partial transport (EPT) $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ in Equation (8) in the main text holds for general discrete nonnegative measures having different masses. To our knowledge, the proposed regularized EPT (i.e., $\widetilde{\textnormal{ET}}_{\lambda}^{\alpha}$ in Equation (8) in the main text) is the first approach that yields a closed-form solution among available variants of unbalanced OT for discrete measures. In the context of unbalanced OT for continuous measures (e.g., probability measures scaled by positive constants), Janati et al. [32] recently showed that entropic optimal transport for unbalanced Gaussian measures (i.e., Gaussian measures scaled by different positive constants) has a closed-form solution.
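As a hedged sketch of how the closed-form expression from Proposition 3.8 can be evaluated for discrete measures on a tree, the integral term reduces to a sum of edge lengths weighted by subtree-mass differences. The node numbering convention, the assumption of constant weights $w_1=w_2=w_r$, and all helper names below are ours, not from the paper:

```python
def regularized_ept(mu, nu, parent, edge_len, w_r, b, lam, alpha):
    """Closed-form ET~ for discrete masses placed on tree nodes.

    Nodes are numbered so that parent[v] < v, with node 0 the root;
    edge_len[v] is the length of the edge from parent[v] to v.
    """
    n = len(parent)
    sub_mu, sub_nu = list(mu), list(nu)
    for v in range(n - 1, 0, -1):       # accumulate subtree masses mu(Lambda(x))
        sub_mu[parent[v]] += sub_mu[v]
        sub_nu[parent[v]] += sub_nu[v]
    total_mu, total_nu = sub_mu[0], sub_nu[0]
    value = -0.5 * b * lam * (total_mu + total_nu)
    value += (w_r + 0.5 * b * lam - alpha) * abs(total_mu - total_nu)
    # integral term: each edge contributes length * |mu(Lambda) - nu(Lambda)|
    value += sum(edge_len[v] * abs(sub_mu[v] - sub_nu[v]) for v in range(1, n))
    return value

# a 3-node tree: root 0 with children 1 (edge length 1.0) and 2 (edge length 2.0)
val = regularized_ept(mu=[0.0, 1.0, 0.0], nu=[0.0, 0.0, 1.0],
                      parent=[0, 0, 0], edge_len=[0.0, 1.0, 2.0],
                      w_r=1.0, b=1.0, lam=1.0, alpha=0.0)
assert abs(val - 2.0) < 1e-12   # = -1 + 0 + (1*1 + 2*1)
```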
# Reliable compositional analysis of airborne particulate matter beyond the quantification limits of total reflection X-ray fluorescence Yves Kayser, János Osán, Philipp Hönicke, Burkhard Beckhoff ###### Abstract Knowledge of the temporal and size distribution of particulate matter (PM) in air, as well as of its elemental composition, is key information for source apportionment, for the investigation of its influence on environmental processes, and for providing reliable data for climate models. While cascade impactors allow for time- and size-resolved collection of airborne PM, total reflection X-ray fluorescence (TXRF) allows for element-sensitive investigation of minute sample amounts thanks to its detection sensitivity. However, during quantification by means of TXRF it is crucial to be aware of the linear calibration limits of TXRF in order to identify situations where collection times were exceedingly long or pollution levels in the different size fractions exceedingly high. Indeed, TXRF can only be reliably used when the amount of matter collected on top of the substrate is sufficiently dilute. By means of grazing incidence X-ray fluorescence (GIXRF), where the excitation conditions are varied in a controlled and reliable manner and also include the TXRF regime, a self-consistent quantification of elemental mass depositions can be performed in order to validate or falsify TXRF quantification results. For low mass depositions an agreement within a few percent for the different excitation conditions was found, while for increasing amounts of material relative errors of up to a factor of 4 were found for TXRF as compared to GIXRF. Thus, TXRF cannot be applied to all samples regardless of their coverage, and threshold values for the validity of quantification results need to be determined.
As a flexible solution, GIXRF allows extending the dynamic range of reliably quantifiable mass depositions beyond the linear regime of TXRF, an important advantage when variable amounts of airborne PM need to be quantified, as in the case of collection with cascade impactors. Furthermore, the presented, more reliable quantification approach can be transferred to mobile tabletop instrumentation as well. This aspect is highly relevant for air quality monitoring in terms of supporting the definition of appropriate legislation and measures for health and climate protection, as well as for supporting their enforcement. ###### keywords: TXRF, GIXRF, aerosols, particulate matter PTB: Physikalisch-Technische Bundesanstalt, Abbestraße 2-12, 10587 Berlin, Germany. EK: Environmental Physics Department, Centre for Energy Research, Konkoly-Thege M. út 29-33., 1121 Budapest, Hungary. ## 1 Introduction Aerosols present in the environment affect our daily life at multiple levels. For example, airborne particulate matter (PM) can impact health due to inhalation 1, 2, 3 or can influence atmospheric processes 4, more precisely the climate and environmental ecosystems, through impacting cloud formation 5 or reflecting and scattering sunlight 6. The chemical composition of airborne PM, whose determination requires element-sensitive analytical methods, as well as the chemical speciation of the elements it contains, is of interest for a correct comprehension of its physical and chemical properties 7. With regard to health concerns, the smallest particles, so-called fine and ultrafine particles with sizes in the sub-micrometer and sub-100-nanometer range, are the most concerning for epidemiology, as they can penetrate into the airways of the lungs and may be held responsible for adverse effects on the respiratory and cardio-vascular system upon long-term exposure 8, 9, 10, 11.
In particular, anthropogenic emissions result in a noticeably higher generation of ultrafine particles 12. While toxicological studies have to assess possible health risks of different nanomaterials 13, parallel efforts have to be undertaken to quantify the presence of the different elements in air and to trace back the physical processes airborne PM undergoes under different weather conditions. The compositional analysis of aerosols is often addressed by regulated analytical techniques requiring moderate amounts of substance collected on fiber filters. Improved time-resolved and size-fractionated information is, however, relevant for accurate modeling of climate changes, for regulatory bodies to impose preventive measures, and for legal entities to enforce regulations on air quality and to correctly pinpoint anthropogenic or natural sources. Adding to this the requirement not only to detect but also to reliably quantify trace levels of aerosols contained in air in order to achieve good time resolution during environmental monitoring campaigns, highly sensitive and accurate techniques need to be used. Among the different available techniques 14, 15, X-ray fluorescence (XRF) based methods are promising contributors to the field by delivering ensemble information on the chemical composition, due to the advantages provided in terms of sample preparation, consumption and sensitivity 14, 16. The best possible detection limits with XRF based techniques can be achieved by means of total reflection XRF (TXRF) 17, 18. In combination with cascade impactors, where the PM is collected on different impaction stages and discriminated by size using the principle of inertia, all relevant information for a time-dependent, element- and size-sensitive quantification of airborne PM in a defined volume of air down to the range of a few ng/m3 is at hand 19, 20, 21, 22, 23, 24, 25.
However, varying environmental conditions and human activities result in unpredictable pollution levels, which may in addition vary severely between the different impaction stages for each collection interval. Thus, it cannot be ensured beforehand that the samples from the different impaction stages and for the different collection times are all within the linear regime of TXRF, where it is assumed that the X-ray standing wavefield (XSW) created is not, or only slightly, disturbed by the material on top of the substrate. Since the impact of airborne PM on the XSW cannot be known beforehand, quantification by means of TXRF, which is a single-point measurement technique without parameter variation, risks failing and must be assessed during outdoor campaigns. In the present work, the impact of different mass depositions on the reliability of the quantification of the elemental mass deposition by means of TXRF is investigated. This assessment is done by analyzing samples from field campaigns with different collection times by both standard TXRF at a single, fixed incidence angle and grazing incidence X-ray fluorescence (GIXRF), where the incidence angle is varied in a controlled and reproducible manner. Indeed, during a GIXRF measurement the XSW created on top of the sample is modified, and can even be neglected for incidence angles far above the critical angle of total external reflection, such that the excitation conditions are fundamentally altered during the measurement. For any given sample, the mass deposition of the airborne PM collected depends on the pollution level and collection time and does not vary during analysis. Hence, it must be expected that the quantification of the different elements yields the same result for each incidence angle used during the GIXRF measurement. If this is the case, the quantification result can be considered as validated in a self-consistent approach.
Hereafter, it will be assessed whether quantification of elemental mass depositions using TXRF and a self-consistent validation of the results can be achieved independently of the amount of airborne PM collected. ## 2 Quantification by means of TXRF TXRF employs a specific geometry in which the X-ray beam used for the excitation of the XRF signal impinges on the sample at a very shallow angle below the critical angle for total external reflection 26. Hence, TXRF demands collimated and monochromatic excitation conditions but offers advantages such as the illumination of large sample areas and a large solid angle of detection. Further benefits offered by TXRF are twofold. On the one hand, the penetration of the incident beam into the substrate is reduced to an evanescent wave, and any background signal, XRF or scattering, originating from it is suppressed. On the other hand, the reflection of the incident X-ray beam at the substrate surface leads to the creation of an XSW due to interference between incident and reflected X-rays. Consequently, enhanced excitation conditions for XRF originating from the particulate matter deposited on top of the substrate can be achieved. A prerequisite to profit from total external reflection is to use substrates which are flat on a macroscopic scale and characterized on a microscopic scale by a roughness smaller than the wavelength of the incident X-rays, for the best possible reflectivity. In the field of environmental sciences, this necessity for TXRF measurements prohibits the analysis of foils or membrane filters, as used in different instruments for the collection of airborne PM. In this case, the sampled material first needs to be transferred onto an adequate substrate, which is usually realized via different digestion techniques 27, 28, 29, 25 or slurry techniques 30, 31.
However, this approach deprives TXRF-based analytics of its main advantages, since such a time-consuming preparation step may involve sample digestion, material loss or contamination issues. A more suitable approach is to use the TXRF substrates directly, as can be done in cascade impactors, to collect the particulate matter, either as they are 32, 33, 23 or after applying a coating (to prevent bounce-off effects, for example) and removing it after the sample collection 34, 28, 22, 35. This additional a posteriori treatment prior to TXRF investigation 22 is detrimental if other, complementary analytical techniques are to be used as well. Regardless of the sample preparation, knowledge of the XSW is of importance for quantitative measurements, independently of whether external standardization, internal standardization or reference-free quantification schemes are applied. When using internal standardization, the XSW created needs to be identical throughout the illuminated sample area, while in the case of external standardization the same XSW needs to be created in a reproducible manner for all samples investigated. External standardization means that for each element of interest in an experimental campaign a calibration curve is established by means of a set of reference samples with different mass depositions of the selected elements. This approach requires adequate reference samples which are sufficiently representative of the samples investigated 36, 37. For samples collected during outdoor campaigns, the criteria include elemental composition (sample matrix), mass deposition (concentration), particle size range and morphology as well as deposition pattern. Hence, the production and selection of adequate calibration samples for outdoor sampling campaigns requires additional a priori information.
The characterization of actual samples from the measurement campaign via complementary techniques, in order to use these samples as a kind of standard, is often impeded by the sensitivity of these techniques, i.e., by the amount of sample they require. For this reason, approaches based on internal standardization were developed as an alternative that can be used with digested samples or in conjunction with substrates prepared for sampling 20, 38. Internal standardization means that a known quantity of a reference element is added to each sample to be analyzed before the measurement, while assuming that the excitation and detection conditions at the position where the standard is deposited are representative for the whole sample. However, the standard which is added needs to fulfill the requirements of being non-toxic, not being ubiquitous and having XRF energies which do not overlap with the XRF lines of the elements contained in the sample. The goal of both approaches, external and internal standardization, is to extract combined information on instrumental factors in order to allow quantifying the elemental content of the material deposited on top of the substrates. In the reference-free XRF quantification scheme 17, information on the different experimental and fundamental parameters is used to calculate the mass deposition of different elements from the measured count rate of the corresponding fluorescence line. This approach requires the use of (radiometrically) calibrated instrumentation, e.g. apertures for an accurate knowledge of the solid angle of detection as well as the efficiency of diodes and of the silicon drift detector (SDD) for the incident photon flux and the detected XRF intensity, and the knowledge of atomic fundamental parameters (FPs), i.e. photoionization cross-sections and fluorescence yields, which are element-dependent and in large part also energy-dependent.
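In its simplest form, the reference-free scheme described above relates the measured count rate to the mass deposition through the instrumental and fundamental parameters. The following is a schematic sketch under strong simplifying assumptions (thin, weakly absorbing deposit, XSW modulation neglected); all symbols and values are illustrative, not actual calibration data:

```python
import math

def mass_deposition(count_rate, flux, solid_angle_sr, det_efficiency,
                    sigma_photo_cm2_g, fluo_yield, line_fraction):
    """Schematic reference-free estimate of an elemental mass deposition
    (g/cm^2) from an XRF count rate, for a thin deposit where
    self-absorption and XSW modulation are neglected."""
    # expected count rate per unit mass deposition ("sensitivity")
    sensitivity = (flux * (solid_angle_sr / (4.0 * math.pi))
                   * det_efficiency * sigma_photo_cm2_g
                   * fluo_yield * line_fraction)
    return count_rate / sensitivity

# illustrative numbers only (not instrument calibration data)
m = mass_deposition(count_rate=1.0e3, flux=1.0e10, solid_angle_sr=0.1,
                    det_efficiency=0.9, sigma_photo_cm2_g=1.0e3,
                    fluo_yield=0.05, line_fraction=1.0)
assert 0 < m < 1e-5
```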
## 3 Samples The size-fractionated sampling of PM was realized by means of a 9-stage extension of the May-type cascade impactor 39. The time resolution is determined by the collection time, which should be kept within a range yielding a collected mass suitable for TXRF analysis, in order to best profit from the sensitivity offered by this technique and to avoid biasing the quantification, as discussed in this work. The aerodynamic cut-off diameters of the stages 1 to 9 are respectively 17.9 $\mu$m, 8.9 $\mu$m, 4.5 $\mu$m, 2.25 $\mu$m, 1.13 $\mu$m, 0.57 $\mu$m, 0.29 $\mu$m, 0.18 $\mu$m and 0.07 $\mu$m at a constant flow rate of 16.7 L$/$min. The cut-off diameter is defined as the dimension of the PM which is collected with 50$\%$ efficiency, smaller particles escaping with a higher probability. A well-known and constant airflow is required during the collection of airborne PM. The first two stages with the largest particle sizes are generally disregarded for X-ray analysis, and for the 7 further stages $20\times 20$ mm2 Si wafers are used as substrates. As Si wafers have very low background contamination and very low surface roughness, they are ideally suited for TXRF and GIXRF experiments. Measurements with a good signal-to-background ratio can be expected, even though other substrates might be more suitable for the collection of PM. Indeed, the collection efficiency depends not only on the design of the airflow, where losses of particles should be minimized 40, but also on the substrate surface. In order to preserve the capabilities offered by TXRF, no pre-treatment of the Si wafers was applied. Sample sets selected for the present study were collected during two campaigns in two cities: Budapest, Hungary, 24-31 May 2018, and Cassino, Central Italy, 20-27 September 2018, with sampling durations ranging from 20 min to 5 h. In total, 19 Si substrates collected on the 3 stages with the finest particle distributions were used in this survey.
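Since the impactor operates at a constant flow rate of 16.7 L/min, the sampled air volume, and hence the airborne concentration corresponding to a quantified mass deposition, follows directly from the collection time. A minimal sketch (function names are ours):

```python
def sampled_volume_m3(flow_l_per_min, duration_min):
    # 1 m^3 = 1000 L
    return flow_l_per_min * duration_min / 1000.0

def airborne_concentration_ng_m3(mass_ng, flow_l_per_min, duration_min):
    # mass deposition quantified on a stage divided by the sampled air volume
    return mass_ng / sampled_volume_m3(flow_l_per_min, duration_min)

# e.g. a 5 h collection samples about 5 m^3 of air
v = sampled_volume_m3(16.7, 5 * 60)
assert abs(v - 5.01) < 1e-9
# a hypothetical 10 ng deposit collected over 20 min corresponds to ~30 ng/m^3
c = airborne_concentration_ng_m3(10.0, 16.7, 20)
assert 29 < c < 31
```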
Among these 19 samples, 6 samples will be discussed in more detail; further information on them is provided in Table 1. These 6 samples are representative of the range of deposited particulate mass quantified for all 19 samples. A summary of the quantification results for all 19 samples, for the different excitation regimes and elements, is provided in Fig. S1. The deposition area from the May-type cascade impactor corresponds to a stripe of 20 mm length and, depending on the stage, of 0.1 to 1 mm width (fine to coarser PM). The width is determined by the width of the slits used as nozzles for the different stages. This type of deposition pattern presents the advantage of being highly suitable for investigation by means of TXRF and GIXRF once the stripe is aligned along the incidence direction. Thus, the May-type cascade impactor is ideal to demonstrate the capability offered by the combination of cascade impactors and TXRF, respectively GIXRF, analysis to provide element-, size- and time-resolved information on the collected PM.

Table 1: Description of the 6 selected Si substrates with aerosol particles collected in Budapest and at Cassino which are discussed in more detail in Figs. 2 and 4. Samples are listed in the order of deposited particulate mass and cover the full range of elemental mass depositions quantified on the total of 19 samples investigated.

Sample | Duration | Stage | Diameter range / nm
---|---|---|---
A | 30 min | 7 | 300 - 600
B | 20 min | 9 | 70 - 180
C | 20 min | 8 | 180 - 300
D | 1 h | 7 | 300 - 600
E | 5 h | 9 | 70 - 180
F | 5 h | 7 | 300 - 600

## 4 Experimental

The reference-free GIXRF measurements 17, 41, 42 for quantification of elemental mass depositions were realized at the plane grating monochromator (PGM) 43 beamline in the PTB laboratory at the BESSY II electron storage ring. The experiments were conducted at an incident photon energy of 1620 eV, which is below the Si K-edge, in order to suppress the contribution of the Si K XRF lines.
An ultrahigh-vacuum chamber equipped with a 9-axis manipulator was used 44. The use of an ultrahigh vacuum may induce loss of volatile material, for example organic compounds, but this was not further considered in this work, where the focus lies on the evaluation of the analytical techniques. The instrument allows for precisely tuning the incident angle $\theta$ between the incidence direction of the synchrotron radiation and the sample surface (Fig. 1). The fluorescence radiation emitted from the sample was detected by means of a silicon drift detector (SDD) calibrated in terms of response function 45 and detection efficiency 46, which is placed in the polarization plane and perpendicular to the propagation direction of the linearly polarized incident X-ray beam in order to minimize scattered radiation. The SDD allows for an energy-dispersive detection of the XRF emitted from the sample such that the information from different elements can be discriminated and processed in parallel during quantification. The incident photon flux is determined by using a calibrated photodiode. The spectra were deconvoluted using the known detector response functions for the relevant fluorescence lines and background contributions, which were mainly resonant Raman scattering (RRS) from the Si K shell 47 and, to a lesser extent, bremsstrahlung from L-shell electrons of the Si substrate. The resulting count rate $I$ for each fluorescence line of interest is normalized with respect to the sine of the incident angle $\theta$, the incident photon flux $I_{0}$, the effective solid angle of detection $\frac{\Omega(\theta)}{4\pi}$ and the energy-dependent detection efficiency $\epsilon(E)$ of the SDD for the respective fluorescence photons in order to derive the emitted fluorescence intensity. It has to be emphasized that the calculation of the incident-angle-dependent solid angle of detection requires an accurate knowledge of the detection geometry but also of the incident beam profile.
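The normalization chain can be sketched as follows; the function and argument names are illustrative, and the placement of the $\sin\theta$ factor follows its appearance in the numerator of Eq. 2:

```python
import math

def emitted_fluorescence(count_rate, theta_deg, flux, omega_sr, efficiency):
    """Normalize a deconvolved XRF count rate I with respect to sin(theta),
    the incident photon flux I0, the effective solid angle of detection
    Omega/(4*pi) and the detection efficiency eps(E), as described in the
    text (hypothetical helper; argument names are illustrative)."""
    theta = math.radians(theta_deg)
    return count_rate * math.sin(theta) / (flux * omega_sr / (4.0 * math.pi) * efficiency)
```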
Figure 1: Illustration of the experimental setup with the Si wafer and the collected airborne PM on top of it, a diode for measuring the reflectivity and a calibrated SDD for recording the XRF emitted at different incidence angles $\theta$ (top panel). Typical XRF spectra recorded for an incidence angle beneath and above the critical angle of total external reflection illustrate the lower background contributions from the Si wafer at the smaller incidence angle (bottom panels).

From the absolute XRF intensity the elemental mass deposition $m_{\text{A}}$, defined as mass per unit area, can be extracted for each position of the GIXRF measurements, where the incident angle $\theta$ was varied in variable steps from 0∘ to 10∘ (Fig. 2). Hence, the excitation conditions on each sample were gradually modified from total reflection conditions, where an XSW needs to be considered, to shallow-incidence-angle conditions, where no XSW is present. Nevertheless, shallow incidence angles still provide efficient excitation conditions by dispersing the incident X-ray radiation over a larger sample area and increasing the incidence path length through the PM collected along the stripe-like deposition pattern. The calculation of the XSW requires the knowledge of the optical properties of the substrate, including possible surface oxidation, for the incident photon energy used during the experiment. However, even if the incident-photon-energy-dependent optical properties are measured beforehand from a blank Si substrate so as not to rely on tabulated data, the presence of the PM on top of the Si wafer will impact the reflectivity to a certain extent, as the contrast in electronic density at the interface separating the bulk Si from the vacuum or PM changes 48. Therefore, the reflectivity $R(\theta)$ for each wafer was measured by means of a photodiode positioned in a $\theta$ - $2\theta$ configuration during the GIXRF measurement.
This approach is novel and allows for a reliable direct calculation of the incident-angle-dependent XSW for each sample under the actual measurement conditions. Any deviation from optimal alignment of the sample, increased surface roughness of the substrate or impact of the collected airborne PM on the XSW will be directly accounted for in this experimental approach.

Figure 2: GIXRF data for the 6 selected different samples (labelled A to F and described in Table 1). The changes in the angular intensity profiles for each element indicate differences in the excitation of the XRF signal. The typical particle-like signature of the main elements detected in the GIXRF measurement gradually vanishes, which is a clear indicator that the XSW on the top of the substrates differs significantly between the samples. It can be noted as well that for sample A the angular evolution of O contains both particle- and layer-like signatures. The latter contribution arises from the surface oxide of the Si wafers used, but its relative contribution vanishes with increasing mass of collected airborne PM.

## 5 Reference-free GIXRF Quantification

In a TXRF measurement the XRF intensity is usually recorded at a single incidence angle corresponding to $\frac{1}{\sqrt{2}}$ ($\approx 70\%$) of the critical angle for total external reflection $\theta_{c}$, which depends on the incident photon energy and the substrate density 26 and which was about 1∘ for the Si wafers used. The relative intensity distribution within the XSW is given by 26, $XSW(\theta,z)=1+R(\theta)+2\sqrt{R(\theta)}\,\cos\left(\arccos(2\frac{\theta^{2}}{\theta_{c}^{2}}-1)-4\pi\sin\theta\,\frac{z\,E_{0}}{hc}\right)$ (1) with $E_{0}$ the energy of the incident photons, $z$ the height above the reflecting substrate and $R(\theta)$ the measured reflectivity.
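A minimal sketch of Eq. 1, assuming $\theta$ and $\theta_{c}$ in radians, $z$ in metres and $hc$ in eV·m; the $\arccos$ term restricts the expression to $\theta\leq\theta_{c}$:

```python
import math

HC_EV_M = 1.23984193e-6  # h*c in eV·m

def xsw(theta, theta_c, z, e0_ev, refl):
    """Relative XSW intensity of Eq. 1 at height z (m) above the substrate,
    for incidence angle theta <= theta_c (radians, so that the arccos
    argument stays in [-1, 1]) and measured reflectivity R(theta)."""
    phase = (math.acos(2.0 * theta**2 / theta_c**2 - 1.0)
             - 4.0 * math.pi * math.sin(theta) * z * e0_ev / HC_EV_M)
    return 1.0 + refl + 2.0 * math.sqrt(refl) * math.cos(phase)
```

At $\theta=\theta_{c}/\sqrt{2}$ and $z=0$ the phase is $\pi/2$, so the oscillating term vanishes and $XSW=1+R$; at $\theta=\theta_{c}$ and $z=0$ the expression reaches $(1+\sqrt{R})^{2}$.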
Since the projection of the incident beam has to stay below the sample dimension to discard scattering contributions to the signal, the reflectivity could only be accurately measured for incidence angles above 0.6∘. For smaller angles a geometrical correction needs to be considered, with the consequence of a larger error in the calculation and hence in the quantification result. As already noted, the use of experimental reflectivity data allows for more reliable calculations of the XSW without requiring assumptions on the extent to which the collected airborne PM impacts the XSW created on top of the substrate. In Fig. 3 it is shown that the reflectivity at a typical angle used for TXRF measurements drops significantly with increasing mass deposition of airborne PM. This observation means that the XSW is significantly different for each sample and different from the case of a blank substrate (Fig. S2).

Figure 3: The reflectivity from the Si substrate for the same samples as displayed in Figs. 2 and 4 indicates the growing impact of the attenuation of X-rays within the collected PM, resulting in significant differences in the $\overline{XSW}(\theta)$ between the different samples. The vertical bar in the left panel indicates the position typically selected for a TXRF measurement for a Si substrate and the incident photon energy used. This position was used for the calculation of $\overline{XSW}(\theta)$ (Fig. S3).

In the following, the mean intensity of the XSW, labeled $\overline{XSW}(\theta)$, over the direction $z$ vertical to the substrate surface (Fig. 1) is considered. The assumptions of a laterally and vertically homogeneous chemical composition of the collected PM and of PM dimensions extending over several periods $\frac{hc}{2\,E_{0}\,\sin\theta}$ of the XSW are made thereby.
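The averaging over $z$ can be sketched numerically; Eq. 1 is restated inside the loop so the snippet is self-contained, and sampling an integer number of XSW periods is an assumption:

```python
import math

HC_EV_M = 1.23984193e-6  # h*c in eV·m

def mean_xsw(theta, theta_c, e0_ev, refl, n_periods=5, n_samples=10000):
    """Numerically average Eq. 1 over z across an integer number of XSW
    periods hc/(2 E0 sin(theta)), sketching the mean intensity
    \\overline{XSW}(theta) used in the text (theta <= theta_c, radians)."""
    period = HC_EV_M / (2.0 * e0_ev * math.sin(theta))
    zmax = n_periods * period
    total = 0.0
    for i in range(n_samples):
        z = (i + 0.5) * zmax / n_samples  # midpoint sampling in z
        phase = (math.acos(2.0 * theta**2 / theta_c**2 - 1.0)
                 - 4.0 * math.pi * math.sin(theta) * z * e0_ev / HC_EV_M)
        total += 1.0 + refl + 2.0 * math.sqrt(refl) * math.cos(phase)
    return total / n_samples
```

Over whole periods the oscillating term averages out, so under these assumptions $\overline{XSW}(\theta)$ tends to $1+R(\theta)$.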
The averaging of the XSW by integration is further backed up by the fact that different particle sizes and compositions are intermixed on each stage and that the deposition pattern is homogeneous along the direction of the incident radiation (Fig. S3). A more intricate calculation would require knowledge of the particle size and relative particle size distribution 49, as well as of the surface coverage 48. Under TXRF conditions, the mass deposition $m_{\text{A},k}$ of element $k$ for each incidence angle $\theta$ can then be determined from the respective measured XRF count rate 17 $m_{\text{A},k}=\frac{-1}{\mu_{eff}(E_{0},E_{k})}\ln\left({1-\frac{I_{k}(\theta)\sin\theta\,\mu_{eff}(E_{0},E_{k})}{\frac{\Omega}{4\pi}\,I_{0}\,\overline{XSW}(\theta)\,\omega_{k}\,\tau_{k}(E_{0})\,\epsilon(E_{k})}}\right)$ (2) where $\omega_{k}$ corresponds to the fluorescence yield and $\tau_{k}(E_{0})$ to the photoionization cross-section of the element (index $k$) being quantified. The values of the atomic fundamental parameters can be found in literature databases 50, or selected parameters are determined in dedicated experiments, as was done for the fluorescence yields of C 51 and O 52.
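Eq. 2 translates directly into code (a sketch with illustrative units: with $\mu_{eff}$ and $\tau_{k}$ in cm²/g the result is in g/cm²):

```python
import math

def mass_deposition(i_k, theta, mu_eff, omega_sr, i0, mean_xsw, omega_k, tau_k, eps_k):
    """Elemental mass deposition m_A,k per Eq. 2: the measured count rate I_k
    is scaled by sin(theta), mu_eff and the instrumental factors, and the
    logarithm inverts the self-absorption saturation (theta in radians)."""
    ratio = (i_k * math.sin(theta) * mu_eff) / (
        (omega_sr / (4.0 * math.pi)) * i0 * mean_xsw * omega_k * tau_k * eps_k)
    return -math.log(1.0 - ratio) / mu_eff
```

For small arguments the logarithm linearizes and $\mu_{eff}$ cancels, which is the first-order Taylor expansion argument used in the text to connect Eq. 2 to a standard XRF quantification.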
The factor $\mu_{eff}(E_{0},E_{k})$ accounts for the effective absorption cross-section of incident and emitted X-ray photons, labelled $\mu_{in}(E_{0})$ and $\mu_{out}(E_{k})$ respectively, within the PM investigated $\mu_{eff}(E_{0},E_{k})=\sum_{j}c_{j}\left(\frac{\mu_{in,j}(E_{0})}{\sin\theta}+\frac{\mu_{out,j}(E_{k})}{\sin\left(\frac{\pi}{2}-\theta\right)}\right)$ (3) and hence requires knowledge of the mass depositions of the different elements present in order to correctly take into account the relative contributions via the factor $c_{k}=\frac{\overline{m_{\text{A},k}}}{\sum_{j}\overline{m_{\text{A},j}}}$, with $\overline{m_{\text{A},k}}$ being the mean quantified mass deposition at incidence angles above the critical angle for total external reflection (more precisely from 6∘ to 10∘) where no XSW is present ($\overline{XSW}(\theta)=1$ for $\theta>3\,\theta_{c}$). In this angular regime it can also be shown by means of a first-order Taylor expansion of Eq. 2 that the quantification result corresponds to the one from a standard XRF quantification as used in Ref. 53. A consideration which is usually made at larger incidence angles during the quantification is the correction for absorption of X-rays on the incidence and emission paths $M_{k}(E_{0},E_{k})=\frac{\sum_{j}\overline{m_{\text{A},j}}\left(\frac{\mu_{in,j}(E_{0})}{\sin\theta}+\frac{\mu_{out,j}(E_{k})}{\sin\left(\frac{\pi}{2}-\theta\right)}\right)}{1-\exp\left({-\sum_{j}\overline{m_{\text{A},j}}\left(\frac{\mu_{in,j}(E_{0})}{\sin\theta}+\frac{\mu_{out,j}(E_{k})}{\sin\left(\frac{\pi}{2}-\theta\right)}\right)}\right)}$ (4) It was found that for incidence angles in the range from 6$^{\circ}$ to 10$^{\circ}$ this factor accounts for at most a few percent (less than 5$\%$) for most of the samples. Only for samples with very high mass depositions was a relative correction of 25$\%$ to 30$\%$ introduced in this iterative correction scheme. For a most accurate correction factor and quantification, complete knowledge of the matrix composition is required.
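Eq. 4 can be sketched as follows; the equal-length sequences of per-element mean mass depositions and attenuation coefficients are hypothetical inputs:

```python
import math

def absorption_correction(theta, m_a, mu_in, mu_out):
    """Absorption correction M_k of Eq. 4 for one fluorescence line, from
    per-element mean mass depositions m_a and mass attenuation coefficients
    mu_in(E0), mu_out(E_k) (equal-length sequences; theta in radians)."""
    s = sum(m * (mi / math.sin(theta) + mo / math.sin(math.pi / 2.0 - theta))
            for m, mi, mo in zip(m_a, mu_in, mu_out))
    return s / (1.0 - math.exp(-s))  # tends to 1 in the thin-sample limit
```

Since $s/(1-e^{-s})\to 1$ as $s\to 0$, the correction stays at the few-percent level for thin deposits, consistent with the values quoted above.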
Furthermore, secondary fluorescence due to photoelectrons or fluorescence photons is neglected. This introduces only a minor error for low mass depositions but, depending on the matrix composition and the incident photon energy, should not be disregarded for high mass depositions, where errors of up to 20$\%$-30$\%$ can be introduced 54. Finally, the GIXRF measurement allows comparing the quantification results for the elemental mass deposition $m_{\text{A},k}$ between TXRF conditions and XRF conditions under shallow incidence angles. The uncertainty of the quantification depends on the uncertainties of the incident flux ($1\%$), the XSW factor ($5\%$ in the angular range where it needs to be taken into account), the atomic fundamental parameters (fluorescence yield, $10\%$ for light elements, and photoionization cross-section, $7.5\%$), the detector efficiency and spectral deconvolution ($2.5\%$), the counting statistics and the solid angle of detection (about $15\%$ for the smallest incidence angles to about $4\%$ for the largest incidence angles used for quantification) 17. Thus, the systematic errors, which disregard any sample effects, usually amount to about $20\%$ in the TXRF regime and to about $12\%$ or better for the largest incidence angles used in the GIXRF measurements. Note that the mass deposition in terms of mass (or likewise number of atoms) of each element per unit area is quantified. A conversion to mass, which is a more commonly used metric in the aerosol community, can be straightforwardly realized if the area on which the airborne PM is collected and its lateral distribution are known.

## 6 Results & Discussion

Given the uniform distribution of the PM, quantification results by means of Eq. 2 can be expected to be constant for each incidence angle covered throughout a GIXRF measurement.
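A quadrature combination roughly reproduces the quoted totals; treating the contributions as independent is an assumption on our part, since the text quotes only the combined values:

```python
import math

def combined_uncertainty(contributions_pct):
    """Combine independent relative uncertainties (in %) in quadrature
    (a common convention; assumed here, not stated in the text)."""
    return math.sqrt(sum(u * u for u in contributions_pct))

# TXRF regime: flux, XSW factor, fluorescence yield, photoionization
# cross-section, detector/deconvolution, solid angle at small angles
u_txrf = combined_uncertainty([1.0, 5.0, 10.0, 7.5, 2.5, 15.0])
# Largest incidence angles: no XSW factor, smaller solid-angle uncertainty
u_xrf = combined_uncertainty([1.0, 10.0, 7.5, 2.5, 4.0])
```

This yields roughly 20 % and 13 %, close to the stated totals of about 20 % and about 12 % or better.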
Indeed, the different excitation conditions realized at each incidence angle should only affect the lowest achievable limit of detection, due to a changing and even vanishing XSW and an increasing penetration depth into the bulk volume of the substrate, but should not impact the quantification result. However, a consistent quantification result, where comparable elemental mass depositions are quantified for all incidence angles used, is not achieved for all samples. More explicitly, only a part of the samples shows good agreement between TXRF conditions, $\theta\approx 0.7^{\circ}$, and XRF conditions, $\theta>3.0^{\circ}$ (Fig. 4). For the lowest mass depositions used, the quantification results are independent of the incidence angle and agree reasonably well with each other (Fig. 4, upper panels). In this situation, the quantification of the mass deposition does not depend on the excitation conditions used, such that the GIXRF measurement is useful to validate the results from the TXRF measurement, which is usually performed at a single position. This observation is congruent with the proven reliability of TXRF for quantifying trace-level contamination, as it is routinely done for semiconductor applications. Nonetheless, the GIXRF data provides more robust quantification, as it allows inherently validating results obtained by means of TXRF. A specificity can be observed for samples A and B, where an imperfect deconvolution of the XRF spectra recorded at larger incidence angles affects the quantification results because of the underlying Si RRS, which is not perfectly described by the model used. In particular for Al, whose main characteristic line is close to the high-energy cut-off of the Si RRS at 1520 eV, but partially also for Mg, this also results in a larger scatter of the quantification results at larger incidence angles (Fig. 1).
This issue perfectly illustrates the main benefit of TXRF for low mass depositions, since it allows suppressing background contributions from the substrate (Fig. 1). For higher mass loadings, a discrepancy between the quantification results appears (Fig. 4, lower panels) in the sense that mass depositions are underestimated under TXRF conditions. For these samples, the quantification results are not consistent throughout the monitored angular range, such that the GIXRF measurement falsifies the TXRF results. These samples indicate that it is necessary to be aware of the range of validity of TXRF quantification results and to validate findings on unknown samples by a GIXRF measurement, where the excitation conditions are varied in a controlled manner by varying the incidence angle. While the comparison of quantified mass depositions on each sample allows assessing the robustness of the quantification results obtained under TXRF conditions, GIXRF also extends the dynamic range of mass depositions which can be quantified. Indeed, the variation from shallow to larger incidence angles still provides a considerable variation of the path length of the incident photons through the airborne PM, such that absorption effects, which may affect the validity of quantification results, are probed even when the premise of an XSW that is unperturbed, or only slightly perturbed, by the presence of airborne PM on the substrate no longer holds. The influence of the airborne PM on the XSW is also qualitatively visible in the GIXRF measurement: for the samples with the lowest mass deposition an enhanced XRF rate was detected in the vicinity of the critical angle of total external reflection compared to larger incidence angles, while for the highest mass the opposite was the case.
Hence, in the latter case no XSW was created, and in this way the GIXRF measurement by itself already indicates that quantification under TXRF conditions is compromised, despite the fact that the measured reflectivity is used in the quantification (Fig. 3). If the way in which increasing mass deposition affects the contrast in optical density at the interface defined by the surface of the substrate were not taken into account, the discrepancy between the quantification results for the different incidence angles used would be even larger (Eq. 2). This insight emphasizes the benefit of monitoring the reflectivity from the sample in parallel with a TXRF or GIXRF measurement. The dependence of the XSW on the surface coverage must be taken into account when quantifying the mass deposition, a statement which is valid not only when using the reference-free quantification approach but also when applying external standards. For the extreme case of the two samples with the highest PM loads (panels E and F in Figs. 2 and 4), the variation of the quantified mass deposition with the incidence angle indicates that an accurate quantification is tedious, since here the attenuation of the incident radiation within the collected PM would need to be considered. This aspect introduces considerable uncertainties in the final result. Hence, a GIXRF measurement allows discarding these types of samples from further use in analytical campaigns.

Figure 4: Quantified mass deposition for the different incidence angles covered when varying the excitation conditions during the GIXRF measurement from the TXRF regime to the XRF regime under shallow incidence angles. The vertical bar indicates the position typically selected for a TXRF measurement for a Si substrate and the incident photon energy used, while the horizontal bars indicate the mass deposition quantified at the largest incidence angles for each element.
For low mass depositions (upper 3 panels) a satisfactory agreement can be observed, but with increasing mass deposition a growing discrepancy appears for all the elements. This indicates that not all physical effects due to attenuation of X-rays in the collected airborne PM are accounted for in the quantification scheme. Under shallow incidence angles attenuation is less important and therefore has a lesser impact on the quantification scheme, as can be seen from the results approaching a constant value. The relationship between the mass deposition and the XSW is also noteworthy (Eq. 2) with regard to the need for using representative specimens when applying external standards for quantification purposes. In the case of internal standards, reliable results are only obtained if the collected mass deposition of the airborne PM is within the range of mass depositions covered by the standard, under the premise that a homogeneous intermixing is realized. In other words, the dynamic range within which the calibration is valid needs to be considered. Finally, in the reference-based approaches the XSW needs to be comparable between the calibration material and the investigated sample material, be it locally when using an internal standard or between samples when using an external standard, in order to avoid a calibration bias. An upper limit for reliable TXRF quantification is discussed in the literature in terms of a critical thickness 55 and a saturation effect 56. A further reason for the deviation of the TXRF quantification results compared to the results obtained at the largest incidence angles in the GIXRF scan is that the full volume of the collected PM is not illuminated homogeneously in its depth direction because of the attenuation of the incident and reflected X-ray radiation. For increasing incidence angles and high surface coverage the effective path length is reduced as $\frac{1}{\sin\theta}$, such that the X-ray attenuation within the PM volume becomes less pronounced.
For samples with a high surface density of airborne PM, this argument becomes even more crucial under conditions where an XSW is expected, since then the effective path lengths of the incident and reflected X-rays need to be considered. This insight directly impacts the reliability of quantification by means of TXRF, and it becomes necessary to indicate an upper limit for the range of validity of TXRF quantification results. A challenge in air quality monitoring campaigns is that not all stages from a sampling interval will be affected in the same way, as important variations between the different stages can be expected due to the inhomogeneous particle size distribution of airborne PM. A situation in which all stages are affected is the result of exceedingly long collection times or high pollution levels. Depending on the amount of collected airborne PM, the agreement, or discrepancy, in the quantification results obtained for the different excitation conditions achieved in a GIXRF measurement becomes even more obvious when considering the distribution of the quantified mass deposition for each element of a sample (Fig. 5). The results for the further samples are included in Figs. S4 and S5. In Fig. 5 the relative range between the lower and upper 5${}^{\text{th}}$ percentile is indicated. For sample C the results for each element show a very good consistency with each other, mostly within a range of several percent only, as can also be recognized from the tabulated values. This agreement can be considered as acceptable for air quality monitoring campaigns. For the sample with the highest mass depositions (sample F), the relative difference between the lower and upper 5${}^{\text{th}}$ percentile relative to the mean value amounts to about a factor of 3 to 4, depending on the element considered.
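The percentile spread reported in Fig. 5 can be computed as follows; the exclusive-quantile convention of Python's `statistics` module is an assumption:

```python
import statistics

def relative_percentile_range(values):
    """Spread of quantified mass depositions as in Fig. 5: difference between
    the 5th and 95th percentile relative to the 5th percentile (the exact
    percentile convention is an assumption)."""
    cuts = statistics.quantiles(values, n=20)  # cut points at 5 %, 10 %, ..., 95 %
    p05, p95 = cuts[0], cuts[-1]
    return (p95 - p05) / p05
```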
Given that the measurements were not equidistant in incidence angle, the results at the lower incidence angles have a higher impact on the percentile intervals, and the discrepancy between the quantification results can be even larger than indicated. This aspect is highlighted by the dashed circles in Fig. 5, which mark the quantification results obtained at the largest incidence angles and clearly show the magnitude of the deviation from the quantification performed in the TXRF regime. This discrepancy makes the need for self-consistent validation by means of GIXRF measurements evident: the more exhaustive data obtained by quantifying the mass deposition for different incidence angles and excitation conditions immediately reveals any discrepancy in the quantification under TXRF conditions for the sample considered. In view of the demands on quantitative techniques for regulatory purposes, this critical assessment of the validity of the results is mandatory.
Element | $m_{\text{TXRF}}$ / $\frac{\text{g}}{\text{cm}^{2}}$ | $\sigma_{\text{TXRF}}$ / $\frac{\text{g}}{\text{cm}^{2}}$ | $m_{\text{GIXRF}}$ / $\frac{\text{g}}{\text{cm}^{2}}$ | $\sigma_{\text{GIXRF}}$ / $\frac{\text{g}}{\text{cm}^{2}}$
---|---|---|---|---
C | 0.7 E-5 | 0.5 E-5 | 2.6 E-5 | 1.0 E-5
N | 0.3 E-6 | 0.1 E-6 | 1.0 E-6 | 0.4 E-6
O | 2.4 E-6 | 1.0 E-6 | 8.0 E-6 | 3.2 E-6
Na | 0.9 E-7 | 0.5 E-7 | 3.6 E-7 | 1.9 E-7
Mg | 2.3 E-8 | 1.2 E-9 | 9.2 E-8 | 4.8 E-8
Al | 0.3 E-7 | 0.2 E-7 | 1.1 E-7 | 0.6 E-7

Element | $m_{\text{TXRF}}$ / $\frac{\text{g}}{\text{cm}^{2}}$ | $\sigma_{\text{TXRF}}$ / $\frac{\text{g}}{\text{cm}^{2}}$ | $m_{\text{GIXRF}}$ / $\frac{\text{g}}{\text{cm}^{2}}$ | $\sigma_{\text{GIXRF}}$ / $\frac{\text{g}}{\text{cm}^{2}}$
---|---|---|---|---
C | 1.9 E-6 | 0.2 E-6 | 2.3 E-6 | 0.2 E-6
N | 1.5 E-6 | 0.1 E-6 | 1.8 E-6 | 0.2 E-6
O | 4.6 E-6 | 0.4 E-6 | 4.5 E-6 | 0.4 E-6
Na | 1.4 E-8 | 0.2 E-8 | 1.7 E-8 | 0.2 E-8
Mg | 1.6 E-9 | 0.2 E-9 | 2.0 E-9 | 0.3 E-9
Al | 0.8 E-8 | 0.1 E-8 | 1.1 E-8 | 0.2 E-8

Figure 5: Histogram of the GIXRF quantification results for samples C and F. Indicated above the vertical bars is the difference between the $5\%$ and $95\%$ percentile relative to the $5\%$ value. For sample C an agreement throughout the full angular range, i.e., with and without XSW, is obtained, while for sample F TXRF underestimates the elemental mass depositions. This becomes evident by a comparison to the quantification results obtained in the XRF regime under shallow incidence angles (dashed circles), where no XSW needs to be considered ($\theta>3\,\theta_{c}$). The tables include the quantification results under TXRF conditions and under XRF conditions using shallow incidence angles. The $\sigma$ value does not represent the systematic quantification error, but the standard deviation of the results obtained in the TXRF regime (incidence angles from 0.6∘ to 0.8∘) and under XRF conditions using shallow incidence angles (incidence angles from 6∘ to 10∘).
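As a quick consistency check, the TXRF/GIXRF pairs from the first table can be compared directly (values copied from the table; the association of this data set with the high-load sample is an assumption):

```python
# m_TXRF and m_GIXRF (g/cm^2) pairs copied from the first table above.
TABLE = {
    "C":  (0.7e-5, 2.6e-5),
    "N":  (0.3e-6, 1.0e-6),
    "O":  (2.4e-6, 8.0e-6),
    "Na": (0.9e-7, 3.6e-7),
    "Mg": (2.3e-8, 9.2e-8),
    "Al": (0.3e-7, 1.1e-7),
}
# Ratio of the GIXRF to the TXRF quantification result per element.
ratios = {el: m_gixrf / m_txrf for el, (m_txrf, m_gixrf) in TABLE.items()}
```

Every ratio lies between roughly 3.3 and 4, matching the stated factor of 3 to 4.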
A ratio of up to a factor of 4 between the different results can be observed. As a consequence, we propose to expand TXRF-based quantitative analysis of airborne PM to GIXRF. From an experimental point of view this approach allows combining the benefits of TXRF (low limits of detection) with those of GIXRF (higher dynamic range of mass depositions that can be covered, reliable quantification for higher mass depositions), since total reflection excitation conditions are necessarily covered during a GIXRF measurement. By this means, unknown amounts of airborne PM can be handled optimally, since low mass depositions can be quantified in the angular regime below, and high mass depositions in the angular regime above, the critical angle of total external reflection. In both cases the GIXRF measurements allow for robust and reliable quantification with internal validation by assessing whether the quantification results are in reasonable agreement. It can be noted that for this purpose a sparser set of incidence angles can be monitored in order to arrange for more time-efficient measurements, as required for example during field campaigns where high-throughput quantification of airborne PM is aimed at. From an instrumental point of view this strategy is nowadays readily implementable, as more and more successful examples of laboratory GIXRF instruments and even commercially available GIXRF instruments exist. In case only TXRF measurements can be performed, the consequence is that only a restricted range of samples or mass depositions can be investigated. Indeed, for reliable quantification under TXRF conditions the attenuation of X-rays within the collected airborne PM and the differences in the XSW created should remain below a threshold value. This criterion cannot be specified generically, since it depends on the incident photon energy and the absolute elemental composition of the PM.
Furthermore, simple cross-check measurements can be used to identify possible issues with the quantification performed, regardless of whether standards are used or a reference-free quantification scheme is applied. One indicator is the reflectivity from the substrate as compared to a blank substrate of the same type. This cross-check can even be applied when samples are investigated under TXRF conditions only (Fig. 4). Another possibility is to consider the angular intensity profile of the XRF originating from the bulk volume of the substrate and to compare it against that of a blank substrate (Fig. S6). For increasing mass deposition of airborne PM and larger incidence angles, significant attenuation compared to a blank Si substrate can be observed. In both cases the comparison to a blank substrate allows elucidating whether, besides possible sample alignment issues, quantification by TXRF alone is compromised because the amount of airborne PM collected exceeds the range within which TXRF can be validly applied.

## 7 Conclusion & Outlook

It was shown that GIXRF, by means of a controlled variation of the excitation conditions, allows for applying a robust quantification scheme and, hence, for assessing in a self-consistent manner the validity of quantification under TXRF conditions over a wide range of mass depositions. Self-consistent means that throughout the different excitation conditions covered during a GIXRF measurement the quantification results are in agreement with each other: for each incidence angle in the case of low mass depositions, and for incidence angles far above the critical angle of total reflection in the case of higher mass depositions.
Varying pollution levels and particle size distributions make such a verification of the validity of the results necessary, since reliable quantification in the TXRF regime can only be realized for mass depositions low enough that a linear calibration between count rate and mass deposition can be guaranteed. This range depends on the matrix composition, the airborne PM size and the incident photon energy, but cannot be assessed from TXRF measurements alone. Increasing mass depositions, in contrast, result in nonlinear effects in terms of different XSWs created on top of the substrates and pronounced attenuation of the incident and reflected X-rays within the collected airborne PM, such that quantification by means of TXRF alone is compromised. Therefore, criteria or thresholds for reliable quantification by means of TXRF need to be established in the future. This pitfall cannot be readily circumvented, let alone identified, by means of calibration samples or procedures, since both the range of elemental mass depositions and the matrix composition on the substrates are unknown before any measurement in the field. As a consequence, a robust quantification scheme requires GIXRF measurements, i.e., measurements covering two fundamentally different excitation regimes. Only then can the quantified elemental mass depositions be considered as reliable and validated, and the range of validity of quantification results obtained by means of TXRF be assessed for a selected class of samples and experimental conditions. Indeed, it has to be noted that the presented experiment, with its emphasis on light elements, was realized in the soft X-ray regime, but the conclusions drawn can also be applied at higher X-ray energies.
Although advanced instrumentation was applied to enable a physically traceable quantification that does not rely on the use of standards, it should be emphasized that this approach is transferable to laboratory instrumentation [57]. This offers the possibility of transferring the presented approach to instrumentation used for high-throughput measurements in the field or in the laboratory. The additional implementation of a diode to measure the reflectivity would not only allow for a more accurate determination of the XSW but would also provide a rough but straightforward indication of whether the quantity of airborne PM collected presents an issue for the quantification by means of TXRF. ## Acknowledgments Parts of this research were performed within the EMPIR project 19ENV08 AEROMET II. This project has received funding from the EMPIR programme co-financed by the Participating States and from the European Union’s Horizon 2020 research and innovation programme. The support of the European Structural and Investment Funds jointly financed by the European Commission and the Hungarian Government through grant no. VEKOP-2.3.2-16-2016-00011 (on behalf of J. Osán) is also appreciated. ## References * Apte et al. 2015 Apte, J. S.; Marshall, J. D.; Cohen, A. J.; Brauer, M. _Environ. Sci. Technol._ 2015, _49_ , 8057–8066 * Lelieveld et al. 2015 Lelieveld, J.; Evans, J. S.; Fnais, M.; Giannadaki, D.; Pozzer, A. _Nature_ 2015, _525_ , 367–371 * Leni et al. 2020 Leni, Z.; Cassagnes, L. E.; Daellenbach, K. R.; El Haddad, I.; Vlachou, A.; Uzu, G.; Prévôt, A. S. H.; Jaffrezo, J.-L.; Baumlin, N.; Salathe, M.; Baltensperger, U.; Dommen, J.; Geiser, M. _PLOS ONE_ 2020, _15_ , 1–17 * Mellouki et al. 2015 Mellouki, A.; Wallington, T. J.; Chen, J. _Chem. Rev._ 2015, _115_ , 3984–4014 * McNeill 2015 McNeill, V. F. _Environ. Sci. Technol._ 2015, _49_ , 1237–1244 * Zhang et al. 2015 Zhang, R.; Wang, G.; Guo, S.; Zamora, M. L.; Ying, Q.; Lin, Y.; Wang, W.; Hu, M.; Wang, Y. _Chem. 
Rev._ 2015, _115_ , 3803 – 3855 * Prather et al. 2008 Prather, K. A.; Hatch, C. D.; Grassian, V. H. _Annu. Rev. Anal. Chem._ 2008, _1_ , 485–514 * Møller et al. 2008 Møller, P.; Folkmann, J. K.; Forchhammer, L.; Bräuner, E. V.; Danielsen, P. H.; Risom, L.; Loft, S. _Cancer Lett._ 2008, _266_ , 84 – 97 * Stone et al. 2007 Stone, V.; Johnston, H.; Clift, M. J. D. _IEEE Transactions on NanoBioscience_ 2007, _6_ , 331–340 * Valavanidis et al. 2008 Valavanidis, A.; Fiotakis, K.; Vlachogianni, T. _J. Environ. Sci. Health Pt. C_ 2008, _26_ , 339–362 * Brook et al. 2010 Brook, R. D.; Rajagopalan, S.; Pope, C. A.; Brook, J. R.; Bhatnagar, A.; Diez-Roux, A. V.; Holguin, F.; Hong, Y.; Luepker, R. V.; Mittleman, M. A.; Peters, A.; Siscovick, D.; Smith, S. C.; Whitsel, L.; Kaufman, J. D. _Circulation_ 2010, _121_ , 2331–2378 * Rönkkö and Timonen 2019 Rönkkö, T.; Timonen, H. _J Alzheimers Dis._ 2019, _72_ , 15–28 * Daellenbach et al. 2020 Daellenbach, K. R. et al. _Nature_ 2020, _587_ , 414–419 * Bulska and Ruszczyńska 2017 Bulska, E.; Ruszczyńska, A. _Phys. Sci. Rev._ 2017, _2_ , 20178002 * Ault and Axson 2017 Ault, A. P.; Axson, J. L. _Anal. Chem._ 2017, _89_ , 430–452, PMID: 28105816 * Furger et al. 2017 Furger, M.; Minguillón, M. C.; Yadav, V.; Slowik, J. G.; Hüglin, C.; Fröhlich, R.; Petterson, K.; Baltensperger, U.; Prévôt, A. S. H. _Atmos. Meas. Tech._ 2017, _10_ , 2061–2076 * Beckhoff et al. 2007 Beckhoff, B.; Fliegauf, R.; Kolbe, M.; Müller, M.; Weser, J.; Ulm, G. _Anal. Chem._ 2007, _79_ , 7873–7882 * Streli et al. 2008 Streli, C.; Wobrauschek, P.; Meirer, F.; Pepponi, G. _J. Anal. At. Spectrom._ 2008, _23_ , 792–798 * Bukowiecki et al. 2008 Bukowiecki, N.; Lienemann, P.; Zwicky, C. N.; Furger, M.; Richard, A.; Falkenberg, G.; Rickers, K.; Grolimund, D.; Borca, C.; Hill, M.; Gehrig, R.; Baltensperger, U. _Spectrochim. Acta B_ 2008, _63_ , 929 – 938 * Richard et al. 2010 Richard, A.; Bukowiecki, N.; Lienemann, P.; Furger, M.; Fierz, M.; Minguillón, M. 
C.; Weideli, B.; Figi, R.; Flechsig, U.; Appel, K.; Prévôt, A. S. H.; Baltensperger, U. _Atmos. Meas. Tech._ 2010, _3_ , 1473–1485 * Bontempi et al. 2010 Bontempi, E.; Zacco, A.; Benedetti, D.; Borgese, L.; Colombi, P.; Stosnach, H.; Finzi, G.; Apostoli, P.; Buttini, P.; Depero, L. _Environ. Technol._ 2010, _31_ , 467–477 * Prost et al. 2017 Prost, J.; Wobrauschek, P.; Streli, C. _X-Ray Spectrom._ 2017, _46_ , 454–460 * Osán et al. 2020 Osán, J.; Börcsök, E.; Czömpöly, O.; Dian, C.; Groma, V.; Stabile, L.; Török, S. _Spectrochim. Acta B_ 2020, _167_ , 105852 * Borgese et al. 2020 Borgese, L.; Bilo, F.; Zacco, A.; Federici, S.; Mutahi, A. W.; Bontempi, E.; Trzepla, K.; Hyslop, N.; Yatkin, S.; Wobrauschek, P.; Prost, J.; Ingerle, D.; Depero, L. E. _Spectrochim. Acta B_ 2020, _167_ , 105840 * Fomba et al. 2020 Fomba, K. W.; Deabji, N.; Barcha, S. E. I.; Ouchen, I.; Elbaramoussi, E. M.; El Moursli, R. C.; Harnafi, M.; El Hajjaji, S.; Mellouki, A.; Herrmann, H. _Atmos. Meas. Tech._ 2020, _13_ , 4773–4790 * Klockenkämper and von Bohlen 2014 Klockenkämper, R.; von Bohlen, A. _Total Reflection X-ray Fluorescence Analysis and Related Methods_ ; John Wiley & Sons, Ltd, 2014 * Leland et al. 1987 Leland, D. J.; Bilbrey, D. B.; Leyden, D. E.; Wobrauschek, P.; Aiginger, H.; Puxbaum, H. _Anal. Chem._ 1987, _59_ , 1911–1914 * Schmeling and Klockow 1997 Schmeling, M.; Klockow, D. _Anal. Chim. Acta_ 1997, _346_ , 121 – 126 * Wagner and Mages 2010 Wagner, A.; Mages, M. _Spectrochim. Acta B_ 2010, _65_ , 471 – 477 * Natali et al. 2016 Natali, M.; Zanella, A.; Rankovic, A.; Banas, D.; Cantaluppi, C.; Abbadie, L.; Lata, J. C. _Environ. Sci. Pollut. Res._ 2016, _23_ , 23496 – 23510 * Bilo et al. 2017 Bilo, F.; Borgese, L.; Dalipi, R.; Zacco, A.; Federici, S.; Masperi, M.; Leonesio, P.; Bontempi, E.; Depero, L. E. _Chemosphere_ 2017, _178_ , 504 – 512 * Schneider 1989 Schneider, B. _Spectrochim. Acta B_ 1989, _44_ , 519 – 523 * Esaka et al. 
2003 Esaka, F.; Watanabe, K.; Onodera, T.; Taguchi, T.; Magara, M.; Usuda, S. _Spectrochim. Acta B_ 2003, _58_ , 2145 – 2155 * Injuk and Van Grieken 1995 Injuk, J.; Van Grieken, R. _Spectrochim. Acta B_ 1995, _50_ , 1787 – 1803 * Prost et al. 2018 Prost, J.; Zinkl, A.; Ingerle, D.; Wobrauschek, P.; Streli, C. _Spectrochim. Acta B_ 2018, _147_ , 13 – 20 * Hönicke et al. 2018 Hönicke, P.; Krämer, M.; Lühl, L.; Andrianov, K.; Beckhoff, B.; Dietsch, R.; Holz, T.; Kanngießer, B.; Weißbach, D.; Wilhein, T. _Spectrochim. Acta B_ 2018, _145_ , 36 – 42 * Horntrich et al. 2012 Horntrich, C.; Kregsamer, P.; Prost, J.; Stadlbauer, F.; Wobrauschek, P.; Streli, C. _Spectrochim. Acta B_ 2012, _77_ , 31 – 34 * Böttger et al. 2018 Böttger, S.; Tyssebotn, I. M. B.; Jansen, W.; Fittschen, U. E. _Spectrochim. Acta B_ 2018, _147_ , 93 – 99 * May 1975 May, K. _J. Aerosol Sci._ 1975, _6_ , 413 – 419 * Marple 2004 Marple, V. A. _Aerosol Sci. Technol._ 2004, _38_ , 247–292 * Müller et al. 2014 Müller, M.; Hönicke, P.; Detlefs, B.; Fleischmann, C. _Materials_ 2014, _7_ , 3147–3159 * Hönicke et al. 2019 Hönicke, P.; Detlefs, B.; Nolot, E.; Kayser, Y.; Mühle, U.; Pollakowski, B.; Beckhoff, B. _Journal of Vacuum Science & Technology A_ 2019, _37_ , 041502 * Senf et al. 1998 Senf, F.; Flechsig, U.; Eggenstein, F.; Gudat, W.; Klein, R.; Rabus, H.; Ulm, G. _Journal of Synchrotron Radiation_ 1998, _5_ , 780–782 * Lubeck et al. 2013 Lubeck, J.; Beckhoff, B.; Fliegauf, R.; Holfelder, I.; Hönicke, P.; Müller, M.; Pollakowski, B.; Reinhardt, F.; Weser, J. _Rev. Sci. Instrum._ 2013, _84_ , 045106 * Scholze and Procop 2009 Scholze, F.; Procop, M. _X-Ray Spectrom._ 2009, _38_ , 312–321 * Scholze and Procop 2005 Scholze, F.; Procop, M. _X-Ray Spectrom._ 2005, _34_ , 473–476 * Müller et al. 2006 Müller, M.; Beckhoff, B.; Ulm, G.; Kanngießer, B. _Phys. Rev. A_ 2006, _74_ , 012702 * Unterumsberger et al. 
2020 Unterumsberger, R.; Hönicke, P.; Kayser, Y.; Pollakowski-Herrmann, B.; Gholhaki, S.; Guo, Q.; Palmer, R. E.; Beckhoff, B. _J. Anal. At. Spectrom._ 2020, _35_ , 1022–1033 * Kayser et al. 2015 Kayser, Y.; Sá, J.; Szlachetko, J. _Nanoscale_ 2015, _7_ , 9320–9330 * Schoonjans et al. 2011 Schoonjans, T.; Brunetti, A.; Golosio, B.; Sanchez del Rio, M.; Solé, V. A.; Ferrero, C.; Vincze, L. _Spectrochim. Acta B_ 2011, _66_ , 776 – 784 * Beckhoff and Ulm 2001 Beckhoff, B.; Ulm, G. _Adv. X-Ray Anal._ 2001, _44_ , 349–354 * Hönicke et al. 2016 Hönicke, P.; Kolbe, M.; Krumrey, M.; Unterumsberger, R.; Beckhoff, B. _Spectrochim. Acta B_ 2016, _124_ , 94 – 98 * Beckhoff 2008 Beckhoff, B. _J. Anal. At. Spectrom._ 2008, _23_ , 845–853 * Wählisch et al. 2020 Wählisch, A.; Streeck, C.; Hönicke, P.; Beckhoff, B. _J. Anal. At. Spectrom._ 2020, _35_ , 1664–1670 * Klockenkämper and von Bohlen 1989 Klockenkämper, R.; von Bohlen, A. _Spectrochim. Acta B_ 1989, _44_ , 461 – 469 * Hellin et al. 2004 Hellin, D.; Fyen, W.; Rip, J.; Delande, T.; Mertens, P. W.; De Gendt, S.; Vinckier, C. _J. Anal. At. Spectrom._ 2004, _19_ , 1517–1523 * Hönicke et al. 2020 Hönicke, P.; Waldschläger, U.; Wiesner, T.; Krämer, M.; Beckhoff, B. _Spectrochim. Acta B_ 2020, _174_ , 106009
In this article we provide a framework for the study of Hecke operators acting on the Bredon (co)homology of an arithmetic discrete group. Our main interest lies in the study of Hecke operators for Bianchi groups. Using the Baum-Connes conjecture, we can transfer computations in Bredon homology to obtain a Hecke action on the $K$-theory of the reduced $C^{*}$-algebra of the group. We show the power of this method by giving explicit computations for the group $\SL_2(\Z[i])$. In order to carry out these computations we use an Atiyah-Segal type spectral sequence together with the Bredon homology of the classifying space for proper actions. § INTRODUCTION Hecke operators play a prominent role in the study of arithmetic groups. The action of Hecke operators on various cohomology theories associated to arithmetic groups and their symmetric spaces provides an essential tool bridging the analytic and arithmetic aspects of the theory. An important class of arithmetic groups that has received a lot of attention recently is that of Bianchi groups. These are groups of the form $\PSL_2(\mathcal{O}_{\Q (\sqrt{-D} )})$ where $\mathcal{O}_{\Q (\sqrt{-D} )}$ is the ring of integers of an imaginary quadratic field. Bianchi groups were first studied by Luigi Bianchi and others in the 1890s as a natural extension of the study of the modular group $\PSL_2(\Z)$. Bianchi studied in [3] their algebraic properties, finding generators for many of these groups and showing that each Bianchi group acts discontinuously on the hyperbolic 3-space. Bianchi also developed the tools from reduction theory for binary Hermitian forms needed in the study of this class of groups (cf. [21]). For free subgroups of Bianchi groups, Mesland and Şengün in [15] have recently defined a Hecke action on $K$-homology using Kasparov's bivariant $KK$-theory. 
We tackle the general case by first defining Hecke operators on Bredon (co)homology, allowing us to then transfer the computations to $K$-theory in full generality. Computations in Bredon (co)homology can be carried out using Atiyah-Segal type spectral sequences. We develop the corresponding machinery, which we then apply to explicitly compute the Hecke action on the $K$-homology of the group $\PSL_2(\Z[i])$. The plan of the article is as follows. In section 2 we review the definition of Bredon modules and Bredon (co)homology. Given a discrete group $G$, Bredon modules associated to families of subgroups of $G$ provide coefficient systems for $G$-equivariant (co)homology theories. We focus on the case where the coefficient system is given by the representation ring. In this case the Bredon (co)homology of a $G$-CW-complex $X$ is given in terms of the representation rings of its cell stabilizers. In section 3 we define equivariant K-(co)homology in terms of spectra and discuss its relation to Bredon (co)homology using spectral sequences. This in turn leads via the Baum-Connes conjecture to a description of the $K$-theory of the reduced $C^{*}$-algebra of $G$ and the possibility of defining Hecke operators at the level of such algebras. In section 4 we review the theory of Hecke algebras and introduce the natural Hecke action on group cohomology. Here we also discuss Hecke correspondences over a $G$-space. In section 5 we develop the machinery necessary to define Hecke operators in Bredon (co)homology and transfer these to equivariant $K$-(co)homology. The core of our treatment lies in identifying the appropriate restriction, corestriction and conjugation morphisms leading to the action of Hecke correspondences. Section 6 of the article is devoted to computations for Bianchi groups. 
Expressing Bianchi groups as amalgamated products, we can carry out the computations in terms of the representation theory of their factors viewed as cell stabilizers on $G$-spaces. Explicit computations in the case of Hecke operators associated to congruence subgroups of $\PSL_2(\Z[i])$ of prime level are carried out in full. We close the article in section 7 with a few concluding remarks. § BREDON (CO)HOMOLOGY §.§ Bredon (Co)homology Bredon (co)homology for finite groups was introduced by Bredon in [5], [6] in order to provide an appropriate framework for coefficient systems in an equivariant (co)homology theory. The theory can be extended to arbitrary topological groups (cf. [11]). In this section we recall the main aspects of the theory for discrete groups. Our treatment follows that of [19] and [17]. In order to define Hecke operators we will need to develop in full generality the theory underlying Bredon's definition of cohomology with coefficients. Given a discrete group $G$, its Bredon cohomology groups with coefficients in the representation ring will be computed via a cochain complex where each term is the representation ring of the stabilizer of an $n$-cell of the classifying space for proper actions of $G$ viewed as a $G$-CW-complex. We will develop the necessary machinery leading to this description in this section and the next one. This section will be devoted to the description of the cohomology with coefficients for a $G$-CW-complex. In the next section we recast the discussion in terms of spectra, leading to the possibility of carrying out the computations using the classifying space for proper $G$-actions. We begin by defining the orbit category, which is central to Bredon's definition of (co)homological invariants for spaces with a group action. Let $G$ be a discrete group and let $\fF$ be a family of subgroups of $G$, closed under conjugation and finite intersections. 
Define the orbit category $\Or_{\fF}(G)$ as the category whose objects are sets of the form $G/H$ with $H\in \fF$, and whose morphisms are given by $G$-maps. Notice that such a morphism $f_{g}: G/H\rightarrow G/K$ is determined by an element $gK\in G/K$ with $g^{-1}H g\subset K$, so that it sends the coset $H$ to the coset $gK$, i.e. we have an identification \begin{eqnarray*} \mor_{\Or_{\fF}(G)} (G/H,G/K) &=& \Maps(G/H,G/K)^G. \end{eqnarray*} When $\fF$ is the family of all subgroups of $G$ we simply denote $\Or_{\fF}(G)$ by $\Or_{}(G)$. Throughout what follows we fix a choice of a family $\fF$ of subgroups of $G$ as above. Denote by $\Ab$ the category of abelian groups. A covariant (resp. contravariant) Bredon module is a covariant (resp. contravariant) functor \begin{eqnarray*} M : \Or_{\fF}(G) & \longrightarrow & \Ab. \end{eqnarray*} A morphism \begin{eqnarray*} \Psi : M & \longrightarrow & N \end{eqnarray*} between Bredon modules is given by a natural transformation between the corresponding functors. This means that for each $H\in\mathfrak{F}$ there is a morphism of abelian groups \begin{eqnarray*} \Psi(G/H) : M(G/H) & \longrightarrow & N(G/H) \end{eqnarray*} and for every $f_g : G/H \to G/K$ we have, in the covariant case, a commutative diagram $$\xymatrix{M(G/H)\ar[r]^{M(f_g)}\ar[d]_{\Psi(G/H)}& M(G/K)\ar[d]^{\Psi(G/K)}\\N(G/H)\ar[r]^{N(f_g)}&N(G/K)}$$ whilst in the contravariant case the horizontal arrows are reversed. If $M$ and $N$ are both covariant (resp. contravariant) Bredon modules, the group structure in each of the $\Hom(M(G/H), N(G/H))$ induces an abelian group structure in the set of natural transformations $\mor(M, N)$. It can be shown moreover that the category of covariant (resp. contravariant) Bredon modules is an abelian category. 
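The identification $\mor_{\Or(G)}(G/H,G/K)=\Maps(G/H,G/K)^{G}=(G/K)^{H}$ can be checked by brute force for a small group. The following sketch (with $G=S_3$, chosen only for illustration and not taken from the article) compares an exhaustive search for equivariant maps between coset spaces with the count of $H$-fixed cosets.

```python
from itertools import permutations, product

def compose(p, q):
    """(p * q)(i) = p(q(i)) for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(3))

G = list(permutations(range(3)))
e = (0, 1, 2)
H2 = [e, (1, 0, 2)]             # a subgroup of order 2
H3 = [e, (1, 2, 0), (2, 0, 1)]  # the cyclic subgroup of order 3

def cosets(H):
    """Left cosets gH, each stored as a frozenset of elements."""
    seen = []
    for g in G:
        c = frozenset(compose(g, h) for h in H)
        if c not in seen:
            seen.append(c)
    return seen

def act(g, coset):
    return frozenset(compose(g, x) for x in coset)

def equivariant_maps(H, K):
    """All G-maps G/H -> G/K, found by exhaustive search."""
    GH, GK = cosets(H), cosets(K)
    maps = []
    for images in product(GK, repeat=len(GH)):
        f = dict(zip(GH, images))
        if all(f[act(g, c)] == act(g, f[c]) for g in G for c in GH):
            maps.append(f)
    return maps

def fixed_cosets(H, K):
    """Cosets gK fixed by H, i.e. those with g^{-1}Hg contained in K."""
    return [c for c in cosets(K) if all(act(h, c) == c for h in H)]

# The identification mor(G/H, G/K) = (G/K)^H, verified on a few pairs:
for A, B in [(H2, H2), (H2, H3), (H3, H3)]:
    assert len(equivariant_maps(A, B)) == len(fixed_cosets(A, B))
```

Each equivariant map is determined by where it sends the coset $H$, which must be a coset $gK$ with $g^{-1}Hg\subset K$; the brute-force count agrees with the fixed-point count in every case.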
If $M$ is a contravariant Bredon module and $N$ is a covariant Bredon module, we define the abelian group \begin{eqnarray*} M\otimes_\mathfrak{F} N & = & \bigoplus_{H\in\mathfrak{F}} M(G/H)\otimes_{\Z} N(G/H) \;\bigg/ \sim \end{eqnarray*} where the relation $\sim$ is generated by $M(f)(m)\otimes n-m\otimes N(f)(n)$ for each $f:G/H\rightarrow G/K$, $m\in M(G/K)$, and $n\in N(G/H)$. Given a CW-complex $Z$ we denote by $C_{\ast}(Z)$ its cellular chain complex. Let $X$ be a $G$-CW-complex. For each $n \geq 0$ we can define a contravariant Bredon module $\underline{C_n(X)}$ by \begin{eqnarray*} \underline{C_n(X)} : G/H & \longmapsto & C_n(X^H), \end{eqnarray*} where $X^H$ is the subspace of $X$ fixed by the subgroup $H$. Let $\{ \delta_{\alpha}\}$ be the set of $n$-cells of $X$; then there is an isomorphism \begin{eqnarray*} C_n(X^H) & \cong & \bigoplus_{\alpha}\Z[ \delta_{\alpha}^H], \end{eqnarray*} where $ \delta_{\alpha}^H$ is the $H$-fixed point set of $\delta_\alpha$; explicitly, $\delta_{\alpha}^H$ is $\delta_\alpha$ if the cell is fixed by $H$, and is empty otherwise, in which case it does not contribute to the sum. For a morphism $f_{g}:G/H\rightarrow G/K$ we have $$\underline{f_{g}}:=\underline{C_n{(X)}}(f_{g}):C_n(X^K)\rightarrow C_n(X^H), \quad \delta_{\alpha}^K\longmapsto g\cdot \delta_{\alpha}^K=: \delta_{\alpha_g}^H.$$ For each $H\in\mathfrak{F}$ the usual boundary map $\partial:C_n(X^H)\rightarrow C_{n-1}(X^H)$ induces a boundary map \begin{eqnarray*} \partial:\underline{C_n(X)} & \longrightarrow & \underline{C_{n-1}(X)}. \end{eqnarray*} If $M$ is a contravariant Bredon module and $X$ is a $G$-CW-complex we obtain a cochain complex $\mbox{mor}(\underline{C_{\ast}(X)},M)$. Let $X$ be a $G$-CW-complex and let $M$ be a contravariant Bredon module. We define the $n$-th Bredon cohomology group of $X$ with coefficients in $M$ as \begin{eqnarray*} \mathcal{H}_{G}^{n}(X;M) & = & H^n(\mbox{mor}(\underline{C_{\ast}(X)},M)). 
\end{eqnarray*} Analogously, if $N$ is a covariant Bredon module and $X$ is a $G$-CW-complex we obtain a chain complex $$ \underline{C_{\ast}(X)}\otimes_\mathfrak{F} N . $$ Let $X$ be a $G$-CW-complex and let $N$ be a covariant Bredon module. We define the $n$-th Bredon homology group of $X$ with coefficients in $N$ as \begin{eqnarray*} \mathcal{H}_{n}^{G}(X;N) & = &H_n(\underline{C_{\ast}(X)}\otimes_\mathfrak{F} N). \end{eqnarray*} As mentioned above, contravariant Bredon modules form an abelian category. We will now define a class of projective objects which will play an important role in computations. Let $K\in\mathfrak{F}$. We define the standard projective contravariant Bredon module $P_K$ as the functor given on objects of $\Or_{\fF}(G)$ by \begin{eqnarray*} P_K (G/H) & = & \Z [\mbox{mor}(G/H,G/K)],\qquad \mbox{for }H\in\mathfrak{F}, \end{eqnarray*} and which, to a morphism $f:G/H_1\rightarrow G/H_2$, associates the morphism $P_K(f):P_K(G/H_2)\rightarrow P_K(G/H_1)$ given by the linear extension of pre-composition with $f$. For the Bredon modules $P_K$, an appropriate form of the Yoneda Lemma shows that given a contravariant Bredon module $M$ there is an induced isomorphism of abelian groups $$\ev_K:\mbox{mor}(P_K,M) \longrightarrow M(G/K), \qquad \varphi\longmapsto \ev_K(\varphi)=\varphi(G/K)(1).$$ In a similar manner, if $N$ is a covariant Bredon module, there is an isomorphism $$P_K\otimes_{\mathfrak{F}}N\cong N(G/K).$$ See [17] for more information on these isomorphisms. 
Let $X$ be a $G$-CW-complex and, as above, let $\{\delta_{\alpha}\}$ be the set of $n$-cells of $X$, and let $\{e_{\beta}\}$ be a set of $G$-representatives of those $n$-cells; we know that $$C_n(X^H) \; \cong \; \bigoplus_{\alpha}\Z[\delta_{\alpha}^H] \; \cong \; \bigoplus_{\beta}\Z[(G\cdot e_{\beta})^H].$$ Moreover, if $S_{\beta}$ is the stabilizer of the cell $e_{\beta}$ and the $g$'s run over representatives of $G/S_{\beta}$, then $g e_{\beta}$ is fixed by $H$ if and only if $g^{-1}Hg\subset S_{\beta}$, so we have a bijective correspondence \begin{eqnarray*} (G\cdot e_{\beta})^H & = & \mbox{mor}(G/H,G/S_{\beta}). \end{eqnarray*} Therefore, we obtain $$C_n(X^H) \; \cong \; \bigoplus_{\beta}\Z[\mbox{mor}(G/H,G/S_{\beta})] \; = \; \bigoplus_{\beta} P_{S_{\beta}}(G/H),$$ so, as Bredon modules, \begin{eqnarray*} \underline{C_{n}(X)} & \cong & \bigoplus_{\beta}P_{S_{\beta}}. \end{eqnarray*} We have an isomorphism of chain complexes $$\mbox{mor}_{G}(\underline{C_{\ast}(X)},M) \, \cong \, \prod_{\beta^\ast} \mbox{mor}(P_{S_{\beta^\ast}},M) \, \cong \, \prod_{\beta^\ast} M(G/S_{\beta^\ast}),$$ where $\{\beta^\ast\}$ indexes the $G$-representatives of $\ast$-cells. This product becomes a direct sum when there are finitely many $G$-representatives of cells. §.§ Coefficients in the representation ring Consider now the family $\fF$ of finite subgroups of $G$. For computations of equivariant K-theory and K-homology we will use the contravariant Bredon module $\mathcal{R}$ which acts on objects of $\Or_{\fF}(G)$ by sending $G/H$ to $R(H)$, the representation ring of the subgroup $H$. 
At the level of morphisms, $\mathcal{R}$ acts via the composition of restriction and the isomorphism given by conjugation, so for any $f_{g}:G/H\rightarrow G/L$ the morphism $\mathcal{R}(f_{g})$ is the composition $$R(L)\xrightarrow{\;\mathrm{res}\;}R(g^{-1}Hg)\xrightarrow{\;\cong\;}R(H).$$ Then, as above, we have an isomorphism describing the Bredon cochain complex \begin{eqnarray}\label{bredonc} \mbox{mor}_{G}(\underline{C_{n}(X)},\mathcal{R}) & \cong &\bigoplus_{\alpha}R(S_{\alpha}), \end{eqnarray} with the assumption that there are finitely many orbit representatives of $n$-cells. Here, the coboundary map is given by restriction of representations, from the stabilizer of an $n$-cell to the stabilizer of the corresponding $(n+1)$-cell that contains it. Similarly, we can consider $\mathcal{R}$ as a covariant Bredon module, setting $\mathcal{R}(f_g)$ to be the composition $$R(H)\xrightarrow{\;\cong\;}R(g^{-1}Hg)\xrightarrow{\;\mathrm{ind}\;}R(L).$$ Then the chain complex $\underline{C_{n}(X)}\otimes_{\mathfrak{F}}\mathcal{R} $ can be described as \begin{eqnarray} \underline{C_{n}(X)}\otimes_{\mathfrak{F}}\mathcal{R} & \cong & \bigoplus_{\alpha}R(S_{\alpha}), \end{eqnarray} where the boundary map is given by induction of representations. Once we extend scalars to $\C$ we obtain an isomorphism \begin{eqnarray*} \left[ \; \bigoplus_{\alpha}R(S_{\alpha}) \, \right] \otimes_{\Z} \C & \cong & \bigoplus_{\alpha} \mathfrak{Cl}_{c}(S_{\alpha}) \end{eqnarray*} where $\mathfrak{Cl}_{c}(S_{\alpha})$ is the vector space of locally constant complex-valued class functions on $S_{\alpha}$, and the individual isomorphisms $R(S_{\alpha}) \otimes_{\Z} \C \cong \mathfrak{Cl}_{c}(S_{\alpha})$ are given by passing from representations to their characters. The spaces \begin{eqnarray*} \mathrm{C}_{\; n}^{\, \mathrm{ch}}(G ; X) & = &\bigoplus_{\alpha} \mathfrak{Cl}_{c}(S_{\alpha}) \end{eqnarray*} form a chain complex whose boundary maps are similarly defined by induction of representations. This chain complex computes the chamber homology groups $\mathrm{H}_{\; n}^{\mathrm{ch}}(G ; X)$. 
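The restriction and induction maps appearing in these (co)boundary maps can be made concrete for finite groups. The following sketch (an illustration with $A_3\subset S_3$, chosen for brevity and not taken from the article) induces class functions along the inclusion and checks Frobenius reciprocity numerically.

```python
from itertools import permutations

def compose(p, q):
    """(p * q)(i) = p(q(i)) for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def sign(p):
    n = len(p)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

S3 = list(permutations(range(3)))
A3 = [p for p in S3 if sign(p) == 1]  # the cyclic subgroup of order 3

def induce(chi, H, G):
    """Ind_H^G(chi)(g) = (1/|H|) * sum of chi(x g x^{-1}) over x in G with x g x^{-1} in H."""
    return {g: sum(chi[compose(compose(x, g), inverse(x))]
                   for x in G if compose(compose(x, g), inverse(x)) in chi) / len(H)
            for g in G}

def restrict(chi, H):
    return {h: chi[h] for h in H}

def inner(chi, psi, G):
    """Standard inner product of class functions."""
    return sum(chi[g] * psi[g].conjugate() for g in G) / len(G)

triv_A3 = {h: 1 for h in A3}
ind = induce(triv_A3, A3, S3)

# Ind_{A3}^{S3}(trivial) decomposes as trivial + sign.
triv_S3 = {g: 1 for g in S3}
sgn_S3 = {g: sign(g) for g in S3}
assert all(ind[g] == triv_S3[g] + sgn_S3[g] for g in S3)

# Frobenius reciprocity: <Ind chi, psi>_G = <chi, Res psi>_H.
assert inner(ind, sgn_S3, S3) == inner(triv_A3, restrict(sgn_S3, A3), A3)
```

The same formulas, applied to the cell stabilizers $S_\alpha$, compute the boundary maps of the chamber chain complex above once characters replace representations.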
Chamber homology is an equivariant homology theory which, like Bredon homology, incorporates the structure coming from the representation theory of subgroups of $G$. We refer the reader to [2] for a treatment of chamber homology together with its relevance in the context of the Baum-Connes conjecture for $p$-adic groups. § EQUIVARIANT K-HOMOLOGY AND THE BAUM-CONNES CONJECTURE Now we will describe a topological version of the Baum-Connes assembly map; from this approach we easily obtain the naturality of the assembly map. It will be necessary for the computation in Section <ref>. The proofs of all results in this section can be found in [14]. §.§ Spectra and homology theories A spectrum $${\bf E} = \{(E(n), \sigma(n)) \mid n \in \Z\}$$ is a sequence of pointed spaces $$\{E(n)\mid n \in \Z\}$$ together with pointed maps, called structure maps, \begin{eqnarray*} \sigma(n):E(n)\wedge S^1 & \longrightarrow & E(n + 1). \end{eqnarray*} A map of spectra $f:{\bf E}\to {\bf E'}$ is a sequence of maps $f(n):E(n)\to E'(n)$ which are compatible with the structure maps $\sigma(n)$. The homotopy groups of a spectrum are defined by \begin{eqnarray*} \pi_n({\bf E}) & := & \colim_{k\to\infty} \pi_{n+k}(E(k)), \end{eqnarray*} where the $k$-th structure map of the system $\pi_{n+k}(E(k))$ is given by the composite $$\pi_{n+k}(E(k))\xrightarrow{S}\pi_{n+k+1}(E(k)\wedge S^1)\xrightarrow{\sigma(k)_*}\pi_{n+k+1}(E(k+1)),$$ where $S$ denotes the suspension homomorphism. We will denote by $\SPECTRA$ the category of spectra. If ${\bf E}$ is a spectrum, one obtains a (non-equivariant) homology theory $H_*(-; {\bf E})$ by defining for a CW-pair $(X,A)$ and $n\in \Z$ \begin{eqnarray*} H_n(X,A; {\bf E}) & = & \pi_n (X_+ \cup_{A_+}\cone(A_+) \wedge {\bf E}), \end{eqnarray*} where $X_+$ is obtained from $X$ by adding a disjoint base point and $\cone$ denotes the (reduced) mapping cone. The main property of this homology theory is given by the equality $H_n(\pt; {\bf E}) = \pi_n({\bf E})$. 
The definitions can be extended to an equivariant context using the orbit category. §.§ $\Or(G)$-spaces Denote by $\SPACES$ the category of topological spaces. Also, recall that if $\fF$ is the family of all subgroups of $G$ we denote $\Or_{\fF}$ by $\Or(G)$. In order to ease notation we will at times use lower case letters to denote objects in $\Or(G)$. A covariant $\Or(G)$-space is a covariant functor \begin{eqnarray*} \mathcal{X} : \Or(G) &\longrightarrow &\SPACES; \end{eqnarray*} contravariant $\Or(G)$-spaces are defined analogously. Given a $G$-space $X$, the fixed point set system of $X$, denoted by $\Phi X$, is the $\Or(G)$-space defined by \begin{eqnarray*} \Phi X (G/H) & := & {\rm{Maps}}(G/H, X)^G \; = \; X^H \end{eqnarray*} and if $\theta: G/H \to G/K$ is a $G$-map corresponding to $gK \in (G/K)^H$ then for $x\in X^K$ \begin{eqnarray*} \Phi X (\theta)(x) &:=& gx \in X^H. \end{eqnarray*} $\Phi$ defines a contravariant functor from the category of proper $G$-spaces to the category of contravariant $\Or(G)$-spaces. Let $\XX$ be a contravariant $\Or(G)$-space and $\YY$ a covariant $\Or(G)$-space; we define the space \begin{eqnarray} \XX\times_{\Or(G)}\YY & := & \bigsqcup_{c \in {\rm{Obj}}(\Or(G))} \XX(c) \times \YY(c) / \sim \end{eqnarray} where $\sim$ is the equivalence relation generated by $(\XX(\phi)(x),y) \sim (x, \YY(\phi)(y))$ for all morphisms $\phi: c \to d$ in $\Or(G)$ and points $x \in \XX(d)$ and $y \in \YY(c)$. A covariant $\Or(G)$-spectrum is a covariant functor \begin{eqnarray*} {\bf E}^G : \Or(G) & \longrightarrow & \SPECTRA. \end{eqnarray*} If ${\bf E}^G$ is a covariant $\Or(G)$-spectrum and $\YY$ is a contravariant $\Or(G)$-space, then one obtains a spectrum $\YY\times_{\Or(G)}{\bf E}^G$. 
Hence, we can extend ${\bf E}^G$ to a covariant functor \begin{align*}{\bf E}^G_\%:G\text{-CW}^2&\to \SPECTRA\\ (X, A)&\mapsto \Phi(X_+\cup_{A_+} \cone(A_+))\times_{\Or(G)}{\bf E}^G.\end{align*} If ${\bf E}^G$ is a covariant $\Or(G)$-spectrum, then we obtain a $G$-homology theory $H^G_*(-;{\bf E}^G)$ defined as \begin{eqnarray*} H^G_n(X,A;{\bf E}^G) & = & \pi_n({\bf E}^G_\%(X,A)) \end{eqnarray*} satisfying $H^G_n(G/H;{\bf E}^G) = \pi_n({\bf E}^G(G/H))$ for $n\in \Z$ and $H\subseteq G$. See Lemmas 2.3 and 2.5 in [14]. Recall that the reduced group $C^{*}$-algebra $C_r^*(G)$ is the norm closure of the complex group ring $\C G$, embedded via the right regular representation into the space $\mathcal{B}(L^2(G))$ of bounded operators $L^2(G)\to L^2(G)$ equipped with the operator norm. Denote by $\GROUPOIDS$ the category of groupoids. There is a covariant functor, respecting equivalences, \begin{eqnarray*} {\bf K}^{\topo} : \GROUPOIDS \longrightarrow \SPECTRA, \end{eqnarray*} such that for every group $G$ and all $n\in \Z$ we have \begin{eqnarray*} \pi_n({\bf K}^{\topo}(G)) & \cong & K_n(C_r^*(G)), \end{eqnarray*} where $K_n(C_r^*(G))$ is the topological K-theory of the reduced group $C^{*}$-algebra $C_r^*(G)$, see [4]. A group $G$ satisfies the Baum-Connes Conjecture if the assembly map induced by the projection $pr:\underbar{E}G\to G/G$ \begin{eqnarray*} H^G_n(pr;{\bf K}^{\topo}) : H^G_n(\underbar{E}G; {\bf K}^{\topo}) & \longrightarrow & H^G_n(G/G;{\bf K}^{\topo}) = K_n(C^*_r(G)), \end{eqnarray*} is bijective for all $n\in\Z$. The homology theory $H_*^G(-;{\bf K}^{\topo})$ is called $G$-equivariant K-homology and is also denoted by $K_n^G(-)$. The map $H^G_n(pr;{\bf K}^{\topo})$ is called the assembly map and is also denoted by $\mu_{G,n}$. §.§ K-theory The equivariant cohomology theory associated to the $\Or(G)$-spectrum ${\bf K}^{\topo}$ is called equivariant K-theory and is denoted by $K_G^n(-)$. 
By results of Lück and Oliver in [13] we know that in the case of proper $G$-actions on $G$-CW-complexes, $K_G^*(-)$ can be defined in terms of vector bundles. We summarize the construction in the following lines. For any discrete group $G$ and any finite proper $G$-CW-complex $X$, let $\mathbb{K}_G(X)=\mathbb{K}_G^0(X)$ be the Grothendieck group of the semigroup $\text{Vect}_G(X)$ of isomorphism classes of $G$-vector bundles over $X$ together with direct sum. Define $\mathbb{K}_G^{-n}(X)$, for all $n>0$, by setting $$\mathbb{K}_G^{-n}(X) = \ker(\, \mathbb{K}_G(X\times \mathbb{S}^n) \xrightarrow{\;\;\emph{incl}^\ast\;\;} \mathbb{K}_G(X) \,).$$ For any proper $G$-CW-pair $(X,A)$, and $n\geq0$, set $$\mathbb{K}_G^{-n}(X,A) = \ker(\, \mathbb{K}_G^{-n}(X\cup_A X) \xrightarrow{\;\;i_2^\ast\;\;} \mathbb{K}_G^{-n}(X) \,).$$ Finally, let $\mathbb{K}_G^n(X)=\mathbb{K}_G^{-n}(X)$ and $\mathbb{K}_G^n(X,A)=\mathbb{K}_G^{-n}(X,A)$. We then have the following equivalence of equivariant homology theories. Let $G$ be a discrete group, $(X,A)$ a proper $G$-CW-pair and $n$ an integer. There is a natural isomorphism \begin{eqnarray*} K_G^n(X,A) & \cong & \mathbb{K}_G^n(X,A). \end{eqnarray*} See [13]. As in the classical case, there is an Atiyah-Hirzebruch type spectral sequence converging to equivariant K-theory (respectively $K$-homology) whose $E_2$-term is the Bredon cohomology (respectively Bredon homology) with coefficients in the Bredon module of complex representations $\mathcal{R}$ (see [13] for details). 
More concretely, given a discrete group $G$ and any finite dimensional proper $G$-complex $X$, the equivariant skeletal filtration of $X$ induces a cohomological spectral sequence with \begin{eqnarray*} E^{p,2q}_{2}(X) \; \cong \; \mathcal{H}^{p}_{G}(X;\mathcal{R}) & \Longrightarrow & K^{\ast}_{G}(X), \end{eqnarray*} and a homological spectral sequence with \begin{eqnarray*} E_{p,2q}^{2}(X) \; \cong \; \mathcal{H}_{p}^{G}(X;\mathcal{R}) & \Longrightarrow & K_{\ast}^{G}(X). \end{eqnarray*} Notice that, if $\dim(X)=2$ (which is the case when $X = \underbar{E}G$ for a Bianchi group $G$, cf. below), the Bredon cohomology groups (respectively homology groups) are trivial for $p>2$, so both spectral sequences collapse at $E_2$. In fact, both spectral sequences induce split short exact sequences (natural in $X$) \begin{equation}\label{eq1} 0 \longrightarrow \mathcal{H}_G^{2}(X;\mathcal{R}) \longrightarrow K_G^{0}(X) \longrightarrow \mathcal{H}_G^{0}(X;\mathcal{R}) \longrightarrow 0. \end{equation} \begin{equation}\label{eq2} 0 \longrightarrow \mathcal{H}^G_{2}(X;\mathcal{R}) \longrightarrow K^G_{0}(X) \longrightarrow \mathcal{H}^G_{0}(X;\mathcal{R}) \longrightarrow 0\end{equation} and natural isomorphisms in $X$, $K_G^1(X)\cong\mathcal{H}_G^1(X;\mathcal{R})$ and $K^G_1(X)\cong\mathcal{H}^G_1(X;\mathcal{R})$. Both the exact sequences and the isomorphisms above are compatible with the induction structure. § HECKE ALGEBRAS AND HECKE OPERATORS In the case where $G$ is an arithmetic group, given as a discrete subgroup $G \subset \mathfrak{G}$ of a Lie group $\mathfrak{G}$, homogeneous $\mathfrak{G}$-spaces lead to $G$-spaces of arithmetic relevance[Here the Lie group $\mathfrak{G}$ is the group of real or complex points of an algebraic group defined over $\mathbb{Q}$.]. 
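As a standard illustration of the collapse in low dimensions (a well-known toy computation, not taken from this article), consider the infinite dihedral group $D_\infty\cong\Z/2\ast\Z/2$, the simplest example of the amalgamated products used for Bianchi groups below. Its classifying space for proper actions is the real line, with two orbits of $0$-cells whose stabilizers $A$ and $B$ are the two $\Z/2$ factors, and one free orbit of $1$-cells, so the Bredon chain complex with coefficients in $\mathcal{R}$ is

```latex
0 \longrightarrow R(1) \stackrel{\partial}{\longrightarrow} R(A)\oplus R(B) \longrightarrow 0,
\qquad \text{i.e.} \qquad
0 \longrightarrow \Z \stackrel{\partial}{\longrightarrow} \Z^{2}\oplus\Z^{2} \longrightarrow 0,
```

where $\partial$ is induction of representations, $\partial(1)=(1_A+\sigma_A)-(1_B+\sigma_B)$ up to a choice of orientation, hence injective. Thus $\mathcal{H}_1^{D_\infty}(\underbar{E}D_\infty;\mathcal{R})=0$ and $\mathcal{H}_0^{D_\infty}(\underbar{E}D_\infty;\mathcal{R})\cong\Z^{4}/\Z\cong\Z^{3}$; since the complex is $1$-dimensional the spectral sequence collapses, giving $K_0^{D_\infty}(\underbar{E}D_\infty)\cong\Z^{3}$ and $K_1^{D_\infty}(\underbar{E}D_\infty)=0$, and the Baum-Connes conjecture (known for this amenable group) yields $K_0(C_r^*(D_\infty))\cong\Z^{3}$ and $K_1(C_r^*(D_\infty))=0$.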
As many of the arithmetic properties of the group $G$ can be encoded in terms of its Hecke algebra, it will be important for us to study the action of the Hecke algebra on the Bredon cohomology and homology of these $G$-spaces. In the following paragraphs we discuss some generalities on Hecke algebras. §.§ Double cosets and Hecke algebras Let $\mathfrak{G}$ be a group. Two subgroups of $\mathfrak{G}$ are said to be commensurable if their intersection has finite index in both. Commensurability defines an equivalence relation on the set of subgroups of $\mathfrak{G}$. If $G_{1}$ and $G_{2}$ are subgroups of $\mathfrak{G}$ that are commensurable, we use the notation \begin{eqnarray*} G_{1} &\sim& G_{2}. \end{eqnarray*} Let $G$ be a subgroup of $\mathfrak{G}$. We define the commensurator of $G$ in $\mathfrak{G}$ as the subgroup \begin{eqnarray*} \mathrm{Comm}_{\mathfrak{G}}(G) &=& \{ \, g \in \mathfrak{G} \; | \; G \sim g G g^{-1} \, \}. \end{eqnarray*} Note that if $G_{1}$ and $G_{2}$ are commensurable subgroups of $\mathfrak{G}$ then \begin{eqnarray*} \mathrm{Comm}_{\mathfrak{G}}(G_1) &=& \mathrm{Comm}_{\mathfrak{G}}(G_2) . \end{eqnarray*} For example, let \begin{eqnarray*} \mathfrak{G} &=& \PGL_{2}^{+}(\mathbb{R}) \end{eqnarray*} and let $G$ be the modular group $\PSL_2(\mathbb{Z})$; then the commensurator of $G$ in $\mathfrak{G}$ is \begin{eqnarray*} \mathrm{Comm}_{\mathfrak{G}}(G) \; = \; \mathrm{Comm}_{\PGL_{2}^{+}(\mathbb{R})}(\PSL_2(\mathbb{Z})) &=& \PGL_{2}^{+}(\mathbb{Q}). \end{eqnarray*} As a second example, let \begin{eqnarray*} \mathfrak{G} &=& \PGL_{2}(\mathbb{C}) \end{eqnarray*} and let $\Q(\sqrt{-D}) \subset \mathbb{C}$ be a quadratic imaginary extension of $\mathbb{Q}$. Denote by $\cO_{\Q(\sqrt{-D})}$ the ring of integers of $\Q(\sqrt{-D})$.
If $G = \Gamma_{D}$ is the corresponding Bianchi group \begin{eqnarray*} \Gamma_{D} &=& \PSL_2(\cO_{\Q(\sqrt{-D})}) \end{eqnarray*} then the commensurator of $G$ in $\mathfrak{G}$ is \begin{eqnarray*} \mathrm{Comm}_{\mathfrak{G}}(G) \; = \; \mathrm{Comm}_{\PGL_{2}(\mathbb{C})}( \Gamma_{D} ) &=& \PGL_2\left(\Q(\sqrt{-D})\right). \end{eqnarray*} As above let $\mathfrak{G}$ be a group and let $G$ be a subgroup of $\mathfrak{G}$. Given an element $g$ in $\mathrm{Comm}_{\mathfrak{G}}(G)$ we consider the double coset in $\mathfrak{G}$ given by \begin{eqnarray*} G g G. \end{eqnarray*} The left action of $G$ on $G g G$ has a finite number of orbits. To compute this number, let $ G^{(g)} = g^{-1} G g \cap G $ and notice that the map \begin{eqnarray*} G & \longrightarrow & G g G \\ \gamma & \longmapsto & g \gamma \end{eqnarray*} induces a surjection from $G$ to the quotient $G \backslash G g G$ whose fibers are the cosets of $G^{(g)}$, so we have \begin{eqnarray*} G g G & = & \bigsqcup_{i=1}^{d} G \alpha_i, \quad \text{ where } d = [G : G^{(g)} ]. \end{eqnarray*} We obtain an analogous decomposition by considering the right action of $G$ on $G g G$. The above decompositions lead to a natural product of double cosets as follows. If $\alpha$ and $\beta$ are elements in $\mathrm{Comm}_{\mathfrak{G}}(G)$ with \begin{eqnarray*} G \alpha G = \bigsqcup_{i=1}^{d} G \alpha_i &\text{ and } & G \beta G = \bigsqcup_{j=1}^{e} G \beta_j \end{eqnarray*} then we define \begin{eqnarray*} (G \alpha G) \cdot (G \beta G) & = & \sum c_{\alpha, \beta}^{\delta} G \delta G, \end{eqnarray*} where \begin{eqnarray*} c_{\alpha, \beta}^{\delta} & = & \text{ number of pairs of indices } (i, j ) \text{ such that } G \alpha_i \beta_j = G \delta, \end{eqnarray*} and the expression on the right is viewed as an element of the free abelian group generated by double cosets of the form $G \delta G$ with $\delta \in \mathrm{Comm}_{\mathfrak{G}}(G)$.
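The decomposition $GgG=\bigsqcup_{i=1}^{d}G\alpha_i$ with $d=[G:G^{(g)}]$ can be checked by brute force in a finite toy model, where any two subgroups are automatically commensurable. The setup below ($\mathfrak{G}=S_4$, $G$ a point stabilizer, $g$ a transposition) is our own illustration, not taken from the text:

```python
from itertools import permutations

# Toy model: ambient group = S_4 (all subgroups are commensurable),
# G = stabilizer of the point 3, g = the transposition (2 3).  We verify
# that the left G-action on GgG has d = [G : G^{(g)}] orbits, where
# G^{(g)} = g^{-1} G g ∩ G.

def compose(p, q):                       # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

S4 = set(permutations(range(4)))
G  = {p for p in S4 if p[3] == 3}        # a copy of S_3, order 6
g  = (0, 1, 3, 2)                        # swaps 2 and 3

Gg = {compose(inverse(g), compose(x, g)) for x in G}   # g^{-1} G g
K  = Gg & G                              # G^{(g)}
d  = len(G) // len(K)                    # the index [G : G^{(g)}]

GgG = {compose(x, compose(g, y)) for x in G for y in G}

# Greedily extract left-coset representatives alpha_i of G\GgG.
reps, remaining = [], set(GgG)
while remaining:
    alpha = remaining.pop()
    reps.append(alpha)
    remaining -= {compose(x, alpha) for x in G}

print(d, len(reps), len(GgG))            # 3 3 18
```

The orbit count and the index agree, as the surjection $G\to G\backslash GgG$ above predicts.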
If $\Delta$ is a subsemigroup of $\mathrm{Comm}_{\mathfrak{G}}(G)$ with \begin{eqnarray*} G & \subseteq & \Delta \end{eqnarray*} we denote by \begin{eqnarray*} \cA (G ; \Delta) \end{eqnarray*} the free abelian group generated by double cosets of the form \begin{eqnarray*} G g G \end{eqnarray*} with $g \in \Delta$. The above product of double cosets can be extended linearly to a bilinear map of $\mathbb{Z}$-modules \begin{eqnarray*} \cA (G ; \Delta) \times \cA (G ; \Delta)& \longrightarrow & \cA (G ; \Delta) \end{eqnarray*} which is associative. The group $\cA (G ; \Delta)$ thus becomes a ring which will be called the Hecke algebra of $G$ with respect to $\Delta$. For the above results together with generalities on Hecke algebras see chapter 3 of [20]. In the case $\Delta = \mathrm{Comm}_{\mathfrak{G}}(G)$ we will denote $\cA (G ; \Delta)$ simply by $\cA (G)$. §.§ Action of Hecke operators on group cohomology Let $\mathfrak{G}$ be a group and let $G$ be a subgroup of $\mathfrak{G}$. As above, let $\Delta$ be a subsemigroup of $\mathrm{Comm}_{\mathfrak{G}}(G)$ with $G \subseteq \Delta$. If $M$ is an abelian group on which $\Delta$ acts by endomorphisms we can consider $M$ as a $G$-module. Elements of the Hecke algebra $\cA (G ; \Delta)$ define endomorphisms of the cohomology groups of $G$ with coefficients in $M$ leading to an action of $\cA (G ; \Delta)$ on $H^{n}(G; M)$. These operators will be called Hecke operators associated to $(G; \Delta)$. 
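In the same finite toy model spirit ($\mathfrak{G}=S_4$, $G$ a point stabilizer — our own illustration, not the arithmetic case), the structure constants $c_{\alpha,\beta}^{\delta}$ can be computed by counting pairs, and one can check empirically that they depend only on the double coset $G\delta G$ and that $\sum_\delta c_{\alpha,\beta}^{\delta}\,\deg(G\delta G)=d\cdot e$, where $\deg$ denotes the number of left cosets:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

S4 = set(permutations(range(4)))
G  = {p for p in S4 if p[3] == 3}              # G = stab(3), a copy of S_3
g  = (0, 1, 3, 2)                              # alpha = beta = g = (2 3)

def left_coset(h):
    return frozenset(compose(x, h) for x in G)

def double_coset(h):
    return frozenset(compose(x, compose(h, y)) for x in G for y in G)

def left_coset_reps(subset):                   # one representative per G·h
    reps, remaining = [], set(subset)
    while remaining:
        h = remaining.pop()
        reps.append(h)
        remaining -= left_coset(h)
    return reps

alphas = left_coset_reps(double_coset(g))      # d = 3
betas  = left_coset_reps(double_coset(g))      # e = 3

# Multiplicity of each left coset G·(alpha_i beta_j) among the d·e products.
mult = {}
for a in alphas:
    for b in betas:
        L = left_coset(compose(a, b))
        mult[L] = mult.get(L, 0) + 1

# The multiplicity c is constant on each double coset (well-definedness of
# the product), and the weighted sum of the c's recovers d·e.
total = 0
for D in {double_coset(next(iter(L))) for L in mult}:
    counts = {mult.get(left_coset(h), 0) for h in left_coset_reps(D)}
    assert len(counts) == 1                    # constant on G delta G
    total += counts.pop() * len(left_coset_reps(D))

print(total == len(alphas) * len(betas))       # True
```

Here $(GgG)\cdot(GgG)$ decomposes as a combination of the trivial double coset $G$ and $GgG$ itself, mirroring the classical $T_p\cdot T_p$ identities.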
In order to define the Hecke operator corresponding to a double coset $G g G$ with $g \in \Delta$ notice that if we have a decomposition \begin{eqnarray*} G g G & = & \bigsqcup_{i=1}^{d} G \alpha_i, \qquad \alpha_i \in \Delta, \end{eqnarray*} and $m \in M^{G}$ is an element of $M$ fixed by $G$ then the element of $M$ given by \begin{eqnarray*} m \, | \, G g G & = & \sum_{i=1}^{d}\, \alpha_i^{-1} \, m \end{eqnarray*} is again fixed by $G$ and independent of the representatives $\alpha_i$ so the double coset $G g G$ defines a map \begin{eqnarray*} T_{g} \, : \, M^{G} & \longrightarrow & M^{G}. \end{eqnarray*} These maps can be linearly extended to define operators associated to elements of $\cA(G; \Delta)$: \begin{eqnarray*} m \, | \, \xi & = & \sum_{k=1}^{r} c_k \, T_{g_k}(m) \end{eqnarray*} for $ \sum_{k=1}^{r} c_k ( G g_k G ) \in \cA (G; \Delta)$ and $m\in M^{G}$. These operators define an action of $\cA(G; \Delta)$ on $M^G$. By naturality the action of $\cA (G; \Delta)$ on $M^{G}$ extends to an action on the cohomology groups of $G$ with coefficients in $M$. At the level of group cocycles in the standard complex the action can be described as follows: if $g \in \Delta$, \begin{eqnarray*} G g G & = & \bigsqcup_{i=1}^{d} G \alpha_i, \end{eqnarray*} and $\gamma \in G$, we denote by $\sigma^{g}_{\gamma}$ the unique permutation in $S_{d}$ such that \begin{eqnarray*} G \, \alpha_i \gamma & = & G \, \alpha_{\sigma^{g}_{\gamma}(i)} . \end{eqnarray*} In this way we obtain maps \begin{eqnarray*} \rho^{g}_{j} \, : \, G & \longrightarrow & G \qquad \text{ for } j = 1, \dots,d \end{eqnarray*} where for $\gamma \in G$ the element $\rho^{g}_{j} (\gamma) \in G$ is determined by \begin{eqnarray*} \alpha_j \gamma & = & \rho^{g}_{j} (\gamma) \alpha_{\sigma^{g}_{\gamma}(j)}.
\end{eqnarray*} Now, given a homogeneous group $r$-cocycle \begin{eqnarray*} \phi \, : \, G \times \dots \times G &\longrightarrow & M \end{eqnarray*} ($r+1$ factors of $G$), we can take \begin{eqnarray*} (\, T_g \, \phi \,) \, (\gamma_0, \gamma_1, \dots, \gamma_r) & = & \sum_{j=1}^{d} \alpha_{j}^{-1} \phi ( \rho^{g}_{j}(\gamma_0), \rho^{g}_{j}( \gamma_1), \dots, \rho^{g}_{j}( \gamma_r)) \end{eqnarray*} which is again a cocycle, so $T_g$ defines a morphism \begin{eqnarray*} T_g \, : \, H^{r}(G; M) & \longrightarrow & H^{r}(G; M) \end{eqnarray*} which can be extended to an action of $\cA (G; \Delta)$ on $ H^{r}(G; M) $ for $r \geq 0$, which for $r = 0$, where we have $H^{0}(G; M) = M^{G} $, coincides with the action defined above. Further information on this action together with its functorial properties and its relation to the classical theory of Hecke operators can be found in [12]. §.§ Hecke correspondences In order to extend the above definition of Hecke operators to the Bredon cohomology groups of an arithmetic discrete group $G$ it will be important to understand the natural action of elements of $\cA (G; \Delta)$ on $G$-spaces and their quotients. The natural framework for this comes from considering elements of the Hecke ring as correspondences defined in terms of quotients corresponding to double cosets. As in the previous sections let $G$ be a subgroup of a group $\mathfrak{G}$. Suppose now that the group $\mathfrak{G}$ acts on a topological space $S$ and consider the action of the subgroup $G$ on $S$. We will be interested in the case where $\mathfrak{G}$ is a Lie group and $S$ is a homogeneous $\mathfrak{G}$-space; we also assume in the following that the action of the discrete group $G$ on $S$ satisfies sufficient conditions for the quotient $ S / G $ to be well behaved. Given an element $g \in \mathrm{Comm}_{\mathfrak{G}}(G)$ consider the groups \begin{eqnarray*} K \; = \; g^{-1} G g \, \cap \, G &\text{ and }& {}_{g}K \; = \; G \, \cap \,g G g^{-1} .
\end{eqnarray*} Observe that we have group morphisms \begin{eqnarray*} \begin{tikzcd} K \arrow[r] \arrow[d, hook] & {}_{g}K \arrow[d, hook] \\ G & G \end{tikzcd} \end{eqnarray*} where the horizontal arrow is a group isomorphism and the vertical ones are inclusions with finite index. These morphisms induce maps between the corresponding quotients of $S$, \begin{eqnarray*} \begin{tikzcd} S / K \arrow[r] \arrow[d, two heads] & S / {}_{g}K \arrow[d, two heads] \\ S / G & S / G \end{tikzcd} \end{eqnarray*} where the horizontal arrow is a homeomorphism and the vertical ones are finite-index covers. This diagram determines a correspondence \begin{eqnarray*} \mathcal{C}_{GgG} & \subset & S / G \; \times \; S / G \end{eqnarray*} homeomorphic to $S / K $. We call this correspondence the Hecke correspondence from $S / G$ to $S / G$ associated to $GgG$. We extend this definition using linearity in order to associate correspondences to elements of $\cA (G; \Delta)$. These can in turn be used to define a Hecke action at the level of sheaves on these spaces and their cohomology by defining operators $T_g$ via successive pullbacks and pushforwards along the maps of the correspondence above. The classical example leading to Hecke operators acting on modular forms corresponds to $\mathfrak{G} = \PGL_{2}^{+}(\mathbb{R})$ viewed as a group of transformations of the hyperbolic plane $S = \mathbb{H}_{2}$. In this case if $G$ is a subgroup of $\mathfrak{G}$ commensurable with $\PSL_2(\mathbb{Z})$ the above quotients are modular curves and the Hecke correspondences coming from them define Hecke operators between spaces of modular forms. As mentioned at the beginning of this section the main example for us arises from \begin{eqnarray*} \mathfrak{G} &=& \PGL_{2}(\mathbb{C}) \end{eqnarray*} which we will view in what follows as a group of transformations of the hyperbolic $3$-space $ S = \mathbb{H}_{3}$.
Hecke correspondences for quotients of $\mathbb{H}_{3}$ by a Bianchi group \begin{eqnarray*} \Gamma_{D} &=& \PSL_2(\cO_{\Q(\sqrt{-D})}) \end{eqnarray*} and its subgroups will be used in the following sections in order to define Hecke operators on the Bredon cohomology of Bianchi groups. For more on this point of view together with its relation to the theory of automorphic forms see [9] and [10]. § HECKE OPERATORS ON BREDON (CO)HOMOLOGY AND $K$-THEORY §.§ Restriction and corestriction Let us start by defining the restriction and corestriction operators in equivariant $K$-(co)homology and Bredon (co)homology. Let $X$ be a $G$-CW-complex and let $H\subseteq G$ be a subgroup of finite index. First note that we have natural isomorphisms given by the induction structure \begin{eqnarray*} K_*^H(X) & \xrightarrow{\ind_H^G} & K_*^G(G/H\times X), \\ K^*_H(X)& \xrightarrow{\ind_H^G} & K^*_G(G/H\times X), \\ \cH_*^H(X;\mathcal{R}) & \xrightarrow{\ind_H^G} & \cH_*^G(G/H\times X;\mathcal{R}), \\ \cH^*_H(X;\mathcal{R}) & \xrightarrow{\ind_H^G} & \cH^*_G(G/H\times X;\mathcal{R}).
\end{eqnarray*} Define the corestriction morphisms as the compositions $$K_*^H(X)\xrightarrow{\ind_H^G}K_*^G(G/H\times X)\xrightarrow{\pi_{2*}}K_*^G(X),$$ $$\cH_*^H(X;\mathcal{R})\xrightarrow{\ind_H^G} \cH_*^G(G/H\times X;\mathcal{R})\xrightarrow{\pi_{2*}} \cH_*^G(X;\mathcal{R}),$$ $$K^*_H(X)\xrightarrow{\ind_H^G}K^*_G(G/H\times X)\xrightarrow{\pi_{2!}}K^*_G(X),$$ $$\cH^*_H(X;\mathcal{R})\xrightarrow{\ind_H^G} \cH^*_G(G/H\times X;\mathcal{R})\xrightarrow{\pi_{2!}} \cH^*_G(X;\mathcal{R}),$$ where $\pi_2:G/H\times X\to X$ is the second projection, and $\pi_{2!}$ denotes the shriek map that in this case can be defined as the composition $$K^*_G(G/H\times X)\xrightarrow{p^*}K_G^*(\mathbb{C}[G/H]\times X)\xrightarrow{Th}K_G^*(X), $$ where $\mathbb{C}[G/H]$ denotes the complex vector space generated by $G/H$ with the canonical $G$-action, $p:\mathbb{C}[G/H]\times X\to G/H\times X$ is the natural projection and $Th$ is the Thom isomorphism as in Thm. 3.14 in [13]. We denote the above compositions by $\cores_H^G$ (or simply $\cores$ if $H$ and $G$ are clear from the context). This construction can be performed also in the case of Bredon cohomology. If $G$ is finite and $X=\pt$, corestriction corresponds to the usual induction morphism on the representation ring. We have also restriction morphisms denoted by $\res_H^G$ (or simply by res if $H$ and $G$ are clear from the context) \begin{eqnarray*} K_*^G(X)& \xrightarrow{\res_H^G} & K_*^H(X), \\ \cH_*^G(X;\mathcal{R})& \xrightarrow{\res_H^G} & \cH_*^H(X;\mathcal{R}), \\ K^*_G(X) & \xrightarrow{\res_H^G} &K^*_H(X), \\ \cH^*_G(X;\mathcal{R})& \xrightarrow{\res_H^G} & \cH^*_H(X;\mathcal{R}) \end{eqnarray*} defined in a similar way as corestriction. §.§ Conjugation and Hecke operators Now suppose that the discrete group $G$ is given as a subgroup of a Lie group $\mathfrak{G}$. 
Given an element $g \in \mathrm{Comm}_{\mathfrak{G}}(G)$ there are conjugation morphisms \begin{eqnarray*} K_*^H(X) & \xrightarrow{Ad_g} & K_*^{gHg^{-1}}(X), \\ \cH_*^H(X;\mathcal{R})& \xrightarrow{Ad_g} & \cH_*^{gHg^{-1}}(X;\mathcal{R}), \\ K^*_H(X)& \xrightarrow{Ad_g} & K^*_{gHg^{-1}}(X), \\ \cH^*_H(X;\mathcal{R}) & \xrightarrow{Ad_g} & \cH^*_{gHg^{-1}}(X;\mathcal{R}), \end{eqnarray*} induced from the conjugation morphism from $\Or(H)$ to $\Or(gHg^{-1})$. Let $G$ be a discrete subgroup of a Lie group $\mathfrak{G}$ and let $X$ be a $G$-CW-complex. Given an element $g \in \mathrm{Comm}_{\mathfrak{G}}(G)$ and a finite-index subgroup $H\subseteq G$ we define the Hecke operator associated to $(G, H, g, X)$ as the composition $$ \xymatrix{ K^G_n(X) \ar[r]^{\res} &K_n^H(X)\ar[r]^{Ad_g\quad}&K^{gHg^{-1}}_n(X)\ar[r]^{\;\;\cores} &K_n^G(X).}$$ We will denote this operator by $T_{g,X}$ when $G$ and $H$ are clear from the context. Similarly, for Bredon homology, let $X$ be a proper $G$-CW-complex; we denote by $\mathcal{T}_{g,X}$ the Hecke operator associated to $(G,H,g,X)$, defined as the composition $$\xymatrix{ \cH^G_n(X;\mathcal{R}) \ar[r]^{\res} & \cH_n^H(X;\mathcal{R})\ar[r]^{Ad_g\quad}& \cH^{gHg^{-1}}_n(X;\mathcal{R})\ar[r]^{\;\;\cores} & \cH_n^G(X;\mathcal{R}).}$$ Because all of these morphisms are defined by maps of spectra, they are natural in $X$; hence we have the following commutative diagram.
$$\xymatrix{ K^G_n(\underbar{E}G) \ar[d]^{\mu_{G,n}}\ar[r]^{\res} & K_n^H(\underbar E G)\ar[d]^{\mu_{H,n}}\ar[r]^{Ad_g\;\;} & K^{gHg^{-1}}_n(\underbar EG)\ar[d]^{\mu_{gHg^{-1},n}}\ar[r]^\cores & K_n^G(\underbar EG)\ar[d]^{\mu_{G,n}}\\ K_n(C_r^*(G))\ar[r]^{\res} & K_n(C^*_r(H))\ar[r]^{Ad_g\quad} & K_n(C^*_r(gHg^{-1}))\ar[r]^{\quad\cores} & K_n(C_r^*(G))}$$ Thus, in the case that the group $G$ satisfies the Baum-Connes conjecture, we can compute the Hecke operators defined on the $K$-theory of the reduced $C^*$-algebra of $G$ using the Hecke operators defined on the left-hand side of the Baum-Connes conjecture, the main point being that these are more suitable for computations. Summarizing, we have the following. There is a commutative diagram $$\xymatrix{ K_n^G(\underbar{E}G)\ar[d]^{\mu_{G,n}}\ar[rr]^{T_{g,\uE G}} && K_n^G(\underbar{E}G)\ar[d]^{\mu_{G,n}}\\ K_n(C_r^*(G))\ar[rr]^{T_{g,\pt}} && K_n(C_r^*(G)).}$$ We also have a relation between $T_{g,X}$ and $\mathcal{T}_{g,X}$. Let $G$ be a discrete group such that $\underbar{E}G$ is a $G$-CW-complex of dimension at most 2. We have commutative diagrams $$\xymatrix{0 \ar[r] &\mathcal{H}^G_{2}(\underbar{E}G;\mathcal{R}) \ar[r]\ar[d]^{\mathcal{T}_{g,\uE G}}& K^G_{0}(\underbar{E}G) \ar[r]\ar[d]^{{T}_{g,\uE G}}& \mathcal{H}^G_{0}(\underbar{E}G;\mathcal{R})\ar[r]\ar[d]^{\mathcal{T}_{g,\uE G}} & 0\\ 0 \ar[r] &\mathcal{H}^G_{2}(\underbar{E}G;\mathcal{R}) \ar[r]& K^G_{0}(\underbar{E}G) \ar[r]& \mathcal{H}^G_{0}(\underbar{E}G;\mathcal{R})\ar[r]&0 }$$ $$\xymatrix{ K_1^G(\underbar{E}G)\ar[d]^{\cong}\ar[rr]^{T_{g,\uE G}} & & K_1^G(\underbar{E}G)\ar[d]^{\cong}\\ \cH_1^G(\underbar{E}G;\mathcal{R})\ar[rr]^{\mathcal{T}_{g,\uE G}} & & \cH_1^G(\underbar{E}G;\mathcal{R}).}$$ Since $\underbar{E}G$ and $G/H\times \underbar{E}G$ are proper $G$-CW-complexes of dimension at most 2, we have natural short exact sequences as in <ref> and the natural identification $K^G_1(\uE G)\cong \cH_1^G(\uE G;\mathcal{R})$.
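In degree zero the composition $\cores\circ Ad_g\circ\res$ can be compared with the double coset action directly in a finite toy model. The sketch below is our own illustration (not the arithmetic case): $\mathfrak{G}=S_4$ acting on integer-valued functions on $\{0,1,2,3\}$, $G$ a point stabilizer, $H=g^{-1}Gg\cap G$, with $g$ chosen to be an involution so that the usual $g$ versus $g^{-1}$ convention issue disappears, and with the convention that the double coset sends $m\in M^G$ to $\sum_i\alpha_i^{-1}m$ for a left action of $\Delta$ on $M$:

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

def act(p, m):                           # (p·m)(x) = m(p^{-1} x)
    q = inverse(p)
    return tuple(m[q[x]] for x in range(4))

S4 = set(permutations(range(4)))
G  = {p for p in S4 if p[3] == 3}                            # G = stab(3)
g  = (0, 1, 3, 2)                                            # involution (2 3)
H  = {compose(inverse(g), compose(x, g)) for x in G} & G     # g^{-1}Gg ∩ G
gH = {compose(g, compose(x, inverse(g))) for x in H}         # gHg^{-1}

def coset_reps(big, small):              # left-coset representatives
    reps, remaining = [], set(big)
    while remaining:
        r = remaining.pop()
        reps.append(r)
        remaining -= {compose(r, s) for s in small}
    return reps

def vec_sum(vs):
    return tuple(sum(col) for col in zip(*vs))

m = (1, 1, 1, 7)                         # an element of M^G
assert all(act(p, m) == m for p in G)

# cores ∘ Ad_g ∘ res :  M^G -> M^H -> M^{gHg^{-1}} -> M^G
ad = act(g, m)                                               # Ad_g(res m)
T1 = vec_sum([act(r, ad) for r in coset_reps(G, gH)])        # corestriction

# Double coset action:  m | GgG = sum_i alpha_i^{-1} m  over  GgG = ⊔ G·alpha_i
GgG = {compose(x, compose(g, y)) for x in G for y in G}
alphas, remaining = [], set(GgG)
while remaining:
    a = remaining.pop()
    alphas.append(a)
    remaining -= {compose(x, a) for x in G}
T2 = vec_sum([act(inverse(a), m) for a in alphas])

print(T1, T2)                            # (9, 9, 9, 3) (9, 9, 9, 3)
```

Both computations give the same $G$-invariant element, as the description of $T_{g,X}$ as $\cores\circ Ad_g\circ\res$ suggests they should.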
We conclude this section by explaining how to compute $\mathcal{T}_{g,X}$ directly from the Bredon chain complex. First note that if \begin{eqnarray*} \underline{C_n(X)}\otimes_{\mathfrak{F}}\mathcal{R} & \cong & \bigoplus_\alpha R(S_\alpha), \end{eqnarray*} then \begin{eqnarray*} \underline{C_n(G/H\times X)}\otimes_{\mathfrak{F}}\mathcal{R} & \cong & \bigoplus_\alpha \bigoplus_{\chi\in G/H}R(\chi H\chi^{-1}\cap S_\alpha). \end{eqnarray*} The morphism \begin{eqnarray*} \res \, : \, \cH_n^G(X;\mathcal{R}) & \longrightarrow & \cH_n^G(G/H\times X;\mathcal{R})\;\cong\;\cH_n^H(X;\mathcal{R}) \end{eqnarray*} is induced by the restriction morphism on the representation ring \begin{eqnarray*} \bigoplus_\alpha R(S_\alpha) & \longrightarrow & \bigoplus_\alpha \bigoplus_{\chi\in G/H}R(\chi H\chi^{-1}\cap S_\alpha). \end{eqnarray*} The morphism \begin{eqnarray*} Ad_g:\cH_n^H(X;\mathcal{R})& \longrightarrow & \cH_n^{gHg^{-1}}(X;\mathcal{R}) \end{eqnarray*} is induced by the isomorphism \begin{eqnarray*} \bigoplus_\alpha \bigoplus_{\chi\in G/H}R(\chi H\chi^{-1}\cap S_\alpha) & \longrightarrow & \bigoplus_\alpha \bigoplus_{\chi\in G/gHg^{-1}}R(\chi g Hg^{-1}\chi^{-1}\cap S_\alpha). \end{eqnarray*} Finally, the morphism \begin{eqnarray*} \cores \, : \, \cH_n^{gHg^{-1}}(X;\mathcal{R}) \;\cong\; \cH_n^G(G/gHg^{-1}\times X;\mathcal{R}) & \longrightarrow & \cH_n^G(X;\mathcal{R}) \end{eqnarray*} is induced by the induction morphism on the representation ring \begin{eqnarray*} \bigoplus_\alpha \bigoplus_{\chi\in G/gHg^{-1}}R(\chi g Hg^{-1}\chi^{-1}\cap S_\alpha) & \longrightarrow & \bigoplus_\alpha R(S_\alpha). \end{eqnarray*} For Bredon cohomology the Hecke operator can be described in a similar way. As we will see in the next section, these three morphisms can also be computed directly in the Bredon homology of $X$ as an $H$-space. § COMPUTATIONS FOR BIANCHI GROUPS In this section we specialize to the case where $G$ is a Bianchi group. We use Thm. <ref> and Thm.
<ref> in order to explicitly compute the action of the operators $T_{g,\pt}$ for the group $\Gamma_1 = \PSL_{2}(\Z[i])$ for elements $g \in \PGL_2(\cO_{\Q(i)})$ associated to primes in $\Z[i]$. As will be seen below, these techniques apply to other Bianchi groups as well. Let $D$ be a positive square-free integer, and let $\cO_{D } = \cO_{\Q(\sqrt{-D})}$ be the ring of integers of the imaginary quadratic extension $\Q(\sqrt{-D})$. The Bianchi group associated to $D$ is defined as \begin{eqnarray*} \Gamma_{D} &= & \PSL_{2}(\cO_{D }) \, = \, \SL_{2}(\cO_{\Q(\sqrt{-D})})/\{\pm I\}. \end{eqnarray*} We can describe the rings $\cO_D$ explicitly in terms of the discriminant of the quadratic field $\Q(\sqrt{-D})$. Let $\delta=\sqrt{-D}$ and $\eta=\frac{1}{2}(1+\delta)$; we have $$\cO_D = \Z[\delta]\quad\mbox{for}\quad D \equiv1,2\!\!\mod 4,\quad\mbox{and}$$ $$\cO_D =\Z[\eta]\quad\mbox{for}\quad D \equiv3\!\!\mod 4.$$ For a proof see <cit.>. For $D = 1, \, 2,\, 3,\, 7, \, 11$ the ring $\cO_{D }$ is a Euclidean domain. For these values of $D$ we will refer to the corresponding Bianchi groups $\Gamma_{D}$ as Euclidean Bianchi groups. In general, except for $D=3$, Bianchi groups can be described as amalgamated products. The amalgam decompositions for the Euclidean Bianchi groups are described below.
We have \begin{eqnarray*} \Gamma_{1} & \cong & \big(A_{4}\ast_{C_{3}}S_{3}\big)\ast_{\PSL_{2}(\Z)}\big(S'_{3}\ast_{C_{2}}D_{2}\big); \end{eqnarray*} \begin{eqnarray*} \Gamma_{2} & \cong & G_{1}\ast_{(\Z*\,C_{2})}G_{2}, \end{eqnarray*} where $G_{1}$ is the HNN extension of $C_{2}\times C_{2}$ associating two generators and $G_{2}$ is the HNN extension of $A_{4}$ associating two $3$-cycles; \begin{eqnarray*} \Gamma_{7} & \cong & \big(\Z\ast\,C_{2}\big)\ast_{(\Z\ast C_{2}\ast C_{2})}G, \end{eqnarray*} where $G$ is the HNN extension of $S_{3}*_{C_{2}}S_{3}$ associating a $3$-cycle with itself; and \begin{eqnarray*} \Gamma_{11} & \cong & \big(\Z\ast\,C_{3}\big)\ast_{(\Z\ast C_{3}\ast C_{3})}G, \end{eqnarray*} where $G$ is the HNN extension of $A_{4}*_{C_{3}}A_{4}$ associating a $3$-cycle with itself. For a proof of the above facts the reader may consult [7]. We will now focus on $\Gamma_1$ and the explicit computation of Hecke operators associated to it. §.§ The group $\Gamma_1 = \PSL_{2}(\Z[i])$ From the above proposition we have an isomorphism $\Gamma_{1}\cong\big(A_{4}\ast_{C_{3}}S_{3}\big)\ast_{\PSL_{2}(\Z)}\big(S'_{3}\ast_{C_{2}}D_{2}\big)$, where for a group $G$ the notation $G'$ denotes an isomorphic copy of $G$, with $\PSL_{2}(\Z)=C'_{3}\ast C'_{2}$ and the intersections $A_{4}\cap S'_{3}=C'_{3}$, $A_{4}\cap D_{2}=\{1\}$, $S_{3}\cap S'_{3}=\{1\}$, and $S_{3}\cap D_{2}=C'_{2}$.
In fact, we can obtain the presentation $$\Gamma_{1}=\langle\; \mathbf{a,b,c,d} \;|\; \mathbf{a}^{3}=\mathbf{b}^{2}=\mathbf{c}^{3}=\mathbf{d}^{2}=(\mathbf{ac})^{2}=(\mathbf{ad})^{2}=(\mathbf{bd})^{2}=(\mathbf{bc})^{2}=1\;\rangle,$$ with $A_4=\langle \mathbf{a,c}\rangle$, $S_3=\langle \mathbf{a,d}\rangle$, $D_2=\langle \mathbf{b,d}\rangle$, and $S_3'=\langle \mathbf{b,c}\rangle$, so that $C_3=\langle \mathbf{a}\rangle$, $C_2=\langle \mathbf{b}\rangle$, $C'_3=\langle \mathbf{c}\rangle$, and $C'_2=\langle \mathbf{d}\rangle$, and explicit matrices that represent the generators, namely $$\mathbf{a}=\begin{pmatrix} 0 & i \\ i & 1\end{pmatrix},\;\; \mathbf{b}=\begin{pmatrix} 0 & i \\ i & 0\end{pmatrix},\;\; \mathbf{c}=\begin{pmatrix} 1 & 1 \\ -1 & 0\end{pmatrix},\;\;\mbox{and }\;\; \mathbf{d}=\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}.$$ §.§ Classifying space for proper actions Using the above presentation and explicit generators, we can construct a 2-dimensional $\Gamma_1$-CW-complex $X$, which is a model for $\uE\Gamma_1$. Let $$X^{(0)}=\Gamma_{1}/A_{4}\times\{p\}\;\bigsqcup\; \Gamma_{1}/S_{3}\times\{q\}\;\bigsqcup\; \Gamma_{1}/D_{2}\times\{r\}\;\bigsqcup\; \Gamma_{1}/S'_{3}\times\{s\},$$ where each $\mathbb{D}^0$ (point) has been labeled with a letter. 
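As a quick sanity check (ours, not in the text), the relators of this presentation can be verified on the explicit matrices above: each relator should give $\pm I$ in $\SL_2(\Z[i])$, hence the identity in $\PSL_2(\Z[i])$. Gaussian integers are represented exactly below by Python complex numbers with small integer parts:

```python
# Verify the relators of the presentation of Gamma_1 on the explicit
# matrices a, b, c, d above: each relator should equal I or -I in SL_2(Z[i]).
I1 = 1j   # the Gaussian integer i; all arithmetic below stays exact

def mul(M, N):
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

def power(M, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = mul(R, M)
    return R

def is_pm_identity(M):
    return M in ([[1, 0], [0, 1]], [[-1, 0], [0, -1]])

a = [[0, I1], [I1, 1]]
b = [[0, I1], [I1, 0]]
c = [[1, 1], [-1, 0]]
d = [[0, -1], [1, 0]]

relators = [power(a, 3), power(b, 2), power(c, 3), power(d, 2),
            power(mul(a, c), 2), power(mul(a, d), 2),
            power(mul(b, d), 2), power(mul(b, c), 2)]
det = lambda M: M[0][0]*M[1][1] - M[0][1]*M[1][0]

print(all(is_pm_identity(R) for R in relators))   # True
print(all(det(M) == 1 for M in (a, b, c, d)))     # True: all lie in SL_2(Z[i])
```

For instance $\mathbf{a}^3=\mathbf{b}^2=-I$, which is the identity in $\PSL_2(\Z[i])$.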
The $1$-skeleton is obtained from the pushout $$\begin{tikzpicture}[scale=1, decoration={markings, mark=at position 1 with {\arrow{stealth}}}] \node at (0,2) {$\Gamma_{1}/C_{3}\times\mathbb{S}^0\,\bigsqcup\, \Gamma_{1}/C'_{2}\times\mathbb{S}^0\,\bigsqcup\, \Gamma_{1}/C_{2}\times\mathbb{S}^0\,\bigsqcup\, \Gamma_{1}/C'_{3}\times\mathbb{S}^0$}; \node at (8,2.1) {$X^{(0)}$}; \node at (0,0) {$\Gamma_{1}/C_{3}\times\mathbb{D}^1\,\bigsqcup\, \Gamma_{1}/C'_{2}\times\mathbb{D}^1\,\bigsqcup\, \Gamma_{1}/C_{2}\times\mathbb{D}^1\,\bigsqcup\, \Gamma_{1}/C'_{3}\times\mathbb{D}^1$}; \node at (8,0.1) {$X^{(1)}$}; %\node at (0.6,2.07) {\scriptsize $\subset$}; \draw[postaction={decorate}] (5.2,2) -- (7.3,2); \node at (6,2.3) {$\varphi$}; \draw[postaction={decorate}] (0,1.5) -- (0,0.6); \node at (-0.8,1.1) {\footnotesize inclusion}; \draw[postaction={decorate}] (8,1.5) -- (8,0.6); \draw[postaction={decorate}] (5.2,0) -- (7.3,0); \end{tikzpicture}$$ so that $X^{(1)}$ is the union of $X^{(0)}$ and many copies of $\mathbb{D}^1$, identifying the image by $\varphi$ and the inclusion, respectively, of many copies of $\mathbb{S}^0$. Writing each copy of $\mathbb{S}^0$ as two ordered points $\{-1,1\}$ and denoting a point in $X^{(0)}$ just as the coset, the map $\varphi$ is defined as follows. For any $\gamma\in\Gamma_1$, \begin{eqnarray*} \varphi:& \gamma C_{3}\times\{-1,1\}\mapsto \{\gamma A_{4}\,,\,\gamma S_{3}\},\\ &\gamma C'_{2}\times\{-1,1\}\mapsto \{\gamma S_{3}\,,\,\gamma D_{2}\},\\ &\gamma C_{2}\times\{-1,1\}\mapsto \{\gamma D_{2}\,,\,\gamma S'_{3}\},\\ &\gamma C'_{3}\times\{-1,1\}\mapsto \{\gamma S'_{3}\,,\,\gamma A_{4}\}. \end{eqnarray*} This means that we will add a segment between two points whenever their corresponding cosets intersect as a coset of any of the cyclic groups in $\Gamma_1$. Take $P$, $Q$, $R$, $S$ as the trivial cosets of $A_4$, $S_3$, $D_2$, $S'_3$, respectively. 
The space $X^{(1)}$ would begin to look like this: $$\begin{tikzpicture}[scale=1, decoration={markings, mark=at position 1 with {\arrow{stealth}}}] \node at (0,2) {$\bullet$}; \node at (0.3,1.7) {\footnotesize $P$}; \node at (2,2) {$\bullet$}; \node at (1.7,1.7) {\footnotesize $Q$}; \node at (0,0) {$\bullet$}; \node at (0.3,0.3) {\footnotesize $S$}; \node at (2,0) {$\bullet$}; \node at (1.7,0.3) {\footnotesize $R$}; \draw (0,2) -- (2,2); \draw (0,0) -- (0,2); \draw (2,2) -- (2,0); \draw (0,0) -- (2,0); \draw (0,2) -- (0.517,3.932); \node at (0.517,3.932) {$\bullet$}; \node at (0.7,4.3) {\footnotesize $Q_1$}; \draw (0,2) -- (0,4); \node at (0,4) {$\bullet$}; \node at (0,4.4) {\footnotesize $Q_2$}; \draw (0,2) -- (-0.517,3.932); \node at (-0.517,3.932) {$\bullet$}; \node at (-0.7,4.3) {\footnotesize $Q_3$}; \draw (2,2) -- (2,4); \node at (2,4) {$\bullet$}; \node at (2,4.4) {\footnotesize $P'_1$}; \draw (0,2) -- (-1.932,1.482); \node at (-1.932,1.482) {$\bullet$}; \node at (-2.3,1.3) {\footnotesize $S'_1$}; \draw (0,2) -- (-2,2); \node at (-2,2) {$\bullet$}; \node at (-2.4,2) {\footnotesize $S'_2$}; \draw (0,2) -- (-1.932,2.518); \node at (-1.932,2.518) {$\bullet$}; \node at (-2.3,2.7) {\footnotesize $S'_3$}; \draw (0,0) -- (-2,0); \node at (-2,0) {$\bullet$}; \node at (-2.4,0) {\footnotesize $P_1$}; \draw (2,2) -- (3.932,1.482); \node at (3.932,1.482) {$\bullet$}; \node at (4.3,1.3) {\footnotesize $R_1$}; \draw (2,2) -- (4,2); \node at (4,2) {$\bullet$}; \node at (4.4,2) {\footnotesize $R_2$}; \draw (2,0) -- (4,0); \node at (4,0) {$\bullet$}; \node at (4.4,0) {\footnotesize $Q'_1$}; \draw (0,0) -- (0.517,-1.932); \node at (0.517,-1.932) {$\bullet$}; \node at (0.7,-2.3) {\footnotesize $R'_1$}; \draw (0,0) -- (0,-2); \node at (0,-2) {$\bullet$}; \node at (0,-2.4) {\footnotesize $R'_2$}; \draw (2,0) -- (2,-2); \node at (2,-2) {$\bullet$}; \node at (2,-2.4) {\footnotesize $S_1$}; \end{tikzpicture}$$ The lines $PQ_i$, $i=1,2,3$, come from the cosets $\mathbf{c}C_3$, 
$\mathbf{c}^2 C_3$, and $\mathbf{ac}^2 C_3$, respectively. There are no more cosets of $S_3$ connected to $A_4$. It continues similarly. Finally, we add a 2-cell, filling the square: $$\begin{tikzpicture}[scale=1, decoration={markings, mark=at position 1 with {\arrow{stealth}}}] \node at (0,2) {$\Gamma_1/\{1\}\times \mathbb{S}^{1}$}; \node at (5,2) {$X^{(1)}$}; \node at (0,0) {$\Gamma_1/\{1\}\times \mathbb{D}^{2}$}; \node at (5.5,0) {$X^{(2)}=X$}; \draw[postaction={decorate}] (1.3,2) -- (4.4,2); %\node at (3,2.6) {$\bigsqcup_{i\in I_n}q^{n}_{i}$}; \draw[postaction={decorate}] (0,1.6) -- (0,0.5); \draw[postaction={decorate}] (5,1.6) -- (5,0.5); \draw[postaction={decorate}] (1.3,0) -- (4.5,0); \end{tikzpicture}.$$ This space is proper since all the isotropy groups are finite: indeed, $X$ can be thought of as the space obtained from a single square by the action of $\Gamma_1$, with the isotropy groups shown below. $$\begin{tikzpicture}[scale=1, decoration={markings, mark=at position 1 with {\arrow{stealth}}}] \draw[fill=gray!15] (0,0) rectangle (2,2); \node at (0,2) {$\bullet$}; \node at (-0.3,2.3) {\small $A_4$}; \node at (2,2) {$\bullet$}; \node at (2.3,2.3) {\small $S_3$}; \node at (0,0) {$\bullet$}; \node at (-0.3,-0.3) {\small $S'_3$}; \node at (2,0) {$\bullet$}; \node at (2.3,-0.3) {\small $D_2$}; \draw (0,2) -- (2,2); \node at (1,2.25) {\footnotesize $C_3$}; \draw (0,0) -- (0,2); \node at (-0.3,1) {\footnotesize $C'_3$}; \draw (2,2) -- (2,0); \node at (2.3,1) {\footnotesize $C'_2$}; \draw (0,0) -- (2,0); \node at (1,-0.25) {\footnotesize $C_2$}; \end{tikzpicture}$$ Also, $X$ is indeed a model for $\underline{E}\Gamma_1$, since for every finite subgroup $H$ of $\Gamma_1$ the fixed space $X^H$ is weakly contractible. An alternative description of this space can be found in [8] and [18].
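Assuming the generator matrices given with the presentation above, the orders of the isotropy groups in the square (and hence the local picture of $X^{(1)}$, where the number of edges of a given type at a vertex with stabilizer $S$ is the index of the corresponding edge stabilizer in $S$) can be verified by closing each generating set under multiplication modulo $\pm I$. A brute-force sketch:

```python
# Close the subgroups of PSL_2(Z[i]) generated by the matrices a, b, c, d
# under multiplication mod ±I and check the isotropy group orders; the edge
# counts in X^(1) are then the indices [vertex stabilizer : edge stabilizer].

def mul(M, N):
    return tuple(tuple(sum(M[r][k] * N[k][s] for k in range(2)) for s in range(2))
                 for r in range(2))

def normalize(M):                        # canonical representative of {M, -M}
    N = tuple(tuple(-z for z in row) for row in M)
    key = lambda P: [(z.real, z.imag) for row in P for z in row]
    return min(M, N, key=key)

def closure(gens):                       # subgroup generated, mod ±I
    group = {normalize(h) for h in gens}
    frontier = set(group)
    while frontier:
        new = {normalize(mul(x, y)) for x in frontier for y in group}
        new |= {normalize(mul(x, y)) for x in group for y in frontier}
        frontier = new - group
        group |= frontier
    return group

i = 1j
a = ((0, i), (i, 1)); b = ((0, i), (i, 0))
c = ((1, 1), (-1, 0)); d = ((0, -1), (1, 0))

A4, S3, D2, S3p = closure([a, c]), closure([a, d]), closure([b, d]), closure([b, c])
C3, C2, C3p, C2p = closure([a]), closure([b]), closure([c]), closure([d])

print([len(S) for S in (A4, S3, D2, S3p)])        # [12, 6, 4, 6]
print(len(A4) // len(C3), len(A4) // len(C3p))    # 4 4: edges P-Q_i and P-S'_i
print(len(S3) // len(C2p), len(S3p) // len(C2))   # 3 3: edges Q-R_i and S-R'_i
```

The indices match the edge counts visible in the picture of $X^{(1)}$ above: four $Q$-type and four $S$-type neighbours of $P$, three $R$-type neighbours of $Q$ and of $S$.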
§.§ Bredon cohomology for $\Gamma_{1}$ In order to compute the Hecke operators $T_{g,\pt}$ we start by computing the Bredon (co)homology groups of $\Gamma_{1}$ with coefficients in the representation ring. We will later use these groups together with Thm. <ref> to obtain computations in equivariant $K$-homology. The Bredon cochain complex with coefficients in the representation ring <ref> for the group $\Gamma_{1}$ and the space $X=\underline{E}\Gamma_1$ has the form $$0\longrightarrow \bigoplus_{\substack{ \alpha \\ 0\text{-cells} }} R(S_{\alpha}) \xrightarrow{\;d^0\;} \bigoplus_{\substack{ \alpha \\ 1\text{-cells} }} R(S_{\alpha}) \xrightarrow{\;d^1\;} \bigoplus_{\substack{ \alpha \\ 2\text{-cells} }} R(S_{\alpha}) \longrightarrow0,$$ where the sum runs over representatives of $n$-cells, the $S_\alpha$ are the corresponding stabilizers, and the differentials are given by restriction of representations. We know that $$R(A_{4})\cong\Z^4,\quad R(S_3)\cong R(S'_{3})\cong \Z^3,\quad R(D_2)\cong\Z^4,\quad\mbox{and}\quad R(C_n)\cong\Z^n,$$ so the cochain complex becomes $$0\longrightarrow \Z^{4+3+4+3} \xrightarrow{\quad d^0\quad} \Z^{3+2+2+3} \xrightarrow{\quad d^1\quad} \Z \longrightarrow0.$$ Here, $d^1$ is represented by the matrix $(\;1\;1\;1\;1\;1\;1\;1\;1\;1\;1\;),$ of rank $1$, and $d^0$ by the matrix $$\left(\begin{array}{cccccccccccccc} -1 & 0 & 0 & -1 & 1 & 1 & 0 & & & & & & & \\ 0 & -1 & 0 & -1 & 0 & 0 & 1 & & & & & & & \\ 0 & 0 & -1 & -1 & 0 & 0 & 1 & & & & & & & \\ \hdashline & & & & -1 & 0 & -1 & 1 & 1 & 0 & 0 & & & \\ & & & & 0 & -1 & -1 & 0 & 0 & 1 & 1 & & & \\ \hdashline & & & & & & & -1 & 0 & -1 & 0 & 1 & 0 & 1 \\ & & & & & & & 0 & -1 & 0 & -1 & 0 & 1 & 1 \\ \hdashline 1 & 0 & 0 & 1 & & & & & & & & -1 & -1 & 0 \\ 0 & 0 & 1 & 1 & & & & & & & & 0 & 0 & -1 \\ 0 & 1 & 0 & 1 & & & & & & & & 0 & 0 & -1 \end{array}\right),$$ of rank $8$ (blank spaces stand for blocks of zeros). We obtain $$\mathcal{H}_{\Gamma_1}^{n}(X;\mathcal{R}) \cong \begin{cases} \Z^6, & n=0; \\ \Z, & n=1; \\ 0, & n\geq2.
\end{cases}$$ Bredon homology is computed in a similar way. We have $$\mathcal{H}_n^{\Gamma_1}(X;\mathcal{R}) \cong \begin{cases} \Z^6, & n=0; \\ \Z, & n=1; \\ 0, & n\geq2. \end{cases}$$ §.§ Hecke correspondences for prime level congruence subgroups We now study Hecke operators coming from congruence subgroups associated to primes in $\Z[i]$. We begin by reviewing these. The group of units of the ring of Gaussian integers $\Z[i]$ is given by $\{1, -1, i, -i \}$. Any nonzero prime ideal in $\Z[i]$ is of the form $(\pi)$ where $\pi$ is an irreducible element of $\Z[i]$. Up to units, these prime elements are the following: * $\pi = 1 + i$, * $\pi = p$ for a prime $p$ in $\Z$ with $p \equiv 3 \mod 4$, * $ \pi = a+ib$, with $a^2+b^2=p$ for a prime $p$ in $\Z$ with $p \equiv 1 \mod 4$. Notice that $(2) = (1 + i)^2$ and that this is the only prime which ramifies in $\Z[i]$. This leads to exceptional behavior of the prime $1+i$. One instance of this exceptional behavior will be seen in the computations for the classifying spaces for proper actions and the isotropy groups arising in the corresponding Hecke operators. Fix now a prime $p$ in $\Z[i]$. Let $g$ be the class \begin{eqnarray*} \begin{pmatrix} p & 0 \\ 0 & 1\end{pmatrix} \end{eqnarray*} in $\PGL_2\left(\Q(i)\right) = \mathrm{Comm}_{\PGL_{2}(\mathbb{C})}( \Gamma_{1} )$. As above, cf. <ref>, in order to define the Hecke operator $T_g$ associated to the double coset $\Gamma_1 g \Gamma_1$, we have to consider the Hecke correspondence determined by the congruence subgroup \begin{eqnarray*} K & = & \Gamma_1 \cap g^{-1}\Gamma_1 g. \end{eqnarray*} We start by describing this subgroup explicitly.
For this, notice that if $\gamma$ is the class of a matrix $\begin{pmatrix} a & b \\ c & d\end{pmatrix}$ in $\PGL_{2}(\mathbb{C})$, we have $$g^{-1}\gamma g=\begin{pmatrix} 1/p & 0 \\ 0 & 1\end{pmatrix}\begin{pmatrix} a & b \\ c & d\end{pmatrix}\begin{pmatrix} p & 0 \\ 0 & 1\end{pmatrix} = \begin{pmatrix} a & b/p \\ pc & d\end{pmatrix}.$$ This means that (the class of) any matrix in $K$ is of this form. Then $$K=\left\{\begin{pmatrix} a & b \\ c & d\end{pmatrix}\in\Gamma_1 \quad:\quad c\in p\cdot \Z[i] \right\},$$ so $K$ is the congruence subgroup $$K=\left\{\gamma\in\Gamma_1 \quad:\quad \gamma\equiv\begin{pmatrix} \cdot & \cdot \\ 0 & \cdot\end{pmatrix} \mbox{ mod }p \right\}.$$ To compute the index of $K$ in $\Gamma_1$ we use the following lemma. Here the notation $\widetilde{\text{P}}\SL_2(\mathbb{F})$ stands for $\SL_2(\mathbb{F})/\{\pm I\}$ (without the tilde it would usually denote the quotient by the whole center of the group). For a field with $q$ elements, $\mathbb{F}_q$, the order of the group $\widetilde{\emph{P}}\SL_2(\mathbb{F}_q)$ is $q(q^2-1)$ if $q$ is even and $q(q^2-1)/2$ if $q$ is odd. The group $\SL_2(\mathbb{F}_q)$ is the kernel of the surjective homomorphism $$\mbox{det}:\GL_2(\mathbb{F}_q)\longrightarrow \mathbb{F}_q^\ast,\quad \mbox{so} \quad |\SL_2(\mathbb{F}_q)|=|\GL_2(\mathbb{F}_q)|/|\mathbb{F}_q^\ast|=|\GL_2(\mathbb{F}_q)|/(q-1).$$ The order of $\GL_2(\mathbb{F}_q)$ is equal to the number of ordered bases of $\mathbb{F}_q^2$ over $\mathbb{F}_q$ (there is a non-singular matrix for every pair of linearly independent vectors in $\mathbb{F}_q^2$), which is the number of non-zero vectors in $\mathbb{F}_q^2$ times the number of vectors which are not a multiple of the first one, that is, $(q^2 -1)(q^2 -q)$. Then $|\SL_2(\mathbb{F}_q)|=q(q^2 -1)$. 
Finally, since $\widetilde{\text{P}}\SL_2(\mathbb{F}_q)=\SL_2(\mathbb{F}_q)/\{\pm I\}$, we divide by $2$ when $q$ is odd and we do not when $q$ is even, because the characteristic of $\mathbb{F}_q$ is $2$, so $I=-I$. It is known that the quotient $\Z[i]/p$ is a field and is isomorphic to $\mathbb{F}_{|p|}$, where $|p|$ is the norm of $p$ in $\Z[i]$ (the square of its absolute value as a complex number). Consider the surjective homomorphism $$\pi:\Gamma_1=\PSL_2(\Z[i])\longrightarrow \widetilde{\text{P}}\SL_2(\Z[i]/p).$$ The kernel of $\pi$ is the group of matrices that are the identity modulo $p$; the index of this subgroup in $\Gamma_1$ is equal to the size of $\widetilde{\text{P}}\SL_2(\mathbb{F}_{|p|})$. Since Ker$(\pi)$ is contained in the group $K$, we have $$(\Gamma_1:K)=\dfrac{(\Gamma_1:\mbox{Ker}(\pi))}{(K:\mbox{Ker}(\pi))} = \dfrac{|\widetilde{\text{P}}\SL_2(\mathbb{F}_{|p|})|}{(K:\mbox{Ker}(\pi))}.$$ Besides, the index $(K:\mbox{Ker}(\pi))$ is equal to the size of the quotient group $$K\,/\,\mbox{Ker}(\pi)\cong \left\{\begin{pmatrix} a & b \\ 0 & a^{-1}\end{pmatrix}\in\widetilde{\text{P}}\SL_2(\mathbb{F}_{|p|})\right\},$$ and the size of this group is $|p|(|p|-1)$, if $|p|$ is even, or $|p|(|p|-1)/2$, if $|p|$ is odd. (As before, we do not divide by $2$ when $|p|$ is even because $\mathbb{F}_{|p|}$ has characteristic $2$.) Thus, we obtain that $$(\Gamma_1:K)=\dfrac{|p|(|p|^2 -1)}{|p|(|p|-1)}=|p|+1.$$ Furthermore, we can give (left and right) coset representatives for $\Gamma_1$ modulo $K$. There are $|p|$ cosets represented by the matrices $$\gamma_{z}=\begin{pmatrix} 1 & 0 \\ z & 1\end{pmatrix}, \quad \mbox{with $z$ as representatives of} \quad \Z[i]/p \cong \mathbb{F}_{|p|},$$ and the last is given by the matrix $\sigma=\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}$. §.§.§ Bredon (co)homology of the congruence subgroup $K$ Now, we will compute the Bredon (co)homology of $\underbar{E}K$. 
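Before doing so, the order formula for $\SL_2(\mathbb{F}_q)$ and the index computation $(\Gamma_1:K)=|p|+1$ admit a quick brute-force check; the latter is realized here as the index of the upper-triangular subgroup of $\SL_2(\mathbb{F}_q)$, which is the image of $K$ modulo $p$. A minimal sketch (the code is ours, not from the paper):

```python
# Brute-force check (ours, not from the paper) of the order formula
# |SL_2(F_q)| = q(q^2 - 1) and of the index (Gamma_1 : K) = |p| + 1,
# the latter realized as the index of the upper-triangular subgroup.
from itertools import product

def sl2(q):
    """All (a, b, c, d) over Z/q with ad - bc = 1 mod q (q a prime)."""
    return [(a, b, c, d) for a, b, c, d in product(range(q), repeat=4)
            if (a * d - b * c) % q == 1]

for q in (2, 3, 5):
    G = sl2(q)
    assert len(G) == q * (q**2 - 1)
    # the reduction of K mod p lands in the upper-triangular matrices (c = 0)
    B = [(a, b, c, d) for (a, b, c, d) in G if c == 0]
    assert len(G) % len(B) == 0 and len(G) // len(B) == q + 1
```

For $q=2$ this recovers the case $p=1+i$, $|p|=2$, where the index is $3$ as above; passing to the quotient by $\{\pm I\}$ divides the orders of both the group and the subgroup by the same factor, so the index is unchanged.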
First, note that since $K$ is a subgroup of $\Gamma_1$, we can take $X=\underline{E}\Gamma_1$ as a model for $\underline{E}K$. Then, we need $K$-orbit representatives for the $n$-cells in $X$. We can start from the right coset partition $$\Gamma_1 = \bigsqcup_{K\gamma\,\in\, K\backslash\Gamma_1} K\gamma.$$ Note that for any cell $e\subset X$, the $\Gamma_1$-orbit of $e$ splits into the union of some $K$-orbits, $$\Gamma_1\cdot e = \bigcup_{K\gamma\,\in\, K\backslash\Gamma_1} K\gamma\cdot e,$$ and, after omitting repetitions, the union is disjoint (apart from the boundaries). To count these repetitions, it suffices to determine whether there exists $k\in K$ such that $$\gamma^{-1}k\,\gamma' \in\mbox{Stab}_{\Gamma_1}(e) \quad \mbox{for two distinct representatives $\gamma$, $\gamma'$ of $K\backslash\Gamma_1$},$$ in which case the $K$-orbits of $\gamma e$ and $\gamma'e$ are the same. In the following paragraphs we focus on the case $p=1+i$, carrying out the corresponding computations in full. For other primes, an algorithm in GAP has been developed by the first author (cf. [16]). We have that $(\Gamma_1:K)=|1+i|+1=3$ and $$\Gamma_1 = K\gamma_0 \;\sqcup\; K\gamma_1 \;\sqcup\; K\sigma = K \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \;\sqcup\; K \begin{pmatrix} 1 & 0 \\ 1 & 1\end{pmatrix} \;\sqcup\; K \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}.$$ First we search for repeated $K$-orbits. 
These are all we need: $$\begin{aligned} \gamma_0^{-1} \begin{pmatrix} -i & i \\ i-1 & 1\end{pmatrix} \gamma_1 = \begin{pmatrix} 0 & i \\ i & 1\end{pmatrix} &= \; \mathbf{a}, & \qquad \sigma^{-1} \begin{pmatrix} i & 1 \\ 0 & -i\end{pmatrix} \gamma_0 = \begin{pmatrix} 0 & i \\ i & 1\end{pmatrix} &= \; \mathbf{a}, \\ \gamma_0^{-1} \begin{pmatrix} 1 & -1 \\ 0 & 1\end{pmatrix} \gamma_1 = \begin{pmatrix} 0 & -1 \\ 1 & 1\end{pmatrix} &= \; \mathbf{c}^2, & \sigma^{-1} \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \gamma_1 = \begin{pmatrix} 1 & 1 \\ -1 & 0\end{pmatrix} &= \; \mathbf{c}, \\ \sigma^{-1} \begin{pmatrix} i & 0 \\ 0 & -i\end{pmatrix} \gamma_0 = \begin{pmatrix} 0 & i \\ i & 0\end{pmatrix} &= \; \mathbf{b}, & \sigma^{-1} \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \gamma_0 = \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix} &= \; \mathbf{d}, \\ \gamma_0^{-1} \begin{pmatrix} i & 1 \\ 0 & -i\end{pmatrix} \gamma_1 = \begin{pmatrix} 1+i & 1 \\ -i & -i\end{pmatrix} &= \; \mathbf{a}^2\mathbf{c}, & \sigma^{-1} \begin{pmatrix} i & i \\ 1+i & 1\end{pmatrix} \gamma_0 = \begin{pmatrix} 1+i & 1 \\ -i & -i\end{pmatrix} &= \; \mathbf{a}^2\mathbf{c}, \\ \gamma_0^{-1} \begin{pmatrix} i & 0 \\ 1+i & -i\end{pmatrix} \gamma_1 = \begin{pmatrix} i & 0 \\ 1 & -i\end{pmatrix} &= \; \mathbf{ad}, & \sigma^{-1} \begin{pmatrix} -i & i \\ i-1 & 1\end{pmatrix} \gamma_1 = \begin{pmatrix} i & 1 \\ 0 & -i\end{pmatrix} &= \; \mathbf{a}^2\mathbf{d}, \\ \gamma_0^{-1} \begin{pmatrix} -i & 0 \\ 0 & i\end{pmatrix} \gamma_1 = \begin{pmatrix} -i & 0 \\ i & i\end{pmatrix} &= \; \mathbf{bc}, & \sigma^{-1} \begin{pmatrix} -i & i \\ 0 & i\end{pmatrix} \gamma_1 = \begin{pmatrix} i & i \\ 0 & -i\end{pmatrix} &= \; \mathbf{b}\mathbf{c}^2. 
\end{aligned}$$ For $0$-cells we have $$\Gamma_1 \cdot P = K \cdot P, \quad \Gamma_1 \cdot Q = K \cdot Q, \quad \Gamma_1 \cdot S = K \cdot S, \quad \mbox{and}$$ $$\Gamma_1 \cdot R = K \cdot R \;\;\sqcup\;\; K\gamma_1 \cdot R = K \cdot R \;\;\sqcup\;\; K \cdot \widetilde{R},$$ with $\widetilde{R}=\gamma_1 R$. For $1$-cells we have $$\Gamma_1 \cdot PQ = K \cdot PQ, \quad \Gamma_1 \cdot QR = K \cdot QR \;\;\sqcup\;\; K \cdot Q\widetilde{R}, $$ $$\Gamma_1 \cdot RS = K \cdot RS \;\;\sqcup\;\; K \cdot \widetilde{R}S, \quad \mbox{and} \quad \Gamma_1 \cdot SP = K \cdot SP,$$ with $Q\widetilde{R}=\gamma_1 QR$ and $\widetilde{R}S = \gamma_1 RS$. Finally, the orbit of the $2$-cell is not repeated, so there are three $2$-cells in the quotient $X/K$. Let $E$ be the 2-cell $PQRS$. The quotient space $X/K$ would look like this: $$\begin{tikzpicture}[scale=1, decoration={markings, mark=at position 1 with {\arrow{stealth}}}] %\draw[fill=gray!15] (0,0) rectangle (2,2); \node at (0,2) {$\bullet$}; \node at (-0.3,2.3) {\small $P$}; \node at (2,2) {$\bullet$}; \node at (2.3,2.3) {\small $Q$}; \node at (0,0) {$\bullet$}; \node at (-0.3,-0.3) {\small $S$}; \node at (2,0) {$\bullet$}; \node at (2.2,-0.2) {\small $R$}; \draw (0,2) -- (2,2); \draw (0,0) -- (0,2); \draw (2,2) -- (2,0); \draw (0,0) -- (2,0); \node at (2.6,-0.6) {$\bullet$}; \node at (2.9,-0.9) {\small $\widetilde{R}$}; \draw (2,2) -- (2.6,-0.6); \draw (0,0) -- (2.6,-0.6); \node at (7,1.3) {\small With two 2-cells $E$ and $\sigma E$ with the same}; \node at (7,0.8) {\small boundary, and one other 2-cell $\gamma_1 E$.}; %\node at (1,2.2) {\footnotesize $C_3$}; %\node at (-0.3,1) {\footnotesize $C'_3$}; %\node at (2.3,1) {\footnotesize $C'_2$}; %\node at (1,-0.25) {\footnotesize $C_2$}; \end{tikzpicture}$$ The stabilizer of each orbit representative is the intersection between the stabilizer in $\Gamma_1$ and the subgroup $K$, so $$\mbox{Stab}_K(P)=A_4\cap K=\langle\mathbf{ac,ca,ac}^2\mathbf{a}\rangle\cong D_2,$$ 
$$\mbox{Stab}_K(Q)=S_3\cap K=\langle\mathbf{a}^2\mathbf{d}\rangle\cong C_2, \quad \mbox{Stab}_K(R)=D_2\cap K=\langle\mathbf{bd}\rangle\cong C_2,$$ $$\mbox{Stab}_K(S)=S'_3\cap K=\langle\mathbf{bc}^2\rangle\cong C_2,$$ $$\mbox{Stab}_K(\widetilde{R})=\mbox{Stab}_{\Gamma_1}(\gamma_1 R)\cap K=\left(\gamma_1\,\mbox{Stab}_{\Gamma_1}(R)\,\gamma_1^{-1}\right)\cap K$$ $$=\left(\gamma_1\, \langle\bfb,\bfd,\bfb\bfd\rangle \,\gamma_1^{-1}\right)\cap K=\gamma_1\, \langle\bfb,\bfd,\bfb\bfd\rangle \,\gamma_1^{-1}\cong D_2.$$ The remaining stabilizers are trivial, except the following two: $$\mbox{Stab}_K(\widetilde{R}S)=\left(\gamma_1\,\mbox{Stab}_{\Gamma_1}(RS)\,\gamma_1^{-1}\right)\cap K=\langle\gamma_1\,\bfb\,\gamma_1^{-1}\rangle\cap K\cong C_2,$$ $$\mbox{Stab}_K(Q\widetilde{R})=\left(\gamma_1\,\mbox{Stab}_{\Gamma_1}(QR)\,\gamma_1^{-1}\right)\cap K=\langle\gamma_1\,\bfd\,\gamma_1^{-1}\rangle\cap K\cong C_2.$$ The cochain complex becomes $$0\longrightarrow \Z^{4+2+2+4+2}=\Z^{14} \xrightarrow{\quad d^0\quad} \Z^{1+1+2+1+2+1}=\Z^{8} \xrightarrow{\quad d^1\quad} \Z^{3} \longrightarrow0.$$ Here, $d^0$ is represented by the matrix $$\left(\begin{array}{r:rrrr:rr:rr:rrrr:rr} & P & & & & Q & & R & & \widetilde{R} & & & & S & \\ \hdashline PQ & -1 & -1 & -1 & -1 & 1 & 1 & & & & & & & & \\ \hdashline QR & & & & & -1 & -1 & 1 & 1 & & & & & & \\ \hdashline & & & & & -1 & 0 & & & 1 & 1 & 0 & 0 & & \\ Q\widetilde{R} & & & & & 0 & -1 & & & 0 & 0 & 1 & 1 & & \\ \hdashline RS & & & & & & & -1 & -1 & & & & & 1 & 1 \\ \hdashline & & & & & & & & & -1 & 0 & -1 & 0 & 1 & 0 \\ \widetilde{R}S & & & & & & & & & 0 & -1 & 0 & -1 & 0 & 1 \\ \hdashline SP & 1 & 1 & 1 & 1 & & & & & & & & & -1 & -1 \end{array}\right),$$ of rank $6$, and $d^1$ by the matrix $$\left(\begin{array}{r:rrrrrrrr} & PQ & QR & Q\widetilde{R} & & RS & \widetilde{R}S & & SP \\ \hdashline E & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ \sigma E & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ \gamma_1 E & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 1 \end{array}\right),$$ of rank $2$. 
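These ranks, and the cochain identity $d^1\circ d^0=0$, can be double-checked by machine. A minimal Python/SymPy sketch with both matrices transcribed from the displays above (blank entries read as $0$; this verification is ours, not part of the paper's argument):

```python
from sympy import Matrix, zeros

# d^0: Z^14 -> Z^8, rows PQ, QR, QR~ (2), RS, R~S (2), SP;
# columns grouped as P (4), Q (2), R (2), R~ (4), S (2).
d0 = Matrix([
    [-1, -1, -1, -1,  1,  1,  0,  0,  0,  0,  0,  0,  0,  0],  # PQ
    [ 0,  0,  0,  0, -1, -1,  1,  1,  0,  0,  0,  0,  0,  0],  # QR
    [ 0,  0,  0,  0, -1,  0,  0,  0,  1,  1,  0,  0,  0,  0],  # QR~
    [ 0,  0,  0,  0,  0, -1,  0,  0,  0,  0,  1,  1,  0,  0],  # QR~
    [ 0,  0,  0,  0,  0,  0, -1, -1,  0,  0,  0,  0,  1,  1],  # RS
    [ 0,  0,  0,  0,  0,  0,  0,  0, -1,  0, -1,  0,  1,  0],  # R~S
    [ 0,  0,  0,  0,  0,  0,  0,  0,  0, -1,  0, -1,  0,  1],  # R~S
    [ 1,  1,  1,  1,  0,  0,  0,  0,  0,  0,  0,  0, -1, -1],  # SP
])
# d^1: Z^8 -> Z^3, rows E, sigma E, gamma_1 E.
d1 = Matrix([
    [1, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 0, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 1],
])
assert d1 * d0 == zeros(3, 14)            # a cochain complex: d^1 d^0 = 0
assert d0.rank() == 6 and d1.rank() == 2
# free ranks: H^0 = 14 - 6 = 8, H^1 = (8 - 2) - 6 = 0, H^2 = 3 - 2 = 1
```

The rank count recovers the free ranks of the cohomology groups stated next.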
We obtain $$\mathcal{H}_{K}^{n}(X;\mathcal{R}) \cong \begin{cases} \Z^8, & n=0; \\ 0, & n=1, \; n>2; \\ \Z, & n=2. \end{cases}$$ As before, the homology is computed in a similar way. We have $$\mathcal{H}_n^K(X;\mathcal{R}) \cong \begin{cases} \Z^8, & n=0; \\ 0, & n=1, \; n>2; \\ \Z, & n=2. \end{cases}$$ §.§ The Hecke operator $T_g$ in Bredon homology via the subgroup $K$ §.§.§ Restriction As we explained above, the morphism at the level of chain complexes is given by restriction of representations of the isotropy groups of each cell. We obtain the following commutative diagram. $$\begin{tikzpicture}[scale=1, decoration={markings, mark=at position 1 with {\arrow{stealth}}}] \node at (-2,2) {$0$}; \draw[postaction={decorate}] (-1.7,2) -- (-0.5,2); \node at (0,2) {$\Z$}; \draw[postaction={decorate}] (0.5,2) -- (2.5,2); \node at (3,2) {$\Z^{10}$}; \draw[postaction={decorate}] (3.5,2) -- (5.5,2); \node at (6,2) {$\Z^{14}$}; \draw[postaction={decorate}] (6.5,2) -- (7.7,2); \node at (8,2) {$0$}; %\draw[postaction={decorate}] (8.3,2) -- (9,2); %\node at (9.5,1.9) {$\cdots$}; \node at (-2,0) {$0$}; \draw[postaction={decorate}] (-1.7,0) -- (-0.5,0); \node at (0,0) {$\Z^{3}$}; \draw[postaction={decorate}] (0.5,0) -- (2.5,0); \node at (3,0) {$\Z^8$}; \draw[postaction={decorate}] (3.5,0) -- (5.5,0); \node at (6,0) {$\Z^{14}$}; \draw[postaction={decorate}] (6.5,0) -- (7.7,0); \node at (8,0) {$0$}; %\draw[postaction={decorate}] (8.3,0) -- (9,0); %\node at (9.5,-0.1) {$\cdots$}; \node at (1.5,2.3) {$d^{\Gamma_1}_1$}; \node at (4.5,2.3) {$d^{\Gamma_1}_0$}; \node at (1.5,0.3) {$d^{K}_1$}; \node at (4.5,0.3) {$d^{K}_0$}; \draw[postaction={decorate}] (-0.1,1.7) -- (-0.1,0.4); \node at (-0.4,1) {$f_2$}; \draw[postaction={decorate}] (2.9,1.7) -- (2.9,0.4); \node at (2.6,1) {$f_1$}; \draw[postaction={decorate}] (5.9,1.7) -- (5.9,0.4); \node at (5.6,1) {$f_0$}; %\draw[postaction={decorate}] (0.9,0) -- (2.9,0); %\node at (1.8,0.25) {$\partial$}; \end{tikzpicture}$$ where $f_0$ is 
represented by the matrix $$\left(\begin{array}{r:rrrr:rrr:rrrr:rrr} & P & & & & Q & & & R & & & & S & \\ \hdashline & 1 & 1 & 1 & 0 & & & & & & & & & \\ P & 0 & 0 & 0 & 1 & & & & & & & & & \\ & 0 & 0 & 0 & 1 & & & & & & & & & \\ & 0 & 0 & 0 & 1 & & & & & & & & & \\ \hdashline Q & & & & & 1 & 0 & 1 & & & & & & & \\ & & & & & 0 & 1 & 1 & & & & & & & \\ \hdashline R & & & & & & & & 1 & 0 & 0 & 1 & & & \\ & & & & & & & & 0 & 1 & 1 & 0 & & & \\ \hdashline & & & & & & & & 1 & 0 & 0 & 0 & & & \\ \widetilde{R}& & & & & & & & 0 & 1 & 0 & 0 & & & \\ & & & & & & & & 0 & 0 & 1 & 0 & & & \\ & & & & & & & & 0 & 0 & 0 & 1 & & & \\ \hdashline S & & & & & & & & & & & & 1 & 0 & 1 \\ & & & & & & & & & & & & 0 & 1 & 1 \end{array}\right),$$ $$\left(\begin{array}{rrrr:rrr:rrrr:rrr} 1 & 1 & 1 & & & & & & & & & & & \\ & & & 1 & & & & & & & & & & \\ & & & 1 & & & & & & & & & & \\ & & & 1 & & & & & & & & & & \\ \hdashline & & & & 1 & & 1 & & & & & & & \\ & & & & & 1 & 1 & & & & & & & \\ \hdashline & & & & & & & 1 & & & 1 & & & \\ & & & & & & & & 1 & 1 & & & & \\ \hdashline & & & & & & & 1 & & & 1 & & & \\ & & & & & & & & 1 & 1 & & & & \\ \hdashline & & & & & & & & & & & 1 & & 1 \\ & & & & & & & & & & & & 1 & 1 \end{array}\right),$$ of rank $10$, and $f_1$ by $$\left(\begin{array}{r:rrr:rr:rr:rrr} & PQ & & & QR & & RS & & SP & & \\ \hdashline PQ & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ QR & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ Q\widetilde{R} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ RS & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\ \widetilde{R}S & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ SP & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{array}\right),$$ of rank $6$. 
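The restriction matrix of the next display, the corestriction matrix of the following subsection, and the final Hecke operator matrix at the end of this section are related by a matrix product, which can be verified numerically. A SymPy sketch (entries transcribed from the displays labelled (m1) and (m2) and from the final result; the check is ours, not from the paper):

```python
from sympy import Matrix

# res: H_0^{Gamma_1} -> H_0^K, the restriction matrix (m1)
res = Matrix([
    [1, 2, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [1, 1, 2, -1, 1, -1],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
])
# cores: H_0^{gK} -> H_0^{Gamma_1}, the corestriction matrix (m2)
cores = Matrix([
    [1, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 1, 0],
    [1, 1, 1, 1, 0, -1, 0, 1],
])
# T_{g,pt} on degree 0 = cores . res (the conjugation step acts as the
# identity on the chain level, per the text)
T = cores * res
assert T == Matrix([
    [1, 2, 0, 0, 0, 0],
    [1, 2, 0, 0, 0, 0],
    [0, 0, 3, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [1, 1, 2, -1, 2, -1],
    [0, 1, 1, 1, -1, 2],
])
```

The assertion confirms that the $6\times 6$ Hecke operator matrix stated at the end of this section is indeed the product of (m2) with (m1).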
Using these matrices, the restriction morphism $$\text{res}:\mathcal{H}^{\Gamma_1}_{0}(X;\mathcal{R})\cong \Z^6 \longrightarrow \mathcal{H}^{K}_{0}(X;\mathcal{R})\cong \Z^8$$ can be represented by the matrix \begin{equation}\label{m1}\left(\begin{array}{cccccc} 1 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 2 &-1 & 1 &-1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right).\end{equation} §.§.§ Corestriction Let $_gK=gKg^{-1}$. As before, the morphism in the chain complexes is given by induction of representations of the isotropy groups of each cell. We have the morphism at the level of Bredon chain complexes as follows: $$\begin{tikzpicture}[scale=1, decoration={markings, mark=at position 1 with {\arrow{stealth}}}] \node at (-2,2) {$0$}; \draw[postaction={decorate}] (-1.7,2) -- (-0.5,2); \node at (0,2) {$\Z^{14}$}; \draw[postaction={decorate}] (0.5,2) -- (2.5,2); \node at (3,2) {$\Z^{10}$}; \draw[postaction={decorate}] (3.5,2) -- (5.5,2); \node at (6,2) {$\Z$}; \draw[postaction={decorate}] (6.5,2) -- (7.7,2); \node at (8,2) {$0$}; %\draw[postaction={decorate}] (8.3,2) -- (9,2); %\node at (9.5,1.9) {$\cdots$}; \node at (-2,0) {$0$}; \draw[postaction={decorate}] (-1.7,0) -- (-0.5,0); \node at (0,0) {$\Z^{14}$}; \draw[postaction={decorate}] (0.5,0) -- (2.5,0); \node at (3,0) {$\Z^8$}; \draw[postaction={decorate}] (3.5,0) -- (5.5,0); \node at (6,0) {$\Z^3$}; \draw[postaction={decorate}] (6.5,0) -- (7.7,0); \node at (8,0) {$0$}; %\draw[postaction={decorate}] (8.3,0) -- (9,0); %\node at (9.5,-0.1) {$\cdots$}; \node at (1.5,2.4) {$d_{\Gamma_1}^0$}; \node at (4.5,2.4) {$d_{\Gamma_1}^1$}; \node at (1.5,0.4) {$d_{{}_g K}^0$}; \node at (4.5,0.4) {$d_{{}_g K}^1$}; \draw[postaction={decorate}] (-0.1,0.4) -- (-0.1,1.7); \node at (-0.4,1) {$g_0$}; \draw[postaction={decorate}] (2.9,0.4) -- (2.9,1.7); \node at (2.6,1) {$g_1$}; \draw[postaction={decorate}] (5.9,0.4) -- (5.9,1.7); 
\node at (5.6,1) {$g_2$}; %\draw[postaction={decorate}] (0.9,0) -- (2.9,0); %\node at (1.8,0.25) {$\partial$}; \end{tikzpicture}$$ $$\begin{tikzpicture}[scale=1, decoration={markings, mark=at position 1 with {\arrow{stealth}}}] \node at (-2,2) {$0$}; \draw[postaction={decorate}] (-1.7,2) -- (-0.5,2); \node at (0,2) {$\Z$}; \draw[postaction={decorate}] (0.5,2) -- (2.5,2); \node at (3,2) {$\Z^{10}$}; \draw[postaction={decorate}] (3.5,2) -- (5.5,2); \node at (6,2) {$\Z^{14}$}; \draw[postaction={decorate}] (6.5,2) -- (7.7,2); \node at (8,2) {$0$}; %\draw[postaction={decorate}] (8.3,2) -- (9,2); %\node at (9.5,1.9) {$\cdots$}; \node at (-2,0) {$0$}; \draw[postaction={decorate}] (-1.7,0) -- (-0.5,0); \node at (0,0) {$\Z^{3}$}; \draw[postaction={decorate}] (0.5,0) -- (2.5,0); \node at (3,0) {$\Z^8$}; \draw[postaction={decorate}] (3.5,0) -- (5.5,0); \node at (6,0) {$\Z^{14}$}; \draw[postaction={decorate}] (6.5,0) -- (7.7,0); \node at (8,0) {$0$}; %\draw[postaction={decorate}] (8.3,0) -- (9,0); %\node at (9.5,-0.1) {$\cdots$}; \node at (1.5,2.4) {$d^{\Gamma_1}_1$}; \node at (4.5,2.4) {$d^{\Gamma_1}_0$}; \node at (1.5,0.4) {$d^{{}_g K}_1$}; \node at (4.5,0.4) {$d^{{}_g K}_0$}; \draw[postaction={decorate}] (-0.1,0.4) -- (-0.1,1.7); \node at (-0.4,1) {$g_2$}; \draw[postaction={decorate}] (2.9,0.4) -- (2.9,1.7); \node at (2.6,1) {$g_1$}; \draw[postaction={decorate}] (5.9,0.4) -- (5.9,1.7); \node at (5.6,1) {$g_0$}; %\draw[postaction={decorate}] (0.9,0) -- (2.9,0); %\node at (1.8,0.25) {$\partial$}; \end{tikzpicture}$$ -1 & 0 & 0 & -1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline 0 & 0 & 0 & 0 & -1 & 0 & -1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\ \hdashline 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & -1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & -1 & 0 & 1 & 1 \\ \hdashline 1 & 0 & 0 & 1 & 0 & 0 
& 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \end{array}\right)$$ -1 & 0 &0&0&0&0&0&1&0&0\\0&-1&0&0&0&0&0&0&0&1\\0&0&-1&0&0&0&0&0&1&0\\-1&-1&-1&0&0&0&0&1&1&1\\\hdashline1&0&0&-1&0&0&0&0&0&0\\1&0&0&0&-1&0&0&0&0&0\\0&1&1&-1&-1&0&0&0&0&0\\\hdashline0&0&0&1&0&-1&0&0&0&0\\0&0&0&1&0&0&-1&0&0&0\\0&0&0&0&1&-1&0&0&0&0\\0&0&0&0&1&0&-1&0&0&0\\\hdashline0&0&0&0&0&1&0&-1&0&0\\0&0&0&0&0&0&1&-1&0&0\\0&0&0&0&0&1&1&0&-1&-1\end{array}\right) $$d^{{}_g K}_0=\left(\begin{array}{rrrrrr} -1 &0&0&0&0&1 \\-1 &0&0&0&0&1\\-1 &0&0&0&0&1\\-1 &0&0&0&0&1\\\hdashline \end{array}\right)$$ & & P & & & Q & & R & & \widetilde{R} & & & & S & \\ \hdashline & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ P & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Q & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ R & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ \hdashline & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ S & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \end{array}\right)$$ & PQ & QR & Q\widetilde{R} & & RS & \widetilde{R}S & & SP \\ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ PQ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline QR & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ \hdashline RS & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ \hdashline & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 
\\ SP & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right)$$ (these two matrices are the transposes of $f_0$ and $f_1$). One checks that $d_{\Gamma_1}^0 \circ g_0 = g_1 \circ d_{{}_gK}^0$, where the $g_i$ are the transposed matrices of the $f_i$. In a similar way, the morphism $$\text{cores}: \mathcal{H}^{{}_gK}_{0}(X;\mathcal{R})\cong \Z^8\longrightarrow \mathcal{H}^{\Gamma_1}_{0}(X;\mathcal{R})\cong \Z^6$$ can be represented by the matrix \begin{equation}\label{m2}\left(\begin{matrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 0 & -1 & 0 & 1 \end{matrix}\right).\end{equation} §.§.§ $Ad_g$ For the conjugation morphism we have $$\begin{tikzpicture}[scale=1, decoration={markings, mark=at position 1 with {\arrow{stealth}}}] \node at (-2,2) {$0$}; \draw[postaction={decorate}] (-1.7,2) -- (-0.5,2); \node at (0,2) {$\Z^{3}$}; \draw[postaction={decorate}] (0.5,2) -- (2.5,2); \node at (3,2) {$\Z^{8}$}; \draw[postaction={decorate}] (3.5,2) -- (5.5,2); \node at (6,2) {$\Z^{14}$}; \draw[postaction={decorate}] (6.5,2) -- (7.7,2); \node at (8,2) {$0$}; \node at (-2,0) {$0$}; \draw[postaction={decorate}] (-1.7,0) -- (-0.5,0); \node at (0,0) {$\Z^{3}$}; \draw[postaction={decorate}] (0.5,0) -- (2.5,0); \node at (3,0) {$\Z^8$}; \draw[postaction={decorate}] (3.5,0) -- (5.5,0); \node at (6,0) {$\Z^{14}$}; \draw[postaction={decorate}] (6.5,0) -- (7.7,0); \node at (8,0) {$0$}; \node at (1.5,2.3) {$d^{K}_1$}; \node at (4.5,2.3) {$d^{K}_0$}; \node at (1.5,0.3) {$d^{{}_gK}_1$}; \node at (4.5,0.3) {$d^{{}_gK}_0$}; \draw[postaction={decorate}] (-0.1,1.7) -- (-0.1,0.4); \node at (-0.4,1) {$h_2$}; \draw[postaction={decorate}] (2.9,1.7) -- (2.9,0.4); \node at (2.6,1) {$h_1$}; 
\draw[postaction={decorate}] (5.9,1.7) -- (5.9,0.4); \node at (5.6,1) {$h_0$}; %\draw[postaction={decorate}] (0.9,0) -- (2.9,0); %\node at (1.8,0.25) {$\partial$}; \end{tikzpicture}$$ {}_gK \;\backslash\; K & P & & & & Q & & R & & \widetilde{R} & & & & S & \\ \hdashline & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ P & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline Q & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline R & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline \widetilde{R} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hdashline S & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right)$$ $$d_{{}_g K}^0=\left(\begin{array}{rrrr:rr:rr:rr:rr} -1 & -1 & -1 & -1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & -1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 \end{array}\right)$$ where the $h_i$ are the identity (the conjugation is not changing the order of the representations). §.§.§ Hecke operators Finally, we can compute the Hecke operators on the K-homology of the C*-algebra $C_r^*(\Gamma_1)$. We have $$K_n(C_r^*(\Gamma_1)) \cong \begin{cases} \Z^6, & n=0; \\ \Z, & n=1. 
\end{cases}$$ The Hecke operator \begin{eqnarray*} T_{g,\pt} \, : \, K_n(C_r^*(\Gamma_1))%\, \cong \, \Z^6 &\longrightarrow & K_n(C_r^*(\Gamma_1))% \, \cong \, \Z^6 \end{eqnarray*} for $n=0$ is given by the matrix $$\left(\begin{array}{rrrrrr} 1 & 2 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 2 &-1 & 2 &-1 \\ 0 & 1 & 1 & 1 & -1 & 2 \end{array}\right)$$ and is zero for $n=1$. First note that, as $\Gamma_1$ satisfies the Baum-Connes conjecture, the discussion at the end of Section <ref> implies $$K_0(C_r^*(\Gamma_1))\cong \mathcal{H}_0^{\Gamma_1}(\underbar{E}\Gamma_1;\mathcal{R})\cong\Z^6 \quad\text{and}\quad K_1(C_r^*(\Gamma_1))\cong \mathcal{H}_1^{\Gamma_1}(\underbar{E}\Gamma_1;\mathcal{R})\cong\Z.$$ Then, Theorems <ref> and <ref> imply that $T_{g,\pt}$ is represented by the same matrix as the Hecke operator on Bredon homology. For $n=0$, this is the product of matrices <ref> and <ref>; for $n=1$ it is trivial, since it factors through $\mathcal{H}_1^{K}(\underbar{E}K;\mathcal{R})=0$. § CONCLUDING REMARKS These computations are a first step towards developing a general algorithm for working with Hecke operators in Bredon cohomology. The implementation of such an algorithm in GAP is part of forthcoming work. As mentioned above, an initial GAP algorithm to compute $\uE\Gamma_1/K$ for subgroups $K$ associated to other primes $p$ in $\Z[i]$ can be found in [16]. Relations of our work with arithmetic aspects of the theory involving automorphic forms are still to be studied. We expect that, using the above setup, the study of such relations will lead to fruitful developments of the theory. [1] M. Artin. Algebra. Pearson Prentice Hall, 2011. [2] P. F. Baum, N. Higson, and R. J. Plymen. Representation theory of $p$-adic groups: a view from operator algebras. In The mathematical legacy of Harish-Chandra (Baltimore, MD, 1998), volume 68 of Proc. Sympos. Pure Math., pages 111–149. Amer. Math. Soc., Providence, RI, 2000. [3] Luigi Bianchi. 
Sui gruppi di sostituzioni lineari con coefficienti appartenenti a corpi quadratici immaginarî. Math. Ann., 40(3):332–412, 1892. [4] B. Blackadar. Operator algebras, volume 122 of Encyclopaedia of Mathematical Sciences. Springer-Verlag, Berlin, 2006. Theory of $C^*$-algebras and von Neumann algebras, Operator Algebras and Non-commutative Geometry, III. [5] Glen E. Bredon. Equivariant cohomology theories. Bull. Amer. Math. Soc., 73:266–268, 1967. [6] Glen E. Bredon. Equivariant cohomology theories. Lecture Notes in Mathematics, No. 34. Springer-Verlag, Berlin-New York, 1967. [7] Benjamin Fine. Algebraic theory of the Bianchi groups, volume 129 of Monographs and Textbooks in Pure and Applied Mathematics. Marcel Dekker, Inc., New York, 1989. [8] Dieter Flöge. Zur Struktur der ${\rm PSL}_{2}$ über einigen imaginär-quadratischen Zahlringen. Math. Z., 183(2):255–279, 1983. [9] G. Harder. Eisenstein cohomology of arithmetic groups. The case ${\rm GL}_{2}$. Invent. Math., 89(1):37–118, 1987. [10] Günter Harder. Eisenstein cohomology of arithmetic groups and its applications to number theory. In Proceedings of the International Congress of Mathematicians, Vol. I, II (Kyoto, 1990), pages 779–790. Math. Soc. Japan, Tokyo, 1991. [11] Sören Illman. Equivariant singular homology and cohomology. I. Mem. Amer. Math. Soc., 1(issue 2, 156):ii+74, 1975. [12] Michio Kuga, Walter Parry, and Chih Han Sah. Group cohomology and Hecke operators. In Manifolds and Lie groups (Notre Dame, Ind., 1980), volume 14 of Progr. Math., pages 223–266. Birkhäuser, Boston, Mass., 1981. [13] Wolfgang Lück and Bob Oliver. The completion theorem in $K$-theory for proper actions of a discrete group. Topology, 40(3):585–616, 2001. [14] Wolfgang Lück. Assembly maps. arXiv e-prints, page arXiv:1805.00226, May 2018. [15] Bram Mesland and Mehmet Haluk Şengün. Hecke operators in $KK$-theory and the $K$-homology of Bianchi groups. J. Noncommut. Geom., 14(1):125–189, 2020. [16] David Muñoz. 
Hecke Operators in K-theory of Bianchi Groups. Master's thesis, Pontificia Universidad Javeriana, January 2020. [17] Guido Mislin and Alain Valette. Proper group actions and the Baum-Connes conjecture. Advanced Courses in Mathematics. CRM Barcelona. Birkhäuser Verlag, Basel, 2003. [18] Alexander Rahm. (Co)homologies and K-theory of Bianchi groups using computational geometric models. Thesis, Université Joseph-Fourier - Grenoble I, October 2010. [19] Rubén José Sánchez-García. Equivariant K-homology of the classifying space for proper actions. ProQuest LLC, Ann Arbor, MI, 2005. Thesis (Ph.D.)–University of Southampton (United Kingdom). [20] Goro Shimura. Introduction to the arithmetic theory of automorphic functions, volume 11 of Publications of the Mathematical Society of Japan. Princeton University Press, Princeton, NJ, 1994. Reprint of the 1971 original, Kanô Memorial Lectures, 1. [21] Richard G. Swan. Generators and relations for certain special linear groups. Advances in Math., 6:1–77 (1971), 1971.
# The six functors for Zariski-constructible sheaves in rigid geometry Bhargav Bhatt Bhargav Bhatt Department of Mathematics, University of Michigan Ann Arbor, MI 48109, USA<EMAIL_ADDRESS>and David Hansen David Hansen Max Planck Institute for Mathematics Vivatsgasse 7, Bonn 53111, Germany<EMAIL_ADDRESS> ###### Abstract. We prove a generic smoothness result in rigid analytic geometry over a characteristic zero nonarchimedean field. The proof relies on a novel notion of generic points in rigid analytic geometry which are well-adapted to “spreading out” arguments, in analogy with the use of generic points in scheme theory. As an application, we develop a six functor formalism for Zariski- constructible étale sheaves on characteristic zero rigid spaces. Among other things, this implies that characteristic zero rigid spaces support a well- behaved theory of perverse sheaves. ###### Key words and phrases: Rigid analytic spaces, étale cohomology, generic smoothness, six functors ###### 1991 Mathematics Subject Classification: 14G22, 14F20 ###### Contents 1. 1 Introduction 2. 2 Generic flatness and generic smoothness 1. 2.1 Shilov and weakly Shilov points 2. 2.2 Tools involving the cotangent complex 3. 2.3 Regularity of Shilov fibers 4. 2.4 Generic flatness 5. 2.5 Generic smoothness 3. 3 The six functors for Zariski-constructible sheaves 1. 3.1 Definition of Zariski-constructible sheaves 2. 3.2 Zariski-constructible sheaves via algebraic geometry 3. 3.3 Pushforward, $\otimes$, and $\mathop{R\mathscr{H}\mathrm{om}}$ 4. 3.4 Verdier duality 5. 3.5 Miscellany 6. 3.6 Adic coefficients 4. 4 Perverse sheaves 1. 4.1 Finite coefficients 2. 4.2 $\mathbf{Z}_{\ell}$-coefficients 3. 4.3 $\mathbf{Q}_{\ell}$-coefficients 4. 4.4 Intersection cohomology 5. 4.5 Some conjectures ## 1\. 
Introduction In this paper, we prove a new generic smoothness result for morphisms of rigid analytic spaces (regarded as adic spaces, always), and apply it to set up a $6$ functor formalism for étale cohomology of rigid analytic spaces with coefficients in Zariski-constructible sheaves. Our first geometric result is the following (see also Remark 1.4 for an alternative approach through [Duc18]). ###### Theorem 1.1 (Generic smoothness, Theorems 2.21, 2.27 and 2.29). Fix a nonarchimedean field $K$, and let $f:X\to Y$ be a quasicompact map of rigid analytic spaces over $\mathop{\mathrm{Spa}}K$ with $Y$ reduced. 1. (1) If $Y$ is geometrically reduced (which is automatic from reducedness if $K$ has characteristic $0$, [Con99, Lemma 3.3.1]), there is a dense open subset $U\subset Y$ such that $f^{-1}(U)\to U$ is flat. 2. (2) If $\mathrm{char}\,K=0$ and $X$ is smooth, there is a dense open subset $U\subset Y$ such that $f^{-1}(U)\to U$ is smooth. If moreover $f$ is proper, then the maximal such $U$ is Zariski-open. In classical algebraic geometry, results like this are easily proved by spreading out from generic points in $Y$. In non-archimedean geometry, at least from the point of view of topology, there are far too many generic points: all rank one points of $Y$, and in particular all classical rigid points, are generic in the sense of locally spectral spaces. Moreover, at most of these points, spreading out cannot work naively, due to the subtle mixture of completions and integral closures which arise when computing the stalks and residue fields of $\mathcal{O}_{Y}$ and $\mathcal{O}_{Y}^{+}$. Our main new observation in the proof of Theorem 1.1 is that there is nevertheless a reasonable rigid analytic analog of generic points from algebraic geometry, given as follows. ###### Definition 1.2 (Weakly Shilov points, §2.1). Fix a nonarchimedean field $K$ with a pseudouniformizer $t\in K^{\circ}$ and residue field $k$. 
A rank one point $x$ in a rigid space $X/K$ is _weakly Shilov_ if any one of the following equivalent conditions is satisfied: 1. (1) There is an open affinoid subset $\mathop{\mathrm{Spa}}(A,A^{\circ})\subset X$ such that $x$ lies in the Shilov boundary of $\mathop{\mathrm{Spa}}(A,A^{\circ})$. 2. (2) There is an open affinoid subset $\mathop{\mathrm{Spa}}(A,A^{\circ})\subset X$ containing $x$ such that the map $A^{\circ}\to K_{x}^{+}$ identifies $K_{x}^{+}$ with a $t$-completed localization of $A^{\circ}$. (As $x$ is a rank $1$ point, the ring $K_{x}^{+}$ is a rank $1$ valuation ring, and identifies with the subring $\mathcal{O}_{K_{x}}\subset K_{x}$ of power bounded elements of the valued field $K_{x}$.) 3. (3) The transcendence degree of the secondary residue field $K_{x}^{+}/\mathfrak{m}$ over $k$ equals the local dimension of $X$ at $x$. 4. (4) (Applicable only if $X$ is quasiseparated and quasi-paracompact.) There exists a formal model $\mathfrak{X}$ of $X$ such that the specialization map $\mathop{\mathrm{sp}}:|X|\to|\mathfrak{X}_{k}|$ carries $x$ to the generic point of an irreducible component of $\mathfrak{X}_{k}$. ###### Example 1.3. If $X=\mathop{\mathrm{Spa}}K\\!\left\langle T\right\rangle$ is the closed unit disc, the weakly Shilov points are exactly the points of Type 2 in the usual nomenclature, i.e. the points defined by the Gauss norms on closed subdisks. Weakly Shilov points are closely related to divisorial valuations as considered in birational geometry. For our purposes, the utility of these points arises by combining the characterizations (1) and (2) above: the former implies such points are dense in $X$, while the latter (roughly) makes these points amenable to the same commutative algebra arguments as generic points in algebraic geometry. The proof of Theorem 1.1 proceeds by making this idea precise: indeed, our arguments show that the subsets $U$ in Theorem 1.1 can be chosen to contain all the weakly Shilov points of $Y$.
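To make condition (3) of Definition 1.2 concrete, here is the standard computation for the Gauss point of the closed unit disc from Example 1.3 (a well-known verification, not taken from the paper's proofs):

```latex
% Gauss point \eta of X = Spa K<T>: for f = \sum_{i\ge 0} a_i T^i in K<T>,
\[
  \Bigl|\textstyle\sum_{i\geq 0}a_{i}T^{i}\Bigr|_{\eta}
  \;=\;\max_{i\geq 0}\,|a_{i}|.
\]
% The secondary residue field at \eta is the rational function field
% k(\overline{T}) over the residue field k of K, so
\[
  \operatorname{trdeg}_{k}\bigl(K_{\eta}^{+}/\mathfrak{m}\bigr)
  \;=\;\operatorname{trdeg}_{k}k(\overline{T})
  \;=\;1
  \;=\;\dim_{\eta}X,
\]
% as condition (3) requires. By contrast, for a classical (Type 1) point x,
% the field K_x is finite over K, so its secondary residue field is
% algebraic over k; the transcendence degree is then 0 < 1 = dim_x X,
% and x is not weakly Shilov.
```

This is the same valuation that reappears ring-theoretically in Example 2.1 below, as a $t$-completed localization of $\mathcal{O}_{K}\langle T\rangle$.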
###### Remark 1.4 (Obtaining Theorem 1.1 from Ducros’ work). Theorem 1.1 can also be deduced from Ducros’ [Duc18] if one switches to Berkovich spaces; we indicate the argument for Theorem 1.1 (2) in the proper case (which is the most essential one for this paper) in Remark 2.30. Note that the idea of using Abhyankar points in [Duc18] seems functionally equivalent to our idea of weakly Shilov points. On the other hand, our differential approach to a key step (see Theorem 2.18) differs from the function theoretic approach of the corresponding [Duc18, Theorem 6.3.7]. We were unaware of [Duc18] when working on Theorem 1.1, and thank Brian Conrad for bringing [Duc18] to our attention. Let us now turn to the application of this result to étale cohomology of rigid spaces. Recall that for any rigid space $X/K$, work of Huber [Hub96] and Berkovich [Ber93] shows that the derived category $D(X,\mathbf{Z}/n)$ of étale $\mathbf{Z}/n$-sheaves admits a reasonable 6-functor formalism (at least for $n$ invertible on $K$). However, unlike in the case of schemes, it is much more subtle to isolate a reasonable subcategory of “constructible” complexes which are stable under the 6 operations. (For instance, constructible sheaves in the sense of Huber’s work [Hub96], while having many wonderful categorical properties, do not capture the same geometric intuition as the corresponding notion in algebraic or complex geometry. Indeed, even the skyscraper sheaf at a classical point, perhaps the simplest example of a proper pushforward, is not Huber-constructible. Relatedly, analytifications of algebraically constructible sheaves on algebraic varieties are almost never Huber-constructible.) In [Han20], the second author proposed that the following notion should yield the desired theory. ###### Definition 1.5 (Zariski-constructible sheaves, Definition 3.1). Fix a rigid space $X/K$ and $n>0$.
An étale sheaf $\mathscr{F}$ of $\mathbf{Z}/n$-modules is called _Zariski-constructible_ if $X$ admits a locally finite stratification $X=\coprod_{i\in I}X_{i}$ into Zariski locally closed subsets $X_{i}$ such that $\mathscr{F}|X_{i}$ is locally constant with finite stalks for all $i$. Write $D^{(b)}_{zc}(X,\mathbf{Z}/n)\subset D(X_{\mathop{\mathrm{\acute{e}t}}},\mathbf{Z}/n)$ for the full subcategory of the derived category of $\mathbf{Z}/n$-module sheaves on $X_{\mathop{\mathrm{\acute{e}t}}}$ spanned by complexes that have Zariski-constructible cohomology sheaves and are locally bounded on $X$. The paper [Han20] showed the stability of $D_{zc}(X,\mathbf{Z}/n)$ under the 6 operations only in some very limited situations. The techniques in the present paper yield this stability in satisfactory generality over characteristic zero fields. The resulting formalism, which can be regarded as the rigid analytic analog of the classical theory of analytically constructible sheaves on complex analytic spaces (see, e.g., [Ver76, §2]), is summarized as follows: ###### Theorem 1.6 (The $6$ functor formalism for Zariski-constructible sheaves). Let $K$ be a characteristic zero nonarchimedean field of residue characteristic $p\geq 0$. For an integer $n\geq 1$, the assignment $X\mapsto D^{(b)}_{zc}(X,\mathbf{Z}/n)$ enjoys the following properties: 1. (1) Pullback: For any map $f:X\to Y$, the pullback $f^{*}$ preserves $D^{(b)}_{zc}$ (Proposition 3.4). 2. (2) Proper pushforward: For a proper map $f:X\to Y$, the pushforward $Rf_{*}$ preserves $D^{(b)}_{zc}$ (Theorem 3.10). 3. (3) More pushforwards: For a Zariski-compactifiable map $f:X\to Y$, the pushforwards $Rf_{*}$ and $Rf_{!}$ carry lisse objects in $D^{(b)}_{zc}$ into $D^{(b)}_{zc}$ (Corollary 3.11). 4. (4) $!$-pullback: Given a map $f:X\to Y$, if either $f$ is finite or $(n,p)=1$, the pullback $Rf^{!}$ preserves $D^{(b)}_{zc}$ (Corollary 3.12). 5.
(5) Verdier duality: There is a natural dualizing complex $\omega_{X}\in D^{(b)}_{zc}(X,\mathbf{Z}/n)$ such that the functor $\mathbf{D}_{X}(-):=\mathop{R\mathscr{H}\mathrm{om}}(-,\omega_{X})$ induces an anti-equivalence on $D^{(b)}_{zc}$ satisfying biduality (Theorem 3.21). 6. (6) $\otimes$ and $\mathop{R\mathscr{H}\mathrm{om}}$: Given $\mathscr{F}\in D^{(b)}_{zc}(X,\mathbf{Z}/n)$ with locally finite Tor dimension (e.g., if $n$ is a prime), the functors $\mathscr{F}\otimes^{L}_{\mathbf{Z}/n}(-)$ and $\mathop{R\mathscr{H}\mathrm{om}}(\mathscr{F},-)$ preserve $D^{(b)}_{zc}$ (Corollary 3.14). Moreover, proper base change holds (Theorem 3.15), and all of these operations are compatible with extensions of the nonarchimedean base field and with analytification of algebraic varieties (Proposition 3.16, Theorem 3.21, Proposition 3.24). Finally, all of these results admit extensions to $\mathbf{Z}_{\ell}$-coefficients (Theorem 3.36). Let us make a couple of remarks. First, we do not assume any conditions on $p$ relative to the coefficient ring (except in the case of $Rf^{!}$, but see Remark 3.23), so this result generalizes some previously known finiteness theorems in $p$-adic Hodge theory (and uses them as input). Secondly, due to the poor behaviour of Zariski closures in rigid geometry, it is unreasonable to expect arbitrary Zariski-constructible sheaves to be stable under pushforward (Warning 3.2 (1)), so one cannot do much better than (3) above (although see Proposition 3.26); similar issues also occur in complex geometry. Next, we briefly comment on the proofs. Preservation under $f^{\ast}$ and $\otimes$ is straightforward and is stated for completeness. The first key new result is the preservation of Zariski-constructibility under $Rf_{\ast}$ for proper $f$. This was raised as a conjecture in [Han20, Conjecture 1.14]. Here we reduce it to the known statement that $Rf_{\ast}$ preserves locally constant constructible complexes when $f$ is both smooth and proper.
This reduction relies on Temkin’s embedded resolution of singularities for quasi-excellent $\mathbf{Q}$-schemes, results of the second author [Han20, Theorem 1.6] on extending branched covers across Zariski-open immersions (building on previous work of Bartenwerfer [Bar76] and Lütkebohmert [Lüt93]), and (most crucially) Theorem 1.1.(2). For the remaining stabilities in Theorem 1.6, we largely reduce them to analogous results for schemes (e.g., by replacing an affinoid $\mathrm{Spa}(A)$ with the scheme $\mathrm{Spec}(A)$). The classical results from SGA4 treat only finite type objects, and are not sufficient for our purposes. However, Gabber’s work on étale cohomology of excellent schemes [ILO14] is presented in exactly the right amount of generality, provided we are allowed to localize our questions to affinoids. The latter is possible thanks to our second key new result (which, in particular, settles [Han20, Conjecture 1.12]). ###### Theorem 1.7 (Locality of Zariski-constructibility, Theorem 3.5). For abelian sheaves on characteristic zero rigid spaces, the property of being Zariski-constructible is an étale-local property. Using this toolkit, one can imitate many standard constructions with constructible sheaves found in complex or algebraic geometry. As an example, we show that characteristic zero rigid spaces support a theory of perverse sheaves which has the same pleasant formal properties as its algebraic counterpart [BBD82] (except that we need to restrict to qcqs spaces when working with $\mathbf{Q}_{\ell}$-coefficients). ###### Theorem 1.8 (Perverse sheaves in rigid geometry, Theorem 4.2 and Theorem 4.11). Let $K$ be a characteristic zero nonarchimedean field with residue characteristic $p\geq 0$ and let $X/K$ be a rigid space. Fix a prime $\ell$ and a coefficient ring $\Lambda\in\\{\mathbf{Z}/\ell^{n}\mathbf{Z},\mathbf{Q}_{\ell}\\}$.
If $\Lambda=\mathbf{Q}_{\ell}$, then assume that $X$ is qcqs and define $D^{(b)}_{zc}(X,\mathbf{Q}_{\ell}):=D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})\otimes_{\mathbf{Z}_{\ell}}\mathbf{Q}_{\ell}$. There is a naturally defined perverse $t$-structure on $D^{(b)}_{zc}(X,\Lambda)$, with abelian heart denoted $\mathop{\mathrm{Perv}}(X,\Lambda)$. This construction has the following stability properties: 1. (1) Duality: $\mathrm{Perv}(X,\Lambda)$ is Verdier self-dual inside $D^{(b)}_{zc}(X,\Lambda)$. 2. (2) Finite pushforward: $\mathrm{Perv}(-,\Lambda)$ is stable under $f_{*}$ for $f:Y\to X$ a finite map. 3. (3) Intermediate extensions: There is a notion of intermediate extension of lisse sheaves defined on Zariski-locally closed subsets of $X$. 4. (4) Finiteness: If $X$ is quasi-compact, then $\mathop{\mathrm{Perv}}(X,\Lambda)$ is noetherian and artinian. 5. (5) Nearby cycles: The nearby cycles functor associated with a formal model of $X$ is perverse $t$-exact when $p\neq\ell$. Moreover, this construction is compatible with the usual constructions in algebraic geometry under analytification (in the proper case for $\Lambda=\mathbf{Q}_{\ell}$), and is compatible with extensions of the ground field. As an application, we can define an intersection cohomology complex on any qcqs characteristic $0$ rigid space, and the resulting intersection cohomology groups have reasonable properties: ###### Corollary 1.9 (Intersection cohomology of rigid spaces, Theorem 4.13). Let $K$ be a characteristic zero nonarchimedean field of residue characteristic $p\geq 0$; let $C/K$ be a completed algebraic closure and let $\ell$ be any prime. Let $X/K$ be a qcqs rigid space. 1. (1) Existence of intersection cohomology: There are naturally defined $\ell$-adic intersection cohomology groups $IH^{n}(X_{C},\mathbf{Q}_{\ell})$. These are finitely generated $\mathbf{Q}_{\ell}$-modules if $\ell\neq p$ or if $X$ is proper. 2.
(2) GAGA: If $X=\mathcal{X}^{an}$ for a proper $K$-scheme $\mathcal{X}$, then $IH^{*}(X_{C},\mathbf{Q}_{\ell})\simeq IH^{*}(\mathcal{X}_{C},\mathbf{Q}_{\ell})$. 3. (3) Poincaré duality: If $X$ is proper and equidimensional of dimension $d$ and $\ell\neq p$, there is a natural Poincaré duality isomorphism $IH^{n}(X_{C},\mathbf{Q}_{\ell})^{\ast}\cong IH^{-n}(X_{C},\mathbf{Q}_{\ell})(d).$ We end this paper by formulating some conjectures in §4.5, roughly predicting that deep known results on the intersection cohomology of algebraic varieties over $K$ carry over to the rigid context. ### Conventions We follow the convention that the term “nonarchimedean field” is reserved for valued fields carrying a rank $1$ valuation. If $K$ is a nonarchimedean field, we use the terms $K$-affinoid algebra and topologically of finite type (henceforth abbreviated tft) $K$-algebra synonymously; recall that these are exactly the Banach $K$-algebras that can receive a continuous surjection from a Tate algebra $K\langle x_{1},...,x_{n}\rangle$. Likewise, we say “rigid space over $K$” and “adic space locally tft over $\mathop{\mathrm{Spa}}K$” interchangeably. If $A$ is a tft $K$-algebra, we write $\tilde{A}:=A^{\circ}/A^{\circ\circ}=(A^{\circ}/t)^{red}$; this is a $k$-algebra of finite type [BGR84, Corollary 3, §6.3.4], where $k$ is the residue field of $K$ and $t$ is a pseudouniformizer (i.e., any nonzero element of $K^{\circ\circ}$). We warn the reader that $A^{\circ}$ need not be tfp over $\mathcal{O}_{K}$ even when $A$ is reduced; this pathology does not occur if either $K$ is discrete, or if $K$ is a stable field with $|K^{*}|$ divisible (e.g., if $K$ is algebraically closed); see [BGR84, §3.6] for the definition of stability, and [BGR84, §6.4] for the finiteness properties. We write $\mathop{\mathrm{Spa}}A=\mathop{\mathrm{Spa}}(A,A^{\circ})$ for any Huber ring $A$. Say $X$ is an analytic adic space and $x\in X$. We write $K_{x}$ for the completed residue field of $X$ at $x$.
This is a nonarchimedean field. We shall write $|\cdot|_{x}$ for the associated valuation on functions, though note that it is only well-defined up to equivalence. However, if $A$ is a tft $K$-algebra and $x\in\mathop{\mathrm{Spa}}A$ is a rank one point, there is a unique $\mathbf{R}_{\geq 0}$-valued representative of the associated equivalence class which extends the fixed norm on the base field $K$, i.e. a unique representative such that $|t|_{x}=|t|_{K}$. We always choose this representative. Given $X$ and $x$ as above, the secondary residue field of $X$ at a point $x$ is the quotient $\tilde{K}_{x}=K_{x}^{+}/\mathfrak{m}_{x}$. Here $K_{x}^{+}$ is the valuation subring of the residue field $K_{x}$ defined as the completed image of the map $\mathcal{O}_{X,x}^{+}\to K_{x}$; when $x$ is a rank $1$ point, the ring $K_{x}^{+}$ coincides with the subring $\mathcal{O}_{K_{x}}\subset K_{x}$ of power bounded elements. Recall that if $y\prec x$ is any specialization, then $K_{x}\cong K_{y}$ and $K_{y}^{+}\subset K_{x}^{+}$ under this identification. For any tft $K$-algebra $A$, we shall write $\mathop{\mathrm{sp}}:\mathop{\mathrm{Spa}}A\to\mathop{\mathrm{Spec}}\tilde{A}$ for the specialization map, given by taking the center of the valuation. This is a continuous, closed, and spectral map of spectral spaces. If $f:X\to Y$ is a map of rigid spaces over $K$, we say $f$ is _Zariski-compactifiable_ if it admits a factorization $f=\overline{f}\circ j$, where $j:X\to X^{\prime}$ is a Zariski-open immersion and $\overline{f}:X^{\prime}\to Y$ is a proper morphism. We say a rigid space $X$ over $K$ is Zariski-compactifiable if the structure map $f:X\to\mathop{\mathrm{Spa}}K$ is so. ### Acknowledgements In April 2020, Bogdan Zavyalov asked DH whether [Han20, Conjecture 1.14] was within reach, and the present paper grew directly out of that conversation. We thank Bogdan heartily for this crucial initial stimulus, and for some helpful comments on previous drafts of this paper.
We are also grateful to Brian Conrad, Johan de Jong, Haoyang Guo, Mattias Jonsson, Shizhang Li, and Jacob Lurie for useful conversations and exchanges, and to the anonymous referees for a number of comments that helped improve the exposition. Bhatt was partially supported by the NSF (#1801689, #1952399, #1840234), a Packard fellowship, and the Simons Foundation (#622511). ## 2\. Generic flatness and generic smoothness In this section, we introduce the notion of weakly Shilov points, and prove our main geometric results on generic smoothness. ### 2.1. Shilov and weakly Shilov points Fix a complete nonarchimedean field $K$ and a pseudouniformizer $t\in K^{\circ}$. Recall the following basic example of a “non-classical” point of a standard $K$-affinoid. ###### Example 2.1. If $X=\mathop{\mathrm{Spa}}K\\!\left\langle T\right\rangle$ is the one-dimensional affinoid ball over $K$, then the Gauss point of $X$ is given by the $t$-adic norm on $K\langle T\rangle$. To describe this norm ring theoretically, recall that the standard formal model for $X$ is given by the formal closed disc $\mathfrak{X}:=\mathrm{Spf}(\mathcal{O}_{K}\langle T\rangle)$. The special fibre $\mathfrak{X}_{s}=\mathrm{Spec}((\mathcal{O}_{K}/t)[T])$ has a unique generic point $\overline{\eta}$; the $t$-completed localization of $\mathcal{O}_{K}\langle T\rangle$ at $\overline{\eta}$ is a rank one $t$-complete and $t$-torsionfree valuation ring $V$ equipped with a map $\mathcal{O}_{K}\langle T\rangle\to V$. The resulting valuation on $K\langle T\rangle$ is the Gauss point $\eta$. Moreover, the canonical map $X\to\mathfrak{X}$ given by taking the center of the valuation carries $\eta$ to $\overline{\eta}$. We now isolate a general class of points with properties similar to the Gauss point from Example 2.1. ###### Proposition 2.2. Fix a tft $K$-algebra $A$ and a rank one point $x\in\mathrm{Spa}(A,A^{\circ})$. The following are equivalent: 1.
(1) $\mathop{\mathrm{sp}}(x)$ is the generic point of an irreducible component of $\mathop{\mathrm{Spec}}\tilde{A}$. 2. (2) $\\{x\\}=\mathop{\mathrm{sp}}^{-1}(\mathop{\mathrm{sp}}(x))$ as subsets of $\mathop{\mathrm{Spa}}A$. 3. (3) $K_{x}^{+}$ is a $t$-completed ind-Zariski localization of $A^{\circ}$. More precisely, $K_{x}^{+}$ is the $t$-completed local ring of $\mathop{\mathrm{Spec}}A^{\circ}$ at the point $\mathop{\mathrm{sp}}(x)\in\mathop{\mathrm{Spec}}\tilde{A}\subset\mathop{\mathrm{Spec}}A^{\circ}$. 4. (4) The seminorm $|\cdot|_{x}$ belongs to the Shilov boundary of $A$ in the sense of $K$-Banach algebras. ###### Proof. We begin with two (well-known) observations on the affinoid adic space $\mathrm{Spa}(A,A^{\circ})$; the first concerns finding suitable rings of definition, while the second concerns a description via formal models. First, we claim that there exists an open and bounded tfp $\mathcal{O}_{K}$-subalgebra $A_{0}\subset A^{\circ}$ such that $\mathrm{Spec}(\tilde{A})\to\mathrm{Spec}(A_{0}/tA_{0})$ is a universal homeomorphism and such that $A^{\circ}$ is the integral closure of $A_{0}$ in $A$. In fact, the second part follows from [BGR84, Remark following 6.3.4/1]. For the first, using Noether normalization, we can choose a surjective map $T\to A$, where $T=K\langle x_{1},...,x_{n}\rangle$ is a Tate algebra. By [BGR84, 6.3.4/2], the map $\tilde{T}\to\tilde{A}$ is module finite. Setting $A^{\prime}_{0}=\mathrm{im}(T^{\circ}\to A^{\circ})$, we learn that $A_{0}^{\prime}/(A^{\circ\circ}\cap A_{0}^{\prime})\to\tilde{A}$ is module finite and injective, with $A_{0}^{\prime}$ being open, bounded, and tfp over $\mathcal{O}_{K}$. Enlarging $A_{0}^{\prime}$ inside $A^{\circ}$ by adding lifts of finitely many generators of $\tilde{A}$ over $A_{0}^{\prime}$, we find an open bounded tfp $\mathcal{O}_{K}$-subalgebra $A_{0}\subset A^{\circ}$ such that $A_{0}/(A^{\circ\circ}\cap A_{0})\to\tilde{A}$ is bijective.
But we also know that $\sqrt{tA_{0}}=A^{\circ\circ}\cap A_{0}$: any element of the right side is topologically nilpotent and in $A_{0}$, and must thus have a large enough power inside $tA_{0}$. Thus, we have found the subalgebra $A_{0}$ indicated at the start of this paragraph. Next, we also recall an alternative description of the locally ringed space $(\mathrm{Spa}(A,A^{\circ}),\mathcal{O}^{+})$. Consider the category of all proper maps $f_{i}:X_{i}\to\mathrm{Spec}(A_{0})$ of schemes which are isomorphisms after inverting $t$. For each $f_{i}$, let $X_{i,t=0}\subset X_{i}$ be the special fibre (regarded merely as a closed subset). Set $Z=\lim_{i}X_{i,t=0}$, so $Z$ is a spectral space. Let $\pi_{i}:Z\to X_{i}$ be the structure map, and define the structure sheaf $\mathcal{O}_{Z}$ of $Z$ via $\mathcal{O}_{Z}:=\mathop{\mathrm{colim}}_{i}\pi_{i}^{-1}\mathcal{O}_{X_{i}}$. Then it is a basic fact that $\mathrm{Spa}(A,A^{\circ})=Z$ as topological spaces, and $\mathcal{O}^{+}$ identifies with the $t$-adic completion of $\mathcal{O}_{Z}$. For future use, we remark that, by passing to a cofinal subsystem, we may (and do) assume that each $X_{i}$ is $\mathcal{O}_{K}$-flat, i.e., $\mathcal{O}_{X_{i}}$ is $t$-torsionfree. This condition implies that the generic fibre $X_{i}[1/t]$ is dense in each $X_{i}$, and thus all the transition maps $X_{i}\to X_{j}$ in the system are surjective: their image is a closed set containing a dense open. In particular, $\mathcal{O}_{Z}$ is $t$-torsionfree. By spectrality, the maps $\pi_{i}:Z\to X_{i,t=0}$ are also surjective for all $i$. Finally, we also remark that $\mathcal{O}_{Z}$ is integrally closed in $\mathcal{O}_{Z}[1/t]$ by generalities on blowups. Using the preceding two observations, we prove the equivalences. $(1)\Rightarrow(2)$: Fix $x\in\mathrm{Spa}(A)$ with $\mathop{\mathrm{sp}}(x)\in\mathop{\mathrm{Spec}}\tilde{A}$ being a generic point. 
Using a suitable Noether normalization, we then learn that $A^{\circ}_{\mathop{\mathrm{sp}}(x)}$ is a rank $1$ valuation ring with pseudouniformizer $t$: this ring is the integral closure of a rank $1$ $t$-complete valuation ring in a finite extension of its fraction field. Any point $y\in\mathop{\mathrm{sp}}^{-1}(\mathop{\mathrm{sp}}(x))$ is represented by an equivalence class of maps $A_{0}\to V$ where $V$ is a $t$-complete $t$-torsionfree valuation ring with the property that the closed point of $\mathrm{Spec}(V)$ is carried to $\mathop{\mathrm{sp}}(x)$. But any such map factors uniquely as $A_{0}\to A^{\circ}_{\mathop{\mathrm{sp}}(x)}\to V$. Thus, taking $V$ to be the $t$-completion of $A^{\circ}_{\mathop{\mathrm{sp}}(x)}$ gives the unique such map up to equivalence, showing that $y=x$. $(2)\Rightarrow(3)$: Write $x_{i}\in X_{i}$ for the image of $x$, so $\mathcal{O}^{+}_{\mathrm{Spa}(A,A^{+}),x}$ and $\mathop{\mathrm{colim}}\mathcal{O}_{X_{i},x_{i}}$ identify after $t$-completion. Now if $\\{x\\}=\mathop{\mathrm{sp}}^{-1}(\mathop{\mathrm{sp}}(x))$, then $x_{i}=f_{i}^{-1}(\mathop{\mathrm{sp}}(x))$ is the unique preimage of $\mathop{\mathrm{sp}}(x)$. The map $A_{0,\mathop{\mathrm{sp}}(x)}\to\mathcal{O}_{X_{i},x_{i}}$ is then integral (by properness of $X_{i}\to\mathrm{Spec}(A_{0})$, base changed to $\mathrm{Spec}(A_{0,\mathop{\mathrm{sp}}(x)})$) and an isomorphism after inverting $t$. Taking a colimit, we learn that $A_{0,\mathop{\mathrm{sp}}(x)}\to\mathop{\mathrm{colim}}_{i}\mathcal{O}_{X_{i},x_{i}}$ is integral and an isomorphism after inverting $t$. But the target is also integrally closed in its $t$-localization, so it must coincide with $A^{\circ}_{\mathop{\mathrm{sp}}(x)}$; here we implicitly use that $\mathrm{Spec}(\tilde{A})\simeq\mathrm{Spec}(A_{0}/t)$ to identify $\mathop{\mathrm{sp}}(x)$ with a point of $\mathrm{Spec}(A^{\circ})$, as well as the fact that $A^{\circ}$ is the integral closure of $A_{0}$.
Thus, we have shown that the $t$-completion of $A^{\circ}_{\mathop{\mathrm{sp}}(x)}\to\mathcal{O}^{+}_{\mathrm{Spa}(A,A^{+}),x}$ is an isomorphism. As the $t$-completion of the target is $K_{x}^{+}$, the claim follows. $(3)\Rightarrow(2)$: This is clear from the description of points of $\mathrm{Spa}(A,A^{\circ})$ as equivalence classes of maps $A^{\circ}\to V$ to $t$-complete and $t$-torsionfree valuation rings. $(2)\Rightarrow(1)$: Assume that $x$ is the unique preimage of $\mathop{\mathrm{sp}}(x)$ and yet that $\mathop{\mathrm{sp}}(x)\in\mathrm{Spec}(\tilde{A})$ is not a generic point; we shall obtain a contradiction. Let $\overline{y}\in\mathrm{Spec}(\tilde{A})$ be a generic point of an irreducible component $Y\subset\mathrm{Spec}(\tilde{A})$ containing $\mathop{\mathrm{sp}}(x)$. Applying the implications $(1)\Rightarrow(2)$ and $(1)\Rightarrow(3)$ to a lift $y\in\mathrm{Spa}(A,A^{+})$ of $\overline{y}$, we learn that $\overline{y}$ has a unique lift $y\in\mathrm{Spa}(A,A^{+})$, and that $K_{y}^{+}$ is the $t$-completed local ring of $A^{\circ}$ at $\overline{y}$. In particular, the residue field of $K_{y}^{+}$ identifies with the function field $K(Y)$ of the irreducible component $Y$. Choose a valuation subring $\overline{V}\subset K(Y)$ that is an $\tilde{A}$-algebra and has center $\mathop{\mathrm{sp}}(x)\in\mathrm{Spec}(\tilde{A})$; this valuation has rank $\geq 1$ as $\overline{y}\neq\mathop{\mathrm{sp}}(x)$. Let $V\subset K_{y}^{+}$ be the preimage of $\overline{V}$. Then $V$ is a $t$-complete valuation ring of rank $\geq 2$, is an $A^{\circ}$-algebra, and has center $\mathop{\mathrm{sp}}(x)$ on $\mathrm{Spec}(\tilde{A})$. The resulting map $A^{\circ}\to V$ then gives a point $x^{\prime}\in\mathrm{Spa}(A,A^{\circ})$ with rank $\geq 2$ and image $\mathop{\mathrm{sp}}(x)$ in $\mathrm{Spec}(\tilde{A})$. But $x$ is the unique preimage of $\mathop{\mathrm{sp}}(x)$, so $x=x^{\prime}$. We now obtain a contradiction as $x$ had rank $1$ by assumption, while $x^{\prime}$ has rank $\geq 2$ by construction.
$(1)\Longleftrightarrow(4)$: This is proven in [Ber90, 2.4.4]. ∎ ###### Remark 2.3. One may ask if the equivalences proven in Proposition 2.2 continue to hold true for the affinoid adic space $\mathrm{Spa}(A,A^{+})$ attached to any complete Tate ring $(A,A^{+})$. Inspection of the proof shows that the equivalence of $(2)$ and $(3)$ and the implication $(2)\Rightarrow(1)$ hold true in general. On the other hand, there is a perfectoid affinoid algebra where $(1)\Rightarrow(2)$ and $(1)\Rightarrow(3)$ in Proposition 2.2 fail, as we explain next. We are not aware of a broader class of algebras (than the $K$-affinoid ones) where Proposition 2.2 holds true. Take a complete and algebraically closed extension $C/\mathbf{Q}_{p}$ whose algebraically closed residue field $k$ has transcendence degree $\geq 1$ over $\mathbf{F}_{p}$. Write $V=\mathcal{O}_{C}$, and let $W\subset V$ be the preimage of $\overline{\mathbf{F}_{p}}\subset k$. Then $W$ is a $p$-complete and $p$-torsionfree local ring that is integrally closed in $W[1/p]=V[1/p]$. Moreover, we have $\sqrt{pW}=\sqrt{pV}$ with $W/\sqrt{pW}\to V/\sqrt{pV}$ identifying with the map $\overline{\mathbf{F}_{p}}\subset k$. A lift $x\in V\subset W[1/p]$ of any element of $k-\overline{\mathbf{F}_{p}}$ has the property that $x,x^{-1}\notin W$, so $W$ is not a valuation ring. Endowing $W$ with the $p$-adic topology, we obtain a complete uniform Tate ring $(W[1/p],W)$ with $W$ perfectoid. The special fibre $\widetilde{W[1/p]}=W/\sqrt{pW}$ identifies with $\overline{\mathbf{F}_{p}}$, so $\mathop{\mathrm{Spec}}(\widetilde{W[1/p]})$ has a unique point which is thus a generic point. On the other hand, the local ring of $\mathrm{Spec}(W)$ at this point is simply $W$, which is not a valuation ring. This shows that the implication $(1)\Rightarrow(3)$ in Proposition 2.2 fails in this example.
To show that $(1)\Rightarrow(2)$ also fails in this example, one calculates that $\mathrm{Spa}(W[1/p],W)$ identifies with the Riemann-Zariski space of $k$, which has more than $1$ element; we omit the argument. ###### Definition 2.4. Let $A$ be a tft $K$-algebra. A _Shilov point_ $x\in\mathop{\mathrm{Spa}}A$ is any point satisfying the equivalent conditions of Proposition 2.2. Recall that in an analytic adic space, every rank one point is generic in the sense of spectral spaces. For algebraic purposes, the following class of points is more relevant. ###### Definition 2.5. Let $X$ be any rigid space over a non-archimedean field $K$. A point $x\in X$ is _weakly Shilov_ if there is an open affinoid subset $x\in W\subset X$ such that $x$ is a Shilov point of $W$. In particular, such a point has rank one. ###### Example 2.6. For $X=\mathrm{Spa}(K\langle T\rangle)$ the closed unit disc, the weakly Shilov points are exactly the points of type $2$ (see [Sch12, Example 2.20] for the classification of points on $X$). This follows from the characterization in Proposition 2.9, since the secondary residue field of $x$ is algebraic over the residue field of $K$ when $x$ has type $1$, $3$ or $4$ (and points of type $5$ have rank $2$, so they are not weakly Shilov). ###### Remark 2.7. The valuations isolated in Definition 2.5 are sometimes also called divisorial valuations in birational geometry (see Proposition 2.9 (3)). ###### Lemma 2.8. Let $X$ be a rigid space over a nonarchimedean field $K$. Then weakly Shilov points are dense in $X$. ###### Proof. By definition, any open affinoid $U\subset X$ contains a weakly Shilov point. ∎ We also have the following alternative characterization of weakly Shilov points. ###### Proposition 2.9. Let $X$ be a rigid space over a nonarchimedean field $K$ and let $x\in X$ be a rank one point. The following are equivalent: 1. (1) $x$ is weakly Shilov. 2.
(2) The transcendence degree of the secondary residue field $\tilde{K}_{x}$ over $K^{\circ}/\mathfrak{m}$ equals the local dimension $\dim_{x}X$ of $X$ at $x$. 3. (3) (Applicable only when $X$ is quasiseparated and quasi-paracompact) There exists a formal model $\mathfrak{X}$ such that $x$ maps to the generic point of an irreducible component of $\mathfrak{X}_{s}$. ###### Proof. $(1)\Longleftrightarrow(2)$ follows by combining Lemme 4.4 and Corollaire 4.15 in [Poi13]. $(3)\Rightarrow(1)$: if there exists a formal model $\mathfrak{X}$ as in (3), then taking $W\subset X$ to be the preimage of any formal affine open of $\mathfrak{X}$ containing the image of $x$ shows that $x$ is weakly Shilov. $(1)\Rightarrow(3)$: Assume $x\in X$ is weakly Shilov. We can then find an affinoid open $\mathrm{Spa}(R)\subset X$ containing $x$ such that $x\in\mathrm{Spa}(R)$ is Shilov. By Raynaud, there is a formal model $\mathfrak{X}$ of $X$ such that $\mathrm{Spa}(R)\subset X$ is the preimage of a formal affine open $\mathrm{Spf}(R_{0})\subset\mathfrak{X}$, cf. [Bos14, Theorem 8.4.3]. By passing to a refinement, we can assume that the map $R_{0}/t\to\tilde{R}$ gives a homeomorphism of spectra; see the first paragraph of the proof of Proposition 2.2. As $x$ is weakly Shilov, its image in $\mathrm{Spec}(\tilde{R})$ is a generic point. But then its image in $\mathrm{Spec}(R_{0}/t)$ is also a generic point since $\mathrm{Spec}(\tilde{R})\to\mathrm{Spec}(R_{0}/t)$ is a homeomorphism by construction. As $\mathrm{Spf}(R_{0})\subset\mathfrak{X}$ is a formal open immersion, it follows that $x$ gives a generic point of $\mathfrak{X}_{s}$ as well. ∎ ###### Corollary 2.10. Say $X$ is a rigid space over a nonarchimedean field $K$, and $Z\subset X$ is a nowhere dense Zariski closed set. Then $Z$ does not contain any weakly Shilov point of $X$. In particular, if $X$ is reduced, then any weakly Shilov point of $X$ lies in the regular locus of $X$. ###### Proof.
As $Z\subset X$ is a nowhere dense Zariski closed set, we have $\dim_{x}(Z)<\dim_{x}(X)$ for all $x\in Z$. The first statement now follows from the characterization of weakly Shilov points in Proposition 2.9 (2). The second statement follows from the first statement applied to the locus $Z\subset X$ of points that are not regular, which is a nowhere dense Zariski closed set by excellence considerations. ∎ ### 2.2. Tools involving the cotangent complex To prove our generic smoothness result, it will be convenient to use the analytic cotangent complex as this provides a homologically well-behaved object detecting smoothness in non-noetherian situations (such as topologically finitely presented algebras over non-discrete valuation rings). In this subsection, we recall some results on this object. ###### Notation 2.11. Fix a complete nonarchimedean field $K$ with valuation ring $V\subset K$ and a pseudouniformizer $t\in V$. For any map $A\to B$ of $V$-algebras, write $L^{an}_{B/A}$ for the derived $t$-completion of $L_{B/A}$. A tft (or topologically finite type) $V$-algebra is a $V$-algebra $A$ of the form $V[x_{1},...,x_{n}]^{\wedge}/I$ (where the completion is $t$-adic). If moreover $I$ is finitely generated, we say that $A$ is tfp (or topologically finitely presented). The class of tfp $V$-algebras has good properties such as coherence and classical $t$-adic completeness, see [GR03, Proposition 7.1.1]. Moreover, any tft $V$-algebra which is $t$-torsion-free is in fact tfp, see [FGK11, Corollary 7.3.6]. ###### Theorem 2.12 (Gabber-Ramero). Say $A\to B$ is a map of tfp $V$-algebras. Then the following hold true: 1. (1) $L^{an}_{B/A}$ is a pseudocoherent $B$-complex. 2. (2) Assume that $A\to B$ induces a smooth map of relative dimension $n$ on taking adic generic fibres. Then $L^{an}_{B/A}[1/t]$ is a finite projective $B[1/t]$-module of rank $n$. 3. (3) If $A\to B$ is surjective, then $L_{B/A}\simeq L^{an}_{B/A}$. ###### Proof. (1) is [GR03, Theorem 7.1.31].
(2) is the key assertion checked in the proof of [GR03, Theorem 7.2.39]. (3) is [GR03, Theorem 7.1.29]. ∎ ###### Lemma 2.13. Let $R$ be a finitely presented flat $V$-algebra. 1. (1) $L_{R/V}$ is a pseudocoherent $R$-complex. 2. (2) If $R$ is also $V$-finite, then $L_{R/V}$ is derived $t$-complete. ###### Proof. For (1), by noetherian approximation, we can write $V\to R$ as the base change of a finitely presented flat map $V_{0}\to R_{0}$ of finitely generated $\mathbf{Z}$-algebras along some map $V_{0}\to V$. Then $L_{R_{0}/V_{0}}$ is pseudocoherent, and $L_{R_{0}/V_{0}}\otimes_{R_{0}}^{L}R\simeq L_{R/V}$ by Tor independent base change, so $L_{R/V}$ is also pseudocoherent. For (2), observe that if $R$ is $V$-finite, then $R$ is a finite free $V$-module (as any finitely presented flat $V$-module is finite free). In particular, a pseudocoherent $R$-complex is also pseudocoherent as a $V$-complex. The claim now follows as any pseudocoherent $V$-complex is derived $t$-complete (since $V$ itself is so). ∎ ###### Corollary 2.14. Let $E/K$ be a finite extension, and let $W_{0}\subset E^{\circ}$ be an open tfp $V$-subalgebra. 1. (1) $L_{W_{0}/V}$ is pseudocoherent so $L_{W_{0}/V}\simeq L^{an}_{W_{0}/V}$, whence $L_{E/K}\simeq L^{an}_{W_{0}/V}[1/t]$. 2. (2) $H_{i}(L^{an}_{W_{0}/V}[1/t])=0$ for $i\neq 0,1$. ###### Proof. For (1): since $W_{0}\subset E^{\circ}$, choosing monic equations defining generators of $W_{0}$ shows that any such $W_{0}$ is a finitely presented flat $V$-algebra. Lemma 2.13 then implies that $L_{W_{0}/V}$ is pseudocoherent and thus already derived $t$-complete, so $L_{W_{0}/V}\simeq L^{an}_{W_{0}/V}$. The last part of (1) follows by inverting $t$ and noting that formation of cotangent complexes commutes with localization. (2) follows from (1) and a general fact: the cotangent complex of any extension of fields only has homology in degrees $1$ and $0$. 
Indeed, this follows from transitivity triangles and the fact that any field is ind-smooth over its prime subfield (which is perfect) by generic smoothness in algebraic geometry. ∎ We need the following result later. ###### Theorem 2.15 (Quillen). Fix a noetherian ring $A$ and a maximal ideal $\mathfrak{m}$ with residue field $k=A/\mathfrak{m}$. If $L_{k/A}$ is concentrated in degree $-1$, then $A$ is regular at $\mathfrak{m}$. ###### Proof. [Qui, Corollary 10.5] shows that $\mathfrak{m}$ is generated by a regular sequence if $H_{2}(L_{k/A})=0$. As the maximal ideal of a noetherian local ring is generated by a regular sequence exactly when the ring is regular, the claim follows. ∎ The following lemma will be useful later as well. ###### Lemma 2.16. Say $A$ is a reduced Jacobson noetherian ring. Let $M\to N$ be a map of finitely generated $A$-modules such that $M/\mathfrak{m}M\to N/\mathfrak{m}N$ is injective for every maximal ideal $\mathfrak{m}$. If $M$ is a locally free $A$-module, then $M\to N$ is injective. ###### Proof. The question is local, so we can assume $M=A^{\oplus n}$ is a free module. Let $x=(x_{1},...,x_{n})\in M$ lie in the kernel of $M\to N$. The hypothesis implies that $x_{i}\in\mathfrak{m}$ for every maximal ideal $\mathfrak{m}\subset A$. But the intersection of all maximal ideals of $A$ is its nilradical (as $A$ is Jacobson) which is $0$ (as $A$ is reduced). So $x=0$, as wanted. ∎ ### 2.3. Regularity of Shilov fibers In this subsection, we establish the key technical ingredient behind our generic smoothness result, using the analytic cotangent complex. ###### Notation 2.17. Fix a complete nonarchimedean field $C$ with valuation ring $\mathcal{O}_{C}\subset C$ and a pseudouniformizer $t\in\mathcal{O}_{C}$. (Contrary to modern notational conventions, we are not assuming $C$ is algebraically closed; we hope this causes no confusion.) Our goal is the following. ###### Theorem 2.18. Let $T\to R$ be a map of tfp $\mathcal{O}_{C}$-algebras.
Assume $\mathrm{Spa}(R[1/t])$ is smooth over $C$. Let $V$ denote the $t$-completed Zariski local ring of $T$ at a generic point $\eta\in\mathrm{Spec}(T/t)\subset\mathrm{Spec}(T)$, and set $R_{V}=R\widehat{\otimes}_{T}V$. Then $R_{V}[1/t]$ is regular. ###### Proof. Our strategy is to first simplify $T$, and then argue that $R_{V}[1/t]$ is regular using the cotangent complex. First, observe that the statement of the theorem is Zariski local around $\eta\in\mathrm{Spf}(T)$, so we may shrink $\mathrm{Spf}(T)$ around $\eta$ to assume $\mathrm{Spec}(T/t)$ is irreducible with generic point $\eta$. If $T^{\prime}=\mathcal{O}_{C}\langle x_{1},...,x_{n}\rangle\to T$ is a finite injective map, then $\mathrm{Spec}(T/t)\to\mathrm{Spec}(T^{\prime}/t)$ is finite and surjective. As both these schemes are irreducible, the image of the generic point $\eta\in\mathrm{Spec}(T/t)$ is the generic point $\eta^{\prime}\in\mathrm{Spec}(T^{\prime}/t)$, and $\eta$ is the unique preimage of $\eta^{\prime}$. Writing $V^{\prime}$ for the $t$-completed Zariski local ring of $T^{\prime}$ at $\eta^{\prime}$, we must then have $T\widehat{\otimes}_{T^{\prime}}V^{\prime}\simeq V$ by base change. But then $R\widehat{\otimes}_{T^{\prime}}V^{\prime}\simeq R_{V}$. Thus, we may replace $T$ with $T^{\prime}$ to assume that $T$ is a standard Tate algebra over $\mathcal{O}_{C}$. In particular, $T$ is formally smooth over $\mathcal{O}_{C}$. In this case, we have $T=T[1/t]^{\circ}$, so $V$ is actually a $t$-complete and $t$-torsionfree rank one valuation ring (either by explicit calculation, or as explained in Proposition 2.2). Next, we describe $L^{an}_{V/\mathcal{O}_{C}}$. By definition, the ring $V$ is a $t$-completed ind-Zariski localization of $T$. As the formation of the $t$-completed cotangent complex is compatible with $t$-completed localizations and $t$-completed filtered colimits, we learn that $L^{an}_{V/\mathcal{O}_{C}}\simeq L^{an}_{T/\mathcal{O}_{C}}\widehat{\otimes}_{T}^{L}V$.
But $T/\mathcal{O}_{C}$ is formally smooth, so $L^{an}_{T/\mathcal{O}_{C}}$ is a finite projective $T$-module placed in degree $0$, whence $L^{an}_{V/\mathcal{O}_{C}}$ is a finite free $V$-module placed in degree $0$. We now begin proving the theorem. Since $V$ is a $t$-complete and $t$-torsionfree rank $1$ valuation ring, the ring $K:=V[1/t]$ is a nonarchimedean field extension of $C$ with $K^{\circ}=V$. As $R_{V}$ is a tfp $V$-algebra, we can (and will) regard $R_{V}[1/t]$ as an affinoid $K$-algebra. To show regularity of $R_{V}[1/t]$, it is enough to show that the local rings of $R_{V}[1/t]$ at all closed points are regular: the regular locus in $R_{V}[1/t]$ is open (as affinoid $K$-algebras are noetherian and excellent) and the maximal ideals are dense (as affinoid $K$-algebras are Jacobson). A closed point is given by a quotient $R_{V}[1/t]\to E$ where $E/K$ is a finite extension; fix such a point. By Theorem 2.15, it suffices to show that $L_{E/R_{V}[1/t]}$ has homology only in degrees $0,1$. (In fact, there is no $H_{0}$ as $R_{V}[1/t]\to E$ is surjective, but it will be convenient to formulate things this way.) Let $W_{0}\subset E^{\circ}$ be the image of $R_{V}$ under this map. As $E^{\circ}$ is the integral closure of $V$ in $E$, the ring $W_{0}$ is a finitely presented finite flat $V$-algebra. Since the map $R_{V}\to W_{0}$ is a surjection of tfp $V$-algebras, we have $L_{W_{0}/R_{V}}\simeq L^{an}_{W_{0}/R_{V}}$ by Theorem 2.12 (3), and hence $L_{E/R_{V}[1/t]}\simeq L^{an}_{W_{0}/R_{V}}[1/t]$ by inverting $t$, so it is enough to show that $L^{an}_{W_{0}/R_{V}}[1/t]$ has homology only in degrees $0,1$. 
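For the reader's convenience, the identifications established so far can be recorded in one line; this display merely restates the isomorphisms above and involves no new input:

```latex
L_{E/R_{V}[1/t]} \;\simeq\; L_{W_{0}/R_{V}}[1/t] \;\simeq\; L^{an}_{W_{0}/R_{V}}[1/t],
```

so the theorem follows once the rightmost term is shown to have homology only in degrees $0$ and $1$.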
Consider the transitivity triangle for $\mathcal{O}_{C}\to R_{V}\to W_{0}$: $L^{an}_{R_{V}/\mathcal{O}_{C}}\widehat{\otimes}_{R_{V}}^{L}W_{0}\to L^{an}_{W_{0}/\mathcal{O}_{C}}\to L^{an}_{W_{0}/R_{V}}.$ (1) As $R\to R_{V}$ is a $t$-completed Zariski localization, we have $L^{an}_{R/\mathcal{O}_{C}}{\otimes}_{R}^{L}R_{V}\simeq L^{an}_{R/\mathcal{O}_{C}}\widehat{\otimes}_{R}^{L}R_{V}\simeq L^{an}_{R_{V}/\mathcal{O}_{C}},$ where the first isomorphism is from the pseudocoherence of $L^{an}_{R/\mathcal{O}_{C}}$ coming from Theorem 2.12 (1). The same reasoning also allows us to drop the completion on the leftmost term in (1). Inverting $t$ in (1) then gives a triangle $L^{an}_{R_{V}/\mathcal{O}_{C}}[1/t]{\otimes}_{R_{V}[1/t]}^{L}E\to L^{an}_{W_{0}/\mathcal{O}_{C}}[1/t]\to L^{an}_{W_{0}/R_{V}}[1/t].$ (2) Our task was to show that the term on the right has homology only in degrees $0,1$. The term on the left is a finite projective $E$-module in degree $0$: indeed, $L^{an}_{R/\mathcal{O}_{C}}[1/t]$ is a finite projective $R[1/t]$-module by Theorem 2.12 (2) and the smoothness assumption on $\mathrm{Spa}(R[1/t])$, and we have $L^{an}_{R/\mathcal{O}_{C}}[1/t]\otimes_{R[1/t]}^{L}R_{V}[1/t]\simeq L^{an}_{R_{V}/\mathcal{O}_{C}}[1/t]$ by the reasoning explained above. By the long exact sequence, it thus suffices to check that $L^{an}_{W_{0}/\mathcal{O}_{C}}[1/t]$ has homology only in degrees $0,1$. For this, we consider the transitivity triangle for $\mathcal{O}_{C}\to V\to W_{0}$ after inverting $t$: $L^{an}_{V/\mathcal{O}_{C}}[1/t]\otimes_{K}^{L}E\to L^{an}_{W_{0}/\mathcal{O}_{C}}[1/t]\to L^{an}_{W_{0}/V}[1/t].$ Now $L^{an}_{V/\mathcal{O}_{C}}[1/t]$ is a finite free $K$-module as explained previously, so the term on the left is a finite free $E$-module. It remains to observe that $L^{an}_{W_{0}/V}[1/t]$ has homology in degrees $0,1$ by Corollary 2.14. ∎ ###### Remark 2.19. In Theorem 2.18, one cannot strengthen the conclusion from regularity to smoothness.
For example, the Frobenius map on the Tate algebra over any characteristic $p$ nonarchimedean field satisfies the hypothesis of Theorem 2.18, but does not have a single smooth fibre. ###### Remark 2.20. The proof of Theorem 2.18 relies on the analytic cotangent complex. When $C$ is discretely valued, it is possible to prove Theorem 2.18 by avoiding the cotangent complex and using instead Popescu’s desingularization theorem. Indeed, one first observes that $R$ is an excellent noetherian ring: by Elkik’s theorem, we can write $R$ as the $t$-completion of a finite type $\mathcal{O}_{C}$-algebra, so $R$ is excellent since $\mathcal{O}_{C}$ is so. But then the maps $R\to R\otimes_{T}T_{\eta}\to R_{V}$ are regular: the first map is a localization, while the second one is the completion of an excellent ring. By Popescu’s theorem, the map $R\to R_{V}$ is ind-smooth. It follows that $R_{V}[1/t]$ must be regular since $R[1/t]$ is so. In fact, this reasoning shows that any property of $R[1/t]$ that is local for the smooth topology passes to $R_{V}[1/t]$. We do not know how to prove the analogous statement in the general case. ### 2.4. Generic flatness Fix a nonarchimedean base field $K$. Our goal in this section is the following theorem. ###### Theorem 2.21. Let $f:X\to Y$ be a quasicompact map of rigid spaces over $K$. Assume that $Y$ is geometrically reduced. Let $\mathrm{Fl}_{X/Y}\subset Y$ be the maximal open subset $U\subset Y$ such that $X\times_{Y}U\to U$ is flat. Then $\mathrm{Fl}_{X/Y}$ contains all weakly Shilov points of $Y$. In particular, $\mathrm{Fl}_{X/Y}$ is a dense open subset of $Y$. Observe that even the non-emptiness of $\mathrm{Fl}_{X/Y}$ is not clear a priori. Moreover, if $K$ has characteristic $0$, then it suffices to assume that $Y$ is reduced: as in algebraic geometry, this implies geometric reducedness in characteristic $0$ (see [Con99, Lemma 3.3.1]). ###### Lemma 2.22.
Let $A$ be a ring with a locally nilpotent ideal $J\subset A$, and let $f:R\to S$ be any map of finitely presented flat $A$-algebras. Then $f$ is flat if and only if $\overline{f}:R/JR\to S/JS$ is flat. ###### Proof. “Only if” is clear. Conversely, suppose $\overline{f}$ is flat. If $J$ is nilpotent (e.g. if $A$ is Artinian), then flatness of $f$ follows from Proposition 0.8.3.7 in [FK18]. In the general case, write $A$ as a filtered colimit $A\simeq\mathop{\mathrm{colim}}_{i\in I}A_{i}$ where $A_{i}\subset A$ is a filtered system of $\mathbf{Z}$-algebras of finite type. Set $J_{i}=J\cap A_{i}$, so $J_{i}\subset A_{i}$ is a nilpotent ideal. By standard approximation arguments, there is an index $i_{0}\in I$ such that $f$ is the base change along $A_{i_{0}}\to A$ of a map $f_{i_{0}}:R_{i_{0}}\to S_{i_{0}}$ of finitely presented $A_{i_{0}}$-algebras. For all $i\geq i_{0}$, write $f_{i}:R_{i}\to S_{i}$ for the evident base change of $f_{i_{0}}$. By two applications of [Gro66, Théorème 11.2.6], $R_{i}$ and $S_{i}$ are flat $A_{i}$-algebras for all sufficiently large $i$. Next, note that $\overline{f}$ is the colimit of the diagrams $\overline{f_{i}}:R_{i}/J_{i}R_{i}\to S_{i}/J_{i}S_{i}$. Since $\overline{f}$ is flat by assumption, $\overline{f_{i}}$ is flat for all sufficiently large $i$ by another application of [Gro66, Théorème 11.2.6]. But $J_{i}$ is nilpotent, so flatness of $\overline{f_{i}}$ implies flatness of $f_{i}$ by the special case of the lemma treated in the first paragraph of the proof. Therefore $f_{i}$ is flat for all sufficiently large $i$, so $f$ is flat. ∎ Recall that an adic ring $A$ admitting a finitely generated ideal of definition $I$ is called _topologically universally (t.u.) rigid-noetherian_ if the scheme $\mathop{\mathrm{Spec}}A\left\langle T_{1},\dots,T_{n}\right\rangle\smallsetminus V(IA\left\langle T_{1},\dots,T_{n}\right\rangle)$ is Noetherian for all $n\geq 0$. 
If $A$ is a tft $K$-algebra, then any ring of definition $A_{0}\subset A$ is t.u. rigid-noetherian. ###### Proposition 2.23. Let $A$ be a t.u. rigid-noetherian ring, and let $J\subset A$ be any open ideal consisting of topologically nilpotent elements. Let $f:R\to S$ be a morphism of flat and topologically finitely presented $A$-algebras. Then $f$ is flat if and only if $\overline{f}:R/JR\to S/JS$ is flat. ###### Proof. “Only if” is clear. For the converse, choose a finitely generated ideal of definition $I\subset A$ contained in $J$; let $\overline{J}_{n}\subset A/I^{n}$ be the image of $J$, so $\overline{J}_{n}$ is locally nilpotent. Since $R/JR\to S/JS$ is flat, the previous lemma implies that $R/I^{n}R\to S/I^{n}S$ is flat for all $n\geq 1$. By Corollary 0.8.3.9 in [FK18], we then deduce that $R\to S$ is flat. ∎ ###### Proposition 2.24. Let $K$ be a nonarchimedean field, and let $f:A\to B$ be a map of tft $K$-algebras with $A$ geometrically reduced. Then there is a (nonempty) rational subset $U\subset\mathop{\mathrm{Spa}}A$ containing the Shilov boundary such that $\mathop{\mathrm{Spa}}B\times_{\mathop{\mathrm{Spa}}A}U\to U$ is flat. ###### Proof. By (the proof of) Theorem 1.3 in [BLR95], we can find a finite étale Galois extension $K^{\prime}/K$ such that the unit ball $A_{K^{\prime}}^{\circ}\subset A_{K^{\prime}}\overset{def}{=}A\otimes_{K}K^{\prime}$ is topologically finitely presented over $K^{\prime\circ}$ and the special fiber $A_{K^{\prime}}^{\circ}/K^{\prime\circ\circ}A_{K^{\prime}}^{\circ}$ is (geometrically) reduced. Choose an open tfp $K^{\prime\circ}$-algebra $B_{0}\subset B\otimes_{K}K^{\prime}$ such that $(f\otimes_{K}K^{\prime})(A_{K^{\prime}}^{\circ})\subset B_{0}$. Let $k^{\prime}$ be the residue field of $K^{\prime\circ}$.
Then $A_{K^{\prime}}^{\circ}\otimes_{K^{\prime\circ}}k^{\prime}\to B_{0}\otimes_{K^{\prime\circ}}k^{\prime}$ is a map of finite-type $k^{\prime}$-algebras with reduced source, so there exists a non-zero-divisor $f\in A_{K^{\prime}}^{\circ}\otimes_{K^{\prime\circ}}k^{\prime}$ such that $(A_{K^{\prime}}^{\circ}\otimes_{K^{\prime\circ}}k^{\prime})[1/f]\to(B_{0}\otimes_{K^{\prime\circ}}k^{\prime})[1/f]$ is flat. Choose any lift $\tilde{f}\in A_{K^{\prime}}^{\circ}$, and let $C$ be the $\pi$-adic completion of $A_{K^{\prime}}^{\circ}[1/\tilde{f}]$, where $\pi\in K^{\prime\circ}$ is a pseudouniformizer; similarly, let $D$ be the $\pi$-adic completion of $B_{0}[1/\tilde{f}]$. Applying the previous proposition with $A=K^{\prime\circ}$, $J=K^{\prime\circ\circ}$, $R=C$, and $S=D$, we deduce that the map $C\to D$ is flat. Then $\mathop{\mathrm{Spa}}C[1/\pi]\to\mathop{\mathrm{Spa}}A_{K^{\prime}}$ is the inclusion of the Laurent domain $U(\frac{1}{\tilde{f}})$, and $\mathop{\mathrm{Spa}}B_{K^{\prime}}\times_{\mathop{\mathrm{Spa}}A_{K^{\prime}}}U(\frac{1}{\tilde{f}})\cong\mathop{\mathrm{Spa}}D[1/\pi]$ by design, so $\mathop{\mathrm{Spa}}B_{K^{\prime}}\times_{\mathop{\mathrm{Spa}}A_{K^{\prime}}}U(\frac{1}{\tilde{f}})\to U(\frac{1}{\tilde{f}})$ is flat. Moreover, $U(\frac{1}{\tilde{f}})$ contains all the Shilov points of $\mathop{\mathrm{Spa}}A_{K^{\prime}}$ by construction. It remains to undo the base change from $K$ to $K^{\prime}$. For this, let $h\in A^{\circ}$ be the image of $\tilde{f}$ under the norm map $A_{K^{\prime}}\to A$, and let $U=U(\frac{1}{h})\subset\mathop{\mathrm{Spa}}A$ be the associated Laurent domain. We claim that $U$ satisfies the conclusions of the proposition. Indeed, writing $\pi:\mathop{\mathrm{Spa}}A_{K^{\prime}}\to\mathop{\mathrm{Spa}}A$ for the evident finite étale map, it is clear from the definitions that $\pi^{-1}(U)=\cap_{g\in\mathrm{Gal}(K^{\prime}/K)}U(\frac{1}{\tilde{f}})g$ as subsets of $\mathop{\mathrm{Spa}}A_{K^{\prime}}$.
Since each $g$-translate of $U(\frac{1}{\tilde{f}})$ still contains all Shilov points of $\mathop{\mathrm{Spa}}A_{K^{\prime}}$, we see that $\pi^{-1}(U)$ contains all Shilov points of $\mathop{\mathrm{Spa}}A_{K^{\prime}}$, and so $U$ contains all Shilov points of $\mathop{\mathrm{Spa}}A$. Moreover, the results in the previous paragraph show that $f_{U}:\mathop{\mathrm{Spa}}B\times_{\mathop{\mathrm{Spa}}A}U\to U$ becomes flat after base change along the surjective finite étale map $\pi^{-1}(U)\to U$, so $f_{U}$ is flat by [War17, Proposition 3.1.12]. This concludes the proof. ∎ ###### Proof of Theorem 2.21. It suffices to check that $\mathrm{Fl}_{X/Y}$ contains an open neighborhood of every weakly Shilov point $y\in Y$. The formation of $\mathrm{Fl}_{X/Y}$ commutes with base change along open immersions $Y^{\prime}\to Y$ in the evident sense, so we’re reduced to showing that if $Y$ is affinoid, then $\mathrm{Fl}_{X/Y}$ contains an open neighborhood of every Shilov point of $Y$. For this, cover $X$ by finitely many open affinoid subsets $X_{i}$. Then each $\mathrm{Fl}_{X_{i}/Y}\subset Y$ contains all Shilov points of $Y$ by Proposition 2.24. Since $\mathrm{Fl}_{X/Y}=\cap_{i}\mathrm{Fl}_{X_{i}/Y}$, we deduce that $\mathrm{Fl}_{X/Y}$ contains all Shilov points of $Y$. ∎ ### 2.5. Generic smoothness In this subsection, we combine Theorem 2.18 and Theorem 2.21 to prove our main generic smoothness result. We begin by translating Theorem 2.18 into geometric language. ###### Theorem 2.25. Fix a nonarchimedean field $K$, and let $f:X=\mathop{\mathrm{Spa}}B\to Y=\mathop{\mathrm{Spa}}A$ be a map of $K$-affinoid rigid spaces. Suppose that $X$ is smooth over $\mathop{\mathrm{Spa}}K$. Then for any Shilov point $y\in Y$, the adic fiber $X_{y}=\mathop{\mathrm{Spa}}(B\widehat{\otimes}_{A}K_{y})$ is regular. In particular, if $K$ has characteristic zero, then $X_{y}\to\mathop{\mathrm{Spa}}K_{y}$ is smooth. ###### Proof. 
The first part follows immediately from Theorem 2.18 thanks to the characterization of Shilov points in Proposition 2.2 once one observes that the local rings of $\mathrm{Spa}(R)$ for a regular tft $K$-algebra $R$ are regular. The last statement follows as a rigid analytic space over a characteristic zero nonarchimedean field is smooth exactly when all of its local rings are regular. ∎ We also need the following result, relating smoothness and fibral smoothness. ###### Lemma 2.26. Let $f:X\to Y$ be any map of rigid spaces, and let $x\in X$ be any point with image $y=f(x)$. Then the following are equivalent 1. (1) $f$ is smooth at $x$. 2. (2) $f$ is flat in a neighborhood of $x$ and $X_{y}=X\times_{Y}\mathop{\mathrm{Spa}}(K_{y},K_{y}^{+})\to\mathop{\mathrm{Spa}}(K_{y},K_{y}^{+})$ is smooth at $x$. ###### Proof. This follows from (the proof of) Lemma 2.9.2 in [War17]. ∎ ###### Theorem 2.27. Let $f:X\to Y$ be any quasicompact map of rigid spaces in characteristic zero. Assume that $X$ is smooth and $Y$ is reduced. Then there is a dense open subset $U\subset Y$ such that $f^{-1}(U)\to U$ is smooth. ###### Remark 2.28. Consideration of standard examples (for instance, quasi-elliptic fibrations in characteristics $2$ and $3$) shows that no result like this can hold in positive characteristic, not even with the weaker conclusion that $f^{-1}(U)\to U$ is smooth up to a universal homeomorphism. ###### Proof. Let $y\in Y$ be any weakly Shilov point. By Theorem 2.21, we can choose some open subset $U(y)\subset Y$ containing $y$ such that $X\times_{Y}U(y)\to U(y)$ is flat. Moreover, by Theorem 2.25, the entire fiber $X_{y}$ is smooth over $y$. Applying Lemma 2.26 and the openness of the smooth locus in the source, we deduce that every point $x\in f^{-1}(y)$ admits a quasicompact open neighborhood $W_{x}$ in $X$ such that $W_{x}\to Y$ is smooth. 
Forming a suitable union of the $W_{x}$’s and using the quasicompactness of $f$, we deduce that there is a quasicompact open neighborhood $W\subset X$ of the fiber $f^{-1}(y)$ such that $W\to Y$ is smooth. Shrinking $U(y)$ further, we may assume by an easy quasicompactness argument that $f^{-1}(U(y))\subset W$. In particular, $X\times_{Y}U(y)\to U(y)$ is smooth. Since weakly Shilov points are dense in $Y$, setting $U=\cup_{y\,\mathrm{weakly\,Shilov}}U(y)$ concludes the proof. ∎ In the proper case, we can do even better. ###### Theorem 2.29. Let $f:X\to Y$ be a proper map of rigid spaces in characteristic zero, with $X$ smooth and $Y$ reduced. Then the maximal open subset $S_{f}\subset Y$ over which $f$ becomes smooth is a dense Zariski-open subset. ###### Proof. Let $W\subset X$ be the Zariski-open subset where $X\to Y$ is smooth, and let $Z=X-W$ be the complement regarded as a rigid space with its induced reduced structure. The composite morphism $g:Z\to Y$ is proper, so by Kiehl’s results $g_{\ast}\mathcal{O}_{Z}$ is a coherent sheaf on $Y$. Since $\mathrm{Supp}(g_{\ast}\mathcal{O}_{Z})=f(Z)$, we deduce that $f(Z)\subset Y$ is Zariski-closed, so then $S_{f}=Y-f(Z)$ is Zariski-open, and density follows from the previous theorem. ∎ ###### Remark 2.30 (Deducing Theorem 2.29 from [Duc18]). Let us indicate how to prove the Berkovich variant of Theorem 2.29, using results from [Duc18]. (We thank a referee for encouraging us to flesh out this deduction, and in fact for providing a complete argument.) Precisely, given a nonarchimedean base field $K$ of characteristic $0$, we claim that if $f:X\to Y$ is a proper map of $K$-analytic spaces in the sense of Berkovich with $X$ quasi-smooth and $Y$ reduced, then $f$ is smooth over a dense Zariski-open subset of $Y$.
To see this, let $T\subset Y$ be the image under $f$ of the non-smooth locus of $f$; the latter is Zariski-closed by [Duc18, Theorem 10.7.2], so the former is Zariski-closed by properness (as in the proof of Theorem 2.29). As $f$ is smooth over the Zariski-open subset $Y-T\subset Y$, it suffices to show that $Y-T$ is dense in $Y$. In fact, we claim that $Y-T$ contains all the Abhyankar points $y\in Y$. By [Duc18, Theorem 10.3.7], the map $f$ is flat at $y$, so by [Duc18, Theorem 5.3.4], it suffices to note that $f^{-1}(y)$ is quasi-smooth (or equivalently regular, as $K_{y}$ has characteristic $0$) by [Duc18, Theorem 6.3.7]. ## 3\. The six functors for Zariski-constructible sheaves In this section, we use the geometric results of §2 to develop the six functor formalism for Zariski-constructible sheaves in rigid analytic geometry over a characteristic $0$ field. ### 3.1. Definition of Zariski-constructible sheaves In this subsection we briefly review the definition and basic properties of Zariski-constructible sheaves on rigid analytic spaces. Most of this material is taken from [Han20]; the exception is Theorem 3.5. ###### Definition 3.1. Let $X$ be a rigid analytic space over a nonarchimedean field $K$, and let $\Lambda$ be a finite commutative ring. 1. (1) An étale sheaf $\mathscr{F}\in\mathrm{Sh}(X,\Lambda)$ is _lisse_ if there exists an étale cover $\\{U_{i}\to X\\}$ such that $\mathscr{F}|_{U_{i}}$ is the constant sheaf associated to a finitely generated $\Lambda$-module. 2. (2) A complex $A\in D(X,\Lambda)$ is lisse if the cohomology sheaves $\mathcal{H}^{n}(A)$ are lisse for all $n$. We write $D_{lis}(X,\Lambda)\subset D(X,\Lambda)$ for the full subcategory spanned by lisse complexes. 3.
(3) An étale sheaf $\mathscr{F}\in\mathrm{Sh}(X,\Lambda)$ is _Zariski-constructible_ if $X$ admits a locally finite stratification $X=\coprod_{i\in I}X_{i}$ into Zariski locally closed subsets $X_{i}$ such that $\mathscr{F}|X_{i}$ is a lisse sheaf of $\Lambda$-modules for all $i\in I$. We write $\mathrm{Sh}_{zc}(X,\Lambda)$ for the full subcategory of Zariski-constructible sheaves. 4. (4) A complex $A\in D(X,\Lambda)$ is Zariski-constructible if the cohomology sheaves $\mathcal{H}^{n}(A)$ are Zariski-constructible for all $n$. We write $D_{zc}(X,\Lambda)$ for the full subcategory spanned by Zariski-constructible complexes. One has the bounded below variant $D^{+}_{zc}(X,\Lambda)$; similarly for $D^{-}$ and $D^{b}$. Finally, let $D^{(b)}_{zc}(X,\Lambda)\subset D_{zc}(X,\Lambda)$ denote the full triangulated subcategory of complexes which are locally bounded; similarly for $D^{(-)}$ and $D^{(+)}$. The natural $\infty$-categorical refinements of $D(X,\Lambda),D^{(b)}_{zc}(X,\Lambda)$, etc. shall be denoted $\mathcal{D}(X,\Lambda),\mathcal{D}^{(b)}_{zc}(X,\Lambda)$, etc., as usual. ###### Warning 3.2. Let us record some subtleties concerning this notion. 1. (1) Given a Zariski-open immersion $j:U\to X$ and a Zariski-constructible sheaf $\mathscr{F}$ on $U$, the extension $j_{!}\mathscr{F}$ can fail to be Zariski-constructible on $X$, unlike the situation in algebraic geometry (see the next example). The main problem is that the operation of taking Zariski-closures in $X$ of Zariski-closed subsets of $U$ is poorly behaved in general (e.g., it does something non-trivial over $U$); this issue does not arise if $\mathscr{F}$ is itself locally constant. ###### Example 3.3. Let $\mathscr{F}$ be the direct sum of skyscraper sheaves supported at an infinite discrete set of classical points in $(\mathbf{A}^{1})^{an}$, and let $j:(\mathbf{A}^{1})^{an}\to(\mathbf{P}^{1})^{an}$ be the standard open immersion.
Then $j_{!}\mathscr{F}$ is not Zariski-constructible on $(\mathbf{P}^{1})^{an}$: any Zariski-closed set of $(\mathbf{P}^{1})^{an}$ must be either finite or all of $(\mathbf{P}^{1})^{an}$ by rigid GAGA. This phenomenon should not be regarded as a pathology: similar examples occur in complex analytic geometry as well, and are a natural consequence of the non-quasi-compactness of affine space in any kind of analytic geometry. 2. (2) Huber’s book [Hub96] defines a notion of “constructible” sheaves that is very well-behaved from a topos theoretic perspective. However, these sheaves are typically not Zariski-constructible; for instance, if $j$ is the qcqs open immersion defined by including a closed disc of radius (say) $1/2$ inside a closed disc of radius $1$, then $j_{!}\Lambda$ is constructible in Huber’s sense but is not Zariski-constructible. In fact, the overlap between these two notions is exactly the lisse sheaves. Next, we record some simple stability properties of this notion. ###### Proposition 3.4. 1. (1) $\mathrm{Sh}_{zc}(X,\Lambda)$ is a weak Serre subcategory of $\mathrm{Sh}(X,\Lambda)$, and $D_{zc}(X,\Lambda)$ is a thick triangulated subcategory of $D(X,\Lambda)$. 2. (2) _(Devissage)_ A sheaf $\mathscr{F}\in\mathop{\mathrm{Sh}}(X,\Lambda)$ is Zariski-constructible iff there is a dense Zariski-open subset $U\subset X$ such that $\mathscr{F}|U$ is lisse and $\mathscr{F}|(X\smallsetminus U)$ is Zariski-constructible. 3. (3) Zariski-constructibility is stable under $f^{\ast}$ for $f$ any morphism of rigid spaces, and under $f_{\ast}$ for finite morphisms. 4. (4) A sheaf $\mathscr{F}\in\mathop{\mathrm{Sh}}(X,\Lambda)$ is Zariski-constructible iff $\mathscr{F}|X_{i}$ is Zariski-constructible for all irreducible components $X_{i}\subset X$. ###### Proof. (1)-(3) are proved in [Han20]. For (4), one direction is clear.
For the other direction, let $f_{0}:\tilde{X}\to X$ be the normalization of $X$, and let $f_{1}:\tilde{X}\times_{X}\tilde{X}\to X$ be the evident map. The hypothesis guarantees that $f_{0}^{\ast}\mathscr{F}$ (and then also $f_{1}^{\ast}\mathscr{F}$) is Zariski-constructible, since $\tilde{X}=\coprod_{i}\tilde{X}_{i}$ is the disjoint union of normalizations of the irreducible components of $X$. Since $f_{0}$ and $f_{1}$ are both finite, pushforward along these maps preserves Zariski-constructibility by (3). The exact sequence $0\to\mathscr{F}\to f_{0\ast}f_{0}^{\ast}\mathscr{F}\to f_{1\ast}f_{1}^{\ast}\mathscr{F}$ now exhibits $\mathscr{F}$ as the kernel of a map between Zariski-constructible sheaves, so we conclude by (1). ∎ In view of Warning 3.2 (1), the following result on the analytic (or even étale) locality of the notion of Zariski-constructibility is somewhat surprising: ###### Theorem 3.5. Let $X$ be a rigid space over a characteristic zero nonarchimedean field $K$, equipped with an étale $\Lambda$-sheaf $\mathscr{F}$. If there exists an étale cover $\\{U_{i}\\}$ of $X$ such that $\mathscr{F}|_{U_{i}}$ is Zariski- constructible, then $\mathscr{F}$ is Zariski-constructible. In particular, the assignment carrying a rigid space $X$ to the $\infty$-category $\mathcal{D}^{(b)}_{zc}(X,\Lambda)$ is a stack for the étale topology. We shall prove this result in §3.2. Finally, the following result is often very useful. ###### Proposition 3.6. If $X$ is a quasicompact rigid space over a nonarchimedean field $K$ of characteristic zero, then $D^{b}_{zc}(X,\Lambda)$ is the thick triangulated subcategory of $D(X,\Lambda)$ generated by objects of the form $f_{\ast}M$ for $f:Y\to X$ a finite morphism and $M$ a constant constructible $\Lambda$-sheaf on $Y$. ###### Proof. 
By induction on $\dim X$ and devissage, it’s enough to show that if $j:U\to X$ is a dense Zariski-open and $\mathscr{F}$ is lisse and killed by a prime $\ell$, then $j_{!}{\mathscr{F}}$ lies in the claimed subcategory. For this, choose (as in [Sta18, Tag 0A3R]) a finite étale cover $g:U^{\prime}\to U$ of prime-to-$\ell$ degree such that $g^{\ast}\mathscr{F}$ is an iterated extension of copies of $\mathbf{F}_{\ell}$. Then $\mathscr{F}$ is a summand of $g_{\ast}g^{\ast}\mathscr{F}$, so $\mathscr{F}$ is a summand of an iterated extension of copies of $g_{\ast}\mathbf{F}_{\ell}$. Now extend $g$ to a finite cover $f:X^{\prime}\to X$ as in [Han20], so $j_{!}\mathscr{F}$ is a summand of an iterated extension of copies of $\mathscr{G}=j_{!}g_{\ast}\mathbf{F}_{\ell}=f_{\ast}j^{\prime}_{!}\mathbf{F}_{\ell}$, where $j^{\prime}:U^{\prime}\to X^{\prime}$ is the evident map. Letting $i:Z\to X^{\prime}$ be the complement of $j^{\prime}$, the exact sequence $0\to f_{\ast}j^{\prime}_{!}\mathbf{F}_{\ell}\to f_{\ast}\mathbf{F}_{\ell}\to(f\circ i)_{\ast}\mathbf{F}_{\ell}\to 0$ shows that $\mathscr{G}$ lies in the desired subcategory. ∎ ### 3.2. Zariski-constructible sheaves via algebraic geometry In this subsection, we describe Zariski-constructible sheaves on affinoids purely in terms of algebraic geometry, and deduce that the property of being Zariski-constructible is étale local. Let $K$ be a characteristic zero nonarchimedean field. Recall from [Han20] that for any affinoid $K$-algebra $A$ and any scheme $\mathcal{X}$ locally of finite type over $\mathop{\mathrm{Spec}}A$, there is a naturally associated rigid space $X=\mathcal{X}^{an}$ over $\mathop{\mathrm{Spa}}A$, and a natural map $X_{\mathop{\mathrm{\acute{e}t}}}\to\mathcal{X}_{\mathop{\mathrm{\acute{e}t}}}$ of sites, inducing a $t$-exact pullback functor $\mu_{X}:D(\mathcal{X},\mathbf{Z}/n)\to D(X,\mathbf{Z}/n)$ carrying $D_{c}$ into $D_{zc}$. 
Here we change notation slightly from [Han20], and write $(-)^{an}$ interchangeably for $\mu_{X}^{\ast}(-)$. ###### Proposition 3.7 (Algebraization of Zariski-constructible sheaves over affinoids). Fix an affinoid $K$-algebra $A$, and write $\mathcal{S}=\mathop{\mathrm{Spec}}\,A$ and $S=\mathop{\mathrm{Spa}}A$. 1. (1) If $f:\mathcal{X}\to\mathcal{Y}$ is any finite type map of finite type $\mathcal{S}$-schemes, then for any $\mathscr{F}\in D^{b}_{c}(\mathcal{X},\mathbf{Z}/n)$ the natural base change map $(Rf_{\ast}\mathscr{F})^{an}\to Rf^{an}_{\ast}\mathscr{F}^{an}$ is an isomorphism. In particular, $Rf^{an}_{\ast}\mathscr{F}^{an}$ lies in $D^{b}_{zc}(Y,\mathbf{Z}/n)$. 2. (2) If $\mathcal{X}$ is any finite type $\mathcal{S}$-scheme, the functor $(-)^{an}:D^{b}_{c}(\mathcal{X},\mathbf{Z}/n)\to D^{b}_{zc}(X,\mathbf{Z}/n)$ is fully faithful. If $\mathcal{X}$ is proper over $\mathcal{S}$, it is an equivalence of categories. 3. (3) If $\mathcal{X}$ is any finite type $\mathcal{S}$-scheme, the fully faithful functor $(-)^{an}:D^{b}_{c}(\mathcal{X},\mathbf{Z}/n)\to D^{b}_{zc}(X,\mathbf{Z}/n)$ from (2) identifies the full subcategory of lisse objects on both sides. Note that part (2) applies notably when $\mathcal{X}=\mathcal{S}$. ###### Proof. 1. (1) This is exactly [Han20, Theorem 1.8]. 2. (2) The full faithfulness is a special case of [Han20, Theorem 1.10.ii]. Essential surjectivity in the case where $\mathcal{X}/\mathcal{S}$ is proper can be checked on hearts. We can also assume that $\mathcal{X}$ and $X$ are reduced. Arguing by induction on $\dim\mathcal{X}$ as in the proof of [Han20, Theorem 1.7], one reduces to checking that any sheaf on $X$ of the form $j_{!}\mathscr{F}$ is in the essential image of $(-)^{an}$; here $j:U\subset X$ is the inclusion of any normal Zariski-open subset and $\mathscr{F}$ is any lisse sheaf. 
To do this, first note that the complement $Z=X-U$ algebraizes to a closed subscheme $\mathcal{Z}\subset\mathcal{X}$ by relative rigid GAGA, so then $\mathcal{U}=\mathcal{X}-\mathcal{Z}$ is an algebraization of $U$. We’re now reduced to proving that the analytification functor $\mathrm{F\acute{E}t}(\mathcal{U})\to\mathrm{F\acute{E}t}(U)$ is an equivalence of categories. We explain the construction of an essential inverse. Suppose $V\to U$ is any finite étale map. By [Han20, Theorem 1.6], this extends uniquely to a branched covering $V^{\prime}\to X^{n}$, where $X^{n}$ is the normalization of $X$. By relative rigid GAGA again, this algebraizes to a branched covering $\mathcal{V}^{\prime}\to\mathcal{X}^{n}$, and then $\mathcal{V}:=\mathcal{V}^{\prime}\times_{\mathcal{X}^{n}}\mathcal{U}\to\mathcal{U}$ is the desired algebraization of $V$. 3. (3) Since we already have full faithfulness, it suffices to prove essential surjectivity on the hearts, i.e., we want to realize a lisse sheaf on $X$ as the analytification of a unique lisse sheaf on $\mathcal{X}$. By uniqueness and Zariski/analytic descent for lisse sheaves in algebraic/analytic geometry, we may assume that $\mathcal{X}$ is separated (or even affine). By finite descent for lisse sheaves in both algebraic and analytic geometry, we may also assume $\mathcal{X}$ is normal. In this case, we can realize $\mathcal{X}$ as an open subscheme of a normal proper $\mathcal{S}$-scheme $\overline{\mathcal{X}}$; the argument used in the proof of (2) now yields the desired algebraization. ∎ We shall use the above description to prove Theorem 3.5 by a topological argument. To run this argument, we need a couple of lemmas in pure algebraic geometry on the existence and properties of the maximal open set where a constructible sheaf is lisse. ###### Lemma 3.8. Let $X$ be a scheme and let $\mathscr{F}$ be a constructible sheaf on $X$.
Then there exists a maximal open subset $U_{\mathscr{F}}\subset X$ such that $\mathscr{F}|_{U_{\mathscr{F}}}$ is locally constant. Moreover, $U_{\mathscr{F}}$ is given by either of the following equivalent descriptions: 1. (1) The set of all $x\in X$ such that $\mathscr{F}|_{X_{x}}$ is locally constant. (Here $X_{x}$ is the local scheme of $X$ at $x$.) 2. (2) The set of all $x\in X$ admitting an open neighborhood $x\in U\subset X$ with $\mathscr{F}|_{U}$ being locally constant. In particular, $U_{\mathscr{F}}$ contains all the generic points of $X$. ###### Proof. As local constancy is a local property, the collection of all opens $V\subset X$ such that $\mathscr{F}|_{V}$ is locally constant is stable under taking unions. Taking the union of all such opens then gives the maximal open $U_{\mathscr{F}}$ such that $\mathscr{F}|_{U_{\mathscr{F}}}$ is locally constant. It is also clear from this description that $U_{\mathscr{F}}$ agrees with the set in (2). The set in (2) is trivially contained in the set in (1). Conversely, as the functor sending a scheme $Y$ to its category of locally constant sheaves (resp. constructible sheaves) is locally finitely presented (i.e., carries cofiltered limits of affine schemes to direct limits), the set in (1) is also contained in the set in (2), so the two sets coincide. The final statement is clear from the description of $U_{\mathscr{F}}$ given by the set in (1). ∎ ###### Lemma 3.9. The formation of the open set $U_{\mathscr{F}}\subset X$ associated with a pair $(X,\mathscr{F})$ as in Lemma 3.8 is compatible with pullback along universally generalizing maps of schemes. ###### Proof. Let $f:Y\to X$ be a universally generalizing map of schemes. We have an obvious containment $f^{-1}(U_{\mathscr{F}})\subset U_{f^{*}\mathscr{F}}$, and we must show it is an equality. Assume towards contradiction that there exists some $y\in U_{f^{*}\mathscr{F}}-f^{-1}(U_{\mathscr{F}})$.
Thus, $(f^{*}\mathscr{F})|_{Y_{y}}$ is locally constant but $\mathscr{F}|_{X_{f(y)}}$ is not locally constant. There is then a specialization $x_{1}\rightsquigarrow x_{2}$ of geometric points of $X_{f(y)}$ such that the corresponding cospecialization map $\mathscr{F}_{x_{2}}\to\mathscr{F}_{x_{1}}$ is not an isomorphism. Choose an absolutely integrally closed valuation ring $V$ and a map $\eta_{X}:\mathrm{Spec}(V)\to X_{f(y)}$ that witnesses the specialization $x_{1}\rightsquigarrow x_{2}$, so $\eta_{X}^{*}(\mathscr{F}|_{X_{f(y)}})$ is not locally constant. Note that the map $Y_{y}\to X_{f(y)}$ is universally generalizing (it factors as $Y_{y}\to X_{f(y)}\times_{X}Y\to X_{f(y)}$, with both maps being universally generalizing) and surjective (all points of $X_{f(y)}$ specialize to $f(y)$, so surjectivity follows from the universally generalizing property). By stability of universally generalizing surjective maps under base change, we can replace $V$ with an extension if necessary to lift $\eta_{X}$ to a map $\eta_{Y}:\mathrm{Spec}(V)\to Y_{y}$. But then we have $\eta_{X}^{*}(\mathscr{F}|_{X_{f(y)}})=\eta_{Y}^{*}(f^{*}\mathscr{F}|_{Y_{y}})$; this is a contradiction as the left side is not locally constant by choice of $\eta_{X}$, while the right side is locally constant by choice of $y$. ∎ ###### Proof of Theorem 3.5. Let us first give the argument when $\\{U_{i}\\}$ is a cover of $X$ for the analytic topology. We proceed by induction on $\dim(X)$. The $\dim(X)=0$ case is clear: $X$ is a disjoint union of points in this case. In general, as Zariski-constructibility is stable under pullback, we may assume each $U_{i}=\mathrm{Spa}(A_{i})$ is affinoid. Write $\mathcal{U}_{i}=\mathrm{Spec}(A_{i})$ for the obvious algebraization of $U_{i}$.
The map $U_{i}\to\mathcal{U}_{i}$ identifies constructible $\mathbf{Z}/n$-sheaves on the target with Zariski-constructible $\mathbf{Z}/n$-sheaves on the source by Proposition 3.7, so there is a unique constructible $\mathbf{Z}/n$-sheaf $\mathscr{F}_{i}$ over $\mathcal{U}_{i}$ descending $\mathscr{F}|_{U_{i}}$. Let $\mathcal{V}_{i}:=\mathcal{U}_{i,\mathscr{F}_{i}}\subset\mathcal{U}_{i}$ be the maximal Zariski open over which $\mathscr{F}_{i}$ is locally constant as in Lemma 3.8, let $V_{i}\subset U_{i}$ be its Zariski-open preimage, and let $Z_{i}\subset U_{i}$ be the Zariski-closed complement of $V_{i}$ (regarded as a reduced rigid space); note that $Z_{i}\subset U_{i}$ is nowhere dense as $\mathcal{V}_{i}\subset\mathcal{U}_{i}$ contains all the generic points. As the natural algebraizations of the maps given by rational localizations of affinoids are universally generalizing, Lemma 3.9 implies that for all $i,j$, the Zariski-open subsets $V_{i}\cap(U_{i}\cap U_{j})$ and $V_{j}\cap(U_{i}\cap U_{j})$ of $U_{i}\cap U_{j}$ agree, and consequently their complements also agree. By descent for coherent ideal sheaves applied to the ideal sheaves $\mathcal{I}_{Z_{i}}\subset\mathcal{O}_{U_{i}}$ of the $Z_{i}$’s, there is a unique Zariski-closed subset $Z\subset X$ such that $Z\cap U_{i}=Z_{i}$. As $Z_{i}$ is nowhere dense in $U_{i}$ for all $i$, we must have $\dim(Z)<\dim(X)$. Moreover, the sheaf $\mathscr{F}$ is lisse over $X-Z$ by construction. Induction on dimension shows that $\mathscr{F}|_{Z}$ is also Zariski-constructible, so we win. To adapt this argument to the étale topology, note the proof above has two essential ingredients: 1. (1) For each pair of indices $i,j$, if $V\subset U_{i}\cap U_{j}=U_{i}\times_{X}U_{j}$ is an open affinoid, then the natural algebraization of $V\to U_{i}$ (resp. $V\to U_{j}$) is a universally generalizing map of affine schemes. 2. (2) Descent for coherent sheaves holds true with respect to the cover $\\{U_{i}\\}$.
These properties are also true for étale covers of $X$, so the descent claim also holds true in the étale topology. Indeed, étale descent for coherent sheaves on rigid spaces is [dJvdP96, Corollary 3.2.3], while the first property reduces to the well-known fact that for any étale map of affinoid rigid spaces, the associated ring map is flat. ∎ ### 3.3. Pushforward, $\otimes$, and $\mathop{R\mathscr{H}\mathrm{om}}$ In this subsection, we prove some of our main stability properties for Zariski-constructible sheaves. Until further notice, we fix a characteristic zero nonarchimedean base field $K$ of residue characteristic $p$ (with $p=0$ allowed). The first main result in this section is the following theorem, which was conjectured by the second author [Han20, Conjecture 1.14]. ###### Theorem 3.10 (Proper direct images). Let $f:X\to Y$ be a proper map of rigid spaces over $K$. Then $Rf_{*}$ preserves $D^{(b)}_{zc}(-,\mathbf{Z}/n)$. Note that $p|n$ is allowed here. We caution the reader that if $f:X\to Y$ is a proper map of rigid spaces with $Y$ irreducible, then in contrast with the situation for algebraic varieties, it is not always true that $X$ has finitely many irreducible components, or that the fibers of $f$ have bounded dimension. (For a simple example where both conditions fail, let $Y=(\mathop{\mathrm{Spec}}K[T])^{an}$ be the rigid affine line, set $X_{n}=(\mathbf{P}^{n})^{an}$ and let $f_{n}:X_{n}\to Y$ be the map which factors over the inclusion of the closed point $T=p^{-n}$. Then $f=\coprod f_{n}:\coprod X_{n}\to Y$ is proper.) Thus, it is necessary to use $D^{(b)}$ and not $D^{b}$ in the above formulation, and similarly in many other places in this section. ###### Proof. By Theorem 3.5, we may assume $Y=\mathrm{Spa}(A)$ for an affinoid $K$-algebra $A$. In particular, $X$ and $Y$ are both qcqs. We may clearly also assume that $Y$ is reduced. Our task is to show that $Rf_{*}$ preserves $D^{b}_{zc}(-,\mathbf{Z}/n)$ in this situation.
As $X$ is quasi-compact, we may apply Proposition 3.6, so it suffices to show that $Rf_{*}\mathbf{Z}/n\in D^{b}_{zc}$. In fact, as the fibers of $f$ have bounded dimension (e.g., by checking on formal models), we know by cohomological dimension estimates and proper base change that $Rf_{*}$ has finite cohomological dimension, so it is enough to show that $Rf_{*}\mathbf{Z}/n\in D_{zc}$, i.e., that each $R^{i}f_{*}\mathbf{Z}/n$ is Zariski-constructible on $Y$. By proper base change and induction on $\dim(Y)$, it suffices to find a dense open $U\subset Y$ such that $(R^{i}f_{*}\mathbf{Z}/n)|_{U}$ is locally constant. As $K$ has characteristic $0$, Temkin’s [Tem18, Theorem 1.1.13 (i)] gives a proper hypercover $\epsilon:X^{\bullet}\to X$ with each $X^{i}$ being $K$-smooth. Cohomological descent enables us to compute $Rf_{*}\mathbf{Z}/n$ as the totalization of $Rg_{*}\mathbf{Z}/n$, where $g=f\circ\epsilon$. As $\mathcal{H}^{i}$ of the totalization of a cosimplicial object $K(\bullet):\Delta\to\mathcal{D}^{\geq 0}$ only depends on the truncated cosimplicial object $K(\bullet)|_{\Delta_{\leq i+1}}$, we can replace $X^{\bullet}$ with the finite diagram $X^{\leq i+1}$ and then by each $X^{j}$ to assume that $X$ is smooth. As $Y$ is reduced, our generic smoothness result (Theorem 2.29) yields a Zariski-dense Zariski-open $U\subset Y$ such that $f:X\to Y$ is smooth over $U$. It then suffices to show that $R^{i}g_{*}\mathbf{Z}/n$ is locally constant when $g$ is both proper and smooth. To check this, we may assume $n=\ell$ is a prime. Now if $\ell\neq p$, the claim reduces to [Hub96, Corollary 6.2.3], while the claim for $\ell=p$ reduces to [SW20, Theorem 10.5.1]. ∎ As a consequence of Theorem 3.10, we also get some additional stability results. First, locally constant sheaves are carried to Zariski-constructible complexes via pushforward along a fairly general class of maps. ###### Corollary 3.11 (Direct images of lisse complexes).
Let $f:X\to Y$ be a Zariski-compactifiable map of rigid spaces. Then $Rf_{!}$ and $Rf_{*}$ carry $D^{(b)}_{lis}(-,\mathbf{Z}/n)$ into $D^{(b)}_{zc}(-,\mathbf{Z}/n)$. As explained in Warning 3.2 (1), the functor $Rf_{*}$ does not preserve $D^{(b)}_{zc}$ in general, even for Zariski-open immersions. Thus, the above seems to be the best general statement one can expect. ###### Proof. The statement is local on the target by Theorem 3.5, so we may assume $Y=\mathrm{Spa}(A)$ is affinoid. By assumption, we can factor $f$ as $X\xrightarrow{j}\overline{X}\xrightarrow{g}Y$ with $j$ being a Zariski-open immersion and $g$ being proper. Using Theorem 3.10, we can reduce to the case $f=j$ is a Zariski-open immersion. The claim for $Rf_{!}$ is clear from the definition of Zariski-constructible sheaves (using $\\{X,\overline{X}-X\\}$ as the stratification on $\overline{X}$ witnessing Zariski-constructibility), so it remains to check the assertion for $Rf_{*}$. As $Y$ is affinoid, the Zariski-open immersion $f:X\hookrightarrow Y$ is the algebraization of a unique open immersion $g:\mathcal{X}\to\mathcal{Y}=\mathrm{Spec}(A)$. Moreover, any object of $D^{b}_{lis}(X,\mathbf{Z}/n)$ is the analytification of a unique object in $D^{b}_{lis}(\mathcal{X},\mathbf{Z}/n)$ by Proposition 3.7 (3). Using the compatibility of pushforwards with analytification from Proposition 3.7 (1), the claim follows from Gabber’s constructibility theorem in [ILO14, Exposé XIII, Theorem 1.1.1]. ∎ Secondly, $!$-pullback along finite morphisms preserves Zariski-constructibility. ###### Corollary 3.12 (Finite $!$-pullback). Let $f:X\to Y$ be a finite morphism of rigid spaces over $K$. Then the right adjoint $Rf^{!}:D^{+}(Y,\mathbf{Z}/n)\to D^{+}(X,\mathbf{Z}/n)$ to $f_{!}=f_{*}$ constructed in [Hub96, §7.1] preserves $D^{(b)}_{zc}(-)$. ###### Proof. By Theorem 3.5, the assertion is étale-local on $Y$, so we may assume both $X$ and $Y$ are affinoid, corresponding to a finite map $A\to B$ of affinoid $K$-algebras.
Fix $F\in D^{b}_{zc}(Y,\mathbf{Z}/n)$. Let $S_{F}\subset Y$ be the smallest Zariski-closed subset of $Y$ containing the support of $F$. We shall prove the claim by induction on $d_{F}=\dim(S_{F})$. If $d_{F}=0$, then $F$ is supported at finitely many points. As the claim is étale local on $Y$, we may then assume that $F$ is a finite direct sum of sheaves of the form $k_{*}\mathbf{Z}/n$, where $k:W\to Y$ is the inclusion of a Zariski-closed point. Now $f^{!}$ and $k_{*}$ commute: the corresponding statement for left adjoints is the proper base change theorem for $f$. We are thus reduced to checking the statement when $Y$ (and thus $X$) are $0$-dimensional. In this case, up to universal homeomorphisms, the map $f$ is finite étale, so $Rf^{!}=f^{*}$, so the claim is clear. Now assume $d_{F}>0$. We may then choose a Zariski open subset $j:U\hookrightarrow Y$ such that $U\cap S_{F}$ is dense in $S_{F}$, the restriction $L:=F|_{U\cap S_{F}}$ is lisse, and $f$ is finite étale (up to universal homeomorphisms) over $U$: one can find such an open $U$ as the algebraization of an open $\mathcal{U}\subset\mathrm{Spec}(A)$ satisfying the analogous properties for the map $\mathrm{Spec}(B)\to\mathrm{Spec}(A)$ and the algebraization $\mathcal{F}$ of $F$ (in the sense of Proposition 3.7). Let $i:Z\hookrightarrow Y$ be the closed complement, so we have the standard exact triangle $i_{*}Ri^{!}F\to F\to Rj_{*}(F|_{U}).$ Now $Rj_{*}(F|_{U})\simeq k_{*}Rj^{\prime}_{*}L$, where $j^{\prime}:U\cap S_{F}\to S_{F}$ and $k:S_{F}\to Y$ are the natural maps (and thus Zariski open and Zariski closed immersions respectively). By Corollary 3.11, the third term in the triangle above is then Zariski-constructible. The remaining term $i_{*}Ri^{!}F$ in the triangle is then also Zariski-constructible, so $Ri^{!}F$ is itself Zariski-constructible.
Applying $Rf^{!}$ to the above triangle gives a triangle $Rf^{!}i_{*}Ri^{!}F\to Rf^{!}F\to Rf^{!}k_{*}Rj^{\prime}_{*}L.$ By proper base change for $f$ as in the previous paragraph, the last term identifies with $k_{X,*}Rj^{\prime}_{X,*}(f|_{U\cap S_{F}})^{!}L$, where $k_{X}$ and $j^{\prime}_{X}$ are the base changes of $k$ and $j^{\prime}$ along $f$. As $f$ is finite étale up to universal homeomorphisms over $U$, we have $(f|_{U\cap S_{F}})^{!}L\simeq(f|_{U\cap S_{F}})^{*}L$, so this object is lisse on $f^{-1}(U\cap S_{F})$. Corollary 3.11 then implies that the third term in the triangle above is Zariski-constructible. For the first term, using proper base change again lets us write it as $i_{X,*}Rf_{Z}^{!}Ri^{!}F$, where $i_{X}$ and $f_{Z}$ are the base changes of $i$ and $f$ along $f$ and $i$. As $Ri^{!}F$ is known to be Zariski-constructible, the induction hypothesis then shows that the first term is also Zariski-constructible. ∎ ###### Remark 3.13. Using results from [Hub96, §7], in the special case $(p,n)=1$, we can extend Corollary 3.12 to much larger generality. Indeed, if $f:Y\to X$ is any separated taut morphism of rigid spaces over $K$, then $Rf^{!}$ sends $D^{(b)}_{zc}(X,\mathbf{Z}/n)$ into $D^{(b)}_{zc}(Y,\mathbf{Z}/n)$. To see this, by Theorem 3.5, this assertion can be checked locally on $X$ and $Y$, so we can assume they are affinoid. The map $f$ can then be factored as the composition of a Zariski-closed immersion followed by a smooth map of pure dimension $d$. The claim for Zariski-closed immersions follows from Corollary 3.12, while that for smooth morphisms follows from Huber’s [Hub96, Theorem 7.5.3], which identifies $Rf^{!}$ with $f^{*}(d)[2d]$. We deduce the existence of $\otimes$ and $\mathop{R\mathscr{H}\mathrm{om}}$. ###### Corollary 3.14 ($\otimes$ and $\mathop{R\mathscr{H}\mathrm{om}}$). Let $X$ be a rigid space.
For any $\mathscr{F},\mathscr{G}\in D^{(b)}_{zc}(X,\mathbf{Z}/n)$ with $\mathscr{F}$ having finite Tor dimension, both $\mathscr{F}\otimes\mathscr{G}$ and $\mathop{R\mathscr{H}\mathrm{om}}(\mathscr{F},\mathscr{G})$ lie in $D^{(b)}_{zc}(X,\mathbf{Z}/n)$. ###### Proof. We may work locally on $X$ by Theorem 3.5, so assume $X$ is affinoid. The claim about tensor products is clear (e.g., by Proposition 3.7 and the corresponding statement in algebraic geometry). For $\mathop{R\mathscr{H}\mathrm{om}}$, we proceed by induction on $d=\dim(X)$, the case $d=0$ being trivial. Choose a dense Zariski-open $j:U\subset X$ such that both $\mathscr{F}|_{U}$ and $\mathscr{G}|_{U}$ are lisse. Applying $\mathop{R\mathscr{H}\mathrm{om}}(-,\mathscr{G})$ to the triangle $j_{!}j^{\ast}\mathscr{F}\to\mathscr{F}\to i_{\ast}i^{\ast}\mathscr{F}\to$, we get a triangle $i_{\ast}\mathop{R\mathscr{H}\mathrm{om}}(i^{\ast}\mathscr{F},i^{!}\mathscr{G})\to\mathop{R\mathscr{H}\mathrm{om}}(\mathscr{F},\mathscr{G})\to Rj_{\ast}(\mathscr{F}|_{U}^{\vee}\otimes\mathscr{G}|_{U})\to,$ where we simplified the first term using the adjunction defining $i^{!}$, and the last term by using $\mathop{R\mathscr{H}\mathrm{om}}(A,B)=A^{\vee}\otimes B$ for $A,B\in D(\mathbf{Z}/n)$ with $A\in D_{perf}(\mathbf{Z}/n)$. Now induction on dimension and Corollary 3.12 ensure that the first term lies in $D^{b}_{zc}$. The last term lies in $D^{b}_{zc}$ by Corollary 3.11, so we win. ∎ We also deduce the proper base change theorem. ###### Theorem 3.15. Let $X^{\prime}:=X\times_{Y}Y^{\prime}$, with projections $g^{\prime}:X^{\prime}\to X$ and $f^{\prime}:X^{\prime}\to Y^{\prime}$, sit in a Cartesian diagram of rigid spaces over $K$ built from maps $f:X\to Y$ and $g:Y^{\prime}\to Y$, with $f$ proper.
Then for any $\mathscr{F}\in D^{(b)}_{zc}(X,\mathbf{Z}/n)$, the natural base change map $g^{\ast}Rf_{\ast}\mathscr{F}\to Rf^{\prime}_{\ast}g^{\prime\ast}\mathscr{F}$ is an isomorphism. ###### Proof. This easily splits into the two disjoint cases where $p\nmid n$ or $n=p^{a}$. When $p\nmid n$, the result follows from Huber’s much more general base change results [Hub96, Theorem 4.1.1.(c)]. We may thus assume that $n=p^{a}$. By Theorem 3.10, we know that $g^{\ast}Rf_{\ast}\mathscr{F}$ and $Rf^{\prime}_{\ast}g^{\prime\ast}\mathscr{F}$ are Zariski-constructible, hence overconvergent, so it suffices to show that the base change map induces an isomorphism on stalks at all rank one geometric points $\overline{y}\to Y^{\prime}$. By two applications of [Hub96, Example 2.6.2], we compute that $(g^{\ast}Rf_{\ast}\mathscr{F})_{\overline{y}}\simeq R\Gamma(X\times_{Y}\overline{g(y)},\mathscr{F})$ and $(Rf^{\prime}_{\ast}g^{\prime\ast}\mathscr{F})_{\overline{y}}\simeq R\Gamma(X\times_{Y}\overline{y},\mathscr{F})$. We now conclude by Lemma 3.25. ∎ We end this section by recording that the equivalence in Proposition 3.7 is compatible with all the operations we have seen so far. ###### Proposition 3.16. Fix an affinoid $K$-algebra $A$, and write $\mathcal{S}=\mathop{\mathrm{Spec}}\,A$ and $S=\mathop{\mathrm{Spa}}A$. The functor $(-)^{an}:D^{b}_{c}(\mathcal{X},\mathbf{Z}/n)\to D^{b}_{zc}(X,\mathbf{Z}/n)$ for finite type $\mathcal{S}$-schemes $\mathcal{X}$ with $X=\mathcal{X}^{an}$ is compatible with $\otimes$, $\mathop{R\mathscr{H}\mathrm{om}}$ when the first argument has finite Tor dimension, $f^{\ast}$, $Rf_{\ast}$, $Rf_{!}$ for compactifiable $f$, and $Ri^{!}$ for any finite morphism $i$. If $p\nmid n$, it is compatible with $Rf^{!}$ for compactifiable $f$. ###### Proof. The compatibilities for $f^{\ast}$, $\otimes$, and $j_{!}$ for open immersions $j$ are easy and left to the reader. The compatibility for $Rf_{\ast}$ is a special case of Proposition 3.7 (1). 
For proper $f$, the claim for $Rf_{\ast}=Rf_{!}$ is [Hub96, Theorem 3.7.2]. This implies the claim for $Rf_{!}$ for compactifiable $f$. The result for $Ri^{!}$ in the case of a closed immersion follows from the triangle $Ri^{!}\to i^{\ast}\to i^{\ast}Rj_{\ast}j^{\ast}\to$ and its analytic counterpart, where $j$ is the complementary open immersion, using the known compatibilities for $i^{\ast}$, $j^{\ast}$, and $Rj_{\ast}$. Given these compatibilities, the claim for $\mathop{R\mathscr{H}\mathrm{om}}$ now follows by induction on the dimension, imitating the devissage carried out in Corollary 3.14. The result for $Ri^{!}$ for general finite maps $i:\mathcal{X}\to\mathcal{Y}$ can be proven by following the argument in Corollary 3.12. Finally, the claim for $Rf^{!}$ in the case $(n,p)=1$ is local on the source, so we may factor $f$ as $g\circ i$ where $g$ is smooth of some pure relative dimension $d$ and $i$ is a closed immersion. Then $Rf^{!}=Ri^{!}\circ Rg^{!}=Ri^{!}\circ g^{\ast}[2d](d)$ by Poincaré duality for schemes, and $Rf^{an!}=Ri^{an!}\circ g^{an\ast}[2d](d)$ by Poincaré duality for rigid spaces as in Huber’s book. The result now follows from the known compatibilities for $g^{\ast}$ and $Ri^{!}$. ∎ ### 3.4. Verdier duality It remains to discuss Verdier duality. Recall that if $p$ is invertible in the coefficient ring $\mathbf{Z}/n$, then [Hub96, §7] shows that any separated taut morphism $f:X\to Y$ of rigid spaces over $K$ induces a well-behaved functor $Rf_{!}:D(X,\mathbf{Z}/n)\to D(Y,\mathbf{Z}/n)$ with a well-behaved right adjoint $Rf^{!}$. In particular, for any separated taut rigid space $X$ and any $n$ prime to $p$, we can define the dualizing complex $\omega_{X}=R\pi_{X}^{!}(\mathbf{Z}/n)$, where $\pi_{X}:X\to\mathop{\mathrm{Spa}}\,K$ is the structure map. We shall construct dualizing complexes in a different way using results from [ILO14]; our construction works without restriction on $p$, and is equivalent to Huber’s if $(p,n)=1$.
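For orientation, it may help to spell out what duality against $\omega_{X}$ computes globally; the following display is a standard formal consequence of the $(R\pi_{X!},R\pi_{X}^{!})$ adjunction just recalled, under the same hypotheses that $(p,n)=1$ and $X$ is separated and taut, rather than a statement taken from the references:

```latex
% For (p,n)=1 and X separated and taut, with structure map \pi_X : X -> Spa(K)
% and any \mathscr{F} \in D(X, \mathbf{Z}/n), the (R\pi_{X!}, R\pi_X^!) adjunction gives
R\mathrm{Hom}_{X}\bigl(\mathscr{F},\,\omega_{X}\bigr)
  = R\mathrm{Hom}_{X}\bigl(\mathscr{F},\,R\pi_{X}^{!}(\mathbf{Z}/n)\bigr)
  \simeq R\mathrm{Hom}_{\mathbf{Z}/n}\bigl(R\pi_{X!}\mathscr{F},\,\mathbf{Z}/n\bigr)
  = R\mathrm{Hom}_{\mathbf{Z}/n}\bigl(R\Gamma_{c}(X,\mathscr{F}),\,\mathbf{Z}/n\bigr).
```

In words, $\mathbf{D}_{X}=\mathop{R\mathscr{H}\mathrm{om}}(-,\omega_{X})$ globalizes $\mathbf{Z}/n$-linear duality of compactly supported cohomology, which is the sense in which $\omega_{X}$ deserves the name dualizing complex.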
We will need the following form of unbounded BBDG gluing [BBD82] for complexes. ###### Lemma 3.17. Let $(\mathcal{C},\mathcal{O})$ be a ringed site with a final object $X$ and fiber products, and with enough points. Let $\mathcal{B}$ be a collection of open subobjects $U\subset X$ such that $X=\cup_{U\in\mathcal{B}}U$ and for all $U,V\in\mathcal{B}$ we have $U\cap V=\cup_{W\in\mathcal{B},W\subset U\cap V}W$. Suppose we are given objects $K_{U}\in D(U,\mathcal{O})$ for all $U\in\mathcal{B}$ together with isomorphisms $\rho_{V}^{U}:K_{U}|_{V}\to K_{V}$ for all $V\subset U$ with $V,U\in\mathcal{B}$ which are compatible with composition. Finally, suppose that $\mathrm{Ext}^{i}(K_{U},K_{U})=0$ for all $U\in\mathcal{B}$ and all $i<0$. Then there exists a pair $(K,\\{\rho_{U}\\}_{U\in\mathcal{B}})$ consisting of an object $K\in D(X,\mathcal{O})$ and isomorphisms $\rho_{U}:K|_{U}\to K_{U}$ such that $\rho_{V}^{U}\circ\rho_{U}=\rho_{V}$ for all $V\subset U$ with $V,U\in\mathcal{B}$. The pair $(K,\\{\rho_{U}\\})$ is unique up to unique isomorphism. Recall that a subobject $U\subset X$ is open if the associated map $\mathrm{Sh}(\mathcal{C}_{U})\to\mathrm{Sh}(\mathcal{C})$ is an open immersion of topoi, cf. [Sta18, Tag 08M0] for the latter notion. Note that unlike the unbounded gluing results proved in [LO08] or [Sta18, Tag 0DCC], we do not assume here that the underlying site locally has finite cohomological dimension. The price to pay is that we only prove gluing for open covers. ###### Proof. This follows from the proof of [Sta18, Tag 0D6C]. ∎ Next, we recall some results from [ILO14] on the existence and uniqueness of étale dualizing complexes for a fairly general class of noetherian schemes. For this, we need good dimension functions. ###### Proposition 3.18. Let $\mathcal{X}$ be a Noetherian universally catenary scheme whose irreducible components are equicodimensional in the sense of EGA, i.e. 
such that $\dim(\mathcal{O}_{\mathcal{Y},y})=\dim\mathcal{Y}$ for every irreducible component $\mathcal{Y}\subset\mathcal{X}$ and every closed point $y\in\mathcal{Y}$. Then the function $\delta(x)=\dim\overline{\\{x\\}}$ is a dimension function on $\mathcal{X}$, which we call the _canonical_ dimension function. The hypotheses here are satisfied for any scheme $\mathcal{X}$ of finite type over $\mathbf{Z}$, over a field, or over $\mathop{\mathrm{Spec}}A$ for some affinoid $K$-algebra $A$. We will only need the latter case. ###### Proof. By [ILO14, Corollaire XIV.2.4.4], the map $x\mapsto\dim\overline{\\{x\\}}$ defines a dimension function on each irreducible component of $\mathcal{X}$. Since these functions agree on overlaps of irreducible components, this implies the claim. ∎ In the next theorem, we shall use the language introduced in [ILO14, §XVII]. In particular, we shall use the notion of a potential dualizing complex from [ILO14, §XVII.2]. Recall that such a complex on a scheme $\mathcal{X}$ equipped with a dimension function $\delta$ is an object $\omega_{\mathcal{X}}\in D^{+}(\mathcal{X},\mathbf{Z}/n)$ equipped with some additional data: one has specified isomorphisms $R\Gamma_{\overline{x}}(\omega_{\mathcal{X}})\cong\mathbf{Z}/n[2\delta(x)](\delta(x))$, called pinnings, for each geometric point $\overline{x}\to\mathcal{X}$ lying over a point $x\in\mathcal{X}$, and these isomorphisms are required to be compatible with immediate specializations in the appropriate sense. The following theorem asserts that such complexes exist in large generality, and have good properties. ###### Theorem 3.19 (Existence of dualizing complexes in algebraic geometry). Let $\mathcal{X}$ be an excellent Noetherian $\mathbf{Z}[1/n]$-scheme satisfying the hypotheses of Proposition 3.18, and set $\Lambda=\mathbf{Z}/n$. 
Then $\mathcal{X}$ admits a potential dualizing complex $\omega_{\mathcal{X}}\in D^{b}_{ctf}(\mathcal{X},\Lambda)$ relative to the canonical dimension function, which is unique up to unique isomorphism. The functor $\mathbf{D}_{\mathcal{X}}(-)=\mathop{R\mathscr{H}\mathrm{om}}(-,\omega_{\mathcal{X}})$ preserves $D^{b}_{c}(\mathcal{X},\Lambda)$ and the biduality map $\mathscr{F}\to\mathbf{D}_{\mathcal{X}}\mathbf{D}_{\mathcal{X}}\mathscr{F}$ is an isomorphism for all $\mathscr{F}\in D^{b}_{c}(\mathcal{X},\Lambda)$. ###### Proof. The existence and uniqueness is [ILO14, Theorem XVII.5.1.1], while the rest follows from [ILO14, Theorem XVII.6.1.1]. ∎ ###### Remark 3.20. Choose $\mathcal{X}$ as in Theorem 3.19. Assume additionally that $\mathcal{X}$ is regular of (locally constant) dimension $d$. Then the twisted constant sheaf $\mathbf{Z}/n[2d](d)$ comes equipped with the required pinning data thanks to absolute cohomological purity [ILO14, Theorem XVI.3.1.1], so it follows that there is a unique isomorphism $\mathbf{Z}/n[2d](d)\cong\omega_{\mathcal{X}}$ compatible with the pinnings. Using the preceding results in algebraic geometry and the algebraization results in Proposition 3.7, we construct dualizing complexes in rigid geometry using Lemma 3.17 on gluing. ###### Theorem 3.21 (Existence of dualizing complexes in rigid geometry). Let $X$ be a rigid space over $K$. 1. (1) Existence: There exists a natural dualizing complex $\omega_{X}\in D^{(b)}_{zc}(X,\mathbf{Z}/n)$, characterized up to unique isomorphism by the requirement that its formation commutes with passage to open subsets and is given by the algebraization (in the sense of Proposition 3.7) of the potential dualizing complexes from Theorem 3.19 when $X$ is affinoid. Moreover, one has $\omega_{X}\cong(\mathbf{Z}/n)[2d](d)$ for $X$ smooth of pure dimension $d$, and canonical isomorphisms $\omega_{Z}\simeq Ri^{!}\omega_{X}$ for any finite morphism $i:Z\to X$. 2.
(2) $!$-compatibility for $(n,p)=1$: If $X$ is separated and taut and $(n,p)=1$, then $\omega_{X}\cong R\pi_{X}^{!}(\mathbf{Z}/n)$ where $\pi_{X}:X\to\mathop{\mathrm{Spa}}K$ is the structure map. 3. (3) Biduality: The dualizing functor $\mathbf{D}_{X}(-)=\mathop{R\mathscr{H}\mathrm{om}}(-,\omega_{X})$ induces a contravariant self-equivalence of $D^{(b)}_{zc}(X,\mathbf{Z}/n)$ satisfying biduality $\mathrm{id}\cong\mathbf{D}_{X}\circ\mathbf{D}_{X}$ via the natural map. 4. (4) Duality and finite morphisms: For any finite morphism $i:Z\to X$, there are natural identifications of functors $Ri^{!}\mathbf{D}_{X}\cong\mathbf{D}_{Z}i^{\ast}$ and $i_{\ast}\mathbf{D}_{Z}\cong\mathbf{D}_{X}i_{\ast}$. 5. (5) Duality and open immersions: For a Zariski-open immersion $j:U\to X$, there are natural isomorphisms $j^{*}\mathbf{D}_{X}\simeq\mathbf{D}_{U}j^{*}$ and $Rj_{*}\mathbf{D}_{U}\simeq\mathbf{D}_{X}j_{!}$. 6. (6) Base change: If $L/K$ is an extension of nonarchimedean fields, with $a^{\ast}:D_{zc}(X,\mathbf{Z}/n)\to D_{zc}(X_{L},\mathbf{Z}/n)$ the natural pullback map, then $a^{\ast}\omega_{X}\cong\omega_{X_{L}}$. 7. (7) Compatibility with algebraic geometry: If $X=\mathcal{X}^{an}$ for a finite type $K$-scheme $\mathcal{X}$, then $\omega_{X}=(\omega_{\mathcal{X}})^{an}$. ###### Proof. Let us first construct $\omega_{X}$ by gluing together the analytifications of the dualizing complexes coming from Theorem 3.19. Let $U=\mathop{\mathrm{Spa}}A$ be any affinoid rigid space, and set $\mathcal{U}=\mathop{\mathrm{Spec}}A$. Then $\mathcal{U}$ satisfies the hypotheses of Theorem 3.19. Let $\omega_{\mathcal{U}}\in D^{b}_{c}(\mathcal{U},\mathbf{Z}/n)$ be the dualizing complex provided by that theorem. By Propositions 3.7 and 3.16, we have an equivalence $(-)^{an}:D^{b}_{c}(\mathcal{U},\mathbf{Z}/n)\to D^{b}_{zc}(U,\mathbf{Z}/n)$ compatible with $\mathop{R\mathscr{H}\mathrm{om}}$s.
We now define $\omega_{U}:=(\omega_{\mathcal{U}})^{an}$, so $(\mathbf{D}_{\mathcal{U}}\mathscr{F})^{an}\cong\mathbf{D}_{U}(\mathscr{F}^{an})$ for every $\mathscr{F}\in D^{b}_{c}(\mathcal{U},\mathbf{Z}/n)$. Since the dualizing functor $\mathbf{D}_{\mathcal{U}}$ induces a contravariant self- equivalence on $D^{b}_{c}(\mathcal{U},\mathbf{Z}/n)$ which satisfies biduality, and $(-)^{an}$ is an equivalence of categories, we now conclude the analogous results for $\mathbf{D}_{U}$. To construct $\omega_{X}$ over an arbitrary rigid space $X$, we apply Lemma 3.17, taking $\mathcal{B}$ to be the collection of affinoid opens $U\subset X$, and letting $K_{U}=\omega_{U}$ as constructed in the previous paragraph. Note that the full faithfulness in Proposition 3.7 and Theorem 3.19 show that $\mathbf{Z}/n\cong\mathop{R\mathscr{H}\mathrm{om}}(\omega_{U},\omega_{U})$ for all $U\in\mathcal{B}$, so all negative self-exts vanish. Moreover, for any inclusion of open affinoid subsets $g:V\subset U$, there is a natural isomorphism $g^{\ast}\omega_{U}\cong\omega_{V}$ compatible with compositions by Lemma 3.22 below (applied with $d=0$). The gluing lemma now applies, and gives a unique $\omega_{X}\in D(X,\mathbf{Z}/n)$ equipped with a transitive system of isomorphisms $\omega_{X}|_{U}\simeq\omega_{U}$ for all open affinoids $U\subset X$. We now prove this construction has all the required properties. 1. (1) By construction, the complex $\omega_{X}$ is characterized by the properties demanded in the first sentence of (1); these also characterize $\omega_{X}$ uniquely (up to unique isomorphism) by the uniqueness assertions in Theorem 3.19 and Lemma 3.22 through the equivalence in Proposition 3.7. It is also clear from the construction and Remark 3.20 that $\omega_{X}|_{X^{sm}}\cong\mathbf{Z}/n[2d](d)$, where $d$ is the dimension. 
For the compatibility with $Ri^{!}$ for finite morphisms, we reduce to the affinoid case following our construction, and use compatibility of the analytification functor with $Ri^{!}$ (Proposition 3.16) to reduce to the corresponding result in algebraic geometry ([ILO14, Proposition XVII.4.1.2]). The compatibility with restriction to open subsets is clear from our construction. As these properties guarantee uniqueness of $\omega_{X}$ up to isomorphism, we are done with (1). (2) The identification with $R\pi_{X}^{!}(\mathbf{Z}/n)$ can be checked locally, where it follows by factoring $\pi_{X}$ as a closed immersion followed by a smooth map and using the results on Poincaré duality proved in Huber’s book. (3) This follows from the corresponding assertion in the affinoid case (which was explained above whilst constructing $\omega_{X}$) as the property of being Zariski-constructible is local in the analytic topology (Theorem 3.5). (4) This is a formal argument given duality and known adjunctions. For compatibility with $i_{*}$, fix $G\in D^{(b)}_{zc}(Z,\mathbf{Z}/n)$. Then we have isomorphisms $i_{*}\mathbf{D}_{Z}(G)=i_{*}\mathop{R\mathscr{H}\mathrm{om}}(\mathbf{Z}/n,\mathbf{D}_{Z}(G))=i_{*}\mathop{R\mathscr{H}\mathrm{om}}(G,\omega_{Z})=i_{*}\mathop{R\mathscr{H}\mathrm{om}}(G,i^{!}\omega_{X})=\mathop{R\mathscr{H}\mathrm{om}}(i_{*}G,\omega_{X})=\mathbf{D}_{X}(i_{*}G),$ where the second isomorphism is by biduality, the third by $\omega_{Z}=i^{!}\omega_{X}$, and the fourth by the defining property of $i^{!}$. Comparing the first and last terms gives $i_{*}\mathbf{D}_{Z}=\mathbf{D}_{X}i_{*}$. For compatibility with $i^{!}$, fix additionally $F\in D^{(b)}_{zc}(X,\mathbf{Z}/n)$.
Then we have bifunctorial isomorphisms $\mathrm{RHom}(i^{*}\mathbf{D}_{X}(F),G)=\mathrm{RHom}(\mathbf{D}_{X}(F),i_{*}G)=\mathrm{RHom}(\mathbf{D}_{X}(i_{*}G),F)=\mathrm{RHom}(i_{*}\mathbf{D}_{Z}(G),F)=\mathrm{RHom}(\mathbf{D}_{Z}(G),Ri^{!}F)=\mathrm{RHom}(\mathbf{D}_{Z}(Ri^{!}F),G),$ where the first equality is by adjunction for $(i^{*},i_{*})$, the second and last by duality, the third by the equality $i_{*}\mathbf{D}_{Z}=\mathbf{D}_{X}i_{*}$ we just showed, and the fourth by adjunction for $(i_{*},i^{!})$. Comparing the first and last terms, the Yoneda lemma gives $i^{*}\mathbf{D}_{X}\cong\mathbf{D}_{Z}Ri^{!}$, which is equivalent to $Ri^{!}\mathbf{D}_{X}\cong\mathbf{D}_{Z}i^{\ast}$ by biduality. (5) The compatibility with $j^{*}$ is built into the construction. The rest is again a formal argument using duality and known adjunctions. For $F\in D^{(b)}_{zc}(X,\mathbf{Z}/n)$ and $G\in D^{(b)}_{zc}(U,\mathbf{Z}/n)$, we have $\mathrm{RHom}(F,Rj_{*}\mathbf{D}_{U}(G))=\mathrm{RHom}(j^{*}F,\mathbf{D}_{U}(G))=\mathrm{RHom}(G,\mathbf{D}_{U}(j^{*}F))=\mathrm{RHom}(G,j^{*}\mathbf{D}_{X}(F))=\mathrm{RHom}(j_{!}G,\mathbf{D}_{X}(F))=\mathrm{RHom}(F,\mathbf{D}_{X}(j_{!}G)),$ with the first equality by the adjunction $(j^{*},Rj_{*})$, the second and last by duality, the third by $j^{*}\mathbf{D}_{X}=\mathbf{D}_{U}j^{*}$, and the fourth by the adjunction $(j_{!},j^{*})$. Comparing the first and last terms, the Yoneda lemma gives $Rj_{*}\mathbf{D}_{U}\simeq\mathbf{D}_{X}j_{!}$. (6) By our construction of dualizing complexes, it suffices to construct a natural system of such isomorphisms over affinoids. Thus, assume $X=\mathrm{Spa}(A)$ is affinoid with base change $X_{L}=\mathrm{Spa}(A_{L})$. Write $\mathcal{X}$ and $\mathcal{X}_{L}$ for the natural algebraizations, and let $f:\mathcal{X}_{L}\to\mathcal{X}$ be the natural map. It is enough to construct a natural isomorphism $f^{*}\omega_{\mathcal{X}}\simeq\omega_{\mathcal{X}_{L}}$.
We claim this follows from [ILO14, Exp. XVII, Prop. 4.1.1] (and uniqueness of potential dualizing complexes). To apply this result, we need to know that $f$ is a regular map, and that the dimension function $y\mapsto\delta^{\prime}(y):=\delta(f(y))-\mathrm{codim}_{f^{-1}(f(y))}(y)$ on $\mathcal{X}_{L}$ agrees with the standard dimension function $y\mapsto\dim(\overline{\\{y\\}})$. The regularity of $f$ is discussed in the last paragraph of the proof of Proposition 3.24 below, and essentially comes from [And74]. To obtain agreement of the dimension functions, it suffices to know that $\delta^{\prime}(y)=0$ for a closed point $y$ (as any point on $\mathcal{X}_{L}$ is linked to a closed point by a finite chain of immediate specializations). Thus, we must check that $\dim(\overline{\\{f(y)\\}})=\mathrm{codim}_{f^{-1}(f(y))}(y)$. Since $y$ is a closed point, this follows from [Con99, Lemma 2.1.5] applied to the affinoid algebra of functions on the integral scheme $\overline{\\{f(y)\\}}\subset\mathcal{X}$. (7) To deduce this from our construction, it suffices to show the following: if $\mathcal{X}=\mathrm{Spec}(A)$ is an affine finite-type $K$-scheme and $U=\mathrm{Spa}(B)\subset X=\mathcal{X}^{an}$ is an open affinoid with natural map $\nu:U\to\mathcal{X}$, then there is a natural isomorphism $\nu^{*}\omega_{\mathcal{X}}\simeq\omega_{U}$. This follows by Lemma 3.22 below applied to the map $\mathrm{Spec}(B)\to\mathrm{Spec}(A)$ with $d=0$. ∎ In the course of the previous proof, we used the following lemma. ###### Lemma 3.22. Let $g:V=\mathop{\mathrm{Spec}}B\to U=\mathop{\mathrm{Spec}}A$ be a regular morphism between excellent affine $\mathbf{Q}$-schemes of finite Krull dimension. Suppose moreover that $U$ and $V$ have equicodimensional irreducible components, that $g$ maps closed points to closed points, and that $V\times_{U}\mathop{\mathrm{Spec}}\kappa(g(x))$ is equidimensional of dimension $d$ for all closed points $x\in V$ for some (constant) $d\geq 0$.
Then there is a unique isomorphism $g^{\ast}\omega_{U}[2d](d)\cong\omega_{V}$ compatible with the pinnings. This lemma applies if $g$ arises from an étale map of affinoid rigid spaces $\mathop{\mathrm{Spa}}B\to\mathop{\mathrm{Spa}}A$, with $d=0$. This is the only case we will need. ###### Proof. This is a variant of the argument used in Theorem 3.21 (6). By [ILO14, Exp. XVII, Prop. 4.1.1], $g^{\ast}\omega_{U}$ is a potential dualizing complex for the dimension function $\tilde{\delta}:|V|\to\mathbf{Z}$ defined by $\tilde{\delta}(x)=\dim\overline{\\{g(x)\\}}-\mathrm{codim}_{g^{-1}(g(x))}(x)$. Our assumptions guarantee that $\tilde{\delta}(x)+d=\dim\overline{\\{x\\}}=0$ for all closed points $x$. Since the difference of any two dimension functions is locally constant, this implies that $\tilde{\delta}(x)+d=\dim\overline{\\{x\\}}$ for all $x\in V$, and therefore that $g^{\ast}\omega_{U}[2d](d)$ is a potential dualizing complex for the canonical dimension function on $V$. We now conclude by the uniqueness of potential dualizing complexes. ∎ ###### Remark 3.23 (Duality and proper maps). In Theorem 3.21, we have only discussed the functor $Rf^{!}$ when $p$ is invertible in the coefficient ring, or when $f$ is finite. However, we expect that for any proper map $f:X\to Y$, the functor $Rf_{*}:D_{zc}^{(b)}(X,\mathbf{Z}/p^{n})\to D_{zc}^{(b)}(Y,\mathbf{Z}/p^{n})$ admits a right adjoint $Rf^{!}$ naturally isomorphic to $\mathbf{D}_{X}f^{\ast}\mathbf{D}_{Y}$. One can check that this expectation holds iff there is a natural isomorphism $\mathbf{D}_{Y}Rf_{\ast}\cong Rf_{\ast}\mathbf{D}_{X}$. For $f$ proper and smooth, ongoing work of Zavyalov confirms these expectations. ### 3.5. Miscellany We collect some auxiliary results. First, we note that the entire formalism is compatible with changing the nonarchimedean base field. More precisely, let $K\to L$ be an extension of characteristic zero nonarchimedean fields. 
For any rigid space $X/\mathop{\mathrm{Spa}}K$, there is a natural map of étale sites $X_{L,\mathrm{\acute{e}t}}\to X_{\mathrm{\acute{e}t}}$ which induces a pullback functor $D(X)\to D(X_{L})$ sending $D_{zc}$ into $D_{zc}$. ###### Proposition 3.24. Notation as above, the change of base field functors $D_{zc}(X)\to D_{zc}(X_{L})$ are compatible (under the appropriate boundedness conditions) with the operations $f^{\ast}$, $\otimes$, $\mathop{R\mathscr{H}\mathrm{om}}$, Verdier duality, $Rf_{\ast}$ for proper $f$, $Rf_{!}$ and $Rf_{\ast}$ on lisse complexes for Zariski-compactifiable morphisms $f$, and $Rf^{!}$ if either $f$ is a finite morphism or $p$ is invertible in the coefficient ring. ###### Proof. The compatibilities for $f^{\ast}$, $\otimes$ and $j_{!}$ are trivial. The compatibilities for $Rf_{\ast}$ in the proper case and $Rj_{\ast}$ in the Zariski-open lisse case are the hardest; the rest follow from these. For $Rf_{\ast}$ in the case of a proper map $f:Y\to X$, one easily reduces to the two disjoint cases $p\nmid n$ and $n=p^{a}$. The first case follows from Huber’s general base change theorem [Hub96, Theorem 4.1.1.b]. The second case can be reduced, via [Hub96, Theorem 4.1.1.b’], to the situation where $K$ and $L$ are algebraically closed, and then by Theorem 3.10 it can be checked on stalks at classical points of $X_{L}$ which map to classical points of $X$. At these points it reduces to Lemma 3.25 below. For $Rj_{\ast}$ on lisse sheaves in the case of a Zariski-open immersion $j:U\to X$, we can assume that $X=\mathop{\mathrm{Spa}}A$ is affinoid. Write $j_{L}:U_{L}\to X_{L}=\mathop{\mathrm{Spa}}A\widehat{\otimes}_{K}L$ for the base change, and let $j^{alg}:\mathcal{U}\to\mathcal{X}=\mathop{\mathrm{Spec}}A$ and $j^{alg}_{L}:\mathcal{U}_{L}\to\mathcal{X}_{L}=\mathop{\mathrm{Spec}}A_{L}$ be the evident algebraizations. 
By (multiple applications of) Proposition 3.7, we are reduced to proving that $g^{\ast}Rj^{alg}_{\ast}\mathcal{F}\cong Rj_{L,\ast}^{alg}g^{\prime\ast}\mathcal{F}$ for any lisse sheaf $\mathcal{F}$ on $\mathcal{U}$, where $g:\mathcal{X}_{L}\to\mathcal{X}$ is the natural map and $g^{\prime}:\mathcal{U}_{L}\to\mathcal{U}$ is its base change to $\mathcal{U}$. Since $g$ is regular (see next paragraph), this follows from regular base change [ILO14, Exp. XVII, Prop. 4.2.1]. (The results for $\mathop{R\mathscr{H}\mathrm{om}}$ and $Ri^{!}$ can be handled in an entirely analogous way, with the endgame supplied by [ILO14, Exp. XVII, Prop. 4.2.2 and Cor. 4.2.3].) The claimed regularity of $A\to A\widehat{\otimes}_{K}L$ can be reduced by Noether normalization to the regularity of the map $K\left\langle x_{1},\dots,x_{n}\right\rangle\to L\left\langle x_{1},\dots,x_{n}\right\rangle$. By standard excellence properties of affinoid rings, regularity of this map can be verified after $\mathfrak{m}$-adic completion at all maximal ideals $\mathfrak{m}$ in the source. This finally reduces us to the fact that for any separable extension of nonarchimedean fields $L/K$ the ring map $K[[x_{1},\dots,x_{n}]]\to L[[x_{1},\dots,x_{n}]]$ is regular, which follows from the formal smoothness of this map, cf. [And74]. ∎ In the previous proof, we used the following lemma. ###### Lemma 3.25. Let $C^{\prime}/C/\mathbf{Q}_{p}$ be an extension of algebraically closed nonarchimedean fields. Then for any proper rigid space $X/C$ and any $\mathscr{F}\in D^{b}_{zc}(X,\mathbf{Z}/p^{a})$, the natural map $R\Gamma(X,\mathscr{F})\to R\Gamma(X_{C^{\prime}},\mathscr{F}_{C^{\prime}})$ is an isomorphism. ###### Proof. By an easy induction on $a$ and Proposition 3.6, we can assume that $\mathscr{F}=f_{\ast}\mathbf{F}_{p}$ for some finite map $f:X^{\prime}\to X$. Replacing $X$ by $X^{\prime}$, we can assume further that $\mathscr{F}=\mathbf{F}_{p}$ is constant. 
By two applications of the primitive comparison theorem, it’s enough to check that the natural map $R\Gamma(X,\mathcal{O}^{+}_{X}/p)\otimes_{\mathcal{O}_{C}/p}\mathcal{O}_{C^{\prime}}/p\to R\Gamma(X_{C^{\prime}},\mathcal{O}^{+}_{X_{C^{\prime}}}/p)$ is an almost isomorphism. This can be deduced from a purely local statement: if $U=\mathop{\mathrm{Spa}}A/C$ is any affinoid, then the natural map $R\Gamma(U,\mathcal{O}^{+}_{U}/p)\otimes_{\mathcal{O}_{C}/p}\mathcal{O}_{C^{\prime}}/p\to R\Gamma(U_{C^{\prime}},\mathcal{O}^{+}_{U_{C^{\prime}}}/p)$ is an almost isomorphism. To prove the local statement, choose a perfectoid $\mathbf{Z}_{p}^{d}$-torsor $A\to A_{\infty}$. Then $R\Gamma(U,\mathcal{O}^{+}_{U}/p)\cong^{a}R\Gamma_{cts}(\mathbf{Z}_{p}^{d},A_{\infty}^{\circ}/p),$ and similarly for $U_{C^{\prime}}$. The result now follows by writing $R\Gamma_{cts}(\mathbf{Z}_{p}^{d},-)$ as the usual $(d+1)$-term Koszul complex and observing that the natural map $A_{\infty}^{\circ}\widehat{\otimes}_{\mathcal{O}_{C}}\mathcal{O}_{C^{\prime}}\to A_{C^{\prime},\infty}^{\circ}$ is an almost isomorphism (e.g., because both sides provide perfectoid rings of definition for the Tate ring $A_{\infty}\widehat{\otimes}_{C}C^{\prime}$). ∎ Secondly, with biduality in hand, we can somewhat extend our results on pushforward. ###### Proposition 3.26. Let $f:X\to Y$ be a Zariski-compactifiable morphism, and let $\mathscr{F}\in D_{zc}^{(b)}(X,\mathbf{Z}/n)$ be an object such that one of the following holds true: (1) $\mathscr{F}$ is lisse or $\mathbf{D}_{X}\mathscr{F}$ is lisse, or (2) $\mathscr{F}=j^{\ast}\mathscr{F}^{\prime}$ for some compactification $X\overset{j}{\to}X^{\prime}\overset{\overline{f}}{\to}Y$ and some $\mathscr{F}^{\prime}\in D_{zc}^{(b)}(X^{\prime},\mathbf{Z}/n)$. Then $Rf_{\ast}\mathscr{F}$ and $Rf_{!}\mathscr{F}$ lie in $D^{(b)}_{zc}(Y,\mathbf{Z}/n)$.
It follows from the above proposition that if $f$ is a Zariski locally closed immersion, any $\mathscr{F}$ satisfying (1) automatically satisfies (2): we may simply take $\mathscr{F}^{\prime}=Rf_{*}\mathscr{F}$ by the proposition. ###### Proof. For the first case, the result is already proved for $\mathscr{F}$ lisse. Suppose now that $\mathbf{D}_{X}\mathscr{F}$ is lisse, and choose a compactification $X\overset{j}{\to}X^{\prime}\overset{\overline{f}}{\to}Y$. We first show that $Rj_{\ast}\mathscr{F}$ is Zariski-constructible. To see this, use Theorem 3.21 to write $Rj_{\ast}\mathscr{F}\cong Rj_{\ast}\mathbf{D}_{X}\mathbf{D}_{X}\mathscr{F}\cong\mathbf{D}_{X^{\prime}}j_{!}\mathbf{D}_{X}\mathscr{F}.$ Then $\mathbf{D}_{X}\mathscr{F}$ is lisse by assumption, so $j_{!}\mathbf{D}_{X}\mathscr{F}$ is Zariski-constructible by Corollary 3.11, and then duality preserves Zariski-constructibility. Now the triangle $j_{!}\mathscr{F}\to Rj_{\ast}\mathscr{F}\to i_{\ast}i^{\ast}Rj_{\ast}\mathscr{F}\to$, where $i:Z\to X^{\prime}$ denotes the closed immersion complementary to $j$, shows that $j_{!}\mathscr{F}$ is also Zariski-constructible. Applying $R\overline{f}_{\ast}$ now gives the claim. For the second case, let $i:Z\to X^{\prime}$ be the complementary closed immersion. Then we get a triangle $Rf_{!}\mathscr{F}\to R\overline{f}_{\ast}\mathscr{F}^{\prime}\to R(\overline{f}\circ i)_{\ast}i^{\ast}\mathscr{F}^{\prime}\to$, and the second and third terms are Zariski-constructible by Theorem 3.10. Likewise, we get a triangle $R(\overline{f}\circ i)_{\ast}Ri^{!}\mathscr{F}^{\prime}\to R\overline{f}_{\ast}\mathscr{F}^{\prime}\to Rf_{\ast}\mathscr{F}\to$, and the first two terms are Zariski-constructible by Theorem 3.10 and Proposition 3.12. ∎ ###### Remark 3.27 (Unbounded variants). We briefly discuss without proof how the results discussed above extend to the unbounded derived category. We shall need the following notion: ###### Definition 3.28.
Given a non-archimedean base field $K$ and the coefficient ring $\mathbf{Z}/n$, we say $(\dagger)$ holds if $\mathrm{Gal}(\overline{K}/K)$ has finite $\ell$-cohomological dimension for all primes $\ell|n$. This condition is very mild, and holds for example if $K$ is separably closed, or if $K$ is a local field, or if $K$ has separably closed residue field and $(n,p)=1$. One can also check that $(\dagger)$ is stable under replacing $K$ by $K_{x}$, where $K_{x}$ is the residue field of any rigid space $X/K$ at any (adic) point $x\in X$. The main reason for introducing this condition is the following: ###### Proposition 3.29. Fix $K$ and $\Lambda=\mathbf{Z}/n$ such that $(\dagger)$ holds. Then for any rigid space $X/K$, the derived category $D(X_{\mathrm{\acute{e}t}},\Lambda)$ is left-complete and compactly generated, and the functor $R\lim:D(X_{\mathrm{\acute{e}t}}^{\mathbf{N}},\Lambda)\to D(X_{\mathrm{\acute{e}t}},\Lambda)$ has bounded cohomological amplitude on any finite-dimensional open subspace of $X$. ###### Proof. This is well-known; cf. [Roo06] for the final statement. ∎ Let us now formulate the promised unbounded variants of the results discussed in this paper. Let $f:X\to Y$ be a map of rigid spaces over $K$, and let $\mathscr{F}\in D_{zc}(X,\mathbf{Z}/n)$. (1) Pullback: The pullback $f^{*}$ takes $D_{zc}$ into $D_{zc}$. (2) Proper pushforward: If $\mathscr{F}$ is bounded below or $(\dagger)$ holds, then $Rf_{*}\mathscr{F}\in D_{zc}$. (3) General pushforward: Say $\mathscr{F}$ is lisse and $f$ is Zariski-compactifiable. If $\mathscr{F}$ is bounded below or $(\dagger)$ holds, then $Rf_{!}\mathscr{F},Rf_{*}\mathscr{F}\in D_{zc}$. (4) Duality: The functor $\mathbf{D}_{X}(-)=\mathop{R\mathscr{H}\mathrm{om}}(-,\omega_{X})$ carries $D^{(-)}_{zc}$ into $D^{(+)}_{zc}$.
Furthermore, if $(\dagger)$ holds, then $\mathbf{D}_{X}(-)$ also carries $D^{(+)}_{zc}$ into $D^{(-)}_{zc}$, and gives an autoequivalence of $D_{zc}(X,\mathbf{Z}/n)$ satisfying biduality. (5) $!$-pullback: If $f$ is a finite map, then $Rf^{!}$ takes $D_{zc}$ to $D_{zc}$. (6) $!$-pullback for good coefficients: If $(p,n)=1$, $f$ is any taut separated map and $(\dagger)$ holds true, then $Rf^{!}$ preserves $D_{zc}$. (7) Tensor product: $D^{(-)}_{zc}(X,\mathbf{Z}/n)$ is stable under $\otimes$ inside $D(X,\mathbf{Z}/n)$. (8) Internal Hom: The bifunctor $\mathop{R\mathscr{H}\mathrm{om}}(-,-)$ carries $D^{(-)}_{zc}\times D^{(+)}_{zc}$ into $D^{(+)}_{zc}$. These assertions are all proven by using homological arguments to reduce to the locally bounded case (using Proposition 3.29 as needed). We omit the proofs. The most difficult case is (4), where the reduction to the bounded situation is non-formal, and requires the following lemma. ###### Lemma 3.30. Let $X$ be a $d$-dimensional rigid space. Then for any $\mathscr{F}\in\mathrm{Sh}_{zc}(X,\mathbf{Z}/n)$, the complex $\mathop{R\mathscr{H}\mathrm{om}}(\mathscr{F},\omega_{X})$ is concentrated in degrees $[-2d,0]$. In particular, for any finite-dimensional $X$, the functor $\mathbf{D}_{X}(-)$ preserves $D^{b}$ without assuming that $X$ is quasi-compact. ###### Proof. One could deduce this from the last assertion in [ILO14, Theorem XVII.5.1.1] using [Gab04, §3] as well as our algebraization Proposition 3.7. For the convenience of the reader, we give a direct proof, essentially mimicking the argument in Corollary 3.14 when $\mathscr{G}=\omega_{X}$, while controlling the amplitude of the terms showing up. Without loss of generality, we may assume $X$ is affinoid and reduced. We proceed by induction on $d$, the $d=0$ case being trivial. Let $j:U\to X$ be the inclusion of a smooth Zariski-open subset of pure dimension $d$ with complement of dimension $<d$ such that $\mathscr{F}|_{U}$ is locally constant.
Let $i:Z=\mathop{\mathrm{Spa}}B\to X$ be the inclusion of the closed complement of $U$. Applying $\mathop{R\mathscr{H}\mathrm{om}}(-,\omega_{X})$ to the triangle $j_{!}j^{\ast}\mathscr{F}\to\mathscr{F}\to i_{\ast}i^{\ast}\mathscr{F}\to$, we get a triangle $i_{\ast}\mathop{R\mathscr{H}\mathrm{om}}(i^{\ast}\mathscr{F},\omega_{Z})\to\mathop{R\mathscr{H}\mathrm{om}}(\mathscr{F},\omega_{X})\to(Rj_{\ast}j^{\ast}\mathscr{F}^{\vee})[2d](d)\to,$ where we used the isomorphism $\omega_{U}\simeq\mathbf{Z}/n[2d](d)$ (coming from the smoothness of $U$) to simplify the last term, and Theorem 3.21 (4) for the first term. By induction, the first term here is concentrated in degrees $[-2d+2,0]$. It thus suffices to show that $Rj_{\ast}j^{\ast}\mathscr{F}^{\vee}$ is concentrated in degrees $[0,2d-1]$. Proposition 3.7 identifies $Rj_{\ast}j^{\ast}\mathscr{F}^{\vee}$ as the analytification of $Rj^{alg}_{\ast}\mathcal{G}$, where $j^{alg}:\mathcal{U}=\mathop{\mathrm{Spec}}A-\mathop{\mathrm{Spec}}B\to\mathcal{X}=\mathop{\mathrm{Spec}}A$ is the natural algebraization of $j$ and $\mathcal{G}$ is some locally constant sheaf on $\mathcal{U}$. By [ILO14, Exp. XVIII-A, Theorem 1.1], $Rj^{alg}_{\ast}\mathcal{G}$ is concentrated in degrees $[0,2d-1]$, so the result follows. ∎ ### 3.6. Adic coefficients We fix a characteristic zero nonarchimedean base field $K$ of residue characteristic $p>0$. Fix any prime $\ell$. In this section we explain a variant of the theory of Zariski-constructible sheaves with $\mathbf{Z}_{\ell}$-coefficients, using the formalism from [Sch17]. (The paper [Sch17] assumes that a prime number is topologically nilpotent in the base field $K$; this is the reason we assume that the residue characteristic $p$ of $K$ is $>0$. Since we only use the relatively formal aspects of [Sch17], we expect that this assumption can be removed once a theory of adic coefficients over nonarchimedean base fields of residue characteristic $0$ has been developed.)
For a rigid space $X/K$, let $X_{v}$ denote the $v$-site of $X$ from [Sch17]. Let ${\mathbf{Z}_{\ell}}=\lim_{n}\underline{\mathbf{Z}/\ell^{n}}$ be the displayed inverse limit of constant sheaves, regarded as a sheaf of (abstract) rings. Any perfect complex $M\in D(\mathbf{Z}_{\ell})$ yields a “constant” $\ell$-complete sheaf $\underline{M}:=\lim_{n}\underline{M/\ell^{n}}\in D(X_{v},\mathbf{Z}_{\ell})$. Our goal is to build a theory of Zariski-constructible $\mathbf{Z}_{\ell}$-complexes on a rigid space $X$ where the locally constant objects are twisted forms of $\underline{M}$ for $M\in D_{perf}(\mathbf{Z}_{\ell})$. We first recall the basic notion of “étale $\mathbf{Z}_{\ell}$-sheaves” that is introduced in [Sch17] for the purposes of defining operations. ###### Construction 3.31. Fix $n\geq 1$. For any strictly totally disconnected perfectoid space $Y$, the usual derived category $D(Y_{\mathop{\mathrm{\acute{e}t}}},\mathbf{Z}/\ell^{n})$ is left-complete and identifies with a full subcategory $D(Y_{v},\mathbf{Z}/\ell^{n})$ via pullback along $Y_{v}\to Y_{\mathop{\mathrm{\acute{e}t}}}$. Moreover, containment in this subcategory can be checked $v$-locally (see [Sch17, Proposition 14.10, Proposition 14.11 (iii), Theorem 14.12 (ii)]). For any $v$-stack $X$, let $D_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}/\ell^{n})\subset D(X_{v},\mathbf{Z}/\ell^{n})$ be the full subcategory spanned by objects whose pullback along any map $Y\to X$ with $Y$ a strictly totally disconnected perfectoid space lies in the subcategory $D(Y_{\mathop{\mathrm{\acute{e}t}}},\mathbf{Z}/\ell^{n})\subset D(Y_{v},\mathbf{Z}/\ell^{n})$; this condition can be checked after pullback along a $v$-cover ([Sch17, Remark 14.14]). Imposing this condition for a complex $K$ is equivalent to imposing it for each $v$-cohomology sheaf $\mathcal{H}^{i}(K)$ (regarded as a complex) ([Sch17, Proposition 14.16]).
Moreover, if $X$ is a locally spatial diamond (e.g., one attached to a rigid space), then $D_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}/\ell^{n})$ admits a classical description: it agrees with the left-completion $\widehat{D}(X_{et},\mathbf{Z}/\ell^{n})$ of the usual derived category $D(X_{\mathop{\mathrm{\acute{e}t}}},\mathbf{Z}/\ell^{n})$ ([Sch17, Proposition 14.15]). Finally, for any $v$-stack $X$, write $D_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})\subset D_{\ell-\text{comp}}(X_{v},\mathbf{Z}_{\ell})$ for the full subcategory of derived $\ell$-complete objects in $D(X_{v},\mathbf{Z}_{\ell})$ whose mod $\ell$-reduction lies in the subcategory $D_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}/\ell)$ mentioned above ([Sch17, Definition 26]). If $X$ is a locally spatial diamond, then derived $\ell$-completeness as well as the previous remark on left-completeness give an equivalence $\mathcal{D}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})=\lim_{n}\mathcal{D}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}/\ell^{n})\simeq\lim_{n}\widehat{\mathcal{D}}(X_{\mathop{\mathrm{\acute{e}t}}},\mathbf{Z}/\ell^{n})$ (3) at the level of the corresponding $\infty$-categories, thus giving a classical description of the left side in the case of rigid spaces; see [Sch17, Proposition 26.2]. In fact, we could have defined $D_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$ as the homotopy category of the right side above, and thus avoided ever mentioning the ambient category $D_{\ell-\text{comp}}(X_{v},\mathbf{Z}_{\ell})$; one reason we introduce the latter is that it carries an obvious $t$-structure, which we shall use in our proofs.
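For orientation, we recall the standard characterization of derived $\ell$-completeness invoked above; this is a general fact about derived $\ell$-complete objects, not specific to the $v$-site. An object $M\in D(X_{v},\mathbf{Z}_{\ell})$ is derived $\ell$-complete if and only if $\mathop{R\mathscr{H}\mathrm{om}}(\mathbf{Z}_{\ell}[1/\ell],M)=0$, if and only if the natural map $$M\longrightarrow R\lim_{n}\left(M\otimes^{L}_{\mathbf{Z}_{\ell}}\mathbf{Z}/\ell^{n}\right)$$ is an isomorphism. In particular, an object of $D_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$ is recovered from its reductions modulo $\ell^{n}$, which is what makes the inverse limit description in (3) possible.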
As in [Sch17, §26], all operations between the categories $D_{\mathop{\mathrm{\acute{e}t}}}(-,\mathbf{Z}_{\ell})$ of $\ell$-adic complexes introduced above are always interpreted in the $\ell$-completed sense, i.e., the functor in question takes values in $\ell$-complete complexes by fiat, and agrees after reduction mod $\ell$ with the corresponding functor for finite coefficients. For instance, if $j:U\to X$ is a Zariski-open immersion and $M\in D^{b}_{\mathop{\mathrm{\acute{e}t}}}(U,\mathbf{Z}_{\ell})\subset D^{b}_{\ell-\text{comp}}(U_{v},\mathbf{Z}_{\ell})$, then $j_{!}M\in D^{b}_{\ell-\text{comp}}(X_{v},\mathbf{Z}_{\ell})$ is defined to be the derived $\ell$-completion of $j^{top}_{!}M\in D^{b}(X_{v},\mathbf{Z}_{\ell})$ where $j^{top}_{!}$ denotes the topos theoretic $!$-extension (without any completions); then one can see that $j_{!}M$ lies in $D^{b}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$ and $j_{!}M\otimes_{\mathbf{Z}_{\ell}}\mathbf{F}_{\ell}$ agrees with $j_{!}(M\otimes_{\mathbf{Z}_{\ell}}\mathbf{F}_{\ell})$, where the latter is defined in the classical way. In the above setting, we can introduce Zariski-constructible complexes: ###### Definition 3.32. Let $X$ be a rigid space. We define full subcategories $D^{(b)}_{lis}(X,\mathbf{Z}_{\ell})\subset D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})\subset D^{(b)}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})\subset D^{(b)}_{\ell-\text{comp}}(X_{v},\mathbf{Z}_{\ell})$ as follows: • An object $K\in D^{(b)}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$ lies in $D^{(b)}_{lis}(X,\mathbf{Z}_{\ell})$ (and is called lisse) if $K/\ell\in D^{(b)}(X_{\mathop{\mathrm{\acute{e}t}}},\mathbf{Z}/\ell)$ is lisse in our previous sense (Definition 3.1).
• An object $K\in D^{(b)}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$ lies in $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ (and is called Zariski-constructible) if $K/\ell\in D^{(b)}(X_{\mathop{\mathrm{\acute{e}t}}},\mathbf{Z}/\ell)$ has Zariski-constructible cohomology sheaves. As before, we write $\mathcal{D}^{(b)}_{lis}(X,\mathbf{Z}_{\ell})$ and $\mathcal{D}^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ for the corresponding full $\infty$-categories inside $\mathcal{D}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$. ###### Remark 3.33. Let us explain an inverse limit description of $\mathcal{D}^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$, similarly to (3). For each $n\geq 1$, let $\mathcal{D}^{(b)}_{zc,\ell-\text{ftd}}(X,\mathbf{Z}/\ell^{n})\subset\mathcal{D}^{(b)}_{zc}(X,\mathbf{Z}/\ell^{n})$ be the full subcategory spanned by objects $M$ such that $M\otimes_{\mathbf{Z}/\ell^{n}}^{L}\mathbf{Z}/\ell$ is locally bounded. These subcategories are compatible with the base change functors relating the various $n$. We claim that the equivalence in (3) restricts to an equivalence $\mathcal{D}^{(b)}_{zc}(X,\mathbf{Z}_{\ell})\simeq\lim_{n}\mathcal{D}^{(b)}_{zc,\ell-\text{ftd}}(X,\mathbf{Z}/\ell^{n}).$ Indeed, since $\mathbf{Z}_{\ell}$ has global dimension $1$, it is clear that the equivalence in (3) gives a fully faithful functor from the left to the right. The essential surjectivity follows by observing that, under the equivalence in (3), the condition that $M\in\mathcal{D}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$ lies inside $\mathcal{D}^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ can be checked after reduction mod $\ell$. As both locally constant sheaves and Zariski-constructible sheaves with $\mathbf{Z}/\ell$-coefficients form weak Serre subcategories of the category of $\mathbf{Z}/\ell$-sheaves on $X_{\mathop{\mathrm{\acute{e}t}}}$, both categories introduced above form triangulated subcategories of $D^{(b)}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$.
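It may be worth spelling out why testing Zariski-constructibility only modulo $\ell$ (rather than modulo every $\ell^{n}$) loses nothing; the following standard dévissage, implicit in the discussion above, makes this precise. Tensoring the short exact sequence of $\mathbf{Z}_{\ell}$-modules $$0\to\mathbf{Z}/\ell\xrightarrow{\ \ell^{n-1}\ }\mathbf{Z}/\ell^{n}\to\mathbf{Z}/\ell^{n-1}\to 0$$ with $K\in D^{(b)}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$ gives a triangle $K/\ell\to K/\ell^{n}\to K/\ell^{n-1}\to$, so an induction on $n$, together with the weak Serre subcategory property just mentioned, shows that if $K/\ell$ has Zariski-constructible cohomology sheaves then so does $K/\ell^{n}$ for every $n\geq 1$.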
These categories admit an algebraic description on affinoids: ###### Lemma 3.34. Let $X=\mathrm{Spa}(A)$ be an affinoid rigid space with the natural algebraization $\mathcal{X}=\mathrm{Spec}(A)$. The pullback along $X\to\mathcal{X}$ induces equivalences $D^{b}_{lis}(\mathcal{X},\mathbf{Z}_{\ell})\simeq D^{(b)}_{lis}(X,\mathbf{Z}_{\ell})\quad\text{and}\quad D^{b}_{zc}(\mathcal{X},\mathbf{Z}_{\ell})\simeq D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ of triangulated categories. ###### Proof. This follows from the description in Remark 3.33 together with Proposition 3.7, which implies the corresponding statements with $\mathbf{Z}/\ell^{n}$-coefficients by passing to the full subcategory of objects with finite Tor dimension. ∎ Using the aforementioned algebraic description, we can show that local constancy mod $\ell$ implies local constancy, justifying our definition of lisse complexes. ###### Lemma 3.35. Let $X$ be a rigid space and let $M\in D^{(b)}_{lis}(X,\mathbf{Z}_{\ell})$. Then $M$ is locally constant. More precisely, for any cover $\\{U_{i}\\}$ of $X$ by connected affinoids, there exist perfect complexes $N_{i}\in D_{perf}(\mathbf{Z}_{\ell})$ such that $M|_{U_{i}}$ is locally isomorphic (for the $v$-topology, or in fact even the pro-(finite étale) topology of $U_{i}$) to $\underline{N_{i}}$. In particular, each $v$-cohomology sheaf $\mathcal{H}^{i}(M)$ is locally constant as well. This lemma is analogous to [BS15, Remark 6.6.13], with Achinger’s theorem replacing Artin’s theorem. ###### Proof. We may assume $X=\mathrm{Spa}(A)$ is a connected affinoid. Lemma 3.34 then implies that $M$ is uniquely pulled back from some $M^{\prime}\in D^{b}_{lis}(\mathrm{Spec}(A),\mathbf{Z}_{\ell})$. Let $B:=\mathop{\mathrm{colim}}_{i}B_{i}$ be a universal cover of $A$, i.e., this is a filtered colimit of connected finite étale covers $A\to B_{i}$ with $B$ itself being simply connected.
Thus, $\mathrm{Spec}(B)$ admits no non-trivial locally constant sheaves of finitely generated $\mathbf{Z}_{\ell}$-modules. Moreover, Achinger has shown [Ach17, §1.5] that each $\mathrm{Spec}(B_{i})$ is a $K(\pi,1)$, which implies that $R\Gamma(\mathrm{Spec}(B),\mathbf{Z}_{\ell})=\mathbf{Z}_{\ell}$. The combination of these two properties of $\mathrm{Spec}(B)$ implies that taking the “constant” sheaf gives an equivalence $D_{perf}(\mathbf{Z}_{\ell})\simeq D^{b}_{lis}(\mathrm{Spec}(B),\mathbf{Z}_{\ell})$, so $M^{\prime}|_{\mathrm{Spec}(B)}\in D_{lis}^{b}(\mathrm{Spec}(B),\mathbf{Z}_{\ell})$ is the “constant” $\mathbf{Z}_{\ell}$-complex attached to a perfect $\mathbf{Z}_{\ell}$-complex $N$. Analytifying this cover then solves the problem, i.e., taking $Y:=\lim_{i}\mathrm{Spa}(B_{i})$ where each $B_{i}$ is given the natural topology and the inverse limit is computed in $v$-sheaves, we obtain a pro-(finite étale) cover $Y\to X$ such that $M|_{Y}\simeq\underline{N}$ for some $N\in D_{perf}(\mathbf{Z}_{\ell})$, as wanted. ∎ Next, we observe that all operations defined before extend to $\mathbf{Z}_{\ell}$-sheaves. ###### Theorem 3.36. On the category of rigid spaces over $K$, the following operations (defined in [Sch17, §26]) restrict to operations on $D^{(b)}_{zc}(-,\mathbf{Z}_{\ell})$ and are compatible with reduction modulo $\ell^{n}$. (1) $f^{*}$, $\otimes$, and $\mathop{R\mathscr{H}\mathrm{om}}$. (2) Verdier duality. (3) $Rf_{*}$ for $f$ proper. (4) $Rf_{!}$ and $Rf_{*}$ on lisse complexes for Zariski-compactifiable morphisms $f$. (5) $Rf^{!}$ if either $f$ is a finite morphism or $p\neq\ell$. Moreover, proper base change holds, and all of these operations are compatible with extensions of the nonarchimedean base field. ###### Proof. Let us first define the dualizing complex $\omega_{X}\in\mathcal{D}^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$, thereby defining the operation that is supposed to give Verdier duality.
Given a rigid space $X$ and an integer $n\geq 1$, we have constructed in Theorem 3.21 a dualizing complex $\omega_{n}\in\mathcal{D}^{(b)}_{zc}(X,\mathbf{Z}/\ell^{n})$. Given two integers $n\geq m$, we claim that there is a transitive system of isomorphisms $a_{nm}:\omega_{n}\otimes^{L}_{\mathbf{Z}/\ell^{n}}\mathbf{Z}/\ell^{m}\simeq\omega_{m}$ in $D^{b}_{zc}(X,\mathbf{Z}/\ell^{m})$: for $X=\mathrm{Spa}(A)$ affinoid, this follows from a similar isomorphism for potential dualizing complexes on $\mathrm{Spec}(A)$ (see the discussion on potential dualizing complexes following Theorem 3.19, and use the pinning data there to see transitivity), and the general case follows by BBDG glueing (as in the proof of Theorem 3.21). By canonicity as well as the fact that $\mathrm{Ext}^{<0}_{\mathbf{Z}/\ell^{n}}(\omega_{n},\omega_{n})=0$ for all $n\geq 1$, the system $\\{\omega_{n}\\}$ lifts naturally to an object of the $\infty$-category $\lim_{n}\mathcal{D}^{(b)}_{zc,\ell-\text{ftd}}(X,\mathbf{Z}/\ell^{n})$ from Remark 3.33. Using the equivalence there, the inverse limit $\omega_{X}:=\lim_{n}\omega_{n}\in\mathcal{D}(X_{v},\mathbf{Z}_{\ell})$ then lies in $\mathcal{D}^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$; this object comes equipped with a transitive system of isomorphisms $\omega_{X}\otimes_{\mathbf{Z}_{\ell}}^{L}\mathbf{Z}/\ell^{n}\simeq\omega_{n}$, thus providing our candidate dualizing complex $\omega_{X}$. All the operations are now defined on the larger category $\mathcal{D}_{\mathop{\mathrm{\acute{e}t}}}(X,\mathbf{Z}_{\ell})$, and are compatible with reduction mod $\ell$; the claims in the theorem now follow from the analogous statements mod $\ell$. ∎ ###### Remark 3.37 (Relating Verdier duality with finite and $\mathbf{Z}_{\ell}$-coefficients).
For any $n\geq 1$, the reduction modulo $\ell^{n}$-functor $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})\to D^{(b)}_{zc}(X,\mathbf{Z}_{\ell}/\ell^{n})$ carries the dualizing complex $\omega_{X,\mathbf{Z}_{\ell}}:=\omega_{X}\in D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ constructed in Theorem 3.36 to the dualizing complex $\omega_{X,\mathbf{Z}/\ell^{n}}:=\omega_{n}\in D^{(b)}_{zc}(X,\mathbf{Z}/\ell^{n})$ from Theorem 3.21 (1), which gives the formula $\mathbf{D}_{X,\mathbf{Z}_{\ell}}(-)/\ell^{n}\simeq\mathbf{D}_{X,\mathbf{Z}/\ell^{n}}(-/\ell^{n})$ relating the Verdier duality operations under reduction modulo $\ell^{n}$. Moreover, using the formula $\mathrm{RHom}_{\mathbf{Z}_{\ell}}(\mathbf{Z}/\ell^{n},\mathbf{Z}_{\ell})=\mathbf{Z}/\ell^{n}[-1]$, it follows that the restriction of scalars functor $\mathrm{Res}:D^{(b)}_{zc}(X,\mathbf{Z}_{\ell}/\ell^{n})\to D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ satisfies $\mathbf{D}_{X,\mathbf{Z}_{\ell}}\circ\mathrm{Res}=\mathrm{Res}\circ\mathbf{D}_{X,\mathbf{Z}/\ell^{n}}[-1]$. ###### Remark 3.38. In the entire discussion in this section, we could have used the pro-étale topology from [Sch17] instead of the $v$-topology without any modification: this follows from the full faithfulness results in [Sch17, §14] for the “change of topology map” and the observation that any $\mathscr{F}\in\mathrm{Sh}_{lis}(X,\mathbf{Z}_{\ell})$ as defined above is in fact locally constant in the pro-étale topology by Lemma 3.35. Nevertheless, we have preferred to formulate things using the $v$-topology since the operations defined in [Sch17, §26] are defined using the $v$-topology. As our final goal in this section, we define the “standard” or “constructible” $t$-structure on $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$. We first explain how to do the analogous construction in algebraic geometry; we use the pro-étale approach from [BS15], but a closely related result can be found in [Eke90, Theorem 3.6 (v)].
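For the reader's convenience, the formula $\mathrm{RHom}_{\mathbf{Z}_{\ell}}(\mathbf{Z}/\ell^{n},\mathbf{Z}_{\ell})=\mathbf{Z}/\ell^{n}[-1]$ invoked in Remark 3.37 can be checked directly from the standard free resolution of $\mathbf{Z}/\ell^{n}$:

```latex
% Apply RHom_{Z_l}(-, Z_l) to the free resolution
%   0 -> Z_l --(mult. by l^n)--> Z_l -> Z/l^n -> 0,
% obtaining a two-term complex in degrees 0 and 1:
\mathrm{RHom}_{\mathbf{Z}_{\ell}}(\mathbf{Z}/\ell^{n},\mathbf{Z}_{\ell})
  \simeq \Bigl[\,\mathbf{Z}_{\ell}\xrightarrow{\ \ell^{n}\ }\mathbf{Z}_{\ell}\,\Bigr].
% As Z_l is torsion-free, H^0 = ker(l^n) = 0, while
% H^1 = coker(l^n) = Z/l^n; hence
\mathrm{RHom}_{\mathbf{Z}_{\ell}}(\mathbf{Z}/\ell^{n},\mathbf{Z}_{\ell})
  \simeq \mathbf{Z}/\ell^{n}[-1].
```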
###### Proposition 3.39 (The constructible $t$-structure for $\mathbf{Z}_{\ell}$-sheaves on a noetherian scheme). Let $Y$ be a noetherian scheme. Then the standard $t$-structure on $D(Y_{proet},\mathbf{Z}_{\ell})$ restricts to one on $D^{b}_{cons}(Y_{proet},\mathbf{Z}_{\ell})$. In the statement above and the proof below, we use the notions from [BS15, §5, 6]. In particular, we refer to an object of $D^{b}(Y_{proet})$ as classical if it is in the essential image of the (fully faithful) pullback along $\nu:Y_{proet}\to Y_{et}$ (see [BS15, §5.1]). Classical abelian sheaves on $Y_{proet}$ are thus equivalent to abelian sheaves on $Y_{et}$ and form an abelian Serre subcategory of all abelian sheaves on $Y_{proet}$. ###### Proof. Given $M\in D^{b}_{cons}(Y_{proet},\mathbf{Z}_{\ell})$, we must show that each $\mathcal{H}^{i}(M)$ lies in $D^{b}_{cons}(Y_{proet},\mathbf{Z}_{\ell})$. Using the definition of constructibility [BS15, §5], we must show that the abelian pro-étale sheaves $\mathcal{H}^{i}(M)/\ell$ and $\mathcal{H}^{i}(M)[\ell]$ are constructible $\mathbf{F}_{\ell}$-sheaves for all $i$. Note that these sheaves can be regarded as subobjects (resp. quotient objects) of some $\mathcal{H}^{i}(M/\ell)$ via the Bockstein sequence for $\ell$. As étale subquotients of étale constructible sheaves on a noetherian scheme are constructible [Sta18, Tag 09BH], it suffices to show that the pro-étale sheaves $\mathcal{H}^{i}(M)/\ell$ and $\mathcal{H}^{i}(M)[\ell]$ are classical. In fact, by the Bockstein sequence and stability of classical sheaves under cokernels in all pro-étale sheaves, it suffices to prove that each $\mathcal{H}^{i}(M)/\ell$ is classical. By definition of constructibility, we know that $\mathcal{H}^{i}(M/\ell^{n})$ is classical for all $i$ and $n$.
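Recall that the Bockstein sequences in question arise from the exact triangle $M\xrightarrow{\ell}M\to M/\ell$ in $D(Y_{proet},\mathbf{Z}_{\ell})$:

```latex
% The exact triangle M --(mult. by l)--> M -> M/l induces a long exact
% sequence of cohomology sheaves:
\cdots \to \mathcal{H}^{i}(M) \xrightarrow{\ \ell\ } \mathcal{H}^{i}(M)
  \to \mathcal{H}^{i}(M/\ell) \to \mathcal{H}^{i+1}(M)
  \xrightarrow{\ \ell\ } \mathcal{H}^{i+1}(M) \to \cdots,
% which breaks into short exact sequences exhibiting H^i(M)/l as a
% subobject, and H^{i+1}(M)[l] as a quotient object, of H^i(M/l):
0 \to \mathcal{H}^{i}(M)/\ell \to \mathcal{H}^{i}(M/\ell)
  \to \mathcal{H}^{i+1}(M)[\ell] \to 0.
```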
As classical sheaves are stable under images, it is then enough to show that $\mathcal{H}^{i}(M)/\ell\subset\mathcal{H}^{i}(M/\ell)$ is exactly the image of $\mathcal{H}^{i}(M/\ell^{n})\to\mathcal{H}^{i}(M/\ell)$ for $n\gg 0$. By the Bockstein sequences for $\ell^{n}$, this would follow if we knew that the projective system $\\{\mathcal{H}^{i}(M)[\ell^{n}]\\}_{n\geq 1}$ is Mittag-Leffler for each $i$, i.e., if each $\mathcal{H}^{i}(M)$ had bounded $\ell$-power torsion. If $M$ is lisse, this is clear. In general, recall the following fact from [BS15, §6.2]: if $k:Z\to Y$ is a (necessarily constructible, as $Y$ is noetherian) locally closed immersion, then the functors $k^{*}$ and $k_{!}$ on the derived category of all pro-étale sheaves preserve limits and colimits and commute with $\mathcal{H}^{i}(-)$. By [BS15, Proposition 6.6.11], we know that $Y$ admits a finite stratification $\\{k_{j}:Y_{j}\to Y\\}$ such that each $N_{j}:=k_{j}^{*}M$ is lisse. The aforementioned properties of $k_{j,!}$ and $k_{j}^{*}$ then show that $\mathcal{H}^{i}(M)[\ell^{n}]$ admits a finite filtration whose graded pieces have the form $k_{j,!}k_{j}^{*}(\mathcal{H}^{i}(N_{j})[\ell^{n}])$ for lisse complexes $N_{j}$. But then each $\mathcal{H}^{i}(N_{j})$ is also lisse and hence has bounded $\ell$-power torsion, so the corresponding claim for $\mathcal{H}^{i}(M)$ follows by devissage. ∎ ###### Theorem 3.40 (The constructible $t$-structure for $\mathbf{Z}_{\ell}$-sheaves on a rigid space). Let $X/K$ be a rigid space. Then there exists a natural “constructible” $t$-structure $({}^{c}D^{\leq 0}_{zc}(X,\mathbf{Z}_{\ell}),{}^{c}D^{\geq 0}_{zc}(X,\mathbf{Z}_{\ell}))$ on $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ with the following properties: 1. (1) An object $K$ lies in ${}^{c}D^{\leq 0}_{zc}(X,\mathbf{Z}_{\ell})$ if and only if $K/\ell\in D^{\leq 0}(X,\mathbf{Z}/\ell)$. 2.
(2) An object $K$ lies in the heart if and only if there exists a locally finite stratification $X=\\{X_{i}\\}$ by Zariski locally closed subsets such that $K|_{X_{i}}$ is locally constant and concentrated in degree $0$ in the obvious sense (i.e., isomorphic locally on $X_{i,v}$ to an object of the form $\underline{N}$ with $N$ a finitely generated $\mathbf{Z}_{\ell}$-module). The restrictions appearing in part (2) above are in the sense of the operations in Theorem 3.36 (see also Remark 3.38). ###### Proof. First assume $X=\mathrm{Spa}(A)$ is affinoid. In this case, to obtain a $t$-structure with the description in (1), we may use Lemma 3.34 to translate to a similar question on $Y=\mathrm{Spec}(A)$. Thus, it suffices to show that the $t$-structure on $D^{b}_{cons}(Y_{proet},\mathbf{Z}_{\ell})$ constructed in Proposition 3.39 is characterized by the property that $K\in D^{\leq 0}(Y_{proet})$ exactly when $K/\ell\in D^{\leq 0}(Y_{et})$; this follows from repleteness of $Y_{proet}$, exactness and full faithfulness of pullback along $Y_{proet}\to Y_{et}$, and standard facts on derived completions. Moreover, part (2) also follows from the reasoning at the end of the proof of Proposition 3.39 as well as the fact that Zariski-closed subsets of $\mathrm{Spa}(A)$ are the same as closed subsets of $\mathrm{Spec}(A)$. For future reference, still in the affinoid case, we remark that once we know (2) is satisfied for some stratification, there is in fact a canonical stratification for which (2) is satisfied. Indeed, if we take the open stratum $X_{0}\subset X$ to be the maximal Zariski dense open provided by Proposition 3.8 for $\mathcal{H}^{*}(K/\ell)$ and continue inductively, we obtain a stratification $\\{X_{i}\\}_{i\geq 0}$ of $X$ by Zariski locally closed subsets such that $K|_{X_{i}}$ is lisse by Lemma 3.35.
To check that $K|_{X_{i}}$ is concentrated in degree $0$ in the sense of (2), we may refine the canonical stratification to ensure it is finer than a given stratification witnessing the property in (2), take stalks, and then deduce the result for the canonical stratification itself. We observe also that this canonical stratification is compatible with restricting to smaller affinoids by Lemma 3.9. We now deduce the general case by glueing. Indeed, first observe that the pullback along maps of affinoids is $t$-exact with respect to the $t$-structure we constructed in the first paragraph: right $t$-exactness is clear from the description in (1), while left $t$-exactness follows from the description of the heart in (2) and the boundedness of the $t$-structure on affinoids. As the condition appearing in part (1) is of a local nature, it follows that for any rigid space $X$, we can glue the $t$-structures defined above on the affinoid opens of $X$ to produce a $t$-structure on $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ satisfying part (1). For part (2), thanks to the last sentence of the previous paragraph, we may simply glue together the canonical stratifications on affinoids constructed in the previous paragraph to obtain the desired stratification. ∎ ## 4\. Perverse sheaves In this section, we use the results of §3 to define a notion of perverse sheaves on rigid spaces over a nonarchimedean base field $K$ of characteristic $0$. Our results hold with finite coefficients for any rigid space, with $\mathbf{Z}_{\ell}$-coefficients when $K$ has residue characteristic $p$ (due to the corresponding requirement in §3.6), and with $\mathbf{Q}_{\ell}$-coefficients when one further restricts to the qcqs case. ### 4.1. Finite coefficients Let $K$ be a nonarchimedean field of characteristic $0$ and let $X/K$ be a rigid space. In this subsection, we use $\mathbf{Z}/n$-coefficients for some $n\geq 1$.
In this section, we develop a theory of perverse sheaves on $X$ that enjoys the same pleasant formal properties as its counterpart in algebraic geometry [BBD82]. ###### Definition 4.1. Let $X/K$ be a rigid space. 1. (1) Define $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X)\subset D_{zc}^{(b)}(X)$ as the full subcategory of complexes $\mathscr{F}$ such that $\dim\mathrm{supp}\mathcal{H}^{j}(\mathscr{F})\leq-j$ for all $j\in\mathbf{Z}$. 2. (2) Define $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X)\subset D_{zc}^{(b)}(X)$ as the full subcategory of complexes $\mathscr{F}$ such that $\mathbf{D}_{X}(\mathscr{F})\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X)$. The main results about this definition are summarized as follows. ###### Theorem 4.2 (Properties of the perverse $t$-structure). In the setup above, we have the following. 1. (1) The pair $(\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X),\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X))$ defines a $t$-structure on $D_{zc}^{(b)}(X)$. Write $\mathop{\mathrm{Perv}}(X)=\mathop{\mathrm{Perv}}(X,\mathbf{Z}/n)$ for the heart of this $t$-structure, and write $\mathop{\phantom{}{}^{\mathfrak{p}}\mathcal{H}}^{i}:D^{(b)}_{zc}(X)\to\mathop{\mathrm{Perv}}(X)$ for the associated cohomology functors. 2. (2) For a Zariski-open immersion $j$ (resp. Zariski-closed immersion $i$), we have the following exactness properties with respect to the perverse $t$-structure: 1. (a) $j^{*}$ and $i_{*}$ are $t$-exact. 2. (b) $j_{!}$ is right $t$-exact in the context of Proposition 3.26 (2), i.e., if $\mathscr{F}\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(U,\mathbf{Z}/n)$ arises as the pullback of some object from $D^{(b)}_{zc}(X,\mathbf{Z}/n)$, then $j_{!}\mathscr{F}\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X,\mathbf{Z}/n)$. 3. (c) $i^{*}$ is right $t$-exact and $Ri^{!}$ is left $t$-exact. 4. (d) $Rj_{*}$ is left $t$-exact in the context of Proposition 3.26 (2).
3. (3) $\mathop{\mathrm{Perv}}(X)$ is stable under Verdier duality. 4. (4) If $X=\mathcal{X}^{an}$ for a finite type $K$-scheme $\mathcal{X}$, the functor $D^{b}_{c}(\mathcal{X})\to D^{b}_{zc}(X)$ induces a fully faithful functor $\mathop{\mathrm{Perv}}(\mathcal{X})\to\mathop{\mathrm{Perv}}(X)$. If $\mathcal{X}$ is proper over $\mathop{\mathrm{Spec}}K$, this functor is an equivalence of categories. 5. (5) Say $j:U\subset X$ is the inclusion of any Zariski locally closed subset and $\mathscr{L}$ is a perverse sheaf on $U$ that admits an extension to $D^{(b)}_{zc}(X,\mathbf{Z}/n)$ under $j^{*}$ (e.g., if one of $\mathscr{L}$ or $\mathbf{D}_{U}(\mathscr{L})$ is lisse, see Proposition 3.26). Then there is a naturally associated intermediate extension $j_{!\ast}\mathscr{L}\in\mathop{\mathrm{Perv}}(X)$ such that $j^{\ast}j_{!\ast}\mathscr{L}\cong\mathscr{L}$. Moreover, $\mathbf{D}_{X}(j_{!\ast}\mathscr{L})\cong j_{!\ast}\mathbf{D}_{U}(\mathscr{L})$. 6. (6) If $X$ is quasicompact, $\mathop{\mathrm{Perv}}(X)$ is Noetherian and Artinian. The simple objects have the form $j_{!*}(\mathscr{L}[d])$, where $j:U\to X$ is a Zariski-locally closed immersion with $U$ smooth of dimension $d$ and $\mathscr{L}$ is a simple locally constant sheaf on $U$. 7. (7) Perversity is stable under pushforward along finite morphisms. 8. (8) Assume $p$ is invertible in $\Lambda$. If $K$ is algebraically closed and $\mathfrak{X}$ is a formal model of $X$ with special fiber $\mathfrak{X}_{s}$, the nearby cycles functor $R\lambda_{\mathfrak{X}\ast}:D^{b}_{zc}(X)\to D^{b}_{c}(\mathfrak{X}_{s})$ is $t$-exact for the perverse $t$-structures. We expect that the $t$-exactness in (8) holds true without the assumption on $p$ (using the perverse $t$-structure on the target constructed in [Gab04]). The right $t$-exactness ought to follow from the relevant affinoid vanishing theorem, generalizing [BM20, Han20], that has been announced by Gabber. ###### Proof. 1.
(1) We give two proofs: one via localizing to [Gab04], and one via a direct argument. Proof via [Gab04]: We have seen before that $X\mapsto\mathcal{D}^{(b)}_{zc}(X)$ is a stack for the analytic topology on $X$. Moreover, pullback along open inclusions $U\subset X$ of rigid spaces preserves $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(-)$ by definition, and $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(-)$ as Verdier duality localizes. Consequently, these pullbacks are perverse $t$-exact once we know the perverse $t$-structure exists. Given a diagram of stable $\infty$-categories equipped with $t$-structures and $t$-exact transition maps, the inverse limit carries a unique $t$-structure compatible with those of the terms. Using the stackiness of $\mathcal{D}^{(b)}_{zc}(-)$, we thus conclude that it suffices to prove (1) when $X=\mathrm{Spa}(A)$ is affinoid. In this case, using Proposition 3.7 as well as the compatibility of the notion of dimension and duality with analytification, it is enough to prove the corresponding statements for $D^{b}_{cons}(\mathcal{X},\mathbf{Z}/n)$ where $\mathcal{X}=\mathrm{Spec}(A)$; we do this next via [Gab04]. Consider the strong perversity function $p:\mathcal{X}\to\mathbf{Z}$ given by $p(x)=-\dim(\overline{\\{x\\}})$. The results of [Gab04, §2 & 6] show that there is a natural perverse $t$-structure on $D^{b}_{c}(\mathcal{X},\mathbf{Z}/n)$ attached to the function $p(-)$. It is clear from the definition in [Gab04, §2] as well as the compatibility of the notion of dimension with analytification that the connective part ${}^{p}D^{\leq 0}_{c}(\mathcal{X},\mathbf{Z}/n)\subset D^{b}_{c}(\mathcal{X},\mathbf{Z}/n)$ of this $t$-structure agrees with $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X)\subset D_{zc}^{b}(X)$ under the equivalence $(-)^{an}:D^{b}_{c}(\mathcal{X},\mathbf{Z}/n)\simeq D^{b}_{zc}(X)$ from Proposition 3.7.
It remains to identify $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X)\subset D_{zc}^{b}(X)$ as defined above (via stalks of the dual) with ${}^{p}D^{\geq 0}_{c}(\mathcal{X},\mathbf{Z}/n)\subset D^{\geq 0}_{c}(\mathcal{X},\mathbf{Z}/n)$ as defined in [Gab04, §2] (via costalks). For this, it suffices to show the following assertion: * $(\ast)$ For any $\mathscr{G}\in D^{b}(\mathcal{X},\mathbf{Z}/n)$ and any geometric point $\overline{x}\to x\in\mathcal{X}$, the costalk $i_{\overline{x}}^{!}\mathscr{G}$ of $\mathscr{G}$ identifies with the $\mathbf{Z}/n$-linear dual of the stalk $i_{\overline{x}}^{*}\mathbf{D}_{\mathcal{X}}(\mathscr{G})$ of the Verdier dual of $\mathscr{G}$. Indeed, assume $(\ast)$. Fix some $\mathscr{F}\in D^{b}_{zc}(X,\mathbf{Z}/n)$ arising as the analytification of $\mathscr{G}\in D^{b}_{c}(\mathcal{X},\mathbf{Z}/n)$. Assume first that $\mathscr{F}\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X,\mathbf{Z}/n)$. Then for any irreducible Zariski-closed subset $Z\subset X$ of dimension $i$, we know by assumption that $\mathcal{H}^{-j}(\mathbf{D}_{X}(\mathscr{F}))$ vanishes after restriction to a Zariski open subset of $Z$ for all $j<i$. This implies a similar constraint on $\mathscr{G}$ by $t$-exactness of analytification and its compatibility with duality and the notion of dimension. Using $(\ast)$ and passing to the limit then shows that $\mathscr{G}\in{}^{p}D^{\geq 0}(\mathcal{X},\mathbf{Z}/n)$. Conversely, if $\mathscr{G}\in{}^{p}D^{\geq 0}(\mathcal{X},\mathbf{Z}/n)$, then $(\ast)$ and the compatibility of analytification with duality and the notion of dimension show that $\mathscr{F}\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X,\mathbf{Z}/n)$. It remains to prove $(\ast)$.
This follows by passage to the limit from (the algebraic version of) Theorem 3.21 (4) applied to quasi-finite maps of the form $\mathcal{U}\hookrightarrow\mathcal{Z}\hookrightarrow\mathcal{X}$, with the first map being a dense open immersion, and the second map being the closed immersion of an irreducible closed subset. Direct proof. We now explain a direct proof of the existence of the perverse $t$-structure on $D^{b}_{zc}(X)$ when $X$ is finite dimensional by induction on $\dim X$. The result is trivial when $\dim X=0$. For the moment, fix a smooth dense Zariski-open subset $j:U\to X$, with closed complement $i:Z\to X$. It is trivial from the definition that $i^{\ast}:D_{zc}^{b}(X)\to D_{zc}^{b}(Z)$ carries $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}$ into $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}$, and then (using biduality) that $Ri^{!}$ carries $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}$ into $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}$. By induction, we can assume that (1) is true for $Z$. Write $D_{zc.U-lis}^{b}(X)\subset D_{zc}^{b}(X)$ for the full subcategory spanned by complexes whose cohomology sheaves are lisse after restriction to $U$. One trivially checks that $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(U)\cap D_{lis}^{b}(U)$ and $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(U)\cap D_{lis}^{b}(U)$ define a $t$-structure on $D_{lis}^{b}(U)$, which locally on connected components is the obvious shift by $\dim U$ of the standard $t$-structure. Moreover, a complex $\mathscr{F}\in D^{b}_{zc.U-lis}(X)$ lies in $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X)$ iff $j^{\ast}\mathscr{F}\in D^{\leq-\dim X}_{lis}(U)$ and $i^{\ast}\mathscr{F}\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(Z)$. 
By duality, this implies that $\mathscr{F}\in D^{b}_{zc.U-lis}(X)$ lies in $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X)$ iff $j^{\ast}\mathscr{F}\in D^{\geq-\dim X}_{lis}(U)$ and $i^{!}\mathscr{F}\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(Z)$. On the other hand, by [BBD82, Theorem 1.4.10] we can glue the perverse $t$-structure on $D^{b}_{lis}(U)$ and the perverse $t$-structure on $D^{b}_{zc}(Z)$ to get an actual $t$-structure on $D_{zc.U-lis}^{b}(X)$. The key technical ingredient here is Theorem 3.11, which guarantees that $Rj_{\ast}$ carries $D^{b}_{lis}(U)$ into $D^{b}_{zc.U-lis}(X)$. This together with the induction hypothesis implies that the truncation functors $\phantom{}{}^{\mathfrak{p}}\tau^{\leq i}$ preserve $D^{b}_{zc.U-lis}(X)$. It is clear that this glued $t$-structure agrees with the restriction of the putative perverse $t$-structure from Definition 4.1 to the full subcategory $D_{zc.U-lis}^{b}(X)\subset D_{zc}^{b}(X)$. Since $D_{zc}^{b}(X)$ is the filtered colimit of $D_{zc.U-lis}^{b}(X)$ over (the opposite category of) all $U\subset X$ as above, we deduce that $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}$ and $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}$ define an honest $t$-structure on $D^{b}_{zc}(X)$. 2. (2) The right $t$-exactness in part (a) is clear, while the left $t$-exactness follows as both functors commute with Verdier duality. Part (b) is clear. The claim for $i^{*}$ in part (c) is clear and that for $Ri^{!}$ then follows by duality. For part (d), it suffices to identify $\mathbf{D}_{X}Rj_{*}\mathscr{F}$ with $j_{!}\mathbf{D}_{U}(\mathscr{F})$ (whenever $\mathscr{F}$ satisfies the hypothesis in the proposition). These sheaves are isomorphic over $U$ as duality is local, so it is enough to show that $i^{*}\mathbf{D}_{X}Rj_{*}\mathscr{F}=0$ for $i:Z\to X$ being the complementary closed immersion.
But this follows as $i^{*}\mathbf{D}_{X}=\mathbf{D}_{Z}Ri^{!}$ on $D^{b}_{zc}(X)$ and $Ri^{!}Rj_{*}=0$ on all of $D(U)$. 3. (3) Clear from the definitions. 4. (4) By Proposition 3.7, $(-)^{an}:D^{b}_{c}(\mathcal{X})\to D^{b}_{zc}(X)$ is fully faithful, and is an equivalence in the proper case. It remains to show that $(-)^{an}$ is perverse $t$-exact. Right $t$-exactness is clear, while left $t$-exactness follows as $(-)^{an}$ is compatible with duality (e.g., via Lemma 3.22). 5. (5) As usual, we define $j_{!*}\mathscr{L}$ to be the image of the map ${}^{\mathfrak{p}}\mathcal{H}^{0}(j_{!}\mathscr{L})\to{}^{\mathfrak{p}}\mathcal{H}^{0}(Rj_{*}\mathscr{L})$ of perverse sheaves, noting that this makes sense by part (2) and Proposition 3.26. The remaining claims are immediate, using the formula $\mathbf{D}_{X}Rj_{*}\mathscr{L}=j_{!}\mathbf{D}_{U}(\mathscr{L})$ from (2) for the last part. 6. (6) It suffices to prove every perverse sheaf has finite length. We prove the claim by induction on dimension $d=\dim(X)$. Clearly we can assume $X$ is reduced. When $d=0$, the space $X$ identifies with $\bigsqcup_{i=1}^{n}\mathrm{Spa}(K_{i})$ with $K_{i}/K$ a finite extension. For such spaces, the claim is clear after translating from étale sheaves to Galois representations, ultimately because finite $\mathbf{Z}/n$-modules have finite length in the category of all $\mathbf{Z}/n$-modules. Next, we show that for any Zariski locally closed immersion $j:U\to X$ and any lisse sheaf $\mathscr{L}$ on $U$, the intermediate extension $j_{!\ast}\mathscr{L}[\dim U]$ is a perverse sheaf of finite length. As pushforward along closed immersions is exact and fully faithful with essential image closed under passage to subquotients, we may assume $j$ is a dense Zariski-open immersion. 
Using induction on dimension as well as the fact that $j_{!\ast}$ is exact up to perverse sheaves supported on the Zariski-closed space $i:Z:=X-U\hookrightarrow X$, which has dimension $<d$, it is enough to prove that $j_{!\ast}\mathscr{L}$ is simple if $\mathscr{L}$ is so. As $j^{*}$ is perverse $t$-exact, it suffices to show that $j_{!\ast}\mathscr{L}$ admits no non-trivial subobjects or quotients supported on $X-U$. The statement for quotients follows from the surjection ${}^{\mathfrak{p}}\mathcal{H}^{0}(j_{!}\mathscr{L})\to j_{!*}\mathscr{L}$, the right perverse $t$-exactness of $j_{!}$, and the fact that $\mathop{R\mathscr{H}\mathrm{om}}(j_{!}(-),i_{*}(-))=0$; the statement for subobjects follows by duality. We now handle the general case. Given a perverse sheaf $\mathscr{F}$ on $X$, let $U\subset X$ be a dense Zariski-open subset such that $\mathscr{F}|_{U}[-\dim(U)]$ is lisse. Then we have a correspondence $j_{!*}(\mathscr{F}|_{U})\leftarrow{}^{\mathfrak{p}}\mathcal{H}^{0}(j_{!}(\mathscr{F}|_{U}))\to\mathscr{F}$ of perverse sheaves with both maps having cones whose perverse cohomology sheaves are supported on $Z=X-U$. As $\dim(Z)<\dim(X)$, induction on dimension and the previous paragraph show that $\mathscr{F}$ has finite length. The claimed description of simple objects also follows from the proof above (and is similar to the algebraic case). Indeed, say $\mathscr{F}$ is simple and supported on some Zariski-closed subset $Z\subset X$. Replacing $X$ with $Z$, we can assume $\mathscr{F}$ is supported everywhere. Let $j:U\subset X$ be a Zariski-dense Zariski-open subset of (locally constant) dimension $d$ such that $\mathscr{F}|_{U}=\mathscr{L}[d]$ for a lisse sheaf $\mathscr{L}$ on $U$. As $j_{!*}$ preserves injections and has a left-inverse, the simplicity of $\mathscr{F}$ implies that $\mathscr{L}$ must be simple.
Both maps in the correspondence $j_{!*}(\mathscr{F}|_{U})\leftarrow{}^{\mathfrak{p}}\mathcal{H}^{0}(j_{!}(\mathscr{F}|_{U}))\to\mathscr{F}$ used in the previous paragraph must then be surjective by simplicity of the targets. The kernels of both maps are supported on $X-U$ while the simple targets are supported on all of $X$. It follows that the kernels of both maps identify with the maximal perverse subsheaf of ${}^{\mathfrak{p}}\mathcal{H}^{0}(j_{!}(\mathscr{F}|_{U}))$ supported on $X-U$. In particular, the two surjections have the same kernel, so $\mathscr{F}=j_{!*}(\mathscr{F}|_{U})$, as wanted. 7. (7) Right $t$-exactness is clear, and commutation of finite pushforward with Verdier duality (Theorem 3.21 (4)) gives left $t$-exactness. 8. (8) By the commutation of nearby cycles with Verdier duality [Han18], it is enough to show that $R\lambda_{\mathfrak{X}\ast}$ is right $t$-exact for the perverse $t$-structures. Fix some $\mathscr{F}\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X)$. By [BBD82, Réciproque 4.1.6], to check that $R\lambda_{\mathfrak{X}\ast}\mathscr{F}\in\phantom{}^{\mathfrak{p}}D^{\leq 0}(\mathfrak{X}_{s})$ it suffices to show that for any étale map $\mathfrak{j}:\mathfrak{Y}_{s}\to\mathfrak{X}_{s}$ from an affine scheme $\mathfrak{Y}_{s}$, the complex $R\Gamma(\mathfrak{Y}_{s},\mathfrak{j}^{\ast}R\lambda_{\mathfrak{X}\ast}\mathscr{F})$ is concentrated in non-positive degrees. Let $j:Y\to X$ be the étale map obtained by deforming $\mathfrak{j}$ to a map of formal schemes and then passing to the rigid generic fiber. Note that $Y$ is affinoid. Then $R\Gamma(\mathfrak{Y}_{s},\mathfrak{j}^{\ast}R\lambda_{\mathfrak{X}\ast}\mathscr{F})\cong R\Gamma(Y,j^{\ast}\mathscr{F})$ by basic properties of the nearby cycles functor in this setting, and $j^{\ast}\mathscr{F}\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(Y)$.
But $R\Gamma(Y,\mathscr{G})$ is concentrated in degrees $\leq 0$ for any $\mathscr{G}\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(Y)$ by rigid analytic Artin-Grothendieck vanishing [BM20, Han20]. ∎ ### 4.2. $\mathbf{Z}_{\ell}$-coefficients Let $K$ be a nonarchimedean field of characteristic $0$ and residue characteristic $p>0$, let $\ell$ be a prime number (including possibly $\ell=p$), and let $X/K$ be a rigid space. Our goal is to define a perverse $t$-structure on the category $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ (introduced in §3.6) that agrees on $\ell$-torsion objects with our previous construction. The definition of the connective part is the same, but that of the coconnective part needs to be modified to account for the fact that the standard $t$-structure on $D_{perf}(\mathbf{Z}_{\ell})$ is not quite self-dual. A similar issue occurs in algebraic geometry (see [BBD82, §3.3]), and our fix is also similar: there are two perverse $t$-structures with $\mathbf{Z}_{\ell}$-coefficients that are exchanged by Verdier duality and which differ from each other by torsion (Proposition 4.6). ###### Construction 4.3 (The $\mathfrak{p}$\- and $\mathfrak{p}^{+}$-perverse $t$-structures). Consider the following full subcategories of $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$: * • $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X,\mathbf{Z}_{\ell})$ is the collection of all $K$’s with $K/\ell\in\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X,\mathbf{F}_{\ell})$. * • $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X,\mathbf{Z}_{\ell})$ is the collection of all those $K$’s with $\mathbf{D}_{X}(K)\in{}^{\mathfrak{p}}D^{\leq 1}_{zc}(X,\mathbf{Z}_{\ell})$ and such that, locally on $X$, there exists some $c$ with $\ell^{c}\cdot{}^{\mathfrak{p}}\mathcal{H}^{1}(\mathbf{D}_{X}(K)/\ell^{n})=0$ for all $n$.
We refer to the pair $(\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X,\mathbf{Z}_{\ell}),\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X,\mathbf{Z}_{\ell}))$ as the $\mathfrak{p}$-perverse $t$-structure on $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$; it will be shown to be a $t$-structure later (Proposition 4.6). Write $({}^{\mathfrak{p}^{+}}D^{\leq 0}(X,\mathbf{Z}_{\ell}),{}^{\mathfrak{p}^{+}}D^{\geq 0}(X,\mathbf{Z}_{\ell}))$ for the dual of the pair $(\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X,\mathbf{Z}_{\ell}),\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X,\mathbf{Z}_{\ell}))$, i.e., ${}^{\mathfrak{p}^{+}}D^{\leq 0}_{zc}(X,\mathbf{Z}_{\ell})=\mathbf{D}_{X}\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\geq 0}}(X,\mathbf{Z}_{\ell})\quad\text{and}\quad{}^{\mathfrak{p}^{+}}D^{\geq 0}_{zc}(X,\mathbf{Z}_{\ell})=\mathbf{D}_{X}\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X,\mathbf{Z}_{\ell}).$ We refer to the pair $({}^{\mathfrak{p}^{+}}D^{\leq 0}_{zc}(X,\mathbf{Z}_{\ell}),{}^{\mathfrak{p}^{+}}D^{\geq 0}_{zc}(X,\mathbf{Z}_{\ell}))$ as the $\mathfrak{p}^{+}$-perverse $t$-structure on $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$. ###### Example 4.4 (The case of a point). Assume $X=\mathrm{Spa}(K)$ is a geometric point, so $K$ is algebraically closed. In this case, we may identify $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})=D_{perf}(\mathbf{Z}_{\ell})$. Under this equivalence, the $\mathfrak{p}$-perverse $t$-structure on $D_{perf}(\mathbf{Z}_{\ell})$ identifies with the standard $t$-structure (and is thus a $t$-structure). Indeed, the identification of the connective part is clear. For the coconnective part, we must show that $M\in D_{perf}(\mathbf{Z}_{\ell})$ lies in $D^{\geq 0}$ exactly when $M^{\vee}:=\mathrm{RHom}(M,\mathbf{Z}_{\ell})\in D^{\leq 1}$ with $\mathrm{Ext}^{1}(M,\mathbf{Z}_{\ell})$ being torsion. 
This follows easily by using biduality $M=\mathrm{RHom}(M^{\vee},\mathbf{Z}_{\ell})$ as well as the fact that $\mathrm{Hom}(N,\mathbf{Z}_{\ell})=0$ if $N$ is torsion. More generally, a similar argument shows the following: for a smooth rigid space $X/K$ of dimension $d$, intersecting the $\mathfrak{p}$-perverse $t$-structure with $D^{b}_{lis}(X,\mathbf{Z}_{\ell})$ gives the (homological) $d$-fold shift of the standard $t$-structure on $D^{b}_{lis}(X,\mathbf{Z}_{\ell})$. To compare the two $t$-structures in Construction 4.3, we shall need the following notion. ###### Definition 4.5. We say that an object $K\in D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ is locally bounded torsion if, locally on $X$, there exists some $c$ with $\ell^{c}\cdot\mathcal{H}^{*}(K)=0$. The main result of this subsection is the following analog of some remarks in [BBD82, §3.3]: ###### Proposition 4.6 (Properties of $\mathbf{Z}_{\ell}$-perverse sheaves). 1. (1) The $\mathfrak{p}$-perverse $t$-structure is indeed a $t$-structure on $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$. Consequently, the same holds for the $\mathfrak{p}^{+}$-perverse $t$-structure. 2. (2) For any $n\geq 1$, the reduction modulo $\ell^{n}$-functor $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})\to D^{(b)}_{zc}(X,\mathbf{Z}_{\ell}/\ell^{n})$ is right $t$-exact with respect to the $\mathfrak{p}$-perverse $t$-structure on the source and the perverse $t$-structure on the target. 3. (3) For any $n\geq 1$, the restriction of scalars functor $\mathrm{Res}:D^{(b)}_{zc}(X,\mathbf{Z}_{\ell}/\ell^{n})\to D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ is $t$-exact with respect to the same pair of $t$-structures as in (2). 4. (4) We have $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X,\mathbf{Z}_{\ell})\subset{}^{\mathfrak{p}^{+}}D^{\leq 0}_{zc}(X,\mathbf{Z}_{\ell})\subset{}^{\mathfrak{p}}D^{\leq 1}_{zc}(X,\mathbf{Z}_{\ell})$. 5.
(5) Given $K\in D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$, we have $K\in{}^{\mathfrak{p}^{+}}D^{\leq 0}_{zc}(X,\mathbf{Z}_{\ell})$ if and only if $K\in{}^{\mathfrak{p}}D^{\leq 1}_{zc}(X,\mathbf{Z}_{\ell})$ with ${}^{\mathfrak{p}}\mathcal{H}^{1}(K)$ being locally bounded torsion. (Note that ${}^{\mathfrak{p}}\mathcal{H}^{1}(-)$ makes sense by part (1).) ###### Proof. 1. (1) All assertions are local, so we may assume $X=\mathrm{Spa}(A)$ is affinoid. We proceed by induction on $\dim(X)$. If $\dim(X)=0$, then we can reduce to the case where $X$ is a point. In this case, the claim follows by Example 4.4. In general, we translate the theorem to a similar question about $D^{b}_{cons}(\mathrm{Spec}(A),\mathbf{Z}_{\ell})$ with evident definitions, and proceed by imitating the glueing method of [BBD82, §1.4]. Fix a smooth dense Zariski-open $j:U\subset\mathcal{X}$ of dimension $d$ with complementary closed $i:Z\subset\mathcal{X}$. Consider the full subcategory $D_{U-lis}\subset D^{b}_{cons}(\mathrm{Spec}(A),\mathbf{Z}_{\ell})$ spanned by complexes $K$ which are lisse over $U$. Then $D_{U-lis}$ admits a semi-orthogonal decomposition into $D_{lis}^{b}(U,\mathbf{Z}_{\ell})$ as well as $D^{b}_{cons}(Z,\mathbf{Z}_{\ell})$ as in [BBD82, §1.4.3]. Moreover, for $K\in D_{U-lis}$, one checks that $K\in{}^{p}D^{\leq 0}_{cons}(X,\mathbf{Z}_{\ell})$ (resp. $K\in{}^{p}D^{\geq 0}_{cons}(X,\mathbf{Z}_{\ell})$) exactly when its $*$-pullbacks (resp. $!$-pullbacks) to $U$ and $Z$ lie in ${}^{p}D^{\leq 0}_{lis}(U,\mathbf{Z}_{\ell})$ and ${}^{p}D^{\leq 0}_{cons}(Z,\mathbf{Z}_{\ell})$ (resp. ${}^{p}D^{\geq 0}_{lis}(U,\mathbf{Z}_{\ell})$ and ${}^{p}D^{\geq 0}_{cons}(Z,\mathbf{Z}_{\ell})$): this is clear for ${}^{p}D^{\leq 0}$ over both $U$ and $Z$ as well as for ${}^{p}D^{\geq 0}$ over $U$, and follows for ${}^{p}D^{\geq 0}$ over $Z$ by the formula $i^{*}\mathbf{D}_{\mathcal{X}}=\mathbf{D}_{Z}Ri^{!}$.
One can then use [BBD82, Theorem 1.4.10] to glue the $\mathfrak{p}$-perverse $t$-structures on $D_{lis}^{b}(U,\mathbf{Z}_{\ell})$ as well as $D^{b}_{cons}(Z,\mathbf{Z}_{\ell})$ (which are $t$-structures by Example 4.4 and induction respectively) to conclude that intersecting the $\mathfrak{p}$-perverse $t$-structure with $D_{U-lis}$ gives a $t$-structure on $D_{U-lis}$. Taking the colimit over all such $U$’s then proves (1). 2. (2) Clear from the definition. 3. (3) The right $t$-exactness is again clear from the definition. The left $t$-exactness follows by unwinding definitions from Remark 3.37. 4. (4) Both containments are immediate from biduality. 5. (5) Fix some $K\in D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$. Both directions can be checked locally on $X$, so we may assume $X$ is qcqs and thus $K$ is bounded. We first prove the “only if” direction, so assume that $K\in{}^{\mathfrak{p}^{+}}D^{\leq 0}(X,\mathbf{Z}_{\ell})$. Unwinding definitions and using biduality, this means that $K\in{}^{\mathfrak{p}}D^{\leq 1}_{zc}(X,\mathbf{Z}_{\ell})$ and that there exists some $c\geq 1$ such that $\ell^{c}\cdot{}^{\mathfrak{p}}\mathcal{H}^{1}(K/\ell^{n})=0$ for all $n$. We shall prove that $\ell^{c}\cdot{}^{\mathfrak{p}}\mathcal{H}^{1}(K)=0$. Since $K\in{}^{\mathfrak{p}}D^{\leq 1}_{zc}(X,\mathbf{Z}_{\ell})$ and $\mathop{\phantom{}{}^{\mathfrak{p}}\\!D_{zc}^{\leq 0}}(X,\mathbf{Z}_{\ell})\subset{}^{\mathfrak{p}^{+}}D^{\leq 0}_{zc}(X,\mathbf{Z}_{\ell})$, we are allowed to replace $K$ with ${}^{\mathfrak{p}}\mathcal{H}^{1}(K)[-1]$, so we may assume that $K$ is concentrated in cohomological degree $1$ with respect to the $\mathfrak{p}$-perverse $t$-structure. 
Moreover, by a variant of the argument used to prove (1), one checks that there exists a constant $c^{\prime}$ such that $K$ (or any perverse $\mathbf{Z}_{\ell}$-sheaf) has $\ell^{\infty}$-torsion bounded by $\ell^{c^{\prime}}$, i.e., that the perverse $\mathbf{Z}_{\ell}$-sheaves $\ker(\ell^{n}:K\to K)$ are killed by $\ell^{c^{\prime}}$ for all $n$. Our hypothesis on $K$ then shows that the complex $K/\ell^{n}$ is killed by $\ell^{2\max(c,c^{\prime})}$ for all $n$. But this implies $K$ must be killed by $\ell^{2\max(c,c^{\prime})}$ by generalities on derived $\ell$-complete sheaves in the replete topos of all $v$-sheaves on $X$ (this follows from the replete topos variant of the following statement, which appears in [BL20] and whose proof we leave as an exercise here: if $M$ is any derived $\ell$-complete abelian group such that there exists some $c\geq 0$ with $\ell^{c}\cdot(M/\ell^{n}M)=0$ for all $n\geq c+1$, then $\ell^{c}M=0$), so we are done. For the “if” direction, assume that $K\in{}^{\mathfrak{p}}D^{\leq 1}_{zc}(X,\mathbf{Z}_{\ell})$ and the object ${}^{\mathfrak{p}}\mathcal{H}^{1}(K)$ is killed by $\ell^{c}$ for some $c$. As reduction modulo powers of $\ell$ is right $t$-exact for the perverse $t$-structure, it is then trivially true that $\ell^{c}\cdot{}^{\mathfrak{p}}\mathcal{H}^{1}(K/\ell^{n})=0$ for all $n\geq 1$. It is then immediate from the definitions that $K\in{}^{\mathfrak{p}^{+}}D^{\leq 0}(X,\mathbf{Z}_{\ell})$. ∎ ###### Remark 4.7. Proposition 4.6 (5) cannot be strengthened to the assertion that ${}^{\mathfrak{p}}\mathcal{H}^{1}(K)$ is bounded torsion globally on $X$ for $K\in{}^{\mathfrak{p}^{+}}D^{\leq 0}(X,\mathbf{Z}_{\ell})$. Indeed, given a countable discrete subset $S:=\\{x_{1},x_{2},x_{3},...\\}\subset X:=(\mathbf{A}^{1})^{an}$ of classical points, one may take $K=\bigoplus i_{x_{n},*}\mathbf{Z}/\ell^{n}[-1]$ to obtain a counterexample (where $i_{x_{n}}:\mathrm{Spa}(k(x_{n}))\to X$ is the inclusion of the point at $x_{n}$). ### 4.3.
$\mathbf{Q}_{\ell}$-coefficients We continue with notation from §4.2. There are some subtleties with passing from $\mathbf{Z}_{\ell}$ to $\mathbf{Q}_{\ell}$-coefficients for rigid spaces that are not qcqs. (In fact, similar issues arise in algebraic geometry but are typically not as consequential, as non-qcqs schemes are much rarer than non-compact rigid spaces. For instance, the affine line over $K$ is qcqs when regarded as a scheme simply because it is a noetherian scheme, but its analytification is not a qcqs rigid space.) Thus, in this subsection, we assume $X$ is qcqs (e.g., $X$ could be affinoid or proper over $K$), so $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})=D^{b}_{zc}(X,\mathbf{Z}_{\ell})$. In this setting, we shall prove in Theorem 4.11 that the basic theory of perverse sheaves with $\mathbf{Q}_{\ell}$-coefficients behaves as well as can be expected. Our constructions will take place in the following category: ###### Definition 4.8 ($\mathbf{Q}_{\ell}$-constructible sheaves). Set $D^{b}_{zc}(X,\mathbf{Q}_{\ell}):=D^{b}_{zc}(X,\mathbf{Z}_{\ell})\otimes_{\mathbf{Z}_{\ell}}\mathbf{Q}_{\ell}$ (i.e., objects remain the same and endomorphisms are tensored with $\mathbf{Q}_{\ell}$). ###### Remark 4.9 ($\mathbf{Q}_{\ell}$-sheaves as Verdier quotient). The category $D^{b}_{zc}(X,\mathbf{Q}_{\ell})$ can also be described as the Verdier quotient of $D^{b}_{zc}(X,\mathbf{Z}_{\ell})$ by its full subcategory of objects annihilated by a power of $\ell$. In fact, the analogous statement holds true with $D^{b}_{zc}(X,\mathbf{Z}_{\ell})$ replaced by any $\mathbf{Z}_{\ell}$-linear triangulated category $\mathcal{C}$. To see this, let $\mathcal{C}_{tors}\subset\mathcal{C}$ be the full subcategory of objects annihilated by a power of $\ell$.
As the multiplication by $\ell$ map on any object of $\mathcal{C}$ has cone in $\mathcal{C}_{tors}$, it follows that $\mathcal{C}/\mathcal{C}_{tors}$ is naturally $\mathbf{Q}_{\ell}$-linear, so there is a natural map $\mathcal{C}\otimes_{\mathbf{Z}_{\ell}}\mathbf{Q}_{\ell}\to\mathcal{C}/\mathcal{C}_{tors}$. Conversely, as the Verdier quotient $\mathcal{C}/\mathcal{C}_{tors}$ can be regarded as the localization $S^{-1}\mathcal{C}$, where $S$ is the collection of maps in $\mathcal{C}$ whose cone lies in $\mathcal{C}_{tors}$, one also immediately constructs a natural map $\mathcal{C}/\mathcal{C}_{tors}\to\mathcal{C}\otimes_{\mathbf{Z}_{\ell}}\mathbf{Q}_{\ell}$. We leave it to the reader to check that these constructions give mutually inverse equivalences of categories. ###### Remark 4.10 (Problems in the non-qcqs case). While Definition 4.8 makes sense for any rigid space $X$, it is the “wrong” definition to use when $X$ is not qcqs. For example, the object $K$ described in Remark 4.7 is nonzero in $D^{b}_{zc}(X,\mathbf{Q}_{\ell})$ yet vanishes after restriction to any quasi-compact open in $X$. While there are several candidate replacements (e.g., based on Remark 4.9, one might work with the quotient of $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ by the full subcategory of locally bounded torsion objects; alternately, one might attempt to work with Zariski constructible $\mathbf{Q}_{\ell}$-complexes defined using the proétale site), we were unable to develop enough machinery to construct a reasonable intersection cohomology theory (e.g., a self-dual theory with a GAGA theorem) using any of these approaches, so we restrict to the qcqs case in our discussion. Using our results on integral coefficients, we obtain a well-behaved perverse $t$-structure with $\mathbf{Q}_{\ell}$-coefficients: ###### Theorem 4.11 (Properties of $\mathbf{Q}_{\ell}$-perverse sheaves). 
The $\mathfrak{p}$- and $\mathfrak{p}^{+}$-perverse $t$-structures from §4.2 induce a $t$-structure on $D^{b}_{zc}(X,\mathbf{Q}_{\ell})$, and they are the same $t$-structure; we call this the perverse $t$-structure on $D^{b}_{zc}(X,\mathbf{Q}_{\ell})$. This $t$-structure satisfies all the properties in Theorem 4.2 with the following changes: one works with proper $\mathcal{X}$ in (4), only finite maps of qcqs spaces in (7), and replaces the assumption $p\nmid\\#\Lambda$ with $p\neq\ell$ in (8). ###### Proof. The first part is immediate from Proposition 4.6 (using part (5) there and the fact that $X$ is qcqs to get the equality of the two $t$-structures). It remains to verify the properties in Theorem 4.2 (2)-(8). Property (3): this is immediate from the fact that the $\mathfrak{p}$- and $\mathfrak{p}^{+}$-perverse $t$-structures on $D^{(b)}_{zc}(X,\mathbf{Z}_{\ell})$ are exchanged by Verdier duality. Property (2): parts (a), (b) and the right $t$-exactness in (c) are clear from the definition, while the left $t$-exactness in (c) was implicitly asserted in the proof of Proposition 4.6 (1). For part (d), we use the stronger property $\mathbf{D}_{X}Rj_{*}\mathscr{F}=j_{!}\mathbf{D}_{U}(\mathscr{F})$ proven in Theorem 4.2 (2) (d) and invert $\ell$. Property (4): the equivalence is clear. For perverse $t$-exactness, one simply notes that the entire discussion in this section also holds true in the algebro-geometric context (and in fact was borrowed from there, see [BBD82, §3.3]), and that analytification is compatible with duality and passing to perverse cohomology sheaves with finite coefficients. Properties (5)-(8): these follow by the same proof as in Theorem 4.2. ∎ ### 4.4. Intersection cohomology In this section, fix a rigid space $X/K$, a prime $\ell$, and a coefficient ring $\Lambda\in\\{\mathbf{Z}/\ell^{n},\mathbf{Z}_{\ell},\mathbf{Q}_{\ell}\\}$. If $\Lambda\in\\{\mathbf{Z}_{\ell},\mathbf{Q}_{\ell}\\}$, we assume $K$ has positive residue characteristic $p>0$.
If $\Lambda=\mathbf{Q}_{\ell}$, then we also assume that $X$ is qcqs. Note that we have a reasonable (e.g., self-dual) theory of perverse $\Lambda$-sheaves in this context by §4.1 and §4.3 respectively. ###### Construction 4.12 (Intersection cohomology of rigid spaces). Let $j:U\subset X$ be a Zariski-dense Zariski-open subset such that $U_{red}$ is smooth. Write $\mathrm{IC}_{X,\Lambda}:=j_{!*}\Lambda[\dim(X)]\in\mathop{\mathrm{Perv}}(X,\Lambda)$; one can show that this is independent of the choice of $U$. We call $\mathrm{IC}_{X,\Lambda}$ the intersection cohomology complex on $X$ and write $IH^{*}(X,\Lambda):=H^{*}(X,\mathrm{IC}_{X,\Lambda})$ for its cohomology, called the intersection cohomology of $X$. We then have the following result on these objects: ###### Theorem 4.13 (Basic properties of intersection cohomology). Write $C/K$ for a completed algebraic closure. 1. (1) $IH^{*}(X_{C},\Lambda)$ are finitely generated $\Lambda$-modules if either $X$ is proper or if $X$ is qcqs and $p\neq\ell$. 2. (2) If $X=\mathcal{X}^{an}$ for a proper $K$-scheme $\mathcal{X}$, then $IH^{*}(X_{C},\mathbf{Q}_{\ell})\simeq IH^{*}(\mathcal{X}_{C},\mathbf{Q}_{\ell})$. 3. (3) If $X$ is proper and equidimensional of dimension $d$ and $p\neq\ell$, there is a natural Poincaré duality isomorphism $IH^{i}(X_{C},\Lambda)^{\vee}\simeq IH^{-i}(X_{C},\Lambda)(d)$ for all $i$. Ongoing work by Zavyalov suggests that the third part should hold true without the assumption on $\ell$. ###### Proof. 1. (1) For $X$ proper, we obtain the result from Theorem 3.10. For $X$ qcqs and $p\neq\ell$, we can then choose a formal model of $X$ and deduce the claim from the constructibility of nearby cycles [Hub98]. 2. (2) It is enough to prove that $\mathrm{IC}_{X,\Lambda}=(\mathrm{IC}_{\mathcal{X},\Lambda})^{an}$. This follows from the definition of either side as an appropriate image, and the compatibility of $(-)^{an}$ with all the constituent operations (namely, $j_{!}$, $j_{*}$, perverse truncations, and images). 3.
(3) As $X$ is proper, Huber’s results show that $R\Gamma(X_{C},-)$ is compatible with duality [Hub96, Ch. 7], so it is enough to show that $\mathbf{D}_{X}(\mathrm{IC}_{X,\Lambda})\simeq\mathrm{IC}_{X,\Lambda}(d)$. Using Theorem 4.2 (4), this amounts to checking that $\mathbf{D}_{U}(\Lambda[d])\simeq\Lambda[d](d)$ for a smooth rigid space $U$ of dimension $d$, which follows immediately as $\omega_{U}=\Lambda[2d](d)$ (Remark 3.20). ∎ ###### Remark 4.14 (Intersection cohomology for Zariski-compactifiable spaces). We expect that there is a well-behaved notion of intersection cohomology with $\mathbf{Q}_{\ell}$-coefficients on any rigid space, not merely the qcqs ones. Since we did not construct a good category of perverse $\mathbf{Q}_{\ell}$-sheaves (see Remark 4.10), let us formulate a precise conjecture. Say $X$ is a rigid space equipped with a Zariski open immersion $j:X\hookrightarrow\overline{X}$ with $\overline{X}$ proper over $K$. One can then define a candidate intersection cohomology complex $\mathrm{IC}_{X,\mathbf{Q}_{\ell}}:=j^{*}\mathrm{IC}_{\overline{X},\mathbf{Q}_{\ell}}\in D^{b}_{zc}(X,\mathbf{Z}_{\ell})\otimes_{\mathbf{Z}_{\ell}}\mathbf{Q}_{\ell}$ as well as the resulting intersection cohomology groups $IH^{*}(X,\mathbf{Q}_{\ell}):=\mathrm{Ext}^{*}(\mathbf{Q}_{\ell},\mathrm{IC}_{X,\mathbf{Q}_{\ell}})$ (where the Exts are computed in $D^{b}_{zc}(X,\mathbf{Z}_{\ell})\otimes_{\mathbf{Z}_{\ell}}\mathbf{Q}_{\ell}$). We conjecture that these objects are independent of the compactification. Note that $IH^{*}(X,\mathbf{Q}_{\ell})$ will be a finite-dimensional $\mathbf{Q}_{\ell}$-vector space using Theorem 3.26. ### 4.5. Some conjectures Given the results in this paper, it is natural to expect that most of the important foundational theorems on perverse sheaves in complex geometry or arithmetic algebraic geometry admit analogs for Zariski-constructible sheaves in $p$-adic analytic geometry. In this section, we formulate some conjectures along these lines.
Let $K/\mathbf{Q}_{p}$ be a finite extension, with residue field $k$ of cardinality $q$; let $C/K$ be a completed algebraic closure. Let $X$ be a rigid space over $K$. Let us begin by describing a conjecture on $\ell$-adic intersection cohomology; we believe this conjecture is accessible in the algebraic case thanks to de Jong’s alterations theorem [dJ96]. ###### Conjecture 4.15 ($\ell$-adic intersection cohomology). Assume $X$ is qcqs. Fix a prime $\ell\neq p$. 1. (1) Nearby cycles: For any formal model $\mathfrak{X}/\mathcal{O}_{K}$ and any $\ell\neq p$, the nearby cycle sheaf $\mathscr{F}=R\lambda_{\mathfrak{X}\ast}(\mathrm{IC}_{X,\mathbf{Q}_{\ell}})$ is a mixed $\ell$-adic perverse sheaf on the geometric special fiber $\mathfrak{X}_{\overline{s}}$. Moreover, if $X$ is equidimensional of dimension $d$, then $\mathrm{IC}_{\mathfrak{X}_{s},\mathbf{Q}_{\ell}}$ occurs as a summand of the $d$th graded piece of the weight filtration of $\mathscr{F}$. 2. (2) Weights: For any prime $\ell\neq p$ and any $g\in W_{K}$ projecting to a nonnegative power of geometric Frobenius, the eigenvalues of $g$ acting on $IH^{\ast}(X_{C},\mathbf{Q}_{\ell})$ are $q$-Weil numbers of weight $\geq 0$ . Next, we formulate a conjecture on the $p$-adic Hodge theoretic properties of $p$-adic intersection cohomology. The first part of this conjecture can be proven in the algebraic case using the decomposition theorem. ###### Conjecture 4.16 ($p$-adic intersection cohomology). Say $X$ is proper over $K$. 1. (1) Each $IH^{i}(X_{C},\mathbf{Q}_{p})$ is a de Rham $G_{K}$-representation. (Moreover, assuming the conjecture in Remark 4.14, this should be true for any Zariski compactifiable rigid space.) 2. (2) If $\mathbf{L}$ is a de Rham $\mathbf{Z}_{p}$-local system on a smooth Zariski-open subset $j:U\to X$, then $H^{\ast}(X_{\overline{K}},\mathrm{IC}(\mathbf{L}[\dim(X)]))$ is de Rham. Finally, we discuss the rigid analog of the BBDG decomposition theorem. 
As in complex geometry, the Hopf surface construction $X=(\mathbf{A}^{2,\mathrm{an}}-\\{0\\})/q^{\mathbf{Z}}$ (with $q\in K$ and $0<|q|<1$) gives a proper smooth genus $1$ fibration $f:X\to(\mathbf{A}^{2,\text{an}}-\\{0\\})/\mathbf{G}_{m}^{\text{an}}\simeq\mathbf{P}^{1,\text{an}}$ over $K$ such that $Rf_{*}\mathbf{Q}_{\ell}$ is not formal (i.e., is not isomorphic to a direct sum of its shifted cohomology sheaves). It is thus unreasonable to expect the decomposition theorem to hold true for arbitrary proper maps between rigid spaces. Nevertheless, by analogy with the complex geometric story in [Sai90], the following appears plausible: ###### Conjecture 4.17 (Decomposition theorem). Let $f:X\to Y$ be a projective map of rigid spaces over $K$ with $Y$ qcqs. Then $Rf_{*}\mathrm{IC}_{X,\mathbf{Q}_{\ell}}$ is a direct sum of shifts of perverse sheaves of the form $j_{!*}\mathscr{L}$, where $j:U\to Y$ is a Zariski-locally closed immersion and $\mathscr{L}$ is a $\mathbf{Q}_{\ell}$-local system on $U$. Finally, we also expect that Zariski-constructible sheaves on smooth rigid spaces are holonomic, in analogy with [KS94, Bei16], but we do not formulate a precise statement here. ## References * [Ach17] Piotr Achinger. Wild ramification and $K(\pi,1)$ spaces. Invent. Math., 210(2):453–499, 2017. * [And74] Michel André. Localisation de la lissité formelle. Manuscripta Math., 13:297–307, 1974. * [Bar76] Wolfgang Bartenwerfer. Der erste Riemannsche Hebbarkeitssatz im nichtarchimedischen Fall. J. Reine Angew. Math., 286(287):144–163, 1976. * [BBD82] A. A. Beĭlinson, J. Bernstein, and P. Deligne. Faisceaux pervers. In Analysis and topology on singular spaces, I (Luminy, 1981), volume 100 of Astérisque, pages 5–171. Soc. Math. France, Paris, 1982. * [Bei16] A. Beilinson. Constructible sheaves are holonomic. Selecta Math. (N.S.), 22(4):1797–1819, 2016. * [Ber90] Vladimir G. Berkovich.
Spectral theory and analytic geometry over non-Archimedean fields, volume 33 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1990. * [Ber93] Vladimir G. Berkovich. Étale cohomology for non-Archimedean analytic spaces. Inst. Hautes Études Sci. Publ. Math., (78):5–161 (1994), 1993. * [BGR84] S. Bosch, U. Güntzer, and R. Remmert. Non-Archimedean analysis, volume 261 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1984. A systematic approach to rigid analytic geometry. * [BL20] Bhargav Bhatt and Jacob Lurie. A $p$-adic Riemann-Hilbert functor I. 2020. Preprint. * [BLR95] Siegfried Bosch, Werner Lütkebohmert, and Michel Raynaud. Formal and rigid geometry. IV. The reduced fibre theorem. Invent. Math., 119(2):361–398, 1995. * [BM20] Bhargav Bhatt and Akhil Mathew. The arc-topology. 2020. To appear in Duke Math. J. * [Bos14] Siegfried Bosch. Lectures on formal and rigid geometry, volume 2105 of Lecture Notes in Mathematics. Springer, Cham, 2014. * [BS15] Bhargav Bhatt and Peter Scholze. The pro-étale topology for schemes. Astérisque, (369):99–201, 2015. * [Con99] Brian Conrad. Irreducible components of rigid spaces. Ann. Inst. Fourier (Grenoble), 49(2):473–541, 1999. * [dJ96] A. J. de Jong. Smoothness, semi-stability and alterations. Inst. Hautes Études Sci. Publ. Math., (83):51–93, 1996. * [dJvdP96] Johan de Jong and Marius van der Put. Étale cohomology of rigid analytic spaces. Doc. Math., 1:No. 01, 1–56, 1996. * [Duc18] Antoine Ducros. Families of Berkovich spaces. Astérisque, (400):vii+262, 2018. * [Eke90] Torsten Ekedahl. On the adic formalism. In The Grothendieck Festschrift, Vol. II, volume 87 of Progr. Math., pages 197–218. Birkhäuser Boston, Boston, MA, 1990. * [FGK11] Kazuhiro Fujiwara, Ofer Gabber, and Fumiharu Kato. On Hausdorff completions of commutative rings in rigid geometry. J. Algebra, 332:293–321, 2011.
* [FK18] Kazuhiro Fujiwara and Fumiharu Kato. Foundations of rigid geometry. I. EMS Monographs in Mathematics. European Mathematical Society (EMS), Zürich, 2018. * [Gab04] Ofer Gabber. Notes on some $t$-structures. In Geometric aspects of Dwork theory. Vol. I, II, pages 711–734. Walter de Gruyter, Berlin, 2004. * [GR03] Ofer Gabber and Lorenzo Ramero. Almost ring theory, volume 1800 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2003. * [Gro66] A. Grothendieck. Éléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas. III. Inst. Hautes Études Sci. Publ. Math., (28):255, 1966. * [Han18] David Hansen. Remarks on nearby cycles of formal schemes. 2018. * [Han20] David Hansen. Vanishing and comparison theorems in rigid analytic geometry. Compos. Math., 156(2):299–324, 2020. * [Hub96] Roland Huber. Étale cohomology of rigid analytic varieties and adic spaces. Aspects of Mathematics, E30. Friedr. Vieweg & Sohn, Braunschweig, 1996. * [Hub98] R. Huber. A finiteness result for the compactly supported cohomology of rigid analytic varieties. J. Algebraic Geom., 7(2):313–357, 1998. * [ILO14] Luc Illusie, Yves Laszlo, and Fabrice Orgogozo, editors. Travaux de Gabber sur l’uniformisation locale et la cohomologie étale des schémas quasi-excellents. Société Mathématique de France, Paris, 2014. Séminaire à l’École Polytechnique 2006–2008. [Seminar of the Polytechnic School 2006–2008], with the collaboration of Frédéric Déglise, Alban Moreau, Vincent Pilloni, Michel Raynaud, Joël Riou, Benoît Stroh, Michael Temkin and Weizhe Zheng. Astérisque No. 363-364 (2014). * [KS94] Masaki Kashiwara and Pierre Schapira. Sheaves on manifolds, volume 292 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1994. With a chapter in French by Christian Houzel, corrected reprint of the 1990 original. * [LO08] Yves Laszlo and Martin Olsson.
The six operations for sheaves on Artin stacks. I. Finite coefficients. Publ. Math. Inst. Hautes Études Sci., (107):109–168, 2008. * [Lüt93] W. Lütkebohmert. Riemann’s existence problem for a $p$-adic field. Invent. Math., 111(2):309–330, 1993. * [Poi13] Jérôme Poineau. Les espaces de Berkovich sont angéliques. Bull. Soc. Math. France, 141(2):267–297, 2013. * [Qui] Daniel Quillen. Homology of commutative rings. Preprint. * [Roo06] Jan-Erik Roos. Derived functors of inverse limits revisited. J. London Math. Soc. (2), 73(1):65–83, 2006. * [Sai90] Morihiko Saito. Decomposition theorem for proper Kähler morphisms. Tohoku Math. J. (2), 42(2):127–147, 1990. * [Sch12] Peter Scholze. Perfectoid spaces. Publications mathématiques de l’IHÉS, 116(1):245–313, 2012. * [Sch17] Peter Scholze. Étale cohomology of diamonds. 2017. Preprint. * [Sta18] The Stacks Project Authors. Stacks Project. https://stacks.math.columbia.edu, 2018. * [SW20] Peter Scholze and Jared Weinstein. Berkeley lectures on $p$-adic geometry. 2020. To appear in Annals of Math. Studies. * [Tem18] Michael Temkin. Functorial desingularization over $\bf Q$: boundaries and the embedded case. Israel J. Math., 224(1):455–504, 2018. * [Ver76] Jean-Louis Verdier. Classe d’homologie associée à un cycle. In Séminaire de géométrie analytique (École Norm. Sup., Paris, 1974-75), pages 101–151. Astérisque, No. 36–37. 1976.
PASA 2024 # Remnant Radio Galaxies Discovered in a Multi-frequency Survey B. Quici${}^{\rm\ref{ICRAR}}$ N. Hurley-Walker${}^{\rm\ref{ICRAR}}$ N. Seymour${}^{\rm\ref{ICRAR}}$ R. J. Turner${}^{\rm\ref{uTAS}}$ S. S. Shabala${}^{\rm\ref{uTAS},\ref{ASTRO3D}}$ M. Huynh${}^{\rm\ref{CSIRO}}$ H. Andernach${}^{\rm\ref{mexico}}$ A. D. Kapińska${}^{\rm\ref{NRAO}}$ J. D. Collier${}^{\rm\ref{CASS},\ref{wSYD},\ref{ct}}$ M. Johnston-Hollitt${}^{\rm\ref{ICRAR}}$ S. V. White${}^{\rm\ref{ICRAR},\ref{uSAF}}$ I. Prandoni${}^{\rm\ref{inaf}}$ T. J. Galvin${}^{\rm\ref{CSIRO}}$ T. Franzen${}^{\rm\ref{ASTRON},\ref{ICRAR}}$ C. H. Ishwara-Chandra${}^{\rm\ref{TIFR}}$ S. Bellstedt${}^{\rm\ref{UWA}}$ S. J. Tingay${}^{\rm\ref{ICRAR}}$ B. M. Gaensler${}^{\rm\ref{dunlap}}$ A. O’Brien${}^{\rm\ref{CSIRO_},\ref{CFG},\ref{wSYD}}$ J. Rogers${}^{\rm\ref{uTAS}}$ K. Chow${}^{\rm\ref{CSIRO_}}$ S. Driver${}^{\rm\ref{UWA}}$ and A. Robotham${}^{\rm\ref{UWA}}$ International Centre for Radio Astronomy Research, Curtin University, Bentley, WA 6102, Australia School of Natural Sciences, University of Tasmania, Private Bag 37, Hobart, 7001, Australia ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) CSIRO Astronomy and Space Science, 26 Dick Perry Avenue, Kensington, WA 6151, Australia DCNE, Universidad de Guanajuato, Cjón. de Jalisco s/n, Guanajuato, C.P. 36023, Mexico National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM 87801, USA CSIRO Astronomy and Space Science (CASS), Marsfield, NSW 2122, Australia School of Science, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia The Inter-University Institute for Data Intensive Astronomy (IDIA), Department of Astronomy, University of Cape Town, Private Bag X3, Rondebosch, 7701, South Africa Department of Physics and Electronics, Rhodes University, PO Box 94, Grahamstown, 6140, South Africa Istituto di Radioastronomia, Via P.
Gobetti 101, 40129, Italy ASTRON: the Netherlands Institute for Radio Astronomy, PO Box 2, 7990 AA, Dwingeloo, The Netherlands National Centre for Radio Astrophysics, TIFR, Post Bag No. 3, Ganeshkhind Post, 411007 Pune, India International Centre for Radio Astronomy Research, M468, University of Western Australia, Crawley, WA 6009, Australia Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto ON M5S 3H4, Canada CSIRO Astronomy and Space Science, PO Box 76, 1710, Epping, NSW, Australia Center for Gravitation, Cosmology, and Astrophysics, Department of Physics, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201, USA ###### Abstract The remnant phase of a radio galaxy begins when the jets launched from an active galactic nucleus are switched off. To study the fraction of radio galaxies in a remnant phase, we take advantage of an $8.31$ deg2 sub-region of the GAMA 23 field which comprises surveys covering the frequency range 0.1–9 GHz. We present a sample of 104 radio galaxies compiled from observations conducted by the Murchison Widefield Array (216 MHz), the Australian Square Kilometre Array Pathfinder (887 MHz), and the Australia Telescope Compact Array (5.5 GHz). We adopt an ‘absent radio core’ criterion to identify 10 radio galaxies showing no evidence for an active nucleus. We classify these as new candidate remnant radio galaxies. Seven of these objects still display compact emitting regions within the lobes at 5.5 GHz; at this frequency the emission is short-lived, implying a recent jet switch-off. On the other hand, only three show evidence of aged lobe plasma by the presence of an ultra-steep spectrum ($\alpha<-1.2$) and a diffuse, low surface-brightness radio morphology.
The predominant fraction of young remnants is consistent with a rapid fading during the remnant phase. Within our sample of radio galaxies, our observations constrain the remnant fraction to $4\%\lesssim f_{\mathrm{rem}}\lesssim 10\%$; the lower limit comes from the limiting case in which all remnant candidates with hotspots are simply active radio galaxies with faint, undetected radio cores. Finally, we model the synchrotron spectrum arising from a hotspot to show they can persist for 5–10 Myr at 5.5 GHz after the jets switch off – radio emission arising from such hotspots can therefore be expected in an appreciable fraction of genuine remnants. ###### doi: 10.1017/pas.2024.xxx ###### keywords: galaxies: active – radio continuum: galaxies – methods: statistical ## 1 Introduction The jets launched from a radio-loud active galactic nucleus (AGN) arise from the accretion onto a super-massive black hole, and form synchrotron-emitting radio lobes in the intergalactic environments of their host galaxies (Scheuer, 1974). Whilst the jets are active, a radio galaxy will often display compact features such as an unresolved radio core coincident with its host galaxy, bi-polar jets, and hotspots in the lobes of Fanaroff–Riley type II (FR-II; Fanaroff & Riley, 1974) radio galaxies. The radio continuum spectrum arising from the lobes is usually well approximated by a broken power-law for radio frequencies between $100$ MHz and $10$ GHz; the observed spectral index $\alpha$, defined through $S_{\nu}\propto\nu^{\alpha}$, typically ranges within $-1.0<\alpha<-0.5$. Steepening of the radio lobe spectrum by up to $\Delta\alpha\leq 0.5$ (e.g. the CI model; Kardashev, 1962; Pacholczyk, 1970; Jaffe & Perola, 1973) is attributed to ageing of the lobe plasma. Such active radio galaxies do not offer a complete picture of the life-cycle of radio galaxies, due to a seemingly intermittent behaviour of the AGN jet activity.
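The broken power-law behaviour just described can be made concrete with a small numerical sketch. The function below evaluates a lobe spectrum that follows one power law below a break frequency and steepens by $\Delta\alpha=0.5$ above it; the injection index, break frequency, and flux normalisation used in the example are invented for illustration and are not measurements from this paper.

```python
def lobe_spectrum(nu, s_break, nu_break, alpha_inj, delta_alpha=0.5):
    """Broken power-law lobe spectrum (frequencies in GHz, flux in Jy).

    S ~ nu**alpha_inj below the break and S ~ nu**(alpha_inj - delta_alpha)
    above it, continuous at nu_break. All parameter values are illustrative."""
    if nu <= nu_break:
        return s_break * (nu / nu_break) ** alpha_inj
    return s_break * (nu / nu_break) ** (alpha_inj - delta_alpha)

# Hypothetical lobe: injection index -0.7, break at 1 GHz, 1 Jy at the break.
s_150 = lobe_spectrum(0.15, 1.0, 1.0, -0.7)   # below the break: index -0.7
s_5500 = lobe_spectrum(5.5, 1.0, 1.0, -0.7)   # above the break: index -1.2
```

Below the break the two-point index evaluates to the injection value, while above it the spectrum runs a half-unit steeper, matching the $\Delta\alpha\leq 0.5$ behaviour of continuous-injection models.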
The remnant phase of a radio galaxy begins once the jets switch off. During this phase the lobes will fade as they undergo a rapid spectral evolution, as shown observationally by Murgia et al. (2011), Shulevski et al. (2012), Godfrey et al. (2017), Brienza et al. (2017), Mahatma et al. (2018) and Jurlin et al. (2020), and with modelling conducted by Turner (2018), Hardcastle (2018) and Shabala et al. (2020). Remnant radio galaxies (remnants herein) will remain observable for many tens of Myr at low ($\sim 150$ MHz) frequencies, which is comparable to, but shorter than, the duration of their previous active phase (Shulevski et al., 2017; Turner, 2018; Brienza et al., 2020). Jets are also known to restart after a period of inactivity (e.g. Roettiger et al., 1994), giving rise to a restarted radio galaxy. Several observational classes exist to describe such sources; double-double radio galaxies (DDRG; Schoenmakers et al., 2000) describe sources in which two distinct pairs of lobes can be observed, however restarting jets can also appear as compact steep-spectrum sources embedded within larger-scale remnant lobes (e.g. Brienza et al., 2018). Compiling samples of remnant (Saripalli et al., 2012; Godfrey et al., 2017; Brienza et al., 2017; Mahatma et al., 2018) and restarted (Saripalli et al., 2012; Mahatma et al., 2019) radio galaxies sheds new light on their dynamics and evolution, and by extension, the AGN jet duty cycle. Jurlin et al. (2020) present a direct analysis of the radio galaxy life cycle, in which their sample is decomposed into active, remnant and restarted radio galaxies. To complement these observational works, Shabala et al. (2020) present a new methodology in which uniformly-selected samples of active, remnant and restarted radio galaxies are used to constrain evolutionary models describing the AGN jet duty cycle.
However, using radio observations to confidently identify radio sources in these phases is a challenging task, even in the modern era of radio instruments. Remnant radio galaxies, which are the focus of this work, display various observational properties that correlate with age, presenting a challenge for identifying complete samples of such sources. Various selection techniques exist in the literature, each with its own selection biases. Due to the preferential cooling of higher-energy synchrotron-radiating electrons (Jaffe & Perola, 1973; Komissarov & Gubanov, 1994), many authors have identified remnants by their ultra-steep radio spectrum ($\alpha<-1.2$), reflecting the absence of a source of energy injection (e.g. Cordey, 1987; Parma et al., 2007; Hurley-Walker et al., 2015). However, Brienza et al. (2016) demonstrate that this technique preferentially selects aged remnants and will miss those in which the lobes have not had time to steepen over the observed frequency range. Murgia et al. (2011) propose a spectral curvature (SPC) criterion, which evaluates the difference in spectral index over two frequency ranges, e.g. $\mathrm{SPC}=\alpha_{\mathrm{high}}-\alpha_{\mathrm{low}}$ such that $\mathrm{SPC}>0$ for a convex spectrum. Sources with $\mathrm{SPC}>0.5$ demonstrate highly curved spectra, likely attributable to remnant lobes. However, modelling conducted by Godfrey et al. (2017) shows that not all remnants will be selected this way, even at high ($\sim$10 GHz) frequencies. Morphological selection offers a complementary way to identify remnants, independent of spectral ageing of the lobes. This technique often involves searching for low surface brightness (SB) profiles (SB $<50$ mJy arcmin$^{-2}$), amorphous radio morphologies, and an absence of hotspots (e.g. Saripalli et al., 2012; Brienza et al., 2017).
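The SPC criterion is straightforward to evaluate; below is a minimal sketch with hypothetical flux densities. Note that, following the convention of Murgia et al. (2011), SPC is computed here with the positive spectral-index sign ($S_{\nu}\propto\nu^{-\alpha}$), so that a spectrum steepening towards high frequency gives SPC $>0$:

```python
import math

def alpha_positive(s1, nu1, s2, nu2):
    """Spectral index in the S_nu ~ nu**(-alpha) convention
    (alpha > 0 for a falling spectrum)."""
    return -math.log(s2 / s1) / math.log(nu2 / nu1)

# Build a toy convex spectrum: alpha = 0.7 at low frequency,
# steepening to alpha = 1.5 at high frequency (hypothetical values).
s150 = 1.0
s400 = s150 * (400 / 150) ** -0.7
s1400 = 0.1
s5500 = s1400 * (5500 / 1400) ** -1.5

spc = (alpha_positive(s1400, 1400e6, s5500, 5500e6)
       - alpha_positive(s150, 150e6, s400, 400e6))
print(round(spc, 2))  # 0.8 > 0.5, flagged as a candidate remnant spectrum
```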
However, young remnants in which the hotspots have not yet disappeared, due to a recent switch-off of the jets (e.g. 3C 028; Feretti et al., 1984; Harwood et al., 2015), will be missed by such morphological criteria. An alternative approach is to identify remnants based on an absent radio core; a radio core should be absent if the AGN is currently inactive (Giovannini et al., 1988). This property is often invoked to confirm the status of a remnant radio galaxy, e.g. see Cordey (1987), and was recently employed by Mahatma et al. (2018) as a criterion to search for remnant candidates in a LOw Frequency ARray (LOFAR)-selected sample. The caveat here is the possibility that a faint radio core exists below the sensitivity of the observations, meaning this method selects only remnant candidates. This method will also miss remnant lobes from a previous epoch of activity in restarted radio galaxies. A likely example of such a source is MIDAS J230304-323228, discussed in Sect. 3. These sources are beyond the scope of this work, however they are a promising avenue for future work. A common aspect of almost all previously-mentioned observational studies of remnant radio galaxies is their selection at low frequencies (typically around 150 MHz). The preferential radiative losses of high-energy electrons mean that the observable lifetime of remnant lobes increases at lower observing frequencies. Incorporating a low-frequency selection thus plays an important role in improving the completeness of remnant radio galaxy samples; quite often the oldest remnant lobes are detectable only at such low frequencies. In this work, we take advantage of a broad range of radio surveys targeting the Galaxy And Mass Assembly (GAMA; Driver et al., 2011) 23 field to identify and study new remnant radio galaxy candidates in which the central AGN is currently inactive.
We use new low-frequency observations provided by the Murchison Wide-field Array (MWA; Tingay et al., 2013) and the Australian Square Kilometre Array Pathfinder (ASKAP; Johnston et al., 2007; McConnell et al., 2016) to compile a sample of radio galaxies, and use high-frequency (5.5 GHz) observations provided by the Australia Telescope Compact Array (ATCA) to identify remnant candidates by the ‘absent radio core’ criterion. In Sect. 2 we present the multi-wavelength data used for this work. In Sect. 3 we discuss the compilation of our radio galaxy sample, the classification of remnant candidates, and the matching of host galaxies. In Sect. 4 we discuss each of the selected remnant candidates. In Sect. 5 we discuss the observed radio properties of the sample and emphasise the caveats of various remnant selection methods. We also discuss the rest-frame properties of the sample, present the fraction of remnants constrained by our observations, and examine a particularly interesting remnant with detailed modelling. Finally, our results are summarised in Sect. 6. We assume a flat $\Lambda$CDM cosmology with $H_{0}=67.7$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M}=0.307$ and $\Omega_{\Lambda}=1-\Omega_{\rm M}$ (Planck Collaboration et al., 2014). All coordinates are reported in the J2000 equinox. Figure 1: Sky coverage of radio surveys dedicated to observing GAMA 23, described in Sect. 2.1. Near-infrared VIKING observations (Sect. 2.2.1) are also displayed. Thick-red and thin-green arrows indicate the directions in which MIDAS ES and VIKING extend beyond the represented footprint. ## 2 DATA Our sample is focused on an 8.31 deg$^2$ sub-region of the GAMA 23 field ($\mathrm{RA}=345^{\circ},\mathrm{Dec}=-32.5^{\circ}$), e.g. see Figure 1. The rich multi-wavelength coverage makes the absent radio-core criterion a viable method to search for remnant radio galaxies, and allows us to match the radio sample with their host galaxies.
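Rest-frame quantities derived later in the paper depend on the cosmology adopted in Sect. 1. As an aside, a minimal sketch of the corresponding luminosity distance (numerical integration with the stated $H_0=67.7$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\rm M}=0.307$; in practice a library such as astropy.cosmology would be used):

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def luminosity_distance(z, h0=67.7, om=0.307, steps=10000):
    """Luminosity distance [Mpc] in a flat LCDM cosmology.

    D_L = (1+z) * (c/H0) * integral_0^z dz' / E(z'),
    with E(z) = sqrt(Om*(1+z)^3 + (1-Om)), evaluated by the
    trapezoidal rule.
    """
    ol = 1.0 - om
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zp = i * dz
        e = math.sqrt(om * (1.0 + zp) ** 3 + ol)
        weight = 0.5 if i in (0, steps) else 1.0
        integral += weight / e
    integral *= dz
    return (1.0 + z) * (C_KM_S / h0) * integral

# e.g. a source at z = 0.2 lies at roughly 1000 Mpc in this cosmology
```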
Table 1: Summarised properties of the radio surveys spanning the GAMA 23 field. The columns, from left to right, detail the telescope used to conduct the observations, the name of the radio survey, the dates observations were conducted, the central frequency of the observing band, the bandwidth available in each band, the average noise properties within region D, and the shape of the restoring beam.

Telescope | Survey | Date observed | Frequency [MHz] | Bandwidth [MHz] | Noise [mJy beam$^{-1}$] | Beam shape Bmaj[′′], Bmin[′′], PA[∘]
---|---|---|---|---|---|---
MWA Phase I | GLEAM SGP | 2013–2015 | 119 | 30.72 | 16.2 | 241, 202, -57
 | | | 154 | 30.72 | 9.3 | 199, 157, -51
 | | | 186 | 30.72 | 6.7 | 162, 129, -47
MWA Phase II | MIDAS ES | 2018–2020 | 216 | 30.72 | 0.9 | 54, 43, 157
uGMRT | – | 2016 | 399 | 200 | 0.1 | 15.8, 6.7, 9.5
ASKAP | EMU-ES | 2019 | 887.5 | 288 | 0.035 | 10.5, 7.8, 87
VLA | NVSS | 1993–1996 | 1400 | 50 | 0.45 | 45, 45, 0
ATCA | GLASS | 2016–2020 | 5500, 9500 | 2000, 2000 | 0.024, 0.04 | (4, 2, 0), (3.4, 1.7, 0)

### 2.1 Radio data Below we describe the radio observations used throughout this work, summarised in Tables 1 and 2. #### 2.1.1 GLEAM SGP (119, 154, 186 MHz) From 2013 to 2015, the MWA observed $\sim$30,000 deg$^2$ of the sky south of Dec $=+30^{\circ}$. The GaLactic and Extra-galactic All-sky MWA (GLEAM; Wayth et al., 2015) survey adopted a drift-scan observing method using the MWA Phase I configuration and a maximum baseline of $\sim$3 km. Observations exclusive to 2013–2014 (year 1) were used to generate the public extra-galactic GLEAM first release (Hurley-Walker et al., 2017; catalogue publicly available on VizieR: http://cdsarc.u-strasbg.fr/viz-bin/Cat?VIII/100). GLEAM spans a 72–231 MHz frequency range, divided evenly into 30.72 MHz wide-bands centered at $\nu_{c}=$ {88, 119, 154, 186, 216} MHz.
Observations conducted during 2014–2015 (year 2) covered the same sky area as GLEAM, but with a factor of two increase in the observing time due to offset ($\pm$1 hour in median RA) scans. Within a footprint of $310^{\circ}\leq\mathrm{RA}\leq 76^{\circ}$, $-48^{\circ}\leq\mathrm{Dec}\leq-2^{\circ}$ ($\sim 5,100$ deg$^2$ sky coverage), Franzen et al. (in prep) independently reprocessed the overlapping observations from both years to produce the GLEAM South Galactic Pole (SGP) survey, achieving a factor $\sim 3$ increase in integration time over GLEAM. Due to calibration errors associated with Fornax A and Pictor A entering the primary beam side-lobes, the lowest wide-band centered at 88 MHz was discarded at imaging. Despite its declination-dependence, the GLEAM SGP root-mean-square (RMS) noise remains effectively constant within the footprint of our (8.31 deg$^2$) study. The point spread function (PSF) varies slightly across the survey; at 216 MHz the PSF size within GAMA 23 is approximately $150^{\prime\prime}\times 119^{\prime\prime}$, and an RMS of $\sigma\sim 4\pm 0.5$ mJy beam$^{-1}$ is achieved. This is consistent with a factor $\sim 2$ increase in sensitivity over GLEAM. We use an internal GLEAM SGP component catalogue, developed following the method used for GLEAM. Franzen et al. (in prep) quote an 8% absolute uncertainty in the GLEAM SGP flux density scale, consistent with the value reported for GLEAM by Hurley-Walker et al. (2017).
The MWA Interestingly Deep Astrophysical Survey (MIDAS, Seymour et al. in prep) will provide deep observations of six extra-galactic survey fields, including GAMA 23, in the MWA Phase II configuration. Here, we use an early science (MIDAS ES herein) image of the highest frequency band centered at 216 MHz. Data reduction followed the method outlined by Franzen et al. (in prep), where each snapshot was calibrated using a model derived from GLEAM. Imaging was performed using the WSClean imaging software (Offringa et al., 2014) with a robustness of 0 (Briggs, 1995). Altogether, 54 two-minute snapshot images, each achieving an average RMS of $\sim 8\,$mJy beam$^{-1}$ without self-calibration, were mosaicked to produce a deep image. Stacking of individual snapshots reproduced the expected $\sqrt{t}$ increase in sensitivity, where $t$ is the total integration time, and implied the classical and sidelobe confusion limits were not reached. After a round of self-calibration the final deep image, at zenith, achieved an RMS of $\sim$0.9 mJy beam$^{-1}$ with a $\lesssim 1^{\prime}$ restoring beam. Radio sources were catalogued using the Background And Noise Estimation tool bane, to measure the direction-dependent noise across the image, and aegean to perform source finding and characterisation (Hancock et al., 2012, 2018; both publicly available on GitHub: https://github.com/PaulHancock/Aegean; this work made use of aegean version 2.0.2). Using the GLEAM catalogue, sources above 10$\sigma$ in both MIDAS ES and GLEAM were used to correct the MIDAS ES flux-scale. For 887 such sources in GAMA 23, correction factors were derived based on the integrated flux density ratio between MIDAS ES and GLEAM. The correction factors followed a Gaussian distribution, and indicated agreement with GLEAM within 5%. Given the 8% uncertainty in the GLEAM flux density scale (Sect. 2.1.1), we prescribe an 8% uncertainty in the MIDAS ES flux density scale.
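The $\sqrt{t}$ behaviour quoted above is the familiar scaling of Gaussian noise when averaging $N$ equal-length snapshots; a quick sketch using the numbers from this section (note the final 0.9 mJy beam$^{-1}$ image also benefits from self-calibration):

```python
import math

def stacked_rms(snapshot_rms, n_snapshots):
    """RMS of an equally weighted stack of n snapshots with independent
    Gaussian noise: sigma_snapshot / sqrt(n)."""
    return snapshot_rms / math.sqrt(n_snapshots)

# 54 two-minute snapshots at ~8 mJy/beam each:
print(round(stacked_rms(8.0, 54), 2))  # ~1.09 mJy/beam before self-calibration
```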
#### 2.1.3 EMU Early Science (887 MHz) In 2019 ASKAP delivered the first observations conducted with the full 36-antenna array configuration. The design of ASKAP opens up a unique parameter space for studying the extra-galactic radio source population. With a maximum baseline of 6 km, ASKAP produces images with $\sim 10^{\prime\prime}$ resolution at 887 MHz. The shortest baseline is 22 m, recovering maximum angular scales of $\sim 0.8^{\circ}$. As part of the Evolutionary Map of the Universe (EMU; Norris, 2011) Early Science, the GAMA 23 field was observed at 887 MHz and made publicly available on the CSIRO ASKAP Science Data Archive (CASDA; https://data.csiro.au/collections/domain/casdaObservation/search/). Details of the reduction are briefly summarised here. These data were reduced by the ASKAP collaboration using the ASKAPsoft data-reduction pipeline (Whiting et al. in prep; https://www.atnf.csiro.au/computing/software/askapsoft/sdp/docs/current/pipelines/introduction.html) on the Galaxy supercomputer hosted by the Pawsey Supercomputing Centre. PKS B1934-638 was used to perform bandpass and flux calibration for each of the 36 unique beams. Bandpass solutions were applied to the target fields. The final images are restored with a $10.55^{\prime\prime}\times 7.82^{\prime\prime}$ (BPA $=86.8^{\circ}$) elliptical beam, and achieve an RMS of $\sigma\approx 34$–$45\,\mu$Jy beam$^{-1}$. Henceforth we refer to these observations as EMU-ES. Table 2: Details of the additional ATCA data collected here under project code C3335, PI: B. Quici. The columns, from left to right, detail the configuration used to conduct observations, the date observations were conducted, the central frequency of the receiver band, the bandwidth available within each band, the approximate time spent per source in each configuration, the secondary calibrator observed, the average noise per pointing, and the shape of the restoring beam.
Note, primary calibrator PKS B1934-638 was observed for all observations.

Config. | Date observed | Frequency [GHz] | Bandwidth [GHz] | Duration [min] | Secondary calibrator | Noise [mJy beam$^{-1}$] | Beam shape Bmaj[′′], Bmin[′′], PA[∘]
---|---|---|---|---|---|---|---
H168 | 18/10/19 | 2.1 | 2 | 22 | PKS B2259-375 | 0.7 | 104, 127, 0
H75 | 24/10/19 | 5.5, 9.0 | 2, 2 | 25 | PKS B2254-367 | 0.22, 0.14 | (88, 110, 0), (54, 67, 0)

#### 2.1.4 NVSS (1.4 GHz) We use observations conducted by the National Radio Astronomy Observatory (NRAO) using the Very Large Array (VLA; Thompson et al., 1980) to sample an intermediate frequency range. The NRAO VLA Sky Survey (NVSS; Condon et al., 1998; catalogue publicly available on VizieR: https://vizier.u-strasbg.fr/viz-bin/VizieR?-source=%20NVSS) surveys the entire sky down to Dec $=-40^{\circ}$ at 1400 MHz. Observations for NVSS were collected predominantly in the D configuration, while the DnC configuration was used for Southern declinations. Final image products were restored using a circular synthesized beam of $45^{\prime\prime}\times 45^{\prime\prime}$. At 1400 MHz, NVSS achieves an RMS of $\sigma=0.45\,$mJy beam$^{-1}$. #### 2.1.5 GLASS (5.5 & 9.5 GHz) The GAMA Legacy ATCA Sky Survey (GLASS; Huynh et al. in prep) offers simultaneous, high-frequency (5.5, 9.5 GHz) observations of GAMA 23 observed by the ATCA. Observations for GLASS were conducted over seven semesters between 2016–2020 (PI: M. Huynh, project code: C3132). Data at each frequency were acquired with a 2 GHz bandwidth, made possible by the Compact Array Broadband Backend (Wilson et al., 2011), with the correlator set to a 1 MHz spectral resolution. Observations for GLASS were conducted in two separate ATCA array configurations, the 6A and 1.5C configurations, contributing 69% and 31% of the total awarded time, respectively.
The shortest interferometer spacing is 77 m in the 1.5C configuration, providing a largest recoverable angular scale of $146^{\prime\prime}$ at 5.5 GHz. As part of the observing strategy, GLASS was divided into six 8.31 deg$^2$ regions (regions A–F), of which region D ($\mathrm{RA}=345^{\circ}$, $\mathrm{Dec}=-33.75^{\circ}$) was observed, reduced and imaged first. For this reason, this paper focuses only on region D. Processing and data reduction were conducted using the Multichannel Image Reconstruction, Image Analysis and Display (MIRIAD) software package (Sault et al., 1995), similar to the method outlined by Huynh et al. (2015, 2020). The 1435 region D pointings are restored with the Gaussian fit of the dirty beam, convolved to a common beam of $4^{\prime\prime}\times 2^{\prime\prime}$ (BPA $=0^{\circ}$) at 5.5 GHz, and achieve an RMS of $\sim 24\,\mu$Jy beam$^{-1}$. A similar process at 9.5 GHz results in a $3.4^{\prime\prime}\times 1.7^{\prime\prime}$ (BPA $=0^{\circ}$) synthesized beam and achieves an RMS of $\sim 40\,\mu$Jy beam$^{-1}$. Although the same theoretical sensitivity is expected at 5.5 GHz and 9.5 GHz, the sparse overlap in adjacent pointings, larger phase calibration errors, and increased radio frequency interference (RFI) all result in a drop in sensitivity. Henceforth, we refer to GLASS observations conducted at 5.5 GHz and 9.5 GHz as GLASS 5.5 and GLASS 9.5, respectively. #### 2.1.6 uGMRT legacy observations (399 MHz) As part of the GLASS legacy survey (Sect. 2.1.5), the uGMRT has observed GAMA 23 in band-3 (250–500 MHz), centered at 399 MHz. The project 32_060 (PI: Ishwara Chandra) was awarded 33 hours to cover 50 contiguous pointings spanning a 50 square-degree region. Observations were conducted in a semi-snapshot mode, with $\approx 30$ minutes per pointing distributed over three 10-minute scans. In band-3, the wide-band correlator collects a bandwidth of 200 MHz divided into 4,000 fine channels.
Data reduction was conducted using a Common Astronomical Software Application (CASA; McMullin et al., 2007) pipeline (available at http://www.ncra.tifr.res.in/~ishwar/pipeline.html; it makes use of CASA version 5.1.2-4). Data reduction followed standard practices such as data flagging, bandpass and gain calibration, application of solutions to target scans, imaging and self-calibration (see Ishwara-Chandra et al. 2020 for details). The image is restored with a $15.8^{\prime\prime}\times 6.71^{\prime\prime}$ (BPA $=9.5^{\circ}$) beam, and achieves a best RMS of $\sim$100 $\mu$Jy beam$^{-1}$. Note that several bright sources throughout the field adversely impact the data reduction, resulting in large spatial variations in the RMS. #### 2.1.7 Low resolution ATCA observations (2.1, 5.5 & 9 GHz) Due to their power-law spectral energy distributions, the integrated luminosities of radio galaxy lobes decrease with increasing frequency (Sect. 1). Given that remnants can display ultra-steep radio spectra, this presents a sensitivity challenge for their detection. For a survey such as GLASS, which has high ($\theta\sim 5^{\prime\prime}$) resolution (Sect. 2.1.5), resolution bias further exacerbates this problem: diffuse low-surface-brightness regions escape detection with greater ease, resulting in an under-estimation of the integrated flux density. To combat the resolution bias suffered by GLASS, we carried out low-resolution observations with ATCA. The project C3335 (PI: B. Quici) was awarded 14 hours to conduct low-resolution observations of each remnant identified in this work, at 2.1, 5.5 and 9 GHz. Observations at 2.1 GHz ($LS$-band) were conducted in the H168 configuration; observations at 5.5 and 9 GHz ($CX$-band) were conducted in the H75 configuration. The minimum and maximum baselines within each configuration are 61 m and 192 m for H168, and 31 m and 89 m for H75, respectively (excluding baselines formed with antenna CA06).
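The largest recoverable angular scales quoted throughout Sect. 2.1 follow from the interferometric rule of thumb $\theta_{\mathrm{max}}\approx\lambda/B_{\mathrm{min}}$, where $B_{\mathrm{min}}$ is the shortest baseline; a minimal sketch:

```python
import math

C = 299_792_458.0                       # speed of light [m/s]
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def largest_angular_scale(freq_hz, min_baseline_m):
    """Approximate largest recoverable angular scale [arcsec],
    theta_max ~ lambda / B_min."""
    return (C / freq_hz) / min_baseline_m * RAD_TO_ARCSEC

# GLASS: 77 m shortest spacing at 5.5 GHz (Sect. 2.1.5):
print(round(largest_angular_scale(5.5e9, 77.0)))                # 146 arcsec
# ASKAP: 22 m shortest baseline at 887.5 MHz (Sect. 2.1.3):
print(round(largest_angular_scale(887.5e6, 22.0) / 3600.0, 2))  # 0.88 deg
```

The second value is consistent with the $\sim 0.8^{\circ}$ quoted in Sect. 2.1.3.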
Bandpass, gain and flux calibration were performed using PKS B1934-638. Due to the small angular separation between targets, the same secondary calibrator was used for all targets in a given band. At 2.1 GHz, PKS B2259-375 was used for phase calibration. At 5.5 and 9 GHz, phase calibration was performed with PKS B2254-367. To maximise $uv$ coverage, each target was observed on a rotating block of length $\sim$30 minutes, with approximately 2 minutes allocated per target per block (the time per source was varied slightly to accommodate fainter/brighter sources). Secondary calibrators were observed twice per block to ensure stable phase solutions. Data reduction was performed using the CASA software package (version 5.1.2-4), and followed standard data reduction practices. As part of preliminary flagging, the data are ‘clipped’, ‘shadowed’ and ‘quacked’ using the flagdata task, in order to flag zero values, shadowed antennas and the first five seconds of each scan, respectively. Forty edge channels of the original 2049 are also flagged due to bandpass roll-off (Wilson et al., 2011). Again using the flagdata task, the uncalibrated data are then automatically flagged for RFI using mode=‘tfcrop’, which calculates flags based on a time and frequency window. The data are manually inspected in amplitude versus channel space, to ensure RFI is adequately flagged. Observations conducted in the $LS$-band and $CX$-band were split into four and eight sub-bands, respectively. Calibration was performed per sub-band per pointing. To perform calibration, the complex gains and bandpass are solved for first using the primary calibrator. Complex gains and leakages are solved for next using the secondary calibrator. After applying a flux-scale correction based on PKS B1934-638, the calibration solutions were copied individually to each target scan.
A second round of automatic RFI flagging is performed with flagdata using mode=‘rflag’, which is suited to calibrated data. The primary and secondary calibrators, as well as each target scan, are flagged in this way. In total, approximately 52% and 15% of the available bandwidth was flagged due to RFI in the $LS$ and $CX$ bands, respectively. Within the hybrid configurations the first five antennas, CA01–CA05, provide the dense packing of short (10–100 m) spacings. Antenna CA06 is fixed and provides much larger spacings of $\sim$4500 m. Given that this results in a large gap in the $uv$ coverage, all baselines formed with antenna CA06 are excluded to achieve a well-behaved point-spread-function. The average noise properties and restoring beams at 2.1, 5.5 and 9 GHz are, respectively: $\sigma\sim 0.7$ mJy beam$^{-1}$ and $\theta=104^{\prime\prime}\times 127^{\prime\prime}$; $\sigma\sim 0.22$ mJy beam$^{-1}$ and $\theta=88^{\prime\prime}\times 110^{\prime\prime}$; and $\sigma\sim 0.14$ mJy beam$^{-1}$ and $\theta=54^{\prime\prime}\times 67^{\prime\prime}$. We use any unresolved GLASS sources present within the target scans to evaluate a calibration uncertainty. At 5.5 GHz, the ratio of the integrated flux density between the low-resolution ATCA observations and GLASS was consistently within 3%. We use this value as the absolute flux-scale uncertainty. Details of these observations are summarised in Table 2. A comparison of these observations and GLASS, at 5.5 GHz, is presented in Figure 2. Note that, due to persistent RFI in the $LS$-band, we were unable to detect most of our targets at this frequency. Figure 2: A 5.5 GHz view of the radio source MIDAS J225337$-$344745. The image offers a comparison between (i) GLASS (Sect. 2.1.5), presented as the gray-scale image on a linear stretch, and (ii) the low-resolution ATCA observations (Sect. 2.1.7), represented by the solid white contours.
Contour levels are set at [5, 6.3, 9.6, 17.3, 35.5]$\times\sigma$, where $\sigma=210\,\mu$Jy beam$^{-1}$ is the local RMS. ### 2.2 Optical/near-infrared data #### 2.2.1 VIKING near-infrared imaging Observed with the Visible and Infrared Survey Telescope for Astronomy (VISTA), the VISTA Kilo-degree INfrared Galaxy survey (VIKING) provides medium-deep observations over $\sim$1500 deg$^2$ across the $Z$, $Y$, $J$, $H$ & $K_{s}$ bands, achieving 5$\sigma$ AB magnitude limits of 23.1, 22.3, 22.1, 21.5 and 21.2, respectively. VIKING imaging is used to perform both an automated and a manual host galaxy identification (see Sect. 3). #### 2.2.2 GAMA 23 photometry catalogue The GAMA 23 photometry catalogue (Bellstedt et al., 2020) contains measured properties and classifications of each object catalogued by the ProFound source finding routine (Robotham et al., 2018; publicly available on GitHub: https://github.com/asgr/ProFound), which uses a stacked $r+Z$ image to perform initial source finding. Approximately 48,000 objects have spectroscopic redshifts provided by the Anglo-Australian Telescope (AAT), and the survey is $\sim$95% spectroscopically complete up to an $i$-band magnitude of 19.5 (Liske et al., 2015). For objects without spectroscopic redshifts, we use the near-UV to far-IR photometry (available in the photometry catalogue) to obtain photometric redshifts using a public photometric redshift code (EAZY; Brammer et al., 2008; publicly available on GitHub: https://github.com/gbrammer/eazy-photoz). ## 3 Methodology ### 3.1 Sample construction Table 3: Summary of the sample criteria discussed in Sect. 3. Steps 1–4 describe the radio galaxy sample selection. Step 5 describes the active/remnant classification. Step 6 describes host galaxy association. †Step 3 is broken into two parts, denoted by steps 3.1 and 3.2, for which the resulting sample size is the sum total.

No. | Sample step | Criteria | Sample size
---|---|---|---
1 | Limit footprint within GLASS region D | $343^{\circ}\leq\mathrm{RA}\leq 347^{\circ}$, $-35^{\circ}\leq\mathrm{Dec}\leq-32.5^{\circ}$ | 676
2 | Flux-density cut | $S_{\mathrm{216MHz}}>10\,$mJy | 446
3 | Angular size cut† | $\theta\geq 25^{\prime\prime}$ | 109
3.1 | | $\theta_{\mathrm{GLASS}}\geq 25^{\prime\prime}$ | (82)
3.2 | | $\theta_{\mathrm{GLASS}}<25^{\prime\prime}$ & $\theta_{\mathrm{EMU-ES}}\geq 25^{\prime\prime}$ | (27)
4 | AGN dominated | Remove radio sources tracing the optical/near-IR galaxy component | 106
5 | Activity status | Radio core in GLASS 5.5 (Active) | 94
 | | No radio core in GLASS 5.5 (Remnant cand.) | 10
6 | Host identification (Active) | Visually match radio core with VIKING galaxy | 26 ($z_{s}$), 54 ($z_{p}$), 14 (no $z$)
 | Host identification (Remnant) | See Sect. 4 | 3 ($z_{s}$), 7 ($z_{p}$)

Our methodology for compiling a sample of radio galaxies, and classifying their activity status, is presented below. We begin our selection at low frequencies, where steep-spectrum radio lobes are naturally brighter. Not only does MIDAS ES provide access to such low frequencies, but its relatively low spatial resolution also provides the sensitivity to low-surface-brightness emission that is required to recover emission from diffuse, extended radio sources. Genuine remnant radio galaxies will not display radio emission from the core associated with the AGN jet activity at the centre of the host galaxy. As such, we classify any radio galaxy as ‘active’ if positive radio emission is observed from the radio core. Similarly, we prescribe a ‘candidate remnant’ status to any radio galaxy that demonstrates an absence of radio emission from the core. True emission from the radio core will be unresolved even on parsec scales, meaning observations with high spatial resolution are ideal for their detection. This also ensures that the emission from the radio core will not be blended with the backflow of the radio lobes.
As demonstrated by Mahatma et al. (2018), sensitive observations are equally important to enable the detection of radio cores. GLASS addresses both of these considerations by providing sensitive ($\sim 30\,\mu$Jy beam$^{-1}$), high resolution ($4^{\prime\prime}\times 2^{\prime\prime}$) radio observations at 5.5 GHz. Our sample is constructed by following the steps outlined below (see Table 3), and is presented in full as a supplementary electronic table (see Table 9 for column descriptions). 1. Limit the search footprint within GLASS region D. The footprint, within which radio galaxies are selected, is constrained to $343^{\circ}\leq\mathrm{RA}\leq 347^{\circ}$, $-35^{\circ}\leq\mathrm{Dec}\leq-32.5^{\circ}$. This excludes the outer regions of higher noise from the GLASS mosaic, thus ensuring the GLASS noise levels range between 25–35 $\mu$Jy beam$^{-1}$ at 5.5 GHz. Maintaining near-uniform noise levels across the field reduces the bias of selecting brighter radio cores in higher-noise regions. Applying this footprint to the MIDAS ES component catalogue selects 676 components. The resulting sky coverage within this footprint is 8.31 deg$^2$. 2. Flux-density cut. A 10 mJy flux density cut at 216 MHz is imposed on the sample. Given the low angular resolution, this ensures the MIDAS ES detections are robust, e.g. greater than 10$\sigma$ for an unresolved source. We identify 446 radio sources brighter than 10 mJy at 216 MHz. 3. Angular size cut. Our decision to impose a minimum size constraint was motivated by two factors: firstly, to minimise blending of the radio core with the radio lobes, and secondly, to allow for an interpretation of the radio source morphology. By imposing a minimum $25^{\prime\prime}$ angular size constraint, we ensure a minimum of six GLASS 5.5 synthesized beams spread across each source.
For each of the 446 radio sources, we produce $2^{\prime}\times 2^{\prime}$ cutouts (generated using the Astropy Cutout2D module) centered at the catalogued MIDAS ES source position. EMU-ES, GLASS 5.5, GLASS 9.5 and VIKING $K_{s}$-band cutouts are generated, and contours of the radio emission are overlaid onto the VIKING $K_{s}$-band image. Henceforth, we refer to these as image overlays. While GLASS 5.5 has the advantage in spatial resolution, EMU-ES has a significantly better brightness-temperature sensitivity. This consideration is important since faint radio lobes, while seen in EMU-ES, may go undetected in GLASS 5.5. As such, a first pass is conducted by identifying any radio source with an angular size greater than $25^{\prime\prime}$ in GLASS 5.5 (e.g. $\theta_{\mathrm{GLASS~{}5.5}}\geq 25^{\prime\prime}$). Due to the manageable size of the sample, we do this step manually by visually matching the correct components of each radio source, and measuring the linear angular extent across each radio source. We identify 82 radio sources this way. For radio sources with $\theta_{\mathrm{GLASS~{}5.5}}<25^{\prime\prime}$, we use the aforementioned image overlays to identify any radio sources for which the low-surface-brightness lobes escape detection in GLASS 5.5. For such cases, the angular size is measured using EMU-ES, and sources are accepted if $\theta_{\mathrm{EMU-ES}}\geq 25^{\prime\prime}$ (e.g. see Figure 3). An additional 27 radio sources are identified this way, giving a total of 109 radio sources larger than $25^{\prime\prime}$. For consistency, the angular size of each radio source is re-measured using EMU-ES, by considering the largest angular size subtended within the footprint of radio emission above 5$\sigma$. 4. AGN dominated. As a result of their sensitivity to low-surface-brightness emission, both MIDAS ES and EMU-ES are able to detect radio emission from a typical face-on spiral galaxy.
Radio emission from these objects is not driven by a radio-loud AGN, and therefore these sources need to be removed from the sample. While the radio emission of virtually all radio galaxies extends well beyond the host galaxy, radio emission from spiral galaxies is associated only with the optical component of the galaxy. Thus, using the aforementioned image overlays, we remove three radio sources that trace the optical/near-infrared component of the host galaxy, as revealed with VIKING $K_{s}$-band imaging. We provide an example in Figure 4. The remaining sample contains 106 extended radio galaxies, forming the parent sample for this analysis. 5. Activity status. To constrain the nuclear activity associated with an AGN, we use GLASS 5.5 imaging to search for evidence of a radio core. For a successful radio core detection, we require a compact object with a peak flux density greater than 3$\sigma$. We use bane to produce an RMS image associated with each GLASS 5.5 image cutout. Only pixel values above 3$\sigma$ are considered. We use the orientation and morphology of the radio lobes as a rough constraint on the potential position of the radio core. Following this method, we classify 94 radio galaxies as active, and a further 10 as candidate remnant radio galaxies. We emphasise that this method selects only candidate remnant radio galaxies, since the existence of a faint, undetected radio core is still possible. For each remnant candidate we place a 3$\sigma$ upper limit on the peak flux density of the core. Here, $\sigma$ is measured by drawing a circle equivalent to four GLASS synthesized beams at the position of the presumed host galaxy and measuring the RMS within this region. 6. Host identification. For radio sources with a core, we use a $1^{\prime\prime}$ search radius to cross-match the position of the radio core with the GAMA 23 photometry catalogue.
Out of 94 such sources, the hosts of 80 radio galaxies are identified this way, of which 26 and 54 have spectroscopic and photometric redshifts, respectively. The hosts of the remaining 14 sources are either extremely faint in $K_{s}$-band, or remain completely undetected in VIKING, potentially due to lying at higher redshift. Host identification for remnant candidates is discussed on a per-source basis in Sect. 4, as this is a complicated and often ambiguous procedure. For each radio source we also use WISE (Wright et al., 2010) 3.4 $\mu$m and 4.6 $\mu$m images to determine whether any potential hosts were not present in the VIKING imaging; however, this did not reveal any new candidates. Figure 3: Example of the radio source MIDAS J230304-323228 satisfying the criterion: $\theta_{\mathrm{GLASS}}<25^{\prime\prime}$ & $\theta_{\mathrm{EMU-ES}}\geq 25^{\prime\prime}$. The low-surface-brightness lobes escape detection in GLASS, resulting in an incomplete morphology. The contours represent EMU-ES (navy blue), GLASS 5.5 (cyan) and GLASS 9.5 (magenta), with levels set at [3,4,5,7,10,15,25,100]$\times\sigma$, where $\sigma$ is the local RMS of 43, 26 and 40 $\mu$Jy beam$^{-1}$, respectively. Contours are overlaid on a linear-stretch VIKING $K_{s}$-band image. The seemingly absent hotspots would imply these are remnant lobes; however, the presence of a radio core means this source is classified as ‘active’. The true nature of this source may be a restarted radio galaxy, although the lack of any resolved structure around the core is puzzling. Figure 4: Example of a non-AGN-dominated radio source, MIDAS J225802-334432, excluded from the sample. Analysis of the radio morphology shows that the radio emission traces the optical component of the host galaxy. The contours represent EMU-ES (navy blue), GLASS 5.5 (cyan) and GLASS 9.5 (magenta), with levels set at [3,4,5,7,10,15,25,100]$\times\sigma$, where $\sigma$ is the local RMS of 45, 28 and 41 $\mu$Jy beam$^{-1}$, respectively.
Contours are overlaid on a linear-stretch VIKING $K_{s}$-band image. The radio emission is hosted by IC 5271 (ESO 406-G34).

### 3.2 Collating flux densities

For each of the 104 radio galaxies, integrated flux densities are compiled from the data described in Sect. 2.1. To compile the integrated flux densities at 119, 154 and 186 MHz (GLEAM SGP), 216 MHz (MIDAS ES), and 1400 MHz (NVSS), we use the appropriate source catalogues described in their relevant data sections. As a result of the high spatial resolution at 399 MHz (uGMRT observations), 887 MHz (EMU-ES) and 5.5 GHz (GLASS 5.5), sources are often decomposed into multiple components. To ensure the integrated flux densities are measured consistently across these surveys, we convolve the image cutouts of each source to a $54^{\prime\prime}$ circular resolution (i.e. the major axis of the MIDAS synthesized beam). Integrated flux densities are then extracted using aegean by fitting a Gaussian to the radio emission. For each remnant candidate observed with ATCA at low resolution at 2.1, 5.5 and 9 GHz, we use aegean to measure the integrated flux densities. The 5.5 GHz integrated flux density reported for each remnant candidate is taken exclusively from these low-resolution ATCA observations, not GLASS. Finally, for each integrated flux density measurement, uncertainties are calculated as the quadrature sum of the measurement uncertainty and the absolute flux-scale uncertainty.

### 3.3 Radio SED Fitting

To better understand their energetics, we model the integrated radio spectrum of each remnant candidate.
We use a standard power-law model of the radio continuum spectrum (Eqn 1), where the spectral index, $\alpha$, and the flux normalization, $S_{0}$, are constrained by the fitting, and $\nu_{0}$ is the frequency at which $S_{0}$ is evaluated: $S_{\nu}~{}=~{}S_{0}~{}(\nu/\nu_{0})^{\alpha}$ (1) Given that we can expect to see evidence of curvature in the spectra, especially over such a large frequency range, we also fit a generic curved power-law model (Eqn 2). Here, $q$ parameterizes the curvature in the spectrum, where $q<0$ describes a convex spectrum. For optically-thin synchrotron emission from radio lobes, $q$ typically ranges within $-0.2\leq q\leq 0$. Although $q$ is not physically motivated, Duffy & Blundell (2012) show that it can be related to physical quantities of the plasma lobes such as the energy and magnetic field strength. $S_{\nu}~{}=~{}S_{0}~{}\nu^{\alpha}e^{q(\mathrm{ln}\nu)^{2}}$ (2) Fitting of each model is performed in Python using the curve_fit module; fitted models are presented in Figures 5(a)$-$8(b). Additionally, we calculate a Bayesian Information Criterion (BIC) for each model. To compare the models, we calculate $\Delta\mathrm{BIC}=\mathrm{BIC}_{1}-\mathrm{BIC}_{2}$, where $\Delta\mathrm{BIC}>0$ indicates a preference for the second model (and, similarly, $\Delta\mathrm{BIC}<0$ a preference for the first). Weak model preference is implied by $0<|\Delta\mathrm{BIC}|<2$, whereas a model is strongly preferred if $|\Delta\mathrm{BIC}|>6$. In Table 4, we calculate $\Delta\mathrm{BIC}=\mathrm{BIC}_{\mathrm{power\text{-}law}}-\mathrm{BIC}_{\mathrm{curved\text{-}power\text{-}law}}$. Table 4: Summarized radio properties of the selected remnant candidates. $S_{216}$ gives the 216 MHz integrated flux density. LAS gives the largest angular size measured from EMU-ES. $S_{\mathrm{core}}$ gives the 5.5 GHz upper limit placed on the radio core peak flux density using GLASS. $\alpha_{\mathrm{fit}}$ denotes the spectral index fitted by each model.
The curvature term modelled by the curved power-law model is represented by $q$. As per Sect. 3.3, the $\Delta$BIC is calculated between each model and presented in the final column. A reduced chi-squared ($\chi^{2}_{\mathrm{red}}$) is also evaluated for each model.

MIDAS Name | Fig. | $S_{216}$ (mJy) | LAS (′′) | $S_{\mathrm{core}}$ ($\mu$Jy beam$^{-1}$) | $\alpha_{\mathrm{fit}}$ (PL) | $\chi^{2}_{\mathrm{red}}$ (PL) | $\alpha_{\mathrm{fit}}$ (CPL) | $q$ (CPL) | $\chi^{2}_{\mathrm{red}}$ (CPL) | $\Delta$BIC
---|---|---|---|---|---|---|---|---|---|---
J225522$-$341807 | 5(a) | $24.7\pm 1.9$ | 100 | $<73$ | $-1.40\pm 0.09$ | 2.2 | $-1.50\pm 0.10$ | $-0.11\pm 0.08$ | 1.5 | $-13.2$
J225607$-$343212 | 5(b) | $18.3\pm 2.0$ | 83 | $<72$ | $-1.10\pm 0.04$ | 1.2 | $-1.20\pm 0.01$ | $-0.073\pm 0.01$ | 0.1 | 3.1
J225608$-$341858 | 6(a) | $14.7\pm 2.0$ | 60 | $<72$ | $-0.86\pm 0.05$ | 0.3 | $-0.91\pm 0.1$ | $-0.04\pm 0.07$ | 0.3 | $-2.2$
J225337$-$344745 | 6(b) | $170.6\pm 5.8$ | 105 | $<74$ | $-0.92\pm 0.05$ | 12.1 | $-0.89\pm 0.02$ | $-0.12\pm 0.02$ | 0.8 | 23.8
J225543$-$344047 | 6(c) | $192.7\pm 5.8$ | 84 | $<117$ | $-0.87\pm 0.01$ | 0.4 | $-0.87\pm 0.02$ | $-0.003\pm 0.01$ | 0.5 | $-2.8$
J225919$-$331159 | 7(a) | $36.6\pm 2.0$ | 70 | $<60$ | $-0.86\pm 0.02$ | 1.0 | $-0.89\pm 0.02$ | $0.035\pm 0.01$ | 0.5 | $-6.9$
J230054$-$340118 | 7(b) | $113.4\pm 6.7$ | 106 | $<86$ | $-0.73\pm 0.03$ | 2.8 | $-0.72\pm 0.03$ | $-0.027\pm 0.02$ | 2.5 | $-4.1$
J230104$-$334939 | 7(c) | $55.4\pm 2.2$ | 37 | $<57$ | $-0.72\pm 0.03$ | 2.0 | $-0.69\pm 0.04$ | $-0.023\pm 0.01$ | 2.0 | 2
J230321$-$325356 | 8(a) | $153.6\pm 6.0$ | 93 | $<74$ | $-0.86\pm 0.01$ | 0.9 | $-0.85\pm 0.01$ | $-0.019\pm 0.01$ | 0.5 | 6.4
J230442$-$341344 | 8(b) | $198.1\pm 6.1$ | 50 | $<84$ | $-1.00\pm 0.02$ | 1.2 | $-1\pm 0.02$ | $-0.008\pm 0.02$ | 1.5 | $-1.3$

## 4 Remnant radio galaxy candidates

We present and discuss each of the 11 candidate remnant radio galaxies below. Seven are found to display hotspots in GLASS.
Image overlays and the radio continuum spectrum are presented in Figure 5. General radio properties are presented in Table 4.

### 4.1 Remnant candidates without hotspots

#### 4.1.1 MIDAS J225522-341807

Radio properties. Figure 5(a) shows extremely relaxed lobes and an amorphous radio morphology. No compact structures that would indicate hotspots are observed. The average 154 MHz surface brightness is $\sim 32\,$mJy arcmin$^{-2}$, satisfying the low-SB criterion (SB $<50$ mJy arcmin$^{-2}$) employed by Brienza et al. (2017). The diffuse radio emission is undetected by the uGMRT observations, NVSS, GLASS, as well as the 2.1 GHz and 9 GHz ATCA follow-up observations. Unsurprisingly, we find that the source spectrum appears ultra-steep at low frequencies, and demonstrates curvature ($q=-0.11$) across the observed range of frequencies. The radio properties point towards an aged remnant. Host galaxy. Identification of the host galaxy is rather challenging here, as the amorphous radio morphology provides few constraints on the host position. No clear host galaxy is seen at the centre of the radio emission; however, this can easily be explained if the radio lobes have drifted. We approximate the central position of the radio emission by taking the centre of an ellipse drawn to best describe the radio source. G1 ($z_{p}=0.474$) is located 10.2′′ from the radio center, corresponding to a 61 kpc offset. G2 ($z_{p}=0.433$) is located 14.1′′ from the radio center, corresponding to an 80 kpc offset. G3 ($z_{p}=0.294$) is located 23′′ from the radio center, corresponding to a 102 kpc offset. Without any additional information, we take G1 as the likely host galaxy. We note that G4 ($z_{p}=0.41$) shows compact radio emission at 887 MHz; however, it is unclear whether this is related to the extended structure. We include the radio spectrum arising from G4 in Figure 5(a), and note that it contributes approximately 5% to the total radio flux density at 887 MHz.
If G4 is unrelated, its radio spectrum should be subtracted from the integrated spectrum of MIDAS J225522-341807.

#### 4.1.2 MIDAS J225607-343212

Radio properties. Figure 5(b) shows a pair of relaxed radio lobes, with a diffuse bridge of emission connecting each lobe along the jet axis. The 154 MHz average surface brightness is calculated as $\sim 26$ mJy arcmin$^{-2}$, satisfying the low surface brightness criterion. The edge-brightened regions likely represent the expanded hotspots of the previously active jet, similar to what is observed in B2 0924+30 (Shulevski et al., 2017). The source is undetected by the uGMRT observations, GLASS, as well as the ATCA follow-up at 2.1 GHz and 9 GHz. Curvature is evident in the spectrum, which becomes ultra-steep above 1.4 GHz. Host galaxy. Along the projected centre of the jet axis, a collection of three potential host galaxies exists within a $\sim 7^{\prime\prime}$ aperture. The redshift of each host, G1 ($z_{s}=0.31307$), G2 ($z_{p}=0.361$), G3 ($z_{s}=0.27867$), suggests they are all at similar redshift, and thus choosing among them would not result in an appreciable difference to the corresponding physical size and radio power. We take G1 as the most likely host as it lies closest to the projected centre of the radio lobes.

#### 4.1.3 MIDAS J225608-341858

Radio properties. Figure 6(a) shows two relaxed, low-surface-brightness lobes that are asymmetrical in shape. The flattened ‘pancake’-like morphology of the Northern lobe can be explained by the buoyant rising of the lobes (Churazov et al., 2001). The surface brightness of each lobe is approximately $43$ mJy arcmin$^{-2}$, satisfying the low-SB criterion employed by Brienza et al. (2017). The source is undetected by the follow-up 2.1 GHz and 9 GHz ATCA observations. The spectrum seems consistent with a single power law ($\alpha=-0.86$), and the detection at 5 GHz is too weak to determine whether spectral curvature is present at higher frequencies. Host galaxy.
G1 ($z_{p}=0.57$) lies $4.8^{\prime\prime}$ away from the radio center, corresponding to a 32 kpc offset. G2 ($z_{p}=0.321$) lies $13^{\prime\prime}$ away from the radio center, corresponding to a 61 kpc offset. This assumes that the lobes are equidistant from the host, which is not always the case. However, we retain G1 as the likely host galaxy.

### 4.2 Candidates with hotspots

#### 4.2.1 MIDAS J225337-344745

Radio properties. Figure 6(b) shows a typical low-resolution FR-II radio galaxy, as evidenced by the edge-brightened morphology. The average 154 MHz surface brightness is $160$ mJy arcmin$^{-2}$. The source is firmly detected by the ATCA follow-up at all frequencies, revealing that the spectrum is highly curved ($q=-0.12$) over the observed frequency range and only becomes ultra-steep at $\nu\gtrsim 2$ GHz. An evaluation of its spectral curvature reveals SPC $=0.78\pm 0.17$, suggesting the lobes are remnant. The properties of the spectrum strongly suggest a lack of energy supply to the lobes; however, GLASS 5.5 reveals compact emitting regions at the edges of each lobe that may suggest recent energy injection. We defer a detailed analysis of this source to Sect. 5.3. Host galaxy. The radio lobes are unambiguously associated with galaxy G1 ($z_{s}=0.2133$) based on its close proximity to the geometric centre of the lobes.

#### 4.2.2 MIDAS J225543-344047

Radio properties. Figure 6(c) demonstrates an elongated, ‘pencil-thin’ radio galaxy with an edge-brightened FR-II morphology. GLASS detects only the brightest and most compact emitting regions, and misses the lower-surface-brightness emission seen at 887 MHz. The radio source is detected in all but the 2.1 GHz ATCA follow-up observations. The radio spectrum is well modelled by a power-law ($\alpha=-0.87$), and shows no evidence of curvature up to 9 GHz. Host galaxy. G1 ($z_{p}=1.054$) lies closest to the projected radio center.
G2 ($z_{p}=1.019$) and G3 ($z_{p}=1.342$) are also plausible hosts; however, the difference in implied redshift would not result in an appreciable change to the derived physical size and radio power.

#### 4.2.3 MIDAS J225919-331159

Radio properties. Figure 7(a) demonstrates a pair of lobes with compact emitting regions seen by GLASS. The radio spectrum is well approximated as a power-law ($\alpha=-0.86$) with no evidence of spectral curvature over the observed range of frequencies. Host galaxy. G1 ($z_{p}=0.504$) is taken as the likely host galaxy due to its central position between the lobes.

#### 4.2.4 MIDAS J230054-340118

Radio properties. Figure 7(b) shows a peculiar radio morphology; while the western lobe shows bright emitting regions in GLASS, the counter lobe is completely diffuse and does not show a hotspot. It is unclear what is causing this. Host galaxy. The galaxies G1 ($z_{p}=0.32$) and G2 ($z_{s}=0.306$) are considered as host candidates, given their position between the lobes. We find that G2 is not associated with any galaxy group; if the lobe asymmetry is due to an environmental effect, we would not expect a field-galaxy host. We cannot comment on whether G1 is associated with a group, given the lack of a spectroscopic confirmation.

#### 4.2.5 MIDAS J230104-334939

Radio properties. Figure 7(c) shows a typical FR-II radio galaxy, as implied by the edge-brightened morphology. The source exhibits clear hotspots in each lobe, as seen by GLASS 5.5 and GLASS 9.5. The source is detected in all but the 2.1 GHz ATCA follow-up. Modelling the radio continuum spectrum gives a spectral index of approximately $\alpha=-0.7$, revealing no significant energy losses. The $\Delta$BIC offers tentative evidence for some spectral curvature; however, this may just be a result of a poorly constrained spectrum at low ($\leq 215$ MHz) frequencies. Host galaxy.
The radio source is unambiguously associated with G1 $(z_{s}=0.312)$, which almost perfectly aligns with the projected centre of the source. (a) MIDAS J225522$-$341807. EMU-ES contour levels: [3,4,5,7,10]$\times\sigma$. GLASS 5.5 contour levels: [3,4,5]$\times\sigma$. GLASS 9.5 contours are not presented due to an absence of radio emission above $3\sigma$. The compact component at RA=22h55m25.5s, Dec=$-34^{\circ}18^{\prime}40^{\prime\prime}$ is unrelated. The radio spectrum of the compact radio component, G4, is demonstrated by the blue markers. Radio emission from G4 is undetected by GLASS 5.5; we thus present a 3$\sigma$ upper limit. (b) MIDAS J225607$-$343212. EMU-ES contour levels: [3,4,5,7,10,12,15,20]$\times\sigma$. GLASS 5.5 contour levels: [3,4,5]$\times\sigma$. GLASS 9.5 contours are not presented due to an absence of radio emission above $3\sigma$. The compact component at RA=22h56m03s, Dec=$-34^{\circ}32^{\prime}55^{\prime\prime}$ is unrelated. Figure 5: Left: Plotted are the remnant candidates presented in Sect. 4. The background image is a VIKING $K_{s}$-band cutout set on a linear stretch. Three sets of contours are overlaid, representing the radio emission as seen by EMU-ES (black), GLASS 5.5 (orange) and GLASS 9.5 (blue). Red markers are overlaid on the positions of potential host galaxies. Right: The radio continuum spectrum between 119 MHz and 9 GHz. The integrated flux densities at 5.5 GHz come from the low-resolution ATCA observations (Sect. 2.1.7), not the higher-resolution GLASS images. A simple power-law (Eqn 1) and curved power-law (Eqn 2) model are fit to the spectrum, indicated by the purple and blue curves, respectively. (a) MIDAS J225608$-$341858. EMU-ES contour levels: [3,4,5,7,10]$\times\sigma$. GLASS 5.5 contour levels: [3,4,5]$\times\sigma$. GLASS 9.5 contours are not presented due to an absence of radio emission above $3\sigma$. (b) MIDAS J225337$-$344745. EMU-ES contour levels: [4,5,10,30,50,70]$\times\sigma$, GLASS 5.5 contour levels: [3,4,5,6]$\times\sigma$.
GLASS 9.5 contours are not presented due to an absence of radio emission above $3\sigma$. (c) MIDAS J225543$-$344047. EMU-ES contour levels: [3,4,5,7,15,30,100]$\times\sigma$, GLASS 5.5 contour levels: [3,5,10,20]$\times\sigma$. GLASS 9.5 contour levels: [3,5,10,20]$\times\sigma$. Figure 6: – continued. (a) MIDAS J225919-331159. EMU-ES contour levels: [5,10,20,40,60]$\times\sigma$, GLASS 5.5 contour levels: [3,4,5,6,10,20]$\times\sigma$. GLASS 9.5 contour levels: [3,4,5,6]$\times\sigma$. (b) MIDAS J230054$-$340118. EMU-ES contour levels: [3,4,5,7,15,30,100,300]$\times\sigma$, GLASS 5.5 contour levels: [3,5,10,20,30]$\times\sigma$. GLASS 9.5 contour levels: [3,5,10,20]$\times\sigma$. (c) MIDAS J230104-334939. EMU-ES contour levels: [5,8,15,35,50]$\times\sigma$, GLASS 5.5 contour levels: [3,5,7,9,11]$\times\sigma$. GLASS 9.5 contour levels: [3,4,5,6]$\times\sigma$. Figure 7: – continued. (a) MIDAS J230321$-$325356. EMU-ES contour levels: [5,10,30,100,300]$\times\sigma$, GLASS 5.5 contour levels: [3,5,10,20,30,40,50]$\times\sigma$. GLASS 9.5 contour levels: [3,5,10,20]$\times\sigma$. (b) MIDAS J230442$-$341344. EMU-ES contour levels: [5,10,30,100,300]$\times\sigma$, GLASS 5.5 contour levels: [3,5,10,20,30,40,50]$\times\sigma$. GLASS 9.5 contour levels: [3,5,10,20]$\times\sigma$. Figure 8: – continued.

#### 4.2.6 MIDAS J230321-325356

Radio properties. Figure 8(a) shows two distinct radio lobes with compact emitting regions observed at 5.5 GHz and 9.5 GHz. The spectral index is approximately $\alpha=-0.86$, showing no significant evidence of spectral ageing. The $\Delta$BIC does indicate a preference towards the curved power-law model; however, any real curvature in the spectrum is marginal ($q\sim-0.02$) and does not necessarily require an absence of energy injection. Host galaxy. Three likely host galaxies are identified by their alignment along the projected jet axis; G1 ($z_{p}=1.327$), G2 ($z_{p}=0.622$) and G3 ($z_{p}=0.775$).
We assume G1 is the true host, due to its smaller separation from the projected radio center.

#### 4.2.7 MIDAS J230442-341344

Radio properties. Figure 8(b) shows two radio lobes; both are detected in GLASS 5.5, however, only the southern lobe shows emission in GLASS 9.5. It is unclear what is causing this; both components are resolved by GLASS and show similar morphologies, so it is unlikely they are unrelated. The radio spectrum is well approximated by a power-law ($\alpha=-1$), with no evidence of spectral curvature. Host galaxy. G1 ($z_{p}=0.9$) is a favourable host-galaxy candidate, due to its small angular separation from the projected centre between the lobes. G2 ($z_{p}=0.8075$) is located further off centre towards the South, at a similar implied redshift.

## 5 Discussion

### 5.1 Sample properties

#### 5.1.1 Core prominence distribution

To understand the limitations imposed by our selection criteria, we investigate the core prominence distribution across our sample. We define the core prominence (CP) as the ratio of core to total flux density, i.e. $\mathrm{CP}=S_{\mathrm{core}}/S_{\mathrm{total}}$. The total integrated flux density is measured at 216 MHz. We take the GLASS 5.5 measurement of the radio core flux density, and re-scale it to 216 MHz assuming the radio core has a flat spectrum ($\alpha=0$; e.g. Hardcastle & Looney, 2008). For the selected remnant candidates, we present upper limits on their CP by using the 3$\sigma$ upper limits on their core peak flux density. Our results are presented in Figure 9. Sources with radio cores show a wide distribution in their CP, varying within the range $10^{-1}$–$10^{-4}$. The median CP of the sample is $\sim 1\times 10^{-2}$, almost two orders of magnitude larger than the median CP reported by Mullin et al. (2008) for the 3CRR sample (i.e. $\sim 3\times 10^{-4}$). This can be expected, given that the most powerful radio galaxies are preferentially selected by 3CRR.
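As a minimal sketch, the core-prominence calculation above can be written as a short helper; the flux-density values used below are hypothetical and purely illustrative, and under the flat-spectrum assumption ($\alpha=0$) the 5.5 GHz core flux density carries over to 216 MHz unchanged:

```python
def core_prominence(s_core_mjy, s_total_mjy, alpha_core=0.0,
                    nu_core=5500.0, nu_total=216.0):
    """Core prominence CP = S_core / S_total, with the core flux density
    re-scaled from nu_core to nu_total (MHz) assuming a power-law core
    spectrum S ~ nu**alpha_core; alpha_core = 0 means a flat spectrum."""
    s_core_rescaled = s_core_mjy * (nu_total / nu_core) ** alpha_core
    return s_core_rescaled / s_total_mjy

# Hypothetical example: a 1 mJy core inside a 100 mJy source.
cp = core_prominence(1.0, 100.0)   # flat spectrum: no rescaling occurs
print(f"CP = {cp:.3g}")

# For a remnant candidate, a 3-sigma core limit (3 x 75 uJy/beam, in mJy)
# yields an upper limit on the CP rather than a measurement.
cp_limit = core_prominence(3 * 0.075, 100.0)
print(f"CP < {cp_limit:.3g}")
```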
Instead, comparing our CP range to the LOFAR-selected sample compiled by Mahatma et al. (2018) – e.g. see their Figure 4 – we find the ranges are consistent. The reader should note, however, that Mahatma et al. (2018) compute their CP at 150 MHz, meaning a $\Delta\nu=66$ MHz frequency shift should be accounted for if a direct comparison is made. As discussed in Sect. 1, the ‘absent radio core’ criterion only selects remnant candidates. Genuine remnant radio galaxies will not display a radio core, meaning their CP should approach zero. In fact, the CP in such sources should be lower than in any active radio source in which a radio core is present. This implies that a clean separation should exist between active and remnant radio galaxies; however, this is not what we see in Figure 9. Instead, we find that the CP upper limits imposed on the remnant candidates overlap with core-detected radio galaxies. This comes as a result of our sample criteria. Given the GLASS detection limit ($\sim 75\,\mu$Jy beam$^{-1}$), only remnant candidates brighter than $\sim 500$ mJy will show CP upper limits below what is observed for core-detected radio galaxies (e.g. log(CP) $\lesssim-3.7$). This result indicates that it is still possible for some of the selected remnant candidates to display a faint radio core that is missed by GLASS. For remnant candidates without hotspots this is less of a concern, as we have additional information suggesting the jets have switched off. However, for those with hotspots, decreasing the upper limits on their CP is required to confidently assert whether their AGN has switched off. As such, these sources must retain their remnant-candidate classification. A low core-prominence criterion is indeed necessary to classify recently switched-off remnants. However, we find that the three remnant candidates with the weakest constraints on their CP upper limits are also those without hotspots, two of which have ultra-steep spectra, e.g. see Sect. 4.1.
While the sample size is rather small, this observation would suggest that aged remnants may be preferentially deselected if only a low ($\lesssim 10^{-4}$) CP criterion is used. This observation echoes the results of Brienza et al. (2017), who show that none of their ultra-steep-spectrum remnants are selected by low CP (e.g. $<0.005$), and only 3/10 morphologically-selected remnants are selected by low CP. Figure 9: 216 MHz CP distribution of radio sources (see Sect. 5.1.1). Core-detected radio galaxies are represented by the blue markers. 3$\sigma$ upper limits are placed on the remnant CP, denoted by the left-pointing arrows. Orange and red colored arrows are used to indicate remnant candidates with and without hotspots, respectively. The solid black line gives the value of the CP above which we are complete, given the 10 mJy integrated flux density threshold and the 75 $\mu$Jy beam$^{-1}$ average GLASS 5.5 detection limit. The orange line traces the lowest CP that can be recovered at the corresponding total flux density. Uncertainties on the CP are propagated from the uncertainties on the total and core flux density. A histogram of CP is presented in the top panel. Figure 10: The high-frequency spectral index $\alpha_{887}^{5500}$ is plotted against the low-frequency spectral index $\alpha_{119}^{399}$. A third color-bar axis is over-plotted to show the largest angular size in arcseconds. The solid black line represents a constant spectral index across both frequency ranges. The dashed black line represents a spectral curvature of $\mathrm{SPC}=0.5$. The red dotted and dot-dashed lines represent an $\alpha=-1.2$ spectral index across the low- and high-frequency range, respectively.

#### 5.1.2 Spectral index distribution

Table 5: Spectral index statistics calculated based on data represented in Figure 10.
The median and mean spectral index, indicated by the med and mean subscripts, are presented for the low ($\alpha_{119}^{399}$) and high ($\alpha_{887}^{5500}$) frequency ranges. $f_{\mathrm{US,\>low}}$ and $f_{\mathrm{US,\>high}}$ represent the low- and high-frequency ultra-steep fractions, respectively. † A range is given here, as it is unclear whether MIDAS J225608$-$341858 (Fig. 6(a)) is ultra-steep at high frequencies.

Sample | $\alpha_{\mathrm{119,\>med}}^{399}$ | $\alpha_{\mathrm{119,\>mean}}^{399}$ | $\alpha_{\mathrm{887,\>med}}^{5500}$ | $\alpha_{\mathrm{887,\>mean}}^{5500}$ | $f_{\mathrm{US,\>low}}$ | $f_{\mathrm{US,\>high}}$
---|---|---|---|---|---|---
Core-detected | -0.60 | -0.63 | -0.66 | -0.67 | 0/94 | 3/94
Remnant cand. (with hotspot) | -0.69 | -0.69 | -0.84 | -0.88 | 0/7 | 1/7
Remnant cand. (without hotspot) | -1.27 | -1.29 | -1.27 | -1.38 | 1/3 | $(2-3)^{\dagger}/3$

The integrated spectral properties of our sample are explored over two frequency ranges. The integrated flux densities at 119, 154, 186, 216 and 399 MHz are used to compute a low-frequency spectral index, $\alpha_{119}^{399}$. To obtain correct fitting uncertainties, we fit power-law models to the data in linear space. A high-frequency spectral index, $\alpha_{887}^{5500}$, is computed using $\alpha=\frac{\mathrm{log_{10}}(S_{887}/S_{5500})}{\mathrm{log_{10}}(887/5500)}$, with an associated uncertainty $\Delta\alpha=\frac{1}{\mathrm{ln}(887/5500)}\sqrt{\big{(}\frac{\Delta S_{887}}{S_{887}}\big{)}^{2}+\big{(}\frac{\Delta S_{5500}}{S_{5500}}\big{)}^{2}}$. We plot our results on an $\alpha-\alpha$ diagram (Figure 10) and summarize them in Table 5. Despite the access to high frequencies ($\nu$ = 5.5 GHz), we find that the selected remnant candidates with hotspots show similar spectral properties to radio galaxies with an active radio core. At low frequencies, remnant candidates with hotspots display spectral indices that are consistent with continuous injection.
The high-frequency spectral index does appear steeper for the bulk of remnant candidates with hotspots; however, this is also observed for core-detected radio sources and simply reflects the preferential ageing of higher-energy electrons. No remnant candidates with hotspots display an ultra-steep low-frequency spectral index, and only one such source (e.g. Sect. 4.2.1) demonstrates a high-frequency ultra-steep spectral index. Regarding the remnant candidates with non-ultra-steep spectra, their position on the $\alpha-\alpha$ plot can be explained if these are young, recently switched-off remnants. However, their spectral properties can just as easily be explained if these are active radio galaxies in which the radio core is below the GLASS detection limit. We cannot rule out either of these possibilities based on their spectra alone, and as such they must remain remnant candidates. We note that Mahatma et al. (2018) also report a large overlap in the observed spectral index (150 MHz–1.4 GHz) between their active and candidate remnant radio galaxies. Only one of their remnant candidates displays hotspots at 6 GHz with the Very Large Array, meaning it is not only the remnant candidates with hotspots that display spectral indices similar to active radio galaxies. A great example of this is the remnant radio galaxy Blob 1 identified by Brienza et al. (2016). We also find a small fraction (3/94) of core-detected radio sources which demonstrate an ultra-steep spectral index. The angular size of these sources is well below the largest angular scale that GLASS can recover at 5.5 GHz, suggesting the curvature is genuine. This tells us that ultra-steep selection will not only select radio galaxies in which the AGN has switched off; it is interesting to note, however, that these sources also do not display GLASS hotspots. It is possible these sources represent restarted radio galaxies in which the ‘core’ represents a newly-restarted jet.
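The two-point spectral index and its propagated uncertainty defined in Sect. 5.1.2 can be computed with a short helper. The flux-density values below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def two_point_alpha(s1, ds1, nu1, s2, ds2, nu2):
    """Two-point spectral index alpha = log10(s1/s2) / log10(nu1/nu2),
    with the uncertainty propagated from the flux-density errors:
    d_alpha = |1 / ln(nu1/nu2)| * sqrt((ds1/s1)**2 + (ds2/s2)**2)."""
    alpha = math.log10(s1 / s2) / math.log10(nu1 / nu2)
    d_alpha = abs(1.0 / math.log(nu1 / nu2)) * math.hypot(ds1 / s1, ds2 / s2)
    return alpha, d_alpha

# Hypothetical lobe flux densities: 50 +/- 2 mJy at 887 MHz and
# 8 +/- 1 mJy at 5500 MHz.
alpha, err = two_point_alpha(50.0, 2.0, 887.0, 8.0, 1.0, 5500.0)
print(f"alpha_887^5500 = {alpha:.2f} +/- {err:.2f}")
```

An ultra-steep classification would then simply test, e.g., `alpha < -1.2` after accounting for the uncertainty.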
Unsurprisingly, we find that the remnant candidates without hotspots also display steeper spectra than those with hotspots. The absence of compact features in their morphologies implies a lack of recent jet activity, which is supported by their ultra-steep spectra, implying significant spectral ageing within the lobes. Our low fraction of ultra-steep-spectrum remnant candidates echoes the result of Brienza et al. (2016), who discuss the bias of preferentially selecting aged remnant radio galaxies via their ultra-steep spectra. Table 6: Derived distribution averages from Sect. 5.1.3. The number of radio sources included in each category is denoted by $N$. The redshift, $z$, radio power, $P$, and largest linear size (LLS) are presented. The subscripts med and mean refer to the median and mean values. In the upper half of the table, we consider the entire sample of 104 radio sources. In the lower half, we consider only those with spectroscopic redshifts. † Including the 14 core-detected radio galaxies with $z\geq 1$ lower limits.

Sample | $N$ | $z_{\mathrm{med}}$ | $z_{\mathrm{mean}}$ | $P_{\mathrm{med}}$ (log${}_{10}\mathrm{W\,Hz^{-1}}$) | $P_{\mathrm{mean}}$ (log${}_{10}\mathrm{W\,Hz^{-1}}$) | $\mathrm{LLS}_{\mathrm{med}}$ (kpc) | $\mathrm{LLS}_{\mathrm{mean}}$ (kpc)
---|---|---|---|---|---|---|---
Full sample: | | | | | | |
Core-detected | 80 | 0.519 | 0.59 | 25.2 | 25.2 | 277 | 379
Core-detected† | 94 | 0.549 | 0.651 | 25.3 | 25.3 | 294 | 388
Remnant candidates | 10 | 0.504 | 0.619 | 25.5 | 25.8 | 435 | 512
Spectroscopic redshifts: | | | | | | |
Core-detected | 25 | 0.303 | 0.290 | 25.1 | 25.2 | 171 | 322
Remnant candidates | 3 | 0.312 | 0.279 | 25.2 | 25.1 | 351 | 299

#### 5.1.3 Power–size distribution

Figure 11: 216 MHz radio power against the largest linear size. Core-detected radio sources (blue markers), remnant candidates with hotspots (orange markers) and remnant candidates without hotspots (red markers) are displayed. Circular and square markers are used to denote spectroscopic and photometric redshifts, respectively.
Lower limits on the 14 radio sources without host identifications are denoted by green arrows. Also plotted are the largest linear sizes that would result in a 5$\sigma$ detection at 216 MHz at $z=0.3$ (black) and $z=1$ (red). Limits are calculated assuming a uniform-brightness ellipse, and a lobe axis ratio of 2.5 (solid line) and 1.5 (dashed line). Aged remnants often display low axis ratios, e.g. MIDAS J225522$-$341807 (Sect. 4.1.1). We investigate the sample distribution in redshift, total radio power, and largest linear size, given the host galaxy identifications. We stress that many of the selected remnant candidates have uncertain host-galaxy associations, presenting a major challenge in analysing their rest-frame properties. An additional uncertainty comes from the photometric redshifts, which make up $60/104$ (57%) of the sample. In addition, 14 active radio galaxies do not have an optical identification, meaning photometric redshift estimates cannot be obtained. If we assume their host galaxies are at least $10^{10.6}\,\mathrm{M}_{\odot}$, e.g. the lowest stellar mass reported by Best et al. (2005) to host a radio-loud AGN, we can apply the $K-z$ relation (e.g. see Longair & Lilly 1984, Rocca-Volmerange et al. 2004) to estimate the lowest redshift at which a $10^{10.6}\,\mathrm{M}_{\odot}$ galaxy will be undetected below the VIKING $K_{s}$-band AB magnitude limit (Sect. 2.2.1). This suggests that the host galaxies of the 14 unidentified radio galaxies must be above $z=1$. The caveat here is that Best et al. (2005) investigate local ($z\leq 0.3$) radio-loud AGN samples, whereas here we are assuming higher redshift. Figure 1 of Smolčić et al. (2017) shows a hint of a decline in stellar mass with redshift; however, our assumption of the minimum stellar mass seems valid up to $z\sim 1$. For each radio source, we calculate the total radio power following $P=4\pi SD_{L}^{2}(1+z)^{-\alpha-1}$, where $D_{L}$ is the luminosity distance.
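The radio-power formula above can be sketched numerically. The cosmological parameters below ($H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m}=0.3$, flat universe) are illustrative assumptions, not necessarily those adopted in this work, and the example source is hypothetical:

```python
import math

# Assumed flat Lambda-CDM cosmology (illustrative values only).
H0 = 70.0           # Hubble constant, km/s/Mpc
OMEGA_M = 0.3       # matter density parameter
C_KMS = 299792.458  # speed of light, km/s

def luminosity_distance_mpc(z, n=10000):
    """D_L = (1+z) * D_C for a flat universe, with the comoving distance
    D_C obtained by trapezoidal integration of dz' / E(z')."""
    def inv_e(zp):
        return 1.0 / math.sqrt(OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M))
    dz = z / n
    integral = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, n):
        integral += inv_e(i * dz)
    d_c = (C_KMS / H0) * integral * dz
    return (1 + z) * d_c

def radio_power(s_jy, z, alpha):
    """Rest-frame radio power P = 4 pi S D_L^2 (1+z)^(-alpha-1) in W/Hz,
    for an integrated flux density S in Jy."""
    mpc_m = 3.0857e22                   # metres per megaparsec
    d_l = luminosity_distance_mpc(z) * mpc_m
    s_si = s_jy * 1e-26                 # Jy -> W m^-2 Hz^-1
    return 4.0 * math.pi * s_si * d_l ** 2 * (1 + z) ** (-alpha - 1)

# Hypothetical source: 100 mJy at 216 MHz, z = 0.5, alpha = -0.8.
p = radio_power(0.1, 0.5, -0.8)
print(f"log10(P / W Hz^-1) = {math.log10(p):.2f}")
```

The $(1+z)^{-\alpha-1}$ term is the standard $k$-correction for a power-law spectrum, converting the observed flux density to the rest-frame emitted power.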
For the 14 radio galaxies without optical identifications, the radio power is calculated assuming $z\geq 1$. Our results are presented in Table 6 and Figure 11. Assuming radio lobes continue expanding once the jets switch off, which at least appears to be the case for FR-II radio galaxies as shown by Godfrey et al. (2017), we would expect remnants to display larger physical sizes than their active radio galaxy progenitors. Interestingly, our results suggest that the largest linear sizes of core-detected radio galaxies and remnant candidates are similar. This may be explained by the observational bias against remnant radio galaxies of large linear size; such sources will preferentially fall below a fixed detection limit due to their lower surface brightness profiles. Since the linear size and age of a radio galaxy are correlated, for a fixed jet power and environment, it is not unreasonable to suggest that these ‘missing’ remnants also correspond to older remnant radio galaxies, which in turn would imply our sample predominantly comprises young remnants (see also Mahatma et al. 2018). If so, this result is consistent with the low fraction of ultra-steep remnants discussed in Sect. 5.1.2. We do, however, treat this analysis with caution, since seven remnant candidates have ambiguous host galaxy associations and the photometric redshift estimates carry their own uncertainties. We instead consider the sample of 28 spectroscopically-confirmed radio galaxies to see whether we can draw the same conclusions as above. Within this ‘spectroscopic sample’, we can be confident not only of the sample redshifts, but also of the three remnant candidates (see Sect. 4.1.2, 4.2.1, and 4.2.5), which have unambiguous host galaxy associations and hence secure positions on the power-size diagram.
As demonstrated in Figure 11, the absence of remnant radio galaxies of large linear size becomes quite clear in the ‘spectroscopic sample’, and is consistent with the previously-discussed conclusions. Our limiting factor here is the small sample size; however, this will be addressed in future work, where we can expect a factor $\sim 6$ increase in the sample size of radio galaxies in GAMA 23. The additional benefit here will be the GAMA 23 group catalogue, which provides group/cluster associations for galaxies with spectroscopic redshifts, as well as virial estimates for the group mass. This will allow us to begin decomposing the degeneracy between radio power, linear size and environment.

Table 7: Remnant fractions constrained by previous authors. The columns give, from left to right, the cited study, the sky coverage over which the sample is compiled, the flux limit across the sample (or the faintest source in the sample), the frequency at which the flux cut is made, the angular size cut of the sample, the number of radio galaxies within the sample, and the resulting remnant fraction. References. (1) Saripalli et al. (2012), (2) Brienza et al. (2017) and Jurlin et al. (2020), (3) Mahatma et al. (2018), (4) This work.

Ref. | Sky coverage (deg2) | Flux limit (mJy) | Sample frequency (MHz) | $\theta_{\mathrm{cut}}$ (′′) | Sample size | $f_{\mathrm{rem}}$
---|---|---|---|---|---|---
1 | 7.52 | 1 | 1400 | 30 | 119 | < 4%
2 | 35 | 40 | 150 | 60 | 158 | < 11%
3 | 140 | 80 | 150 | 60 | 127 | < 9%
4 | 8.31 | 10 | 216 | 25 | 104 | $4\lesssim f_{\mathrm{rem}}\lesssim 10\%$

### 5.2 Constraining a remnant fraction

#### 5.2.1 Remnant fraction upper limit

The fraction of remnant radio galaxy candidates identified in this work provides an upper limit to the genuine fraction of remnant radio galaxies, $f_{\mathrm{rem}}$, present within this sample.
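The remnant fractions in Table 7 are ratios of small counts, so binomial counting scatter is non-negligible. As an illustrative aside (not a calculation performed in the paper), a Wilson score interval attaches a statistical uncertainty to a fraction such as 10 candidates out of 104 sources:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n (default ~95%)."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Counting uncertainty on a fraction of 10 remnant candidates in 104 sources:
lo, hi = wilson_interval(10, 104)
```

This yields an approximate 95% interval of (0.05, 0.17), showing that a $\approx 10\%$ fraction from a sample of this size carries an appreciable statistical uncertainty on top of the classification uncertainties discussed in the text.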
Of 104 radio galaxies, 10 are identified as remnant radio galaxy candidates, resulting in a $f_{\mathrm{rem}}\approx 10\%$ upper limit on the remnant fraction. Saripalli et al. (2012), Brienza et al. (2017) and Mahatma et al. (2018) each constrain a remnant fraction from radio observations, and their results are presented in Table 7. At face value, the remnant fraction obtained in this work is consistent with that of Brienza et al. (2017) and Mahatma et al. (2018), and shows a considerable increase over the fraction constrained by Saripalli et al. (2012). The apparent inconsistency with Saripalli et al. (2012) may very well be a result of their selection. Their sample was selected at 1.4 GHz, where ultra-steep remnants may potentially be missed, and they also excluded sources without a radio core but with hotspots still present within the lobes. It is interesting to note that, despite the difference in the flux limit and angular size cut compared to the samples compiled by Mahatma et al. (2018), Brienza et al. (2017), and Jurlin et al. (2020), the upper limits on the remnant fraction appear consistent. Shabala et al. (2020) show that the remnant fraction predicted by constant-age models, i.e. those in which the jets are active for a constant duration, is highly sensitive to observable constraints, e.g. the flux and angular size limit. On the other hand, the remnant fraction predicted by power-law age models, i.e. those in which the duration of the active phase is power-law distributed, shows little dependence on observable parameters. It is therefore possible that the similarity in remnant fractions implies a preference towards power-law age models that describe AGN jet activity, although this needs to be pursued in more detailed future work.

#### 5.2.2 Remnant candidates without hotspots

As presented in Sect. 4, the lobes of seven remnant candidates display compact emitting regions in GLASS, potentially indicating a hotspot formed by an active jet.
As discussed in Sections 5.1.1 and 5.1.2, these sources could be interpreted either as recently switched-off remnants, or as active radio galaxies with unidentified radio cores. We thus propose a lower limit to the remnant fraction by considering the limiting case where each remnant candidate with a hotspot is an active radio galaxy. This would suggest a $f_{\mathrm{rem}}=4/104$ ($\approx 4\%$) lower limit on the remnant fraction, consistent with the value reported by Saripalli et al. (2012). As discussed in Sect. 5.3, an appreciable fraction of genuine remnants with hotspots can potentially be expected.

(a) Continuous injection model (CI) (b) Continuous injection with ‘off’ component model (CI-off)

Figure 12: Modelled integrated spectrum of MIDAS J225337$-$344745. Figure 12(a) models the spectrum assuming a continuous injection model (CI). Figure 12(b) models the spectrum assuming a continuous injection model with an ‘off’ component, encoding a jet switch-off (CI-off). In each model, a 2$\sigma$ uncertainty envelope is represented by the violet shaded region. As discussed in Sect. 5.3, the model uncertainties take into account only the uncertainties on the flux density measurements, and do not reflect the underlying uncertainties due to an inhomogeneous magnetic field. For reference, a best fit to the data using a single power-law model is represented by a blue line.

Figure 13: A ‘MIDAS J225337$-$344745’ type remnant radio galaxy is modelled by assuming a jet power $Q=10^{38.1}$ W, an injection energy index of $s=2.1$, an equipartition factor of $B/B_{\mathrm{eq}}=0.22$, and a total source age of 71 Myr, of which 50 Myr is spent in an active phase and a further 21 Myr is spent as a remnant. The shaded blue bar corresponds to the time during which the source is active, after which the jets are switched off and the hotspots/lobes begin to fade.
The evolution of the synchrotron emission from the lobes (solid black tracks) and the hotspots (dashed tracks) is shown as a function of the total source age. The assumption that the hotspot magnetic field strength is a factor five greater than that of the lobes (colored in orange) comes from Cygnus A; however, we also assume a factor ten increase in the hotspot magnetic field strength (colored in red) to consider shorter fading timescales. We explore this in terms of the peak flux density, as this ultimately decides whether the emitting regions are detected in observations. The vertical drop in the flux density tracks reflects the depletion of electrons capable of producing emission at 5.5 GHz. As expected, the synchrotron emission evolves faster in the hotspot; however, its fading timescale is non-negligible in comparison to that of the lobes.

Table 8: Summarized properties of the MIDAS J225337$-$344745 spectral modelling (Sect. 5.3). A reduced chi-squared ($\chi^{2}_{\mathrm{red}}$) is provided to assess the quality of fit. The fitted injection index $\alpha_{\mathrm{inj}}$, observed-frame break frequency $\nu_{b}$ and quiescent fraction $T$ are presented for the continuous injection (CI) and continuous injection-off (CI-off) models. We quote a $\Delta$BIC calculated between the two models.

Model | $\chi^{2}_{\mathrm{red}}$ | $\alpha_{\mathrm{inj}}$ | $\nu_{b}$ (GHz) | $T$ | $\Delta$BIC
---|---|---|---|---|---
CI | 3.43 | -0.608 | 1.60$\pm$0.08 | $-$ | 2.1
CI-off | 0.64 | -0.55 | 2.2$\pm$0.09 | 0.28$\pm$0.004 | $-$

### 5.3 Evolutionary history of MIDAS J225337-344745

A particularly interesting source to examine is MIDAS J225337$-$344745, which, as mentioned in Sect.
4.2.1, demonstrates two seemingly contradictory features: the ultra-steep ($\alpha<-1.52$) higher-frequency spectrum of the lobes, suggesting energy losses consistent with an aged remnant radio galaxy, and the compact 5.5 GHz emitting regions in GLASS, which may in turn suggest current or recent energy injection. It is unclear whether these are genuine hotspots; the north-eastern lobe is detected just above the noise level, yet demonstrates many ‘hotspot-like’ features. A singular, bright ‘hotspot’ is evident in the south-western lobe; however, the emitting region is clearly resolved at a physical resolution of $7\times 14$ kpc (the physical resolution of GLASS 5.5 at $z=0.213$), and so it is unclear whether this ‘hotspot’ is formed by an active jet, or whether it is the expanding hotspot of a previously-active jet. It is also possible that the compact emitting region is a combination of the lobe and hotspot. Fortunately, the radio continuum spectrum appears to preserve the original spectrum at low frequencies, while also encoding the energy losses evident at higher frequencies. This allows us to model and parameterise the spectrum in terms of physically meaningful parameters, and thus to probe the energetics along with the dynamics. Our ultimate objective here is to determine whether the compact emitting regions can be expected within a remnant radio galaxy. Our analysis is laid out as follows. We model the radio source using the Radio AGN in Semi-analytic Environments (RAiSE) code, which considers, amongst other things: (i) the evolution of the magnetic field strength; (ii) the adiabatic losses contributing to the radio luminosity evolution of the lobes; and (iii) the Rayleigh-Taylor mixing of the plasma contained within the remnant lobes.
These processes are described by the dynamical (Turner & Shabala, 2015) and synchrotron emissivity (Turner et al., 2018b) models for lobed FR-II radio sources, which follow the continuous injection model (CI model; Kardashev, 1962; Pacholczyk, 1970; Jaffe & Perola, 1973). The remnant spectrum follows the continuous injection ‘off’ model (CI-off; Komissarov & Gubanov, 1994), alternatively known as the KGJP model, which Turner (2018) parameterises with two break frequencies. A jet power $Q=10^{38.1}$ W, magnetic field strength $B=1.08$ nT, and total source age $\tau=71$ Myr are constrained by the method of Turner et al. (2018b). The synchrotron spectrum is modelled by the method of Turner (2018) which, provided sufficient spectral sampling, allows the injection index $\alpha_{\mathrm{inj}}$, break frequency $\nu_{b}$ and quiescent fraction $T$ to be uniquely constrained. (The quiescent fraction is defined as $T=t_{\mathrm{off}}/\tau$, where $t_{\mathrm{off}}$ is the duration of the remnant phase and $\tau$ is the total age of the radio source.) Our results are shown in Figure 12. Both models constrain an injection index that lies within the typically observed range of $-1.0<\alpha_{\mathrm{inj}}<-0.5$ (see Table 8). The discrepancy between the models becomes evident at the highest frequencies, for which the CI model begins to over-predict the integrated flux density. On the other hand, the CI-off model is able to comfortably reproduce the observed spectral curvature. We calculate a reduced chi-squared ($\chi^{2}_{\mathrm{red}}$) for each model, and a $\Delta\mathrm{BIC}=2.1$ (computed as $\mathrm{BIC}_{\mathrm{CI}}-\mathrm{BIC}_{\mathrm{CI\text{-}off}}$), which demonstrates a preference towards the CI-off model (see Table 8). Although the distinction between the two models is ultimately driven by the 9 GHz measurement, CI-off provides a better model of the observed spectrum, suggesting these are remnant lobes. Modelling the spectrum as a remnant suggests the source spends approximately 21 Myr in a remnant phase.
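The model comparison above uses the Bayesian Information Criterion; for Gaussian measurement errors this reduces, up to a model-independent constant, to $\mathrm{BIC}=\chi^{2}+k\ln n$ for $k$ free parameters and $n$ data points. A minimal sketch follows; the chi-squared values, data count, and parameter counts below are illustrative assumptions, not the exact values used in the fits.

```python
import math

def bic(chi2, n_params, n_data):
    """Bayesian Information Criterion up to a model-independent constant,
    assuming Gaussian measurement errors: BIC = chi^2 + k * ln(n)."""
    return chi2 + n_params * math.log(n_data)

# Hypothetical example: n flux-density measurements, a CI model with 3 free
# parameters, and a CI-off model that adds the quiescent fraction T as a
# fourth (all counts are assumptions for illustration).
n_data = 11
bic_ci = bic(27.4, 3, n_data)      # poorer fit, fewer parameters
bic_cioff = bic(4.5, 4, n_data)    # better fit, one extra parameter
delta_bic = bic_ci - bic_cioff     # positive value favours CI-off
```

A positive $\Delta\mathrm{BIC}=\mathrm{BIC}_{\mathrm{CI}}-\mathrm{BIC}_{\mathrm{CI\text{-}off}}$ favours the CI-off model, matching the sign convention used in the text, while still penalising the extra free parameter $T$.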
Turner et al. (2018b) showed that the CI model provides a statistically significant fit to the broad frequency radio spectra of over 86 per cent of FR-IIs in the Mullin et al. (2008) sample of 3C sources (typically 12 measurements between 0.01 and 10 GHz from Laing & Peacock 1980; but see Harwood 2017). Further, despite the non-physical CI model assumption of time-invariant magnetic fields, Turner et al. (2018b) find that the CI model is an excellent fit to the simulated spectra of lobed FR-IIs which do consider magnetic field evolution, and that it is able to recover the source dynamical age. As a final sanity check on the duration of the remnant phase fitted above (by the integrated CI-off model), we constrain the age of the most recently accelerated electron populations, which we assume are located near the ‘hotspots’. We convolve the 399 MHz, 887 MHz and 5.5 GHz images to a common 11′′ circular resolution and measure the integrated flux density within an aperture centered at the southern ‘hotspot’. We fit the spectrum with a Tribble JP model assuming the magnetic field derived previously in our RAiSE modelling. We arrive at a remnant age of $t_{\mathrm{off}}=13_{-4}^{+8}$ Myr, consistent with the $t_{\mathrm{off}}\approx 21$ Myr derived from the integrated CI-off model. We stress that the observations are not properly (u,v) matched, and thus do not necessarily sample the same radio emission; e.g. at 5.5 GHz, the flux density is likely underestimated due to resolving out the extended radio emission. We leave a more detailed analysis of this source to future work. Next, we model the hotspot to better understand its typical fading timescale. We model a ‘MIDAS J225337$-$344745 like’ radio galaxy, meaning we adopt the same values for the jet power, active age, lobe axis ratio, energy injection index and equipartition factor. By the method of Turner et al.
(2018a) we forward model the 5.5 GHz lobe spatial emissivity (without the hotspot) and convolve the output map with the GLASS 5.5 synthesized beam. We consider the GLASS 5.5 beam with the largest integrated flux (i.e. the maximum peak flux density), as this ultimately determines whether the youngest emitting regions of the lobes are detected. The hotspot is modelled assuming a JP spectrum (Jaffe & Perola, 1973) for the same initial properties as the lobe, but for an increased magnetic field strength. Cygnus A displays a factor five increase in the hotspot magnetic field strength, in comparison to that of the lobes (Carilli & Barthel, 1996). We make the same assumption for MIDAS J225337$-$344745; however, we also consider a case where the hotspot magnetic field strength is a factor ten greater than that of the lobes. We also assume the same ratio of hotspot to lobe volume as that measured in Cygnus A; this only shifts the spectrum along the log-frequency log-luminosity plane and does not influence the spectral shape (i.e. the slope). Our results are presented in Figure 13. As expected, the hotspot fades rapidly once injection switches off. Bright radio emission from the hotspot is driven by the freshly injected electrons, which have a short lifetime at high frequencies due to the strong magnetic field. Despite the hotspot fading rapidly, its fading timescale is non-negligible when compared to the characteristic active and remnant ages. It is not unreasonable to suggest that recently switched-off remnants may still display hotspots, and that some of the selected remnant candidates with hotspots (Sect. 5.2.2) may indeed be such cases. The main result of Figure 13 is that the hotspots of a recently switched-off jet should not be visible in MIDAS J225337$-$344745, assuming a $t_{\mathrm{off}}\approx 21$ Myr remnant phase. At the implied age of the remnant, the modelled emitting regions of the lobes range between 100–150 kpc in size.
This is consistent with the observed emitting regions in GLASS, which demonstrate a projected linear size of $\sim 96$ kpc. This indicates that the bright features in GLASS are entirely consistent with the youngest plasma regions, and are not necessarily the hotspots of an active jet. As such, the interpretation of this source is challenging for the following reasons. Modelling the spectrum suggests these are the lobes of a remnant radio galaxy. While hotspots will remain visible for a non-negligible period of time after injection is switched off, they should not be visible in this source assuming the remnant age is correct. The two pieces of evidence are consistent with one another if the bright GLASS features are just the youngest emitting regions of the lobes. If we alternatively assume MIDAS J225337$-$344745 is an active source, the RAiSE modelling of Turner et al. (2018b) finds a comparable age ($\tau=100\rm\>Myr$) to the remnant case, but a substantially lower jet power of $Q=10^{36.7}\rm\>W$ is required to match the relatively low observed flux density. This jet power is typically associated with an FR-I morphology for the known host galaxy mass (e.g. Ledlow, 1994; Best, 2009), potentially resulting in a broad flare point rather than compact hotspots at the end of a jet. This morphology may explain the large spatial extent of the GLASS 5.5 GHz observations; however, it does not resolve the discrepant spectral curvature. Finally, the result presented in Figure 13 makes a rather interesting prediction for the observed properties of genuine remnants. As discussed in Sect. 5.2, the upper limit on the remnant fraction is $f_{\mathrm{rem}}\approx 10\%$. For a characteristic active lifetime of $\sim 100$ Myr, our observed remnant fraction would thus suggest an observable remnant phase lasting $\sim 10$ Myr. The modelling of a ‘MIDAS J225337$-$344745 like’ radio galaxy shows that the hotspot can persist for at least several Myr before fading out completely.
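The fading-timescale argument can be illustrated with a commonly quoted radiative-lifetime expression from spectral-ageing studies, $t\,[\mathrm{Myr}]\simeq 1590\,\sqrt{B}/[(B^{2}+B_{\mathrm{IC}}^{2})\sqrt{\nu(1+z)}]$ with $B$ in $\mu$G, $\nu$ in GHz and $B_{\mathrm{IC}}=3.25(1+z)^{2}\,\mu$G. The sketch below applies it to the lobe field derived for MIDAS J225337$-$344745 ($B=1.08$ nT $=10.8\,\mu$G, $z=0.213$) and to the assumed factor-five and factor-ten hotspot enhancements; this is a back-of-the-envelope check, not the RAiSE calculation itself, and the numeric coefficient is the one conventionally used rather than a value taken from the paper.

```python
import math

# Radiative (synchrotron + inverse-Compton) lifetime of the electrons
# emitting at observed frequency nu, using the commonly quoted expression
#   t [Myr] = 1590 * sqrt(B) / ((B^2 + B_IC^2) * sqrt(nu * (1 + z))),
# with B in microgauss, nu in GHz, and B_IC = 3.25 * (1 + z)^2 microgauss.

def radiative_lifetime_myr(b_uG, nu_ghz, z):
    b_ic = 3.25 * (1.0 + z) ** 2   # equivalent CMB magnetic field, microgauss
    return 1590.0 * math.sqrt(b_uG) / (
        (b_uG ** 2 + b_ic ** 2) * math.sqrt(nu_ghz * (1.0 + z))
    )

z, nu = 0.213, 5.5
b_lobe = 10.8                                        # 1.08 nT in microgauss
t_lobe = radiative_lifetime_myr(b_lobe, nu, z)       # lobe field
t_hs5 = radiative_lifetime_myr(5 * b_lobe, nu, z)    # factor-5 hotspot field
t_hs10 = radiative_lifetime_myr(10 * b_lobe, nu, z)  # factor-10 hotspot field
```

Even for a tenfold field enhancement, the 5.5 GHz electron lifetime in this estimate remains of order a Myr, so a hotspot's fading timescale is indeed non-negligible relative to the $\sim 10$ Myr observable remnant phases discussed above.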
As such, we can expect an appreciable fraction of remnants to still display hotspots.

## 6 Conclusions

Within a sub-region of 8.31 deg2 of the GAMA 23 field, we have compiled a sample consisting of 104 extended, low-frequency selected (216 MHz) radio galaxies. Using the 5.5 and 9.5 GHz GLASS survey, we have adopted the ‘absent radio core’ criterion to search for remnant radio galaxy candidates. Our conclusions are summarized as follows:
* • We identify 10 new remnant radio galaxy candidates, thereby constraining an $f_{\mathrm{rem}}\leq 10\%$ upper limit on the fraction of radio galaxies with quiescent AGN. Our upper limit is consistent with that proposed by previous authors, and suggests that remnants must have a short observable lifetime.
* • Seven remnant candidates show compact emitting regions in GLASS, an observation that can only be explained if the jets have recently switched off. A much smaller fraction (3/10) show relaxed, hotspot-less lobes, and only one displays an ultra-steep spectrum across the entire frequency range. This implies remnants are detected soon after switching off, suggesting a rapid fading during the remnant phase.
* • The small fraction of ultra-steep ($\alpha<-1.2$) remnants is likely a result of the oldest remnant lobes escaping detection due to their expansion.
* • At present, the upper limits placed on the remnant core prominence are too weak to confidently rule out the presence of AGN activity. Those with compact hotspots and a non ultra-steep spectrum must therefore retain their remnant candidate classification. Considering the limiting case in which all of these are active radio galaxies, we would expect a $f_{\mathrm{rem}}\approx 4\%$ remnant fraction.
* • MIDAS J225337$-$344745 represents an interesting object for future study. Modelling the integrated lobe spectrum shows consistency with a remnant radio galaxy.
We find that although the hotspot has a non-negligible fading timescale, we do not expect to see hotspots in this source.
* • By modelling the synchrotron spectrum arising from a ‘MIDAS J225337$-$344745-like’ hotspot, we show that the hotspot can persist for $5-10$ Myr at 5.5 GHz after the jets switch off. This would imply that the presence of a hotspot in radio maps may not necessarily reflect an active jet, and by extension we can expect an appreciable fraction of genuine remnants to still display hotspots.

## 7 Acknowledgements

BQ acknowledges a Doctoral Scholarship and an Australian Government Research Training Programme scholarship administered through Curtin University of Western Australia. NHW is supported by an Australian Research Council Future Fellowship (project number FT190100231) funded by the Australian Government. This scientific work makes use of the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the Pawsey Supercomputing Centre which is supported by the Western Australian and Australian Governments. The Australian SKA Pathfinder is part of the Australia Telescope National Facility which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. The Australia Telescope Compact Array is part of the Australia Telescope National Facility which is funded by the Australian Government for operation as a National Facility managed by CSIRO. We thank the staff of the GMRT that made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.
CHIC acknowledges the support of the Department of Atomic Energy, Government of India, under the project 12-R&D-TFR-5.02-0700. SW acknowledges the financial assistance of the South African Radio Astronomy Observatory (SARAO) towards this research (www.ska.ac.za). IP acknowledges support from INAF under the SKA/CTA PRIN "FORECaST" and the PRIN MAIN STEAM "SAuROS" projects. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. H.A. benefited from grant CIIC 90/2020 of Universidad de Guanajuato, Mexico. We acknowledge the work and support of the developers of the following python packages: Astropy (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018) and Numpy (van der Walt et al., 2011). We also made extensive use of the visualisation and analysis packages DS9 (http://ds9.si.edu/site/Home.html) and Topcat (Taylor, 2005). We thank an anonymous referee for their insightful comments that have improved the manuscript. This work was compiled in the very useful free online LaTeX editor Overleaf. ## References * Astropy Collaboration et al. (2013) Astropy Collaboration et al., 2013, A&A, 558, A33 * Beardsley et al. (2020) Beardsley A. P., et al., 2020, PASA, 37, e014 * Bellstedt et al. (2020) Bellstedt S., et al., 2020, MNRAS, 496, 3235 * Best (2009) Best P. N., 2009, Astronomische Nachrichten, 330, 184 * Best et al. (2005) Best P. N., Kauffmann G., Heckman T. M., Brinchmann J., Charlot S., Ivezić Ž., White S. D. M., 2005, MNRAS, 362, 25 * Brammer et al. (2008) Brammer G. B., van Dokkum P. G., Coppi P., 2008, ApJ, 686, 1503 * Brienza et al. (2016) Brienza M., et al., 2016, A&A, 585, A29 * Brienza et al. (2017) Brienza M., et al., 2017, A&A, 606, A98 * Brienza et al. (2018) Brienza M., et al., 2018, A&A, 618, A45 * Brienza et al. (2020) Brienza M., et al., 2020, arXiv e-prints, p. arXiv:2003.13476 * Briggs (1995) Briggs D.
S., 1995, in American Astronomical Society Meeting Abstracts. p. 112.02 * Carilli & Barthel (1996) Carilli C. L., Barthel P. D., 1996, A&A Rev., 7, 1 * Churazov et al. (2001) Churazov E., Brüggen M., Kaiser C. R., Böhringer H., Forman W., 2001, ApJ, 554, 261 * Condon et al. (1998) Condon J. J., Cotton W. D., Greisen E. W., Yin Q. F., Perley R. A., Taylor G. B., Broderick J. J., 1998, AJ, 115, 1693 * Cordey (1987) Cordey R. A., 1987, MNRAS, 227, 695 * Driver et al. (2011) Driver S. P., et al., 2011, MNRAS, 413, 971 * Duffy & Blundell (2012) Duffy P., Blundell K. M., 2012, MNRAS, 421, 108 * Fanaroff & Riley (1974) Fanaroff B. L., Riley J. M., 1974, MNRAS, 167, 31P * Feretti et al. (1984) Feretti L., Gioia I. M., Giovannini G., Gregorini L., Padrielli L., 1984, A&A, 139, 50 * Giovannini et al. (1988) Giovannini G., Feretti L., Gregorini L., Parma P., 1988, A&A, 199, 73 * Godfrey et al. (2017) Godfrey L. E. H., Morganti R., Brienza M., 2017, MNRAS, 471, 891 * Hancock et al. (2012) Hancock P. J., Murphy T., Gaensler B. M., Hopkins A., Curran J. R., 2012, MNRAS, 422, 1812 * Hancock et al. (2018) Hancock P. J., Trott C. M., Hurley-Walker N., 2018, PASA, 35, e011 * Hardcastle (2018) Hardcastle M. J., 2018, MNRAS, 475, 2768 * Hardcastle & Looney (2008) Hardcastle M. J., Looney L. W., 2008, MNRAS, 388, 176 * Harwood (2017) Harwood J. J., 2017, MNRAS, 466, 2888 * Harwood et al. (2015) Harwood J. J., Hardcastle M. J., Croston J. H., 2015, MNRAS, 454, 3403 * Hurley-Walker et al. (2015) Hurley-Walker N., et al., 2015, MNRAS, 447, 2468 * Hurley-Walker et al. (2017) Hurley-Walker N., et al., 2017, MNRAS, 464, 1146 * Huynh et al. (2015) Huynh M. T., Bell M. E., Hopkins A. M., Norris R. P., Seymour N., 2015, MNRAS, 454, 952 * Huynh et al. (2020) Huynh M. T., Seymour N., Norris R. P., Galvin T., 2020, MNRAS, 491, 3395 * Ishwara-Chandra et al. (2020) Ishwara-Chandra C. H., Taylor A. R., Green D. A., Stil J. M., Vaccari M., Ocran E. 
F., 2020, MNRAS, 497, 5383 * Jaffe & Perola (1973) Jaffe W. J., Perola G. C., 1973, A&A, 26, 423 * Johnston et al. (2007) Johnston S., et al., 2007, PASA, 24, 174 * Jurlin et al. (2020) Jurlin N., et al., 2020, arXiv e-prints, p. arXiv:2004.09118 * Kardashev (1962) Kardashev N. S., 1962, Soviet Ast., 6, 317 * Komissarov & Gubanov (1994) Komissarov S. S., Gubanov A. G., 1994, A&A, 285, 27 * Laing & Peacock (1980) Laing R. A., Peacock J. A., 1980, MNRAS, 190, 903 * Ledlow (1994) Ledlow M. J., 1994, in American Astronomical Society Meeting Abstracts #184. p. 17.02 * Liske et al. (2015) Liske J., et al., 2015, MNRAS, 452, 2087 * Longair & Lilly (1984) Longair M. S., Lilly S. J., 1984, Journal of Astrophysics and Astronomy, 5, 349 * Mahatma et al. (2018) Mahatma V. H., et al., 2018, MNRAS, 475, 4557 * Mahatma et al. (2019) Mahatma V. H., et al., 2019, A&A, 622, A13 * McConnell et al. (2016) McConnell D., et al., 2016, PASA, 33, e042 * McMullin et al. (2007) McMullin J. P., Waters B., Schiebel D., Young W., Golap K., 2007, in Shaw R. A., Hill F., Bell D. J., eds, Astronomical Society of the Pacific Conference Series Vol. 376, Astronomical Data Analysis Software and Systems XVI. p. 127 * Mullin et al. (2008) Mullin L. M., Riley J. M., Hardcastle M. J., 2008, MNRAS, 390, 595 * Murgia et al. (2011) Murgia M., et al., 2011, A&A, 526, A148 * Norris (2011) Norris R. P., 2011, Journal of Astrophysics and Astronomy, 32, 599 * Offringa et al. (2014) Offringa A. R., et al., 2014, WSClean: Widefield interferometric imager (ascl:1408.023) * Pacholczyk (1970) Pacholczyk A. G., 1970, in Series of Books in Astronomy and Astrophysics. * Parma et al. (2007) Parma P., Murgia M., de Ruiter H. R., Fanti R., Mack K. H., Govoni F., 2007, A&A, 470, 875 * Planck Collaboration et al. (2014) Planck Collaboration et al., 2014, A&A, 571, A16 * Price-Whelan et al. (2018) Price-Whelan A. M., et al., 2018, AJ, 156, 123 * Robotham et al. (2018) Robotham A. S. G., Davies L. J. M., Driver S. 
P., Koushan S., Taranu D. S., Casura S., Liske J., 2018, MNRAS, 476, 3137 * Rocca-Volmerange et al. (2004) Rocca-Volmerange B., Le Borgne D., De Breuck C., Fioc M., Moy E., 2004, A&A, 415, 931 * Roettiger et al. (1994) Roettiger K., Burns J. O., Clarke D. A., Christiansen W. A., 1994, ApJ, 421, L23 * Saripalli et al. (2012) Saripalli L., Subrahmanyan R., Thorat K., Ekers R. D., Hunstead R. W., Johnston H. M., Sadler E. M., 2012, ApJS, 199, 27 * Sault et al. (1995) Sault R. J., Teuben P. J., Wright M. C. H., 1995, in Shaw R. A., Payne H. E., Hayes J. J. E., eds, Astronomical Society of the Pacific Conference Series Vol. 77, Astronomical Data Analysis Software and Systems IV. p. 433 (arXiv:astro-ph/0612759) * Scheuer (1974) Scheuer P. A. G., 1974, MNRAS, 166, 513 * Schoenmakers et al. (2000) Schoenmakers A. P., de Bruyn A. G., Röttgering H. J. A., van der Laan H., Kaiser C. R., 2000, MNRAS, 315, 371 * Shabala et al. (2020) Shabala S. S., Jurlin N., Morganti R., Brienza M., Hardcastle M. J., Godfrey L. E. H., Krause M. G. H., Turner R. J., 2020, MNRAS, * Shulevski et al. (2012) Shulevski A., Morganti R., Oosterloo T., Struve C., 2012, A&A, 545, A91 * Shulevski et al. (2017) Shulevski A., et al., 2017, A&A, 600, A65 * Smolčić et al. (2017) Smolčić V., et al., 2017, A&A, 602, A6 * Taylor (2005) Taylor M. B., 2005, in Shopbell P., Britton M., Ebert R., eds, Astronomical Society of the Pacific Conference Series Vol. 347, Astronomical Data Analysis Software and Systems XIV. p. 29 * Thompson et al. (1980) Thompson A. R., Clark B. G., Wade C. M., Napier P. J., 1980, ApJS, 44, 151 * Tingay et al. (2013) Tingay S. J., et al., 2013, PASA, 30, e007 * Turner (2018) Turner R. J., 2018, MNRAS, 476, 2522 * Turner & Shabala (2015) Turner R. J., Shabala S. S., 2015, ApJ, 806, 59 * Turner et al. (2018a) Turner R. J., Rogers J. G., Shabala S. S., Krause M. G. H., 2018a, MNRAS, 473, 4179 * Turner et al. (2018b) Turner R. J., Shabala S. S., Krause M. G. 
H., 2018b, MNRAS, 474, 3361 * Wayth et al. (2015) Wayth R. B., et al., 2015, PASA, 32, e025 * Wayth et al. (2018) Wayth R. B., et al., 2018, PASA, 35, 33 * Wilson et al. (2011) Wilson W. E., et al., 2011, MNRAS, 416, 832 * Wright et al. (2010) Wright E. L., et al., 2010, AJ, 140, 1868 * van der Walt et al. (2011) van der Walt S., Colbert S. C., Varoquaux G., 2011, Computing in Science Engineering, 13, 22 Table 9: Column descriptions corresponding to the supplementary electronic table. The MIDAS_Name is derived using the MIDAS_RA and MIDAS_Dec columns. Entries without a GAMA_IAUID, mag_Ks, or NED_Object_Name are assigned a value of ’$-99$’. Hotspots (L1) and Hotspots (L2) refer to the number of GLASS hotspots found in each lobe, where L1 and L2 refer to the lobe and counter-lobe. We define L1 as having the smaller position angle measured in a counter-clockwise direction from the North. No. | Column name | Unit | Description ---|---|---|--- 1 | MIDAS_name | J hh:mm:ss-dd:mm:ss | Name of radio source in J2000 format. 2 | MIDAS_RA | deg | Right Ascension of MIDAS source. 3 | MIDAS_Dec | deg | Declination of MIDAS source. 4 | AGN_status | – | Activity state of AGN. 5 | FR_classification | – | Fanaroff & Riley Classification: FR-I, FR-II. 6 | LAS | arc-seconds | Largest angular size measured from EMU-ES. 7 | Hotspots (L1) | – | Number of GLASS hotspots present in L1. 8 | Hotspots (L2) | – | Number of GLASS hotspots present in L2. 9 | peak_flux_core | $\mu$Jy/beam | 5.5 GHz radio core peak flux density. 10 | err_peak_flux_core | $\mu$Jy/beam | Error on 5.5 GHz radio core peak flux density. 11 | S118 | mJy | Integrated flux density at MHz. 12 | err_S118 | mJy | Error on integrated flux density at MHz. 13 | S154 | mJy | Integrated flux density at MHz. 14 | err_S154 | mJy | Error on integrated flux density at MHz. 15 | S186 | mJy | Integrated flux density at MHz. 16 | err_S186 | mJy | Error on integrated flux density at MHz. 17 | S216 | mJy | Integrated flux density at MHz. 
18 | err_S216 | mJy | Error on integrated flux density at 216 MHz. 19 | S399 | mJy | Integrated flux density at 399 MHz. 20 | err_S399 | mJy | Error on integrated flux density at 399 MHz. 21 | S887 | mJy | Integrated flux density at 887 MHz. 22 | err_S887 | mJy | Error on integrated flux density at 887 MHz. 23 | S1400 | mJy | Integrated flux density at 1400 MHz. 24 | err_S1400 | mJy | Error on integrated flux density at 1400 MHz. 25 | S2100 | mJy | Integrated flux density at 2100 MHz. 26 | err_S2100 | mJy | Error on integrated flux density at 2100 MHz. 27 | S5500 | mJy | Integrated flux density at 5500 MHz. 28 | err_S5500 | mJy | Error on integrated flux density at 5500 MHz. 29 | S9000 | mJy | Integrated flux density at 9000 MHz. 30 | err_S9000 | mJy | Error on integrated flux density at 9000 MHz. 31 | CP | – | 216 MHz core prominence. 32 | GAMA_IAUID | – | ID of host galaxy in GAMA photometry catalogue. 33 | Host_RA | deg | Right ascension of host galaxy. 34 | Host_Dec | deg | Declination of host galaxy. 35 | z | – | Redshift. 36 | z_type | – | Redshift type: Spectroscopic, Photometric, lower-limit. 37 | mag_Ks | – | Host galaxy VIKING Ks-band magnitude. 38 | NED_Object_Name | – | Name of NED object corresponding to the GAMA_IAUID. 39 | log10_L216 | log10(W Hz-1) | 216 MHz radio power. 40 | LLS | kpc | Largest linear size.
# Novel pentaquark picture for singly heavy baryons from chiral symmetry Daiki Suenaga<EMAIL_ADDRESS>Research Center for Nuclear Physics, Osaka University, Ibaraki, 567-0048, Japan Atsushi Hosaka <EMAIL_ADDRESS>Research Center for Nuclear Physics, Osaka University, Ibaraki, 567-0048, Japan Advanced Science Research Center, Japan Atomic Energy Agency (JAEA), Tokai 319-1195, Japan ###### Abstract We propose a new type of structure for singly heavy baryons of $Qqq\bar{q}q$ in addition to the conventional one of $Qqq$. Based on chiral symmetry of the light quarks, we show that the $Qqq\bar{q}q$ baryon offers a novel picture for heavy quark spin-singlet and flavor-antisymmetric baryons. By making use of the effective Lagrangian approach, we find $\Lambda_{c}(2765)$ and $\Xi_{c}(2967)$ are mostly $Qqq\bar{q}q$ while $\Lambda_{c}(2286)$ and $\Xi_{c}(2470)$ are mostly $Qqq$. The masses of negative-parity baryons are predicted. We also derive a sum rule and the extended Goldberger-Treiman relation that the masses of the baryons satisfy. Furthermore, a mass degeneracy of parity partners of the baryons with the restoration of chiral symmetry is discussed. These are the unique features that the conventional picture of radial excitation in the quark model does not accommodate. Our findings provide useful information not only for future experiments but also for future lattice simulations on diquarks. ## I Introduction Investigation of heavy baryons has been attracting great attention with recent development of experimental observation at, e.g., KEK, LHC, and SLAC. Singly heavy baryons contain only one heavy quark, the mass of which is larger than the typical energy scale of Quantum Chromodynamics (QCD). There the heavy quark behaves not only as a color source but also as a spectator for the remaining light quarks governed by nonperturbative QCD Manohar and Wise (2000). 
Therefore, singly heavy baryons provide a useful testing ground toward the elucidation of colorful light-quark dynamics inside hadrons. In this regard, understanding of hadrons from chiral symmetry is important, since it is one of the fundamental symmetries of QCD. In fact, the dynamics of light pseudo-scalar mesons and nucleons are systematically formulated by the spontaneous breakdown of chiral symmetry. In addition, the expected mass degeneracy of chiral partners of light hadrons at extreme conditions provides a key signal of the restoration of chiral symmetry for experiments and lattice simulations Hatsuda and Kunihiro (1994); Harada and Yamawaki (2003). In this paper we discuss singly heavy baryons from the chiral symmetry point of view, which cannot be easily implemented in the quark model Copley _et al._ (1979); Yoshida _et al._ (2015). In particular, we investigate unique features that are derived from the linear representations of chiral symmetry. For baryons of light flavors, the mirror representation has been proposed for the negative-parity nucleon $N^{*}(1535)$, referred to as the mirror nucleon. This picture contrasts with the quark model description of orbital excitation for $N^{*}(1535)$. In the chiral representation approach, not only the spectroscopy but also the properties at extreme conditions, such as changes in masses and a degeneracy reflecting the chiral partner structure, can be studied Detar and Kunihiro (1989); Jido _et al._ (2001); Gallas _et al._ (2010); Yamazaki and Harada (2019). The latter has been strongly suggested by lattice simulations Detar and Kogut (1987); Aarts _et al._ (2017). The quark content of the mirror nucleon is understood as an exotic pentaquark state ($qqq\bar{q}q$). Figure 1: A sketch of the conventional picture (left) and the novel picture proposed in this paper (right) for describing singly heavy baryons.
Turning to the heavy hadrons, the linear representations of chiral symmetry have been studied in various contexts Nowak _et al._ (1993); Bardeen and Hill (1994); Dmitrasinovic (2012); Ma and Harada (2018); Kawakami and Harada (2019); Dmitrašinović and Chen (2020). Recently $\Xi_{c}(2967)$ was studied experimentally and its spin and parity were determined to be $J^{P}=\frac{1}{2}^{+}$ Moon _et al._ (2021). The properties of $\Lambda_{c}(2765)$ are similar to those of $\Xi_{c}(2967)$, so that the spin and parity of $\Lambda_{c}(2765)$ are likely to be $J^{P}=\frac{1}{2}^{+}$ Arifi _et al._ (2020a). We can safely assume that both the ground state $\Lambda_{c}(2286)$ [$\Xi_{c}(2470)$] and the excited state $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$] are heavy quark spin-singlet and flavor-antisymmetric, since no spin partner baryons have been observed Zyla _et al._ (2020), i.e., the total angular momentum of the light quarks in these heavy baryons is zero: $j_{l}=0$. The recently observed $\Lambda_{b}(6072)$ baryon also seems to share the same properties Aaij _et al._ (2020). Within the conventional quark model, $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$] can be understood as a radial excitation. In this paper, as an analogue of the mirror nucleon for $N^{*}(1535)$, we propose a new description in which these heavy baryons are described by the mirror representation of chiral symmetry. As anticipated from the previous paragraph, the mirror representation for heavy baryons is realized by exotic multi-quark contents. Due to the presence of one heavy quark, it is achieved by the mirror diquark, which is actually a tetraquark $d^{\prime}\sim qq\bar{q}q$ as displayed in the right panel of Fig. 1. The significance of the mirror diquark is implied when we implement the chiral representation for diquarks. Introduction of the mirror diquark enables us to understand $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$] as well as $\Lambda_{c}(2286)$ [$\Xi_{c}(2470)$] in a unified way based on chiral symmetry.
The novel state shown in the right panel of Fig. 1 is distinct from the hadronic molecule state Chen _et al._ (2016); Guo _et al._ (2018), but should be regarded as a compact pentaquark one because $d^{\prime}$ is a colorful object binding the remaining heavy quark strongly. In the quark model Copley _et al._ (1979); Yoshida _et al._ (2015), $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$] is considered as an analog of the Roper resonance $N(1440)$ Roper (1964), which has been studied as a radial excitation Burkert and Roberts (2019).111In the quark model approach, the properties of the Roper-like baryon $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$], such as its mass and decay width, can be reproduced by employing the $\,{}^{3}P_{0}$ model Chen _et al._ (2017), or by including relativistic corrections Arifi _et al._ (2021). In contrast, in our approach, by introducing the mirror diquark with chiral symmetry, we will find various unique features that cannot be seen in the quark model. Namely, $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$] and $\Lambda_{c}(2286)$ [$\Xi_{c}(2470)$] are described as superpositions of the three-quark and pentaquark states. Moreover, general relations satisfied by the masses of the baryons, such as the sum rule and the extended Goldberger-Treiman relation, are derived. Furthermore, our approach gives a natural explanation for the strong suppression of the direct decay process of $\Lambda_{c}(2765)$ by two-pion emission Arifi _et al._ (2020a). In addition to those findings, our present approach can provide useful information on hadron properties such as a mass degeneracy of the parity partners Suenaga _et al._ (2015); Sasaki (2014); Suenaga _et al._ (2017); Buchheim _et al._ (2018); Ishikawa _et al._ (2019a); Montaña _et al._ (2020) at extreme conditions, e.g., finite temperature and/or density where chiral symmetry is (partially) restored. This paper is organized as follows. In Sec.
II we present an effective Lagrangian for the heavy baryons formed by the conventional and mirror diquarks. In Sec. III we determine the parameters of our model and show the resultant mass spectrum of the baryons. In Sec. IV we discuss the mass degeneracy of the parity partners with the restoration of chiral symmetry. In Sec. V we conclude our present work. ## II Model The interpolating fields of the conventional and mirror diquarks contributing to $\Lambda_{c}(2286)$ [$\Xi_{c}(2470)$] and $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$] are $\displaystyle(d_{R})^{a}_{i}$ $\displaystyle\sim$ $\displaystyle\epsilon_{ijk}\epsilon^{abc}(q_{R}^{T})^{b}_{j}C(q_{R})^{c}_{k}\ ,$ $\displaystyle(d_{L})^{a}_{i}$ $\displaystyle\sim$ $\displaystyle\epsilon_{ijk}\epsilon^{abc}(q_{L}^{T})^{b}_{j}C(q_{L})^{c}_{k}\ ,$ $\displaystyle(d^{\prime}_{R})_{i}^{a}$ $\displaystyle\sim$ $\displaystyle\epsilon_{jkl}\epsilon^{abc}(q_{R}^{T})^{b}_{k}C(q_{R})^{c}_{l}[(\bar{q}_{L})^{d}_{i}(q_{R})^{d}_{j}]\ ,$ $\displaystyle(d^{\prime}_{L})_{i}^{a}$ $\displaystyle\sim$ $\displaystyle\epsilon_{jkl}\epsilon^{abc}(q_{L}^{T})^{b}_{k}C(q_{L})^{c}_{l}[(\bar{q}_{R})^{d}_{i}(q_{L})^{d}_{j}]\ ,$ (1) where the right- (left-)handed light quark $q_{R(L)}$ is defined by $q_{R(L)}=\frac{1\pm\gamma_{5}}{2}q$ with $q=(u,d,s)^{T}$. The subscript “$i,j,\cdots$” and the superscript “$a,b,\cdots$” indicate the chiral and color indices, respectively. We have introduced scalar diquarks whose total angular momentum is zero: $j_{l}=0$, here because $\Lambda_{c}(2286)$ [$\Xi_{c}(2470)$] and $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$] are heavy quark spin-singlet. 
Equation (1) shows that $d_{R}$, $d_{L}$, $d_{R}^{\prime}$, and $d_{L}^{\prime}$ belong to the chiral representation of $\displaystyle d_{R}\sim({\bm{1}},\bar{\bm{3}})\ ,\ \ d_{L}\sim(\bar{\bm{3}},{\bm{1}})\ ,$ $\displaystyle d^{\prime}_{R}\sim(\bar{\bm{3}},{\bm{1}})\ ,\ \ d^{\prime}_{L}\sim({\bm{1}},\bar{\bm{3}})\ .$ (2) Therefore, singly heavy baryons formed by a heavy quark $Q$ and a diquark in Eq. (1) transform as $\displaystyle B_{R}\to B_{R}g^{\dagger}_{R}\ ,\ \ B_{L}\to B_{L}g_{L}^{\dagger}\ ,$ $\displaystyle B^{\prime}_{R}\to B^{\prime}_{R}g^{\dagger}_{L}\ ,\ \ B^{\prime}_{L}\to B^{\prime}_{L}g_{R}^{\dagger}\ ,$ (3) with $g_{R(L)}\in SU(3)_{R(L)}$, where $B_{R}$, $B_{L}$, $B^{\prime}_{R}$, and $B_{L}^{\prime}$ are given by $\displaystyle B_{R,i}\sim Q^{a}(d_{R})_{i}^{a}\ ,\ \ B_{L,i}\sim Q^{a}(d_{L})_{i}^{a}\ ,$ $\displaystyle B^{\prime}_{R,i}\sim Q^{a}(d^{\prime}_{R})_{i}^{a}\ ,\ \ B^{\prime}_{L,i}\sim Q^{a}(d^{\prime}_{L})_{i}^{a}\ ,$ (4) respectively. The interpolating fields in Eq. (4) imply that the heavy quark only plays a role of a spectator for light-quark degrees of freedom. Heavy baryon fields in Eq. 
(4) allow us to construct an $SU(3)_{L}\times SU(3)_{R}$ symmetric effective Lagrangian within the heavy baryon effective theory (HBET) as $\displaystyle{\cal L}_{\rm eff}$ $\displaystyle=$ $\displaystyle\sum_{\chi=L,R}\Big{(}\bar{B}_{\chi}iv\cdot\partial{B}_{\chi}-\mu_{1}\bar{B}_{\chi}B_{\chi}$ (5) $\displaystyle+$ $\displaystyle\bar{B}^{\prime}_{\chi}iv\cdot\partial B^{\prime}_{\chi}-\mu_{2}\bar{B}^{\prime}_{\chi}B^{\prime}_{\chi}\Big{)}$ $\displaystyle-$ $\displaystyle\mu_{3}(\bar{B}_{R}B^{\prime}_{L}+\bar{B}^{\prime}_{L}B_{R}+\bar{B}_{L}B^{\prime}_{R}+\bar{B}^{\prime}_{R}{B}_{L})$ $\displaystyle-$ $\displaystyle g_{1}(\bar{B}_{L}\Sigma^{*}B_{R}+\bar{B}_{R}\Sigma^{T}B_{L})$ $\displaystyle-$ $\displaystyle g_{2}(\bar{B}^{\prime}_{R}\Sigma^{*}B^{\prime}_{L}+\bar{B}^{\prime}_{L}\Sigma^{T}B^{\prime}_{R})$ $\displaystyle-$ $\displaystyle g_{3}(\bar{B}^{\prime}_{R}\Sigma^{*}B_{R}+\bar{B}_{L}\Sigma^{*}B^{\prime}_{L}+{\rm h.c.})\ .$ Here $\Sigma=S+iP$ consists of scalar ($S$) and pseudo-scalar ($P$) meson nonets and transforms as $\Sigma\to g_{L}\Sigma g_{R}^{\dagger}$. Also, $v^{\mu}$ is the velocity of the heavy baryons. In Lagrangian (5), the $\mu_{1}$, $\mu_{2}$, and $\mu_{3}$ terms are responsible for the part of the heavy baryon masses that is of order $\Lambda_{\rm QCD}$, since a common mass parameter $m_{B}$ for all the heavy baryons is subtracted in defining the HBET. The remaining $g_{1}$, $g_{2}$, and $g_{3}$ terms are for the interactions between light quarks and light mesons. These terms also contribute to the masses of the heavy baryons when chiral symmetry is spontaneously broken. The terms proportional to $\mu_{3}$, $g_{1}$, and $g_{2}$ in Eq. (5) break the $U(1)_{A}$ axial symmetry, as allowed by the quantum anomaly. We note that the mass term of $\bar{B}^{(\prime)}_{R}B_{R}^{(\prime)}+\bar{B}^{(\prime)}_{L}B_{L}^{(\prime)}$ is allowed in Eq.
(5), because the spinor structure of the baryons is determined by the heavy quark, which is irrelevant to the chiral representation of the diquarks. The physical baryons, as parity eigenstates with eigenvalues $\pm$, are given by $\displaystyle B_{\pm}$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2}}(B_{R}\mp B_{L})\ ,$ $\displaystyle B_{\pm}^{\prime}$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2}}(B_{L}^{\prime}\mp B_{R}^{\prime})\ ,$ (6) with $B^{(\prime)}_{R}\leftrightarrow-B^{(\prime)}_{L}$ under the parity transformation Harada _et al._ (2020). Under the spontaneous breakdown of chiral symmetry, $\Sigma$ acquires its vacuum expectation values (VEVs) as $\langle\Sigma\rangle={\rm diag}(\sigma_{q},\sigma_{q},\sigma_{s})$. When we ignore the $u$ and $d$ quark masses, $\sigma_{q}$ becomes identical to the pion decay constant. We will use $\sigma_{q}=93$ MeV as one of the inputs for the model parameters, whereas $\sigma_{s}$ is not determined by the decay constants due to corrections from the non-negligible $s$ quark mass $m_{s}\sim 100$ MeV.
By substituting the parity eigenstates $B_{\pm}$ and $B_{\pm}^{\prime}$ into the Lagrangian (5), and by reading off the mass terms together with the VEVs $\langle\Sigma\rangle$, mass eigenvalues of the heavy baryons are obtained as $\displaystyle M(B_{+,i}^{H/L})$ $\displaystyle=$ $\displaystyle m_{B}+\frac{1}{2}\Big{[}m_{+,i}+m_{+,i}^{\prime}$ $\displaystyle\pm\sqrt{(m_{+,i}-m_{+,i}^{\prime})^{2}+4\tilde{m}^{2}_{+,i}}\ \Big{]}\ ,$ $\displaystyle M(B_{-,i}^{H/L})$ $\displaystyle=$ $\displaystyle m_{B}+\frac{1}{2}\Big{[}m_{-,i}+m_{-,i}^{\prime}$ (7) $\displaystyle\pm\sqrt{(m_{-,i}-m_{-,i}^{\prime})^{2}+4\tilde{m}^{2}_{-,i}}\ \Big{]}\ ,$ with $\displaystyle m_{\pm,i}$ $\displaystyle=$ $\displaystyle\mu_{1}\mp g_{1}\sigma_{i}\ ,$ $\displaystyle m_{\pm,i}^{\prime}$ $\displaystyle=$ $\displaystyle\mu_{2}\mp g_{2}\sigma_{i}\ ,$ $\displaystyle\tilde{m}_{\pm,i}$ $\displaystyle=$ $\displaystyle\mu_{3}\mp g_{3}\sigma_{i}\ .$ (8) Here $i=1,2,3$ stands for the flavor index, where $i=1,2$ corresponds to $\Xi_{c}$’s while $i=3$ to $\Lambda_{c}$’s. In Eq. (7) we have introduced a common heavy mass parameter $m_{B}$ to define the HBET Manohar and Wise (2000). The choice of $m_{B}$ is arbitrary, thus we take the average mass of all the heavy baryons, which will be explicitly given later. For the VEVs, we have defined $\sigma_{i=1}=\sigma_{i=2}\equiv\sigma_{q}$ and $\sigma_{i=3}\equiv\sigma_{s}$. The superscript $H$ $(L)$ represents the higher (lower) mass eigenstate, i.e. $H$ ($L$) corresponds to the “$+$” (“$-$”) sign in front of the square root on the right-hand side. In obtaining Eq.
(7), we have defined mass eigenstates by $\displaystyle\left(\begin{array}[]{c}B_{\pm,i}^{L}\\\ B_{\pm,i}^{H}\\\ \end{array}\right)=\left(\begin{array}[]{cc}\cos\theta_{B_{\pm,i}}&\sin\theta_{B_{\pm,i}}\\\ -\sin\theta_{B_{\pm,i}}&\cos\theta_{B_{\pm,i}}\\\ \end{array}\right)\left(\begin{array}[]{c}B_{\pm,i}\\\ B_{\pm,i}^{\prime}\\\ \end{array}\right)\ ,$ (15) with the mixing angle satisfying $\tan 2\theta_{B_{\pm,i}}=(2\tilde{m}_{\pm,i})/(m_{\pm,i}-m^{\prime}_{\pm,i})$. Hence the physical states are superpositions of three-quark ($B_{\pm,i}$) and pentaquark ($B_{\pm,i}^{\prime}$) states as depicted in Fig. 1. This is the novel picture of singly heavy baryons proposed in this paper. ## III Mass spectrum To start with, we explain how the remaining parameters $\mu_{1}$, $\mu_{2}$, $\mu_{3}$, $g_{1}$, $g_{2}$, $g_{3}$, and $\sigma_{s}$ are fixed. First, we employ masses of experimentally observed states of $J^{P}=\frac{1}{2}^{+}$; $\Lambda_{c}(2286)$, $\Lambda_{c}(2765)$, $\Xi_{c}(2470)$, and $\Xi_{c}(2967)$ are used as inputs Zyla _et al._ (2020): $\displaystyle M(B_{+,i=3}^{L})=2286\,{\rm MeV}\ ,\ M(B_{+,i=3}^{H})=2765\,{\rm MeV}\ ,$ $\displaystyle M(B_{+,i=1,2}^{L})=2470\,{\rm MeV}\ ,\ M(B_{+,i=1,2}^{H})=2967\,{\rm MeV}\ .$ Next, we use the masses of the conventional diquarks, which are considered to be decoupled from the mirror ones. These masses were estimated by lattice QCD Bi _et al._ (2016) as $\displaystyle M(d_{+,i=1,2})=906\,{\rm MeV}\ ,\ M(d_{-,i=1,2})=1142\,{\rm MeV}\ ,$ $\displaystyle M(d_{+,i=3})=725\,{\rm MeV}\ ,\ M(d_{-,i=3})=1265\,{\rm MeV}\ .$ (17) It should be noted that the last one in Eq. (17) is estimated by a chiral effective theory Harada _et al._ (2020) together with the simulation Bi _et al._ (2016). Here “$\pm$” and “$i$” stand for the parity and flavor indices of the diquark, respectively.
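As a cross-check of the algebra, the eigenvalues in Eq. (7) are exactly those of the symmetric $2\times 2$ matrix built from Eq. (8) that mixes $B_{\pm,i}$ and $B^{\prime}_{\pm,i}$. A minimal numerical sketch (Python; the function and variable names are ours, not the paper's) verifies that the closed form satisfies the characteristic equation of that matrix:

```python
import math

def mass_terms(mu1, mu2, mu3, g1, g2, g3, sigma, parity):
    # Eq. (8): m_{±,i} = mu1 ∓ g1*sigma_i, etc.; the upper sign is parity '+'
    s = -1.0 if parity == '+' else 1.0
    return mu1 + s * g1 * sigma, mu2 + s * g2 * sigma, mu3 + s * g3 * sigma

def baryon_masses(mu1, mu2, mu3, g1, g2, g3, sigma, parity, m_B=0.0):
    """Closed-form eigenvalues of Eq. (7), returned as (M_H, M_L)."""
    m, mp, mt = mass_terms(mu1, mu2, mu3, g1, g2, g3, sigma, parity)
    root = math.hypot(m - mp, 2.0 * mt)  # sqrt((m - m')^2 + 4 m~^2)
    return m_B + 0.5 * (m + mp + root), m_B + 0.5 * (m + mp - root)
```

Both returned values are roots of $(m-\lambda)(m^{\prime}-\lambda)-\tilde{m}^{2}=0$, i.e., eigenvalues of the matrix with diagonal $(m, m^{\prime})$ and off-diagonal $\tilde{m}$, so the $H/L$ labels simply select the two branches of the square root.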
These diquark masses are then related to the masses of baryons of three-quark states $B_{\pm,i}$ with the pentaquark states switched off: $B^{\prime}_{\pm,i}=0$ in the Lagrangian (5), leading to $M(B_{\pm,i})=m_{B}+m_{\pm,i}$. The mass difference of the heavy baryons may equal that of the diquarks, i.e., $\displaystyle M(B_{-,i})-M(B_{+,i})=2g_{1}\sigma_{i}=M(d_{-,i})-M(d_{+,i})\ .$ (18) This relation can be used to fix $g_{1}$ and $\sigma_{s}$. In addition to the above inputs, we employ a quark model prediction as a useful reference for the demonstration. Namely, we take the mass of the lightest heavy quark spin-singlet $\Lambda_{c}$ baryon carrying $J^{P}=\frac{1}{2}^{-}$ predicted in Ref. Yoshida _et al._ (2015) as another input: $M(B_{-,i=3}^{L})=2890$ MeV. We note that the observed $J^{P}=\frac{1}{2}^{-}$ baryons $\Lambda_{c}(2595)$ and $\Xi_{c}(2790)$ are heavy quark spin-doublets, which are not treated in this paper. Figure 2: Mass spectrum of $\Lambda_{c}$’s and $\Xi_{c}$’s with $J^{P}=\frac{1}{2}^{+},\frac{1}{2}^{-}$ obtained in our model. The asterisk ($*$) stands for the inputs. The ratios indicated under the bars correspond to the components of three-quark and pentaquark states in each baryon: $Qqq:Qqq\bar{q}q$, in which the upper and lower ratios correspond to the parameter sets (I) and (II), respectively. | $\mu_{1}$ [MeV] | $\mu_{2}$ [MeV] | $\mu_{3}$ [MeV] | $g_{1}$ | $g_{2}$ | $g_{3}$ | $\sigma_{q}$ [MeV] | $\sigma_{s}$ [MeV] ---|---|---|---|---|---|---|---|--- Set (I) | $-247$ | $247$ | $\mp 91.0$ | $1.27$ | 1.94 | $\pm 0.34$ | $93^{*}$ | 212 Set (II) | $94.1$ | $-94.1$ | $\pm 246$ | $1.27$ | 1.94 | $\pm 0.34$ | $93^{*}$ | 212 Table 1: Two parameter sets (I) and (II). The mass parameter $m_{B}$ is fixed to be $m_{B}=2868$ MeV for both sets. The asterisk ($\ast$) stands for the inputs.
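As a rough numerical illustration (Python; variable names are ours), applying Eq. (18) sector by sector to the lattice diquark masses of Eq. (17) reproduces the $g_{1}$ and $\sigma_{s}$ values quoted in Table 1 to within rounding:

```python
# Eq. (18): M(d_-) - M(d_+) = 2 * g1 * sigma_i, applied in each flavor sector.
sigma_q = 93.0                      # MeV; pion decay constant, an input

# Lattice diquark mass splittings from Eq. (17), in MeV
split_i12 = 1142.0 - 906.0          # i = 1,2 (Xi_c) sector: fixes g1
split_i3  = 1265.0 - 725.0          # i = 3 (Lambda_c) sector: fixes sigma_s

g1 = split_i12 / (2.0 * sigma_q)    # close to the Table 1 value 1.27
sigma_s = split_i3 / (2.0 * g1)     # close to the Table 1 value 212 MeV
```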
| $\theta_{B_{+,i=1,2}}$ | $\theta_{B_{-,i=1,2}}$ | $\theta_{B_{+,i=3}}$ | $\theta_{B_{-,i=3}}$ ---|---|---|---|--- Set (I) | $\pm 14.8^{\circ}$ | $\pm 6.01^{\circ}$ | $\pm 21.4^{\circ}$ | $\pm 1.67^{\circ}$ Set (II) | $\pm 29.9^{\circ}$ | $\pm 38.6^{\circ}$ | $\pm 23.2^{\circ}$ | $\pm 43.0^{\circ}$ Table 2: The fixed mixing angles for parameter sets (I) and (II), with the sign corresponding to the ones in the text. Note that $\theta_{B_{\pm,i=3}}$ and $\theta_{B_{\pm,i=1,2}}$ stand for the mixing angles of $\Lambda_{c}$ baryons and those of $\Xi_{c}$ baryons, respectively. Now we can fix all the parameters. Since the mass eigenvalues in Eq. (7) include square roots, the coupled equations yield four solutions for the parameter sets. Physically these four sets are classified into two sets (I) and (II) as displayed in Table 1. Both the parameter sets (I) and (II) predict the remaining negative-parity baryon masses as $M(B_{-,i=1,2}^{L})=2732$ MeV, $M(B_{-,i=1,2}^{H})=3302$ MeV, and $M(B_{-,i=3}^{H})=3529$ MeV. We show the resultant mass spectrum of the heavy baryons in Fig. 2 where the asterisk ($*$) stands for the inputs. Also, the mixing angles defined in Eq. (15) are determined as shown in Table 2, where $\theta_{B_{\pm,i=3}}$ and $\theta_{B_{\pm,i=1,2}}$ stand for the mixing angles of $\Lambda_{c}$’s and those of $\Xi_{c}$’s, respectively. Table 2 indicates that, for $J^{P}=\frac{1}{2}^{+}$ baryons the lower state $\Lambda_{c}(2286)$ [$\Xi_{c}(2470)$] is dominated by $Qqq$ while the higher one $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$] by $Qqq\bar{q}q$ for both the parameter sets. On the other hand, for $J^{P}=\frac{1}{2}^{-}$ baryons the ratio is largely dependent on the parameters. The ratio of the $Qqq$ and $Qqq\bar{q}q$ components for each baryon is also shown under the bars in Fig. 2, in which the upper and lower ratios correspond to the parameter sets (I) and (II), respectively. The ratio is estimated by the square of each coefficient in Eq. (15), i.e. 
$\cos^{2}\theta_{B_{\pm,i}}$ or $\sin^{2}\theta_{B_{\pm,i}}$. --- Figure 3: The left panel shows the $\sigma_{s}$ dependence of $M(B_{+,i=3}^{L})$, $M(B_{-,i=3}^{L})$, $M(B_{+,i=3}^{H})$, and $M(B_{-,i=3}^{H})$, while the right one shows the $\sigma_{q}$ dependence of $M(B_{+,i=1,2}^{L})$, $M(B_{-,i=1,2}^{L})$, $M(B_{+,i=1,2}^{H})$, and $M(B_{-,i=1,2}^{H})$. The results are identical for the parameter sets (I) and (II) in Table 1. The $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$] baryon has been found to be mainly $Qqq\bar{q}q$ with $J^{P}=\frac{1}{2}^{+}$, where the $\bar{q}q$ constituent in the mirror diquark $qq\bar{q}q$ requires a $P$-wave excitation. Although adding such a $P$-wave $\bar{q}q$ pair costs an energy of order 1 GeV in the quark model, the above analysis shows that the mirror diquark based on chiral symmetry costs only about 0.5 GeV to form $\Lambda_{c}(2765)$ [$\Xi_{c}(2967)$]. This finding is similar to the small mass of the light scalar $\bar{q}q$ meson in the chiral model. One of the most important consequences of our model is the sum rule of the masses: $\displaystyle\sum_{p=\pm,n=H,L}M(B_{p,i=1,2}^{n})=\sum_{p=\pm,n=H,L}M(B_{p,i=3}^{n})\ ,$ (19) which can be derived from Eq. (7). Namely, the sum of the masses of the four $\Lambda_{c}$’s coincides with that of the four $\Xi_{c}$’s. The mass formula (7) also yields the extended Goldberger-Treiman relation $\displaystyle\sum_{n=H,L}M(B_{-,i}^{n})-\sum_{n=H,L}M(B_{+,i}^{n})=2(g_{1}+g_{2})\sigma_{i}$ (20) for each $i$, which gives a constraint on the mass difference between parity partners and the coupling constant of one pion (kaon) emission. In deriving Eq. (20), the higher and lower masses $M^{H}_{p,i}$ and $M^{L}_{p,i}$ have been summed to cancel the square roots in Eq. (7). At the end of this section we give comments on decay properties of the excited baryons in our present model.
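These relations are easy to verify numerically. The sketch below (Python; names ours) evaluates Eq. (7) with the Table 1 set (I) values, taking the $\mu_{3}=-91.0$, $g_{3}=+0.34$ branch of the quoted sign ambiguity; with these rounded inputs it reproduces the spectrum of Fig. 2 to a few MeV, and it satisfies the sum rule (19) and the extended Goldberger-Treiman relation (20) essentially exactly, since the square roots cancel in the sums:

```python
import math

mu1, mu2, mu3 = -247.0, 247.0, -91.0   # set (I), MeV
g1, g2, g3 = 1.27, 1.94, 0.34
sigma_q, sigma_s, m_B = 93.0, 212.0, 2868.0

def pair(sigma, parity):
    """(M_H, M_L) of Eq. (7) for one flavor sector and parity."""
    s = -1.0 if parity == '+' else 1.0          # sign convention of Eq. (8)
    m, mp, mt = mu1 + s*g1*sigma, mu2 + s*g2*sigma, mu3 + s*g3*sigma
    r = math.hypot(m - mp, 2.0*mt)
    return m_B + 0.5*(m + mp + r), m_B + 0.5*(m + mp - r)

lam = pair(sigma_s, '+') + pair(sigma_s, '-')   # four Lambda_c masses (i = 3)
xi  = pair(sigma_q, '+') + pair(sigma_q, '-')   # four Xi_c masses (i = 1, 2)

sum_rule_gap = sum(lam) - sum(xi)               # Eq. (19): should vanish
gt_lam = sum(pair(sigma_s, '-')) - sum(pair(sigma_s, '+'))   # Eq. (20), i = 3
gt_xi  = sum(pair(sigma_q, '-')) - sum(pair(sigma_q, '+'))   # Eq. (20), i = 1,2
```

Note that `sum_rule_gap` vanishes identically here because, after summing over parities and $H/L$, Eq. (19) reduces to an identity in $\mu_{1}+\mu_{2}$ and $m_{B}$, independent of the VEVs.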
In this paper, we have investigated mostly the mass spectrum of the baryons as a first step toward understanding the importance of the newly introduced pentaquark from chiral symmetry. In addition to the masses, our Lagrangian (5) also yields various couplings among the light mesons and heavy baryons. Namely, decay properties of the excited baryons can also be studied. In this case, the ground-state $\Sigma_{c}$ baryons are necessary since the excited baryons can decay into them. Based on chiral symmetry, the $\Sigma_{c}$ baryons are straightforwardly included on top of our present model Harada _et al._ (2020).222We expect that inclusion of a pentaquark component for the ground-state $\Sigma_{c}$ baryons is not necessary since they are not excited states. By comparing our calculation given by such a hybrid model with the experimental data on decays, e.g., the $\Lambda_{c}(2765)$ decay widths, some of the model parameters are expected to be fixed or constrained such that the uncertainty of our present model is narrowed. Moreover, shedding light on the decay properties of the excited baryons predicted in this paper will be useful for future heavy baryon experiments aiming to observe them. In addition, detailed understanding of decay properties of the excited baryons, especially from the viewpoint of the Goldberger-Treiman relation (20) and the $U(1)_{A}$ anomaly Kawakami _et al._ (2020), leads to further elucidation of properties of exotic constituents from symmetry aspects of QCD. Investigation of the decays is beyond the scope of this paper and we leave it for future publication. ## IV Baryon masses with the restoration of chiral symmetry The symmetry relations such as the sum rule (19) and the extended Goldberger-Treiman relation (20) provide useful information on the chiral symmetry properties of the baryons for future experiments and lattice simulations.
In addition, as one of the most important advantages of employing the present chiral effective model, we can examine the properties of baryons at extreme conditions such as finite temperature/density where chiral symmetry tends to be restored. The mirror diquark can be regarded as an analogue of the mirror nucleon Detar and Kunihiro (1989); Jido _et al._ (2001); Gallas _et al._ (2010); Yamazaki and Harada (2019), which gives rise to the mass degeneracy of parity partners at the chiral restoration point in the nucleon sector Detar and Kogut (1987); Zschiesche _et al._ (2007); Motohiro _et al._ (2015); Aarts _et al._ (2017); Suenaga (2018); Ishikawa _et al._ (2019b). Thus, we can expect that a similar mass degeneracy of the partners of the singly heavy baryons arises. In order to see the above mass degeneracy, we study mass modifications of the baryons by changing the VEVs $\sigma_{q}$ and $\sigma_{s}$ in the mass formulae (7). In Fig. 3 we show the VEV dependence of the baryon masses. The left panel of this figure shows the $\sigma_{s}$ dependence of $M(B_{+,i=3}^{L})$, $M(B_{-,i=3}^{L})$, $M(B_{+,i=3}^{H})$, and $M(B_{-,i=3}^{H})$, while the right one shows the $\sigma_{q}$ dependence of $M(B_{+,i=1,2}^{L})$, $M(B_{-,i=1,2}^{L})$, $M(B_{+,i=1,2}^{H})$, and $M(B_{-,i=1,2}^{H})$. It should be noted that the results are identical for the parameter sets (I) and (II) in Table 1. Figure 3 shows that $\displaystyle M(B_{+,i}^{L})=M(B_{-,i}^{L})\ ,$ $\displaystyle M(B_{+,i}^{H})=M(B_{-,i}^{H})$ (21) hold for $i=1,2$ and $i=3$, respectively, at the chiral restoration point denoted by $\sigma_{q}=0$ and $\sigma_{s}=0$. Therefore, our model predicts that $B_{+,i}^{L}$ and $B_{-,i}^{L}$ are the parity partners as in the nucleon sector. Similarly, $B_{+,i}^{H}$ and $B_{-,i}^{H}$ are the partners.
Even when chiral symmetry is restored, the baryons are still interpreted as superpositions of a three-quark state and a pentaquark state, due to the presence of $\mu_{3}$, which is chiral invariant. The role of the mass parameter $\mu_{3}$ is similar to that of the so-called chiral invariant mass $M_{0}$ for the naive and mirror nucleons Detar and Kunihiro (1989); Jido _et al._ (2001). Namely, $\mu_{3}$ and $M_{0}$ are expected to share common fundamental properties. The mass degeneracy demonstrated above, originating from the restoration of chiral symmetry, is expected to be realized at extreme conditions such as finite temperature/density. These environments are provided in heavy-ion collisions (HICs) and lattice simulations. Further studies on the properties of the baryons at such extreme conditions from the viewpoint of the restoration of chiral symmetry are left for future work. ## V Conclusions In this paper, we have proposed a new type of light-quark degrees of freedom, named the mirror diquark, in addition to the conventional diquark, to explain $\Lambda_{c}(2765)$ [$\Xi_{c}(2970)$] and $\Lambda_{c}(2286)$ [$\Xi_{c}(2470)$] in a unified way based on chiral symmetry. Accordingly, we have obtained the masses of negative-parity as well as positive-parity baryons in Eq. (7), and furthermore have derived the unique relations such as the sum rule (19) and the extended Goldberger-Treiman relation (20). Moreover, our model can naturally explain the strong suppression of the direct decay process of $\Lambda_{c}(2765)$ by two-pion emission Arifi _et al._ (2020a), because the $\sigma$-$\Lambda_{c}(2765)$-$\Lambda_{c}(2286)$ ($\sigma$ is the light scalar meson) coupling disappears after the diagonalization in the Lagrangian (5). The mass spectrum of singly heavy baryons predicted in this paper will provide useful information for future experiments including HICs.
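As a quick numerical illustration of Eq. (21) (Python; names ours, set (I) parameters), sending $\sigma_{q},\sigma_{s}\to 0$ in the mass formula (7) degenerates the parity partners exactly, while the $H$–$L$ splitting driven by the chiral-invariant $\mu_{3}$ and by $\mu_{1}-\mu_{2}$ survives:

```python
import math

mu1, mu2, mu3 = -247.0, 247.0, -91.0   # set (I); only the mu's survive at sigma = 0
g1, g2, g3 = 1.27, 1.94, 0.34
m_B = 2868.0

def pair(sigma, parity):
    """(M_H, M_L) of Eq. (7) for a given VEV and parity."""
    s = -1.0 if parity == '+' else 1.0
    m, mp, mt = mu1 + s*g1*sigma, mu2 + s*g2*sigma, mu3 + s*g3*sigma
    r = math.hypot(m - mp, 2.0*mt)
    return m_B + 0.5*(m + mp + r), m_B + 0.5*(m + mp - r)

Hp, Lp = pair(0.0, '+')   # restored phase: sigma = 0
Hm, Lm = pair(0.0, '-')

parity_gap = max(abs(Hp - Hm), abs(Lp - Lm))   # -> 0: parity partners degenerate
mixing_gap = Hp - Lp   # stays large (~0.5 GeV), set by mu3 and mu1 - mu2
```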
Moreover, our finding that the mirror diquark plays a significant role is expected to provide a new direction for future lattice simulations on diquarks Hess _et al._ (1998); Alexandrou _et al._ (2006); Babich _et al._ (2007); Bi _et al._ (2016). For example, the importance of the mirror diquark would be tested by examining the correlators of the conventional $qq$ and mirror $qq\bar{q}q$ diquarks, similarly to the simulation of light scalar mesons with $\bar{q}q$ and $\bar{q}q\bar{q}q$ states Wakayama _et al._ (2015). The development of the diquark chiral effective theory Hong _et al._ (2004) including the mirror diquark would be of interest. We also expect that the mirror diquark with chiral symmetry provides a new aspect for the understanding of the Roper resonance $N(1440)$ with the unique relations similar to Eqs. (19) and (20). In our novel picture, the mirror diquark is the main constituent of $\Lambda_{c}(2765)$ [$\Xi_{c}(2970)$], while conventionally such a baryon is regarded as a radial excitation ($2S$ state) in the quark model Arifi _et al._ (2020b). Namely, an additional mixing of the $2S$ state with the pentaquark for $\Lambda_{c}(2765)$ [$\Xi_{c}(2970)$] is expected.333In addition, another constituent provided by bi-local interpolating fields Dmitrasinovic and Chen (2011); Chen and Dmitrašinović (2013) may enter. Therefore, checking the unique relations Eqs. (19) and (20), and the mass degeneracy of parity partners or its precursory behaviors, in future experiments or lattice simulations would be desirable for a better understanding of hadrons from chiral symmetry. ## Acknowledgement We thank Ahmad Jafar Arifi for fruitful discussions and comments. We also thank Veljko Dmitrasinovic for useful comments. A. H. is supported in part by Grants-in-Aid for Scientific Research, Grant No. 17K05441(C), and by Grants-in-Aid for Scientific Research on Innovative Areas (No. 18H05407). ## References * Manohar and Wise (2000) A. V. Manohar and M. B.
Wise, _Heavy Quark Physics_, Cambridge Monographs on Particle Physics, Nuclear Physics and Cosmology (Cambridge University Press, 2000). * Hatsuda and Kunihiro (1994) T. Hatsuda and T. Kunihiro, Phys. Rept. 247, 221 (1994), arXiv:hep-ph/9401310 [hep-ph] . * Harada and Yamawaki (2003) M. Harada and K. Yamawaki, Phys. Rept. 381, 1 (2003), arXiv:hep-ph/0302103 . * Copley _et al._ (1979) L. A. Copley, N. Isgur, and G. Karl, Phys. Rev. D 20, 768 (1979), [Erratum: Phys.Rev.D 23, 817 (1981)]. * Yoshida _et al._ (2015) T. Yoshida, E. Hiyama, A. Hosaka, M. Oka, and K. Sadato, Phys. Rev. D 92, 114029 (2015), arXiv:1510.01067 [hep-ph] . * Detar and Kunihiro (1989) C. E. Detar and T. Kunihiro, Phys. Rev. D 39, 2805 (1989). * Jido _et al._ (2001) D. Jido, M. Oka, and A. Hosaka, Prog. Theor. Phys. 106, 873 (2001), arXiv:hep-ph/0110005 . * Gallas _et al._ (2010) S. Gallas, F. Giacosa, and D. H. Rischke, Phys. Rev. D 82, 014004 (2010), arXiv:0907.5084 [hep-ph] . * Yamazaki and Harada (2019) T. Yamazaki and M. Harada, Phys. Rev. D 99, 034012 (2019), arXiv:1809.02359 [hep-ph] . * Detar and Kogut (1987) C. E. Detar and J. B. Kogut, Phys. Rev. Lett. 59, 399 (1987). * Aarts _et al._ (2017) G. Aarts, C. Allton, D. De Boni, S. Hands, B. Jäger, C. Praki, and J.-I. Skullerud, JHEP 06, 034 (2017), arXiv:1703.09246 [hep-lat] . * Nowak _et al._ (1993) M. A. Nowak, M. Rho, and I. Zahed, Phys. Rev. D 48, 4370 (1993), arXiv:hep-ph/9209272 . * Bardeen and Hill (1994) W. A. Bardeen and C. T. Hill, Phys. Rev. D 49, 409 (1994), arXiv:hep-ph/9304265 . * Dmitrasinovic (2012) V. Dmitrasinovic, Phys. Rev. D 86, 016006 (2012). * Ma and Harada (2018) Y.-L. Ma and M. Harada, J. Phys. G 45, 075006 (2018), arXiv:1709.09746 [hep-ph] . * Kawakami and Harada (2019) Y. Kawakami and M. Harada, Phys. Rev. D 99, 094016 (2019), arXiv:1902.06774 [hep-ph] . * Dmitrašinović and Chen (2020) V. Dmitrašinović and H.-X. Chen, Phys. Rev. D 101, 114016 (2020). * Moon _et al._ (2021) T. J. Moon _et al._ (Belle), Phys. Rev. 
D 103, L111101 (2021), arXiv:2007.14700 [hep-ex] . * Arifi _et al._ (2020a) A. Arifi, H. Nagahiro, A. Hosaka, and K. Tanida, Phys. Rev. D 101, 111502 (2020a), arXiv:2004.07423 [hep-ph] . * Zyla _et al._ (2020) P. Zyla _et al._ (Particle Data Group), PTEP 2020, 083C01 (2020). * Aaij _et al._ (2020) R. Aaij _et al._ (LHCb), JHEP 06, 136 (2020), arXiv:2002.05112 [hep-ex] . * Chen _et al._ (2016) H.-X. Chen, W. Chen, X. Liu, and S.-L. Zhu, Phys. Rept. 639, 1 (2016), arXiv:1601.02092 [hep-ph] . * Guo _et al._ (2018) F.-K. Guo, C. Hanhart, U.-G. Meißner, Q. Wang, Q. Zhao, and B.-S. Zou, Rev. Mod. Phys. 90, 015004 (2018), arXiv:1705.00141 [hep-ph] . * Roper (1964) L. Roper, Phys. Rev. Lett. 12, 340 (1964). * Burkert and Roberts (2019) V. D. Burkert and C. D. Roberts, Rev. Mod. Phys. 91, 011003 (2019), arXiv:1710.02549 [nucl-ex] . * Note (1) In the quark model approach, the properties of these Roper-like baryons $\Lambda_{c}(2765)$ [$\Xi_{c}(2970)$], such as their masses and decay widths, can be reproduced by employing the ${}^{3}P_{0}$ model Chen _et al._ (2017), or by including relativistic corrections Arifi _et al._ (2021). * Suenaga _et al._ (2015) D. Suenaga, B.-R. He, Y.-L. Ma, and M. Harada, Phys. Rev. D 91, 036001 (2015), arXiv:1412.2462 [hep-ph] . * Sasaki (2014) C. Sasaki, Phys. Rev. D 90, 114007 (2014), arXiv:1409.3420 [hep-ph] . * Suenaga _et al._ (2017) D. Suenaga, S. Yasui, and M. Harada, Phys. Rev. C 96, 015204 (2017), arXiv:1703.02762 [nucl-th] . * Buchheim _et al._ (2018) T. Buchheim, T. Hilger, B. Kämpfer, and S. Leupold, J. Phys. G 45, 085104 (2018), arXiv:1801.01472 [nucl-th] . * Ishikawa _et al._ (2019a) T. Ishikawa, K. Nakayama, D. Suenaga, and K. Suzuki, Phys. Rev. D 100, 034016 (2019a), arXiv:1905.11164 [hep-ph] . * Montaña _et al._ (2020) G. Montaña, A. Ramos, L. Tolos, and J. M. Torres-Rincon, Phys. Lett. B 806, 135464 (2020), arXiv:2001.11877 [hep-ph] . * Harada _et al._ (2020) M. Harada, Y.-R. Liu, M. Oka, and K. Suzuki, Phys. Rev.
D 101, 054038 (2020), arXiv:1912.09659 [hep-ph] . * Bi _et al._ (2016) Y. Bi, H. Cai, Y. Chen, M. Gong, Z. Liu, H.-X. Qiao, and Y.-B. Yang, Chin. Phys. C 40, 073106 (2016), arXiv:1510.07354 [hep-ph] . * Note (2) We expect that inclusion of a pentaquark component for the ground-state $\Sigma_{c}$ baryons is not necessary since they are not excited states. * Kawakami _et al._ (2020) Y. Kawakami, M. Harada, M. Oka, and K. Suzuki, Phys. Rev. D 102, 114004 (2020), arXiv:2009.06243 [hep-ph] . * Zschiesche _et al._ (2007) D. Zschiesche, L. Tolos, J. Schaffner-Bielich, and R. D. Pisarski, Phys. Rev. C 75, 055202 (2007), arXiv:nucl-th/0608044 . * Motohiro _et al._ (2015) Y. Motohiro, Y. Kim, and M. Harada, Phys. Rev. C 92, 025201 (2015), [Erratum: Phys.Rev.C 95, 059903 (2017)], arXiv:1505.00988 [nucl-th] . * Suenaga (2018) D. Suenaga, Phys. Rev. C 97, 045203 (2018), arXiv:1704.03630 [nucl-th] . * Ishikawa _et al._ (2019b) T. Ishikawa, K. Nakayama, and K. Suzuki, Phys. Rev. D 99, 054010 (2019b), arXiv:1812.10964 [hep-ph] . * Hess _et al._ (1998) M. Hess, F. Karsch, E. Laermann, and I. Wetzorke, Phys. Rev. D 58, 111502 (1998), arXiv:hep-lat/9804023 . * Alexandrou _et al._ (2006) C. Alexandrou, P. de Forcrand, and B. Lucini, Phys. Rev. Lett. 97, 222002 (2006), arXiv:hep-lat/0609004 . * Babich _et al._ (2007) R. Babich, N. Garron, C. Hoelbling, J. Howard, L. Lellouch, and C. Rebbi, Phys. Rev. D 76, 074021 (2007), arXiv:hep-lat/0701023 . * Wakayama _et al._ (2015) M. Wakayama, T. Kunihiro, S. Muroya, A. Nakamura, C. Nonaka, M. Sekiguchi, and H. Wada, Phys. Rev. D 91, 094508 (2015), arXiv:1412.3909 [hep-lat] . * Hong _et al._ (2004) D. K. Hong, Y. J. Sohn, and I. Zahed, Phys. Lett. B 596, 191 (2004), arXiv:hep-ph/0403205 . * Arifi _et al._ (2020b) A. Arifi, H. Nagahiro, A. Hosaka, and K. Tanida, Phys. Rev. D 101, 094023 (2020b), arXiv:2003.08202 [hep-ph] . 
* Note (3) In addition, another constituent provided by bi-local interpolating fields Dmitrasinovic and Chen (2011); Chen and Dmitrašinović (2013) may enter. * Chen _et al._ (2017) B. Chen, K.-W. Wei, X. Liu, and T. Matsuki, Eur. Phys. J. C 77, 154 (2017), arXiv:1609.07967 [hep-ph] . * Arifi _et al._ (2021) A. J. Arifi, D. Suenaga, and A. Hosaka, Phys. Rev. D 103, 094003 (2021), arXiv:2102.03754 [hep-ph] . * Dmitrasinovic and Chen (2011) V. Dmitrasinovic and H.-X. Chen, Eur. Phys. J. C 71, 1543 (2011), arXiv:1101.5906 [hep-ph] . * Chen and Dmitrašinović (2013) H.-X. Chen and V. Dmitrašinović, Phys. Rev. D 88, 036013 (2013), arXiv:1309.0387 [hep-ph] .
# Belief-based Generation of Argumentative Claims

Milad Alshomary Wei-Fan Chen Timon Gurcke Henning Wachsmuth <first name>.<last<EMAIL_ADDRESS> Department of Computer Science Paderborn University, Paderborn, Germany

###### Abstract

When engaging in an argumentative discourse, skilled human debaters tailor claims to the beliefs of the audience in order to construct effective arguments. Recently, the field of computational argumentation has seen extensive efforts to address the automatic generation of arguments. However, existing approaches do not perform any audience-specific adaptation. In this work, we aim to bridge this gap by studying the task of belief-based claim generation: Given a controversial topic and a set of beliefs, generate an argumentative claim tailored to the beliefs. To tackle this task, we model people’s prior beliefs through their stances on controversial topics, and extend state-of-the-art text generation models to generate claims conditioned on the beliefs. Our automatic evaluation confirms the ability of our approach to adapt claims to a set of given beliefs. In a manual study, we additionally evaluate the generated claims in terms of informativeness and their likelihood to be uttered by someone with a respective belief. Our results reveal the limitations of modeling users’ beliefs based on their stances, but demonstrate the potential of encoding beliefs into argumentative texts, laying the ground for future exploration of audience reach.

## 1 Introduction

According to van Eemeren and Houtlosser (1999), debaters engaging in an argumentative discourse aimed at resolving disagreement design their next argumentative move considering the topical potential, the audience demand, and appropriate presentational devices. Feinberg and Willer (2015) stress, based on the moral foundation theory Godden (2010), that phrasing arguments to fit the audience’s morals leads to better agreement.
For example, in a debate on former US president Donald Trump, potential topics could have been immigration, health care plans, tax plans, etc. However, knowledge about the audience being middle-class workers would have suggested restricting the selection to Trump’s tax plans. An appropriate use of presentational devices might then have phrased a con argument as follows:

#### Example

“Donald Trump was a bad president. He did nothing but hurt the poor and middle class; his tax plan benefited only rich people who could afford it.”

There has been a recent growth of interest in argument generation as a subfield of computational argumentation. Several tasks have been proposed, including claim negation Bilu et al. (2015); Hidey and McKeown (2019), counterargument generation Hua et al. (2019), and conclusion generation Alshomary et al. (2020). While some research considers argumentative strategies when delivering arguments Wachsmuth et al. (2018); El Baff et al. (2019), no one has worked on adapting arguments to user beliefs yet. Our goal is to bridge this gap. In this work, we propose to extend argument generation technologies with the ability to encode beliefs. This not only better reflects the process by which humans reason, but also allows controlling the output in order to better reach the audience. In particular, we introduce the task of belief-based claim generation: Given a controversial topic and a representation of a user’s beliefs, generate a claim that is both relevant to the topic and matches the beliefs. To approach this task, we first model user beliefs by their stances (pro or con) on a set of controversial topics, and then extend two state-of-the-art text generation approaches by conditioning their output on a specific set of beliefs. One approach builds on Li et al. (2016), equipping a sequence-to-sequence (Seq2Seq) model with a context vector representing the given stances.
The other approach controls the output of a pre-trained argumentative language model (LM) using the algorithm of Dathathri et al. (2020) to ensure that the output resembles the user’s beliefs. We study the given task empirically on the debate.org dataset of Durmus and Cardie (2018). The dataset contains users’ arguments on various controversial topics as well as their stances towards the most popular topics on the website, called the big issues. For our purposes, we use these big issues as the controversial topics, and we model beliefs by the user’s stances towards them. In our automatic evaluation, we compare both models against their unconditioned counterparts (i.e., the same models without knowledge about a user). We assess the generated claims in terms of the similarity to the ground truth and the likelihood of carrying textual features that reflect users’ stances on big issues. Our results suggest that using users’ beliefs significantly increases the effectiveness of the Seq2Seq and LM in most cases. Moreover, a stance classifier trained on claims generated by the conditioned LM achieves the best average accuracy across all big issues. In a subsequent manual evaluation, we find that claims generated by the conditioned LM are more informative regarding the topic. In terms of predicting stance from generated claims, we analyze the limitations of our approach in detail, which lie in the belief encoding step. By avoiding these limitations, we find that the generated claims enable the annotators to correctly predict a stance on a given big issue in 45% of the cases (26% incorrectly). These results demonstrate the applicability of encoding a user’s beliefs into argumentative texts, enabling future research on the effect of belief-based argumentative claims on audiences. The contribution of this work is threefold (code can be found under http://www.github.com/webis-de/eacl21-belief-based-claim-generation): * • A new task, belief-based claim generation.
* • An approach to model and match users’ beliefs in the generation of arguments. * • Empirical evidence of the applicability of encoding beliefs into argumentative texts.

## 2 Related Work

Early research on argument generation aimed to create argumentative texts starting from a symbolic representation Zukerman et al. (2000); Grasso et al. (2000); Carenini and Moore (2006). Conceptually, those approaches all had a similar architecture consisting of three main phases: text planning, sentence planning, and realization Stede et al. (2018). While they included a user model to a certain extent and aimed to generate convincing arguments, they still operated on a limited scale. With the tremendous advances of NLP and machine learning since then, research has begun to address different tasks in the realm of argument generation, showing promising results. Hua et al. (2019) proposed a neural network-based framework for generating counter-arguments. Both Bilu et al. (2015) and Hidey and McKeown (2019) addressed the task of claim negation, using a rule-based and a neural approach respectively. Also, Sato et al. (2015) proposed an approach to argument generation based on sentence retrieval, in which, given a topic, a set of paragraphs covering different aspects is generated. However, these approaches are agnostic to the target audience. Chen et al. (2018) modified the political bias of (often claim-like) news headlines using style transfer, thereby at least accounting for the general political sides (left and right). Moreover, Wachsmuth et al. (2018) modeled rhetorical strategies in argument synthesis conceptually, but its computational realization El Baff et al. (2019) considers the audience only implicitly, using a language model approach to select and arrange the argumentative discourse units that are phrased in an argument. In the field of conversational AI, researchers have utilized machine translation techniques to tackle the task of dialog generation Ritter et al. (2011).
Li et al. (2016) worked on augmenting sequence-to-sequence models by learning persona vectors from the given data. In a similar fashion, one of our approaches extends such a model with a context vector representing a user’s beliefs. Here, however, we deal with argumentative text. Progress in the field of text generation has been made due to the availability of large pre-trained language models Devlin et al. (2018); Solaiman et al. (2019). While these models excel in generating coherent texts, ensuring that a generated text possesses a certain property is not straightforward. Some research tackled this limitation, offering ways to better control the output Keskar et al. (2019); Ziegler et al. (2019). One of the most flexible of such approaches is by Dathathri et al. (2020), which does not require fine-tuning for each controlling theme. Their algorithm conditions the output of a language model to contain certain properties defined by a discriminative classifier or a bag-of-words. One of our approaches makes use of this algorithm to condition the output of an argumentative language model on a bag-of-words that represents a user’s beliefs. A recent relevant work by Schiller et al. (2020) deals with the generation of aspect-controlled arguments. Similar to us, the authors utilize a pre-trained language model to generate arguments on a specific topic, with a controlled stance and aspect. Their focus is on topical aspects of arguments, though, and their approach, based on Keskar et al. (2019), is limited to a predefined set of topics and aspects.

## 3 Task

Due to the importance of the audience in argumentation when aiming for persuasiveness van Eemeren and Houtlosser (1999), and due to the fact that humans comply with certain morals that shape their beliefs and affect their reasoning Godden (2010); Feinberg and Willer (2015), we introduce the audience’s beliefs as a new dimension to the argument generation process in this work.
For this, we propose a new task, belief-based claim generation:

> Given a controversial topic and a representation of the audience’s beliefs,
> generate a claim that is both relevant to the topic and matches the beliefs.

We focus this task on generating claims rather than full arguments to keep it simple and because claims denote the main units from which arguments are built. As shown by Feinberg and Willer (2015), better agreement is achieved when arguments are framed with respect to the audience’s beliefs. Therefore, we argue that studying the mentioned task will enable argumentation technology, knowing its audience, to generate more convincing arguments, bridging the gap between disagreeing parties.

### 3.1 Data

To study the proposed task, a dataset is needed in which information about users revealing their beliefs as well as their arguments on various topics are given. Here, we build upon the dataset introduced by Durmus and Cardie (2018), which was collected from debate.org, an online platform where users can engage in debates over controversial topics and share their profiles. The dataset contains users’ arguments as answers to topic questions and engagement in debates, along with various user information, including a user’s self-specified stances (pro or con) on up to 48 predefined popular controversial topics, called big issues.

Dataset | # Claims | # Topics | # Users
---|---|---|---
Training set | 41 288 | 22 241 | 5 189
Validation set | 5 028 | 2 450 | 2 509
Test set | 5 154 | 2 728 | 2 512
Full dataset | 51 470 | 27 419 | 5 189

Table 1: Number of claims, topics, and users in each of the training, validation, and test set of the data used in this paper.

In our dataset, for the task at hand, we keep only users who have at least three arguments and stated their stance on at least one of the big issues. For those, we collected their arguments along with the topics and stances. In total, the dataset contains around 51k claims, on 27k topics from 5k users.
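The filtering criterion just described (at least three arguments and a stated stance on at least one big issue) can be sketched as follows; the record layout and field names are hypothetical, chosen only for illustration:

```python
def filter_users(users):
    """Keep users with >= 3 arguments and a pro/con stance on at least
    one big issue ('arguments' and 'stances' are illustrative fields)."""
    return [u for u in users
            if len(u["arguments"]) >= 3
            and any(s in ("pro", "con") for s in u["stances"].values())]

users = [
    {"arguments": ["a1", "a2", "a3"], "stances": {"abortion": "pro"}},
    {"arguments": ["a1"],             "stances": {"abortion": "con"}},  # too few arguments
    {"arguments": ["a1", "a2", "a3"], "stances": {}},                   # no stance stated
]
kept = filter_users(users)  # only the first user survives
```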
We randomly split the dataset per topic into 10% test and 90% training. 10% of the latter are used as the validation set. Statistics are given in Table 1. To develop approaches to the belief-based claim generation task, we need training data where claims can be identified as such. Since claim detection is not our focus, we preprocess all data using the claim detection approach of Chakrabarty et al. (2019). In particular, we score the likelihood of each sentence being a claim, and only keep the one with the highest score as the user’s claim on the topic. To evaluate the model, we created a sample of 100 arguments, and two annotators decided whether the extracted sentence represents a claim on the given topic or not. In terms of full agreement, the model extracted claims correctly in 81% of the cases, the Cohen’s $\kappa$ inter-annotator agreement being 0.3. We note that this preprocessing step produces some noise in the data, mainly affecting the training of our Seq2Seq model below.

## 4 Approach

To study our research question, we propose and compare two approaches that build on top of known techniques for text generation. Both approaches rely on modeling users’ beliefs via their stances on big issues. The first is an extension of the Seq2Seq model Sutskever et al. (2014), where the user’s stances are encoded as a context vector, while the second conditions the output of a pre-trained argumentative language model via a bag-of-words, constructed based on stances on big issues.

### 4.1 Seq2Seq-based Model

Given a topic as a sequence of words $T=(w_{1},w_{2},...,w_{n})$, a user vector $\overrightarrow{U}\in\\{0,1\\}^{k}$ with $k$ being the number of big issues, and a claim as a sequence of words $C=(w_{1},w_{2},...,w_{m})$, first an LSTM-based encoder consumes the input topic and produces a hidden state $\overrightarrow{h}$, which is used to initialize the LSTM-based decoder.
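As a side note on the claim-extraction preprocessing of Section 3.1, reducing each argument to its highest-scoring sentence is a one-liner; the sketch below uses a toy cue-word scorer standing in for the trained claim detector of Chakrabarty et al. (2019):

```python
def select_claim(sentences, claim_score):
    """Keep the single highest-scoring sentence as the user's claim."""
    return max(sentences, key=claim_score)

def toy_score(sentence):
    # Stand-in scorer: reward stance-taking cue words (the paper uses a
    # trained claim detector instead; this heuristic is only illustrative).
    cues = ("should", "must", "ought")
    return sum(cue in sentence.lower() for cue in cues)

argument = [
    "I read about this issue last week.",
    "The death penalty should be abolished.",
    "Many countries still practice it.",
]
claim = select_claim(argument, toy_score)  # -> "The death penalty should be abolished."
```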
The user vector $\overrightarrow{U}$ is projected into a new embedding space via a feed-forward network with a learned weight matrix $W_{U}$, producing a new vector, $\overrightarrow{V}$:

$\overrightarrow{V}=\sigma(W_{U}\cdot\overrightarrow{U})$

Following Li et al. (2016), our $\overrightarrow{V}$ serves the role of their speaker embedding in the model. The difference is that in the speaker model of Li et al. (2016) the vector is not explicitly predefined but rather learned from the data, while in our model the input is already predefined as a binary vector representing the user’s stances on big issues. By augmenting the Seq2Seq model with a context user vector, the model is supposed to capture the correlation between users’ stances on big issues and the corresponding claims. Once the correlation is learned, the model can generate a claim utilizing not only the topic, but also the stances on big issues of the target user, which reflect the beliefs.

### 4.2 Conditioned Language Model

In this approach, we represent a user’s stances on big issues as a bag-of-words. We then use the topic as a prompt for a pre-trained argumentative language model (LM) to synthesize a claim, conditioned using the algorithm of Dathathri et al. (2020). The synthesis process is illustrated in Figure 1.

#### Argumentative Language Model

Since we aim to generate claims in particular, a standard LM is not enough. To model argumentative language, we take an LM pre-trained on general language and fine-tune it on a large set of arguments (in our experiments, we use the corpus of Ajjour et al. (2019)). The result is an LM that is able to generate argumentative text.

Figure 1: The synthesis process of the conditioned LM on the topic “Whaling”, given a user who is pro environmental protection and global warming and con torture.
Steps: (1) building $U_{bow}$ based on the stances; (2) forward pass through the LM to generate a token, “sport”; (3) updating the LM history $H_{t}$ based on $p(U_{bow}|x)$; and (4) generating from the new history $\hat{H}_{t}$ a new token, “cruel”.

#### Belief-based Bag-of-words

Next, we build a bag-of-words that represents the beliefs of a user. We learn this from the user’s stances on the big issues. For example, a user pro abortion would likely be pro choice. Hence, words such as right and choice are candidates to be included in their belief-based bag-of-words. To this end, we first build two bag-of-words representations for each big issue, one for the pro side and one for the con side. For a user, we then construct a belief-based bag-of-words based on their stances on big issues. To build a representative pro and con bag-of-words for each big issue, we follow the topic signature approach of Lin and Hovy (2000). Given a big issue, we first collect from some corpus of arguments three sets: relevant pro arguments $R_{pro}$, relevant con arguments $R_{con}$, and a random set of non-relevant arguments $\hat{R}$. For each relevant set, we then compute a likelihood ratio for all its words with respect to $\hat{R}$ and keep only words with a score higher than a specific threshold $\tau$, resulting in two sets of words, $W_{pro}$ and $W_{con}$. Since a word may appear in both sets, we remove it from the set where it occurs fewer times. Finally, we sort words according to their likelihood ratio and keep in both $W_{pro}$ and $W_{con}$ the top $k$ words, forming the final pro and con bag-of-words respectively.

#### Claim Generation

Given a user (represented by stances on big issues) and a topic, we construct a belief-based bag-of-words (Step 1 in Figure 1):

$U_{bow}=W_{1}\cup W_{2}\cup\ldots\cup W_{n}$

where $W_{i}$ is the pro bag-of-words if the stance is pro and the con bag-of-words otherwise.
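A rough sketch of this construction follows. Note two simplifications: the scoring below is a smoothed log-frequency ratio rather than the full likelihood-ratio statistic of Lin and Hovy (2000), and the one-sentence "corpora" are invented purely for illustration:

```python
from collections import Counter
import math

def signature_words(relevant_docs, background_docs, tau=1.0, k=25):
    """Rank words by a smoothed log-ratio of their relative frequency in
    relevant vs. background arguments; keep the top-k above threshold tau."""
    rel = Counter(w for d in relevant_docs for w in d.lower().split())
    bg = Counter(w for d in background_docs for w in d.lower().split())
    n_rel, n_bg = sum(rel.values()), sum(bg.values())
    score = {w: math.log((c / n_rel + 1e-9) / (bg[w] / n_bg + 1e-9))
             for w, c in rel.items()}
    top = [w for w, s in sorted(score.items(), key=lambda x: -x[1]) if s > tau]
    return set(top[:k])

background = ["the weather is nice today"]
W_pro = signature_words(["a woman has the right to choose"], background)
W_con = signature_words(["every unborn life deserves protection"], background)

# Step 1 of Figure 1: the user's belief bag-of-words is the union of the
# per-issue word sets selected by the user's stances.
U_bow = W_pro | W_con
```

Words that also occur in the background set ("the" above) score low and are filtered out, which is the point of contrasting against the non-relevant arguments $\hat{R}$.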
Then, we use the topic as a prompt and the user’s bag-of-words $U_{bow}$ to condition the generated claim (see Figure 1). In particular, given a transformer-based LM Vaswani et al. (2017), a token $x_{t+1}$ is generated at each time step as follows:

$o_{t+1},H_{t+1}=LM(x_{t},H_{t})$

$x_{t+1}\sim p_{t+1}=Softmax(W\cdot o_{t+1})$

where $H_{t}$ represents the history of the LM. Using the algorithm of Dathathri et al. (2020), called Plug and Play LM (PPLM), an update to the past, $\Delta H$, is computed to control the generated claim, based on the sum of the log-likelihood $p(U_{bow}|x)$ of all words in the belief-based bag-of-words. Then the new history, $\hat{H}_{t}=H_{t}+\Delta H_{t}$, is used as in the previous equations to draw a new distribution $\hat{p}_{t+1}$, from which a new token is sampled. To ensure fluency in the generated text, $\Delta H$ is further modified to ensure a high log-likelihood $p(x)$ with respect to the LM. More details on the algorithm can be found in the work of Dathathri et al. (2020). In short, through fine-tuning an LM on argumentative text, we tune it to generate claims. Using the topic as a prompt, we ensure that the claim is on the topic. Finally, the PPLM algorithm represents the beliefs, modeled as a bag-of-words $U_{bow}$, in the claim.

## 5 Automatic Evaluation

In this section, we evaluate whether utilizing users’ beliefs as input, modeled as stances on big issues, leads to claims that better match the ground-truth claims and reveal the input stances on big issues.

### 5.1 Experimental Setup

On the one hand, we compute the BLEU and METEOR scores of the generated claims with respect to the ground-truth claims. On the other hand, we compute the likelihood that the generated claims possess textual features that reflect the input user’s beliefs. We do so by measuring the accuracy of predicting users’ stances on big issues given the generated claims. We compute this accuracy for each of the 48 big issues individually and report the results for all of them.
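The stance-prediction probe just described (train a classifier on generated claims, test whether the stance is recoverable from the text) can be made concrete with a stdlib-only stand-in for the paper's "simple TF-IDF based linear classifier". The toy claims, labels, and the nearest-centroid decision rule are all illustrative assumptions; in practice a library classifier over TF-IDF features would be used:

```python
import math
from collections import Counter

def fit_idf(docs):
    """Smoothed inverse document frequencies from the training claims."""
    toks = [d.lower().split() for d in docs]
    df = Counter(w for t in toks for w in set(t))
    n = len(docs)
    return {w: math.log((1 + n) / (1 + c)) + 1.0 for w, c in df.items()}

def vectorize(doc, idf):
    """Sparse TF-IDF vector; out-of-vocabulary words are dropped."""
    tf = Counter(doc.lower().split())
    total = sum(tf.values())
    return {w: (c / total) * idf[w] for w, c in tf.items() if w in idf}

def centroid(vectors):
    out = {}
    for v in vectors:
        for w, x in v.items():
            out[w] = out.get(w, 0.0) + x / len(vectors)
    return out

def cosine(a, b):
    dot = sum(a.get(w, 0.0) * x for w, x in b.items())
    na = math.sqrt(sum(x * x for x in a.values())) or 1.0
    nb = math.sqrt(sum(x * x for x in b.values())) or 1.0
    return dot / (na * nb)

# Toy "generated claims", each labeled with the author's stance on one big issue.
claims = ["a woman has the right to choose",
          "the right to choose must be protected",
          "every unborn life deserves protection",
          "life begins at conception"]
labels = ["pro", "pro", "con", "con"]

idf = fit_idf(claims)
vecs = [vectorize(c, idf) for c in claims]
cents = {y: centroid([v for v, l in zip(vecs, labels) if l == y])
         for y in sorted(set(labels))}

def predict_stance(claim):
    v = vectorize(claim, idf)
    return max(cents, key=lambda y: cosine(v, cents[y]))
```

Test accuracy of such a probe then quantifies how much stance signal the generated claims carry.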
To this end, we carry out the following three steps for a given approach. First, we generate claims for all given users and topics in the test dataset. Second, we keep only instances in which users have a stance (pro/con) on the tested big issue, and split the filtered dataset into training and test. Finally, we train a simple TF-IDF based linear classifier on the training set to predict the stance on the big issue given the text of the claim. The accuracy of the classifier on the test split then quantifies the likelihood of the generated claims possessing textual features that reflect the stance on the corresponding big issue.

Approach | BLEU-1 | BLEU-3 | METEOR
---|---|---|---
S2S-baseline | 18.2% | 0.44% | 16%
S2S-model | *18.4% | *0.46% | 16%
LM-baseline | 09.6% | 0.26% | 08%
LM-conditioned | *12.0% | 0.16% | *11%

Table 2: BLEU and METEOR scores of the claims of each evaluated approach compared to the ground-truth claims. Values marked with * are significantly better than the respective baseline at $p$ < .05 (Student’s $t$-test).

Approach | Abortion | Death penalty | Gay Marriage | Drug legaliz. | Global warming | Environm. protection | Medical mariju. | Smok. ban | Minim. wage | Border fence | All 48 big issues
---|---|---|---|---|---|---|---|---|---|---|---
Ground-truth | 0.49 | 0.59 | 0.55 | 0.55 | 0.55 | 0.55 | 0.50 | 0.53 | 0.48 | 0.62 | 0.52
S2S-baseline | 0.49 | 0.48 | 0.52 | 0.45 | 0.51 | 0.51 | 0.57 | 0.53 | 0.53 | 0.46 | 0.50
S2S-model | 0.55 | 0.55 | 0.45 | 0.45 | 0.51 | 0.58 | 0.57 | 0.53 | 0.49 | 0.52 | 0.51
LM-baseline | 0.48 | 0.50 | 0.54 | 0.49 | 0.54 | 0.56 | 0.51 | 0.45 | 0.59 | 0.46 | 0.50
LM-conditioned | *0.58 | *0.53 | 0.45 | 0.56 | *0.61 | 0.58 | 0.58 | 0.53 | 0.65 | 0.50 | 0.54
# Training | 1 610 | 1 532 | 2 098 | 1 538 | 1 960 | 2 196 | 2 096 | 1 370 | 1 580 | 1 092 | -
# Test | 350 | 366 | 196 | 316 | 156 | 86 | 138 | 294 | 172 | 280 | -

Table 3: Accuracy of each classifier trained on claims generated by the evaluated approaches to predict the stance, on the 10 most frequent big issues as well as on average over all 48 big issues. Values marked with * are significantly better than the corresponding baseline at $p<0.05$ according to a one-tailed Student’s $t$-test.

### 5.2 Implementation Details

In the following, we give implementation details of our approaches and the corresponding baselines:

#### Seq2Seq-based Model

Based on the OpenNMT framework Klein et al. (2017), the encoder and decoder are each two-layer LSTMs of hidden size 512 with GloVe word embeddings of size 300. Users’ stances on big issues are represented as a one-hot encoded vector, and then projected into a 16-dimensional space through a one-layer dense neural network. We trained the model with the Adagrad optimizer (batch size 16) and refer to it as S2S-model.

#### Conditioned Language Model

We constructed the pro/con relevant argument sets ($R_{pro},R_{con}$) by querying the respective big issue from the API provided by Ajjour et al. (2019) and extracting pro/con arguments from the top 60 results. For the non-relevant argument set ($\hat{R}$), we used the same corpus Ajjour et al. (2019) and randomly selected 100 arguments.
We eliminated all words with a score under $\tau=10$ and finally kept the top $k=25$ words from each set ($R_{pro},R_{con}$) to represent the bag-of-words. (We refrained from tuning these parameters, since we do not have a ground truth.) To model the argumentative language, we fine-tuned the GPT-2 model on the corpus of Ajjour et al. (2019), which contains around 400k arguments. The fine-tuning was performed using the transformers framework Wolf et al. (2019). We used the topic as a prompt to trigger the generation process. However, since some topics are phrased as a question (e.g., “is abortion wrong?”), we extracted the noun phrase from the topic and used it as a prompt. For conditioning the generated claim, we used the PPLM implementation of Dathathri et al. (2020), with step size 0.15 and repetition penalty 1.2. We call this model the LM-conditioned.

#### Baselines

To evaluate the gain of encoding a user’s beliefs, we compare our two approaches to the corresponding versions without stances on big issues as an input. We refer to these baselines as S2S-baseline and LM-baseline respectively. (A baseline that uses the bag-of-words of the targeted topic itself to guide the generation would not be valid, since we do not have information on the user’s stance on this targeted topic.)

### 5.3 Results

Table 2 shows the results of our approaches and the baselines in terms of BLEU and METEOR. For S2S, the BLEU scores of our approach are significantly better than those of the baseline. The LM-conditioned is significantly better than the baseline version in terms of BLEU-1 and METEOR. In general, the S2S-model has the highest scores across all measures. The reason may be that it was trained in a supervised manner on the given dataset, whereas the LM-conditioned was only fine-tuned in an unsupervised way on a different argument corpus.
Regarding the encoding of user stances, Table 3 shows the accuracy of a linear classifier trained to predict the stance from the claims generated by each approach as well as from the ground truth, on average and on the 10 most frequent big issues. A complete table with all big issues can be found in the appendix. The best average accuracy across all the big issues is achieved by the LM-conditioned (0.54). Compared to the corresponding baselines, the LM-conditioned and the S2S-model generated claims that boosted the accuracy of the stance classifier on 33 (69%) and 21 (44%) of all big issues respectively. Overall, on 20 of the big issues, the best accuracy was achieved on the claims generated by the conditioned LM, compared to only nine big issues for the S2S-model. This indicates that the LM-conditioned can better encode a user’s beliefs, modeled as stances on big issues, into generated claims.

## 6 Manual Evaluation

To obtain more insights into belief-based claim generation, we let human annotators manually evaluate the output of the given approaches. Upon inspecting a sample of claims generated by our approaches, we noticed that the LM-conditioned produces more fluent and informative texts. Accordingly, we focused on the LM-conditioned and its baseline in the evaluation, where we conducted two user studies. The goal of the first was to assess the quality of the big-issue bag-of-words collected automatically, while the second targeted the output of the LM-conditioned, its baseline, and a variant that utilizes a manually refined bag-of-words.

### 6.1 Automatic Collection of Bag-of-words

To keep the manual annotation effort manageable, we evaluated only the top-10 big issues. Two authors of this paper categorized each word in the pro/con bag-of-words of the corresponding big issue into five categories, c1–c5:

1. c1: Word irrelevant to the big issue.
2. c2: Relevant word, wrong stance.
3. c3: Relevant word, both stances possible.
4. c4: Relevant word, correct stance.
5. c5: Very relevant word, correct stance.

Approach | Overall True | Overall False | Overall Undec. | Level 4 True | Level 4 False | Level 4 Undec. | Level 3 True | Level 3 False | Level 3 Undec. | Level 2 True | Level 2 False | Level 2 Undec.
---|---|---|---|---|---|---|---|---|---|---|---|---
LM-baseline | 44% | 34% | 22% | 50% | 50% | 0% | 55% | 31% | 14% | 27% | 20% | 53%
LM-conditioned | 37% | 32% | 31% | 35% | 38% | 27% | 59% | 41% | 0% | 13% | 13% | 74%
LM-cond. (manual) | 45% | 26% | 28% | 50% | 31% | 19% | 61% | 28% | 11% | 25% | 18% | 56%
Ground Truth | 42% | 30% | 28% | 38% | 42% | 19% | 64% | 27% | 9% | 27% | 19% | 54%

Table 4: Manual evaluation: Percentage of cases for each approach where the majority of annotators predicted the stance of a generated claim on the given big issue correctly (true), incorrectly (false), or could not decide it (undec.). The overall scores and those for each topic/big-issue relatedness level are listed.

Words | c1 | c2 | c3 | c4 | c5
---|---|---|---|---|---
Pro | 14% | 10% | 36% | 34% | 6%
Con | 36% | 2% | 34% | 26% | 2%

Table 5: Distribution of the pro/con bag-of-words, averaged across the top-10 big issues, over the five considered categories: c1 means irrelevant, c2 wrong stance, c3 words that fit both stances, and c4 and c5 represent the correct stance. Examples can be found in the appendix.

To compute inter-annotator agreement, three big issues were annotated by both annotators, resulting in a Cohen’s $\kappa$ of 0.45, reflecting moderate agreement. Afterwards, only one annotator continued the annotations for the other big issues.
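Inter-annotator agreement is reported as Cohen's $\kappa$ throughout this evaluation; as a reference point, the chance-corrected statistic can be computed as follows (the two toy label lists are invented for illustration):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

ann1 = ["claim", "claim", "no-claim", "no-claim"]
ann2 = ["claim", "no-claim", "no-claim", "no-claim"]
kappa = cohens_kappa(ann1, ann2)   # 0.5 for this toy example
```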
A considerable proportion of words belong to categories c1 and c2, which creates noise that could confuse the conditioning process of the LM. Hence, we also consider a variant of the conditioned LM that uses only relevant words from c4 and c5.

### 6.2 Claim Generation

We evaluate the effectiveness in terms of whether a given generated claim reveals the stance of the given user on a specific big issue, as well as how informative the claim is regarding the given topic. Since not all topics are directly related to big issues that can be revealed in the generated claims, we manually annotated the relatedness of the 200 most frequent topics in the test dataset to the 10 most frequent big issues, and created the evaluation sample accordingly. In particular, two authors of this paper scored the relatedness of each pair of topic and big issue on a scale from 1 to 4:

1. Score 4: Topic and big issue are the same. Example: "gay marriage should be legalized" and "gay marriage"
2. Score 3: A stance on the topic likely affects the stance on the big issue. Example: "killing domestic abusers" and "death penalty"
3. Score 2: A stance on the topic may affect the stance on the big issue. Example: "morality" and "abortion"
4. Score 1: Topic and big issue are not related. Example: "do aliens exist?" and "abortion"

The two annotators had a Cohen's $\kappa$ agreement of 0.54. Around 97.4% of all pairs got score 1, 1.1% score 2, 0.8% score 3, and 0.7% score 4. The small percentage of cases that can be evaluated reflects a limitation of the designed evaluation study. However, it still allows us to evaluate the effectiveness of our approach at different levels of relatedness. Given the annotated pairs, we randomly selected 10 pairs from each of levels 2, 3, and 4. For each pair, we then collected all claims on the topic from the test set where the author specifies a stance on the corresponding big issue. We randomly selected 30 claims per level, resulting in an evaluation sample of 90 instances.
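The sampling procedure above (10 topic/big-issue pairs per relatedness level, 30 claims per level, 90 instances in total) can be sketched as follows. The data structures are illustrative placeholders, not the paper's corpus; the score counts mirror the quoted 0.7%/0.8%/1.1% distribution over 2000 topic/issue pairs:

```python
import random

random.seed(0)

# 200 topics x 10 big issues = 2000 pairs; relatedness scores follow the
# annotation statistics quoted above (14, 16, 22 pairs at levels 4, 3, 2).
scores = [4] * 14 + [3] * 16 + [2] * 22 + [1] * 1948
pairs = [((f"topic{i // 10}", f"issue{i % 10}"), scores[i]) for i in range(2000)]

# Hypothetical claim pool per pair; in the real setup only claims whose
# author specifies a stance on the big issue are kept.
claims = {pair: [f"claim {j} on {pair[0]}" for j in range(5)] for pair, _ in pairs}

sample = []
for level in (2, 3, 4):
    level_pairs = [p for p, s in pairs if s == level]
    chosen = random.sample(level_pairs, 10)   # 10 pairs per level
    pool = [c for p in chosen for c in claims[p]]
    sample.extend(random.sample(pool, 30))    # 30 claims per level

print(len(sample))  # 90 evaluation instances
```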
We used the crowdsourcing platform MTurk (https://www.mturk.com/) for evaluation. For each instance, we showed a topic, a claim, and the corresponding big issue to three annotators. The annotators had to perform two tasks: (1) to predict the stance of the user on the corresponding big issue from the text of the claim, and (2) to rate the claim's informativeness regarding the topic on a scale from 1 to 3.

Table 4 shows the percentage of cases in which the majority of annotators predicted the stance correctly (true), incorrectly (false), or could not decide about the stance (undec.) from the generated claim. Across the whole sample (Overall), the claims generated by LM-conditioned (manual), the model conditioned on the refined bag-of-words, most often allowed the annotators to predict the stance correctly (45%). We thus attribute the low effectiveness of the LM-model to the noise introduced by the automatic collection of the big issues' bags-of-words, especially since the effectiveness improves across all levels when this noise is eliminated.

Analyzing each relatedness level individually yields more insights. For relatedness level 4, where the topic is the same as the big issue, the LM-conditioned (manual) model generated claims for which the majority of the cases with a known stance were correct (63%). At level 3, we observe that both versions of our approach outperform the baseline in producing claims that express the correct stance on the corresponding big issue, with percentages of 59% and 68% respectively. Finally, at relatedness level 2, which represents a weak relation between topics and big issues, predicting the stance seems to become hard, as indicated by the high percentages of undecided cases. We believe that the weak relatedness made the annotators guess the stance in some cases, leading to unreliable annotations.
| Approach | Overall | Level 4 | Level 3 | Level 2 |
|---|---|---|---|---|
| LM-baseline | 1.8 | 2.5 | 1.9 | 1.4 |
| LM-conditioned | 2.1 | 2.3 | 2.5 | 1.5 |
| LM-cond. (manual) | 2.0 | 2.3 | 2.2 | 1.5 |
| Ground Truth | 2.0 | 1.9 | 1.8 | 2.2 |

Table 6: Manual Evaluation: Mean informativeness of the claims generated by each approach with regard to the topic (1–3, higher is better). The overall scores and those for each topic/big-issue relation level are listed.

Table 6 shows the average score of all approaches regarding the informativeness of the generated claims. Here, both versions of our approach achieved better scores than the baseline, matching the ground-truth score. We believe that the low scores of the ground-truth claims stem from the noise generated in the claim detection step.

(a) Topic: is abortion ok | Big issue: (Con) Abortion | Level: 4

| Approach | Claim | Stance |
|---|---|---|
| LM-cond. | abortion rights groups argue that the right to abortion is a fundamental human right. | Undec. |
| LM-baseline | abortionists are not the only ones who are against abortion. There are many other people who are against abortion | Undec. |

(b) Topic: abortion is right or wrong | Big issue: (Con) Abortion | Level: 4

| Approach | Claim | Stance |
|---|---|---|
| LM-cond. | abortion right or wrong is a personal choice. I believe that abortion should be legal because it's the woman's body and she has every right to do what ever she wants. | Pro |
| LM-cond. (manual) | abortion right or wrong. I believe that abortion is murder and should be illegal in all cases except when the mother's life is at stake, rape, incest etc. | Con |

(c) Topic: whaling | Big issue: (Pro) Environmental Protection | Level: 3

| Approach | Claim | Stance |
|---|---|---|
| LM-cond. | whaling is a very dangerous and cruel practice. It causes great harm to the environment, animals that are caught in it have their lives cut short by being eaten alive. | Pro |
| LM-baseline | whaling is a sport. It is a sport because it involves a lot of physical exertion and skill. | Undec. |
(d) Topic: should murder be legalized? | Big issue: (Pro) Death Penalty | Level: 2

| Approach | Claim | Stance |
|---|---|---|
| LM-cond. | should murder be legalized? I think that abortion should not be legal. Abortion is killing a baby, and it's wrong to kill an innocent human being. | Undec. |
| LM-baseline | should murder be legalized? I think so. I think that it should be legalized because it is a good thing | Undec. |

Table 7: A selection of claims generated by the different evaluated approaches for the different association levels between topic and big issue discussed in the text.

#### Error Analysis

Table 7 shows some cases from our evaluation. In case (a), even though the model generated a claim focusing on fundamental rights, the majority of the annotators were undecided about the stance. Case (b) shows the effect of noise in the bag-of-words: the generated claim reflects a pro stance on abortion, while the ground-truth stance is con. This is avoided in the claim generated by LM-conditioned (manual). Case (c) shows a working example in which our approach correctly generated a claim on whaling from an environmental perspective when conditioned as such. Case (d) is a level 2 example that indicates a limitation of our evaluation: the generated claim reveals a stance on abortion, but we asked about the death penalty.

## 7 Conclusion

In this paper, we have proposed to equip argument generation technology with the ability to encode beliefs, for two reasons: first, it reflects the human process of synthesizing arguments, and second, it gives more control over the generated arguments, leading to a better reach of the audience. For this purpose, we have presented the task of belief-based claim generation. Concretely, we studied the research questions of how to model a user's beliefs as well as how to encode them when generating an argumentative text. We have modeled users' beliefs via their stances on big issues, and used them as an extra input in our approaches.
Our automatic evaluation has provided evidence of the applicability of encoding beliefs into argumentative texts. In the manual studies, we found that limitations in the effectiveness of our approach stem from noise produced by the automatic collection of the bags-of-words. The findings of this paper lay the ground for investigating the role of beliefs in generating arguments that reach their audience. We point out that ethical issues arise when tuning arguments to affect specific people, such as attempts to manipulate them. While the task and settings considered here are too fundamental for these issues to be critical yet, future work should pay attention to them. Our goal is to develop systems that bring people together.

## Acknowledgments

We thank the anonymous reviewers for their helpful feedback. This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Center “On-The-Fly Computing” (SFB 901/3) under the project number 160364472.
Further author information: (Send correspondence to Th.A.) E-mail<EMAIL_ADDRESS> # An innovative integral field unit upgrade with 3D-printed micro-lenses for the RHEA at Subaru Theodoros Anagnos Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia MQ Photonics Research Centre, Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, 69117 Heidelberg, Germany Pascal Maier Institute of Microstructure Technology (IMT), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany Institute of Photonics and Quantum Electronics (IPQ), Karlsruhe Institute of Technology (KIT), Engesserstr. 5, 76131 Karlsruhe Philipp Hottinger Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, 69117 Heidelberg, Germany Christopher H. Betters University of Sydney, Sydney Institute for Astronomy, Institute for Photonics and Optical Science, School of Physics, Camperdown, Australia Tobias Feger Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia Redback Systems Pty Ltd, Sydney, Australia Sergio G. Leon-Saval University of Sydney, Sydney Institute for Astronomy, Institute for Photonics and Optical Science, School of Physics, Camperdown, Australia Itandehui Gris-Sánchez Department of Physics, University of Bath, Claverton Down, Bath, BA2 7AY, UK ITEAM Research Institute, Universitat Politècnica de València, Camino de Vera, 46022 Valencia, Spain Stephanos Yerolatsitis Department of Physics, University of Bath, Claverton Down, Bath, BA2 7AY, UK Julien Lozi National Institutes of Natural Sciences, Subaru Telescope, National Astronomical Observatory of Japan, Hilo, Hawaii, United States Tim A. 
Birks Department of Physics, University of Bath, Claverton Down, Bath, BA2 7AY, UK Sebastian Vievard National Institutes of Natural Sciences, Subaru Telescope, National Astronomical Observatory of Japan, Hilo, Hawaii, United States Nemanja Jovanovic California Institute of Technology, 1200 E. California Blvd., Pasadena CA, 91125, USA Adam D. Rains Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia Michael J. Ireland Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611, Australia Robert J. Harris Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, 69117 Heidelberg, Germany Max- Planck-Institute for Astronomy, Königstuhl 17, 69117, Heidelberg, Germany Blaise C. Kuo Tiong Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia MQ Photonics Research Centre, Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia Olivier Guyon National Institutes of Natural Sciences, Subaru Telescope, National Astronomical Observatory of Japan, Hilo, Hawaii, United States Barnaby Norris University of Sydney, Sydney Institute for Astronomy, Institute for Photonics and Optical Science, School of Physics, Camperdown, Australia Sebastiaan Y. Haffert Leiden Observatory, Leiden University, PO Box 9513, Niels Bohrweg 2, 2300 RA Leiden, The Netherlands Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, Arizona Matthias Blaicher Institute of Microstructure Technology (IMT), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany Institute of Photonics and Quantum Electronics (IPQ), Karlsruhe Institute of Technology (KIT), Engesserstr. 
5, 76131 Karlsruhe Yilin Xu Institute of Microstructure Technology (IMT), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany Institute of Photonics and Quantum Electronics (IPQ), Karlsruhe Institute of Technology (KIT), Engesserstr. 5, 76131 Karlsruhe Moritz Straub Institute for System Dynamics, University of Stuttgart, Waldburgstr. 19, 70563 Stuttgart, Germany Jörg-Uwe Pott Max-Planck-Institute for Astronomy, Königstuhl 17, 69117, Heidelberg, Germany Oliver Sawodny Institute for System Dynamics, University of Stuttgart, Waldburgstr. 19, 70563 Stuttgart, Germany Philip L. Neureuther Institute for System Dynamics, University of Stuttgart, Waldburgstr. 19, 70563 Stuttgart, Germany David W. Coutts Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia MQ Photonics Research Centre, Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia Christian Schwab Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia MQ Photonics Research Centre, Department of Physics and Astronomy, Macquarie University, NSW 2109, Australia Christian Koos Institute of Microstructure Technology (IMT), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany Institute of Photonics and Quantum Electronics (IPQ), Karlsruhe Institute of Technology (KIT), Engesserstr. 5, 76131 Karlsruhe Vanguard Photonics GmbH, Hermann-von-Helmholtz-Platz 1,76344 Eggenstein-Leopoldshafen, 76227 Karlsruhe Andreas Quirrenbach Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, 69117 Heidelberg, Germany ###### Abstract In the new era of Extremely Large Telescopes currently under construction, challenging requirements drive spectrograph designs towards techniques that efficiently use a facility’s light collection power. 
Operating in the single-mode (SM) regime, close to the diffraction limit, reduces the footprint of the instrument compared to a conventional high-resolving-power spectrograph. The custom-built fiber injection system with 3D-printed micro-lenses for the replicable high-resolution exoplanet and asteroseismology spectrograph (RHEA) at Subaru, in combination with the extreme adaptive optics of SCExAO, proved highly efficient in a lab environment, reaching up to $\sim$77% of the theoretically predicted performance.

###### keywords: astrophotonics, spectroscopy, micro-lenslets, SCExAO, radial velocity, optical fibers, fiber injection, diffraction-limited spectrograph, integral field unit

## 1 Introduction

A wealth of crucial information can be collected through astronomical spectroscopy, such as chemical composition and motion parameters, as well as the indirect discovery of celestial bodies in orbit around other stars[1]. Conventional spectrograph designs began to make use of fibers half a century ago[2, 3] in order to enable more efficient observations, as it became possible to locate the instrument away from the telescope. Soon after, fiber-based integral field unit (IFU) systems were developed [4] that allowed flexibility in arranging spectra on a given detector space. Early on, multi-mode fibers were used for the IFU, which had high throughput for seeing-limited starlight (e.g. Ref. 5, 6). Later, new IFU designs emerged that take advantage of single-mode fibers (SMFs) (e.g. Ref. 7, 8). Using SMFs and operating at the diffraction limit reduces the footprint of the instrument, but the coupling efficiency is severely limited under seeing-limited conditions. By making use of the extreme adaptive optics (ExAO) systems installed in state-of-the-art telescopes, the coupling efficiency improves significantly (e.g. Ref. 9, 10).
While high-spatial-resolution spectroscopy is achieved by using SM-IFUs, giving access to many new science capabilities, the coupling losses are high due to the low fill fraction of SMFs and the requirement for sub-$\upmu$m precision in alignment. In this study, we present an upgrade of the IFU system on the RHEA at Subaru [11, 12]. This custom IFU makes use of a multi-core fiber (MCF) with 19 SM cores, with 3D-printed micro-lenses on top of the cores, manufactured using the two-photon polymerization lithography technique [13, 14]. This custom injection system significantly increases the free-space coupling of starlight into the fiber cores while allowing more tolerance for misalignment errors in targeting. The IFU system is optimized using the Zemax optical software for instantaneous angular sky areas of 11 and 18 milli-arcseconds (mas) per lenslet. The system also offers a relatively high coupling efficiency and fill factor due to the 3D-printed micro-lens array (MLA). In Section 2, we present the core design and parameters, complemented by a detailed description of the experimental setup used to characterize its performance. In Section 3, the laboratory results are presented. We draw conclusions in Section 4 and detail our future plans in Section 5.

## 2 Methods

To increase the efficiency of light coupling from the 8-m Subaru telescope into the IFU feeding the RHEA, the following components are necessary: the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system, the IFU itself with the 3D-printed MLAs, the MCF, and the spectrograph adapted to the output of the MCF. Below, these components are presented in more detail.

### 2.1 Initial simulations

In order to simulate the intensity distribution at the entrance of the IFU system, all the optical components of the SCExAO were taken into account. First of all, the output beam of the 8-m Subaru telescope undergoes adaptive optics (AO) correction in the AO188 unit and is then routed to the visual bench of SCExAO.
The beam intensity distribution at this stage can be described as an Airy pattern. For an efficient coupling of the Airy disk into the SM cores of the fiber, a special injection unit is necessary. This unit is designed to match the specifications of the MCF. The MCF has a 6.1 $\upmu$m mode-field diameter (MFD) at 650 nm ($\mathrm{1/e^{2}}$), where the SM cut-off limit is at $\sim$600 nm (Figure 1 shows a microscope image of the MCF end face). The cores have a pitch of 30 $\upmu$m in a hexagonal formation, which corresponds to a pitch-to-MFD ratio of 4.9:1, giving enough separation to eliminate the cross-coupling between cores. The 3D-printed structure of the MLAs is applied directly on top of the fiber end face using the two-photon lithography technique described below. This custom IFU will be installed into RHEA to maximize the system potential. The structure of the MLAs was optimized for platescales of 11 and 18 mas on the sky per lenslet, given that the diffraction limit of the Subaru telescope with the SCExAO at 650 nm is 17 mas.

Figure 1: MCF end face after polishing and packaging into an FC/PC connector. The cores are positioned in a hexagonal formation with a pitch of 30 $\upmu$m. Each of the cores has a 6.1 $\upmu$m MFD at 600 nm ($\mathrm{1/e^{2}}$). A HeNe laser source was used to back-illuminate the fiber cores, which are labeled with red numbers for referencing.

### 2.2 The visual SCExAO infrastructure

The SCExAO is located in the near-infrared (NIR) Nasmyth focus of the 8-m Subaru Telescope. A complete and far more detailed description of both the visual and NIR paths of SCExAO is provided in Ref. 15. A brief overview: the starlight enters the Subaru Telescope, and AO188 applies a first correction, achieving a 30-40% Strehl ratio on the point spread function (PSF) in the H-band [16, 17, 18]. The starlight then enters the SCExAO NIR bench, where the wavefront is further corrected for higher-order aberrations caused by the atmosphere.
In the next step, a dichroic filter separates the light beam into two paths: the visible ($<$900 nm) and the NIR ($>$900 nm) channel. Finally, the light beam is focused down with optical lenses onto the 3D-printed MCF surface (see Figure 2).

Figure 2: The visual bench of the SCExAO. The IFU injection is located in the bottom center of the illustration. The input beam from AO188 through the periscope to the IFU is represented in green and brown.

### 2.3 Throughput simulations

For our simulations, an Airy disk was used as an input to the IFU system, taking into account the Subaru telescope profile and the key components of SCExAO [15, 9]. The simulation of the Airy profile was performed using the physical-optics propagation (POP) module of Zemax [19]. This Airy disk output feeds a performance optimizer that selects the best MLA structure to be 3D-printed on top of the MCF. POP operands were used for varying MLA geometries that affect the coupling of the Airy beam into the MCF cores. Several geometrical shapes were tested for the MLA structure in order to increase the coupling efficiency into the cores. Finally, a spherical surface was selected, as the alternatives provided negligible improvement in performance. A spherical-surface MLA with a height of 272 $\upmu$m and a radius of curvature of 115 $\upmu$m achieved the highest coupling efficiency for the wavelength range of 600-800 nm, resulting in a central-lens throughput of 50% for the 18 mas platescale and of 21.9% for the 11 mas platescale. To achieve a fill factor of $\sim$100% between the cores of the MCF, the micro-lenses were merged together, forming a hexagonal effective aperture. The roughness of the MLA structure is expected to be better than $\lambda/16$-$\lambda/21$ at the working wavelength, as a surface roughness of 37 nm is achieved with the 3D printing technique [20].
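Two back-of-the-envelope checks put the design numbers above in context. First, the adjacent-core cross-coupling implied by the 30 $\upmu$m pitch and 6.1 $\upmu$m MFD, assuming ideal Gaussian fundamental modes. Second, the classic ceiling for coupling an unaberrated Airy pattern into a Gaussian fiber mode, computed in the pupil plane as $\eta(\beta)=2(1-e^{-\beta^{2}})^{2}/\beta^{2}$, with $\beta$ the ratio of the pupil radius to the back-propagated mode radius. This is an idealized sketch (no central obstruction, no MLA geometry), so the coupling figure is an upper bound rather than the Zemax POP result:

```python
import math

# --- 1) Adjacent-core cross-coupling from the MCF geometry (Section 2.1) ---
mfd = 6.1      # mode-field diameter in micrometers (1/e^2)
pitch = 30.0   # core pitch in micrometers
print(f"pitch-to-MFD ratio: {pitch / mfd:.1f}:1")  # ~4.9:1, as quoted

# Power overlap of two identical Gaussian modes offset by the pitch:
# amplitude overlap exp(-d^2/(2 w^2)), hence power coupling exp(-d^2/w^2).
w = mfd / 2.0
crosstalk = math.exp(-(pitch / w) ** 2)
print(f"adjacent-core power coupling: {crosstalk:.1e}")  # utterly negligible

# --- 2) Ceiling for Airy -> Gaussian-mode coupling (pupil-plane overlap) ---
def coupling(beta):
    """Uniform circular pupil coupled into a Gaussian mode."""
    return 2.0 * (1.0 - math.exp(-beta ** 2)) ** 2 / beta ** 2

best = max((0.5 + 0.001 * i for i in range(2000)), key=coupling)
print(f"optimal beta ~ {best:.2f}, max coupling ~ {coupling(best):.4f}")
# ~1.12 and ~0.8145: even a perfect system couples at most ~81% of an
# unobstructed Airy pattern into a single-mode fiber, which is why the
# simulated per-lens throughputs above sit well below 100%.
```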
### 2.4 Micro-lens array fabrication

The MLA was printed onto the cleaved facet of the MCF as a single model block using the commercially available negative-tone photoresist IP-Dip [21] and an in-house built two-photon lithography machine. This system is equipped with a 780 nm femtosecond laser [22] and a 40x Zeiss objective lens with a numerical aperture (NA) of 1.4. A custom control software was developed in-house to guarantee optimum shape fidelity of the printed MLA and to allow for high-precision alignment with respect to the fiber cores of the MCF. As a first step, the MCF was manually glued into an FC/PC connector and subsequently polished to achieve a flat fiber end-facet accessible to the lithography machine for printing. Thereafter, the fiber was back-illuminated by coupling in the light of a red light-emitting diode (LED) to accommodate machine vision for the detection of the 19 cores of the MCF. After the detection procedure, the individual lenses of the MLA were aligned with respect to the detected core positions of the MCF, thereby taking into account variations of the core positions and pitch. All individually positioned lenses were then merged into a single 3D model of the MLA to prevent unnecessary double illumination in the overlap regions during the printing process. The structure was further automatically adapted to compensate for any tilt of the fiber end-facet. To reduce the required printing time, the MLA was divided into two parts: the first block of the model, up to just below the lens surfaces, was written with a distance between subsequent layers, i.e., a slicing distance, of 600 nm. For optimal printing quality, the remaining second model block, comprising the 19 lens surfaces of the MLA, was written with a slicing distance of 100 nm. The writing distance between subsequent lines, i.e., the hatching distance, was set to 100 nm throughout the full model.
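The benefit of the two-step slicing can be illustrated with a quick layer-count estimate. The text does not state where the coarse/fine boundary lies, so the 20 $\upmu$m cap thickness below is a hypothetical value chosen purely for illustration:

```python
total_height = 272.0  # MLA height in micrometers (Section 2.3)
cap_height = 20.0     # HYPOTHETICAL thickness of the fine lens-surface region
coarse_slice = 0.6    # bulk-block slicing distance in micrometers (600 nm)
fine_slice = 0.1      # lens-surface slicing distance in micrometers (100 nm)

coarse_layers = (total_height - cap_height) / coarse_slice
fine_layers = cap_height / fine_slice
uniform_fine = total_height / fine_slice

print(f"two-step slicing: {coarse_layers + fine_layers:.0f} layers "
      f"vs {uniform_fine:.0f} layers at 100 nm throughout")
```

Under this assumption the split reduces the layer count by roughly a factor of four, which is the kind of print-time saving the two-block strategy is after.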
The fabricated structure was afterwards developed in propylene-glycol-methyl-ether-acetate (PGMEA), flushed with isopropanol, and subsequently blow-dried. In the next stage, scanning electron microscopy (SEM) and vertically-scanned white-light interferometry (VSI) images of the structure were acquired to check the quality of the manufacturing process (see Figure 3). Figure 3: The top structure of the 3D printed MLA on top of the MCF end face acquired using the SEM technique. ### 2.5 Laboratory throughput measurement setup For measuring the throughput performance of the custom 3D-printed MCF, a set of opto-mechanical parts was constructed as an addition to the Königstuhl Observatory Opto-mechatronics Laboratory (KOOL) test-bed. The throughput setup is presented in Figure 4. The HeNe laser light (632 nm) passes through a 50:50 non-polarizing beamsplitter (BS) L1 (Thorlabs CM1-BS014) and is collimated using an achromat lens L2 (Thorlabs AC127-025-B-ML). Later on, the beam is split using another 50:50 non-polarizing BS L4 (Thorlabs CM1-BS014). One beam is focused down using a 100 mm achromat (AC254-100-B-ML) onto the CMOS detector (Thorlabs DCC1545M), and the other is routed through a flip mirror L6 to an achromat L7 (AC254-060-B-ML or AC254-100-B-ML, depending on the platescale) and focused down onto the 3D-printed MCF, which is mounted onto a 4-axis mount (Thorlabs MBT401D). After that, the fiber exit is re-imaged using a combination of achromats L8 and L9 (AC127-019-B-ML and AC127-050-B-ML) onto the CMOS detector (Thorlabs DCC1545M). To calculate the total throughput of the MCF including the coupling losses, a power meter (Thorlabs S120C) was used to perform the measurements and calibrate the absolute flux through the re-imaging system. Figure 4: The throughput experimental setup for measuring the efficiency of the custom IFU. A power meter is used to calibrate the throughput.
A set of achromat lenses (L2-L5-L7) is used for collimation and focusing of the beam, a beamsplitter (BS) and CMOS detectors (D1, D2) for imaging the near-field output of the MCF (L8, L9). ## 3 Results ### 3.1 Throughput efficiency results To assess the performance of the custom IFU system before installation into RHEA, laboratory tests were performed as described in section 2.5. The outcomes of these tests are presented here. As mentioned in section 2.5, the throughput measurements were monochromatic, using a HeNe laser at 632 nm. After a series of optical elements in the KOOL test-bed, the beam was focused down onto the 3D-printed MCF. Data frames with exposure times of a fraction of a second were collected with both achromats L7 (AC254-060-B-ML or AC254-100-B-ML), using the setup described above. The setup was able to sample the near-field of the MCF exit and filter the light between the adjacent cores of the fiber. Averaged dark data frames were recorded as well and subtracted from the data for further processing. To characterize the absolute performance of the IFU, two separate experiments were conducted; in the first, the total coupling efficiency of the light into each of the 19 cores was determined after the alignment of the cores on-axis with the injected beam. In the second experiment, the misalignment tolerances of the injected beam were measured by translating the injected beam in steps of 5 $\upmu$m with respect to the central core of the MCF. This was more representative of realistic on-sky conditions, where the star would be moving due to atmospheric perturbations. The results of the first experiment for the 18 mas platescale are shown in Figure 6. The average coupling efficiency was 21.45 $\pm$ 3% with a maximum of 30.89 $\pm$ 3% for core #11. The maximum measured throughput corresponds to 77.41% of the simulated value (40%) calculated with Zemax.
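The misalignment experiment described above admits a simple cross-check. The sketch below is not the Zemax POP simulation used in the paper; it approximates both the injected focal spot and the effective acceptance mode as identical Gaussians (the radius `w` of 12 μm is a hypothetical illustrative value, not a measured quantity) and evaluates the relative power coupling versus lateral offset as a discretized 2D overlap integral, checking it against the analytic result $\exp(-d^{2}/w^{2})$:

```python
import numpy as np

def gaussian_coupling(offset_um: float, w_um: float,
                      grid_um: float = 60.0, n: int = 601) -> float:
    """Power coupling between two identical Gaussian modes (1/e^2 intensity
    radius w_um) whose centers are laterally offset by offset_um, computed
    as a discretized 2D overlap integral."""
    x = np.linspace(-grid_um, grid_um, n)
    xx, yy = np.meshgrid(x, x)
    mode = np.exp(-(xx**2 + yy**2) / w_um**2)                 # acceptance mode field
    spot = np.exp(-((xx - offset_um)**2 + yy**2) / w_um**2)   # shifted focal spot
    overlap = np.sum(mode * spot)
    return overlap**2 / (np.sum(mode**2) * np.sum(spot**2))

w = 12.0  # illustrative spot radius in um (assumption, not a measured value)
for d in (0.0, 5.0, 10.0):
    eta = gaussian_coupling(d, w)
    print(f"offset {d:4.1f} um -> relative coupling {eta:.3f} "
          f"(analytic {np.exp(-d**2 / w**2):.3f})")
```

With this (assumed) spot radius the relative coupling falls to about half at a 10 μm offset, i.e., the same order as the measured $\sim$40% retention; a quantitative comparison would require the real Airy-pattern and lenslet geometry modeled in Zemax.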
For the 11 mas platescale, the coupling efficiency of the central core was 16.9 $\pm$ 3% (77.2% of the simulated value). The total throughput from all of the cores summed, representative of an unresolved target, was measured to be 41.5 $\pm$ 2%. The residual coupling losses are associated with the imperfectly polished fiber, Fresnel reflections ($\sim$4%), and mode-field mismatch at the focus. In Figure 7 the results of the second experiment are presented. This illustration shows the coupling tolerances as a function of off-axis target injection for the 18 mas case. This demonstrates the potential of 3D-printed MLA technology for misaligned targets, showing fairly good coupling efficiency even for an off-axis injection of $\sim$10 $\upmu$m, retaining $\sim$40% of the maximum throughput. Figure 5: Left panel: 2D image of the injected beam in logarithmic color scale for better clarity, as measured in the laboratory. Right panel: Intensity profile of the injected point spread function normalized to its maximum, from the 2D image data. Figure 6: Coupling efficiency for all of the cores of the MCF (see Figure 1 for the numbering of the cores). This is shown for the 18 mas platescale. Figure 7: Coupling efficiency of the central core of the MCF as a function of off-axis target, compared with the simulated data from Zemax. Results are normalized to the maximum coupling efficiency including the errors (smaller than the data points). ## 4 Conclusions In this work we presented a novel IFU system upgrade for RHEA at the Subaru telescope. This IFU is composed of a custom MCF with 3D printed micro-lenses on top of the cores to increase the coupling efficiency for off- and on-axis targets from SCExAO at the Subaru 8 m telescope. The IFU system is optimized using the Zemax POP module for on-sky angular dimensions of 11 and 18 mas using an Airy profile beam produced by the visual arm of SCExAO.
The custom MCF is composed of 19 cores in the same cladding with a core-to-core spacing of 30 $\upmu$m and a 6.1 $\upmu$m MFD at 650 nm ($\mathrm{1/e^{2}}$), which leads to negligible cross-coupling between the cores. The cores are positioned in a hexagonal formation and operate in the single-mode regime above 600 nm. The structure of the MLA was manufactured with two-photon lithography and 3D-printed on top of the cores of the MCF, significantly enhancing the throughput of light into the fiber cores from levels of a few percent to a maximum of 30.89 $\pm$ 3% for on-axis targets at a platescale of 18 mas. Furthermore, the custom MLA reduced the off-axis light losses even for a 10 $\upmu$m lateral injection offset. The total throughput summed across the MLA, representative of a single unresolved target, was 41.5 $\pm$ 2%. The laboratory results correspond to $\sim$77% of the simulated results. The difference in throughput performance from the simulated results is likely associated with the imperfectly polished MCF, Fresnel reflections ($\sim$4%), and mode-field mismatch at the focus. These laboratory measurements were performed at the KOOL test-bed with a HeNe laser source (632 nm). ## 5 Further work Plans for further work include a separate 3D-printed MCF, optimized with Zemax simulations for the 18 mas platescale only. Both IFU systems will be tested on the KOOL test-bed, including the effect of atmospheric turbulence, using the AO system of the KOOL infrastructure. Future work will be to integrate the fibers into RHEA and perform on-sky tests on a variety of targets (resolved and unresolved stars, confirmed exoplanets, spectroscopic standard stars, and double star systems) in order to probe its scientific potential. ###### Acknowledgements. T.A. is a fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD) and is supported by the Cotutelle International Macquarie University Research Excellence Scholarship.
P.M., M.B., Y.X. and C.K. are supported by the Bundesministerium für Bildung und Forschung (BMBF), joint project PRIMA (13N14630), the Helmholtz International Research School for Teratronics (HIRST), and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy via the Excellence Cluster 3D Matter Made to Order (EXC2082/1-390761711). R.J.H. and P.H. are supported by the Deutsche Forschungsgemeinschaft (DFG) through project 326946494, ’Novel Astronomical Instrumentation through photonic Reformatting’. T.B. & S.Y. are supported by the European Union’s Horizon 2020 grant 730890 and by the UK Science and Technology Facilities Council grant ST/N000544/1. S.Y.H. is supported by the NASA Hubble Fellowship grant #HST-HF2-51436.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. The development of SCExAO was supported by the JSPS (Grant-in-Aid for Research #23340051, #26220704, #23103002), the Astrobiology Center (ABC) of the National Institutes of Natural Sciences, Japan, the Mt Cuba Foundation, the director’s contingency fund at Subaru Telescope, and the OptoFab node of the Australian National Fabrication Facility. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This research made use of Astropy, a community-developed core Python package for Astronomy [23, 24], Numpy [25] and Matplotlib [26]. Furthermore, this publication makes use of data generated at the Königstuhl Observatory Opto-mechatronics Laboratory (KOOL), which is run at the Max-Planck-Institute for Astronomy (MPIA, PI Jörg-Uwe Pott) in Heidelberg, Germany.
KOOL is a joint project of the MPIA, the Landessternwarte Königstuhl (LSW, Univ. Heidelberg, Co-I Philipp Hottinger), and the Institute for System Dynamics (ISYS, Univ. Stuttgart, Co-I Prof. Oliver Sawodny). KOOL is partly supported by the German Federal Ministry of Education and Research (BMBF) via individual project grants. ## References * [1] P. Massey and M. M. Hanson, “Astronomical Spectroscopy,” Planets, Stars and Stellar Systems. Volume 2: Astronomical Techniques, Software and Data, 35 (2013). * [2] E. N. Hubbard, J. R. P. Angel, and M. S. Gresham, “Operation of a long fused silica fiber as a link between telescope and spectrograph,” ApJ 229, 1074–1078 (May 1979). * [3] J. R. Powell, “Application of optical fibres to astronomical instrumentation,” Proc. SPIE 445, 77–84 (1984). * [4] J. Allington-Smith, “Basic principles of integral field spectroscopy,” New A Rev. 50, 244–251 (June 2006). * [5] J. Ge, J. R. P. Angel, and J. C. Shelton, “Optical spectroscopy with a near-single-mode fiber-feed and adaptive optics,” Proc. SPIE 3355, 253–263 (1998). * [6] S. M. Croom, J. S. Lawrence, J. Bland-Hawthorn, et al., “The Sydney-AAO Multi-object Integral field spectrograph,” MNRAS 421, 872–893 (Mar. 2012). * [7] S. G. Leon-Saval, C. H. Betters, and J. Bland-Hawthorn, “The Photonic TIGER: a multicore fiber-fed spectrograph,” Proc. SPIE 8450, 84501K (2012). * [8] M. Tamura, H. Suto, J. Nishikawa, et al., “Infrared Doppler instrument for the Subaru Telescope (IRD),” Proc. SPIE 8446, 84461T (2012). * [9] N. Jovanovic, C. Schwab, O. Guyon, et al., “Efficient injection from large telescopes into single-mode fibres: Enabling the era of ultra-precision astronomy,” A&A 604, A122 (Aug. 2017). * [10] O. Guyon, R. Belikov, E. Bendek, et al., “Wavefront Sensing and Control R&D on the SCExAO Testbed,” American Astronomical Society 51, 280.06 (Jan. 2020). * [11] T. Feger, C. Bacigalupo, T. R.
Bedding, et al., “RHEA: the ultra-compact replicable high-resolution exoplanet and Asteroseismology spectrograph,” Proc. SPIE 9147, 91477I (Aug 2014). * [12] A. D. Rains, M. J. Ireland, N. Jovanovic, et al., “Precision single mode fibre integral field spectroscopy with the RHEA spectrograph,” Proc. SPIE 9908, 990876 (Aug 2016). * [13] P.-I. Dietrich, R. J. Harris, M. Blaicher, et al., “Printed freeform lens arrays on multi-core fibers for highly efficient coupling in astrophotonic systems,” Optics Express 25, 18288 (Jul 2017). * [14] P. Hottinger, R. J. Harris, P.-I. Dietrich, et al., “Micro-lens array as tip-tilt sensor for single-mode fiber coupling,” in [SPIE Astronomical Telescopes + Instrumentation], International Society for Optics and Photonics (2018). * [15] N. Jovanovic, F. Martinache, O. Guyon, et al., “The Subaru Coronagraphic Extreme Adaptive Optics System: Enabling High-Contrast Imaging on Solar-System Scales,” PASP 127, 890 (Sept. 2015). * [16] Y. Hayano, H. Takami, O. Guyon, et al., “Current status of the laser guide star adaptive optics system for Subaru Telescope,” Proc. SPIE 7015, 701510 (2008). * [17] Y. Hayano, H. Takami, S. Oya, et al., “Commissioning status of Subaru laser guide star adaptive optics system,” Proc. SPIE 7736, 77360N (2010). * [18] Y. Minowa, Y. Hayano, S. Oya, et al., “Performance of Subaru adaptive optics system AO188,” Proc. SPIE 7736, 77363N (2010). * [19] Zemax, “OpticStudio - Zemax,” (2016). * [20] P.-I. Dietrich, M. Blaicher, I. Reuter, et al., “In situ 3D nanoprinting of free-form coupling elements for hybrid photonic integration,” Nature Photonics 12(4), 241–247 (2018). * [21] Nanoscribe GmbH, “IP photoresists,” (2018). * [22] Menlo Systems GmbH, “C-fiber 780 femtosecond erbium laser,” (2020). * [23] Astropy Collaboration, T. P. Robitaille, E. J. Tollerud, et al., “Astropy: A community Python package for astronomy,” A&A 558, A33 (Oct. 2013). * [24] A. M. Price-Whelan, B. M. Sipőcz, H. M.
Günther, et al., “The Astropy Project: Building an Open-science Project and Status of the v2.0 Core Package,” AJ 156, 123 (Sept. 2018). * [25] S. van der Walt, S. C. Colbert, and G. Varoquaux, “The NumPy Array: A Structure for Efficient Numerical Computation,” Computing in Science and Engineering 13, 22–30 (Mar. 2011). * [26] J. D. Hunter, “Matplotlib: A 2D Graphics Environment,” Computing in Science and Engineering 9, 90–95 (May 2007).
# Amicable Heron Triangles Iwan Praton and Nart Shalqini Franklin & Marshall College <EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract. A Heron triangle is a triangle whose side lengths and area are integers. Two Heron triangles are _amicable_ if the perimeter of one is the area of the other. We show, using elementary techniques, that there is only one pair of amicable Heron triangles. ## Introduction A Heron triangle is a triangle whose side lengths and area are integers. They are named for the Greek mathematician Heron (or Hero) of Alexandria, who is usually credited with inventing the formula for the area $A$ of a triangle in terms of its side lengths $a,b,c$: $A=\sqrt{s(s-a)(s-b)(s-c)};$ here $s$ is the semiperimeter $\frac{1}{2}(a+b+c)$. Heron triangles form a popular topic (e.g., [1], [2], [6], [7]), and new facts about them are still being discovered. For example, Hirakawa and Matsumura [5] showed recently that there is a unique pair (up to scaling) of right and isosceles Heron triangles with the same perimeter and the same area. Surprisingly, the proof uses sophisticated tools from the theory of hyperelliptic curves. The result has been featured in a Numberphile video [4]. The video focuses mostly on _equable_ Heron triangles (called Super-Hero triangles in the video), i.e., Heron triangles where the perimeter is equal to the area. This seems analogous to _perfect numbers_. Recall that a perfect number is a positive integer whose aliquot sum is equal to itself. (The aliquot sum of $n$ is the sum of the divisors of $n$, excluding $n$.) In equable Heron triangles, the perimeter and area play the roles of $n$ and its aliquot sum. Venturing beyond a single number, recall that a pair of positive integers $n$ and $m$ form an _amicable pair_ if the aliquot sum of $n$ is equal to $m$ and the aliquot sum of $m$ is equal to $n$. 
Analogously, we define two Heron triangles $H_{1}$ and $H_{2}$ to be _amicable_ if the area of $H_{1}$ is equal to the perimeter of $H_{2}$ and the perimeter of $H_{1}$ is equal to the area of $H_{2}$. Amicable Heron triangles exist: the triangles with side lengths $(3,25,26)$ and $(9,12,15)$ form an example. They are an unusual-looking pair (see Figure 1). Figure 1. A unique pair of triangles. Somewhat surprisingly, there is no other example. ###### Theorem. There is only one pair of amicable Heron triangles: the $(3,25,26)$ and $(9,12,15)$ triangles. In contrast to [5], we use completely elementary methods to prove this result. ## Proofs We begin by establishing our notation. Suppose we have a Heron triangle with side lengths $a,b,c$. Its semiperimeter $s=(a+b+c)/2$ is an integer (the perimeter of a Heron triangle is always even). We define $x=s-a$, $y=s-b$, $z=s-c$, and assume without loss of generality that $x\leq y\leq z$. Then Heron’s formula for the area of the triangle becomes $\sqrt{sxyz}$. ###### Lemma 1. Suppose $H$ is one of a pair of amicable Heron triangles with perimeter $p$ and area $A$. Then $A$ divides $2p^{2}$. ###### Proof. For any Heron triangle, $\frac{2A^{2}}{p}=\frac{2sxyz}{2s}=xyz$ is an integer. Apply this result to the partner triangle of $H$. Since the perimeter of $H$ is equal to the area of its partner and the area of $H$ is equal to the perimeter of its partner, we get the result of the lemma. ∎ We now take care of the case where both amicable triangles are equable. It is well known that there are only five equable Heron triangles [3]: the triangles with side lengths $(5,12,13),(6,8,10),(6,25,29),(7,15,20)$, and $(9,10,17)$. Their perimeters (and thus their areas) are all different, so none of them form an amicable pair; in particular, equable triangles are not amicable. We conclude that if we have an amicable pair, then for one of the triangles the perimeter is larger than the area. These triangles are long and skinny, similar to the second triangle in Figure 1. There are not many such triangles.
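The amicable pair above is quickly verified from Heron's formula. A short script (plain Python, exact integer arithmetic via `math.isqrt`; a sanity check, not part of the proof):

```python
from math import isqrt

def perimeter_and_area(a: int, b: int, c: int):
    """Return (perimeter, area) of a Heron triangle via Heron's formula.
    Assumes the perimeter is even (true for every Heron triangle);
    raises ValueError if the area is not an integer."""
    p = a + b + c
    s = p // 2
    sq = s * (s - a) * (s - b) * (s - c)   # area^2 by Heron's formula
    A = isqrt(sq)
    if A * A != sq:
        raise ValueError("not a Heron triangle")
    return p, A

p1, A1 = perimeter_and_area(3, 25, 26)   # -> (54, 36)
p2, A2 = perimeter_and_area(9, 12, 15)   # -> (36, 54)
assert A1 == p2 and A2 == p1             # the amicable condition
print(p1, A1, p2, A2)
```

The area of each triangle equals the perimeter of the other, exactly as the definition requires.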
From now on, let $H$ denote a triangle of this kind. ###### Lemma 2. For $H$ as above, $4(x+y+z)>xyz.$ ###### Proof. The perimeter $2(x+y+z)$ of $H$ is larger than its area $\sqrt{sxyz}$, so squaring gives $4(x+y+z)^{2}>sxyz=(x+y+z)xyz$; dividing by $x+y+z$ yields the claim. ∎ These two lemmas are our main tool for cutting down the possible values of $x,y,$ and $z$. In fact, they suffice to show that there are only finitely many. ###### Lemma 3. Let $H$ be as above. Then there are only finitely many values of $x,y,z$ that satisfy Lemmas 1 and 2. ###### Proof. We first show that $x\leq 3$. If $x\geq 4$, then by Lemma 2, $4(z+z+z)\geq 4(x+y+z)>xyz\geq 4\cdot 4\cdot z=16z,$ a contradiction, so $x\leq 3$. Now we show that $y\leq 9$. If $y\geq 10$, then $4(3+z+z)\geq 4(x+y+z)>xyz\geq 10z\implies 12+8z>10z\implies 6>z,$ which is a contradiction since $z\geq y\geq 10$. We conclude that there are only finitely many values of $x$ and $y$. We now tackle $z$. Lemma 1 states that $2p^{2}/A$ is an integer, which means $\frac{8s^{2}}{\sqrt{sxyz}}\in\mathbb{N}\implies\frac{64s^{4}}{sxyz}\in\mathbb{N}\implies\frac{64(x+y+z)^{3}}{xyz}\in\mathbb{N}\implies\frac{64(x+y+z)^{3}}{z}\in\mathbb{N}.$ Let $u=x+y$. Then $64(z+u)^{3}/z=64(z^{2}+3zu+3u^{2}+u^{3}/z)$ is an integer, which implies that $64u^{3}/z$ is an integer. So $z$ must be a divisor of $64(x+y)^{3}$, and since there are only finitely many values of $64(x+y)^{3}$, there are only finitely many values of $z$ also. ∎ We now need to investigate only a finite number of cases. It turns out that the possibilities can be cut down considerably by using the requirement that the area of $H$ is an integer. We provide an example here; other cases are similar. Suppose $x=1$ and $y=4$. The $2p^{2}/A$ calculation in Lemma 3 shows that $4z$ divides $64(z+5)^{3}$; we conclude that $z$ is a divisor of $2^{4}\cdot 5^{3}$. Since $z\geq 4$, there are $18$ possibilities for $z$. The area of $H$ is $\sqrt{4z(z+5)}$; among the 18 possible values of $z$, only $z=4$ produces an integer area.
Thus this case produces only one possibility: $x=1$, $y=4$, and $z=4$. Indeed, when we check all possible values of $x,y,z$, we come up with just four cases that satisfy all the requirements mentioned above: * • $x=1,y=4,z=4$, producing $H$ with side lengths $5,5,8$. * • $x=1,y=2,z=3$, producing $H$ with side lengths $3,4,5$. * • $x=1,y=2,z=24$, producing $H$ with side lengths $3,25,26$. * • $x=1,y=2,z=864$, producing $H$ with side lengths $3,865,866$. The first two cases are easy to eliminate. The first case produces a triangle of area $12$ and perimeter $18$. Its amicable partner, if it exists, must have semiperimeter $6$. This yields three possibilities: $x=1,y=2,z=3$ or $x=1,y=1,z=4$ or $x=2,y=2,z=2$, none of which yields an area of $18$. The second case produces a triangle of area $6$, but it is impossible to have a partner triangle of perimeter $6$, the only possibility being an equilateral triangle with side length $2$, which has an irrational area. If $H$ is the fourth triangle listed above, then its perimeter is 1734 and its area is 1224. Thus its partner triangle has perimeter 1224 and area 1734. Therefore for the partner triangle we have $xyz=1734^{2}/612=4913$, an odd number, which implies that $x,y,$ and $z$ are all odd. This contradicts the fact that $x+y+z$, the semiperimeter of the partner triangle, is the even number 612. Therefore $H$ has no partner triangle. The third case produces the amicable pair mentioned in the Theorem. This concludes the proof of the Theorem: there is a unique pair of amicable Heron triangles. ## References * [1] J. Carlson, Determination of Heronian triangles. Fibonacci Quart. 8 (1970), no. 5, 499–506, 551. * [2] Wm. F. Cheney, Heronian Triangles. Amer. Math. Monthly 36 (1929), no. 1, 22–28. * [3] L. Dickson, History of the Theory of Numbers, Volume II. Dover Publications (2005). * [4] B. Haran, Superhero triangles, `https://www.youtube.com/watch?v=UIjeCKPHbso` * [5] Y. Hirakawa and H. Matsumura, A unique pair of triangles. J.
Number Theory, 194 (2019), 297–302. * [6] R. Nelsen, Almost Equilateral Heronian Triangles. Math. Mag. 93 (2020), no. 5, 378–379. * [7] P. Yiu, Construction of indecomposable Heronian triangles. Rocky Mountain J. Math. 28 (1998), no. 3, 1189–1202.
# On multiplicative energy of subsets of varieties ††This work is supported by the Russian Science Foundation under grant 19–11–00001. Shkredov I.D. Annotation. We obtain a non-trivial upper bound for the multiplicative energy of any sufficiently large subset of a subvariety of a finite algebraic group. We also find some applications of our results to growth of conjugacy classes, estimates of exponential sums, and the restriction phenomenon. ## 1 Introduction In papers [2], [3], [5], [8], [9], [20], [21], [22] and in many others, the authors study growth properties of rather general subsets $A$ of different groups ${\mathbf{G}}$ (basically, of Lie type). One of the difficulties concerning growth of $A$ is that, in principle, $A$ can live in a subvariety of ${\mathbf{G}}$, see [8], [9], [20]. In this article we restrict ourselves to the case when $A$ indeed belongs to a subvariety and consider the most natural combinatorial problem concerning growth of $A$, namely, the basic question of obtaining upper bounds for the multiplicative energy (see, e.g., [30]) of $A$ $\mathsf{E}(A):=|\\{(a,b,c,d)\in A^{4}~{}:~{}ab^{-1}=cd^{-1}\\}|\,.$ Our result is the following. ###### Theorem 1 Let ${\mathbf{G}}$ be a finite algebraic group over $\mathbb{F}_{q}$, $V\subseteq{\mathbf{G}}$ be a variety and $\Gamma$ be a maximal algebraic subgroup such that a coset of $\Gamma$ is contained in $V$. Then for any $A\subseteq V$, $|A|\geqslant|\Gamma|^{1+\varepsilon}$ and all sufficiently large $q$ one has $\mathsf{E}(A)\ll|A|^{3-\delta}\,,$ (1) where $\delta=\delta(\varepsilon,{\rm dim}(V))>0$ and the implied constant in (1) depends on ${\rm dim}(V),\deg(V)$, ${\rm dim}({\mathbf{G}}),\deg({\mathbf{G}})$ and the dimension $n$ of the ambient space $\mathbb{F}_{q}^{n}\supseteq{\mathbf{G}}$. In particular, bound (1) holds for a variety $V$ if and only if $V$ does not contain a coset of an algebraic subgroup of size $\Omega(|V|)$.
The theorem above gives us a non-trivial upper bound for the operator norm of $\widehat{A}$ (all definitions can be found in section 2) for any sufficiently large subset of a Chevalley group living in a variety different from the maximal parabolic subgroup. ###### Theorem 2 Let ${\mathbf{G}}_{r}(\mathbb{F}_{q})$ be a finite Chevalley group with rank $r$ and odd $q$ and let $\Pi\leqslant{\mathbf{G}}_{r}(\mathbb{F}_{q})$ be its maximal (by size) parabolic subgroup. Also, let $V\subset{\mathbf{G}}_{r}(\mathbb{F}_{q})$ be a variety different from all shifts of conjugates of $\Pi$. Then for any $A\subseteq V$, $|A|\geqslant|\Pi|q^{-1+c}$, $c>0$ one has $\|\widehat{A}(\rho)\|_{o}\leqslant|A|^{1-\delta}\,,$ (2) where $\delta=\delta(c,r)>0$ and $\rho$ is any non-trivial unitary representation of ${\mathbf{G}}_{r}(\mathbb{F}_{q})$. Bound (2) implies the uniform distribution of $A$ among any sets with small product, see Proposition 46 below. It is interesting that all our conditions in Theorems 1, 2 concerning the intersection of $A$ with subgroups are formulated in terms of $V$, not of $A$. Similarly, we do not require that $A$ be a generating set of ${\mathbf{G}}$. This distinguishes our result from the Larsen–Pink machinery, see [20] and also [3]. Theorem 1 has a natural-looking algebraic consequence (see the rigorous formulation in section 4). ###### Corollary 3 Suppose that ${\mathbf{G}}$ is a finite algebraic group, and $V\subseteq{\mathbf{G}}$ is a variety. Then the size of the maximal (by cardinality) subgroup of $V$ is comparable with the size of the maximal (by cardinality) algebraic subgroup of $V$. In other words, if $\Gamma_{a}$ is a maximal by cardinality algebraic subgroup in shifts of $V$, then for any $x$ and $\Gamma\leqslant{\mathbf{G}}$ such that $x\Gamma\subseteq V$ one has $|\Gamma|=O(|\Gamma_{a}|)$. In the big-$O$ here we assume that $q\to\infty$ and the implied constant depends on ${\rm dim}(V),\deg(V),{\rm dim}({\mathbf{G}}),\deg({\mathbf{G}})$ and the dimension of the ambient affine space.
We obtain several applications of Theorem 1. In the first one we take our variety $V$ to be the (Zariski closure of a) conjugacy class $C$ of a finite algebraic group. In [21], [22] the authors show that for any such $C$ one has $|CC|\gg\min\\{|C|^{2-o(1)},|{\mathbf{G}}|\\}$. We prove that in certain cases one has $|AA|\gg|A|^{1+c}$, $c>0$ for any sufficiently large subset $A$ of $C$. Of course, Theorem 2 is based on a purely non-commutative phenomenon of growth in groups and, say, estimate (2) does not hold in $\mathbb{F}^{n}_{q}$. Nevertheless, in section 5 we obtain a purely commutative application to so-called restriction problems. Here we deal with the restriction phenomenon in finite fields; see [14] and the survey [12]. We prove (all definitions can be found in section 6) ###### Theorem 4 Let $V\subseteq\mathbb{F}^{n}_{q}$ be a variety, $d={\rm dim}(V)$. Suppose that $V$ does not contain any line. Then $R^{*}(\frac{4}{3-c}\rightarrow 4)\lesssim 1$, where $c=c(d)>0$. In papers [11]–[14] and [31] the authors consider particular varieties such as cones, paraboloids and spheres. Our result is weaker, but on the other hand we deal with an almost arbitrary variety $V$. We thank Nikolai Vavilov for useful discussions. We deeply thank Brendan Murphy for his idea to study energies of subsets of various varieties. ## 2 Definitions Let ${\mathbf{G}}$ be a group with the identity $1$. Given two sets $A,B\subset{\mathbf{G}}$, define the product set of $A$ and $B$ as $AB:=\\{ab~{}:~{}a\in{A},\,b\in{B}\\}\,.$ In a similar way we define the higher product sets, e.g., $A^{3}$ is $AAA$. Let $A^{-1}:=\\{a^{-1}~{}:~{}a\in A\\}$. As usual, having two subsets $A,B$ of a group ${\mathbf{G}}$, denote by $\mathsf{E}(A,B)=|\\{(a,a_{1},b,b_{1})\in A^{2}\times B^{2}~{}:~{}a^{-1}b=a^{-1}_{1}b_{1}\\}|$ the common energy of $A$ and $B$.
Clearly, $\mathsf{E}(A,B)=\mathsf{E}(B,A)$ and by the Cauchy–Schwarz inequality $\mathsf{E}(A,B)|A^{-1}B|\geqslant|A|^{2}|B|^{2}\,.$ (3) More generally, define $\mathsf{E}^{L}_{k}(A)=|\\{(a_{1},\dots,a_{k},b_{1},\dots,b_{k})\in A^{2k}~{}:~{}a^{-1}_{1}b_{1}=\dots=a^{-1}_{k}b_{k}\\}|\,,$ and, similarly, one can define $\mathsf{E}^{R}_{k}(A)$. For $k=2$, we have $\mathsf{E}^{L}_{k}(A)=\mathsf{E}^{R}_{k}(A)$ but for larger $k$ this is not the case. If there is no difference between $\mathsf{E}^{L}_{k}(A)$ and $\mathsf{E}^{R}_{k}(A)$, then we write just $\mathsf{E}_{k}(A)$. In this paper we use the same letter to denote a set $A\subseteq{\mathbf{G}}$ and its characteristic function $A:{\mathbf{G}}\to\\{0,1\\}$. First of all, we recall some notions and simple facts from representation theory, see, e.g., [16] or [24]. For a finite group ${\mathbf{G}}$ let $\widehat{{\mathbf{G}}}$ be the set of all irreducible unitary representations of ${\mathbf{G}}$. It is well known that the size of $\widehat{{\mathbf{G}}}$ coincides with the number of conjugacy classes of ${\mathbf{G}}$. For $\rho\in\widehat{{\mathbf{G}}}$ denote by $d_{\rho}$ the dimension of this representation. Thus ${\mathbf{G}}$ is a quasi-random group in the sense of Gowers (see [5]) iff $d_{\rho}\geqslant|{\mathbf{G}}|^{\varepsilon}$, where $\varepsilon>0$ and $\rho$ is any non-trivial irreducible unitary representation of ${\mathbf{G}}$. We write $\langle\cdot,\cdot\rangle$ for the corresponding Hilbert–Schmidt scalar product $\langle A,B\rangle=\langle A,B\rangle_{HS}:=\mathrm{tr}(AB^{*})$, where $A,B$ are any two matrices of the same size. Put $\|A\|=\sqrt{\langle A,A\rangle}$. Finally, it is easy to check that for any matrices $A,B$ one has $\|AB\|\leqslant\|A\|_{o}\|B\|$ and $\|A\|_{o}\leqslant\|A\|$, where the operator $l^{2}$-norm $\|A\|_{o}$ is just the maximal singular value of $A$.
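Inequality (3) is easy to sanity-check by brute force in a small non-commutative group. The following sketch (plain Python, purely illustrative and not part of the paper) realizes $S_{3}$ as permutation tuples, computes $\mathsf{E}(A,B)$ and $A^{-1}B$ straight from the definitions, and verifies both the symmetry $\mathsf{E}(A,B)=\mathsf{E}(B,A)$ and the Cauchy–Schwarz bound:

```python
from itertools import permutations, product

# The symmetric group S3: all permutations of (0, 1, 2) as tuples.
G = list(permutations(range(3)))

def mul(p, q):          # composition (p after q): (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):             # inverse permutation
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def energy(A, B):       # E(A,B) = #{(a, a1, b, b1) : a^{-1} b = a1^{-1} b1}
    return sum(1 for a, a1, b, b1 in product(A, A, B, B)
               if mul(inv(a), b) == mul(inv(a1), b1))

A = G[:4]               # an arbitrary 4-element subset
B = G[2:]               # an arbitrary 4-element subset
AinvB = {mul(inv(a), b) for a, b in product(A, B)}
E = energy(A, B)
assert E * len(AinvB) >= len(A) ** 2 * len(B) ** 2   # inequality (3)
print(E, len(AinvB))
```

The symmetry holds because $a^{-1}b=a_{1}^{-1}b_{1}$ is equivalent, after taking inverses, to $b^{-1}a=b_{1}^{-1}a_{1}$.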
For any function $f:{\mathbf{G}}\to\mathbb{C}$ and $\rho\in\widehat{{\mathbf{G}}}$ define the matrix $\widehat{f}(\rho)$, which is called the Fourier transform of $f$ at $\rho$, by the formula $\widehat{f}(\rho)=\sum_{g\in{\mathbf{G}}}f(g)\rho(g)\,.$ (4) Then the inversion formula holds, $f(g)=\frac{1}{|{\mathbf{G}}|}\sum_{\rho\in\widehat{{\mathbf{G}}}}d_{\rho}\langle\widehat{f}(\rho),\rho(g^{-1})\rangle\,,$ (5) and the Parseval identity is $\sum_{g\in{\mathbf{G}}}|f(g)|^{2}=\frac{1}{|{\mathbf{G}}|}\sum_{\rho\in\widehat{{\mathbf{G}}}}d_{\rho}\|\widehat{f}(\rho)\|^{2}\,.$ (6) The main property of the Fourier transform is the convolution formula $\widehat{f*g}(\rho)=\widehat{f}(\rho)\widehat{g}(\rho)\,,$ (7) where the convolution of two functions $f,g:{\mathbf{G}}\to\mathbb{C}$ is defined as $(f*g)(x)=\sum_{y\in{\mathbf{G}}}f(y)g(y^{-1}x)\,.$ Given a function $f:{\mathbf{G}}\to\mathbb{C}$ and a positive integer $k$, we write $f^{(k)}=f^{(k-1)}*f$ for the $k$th convolution of $f$. Now let $k\geqslant 2$ be an integer and $f_{j}:{\mathbf{G}}\to\mathbb{C}$, $j\in[2k]$ be any functions. Denote by $\mathcal{C}$ the operator of complex conjugation. As in [27] define $\mathsf{T}_{k}(f_{1},\dots,f_{2k})=\frac{1}{|{\mathbf{G}}|}\sum_{\rho\in\widehat{{\mathbf{G}}}}d_{\rho}\langle\prod_{j=1}^{k}\mathcal{C}^{j}\widehat{f}_{j}(\rho),\prod_{j=k+1}^{2k}\mathcal{C}^{j}\widehat{f}_{j}(\rho)\rangle\,.$ (8) Put $\mathsf{T}_{k}(f)=\mathsf{T}_{k}(f,\dots,f)$. For example, we clearly have $\mathsf{T}_{2}(A)=\mathsf{E}(A)$. It is easy to see that $\mathsf{T}^{1/2k}_{k}(f)$ defines a norm of a function $f$ (see [27]). This fact follows from the following inequality [27, Lemma 10] $\mathsf{T}^{2k}_{k}(f_{1},\dots,f_{2k})\leqslant\prod_{j=1}^{2k}\mathsf{T}_{k}(f_{j})\,.$ (9) In particular, $\mathsf{E}(A,A^{-1})\leqslant\mathsf{E}(A)$. Now let us say a few words about varieties.
Given a field $\mathbb{F}$, define an (affine) variety in $\mathbb{F}^{n}$ to be a set of the form $V=\{(x_{1},\dots,x_{n})\in\mathbb{F}^{n}\;:\;p_{j}(x_{1},\dots,x_{n})=0\,\mbox{ for all }j\}\,,$ where $p_{j}\in\mathbb{F}[x_{1},\dots,x_{n}]$. Let us recall some basic properties of varieties. The general theory of varieties and schemes can be found, e.g., in [6]. The union of any finite number of varieties is, clearly, a variety, and the intersection of any number of varieties is a variety as well. Given a set $X$, we denote by $\mathrm{Zcl}(X)$ the minimal (by inclusion) variety containing $X$. A variety is irreducible if it is not the union of two proper subvarieties. Every variety has a unique (up to the order of the components) decomposition into finitely many irreducible components [6]. The dimension of $V$ is ${\rm dim}(V)=\max\{n\;:\;V\supseteq X_{n}\supset X_{n-1}\supset\dots\supset X_{0}\neq\emptyset\}\,,$ where $X_{j}$ are irreducible subvarieties of $V$. We will frequently use the simple fact that if $V_{1}\subseteq V_{2}$ are two varieties and $V_{2}$ is irreducible, then either $V_{1}=V_{2}$, or ${\rm dim}(V_{1})<{\rm dim}(V_{2})$. A variety is absolutely irreducible if it is irreducible over $\overline{\mathbb{F}}$. In this paper we consider only such varieties. We define the degree of an irreducible variety $V$ with ${\rm dim}(V)=d$ as in [7], namely, $\deg(V)=\sup\{|L\cap V|<\infty\;:\;L\mbox{ is an }(n-d)\mbox{-dimensional affine subspace of }\mathbb{F}^{n}\}\,.$ For an arbitrary variety $V$ we define $\deg(V)$ to be the sum of the degrees of its irreducible components. Recall the generalized Bézout theorem (see [7, Theorem 1]): for any varieties $U,V$ one has $\deg(U\cap V)\leqslant\deg(U)\deg(V)\,.$ (10) The signs $\ll$ and $\gg$ are the usual Vinogradov symbols. If we want to underline the dependence on a parameter $M$, then we write $\ll_{M}$ and $\gg_{M}$. All logarithms are to base $2$.
Sometimes we allow ourselves to lose logarithmic powers of $|\mathbb{F}|$. In this situation we write $\lesssim$ and $\gtrsim$ instead of $\ll$, $\gg$. ## 3 On non–commutative Gowers norms Let ${\mathbf{G}}$ be a group and $A\subseteq{\mathbf{G}}$ be a finite set. Let $\|A\|_{\mathcal{U}^{k}}$ be the non–normalized Gowers $k$th norm [4] of the characteristic function of $A$ (in multiplicative form), see, e.g., [25]: $\|A\|_{\mathcal{U}^{k}}=\sum_{x_{0},x_{1},\dots,x_{k}\in{\mathbf{G}}}\,\prod_{\vec{\varepsilon}\in\{0,1\}^{k}}A\left(x_{0}x^{\varepsilon_{1}}_{1}\dots x^{\varepsilon_{k}}_{k}\right)\,,$ where $\vec{\varepsilon}=(\varepsilon_{1},\dots,\varepsilon_{k})$. For example, $\|A\|_{\mathcal{U}^{2}}=\sum_{x_{0},x_{1},x_{2}\in{\mathbf{G}}}A(x_{0})A(x_{0}x_{1})A(x_{0}x_{2})A(x_{0}x_{1}x_{2})=\mathsf{E}(A)$ is the energy of $A$, and $\|A\|_{\mathcal{U}^{1}}=|A|^{2}$. For any $\vec{s}=(s_{1},\dots,s_{k})\in{\mathbf{G}}^{k}$ put $A_{\vec{s}}(x)=\prod_{\vec{\varepsilon}\in\{0,1\}^{k}}A\left(xs^{\varepsilon_{1}}_{1}\dots s^{\varepsilon_{k}}_{k}\right)\,,$ (11) and similarly for an arbitrary function $f:{\mathbf{G}}\to\mathbb{C}$, namely, $f_{\vec{s}}(x)=\prod_{\vec{\varepsilon}\in\{0,1\}^{k}}\mathcal{C}^{\varepsilon_{1}+\dots+\varepsilon_{k}}f\left(xs^{\varepsilon_{1}}_{1}\dots s^{\varepsilon_{k}}_{k}\right)\,,$ where $\mathcal{C}$ is the operator of complex conjugation. E.g., $A_{s}(x)=A(x)A(xs)$ or, in other words, $A_{s}=A\cap(As^{-1})$. Then, obviously, $\|A\|_{\mathcal{U}^{k}}=\sum_{\vec{s}}|A_{\vec{s}}|\,.$ (12) Also note that $\|A\|_{\mathcal{U}^{k+1}}=\sum_{\vec{s}}|A_{\vec{s}}|^{2}\,.$ (13) Moreover, the induction property for Gowers norms holds (it follows from the definitions, or see [4]): $\|A\|_{\mathcal{U}^{k+1}}=\sum_{s\in A^{-1}A}\|A_{s}\|_{\mathcal{U}^{k}}\,,$ (14) e.g., in particular, $\|A\|_{\mathcal{U}^{3}}=\sum_{s\in A^{-1}A}\mathsf{E}(A_{s})\,.$ The Gowers norms enjoy the following weak commutativity property.
Namely, let $k=n+m$, and $\vec{s}=(s_{1},\dots,s_{k})=(\vec{u},\vec{v})$, where the vectors $\vec{u},\vec{v}$ have lengths $n$ and $m$, respectively. We have $\|A\|_{\mathcal{U}^{k}}=\sum_{\vec{s}}\sum_{x}\prod_{\vec{\varepsilon}\in\{0,1\}^{k}}A\left(xs^{\varepsilon_{1}}_{1}\dots s^{\varepsilon_{k}}_{k}\right)=$ $=\sum_{\vec{u},\vec{v}}\,\sum_{x}\prod_{\vec{\eta}\in\{0,1\}^{m}}\,\prod_{\vec{\omega}\in\{0,1\}^{n}}A\left(u^{\eta_{1}}_{1}\dots u^{\eta_{m}}_{m}xv^{\omega_{1}}_{1}\dots v^{\omega_{n}}_{n}\right)\,.$ (15) In particular, $\|A^{-1}\|_{\mathcal{U}^{k}}=\|A\|_{\mathcal{U}^{k}}$ and $\|gA\|_{\mathcal{U}^{k}}=\|Ag\|_{\mathcal{U}^{k}}=\|A\|_{\mathcal{U}^{k}}$ for any $g\in{\mathbf{G}}$. To obtain (15) just make the change of variables $u_{j}x=x\tilde{u}_{j}$ for $j\in[m]$. It was proved in [4] that the ordinary Gowers $k$th norms of the characteristic function of any subset of an abelian group ${\mathbf{G}}$ are connected to each other. In [25] the author shows that the connection for the non–normalized norms does not depend on the size of the group ${\mathbf{G}}$. Here we formulate a particular case of Proposition 35 from [25], which relates $\|A\|_{\mathcal{U}^{k}}$ and $\|A\|_{\mathcal{U}^{2}}$; see also Remark 36 there. ###### Lemma 5 Let $A$ be a finite subset of a commutative group ${\mathbf{G}}$. Then for any integer $k\geqslant 2$ one has $\|A\|_{\mathcal{U}^{k+1}}\geqslant\frac{\|A\|^{(3k-2)/(k-1)}_{\mathcal{U}^{k}}}{\|A\|^{2k/(k-1)}_{\mathcal{U}^{k-1}}}\,.$ In particular, $\|A\|_{\mathcal{U}^{k}}\geqslant\mathsf{E}(A)^{2^{k}-k-1}|A|^{-(3\cdot 2^{k}-4k-4)}\,.$ Actually, one can derive Lemma 5 from Lemma 6 below, but to prove this more general result we need additional notation and arguments.
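For small abelian groups the non-normalized Gowers norms can be computed by exhaustive search, which makes it easy to sanity-check the identities (12)–(14) and the first inequality of Lemma 5 for $k=2$ (a minimal sketch; the group $\mathbb{Z}/8$ and the set $A$ are our toy choices):

```python
from itertools import product

def gowers_u(A, n, k):
    # Non-normalized U^k norm of the indicator of A in Z/n (additive notation):
    # the number of tuples (x_0, ..., x_k) with x_0 + eps.(x_1,...,x_k) in A
    # for all eps in {0,1}^k, i.e. the sum of the products of indicators.
    total = 0
    for xs in product(range(n), repeat=k + 1):
        x0, shifts = xs[0], xs[1:]
        if all((x0 + sum(e * s for e, s in zip(eps, shifts))) % n in A
               for eps in product((0, 1), repeat=k)):
            total += 1
    return total

n = 8
A = {0, 1, 2, 3}
u1, u2, u3 = (gowers_u(A, n, k) for k in (1, 2, 3))
```

For this $A$ one finds $\|A\|_{\mathcal{U}^{1}}=|A|^{2}=16$, $\|A\|_{\mathcal{U}^{2}}=\mathsf{E}(A)=44$ and $\|A\|_{\mathcal{U}^{3}}=96$, so Lemma 5 with $k=2$, namely $\|A\|_{\mathcal{U}^{3}}\geqslant\|A\|_{\mathcal{U}^{2}}^{4}/\|A\|_{\mathcal{U}^{1}}^{4}\approx 57.2$, indeed holds.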
Given two functions $f,g:{\mathbf{G}}\to\mathbb{C}$ and an integer $k\geqslant 0$, consider the "scalar product" $\langle f,g\rangle_{k}:=\sum_{\vec{s},t}\sum_{x}f_{\vec{s}}(x)\overline{g_{\vec{s}}(xt)}=\overline{\langle g,f\rangle_{k}}\,,$ where $\vec{s}=(s_{1},\dots,s_{k})$. For example, $\langle A,B\rangle_{1}=\mathsf{E}(A,B)$, $\langle A,B\rangle_{0}=|A||B|$, $\langle A,A\rangle_{k}=\|A\|_{\mathcal{U}^{k+1}}$, and $\langle A,1\rangle_{k}=|{\mathbf{G}}|\|A\|_{\mathcal{U}^{k}}$ (for a finite group ${\mathbf{G}}$). Clearly, $\langle f,g\rangle_{0}=\left(\sum_{x}f(x)\right)\left(\overline{\sum_{x}g(x)}\right)$, but for $k\geqslant 1$ it is easy to see that $\langle f,g\rangle_{k}\geqslant 0$ because $\langle f,g\rangle_{k}=\sum_{\vec{s}_{*},s_{k}}|(f_{\vec{s}_{*}}*\overline{\tilde{g}}_{\vec{s}_{*}})(s_{k})|^{2}\geqslant 0$, where $\vec{s}_{*}=(s_{1},\dots,s_{k-1})$ and $\tilde{g}(x):=g(x^{-1})$. Also note that $\langle A,B\rangle_{k}=\sum_{\vec{s}}|A_{\vec{s}}||B_{\vec{s}}|=\sum_{\vec{s}_{*}}\sum_{s_{k}}|A_{\vec{s}_{*}}\cap A_{\vec{s}_{*}}s_{k}||B_{\vec{s}_{*}}\cap B_{\vec{s}_{*}}s_{k}|\,.$ (16) ###### Lemma 6 Let ${\mathbf{G}}$ be a commutative group and $f,g:{\mathbf{G}}\to\mathbb{C}$ be functions. Then for any integer $k\geqslant 1$ one has $\langle f,g\rangle^{3+1/k}_{k}\leqslant\langle f,g\rangle^{2}_{k-1}\langle f,g\rangle_{k+1}\|f\|^{1/k}_{\mathcal{U}^{k}}\|g\|^{1/k}_{\mathcal{U}^{k}}\,,$ (17) and hence for an arbitrary $k\geqslant 2$ and any sets $A,B\subseteq{\mathbf{G}}$ the following holds: $\mathsf{E}(A,B)\leqslant(|A||B|)^{\frac{3}{2}-\frac{\beta(k+2)}{2}}\langle A,B\rangle^{\beta}_{k}\,,$ (18) where $\beta=\beta(k)\in[4^{-k},2^{-k+1}]$. For any (not necessarily commutative) group ${\mathbf{G}}$, if $\|A\|_{\mathcal{U}^{k}}\leqslant|A|^{k+1-c}$, where $c>0$, then $\mathsf{E}(A)\leqslant|A|^{3-c_{*}}$ with $c_{*}=c_{*}(c,k)>0$. P r o o f.
We have $\sigma:=\langle f,g\rangle_{k}=\sum_{\vec{s},t}\sum_{x}f_{\vec{s}}(x)\overline{g_{\vec{s}}(xt)}$ (19) and our first task is to estimate the size of the set of $(\vec{s},t)$ in the last formula. Basically, we consider two cases. If the summation in (19) is taken over the set $Q:=\{\vec{s}\;:\;\sum_{t}g_{\vec{s}}(t)\geqslant\sigma(2k\|f\|_{\mathcal{U}^{k}})^{-1}\}\,,$ (20) then it gives us a $(1-1/2k)$-proportion of $\sigma$. The cardinality of the set $Q$ can be estimated as $|Q|\cdot\sigma(2k\|f\|_{\mathcal{U}^{k}})^{-1}\leqslant\|g\|_{\mathcal{U}^{k}}$ and hence $|Q|\leqslant 2k\|f\|_{\mathcal{U}^{k}}\|g\|_{\mathcal{U}^{k}}\sigma^{-1}$. Now we fix any $j\in[k]$ (without any loss of generality we can assume that $j=1$), put $\vec{s}_{*}=(s_{2},\dots,s_{k})$ and consider $Q_{1}:=\{(\vec{s}_{*},t)\;:\;\sum_{s_{1}}f_{\vec{s}_{*}}(s_{1})\overline{g_{\vec{s}_{*}}(ts_{1})}\geqslant\sigma(2k\langle f,g\rangle_{k-1})^{-1}\}\,.$ (21) Using the change of variables as in (15) (here we appeal to the commutativity of the group ${\mathbf{G}}$) and applying the argument above, we have $|Q_{1}|\cdot\sigma(2k\langle f,g\rangle_{k-1})^{-1}\leqslant\sum_{\vec{s}_{*},t}\sum_{s_{1}}f_{\vec{s}_{*}}(s_{1})\overline{g_{\vec{s}_{*}}(ts_{1})}=\langle f,g\rangle_{k-1}\,.$ Again, if the summation in (19) is taken over the set $Q_{1}$, then it gives us a $(1-1/2k)$-proportion of $\sigma$.
Hence by standard projection results (see, e.g., [1]) we see that the summation in (19) is taken over a set $\mathcal{S}$ of vectors $(\vec{s},t)$ of size at most $|\mathcal{S}|\leqslant((2k)^{k+1}\|f\|_{\mathcal{U}^{k}}\|g\|_{\mathcal{U}^{k}}\sigma^{-(k+1)})^{1/k}\langle f,g\rangle^{2}_{k-1}\,.$ Whence by the Cauchy–Schwarz inequality, we get $2^{-4}\sigma^{2}\leqslant|\mathcal{S}|\sum_{\vec{s},t}\left|\sum_{x}f_{\vec{s}}(x)\overline{g_{\vec{s}}(xt)}\right|^{2}\leqslant((2k)^{k+1}\|f\|_{\mathcal{U}^{k}}\|g\|_{\mathcal{U}^{k}}\sigma^{-(k+1)})^{1/k}\langle f,g\rangle^{2}_{k-1}\langle f,g\rangle_{k+1}$ and we obtain (17) up to a constant depending on $k$. Using the tensor trick (see, e.g., [30]) we obtain the result with constant one. To prove inequality (18) we see by induction and formula (17) that for $l\leqslant k$ one has $\langle A,B\rangle_{l}\leqslant\langle A,B\rangle^{\alpha_{0}(l,k)}_{0}\prod_{j=1}^{k-1}(\|A\|_{\mathcal{U}^{j}}\|B\|_{\mathcal{U}^{j}})^{\alpha_{j}(l,k)}\cdot\langle A,B\rangle^{\beta_{k}(l,k)}_{k}\,,$ where $\alpha_{j}(l,k)$, $\beta_{j}(l,k)$ are some non–negative functions. In principle, in view of (17) these functions can be calculated via some recurrences, but we restrict ourselves to giving just crude bounds for them. We are interested in the case $l=1$, where $k$ is a fixed number, and hence we write $\mathsf{E}(A,B)=\langle A,B\rangle_{1}\leqslant\langle A,B\rangle^{\alpha_{0}}_{0}\prod_{j=1}^{k-1}(\|A\|_{\mathcal{U}^{j}}\|B\|_{\mathcal{U}^{j}})^{\alpha_{j}}\cdot\langle A,B\rangle^{\beta}_{k}\,.$ By homogeneity, we get $2=\alpha_{0}+\sum_{j=1}^{k-1}\alpha_{j}2^{j}+2^{k}\beta\,.$ (22) In particular, $\beta\leqslant 2^{-k+1}$.
Further, taking $A=B$ equal to a subgroup, we obtain one more equation: $3=2\alpha_{0}+2\sum_{j=1}^{k-1}\alpha_{j}(j+1)+(k+2)\beta\,.$ (23) Using the trivial inequalities $\|A\|_{\mathcal{U}^{j}}\leqslant|A|^{j+1}$, $\|B\|_{\mathcal{U}^{j}}\leqslant|B|^{j+1}$ and formula (23), we derive $\mathsf{E}(A,B)\leqslant(|A||B|)^{\alpha_{0}+\sum_{j=1}^{k-1}\alpha_{j}(j+1)}\langle A,B\rangle^{\beta}_{k}=(|A||B|)^{\frac{3}{2}-\frac{\beta(k+2)}{2}}\langle A,B\rangle^{\beta}_{k}$ as required. Actually, if there is a non-trivial upper bound for $\|A\|_{\mathcal{U}^{j}}$ (and this will be the case in the next section), then the last estimate can be improved in view of Lemma 5. Our further task is to obtain a good lower bound for $\beta$. Put $\omega_{j}:=3+1/j>3$, $j\in[k-1]$. Using (17), we get $\prod_{j=1}^{k-1}\langle A,B\rangle^{\omega_{j}x_{j}}_{j}\leqslant S\prod_{j=1}^{k-1}\left(\langle A,B\rangle^{2}_{j-1}\langle A,B\rangle_{j+1}\right)^{x_{j}}\,,$ (24) where $S$ is a quantity depending on $\|A\|_{\mathcal{U}^{j}}$, $\|B\|_{\mathcal{U}^{j}}$, which we do not specify, and where $x_{j}$ are some positive numbers, which we will choose (indirectly) later. For $2\leqslant j\leqslant k-2$ put $x_{j-1}+2x_{j+1}=\omega_{j}x_{j}\,.$ (25) Then we obtain from (24) $\langle A,B\rangle^{4x_{1}}_{1}\langle A,B\rangle^{\omega_{k-1}x_{k-1}}_{k-1}\leqslant S\langle A,B\rangle^{2x_{2}}_{1}\langle A,B\rangle^{x_{k-1}}_{k}\langle A,B\rangle^{x_{k-2}}_{k-1}\,.$ Now, choosing $x_{k-2}=\omega_{k-1}x_{k-1}$, we see that $\beta=x_{k-1}/(4x_{1}-2x_{2})$ and it remains to estimate $x_{k-1}$ in terms of $x_{1},x_{2}$. But for all $j$ one has $\omega_{j}\leqslant 4$, hence $x_{k-1}\geqslant 4^{-1}x_{k-2}$ and, similarly, from $x_{j-1}+2x_{j+1}=\omega_{j}x_{j}$, $2\leqslant j\leqslant k-2$, we get $x_{j}\geqslant 4^{-1}x_{j-1}$ and hence $x_{k-1}\geqslant 4^{-(k-1)}x_{1}$.
Further, summing (25) over $2\leqslant j\leqslant k-2$ and putting $T=\sum_{j=1}^{k-1}x_{j}$, we obtain $T-x_{k-1}-x_{k-2}+2T-2x_{1}-2x_{2}=\sum_{j=2}^{k-2}\omega_{j}x_{j}\geqslant 3T-3x_{1}-3x_{k-1}$ and hence $x_{1}-2x_{2}\geqslant x_{k-2}-2x_{k-1}=(\omega_{k-1}-2)x_{k-1}>0\,.$ In particular, it gives $x_{j}>0$ for all $j\in[k-1]$ and thus indeed $\beta\geqslant 4^{-k}$. Now suppose that ${\mathbf{G}}$ is an arbitrary group and $\|A\|_{\mathcal{U}^{k}}\leqslant|A|^{k+1-c}$ but $\mathsf{E}(A)\geqslant|A|^{3}/K$, where $K\geqslant 1$ is a parameter. By the non–commutative Balog–Szemerédi–Gowers Theorem, see [15, Theorem 32] or [30, Proposition 2.43, Corollary 2.46], there are $a\in A$ and $A_{*}\subseteq a^{-1}A$, $|A_{*}|\gg_{K}|A|$, such that $|A^{3}_{*}|\ll_{K}|A_{*}|$. We can apply the previous argument to the set $A_{*}$ and obtain an estimate similar to Lemma 5: $|A|^{k+1-c}\geqslant\|A\|_{\mathcal{U}^{k}}\geqslant\|A_{*}\|_{\mathcal{U}^{k}}\gg_{K}\mathsf{E}(A_{*})^{2^{k-2}}|A_{*}|^{-(3\cdot 2^{k-2}-k-1)}\gg_{K}$ $\gg_{K}\mathsf{E}(A)^{2^{k-2}}|A|^{-(3\cdot 2^{k-2}-k-1)}\,.$ (26) Indeed, to bound $\mathsf{E}(A)$ via $\|A\|_{\mathcal{U}^{k+2}}$ using the argument of the proof above we need to estimate the size of the set $\mathcal{S}_{k}$ at each step $k$. But clearly $|\mathcal{S}_{k}|\leqslant|AA^{-1}|^{k+1}\ll_{K}|A|^{k+1}$ and hence by induction we obtain $\mathsf{E}^{2^{k}}(A)\ll_{K}|A|^{3\cdot 2^{k}-k-3}\|A\|_{\mathcal{U}^{k+2}}$ as required. Finally, from (26) it follows that $K^{C(k)}\gg|A|^{c}$, where $C(k)$ is a constant depending on $k$ only. This completes the proof. $\hfill\Box$ A closer look at the proof (see, e.g., definition (21)) shows that for $k=1$ estimate (17) of Lemma 6 holds for any class functions $f$, $g$. Nevertheless, for larger $k$ this argument does not work.
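The case $k=1$ of inequality (17) can also be checked numerically via formula (16), $\langle A,B\rangle_{k}=\sum_{\vec{s}}|A_{\vec{s}}||B_{\vec{s}}|$. A minimal brute-force sketch in $\mathbb{Z}/8$ (the two sets, an interval and a subgroup, are our toy choices, not from the paper):

```python
from itertools import product

def fiber_sizes(S, n, k):
    # |S_s| for every vector s of length k, where, in additive notation,
    # S_s(x) = prod_{eps in {0,1}^k} S(x + eps_1 s_1 + ... + eps_k s_k), as in (11).
    sizes = {}
    for s in product(range(n), repeat=k):
        sizes[s] = sum(
            all((x + sum(e * si for e, si in zip(eps, s))) % n in S
                for eps in product((0, 1), repeat=k))
            for x in range(n)
        )
    return sizes

def scalar(A, B, n, k):
    # <A,B>_k = sum_s |A_s| |B_s|, formula (16) for indicator functions.
    sa, sb = fiber_sizes(A, n, k), fiber_sizes(B, n, k)
    return sum(sa[s] * sb[s] for s in sa)

n = 8
A = {0, 1, 2, 3}          # an interval
B = {0, 2, 4, 6}          # a subgroup
sp = [scalar(A, B, n, k) for k in range(3)]   # <A,B>_0, <A,B>_1, <A,B>_2
```

Here $\langle A,B\rangle_{0}=|A||B|=16$, $\langle A,B\rangle_{1}=\mathsf{E}(A,B)=32$ and $\langle A,B\rangle_{2}=48$, and (17) with $k=1$ reads $\langle A,B\rangle_{1}^{4}\leqslant\langle A,B\rangle_{0}^{2}\langle A,B\rangle_{2}\|A\|_{\mathcal{U}^{1}}\|B\|_{\mathcal{U}^{1}}$, i.e. $32^{4}\leqslant 16^{2}\cdot 48\cdot 16\cdot 16$, which holds.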
## 4 The proof of the main result Let ${\mathbf{G}}$ be an algebraic group in an affine or projective space of dimension $n$ over the field $\mathbb{F}_{q}$, and let $V\subseteq{\mathbf{G}}$ be a variety, $d={\rm dim}(V)$, $D=\deg(V)$. If $V$ is absolutely irreducible, then by Lang–Weil [19] we know that $\left||V|-q^{d}\right|\leqslant(D-1)(D-2)q^{d-1/2}+A(n,d,D)q^{d-1}\,,$ (27) where $A(n,d,D)$ is a certain constant. By sufficiently large $q$ we mean that $q\geqslant q_{0}(n,d,D,{\rm dim}({\mathbf{G}}),\deg({\mathbf{G}}))$, and all constants below are assumed to depend on $n,d,D,{\rm dim}({\mathbf{G}}),\deg({\mathbf{G}})$. In particular, for an absolutely irreducible variety $V$ one has $q^{d}\ll|V|\ll q^{d}$. One can think of ${\mathbf{G}}$ and $V$ as varieties defined over $\mathbb{Q}$ by absolutely irreducible polynomials. Then by the Noether Theorem [17] we know that ${\mathbf{G}}$ and $V$ reduce $\mathrm{mod}~p$ to some absolutely irreducible varieties defined over $\mathbb{F}_{p}$, $p\notin S({\mathbf{G}},V)$, where $S({\mathbf{G}},V)$ is a certain finite set of primes. Finally, for any set $W\subseteq{\mathbf{G}}$ consider the quantity $t=t(W):=\max_{x\in{\mathbf{G}},\,\Gamma\leqslant{\mathbf{G}}}\,\{|\Gamma|\;:\;x\Gamma\subseteq W,\,\,\Gamma\mbox{ is an algebraic subgroup}\}\,.$ (28) Now we are ready to estimate various energies of varieties in terms of the quantity $t(V)$. We are also able to give a non–trivial bound for any sufficiently large subset of $V$, see inequality (31). ###### Theorem 7 Let ${\mathbf{G}}$ be an algebraic group, $V\subseteq{\mathbf{G}}$ be a variety, $d={\rm dim}(V)$, $D=\deg(V)$, $t=t(V)$. Then for any positive integer $k$ and all sufficiently large $q$ one has $\mathsf{E}_{k}(V)\ll_{d,D}\frac{|V|^{k+1}}{q^{k-1}}+t|V|^{k}\,.$ (29) In particular, if $V$ is absolutely irreducible and $V$ is not a coset of a subgroup, then $\mathsf{E}_{k}(V)\ll_{d,D}|V|^{k+1-\frac{1}{d}}$.
Similarly, one has $\|V\|_{\mathcal{U}^{k}}\ll_{d,D}|V|^{k+1}q^{-\frac{k(k-1)}{2}}+|V|^{2}t^{k-1}+|V|^{2}\sum_{j=1}^{k-2}|V|^{j}t^{k-1-j}q^{-\frac{j(1+j)}{2}}\,,$ (30) and for any $A\subseteq V$ the following holds: $\|A\|_{\mathcal{U}^{d+1}}\ll_{d,D}t|A|^{d+1}\,.$ (31) P r o o f. First of all, consider the case of an absolutely irreducible $V$. For any $g\in{\mathbf{G}}$ we have either ${\rm dim}(V\cap gV)<{\rm dim}(V)$, or $g$ belongs to the stabilizer ${\rm Stab}(V)$ of $V$. It is well–known that any stabilizer under any action of an algebraic group is an algebraic subgroup (but not necessarily an irreducible one). Clearly, ${\rm Stab}(V)\subseteq v^{-1}V$ for any $v\in V$ and hence either $V$ is a coset of an (algebraic) subgroup, or ${\rm dim}({\rm Stab}(V))<{\rm dim}(V)$. The degree of the variety $gV\cap V$ is at most $\deg^{2}(V)$ by inequality (10). Similarly, since our topological space is Noetherian (see, e.g., [6, page 5]) and ${\rm Stab}(V)=\bigcap_{v\in V}v^{-1}V$, it follows that the cardinality of ${\rm Stab}(V)$ can be estimated in terms of $d$ and $D$ thanks to (27). Hence all parameters of all varieties that appear are controlled by $d,D$, $n$ (and, possibly, by ${\rm dim}({\mathbf{G}}),\deg({\mathbf{G}})$). Using the Lang–Weil formula (27), we obtain for sufficiently large $q$ that $q^{d}/2\leqslant|V|\leqslant 2q^{d}$, say; further, $|{\rm Stab}(V)|\ll q^{d-1}$, and, similarly, for any $g\notin{\rm Stab}(V)$ one has $|V\cap gV|\ll q^{d-1}$. Hence $\mathsf{E}_{k}(V)=\sum_{g\in{\mathbf{G}}}|V\cap gV|^{k}=\sum_{g\notin{\rm Stab}(V)}|V\cap gV|^{k}+\sum_{g\in{\rm Stab}(V)}|V\cap gV|^{k}\ll$ (32) $\ll(q^{d-1})^{k-1}\sum_{g\in{\mathbf{G}}}|V\cap gV|+q^{d-1}|V|^{k}\ll(q^{d-1})^{k-1}|V|^{2}+q^{d-1}|V|^{k}\ll|V|^{k+1-\frac{1}{d}}\,.$ (33) Now to obtain (29) we apply the same argument, but before doing so we need to consider $V$ as a union of its irreducible components $V=\bigcup_{j=1}^{s}V_{j}$. Clearly, $s\leqslant\deg(V)$.
Take any $g\in{\mathbf{G}}$ and consider $V\cap gV=\bigcup_{i,j=1}^{s}(V_{i}\cap gV_{j})$. If for all $i,j\in[s]$ one has ${\rm dim}(V_{i}\cap gV_{j})<{\rm dim}V$, then for such $g$ we can repeat the previous calculations in (32)–(33). Consider the set of the remaining $g$ and denote this set by $B$. For any $g\in B$ there are $i,j\in[s]$ such that ${\rm dim}(V_{i}\cap gV_{j})={\rm dim}(V)$. In particular, ${\rm dim}(V_{i})={\rm dim}(V_{j})={\rm dim}(V)$ and $V_{i}\cap gV_{j}=V_{i}=gV_{j}$ by the irreducibility of $V_{i},V_{j}$. Suppose that for the same pair $(i,j)$ there is another $g_{*}=g_{*}(i,j)\in B$ such that $g_{*}V_{j}=V_{i}$. Then $g^{-1}_{*}g\in{\rm Stab}(V_{j})$. It follows that $g\in g_{*}{\rm Stab}(V_{j})$ and hence the set $B$ is contained in $\bigcup_{i,j=1}^{s}g_{*}(i,j){\rm Stab}(V_{j})$ plus at most $s^{2}\leqslant\deg^{2}(V)$ points. Hence we need to add to the computations in (32)–(33) the term $s^{2}(t+1)|V|^{k}\leqslant\deg(V)^{2}(t+1)|V|^{k}\ll t|V|^{k}$, as required. To prove bound (30) let us obtain a generalization of (29). Put $\mathsf{E}^{(l)}_{k}=\mathsf{E}^{(l)}_{k}(V):=\sum_{\vec{s}}|V_{\vec{s}}|^{k}$, where $\vec{s}=(s_{1},\dots,s_{l})$ and $V_{\vec{s}}$ is as in (11). Write $\vec{s}=(\vec{s}_{*},s_{l})$. Since $\mathsf{E}^{(l)}_{k}=\sum_{\vec{s}_{*}}\mathsf{E}_{k}(V_{\vec{s}_{*}})$, it follows by the obtained estimate (29) that $\mathsf{E}^{(l)}_{k}\ll\sum_{\vec{s}_{*}}\left(\frac{|V_{\vec{s}_{*}}|^{k+1}}{q^{k-1}}+t(V_{\vec{s}_{*}})|V_{\vec{s}_{*}}|^{k}\right)\ll q^{-(k-1)}\mathsf{E}^{(l-1)}_{k+1}+t\mathsf{E}^{(l-1)}_{k}\,.$ (34) Here we have used the fact that the quantity $t$ evaluated on a subset of $V$ does not exceed $t(V)$.
Further, inequality (34) gives by induction $\mathsf{E}^{(l)}_{k}\ll|V|^{k}\sum_{j=0}^{l}q^{-\frac{j(2k+j-3)}{2}}|V|^{j}t^{l-j}\,.$ (35) Now, applying inequality (29) with the parameter $k=2$ and using the notation $\vec{s}=(\vec{s}_{*},s_{l})$ again, we get $\|V\|_{\mathcal{U}^{l+1}}=\sum_{\vec{s}}|V_{\vec{s}}|^{2}=\sum_{\vec{s}_{*}}\mathsf{E}(V_{\vec{s}_{*}})\ll\sum_{\vec{s}_{*}}\left(\frac{|V_{\vec{s}_{*}}|^{3}}{q}+t|V_{\vec{s}_{*}}|^{2}\right)=q^{-1}\mathsf{E}^{(l-1)}_{3}+t\mathsf{E}^{(l-1)}_{2}.$ Hence in view of (35), we derive $\|V\|_{\mathcal{U}^{l+1}}\ll|V|^{2}\sum_{j=0}^{l-1}|V|^{j}t^{l-1-j}(|V|q^{-\frac{j(3+j)}{2}-1}+tq^{-\frac{j(1+j)}{2}})\ll$ $\ll|V|^{2}t^{l}+|V|^{l+2}q^{-\frac{l^{2}+l}{2}}+|V|^{2}\sum_{j=1}^{l-1}|V|^{j}t^{l-j}q^{-\frac{j(1+j)}{2}}\,.$ Finally, take any $A\subseteq V$. As above, put $\vec{s}=(s_{1},\dots,s_{d})=(\vec{s}_{*},s_{d})$. We have $A_{\vec{s}}\subseteq V_{\vec{s}}$. Further, by (16), $\|A\|_{\mathcal{U}^{d+1}}=\sum_{\vec{s}}|A_{\vec{s}}|^{2}=\sum_{\vec{s}_{*}}\mathsf{E}(A_{\vec{s}_{*}})=\sum_{\vec{s}_{*}}\sum_{s_{d}}|A_{\vec{s}_{*}}\cap A_{\vec{s}_{*}}s_{d}|^{2}\,.$ (36) Take any vector $\vec{z}=(z_{1},\dots,z_{l})$, $l<d$, and consider the decomposition of the variety $V_{\vec{z}}$ into irreducible components $V_{\vec{z}}(j)$. As above, define the set $B(\vec{z})$ of all $g\in{\mathbf{G}}$ such that there are $V_{\vec{z}}(i),V_{\vec{z}}(j)$ with $V_{\vec{z}}(i)=gV_{\vec{z}}(j)$. Then by the same arguments as before, we have $|B(\vec{z})|\ll t$. Using formula (36), we get $\|A\|_{\mathcal{U}^{d+1}}\ll t\sum_{\vec{s}_{*}}|A_{\vec{s}_{*}}|^{2}+\sigma=t\|A\|_{\mathcal{U}^{d}}+\sigma\,,$ (37) where for all $\vec{s}$ occurring in $\sigma$ we have ${\rm dim}(V_{\vec{s}})=0$. Hence $\|A\|_{\mathcal{U}^{d+1}}\ll t\|A\|_{\mathcal{U}^{d}}+\sum_{\vec{s}}|A_{\vec{s}}|\ll t|A|^{d+1}+|A|^{d+1}\ll t|A|^{d+1}\,.$ (38) This completes the proof.
$\hfill\Box$ Once again, the bounds above depend on $d,D$, as well as on ${\rm dim}({\mathbf{G}})$, $\deg({\mathbf{G}})$ and on the dimension of the ambient affine space. ###### Remark 8 The quantity $\|V\|_{\mathcal{U}^{k}}$ can be written in different ways as $\sum_{s_{1},\dots,s_{k}}|V_{s_{1},\dots,s_{k}}|^{2}$, $\sum_{s_{1},\dots,s_{k-1}}\mathsf{E}(V_{s_{1},\dots,s_{k-1}})$ and so on. Taking the variables $s_{j}$ running over the maximal coset contained in $V$, we see that all terms with $t$ in (30) are needed. ###### Corollary 9 Let ${\mathbf{G}}$ be an abelian algebraic group, $V\subseteq{\mathbf{G}}$ be a variety, $d={\rm dim}(V)$, $D=\deg(V)$. Then for all sufficiently large $q$ and any $A\subseteq V$ one has $\mathsf{E}(A)\ll_{d,D}|A|^{3}\left(\frac{t}{|A|}\right)^{(2^{d+1}-d-5)^{-1}}\mbox{ for }d\geqslant 2\quad\mbox{and}\quad\mathsf{E}(A)\ll_{d,D}|A|^{2}t\,,\mbox{ for }d=1\,.$ (39) In particular, for any $A\subseteq V$ with $|A|\geqslant t^{1+c}$, $c>0$, there is $\delta=\delta(d,c)>0$ such that $\mathsf{E}(A)\ll_{d,D}|A|^{3-\delta}\,.$ (40) Bound (40) holds in any algebraic group. Moreover, let $B\subseteq{\mathbf{G}}$ be an arbitrary set. Then $\mathsf{E}(A,B)\ll_{d,D}\left(\frac{t}{|A|}\right)^{\beta}\cdot|A|^{\frac{3}{2}-\frac{\beta d}{2}}|B|^{\frac{3}{2}+\frac{\beta d}{2}}\,,$ (41) where $\beta=\beta(d)\in[4^{-d},2^{-d+1}]$, and for any $k\geqslant 1$ one has either $\mathsf{T}_{k}(A)\leqslant|A|^{2k-1-c\beta/4}$ or $\mathsf{T}_{k+1}(A)\ll_{d,D}|A|^{2}\mathsf{T}_{k}(A)\cdot|A|^{-c\beta/4}\,.$ (42) P r o o f. Let $d\geqslant 2$. By Theorem 7, we have $\|A\|_{\mathcal{U}^{d+1}}\ll t|A|^{d+1}$.
Using the second part of Lemma 5 with $k=d+1$, we obtain $\mathsf{E}(A)\ll|A|^{3}\left(\frac{t}{|A|}\right)^{(2^{k}-k-4)^{-1}}=|A|^{3}\left(\frac{t}{|A|}\right)^{(2^{d+1}-d-5)^{-1}}\,.$ If $d=1$, then the arguments of the proof of Theorem 7 (see, e.g., (38)) give us $\mathsf{E}(A)\ll t|A|^{2}+\sum_{s}|A_{s}|\ll t|A|^{2}\,.$ For an arbitrary algebraic group use the last part of Lemma 6. To derive (41) we can suppose that $|B|\geqslant|A|$, because otherwise the required bound $\mathsf{E}(A,B)^{2}\leqslant\mathsf{E}(A)\mathsf{E}(B)\leqslant\left(\frac{t}{|A|}\right)^{\beta}|A|^{3}|B|^{3}\leqslant\left(\frac{t}{|A|}\right)^{\beta}\cdot|A|^{3-\beta d}|B|^{3+\beta d}$ holds for $\beta=(2^{d+1}-d-5)^{-1}$, $d\geqslant 2$ (and similarly for $d=1$), see estimate (39). Further, let $|B|\geqslant|A|$. Then we use Lemma 6 with $k=d$, combining it with Theorem 7 (see formulae (16), (37), (38)) and the assumption $|A|\leqslant|B|$, to obtain $\mathsf{E}(A,B)\leqslant(|A||B|)^{\frac{3}{2}-\frac{\beta(d+2)}{2}}\langle A,B\rangle^{\beta}_{d}\ll(|A||B|)^{\frac{3}{2}-\frac{\beta(d+2)}{2}}\left(\|B\|_{\mathcal{U}^{d}}+t\langle A,B\rangle_{d-1}\right)^{\beta}\leqslant$ $\leqslant(|A||B|)^{\frac{3}{2}-\frac{\beta(d+2)}{2}}\left(|B|^{d+1}+t|A||B|^{d}\right)^{\beta}\ll(|A||B|)^{\frac{3}{2}-\frac{\beta(d+2)}{2}}t^{\beta}|B|^{\beta(d+1)}=t^{\beta}|A|^{\frac{3}{2}-\frac{\beta(d+2)}{2}}|B|^{\frac{3}{2}+\frac{\beta d}{2}}$ and (41) follows. Finally, to get (42) we use the dyadic pigeon–hole principle and the fact that $\mathsf{T}^{1/2k}_{k}(f)$ defines a norm of $f$ to find a number $\Delta>0$ and a set $P$ such that $P=\{x\in{\mathbf{G}}\;:\;\Delta<A^{(k)}(x)\leqslant 2\Delta\}$ and $\mathsf{T}_{k+1}(A)\lesssim\Delta^{2}\mathsf{E}(A,P)$.
Thus (we assume that $c\leqslant 1$) $\mathsf{T}_{k+1}(A)\lesssim(t/|A|)^{\beta}|A|^{\frac{3}{2}-\frac{\beta d}{2}}(\Delta^{2}|P|)^{\frac{1}{2}-\frac{d\beta}{2}}(\Delta|P|)^{1+d\beta}\leqslant|A|^{-\frac{c\beta}{2}}|A|^{\frac{3+2k}{2}-\frac{\beta d}{2}+\beta kd}\cdot\mathsf{T}^{\frac{1}{2}-\frac{d\beta}{2}}_{k}(A)\,.$ Suppose that $\mathsf{T}_{k}(A)\geqslant|A|^{2k-1-\varepsilon}$, where $\varepsilon\leqslant c\beta/4$. In view of the last inequality and $\beta\leqslant 2^{-d+1}$ one has $|A|^{-\frac{c\beta}{2}}|A|^{\frac{3+2k}{2}-\frac{\beta d}{2}+\beta kd}\cdot\mathsf{T}^{\frac{1}{2}-\frac{d\beta}{2}}_{k}(A)\leqslant|A|^{2-\varepsilon}\mathsf{T}_{k}(A)\,.$ This completes the proof. $\hfill\Box$ ###### Remark 10 As was said in the proof of Lemma 6, the bound for $\mathsf{E}(A,B)$, $A\subseteq V$, where $V$ is our variety and $B\subseteq{\mathbf{G}}$ is an arbitrary set, can be improved because we have a non–trivial upper bound for $\|A\|_{\mathcal{U}^{l}}$, $2\leqslant l\leqslant d+1$. Thus bounds (41), (42) can be improved slightly. Also, inequalities (41), (42) say, basically, that either $|A^{3}|$ is much larger than $|A|$ or $|A^{3}|$ is larger than $|A^{2}|$. Theorem 7 and Corollary 9 imply the following criterion. ###### Corollary 11 Let ${\mathbf{G}}$ be a finite simple group, $V\subseteq{\mathbf{G}}$ be a variety, $d={\rm dim}(V)$, $D=\deg(V)$. Suppose that $t(V)=o(|V|)$. Then there is $\delta=\delta(d,n)>0$ such that for any $A\subseteq V$, $|A|\gg|V|$, the following holds: $\mathsf{E}(A)\ll_{d,D}|A|^{3-\delta}\,.$ (43) Clearly, if $t(V)\gg|V|$, then (43) does not hold and hence Corollary 11 is indeed a criterion. Also, it gives a lower bound for $\delta$ of the form $\delta\gg 1/d$. Recall that our current dependence on $d$ in (43) has an exponential nature. ## 5 Applications In [21], [22] the authors obtained the following results on growth of normal sets.
###### Theorem 12 Let ${\mathbf{G}}$ be a finite simple group and $N\subseteq{\mathbf{G}}$ be a normal set. Then there is $n\ll\log|{\mathbf{G}}|/\log|N|$ such that $N^{n}={\mathbf{G}}$. Moreover, for any $\varepsilon>0$ there is $\delta=\delta(\varepsilon)>0$ such that any normal set $N$ with $|N|\leqslant|{\mathbf{G}}|^{\delta}$ satisfies $|NN|\geqslant|N|^{2-\varepsilon}$. From Corollary 9 we obtain a result on growth of an arbitrary subset of a conjugacy class. ###### Corollary 13 Let ${\mathbf{G}}$ be a finite connected semisimple algebraic group and let $C\subseteq{\mathbf{G}}$ be a conjugacy class. Also, let $A\subseteq C$ be an arbitrary set with $|A|\geqslant t(\mathrm{Zcl}(C))^{1+\varepsilon}$. Then for a certain $\delta>0$ depending on the dimension of $C$ one has $\mathsf{E}(A)\ll_{\deg(\mathrm{Zcl}(C))}|A|^{3-\delta}$. In particular, $|AA|\gg_{\deg(\mathrm{Zcl}(C))}|A|^{1+\delta}$. P r o o f. It is well–known (see, e.g., [10, pages 15, 17]) that for any conjugacy class $C$ its Zariski closure $\mathrm{Zcl}(C)$ equals $C$ together with, possibly, other conjugacy classes of strictly lower dimension, and that $C=C(x)$ is a variety iff $x\in{\mathbf{G}}$ is a semisimple element. Now the result follows from a direct application of Corollary 9, where the implied constants depend on $\deg(\mathrm{Zcl}(C))$, ${\rm dim}(C)$, ${\rm dim}({\mathbf{G}}),\deg({\mathbf{G}})$ and the dimension of the ambient affine space. This completes the proof. $\hfill\Box$ One can see that $t(C)\leqslant|C|^{1-c_{*}}$ for a certain $c_{*}>0$ via the general bound on such intersections with generating sets, see [20], or, alternatively, from some modifications of Theorem 12, see [21], [22]. Thus Corollary 13 holds for all large subsets of conjugacy classes. Question. Is it true that for any $A\subseteq C$, where $C$ is a conjugacy class such that $|A|\geqslant|C|^{1-o(1)}$, say, one has $A^{n}={\mathbf{G}}$, where $n$ is a function of $\log|{\mathbf{G}}|/\log|A|$?
For $n\ll\log|{\mathbf{G}}|/\log|A|$? For $C=C(x)$, where $x$ is a semisimple element? Now we are ready to obtain a non–trivial upper bound for any sufficiently large subset of a Chevalley group living in a variety different from the maximal parabolic subgroup. ###### Theorem 14 Let ${\mathbf{G}}_{r}(\mathbb{F}_{q})$ be a finite Chevalley group of rank $r$ with odd $q$, and let $\Pi\leqslant{\mathbf{G}}_{r}(\mathbb{F}_{q})$ be its maximal (by size) parabolic subgroup. Also, let $V\subset{\mathbf{G}}_{r}(\mathbb{F}_{q})$ be a variety different from all shifts of conjugates of $\Pi$. Then for any $A\subseteq V$, $|A|\geqslant|\Pi|q^{-1+c}$, $c>0$, one has $\|\widehat{A}(\rho)\|_{o}\leqslant|A|^{1-\delta}\,,$ (44) where $\delta=\delta(c,r)>0$ and $\rho$ is any non–trivial representation of ${\mathbf{G}}_{r}(\mathbb{F}_{q})$. P r o o f. Since by the assumption $|A|\geqslant|\Pi|q^{-1+c}$, it follows that $|V|\geqslant|\Pi|q^{-1+c}$ and hence $V$ is rather large. Also, one can see that, trivially, $|A|\geqslant|\Pi|q^{-1+c}\gg q^{1+c}$. Further, by [29, Lemma 8] we know that $\Pi$ is the maximal (by size) subgroup of ${\mathbf{G}}(\mathbb{F}_{q})$ and for all other subgroups $\Gamma\leqslant{\mathbf{G}}(\mathbb{F}_{q})$ one has $|\Gamma|\leqslant q^{-1}|\Pi|$ ($\Gamma$ is not conjugate to $\Pi$, of course). In particular, in view of (27), $t(V)\leqslant\max_{x,y\in{\mathbf{G}}_{r}(\mathbb{F}_{q})}\{q^{-1}|\Pi|,|V\cap x\Pi y|\}\ll|V|q^{-c_{*}}\,,$ where $c_{*}=\min\{c,1\}$. Take any subgroup $H$ different from all conjugates of $\Pi$. Also, let $x\in{\mathbf{G}}_{r}(\mathbb{F}_{q})$ be an arbitrary element. Our task is to estimate from above the size of the intersection $A_{*}:=A\cap xH\subseteq V$. By Corollary 11 and estimate (3), we have $|A_{*}|^{1+\delta}\ll|A^{-1}_{*}A_{*}|\leqslant|H|\leqslant|\Pi|q^{-1}\leqslant|A|q^{-c}$ and hence, in particular, $|A_{*}|\ll|A|^{(1+\delta)^{-1}}$ (actually, in this place of the proof we can assume a weaker condition on the size of $A$).
By a similar argument and estimate (27), we derive that $|A\cap x\Pi y|\leqslant|V\cap x\Pi y|\ll|\Pi|q^{-1}\leqslant|A|q^{-c}$ and hence for any proper subgroup $\Gamma\subset{\mathbf{G}}_{r}(\mathbb{F}_{q})$ and for all $x\in{\mathbf{G}}_{r}(\mathbb{F}_{q})$ one has $|A\cap x\Gamma|\ll|A|q^{-c/2}$ (we use $|A|\gg q$ and assume that $\delta\leqslant c$). In particular, $A$ is a generating set of ${\mathbf{G}}_{r}(\mathbb{F}_{q})$. Combining this observation with the fact (see [18]) that Chevalley groups are quasi–random in the sense of Gowers [5], we obtain the desired estimate (44), see, e.g., [8], [9] and [26, Sections 8, 10]. This completes the proof. $\hfill\Box$ Now we obtain an application of Corollary 9 to some questions about the restriction phenomenon. In this setting our group ${\mathbf{G}}$ is ${\mathbf{G}}=\mathbb{F}^{n}$, $\mathbb{F}$ is a finite field, $V\subseteq\mathbb{F}^{n}$ is a variety, and ${\mathbf{G}}$ acts on ${\mathbf{G}}$ via shifts. For any function $g:\mathbb{F}^{n}\to\mathbb{C}$ consider the commutative analogue of (4), $\hat{g}(\xi):=\sum_{x\in\mathbb{F}^{n}}g(x)e(-x\cdot\xi)\,,$ as well as the inverse Fourier transform of a function $f:V\to\mathbb{C}$, $(fd\sigma)^{\lor}(x):=\frac{1}{|V|}\sum_{\xi\in V}f(\xi)e(x\cdot\xi)\,,$ where $e(x\cdot\xi)=e^{2\pi i(x_{1}\xi_{1}+\dots+x_{n}\xi_{n})}$ for $x=(x_{1},\dots,x_{n})$, $\xi=(\xi_{1},\dots,\xi_{n})$.
Thus a "Lebesgue $L^{q}$-norm" of $f$ on $V$ is defined as $\|f\|_{L^{q}(V,d\sigma)}:=\left(\frac{1}{|V|}\sum_{\xi\in V}|f(\xi)|^{q}\right)^{\frac{1}{q}}\,,$ while for a function $g$ it is $\|g\|_{L^{q}(\mathbb{F}^{n})}:=\left(\sum_{x\in\mathbb{F}^{n}}|g(x)|^{q}\right)^{\frac{1}{q}}\,.$ The finite field restriction problem [14] for our variety $V$ seeks exponent pairs $(q,r)$ such that the inequality $\left\|(fd\sigma)^{\vee}\right\|_{L^{r}\left(\mathbb{F}^{n}\right)}\leq R^{*}(q\rightarrow r)\|f\|_{L^{q}(V,d\sigma)}$ or, equivalently, $\|\widehat{g}\|_{L^{q^{\prime}}(V,d\sigma)}\leq R^{*}(q\rightarrow r)\|g\|_{L^{r^{\prime}}\left(\mathbb{F}^{n}\right)}$ takes place with a constant $R^{*}(q\rightarrow r)$ independent of the size of the finite field. As before we use the notation $\lesssim$ and $\gtrsim$ instead of $\ll$, $\gg$, allowing ourselves to lose logarithmic powers of $|\mathbb{F}|$. Using the arguments of the proofs of [14, Lemma 5.1, Proposition 5.2], we obtain ###### Theorem 15 Let $V\subseteq\mathbb{F}^{n}$ be a variety, $d={\rm dim}(V)$. Suppose that $V$ does not contain any line. Then $R^{*}(\frac{4}{3-c}\rightarrow 4)\lesssim 1$, where $c=c(d)>0$. P r o o f. According to our assumption that $V$ does not contain any line, the parameter $t(V)$ equals $1$. Hence by Corollary 42 we know that $\mathsf{E}(A)\ll|A|^{3-c}=|A|^{\kappa}$, where $c=c(d)>0$. Put $q=4/\kappa$; we want to obtain a good bound for $R^{*}(q\rightarrow 4)$, namely, an estimate of the form (see the proofs of [14, Lemma 5.1, Proposition 5.2]) $\sum_{x}(fV*fV)^{2}(x)\lesssim\left(\sum_{x\in V}|f(x)|^{q}\right)^{4/q}\,,$ where $f$ is an arbitrary function (we can freely assume that $f$ is positive). Using the dyadic pigeon–hole principle, we need to prove the last bound only for $f=A$ with an arbitrary $A\subseteq V$, and this is equivalent to $\mathsf{E}(A)\lesssim|A|^{4/q}=|A|^{3-c}\,.$ This completes the proof. 
$\hfill\Box$ Notice that if the variety $V$ contains subspaces of positive dimension, then there is no restriction–type result as in Theorem 15 in such generality; see, e.g., [14, Section 4]. ## 6 Appendix Now we obtain an analogue of the Weyl criterion for the non–commutative case. In this situation ordinary abelian intervals or progressions correspond to structured non–abelian objects such as subgroups. In particular, the first part of the proposition below is applicable to subgroups $H$ of our group ${\mathbf{G}}$. Of course such results should be known, but it is difficult to find them in the literature, and we include Proposition 46 and its converse for completeness. ###### Proposition 16 Let $\varepsilon\in(0,1]$ be a real number, ${\mathbf{G}}$ be a finite group, and $A\subseteq{\mathbf{G}}$ be a set such that for any non–trivial irreducible representation $\rho$ one has $\|\widehat{A}(\rho)\|_{o}\leqslant\varepsilon|A|\,.$ (45) Then for any $H,H_{*}\subseteq{\mathbf{G}}$, $1\in H_{*}$ with $|HH_{*}|\leqslant|H|+K|H_{*}|$ one has $\left||A\cap H|-\frac{|A||H|}{|{\mathbf{G}}|}\right|\leqslant 2K|H_{*}|+\varepsilon|A|\sqrt{|H|/|H_{*}|+K}\,.$ (46) P r o o f. Put $\Pi=HH_{*}$. Then for any $x\in H$ one has $H(x)=|H_{*}|^{-1}(\Pi*H_{*}^{-1})(x)$. Hence $\|H(x)-|H_{*}|^{-1}(\Pi*H_{*}^{-1})(x)\|_{1}\leqslant|HH_{*}|-|H|\leqslant K|H_{*}|\,,$ and thus, in view of formulae (6), (7), we obtain $|A\cap H|=|H_{*}|^{-1}\sum_{x}A(x)(\Pi*H_{*}^{-1})(x)+\mathcal{E}=\frac{|A||\Pi|}{|{\mathbf{G}}|}+\frac{1}{|H_{*}||{\mathbf{G}}|}\sum_{\rho\in\widehat{{\mathbf{G}}},\,\rho\neq 1}d_{\rho}\langle\widehat{A}(\rho),\widehat{\Pi}(\rho)\widehat{H}^{*}_{*}(\rho)\rangle+\mathcal{E}\,,$ where $|\mathcal{E}|\leqslant K|H_{*}|$. 
Applying condition (45), the Cauchy–Schwarz inequality and formula (6) again, we get $\left||A\cap H|-\frac{|A||H|}{|{\mathbf{G}}|}\right|\leqslant K|H_{*}|+\frac{K|A||H_{*}|}{|{\mathbf{G}}|}+\varepsilon|A||\Pi|^{1/2}|H_{*}|^{-1/2}\leqslant 2K|H_{*}|+\varepsilon|A|\sqrt{|H|/|H_{*}|+K}\,.$ This completes the proof. $\hfill\Box$ The converse of Proposition 46 also holds, but it requires some notation; moreover, our argument gives an effective bound if the dimension of the corresponding representation $\rho$ is small. Following [23, Section 17], define the Bohr sets in a (non–abelian) group ${\mathbf{G}}$. ###### Definition 17 Let $\Gamma$ be a collection of some unitary representations of ${\mathbf{G}}$ and $\delta\in(0,2]$ be a real number. Put ${\rm Bohr}(\Gamma,\delta)=\\{g\in{\mathbf{G}}~{}:~{}\|\gamma(g)-I\|_{o}\leqslant\delta\,,\forall\gamma\in\Gamma\\}\,.$ The number $|\Gamma|$ is called the dimension of ${\rm Bohr}(\Gamma,\delta)$. If $\Gamma=\\{\rho\\}$, then we write just ${\rm Bohr}(\rho,\delta)$ for ${\rm Bohr}(\\{\rho\\},\delta)$. A Bohr set ${\rm Bohr}(\rho,\delta)$ is said to be regular if $\left||{\rm Bohr}(\rho,(1+\kappa)\delta)|-|{\rm Bohr}(\rho,\delta)|\right|\leqslant 100d^{2}_{\rho}|\kappa|\cdot|{\rm Bohr}(\rho,\delta)|\,,$ whenever $|\kappa|\leqslant 1/(100d^{2}_{\rho})$. Even in the abelian case it is easy to see that not every Bohr set is regular (e.g., see [30, Section 4.4]). Nevertheless, it can be shown (e.g., see [28]) that one can find a regular Bohr set by decreasing the parameter $\delta$ slightly. ###### Lemma 18 Let $\delta\in[0,1/2]$ be a real number and $\rho$ be a unitary representation. Then there is $\delta_{1}\in[\delta,2\delta]$ such that ${\rm Bohr}(\rho,\delta_{1})$ is regular. Let us record a universal lower bound for the size of any Bohr set (see [23, Lemma 17.3] and [28, Proposition 28] for the case of multi–dimensional Bohr sets). 
###### Lemma 19 Let $\delta\in(0,2]$ be a real number and ${\rm Bohr}(\rho,\delta)\subseteq{\mathbf{G}}$ be a one–dimensional Bohr set. Then $|{\rm Bohr}(\rho,\delta)|\geqslant(c\delta)^{d^{2}_{\rho}}\cdot|{\mathbf{G}}|\,,$ where $c>0$ is an absolute constant. Now suppose that for a set $A\subseteq{\mathbf{G}}$ one has $|A|=\delta|{\mathbf{G}}|$ and $\|\widehat{A}(\rho)\|_{o}\geqslant\varepsilon|A|$. Put $f(x)=f_{A}(x)=A(x)-\delta$. Take a regular Bohr set $B={\rm Bohr}(\rho,\delta)$, $\delta=\varepsilon/4$, and let $B_{*}={\rm Bohr}(\rho,\kappa\delta)$, where $|\kappa|\leqslant 1/(100d^{2}_{\rho})$ is a certain number. Then by the definition of Bohr sets, we have $\varepsilon|A|\leqslant\|\widehat{A}(\rho)\|_{o}=|B|^{-1}\|\sum_{h}\sum_{g}f(g)B(gh^{-1})\rho(g)\|_{o}=|B|^{-1}\|\sum_{h}\sum_{g}f(g)B(gh^{-1})\rho(h)\|_{o}+\mathcal{E}\,,$ where $|\mathcal{E}|\leqslant 2\delta|A|$. Thus $\varepsilon|A|/2\leqslant|B|^{-1}\sum_{h}\left|\sum_{g}f(g)B(gh^{-1})\right|$ and hence, in view of Lemma 19, we find $h\in{\mathbf{G}}$ with $\frac{|A||B|}{|{\mathbf{G}}|}+\varepsilon|A|\exp(-O(d^{2}_{\rho}\log(1/\delta)))\leqslant|A\cap Bh|\,.$ On the other hand, by the regularity of $B$ one has $|BB_{*}|\leqslant|B|(1+100d^{2}_{\rho}|\kappa|)$. This implies that Proposition 46 can indeed be reversed. ## References * [1] B. Bollobás, A. Thomason, Projections of bodies and hereditary properties of hypergraphs, Bull. London Math. Soc. 27 (1995) 417–424. * [2] J. Bourgain, A. Gamburd, P. Sarnak, Affine linear sieve, expanders, and sum–product, Inventiones mathematicae 179.3 (2010): 559–644. * [3] E. Breuillard, B. Green, T. Tao, Approximate subgroups of linear groups, Geom. Funct. Anal., 21:4 (2011), 774–819. * [4] W.T. Gowers, A new proof of Szemerédi’s theorem, GAFA, 11 (2001), 465–588. * [5] W.T. Gowers, Quasirandom groups, Probab. Comput., 17(3):363–387, 2008. * [6] R. Hartshorne, Algebraic geometry, Vol. 52. Springer Science & Business Media, 2013. * [7] J. 
Heintz, Definability and fast quantifier elimination in algebraically closed fields, Theoret. Comput. Sci. 24 (1983), 239–277. * [8] H. Helfgott, Growth and generation in $\mathrm{SL}_{2}(\mathbb{Z}/p\mathbb{Z})$, Annals of Math. 167 (2008), no. 2, 601–623. * [9] H. Helfgott, Growth in groups: ideas and perspectives, Bulletin of the American Mathematical Society 52.3 (2015): 357–413. * [10] J.E. Humphreys, Conjugacy classes in semisimple algebraic groups, No. 43, (2011), American Mathematical Soc. * [11] A. Iosevich, D. Koh, Extension theorems for spheres in the finite field setting, Forum. Math. 22 (2010), no.3, 457–483. * [12] A. Iosevich, D. Koh, M. Lewko, Finite field restriction estimates for the paraboloid in high even dimensions, Journal of Functional Analysis (2020): 108450. * [13] M. Lewko, Finite field restriction estimates based on Kakeya maximal operator estimates, Journal of the European Mathematical Society 21.12 (2019): 3649–3707. * [14] G. Mockenhaupt, T. Tao, Restriction and Kakeya phenomena for finite fields, Duke Math. J. 121:1 (2004), 35–74. * [15] B. Murphy, Upper and lower bounds for rich lines in grids, arXiv:1709.10438v1 [math.CO] 29 Sep 2017. * [16] M.A. Naimark, Theory of group representations, Moscow: Fizmatlit., 2010, ISBN: 978-5-9221-1260-4. * [17] E. Noether, Ein algebraisches Kriterium für absolute Irreduzibilität, Math. Ann. 85, 26–40 (1922). * [18] V. Landazuri, G. M. Seitz, On the minimal degrees of projective representations of the finite Chevalley groups, Journal of Algebra 32, 418–443 (1974). * [19] S. Lang, A. Weil, Number of points of varieties in finite fields, Am. J. Math. 76 (1954), 819–827. * [20] M. J. Larsen, R. Pink, Finite subgroups of algebraic groups, J. Amer. Math. Soc., 24:4 (2011), 1105–1158. * [21] M. W. Liebeck, A. Shalev, Diameters of finite simple groups: sharp bounds and applications, Annals of Math. 154 (2001), 383–406. * [22] M. W. Liebeck, G. Schul, A. 
Shalev, Rapid growth in finite simple groups, Transactions of the American Mathematical Society 369.12 (2017): 8765–8779. * [23] T. Sanders, A quantitative version of the non-abelian idempotent theorem, GAFA 21:1 (2011), 141–221; arXiv: 0912.0308.2009. * [24] J.-P. Serre, Représentations linéaires des groupes finis, Collections Méthodes, Hermann, Paris, 1967. * [25] I.D. Shkredov, Energies and structure of additive sets, Electronic Journal of Combinatorics, 21:3 (2014), #P3.44, 1–53. * [26] I. D. Shkredov, On asymptotic formulae in some sum–product questions, Tran. Moscow Math. Soc, 79 (2018), 271–334; English transl. Trans. Moscow Math. Society 2018, 231–281. * [27] I. D. Shkredov, Modular hyperbolas and bilinear forms of Kloosterman sums, Journal of Number Theory, 220 (2021), 182–211. * [28] I. D. Shkredov, On the spectral gap and the diameter of Cayley graphs, arXiv:2004.10038v1, accepted. * [29] I. D. Shkredov, Growth in Chevalley groups relatively to parabolic subgroups and some applications, arXiv:2003.12785v1 [math.NT] 28 Mar 2020. * [30] T. Tao, V. Vu, Additive combinatorics, Cambridge University Press 2006. * [31] A. Volobuev, preprint. I.D. Shkredov Steklov Mathematical Institute, ul. Gubkina, 8, Moscow, Russia, 119991 and IITP RAS, Bolshoy Karetny per. 19, Moscow, Russia, 127994 and MIPT, Institutskii per. 9, Dolgoprudnii, Russia, 141701 <EMAIL_ADDRESS>
# Knowledge Grounded Conversational Symptom Detection with Graph Memory Networks Hongyin Luo1 Shang-Wen Li2 James Glass1 1MIT CSAIL 2Amazon AI <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> Work done before the second author joined Amazon ###### Abstract In this work, we propose a novel goal-oriented dialog task, automatic symptom detection. We build a system that can interact with patients through dialog to detect and collect clinical symptoms automatically, which can save a doctor’s time interviewing the patient. Given a set of explicit symptoms provided by the patient to initiate a dialog for diagnosis, the system is trained to collect implicit symptoms by asking questions, in order to gather more information for making an accurate diagnosis. After getting the reply from the patient for each question, the system also decides whether the current information is enough for a human doctor to make a diagnosis. To achieve this goal, we propose two neural models and a training pipeline for this multi-step reasoning task. We also build a knowledge graph as an additional input to further improve model performance. Experiments show that our model significantly outperforms the baseline by 4%, discovering 67% of implicit symptoms on average with a limited number of questions. ## 1 Introduction In a typical clinical conversation between a patient and a doctor, the patient initiates the dialog by providing a number of explicit symptoms as a self-report. Based on this information, the doctor asks about other possible symptoms, in order to make an accurate diagnosis and suggest treatments. This is a multi-step reasoning process. At each step, the doctor chooses a symptom to ask about or concludes the diagnosis by considering the dialog history and possible diseases. 
With recent advances in deep reinforcement learning (Mnih et al., 2013) and task-oriented dialog systems (Bordes et al., 2016; Wen et al., 2016), recent studies have proposed human-computer dialog systems for automatic diagnosis (Wei et al., 2018). The automatic diagnosis system applied a deep Q network (DQN) to decide whether to continue the dialog by asking about a symptom or to conclude the diagnosis by predicting a disease. Xu et al. (2019) proposed a knowledge-routed DQN that improves this process by considering relations among diseases and symptoms. The systems described above achieve around 70% accuracy in making a diagnosis among 4 common diseases and detect a few implicit symptoms. However, automatic diagnosis systems are far from ready for clinical use, since there is still a gap in accuracy between the systems and human doctors. Furthermore, the current legislation system has to be amended so that liability can be clarified when a system mis-diagnoses. As a result, current machines are not ready to replace human doctors, but they can still detect symptoms automatically to help doctors make decisions more efficiently. In this work, we propose a system that automatically detects clinical symptoms of a patient through dialog, instead of diagnosing. With such symptom information collected, a doctor can be more knowledgeable before interviewing the patient. Only those symptoms not covered by the dialog system need to be asked about during the interview, so both the time and cost of clinical interviews can be reduced. Meanwhile, the diagnosis is still made by the doctor, which guarantees accuracy and accountability. Table 1 shows two example conversations generated by a dialog agent for automatic diagnosis (AD) and a dialog agent for automatic symptom detection (ASD). The AD and ASD agents conclude the conversations differently. 
The goal of the AD system is making the correct diagnosis, so it concludes the conversation with a prediction of the possible disease. The ASD system aims at discovering as many implicit symptoms as possible, so it stops the conversation after deciding that the current information is enough for a doctor to make a diagnosis. As shown in Table 1, the ASD system may take more turns than an AD system, and it covers more implicit symptoms that are not mentioned in the patient’s self-report.

I. Automatic Diagnosis (AD)
User: The baby has a Runny Nose.
Agent: Does the baby cough?
User: Yes, the baby is coughing.
Agent: Does the baby have a fever?
User: I am not sure.
Agent: It is upper respiratory infection.

II. Automatic Symptom Detection (ASD)
User: The baby has a Runny Nose.
Agent: Does the baby cough?
User: Yes, the baby is coughing.
Agent: Does the baby sneeze?
User: Yes, the baby is sneezing.
Agent: Does the baby have a fever?
User: I am not sure.
Agent: Does the baby have a headache?
User: Yes, the baby has a headache.
Agent: Does the baby have Phlegm?
User: Yes, the baby has Phlegm.
Agent: Thank you for the information! A report has been sent to your doctor.

Table 1: Two examples of dialog between different systems and a user. Conversation I is generated by an automatic diagnosis system, and conversation II is generated by an automatic symptom detection system. The explicit symptom is highlighted in blue, the implicit symptoms are highlighted in red, and unrelated symptoms are marked in green. In this work, we focus on the conversational ASD task. We propose a system that predicts implicit symptoms and decides whether to conclude the conversation with neural networks. To train the neural networks, we borrow the idea of the masked language model (Devlin et al., 2018) and simulate both training and test datasets. 
To improve the performance of the system, we annotate a medical knowledge graph based on an online medical dictionary. We then propose a graph memory network (GMemNN) architecture to utilize the external knowledge graph. We also propose two metrics, symptom hit rate and unrelate rate, to evaluate the performance of the system. We make the following contributions in this paper: * • We propose the conversational symptom detection task and evaluation metrics. * • We annotate a knowledge graph in the medical domain to enrich the current corpus. * • We propose a graph memory network (GMemNN) architecture to build the dialog agent, which produces state-of-the-art performance. ## 2 Related Work ### 2.1 Task-Oriented Dialog Systems Task-oriented dialog systems aim at completing a specific task by interacting with users through natural language, and the main challenge is learning a dialog policy manager (Papineni et al., 2001). Typical applications include flight booking (Seneff and Polifroni, 2000), movie recommendation (Dodge et al., 2015; Fazel-Zarandi et al., 2017), restaurant reservation (Bordes et al., 2016), and vision grounding (Chattopadhyay et al., 2017). Recently, such systems have been applied to automatic diagnosis (Wei et al., 2018; Xu et al., 2019; Luo et al., 2020). De Vries et al. (2017) proposed the GuessWhat game, which requires computers to guess a visual object given a natural language description by asking a series of questions. The GuessWhat game is similar to our task in the medical domain. ### 2.2 Knowledge and Graph Processing Many tasks require processing knowledge in different formats. Sukhbaatar et al. (2015) proposed memory networks (MemNNs) for question answering. The context of the question, or knowledge, is stored in an external memory bank, and the model reads information from the memory with an attention mechanism. 
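The attention-based memory read described above, which our graph memory network later builds on, can be sketched in a few lines of NumPy; the dimensions, weight values, and function names below are illustrative assumptions of ours, not taken from any of the cited papers.

```python
import numpy as np


def softmax(a):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(a - a.max())
    return e / e.sum()


def memory_read(u, M_key, M_val):
    """Attention read from an external memory bank (MemNN-style).

    u:      (d,)   query/state embedding
    M_key:  (k, d) memory embeddings used for addressing
    M_val:  (k, d) memory embeddings used for the output
    """
    alpha = softmax(M_key @ u)  # attention weights over the k memory slots
    return alpha @ M_val        # weighted sum of the value embeddings


rng = np.random.default_rng(0)
u = rng.standard_normal(8)
M_key = rng.standard_normal((5, 8))
M_val = rng.standard_normal((5, 8))
o = memory_read(u, M_key, M_val)
assert o.shape == (8,)
```

The output `o` is typically added to the query and passed through a nonlinearity, which is exactly the pattern the GMemNN in Section 4 applies twice, first over diseases and then over symptoms.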
The MemNN model has also been applied to question answering in the movie domain (Miller et al., 2016), video question answering (Luo et al., 2019), and stance detection (Mohtarami et al., 2018). The neural Turing machine (Graves et al., 2014) and the differentiable neural computer (Graves et al., 2016) also use external memory banks, and enable the models to write into and read from the external memory cells dynamically. In many tasks, knowledge can be organized as graphs. Recent studies have proposed different neural models for processing graph-structured data. Graph neural networks (GNNs) (Scarselli et al., 2008) use neural networks to perform message propagation on graphs. Graph convolutional networks (GCNs) (Kipf and Welling, 2016) employ a multi-layer architecture to learn node embeddings by integrating the information of the nodes and their neighbors. Graph attention networks (Veličković et al., 2017) integrate node embeddings with an attention mechanism. Shang et al. (2019) proposed a graph augmented memory network (GAMENet) model for medication recommendation. A similar idea that combines graphs and memory networks was proposed in Pham et al. (2018) for molecular activity prediction. In this work, we also propose a memory network architecture that processes graph-structured knowledge, but we focus on bipartite graphs. ## 3 Data and Task Definition In this section, we formally define the automatic symptom detection task and describe the corpus used to train and evaluate the model. We first introduce the Muzhi corpus (Wei et al., 2018), then describe the task based on the corpus. Lastly, we describe the medical knowledge graph we annotated and the annotation method. ### 3.1 Muzhi Corpus We train and evaluate our models using the Muzhi corpus. The corpus was collected from an online medical forum111http://muzhi.baidu.com, including 4 common diseases and 66 symptoms. The corpus contains 710 dialog sessions represented as 710 user goals. 
Each user goal includes a set of explicit symptoms as the user’s self-report, and a set of implicit symptoms queried by doctors. An example of a user goal is shown in Table 2. In the example, 1 means that the patient confirms a symptom, while 0 means that the patient is confident that the symptom does not exist. Other symptoms not listed in the user goal are considered either unrelated to the diagnosis, or the patient is not sure about their existence. In the Muzhi corpus, each user goal contains $2.35$ explicit symptoms and $3.26$ implicit symptoms on average.

Disease_tag | Bronchiolitis
---|---
Exp Sym | Runny Nose: 1 | Cough: 1
Imp Sym | Sore Throat: 1 | Emesis: 0 | Harsh Breath: 1 | Fever: 0

Table 2: An example of a user goal in the Muzhi corpus, containing explicit symptoms and implicit symptoms. 1 means a symptom is confirmed by the patient, while 0 means that a symptom is denied by the patient. ### 3.2 Automatic Conversational Symptom Detection Task The goal of the automatic conversational symptom detection (ASD) task is detecting as many implicit symptoms as possible through dialogs with the patients, while limiting the number of dialog turns. The initial input of a dialog agent is the set of explicit symptoms. Based on the query and user response at each step, the system decides on a new symptom to ask about, or stops the dialog. All implicit symptoms, including the positive and negative ones, are considered targets of the system. The user goals are collected from real doctor-patient conversations, so we consider every queried symptom a necessary step toward making an accurate diagnosis. The systems are evaluated with two metrics: we say model A outperforms model B if model A discovers more implicit symptoms and queries fewer unrelated symptoms. ### 3.3 Medical Knowledge Graph We annotate a medical knowledge graph to provide information about the relations among symptoms and diseases, based on the symptoms included in the Muzhi corpus. 
As described above, we have 66 symptoms in total. We regard each symptom and disease as a node in the graph and annotate symptom-symptom and symptom-disease edges based on the A-Hospital222http://www.a-hospital.com/ website, which contains webpages for both symptoms and diseases. We propose a novel annotation method to build the medical knowledge graph considering complications. Each symptom page on A-Hospital describes a series of diseases that can cause the symptom. It also lists the symptoms most likely to appear if the target symptom is caused by a certain disease. We regard these symptoms as complications and make use of this information. In practice, we annotate the knowledge graph with the following method: * 1. For each symptom $s$ and its related disease $d$, add edge $(s,d)$. * 2. For each symptom $s$, its related disease $d$, and complication $c$, add edge $(s,c)$. Figure 1: An example of an annotated symptom in the knowledge graph. Red blocks represent symptoms and blue blocks stand for diseases. “Cough” is the target symptom and the other symptoms are complications. An example of the annotated knowledge is shown in Figure 1, and Table 3 summarizes the knowledge graph we annotated. In the table, S-D edge stands for symptom-disease edge and S-C edge stands for symptom-disease-symptom edge. The number of S-C edges is lower than the product of the number of symptoms per disease and the number of diseases per symptom because only a subset of the symptoms caused by a disease are regarded as significant complications of a given symptom.

Items | Statistics
---|---
Num. Sym. | 66
Num. Dise. | 28
Num. Edge | 1094
Num. S-D Edge | 284
Num. S-C Edge | 810

Table 3: Statistics of the annotated knowledge graph of symptoms, diseases, and complications. Note that both symptom-disease and symptom-complication edges exist. 
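The two annotation rules can be sketched as follows; the raw per-symptom entries below are invented purely for illustration and are not actual rows of the annotated graph.

```python
# Hypothetical raw annotations scraped per symptom page: for each symptom,
# the diseases that can cause it and, per disease, the likely complications.
raw = {
    "Cough": {
        "Bronchiolitis": ["Runny Nose", "Fever"],
        "Pneumonia": ["Fever", "Phlegm"],
    },
    "Fever": {
        "Pneumonia": ["Cough"],
    },
}

sd_edges, sc_edges = set(), set()
for symptom, diseases in raw.items():
    for disease, complications in diseases.items():
        # Rule 1: symptom-disease edge (s, d).
        sd_edges.add((symptom, disease))
        # Rule 2: symptom-complication edge (s, c) via the disease d.
        for c in complications:
            sc_edges.add((symptom, c))

assert ("Cough", "Bronchiolitis") in sd_edges
assert ("Cough", "Fever") in sc_edges
```

Because S-C edges are deduplicated across diseases, their count is lower than the product of symptoms-per-disease and diseases-per-symptom, matching the observation about Table 3.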
## 4 Methods In this section, we introduce the structure and pipeline of the proposed automatic symptom detection system, including the dialog state representation, the neural models for predicting symptoms and dialog actions, the training strategy, and the evaluation metrics. ### 4.1 Dialog State Representation Automatic symptom detection is a multi-step reasoning task handled by action and symptom predictions. Both tasks are accomplished with neural networks based on the current dialog state. The first step of building such a system is representing dialog states with vectors that can be processed by the neural networks. Following the method applied in Wei et al. (2018) for vectorizing the dialog states, each dialog state consists of 4 parts: I. UserAction: The user action of the previous dialog turn. Possible actions are: * • SelfReport: The user sends a self-report containing a set of explicit symptoms. * • Confirm: The user confirms that a queried symptom exists. * • Deny: The user indicates that a queried symptom does not exist. * • NotSure: The user replies “not sure” when an unrelated symptom is queried. II. AgentAction: The previous action of the dialog agent. Possible actions are: * • Initiate: The system initiates the dialog and asks the user to send the self-report. * • Request: The system queries the existence of a symptom. III. Slots: Contains all symptoms that appeared in the dialog history and their statuses. Each symptom has 4 possible statuses: * • Confirmed: Confirmed by the user. * • Denied: Denied by the user. * • Unrelated: The symptom is not necessary for the doctor to make an accurate diagnosis. * • NotQueried: The symptom has not been queried by the agent. IV. NumTurns: Indicates the length of the dialog history, i.e., the current number of turns. At each step, only one value is selected for UserAction, AgentAction, and NumTurns, and we represent them with one-hot vectors $a^{u},a^{r}$, and $n$ respectively. 
We use a 66-dimension vector $s$ to represent the Slots, where each dimension indicates the status of a symptom. If a symptom is confirmed, the corresponding dimension is set to $1$. If a symptom is denied, the corresponding dimension is set to $-1$. If a symptom is unrelated to the diagnosis process, the corresponding dimension is set to $-2$. All other dimensions are set to $0$. The final input of the neural networks at the $t$-th step is represented as $x_{t}=[a_{t}^{u},a_{t}^{r},n_{t},s_{t}]$ (1) which is generated by concatenating all the vectors described above. (a) Initialize the patient embedding with edges to the input slots. (b) Integrate disease information with attention. (c) Integrate symptom information with attention. (d) Predict action and symptom with linear transformations. Figure 2: The 4 steps for processing an input dialog state with a graph memory network (GMemNN). The gray nodes stand for the patient, the red nodes represent symptoms, and the blue nodes represent diseases. The edges with arrows, which are labeled with the same color as their source nodes, indicate the direction of message propagation. ### 4.2 Models #### 4.2.1 Multi-Layer Perceptrons The first neural model we apply in this work is a multi-layer perceptron (MLP) with 1 hidden layer. The same neural network is applied in Wei et al. (2018) and Xu et al. (2019) for the automatic diagnosis task. With input $x$, the feed-forward process of the MLP is as follows: $\begin{split}&h=ReLU(W_{1}\cdot x+b_{1})\\\ &y=Softmax(W_{2}\cdot h+b_{2})\end{split}$ (2) where Softmax computes a probability distribution by $Softmax(a_{i})=\frac{e^{a_{i}}}{\sum_{j}e^{a_{j}}}$ (3) The MLP is used for both implicit symptom and dialog action predictions. Note that the MLP model only uses the dialogs in the training set, and does not use the knowledge graph we annotated. 
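As a concrete sketch of Eq. (1) and Eqs. (2)-(3), the snippet below builds a dialog state vector and runs it through a randomly initialized one-hidden-layer MLP; the turn-count dimension (`MAX_TURNS`) and all weight values are illustrative assumptions of ours.

```python
import numpy as np

USER_ACTIONS = ["SelfReport", "Confirm", "Deny", "NotSure"]
AGENT_ACTIONS = ["Initiate", "Request"]
NUM_SYMPTOMS = 66
MAX_TURNS = 22  # illustrative cap on the one-hot turn encoding


def one_hot(i, k):
    v = np.zeros(k)
    v[i] = 1.0
    return v


def dialog_state(user_action, agent_action, num_turns, slots):
    """x_t = [a^u, a^r, n, s] as in Eq. (1); `slots` maps symptom index
    to +1 (confirmed), -1 (denied), or -2 (unrelated)."""
    s = np.zeros(NUM_SYMPTOMS)
    for idx, status in slots.items():
        s[idx] = status
    return np.concatenate([
        one_hot(USER_ACTIONS.index(user_action), len(USER_ACTIONS)),
        one_hot(AGENT_ACTIONS.index(agent_action), len(AGENT_ACTIONS)),
        one_hot(num_turns, MAX_TURNS),
        s,
    ])


def mlp(x, W1, b1, W2, b2):
    """Eqs. (2)-(3): one hidden ReLU layer followed by a softmax output."""
    h = np.maximum(W1 @ x + b1, 0.0)
    z = W2 @ h + b2
    e = np.exp(z - z.max())
    return e / e.sum()


x = dialog_state("Confirm", "Request", 3, {0: 1, 5: -1, 7: -2})
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((128, x.size)), np.zeros(128)
W2, b2 = rng.standard_normal((NUM_SYMPTOMS, 128)), np.zeros(NUM_SYMPTOMS)
y = mlp(x, W1, b1, W2, b2)
assert x.size == 4 + 2 + MAX_TURNS + NUM_SYMPTOMS
```

The same forward pass serves both prediction heads; only the output dimensionality changes (2 for actions, 66 for symptoms).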
#### 4.2.2 Graph Memory Networks Limited by their structure, MLPs cannot directly utilize the knowledge graph, which contains medical knowledge necessary for clinical diagnosis. Inspired by previous studies on processing knowledge and graphs (Sukhbaatar et al., 2015; Veličković et al., 2017), we propose graph memory networks (GMemNN) that utilize the medical knowledge graph to improve the performance of the automatic symptom detection system. The knowledge graph is stored in an external memory bank. At each step, we regard the patient as a node connected with the known symptoms in the graph. Our purpose is to learn the embedding of the patient node and predict dialog actions and symptoms based on it. Prediction with GMemNN contains 4 steps: 1. encoding the dialog state, 2. integrating potential disease information, 3. integrating complication symptoms, and 4. predicting the action/symptom. The 4 steps are illustrated in Figure 2. Dialog State Encoding The GMemNN encodes the input dialog state with a lookup matrix, or a linear transformation. Given an input dialog state representation $x$, the network encodes the dialog state with $u^{0}=W_{x}\cdot x+b_{x}$ (4) Note that no non-linear activation is applied to $u$ at this step, and $u$ is considered the initial embedding of the patient node in the graph. Integrating Disease Information After encoding the dialog state, we update the patient embedding using the embeddings of possible diseases. We calculate an embedding that summarizes potential diseases using the attention mechanism for reading from the memory bank applied in memory networks (Sukhbaatar et al., 2015). Similarly to the method applied in the MemNN, we first calculate two sets of embeddings for the diseases based on their neighbors, or related symptoms, in the knowledge graph. 
In this paper, we use $W_{m}^{s}$ to denote the symptom embedding matrices for calculating attentions on memory, and $W_{c}^{s}$ to denote the symptom embeddings for calculating outputs. The related symptoms are summarized with the adjacency matrix $A_{d}$ between symptoms and diseases. $\begin{split}&d_{i,m}^{1}=d_{i,m}^{0}+A_{d}^{i}\,W_{m}^{s}\,D_{d,i}^{-1}\\\ &d_{i,c}^{1}=d_{i,c}^{0}+A_{d}^{i}\,W_{c}^{s}\,D_{d,i}^{-1}\end{split}$ (5) where $d_{i,\cdot}^{1}$ represents the updated embedding of disease $i$, $d_{i,\cdot}^{0}$ is the initial disease embedding, $W_{\cdot}^{s}$ stands for the symptom embedding matrix for updating disease embeddings. $A_{d}^{i}$ is the $i$-th row of $A_{d}$, and $D_{d,i}$ is the disease node degree for normalization. This is a variant of the normalization method proposed in Kipf and Welling (2016). Then we summarize potential diseases using $d_{m}^{1}$, $d_{c}^{1}$, and the initial input embedding $u^{0}$. $\begin{split}&e^{d}=\sum_{i}\alpha_{i}^{d}\cdot d_{i,c}^{1}\\\ &\alpha_{i}^{d}=Softmax(u^{0}\cdot d_{i,m}^{1})\end{split}$ (6) Then we update the initial patient embedding $u^{0}$ by integrating disease embeddings. $u^{d}=ReLU(u^{0}+e^{d})$ (7) Integrating Symptom Information After integrating the information of possible diseases, the model continues integrating the complication symptom information to produce the final patient embedding. For symptom $i$, given the initial symptom embeddings $s_{i,\cdot}^{0}$ and the adjacency matrix $A_{s}$ between symptoms, we calculate symptom embeddings with $\begin{split}&s_{i,m}^{1}=s_{i,m}^{0}+A_{s}^{i}\,W_{m}^{s}\,D_{s,i}^{-1}+A_{d}^{\cdot,i}\,W_{m}^{d}\,D_{d,\cdot,i}^{-1}\\\ &s_{i,c}^{1}=s_{i,c}^{0}+A_{s}^{i}\,W_{c}^{s}\,D_{s,i}^{-1}+A_{d}^{\cdot,i}\,W_{c}^{d}\,D_{d,\cdot,i}^{-1}\end{split}$ (8) where $W_{\cdot}^{s}$ is the complication symptom embedding matrix, $W_{\cdot}^{d}$ is the disease embedding matrix. 
$D_{s,i}$ is the number of neighbor symptoms of symptom $i$, and $D_{d,i}$ is the number of neighbor diseases of symptom $i$. Similarly, we summarize the complication symptoms by $\begin{split}&e^{s}=\sum_{i}\alpha_{i}^{s}\cdot s_{i,c}^{1}\\\ &\alpha_{i}^{s}=Softmax(u^{d}\cdot s_{i,m}^{1})\end{split}$ (9) Then we get the final patient embedding by integrating $u^{d}$ with the complication symptom embedding: $u^{d,s}=ReLU(u^{d}+e^{s})$ (10) $u^{d,s}$ stands for a patient embedding that has integrated both disease and symptom information. Action/Symptom Prediction The GMemNN model predicts both dialog actions and symptoms with linear transformations based on the same patient embedding $u^{d,s}$. $\begin{split}&y^{act}=W^{act}\cdot u^{d,s}+b_{act}\\\ &y^{sym}=W^{sym}\cdot u^{d,s}+b_{sym}\end{split}$ (11) The action and symptom distributions are calculated from $y^{act}$ and $y^{sym}$ with the Softmax function. The available dialog actions are Conclude and Query, and the prediction space of the symptom prediction network is the 66 symptoms excluding the known symptoms. #### 4.2.3 Training The Muzhi dataset does not contain any dialog history to mimic. Inspired by the masked language model training pipeline proposed by Devlin et al. (2018), we construct our own training set by randomly masking and sampling symptoms. Symptom Prediction We build the training set by simulating dialog states from user goals in the original training set of the Muzhi corpus. Consider a user goal $g_{i}$ with explicit symptom set $S_{e}$ and implicit symptom set $S_{i}$ as an example, where $|S_{e}|=n_{e}$ and $|S_{i}|=n_{i}$. We simulate a number of dialog states based on $g_{i}$ with the following steps. * • Select the entire explicit symptom set $S_{e}$. 
* • Randomly select $n_{i}^{\prime}\in[0,n_{i})$ and sample $n_{i}^{\prime}$ implicit symptoms to construct $S_{i}^{\prime}\subset S_{i}$. * • Randomly select $n_{u}\in[0,T_{max}-n_{i}^{\prime})$ and sample $n_{u}$ unrelated symptoms to construct the set $S_{u}$. $T_{max}$ stands for the maximum number of symptoms that can be queried. * • Set the number of turns to $t=n_{i}^{\prime}+n_{u}$. * • If $n_{i}^{\prime}=n_{u}=0$, set AgentAction to “Initiate”. Otherwise, set AgentAction to “Request”. * • Randomly select a symptom $s\in S_{i}\cup S_{u}$. If $s\in S_{u}$, set UserAction to “NotSure”; otherwise set it to “Confirm” or “Deny” based on $g_{i}$. * • Set the current slot to $S_{e}\cup S_{i}^{\prime}\cup S_{u}$. * • Randomly select an implicit symptom $s_{l}\in S_{i}-S_{i}^{\prime}$ as the prediction label. Action Prediction We simulate dialog states for the dialog action prediction task with the same procedure as described above, except that all implicit symptoms may be involved. If all implicit symptoms are included, the training label is set to “Conclude”; otherwise the label is “Query”. We train MLPs and GMemNNs on both tasks after the training sets are generated. The models are trained on the simulated dialog states and labels with the stochastic gradient descent (SGD) algorithm. ## 5 Experiments We train and evaluate our models on the Muzhi corpus. The symptom predictor and the dialog action predictor are trained separately. Using the same simulation strategy, we also generate test sets for symptom prediction and action prediction from the test user goals. The generated test sets are used for evaluating the performance of our models on both unit tasks. After evaluating the models on the unit tasks, we conduct conversational evaluations using the trained models and a user simulator.
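The state-simulation steps of Section 4.2.3 can be sketched as follows. The returned dictionary layout, the default $T_{max}$, and the assumption that the implicit set is non-empty and smaller than $T_{max}$ are my own illustrative choices, not the authors' code.

```python
import random

def simulate_symptom_state(explicit, implicit, unrelated_pool,
                           t_max=10, rng=None):
    """Simulate one training state for symptom prediction.

    Assumes 0 < len(implicit) < t_max and that the three symptom
    sets are pairwise disjoint.
    """
    rng = rng or random.Random()
    n_imp = rng.randrange(0, len(implicit))             # n_i' in [0, n_i)
    s_imp = set(rng.sample(implicit, n_imp))            # S_i' subset of S_i
    n_unrel = rng.randrange(0, t_max - n_imp)           # n_u in [0, T_max - n_i')
    s_unrel = set(rng.sample(unrelated_pool, n_unrel))  # S_u
    label = rng.choice(sorted(set(implicit) - s_imp))   # s_l in S_i - S_i'
    return {
        "slots": set(explicit) | s_imp | s_unrel,       # S_e U S_i' U S_u
        "turns": n_imp + n_unrel,                       # t = n_i' + n_u
        "agent_action": "Initiate" if n_imp + n_unrel == 0 else "Request",
        "label": label,
    }

rng = random.Random(0)
state = simulate_symptom_state(["cough"], ["fever", "rash"],
                               ["itch", "nausea", "dizzy"], rng=rng)
```

The action-prediction variant differs only in allowing $n_{i}^{\prime}=n_{i}$ and replacing the symptom label with Conclude/Query.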
We evaluate the performance of the models by counting the number of implicit and unrelated symptoms queried in the conversations. ### 5.1 Action Prediction For action prediction, we simulate 20 dialog states for each user goal in both the training and test sets. All simulated states contain the entire explicit symptom set. 10 of the 20 states also contain the complete implicit symptom set, so they are labeled with “$1$”, meaning that the dialog system should conclude the dialog given these states. The other states contain only a proper subset of the implicit symptoms. These states are labeled with “$0$”, meaning that the agent should continue querying symptoms. We have 11,360 training states and 2,840 test states. We train an MLP and a GMemNN model on the simulated training sets. The MLP model has one hidden layer with 128 neurons, while the size of the hidden layers of the GMemNN is set to 64. The models are trained with the stochastic gradient descent (SGD) algorithm. The learning rate is 0.025 for training the MLP and 0.035 for training the GMemNN. A weight decay rate of 0.001 is applied for training both models. Both models are trained for 40 epochs. The experimental results are shown in Table 4. All experimental results are obtained by running 5 independent experiments for each model, starting from data simulation. The GMemNN model outperformed the MLP model by a small margin. The results indicate that action prediction is not a hard classification task, so external knowledge and complex neural networks do not help much. Unit Task | Model | Acc (%) | Stdv (%) ---|---|---|--- Action Prediction | MLP | 94.14 | 0.27 GMemNN | 94.50 | 0.42 Symptom Prediction | MLP | 45.10 | 0.62 GMemNN | 47.88 | 1.18 Table 4: Unit task evaluation results of the action and symptom prediction tasks. Acc stands for average accuracy, and Stdv stands for the standard deviation of the accuracies.
The statistics are obtained by running 10 experiments for each model on each task. ### 5.2 Implicit Symptom Prediction For implicit symptom prediction, we simulate 10 dialog states for each user goal in both the training and test sets. All dialog states contain the complete explicit symptom set and a proper subset of the implicit symptoms. A random number of unrelated symptoms are also included. The label for each training state is randomly sampled from the implicit symptoms that are not included in the dialog state. We train the neural networks for the implicit symptom prediction task with SGD. The architectures of the MLP and GMemNN are the same as those of the models applied for action prediction. We also apply the same hyper-parameter settings for training as in the previous task. The experimental results of symptom prediction are shown in Table 4, which are also collected by running 5 independent experiments starting from data simulation. The GMemNN model significantly outperformed the basic MLP model by $2.7\%$ on average, and its performance is more stable. Compared with the action prediction task, symptom prediction is much more difficult. As a result, domain-specific knowledge can improve the performance more significantly. ### 5.3 Conversational Evaluation We also evaluate our models by conducting dialogs using the original test split of user goals in the Muzhi corpus. For each test user goal, we generate a conversation using the dialog action predictor, the implicit symptom predictor, and a rule-based user simulator. The user simulator initiates a dialog by providing a set of explicit symptoms as the initial state. In each dialog step, the action predictor decides if the current state is informative enough to conclude the dialog. If a conclusion action is predicted, the system stops the conversation. Otherwise, the system queries the user simulator with a symptom selected by the symptom predictor.
If the selected symptom is positive in the implicit symptom set, the user simulator confirms the query. If it is negative in the implicit symptom set, the user simulator denies the query. If the selected symptom is not in the implicit symptom set, the user simulator responds “NotSure”. The dialog continues until the “Conclude” action is selected or the maximum limit of dialog turns is reached. For each test user goal, we calculate the number of unrelated symptoms queried $N_{u}$, the number of dialog turns $N$, and the ratio of detected implicit symptoms. Given the number of all implicit symptoms $N_{i}$ and the number of detected implicit symptoms $N_{i}^{\prime}$, we calculate the hit rate $R_{h}$, the unrelated rate $R_{u}$, and the F1 score by $R_{h}=\frac{N_{i}^{\prime}}{N_{i}},\>R_{u}=\frac{N_{u}}{N},\>F_{1}=\frac{2R_{h}(1-R_{u})}{R_{h}+1-R_{u}}$ (12) We evaluate the models by calculating and comparing $R_{h}$, $R_{u}$, and the F1 score, averaged over the number of conversations. The experimental results are shown in Table 5. Model | Hit (%) | UnRel (%) | F1 (%) ---|---|---|--- MLP-AD | 9.62 | 83.37 | 18.75 MLP-ASD | 63.26 | 81.88 | 31.35 GMemNN | 67.30 | 81.05 | 32.59 Table 5: The experimental results of the conversational evaluation. MLP-AD stands for the pretrained state-of-the-art MLP model for automatic diagnosis (AD) provided by the authors of Xu et al. (2019). MLP-ASD stands for the MLP model for automatic symptom detection (ASD) in this work. Hit stands for the average hit rate $R_{h}$, and UnRel stands for the average unrelated rate $R_{u}$. The experiments are conducted by setting the tolerate rate (TolR) to 10, meaning that the agent is allowed to query up to 10 symptoms. The experimental results show that the MLP-ASD and GMemNN models detected significantly more implicit symptoms than the MLP-AD model Xu et al. (2019), which makes a diagnosis after querying only $9.62\%$ of the implicit symptoms that a human doctor would ask about.
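The metrics of Eq. (12) are straightforward to reproduce. The function below is a sketch; the final call recomputes the random-baseline case discussed in the text, using the paper's figures of 3.26 implicit symptoms on average out of 66 candidates.

```python
def dialog_metrics(n_implicit, n_detected, n_unrelated, n_turns):
    """Hit rate R_h, unrelated rate R_u, and the F1 score of Eq. (12)."""
    r_h = n_detected / n_implicit                 # R_h = N_i' / N_i
    r_u = n_unrelated / n_turns if n_turns else 0.0  # R_u = N_u / N
    denom = r_h + 1.0 - r_u
    f1 = 2.0 * r_h * (1.0 - r_u) / denom if denom else 0.0
    return r_h, r_u, f1

# Random baseline that queries all 66 symptoms: R_h = 1 and
# R_u = (66 - 3.26) / 66, so the F1 score comes out near 9%,
# in line with the low baseline F1 quoted in the text.
r_h, r_u, f1 = dialog_metrics(3.26, 3.26, 66 - 3.26, 66)
```

Note that F1 here is the harmonic mean of $R_{h}$ and $1-R_{u}$, so a system is rewarded only for queries that actually hit implicit symptoms.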
Comparing the MLP-ASD and GMemNN models, the GMemNN model significantly outperformed the MLP model by $4.04\%$ in hit rate with a $0.83\%$ lower unrelated rate. The improvement in F1 score is $1.24\%$. We use the tolerate rate (TolR) to limit the number of dialog turns. If the symptom predictor is completely random and the TolR equals the number of symptoms, the hit rate $R_{h}$ will be $100\%$. However, querying all symptoms costs too much of the patient's time. Since the average number of symptoms per user goal is $3.26$, the average unrelated rate $R_{u}$ of such a system will be $(66-3.26)/66=95.06\%$ and the F1 score will be as low as $9.45\%$. Figure 3: The effect of the tolerate rate on hit rate and unrelated rate for the MLP and the GMemNN models. To understand the effect of the tolerate rate, we visualize the relation between $R_{h}$, $R_{u}$, and TolR in Figure 3. The plot indicates that increasing TolR from 1 to 10 can significantly improve the hit rates. However, the improvement vanishes after the 15th query because having too many queried symptoms makes the dialog states noisy. When the TolR is less than 10, the performance gap between the MLP and GMemNN models is not as large as in the cases where TolR is larger than 10. There are two reasons for this phenomenon: (i) some symptoms are queried by human doctors very frequently, so they are equally easy for both models to predict; (ii) the GMemNN is better at modeling and processing noisy inputs. ## 6 Conclusion In this work, we propose a new task: detecting implicit symptoms of patients with an automatic dialog system. We construct the system with a dialog action prediction module and a symptom query module. We first implement and evaluate a baseline system based on multi-layer perceptrons (MLPs). To improve the performance of the system, we annotate a medical-domain knowledge graph and propose the graph memory network (GMemNN) model. We systematically evaluate and compare both models with unit tasks and conversations.
We also studied how the number of dialog turns affects the performance of the systems. Experiments showed that both models can detect more than $60\%$ of implicit symptoms using a limited number of dialog turns, which significantly outperformed the state-of-the-art automatic diagnosis system. In future work, we will expand the knowledge graph and aim to assist human doctors by making the clinical interview process more efficient. ## References * Bordes et al. (2016) Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. _arXiv preprint arXiv:1605.07683_. * Chattopadhyay et al. (2017) Prithvijit Chattopadhyay, Deshraj Yadav, Viraj Prabhu, Arjun Chandrasekaran, Abhishek Das, Stefan Lee, Dhruv Batra, and Devi Parikh. 2017. Evaluating visual conversational agents via cooperative human-ai games. In _Fifth AAAI Conference on Human Computation and Crowdsourcing_. * De Vries et al. (2017) Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pages 5503–5512. * Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_. * Dodge et al. (2015) Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. 2015. Evaluating prerequisite qualities for learning end-to-end dialog systems. _arXiv preprint arXiv:1511.06931_. * Fazel-Zarandi et al. (2017) Maryam Fazel-Zarandi, Shang-Wen Li, Jin Cao, Jared Casale, Peter Henderson, David Whitney, and Alborz Geramifard. 2017. Learning robust dialog policies in noisy environments. * Graves et al. (2014) Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines.
_arXiv preprint arXiv:1410.5401_. * Graves et al. (2016) Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural network with dynamic external memory. _Nature_ , 538(7626):471. * Kipf and Welling (2016) Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_. * Luo et al. (2020) Hongyin Luo, Shang-Wen Li, and James Glass. 2020. Prototypical q networks for automatic conversational diagnosis and few-shot new disease adaption. _arXiv preprint arXiv:2005.11153_. * Luo et al. (2019) Hongyin Luo, Mitra Mohtarami, James Glass, Karthik Krishnamurthy, and Brigitte Richardson. 2019. Integrating video retrieval and moment detection in a unified corpus for video question answering. _Proc. Interspeech 2019_ , pages 599–603. * Miller et al. (2016) Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. _arXiv preprint arXiv:1606.03126_. * Mnih et al. (2013) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing atari with deep reinforcement learning. _arXiv preprint arXiv:1312.5602_. * Mohtarami et al. (2018) Mitra Mohtarami, Ramy Baly, James Glass, Preslav Nakov, Lluís Màrquez, and Alessandro Moschitti. 2018. Automatic stance detection using end-to-end memory networks. _arXiv preprint arXiv:1804.07581_. * Papineni et al. (2001) Kishore A Papineni, Salim Roukos, and Robert T Ward. 2001. Natural language task-oriented dialog manager and method. US Patent 6,246,981. * Pham et al. (2018) Trang Pham, Truyen Tran, and Svetha Venkatesh. 2018. Graph memory networks for molecular activity prediction. 
In _2018 24th International Conference on Pattern Recognition (ICPR)_ , pages 639–644. IEEE. * Scarselli et al. (2008) Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2008. The graph neural network model. _IEEE Transactions on Neural Networks_ , 20(1):61–80. * Seneff and Polifroni (2000) Stephanie Seneff and Joseph Polifroni. 2000. Dialogue management in the mercury flight reservation system. In _Proceedings of the 2000 ANLP/NAACL Workshop on Conversational systems-Volume 3_ , pages 11–16. Association for Computational Linguistics. * Shang et al. (2019) Junyuan Shang, Cao Xiao, Tengfei Ma, Hongyan Li, and Jimeng Sun. 2019. Gamenet: Graph augmented memory networks for recommending medication combination. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pages 1126–1133. * Sukhbaatar et al. (2015) Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In _Advances in neural information processing systems_ , pages 2440–2448. * Veličković et al. (2017) Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. _arXiv preprint arXiv:1710.10903_. * Wei et al. (2018) Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuanjing Huang, Kam-Fai Wong, and Xiangying Dai. 2018. Task-oriented dialogue system for automatic diagnosis. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 201–207. * Wen et al. (2016) Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. _arXiv preprint arXiv:1604.04562_. * Xu et al. (2019) Lin Xu, Qixian Zhou, Ke Gong, Xiaodan Liang, Jianheng Tang, and Liang Lin. 2019\. 
End-to-end knowledge-routed relational dialogue system for automatic diagnosis. _arXiv preprint arXiv:1901.10623_.
# Noncommutative CW-spectra as enriched presheaves on matrix algebras Gregory Arone Stockholm University <EMAIL_ADDRESS>Supported in part by the Swedish Research Council, grant number 2016-05440 Ilan Barnea Haifa University <EMAIL_ADDRESS>Supported by ISF 786/19 Tomer M. Schlank Hebrew University <EMAIL_ADDRESS>Supported by ISF 1588/18 and BSF 2018389 ###### Abstract Motivated by the philosophy that $C^{*}$-algebras reflect noncommutative topology, we investigate the stable homotopy theory of the (opposite) category of $C^{*}$-algebras. We focus on $\operatorname{C}^{*}$-algebras which are non-commutative CW-complexes in the sense of [ELP]. We construct the stable $\infty$-category of noncommutative CW-spectra, which we denote by $\mathtt{NSp}$. Let $\mathcal{M}$ be the full spectral subcategory of $\mathtt{NSp}$ spanned by ``noncommutative suspension spectra'' of matrix algebras. Our main result is that $\mathtt{NSp}$ is equivalent to the $\infty$-category of spectral presheaves on $\mathcal{M}$. To prove this we first prove a general result which states that any compactly generated stable $\infty$-category is naturally equivalent to the $\infty$-category of spectral presheaves on a full spectral subcategory spanned by a set of compact generators. This is an $\infty$-categorical version of a result by Schwede and Shipley [ScSh1]. In proving this we use the language of enriched $\infty$-categories as developed by Hinich [Hin2, Hin3]. We end by presenting a ``strict'' model for $\mathcal{M}$. That is, we define a category $\mathcal{M}_{s}$ strictly enriched in a certain monoidal model category of spectra $\mathtt{Sp^{M}}$. We give a direct proof that the category of $\mathtt{Sp^{M}}$-enriched presheaves $\mathcal{M}_{s}^{\operatorname{op}}\to\mathtt{Sp^{M}}$ with the projective model structure models $\mathtt{NSp}$ and conclude that $\mathcal{M}_{s}$ is a strict model for $\mathcal{M}$. ###### Contents 1. 1 Introduction 2. 
2 The $\infty$-category of noncommutative CW-complexes 3. 3 Enriched infinity categories 4. 4 Stable $\infty$-categories and spectral presheaves 5. 5 Stabilization of categories 6. 6 The $\infty$-category of noncommutative CW-spectra ## 1 Introduction The celebrated Gelfand theorem gives a contravariant equivalence between the categories of locally compact Hausdorff spaces and commutative $C^{*}$-algebras. This correspondence led to the point of view that $C^{*}$-algebras are noncommutative generalizations of topological spaces. The study of $C^{*}$-algebras from this perspective is the subject of noncommutative geometry and topology. In this paper we study noncommutative stable homotopy theory, i.e., the stable homotopy category of the opposite of the category of $C^{*}$-algebras. In doing this we are continuing the investigations of Østvær [Ost] and Mahanta [Mah1], among others. To be a little more specific, in this paper we construct the $\infty$-category of noncommutative CW-spectra, which we denote $\mathtt{NSp}$, and show that $\mathtt{NSp}$ is equivalent to the category of spectral presheaves over a spectrally enriched category $\mathcal{M}$. The objects of $\mathcal{M}$ are noncommutative suspension spectra of matrix algebras, and its morphisms are mapping spectra between matrix algebras. In a companion paper [ABS2] we analyze the category $\mathcal{M}$ in considerable detail. In that paper we introduce a rank filtration of $\mathcal{M}$, describe the subquotients of the rank filtration, and use it, in particular, to give an explicit model for the rationalization of $\mathtt{NSp}$. We construct the $\infty$-category of noncommutative CW-spectra, as the stabilization of the $\infty$-category of noncommutative CW-complexes. Our construction of the $\infty$-category of noncommutative CW-complexes mimics Lurie's construction of the $\infty$-category of ``ordinary'' CW-complexes as the ind-completion of the $\infty$-category of finite CW-complexes [Lur1]. 
The latter is considered an $\infty$-category by first viewing it as a topological category in the obvious way and then taking the topological nerve [Lur1, Section 1.1.5]. (By a topological category in this paper we mean a category enriched in the category of compactly generated weak Hausdorff spaces.) In more detail, consider the class of $C^{*}$-algebras called ``noncommutative CW-complexes'' in [ELP, Section 2.4]. These are algebras generated from the finite dimensional matrix algebras by a finite inductive procedure, generalizing the construction of finite CW-complexes from $S^{0}$ in the commutative case. We will therefore call them _finite noncommutative CW-complexes_ in this paper. Finite noncommutative CW-complexes have been studied in several places (for instance [Ped] and [Die]). In Section 2 we define the _topological category of finite noncommutative CW-complexes_ to be the opposite of the topological category whose objects are the $C^{*}$-algebras which are noncommutative CW-complexes and whose hom-spaces are given by taking the topology of pointwise norm convergence on the sets of $*$-homomorphisms. We define the _$\infty$-category of finite noncommutative CW-complexes_ by taking the topological nerve of this topological category. We denote both versions of this category by $\mathtt{NCW}^{f}$. We now define the _$\infty$-category of noncommutative CW-complexes_ to be the ind-completion of $\mathtt{NCW}^{f}$. This will be our generalization of the $\infty$-category of spaces and we denote it by $\mathtt{NCW}$. It can be shown that the $\infty$-category $\mathtt{NCW}^{f}$ is pointed, essentially small and admits finite colimits, so $\mathtt{NCW}$ is a pointed compactly generated $\infty$-category. Let $\mathtt{NSp}:=\mathtt{Sp}(\mathtt{NCW})$ be the _$\infty$-category of noncommutative CW-spectra_, i.e., the stabilization of $\mathtt{NCW}$. By construction $\mathtt{NSp}$ is a stable $\infty$-category.
In particular, it is enriched and tensored over the $\infty$-category of ``ordinary'' spectra $\mathtt{Sp}$. There is a suspension-spectrum functor from noncommutative spaces to noncommutative spectra, which we denote $\Sigma^{\infty}_{\mathtt{NC}}\colon\mathtt{NCW}\to\mathtt{NSp}$. It can be shown (see Section 2) that the (maximal) tensor product of $C^{*}$-algebras induces a closed symmetric monoidal structure on both $\mathtt{NCW}$ and $\mathtt{NSp}$, such that $\Sigma_{\mathtt{NC}}^{\infty}:\mathtt{NCW}\longrightarrow\mathtt{NSp}$ is symmetric monoidal. Our main result is a presentation of $\mathtt{NSp}$ as a category of spectral presheaves over a full spectral subcategory spanned by an explicit set of generators. In order to prove this we first prove a general result about presenting a compactly generated stable $\infty$-category as a category of spectral presheaves over a full spectral subcategory spanned by a set of compact generators. Such a result was proven by Schwede and Shipley [ScSh1] using model categories (see also [GM] for the same result under more general hypotheses). However, in this paper we need a more general result formulated in the language of $\infty$-categories. To obtain this we use the formalism of enriched $\infty$-categories developed by Hinich [Hin2, Hin3] and reviewed in Section 3. We can formulate our result as follows: ###### Theorem 1.1 (Theorem 4.1). Let $\mathcal{D}$ be a cocomplete stable $\infty$-category. Suppose that there is a small set $C$ of compact objects in $\mathcal{D}$ that generates $\mathcal{D}$ under colimits and desuspensions. Thinking of $\mathcal{D}$ as left-tensored over the $\infty$-category of spectra $\mathtt{Sp}$, we let $\mathcal{C}$ be the full $\mathtt{Sp}$-enriched subcategory of $\mathcal{D}$ spanned by $C$. Then $\mathcal{D}$ is naturally equivalent to the $\infty$-category of spectral presheaves on $\mathcal{C}$, denoted $P_{\mathtt{Sp}}(\mathcal{C})$.
There is also a monoidal version of the Theorem 1.1, given in Theorem 4.3. The classical stable infinity category of spectra $\mathtt{Sp}$ is generated by a single object, the sphere spectrum ${\mathbb{S}}$, and this is closely related to the fact that $\mathtt{Sp}$ can be identified with the category of ${\mathbb{S}}$-modules. By contrast $\mathtt{NSp}$ requires infinitely many generators. Let $M_{n}$ be the algebra of $n\times n$ matrices over $\mathbb{C}$. The algebras $\\{M_{n}\mid n=1,2,\ldots\\}$ are the finite- dimensional simple $C^{*}$-algebras. Collectively, they play the same role in $\mathtt{NCW}$ as $S^{0}$ in the usual category of CW-complexes. The suspension spectra $\\{\Sigma^{\infty}_{\mathtt{NC}}M_{n}\mid n=1,2,\ldots\\}$ are compact objects of $\mathtt{NSp}$, and they generate $\mathtt{NSp}$ under $\infty$-colimits and desuspensions. Let $\mathcal{M}$ be the full $\mathtt{Sp}$-enriched subcategory of $\mathtt{NSp}$ spanned by $\\{\Sigma^{\infty}_{\mathtt{NC}}M_{n}\mid n=1,2,\ldots\\}$. For every $n,m\geq 0$ we have $M_{n}\otimes M_{m}\simeq M_{n\times m}$, so the set $\\{M_{n}\mid n=1,2,\ldots\\}$ is closed under the tensor product. Since $\Sigma^{\infty}_{\mathtt{NC}}$ is monoidal, we see that $\\{\Sigma^{\infty}_{\mathtt{NC}}M_{n}\mid n=1,2,\ldots\\}$ is also closed under the tensor product. It follows that the $\mathtt{Sp}$-enriched category $\mathcal{M}$ acquires a symmetric monoidal structure from $\mathtt{NSp}$. This monoidal structure induces a symmetric monoidal structure on the $\infty$-category of spectral presheaves $P_{\mathtt{Sp}}(\mathcal{M})$ (Day convolution). Using the monoidal version of Theorem 1.1 we obtain ###### Theorem 1.2 (Theorem 6.2). The symmetric monoidal $\infty$-category $\mathtt{NSp}$ is naturally equivalent to the symmetric monoidal $\infty$-category $P_{\mathtt{Sp}}(\mathcal{M})$ of spectral presheaves on $\mathcal{M}$. 
Thus, understanding the spectral $\infty$-category $\mathcal{M}$ should help us understand the $\infty$-category $\mathtt{NSp}$. The objects of $\mathcal{M}$ are in one-to-one correspondence with natural numbers and the monoidal product acts as multiplication. Given natural numbers $k,l$, we denote the corresponding mapping spectrum by $\mathbb{S}^{k,l}$: $\mathbb{S}^{k,l}:=\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l}).$ One can describe $\mathbb{S}^{k,l}$ explicitly as follows. First, let us define a functor $G_{k,l}$ from finite pointed spaces to pointed spaces by the formula $G_{k,l}(X)=\operatorname{Map}_{\mathtt{NCW}^{f}}(M_{k},X\wedge M_{l}).$ Since the pointed $\infty$-category $\mathtt{NCW}^{f}$ has finite colimits, it is tensored over finite spaces and enriched over spaces. The spectrum $\mathbb{S}^{k,l}$ is the stabilization of $G_{k,l}$, i.e., $\mathbb{S}^{k,l}$ is the spectrum given by the sequence $\\{G_{k,l}(S^{0}),G_{k,l}(S^{1}),\ldots\\}$. In the companion paper [ABS2] we undertake a detailed study of the spectra $\mathbb{S}^{k,l}$ and the structure of $\mathcal{M}$. We end the paper by constructing a ``strict'' version of $\mathcal{M}$. Namely, let $\mathtt{Sp^{M}}$ be the category of continuous pointed functors from finite pointed CW-complexes to topological spaces, endowed with the stable model structure. This is a symmetric monoidal model category that models the $\infty$-category of spectra [Lyd, MMSS]. In Definition 6.3, we define a symmetric monoidal category, strictly enriched in $\mathtt{Sp^{M}}$, denoted $\mathcal{M}_{s}$. We give a direct proof of the following, which can be considered a strict version of Theorem 1.2: ###### Theorem 1.3 (Theorem 6.7).
The category of $\mathtt{Sp^{M}}$-enriched functors $\mathcal{M}_{s}^{\operatorname{op}}\to\mathtt{Sp^{M}}$ with the projective model structure and Day convolution is a symmetric monoidal model category that models the symmetric monoidal $\infty$-category $\mathtt{NSp}$. In Definition 3.6 we define the notion of enriched $\infty$-localization. This takes a category strictly enriched in a monoidal model category, and produces an $\infty$-category enriched in the $\infty$-localization of this model category (see also Remark 2.1). A consequence of Theorem 1.3 is that the enriched $\infty$-localization of $\mathcal{M}_{s}$ is equivalent to $\mathcal{M}$. ###### Remark 1.4. Using the work of Blom and Moerdijk [BM], it is possible to define a model category structure on the opposite of the pro-category of separable $C^{*}$-algebras, that models $\mathtt{NCW}$. This model structure is a right Bousfield localization of the model structure presented in [BJM]. We might then be able to use known results on stable model categories to prove a similar result to Theorem 1.3. We did not pursue this approach in this paper. ### An alternative definition, via nonabelian derived categories We will now digress to describe another natural way of defining a noncommutative analogue to the $\infty$-category of pointed spaces of Lurie. It relies even more on $\infty$-categorical constructions, and we do not develop it in this paper. Namely, we can do so using the concept of a nonabelian derived category (see [Lur1, Section 5.5.8]). If $\mathcal{C}$ is a small $\infty$-category with finite coproducts, Lurie defines the nonabelian derived category of $\mathcal{C}$, denoted $\mathcal{P}_{\Sigma}(\mathcal{C})$, as the $\infty$-category obtained from $\mathcal{C}$ by formally adjoining colimits of sifted diagrams. Loosely speaking sifted diagrams are generated by filtered diagrams and the simplicial diagram $\Delta^{\operatorname{op}}$. 
Taking $\mathcal{C}=\mathtt{Fin}_{*}$ to be the category of finite pointed sets, we obtain the $\infty$-category of pointed spaces, that is, we have a natural equivalence $\mathcal{P}_{\Sigma}(\mathtt{Fin}_{*})\simeq\operatorname{Top}$. Under the Gelfand correspondence, the finite pointed sets correspond to the finite dimensional commutative $C^{*}$-algebras. We thus denote by $\mathtt{NFin_{*}}$ the full subcategory of $\mathtt{NCW}^{f}$ spanned by the finite dimensional $C^{*}$-algebras (which are just finite products of matrix algebras). We can now define a noncommutative analogue to the $\infty$-category of pointed spaces to be the nonabelian derived category of $\mathtt{NFin_{*}}$: $\overline{\mathtt{NCW}}:=\mathcal{P}_{\Sigma}(\mathtt{NFin_{*}}).$ We have natural inclusions $\mathtt{NFin_{*}}\hookrightarrow\mathtt{NCW}^{f}\hookrightarrow\operatorname{Ind}(\mathtt{NCW}^{f})=\mathtt{NCW}$ and the $\infty$-category $\mathtt{NCW}$ admits sifted colimits, so by the universal property we have an induced functor $\overline{\mathtt{NCW}}=\mathcal{P}_{\Sigma}(\mathtt{NFin_{*}})\to\mathtt{NCW},$ that commutes with sifted colimits. We do not know if this functor is an equivalence. This is true iff for every $n\geq 0$ and every simplicial object $X$ in $\mathtt{NCW}^{f}$ the natural map ${\mathop{\operatorname{colim}}}_{\Delta^{\operatorname{op}}}\operatorname{Map}_{\mathtt{NCW}}(M_{n},X)\to\operatorname{Map}_{\mathtt{NCW}}(M_{n},{\mathop{\operatorname{colim}}}_{\Delta^{\operatorname{op}}}^{\mathtt{NCW}}X)$ is an equivalence. We know how to prove this last assertion when $X$ is a simplicial object in $\mathtt{NFin_{*}}$ or $X$ has the form $Y\otimes M_{k}$ for $k\geq 1$ and $Y$ is a simplicial object in $\mathtt{NCW}^{f}$ composed of commutative algebras. What we do know is that the induced map on stabilizations: $\mathtt{Sp}(\overline{\mathtt{NCW}})\to\mathtt{Sp}(\mathtt{NCW})=\mathtt{NSp}$ is an equivalence. 
To see this note that, by a similar reasoning as in the beginning of Section 6, we have that $\overline{M}:=\\{\Sigma^{\infty}M_{i}\mid i\in\mathbb{N}\\}$ generates $\mathtt{Sp}(\overline{\mathtt{NCW}})$ under small colimits. Thus, by Theorem 6.2, it is enough to show that for every $k,l\geq 1$ the induced map $\operatorname{Hom}_{\mathtt{Sp}(\overline{\mathtt{NCW}})}(\Sigma^{\infty}M_{k},\Sigma^{\infty}M_{l})\to\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}M_{k},\Sigma^{\infty}M_{l})$ is an equivalence. We can define the functor $\overline{G}_{k,l}$ from finite pointed spaces to pointed spaces by $\overline{G}_{k,l}(X):=\operatorname{Map}_{\overline{\mathtt{NCW}}}(M_{k},X\wedge M_{l}).$ The stabilization of $\overline{G}_{k,l}$ is the mapping spectrum $\operatorname{Map}_{\mathtt{Sp}(\overline{\mathtt{NCW}})}(\Sigma^{\infty}M_{k},\Sigma^{\infty}M_{l}).$ It is thus enough to show that the induced natural transformation $\overline{G}_{k,l}\to G_{k,l}$ is an equivalence. The functor $\overline{G}_{k,l}$ clearly commutes with sifted colimits and therefore it is equivalent to the (derived) left Kan extension of $\overline{G}_{k,l}|_{\mathtt{Fin}_{*}}$ along the inclusion $\mathtt{Fin}_{*}\subseteq\mathcal{S}_{*}$. We prove in [ABS2] that the same holds for the functor $G_{k,l}$. Therefore it is enough to show that the restriction $\overline{G}_{k,l}|_{\mathtt{Fin}_{*}}\to G_{k,l}|_{\mathtt{Fin}_{*}}$ is an equivalence. But for every $[t]\in\mathtt{Fin}_{*}$ we have $\overline{G}_{k,l}([t])=\operatorname{Map}_{\mathcal{P}_{\Sigma}(\mathtt{NFin_{*}})}(M_{k},[t]\wedge M_{l})\simeq\operatorname{Map}_{\mathtt{NFin_{*}}}(M_{k},M_{l}^{t})=$ $\operatorname{Map}_{\mathtt{NCW}^{f}}(M_{k},M_{l}^{t})\simeq\operatorname{Map}_{\operatorname{Ind}(\mathtt{NCW}^{f})}(M_{k},[t]\wedge M_{l})=G_{k,l}([t]),$ so we are done. 
Note that since the main results in this paper (and in [ABS2]) concern the stabilization of $\mathtt{NCW}$, they apply equally well to the stabilization of the alternative model $\overline{\mathtt{NCW}}$. ### Comparison with previous work We end the introduction by relating the $\infty$-categories $\mathtt{NCW}^{f}$ and $\mathtt{NCW}$ constructed here to the different $\infty$-categories constructed in [Mah1]. For more detail see Section 2. In [Mah1], Mahanta constructed the $\infty$-category $\mathtt{SC_{\infty}^{*}}$ as the topological nerve of the topological category of all separable $C^{*}$-algebras, with the mapping spaces given by the topology of pointwise norm convergence on the sets of $*$-homomorphisms. He called $(\mathtt{SC_{\infty}^{*}})^{\operatorname{op}}$ the $\infty$-category of _pointed compact metrizable noncommutative spaces_. He then defined the $\infty$-category $\mathtt{N}{\mathcal{S}}_{*}$ as the ind-completion of $(\mathtt{SC_{\infty}^{*}})^{\operatorname{op}}$, and called it the $\infty$-category of _pointed noncommutative spaces_. It can be shown that our $\infty$-category $\mathtt{NCW}^{f}$ is a full subcategory of $(\mathtt{SC_{\infty}^{*}})^{\operatorname{op}}$ and the inclusion commutes with finite colimits. It follows that our $\infty$-category $\mathtt{NCW}$ is a coreflective full subcategory of $\mathtt{N}\mathcal{S}_{*}$, and thus the inclusion admits a right adjoint: $i\colon\mathtt{NCW}\rightleftarrows\mathtt{N}\mathcal{S}_{*}\nobreak\mspace{6.0mu}{:}\nonscript\mkern-3.0mu\mathpunct{}\mspace{2.0mu}R.$ We call a morphism $g\colon X\to Y$ in $\mathtt{N}\mathcal{S}_{*}$ a _weak homotopy equivalence_ if for every $n\geq 1$ the induced map $g_{*}\colon\operatorname{Map}_{\mathtt{N}\mathcal{S}_{*}}(M_{n},X)\to\operatorname{Map}_{\mathtt{N}\mathcal{S}_{*}}(M_{n},Y)$ is an equivalence of spaces. This is analogous to weak homotopy equivalences between topological spaces. 
The counit of the adjunction above $i\circ R\to\operatorname{Id}_{\mathtt{N}\mathcal{S}_{*}}$ is a levelwise weak equivalence, and thus can be thought of as a CW approximation to elements in $\mathtt{N}\mathcal{S}_{*}$. If $X$ and $Y$ are noncommutative CW-complexes then $g$ is a weak equivalence iff it is an equivalence in $\mathtt{N}{\mathcal{S}}_{*}$. Informally speaking, since the equivalences in $(\mathtt{SC_{\infty}^{*}})^{\operatorname{op}}$ are homotopy equivalences of $C^{*}$-algebras, the category $\mathtt{N}\mathcal{S}_{*}=\operatorname{Ind}((\mathtt{SC_{\infty}^{*}})^{\operatorname{op}})$ is somewhat analogous to the infinity category modeled by the Strøm model structure on topological spaces [Str], in which the weak equivalences are the homotopy equivalences. The category $\mathtt{NCW}$ constructed here is analogous to the infinity category modeled by the Quillen model structure on topological spaces, in which the weak equivalences are the weak homotopy equivalences. ### Section by section outline of the paper In Section 2 we define the $\infty$-category $\mathtt{NCW}$: a noncommutative analogue of the $\infty$-category of pointed spaces. In Section 3 we review the theory of enriched $\infty$-categories, as developed by Hinich [Hin2, Hin3]. In particular we state the enriched Yoneda lemma for $\infty$-categories. We also present a way to pass from model categories to $\infty$-categories in the enriched setting. In Section 4 we review the notion of a stable $\infty$-category and show that a compactly generated stable $\infty$-category is equivalent to the category of spectral presheaves on a full subcategory spanned by a set of compact generators. In Section 5 we review the process of stabilizing an $\infty$-category. We use the framework established by Lurie in [Lur2, Section 1.4]. 
We also review how a similar procedure can be applied to an ordinary topologically enriched category, and compare the strict and the $\infty$-categorical versions of stabilization. In Section 6 we define the category of non-commutative CW-spectra $\mathtt{NSp}$ as the stabilization of the category $\mathtt{NCW}$. We identify the suspension spectra of matrix algebras as an explicit set of generators of $\mathtt{NSp}$. Letting $\mathcal{M}$ be the full subcategory of $\mathtt{NSp}$ spanned by matrix algebras, we conclude that $\mathtt{NSp}$ is (monoidally) equivalent to $P_{\mathtt{Sp}}(\mathcal{M})$, the category of spectral presheaves on $\mathcal{M}$. We give a strict model for $\mathcal{M}$, denoted $\mathcal{M}_{s}$, as a category enriched over a Quillen model category of spectra $\mathtt{Sp^{M}}$. We also show that the category of $\mathtt{Sp^{M}}$-enriched functors $\mathcal{M}_{s}^{\operatorname{op}}\to\mathtt{Sp^{M}}$ with the projective model structure models the $\infty$-category $\mathtt{NSp}$ and conclude that $\mathcal{M}$ is equivalent to the enriched $\infty$-localization of $\mathcal{M}_{s}$. ### Acknowledgements We are grateful to Vladimir Hinich for explaining to us his theory of enriched infinity categories and its relevance to our work. ## 2 The $\infty$-category of noncommutative CW-complexes In this section we define a noncommutative analogue of the $\infty$-category of pointed spaces defined by Lurie [Lur1]. Let $\mathtt{SC^{*}}$ (resp. $\mathtt{CSC^{*}}$) denote the category of all (resp. commutative) separable $C^{*}$-algebras and $*$-homomorphisms. Following the common convention in the field, the term $C^{*}$-algebra or $*$-homomorphism will always mean _non-unital_. 
The Gelfand correspondence implies that the functor $X\mapsto\mathrm{C}_{0}(X):\mathtt{CM_{*}}\longrightarrow\mathtt{CSC^{*}}^{\operatorname{op}}$ that assigns to every pointed compact metrizable space $X$ the commutative separable $C^{*}$-algebra of continuous complex valued functions on $X$ that vanish at the basepoint, is an equivalence of categories. It is thus natural to regard $\mathtt{SC^{*}}^{\operatorname{op}}$ as the category of _noncommutative_ pointed compact metrizable spaces. Consider $\mathtt{CM_{*}}$ as a topologically enriched category, where for every $X,Y\in\mathtt{CM_{*}}$ we endow the set of pointed continuous maps $\mathtt{CM_{*}}(X,Y)$ with the _compact open topology_. Now we take the topological nerve [Lur1, Section 1.1.5] of this topological category and obtain the $\infty$-category $(\mathtt{CM_{*}})_{\infty}$. It is well-known that $(\mathtt{CM_{*}})_{\infty}$ admits finite $\infty$-colimits and that $\infty$-pushouts can be calculated using the standard cylinder object. Let us construct the $\infty$-category of pointed spaces in a way that admits a natural generalization to the noncommutative case. We denote by $\mathtt{CW}^{f}_{*}$ the smallest full subcategory of $\mathtt{CM_{*}}$ that contains $S^{0}$ and is closed under finite homotopy-colimits using the standard cylinder object. Thus $\mathtt{CW}^{f}_{*}$ is the topological category of finite pointed CW-complexes. We will also consider $\mathtt{CW}^{f}_{*}$ as an $\infty$-category, by applying the coherent nerve functor to it. We will use the same notation $\mathtt{CW}^{f}_{*}$ to indicate both the ordinary (topologically enriched) and the $\infty$-categorical incarnation of the category, trusting that it is clear from the context which is meant. The $\infty$-category of pointed spaces can be defined as the $\infty$-categorical $\operatorname{Ind}$ construction of $\mathtt{CW}^{f}_{*}$. 
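The $\operatorname{Ind}$-construction used here (and again below for $\mathtt{NCW}$) is characterized by a universal property, which we record as a sketch; see [Lur1, Section 5.3] for the precise statement.

```latex
% Universal property of Ind (sketch; cf. [Lur1, Section 5.3]).
% For C a small infinity-category and D an infinity-category with filtered
% colimits, restriction along the embedding j : C --> Ind(C) induces an equivalence
\operatorname{Fun}^{\mathrm{filt}}\bigl(\operatorname{Ind}(\mathcal{C}),\,\mathcal{D}\bigr)
  \xrightarrow{\ \simeq\ }
\operatorname{Fun}(\mathcal{C},\mathcal{D}),
% where Fun^{filt} is spanned by the functors preserving filtered colimits.
% Thus Ind(CW^f_*) freely adjoins filtered colimits to the finite pointed
% CW-complexes, and its compact objects are the retracts of finite complexes.
```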
Note that under the Gelfand correspondence, $S^{0}$ corresponds to $\mathbb{C}$, which is the only nonzero finite dimensional simple commutative $C^{*}$-algebra. We now turn to the noncommutative analogue. We first recall from [Mah1, Section 2.1] the construction of the $\infty$-category $\mathtt{SC_{\infty}^{*}}$. Consider $\mathtt{SC^{*}}$ as a topologically enriched category, where for every $A,B\in\mathtt{SC^{*}}$ we endow the set of $*$-homomorphisms $\mathtt{SC^{*}}(A,B)$ with the topology of pointwise norm convergence. Now we take the topological nerve of this topological category and obtain the $\infty$-category $\mathtt{SC_{\infty}^{*}}$. It is shown in [Mah1, Section 2.1] that $\mathtt{SC_{\infty}^{*}}$ is (essentially) small, pointed, and finitely complete. ###### Remark 2.1. Recall that any relative category, that is a pair $(\mathcal{C},\mathcal{W})$ consisting of a category $\mathcal{C}$ and a subcategory $\mathcal{W}\subseteq\mathcal{C}$, has a canonically associated $\infty$-category $\mathcal{C}_{\infty}$, obtained by formally inverting the morphisms in $\mathcal{W}$, in the $\infty$-categorical sense. There is also a canonical localization functor $\mathcal{C}\to\mathcal{C}_{\infty}$ satisfying a universal property. We refer the reader to [Hin1] for a thorough account, and also to the discussion in [BHH, Section 2.2]. We refer to $\mathcal{C}_{\infty}$ as the $\infty$-localization of $\mathcal{C}$ (with respect to $\mathcal{W}$). If $\mathcal{C}$ is a model category or a (co)fibration category, we always take $\mathcal{W}$ to be the set of weak equivalences in $\mathcal{C}$. Using $\infty$-localization, there is another natural way of considering separable $C^{*}$-algebras as an $\infty$-category. There is a well known notion of homotopy equivalence between $C^{*}$-algebras. We can consider $\mathtt{SC^{*}}$ as a relative category, with the weak equivalences given by the homotopy equivalences, and take its $\infty$-localization. 
This is the point of view taken, for instance, in [AG, Uuy]. It follows from [BJM, Proposition 3.17] that we obtain an $\infty$-category equivalent to $\mathtt{SC_{\infty}^{*}}$. ###### Remark 2.2. It is well-known that $\mathtt{SC^{*}}$ is cotensored over the category of pointed finite CW-complexes [AG]. If $K$ is a finite pointed CW-complex and $A\in\mathtt{SC^{*}}$ then the cotensoring of $A$ and $K$ is given by the $C^{*}$-algebra of pointed continuous functions from $K$ to $A$. One can define finite homotopy limits in $\mathtt{SC^{*}}$ using this cotensoring. Consequently, the $\infty$-pullbacks in $\mathtt{SC_{\infty}^{*}}$ can be calculated as homotopy pullbacks using the standard path object [Mah1, Proposition 2.7]. ###### Definition 2.3. We denote by $\mathtt{NCW}^{f}$ the opposite of the smallest full subcategory of $\mathtt{SC^{*}}$ that contains the nonzero finite dimensional simple algebras in $\mathtt{SC^{*}}$ (which are just the matrix algebras over $\mathbb{C}$) and is closed under finite homotopy-limits using the standard path object. We call $\mathtt{NCW}^{f}$ the category of _finite pointed noncommutative CW-complexes_. The category $\mathtt{NCW}^{f}$ is an ``ordinary'' topologically enriched category. We will also consider $\mathtt{NCW}^{f}$ as an $\infty$-category, by applying the coherent nerve functor to it. As in the commutative case, we will use the same notation $\mathtt{NCW}^{f}$ to indicate both the ordinary (topologically enriched) and the $\infty$-categorical incarnation of the category, trusting that it is clear from the context which is meant. The topological category $\mathtt{NCW}^{f}$ is the category of _noncommutative CW-complexes_ as defined in [ELP, Section 2.4]. 
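To make the role of the standard path object more concrete, here is the familiar mapping-path description of a homotopy pullback of $C^{*}$-algebras built from $C([0,1],C)$; this sketch is ours and is not quoted from [Mah1].

```latex
% Homotopy pullback of a cospan f : A --> C <-- B : g of separable C^*-algebras,
% formed using the standard path object C([0,1], C):
A \times^{h}_{C} B \;\cong\;
\bigl\{\, (a,\gamma,b) \in A \oplus C([0,1],C) \oplus B \ \bigm|\
  \gamma(0) = f(a),\ \ \gamma(1) = g(b) \,\bigr\}.
% This is again a separable C^*-algebra; iterating such constructions starting
% from the matrix algebras M_n produces the objects of (NCW^f)^{op}.
```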
Using [Ped, Theorem 11.14], the same proof as in [Mah2, Proposition 1.1] gives that the (maximal) tensor product of $C^{*}$-algebras induces a symmetric monoidal structure on the $\infty$-category $\mathtt{NCW}^{f}$, that preserves finite colimits in each variable separately. Note also that since the topological category $\mathtt{SC^{*}}$ is cotensored over pointed finite CW-complexes, the topological category $\mathtt{NCW}^{f}$ is tensored over pointed finite CW-complexes. We will denote the tensoring of a finite CW-complex $K$ and a noncommutative finite complex $X$ by $K\wedge X$. We now define the $\infty$-category of _noncommutative pointed CW-complexes_ to be the $\infty$-categorical ind-completion of $\mathtt{NCW}^{f}$, $\mathtt{NCW}:=\operatorname{Ind}(\mathtt{NCW}^{f}).$ The $\infty$-category $\mathtt{NCW}^{f}$ is (essentially) small, pointed and finitely cocomplete so $\mathtt{NCW}$ is a compactly generated pointed $\infty$-category. By [Lur2, Corollary 4.8.1.14] there is an induced closed symmetric monoidal structure on the $\infty$-category $\mathtt{NCW}\simeq\operatorname{Ind}(\mathtt{NCW}^{f})$ such that the natural embedding $j\colon\mathtt{NCW}^{f}\longrightarrow\mathtt{NCW}$ is symmetric monoidal. In [Mah1], Mahanta defined the $\infty$-category $\mathtt{N}\mathcal{S}_{*}$ as the ind-completion of $(\mathtt{SC_{\infty}^{*}})^{\operatorname{op}}$, and called it the $\infty$-category of _pointed noncommutative spaces_. By definition our category $\mathtt{NCW}^{f}$ is a full subcategory of $(\mathtt{SC_{\infty}^{*}})^{\operatorname{op}}$. Since $\infty$-pushouts in both $\mathtt{NCW}^{f}$ and $(\mathtt{SC_{\infty}^{*}})^{\operatorname{op}}$ can be calculated as homotopy pushouts using the standard cylinder object, we see that the inclusion $\mathtt{NCW}^{f}\hookrightarrow(\mathtt{SC_{\infty}^{*}})^{\operatorname{op}}$ commutes with finite colimits. 
Passing to ind-completions we get that the induced inclusion $\mathtt{NCW}\hookrightarrow\mathtt{N}\mathcal{S}_{*}$ admits a right adjoint (or in other words, $\mathtt{NCW}$ is a coreflective full subcategory of $\mathtt{N}\mathcal{S}_{*}$) $i\colon\mathtt{NCW}\rightleftarrows\mathtt{N}\mathcal{S}_{*}\nobreak\mspace{6.0mu}{:}\nonscript\mkern-3.0mu\mathpunct{}\mspace{2.0mu}R.$ We call a morphism $g\colon X\to Y$ in $\mathtt{N}\mathcal{S}_{*}$ a _weak homotopy equivalence_ if $R(g)$ is an equivalence in $\mathtt{NCW}$, or equivalently, if for every object $W$ in $\mathtt{NCW}$ the induced map $g_{*}\colon\operatorname{Map}_{\mathtt{N}\mathcal{S}_{*}}(i(W),X)\to\operatorname{Map}_{\mathtt{N}\mathcal{S}_{*}}(i(W),Y)$ is an equivalence in $\mathcal{S}$. Since every object in $\mathtt{NCW}$ is a small colimit of matrix algebras and $i$ commutes with small colimits, we see that $g$ is a weak equivalence iff for every $n\geq 1$ the induced map $g_{*}\colon\operatorname{Map}_{\mathtt{N}\mathcal{S}_{*}}(M_{n},X)\to\operatorname{Map}_{\mathtt{N}\mathcal{S}_{*}}(M_{n},Y)$ is an equivalence in $\mathcal{S}$. This is analogous to weak homotopy equivalences between topological spaces. The counit of the adjunction above $i\circ R\to\operatorname{Id}_{\mathtt{N}\mathcal{S}_{*}}$ is a levelwise weak equivalence, and thus can be thought of as a CW approximation to elements in $\mathtt{N}\mathcal{S}_{*}$. If $X$ and $Y$ are noncommutative CW-complexes then $g$ is a weak equivalence iff it is an equivalence in $\mathtt{N}\mathcal{S}_{*}$. Since the weak equivalences in $(\mathtt{SC_{\infty}^{*}})^{\operatorname{op}}$ are the homotopy equivalences, the category $\mathtt{N}\mathcal{S}_{*}$ is somewhat analogous to the infinity category modeled by the Strøm model category on topological spaces [Str]. ## 3 Enriched infinity categories In Theorem 1.1 we make use of enriched infinity categories. 
There are a few approaches to this theory (see, for example, [GH, Lur2]), but so far the Yoneda embedding has been defined and its basic properties established only in [Hin2]. Since we need these results, we chose to follow Hinich's approach in this paper. In this section we give an overview of the basic definitions and constructions needed later on. We also present some new material in subsection 3.0.1, concerning the connection between model categories and $\infty$-categories in the enriched setting. Let $\mathtt{Cat}$ denote the $\infty$-category of $\infty$-categories, and let $\mathtt{Cat^{L}}$ denote the $\infty$-subcategory of $\mathtt{Cat}$ whose objects are $\infty$-categories having small colimits and whose morphisms preserve these colimits. The category $\mathtt{Cat}$ is symmetric monoidal under the cartesian product, while $\mathtt{Cat^{L}}$ has a symmetric monoidal structure induced from the cartesian structure on $\mathtt{Cat}$ (see [Lur2, 4.8.1.3, 4.8.1.4]). With this structure on $\mathtt{Cat^{L}}$, $\operatorname{Map}_{\mathtt{Cat^{L}}}(P\otimes L,M)$ is the subspace of $\operatorname{Map}_{\mathtt{Cat}}(P\times L,M)$ consisting of functors preserving small colimits along each argument. Note that a monoidal $\infty$-category is equivalent to an associative algebra object in $\mathtt{Cat}$, while an associative algebra in $\mathtt{Cat^{L}}$ is equivalent to a monoidal $\infty$-category with colimits, whose monoidal product commutes with colimits in each variable. We define a _closed monoidal $\infty$-category_ to be an associative algebra in $\mathtt{Cat^{L}}$. If $\mathcal{M}$ is a closed monoidal $\infty$-category, then a category left-tensored over $\mathcal{M}$ is by definition a left module over $\mathcal{M}$ in $\mathtt{Cat^{L}}$. More generally, if $\mathcal{O}$ is an $\infty$-operad, an $\mathcal{O}$-monoidal category is an algebra over $\mathcal{O}$ in $\mathtt{Cat^{L}}$. 
If $\mathcal{M}$ is an $\mathcal{O}$-monoidal category one can define an $\mathcal{O}$-algebra in $\mathcal{M}$. Let $\mathtt{Ass}$ be the associative operad and $\mathtt{LM}$ be the two colored operad of left modules. Algebras over $\mathtt{Ass}$ are associative algebras and algebras over $\mathtt{LM}$ consist of an associative algebra and a left module over it. Thus, an $\mathtt{Ass}$-monoidal category is just a closed monoidal $\infty$-category and an $\mathtt{LM}$-monoidal category is a pair consisting of a closed monoidal $\infty$-category and a category left-tensored over it. Let $\mathcal{M}$ be a closed monoidal $\infty$-category. For every space (i.e., an $\infty$-groupoid) $X$, Hinich constructs (see [Hin2, Sections 3 and 4]) a closed monoidal structure on the $\infty$-category of quivers $\operatorname{Quiv}_{X}(\mathcal{M}):=\operatorname{Fun}(X^{\operatorname{op}}\times X,\mathcal{M}).$ Hinich's monoidal structure is an $\infty$-categorical version of the usual convolution product that one uses to define ordinary enriched categories. For $\mathcal{B}$ a category left-tensored over $\mathcal{M}$, Hinich constructs a left action of the closed monoidal $\infty$-category $\operatorname{Quiv}_{X}(\mathcal{M})$ on the $\infty$-category $\operatorname{Fun}(X,\mathcal{B})$. In his notation we obtain an $\mathtt{LM}$-monoidal category $\operatorname{Quiv}^{\mathtt{LM}}_{X}(\mathcal{M},\mathcal{B}):=(\operatorname{Quiv}_{X}(\mathcal{M}),\operatorname{Fun}(X,\mathcal{B})).$ ###### Definition 3.1. An $\mathcal{M}$-enriched category with space of objects $X$ is an associative algebra in $\operatorname{Quiv}_{X}(\mathcal{M})$. ###### Remark 3.2. Hinich uses the term $\mathcal{M}$-enriched precategory for an associative algebra in $\operatorname{Quiv}_{X}(\mathcal{M})$. He reserves the term $\mathcal{M}$-enriched category for precategories satisfying a version of the Segal completeness condition (see [Hin2, Definition 7.1.1]). 
In this paper we are not concerned with Segal completeness, so we will not distinguish between enriched categories and precategories. We will just say ``enriched category'' where Hinich might have said ``enriched precategory''. ###### Remark 3.3. If $\mathcal{M}$ is the $\infty$-category of spaces, then the category of $\mathcal{M}$-enriched categories with space of objects $X$ is equivalent to the category of simplicial spaces satisfying the Segal condition and equalling $X$ in simplicial degree zero. See [Hin2, Corollary 5.6.1], where a more general statement is proved. In other words, a category enriched in spaces is the same thing as an ordinary $\infty$-category. For a general closed monoidal $\infty$-category $\mathcal{M}$, there is a monoidal ``forgetful'' functor from $\mathcal{M}$ to spaces, given by $\operatorname{Map}_{\mathcal{M}}(\mathtt{1},-)$. In this way we obtain a forgetful functor from the category of $\mathcal{M}$-enriched categories to ordinary infinity categories (compare with [Hin2, Definition 7.1.1]). ###### Remark 3.4. As we show in subsection 3.0.1 below, the theory of monoidal model categories and categories enriched or tensored over them extends nicely to the theory presented above upon application of $\infty$-localization. In [Hin2, Section 6] Hinich defines the notion of an $\mathcal{M}$-functor from an $\mathcal{M}$-enriched category to a category left-tensored over $\mathcal{M}$. Let $\mathcal{A}$ be an $\mathcal{M}$-enriched category with space of objects $X$ and $\mathcal{B}$ a category left-tensored over $\mathcal{M}$. Then $\mathcal{A}$ is an associative algebra in $\operatorname{Quiv}_{X}(\mathcal{M})$ and $\operatorname{Fun}(X,\mathcal{B})$ is left tensored over $\operatorname{Quiv}_{X}(\mathcal{M})$. 
An $\mathcal{M}$-functor $\mathcal{A}\to\mathcal{B}$ is defined to be a left module over $\mathcal{A}$ in $\operatorname{Fun}(X,\mathcal{B})$, and the $\infty$-category of $\mathcal{M}$-functors $\mathcal{A}\to\mathcal{B}$ is defined to be $\operatorname{Fun}_{\mathcal{M}}(\mathcal{A},\mathcal{B}):=\mathtt{LMod}_{\mathcal{A}}(\operatorname{Fun}(X,\mathcal{B})).$ For $x,y$ objects of $\mathcal{B}$, we define the presheaf $\operatorname{Hom}_{\mathcal{B}}(x,y)\in P(\mathcal{M})$ by $\operatorname{Hom}_{\mathcal{B}}(x,y)(K):=\operatorname{Map}_{\mathcal{B}}(K\otimes x,y).$ Clearly $\operatorname{Hom}_{\mathcal{B}}(x,y):\mathcal{M}^{\operatorname{op}}\to\mathcal{S}$ preserves limits, but it is not necessarily representable. If it happens to be representable, then the representing object serves as an internal mapping object from $x$ to $y$. Every $\mathcal{M}$-functor $F:\mathcal{A}\to\mathcal{B}$ induces maps in $P(\mathcal{M})$ $h_{\mathcal{A}(x,y)}\to\operatorname{Hom}_{\mathcal{B}}(F(x),F(y))$ for $x,y\in X$. The $\mathcal{M}$-functor $F$ is called $\mathcal{M}$-fully faithful if all these maps are equivalences. If $\operatorname{Hom}_{\mathcal{B}}(b,c):\mathcal{M}^{\operatorname{op}}\to\mathcal{S}$ is representable for all objects $b,c$ of $\mathcal{B}$, then $\mathcal{B}$ is enriched as well as left tensored. More generally and more precisely, Hinich proves the following proposition (see [Hin2, Proposition 6.3.1 and Corollary 6.3.4]). We note that if $\mathcal{M}$ is presentable, then any functor $\mathcal{M}^{\operatorname{op}}\to\mathcal{S}$ that preserves limits is representable (see [Lur1, Proposition 5.5.2.2]). ###### Proposition 3.5. Let $\mathcal{M}$ be a closed monoidal $\infty$-category and $\mathcal{B}$ left tensored over $\mathcal{M}$. Let $C$ be a class of objects of $\mathcal{B}$. 
If for all $x,y\in C$, the presheaf $\operatorname{Hom}_{\mathcal{B}}(x,y)$ is representable then there exists an $\mathcal{M}$-enriched category $\mathcal{C}$ whose class of objects is $C$, such that for any two objects $x,y$ of $C$, the morphism object $\mathcal{C}(x,y)$ is a representing object for the functor $\operatorname{Hom}_{\mathcal{B}}(x,y)$. There is a fully-faithful $\mathcal{M}$-enriched functor $\mathcal{C}\to\mathcal{B}$, extending the inclusion of $C$ into $\mathcal{B}$. Using [Hin2, Lemma 6.3.3] it is not hard to see that given the conditions of Proposition 3.5 all the categories $\mathcal{C}$ that can be obtained are canonically equivalent (via the choice of $X$ as the full subspace of $\mathcal{B}$ spanned by $C$ in [Hin2, Corollary 6.3.4]). We can thus call $\mathcal{C}$ the full enriched subcategory of $\mathcal{B}$ spanned by $C$. Taking $C$ to be the class of all objects in $\mathcal{B}$ we see that if $\operatorname{Hom}_{\mathcal{B}}(x,y)$ is representable for all $x,y$, then $\mathcal{B}$ is enriched as well as tensored over $\mathcal{M}$. ### 3.0.1 From enriched model categories to enriched infinity categories Let $\mathtt{Cat}_{1}$ denote the category of small categories and functors between them and let $\mathcal{S}_{\mathtt{M}}$ denote the category of simplicial sets. We have the usual nerve functor $\mathtt{N}:\mathtt{Cat}_{1}\to\mathcal{S}_{\mathtt{M}}.$ The functor $\mathtt{N}$ is limit preserving and in particular it is a (cartesian) monoidal functor. In [Hor, Section 2.1], Horel constructs a (large, coloured) $\mathtt{Cat}_{1}$-operad denoted $\mathtt{ModCat}$. The colours in $\mathtt{ModCat}$ are model categories, while the category of multilinear operations $\operatorname{Map}_{\mathtt{ModCat}}(\mathcal{M}_{1},\cdots,\mathcal{M}_{n},\mathcal{N})$ is the category of left Quillen $n$-functors $\mathcal{M}_{1}\times\dots\times\mathcal{M}_{n}\to\mathcal{N}$ and natural weak equivalences (on cofibrant objects) between them. 
Since $\mathtt{N}$ is a monoidal functor, we obtain a simplicial operad from $\mathtt{ModCat}$ by composing with $\mathtt{N}$. We denote this simplicial operad also by $\mathtt{ModCat}$. Let $\mathcal{S}_{\mathtt{M}}^{\Delta^{\operatorname{op}}}$ denote the category of simplicial objects in $\mathcal{S}_{\mathtt{M}}$ with Rezk's model structure. This is a combinatorial simplicial cartesian closed symmetric monoidal model category with all objects cofibrant. Let $\mathtt{CSS}$ denote the full simplicial subcategory of $\mathcal{S}_{\mathtt{M}}^{\Delta^{\operatorname{op}}}$ spanned by the fibrant objects. Then $\mathtt{CSS}$ is a monoidal simplicial category (under the cartesian product) whose simplicial nerve is naturally equivalent to the monoidal $\infty$-category $\mathtt{Cat}$. Horel also constructs in [loc. cit.] another full simplicial subcategory $\mathtt{Cat}_{\infty}\subseteq\mathcal{S}_{\mathtt{M}}^{\Delta^{\operatorname{op}}}$ containing $\mathtt{CSS}$ and closed under the cartesian product. He shows that the inclusion $\mathtt{CSS}\to\mathtt{Cat}_{\infty}$ induces an equivalence of $\infty$-categories after application of the simplicial nerve. Let $\mathtt{ModCat}^{c}$ be the full sub simplicial operad of $\mathtt{ModCat}$ on model categories that are Quillen equivalent to a combinatorial model category. For $\mathcal{M}\in\mathtt{ModCat}^{c}$, let $\mathtt{N}_{\mathcal{R}}(\mathcal{M})$ denote the Rezk nerve construction on the cofibrant objects in $\mathcal{M}$ and weak equivalences between them. By [Hor, Theorem 2.16], $\mathtt{N}_{\mathcal{R}}$ extends to a map of simplicial operads $\mathtt{ModCat}^{c}\to\mathtt{Cat}_{\infty}$. Applying the simplicial nerve, we obtain a map of $\infty$-operads $\mathtt{N}_{\mathcal{S}}(\mathtt{ModCat}^{c})\to\mathtt{Cat}$. By [Hor, Remark 2.17], this map factors through $\mathtt{Cat^{L}}$. 
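For orientation, we recall (informally, and from memory; see Rezk's original construction for the precise details) the shape of the classification diagram underlying the Rezk nerve $\mathtt{N}_{\mathcal{R}}$.

```latex
% Rezk's classification diagram of a relative category (C, W): the simplicial
% space whose n-th level is the nerve of the category of n-chains in C and
% levelwise weak equivalences between them,
N_{\mathcal{R}}(\mathcal{C},\mathcal{W})_{n}
  \;=\; N\Bigl(\operatorname{we}\bigl(\mathcal{C}^{[n]}\bigr)\Bigr),
  \qquad \mathcal{C}^{[n]} := \operatorname{Fun}([n],\mathcal{C}),
% where we(C^{[n]}) is the subcategory of natural transformations that are
% levelwise weak equivalences. For a model category, a Reedy fibrant
% replacement of this simplicial space is a complete Segal space.
```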
Since Rezk's nerve is one of the models for $\infty$-localization (see, for example, [BHH, Section 2.2]), we obtain a map of $\infty$-operads $(-)_{\infty}:\mathtt{N}_{\mathcal{S}}(\mathtt{ModCat}^{c})\to\mathtt{Cat^{L}}$ which acts as $\infty$-localization on objects. Let $\mathtt{M}$ be the nonsymmetric operad (in $\operatorname{Set}$) freely generated by one operation in degree $0$ and one in degree $2$. An algebra over $\mathtt{M}$ in $\operatorname{Set}$ is a set with a binary multiplication and a base point. Let $\mathtt{P}$ be the operad in $\mathtt{Cat}_{1}$ which is given in degree $n$ by the groupoid whose objects are the points of $\mathtt{M}(n)$, with a unique morphism between any two objects. Then an algebra over $\mathtt{P}$ in $\mathtt{Cat}_{1}$ is a monoidal category. The nerve of $\mathtt{P}$ is a simplicial operad which we also denote by $\mathtt{P}$. Clearly, we have an equivalence of $\infty$-operads $\mathtt{N}_{\mathcal{S}}\mathtt{P}\simeq\mathtt{Ass}$. Let $\mathcal{M}$ be a monoidal model category, Quillen equivalent to a combinatorial model category. Then $\mathcal{M}$ is an algebra over $\mathtt{P}$ in $\mathtt{ModCat}^{c}$ (as operads in $\mathtt{Cat}_{1}$ and thus in $\mathcal{S}_{\mathtt{M}}$). Applying the simplicial nerve we get that $\mathcal{M}$ is an algebra over $\mathtt{N}_{\mathcal{S}}\mathtt{P}\simeq\mathtt{Ass}$ in $\mathtt{N}_{\mathcal{S}}(\mathtt{ModCat}^{c})$. It follows that $\mathcal{M}_{\infty}$ is an algebra over $\mathtt{Ass}$ in $\mathtt{Cat^{L}}$, so $\mathcal{M}_{\infty}$ is a presentable closed monoidal $\infty$-category. Furthermore, the localization functor $\mathcal{M}\to\mathcal{M}_{\infty}$ is lax monoidal. Now let $\mathcal{C}$ be a model category, Quillen equivalent to a combinatorial one. Suppose that $\mathcal{C}$ is an $\mathcal{M}$-model category, in the sense that $\mathcal{C}$ is closed tensored over $\mathcal{M}$ and satisfies the Quillen SM7 axiom. 
As above, we can construct a simplicial operad $\mathtt{Q}$ whose simplicial nerve is equivalent to $\mathtt{LM}$ and such that $(\mathcal{M},\mathcal{C})$ is an algebra over $\mathtt{Q}$ in $\mathtt{ModCat}^{c}$. Applying the simplicial nerve we get that $(\mathcal{M},\mathcal{C})$ is an algebra over $\mathtt{N}_{\mathcal{S}}\mathtt{Q}\simeq\mathtt{LM}$ in $\mathtt{N}_{\mathcal{S}}(\mathtt{ModCat}^{c})$. It follows that $(\mathcal{M}_{\infty},\mathcal{C}_{\infty})$ is an algebra over $\mathtt{LM}$ in $\mathtt{Cat^{L}}$, so $\mathcal{C}_{\infty}$ is a presentable $\infty$-category left tensored over $\mathcal{M}_{\infty}$. Furthermore, the localization functor $(\mathcal{M},\mathcal{C})\to(\mathcal{M}_{\infty},\mathcal{C}_{\infty})$ is $\mathtt{LM}$-lax monoidal. Let $(-)^{f}$ and $(-)^{c}$ denote fibrant and cofibrant replacement functors in a model category. The model category $\mathcal{C}$ is an $\mathcal{M}$-model category so for every $A\in\mathcal{C}$ we have a Quillen pair $(-)\otimes A^{c}:\mathcal{M}\rightleftarrows\mathcal{C}:\operatorname{Hom}_{\mathcal{C}}(A^{c},-).$ By [Maz] we have an induced adjunction of $\infty$-categories $\mathbb{L}(-)\otimes A^{c}:\mathcal{M}_{\infty}\rightleftarrows\mathcal{C}_{\infty}:\mathbb{R}\operatorname{Hom}_{\mathcal{C}}(A^{c},-).$ Thus we have equivalences natural in $A,B\in\mathcal{C}$: $\operatorname{Map}_{\mathcal{M}_{\infty}}(K,\mathbb{R}\operatorname{Hom}_{\mathcal{C}}(A^{c},B))\simeq\operatorname{Map}_{\mathcal{C}_{\infty}}(\mathbb{L}K\otimes A^{c},B)$ Clearly $\mathbb{L}(-)\otimes A^{c}:\mathcal{M}_{\infty}\to\mathcal{C}_{\infty}$ represents the tensor product $(-)\otimes A$ of $\mathcal{C}_{\infty}$ as tensored over $\mathcal{M}_{\infty}$ so we have natural equivalences $\operatorname{Map}_{\mathcal{M}_{\infty}}(K,\operatorname{Hom}_{\mathcal{C}}(A^{c},B^{f}))\simeq\operatorname{Map}_{\mathcal{C}_{\infty}}(K\otimes A,B).$ We see that $\operatorname{Hom}_{\mathcal{C}}(A^{c},B^{f})\in\mathcal{M}_{\infty}$ is a 
representing object for $\operatorname{Hom}_{\mathcal{C}_{\infty}}(A,B)\in P(\mathcal{M}_{\infty})$ and thus we have $\operatorname{Hom}_{\mathcal{C}}(A^{c},B^{f})\simeq\operatorname{Hom}_{\mathcal{C}_{\infty}}(A,B).$ ###### Definition 3.6. Let $\mathcal{A}$ be a strictly enriched category over $\mathcal{M}$. Let $X$ denote the discrete space on the set of objects of $\mathcal{A}$. Then $\mathcal{A}$ is an algebra over $\mathtt{Ass}$ in the monoidal category $\operatorname{Quiv}_{X}(\mathcal{M})$ (see [Hin4]). The localization functor $\mathcal{M}\to\mathcal{M}_{\infty}$ is lax monoidal, so the induced functor $\operatorname{Quiv}_{X}(\mathcal{M})\to\operatorname{Quiv}_{X}(\mathcal{M}_{\infty})$ is also lax monoidal. We define the _enriched $\infty$-localization_ functor to be composition with this last functor: $(-)_{\infty}:\operatorname{Alg}_{\mathtt{Ass}}(\operatorname{Quiv}_{X}(\mathcal{M}))\to\operatorname{Alg}_{\mathtt{Ass}}(\operatorname{Quiv}_{X}(\mathcal{M}_{\infty})).$ Thus, $\mathcal{A}_{\infty}$ is an enriched $\infty$-category over $\mathcal{M}_{\infty}$. Let $F:\mathcal{A}\to\mathcal{C}$ be a strict $\mathcal{M}$-functor. We have an $\mathtt{LM}$-monoidal category $\operatorname{Quiv}^{\mathtt{LM}}_{X}(\mathcal{M},\mathcal{C}):=(\operatorname{Quiv}_{X}(\mathcal{M}),\operatorname{Fun}(X,\mathcal{C}))$ and $F$ is just a module in $\operatorname{Fun}(X,\mathcal{C})$ over $\mathcal{A}$ (see [Hin4]). In this case $(\mathcal{A},F)$ is an algebra over $\mathtt{LM}$ in the $\mathtt{LM}$-monoidal category $\operatorname{Quiv}^{\mathtt{LM}}_{X}(\mathcal{M},\mathcal{C})$. The localization functor $(\mathcal{M},\mathcal{C})\to(\mathcal{M}_{\infty},\mathcal{C}_{\infty})$ is $\mathtt{LM}$-lax monoidal, so the induced functor $\operatorname{Quiv}^{\mathtt{LM}}_{X}(\mathcal{M},\mathcal{C})\to\operatorname{Quiv}^{\mathtt{LM}}_{X}(\mathcal{M}_{\infty},\mathcal{C}_{\infty})$ is also $\mathtt{LM}$-lax monoidal. 
It follows that we obtain a functor that we denote $(-)_{\infty}:\operatorname{Alg}_{\mathtt{LM}}(\operatorname{Quiv}^{\mathtt{LM}}_{X}(\mathcal{M},\mathcal{C}))\to\operatorname{Alg}_{\mathtt{LM}}(\operatorname{Quiv}^{\mathtt{LM}}_{X}(\mathcal{M}_{\infty},\mathcal{C}_{\infty})).$ Clearly this functor lifts the functor above, so we have $(\mathcal{A},F)_{\infty}=(\mathcal{A}_{\infty},F_{\infty}).$ Thus, $F_{\infty}$ is an $\mathcal{M}_{\infty}$-functor from $\mathcal{A}_{\infty}$ to $\mathcal{C}_{\infty}$ and we call it the _enriched $\infty$-localization_ of $F$. We call $F$ _homotopy fully faithful_ if for every $x,y\in X$ the composition $\mathcal{A}(x,y)\xrightarrow{F}\operatorname{Hom}_{\mathcal{C}}(F(x),F(y))\to\operatorname{Hom}_{\mathcal{C}}(F(x)^{c},F(y)^{f})$ is an equivalence in the model category $\mathcal{M}$. ###### Theorem 3.7. Let $\mathcal{A}$ be a strictly enriched category over $\mathcal{M}$ and let $F:\mathcal{A}\to\mathcal{C}$ be a strict $\mathcal{M}$-functor which is homotopy fully faithful. Then the $\mathcal{M}_{\infty}$-functor $F_{\infty}:\mathcal{A}_{\infty}\to\mathcal{C}_{\infty}$ is $\mathcal{M}_{\infty}$-fully faithful (in the sense described after Remark 3.4). ###### Proof. Let $L:\mathcal{M}\to\mathcal{M}_{\infty}$ denote the localization functor. 
Then for every $x,y\in X$ we have a commutative square in $\mathcal{M}_{\infty}$ $\begin{array}{ccc}L(\mathcal{A}(x,y))&\xrightarrow{\;LF\;}&L(\operatorname{Hom}_{\mathcal{C}}(F(x),F(y)))\\ \downarrow&&\downarrow\\ \mathcal{A}_{\infty}(x,y)&\xrightarrow{\;F_{\infty}\;}&\operatorname{Hom}_{\mathcal{C}_{\infty}}(F_{\infty}(x),F_{\infty}(y)).\end{array}$ The left map is an equivalence by definition of $\mathcal{A}_{\infty}$ and the right map is equivalent to $L$ applied to the map $\operatorname{Hom}_{\mathcal{C}}(F(x),F(y))\to\operatorname{Hom}_{\mathcal{C}}(F(x)^{c},F(y)^{f})$. Since $F$ is homotopy fully faithful, the diagonal composite (the top map followed by the right map) is an equivalence, and since the left map is an equivalence, so is the bottom map. ∎ ###### Corollary 3.8. Let $\mathcal{A}$ be a strictly enriched category over $\mathcal{M}$ and let $F:\mathcal{A}\to\mathcal{C}$ be a strict fully faithful $\mathcal{M}$-functor that lands in the fibrant cofibrant objects. Then the $\mathcal{M}_{\infty}$-functor $F_{\infty}:\mathcal{A}_{\infty}\to\mathcal{C}_{\infty}$ is $\mathcal{M}_{\infty}$-fully faithful. ### Enriched Yoneda Lemma Hinich formulates and proves a version of the enriched Yoneda lemma, which is of key importance to us. We will review this part of Hinich's work next. Let $\mathcal{M}$ be a closed monoidal $\infty$-category and let $\mathcal{A}$ be an $\mathcal{M}$-enriched category with space of objects $X$. 
Hinich defines the opposite category $\mathcal{A}^{\operatorname{op}}$, which is an $\mathcal{M}^{rev}$-enriched category with space of objects $X^{\operatorname{op}}$, and constructs a structure of a category left-tensored over $\mathcal{M}$ on the $\infty$-category of $\mathcal{M}$-presheaves $P_{\mathcal{M}}(\mathcal{A}):=\operatorname{Fun}_{\mathcal{M}^{rev}}(\mathcal{A}^{\operatorname{op}},\mathcal{M}).$ Here, $\mathcal{M}$ is considered as a right $\mathcal{M}$-module, which is the same as a left $\mathcal{M}^{rev}$-module. ###### Remark 3.9. In the case of interest to us, $\mathcal{M}$ is the category of spectra, which is a symmetric monoidal category. This means that there is a canonical equivalence of monoidal categories $\mathcal{M}\simeq\mathcal{M}^{rev}$. Hinich also constructs an $\mathcal{M}$-fully faithful functor called the enriched Yoneda embedding $Y:\mathcal{A}\to P_{\mathcal{M}}(\mathcal{A}).$ In [Hin3] it is shown that this construction has the following universal property: If $\mathcal{B}$ is any category left-tensored over $\mathcal{M}$ then precomposition with $Y$ induces an equivalence $\operatorname{Map}_{\mathtt{LMod}_{\mathcal{M}}}(P_{\mathcal{M}}(\mathcal{A}),\mathcal{B})\simeq\operatorname{Map}_{\mathcal{M}}(\mathcal{A},\mathcal{B}).$ In [Hin3], all the above is done more generally relative to an $\infty$-operad $\mathcal{O}$. Taking $\mathcal{O}=\mathtt{Com}$ to be the terminal $\infty$-operad and noting that $\mathtt{Com}\otimes\mathtt{Ass}\simeq\mathtt{Com}$, we obtain the following. Suppose $\mathcal{M}$ is a closed symmetric monoidal $\infty$-category. Then the category of $\mathcal{M}$-left-tensored categories is symmetric monoidal and we define a symmetric monoidal $\mathcal{M}$-left-tensored category to be a commutative algebra in the category of $\mathcal{M}$-left-tensored categories. 
Similarly, the category of $\mathcal{M}$-enriched categories is symmetric monoidal and we define a symmetric monoidal $\mathcal{M}$-enriched category to be a commutative algebra in the category of $\mathcal{M}$-enriched categories. Moreover, one can define the notion of a symmetric monoidal $\mathcal{M}$-functor from a symmetric monoidal $\mathcal{M}$-enriched category to a symmetric monoidal $\mathcal{M}$-left-tensored category. If $\mathcal{A}$ is a symmetric monoidal $\mathcal{M}$-enriched category, the category of presheaves $P_{\mathcal{M}}(\mathcal{A})$ acquires a canonical symmetric monoidal $\mathcal{M}$-left-tensored structure (Day convolution), and the Yoneda embedding $Y:\mathcal{A}\to P_{\mathcal{M}}(\mathcal{A})$ acquires a structure of a symmetric monoidal $\mathcal{M}$-functor. Moreover, this construction has the following universal property: If $\mathcal{B}$ is any symmetric monoidal $\mathcal{M}$-left-tensored category then precomposition with $Y$ induces an equivalence $\operatorname{Map}^{\mathtt{Com}}_{\mathtt{LMod}_{\mathcal{M}}}(P_{\mathcal{M}}(\mathcal{A}),\mathcal{B})\simeq\operatorname{Map}^{\mathtt{Com}}_{\mathcal{M}}(\mathcal{A},\mathcal{B}).$ ## 4 Stable $\infty$-categories and spectral presheaves In this section we consider the notion of stable $\infty$-categories. We show that a compactly generated stable $\infty$-category is equivalent to $P_{\mathtt{Sp}}(\mathcal{A})$ for some small $\mathtt{Sp}$-enriched category $\mathcal{A}$. Let $\mathcal{D}$ be a pointed finitely cocomplete $\infty$-category. We define the _suspension functor_ on $\mathcal{D}$ $\Sigma_{\mathcal{D}}\colon\mathcal{D}\to\mathcal{D}$ by the formula $\Sigma_{\mathcal{D}}(X):=*\coprod_{X}*.$ Alternatively, the suspension functor can be defined as the smash product $S^{1}\wedge X$, using the fact that a pointed finitely cocomplete $\infty$-category is tensored over pointed spaces. 
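For concreteness, the pushout defining $\Sigma_{\mathcal{D}}(X)$ can be drawn as a square; the following is a sketch in the notation above, and the second square (assuming $\mathcal{D}$ also has finite limits) displays the dual loop functor $\Omega_{\mathcal{D}}(Y):=\ast\times_{Y}\ast$.

```latex
% Suspension as a pushout: collapse both copies of X to a point.
% Dually, the loop object is the pullback of the two maps * -> Y.
% When these squares are simultaneously cocartesian and cartesian
% (as in a stable infinity-category), Sigma and Omega are mutually
% inverse equivalences.
\[
\begin{array}{ccc}
X & \longrightarrow & \ast \\
\downarrow & & \downarrow \\
\ast & \longrightarrow & \Sigma_{\mathcal{D}}(X)
\end{array}
\qquad\qquad
\begin{array}{ccc}
\Omega_{\mathcal{D}}(Y) & \longrightarrow & \ast \\
\downarrow & & \downarrow \\
\ast & \longrightarrow & Y
\end{array}
\]
```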
If the suspension functor is an equivalence of categories, then $\mathcal{D}$ is called _stable_. A stable presentable $\infty$-category is naturally left-tensored over the closed monoidal $\infty$-category of spectra $\mathtt{Sp}$ (see [Lur2, Proposition 4.8.2.18]). Moreover, $\mathtt{Sp}$ is presentable, so for every $b,c\in\mathcal{D}$ the presheaf $\operatorname{Hom}_{\mathcal{D}}(b,c)\in P(\mathtt{Sp})$ is representable (we will denote the representing object also by $\operatorname{Hom}_{\mathcal{D}}(b,c)\in\mathtt{Sp}$). By Proposition 3.5 it follows that a stable presentable $\infty$-category is canonically enriched over $\mathtt{Sp}$ (this was observed by Gepner-Haugseng in [GH, Example 7.4.14], where they also pointed out that the presentability assumption is unnecessary). ###### Theorem 4.1. Let $\mathcal{D}$ be a cocomplete stable $\infty$-category. Suppose that there is a small set $C$ of compact objects in $\mathcal{D}$ that generates $\mathcal{D}$ under colimits and desuspensions. Thinking of $\mathcal{D}$ as left-tensored over $\mathtt{Sp}$, we let $\mathcal{C}$ be the full $\mathtt{Sp}$-enriched subcategory of $\mathcal{D}$ spanned by $C$. Then we have a natural functor of categories left-tensored over $\mathtt{Sp}$ $P_{\mathtt{Sp}}(\mathcal{C})\xrightarrow{\sim}\mathcal{D},$ which is an equivalence of the underlying $\infty$-categories and sends each representable presheaf $Y(c)\in P_{\mathtt{Sp}}(\mathcal{C})$ to $c\in C$. ###### Remark 4.2. Theorem 4.1 appears in [Lur2, Theorem 7.1.2.1] for the case that $|C|=1$. The general case, formulated in the language of model categories, can be found in [ScSh1, Theorem 3.3.3]. In [GM] the last result can be found under more general hypotheses. ###### Proof. By definition of full enriched subcategory, there is a fully faithful $\mathtt{Sp}$-functor $i:\mathcal{C}\to\mathcal{D}$. 
By the universal property of the Yoneda embedding, we have an induced functor of categories left-tensored over $\mathtt{Sp}$ $I\colon P_{\mathtt{Sp}}(\mathcal{C})\to\mathcal{D},$ such that $I\circ Y\simeq i$. We thus get an equivalence $I(Y(c))\simeq i(c)\simeq c$, for every $c\in\mathcal{C}$, and it remains to show that $I$ is an equivalence. The functor $I$ is a morphism of left modules over $\mathtt{Sp}$ in $\mathtt{Cat^{L}}$, so in particular $I$ commutes with colimits. The $\infty$-category $\mathcal{D}$ is presentable by [Lur1, Theorem 5.5.1.1]. Let $X$ denote the full subspace of $\mathcal{D}$ generated by $C$ and recall that the $\infty$-category $P_{\mathtt{Sp}}(\mathcal{C})$ is defined as the category of left $\mathcal{C}^{\operatorname{op}}$-modules with values in $\operatorname{Fun}(X^{\operatorname{op}},\mathtt{Sp})$. Since $\operatorname{Fun}(X^{\operatorname{op}},\mathtt{Sp})$ is stable and presentable, so is $P_{\mathtt{Sp}}(\mathcal{C})$ (see [Lur2, 1.1.3.1 and 4.2.3.5]). Thus, by the adjoint functor theorem $I$ has a right adjoint $J$: $I\colon P_{\mathtt{Sp}}(\mathcal{C})\rightleftarrows\mathcal{D}\nobreak\mspace{6.0mu}{:}\nonscript\mkern-3.0mu\mathpunct{}\mspace{2.0mu}J.$ We first show that the unit $Y(c)\to J(I(Y(c)))$ of the adjunction $I\dashv J$ is an equivalence for every $c\in\mathcal{C}$. It is not hard to show that $J$ preserves the $\mathtt{Sp}$-enrichment, so both $Y$ and $J\circ I\circ Y$ are $\mathtt{Sp}$-functors $\mathcal{C}\to P_{\mathtt{Sp}}(\mathcal{C})$, and the unit induces a map $Y\to J\circ I\circ Y$ of $\mathtt{Sp}$-functors. 
Note that $\operatorname{Fun}_{\mathtt{Sp}}(\mathcal{C},P_{\mathtt{Sp}}(\mathcal{C}))=\mathtt{LMod}_{\mathcal{C}}(\operatorname{Fun}(X,P_{\mathtt{Sp}}(\mathcal{C})))=$ $\mathtt{LMod}_{\mathcal{C}}(\operatorname{Fun}(X,\operatorname{Fun}_{\mathtt{Sp}^{rev}}(\mathcal{C}^{\operatorname{op}},\mathtt{Sp})))=\mathtt{LMod}_{\mathcal{C}}(\operatorname{Fun}(X,\mathtt{LMod}_{\mathcal{C}^{\operatorname{op}}}(\operatorname{Fun}(X^{\operatorname{op}},\mathtt{Sp}))))=$ $\mathtt{LMod}_{\mathcal{C}}(\mathtt{LMod}_{\mathcal{C}^{\operatorname{op}}}(\operatorname{Fun}(X,\operatorname{Fun}(X^{\operatorname{op}},\mathtt{Sp}))))=\mathtt{LMod}_{\mathcal{C}}(\mathtt{RMod}_{\mathcal{C}}(\operatorname{Fun}(X^{\operatorname{op}}\times X,\mathtt{Sp}))),$ so an $\mathtt{Sp}$-functor $\mathcal{C}\to P_{\mathtt{Sp}}(\mathcal{C})$ is the same as a $\mathcal{C}$-$\mathcal{C}$-bimodule in the category $\operatorname{Fun}(X^{\operatorname{op}}\times X,\mathtt{Sp})$. Thus we can view $Y\to J\circ I\circ Y$ as a map of $\mathcal{C}$-$\mathcal{C}$-bimodules in $\operatorname{Fun}(X^{\operatorname{op}}\times X,\mathtt{Sp})$ and we need to show that it is an equivalence. The forgetful functor to $\operatorname{Fun}(X^{\operatorname{op}}\times X,\mathtt{Sp})$ reflects equivalences, and an equivalence in $\operatorname{Fun}(X^{\operatorname{op}}\times X,\mathtt{Sp})$ can be verified objectwise, so we can fix two objects $c,d$ in $\mathcal{C}$ and show that the induced map of spectra $Y(c,d)\to(J\circ I\circ Y)(c,d)$ is an equivalence. 
But since $Y$ and $i$ are $\mathtt{Sp}$-fully faithful, we have $(J\circ I\circ Y)(c,d)=J(I(Y(d)))(c)\simeq$ $\operatorname{Hom}_{P_{\mathtt{Sp}}(\mathcal{C})}(Y(c),J(I(Y(d))))\simeq\operatorname{Hom}_{\mathcal{D}}(I(Y(c)),I(Y(d)))\simeq$ $\operatorname{Hom}_{\mathcal{D}}(i(c),i(d))\simeq{\mathcal{C}}(c,d)\simeq\operatorname{Hom}_{P_{\mathtt{Sp}}(\mathcal{C})}(Y(c),Y(d))\simeq Y(d)(c)\simeq Y(c,d).$ Since $I(Y(c))\simeq c$ and $J(c)\simeq Y(c)$, the counit $I(J(c))\to c$ of $I\dashv J$ is also an equivalence for every $c\in C$. Note that $C$ generates $\mathcal{D}$ under colimits, $\\{Y(c)|c\in C\\}$ generates $P_{\mathtt{Sp}}(\mathcal{C})$ under colimits and the functor $I$ commutes with colimits. Thus, if we can show that $J$ also commutes with colimits, it will follow that the unit and counit of $I\dashv J$ are equivalences, and we are done. Let us show first that $J$ commutes with filtered colimits. So let $d=\mathop{\operatorname{colim}}_{i\in I}d_{i}$ be a filtered colimit diagram in $\mathcal{D}$. We need to verify that the induced map $\mathop{\operatorname{colim}}_{i\in I}J(d_{i})\to J(d)$ is an equivalence. Recall that $P_{\mathtt{Sp}}(\mathcal{C})=\operatorname{Fun}_{\mathtt{Sp}^{rev}}(\mathcal{C}^{\operatorname{op}},\mathtt{Sp})=\mathtt{LMod}_{\mathcal{C}^{\operatorname{op}}}(\operatorname{Fun}(X^{\operatorname{op}},\mathtt{Sp})).$ The forgetful functor $U$ from $P_{\mathtt{Sp}}(\mathcal{C})$ to $\operatorname{Fun}(X^{\operatorname{op}},\mathtt{Sp})$ commutes with colimits (see [Lur2, 4.2.3.5]) and reflects equivalences, so it is enough to verify that ${\mathop{\operatorname{colim}}}_{i\in I}U(J(d_{i}))\to U(J(d))$ is an equivalence. Now colimits in $\operatorname{Fun}(X^{\operatorname{op}},\mathtt{Sp})$ are pointwise, so we can fix $c\in\mathcal{C}$ and show that ${\mathop{\operatorname{colim}}}_{i\in I}U(J(d_{i}))(c)\to U(J(d))(c)$ is an equivalence. 
We have an equivalence natural in $e\in\mathcal{D}$ $U(J(e))(c)=J(e)(c)=\operatorname{Hom}_{P_{\mathtt{Sp}}(\mathcal{C})}(Y(c),J(e))\simeq$ $\operatorname{Hom}_{\mathcal{D}}(I(Y(c)),e)\simeq\operatorname{Hom}_{\mathcal{D}}(i(c),e)\simeq\operatorname{Hom}_{\mathcal{D}}(c,e),$ so it is enough to show that ${\mathop{\operatorname{colim}}}_{i\in I}\operatorname{Hom}_{\mathcal{D}}(c,d_{i})\to\operatorname{Hom}_{\mathcal{D}}(c,d)$ is an equivalence, which is true by the compactness of $c$ in $\mathcal{D}$. Since $J$ is a right adjoint, it preserves pullback squares; since both the domain and the range of $J$ are stable, and in a stable $\infty$-category every pullback square is a pushout square and vice versa, it follows that $J$ sends pushout squares to pushout squares. Thus $J$ commutes with all small colimits. ∎ Using the results in [Hin3] one can prove an extension of Theorem 4.1: ###### Theorem 4.3. In the situation of Theorem 4.1, suppose $\mathcal{D}$ is symmetric monoidal and the set $C$ is closed under the monoidal product in $\mathcal{D}$ and contains the unit of $\mathcal{D}$. Then $\mathcal{C}$ acquires a canonical symmetric monoidal $\mathtt{Sp}$-enriched structure, the category of presheaves $P_{\mathtt{Sp}}(\mathcal{C})$ acquires a canonical symmetric monoidal left $\mathtt{Sp}$-tensored structure and the equivalence $P_{\mathtt{Sp}}(\mathcal{C})\xrightarrow{\sim}\mathcal{D}$ acquires a canonical symmetric monoidal left $\mathtt{Sp}$-tensored structure. ## 5 Stabilization of categories In this section we review the notion of stabilization of an $\infty$-category. We will use the framework established by Lurie in [Lur2, Section 1.4]. We present in subsection 5.0.1 a similar procedure that can be applied to an ordinary topologically enriched category. We will compare the strict and the $\infty$-categorical versions of stabilization. Let $\mathcal{C}$ be a pointed $\infty$-category. 
To ensure that $\mathcal{C}$ has all the good properties we may want, we will assume that $\mathcal{C}\simeq\operatorname{Ind}(\mathcal{C}_{0})$ where $\mathcal{C}_{0}$ is a small pointed $\infty$-category that is closed under finite colimits. This includes $\mathtt{NCW}\simeq\operatorname{Ind}(\mathtt{NCW}^{f})$. By [Lur1, Theorem 5.5.1.1] $\mathcal{C}$ is presentable, and therefore has small limits and colimits [op. cit., Corollary 5.5.2.4]. Furthermore, filtered colimits commute with finite limits in $\mathcal{C}$ by the remark immediately following [Lur1, Definition 5.5.7.1]. It follows, in particular, that $\mathcal{C}$ is differentiable in the sense of [Lur2, Definition 6.1.1.6] and therefore the results of [op. cit., Chapter 6] apply to $\mathcal{C}$. Recall that $\mathtt{CW}^{f}_{*}$ is the $\infty$-category of pointed finite CW-complexes. Let $F\colon\mathtt{CW}^{f}_{*}\to\mathcal{C}$ be a functor. Recall that $F$ is called reduced if $F(*)$ is a final object of $\mathcal{C}$, and $F$ is called $1$-excisive if $F$ takes pushout squares to pullback squares. Let linear functors be functors that are both reduced and $1$-excisive. Linear functors provide a good framework for defining spectra in the context of general $\infty$-categories. ###### Definition 5.1 ([Lur2], Definition 1.4.2.8). A spectrum object in $\mathcal{C}$ is a linear functor $\mathtt{CW}^{f}_{*}\to\mathcal{C}$. Let $\mathtt{Sp}(\mathcal{C})$ be the $\infty$-category of linear functors $\mathtt{CW}^{f}_{*}\to\mathcal{C}$. $\mathtt{Sp}(\mathcal{C})$ is called the category of spectra of $\mathcal{C}$, or the stabilization of $\mathcal{C}$. By results in [Lur2], $\mathtt{Sp}(\mathcal{C})$ is a stable and presentable $\infty$-category (Corollary 1.4.2.17 and Proposition 1.4.4.4 respectively). Let $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\mathcal{C})$ be the category of all pointed functors from $\mathtt{CW}^{f}_{*}$ to $\mathcal{C}$. 
Then $\mathtt{Sp}(\mathcal{C})$ is by definition a full subcategory of $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\mathcal{C})$. The fully faithful functor $\mathtt{Sp}(\mathcal{C})\hookrightarrow\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\mathcal{C})$ has a left adjoint $\mathcal{L}\colon\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\mathcal{C})\to\mathtt{Sp}(\mathcal{C})$ called linearization. Explicitly, if $F\colon\mathtt{CW}^{f}_{*}\to\mathcal{C}$ is a pointed functor, then the linearization of $F$ is given by the following formula $\mathcal{L}F(X)=\mathop{\operatorname{colim}}_{n\to\infty}\Omega^{n}_{\mathcal{C}}F(\Sigma^{n}X).$ See [Lur2, Example 6.1.1.28] for a discussion of this formula in the context of $\infty$-categories (of course this formula is older than [Lur2] and goes back at least to [Goo]). The stabilization $\mathtt{Sp}(\mathcal{C})$ is the left Bousfield localization of $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\mathcal{C})$ at the stable equivalences and $\mathcal{L}$ is the localization functor (a map between functors is a stable equivalence if it induces an equivalence between linearizations). There is an adjoint pair of functors $\Sigma_{\mathcal{C}}^{\infty}\colon\mathcal{C}\leftrightarrows\mathtt{Sp}(\mathcal{C})\nobreak\mspace{6.0mu}{:}\nonscript\mkern-3.0mu\mathpunct{}\mspace{2.0mu}\Omega^{\infty}_{\mathcal{C}},$ where $\Sigma^{\infty}_{\mathcal{C}}x(K)=\mathop{\operatorname{colim}}_{n\to\infty}\Omega^{n}_{\mathcal{C}}\Sigma^{n}_{\mathcal{C}}(K\wedge x)$ and $\Omega^{\infty}_{\mathcal{C}}G=G(S^{0})$. This formula for $\Omega^{\infty}_{\mathcal{C}}$ agrees with the one in [Lur2, Notation 1.4.2.20], and therefore our $\Sigma^{\infty}_{\mathcal{C}}$, being left adjoint to $\Omega^{\infty}_{\mathcal{C}}$ is also equivalent to Lurie's. 
The functor $\Sigma^{\infty}_{\mathcal{C}}$ satisfies the following universal property: For every stable presentable $\infty$-category $\mathcal{D}$, precomposition with $\Sigma^{\infty}_{\mathcal{C}}$ induces an equivalence of $\infty$-categories $\mathtt{Fun^{L}}(\mathtt{Sp}(\mathcal{C}),\mathcal{D})\xrightarrow{\simeq}\mathtt{Fun^{L}}(\mathcal{C},\mathcal{D}),$ where $\mathtt{Fun^{L}}$ denotes left functors (that is, colimit preserving functors). An important special case is when $\mathcal{C}=\mathcal{S}_{*}$ is the $\infty$-category of pointed spaces. In this case $\mathtt{Sp}:=\mathtt{Sp}(\mathcal{S}_{*})$ is the classical $\infty$-category of spectra, presented as the category of linear functors from $\mathtt{CW}^{f}_{*}$ to $\mathcal{S}_{*}$. Whenever $\mathcal{C}=\mathcal{S}_{*}$ we write $\mathtt{Sp}$, $\Sigma^{\infty}$ or $\Omega^{\infty}$, omitting the subscript $\mathcal{C}$. There is another useful way to construct $\mathtt{Sp}(\mathcal{C})$ when $\mathcal{C}=\operatorname{Ind}(\mathcal{C}_{0})$, with $\mathcal{C}_{0}$ closed under finite colimits. We will now describe it. ###### Definition 5.2. Let $\mathcal{C}_{0}$ be an $\infty$-category closed under finite colimits. Define the Spanier-Whitehead category of $\mathcal{C}_{0}$, which we denote by $\mathtt{SW}(\mathcal{C}_{0})$, to be the colimit of the sequence $\mathcal{C}_{0}\xrightarrow{\Sigma_{\mathcal{C}_{0}}}\mathcal{C}_{0}\xrightarrow{\Sigma_{\mathcal{C}_{0}}}\cdots$ in $\mathtt{Cat}$. Thus, the objects of $\mathtt{SW}(\mathcal{C}_{0})$ are pairs $(X,n)$ where $X\in\mathcal{C}_{0}$ and $n\in\mathbb{N}$. The pair $(X,n)$ represents the $n$-fold desuspension of $X$. 
The mapping spaces in $\mathtt{SW}(\mathcal{C}_{0})$ are given by $\operatorname{Map}_{\mathtt{SW}(\mathcal{C}_{0})}((X,n),(Y,m))={\mathop{\operatorname{colim}}}_{k\in\mathbb{N}}\operatorname{Map}_{\mathcal{C}_{0}}(\Sigma_{\mathcal{C}_{0}}^{k-n}X,\Sigma_{\mathcal{C}_{0}}^{k-m}Y),$ where the colimit is taken in the $\infty$-category of spaces. Clearly, $\mathtt{SW}(\mathcal{C}_{0})$ is a stable $\infty$-category. It is closed under finite colimits, but not under arbitrary colimits. It plays the role of the category of finite spectra over $\mathcal{C}$. There is a finite suspension spectrum functor ${\Sigma^{\infty}_{\mathcal{C}_{0}}}^{f}:\mathcal{C}_{0}\to\mathtt{SW}(\mathcal{C}_{0})$ given by $X\mapsto(X,0)$, which satisfies the following universal property: For every stable $\infty$-category $\mathcal{D}$, precomposition with ${\Sigma^{\infty}_{\mathcal{C}_{0}}}^{f}$ induces an equivalence of $\infty$-categories $\mathtt{Fun_{*}^{fc}}(\mathtt{SW}(\mathcal{C}_{0}),\mathcal{D})\xrightarrow{\simeq}\mathtt{Fun_{*}^{fc}}(\mathcal{C}_{0},\mathcal{D}),$ where $\mathtt{Fun_{*}^{fc}}$ denotes pointed finite colimit preserving functors. One has the following description of $\mathtt{Sp}(\operatorname{Ind}(\mathcal{C}_{0}))$: ###### Proposition 5.3. Let $\mathcal{C}_{0}$ be a small pointed finitely cocomplete ${\infty}$-category, and let $\mathcal{C}:=\operatorname{Ind}(\mathcal{C}_{0})$. Then there is a natural equivalence $\mathtt{Sp}(\mathcal{C})\simeq\operatorname{Ind}(\mathtt{SW}(\mathcal{C}_{0})),$ and under this equivalence, $\Sigma^{\infty}_{\mathcal{C}}:\operatorname{Ind}(\mathcal{C}_{0})\simeq\mathcal{C}\to\mathtt{Sp}(\mathcal{C})\simeq\operatorname{Ind}(\mathtt{SW}(\mathcal{C}_{0})),$ is just the prolongation of ${\Sigma^{\infty}_{\mathcal{C}_{0}}}^{f}:{\mathcal{C}_{0}}\to\mathtt{SW}({\mathcal{C}_{0}}).$ ###### Proof. 
This is proved in [Lur2, Chapter 1.4] in the case when $\mathcal{C}_{0}=\mathtt{CW}^{f}_{*}$ and $\mathcal{C}=\operatorname{Ind}(\mathtt{CW}^{f}_{*})\simeq\mathcal{S}_{*}$. The proof in the general case is similar. In brief, one can check that $\operatorname{Ind}(\mathtt{SW}(\mathcal{C}_{0}))$ satisfies the same universal property as $\mathtt{Sp}(\mathcal{C})$. ∎ This proposition has the following rather important corollary. ###### Corollary 5.4. Let $\mathcal{C}_{0},\mathcal{C}$ be as before. Suppose $A$ is a set of objects of $\mathcal{C}_{0}$ that generates $\mathcal{C}_{0}$ under finite colimits, in the sense that $\mathcal{C}_{0}$ is the only subcategory of $\mathcal{C}_{0}$ that contains $A$ and is closed under finite colimits and equivalences. Then the set of suspension spectra $\\{\Sigma^{\infty}_{\mathcal{C}}x\mid x\in A\\}$ generates $\mathtt{SW}(\mathcal{C}_{0})$ under finite colimits and desuspensions, and generates $\mathtt{Sp}(\mathcal{C})$ under arbitrary colimits and desuspensions. The following notation is going to be used quite a lot. ###### Definition 5.5. Suppose $\mathcal{C}$ is a pointed $\infty$-category with finite colimits. Let $x$ and $y$ be objects of $\mathcal{C}$. Then the functor $G^{\mathcal{C}}_{x,y}\colon\mathtt{CW}^{f}_{*}\to\mathcal{S}_{*}$ is defined by $G^{\mathcal{C}}_{x,y}(K)=\operatorname{Map}_{\mathcal{C}}(x,K\wedge y)$. Sometimes we will omit the superscript $\mathcal{C}$ and write simply $G_{x,y}$. ###### Lemma 5.6. If $\mathcal{C}$ is a stable $\infty$-category then $G_{x,y}$ is linear for any two objects $x,y$. ###### Proof. We need to prove that $G_{x,y}(*)\simeq*$ and that $G_{x,y}$ is $1$-excisive. The first condition holds because $*\wedge y$ is equivalent to a final object of $\mathcal{C}$. Now let us prove that $G_{x,y}$ is $1$-excisive. 
We have equivalences $\operatorname{Map}_{\mathcal{C}}(x,K\wedge y)\xrightarrow{\simeq}\operatorname{Map}_{\mathcal{C}}(S^{1}\wedge x,S^{1}\wedge K\wedge y)\xrightarrow{\simeq}\operatorname{Map}_{\mathcal{C}}(x,\Omega(S^{1}\wedge K\wedge y)).$ Here the first map is an equivalence because $\mathcal{C}$ is stable, and the second equivalence is a standard adjunction. The composite equivalence can be reinterpreted as saying that the canonical map $G_{x,y}\to\Omega G_{x,y}\Sigma$ is an equivalence. Since $\mathcal{L}G_{x,y}=\mathop{\operatorname{colim}}_{n\to\infty}\Omega^{n}G_{x,y}\Sigma^{n}$, every structure map in this colimit is then an equivalence, so the map $G_{x,y}\to\mathcal{L}G_{x,y}$ is an equivalence and $G_{x,y}$ is linear. ∎ If $\mathcal{C}$ is a stable $\infty$-category then $G^{\mathcal{C}}_{x,y}$ is in fact equivalent to the canonical enrichment of $\mathcal{C}$ over spectra. The next lemma and remark say that, more generally, the linearization of $G^{\mathcal{C}}$ gives, in favorable circumstances, the spectral enrichment of the stabilization of $\mathcal{C}$. ###### Lemma 5.7. Let $\mathcal{C}_{0}$ be a small finitely cocomplete $\infty$-category. To simplify notation, let $S\colon\mathcal{C}_{0}\to\mathtt{SW}(\mathcal{C}_{0})$ be the finite suspension spectrum functor. Then the natural map $G^{\mathcal{C}_{0}}_{x,y}\to G^{\mathtt{SW}(\mathcal{C}_{0})}_{S(x),S(y)}$ induced by the finite suspension spectrum functor is equivalent to the linearization map $G^{\mathcal{C}_{0}}_{x,y}\to\mathcal{L}G^{\mathcal{C}_{0}}_{x,y}.$ ###### Proof. 
By definition, we have equivalences $G^{\mathtt{SW}(\mathcal{C}_{0})}_{S(x),S(y)}(K)=\operatorname{Map}_{\mathtt{SW}(\mathcal{C}_{0})}(S(x),K\wedge S(y))=\mathop{\operatorname{colim}}_{n\to\infty}\operatorname{Map}_{\mathcal{C}_{0}}(S^{n}\wedge x,S^{n}\wedge K\wedge y)=\\\ =\mathop{\operatorname{colim}}_{n\to\infty}\Omega^{n}\operatorname{Map}_{\mathcal{C}_{0}}(x,S^{n}\wedge K\wedge y)=\mathop{\operatorname{colim}}_{n\to\infty}\Omega^{n}G^{\mathcal{C}_{0}}_{x,y}(S^{n}\wedge K)=\mathcal{L}G^{\mathcal{C}_{0}}_{x,y}(K)$ and the map from $G^{\mathcal{C}_{0}}_{x,y}$ is precisely the linearization map. ∎ ###### Remark 5.8. If $\mathcal{C}=\operatorname{Ind}\mathcal{C}_{0}$, with $\mathcal{C}_{0}$ as in the previous lemma, and $x,y$ are objects of $\mathcal{C}$, it is not true that $G^{\mathtt{Sp}(\mathcal{C})}_{\Sigma^{\infty}_{\mathcal{C}}(x),\Sigma^{\infty}_{\mathcal{C}}(y)}$ is equivalent to the linearization of $G^{\mathcal{C}}_{x,y}$. Rather, there is an equivalence $G^{\mathtt{Sp}(\mathcal{C})}_{\Sigma^{\infty}_{\mathcal{C}}(x),\Sigma^{\infty}_{\mathcal{C}}(y)}(K)=\operatorname{Map}_{\mathcal{C}}(x,\mathop{\operatorname{colim}}_{n\to\infty}\Omega^{n}(S^{n}\wedge K\wedge y)).$ This is equivalent to $\mathcal{L}G^{\mathcal{C}}_{x,y}(K)$ if $x$ is a compact object, but not in general. In particular, it is true when $x\in\mathcal{C}_{0}$, which is the case considered in the previous lemma. ### 5.0.1 Spectral enrichment of pointed topological categories Suppose $\mathcal{C}$ is an $\infty$-category and $x,y$ are objects of $\mathcal{C}$. We saw that functors of the form $G^{\mathcal{C}}_{x,y}$ can be used to define the spectral enrichment of the stabilization of $\mathcal{C}$. If $\mathcal{C}$ is an ordinary topological category, one can use similar functors $G^{\mathcal{C}}_{x,y}$ to define a spectral enrichment of $\mathcal{C}$, using the more traditional view of spectra as modeled by the Quillen model category of continuous functors. 
In this subsection we define a strict spectral enrichment of pointed topological categories and compare it with the $\infty$-categorical enrichment. Let $\operatorname{Top}$ denote the category of pointed compactly generated weak Hausdorff spaces with the standard model structure of Quillen [Qui]. Every object in $\operatorname{Top}$ is fibrant, and every CW-complex is cofibrant. The model category $\operatorname{Top}$ is a model for the $\infty$-category of pointed spaces $\mathcal{S}_{*}$. This means that the $\infty$-localization of $\operatorname{Top}$ with respect to the weak equivalences (see Remark 2.1) is canonically equivalent to $\mathcal{S}_{*}$. Note that any pointed topological category is naturally enriched in $\operatorname{Top}$. ###### Definition 5.9. A pointed topological category $\mathcal{C}$ is called tensored closed over $\mathtt{CW}^{f}_{*}$ if we are given a bi-continuous left action $\wedge:\mathtt{CW}^{f}_{*}\times\mathcal{C}\to\mathcal{C}$ such that the following hold: 1. 1. The $\infty$-category $\mathcal{C}_{\infty}$ is finitely cocomplete, where $\mathcal{C}_{\infty}$ is the topological nerve of $\mathcal{C}$. 2. 2. After application of the topological nerve the functor $\wedge_{\infty}:\mathtt{CW}^{f}_{*}\times\mathcal{C}_{\infty}\to\mathcal{C}_{\infty}$ commutes with finite colimits in each variable. ###### Definition 5.10. Let $\mathcal{C}$ be a pointed topological category, tensored closed over $\mathtt{CW}^{f}_{*}$. Let $x$ and $y$ be objects of $\mathcal{C}$. In keeping with notation we introduced in Definition 5.5, we define the pointed topological functor $G^{\mathcal{C}}_{x,y}\colon\mathtt{CW}^{f}_{*}\to\operatorname{Top}$ by $G^{\mathcal{C}}_{x,y}(K)=\operatorname{Map}_{\mathcal{C}}(x,K\wedge y)$. Again, we may omit the superscript $\mathcal{C}$ and write simply $G_{x,y}$. ###### Remark 5.11. 
We saw earlier that pointed $\infty$-functors from $\mathtt{CW}^{f}_{*}$ to $\mathcal{S}_{*}$ provide a way of defining the $\infty$-category of spectra. This is known also in the more traditional approach to spectra via model categories. There is a Quillen model structure on the category $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})$ of continuous pointed functors, called the stable model structure, and it provides one of the models for the category of spectra. We refer the reader to [Lyd, MMSS] for more details about this model structure. We denote the category $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})$ with the stable model structure by $\mathtt{Sp^{M}}$. We denote the category $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})$ with the projective model structure simply by $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})$. The model category $\mathtt{Sp^{M}}$ is a left Bousfield localization of $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})$, so we have a Quillen pair $\operatorname{Id}:\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})\rightleftarrows\mathtt{Sp^{M}}:\operatorname{Id}.$ Applying $\infty$-localization, this Quillen pair becomes the localization adjunction $\mathcal{L}:\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\mathcal{S}_{*})\rightleftarrows\mathtt{Sp}:\iota.$ Let $\mathcal{C}$ be a pointed topological category, tensored closed over $\mathtt{CW}^{f}_{*}$ and let $x,y\in\mathcal{C}$. By the previous definition we have a functor $G^{\mathcal{C}}_{x,y}\colon\mathtt{CW}^{f}_{*}\to\operatorname{Top}$. Under the identification $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})_{\infty}\simeq\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\mathcal{S}_{*})$, $G^{\mathcal{C}}_{x,y}$ can be thought of as an object in $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\mathcal{S}_{*})$. 
We will now compare this with the functor $G^{\mathcal{C}_{\infty}}_{x,y}\colon\mathtt{CW}^{f}_{*}\to\mathcal{S}_{*}$ from Definition 5.5. Clearly, the bi-functor $\wedge_{\infty}:\mathtt{CW}^{f}_{*}\times\mathcal{C}_{\infty}\to\mathcal{C}_{\infty}$ is a left action of the monoidal $\infty$-category $\mathtt{CW}^{f}_{*}$ on $\mathcal{C}_{\infty}$. It follows that we have an induced action on the ind- categories $\wedge_{\infty}:\mathcal{S}_{*}\times\operatorname{Ind}(\mathcal{C}_{\infty})\to\operatorname{Ind}(\mathcal{C}_{\infty})$ and this action commutes with small colimits in each variable. Since $\mathcal{S}_{*}$ is a mode in the sense of [CSY, Section 5], this action coincides with the canonical action of $\mathcal{S}_{*}$ on $\operatorname{Ind}(\mathcal{C}_{\infty})$ as a presentable pointed $\infty$-category. In particular we get that the restriction $\wedge_{\infty}:\mathtt{CW}^{f}_{*}\times\mathcal{C}_{\infty}\to\mathcal{C}_{\infty}$ coincides with the canonical action of $\mathtt{CW}^{f}_{*}$ on $\mathcal{C}_{\infty}$ as an $\infty$-category with finite colimits. It follows that under the identification $\operatorname{Top}_{\infty}\simeq\mathcal{S}_{*}$, for any $K\in\mathtt{CW}^{f}_{*}$ and $z\in\mathcal{C}$ we have natural equivalences $\operatorname{Map}_{\mathcal{C}}(x,z)\simeq\operatorname{Map}_{\mathcal{C}_{\infty}}(x,z)$ $K\wedge y\simeq K\wedge_{\infty}y.$ Thus we have $G^{\mathcal{C}_{\infty}}_{x,y}(K)=\operatorname{Map}_{\mathcal{C}_{\infty}}(x,K\wedge_{\infty}y)\simeq\operatorname{Map}_{\mathcal{C}}(x,K\wedge y)=G^{\mathcal{C}}_{x,y}(K)$ (1) or $G^{\mathcal{C}_{\infty}}_{x,y}\simeq G^{\mathcal{C}}_{x,y}.$ ###### Definition 5.12. Let $\mathcal{C}$ be a pointed topological category, tensored closed over $\mathtt{CW}^{f}_{*}$. 
We define a strict enrichment of $\mathcal{C}$ over $\mathtt{Sp^{M}}$ as follows: If $x$ and $y$ are objects of $\mathcal{C}$ we define $\operatorname{Hom}_{\mathcal{C}}(x,y):=G^{\mathcal{C}}_{x,y}\in\mathtt{Sp^{M}}.$ Let $x,y,z$ be objects of $\mathcal{C}$ and let $K$ and $L$ be finite CW-complexes. Note that there is a natural map $G_{x,y}(K)\wedge G_{y,z}(L)\to G_{x,z}(K\wedge L)$, defined as the composite $\operatorname{Map}_{\mathcal{C}}(x,K\wedge y)\wedge\operatorname{Map}_{\mathcal{C}}(y,L\wedge z)\to\operatorname{Map}_{\mathcal{C}}(x,K\wedge y)\wedge\operatorname{Map}_{\mathcal{C}}(K\wedge y,K\wedge L\wedge z)\to\operatorname{Map}_{\mathcal{C}}(x,K\wedge L\wedge z),$ where the first map is induced by the topological functor $K\wedge(-):\mathcal{C}\to\mathcal{C}$ and the second map is given by composition. This map induces a natural map $G_{x,y}\otimes G_{y,z}\to G_{x,z}$, where $\otimes$ denotes Day convolution, which is the tensor product in $\mathtt{Sp^{M}}$. Thus we have defined composition, and it can be checked that the above indeed defines a strict enrichment of $\mathcal{C}$ over $\mathtt{Sp^{M}}$. ###### Theorem 5.13. Let $\mathcal{C}$ be a small pointed topological category, tensored closed over $\mathtt{CW}^{f}_{*}$. Then under the identification $\mathtt{Sp^{M}}_{\infty}\simeq\mathtt{Sp}$, for any two objects $x$ and $y$ of $\mathcal{C}$ we have a natural equivalence $\operatorname{Hom}_{\mathcal{C}}(x,y)\simeq\operatorname{Hom}_{\mathtt{Sp}(\operatorname{Ind}(\mathcal{C}_{\infty}))}(\Sigma^{\infty}(x),\Sigma^{\infty}(y)).$ ###### Proof. Let $x$ and $y$ be objects in $\mathcal{C}$. Recall that $\operatorname{Hom}_{\mathcal{C}}(x,y)=G^{\mathcal{C}}_{x,y}$ and consider $G^{\mathcal{C}}_{x,y}$ as an object in $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})$. 
By (1), we have $G^{\mathcal{C}}_{x,y}\simeq G^{\mathcal{C}_{\infty}}_{x,y}.$ We have a commutative square $\begin{array}[]{ccc}\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})_{\infty}&\xrightarrow{\ \sim\ }&\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\mathcal{S}_{*})\\\ {\scriptstyle\mathbb{L}\operatorname{Id}}\downarrow&&\downarrow{\scriptstyle\mathcal{L}}\\\ \mathtt{Sp^{M}}_{\infty}&\xrightarrow{\ \sim\ }&\mathtt{Sp}\end{array}$ Let $(G^{\mathcal{C}}_{x,y})^{f}$ be a fibrant replacement of $G^{\mathcal{C}}_{x,y}$ in $\mathtt{Sp^{M}}$. Then the map $G^{\mathcal{C}}_{x,y}\to(G^{\mathcal{C}}_{x,y})^{f}$, considered in $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})$, translates to $G^{\mathcal{C}_{\infty}}_{x,y}\to\mathcal{L}G^{\mathcal{C}_{\infty}}_{x,y}$ under the top horizontal map. By Lemma 5.7 the last map is equivalent to $G^{\mathcal{C}_{\infty}}_{x,y}\to G^{\mathtt{SW}(\mathcal{C}_{\infty})}_{S(x),S(y)}.$ After applying $\mathcal{L}$, this map becomes an equivalence, so we have a natural equivalence in $\mathtt{Sp}$ $G^{\mathcal{C}}_{x,y}\simeq G^{\mathtt{SW}(\mathcal{C}_{\infty})}_{S(x),S(y)}\simeq\operatorname{Hom}_{\mathtt{SW}(\mathcal{C}_{\infty})}(S(x),S(y)).$ By Proposition 5.3 we are done. ∎ ## 6 The $\infty$-category of noncommutative CW-spectra In this section we define the $\infty$-category of noncommutative CW-spectra $\mathtt{NSp}$. The suspension spectra of matrix algebras form a set of compact generators of $\mathtt{NSp}$. We denote by $\mathcal{M}$ the full spectral subcategory of $\mathtt{NSp}$ spanned by this set of generators (see Proposition 3.5). Thus $\mathcal{M}$ is an $\mathtt{Sp}$-enriched $\infty$-category. 
We prove our main theorem: there is an equivalence of $\infty$-categories between $\mathtt{NSp}$ and the category of spectral presheaves on $\mathcal{M}$. We give two versions and two independent proofs of this result. One version is formulated fully in the language of enriched $\infty$-categories, using Hinich's theory. In the second approach, we first define a strict version of $\mathcal{M}$, denoted $\mathcal{M}_{s}$, which is a category strictly enriched in $\mathtt{Sp^{M}}$. We then prove that $\mathtt{NSp}$ is modelled by a Quillen model category of $\mathtt{Sp^{M}}$-valued presheaves on $\mathcal{M}_{s}$. Finally, we prove that our two models of $\mathcal{M}$ are equivalent, in the sense that $\mathcal{M}$ is equivalent to the enriched $\infty$-localization of $\mathcal{M}_{s}$ (see Definition 3.6). Let us proceed with the definition of $\mathtt{NSp}$. Recall that in Section 2 we defined the $\infty$-category of finite noncommutative CW-complexes and denoted it by $\mathtt{NCW}^{f}$. We then defined the $\infty$-category of all noncommutative CW-complexes by the formula $\mathtt{NCW}:=\operatorname{Ind}(\mathtt{NCW}^{f}).$ We now define the $\infty$-category of noncommutative CW-spectra to be $\mathtt{NSp}:=\mathtt{Sp}(\mathtt{NCW}).$ By the results in Section 5 we know that $\mathtt{NSp}$ is a presentable stable $\infty$-category. In particular, $\mathtt{NSp}$ is naturally left-tensored over spectra. By [Lur2, Corollary 4.8.2.19] the monoidal structure on $\mathtt{NCW}$ induces a closed symmetric monoidal structure on $\mathtt{NSp}$, such that $\Sigma^{\infty}_{\mathtt{NC}}:\mathtt{NCW}\longrightarrow\mathtt{NSp}$ is symmetric monoidal. Recall that $M_{n}$ is the algebra of $n\times n$ matrices over $\mathbb{C}$. 
Since the set of objects $\\{M_{i}\mid i\in\mathbb{N}\\}$ generates $\mathtt{NCW}^{f}$ under finite colimits, it follows by Corollary 5.4 that $M:=\\{\Sigma^{\infty}_{\mathtt{NC}}M_{i}\mid i\in\mathbb{N}\\}$ generates $\mathtt{NSp}$ under small colimits and desuspensions. The following is one of the main definitions of the paper: ###### Definition 6.1. Let $\mathcal{M}$ be the full $\mathtt{Sp}$-enriched subcategory of $\mathtt{NSp}$ spanned by the spectra $\\{\Sigma^{\infty}_{\mathtt{NC}}M_{i}\mid i\in\mathbb{N}\\}$. Since $\mathcal{M}$ is closed under the monoidal product in $\mathtt{NSp}$, the following theorem is a special case of Theorem 4.3: ###### Theorem 6.2. The $\mathtt{Sp}$-enriched category $\mathcal{M}$ acquires a canonical symmetric monoidal structure, the category of presheaves $P_{\mathtt{Sp}}(\mathcal{M})$ acquires a canonical symmetric monoidal left $\mathtt{Sp}$-tensored structure and we have a natural symmetric monoidal left $\mathtt{Sp}$-tensored functor $P_{\mathtt{Sp}}(\mathcal{M})\xrightarrow{\sim}\mathtt{NSp},$ which is an equivalence of the underlying $\infty$-categories and sends each representable presheaf $Y(\Sigma^{\infty}_{\mathtt{NC}}M_{n})\in P_{\mathtt{Sp}}(\mathcal{M})$ to $\Sigma^{\infty}_{\mathtt{NC}}M_{n}$. ### 6.0.1 Strictification of $\mathcal{M}$ In this subsection we give a strict model for the category $\mathcal{M}$ as a monoidal spectrally enriched category, as well as a strict version of Theorem 6.2. In the context of Section 5.0.1, let us consider the example $\mathcal{C}=\mathtt{NCW}^{f}$, considered as a topological category. Then $\mathtt{NCW}^{f}$ is a pointed topological category, tensored closed over $\mathtt{CW}^{f}_{*}$ (see Definition 5.9). As explained in Definition 5.12, we have a strict enrichment of $\mathtt{NCW}^{f}$ over the model category of spectra $\mathtt{Sp^{M}}$ using the functors $G^{\mathtt{NCW}^{f}}_{x,y}$. 
The topological category $\mathtt{NCW}^{f}$ has a continuous symmetric monoidal structure induced by tensor product in $\mathtt{SC^{*}}$. The spectral enrichment respects the monoidal structure, in the sense that given objects $x,x_{1},y,y_{1}$, there is a natural transformation $G_{x,y}^{\mathtt{NCW}^{f}}(K)\wedge G_{x_{1},y_{1}}^{\mathtt{NCW}^{f}}(L)\to G_{x\otimes x_{1},y\otimes y_{1}}^{\mathtt{NCW}^{f}}(K\wedge L).$ Thus the spectral enrichment of $\mathtt{NCW}^{f}$ is symmetric monoidal. ###### Definition 6.3. Let $\mathcal{M}_{s}$ be the full (strict) $\mathtt{Sp^{M}}$-enriched subcategory of $\mathtt{NCW}^{f}$ spanned by $\\{M_{n}\mid n\in\mathbb{N}\\}$. That is, the objects of $\mathcal{M}_{s}$ are $\\{M_{n}\mid n\in\mathbb{N}\\}$ and for any $m,n\in\mathbb{N}$ we have $\operatorname{Hom}_{\mathcal{M}_{s}}(M_{m},M_{n})=G^{\mathtt{NCW}^{f}}_{M_{m},M_{n}}\in\mathtt{Sp^{M}}.$ Since $\mathcal{M}_{s}$ is a category enriched over $\mathtt{Sp^{M}}$, we can define the strict category of spectral presheaves on $\mathcal{M}_{s}$, which we denote by $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})$, to be the category of enriched functors $\mathcal{M}_{s}^{\operatorname{op}}\to\mathtt{Sp^{M}}$. We endow $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})$ with the projective model structure (see, for example, [GM] on the projective model structure in the enriched setting). Since both $\mathcal{M}_{s}^{\operatorname{op}}$ and $\mathtt{Sp^{M}}$ have a symmetric monoidal structure, the category $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})$ has a symmetric monoidal structure given by enriched Day convolution turning it into a symmetric monoidal model category. Consider $\mathtt{NCW}^{f}$ as a category enriched in $\mathtt{Sp^{M}}$. 
There is a canonical strict spectral functor $RY\colon\mathtt{NCW}^{f}\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}),$ (2) which is the composition of the enriched Yoneda embedding $\mathtt{NCW}^{f}\to P_{\mathtt{Sp^{M}}}(\mathtt{NCW}^{f})$ followed by restriction $P_{\mathtt{Sp^{M}}}(\mathtt{NCW}^{f})\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})$. It is well known, and easy to check, that $RY$ is lax symmetric monoidal. We call a map $A\to B$ in $\mathtt{NCW}^{f}$ a weak equivalence if it is a homotopy equivalence in $\mathtt{NCW}^{f}$ considered as a topological category. ###### Lemma 6.4. The functor $RY$ sends weak equivalences to weak equivalences. ###### Proof. Let $A\to B$ be a weak equivalence in $\mathtt{NCW}^{f}$. We need to show that $RY(A)\to RY(B)$ is a levelwise weak equivalence in $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})$. Let $n\geq 1$. We need to show that the induced map $\operatorname{Hom}_{\mathtt{NCW}^{f}}(M_{n},A)\to\operatorname{Hom}_{\mathtt{NCW}^{f}}(M_{n},B)$ is a weak equivalence in $\mathtt{Sp^{M}}$. Since $\mathtt{Sp^{M}}$ is a localization of the projective model structure, it is enough to show that $\operatorname{Hom}_{\mathtt{NCW}^{f}}(M_{n},A)\to\operatorname{Hom}_{\mathtt{NCW}^{f}}(M_{n},B)$ is a levelwise weak equivalence in $\operatorname{Fun}_{*}(\mathtt{CW}^{f}_{*},\operatorname{Top})$. That is, it is enough to show that for every finite pointed CW-complex $K$, $\operatorname{Map}_{\mathtt{NCW}^{f}}(M_{n},K\wedge A)\to\operatorname{Map}_{\mathtt{NCW}^{f}}(M_{n},K\wedge B)$ is a weak equivalence. Since $\mathtt{NCW}^{f}$ is a topological category, and a weak equivalence in $\mathtt{NCW}^{f}$ is just a homotopy equivalence, we are done. ∎ By the lemma above, we can apply $\infty$-localization with respect to weak equivalences (see Remark 2.1) to $RY$ and obtain a functor of $\infty$-categories $RY_{\infty}\colon\mathtt{NCW}^{f}_{\infty}\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}.$ ###### Lemma 6.5. 
The $\infty$-category $\mathtt{NCW}^{f}_{\infty}$ is naturally equivalent to the $\infty$-category $\mathtt{NCW}^{f}$ defined above as the topological nerve of the topological category $\mathtt{NCW}^{f}$. ###### Proof. The category $\mathtt{SC^{*}}^{\operatorname{op}}$ (defined in the beginning of Section 2) has the structure of a category of cofibrant objects with the weak equivalences given by the homotopy equivalences and the cofibrations by Schochet cofibrations (see, for instance, [AG, Uuy]). We say that a map in $\mathtt{NCW}^{f}$ is a weak equivalence (resp. cofibration) if it is a weak equivalence (resp. cofibration) when regarded as a map in $\mathtt{SC^{*}}^{\operatorname{op}}$. Since $\mathtt{SC^{*}}^{\operatorname{op}}$ is a category of cofibrant objects and $\mathtt{NCW}^{f}\subseteq\mathtt{SC^{*}}^{\operatorname{op}}$ is a full subcategory which is closed under weak equivalences and pushouts along cofibrations it follows that $\mathtt{NCW}^{f}$ inherits the structure of a category of cofibrant objects. In exactly the same way as in [BHH, Lemma 7.1.1] one can show that the natural map between $\infty$-localizations with respect to weak equivalences: $(\mathtt{NCW}^{f})_{\infty}\longrightarrow(\mathtt{SC^{*}}^{\operatorname{op}})_{\infty}$ is fully faithful. By [BJM, Proposition 3.17] we have that the $\infty$-localization of $\mathtt{SC^{*}}^{\operatorname{op}}$ is equivalent to the topological nerve of the topological category structure on $\mathtt{SC^{*}}^{\operatorname{op}}$ described in Section 2 (see Remark 2.1 and the paragraph before). Since $\mathtt{NCW}^{f}$ is a full topological subcategory of $\mathtt{SC^{*}}^{\operatorname{op}}$, we are done. ∎ ###### Lemma 6.6. The functor $RY_{\infty}$ preserves finite colimits. ###### Proof. By [Lur1, Corollary 4.4.2.5], it is enough to prove that the functor preserves initial objects and pushout squares. 
Since $\mathtt{NCW}^{f}$ is a pointed category, the initial object of $\mathtt{NCW}^{f}$ is also the final object, and the first condition obviously holds. In both $\mathtt{NCW}^{f}_{\infty}$ and $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}$ pushouts can be calculated as homotopy pushouts in an appropriate model structure. Suppose we have a homotopy pushout diagram in $\mathtt{NCW}^{f}$ $\begin{array}[]{ccc}y_{0}&\to&y_{1}\\\ \downarrow&&\downarrow\\\ y_{2}&\to&y_{12}.\end{array}$ (3) We want to prove that for any $x\in\mathcal{M}_{s}$ the induced diagram of functors is a homotopy pushout in the stable model structure $\begin{array}[]{ccc}\operatorname{Map}_{\mathtt{NCW}^{f}}(x,-\wedge y_{0})&\to&\operatorname{Map}_{\mathtt{NCW}^{f}}(x,-\wedge y_{1})\\\ \downarrow&&\downarrow\\\ \operatorname{Map}_{\mathtt{NCW}^{f}}(x,-\wedge y_{2})&\to&\operatorname{Map}_{\mathtt{NCW}^{f}}(x,-\wedge y_{12}).\end{array}$ Since we are working in the stable model structure, a square is a homotopy pushout if and only if it is a homotopy pullback. A square of functors is a homotopy pullback in the stable model structure if the induced square of linearizations is a homotopy pullback. But the linearization of the functor $\operatorname{Map}_{\mathtt{NCW}^{f}}(x,-\wedge y)\colon\mathtt{CW}^{f}_{*}\to\operatorname{Top}$ evaluated at $K$ is the same as the linearization of the functor $\operatorname{Map}_{\mathtt{NCW}^{f}}(x,K\wedge-)\colon\mathtt{NCW}^{f}\to\operatorname{Top}$ evaluated at $y$. 
Indeed, the two linearizations are given by the equivalent formulas $\mathop{\operatorname{hocolim}}_{n\to\infty}\Omega^{n}\operatorname{Map}_{\mathtt{NCW}^{f}}(x,\Sigma^{n}K\wedge y)=\mathop{\operatorname{hocolim}}_{n\to\infty}\Omega^{n}\operatorname{Map}_{\mathtt{NCW}^{f}}(x,K\wedge\Sigma^{n}_{\mathtt{NCW}^{f}}y).$ We have been thinking of the functor $\operatorname{Map}_{\mathtt{NCW}^{f}}(x,K\wedge-)\colon\mathtt{NCW}^{f}\to\operatorname{Top}$ as a strict functor, but now let us think of it as a functor between $\infty$-categories by applying the $\infty$-localization. The $\infty$-category $\mathtt{NCW}^{f}$ has finite colimits and a final object. The conditions of [Lur1, Lemma 6.1.1.33] are satisfied, and therefore the linearization of this functor really is linear, i.e., takes homotopy pushout squares to homotopy pullback squares. Therefore applying the linearization to the square (3) yields a homotopy pullback square, which is what we wanted to prove. ∎ Since $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}$ is a stable $\infty$-category, we have, by the lemma above, that $RY_{\infty}$ extends canonically to a finite-colimit preserving functor $RY_{\infty}:\mathtt{SW}(\mathtt{NCW}^{f})\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}.$ This functor extends, in turn, to an all-small-colimit-preserving functor $RY_{\infty}\colon\mathtt{NSp}=\operatorname{Ind}(\mathtt{SW}(\mathtt{NCW}^{f}))\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}.$ (4) This functor takes an object $\Sigma^{\infty}_{\mathtt{NC}}M_{n}$ to the presheaf represented by $M_{n}$. ###### Theorem 6.7. The functor $RY_{\infty}\colon\mathtt{NSp}\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}$ is an equivalence of $\infty$-categories. ###### Proof. 
First let us prove that $RY_{\infty}$ is fully faithful, that is, that for all objects $x,y$ of $\mathtt{NSp}$ the map of spectral mapping functors $G^{\mathtt{NSp}}_{x,y}\to G^{P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}}_{RY_{\infty}(x),RY_{\infty}(y)}$ (5) is an equivalence. First, consider the case $x,y\in\Sigma^{\infty}\mathcal{M}_{s}$, i.e., $x=\Sigma^{\infty}_{\mathtt{NC}}M_{k},y=\Sigma^{\infty}_{\mathtt{NC}}M_{l}$ for some $k,l$. In this case $x,y$ are in the image of the finite suspension functor $\mathtt{NCW}^{f}\to\mathtt{SW}(\mathtt{NCW}^{f})$. The functor $\mathtt{SW}(\mathtt{NCW}^{f})\to\mathtt{NSp}$ is fully faithful, so it induces an equivalence $G^{\mathtt{SW}(\mathtt{NCW}^{f})}_{x,y}\xrightarrow{\simeq}G^{\mathtt{NSp}}_{x,y}.$ By Lemma 5.7, the map $G^{\mathtt{NCW}^{f}}_{M_{k},M_{l}}\to G^{\mathtt{SW}(\mathtt{NCW}^{f})}_{x,y}$ is the stabilization. The functor $RY:\mathtt{NCW}^{f}\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})$ when restricted to $\mathcal{M}_{s}$ is just the enriched Yoneda embedding of $\mathcal{M}_{s}$ $Y:\mathcal{M}_{s}\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}).$ Since the unit in $\mathtt{Sp^{M}}$ is cofibrant, $RY_{\infty}(\Sigma^{\infty}M_{k})=Y(M_{k})$ is cofibrant in the projective model structure on $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})$ (see [GM, Theorem 4.32]). 
The fibrant replacement in $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})$ is levelwise, so using the (strict) enriched Yoneda lemma we get $G^{P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}}_{RY_{\infty}(x),RY_{\infty}(y)}=\operatorname{Hom}_{P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}}(Y(M_{k}),Y(M_{l}))\simeq$ $\operatorname{Hom}_{P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})}(Y(M_{k}),Y(M_{l})^{f})\cong(Y(M_{l})^{f})(M_{k})\simeq Y(M_{l})(M_{k})^{f}=(G^{\mathtt{NCW}^{f}}_{M_{k},M_{l}})^{f}.$ But the map $G^{\mathtt{NCW}^{f}}_{M_{k},M_{l}}\to(G^{\mathtt{NCW}^{f}}_{M_{k},M_{l}})^{f}$ translates to the stabilization $G^{\mathtt{NCW}^{f}}_{M_{k},M_{l}}\to\mathcal{L}G^{\mathtt{NCW}^{f}}_{M_{k},M_{l}}$ under $\infty$-localization (see the proof of Theorem 5.13). Thus the map $G^{\mathtt{NCW}^{f}}_{M_{k},M_{l}}\to G^{P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}}_{RY_{\infty}(x),RY_{\infty}(y)}$ is also the stabilization. By the uniqueness of the stabilization map, we get that the map (5) is an equivalence in the case when $x,y$ are suspension spectra of matrix algebras. Next, let $x$ be a fixed suspension spectrum of a matrix algebra, but let $y$ vary. We may consider the functor $y\mapsto G^{\mathtt{NSp}}_{x,y}$ as a functor $\mathtt{NSp}\to\mathtt{Sp}$. This functor preserves all small colimits, because $x$ is compact in $\mathtt{NSp}$ and both $\mathtt{NSp}$ and $\mathtt{Sp}$ are stable. Similarly, the functor $y\mapsto G^{P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}}_{RY_{\infty}(x),RY_{\infty}(y)}$ is also a functor $\mathtt{NSp}\to\mathtt{Sp}$ that preserves small colimits. It follows that the category of objects $y$ for which the map (5) is an equivalence is closed under colimits and also desuspensions. Since this category contains $\mathcal{M}_{s}$, it is all of $\mathtt{NSp}$. 
Now fix $y$, and consider the functors $x\mapsto G^{\mathtt{NSp}}_{x,y}$ and $x\mapsto G^{P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}}_{RY_{\infty}(x),RY_{\infty}(y)}$ as contravariant functors from $\mathtt{NSp}$ to spectra. Since both functors take small colimits to limits, a similar argument shows that this map is an equivalence for all $x\in\mathtt{NSp}$. We have shown that the functor $RY_{\infty}\colon\mathtt{NSp}\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}$ is fully faithful and also preserves all small colimits, so it is a left adjoint. It follows that the image of $RY_{\infty}$ is closed under small colimits. Since the image contains the representable presheaves, $RY_{\infty}$ is essentially surjective. It follows that $RY_{\infty}$ is an equivalence of $\infty$-categories. ∎ Thus $RY_{\infty}$ is an explicit monoidal model for the inverse of the equivalence given by Theorem 6.2. This also allows us to show that $\mathcal{M}_{s}$ is indeed a strictification of $\mathcal{M}$ from Definition 6.1. ###### Theorem 6.8. We have a natural equivalence $(\mathcal{M}_{s})_{\infty}\simeq\mathcal{M}$ between the enriched $\infty$-localization of $\mathcal{M}_{s}$ and $\mathcal{M}$. ###### Proof. By definition, $\mathcal{M}_{s}$ is an $\mathtt{Sp^{M}}$-enriched category, whose set of objects is the set of natural numbers $\mathbb{N}$. Let $\mathtt{Sp^{M}}\mathbb{N}$-$\mathtt{Cat}$ be the category of all $\mathtt{Sp^{M}}$-enriched categories, whose set of objects is $\mathbb{N}$, and whose morphisms are functors that are the identity on objects. This category has a Quillen model structure, where fibrations and weak equivalences are defined levelwise, and where cofibrant objects are levelwise cofibrant [ScSh2, Proposition 6.3]. Thus, $\mathcal{M}_{s}$ is an object in $\mathtt{Sp^{M}}\mathbb{N}$-$\mathtt{Cat}$. Let $\mathcal{M}_{s}\to\mathcal{M}_{s}^{f}$ be a fibrant replacement of $\mathcal{M}_{s}$ in $\mathtt{Sp^{M}}\mathbb{N}$-$\mathtt{Cat}$. 
We get a Quillen adjunction $\operatorname{LKan}_{i}:P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})\rightleftarrows P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}^{f}):i^{*}.$ It follows from [GM, Proposition 2.4] that this adjunction is a Quillen equivalence. To give a little more detail, it follows from the general result of Guillou and May that it is enough to show that for every cofibrant object $M$ of $\mathtt{Sp^{M}}$ and every two objects $x,y$ of $\mathcal{M}_{s}$, the following induced map is an equivalence $M\wedge\operatorname{Hom}_{\mathcal{M}_{s}}(x,y)\to M\wedge\operatorname{Hom}_{\mathcal{M}_{s}^{f}}(x,y),$ where $\operatorname{Hom}(-,-)$ denotes the spectral mapping object. The map $\operatorname{Hom}_{\mathcal{M}_{s}}(x,y)\to\operatorname{Hom}_{\mathcal{M}_{s}^{f}}(x,y)$ is an equivalence by the definition of $\mathcal{M}_{s}^{f}$. It follows by [MMSS, Proposition 12.3] that the induced map is an equivalence for all cofibrant $M$. Applying $\infty$-localization we obtain an equivalence $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}\xrightarrow{\sim}P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}^{f})_{\infty}.$ Now, $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}^{f})$ is an $\mathtt{Sp^{M}}$-model category, and it is Quillen equivalent to a combinatorial model category. The enriched Yoneda embedding $Y:\mathcal{M}_{s}^{f}\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}^{f})$ is a fully faithful $\mathtt{Sp^{M}}$-enriched functor and it clearly lands in the fibrant cofibrant objects. Thus, by Corollary 3.8, the $\mathtt{Sp}$-functor $Y_{\infty}:(\mathcal{M}_{s}^{f})_{\infty}\to P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}^{f})_{\infty}$ is $\mathtt{Sp}$-fully faithful. Note that this claim is made with respect to the action of $\mathtt{Sp}$ on $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}^{f})_{\infty}$ induced by the structure of $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}^{f})$ as an $\mathtt{Sp^{M}}$-model category. 
Since $\mathtt{Sp}$ is a mode in the sense of [CSY, Section 5], this action coincides with the canonical action of $\mathtt{Sp}$ on $P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}^{f})_{\infty}$ as a presentable stable $\infty$-category. Now, using Theorem 6.7, we have the following composition $(\mathcal{M}_{s}^{f})_{\infty}\xrightarrow{Y_{\infty}}P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s}^{f})_{\infty}\xrightarrow{\sim}P_{\mathtt{Sp^{M}}}(\mathcal{M}_{s})_{\infty}\xrightarrow{\sim}\mathtt{NSp}.$ We get a fully faithful $\mathtt{Sp}$-enriched functor $(\mathcal{M}_{s}^{f})_{\infty}\to\mathtt{NSp}$, with essential image $\mathcal{M}$, so that $(\mathcal{M}_{s}^{f})_{\infty}\simeq\mathcal{M}$. Since the fibrant replacement $\mathcal{M}_{s}\to\mathcal{M}_{s}^{f}$ is a levelwise weak equivalence, it induces an equivalence on enriched $\infty$-localizations, so $(\mathcal{M}_{s})_{\infty}\simeq(\mathcal{M}_{s}^{f})_{\infty}\simeq\mathcal{M}$. ∎ ## References * [AG] Andersen K. K. S., Grodal J. A Baues fibration category structure on Banach and $C^{*}$-algebras, preprint, available at http://www.math.ku.dk/~jg/papers/fibcat.pdf, 1997. * [ABS2] Arone G., Barnea I., Schlank T. M. _Suspension spectra of matrix algebras, the rank filtration, and rational noncommutative CW-spectra_ , arXiv:2101.09778. * [BHH] Barnea I., Harpaz Y., Horel G. Pro-categories in homotopy theory, Algebraic and Geometric Topology 17.1, 2017, p. 567–643. * [BJM] Barnea I., Joachim M., Mahanta S. Model structure on projective systems of $C^{*}$-algebras and bivariant homology theories, New York Journal of Mathematics 23, 2017, p. 383–439. * [BM] Blom T., Moerdijk I. _Simplicial model structures on pro-categories_ , arXiv:2009.07539. * [CSY] Carmeli S., Schlank T. M., Yanovski L. _Ambidexterity and Height_ , arXiv:2007.13089. * [Die] Diep D. N. Category of noncommutative CW-complexes, Vietnam J. Math. 38.3, 2010, p. 363–371. * [ELP] Eilers S., Loring T. A., Pedersen G. K. Stability of anticommutation relations: An application of noncommutative CW-complexes, J. Reine Angew. Math. 499 (1998), 101–143. * [GH] Gepner D., Haugseng R. Enriched infinity categories via non-symmetric operads, Adv. Math. 36, 2015, p. 575–716. * [Goo] Goodwillie T. _Calculus. I. 
The first derivative of pseudoisotopy theory_ , K-Theory 4 (1990), no. 1, 1–27. * [GM] Guillou B., May J.P. _Enriched model categories and presheaf categories_ , New York J. Math. 26 (2020), 37–91. * [Hin1] Hinich V. _Dwyer-Kan localization revisited_ , Homology, Homotopy and Applications 18.1, 2016, p. 27–48. * [Hin2] Hinich V. _Yoneda lemma for enriched infinity categories_ , Adv. Math. 367 (2020), p. 107–129. * [Hin3] Hinich V. _Colimits in enriched $\infty$-categories and Day convolution_, arXiv:2101.09538. * [Hin4] Hinich V. _Enriched Yoneda Lemma_ , arXiv:1511.00857. * [Hor] Horel G. _Operads, modules and topological field theories_ , arXiv:1405.5409. * [Lur1] Lurie J. Higher Topos Theory, Annals of Mathematics Studies, 170. Princeton University Press, Princeton, NJ, 2009. * [Lur2] Lurie J. Higher Algebra, preprint, available at http://www.math.harvard.edu/~lurie/papers/HA.pdf. * [Lyd] Lydakis, M. Simplicial functors and stable homotopy theory, unpublished preprint, available at https://hopf.math.purdue.edu/Lydakis/s_functors.pdf or http://users.math.uoc.gr/~mlydakis/papers/sf.pdf. * [Mah1] Mahanta S. Noncommutative stable homotopy and stable infinity categories, Journal of Topology and Analysis 7.1 2015, p. 135–165. * [Mah2] Mahanta S. Symmetric monoidal noncommutative spectra, strongly self-absorbing $C^{*}$-algebras, and bivariant homology, J. Noncommut. Geom. 10.4 2016, p. 1269–1301. * [MMSS] Mandell, M., May, P., Schwede, S., Shipley, B. Model categories of diagram spectra, Proc. London Math. Soc. (3) 82 (2001), no. 2, 441–512. * [Maz] Mazel-Gee A. _Quillen adjunctions induce adjunctions of quasicategories_ , New York Journal of Mathematics 22, 2016, p. 57–93. * [Ost] Østvær, P., Homotopy theory of $C^{*}$-algebras, Frontiers in Mathematics. Birkhäuser/Springer Basel AG, Basel, 2010. * [Qui] Quillen D. G. Homotopical Algebra, Lecture Notes in Mathematics, Vol. 43, Springer-Verlag, Berlin, 1967. * [Ped] Pedersen G. K. 
Pullback and pushout constructions in $C^{*}$-algebra theory, Journal of Functional Analysis 167.2, 1999, p. 243–344. * [ScSh1] Schwede S., Shipley B., Stable model categories are categories of modules, Topology 42, 2003, p. 103–153. * [ScSh2] Schwede S., Shipley B., Equivalences of monoidal model categories, Algebraic and Geometric Topology 3, 2003, p. 287–334. * [Str] Strøm A. The homotopy category is a homotopy category, Archiv der Mathematik 23 (1972), 435–441. * [Uuy] Uuye O. Homotopical algebra for $C^{*}$-algebras, Journal of Noncommutative Geometry 7.4 2013, p. 981–1006.
# Suspension spectra of matrix algebras, the rank filtration, and rational noncommutative CW-spectra Gregory Arone Stockholm University <EMAIL_ADDRESS>Supported in part by the Swedish Research Council, grant number 2016-05440 Ilan Barnea Haifa University <EMAIL_ADDRESS>Supported by ISF 786/19 Tomer M. Schlank Hebrew University <EMAIL_ADDRESS>Supported by ISF 1588/18 and BSF 2018389 ###### Abstract In a companion paper [ABS1] we introduced the stable $\infty$-category of noncommutative CW-spectra, which we denoted $\mathtt{NSp}$. Let $\mathcal{M}$ denote the full spectrally enriched subcategory of $\mathtt{NSp}$ whose objects are the non-commutative suspension spectra of matrix algebras. In [ABS1] we proved that $\mathtt{NSp}$ is equivalent to the $\infty$-category of spectral presheaves on $\mathcal{M}$. In this paper we investigate the structure of $\mathcal{M}$, and derive some consequences regarding the structure of $\mathtt{NSp}$. To begin with, we introduce a rank filtration of $\mathcal{M}$. We show that the mapping spectra of $\mathcal{M}$ map naturally to the connective $K$-theory spectrum $ku$, and that the rank filtration of $\mathcal{M}$ is a lift of the classical rank filtration of $ku$. We describe the subquotients of the rank filtration in terms of complexes of direct-sum decompositions which also arose in the study of $K$-theory and of Weiss's orthogonal calculus. We prove that the rank filtration stabilizes rationally after the first stage. Using this we give an explicit model of the rationalization of $\mathtt{NSp}$ as presheaves of rational spectra on the category of finite-dimensional Hilbert spaces and unitary transformations up to scaling. Our results also have consequences for the $p$-localization and the chromatic localization of $\mathcal{M}$. ###### Contents 1. Introduction 2. The functors $G_{k,l}$ and their stabilization 3. The rank filtration 4. The restriction of $G_{k,l}$ to finite sets 5. 
$G_{k,l}$ is a cofibrant $\Gamma$-space 6. Connection with topological $K$-theory 7. Subquotients of the rank filtration 8. Connection with the complex of direct-sum decompositions 9. Some calculations of $\mathbb{S}^{k,l}$ 10. On the rationalization and $p$-localization of $\mathcal{M}$ ## 1 Introduction In our previous paper [ABS1] we introduced the $\infty$-category of noncommutative CW-spectra, which we denoted $\mathtt{NSp}$. This is the stabilization of the $\infty$-category of noncommutative CW-complexes, denoted $\mathtt{NCW}$, which in turn is the ind-completion of the $\infty$-category of finite noncommutative CW-complexes, denoted $\mathtt{NCW}^{f}$. The latter is defined as the opposite of the topological nerve of the topological category whose objects are the $C^{*}$-algebras which are noncommutative CW-complexes in the sense of [ELP] and whose hom-spaces are given by taking the topology of pointwise norm convergence on the sets of $*$-homomorphisms. The main result of [ABS1] says that $\mathtt{NSp}$ is equivalent to the $\infty$-category of spectral presheaves over a small spectrally enriched $\infty$-category $\mathcal{M}$. The spectral $\infty$-category $\mathcal{M}$ is defined to be the full spectral subcategory of $\mathtt{NSp}$, whose objects are noncommutative suspension spectra of matrix algebras. In this paper we analyze the category $\mathcal{M}$ in considerable detail. We introduce a rank filtration of $\mathcal{M}$, describe the subquotients of the rank filtration and use this to give an explicit model for the rationalization of $\mathtt{NSp}$. ###### Remark 1.1. In [ABS1] we made extensive use of Hinich's theory of enriched $\infty$-categories (see [Hin2, Hin3]). In this paper we also use this theory and terminology on a few occasions. The interested reader is also referred to [ABS1, Section 3] for a summary of the parts of this theory relevant to us. 
Given an integer $k\geq 1$, let $M_{k}$ be the $C^{*}$-algebra of $k\times k$ matrices over $\mathbb{C}$. We have the stabilization functor $\Sigma^{\infty}_{\mathtt{NC}}:\mathtt{NCW}\to\mathtt{NSp}$ and the set of objects of $\mathcal{M}$ is $\\{\Sigma^{\infty}_{\mathtt{NC}}M_{k}|\,k\geq 1\\}$. Thus the objects of $\mathcal{M}$ are in one-to-one correspondence with the positive integers. Given two integers $k,l$, we denote the spectral mapping object in $\mathcal{M}$ by $\mathbb{S}^{k,l}:=\operatorname{Hom}_{\mathcal{M}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})\simeq\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})\in\mathtt{Sp},$ where $\mathtt{Sp}$ is the $\infty$-category of (ordinary) spectra. Our goal in this paper is to analyze the category $\mathcal{M}$. For this, it will be convenient to use a model category that models the $\infty$-category of spectra $\mathtt{Sp}$. Analyzing $\mathcal{M}$ means, firstly, that we want to describe, for each $k$ and $l$, the homotopy type of $\mathbb{S}^{k,l}$. For this purpose it is adequate to use the simple model for spectra, as a sequence of pointed spaces with structure maps. But we also want to model the composition maps, $\mathbb{S}^{k,l}\wedge\mathbb{S}^{j,k}\to\mathbb{S}^{j,l}.$ (1) To do this properly, we need to use a more sophisticated model for spectra, which incorporates a smash product. To be more explicit, in [ABS1] we defined a "strict" model of $\mathcal{M}$. That is, we defined a category $\mathcal{M}_{s}$ strictly enriched in a certain monoidal model category of spectra $\mathtt{Sp^{M}}$, such that the enriched $\infty$-localization of $\mathcal{M}_{s}$ is equivalent to $\mathcal{M}$. The model category $\mathtt{Sp^{M}}$ is the category of pointed continuous functors from pointed finite CW-complexes to pointed topological spaces, endowed with the stable model structure. 
(By a topological space in this paper we always mean a compactly generated weak Hausdorff space.) Day convolution turns $\mathtt{Sp^{M}}$ into a symmetric monoidal model category. See [Lyd2, MMSS, Lyd1] for more details on this model structure. Recall that any relative category, that is, a pair $(\mathcal{C},\mathcal{W})$ consisting of a category $\mathcal{C}$ and a subcategory $\mathcal{W}\subseteq\mathcal{C}$, has a canonically associated $\infty$-category $\mathcal{C}_{\infty}$, obtained by formally inverting the morphisms in $\mathcal{W}$ in the $\infty$-categorical sense. There is also a canonical localization functor $\mathcal{C}\to\mathcal{C}_{\infty}$ satisfying a universal property. We refer the reader to [Hin1] for a thorough account, and also to the discussion in [BHH, Section 2.2]. We refer to $\mathcal{C}_{\infty}$ as the $\infty$-localization of $\mathcal{C}$ (with respect to $\mathcal{W}$). If $\mathcal{C}$ is a model category or a (co)fibration category, we always take $\mathcal{W}$ to be the set of weak equivalences in $\mathcal{C}$. We have a canonical equivalence of symmetric monoidal $\infty$-categories $\mathtt{Sp^{M}}_{\infty}\simeq\mathtt{Sp}.$ We will identify the two $\infty$-categories above through this equivalence. Similarly, if $\operatorname{Top}$ is the category of pointed topological spaces endowed with the Quillen model structure [Qui], then we have a canonical equivalence $\operatorname{Top}_{\infty}\simeq\mathcal{S}_{*},$ where $\mathcal{S}_{*}$ is the $\infty$-category of pointed spaces. Again, we identify the two $\infty$-categories above through this equivalence.
We denote the localization functor from $\mathtt{Sp^{M}}$ to $\mathtt{Sp^{M}}_{\infty}=\mathtt{Sp}$ by $\partial_{1}$: $\partial_{1}:\mathtt{Sp^{M}}\to\mathtt{Sp}.$ If $G\in\mathtt{Sp^{M}}$ is a pointed continuous functor from pointed finite CW-complexes to pointed topological spaces, then $\partial_{1}G$ is the spectrum corresponding to the sequence of spaces $\\{G(S^{0}),G(S^{1}),\ldots\\}$ (where we identify a pointed topological space with its image in $\operatorname{Top}_{\infty}=\mathcal{S}_{*}$). In other words, we can write $\partial_{1}G\simeq{\mathop{\operatorname{hocolim}}}_{n}\Sigma^{-n}\Sigma^{\infty}G(S^{n}),$ where by $\mathop{\operatorname{hocolim}}$ here we mean the $\infty$-colimit in $\mathtt{Sp}$. This is known as the stabilization, or the first derivative, of the functor $G$. The way we defined $\mathcal{M}_{s}$ in [ABS1] is by letting, for every $k,l$, $\operatorname{Hom}_{\mathcal{M}_{s}}(M_{k},M_{l}):=G_{k,l}\in\mathtt{Sp^{M}},$ where $G_{k,l}(X):=\mathtt{SC^{*}}(\mathrm{C}_{0}(X,M_{l}),M_{k})$ (see also Definition 2.2). Here $\mathrm{C}_{0}(X,M_{l})$ is the space of pointed maps from $X$ to $M_{l}$, considered as a $C^{*}$-algebra, and $\mathtt{SC^{*}}(-,-)$ denotes the space of $C^{*}$-algebra maps with the topology of pointwise norm convergence. Since $\mathcal{M}_{s}$ is a model for $\mathcal{M}$, we have that $G_{k,l}$ is a model for $\mathbb{S}^{k,l}$; in other words, the stabilization of $G_{k,l}$ is $\mathbb{S}^{k,l}$: $\mathbb{S}^{k,l}\simeq\partial_{1}G_{k,l}\simeq{\mathop{\operatorname{hocolim}}}_{n}\Sigma^{-n}\Sigma^{\infty}G_{k,l}(S^{n}).$

###### Remark 1.2. In this paper we use both $\mathcal{M}$ and $\mathcal{M}_{s}$. For convenience of notation we denote both categories by $\mathcal{M}$, trusting that it is clear from the context which is meant. We proceed to investigate the homotopy type of $\mathbb{S}^{k,l}$.
It is not hard to show that if $l>k$ then $\mathbb{S}^{k,l}$ is contractible (see corollary 5.8), so we generally assume that $k\geq l$. To analyze $\mathbb{S}^{k,l}$ further, we introduce a natural filtration of $\mathcal{M}$ in Section 3, which we call the rank filtration. More precisely, for each $k$ and $l$, we define a sequence of subfunctors $G_{k,l,i}\in\mathtt{Sp^{M}}$ $G_{k,l,1}\subset G_{k,l,2}\subset\cdots\subset G_{k,l,\lfloor\frac{k}{l}\rfloor}=G_{k,l,\lfloor\frac{k}{l}\rfloor+1}=\cdots=G_{k,l}.$ Upon stabilization, we obtain a sequence of spectra $\mathbb{S}^{k,l,1}\hookrightarrow\mathbb{S}^{k,l,2}\hookrightarrow\cdots\hookrightarrow\mathbb{S}^{k,l,\lfloor\frac{k}{l}\rfloor}=\mathbb{S}^{k,l}.$ We think of this sequence as defining a filtration of $\mathbb{S}^{k,l}$. This filtration is multiplicatively compatible with composition. This means that there are maps, compatible with (1) $\mathbb{S}^{k,l,m}\wedge\mathbb{S}^{j,k,n}\to\mathbb{S}^{j,l,mn}.$ One can say that the category $\mathcal{M}$ is filtered by the multiplicative monoid of positive integers. The functors $G_{k,l,m}$ are, by definition, functors from the category of pointed finite CW-complexes to pointed topological spaces. But it turns out that they are determined by their restriction to the category of pointed finite sets. Specifically, we show in Theorem 5.7 that $G_{k,l,m}$ is equivalent to both the strict and the derived left Kan extension of its restriction to pointed finite sets. In other words, $G_{k,l,m}$ are $\Gamma$-spaces. In Section 4 we describe explicitly the restriction of $G_{k,l}$ to pointed finite sets (Proposition 4.5). In Section 5 we show that the $\Gamma$-spaces $G_{k,l,m}$ are in fact cofibrant in a generalized Reedy model structure considered by Bousfield-Friedlander [BF], Lydakis [Lyd1] and Berger-Moerdijk [BM]. Proposition 4.5 indicates that $G_{k,l}$ is similar to the $\Gamma$-space that models the K-theory spectrum $ku$. This is not a coincidence. 
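Concretely, when restricted to a pointed finite set $[t]$, the space $G_{k,l}([t])$ is a space of $*$-homomorphisms $M_{l}^{t}\to M_{k}$, and up to unitary conjugation such a homomorphism is determined by a tuple of multiplicities. The following sketch is not from the paper; it only records this standard bookkeeping (with function names of our own choosing), and the cap $\lfloor k/l\rfloor$ is the same bound at which the rank filtration stabilizes:

```python
from itertools import product

def multiplicity_types(t, k, l):
    # A *-homomorphism M_l^t -> M_k is determined, up to unitary conjugation,
    # by multiplicities (e_1, ..., e_t) with e_i >= 0 and l*(e_1+...+e_t) <= k;
    # the sum e_1 + ... + e_t is the rank of the homomorphism.
    bound = k // l  # ranks never exceed floor(k/l)
    return [e for e in product(range(bound + 1), repeat=t) if sum(e) <= bound]

# For t = 1, l = 1: homomorphisms C -> M_k correspond to projections,
# classified up to conjugation by their rank 0, 1, ..., k.
assert len(multiplicity_types(1, 3, 1)) == 4
```

The zero tuple corresponds to the zero homomorphism, the basepoint of $G_{k,l}([t])$.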
The inclusion of matrix algebras $M_{k}\to M_{k+1}$ that sends a matrix $a$ to $a\oplus 0$ induces a natural transformation $G_{k,l}\to G_{k+1,l}$, which in turn induces a map of spectra $\mathbb{S}^{k,l}\to\mathbb{S}^{k+1,l}$. We prove in Section 6 that for each fixed $l$, $\mathop{\operatorname{hocolim}}_{k\to\infty}\mathbb{S}^{k,l}\simeq ku$ (thus all the mapping spectra $\mathbb{S}^{k,l}$ are naturally spectra over $ku$). The case $l=1$ of this observation goes back to Segal [Se2] and was exploited extensively by Dadarlat with various collaborators (for example, see [DM]). Furthermore, it turns out that the functor $l\mapsto\mathop{\operatorname{hocolim}}_{k\to\infty}\mathbb{S}^{k,l}$ takes values in module spectra over $ku$. This allows for a natural way to define bivariant connective $K$-theory on $\mathtt{NSp}$, which generalizes the definition of Dadarlat and McClure [DM] to the noncommutative setting. We also observe in this section that the rank filtration of $\mathcal{M}$ is a lift of the classical rank filtration of $ku$. Later in the paper we prove that the map $\mathbb{S}^{k,l}\to ku$ induces an isomorphism on $\pi_{0}$ (Lemma 9.4). The results of Sections 4 and 5 are useful for the homotopical analysis of $\mathbb{S}^{k,l}$ and of the rank filtration. One of our main results is an explicit description of the associated graded of the rank filtration. Thus we describe, for each $k,l,m$, the homotopy cofiber spectrum $\mathbb{S}^{k,l}_{m}:=\mathbb{S}^{k,l,m}/\mathbb{S}^{k,l,m-1}$, as well as the induced maps $\mathbb{S}^{k,l}_{m}\wedge\mathbb{S}^{j,k}_{n}\to\mathbb{S}^{j,l}_{mn}.$ Our description features certain $U(m)$-complexes $\mathcal{L}_{m}^{\diamond}$. Roughly speaking, $\mathcal{L}_{m}^{\diamond}$ is the space of direct-sum decompositions of $\mathbb{C}^{m}$ (see Definition 8.6). These complexes have some remarkable homotopical properties, which we will review below.
But first let us state the result describing the rank filtration in terms of $\mathcal{L}_{m}^{\diamond}$. When $U$ and $V$ are unitary vector spaces, let $\operatorname{Inj}(U,V)$ denote the space of (necessarily injective) linear transformations of $U$ into $V$ that preserve the unitary product. Note for future reference that there is a homeomorphism $\operatorname{Inj}(\mathbb{C}^{m},\mathbb{C}^{n})\cong U(n)/U(n-m)$. The following is Theorem 8.8 in the text:

###### Theorem 1.3. There is an equivalence of spectra $\mathbb{S}^{k,l}_{m}\simeq\Sigma^{\infty}\mathcal{L}_{m}^{\diamond}\wedge_{U(m)}\operatorname{Inj}(\mathbb{C}^{lm},\mathbb{C}^{k})_{+},$ where $U(m)$ acts through the identification $\mathbb{C}^{lm}\cong\mathbb{C}^{m}\otimes\mathbb{C}^{l}$. The composition map $\mathbb{S}^{k,l}_{n}\wedge\mathbb{S}^{j,k}_{m}\to\mathbb{S}^{j,l}_{mn}$ is determined by the map $\mathcal{L}_{n}^{\diamond}\wedge\mathcal{L}_{m}^{\diamond}\to\mathcal{L}_{mn}^{\diamond}$ defined by tensor product of decompositions, and the obvious composition map $\operatorname{Inj}(\mathbb{C}^{ln},\mathbb{C}^{k})\times\operatorname{Inj}(\mathbb{C}^{km},\mathbb{C}^{j})\to\operatorname{Inj}(\mathbb{C}^{lmn},\mathbb{C}^{km})\times\operatorname{Inj}(\mathbb{C}^{km},\mathbb{C}^{j})\to\operatorname{Inj}(\mathbb{C}^{lmn},\mathbb{C}^{j}).$

The complexes $\mathcal{L}^{\diamond}_{m}$ were first introduced in [Ar1], and were studied in detail in [BJL+] and [AL3]. They play a role in describing the subquotients of the rank filtration of $K$-theory [AL1, AL2]. Therefore it is perhaps not surprising that they play a similar role in the rank filtration of $\mathcal{M}$, given the connection between $\mathcal{M}$ and $ku$. The next proposition lists some relevant facts about the complexes $\mathcal{L}_{m}^{\diamond}$ (Proposition 9.1 in the paper).

###### Proposition 1.4.
1. $\mathcal{L}^{\diamond}_{1}=S^{0}$.
2. The complex $\mathcal{L}^{\diamond}_{m}$ is rationally contractible for all $m>1$.
3. The complex $\mathcal{L}^{\diamond}_{m}$ is contractible unless $m$ is a prime power.
4. If $m=p^{k}$ where $p$ is a prime and $k>0$, then $\mathcal{L}^{\diamond}_{p^{k}}$ is $p$-local.
5. The complex $\mathcal{L}^{\diamond}_{p^{k}}$ has chromatic type $k$.

Here are some consequences of Theorem 1.3 and Proposition 1.4. To begin with, we have a simple description of the endomorphisms in $\mathcal{M}$. The endomorphism spectrum of $\Sigma^{\infty}_{\mathtt{NC}}M_{k}$ is the group ring spectrum of the projective unitary group $PU(k)$.

###### Corollary 1.5. $\mathbb{S}^{k,k}\simeq\Sigma^{\infty}PU(k)_{+}.$

But our main application of Theorem 1.3 and Proposition 1.4(2) is to give a simplified description of the rational homotopy type of $\mathbb{S}^{k,l}$, and consequently of the rationalization of $\mathtt{NSp}$. Recall that the composition maps $\mathbb{S}^{j,k}\wedge\mathbb{S}^{k,l}\to\mathbb{S}^{j,l}$ restrict to maps of the form $\mathbb{S}^{j,k,1}\wedge\mathbb{S}^{k,l,1}\to\mathbb{S}^{j,l,1}$. It follows that the spectra $\mathbb{S}^{k,l,1}$ assemble to a spectral category, which we denote $\mathcal{M}^{1}$, which has the same objects as $\mathcal{M}$ and is equipped with a functor $\mathcal{M}^{1}\to\mathcal{M}$ that is the identity on objects. Informally speaking, $\mathcal{M}^{1}$ is the first stage of the rank filtration of $\mathcal{M}$. It follows from Theorem 1.3 that there is an equivalence $\mathbb{S}^{k,l,1}=\mathbb{S}^{k,l}_{1}\simeq\Sigma^{\infty}{\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{k})/_{U(1)}}_{+}.$ It is worth noting that $\mathbb{S}^{k,l,1}$ is a suspension spectrum. Let $\mathbb{P}\operatorname{Inj}$ be the topologically enriched symmetric monoidal category of finite positive-dimensional Hilbert spaces and embeddings up to scalar.
That is, up to isomorphism the objects of $\mathbb{P}\operatorname{Inj}$ are given by $\mathbb{C}^{k}$ for $k\geq 1$, and $\mathbb{P}\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{k})=\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{k})/_{U(1)}=U(k)/(U(1)\times U(k-l)).$ $\mathbb{P}\operatorname{Inj}$ is a symmetric monoidal category, with the monoidal structure given by the tensor product. Since the functor $\Sigma^{\infty}_{+}$, from topological spaces to our model of spectra $\mathtt{Sp^{M}}$, is symmetric monoidal, we can define a category enriched in $\mathtt{Sp^{M}}$, which we denote $\mathbb{P}\operatorname{Inj}^{\mathtt{Sp}}$, by applying $\Sigma^{\infty}_{+}$ to the mapping spaces of $\mathbb{P}\operatorname{Inj}$. We show in Section 5.1 that $\mathcal{M}^{1}\simeq(\mathbb{P}\operatorname{Inj}^{\mathtt{Sp}})^{\operatorname{op}}.$ The following is an easy consequence of Proposition 1.4(2):

###### Corollary 1.6. The natural map $\Sigma^{\infty}{\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{k})/_{U(1)}}_{+}\simeq\mathbb{S}^{k,l,1}\xrightarrow{\simeq_{\mathbb{Q}}}\mathbb{S}^{k,l}$ is a rational homotopy equivalence.

###### Remark 1.7. If one lets $k$ go to $\infty$ in Corollary 1.6, one obtains the classical fact that the canonical map $\Sigma^{\infty}\mathbb{C}P^{\infty}_{+}\to ku$ is a rational equivalence. So Corollary 1.6 can be thought of as a lift of this fact.

Corollary 1.6 says that the functor $\mathcal{M}^{1}\to\mathcal{M}$ is a rational equivalence of spectral categories. Using this, we can give a rather explicit description of the rationalization of $\mathtt{NSp}$. We discuss the general construction of rational localization and $p$-localization of a stable, monoidal $\infty$-category in Section 10. Let $\mathtt{NSp}_{\mathbb{Q}}$ denote the rational localization of the $\infty$-category of noncommutative spectra $\mathtt{NSp}$, and let $\mathtt{Sp}_{\mathbb{Q}}$ denote the rational localization of the usual $\infty$-category of spectra $\mathtt{Sp}$.
It is well known that $\mathtt{Sp}_{\mathbb{Q}}$ is a symmetric monoidal presentable $\infty$-category, and the rationalization functor $L_{\mathbb{Q}}\colon\mathtt{Sp}\to\mathtt{Sp}_{\mathbb{Q}}$ is symmetric monoidal. Let $\mathbb{P}\operatorname{Inj}_{\infty}$ denote the topological nerve of the topological category $\mathbb{P}\operatorname{Inj}$ defined above. Applying the symmetric monoidal functor $L_{\mathbb{Q}}\circ\Sigma^{\infty}_{+}:\mathcal{S}\to\mathtt{Sp}_{\mathbb{Q}}$ to the mapping spaces of $\mathbb{P}\operatorname{Inj}_{\infty}$, we obtain an $\infty$-category enriched in $\mathtt{Sp}_{\mathbb{Q}}$ which we denote by $\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}}$. Let $P_{\mathtt{Sp}_{\mathbb{Q}}}((\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}})^{\operatorname{op}})$ denote the $\infty$-category of $\mathtt{Sp}_{\mathbb{Q}}$-enriched functors from $\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}}$ to $\mathtt{Sp}_{\mathbb{Q}}$. The following theorem summarizes the results of Section 10 about $\mathtt{NSp}_{\mathbb{Q}}$:

###### Theorem 1.8 (Theorem 10.5). There are equivalences of symmetric monoidal $\infty$-categories $\mathtt{NSp}_{\mathbb{Q}}\simeq P_{\mathtt{Sp}_{\mathbb{Q}}}((\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}})^{\operatorname{op}})\simeq\mathrm{Fun}(\mathbb{P}\operatorname{Inj}_{\infty},\mathtt{Sp}_{\mathbb{Q}}).$

###### Remark 1.9. Note that the expression on the right in Theorem 1.8 does not use enriched $\infty$-categories. It is the usual $\infty$-category of functors from $\mathbb{P}\operatorname{Inj}_{\infty}$ to $\mathtt{Sp}_{\mathbb{Q}}$. Note also that the mapping spaces in $\mathbb{P}\operatorname{Inj}$ are all finite connected CW-complexes (manifolds, even). It is natural to wonder if one can give a more direct algebraic model of $\mathtt{NSp}_{\mathbb{Q}}$ as a dg-category.
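To illustrate the remark that the mapping spaces of $\mathbb{P}\operatorname{Inj}$ are manifolds, here is a quick dimension count. This sketch is ours, not the paper's; it uses only the standard fact that $U(n)$ has real dimension $n^{2}$, and the function names are our own:

```python
def dim_U(n):
    # the unitary group U(n) is a manifold of real dimension n^2
    return n * n

def dim_PInj(k, l):
    # PInj(C^l, C^k) = U(k) / (U(1) x U(k - l)) for k >= l >= 1
    assert k >= l >= 1
    return dim_U(k) - dim_U(1) - dim_U(k - l)

# l = k recovers PU(k) = U(k)/U(1), of dimension k^2 - 1, consistent with
# the description of S^{k,k} as the suspension spectrum of PU(k)_+.
assert dim_PInj(3, 3) == 8

# l = 1 gives CP^{k-1} = U(k)/(U(1) x U(k-1)), of real dimension 2(k - 1).
assert dim_PInj(4, 1) == 6
```

In general the formula simplifies to $2kl-l^{2}-1$, which is positive for all $k\geq l\geq 1$ except $k=l=1$, matching the fact that $\mathbb{P}\operatorname{Inj}(\mathbb{C},\mathbb{C})$ is a point.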
#### $p$-local and chromatic picture

Now instead of rationalizing, suppose we fix a prime $p$ and localize everything at $p$. One can obtain further information about the $p$-localization of $\mathcal{M}$. It follows from Proposition 1.4, parts (3) and (4), that the filtration is $p$-locally constant except at powers of $p$. Therefore it is natural to regrade the filtration of $\mathbb{S}^{k,l}$ as follows: $\mathbb{S}^{k,l,1}\hookrightarrow\mathbb{S}^{k,l,p}\hookrightarrow\mathbb{S}^{k,l,p^{2}}\hookrightarrow\cdots\hookrightarrow\mathbb{S}^{k,l,p^{i}}\hookrightarrow\cdots$ With this grading, the $p$-localization of $\mathcal{M}$ is a filtered category in the usual sense, namely that composition adds degrees. Furthermore, we have

###### Corollary 1.10. Fix a prime $p$ and localize everything at $p$. The map $\mathbb{S}^{k,l,p^{n}}\to\mathbb{S}^{k,l}$ induces an isomorphism on Morava $K(i)$-theory for $i\leq n$.

The last corollary may have consequences for ``noncommutative chromatic homotopy theory'', but we will not pursue them here.

#### Section by section outline of the paper

In Section 2 we set the stage by recalling some relevant definitions from [ABS1]. We introduce functors $G_{k,l}$, where $k,l$ are positive integers. The functors $G_{k,l}$ encode all the information about morphisms in $\mathcal{M}$. More precisely, the stabilization of $G_{k,l}$ is the spectral mapping object from $k$ to $l$ in $\mathcal{M}$. In Section 3 we introduce a natural filtration of the functors $G_{k,l}$, which we call the rank filtration. In Section 4 we give an explicit description of the restriction of $G_{k,l}$ to pointed finite sets. In Section 5 we establish various properties of the restriction of $G_{k,l}$ to finite sets. Most importantly, we show that the functor $G_{k,l}$ is a $\Gamma$-space, in the sense that it is determined by its values on finite sets, and we also observe that the restriction of $G_{k,l}$ to finite sets is cofibrant in the Reedy model structure on $\Gamma$-spaces.
In Section 6 we show that all the mapping spectra in $\mathcal{M}$ are equipped with a natural map to the connective $K$-theory spectrum $ku$. We observe that the rank filtration of $\mathcal{M}$ is a lift of the classical rank filtration of $ku$. We also discuss how one can use our models to represent the $K$-theory functor on noncommutative complexes. We make a connection with some work of Dadarlat and McClure [DM]. In Sections 7 and 8 we describe the subquotients of the rank filtration in terms of complexes of direct-sum decompositions that arose earlier in the study of the rank filtration of $ku$. Since complexes of direct-sum decompositions are well-studied, we obtain interesting consequences about $\mathcal{M}$ and $\mathtt{NSp}$. In Section 9 we use the results of the preceding sections to calculate the mapping spectra in $\mathcal{M}$ in some cases. In Section 10 we use those results to give an explicit model for the rationalization of $\mathcal{M}$ and $\mathtt{NSp}$. Rationally, $\mathtt{NSp}$ is equivalent to the $\infty$-category of presheaves of rational spectra on the $\infty$-category whose objects are finite-dimensional Hilbert spaces, and whose hom-spaces are linear embeddings modulo scalars. We also point out some consequences that our models have for the $p$-localization, and potentially the chromatic localization, of $\mathcal{M}$ and $\mathtt{NSp}$.

#### Acknowledgements

We would like to thank Jeffrey Carlson for fruitful correspondence during the early stages of our work. We are grateful to Vladimir Hinich for explaining to us his theory of enriched infinity categories and its relevance to our work.

## 2 The functors $G_{k,l}$ and their stabilization

In this section we recall the construction of $\mathcal{M}$ as a category strictly enriched in $\mathtt{Sp^{M}}$ (see Remark 1.2). We will introduce certain functors $G_{k,l}\in\mathtt{Sp^{M}}$, which will represent the mapping spectra in $\mathcal{M}$.
To begin with, let $\mathtt{SC^{*}}$ denote the category of (non-unital) separable $C^{*}$-algebras and $*$-homomorphisms. Note that the matrix algebras $\\{M_{k}\mid k=1,2,\ldots\\}$ are objects of $\mathtt{SC^{*}}$. Consider $\mathtt{SC^{*}}$ as a topologically enriched category, where for every $A,B\in\mathtt{SC^{*}}$ we endow the set of $*$-homomorphisms $\mathtt{SC^{*}}(A,B)$ with the topology of pointwise norm convergence. It is well-known that $\mathtt{SC^{*}}$ is cotensored over the category of pointed finite CW-complexes [AG]. For a finite pointed CW-complex $X$ and a $C^{*}$-algebra $A$ we denote the cotensoring by $\mathrm{C}_{0}(X,A)$. Next, we want to use $\mathtt{SC^{*}}$ to define $\mathcal{M}$ as a category enriched in $\mathtt{Sp^{M}}$. Recall that the underlying category of $\mathtt{Sp^{M}}$ is the category of pointed continuous functors from pointed finite CW-complexes to pointed topological spaces.

###### Definition 2.1. Let $G_{k,l}\colon\mathtt{CW}^{f}_{*}\to\operatorname{Top}$ be the functor defined as follows: $G_{k,l}(X)=\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(X,M_{l}),M_{k}).$ We will consider $G_{k,l}$ to be an object of $\mathtt{Sp^{M}}$.

Notice that for all $k,l,m$, there is a natural map $G_{k,l}(X)\wedge G_{l,m}(Y)\to G_{k,m}(X\wedge Y)$, defined as a composition of the following maps.
$\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(X,M_{l}),M_{k})\wedge\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(Y,M_{m}),M_{l})\to\\\ \to\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(X,M_{l}),M_{k})\wedge\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(X\wedge Y,M_{m}),\mathrm{C}_{0}(X,M_{l}))\to\\\ \to\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(X\wedge Y,M_{m}),M_{k}).$ Here the second map is composition, and the first map is induced by the cotensoring $\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(Y,M_{m}),M_{l})\to\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(X\wedge Y,M_{m}),\mathrm{C}_{0}(X,M_{l})).$ This map induces natural maps $G_{k,l}\wedge G_{l,m}\to G_{k,m}$ (2) where $\wedge$ denotes the internal smash product (a.k.a. Day convolution).

###### Definition 2.2. Let $\mathcal{M}$ be the following $\mathtt{Sp^{M}}$-enriched category. The objects of $\mathcal{M}$ are the positive integers. Given two integers $k,l$, the mapping spectrum from $k$ to $l$ is given by $G_{k,l}$. The composition law in $\mathcal{M}$ is defined by the structure maps as in (2), for all $k,l,m$.

###### Remark 2.3. Recall that in the $\infty$-categorical picture, $\mathcal{M}$ is the full spectral subcategory of $\mathtt{NSp}$ whose objects are $\\{\Sigma^{\infty}_{\mathtt{NC}}M_{k}\mid k=1,2,\ldots\\}$. We show in [ABS1] that the enriched coherent nerve of the category $\mathcal{M}$ from Definition 2.2 is equivalent to the spectral $\infty$-category $\mathcal{M}$ defined above. The objects of $\mathcal{M}$ provide a set of compact generators of $\mathtt{NSp}$. The main result of [ABS1] says that the $\infty$-category $\mathtt{NSp}$ is equivalent to the $\infty$-category $P_{\mathtt{Sp}}(\mathcal{M})$ of spectral presheaves on $\mathcal{M}$. In this paper we investigate the category $\mathcal{M}$, with the eventual goal of understanding $\mathtt{NSp}$. In view of the results of [ABS1], in this paper we identify $\mathtt{NSp}$ with $P_{\mathtt{Sp}}(\mathcal{M})$.
In the remaining part of the paper we will study the category $\mathcal{M}$, mainly using the explicit model provided in Definition 2.2, but we will state the results also in the $\infty$-categorical picture. Recall from the introduction that we identify $\mathtt{Sp^{M}}_{\infty}=\mathtt{Sp}$ and we denote by $\partial_{1}:\mathtt{Sp^{M}}\to\mathtt{Sp}$ the localization functor. If $G\in\mathtt{Sp^{M}}$ is a pointed continuous functor from pointed finite CW-complexes to pointed topological spaces, then $\partial_{1}G$ is the spectrum corresponding to the sequence of spaces $\\{G(S^{0}),G(S^{1}),\ldots\\}$, or in other words $\partial_{1}G\simeq{\mathop{\operatorname{hocolim}}}_{n}\Sigma^{-n}\Sigma^{\infty}G(S^{n}).$ This is known as the stabilization, or the first derivative, of the functor $G$. We have shown in [ABS1] that for all natural numbers $k,l$, we have $\mathbb{S}^{k,l}:=\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})\simeq\partial_{1}G_{k,l}\simeq{\mathop{\operatorname{hocolim}}}_{n}\Sigma^{-n}\Sigma^{\infty}G_{k,l}(S^{n}).$

###### Example 1. Let us consider the case $k=l=1$. The functor $G_{1,1}$ is given as follows: $G_{1,1}(X)=\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(X,M_{1}),M_{1})=\operatorname{Map}_{\mathtt{SC^{*}}}(\mathrm{C}_{0}(X,\mathbb{C}),\mathbb{C}).$ By the Gelfand-Naimark theorem, it follows that $G_{1,1}(X)\cong X$, and therefore $\mathbb{S}^{1,1}=\Sigma^{\infty}S^{0}$ is the ordinary sphere spectrum.

###### Remark 2.4. Let us interpret $\mathbb{S}^{1,1}=\Sigma^{\infty}S^{0}$ as the endomorphism spectrum $\operatorname{End}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{1})$. We have identified $\mathtt{NSp}$ with the $\infty$-category of spectral presheaves on $\mathcal{M}$.
In this picture, the $\infty$-category of spectral presheaves on the full $\mathtt{Sp}$-enriched subcategory of $\mathcal{M}$ consisting of the object $\Sigma^{\infty}_{\mathtt{NC}}M_{1}$ can be identified with the $\infty$-category $\mathtt{Sp}$ of ``commutative'' or ``ordinary'' spectra. There is an ``inclusion'' functor of $\mathtt{Sp}$-tensored categories $\mathtt{Sp}\to\mathtt{NSp}$ which in terms of presheaves is defined by an $\mathtt{Sp}$-enriched left Kan extension (weighted colimit) from $\\{\Sigma^{\infty}_{\mathtt{NC}}M_{1}\\}$ to $\mathcal{M}$. The inclusion functor has a right adjoint $\mathtt{NSp}\to\mathtt{Sp}$, a kind of ``abelianization'' functor, defined by restriction of presheaves. We will say a little more about it in Section 6.

## 3 The rank filtration

In this section we introduce the rank filtration of $G_{k,l}$, which induces a rank filtration of the spectral category $\mathcal{M}$. In later sections we will see that the rank filtration of $\mathcal{M}$ is a lift of the classical rank filtration of the connective $K$-theory spectrum $ku$. Let $l,k\geq 1$, let $X$ be a finite pointed CW-complex, and let $f\in G_{k,l}(X)=\mathtt{SC^{*}}(\mathrm{C}_{0}(X,M_{l}),M_{k})$ be a map. Let $A_{f}\subseteq M_{k}$ be the image of $f$. Then $A_{f}$ acts as a non-unital $C^{*}$-algebra on the Hilbert space $\mathbb{C}^{k}$, and thus we get an orthogonal decomposition $\mathbb{C}^{k}=\mathrm{Ker}\,A_{f}\oplus A_{f}\cdot\mathbb{C}^{k}$. Denote $V_{f}:=A_{f}\cdot\mathbb{C}^{k}\subseteq\mathbb{C}^{k}$. We shall filter the space $G_{k,l}(X)$ according to the dimension of $V_{f}$. The following theorem is useful in that analysis.

###### Theorem 3.1. Let $X$ be a pointed compact metrizable space and let $l\geq 1$.
There is a bijection between the closed subsets of $X\setminus\\{*\\}$ and the closed two-sided ideals of $\mathrm{C}_{0}(X,M_{l})$, defined by the following correspondence: $F\mapsto I_{F}:=\\{f\in\mathrm{C}_{0}(X,M_{l})\mid f(x)=0\text{ for all }x\in F\\}.$

###### Proof. The case $l=1$ is well-known. We will show that it implies the rest. Two $C^{*}$-algebras $A$ and $B$ are called strongly Morita equivalent if they are related by a $B$-$A$-imprimitivity bimodule in the sense of [Rie]. Let $\mathbb{K}$ be the algebra of compact operators on an infinite-dimensional separable Hilbert space. It is shown in [BGR] that if $A$ and $B$ are separable, then $A$ and $B$ are strongly Morita equivalent if and only if $A\otimes\mathbb{K}\cong B\otimes\mathbb{K}.$ It is not hard to see that $\mathrm{C}_{0}(X,M_{l})\cong\mathrm{C}_{0}(X)\otimes M_{l}$ and $M_{l}\otimes\mathbb{K}\cong\mathbb{K}$, so we have $\mathrm{C}_{0}(X,M_{l})\otimes\mathbb{K}\cong(\mathrm{C}_{0}(X)\otimes M_{l})\otimes\mathbb{K}\cong\mathrm{C}_{0}(X)\otimes(M_{l}\otimes\mathbb{K})\cong\mathrm{C}_{0}(X)\otimes\mathbb{K}.$ Thus, $\mathrm{C}_{0}(X,M_{l})$ and $\mathrm{C}_{0}(X)$ are strongly Morita equivalent. By [Zet], we have a bijection between the sets of closed two-sided ideals of $\mathrm{C}_{0}(X,M_{l})$ and of $\mathrm{C}_{0}(X)$. ∎

###### Lemma 3.2. Let $X\in\mathtt{CW}^{f}_{*}$ be a pointed finite CW-complex and let $f\in G_{k,l}(X)={\mathtt{SC^{*}}}(\mathrm{C}_{0}(X,M_{l}),M_{k}).$ Then $f$ admits a unique factorization of the following form $\mathrm{C}_{0}(X,M_{l})\twoheadrightarrow\mathrm{C}_{0}(F_{f}\cup\\{*\\},M_{l})\stackrel{{\scriptstyle f^{\prime}}}{{\rightarrowtail}}M_{k}$ (3) where the first map is the surjective restriction to a finite subset $F_{f}\subset X\setminus\\{\ast\\}$ and the second map $f^{\prime}\colon\mathrm{C}_{0}(F_{f}\cup\\{*\\},M_{l})\rightarrowtail M_{k}$ is a monomorphism. In particular, we have $V_{f}=V_{f^{\prime}}$.

###### Proof.
First, note that $\ker(f)$ is a closed two-sided $*$-ideal of $\mathrm{C}_{0}(X,M_{l})$. By Theorem 3.1, there exists a closed subset $F_{f}$ of $X\setminus\\{*\\}$ such that $\ker(f)=I_{F_{f}}=\\{g\in\mathrm{C}_{0}(X,M_{l})\mid g|_{F_{f}}=0\\}.$ Notice that $\ker(f)=I_{F_{f}}$ is also the kernel of the restriction homomorphism $\mathrm{C}_{0}(X,M_{l})\to\mathrm{C}_{0}(F_{f}\cup\\{*\\},M_{l}).$ We claim that the restriction homomorphism is surjective. This amounts to showing that any continuous map from a closed subset of $X$ to $M_{l}$ can be extended to a continuous map from $X$ to $M_{l}$, which in turn follows immediately from the Tietze extension theorem. It follows that $f$ admits a unique factorization of the following form $\mathrm{C}_{0}(X,M_{l})\twoheadrightarrow\mathrm{C}_{0}(F_{f}\cup\\{*\\},M_{l})\stackrel{{\scriptstyle f^{\prime}}}{{\rightarrowtail}}M_{k}$ (4) where the first map is restriction to the subset $F_{f}$ and the second map $f^{\prime}\colon\mathrm{C}_{0}(F_{f}\cup\\{*\\},M_{l})\to M_{k}$ is a monomorphism. Moreover, $F_{f}$ is the minimal subset of $X\setminus\\{*\\}$ for which the map $f$ factors through $\mathrm{C}_{0}(F_{f}\cup\\{*\\},M_{l})$. Notice that $\textrm{Im\,}(f^{\prime})$ is finite-dimensional as a vector space over $\mathbb{C}$. This implies that $F_{f}$ is finite. ∎

###### Lemma 3.3. Let $f\in G_{k,l}(X)=\mathtt{SC^{*}}(\mathrm{C}_{0}(X,M_{l}),M_{k})$. Then $l\mid\dim V_{f}$ and $l\cdot|F_{f}|\leq\dim V_{f}\leq k$.

###### Proof. By Lemma 3.2, the homomorphism $f$ can be factored as a surjection followed by an injection $\mathrm{C}_{0}(X,M_{l})\to M_{l}^{F_{f}}\stackrel{{\scriptstyle f^{\prime}}}{{\to}}M_{k}.$ Thus we have an isomorphism $M_{l}^{F_{f}}\cong A_{f}$. The unit $1\in M_{l}^{F_{f}}$ acts on $\mathbb{C}^{k}$ (via $f^{\prime}$) as the orthogonal projection onto $V_{f}$, and thus we get a unital action of $M_{l}^{F_{f}}$ on $V_{f}$.
Now for $x\in F_{f}$, denote by $W_{x}$ the unital $M_{l}^{F_{f}}$-module obtained from the canonical action on $\mathbb{C}^{l}$ via the map $M_{l}^{F_{f}}\to M_{l}^{\\{x\\}}=M_{l}$. Every finite-dimensional unital $M_{l}^{F_{f}}$-module is a direct sum of finitely many copies of the $W_{x}$'s. We thus get that $V_{f}=\bigoplus_{x\in F_{f}}W_{x}^{e_{x}}.$ The injectivity of the map $M_{l}^{F_{f}}\stackrel{{\scriptstyle f^{\prime}}}{{\to}}M_{k}$ implies that $e_{x}\geq 1$ for every $x\in F_{f}$. Since $\dim V_{f}=l\sum e_{x}$ and $V_{f}\subseteq\mathbb{C}^{k}$, we get the claim. ∎

###### Definition 3.4. Let $l,k\geq 1$, let $X$ be a finite pointed CW-complex, and let $f\in G_{k,l}(X)=\mathtt{SC^{*}}(\mathrm{C}_{0}(X,M_{l}),M_{k}).$ We define the _rank of $f$_ to be the non-negative integer $\mathrm{rank}(f):=\frac{\dim V_{f}}{l}\in\mathbb{Z}_{\geq 0}.$

Suppose we have a map $\alpha\colon X\to Y$ in $\mathtt{CW}^{f}_{*}$. By functoriality, it induces a map $G_{k,l}(X)\to G_{k,l}(Y)$. Suppose $f\in G_{k,l}(X)$. By definition, $f$ is a $*$-homomorphism $f\colon\mathrm{C}_{0}(X,M_{l})\to M_{k}$, and the image of $f$ in $G_{k,l}(Y)$ is the composite homomorphism $\mathrm{C}_{0}(Y,M_{l})\xrightarrow{\alpha^{*}}\mathrm{C}_{0}(X,M_{l})\xrightarrow{f}M_{k}.$ Therefore the rank of the image of $f$ in $G_{k,l}(Y)$ is at most the rank of $f$. Because of this, the following definition really does describe a functor.

###### Definition 3.5. Let $k,l\geq 1$, $m\geq 0$. Define the functors $G_{k,l,m}\colon\mathtt{CW}^{f}_{*}\to\operatorname{Top}$ as follows: $G_{k,l,m}(X)=\left\\{f\in G_{k,l}(X)\mid\mathrm{rank}(f)\leq m\right\\}\subseteq G_{k,l}(X).$ Similarly, define $\mathbb{S}^{k,l,m}$ to be the stabilization of $G_{k,l,m}$. Explicitly, $\mathbb{S}^{k,l,m}$ is the spectrum $\\{G_{k,l,m}(S^{0}),G_{k,l,m}(S^{1}),\ldots\\}$.

###### Remark 3.6. Note that for all $X\in\mathtt{CW}^{f}_{*}$ and $k,l\geq 1$ we have $G_{k,l,0}(X)=\ast$.
Additionally, by Lemma 3.3, for $m\geq\lfloor\frac{k}{l}\rfloor$ we get $G_{k,l,m}=G_{k,l}$. We have thus defined a filtration of $G_{k,l}$ by a sequence of subfunctors $*=G_{k,l,0}\subset G_{k,l,1}\subset\cdots\subset G_{k,l,\lfloor\frac{k}{l}\rfloor}=G_{k,l}.$ We call this filtration _the rank filtration_. Now recall that the functors $G_{k,l}$ represent mapping spectra in $\mathcal{M}$ and that composition in $\mathcal{M}$ is determined by maps of the following form $G_{k,l}(X)\wedge G_{l,m}(Y)\to G_{k,m}(X\wedge Y).$ The following proposition describes how the rank filtration interacts with composition.

###### Proposition 3.7. For all $r$ and $s$, the composition map above restricts to a natural map $G_{k,l,r}(X)\wedge G_{l,m,s}(Y)\to G_{k,m,rs}(X\wedge Y).$

###### Proof. Recall that $G_{k,l}(X)=\mathtt{SC^{*}}(\mathrm{C}_{0}(X,M_{l}),M_{k})$. Written in these terms, the composition map has the following form $\mathtt{SC^{*}}(\mathrm{C}_{0}(X,M_{l}),M_{k})\wedge\mathtt{SC^{*}}(\mathrm{C}_{0}(Y,M_{m}),M_{l})\to\\\ \to\mathtt{SC^{*}}(\mathrm{C}_{0}(X,M_{l}),M_{k})\wedge\mathtt{SC^{*}}(\mathrm{C}_{0}(X\wedge Y,M_{m}),\mathrm{C}_{0}(X,M_{l}))\to\\\ \to\mathtt{SC^{*}}(\mathrm{C}_{0}(X\wedge Y,M_{m}),M_{k}).$ Suppose that $f\in\mathtt{SC^{*}}(\mathrm{C}_{0}(X,M_{l}),M_{k})$ and $g\in\mathtt{SC^{*}}(\mathrm{C}_{0}(Y,M_{m}),M_{l})$ have ranks $r$ and $s$ respectively. Let $f\odot g$ denote the image of $f\wedge g$ in $\mathtt{SC^{*}}(\mathrm{C}_{0}(X\wedge Y,M_{m}),M_{k})$. Our goal is to show that $f\odot g$ has rank $rs$. By Lemma 3.2, there exist finite subsets $F_{f}\subset X\smallsetminus\\{\ast\\}$, $F_{g}\subset Y\smallsetminus\\{\ast\\}$ such that $f$ factors as $\mathrm{C}_{0}(X,M_{l})\twoheadrightarrow\mathrm{C}_{0}(F_{f}\cup\\{*\\},M_{l})\stackrel{{\scriptstyle f^{\prime}}}{{\rightarrowtail}}M_{k}$, and there is a similar factorization of $g$.
It follows that $f\odot g$ factors as follows $\mathrm{C}_{0}(X\wedge Y,M_{m})\twoheadrightarrow\mathrm{C}_{0}(F_{f}\times F_{g}\cup\\{*\\},M_{m})\stackrel{{\scriptstyle f^{\prime}\odot g^{\prime}}}{{\rightarrowtail}}M_{k}$ where the second map is itself the following composite $(M_{m}^{F_{g}})^{F_{f}}\xrightarrow{g^{\prime\times F_{f}}}M_{l}^{F_{f}}\xrightarrow{f^{\prime}}M_{k}.$ Here the first map is the cartesian product of $|F_{f}|$ copies of $g^{\prime}$. This map determines an action of $M_{m}^{F_{g}\times F_{f}}$ on $\mathbb{C}^{k}$. Our goal is to show that $M_{m}^{F_{g}\times F_{f}}\cdot\mathbb{C}^{k}$ has dimension $rsm$. Recall that $A_{f}=A_{f^{\prime}}$ is the image of $f^{\prime}$. Since $\mathrm{rank}(f)=r$, $A_{f}\cdot\mathbb{C}^{k}$ has dimension $rl$. If $B\subset M_{l}$ is a $C^{*}$-subalgebra such that $B\cdot\mathbb{C}^{l}$ has dimension $d$, and we let $B^{F_{f}}$ act on $\mathbb{C}^{k}$ via the map $f^{\prime}$, then $B^{F_{f}}\cdot\mathbb{C}^{k}$ has dimension $rd$. Now take $B$ to be the image of $g^{\prime}$. Since $\mathrm{rank}(g)=s$, $B\cdot\mathbb{C}^{l}$ has dimension $sm$, so finally we conclude that $M_{m}^{F_{g}\times F_{f}}\cdot\mathbb{C}^{k}$ has dimension $rsm$. This means that $f\odot g$ has rank $rs$. ∎ ###### Remark 3.8. Proposition 3.7 can be interpreted as follows: the rank filtration is a filtration of the category $\mathcal{M}$ by the multiplicative monoid of natural numbers. ## 4 The restriction of $G_{k,l}$ to finite sets It will turn out that the functor $G_{k,l}$, whose domain is the category of pointed finite CW-complexes, is determined by its restriction to the category of pointed finite sets. In this section we give an explicit description of the restriction of $G_{k,l}$ to finite sets. Let us begin with a definition, which also serves to establish some notation. ###### Definition 4.1. For a natural number $i$, let $[i]=\\{0,1,\ldots,i\\}$, considered as a pointed set with basepoint $0$.
Let $\mathtt{Fin}_{*}$ be the category whose objects are $\\{[0],[1],\dots,[k],\ldots\\}$ and whose morphisms are basepoint-preserving functions. For $k\geq 0$, let $\mathtt{Fin^{\leq k}_{*}}$ denote the full subcategory of $\mathtt{Fin}_{*}$ spanned by the objects $\\{[0],[1],\dots,[k]\\}$. We will also use the notation $\underline{i}$ for the unpointed set $\\{1,\ldots,i\\}$. ###### Remark 4.2. The category $\mathtt{Fin}_{*}$ is denoted $\Gamma$ in some sources, and $\Gamma^{\operatorname{op}}$ in some other sources. We find the notation $\mathtt{Fin}_{*}$ to be more descriptive. But following the tradition established by Segal [Se1], we call pointed functors $\mathtt{Fin}_{*}\to\operatorname{Top}$ $\Gamma$-spaces. We will now examine the restriction of $G_{k,l}$ to $\mathtt{Fin}_{*}$. For a finite pointed set $[t]$, $G_{k,l}([t])$ is the space of non-unital $C^{*}$-algebra homomorphisms from $M_{l}^{t}$ to $M_{k}$. Spaces of such homomorphisms are well-understood. We want to describe them in a way that makes the functoriality in $[t]$ explicit. We need a few definitions. ###### Definition 4.3. The category of pointed multisets is defined as follows. The objects are ordered $t$-tuples $(m_{1},\ldots,m_{t})$ of natural numbers. The possibility $t=0$ is included, in which case the tuple is empty. A morphism $(m_{1},\ldots,m_{t})\to(n_{1},\ldots,n_{s})$ consists of a pointed function $\alpha\colon[t]\to[s]$ such that $n_{j}=\Sigma_{i\in\alpha^{-1}(j)}m_{i}$ for all $1\leq j\leq s$. In particular, if $j$ is not in the image of $\alpha$ then $n_{j}=0$. Note that there are no restrictions on $m_{i}$ for $i\in\alpha^{-1}(0)$. Given a pointed multiset $(m_{1},\ldots,m_{t})$ and a pointed function of sets $\alpha\colon[t]\to[s]$ we define $\alpha_{*}(m_{1},\ldots,m_{t})$ to be the multiset $(n_{1},\ldots,n_{s})$ with $n_{j}=\Sigma_{i\in\alpha^{-1}(j)}m_{i}$ for all $1\leq j\leq s$. ###### Example 2. 
Let $\alpha\colon[3]\to[2]$ be the function defined by $\alpha(0)=\alpha(1)=0$, $\alpha(2)=\alpha(3)=1$. Then $\alpha_{*}(4,2,3)=(5,0)$. Suppose we have a multiset $(m_{1},\ldots,m_{t})$ and a natural number $l$. We will make much use of the vector space $\mathbb{C}^{(m_{1}+\cdots+m_{t})l}$, equipped with its standard Hermitian inner product. We identify this vector space with $\mathbb{C}^{m_{1}+\cdots+m_{t}}\otimes\mathbb{C}^{l}\cong\mathbb{C}^{m_{1}}\otimes\mathbb{C}^{l}\oplus\cdots\oplus\mathbb{C}^{m_{t}}\otimes\mathbb{C}^{l}.$ Notice that there are commuting actions of $U(m_{1})\times\cdots\times U(m_{t})$ and $U(l)$ on $\mathbb{C}^{(m_{1}+\cdots+m_{t})l}$. It follows that these groups act on any space obtained by applying a continuous functor to this vector space. Now suppose we have a morphism of pointed multisets $\alpha\colon(m_{1},\ldots,m_{t})\to(n_{1},\ldots,n_{s})$, so $(n_{1},\ldots,n_{s})=\alpha_{*}(m_{1},\ldots,m_{t})$. Choose unitary isomorphisms $\mathbb{C}^{n_{j}}\stackrel{{\scriptstyle\cong}}{{\to}}\mathbb{C}^{\Sigma_{i\in\alpha^{-1}(j)}m_{i}}$ for all $1\leq j\leq s$. The function $\alpha$ together with these isomorphisms determines an inner-product-preserving inclusion $\mathbb{C}^{(n_{1}+\cdots+n_{s})l}\to\mathbb{C}^{(m_{1}+\cdots+m_{t})l}$. This inclusion in turn defines a map of spaces (where $k$ is another natural number) $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})\to\operatorname{Inj}(\mathbb{C}^{(n_{1}+\cdots+n_{s})l},\mathbb{C}^{k}).$ A different choice of isomorphisms $\mathbb{C}^{n_{j}}\stackrel{{\scriptstyle\cong}}{{\to}}\mathbb{C}^{\Sigma_{i\in\alpha^{-1}(j)}m_{i}}$ will change the map by precomposition with a unitary automorphism of $\mathbb{C}^{(n_{1}+\cdots+n_{s})l}$ that is induced by automorphisms of $\mathbb{C}^{n_{1}},\ldots,\mathbb{C}^{n_{s}}$.
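The pushforward $\alpha_{*}$ of Definition 4.3 is a simple fold over the fibers of $\alpha$; a plain-Python sketch reproducing the computation of Example 2 (the function names are ours):

```python
def pushforward(alpha, m, s):
    """alpha_*(m_1,...,m_t) = (n_1,...,n_s), where n_j is the sum of m_i
    over i in alpha^{-1}(j); indices sent to the basepoint 0 are discarded."""
    n = [0] * s
    for i, mi in enumerate(m, start=1):
        j = alpha(i)
        if j != 0:
            n[j - 1] += mi
    return tuple(n)

# Example 2: alpha(0) = alpha(1) = 0 and alpha(2) = alpha(3) = 1
alpha = lambda i: 0 if i <= 1 else 1
assert pushforward(alpha, (4, 2, 3), s=2) == (5, 0)
# the identity map leaves a multiset unchanged
assert pushforward(lambda i: i, (7, 1), s=2) == (7, 1)
```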
Therefore we get a well-defined (i.e., independent of the choices of isomorphisms) map $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})\to\operatorname{Inj}(\mathbb{C}^{(n_{1}+\cdots+n_{s})l},\mathbb{C}^{k})/_{\prod_{j=1}^{s}U(n_{j})}$ Moreover, it is easy to see that the map passes to a well-defined map between quotients $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}\to\operatorname{Inj}(\mathbb{C}^{(n_{1}+\cdots+n_{s})l},\mathbb{C}^{k})/_{\prod_{j=1}^{s}U(n_{j})}$ (5) The upshot is that we have defined a functor from the category of pointed multisets to spaces that sends $(m_{1},\ldots,m_{t})$ to $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}$. ###### Remark 4.4. Here is a slightly different way to think of the map (5). Let $m=m_{1}+\cdots+m_{t}$. There are homeomorphisms $\operatorname{Inj}(\mathbb{C}^{ml},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}\cong U(k)/\prod_{i=1}^{t}U(m_{i})\times U(k-ml)$ and similarly $\operatorname{Inj}(\mathbb{C}^{nl},\mathbb{C}^{k})/_{\prod_{j=1}^{s}U(n_{j})}\cong U(k)/\prod_{j=1}^{s}U(n_{j})\times U(k-nl).$ A morphism of multisets $\alpha\colon(m_{1},\ldots,m_{t})\to(n_{1},\ldots,n_{s})$ gives a canonical way to conjugate $\prod_{i=1}^{t}U(m_{i})\times U(k-ml)$ into a subgroup of $\prod_{j=1}^{s}U(n_{j})\times U(k-nl)$, and therefore gives rise to a $U(k)$-equivariant map $U(k)/\prod_{i=1}^{t}U(m_{i})\times U(k-ml)\to U(k)/\prod_{j=1}^{s}U(n_{j})\times U(k-nl).$ Now we can describe the functor $G_{k,l}$ on finite sets. The following proposition is essentially due to Bratteli [Br]. ###### Proposition 4.5.
There is a homeomorphism $G_{k,l}([t])\cong\bigvee_{(m_{1},\ldots,m_{t})}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}$ (6) The wedge sum on the right is indexed on non-zero ordered $t$-tuples $(m_{1},\ldots,m_{t})$ of non-negative integers (the zero tuple corresponds to the basepoint). The functoriality on the right-hand side is defined as follows. A pointed map $\alpha\colon[t]\to[s]$ induces a map $\bigvee_{(m_{1},\ldots,m_{t})}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}\to\\\ \to\bigvee_{(n_{1},\ldots,n_{s})}{\operatorname{Inj}(\mathbb{C}^{(n_{1}+\cdots+n_{s})l},\mathbb{C}^{k})/_{\prod_{j=1}^{s}U(n_{j})}}_{+}$ that sends the wedge summand corresponding to $(m_{1},\ldots,m_{t})$ to the wedge summand corresponding to $\alpha_{*}(m_{1},\ldots,m_{t})$ by the map (5), assuming $\alpha_{*}(m_{1},\ldots,m_{t})$ is not a tuple of zeros. If $\alpha_{*}(m_{1},\ldots,m_{t})$ consists just of zeros, then $\alpha$ sends the corresponding wedge summand to the basepoint. ###### Proof. By Definition 2.1, there is a homeomorphism $G_{k,l}([t])\cong{\mathtt{SC^{*}}}(M_{l}^{t},M_{k}).$ For every multiset $(m_{1},\ldots,m_{t})$, we define a map $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}\to{\mathtt{SC^{*}}}(M_{l}^{t},M_{k})$ (7) as follows. Suppose we have a unitary isometric inclusion $\mathbb{C}^{(m_{1}+\cdots+m_{t})l}\hookrightarrow\mathbb{C}^{k}$.
From this, we get a unitary isomorphism (determined up to an automorphism of $\mathbb{C}^{k-ml}$) $\mathbb{C}^{m_{1}}\otimes\mathbb{C}^{l}\oplus\cdots\oplus\mathbb{C}^{m_{t}}\otimes\mathbb{C}^{l}\oplus\mathbb{C}^{k-ml}\stackrel{{\scriptstyle\cong}}{{\to}}\mathbb{C}^{k}$ (8) Having fixed such an isomorphism, we associate with it a $C^{*}$-algebra homomorphism $M_{l}^{t}\to M_{k}$ as follows: the $i$-th factor $M_{l}$ of $M_{l}^{t}$ acts on $\mathbb{C}^{m_{i}}\otimes\mathbb{C}^{l}$ by the identity on $\mathbb{C}^{m_{i}}$ and by the standard action on $\mathbb{C}^{l}$. Note that the action of $M_{l}^{t}$ on $\mathbb{C}^{k-ml}$ is multiplication by zero. Automorphisms of $\mathbb{C}^{k-ml}$ commute with the action of $M_{l}^{t}$ on $\mathbb{C}^{m_{1}}\otimes\mathbb{C}^{l}\oplus\cdots\oplus\mathbb{C}^{m_{t}}\otimes\mathbb{C}^{l}\oplus\mathbb{C}^{k-ml}$. It follows that changing the isomorphism (8) by an automorphism of $\mathbb{C}^{k-ml}$ does not change the resulting algebra homomorphism from $M_{l}^{t}$ to $M_{k}$. It follows in turn that we have a well-defined map $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})\to{\mathtt{SC^{*}}}(M_{l}^{t},M_{k}).$ It follows from elementary representation theory (Schur's Lemma) that two elements of $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})$ induce the same algebra homomorphism if and only if they differ by an action of $U(m_{1})\times\cdots\times U(m_{t})$. Therefore we get a well-defined injective map in (7). Taking the union over multisets of the form $(m_{1},\ldots,m_{t})$ with fixed $t$, we obtain a map $\bigvee_{(m_{1},\ldots,m_{t})}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}\stackrel{{\scriptstyle\cong}}{{\to}}{\mathtt{SC^{*}}}(M_{l}^{t},M_{k})$ (9) which we claim is a homeomorphism. Indeed, we already know that it is injective. Next we need to show that the map (9) is surjective.
Let $f\colon M_{l}^{t}\to M_{k}$ be a $C^{*}$-algebra homomorphism. Let $I_{1},\ldots,I_{t}$ be the identity elements of the $t$ factors $M_{l}$ of $M_{l}^{t}$. Then $f(I_{1}),\ldots,f(I_{t})$ are pairwise commuting hermitian idempotents in $M_{k}$. It follows that for $i=1,\ldots,t$, $f(I_{i})$ is the hermitian projection onto $U_{i}$, where $U_{1},\ldots,U_{t}$ are pairwise orthogonal subspaces of $\mathbb{C}^{k}$. Now suppose that $1\leq i\leq t$ and $A_{i}$ is an element of the $i$-th factor $M_{l}$ of $M_{l}^{t}$. Then $f(A_{i})=f(A_{i})f(I_{i})=f(I_{i})f(A_{i})$. Thus $f(A_{i})$ commutes with the hermitian idempotent $f(I_{i})$. It follows that $f(A_{i})$ leaves invariant $U_{i}$ and the orthogonal complement of $U_{i}$. Moreover, since $f(A_{i})=f(A_{i})f(I_{i})$ it follows that $f(A_{i})$ is the composition of the projection onto $U_{i}$ and a linear transformation of $U_{i}$. It follows that the restriction of $f$ to the $i$-th factor of $M_{l}^{t}$ defines a unital representation of the algebra $M_{l}$ on $U_{i}$. Since $M_{l}$ is Morita equivalent to $\mathbb{C}$, $U_{i}$ is isomorphic to a sum of copies of the standard representation of $M_{l}$. This means that we can write $U_{i}\cong\mathbb{C}^{m_{i}}\otimes\mathbb{C}^{l}$, where $m_{1},\ldots,m_{t}$ are some non-negative integers. With this identification the $i$-th $M_{l}$ acts on $U_{i}$ via the standard action, and it follows that $f$ is in the image of the map (9). We have shown that the map (9) is a bijection. To show that it is a homeomorphism, observe that $U(k)$ acts continuously on the source and the target. Moreover, both the source and the target are topologized as the disjoint union of $U(k)$-orbits. This is true by definition for the source.
To see this for the target, notice that the map that associates to an algebra morphism $f\colon M_{l}^{t}\to M_{k}$ the integers $(m_{1},\ldots,m_{t})$ is continuous and therefore locally constant, and $U(k)$ acts transitively on the preimage of any $t$-tuple of integers. Thus the map (9) is a $U(k)$-equivariant bijection between disjoint unions of orbits of a continuous action of $U(k)$. It follows that it is a homeomorphism. The statement about functoriality follows by straightforward diagram-chasing. ∎ In the previous section we defined the rank filtration of $G_{k,l}$. Unwinding the definitions, we find that if $f\in G_{k,l}([t])$ belongs to the wedge summand corresponding to $(m_{1},\ldots,m_{t})$ in Proposition 4.5, then $\mathrm{rank}(f)=m_{1}+\cdots+m_{t}.$ Thus, on finite sets the rank filtration is given by the following formula: $G_{k,l,m}([t])=\bigvee_{\\{(m_{1},\ldots,m_{t})\mid m_{1}+\cdots+m_{t}\leq m\\}}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}.$ (10) ## 5 $G_{k,l}$ is a cofibrant $\Gamma$-space In this section we observe that for all $k,l,m$ the restriction of the functor $G_{k,l,m}$ from finite complexes to finite sets is cofibrant, in a certain well-known model structure on $\Gamma$-spaces. This implies that the strict smash product between these functors is equivalent to the derived smash product. We also show that the value of the functor $G_{k,l,m}$ on pointed finite CW-complexes is equivalent to both the strict and the derived left Kan extension of the restriction of $G_{k,l,m}$ to the category of pointed finite sets. Furthermore, the $\Gamma$-space $G_{k,l,m}$ is $\min(\lfloor\frac{k}{l}\rfloor,m)$-skeletal. This implies that $G_{k,l,m}$ is determined by its restriction to the category of sets of cardinality at most $\min(\lfloor\frac{k}{l}\rfloor,m)$.
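The block construction in the proof of Proposition 4.5 (the $i$-th factor acting as $I_{m_{i}}\otimes A_{i}$, and by zero on the complement), together with the formula $\mathrm{rank}(f)=m_{1}+\cdots+m_{t}$, can be checked numerically. A sketch using numpy block matrices (helper names are ours; real matrices are used for simplicity):

```python
import numpy as np

def hom_from_multiplicities(As, mults, k):
    """*-homomorphism M_l^t -> M_k: the i-th factor M_l acts on
    C^{m_i} (x) C^l as I_{m_i} (x) A_i, and by zero on C^{k - ml}."""
    out = np.zeros((k, k), dtype=complex)
    pos = 0
    for m, A in zip(mults, As):
        b = np.kron(np.eye(m), A)
        out[pos:pos + b.shape[0], pos:pos + b.shape[0]] = b
        pos += b.shape[0]
    return out  # the trailing (k - ml)-block stays zero

l, k, mults = 2, 9, (1, 3)          # (1 + 3) * 2 = 8 <= 9
phi = lambda Xs: hom_from_multiplicities(Xs, mults, k)

rng = np.random.default_rng(0)
A = [rng.standard_normal((l, l)) for _ in range(2)]
B = [rng.standard_normal((l, l)) for _ in range(2)]
AB = [a @ b for a, b in zip(A, B)]  # multiplication in M_l^2 is factorwise

assert np.allclose(phi(AB), phi(A) @ phi(B))         # multiplicative
assert np.allclose(phi([a.T for a in A]), phi(A).T)  # *-compatible (real case)

# rank = m_1 + m_2: dim V_phi is the trace of the image of the identity
dim_V = int(round(np.trace(phi([np.eye(l)] * 2)).real))
assert dim_V // l == sum(mults)     # rank = 4
```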
For any fixed $[t]$, there is a canonical map $\mathop{\operatorname{colim}}_{U}G_{k,l,m}(U_{+})\to G_{k,l,m}([t])$ (11) where $U$ ranges over the poset of proper subsets of $\underline{t}=\\{1,\ldots,t\\}$ and $U_{+}=U\cup\\{0\\}$. Recall once again that there is an isomorphism $G_{k,l,m}([t])=\bigvee_{\\{(m_{1},\ldots,m_{t})\mid m_{1}+\cdots+m_{t}\leq m\\}}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}.$ (12) With this isomorphism in mind, the following lemma is proved by routine manipulations of colimits. ###### Lemma 5.1. There is an isomorphism $\mathop{\operatorname{colim}}_{U\subsetneq\underline{t}}G_{k,l,m}(U_{+})\cong\bigvee_{\begin{subarray}{c}\\{(m_{1},\ldots,m_{t})\mid m_{i}=0\mbox{ \small for some }1\leq i\leq t\\\ \mbox{ and }m_{1}+\cdots+m_{t}\leq m\\}\end{subarray}}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}$ The map $\mathop{\operatorname{colim}}_{U\subsetneq\underline{t}}G_{k,l,m}(U_{+})\to G_{k,l,m}([t])$ corresponds, under this isomorphism, to the inclusion of the wedge sum of all summands indexed by tuples $(m_{1},\ldots,m_{t})$ where $m_{i}=0$ for at least one $i$. It follows that the map (11) is an inclusion of a union of path components. Furthermore, the action of $\Sigma_{t}$ on the quotient space of this inclusion is free. This means that (11) is a $\Sigma_{t}$-equivariant cofibration, and this in turn means that as a functor on $\mathtt{Fin}_{*}$, $G_{k,l,m}$ is cofibrant in the model structure defined for $\Gamma$-spaces in [BF, Lyd1] (technically, these references work with $\Gamma$-simplicial sets, but an analogous structure exists for $\Gamma$-spaces). This model structure is also discussed in [BM] as an example of a generalized Reedy model structure. We will refer to this model structure simply as the Reedy model structure.
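The indexing sets of the wedge decompositions (10) and (12) are finite and easy to enumerate. A plain-Python sketch (the helper is ours) verifying that summands with $(m_{1}+\cdots+m_{t})l>k$ index empty $\operatorname{Inj}$ spaces and that the filtration stabilizes once $m\geq\lfloor\frac{k}{l}\rfloor$:

```python
from itertools import product

def summand_index(t, m, k, l):
    """Nonzero t-tuples (m_1,...,m_t) with m_1+...+m_t <= m that index
    nonempty wedge summands: Inj(C^{nl}, C^k) is empty when n*l > k."""
    return [mu for mu in product(range(m + 1), repeat=t)
            if 0 < sum(mu) <= m and sum(mu) * l <= k]

k, l, t = 6, 2, 2
full = summand_index(t, k // l, k, l)        # m = floor(k/l) = 3
assert len(full) == 9                        # sums 1, 2, 3 give 2 + 3 + 4 tuples
# raising m beyond floor(k/l) adds no summands: G_{k,l,m} = G_{k,l}
assert summand_index(t, k // l + 5, k, l) == full
```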
Let us note that the quotient space of (11) is given by the wedge sum $\bigvee_{\begin{subarray}{c}\\{(m_{1},\ldots,m_{t})\mid m_{1},\ldots,m_{t}\geq 1\\\ \mbox{ and }m_{1}+\cdots+m_{t}\leq m\\}\end{subarray}}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}.$ Notice that the quotient is trivial for $t>m$ and for $t>\lfloor\frac{k}{l}\rfloor$. In the terminology of [BF, Lyd1], this means that the $\Gamma$-space $G_{k,l,m}$ is $\min(m,\lfloor\frac{k}{l}\rfloor)$-skeletal. This implies that $G_{k,l,m}$ is determined on $\mathtt{Fin}_{*}$, via left Kan extension, by its restriction to the subcategory of sets of cardinality at most $\min(m,\lfloor\frac{k}{l}\rfloor)$. We want to verify that $G_{k,l,m}$ is equivalent, as a functor on $\mathtt{CW}^{f}_{*}$, to both the strict and the derived left Kan extension of the restriction of $G_{k,l,m}$ to $\mathtt{Fin}_{*}$, and even to $\mathtt{Fin}_{*}^{\leq\min(m,\lfloor\frac{k}{l}\rfloor)}$, where $\mathtt{Fin}_{*}^{\leq j}\subset\mathtt{Fin}_{*}$ is the full subcategory consisting of $[0],\ldots,[j]$. Left Kan extension can be described as a coend. Let us introduce some notation. Let $\operatorname{Top}$ denote the category of pointed compactly generated weak Hausdorff spaces with the standard model structure of Quillen [Qui]. Every object in $\operatorname{Top}$ is fibrant, and every CW-complex is cofibrant. Suppose $\mathcal{C}$ is a small category, and we have a pair of functors $F\colon\mathcal{C}^{\operatorname{op}}\to\operatorname{Top}$ and $G\colon\mathcal{C}\to\operatorname{Top}$. We denote the coend of $F$ and $G$ by $\int_{s}^{\mathcal{C}}F\wedge G$, or $\int_{s}^{x\in\mathcal{C}}F(x)\wedge G(x)$. The subscript $s$ is there to indicate that this is a strict coend, as opposed to the homotopy coend. Let us recall the definition and construction of the latter. ###### Definition 5.2.
For a functor $F$, let $Q_{o}F$ denote an objectwise cofibrant replacement of $F$ and let $Q_{p}F$ denote a cofibrant replacement in a projective model structure. If $\mathcal{C}$ is a generalized Reedy category in the sense of [BM], then let $Q_{r}F$ denote a cofibrant replacement in the Reedy model structure. The following lemma is standard. ###### Lemma 5.3. There are natural equivalences $\int_{s}^{\mathcal{C}}Q_{p}F\wedge Q_{p}G\simeq\int_{s}^{\mathcal{C}}Q_{o}F\wedge Q_{p}G\simeq\int_{s}^{\mathcal{C}}Q_{p}F\wedge Q_{o}G\simeq\int_{s}^{\mathcal{C}}Q_{r}F\wedge Q_{r}G$ ###### Proof. Let us prove, for example, the equivalence $\int_{s}^{\mathcal{C}}Q_{p}F\wedge Q_{o}G\simeq\int_{s}^{\mathcal{C}}Q_{r}F\wedge Q_{r}G$. The other equivalences are similar. It is enough to prove that for all spaces $Z$ there is an equivalence, natural in $Z$, $\operatorname{Map}_{\operatorname{Top}}\left(\int_{s}^{\mathcal{C}}Q_{p}F\wedge Q_{o}G,Z\right)\simeq\operatorname{Map}_{\operatorname{Top}}\left(\int_{s}^{\mathcal{C}}Q_{r}F\wedge Q_{r}G,Z\right).$ Note that these are derived mapping spaces, as the source is cofibrant and the target fibrant. For any covariant/contravariant pair of functors $F,G$, there is a homeomorphism $\operatorname{Map}_{\operatorname{Top}}\left(\int_{s}^{\mathcal{C}}F\wedge G,Z\right)\cong\mathrm{Nat}(F,\operatorname{Map}(G,Z)).$ Therefore it is enough to show that there is a natural equivalence $\mathrm{Nat}\left(Q_{p}F,\operatorname{Map}(Q_{o}G,Z)\right)\simeq\mathrm{Nat}\left(Q_{r}F,\operatorname{Map}(Q_{r}G,Z)\right).$ (13) The key observation is that if $Q_{r}G(-)$ is Reedy cofibrant then the functor $\operatorname{Map}(Q_{r}G(-),Z)$ is Reedy fibrant. The two functors $\operatorname{Map}(Q_{o}G(-),Z)$ and $\operatorname{Map}(Q_{r}G(-),Z)$ are weakly equivalent functors from $\mathcal{C}^{\operatorname{op}}$ to $\operatorname{Top}$. They are fibrant in the projective and the Reedy model structure respectively.
On the other hand, the functors $Q_{p}F$ and $Q_{r}F$ are weakly equivalent functors that are cofibrant in the projective and the Reedy model structure. It follows that the two sides of (13) are the derived mapping spaces from $F$ to $\operatorname{Map}(Q_{o}(G),Z)$ in the projective and the Reedy model structure respectively. Since the two model structures have the same weak equivalences, the two derived mapping spaces are equivalent. ∎ ###### Definition 5.4. The homotopy coend of $F$ and $G$, denoted by $\int_{h}^{\mathcal{C}}F\wedge G$, is defined to be any one of the equivalent coends in Lemma 5.3. ###### Remark 5.5. As mentioned in the introduction, we identify the $\infty$-localization of $\operatorname{Top}$ with the $\infty$-category of pointed spaces $\operatorname{Top}_{\infty}=\mathcal{S}_{*}.$ Let $F_{\infty}\colon\mathcal{C}^{\operatorname{op}}\to\mathcal{S}_{*}$ and $G_{\infty}\colon\mathcal{C}\to\mathcal{S}_{*}$ be the compositions of $F$ and $G$ with the localization functor $\operatorname{Top}\to\operatorname{Top}_{\infty}$. Then it is known (see for example [BHH, Proposition 2.5.6]) that the image of the homotopy coend of $F$ and $G$ under the localization functor $\operatorname{Top}\to\operatorname{Top}_{\infty}$ is equivalent to the $\infty$-coend of $F_{\infty}$ and $G_{\infty}$. In the sequel we will sometimes abuse notation and identify the homotopy coend with the $\infty$-coend and more generally homotopy colimits with $\infty$-colimits. The case we are interested in is of the covariant functor $G_{k,l,m}\colon\mathtt{Fin}_{*}\to\operatorname{Top}$ and the contravariant functor $X^{-}\colon\mathtt{Fin}_{*}^{\operatorname{op}}\to\operatorname{Top}$, where $X$ is a pointed finite CW complex. The strict coend $\int^{[t]\in\mathtt{Fin}_{*}}_{\mathrm{s}}X^{t}\wedge G_{k,l,m}([t])$ is a model for the strict (continuous) Kan extension of $G_{k,l,m}$ from $\mathtt{Fin}_{*}$ to $\operatorname{Top}$.
The homotopy coend of the same functors is a model for the homotopy Kan extension. We saw above that $G_{k,l,m}$ is cofibrant in the Reedy model structure. The functor $X^{-}$ is also Reedy cofibrant if $X$ is a CW complex. This amounts to saying that for all $t$, the inclusion of the fat diagonal $\Delta^{t}X$ into $X^{t}$ is a $\Sigma_{t}$-equivariant cofibration. Since both functors are Reedy cofibrant, their homotopy coend is in fact equivalent to the strict coend. ###### Lemma 5.6. Let $X$ be a pointed CW complex and $j$ any integer satisfying $\infty\geq j\geq\min(m,\lfloor\frac{k}{l}\rfloor)$. All the maps in the following diagram are equivalences $\begin{array}[]{ccc}\int^{[t]\in\mathtt{Fin}_{*}^{\leq j}}_{\mathrm{s}}X^{t}\wedge G_{k,l,m}([t])&\to&\int^{[t]\in\mathtt{Fin}_{*}}_{\mathrm{s}}X^{t}\wedge G_{k,l,m}([t])\\\ \downarrow&&\downarrow\\\ \int^{[t]\in\mathtt{Fin}_{*}^{\leq j}}_{\mathrm{h}}X^{t}\wedge G_{k,l,m}([t])&\to&\int^{[t]\in\mathtt{Fin}_{*}}_{\mathrm{h}}X^{t}\wedge G_{k,l,m}([t])\end{array}$ ###### Proof. The vertical maps are equivalences because the functors $X^{-}$ and $G_{k,l,m}$ are each Reedy cofibrant. The top map is an equivalence because $G_{k,l,m}$ is $j$-skeletal. ∎ Now comes the main result of this section: $G_{k,l,m}$ is equivalent to both the strict and the derived left Kan extension of its restriction to the category of finite sets of size at most $\min(m,\lfloor\frac{k}{l}\rfloor)$. ###### Theorem 5.7. For all $k,l,m,$ and $\infty\geq j\geq\min(m,\lfloor\frac{k}{l}\rfloor)$, the functor $G_{k,l,m}$ is equivalent to both the strict and the derived left Kan extension of $G_{k,l,m}|_{\mathtt{Fin}_{*}^{\leq j}}$ along the inclusion $\mathtt{Fin}_{*}^{\leq j}\subseteq\mathtt{CW}_{*}$. ###### Proof. By Lemma 5.6, the map from the derived left Kan extension to the strict one is an equivalence. So it is enough to prove the statement for the strict Kan extension. Thus throughout this proof, $\int$ stands for the strict coend.
There is a natural assembly map $\int^{[t]\in\mathtt{Fin}_{*}}X^{t}\wedge G_{k,l,m}([t])\to G_{k,l,m}(X).$ (14) We will prove that it is a homeomorphism, if $X$ is a finite CW complex. This is enough for proving the theorem. Thus we need to prove that (14) is bijective and bi-continuous. First we prove surjectivity. Let $f\in G_{k,l,m}(X)\subseteq{\mathtt{SC^{*}}}(\mathrm{C}_{0}(X,M_{l}),M_{k}).$ Factor $f$ using Lemma 3.2. Denote $t=|F_{f}|$ and choose a pointed bijection $[t]\xrightarrow{\cong}F_{f}\cup\\{*\\}$. It follows that $f$ admits a factorization $\mathrm{C}_{0}(X,M_{l})\to M_{l}^{t}\stackrel{{\scriptstyle f^{\prime}}}{{\to}}M_{k}.$ Note that since $V_{f}=V_{f^{\prime}}$ we have $f^{\prime}\in G_{k,l,m}([t])$. This means that $f$ is in the image of the map $X^{t}\wedge G_{k,l,m}([t])\to G_{k,l,m}(X)$. Thus $f$ is in the image of the assembly map. Since $f$ was an arbitrary element of $G_{k,l,m}(X)$, we have proved surjectivity of the assembly map. Next we show that the assembly map is injective. Suppose that $(\alpha,g)\in X^{t}\wedge G_{k,l,m}([t])$ and $(\alpha_{1},g_{1})\in X^{t_{1}}\wedge G_{k,l,m}([t_{1}])$ represent two elements of $\int^{[t]\in\mathtt{Fin}_{*}}X^{t}\wedge G_{k,l,m}([t])$ that are mapped to the same element of $G_{k,l,m}(X)$. We have to show that $(\alpha,g)$ and $(\alpha_{1},g_{1})$ represent the same element of $\int^{[t]\in\mathtt{Fin}_{*}}X^{t}\wedge G_{k,l,m}([t])$. Without loss of generality we may assume that the functions $\alpha\colon[t]\to X$ and $\alpha_{1}\colon[t_{1}]\to X$ are injective. Indeed, suppose for example that $\alpha$ is not injective. Then $\alpha$ can be factored as a surjection followed by an injection, say $[t]\twoheadrightarrow[t^{\prime}]\stackrel{{\scriptstyle\alpha^{\prime}}}{{\hookrightarrow}}X$. Let $g^{\prime}$ be the image of $g$ under the map $G_{k,l,m}([t])\to G_{k,l,m}([t^{\prime}])$. Then $(\alpha^{\prime},g^{\prime})$ represents the same element as $(\alpha,g)$ in the coend.
Assuming $\alpha$ and $\alpha_{1}$ are injective, let $f$ be the common image of $(\alpha,g)$ and $(\alpha_{1},g_{1})$ in $G_{k,l,m}(X)$. Then there is a unique subset $F\subset X\setminus\\{*\\}$ such that $f$ factors as in Lemma 3.2. By the minimality of $F$, $F$ is contained in the image of $\alpha$ and in the image of $\alpha_{1}$. Choose a pointed bijection $\alpha^{\prime}\colon[t_{0}]\stackrel{{\scriptstyle\cong}}{{\to}}F\cup\\{*\\}$. This bijection factors through $\alpha$ and $\alpha_{1}$. By slight abuse of notation, let us also use $\alpha^{\prime}$ to denote the element of $X^{t_{0}}$ that is the composed map $[t_{0}]\stackrel{{\scriptstyle\alpha^{\prime}}}{{\to}}F\cup\\{*\\}\hookrightarrow X$. Putting it all together, we obtain a commutative diagram. It follows that $(\alpha,g)$ and $(\alpha_{1},g_{1})$ are both mapped to the same element $(\alpha^{\prime},f^{\prime})$ of the coend $\int^{[t]\in\mathtt{Fin}_{*}}X^{t}\wedge G_{k,l,m}([t])$. We have shown that the assembly map (14) is a bijection. Its domain is compact and its codomain is Hausdorff, so it is a homeomorphism. ∎ ###### Corollary 5.8. If $l>k$ then $\mathbb{S}^{k,l}\simeq*$. ###### Proof. It follows immediately from Proposition 4.5 that if $l>k$ then the restriction of $G_{k,l}$ to $\mathtt{Fin}_{*}$ is trivial, i.e., $G_{k,l}([t])\cong*$ for every finite pointed set $[t]$. By Theorem 5.7, $G_{k,l}$ is equivalent to the homotopy left Kan extension of the restriction of $G_{k,l}$ to $\mathtt{Fin^{\leq k}_{*}}$, so $G_{k,l}\simeq*$. Since $\mathbb{S}^{k,l}$ is the stabilization of $G_{k,l}$, it follows that $\mathbb{S}^{k,l}\simeq*$ as well. ∎ ### 5.1 The first stage of the rank filtration The functor $G_{k,l,1}$ is especially well behaved. The following lemma is an easy consequence of Theorem 5.7 taken with $j=1$. We also give a direct proof. ###### Lemma 5.9. Let $l,k\geq 1$ and let $X$ be a finite pointed CW-complex.
Then the assembly map $a_{X}\colon X\wedge G_{k,l,1}(S^{0})\to G_{k,l,1}(X)$ is a homeomorphism. ###### Proof. Since $X\wedge G_{k,l,1}(S^{0})$ is compact and $G_{k,l,1}(X)$ is Hausdorff, it is enough to show that $a_{X}$ is a bijection. Since $M_{l}$ is simple we have $G_{k,l}(S^{0})=\mathtt{SC^{*}}^{\mathrm{inj}}(M_{l},M_{k})_{+}$ where $\mathtt{SC^{*}}^{\mathrm{inj}}(M_{l},M_{k})\subset\mathtt{SC^{*}}(M_{l},M_{k})$ is the space of injective $C^{*}$-algebra maps. Similarly, we get $G_{k,l,1}(S^{0})=\mathtt{SC^{*}}^{,1}(M_{l},M_{k})_{+}$ where $\mathtt{SC^{*}}^{,1}(M_{l},M_{k})\subset\mathtt{SC^{*}}^{\mathrm{inj}}(M_{l},M_{k})$ is the space of maps $f$ with $\mathrm{rank}(f)=1$. We get that $(X\wedge G_{k,l,1}(S^{0}))\smallsetminus\\{\ast\\}=(X\smallsetminus\\{\ast\\})\times\mathtt{SC^{*}}^{,1}(M_{l},M_{k})$ and the element $a_{X}(x_{0},f)\in G_{k,l,1}(X)\subset G_{k,l}(X)$ is the composition $\mathrm{C}_{0}(X,M_{l})\xrightarrow{x_{0}^{*}}M_{l}\xrightarrow{f}M_{k}.$ The injectivity now follows from the uniqueness of the factorization in Lemma 3.2, and the surjectivity from the existence in Lemma 3.2 and Lemma 3.3. ∎ ###### Corollary 5.10. Let $l,k\geq 1$. The natural map $\Sigma^{\infty}G_{k,l,1}(S^{0})\to\mathbb{S}^{k,l,1}$ is an equivalence. Let $\mathbb{P}\operatorname{Inj}$ be the topologically enriched symmetric monoidal category of finite-dimensional Hilbert spaces of positive dimension and isometric embeddings up to scalar. That is, up to isomorphism the objects are given by $\mathbb{C}^{k}$ for $k\in\mathbb{Z}_{\geq 1}$ and $\mathbb{P}\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{k})=U(k)/(U(1)\times U(k-l)).$ The symmetric monoidal structure is given by the tensor product.
We have a topologically enriched symmetric monoidal functor $\textrm{End}\colon\mathbb{P}\operatorname{Inj}\to\mathtt{SC^{*}}$ that sends the Hilbert space $V$ to the $C^{*}$-algebra $\textrm{End}(V)$ of linear maps $V\to V$; the embedding $i\colon V\to W$ is sent to the $*$-homomorphism $\textrm{End}(V)\to\textrm{End}(W)$ sending $A\in\textrm{End}(V)$ to $i\circ A\circ i^{-1}\circ p\in\textrm{End}(W)$, where $p\colon W\to\textrm{Im\,}(i)$ is the orthogonal projection. The monoidal coherence maps of End are given by the natural isomorphisms $\textrm{End}(V_{1})\otimes\textrm{End}(V_{2})\xrightarrow{\sim}\textrm{End}(V_{1}\otimes V_{2})$ and $\mathbb{C}\xrightarrow{\sim}\textrm{End}(\mathbb{C}).$ ###### Lemma 5.11. The map $\mathbb{P}\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{k})_{+}\to\mathtt{SC^{*}}(M_{l},M_{k})=G_{k,l}(S^{0})$ induced by the functor End is an embedding with image $G_{k,l,1}(S^{0})$. ###### Proof. We first show that the map surjects onto $G_{k,l,1}(S^{0})$. Let $i\colon\mathbb{C}^{l}\to\mathbb{C}^{k}$ be an isometric embedding. The map $f_{i}=\textrm{End}(i)\in\mathtt{SC^{*}}(M_{l},M_{k})$ clearly satisfies $V_{f_{i}}=\mathrm{Im}(i)\subseteq\mathbb{C}^{k}$ and thus $\mathrm{rank}(f_{i})=\frac{\dim V_{f_{i}}}{l}=\frac{l}{l}=1$. On the other hand, if $f\in G_{k,l,1}(S^{0})$, then either $f=0$, and thus $f$ is the image of the basepoint, or $\mathrm{rank}(f)=1$, i.e. $\dim V_{f}=l$. In the latter case the map $f$ factors as an isomorphism followed by an injection $M_{l}\xrightarrow{\sim}\textrm{End}(V_{f})\to M_{k}.$ Surjectivity now follows from the fact that every automorphism of $M_{l}$ is inner. We now prove injectivity. First, since $V_{f_{i}}=\mathrm{Im}(i)$, the map $f_{i}$ determines $\mathrm{Im}(i)$. We are thus reduced to showing that if two embeddings $i,j\colon\mathbb{C}^{l}\to\mathbb{C}^{k}$ have the same image $V$ and the induced maps $M_{l}\to\textrm{End}(V)$ are the same, then $i$ and $j$ differ by a scalar.
This follows from the fact that the center of $M_{l}$ is exactly the scalar matrices. We thus get a continuous bijection $\mathbb{P}\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{k})_{+}\to G_{k,l,1}(S^{0})$. Since it has compact domain and Hausdorff target, it is a homeomorphism. ∎ Applying the topological nerve to $\mathbb{P}\operatorname{Inj}$ we get a symmetric monoidal $\infty$-category $\mathbb{P}\operatorname{Inj}_{\infty}$. We thus get that End induces a symmetric monoidal functor $\widetilde{\textrm{End}}\colon\mathbb{P}\operatorname{Inj}_{\infty}^{\mathrm{op}}\to\mathtt{NCW}.$ Composing with the symmetric monoidal functor $\Sigma^{\infty}\colon\mathtt{NCW}\to\mathtt{NSp}$ we get a symmetric monoidal functor $\Sigma^{\infty}\circ\widetilde{\textrm{End}}\colon\mathbb{P}\operatorname{Inj}_{\infty}^{\mathrm{op}}\to\mathtt{NSp}.$ For a closed symmetric monoidal $\infty$-category $\mathcal{M}$ denote by $\mathtt{Cat}^{\otimes}_{\mathcal{M}}$ the $\infty$-category of symmetric monoidal $\mathcal{M}$-enriched $\infty$-categories. If $\mathcal{M}$ and $\mathcal{N}$ are closed symmetric monoidal $\infty$-categories and $a\colon\mathcal{M}\to\mathcal{N}$ is a symmetric monoidal functor which admits a right adjoint $b$, then we have an induced adjunction $a_{!}\colon\mathtt{Cat}^{\otimes}_{\mathcal{M}}\rightleftarrows\mathtt{Cat}^{\otimes}_{\mathcal{N}}\nobreak\mspace{6.0mu}{:}\nonscript\mkern-3.0mu\mathpunct{}\mspace{2.0mu}b_{!}.$ We shall especially use the case where $a=\Sigma^{\infty}_{+}\colon\mathcal{S}\to\mathtt{Sp}$. 
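As a concrete check of the mechanism behind Lemma 5.11: on an isometry $i$ the functor End acts by $A\mapsto iAi^{*}$, which is a $*$-homomorphism, and the result is unchanged when $i$ is rescaled by a unit scalar. A small numpy sketch (the helper names are ours):

```python
import numpy as np

def End(i):
    """End on an isometric embedding i: V -> W, as in the text:
    A |-> i o A o i^{-1} o p, which for matrices is A |-> i A i^*."""
    return lambda A: i @ A @ i.conj().T

# an isometric embedding C^2 -> C^4 (inclusion of the first two coordinates)
i = np.zeros((4, 2), dtype=complex)
i[0, 0] = i[1, 1] = 1.0

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

f = End(i)
assert np.allclose(f(A @ B), f(A) @ f(B))          # multiplicative
assert np.allclose(f(A.conj().T), f(A).conj().T)   # compatible with *

# i and lambda*i (|lambda| = 1) induce the same homomorphism:
# End(i) depends on i only up to scalar, as in the injectivity argument
lam = np.exp(0.7j)
assert np.allclose(End(lam * i)(A), f(A))
```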
Using the identification $\mathtt{Cat}^{\otimes}_{\mathcal{S}}\cong\mathtt{Cat}^{\otimes}$, we obtain an adjunction $(\Sigma_{+}^{\infty})_{!}\colon\mathtt{Cat}^{\otimes}\leftrightarrows\mathtt{Cat}^{\otimes}_{\mathtt{Sp}}\nobreak\mspace{6.0mu}{:}\nonscript\mkern-3.0mu\mathpunct{}\mspace{2.0mu}(\Omega^{\infty})_{!}.$ Given a symmetric monoidal $\infty$-category $\mathcal{C}\in\mathtt{Cat}^{\otimes}$ we denote $\mathcal{C}^{\mathtt{Sp}}:=(\Sigma^{\infty}_{+})_{!}(\mathcal{C})\in\mathtt{Cat}^{\otimes}_{\mathtt{Sp}}.$ As a stable $\infty$-category, $\mathtt{NSp}$ is naturally $\mathtt{Sp}$-enriched. Thus we have natural equivalences $\operatorname{Map}_{\mathtt{Cat}^{\otimes}_{\mathtt{Sp}}}((\mathbb{P}\operatorname{Inj}^{\mathtt{Sp}}_{\infty})^{\mathrm{op}},\mathtt{NSp})\simeq\operatorname{Map}_{\mathtt{Cat}^{\otimes}_{\mathtt{Sp}}}((\Sigma_{+}^{\infty})_{!}(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathrm{op}}),\mathtt{NSp})\simeq$ $\operatorname{Map}_{\mathtt{Cat}^{\otimes}}(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathrm{op}},(\Omega^{\infty})_{!}\mathtt{NSp})\simeq\operatorname{Map}_{\mathtt{Cat}^{\otimes}}(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathrm{op}},\mathtt{NSp}).$ We denote the mate of $\Sigma^{\infty}\circ\widetilde{\textrm{End}}\in\operatorname{Map}_{\mathtt{Cat}^{\otimes}}(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathrm{op}},\mathtt{NSp})$ under this adjunction by $\widetilde{\mathrm{E}}\colon(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}})^{\mathrm{op}}\to\mathtt{NSp}.$ This is a symmetric monoidal $\mathtt{Sp}$-enriched functor. ###### Lemma 5.12.
We get the following commutative diagram for every $k,l\geq 1$ $\begin{array}[]{ccc}\operatorname{Map}_{(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}})^{\mathrm{op}}}(\mathbb{C}^{k},\mathbb{C}^{l})&\xrightarrow{\widetilde{\mathrm{E}}}&\operatorname{Map}_{\mathtt{NSp}}(\Sigma^{\infty}M^{k},\Sigma^{\infty}M^{l})\\\ \wr\downarrow&&\wr\downarrow\\\ \Sigma^{\infty}_{+}\operatorname{Map}_{\mathbb{P}\operatorname{Inj}_{\infty}}(\mathbb{C}^{l},\mathbb{C}^{k})&\xrightarrow{\textrm{End}}&\Sigma^{\infty}\operatorname{Map}_{\mathtt{NCW}}(M^{k},M^{l})\\\ \wr\downarrow&&\wr\downarrow\\\ \Sigma^{\infty}G_{k,l,1}(S^{0})&\longrightarrow&\Sigma^{\infty}G_{k,l}(S^{0})\\\ \wr\downarrow&&\downarrow\\\ \partial_{1}(G_{k,l,1})&\longrightarrow&\partial_{1}(G_{k,l})\end{array}$ ###### Proof. The commutation of the bottom square is clear. The middle square is a consequence of Lemma 5.11. To see the commutation of the top square, consider the following diagram in $\mathtt{Cat}_{\mathtt{Sp}}^{\otimes}$.
$\begin{array}[]{ccc}(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}})^{\mathrm{op}}&\longrightarrow&(\Sigma^{\infty})_{!}(\mathtt{NCW})\\\ &\searrow&\downarrow\\\ &&\mathtt{NSp}\end{array}$ ∎ ## 6 Connection with topological $K$-theory In this section we show that for every fixed $l$, the spectrum $\mathbb{S}^{\infty,l}:=\mathop{\operatorname{colim}}_{k\to\infty}\mathbb{S}^{k,l}$ is equivalent to the connective $K$-theory spectrum $ku$.
We will use this observation to show how the spectra $\mathbb{S}^{\infty,l}$ together represent $K$-theory of noncommutative CW-complexes. Recall that for a pointed finite set $[t]$, $G_{k,l}([t])=\mathtt{SC^{*}}(M_{l}^{t},M_{k})\cong\bigvee_{(m_{1},\ldots,m_{t})}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}.$ The inclusions of algebras $M_{k}\to M_{k+1}$ that send a matrix $a$ to $a\oplus 0$ induce a sequence of natural transformations, for each fixed $l$, $\cdots\to G_{k,l}\to G_{k+1,l}\to\cdots.$ ###### Definition 6.1. For each fixed $l$, define the functor $G_{\infty,l}:=\mathop{\operatorname{colim}}_{k\to\infty}G_{k,l}$. Here by $\mathop{\operatorname{colim}}$ we mean strict rather than homotopy colimit. ###### Lemma 6.2. $G_{\infty,l}$ is equivalent to the homotopy colimit $\mathop{\operatorname{hocolim}}_{k\to\infty}G_{k,l}$. Moreover there is a homeomorphism (where as usual the wedge sum is indexed on the set of non-zero $t$-tuples of non-negative integers) $G_{\infty,l}([t])=\bigvee_{(m_{1},\ldots,m_{t})}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{\infty})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}$ and a homotopy equivalence $G_{\infty,l}([t])\simeq\bigvee_{(m_{1},\ldots,m_{t})}BU(m_{1})\times\cdots\times BU(m_{t})_{+}.$ ###### Proof. The map $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}\to\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k+1})/_{\prod_{i=1}^{t}U(m_{i})}$ is an inclusion of a submanifold, and in particular a cofibration. It follows that the colimit is equivalent to the homotopy colimit. 
As $k$ goes to $\infty$, the colimit of $G_{k,l}$ is, by definition, homeomorphic to $\bigvee_{(m_{1},\ldots,m_{t})}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{\infty})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}.$ Note that $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{\infty})$ is a contractible space with a free action of $\prod_{i=1}^{t}U(m_{i})$, and moreover the quotient map $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{\infty})\to\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{\infty})/_{\prod_{i=1}^{t}U(m_{i})}$ is a fiber bundle. It follows that the quotient space is equivalent to the classifying space $BU(m_{1})\times\cdots\times BU(m_{t})$. ∎ Recall $\mathbb{S}^{k,l}$ is the stabilization of $G_{k,l}$. We define $\mathbb{S}^{\infty,l}$ accordingly. ###### Definition 6.3. The spectrum $\mathbb{S}^{\infty,l}$ is defined as follows $\mathbb{S}^{\infty,l}:=\mathop{\operatorname{colim}}_{k\to\infty}\mathbb{S}^{k,l}.$ Since stabilization commutes with homotopy colimits of pointed functors, $\mathbb{S}^{\infty,l}$ is the stabilization of $G_{\infty,l}$. It is worth noting that the homotopy type of $\mathbb{S}^{\infty,l}$ is independent of $l$. Indeed, the following statement is an immediate consequence of Lemma 6.2. ###### Corollary 6.4. A choice of an inclusion $\mathbb{C}\hookrightarrow\mathbb{C}^{l}$ induces an equivalence of $\Gamma$-spaces $G_{\infty,l}\to G_{\infty,1}$. In fact, the spectrum $\mathbb{S}^{\infty,l}$ is homotopy equivalent to the connective $K$-theory spectrum $ku$ for each $l$. We record this observation in a lemma. ###### Lemma 6.5. For each $l$ the spectrum $\mathbb{S}^{\infty,l}$ is homotopy equivalent to $ku$. ###### Proof. The $\Gamma$-space $G_{\infty,1}$ is equivalent to the $\Gamma$-space constructed by Segal in [Se1, Section 2]. It is, in Segal's terminology, a special $\Gamma$-space. 
This means that for any two pointed finite sets $[s],[t]$, the natural map $G_{\infty,1}([s]\vee[t])\to G_{\infty,1}([s])\times G_{\infty,1}([t])$ is an equivalence. It follows from Lemma 6.2 that $G_{\infty,1}([1])\cong\bigvee_{m=1}^{\infty}BU(m)_{+}.$ So $\mathbb{S}^{\infty,1}$ is the spectrum associated with the group completion of $\bigvee_{m=1}^{\infty}BU(m)_{+}$, which is well-known to be $ku$. By Corollary 6.4, it follows that $\mathbb{S}^{\infty,l}\simeq ku$ for all $l$. ∎ It follows that for each $k,l$ there is a natural map $\mathbb{S}^{k,l}\to\mathbb{S}^{\infty,l}\simeq ku$. We will show later, after we analyze the subquotients of the rank filtration, that this map induces an isomorphism on $\pi_{0}$ (Lemma 9.4). In particular, it is not trivial. #### $K$-theory of noncommutative CW-complexes. We will now discuss how the spectra $\mathbb{S}^{\infty,l}$ can be used to represent the $K$-theory functor on the category $\mathtt{NSp}$. In this subsection we consider $\mathcal{M}$ as an $\infty$-category and use the $\infty$-categorical picture.
Consider the enriched Yoneda embedding of $\mathcal{M}^{\operatorname{op}}$ (which is an $\mathtt{Sp}^{rev}$-functor): $Y\colon\mathcal{M}^{\operatorname{op}}\to\operatorname{Fun}_{\mathtt{Sp}}(\mathcal{M},\mathtt{Sp}).$ The sequence of algebras $\cdots\to M_{k}\to M_{k+1}\to\cdots$ gives rise to a direct sequence in $\mathcal{M}^{\operatorname{op}}$: $\cdots\to\Sigma^{\infty}_{\mathtt{NC}}M_{k}\to\Sigma^{\infty}_{\mathtt{NC}}M_{k+1}\to\cdots$ Applying the Yoneda embedding $Y$ to this sequence, we obtain a sequence of spectral functors $Y(\Sigma^{\infty}_{\mathtt{NC}}M_{k})\in\operatorname{Fun}_{\mathtt{Sp}}(\mathcal{M},\mathtt{Sp})$, that are characterized by the following property $Y(\Sigma^{\infty}_{\mathtt{NC}}M_{k})(\Sigma^{\infty}_{\mathtt{NC}}M_{l})\simeq\operatorname{Hom}_{\mathcal{M}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})=\mathbb{S}^{k,l}.$ The homotopy colimit of this sequence is a spectral functor that will represent connective $K$-theory. Let us give this functor a name: ###### Definition 6.6. We define the spectrally enriched functor $\mathtt{ku}\in\operatorname{Fun}_{\mathtt{Sp}}(\mathcal{M},\mathtt{Sp})$ to be the following colimit: $\mathtt{ku}=\mathop{\operatorname{colim}}_{k\to\infty}Y(\Sigma^{\infty}_{\mathtt{NC}}M_{k}).$ Note that for every $l$ we have the following equivalences, the last of which follows from Lemma 6.5 and justifies the notation $\mathtt{ku}$ for this functor. $\mathtt{ku}(\Sigma^{\infty}_{\mathtt{NC}}M_{l})\simeq\mathop{\operatorname{colim}}_{k\to\infty}\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})=\mathbb{S}^{\infty,l}\simeq ku.$ We know by the main result of [ABS1] that we can identify $\mathtt{NSp}$ with $P_{\mathtt{Sp}}(\mathcal{M})$, the category of contravariant spectrally enriched functors from $\mathcal{M}$ to $\mathtt{Sp}$.
This implies that covariant enriched functors from $\mathcal{M}$ to $\mathtt{Sp}$, such as $\mathtt{ku}$, can be used to define homology theories on $\mathtt{NSp}$, and therefore also on $\mathtt{NCW}$. Indeed suppose $h\colon\mathcal{M}\to\mathtt{Sp}$ is a covariant $\mathtt{Sp}$-functor. Then $h$ determines an $\mathtt{Sp}$-tensored functor $h\wedge_{\mathcal{M}}(-):\mathtt{NSp}\to\mathtt{Sp}$ by the universal property of the enriched Yoneda embedding, such that there is a natural equivalence $h(k)\wedge_{k\in\mathcal{M}}\mathbb{S}^{k,l}\simeq h(\Sigma^{\infty}_{\mathtt{NC}}M_{l}).$ The universal property also tells us that this induces an equivalence between the $\infty$-category of $\mathtt{Sp}$-tensored functors $\mathtt{NSp}\to\mathtt{Sp}$ and the $\infty$-category of $\mathtt{Sp}$-functors $\mathcal{M}\to\mathtt{Sp}$. By taking homotopy groups $\pi_{*}(h\wedge_{\mathcal{M}}(-))$ one obtains a generalized homology theory on noncommutative CW-spectra (see [BJM, Definition 4.1]). In particular, let us take $h=\mathtt{ku}$. We interpret the property that $\mathtt{ku}(\Sigma^{\infty}_{\mathtt{NC}}M_{l})\simeq ku$ for all $l$ as saying that $\mathtt{ku}$ represents a version of connective $K$-theory. We have a further enhancement of this fact. ###### Lemma 6.7. The functor $\mathtt{ku}$ takes values in the category of $ku$-module spectra. ###### Proof. The monoidal structure on $\mathcal{M}$ gives rise to maps $\mathbb{S}^{k^{\prime},l^{\prime}}\wedge\mathbb{S}^{k,l}\to\mathbb{S}^{kk^{\prime},ll^{\prime}}$. Fixing $l^{\prime}=1$ we obtain maps $\mathbb{S}^{k^{\prime},1}\wedge\mathbb{S}^{k,l}\to\mathbb{S}^{kk^{\prime},l}$, natural in $l$. 
Taking colimits as $k,k^{\prime}\to\infty$ we obtain maps, still natural in $\Sigma^{\infty}_{\mathtt{NC}}M_{l}$: $\mathtt{ku}(\Sigma^{\infty}_{\mathtt{NC}}M_{1})\wedge\mathtt{ku}(\Sigma^{\infty}_{\mathtt{NC}}M_{l})\to\mathtt{ku}(\Sigma^{\infty}_{\mathtt{NC}}M_{l}).$ Identifying $\mathtt{ku}(\Sigma^{\infty}_{\mathtt{NC}}M_{1})$ with $ku$, this map endows $\mathtt{ku}(\Sigma^{\infty}_{\mathtt{NC}}M_{l})$ with the structure of a $ku$-module, functorial in $\Sigma^{\infty}_{\mathtt{NC}}M_{l}$. ∎ The $\infty$-category of $ku$-module spectra $ku\mathrm{-mod}$ is stable and thus $\mathtt{Sp}$-tensored. In light of Lemma 6.7, we may now view $\mathtt{ku}$ as an $\mathtt{Sp}$-functor $\mathtt{ku}\colon\mathcal{M}\to ku\mathrm{-mod}$. Thus, by the universal property of the enriched Yoneda embedding, $\mathtt{ku}$ determines an $\mathtt{Sp}$-tensored functor $\mathtt{ku}\wedge_{\mathcal{M}}(-):\mathtt{NSp}\to ku\mathrm{-mod}.$ This allows for the following definition: ###### Definition 6.8. For $X,Y\in\mathtt{NSp}$ we define $kk(X,Y):=[\mathtt{ku}\wedge_{\mathcal{M}}X,\mathtt{ku}\wedge_{\mathcal{M}}Y]_{ku\mathrm{-mod}},$ $kk_{*}(X,Y)=\pi_{*}\operatorname{Map}_{ku\mathrm{-mod}}(\mathtt{ku}\wedge_{\mathcal{M}}X,\mathtt{ku}\wedge_{\mathcal{M}}Y)=kk(\Sigma^{*}X,Y).$ It follows from Lemma 6.7 that for all $k,l\geq 1$ $kk_{*}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})\simeq\pi_{*}\operatorname{Map}_{ku\mathrm{-mod}}(ku,ku)\simeq\pi_{*}ku.$ Recall from Remark 2.4 that the category of commutative spectra is identified with the category of spectral presheaves on the full $\mathtt{Sp}$-enriched subcategory of $\mathcal{M}$ containing the object $\Sigma^{\infty}_{\mathtt{NC}}M_{1}$. A commutative spectrum is made noncommutative by means of an $\mathtt{Sp}$-enriched left Kan extension. Suppose that $X$ is a commutative spectrum, which we may also consider as a noncommutative spectrum by Kan extension.
Standard adjunction implies that there is a natural equivalence $\mathtt{ku}\wedge_{\mathcal{M}}X\simeq ku\wedge X,$ where the symbol $\wedge$ on the right-hand side denotes the usual smash product of spectra. It follows that if $X$ and $Y$ are commutative spectra then there is an equivalence $kk_{*}(X,Y)\simeq\pi_{*}\operatorname{Map}_{\mathtt{Sp}}(X,ku\wedge Y).$ In the case when $X$ and $Y$ are suspension spectra of pointed finite CW-complexes, this is essentially [DM, Proposition 3.1]. Thus $kk$ defined above is a natural extension of the connective bivariant $K$-theory of Dadarlat and McClure. This also suggests that some results from [loc. cit.] may be generalized from finite CW-complexes to finite noncommutative CW-complexes. This will be addressed in future papers. We conclude by remarking that since the functor $\mathtt{ku}$ takes values in modules over $ku$, one can invert the Bott element and get a functor representing non-connective $K$-theory. #### 6.0.1 Connection to the rank filtration of $ku$ It follows immediately from equation (10) that the filtration of $G_{k,l}$ by $G_{k,l,m}$ interacts well with the stabilization map $G_{k,l}\to G_{k+1,l}$ that we considered in the previous section.
To be more specific, there is a commutative diagram $\begin{array}[]{ccccccccc}\cdots&\to&G_{k,l,m}&\to&G_{k,l,m+1}&\to&\cdots&\to&G_{k,l}\\\ &&\downarrow&&\downarrow&&\cdots&&\downarrow\\\ \cdots&\to&G_{k+1,l,m}&\to&G_{k+1,l,m+1}&\to&\cdots&\to&G_{k+1,l}\\\ &&\downarrow&&\downarrow&&\cdots&&\downarrow\\\ &&\vdots&&\vdots&&\cdots&&\vdots\\\ \cdots&\to&G_{\infty,l,m}&\to&G_{\infty,l,m+1}&\to&\cdots&\to&G_{\infty,l}.\end{array}$ On finite sets, $G_{\infty,l,m}$ is given, at least up to homotopy, by the following formula $G_{\infty,l,m}([t])\simeq\bigvee_{\\{(m_{1},\ldots,m_{t})\mid m_{1}+\cdots+m_{t}\leq m\\}}BU(m_{1})\times\cdots\times BU(m_{t})_{+}.$ We conclude that the rank filtration of $G_{k,l}$ induces a compatible rank filtration of $G_{\infty,l}$. Upon passing to stabilization, the rank filtration of $G_{\infty,l}$ induces a filtration of $ku$ by a sequence of spectra: $\cdots\to\mathbb{S}^{\infty,l,m}\to\mathbb{S}^{\infty,l,m+1}\to\cdots\to\mathbb{S}^{\infty,l}\simeq ku.$ The latter filtration is the familiar rank filtration of the $K$-theory spectrum $ku$ studied, for example, in [AL2] (we remark that $ku$ is denoted $bu$ in [loc. cit.]). Thus the rank filtration of $\mathbb{S}^{k,l}$ is a lift of the classical rank filtration of $ku$. ## 7 Subquotients of the rank filtration In this section we investigate the subquotients of the rank filtration. We show that on the level of the functors $G_{k,l}$ the subquotients of the rank filtration have a presentation as a homotopy coend over the category $\mathtt{Epi}$ of finite sets and surjections, as opposed to the category $\mathtt{Fin}_{*}$ of pointed sets and all pointed functions. ###### Definition 7.1. Let $G_{k,l}^{m}\colon\mathtt{CW}_{*}\to\operatorname{Top}$ be the quotient functor $G_{k,l}^{m}:=G_{k,l,m}/G_{k,l,m-1}.$ Similarly, let $\mathbb{S}^{k,l}_{m}$ be the homotopy cofiber of the map $\mathbb{S}^{k,l,m-1}\to\mathbb{S}^{k,l,m}$.
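For small parameters the indexing above is easy to enumerate explicitly: $G_{\infty,l,m}([t])$ is indexed by the weak compositions $(m_{1},\ldots,m_{t})$ with $m_{1}+\cdots+m_{t}\leq m$, and the subquotient $G_{k,l}^{m}$ of Definition 7.1 retains exactly the summands of total size $m$. A minimal Python sketch of this bookkeeping (illustrative only; the function names are ours, not the paper's):

```python
from itertools import product

def weak_compositions_up_to(t, m):
    """All t-tuples of non-negative integers with sum at most m:
    the index set of the wedge describing G_{infinity,l,m}([t])."""
    return [c for c in product(range(m + 1), repeat=t) if sum(c) <= m]

def subquotient_indices(t, m):
    """Tuples with sum exactly m: the summands surviving in the
    subquotient G_{k,l,m}/G_{k,l,m-1}."""
    return [c for c in weak_compositions_up_to(t, m) if sum(c) == m]

below = weak_compositions_up_to(2, 3)
exact = subquotient_indices(2, 3)
print(len(below), exact)  # 10 [(0, 3), (1, 2), (2, 1), (3, 0)]
```

The tuples of total size strictly less than $m$ are exactly those collapsed to the basepoint in the quotient.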
It is not hard to check that for any $X\in\mathtt{CW}_{*}$ the map $G_{k,l,m-1}(X)\to G_{k,l,m}(X)$ is a cofibration in $\operatorname{Top}$. Thus, $G_{k,l}^{m}$ is the levelwise homotopy cofiber of the map $G_{k,l,m-1}\to G_{k,l,m}$ in the category of pointed continuous functors $\operatorname{Fun}_{*}(\mathtt{CW}_{*},\operatorname{Top})$. It follows from [ADL, Lemma 2.16] that $G_{k,l}^{m}$ is also the homotopy cofiber of the map $G_{k,l,m-1}\to G_{k,l,m}$ in the projective model structure on $\operatorname{Fun}_{*}(\mathtt{CW}_{*},\operatorname{Top})$, and thus also in the stable model structure $\mathtt{Sp^{M}}$. Since strict (homotopy) left Kan extension commutes with strict (homotopy) cofiber, we get from Theorem 5.7 that the functor $G_{k,l}^{m}$ is equivalent to both the strict and the derived left Kan extension of its restriction to ${\mathtt{Fin}_{*}^{\leq j}}$ for any $\infty\geq j\geq\min(m,\lfloor\frac{k}{l}\rfloor)$. Thus, for any finite pointed CW-complex $X$, we have the following formula: $G_{k,l}^{m}(X)\cong\int^{[t]\in\mathtt{Fin}_{*}^{\leq j}}_{\mathrm{s}}X^{t}\wedge G_{k,l}^{m}([t])\simeq\int^{[t]\in\mathtt{Fin}_{*}^{\leq j}}_{\mathrm{h}}X^{t}\wedge G_{k,l}^{m}([t]).$ (15) Since stabilization commutes with homotopy cofibers, $\mathbb{S}^{k,l}_{m}$ is equivalent to the stabilization of $G_{k,l}^{m}$. It follows immediately from equation (10) that on objects $G_{k,l}^{m}$ is given as follows $G_{k,l}^{m}([t])=\bigvee_{\\{(m_{1},\ldots,m_{t})\mid m_{1}+\cdots+m_{t}=m\\}}{\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+},$ (16) which can also be written as $G_{k,l}^{m}([t])=\bigvee_{\\{(m_{1},\ldots,m_{t})\mid m_{1}+\cdots+m_{t}=m\\}}{\operatorname{Inj}(\mathbb{C}^{ml},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}.$ To understand the effect of $G_{k,l}^{m}$ on morphisms, let $\alpha\colon[t]\to[s]$ be a pointed function.
If $|\alpha_{*}(m_{1},\ldots,m_{t})|<m$, then $G_{k,l}^{m}(\alpha)$ takes the summand of $G_{k,l}^{m}([t])$ corresponding to $(m_{1},\ldots,m_{t})$ to the basepoint. If $|\alpha_{*}(m_{1},\ldots,m_{t})|=m$, then $G_{k,l}^{m}(\alpha)$ takes the corresponding summand of $G_{k,l}^{m}([t])$ to the summand of $G_{k,l}^{m}([s])$ indexed by $\alpha_{*}(m_{1},\ldots,m_{t})$ by the map (5). One attractive property of $G_{k,l}^{m}$ is that it has a more compact coend formula than the one given in equation (15) (see Proposition 7.5). In this formula the category $\mathtt{Fin}_{*}$ is replaced with the smaller category $\mathtt{Epi}$ of non-empty finite unpointed sets and surjections. For $k\leq\infty$ let $\mathtt{Epi}^{\leq k}$ be the category of non-empty finite sets of cardinality at most $k$ and epimorphisms between them. In the case $k=\infty$, this is the category of all non-empty finite sets and surjections, and we denote it simply $\mathtt{Epi}$. If $\alpha\colon\underline{t}\twoheadrightarrow\underline{s}$ is a morphism in $\mathtt{Epi}$, and $(m_{1},\ldots,m_{t})$ is a multiset, then we may define $\alpha_{*}(m_{1},\ldots,m_{t})=(n_{1},\ldots,n_{s})$ in the usual way, by saying that $n_{j}=\sum_{i\in\alpha^{-1}(j)}m_{i}$. Note that in this case there is an equality $m_{1}+\cdots+m_{t}=n_{1}+\cdots+n_{s}$. Note also that if $m_{i}>0$ for all $i$ then $n_{j}>0$ for all $j$. ###### Definition 7.2. Let $\operatorname{Top_{u}}$ be the category of unpointed topological spaces. Let $\mathfrak{G}_{k,l}^{m}\colon\mathtt{Epi}\to\operatorname{Top_{u}}$ be the following functor. On objects, it is defined by the following formula $\mathfrak{G}_{k,l}^{m}(\underline{t})=\coprod_{\\{(m_{1},\ldots,m_{t})\mid m_{i}>0,\ m_{1}+\cdots+m_{t}=m\\}}{\operatorname{Inj}(\mathbb{C}^{ml},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}.$ On morphisms, $\mathfrak{G}_{k,l}^{m}$ is defined similarly to $G_{k,l}$ and $G_{k,l}^{m}$.
Given a surjection $\alpha\colon\underline{t}\twoheadrightarrow\underline{s}$, the summand indexed by $(m_{1},\ldots,m_{t})$ is mapped to the summand indexed by $\alpha_{*}(m_{1},\ldots,m_{t})$ by the same map as in (5). We will make much use of the functor ${\mathfrak{G}_{k,l}^{m}}_{+}\colon\mathtt{Epi}\to\operatorname{Top}$, which is obtained by adding a disjoint basepoint to $\mathfrak{G}_{k,l}^{m}$. We introduce the unpointed version of the functor because it will be convenient later. ###### Remark 7.3. For a multiset $(m_{1},\ldots,m_{t})$ define its support to be the set $A=\\{i\mid m_{i}>0\\}\subseteq\underline{t}$. Notice that for any subset $A\subseteq\underline{t}$ there is a natural way to identify ${\mathfrak{G}_{k,l}^{m}(A)}_{+}$ with a wedge summand of $G_{k,l}^{m}([t])$. Namely, ${\mathfrak{G}_{k,l}^{m}(A)}_{+}$ is identified, on the right-hand side of (16), with the wedge sum of the summands corresponding to indices $(m_{1},\ldots,m_{t})$ whose support is exactly $A$. With this identification, there is a homeomorphism $G_{k,l}^{m}([t])\cong\bigvee_{A\subseteq\underline{t}}{\mathfrak{G}_{k,l}^{m}(A)}_{+}.$ Moreover, the functoriality in $[t]$ is defined on the right-hand side as follows. Suppose $\alpha\colon[t]\to[s]$ is a pointed function. Suppose $A\subseteq\underline{t}$. If $\alpha$ sends some element of $A$ to the basepoint of $[s]$, then the corresponding summand $\mathfrak{G}_{k,l}^{m}(A)$ is sent to the basepoint. Otherwise, this summand is sent to $\mathfrak{G}_{k,l}^{m}(\alpha(A))$ using the surjection $A\twoheadrightarrow\alpha(A)$ defined by $\alpha$. Consider the functor $\begin{array}[]{ccc}\mathtt{Fin}_{*}\times\mathtt{Epi}^{\operatorname{op}}&\to&\operatorname{Top}\\\ ([t],u)&\mapsto&[t]^{\wedge u}\end{array}$ One can think of this functor as a $\mathtt{Fin}_{*}-\mathtt{Epi}$-bimodule. It is often used to establish connections between categories of $\mathtt{Fin}_{*}$-modules and $\mathtt{Epi}$-modules.
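The combinatorics underlying Definition 7.2 and Remark 7.3 is elementary: the pushforward $\alpha_{*}(m_{1},\ldots,m_{t})=(n_{1},\ldots,n_{s})$ with $n_{j}=\sum_{i\in\alpha^{-1}(j)}m_{i}$ preserves the total size and takes positive tuples to positive tuples, while the support of a tuple records which summand of $G_{k,l}^{m}([t])$ it contributes to. A small Python sketch of these operations (the function names are ours, not the paper's):

```python
def pushforward(alpha, m):
    """Compute alpha_*(m_1, ..., m_t) for a surjection
    alpha: {0, ..., t-1} ->> {0, ..., s-1}, given as a list of values:
    n_j is the sum of m_i over i in the fiber alpha^{-1}(j)."""
    n = [0] * (max(alpha) + 1)
    for i, j in enumerate(alpha):
        n[j] += m[i]
    return tuple(n)

def support(m):
    """The support {i | m_i > 0} of a tuple, as in Remark 7.3."""
    return frozenset(i for i, mi in enumerate(m) if mi > 0)

alpha = [0, 0, 1]   # a surjection {0, 1, 2} ->> {0, 1}
m = (2, 0, 3)
n = pushforward(alpha, m)
print(n, sum(n) == sum(m), sorted(support(m)))  # (2, 3) True [0, 2]
```

Both observations of the text (preservation of total size, and positivity of the $n_{j}$ when all $m_{i}>0$) are immediate from this description.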
We have the following proposition: ###### Proposition 7.4. There is a homeomorphism and an equivalence, natural in $[t]$ ranging over $\mathtt{Fin}_{*}$: $G_{k,l}^{m}([t])\cong\int^{u\in\mathtt{Epi}}_{\mathrm{s}}[t]^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}\simeq\int^{u\in\mathtt{Epi}}_{\mathrm{h}}[t]^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}$ ###### Proof. First of all, let us construct a natural map. Fix a surjective function $u_{1}\twoheadrightarrow u_{2}$. One has a map $[t]^{\wedge u_{2}}\wedge{\mathfrak{G}_{k,l}^{m}(u_{1})}_{+}\to G_{k,l}^{m}([t]).$ The map is defined as follows. First, the surjection $u_{1}\twoheadrightarrow u_{2}$ induces a map $\mathfrak{G}_{k,l}^{m}(u_{1})\twoheadrightarrow\mathfrak{G}_{k,l}^{m}(u_{2})$, and therefore $[t]^{\wedge u_{2}}\wedge{\mathfrak{G}_{k,l}^{m}(u_{1})}_{+}\to[t]^{\wedge u_{2}}\wedge{\mathfrak{G}_{k,l}^{m}(u_{2})}_{+}$. Second, there is a bijection of sets $[t]^{\wedge u_{2}}\cong{\underline{t}^{u_{2}}}_{+}$, so a non-basepoint element of this set is a map $f\colon u_{2}\to\underline{t}$. This defines a map $\mathfrak{G}_{k,l}^{m}(u_{2})\to\mathfrak{G}_{k,l}^{m}(f(u_{2}))$. Finally, there is an inclusion of ${\mathfrak{G}_{k,l}^{m}(f(u_{2}))}_{+}$ as a wedge summand of $G_{k,l}^{m}([t])$, as in Remark 7.3. The map is natural in the variable $[t]$ (exercise for the reader). The following diagram commutes because $\mathfrak{G}_{k,l}^{m}$ is functorial with respect to surjections.
$\begin{array}[]{ccc}[t]^{\wedge u_{2}}\wedge{\mathfrak{G}_{k,l}^{m}(u_{1})}_{+}&\to&[t]^{\wedge u_{2}}\wedge{\mathfrak{G}_{k,l}^{m}(u_{2})}_{+}\\\ \downarrow&&\downarrow\\\ [t]^{\wedge u_{1}}\wedge{\mathfrak{G}_{k,l}^{m}(u_{1})}_{+}&\to&G_{k,l}^{m}([t])\end{array}$ It follows that there is a natural transformation of functors of $[t]$ (recall that $\int_{\mathrm{s}}$ denotes strict coend) $\int^{u\in\mathtt{Epi}}_{\mathrm{s}}[t]^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}\to G_{k,l}^{m}([t]).$ (17) We claim that this map is in fact an isomorphism. To see this, notice that for each fixed $t$ there is an isomorphism of functors of $u$ $[t]^{\wedge u}={\underline{t}^{u}}_{+}\cong\bigvee_{A\subseteq\underline{t}}\mathtt{Epi}(u,A)_{+}$ (18) For each $A$, the functor $u\mapsto\mathtt{Epi}(u,A)$ is a representable functor $\mathtt{Epi}^{\operatorname{op}}\to\operatorname{Top}$. By the coYoneda lemma, there is an isomorphism $\int^{u\in\mathtt{Epi}}_{\mathrm{s}}[t]^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}\cong\bigvee_{A\subseteq\underline{t}}{\mathfrak{G}_{k,l}^{m}(A)}_{+}$ and the right-hand side is identified with $G_{k,l}^{m}([t])$, again as in Remark 7.3. It follows that the map (17) is in fact an isomorphism. On the other hand, Equation (18) shows that the functor $u\mapsto[t]^{\wedge u}$ is cofibrant in the projective model structure on the functor category $[\mathtt{Epi}^{\operatorname{op}},\operatorname{Top}]$. It follows that the natural map from the homotopy coend to the strict coend is an equivalence: $\int^{u\in\mathtt{Epi}}_{\mathrm{h}}[t]^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}\stackrel{{\scriptstyle\simeq}}{{\to}}\int^{u\in\mathtt{Epi}}_{\mathrm{s}}[t]^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}.$ ∎ As a consequence, we have a simplified coend formula for $G_{k,l}^{m}(X)$ where $X$ is a CW complex. ###### Proposition 7.5. Let $X$ be a finite pointed CW-complex.
There is a homeomorphism and an equivalence, natural in $X$ $G_{k,l}^{m}(X)\cong\int^{u\in\mathtt{Epi}}_{\mathrm{s}}X^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}\simeq\int^{u\in\mathtt{Epi}}_{\mathrm{h}}X^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}.$ The statement remains true if $\mathtt{Epi}$ is replaced with $\mathtt{Epi}^{\leq k}$. ###### Proof. We prove the part for the homotopy coend, and the proof of the strict part is identical. By the standard coend formula (see equation 15), there is an equivalence $G_{k,l}^{m}(X)\simeq\int^{[t]\in\mathtt{Fin}_{*}}_{\mathrm{h}}X^{[t]}\wedge G_{k,l}^{m}([t]).$ By Proposition 7.4, there is an equivalence $G_{k,l}^{m}([t])\simeq\int^{u\in\mathtt{Epi}}_{\mathrm{h}}[t]^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}.$ It follows that there is an equivalence $G_{k,l}^{m}(X)\simeq\int^{[t]\in\mathtt{Fin}_{*}}_{\mathrm{h}}X^{[t]}\wedge\left(\int^{u\in\mathtt{Epi}}_{\mathrm{h}}[t]^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}\right).$ By associativity of coend (``Fubini theorem''), the right hand side is equivalent to $\int^{u\in\mathtt{Epi}}_{\mathrm{h}}\left(\int^{[t]\in\mathtt{Fin}_{*}}_{\mathrm{h}}X^{[t]}\wedge[t]^{\wedge u}\right)\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}.$ It remains to show that there is a natural equivalence $\int^{[t]\in\mathtt{Fin}_{*}}_{\mathrm{h}}X^{[t]}\wedge[t]^{\wedge u}\simeq X^{\wedge u}.$ This is elementary. The argument goes as follows. The set $[t]^{\wedge u}$ is equivalent to the total homotopy cofiber of the cubical diagram $A\mapsto\mathtt{Fin}_{*}(A_{+},[t])$, where $A$ ranges over subsets of $u$, and the maps are induced by collapsing the complement of a subset to the basepoint. By the coYoneda lemma, $\int^{[t]\in\mathtt{Fin}_{*}}_{\mathrm{h}}X^{[t]}\wedge\mathtt{Fin}_{*}(A_{+},[t])\simeq X^{A}$. 
It follows that $\int^{[t]\in\mathtt{Fin}_{*}}_{\mathrm{h}}X^{[t]}\wedge[t]^{\wedge u}$ is equivalent to the total homotopy cofiber of the cubical diagram $A\mapsto X^{A}$, where $A$ ranges over subsets of $u$. The total cofiber is equivalent to $X^{\wedge u}$. ∎ Our next step is to use Proposition 7.5 to describe the subquotient spectra $\mathbb{S}^{k,l}_{m}$. Let $\mathtt{I}\colon\mathtt{Epi}^{\operatorname{op}}\to\operatorname{Top}$ be the (unique) functor defined by $\mathtt{I}(\underline{t})=\left\\{\begin{array}[]{cc}S^{0}&t=1\\\ \ast&t\neq 1\end{array}\right.$ ###### Lemma 7.6. There are equivalences, where $\mathtt{Epi}$ can be replaced with $\mathtt{Epi}^{\leq k}$ $\mathbb{S}^{k,l}_{m}\simeq\Sigma^{\infty}\int^{\underline{t}\in\mathtt{Epi}}_{\mathrm{h}}\mathtt{I}\wedge{\mathfrak{G}_{k,l}^{m}}_{+}\simeq\int^{\underline{t}\in\mathtt{Epi}}_{\mathrm{h}}\Sigma^{\infty}\mathtt{I}\wedge{\mathfrak{G}_{k,l}^{m}}_{+}.$ ###### Proof. By definition, $\mathbb{S}^{k,l}_{m}$ is the stabilization of the functor $X\mapsto\int^{u\in\mathtt{Epi}}_{\mathrm{h}}X^{\wedge u}\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}$ The right-hand side is a weighted homotopy colimit of reduced functors from $\mathtt{CW}^{f}_{*}\to\operatorname{Top}$. Since stabilization commutes with such homotopy colimits, it follows that there is an equivalence $\mathbb{S}^{k,l}_{m}\simeq\int^{u\in\mathtt{Epi}}_{\mathrm{h}}\partial_{1}(X^{\wedge u})\wedge{\mathfrak{G}_{k,l}^{m}(u)}_{+}$ where $\partial_{1}(X^{\wedge u})$ denotes the stabilization of the functor $X\mapsto X^{\wedge u}$. Observe that $\partial_{1}(X^{\wedge u})$ is equivalent to $\Sigma^{\infty}S^{0}$ if $|u|=1$, and is equivalent to $*$ if $|u|>1$. It follows that the functor $u\mapsto\partial_{1}(X^{\wedge u})$ is equivalent to $\Sigma^{\infty}\mathtt{I}$ as a functor $\mathtt{Epi}^{\operatorname{op}}\to\mathtt{Sp}$. The lemma follows.
∎ Recall once again that ${\mathfrak{G}_{k,l}^{m}}_{+}$ is defined by the following formula: ${\mathfrak{G}_{k,l}^{m}(\underline{t})}_{+}=\bigvee_{\\{(m_{1},\ldots,m_{t})\mid m_{i}>0,\ \Sigma m_{i}=m\\}}{\operatorname{Inj}(\mathbb{C}^{ml},\mathbb{C}^{k})/_{\prod_{i=1}^{t}U(m_{i})}}_{+}\cong\\\ \cong\bigvee_{\\{(m_{1},\ldots,m_{t})\mid m_{i}>0,\ \Sigma m_{i}=m\\}}U(k)/\left(\prod_{i=1}^{t}U(m_{i})\times U(k-lm)\right)_{+}.$ In the special case $l=1,k=m$ we get that ${\mathfrak{G}_{m,1}^{m}(\underline{t})}_{+}=\bigvee_{\\{(m_{1},\ldots,m_{t})\mid m_{i}>0,\ \Sigma m_{i}=m\\}}U(m)/\prod_{i=1}^{t}U(m_{i})_{+}.$ (19) In general, there are equivalences ${\mathfrak{G}_{k,l}^{m}(\underline{t})}_{+}\simeq\bigvee_{\\{(m_{1},\ldots,m_{t})\mid m_{i}>0,\ \Sigma m_{i}=m\\}}U(k)/U(k-lm)_{+}\wedge_{U(m)}U(m)/\prod_{i=1}^{t}U(m_{i})_{+}\simeq\\\ \simeq U(k)/U(k-lm)_{+}\wedge_{U(m)}{\mathfrak{G}_{m,1}^{m}(\underline{t})}_{+}.$ (20) It is easily checked that the last equivalence is functorial in $\underline{t}$, and therefore we have an equivalence of functors $\mathtt{Epi}\to\operatorname{Top}$ ${\mathfrak{G}_{k,l}^{m}}_{+}\simeq U(k)/U(k-lm)_{+}\wedge_{U(m)}{\mathfrak{G}_{m,1}^{m}}_{+}.$ Upon applying Lemma 7.6 we obtain an equivalence $\mathbb{S}^{k,l}_{m}\simeq U(k)/U(k-lm)_{+}\wedge_{U(m)}\mathbb{S}^{m,1}_{m}=\operatorname{Inj}(\mathbb{C}^{lm},\mathbb{C}^{k})_{+}\wedge_{U(m)}\mathbb{S}^{m,1}_{m}.$ (21) We remind the reader that $U(m)$ is considered a subgroup of $U(k)$ via the diagonal map $U(m)\hookrightarrow U(lm)$ followed by the inclusions $U(lm)\hookrightarrow U(lm)\times U(k-lm)\hookrightarrow U(k)$. Alternatively, $U(m)$ acts on $\operatorname{Inj}(\mathbb{C}^{lm},\mathbb{C}^{k})$ through its obvious action on $\mathbb{C}^{lm}=\mathbb{C}^{l}\otimes\mathbb{C}^{m}$.
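The wedge in (19) is indexed by the compositions of $m$ into exactly $t$ positive parts, of which there are $\binom{m-1}{t-1}$ by the standard stars-and-bars count. A quick enumeration check in Python (illustrative only; the function name is ours):

```python
from itertools import product
from math import comb

def positive_compositions(t, m):
    """All (m_1, ..., m_t) with m_i > 0 and m_1 + ... + m_t = m:
    the index set of the wedge in equation (19)."""
    return [c for c in product(range(1, m + 1), repeat=t) if sum(c) == m]

comps = positive_compositions(3, 5)
print(len(comps), comb(5 - 1, 3 - 1))  # 6 6
```

So, for instance, ${\mathfrak{G}_{5,1}^{5}(\underline{3})}_{+}$ is a wedge of six homogeneous spaces of the form $U(5)/(U(m_{1})\times U(m_{2})\times U(m_{3}))_{+}$.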
## 8 Connection with the complex of direct-sum decompositions Equation (21) reduces the problem of describing $\mathbb{S}^{k,l}_{m}$ for general $k,l,m$ to describing $\mathbb{S}^{m,1}_{m}$ for all $m$. In this section we use Lemma 7.6 to show that $\mathbb{S}^{m,1}_{m}$ is equivalent to the suspension spectrum of the complex of direct-sum decompositions of $\mathbb{C}^{m}$, which we denote $\mathcal{L}^{\diamond}_{m}$. This leads to a complete description of $\mathbb{S}^{k,l}_{m}$ in terms of the complexes $\mathcal{L}^{\diamond}_{m}$ (Theorem 8.8). The complexes $\mathcal{L}^{\diamond}_{m}$ were first introduced in [Ar1], and were studied in detail in [BJL+] and [AL3]. They play a role in orthogonal calculus, and also in describing the subquotients of the rank filtration of $K$-theory [AL1, AL2]. They have some remarkable homotopical properties, which we will recall in the next section (Proposition 9.1). Our proof that $\mathbb{S}^{m,1}_{m}$ is equivalent to the suspension spectrum of $\mathcal{L}^{\diamond}_{m}$ goes through an intermediate complex, which we call the complex of ordered direct-sum decompositions. Let us give the formal definition. ###### Definition 8.1. Let $\mathcal{D}^{\mathrm{o}}_{m}$ be the following category internal to topological spaces. Its objects are ordered tuples $(E_{1},\ldots,E_{t})$ of pairwise orthogonal proper, non-trivial vector subspaces of $\mathbb{C}^{m}$, whose direct sum is $\mathbb{C}^{m}$. A morphism $(E_{1},\ldots,E_{t})\to(F_{1},\ldots,F_{s})$ consists of a surjective function $\alpha\colon\\{1,\ldots,t\\}\twoheadrightarrow\\{1,\ldots,s\\}$ such that for each $1\leq i\leq t$, $E_{i}\subseteq F_{\alpha(i)}$. We call the category $\mathcal{D}^{\mathrm{o}}_{m}$ the category of proper, ordered direct-sum decompositions of $\mathbb{C}^{m}$. The set of objects and the set of morphisms both have a topology.
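On the level of dimensions alone, a morphism of Definition 8.1 merges the parts of an ordered composition along the surjection $\alpha$: since both tuples decompose all of $\mathbb{C}^{m}$, dimension count forces $\dim F_{j}=\sum_{\alpha(i)=j}\dim E_{i}$. A toy sketch of this bookkeeping (the function name is ours; subspaces are reduced to their dimensions):

```python
def induced_type(ms, alpha, s):
    """Dimension bookkeeping for a morphism (E_1,...,E_t) -> (F_1,...,F_s):
    ms = (dim E_1, ..., dim E_t); alpha[i] is the (0-indexed) image of i
    under the surjection {1,...,t} ->> {1,...,s}.  Since both tuples are
    direct-sum decompositions of C^m, dim F_j is forced to equal the sum
    of dim E_i over alpha(i) = j."""
    ns = [0] * s
    for i, m in enumerate(ms):
        ns[alpha[i]] += m
    assert all(n > 0 for n in ns), "alpha must be surjective"
    return tuple(ns)

# merging the first two summands of a decomposition of C^4 of type (1, 1, 2):
assert induced_type((1, 1, 2), [0, 0, 1], 2) == (2, 2)
assert sum(induced_type((1, 1, 2), [0, 0, 1], 2)) == 4  # still all of C^4
```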
There is a natural action of $U(m)$ on $\mathcal{D}^{\mathrm{o}}_{m}$, and the object and morphism sets are topologized as unions of $U(m)$-orbits. There is a convenient presentation of $\mathcal{D}^{\mathrm{o}}_{m}$ as the Grothendieck construction applied to the functor $\mathfrak{G}_{m,1}^{m}$ of Definition 7.2. Let us recall the definition of (a version of) the Grothendieck construction. ###### Definition 8.2. Suppose $\mathcal{C}$ is a small category and $F\colon\mathcal{C}\to\operatorname{Top_{u}}$ is a functor. The Grothendieck construction on $F$ (a.k.a. the wreath product of $\mathcal{C}$ and $F$) is the following pointed topological category, denoted $\mathcal{C}\wr F$. The objects of $\mathcal{C}\wr F$ are pairs $(c,x)$ where $c$ is an object of $\mathcal{C}$, and $x\in F(c)$. A morphism $(c,x)\to(d,y)$ in $\mathcal{C}\wr F$ is a morphism $\alpha\colon c\to d$ in $\mathcal{C}$ such that $F(\alpha)(x)=y$. The space of objects of $\mathcal{C}\wr F$ is topologized as the disjoint union $\coprod_{c}F(c)$ indexed by the objects of $\mathcal{C}$, and the space of morphisms is topologized as the disjoint union $\coprod_{c\to d}F(c)$, indexed by morphisms of $\mathcal{C}$. The following well-known lemma can be thought of as a topological analogue of Thomason's homotopy colimit theorem. ###### Lemma 8.3. Suppose $\mathcal{C}$ is a small category and $F\colon\mathcal{C}\to\operatorname{Top_{u}}$ is a functor. There is a natural equivalence $\mathop{\operatorname{hocolim}}_{\mathcal{C}}F\simeq|\mathcal{C}\wr F|.$ ###### Proof. It is easy to see that the simplicial nerve of $\mathcal{C}\wr F$ is isomorphic, as a simplicial space, to Bousfield and Kan's simplicial model for $\mathop{\operatorname{hocolim}}_{\mathcal{C}}F$.
In fact, both simplicial spaces are given in simplicial degree $k$ by the space $\coprod_{c_{0}\to\cdots\to c_{k}}F(c_{0}).$ The $i$-th face map $d_{i}$ is defined by dropping $c_{i}$ and, if $i=0$, using the functoriality of $F$ to map $F(c_{0})$ to $F(c_{1})$. The degeneracy map $s_{i}$ is defined by duplicating $c_{i}$. ∎ Now recall that we have a functor $\mathfrak{G}_{m,1}^{m}\colon\mathtt{Epi}\to\operatorname{Top_{u}}$ (Definition 7.2). Let $\mathtt{Epi}^{>1}$ be the full subcategory of $\mathtt{Epi}$ consisting of sets of cardinality greater than $1$. By slight abuse of notation we denote the restriction of $\mathfrak{G}_{m,1}^{m}$ to $\mathtt{Epi}^{>1}$ also by $\mathfrak{G}_{m,1}^{m}$. The following lemma is straightforward from the definitions: ###### Lemma 8.4. There is an isomorphism of topological categories $\mathtt{Epi}^{>1}\wr\mathfrak{G}_{m,1}^{m}\cong{\mathcal{D}^{\mathrm{o}}_{m}}.$ Given a space $X$, let $X^{\diamond}$ denote the unreduced suspension of $X$. We have the following connection between $\mathbb{S}^{m,1}_{m}$ and $\mathcal{D}^{\mathrm{o}}_{m}$. ###### Proposition 8.5. There is a natural equivalence $\mathbb{S}^{m,1}_{m}\simeq\Sigma^{\infty}|\mathcal{D}^{\mathrm{o}}_{m}|^{\diamond}.$ ###### Proof. We saw in Lemma 7.6 that $\mathbb{S}^{m,1}_{m}\simeq\Sigma^{\infty}\int^{\mathtt{Epi}}_{\mathrm{h}}\mathtt{I}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}$ where $\mathtt{I}\colon\mathtt{Epi}^{\operatorname{op}}\to\operatorname{Top}$ is the functor that sends $\underline{1}$ to $S^{0}$ and sends all other objects to $*$. So we need to show that there is an equivalence of pointed spaces $\int^{\mathtt{Epi}}_{\mathrm{h}}\mathtt{I}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}\simeq|\mathcal{D}^{\mathrm{o}}_{m}|^{\diamond}.$ Let $\mathtt{S}\colon\mathtt{Epi}^{\operatorname{op}}\to\operatorname{Top}$ be the constant functor $\mathtt{S}(\underline{t})\equiv S^{0}$.
Let $\mathtt{S}^{>1}\colon\mathtt{Epi}^{\operatorname{op}}\to\operatorname{Top}$ be the functor $\mathtt{S}^{>1}(\underline{1})=*$ and $\mathtt{S}^{>1}(\underline{t})\equiv S^{0}$ for $t>1$. Then there is a homotopy cofibration sequence of functors $\mathtt{S}^{>1}\to\mathtt{S}\to\mathtt{I}$. It follows that there is a homotopy cofibration sequence of coends $\int^{\mathtt{Epi}}_{\mathrm{h}}\mathtt{S}^{>1}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}\to\int^{\mathtt{Epi}}_{\mathrm{h}}\mathtt{S}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}\to\int^{\mathtt{Epi}}_{\mathrm{h}}\mathtt{I}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}.$ It is a standard fact that $\int^{\mathtt{Epi}}_{\mathrm{h}}\mathtt{S}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}\simeq{\mathop{\operatorname{hocolim}}_{\mathtt{Epi}}}^{*}({\mathfrak{G}_{m,1}^{m}}_{+})\cong(\mathop{\operatorname{hocolim}}_{\mathtt{Epi}}{\mathfrak{G}_{m,1}^{m}})_{+}$ (here $\mathop{\operatorname{hocolim}}^{*}$ denotes pointed homotopy colimit, while $\mathop{\operatorname{hocolim}}$ denotes unpointed homotopy colimit). Since $\mathtt{Epi}$ has a final object $\underline{1}$, it follows that ${\mathop{\operatorname{hocolim}}_{\mathtt{Epi}}}^{*}{\mathfrak{G}_{m,1}^{m}}_{+}\simeq{\mathfrak{G}_{m,1}^{m}}_{+}(\underline{1})=S^{0}.$ On the other hand, since $\underline{1}$ is the initial object of $\mathtt{Epi}^{\operatorname{op}}$, and $\mathtt{S}^{>1}(\underline{1})=*$, it follows easily that $\mathtt{S}^{>1}$ is equivalent to the functor obtained by restricting $\mathtt{S}$ to the subcategory ${\mathtt{Epi}^{>1}}^{\operatorname{op}}$ of sets of cardinality greater than $1$, and then taking derived left Kan extension back to ${\mathtt{Epi}}^{\operatorname{op}}$.
By standard adjunctions, it follows that there are equivalences $\int^{\mathtt{Epi}}_{\mathrm{h}}\mathtt{S}^{>1}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}\simeq\int^{\mathtt{Epi}^{>1}}_{\mathrm{h}}\mathtt{S}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}\simeq(\mathop{\operatorname{hocolim}}_{\mathtt{Epi}^{>1}}{\mathfrak{G}_{m,1}^{m}})_{+}.$ It follows that there is a homotopy cofibration sequence $(\mathop{\operatorname{hocolim}}_{\mathtt{Epi}^{>1}}{{\mathfrak{G}_{m,1}^{m}}})_{+}\to S^{0}\to\int^{\underline{t}\in\mathtt{Epi}}_{\mathrm{h}}\mathtt{I}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}.$ By Lemma 8.4, ${\mathcal{D}^{\mathrm{o}}_{m}}$ is the Grothendieck construction on $\mathfrak{G}_{m,1}^{m}$. It follows by Lemma 8.3 that $\mathop{\operatorname{hocolim}}_{\mathtt{Epi}^{>1}}{\mathfrak{G}_{m,1}^{m}}\simeq|\mathcal{D}^{\mathrm{o}}_{m}|.$ So we have a homotopy cofibration sequence $|\mathcal{D}^{\mathrm{o}}_{m}|_{+}\to S^{0}\to\int^{\underline{t}\in\mathtt{Epi}}_{\mathrm{h}}\mathtt{I}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}.$ This implies that $\int^{\underline{t}\in\mathtt{Epi}}_{\mathrm{h}}\mathtt{I}\wedge{\mathfrak{G}_{m,1}^{m}}_{+}\simeq|\mathcal{D}^{\mathrm{o}}_{m}|^{\diamond}$. ∎ Our next step is to show that $\mathcal{D}^{\mathrm{o}}_{m}$ can be replaced with a smaller category, which we call the poset of unordered direct-sum decompositions. First, the definition. ###### Definition 8.6. Let $\mathcal{D}_{m}$ be the following category internal to topological spaces. Its objects are unordered sets $\\{E_{i}\mid i\in I\\}$ of pairwise orthogonal proper, non-trivial vector subspaces of $\mathbb{C}^{m}$, whose direct sum is $\mathbb{C}^{m}$. There is a unique morphism $\\{E_{i}\mid i\in I\\}\to\\{F_{j}\mid j\in J\\}$ if for each $i\in I$ there is a (necessarily unique) $j\in J$ such that $E_{i}\subseteq F_{j}$. In keeping with recent literature, the geometric realization of $\mathcal{D}_{m}$ will be denoted $\mathcal{L}_{m}$, and its unreduced suspension is therefore $\mathcal{L}_{m}^{\diamond}$.
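Restricting Definition 8.6 to decompositions by coordinate subspaces embeds the partition lattice into $\mathcal{D}_{m}$: a partition of $\\{1,\ldots,m\\}$ determines the decomposition of $\mathbb{C}^{m}$ into the spans of the corresponding blocks of coordinates, and refinement of partitions matches the unique-morphism condition. A small illustrative enumeration (ours, not part of the paper's formalism):

```python
def set_partitions(s):
    """All partitions of the finite set s, as frozensets of frozensets."""
    s = list(s)
    if not s:
        return [frozenset()]
    first, rest = s[0], s[1:]
    result = []
    for p in set_partitions(rest):
        result.append(p | {frozenset([first])})        # `first` alone
        for block in p:                                # or joined to a block
            result.append((p - {block}) | {block | {first}})
    return result

def refines(p, q):
    """A morphism p -> q exists iff every block of p lies inside a block of q."""
    return all(any(b <= c for c in q) for b in p)

# proper decompositions of C^3 by coordinate subspaces = partitions of a
# 3-element set with more than one block:
proper = [p for p in set_partitions({0, 1, 2}) if len(p) > 1]
assert len(proper) == 4
finest = frozenset({frozenset({0}), frozenset({1}), frozenset({2})})
assert all(refines(finest, q) for q in proper)
```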
As with $\mathcal{D}^{\mathrm{o}}_{m}$, there is a natural action of $U(m)$ on $\mathcal{D}_{m}$ and both the sets of objects and morphisms of $\mathcal{D}_{m}$ are topologized as unions of $U(m)$-orbits. We note that for any two objects $P,Q$ of $\mathcal{D}^{\mathrm{o}}_{m}$ there is at most one morphism from $P$ to $Q$. In other words, $\mathcal{D}^{\mathrm{o}}_{m}$ is a topological preorder. By contrast, the category $\mathcal{D}_{m}$ is a topological poset: it is the poset of isomorphism classes of $\mathcal{D}^{\mathrm{o}}_{m}$. The category $\mathcal{D}_{m}$ will be referred to as the category, or poset, of proper, unordered direct-sum decompositions of $\mathbb{C}^{m}$. There is a topological functor $q\colon\mathcal{D}^{\mathrm{o}}_{m}\to\mathcal{D}_{m}$, which forgets the order of the components. ###### Proposition 8.7. The natural functor $q\colon\mathcal{D}^{\mathrm{o}}_{m}\to\mathcal{D}_{m}$ induces an equivalence of geometric realizations $|\mathcal{D}^{\mathrm{o}}_{m}|\xrightarrow{\simeq}|\mathcal{D}_{m}|.$ ###### Proof. We are going to use Quillen's theorem A. We need a version of it that is valid for topological categories. There are several such versions scattered in the literature; we will use [EbRW, Theorem 4.7]. According to this theorem, it is enough to prove the following: 1. 1. For every object $\Lambda$ of $\mathcal{D}_{m}$, the classifying space of the over category $q/\Lambda$ is contractible. 2. 2. The map from the morphism space of $\mathcal{D}^{\mathrm{o}}_{m}$ to the object space of $\mathcal{D}^{\mathrm{o}}_{m}$, that sends every morphism to its target, is a fibration (in the language of [EbRW], this means that $\mathcal{D}^{\mathrm{o}}_{m}$ is right fibrant). 3. 3. The map from the space of objects of the over category $q/\mathcal{D}_{m}$ to the space of objects of $\mathcal{D}_{m}$, that sends a morphism to its target, is a fibration.
Here $q/\mathcal{D}_{m}$ is the category of arrows in $\mathcal{D}_{m}$ of the form $q(\Theta)\to\Lambda$, where $\Theta$ is an object of $\mathcal{D}^{\mathrm{o}}_{m}$. For part (1), let $\Lambda=\\{F_{i}\mid i\in I\\}$ be an object of $\mathcal{D}_{m}$, i.e., an unordered collection of pairwise orthogonal non-trivial subspaces of $\mathbb{C}^{m}$ whose direct sum is $\mathbb{C}^{m}$. Let $t$ be the number of elements of $I$ and choose a bijection $I\cong\\{1,\ldots,t\\}$. Then $\widetilde{\Lambda}=(F_{1},\ldots,F_{t})$ is a choice of lift of $\Lambda$ to an object of $\mathcal{D}^{\mathrm{o}}_{m}$. An object of $q/\Lambda$ consists of an object $\Theta=(E_{1},\ldots,E_{s})$ such that each $E_{i}$ is a subspace of $F_{j}$ for some (necessarily unique) $j$. It follows that there exists a unique surjection $\alpha\colon\\{1,\ldots,s\\}\twoheadrightarrow\\{1,\ldots,t\\}$ such that $E_{i}\subseteq F_{\alpha(i)}$ for all $i$. This means that there is a unique morphism from $\Theta$ to $\widetilde{\Lambda}$ in $q/\Lambda$. Thus the category $q/\Lambda$ has a (not necessarily unique) terminal object, and therefore its classifying space is contractible.
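The mechanism behind part (1), namely that a terminal object makes the classifying space contractible because the nerve deformation retracts onto it, has an easy combinatorial shadow: the order complex of any finite poset with a maximum has Euler characteristic $1$. A toy check (ours, not the paper's):

```python
from itertools import combinations

def order_complex_euler(lt, elements):
    """Euler characteristic of the order complex of a finite poset.
    `elements` must be listed in some linear extension of the order `lt`."""
    chi = 0
    for k in range(1, len(elements) + 1):
        for chain in combinations(elements, k):
            if all(lt(chain[i], chain[i + 1]) for i in range(k - 1)):
                chi += (-1) ** (k - 1)
    return chi

# subsets of {0, 1} under strict inclusion: a poset with maximum {0, 1}
subsets = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
assert order_complex_euler(lambda a, b: a < b, subsets) == 1
```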
For part (2), using the identification of $\mathcal{D}^{\mathrm{o}}_{m}$ with the Grothendieck construction $\mathtt{Epi}^{>1}\wr\mathfrak{G}_{m,1}^{m}$ (Lemma 8.4), the map from the space of morphisms of $\mathcal{D}^{\mathrm{o}}_{m}$ to the space of objects of $\mathcal{D}^{\mathrm{o}}_{m}$ which sends each morphism to its target, has the following form $\coprod_{\underline{s}\twoheadrightarrow\underline{t}\in\mathtt{Epi}^{>1}}\coprod_{\substack{(m_{1},\ldots,m_{s})\\\ m_{i}>0,\ \sum m_{i}=m}}U(m)/\prod_{i=1}^{s}U(m_{i})\to\coprod_{\underline{t}\in\mathtt{Epi}^{>1}}\coprod_{\substack{(n_{1},\ldots,n_{t})\\\ n_{j}>0,\ \sum n_{j}=m}}U(m)/\prod_{j=1}^{t}U(n_{j})$ (22) where for every surjective function $\alpha\colon\underline{s}\twoheadrightarrow\underline{t}$, the space $U(m)/\prod_{i=1}^{s}U(m_{i})$ is sent to $U(m)/\prod_{j=1}^{t}U(n_{j})$, where for each $j=1,\ldots,t$, $n_{j}=\Sigma_{i\in\alpha^{-1}(j)}m_{i}$, by the canonical quotient map associated with the sub-conjugation of $\prod_{i=1}^{s}U(m_{i})$ into $\prod_{j=1}^{t}U(n_{j})$ induced by $\alpha$. The map (22) is clearly a fibration. Indeed, it is a $U(m)$-equivariant map between disjoint unions of $U(m)$-orbits, and such a map is necessarily a fibration. Finally, the proof of part (3) is similar to that of part (2). Since $\mathcal{D}_{m}$ is the poset of isomorphism classes of the pre-order $\mathcal{D}^{\mathrm{o}}_{m}$, the space of objects of the category $q/\mathcal{D}_{m}$ is the quotient of the space of morphisms of $\mathcal{D}^{\mathrm{o}}_{m}$ by the action of the groupoid of isomorphisms of the target. Similarly, the space of objects of $\mathcal{D}_{m}$ is the quotient of the space of objects of $\mathcal{D}^{\mathrm{o}}_{m}$ by the groupoid of isomorphisms.
This means that we have the following map $\left(\coprod_{\underline{s}\twoheadrightarrow\underline{t}}\coprod_{\substack{(m_{1},\ldots,m_{s})\\\ m_{i}>0,\ \sum m_{i}=m}}U(m)/\prod_{i=1}^{s}U(m_{i})\right)_{\operatorname{Iso}(t)}\to\left(\coprod_{\underline{t}}\coprod_{\substack{(n_{1},\ldots,n_{t})\\\ n_{j}>0,\ \sum n_{j}=m}}U(m)/\prod_{j=1}^{t}U(n_{j})\right)_{\operatorname{Iso}(t)}.$ The action of the groupoid of isomorphisms of the variable $\underline{t}$ respects the action of $U(m)$. It follows that the resulting map is still a $U(m)$-equivariant map between disjoint unions of orbits, and therefore is a fibration. ∎ Propositions 8.5 and 8.7, together with equation (21), give us an equivalence $\mathbb{S}^{k,l}_{m}\simeq U(k)/U(k-lm)_{+}\wedge_{U(m)}\Sigma^{\infty}\mathcal{L}_{m}^{\diamond}.$ (23) We also want to describe the composition morphisms $\mathbb{S}^{k,l}_{m}\wedge\mathbb{S}^{j,k}_{n}\to\mathbb{S}^{j,l}_{mn}$. We begin by observing that tensor product induces a natural map $\mathcal{L}^{\diamond}_{m}\wedge\mathcal{L}^{\diamond}_{n}\to\mathcal{L}^{\diamond}_{mn}$ as follows. Suppose that $\mathcal{E}=\\{E_{i}\mid i\in I\\}$ and $\mathcal{F}=\\{F_{j}\mid j\in J\\}$ are direct-sum decompositions of $\mathbb{C}^{m}$ and $\mathbb{C}^{n}$ respectively. Then $\mathcal{E}\otimes\mathcal{F}:=\\{E_{i}\otimes F_{j}\mid(i,j)\in I\times J\\}$ is a direct-sum decomposition of $\mathbb{C}^{m}\otimes\mathbb{C}^{n}\cong\mathbb{C}^{mn}$. Note that if at least one of $\mathcal{E}$, $\mathcal{F}$ is a proper decomposition (i.e., has more than one component) then $\mathcal{E}\otimes\mathcal{F}$ is a proper decomposition as well. This means that the tensor product induces a map $\mathcal{L}^{\diamond}_{m}\wedge\mathcal{L}^{\diamond}_{n}\to\mathcal{L}^{\diamond}_{mn}$ as desired. Note that this map is equivariant with respect to the tensor product homomorphisms $U(m)\times U(n)\to U(mn)$.
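On the level of dimensions, the tensor construction just described multiplies component dimensions pairwise. A quick sketch (function name ours):

```python
def tensor_decomposition(E, F):
    """Component dimensions of the decomposition E (x) F of C^m (x) C^n,
    given the component dimensions E of C^m and F of C^n."""
    return [e * f for e in E for f in F]

E, F = [1, 2], [1, 1, 1]           # decompositions of C^3 and C^3
EF = tensor_decomposition(E, F)
assert sum(EF) == sum(E) * sum(F)  # the components span all of C^9
assert len(EF) > 1                 # proper, since E is proper
```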
Next, we extend it to a map $\left(U(k)/U(k-lm)_{+}\wedge_{U(m)}\mathcal{L}_{m}^{\diamond}\right)\wedge\left(U(j)/U(j-kn)_{+}\wedge_{U(n)}\mathcal{L}_{n}^{\diamond}\right)\to\\\ \to U(j)/U(j-lnm)_{+}\wedge_{U(nm)}\mathcal{L}^{\diamond}_{nm}.$ Now recall that $U(k)/U(k-lm)\cong\operatorname{Inj}(\mathbb{C}^{lm},\mathbb{C}^{k})$ and $U(j)/U(j-kn)\cong\operatorname{Inj}(\mathbb{C}^{kn},\mathbb{C}^{j})$. Given homomorphisms $g\in\operatorname{Inj}(\mathbb{C}^{lm},\mathbb{C}^{k})$ and $f\in\operatorname{Inj}(\mathbb{C}^{kn},\mathbb{C}^{j})$, we may form the homomorphism $f\circ g^{n}\colon{\mathbb{C}^{lmn}}\hookrightarrow\mathbb{C}^{j}$, where $g^{n}\colon\mathbb{C}^{lmn}=(\mathbb{C}^{lm})^{n}\to(\mathbb{C}^{k})^{n}=\mathbb{C}^{kn}$ denotes the $n$-fold direct sum of $g$. Clearly, this defines a map $U(k)/U(k-lm)\times U(j)/U(j-kn)\to U(j)/U(j-lnm)$. This map is equivariant with respect to the tensor product homomorphism $U(m)\times U(n)\to U(mn)$. Combining it with the map $\mathcal{L}^{\diamond}_{m}\wedge\mathcal{L}^{\diamond}_{n}\to\mathcal{L}^{\diamond}_{mn}$ defined earlier, we obtain the desired map. We are ready to state the main theorem of the paper. ###### Theorem 8.8. There is an equivalence $\mathbb{S}^{k,l}_{m}\simeq\Sigma^{\infty}U(k)/U(k-lm)_{+}\wedge_{U(m)}\mathcal{L}_{m}^{\diamond}\cong\Sigma^{\infty}\operatorname{Inj}(\mathbb{C}^{ml},\mathbb{C}^{k})_{+}\wedge_{U(m)}\mathcal{L}_{m}^{\diamond}.$ Under this equivalence, the composition product $\mathbb{S}^{k,l}_{m}\wedge\mathbb{S}^{j,k}_{n}\to\mathbb{S}^{j,l}_{mn}$ corresponds to the map $\left(\operatorname{Inj}\left(\mathbb{C}^{ml},\mathbb{C}^{k}\right)_{+}\wedge_{U(m)}\mathcal{L}_{m}^{\diamond}\right)\wedge\left(\operatorname{Inj}\left(\mathbb{C}^{nk},\mathbb{C}^{j}\right)_{+}\wedge_{U(n)}\mathcal{L}_{n}^{\diamond}\right)\to\\\ \to\operatorname{Inj}\left(\mathbb{C}^{nml},\mathbb{C}^{j}\right)_{+}\wedge_{U(nm)}\mathcal{L}^{\diamond}_{nm}$ that was defined above. ###### Proof. We already proved the formula for $\mathbb{S}^{k,l}_{m}$ (equation (23)). It remains to check the statement about the composition product.
Recall that $\mathbb{S}^{k,l}_{m}$ is the stabilization of the functor $G_{k,l}^{m}$. The composition product is determined by the natural transformation $\mathfrak{G}_{k,l}^{m}(v)\wedge\mathfrak{G}_{j,k}^{n}(u)\to\mathfrak{G}_{j,l}^{mn}(u\times v)$. An analysis of this composition map shows that it is induced by a disjoint union of maps of the form $\operatorname{Inj}(\mathbb{C}^{(m_{1}+\cdots+m_{t})l},\mathbb{C}^{k})/_{\prod_{j=1}^{t}U(m_{j})}\times\operatorname{Inj}(\mathbb{C}^{(n_{1}+\cdots+n_{s})k},\mathbb{C}^{j})/_{\prod_{i=1}^{s}U(n_{i})}\to\\\ \to{\operatorname{Inj}(\mathbb{C}^{(\sum_{i=1,j=1}^{i=s,j=t}n_{i}m_{j})l},\mathbb{C}^{j})/_{\prod_{i=1,j=1}^{i=s,j=t}U(n_{i}m_{j})}}$ that sends $(q,p)$ to $p\circ q^{n}$, where $n=n_{1}+\cdots+n_{s}$. Note that the decomposition of $\mathbb{C}^{mn}$ associated with the target of this map is the tensor product of the given decompositions of $\mathbb{C}^{m}$ and $\mathbb{C}^{n}$, just as was claimed. This induces the claimed map of spectra. ∎ ## 9 Some calculations of $\mathbb{S}^{k,l}$ In this section we calculate the spectra $\mathbb{S}^{k,l}$ in some cases, and also prove that the map $\mathbb{S}^{k,l}\to ku$ is an isomorphism on $\pi_{0}$. Our main tool is Theorem 8.8, which expresses the subquotients of the rank filtration in terms of the complexes $\mathcal{L}_{m}^{\diamond}$. To use it, we need to know something about the complexes $\mathcal{L}_{m}^{\diamond}$. So let us begin by reviewing some of the rather remarkable properties of these complexes that were uncovered in [Ar1, AL1, BJL+, AL3]. The following proposition lists the relevant facts. ###### Proposition 9.1. 1. 1. The space $\mathcal{L}_{m}^{\diamond}$ is rationally contractible for $m>1$. 2. 2. The space $\mathcal{L}_{m}^{\diamond}$ is (integrally) contractible unless $m$ is a prime power. 3. 3. If $m=p^{k}$ with $p$ a prime and $k>0$, then $\mathcal{L}_{p^{k}}^{\diamond}$ is $p$-local, and has chromatic type $k$. ###### Proof.
Except for the statement about the chromatic type, this is [AL1, proposition 9.6], which in turn relies on [Ar1]. The statement about the chromatic type is part of [Ar2, Theorem 2.2]. The proofs in [Ar2] are based on a rather deep connection between $\mathcal{L}_{m}^{\diamond}$ and the calculus of functors. Since part (1) plays a prominent role in our applications, we indicate an independent, more direct way to prove this part. The space $\mathcal{L}_{m}^{\diamond}$ is equivalent to the total homotopy cofiber of the following $(m-1)$-dimensional cubical diagram. Suppose $U=\\{i_{1},\ldots,i_{k}\\}\subseteq\\{2,\ldots,m\\}$, with $i_{1}>\ldots>i_{k}$. Let $\mathcal{X}(U)$ be the space of chains of decompositions of $\mathbb{C}^{m}$ of the form $(\Lambda_{1}<\cdots<\Lambda_{k})$ where each $\Lambda_{j}$ has $i_{j}$ components. If $U$ is empty then $\mathcal{X}(U)=*$. Note that in general $\mathcal{X}(U)$ is a disjoint union of $U(m)$-orbits. The assignment $U\mapsto\mathcal{X}(U)$ defines a diagram indexed on the opposite of the poset of subsets of $\\{2,\ldots,m\\}$, i.e., an $(m-1)$-dimensional cubical diagram. It is elementary to show that $\mathcal{L}_{m}^{\diamond}$ is equivalent to the total homotopy cofiber of the cube $\mathcal{X}$. For example, in the case $m=3$, $\mathcal{X}$ is the following square of $U(3)$-orbits. $\begin{array}[]{ccc}U(3)/\Sigma_{2}\wr U(1)\times U(1)&\to&U(3)/U(2)\times U(1)\\\ \downarrow&&\downarrow\\\ U(3)/\Sigma_{3}\wr U(1)&\to&U(3)/U(3)\end{array}$ (24) Here the upper right corner is $\mathcal{X}(\\{2\\})$, the space of decompositions of $\mathbb{C}^{3}$ with $2$ components, the lower left corner is $\mathcal{X}(\\{3\\})$, the space of decompositions with $3$ components, and the upper left corner is $\mathcal{X}(\\{2,3\\})$, the space of morphisms from a decomposition with $3$ components to a decomposition with $2$ components.
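The key input used below is that the normalizer of a maximal torus has the same rational cohomology as the ambient group. For $U(2)$ this can be checked by hand: $H^{*}(B(\Sigma_{2}\wr U(1));\mathbb{Q})$ is symmetric polynomials in two variables, while $H^{*}(BU(2);\mathbb{Q})=\mathbb{Q}[c_{1},c_{2}]$. A degreewise dimension count (an illustrative computation of ours, with cohomological degrees halved to polynomial weights):

```python
def dim_sym_invariants(d):
    """Dimension of the weight-d part of Q[x1, x2]^{Sigma_2}:
    orbit sums of monomials x1^a x2^b with a >= b and a + b = d."""
    return sum(1 for a in range(d + 1) for b in range(a + 1) if a + b == d)

def dim_chern(d):
    """Dimension of the weight-d part of Q[c1, c2] with |c1| = 1, |c2| = 2."""
    return sum(1 for a in range(d + 1) for b in range(d + 1) if a + 2 * b == d)

# the two graded vector spaces agree in every weight, the algebraic shadow
# of B(Sigma_2 wr U(1)) -> BU(2) being a rational cohomology isomorphism:
assert all(dim_sym_invariants(d) == dim_chern(d) for d in range(30))
```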
Each one of the horizontal maps in (24) is a map of $U(3)$-orbits, induced by subgroup inclusions $\Sigma_{2}\wr U(1)\times U(1)\to U(2)\times U(1)$ and $\Sigma_{3}\wr U(1)\to U(3)$. Note that in both of these cases, the subgroup that is being included is the normalizer of a maximal torus. It follows that each one of the horizontal maps is a rational equivalence, and therefore the total cofiber of (24) is trivial in rational homology. Since it is simply connected, it is also trivial in rational homotopy. More generally suppose $m>i_{1}$ and consider the map $\mathcal{X}(\\{m,i_{1},\ldots,i_{k}\\})\to\mathcal{X}(\\{i_{1},\ldots,i_{k}\\})$. This map is a disjoint union of maps between $U(m)$-orbits. For each path component of $\mathcal{X}(\\{m,i_{1},\ldots,i_{k}\\})$, the isotropy group is the normalizer of a maximal torus of the isotropy group of a corresponding component of $\mathcal{X}(\\{i_{1},\ldots,i_{k}\\})$. It follows that the map $\mathcal{X}(\\{m,i_{1},\ldots,i_{k}\\})\to\mathcal{X}(\\{i_{1},\ldots,i_{k}\\})$ is always a rational equivalence, and therefore the total homotopy cofiber of $\mathcal{X}$, which is $\mathcal{L}_{m}^{\diamond}$, is rationally trivial. ∎ ###### Corollary 9.2. The map $\mathbb{S}^{k,l,1}\to\mathbb{S}^{k,l}$ is a rational equivalence. ###### Proof. To see this, consider the filtration $*=\mathbb{S}^{k,l,0}\to\mathbb{S}^{k,l,1}\to\mathbb{S}^{k,l,2}\to\cdots\to\mathbb{S}^{k,l,\lfloor\frac{k}{l}\rfloor}=\mathbb{S}^{k,l}.$ It follows from Theorem 8.8 and part 1 of Proposition 9.1 that for all $m>1$ the homotopy cofiber $\mathbb{S}^{k,l}_{m}$ of the map $\mathbb{S}^{k,l,m-1}\to\mathbb{S}^{k,l,m}$ is rationally trivial. It follows that the map $\mathbb{S}^{k,l,1}\to\mathbb{S}^{k,l}$ is a rational equivalence. ∎ Here is an explicit description of $\mathcal{L}_{m}^{\diamond}$ for some values of $m$. ###### Proposition 9.3. 1. 1. $\mathcal{L}_{1}^{\diamond}\cong S^{0}$. 2. 2. $\mathcal{L}_{2}^{\diamond}\cong\Sigma\mathbb{R}P^{2}$. 3. 3.
More generally, if $p$ is a prime then $\mathcal{L}_{p}^{\diamond}$ is a union of $p-1$ shifted copies of the mod $p$ Moore space. As a first application of Theorem 8.8 and Proposition 9.1, let us prove that the map $\mathbb{S}^{k,l}\to ku$ induces an isomorphism on $\pi_{0}$. ###### Lemma 9.4. Assume that $k\geq l$. The map $\mathbb{S}^{k,l}\to\mathbb{S}^{\infty,l}\simeq ku$ induces an isomorphism on $\pi_{0}$. ###### Remark 9.5. We remind the reader that if $k<l$, $\mathbb{S}^{k,l}\simeq*$. ###### Proof. We saw in Section 3 that the mapping spectra $\mathbb{S}^{k,l}$ are filtered by a sequence of spectra $*=\mathbb{S}^{k,l,0}\to\mathbb{S}^{k,l,1}\to\mathbb{S}^{k,l,2}\to\cdots\to\mathbb{S}^{k,l,\lfloor\frac{k}{l}\rfloor}=\mathbb{S}^{k,l}.$ Consider the commutative diagram $\begin{array}[]{ccc}\mathbb{S}^{k,l,1}&\to&\mathbb{S}^{k,l}\\\ \downarrow&&\downarrow\\\ \mathbb{S}^{\infty,l,1}&\to&\mathbb{S}^{\infty,l}\end{array}$ We will prove that the left, top and bottom maps in this diagram induce an isomorphism on $\pi_{0}$. It then follows that the right map induces an isomorphism on $\pi_{0}$, which is what we want to prove. By Theorem 8.8, $\mathbb{S}^{k,l,1}\simeq\mathbb{S}^{k,l}_{1}\simeq\Sigma^{\infty}\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{k})/U(1)_{+},$ and similarly $\mathbb{S}^{\infty,l,1}\simeq\Sigma^{\infty}\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{\infty})/{U(1)}_{+}.$ Since $\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{k})/U(1)$ and $\operatorname{Inj}(\mathbb{C}^{l},\mathbb{C}^{\infty})/{U(1)}$ are path-connected spaces, the map $\mathbb{S}^{k,l,1}\to\mathbb{S}^{\infty,l,1}$ induces on $\pi_{0}$ an isomorphism from $\mathbb{Z}$ to itself. To analyze the map $\mathbb{S}^{k,l,1}\to\mathbb{S}^{k,l}$ recall, again from Theorem 8.8, that the subquotient $\mathbb{S}^{k,l,m}/\mathbb{S}^{k,l,m-1}$ is equivalent to the suspension spectrum of $\operatorname{Inj}(\mathbb{C}^{ml},\mathbb{C}^{k})_{+}\wedge_{U(m)}\mathcal{L}_{m}^{\diamond}$.
For $m>1$ the space $\mathcal{L}_{m}$ is path-connected, so $\mathcal{L}_{m}^{\diamond}$ is simply-connected. It follows that $\mathbb{S}^{k,l,m}/\mathbb{S}^{k,l,m-1}$ is $1$-connected for $m>1$, and therefore the map $\mathbb{S}^{k,l,1}\to\mathbb{S}^{k,l}$ is $1$-connected, and in particular it induces an isomorphism on $\pi_{0}$. The same argument applies in the case $k=\infty$, which completes the proof. ∎ Since the rank filtration of $\mathbb{S}^{k,l}$ has length $\lfloor\frac{k}{l}\rfloor$, we can conclude that if $k<2l$ then $\mathbb{S}^{k,l,1}$ is in fact equivalent to $\mathbb{S}^{k,l}$. ###### Lemma 9.6. If $l\leq k\leq 2l-1$ then $\mathbb{S}^{k,l}$ is integrally equivalent to $\Sigma^{\infty}U(k)/U(k-l)\times U(1)_{+}.$ In particular, for $k=l$ the spectrum $\mathbb{S}^{k,k}=\operatorname{End}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k})$ is equivalent to $\Sigma^{\infty}PU(k)_{+}$: the group ring spectrum of the projective unitary group. ## 10 On the rationalization and $p$-localization of $\mathcal{M}$ Let $\mathcal{C}$ be a stable presentable closed symmetric monoidal $\infty$-category. Denote by $\otimes$ the tensor product, by $1_{\mathcal{C}}$ the unit and by $\underline{\operatorname{Hom}}(\bullet,\bullet)$ the internal hom. For every $n\in\mathbb{Z}$ and every object $X\in\mathcal{C}$ there is a natural multiplication-by-$n$ map $[n]\colon X\to X$. We will say that an object $X\in\mathcal{C}$ is _rational_ if for every $n\neq 0$ the map $[n]\colon X\to X$ is an isomorphism. Similarly, we will say that $X$ is _$p$-local_ for a prime $p$, if $[n]\colon X\to X$ is an isomorphism for every $n$ that is not divisible by $p$. We denote the collection of rational objects in $\mathcal{C}$ by $\mathcal{C}_{\mathbb{Q}}$ and the collection of $p$-local objects by $\mathcal{C}_{(p)}$. If $\mathcal{C}=\mathcal{C}_{\mathbb{Q}}$ (resp. $\mathcal{C}=\mathcal{C}_{(p)}$) then we say that $\mathcal{C}$ is rational (resp. $p$-local).
The naturality of $[n]$ implies that $\mathcal{C}_{\mathbb{Q}}$ and $\mathcal{C}_{(p)}$ are closed in $\mathcal{C}$ under all small limits and colimits and that for every $X\in\mathcal{C}$ and $Y\in\mathcal{C}_{\mathbb{Q}}$ (resp. $Y\in\mathcal{C}_{(p)}$) we have $X\otimes Y,\underline{\operatorname{Hom}}(X,Y)\in\mathcal{C}_{\mathbb{Q}}$ (resp. $X\otimes Y,\underline{\operatorname{Hom}}(X,Y)\in\mathcal{C}_{(p)}$). We thus get that $\mathcal{C}_{\mathbb{Q}}$ and $\mathcal{C}_{(p)}$ are themselves stable presentable closed symmetric monoidal $\infty$-categories. Further, the inclusion $i^{\mathcal{C}}_{\mathbb{Q}}:\mathcal{C}_{\mathbb{Q}}\subset\mathcal{C}$ admits a symmetric monoidal left adjoint called _rationalization_ $L^{\mathcal{C}}_{\mathbb{Q}}\colon\mathcal{C}\to\mathcal{C}_{\mathbb{Q}}.$ The same holds for $p$-localization. Further, the left adjoints are given by the following formulas $L_{\mathbb{Q}}(X)=L_{\mathbb{Q}}(1_{\mathcal{C}})\otimes X=\mathop{\operatorname{colim}}\left[X\xrightarrow{[1]}X\xrightarrow{[2]}X\xrightarrow{[3]}X\cdots\right]$ $L_{(p)}(X)=L_{(p)}(1_{\mathcal{C}})\otimes X=\mathop{\operatorname{colim}}\left[X\xrightarrow{[p^{\prime}_{1}]}X\xrightarrow{[p^{\prime}_{2}]}X\xrightarrow{[p^{\prime}_{3}]}X\cdots\right]$ where $p^{\prime}_{1},p^{\prime}_{2},\ldots$ is the list of integers not divisible by $p$. Since $\mathtt{NSp}$ is left-tensored over $\mathtt{Sp}$, we have that $\mathtt{NSp}_{\mathbb{Q}}$ (resp. $\mathtt{NSp}_{(p)}$) is left-tensored over $\mathtt{Sp}_{\mathbb{Q}}$ (resp. $\mathtt{Sp}_{(p)}$). Let ${\mathcal{M}^{\mathbb{Q}}}$ (resp. ${\mathcal{M}^{(p)}}$) be the full $\mathtt{Sp}_{\mathbb{Q}}$-enriched (resp. $\mathtt{Sp}_{(p)}$-enriched) subcategory of $\mathtt{NSp}_{\mathbb{Q}}$ (resp. $\mathtt{NSp}_{(p)}$) spanned by $L^{\mathtt{NSp}}_{\mathbb{Q}}(\Sigma^{\infty}_{\mathtt{NC}}M_{n})\quad\left(\mbox{ resp. }L^{\mathtt{NSp}}_{(p)}(\Sigma^{\infty}_{\mathtt{NC}}M_{n})\right)$ for $n\in\mathbb{N}$. ###### Lemma 10.1.
For all $k,l\in\mathbb{N}$ there are equivalences $\operatorname{Hom}_{{\mathcal{M}^{\mathbb{Q}}}}(L^{\mathtt{NSp}}_{\mathbb{Q}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k}),L^{\mathtt{NSp}}_{\mathbb{Q}}(\Sigma^{\infty}_{\mathtt{NC}}M_{l}))\simeq L^{\mathtt{Sp}}_{\mathbb{Q}}\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})$ and $\operatorname{Hom}_{{\mathcal{M}^{(p)}}}(L^{\mathtt{NSp}}_{(p)}(\Sigma^{\infty}_{\mathtt{NC}}M_{k}),L^{\mathtt{NSp}}_{(p)}(\Sigma^{\infty}_{\mathtt{NC}}M_{l}))\simeq L^{\mathtt{Sp}}_{(p)}\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})$ ###### Proof. We will go over the (very straightforward) proof of the rational case. The proof of the $p$-local case is practically identical. $\operatorname{Hom}_{{\mathcal{M}^{\mathbb{Q}}}}(L^{\mathtt{NSp}}_{\mathbb{Q}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k}),L^{\mathtt{NSp}}_{\mathbb{Q}}(\Sigma^{\infty}_{\mathtt{NC}}M_{l}))=\operatorname{Hom}_{\mathtt{NSp}_{\mathbb{Q}}}(L^{\mathtt{NSp}}_{\mathbb{Q}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k}),L^{\mathtt{NSp}}_{\mathbb{Q}}(\Sigma^{\infty}_{\mathtt{NC}}M_{l}))=\\\ =\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},L^{\mathtt{NSp}}_{\mathbb{Q}}(\Sigma^{\infty}_{\mathtt{NC}}M_{l}))=\\\ =\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\mathop{\operatorname{colim}}\left[\Sigma^{\infty}_{\mathtt{NC}}M_{l}\xrightarrow{[1]}\Sigma^{\infty}_{\mathtt{NC}}M_{l}\xrightarrow{[2]}\Sigma^{\infty}_{\mathtt{NC}}M_{l}\xrightarrow{[3]}\cdots\right])=\\\ =\mathop{\operatorname{colim}}\left[\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})\xrightarrow{[1]}\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})\xrightarrow{[2]}\right.\\\ 
\left.\xrightarrow{[2]}\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l})\xrightarrow{[3]}\cdots\right]=L^{\mathtt{Sp}}_{\mathbb{Q}}\operatorname{Hom}_{\mathtt{NSp}}(\Sigma^{\infty}_{\mathtt{NC}}M_{k},\Sigma^{\infty}_{\mathtt{NC}}M_{l}).$ Here the first equality is by definition, the second equality uses the adjunction $L^{\mathtt{NSp}}_{\mathbb{Q}}\vdash i^{\mathtt{NSp}}_{\mathbb{Q}}$, the third equality is the formula for $L^{\mathtt{NSp}}_{\mathbb{Q}}$, the fourth equality is by the compactness of $\Sigma^{\infty}_{\mathtt{NC}}M_{k}$ and the fifth equality uses the formula for $L^{\mathtt{Sp}}_{\mathbb{Q}}$. ∎ The main theorems of [ABS1] have rational and $p$-local analogs with completely analogous proofs. ###### Theorem 10.2. Let $\mathcal{D}$ be a symmetric monoidal cocomplete rational (resp. $p$-local) $\infty$-category. Suppose that there is a small set $C$ of compact objects in $\mathcal{D}$ that generates $\mathcal{D}$ under colimits and desuspensions. Assume that $1_{\mathcal{D}}\in C$ and $C$ is closed under tensor product. Thinking of $\mathcal{D}$ as left-tensored over $\mathtt{Sp}_{\mathbb{Q}}$ (resp. $\mathtt{Sp}_{(p)}$), we let $\mathcal{C}$ be the full $\mathtt{Sp}_{\mathbb{Q}}$-enriched (resp. $\mathtt{Sp}_{(p)}$-enriched) subcategory of $\mathcal{D}$ spanned by $C$. Then we have a natural symmetric monoidal functor of categories left-tensored over $\mathtt{Sp}_{\mathbb{Q}}$ (resp. $\mathtt{Sp}_{(p)}$) $P_{\mathtt{Sp}_{\mathbb{Q}}}(\mathcal{C})\xrightarrow{\sim}\mathcal{D}\quad\left(\mbox{resp. }P_{\mathtt{Sp}_{(p)}}(\mathcal{C})\xrightarrow{\sim}\mathcal{D}\right),$ which is an equivalence of the underlying $\infty$-categories and sends each representable presheaf $Y(c)$ to $c\in C$. ###### Theorem 10.3. The $\mathtt{Sp}_{\mathbb{Q}}$-enriched (resp. $\mathtt{Sp}_{(p)}$-enriched) category ${\mathcal{M}^{\mathbb{Q}}}$ (resp.
${\mathcal{M}^{(p)}}$) acquires a canonical symmetric monoidal structure, the category of presheaves $P_{\mathtt{Sp}_{\mathbb{Q}}}({\mathcal{M}^{\mathbb{Q}}})$ (resp. $P_{\mathtt{Sp}_{(p)}}({\mathcal{M}^{(p)}})$) acquires a canonical symmetric monoidal left $\mathtt{Sp}_{\mathbb{Q}}$-tensored (resp. $\mathtt{Sp}_{(p)}$-tensored) structure, and we have a natural symmetric monoidal left $\mathtt{Sp}_{\mathbb{Q}}$-tensored (resp. $\mathtt{Sp}_{(p)}$-tensored) functor $P_{\mathtt{Sp}_{\mathbb{Q}}}({\mathcal{M}^{\mathbb{Q}}})\xrightarrow{\sim}\mathtt{NSp}_{\mathbb{Q}}\quad\left(\mbox{resp. }P_{\mathtt{Sp}_{(p)}}({\mathcal{M}^{(p)}})\xrightarrow{\sim}\mathtt{NSp}_{(p)}\right)$ which is an equivalence of the underlying $\infty$-categories. #### An explicit presentation of ${\mathcal{M}^{\mathbb{Q}}}$ and $\mathtt{NSp}_{\mathbb{Q}}$ We can use our results to give a very explicit description of ${\mathcal{M}^{\mathbb{Q}}}$, and therefore of the noncommutative rational stable homotopy category. The rationalization functor $L^{\mathtt{Sp}}_{\mathbb{Q}}\colon\mathtt{Sp}\to\mathtt{Sp}_{\mathbb{Q}}$ is a symmetric monoidal left adjoint.
As explained in Section 5.1, we have an induced adjunction $(L^{\mathtt{Sp}}_{\mathbb{Q}})_{!}\colon\mathtt{Cat}^{\otimes}_{\mathtt{Sp}}\leftrightarrows\mathtt{Cat}^{\otimes}_{\mathtt{Sp}_{\mathbb{Q}}}\nobreak\mspace{6.0mu}{:}\nonscript\mkern-3.0mu\mathpunct{}\mspace{2.0mu}i_{!}.$ Composing adjunctions we obtain $(L^{\mathtt{Sp}}_{\mathbb{Q}}\circ\Sigma^{\infty}_{+})_{!}\colon\mathtt{Cat}^{\otimes}\leftrightarrows\mathtt{Cat}^{\otimes}_{\mathtt{Sp}_{\mathbb{Q}}}\nobreak\mspace{6.0mu}{:}\nonscript\mkern-3.0mu\mathpunct{}\mspace{2.0mu}(\Omega^{\infty}\circ i)_{!}.$ We denote $\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}}:=(L^{\mathtt{Sp}}_{\mathbb{Q}}\circ\Sigma^{\infty}_{+})_{!}(\mathbb{P}\operatorname{Inj}_{\infty})\in\mathtt{Cat}^{\otimes}_{\mathtt{Sp}_{\mathbb{Q}}}$ and we denote the mate of $L^{\mathtt{Sp}}_{\mathbb{Q}}\circ\Sigma^{\infty}\circ\widetilde{\textrm{End}}\in\operatorname{Map}_{\mathtt{Cat}^{\otimes}}(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathrm{op}},\mathtt{NSp}_{\mathbb{Q}})$ under this adjunction by $\widetilde{\mathrm{E}}_{\mathbb{Q}}\colon(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}})^{\mathrm{op}}\to\mathtt{NSp}_{\mathbb{Q}}\in\mathtt{Cat}^{\otimes}_{\mathtt{Sp}_{\mathbb{Q}}}.$ ###### Proposition 10.4. The functor $\widetilde{\mathrm{E}}_{\mathbb{Q}}\colon(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}})^{\mathrm{op}}\to\mathtt{NSp}_{\mathbb{Q}}\in\mathtt{Cat}^{\otimes}_{\mathtt{Sp}_{\mathbb{Q}}}$ is fully faithful as an $\mathtt{Sp}_{\mathbb{Q}}$-enriched functor with essential image $\mathcal{M}^{\mathbb{Q}}\subseteq\mathtt{NSp}_{\mathbb{Q}}.$ ###### Proof. The statement about the essential image is not affected by changing enrichment and is thus clear from the description of the functor End. The full faithfulness follows from Lemma 5.12 and Corollary 9.2. ∎ In view of Theorem 10.3 and Proposition 10.4, we get the following result: ###### Theorem 10.5.
We have a sequence of equivalences of symmetric monoidal $\infty$-categories $\mathrm{Fun}(\mathbb{P}\operatorname{Inj}_{\infty},\mathtt{Sp}_{\mathbb{Q}})\cong P_{\mathtt{Sp}_{\mathbb{Q}}}((\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}})^{\mathrm{op}})\cong\mathtt{NSp}_{\mathbb{Q}}.$ ###### Proof. The second equivalence is an immediate corollary of Theorem 10.3 and Proposition 10.4. For the first equivalence note that $P_{\mathtt{Sp}_{\mathbb{Q}}}((\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}})^{\mathrm{op}})=\mathrm{Fun}_{\mathtt{Sp}_{\mathbb{Q}}}(\mathbb{P}\operatorname{Inj}_{\infty}^{\mathtt{Sp}_{\mathbb{Q}}},\mathtt{Sp}_{\mathbb{Q}}),$ where $\mathrm{Fun}_{\mathtt{Sp}_{\mathbb{Q}}}$ stands for $\mathtt{Sp}_{\mathbb{Q}}$-enriched functors. But we have an induced adjunction $(L^{\mathtt{Sp}}_{\mathbb{Q}}\circ\Sigma^{\infty}_{+})_{!}\colon\mathtt{Cat}\leftrightarrows\mathtt{Cat}_{\mathtt{Sp}_{\mathbb{Q}}}\nobreak\mspace{6.0mu}{:}\nonscript\mkern-3.0mu\mathpunct{}\mspace{2.0mu}(\Omega^{\infty}\circ i)_{!},$ so we have natural equivalences $\operatorname{Fun}_{\mathtt{Sp}_{\mathbb{Q}}}(\mathbb{P}\operatorname{Inj}^{\mathtt{Sp}_{\mathbb{Q}}}_{\infty},\mathtt{Sp}_{\mathbb{Q}})\simeq\operatorname{Fun}_{\mathtt{Sp}_{\mathbb{Q}}}((L^{\mathtt{Sp}}_{\mathbb{Q}}\circ\Sigma^{\infty}_{+})_{!}(\mathbb{P}\operatorname{Inj}_{\infty}),\mathtt{Sp}_{\mathbb{Q}})\simeq$ $\operatorname{Fun}(\mathbb{P}\operatorname{Inj}_{\infty},(\Omega^{\infty}\circ i)_{!}\mathtt{Sp}_{\mathbb{Q}})\simeq\operatorname{Fun}(\mathbb{P}\operatorname{Inj}_{\infty},\mathtt{Sp}_{\mathbb{Q}}),$ and we are done. ∎ #### $p$-local and chromatic picture Now instead of rationalizing, suppose we fix a prime $p$ and localize everything at $p$. One can obtain further information about the $p$-localization of $\mathcal{M}$. It follows from Proposition 1.4 parts (3) and (4) that the rank filtration of ${\mathcal{M}^{(p)}}$ is constant except at powers of $p$.
Therefore it is natural to regrade the filtration of $\mathbb{S}^{k,l}$ as follows $\mathbb{S}^{k,l,1}\hookrightarrow\mathbb{S}^{k,l,p}\hookrightarrow\mathbb{S}^{k,l,p^{2}}\hookrightarrow\cdots\hookrightarrow\mathbb{S}^{k,l,p^{i}}\hookrightarrow\cdots$ Let us loosely refer to $\mathbb{S}^{k,l,p^{i}}$ as morphisms of filtration $i$. The $p$-localization of $\mathcal{M}$ is then a filtered category in the sense that the composition of a morphism of filtration $i$ and a morphism of filtration $j$ has filtration $i+j$; in other words, with this grading the $p$-localization of $\mathcal{M}$ is a graded category in the usual sense that composition adds degrees. Lastly, let us mention that the last part of Proposition 9.1 implies the following ###### Corollary 10.6. Fix a prime $p$ and localize everything at $p$. The map $\mathbb{S}^{k,l,p^{n}}\to\mathbb{S}^{k,l}$ induces an isomorphism on Morava $K(i)$-theory for $i\leq n$. We wonder if one can use this result to say something interesting about the chromatic localization of $\mathtt{NSp}$, but we will not pursue this here. ## References * [AG] Andersen K. K. S., Grodal J. A Baues fibration category structure on Banach and $C^{*}$-algebras, preprint, available at http://www.math.ku.dk/~jg/papers/fibcat.pdf, 1997. * [Ar1] Arone, G. _The Weiss derivatives of $BO(-)$ and $BU(-)$_, Topology 41 (2002), no. 3, 451–481. * [Ar2] Arone, G. _Iterates of the suspension map and Mitchell's finite spectra with $A_{k}$-free cohomology_, Math. Res. Lett. 5 (1998), no. 4, 485–496. * [ABS1] Arone, G., Barnea I., Schlank, T. M. _Noncommutative CW-spectra as enriched presheaves on matrix algebras_, arXiv:2101.09775. * [ADL] Arone G. Z., Dwyer W. G., Lesh K. Loop structures in Taylor towers, Algebraic and Geometric Topology 8, 2008, p. 173–210. * [AL1] Arone, G., Lesh, K., _Filtered spectra arising from permutative categories_, J. Reine Angew. Math. 604 (2007), 73–136.
* [AL2] Arone, G., Lesh, K., _Augmented $\Gamma$-spaces, the stable rank filtration, and a bu analogue of the Whitehead conjecture_, Fund. Math. 207 (2010), no. 1, 29–70. * [AL3] Arone, G., Lesh, K., _Fixed points of coisotropic subgroups of $\Gamma_{k}$ on decomposition spaces,_ arXiv:1701.06070. * [BHH] Barnea I., Harpaz Y., Horel G. Pro-categories in homotopy theory, Algebraic and Geometric Topology 17.1, 2017, p. 567–643. * [BJL+] Bergner, J., Joachimi, R., Lesh, K., Stojanoska, V., Wickelgren, K., _Classification of problematic subgroups of $\boldsymbol{U(n)}$_, Trans. Amer. Math. Soc. (to appear) DOI: https://doi.org/10.1090/tran/7442. * [BJM] Barnea I., Joachim M., Mahanta S. Model structure on projective systems of $C^{*}$-algebras and bivariant homology theories, New York Journal of Mathematics 23, 2017, p. 383–439. * [BM] Berger, C., and Moerdijk, I. _On an extension of the notion of Reedy category,_ Math. Z. 269 (2011), no. 3–4, 977–1004. * [BF] Bousfield, A. and Friedlander, E. _Homotopy theory of $\Gamma$-spaces, spectra, and bisimplicial sets,_ Geometric applications of homotopy theory (Proc. Conf., Evanston, Ill., 1977), II, pp. 80–130, Lecture Notes in Math., 658, Springer, Berlin, 1978. * [Br] Bratteli, O., _Inductive limits of finite dimensional $C^{*}$-algebras_, Trans. Amer. Math. Soc. 171 (1972), 195–234. * [BGR] Brown L. G., Green P., Rieffel M. A. Stable isomorphism and strong Morita equivalence of $C^{*}$-algebras, Pacific J. Math. 71, 1977, p. 349–363. * [DM] Dadarlat, M., and McClure, J. _When are two commutative $C^{*}$-algebras stably homotopy equivalent?_, Math. Z. 235 (2000), no. 3, 499–523. * [EbRW] Ebert, J., Randal-Williams, O. Semisimplicial spaces, Algebr. Geom. Topol. 19 (2019), no. 4, 2099–2150. * [ELP] Eilers S., Loring T. A., Pedersen G. K. Stability of anticommutation relations: An application of noncommutative CW-complexes, J. Reine Angew. Math., vol. 499 (1998), 101–143. * [Hin1] Hinich V.
_Dwyer-Kan localization revisited_, Homology, Homotopy and Applications 18.1, 2016, p. 27–48. * [Hin2] Hinich V. Yoneda lemma for enriched infinity categories, Adv. Math. 367 (2020), 107129. * [Hin3] Hinich V. Colimits in enriched $\infty$-categories and Day convolution, arXiv:2101.09538. * [Lyd1] Lydakis, M. Smash products and $\Gamma$-spaces, Math. Proc. Cambridge Philos. Soc. 126 (1999), no. 2, 311–328. * [Lyd2] Lydakis, M. Simplicial functors and stable homotopy theory, unpublished preprint, available at https://hopf.math.purdue.edu/Lydakis/s_functors.pdf or http://users.math.uoc.gr/~mlydakis/papers/sf.pdf. * [MMSS] Mandell, M., May, P., Schwede, S., Shipley, B. Model categories of diagram spectra, Proc. London Math. Soc. (3) 82 (2001), no. 2, 441–512. * [Qui] Quillen D. G. Homotopical Algebra, Lecture Notes in Mathematics, Vol. 43, Springer-Verlag, Berlin, 1967. * [Rie] Rieffel M. A. Induced representations of $C^{*}$-algebras, Advances in Mathematics 13, 1974, p. 176–257. * [Se1] Segal, G., Categories and cohomology theories, Topology 13, 1974, p. 293–312. * [Se2] Segal, G., $K$-homology theory and algebraic $K$-theory, in $K$-theory and Operator Algebras, Athens, Georgia, 1975, Lect. Notes in Math. 575, Springer-Verlag 1977, 113–127. * [Zet] Zettl H. H. Ideals in Hilbert modules and invariants under strong Morita equivalence of $C^{*}$-algebras, Arch. Math. 39, 1982, p. 69–77.
# Collapsing Calabi-Yau fibrations and uniform diameter bounds Yang Li ###### Abstract As a sequel to [19], we study Calabi-Yau metrics collapsing along a holomorphic fibration over a Riemann surface. Assuming at worst canonical singular fibres, we prove a uniform diameter bound for all fibres in the suitable rescaling. This has consequences for the geometry around the singular fibres. ## 1 Introduction The present paper studies the adiabatic limiting behaviour of Ricci flat Kähler metrics on a Calabi-Yau manifold under the degeneration of the Kähler class. The basic setting is: ###### Setting 1.1. Let $(X,\omega_{X})$ be an $n$-dimensional projective manifold with nowhere vanishing holomorphic volume form $\Omega$, normalised to $\int_{X}i^{n^{2}}\Omega\wedge\overline{\Omega}=1$. Let $\pi:X\to Y$ be a holomorphic fibration onto a Riemann surface, with connected fibres denoted by $X_{y}$ for $y\in Y$. Let $\omega_{Y}$ be a Kähler metric on $Y$; without loss of generality $\int_{X_{y}}\omega_{X}^{n-1}=1$ and $\int_{Y}\omega_{Y}=1$. The singular fibres lie over the discriminant locus $S\subset Y$, and $\pi$ is a submersion over $Y\setminus S$. We assume the singular fibres are normal and have _at worst canonical singularities_. Let $\tilde{\omega}_{t}$ be the Calabi-Yau metrics on $X$ in the class of $\omega_{t}=t\omega_{X}+\pi^{*}\omega_{Y}$, for $0<t\ll 1$. ###### Example 1.2. The most elementary examples are projective Calabi-Yau manifolds with Lefschetz fibrations over $\mathbb{P}^{1}$, for $n\geq 3$. The basic _non-example_ is a K3 surface with an elliptic fibration, such that the singular fibres are of type $I_{1}$. The wider question of collapsing Calabi-Yau metrics is intensely investigated by Tosatti and collaborators [25][26][27][13][14][17]. Most of these works concentrate only on what happens away from the singular fibres.
The author’s previous work [19] recognized the importance of the uniform fibre diameter bound for the geometry near the singular fibres. This means $\text{diam}(X_{y},t^{-1}\tilde{\omega}_{t})\leq C,$ (1) with constants independent of the fibre $X_{y}$ and the collapsing parameter $t$. More precisely, we mean that any two points on $X_{y}$ can be joined by some path in $X$ (_not necessarily contained in $X_{y}$_) whose $t^{-1}\tilde{\omega}_{t}$-length is uniformly bounded. The central result in [19] (modulo some technical generalizations) is essentially ###### Theorem 1.3. In setting 1.1, we assume the uniform diameter bound (1). Fix a singular fibre $X_{0}$ and a point $P\in X_{0}$, and let $Z$ be a pointed Gromov-Hausdorff subsequential limit of $(X,t^{-1}\tilde{\omega}_{t},P)$. Assume in addition that any holomorphic vector field on the regular part of $X_{0}$ vanishes. Then $Z$ is isometric to $\bar{X}_{0}\times\mathbb{C}$ with the product metric, where we equip $\mathbb{C}$ with the Euclidean metric, and $\bar{X}_{0}$ stands for the metric completion of the singular Calabi-Yau metric on $X_{0}^{reg}$ in the class $[\omega_{X}]$. A detailed review of the main steps of [19] will be given in section 2 (partly because some intermediate conclusions are useful, and partly for technical generalizations). It was also observed in [19] that in some special cases the uniform fibre diameter bound can be deduced from a conjectural Hölder bound on the Kähler potential uniformly on the fibres, and the main evidence in [19] is a nontrivial diameter bound for nodal K3 fibres. While this Hölder bound strategy has recently found a number of interesting applications (eg. [8][11]), the conjecture remains hitherto unresolved, due to the difficulty of complex structure/Kähler class degeneration. The purpose of this paper is to present a clean uniform proof of ###### Theorem 1.4. In the setting 1.1, the uniform fibre diameter bound (1) holds.
This combined with Theorem 1.3 has implications for the pointed Gromov-Hausdorff limit around singular fibres. ###### Remark. It should be emphasized that for the uniform fibre diameter bound to hold, the ‘at worst canonical singular fibre’ assumption is _necessary_, at least if $[\omega_{X}]$ is a rational class. This is because on any smooth fibre $X_{y}$, the rescaled fibrewise metric $t^{-1}\tilde{\omega}_{t}$ converges smoothly to the unique Calabi-Yau metric $\omega_{SRF,y}$ on $(X_{y},[\omega_{X}])$ as $t\to 0$, with convergence rate depending on $y\in Y\setminus S$ [25][26]. Thus the uniformity in both $t$ and $y$ would imply a uniform diameter bound for all $\omega_{SRF,y}$, $y\in Y\setminus S$, which is known to be equivalent to the ‘at worst canonical singularity’ condition, assuming the rest of setting 1.1 and in addition that $[\omega_{X}]$ is an integral class up to a constant multiple [23]. For instance, this uniform fibre diameter bound is _not true_ around nodal elliptic curve fibres on a K3 surface. ###### Remark. In the motivating case [19] of Calabi-Yau 3-folds with Lefschetz K3 fibrations, the uniform diameter bound and the Gromov-Hausdorff convergence statements are consequences of the author’s gluing construction [20]. It is very plausible that a similar construction can be made for higher dimensional Lefschetz fibrations. But it seems unlikely that a gluing strategy can work in the full generality of at worst canonical singularities. The strategy for the uniform fibre diameter bound has two main new ingredients. The first is a uniform exponential integrability of the distance function on the fibres, which amounts to proving the uniform fibre diameter bound modulo a set of exponentially small measure. This method (_cf._ Theorem 3.1) is of a very general nature and of independent interest.
The second is a judicious application of Bishop-Gromov monotonicity to a critically chosen ball, which prevents a subset of exponentially small measure from staying far from the rest of the manifold. ###### Acknowledgement. The author is a 2020 Clay Research Fellow, based at MIT. He thanks Valentino Tosatti for comments. ## 2 Outline: from diameter bound to GH limit We now give an outline of Theorem 1.3, largely following [19], concerning how to identify the pointed Gromov-Hausdorff limit of the neighbourhood of the (at worst canonical) singular fibre, in setting 1.1, assuming the uniform diameter bound (1). The key is that the uniform diameter bound implies a local non-collapsing condition around any given fibre, which enables the application of many standard geometric analysis arguments, in particular Cheeger-Colding theory. We record some useful background facts. ###### Proposition 2.1. [13][21][23] Assume the setting 1.1. Then 1. 1. The relative holomorphic volume form $\Omega_{y}$ defined by $\Omega=\Omega_{y}\wedge dy$ satisfies the uniform bound $A_{y}=\int_{X_{y}}i^{(n-1)^{2}}\Omega_{y}\wedge\overline{\Omega}_{y}\leq C$ for all $y$ around any given singular fibre. In fact $A_{y}$ is continuous in $y$. 2. 2. The unique Calabi-Yau metrics $\omega_{SRF,y}$ on $X_{y}$ in the class $[\omega_{X}]$ have uniformly bounded diameters independent of $y$, or equivalently, these metrics are uniformly volume non-collapsed. 3. 3. There exists $p>1$ such that $\int_{X_{y}}|\frac{\Omega_{y}\wedge\overline{\Omega}_{y}}{\omega_{X}^{n-1}}|^{p}\omega_{X}^{n-1}\leq C$ for all $y$ around a given singular fibre. Moreover, the Calabi-Yau metrics $\omega_{SRF,y}$ are continuous in $y$ in the Gromov-Hausdorff topology, including around singular fibres, where $\omega_{SRF,y}$ is understood as the metric completion of the regular locus for the singular Calabi-Yau metric constructed in [6]. ###### Remark. The $L^{p}$ volume bound can be seen by passing to a log resolution.
The uniform diameter bound is proved by the technique of [21], and under the projective class condition it is known to be equivalent to the at worst canonical singular fibre assumption [23], as an application of Donaldson-Sun theory [4]. ### 2.1 Basic setup and pointwise estimates Write the Calabi-Yau metric in terms of the potential $\phi$ depending on $t$: $\tilde{\omega}_{t}=\omega_{t}+\sqrt{-1}\partial\bar{\partial}\phi,\quad\omega_{t}=t\omega_{X}+\pi^{*}\omega_{Y}.$ The Calabi-Yau condition for $\tilde{\omega}_{t}$ reads $\tilde{\omega}_{t}^{n}=a_{t}t^{n-1}i^{n^{2}}\Omega\wedge\overline{\Omega},$ (2) where $a_{t}$ is a cohomological constant. Under the normalisation $\int i^{n^{2}}\Omega\wedge\overline{\Omega}=1$, and $\int_{X_{y}}[\omega_{X}]^{n-1}=1$, and since the base is 1-dimensional, $a_{t}=\sum_{k=0}^{1}\pi^{*}[\omega_{Y}]^{k}\cdot[\omega_{X}]^{n-k}{n\choose k}t^{1-k}.$ (3) In the limit $a_{t}$ converges to $a_{0}=n\int_{Y}\omega_{Y}=n.$ From complex pluripotential theory, ###### Proposition 2.2. [7][3] There is a uniform constant such that $\left\lVert\phi\right\rVert_{L^{\infty}}\leq C.$ By a maximum principle argument based on the Chern-Lu formula, ###### Proposition 2.3. There is a uniform bound $\operatorname{Tr}_{\tilde{\omega}_{t}}\pi^{*}\omega_{Y}\leq C$. Consequently, the fibrewise restriction $\tilde{\omega}_{t}|_{X_{y}}$ has the pointwise volume density upper bound $\frac{\tilde{\omega}_{t}^{n-1}|_{X_{y}}}{\omega_{SRF,y}^{n-1}}=\frac{\tilde{\omega}_{t}^{n-1}\wedge\omega_{Y}}{\omega_{SRF,y}^{n-1}\wedge\omega_{Y}}\leq C\frac{\tilde{\omega}_{t}^{n}(\operatorname{Tr}_{\tilde{\omega}_{t}}\pi^{*}\omega_{Y})}{\Omega_{y}\wedge\overline{\Omega_{y}}\wedge\omega_{Y}}\leq Ct^{n-1}.$ (4) Define the oscillation to be $\text{osc}=\sup-\inf$. 
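For the reader's convenience, here is a sketch (not spelled out in the text above) of how the formula (3) for the cohomological constant $a_{t}$ arises: integrate the Calabi-Yau equation (2) over $X$, and use that $\pi^{*}[\omega_{Y}]^{2}=0$ since the base is a Riemann surface, so

```latex
a_{t}t^{n-1}
  = \int_{X}\tilde{\omega}_{t}^{n}
  = \left(t[\omega_{X}]+\pi^{*}[\omega_{Y}]\right)^{n}
  = t^{n}[\omega_{X}]^{n} + n\,t^{n-1}\,[\omega_{X}]^{n-1}\cdot\pi^{*}[\omega_{Y}],
```

using the normalisation $\int_{X}i^{n^{2}}\Omega\wedge\overline{\Omega}=1$ from Setting 1.1. Since $[\omega_{X}]^{n-1}\cdot\pi^{*}[\omega_{Y}]=\int_{Y}\big(\int_{X_{y}}\omega_{X}^{n-1}\big)\omega_{Y}=1$, this gives $a_{t}=t[\omega_{X}]^{n}+n$, which agrees with (3) term by term and recovers the limit $a_{0}=n$.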
By applying Yau’s $C^{0}$-estimate fibrewise, with $\omega_{SRF,y}$ as the background metric (which has uniformly bounded Sobolev and Poincaré constants in the ‘at worst canonical singular fibre’ context), ###### Lemma 2.4. The fibrewise oscillation satisfies the uniform bound $\text{osc}_{X_{y}}\phi\leq Ct.$ Next one introduces the fibrewise average function of $\phi$: $\underline{\phi}=\int_{X_{y}}\phi\,\omega_{X}^{n-1}.$ A computation based on the Chern-Lu inequality gives $\Delta_{\tilde{\omega}_{t}}(\log\operatorname{Tr}_{\tilde{\omega}_{t}}\omega_{X}-\frac{C}{t}(\phi-\underline{\phi}))\geq\operatorname{Tr}_{\tilde{\omega}_{t}}\omega_{X}-\frac{\text{Const}}{t}.$ Now the fibrewise oscillation bound gives $\frac{1}{t}|\phi-\underline{\phi}|\leq C$, whence a maximum principle argument gives ###### Theorem 2.5. There is a uniform pointwise lower bound $\tilde{\omega}_{t}\geq C\omega_{t}.$ The severity of the singularity is measured by the function $H=\frac{\omega_{X}^{n-1}\wedge\omega_{Y}}{\omega_{X}^{n}}$, whose zero locus is precisely the set of $\pi$-critical points on $X$. By pointwise simultaneous diagonalisation of $\tilde{\omega}_{t}$ and $\omega_{t}$, ###### Corollary 2.6. There is a uniform upper bound $\tilde{\omega}_{t}\leq\frac{C}{H}\omega_{t}.$ In particular, in the subset $\\{H\gtrsim 1\\}\subset X$, namely the region away from the $\pi$-critical points but not necessarily away from the singular fibres, there is a uniform equivalence $C^{-1}\omega_{t}\leq\tilde{\omega}_{t}\leq C\omega_{t}.$ (5) Around any given point in $\\{H\gtrsim 1\\}$, Evans-Krylov theory gives that $t^{-1}\tilde{\omega}_{t}$ has a uniform $C^{\infty}$ bound with respect to the background metric $t^{-1}\omega_{t}$. ###### Corollary 2.7.
Inside $\\{H\gtrsim 1\\}$, $\left\lVert\nabla_{\omega_{X}}^{(k)}\frac{1}{t}\tilde{\omega}_{t}|_{X_{y}}\right\rVert_{L^{\infty}}\leq C(k),\quad\left\lVert\nabla_{\omega_{X}}^{(k)}(\operatorname{Tr}_{\tilde{\omega}_{t}}\omega_{Y})|_{X_{y}}\right\rVert_{L^{\infty}}\leq C(k).$ ###### Remark. It should be emphasized that near the $\pi$-critical points, the metrics $\omega_{t}$ and $\tilde{\omega}_{t}$ are far from uniformly equivalent. Furthermore, the pointwise estimate from Cor. 2.6 cannot imply the uniform fibre diameter bound (1), nor do the fibres have any useful lower bound on the Ricci curvature to imply (1). Resolving this difficulty is the main concern of the present paper. ### 2.2 Local noncollapsing From now on we assume (1) in the exposition. ###### Proposition 2.8. Assuming (1), $t^{-1}\tilde{\omega}_{t}$ satisfies the local volume non-collapsing estimate: around any central point $P$, and for any $1\lesssim R\lesssim t^{-1/2}$, $Vol(B_{t^{-1}\tilde{\omega}_{t}}(P,R))\geq CR^{2}.$ (6) Moreover $Vol(B_{t^{-1}\tilde{\omega}_{t}}(P,R))\geq CR^{2n}$ for any $R\lesssim 1$. ###### Proof. Assume first that $R\gtrsim 1$. Any fibre contains a subregion $\\{H\gtrsim 1\\}$ where $\tilde{\omega}_{t}$ is uniformly equivalent to $\omega_{t}$. Thus if $d_{\omega_{Y}}(y,y^{\prime})\lesssim Rt^{1/2}/C$, then the $t^{-1}\tilde{\omega}_{t}$-distance between the two fibres $X_{y}$ and $X_{y^{\prime}}$ is $O(R)$. Using the fibre diameter bound, we can reach any point on a nearby fibre within $O(R)$ distance, so the ball $B_{t^{-1}\tilde{\omega}_{t}}(P,CR)$ contains the preimage of $B_{\omega_{Y}}(\pi(P),Rt^{1/2})$. Since the volume form of $t^{-1}\tilde{\omega}_{t}$ is $a_{t}t^{-1}i^{n^{2}}\Omega\wedge\overline{\Omega}$, we obtain the estimate (6). The $R\lesssim 1$ case follows from Bishop-Gromov monotonicity using the Ricci flatness of $\tilde{\omega}_{t}$.
∎ Thus non-collapsing Cheeger-Colding theory applies, and in particular around any point on $X$, including $\pi$-critical points, one can take non-collapsing pointed Gromov-Hausdorff limits of $(X,\frac{1}{t}\tilde{\omega}_{t},P)$, with all the standard consequences on its regularity. ### 2.3 Convergence estimates Let $t\ll 1$. We fix a central fibre $X_{0}$, which can be singular. The one-dimensional base condition will be crucially used. Consider a coordinate ball $\\{|y|\leq R\\}\subset Y$. Let $\omega_{Y,0}=A_{0}\sqrt{-1}dy\wedge d\bar{y}$ be a Euclidean metric on $\\{|y|\leq R\\}$, where we recall $A_{y}=\int_{X_{y}}i^{(n-1)^{2}}\Omega_{y}\wedge\overline{\Omega}_{y}$. The Chern-Lu inequality gives the subharmonicity $\Delta_{\tilde{\omega}_{t}}(\log\operatorname{Tr}_{\tilde{\omega}_{t}}\omega_{Y,0})\geq 0.$ Using a slightly tricky argument based on the 3-circle inequality and the Harnack inequality (relying on the local non-collapsing), we deduce ###### Proposition 2.9. Assuming (1), we have a concentration estimate for $\operatorname{Tr}_{\tilde{\omega}_{t}}\omega_{Y,0}$ uniform for all choices of $X_{0}$: $\max_{|y|\leq t^{1/2}}\operatorname{Tr}_{\tilde{\omega}_{t}}\omega_{Y,0}\leq 1+\frac{C}{|\log t|},$ (7) $t^{-n}\left\lVert\operatorname{Tr}_{\tilde{\omega}_{t}}\omega_{Y,0}-1\right\rVert_{L^{1}_{\tilde{\omega}_{t}}(|y|\lesssim t^{1/2})}\leq\frac{C}{|\log t|}.$ (8) The concentration estimate easily entails that the two volume densities on $X_{0}$, given by $(t^{-1}\tilde{\omega}_{t})^{n-1}$ and $\omega_{SRF,y}^{n-1}$, are close in the $L^{1}$-sense. By considering the fibrewise Monge-Ampère equation, one deduces that their relative Kähler potential is small in an integral sense. In the regular region $\\{H\gtrsim 1\\}$, this improves the smooth bounds in Cor. 2.7 to convergence bounds: ###### Proposition 2.10.
For any small $\epsilon>0$, $\left\lVert\nabla^{(k)}_{\omega_{X}}(\omega_{SRF,y}-\frac{1}{t}\tilde{\omega}_{t}|_{X_{y}})\right\rVert_{L^{\infty}({X_{y}}\cap\\{H\gtrsim 1\\})}\leq\frac{C(k,\epsilon)}{|\log t|^{1/2-\epsilon}}.$ (9) There is one extra bit of juice one can squeeze out of the Chern-Lu formula and the concentration estimate, using an integration by parts argument. We have a gradient bound, which shows that $d\pi$ is in some sense approximately parallel: $\int_{|y|\lesssim t^{1/2}}(\frac{|\nabla d\pi|^{2}}{\operatorname{Tr}_{\tilde{\omega}_{t}}\omega_{Y,0}}-|\partial\log\operatorname{Tr}_{\tilde{\omega}_{t}}\omega_{Y,0}|^{2})\tilde{\omega}_{t}^{n}\leq\frac{Ct^{n-1}}{|\log t|}.$ (10) All these estimates are independent of the choice of $X_{0}$. ### 2.4 Gromov-Hausdorff limit around the singular fibre Fix a point $P$ on a (singular) fibre $X_{0}$, and look at the pointed sequence of Ricci flat spaces $Z_{t}=(\pi^{-1}B_{\omega_{Y}}(0,R)\subset X,t^{-1}\tilde{\omega}_{t})$. Local noncollapsing implies that after passing to a subsequence, there is some complex $n$-dimensional Gromov-Hausdorff limit space $(Z,\omega_{\infty})$, with a Hausdorff codimension 4 regular locus $Z^{\text{reg}}$ which is connected, open, dense, where the limiting metric is smooth. Moreover $Z^{\text{reg}}$ has a natural limiting complex structure, such that the limiting metric is Kähler. We shall suppress below mentions of the subsequence to avoid overloading notation, and tacitly understand that a Gromov-Hausdorff metric is fixed on the disjoint union $Z_{t}\sqcup Z$, which displays the GH convergence. Recall $t\ll 1$. We wish to identify the complex structure. First, some heuristics: since everything away from the fibre $X_{0}$ is pushed to infinity by scaling, the limit as a complex variety should be the normal neighbourhood of $X_{0}$, which is just the trivial product $X_{0}\times\mathbb{C}$ in the case of a smooth fibre, and the guess is that the same is true for the singular fibre.
More formally, we build comparison maps. Let $u$ denote the standard coordinate on $\mathbb{C}$, and let $\omega_{\mathbb{C}}$ denote the standard Euclidean metric on $\mathbb{C}$. Define the holomorphic maps $f_{t}:(Z_{t},t^{-1}\tilde{\omega}_{t})\to(X\times\mathbb{C},\omega_{X}+\omega_{\mathbb{C}}),\quad x\mapsto(x,u=t^{-1/2}\pi(x)).$ Our scaling convention is that $\omega_{\mathbb{C}}$ agrees with $t^{-1}\omega_{Y,0}$ under the identification $u=t^{-1/2}y=t^{-1/2}\pi(x)$. By the uniform bound $\operatorname{Tr}_{\tilde{\omega}_{t}}{\omega_{t}}\leq C$, there is a Lipschitz bound on $f_{t}$ independent of $t$, so the Gromov-Hausdorff limit inherits a Lipschitz map $f_{\infty}$ into $X\times\mathbb{C}$. By the interior regularity of holomorphic functions, the limiting map $f_{\infty}$ is holomorphic. As a rather formal consequence of the uniform fibre diameter bound, we can identify the image: ###### Lemma 2.11. The image of $f_{\infty}$ is $X_{0}\times\mathbb{C}$. Recall the function $H$ measures the severity of the singularity. Now $H$ is a continuous function on $X_{0}$, so it defines a function on $Z$ by pulling back via $f_{\infty}$. A qualitative consequence of the regularity in $\\{H\gtrsim 1\\}$ is ###### Proposition 2.12. The map $f_{\infty}$ is a biholomorphism $\\{H>0\\}\subset Z\to X_{0}^{\text{reg}}\times\mathbb{C}$. Local noncollapsing and Ricci-flatness imply $\liminf\text{Vol}_{\frac{1}{t}\tilde{\omega}_{t}}(\\{|u|\leq D\\}\subset Z_{t})\geq\text{Vol}_{\omega_{\infty}}(\\{|u|\leq D\\}\subset Z).$ Using the explicit nature of the Calabi-Yau volume form, and the $C^{\infty}_{loc}$ convergence over $\\{H>0\\}$, one finds ###### Proposition 2.13. (Full measure property) The subset $\\{H>0\\}\simeq X_{0}^{reg}\times\mathbb{C}$ inside $Z$ must have full measure on each cylinder $\\{|u|\leq D\\}\subset Z$, so the set $H=0$ has measure zero in $Z$. In particular $X_{0}^{reg}\times\mathbb{C}$ is open and dense in $Z$.
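To illustrate the ‘explicit nature of the Calabi-Yau volume form’ invoked above, here is a sketch of the cylinder volume computation, with the conventions $i\,dy\wedge d\bar{y}=2\,dx_{1}\wedge dx_{2}$ and $\mathrm{Vol}(\cdot)=\int(\cdot)\,\omega^{n}$ as in the proof of Prop. 2.8: since $\\{|u|\leq D\\}\subset Z_{t}$ is the preimage of $\\{|y|\leq t^{1/2}D\\}$, the Calabi-Yau equation (2) and fibre integration give

```latex
\mathrm{Vol}_{t^{-1}\tilde{\omega}_{t}}\big(\{|u|\leq D\}\big)
  = \frac{a_{t}}{t}\int_{\{|y|\leq t^{1/2}D\}} i^{n^{2}}\Omega\wedge\overline{\Omega}
  = \frac{a_{t}}{t}\int_{\{|y|\leq t^{1/2}D\}} A_{y}\; i\,dy\wedge d\bar{y}
  \;\xrightarrow{\;t\to 0\;}\; 2\pi n\,A_{0}D^{2},
```

by the continuity of $A_{y}$ at $y=0$ (Prop. 2.1) and $a_{t}\to n$. In particular each cylinder carries a definite amount of volume in the limit, which is what the full measure property exploits.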
We now study the metric $\omega_{\infty}$ over the smooth region $X_{0}^{\text{reg}}\times\mathbb{C}$. By passing (9) to the limit, and using the continuity of $\omega_{SRF,y}$ at $y=0$, ###### Proposition 2.14. Over $X_{0}^{reg}\times\mathbb{C}$, the limiting metric restricts fibrewise to the Calabi-Yau metric $\omega_{SRF,0}$ on $X_{0}$. We also need information about the horizontal component of the metric. By passing the concentration estimate in Prop. 2.9 to the limit, ###### Proposition 2.15. The metric $\omega_{\infty}$ over $X_{0}^{reg}\times\mathbb{C}$ satisfies the Riemannian submersion property $\operatorname{Tr}_{\omega_{\infty}}\omega_{\mathbb{C}}=1.$ By passing the gradient estimate (10) to the limit, ###### Proposition 2.16. Over $X_{0}^{reg}\times\mathbb{C}$, the differential $du$ is parallel with respect to $\omega_{\infty}$. We can pin down the Riemannian metric on the regular locus: ###### Proposition 2.17. The limiting metric $\omega_{\infty}=\omega_{SRF,0}+\omega_{\mathbb{C}}$ over $X_{0}^{reg}\times\mathbb{C}$. ###### Proof. The parallel differential $du$ induces a parallel $(1,0)$-type vector field by the complexified Hamiltonian construction: $\iota_{V}\omega_{\infty}=d\bar{u}.$ In particular $V$ is a holomorphic vector field. By assumption, there is no holomorphic vector field on $X_{0}^{\text{reg}}$, so $V$ must lie in the subbundle $T\mathbb{C}\subset T(X_{0}^{\text{reg}}\times\mathbb{C})$. Moreover, on each fibre $X_{0}^{reg}\times\\{u\\}$, $V$ must be a constant multiple of $\frac{\partial}{\partial u}$. We can then write $V=\lambda(u)\frac{\partial}{\partial u}$, where $\lambda$ is a holomorphic function in $u$. Since $du$ and $V$ are both parallel, the quantity $\lambda=du(V)$ must be a constant. We know $\omega_{\infty}$ restricted to the fibres is just $\omega_{SRF,0}$. By construction, the vector field $V$ defines the Hermitian orthogonal complement of the holomorphic tangent space of the fibres.
Now $V=\lambda\frac{\partial}{\partial u}$, where the constant is specified by the Riemannian submersion property. The claim follows. ∎ ### 2.5 Geometric convexity There is still a small gap between Prop. 2.17 and the Gromov-Hausdorff convergence Theorem 1.3. By Prop. 2.17, we know the metric distance on $X_{0}^{reg}\times\mathbb{C}\subset Z$ is at most that of the product metric. We need to show that this is actually an equality, namely that one cannot shortcut the distance function by going through the singular set in $Z$. (This is the only part of the argument not contained in the more restrictive setting of [19]). If so, then the density of $X_{0}^{reg}\times\mathbb{C}$ in $Z$ (_cf._ Prop 2.13) will imply that $Z$ is isometric to $\bar{X}_{0}\times\mathbb{C}$ as required. Thus we concentrate on showing ###### Proposition 2.18. (Geometric convexity) Let $P_{1},P_{2}$ be two points in $X_{0}^{reg}\times\mathbb{C}$, which are GH limits of $P_{1}^{t}\in X$ and $P_{2}^{t}\in X$ respectively. Then for any given $\epsilon>0$, there is a small enough $\delta$, such that for all sufficiently small $t$, there is a path contained in $\\{H>\delta\\}\subset X$ from $P_{1}^{t}$ to $P_{2}^{t}$, whose $t^{-1}\tilde{\omega}_{t}$-length is at most $d(P_{1}^{t},P_{2}^{t})+\epsilon$. This is precisely what allows one to reduce the distance function computation to knowing the metric only in the regular region. Since $P_{1},P_{2}$ are fixed, we can regard $P_{1}^{t},P_{2}^{t}\in\\{H\gtrsim 1\\}$, and $d(P_{1}^{t},P_{2}^{t})\lesssim 1$. It is clear that the question only involves a local region of length scale $O(1)$. The main techniques are developed by Song, Tian and Zhang [10][11]. The following construction of a good cutoff function is taken from [10, Lem. 3.7], and applied to the singular CY metric $(X_{0},\omega_{SRF,0})$. ###### Lemma 2.19. Given $\lambda>0$ and any compact subset $K$ contained in $X_{0}^{reg}$.
There is a cutoff function $\rho_{\lambda}\in C^{\infty}(X_{0}^{reg})$ compactly supported in $X_{0}^{reg}$, with $0\leq\rho_{\lambda}\leq 1$, which equals one on $K$, and satisfies the gradient bound $\int_{X_{0}}|\nabla\rho_{\lambda}|^{2}\omega_{SRF,0}^{n-1}<\lambda.$ By Cauchy-Schwarz, $\int_{X_{0}}|\nabla\rho_{\lambda}|\omega_{SRF,0}^{n-1}\leq C\lambda^{1/2}.$ Applying the coarea formula to $|\nabla\rho_{\lambda}|$ as in [11, Lem. 2.5], we can find a level set $\\{\rho_{\lambda}=a\\}$ compactly contained in $X_{0}^{reg}\setminus K$, such that $\text{Area}_{\omega_{SRF,0}}(\\{\rho_{\lambda}=a\\})\leq C\lambda^{1/2}.$ Now since $\rho_{\lambda}$ is supported on the regular locus, we can regard it as a function locally on $X$, which is almost constant in the normal direction to $X_{0}$. Likewise $\\{\rho_{\lambda}=a\\}$ can be regarded as a hypersurface locally on $X$, separating $\\{H\gtrsim 1\\}$ from the most curved region on $X$. For very small $t$ depending on all previous choices, the metric $t^{-1}\tilde{\omega}_{t}$ is arbitrarily close to the product metric $\omega_{\infty}$ on the support of $\rho_{\lambda}$, whence $\text{Area}_{t^{-1}\tilde{\omega}_{t}}(\\{\rho_{\lambda}=a\\}\cap\\{d(P,\cdot)\lesssim 1\\})\leq C\lambda^{1/2}.$ (11) ###### Proof. (Prop 2.18) By taking the compact set $K$ large enough, we can ensure $d(P_{i}^{t},\\{\rho_{\lambda}=a\\})\gtrsim 1$. The number $\lambda$ can be taken very small depending on $\epsilon$. Suppose there exists a point $Q$ with $d(Q,P_{2}^{t})\lesssim\epsilon$, such that the minimal geodesic from $P_{1}^{t}$ to $Q$ does not intersect $\\{\rho_{\lambda}=a\\}\cap\\{d(P,\cdot)\leq 2d(P_{1}^{t},P_{2}^{t})+1\\}$. Then for length reasons this minimal geodesic cannot intersect $\\{\rho_{\lambda}=a\\}$, and since the support of $\rho_{\lambda}$ is compactly contained in the regular region, this geodesic must stay within $\\{H\gtrsim\delta\\}$ for $\delta$ depending only on $\rho_{\lambda}$, and we can conclude Prop. 2.18.
Suppose the contrary, namely every minimal geodesic joining $P_{1}^{t}$ to any point in $B_{t^{-1}\tilde{\omega}_{t}}(P_{2}^{t},\epsilon)$ intersects $\\{\rho_{\lambda}=a\\}\cap\\{d(P,\cdot)\lesssim 1\\}$. By a Bishop-Gromov comparison argument, this would force $\text{Area}_{t^{-1}\tilde{\omega}_{t}}(\\{\rho_{\lambda}=a\\}\cap\\{d(P,\cdot)\lesssim 1\\})\gtrsim\epsilon^{n-1}.$ This contradicts (11) by taking $\lambda$ small enough in advance. ∎ ## 3 Diameter estimates ### 3.1 Uniform exponential integrability For the moment, we step out of the setting 1.1, and consider a projective manifold $M\subset\mathbb{P}^{N}$ of degree $d$ and dimension $n$. Let $\omega_{FS}=\frac{\sqrt{-1}}{2\pi}\partial\bar{\partial}\log\sum_{0}^{N}|Z_{i}|^{2}$ be the standard Fubini-Study metric on $(M,c_{1}(\mathcal{O}(1)))$, and $\omega=\omega_{FS}+\sqrt{-1}\partial\bar{\partial}\phi$ be any smooth Kähler metric in the same class. The following theorem of independent interest may be regarded as a Riemannian counterpart of uniform Skoda integrability, discussed for instance in [5] recently. ###### Theorem 3.1. Assume the distance function $d_{\omega}$ associated to $\omega$ satisfies $\int_{M\times M}d_{\omega}(y,y^{\prime})\omega_{FS}^{n}(y)\omega_{FS}^{n}(y^{\prime})\leq A,\quad A\geq 1.$ (12) Then there are constants $C(n)$ depending only on $n$, and $C(n,N,d)$ depending only on $n,N$ and the degree $d$, such that $\int_{M\times M}\omega_{FS}^{n}(y)\omega_{FS}^{n}(y^{\prime})\exp(\frac{d_{\omega}(y,y^{\prime})}{C(n)d^{2}})\leq e^{C(n,N,d)A}.$ (13) ###### Proof. Our argument is inspired by Tian and Yau’s work on the $\alpha$-invariant [24]. As a preliminary discussion, choose an $(N-n-1)$-dimensional projective subspace $F\simeq\mathbb{CP}^{N-n-1}\subset\mathbb{CP}^{N}$, such that $F\cap M=\emptyset$. We project $M$ onto an $n$-dimensional projective subspace $F^{\perp}$, and call the projection $\pi_{F}$. (The notation $F^{\perp}$ does not refer to perpendicularity with respect to some fixed metric.)
If $F$ and $F^{\perp}$ are chosen generically, then $\pi_{F}:M\to F^{\perp}$ is a finite map with covering degree $d=\text{deg}(M)$. Let $\phi_{F}=\frac{1}{d}\sum_{y\in\pi_{F}^{-1}(x)}\phi(y),\quad x\in F^{\perp}.$ Denote by $\psi_{F}$ the relative potential between the Fubini-Study metrics on $\mathbb{CP}^{N}$ and $F^{\perp}$, _i.e._ $\psi_{F}=\frac{1}{2\pi}\log\frac{\left\lVert Z\right\rVert^{2}}{\left\lVert\pi_{F}(Z)\right\rVert^{2}},\quad Z=(Z_{0},\ldots,Z_{N}),\quad[Z_{0}:\ldots:Z_{N}]\in\mathbb{CP}^{N}\setminus F.$ Since $F\cap M=\emptyset$, we know $\psi_{F}$ is smooth on $M$. We observe that the pushforward of $\omega$ as a positive (1,1)-current is $\pi_{F*}(\omega)=d\times\omega_{F^{\perp}}+\sqrt{-1}\partial\bar{\partial}(d\phi_{F}+\sum_{y\in\pi_{F}^{-1}(x)}\psi_{F}(y)).$ This defines a positive (1,1)-current with continuous potential in $(F^{\perp},c_{1}(\mathcal{O}(d)))$, and is smooth outside the branching locus. By the monotonicity formula in the theory of Lelong numbers, applied to $F^{\perp}\simeq\mathbb{P}^{n}$, we have $\int_{\pi_{F}^{-1}(B(r))}\omega\wedge\pi_{F}^{*}\omega_{F^{\perp}}^{n-1}=\int_{B(r)}\pi_{F*}\omega\wedge\omega_{F^{\perp}}^{n-1}\leq C(n)dr^{2n-2}.$ (14) Now for any $x,x^{\prime}\in F^{\perp}$, we consider the function $\rho_{F}(x,x^{\prime})=\sum_{y\in\pi_{F}^{-1}(x),y^{\prime}\in\pi_{F}^{-1}(x^{\prime})}d_{\omega}(y,y^{\prime}).$ For fixed $x^{\prime}$, this can be regarded as a function of $x$. Notice $|\nabla_{\omega}d_{\omega}(\cdot,y^{\prime})|\leq 1$ by the definition of distance functions.
Thus at least outside the branching locus, we get a pointwise estimate $\begin{split}&|\nabla_{\omega_{F^{\perp}}}\rho_{F}(x,x^{\prime})|^{2}\leq d^{2}\sum_{y\in\pi_{F}^{-1}(x),y^{\prime}\in\pi_{F}^{-1}(x^{\prime})}|\nabla_{\pi_{F}^{*}\omega_{F^{\perp}}}d_{\omega}(y,y^{\prime})|^{2}\\\ &\leq d^{2}\sum_{y\in\pi_{F}^{-1}(x),y^{\prime}\in\pi_{F}^{-1}(x^{\prime})}\operatorname{Tr}_{\omega_{F^{\perp}}}\omega\\\ &=d^{3}\sum_{y\in\pi_{F}^{-1}(x)}\operatorname{Tr}_{\omega_{F^{\perp}}}\omega\end{split}$ Here the first inequality uses Cauchy-Schwarz, the second inequality holds because $|\nabla_{\omega}d_{\omega}|\leq 1$ with the traces taken at $y$ for fixed $y^{\prime}$, and the last equality follows since the summands do not depend on $y^{\prime}$, so summing over the $d$ choices of $y^{\prime}$ contributes a factor $d$. Since $d_{\omega}$ comes from a smooth metric on $M$, it is easy to see $\nabla_{\omega_{F^{\perp}}}\rho_{F}$ has no distributional term supported on the branching locus. Combining with (14), for any fixed $x^{\prime}$, $\int_{B(r)}|\nabla_{\omega_{F^{\perp}}}\rho_{F}|^{2}\omega_{F^{\perp}}^{n}\leq C(n)d^{4}r^{2n-2}.$ (15) By the John-Nirenberg inequality, $\int_{F^{\perp}}\exp(\frac{\rho_{F}(\cdot,x^{\prime})-\bar{\rho}_{F,x^{\prime}}}{d^{2}C(n)})\omega_{F^{\perp}}^{n}\leq C^{\prime}(n),$ (16) where $\bar{\rho}_{F,x^{\prime}}$ is the average value for fixed $x^{\prime}$: $\bar{\rho}_{F,x^{\prime}}=\int_{F^{\perp}}\rho_{F}(x,x^{\prime})\omega_{F^{\perp}}^{n}(x).$ Define $\bar{\rho}_{F}=\int_{F^{\perp}}\int_{F^{\perp}}\rho_{F}(x,x^{\prime})\omega_{F^{\perp}}^{n}(x)\omega_{F^{\perp}}^{n}(x^{\prime})$. Clearly $\bar{\rho}_{F}$ is the average of $\bar{\rho}_{F,x}$ over all $x\in F^{\perp}$.
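For orientation, the John-Nirenberg step can be sketched as follows (a schematic outline only; the constants absorb dimensional factors). The bound (15) is scaling invariant, so together with the Poincaré inequality on balls $B(r)\subset F^{\perp}$ it gives

```latex
% Schematic BMO bound behind (16): Poincare inequality on B(r) plus (15)
\frac{1}{\operatorname{Vol}(B(r))}\int_{B(r)}
  \big|\rho_F(\cdot,x') - \bar{\rho}_{F,B(r)}\big|\,\omega_{F^{\perp}}^{n}
\;\leq\; C(n)\, r \Big( r^{-2n}\int_{B(r)}
  |\nabla_{\omega_{F^{\perp}}}\rho_F|^{2}\,\omega_{F^{\perp}}^{n} \Big)^{1/2}
\;\leq\; C(n)\, d^{2},
```

where $\bar{\rho}_{F,B(r)}$ denotes the average of $\rho_{F}(\cdot,x^{\prime})$ over $B(r)$; that is, $\rho_{F}(\cdot,x^{\prime})$ has BMO norm at most $C(n)d^{2}$ uniformly in $x^{\prime}$, and the John-Nirenberg inequality converts this BMO bound into the exponential integrability (16).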
The above argument works also for the function $\bar{\rho}_{F,x}$ to give $\begin{split}\int_{F^{\perp}}\omega_{F^{\perp}}^{n}(x)\exp(\frac{\bar{\rho}_{F,x}-\bar{\rho}_{F}}{C(n)d^{2}})\leq C^{\prime}(n).\end{split}$ Now by the change of variable formula, $\begin{split}&\bar{\rho}_{F}=\int_{M\times M}d_{\omega}(y,y^{\prime})Jac(\pi_{F})(y)Jac(\pi_{F})(y^{\prime})\omega_{FS}^{n}(y)\omega_{FS}^{n}(y^{\prime})\\\ &\leq\int_{M\times M}d_{\omega}(y,y^{\prime})\omega_{FS}^{n}(y)\omega_{FS}^{n}(y^{\prime})\left\lVert Jac(\pi_{F})\right\rVert_{L^{\infty}(M)}^{2}\\\ &\leq A\left\lVert Jac(\pi_{F})\right\rVert_{L^{\infty}(M)}^{2}.\end{split}$ We remark that the $L^{\infty}$-norm of the Jacobian factor is bounded on $M$ because $M\cap F=\emptyset$, and as long as $M$ is bounded away from $F$ inside $\mathbb{CP}^{N}$, this constant stays uniform; this applies to small $C^{0}$-deformations of $M$, so by the compactness of the Hilbert scheme, such constants can be made uniform for given $n,N,d$ (possibly with changing choices of $F,F^{\perp}$).
Thus $\int_{F^{\perp}}\omega_{F^{\perp}}^{n}(x)\exp(\frac{\bar{\rho}_{F,x}}{C(n)d^{2}})\leq e^{C(n,N,d)A}.$ (17) Combined with Cauchy-Schwarz and (16), $\begin{split}&\int_{F^{\perp}\times F^{\perp}}\omega_{F^{\perp}}^{n}(x)\omega_{F^{\perp}}^{n}(x^{\prime})\exp(\frac{\rho_{F}(x,x^{\prime})}{2C(n)d^{2}})\\\ \leq&\left(\int_{F^{\perp}\times F^{\perp}}\omega_{F^{\perp}}^{n}(x)\omega_{F^{\perp}}^{n}(x^{\prime})\exp(\frac{\rho_{F}(x,x^{\prime})-\bar{\rho}_{F,x}}{C(n)d^{2}})\right)^{1/2}\\\ &\times\left(\int_{F^{\perp}\times F^{\perp}}\omega_{F^{\perp}}^{n}(x)\omega_{F^{\perp}}^{n}(x^{\prime})\exp(\frac{\bar{\rho}_{F,x}}{C(n)d^{2}})\right)^{1/2}\\\ \leq&e^{C(n,N,d)A}.\end{split}$ Using the obvious inequality $d_{\omega}(y,y^{\prime})\leq\rho_{F}(\pi_{F}(y),\pi_{F}(y^{\prime}))$, and changing the value of $C(n)$, $\int_{M\times M}\pi_{F}^{*}\omega_{F^{\perp}}^{n}(y)\pi_{F}^{*}\omega_{F^{\perp}}^{n}(y^{\prime})\exp(\frac{d_{\omega}(y,y^{\prime})}{C(n)d^{2}})\leq e^{C(n,N,d)A}.$ Notice this is already very close to our goal (13), in the sense that the exponential integrability $\int\omega_{FS}^{n}(y)\omega_{FS}^{n}(y^{\prime})\exp(\frac{d_{\omega}(y,y^{\prime})}{C(n)d^{2}})\leq e^{C(n,N,d)A}$ holds for pairs of points $(y,y^{\prime})\in M\times M$ where $\pi_{F}^{*}\omega_{F^{\perp}}^{n}\gtrsim\omega_{FS}^{n}.$ (18) Failure of this essentially means that the differential $d\pi_{F}$ almost projects the tangent space of $M$ at $y$ or $y^{\prime}$ to a lower dimensional vector space. Now we recall that the choice of $(F,F^{\perp})$ is generic. By varying this choice, we can produce $(F_{1},F_{1}^{\perp}),\ldots(F_{l},F_{l}^{\perp})$, with $l$ suitably large depending on $n,N$, such that for any pair of $(y,y^{\prime})\in M\times M$, the condition (18) holds for at least one choice of $(F_{i},F_{i}^{\perp})$. (For instance, it is enough to take $\\{(F_{i},F_{i}^{\perp})\\}$ as a suitably dense $\epsilon$-net in the product of Grassmannians.) 
Moreover, the constants are robust for small $C^{0}$-deformation of $M$ inside $\mathbb{CP}^{N}$, so by the compactness of the Hilbert scheme again, the constant on the RHS of (13) is uniform in $n,N,d$. ∎ ###### Remark. Some a priori integral bound on $d_{\omega}$ is necessary, for otherwise $M$ may be disconnected, or degenerate into a union of several components. The same reason shows it is not enough to have an $L^{1}$-bound on the distance function on a subset of $M\times M$ with say half of the Fubini-Study measure. However, we claim that it is enough to replace (12) with an $L^{1}$-bound on $U\times U$ for a large open subset $U\subset M$ with a $(1-\epsilon)$ fraction of the Fubini-Study measure, for $\epsilon$ sufficiently small. To see this, first notice that in the John-Nirenberg inequality argument above, we can replace the global average on $F^{\perp}$ by the average on a subset $V$ of $F^{\perp}$ with say half of the Fubini-Study measure. It is enough to ensure that the $L^{1}(U\times U)$-bound on $d_{\omega}$ can bound the $L^{1}(V\times V)$-norm of $\rho_{F}$. This amounts to requiring that $U$ contains $\pi_{F}^{-1}(V)$ for some $V\subset F^{\perp}$ with half measure, which would be true if $U$ almost carries the full measure. This remark is quite convenient in situations where one can a priori bound the metric in the generic region of $M$. ###### Remark. The above Theorem works for integral Kähler classes, but for irrational classes on projective manifolds it is often easy to reduce to the above case. For instance, consider $M$ a complex submanifold of fixed degree inside a projective manifold $M^{\prime}\subset\mathbb{CP}^{N}$. Take an arbitrary fixed Kähler class $\chi$ on $M^{\prime}$, and consider Kähler metrics $\omega$ on $M$ in the class $\chi|_{M}$.
We assume that $\int d_{\omega+\omega_{FS}}(y,y^{\prime})\omega_{FS}^{n}(y)\omega_{FS}^{n}(y^{\prime})\lesssim 1$ on $U\times U$ for a large enough open subset $U\subset M$, and claim that there exists a uniform bound for all $(M,\omega)$ of the shape $\int_{M\times M}\omega_{FS}^{n}(y)\omega_{FS}^{n}(y^{\prime})\exp(\frac{d_{\omega}(y,y^{\prime})}{C})\leq C^{\prime}.$ To see this, we find a large enough integer $m$, such that $mc_{1}(\mathcal{O}(1))-\chi$ is a Kähler class on $M$, and we choose a Kähler representative $\omega^{\prime}$. Now $\omega^{\prime}$ is bounded by some constant times $\omega_{FS}$. We can use the Theorem to get an exponential integrability bound for the distance function of $\omega+\omega^{\prime}$. But it is obvious that distance functions increase with the metric, hence the claim. We can now return to the main setting 1.1. ###### Corollary 3.2. In the setting 1.1, there is a uniform exponential integrability bound for all fibres $X_{y}$ and for $0<t\ll 1$: $\int_{X_{y}\times X_{y}}\exp(\frac{d_{t^{-1}\tilde{\omega}_{t}}(z,z^{\prime})}{C})\omega_{X}^{n-1}(z)\omega_{X}^{n-1}(z^{\prime})\leq C^{\prime}.$ ###### Proof. It suffices to prove this for all smooth fibres uniformly. By Cor. 2.6, the fibrewise metric has an upper bound $t^{-1}\tilde{\omega}_{t}\leq CH^{-1}\omega_{X}$. Now on any fibre $X_{y}$, given a prescribed percentage $1-\epsilon$, we can find a subset with at least $1-\epsilon$ of the $\omega_{X}^{n-1}$-measure, and demand that $H$ is bounded below on this subset. Since $\omega_{X}$ is uniformly equivalent to the Fubini-Study metric, the claim follows from the Remarks above. ∎ The following Corollary asserts that modulo exponentially small probability, any point on $X_{y}$ is within $O(1)$-distance to the regular region $\\{H\gtrsim 1\\}\cap X_{y}$. ###### Corollary 3.3.
In the same setting, there are uniform constants such that $\int_{X_{y}}\exp(\frac{d_{t^{-1}\tilde{\omega}_{t}}(z,\\{H\gtrsim 1\\}\cap X_{y})}{C})\omega_{X}^{n-1}(z)\leq C^{\prime}.$ ###### Proof. By Jensen’s inequality applied to the convex function $\exp$, using also that $\int_{X_{y}\cap\\{H\gtrsim 1\\}}\omega_{X}^{n-1}\geq\frac{1}{2}$, $\begin{split}LHS\leq&\int_{X_{y}}\exp(C^{-1}\int_{(\\{H\gtrsim 1\\}\cap X_{y})}d_{t^{-1}\tilde{\omega}_{t}}(z,z^{\prime})\omega_{X}^{n-1}(z^{\prime}))\omega_{X}^{n-1}(z)\\\ \leq&\int_{X_{y}\times(\\{H\gtrsim 1\\}\cap X_{y})}\exp(\frac{d_{t^{-1}\tilde{\omega}_{t}}(z,z^{\prime})}{C})\omega_{X}^{n-1}(z)\omega_{X}^{n-1}(z^{\prime})\\\ \leq&C^{\prime}.\end{split}$ Here $C$ changes from line to line as usual. ∎ However, what we need is the fibrewise Calabi-Yau volume measure, not some Fubini-Study type measure. ###### Proposition 3.4. In the same setting, there are uniform constants such that $\int_{X_{y}}\exp(\frac{d_{t^{-1}\tilde{\omega}_{t}}(z,\\{H\gtrsim 1\\}\cap X_{y})}{C})i^{(n-1)^{2}}\Omega_{y}\wedge\overline{\Omega}_{y}\leq C^{\prime}.$ ###### Proof. Combine item 3 of Prop. 2.1 with the above Corollary, and apply the Hölder inequality. ∎ ###### Remark. Here we are working with the distance functions on $X_{y}$ induced by the restriction of $\frac{1}{t}\tilde{\omega}_{t}$ to $X_{y}$. We can also study the distance function of $\frac{1}{t}\tilde{\omega}_{t}$ on $X$ and restrict it to $X_{y}$. This function would be smaller, because the minimal geodesics do not need to be contained in $X_{y}$. Hence the distance bound can only be better for the latter function, which is what we will use in the next section. ### 3.2 Uniform fibre diameter bound We will now bridge the exponentially small gap between Prop. 3.4 and the uniform fibre diameter bound (1). ###### Proof. (Thm. 1.4) Take any point $P$ on $X_{y}$. All distances appearing below are computed on $X$, not on fibres.
Let $r$ be the smallest number such that $\text{dist}_{t^{-1}\tilde{\omega}_{t}}(B_{t^{-1}\tilde{\omega}_{t}}(P,r),\\{H\gtrsim 1\\}\subset X)\leq r.$ This exists because the diameter of $X$ is finite (an a priori bound is known but not necessary). If $r\leq 1$, then since $t^{-1}\omega_{t}$ is uniformly equivalent to $t^{-1}\tilde{\omega}_{t}$ in $\\{H\gtrsim 1\\}$ (_cf._ (5)), we can join $P$ to $\\{H\gtrsim 1\\}\cap X_{y}$ within $O(1)$-distance, and we are done. So without loss of generality $r\geq 1$. The minimality of $r$ shows that in fact $\text{dist}_{t^{-1}\tilde{\omega}_{t}}(B_{t^{-1}\tilde{\omega}_{t}}(P,r),\\{H\gtrsim 1\\}\subset X)=r.$ (19) Our strategy is to derive two contrasting bounds on the volume of $B_{t^{-1}\tilde{\omega}_{t}}(P,r)$. By Prop. 2.3, up to a constant factor the projection $\pi:X\to Y$ decreases distance, so $\pi(B_{t^{-1}\tilde{\omega}_{t}}(P,r))\subset B_{t^{-1}\omega_{Y}}(\pi(P),Cr)\subset Y.$ By Prop. 3.4 and the ensuing Remark, $\begin{split}&\int_{\pi^{-1}(B_{t^{-1}\omega_{Y}}(\pi(P),Cr))}\exp(\frac{d_{t^{-1}\tilde{\omega}_{t}}(z,\\{H\gtrsim 1\\}\cap X_{y})}{C})i^{n^{2}}\Omega\wedge\overline{\Omega}\\\ =&\int_{B_{t^{-1}\omega_{Y}}(\pi(P),Cr)}\sqrt{-1}dy\wedge d\bar{y}\int_{X_{y}}\exp(\frac{d_{t^{-1}\tilde{\omega}_{t}}(z,\\{H\gtrsim 1\\}\cap X_{y})}{C})i^{(n-1)^{2}}\Omega_{y}\wedge\overline{\Omega}_{y}\\\ \lesssim&\int_{B_{t^{-1}\omega_{Y}}(\pi(P),Cr)}\sqrt{-1}dy\wedge d\bar{y}\\\ \lesssim&r^{2}t.\end{split}$ But from (19), the distance function in the exponent above is bounded below by $r$ on $B_{t^{-1}\tilde{\omega}_{t}}(P,r)$. 
This forces $\int_{B_{t^{-1}\tilde{\omega}_{t}}(P,r)}i^{n^{2}}\Omega\wedge\overline{\Omega}\lesssim r^{2}te^{-C^{-1}r},$ or equivalently $Vol_{t^{-1}\tilde{\omega}_{t}}(B_{t^{-1}\tilde{\omega}_{t}}(P,r))\lesssim r^{2}e^{-C^{-1}r}.$ (20) On the other hand, the ball $B_{t^{-1}\tilde{\omega}_{t}}(P,2r)$ touches the regular region $\\{H\gtrsim 1\\}$ where $\tilde{\omega}_{t}$ is uniformly equivalent to $\omega_{t}$, whence by using the freedom to travel in the regular region, $Vol_{t^{-1}\tilde{\omega}_{t}}(B_{t^{-1}\tilde{\omega}_{t}}(P,3r))\gtrsim r^{2}.$ Since $X$ is Ricci-flat, the Bishop-Gromov inequality gives $Vol_{t^{-1}\tilde{\omega}_{t}}(B_{t^{-1}\tilde{\omega}_{t}}(P,r))\gtrsim r^{2}.$ (21) Contrasting (20) and (21) gives $e^{C^{-1}r}\lesssim 1$, hence $r\lesssim 1$, and we are done. ∎ ## References * [1] Cheeger, Jeff. Degeneration of Riemannian metrics under Ricci curvature bounds. Lezioni Fermiane. [Fermi Lectures] Scuola Normale Superiore, Pisa, 2001. * [2] Demailly, Jean-Pierre; Dinew, Sławomir; Guedj, Vincent; Pham, Hoang Hiep; Kołodziej, Sławomir; Zeriahi, Ahmed. Hölder continuous solutions to Monge-Ampère equations. J. Eur. Math. Soc. (JEMS) 16 (2014), no. 4, 619–647. * [3] Demailly, Jean-Pierre; Pali, Nefton. Degenerate complex Monge-Ampère equations over compact Kähler manifolds. Internat. J. Math. 21 (2010), no. 3, 357–405. * [4] Donaldson, Simon; Sun, Song. Gromov-Hausdorff limits of Kähler manifolds and algebraic geometry. Acta Math. 213 (2014), no. 1, 63–106. * [5] Di Nezza, Eleonora; Guedj, Vincent; Guenancia, Henri. Families of singular Kähler-Einstein metrics. arXiv:2003.08178. * [6] Eyssidieux, Philippe; Guedj, Vincent; Zeriahi, Ahmed. Singular Kähler-Einstein metrics. J. Amer. Math. Soc. 22 (2009), no. 3, 607–639. * [7] Eyssidieux, Philippe; Guedj, Vincent; Zeriahi, Ahmed. A priori $L^{\infty}$-estimates for degenerate complex Monge-Ampère equations. Int. Math. Res. Not. IMRN 2008, Art. ID rnn 070, 8 pp. * [8] Guo, Bin. Kähler-Ricci flow on blowups along submanifolds. Math. Ann.
375 (2019), no. 3-4, 1147–1167. * [9] Fu, Xin; Guo, Bin; Song, Jian. Geometric estimates for complex Monge-Ampère equations. J. Reine Angew. Math. 765 (2020), 69–99. * [10] Song, Jian. Riemannian geometry of Kähler-Einstein currents. arXiv:1404.0445. * [11] Song, Jian; Tian, Gang; Zhang, Zhenlei. Collapsing behavior of Ricci-flat Kähler metrics and long time solutions of the Kähler-Ricci flow. arXiv:1904.08345. * [12] Gilbarg, David; Trudinger, Neil S. Elliptic partial differential equations of second order. Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. xiv+517 pp. ISBN: 3-540-41160-7 * [13] Gross, Mark; Tosatti, Valentino; Zhang, Yuguang. Collapsing of abelian fibered Calabi-Yau manifolds. Duke Math. J. 162 (2013), no. 3, 517–551. * [14] Gross, Mark; Tosatti, Valentino; Zhang, Yuguang. Gromov-Hausdorff collapsing of Calabi-Yau manifolds. Comm. Anal. Geom. 24 (2016), no. 1, 93–113. * [15] Gross, Mark; Wilson, P. M. H. Large complex structure limits of $K3$ surfaces. J. Differential Geom. 55 (2000), no. 3, 475–546. * [16] Hein, Hans-Joachim. Weighted Sobolev inequalities under lower Ricci curvature bounds. Proc. Amer. Math. Soc. 139 (2011), no. 8, 2943–2955. * [17] Hein, Hans-Joachim; Tosatti, Valentino. Higher-order estimates for collapsing Calabi-Yau metrics. arXiv:1803.06697. * [18] Li, Yang. A new complete Calabi-Yau metric on $\mathbb{C}^{3}$. Invent. Math. 217 (2019), no. 1, 1–34. * [19] Li, Yang. On collapsing Calabi-Yau fibrations. To appear in J. Differential Geom. * [20] Li, Yang. A gluing construction of collapsing Calabi-Yau metrics on K3 fibred 3-folds. Geom. Funct. Anal. 29 (2019), no. 4, 1002–1047. * [21] Rong, Xiaochun; Zhang, Yuguang. Continuity of extremal transitions and flops for Calabi-Yau manifolds. Appendix B by Mark Gross. J. Differential Geom. 89 (2011), no. 2, 233–269. * [22] Rubinstein, Yanir A. Smooth and singular Kähler-Einstein metrics. Geometric and spectral analysis, 45–138, Contemp.
Math., 630, Centre Rech. Math. Proc., Amer. Math. Soc., Providence, RI, 2014. * [23] Takayama, Shigeharu. On moderate degenerations of polarized Ricci-flat Kähler manifolds. J. Math. Sci. Univ. Tokyo 22 (2015), no. 1, 469–489. * [24] Tian, Gang; Yau, Shing-Tung. Kähler-Einstein metrics on complex surfaces with $C_{1}>0$. Comm. Math. Phys. 112 (1987), no. 1, 175–203. * [25] Tosatti, Valentino. Adiabatic limits of Ricci-flat Kähler metrics. J. Differential Geom. 84 (2010), no. 2, 427–453. * [26] Tosatti, Valentino; Weinkove, Ben; Yang, Xiaokui. The Kähler-Ricci flow, Ricci-flat metrics and collapsing limits. Amer. J. Math. 140 (2018), no. 3, 653–698. * [27] Tosatti, Valentino; Zhang, Yuguang. Infinite-time singularities of the Kähler-Ricci flow. Geom. Topol. 19 (2015), no. 5, 2925–2948. * [28] Yau, Shing-Tung. On the Ricci curvature of a compact Kähler manifold and the complex Monge-Ampère equation. I. Comm. Pure Appl. Math. 31 (1978), no. 3, 339–411.
# Context-Specific Likelihood Weighting

Nitesh Kumar (Department of Computer Science and Leuven.AI, KU Leuven, Belgium) and Ondřej Kuželka (Department of Computer Science, Czech Technical University in Prague, Czechia)

###### Abstract. Sampling is a popular method for approximate inference when exact inference is impractical. Generally, sampling algorithms do not exploit context-specific independence (CSI) properties of probability distributions. We introduce context-specific likelihood weighting (CS-LW), a new sampling methodology, which besides exploiting the classical conditional independence properties, also exploits CSI properties. Unlike the standard likelihood weighting, CS-LW is based on partial assignments of random variables and requires fewer samples for convergence due to the sampling variance reduction. Furthermore, the speed of generating samples increases. Our novel notion of contextual assignments theoretically justifies CS-LW. We empirically show that CS-LW is competitive with state-of-the-art algorithms for approximate inference in the presence of a significant amount of CSIs. ## 1 Introduction Exploiting independencies present in probability distributions is crucial for feasible probabilistic inference. Bayesian networks (BNs) qualitatively represent conditional independencies (CIs) over random variables, which allows inference algorithms to exploit them. In many applications, however, exact inference quickly becomes infeasible. The use of stochastic sampling for approximate inference is common in such applications. Sampling algorithms are simple yet powerful tools for inference. They can be applied to arbitrarily complex distributions, which is not true for exact inference algorithms. The design of efficient sampling algorithms for BNs has received much attention in the past. Unfortunately, BNs cannot represent certain independencies qualitatively: independencies that hold only in certain contexts (boutilier1996context).
These independencies are called context-specific independencies (CSIs). To illustrate them, consider a BN in Figure 1, where a tree structure is present in the conditional probability distribution (CPD) of a random variable $E$. If one observes the CPD carefully, they can conclude that $P(E\mid A=1,B,C)=P(E\mid A=1)$, that is, $P(E\mid A=1,B,C)$ is the same for all values of $B$ and $C$. The variable $E$ is said to be independent of the variables $\\{B,C\\}$ in the context $A=1$. These independencies may have global implications, for instance, $E\perp B,C\mid A=1$ implies $E\perp D\mid H,A=1$. Sampling algorithms generally do not exploit CSIs arising due to structures within CPDs.

Figure 1: Context-Specific Independence

One might think that structures in CPDs are accidental. It turns out, however, that such structures are common in many real-world settings. For example, consider a scenario (koller2009probabilistic) where a symptom, fever, depends on $10$ diseases. It would be impractical for medical experts to answer $1,024$ questions of the format: “What is the probability of high fever when the patient has disease $A$ and does not have disease $B$ $\dots$?” It might be the case that, if patients suffer from disease $A$, then they are certain to have a high fever, and our knowledge of their suffering from other diseases does not matter. One might argue: what if we automatically learn BNs from data? In this case, however, a huge amount of data would be needed to learn the parameters describing the tabular CPD, whose number is exponential in the number of parents. The tree-CPDs that require much fewer parameters are a more efficient way of learning BNs automatically from data (chickering1997bayesian; friedman1998bayesian; breese1998empirical). Moreover, the structures naturally arise due to if-else conditions in programs written in probabilistic programming languages (PPLs).
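To make the CSI of Figure 1 concrete, here is a minimal numerical check (a sketch; the CPD parameters are those of Example 1 in Section 2.3, and the priors on $A,B,C$ play no role because only the CPD of $E$ is inspected):

```python
# Tree-CPD for E from Example 1; a leaf is reached by reading
# only the parents that the current context requires.
def p_e1(a, b, c):
    """P(E=1 | A=a, B=b, C=c) read off the tree-CPD."""
    if a == 1:
        return 0.2                 # context A=1: B and C are never read
    if b == 1:
        return 0.9
    return 0.6 if c == 1 else 0.3

# CSI: E is independent of {B, C} in the context A=1 ...
vals_a1 = {p_e1(1, b, c) for b in (0, 1) for c in (0, 1)}
assert vals_a1 == {0.2}

# ... but not in the context A=0, so this is not a full CI.
vals_a0 = {p_e1(0, b, c) for b in (0, 1) for c in (0, 1)}
assert vals_a0 == {0.3, 0.6, 0.9}
```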
There are exact inference algorithms that exploit CSIs, and thus form state-of-the-art algorithms for exact inference (friedman2018approximate). These algorithms are based on the knowledge compilation technique (darwiche2003differential) that uses logical reasoning to naturally exploit CSIs. An obvious question, then, is: how to design a sampling algorithm that naturally exploits CSIs, along with CIs? It is widely believed that CSI properties in distributions are difficult to harness for approximate inference (friedman2018approximate). In this paper, we answer this difficult question by developing a sampling algorithm that can harness both CI and CSI properties. To realize this, we adopt likelihood weighting (LW, shachter1990simulation; fung1990weighing), a sampling algorithm for BNs, and extend it to a rule-based representation of distributions since rules are known to represent the structures qualitatively (poole1997probabilistic). We call the resulting algorithm context-specific likelihood weighting (CS-LW) and provide its open-source implementation (the code is available at https://github.com/niteshroyal/CS-LW.git). Additionally, we present a novel notion of contextual assignments that provides a theoretical framework for exploiting CSIs. Taking advantage of the better representation of structures via rules, CS-LW assigns only a subset of variables required for computing a conditional query, leading to i) faster convergence, and ii) faster generation of samples. This contrasts with many modern sampling algorithms such as collapsed sampling, which speed up convergence by sampling only a subset of variables but at the cost of much reduced speed of generating samples. We empirically demonstrate that CS-LW is competitive with the state of the art. ## 2 Background We denote random variables with uppercase letters ($A$) and their assignments with lowercase letters ($a$). Bold letters denote sets ($\mathbf{A}$) and their assignments ($\mathbf{a}$).
Parents of the variable $A$ are denoted with $\mathbf{Pa}(A)$ and their assignments with $\mathbf{pa}(A)$. In a probability distribution $P(\mathbf{E},\mathbf{X},\mathbf{Z})$ specified by a Bayesian network $\mathcal{B}$, $\mathbf{E}$ denotes a set of observed variables, $\mathbf{X}$ a set of unobserved query variables and $\mathbf{Z}$ a set of unobserved variables other than query variables. The expected value of $A$ relative to a distribution $Q$ is denoted by $\mathbb{E}_{Q}[A]$. Next, we briefly introduce LW, one of the most popular approximate inference algorithms for BNs. ### 2.1 Likelihood Weighting A typical query to a probability distribution $P(\mathbf{E},\mathbf{X},\mathbf{Z})$ is to compute $P(\mathbf{x}_{q}\mid\mathbf{e})$, that is, the probability of $\mathbf{X}$ being assigned $\mathbf{x}_{q}$ given that $\mathbf{E}$ is assigned $\mathbf{e}$. Following Bayes’s rule, we have: $\displaystyle P(\mathbf{x}_{q}\mid\mathbf{e})=\frac{P(\mathbf{x}_{q},\mathbf{e})}{P(\mathbf{e})}=\frac{\sum_{\mathbf{x},\mathbf{z}}P(\mathbf{x},\mathbf{z},\mathbf{e})f(\mathbf{x})}{\sum_{\mathbf{x},\mathbf{z}}P(\mathbf{x},\mathbf{z},\mathbf{e})}=\mu,$ where $f(\mathbf{x})$ is an indicator function $\mathds{1}\\{\mathbf{x}=\mathbf{x}_{q}\\}$, which takes value $1$ when $\mathbf{x}=\mathbf{x}_{q}$, and $0$ otherwise. We can estimate $\mu$ using LW if we specify $P$ using a Bayesian network $\mathcal{B}$. LW belongs to a family of importance sampling schemes that are based on the observation, $\displaystyle\mu=\frac{\sum_{\mathbf{x},\mathbf{z}}Q(\mathbf{x},\mathbf{z},\mathbf{e})f(\mathbf{x})(P(\mathbf{x},\mathbf{z},\mathbf{e})/Q(\mathbf{x},\mathbf{z},\mathbf{e}))}{\sum_{\mathbf{x},\mathbf{z}}Q(\mathbf{x},\mathbf{z},\mathbf{e})(P(\mathbf{x},\mathbf{z},\mathbf{e})/Q(\mathbf{x},\mathbf{z},\mathbf{e}))},$ (1) where $Q$ is a proposal distribution such that $Q>0$ whenever $P>0$. The distribution $Q$ is different from $P$ and is used to draw independent samples.
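As a minimal numerical sketch of the identity (1) (toy distributions chosen purely for illustration; they are not from the paper), the two sums can be estimated by Monte Carlo, sampling from $Q$ and reweighting:

```python
import random

random.seed(0)
P = {0: 0.7, 1: 0.3}                # target distribution
Q = {0: 0.5, 1: 0.5}                # proposal, Q > 0 wherever P > 0
f = lambda x: 1 if x == 1 else 0    # indicator, so mu = P(X=1) = 0.3

num = den = 0.0
for _ in range(200_000):
    x = 1 if random.random() < Q[1] else 0   # sample x ~ Q
    w = P[x] / Q[x]                          # likelihood ratio P/Q
    num += f(x) * w
    den += w

mu_hat = num / den                  # self-normalized estimate of mu
```

With enough samples, `mu_hat` settles near the true value $0.3$ even though every draw came from $Q$.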
Generally, $Q$ is selected such that the samples can be drawn easily. In the case of LW, to draw a sample, variables $X_{i}\in\mathbf{X}\cup\mathbf{Z}$ are assigned values drawn from $P(X_{i}\mid\mathbf{pa}(X_{i}))$ and variables in $\mathbf{E}$ are assigned their observed values. These variables are assigned in a topological ordering relative to the graph structure of $\mathcal{B}$. Thus, the proposal distribution in the case of LW can be described as follows: $\begin{aligned} Q(\mathbf{X},\mathbf{Z},\mathbf{E})=\prod_{X_{i}\in\mathbf{X}\cup\mathbf{Z}}P(X_{i}\mid\mathbf{Pa}(X_{i}))\mid_{\mathbf{E}=\mathbf{e}}\end{aligned}.$ Consequently, it is easy to compute the likelihood ratio $P(\mathbf{x},\mathbf{z},\mathbf{e})/Q(\mathbf{x},\mathbf{z},\mathbf{e})$ in Equation 1. All factors in the numerator and denominator of the fraction cancel out except for $P(x_{i}\mid\mathbf{pa}(X_{i}))$ where $x_{i}\in\mathbf{e}$. Thus, $\displaystyle\frac{P(\mathbf{X},\mathbf{Z},\mathbf{e})}{Q(\mathbf{X},\mathbf{Z},\mathbf{e})}=\prod_{x_{i}\in\mathbf{e}}P(x_{i}\mid\mathbf{Pa}(X_{i}))=\prod_{x_{i}\in\mathbf{e}}W_{x_{i}}=W_{\mathbf{e}},$ where $W_{x_{i}}$, which is also a random variable, is the weight of evidence $x_{i}$. The likelihood ratio $W_{\mathbf{e}}$ is the product of all of these weights, and thus, it is also a random variable. Given $M$ independent weighted samples from $Q$, we can estimate: $\displaystyle\hat{\mu}=\frac{\sum_{m=1}^{M}f(\mathbf{x}[m])w_{\mathbf{e}}[m]}{\sum_{m=1}^{M}w_{\mathbf{e}}[m]}.$ (2) ### 2.2 Context-Specific Independence Next, we formally define the independencies that arise due to the structures within CPDs. ###### Definition 1. Let $P$ be a probability distribution over variables $\mathbf{U}$, and let $\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D}$ be disjoint subsets of $\mathbf{U}$. 
The variables $\mathbf{A}$ and $\mathbf{B}$ are independent given $\mathbf{D}$ and context $\mathbf{c}$ if $P(\mathbf{A}\mid\mathbf{B},\mathbf{D},\mathbf{c})=P(\mathbf{A}\mid\mathbf{D},\mathbf{c})$ whenever $P(\mathbf{B},\mathbf{D},\mathbf{c})>0$. This is denoted by $\mathbf{A}\perp\mathbf{B}\mid\mathbf{D},\mathbf{c}$. If $\mathbf{D}$ is empty then $\mathbf{A}$ and $\mathbf{B}$ are independent given context $\mathbf{c}$, denoted by $\mathbf{A}\perp\mathbf{B}\mid\mathbf{c}$. Independence statements of the above form are called context-specific independencies (CSIs). When $\mathbf{A}$ is independent of $\mathbf{B}$ given all possible assignments to $\mathbf{C}$, we have $\mathbf{A}\perp\mathbf{B}\mid\mathbf{C}$. Independence statements of this form are generally referred to as conditional independencies (CIs). Thus, CSI is a more fine-grained notion than CI. The graphical structure in $\mathcal{B}$ can only represent CIs. Any CI can be verified in linear time in the size of the graph. However, verifying an arbitrary CSI has recently been shown to be coNP-hard (corander2019logical). ### 2.3 Context-Specific CPDs A natural representation of the structures in a CPD is via a tree-CPD, as illustrated in Figure 1. For all assignments to the parents of a variable $A$, a unique leaf in the tree specifies a (conditional) distribution over $A$. The path to each leaf dictates the context, i.e., the partial assignment of the parents, under which this distribution is used. It is easier to reason using tree-CPDs if we break them into finer-grained elements. A finer-grained representation of structured CPDs is via rules (poole1997probabilistic; koller2009probabilistic), where each path from the root to a leaf in each tree-CPD maps to a rule. For our purposes, we will use a simple rule-based representation language, which can be seen as a restricted fragment of Distributional Clauses (DC, gutmann2011magic). ###### Example 1.
A set of rules for the tree-CPD in Figure 1:
\begin{lstlisting}[frame=none]
e $\sim\ $ bernoulli(0.2) $\leftarrow\ $ a=1.
e $\sim\ $ bernoulli(0.9) $\leftarrow\ $ a=0$\ \wedge\ $b=1.
e $\sim\ $ bernoulli(0.6) $\leftarrow\ $ a=0$\ \wedge\ $b=0$\ \wedge\ $c=1.
e $\sim\ $ bernoulli(0.3) $\leftarrow\ $ a=0$\ \wedge\ $b=0$\ \wedge\ $c=0.
\end{lstlisting}
\end{example}
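As a quick sanity check, the CSI encoded by these rules can be verified numerically. The Python sketch below (an illustration, not part of the paper's implementation) encodes the tree-CPD of Example 1 as a function and confirms that $P(\texttt{e}=1 \mid \texttt{a}=1, \texttt{b}, \texttt{c})$ is constant over \texttt{b} and \texttt{c}, i.e., $\texttt{e} \perp \texttt{b}, \texttt{c} \mid \texttt{a}=1$, while no analogous independence holds in the context $\texttt{a}=0$.

```python
from itertools import product

def p_e1(a, b, c):
    """Tree-CPD of Example 1: P(e=1 | a, b, c), read off the rule that fires."""
    if a == 1:
        return 0.2
    if b == 1:
        return 0.9
    return 0.6 if c == 1 else 0.3

# In the context a=1 the conditional is constant over (b, c) ...
vals_a1 = {p_e1(1, b, c) for b, c in product([0, 1], repeat=2)}
# ... while in the context a=0 it still depends on (b, c).
vals_a0 = {p_e1(0, b, c) for b, c in product([0, 1], repeat=2)}
print(vals_a1, vals_a0)
```

Collapsing the four conditionals under $\texttt{a}=1$ to a single value is exactly what the first rule expresses.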
For instance, if we know that \texttt{a} is assigned \texttt{1}, then we know that \texttt{e} is distributed according to \texttt{bernoulli(0.2)}, and thus we do not need to know the assignments of \texttt{b} and \texttt{c}. We can also represent structures in CPDs of discrete-continuous distributions using this form of rules:
\begin{example}\label{example: distributional clauses continuous} \normalfont
Consider a machine that breaks down if its cooling is not working or the ambient temperature is too high. The following set of rules specifies a distribution over \texttt{cool}, \texttt{t} (temperature), and \texttt{broken}, where a CSI is implied: \texttt{broken} is independent of \texttt{cool} in the context \texttt{t>30}.
\begin{lstlisting}[frame=none]
cool $\sim\ $ bernoulli(0.1).
t $\sim\ $ gaussian(25,2.2).
broken $\sim\ $ bernoulli(0.9) $\leftarrow\ $ t>30.
broken $\sim\ $ bernoulli(0.6) $\leftarrow\ $ t=<30$\ \wedge\ $cool=0.
broken $\sim\ $ bernoulli(0.1) $\leftarrow\ $ t=<30$\ \wedge\ $cool=1.
\end{lstlisting}
\end{example}
Intuitively, the {\em head} of a rule (\texttt{h $\sim \mathcal{D} \leftarrow$ b1 $\wedge \dots \wedge$ bn}) defines a random variable \texttt{h}, distributed according to a distribution $\mathcal{D}$, whenever all atoms \texttt{bi} in the {\em body} (an assignment of some parents of the variable) of the rule are true, that is: $p(\texttt{h} \mid \texttt{b1}, \dots, \texttt{bn}) = \mathcal{D}$. Since we study tree-CPDs, we focus on {\em mutually exclusive and exhaustive} rules; that is, exactly one rule for the variable \texttt{h} can {\em fire} (have every atom in its body true) at a time.
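A minimal sketch of how such rules can be sampled top-down (Python, our own illustration): the temperature is drawn first, and in the context \texttt{t>30} the rule for \texttt{broken} fires without ever consulting \texttt{cool}, so \texttt{cool} can be sampled lazily.

```python
import random

def sample_machine(rng):
    """Sample Example 2 top-down; also return the set of variables
    whose values were needed to pick the firing rule for `broken`."""
    t = rng.gauss(25, 2.2)            # t ~ gaussian(25, 2.2)
    needed = {"t"}
    if t > 30:                        # CSI: broken independent of cool given t > 30
        p = 0.9
        cool = None                   # lazily skipped; not needed for broken
    else:
        cool = 1 if rng.random() < 0.1 else 0
        needed.add("cool")
        p = 0.1 if cool == 1 else 0.6
    broken = 1 if rng.random() < p else 0
    return {"t": t, "cool": cool, "broken": broken}, needed

rng = random.Random(42)
samples = [sample_machine(rng) for _ in range(20_000)]
frac_skipped = sum(needed == {"t"} for _, needed in samples) / len(samples)
print(f"fraction of samples where cool was never needed: {frac_skipped:.3f}")
```

Since $P(\texttt{t} > 30) \approx 0.011$ under \texttt{gaussian(25,2.2)}, the skipped fraction is small here, but the mechanism is the one CS-LW later exploits systematically.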
This is precisely what we need to represent tree-CPDs. A set of rules forms a {\em program}, which we call the DC($\mathcal{B}$) program. This program represents the same distribution as $\mathcal{B}$; the difference is that the program qualitatively represents the CSIs present in the distribution. Note, however, that instead of the four rules of Example \ref{example: distributional clauses} for the tree-CPD, we could have written eight rules for the same CPD in tabular form (Figure \ref{fig:context-specific independence}), one rule per CPD entry. In that case, the structure within the CPD would not be explicit in the rules, and we would not be able to exploit it. Thus, we should be careful to map tree-CPDs, not tabular-CPDs, to rules.
\begin{definition}
Let $\mathcal{B}$ be a Bayesian network with tree-CPDs specifying a distribution $P$. Let $\mathbb{P}$ be a set of rules such that each path from the root to a leaf of each tree-CPD corresponds to a rule in $\mathbb{P}$. Then $\mathbb{P}$ specifies the same distribution $P$, and $\mathbb{P}$ will be called the DC($\mathcal{B}$) program.
\end{definition}
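Operationally, a DC($\mathcal{B}$) program can be stored as (body, distribution) pairs, and the firing rule for a variable is selected by matching bodies against a possibly partial assignment of its parents. The Python sketch below uses our own encoding (not the paper's implementation) for the rules of Example 1; mutual exclusivity guarantees that exactly one rule matches.

```python
# Rules for e from Example 1, encoded as (body, distribution) pairs.
RULES_E = [
    ({"a": 1},                 ("bernoulli", 0.2)),
    ({"a": 0, "b": 1},         ("bernoulli", 0.9)),
    ({"a": 0, "b": 0, "c": 1}, ("bernoulli", 0.6)),
    ({"a": 0, "b": 0, "c": 0}, ("bernoulli", 0.3)),
]

def firing_rule(rules, assignment):
    """Return the distribution of the unique rule whose body holds under
    `assignment`; the rules are mutually exclusive and exhaustive."""
    fired = [dist for body, dist in rules
             if all(assignment.get(v) == val for v, val in body.items())]
    assert len(fired) == 1, "rules must be mutually exclusive and exhaustive"
    return fired[0]

# The partial assignment {a: 1} already determines e's distribution:
print(firing_rule(RULES_E, {"a": 1}))          # ('bernoulli', 0.2)
print(firing_rule(RULES_E, {"a": 0, "b": 1}))  # ('bernoulli', 0.9)
```

Note that the first call succeeds with only \texttt{a} assigned: the structure of the rule bodies is what makes the CSI exploitable at sampling time.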
\section{Exploiting Conditional Independencies}\label{Section: Bayes-ball Simulation}
In this section, we ignore the structures within CPDs and exploit only the graphical structure of BNs. The approach presented here forms the basis of our discussion of CS-LW, where we will also exploit the structure of the CPDs. In Section \ref{section: likelihood weighting}, we used all variables to estimate $\mu$. However, due to CIs, the observed states and CPDs of only some variables might be required for computing $\mu$. These variables are called {\em requisite variables}. To get a better estimate of $\mu$, it is recommended to use only these variables. The standard approach is to first apply the Bayes-ball algorithm \citep{shacter1998bayes} to the graph structure of $\mathcal{B}$ to obtain a sub-network of requisite variables, and then simulate the sub-network to obtain the weighted samples. An alternative approach, which we present next, is to use Bayes-ball to simulate the original network $\mathcal{B}$ directly while focusing on only the requisite variables to obtain the weighted samples.
CS-LW, presented later, extends this simple approach. We assume some familiarity with Bayes-ball.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{bayes-ball.PNG}
\caption{The four rules of the Bayes-ball algorithm that decide the next visits (indicated using \includegraphics[height=1.75ex]{arrow2.PNG}) based on the direction of the current visit (indicated using \includegraphics[height=1.75ex]{arrow1.PNG}) and the type of variable. To distinguish observed variables from unobserved variables, the former are shaded.}
\label{fig: bayes-ball rules}
\end{figure}
To obtain the samples, we need to traverse the graph structure of the Bayesian network $\mathcal{B}$ in a topological ordering. The Bayes-ball algorithm, which is linear in the size of the graph, can be used for this. The advantage of using Bayes-ball is that it also detects CIs; thus, it traverses only a sub-graph that depends on the query and the evidence. We can, moreover, keep assigning unobserved variables and weighting observed variables while traversing the graph. In this way, we assign/weigh only requisite variables. The Bayes-ball algorithm uses four rules to traverse the graph (when deterministic variables are absent in $\mathcal{B}$) and marks variables to avoid repeating the same action. These rules are illustrated in Figure \ref{fig: bayes-ball rules}. Next, we discuss these rules and also indicate how to assign/weigh variables, resulting in a new algorithm called {\em Bayes-ball simulation of BNs}.
Starting with all query variables scheduled to be visited as if from one of their children, we apply the following rules until no more variables can be visited:
\begin{enumerate}
\item When the visit of an unobserved variable $U \in \mathbf{X} \cup \mathbf{Z}$ is from a child, and $U$ is not marked on top, then do the following in order: i) mark $U$ on top; ii) visit all its parents; iii) sample a value $y$ from $P(U \mid \mathbf{pa}(U))$ and assign $y$ to $U$; iv) if $U$ is not marked on bottom, then mark $U$ on bottom and visit all its children.
\item When the visit of an unobserved variable is from a parent, and the variable is not marked on bottom, then mark the variable on bottom and visit all its children.
\item When the visit of an observed variable is from a child, then do nothing.
\item When the visit of an observed variable $E \in \mathbf{E}$ is from a parent, and $E$ is not marked on top, then do the following in order: i) mark $E$ on top; ii) visit all its parents; iii) let $e$ be the observed value of $E$ and let $w$ be the probability of $e$ according to $P(E \mid \mathbf{pa}(E))$; then the weight of $E$ is $w$.
\end{enumerate}
The above rules define an order for visiting parents and children so that variables are assigned/weighted in a topological ordering. Indeed, we are free to define this order, since the original rules of Bayes-ball do not prescribe any.
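The marking part of these rules can be sketched as follows (Python; the example network and the separation of marking from sampling/weighting are our own illustration, not the paper's code). Visits are pushed only when a new mark is placed, which guarantees termination.

```python
def bayes_ball_marks(parents, children, query, evidence):
    """Apply the four rules (without the sampling/weighting steps) and
    return the top marks, bottom marks, and the set of visited variables."""
    top, bottom, seen = set(), set(), set()
    stack = [(q, "child") for q in query]       # visits: (variable, came-from)
    while stack:
        v, frm = stack.pop()
        seen.add(v)
        if v not in evidence:
            if frm == "child" and v not in top:        # rule 1 (minus sampling)
                top.add(v)
                stack += [(p, "child") for p in parents.get(v, [])]
                if v not in bottom:
                    bottom.add(v)
                    stack += [(c, "parent") for c in children.get(v, [])]
            elif frm == "parent" and v not in bottom:  # rule 2
                bottom.add(v)
                stack += [(c, "parent") for c in children.get(v, [])]
        elif frm == "parent" and v not in top:         # rule 4 (rule 3: do nothing)
            top.add(v)
            stack += [(p, "child") for p in parents.get(v, [])]
    return top, bottom, seen

# Hypothetical chain with a collider child: D -> U -> X -> Y <- S,
# query {X}, evidence {D, Y}.
parents  = {"U": ["D"], "X": ["U"], "Y": ["X", "S"], "D": [], "S": []}
children = {"D": ["U"], "U": ["X"], "X": ["Y"], "S": ["Y"], "Y": []}
top, bottom, seen = bayes_ball_marks(parents, children, {"X"}, {"D", "Y"})
print(top, seen - top)
```

In this toy network, $Y$ ends up marked on top (its CPD is needed), $D$ is visited but never top-marked (only its observed state matters), and $S$ and $U$ are top-marked unobserved variables; this anticipates the classification of evidence given below.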
Unobserved parents must have been assigned during their visit. After termination, only the variables that are visited and marked on top are assigned or weighted; interestingly, some observed variables are only visited. The Bayes-ball simulation assigns only $\mathbf{X}, \mathbf{Z}_{\star}$ and weighs only $\mathbf{e}_{\star}$; it does not weigh $\mathbf{e}_{\smwhitestar}$. The marks record important information; consequently, we can show the following. The proofs of all results are in the supplementary material.
\begin{lemma}\label{theorem: bayes-ball 1}
Let $\mathbf{E}_\star \subseteq \mathbf{E}$ be marked on top, $\mathbf{E}_\smwhitestar \subseteq \mathbf{E}$ be visited but not marked on top, and $\mathbf{Z}_\star \subseteq \mathbf{Z}$ be marked on top. Then the query $\mu$ can be computed as follows:
\begin{equation}\label{equation: bb}
\begin{aligned}
\mu = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar)}
\end{aligned}
\end{equation}
\end{lemma}
Now, since $\mathbf{X}, \mathbf{Z}_\star, \mathbf{E}_\star, \mathbf{E}_\smwhitestar$ are variables of $\mathcal{B}$ and they form a sub-network $\mathcal{B}_\star$ in which $\mathbf{E}_\smwhitestar$ have no parents, we can write
\begin{equation*}
\begin{aligned}
P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) = \prod_{u_i \in \mathbf{x} \cup \mathbf{z}_\star \cup \mathbf{e}_\star} P(u_i \mid \mathbf{pa}(U_i))
\end{aligned}
\end{equation*}
such that $\forall p \in \mathbf{pa}(U_i): p \in \mathbf{x} \cup \mathbf{z}_\star \cup \mathbf{e}_\star \cup \mathbf{e}_\smwhitestar$. This means that the CPDs of some observed variables are not required for computing $\mu$. We now define these variables.
\begin{definition}
The observed variables whose observed states and CPDs might be required to compute $\mu$ will be called diagnostic evidence.
\end{definition}
\begin{definition}
The observed variables of which only the observed states might be required to compute $\mu$ will be called predictive evidence.
\end{definition}
Diagnostic evidence (denoted by $\mathbf{e}_\star$) is marked on top, while predictive evidence (denoted by $\mathbf{e}_\smwhitestar$) is visited but not marked on top. The variables $\mathbf{X}$, $\mathbf{Z}_\star$, $\mathbf{E}_\star$, $\mathbf{E}_\smwhitestar$ will be called requisite variables. Now, we can sample from a factor $Q_\star$ of $Q$ such that
\begin{equation}\label{equation: proposal distribution}
\begin{aligned}
Q_\star(\mathbf{X}, \mathbf{Z}_\star, \mathbf{E}_\star \mid \mathbf{E}_\smwhitestar) = \prod_{X_i \in \mathbf{X} \cup \mathbf{Z}_\star} P(X_i \mid \mathbf{Pa}(X_i)) \mid_{\mathbf{E}_\star=\mathbf{e}_\star}
\end{aligned}
\end{equation}
When we use Bayes-ball, precisely this factor is considered for sampling: starting by first setting $\mathbf{E}_\smwhitestar$ to their observed values, $\mathbf{X} \cup \mathbf{Z}_\star$ are assigned and $\mathbf{e}_\star$ is weighted in topological ordering.
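For a concrete illustration, consider a toy chain $D \rightarrow Z \rightarrow X \rightarrow Y$ (our own hypothetical network and parameters, all binary) with predictive evidence $D=1$, diagnostic evidence $Y=1$, and query variable $X$. Sampling from $Q_\star$ means clamping $D$, forward-sampling $Z$ and $X$, and weighting by $P(Y=1 \mid X)$; note that the CPD of $D$ is never evaluated.

```python
import random

# Toy chain D -> Z -> X -> Y (all binary); hypothetical parameters.
P_Z_given_D = {1: 0.7, 0: 0.4}       # P(Z=1 | d)
P_X_given_Z = {1: 0.9, 0: 0.2}       # P(X=1 | z)
P_Y_given_X = {1: 0.8, 0: 0.3}       # P(Y=1 | x)

def draw(rng, d=1, y=1):
    """One weighted sample from Q_star: clamp the predictive evidence d,
    sample Z and X forward, weigh the diagnostic evidence y."""
    z = 1 if rng.random() < P_Z_given_D[d] else 0
    x = 1 if rng.random() < P_X_given_Z[z] else 0
    w = P_Y_given_X[x] if y == 1 else 1 - P_Y_given_X[x]
    return x, w

rng = random.Random(0)
samples = [draw(rng) for _ in range(50_000)]
est = sum(x * w for x, w in samples) / sum(w for _, w in samples)
print(f"estimated P(X=1 | D=1, Y=1) = {est:.3f}")   # exact value is ~0.856
```

The self-normalized ratio of weighted sums is exactly the estimator discussed next.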
Given $M$ weighted samples $\mathcal{D}_{\star} = \langle \mathbf{x}[1], w_{\mathbf{e}_{\star}}[1]\rangle, \dots, \langle\mathbf{x}[M], w_{\mathbf{e}_{\star}}[M] \rangle$ from $Q_\star$, we can estimate:
\begin{equation}\label{Equation: Bayes-ball Estimate}
\begin{aligned}
\tilde{\mu} = \frac{ \sum_{m=1}^{M} f(\mathbf{x}[m]) w_{\mathbf{e_\star}}[m]}{\sum_{m=1}^{M} w_{\mathbf{e_\star}}[m]}.
\end{aligned}
\end{equation}
In this way, we sample from a lower-dimensional space; thus, by the Rao-Blackwell theorem, the new estimator $\tilde{\mu}$ has a lower variance than $\hat{\mu}$. Consequently, fewer samples are needed to achieve the same accuracy. Hence, we exploit CIs using the graphical structure of $\mathcal{B}$ for improved inference.
\section{Exploiting CSIs}
Now, we will exploit the graphical structure as well as the structures within CPDs. This section is divided into two parts. The first part presents a novel notion of contextual assignments, which forms a theoretical framework for exploiting CSIs and provides insight into the computation of $\mu$ using partial assignments of requisite variables. We will show that CSIs allow for breaking the main problem of computing $\mu$ into several sub-problems that can be solved independently. The second part presents CS-LW, which builds on this notion and exploits the structure of the rules in the program to sample variables given the states of only some of their requisite ancestors. This contrasts with our discussion so far for BNs, where knowledge of the states of all such ancestors is required.
\subsection{Notion of Contextual Assignments}
We consider the variables $\mathbf{X}$, $\mathbf{Z}_\star$, $\mathbf{E}_\star$, $\mathbf{E}_\smwhitestar$ requisite for computing the query $\mu$ to the distribution $P$, and the sub-network $\mathcal{B}_\star$ formed by these variables. We start with the following definition.
\begin{definition}
Let $\mathbf{Z}_\dagger \subseteq \mathbf{Z}_\star$ and $\mathbf{e}_\dagger \subseteq \mathbf{e}_\star$. Denote $\mathbf{Z}_\star \setminus \mathbf{Z}_\dagger$ by $\mathbf{Z}_\ddagger$, and $\mathbf{e}_\star \setminus \mathbf{e}_\dagger$ by $\mathbf{e}_\ddagger$. A partial assignment $\mathbf{x}$, $\mathbf{z}_\dagger$, $\mathbf{Z}_\ddagger$, $\mathbf{e}_\dagger$, $\mathbf{e}_\ddagger$ will be called a contextual assignment if, due to CSIs in $P$,
\begin{equation*}
\prod_{u_i \in \mathbf{x} \cup \mathbf{z}_\dagger \cup \mathbf{e}_\dagger} P(u_i \mid \mathbf{pa}(U_i)) = \prod_{u_i \in \mathbf{x} \cup \mathbf{z}_\dagger \cup \mathbf{e}_\dagger} P(u_i \mid \mathbf{ppa}(U_i))
\end{equation*}
where $\mathbf{ppa}(U_i) \subseteq \mathbf{pa}(U_i)$ is a set of partially assigned parents of $U_i$ such that $\mathbf{Z}_\ddagger \cap \mathbf{Ppa}(U_i) = \emptyset$.
\end{definition}
\begin{example}\label{example: contexual assignment}
Consider the network of Figure \ref{fig:context-specific independence}, and assume that our diagnostic evidence is $\{F=1, G=0, H=1\}$, our predictive evidence is $\{D=1\}$, and our query is $\{E=0\}$. From the CPD's structure, we have $P(E=0 \mid A=1, B, C) = P(E=0 \mid A=1)$; consequently, a contextual assignment is $\mathbf{x} = \{E=0\}$, $\mathbf{z}_\dagger = \{A=1\}$, $\mathbf{e}_\dagger= \{\}$, $\mathbf{Z}_\ddagger = \{B,C\}$, $\mathbf{e}_\ddagger=\{F=1,G=0,H=1\}$. We also have $P(E=0 \mid A=0, B=1, C) = P(E=0 \mid A=0, B=1)$; consequently, another such assignment is $\mathbf{x} = \{E=0\}$, $\mathbf{z}_\dagger = \{A=0,B=1\}$, $\mathbf{e}_\dagger= \{H=1\}$, $\mathbf{Z}_\ddagger = \{C\}$, $\mathbf{e}_\ddagger=\{F=1,G=0\}$.
\end{example}
We aim to treat the evidence $\mathbf{e}_\ddagger$ independently; thus, we define it first.
\begin{definition}\label{Definition: residual diagnostic evidence}
The diagnostic evidence $\mathbf{e}_\ddagger$ in a contextual assignment $\mathbf{x}$, $\mathbf{z}_\dagger$, $\mathbf{Z}_\ddagger$, $\mathbf{e}_\dagger$, $\mathbf{e}_\ddagger$ will be called residual evidence.
\end{definition}
However, contextual assignments do not immediately allow us to treat the residual evidence independently; we need the assignments to be safe.
\begin{definition}\label{definition: Basis}
Let $e \in \mathbf{e}_\star$ be a diagnostic evidence, and let $S$ be an unobserved ancestor of $E$ in the graph structure of $\mathcal{B}_\star$, where $\mathcal{B}_\star$ is the sub-network formed by the requisite variables. Let $S \rightarrow \cdots\ B_i\ \cdots \rightarrow E$ be a causal trail such that either no $B_i$ is observed or there is no $B_i$. Let $\mathbf{S}$ be the set of all such $S$. Then the variables $\mathbf{S}$ will be called the basis of $e$. Let $\mathbf{\dot{e}}_\star \subseteq \mathbf{e}_\star$, and let $\mathbf{\dot{S}}_{\star}$ be the set of all such $S$ for all $e \in \mathbf{\dot{e}}_\star$. Then $\mathbf{\dot{S}}_{\star}$ will be called the basis of $\mathbf{\dot{e}}_\star$.
\end{definition}
Reconsider Example \ref{example: contexual assignment}; the basis of $\{F=1\}$ is $\{B\}$.
\begin{definition}\label{Definition: safe contexual assignments}
Let $\mathbf{x}$, $\mathbf{z}_\dagger$, $\mathbf{Z}_\ddagger$, $\mathbf{e}_\dagger$, $\mathbf{e}_\ddagger$ be a contextual assignment, and let $\mathbf{S}_{\ddagger}$ be the basis of the residual evidence $\mathbf{e}_\ddagger$. If $\mathbf{S}_\ddagger \subseteq \mathbf{Z}_\ddagger$, then the contextual assignment will be called safe.
\end{definition}
\begin{example}
Reconsider Example \ref{example: contexual assignment}; the first contextual assignment is safe, but the second is not, since the basis $B$ of $\mathbf{e}_\ddagger$ is assigned in $\mathbf{z}_\dagger$. We can make the second safe as follows: $\mathbf{x} = \{E=0\}$, $\mathbf{z}_\dagger = \{A=0,B=1\}$, $\mathbf{e}_\dagger= \{F=1,H=1\}$, $\mathbf{Z}_\ddagger = \{C\}$, $\mathbf{e}_\ddagger=\{G=0\}$. See Figure \ref{fig: sub-graphs}.
\end{example}
Before showing that the residual evidence can now be treated independently, we first define a random variable called the {\em weight}.
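The basis of an evidence variable can be computed by a backward reachability pass that stops at observed variables, after which safety is a subset check. The Python sketch below uses a small hypothetical graph of our own (not the paper's figure): $s_1 \rightarrow e_1$ and $x \rightarrow s_2 \rightarrow e_2$, with $e_1, e_2$ observed and $x$ the query variable.

```python
def basis(e, parents, observed):
    """Unobserved ancestors S of e connected by a causal trail
    S -> ... -> e whose intermediate variables are all unobserved."""
    result, frontier = set(), list(parents.get(e, []))
    while frontier:
        s = frontier.pop()
        if s in observed or s in result:
            continue                  # observed variables block the trail
        result.add(s)
        frontier += parents.get(s, [])
    return result

def is_safe(Z_ddagger, e_ddagger, parents, observed):
    """A contextual assignment is safe iff the basis of its residual
    evidence is contained in the unassigned variables Z_ddagger."""
    S = set().union(*[basis(e, parents, observed) for e in e_ddagger])
    return S <= set(Z_ddagger)

parents  = {"e1": ["s1"], "e2": ["s2"], "s2": ["x"], "s1": [], "x": []}
observed = {"e1", "e2"}
print(basis("e2", parents, observed))
# e2 lies downstream of the assigned query x, so it cannot be residual:
print(is_safe({"s1", "s2"}, {"e2"}, parents, observed))   # False: x not in Z_ddagger
print(is_safe({"s1", "s2"}, {"e1"}, parents, observed))   # True
```

Intuitively, $e_1$'s weight can be computed independently because its entire basis $\{s_1\}$ is left unassigned, while $e_2$ must be weighted in the context since its basis contains the assigned query variable $x$.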
\begin{definition}
Let $e \in \mathbf{e}_\star$ be a diagnostic evidence, and let $W_e$ be a random variable defined as follows:
\begin{equation*}
W_e = P(e \mid \mathbf{Pa}(E)).
\end{equation*}
The variable $W_e$ will be called the weight of $e$. The weight of a subset $\mathbf{\dot{e}}_\star \subseteq \mathbf{e}_\star$ is defined as follows:
\begin{equation*}
W_{\mathbf{\dot{e}}_\star} = \prod_{u_i \in \mathbf{\dot{e}}_\star} P(u_i \mid \mathbf{Pa}(U_i)).
\end{equation*}
\end{definition}
Now we can show the following result:
\begin{theorem}\label{Theorem: the expecation}
Let $\mathbf{\dot{e}}_{\star} \subseteq \mathbf{e}_\star$, and let $\mathbf{\dot{S}}_{{\star}}$ be the basis of $\mathbf{\dot{e}}_{\star}$. Then the expectation of the weight $W_{\mathbf{\dot{e}}_{\star}}$ relative to the distribution $Q_\star$ as defined in Equation \ref{equation: proposal distribution} can be written as:
\begin{equation*}
\begin{aligned}
\mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_{\star}}] = \sum_{\mathbf{\dot{s}}_{\star}} \prod_{u_i \in \mathbf{\dot{e}}_{\star} \cup \mathbf{\dot{s}}_{\star}} P(u_i \mid \mathbf{pa}(U_i)).
\end{aligned}
\end{equation*}
\end{theorem}
Hence, apart from the unobserved variables $\mathbf{\dot{S}}_{{\star}}$, the computation of $\mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_{\star}}]$ does not depend on any other unobserved variables. Now we are ready to show our main result:
\begin{theorem}\label{theorem: cslw_main} Let $\Psi$ be the set of all possible safe contextual assignments under the distribution $P$. Then the query $\mu$ to $P$ can be computed as follows: \begin{equation} \begin{aligned} \frac{\sum\limits_{\psi \in \Psi} \bigl(\prod\limits_{u_i \in \mathbf{x}[\psi] \cup \mathbf{z}_\dagger[\psi] \cup \mathbf{e}_\dagger[\psi]} P(u_i \mid \mathbf{ppa}(U_i)) f(\mathbf{x}[\psi]) R[\psi]\bigr)}{\sum\limits_{\psi \in \Psi} \bigl(\prod\limits_{u_i \in \mathbf{x}[\psi] \cup \mathbf{z}_\dagger[\psi] \cup \mathbf{e}_\dagger[\psi]} P(u_i \mid \mathbf{ppa}(U_i)) R[\psi]\bigr)} \end{aligned} \end{equation} where $R[\psi]$ denotes $\mathbb{E}_{Q_\star}[W_{\mathbf{e}_\ddagger[\psi]}]$.
\end{theorem} We draw some important conclusions: i) $\mu$ can be computed exactly by summing over all safe contextual assignments; notably, the variables in $\mathbf{Z}_\dagger$ vary across assignments, and so do the variables in $\mathbf{E}_\dagger$; ii) for every $\psi \in \Psi$, the computation of $\mathbb{E}_{Q_\star}[W_{\mathbf{e}_\ddagger[\psi]}]$ does not depend on the context $\mathbf{x}[\psi], \mathbf{z}_\dagger[\psi]$, since no basis of $\mathbf{e}_\ddagger[\psi]$ is assigned in the context (by Theorem \ref{Theorem: the expecation}). Hence, $\mathbb{E}_{Q_\star}[W_{\mathbf{e}_\ddagger[\psi]}]$ can be computed independently. However, the context decides which evidence falls in the subset $\mathbf{e}_\ddagger[\psi]$; that is why we cannot cancel $\mathbb{E}_{Q_\star}[W_{\mathbf{e}_\ddagger[\psi]}]$ from the numerator and denominator. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{sub-graphs.PNG} \caption{Two safe contextual assignments to the variables of the BN in Figure \ref{fig:context-specific independence}: (a) in the context $A=1$, where the edges $C \rightarrow E$ and $B \rightarrow E$ are redundant since $E \perp B, C \mid A=1$; (b) in the context $A=0, B=1$, where the edge $C \rightarrow E$ is redundant since $E \perp C \mid A=0, B=1$. To identify such assignments, intuitively, we should apply the Bayes-ball algorithm after removing these edges. The portions of the graphs that the algorithm visits, starting with a visit to the variable $E$ from its child, are highlighted.
Notice that the variables $\mathbf{x}, \mathbf{z}_\dagger, \mathbf{e}_\dagger$ lie in the highlighted portion.} \label{fig: sub-graphs} \end{figure} As discussed next, we use the theory of contextual assignments to estimate $\mu$ by sampling. \subsection{Context-Specific Likelihood Weighting} First, we present an algorithm that simulates the \dc program $\mathbb{P}$, specifying the same distribution $P$, to generate safe contextual assignments. Then we discuss how to estimate the expectations independently before estimating $\mu$.
\subsubsection{Simulation of \dc Programs} We start by asking a question. Suppose we modify the first and the fourth rules of the Bayes-ball simulation, introduced in Section \ref{Section: Bayes-ball Simulation}, as follows: \begin{itemize} \item In the first rule, when an unobserved variable is visited from its child, everything remains the same except that only \underline{some parents} are visited, not all. \item Similarly, in the fourth rule, when an observed variable is visited from its parent, everything remains the same except that only \underline{some parents} are visited. \end{itemize} \begin{algorithm}[t] \caption{Simulation of \dc Programs} \label{algorithm: forward-backward} \begin{algorithmic} \Procedure{Simulate-DC}{$\mathbb{P}, \mathbf{x}, \mathbf{e}$} \begin{itemize} \item Visits variables from their parents and simulates a \dc program $\mathbb{P}$ based on the inputs: i) $\mathbf{x}$, a query; ii) $\mathbf{e}$, evidence. \item Output: i) \texttt{i}: $f(\mathbf{x})$, which can be either $0$ or $1$; ii) \texttt{W}: a table of weights of the diagnostic evidence ($\mathbf{e}_\dagger$). \item The procedure maintains global data structures: i) \texttt{Asg}, a table that records assignments of variables ($\mathbf{x} \cup \mathbf{z}_\dagger$); ii) \texttt{Dst}, a table that records distributions for variables; iii) \texttt{Forward}, a set of variables whose children are to be visited from their parent; iv) \texttt{Top}, a set of variables marked on top; v) \texttt{Bottom}, a set of variables marked on bottom. \end{itemize} \begin{enumerate} \item Empty \texttt{Asg}, \texttt{Dst}, \texttt{W}, \texttt{Top}, \texttt{Bottom}, \texttt{Forward}. \item If \Call{prove-marked}{$\mathbf{x}$}\texttt{==yes} then $\texttt{i}=1$ else $\texttt{i}=0$. \item While \texttt{Forward} is not empty: \begin{enumerate} \item Remove \texttt{m} from \texttt{Forward}.
\item For all \texttt{h $\sim \mathcal{D} \leftarrow $ Body} in $\mathbb{P}$ such that \texttt{m=z} in \texttt{Body}: \begin{enumerate} \item If \texttt{h} is observed in $\mathbf{e}$ and \texttt{h} not in \texttt{Top}: \begin{enumerate} \item Add \texttt{h} to \texttt{Top}. \item For all \texttt{h $\sim \mathcal{D} \leftarrow$ Body} in $\mathbb{P}$: \Call{Prove-marked}{\texttt{Body} $\wedge$ \texttt{dist(h,$\mathcal{D}$)}}. \item Let \texttt{x} be the observed value of \texttt{h} and let \texttt{p} be the probability of \texttt{x} according to the distribution \texttt{Dst[h]}. Record \texttt{W[h]=p}. \end{enumerate} \item If \texttt{h} is not observed in $\mathbf{e}$ and \texttt{h} not in \texttt{Bottom}: \begin{enumerate} \item Add \texttt{h} to \texttt{Bottom} and add \texttt{h} to \texttt{Forward}. \end{enumerate} \end{enumerate} \end{enumerate} \item Return \texttt{[i,W]}. \end{enumerate} \EndProcedure \end{algorithmic} \end{algorithm} Which variables will be assigned, and which will be weighted, under the modified simulation rules? Intuitively, only a subset of the variables in $\mathbf{Z}_\star$ should be assigned, and only a subset of the variables in $\mathbf{E}_\star$ should be weighted. But then how can we assign or weigh a variable knowing the state of only some of its parents? We can do that when structures are present in CPDs and these structures are explicitly represented using rules, as discussed in Section \ref{section: DC into}. This is because the rules define the distribution from which the variable should be sampled even when only some of its parents have been assigned. Hence, the key idea is to visit only some parents (where the structures permit); consequently, the unobserved parents that are not visited may not need to be sampled. To realize this, we modify the Bayes-ball simulation so that it works on \dc programs.
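To make the key idea concrete, here is a minimal Python sketch (a toy illustration with hypothetical numbers, not part of the \dc implementation) of sampling a variable whose CPD is represented by rules: the firing rule determines the distribution, so the parents it does not mention are never consulted and need not be sampled. The rules loosely mirror the running example, where $E \perp B, C \mid A=1$ and $E \perp C \mid A=0, B=1$.

```python
import random

# Hypothetical Bernoulli priors of the root variables.
PRIORS = {"A": 0.5, "B": 0.5, "C": 0.5}

def sample_E(assign):
    """Sample E from a rule-based CPD, touching only the parents that the
    firing rule actually consults; `assign` records the lazy assignments."""
    def value(var):
        # A parent is sampled lazily, only when a rule consults it.
        if var not in assign:
            assign[var] = int(random.random() < PRIORS[var])
        return assign[var]

    if value("A") == 1:              # rule: E ~ bernoulli(0.9) <- A=1
        p = 0.9                      # B and C are never consulted
    elif value("B") == 1:            # rule: E ~ bernoulli(0.7) <- A=0, B=1
        p = 0.7                      # C is never consulted
    else:                            # rules for the context A=0, B=0
        p = 0.2 if value("C") == 0 else 0.4
    return int(random.random() < p), assign
```

In the context $A=1$ the returned assignment contains only $A$, so the unobserved parents $B$ and $C$ are never sampled.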
This modified simulation for \dc programs is defined procedurally in Algorithm \ref{algorithm: forward-backward}. The algorithm visits variables from their parents and calls Algorithm \ref{algorithm: proof} to visit variables from their children. Like Bayes-ball, this algorithm also marks variables on top and on bottom to avoid repeating the same action. Readers familiar with theorem proving will find that Algorithm \ref{algorithm: proof} closely resembles SLD resolution \citep{kowalski1974predicate}, but it differs in that it is stochastic. An example illustrating how Algorithm \ref{algorithm: proof} visits only some requisite ancestors to sample a variable is provided in the supplementary material. The two algorithms work in an interleaved fashion to visit variables, just as Bayes-ball does, and once the simulation terminates, a weighted sample has been obtained from the \dc program.
Although only some parents of a variable $A$ may be assigned before $A$ itself is sampled or weighed, the clauses define the distribution from which $A$ should be sampled or weighed. Notice that the simulation of \dc programs differs from the Bayes-ball simulation in only one way: in the first and the fourth rule, some parents are visited, not all. This is because some clauses may fail to fire in the proof procedure.
\begin{algorithm}[t] \caption{\dc Proof Procedure} \label{algorithm: proof} \begin{algorithmic} \Procedure{prove-marked}{\texttt{Goal}} \begin{itemize} \item Visits variables from their children and thereby proves a conjunction of atoms \texttt{Goal}. Returns \texttt{yes}; otherwise fails. \item Accesses the program $\mathbb{P}$, the set \texttt{Top}, the tables \texttt{Asg}, \texttt{Dst}, and the evidence $\mathbf{e}$ as defined in Algorithm \ref{algorithm: forward-backward}. \end{itemize} \begin{enumerate} \item While \texttt{Goal} is not empty: \begin{enumerate} \item Select the first atom \texttt{b} from \texttt{Goal}. \item If \texttt{b} is of the form \texttt{a=x}: \begin{enumerate} \item If \texttt{a} is observed in $\mathbf{e}$ then let \texttt{y} be the observed value of \texttt{a}. \item Else if \texttt{a} in \texttt{Top} then \texttt{y=Asg[a]}. \item Else: \begin{enumerate} \item Add \texttt{a} to \texttt{Top}. \item For all \texttt{a $\sim \mathcal{D} \leftarrow$ Body} in $\mathbb{P}$: \Call{Prove-marked}{\texttt{Body} $\wedge$ \texttt{dist(a,$\mathcal{D}$)}}. \item Sample a value \texttt{y} from the distribution \texttt{Dst[a]} and record \texttt{Asg[a]=y}. \item If \texttt{a} not in \texttt{Bottom}: add \texttt{a} to \texttt{Bottom} and add \texttt{a} to \texttt{Forward}. \end{enumerate} \item If \texttt{x==y} then remove \texttt{b} from \texttt{Goal}; else fail. \end{enumerate} \item If \texttt{b} is of the form \texttt{dist(a,$\mathcal{D}$)}: record \texttt{Dst[a]=$\mathcal{D}$} and remove \texttt{b} from \texttt{Goal}. \end{enumerate} \item Return \texttt{yes}.
\end{enumerate} \EndProcedure \end{algorithmic} \end{algorithm} Since the simulation of $\mathbb{P}$ follows the same four rules as the Bayes-ball simulation, except that only some parents are visited in the first and the fourth rule, we can show the following: \begin{lemma}\label{theorem: partial assignments} Let $\mathbf{E_{\dagger}}$ be the set of observed variables weighed, and let $\mathbf{Z_{\dagger}}$ be the set of unobserved variables, apart from the query variables, assigned in a simulation of $\mathbb{P}$. Then \begin{equation*} \mathbf{Z_{\dagger}} \subseteq \mathbf{Z_{\star}} \text{ and } \mathbf{E_{\dagger}} \subseteq \mathbf{E_{\star}}. \end{equation*} \end{lemma} The query variables $\mathbf{X}$ are always assigned, since the simulation starts by visiting these variables as if the visits came from one of their children. To simplify notation, from now on we use $\mathbf{Z}_\dagger$ to denote the subset of variables in $\mathbf{Z}_\star$ that are assigned, $\mathbf{E}_\dagger$ to denote the subset of variables in $\mathbf{E}_\star$ that are weighed in the simulation of $\mathbb{P}$, $\mathbf{Z}_\ddagger$ to denote $\mathbf{Z}_\star \setminus \mathbf{Z}_\dagger$, and $\mathbf{E}_\ddagger$ to denote $\mathbf{E}_\star \setminus \mathbf{E}_\dagger$. We now show that the simulation performs safe contextual assignments to the requisite variables. \begin{theorem}\label{theorem: dc-partial-justification} The partial assignment $\mathbf{x}$, $\mathbf{z}_\dagger$, $\mathbf{Z}_\ddagger$, $\mathbf{e}_\dagger$, $\mathbf{e}_\ddagger$ generated in a simulation of $\mathbb{P}$ is a safe contextual assignment.
\end{theorem} The proof of Theorem \ref{theorem: dc-partial-justification} relies on the following lemma. \begin{lemma}\label{theorem: DC CSI} Let $\mathbb{P}$ be a \dc program specifying a distribution $P$. Let $\mathbf{B}, \mathbf{C}$ be disjoint sets of parents of a variable $A$. If, in the simulation of $\mathbb{P}$, $A$ is sampled or weighed given an assignment $\mathbf{c}$ and without assigning $\mathbf{B}$, then \begin{equation*} P(A \mid \mathbf{c}, \mathbf{B}) = P(A \mid \mathbf{c}). \end{equation*} \end{lemma} It is worth emphasizing that the variables $\mathbf{Z}_\dagger$ will vary across simulations, and so will $\mathbf{E}_\dagger$. Based on this observation, we now present our sampling approach: just like the standard LW, we sample from a factor $Q_\dagger$ of the proposal distribution $Q_\star$, given by \begin{equation*} \begin{aligned} Q_\dagger = \prod_{u_i \in \mathbf{x} \cup \mathbf{z}_\dagger \cup \mathbf{e}_\dagger} P(u_i \mid \mathbf{ppa}(U_i)) \end{aligned} \end{equation*} where $P(u_i \mid \mathbf{ppa}(U_i)) = 1$ if $u_i \in \mathbf{e}_\dagger$. It is precisely this factor that Algorithm \ref{algorithm: forward-backward} uses for the simulation of $\mathbb{P}$.
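As a toy sketch (hypothetical CPDs, not the actual implementation), one draw from such a factor on a small chain assigns the unobserved variables in topological order and records the weight contributed by the clamped evidence:

```python
import random

# Toy chain A -> B -> D, with D observed (all probabilities hypothetical).
P_A = 0.6                       # P(A=1)
P_B = {0: 0.2, 1: 0.8}          # P(B=1 | A)
P_D = {0: 0.3, 1: 0.9}          # P(D=1 | B)
d_obs = 1                       # diagnostic evidence: D = 1

def draw_partially_weighted_sample():
    """One draw from the factor Q: unobserved variables are assigned in
    topological order from their CPDs, while the evidence is clamped (its
    Q-factor is 1) and contributes the weight P(d_obs | parents)."""
    a = int(random.random() < P_A)
    b = int(random.random() < P_B[a])
    w = P_D[b] if d_obs == 1 else 1.0 - P_D[b]
    return {"A": a, "B": b}, {"D": w}
```

In CS-LW, only the requisite subset $\mathbf{x} \cup \mathbf{z}_\dagger$ would be assigned and only $\mathbf{e}_\dagger$ weighed; the sketch shows the generic assign-then-weigh step it shares with standard LW.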
The algorithm starts by setting $\mathbf{E}_\dagger$ and $\mathbf{E}_\ddagger$ to their observed values; it then assigns $\mathbf{X} \cup \mathbf{Z}_\dagger$ and weighs $\mathbf{e}_\dagger$ in topological order. In this process, it records {\em partial weights} $\mathbf{w}_{\mathbf{e}_\dagger}$ such that $\prod_{x_i \in \mathbf{e}_\dagger} w_{x_i} = w_{\mathbf{e}_\dagger}$, where $w_{x_i} \in \mathbf{w}_{\mathbf{e}_\dagger}$. Given $M$ partially weighted samples $\mathcal{D}_{\dagger} = \langle \mathbf{x}[1], \mathbf{w}_{\mathbf{e}_{\dagger}[1]}\rangle, \dots, \langle\mathbf{x}[M], \mathbf{w}_{\mathbf{e}_{\dagger}[M]} \rangle$ from $Q_\dagger$, we can estimate $\mu$ using Theorem \ref{theorem: cslw_main} as follows: \begin{equation}\label{equation: forward-backward sampling} \begin{aligned} \overline{\mu} = \frac{ \sum_{m=1}^{M} f(\mathbf{x}[m])\times w_{\mathbf{e_\dagger}[m]}\times\mathbb{E}_{Q_\star}[W_{\mathbf{e}_\ddagger[m]}] }{\sum_{m=1}^{M} w_{\mathbf{e_\dagger}[m]}\times \mathbb{E}_{Q_\star}[W_{\mathbf{e}_\ddagger[m]}]} \end{aligned} \end{equation} However, we cannot evaluate this estimator yet, since we do not have the expectations $\mathbb{E}_{Q_\star}[W_{\mathbf{e}_\ddagger[m]}]$. Fortunately, there are ways to estimate them from the partial weights in $\mathcal{D}_\dagger$. We discuss one such way next. \subsubsection{Estimating the Expected Weight of Residuals} We start with the notion of the sampling mean. Let $\mathcal{W}_\star = \langle w_{e_1}[1], \dots, w_{e_m}[1]\rangle, \dots, \langle w_{e_1}[n], \dots, w_{e_m}[n]\rangle$ be a data set of $n$ observations of the weights of $m$ pieces of diagnostic evidence drawn using the standard LW. How can we estimate the expectation $\mathbb{E}_{Q_\star}[W_{e_i}]$ from $\mathcal{W}_\star$?
The standard approach is to use the sampling mean: $\overline{W}_{e_i} = \frac{1}{n}\sum_{r=1}^{n}w_{e_i}[r]$. In general, $\mathbb{E}_{Q_\star}[W_{e_i}\dots W_{e_j}]$ can be estimated using the estimator $\overline{W_{e_i}\dots W}_{e_j} = \frac{1}{n}\sum_{r=1}^{n}w_{e_i}[r]\dots w_{e_j}[r]$. Since LW draws are independently and identically distributed (i.i.d.), it is easy to show that this estimator is unbiased. However, some entries, namely the weights of residual evidence, are missing in the data set $\mathcal{W}_\dagger$ obtained using CS-LW. The trick is to fill in the missing entries by drawing samples of the missing weights once we obtain $\mathcal{W}_\dagger$. More precisely, the missing weights $\langle W_{e_i}, \dots, W_{e_j}\rangle$ in the $r^{\text{th}}$ row of $\mathcal{W}_\dagger$ are filled in with a joint state $\langle w_{e_i}[r], \dots, w_{e_j}[r] \rangle$ of the weights. To draw the joint state, we again use Algorithm \ref{algorithm: forward-backward} and visit the observed variables $\langle E_i, \dots, E_j\rangle$ from their parents. Once all missing entries are filled in, we can estimate $\mathbb{E}_{Q_\star}[W_{e_i}\dots W_{e_j}]$ using the estimator $\overline{W_{e_i}\dots W}_{e_j}$ as just discussed. Once we estimate all the required expectations, it is straightforward to estimate $\mu$ using Equation \ref{equation: forward-backward sampling}.
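The fill-in trick can be sketched as follows (a simplified illustration: the hypothetical `redraw` callback stands in for re-running Algorithm \ref{algorithm: forward-backward} to draw a joint state of the missing weights):

```python
def fill_and_estimate(rows, redraw):
    """Estimate E[W_{e_i} ... W_{e_j}]: rows with missing entries (None) are
    completed with one fresh joint draw each, then the row products are
    averaged, as in the estimator discussed above."""
    total = 0.0
    for row in rows:
        missing = [k for k, v in row.items() if v is None]
        if missing:
            row = {**row, **redraw(missing)}  # one joint draw per row
        prod = 1.0
        for v in row.values():
            prod *= v
        total += prod
    return total / len(rows)

# Toy data set of partial weights for two pieces of evidence.
rows = [{"e1": 0.5, "e2": None}, {"e1": None, "e2": 0.4}, {"e1": 0.8, "e2": 0.2}]
```

Here `redraw` receives the names of the missing weights for one row and returns a dictionary with one sampled value per name.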
However, there is no guarantee that estimating $\mu$ from such expectations is correct when entries are missing arbitrarily; in the case of CS-LW, the safe contextual assignments ensure that the expectations are correct to use. Once we estimate the expectations as just discussed, the computation of $\overline{\mu}$ is straightforward.
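Once the residual expectations are in hand, the combination step of Equation \ref{equation: forward-backward sampling} is simple arithmetic; a minimal sketch with synthetic numbers:

```python
def estimate_mu(samples):
    """CS-LW combination step: each sample is (f_x, w_dagger, r), where f_x
    is f(x[m]), w_dagger is the partial weight w_{e_dagger}[m], and r is the
    estimated residual expectation E[W_{e_ddagger}[m]]."""
    num = sum(f_x * w * r for f_x, w, r in samples)
    den = sum(w * r for _, w, r in samples)
    return num / den

mu_bar = estimate_mu([(1, 0.9, 0.5), (0, 0.3, 0.5), (1, 0.6, 0.2)])
```

When all residual expectations coincide (for instance, when $\mathbf{e}_\ddagger$ is empty and every $r = 1$), the estimator reduces to the standard LW ratio.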
Although these variables are dependent, we have a nice recursive equation to estimate the expectation of the product of such variables. Suppose $\\{A_1, \dots, A_n\\}$ is a set of dependent random variables; the expectation is given by, %\begin{equation}\label{equation: expectation formula} % \begin{aligned} % \mathbb{E}[A_1\times\dots\times A_n] = cov[A_1, A_2\times \dots \times A_n] \\\\+ \mathbb{E}[A_1] \mathbb{E}[A_2\times\dots\times A_n] % \end{aligned} %\end{equation} %Next, we illustrate two approaches for estimating the expectation with an example. \begin{comment} \begin{example} Suppose we observe the following % $5$ samples of partial weights, %variables $A,B,C \textrm{ and }D$ are observed, and we obtain $5$ samples with the following partial weights, \vskip -0.1in \begin{table}[H] \begin{center} \scalebox{1}{ \begin{tabular}{|c|c|c|c|c|} \hline Sample & $W_{e_1}$ & $W_{e_2}$ & $W_{e_3}$ & $W_{e_4}$ \\\\\hline \\#1 & $-$ & $0.6$ & $0.5$ & $0.4$ \\\ \hline \\#2 & $0.8$ & $0.3$ & $-$ & $-$ \\\ \hline \\#3 & $0.4$ & $0.1$ & $0.7$ & $-$ \\\ \hline \\#4 & $0.1$ & $-$ & $-$ & $-$ \\\ \hline \\#5 & $0.3$ & $-$ & $0.2$ & $0.9$ \\\ \hline \end{tabular}} %\caption{.} \label{table: example partial weights} \end{center} \end{table} \vskip -0.2in \noindent We need to estimate the expected value of the product of missing weights in each sample. %For that let us first look how expected value $\mathbb{E}[U]$ of a single random variable $U$ and the covariance $cov(U,V)$ for two random variables can be computed from samples. Given a set $\\{u_1, \dots, u_n\\}$ of $n$ observations of a random variable $U$ from a distribution, $\mathbb{E}(U)$ can be approximated as: $\mathbb{E}(U) = \frac{1}{n} \sum_{i=1}^{n} u_i$. And $cov[U,V]$ for two variables, each with sample size $n$, is given by: $cov[U,V] = \sum_{i=1}^{m}\frac{(u_i - \mathbb{E}[U])(v_i - \mathbb{E}[V])}{m}$. 
For each sample, now we discuss the estimation process step-by-step: \begin{enumerate} \item For the first sample, as just discussed: $\overline{W}_{e_1} = (0.8+0.4+0.1+0.3)/4 = 0.4$. \item For the second, we use Equation (\ref{equation: expectation formula}) to get: $\mathbb{E}_{Q_\star}[W_{e_3}W_{e_4}]=$ $ cov[W_{e_3},W_{e_4}] + $ $ \mathbb{E}_{Q_\star}[W_{e_3}]\mathbb{E}[W_{e_4}]$. After estimating, $\overline{W}_{e_3} = 0.47$ and $\overline{W}_{e_4} = 0.65$, we estimate: %$cov[W_{e_3},W_{e_4}]$ as discussed: $\overline{\sigma}_{W_{e_3},W_{e_4}} = \frac{(0.5-0.47)(0.4-0.65) + (0.2-0.47)(0.9-0.65)}{2} = -0.0375$. Hence we get: $\overline{W_{e_3}W}_{e_4} = 0.268$. \item It is straightforward for the third: $\overline{W}_{e_4} = 0.65$. \item The fourth is a bit tricky. Using Equation (\ref{equation: expectation formula}), we get: $\mathbb{E}[Y_bY_cY_d] = cov[Y_b,Y_cY_d] + \mathbb{E}[Y_b]\mathbb{E}[Y_cY_d]$. We have already estimated $\mathbb{E}[Y_cY_d]$ in the second step. We just need to estimate two terms. The estimation of $\mathbb{E}[Y_b]$ is easy, it is $0.33$. But how to estimate $cov[Y_b,Y_cY_d]$? We only observe values for all three variables $Y_b,Y_c, \textrm{ and } Y_d$ in Sample \\#1. It is true, but we can also use Sample \\#3 like this: $cov[Y_b,Y_cY_d] = \frac{(0.6-0.33)(0.2-0.268) + (0.1-0.33)(0.7\times\mathbb{E}[Y_d] - 0.268)}{2} = -0.031$. Thus, we get: $\mathbb{E}[Y_bY_cY_d] = 0.057$. \item We already estimated $\mathbb{E}[W_{e_2}]$ in the previous step. \end{enumerate} \end{example} The above estimation problem breaks into several sub-problems. Thus, to solve it, the discussed process can be implemented efficiently using dynamic programming (see Appendix). Once we estimate the expectations, the computation of $\overline{\mu}$ is straightforward. The approach $\RN{2}$ as discussed in step $4$ above is not ad hoc. We can show that it is an unbiased estimator of covariance. 
\begin{lemma}\label{theorem: Covariance approach 1 } Let $A,B,C$ be three random variables, and let $(A_1, B_1, C_1), \dots, (A_k, B_k, C_k)$ be $k$ i.i.d. draws from a distribution. Let $(A_1’, B_1’), \dots, (A_{n-k}’, B_{n-k}’)$ be $n-k$ i.i.d. draws from the same distribution such that draws of variable $C$ are erased. Suppose the covariance $\mathrm{cov}[A,BC]$ is estimated as follows, \begin{equation*} \begin{aligned} \overline{\sigma} = \frac{1}{n-1}\bigl[\sum_{i=1}^{k}(A_i - \overline{A})(B_iC_i-\overline{BC}) + \sum_{i=1}^{n-k}(A_i’ - \overline{A})\\\\(B_i’\overline{C} -\overline{BC})\bigr] \end{aligned} \end{equation*} where $\overline{BC}$ is an unbiased estimator of $\mathbb{E}[BC]$, $\overline{C} = 1/k\sum_{i=1}^{k}C_i$ and $\overline{A} = 1/n(\sum_{i=1}^{k}A_i + \sum_{i=1}^{n-k}A_i’)$. Then $\overline{\sigma}$ is an unbiased estimator of $\mathrm{cov}[A,BC]$. \end{lemma} The above estimation problem breaks into several sub-problems. Thus, to solve it, the discussed process can be implemented efficiently using dynamic programming (see Appendix). Once we estimate the expectations, the computation of $\overline{\mu}$ is straightforward. There can be several different ways to estimate the expectations, for instance, we could have estimated $\mathbb{E}[Y_bY_cY_d]$ like this: $cov[Y_bY_c,Y_d] + \mathbb{E}[Y_bY_c]\mathbb{E}[Y_d]$. These estimates will converge to the same expected value when the sample size goes to infinity. \end{comment} At this point, we can gain some insight into the role of CSIs in sampling. They allow us %to draw samples from a much lower dimensional space and to estimate the expectation $\mathbb{E}_{Q_\star}[W_{\mathbf{e}_\ddagger}]$ separately. We estimate it from all samples obtained at the end of the sampling process, thereby reducing the contribution $W_{\mathbf{e}_\ddagger}$ makes to the variance of our main estimator $\overline{\mu}$. 
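One way to see the variance reduction described above: if a sample weight factorises as $W = U\,V$ with $U$ and $V$ independent, then estimating $\mathbb{E}[U]$ and $\mathbb{E}[V]$ separately and multiplying the two sample means is still unbiased, but varies less than averaging the products directly. A toy numeric illustration (our own stylised example with uniform weights, not the paper's code):

```python
# Toy illustration: product of means vs. mean of products for
# independent factors U, V ~ Uniform(0, 1), so E[U*V] = 0.25.
import random

random.seed(0)

def trial(n):
    """One experiment: two estimators of E[U]*E[V] from n paired draws."""
    us = [random.random() for _ in range(n)]
    vs = [random.random() for _ in range(n)]
    joint = sum(u * v for u, v in zip(us, vs)) / n   # mean of products
    separate = (sum(us) / n) * (sum(vs) / n)         # product of means
    return joint, separate

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

reps = [trial(50) for _ in range(5000)]
var_joint = variance([j for j, _ in reps])
var_separate = variance([s for _, s in reps])
print(var_separate < var_joint)  # the factored estimator varies less
```

CS-LW pushes this further: the residual factor $W_{\mathbf{e}_\ddagger}$ is estimated from all samples collected during the run, not just the paired ones.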
The residual evidence $\mathbf{e}_\ddagger$ would be large if many CSIs are present in the distribution; consequently, we would obtain a much better estimate of $\mu$ using significantly fewer samples. Moreover, drawing a single sample would be faster since only a subset of requisite variables is visited. Hence, in addition to CIs, we exploit CSIs and improve LW further. We observe all of these anticipated improvements in our experiments. \section{Empirical Evaluation} We answer three questions empirically: \textbf{Q1}: How does the sampling speed of CS-LW compare with that of standard LW in the presence of CSIs? \textbf{Q2}: How does the accuracy of the estimate obtained using CS-LW compare with that of standard LW? \textbf{Q3}: How does CS-LW compare to state-of-the-art approximate inference algorithms? To answer the first two questions, we need BNs with structures present within CPDs. Such BNs, however, are not readily available, since structure within CPDs is generally overlooked when designing inference algorithms. We identified two BNs from the Bayesian network repository \citep{bnrepository}, which have many structures within CPDs: i) \textit{Alarm}, a monitoring system for patients with 37 variables; ii) \textit{Andes}, an intelligent tutoring system with 223 variables. %\begin{itemize} % \item \textit{Alarm}: A monitoring system for patients with 37 random variables. % \item \textit{Andes}: An intelligent tutoring system with 223 random variables.
%\end{itemize} \begin{table}[t] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{|c|c|cc|cc|} \toprule & & \multicolumn{2}{|c|}{\textbf{LW}} & \multicolumn{2}{|c|}{\textbf{CS-LW}} \\\ \midrule \textbf{BN} & \textbf{N} & \textbf{MAE $\pm$ Std.} & \textbf{Time} & \textbf{MAE $\pm$ Std.} & \textbf{Time} \\\ \midrule \midrule \multirow{4}{*}{\textit{Alarm}} & 100 & 0.2105 $\pm$ 0.1372 & 0.09 & 0.0721 $\pm$ 0.0983 & 0.06\\\ \cline{2-6} & 1000 & 0.0766 $\pm$ 0.0608 & 0.86 & 0.0240 $\pm$ 0.0182 & 0.53 \\\ \cline{2-6} & 10000 & 0.0282 $\pm$ 0.0181 & 8.64 & 0.0091 $\pm$ 0.0069 & 5.53 \\\ \cline{2-6} & 100000 & 0.0086 $\pm$ 0.0067 & 89.93 & 0.0034 $\pm$ 0.0027 & 57.64\\\ \midrule \midrule \multirow{4}{*}{\textit{Andes}} & 100 & 0.0821 $\pm$ 0.0477 & 1.07 & 0.0619 $\pm$ 0.0453 & 0.22 \\\ \cline{2-6} & 1000 & 0.0257 $\pm$ 0.0184 & 10.62 & 0.0163 $\pm$ 0.0139 & 2.20 \\\ \cline{2-6} & 10000 & 0.0087 $\pm$ 0.0069 & 106.55 & 0.0058 $\pm$ 0.0042 & 22.62\\\ \cline{2-6} & 100000 & 0.0025 $\pm$ 0.0015 & 1074.93 & 0.0020 $\pm$ 0.0016 & 233.72\\\ \bottomrule \end{tabular} \end{adjustbox} \caption{The mean absolute error (MAE), the standard deviation of the error (Std.), and the average elapsed time (in seconds) versus the number of samples (N). For each case, LW and CS-LW were executed 30 times.} \label{Table: Result1} \end{table} To detect structures, we used the standard decision tree learning algorithm, overfitting it on the tabular-CPDs to obtain tree-CPDs. %and overfitted it on tabular-CPDs to get tree-CPDs. We then converted the tree-CPDs into distributional clauses by representing each root-to-leaf path in a tree with a clause. Let us denote the program with these rules by $\mathbb{P}_{tree}$. CS-LW is implemented in the Prolog programming language; thus, to compare the sampling speed of LW with that of CS-LW, we need a similar implementation of LW.
Fortunately, we can use the same implementation of CS-LW for obtaining LW estimates. Recall that if we do not make structures explicit in rules and represent each entry in tabular-CPDs with rules, then CS-LW boils down to LW. Let $\mathbb{P}_{table}$ denote the program in which each rule corresponds to an entry in tabular-CPDs. Table \ref{Table: Result1} shows the comparison of estimates obtained using $\mathbb{P}_{tree}$ (CS-LW) and $\mathbb{P}_{table}$ (LW). Note that CS-LW automatically discards non-requisite variables for sampling. So, we chose the query and evidence such that almost all variables in BNs were requisite for the conditional query. %The configuration of the conditional query used for obtaining the result is present in the Appendix \ref{appendix: additional exp details}. \begin{table*}[t] \centering \begin{adjustbox}{width=2.0\columnwidth,center} \begin{tabular}{|c|cc|cc|cc|cc|cc|} \toprule & \multicolumn{2}{|c|}{\textbf{LW}} & \multicolumn{2}{|c|}{\textbf{CC-10,000}} & \multicolumn{2}{|c|}{\textbf{CC-100,000}} & \multicolumn{2}{|c|}{\textbf{CC-1,000,000}} & \multicolumn{2}{|c|}{\textbf{CS-LW}}\\\ \hline\hline \textbf{BN} & \textbf{N} & \textbf{MAE $\pm$ Std.} & \textbf{N} & \textbf{MAE $\pm$ Std.} & \textbf{N} & \textbf{MAE $\pm$ Std.} & \textbf{N} & \textbf{MAE $\pm$ Std.} & \textbf{N} & \textbf{MAE $\pm$ Std.}\\\ \midrule \textit{Alarm} & 131606 & 0.0073 $\pm$ 0.0054 & 3265 & 0.0022 $\pm$ 0.0018 & NA & 0 $\pm$ 0 (exact) & NA & 0 $\pm$ 0 (exact) & 178620 & 0.0019 $\pm$ 0.0016 \\\ \hline \textit{Win95pts} & 51956 & 0.0022 $\pm$ 0.0016 & 635 & 0.0149 $\pm$ 0.0163 & NA & 0 $\pm$ 0 (exact) & NA & 0 $\pm$ 0 (exact) & 67855 & 0.0017 $\pm$ 0.0011\\\ \hline \textit{Andes} & 13113 & 0.0068 $\pm$ 0.0062 & 116 & 0.0814 $\pm$ 0.0915 & 17 & 0.0060 $\pm$ 0.0094 & NA & 0 $\pm$ 0 (exact) & 56672 & 0.0022 $\pm$ 0.0021\\\ \hline \textit{Munin1} & 15814 & 0.0036 $\pm$ 0.0026 & \multicolumn{2}{|c|}{out of memory} & \multicolumn{2}{|c|}{out of memory} &
\multicolumn{2}{|c|}{out of memory} & 17985 & 0.0035 $\pm$ 0.0025\\\ \bottomrule \end{tabular} \end{adjustbox} \caption{The mean absolute error (MAE), the standard deviation of the error (Std.), and the average number of samples (N) drawn when algorithms were run 50 times for 2 minutes (approx.) each. %on the four Bayesian networks (BN). The algorithms are: LW, CS-LW, and CC with circuit size 10,000, with size 100,000, and with size 1,000,000.} \label{Table: Result2} \end{table*} As expected, we observe that CS-LW requires less time to generate the same number of samples. This is because it visits only the subset of requisite variables in each simulation. \textit{Andes} has more structures than \textit{Alarm}; thus, the sampling speed of CS-LW is much faster than that of LW on \textit{Andes}. Additionally, we observe that the estimate obtained by CS-LW with the same number of samples is much better than that of LW. This is significant. It is worth emphasizing that approaches based on collapsed sampling obtain better estimates than LW with the same number of samples, but the speed of drawing samples then decreases significantly. %It is worth emphasizing that there are approaches based on collapsed sampling \cite{koller2009probabilistic} and cutset sampling \cite{cutsetLW}, to obtain better estimates than LW with the same number of samples. However, the speed of drawing samples significantly decreases in these approaches. In CS-LW, the speed increases when structures are present. This is possible because CS-LW exploits CSIs. %is designed to exploit CSIs while those approaches are not designed for that. Hence, we get the answer to the first two questions: when many structures are present, and when they are made explicit in rules, CS-LW will draw samples faster than LW. Additionally, estimates will be better with the same number of samples.
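The tree-CPD-to-rule conversion used to build $\mathbb{P}_{tree}$ (one clause per root-to-leaf path) can be sketched as follows. This is an illustration with a made-up clause syntax and a made-up nested-tuple tree encoding, not the actual distributional-clause format:

```python
# Sketch: turn a tree-CPD into one rule per root-to-leaf path.
# A leaf is a dict {value: probability}; an internal node is a pair
# ("split_variable", {branch_value: subtree}).  Syntax is hypothetical.

def tree_to_clauses(var, tree, context=()):
    """Emit one clause per root-to-leaf path of a tree-CPD for `var`."""
    if isinstance(tree, dict):                       # leaf: a distribution
        dist = "; ".join(f"{p}::{var}={v}" for v, p in tree.items())
        body = ", ".join(f"{u}={w}" for u, w in context) or "true"
        return [f"{dist} :- {body}."]
    split_var, branches = tree                       # internal node
    clauses = []
    for value, subtree in branches.items():
        clauses.extend(
            tree_to_clauses(var, subtree, context + ((split_var, value),)))
    return clauses

# A CPD for Y that ignores B whenever A=0 (a context-specific independence):
cpd = ("A", {0: {1: 0.2, 0: 0.8},
             1: ("B", {0: {1: 0.7, 0: 0.3},
                       1: {1: 0.9, 0: 0.1}})})
for clause in tree_to_clauses("Y", cpd):
    print(clause)
```

Note how the context-specific independence shows up directly in the rules: the $A{=}0$ path yields a single clause whose body never mentions $B$, whereas a tabular-CPD encoding would enumerate every $(A, B)$ combination.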
To answer our final question, we compared CS-LW with collapsed compilation \citep[CC,][]{friedman2018approximate}, which has recently been shown to outperform several sampling algorithms. %on several benchmarks. It combines a state-of-the-art exact inference algorithm that exploits CSIs with importance sampling, which scales the exact inference. %the state-of-the-art exact inference algorithm to exploit CSIs with importance sampling to scale the exact inference, and has been shown to outperform several sampling algorithms on several benchmarks. The load between exact inference and sampling can be regulated using the size of the arithmetic circuit: the larger the circuit, the larger the load on exact inference and the smaller the load on sampling, i.e., fewer variables are considered for sampling. For this experiment, we consider two additional BNs: i) \textit{Win95pts}, a system for troubleshooting printing in Windows 95 with 76 variables; ii) \textit{Munin1}, an expert EMG assistant with 186 variables. %\begin{itemize} % \item \textit{Win95pts}: A system for printing troubleshooting in Windows 95 with 76 variables. % \item \textit{Munin1}: An expert EMG assistant with 186 variables. %\end{itemize} However, not many structures are present in the CPDs of these two BNs, so not much difference between the performance of LW and CS-LW is expected. The comparison is shown in Table \ref{Table: Result2}. We can observe the following: i) as expected from collapsed sampling, far fewer samples are drawn in the same time; ii) the right choice of circuit size is crucial, e.g., with circuit size 10,000, CC performs poorly compared to LW on some BNs but better when the size is increased; iii) CS-LW performs better than CC when the circuit is not huge; iv) on three of the BNs, CC with a huge circuit size computes the exact conditional probability, while LW and CS-LW can only provide a good approximation of it in the same time.
To demonstrate that the fourth observation does not undermine the importance of pure sampling, we used \textit{Munin1}. Although the size of this BN is comparable to the size of \textit{Andes}, almost all variables are multi-valued, and their domain size can be as large as 20; hence, some CPDs are huge, while in \textit{Andes}, variables are binary-valued. CC, which works well on \textit{Andes}, fails to deal with the huge CPDs of \textit{Munin1} on a machine with $16$ GB of memory. On the other hand, both LW and CS-LW work well on this BN. %Better performance of CS-LW compared to LW is not expected on this BN since there are not many structures. %Note that CC is not applicable to discrete-continuous distributions while CS- LW is applicable to such distributions. Hence, we get the answer to our final question: CS-LW is competitive with the state-of-the-art and can be a useful tool for inference on massive BNs with structured CPDs. \section{Related Work} Although the question of how to exploit CSIs arising due to structures within CPDs is not new and has puzzled researchers for decades, research in this direction has mainly been focused on exact inference \citep{boutilier1996context, zhang1999role, poole1997probabilistic, poole2003exploiting}. Nowadays, it is common to use knowledge compilation (KC) based exact inference for this purpose \citep{chavira2008probabilistic, fierens2015inference, shen2016tractable}. There are not many approximate inference algorithms that can exploit CSIs. However, there are some tricks that make use of structures to approximate the probability of a query. One trick, introduced by \cite{10.5555/2074094.2074147}, is to make the rule base simpler by ignoring distinctions between close probabilities. Another trick, explored in the case of tree-CPDs, is to prune trees and reduce the size of actual CPDs \citep{salmeron2000importance,cano2011approximate}.
However, approximation by making the distribution simpler is orthogonal to traditional ways of approximation, such as sampling. \cite{fierens2010context} observed the speedup in Gibbs sampling due to structures; however, they did not consider the global implications of structures. %the exploitation of CSIs was limited to local CSI that involves only a variable and its parents. We comprehensively study the role of local CSI and its global implications on sampling. %and observe that, due to CSIs, partial assignments are sufficient for inference. %The question of how CSIs can be exploited has puzzled researchers for decades. One reason is that the qualitative representation of CSIs is not apparent. Over the last few decades, various directed graphical representations with structured CPDs have been proposed \cite{boutilier1996context,geiger1996knowledge,poole2003exploiting,cano2011approximate,pensar2015labeled}. Different inference algorithms were proposed for different representations. %\luc{strong claim} %However, these algorithms were exact. %, and these representations were also not widely adopted. %An alternate representation of CSIs based on probabilistic rules and an inference algorithm exploiting the structure of these rules were proposed by \cite{poole1997probabilistic}. The rule-based representations received much attention and have been studied extensively by the PLP community \cite{de2007problog,poole2008independent,vennekens2009cp,baral2009probabilistic,riguzzi2018foundations}. %We adopt them since they are widely adopted. \luc{not a good reason} %In these representations, the entire distribution is represented using rules, and there is no directed graph. As a consequence, inference algorithms for them are designed to exploit CSIs; however, the concept of {\em d-separation} \cite{geiger1990d} for detecting CIs, which is applicable only to a directed graph, is generally overlooked.
That is why the exploitation of CSIs, along with CIs, is a topic of discussion in this paper. Moreover, inference algorithms developed for structured representations are generally exact. Nowadays, algorithms based on knowledge compilation that exploit CSIs form the basis for state-of-the-art exact inference in many structured representations, including BNs with structured CPDs \cite{chavira2008probabilistic} and PLPs \cite{fierens2015inference}. Recently, \cite{friedman2018approximate} realized that KC is good at exploiting structures while sampling is scalable, and thus proposed CC, which inherits the advantages of both. However, along with these advantages, this approach also inherits the scalability limitations of KC. Furthermore, CC is limited to discrete distributions. The problem of exploiting CSIs in discrete-continuous distributions is non-trivial and is poorly studied. Recently, it has attracted some attention \citep{zeng2019efficient}. However, the proposed approaches are also exact and rely on complicated weighted model integration \citep{belle2015probabilistic}, which quickly becomes infeasible. CS-LW is simple, scalable, and applies to such distributions. A sampling algorithm for a rule-based representation of discrete-continuous distributions was developed by \cite{nitti2016probabilistic}; however, it did not exploit CIs and the global implications of rule structures. %exploiting CSIs was developed by \cite{nitti2016probabilistic} for rule-based representations. However, it did not exploit CIs and global implications of CSIs. %The literature on CSIs has mainly been focused on discrete distributions. The problem of exploiting CSIs present in discrete-continuous distributions is non-trivial and is poorly studied. Recently, it has attracted some attention \cite{zeng2019efficient}. However, proposed approaches are also exact and rely on complicated weighted model integration \cite{belle2015probabilistic}.
It was quickly realized that exact inference is infeasible even for small distributions, and sampling might be needed. Although there are some approaches \cite{dos2019exact,martires2020monte} that use sampling to approximate integrations, they still exploit CSIs exactly using knowledge compilation. Furthermore, there are not many representation languages for CSIs in discrete-continuous distributions. Unlike ours, existing approaches apply to specific small problems, not to a representation language in which various distributions can be represented. %Our approach is orthogonal: we realize that sampling algorithms can exploit CSIs. Additionally, collapsed compilation depends on variable selection policies that may vary across distributions and are hard to choose. There is no such policy in our approach. %DC is a representation language where users can easily write programs to represent CSIs in discrete-continuous distributions. A sampling algorithm for DC, based on partial assignments to exploit CSIs, was already developed by \cite{nitti2016probabilistic}. However, it has limitations: it does not exploit CIs and global implications of CSIs. This is because it was designed for the rule-based representation, not for the graph-based representation. We observe that both CIs and CSIs imply which variables to assign/weight and which not to. Although CS-LW is designed for DC, a rule-based representation, it exploits both. \section{Conclusion} We studied the role of CSI in approximate inference and introduced a notion of contextual assignments to show that CSIs allow for breaking the main problem of estimating a conditional probability query into several smaller problems that can be estimated independently. Based on this notion, we presented an extension of LW, which not only generates samples faster; it also provides a better estimate of the query with far fewer samples. Hence, we provided a solid reason to use structured-CPDs over tabular-CPDs.
We believe other sampling algorithms, like LW, can also be extended along the same lines. We aim to open up a new direction towards improved sampling algorithms that also exploit CSIs. \paragraph{Acknowledgements} This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No [694980] SYNTH: Synthesising Inductive Data Models). OK was supported by the Czech Science Foundation project ‘‘Generative Relational Models’’ (20-19104Y) and partially also by the OP VVV project {\it CZ.02.1.01/0.0/0.0/16\_019/0000765} ‘‘Research Center for Informatics’’. The authors would like to thank Luc De Raedt, Jessa Bekker, Pedro Zuidberg Dos Martires and the anonymous reviewers for valuable feedback. %We have presented context-specific LW that naturally exploits finer independencies in distributions for improved inference. It combines logical reasoning with LW to realize that. We have demonstrated that it can be a useful tool for approximate inference in structured representations. \bibliographystyle{unsrtnat} \bibliography{biblio} \onecolumn \aistatstitle{Context-Specific Likelihood Weighting: \\\ Supplementary Materials} \aistatsauthor{ Nitesh Kumar \And Ond\v rej Ku\v zelka} \aistatsaddress{ Department of Computer Science and Leuven.AI \\\KU Leuven, Belgium \And Department of Computer Science \\\ Czech Technical University in Prague, Czechia} \section{Missing Proofs} \subsection{Proof of Lemma \ref{theorem: bayes-ball 1}} In this section, we present the detailed proof of Lemma \ref{theorem: bayes-ball 1}. \begin{comment} \begin{proof} Let us denote the variables in $\mathbf{Z}$ that are not marked on the bottom (irrelevant variables) by $\mathbf{Z}_i$ and those marked on the bottom (relevant) by $\mathbf{Z}_r$.
The required probability $\mu$ is then given by, \begin{equation*} \begin{aligned} \mu = P(\mathbf{x}_q \mid \mathbf{e}) = \frac{\sum_{\mathbf{x}, \mathbf{z}_i, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_i, \mathbf{z}_r, \mathbf{e}) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_i, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_i, \mathbf{z}_r, \mathbf{e})} \\\ = \frac{\sum_{\mathbf{x}, \mathbf{z}_i, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e}) P(\mathbf{z}_i \mid \mathbf{x}, \mathbf{z}_r, \mathbf{e}) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_i, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e}) P(\mathbf{z}_i \mid \mathbf{x}, \mathbf{z}_r, \mathbf{e})} \end{aligned} \end{equation*} \cite{shacter1998bayes} showed that $\mathbf{X} \perp \mathbf{Z}_i \mid \mathbf{E}$, which means there is no {\em active path}\citep{geiger1990d} from $\mathbf{X}$ to any $Z_i$ in $\mathbf{Z}_i$ given $\mathbf{E}$, however, there are active paths from $\mathbf{X}$ to $\mathbf{Z}_r$ given $\mathbf{E}$. Thus there can not be any active path from $\mathbf{Z}_r$ to any $Z_i$ in $\mathbf{Z}_i$ given $\mathbf{E}$, that is, $\mathbf{X}, \mathbf{Z}_r \perp \mathbf{Z}_i \mid \mathbf{E}$ and $P(\mathbf{z}_i \mid \mathbf{x}, \mathbf{z}_r, \mathbf{e}) = P(\mathbf{z}_i \mid \mathbf{e})$. We therefore have that \begin{equation*} \begin{aligned} \mu = \frac{[\sum_{\mathbf{x}, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e}) f(\mathbf{x})] [\sum_{\mathbf{z}_i} P(\mathbf{z}_i \mid \mathbf{e})]}{[\sum_{\mathbf{x}, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e})] [\sum_{\mathbf{z}_i} P(\mathbf{z}_i \mid \mathbf{e}) ]} \\\ = \frac{\sum_{\mathbf{x}, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e}) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e})} \end{aligned} \end{equation*} Now let us denote the observed variables in $\mathbf{E}$ that are visited (requisite) by $\mathbf{E}_r$ and those that are not visited (not requisite) by $\mathbf{E}_n$. 
We can write, \begin{equation*} \begin{aligned} \mu = \frac{\sum_{\mathbf{x}, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e}_r) P(\mathbf{e}_n \mid \mathbf{x}, \mathbf{z}_r, \mathbf{e}_r) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e}_r) P(\mathbf{e}_n \mid \mathbf{x}, \mathbf{z}_r, \mathbf{e}_r)} \end{aligned} \end{equation*} Since $\mathbf{E}_n$ are not visited there is no active path from $\mathbf{X} \cup \mathbf{Z}_r$ to any $E_n$ in $\mathbf{E}_n$ given $\mathbf{E}_r$. Thus $\mathbf{X}, \mathbf{Z}_r \perp \mathbf{E}_n \mid \mathbf{E}_r$ and $P(\mathbf{e}_n \mid \mathbf{x}, \mathbf{z}_r, \mathbf{e}_r) = P(\mathbf{e}_n \mid \mathbf{e}_r)$. We therefore have that \begin{equation*} \begin{aligned} \mu = \frac{\sum_{\mathbf{x}, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e}_r) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_r} P(\mathbf{x}, \mathbf{z}_r, \mathbf{e}_r)} \end{aligned} \end{equation*} Now let us denote variables in $\mathbf{Z}_r$ that are marked on top (requisite) by $\mathbf{Z}_\star$ and that are not marked on top (not requisite) by $\mathbf{Z}_{\bar{\star}}$. 
Since $\sum_{\mathbf{z}_{\bar{\star}}} P(\mathbf{z}_{\bar{\star}} \mid \mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) = 1$, we can write, \begin{equation*} \begin{aligned} \mu = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) f(\mathbf{x}) \sum_{\mathbf{z}_{\bar{\star}}} P(\mathbf{z}_{\bar{\star}} \mid \mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) }{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) \sum_{\mathbf{z}_{\bar{\star}}} P(\mathbf{z}_{\bar{\star}} \mid \mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r)} \\\ = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r)} \end{aligned} \end{equation*} Now let us denote observed variables in $\mathbf{E}_r$ that are only visited by $\mathbf{E}_\smwhitestar$ and that are visited as well as marked on top by $\mathbf{E}_\star$. We obtain the result as follows, \begin{equation*} \begin{aligned} \mu = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) P(\mathbf{e}_\smwhitestar) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) P(\mathbf{e}_\smwhitestar)} \\\ = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar)} \end{aligned} \end{equation*} \end{proof} \end{comment} \begin{proof} Let us denote the variables in $\mathbf{Z}$ that are marked on the top (requisite) by $\mathbf{Z}_\star$ and that are not marked on the top (not requisite) by $\mathbf{Z}_{\bar{\star}}$. 
The required probability $\mu$ is then given by, \begin{equation*} \begin{aligned} \mu = P(\mathbf{x}_q \mid \mathbf{e}) = \frac{\sum_{\mathbf{x}, \mathbf{z}_{\star}, \mathbf{z}_{\bar{\star}}} P(\mathbf{x}, \mathbf{z}_{\star}, \mathbf{z}_{\bar{\star}}, \mathbf{e}) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_{\star}, \mathbf{z}_{\bar{\star}}} P(\mathbf{x}, \mathbf{z}_{\star}, \mathbf{z}_{\bar{\star}}, \mathbf{e})} = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}) f(\mathbf{x}) \sum_{\mathbf{z}_{\bar{\star}}} P(\mathbf{z}_{\bar{\star}} \mid \mathbf{x}, \mathbf{z}_\star, \mathbf{e}) }{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}) \sum_{\mathbf{z}_{\bar{\star}}} P(\mathbf{z}_{\bar{\star}} \mid \mathbf{x}, \mathbf{z}_\star, \mathbf{e})} \end{aligned} \end{equation*} Since $\sum_{\mathbf{z}_{\bar{\star}}} P(\mathbf{z}_{\bar{\star}} \mid \mathbf{x}, \mathbf{z}_\star, \mathbf{e}) = 1$, we can write, \begin{equation*} \begin{aligned} \mu = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e})} \end{aligned} \end{equation*} Now let us denote the observed variables in $\mathbf{E}$ that are visited (requisite) by $\mathbf{E}_r$ and those that are not visited (not requisite) by $\mathbf{E}_n$. We can write, \begin{equation*} \begin{aligned} \mu = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) P(\mathbf{e}_n \mid \mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) P(\mathbf{e}_n \mid \mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r)} \end{aligned} \end{equation*} The variables in $\mathbf{X} \cup \mathbf{Z}_\star$ pass the Bayes-balls to all their parents and all their children, but $\mathbf{E}_n$ is not visited by these balls. 
The correctness of the Bayes-ball algorithm ensures that there is no active path from $\mathbf{X} \cup \mathbf{Z}_\star$ to any $E_n$ in $\mathbf{E}_n$ given $\mathbf{E}_r$. Thus $\mathbf{X}, \mathbf{Z}_\star \perp \mathbf{E}_n \mid \mathbf{E}_r$ and $P(\mathbf{e}_n \mid \mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) = P(\mathbf{e}_n \mid \mathbf{e}_r)$. After cancelling out the common term $P(\mathbf{e}_n \mid \mathbf{e}_r)$, we get, \begin{equation*} \begin{aligned} \mu = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_r)} \end{aligned} \end{equation*} Now let us denote by $\mathbf{E}_\smwhitestar$ the observed variables in $\mathbf{E}_r$ that are only visited, and by $\mathbf{E}_\star$ those that are visited as well as marked on top. After cancelling out the common term $P(\mathbf{e}_\smwhitestar)$, we get the desired result, \begin{equation*} \begin{aligned} \mu = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) P(\mathbf{e}_\smwhitestar) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) P(\mathbf{e}_\smwhitestar)} = \frac{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) f(\mathbf{x})}{\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar)} \end{aligned} \end{equation*} \end{proof} \begin{example} Consider the network of Figure \ref{fig:context-specific independence}, and assume that our evidence is $\\{D=1, F=1, G=0, H=1\\}$, and our query is $\\{E=0\\}$. Suppose we start by visiting the query variable from its child and apply the four rules of Bayes-ball.
One can easily verify that observed variables $F, G, H$ will be marked on top; hence $\\{F=1, G=0, H=1\\}$ is diagnostic evidence ($\mathbf{e}_\star$). The observed variable $D$ will only be visited; hence $\\{D=1\\}$ is predictive evidence ($\mathbf{e}_\smwhitestar$). Variables $A,B,C,E$ will be marked on top and are requisite unobserved variables ($\mathbf{X} \cup \mathbf{Z}_\star$). \end{example} \subsection{Proof of Theorem \ref{Theorem: the expecation}} %\nit{NK: Will rewrite it since notations are changed} In this section, we present the detailed proof of Theorem \ref{Theorem: the expecation}. \begin{proof} The expectation $\mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\star}]$ is given by \begin{equation*} \begin{aligned} \sum_{\mathbf{x}, \mathbf{z}_\star} \prod_{u_i \in \mathbf{x} \cup \mathbf{z}_\star} P(u_i \mid \mathbf{pa}(U_i)) \prod_{v_i \in \mathbf{\dot{e}}_\star} P(v_i \mid \mathbf{pa}(V_i)). \end{aligned} \end{equation*} The basis $\mathbf{\dot{S}}_\star$ is a subset of $\mathbf{X} \cup \mathbf{Z}_\star$ by Definition \ref{definition: Basis}. Let us denote $(\mathbf{X} \cup \mathbf{Z}_\star) \setminus \mathbf{\dot{S}}_\star$ by $\mathbf{Z}_{\diamond}$. We can now rewrite the expectation as follows, \begin{equation*} \begin{aligned} \sum_{\mathbf{\dot{s}}_\star, \mathbf{z}_{\diamond}} \prod_{u_i \in \mathbf{\dot{e}}_\star \cup \mathbf{\dot{s}}_\star} P(u_i \mid \mathbf{pa}(U_i)) \prod_{v_i \in \mathbf{z}_\diamond} P(v_i \mid \mathbf{pa}(V_i)). \end{aligned} \end{equation*} We will show that $Pa \notin \mathbf{Z}_{\diamond}$ for any $Pa \in \mathbf{Pa}(U_i)$, which will then allow us to push the summation over $\mathbf{z}_{\diamond}$ inside. Let us consider two cases: \begin{itemize} \item For $U_i \in \mathbf{\dot{E}}_\star$, if $Pa \in \mathbf{Pa}(U_i)$ is an unobserved parent of $U_i$, then there will be a direct causal trail from $Pa$ to $U_i$; consequently, $Pa$ will be in the set $\mathbf{\dot{S}}_{\star}$.
\item For $U_i \in \mathbf{\dot{S}}_{\star}$, there will be a causal trail $U_i \rightarrow \cdots\ B_j\ \cdots \rightarrow E$ such that $E \in \mathbf{\dot{E}}_\star$ and such that either no $B_j$ is observed or there is no $B_j$. Let $Pa \in \mathbf{Pa}(U_i)$ be an unobserved parent of $U_i$; then there will be a direct causal trail from $Pa$ to $U_i$. Consequently, there will be such a causal trail from $Pa$ to $E$, and $Pa$ will be in the set $\mathbf{\dot{S}}_{\star}$. \end{itemize} Hence, we push the summation over $\mathbf{z}_{\diamond}$ inside and use the fact that $\sum_{\mathbf{z}_{\diamond}} \prod_{v_i \in \mathbf{z}_{\diamond}} P(v_i \mid \mathbf{pa}(V_i)) = 1$, to get the desired result.
%Let us create an empty set $\mathbf{S}_{\diamond}$ so that we can put elements in it. Let $A$ be an unobserved ancestor of $E \in \mathbf{E}_\ddagger$ in the graph structure in $\mathcal{B}_\star$. If $A \rightarrow \cdots\ B_i \ \cdots \rightarrow E$ be a causal trail and either no variable $B_i$ is observed or there is no $B_i$, then put $A$ in the set $\mathbf{Z}_{\diamond}$. Repeat and put all such $A$ in $\mathbf{Z}_{\diamond}$. Clearly, $\mathbf{Z}_{\diamond} \subseteq \mathbf{Z}_\ddagger$ by definition \ref{Definition: safe contexual assignments}. Let us denote $\mathbf{Z}_\ddagger \setminus \mathbf{Z}_{\diamond}$ with $\mathbf{Z}_{\hat{\diamond}}$. We can now rewrite the expectation as follows,
%\begin{equation*} %\begin{aligned} % \sum_{\mathbf{x}, \mathbf{z}_\dagger, \mathbf{z}_{\diamond}, \mathbf{z}_{\hat{\diamond}}} \prod_{u_i \in \mathbf{e}_\ddagger \cup \mathbf{z}_{\diamond}} P(u_i \mid \mathbf{pa}(U_i)) \times \\\ \prod_{v_i \in \mathbf{x}, \mathbf{z}_\dagger, \mathbf{z}_{\hat{\diamond}}} P(v_i \mid \mathbf{pa}(V_i)).
%\end{aligned} %\end{equation*} %We will show that $Pa \notin \mathbf{X} \cup \mathbf{Z}_\dagger \cup \mathbf{Z}_{\hat{\diamond}}$ for any $Pa \in \mathbf{Pa}(U_i)$, which will then allow us to push the summation over $\mathbf{x}$, $\mathbf{z}_\dagger$, $\mathbf{z}_{\hat{\diamond}}$ inside. Let us consider two cases: %\begin{itemize} % \item Let $Pa \in \mathbf{Pa}(U_i)$ be an unobserved parent of $U_i$ such that $U_i \in \mathbf{E}_\ddagger$, then there will be a direct causal trail from $Pa$ to $U_i$, consequently $Pa$ will be in the set $\mathbf{Z}_{\diamond}$. % \item For $U_i \in \mathbf{Z}_{\diamond}$, there will be a causal trail $U_i \rightarrow \cdots\ B_j\ \cdots \rightarrow E$ such that no $B_j$ is observed and $E \in \mathbf{E}_\ddagger$. Let $Pa \in \mathbf{Pa}(U_i)$ be an unobserved parent of $U_i$ then there will be a direct causal trail from $Pa$ to $U_i$, consequently, there will be the causal trail from $Pa$ to $E$ and $Pa$ will be in the set $\mathbf{Z}_{\diamond}$. %\end{itemize} %Hence, we push the summation over $\mathbf{x}$, $\mathbf{z}_\dagger$ and $\mathbf{z}_{\hat{\diamond}}$ inside and use the fact that $\sum_{\mathbf{x}, \mathbf{z}_\dagger, \mathbf{z}_{\hat{\diamond}}} \prod_{v_i \in \mathbf{x}, \mathbf{z}_\dagger, \mathbf{z}_{\hat{\diamond}}} P(v_i \mid \mathbf{pa}(V_i)) = 1$, to get the desired result.
\end{proof}
\subsection{Proof of Theorem \ref{theorem: cslw_main}}
In this section, we present the detailed proof of Theorem \ref{theorem: cslw_main}.
\begin{proof} Since $\mathbf{X}, \mathbf{Z}_\star, \mathbf{E}_\star, \mathbf{E}_\smwhitestar$ are variables of the Bayesian network $\mathcal{B}$ and they form a sub-network $\mathcal{B}_\star$ such that $\mathbf{E}_\smwhitestar$ do not have any parent, we can always write, \begin{equation*}\label{equation: theorem4-1} \begin{aligned} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) = \prod_{u_i \in \mathbf{x} \cup \mathbf{z}_\dagger \cup \mathbf{e}_\dagger} P(u_i \mid \mathbf{pa}(U_i)) \prod_{v_i \in \mathbf{z}_\ddagger \cup \mathbf{e}_\ddagger} P(v_i \mid \mathbf{pa}(V_i)) \end{aligned} \end{equation*} such that $p \in \mathbf{x} \cup \mathbf{z}_\star \cup \mathbf{e}_\star \cup \mathbf{e}_\smwhitestar$ for all $p \in \mathbf{pa}(U_i)$ or $p \in \mathbf{pa}(V_i)$. Now consider the summation over all possible assignments of variables in $\mathbf{X}, \mathbf{Z}_\star$, that is: $\sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar)$. %Denote $\psi = \\{ \mathbf{x}$, $\mathbf{z}_\dagger$, $\mathbf{Z}_\ddagger$, $\mathbf{e}_\dagger$, $\mathbf{e}_\ddagger \\}$. %Since $\mathbf{Z}_\ddagger$ and $\mathbf{e}_\ddagger$ can be empty, We can always write, %There can be two cases: %\begin{itemize} % \item $\mathbf{Z}_\ddagger$ is empty. In this case all variables in $\mathbf{Z}_\star$ will be in $\mathbf{Z}_\dagger$. % \item $\psi$ satisfies both the properties then $\mathbf{Z}_\ddagger$, $\mathbf{e}_\ddagger$ will not be empty. %\end{itemize} %All such $\psi$, i.e., of both cases are in $\Psi$. 
Hence we can always write,
\begin{equation}\label{equation: theorem4-2} \begin{aligned} \sum_{\mathbf{x}, \mathbf{z}_\star} P(\mathbf{x}, \mathbf{z}_\star, \mathbf{e}_\star \mid \mathbf{e}_\smwhitestar) = \sum_{\psi \in \Psi}\sum_{\mathbf{z}_\ddagger[\psi]} P(\mathbf{x}[\psi], \mathbf{z}_\dagger[\psi], \mathbf{z}_\ddagger[\psi], \mathbf{e}_\dagger[\psi], \mathbf{e}_\ddagger[\psi] \mid \mathbf{e}_\smwhitestar) \end{aligned} \end{equation}
To simplify notation, from now on we denote $\\{ \mathbf{x}[\psi]$, $\mathbf{z}_\dagger[\psi]$, $\mathbf{Z}_\ddagger[\psi]$, $\mathbf{e}_\dagger[\psi]$, $\mathbf{e}_\ddagger[\psi] \\}$ by $\\{ \mathbf{x}$, $\mathbf{z}_\dagger$, $\mathbf{Z}_\ddagger$, $\mathbf{e}_\dagger$, $\mathbf{e}_\ddagger \\}$. After using the definition of contextual assignments, we have that,
\begin{equation*} \begin{aligned} P(\mathbf{x}, \mathbf{z}_\dagger, \mathbf{z}_\ddagger, \mathbf{e}_\dagger, \mathbf{e}_\ddagger \mid \mathbf{e}_\smwhitestar) = \prod_{u_i \in \mathbf{x} \cup \mathbf{z}_\dagger \cup \mathbf{e}_\dagger} P(u_i \mid \mathbf{ppa}(U_i)) \prod_{v_i \in \mathbf{z}_\ddagger \cup \mathbf{e}_\ddagger} P(v_i \mid \mathbf{pa}(V_i)) \end{aligned} \end{equation*}
Since $p \notin \mathbf{z}_\ddagger$ for any $p \in \mathbf{ppa}(U_i)$, we can push the summation over $\mathbf{z}_\ddagger$ inside to get,
\begin{equation}\label{equation: theorem4-3} \begin{aligned} \sum_{\psi \in \Psi}\sum_{\mathbf{z}_\ddagger} P(\mathbf{x}, \mathbf{z}_\dagger, \mathbf{z}_\ddagger, \mathbf{e}_\dagger, \mathbf{e}_\ddagger \mid \mathbf{e}_\smwhitestar) = \sum_{\psi \in \Psi}\prod_{u_i \in \mathbf{x} \cup \mathbf{z}_\dagger \cup \mathbf{e}_\dagger} P(u_i \mid \mathbf{ppa}(U_i)) \sum_{\mathbf{z}_\ddagger}\prod_{v_i \in \mathbf{z}_\ddagger \cup \mathbf{e}_\ddagger} P(v_i \mid \mathbf{pa}(V_i)). \end{aligned} \end{equation}
However, we get a strange term $\sum_{\mathbf{z}_\ddagger}\prod_{v_i \in \mathbf{z}_\ddagger \cup \mathbf{e}_\ddagger} P(v_i \mid \mathbf{pa}(V_i))$.
Let $\mathbf{S}_{\ddagger}$ denote the basis of residual $\mathbf{e}_\ddagger$. We have that $\mathbf{S}_{\ddagger} \subseteq \mathbf{Z}_\ddagger$ by Definition \ref{Definition: safe contexual assignments}. Let us denote $\mathbf{Z}_\ddagger \setminus \mathbf{S}_{\ddagger}$ with $\mathbf{Z}_{\diamond}$. Now the strange term can be rewritten as, \begin{equation*} \begin{aligned} \sum_{\mathbf{s}_{\ddagger}, \mathbf{z}_{\diamond}} \prod_{u_i \in \mathbf{e}_\ddagger \cup \mathbf{s}_{\ddagger}} P(u_i \mid \mathbf{pa}(U_i)) \prod_{v_i \in \mathbf{z}_{\diamond}} P(v_i \mid \mathbf{pa}(V_i)). \end{aligned} \end{equation*} In the proof of Theorem \ref{Theorem: the expecation}, we showed that the summation over variables not in $\mathbf{S}_\ddagger$ can be pushed inside; hence, $\mathbf{Z}_{\diamond}$ can be pushed inside. After using the fact that $\sum_{\mathbf{z}_{\diamond}} \prod_{v_i \in \mathbf{z}_{\diamond}} P(v_i \mid \mathbf{pa}(V_i)) = 1$, we conclude that the strange term is actually the expectation $\mathbb{E}_{Q_\star}[W_{\mathbf{e}_\ddagger}]$. Using Equation (\ref{equation: bb}), (\ref{equation: theorem4-2}), (\ref{equation: theorem4-3}) and rearranging terms, the result follows. \end{proof} \subsection{Proof of Lemma \ref{theorem: partial assignments}} In this section, we present the detailed proof of Lemma \ref{theorem: partial assignments}. \begin{proof} It is clear that a subset of unobserved variables is assigned. Let $\mathbf{Z}_\ddagger$ be a set of unobserved variables left unassigned. Let $E \in \mathbf{E}_\star$ be an observed variable. Consider two cases: \begin{itemize} \item All ancestors of $E$ are in $\mathbf{Z}_\ddagger \cup \mathbf{E}_\smwhitestar \cup \mathbf{E}_\star$. \item Some ancestors of $E$ are in $\mathbf{Z}_\ddagger \cup \mathbf{E}_\smwhitestar \cup \mathbf{E}_\star$ and some are in $\mathbf{X} \cup \mathbf{Z}_\dagger$. 
Let $A \in \mathbf{X} \cup \mathbf{Z}_\dagger$ and let $A \rightarrow \cdots\ B_i\ \cdots \rightarrow E$ be a causal trail. In every such trail, some $B_i$ is observed. \end{itemize} Clearly, $E$ will not be visited from any parent in the first case, and in the second case, the visit will be blocked by observed variables. Consequently, $E$ will not be weighted, which completes the proof.
%The DC simulation algorithm follows the four rules of the Bayes-ball algorithm, which depending on the type of the variable and the direction from which the Bayes-ball came, determine whether it may pass through, bounce back and/or be blocked. Hence, the DC simulation algorithm cannot visit variables that are not visited by the Bayes-ball. However, when visits of variables are from their child, then only some parents are visited instead of all, so only a subset of variables visited by the Bayes-ball will be visited in the simulation of $\mathbb{P}$. A subset of variables will be assigned and weighted since the process of assignment and weighting is the same.
\end{proof}
\begin{comment} \subsection{Proof of Lemma \ref{theorem: expectation and covariance}} In this section, we present the detailed proof of Lemma \ref{theorem: expectation and covariance}.
\begin{proof} We start with the covariance $\mathrm{cov}[W_{\mathbf{\dot{e}}_\ddagger}, W_{\mathbf{\ddot{e}}_\ddagger}]$ that is given by \begin{equation*} \begin{aligned} \mathrm{cov}[W_{\mathbf{\dot{e}}_\ddagger}, W_{\mathbf{\ddot{e}}_\ddagger}] \\\= \mathbb{E}_{Q_\star}[(W_{\mathbf{\dot{e}}_\ddagger} - \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}])(W_{\mathbf{\ddot{e}}_\ddagger} - \mathbb{E}_{Q_\star}[W_{\mathbf{\ddot{e}}_\ddagger}])] \\\ = \mathbb{E}_{Q_\star}[(W_{\mathbf{\dot{e}}_\ddagger}W_{\mathbf{\ddot{e}}_\ddagger} - W_{\mathbf{\dot{e}}_\ddagger}\mathbb{E}_{Q_\star}[W_{\mathbf{\ddot{e}}_\ddagger}] - \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}] W_{\mathbf{\dot{e}}_\ddagger} \\\ + \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}] \mathbb{E}_{Q_\star}[W_{\mathbf{\ddot{e}}_\ddagger}]] \\\= \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}W_{\mathbf{\ddot{e}}_\ddagger}] - \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}] \mathbb{E}_{Q_\star}[W_{\mathbf{\ddot{e}}_\ddagger}] \\\ - \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}] \mathbb{E}_{Q_\star}[W_{\mathbf{\ddot{e}}_\ddagger}] + \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}] \mathbb{E}_{Q_\star}[W_{\mathbf{\ddot{e}}_\ddagger}] \\\= \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}W_{\mathbf{\ddot{e}}_\ddagger}] - \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}] \mathbb{E}_{Q_\star}[W_{\mathbf{\ddot{e}}_\ddagger}] \end{aligned} \end{equation*} After rearranging terms, we get Equation \ref{equation: exp and cov}. Let us now consider the case when $\mathbf{\dot{S}}_\ddagger \cap \mathbf{\ddot{S}}_\ddagger = \emptyset$. 
The expectation $\mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}W_{\mathbf{\ddot{e}}_\ddagger}]$ is given by \begin{equation*} \begin{aligned} \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}W_{\mathbf{\ddot{e}}_\ddagger}] \\\= \sum_{\mathbf{x}, \mathbf{z}_\star} \prod_{u_i \in \mathbf{\dot{e}}_\ddagger \cup \mathbf{\dot{s}}_{\ddagger}} P(u_i \mid \mathbf{pa}(U_i)) \prod_{v_i \in \mathbf{\ddot{e}}_\ddagger \cup \mathbf{\ddot{s}}_{\ddagger}} P(v_i \mid \mathbf{pa}(V_i)) \end{aligned} \end{equation*} $\mathbf{\dot{S}}_\ddagger, \mathbf{\ddot{S}}_\ddagger$ are bases of $\mathbf{\dot{e}}_\ddagger$, $\mathbf{\ddot{e}}_\ddagger$ respectively, hence, we know from Theorem \ref{Theorem: the expecation} that the R.H.S. of the above equation can be written as \begin{equation*} \begin{aligned} \sum_{\mathbf{\dot{s}}_\ddagger} \prod_{u_i \in \mathbf{\dot{e}}_\ddagger \cup \mathbf{\dot{s}}_{\ddagger}} P(u_i \mid \mathbf{pa}(U_i)) \sum_{\mathbf{\ddot{s}}_\ddagger} \prod_{v_i \in \mathbf{\ddot{e}}_\ddagger \cup \mathbf{\ddot{s}}_{\ddagger}} P(v_i \mid \mathbf{pa}(V_i)) \end{aligned} \end{equation*} Since $\mathbf{\dot{S}}_\ddagger \cap \mathbf{\ddot{S}}_\ddagger = \emptyset$, we have that \begin{equation*} \begin{aligned} \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}W_{\mathbf{\ddot{e}}_\ddagger}] = \biggl\\{\sum_{\mathbf{\dot{s}}_\ddagger} \prod_{u_i \in \mathbf{\dot{e}}_\ddagger \cup \mathbf{\dot{s}}_{\ddagger}} P(u_i \mid \mathbf{pa}(U_i))\biggr\\}\times \\\ \biggl\\{ \sum_{\mathbf{\ddot{s}}_\ddagger} \prod_{v_i \in \mathbf{\ddot{e}}_\ddagger \cup \mathbf{\ddot{s}}_{\ddagger}} P(v_i \mid \mathbf{pa}(V_i))\biggr\\} \\\= \mathbb{E}_{Q_\star}[W_{\mathbf{\dot{e}}_\ddagger}]\mathbb{E}_{Q_\star}[W_{\mathbf{\ddot{e}}_\ddagger}] \end{aligned} \end{equation*} Hence, in this case the covariance vanishes. 
\end{proof} \end{comment}
\subsection{Proof of Theorem \ref{theorem: dc-partial-justification}}
In this section, we present the detailed proof of Theorem \ref{theorem: dc-partial-justification}. \begin{proof} Variables in $\mathbf{Z}_\ddagger$ are not assigned in the simulation; hence, it follows immediately from Lemma \ref{theorem: DC CSI} that the assignment is contextual. Assume by contradiction that $A \in \mathbf{X} \cup \mathbf{Z}_\dagger$, $E \in \mathbf{E}_\ddagger$ and there is a causal trail $A \rightarrow \cdots\ B_i\ \cdots \rightarrow E$ such that no $B_i$ is observed or there is no $B_i$. Since $A$ is assigned, all children of $A$ will be visited, and following the trail, the variable $E$ will also be visited from its parent since there is no observed variable in the trail to block the visit. Consequently, $E$ will be weighted, which contradicts our assumption that $E$ is not weighted. Hence, the assignment is also safe. \end{proof} \subsection{Proof of Lemma \ref{theorem: DC CSI}} In this section, we present the detailed proof of Lemma \ref{theorem: DC CSI}. \begin{proof} Since $A$ is assigned/weighted and rules in $\mathbb{P}$ are exhaustive, a rule $\mathcal{R} \in \mathbb{P}$ with $A$ in its head must have fired. Let $\mathbf{d}$ be the body and $\mathcal{D}$ the distribution in the head of $\mathcal{R}$. Since each $d_i \in \mathbf{d}$ must be true for $\mathcal{R}$ to fire, $\mathbf{d} \subseteq \mathbf{c}$. We assume that rules in $\mathbb{P}$ are mutually exclusive. Thus, among all rules for $A$, only $\mathcal{R}$ will fire even when an assignment of some variables in $\mathbf{B}$ is also given.
Hence, by definition of the rule $\mathcal{R}$, we have that,
\begin{equation*} \mathcal{D} = P(A \mid \mathbf{d}) = P(A \mid \mathbf{c}) = P(A \mid \mathbf{c}, \mathbf{B}) \end{equation*}
\end{proof}
%\section{Representing Structures in CPDs of Discrete-Continuous Distributions}
\vskip -0.11in
\section{Top-Down Proof Procedure for DC($\mathcal{B}$)}\label{Section: DC Proofs}
\begin{figure}[t] \centering \includegraphics[width=1\linewidth]{proof_new_comb.PNG} \caption{Left: A search graph induced to prove \texttt{e=1}; Right: A graphical structure.
Stages at which entries were made to dictionaries $Dst$ and $Asg$ are shown using enumerations of the form $(i)$ and $(1)$, respectively.} \label{fig: proof} \end{figure}
Let us look into the process of estimating the unconditional probability of queries to DC($\mathcal{B}$) programs. We assume some familiarity with proof procedures for definite clauses \citep{poole2010artificial}.
%the inference in DC programs \citep{nitti2016probabilistic}, where we will illustrate that a variable $A$ can be sampled given the state of some of its requisite ancestors (partial assignments). This is in contrast with BNs, where knowledge of states of all requisite ancestors is required before sampling $A$.
%Till now, we have represented the distribution with the DC program $\mathbb{P}$ but not specified how inference can be performed in such programs. Let us look at the inference where CSIs play a role.
%how to compute the probability of a query first. We will discuss how to compute the conditional probability after that.
%We assume that the reader is familiar with knowledge bases and inference algorithms for them, as discussed in \cite{poole2010artificial}.
Just like the set of definite clauses forms the knowledge base, the DC($\mathcal{B}$) program forms a {\em probabilistic knowledge base}. We can ask queries of the form \texttt{yes $\leftarrow$ e=1}, which poses the question: is \texttt{e} assigned to \texttt{1}? We first need to prove \texttt{e=1} before concluding that the answer is \texttt{yes}. To realize that, we perform a proof procedure, starting from the query, to determine whether it is a logical consequence of the rules in the DC($\mathcal{B}$) program. Algorithm \ref{algorithm: proof} describes the procedure, which is similar to the standard SLD-resolution for definite clauses. However, there are some differences in proving atoms of the form \texttt{e=1} due to the stochastic nature of sampling. We illustrate the proof procedure with an example.
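In outline, the procedure just described can be sketched in Python. This is only a minimal propositional sketch under assumptions of our own, not the paper's implementation: the \texttt{PROGRAM} dictionary, its encoding of clauses as (body, parameter) pairs, and the two toy clauses below are illustrative.

```python
import random

# Hypothetical propositional DC program: each variable maps to a list of
# (body, p) pairs, where the body is a list of (variable, value) atoms and
# p is the parameter of the bernoulli distribution in the head of the clause.
PROGRAM = {
    "a": [([], 0.4)],
    "e": [([("a", 1)], 0.9), ([("a", 0)], 0.2)],
}

def prove(goal, asg, dst, rng):
    """Prove a conjunction of atoms [(var, val), ...]; return True ("yes")
    iff every atom is derivable under the sampled assignments in `asg`."""
    for var, val in goal:
        if var not in asg:
            # Resolve against the unique applicable clause (bodies are
            # mutually exclusive): prove its body, record the head
            # distribution in the table Dst, then sample the variable once.
            for body, p in PROGRAM[var]:
                if prove(body, asg, dst, rng):
                    dst[var] = p
                    break
            else:
                return False  # no clause fired: the proof fails
            asg[var] = 1 if rng.random() < dst[var] else 0
        if asg[var] != val:
            return False  # the sampled value disagrees with the queried value
    return True
```

Repeating `prove([("e", 1)], {}, {}, rng)` with fresh tables and taking the fraction of `True` answers estimates the probability of the query.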
\begin{comment} \begin{algorithm}[t] \caption{Propositional DC Proof Procedure} \label{algorithm: proofs} \begin{algorithmic} \Procedure{prove}{\texttt{Goal}} \begin{itemize} \item Proofs a conjunction of atoms \texttt{Goal} and returns \texttt{yes}; otherwise this procedure fails. \item Maintains two global tables: i) \texttt{Asg}: records assignments of variables; ii) \texttt{Dst}: records distributions for variables. \end{itemize} \begin{enumerate} \item Erase all entries in \texttt{Dst}, \texttt{Asg}. \item While \texttt{Goal} in not empty: \begin{enumerate} \item Select the first atom \texttt{b} from \texttt{Goal}. \item If \texttt{b} is of the form \texttt{a=x}: \begin{enumerate} \item If an entry for \texttt{a} is in \texttt{Asg} then \texttt{y=Asg[a]}. \item Else: \begin{enumerate} \item For all \texttt{a $\sim \mathcal{D} \leftarrow $ Body} in $\mathbb{P}$ with \texttt{a} as head, \Call{prove}{\texttt{Body} $\wedge$ \texttt{dist(a,$\mathcal{D}$)}}. \item Sample a value \texttt{y} from \texttt{Dst[a]} and record \texttt{Asg[a]=y}. \end{enumerate} \item If \texttt{x==y} then remove \texttt{b} from \texttt{Goal} else fail. \end{enumerate} \item If \texttt{b} is of the form \texttt{dist(a,$\mathcal{D}$)}: record \texttt{Dst[a]=$\mathcal{D}$} and remove \texttt{dist(a,$\mathcal{D}$)} from \texttt{Goal}. \end{enumerate} \item Return \texttt{yes}. \end{enumerate} \EndProcedure \end{algorithmic} \end{algorithm} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{proof_new.PNG} \caption{A search graph for the proof in Example \ref{example: proof process}.} %$?- e\cong1$. 
Stages at which entries were made to dictionaries $Dst$ and $Asg$ are shown using enumerations of the form $(i)$ and $(1)$, respectively.} \label{fig: proof} \end{figure} \end{comment}
\begin{example}\label{example: proof process} \normalfont Consider a Bayesian network whose graph structure is as shown in Figure \ref{fig: proof} (right) and whose CPDs are expressed using the following rules:
%Suppose a program has the following rules:
\begin{lstlisting}[frame=none]
a $\sim\ $ bernoulli(0.1).
d $\sim\ $ bernoulli(0.3).
b $\sim\ $ bernoulli(0.2) $\leftarrow\ $ a=0.
b $\sim\ $ bernoulli(0.6) $\leftarrow\ $ a=1.
c $\sim\ $ bernoulli(0.2) $\leftarrow\ $ a=1.
c $\sim\ $ bernoulli(0.7) $\leftarrow\ $ a=0$\ \wedge\ $b=1.
c $\sim\ $ bernoulli(0.8) $\leftarrow\ $ a=0$\ \wedge\ $b=0.
e $\sim\ $ bernoulli(0.9) $\leftarrow\ $ c=1.
e $\sim\ $ bernoulli(0.4) $\leftarrow\ $ c=0$\ \wedge\ $d=1.
e $\sim\ $ bernoulli(0.3) $\leftarrow\ $ c=0$\ \wedge\ $d=0.\end{lstlisting}
Suppose the query \texttt{yes $\leftarrow$ e=1} is asked. The procedure induces a search graph. An example of such a graph is shown in Figure \ref{fig: proof} (left), where we write \texttt{dist(y, p)} for \texttt{dist(y, bernoulli(p))} and use a comma (\texttt{,}) instead of \texttt{$\wedge$} since there is no risk of confusion. In this example, the proof succeeds using a derivation. However, it might happen that the proof cannot be derived. In that case, the proof fails, and the answer is \texttt{no}.
\begin{comment} The following shows a derivation that corresponds to a sequence of assignments to \texttt{Goal} when Algorithm \ref{algorithm: proof} is called like this: \Call{prove}{\texttt{i=1}}. The states of tables \texttt{Asg} and \texttt{Dst} are also indicated.
\begin{lstlisting}[frame=none] yes $\leftarrow\ $ i=1 yes $\leftarrow\ $ e=1, dist(i,0.3), i=1 yes $\leftarrow\ $ a=1, dist(e,0.2), e=1, dist(i,0.3), i=1 yes $\leftarrow\ $ dist(a,0.2), a=1, dist(e,0.2), e=1, dist(i,0.3), i=1 yes $\leftarrow\ $ a=1, dist(e,0.2), e=1, dist(i,0.3), i=1; [dst(a,0.2), asg(a,1)] yes $\leftarrow\ $ dist(e,0.2), e=1, dist(i,0.3), i=1; [dst(a,0.2), asg(a,1)] yes $\leftarrow\ $ e=1, dist(i,0.3), i=1; [dst(a,0.2), asg(a,1), dst(e,0.2), asg(e,1)] yes $\leftarrow\ $ dist(i,0.3), i=1; [dst(a,0.2), asg(a,1), dst(e,0.2), asg(e,1)] yes $\leftarrow\ $i=1;[dst(a,0.2),asg(a,1),dst(e,0.2) ,asg(e,1), dst(i,0.3), asg(i,1)] yes $\leftarrow\ $ ;[dst(a,0.2), asg(a,1), dst(e,0.2), asg(e,1), dst(i,0.3), asg(i,1)]\end{lstlisting} where we use \texttt{dist(x,p)} for \texttt{dist(x,bernoulli(p))} since there is no risk of confusion. The procedure induces a search graph, as shown in Figure \ref{fig: proof}. Here, the proof succeeds using this derivation. However, it might happen that the proof can not be derived. In that case, the proof fails, and the answer is \texttt{no}. \end{comment} \end{example} After repeating the procedure, the fraction of times we get \texttt{yes} is the estimated probability of the query. It is important to note that some requisite variables may not be assigned in some occasions, e.g., in the proof shown in Figure \ref{fig: proof}, variable \texttt{b}, which is requisite to compute the probability, is not assigned. Hence, we sample values of \texttt{e} faster. In this way, the procedure exploits the structure of rules. %The difference lies in the way atoms of the form \texttt{e$\cong$1} are proved. To prove such atoms, in the resolution step, the procedure chooses, from $\mathbb{P}$, a clause with \texttt{e $\sim$ bernoulli(p)} as head (line: \ref{algoline: choose}). Atoms in the body of the chosen clause are proved in the order. A table $Dst$ records distributions according to which variables are distributed. 
After the proof of the body succeeds, the distribution in the head of the clause is entered in the table $Dst$ for \texttt{e}. %there will be at most one distribution for \texttt{e} in the table $Dst$. %It is worth reiterating that there will be precisely one %\footnote{If distributional clauses are not mutually exclusive, then there will be a set of distributions for a single variable. Discussion on such cases is beyond the scope of this paper.} %entry for one variable because we assume clauses are mutually exclusive and exhaustive. %A value is sampled (line: \ref{algoline: sample1}) from this distribution, and compared with the value in query, here, it is $1$. If the comparison succeeds, then the atom \texttt{e$\cong$1} is removed from the goal. %If there is no entry in $Dst$ then the value is sampled from the default distribution (line: \ref{algoline: default}). %A variable already sampled is not sampled again, the data structure $Top$ keeps track of that (line: \ref{algoline: assignment track}). %The algorithm induces a search graph, e.g., Figure \ref{fig: proof}. %At this point, it is worth pausing to understand the induction of such a graph shown in Figure \ref{fig: proof}. %Notice that unassigned ancestors will be assigned in the procedure; however, some ancestors will not be assigned depending on the context. Here, for example, the parents \texttt{b} and \texttt{c} are not assigned. This is because attempts to prove the query using some clauses fail. Here, the proof succeeds using a clause, so the answer is $yes$. However, it might happen that the query can not be proved using any clause. In that case, the proof fails, and the answer is $no$. After repeating the procedure, the fraction of times we get $yes$ is the query’s probability. 
%A naive approach to estimate a conditional query $P(\texttt{e=1} \mid \texttt{i=1})$ is to find proofs of evidence together with the query, i.e., \Call{prove-marked}{\texttt{e=1} $\wedge$ \texttt{i=1}} and use Equation \ref{equation: Naive Estimate}. Most probabilistic rule based systems %, including the current inference engine of DC, %use this approach to deal with evidence. Clearly, they do not exploit CIs. If we first use Bayes ball on the graph structure to determine requisite evidence and use Equation \ref{Equation: Bayes-ball Estimate} instead, we can exploit CIs, but we would still not exploit the global implications of CSIs. A notion similar to {\em CSI-separation} \citep{koller2009probabilistic} is needed to detect such implications. %if we repeat the procedure. %Notice that some parents of \texttt{e} %Notice that unassigned ancestors of \texttt{e} are assigned, however, some ancestors might not be assigned. This is because attempts to prove the query using some clauses fail. The procedure return $yes$; otherwise it fails, and the answer is no. The fraction of times we get $yes$ is the probability of the query if we repeat the procedure. 
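To make the role of partial assignments concrete, the rules of Example \ref{example: proof process} can be simulated with a small self-contained Python sketch (our own illustrative encoding of the clauses, not the actual DC engine): only the parents mentioned in clause bodies are sampled on demand, so \texttt{d} is never assigned in proofs where \texttt{c=1} (and \texttt{b} is skipped whenever \texttt{a=1}), and the fraction of successful proofs estimates the query probability.

```python
import random

# The clauses of the example, encoded (for illustration) as lists of
# (body, p) pairs per variable, where p is the bernoulli parameter.
RULES = {
    "a": [({}, 0.1)],
    "d": [({}, 0.3)],
    "b": [({"a": 0}, 0.2), ({"a": 1}, 0.6)],
    "c": [({"a": 1}, 0.2), ({"a": 0, "b": 1}, 0.7), ({"a": 0, "b": 0}, 0.8)],
    "e": [({"c": 1}, 0.9), ({"c": 0, "d": 1}, 0.4), ({"c": 0, "d": 0}, 0.3)],
}

def sample(var, asg, rng):
    """Assign `var` by firing the unique applicable clause, sampling only
    the parents mentioned in clause bodies (a partial assignment)."""
    if var not in asg:
        for body, p in RULES[var]:
            if all(sample(pa, asg, rng) == v for pa, v in body.items()):
                asg[var] = 1 if rng.random() < p else 0
                break
    return asg[var]

def estimate(var, val, n, seed=0):
    """Fraction of repeated proofs in which `var` is sampled to `val`."""
    rng = random.Random(seed)
    return sum(sample(var, {}, rng) == val for _ in range(n)) / n
```

Marginalizing the rules by hand gives $P(\texttt{c=1}) = 0.722$ and $P(\texttt{e=1}) = 0.74154$, and `estimate("e", 1, 100_000)` comes out close to this exact value.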
\section{Additional Experimental Details}\label{appendix: additional exp details} %For Table \ref{Table: Result1}: Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz, RAM: $126$ GB %For Table \ref{Table: Result2}: Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz, RAM: $16$ GB To make sure that almost all variables in BNs are requisite, we used the following query variables and evidence to obtain the results (Table \ref{Table: Result1} and Table \ref{Table: Result2}): \begin{itemize} \item \textit{Alarm} \begin{itemize} \item $\mathbf{x} = $ \textit{\\{bp = low\\}} \item $\mathbf{e} = $ \textit{\\{lvfailure = false, cvp = normal, hr = normal, expco2 = low, ventalv = low, ventlung = zero\\}} \end{itemize} \item \textit{Win95pts} \begin{itemize} \item $\mathbf{x} = $ \textit{\\{problem1 = normal\_output\\}} \item $\mathbf{e} = $ \textit{\\{prtstatoff = no\_error, prtfile = yes, prtstattoner = no\_error, repeat = yes\_\_always\_the\_same\_, ds\_lclok = yes, lclok = yes, problem3 = yes, problem4 = yes, nnpsgrphc = yes, psgraphic = yes, problem5 = yes, gdiin = yes, appdata = correct, prtstatpaper = no\_error\\}} \end{itemize} \item \textit{Andes} \begin{itemize} \item $\mathbf{x} = $ \textit{\\{grav78 = false\\}} \item $\mathbf{e} = $ \textit{\\{goal\_2 = true, displacem0 = false, snode\_10 = true, snode\_16 = true, grav2 = true, constant5 = false, known8 = true, try11 = true, kinemati17 = false, try13 = true, given21 = true, choose35 = false, write31 = false, need36 = false, resolve38 = true, goal\_69 = false, snode\_73 = false, goal\_79 = false, try24 = false, newtons45 = false, try26 = false, snode\_65 = false, snode\_88 = false, buggy54 = true, weight57 = true, goal\_104 = false, goal\_108 = false, need67 = false, goal\_114 = false, snode\_118 = false, snode\_122 = false, snode\_125 = false, goal\_129 = false, snode\_135 = false, goal\_146 = false, snode\_151 = false\\}} \end{itemize} \item \textit{Munin1} \begin{itemize} \item $\mathbf{x} = $ \textit{\\{rmedd2amprew = r04\\}} \item 
$\mathbf{e} = $ \textit{\\{rlnlt1apbdenerv = no, rlnllpapbmaloss = no, rlnlwapbderegen = no, rdiffnapbmaloss = no, rderegenapbnmt = no, rdiffnmedd2block = no, rmeddcvew = ms60, rapbnmt = no, rapbforce = 5, rlnlbeapbdenerv = no, rlnlbeapbneuract = no, rmedldwa = no, rmedd2blockwd = no\\}} \end{itemize} \end{itemize} \section{Code} The source code is present in the supplementary material along with installation instructions. \begin{comment} \subsection{Algorithms}\label{appendix: algo section} \begin{algorithm} \caption{Bayes-ball simulation of BNs} \label{algorithm: bayes-ball simulation} \begin{algorithmic}[1] \Procedure{bayes-ball-simulation}{$Sched$} \State \slash\slash \ $Sched$: a hashtable\textless node, type\textgreater\ of nodes scheduled to \State \slash\slash\quad be visited. Type can be either $parent$ or $child$. Set type \State \slash\slash\quad as $child$ for all nodes in $Sched$ upon procedure call. \State \slash\slash \ $\mathcal{B}$: a belief network of nodes (variables) \State \slash\slash \ $Evd$: a hashtable\textless observed node, value\textgreater \State \slash\slash \ $Asg$: a hashtable\textless unobserved node, value\textgreater \State \slash\slash \ $W$: a hashtable\textless observed node, weight\textgreater \State \slash\slash \ $Vst$: a set of visited nodes. \State \slash\slash \ $Top$: a set of nodes marked on top. \State \slash\slash \ $Bot$: a set of nodes marked on bottom. \State \slash\slash \ Empty $Asg, W, Vst, Top, Bot$ before procedure call. \State \textbf{global} $\mathcal{B}, Evd, Asg, W, Vst, Top, Bot$. \State let $K$ be a set of all observed nodes. 
\State \textbf{while} $Sched$ is not empty \Indent \State remove an entry \textless $J, type$\textgreater\ from $Sched$ \State add $J$ to $Vst$ \State let $S_1, S_2, S_3, S_4$ be empty hashtables \State \textbf{if} $J \not \in K$ and $type == child$ \Indent \State \textbf{if} $J \not \in Top$ \Indent \State add $J$ to $Top$ \State $\forall I \in Pa(J) \; (S_1[I] = child)$ \State \Call{bayes-ball-simulation}{$S_1$} \label{algoline: parents assigned} \State sample $x$ from $P(J \mid \mathbf{pa}(J))$; $Ast[J]=x$ \label{algoline: sample Bayes-ball} \EndIndent \State \textbf{if} $J \not \in Bot$ \Indent \State add $J$ to $Bot$ \State $\forall I \in Children(J) \; (S_2[I] = parent)$ \State \Call{bayes-ball-simulation}{$S_2$} \EndIndent \EndIndent \State \textbf{if} $type == parent$ \Indent \State \textbf{if} $J \in K$ and $J \not \in Top$ \Indent \State add $J$ to $Top$ \State $\forall I \in Pa(J) \; (S_3[I] = child)$ \State \Call{bayes-ball-simulation}{$S_3$} \State $W[J] = P(J=evd[J] \mid \mathbf{pa}(J))$ \label{algoline: weight Bayes- ball} \EndIndent \State \textbf{if} $J \not \in K $ and $J \not \in Bot$ \Indent \State add $J$ to $Bot$ \State $\forall I \in Children(J) \; (S_4[I] = parent)$ \State \Call{bayes-ball-simulation}{$S_4$} \EndIndent \EndIndent \EndIndent \State \textbf{return} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Computes the covariance of two random variables.} \begin{algorithmic}[1] \Procedure{Covariance}{$l$, $R$} \State \slash\slash \ $l$: a column index; $R$: a list of column indices \State \textbf{global} $Weights[1..n][1..m]$ \ \slash\slash \ a matrix of size $n\times m$ \State $e_l = $ \Call{Expectation}{$l$}; $e_r = $ \Call{Expectation}{$R$} \State $cov = 0$; $cnt = 0$ \State \textbf{for} $i=1$ to $n$ \Indent \State \textbf{if} $Weights[i][l] \neq null$ \Indent \State $w_l = Weights[i][l]$ \State \textbf{if} $\exists j\in R \ (Weights[i][j] \neq null)$ \Indent \State $w_r = 1$; $M = []$ \State \textbf{for} 
$j\in R$ \Indent \State \textbf{if} $Weights[i][j] == null$ \Indent \State append $j$ to $M$ \EndIndent \State \textbf{else} \Indent \State $w_r = w_r \times Weights[i][j]$ \EndIndent \EndIndent \State $w_r = w_r \times \Call{Expectation}{M}$ \State $cnt = cnt + 1$ \State $cov = cov + (w_l - e_l) \times (w_r - e_r)$ \EndIndent \Indent \EndIndent \EndIndent \EndIndent \State \textbf{if} $cnt==0$ \Indent \State \textbf{return} $0$ \EndIndent \State \textbf{else} \Indent \State \textbf{return} $cov/cnt$ \EndIndent \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Computes the expected value of product of random variables.} \begin{algorithmic}[1] \Procedure{Expectation}{$S$} \State \slash\slash \ $S$: a list of column indices \State \textbf{global} $Memoized_{s}$ \ \slash\slash \ a hashtable\textless list, float\textgreater \State \textbf{global} $Weights[1..n][1..m]$ \ \slash\slash \ a matrix of size $n\times m$ \State \textbf{if} $S$ is empty \Indent \State \textbf{return} $1$ \EndIndent \State \textbf{else if} $Memoized_{s}$ has key $S$ \Indent \State \textbf{return} $Memoized_{s}[S]$ \EndIndent \State \textbf{else if} size of $S$ is $1$ \Indent \State $cnt = 0$; $e = 0$ \State \textbf{for} $i=1$ to $n$ \Indent \State \textbf{if} $Weights[i][S[0]] \neq null$ \Indent \State $e = e + Weights[i][S[0]]$ \State $cnt = cnt + 1$ \EndIndent \EndIndent \State $e = \dfrac{e}{cnt}$ \State $Memoized_{s}[S] = e$ \State \textbf{return} $e$ \EndIndent \State \textbf{else} \Indent \State copy $S$ to $R$ \State $L = []$; append $S[0]$ to $L$ \State delete $R[0]$ from $R$ \State $p = \Call{Expectation}{L}\times\Call{Expectation}{R}$ \State $cov = \Call{Covariance}{L[0],R}$ \State $e = p + cov$ \State $Memoized_{s}[S] = e$ \State \textbf{return} $e$ \EndIndent \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Context-specific likelihood weighting}\label{algorithm: cs-lw} \begin{algorithmic}[1] \Procedure{contextual-lw}{$Q$, $Evd$, $n$} 
\State \slash\slash\ $Q$: a conjunction of atoms of the form $a{\cong} x$, for example, \State \slash\slash\quad $(a_1{\cong}x_1, \dots a_{9}{\cong}x_9)$, where $a_i$ are query variables \State \slash\slash\quad and $x_i$ are their assignment. \State \slash\slash\ $Evd$: a hashtable\textless observed variable, value\textgreater \State \slash\slash\ $n$: number of simulations \State \slash\slash\ $Weights$: matrix of weights obtained after $n$ simulations \State \slash\slash\ $Memoized_s$: a hashtable \textless list, float\textgreater \State \textbf{Output:} \Indent \State $prob$: estimated probability \EndIndent \State \textbf{global} $Weights, Memoized_s$ \State $W_{all} = []$; $I_{all} = []$; $E_d = []$ \State \textbf{repeat} $n$ times \Indent \State $[i,W] = \Call{prop-dc-simulate}{Q, Evd}$ \State append $i$ to $I_{all}$; append $W$ to $W_{all}$ \State append keys of $W$ to $E_{d}$ \EndIndent \State remove duplicates from $E_d$ and let $m$ be the size of $E_d$ \State let $Weights$ be a new matrix of $nulls$ of size $n \times m$ \State \textbf{for} $j=1$ to $n$ \Indent \State \textbf{for} \textless$key,val$\textgreater\ in $W_{all}[j]$ \Indent \State $idx = $ index of $key$ in $E_d$ \State $Weights[j][idx] = val$ \EndIndent \EndIndent \State erase all entries in $Memoized_s$ \State $nume=0; deno=0$ \State \textbf{for} $j=1$ to $n$ \Indent \State $S = []; p = 1$ \State \textbf{for} $k=1$ to $m$ \Indent \State \textbf{if} $Weights[j][k]==null$ \Indent \State append $k$ to $S$ \EndIndent \State \textbf{else} \Indent \State $p = p \times Weights[j][k]$ \EndIndent \EndIndent \State $p = p \times \Call{expectation}{S}$ \State $deno = deno + p$ \State \textbf{if} $I_{all}[j] == 1$ \Indent \State $nume = nume + p$ \EndIndent \EndIndent \State $prob = nume/deno$ \State \textbf{return} $prob$ \EndProcedure \end{algorithmic} \end{algorithm} \end{comment} \end{document}
# The Ifs and Buts of the Development Approaches for IoT Applications

Saitel Daniela Agudelo-Sanabria<EMAIL_ADDRESS>0002-8287-5198 Technical University of Munich, Germany and Anshul Jindal<EMAIL_ADDRESS>0002-7773-5342 Technical University of Munich, Germany

###### Abstract.

The recent growth of Internet of Things (IoT) devices has led to the rise of various complex applications that involve interactions among large numbers of heterogeneous devices. An important challenge that needs to be addressed is to facilitate the agile development of IoT applications with minimal effort by the various parties involved in the process. However, IoT application development is challenging due to the wide variety of hardware and software technologies that interact in an IoT system. Moreover, it involves dealing with issues that are attributed to different software life-cycle phases: development, deployment, and progression. In this paper, we examine three IoT application development approaches: Mashup-based development, Model-based development, and Function-as-a-Service based development. The advantages and disadvantages of each approach are discussed from different perspectives, including reliability, deployment expeditiousness, ease of use, and targeted audience. Finally, we propose a simple solution in which these techniques are combined to deliver reliable applications while reducing costs and time to release.

mashup development, model-based development, function-as-a-service, internet of things

††ccs: Software and its engineering Software development techniques

## 1\. Introduction

The Internet of Things (IoT) is a network that enables things to monitor their surroundings, interact with humans and other things, and carry out all sorts of tasks with little to no human intervention (Mala, 2017). Sensors, controllers, actuators, and any other device capable of establishing a connection to the Internet are called things (Waher, 2018).
The application areas of the Internet of Things are as heterogeneous as the devices and applications that compose it. Industries, in general, can take advantage of IoT to predict device failure, manage their supply chains, and optimize their manufacturing and administrative processes (Casado-Vara et al., 2019; Belli et al., 2019; Arvind Ravulavaru, 2018). The retail industry is using IoT to enable new customer experiences. Some examples are the Amazon Go smart store (Amazon, [n.d.]) and Alibaba’s smart warehouse (Business Insider, 2017). The agriculture industry benefits from the possibility of automating crop yield measurement, livestock monitoring, soil quality control, irrigation, and other production processes (Nayak et al., 2020). Services of this kind are offered by companies like (KaaIoT Technologies LLC, [n.d.]) and (Softweb Solutions Inc, [n.d.]). More application examples can be found in the automotive (Syafrudin et al., 2018), nuclear (Susila et al., 2018), aerospace (Correia et al., 2019), and military (Yushi et al., 2012) industries, among others. Furthermore, we find IoT applications in government, health, and security systems. Local governments’ challenges, such as pollution control, waste handling, energy management, disaster prevention and response, parking assistance, and traffic re-routing, find solutions in the context of IoT-enabled smart cities (Krylovskiy et al., 2015). Undoubtedly, IoT plays an increasingly important role in our daily lives. The existence of a diversity of scenarios where IoT is a leading actor is only possible with the cooperation of multiple technologies and protocols. The complexity of the architecture and the variety of technologies combined in an IoT system are the most visible challenges of IoT application development. Furthermore, developers must deliver applications that behave consistently under different operating conditions, which might change at runtime (Mala, 2017).
In some cases, machine-human interfaces are not necessary or available, so we need to enable other communication channels in case of failure (Mala, 2017). At the current time, there exist many different development strategies, which are grouped under two categories: model-based and mashup development. However, there is no consensus on a general development strategy. On the one hand, software engineering, as a discipline, provides the structure and the tools necessary to describe every aspect of an IoT system (Mala, 2017). However, it lacks the expeditiousness and ease of use of mashups. On the other hand, mashups support black-box development, in which the programmer does not need to develop the components of the system from scratch nor know their implementation in detail (Ogrinz, 2009). Only the inputs and outputs need to be identified to include an element in the interconnected network of components that make up the application. Furthermore, since the introduction of the concept of serverless with the launch of AWS Lambda in 2014 (Handy, 2014), serverless computing has gained popularity and adoption in different fields. Function-as-a-Service (FaaS) is a key enabler of serverless computing (WG, 2018). In FaaS, an application is decomposed into simple, standalone functions that are uploaded to a FaaS platform for execution. FaaS offers a new way to develop IoT applications, where one can imagine all the IoT devices being part of a FaaS platform, with functions scheduled on each of the devices depending on their computational capabilities and requirements. Our key contributions are:

* • We provide an overview of the current trends in programming models and tools used to build IoT applications.

* • We introduce the extension of IoT application development based on the FaaS approach using Google Cloud Functions (on Google Cloud).
* • We examine three IoT application development approaches: Mashup-based development, Model-based development, and Function-as-a-Service based development, using a real-world example, from different perspectives, including reliability, deployment expeditiousness, ease of use, and targeted audience.

* • We propose a simple solution in which the development techniques are combined to deliver reliable applications while reducing costs and time to release.

The rest of the paper is organized as follows. In Section 2, the different IoT application development approaches are described along with the IoT system layered architecture. A practical IoT application example is presented using all the approaches in Sections 3 and 4. Section 5 compares the methodologies and discusses the ways in which they can be combined to allow for transparent, simple, fast, and robust development. Lastly, Section 6 concludes the paper.

## 2\. IoT Application Development Approaches

An IoT system can be described using four layers that group the main processes: the Device, Network, Data Processing, and Application layers (Mala, 2017). A general IoT system layered architecture is depicted in Figure 1. Every IoT application requires things: physical equipment such as sensors, actuators, and controllers, which are grouped in the Device Layer; for example, sensors measuring the heart rate, oxygenation, and temperature of a patient. The Network Layer is responsible for transmitting the monitored data to the cloud. In the Data Processing Layer the data is transformed into information that can be used to make decisions. For example, if the heart rate of the patient under observation shows an abnormal rise, the IoT system can notify the doctors. The Application Layer acts as an interface for a user to understand or convey the information, for example, by sending a warning message or directly by making a call to the doctor.
The decision from the doctor or the user is transmitted back down through the hierarchy and is executed by the devices in the Device Layer.

Figure 1. A general layered IoT system architecture.

Devices in the Device Layer gather data and pass it to the Network Layer, where it is transmitted to the upper layer. The data is then stored and processed in the Data Processing Layer and passed to the Application Layer, which enables machine-to-human communication through front-end applications and machine-to-machine communication through appropriate communication protocols (Mala, 2017; Padraig Scully, 2017). In the following subsections, we introduce three IoT application development approaches.

### 2.1. Mashup-based development

Mashup applications are composites of existing assets that cooperate to deliver new content, with reduced time-to-production, complexity, and cost. The development process begins when a new business opportunity is identified. The developers then look for all existing resources that can be integrated to construct the new application. It is important to note that mashups are not necessarily final products, nor do they always have a user interface (Ogrinz, 2009). The components are represented as black boxes, with inputs and outputs specified in the APIs. The developers may need to build some functionality into a new block or perform adaptations of the selected APIs. Once the components are connected, the application can be tested and opened to the public. Then, new applications can be built on top of this one, and the development cycle can begin again. This process is described by Michael Ogrinz in (Ogrinz, 2009) as the circle of mashups. Development teams typically focus on solving a small number of problems that affect the majority of users, while a large number of very specific issues remain unsolved. This is known as the Long Tail Problem (see Fig. 2).
Mashups were previously described as a solution to this problem in the context of enterprise applications for internal use (Ogrinz, 2009). In the case explained in (Ogrinz, 2009), IT teams would develop the most important features and allow non-IT teams to build mashup applications to address their specific requirements within a controlled environment where the functionality is presented as widgets. This assertion can be extended to software development in general. Today, many successful service providers are exposing APIs, allowing for tailor-made mashup applications that improve the user experience, open communication channels, and are ultimately translated into increased revenue (Chow and Stefanov, 2007).

Figure 2. Long tail problem in software development. Development teams focus on solving a small number of problems that affect the majority of users, while a large number of very specific problems remain unsolved. Reinterpretation of the figure in (Ogrinz, 2009).

The development of mashups can be manual or assisted by tools (Yu et al., 2008). The manual process requires a significant level of knowledge of the technologies involved. That is because mashup applications integrate web-based artifacts such as RSS/Atom feeds and HTML data, and other types of resources like databases, binary files, and XML files (Ogrinz, 2009), which implies that different applications might provide different formats to retrieve data. Therefore, the data will need to be parsed so that it can be understood by all the parties involved. Additionally, if a user interface is to be provided, the developer needs to take care of its functionality. We focus our discussion on the assisted development of IoT applications. A study conducted in 2010 pointed out that mashup tools were not sufficiently easy to handle from the end-user programming perspective (Na, 2010).
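The parsing problem just described can be illustrated with a small Python sketch that maps two hypothetical source formats (an RSS item and a JSON API record) onto one common schema; the field names below are assumptions for illustration, not from the paper:

```python
import json
import xml.etree.ElementTree as ET

def normalize_rss_item(xml_text: str) -> dict:
    """Map one RSS <item> into the mashup's common record shape."""
    item = ET.fromstring(xml_text)
    return {
        "title": item.findtext("title", default=""),
        "link": item.findtext("link", default=""),
        "published": item.findtext("pubDate", default=""),
    }

def normalize_json_record(json_text: str) -> dict:
    """Map one record from a hypothetical JSON API into the same shape."""
    raw = json.loads(json_text)
    return {
        "title": raw.get("headline", ""),
        "link": raw.get("url", ""),
        "published": raw.get("date", ""),
    }

rss = "<item><title>Sensor update</title><link>http://a.example/1</link><pubDate>Mon, 01 Jun 2020</pubDate></item>"
api = '{"headline": "Sensor update", "url": "http://b.example/1", "date": "2020-06-01"}'

records = [normalize_rss_item(rss), normalize_json_record(api)]
# Both sources now share one schema and can feed the same downstream nodes.
assert records[0]["title"] == records[1]["title"]
```

Once both sources share one schema, the rest of the mashup can treat them as interchangeable black boxes.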
Today, mashup tools have evolved from the end-user programming paradigm to offer different programming schemes and automation levels according to the targeted audience, which might include non-programmers, people with little experience in programming, and experienced developers (Aghaee et al., 2012). The final product might be a non-executable design, a prototype with mock elements, or a deployable application, depending on the tool (Aghaee et al., 2012). A great characteristic of mashup tools that appeals to both programmers and non-programmer users is the community built around these tools, which is not only capable of offering guidance but also of extending the tool features. Typical functionalities supported by mashup tools are:

* • Creating, configuring, and connecting nodes in a user-friendly graphical editor.

* • Data gathering, combination, and transformation.

* • Scripting/Coding.

IoT frameworks usually offer an extended set of functionalities, such as domain-specific languages and coding aids to support the development process. Some examples of IoT frameworks are: Watson IoT Platform (IBM, 2019), Kaa (LLC, 2020), Crosser (Crosser, [n.d.]), Thingsboard (Thingboard, 2020), and MindSphere (Siemens, [n.d.]). Some IoT mashup tool examples are Paraimpu (Paraimpu, 2016), Total.js Flow (Flow, [n.d.]), and Node-RED (Node-RED, [n.d.]). Paraimpu is designed as a social platform. It offers a web interface where users can configure a predefined set of sensors and actuators, and share and subscribe to things shared by their contacts (Piras et al., 2014). Total.js Flow and Node-RED are open-source visual programming tools that support IoT, web, and REST applications. In both, users can drag, drop, and connect nodes in a graphical editor. Total.js Flow shows a real-time representation of data traffic between connected nodes and node errors (Flow, [n.d.]). We present an application developed using Node-RED in subsection 4.1.

### 2.2. Model-driven development

Software engineering allows for robust IoT application delivery by providing developers with a structured development process (Mala, 2017). The specification of a software structure and behaviour usually involves Unified Modelling Language (UML) diagrams. The structure of a system can be described using class diagrams, component diagrams, composite structure diagrams, object diagrams, package diagrams, deployment diagrams, and profile diagrams. To describe the behavior of a system, one can use activity diagrams, state diagrams, communication diagrams, interaction overview diagrams, sequence diagrams, timing diagrams, and use case diagrams. A complete description of each diagram can be found in (Ashbacher, 2004). This sometimes overwhelming number of representations creates a complete definition of the system at different levels of abstraction. When it comes to IoT, models extending UML can become as specific as the applications and technologies involved. Nastic et al., for instance, proposed PatRICIA, a programming model based on Intents, which represent tasks, and Intent Scopes, which establish the entities in which the tasks are executed. Barbon et al. proposed ASIP for IoT development on Arduino platforms, introducing the concept of services as devices that communicate with the board via textual messages (Barbon et al., 2016). Nguyen et al. proposed FRASAD, a framework that employs sensor nodes as the central notion (Nguyen et al., 2015). The main disadvantage of this development approach is that it is time-consuming, not only because of the many abstraction levels at which systems must be described, but also because of the granularity of the structures that can be reused, i.e., classes and code snippets (Ashbacher, 2004). This is one of the points where developers can benefit from the integration of mashup development into their workflows (see Section 5).
Nonetheless, model-based development tools offer another solution: code generation. Code can be automatically generated from different kinds of diagrams. Much research has been conducted to study the efficiency of the generation process using different kinds of diagrams, such as sequence diagrams (Mall et al., 2013), activity and sequence diagrams (Viswanathan and Samuel, 2016), state chart diagrams (E. V. and Samuel, 2019), and class diagrams (Sejans and Nikiforova, 2012). Wang et al. proposed a fog-based model for IoT applications in the field of smart grids (Wang et al., 2018). There are many code generation tools for different programming languages. Some examples are Papyrus (Eclipse Foundation, [n.d.]) for C++ and Java, ThingML for C, C++, Java, and JavaScript (Harrand et al., 2016), and Visual Paradigm for 17 languages including C#, Java, Python, PERL, and Ruby (Visual Paradigm, [n.d.]). Code generators create mappings between certain diagram structures and lines of code. The ability of these tools to translate one structure into another is variable; some of them only produce skeletal code, while others can match almost every structure.

### 2.3. Function-as-a-Service based development

Function-as-a-Service (FaaS) provides an attractive cloud model since it facilitates application development and reduces application costs. Instead of developing application logic in the form of services and managing the required resources, the application developer implements fine-grained functions connected in an event-driven application and deploys them to the FaaS platform (WG, 2018). The platform is responsible for providing resources for function invocations and performs automatic scaling depending on the workload. The functions can be closely integrated with other services, e.g., cloud databases, authentication and authorization services, and messaging services. These services are sometimes called Backend-as-a-Service (BaaS).
BaaS refers to the third-party services that replace a subset of functionality in a function and allow the users to focus only on the application logic (Lane, 2015). Since functions are stateless, the state of the application is stored in databases.

Figure 3. Sequence diagram for the example remote health monitoring system application.

Building a secure, scalable, performant, and managed application for an IoT system seems like a huge challenge; however, there have been growing developments to run FaaS-based functions on IoT devices. For example, in Amazon's IoT Greengrass system (AWS, [n.d.]), it is possible to integrate end devices with cloud resources in an IoT platform, and application Lambda functions are deployed to the end devices. However, this approach is limited to single applications on the edge and a static distribution of computation. The integration of IoT systems for general FaaS applications will require an extension of the FaaS platform across heterogeneous devices. This kind of development offers several advantages:

* • It reduces operational and system administration costs.

* • It reduces the development and deployment costs, and provides faster time to market.

* • It is highly scalable and fault tolerant.

Therefore, FaaS provides an easy alternative for developing IoT applications.

## 3\. Application Overview

In this section, we present an overview of the example application used to demonstrate all the development approaches. We consider an IoT-based health monitoring system that uses a sensor to gather the heart signal of a patient at a particular sample rate, for example 100 Hz. Fig. 3 shows the sequence diagram for the application. The sensor data is published through an Arduino device to a particular topic on a remote MQTT broker. A client subscribing to that topic stores the received measurements in a NoSQL database such as MongoDB.
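The subscribe-and-store step can be sketched in Python using the paho-mqtt and pymongo clients; the broker address, topic name, payload layout, and database names below are assumptions for illustration, not the paper's actual configuration:

```python
import json

def to_document(topic: str, payload: bytes) -> dict:
    """Turn one MQTT message into a MongoDB document (assumed JSON payload)."""
    sample = json.loads(payload.decode("utf-8"))
    return {"topic": topic, "value": float(sample["value"]), "t": sample["t"]}

def run(broker_host: str, topic: str) -> None:
    """Wiring sketch (not executed here): subscribe and store each message.
    Requires the third-party paho-mqtt and pymongo packages."""
    import paho.mqtt.client as mqtt
    from pymongo import MongoClient

    # Hypothetical database and collection names.
    collection = MongoClient("mongodb://localhost:27017")["health"]["heart"]
    client = mqtt.Client()

    def on_message(cli, userdata, msg):
        collection.insert_one(to_document(msg.topic, msg.payload))

    client.on_message = on_message
    client.connect(broker_host, 1883)
    client.subscribe(topic)
    client.loop_forever()
```

The same pattern applies whether the subscriber is a Node-RED node, a class method, or a cloud function: receive a message, normalize it, and persist it.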
Further, this data is analyzed to obtain the following metrics:

* • Beats per minute (BPM)

* • Interbeat interval (IBI)

* • Standard deviation of RR intervals (SDNN)

* • Standard deviation of successive differences (SDSD)

* • Root mean square of successive differences (RMSSD)

* • Proportion of successive differences above 20 ms (pNN20)

* • Proportion of successive differences above 50 ms (pNN50)

* • Median absolute deviation of RR intervals (MAD)

The calculated metrics are presented in the visualization using graphs for remote monitoring, and abnormalities are highlighted for the doctor and caretaker for quick action. These can also be viewed at any time by the doctor and the caretaker. In practice, a sensor would measure the change in skin color as the blood is pumped. However, in this work, we emulated the heart signal sensing using the data in (Vagent, 2016), which has a sample rate of 100 values per second. We use Eclipse Mosquitto, an open-source MQTT broker, on Google Cloud on a virtual machine with 2 vCPUs and 7.5 GB memory. Node-RED was also hosted on Google Cloud on a virtual machine with 2 vCPUs and 7.5 GB memory. The Python pyHeart package (Van Gent et al., 2018; van Gent et al., 2019) is used for performing the analysis. For creating functions as part of the FaaS development strategy, we used Google Cloud Functions on Google Cloud. We configured each function with a memory size of 128 MB and a timeout of 60 seconds.

Figure 4. Flow of the proposed health monitoring system using the mashup-based development approach.

## 4\. Methodology

In this section, we present the detailed development design and architecture of the example application using the three development approaches.

### 4.1. Mashup-based development

We implemented the application system proposed in Section 3 using Node-RED, a mashup programming tool in which the behavior of an application can be described as a directed graph composed of wired nodes (Node-RED, [n.d.]).
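The metrics listed in Section 3 reduce to simple statistics over the successive RR (interbeat) intervals. A minimal Python sketch using assumed textbook formulas, not the pyHeart implementation used in the paper:

```python
import math
from statistics import mean, median, pstdev

def hrv_metrics(rr_ms):
    """Compute the Section 3 metrics from successive RR intervals in milliseconds.
    Assumed textbook formulas; not the pyHeart implementation."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    med = median(rr_ms)
    return {
        "bpm": 60000.0 / mean(rr_ms),                    # beats per minute
        "ibi": mean(rr_ms),                              # interbeat interval
        "sdnn": pstdev(rr_ms),                           # std of RR intervals
        "sdsd": pstdev(diffs),                           # std of successive diffs
        "rmssd": math.sqrt(mean(d * d for d in diffs)),  # root mean square of diffs
        "pnn20": sum(abs(d) > 20 for d in diffs) / len(diffs),
        "pnn50": sum(abs(d) > 50 for d in diffs) / len(diffs),
        "mad": median(abs(r - med) for r in rr_ms),      # median absolute deviation
    }

m = hrv_metrics([790, 810, 790, 810])
# An 800 ms mean interbeat interval corresponds to 75 BPM.
```

Whichever development approach is used, this is the computation that the python-function node, the Metrics class, and the Metrics Calculation cloud function would each host.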
The nodes might be APIs, hardware devices, databases, online services, and other applications. A set of related nodes is called a flow. Node-RED provides an online editor, which can be accessed in a browser. It allows for effortless addition of nodes using drag-and-drop gestures. The nodes can be connected by wires that represent the flow of data. Every resource node can be configured by filling out a form that requests relevant data, such as credentials, web addresses, and ports. Once the node information is given, the application can be deployed to the Node.js runtime environment and executed (Node-RED, [n.d.]). Since the project was open-sourced back in 2013, and thanks to the participation of its ever-growing community, the tool has added multiple features and node libraries (Node-RED, [n.d.]). Today it has nodes for hardware integration, input/output handling, social media, storage, time, data generation, processing, analysis, and parsing, among others. The tool can be run locally, in a Docker container, on a programmable board, or in the cloud, and the flows can be exported and shared as JSON files (Node-RED, [n.d.]). We configured the built-in MQTT subscriber node to receive the messages from the MQTT broker. We further stored the incoming messages in MongoDB for long-term availability using the db node in Node-RED and performed a continuous analysis of the incoming data using the python-function node, where the Python script for analyzing the data was run. The flow of the system is presented in Fig. 4. The representation is composed of three flows. The flow that appears in the upper part of the image is used mainly for debugging purposes. It deletes all the documents in the database. The flow in the middle of the image allows us to store a fixed number of entries in the database. It receives messages from the MQTT broker and stores them in the database.
Once a user-defined threshold is met, the first stored value is removed, ensuring that the database keeps a manageable size. Lastly, the flow at the bottom of the image retrieves the measurements from the database and passes them as an argument to a Python script that filters them and uses them to calculate the heart-related metrics presented in Section 3.

### 4.2. Model-based development

Figure 5. Class diagram for the sample remote health monitoring system application.

The class diagram of the proposed application system in Section 3 is shown in Fig. 5. This diagram explains the abstraction of the system from a programming point of view. We observe three different classes and their relationships.

* • Class Mongo: This class is responsible for interacting with the Mongo database and performing different operations, such as inserting documents, getting all records, and deleting records.

* • Class Metrics: This class is responsible for reading the data, converting it into the desired form, and then analyzing it to calculate the various measurements presented in Section 3.

* • Class Mqtt: It is responsible for connecting to the remote MQTT broker and, when a message is received, calling the Mongo class object to store the message in the database and the Metrics class object to calculate the various measurements.

### 4.3. Function-as-a-Service based development

Figure 6 shows the modeling of the proposed application system in Section 3 using Function-as-a-Service on the Google Cloud Platform. The system consists of three cloud functions:

* • Mongo Operations Cloud Function: This Google Cloud Function (GCF) is responsible for interacting with the Mongo database and performing different operations, such as inserting documents, getting all records, and deleting records.

* • Metrics Calculation Cloud Function: This GCF is responsible for reading the data, converting it into the desired form, and then analyzing it to calculate the various measurements presented in Section 3.
* • MQTT Subscriber Cloud Function: It is responsible for connecting to the remote MQTT broker and, when a message is received, invoking the Mongo Operations Cloud Function to store the data in the database and the Metrics Calculation Cloud Function to calculate the various measurements.

Figure 6. Function-as-a-Service based modeling of the proposed health monitoring system.

## 5\. Discussion

The lack of a systematic approach to developing IoT applications has been pointed out as an issue (Mala, 2017; Prehofer and Chiarabini, 2015). At first, one might think this is due to the absence of efforts to put up a modeling scheme. However, after careful review, one discovers that it is the abundance of modeling approaches, not the lack thereof, that generates the issue. In this section we compare mashup development with model-based development and discuss the possible integration of these approaches. A summary is provided in Table 1.

Table 1. Comparison of the development approaches on various attributes.

| # | Attribute | Mashup-based | Model-based | FaaS-based |
|---|---|---|---|---|
| 1 | End-users are able to produce applications | Yes | No | No |
| 2 | Can reuse existing resources | At application level | At class level | Yes |
| 3 | Platform independent | No | Yes | Yes |
| 4 | Suitable for critical systems | No* | Yes | Yes |
| 5 | Maintenance can be scheduled in a predictable way | No* | Yes | Yes |
| 6 | Can guarantee high availability | No* | Yes | Yes |

\* Unless all resources are owned by the organization developing the application.

Let us start by comparing model-based and mashup diagrams. No structure diagram, except for the deployment diagram, can be compared to a mashup representation. This kind of diagram describes the deployment of artifacts on nodes, which correspond to locations (Ashbacher, 2004). Nodes in mashup applications might indeed correspond to physical devices, but different nodes can also correspond to distinct applications on the same server.
Similarly, in the FaaS-based approach, different functions can also correspond to physical devices or to distinct application functions. Among the behavioral UML diagrams, activity diagrams are the closest system representation to mashups. Activities model behavior and are connected by control or data flows (Ashbacher, 2004). Mashup nodes have a defined behavior and are connected by data flows. The latter was also pointed out in (Prehofer and Chiarabini, 2015). FaaS-based functions have a single defined behavior and can invoke other functions by sending a request. These functions are stateless by design. The benefit of each modeling approach for IoT depends on the context. With the introduction of Web 2.0, consumers were invited to interact and engage in a community established around the contents that they consume (Ogrinz, 2009). Now, these users are creating content themselves. Model-based and FaaS development are usually not the end-user's choice because they require a strong knowledge base and technical background. Mashup tools, on the contrary, make development accessible to a wider population. End-users who want to create IoT applications should not be forced to learn software development concepts. Instead, mashup tools designed for end-users should strive to keep expanding their feature offering while keeping the complexity of the user interface low. Similarly, companies should allow their customers to extend and customize their applications by exposing well-documented APIs. Enterprise applications demand fast but robust development that guarantees data privacy, security, and reliability. Model-based ideas are platform-independent, so the system design is not bound to the system implementation. That removes constraints from the developer, who can later implement or generate the code in the most suitable language (Prehofer and Chiarabini, 2015). Additionally, model-based components support predictive maintenance, which keeps them in an optimal state.
Mashups, on the other hand, are only suitable for critical applications if all the nodes are enterprise-managed resources. External resources might become unavailable unexpectedly, causing downtime and losses (Ogrinz, 2009). In the FaaS-based approach, operational management is handled by the cloud service provider, and hence providing a reliable service is their responsibility. Although functions provide fault tolerance and high availability, there are cases where the failure of a function during a critical operation can impact the application's performance. Furthermore, due to the deep virtualization stack in FaaS, it is not suitable for applications requiring nanosecond- or millisecond-level performance. Model-based techniques have a range of reusability limited to code snippets, classes, and libraries. In mashups, existing high-level structures such as databases, parsers, and applications (through their APIs) can be reused. FaaS functions are inherently responsible for one task; other functions requiring that particular task can trigger them, which promotes function reuse. Each individual function can be scaled independently, depending on the requirements. Furthermore, the approach offers three advantages: (i) no continuously running services are required, (ii) functions are only charged when they are executed, and (iii) the function abstraction increases the developer's productivity. The need to manually define the behavior of a non-predefined component has been identified as a disadvantage of mashup development (Prehofer and Chiarabini, 2015). However, that might rather be an opportunity to integrate model-based, mashup, and FaaS-based approaches: single components can be constructed using model-based or FaaS-based development and integrated using mashup tools.
In that sense, the industry can greatly benefit from the combination of design principles in model-based techniques and the high-level reusability of components that characterizes mashup and FaaS-based approaches. Furthermore, the FaaS-based approach also provides high scalability, which can be advantageous for certain applications.

## 6\. Conclusion

The Internet of Things (IoT) is a network where things can share information about their status and their surroundings, by establishing communication channels with other things and by enabling user interfaces. There is a large diversity of scenarios where IoT technologies and protocols are implemented. As the coordination and communication between things is an essential factor, the development process of IoT applications is demanding. Model-based approaches offer structure and design principles that allow for an extensive description of the application at different abstraction levels. The design is not bound to a specific platform because it is based on premises rather than technologies. Nevertheless, the process requires a longer time to release, given that the reuse of assets is limited and therefore every module must be implemented. Code generation can be used to accelerate the process. The tools used for this purpose examine the system diagrams and produce matching lines of code. Mashup development is faster than model-based development because it leverages existing resources at a high level. Different pieces of software can be integrated so that they can interact and deliver a new service or application. Mashups allow developers to focus on innovation, and non-programmer users to customize applications. A disadvantage of this development technique is that high availability, reliability, and performance can only be guaranteed when no resources come from external sources. The FaaS-based development approach is similar to mashups, with the added advantages of high availability, reliability, and scalability.
Different components of an IoT application can be implemented as FaaS functions, and each of the components can then be scaled independently. Due to the deep virtualization stack in a FaaS platform, this approach is not suitable for applications requiring nanosecond- or millisecond-level performance. A combination of these development approaches, in which the design of modules is accomplished using model-based principles and their implementations are integrated using mashup tools or FaaS functions, is an interesting alternative to be explored. Further work is required to evaluate the scalability and flexibility of this solution in the context of enterprise applications.

## References

* AWS ([n.d.]) [n.d.]. AWS IoT Greengrass - Amazon Web Services. https://aws.amazon.com/greengrass/. (Accessed on 07/27/2020). * Aghaee et al. (2012) Saeed Aghaee, Marcin Nowak, and Cesare Pautasso. 2012\. Reusable decision space for mashup tool design. In _Proceedings of the 4th ACM SIGCHI symposium on Engineering interactive computing systems - EICS ’12_. ACM Press, New York, New York, USA, 211\. https://doi.org/10.1145/2305484.2305520 * Amazon ([n.d.]) Amazon. [n.d.]. Introducing Amazon Go and the world’s most advanced shopping technology - YouTube. https://www.youtube.com/watch?v=NrmMk1Myrxc&ab_channel=amazon * Arvind Ravulavaru (2018) Arvind Ravulavaru. 2018\. _Enterprise Internet of Things Handbook:_. Packt Publishing Ltd. * Ashbacher (2004) Charles Ashbacher. 2004\. The Unified Modeling Language Reference Manual, Second Edition, by James Rumbaugh. _The Journal of Object Technology_ 3, 10 (2004), 193\. https://doi.org/10.5381/jot.2004.3.10.r1 * Barbon et al. (2016) Gianluca Barbon, Michael Margolis, Filippo Palumbo, Franco Raimondi, and Nick Weldin. 2016\. Taking Arduino to the Internet of Things: The ASIP programming model. _Computer Communications_ 89-90 (9 2016), 128–140. https://doi.org/10.1016/j.comcom.2016.03.016 * Belli et al.
(2019) Laura Belli, Luca Davoli, Alice Medioli, Pier Luigi Marchini, and Gianluigi Ferrari. 2019. Toward Industry 4.0 With IoT: Optimizing Business Processes in an Evolving Manufacturing Factory. _Frontiers in ICT_ 6 (8 2019), 17. https://doi.org/10.3389/fict.2019.00017 * Business Insider (2017) Business Insider. 2017\. Inside Alibaba’s smart warehouse staffed by robots - YouTube. https://www.youtube.com/watch?v=FBl4Y55V2Z4 * Casado-Vara et al. (2019) Roberto Casado-Vara, Paulo Novais, Ana Belen Gil, Javier Prieto, and Juan Manuel Corchado. 2019\. Distributed Continuous-Time Fault Estimation Control for Multiple Devices in IoT Networks. _IEEE Access_ 7 (2019), 11972–11984. https://doi.org/10.1109/ACCESS.2019.2892905 * Chow and Stefanov (2007) Shu-Wai. Chow and Stoyan. Stefanov. 2007. _PHP Web 2.0 mashup projects : create practical mashups in PHP, grabbing and mixing data from Google Maps, Flickr, Amazon, YouTube, MSN Search, Yahoo!, Last.fm, and 411Sync.com_. Packt Pub. 283 pages. * Correia et al. (2019) Ricardo Correia, Daniel Belo, and Nuno Borges Carvalho. 2019\. IoT/WPT developments in space exploration. In _Asia-Pacific Microwave Conference Proceedings, APMC_ , Vol. 2018-Novem. IEEE, 79–81. https://doi.org/10.23919/APMC.2018.8617320 * Crosser ([n.d.]) Crosser. [n.d.]. Crosser Edge Analytics & Integration Software. https://crosser.io/ * E. V. and Samuel (2019) Sunitha E. V. and Philip Samuel. 2019. Automatic Code Generation From UML State Chart Diagrams. _IEEE Access_ 7 (2019), 8591–8608. https://doi.org/10.1109/ACCESS.2018.2890791 * Eclipse Foundation ([n.d.]) Eclipse Foundation. [n.d.]. Papyrus/Codegen/Adding a New Code Generator - Eclipsepedia. https://wiki.eclipse.org/Papyrus/Codegen/Adding_a_New_Code_Generator * Flow ([n.d.]) Flow. [n.d.]. Flow - Total.js Platform. https://www.totaljs.com/flow/ * Handy (2014) Alex Handy. 2014\. Amazon introduces Lambda, Containers at AWS re:Invent. https://sdtimes.com/amazon/amazon-introduces-lambda-containers/. 
https://sdtimes.com/amazon/amazon-introduces-lambda-containers/ [Online; Accessed: 4-Feburary-2020]. * Harrand et al. (2016) Nicolas Harrand, Franck Fleurey, Brice Morin, and Knut Eilif Husa. 2016. ThingML: A language and code generation framework for heterogeneous targets. In _Proceedings - 19th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS 2016_. ACM Press, New York, New York, USA, 125–135. https://doi.org/10.1145/2976767.2976812 * IBM (2019) IBM. 2019. Watson IoT Platform - Overview — IBM. https://www.ibm.com/cloud/watson-iot-platform?mhq=iot&mhsrc=ibmsearch_a * KaaIoT Technologies LLC ([n.d.]) KaaIoT Technologies LLC. [n.d.]. IoT Agriculture Solutions for Smart Farming. https://www.kaaproject.org/smart-farming * Krylovskiy et al. (2015) Alexandr Krylovskiy, Marco Jahn, and Edoardo Patti. 2015\. Designing a Smart City Internet of Things Platform with Microservice Architecture. In _Proceedings - 2015 International Conference on Future Internet of Things and Cloud, FiCloud 2015 and 2015 International Conference on Open and Big Data, OBD 2015_. IEEE, 25–30. https://doi.org/10.1109/FiCloud.2015.55 * Lane (2015) Kin Lane. 2015. Overview of the backend as a service (BaaS) space. _API Evangelist_ (2015). * LLC (2020) KaaIoT Technologies LLC. 2020\. Enterprise IoT Platform, Cloud, and Analytics. https://www.kaaproject.org/ * Mala (2017) D. Jeya Mala. 2017\. _Integrating the Internet of Things Into Software Engineering Practices_. Vol. i. 2016–2017 pages. http://lib.ugent.be/fulltxt/RUG01/002/351/249/RUG01-002351249_2017_0001_AC.pdf * Mall et al. (2013) Rajib Mall, Debasish Kundu, and Debasis Samanta. 2013\. Automatic code generation from unified modelling language sequence diagrams. _IET Software_ 7, 1 (2 2013), 12–28. https://doi.org/10.1049/iet-sen.2011.0080 * Na (2010) Na. 2010. A Study of Mashup as a Software Application Development Technique with Examples from an End-User Programming Perspective. 
_Journal of Computer Science_ 6, 12 (12 2010), 1406–1415. https://doi.org/10.3844/jcssp.2010.1406.1415 * Nayak et al. (2020) Padmalaya Nayak, Kayiram Kavitha, and Ch. Mallikarjuna Rao. 2020\. IoT-Enabled Agricultural System Applications, Challenges and Security Issues. Studies in Big Data, Vol. 63. Springer Singapore, Singapore, 139–163. https://doi.org/10.1007/978-981-13-9177-4{_}7 * Nguyen et al. (2015) Xuan Thang Nguyen, Huu Tam Tran, Harun Baraki, and Kurt Geihs. 2015\. FRASAD: A framework for model-driven IoT Application Development. In _IEEE World Forum on Internet of Things, WF-IoT 2015 - Proceedings_. Institute of Electrical and Electronics Engineers Inc., 387–392. https://doi.org/10.1109/WF-IoT.2015.7389085 * Node-RED ([n.d.]) Node-RED. [n.d.]. Node-RED. https://nodered.org/ * Ogrinz (2009) Michael Ogrinz. 2009\. _Mashup Patterns: Designs and Examples for the Modern Enterprise_. Addison-Wesley. 400 pages. * Padraig Scully (2017) Padraig Scully. 2017\. 5 things to know about the IoT Platform ecosystem. https://iot-analytics.com/5-things-know-about-iot-platform/ * Paeaimpu (2016) Paeaimpu. 2016\. Paraimpu - You are Web. http://www.paraimpu.com/ * Piras et al. (2014) Andrea Piras, Davide Carboni, and Antonio Pintus. 2014\. A web platform to collect, manage and share heterogeneous sensor data. , 565–569 pages. https://doi.org/10.1007/978-1-4614-3860-1{_}100 * Prehofer and Chiarabini (2015) Christian Prehofer and Luca Chiarabini. 2015. From Internet of things mashups to model-based development. In _Proceedings - International Computer Software and Applications Conference_ , Vol. 3. IEEE Computer Society, 499–504. https://doi.org/10.1109/COMPSAC.2015.263 * Sejans and Nikiforova (2012) Janis Sejans and Oksana Nikiforova. 2012. Problems and Perspectives of Code Generation from UML Class Diagram. _Scientific Journal of Riga Technical University. Computer Sciences_ 44, 1 (2012), 75–84. https://doi.org/10.2478/v10143-011-0024-3 * Siemens ([n.d.]) Siemens. [n.d.]. 
MindSphere. https://siemens.mindsphere.io/en/about * Softweb Solutions Inc ([n.d.]) Softweb Solutions Inc. [n.d.]. Smart Farming – Internet of Things Solutions for Agriculture Industry. https://www.softwebsolutions.com/resources/iot-solution-for-agriculture-industry.html * Susila et al. (2018) I. Putu Susila, Istofa, Gina Kusuma, Sukandar, and Ismet Isnaini. 2018. Development of IoT based meteorological and environmental gamma radiation monitoring system. In _AIP Conference Proceedings_ , Vol. 1977. American Institute of Physics Inc., 8. https://doi.org/10.1063/1.5043016 * Syafrudin et al. (2018) Muhammad Syafrudin, Ganjar Alfian, Norma Latif Fitriyani, and Jongtae Rhee. 2018. Performance analysis of IoT-based sensor, big data processing, and machine learning model for real-time monitoring system in automotive manufacturing. _Sensors (Switzerland)_ 18, 9 (9 2018), 24\. https://doi.org/10.3390/s18092946 * Thingboard (2020) Thingboard. 2020\. ThingsBoard Open-source IoT Platform. https://thingsboard.io/ * Vagent (2016) Paul Vagent. 2016\. Analyzing a Discrete Heart Rate Signal Using Python – Part 1 – paulvangent.com. https://doi.org/10.5334/jors.241 * Van Gent et al. (2018) Paul Van Gent, Haneen Farah, and Van Gent. 2018. Heart Rate Analysis for Human Factors: Development and Validation of an Open Source Toolkit for Noisy Naturalistic Heart Rate Data Reducing congestion at sags View project From Individual Automated Vehicles to Cooperative Traffic Management-Predicting the. In _Proceedings of The 6th HUMMANIST Conference_ , Vol. 13. HUMANIST publications, The Hague, NL Lyon, 13–14. http://resolver.tudelft.nl/uuid:5c638e14-d249-4116-aa05-2e566cf3df02 * van Gent et al. (2019) Paul van Gent, Haneen Farah, Nicole van Nes, and Bart van Arem. 2019. Analysing noisy driver physiology real-time using off-the-shelf sensors: Heart rate analysis software from the taking the fast lane project. _Journal of Open Research Software_ 7, 1 (10 2019). 
https://doi.org/10.5334/jors.241 * Visual Paradigm ([n.d.]) Visual Paradigm. [n.d.]. UML/Code Generation Software. https://www.visual-paradigm.com/features/code-engineering-tools/ * Viswanathan and Samuel (2016) Sunitha Edacheril Viswanathan and Philip Samuel. 2016. Automatic code generation using unified modeling language activity and sequence models. _IET Software_ 10, 6 (12 2016), 164–172. https://doi.org/10.1049/iet-sen.2015.0138 * Waher (2018) Peter Waher. 2018\. _Mastering Internet of Things: Design and create your own IoT application using Raspberry Pi 3_. Packt Publishing. 398 pages. * Wang et al. (2018) Pan Wang, Shidong Liu, Feng Ye, and Xuejiao Chen. 2018\. A Fog-based Architecture and Programming Model for IoT Applications in the Smart Grid. (4 2018). http://arxiv.org/abs/1804.01239 * WG (2018) CNCF Serverless WG. March 2018\. Cncf wg-serverless whitepaper v1. 0. https://gw.alipayobjects.com/os/basement_prod/24ec4498-71d4-4a60-b785-fa530456c65b.pdf * Yu et al. (2008) Jin Yu, Boualem Benatallah, Fabio Casati, and Florian Daniel. 2008. Understanding mashup development. _IEEE Internet Computing_ 12, 5 (2008), 44–52. https://doi.org/10.1109/MIC.2008.114 * Yushi et al. (2012) Lan Yushi, Jiang Fei, and Yu Hui. 2012. Study on application modes of military Internet of Things (MIOT). In _2012 IEEE International Conference on Computer Science and Automation Engineering (CSAE)_ , Vol. 3. IEEE, 630–634. https://doi.org/10.1109/CSAE.2012.6273031
11institutetext: Chair of Computer Architecture and Parallel Systems, Technical University of Munich, Garching, Germany
11email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
22institutetext: Huawei Munich Research Center, Huawei Technologies, Munich, Germany
22email: <EMAIL_ADDRESS>

# Online Memory Leak Detection in the Cloud-based Infrastructures

Anshul Jindal11 (0000-0002-7773-5342), Paul Staab22, Jorge Cardoso22 (0000-0001-8992-3466), Michael Gerndt11 (0000-0002-3210-5048), Vladimir Podolskiy11 (0000-0002-2775-3630)

###### Abstract

A memory leak in an application deployed on the cloud can affect the availability and reliability of the application. Therefore, identifying and ultimately resolving it quickly is highly important. However, in a production environment running on the cloud, memory leak detection is a challenge without knowledge of the application or its internal object allocation details. This paper addresses the challenge of online detection of memory leaks in cloud-based infrastructure, without any internal application knowledge, by introducing Precog, a novel machine-learning-based algorithm. The algorithm uses only one metric, the memory utilization of the system on which the application is deployed, to detect a memory leak. The algorithm's accuracy was tested on manually labeled memory utilization data from 60 virtual machines, provided by our industry partner Huawei Munich Research Center, and it was found that the proposed algorithm achieves an accuracy score of 85% with less than half a second of prediction time per virtual machine.

###### Keywords: memory leak, online memory leak detection, memory leak patterns, cloud, linear regression

## 1 Introduction

Cloud computing is widely used in industry for its capability to provide cheap and on-demand access to compute and storage resources. Physical server resources located at different data centers are split among the virtual machines (VMs) hosted on them and distributed to the users [5].
Users can then deploy their applications on these VMs with only the required resources. This allows efficient usage of the physical hardware and reduces the overall cost. However, with all the advantages of cloud computing comes the drawback that detecting a fault or an error in an application or in a VM is difficult due to the layered virtualization stack [1, 4]. A small fault somewhere in the system can impact the performance of the application. An application deployed on a VM usually requires different system resources, such as memory, CPU, and network, to complete a task. If an application mostly uses memory for processing its tasks, it is called a memory-intensive application [8]. It is the responsibility of the application to release system resources when they are no longer needed. When such an application fails to release memory resources, a memory leak occurs in the application [14]. Memory leaks can cause continuous blocking of the VM's resources, which may in turn result in slower response times or application failure. In the software industry, memory leaks are treated with utmost seriousness and priority, as their impact can be catastrophic to the whole system. In the development environment, these issues are rather easily detectable with the help of static source code analysis tools or by analyzing heap dumps. But in a production environment running on the cloud, memory leak detection is a challenge: a leak only gets detected through a runtime abnormality, abnormal usage of system resources, a crash of the application, or a restart of the VM. The resolution of such an issue then comes at the cost of compromising the availability and reliability of the application. Therefore, it is necessary to monitor every application for memory leaks and to have an automatic detection mechanism before a leak actually causes harm.
However, it is a challenge to detect a memory leak in an application running on a VM in the cloud without knowledge of the application's programming language, its source code, or low-level details such as allocation times of objects, object staleness, or object references [10]. Due to the low downtime requirements of applications running in the cloud, issues must be detected and resolved as quickly as possible. Therefore, this challenge is addressed in this paper by solely using the VM's memory utilization as the main metric and devising a novel algorithm called Precog to detect memory leaks. The main contributions of this paper are as follows:

* • Algorithm: We propose a novel online machine-learning-based algorithm, Precog, for accurate and efficient detection of memory leaks using solely the VM's memory utilization as the main metric.
* • Effectiveness: Our proposed algorithm achieves an accuracy score of 85% on the evaluated dataset provided by our industry partner and an accuracy score above 90% on the synthetic data generated by us.
* • Scalability: Precog's predict functionality scales linearly with the number of values and takes less than a second to predict on a time series with 100,000 values.
* • Reproducibility: Our code and the synthetically generated data are publicly available at: https://github.com/ansjin/memory_leak_detection.

## 2 Related Work

Memory leak detection has been studied over the years, and several solutions have been proposed. Sor et al. reviewed different memory leak detection approaches based on their implementation complexity, measured metrics, and intrusiveness, and proposed a classification taxonomy [11]. The taxonomy broadly divides detection algorithms into (1) online detection, (2) offline detection, and (3) hybrid detection. The online detection category uses either a staleness measure of the allocated objects or their growth analysis.
The offline detection category includes algorithms that make use of captured states (i.e., heap dumps), use a visualization mechanism to manually detect memory leaks, or use static source code analysis. Hybrid detection methods combine the features offered by online and offline methods to detect memory leaks. Our work falls into the online detection category; therefore, we restrict our discussion to approaches in that category only. Based on the staleness measure of allocated objects, Rudaf et al. proposed "LeakSpot" for detecting memory leaks in web applications [9]. It locates JavaScript allocation and reference sites that produce and retain increasing numbers of objects over time and uses staleness as a heuristic to identify memory leaks. Šor et al. propose a statistical metric called genCount for memory leak detection in Java applications [12]. It uses the number of different generations of the objects, grouped by their allocation sites, to abstract object staleness, an important attribute indicating a memory leak. Vilk et al. proposed "BLeak", a browser leak debugger for automatically debugging memory leaks in web applications [13]. It collects heap snapshots and analyzes them over time to identify and rank leaks. BLeak targets application source code when detecting memory leaks. Based on growth analysis of objects, Jump et al. propose "Cork", which finds the growth of heap data structures via a directed graph, the Type Points-From Graph (TPFG), a data structure that describes an object and its outgoing references [6]. To find memory leaks, the TPFG's growth is analyzed over time in terms of growing types such as lists. FindLeaks, proposed by Chen et al., tracks object creation and destruction; if more objects are created than destroyed per class, a memory leak is found [2].
Nick Mitchell and Gary Sevitsky proposed "LeakBot", which looks for heap size growth patterns in the heap graphs of Java applications to find memory leaks [7]. "LEAKPOINT", proposed by Clause et al., uses dynamic tainting to track heap memory pointers and further analyzes them to detect memory leaks [3]. Most of the proposed online detection algorithms focus on the programming language of the running application, on garbage collection strategies, or on the internals of the application based on object allocation, references, and deallocation. To the best of our knowledge, there is no previous work that focuses solely on detecting memory leaks using just the memory utilization data of the system on which the application is deployed. The work in this paper, therefore, focuses on the detection of a memory leak pattern irrespective of the programming language of the application, knowledge of the application's source code, or low-level details such as allocation times of objects, object staleness, or object references.

## 3 Methodology for Memory Leak Detection

In this section, we present the problem statement of memory leak detection and describe our proposed algorithm's workflow for solving it.

### 3.1 Problem Statement

Table 1 shows the symbols used in this paper.

Table 1: Symbols and definitions.

| Symbol | Interpretation |
|---|---|
| $t$ | a timestamp |
| $x_{t}$ | the percentage utilization of a resource (for example, memory or disk usage) of a virtual machine at time $t$ |
| $N$ | number of data points |
| $x=\{x_{1},x_{2},...,x_{N}\}$ | a VM's memory utilization observations from the cloud |
| $T$ | time series window length |
| $x_{t-T:t}$ | a sequence of observations $\{x_{t-T},x_{t-T+1},...,x_{t}\}$ from time $t-T$ to $t$ |
| $U$ | percentage memory utilization threshold, equal to 100 |
| $C$ | critical time |

We are given $x=\{x_{1},x_{2},...,x_{N}\}$, an $N\times 1$ dataset representing the memory utilization observations of the VM, where an observation $x_{t}\in\mathbb{R}$ is the percentage memory utilization of the virtual machine at time $t$. The objective of this work is to determine whether or not there is a memory leak on a VM, i.e., whether an observation $x_{t}$ at time $t$, following a trend, reaches the memory utilization threshold $U$ within the defined critical time $C$. Formally:

###### Problem 1 (Memory Leak Detection)

* • Given: a univariate dataset of $N$ time ticks, $x=\{x_{1},x_{2},...,x_{N}\}$, representing the memory utilization observations of the VM.
* • Output: an anomalous window for the VM consisting of a sequence of observations $x_{t-T:t}$ such that these observations, following a certain trend, will reach the memory utilization threshold $U$ at time $t+M$, where $M\leq C$.

###### Definition 1 (Critical Time)

The maximum time considered relevant for reporting a memory leak: if the trend line of the VM's memory utilization is projected, it will reach the threshold $U$ within this time.

### 3.2 Illustrative Example

Fig. 1 shows the example memory utilization of a memory-leaking VM with the marked anomalous window between $t_{k}$ and $t_{n}$. It shows that the memory utilization of the VM will reach the defined threshold ($U=100\%$) within the defined critical time $C$ by following a linearly increasing trend (shown by the trend line) from the observations in the anomalous window. Therefore, this VM is regarded as a memory-leaking VM.

Figure 1: Example memory utilization of a memory leaking VM with the marked anomalous window.

Our developed approach can be applied to multiple VMs as well. We also conducted an experiment to understand the memory usage patterns of applications with memory leaks. We found that if an application has a memory leak, the memory usage of the VM on which it runs usually increases steadily.
It continues to do so until all the available memory of the system is exhausted. This usually causes the application attempting to allocate the memory to terminate itself. Thus, a memory leak usually exhibits a linearly increasing or "sawtooth" memory utilization pattern.

### 3.3 Memory Leak Detection Algorithm: Precog

The Precog algorithm consists of two phases: offline training and online detection. Fig. 2 shows the overall workflow of the Precog algorithm.

Figure 2: Overall workflow of Precog algorithm.

Offline training: The procedure starts by collecting the memory utilization data of a VM and passing it to the Data Pre-processing module, where the dataset is first transformed by resampling the observations to the defined resampling time resolution, and the time series is then median-smoothed over the specified smoothing window. In the Trend Lines Fitting module, the change points $P=\{P_{1},P_{2},...,P_{k}\}$, where $k\leq n-1$, are first detected on the whole dataset. By default, two change points are added, one at the beginning and one at the end of the time series. Without change points, the algorithm would have to iterate over every data point, which is compute intensive; the change points allow the algorithm to jump directly from one change point to another, selecting all the points in between. The Trend Lines Fitting module selects a sequence of observations $x_{t-L:t}$ between two change points, one fixed ($P_{1}$) and one variable ($P_{r}$, where $r\leq k$), and fits a line to them using linear regression. The R-squared score, the size of the window (called the duration), the time to reach the threshold (called the exit time), and the slope of the line are calculated. This procedure is repeated, keeping the fixed change point the same and varying the other over all remaining change points.
Out of all the fitted lines, the best-fitted line, based on the largest duration and highest slope, is selected for the fixed change point. If this best-fitted line's time to reach the threshold falls below the critical time, its slope and duration are saved as a historic trend. The above procedure is then repeated with each of the other change points as the fixed change point. At the end of this whole procedure, we obtain, for each change point, a best-fitted trend if one exists. Among the captured trends, the maximum duration and the maximum slope are also calculated and saved. This training procedure can be conducted routinely, e.g., once per day or week. The method's pseudocode is shown in the Train function of Algorithm 1.

Online detection: In the online detection phase, a new set of observations $\{x_{k},x_{k+1},x_{k+2},...,x_{k+t-1},x_{k+t}\}$ from time $k$ to $t$, where $t-k\geq P_{min}$, belonging to a VM is, after pre-processing, fed into the Trend Lines Fitting module. There, the change points are detected. A sequence of observations $x_{t-L:t}$ between the last two change points, starting from the end of the time series, is selected, and a line is fitted to them using linear regression. The R-squared score, slope, duration, and exit time to reach the threshold of the fitted line are calculated. If its slope and duration are greater than the saved maximum counterparts, that window is marked anomalous. Otherwise, the values are compared against all the trends found in training, and if the fitted line's slope and duration are greater than those of any saved trend, the window is again marked as anomalous. This procedure is repeated by analyzing the observations between the last change point $P_{k}$ and each preceding change point in turn, until all the change points are used. This is done for the cases where the new data has a similar trend to the historic data but now with a higher slope and longer duration.
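The change-point detection step that both phases rely on (Z-scores of the absolute first-order differences, with the endpoints always included as change points) can be sketched in a few lines of NumPy. The function name and the flat-series handling are our own illustrative choices, not the published implementation.

```python
import numpy as np

def detect_change_points(x, threshold=3.0):
    """Indexes where the absolute first-order difference of the series
    deviates by more than `threshold` standard deviations (Z-score),
    plus the mandatory change points at the start and end of the series."""
    abs_diff = np.abs(np.diff(x))
    std = abs_diff.std()
    if std == 0:  # perfectly flat series: only the two endpoint change points
        inner = np.array([], dtype=int)
    else:
        z = (abs_diff - abs_diff.mean()) / std
        # +1 because diff() shifts indexes back by one position
        inner = np.flatnonzero(z > threshold) + 1
    return np.concatenate(([0], inner, [len(x) - 1]))

# A flat series with one sudden jump: the jump is the only inner change point
series = np.array([10.0] * 20 + [60.0] * 20)
print(detect_change_points(series))
```

In later stages the algorithm only fits lines between consecutive entries of this index list, which is what lets it skip over the bulk of the raw observations.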
The algorithm's pseudocode for the training and test methods is shown in Algorithm 1.

###### Definition 2 (Change Points) A set of time ticks which deviate highly from the normal pattern of the data. They are calculated by first taking the first-order difference of the input timeseries, then taking the absolute values of the differences and calculating their Z-scores. The indexes of the observations whose Z-scores are greater than the defined threshold (3 times the standard deviation) represent the change points. The method's pseudocode is shown in Algorithm 1's CPD function.

Input: input_Train_Ts, R2_score_min, input_Test_Ts, critical_time
Output: anomalous list a

Function CPD(x = input_Ts, threshold = 3):
    absDiffTs = first-order absolute difference of x
    zScores = z-scores of absDiffTs
    cpdIndexes = indexes of (zScores > threshold)
    return cpdIndexes                              // return the change-point indexes

Function TRAINING(x = input_Train_Ts, R2_score_min, C = critical_time):
    // Train on input_Train_Ts
    P = CPD(x)                                     // get change points
    p1 = 0
    while p1 <= length(P) do
        p2 = p1
        D_b, S_b, T_b = 0                          // best local trend's duration, slope, exit time
        while p2 <= length(P) do
            ts = x[P[p1] : P[p2]]                  // window between change points p1 and p2
            exit_time, r2, dur, slope = LinearRegression(ts)
            if r2 >= R2_score_min and dur >= D_b and slope >= S_b then
                Update(D_b, S_b, T_b)              // update best local values
            p2 = p2 + 1
        if T_b <= C then
            if D_b >= D_max and S_b >= S_max then
                Update(D_max, S_max)               // update global trend values
            saveTrend(D_b, S_b); save(D_max, S_max)
        p1 = p1 + 1

Function TEST(x = input_Test_Ts, C = critical_time):
    // Test on the new data to find an anomalous memory leak window
    a = [0]                                        // anomaly array of size length(input_Test_Ts)
    P = CPD(x)                                     // get change points
    len = length(P)                                // number of change point indexes
    i = 1
    while i <= len do
        ts = x[P[len - i] : P[len]]                // i is a loop variable
        exit_time, r2, dur, slope = LinearRegression(ts)
        D_max, S_max, Trends = get saved values
        if exit_time <= C and r2 >= R2_score_min then
            if slope >= S_max and dur >= D_max then
                a[P[len - i] : P[len]] = 1         // current trend exceeds the global saved one: anomalous
            else
                for each t in Trends:
                    if slope >= S_t and dur >= D_t then
                        a[P[len - i] : P[len]] = 1 // current trend exceeds one of the saved trends: anomalous
        i = i + 1
    return a                                       // list of 0s with anomalous indexes marked by 1

Algorithm 1: Precog Algorithm

## 4 Evaluation

We design experiments to answer the following questions:

* • Q1. Memory Leak Detection Accuracy: how accurate is Precog in the detection of memory leaks?
* • Q2. Scalability: how does the algorithm scale as the number of data points increases?
* • Q3. Parameter Sensitivity: how sensitive is the algorithm when the parameter values are changed?

We use the F1-Score (denoted F1) to evaluate the performance of the algorithms. Evaluation tests were executed on a machine with 4 physical cores (3.6 GHz Intel Core i7-4790 CPU) with hyperthreading enabled and 16 GB of RAM. These conditions are similar to a typical cloud VM. Note that the algorithm detects cases where there is an ongoing memory leak and assumes that there was previously no memory leak. For our experiments, the hyper-parameters are set as follows. The maximum threshold $U$ is set to 100 and the defined critical time $C$ is set to 7 days. The smoothing window size is 1 hour and the re-sampling time resolution is set to 5 minutes. Lastly, the minimum R-squared score $R2_{min}$ for a line to be recognized as a good fit is set to 0.75.
65% of the data was used for training and the rest for testing. We also show experiments on parameter sensitivity in this section.

### 4.1 Q1. Memory Leak Detection Accuracy

To demonstrate the effectiveness of the developed algorithm, we first generated timeseries synthetically. Table 2 shows the F1 score corresponding to each memory leak pattern along with the overall F1 score.

Table 2: Synthetically generated timeseries corresponding to each memory leak pattern and their accuracy scores.

Memory Leak Pattern | +ve cases | -ve cases | F1 Score | Recall | Precision
---|---|---|---|---|---
Linearly Increasing | 30 | 30 | 0.933 | 0.933 | 0.933
Linearly Increasing (with Noise) | 30 | 30 | 0.895 | 1.0 | 0.810
Sawtooth | 30 | 30 | 0.830 | 0.73 | 0.956
Overall | 90 | 90 | 0.9 | 0.9 | 0.91

Table 2 shows that Precog is able to reach an overall accuracy of 90%. In addition, to demonstrate the effectiveness of the developed algorithm on real cloud workloads, we evaluated Precog on a real cloud dataset provided by Huawei Munich, which consists of manually labeled memory leak data from 60 VMs spanning 5 days, with each time series containing one observation per minute. Out of these 60 VMs, 20 had a memory leak. Such a high proportion of VMs with memory leaks is due to the fact that applications with memory leaks were deliberately run on the infrastructure. The algorithm achieved an F1-Score of 0.857, with a recall of 0.75 and a precision of 1.0. The average prediction time per test set containing approximately 500 points is 0.32 seconds. Furthermore, we present the detailed results of the algorithm on the four selected cases shown in Figure 7: simple linearly increasing memory utilization, a sawtooth linearly increasing pattern, a linearly increasing pattern with no trends detected in the training data, and a linearly increasing pattern with a similar trend to the training data.
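The scores in Table 2 follow the standard definitions of precision, recall and F1; as a quick illustration, the raw counts below are hypothetical, chosen only to reproduce the first row of the table:

```python
def scores(tp, fp, fn):
    """Precision, recall and F1 from raw detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 28 of 30 leaking series detected, with 2 false alarms:
p, r, f1 = scores(tp=28, fp=2, fn=2)   # all three round to 0.933
```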
The figure also shows the change points, training trends and the detected anomalous memory leak window for each of the cases.

Figure 3: Linearly increasing.
Figure 4: Sawtooth linearly increasing.
Figure 5: Linearly increasing without trends detected in training data.
Figure 6: Linearly increasing with a similar trend to the training data and correctly not detected.
Figure 7: Algorithm results on 3 difficult cases having a memory leak (a-c) and one case not having a memory leak (d).

For the first case, shown in Fig. 3, memory utilization behaves normally until it suddenly starts to increase linearly. The algorithm detected one training trend and reported the complete test set as anomalous. The test set trend has a similar slope to the training trend but with a longer duration and higher memory usage, hence it is reported as anomalous. In the second case (Fig. 4), the trend shows the common memory leak sawtooth pattern, where memory utilization increases up to a certain point and then decreases (though not completely to zero), and then starts to increase again in a similar manner. The algorithm detected three training trends and reported most of the test set as anomalous. The test set follows a similar trend to the one captured during training but with higher memory utilization, hence it is reported. In the third case (Fig. 5), no appropriate training trend was detected in the complete training data, but the algorithm is able to detect an increasing memory utilization trend in the test dataset. In Fig. 6, the VM does not have a memory leak, but its memory utilization was steadily increasing, which, if observed without the historic data, looks like a memory leak pattern. However, the same trend is already observed in the historic data, and therefore it is a normal memory utilization pattern.
Precog, using the historic data to detect the training trends and then comparing them with the test data, correctly reports that trend as normal and hence does not flag the window as anomalous. Note also that if the new data's maximum exceeds the maximum of the training data with a similar trend, then it will be regarded as a memory leak.

### 4.2 Q2. Scalability

Next, we verify that our prediction method scales linearly. We repeatedly duplicate our dataset in time ticks and add Gaussian noise. Figure 9 shows that Precog's prediction method scales linearly in time ticks. Precog provides prediction results in under 1 second for data with 100,000 time ticks. The training method, shown in Figure 8, is quadratic in nature, but training only needs to be conducted once a week or month, and it can be done offline as well.

Figure 8: Training Time.
Figure 9: Prediction Time.
Figure 10: Precog's prediction method scales linearly.

### 4.3 Q3. Parameter Sensitivity

Precog requires tuning certain hyper-parameters, such as the minimum R2 score and the critical time, which are currently set manually based on expert knowledge. Figure 11 compares performance for different parameter values on the synthetically generated dataset. Our algorithm performs consistently well across values. Setting the minimum R2 score above 0.8 corresponds to a stricter fitting of the line, which is why the accuracy drops. On the other hand, our data mostly contains trend lines that would reach the threshold within 3 to 4 days, so setting the critical time too low (less than 3 days) would mean the trend lines never reach the threshold within that time frame, decreasing the accuracy. These experiments show that these parameters do play a role in the overall accuracy of the algorithm, but for most values the algorithm is insensitive to them. Work on determining these parameters automatically from the historic data is in progress and is out of the scope of this paper.
Figure 11: Insensitive to parameters: Precog performs consistently across parameter values.

## 5 Conclusion

Memory leak detection has been a research topic for more than a decade. Many approaches have been proposed to detect memory leaks, most of them looking at the internals of the application or at object allocation and deallocation. The Precog algorithm for memory leak detection presented in the current work is most relevant for cloud-based infrastructure, where the cloud administrator has no access to the source code and no knowledge of the internals of the deployed applications. The performance evaluation showed that Precog is able to achieve an F1-Score of 0.85 with less than half a second of prediction time on real workloads. This algorithm can also be useful in serverless computing: if a function leaks memory, then its successive invocations will compound the leak, resulting in a bigger memory leak on the underlying system, and Precog running on the underlying system can detect such a case. Prospective directions of future work include developing online learning-based approaches for detection, as well as using other metrics such as CPU, network and storage utilization to further enhance the accuracy of the algorithm and provide higher confidence in the detection results.

## ACKNOWLEDGEMENTS

This work was supported by the funding of the German Federal Ministry of Education and Research (BMBF) in the scope of the Software Campus program. The authors also thank the anonymous reviewers whose comments helped in improving this paper.
Quantized enveloping superalgebra of type $P$ Saber Ahmed, Dimitar Grantcharov, Nicolas Guay ###### Abstract We introduce a new quantized enveloping superalgebra $\mathfrak{U}_{q}\mathfrak{p}_{n}$ attached to the Lie superalgebra $\mathfrak{p}_{n}$ of type $P$. The superalgebra $\mathfrak{U}_{q}\mathfrak{p}_{n}$ is a quantization of a Lie bisuperalgebra structure on $\mathfrak{p}_{n}$ and we study some of its basic properties. We also introduce the periplectic $q$-Brauer algebra and prove that it is the centralizer of the $\mathfrak{U}_{q}\mathfrak{p}_{n}$-module structure on $\mathbb{C}(n|n)^{\otimes\ell}$. We end by proposing a definition for a new periplectic $q$-Schur superalgebra. ## Introduction The simple finite-dimensional Lie superalgebras over $\mathbb{C}$ were classified by V. Kac in [K]. The list in loc. cit. contains three classes of Lie superalgebras: basic, strange and Cartan-type. There are two types of strange Lie superalgebras, $P$ and $Q$, both of which are interesting due to the algebraic, geometric, and combinatorial properties of their representations. The study of the representations of type $P$ Lie superalgebras, which are also called periplectic in the literature, has attracted considerable attention in the last five years. Interesting results on the category $\mathcal{O}$, the associated periplectic Brauer algebras, and related theories have been established in [BDEA${}^{+}1$], [BDEA${}^{+}2$], [CP], [Co], [CE1], [CE2], [DHIN], [EAS1], [EAS2], [KT], [Ser], among others. The purpose of this paper is to introduce a quantum superalgebra of type $P$ via the FRT formalism [FRT]. A similar approach was used by G. Olshanski in [Ol] to define quantum superalgebras of type $Q$. We prove that our quantized enveloping superalgebra $\mathfrak{U}_{q}\mathfrak{p}_{n}$ quantizes a Lie bisuperalgebra structure on $\mathfrak{p}_{n}$, a periplectic Lie superalgebra.
The fake Casimir element used in [BDEA${}^{+}1$], [BDEA${}^{+}2$] is a solution of the classical Yang-Baxter equation and a quantum version of that fake Casimir element, denoted $S$, is a solution of the quantum Yang-Baxter equation which serves as an essential ingredient in the definition of $\mathfrak{U}_{q}\mathfrak{p}_{n}$. It follows that the tensor superspace $\mathbb{C}(n|n)^{\otimes\ell}$ is a representation of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ and the centralizer of the action of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ is a quantum version of the periplectic Brauer algebra. The classical setting corresponding to $q=1$ was studied in [Mo]. A similar result for type $Q$ Lie superalgebras was established in [Ol], where the centralizer of the action of the quantized enveloping superalgebra was proven to be the Hecke-Clifford superalgebra of the symmetric group $S_{\ell}$. Having at our disposal the periplectic $q$-Brauer algebra, we can introduce the periplectic $q$-Schur superalgebra in a natural way. We conjecture that these are mutual centralizers (that is, they satisfy a double-centralizer property). One immediate problem is to define $\mathfrak{U}_{q}\mathfrak{p}_{n}$ in terms of Drinfeld-Jimbo generators and relations and study its category $\mathcal{O}$. For type $Q$ Lie superalgebras, this problem was addressed in [GJKK]. Furthermore, in [GJKKK], a theory of crystal bases for the tensor representations of $\mathfrak{U}_{q}\mathfrak{g}$ was established. Unfortunately, it is unlikely that natural crystal bases exist in the type $P$ case due to the nonsemisimplicity of the category of tensor modules, contrary to what happens in type $Q$. Another natural direction is to construct, using also the FRT formalism, quantum affine superalgebras of type $P$. (See [ChGu] for the type $Q$ case.) Yangians of type $P$ and $Q$ appeared already many years ago in the work of M. Nazarov [Na1, Na2]. We hope to return to these questions in a future publication.
After setting up the notation and basic definitions in the first section, we introduce the “butterfly” Lie bisuperalgebra in Section 2 and define the quantized enveloping superalgebra of type $P$ in the following section. The main result of Section 3 is Theorem 3.3, which states that $S$, the $q$-deformation of the fake Casimir element, is a solution of the quantum Yang-Baxter equation. In Section 4, we prove that $\mathfrak{U}_{q}\mathfrak{p}_{n}$ is a quantization of the Lie bisuperalgebra structure from Section 2: see Theorem 4.3. The new periplectic $q$-Brauer algebra $\mathfrak{B}_{q,\ell}$ and the new periplectic $q$-Schur algebra are introduced in the last section, where we prove that $\mathfrak{B}_{q,\ell}$ can be defined equivalently either using generators and relations or as the centralizer of the action of $\mathfrak{U}_{q}(\mathfrak{p}_{n})$ on the tensor space: see Theorem 5.5. The proofs of our results require extensive computations: further details for all the computations can be found in [AGG]. Acknowledgements: The second named author is partly supported by the Simons Collaboration Grant 358245. He also would like to thank the Max Planck Institute in Bonn (where part of this work was completed) for the excellent working conditions. The third named author gratefully acknowledges the financial support of the Natural Sciences and Engineering Research Council of Canada provided via the Discovery Grant Program. We thank Patrick Conner, Robert Muth and Vidas Regelskis for help with certain computations in the preliminary stages of the present paper. We also thank Nicholas Davidson and Jonathan Kujawa for some useful discussions. ## 1 The Lie superalgebra of type $P$ Let $\mathbb{C}(n|n)$ be the vector superspace $\mathbb{C}^{n}\oplus\mathbb{C}^{n}$ spanned by the odd standard basis vectors $e_{-n},\ldots,e_{-1}$ and the even standard basis vectors $e_{1},\ldots,e_{n}$. 
Let $M_{n|n}(\mathbb{C})$ be the vector superspace consisting of matrices $A=(a_{ij})$ with $a_{ij}\in\mathbb{C}$ and with rows and columns labelled using the integers $-n,\ldots,-1,1,\ldots,n$, so $i,j\in\\{\pm 1,\pm 2,\ldots,\pm n\\}$. Set $p(i)=1\in\mathbb{Z}_{2}$ if $-n\leq i\leq-1$ and $p(i)=0\in\mathbb{Z}_{2}$ if $1\leq i\leq n$. The parity of the elementary matrix $E_{ij}$ is $p(i)+p(j)\;\mathrm{mod}\,2$. We denote by $\mathfrak{g}\mathfrak{l}_{n|n}$ the Lie superalgebra over $\mathbb{C}$ whose underlying vector space is $M_{n|n}(\mathbb{C})$ and which is equipped with the Lie superbracket $[E_{ij},E_{kl}]=\delta_{jk}E_{il}-(-1)^{(p(i)+p(j))(p(k)+p(l))}\delta_{il}E_{kj}.$ Recall that the supertranspose $(\cdot)^{\rm st}$ on $\mathfrak{g}\mathfrak{l}_{n|n}$ is given by the formula $(E_{ij})^{\rm st}=(-1)^{p(i)(p(j)+1)}E_{ji}$. The involution $\iota$ on $\mathfrak{g}\mathfrak{l}_{n|n}$ which will be relevant for this paper is given by $\iota(X)=-\pi(X^{\rm st})$ where $\pi:\mathfrak{g}\mathfrak{l}_{n|n}\longrightarrow\mathfrak{g}\mathfrak{l}_{n|n}$ is the linear map given by $\pi(E_{ij})=E_{-i,-j}$. ###### Definition 1.1. The Lie superalgebra $\mathfrak{p}_{n}$ of type $P$, which is also called the periplectic Lie superalgebra, is the subspace of fixed points of $\mathfrak{g}\mathfrak{l}_{n|n}$ under the involution $\iota$, that is, $\mathfrak{p}_{n}=\\{X\in\mathfrak{g}\mathfrak{l}_{n|n}\,|\,\iota(X)=X\\}$. If $X=\begin{pmatrix}A&B\\\ C&D\end{pmatrix}\in\mathfrak{p}_{n}$ with $A,B,C,D\in M_{n}(\mathbb{C})$, then $D=-A^{t}$, $B=B^{t}$ and $C=-C^{t}$ where $t$ denotes the transpose with respect to the diagonal $i=-j$.
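As a quick sanity check (this computation is ours, not from the text), the block description yields the dimension of $\mathfrak{p}_{n}$ and a concrete picture of the smallest case:

```latex
% A is a free n x n block: n^2 parameters; B = B^t contributes n(n+1)/2;
% C = -C^t contributes n(n-1)/2; D = -A^t is determined by A.  Hence
\dim \mathfrak{p}_n \;=\; n^2 + \tfrac{n(n+1)}{2} + \tfrac{n(n-1)}{2} \;=\; 2n^2 .
% For n = 1 the 1x1 conditions read d = -a, b arbitrary, c = -c = 0, so
\mathfrak{p}_1 \;=\; \left\{ \begin{pmatrix} a & b \\ 0 & -a \end{pmatrix}
  \,\middle|\, a, b \in \mathbb{C} \right\}, \qquad \dim \mathfrak{p}_1 = 2 .
```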
For convenience, we set $\mathsf{E}_{ij}=E_{ij}+\iota(E_{ij})=E_{ij}-(-1)^{p(i)(p(j)+1)}E_{-j,-i}.$ The superbracket on $\mathfrak{p}_{n}$ is given by $\displaystyle[\mathsf{E}_{ji},\mathsf{E}_{lk}]$ $\displaystyle=$ $\displaystyle\delta_{il}\mathsf{E}_{jk}-(-1)^{(p(i)+p(j))(p(k)+p(l))}\delta_{jk}\mathsf{E}_{li}$ (1) $\displaystyle-\delta_{i,-k}(-1)^{p(l)(p(k)+1)}\mathsf{E}_{j,-l}-\delta_{-j,l}(-1)^{p(j)(p(i)+1)}\mathsf{E}_{-i,k}$ A basis of $\mathfrak{p}_{n}$ is provided by all the matrices $\mathsf{E}_{ij}$ with indices $i$ and $j$ respecting one of the following inequalities: $1\leq|j|<|i|\leq n\text{ or }1\leq i=j\leq n\text{ or }-n\leq i=-j\leq-1.$ Note that $\mathsf{E}_{ij}=-(-1)^{p(i)(p(j)+1)}\mathsf{E}_{-j,-i}$ for all $i,j\in\\{\pm 1,\ldots,\pm n\\}$, hence $\mathsf{E}_{i,-i}=0$ when $1\leq i\leq n$. ## 2 Lie bisuperalgebra structure To construct a Lie bisuperalgebra structure on $\mathfrak{p}_{n}$, we define a Manin supertriple. We follow the idea in [Ol] for the case of the Lie superalgebra of type $Q$. Recall that a _Manin supertriple_ $(\mathfrak{a},\mathfrak{a}_{1},\mathfrak{a}_{2})$ consists of a Lie superalgebra $\mathfrak{a}$ equipped with an ad-invariant supersymmetric non-degenerate bilinear form $\mathsf{B}$ along with two Lie subsuperalgebras $\mathfrak{a}_{1},\mathfrak{a}_{2}$ of $\mathfrak{a}$ which are $\mathsf{B}$-isotropic transversal subspaces of $\mathfrak{a}$. Note that such a bilinear form $\mathsf{B}$ defines a non-degenerate pairing between $\mathfrak{a}_{1}$ and $\mathfrak{a}_{2}$ and a supercobracket $\delta:\mathfrak{a}_{1}\to\mathfrak{a}_{1}^{\otimes 2}$ via $\mathsf{B}^{\otimes 2}(\delta(X),Y_{1}\otimes Y_{2})=\mathsf{B}(X,[Y_{1},Y_{2}]),$ where $X\in\mathfrak{a}_{1}$, $Y_{1},Y_{2}\in\mathfrak{a}_{2}$. ###### Definition 2.1. The “butterfly” Lie superalgebra $\mathfrak{b}_{n}$ is the subspace of $\mathfrak{g}\mathfrak{l}_{n|n}$ spanned by $E_{ij}$ with $1\leq|i|<|j|\leq n$ and by $E_{ii}+E_{-i,-i},E_{i,-i}$ for $1\leq i\leq n$.
Note that $\mathfrak{g}\mathfrak{l}_{n|n}=\mathfrak{p}_{n}\oplus\mathfrak{b}_{n}$. It is well-known that the bilinear form $\mathsf{B}(\cdot,\cdot)$ on $\mathfrak{g}\mathfrak{l}_{n|n}$ given by the super-trace, $\mathsf{B}(A,B)=\mathrm{Str}(AB)$, is ad-invariant, supersymmetric and non-degenerate. One easily checks that $\mathsf{B}(X_{1},X_{2})=0$ if $X_{1},X_{2}\in\mathfrak{p}_{n}$ or if $X_{1},X_{2}\in\mathfrak{b}_{n}$. Hence we have the following result. ###### Proposition 2.2. $(\mathfrak{g}\mathfrak{l}_{n|n},\mathfrak{p}_{n},\mathfrak{b}_{n})$ is a Manin supertriple. ###### Remark 2.3. A similar Manin supertriple is given in [LeSh], §2.2. The quantum superalgebra that we will define in the next section will be a quantization of the Lie bisuperalgebra structure given by the Manin supertriple $(\mathfrak{g}\mathfrak{l}_{n|n},\mathfrak{p}_{n},\mathfrak{b}_{n})$. We extend the form $\mathsf{B}(\cdot,\cdot)$ to a non-degenerate pairing $\mathsf{B}^{\otimes 2}$ on $\mathfrak{g}\mathfrak{l}_{n|n}\otimes_{\mathbb{C}}\mathfrak{g}\mathfrak{l}_{n|n}$ by setting $\mathsf{B}^{\otimes 2}(X_{1}\otimes X_{2},Y_{1}\otimes Y_{2})=(-1)^{|X_{2}||Y_{1}|}\mathsf{B}(X_{1},Y_{1})\mathsf{B}(X_{2},Y_{2})$ for all homogeneous elements $X_{1},X_{2},Y_{1},Y_{2}\in\mathfrak{g}\mathfrak{l}_{n|n}$. The sign $(-1)^{|X_{2}||Y_{1}|}$ is necessary to make this form ad-invariant. Let $\mathsf{s}=\sum\limits_{1\leq|j|<|i|\leq n}(-1)^{p(j)}\mathsf{E}_{ij}\otimes E_{ji}+\frac{1}{2}\sum\limits_{1\leq i\leq n}\mathsf{E}_{ii}\otimes(E_{ii}+E_{-i,-i})+\frac{1}{2}\sum\limits_{1\leq i\leq n}\mathsf{E}_{-i,i}\otimes E_{i,-i}$ (2) A scalar multiple of this element is called the fake Casimir element in [BDEA${}^{+}1$]. ###### Proposition 2.4. $\mathsf{s}$ is a solution of the classical Yang-Baxter equation: $[\mathsf{s}_{12},\mathsf{s}_{13}]+[\mathsf{s}_{12},\mathsf{s}_{23}]+[\mathsf{s}_{13},\mathsf{s}_{23}]=0$. The proof of the above proposition follows from the lemma below, which should be well-known among experts.
###### Lemma 2.5. Let $\mathfrak{p}$ be a finite dimensional Lie superalgebra and suppose that $(\mathfrak{p},\mathfrak{p}_{1},\mathfrak{p}_{2})$ is a Manin supertriple with respect to a certain supersymmetric, invariant, bilinear form $\mathsf{B}(\cdot,\cdot)$. Let $\\{X_{i}\\}_{i\in I},\\{X^{\prime}_{i}\\}_{i\in I}$ be bases of $\mathfrak{p}_{1}$ and $\mathfrak{p}_{2}$, respectively, dual in the sense that $\mathsf{B}(X_{i}^{\prime},X_{j})=\delta_{ij}$. Set $s=\sum_{i\in I}X_{i}\otimes X_{i}^{\prime}$. Then $s$ is a solution of the classical Yang-Baxter equation. We next compute the supercobracket $\delta$ using the identity $\mathsf{B}(X,[Y_{1},Y_{2}])=\mathsf{B}^{\otimes 2}(\delta(X),Y_{1}\otimes Y_{2})$ for all $X\in\mathfrak{p}_{n}$ and all $Y_{1},Y_{2}\in\mathfrak{b}_{n}$. The formula for $\delta$ is (assuming, without loss of generality, that $|j|\leq|i|$): $\displaystyle\delta(\mathsf{E}_{ij})$ $\displaystyle=$ $\displaystyle\sum_{\stackrel{{\scriptstyle k=-n}}{{|j|<|k|<|i|}}}^{n}(-1)^{p(k)+1}\big{(}\mathsf{E}_{ik}\otimes\mathsf{E}_{kj}-(-1)^{(p(i)+p(k))(p(j)+p(k))}\mathsf{E}_{kj}\otimes\mathsf{E}_{ik}\big{)}$ (3) $\displaystyle-\frac{1}{2}((-1)^{p(i)}\mathsf{E}_{ii}-(-1)^{p(j)}\mathsf{E}_{jj})\otimes\mathsf{E}_{ij}+\frac{1}{2}\mathsf{E}_{ij}\otimes((-1)^{p(i)}\mathsf{E}_{ii}-(-1)^{p(j)}\mathsf{E}_{jj})$ $\displaystyle-\frac{\delta(i<0)}{2}\left(\mathsf{E}_{i,-i}\otimes\mathsf{E}_{-i,j}-(-1)^{p(j)}\mathsf{E}_{-i,j}\otimes\mathsf{E}_{i,-i}\right)$ $\displaystyle+\frac{\delta(j>0)}{2}\left((-1)^{p(i)}\mathsf{E}_{-j,j}\otimes\mathsf{E}_{i,-j}+\mathsf{E}_{i,-j}\otimes\mathsf{E}_{-j,j}\right)$ Finally, the supercobracket is related to the element $\mathsf{s}$. The following lemma is standard. ###### Lemma 2.6.
The supercobracket can also be expressed as $\delta(X)=[X\otimes 1+1\otimes X,\mathsf{s}].$ (4) ## 3 Quantized enveloping superalgebra In this section, we define the quantized enveloping superalgebra $\mathfrak{U}_{q}\mathfrak{p}_{n}$ following the approach used in [FRT] and [Ol]. We use a solution $S$ of the quantum Yang-Baxter equation such that $\mathsf{s}$ is the classical limit of $S$. For simplicity, denote by $\mathbb{C}_{q}$ the field $\mathbb{C}(q)$ of rational functions in the variable $q$ and set $\mathbb{C}_{q}(n|n)=\mathbb{C}_{q}\otimes_{\mathbb{C}}\mathbb{C}(n|n)$. ###### Definition 3.1. Let $S\in\mathrm{End}_{\mathbb{C}_{q}}(\mathbb{C}_{q}(n|n)^{\otimes 2})$ be given by the formula: $\displaystyle S$ $\displaystyle=$ $\displaystyle 1+\sum_{1\leq i\leq n}\big{(}(q-1)E_{ii}+(q^{-1}-1)E_{-i,-i}\big{)}\otimes(E_{ii}+E_{-i,-i})+\frac{q-q^{-1}}{2}\sum_{-n\leq i\leq-1}\mathsf{E}_{i,-i}\otimes E_{-i,i}$ (5) $\displaystyle+(q-q^{-1})\sum_{1\leq|j|<|i|\leq n}(-1)^{p(j)}\mathsf{E}_{ij}\otimes E_{ji}$ ###### Remark 3.2. If we define $S$ instead as an element of $\mathrm{End}_{\mathbb{C}[[\hbar]]}(\mathbb{C}_{\hbar}(n|n)^{\otimes 2})$ by the same formula as in Definition 3.1 but with $q,q^{-1}$ replaced by $e^{\hbar/2},e^{-\hbar/2}$ and $\mathbb{C}_{q}(n|n)^{\otimes 2}$ replaced by $\mathbb{C}_{\hbar}(n|n)^{\otimes 2}$, which equals $\mathbb{C}(n|n)^{\otimes 2}[[\hbar]]$, then $S=1+\hbar\mathsf{s}+O(\hbar^{2})$. ###### Theorem 3.3. $S$ is a solution of the quantum Yang-Baxter equation: $S_{12}S_{13}S_{23}=S_{23}S_{13}S_{12}$. ###### Proof. The proof consists of verifying long computations. To simplify them, we have used the following method. Set $f(q)=S_{12}S_{13}S_{23}-S_{23}S_{13}S_{12}$. The main idea is to consider $f(q)$ as a Laurent polynomial $\sum_{i=-3}^{3}f_{i}q^{i}$ with coefficients $f_{i}$ in $\mathrm{End}_{\mathbb{C}}\left(\mathbb{C}(n|n)^{\otimes 3}\right)$.
Then one shows the eight relations $f(a)=0$ for $a=\pm 1$, $f^{\prime}(b)=0$ for $b=\pm 1,\pm\sqrt{-1}$, and $f^{\prime\prime}(c)=0$ for $c=\pm 1$. (Actually, just seven of those are enough.) We can then deduce that $f(q)$ is a scalar multiple of $(q-q^{-1})^{3}$ and we show that the coefficient of $q^{3}$ in $f(q)$ is zero. Here are some more details. Let us set $C=\sum\limits_{1\leq i\leq n}(E_{ii}+E_{-i,-i})\otimes(E_{ii}+E_{-i,-i}).$ Then $S=1+(q-q^{-1})\mathsf{s}+\left(\frac{q+q^{-1}}{2}-1\right)C.$ For convenience, we introduce the following notation: $\displaystyle[\mathsf{s}C]={}$ $\displaystyle\mathsf{s}_{12}C_{13}+\mathsf{s}_{12}C_{23}+\mathsf{s}_{13}C_{23}+C_{12}\mathsf{s}_{13}+C_{12}\mathsf{s}_{23}+C_{13}\mathsf{s}_{23}$ $\displaystyle-\mathsf{s}_{23}C_{13}-\mathsf{s}_{23}C_{12}-\mathsf{s}_{13}C_{12}-C_{23}\mathsf{s}_{13}-C_{23}\mathsf{s}_{12}-C_{13}\mathsf{s}_{12}$ $\displaystyle[\mathsf{s}CC]={}$ $\displaystyle\mathsf{s}_{12}C_{13}C_{23}+C_{12}\mathsf{s}_{13}C_{23}+C_{12}C_{13}\mathsf{s}_{23}-\mathsf{s}_{23}C_{13}C_{12}-C_{23}\mathsf{s}_{13}C_{12}-C_{23}C_{13}\mathsf{s}_{12}$ $\displaystyle[\mathsf{s}\mathsf{s}C]={}$ $\displaystyle\mathsf{s}_{12}\mathsf{s}_{13}C_{23}+C_{12}\mathsf{s}_{13}\mathsf{s}_{23}+\mathsf{s}_{12}C_{13}\mathsf{s}_{23}-\mathsf{s}_{23}\mathsf{s}_{13}C_{12}-C_{23}\mathsf{s}_{13}\mathsf{s}_{12}-\mathsf{s}_{23}C_{13}\mathsf{s}_{12}$ The relations $f(a)=0$ for $a=\pm 1$, $f^{\prime}(b)=0$ for $b=\pm 1,\pm\sqrt{-1}$, and $f^{\prime\prime}(c)=0$ for $c=\pm 1$ follow from the next two lemmas, and checking these involves explicit computations. ###### Lemma 3.4. $[\mathsf{s}C]=2[\mathsf{s}CC]$ ###### Lemma 3.5. $[\mathsf{s}\mathsf{s}C]=0$ For instance, $f^{\prime}(-1)=0$ follows from $f^{\prime}(-1)=-4[\mathsf{s}C]+8[\mathsf{s}CC]$ and the two lemmas.
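The deduction that $f(q)$ must be a scalar multiple of $(q-q^{-1})^{3}$ can be spelled out as follows (our reading of the interpolation argument, not verbatim from the text):

```latex
% Set g(q) = (q - q^{-1})^3 = q^3 - 3q + 3q^{-1} - q^{-3}.  Since q - q^{-1} has
% simple zeros at q = \pm 1, the cube g vanishes there to order three, giving
%     g(\pm 1) = g'(\pm 1) = g''(\pm 1) = 0,
% while g'(q) = 3\,(q - q^{-1})^2 (1 + q^{-2}) also vanishes at q = \pm\sqrt{-1}
% because 1 + q^{-2} = 0 there.  The Laurent polynomials \sum_{i=-3}^{3} f_i q^i
% form a 7-dimensional space, and seven of these vanishing conditions already cut
% it down to the line spanned by g, so
f(q) \;=\; c \, (q - q^{-1})^3 \qquad \text{with } c = f_3 ,
% and the vanishing of the coefficient f_3 of q^3 then forces f = 0.
```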
Furthermore, $f^{\prime\prime}(-1)=-4[\mathsf{s}C]+8[\mathsf{s}CC]-16[\mathsf{s}\mathsf{s}C]+8([\mathsf{s}_{12},\mathsf{s}_{13}]+[\mathsf{s}_{12},\mathsf{s}_{23}]+[\mathsf{s}_{13},\mathsf{s}_{23}]).$ Therefore, $f^{\prime\prime}(-1)=0$ thanks to Lemmas 2.5, 3.4, and 3.5. Similarly, the two lemmas above imply that $f^{\prime}(\sqrt{-1})=2\sqrt{-1}[\mathsf{s}C]-4\sqrt{-1}[\mathsf{s}CC]-4[\mathsf{s}\mathsf{s}C]$ vanishes. The last step in the proof of Theorem 3.3 is to show the vanishing of the coefficient $f_{3}$ of $q^{3}$. We have $f_{3}=\mathsf{s}_{12}\mathsf{s}_{13}\mathsf{s}_{23}-\mathsf{s}_{23}\mathsf{s}_{13}\mathsf{s}_{12}+\frac{1}{4}[\mathsf{s}CC]+\frac{1}{2}[\mathsf{s}\mathsf{s}C]+\frac{1}{8}C_{12}C_{13}C_{23}-\frac{1}{8}C_{23}C_{13}C_{12},$ which simplifies to $\mathsf{s}_{12}\mathsf{s}_{13}\mathsf{s}_{23}-\mathsf{s}_{23}\mathsf{s}_{13}\mathsf{s}_{12}+\frac{1}{4}[\mathsf{s}CC]$ (6) thanks to Lemma 3.5 and $C_{12}C_{13}C_{23}-C_{23}C_{13}C_{12}=0$. Verifying that (6) vanishes follows by direct and extensive computations. ∎ With the aid of $S$, we can now define the main object of interest in this paper. ###### Definition 3.6. The _quantized enveloping superalgebra of $\mathfrak{p}_{n}$_ is the $\mathbb{Z}_{2}$-graded $\mathbb{C}_{q}$-algebra $\mathfrak{U}_{q}\mathfrak{p}_{n}$ generated by elements $t_{ij},t_{ii}^{-1}$ with $1\leq|i|\leq|j|\leq n$ and $i,j\in\\{\pm 1,\ldots,\pm n\\}$ which satisfy the following relations: $t_{ii}=t_{-i,-i},\,\,t_{-i,i}=0\text{ if }i>0,\,\,t_{ij}=0\text{ if }|i|>|j|;$ (7) $T_{12}T_{13}S_{23}=S_{23}T_{13}T_{12}$ (8) where $T=\sum_{|i|\leq|j|}t_{ij}\otimes_{\mathbb{C}}E_{ij}$ and the last equality holds in $\mathfrak{U}_{q}\mathfrak{p}_{n}\otimes_{\mathbb{C}(q)}\mathrm{End}_{\mathbb{C}(q)}(\mathbb{C}_{q}(n|n))^{\otimes 2}$. The $\mathbb{Z}_{2}$-degree of $t_{ij}$ is $p(i)+p(j)$.
The algebra $\mathfrak{U}_{q}\mathfrak{p}_{n}$ is a Hopf algebra with antipode given by $T\mapsto T^{-1}$ and with coproduct given by $\Delta(t_{ij})=\sum_{k=-n}^{n}(-1)^{(p(i)+p(k))(p(k)+p(j))}t_{ik}\otimes t_{kj}.$ ## 4 Limit when $q\to 1$ and quantization We want to explain how $\mathfrak{U}\mathfrak{p}_{n}$ can be viewed as the limit when $q\to 1$ of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ and how the co-Poisson Hopf algebra structure on $\mathfrak{U}\mathfrak{p}_{n}$, which is inherited from the cobracket $\delta$ on $\mathfrak{p}_{n}$, can be recovered from the coproduct on $\mathfrak{U}_{q}\mathfrak{p}_{n}$. Set $\tau_{ij}=\frac{t_{ij}}{q-q^{-1}}$ if $i\neq j$ and set $\tau_{ii}=\frac{t_{ii}-1}{q-1}$. Let $\mathcal{A}$ be the localization of $\mathbb{C}[q,q^{-1}]$ at the ideal generated by $q-1$. Let $\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}$ be the $\mathcal{A}$-subalgebra of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ generated by the $\tau_{ij}$ with $1\leq|i|\leq|j|\leq n$. ###### Theorem 4.1. The map $\psi:\mathfrak{U}\mathfrak{p}_{n}\longrightarrow\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}/(q-1)\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}$ given by $\psi(\mathsf{E}_{ji})=(-1)^{p(j)}\overline{\tau}_{ij}$ for $|i|<|j|$ or $1\leq i=j\leq n$, and $\displaystyle\psi(\mathsf{E}_{-i,i})=-2\overline{\tau}_{i,-i}$ for $1\leq i\leq n$, is an associative $\mathbb{C}$-superalgebra isomorphism. ###### Proof. First, we need to write down explicitly the defining relation (8). 
Comparing coefficients of $E_{ij}\otimes E_{kl}$ on both sides of relation (8), we obtain: $\begin{split}&(-1)^{(p(i)+p(j))(p(k)+p(l))}t_{ij}t_{kl}-t_{kl}t_{ij}+\theta(i,j,k)\big{(}\delta_{|j|<|l|}-\delta_{|k|<|i|}\big{)}\epsilon t_{il}t_{kj}\\\ &\qquad+(-1)^{(p(i)+p(j))(p(k)+p(l))}\big{(}\delta_{j>0}(q-1)+\delta_{j<0}(q^{-1}-1)\big{)}\big{(}\delta_{jl}+\delta_{j,-l}\big{)}t_{ij}t_{kl}\\\ &\qquad\qquad-\big{(}\delta_{i>0}(q-1)+\delta_{i<0}(q^{-1}-1)\big{)}\big{(}\delta_{ik}+\delta_{i,-k}\big{)}t_{kl}t_{ij}\\\ &\qquad\qquad\qquad+\theta(i,j,k)\delta_{j>0}\delta_{j,-l}\epsilon t_{i,-j}t_{k,-l}-(-1)^{p(j)}\delta_{i<0}\delta_{i,-k}\epsilon t_{-k,l}t_{-i,j}\\\ &\qquad+(-1)^{p(j)(p(i)+1)}\epsilon\sum_{-n\leq a\leq n}\big{(}(-1)^{p(i)p(a)}\theta(i,j,k)\delta_{j,-l}\delta_{|a|<|l|}t_{i,-a}t_{ka}+(-1)^{p(-j)p(a)}\delta_{i,-k}\delta_{|k|<|a|}t_{al}t_{-a,j}\big{)}\\\ &\qquad=0\end{split}$ (9) In the identity above, we set $\theta(i,j,k)=\operatorname{sgn}(\operatorname{sgn}(i)+\operatorname{sgn}(j)+\operatorname{sgn}(k))\text{ and }\epsilon=q-q^{-1}.$ In order to check that $\psi([\mathsf{E}_{ji},\mathsf{E}_{kl}])=[\psi(\mathsf{E}_{ji}),\psi(\mathsf{E}_{kl})]$, we proceed as follows. We apply $\psi$ on both sides of (1). To show that the resulting right hand side coincides with $[\psi(\mathsf{E}_{ji}),\psi(\mathsf{E}_{kl})]$, we use (9) and pass to the quotient $\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}/(q-1)\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}$. This is done via a long case-by-case verification for $i,j,k,l$. From the way $\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}$ is defined, it follows that $\psi$ is surjective. It remains to prove that it is injective. Since $S$ is a solution of the quantum Yang-Baxter equation, the space $\mathbb{C}_{q}(n|n)$ is a representation of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ via the assignment $T\mapsto S$, hence also of $\mathcal{U}_{\mathcal{A}}\mathfrak{p}_{n}$ by restriction. 
More explicitly, $\tau_{ij}\mapsto(-1)^{p(i)}\mathsf{E}_{ji}\text{ if $|i|<|j|$},\text{ and }\tau_{i,-i}\mapsto E_{-i,i},\,\tau_{ii}\mapsto(E_{ii}-q^{-1}E_{-i,-i})\text{ if $1\leq i\leq n$.}$ Set $\mathbb{C}_{\mathcal{A}}(n|n)=\mathcal{A}\otimes_{\mathbb{C}}\mathbb{C}(n|n)$. The space $\mathbb{C}_{\mathcal{A}}(n|n)$ is a $\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}$-submodule and so are all the tensor powers $\mathbb{C}_{\mathcal{A}}(n|n)^{\otimes\ell}$. We thus have a superalgebra homomorphism $\phi_{\ell}:\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}\longrightarrow\mathrm{End}_{\mathcal{A}}(\mathbb{C}_{\mathcal{A}}(n|n)^{\otimes\ell})$ for each $\ell\geq 1$. Let $\pi_{\ell}$ be the quotient homomorphism $\mathrm{End}_{\mathcal{A}}(\mathbb{C}_{\mathcal{A}}(n|n)^{\otimes\ell})\longrightarrow\mathrm{End}_{\mathcal{A}}(\mathbb{C}_{\mathcal{A}}(n|n)^{\otimes\ell})/(q-1)\mathrm{End}_{\mathcal{A}}(\mathbb{C}_{\mathcal{A}}(n|n)^{\otimes\ell})\cong\mathrm{End}_{\mathbb{C}}(\mathbb{C}(n|n)^{\otimes\ell}).$ The composite $\pi_{\ell}\circ\phi_{\ell}$ descends to a homomorphism $\overline{\pi_{\ell}\circ\phi_{\ell}}$ from $\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}/(q-1)\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}$ to $\mathrm{End}_{\mathbb{C}}(\mathbb{C}(n|n)^{\otimes\ell})$. The composite $\overline{\pi_{\ell}\circ\phi_{\ell}}\circ\psi$ is the superalgebra homomorphism $\mathfrak{U}\mathfrak{p}_{n}\longrightarrow\mathrm{End}_{\mathbb{C}}(\mathbb{C}(n|n)^{\otimes\ell})$ induced by the natural $\mathfrak{p}_{n}$-module structure on $\mathbb{C}(n|n)^{\otimes\ell}$ twisted by the automorphism of $\mathfrak{p}_{n}$ given by $\mathsf{E}_{ij}\mapsto(-1)^{p(i)+p(j)}\mathsf{E}_{ij}$. We can combine the homomorphisms $\overline{\pi_{\ell}\circ\phi_{\ell}}\circ\psi$ for all $\ell\geq 1$ to obtain a homomorphism $\mathfrak{U}\mathfrak{p}_{n}\longrightarrow\prod_{\ell=1}^{\infty}\mathrm{End}_{\mathbb{C}}(\mathbb{C}(n|n)^{\otimes\ell})$. 
This map is injective since $\mathbb{C}(n|n)$ is a faithful representation of $\mathfrak{p}_{n}$. It follows that $\psi$ is injective as well. ∎ We next show that a PBW-type theorem holds for $\mathfrak{U}_{q}\mathfrak{p}_{n}$. For this, we first introduce a total order $\prec$ on the set of generators $t_{ij}$, $1\leq|i|\leq|j|\leq n$, of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ as follows. We declare that $t_{ij}\prec t_{kl}$ if * (i) $|i|>|k|$, or * (ii) $|i|=|k|$ and $|j|>|l|$, or * (iii) $i=k$ and $j=-l>0$, or * (iv) $i=-k>0$ and $|j|=|l|$. This order leads to a total lexicographic order on the set of words formed by the generators $t_{ij}$. Namely, if $A=A_{1}\cdots A_{r}$ and $B=B_{1}\cdots B_{s}$ are two such words in the sense that each $A_{k}$ for $1\leq k\leq r$ and each $B_{l}$ for $1\leq l\leq s$ is equal to some generator $t_{ij}$, then $A\prec B$ if $r<s$ or if $r=s$ and there is a $p$ such that $A_{k}=B_{k}$ for $1\leq k\leq p-1$ and $A_{p}\prec B_{p}$. Note that, in this order, the generators $t_{ij}$ with $i=j$ or $i=-j$ are not grouped together. We call a generator of the form $t_{ii}$ _diagonal_. Also, a word $A_{1}^{k_{1}}\cdots A_{r}^{k_{r}}$ in the generators $t_{ij}$ is called a _reduced monomial_ if $A_{1}\prec\cdots\prec A_{r}$, and $k_{i}\in{\mathbb{Z}}_{>0}$ if $A_{i}$ is not diagonal, $k_{i}\in{\mathbb{Z}}\setminus\\{0\\}$ if $A_{i}$ is diagonal, and $k_{i}=1$ if $A_{i}$ is odd. ###### Theorem 4.2. The reduced monomials form a basis of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ over $\mathbb{C}_{q}$. ###### Proof. We first show that the set of reduced monomials spans $\mathfrak{U}_{q}\mathfrak{p}_{n}$. Note that it is enough to show that all quadratic monomials are in the span of this set. Let $t_{ij}t_{kl}$ be a quadratic monomial which is not reduced. We have that either $t_{kl}\prec t_{ij}$, or $i=k$, $j=l$ and $t_{ij}$ is odd. In the latter case, as shown in Case (21a), equation (9) implies $t_{ij}^{2}=0$. 
In the former case, we proceed with a case-by-case reasoning considering seven mutually exclusive subcases: * (a) $|i|<|k|$ and $|j|\neq|l|$. * (b) $|i|<|k|$ and $j=l$. * (c) $|i|<|k|$ and $j=-l$. * (d) $|i|=|k|$ and $|j|<|l|$. * (e) $i=k$ and $j=-l<0$. * (f) $i=-k<0$ and $j=l$. * (g) $i=-k<0$ and $j=-l$. Let us consider subcase (c) in some detail; the remaining subcases are handled in a similar manner. In subcase (c), (9) simplifies to: $\begin{split}&(-1)^{(p(i)+p(j))(p(k)+p(-j))}\big{(}\delta_{j>0}q+\delta_{j<0}q^{-1}\big{)}t_{ij}t_{k,-j}-t_{k,-j}t_{ij}+\theta(i,j,k)\delta_{j>0}\epsilon t_{i,-j}t_{kj}\\\ &\qquad+(-1)^{p(j)(p(i)+1)}\epsilon\sum_{-n\leq a\leq n}(-1)^{p(i)p(a)}\theta(i,j,k)\delta_{|a|<|j|}t_{i,-a}t_{ka}=0\end{split}$ (10) Let us assume that $|l|=|j|=1$. Then the previous equation reduces to $(-1)^{(p(i)+p(j))(p(k)+p(-j))}\big{(}\delta_{j>0}q+\delta_{j<0}q^{-1}\big{)}t_{ij}t_{k,-j}+\theta(i,j,k)\delta_{j>0}\epsilon t_{i,-j}t_{kj}=t_{k,-j}t_{ij}$ Replacing $j$ by $-j$ leads to the equation $(-1)^{(p(i)+p(-j))(p(k)+p(j))}\big{(}\delta_{j<0}q+\delta_{j>0}q^{-1}\big{)}t_{i,-j}t_{kj}+\theta(i,-j,k)\delta_{j<0}\epsilon t_{ij}t_{k,-j}=t_{kj}t_{i,-j}$ The monomials $t_{k,-j}t_{ij}$ and $t_{kj}t_{i,-j}$ are properly ordered and the previous two equations can be solved to express $t_{ij}t_{k,-j}$ and $t_{i,-j}t_{kj}$ in terms of the former. We then proceed by induction on $|j|$ and show that $t_{ij}t_{k,-j}$ can be expressed as a linear combination of properly ordered monomials. The base case $|j|=1$ was completed above. We use again (10) and the corresponding equation obtained after switching $j$ and $-j$. In these two equations, by induction, the monomials $t_{i,-a}t_{ka}$ with $|a|<|j|$ can be expressed as linear combinations of properly ordered monomials. Moreover, $t_{k,-j}t_{ij}$ and $t_{kj}t_{i,-j}$ are already correctly ordered. 
As in the case $|l|=|j|=1$, we can then solve those two equations to express $t_{ij}t_{k,-j}$ and $t_{i,-j}t_{kj}$ in terms of properly ordered monomials. It remains to show that the reduced monomials form a linearly independent set. We follow the approach in [Ol]. Let $M_{1},\ldots,M_{r}$ be pairwise distinct reduced monomials in the generators $\tau_{ij}$ such that $a_{1}M_{1}+\cdots+a_{r}M_{r}=0$ for some $a_{1},\ldots,a_{r}\in\mathbb{C}_{q}$. Without loss of generality, we can assume that $a_{i}\in\mathcal{A}$. It is sufficient to prove that $a_{1},\ldots,a_{r}\in\mathcal{A}$ implies $a_{1},\ldots,a_{r}\in(q-1)\mathcal{A}$: iterating this implication yields $a_{i}\in\bigcap_{k\geq 1}(q-1)^{k}\mathcal{A}=\\{0\\}$. Recall that there is a surjective homomorphism $\theta:\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}\to\mathfrak{U}\mathfrak{p}_{n}$. More precisely, $\theta$ is the composite of the projection $\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}\to\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}/(q-1)\mathfrak{U}_{\mathcal{A}}\mathfrak{p}_{n}$ with the isomorphism $\psi^{-1}$ from Theorem 4.1. Let $\overline{M}_{i}=\theta(M_{i})$ and denote by $\bar{a}_{i}$ the image of $a_{i}$ in $\mathcal{A}/(q-1)\mathcal{A}$. Since $M_{1},\ldots,M_{r}$ are pairwise distinct reduced monomials, $\overline{M}_{1},\ldots,\overline{M}_{r}$ are pairwise distinct monomials in $\mathfrak{U}\mathfrak{p}_{n}$. Then using that $\bar{a}_{1}\overline{M}_{1}+\cdots+\bar{a}_{r}\overline{M}_{r}=\theta(a_{1}M_{1}+\cdots+a_{r}M_{r})=0$ and the (classical) PBW Theorem for $\mathfrak{U}\mathfrak{p}_{n}$, we obtain $\bar{a}_{1}=\ldots=\bar{a}_{r}=0$. Hence $a_{1},\ldots,a_{r}\in(q-1)\mathcal{A}$ as needed. ∎ As mentioned in Remark 3.2, we may replace $\mathbb{C}(q)$ by $\mathbb{C}((\hbar))$, $q$ by $e^{\hbar/2}$, and $\mathcal{A}$ by $\mathbb{C}[[\hbar]]$, and an analog of Theorem 4.1 would hold true, implying that $\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}$ is a flat deformation of $\mathfrak{U}\mathfrak{p}_{n}$. 
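It is worth noting that clauses (i)–(iv) defining $\prec$ before Theorem 4.2 really do produce a total order on the generators, and this can be verified mechanically. The sketch below (Python; the encoding of the generator set is ours, with $t_{-i,i}$ omitted and $t_{-i,-i}$ identified with $t_{ii}$ as in (7)) checks irreflexivity, trichotomy and transitivity for $n=2$:

```python
from itertools import product, permutations

n = 2
nz = [i for i in range(-n, n + 1) if i != 0]

# Generators t_{ij}: pairs (i, j) with 1 <= |i| <= |j| <= n, dropping
# t_{-i,i} (zero for i > 0) and taking the representative t_{ii} = t_{-i,-i}.
gens = [(i, j) for i, j in product(nz, nz)
        if abs(i) <= abs(j)
        and not (i < 0 and j == -i)     # t_{-i,i} = 0 for i > 0
        and not (i == j and i < 0)]     # identify t_{-i,-i} with t_{ii}

def prec(a, b):
    """t_{ij} precedes t_{kl} according to clauses (i)-(iv)."""
    (i, j), (k, l) = a, b
    return (abs(i) > abs(k)
            or (abs(i) == abs(k) and abs(j) > abs(l))
            or (i == k and j == -l and j > 0)
            or (i == -k and i > 0 and abs(j) == abs(l)))

irrefl = all(not prec(a, a) for a in gens)
total = all(prec(a, b) != prec(b, a) for a in gens for b in gens if a != b)
transitive = all(prec(a, c) for a, b, c in permutations(gens, 3)
                 if prec(a, b) and prec(b, c))
print(irrefl, total, transitive)  # True True True
```

The same script with larger `n` (at the cost of a cubic number of triples) confirms the order axioms there as well.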
Moreover, the next theorem states that $\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}$ is a quantization of the co-Poisson Hopf superalgebra structure on $\mathfrak{U}\mathfrak{p}_{n}$ induced by the Lie bisuperalgebra structure defined in Section 2. To be precise, the cobracket $\delta$ on $\mathfrak{p}_{n}$ extends to a Poisson cobracket on $\mathfrak{U}\mathfrak{p}_{n}$, which we also denote by $\delta$. Let $(\cdot)^{\circ}$ be the involution on $(\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n})^{\otimes 2}$ given by $A_{1}\otimes A_{2}\mapsto(-1)^{p(A_{1})p(A_{2})}A_{2}\otimes A_{1}$ where $p(A_{i})$ is the $\mathbb{Z}/2\mathbb{Z}$-degree of $A_{i}$, $i=1,2$. For convenience, for $A\in\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}$, we denote by $\overline{A}$ both the image of $A$ in $\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}/\hbar\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}$ and the corresponding element in $\mathfrak{U}\mathfrak{p}_{n}$ via the isomorphism of the $\hbar$-analogue of Theorem 4.1. Similarly, we identify the corresponding elements in $\left(\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}/\hbar\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}\right)\otimes\left(\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}/\hbar\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}\right)$ and $\mathfrak{U}\mathfrak{p}_{n}\otimes\mathfrak{U}\mathfrak{p}_{n}$. ###### Theorem 4.3. If $A\in\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}$, we have $\overline{\hbar^{-1}(\Delta(A)-\Delta(A)^{\circ})}=\delta(\overline{A})$. Hence, $\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}$ is a quantization of the co-Poisson Hopf superalgebra structure on $\mathfrak{U}\mathfrak{p}_{n}$. ###### Proof. We show that the identity above holds for the generators $\tau_{ij}$ of $\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}$, so let $A=\tau_{ij}$. We first note that the identity is trivially satisfied for $i=j$, as both sides are zero. 
Assume henceforth that $i\neq j$. Then: $\displaystyle\hbar^{-1}\left(\Delta(\tau_{ij})-\Delta(\tau_{ij})^{\circ}\right)={}$ $\displaystyle\left(\frac{e^{\hbar/2}-e^{-\hbar/2}}{\hbar}\right)\sum_{\begin{subarray}{c}k=-n\\\ |i|<|k|<|j|\end{subarray}}^{n}\left((-1)^{(p(i)+p(k))(p(j)+p(k))}\tau_{ik}\otimes\tau_{kj}-\tau_{kj}\otimes\tau_{ik}\right)$ $\displaystyle+\left(\frac{e^{\hbar/2}-1}{\hbar}\right)\left(\tau_{ii}\otimes\tau_{ij}-\tau_{ij}\otimes\tau_{ii}+\tau_{ij}\otimes\tau_{jj}-\tau_{jj}\otimes\tau_{ij}\right)$ $\displaystyle-\left(\frac{e^{\hbar/2}-e^{-\hbar/2}}{\hbar}\right)\delta_{i>0}\left((-1)^{p(j)}\tau_{i,-i}\otimes\tau_{-i,j}+\tau_{-i,j}\otimes\tau_{i,-i}\right)$ $\displaystyle+\left(\frac{e^{\hbar/2}-e^{-\hbar/2}}{\hbar}\right)\delta_{j<0}\left((-1)^{p(i)}\tau_{i,-j}\otimes\tau_{-j,j}-\tau_{-j,j}\otimes\tau_{i,-j}\right)$ Thus, since $\frac{e^{\hbar/2}-e^{-\hbar/2}}{\hbar}\equiv 1$ and $\frac{e^{\hbar/2}-1}{\hbar}\equiv\frac{1}{2}$ modulo $\hbar$, in $\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}/\hbar\mathfrak{U}_{\mathbb{C}[[\hbar]]}\mathfrak{p}_{n}$ we have: $\displaystyle\overline{\hbar^{-1}\left(\Delta(\tau_{ij})-\Delta(\tau_{ij})^{\circ}\right)}={}$ $\displaystyle\sum_{\begin{subarray}{c}k=-n\\\ |i|<|k|<|j|\end{subarray}}^{n}\left((-1)^{(p(i)+p(k))(p(j)+p(k))}\overline{\tau}_{ik}\otimes\overline{\tau}_{kj}-\overline{\tau}_{kj}\otimes\overline{\tau}_{ik}\right)$ $\displaystyle+\frac{1}{2}\left(\overline{\tau}_{ii}\otimes\overline{\tau}_{ij}-\overline{\tau}_{ij}\otimes\overline{\tau}_{ii}+\overline{\tau}_{ij}\otimes\overline{\tau}_{jj}-\overline{\tau}_{jj}\otimes\overline{\tau}_{ij}\right)$ $\displaystyle-\delta_{i>0}\left(\overline{\tau}_{-i,j}\otimes\overline{\tau}_{i,-i}+(-1)^{p(j)}\overline{\tau}_{i,-i}\otimes\overline{\tau}_{-i,j}\right)$ $\displaystyle+\delta_{j<0}\left((-1)^{p(i)}\overline{\tau}_{i,-j}\otimes\overline{\tau}_{-j,j}-\overline{\tau}_{-j,j}\otimes\overline{\tau}_{i,-j}\right)$ We next compute $\delta(\overline{\tau}_{ij})$ using the isomorphism of Theorem 4.1 and (2). 
$\displaystyle\delta(\overline{\tau}_{ij})={}$ $\displaystyle(-1)^{p(j)}\delta(\mathsf{E}_{ji})$ $\displaystyle={}$ $\displaystyle\sum_{\begin{subarray}{c}k=-n\\\ |i|<|k|<|j|\end{subarray}}^{n}(-1)^{p(j)+p(k)}\left((-1)^{(p(i)+p(k))(p(j)+p(k))}\mathsf{E}_{ki}\otimes\mathsf{E}_{jk}-\mathsf{E}_{jk}\otimes\mathsf{E}_{ki}\right)$ $\displaystyle{}-\frac{1}{2}(-1)^{p(j)}\left((-1)^{p(j)}\mathsf{E}_{jj}-(-1)^{p(i)}\mathsf{E}_{ii}\right)\otimes\mathsf{E}_{ji}+\frac{1}{2}(-1)^{p(j)}\mathsf{E}_{ji}\otimes\left((-1)^{p(j)}\mathsf{E}_{jj}-(-1)^{p(i)}\mathsf{E}_{ii}\right)$ $\displaystyle{}-\frac{\delta_{j<0}}{2}(-1)^{p(j)}\mathsf{E}_{j,-j}\otimes\mathsf{E}_{-j,i}+\frac{\delta_{i>0}}{2}\mathsf{E}_{-i,i}\otimes\mathsf{E}_{j,-i}$ $\displaystyle{}+\frac{\delta_{j<0}}{2}(-1)^{p(i)+p(j)}\mathsf{E}_{-j,i}\otimes\mathsf{E}_{j,-j}+\frac{\delta_{i>0}}{2}(-1)^{p(j)}\mathsf{E}_{j,-i}\otimes\mathsf{E}_{-i,i}$ $\displaystyle={}$ $\displaystyle\overline{\hbar^{-1}\left(\Delta(\tau_{ij})-\Delta(\tau_{ij})^{\circ}\right)}$ as needed. ∎ ## 5 Periplectic $q$-Brauer algebra In [Mo], D. Moon identified the centralizer of the action of $\mathfrak{p}_{n}$ on the tensor space $\mathbb{C}(n|n)^{\otimes l}$. This centralizer is called the periplectic Brauer algebra in the literature: see [Co, CP, CE1, CE2]. We have a representation of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ on $\mathbb{C}_{q}(n|n)$ via the assignment $T\mapsto S$, and thus we also have a representation on each tensor power $\mathbb{C}_{q}(n|n)^{\otimes l}$. In this section, we identify the centralizer of the action of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ on $\mathbb{C}_{q}(n|n)^{\otimes l}$ and call it the periplectic $q$-Brauer algebra. For the quantum group of type $Q$, this was done in [Ol] and the centralizer of its action is called the Hecke-Clifford superalgebra. 
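Before moving to the definition of the periplectic $q$-Brauer algebra, we record a standard observation about the Hecke-type relation $(\mathsf{t}_{i}-q)(\mathsf{t}_{i}+q^{-1})=0$ appearing in Definition 5.1 below: expanding it gives $\mathsf{t}_{i}^{2}-(q-q^{-1})\mathsf{t}_{i}=1$, so $\mathsf{t}_{i}$ is invertible with $\mathsf{t}_{i}^{-1}=\mathsf{t}_{i}-(q-q^{-1})$. A toy numeric check (Python; the diagonal model below is ours and is not the actual operator $P_{i}S_{i,i+1}$):

```python
# Any operator with eigenvalues q and -q^{-1} satisfies the quadratic
# relation, and the eigenvalues of t_i - (q - q^{-1}) invert those of t_i.
q = 1.7
eigs = [q, -1.0 / q]                      # eigenvalues of a toy diagonal t_i
rel = [(x - q) * (x + 1.0 / q) for x in eigs]
inv = [x - (q - 1.0 / q) for x in eigs]   # eigenvalues of t_i - (q - q^{-1})
ok_rel = all(abs(r) < 1e-12 for r in rel)
ok_inv = all(abs(x * y - 1.0) < 1e-12 for x, y in zip(eigs, inv))
print(ok_rel, ok_inv)  # True True
```

At $q=1$ the relation degenerates to $\mathsf{t}_{i}^{2}=1$, consistent with Remark 5.2 below.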
Quantum analogs of the Brauer algebra were studied in [M] where they appear as centralizers of the action of twisted quantized enveloping algebras $\mathfrak{U}_{q}^{tw}\mathfrak{o}_{n}$ and $\mathfrak{U}_{q}^{tw}\mathfrak{sp}_{n}$ on tensor representations (here, $\mathfrak{sp}_{n}$ is the symplectic Lie algebra); see also [We]. ###### Definition 5.1. The periplectic $q$-Brauer algebra $\mathfrak{B}_{q,l}$ is the associative $\mathbb{C}(q)$-algebra generated by elements $\mathsf{t}_{i}$ and $\mathsf{c}_{i}$ for $1\leq i\leq l-1$ satisfying the following relations: $\displaystyle(\mathsf{t}_{i}-q)(\mathsf{t}_{i}+q^{-1})=0,\;\;\mathsf{c}_{i}^{2}=0,\;\;\mathsf{c}_{i}\mathsf{t}_{i}=-q^{-1}\mathsf{c}_{i},\;\;\mathsf{t}_{i}\mathsf{c}_{i}=q\mathsf{c}_{i}\;\;\text{ for }1\leq i\leq l-1;$ (11) $\displaystyle\mathsf{t}_{i}\mathsf{t}_{j}=\mathsf{t}_{j}\mathsf{t}_{i},\;\;\mathsf{t}_{i}\mathsf{c}_{j}=\mathsf{c}_{j}\mathsf{t}_{i},\;\;\mathsf{c}_{i}\mathsf{c}_{j}=\mathsf{c}_{j}\mathsf{c}_{i}\;\;\text{ if }|i-j|\geq 2;$ (12) $\displaystyle\mathsf{t}_{i}\mathsf{t}_{j}\mathsf{t}_{i}=\mathsf{t}_{j}\mathsf{t}_{i}\mathsf{t}_{j},\;\;\mathsf{c}_{i+1}\mathsf{c}_{i}\mathsf{c}_{i+1}=-\mathsf{c}_{i+1},\;\;\mathsf{c}_{i}\mathsf{c}_{i+1}\mathsf{c}_{i}=-\mathsf{c}_{i}\;\;\text{ for }1\leq i\leq l-2;$ (13) $\displaystyle\mathsf{t}_{i}\mathsf{c}_{i+1}\mathsf{c}_{i}=-\mathsf{t}_{i+1}\mathsf{c}_{i}+(q-q^{-1})\mathsf{c}_{i+1}\mathsf{c}_{i},\;\;\mathsf{c}_{i+1}\mathsf{c}_{i}\mathsf{t}_{i+1}=-\mathsf{c}_{i+1}\mathsf{t}_{i}+(q-q^{-1})\mathsf{c}_{i+1}\mathsf{c}_{i}$ (14) ###### Remark 5.2. Setting $q=1$ in this definition yields the algebra $A_{l}$ from Definition 2.2 in [Mo]. ###### Lemma 5.3. 
We have $\mathfrak{U}_{q}\mathfrak{p}_{n}$-module homomorphisms $\vartheta:\mathbb{C}_{q}(n|n)\otimes\mathbb{C}_{q}(n|n)\rightarrow\mathbb{C}(q)$ and $\epsilon:\mathbb{C}(q)\rightarrow\mathbb{C}_{q}(n|n)\otimes\mathbb{C}_{q}(n|n)$ given by $\vartheta(e_{a}\otimes e_{b})=\delta_{a,-b}(-1)^{p(a)}$ and $\epsilon(1)=\sum_{a=-n}^{n}e_{a}\otimes e_{-a}$. ###### Proof. It is enough to check that, for all the generators $t_{ij}$ of $\mathfrak{U}_{q}\mathfrak{p}_{n}$ and any tensor $\mathbf{v}\in\mathbb{C}_{q}(n|n)\otimes\mathbb{C}_{q}(n|n)$, $\vartheta(t_{ij}(\mathbf{v}))=t_{ij}(\vartheta(\mathbf{v}))\text{ and }\epsilon(t_{ij}(1))=t_{ij}(\epsilon(1)).$ (15) Here is a brief sketch of some of the computations. Using the formula for the coproduct, we have: $\displaystyle t_{ij}(e_{a}\otimes e_{-a})={}$ $\displaystyle\sum_{k=-n}^{n}(-1)^{(p(i)+p(k))(p(k)+p(j))+(p(k)+p(j))p(a)}t_{ik}(e_{a})\otimes t_{kj}(e_{-a})$ (16) This can be made more explicit using $\displaystyle t_{ii}(e_{a})$ $\displaystyle=$ $\displaystyle\sum_{b=-n}^{n}q^{\delta_{bi}(1-2p(i))+\delta_{b,-i}(2p(i)-1)}E_{bb}(e_{a});$ $\displaystyle t_{i,-i}(e_{a})$ $\displaystyle=$ $\displaystyle(q-q^{-1})\delta_{i>0}E_{-i,i}(e_{a});$ $\displaystyle t_{ij}(e_{a})$ $\displaystyle=$ $\displaystyle(q-q^{-1})(-1)^{p(i)}\mathsf{E}_{ji}(e_{a}),\text{ if }|i|\neq|j|.$ We obtain, for instance, $\displaystyle t_{ii}(e_{a_{1}}\otimes e_{a_{2}})={}$ $\displaystyle q^{\delta_{a_{1},i}(1-2p(i))+\delta_{a_{1},-i}(2p(i)-1)}q^{\delta_{a_{2},i}(1-2p(i))+\delta_{a_{2},-i}(2p(i)-1)}e_{a_{1}}\otimes e_{a_{2}}$ If $a_{2}=-a_{1}=-a$, this simplifies to $e_{a}\otimes e_{-a}$ and this allows us to check (15) quickly for $i=j$. 
Furthermore, $\displaystyle t_{i,-i}(e_{a}\otimes e_{-a})={}$ $\displaystyle(-1)^{p(a)}\delta_{i>0}t_{ii}(e_{a})\otimes t_{i,-i}(e_{-a})+\delta_{i>0}t_{i,-i}(e_{a})\otimes t_{-i,-i}(e_{-a})$ It follows that $t_{i,-i}\left(\sum_{a=-n}^{n}e_{a}\otimes e_{-a}\right)=0$, so the identity for $\epsilon$ in (15) holds for $j=-i$. Suppose now that $a_{1}\neq-a_{2}$. Then $\displaystyle t_{i,-i}(e_{a_{1}}\otimes e_{a_{2}})={}$ $\displaystyle\delta_{i>0}\delta(a_{1}=a_{2}=i)(q-q^{-1})qe_{i}\otimes e_{-i}$ $\displaystyle+\delta_{i>0}\delta(a_{1}=a_{2}=i)(q-q^{-1})qe_{-i}\otimes e_{i}$ Observe that $\vartheta(e_{i}\otimes e_{-i}+e_{-i}\otimes e_{i})=0$, so we have shown that $\vartheta(t_{i,-i}(e_{a_{1}}\otimes e_{a_{2}}))=t_{i,-i}(\vartheta(e_{a_{1}}\otimes e_{a_{2}}))$ and this proves (15) for $\vartheta$ when $j=-i$. Next, we consider the case $|i|\neq|j|$. To prove the identity for $\epsilon$ in (15), we use again (16) and obtain that $\displaystyle t_{ij}\left(\sum_{a=-n}^{n}e_{a}\otimes e_{-a}\right)=0$ by considering subcases $i=\pm a$, $j=\pm a$, and $k=\pm a$. To show that (15) holds for $\vartheta$ we also proceed with case-by-case verification. The case $a_{1},a_{2}\not\in\\{\pm i,\pm j\\}$ is immediate. If $a_{1}\in\\{\pm i,\pm j\\}$, $a_{2}\not\in\\{\pm i,\pm j\\}$, and $a_{1}\neq-a_{2}$, then $\displaystyle t_{ij}(e_{a_{1}}\otimes e_{a_{2}})={}$ $\displaystyle(q-q^{-1})^{2}(-1)^{(p(i)+p(a_{2}))(p(a_{2})+p(j))+(p(a_{2})+p(j))p(a_{1})}(-1)^{p(i)+p(a_{2})}\mathsf{E}_{a_{2}i}(e_{a_{1}})\otimes\mathsf{E}_{ja_{2}}(e_{a_{2}})$ $\displaystyle+(q-q^{-1})(-1)^{p(i)}\mathsf{E}_{ji}(e_{a_{1}})\otimes E_{a_{2}a_{2}}(e_{a_{2}}).$ This shows that $\vartheta(t_{ij}(e_{a_{1}}\otimes e_{a_{2}}))=0=t_{ij}(\vartheta(e_{a_{1}}\otimes e_{a_{2}}))$. Similarly, we obtain the desired identity in the other cases. 
∎ By composing $\vartheta$ and $\epsilon$, we obtain a $\mathfrak{U}_{q}\mathfrak{p}_{n}$-module homomorphism $\epsilon\circ\vartheta:\mathbb{C}_{q}(n|n)^{\otimes 2}\rightarrow\mathbb{C}_{q}(n|n)^{\otimes 2}$. In terms of elementary matrices, this linear map is given by $\sum_{a,b=-n}^{n}(-1)^{p(a)p(b)}E_{ab}\otimes E_{-a,-b}$, which we abbreviate by $\mathfrak{c}$. The super-permutation operator $P$ on $\mathbb{C}_{q}(n|n)^{\otimes 2}$ is given by $P=\sum_{a,b=-n}^{n}(-1)^{p(b)}E_{ab}\otimes E_{ba}$, so $\mathfrak{c}=P^{(\pi\circ\mathrm{st})_{2}}$ where $(\pi\circ\mathrm{st})_{2}$ stands for the map $\pi\circ\mathrm{st}$ applied to the second tensor in the previous formula for $P$. We can extend $\mathfrak{c}$ to a $\mathfrak{U}_{q}\mathfrak{p}_{n}$-module homomorphism $\mathfrak{c}_{i}:\mathbb{C}_{q}(n|n)^{\otimes l}\rightarrow\mathbb{C}_{q}(n|n)^{\otimes l}$ for $1\leq i\leq l-1$ by applying $\mathfrak{c}$ to the $i^{th}$ and $(i+1)^{th}$ tensors. The linear map $\mathbb{C}_{q}(n|n)^{\otimes l}\rightarrow\mathbb{C}_{q}(n|n)^{\otimes l}$ given by $P_{i}S_{i,i+1}$ where $P_{i}$ is the super-permutation operator acting on the $i^{th}$ and $(i+1)^{th}$ tensors is also a $\mathfrak{U}_{q}\mathfrak{p}_{n}$-module homomorphism: this is a consequence of the fact that $S$ is a solution of the quantum Yang-Baxter relation. ###### Proposition 5.4. The tensor superspace $\mathbb{C}_{q}(n|n)^{\otimes l}$ is a module over $\mathfrak{B}_{q,l}$ if we let $\mathsf{t}_{i}$ act as $P_{i}S_{i,i+1}$ and $\mathsf{c}_{i}$ act as $\mathfrak{c}_{i}$. ###### Proof. That the linear operators $P_{i}S_{i,i+1}$ satisfy the braid relation (the first relation in (13)) is a consequence of the fact that $S$ is a solution of the quantum Yang-Baxter relation. The relations (12) for the operators $P_{i}S_{i,i+1}$ and $\mathfrak{c}_{i}$ can be easily verified. As for the other relations, they can be checked via direct computations. 
It is enough to check the relations (11) on $\mathbb{C}_{q}(n|n)^{\otimes 2}$ and the relations (14) on $\mathbb{C}_{q}(n|n)^{\otimes 3}$. We briefly sketch some of those computations below. First, note that $\mathfrak{c}P=-\mathfrak{c}$ and $P\mathfrak{c}=\mathfrak{c}$. Also, we easily obtain the following: $\displaystyle\mathfrak{c}\left((q-1)\sum_{i=1}^{n}E_{ii}\otimes E_{ii}\right)={}$ $\displaystyle\mathfrak{c}\left((q^{-1}-1)\sum_{i=1}^{n}E_{-i,-i}\otimes E_{-i,-i}\right)=0,$ $\displaystyle\mathfrak{c}\left((q-1)\sum_{i=1}^{n}E_{ii}\otimes E_{-i,-i}\right)={}$ $\displaystyle(q-1)\sum_{a=-n}^{n}\sum_{b=1}^{n}E_{ab}\otimes E_{-a,-b},$ $\displaystyle\mathfrak{c}\left((q^{-1}-1)\sum_{i=1}^{n}E_{-i,-i}\otimes E_{ii}\right)={}$ $\displaystyle(q^{-1}-1)\sum_{a=-n}^{n}\sum_{b=-n}^{-1}(-1)^{p(a)}E_{ab}\otimes E_{-a,-b},$ $\displaystyle\mathfrak{c}\left(\sum_{i=-n}^{-1}E_{i,-i}\otimes E_{-i,i}\right)={}$ $\displaystyle-\sum_{a=-n}^{n}\sum_{b=1}^{n}E_{ab}\otimes E_{-a,-b},$ $\displaystyle\mathfrak{c}\left(\sum_{1\leq|j|<|i|\leq n}(-1)^{p(j)}\mathsf{E}_{ij}\otimes E_{ji}\right)={}$ $\displaystyle\sum_{a=-n}^{n}\sum_{1\leq|j|<|i|\leq n}(-1)^{p(a)(p(i)+1)+p(j)}E_{a,-i}\otimes E_{-a,i}=0.$ Therefore, we have that $\mathfrak{c}(S-1)=(q^{-1}-1)\mathfrak{c}$, hence $\mathfrak{c}S=q^{-1}\mathfrak{c}$. Now using that $\mathfrak{c}=-\mathfrak{c}P$, we obtain the third relation in (11). Similarly, we prove $(S-1)\mathfrak{c}=(q-1)\mathfrak{c}$, and then using $P\mathfrak{c}=\mathfrak{c}$, we obtain the fourth relation in (11). 
For the remaining relations we use the following formula: $\displaystyle PS={}$ $\displaystyle\sum_{i,j=-n}^{n}(-1)^{p(j)}E_{ij}\otimes E_{ji}+(q-1)\sum_{i=1}^{n}\left(E_{-i,i}\otimes E_{i,-i}\right)$ $\displaystyle+(q-1)\sum_{i=1}^{n}\left(E_{ii}\otimes E_{ii}\right)-(q^{-1}-1)\sum_{i=1}^{n}\left(E_{i,-i}\otimes E_{-i,i}\right)$ $\displaystyle-(q^{-1}-1)\sum_{i=1}^{n}\left(E_{-i,-i}\otimes E_{-i,-i}\right)+(q-q^{-1})\sum_{i=-n}^{-1}\left(E_{-i,-i}\otimes E_{ii}\right)$ $\displaystyle+(q-q^{-1})\sum_{|j|<|i|}\left(E_{jj}\otimes E_{ii}\right)+(q-q^{-1})\sum_{|j|<|i|}\left((-1)^{p(i)p(j)}E_{ji}\otimes E_{-j,-i}\right)$ ∎ As mentioned after the definition of $\mathfrak{B}_{q,l}$, the module structure given in the previous proposition commutes with the action of $\mathfrak{U}_{q}(\mathfrak{p}_{n})$ on $\mathbb{C}_{q}(n|n)^{\otimes l}$. We thus have algebra homomorphisms $\mathfrak{B}_{q,l}\longrightarrow\mathrm{End}_{\mathfrak{U}_{q}(\mathfrak{p}_{n})}(\mathbb{C}_{q}(n|n)^{\otimes l})\text{ and }\mathfrak{U}_{q}(\mathfrak{p}_{n})\longrightarrow\mathrm{End}_{\mathfrak{B}_{q,l}}(\mathbb{C}_{q}(n|n)^{\otimes l}).$ The main theorem of this section states that $\mathfrak{B}_{q,l}$ is the full centralizer of the action of $\mathfrak{U}_{q}(\mathfrak{p}_{n})$ on $\mathbb{C}_{q}(n|n)^{\otimes l}$ when $n\geq l$. ###### Theorem 5.5. The map $\mathfrak{B}_{q,l}\longrightarrow\mathrm{End}_{\mathfrak{U}_{q}(\mathfrak{p}_{n})}(\mathbb{C}_{q}(n|n)^{\otimes l})$ is surjective and it is injective when $n\geq l$. This is a $q$-analogue of Theorem 4.5 in [Mo]. The proof follows the lines of the proof of Theorem 3.28 in [BGJKW], using Proposition 5.6 below and Theorem 4.5 in [Mo] along with Lemma 3.27 in [BGJKW], which can be applied in the present situation. Recall that $\mathcal{A}=\mathbb{C}[q,q^{-1}]_{(q-1)}$ is the localization of $\mathbb{C}[q,q^{-1}]$ at the ideal generated by $q-1$. 
Let $\mathfrak{B}_{q,l}(\mathcal{A})$ be the associative $\mathcal{A}$-algebra defined by the same generators and relations as $\mathfrak{B}_{q,l}$. ###### Proposition 5.6. The quotient algebra $\mathfrak{B}_{q,l}(\mathcal{A})/(q-1)\mathfrak{B}_{q,l}(\mathcal{A})$ is isomorphic to the algebra $A_{l}$ given in Definition 2.2 in [Mo]. ###### Proof. It follows immediately from the definitions of both $A_{l}$ and $\mathfrak{B}_{q,l}(\mathcal{A})$ that we have a surjective algebra homomorphism $A_{l}\twoheadrightarrow\mathfrak{B}_{q,l}(\mathcal{A})/(q-1)\mathfrak{B}_{q,l}(\mathcal{A})$. That it is injective can be proved as in the proof of Proposition 3.21 in [BGJKW] using Theorem 4.1 in [Mo]. ∎ The $q$-Schur superalgebras of type $Q$ were introduced in [BGJKW] and [DuWa1, DuWa2]. Considering loc. cit. and the earlier work on $q$-Schur algebras for $\mathfrak{gl}_{n}$ (see for instance [Do]), the following definition is natural. ###### Definition 5.7. The $q$-Schur superalgebra $S_{q}(\mathfrak{p}_{n},l)$ of type $P$ is the centralizer of the action of $\mathfrak{B}_{q,l}$ on $\mathbb{C}_{q}(n|n)^{\otimes l}$, that is, $S_{q}(\mathfrak{p}_{n},l)=\mathrm{End}_{\mathfrak{B}_{q,l}}(\mathbb{C}_{q}(n|n)^{\otimes l})$. We have an algebra homomorphism $\mathfrak{U}_{q}(\mathfrak{p}_{n})\longrightarrow S_{q}(\mathfrak{p}_{n},l)$: it is an open question whether or not this map is surjective. We also have an algebra homomorphism $\mathfrak{B}_{q,l}\longrightarrow\mathrm{End}_{S_{q}(\mathfrak{p}_{n},l)}(\mathbb{C}_{q}(n|n)^{\otimes l})$ and it is natural to expect that it should be an isomorphism, perhaps under certain conditions on $n$ and $l$. ## References * [AGG] S. Ahmed, D. Grantcharov, N. Guay, _Quantized enveloping superalgebra of type $P$_, preprint. * [BDEA${}^{+}1$] M. Balagovic, Z. Daugherty, I. Entova-Aizenbud, I. Halacheva, J. Hennig, M. S. Im, G. Letzter, E. Norton, V. Serganova, C. Stroppel, _The affine $VW$ supercategory_, Selecta Math. (N.S.) 
26 (2020), no. 2, Paper No. 20, 42 pp., arXiv:1801.04178. * [BDEA${}^{+}2$] M. Balagovic, Z. Daugherty, I. Entova-Aizenbud, I. Halacheva, J. Hennig, M. S. Im, G. Letzter, E. Norton, V. Serganova, C. Stroppel, _Translation functors and decomposition numbers for the periplectic Lie superalgebra $\mathfrak{p}(n)$_, Math. Res. Lett. 26 (2019), no. 3, 643–710, arXiv:1610.08470. * [BGJKW] G. Benkart, N. Guay, J. H. Jung, S.-J. Kang, S. Wilcox, _Quantum walled Brauer-Clifford superalgebras_ , J. Algebra 454 (2016), 433–474. * [CE1] K. Coulembier, M. Ehrig, _The periplectic Brauer algebra II: Decomposition multiplicities_ , J. Comb. Algebra 2 (2018), no. 1, 19–46. * [CE2] K. Coulembier, M. Ehrig, _The periplectic Brauer algebra III: The Deligne category_ , Algebr Represent Theory (2020). https://doi.org/10.1007/s10468-020-09976-8. * [ChGu] H. Chen, N. Guay, _Twisted affine Lie superalgebra of type $Q$ and quantization of its enveloping superalgebra_, Math. Z., 272 (2012), no.1, 317–347. * [Co] K. Coulembier, _The periplectic Brauer algebra_ , Proc. Lond. Math. Soc. (3) 117 (2018), no. 3, 441–482. * [CP] C. Chen, Y. Peng, _Affine periplectic Brauer algebras_ , J. Algebra 501 (2018), 345–372. * [DHIN] Z. Daugherty, I. Halacheva, M.S. Im, and E. Norton, _On calibrated representations of the degenerate affine periplectic Brauer algebra_ , arXiv:1905.05148. * [Do] S. Donkin, _The $q$-Schur Algebra_, London Mathematical Society Lecture Note Series, 253, Cambridge University Press, Cambridge, 1998. * [DuWa1] J. Du, J. Wan, _Presenting queer Schur superalgebras_ , Int. Math. Res. Not. IMRN 2015, no. 8, 2210–2272. * [DuWa2] J. Du, J. Wan, _The queer $q$-Schur superalgebra_, J. Aust. Math. Soc. (2018), 105, no.3, 316–346. * [EAS1] Inna Entova-Aizenbud, Vera Serganova, _Deligne categories and the periplectic Lie superalgebra_ , arXiv:1807.09478. * [EAS2] Inna Entova-Aizenbud, Vera Serganova, _Kac-Wakimoto conjecture for the periplectic Lie superalgebra_ , arXiv:1905.04712. * [FRT] L. 
Faddeev, N. Reshetikhin, L. Takhtajan, _Quantization of Lie groups and Lie algebras_ (Russian), Algebra i Analiz 1 (1989), no. 1, 178–206; translation in Leningrad Math. J. 1 (1990), no. 1, 193–225. * [FSS] L. Frappat, P. Sorba, A. Sciarrino, _Deformation of the strange superalgebra $\tilde{P}(n)$_, J. Phys. A: Math. Gen. 26 (1993) 661–665. * [GJKK] D. Grantcharov, J. H. Jung, S.-J. Kang, M. Kim, _Highest weight modules over quantum queer superalgebra $U_{q}(\mathfrak{q}(n))$_, Comm. Math. Phys. 296 (2010), no. 3, 827–860. * [GJKKK] D. Grantcharov, J. H. Jung, S.-J. Kang, M. Kashiwara, M. Kim, _Quantum Queer Superalgebra and Crystal Bases_ , Proc. Japan Acad. Ser. A Math. Sci. 86 (2010), no. 10, 177–182, arXiv:1007.4105. * [K] V. Kac, _Lie superalgebras_ , Adv. Math. 26 (1977), no. 1, 8–96. * [KT] J. Kujawa, B. Tharp, _The marked Brauer category_ , J. Lond. Math. Soc. 95 (2017), no. 2, 393–413. * [LeSh] D. Leites, A. Shapovalov, _Manin-Olshansky triples for Lie superalgebras_ , J. Nonlinear Math. Phys. 7 (2000), no. 2, 120–125. * [M] A. Molev, _A new quantum analog of the Brauer algebra_ , Quantum groups and integrable systems, Czechoslovak J. Phys. 53 (2003), no. 11, 1073–1078. * [Mo] D. Moon, _Tensor product representations of the Lie superalgebra $\mathfrak{p}(n)$ and their centralizers_, Comm. Algebra 31 (2003), no. 5, 2095–2140. * [Na1] M. Nazarov, _Yangians of the "strange" Lie superalgebras_ , Quantum groups (Leningrad, 1990), Lecture Notes in Math. 1510, pp. 90–97, Springer, Berlin, 1992. * [Na2] M. Nazarov, _Yangian of the queer Lie superalgebra_ , Comm. Math. Phys. 208 (1999), no. 1, 195–223. * [Ol] G. Olshanski, _Quantized universal enveloping superalgebra of type $Q$ and a super-extension of the Hecke algebra_, Lett. Math. Phys. 24 (1992), no. 2, 93–102. * [Ser] V. Serganova, _On representations of the Lie superalgebra $\mathfrak{p}(n)$_, J. Algebra 258 (2002), 615–630. * [We] H. Wenzl, _A $q$-Brauer algebra_, J. Algebra 358 (2012), 102–127. 
Department of Mathematics, University of Texas at Arlington Arlington, TX 76021, USA <EMAIL_ADDRESS> Department of Mathematics, University of Texas at Arlington Arlington, TX 76021, USA <EMAIL_ADDRESS> University of Alberta, Department of Mathematical and Statistical Sciences, CAB 632 Edmonton, AB T6G 2G1, Canada <EMAIL_ADDRESS>
# Rotational Symmetry of Solutions of Mean Curvature Flows Coming Out of A Double Cone Letian Chen Department of Mathematics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218<EMAIL_ADDRESS> (Date: Jan. 24, 2021) ###### Abstract. We show that any smooth solution to the mean curvature flow equations coming out of a rotationally symmetric double cone is also rotationally symmetric. ## 1\. Introduction We say a family of properly embedded smooth hypersurfaces $\\{\Sigma_{t}\\}_{t\in I}\subset\mathbb{R}^{n+1}$ is a solution of the mean curvature flow (MCF) equations if $\displaystyle\left(\frac{\partial x}{\partial t}\right)^{\perp}=H_{\Sigma_{t}}(x).$ Here $H_{\Sigma_{t}}(x)$ denotes the mean curvature vector of $\Sigma_{t}$ at $x$, and $x^{\perp}$ is the normal component of $x$. In this article we are interested in solutions of MCF coming out of a rotationally symmetric double cone, by which we mean a (hyper)cone $\mathcal{C}\subset\mathbb{R}^{n+1}$ whose link $\mathcal{L}(\mathcal{C})=\mathcal{C}\cap\mathbb{S}^{n}$ is a smooth hypersurface of $\mathbb{S}^{n}$ and has two connected components lying in two separate hemispheres. More explicitly, we consider a cone of the form (up to an ambient rotation so that the axis of symmetry is the $x_{1}$-axis) $x_{1}^{2}=\begin{cases}m_{1}(x_{2}^{2}+x_{3}^{2}+\cdots+x_{n+1}^{2})&x_{1}\geq 0\\\ m_{2}(x_{2}^{2}+x_{3}^{2}+\cdots+x_{n+1}^{2})&x_{1}<0\end{cases}$ (1.1) where $m_{1},m_{2}>0$ are constants related to the aperture of the cone. Solutions coming out of cones arise naturally in the singularity analysis of MCF. In particular the self-expanders, which are special solutions of the MCF satisfying $\Sigma_{t}=\sqrt{t}\Sigma$ for some hypersurface $\Sigma\subset\mathbb{R}^{n+1}$, are often thought of as models of MCF flowing out of a conical singularity (see for example [1]).
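As a sanity check on the MCF equation (an illustrative aside, not part of the paper): for a round sphere $\mathbb{S}^{n}_{R(t)}\subset\mathbb{R}^{n+1}$ the mean curvature vector points inward with magnitude $n/R$, so MCF reduces to the ODE $R^{\prime}=-n/R$, solved by $R(t)=\sqrt{R_{0}^{2}-2nt}$. A sympy sketch verifying this for $n=2$:

```python
import sympy as sp

t, R0 = sp.symbols("t R0", positive=True)
n = sp.Integer(2)  # a round sphere S^2 in R^3

# For a round sphere, MCF reduces to the ODE R'(t) = -n/R(t):
# the inward normal speed equals the mean curvature n/R.
R = sp.sqrt(R0**2 - 2*n*t)

lhs = sp.diff(R, t)  # R'(t)
rhs = -n / R
assert sp.simplify(lhs - rhs) == 0
print("R(t) = sqrt(R0^2 - 2 n t) solves R' = -n/R")
```

The sphere shrinks to a point at time $R_{0}^{2}/2n$, which is the basic example of a singularity of the flow.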
Self-expanders satisfy the elliptic equation $\displaystyle H_{\Sigma}(x)=\frac{x^{\perp}}{2},$ which is the Euler-Lagrange equation of the functional $\int_{\Sigma}e^{\left|x\right|^{2}/4}d\mathcal{H}^{n}$. We can therefore talk about the Morse index of a given self-expander, and the Morse flow lines between two self-expanders (asymptotic to the same cone $\mathcal{C}$) are examples of non-self-similar solutions coming out of the cone. We show that, given a smooth double cone $\mathcal{C}\subset\mathbb{R}^{n+1}$, any smooth solution to the MCF, $\\{\Sigma_{t}\\}_{t\in[0,T]}$, asymptotic to $\mathcal{C}$, inherits the rotational symmetry of $\mathcal{C}$ at all times. More precisely we prove: ###### Theorem 1.1. Let $\mathcal{C}\subset\mathbb{R}^{n+1}$ be a smooth, rotationally symmetric double cone. Suppose $\\{\Sigma_{t}\\}_{t\in[0,T]}$ is a smooth solution to the mean curvature flow asymptotic to $\mathcal{C}$, in the sense that $\displaystyle\lim_{t\to 0^{+}}\mathcal{H}^{n}\llcorner\Sigma_{t}=\mathcal{H}^{n}\llcorner\mathcal{C}$ as Radon measures. Then $\Sigma_{t}$ is also rotationally symmetric (with the same axis of symmetry) for any $t\in[0,T]$. ###### Remark. It is likely that only a finite number of such solutions exist. These include self-expanders and Morse flow lines between two self-expanders asymptotic to the same cone, some of which can be constructed using methods from [6]. In particular the latter solutions might develop singularities. Indeed, when the parameters $m_{1}$ and $m_{2}$ in Equation 1.1 are sufficiently small, by [14] we can find an unstable (connected) catenoidal self-expander and a disconnected self-expander whose two components are given by the unique self-expanders asymptotic to the top part and bottom part of the cone. One expects that there exists a Morse flow line connecting these two self-expanders. Such a flow line will necessarily develop a neck pinch in order to become disconnected.
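As a side note (a sketch under standard sign conventions with $\mathbf{H}=H\nu$; this is not part of the paper's argument), the expander equation arises from the first variation of the weighted functional:

```latex
% First variation of the weighted area along a compactly supported
% normal variation X = f\nu (sign convention: \mathbf{H} = H\nu):
\frac{d}{ds}\Big|_{s=0}\int_{\Sigma_{s}}e^{|x|^{2}/4}\,d\mathcal{H}^{n}
  =\int_{\Sigma}\Big(\big\langle\tfrac{x}{2},\nu\big\rangle-H\Big)
    f\,e^{|x|^{2}/4}\,d\mathcal{H}^{n}.
% Critical points therefore satisfy H\nu = \tfrac{1}{2}\langle x,\nu\rangle\nu,
% i.e. the self-expander equation H_{\Sigma}(x) = x^{\perp}/2.
```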
As an easy corollary we obtain the following rotational symmetry result: ###### Corollary 1.2. Let $\mathcal{C}\subset\mathbb{R}^{n+1}$ be a smooth, rotationally symmetric double cone. Then any smooth self-expander $\Sigma$ asymptotic to $\mathcal{C}$ is also rotationally symmetric (with the same axis of symmetry). ###### Remark. It is expected that no singular self-expander asymptotic to $\mathcal{C}$ exists, but our theorem only applies in the smooth case. The smoothness assumption is in place to avoid further technicality introduced by the moving plane method, see Section 2.4. The rotational symmetry is known in many other cases. Fong and McGrath [13] showed that the same conclusion holds if the cone is rotationally symmetric and the expander is mean convex. Bernstein–Wang (Lemma 8.3 in [4]) later showed that the same conclusion holds if the cone is rotationally symmetric and the expander is weakly stable (in particular, mean convexity implies weak stability, so this generalizes the Fong–McGrath result). In contrast, our result applies to all solutions coming out of the cone and does not assume any extra condition on the flow other than smoothness. For other geometric flows, Chodosh [10] proved rotational symmetry of expanding asymptotically conical Ricci solitons with positive sectional curvature. It is also worth mentioning that, although in general given a rotationally symmetric smooth cone $\mathcal{C}$ there could be multiple self-expanders asymptotic to $\mathcal{C}$, if there exists a unique self-expander asymptotic to $\mathcal{C}$, it must inherit the rotational symmetry. Uniqueness holds, for example, when the link of $\mathcal{C}$, $\mathcal{L}(\mathcal{C})$, is connected, or, in the double cone case, when the parameters $m_{1},m_{2}$ in Equation 1.1 are sufficiently large [4]. It is interesting to determine whether the rotational symmetry holds when the link $\mathcal{L}(\mathcal{C})$ has 3 or more connected components. We suspect that counterexamples exist.
We refer to [3], [4], [5], [7], and [11] for more information on self-expanders. The proof of Theorem 1.1 relies on the moving plane method, pioneered by Alexandrov to prove that embedded compact constant mean curvature hypersurfaces are round spheres. The method was later applied to minimal surfaces by Schoen [20] to prove certain uniqueness theorems for catenoids. More recently, Martín–Savas-Halilaj–Smoczyk [19] showed uniqueness of translators (that is, solutions of the MCF equation that evolve by translating along one fixed direction) with one asymptotically paraboloidal end. Choi–Haslhofer–Hershkovits [8] and Choi–Haslhofer–Hershkovits–White [9] used a parabolic variant of the method to deduce rotational symmetry of certain ancient solutions to the MCF equation (that is, solutions of the MCF equation which exist on $(-\infty,0)$). These methods were further generalized to non-smooth settings very recently by Haslhofer–Hershkovits–White [15] and by Bernstein–Maggi [2]. Although a self-expander $\Sigma$ satisfies an elliptic PDE, the hypersurface obtained by reflecting a self-expander with respect to a hyperplane no longer satisfies that equation (it is rather a translated self-expander). For this reason we cannot directly apply the usual elliptic maximum principle and Hopf lemma; instead we work in spacetime $\mathbb{R}^{n+1}\times[0,T]$ and use the MCF equations directly with a parabolic version of the maximum principles, which leads to the more general Theorem 1.1. Consequently, our method is in spirit closer to that used by [8]. ### Acknowledgment The author would like to thank his advisor, Jacob Bernstein, for much helpful advice and constant encouragement, especially during a period of extreme difficulty around the globe. The author would also like to thank Rory Martin-Hagemeyer for useful discussions. ## 2\. Preliminaries ### 2.1.
Notations Throughout the paper, $B_{r}(x)$ will denote the Euclidean ball of radius $r$ centered at a point $x\in\mathbb{R}^{n+1}$. By a (smooth) MCF in $\mathbb{R}^{n+1}$ we mean a family of embedded hypersurfaces $\\{\Sigma_{t}\\}_{t\in I}$ for some interval $I$ such that $\displaystyle\left(\frac{\partial x}{\partial t}\right)^{\perp}=H_{\Sigma_{t}}(x)$ for all $x\in\Sigma_{t}$, $t\in I$. Given an open set $U\subset\mathbb{R}^{n+1}$, we say $\\{\Sigma_{t}\\}_{t\in I}$ is a MCF in $U$ if the above equation is satisfied locally (given a local parametrization of the hypersurface) at every $x\in\Sigma_{t}\cap U$ and $t\in I$. ### 2.2. Pseudolocality for MCF We will frequently use the following pseudolocality result of Ilmanen–Neves–Schulze (see also [12]): ###### Theorem 2.1 (Theorem 1.5 of [16]). Let $\\{\Sigma_{t}\\}_{t\in(0,T]}$ be a mean curvature flow in $\mathbb{R}^{n+1}$. Given any $\eta>0$, there are $\delta,\varepsilon>0$ such that: if $x\in\Sigma_{0}$ and $\Sigma_{0}\cap C_{1}(x)$ is a graph over $C^{n}_{1}(x)$ with Lipschitz constant bounded by $\varepsilon$, then $\Sigma_{t}\cap C_{\delta}(x)$ can be written as a graph over $C^{n}_{\delta}(x)$ with Lipschitz constant bounded by $\eta$ for any $t\in[0,\delta^{2})\cap[0,T)$. Here, for $x=(x^{\prime},x_{n+1})\in\mathbb{R}^{n+1}$, $\displaystyle C_{r}^{n}(x)=\\{(y,y_{n+1})\in\mathbb{R}^{n+1}\mid\left|y-x^{\prime}\right|<r\\}$ is the open cylinder in $\mathbb{R}^{n+1}$ centered at $x$ and $\displaystyle C_{r}(x)=\\{(y,y_{n+1})\in C_{r}^{n}(x)\mid\left|y_{n+1}-x_{n+1}\right|<r\\}$ is the truncated cylinder. Roughly speaking, this theorem says that if the initial data of our MCF is graphical in some cylinder centered at $x$, then at least for a short time the evolution of the hypersurface stays graphical in a possibly smaller cylinder.
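To make the graphicality hypothesis concrete (an illustrative numerical aside; the function $u$ below is made up), a hypersurface is a graph over $C^{n}_{1}(x)$ with Lipschitz constant at most $\varepsilon$ exactly when $\sup|\nabla u|\leq\varepsilon$ for the defining function $u$. Estimating that supremum on a grid:

```python
import numpy as np

# Hypothetical graph u(y1, y2) = 0.1 sin(y1) cos(y2) over the base of C^2_1(0).
ys = np.linspace(-1.0, 1.0, 201)
Y1, Y2 = np.meshgrid(ys, ys, indexing="ij")
U = 0.1 * np.sin(Y1) * np.cos(Y2)

# Approximate sup |grad u| by finite differences; analytically |grad u| <= 0.1.
dU1, dU2 = np.gradient(U, ys, ys)
lip = float(np.max(np.hypot(dU1, dU2)))
assert 0.09 < lip <= 0.1 + 1e-6
print(f"estimated Lipschitz constant: {lip:.4f}")
```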
We will primarily use this theorem to show that our flow is graphical outside of a large ball for a short time, although strictly speaking we sometimes need to apply the above theorem in the context of integral Brakke flows. ### 2.3. Parabolic Maximum Principles In this section $Z_{r}(x,t)$ will denote the spacetime cylinder of radius $r$ centered at $(x,t)\in\mathbb{R}^{n}\times\mathbb{R}$; that is, $\displaystyle Z_{r}(x,t)=\\{(y,s)\in\mathbb{R}^{n}\times\mathbb{R}\mid\left|y-x\right|<r,\left|t-s\right|<r^{2}\\}.$ $Z_{r}^{-}(x,t)$ will denote the part of the cylinder $Z_{r}(x,t)$ whose time component is smaller than $t$. To carry out the moving plane method, the most important ingredients are the maximum principle and the Hopf lemma. In our case we need a version of those theorems applicable to graphical solutions of MCF; that is, functions $u:Z_{r}^{-}(0,0)\to\mathbb{R}$ satisfying the following quasilinear parabolic PDE: $\displaystyle u_{t}=\sqrt{1+\left|\nabla u\right|^{2}}\operatorname{div}\left(\frac{\nabla u}{\sqrt{1+\left|\nabla u\right|^{2}}}\right).$ Observe that the difference of two graphical solutions to MCF satisfies a second-order linear parabolic PDE (provided the gradients are bounded a priori, which will be the case since our solutions are asymptotically conical), so by the standard theory of linear parabolic PDEs [17] we have (cf. Section 6.2 in [8]): ###### Lemma 2.2 (Maximum Principle). Suppose $u,v$ are graphical solutions to the MCF in a parabolic cylinder $Z_{r}^{-}(0,0)$ with $u(0,0)=v(0,0)$. If $u\leq v$ in $Z_{r}^{-}(0,0)$, then $u=v$ in $Z_{r}^{-}(0,0)$. ###### Lemma 2.3 (Hopf Lemma). Suppose $u,v$ are graphical solutions to the MCF in a half parabolic cylinder $Z_{r}^{-}(0,0)\cap\\{x_{1}\geq 0\\}$ with $u(0,0)=v(0,0)$ and $\frac{\partial u}{\partial x_{1}}(0,0)=\frac{\partial v}{\partial x_{1}}(0,0)$. If $u\leq v$ in $Z_{r}^{-}(0,0)\cap\\{x_{1}\geq 0\\}$, then $u=v$ in $Z_{r}^{-}(0,0)\cap\\{x_{1}\geq 0\\}$. ### 2.4.
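For curves ($n=1$) the graphical equation reduces to $u_{t}=u_{xx}/(1+u_{x}^{2})$. As an illustrative check (not part of the paper), the classical grim reaper $u(x,t)=t-\log\cos x$ solves it, and so do its parabolic rescalings $\lambda^{-1}u(\lambda x,\lambda^{2}t)$, the scaling invariance used repeatedly later:

```python
import sympy as sp

x, t = sp.symbols("x t")
lam = sp.symbols("lambda", positive=True)

def residual(u):
    # residual of the graphical curve-shortening equation u_t - u_xx/(1+u_x^2)
    return sp.diff(u, t) - sp.diff(u, x, 2) / (1 + sp.diff(u, x)**2)

# The grim reaper translating solution
u = t - sp.log(sp.cos(x))
assert sp.simplify(residual(u)) == 0

# Parabolic rescaling u(lam*x, lam^2*t)/lam is again a solution
u_lam = u.subs({x: lam*x, t: lam**2*t}) / lam
assert sp.simplify(residual(u_lam)) == 0
print("grim reaper and its parabolic rescalings solve u_t = u_xx/(1+u_x^2)")
```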
Asymptotically Conical Mean Curvature Flow Here we will briefly discuss the class of MCFs we consider in Theorem 1.1. We need our MCF to be at least $C^{2,\alpha}$-asymptotically conical in order to apply the maximum principle and the Hopf lemma. This is not an issue as long as the cone is at least $C^{3}$, as the following proposition shows. ###### Proposition 2.4 (cf. Proposition 3.3 in [5]). Let $\mathcal{C}$ be a $C^{3}$ cone and suppose $\\{\Sigma_{t}\\}_{t\in(0,T]}$ is a MCF such that $\displaystyle\lim_{t\to 0^{+}}\mathcal{H}^{n}\llcorner\Sigma_{t}=\mathcal{H}^{n}\llcorner\mathcal{C},$ then we have for $\alpha\in[0,1)$ and $t\in(0,T)$, $\displaystyle\lim_{\rho\to 0^{+}}\rho\Sigma_{t}=\mathcal{C}\text{ in }C^{2,\alpha}_{loc}(\mathbb{R}^{n+1}\setminus\\{0\\}).$ ###### Proof. It is enough to prove that locally $\Sigma_{t}$ is a $C^{2,\alpha}$ normal graph over $\mathcal{C}$ outside of a large ball. By Theorem 2.1 (strictly speaking we need to consider the flow together with the initial condition $\Sigma_{0}=\mathcal{C}$ as an integral Brakke flow and apply the theorem for Brakke flows), there is $\delta>0$ such that $\Sigma_{t}\cap C_{\delta}(x_{0})$ can be written as a normal graph over $C_{\delta}^{n}(x_{0})$ with Lipschitz constant bounded by $\eta$ for $t\in(0,\delta^{2})$. This induces a map $u_{x_{0}}:[0,\delta^{2})\times C_{\delta}^{n}(x_{0})\to\mathbb{R}$ whose graph describes part of the spacetime track of the flow: $\displaystyle\mathcal{M}=\mathcal{C}\times\\{0\\}\cup\bigcup_{t\in(0,T)}\Sigma_{t}\times\\{t\\}.$ Since $\\{\Sigma_{t}\\}$ is a MCF, $u_{x_{0}}$ satisfies the parametrized equation $\displaystyle\frac{\partial u_{x_{0}}}{\partial t}=\sqrt{1+\left|\nabla_{x}u_{x_{0}}\right|^{2}}\operatorname{div}\left(\frac{\nabla_{x}u_{x_{0}}}{\sqrt{1+\left|\nabla_{x}u_{x_{0}}\right|^{2}}}\right)$ with initial conditions $u_{x_{0}}(0,x_{0})=\left|\nabla_{x}u_{x_{0}}(0,x_{0})\right|=0$.
Since we did not introduce the various Hölder and weighted Hölder norms (to account for the cone structure), we will only briefly outline the proof, which is essentially the same as the proof of Proposition 3.3 in [5]. This is a quasilinear parabolic PDE in divergence form, and we may use Schauder estimates (Theorem 5.1 in Chapter 5 of [18]) to get $\displaystyle\left|\nabla_{x}u_{x_{0}}(t,x)\right|\leq C(\left|x-x_{0}\right|+\sqrt{t})\leq C\delta.$ For the pointwise estimate on $u_{x_{0}}(t,x)$ we have to combine the above and the MCF equation to bound the time derivative $\left|\partial_{t}u_{x_{0}}(t,x)\right|$, giving an estimate of the form $\displaystyle\left|u_{x_{0}}(t,x)\right|\leq C(\left|x-x_{0}\right|^{2}+t)\leq C\delta^{2}.$ Finally, Schauder estimates (Theorem 1.1 in Chapter 6 of [18]) give Hölder semi-norm bounds for any $\alpha\in(0,1)$. Combining these estimates yields a weighted-$C^{2,\alpha}$ estimate on the cone, which is what we needed. ∎ Unfortunately pseudolocality only gives normal graphicality outside of a large compact set, and so we cannot conclude that the entire flow will be of class $C^{2,\alpha}$. For this reason it is assumed in Theorem 1.1 that the MCF is smooth to begin with. We note that it might be possible to remove this assumption using a moving plane method in non-smooth settings such as those presented in [9], [2] or [15]. ## 3\. Rotational Symmetry In this section we prove Theorem 1.1. As claimed before, a direct consequence of the pseudolocality theorem (Theorem 2.1) is the graphicality of the solution outside of a large ball. For the next lemma we denote $\Sigma^{+}=\Sigma\cap\\{x_{n+1}>0\\}$ and $\Sigma^{-}=\Sigma\cap\\{x_{n+1}<0\\}$ for $\Sigma\subset\mathbb{R}^{n+1}$. Figure 1. A typical picture of the moving plane, showing $\Pi_{s}$, $(\mathcal{M}_{s}^{+})^{*}$, $\mathcal{C}\times[0,\infty)$ and $\mathcal{M}_{s}^{-}$. ###### Lemma 3.1. Let $\mathcal{C},\\{\Sigma_{t}\\}_{t\in(0,T)}$ be as in Theorem 1.1.
For each $t\in[0,T)$ there is $R=R(\mathcal{C},\Sigma,t)$ such that $(\Sigma_{t})^{+}\setminus B_{R}(0)$ is graphical over $\Pi_{0}\setminus B_{R}(0)$, where $\Pi_{0}=\\{x_{n+1}=0\\}$; that is, the projection $\pi:(\Sigma_{t})^{+}\setminus B_{R}(0)\to\Pi_{0}$ is injective. The same holds for $(\Sigma_{t})^{-}$. ###### Proof. Let $\eta>0$. Consider the MCF $\\{\Sigma_{t}\\}_{t\in(0,T)}$ together with the initial data $\Sigma_{0}=\mathcal{C}$ and apply Theorem 2.1. Since the cone is smooth, this can be treated as an integral Brakke flow (with no sudden mass loss). Take any point $x\in\mathcal{C}^{+}\setminus B_{1}(0)$. Since $\mathcal{C}^{+}$ is clearly graphical over itself, by the pseudolocality theorem (Theorem 2.1) there exists $t_{1}>0$ such that $0<t<t_{1}$ implies $(\Sigma_{t})^{+}\cap C_{\sqrt{t_{1}}}(x)$ is a normal graph over $B_{\sqrt{t_{1}}}(x)$ with Lipschitz constant bounded above by $\eta$. Since $x$ is arbitrary, this shows that $(\Sigma_{t})^{+}\setminus B_{1}(0)$ is a normal graph over $\mathcal{C}^{+}\setminus B_{1}(0)$ for $0<t<t_{1}$ with Lipschitz constant bounded above by $\eta$. For a general $t>t_{1}$, parabolic rescaling shows that for $R=tt_{1}^{-1}$ we have that $(\Sigma_{t})^{+}\setminus B_{R}(0)$ is a normal graph over $(\mathcal{C}^{+})\setminus B_{R}(0)$ with Lipschitz constant bounded above by $\eta$. The cone $\mathcal{C}^{+}$ is graphical over $\Pi_{0}$ with a fixed angle $\theta<\frac{\pi}{2}$. Moreover, we have proved that given $\eta>0$ there is $R>0$ such that $(\Sigma_{t})^{+}\setminus B_{R}(0)$ is a normal graph over $\mathcal{C}^{+}\setminus B_{R}(0)$, so by choosing $\eta$ small enough the angle between the tangent planes of $\mathcal{C}^{+}\setminus B_{R}(0)$ and $(\Sigma_{t})^{+}\setminus B_{R}(0)$ is as small as we want. Hence $(\Sigma_{t})^{+}\setminus B_{R}(0)$ is graphical over $\Pi_{0}$ as well.
∎ For the rest of the section, let $\displaystyle\Pi_{s}=\\{(x,x_{n+1})\in\mathbb{R}^{n+1}\mid x_{n+1}=s\\}\times[0,\infty)\subset\mathbb{R}^{n+1}\times[0,\infty)$ be the hyperplane at level $s$ in spacetime. Given a set $A\subset\mathbb{R}^{n+1}\times[0,\infty)$ and $t,s\in[0,\infty)$ we let $\displaystyle A^{t}=\\{(x,x_{n+1},t^{\prime})\in A\mid t^{\prime}=t\\}$ be the time $t$ slice of $A$, $\displaystyle A_{s}^{+}=\\{(x,x_{n+1},t)\in A\mid x_{n+1}>s\\}$ be the part of $A$ lying above $\Pi_{s}$, $A_{s}^{-}$ be the part of $A$ lying below $\Pi_{s}$, and finally $\displaystyle A^{*}_{s}=\\{(x,x_{n+1},t)\mid(x,2s-x_{n+1},t)\in A\\}$ be the reflection of $A$ across $\Pi_{s}$; we will often drop the subscript $s$ when it is understood, to avoid excessive subscripts. Given two sets $A,B\subset\mathbb{R}^{n+1}\times[0,\infty)$ we say $A>B$ if for any $(x,x_{n+1},t)\in A$ we have $x_{n+1}>y_{n+1}$ for any $(x,y_{n+1},t)\in B$ (if there is any such point). ###### Proof of Theorem 1.1. Without loss of generality assume that $\mathcal{C}$’s axis of symmetry is the $x_{1}$-axis. Evidently it suffices to show that the flow preserves the reflection symmetry across any hyperplane containing the $x_{1}$-axis, which without loss of generality we will take to be $\\{x_{n+1}=0\\}$. We will use the moving plane method on the spacetime $\mathbb{R}^{n+1}\times[0,\infty)$. The spacetime track $\displaystyle\mathcal{M}=\bigcup_{t\in[0,T)}\Sigma_{t}\times\\{t\\}$ is a properly embedded hypersurface in $\mathbb{R}^{n+1}\times[0,T)$ asymptotic to $\mathcal{C}\times[0,T)$, in the sense that at each time slice $t$, $\Sigma_{t}$ is $C^{2,\alpha}$-asymptotic to $\mathcal{C}$, as we have demonstrated in Proposition 2.4. Let $\displaystyle S=\\{s\in[0,\infty)\mid(\mathcal{M}_{s}^{+})^{*}>\mathcal{M}_{s}^{-},(\mathcal{M}_{s}^{+})^{t}\text{ is graphical over $(\Pi_{s})^{t}$ for $t\in[0,T]$}\\}.$ Here by graphical we mean that $(\mathcal{M}_{s}^{+})^{t}$ can be written as a normal graph over $(\Pi_{s})^{t}$.
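The reflection and comparison operations above are elementary; a small numpy sketch (illustrative only, for finite point clouds in spacetime) of the reflection $A^{*}_{s}$ and the ordering $A>B$:

```python
import numpy as np

def reflect(A, s):
    """A*_s: reflect spacetime points (x, x_{n+1}, t) across x_{n+1} = s."""
    R = np.array(A, dtype=float)
    R[:, -2] = 2.0 * s - R[:, -2]  # flip the (n+1)-st spatial coordinate
    return R

def above(A, B):
    """A > B: over any common base point (x, t), heights in A exceed those in B."""
    for a in A:
        for b in B:
            if np.allclose(a[:-2], b[:-2]) and np.isclose(a[-1], b[-1]):
                if not a[-2] > b[-2]:
                    return False
    return True

# Points (x1, x2, t) in R^2 x [0, inf); reflect across the level s = 1.
A = np.array([[0.0, 3.0, 0.5], [1.0, 2.5, 0.5]])
B = np.array([[0.0, 0.5, 0.5], [1.0, 0.0, 0.5]])
A_star = reflect(A, s=1.0)  # heights 3.0, 2.5 become -1.0, -0.5
assert above(B, A_star) and not above(A_star, B)
print("reflected heights:", A_star[:, -2])
```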
Alternatively, since our solution is smooth we can require that the vertical vector $e_{n+1}=(0,\ldots,0,1)$ is not contained in the tangent space at any point $p\in(\mathcal{M}_{s}^{+})^{t}$. We first note that since the cone is symmetric across $\Pi_{0}$ we have $(\mathcal{C}_{s}^{+})^{*}>\mathcal{C}_{s}^{-}$ for every $s>0$. It is not hard to see that $S$ is an open set. In fact we just need to show that $e_{n+1}$ is not in the tangent space at infinity for $(\mathcal{M}_{s}^{+})^{t}$. By Proposition 2.4, $\displaystyle\lim_{\rho\to 0^{+}}\rho(\mathcal{M}_{s}^{+})^{t}=\mathcal{C}$ in $C^{2,\alpha}_{\mathrm{loc}}(\mathbb{R}^{n+1}\setminus\\{0\\})$, so eventually the tangent space at a point $p\in(\mathcal{M}_{s}^{+})^{t}$ will lie close to the tangent space of $\mathcal{C}$. Since the cone is not vertical, $e_{n+1}$ is clearly not contained in the tangent space at any point on $\mathcal{C}$, so by the convergence there is $\varepsilon>0$ such that $e_{n+1}$ is not in the tangent space at any point $p\in(\mathcal{M}_{s-\varepsilon}^{+})^{t}$. Figure 2. Boundary touching. Figure 3. Interior touching. By Lemma 3.1, there is $R>0$ such that $\Sigma_{1}\setminus B_{R}(0)=\mathcal{M}^{1}\setminus B_{R}(0)$ is graphical over $(\Pi_{0})^{1}$. Moreover this graphical scale scales parabolically, so for $s>T^{2}R$ we have that $(\mathcal{M}_{s}^{+})^{t}$ is graphical over $(\Pi_{s})^{t}$ for each $t\in[0,T]$. It is also evident that $(\mathcal{M}_{s}^{+})^{t}$ is asymptotic to the translated cone $\mathcal{C}+2se_{n+1}$, so when $s$ is large enough the reflected part is disjoint from $(\mathcal{M}_{s}^{-})_{0}^{+}$ (that is, the part of $\mathcal{M}$ that lies below level $s$ and above $0$). Together with graphicality this implies $(\mathcal{M}_{s}^{+})^{*}>\mathcal{M}_{s}^{-}$ for sufficiently large $s$, so $S$ is not empty.
Finally we show $S$ is closed. Suppose for a contradiction that $(s,\infty)\subset S$ (clearly $s\in S$ implies $[s,\infty)\subset S$) but $s\not\in S$. At level $s$, either the graphicality condition or the set comparison condition $(\mathcal{M}_{s})^{*}>\mathcal{M}_{s}^{-}$ is violated. In the first case, by parabolic rescaling we may assume for simplicity that the nongraphicality happens first at time $t=1$. This means that there is $p\in(\mathcal{M}_{s}^{+})^{1}$ such that $e_{n+1}\in T_{p}(\mathcal{M}_{s}^{+})^{1}$. Thus the tangent planes of $(\mathcal{M}_{s}^{+})^{*}$ and $\mathcal{M}_{s}^{-}$ at the point $(p,1)$ must coincide. If we choose $r$ small enough we can ensure that $(\mathcal{M}_{s}^{+})^{*}$ and $\mathcal{M}_{s}^{-}$ are graphical over $Z_{r}^{-}(p,1)\cap\\{x_{n+1}\leq s\\}$. Since the tangent planes coincide we can apply the Hopf lemma (Lemma 2.3) to $(\mathcal{M}_{s}^{+})^{*}$ and $\mathcal{M}_{s}^{-}$ to conclude that these hypersurfaces agree on an open neighborhood of $(p,1)$. Moreover, the set $(\mathcal{M}_{s}^{+})^{*}\cap\mathcal{M}_{s}^{-}$ is closed by definition and open by the maximum principle, so at least a connected component of $(\mathcal{M}_{s}^{+})^{*}$ must coincide with a component of $\mathcal{M}_{s}^{-}$. This implies that $(\mathcal{M}_{s}^{+})^{*}$ is asymptotic to both of the cones $\mathcal{C}\times[0,\infty)$ and $(\mathcal{C}+2se_{n+1})\times[0,\infty)$, a contradiction. In the second case, $s$ is necessarily the first level such that $(\mathcal{M}_{s})^{+}\cap\mathcal{M}_{s}^{-}\neq\emptyset$, and the graphicality condition implies that $(\mathcal{M}_{s}^{+})^{*}$ and $\mathcal{M}_{s}^{-}$ must touch at an interior point $(p,t)$ of the flow. Again for $r$ small enough they are both graphical solutions of the MCF, so the maximum principle (Lemma 2.2) implies that $(\mathcal{M}_{s}^{+})^{*}$ and $\mathcal{M}_{s}^{-}$ agree on an open neighborhood of $(p,t)$.
Since we can do this for any point $(p,t)\in(\mathcal{M}_{s}^{+})^{*}\cap\mathcal{M}_{s}^{-}$, at least a connected component of $(\mathcal{M}_{s}^{+})^{*}$ coincides with a component of $\mathcal{M}_{s}^{-}$, a contradiction. We have thus proved that $S$ is open, non-empty and closed, and thus $S=(0,\infty)$ (note that since we have a strict inequality in our setup, we cannot conclude directly that $0\in S$). Note that we can run a similar argument starting from the bottom half, yielding $(\mathcal{M}_{s}^{-})^{*}>\mathcal{M}_{s}^{+}$ for any $s<0$. Hence there must be a point of touching at $s=0$. If the intersection is in the interior we can apply the maximum principle (Lemma 2.2) to conclude that $(\mathcal{M}_{0}^{+})^{*}=\mathcal{M}_{0}^{-}$, i.e. $\mathcal{M}$ is symmetric under the reflection across $\Pi_{0}$. The same conclusion holds if the intersection is along the boundary, by using the Hopf lemma (Lemma 2.3) instead. ∎ ###### Remark. In the proof of Theorem 1.1 we actually proved the stronger result that reflection symmetry is preserved for self-expanders asymptotic to a double cone. This yields, for example, that when $m_{1}=m_{2}$ in Equation 1.1, any self-expander asymptotic to $\mathcal{C}$ is symmetric under the reflection across the plane $\\{x_{1}=0\\}$. We do not expect this to hold for cones whose links have more than 2 components. ## References * AIC [95] Sigurd Angenent, Tom Ilmanen, and David L. Chopp. A computed example of nonuniqueness of mean curvature flow in $\mathbb{R}^{3}$. Communications on Partial Differential Equations, 20:1937–1958, 1995. * BM [20] Jacob Bernstein and Francesco Maggi. Symmetry and rigidity of minimal surfaces with Plateau-like singularities. https://arxiv.org/abs/2003.01784, 2020. Preprint. * BW [17] Jacob Bernstein and Lu Wang. The space of asymptotically conical self-expanders of mean curvature flow. https://arxiv.org/abs/1712.04366, 2017. Preprint. * BW [18] Jacob Bernstein and Lu Wang.
An integer degree for asymptotically conical self-expanders. https://arxiv.org/abs/1807.06494, 2018. Preprint. * [5] Jacob Bernstein and Lu Wang. Smooth Compactness for Spaces of Asymptotically Conical Self-Expanders of Mean Curvature Flow. International Mathematics Research Notices, 2019. * [6] Jacob Bernstein and Lu Wang. Topological uniqueness for self-expanders of small entropy. https://arxiv.org/abs/1902.02642, 2019. Preprint. * BW [20] Jacob Bernstein and Lu Wang. A mountain-pass theorem for asymptotically conical self-expanders. https://arxiv.org/abs/2003.13857, 2020. Preprint. * CHH [18] Kyeongsu Choi, Robert Haslhofer, and Or Hershkovits. Ancient low entropy flows, mean convex neighborhoods, and uniqueness. https://arxiv.org/abs/1810.08467, 2018. Preprint. * CHHW [19] Kyeongsu Choi, Robert Haslhofer, Or Hershkovits, and Brian White. Ancient asymptotically cylindrical flows and applications. https://arxiv.org/abs/1910.00639, 2019. Preprint. * Cho [14] Otis Chodosh. Expanding Ricci solitons asymptotic to cones. Calculus of Variations and Partial Differential Equations, 51:1–15, 2014. * Din [19] Qi Ding. Minimal cones and self-expanding solutions for mean curvature flows. https://arxiv.org/abs/1503.02612, 2019. Preprint. * EH [91] Klaus Ecker and Gerhard Huisken. Interior estimates for hypersurfaces moving by mean curvature. Inventiones mathematicae, 105:547–569, 1991. * FM [16] Frederick Fong and Peter McGrath. Rotational symmetry of asymptotically conical mean curvature flow self-expanders. Communications in Analysis and Geometry, 27(3), 2016. * Hel [12] Sebastian Helmensdorfer. A model for the behavior of fluid droplets based on mean curvature flow. SIAM Journal on Mathematical Analysis, 44(3):1359–1371, 2012. * HHW [20] Robert Haslhofer, Or Hershkovits, and Brian White. Moving plane method for varifolds and applications. https://arxiv.org/abs/2003.01505, 2020. Preprint. * INS [19] Tom Ilmanen, André Neves, and Felix Schulze.
On short time existence for the planar network flow. J. Differential Geom., 111(1):39–89, 2019. * Lie [96] Gary M. Lieberman. Second Order Parabolic Differential Equations. World Scientific Publishing Co., 1996. * LSU [68] O. A. Ladyzenskaja, V. A. Solonnikov, and N. N. Ural’ceva. Linear and quasi-linear equations of parabolic type, volume 23 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1968. * MSS [15] Francisco Martín, Andreas Savas-Halilaj, and Knut Smoczyk. On the topology of translating solitons of the mean curvature flow. Calculus of Variations and Partial Differential Equations, 18(54):2853–2882, 2015. * Sch [83] Richard M. Schoen. Uniqueness, symmetry, and embeddedness of minimal surfaces. J. Differential Geom., 18(4):791–809, 1983.
# Identification of the Nitrogen Interstitial as Origin of the 3.1 eV Photoluminescence Band in Hexagonal Boron Nitride ###### Abstract Nitrogen interstitials ($\mathrm{N}_{\mathrm{i}}$) have the lowest formation energy among intrinsic defects of hexagonal boron nitride (hBN) under n-type and N-rich conditions. Using an optimized hybrid functional, which reproduces the gap and satisfies the generalized Koopmans condition, an $\mathrm{N}_{\mathrm{i}}$ configuration is found which is lower in energy than the ones reported so far. The (0/-) charge transition level is also much deeper, so $\mathrm{N}_{\mathrm{i}}$ acts as a very efficient compensating center in n-type samples. Its calculated photoluminescence (PL) at 3.0 eV agrees well with the position of an N-sensitive band measured at 3.1 eV. It has also been found that the nitrogen vacancy ($\mathrm{V}_{\mathrm{N}}$) cannot be the origin of the three boron center (TBC) electron paramagnetic resonance (EPR) signal, and in thermal equilibrium it cannot even exist in n-type samples. Elham Khorasani, Thomas Frauenheim, Bálint Aradi* and Peter Deák* Elham Khorasani Bremen Center for Computational Materials Science, University of Bremen, P.O. Box 330440, D-28334 Bremen, Germany. Prof. Dr. Thomas Frauenheim Beijing Computational Science Research Center (CSRC), 100193 Beijing, China. Shenzhen JL Computational Science and Applied Research Institute, 518110 Shenzhen, China. Bremen Center for Computational Materials Science, University of Bremen, P.O. Box 330440, D-28334 Bremen, Germany. Dr. Bálint Aradi Bremen Center for Computational Materials Science, University of Bremen, P.O. Box 330440, D-28334 Bremen, Germany. Email Address<EMAIL_ADDRESS> Prof. Dr. Peter Deák Bremen Center for Computational Materials Science, University of Bremen, P.O. Box 330440, D-28334 Bremen, Germany.
Email Address<EMAIL_ADDRESS> ## 1 Introduction Materials with well-defined defect centers are highly promising for next generation devices, with applications ranging from electronics and photonics to quantum computing [1, 2]. Hexagonal boron nitride (hBN) is a wide-band-gap semiconductor that exhibits a large number of band-gap levels which profoundly impact its electronic and optical properties. These can act as recombination centers and emit light within a specific energy range. In recent years, numerous color centers have been reported in hBN [3, 4]. However, the physical origin of some of these centers is not yet fully understood [5, 6]. First-principles calculations can play a crucial role in identifying the defects and providing accurate information about their properties. The luminescence properties of hBN have been studied extensively in recent years [5, 7, 8, 9, 10]. Museur et al. reported three photoluminescence (PL) bands at the energies of 5.3 eV, 3.75 eV, and 3.1 eV for pyrolytic hBN (pBN) samples [7]. pBN samples are high-purity samples, free of carbon and oxygen. The emission bands around 5.3 eV and 3.75 eV were found to be donor-acceptor pair (DAP) type transitions. The nature of the PL band observed at 3.1 eV in pBN samples was not elucidated. Du et al. [8] investigated the PL spectra for a set of hBN epilayers grown by metal organic chemical vapor deposition (MOCVD) under different ammonia ($\mathrm{NH}_{3}$) flow rates from 0.2 to 1.5 standard liters per minute (SLM). They observed four emission peaks at energies of 4.12 eV, 3.92 eV, 3.72 eV, and 5.37 eV for hBN under the $\mathrm{NH}_{3}$ flow rate of 0.2 SLM. The first three peaks were interpreted as a zero phonon line at 4.12 eV with two phonon replicas. They observed a decrease in the emission intensity of the peak at 4.12 eV upon increasing the $\mathrm{NH}_{3}$ flow rate, while the change of the peak at 5.37 eV was found to be small.
They also observed the appearance of the peak at 3.1 eV as the flow rate of the $\mathrm{NH}_{3}$ was increased to 1.5 SLM. Besides PL spectroscopy, electron paramagnetic resonance (EPR) is a powerful technique often used for identifying and studying defects in semiconductors and insulators [11]. Two types of paramagnetic centers were identified in hBN [6, 12, 13, 14]. In one type, an unpaired electron interacts with three equivalent boron nuclei, producing a 10-line EPR spectrum. This type is referred to as the three boron center (TBC). In the other type, an unpaired electron interacts with a single boron nucleus, which gives rise to a 4-line EPR spectrum. This is referred to as the one boron center (OBC). Based on experimental observations, the carbon substitutional ($\mathrm{C}_{\mathrm{N}}$) and the nitrogen vacancy ($\mathrm{V}_{\mathrm{N}}$) were proposed [6, 14, 15] to model the TBC. In the former, an unpaired electron is trapped at the carbon atom. This was referred to as a carbon-associated TBC and was confirmed theoretically [15]. In the $\mathrm{V}_{\mathrm{N}}$ model, proposed on experimental grounds [6], the unpaired electron is assumed to be trapped in the nitrogen vacancy. This was referred to as an electron-irradiation-produced TBC. However, its nature is still controversial. We have applied an optimized hybrid functional to calculate defects in hBN. Our calculations show that the observed PL at 3.1 eV in bulk hBN is due to the recombination of an electron with a hole trapped at a nitrogen interstitial, $\mathrm{N}_{\mathrm{i}}$. Additionally, we have found that $\mathrm{V}_{\mathrm{N}}$ cannot be the origin of the TBC center. In fact, in n-type samples in equilibrium, $\mathrm{V}_{\mathrm{N}}$ turns out to be less stable than $\mathrm{N}_{\mathrm{i}}$ even under extreme N-poor conditions, indicating that $\mathrm{V}_{\mathrm{N}}$ can be created only by irradiation.
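The multiplicities quoted above follow from the standard hyperfine rule: $n$ equivalent nuclei of spin $I$ split an EPR line into $2nI+1$ components. A quick sketch (assuming the $^{11}$B nuclear spin $I=3/2$):

```python
from fractions import Fraction

def hyperfine_lines(n_nuclei, I):
    """Number of EPR lines from n equivalent nuclei of spin I: 2*n*I + 1."""
    return int(2 * n_nuclei * Fraction(I) + 1)

I_B11 = Fraction(3, 2)  # nuclear spin of 11B
assert hyperfine_lines(3, I_B11) == 10  # three boron center (TBC)
assert hyperfine_lines(1, I_B11) == 4   # one boron center (OBC)
print("TBC:", hyperfine_lines(3, I_B11), "lines; OBC:", hyperfine_lines(1, I_B11), "lines")
```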
## 2 Computational method

It was shown that the screened hybrid functional of Heyd, Scuseria, and Ernzerhof (HSE) [16, 17], with parameters tuned to reproduce the relative position of the quasi-particle band edges and to satisfy the generalized Koopmans' theorem (gKT), is capable of providing the formation energy, the charge transition levels and the hyperfine interaction of defects very accurately in traditional bulk semiconductors [18, 19, 20]. In our previous work, we have shown that such a functional can be optimized for layered compounds as well [21], and HSE($\alpha,\mu$) with a mixing parameter $\alpha$ = 0.3 and a screening parameter $\mu$ = 0.4 was found to be the optimal functional for bulk hBN. Our calculations in the present work have been carried out with the Vienna Ab initio Simulation Package, VASP 5.4.4, using the projector augmented wave method [22, 23, 24]. A 420 (840) eV cutoff was applied for the expansion of the wave functions (charge density) in hBN. A convergence criterion of $10^{-4}$ eV was applied for the self-consistent electronic energy. Van der Waals interactions between the layers were taken into account by the Tkatchenko-Scheffler method [25], using $s_{\mathrm{R}}$ = 0.96. The equilibrium geometry was determined for the primitive unit cell with a $\Gamma$-centered $6\times 6\times 6$ Monkhorst-Pack (MP) $k$-point set [26], based on constant-volume relaxations and fitting to Murnaghan's equation of state [27]. We describe bulk hBN using an orthogonal supercell of 120 atoms ($5\textbf{a}_{1}$, $3\textbf{a}_{1}+6\textbf{a}_{2}$, $\textbf{a}_{3}$) at the calculated lattice constants (see our previous work [21]), using the $\Gamma$-point approximation. (Here $\textbf{a}_{1}$, $\textbf{a}_{2}$, and $\textbf{a}_{3}$ are the primitive unit vectors.) The geometries of the defects in the supercell were relaxed at fixed lattice constants, using a force criterion of 0.01 eV/Å.
We have calculated the formation energy and the transition levels of the intrinsic defects $\mathrm{N}_{\mathrm{i}}$ and $\mathrm{V}_{\mathrm{N}}$. The chemical potentials $\mu_{\mathrm{N}}$ and $\mu_{\mathrm{B}}$ are related in equilibrium as $\mu_{\mathrm{BN}}=\mu_{\mathrm{B}}+\mu_{\mathrm{N}}$, where $\mu_{\mathrm{BN}}$ is the energy of hBN per formula unit. For N-rich conditions, the chemical potential of nitrogen was taken as the energy of the nitrogen atom in the molecule (at a temperature of 300 K and a pressure of $10^{5}$ Pa). For the B-rich case, $\mu_{\mathrm{B}}$ corresponds to the energy of the B atom in the solid phase. For charged defects, total energies were corrected a posteriori, using the SLABCC code [28, 29] to eliminate the artificial interaction between the repeated charges. SLABCC is based on the scheme proposed by Komsa and Pasquarello [30]. In principle, the static dielectric constant $\varepsilon_{0}$ should be used for the correction if the ionic cores are allowed to relax. However, it was shown that, due to the explicit screening in the supercell, the high-frequency value $\varepsilon^{\infty}$ gives a better approximation [31]; therefore $\varepsilon^{\infty}_{\perp}=4.95$ and $\varepsilon^{\infty}_{\parallel}=4.10$ [32] were used.

To explain the observed PL in bulk hBN, we have investigated electron recombination with a hole trapped at $\mathrm{N}_{\mathrm{i}}$. Within the accuracy of the calculation (0.1 eV), a shallow donor state cannot be distinguished energetically from the bottom of the conduction band. Therefore, we simulate a donor-acceptor recombination from a shallow donor to a deep acceptor by the recombination of a bound exciton, i.e., from the conduction band edge to the acceptor level [8]. To calculate the PL, we first relaxed the equilibrium geometry of the bound exciton under the constraint of the orbital occupations, i.e., with one hole at the defect level and one electron in the conduction band minimum (CBM).
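The formation energies and transition levels discussed here follow the standard supercell formalism, $E_{f}(X^{q})=E_{\mathrm{tot}}(X^{q})-E_{\mathrm{tot}}(\mathrm{bulk})+\sum_{i}n_{i}\mu_{i}+q(E_{\mathrm{VBM}}+E_{F})+E_{\mathrm{corr}}$. The sketch below illustrates the bookkeeping only; the numerical energies are placeholder values (not our calculated totals), chosen so that the crossing reproduces the reported $\mathrm{E}(+/0)$ = 0.94 eV:

```python
def formation_energy(E_def, E_bulk, dn_mu, q, E_F, E_vbm=0.0, E_corr=0.0):
    """Formation energy of a defect in charge state q (standard formalism).

    dn_mu: sum over species of n_i * mu_i, with n_i > 0 for removed atoms
           (vacancies) and n_i < 0 for added atoms (interstitials).
    E_F is measured from the valence band maximum E_vbm; E_corr is the
    a-posteriori charge correction (e.g. from SLABCC).
    """
    return E_def - E_bulk + dn_mu + q * (E_vbm + E_F) + E_corr

def transition_level(Ef_q1_at0, q1, Ef_q2_at0, q2):
    """Fermi level at which charge states q1 and q2 have equal formation
    energy, from formation energies evaluated at E_F = 0."""
    return (Ef_q2_at0 - Ef_q1_at0) / (q1 - q2)

# Placeholder formation energies (eV, at E_F = 0) that reproduce E(+/0) = 0.94 eV:
Ef_plus, Ef_neutral = 1.00, 1.94
E_level = transition_level(Ef_plus, +1, Ef_neutral, 0)  # adiabatic (+/0) level
```

At $E_F=$ `E_level` the two charge states are degenerate, which is exactly how the adiabatic levels quoted below are defined.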
To account for the delocalization of the electron, the total energy of the initial state was recalculated with a (non-$\Gamma$-centered) $2\times 2\times 2$ Monkhorst-Pack set at the relaxed geometry of the bound exciton in the $\Gamma$ approximation. The energy of the final state after recombination was calculated at fixed geometry, using the same $2\times 2\times 2$ $k$-point set. The PL energy is the difference between the two total energies.

The hyperfine interaction was calculated for $\mathrm{V}_{\mathrm{N}}$. Boron has two isotopes, ${}^{10}{\mathrm{B}}$ and ${}^{11}{\mathrm{B}}$, with natural abundances of $18.83\%$ and $81.17\%$, respectively. These isotopes have different nuclear spins and magnetic moments. In this work, the hyperfine splitting was compared to the experimental data measured in ${}^{11}{\mathrm{B}}$-enriched samples. The presented results were obtained for the isotope ${}^{11}{\mathrm{B}}$ with $I=3/2$. Nuclear gyromagnetic ratios of 13.66 and 3.08 MHz/T were used for boron and nitrogen, respectively [33].

## 3 Results and Discussion

### 3.1 Nitrogen interstitial

In earlier works, the nitrogen interstitial was predicted to be in a [001] split-interstitial configuration with a lattice N atom [34, 35], and was found to be the intrinsic defect with the lowest formation energy under N-rich conditions in n-type material. We have found, however, that the configuration shown in Figure 1 is lower in energy by 0.70 eV. The axis of the split interstitial tilts away from [001], so both nitrogen atoms are ideally threefold coordinated. The $\mathrm{N}_{\mathrm{i}}$-$\mathrm{N}$ bond length of 1.38 Å corresponds to a single bond, and $\mathrm{N}_{\mathrm{i}}$ also binds to a boron atom of the top layer. Simple electron counting shows that the $p$-orbital of $\mathrm{N}_{\mathrm{i}}$, which is orthogonal to the plane of its three bonds, contains only one electron.
Figure 1: Atomic structure of the intrinsic defect $\mathrm{N}_{\mathrm{i}}$ in the lowest-energy configuration. The $\mathrm{N}_{\mathrm{i}}$-$\mathrm{N}$ bond length of 1.38 Å corresponds to a single bond. $\mathrm{N}_{\mathrm{i}}$ also binds to a boron atom of the top layer.

Figure 2 shows the formation energy calculated for the intrinsic defects $\mathrm{N}_{\mathrm{i}}$ and $\mathrm{V}_{\mathrm{N}}$ in different charge states, as a function of the Fermi level position between the valence band maximum (VBM) and the conduction band minimum (CBM). The adiabatic charge transition levels for $\mathrm{N}_{\mathrm{i}}$ are obtained at $\mathrm{E}(+/0)=$ 0.94 eV and $\mathrm{E}(0/-)=$ 2.68 eV with respect to the valence band edge. The latter is much deeper than the value obtained in a previous work [34]. Weston et al. predicted $\mathrm{N}_{\mathrm{i}}$ to be the major compensating center in n-type samples [34]. Our finding confirms this, and the lower formation energy indicates an even stronger compensating effect.

Figure 2: Formation energies of the intrinsic point defects $\mathrm{N}_{\mathrm{i}}$ and $\mathrm{V}_{\mathrm{N}}$ in bulk hBN as a function of the Fermi level under (left) N-rich and (right) N-poor conditions. Solid blue and dashed red lines represent the defects $\mathrm{N}_{\mathrm{i}}$ and $\mathrm{V}_{\mathrm{N}}$, respectively. The energy of the valence band maximum has been set to zero.

The PL energy was calculated for the configuration shown in Figure 1 as described in Section 2. Assuming the donor level to be 100 meV below the CBM, our calculated PL energy is 3.0 eV. This can explain the origin of the observed N-related peak at 3.1 eV in hBN in Refs. [7, 8].

### 3.2 Nitrogen Vacancy

The similarities between optical emitters in hBN and the nitrogen-vacancy ($\mathrm{NV}$) center in diamond have brought a lot of attention to vacancy defects in hBN.
Earlier experimental studies proposed $\mathrm{V}_{\mathrm{N}}$, produced by electron irradiation, as the source of the TBC [6]. $\mathrm{V}_{\mathrm{N}}$ introduces a gap level with an unpaired electron. Our calculations show that the electron resides on $p$-orbitals orthogonal to the lattice planes, with a symmetrical charge distribution on the three neighboring boron atoms. Figure 3 shows the spin distribution. The calculated hyperfine coupling constants $A$ are given in Table 1. The calculated average principal value of the hyperfine interaction tensor is 34.23 MHz, significantly smaller than the value of 117.06 MHz measured for the TBC in carbon-free samples after irradiation. It has been shown that core spin polarization may have a significant effect on the calculated hyperfine interaction, and taking it into account provides accurate results in various semiconductors [36]. When the core contribution is taken into account, the calculated value of 34.23 MHz, which involved only the valence electronic spin density, is reduced to 16.09 MHz, demonstrating that $\mathrm{V}_{\mathrm{N}}$ cannot be the origin of the three boron center.

The nitrogen interstitial defect $\mathrm{N}_{\mathrm{i}}$ is expected to be more stable than $\mathrm{V}_{\mathrm{N}}$ in N-rich samples. This is also confirmed by Figure 2. However, as demonstrated by the same figure, even under extreme N-poor conditions, $\mathrm{V}_{\mathrm{N}}$ can exist only in p-type samples in thermal equilibrium. In fact, there is no hard experimental evidence for the existence of $\mathrm{V}_{\mathrm{N}}$. In earlier PL studies [5, 8, 9, 37], the nitrogen vacancy was suggested as a possible source of the 4.1 eV emission band in bulk hBN. Later on, however, a theoretical study showed that the carbon dimer defect $\mathrm{C}_{\mathrm{B}}\mathrm{C}_{\mathrm{N}}$ is responsible for the 4.1 eV emission in hBN [38]. Jin et al. [39] used ultrahigh-resolution transmission electron microscope imaging.
They created defects by electron-beam irradiation and found that the created defects are of the boron vacancy ($\mathrm{V}_{\mathrm{B}}$) type.

Table 1: Hyperfine coupling constants for the model defect $\mathrm{V}_{\mathrm{N}}$, in MHz. The average value $A_{\mathrm{ave}}$ was calculated as $(A_{\mathrm{xx}}+A_{\mathrm{yy}}+A_{\mathrm{zz}})/3$.

| $A_{\mathrm{xx}}$ | $A_{\mathrm{yy}}$ | $A_{\mathrm{zz}}$ | $A_{\mathrm{ave}}$ | $A_{\mathrm{ave}}$ (experimental [6]) |
|---|---|---|---|---|
| 21.21 | 20.81 | 60.65 | 34.23 | 117.06 |

Figure 3: The spin density calculated for the $\mathrm{V}_{\mathrm{N}}$ defect as a model for the TBC in bulk hBN (isovalue: 0.002 Å$^{-3}$) in (left) top view and (right) side view. The spin density concentrated on the three neighbouring boron atoms produces the calculated hyperfine couplings.

## 4 Conclusion

We have calculated the intrinsic defects, nitrogen interstitial and nitrogen vacancy, using the HSE functional ($\alpha$ = 0.3, $\mu$ = 0.4) optimized for hBN in our previous work. We have found an $\mathrm{N}_{\mathrm{i}}$ configuration which is considerably lower in energy than the ones reported so far. The calculated formation energies show that $\mathrm{N}_{\mathrm{i}}$ is the intrinsic defect with the lowest formation energy under N-rich conditions. Its acceptor level is deep and behaves as a compensating center. The calculated PL energy of 3.0 eV for $\mathrm{N}_{\mathrm{i}}$ is in good agreement with the experimentally observed N-related band at 3.1 eV [7, 8]. We have investigated the nitrogen vacancy as well. Our results indicate that $\mathrm{V}_{\mathrm{N}}$ is less stable than $\mathrm{N}_{\mathrm{i}}$ in n-type samples even under extreme N-poor conditions and should be observable only in irradiated samples. The hyperfine interactions have been calculated for $\mathrm{V}_{\mathrm{N}}$, which was suspected to be a source of the three boron centers produced by electron irradiation in carbon-free samples.
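The averaging used in Table 1 is a straightforward check on the principal values (numbers taken from the table, in MHz):

```python
# Average hyperfine coupling, A_ave = (A_xx + A_yy + A_zz) / 3, from the
# principal values calculated for V_N (Table 1, MHz).
A_xx, A_yy, A_zz = 21.21, 20.81, 60.65
A_ave = (A_xx + A_yy + A_zz) / 3.0  # ~34.22; 34.23 in Table 1 after rounding
```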
Our results indicate that $\mathrm{V}_{\mathrm{N}}$ cannot be the origin of the three boron center in bulk hBN.

## Acknowledgements

This work was supported by the Deutsche Forschungsgemeinschaft (DFG) within the research training group (RTG) 2247.

## References

* [1] V. Ivády, G. Barcza, G. Thiering, S. Li, H. Hamdi, J.-P. Chou, Ö. Legeza, A. Gali, _npj Comput. Mater._ 2020, _6_ (1), 1.
* [2] S. Kim, J. E. Fröch, J. Christian, M. Straw, J. Bishop, D. Totonjian, K. Watanabe, T. Taniguchi, M. Toth, I. Aharonovich, _Nat. Commun._ 2018, _9_ (1), 1.
* [3] T. T. Tran, C. Elbadawi, D. Totonjian, C. J. Lobo, G. Grosso, H. Moon, D. R. Englund, M. J. Ford, I. Aharonovich, M. Toth, _ACS Nano_ 2016, _10_ (8), 7331.
* [4] D. Wong, J. Velasco, L. Ju, J. Lee, S. Kahn, H.-Z. Tsai, C. Germany, T. Taniguchi, K. Watanabe, A. Zettl, et al., _Nat. Nanotechnol._ 2015, _10_ (11), 949.
* [5] L. Museur, E. Feldbach, A. Kanaev, _Phys. Rev. B_ 2008, _78_ (15), 155204.
* [6] E. Andrei, A. Katzir, J. Suss, _Phys. Rev. B_ 1976, _13_ (7), 2831.
* [7] L. Museur, A. Kanaev, _J. Mater. Sci._ 2009, _44_ (10), 2560.
* [8] X. Du, J. Li, J. Lin, H. Jiang, _Appl. Phys. Lett._ 2015, _106_ (2), 021110.
* [9] M. Silly, P. Jaffrennou, J. Barjon, J.-S. Lauret, F. Ducastelle, A. Loiseau, E. Obraztsova, B. Attal-Tretout, E. Rosencher, _Phys. Rev. B_ 2007, _75_ (8), 085205.
* [10] T. Vuong, G. Cassabois, P. Valvin, A. Ouerghi, Y. Chassagneux, C. Voisin, B. Gil, _Phys. Rev. Lett._ 2016, _117_ (9), 097402.
* [11] G. D. Watkins, in _Identification of Defects in Semiconductors_ (Semiconductors and Semimetals, vol. 51A), edited by M. Stavola, Academic Press, San Diego, 1999.
* [12] D. Geist, G. Römelt, _Solid State Commun._ 1964, _2_ (5), 149.
* [13] A. Katzir, J. Suss, A. Zunger, A. Halperin, _Phys. Rev. B_ 1975, _11_ (6), 2370.
* [14] A. Moore, L. Singer, _J. Phys. Chem. Solids_ 1972, _33_ (2), 343.
* [15] A. Sajid, J. R. Reimers, M. J. Ford, _Phys. Rev. B_ 2018, _97_ (6), 064101.
* [16] J. Heyd, G. E. Scuseria, M. Ernzerhof, _J. Chem. Phys._ 2003, _118_ (18), 8207.
* [17] J. Heyd, G. E. Scuseria, M. Ernzerhof, _J. Chem. Phys._ 2006, _124_ (21), 219906.
* [18] P. Deák, M. Lorke, B. Aradi, T. Frauenheim, _J. Appl. Phys._ 2019, _126_ (13), 130901.
* [19] P. Deák, Q. D. Ho, F. Seemann, B. Aradi, M. Lorke, T. Frauenheim, _Phys. Rev. B_ 2017, _95_ (7), 075208.
* [20] M. Han, Z. Zeng, T. Frauenheim, P. Deák, _Phys. Rev. B_ 2017, _96_ (16), 165204.
* [21] P. Deák, E. Khorasani, M. Lorke, M. Farzalipour-Tabriz, B. Aradi, T. Frauenheim, _Phys. Rev. B_ 2019, _100_ (23), 235304.
* [22] G. Kresse, J. Hafner, _Phys. Rev. B_ 1994, _49_ (20), 14251.
* [23] G. Kresse, J. Furthmüller, _Phys. Rev. B_ 1996, _54_ (16), 11169.
* [24] G. Kresse, D. Joubert, _Phys. Rev. B_ 1999, _59_ (3), 1758.
* [25] A. Tkatchenko, M. Scheffler, _Phys. Rev. Lett._ 2009, _102_, 073005.
* [26] H. J. Monkhorst, J. D. Pack, _Phys. Rev. B_ 1976, _13_ (12), 5188.
* [27] F. D. Murnaghan, _PNAS_ 1944, _30_ (9), 244.
* [28] M. F. Tabriz, B. Aradi, T. Frauenheim, P. Deák, _Comp. Phys. Comm._ 2019, _240_, 101.
* [29] M. Farzalipour, _SLABCC: Total energy correction code for charged periodic slab models_, source code available at https://github.com/MFTabriz/slabcc.
* [30] H.-P. Komsa, N. Berseneva, A. V. Krasheninnikov, R. M. Nieminen, _Phys. Rev. X_ 2014, _4_, 031044.
* [31] P. Deák, Q. Duy Ho, F. Seemann, B. Aradi, M. Lorke, T. Frauenheim, _Phys. Rev. B_ 2017, _95_, 075208.
* [32] R. Geick, C. Perry, G. Rupprecht, _Phys. Rev._ 1966, _146_ (2), 543.
* [33] _NMR Periodic Table_, data from http://www-usr.rider.edu/grushow/nmr/NMR-tutor/periodic-table/nmr-pt-frameset.html.
* [34] L. Weston, D. Wickramaratne, M. Mackoit, A. Alkauskas, C. G. Van de Walle, _Phys. Rev. B_ 2018, _97_, 214104.
* [35] V. Wang, R.-J. Liu, H.-P. He, C.-M. Yang, L. Ma, _Solid State Commun._ 2014, _177_, 74.
* [36] K. Szász, T. Hornos, M. Marsman, A. Gali, _Phys. Rev. B_ 2013, _88_, 075202.
* [37] K. Watanabe, T. Taniguchi, H. Kanda, _Nat. Mater._ 2004, _3_ (6), 404.
* [38] M. Mackoit-Sinkevičienė, M. Maciaszek, C. G. Van de Walle, A. Alkauskas, _Appl. Phys. Lett._ 2019, _115_ (21), 212101.
* [39] C. Jin, F. Lin, K. Suenaga, S. Iijima, _Phys. Rev. Lett._ 2009, _102_ (19), 195505.
# Estimating the Total Volume of Queries to a Search Engine

Fabrizio Lillo and Salvatore Ruggieri

F. Lillo is with the Department of Mathematics, University of Bologna, Italy. E-mail<EMAIL_ADDRESS>S. Ruggieri is with the Department of Computer Science, University of Pisa, Italy. E-mail<EMAIL_ADDRESS>

###### Abstract

We study the problem of estimating the total number of searches (volume) of queries in a specific domain, which were submitted to a search engine in a given time period. Our statistical model assumes that the distribution of searches follows a Zipf's law, and that the observed sample volumes are biased according to three possible scenarios. These assumptions are consistent with empirical data, with keyword research practices, and with the approximate algorithms used to count query frequencies. A few estimators of the parameters of the distribution are devised and evaluated, depending on the nature of the empirical/simulated data. For continuous data, we recommend using nonlinear least squares (NLS) regression on the top-volume queries, where the bound on the volume is obtained from the well-known Clauset, Shalizi and Newman (CSN) estimation of power-law parameters. For binned data, we propose a Chi-square minimization approach restricted to the top-volume queries, where the bound is obtained by the binned version of the CSN method. Estimates are then derived for the total number of queries and for the total volume of the population, including statistical error bounds. We apply the methods to the domain of recipes and cooking queries searched in Italian in 2017. The observed volumes of sample queries are collected from Google Trends (continuous data) and SearchVolume (binned data). The estimated total number of queries and total volume are computed for the two cases, and the results are compared and discussed.

###### Index Terms: Search engine query, Volume estimation, Zipf's law, Power law, Google Trends.
## 1 Introduction

The problem of computing the total number of searches (volume) of queries belonging to a specific domain is extremely relevant and, at the same time, challenging. For example, in web marketing the total volume ${\mathcal{V}}$ of queries quantifies the potential market of search engine advertising in the domain. In sociological or political research, instead, one might be interested in estimating the volume of queries belonging to a given macro-topic (e.g., immigration, ecology, health) searched in a given time period and from a given geographical area. In epidemiological investigations, the total volume can be used to estimate the amount of information requested by the population about topics related to a specific disease. An even more interesting quantity is the total volume ${\mathcal{V}}_{v}$ of queries searched at least $v$ times. In the web marketing example, ${\mathcal{V}}_{v}$ quantifies the potential market of queries with a minimum guaranteed volume. Relatedly, the total number of queries $N$ in the domain, or the number $N_{v}$ of queries searched at least $v$ times, is also extremely useful information for market analysis. However, the stream of queries submitted to a search engine is so massive that it is impractical to keep frequency counts of every possible query, particularly of those in the long tail of the distribution.

In this paper, we study the problem of estimating the total volume of queries submitted to a search engine for a specific domain in a given time period. We design estimation methods from sample data and experiment with them using both simulated and real data. As a specific example, we consider empirical data in the domain of recipes and cooking, which consists of queries with the name of the recipe of a dish, excluding drinks. The advantage over other domains is that it is relatively easy to collect sample recipes and to validate whether a given text is a recipe or not.
In particular, we crawled popular websites of Italian recipes and cooking, collecting a sample of more than 120K distinct queries. We then resorted to Search Engine Optimization (SEO) tools to obtain estimates of the number of searches to Google for each single query in the sample during the whole year 2017. We call such an estimate the observed volume of a query. We collected observed volumes from Google Trends (https://trends.google.com) and SearchVolume (https://searchvolume.io).

Figure 1: Empirical rank-volume distribution (scaled Google Trends estimates) for the investigated dataset (see Section 7 for details). Best viewed in color.

Our key problem is to estimate the total volume of the queries in the whole population, starting from a possibly biased empirical sample of observed volumes for a small set of queries. We start from the basic assumption that the rank-volume distribution of the whole population of queries (i.e., observed and unobserved) follows a Zipf's law. This assumption is supported by previous work [1] and by the empirical rank-volume distribution of Figure 1 (the dataset is described in detail in Section 7). Observed volumes may be biased. Bias in observed volumes can be attributed to the adoption by SEO tools of sampling strategies and/or approximate counting techniques [2], e.g., count-min sketch summaries [3]. Such strategies favor the volume estimation of popular queries over that of the ones in the long tail of the distribution. This yields the visible drop in volume in the tail of the empirical distribution of Figure 1. For instance, only 18.5K queries are assigned an observed volume estimate by Google Trends, and only 12K queries by SearchVolume. Queries with low volume are monitored with lower probability by SEO tools, which then return no observed volume for them. We model this behavior by assuming that the sample of observed volumes is not uniform, but depends on the rank of queries over the population (non-uniform sampling).
Moreover, in order to account for approximations in the SEO tool data, we additionally assume that the observed volumes are noisy, and discuss two specific sampling schemes (noisy and sketchy sampling). The parameters of the Zipf distribution are estimated by a variant of nonlinear least squares (NLS) regression for continuous data, and of Chi-square optimization for binned data. Simulations show that such estimators perform better than alternative approaches based on power-law parameter estimation from continuous [4] and binned [5] data, respectively. We then derive estimators of the total volumes ${\mathcal{V}}$ and ${\mathcal{V}}_{v}$, and of the numbers of (distinct) queries $N$ and $N_{v}$, including closed-form formulas for the statistical errors of such estimators. In summary, this paper makes the following contributions:

* we formalize the problem of estimating the total volume of queries submitted to a search engine, and propose a statistical model which is consistent with empirical data;
* we design methods to infer the parameters of the statistical model from both continuous and binned empirical data, and show that they perform well under simulated conditions;
* we apply the approach to the domain of recipes and cooking for queries in Italian, and produce estimates of the volume ${\mathcal{V}}_{v}$ of queries searched at least $v$ times in 2017, starting from empirical data collected from Google Trends (continuous data) and SearchVolume (binned data).

This paper is organized as follows. First, we report on related work in Section 2. Next, we state the addressed problem in Section 3. Section 4 provides models of sampling bias. Section 5 introduces and experiments with two estimators of the parameters of the Zipf's law, and builds on them for estimating the number and total volume of queries in the population. Section 6 extends the approach to the case of binned empirical data.
Section 7 describes the empirical data obtained from the Google Trends and SearchVolume tools, and applies the estimators proposed in this paper to such data. Finally, the conclusions summarize the contributions of the paper and open challenges for future work.

## 2 Related Work

Power-law distributions, Pareto distributions and Zipf's laws are ubiquitous in empirical data of many fields [6, 4], and in information retrieval in particular [1]. Several works [1, 7, 8, 9] have observed that the probability that a query is searched $v$ times is approximately power-law distributed, namely $P(V=v)\propto 1/v^{\alpha}$. This information on query frequencies has been used to optimize caching and distribution strategies in search engines and peer-to-peer systems. For our purposes, it implies that the probability that a query is ranked $i$-th follows a Zipf's law $P(R=i)\propto 1/i^{\beta}$ with $\beta=1/(\alpha-1)$ (see e.g., [10, 11]). Hence, we can resort to the literature on the estimation of the parameters of power-law distributions. Popular methods [1] have relied on graphical methods, straight-line approximation, and maximum-likelihood estimation. The estimated tail exponent, even in simulated data, significantly depends on the adopted method [12]. A major breakthrough was the method proposed in [4], which consists of a maximum-likelihood estimation, with a cutoff for the fitting region determined by a Kolmogorov-Smirnov test. This method is implemented in the poweRlaw package [13] of R [14]. Moreover, the method has been extended to the case of binned data in [5]. In the context of city size data, a standard approach for estimating the exponent of a Pareto distribution is to adopt ordinary (linear) least squares (OLS) regression of the log-ranks of cities on their log-population [15].
Since the Pareto distribution fits well only for the largest cities, [16] proposes a recursive-truncation method (in the same line as [4]) for determining the best cut-off of minimum city size according to the Kolmogorov-Smirnov test.

The method developed in this paper makes the parametric assumption that the distribution of searches follows a Zipf's law. The empirical data investigated in this paper appear to agree with this assumption over more than three orders of magnitude (see Figure 1), and we attribute the deviations for very low-rank queries to the sampling procedure rather than to a different functional form. An alternative explanation is that data in the population follow a different distribution (for example, an exponentially truncated Zipf's law). Indeed, in other domains (e.g., city sizes) there is an ongoing debate about the accuracy of the power-law distribution in explaining empirical data (see [16, Figure 1] for cities and, more generally, [17, 18, 19]). We believe that our approach could be extended to functional forms different from the exact Zipf's law, but this is clearly beyond the scope of the paper.

The unseen species problem asks how many biological species are present in a region, given that in an observation campaign a certain number of species with their relative frequencies have been observed. Although there are several estimators for the unseen species problem (for example, see [20]), the problem tackled here differs in an important aspect. In the unseen species problem, it is often assumed that, in the sample used to build the estimator, the observed frequencies are proportional to the true frequencies in the population. In other words, there is no bias in the construction of the sample. In our approach, the elements of the sample are chosen ex-ante, and the probability of being in the sample is not necessarily proportional to the true frequency.
Regarding SEO tools, there is little documentation on how they collect, sample and filter query logs when providing observed volumes. The biases behind the observed volumes reported by SEO tools remain unknown. Google Trends can rely on Google search engine logs. Independent SEO tools (Searchvolume, Ubersuggest, Semrush, Keywordkeg, etc.) rely on a more limited user base. [21] compares Google Trends and Baidu Index (restricted to searches from China only), and finds that their estimates are highly correlated. An advantage of Baidu Index over Google Trends is that it provides absolute estimates, not relative ones. Generally, SEO tools provide the observed volume of a given query, not aggregate volumes over a domain. One exception is Google Trends, which returns (relative) volumes for a topic, defined as "a group of terms [a.k.a., queries] that share the same concept in any language". Such volumes are aggregated over the queries in the group monitored by Google Trends, which filters out low-volume queries. Therefore, they are not estimates of the total volume of the population of queries, as considered in this paper.

Finally, this paper substantially extends the preliminary results that appeared in [22], which were restricted to the case of continuous data (Section 5). Moreover, [22] largely focuses on data collection issues, which are not part of this paper.

## 3 Problem statement

Let us assume that the population of all (distinct) queries to a search engine in a reference domain and period of time is composed of $N$ queries. The volume of a query is the number of times the query is searched in the given period of time. We model the volumes as random variables and label them in such a way that $V_{1}\geq V_{2}\geq\ldots\geq V_{N}$. In other words, when writing $V_{i}$, we mean the volume of the query of rank $i$ (these are also called order statistics).
We assume that the distribution of searches follows a Zipf's law, which predicts that the probability of observing a search of the query corresponding to $V_{i}$ is:

$f(i)=\frac{A}{i^{\beta}}$ (1)

where $A^{-1}=\sum_{i=1}^{N}i^{-\beta}=[\zeta(\beta)-\zeta(\beta,N+1)]$ is a normalizing constant, and $\zeta(\beta)=\sum_{i=1}^{\infty}i^{-\beta}$ and $\zeta(\beta,N+1)=\sum_{i=N+1}^{\infty}i^{-\beta}$ are the Riemann zeta and Hurwitz zeta functions, respectively. If ${\cal V}$ is the total volume over the population, the expected volume of the query corresponding to $V_{i}$ is $\bar{V}_{i}={\mathcal{V}}f(i)=\frac{c}{i^{\beta}}$ where:

$c=\frac{{\mathcal{V}}}{\zeta(\beta)-\zeta(\beta,N+1)}$ (2)

Thus, the expected volumes follow the law:

$\bar{V}_{i}=\frac{c}{i^{\beta}}$ (3)

The parameters $c$ and $\beta$ are called the intercept and the coefficient, respectively, and in a log-log plot Eq. 3 is a straight line. $c$ is the expected volume of the most popular query, while $\beta$ characterizes how quickly the volume of queries declines with increasing rank. Notice that the $V_{i}$'s model integer values (the number of searches), whilst the $\bar{V}_{i}$'s may not be integers. Clearly, when an observed sample of volumes $v_{1},\ldots,v_{N}$ is available, Eq. 3 is fitted to it (even though the $\bar{V}_{i}$'s are not necessarily integers), as for instance in Figure 1. Moreover, when all the volumes are observed without noise, the total volume is trivially computed as ${\cal V}=\sum_{i=1}^{N}v_{i}$ and the number $N$ of distinct queries is clearly known. The problem becomes much more complicated when: (i) the volumes are contaminated by some observation noise which biases their observed values, which we model as random variables $X_{1},\ldots,X_{N}$; (ii) not all the biased volumes are observed, but only an empirical sample $v_{1}\geq v_{2}\geq\ldots\geq v_{n}$ of size $n<N$.
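Eqs. 1-3 can be checked numerically; a minimal sketch (the parameter values are illustrative, with $\beta$ taken from Eq. 4 below):

```python
import numpy as np

def zipf_expected_volumes(V_total, beta, N):
    """Expected rank-ordered volumes of Eq. (3): Vbar_i = c / i**beta,
    with c = V_total / sum_{i=1..N} i**(-beta) as in Eq. (2).
    The finite sum equals zeta(beta) - zeta(beta, N + 1)."""
    i = np.arange(1, N + 1, dtype=float)
    c = V_total / np.sum(i ** -beta)
    return c / i ** beta

vols = zipf_expected_volumes(V_total=1e6, beta=0.7745, N=10_000)
```

By construction, the expected volumes sum back to the total volume ${\mathcal{V}}$, and consecutive volumes decay as $(i/(i+1))^{\beta}$.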
In this case the total volume ${\cal V}$ and the number of queries $N$ become random variables: the objective of this paper is to provide estimation methods for ${\cal V}$ and for $N$, assuming a specific distribution of the searches, in our case a Zipf's law. We now present how we model the sources of bias and then introduce the estimation methods.

Figure 2: Simulation of different samplings from a Zipf's law.

Figure 3: Simulations on the estimation of $c$: error bars (mean $\pm$ stdev).

Figure 4: Simulations on the estimation of $\beta$: error bars (mean $\pm$ stdev).

## 4 Biases in sampling from a Zipf

Starting from the assumption that the frequency of searches follows a Zipf distribution (see Eq. 1), we point out that the empirical distribution in Figure 1 shows a drop of volume in its tail. We investigate this below. We will consider the effects of different sampling methods from a Zipf's law, as well as of noisy samples, and check whether the conclusions are consistent with our empirical data. Clearly, uniform sampling from a Zipf's law cannot explain the drop of volume in the tail of the empirical distribution. In fact, queries in an empirical sample are rarely chosen uniformly. The approach followed in our reference domain, for instance, relies on collecting recipe names from specialized websites. The contents of such websites are typically optimized to target high-volume keywords through domain-specific keyword research. As a consequence, our empirical data suffer from an unavoidable selection bias in favor of high-volume queries. A similar bias against very low-volume queries is introduced by the SEO tools used to obtain the observed volumes of queries in a sample. Low-volume queries are monitored with low probability by SEO tools. For such queries, SEO tools return no observed volume. In summary, our empirical data are likely to be a non-uniform sample of the query population.
We assume here that sampling depends on the rank of queries over the population, and call this non-uniform sampling. Formally, we assume that the $i$-th most searched query is sampled with a probability $p_{i}$. We want to check whether the observed rank plot over a sample of the population differs from a Zipf’s law. To this end, we consider a geometric sampling $p_{i}\propto p(1-p)^{i-1}$, i.e., the sampling probability decays exponentially with the rank. For example, if $p=0.01$, the probability that the query with the largest volume in the population is observed is $p$, and the second, third, fourth, etc. query in terms of volume will be observed (i.e., sampled) with probability $0.99p$, $0.99^{2}p$, $0.99^{3}p$, etc. Figure 2 shows a numerical simulation with the following parameters of the population: $N=10^{6},\quad c=10^{5},\quad\beta=0.7745.$ (4) The choice of $\beta$, in particular, has been driven by the empirical distribution of Figure 1. Samples consist of $n=1000$ queries, and $p=0.001$ is set for the geometric sampling. The black line is the whole population, the blue line is obtained with non-uniform sampling, and the grey line is obtained with uniform sampling. The non-uniform sampling is consistent with the tail of the empirical distribution in Figure 1. A second bias worth considering is that SEO tools typically provide approximate values of the true volume of queries, due to a limited user base and/or to computational heuristics in frequency counting. Therefore, we assume that the empirical data is drawn from noisy observed volumes $X_{i}$ of the true volumes $V_{i}$. We assume that $X_{i}=V_{i}\epsilon_{i}$, where the $\epsilon_{i}$’s are independent noise terms with common mean $\mu$ and variances $\sigma_{i}^{2}$. Clearly, the presence of noise scrambles the frequencies and, a fortiori, the ranks of observed volumes may differ from the ranks of true volumes.
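A hedged sketch of this rank-dependent sampling scheme (function names and the smaller population size are illustrative, not taken from the paper’s code):

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_sample(volumes, p, n):
    """Draw n distinct queries, including the rank-i query with probability
    proportional to p*(1-p)**(i-1): inclusion decays geometrically with rank."""
    N = len(volumes)
    weights = p * (1.0 - p) ** np.arange(N)
    weights /= weights.sum()
    ranks = rng.choice(N, size=n, replace=False, p=weights)
    return np.sort(volumes[ranks])[::-1]   # sampled volumes, descending

# Illustrative: a smaller population than in (4), sampled with p = 0.001,
# so low ranks (high volumes) dominate the sample.
population = 1e5 / np.arange(1, 10**5 + 1) ** 0.7745
sample = geometric_sample(population, p=0.001, n=1000)
```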
Figure 2 also includes a noisy and non-uniform sample (red line), generated assuming the $\epsilon_{i}$’s normally distributed but truncated at 0 to avoid negative $X_{i}$’s. Parameters are set as follows: $\mu=1$, i.e., the noise is unbiased, and $\sigma_{i}^{2}=0.01/9$, i.e., $99.7\%$ of the noise is in the range $\pm 3\sigma=\pm 10\%$ of the true value. Noisy and non-uniform sampling (hereafter just noisy sampling) produces an empirical distribution very close to that of non-uniform sampling, which is also consistent with our empirical data. Another source of bias is the use by SEO tools of computationally approximate counting methods [2]. Count-min sketches [3], in particular, are extensively used in stream processing. They can be modeled by setting: $X_{i}=V_{i}+\gamma_{i}c$ (5) where $\gamma_{i}$ is uniformly distributed in the range $[0,\gamma]$. In such a case, the noise overestimates $V_{i}$ by up to a fraction $\gamma$ of the top volume $V_{1}=c/1^{\beta}=c$. For low volumes, the noise may considerably increase the observed value. However, for a sufficiently low $\gamma$, the non-uniform sampling alleviates this problem, since low volumes are sampled with low probability. We set $\gamma=0.001$ in simulations. The empirical distribution generated lies between those of non-uniform and noisy sampling; for readability reasons, it is not shown in Figure 2. We call this model the sketchy and non-uniform sampling, hereafter just sketchy sampling.

Figure 5: Simulations on the estimation of the total volume $\mathcal{V}$: error bars (mean $\pm$ stdev).

## 5 Estimation methods

### 5.1 Estimating $\beta$ and $c$

We begin with the estimation of the coefficient $\beta$ and the intercept $c$ in Eq. 3, exploring two alternative methods for continuous empirical data. Regarding the coefficient, we observe that $\beta$ is the exponent of the distribution of Eq. 1.
Thus, we can rely on the well-known method of Clauset, Shalizi and Newman [4] (hereafter, the CSN method) for estimating the $\beta$ parameter in Eq. 3. Strictly speaking, [4] provides a maximum-likelihood estimator $\hat{\alpha}$ of the exponent $\alpha$ of the Power law of the volume distribution, $P(V=v)\propto 1/v^{\alpha}$, from the high-volume tail of observed volumes $v_{\mathit{max}}\leq\ldots\leq v_{1}$. Since in many empirical datasets the Power law is observed only for a range of values, [4] uses a Kolmogorov-Smirnov-like test to determine $v_{\mathit{max}}$, the optimal value beyond which the distribution is Power-law tailed. Using the well-known relation $\beta=1/(\alpha-1)$ between the exponents of the Power law and the continuous Zipf’s law (see [10, 11]), we obtain the estimate $\hat{\beta}=1/(\hat{\alpha}-1)$ of the coefficient of the rank-volume distribution for top ranks $1$ to $\mathit{max}$. The theoretical advantage of this method is that it automatically selects the rank $\mathit{max}$ from which to regress the coefficient.

Figure 6: Simulations on estimation of the population size $N$: error bars (mean $\pm$ stdev). The sketchy case in the left plot exceeds the y-axis limits.

As the second estimator of $\beta$, we use a variant of the standard Nonlinear Least Squares (NLS) regression of the empirical volumes $v_{i}$ on their ranks $i$. The parameters $c$ and $\beta$ are those minimizing the sum of squares $\sum_{i=1}^{M}\left(v_{i}-\frac{c}{i^{\beta}}\right)^{2}$, where $M$ is the maximal rank considered in the regression. (NLS regression requires initial values for $\beta$ and $c$; we compute them using OLS regression of the logs, i.e., minimizing $\sum(\log v_{i}-\log c+\beta\log i)^{2}$. OLS regression performs worse than NLS (simulations not shown; see also [12]) since it gives too much relative importance to deviations at high-rank, low-volume queries.)
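Both estimation routes can be sketched as follows. This is a simplified illustration: the Kolmogorov-Smirnov selection of the tail cutoff is assumed to have been done already, and the function names are ours:

```python
import numpy as np
from scipy.optimize import curve_fit

def beta_from_csn_tail(tail_volumes):
    """Continuous maximum-likelihood estimate of the Power-law exponent
    alpha on the pre-selected tail, converted via beta = 1/(alpha - 1)."""
    v = np.asarray(tail_volumes, dtype=float)
    alpha_hat = 1.0 + len(v) / np.sum(np.log(v / v.min()))
    return 1.0 / (alpha_hat - 1.0)

def nls_fit(volumes, max_rank):
    """NLS regression of v_i on rank i over the top max_rank data points,
    minimizing sum_i (v_i - c / i**beta)**2."""
    i = np.arange(1, max_rank + 1, dtype=float)
    v = np.asarray(volumes[:max_rank], dtype=float)
    # OLS regression of the logs provides the starting values.
    slope, log_c0 = np.polyfit(np.log(i), np.log(v), 1)
    (c_hat, beta_hat), _ = curve_fit(lambda r, c, b: c / r ** b, i, v,
                                     p0=[np.exp(log_c0), -slope])
    return c_hat, beta_hat
```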
Since the empirical data follows a Zipf’s law only for the top ranks, we regress the top $M=\mathit{max}$ rank-volume data, where $\mathit{max}$ is the rank returned by the CSN method. In this sense, NLS is a variant of the standard Nonlinear Least Squares. NLS has two advantages over CSN. First, the intercept $c$ and the coefficient $\beta$ are estimated together in the same procedure. Second, the regression estimates $\beta$ directly, while in the CSN method $\beta$ is derived from the estimator of $\alpha$. Finally, the second estimator of the intercept that we consider is the maximum observed volume, namely $v_{1}$. We call it the max-estimator of $c$. This is motivated by observing that, from Eq. 3, $c$ is the expected volume of the most popular query. Let us now investigate how these estimators are affected by non-uniform, noisy, and sketchy sampling from a Zipf’s law. Numerical simulations with parameters as in (4) are repeated $1000$ times at each sample size $n$, and the results are averaged. Figure 3 shows that both the max-estimator and the NLS regression converge to the true value of the intercept $c$. For noisy data, however, there is some error, proportional to the noise level (set to $\pm 10\%$). Variability is slightly lower for NLS regression. Larger error bars can be observed for small values of $n$; they are due to the chance of not including in the sample the queries of the population with the largest volumes. In Section 5.3, we will discuss this issue with regard to the estimation of the total volume. In practical settings, the selection of the sample queries must carefully consider the issue of including in the empirical sample the most popular queries of the population. This has been one of our main concerns in collecting queries in the recipe and cooking domain. Figure 4 shows some differences in the estimation of $\beta$.
Regarding the CSN method, the estimated values for non-uniform and noisy sampling are slightly lower than the true $\beta$; underestimation in the sketchy sampling case is, instead, considerable. The NLS regression is unbiased for non-uniform and noisy sampling; for sketchy sampling, $\beta$ is slightly underestimated. Estimates converge rapidly for increasing $n$, except for noisy sampling in the case of NLS and for sketchy sampling in the case of CSN. Finally, all estimates depend only weakly on $n$: starting from samples of $0.4\%$ of the population, they become stable.

### 5.2 Estimating $N$

In the following, we focus on a simple but effective estimator of the size $N$ of the query population. We assume that $V_{N}$, the smallest volume of a query in the population, is known. This assumption is realistic for absolute frequencies, since $V_{N}=1$. From Eq. 3, for $i=N$, we have $N=(c/V_{N})^{1/\beta}$. This motivates the following estimator: $\hat{N}=\left(\frac{\hat{c}}{V_{N}}\right)^{1/\hat{\beta}}$ (6) where $\hat{c}$ is an estimator of $c$ and $\hat{\beta}$ is an estimator of $\beta$. Eq. 6 can be extended to an estimator of the number of queries whose volume is greater than or equal to a given value $v$: $\hat{N}_{v}=(\hat{c}/v)^{1/\hat{\beta}}$ (7) Numerical simulations with parameters as in (4) are shown in Figure 6 for: (1) $\hat{\beta}$ obtained by the CSN method and $\hat{c}$ obtained by the max-estimator; and (2) $\hat{\beta}$ and $\hat{c}$ obtained by NLS regression. The first method is biased, showing a slight overestimation for non-uniform and noisy sampling and a large overestimation for sketchy sampling (not shown because it exceeds the y-axis limits). The second method converges to the true value of $N$ for non-uniform and noisy sampling (on average), and slightly overestimates it for sketchy sampling.
Thus the only advantage of the first method over the second is a smaller variability of the estimates in the case of noisy sampling.

### 5.3 Estimating $\mathcal{V}$

Figure 7: Estimated volume (Eq. 8) as a function of $\hat{\beta}$, assuming $\hat{c}=c$. Simulation parameters: $N=10^{6}$, $c=10^{5}$, $\beta$ in title.

Building on the estimators and simulations conducted so far, the proposed procedure for estimating the total volume $\mathcal{V}$ is composed of the following steps: * • estimate $\beta$ and $c$, as described in Section 5.1; * • use the estimated $\hat{\beta}$ and $\hat{c}$ as inputs for estimating $N$, as shown in Section 5.2; * • obtain the estimator of $\mathcal{V}$ from Eq. 2 as follows: $\hat{{\mathcal{V}}}=\hat{c}[\zeta(\hat{\beta})-\zeta(\hat{\beta},\hat{N}+1)]$ Notice that, by Eq. 6, the estimator $\hat{{\mathcal{V}}}$ can be stated using only $\hat{\beta}$ and $\hat{c}$: $\hat{{\mathcal{V}}}=\hat{c}[\zeta(\hat{\beta})-\zeta(\hat{\beta},\left(\frac{\hat{c}}{V_{N}}\right)^{1/\hat{\beta}}+1)]$ (8) These estimators can be generalized to estimators of the total volume of queries with minimum volume $v$ by replacing $V_{N}$ with $v$: $\hat{{\mathcal{V}}}_{v}=\hat{c}[\zeta(\hat{\beta})-\zeta(\hat{\beta},(\hat{c}/v)^{1/\hat{\beta}}+1)]$ (9) Let us continue the previous numerical simulations. With the settings in (4), it turns out that ${\mathcal{V}}=9,609,224$. First consider using the NLS regression method in the first step of the procedure. Figure 5 (right) shows that $\hat{\mathcal{V}}$ converges to ${\mathcal{V}}$ for non-uniform and noisy sampling, and overestimates it for sketchy sampling. For noisy sampling, there is some variability, of the order of the noise introduced during sampling ($\pm 10\%$). The overestimation in the case of sketchy sampling follows from the overestimation of $N$ (see Figure 6).

Figure 8: Scatterplot of empirical vs estimated total volume.
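The estimators of Eqs. 6 and 8 can be sketched as follows; since $\hat{\beta}<1$ here, the zeta difference is again evaluated as the equivalent finite sum (function names are illustrative):

```python
import numpy as np

def estimate_N(c_hat, beta_hat, v_min=1.0):
    """Eq. 6 (and Eq. 7 with v_min = v): rank of the query whose expected
    volume equals v_min."""
    return (c_hat / v_min) ** (1.0 / beta_hat)

def estimate_total_volume(c_hat, beta_hat, v_min=1.0):
    """Eq. 8: c_hat * [zeta(beta) - zeta(beta, N_hat + 1)], evaluated as
    the finite sum c_hat * sum_{i=1}^{N_hat} i**(-beta_hat)."""
    N_hat = int(round(estimate_N(c_hat, beta_hat, v_min)))
    i = np.arange(1, N_hat + 1)
    return c_hat * np.sum(i ** (-beta_hat))

# Round trip with the population of (4): V_N = c / N**beta, so the known
# N and total volume should be recovered.
v_N = 1e5 * (10**6) ** (-0.7745)
```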
Consider now using, in the first step of the procedure, the CSN method coupled with the max-estimator. The total volume shown in Figure 5 (left) is slightly overestimated for non-uniform sampling and for noisy sampling. In the latter case, there is some variability, which appears lower than for the NLS method; this can be traced back to the lower variability in the estimation of $\beta$ (see Figure 4). For sketchy sampling, the overestimation is very large: it is out of the bounds of the plot. Again, this can be traced back to a larger underestimation of $\beta$ compared to the NLS method. The impact of a biased $\hat{\beta}$ on the estimated total volume $\hat{\mathcal{V}}$ can be readily explained when $\hat{c}=c$ (which holds in simulations, as shown in Figure 3). We plot Eq. 8 as a function of $\hat{\beta}$, under the assumption that $V_{N}$ is known, in Figure 7. The left plot shows simulations for the parameters in (4) used so far. The right plot uses the same $N$ and $c$, but a $\beta$ greater than $1$. In both cases, the bias of $\hat{\mathcal{V}}$ is inversely related to the bias of $\hat{\beta}$. Note the log scale on the y-axis, which comes from the fact that $\beta$ appears as an exponent in Eq. 8. For $\beta$’s lower than $1$, the error (or variability) of the estimator $\hat{\beta}$ has a greater impact on the error (or variability) of $\hat{\mathcal{V}}$ than for $\beta$’s greater than $1$.

Figure 9: Left: empirical rank-volume distribution (SearchVolume estimates). Right: bins from the empirical distribution.

We already observed that the performances of the estimators become stable from $n=4,000$ on, which is $0.4$% of the population size. Let us now focus on smaller sample sizes, for which there is instead a large standard deviation over the experimental runs. Fix $n=2,000$, and consider NLS regression and non-uniform sampling.
From Figure 5 (right), we have that the standard deviation of the estimates $\hat{\mathcal{V}}$ over the 1,000 experimental runs is approximately $3\times 10^{6}$. What is the source of such variability? Figure 8 shows the scatter plot of the estimated total volume vs the empirical volume (i.e., the sum of observed volumes) of the sample for each of the 1,000 runs. Runs with a lower empirical volume exhibit most of the variability (notice that the y-axis is in log scale). If the empirical volume is sufficiently large, the estimated total volume converges to the true volume. Moreover, if the sample includes the query with the largest volume in the population, $V_{1}$ (blue points in Figure 8), the estimate is less biased than when the sample does not include $V_{1}$ (red points). Not having $V_{1}$ in the sample causes underestimation of $\beta$, which in turn causes overestimation of the total volume. Overall, this reinforces our previous conclusion that, in practical settings, the selection of sample queries must carefully include the most popular ones. As a summary of the simulations, we therefore recommend using the NLS regression method for estimating $c$ and $\beta$, and Eqs. 8–9 for estimating ${\mathcal{V}}$ and ${\mathcal{V}}_{v}$.

### 5.4 Errors on the estimates

We now compute the error on the estimated $N$ obtained from Eq. 6. Using propagation of errors, under the assumption that the errors $\Delta\beta$ on $\beta$ and $\Delta c$ on $c$ are independent, the error on $\hat{N}$ is: $\Delta N=\sqrt{\left(\frac{\partial\hat{N}}{\partial\hat{c}}\Delta c\right)^{2}+\left(\frac{\partial\hat{N}}{\partial\hat{\beta}}\Delta\beta\right)^{2}}$ The values $\Delta c$ and $\Delta\beta$ are set to the standard errors of the parameter estimation.
In particular, for NLS regression they are directly provided by the nls() function of the R stats package, which uses a linearization approach (see http://sia.webpopix.org/nonlinearRegression.html#standard-errors-of-the-parameter-estimates). To have a more conservative estimate of $\Delta N$, taking into account correlations between errors, one can replace the previous formula with the sum of the absolute values: $\Delta N=\left|\frac{\partial\hat{N}}{\partial\hat{c}}\Delta c\right|+\left|\frac{\partial\hat{N}}{\partial\hat{\beta}}\Delta\beta\right|$ (10) The partial derivatives in the previous expression are: $\frac{\partial\hat{N}}{\partial\hat{c}}=\left(\frac{\hat{c}}{V_{N}}\right)^{1/\hat{\beta}}\frac{1}{\hat{\beta}\hat{c}}=\frac{\hat{N}}{\hat{\beta}\hat{c}}$ and $\frac{\partial\hat{N}}{\partial\hat{\beta}}=-\left(\frac{\hat{c}}{V_{N}}\right)^{1/\hat{\beta}}\frac{1}{\hat{\beta}^{2}}\log\frac{\hat{c}}{V_{N}}=-\frac{\hat{N}}{\hat{\beta}^{2}}\log\frac{\hat{c}}{V_{N}}$ Similarly, the error on the total volume is: $\Delta{\mathcal{V}}=\left|\frac{\partial\hat{\mathcal{V}}}{\partial\hat{c}}\right|\Delta c+\left|\frac{\partial\hat{\mathcal{V}}}{\partial\hat{\beta}}\right|\Delta\beta$ (11) The calculation of the partial derivatives is a bit more involved. We start from Eq. 8, where $\hat{\mathcal{V}}$ is a function of $\hat{c}$ and $\hat{\beta}$, and find: $\frac{\partial\hat{\mathcal{V}}}{\partial\hat{c}}=\frac{{\hat{\mathcal{V}}}}{\hat{c}}+\hat{N}\zeta\left(\hat{\beta}+1,\hat{N}+1\right)$ and $\frac{\partial\hat{\mathcal{V}}}{\partial\hat{\beta}}=\hat{c}\left(\zeta^{\prime}(\hat{\beta})-\zeta^{(1,0)}(\hat{\beta},\hat{N}+1)-\frac{\hat{N}\log(\frac{\hat{c}}{V_{N}})\zeta(\hat{\beta}+1,\hat{N}+1)}{\hat{\beta}}\right)$ where $\zeta^{\prime}(x)$ is the derivative of the Riemann zeta function and $\zeta^{(1,0)}(s,a)$ is the partial derivative of the Hurwitz function with respect to $s$.

Figure 10: Simulation of binned sampling from a Zipf’s law.
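As a sketch, Eq. 10 with the closed-form partial derivatives above; plugging in the Google Trends estimates reported in Section 7.1 ($\hat{c}=40{,}584{,}860$, $\hat{\beta}=0.7745$, $\Delta c=199{,}263$, $\Delta\beta=0.0025$, $v=12$) approximately reproduces the first row of Table I:

```python
import math

def delta_N(c_hat, beta_hat, dc, dbeta, v_min=1.0):
    """Conservative error on N_hat (Eq. 10), using the closed-form
    partial derivatives of Eq. 6."""
    N_hat = (c_hat / v_min) ** (1.0 / beta_hat)
    dN_dc = N_hat / (beta_hat * c_hat)
    dN_dbeta = -(N_hat / beta_hat**2) * math.log(c_hat / v_min)
    return abs(dN_dc * dc) + abs(dN_dbeta * dbeta)

# Google Trends estimates of Section 7.1, first row of Table I (v = 12).
err = delta_N(40_584_860, 0.7745, dc=199_263, dbeta=0.0025, v_min=12)
```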
## 6 The case of binned data

Most SEO tools do not provide an absolute observed volume of a query; rather, they provide an interval estimate of the volume, i.e., binned observed volumes. The motivation is that intervals mitigate the noise introduced in the estimation process. Figure 9 (left) shows the rank-volume distribution of a sub-sample of recipe queries whose binned observed volumes are obtained from the SearchVolume tool. Differences with the Google Trends distribution will be discussed later, in Section 7.2. For now, we observe that the rank-volume distribution is still Zipfian.

Figure 11: Simulations on estimation of $c$ for binned data: error bars (mean $\pm$ stdev). The horizontal line is the true value of $c$.

Figure 12: Simulations on estimation of $\beta$ for binned data: error bars (mean $\pm$ stdev). The horizontal line is the true value of $\beta$.

The binned nature of the data demands specific estimation methods. For example, [5] generalizes the CSN method to the estimation of the coefficient of Power-law-distributed data when these are binned. We extend here the approach of the previous section to the case of discrete data obtained by binning (possibly noisy or sketchy) values. Specifically, we consider two strategies, one based on the method from [5] and the other based on Chi-square minimization. As before, we assume that the frequency of searches follows a Zipf law (see Eq. 1), but the observed volumes are binned according to some scheme. We assume that there are $M$ bins, and that for $j\in[1,M]$, the $j$-th bin consists of the interval $[\ell_{j-1},\ell_{j})$, where $\ell_{0}=V_{N}$ (the smallest volume). SEO tools typically report as observed volume the upper bound $\ell_{j}$ of the bin. For instance, Figure 9 (right) reports the $\ell_{j}$ values for the SearchVolume data shown in the left panel of the figure.
They approximately follow a geometric progression, namely: $\ell_{j}=\ell_{1}\delta^{j-1}\quad\mbox{for $j\in[1,M]$}.$ (12) Under such a binning scheme, a continuous volume $V\geq V_{N}$ belongs to the $j_{V}$-th bin, where: $j_{V}=\max\left\{1,2+\left\lfloor\log_{\delta}\frac{V}{\ell_{1}}\right\rfloor\right\}$ and its binned volume is $\ell_{j_{V}}=\ell_{1}\delta^{\max\left\{0,1+\left\lfloor\log_{\delta}\frac{V}{\ell_{1}}\right\rfloor\right\}}$. In the simulations throughout this section, we apply this discretization scheme to the non-uniform, noisy, and sketchy continuous sample data with parameters as in (4), and keep the same nomenclature for the discretized versions of those sampling methods. Figure 10 shows the impact of sampling, for the same settings as Figure 2 but with geometric binning where: $\delta=1.2324$ (13) is the ratio used by SearchVolume, as shown in Figure 9 (right). As in the continuous case, uniform sampling is not consistent with the empirical data, while the other strategies are.

### 6.1 Binned-CSN

Ref. [5] extends to binned data the original CSN approach of using a maximum-likelihood estimator of the exponent and a Kolmogorov-Smirnov test for selecting the tail of values that best fits a Power law. As a consequence, we can extend our CSN-based estimator $\hat{\beta}=1/(\hat{\alpha}-1)$ to binned data, where $\hat{\alpha}$ is the estimator of the exponent of a binned Power law. As in the continuous case, we will also make use, in the next subsections, of the tail of the (binned) values $v_{\mathit{max}}\leq\ldots\leq v_{1}$ that best fits a Power law. This corresponds to considering bins $[j_{v_{\mathit{max}}},M]$, where $j_{v_{\mathit{max}}}$ is the bin of $v_{\mathit{max}}$. Simulation results in Figure 12 (left) show that Binned-CSN performs very well for non-uniform and noisy sampling, while it underestimates $\beta$ for sketchy sampling.
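The geometric binning scheme of Eq. 12 and the bin-index formulas above can be sketched as follows (using the SearchVolume ratio of Eq. 13; the value of $\ell_{1}$ is illustrative):

```python
import math

def bin_index(V, ell1, delta):
    """Bin j_V of a continuous volume V >= V_N under ell_j = ell1 * delta**(j-1)."""
    return max(1, 2 + math.floor(math.log(V / ell1, delta)))

def binned_volume(V, ell1, delta):
    """Observed (binned) volume: the upper bound ell_{j_V} of V's bin."""
    return ell1 * delta ** max(0, 1 + math.floor(math.log(V / ell1, delta)))

delta = 1.2324   # Eq. 13, the ratio used by SearchVolume
```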
Figure 13: Simulations on estimation of $N$ and ${\cal V}$ for binned data: error bars (mean $\pm$ stdev). The horizontal lines are the true values of $N$ and $\mathcal{V}$.

Figure 14: Left: Zipf and Zipf+Sketchy populations. Right: sketchy-optimized volume estimation (binned data).

### 6.2 Chi-square Minimization

For binned data, the analogue of the Least Squares regression method consists of minimizing the $\chi^{2}$ (Chi-square) statistic [23]. Let us denote by $n_{j}^{e}$ and $n_{j}^{o}$ the expected and observed number of queries in the $j$-th bin $[\ell_{j-1},\ell_{j})$. The latter is observed from data, while the former can be estimated as follows. From Eq. 7, the volume $\ell_{j}$ (resp., $\ell_{j-1}$) is that of the query with rank $\hat{N}_{\ell_{j}}$ (resp., $\hat{N}_{\ell_{j-1}}$). Thus the expected number of queries in the $j$-th bin is: $n_{j}^{e}(c,\beta)=\hat{N}_{\ell_{j-1}}-\hat{N}_{\ell_{j}}=(c/\ell_{j-1})^{1/\beta}-(c/\ell_{j})^{1/\beta}$ where we explicitly write the parameters $c$ and $\beta$. We now estimate the values of $c$ and $\beta$ as those minimizing the $\chi^{2}$: $(\hat{c},\hat{\beta})=\underset{(c,\beta)}{\operatorname{argmin}}\sum_{j=j_{v_{\mathit{max}}}}^{M}\frac{\left(n_{j}^{o}-n_{j}^{e}(c,\beta)\right)^{2}}{n_{j}^{e}(c,\beta)}$ (14) As in the continuous case, we restrict to the tail values that best fit a Power law. This is why the summation in Eq. 14 starts from $j_{v_{\mathit{max}}}$, the bin of the $v_{\mathit{max}}$ value returned by the Binned-CSN method. Simulation results are shown in Figures 11–12 (right). In particular, the estimation of $\beta$ performs comparably well to the Binned-CSN method for non-uniform and noisy sampling (see Figure 12). For sketchy sampling, the Chi-square method is preferable to Binned-CSN: while it shows a slightly larger variance, it has a significantly smaller bias, summing up to an overall smaller mean square error.
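A hedged sketch of the Chi-square minimization of Eq. 14 follows. The choice of Nelder-Mead, and the log-parametrization of $c$ used to improve the conditioning of the search, are ours; the paper does not specify an optimizer:

```python
import numpy as np
from scipy.optimize import minimize

def expected_counts(c, beta, edges):
    """n_j^e of Eq. 14: (c/l_{j-1})**(1/beta) - (c/l_j)**(1/beta) for
    consecutive bin edges l_0 <= l_1 <= ... <= l_M."""
    N_ge = (c / np.asarray(edges, dtype=float)) ** (1.0 / beta)
    return N_ge[:-1] - N_ge[1:]

def chi_square_fit(observed_counts, edges, c0, beta0):
    """Estimate (c, beta) by minimizing the Chi-square statistic of Eq. 14.
    c is optimized on a log10 scale so both coordinates are O(1)."""
    def chi2(theta):
        c, beta = 10.0 ** theta[0], theta[1]
        if beta <= 0:
            return np.inf
        ne = expected_counts(c, beta, edges)
        if np.any(ne <= 0):
            return np.inf
        return np.sum((observed_counts - ne) ** 2 / ne)
    res = minimize(chi2, x0=[np.log10(c0), beta0], method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 10000})
    return 10.0 ** res.x[0], res.x[1]   # (c_hat, beta_hat)
```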
### 6.3 Constrained Chi-square Minimization

Binned-CSN only provides an estimator for the coefficient $\beta$, while Chi-square provides an estimator for the intercept $c$ as well. We are interested here in the definition of an estimator of $c$ to be used together with Binned-CSN. In the continuous case, the max-estimator performed comparably well to NLS (cf. Figure 3). Unfortunately, the maximal observed volume $v_{1}$ in the sample provides a very biased estimate of $c$ for binned data. This is due to the fact that the bin of $v_{1}$ corresponds to a large range of possible true values. Therefore, the estimation of $c$ should take into account the observed values at several bins, as the Chi-square method does. We propose to combine the strengths of the Binned-CSN and Chi-square methods as follows. First, we obtain an estimate $\hat{\beta}_{\mathit{CSN}}$ of $\beta$ by applying the Binned-CSN method. Then, to estimate $c$, we solve the optimization problem: $\hat{c}=\underset{c}{\operatorname{argmin}}\sum_{j=j_{v_{\mathit{max}}}}^{M}\frac{\left(n_{j}^{o}-n_{j}^{e}(c,\hat{\beta}_{\mathit{CSN}})\right)^{2}}{n_{j}^{e}(c,\hat{\beta}_{\mathit{CSN}})}$ (15) which is a constrained version of Eq. 14. Simulation results in Figure 11 (left) show that Constrained Chi-square provides an almost unbiased estimator of $c$ for non-uniform and noisy sampling, while it underestimates $c$ for sketchy sampling. Standard deviations are smaller than for the Chi-square estimator shown in Figure 11 (right); however, the bias for sketchy sampling is higher.

### 6.4 Overall approach

Assuming no information on the sampling bias, from the previous simulation results the Chi-square method is to be preferred to Binned-CSN plus Constrained Chi-square for estimating $\hat{\beta}$ and $\hat{c}$. This conclusion is in line with the comparison between CSN and NLS in the continuous case. However, for binned data the difference is not as clear-cut.
Starting from the estimates $\hat{\beta}$ and $\hat{c}$, the procedures described in Section 5.3 for estimating the total number of queries $N$ in the population and their total volume ${\cal V}$ apply unaltered to the case of binned data. Figure 13 reports the simulation results for the estimation of the population size $N$ and the total volume ${\cal V}$. The overall approach performs extremely well for non-uniform and noisy sampling. For the latter, the variance is even smaller than in the continuous case (cf. Figures 6 and 5 (right)). Intuitively, the variability impact of the noise cancels out whenever the noise does not move a value outside the bin of its true value. Regarding sketchy sampling, instead, the estimator is biased and has non-negligible variance. Finally, the procedure of Section 5.4 for calculating the errors $\Delta N$ and $\Delta{\cal V}$ extends to binned data, provided that $\Delta\beta$ and $\Delta c$ are given as input. For uniformity with NLS regression, the statistical errors $\Delta\beta$ and $\Delta c$ are calculated using an adaptation of the linearization method to Chi-square minimization.
| $v$ | $v/12$ | GT $\hat{N}_{v}$ | GT $\Delta N_{v}$ | GT $\hat{\mathcal{V}}_{v}$ | GT $\Delta{\mathcal{V}}_{v}$ | SV $\hat{N}_{v}$ | SV $\Delta N_{v}$ | SV $\hat{\mathcal{V}}_{v}$ | SV $\Delta{\mathcal{V}}_{v}$ |
|---|---|---|---|---|---|---|---|---|---|
| 12 | 1 | 269,214,520 | $\pm$ 18,507,467 | 14,169.58 M | $\pm$ 827.91 M | 5,849,311,206 | $\pm$ 10,205,374,040 | 157,547.70 M | $\pm$ 262,434.70 M |
| 120 | 10 | 13,770,732 | $\pm$ 815,062 | 7,171.15 M | $\pm$ 354.03 M | 91,968,610 | $\pm$ 136,211,457 | 24,766.71 M | $\pm$ 34,731.11 M |
| 1,200 | 100 | 704,394 | $\pm$ 33,959 | 3,591.35 M | $\pm$ 145.85 M | 1,446,021 | $\pm$ 1,760,408 | 3,889.59 M | $\pm$ 4,433.55 M |
| 12,000 | 1,000 | 36,031 | $\pm$ 1,444 | 1,760.23 M | $\pm$ 56.87 M | 22,736 | $\pm$ 21,685 | 607.08 M | $\pm$ 535.30 M |
| 120,000 | 10,000 | 1,843 | $\pm$ 56 | 823.63 M | $\pm$ 20.30 M | 357 | $\pm$ 247 | 91.03 M | $\pm$ 58.45 M |
| 600,000 | 50,000 | 231 | $\pm$ 5 | 456.95 M | $\pm$ 9.06 M | 20 | $\pm$ 10 | 21.4 M | $\pm$ 10.89 M |

TABLE I: Estimated $N_{v}$ and ${\mathcal{V}}_{v}$ for queries with at least $v$ searches in 2017 (GT = Google Trends, SV = SearchVolume). $v/12$ is the monthly average of $v$.

### 6.5 Sketchy-optimized approach

Let us finally investigate an approach to reduce bias and variance in the case of sketchy sampling, under the further assumption that we know that the data is actually sketchy sampled and we know the fraction $\gamma$ in the noise of Eq. 5. (Alternatively, $\gamma c$, which is the actual value needed in the sketchy-optimized approach, can be estimated starting from a sample of queries for which we know both the true volume $V_{i}$ and the noisy volume $X_{i}$: in fact, by Eq. 5, $X_{i}-V_{i}=\gamma_{i}c$ is drawn from a uniform distribution on $[0,\gamma c]$.) The formula in that equation is a sum of two independent random variables, whose density function can be explicitly calculated using the convolution of probability distributions.
We omit the details of the calculation but, as could be expected, the resulting density is closer to a Zipf for large volumes and closer to a uniform distribution for small volumes. Figure 14 (left) shows the population of a pure Zipf and the effects of adding noise as in Eq. 5 with $\gamma=0.001$. (Figure 14 (left) shows the effects on the whole population; the resulting distribution differs in the tail from Figure 2, where non-uniform sampling is also applied in addition to the noise.) Intuitively, volumes smaller than $\gamma c=c/1000$ are heavily modified, while volumes larger than $c/1000$ are less modified. However, a method estimating $\beta$ may underestimate it because of the data with volume close to $c/1000$: around such a volume the Power law starts becoming apparent, but with a biased coefficient. If we concentrate on volumes larger than $10\gamma c=c/100$, sketchy sampling may account for at most $10\%$ of the volume, hence reducing the bias due to the noise of Eq. 5. In summary, we propose the following simple modification to the estimation procedure for sketchy sampling: use sample data whose volume is greater than or equal to $10\gamma v_{1}$, where $\gamma$ is assumed to be known and $v_{1}$ is the largest observed volume in the sample data. We have experimented with this modified procedure for both the Binned-CSN and Chi-square methods. The latter continues to perform slightly better in this setting as well. Simulation results for the estimation of the total volume are shown in Figure 14 (right). Compared with Figure 13 (right), the estimates under the sketchy sampling scheme are less biased and have smaller variances. Moreover, this approach does not degrade the performance of Chi-square for non-uniform and noisy sampling under the simulation parameters.
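The proposed preprocessing step is a one-line filter (a sketch; the function name is ours, and $\gamma$ is assumed known as stated above):

```python
import numpy as np

def sketchy_filter(sample_volumes, gamma):
    """Keep only sample volumes >= 10*gamma*v1, where v1 is the largest
    observed volume, so that the additive sketch noise of Eq. 5 accounts
    for at most ~10% of any retained value."""
    v = np.asarray(sample_volumes, dtype=float)
    return v[v >= 10.0 * gamma * v.max()]
```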
Intuitively, these two sampling schemes do not systematically affect the slope of the empirical sample distribution, hence restricting to top volumes (greater than or equal to $10\gamma v_{1}$) does not affect the coefficient estimation.

## 7 Empirical analysis

In this section, we conduct an empirical analysis on a real dataset. We generated a sample of 120K queries by crawling 18 popular Italian websites about recipes and cooking. The list of websites was compiled with the help of web marketing experts and by looking at the rankings of SEO tools (e.g., https://serpstat.com). Queries were generated in one of the following forms: the name of a recipe as reported in metadata available at web pages (e.g., “spaghetti with tomato sauce”; such metadata are standardized in the Structured Data format and are intended to optimize search engine indexing of the web page, see https://developers.google.com/search/docs/data-types/recipe); queries based on a selected list of ingredients (e.g., “recipes with pepperoni”); queries suggested by SEO tools starting from a selected list of keywords; and web-marketer expert queries used in past advertising campaigns (in particular, large-volume queries such as “recipe”, “cake”, “pizza”). Next, queries were validated/filtered by humans as belonging to the recipe and cooking domain. Finally, variants of queries without stop words were also added. We then submitted the 120K queries to Google Trends (continuous values) and to SearchVolume (binned values) to collect the observed volume of each query for the reference year 2017 and for Italian user agents. Considering a whole year prevents seasonal bias in the data. Most of the queries received no observed volume, due to the fact that they belong to the long tail of the distribution and SEO tools do not monitor them. This is expected under the assumption of a Zipf’s law distribution of the population of all queries.
### 7.1 Google Trends (continuous data)

Google Trends has several advantages over other SEO tools. First, the observed volumes provided are computed from (a sample of) the Google search engine query logs, and not from unspecified sources which may have unknown forms of bias. Second, data can be aggregated over arbitrary ranges of time and user agent languages. Most of the other tools, instead, provide monthly averages at the time of the request, making it impossible to extend an experiment incrementally to new queries. Third, the observed volumes of Google Trends are ratio-scaled, while other SEO tools provide binned values, i.e., ranges of observed volumes. On the negative side, the observed volume provided by Google Trends is relative, not absolute: Google Trends reports the observed volumes of a set of queries in each week of a time period by setting the largest weekly volume to 100 and scaling all the other weekly volumes into the range [0, 100]. We therefore fixed one specific query to the conventional yearly volume of $1$, and collected the observed volumes of all other queries relative to that query. Next, we scaled the relative volumes to absolute volumes by relying on an estimation procedure for a scaling factor. Details on the calculation of the relative volumes and of the scaling factor are in [22]. We obtained observed volumes from Google Trends for about 18.5K queries out of the 120K in the sample set. The resulting rank-volume distribution is shown in Figure 1. The remaining queries belong to the long tail, for which Google Trends returns no observed volume. Let us now apply the estimation model designed in Section 5 to the empirical data of Google Trends. As shown by the red line fit in Figure 1 (left panel), the NLS regression estimates are: $\hat{\beta}=0.7745\quad\hat{c}=e^{17.5189}=40,584,860.$ The NLS fit is computed over the top $\mathit{max}=1725$ queries determined by the CSN method.
The statistical errors of the above estimates are considerably low: $\Delta\beta=0.0025\quad\Delta c=199,263$ (16) We can now use Eq. 6 for estimating the number $N_{v}$ of queries having a volume of at least $v$, and Eq. 10 for calculating the statistical error $\Delta N_{v}$. Similarly, Eq. 9 can be used for estimating the total volume ${\mathcal{V}}_{v}$ of queries having a volume of at least $v$, and Eq. 11 for calculating its statistical error $\Delta{\mathcal{V}}_{v}$. Table I reports the estimates for a few values of $v$. As a means of comparison, the total empirical volume of the 18.5K queries in our sample amounts to 1,057M searches. Such a large number is consistent with the fact that the sample is not uniform: top-ranked queries are more likely to be in the sample. Moreover, it also gives confidence that the sample is sufficiently large (in terms of empirical volume) to correctly estimate the true volume. According to the simulations of Figure 5, the values $\hat{N}_{v}$ and $\hat{{\mathcal{V}}}_{v}$ may overestimate the true $N_{v}$ and ${\mathcal{V}}_{v}$, respectively, if count-min-sketch-like approximation (sketchy sampling) is present in the observed volume data of Google Trends. In the case of noisy sampling, instead, a small under- or overestimation may occur. It is worth noting that these conclusions hold under the assumption that observed volumes are biased only as modeled in Section 4. However, other biases may be present in observed volumes from SEO tools. In the case of Google Trends, one such bias is due to the scaling procedure from relative to absolute volumes.

### 7.2 SearchVolume (binned data)

SearchVolume is a popular SEO tool providing free access to bulk observed volumes. Also, SearchVolume provides the observed volume of queries for a few specific countries, including Italy. The underlying methods and log data used by the system are undisclosed.
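Since Eqs. 6 and 9 are not reproduced in this section, the sketch below derives both quantities directly from the Zipf assumption $v(r)=c\,r^{-\beta}$: the number of queries with volume at least $v$ is the largest rank $r$ with $c\,r^{-\beta}\geq v$, and the total volume is the corresponding partial sum. The paper's exact formulas may differ in details.

```python
import math

def n_queries_at_least(v, beta, c):
    # Under volume(rank) = c * rank^(-beta), the number of queries with
    # volume >= v is the largest rank r satisfying c * r^(-beta) >= v.
    return math.floor((c / v) ** (1.0 / beta))

def total_volume_at_least(v, beta, c):
    # Total volume of all queries with volume >= v: a partial Zipf sum.
    n = n_queries_at_least(v, beta, c)
    return sum(c * r ** (-beta) for r in range(1, n + 1))

# Google Trends estimates from the fit of Section 7.1 (yearly volumes).
beta_hat, c_hat = 0.7745, 40_584_860

# Queries with at least 1,000 monthly searches, i.e. 12,000 yearly.
n_1000 = n_queries_at_least(12_000, beta_hat, c_hat)
```

With these parameters, `n_1000` lands in the tens of thousands, consistent with the order of magnitude reported for Google Trends in Section 7.2.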
We submitted to SearchVolume the 18.5K queries for which Google Trends provided observed volumes, and obtained (binned) observed volumes for about 12.5K of them. The resulting rank-volume distribution is shown in Figure 9 (left). A number of facts are worth pointing out when contrasting that distribution with the one of Google Trends in Figure 1. First, SearchVolume returned no result for about 6K queries, which are not necessarily low-volume ones according to Google Trends. For instance, 26 out of the top 100 Google Trends observed-volume queries are assigned no observed volume by SearchVolume. As already pointed out, each SEO tool comes with its own biases on the set of queries covered and on the values of the observed volumes. In fact, as a second point, top observed volumes are up to 10$\times$ smaller than the top volumes of Google Trends. In particular, the total empirical volume of the 12.5K queries amounts to 206M searches vs the 805M searches of Google Trends for the same set of queries. Independent SEO tools are known to be more conservative than Google-owned tools. Another possible reason is that the multiplicative factor devised in [22] to scale Google Trends relative volumes may have been over-estimated. Third, Google Trends data has a larger empirical variance than SearchVolume: top volumes are up to 65 standard deviations from the mean volume for Google Trends and up to 38 for SearchVolume. Fourth, the correlation between SearchVolume and Google Trends volumes is weak, with Kendall’s $\tau=0.3717$. These last two points could originate from the different sampling biases of the two tools. The above arguments are not specific to the two tools we are considering in this paper, but apply when contrasting any pair of SEO tools. In fact, in web marketing practice, the adoption of a specific SEO tool is mainly based on reputation and trust in the tool. Let us apply the estimation model designed in Section 6.4 to the empirical data of SearchVolume.
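The rank correlation between two tools can be checked with Kendall's $\tau$, as done above. A minimal sketch using `scipy.stats.kendalltau` on hypothetical volumes for five shared queries (the numbers below are invented for illustration):

```python
from scipy.stats import kendalltau

# Hypothetical observed volumes of five shared queries from two SEO tools
# (invented numbers, for illustration only).
tool_a = [50_000, 12_000, 9_500, 800, 120]
tool_b = [4_000, 6_500, 300, 900, 40]

# Kendall's tau counts concordant vs discordant pairs of queries,
# so it depends only on the rankings, not on the (incomparable) scales.
tau, p_value = kendalltau(tool_a, tool_b)
```

Because $\tau$ depends only on rankings, it is a natural choice here: the two tools report volumes on scales that differ by up to an order of magnitude.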
The estimated values are: $\hat{\beta}=0.5545\quad\hat{c}=e^{14.9551}=3,125,485.$ The Chi-square fit is computed over the top $\mathit{max}=119$ queries determined by the Binned-CSN method. It turns out that the Sketchy-optimized method of Section 6.5 provided exactly the same results. The statistical errors of the above estimates are larger than (16): $\Delta\beta=0.0352\quad\Delta c=549,134$ This is somewhat expected, and can be attributed to the loss of information due to binning. Numerical experiments on synthetic data show, in fact, that the errors of the Zipf’s law coefficients are more than one order of magnitude larger than when the data are observed without binning. Finally, the estimated values for the number $N_{v}$ of queries having a volume of at least $v$ and for the total volume ${\mathcal{V}}_{v}$ are reported in Table I. Contrasting the estimations inferred from SearchVolume and Google Trends data, the smaller coefficient of the former implies a larger number of queries in the long tail. There are, in fact, 5.85B queries searched at least once a month for SearchVolume vs only about 270M for Google Trends. On the contrary, a larger number of top-volume queries is estimated from Google Trends data. The two methods get closer in estimating the number of queries with at least 1,000 searches per month (23K vs 36K), and in estimating the total volume of queries with at least 100 searches per month (3.9B vs 3.6B). Finally, the larger statistical errors $\Delta\beta$ and $\Delta c$ produce considerably larger estimates $\Delta N_{v}$ and $\Delta{\mathcal{V}}_{v}$ for SearchVolume in comparison to Google Trends.

## 8 Conclusion

We investigated the problem of estimating the total number of searches of queries belonging to a specific domain in a given period of time. By making the sensible assumption that the unobserved rank distribution of query volumes follows a Zipf’s law, our approach can be decomposed into two parts.
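For binned data, the fit compares observed bin counts with the counts expected under the Zipf law. A minimal sketch of that comparison, assuming volume bins and the model $v(r)=c\,r^{-\beta}$; the full Binned-CSN procedure of Section 6.4 involves further steps not shown here.

```python
import math

def expected_count_in_bin(lo, hi, beta, c):
    # Under volume(rank) = c * rank^(-beta), the expected number of queries
    # whose volume falls in [lo, hi) is the number of integer ranks r with
    # lo <= c * r^(-beta) < hi.
    r_hi = math.floor((c / lo) ** (1.0 / beta))  # last rank with volume >= lo
    r_lo = math.floor((c / hi) ** (1.0 / beta))  # last rank with volume >= hi
    return r_hi - r_lo

def chi_square(observed, bins, beta, c):
    # Pearson chi-square statistic between observed and expected bin counts;
    # the fit then minimizes this statistic over (beta, c).
    stat = 0.0
    for count, (lo, hi) in zip(observed, bins):
        expected = expected_count_in_bin(lo, hi, beta, c)
        if expected > 0:
            stat += (count - expected) ** 2 / expected
    return stat
```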
First, we model biases in obtaining observed volumes from SEO tools. Such biases consist of non-uniform sampling, possibly coupled with noise and approximation errors. Second, we devise estimation methods to infer the total volume of the queries of the domain starting from a biased sample. The estimation methods distinguish between continuous and binned empirical data. They are able to find the total number and the total volume of the queries in the domain which have been searched at least $v$ times in a given time period. This kind of information is extremely useful in web marketing research and advertising to quantify the market value of a domain. A large set of numerical simulations supports the validity of the proposed methods. Finally, we presented an empirical application to the domain of recipes and cooking for Italian searches in 2017, including a comparison of the continuous vs binned data cases. The first critical issue in extending our analysis to other domains consists of checking the hypothesis that the population of queries in the domain is Zipfian. As shown in Figures 1 and 9 (left), empirical data in the domain of recipes and cooking appear to be Zipfian. This motivated our assumption that the reference population, namely the queries searched in a reference domain, follows a Zipf’s law. Ref. [24] points out that the granularity and extent of a reference population should exhibit a “coherence” property. This is particularly relevant, since splitting or merging two Zipfian sets does not necessarily yield another Zipfian set; hence, the actual definition of what is and what is not in a domain is essential in meeting our assumption. The domain considered in this paper has well-defined boundaries that make it reasonably coherent. The second critical issue is the construction of the sample set of queries.
As shown by the numerical simulations, the capability of correctly inferring the total volume significantly depends on the inclusion in the empirical sample of top-volume queries from the population. Empirical sample queries should then be carefully collected, e.g., by resorting to domain experts’ knowledge or, if feasible, by crawling a set of specialized websites. Finding estimators that are (more) robust to the choice of the query sample is certainly an interesting extension of our approach for cases where it is costly to construct sufficiently large samples. The third critical issue is concerned with understanding (e.g., through statistical tests) which type of bias is likely to be present in empirical data provided by a specific SEO tool. In this paper, we considered three possible scenarios: uniform sampling alone, or together with normally distributed noise (noisy sampling), or together with count-min sketch like approximation (sketchy sampling). Other scenarios can be conceived, e.g., noise due to data anonymization [25, 26]. Further work is necessary to test which scenario best fits a given empirical dataset. Finally, the estimation methods devised in this paper are generally applicable to any context where the population is Zipfian. One such context, suggested by an anonymous reviewer, regards the Internet Domain Name System (DNS). Here, client machines query a distributed database for resolving the numeric IP address of a given host name. A DNS server local to an organization may maintain (approximate) counts of the number of queries per host name. The volume per host name is known to follow a Zipf’s law [27]. Hence, our methods can be used to estimate the total number of distinct host names (in our notation, $N$) served by a local DNS server without having to store them explicitly, which could compromise efficiency of the DNS server.
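The count-min sketch mentioned above is the approximate counting structure behind the sketchy-sampling scenario. A minimal, self-contained sketch of the data structure (class and parameter names are our own; real deployments would use tuned width/depth and pairwise-independent hash families):

```python
import hashlib

class CountMinSketch:
    """Minimal count-min sketch: `depth` rows of `width` counters each.
    A point query returns the minimum counter across rows, which never
    underestimates the true count (the error is one-sided)."""

    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _col(self, row, item):
        # Deterministic per-row hash via blake2b, salted with the row index.
        h = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._col(row, item)] += count

    def estimate(self, item):
        return min(self.table[row][self._col(row, item)]
                   for row in range(self.depth))
```

The one-sided error is exactly why sketchy sampling tends to inflate observed volumes: collisions can only add spurious counts, never remove true ones.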
Also, by aggregating the counts of top-volume host names for several DNS servers, we can estimate the size of the Internet (the number of IP addresses) accessed by the user base served.

## Software code

Software code in R [14] of all estimation methods is available at https://github.com/ruggieris/QVolume.

## Acknowledgments

This work has been partially supported by a research grant by ForTop S.R.L. (https://www.fortop.it/en) on the topic: Data-driven analysis of search engine query market. We are grateful to Lorenzo Barsotti, Stefania Camarda, Paolo Ferragina, Riccardo Guidotti, Marco Marino, and Anna Monreale for stimulating discussions.

## References

* [1] C. Petersen, J. G. Simonsen, and C. Lioma, “Power law distributions in information retrieval,” _ACM Trans. Inf. Syst._, vol. 34, no. 2, pp. 8:1–8:37, 2016.
* [2] G. Cormode, M. N. Garofalakis, P. J. Haas, and C. Jermaine, “Synopses for massive data: Samples, histograms, wavelets, sketches,” _Foundations and Trends in Databases_, vol. 4, no. 1-3, pp. 1–294, 2012.
* [3] G. Cormode and S. Muthukrishnan, “An improved data stream summary: the count-min sketch and its applications,” _J. Algorithms_, vol. 55, no. 1, pp. 58–75, 2005.
* [4] A. Clauset, C. R. Shalizi, and M. E. J. Newman, “Power-law distributions in empirical data,” _SIAM Review_, vol. 51, no. 4, pp. 661–703, 2009.
* [5] Y. Virkar and A. Clauset, “Power-law distributions in binned empirical data,” _The Annals of Applied Statistics_, vol. 8, no. 1, pp. 89–119, 2014.
* [6] M. Newman, “Power laws, Pareto distributions and Zipf’s law,” _Contemporary Physics_, vol. 46, no. 5, pp. 323–351, 2005.
* [7] S. Ding, J. Attenberg, R. A. Baeza-Yates, and T. Suel, “Batch query processing for web search engines,” in _WSDM_. ACM, 2011, pp. 137–146.
* [8] R. A. Baeza-Yates and A. Tiberi, “Extracting semantic relations from query logs,” in _KDD_. ACM, 2007, pp. 76–85.
* [9] R. A. Baeza-Yates, A. Gionis, F. Junqueira, V. Murdock, V. Plachouras, and F.
Silvestri, “The impact of caching on search engines,” in _SIGIR_. ACM, 2007, pp. 183–190.
* [10] A. Bookstein, “Informetric distributions, part I: unified overview,” _JASIS_, vol. 41, no. 5, pp. 368–375, 1990.
* [11] L. Adamic and B. Huberman, “Zipf’s law and the internet,” _Glottometrics_, vol. 3, pp. 143–150, 2002.
* [12] M. Goldstein, S. Morris, and G. Yen, “Problems with fitting to the power-law distribution,” _European Physical Journal B_, vol. 41, pp. 255–258, 2004.
* [13] C. Gillespie, “Fitting heavy tailed distributions: The powerlaw package,” _J. of Stat. Software_, vol. 64, no. 2, pp. 1–16, 2015.
* [14] R Core Team, _R: A Language and Environment for Statistical Computing_, R Foundation for Statistical Computing, Vienna, Austria, 2019. [Online]. Available: https://www.R-project.org/
* [15] X. Gabaix and R. Ibragimov, “Rank − 1/2: A simple way to improve the OLS estimation of tail exponents,” _Journal of Business & Economic Statistics_, vol. 29, no. 1, pp. 24–39, 2011.
* [16] G. Fazio and M. Modica, “Pareto or log-normal? Best fit and truncation in the distribution of all cities,” _Journal of Regional Science_, vol. 55, no. 5, pp. 736–756, 2015.
* [17] A. D. Broido and A. Clauset, “Scale-free networks are rare,” _Nature Communications_, vol. 10, 2019.
* [18] P. Holme, “Rare and everywhere: Perspectives on scale-free networks,” _Nature Communications_, vol. 10, p. 1016, 2019.
* [19] I. Voitalov, P. van der Hoorn, R. van der Hofstad, and D. Krioukov, “Scale-free networks well done,” _Phys. Rev. Research_, vol. 1, p. 033034, 2019.
* [20] A. Orlitsky, A. Suresh, and Y. Wu, “Optimal prediction of the number of unseen species,” _Proceedings of the National Academy of Sciences USA_, vol. 113, pp. 13283–13288, 2016.
* [21] L. Vaughan and Y. Chen, “Data mining from web search queries: A comparison of Google Trends and Baidu Index,” _JASIST_, vol. 66, no. 1, pp. 13–22, 2015.
* [22] F. Lillo and S.
Ruggieri, “Estimating the total volume of queries to Google,” in _WWW_. ACM, 2019, pp. 1051–1060.
* [23] J. Berkson, “Minimum chi-square, not maximum likelihood!” _The Annals of Statistics_, vol. 8, pp. 457–487, 1980.
* [24] M. Cristelli, M. Batty, and L. Pietronero, “There is more than a power law in Zipf,” _Scientific Reports_, vol. 2, p. 812, 2012.
* [25] L. Melis, G. Danezis, and E. De Cristofaro, “Efficient private statistics with succinct sketches,” in _NDSS_. The Internet Society, 2016.
* [26] G. Cormode, T. Kulkarni, and D. Srivastava, “Constrained private mechanisms for count data,” in _ICDE_. IEEE, 2018, pp. 845–856.
* [27] J. Jung, E. Sit, H. Balakrishnan, and R. T. Morris, “DNS performance and the effectiveness of caching,” _IEEE/ACM Trans. Netw._, vol. 10, no. 5, pp. 589–603, 2002.

Fabrizio Lillo, Ph.D. is professor of Mathematical Methods for Economics and Finance at the University of Bologna (Italy), teaching in the Quantitative Finance and Mathematical Finance programs at the M.Sc. and Ph.D. levels. He is a member of the faculty board of the Ph.D. program in Data Science of the Scuola Normale Superiore in Pisa. His research interests focus on statistical methods and mathematical models for finance, economics, and social sciences.

Salvatore Ruggieri, Ph.D. is professor of Computer Science at the University of Pisa, where he teaches Data Science and Web Marketing. He holds a PhD in Computer Science (1999). He is a member of the KDDLab (http://kdd.isti.cnr.it), with research interests in algorithmic fairness, explainable AI, privacy, modelling the process of knowledge discovery, classification algorithms, web mining, and applications.
# FakeFlow: Fake News Detection by Modeling the Flow of Affective Information

Bilal Ghanem1, Simone Paolo Ponzetto2, Paolo Rosso1, Francisco Rangel3
1Universitat Politècnica de València, Spain
2University of Mannheim, Germany
3Symanto Research, Germany

###### Abstract

Fake news articles often stir the readers’ attention by means of emotional appeals that arouse their feelings. Unlike in short news texts, authors of longer articles can exploit such affective factors to manipulate readers by adding exaggerations or fabricating events, in order to affect the readers’ emotions. To capture this, we propose in this paper to model the flow of affective information in fake news articles using a neural architecture. The proposed model, FakeFlow, learns this flow by combining topic and affective information extracted from text. We evaluate the model’s performance with several experiments on four real-world datasets. The results show that FakeFlow achieves superior results when compared against state-of-the-art methods, thus confirming the importance of capturing the flow of the affective information in news articles.

## 1 Introduction

In today’s information landscape, fake news is used to manipulate public opinion Zhou and Zafarani (2018) by reshaping readers’ views on certain issues. In order to achieve this goal, authors of fake news narratives need to capture the interest of the reader. Thus, they put effort into making their news articles look objective and realistic. This is usually done by adding misleading terms or events that can have a negative or positive impact on the readers’ emotions. False information in short texts, e.g., fake claims or misleading headlines, might be less harmful than news articles: such texts may contain some eye-catching terms that aim to manipulate the readers’ emotions Chakraborty et al. (2016).
In many cases, the identification of this kind of exaggeration in short statements can unmask the fabrication. In fake news articles, on the other hand, the authors exploit the length of the news to conceal their fabricated story. This exposes readers to emotional manipulation while reading longer texts that contain several imprecise or fabricated plot elements. The flow of information has been investigated for different tasks: Reagan et al. (2016) studied the emotional arcs in stories in order to understand complex emotional trajectories; Maharjan et al. (2018) model the flow of emotions over a book and quantify its usefulness for predicting success in books; Kar et al. (2018) explore the problem of creating tags for movies from plot synopses using emotions. Unlike previous works Rashkin et al. (2017); Shu et al. (2018); Castelo et al. (2019); Ghanem et al. (2020) that discarded the chronological order of events in news articles, in this work we propose a model that takes into account the affective changes in texts to detect fake news. We hypothesize that fake news has a different distribution of affective information across the text compared to real news, e.g., more fear emotion in the first part of the article, or more offensive terms overall, etc. Therefore, modeling the flow of such information may help discriminate fake from real news. Our model consists of two main sub-modules for topic-based and affective information detection. We combine these two sub-modules since a news article’s topic may be correlated with its affective information. For example, a fake news article about Islam or Black people is likely to provoke fear and express negative sentiment, while another fake article in favor of a particular politician might try to evoke more positive emotions and also express some exaggeration.
The contributions of our work are as follows:

* • We design a model that detects fake news articles by taking into account the flow of affective information (available at https://github.com/bilalghanem/fake_flow).
* • Extensive experiments on four standard datasets demonstrate the effectiveness of our model over state-of-the-art alternatives.
* • We build a novel fake news dataset, called MultiSourceFake, that is collected from a large set of websites and annotated on the basis of the joint agreement of a set of news sources.

## 2 Related Work

Previous work on fake news detection mainly follows two lines, focusing either on social media Zubiaga et al. (2015); Aker et al. (2017); Ghanem et al. (2019) or on online news articles Tausczik and Pennebaker (2010); Horne and Adali (2017); Rashkin et al. (2017); Barrón-Cedeno et al. (2019). In this work we focus on the latter. Fact-checking Karadzhov et al. (2017); Zlatkova et al. (2019); Shu et al. (2019a) is another closely related research topic. However, fact-checking targets only short texts (that is, claims) and focuses on using external resources (e.g., the Web, knowledge sources) to verify the factuality of the news. The focus in previous work on fake news detection is mainly on proposing new feature sets. Horne and Adali (2017) present a set of content-based features, including readability (number of unique words, SMOG readability measure, etc.), stylistic (frequency of part-of-speech tags, number of stop words, etc.) and psycholinguistic features (i.e., several categories from the LIWC dictionary Tausczik and Pennebaker (2010)). When these features are fed into a Support Vector Machine (SVM) classifier and applied, for instance, to the task of distinguishing satire from real news, they obtain high accuracies. Using the same features for the task of fake news detection, however, results in somewhat lower scores. Pérez-Rosas et al.
(2018) propose a model (FakeNewsDetector) that uses a feature set consisting of unigrams and bigrams, psycholinguistic, readability, punctuation and dependency-based syntactic features, and they evaluate the performance of their model in a cross-domain experiment. Rashkin et al. (2017) use a model based on n-gram features with a maximum entropy classifier and apply it to a dataset with different types of fake news articles (e.g., satire, hoax, propaganda, etc.). Similar to the previous work, the authors evaluate their system’s performance on in-domain and out-of-domain test sets, respectively. News, and in particular fake news, are dynamic in nature and change constantly. To address this dynamic nature, Castelo et al. (2019) propose a topic-agnostic model (TopicAgnostic) based on morphological (counts of part-of-speech tags), psycholinguistic (personal concerns, affection, and perception categories from the LIWC dictionary), readability (Gunning Fog metric, etc.) and Web-markup features that capture patterns of the Web pages’ layout (frequency of advertisements, presence of an author name, etc.). All of the morphological, psycholinguistic and readability features in the TopicAgnostic model were extracted from the headlines and texts of the news articles. The approach obtains a better performance than FakeNewsDetector on three different datasets using an SVM classifier. FakeNewsTracker Shu et al. (2019b) is a deep neural network-based model that consists of two branches: one encodes news article texts and the other encodes social media engagements (e.g., tweets and their replies). A similar model, the Emotionally Infused Network (EIN), is proposed in Ghanem et al. (2020). EIN encodes the text of the article and its affective content, based on several dictionaries, and then combines the two vector representations.
The authors evaluate their model on a multi-class false information dataset and show the effectiveness of using emotion features extracted from the text. Despite the large variety of features and models that have been explored in previous work, none of these works considers the sequence of affective information in text; instead, they feed entire news articles as one segment into their models. In contrast, the aim of our work is to evaluate this source of information using a neural architecture.

## 3 The FakeFlow Model

Given an input document, the FakeFlow model first divides it into $N$ segments. It then uses both word embeddings and affective features such as emotions, hyperbolic words, etc., so as to capture the flow of emotions in the document. The model learns to pay attention to the flow of affective information throughout the document in order to detect whether it is fake or real. Figure 1 shows the architecture of the FakeFlow model. The neural architecture has two main modules: the first uses a Convolutional Neural Network (CNN) to extract topic-based information from articles (left branch); the second models the flow of the affective information within the articles via Bidirectional Gated Recurrent Units (Bi-GRUs) (right branch).

Figure 1: The architecture of the FakeFlow model.

### 3.1 Topic-based Information

Given a segment $n\in N$ of words, the model first embeds words into vectors through an embedding matrix. Then it uses a CNN that applies convolution and max pooling to get an abstract representation of the input segment. This representation highlights important words and summarizes the topic information of the segment.
Then it applies a fully connected layer on the output segments to get a smaller representation ($v_{\mathit{topic}}$) for later concatenation with the representation of affective information: $v_{\mathit{topic}}=f(W_{a}\>cnn_{v}+b_{a})$ where $W_{a}$ and $b_{a}$ are the corresponding weight matrix and bias term, and $f$ is an activation function such as ReLU, tanh, etc. Key to FakeFlow is its ability to capture the relevance of the affective information with respect to the topics. For this, we concatenate the topic summary vector $v_{\mathit{topic}}$ with the representation vector $v_{\mathit{affect}}$, aimed at capturing the affective information extracted from each segment (Section 3.2): $v_{concat}=v_{\mathit{topic}}\oplus v_{\mathit{affect}}$ To merge the different representations and capture their joint interaction in each segment, the model processes the concatenated vector $v_{concat}$ with another fully connected layer: $v_{fc}=f(W_{c}\>v_{concat}+b_{c})$ In order to create an attention-focused representation of the segments that highlights important ones, and to provide the model with the ability to weight segments differently according to the similarity of neighboring segments, the model applies a context-aware self-attention mechanism Zheng et al. (2018) on $v_{fc}$. This is a crucial step, as the importance of a segment at timestep $t$ is related to the other segments, since they share the same context in the news article. Moreover, applying the attention layer can help us understand which features are most relevant by showing which words the network attends to during learning. The output of the attention layer is an attention matrix $l_{t}$ with scores for each token at each timestep.

### 3.2 Affective Flow of Information

To model the affective information flow in the news articles, we choose the following lexical features, under the assumption that they have a different distribution across the articles’ segments.
We use a term frequency representation weighted by the article’s length to extract the following features from each segment $n$:

* • Emotions: We use emotions as features to detect their change across articles’ segments. For that we use the NRC emotions lexicon Mohammad and Turney (2010) that contains $\sim$14K words labeled with the eight Plutchik emotions (8 features).
* • Sentiment: We extract the sentiment of the text, positive and negative, again using the NRC lexicon Mohammad and Turney (2010) (2 features).
* • Morality: We consider cue words from the Moral Foundations Dictionary Graham et al. (2009) (see https://moralfoundations.org/other-materials/), where words are assigned to one (or more) of the following categories: care, harm, fairness, unfairness (cheating), loyalty, betrayal, authority, subversion, sanctity and degradation (10 features).
* • Imageability: We use a list of words rated by their degree of abstractness and imageability (see https://github.com/ytsvetko/metaphor/tree/master/resources/imageability). These words have been extracted from the MRC psycholinguistic database Wilson (1988) and then annotated with degrees of abstractness and imageability using a supervised learning algorithm. The list contains 4,295 and 1,156 words rated by their degree of abstractness and imageability, respectively (2 features).
* • Hyperbolic: We use a list of $\sim$350 hyperbolic words Chakraborty et al. (2016), i.e., words with high positive or negative sentiment (e.g., terrifying, breathtakingly, soul-stirring, etc.). The authors extracted these eye-catching words from clickbait news headlines (1 feature).

To model the flow of the above features, we represent each segment of an article by a vector $v_{\mathit{affect}}$ capturing all 23 features listed above.
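As a concrete illustration, the per-segment feature extraction described above can be sketched as follows. The lexicons here are tiny toy stand-ins for the real resources (NRC, Moral Foundations Dictionary, etc.), the normalization follows the length-weighted term-frequency representation mentioned in the text, and the function names are our own.

```python
# Toy lexicons standing in for the NRC emotions lexicon, hyperbolic
# word list, etc. (the real resources contain thousands of entries).
LEXICONS = {
    "fear": {"terrifying", "panic", "threat"},
    "joy": {"wonderful", "delight"},
    "hyperbolic": {"breathtakingly", "soul-stirring", "terrifying"},
}

def split_segments(words, n_segments):
    # Divide the article into n roughly equal, chronological segments.
    size = max(1, -(-len(words) // n_segments))  # ceiling division
    return [words[i:i + size] for i in range(0, len(words), size)]

def affect_vectors(words, n_segments):
    # One vector per segment: per-lexicon term frequency, weighted
    # (normalized) by the total article length.
    total = len(words)
    return [[sum(w in lex for w in seg) / total for lex in LEXICONS.values()]
            for seg in split_segments(words, n_segments)]
```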
Then we feed the document’s vectors to a Bi-GRU network to summarize the contextual flow of the features from both directions (during prototyping, GRU produced better overall results than LSTM), obtaining $v_{\mathit{flow}}$. Given the segments’ flow representation ($v_{\mathit{flow}}$) of an article and their relevance to the topics ($l_{t}$), FakeFlow applies a dot product operation and then averages the output matrix across the segments to get a compact representation $v_{\mathit{compact}}$, which is then fed into a fully connected layer: $v_{\mathit{final}}=f(W_{d}\>v_{\mathit{compact}}+b_{d})$ Finally, to generate the overall factuality label of an article, a softmax layer is applied to the output of the fully connected layer.

## 4 Fake News Datasets

Despite the recent efforts at debunking online fake news, there is a dearth of publicly available datasets. Most of the available datasets are small in size (e.g., the PolitiFact (https://www.politifact.com/) dataset in Shu et al. (2018) has $\sim$700 available articles, the Celebrity dataset in Pérez-Rosas et al. (2018) has $\sim$500 articles, etc.), their test parts have not been manually annotated, or they have been collected from a very small number of news sources. Nonetheless, we evaluate FakeFlow on three of the available datasets to demonstrate its performance. In addition, we create our own dataset. Table 1 gives an overview of the datasets used in our work. MultiSourceFake: We rely on different resources for creating the training and test portions of the dataset, so as to provide a challenging benchmark. For the training part, we use the OpenSources.co (OS), MediaBiasFactCheck.com (MBFC), and PolitiFact (https://www.politifact.com/article/2017/apr/20/politifacts-guide-fake-news-websites-and-what-they/) news websites’ lists. The OS list contains 560 domains, the MBFC list 548 domains, and the PolitiFact list 227 domains. These lists have been annotated by professional journalists.
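The dot-product-and-average step above can be sketched in a few lines of numpy. The shapes are assumptions for illustration (we treat $l_t$ as one attention score per segment and $v_{\mathit{flow}}$ as one Bi-GRU output row per segment); the actual model operates on learned tensors inside the network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments, d_flow = 10, 32                 # assumed dimensions

l_t = rng.random(n_segments)                # attention score per segment
v_flow = rng.random((n_segments, d_flow))   # Bi-GRU output, one row per segment

# Weight each segment's flow vector by its attention score (the dot
# product step), then average across segments to obtain the compact
# article-level representation fed to the final fully connected layer.
v_compact = (l_t[:, None] * v_flow).mean(axis=0)
```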
The lists contain domains of online news websites annotated based on the content type (as in the OS news list: satire, reliable, etc.; and in the PolitiFact news list: imposter, parody, fake news, etc.) or from a factuality perspective (as in the MBFC news list: low, medium, and high factuality). From the OS list, we select domains that are in one of the following categories: fake, bias, reliable, hate, satire, or conspiracy. We consider domains under the reliable category as real news sources, and the rest as fake. The PolitiFact list is different from the OS list since it only has labels for domains that are either fake or with mixed content. We discard the mixed ones (the discarded label is “Some fake stories”) and map the remaining ones to the fake news label. Finally, we select from the MBFC list those domains that are annotated either as high or low factual news and we map them to the real and fake labels, respectively. Out of these three final lists, we select only those domains for our dataset that are annotated in all lists in a consistent way; for example, we discard those domains that are annotated as real in the OS list but whose label in the MBFC list is fake (low factuality). The final list contains 85 news websites. We then project the domain-level ground truth onto the content of those domains and randomly sample articles, with a maximum of 100 news articles per domain (some of the websites included fewer than 100 news articles). For the test part, we use the leadstories.com fact-checking website, for which professional journalists annotated online news articles at the article level as fake or real. We do not follow the annotation procedure of the training part here, since projecting the domain-level ground truth inevitably introduces noise. The journalists that annotated leadstories.com assigned a set of labels to the fake news articles, e.g., false, no evidence, satire, misleading, etc.; we map them all to the fake label.
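The consistency filter across the three lists can be sketched as follows. The domain names and labels below are invented, and the helper is our own illustration of the agreement rule, not the authors' code.

```python
def consistent_domains(*annotated_lists):
    # Keep a domain only if every list that mentions it agrees on its label.
    labels, dropped = {}, set()
    for lst in annotated_lists:
        for domain, label in lst.items():
            if domain in dropped:
                continue
            if labels.setdefault(domain, label) != label:
                del labels[domain]       # conflicting annotation: discard
                dropped.add(domain)
    return labels

# Hypothetical stand-ins for the OS, MBFC, and PolitiFact domain lists.
os_list = {"reliablenews.example": "real", "hoaxdaily.example": "fake",
           "disputed.example": "fake"}
mbfc_list = {"reliablenews.example": "real", "hoaxdaily.example": "fake",
             "disputed.example": "real"}
pf_list = {"hoaxdaily.example": "fake"}

final = consistent_domains(os_list, mbfc_list, pf_list)
```

Here "disputed.example" is dropped because the OS and MBFC lists disagree on it, mirroring the discarding rule described above.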
In addition, we discard all articles that are multimedia-based. After collecting the news articles, we postprocess them by discarding very short articles (less than 30 words). The test part includes 689 fake news articles. We complement the set with a sample of 1,000 real news articles from the training part. The overall dataset consists of 5,994 real and 5,403 fake news articles. The average document length (number of words) in the MultiSourceFake dataset is 422 words, and the 95th percentile value is 942. Figure 2 shows the distribution of the documents’ length in the dataset. Figure 2: The distribution of the documents’ length in the MultiSourceFake dataset. TruthShades: This dataset was proposed in Rashkin et al. (2017). It was crawled from a set of domains annotated by professional journalists as either propaganda, hoax, satire, or real. The dataset was built from the English Gigaword corpus for real news, plus seven other unreliable domains, each annotated with one of the three false-information labels. PoliticalNews: Motivated by the observation that “a classifier trained using content from articles published at a given time is likely to become ineffective in the future” Castelo et al. (2019), the authors of this work collected a dataset by crawling news websites between the years 2013 and 2018 in order to evaluate their model’s performance on different years. FakeNewsNet: A fake news repository that consists of two comprehensive datasets, one collected using claims from PolitiFact and the other from the GossipCop fact-checking website. Given the large number of true and false claims from these two fact-checking websites, Shu et al. (2018) built news datasets that contain visual and textual news article content as well as social media information, obtained by searching Twitter for users who shared the news. Out of all the collected information, we use only the textual information of the news articles, which is the part we are interested in. 
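The short-article filter and the length statistics reported for MultiSourceFake above can be sketched as follows (an illustration under our own assumptions, not the authors' preprocessing code):

```python
import numpy as np

def filter_and_describe(articles, min_words=30):
    """Drop very short articles and report the length statistics quoted in the text."""
    lengths = [len(a.split()) for a in articles]
    kept = [a for a, n in zip(articles, lengths) if n >= min_words]
    kept_lengths = [n for n in lengths if n >= min_words]
    stats = {"mean": float(np.mean(kept_lengths)),
             "p95": float(np.percentile(kept_lengths, 95))}
    return kept, stats

# Toy corpus: one too-short article, one of 50 words, one of 400 words.
docs = ["too short", ("w " * 50).strip(), ("w " * 400).strip()]
kept, stats = filter_and_describe(docs)
print(len(kept), stats)
```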
Name | Total | Training | Test
---|---|---|---
MultiSourceFake | 11,397 | 9,708 | 1,689
TruthShades | 23,000 | 16,000 | 4,000 - 3,000
PoliticalNews | 14,240 | 11,392 | 2,848
FakeNewsNet | 20,208 | 16,156 | 4,039

Table 1: Number of articles in the datasets.

## 5 Experiments

Experimental setup. We split the articles’ text into $N$ segments and set the maximum length of segments to 800 words, applying zero padding to the ones shorter than 800 words. Concerning the FakeFlow hyper-parameters, we tune various parameters (dropout, the size of the dense layers, activation functions, CNN filter sizes and their numbers, pooling size, size of the GRU layer, and the optimization function) using early stopping on the validation set (see Appendix A for the search space). In addition to these hyper-parameters, we also use the validation set to pick the best number of segments ($N$). For the MultiSourceFake dataset, we use 20% of the training part for validation. We represent words using pre-trained word2vec Google-News-300 embeddings (https://code.google.com/archive/p/word2vec/). For evaluation, we follow the setup from related work. We report accuracy and weighted precision, recall, and F1 score, as well as macro F1 for the datasets where the classes are imbalanced.

Baselines. To evaluate the performance of our model, we use a combination of fake news detection models and deep neural network architectures:

* CNN, LSTM: We use CNN and LSTM models and validate their performance when treating each document as one fragment. We experiment with different hyper-parameters and report results for the ones that performed best on the validation set.
* HAN: The authors of Yang et al. (2016) proposed a Hierarchical Attention Networks (HAN) model for long document classification. The proposed model consists of two levels of attention mechanisms, i.e., word and sentence attention. The model splits each document into sentences and learns sentence representations from words. 
* BERT: a text representation model that showed superior performance on multiple natural language processing (NLP) benchmarks Devlin et al. (2019). We use the pre-trained bert-base-uncased version, which has 12 layers and yields output embeddings of dimension 768. We feed the hidden representation of the special [CLS] token, which BERT uses to summarize the full input sequence, to a softmax layer. Experimentally, we found that fine-tuning the BERT layers gives higher performance. It is worth mentioning that the BERT input length is limited to 512 word pieces (sub-word level) Devlin et al. (2019); thus, we discard the rest of the text in long news articles.
* Fake News Detection Models: We compare our model to several fake news detection models: the Horne and Adali (2017) model, FakeNewsDetector Pérez-Rosas et al. (2018), the Rashkin et al. (2017) model, and EIN Ghanem et al. (2020). We compare to TopicAgnostic only on the dataset its authors proposed (PoliticalNews).
* Longformer: Given that Transformer-based models (i.e., BERT) are unable to process long sequences, we use Longformer (Beltagy et al., 2020), a SOTA model for long-document tasks. In our experiments, we set the maximum sequence length to 1500 to handle documents that have more than 512 tokens in the MultiSourceFake dataset (see Figure 2). Also, we found that fine-tuning the Longformer model gives better results and much faster convergence.

## 6 Results and Analysis

Table 2 presents the results of our proposed model and the baselines on the MultiSourceFake dataset. Our best result was achieved by using 10 as the number of segments ($N$, as found on the validation data). In Figure 3 we show the model’s performance for segments of different length (in the case of $N$=1 in Figure 3, we set the maximum segment length to 1500 words instead of 800 so as not to lose parts of the longer articles). 
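The document segmentation with per-segment padding used throughout these experiments can be sketched as follows. This is a simplified illustration; the exact chunking strategy in FakeFlow may differ:

```python
def segment_document(tokens, n_segments=10, max_len=800, pad="<pad>"):
    """Split a token list into n_segments chunks, then pad/truncate each to max_len."""
    size = max(1, -(-len(tokens) // n_segments))           # ceiling division
    segments = [tokens[i * size:(i + 1) * size] for i in range(n_segments)]
    return [(s + [pad] * max_len)[:max_len] for s in segments]

# 95 tokens split into 10 segments of up to 12 tokens each (toy sizes).
segs = segment_document([f"w{i}" for i in range(95)], n_segments=10, max_len=12)
print(len(segs), len(segs[0]))
```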
In general, the results show that models based on either word n-grams or word embeddings perform better than models that use handcrafted features, e.g., Horne and Adali (2017). Also, despite the huge amount of data used to train BERT, the results show that BERT performs worse than FakeFlow and also fails to outperform some of the other models. We speculate that this is because the input length in BERT is limited to 512 words, as mentioned previously, and a large portion of the news articles in the MultiSourceFake dataset is longer than 512 words. The results of the Longformer model confirm our claim regarding the documents’ length and show a significantly higher F1 score than the BERT model. This emphasizes that despite the strong performance of BERT on multiple NLP benchmarks, it is unable to handle long text documents, in contrast, e.g., to vanilla text categorization Adhikari et al. (2019). In addition, Longformer’s results show a higher F1 score than the FakeFlow model, yet the difference is statistically insignificant. To isolate the contribution of topical vs. affective information, we run two simplified versions of our architecture, each consisting only of the network that captures topical or affective information, respectively. The results show that the flow of affective information performs weakly when used alone; this emphasizes that the affective information of a news article is a meaningful, yet complementary, source of information.

Figure 3: The accuracy and F1 results of the FakeFlow model using different $N$ (number of segments).

Model | Acc. | Prec. | Rec. | F1macro
---|---|---|---|---
Majority Class | 0.59 | 0.35 | 0.59 | 0.37
Horne and Adali (2017) | 0.80 | 0.75 | 0.78 | 0.80
FakeNewsDetector | 0.86 | 0.86 | 0.86 | 0.86
LSTM | 0.91 | 0.86 | 0.91 | 0.90
CNN | 0.91 | 0.89 | 0.89 | 0.91
Rashkin et al. (2017) | 0.92 | 0.92 | 0.92 | 0.92
BERT | 0.93 | 0.93 | 0.94 | 0.93‡
EIN | 0.93 | 0.94 | 0.93 | 0.93‡
HAN | 0.94 | 0.94 | 0.94 | 0.93‡
Longformer | 0.97 | 0.97 | 0.97 | 0.97†
FakeFlow | 0.96 | 0.93 | 0.97 | 0.96
FakeFlow – Topic only | 0.91 | 0.89 | 0.90 | 0.90
FakeFlow – Affective only | 0.61 | 0.38 | 0.60 | 0.40

Table 2: Results on the MultiSourceFake dataset. (‡) indicates a statistically significant improvement of FakeFlow over the referred model using the McNemar test; (†) indicates no statistically significant improvement over FakeFlow.

Performance on Multiple Datasets. In Table 3 we compare the performance of the FakeFlow model to SOTA results on the other datasets introduced in Section 4. The TruthShades dataset has two test sets, in-domain and out-of-domain. In the in-domain configuration, training and test articles come from the same sources; in the out-of-domain configuration, they come from different sources. The results demonstrate that FakeFlow achieves a better F1 on both test sets. Similarly, the results on the PoliticalNews dataset show that FakeFlow also outperforms the TopicAgnostic model, although the gap in results is not very large. Finally, regarding the FakeNewsNet dataset, the deep learning-based model (FakeNewsTracker) does not perform well compared to the authors’ other baseline, a Logistic Regression (LR) classifier over one-hot vectors of the news articles’ text. This suggests that a simple word-based model can work better than a more sophisticated model that incorporates social media and context information. The FakeFlow model, on the other hand, achieves a better result, outperforming both FakeNewsTracker and the LR baseline.

TruthShades | Acc. | Prec. | Rec. | F1macro
---|---|---|---|---
Out-of-domain: Rashkin et al. (2017) | 0.67 | 0.70 | 0.67 | 0.65
Out-of-domain: FakeFlow | 0.68 | 0.69 | 0.68 | 0.68
In-domain: Rashkin et al. (2017) | 0.91 | 0.91 | 0.91 | 0.91
In-domain: FakeFlow | 0.96 | 0.96 | 0.96 | 0.96

PoliticalNews | Acc. | Prec. | Rec. | F1weighted
---|---|---|---|---
TopicAgnostic | 0.87 | 0.87 | 0.87 | 0.87
FakeFlow | 0.88 | 0.88 | 0.88 | 0.88

FakeNewsNet | Acc. | Prec. | Rec. | F1weighted
---|---|---|---|---
FakeNewsTracker | 0.80 | 0.82 | 0.75 | 0.79
One-Hot LR | 0.82 | 0.90 | 0.72 | 0.80
FakeFlow | 0.86 | 0.86 | 0.86 | 0.85

Table 3: Results on multiple datasets. We compare the FakeFlow model to SOTA models on each dataset.

Topic-Aware Model. News agencies constantly cover new events, which differ from older ones in terms of discourse and topic. Therefore, a fake news detector trained on news articles from years back may be unable to detect recent fake news. In this experiment, we evaluate our approach on the PoliticalNews dataset, which is constructed from news distributed across different years (2013 to 2018). Following the experimental setup in Castelo et al. (2019), we train the FakeFlow model on news from one year and test on each of the other years. For example, we train the model on news from 2013 and test on news from 2015. Note that each test set is thus associated with 5 results, one for each training year. Figure 4 shows the average accuracy for each test set. We compare FakeFlow to the TopicAgnostic model, which proved to be effective at detecting fake news from different years. It is worth mentioning that the features of the TopicAgnostic model are extracted from both the headlines and the text of the news articles. Nevertheless, the results show that both models have a similar performance, except for the 2013 test set, where FakeFlow achieves a higher accuracy with a difference of 7%. The experiment shows that FakeFlow is capable of detecting fake news from different years, with a flat performance across the years. Figure 4: Topic aware experiment’s results. Attention Weights. 
The proposed FakeFlow model shows that taking into account the flow of affective information in fake news is an important perspective for fake news detection. We argue that being able to better understand the behaviour of the model can make it more transparent to end-users. Figure 5 illustrates this by showing the attention weights of a fake news article across the 10 segments (left bar); the attention weight matrix is averaged along the timestep (number of segments) dimension. The figure shows that FakeFlow attends more to the beginning of the article. For a better understanding, we match the affective information with the attention weights. Regarding the news text in the figure, the emotion features (words shown with multiple colors were annotated with multiple emotion types in the NRC lexicon) show a clear example of how fake news articles try to manipulate the reader. It looks as if the presence of fear, sadness, and surprise emotions at the beginning of the article has triggered the attention on this part. Towards the end of the article, on the other hand, we notice that such negative emotions are absent, while emotions like joy and anticipation appear. This exemplifies how fake news tries to attract the readers’ attention in the first part of the text. Regarding the morality features, we only match the word “kill” with the harm category. For the hyperbolic feature, we match the words “terrifying” and “powerful”. In the same manner, both morality and hyperbolic features match words that occur at the beginning of the article. Lastly, for the sentiment and imageability features, we are unable to find a clear interpretation in this example, since many words match across all segments. Figure 5: Emotional interpretation of a fake news article by showing the attention weights (the bar on the left) and highlighting the emotions in the text. Real vs. Fake Analysis. 
In Table 4 we present an analysis of both real and fake news articles. The analysis gives the reader an intuition of how the features are distributed across the articles’ segments. It shows that an emotion like fear has, on average, a larger difference between the first and the last segment in fake news than in real news (see Figure 6 for a visualized distribution). Also, a feature like hyperbolic has a higher average value and a lower standard deviation across all segments for fake news than for real news, indicating that fake news contains a consistently higher amount of hyperbolic words across segments.

Figure 6: The flow of the Fear emotion in fake (▶) and real (•) news articles in the MultiSourceFake dataset. The Y-axis presents the average number of Fear emotion words on a 0–1 scale; the X-axis presents the document text, divided into 10 segments.

Group | Feature | Real $\mu$ $first_{seg.}$ | Real $\mu$ $last_{seg.}$ | Real $\mu$ $all_{seg.}$ | Real $\sigma$ $all_{seg.}$ | Fake $\mu$ $first_{seg.}$ | Fake $\mu$ $last_{seg.}$ | Fake $\mu$ $all_{seg.}$ | Fake $\sigma$ $all_{seg.}$
---|---|---|---|---|---|---|---|---|---
Emotions | Anger | 0.175 | 0.167 | 0.170 | 0.003 | 0.183 | 0.170 | 0.171 | 0.008
 | Anticipation | 0.301 | 0.315 | 0.264 | 0.025 | 0.293 | 0.305 | 0.260 | 0.022
 | Disgust | 0.095 | 0.101 | 0.095 | 0.004 | 0.096 | 0.091 | 0.091 | 0.007
 | Fear | 0.254 | 0.250 | 0.238 | 0.010 | 0.265 | 0.226 | 0.238 | 0.011
 | Joy | 0.217 | 0.226 | 0.183 | 0.021 | 0.207 | 0.203 | 0.175 | 0.020
 | Sadness | 0.161 | 0.158 | 0.160 | 0.006 | 0.155 | 0.155 | 0.158 | 0.007
 | Surprise | 0.140 | 0.144 | 0.123 | 0.012 | 0.142 | 0.123 | 0.120 | 0.008
 | Trust | 0.446 | 0.466 | 0.400 | 0.031 | 0.461 | 0.421 | 0.401 | 0.029
Senti. | Positive | 0.599 | 0.623 | 0.558 | 0.030 | 0.608 | 0.591 | 0.554 | 0.032
 | Negative | 0.369 | 0.337 | 0.347 | 0.011 | 0.367 | 0.336 | 0.350 | 0.013
Morality | Harm | 0.007 | 0.011 | 0.007 | 0.002 | 0.008 | 0.013 | 0.007 | 0.002
 | Care | 0.026 | 0.023 | 0.019 | 0.004 | 0.021 | 0.022 | 0.019 | 0.003
 | Fairness | 0.003 | 0.013 | 0.007 | 0.002 | 0.005 | 0.020 | 0.009 | 0.004
 | Unfairness | 0.000 | 0.000 | 0.001 | 0.000 | 0.001 | 0.000 | 0.001 | 0.001
 | Loyalty | 0.016 | 0.017 | 0.019 | 0.002 | 0.014 | 0.016 | 0.019 | 0.003
 | Betrayal | 0.004 | 0.003 | 0.005 | 0.001 | 0.002 | 0.003 | 0.004 | 0.001
 | Authority | 0.025 | 0.032 | 0.026 | 0.003 | 0.024 | 0.028 | 0.026 | 0.002
 | Subversion | 0.005 | 0.004 | 0.004 | 0.001 | 0.006 | 0.007 | 0.005 | 0.002
 | Sanctity | 0.005 | 0.005 | 0.004 | 0.001 | 0.005 | 0.006 | 0.005 | 0.002
 | Degradation | 0.003 | 0.004 | 0.003 | 0.001 | 0.006 | 0.004 | 0.003 | 0.001
Img | Imageability | 0.845 | 1.203 | 1.144 | 0.122 | 0.877 | 1.184 | 1.145 | 0.124
 | Abstraction | 0.424 | 0.331 | 0.352 | 0.028 | 0.382 | 0.304 | 0.342 | 0.037
 | Hyperbolic | 0.042 | 0.05 | 0.045 | 0.005 | 0.046 | 0.044 | 0.047 | 0.003

Table 4: A quantitative analysis of the features’ presence across articles’ segments. For each feature we present the average value in the first segment ($\mu$ $first_{seg.}$), the average value in the last segment ($\mu$ $last_{seg.}$), the average value over all 10 segments ($\mu$ $all_{seg.}$), and the standard deviation across the 10 segments ($\sigma$ $all_{seg.}$), in both real and fake news.

## 7 Conclusion

In this paper we presented FakeFlow, a model that takes into account the flow of affective information (emotions, sentiment, hyperbolic words, etc.) in texts to better detect fake news articles. The model receives as input a text segmented into smaller units, instead of processing one long fragment. This enables it to learn the flow of affective information by modeling the interaction between the topic and affective terms in the news article. 
We evaluated our model on four different datasets and compared it to several strong baselines. The extensive experiments show the effectiveness of FakeFlow over state-of-the-art models. Although FakeFlow was trained using a limited amount of text, it achieves results on par with resource-hungry models (e.g., BERT and Longformer). In future work, we plan to extend our dataset and study more fine-grained news types, e.g., propaganda, from an emotional perspective. Moreover, we plan to investigate how we can replace the lexicon-based information with language-independent approaches in an attempt to make our model multilingual.

## Acknowledgment

The first author would like to thank Ines Rehbein and Ana Uban for their valuable comments and suggestions. The work of the third author was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE on MISinformation and MIScommunication in social media: FAKE news and HATE speech (PGC2018-096212-B-C31) and by the Generalitat Valenciana under the research project DeepPattern (PROMETEO/2019/121).

## References

* Adhikari et al. (2019) Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. DocBERT: BERT for document classification. _arXiv preprint arXiv:1904.08398_.
* Aker et al. (2017) Ahmet Aker, Leon Derczynski, and Kalina Bontcheva. 2017. Simple open stance classification for rumour analysis. In _Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017_, pages 31–39.
* Barrón-Cedeño et al. (2019) Alberto Barrón-Cedeño, Giovanni Da San Martino, Israa Jaradat, and Preslav Nakov. 2019. Proppy: A system to unmask propaganda in online news. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 33, pages 9847–9848.
* Beltagy et al. (2020) Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The Long-Document Transformer. _arXiv preprint arXiv:2004.05150_.
* Castelo et al. 
(2019) Sonia Castelo, Thais Almeida, Anas Elghafari, Aécio Santos, Kien Pham, Eduardo Nakamura, and Juliana Freire. 2019. A Topic-Agnostic Approach for Identifying Fake News Pages. In _Companion Proceedings of The 2019 World Wide Web Conference_, pages 975–980.
* Chakraborty et al. (2016) Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly. 2016. Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media. In _Advances in Social Networks Analysis and Mining (ASONAM), 2016 IEEE/ACM International Conference on_, pages 9–16.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pages 4171–4186.
* Ghanem et al. (2019) Bilal Ghanem, Alessandra Teresa Cignarella, Cristina Bosco, Paolo Rosso, and Francisco Manuel Rangel Pardo. 2019. UPV-28-UNITO at SemEval-2019 Task 7: Exploiting Post’s Nesting and Syntax Information for Rumor Stance Classification. In _Proceedings of the 13th International Workshop on Semantic Evaluation_, pages 1125–1131.
* Ghanem et al. (2020) Bilal Ghanem, Paolo Rosso, and Francisco Rangel. 2020. An emotional analysis of false information in social media and news articles. _ACM Transactions on Internet Technology (TOIT)_, 20(2):1–18.
* Graham et al. (2009) Jesse Graham, Jonathan Haidt, and Brian A Nosek. 2009. Liberals and Conservatives Rely on Different Sets of Moral Foundations. _Journal of personality and social psychology_, 96(5):1029–1046.
* Horne and Adali (2017) Benjamin D Horne and Sibel Adali. 2017. This Just In: Fake News Packs a Lot in Title, Uses Simpler, Repetitive Content in Text Body, More Similar to Satire than Real News. 
In _Eleventh International AAAI Conference on Web and Social Media_, pages 759–766.
* Kar et al. (2018) Sudipta Kar, Suraj Maharjan, and Thamar Solorio. 2018. Folksonomication: Predicting tags for movies from plot synopses using emotion flow encoded neural network. In _Proceedings of the 27th International Conference on Computational Linguistics_, pages 2879–2891.
* Karadzhov et al. (2017) Georgi Karadzhov, Preslav Nakov, Lluís Màrquez, Alberto Barrón-Cedeño, and Ivan Koychev. 2017. Fully automated fact checking using external sources. In _Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017_, pages 344–353.
* Maharjan et al. (2018) Suraj Maharjan, Sudipta Kar, Manuel Montes-y-Gómez, Fabio A Gonzalez, and Thamar Solorio. 2018. Letting emotions flow: Success prediction by modeling the flow of emotions in books. _arXiv preprint arXiv:1805.09746_.
* Mohammad and Turney (2010) Saif M Mohammad and Peter D Turney. 2010. Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon. In _Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text_, pages 26–34. Association for Computational Linguistics.
* Pérez-Rosas et al. (2018) Verónica Pérez-Rosas, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. 2018. Automatic Detection of Fake News. In _Proceedings of the 27th International Conference on Computational Linguistics_, pages 3391–3401, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
* Rashkin et al. (2017) Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_, pages 2931–2937.
* Reagan et al. 
(2016) Andrew J Reagan, Lewis Mitchell, Dilan Kiley, Christopher M Danforth, and Peter Sheridan Dodds. 2016. The emotional arcs of stories are dominated by six basic shapes. _EPJ Data Science_, 5(1):1–12.
* Shu et al. (2019a) Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019a. dEFEND: Explainable fake news detection. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 395–405.
* Shu et al. (2019b) Kai Shu, Deepak Mahudeswaran, and Huan Liu. 2019b. FakeNewsTracker: a Tool for Fake News Collection, Detection, and Visualization. _Computational and Mathematical Organization Theory_, 25(1):60–71.
* Shu et al. (2018) Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, and Huan Liu. 2018. FakeNewsNet: A Data Repository with News Content, Social Context and Dynamic Information for Studying Fake News on Social Media. _arXiv preprint arXiv:1809.01286_.
* Tausczik and Pennebaker (2010) Yla R Tausczik and James W Pennebaker. 2010. The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods. _Journal of language and social psychology_, 29(1):24–54.
* Wilson (1988) Michael Wilson. 1988. MRC Psycholinguistic Database: Machine-usable dictionary, version 2.00. _Behavior research methods, instruments, & computers_, 20(1):6–10.
* Yang et al. (2016) Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical Attention Networks for Document Classification. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, pages 1480–1489.
* Zheng et al. (2018) Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. OpenTag: Open Attribute Value Extraction from Product Profiles. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 1049–1058.
* Zhou and Zafarani (2018) Xinyi Zhou and Reza Zafarani. 
2018. Fake news: A survey of research, detection methods, and opportunities. _arXiv preprint arXiv:1812.00315_.
* Zlatkova et al. (2019) Dimitrina Zlatkova, Preslav Nakov, and Ivan Koychev. 2019. Fact-checking meets fauxtography: Verifying claims about images. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, pages 2099–2108.
* Zubiaga et al. (2015) Arkaitz Zubiaga, Maria Liakata, Rob Procter, Kalina Bontcheva, and Peter Tolmie. 2015. Towards Detecting Rumours in Social Media. In _AAAI Workshop: AI for Cities_, pages 35–41.

## Appendix A Appendices

### A.1 Hyper-parameters

For the FakeFlow hyper-parameters, we tune the following parameters over their corresponding search spaces:

* Dropout: random selection in the range [0.1, 0.6],
* Dense layers: [8, 16, 32, 64, 128],
* Activation functions: [selu, relu, tanh, elu],
* CNN filters’ sizes: [(2, 3, 4), (3, 4, 5), (4, 5, 6), (3, 5), (2, 4), (4,), (5,), (3, 5, 7), (3, 6)],
* Numbers of CNN filters: [4, 8, 16, 32, 64, 128],
* Pooling size: [2, 3],
* GRU units: [8, 16, 32, 64, 128],
* Optimization function: [adam, adadelta, rmsprop, sgd].

For early stopping, we set the `patience` parameter to 4 and the number of epochs to 50. For the parameter selection, we use the hyperopt library (https://github.com/hyperopt/hyperopt), which takes the above search space and randomly selects $N$ different combinations of parameters (trials). We use a small value of $N$ in all of our experiments to avoid excessive fine-tuning; we set $N$ to 35.

### A.2 Topic-Aware experiments

In Figure 4, we present the average accuracy of our model when training on one year and testing on each of the others. In the following we show the results before averaging. 
Train \ Test | 2013 | 2014 | 2015 | 2016 | 2017 | 2018
---|---|---|---|---|---|---
2013 | – | 0.82 | 0.74 | 0.76 | 0.78 | 0.74
2014 | 0.84 | – | 0.79 | 0.76 | 0.81 | 0.74
2015 | 0.79 | 0.81 | – | 0.82 | 0.80 | 0.82
2016 | 0.80 | 0.76 | 0.87 | – | 0.85 | 0.79
2017 | 0.79 | 0.82 | 0.76 | 0.80 | – | 0.85
2018 | 0.79 | 0.75 | 0.81 | 0.83 | 0.83 | –
Average | 0.80 | 0.79 | 0.79 | 0.79 | 0.81 | 0.79

Table 5: FakeFlow results for each train-test run in the Topic-Aware experiment (diagonal cells, where the training and test years coincide, are not evaluated).
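The train-on-one-year, test-on-the-others protocol behind Table 5 can be sketched as follows. The `train_and_score` callable is a placeholder standing in for fitting and evaluating FakeFlow; it is not part of the released code:

```python
from statistics import mean

def topic_aware_matrix(articles_by_year, train_and_score):
    """Evaluate every (train year, test year) pair, skipping the diagonal."""
    years = sorted(articles_by_year)
    matrix = {}
    for train_year in years:
        for test_year in years:
            if test_year == train_year:
                continue  # diagonal left empty, as in Table 5
            matrix[(train_year, test_year)] = train_and_score(
                articles_by_year[train_year], articles_by_year[test_year])
    # average accuracy per test year (the numbers plotted in Figure 4)
    return {ty: mean(v for (tr, t), v in matrix.items() if t == ty) for ty in years}

dummy_scorer = lambda train, test: 0.8  # placeholder scorer for illustration
avg = topic_aware_matrix({y: [] for y in range(2013, 2019)}, dummy_scorer)
print(avg)
```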
Laboratoire de Physique Théorique et Modèles Statistiques (LPTMS), CNRS, Université Paris-Saclay, F-91405 Orsay, France Laboratoire Charles Coulomb (L2C), Univ. Montpellier, CNRS, F-34095, Montpellier, France IATE, INRAE, CIRAD, Montpellier SupAgro, Univ. Montpellier, F-34060, Montpellier, France CNR-ISC Uos Sapienza, Piazzale A. Moro 2, IT-00185 Roma, Italy Department of Physics, Sapienza Università di Roma, Piazzale A. Moro 2, IT-00185 Roma, Italy Department of Physics, Sapienza Università di Roma, Piazzale A. Moro 2, IT-00185 Roma, Italy Institut Universitaire de France Department of Physics, Sapienza Università di Roma, Piazzale A. Moro 2, IT-00185 Roma, Italy CNR-ISC Uos Sapienza, Piazzale A. Moro 2, IT-00185 Roma, Italy # The effect of chain polydispersity on the elasticity of disordered polymer networks Valerio Sorichetti These authors contributed equally <EMAIL_ADDRESS>Andrea Ninarello These authors contributed equally José M. Ruiz-Franco CNR-ISC Uos Sapienza, Piazzale A. Moro 2, IT-00185 Roma, Italy Virginie Hugouvieux IATE, INRAE, CIRAD, Montpellier SupAgro, Univ. Montpellier, F-34060, Montpellier, France Walter Kob Laboratoire Charles Coulomb (L2C), Univ. Montpellier, CNRS, F-34095, Montpellier, France Emanuela Zaccarelli CNR-ISC Uos Sapienza, Piazzale A. Moro 2, IT-00185 Roma, Italy Lorenzo Rovigatti Department of Physics, Sapienza Università di Roma, Piazzale A. Moro 2, IT-00185 Roma, Italy ###### Abstract Due to their unique structural and mechanical properties, randomly-crosslinked polymer networks play an important role in many different fields, ranging from cellular biology to industrial processes. In order to elucidate how these properties are controlled by the physical details of the network (e.g. chain- length and end-to-end distributions), we generate disordered phantom networks with different crosslinker concentrations $C$ and initial density $\rho_{\rm init}$ and evaluate their elastic properties. 
We find that the shear modulus computed at the same strand concentration for networks with the same $C$, which determines the number of chains and the chain-length distribution, depends strongly on the preparation protocol of the network, here controlled by $\rho_{\rm init}$. We rationalise this dependence by employing a generic stress-strain relation for polymer networks that does not rely on the specific form of the polymer end-to-end distance distribution. We find that the shear modulus of the networks is a non-monotonic function of the density of elastically-active strands, and that this behaviour has a purely entropic origin. Our results show that if short chains are abundant, as is always the case for randomly-crosslinked polymer networks, the knowledge of the exact chain conformation distribution is essential for correctly predicting the elastic properties. Finally, we apply our theoretical approach to published experimental data, qualitatively confirming our interpretations. ## 1 Introduction For many applications, the elasticity of a crosslinked polymer network is one of its most important macroscopic properties 1. It is thus not surprising that a lot of effort has been devoted to understand how the features of the network, such as the fraction and functionality of crosslinkers or the details of the microscopic interactions between chain segments, contribute to generate its elastic response 2, 3, 4, 5, 6, 7. The macroscopic behaviour of a real polymer network (be it a rubber or a hydrogel) depends on many quantities, such as the properties of the polymer and of the solvent, the synthesis protocol and the thermodynamic parameters. However, in experiments it is difficult to disentangle how these different elements contribute to the elastic properties of the material. This task becomes easier in simulations, since all the relevant parameters can be controlled in detail 8, 9, 10, 11, 12, 13, 14, 15, 16, 17. 
In this regard, an important feature of real polymer networks that can be exploited is that their elasticity can be described approximately as the sum of two contributions: one due to the crosslinkers and one due to the entanglements 18, 19, 16, 17. The former can be approximated well by the elastic contribution of the corresponding phantom network 20, i.e. a network in which the excluded volume between the strands is not taken into account 16, 17. It is therefore very important to understand the role of the chain conformation distribution on the dynamics and elasticity of phantom polymer models. The distribution of the chemical lengths of the strands between two crosslinkers, i.e. the chains, in a network (chain-length distribution for short) depends on the chemical details and on the synthesis protocol. For example, in randomly crosslinked networks this distribution is typically exponential 8, 21, whereas chains are monodisperse when end-crosslinking is performed. Regardless of the synthesis route, the presence of short or stretched chains is common, although the exact form of the chain conformation fluctuations is highly non-trivial. From a theoretical viewpoint, however, the majority of the results on the elasticity of polymer networks have been obtained within the mean-field realm, relying on scaling arguments and on the assumption of chain Gaussianity 22, 20. Therefore, simulations can be extremely helpful to clarify the exact role played by the chain-length distribution and better understand experimental results. However, most simulation studies have focused on melt densities, where random or end-crosslinking can be employed efficiently 8, 9, 10, 11, 13, 15, 16, 17, or have employed idealised lattice networks 23, 12, 24, 14, 25. This makes it challenging to compare the results from such simulations with common experimental systems such as hydrogels, which are both low-density and disordered 26. 
In the present paper, we show that the knowledge of the exact chain end-to-end distribution is essential to correctly predict the linear elastic response of low-density polymer networks. We do so by simulating disordered phantom networks generated with different crosslinker concentrations $C$ and initial monomer densities $\rho_{\rm init}$. In our systems, the former parameter controls the number of chains and the chain-length distribution, while the latter determines the initial end-to-end distance distribution of the chains and therefore plays a similar role as the solvent quality in an experimental synthesis. To generate the gels we exploit a recently introduced technique based on the self-assembly of patchy particles, which has been proven to correctly reproduce structural properties of experimental microgels 27, 28, 29. This method allows us to obtain systems at densities comparable with those of experimental hydrogels, i.e. giving access to swelling regimes inaccessible through the previously employed techniques based on numerical vulcanization of high-density polymer melts 9, 10, 11, 12, 13, 14, 16, 17. We first demonstrate that systems generated with the same $C$ but at different values of $\rho_{\rm init}$ can display very different elastic properties even when probed at the same strand concentration and despite having the same chain-length distribution. Secondly, we compare the numerical results to the phantom network theory 20. In order to do so, we determine the theoretical relation between the shear modulus $G$ and the single-chain entropy for generic non-Gaussian chains. We find a good agreement between theory and simulation only for the case in which the exact chain end-to-end distribution is given as an input to the theory, with some quantitative deviations appearing at low densities. On the other hand, assuming a Gaussian behaviour of the chains leads to qualitatively wrong predictions for all the investigated systems except the highest-density ones.
Overall, our analysis shows that for low-density polymer networks, and in the presence of short chains, the knowledge of the exact chain conformational fluctuations is crucial to predict the system's elastic properties reliably. Notably, we validate our approach against recently published experimental data 30, 31, showing that the behaviour of systems where short chains are present cannot be modelled without a precise knowledge of the chain-size-dependent end-to-end distribution. ## 2 Theoretical background In this section we review some theoretical results on the elasticity of polymer networks, for the most part available in the literature 1, 22, 20, by re-organizing them and introducing the terminology and notation that will be employed in the rest of the paper. We will consider a polydisperse polymer network made of crosslinkers of valence $\phi$ connected by $N_{s}$ strands. Here and in the following we will assume the network to be composed of $N_{s}$ elastically-active strands, defined as strands with the two ends connected to distinct crosslinkers, i.e., that are neither dangling ends nor closed loops (e.g. loops of order one). Moreover, for those strands which are part of higher-order loops, we assume their elasticity to be independent of the loop order (see Zhong et al. 32 and Lin et al. 33). We will focus on evaluating the shear modulus $G$ of the gel, which relates a pure-shear strain to the corresponding stress in the linear elastic regime 34. One can theoretically compute $G$ by considering uniaxial deformations of strain $\lambda$ along, for instance, the $x$ axis.
We assume the system to be isotropic; moreover, since we are interested in systems with no excluded volume interactions, we assume a volume-preserving transformation (in the absence of excluded volume interactions the pressure of the system is negative, and the system would therefore collapse if the volume were not kept constant), i.e., $\lambda_{x}=\lambda$ and $\lambda_{y}=\lambda_{z}=\lambda^{-1/2}$ as extents of deformation along the three axes. The starting point to calculate the shear modulus is the single-chain entropy, which is a function of the chain's end-to-end distance 22. In general, we can write the instantaneous end-to-end vector of a single chain which connects any two crosslinkers as $\mathbf{r}(t)=\mathbf{R}+\mathbf{u}(t)$, where $\mathbf{R}\equiv\overline{\mathbf{r}(t)}$ represents the time-averaged end-to-end vector and $\mathbf{u}(t)$ the fluctuation term. We also assume that there are no excluded volume interactions, so that the chains can freely cross each other. We thus have $\overline{r^{2}}=R^{2}+\overline{u^{2}}$ (here and in the following, italic symbols denote the magnitudes of the corresponding vectors), since $\overline{\mathbf{R}\cdot\mathbf{u}(t)}=0$, the position and fluctuations of crosslinkers being uncorrelated 20. The entropy of a chain with end-to-end vector $\mathbf{r}=(r_{x},r_{y},r_{z})$ is $S_{n}(\mathbf{r})=k_{B}\log W_{n}(\mathbf{r})+A_{n}$ 35, where $W_{n}(\mathbf{r})$ is the end-to-end probability density of $\mathbf{r}$ and $A_{n}$ is a temperature-dependent parameter that can be set to zero in this context. If the three spatial directions are independent (which is the case, e.g., if $W_{n}(\mathbf{r})$ is Gaussian) then $W_{n}(\mathbf{r})$ can be written as the product of three functions of $r_{x},r_{y}$, and $r_{z}$, so that $S_{n}(\mathbf{r})=s_{n}(r_{x})+s_{n}(r_{y})+s_{n}(r_{z})$, where $s_{n}$ is the entropy of a one-dimensional chain.
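This separability can be verified symbolically for the standard Gaussian chain of $n$ bonds of length $b$, $W_{n}(\mathbf{r})\propto\exp(-3r^{2}/2nb^{2})$; the following is a minimal sketch (symbol names are ours) checking that the $r_{x}$-dependence of $S_{n}=k_{B}\log W_{n}$ is exactly that of a one-dimensional chain entropy:

```python
import sympy as sp

rx, ry, rz, n, b, kB = sp.symbols('r_x r_y r_z n b k_B', positive=True)

# Gaussian end-to-end probability density of a chain of n bonds of length b
W = (3/(2*sp.pi*n*b**2))**sp.Rational(3, 2) \
    * sp.exp(-3*(rx**2 + ry**2 + rz**2)/(2*n*b**2))

S = kB*sp.log(W)                      # S_n(r) = k_B log W_n(r), with A_n = 0

# the r_x-dependence of S is that of the 1D entropy s_n(x) = -3 k_B x^2/(2 n b^2):
# their difference is independent of r_x
s1d = -3*kB*rx**2/(2*n*b**2)
assert sp.simplify(sp.diff(S - s1d, rx)) == 0
```

The same check holds for $r_{y}$ and $r_{z}$ by symmetry, so $S_{n}$ indeed splits into three identical one-dimensional contributions plus an $\mathbf{r}$-independent constant.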
Building upon this result, we can assume that each chain in the network can be replaced by three independent one-dimensional chains parallel to the axes using the so-called three-chain approximation 36, 1. This assumption is exact for Gaussian chains, although for non-Gaussian chains the associated error is small if the strain is not too large 1. We will also assume (i) that the length of each chain in the unstrained state ($\lambda=1$) is ${\tilde{r}\equiv(\overline{r^{2}})^{1/2}}\equiv{(R^{2}+\overline{u^{2}})^{1/2}}$, and (ii) that, upon deformation, the chains deform affinely with the network, so that the length of the chain oriented along the $x$ axis becomes $\tilde{r}_{\lambda}$ and those of the chains oriented along the $y$ and $z$ axes become $\tilde{r}_{\lambda^{-1/2}}$. With those assumptions, the single-chain entropy $S_{n}(\lambda)$ becomes 1 $S_{n}(\lambda)=\frac{s_{n}(\tilde{r}_{\lambda})+2s_{n}(\tilde{r}_{\lambda^{-1/2}})}{3},$ (1) where we need to divide by $3$ since we are replacing each unstrained chain with end-to-end distance $\tilde{r}$ by three fictitious chains of the same size. Usually, the $\lambda$-dependence of $\tilde{r}_{\lambda}$ is controlled by the microscopic model and by the macroscopic conditions (density, temperature, etc.).
Two well-known limiting cases are the affine network model 20, in which both the average positions and fluctuations of the crosslinkers deform affinely, $\tilde{r}_{\lambda}=\lambda\tilde{r}$, and the phantom network model 20, in which the fluctuations are independent of the extent of the deformation, so that $\tilde{r}_{\lambda}={[(\lambda^{2}R^{2}+\overline{u^{2}})]^{1/2}},\ \text{and thus}\ \ \tilde{r}_{\lambda^{-1/2}}={[(R^{2}/\lambda+\overline{u^{2}})]^{1/2}}.$ (2) The free-energy difference between the deformed and undeformed state of a generic chain is $\Delta F=-T[S_{n}(\lambda)-S_{n}(1)]$ and thus the $x$ component of the tensile force is given by $f_{x}(\lambda)=\frac{1}{L_{x0}}\frac{d\Delta F}{d\lambda}=-\frac{T}{L_{x0}}\frac{dS_{n}(\lambda)}{d\lambda}.$ (3) The latter quantity divided by the cross-section $L_{y0}L_{z0}$ yields the $xx$ component of the stress tensor, which thus reads $\sigma_{xx}=-\frac{TR^{2}}{3V}\left[\frac{\lambda}{\tilde{r}_{\lambda}}\frac{ds_{n}(\tilde{r}_{\lambda})}{d\tilde{r}_{\lambda}}-\frac{1}{\lambda^{2}\tilde{r}_{\lambda^{-1/2}}}\frac{ds_{n}(\tilde{r}_{\lambda^{-1/2}})}{d\tilde{r}_{\lambda^{-1/2}}}\right],$ (4) where we have used Eq. (2). Since the volume is kept constant, the Poisson ratio is $1/2$ 34 and hence the single-chain shear modulus $g$ is connected to the Young modulus $Y=\frac{d\sigma_{xx}}{d\lambda}\Bigr{|}_{\lambda=1}$ by $g=Y/3$, which implies that $g=-\frac{TR^{2}}{3V}\left[\frac{ds_{n}(\tilde{r})}{d\tilde{r}}\left(\frac{1}{\tilde{r}}-\frac{R^{2}}{2\tilde{r}^{3}}\right)+\frac{d^{2}s_{n}(\tilde{r})}{d\tilde{r}^{2}}\frac{R^{2}}{2\tilde{r}^{2}}\right].$ (5) We note that, although similar equations can be found in Smith 36 and Treloar 1, to the best of our knowledge Eq. (5) has not been reported in the literature in this form.
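The route from Eq. (4) to Eq. (5) can be checked by carrying out the differentiation symbolically. A minimal sketch (symbol names are ours), specialised to the Gaussian one-dimensional entropy $s_{n}(x)=-3k_{B}x^{2}/2nb^{2}$, which recovers the familiar Gaussian single-chain result $g=k_{B}TR^{2}/Vnb^{2}$:

```python
import sympy as sp

lam, R, u, T, V, n, b, kB, x = sp.symbols(
    'lambda R u T V n b k_B x', positive=True)

# 1D Gaussian chain entropy (up to an irrelevant constant)
s = -3*kB*x**2/(2*n*b**2)
s1 = sp.diff(s, x)                          # s_n'(x)

# phantom-model deformed chain lengths, Eq. (2)
r_par  = sp.sqrt(lam**2*R**2 + u**2)        # chain oriented along x
r_perp = sp.sqrt(R**2/lam + u**2)           # chains oriented along y, z

# Eq. (4): xx component of the stress tensor
sigma = -(T*R**2/(3*V))*(lam/r_par*s1.subs(x, r_par)
                         - s1.subs(x, r_perp)/(lam**2*r_perp))

# g = Y/3 with Y = d sigma_xx / d lambda evaluated at lambda = 1
g = sp.simplify(sp.diff(sigma, lam).subs(lam, 1)/3)
assert sp.simplify(g - kB*T*R**2/(V*n*b**2)) == 0   # Gaussian result
```

The same symbolic pipeline with a generic function $s_{n}$ reproduces Eq. (5) term by term; the Gaussian case above is the special instance in which the fluctuation term $\overline{u^{2}}$ drops out of the final modulus.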
In order to obtain the total shear modulus $G$ of the network, and under the assumption that the effect of higher-order loops can be neglected 32, 33, one has to sum over the $N_{s}$ elastically-active chains. Of course, the result will depend on the specific form chosen for the entropy $s_{n}$. We stress that a closed-form expression of the end-to-end probability density $W_{n}(\mathbf{r})$ is not needed, since only its derivatives play a role in the calculation. Hence, it is sufficient to know the force-extension relation for the chain, since, as discussed above, the component of the force along the pulling direction satisfies Eq. (3) (see also Sec. AII). For a freely-jointed chain (FJC) 22 of $n$ bonds of length $b$, $W_{n}(\mathbf{r})$ has the following form 37, 1: $W_{n}(\mathbf{r})=\left[\frac{n(n-1)}{8\pi rb^{2}}\right]\sum_{t=0}^{\tau}\frac{(-1)^{t}}{t!(n-t)!}\left[\left(\frac{nb-r}{2b}\right)-t\right]^{n-2},$ (6) where $\tau=\lfloor(nb-r)/2b\rfloor$, i.e., the largest integer not exceeding $(nb-r)/2b$. In the limit of large $n$, Eq. (6) reduces to a Gaussian 37: $W_{n}^{G}(\mathbf{r})=\left(\frac{3}{2\pi nb^{2}}\right)^{3/2}\exp\left(-\frac{3r^{2}}{2nb^{2}}\right).$ (7) Under this approximation, the shear modulus takes the well-known form $G^{G}=\frac{k_{B}T}{V}\sum_{i}^{N_{s}}\frac{R_{i}^{2}}{n_{i}b^{2}}=\left\langle\frac{R^{2}}{nb^{2}}\right\rangle k_{B}T\nu\equiv Ak_{B}T\nu,$ (8) where $\nu=N_{s}/V$ is the number density of elastically-active strands and $A$ is often called the front factor 38, 39, 36, 40. We have also introduced the notation $\langle\cdot\rangle=N_{s}^{-1}\sum_{i}^{N_{s}}\cdot$ for the average over all the strands in the system.
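Eq. (6) can be implemented directly; the following sketch (helper names are ours) also verifies by numerical quadrature two exact FJC properties, the normalisation of $W_{n}$ and the second moment $\overline{r^{2}}=nb^{2}$:

```python
import math

def W_exact(r, n, b=1.0):
    """Exact FJC end-to-end probability density, Eq. (6)."""
    if r <= 0.0 or r >= n*b:
        return 0.0
    tau = math.floor((n*b - r)/(2*b))          # upper limit of the sum
    s = sum((-1)**t/(math.factorial(t)*math.factorial(n - t))
            * ((n*b - r)/(2*b) - t)**(n - 2) for t in range(tau + 1))
    return n*(n - 1)/(8*math.pi*r*b**2)*s

# midpoint-rule checks for n = 20: normalisation and <r^2> = n b^2
n, dr = 20, 1e-3
rs = [(i + 0.5)*dr for i in range(int(n/dr))]
norm = sum(4*math.pi*r**2*W_exact(r, n)*dr for r in rs)
m2 = sum(4*math.pi*r**4*W_exact(r, n)*dr for r in rs)
print(norm, m2)   # close to 1 and 20, respectively
```

Note that, unlike the Gaussian of Eq. (7), the exact density vanishes identically for $r\geq nb$, which is the property responsible for the finite extensibility of the chain.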
In the particular case that the $\overline{r_{i}^{2}}$ values of the different chains are Gaussian distributed (a distinct assumption from the one that $W_{n}(\mathbf{r})$ is Gaussian), which is the case, for example, for end-crosslinking starting from a melt of precursor chains 11, it can be shown that $A=1-\frac{2}{\phi}$ (we recall that $\phi$ is the crosslinker valence), so that one obtains the commonly reported expression (see also Sec. AI) 22, 20 $G^{G}=\left(1-\frac{2}{\phi}\right)k_{B}T\nu.$ (9) Eq. (8) was derived from Eq. (5), which assumes the validity of the phantom network model. If one assumes, on the other hand, that the affine network model is valid, a different expression for $G$ is obtained (see Supplementary material). To obtain a more accurate description of the end-to-end probability distribution for strained polymer networks, one has to go beyond the Gaussian model and introduce more refined theoretical assumptions. Amongst other approaches, the Langevin-FJC 1 (L-FJC), the extensible-FJC 41 (ex-FJC) and the worm-like chain 42 (WLC) have been extensively used in the literature. In the L-FJC model the force-extension relation is approximated using an inverse Langevin function, whereas in the ex-FJC model bonds are modeled as harmonic springs. These models give a better description of the system’s elasticity when large deformations are considered. The WLC model, in which chains are represented as continuously-flexible rods, is useful when modeling polymers with high persistence length (compared to the Kuhn length). More details about these models can be found in Sec. AII. ## 3 Model and Methods We build the polymer networks by employing the method reported in Gnan et al. 27, which makes use of the self-assembly of a binary mixture of limited-valence particles. Particles of species $A$ can form up to four bonds (valence $\phi=4$) and bond to $B$ particles only, thus acting as crosslinkers.
Particles of species $B$ can form up to two bonds ($\phi=2$) and can bond to $A$ and $B$ particles. We carry out the assembly of $N_{\rm init}=N_{A}+N_{B}=5\cdot 10^{4}$ particles at different number densities $\rho_{\rm init}=N_{\rm init}/V$, with $V$ the volume of the simulation box, and different crosslinker concentrations $C=N_{A}/(N_{B}+N_{A})$. We consider two initial densities $\rho_{\rm init}=0.1,0.85$, and $C=1\%$, $5\%$ and $10\%$. The results are averaged over two system realizations for each pair of $\rho_{\rm init},C$ values. The assembly proceeds until an almost fully-bonded percolating network is attained, i.e. the fraction of formed bonds is at least $N_{\rm bond}/N_{\rm bond}^{\rm max}=99.9\%$, where $N_{\rm bond}^{\rm max}={(4N_{A}+2N_{B})/2}$ is the maximum number of bonds. The self-assembly process is greatly accelerated thanks to an efficient bond-swapping mechanism 43. When the desired fraction $N_{\rm bond}/N_{\rm bond}^{\rm max}$ is reached, we stop the assembly, identify the percolating network and remove all particles or clusters that do not belong to it. Since some particles are removed, at the end of the procedure the values of $\rho_{\rm init}$ and $C$ change slightly. However, these changes are small (at most $10\%$) and in the following we will hence use the nominal (initial) values of $\rho_{\rm init}$ and $C$ to refer to the different networks. Figure 1: Rescaled distribution of chain lengths for all the simulated systems. We report the data for two samples (S1 and S2) generated with two values of the initial density each. The orange dashed lines are the theoretical prediction of Eq. (10). The normalised distribution of the chemical lengths $n$ of the chains, $P(n)/P(1)$, that constitute the network is shown in Figure 1.
Here the chemical chain length is defined as the number of particles in a chain, excluding the crosslinkers, so that a chain with ${n+1}$ bonds has length $n$ (we recall that, since two crosslinkers cannot bind to each other, the minimum chain length is $n=1$, corresponding to $2$ bonds). In all cases the distribution decays exponentially, as is also the case for random-crosslinking from a melt of precursor chains 8. Moreover, $P(n)$ does not depend on the initial density 27, 44 and, as one expects given the equilibrium nature of the assembly protocol, it is fully reproducible. This distribution can be estimated from the nominal values of $\phi$ and $C$ via the well-known formula of Flory 45: $\frac{P(n)}{P(1)}=\left(1-\frac{1}{\langle n\rangle}\right)^{n-1},$ (10) where $\langle n\rangle=2(1-C)/\phi C$ is the mean chain length 44 which, using the nominal crosslinker valence ($\phi=4$) and concentrations, takes the values $49.5$, $9.5$ and $4.5$ for $C=1\%$, $5\%$ and $10\%$, respectively. The parameter-free theoretical probability distribution is shown as orange dashed lines in Fig. 1 and reproduces almost perfectly the numerical data. The network contains a few defects in the form of dangling ends (chains which are connected to the percolating network by one crosslinker only) and first-order loops, that is, chains having both ends connected to the same crosslinker 33. Since there are no excluded volume interactions, these defects are elastically inactive and therefore do not influence the elastic properties of the network 46, 47, 33. For the configurations assembled at $C=1\%$, the percentage of particles belonging to the dangling ends is $\approx 10\%$ for $\rho_{\rm init}=0.1$ and $\approx 6\%$ for $\rho_{\rm init}=0.85$. For higher values of $C$, the percentages are much smaller (e.g. $\approx 2\%$ for $C=5\%,\rho_{\rm init}=0.1$ and $\approx 1\%$ for $C=10\%,\rho_{\rm init}=0.1$). In order to obtain an ideal fully-bonded network, the dangling ends are removed.
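Eq. (10) and the quoted mean chain lengths are straightforward to reproduce; a minimal sketch (function names are ours) that also checks the self-consistency of the normalised geometric distribution, whose mean must equal $\langle n\rangle$:

```python
def mean_chain_length(C, phi=4):
    """Mean chain length, <n> = 2(1 - C) / (phi C)."""
    return 2*(1 - C)/(phi*C)

def P(n, n_mean):
    """Normalised Flory (geometric) distribution, Eq. (10), with p = 1/<n>."""
    p = 1.0/n_mean
    return p*(1 - p)**(n - 1)

for C in (0.01, 0.05, 0.10):
    nm = mean_chain_length(C)                        # 49.5, 9.5, 4.5
    mean = sum(n*P(n, nm) for n in range(1, 20000))  # self-consistency check
    print(f"C = {C:.2f}: <n> = {nm:.1f}, distribution mean = {mean:.3f}")
```

The truncation of the sum at $n=2\times 10^{4}$ is harmless, since the geometric tail is vanishingly small there even for the longest-chain case $C=1\%$.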
We note that during this procedure, the crosslinkers connected to dangling ends have their valence reduced from $\phi=4$ to $\phi=3$ or $2$ (in the latter case, they become type $B$ particles). The percentage of so-created $3$-valent crosslinkers remains small: for $\rho_{\rm init}=0.1$ it is $\approx 15\%$, $4\%$, and $2\%$ for $C=1\%$, $5\%$, and $10\%$, respectively. The presence of these crosslinkers slightly changes the average crosslinker valence, but does not influence the main results of this work. Once the network is formed, we change the interaction potential, making the bonds permanent and thus fixing the topology of the network. Since we are interested in understanding the roles that topology and chain size distribution of a polymer network play in determining its elasticity, we consider interactions only between bonded neighbours, similarly to what has been done in Duering et al. 11. Particles that do not share a bond do not feel any mutual interaction, and hence chains can freely cross each other (whence the name phantom network). Two bonded particles interact through the widely used Kremer-Grest potential 48, which is given by the sum of the Weeks-Chandler-Andersen (WCA) potential 49, $\mathcal{U}_{\rm WCA}(r)=\begin{cases}4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right]+\epsilon&\text{if $r\leq 2^{\frac{1}{6}}\sigma$}\\\ 0&\text{otherwise}\end{cases},$ (11) which models steric interactions, and of a finite extensible nonlinear elastic (FENE) potential, i.e., $\mathcal{U}_{\rm FENE}(r)=-\frac{kr_{0}^{2}}{2}\ln\left[1-\left(\frac{r}{r_{0}}\right)^{2}\right],$ (12) which models the bonds. We set $k=30\epsilon/\sigma^{2}$ and $r_{0}=1.5\sigma$. Here and in the following, all quantities are given in reduced units. The units of energy, length and mass are respectively $\epsilon$, $\sigma$ and $m$, where $\epsilon$ and $\sigma$ are defined by Eq.
(11) and $m$ is the mass of a particle, which is the same for $A$ and $B$ particles. The units of temperature, density, time and elastic moduli are respectively $[T]=\epsilon/k_{B},[\rho]=\sigma^{-3}$, $[t]=\sqrt{m\sigma^{2}/\epsilon}$, and $[G]=\epsilon\sigma^{-3}$. In these units, the Kuhn length of the model is $b=0.97$ 48. We run molecular dynamics simulations in the $NVT$ ensemble at constant temperature $T=1.0$ by employing a Nosé-Hoover thermostat 50. Simulations are carried out using the LAMMPS simulation package 51, with a simulation time step $\delta t=0.003$. In order to study the effects of the density on the elastic properties, the initial configurations are slowly and isotropically compressed or expanded to reach the target densities $\rho=0.1,0.2,0.5,0.85,1.5$. Then, a short annealing of $10^{6}$ steps and subsequently a production run of $10^{7}$ steps are carried out. Even for the system with the longest chains, the mean-squared displacement of the single particles reaches a plateau, indicating that the chains have equilibrated (see Supplementary material). For each final density value, we run several simulations for which we perform a uniaxial deformation in the range $\lambda_{\alpha}\in\left[0.8,1.2\right]$ along a direction $\alpha$, where $\lambda_{\alpha}=L_{\alpha}/L_{\alpha,0}$ is the extent of the deformation and $L_{\alpha,0}$ and $L_{\alpha}$ are the initial and final box lengths along $\alpha$, respectively. The deformation is carried out at constant volume with a deformation rate of $10^{-1}$. To confirm that the system is isotropic, we perform the deformation along different spatial directions $\alpha$. Figure 2: Snapshots of a network with crosslinker concentration $C=5\%$ and assembly density $\rho_{\rm init}=0.1$ at (a, c) $\rho=0.1$ and (b, d) $\rho=1.5$, subject to a uniaxial deformation along the vertical direction with $\lambda=1.2$. a-b: Turquoise and red particles indicate crosslinkers and monomers, respectively.
The orange scale bars in the top left corners are $10\,\sigma$ long. c-d: The structural response during the uniaxial deformation is represented by the spatial configuration of chains. Orange segments: bonds belonging to overstretched chains ($r>0.95\cdot nb$); green segments: bonds belonging to unstretched chains ($r<0.95\cdot nb$). Figure 2 shows representative snapshots of the $C=5\%$, $\rho_{\rm init}=0.1$ system at low ($\rho=0.1$) and high ($\rho=1.5$) density, subject to a uniaxial deformation along the vertical direction. In panels 2a-b we show the particles (monomers and crosslinkers), highlighting the highly disordered nature of the systems and their structural heterogeneity, which is especially evident at low density. The same systems are also shown in panels 2c-d, where we use different colours to display bonds of overstretched chains (defined here as chains with $r>0.95\cdot nb$) and unstretched chains ($r<0.95\cdot nb$), highlighting the heterogeneous elastic response of these systems when they are subject to deformations. Once the system acquires the target value of $\lambda_{z}$, we determine the diagonal elements of the stress tensor $\sigma_{\alpha\alpha}$ and compute the engineering stress $\sigma_{\rm eng}$ as 22: $\sigma_{\rm eng}=\frac{\sigma_{\rm tr}}{\lambda_{z}}=\frac{1}{\lambda_{z}}\left[\sigma_{zz}-\frac{1}{2}\left(\sigma_{xx}+\sigma_{yy}\right)\right],$ (13) where $\sigma_{\rm tr}$ is the so-called true stress 1, 52. The shear modulus $G$ is then the quantity that connects the engineering stress and the strain through the following relation 22: $\sigma_{\rm eng}=G\left[\left(\lambda_{z}-\lambda_{\rm ref}\right)-\frac{1}{\left(\lambda_{z}-\lambda_{\rm ref}\right)^{2}}\right]\ .$ (14) In Eq. (14) $\lambda_{\rm ref}$ is an extra fit parameter that we add to take into account the fact that in some cases $\sigma_{\rm eng}\neq 0$ for $\lambda_{z}=1$, which signals the presence of some pre-strain in our configurations.
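Extracting $G$ from Eq. (14) amounts to a two-parameter least-squares fit; a self-contained sketch on synthetic data (the numerical values here are invented for illustration, not taken from our simulations):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma_eng(lam, G, lam_ref):
    """Eq. (14): engineering stress as a function of the strain lambda_z."""
    x = lam - lam_ref
    return G*(x - 1/x**2)

lam = np.linspace(0.8, 1.2, 21)           # strain range used in this work
data = sigma_eng(lam, 0.05, 0.02)         # synthetic stress-strain curve
(G_fit, lref_fit), _ = curve_fit(sigma_eng, lam, data, p0=(0.1, 0.0))
print(G_fit, lref_fit)                    # recovers G = 0.05, lam_ref = 0.02
```

With noisy data the same call returns the parameter covariance as the second output, from which an uncertainty on $G$ can be estimated.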
The stress-strain curves we use to estimate $G$ are averaged over 10 independent configurations, obtained by randomising the particle velocities prior to deformation with a Gaussian distribution corresponding to temperature $T=1.0\,\epsilon/k_{B}$, in order to reduce the statistical noise. Figure 3: Example of stress-strain curves for the $C=1\%$, $\rho_{\rm init}=0.1$ system. Symbols are simulation data, lines are fits with Eq. (14). Figure 3 shows the numerical data for the stress-strain curves for the $C=1\%$, $\rho_{\rm init}=0.1$ system. We also report the associated theoretical curves, fitted to Eq. (14), through which we obtain an estimate of the shear modulus. ## 4 Results and discussion Figure 4: a: Shear modulus as a function of the elastically-active strand number density $\nu$ for all the investigated systems. Solid blue line: Eq. (9) with $\phi=4$. Solid orange line: slope $1/3$. b: Same as a, with both $G$ and $\nu$ rescaled by $\gamma\rho^{A}_{\rm init}$, where $\gamma=0.74$ for $\rho_{\rm init}=0.85$ and $\gamma=1$ for $\rho_{\rm init}=0.1$ is a fit parameter and $\rho^{A}_{\rm init}=C\rho_{\rm init}$ is the initial crosslinker density. We use the simulation data to estimate $\tilde{r}\equiv(\overline{r^{2}})^{1/2}$ (RMS end-to-end distance) and $R\equiv\overline{r}$ for each chain to compute the elastic moduli of the networks through Eq. (5). In the following, we will refer to the elastic moduli computed in this way with the term “theoretical”. Figure 4a shows the shear modulus as computed in simulations for all investigated systems as a function of $\nu$, the density of elastically-active strands. First of all, we observe that systems generated at the same $C$ but with different values of $\rho_{\rm init}$ exhibit markedly different values of the shear modulus when probed under the same conditions (i.e. same strand density).
This result highlights the fundamental role of the crosslinking process, which greatly affects the initial distribution of the chains’ end-to-end distances even when the number of chains and their chemical length distribution, being dependent only on $C$ (see Fig. 1), are left unaltered. Thus, the echo of the difference between the initial end-to-end distributions gives rise to distinct elastic properties of the phantom networks even when probed at the same strand density. In Fig. 4a we also plot the behaviour predicted by Eq. (9) (blue line), which assumes Gaussian-distributed end-to-end distances. Even though the numerical data seem to approach this limit at very large values of the density, they do so with a slope that is clearly smaller than unity. For the $C=1\%$ sample this slope is almost exactly $1/3$, while it is very close to this number for the $C=5\%$ and $C=10\%$ samples assembled at $\rho_{\rm init}=0.1$. This behaviour can be understood at the qualitative level from Eq. (8): $R$ is the average distance between crosslinkers and therefore it changes affinely upon compression or expansion, thereby scaling as $R\propto\nu^{-1/3}$ 53, 54. As a result, in the Gaussian limit the shear modulus scales as $G^{G}\propto\nu^{1/3}$ 55, 54, 53, 30. As discussed above, our results show that the way this limiting regime is approached depends on the crosslinker concentration $C$ and on the preparation state, which is here controlled by $\rho_{\rm init}$. The quantitative differences in the elastic response of systems with different $C$ and $\rho_{\rm init}$ can be partially rationalised by looking at the scaling properties of the end-to-end distances. We notice that the RMS equilibrium end-to-end distance $R(n)$ of the strands for different values of $\rho_{\rm init}$ and $C$ nearly collapses on a master curve when divided by the initial crosslinker density, $\rho^{A}_{\rm init}=C\rho_{\rm init}$ (see Supplementary material).
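The scaling argument for the Gaussian limit can be made explicit with a short symbolic check (symbol names are ours): substituting the affine rescaling $R=R_{0}(\nu/\nu_{0})^{-1/3}$ into Eq. (8) leaves a pure $\nu^{1/3}$ dependence.

```python
import sympy as sp

nu, nu0, R0, n, b, kB, T = sp.symbols('nu nu_0 R_0 n b k_B T', positive=True)

R = R0*(nu/nu0)**sp.Rational(-1, 3)   # crosslinker distances rescale affinely
G = R**2/(n*b**2)*kB*T*nu             # Gaussian shear modulus, Eq. (8)

ratio = sp.simplify(sp.expand_power_base(G/nu**sp.Rational(1, 3)))
assert not ratio.has(nu)              # G is proportional to nu**(1/3)
```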
A slightly better agreement is found if the heuristic factor $\gamma\rho^{A}_{\rm init}$, with $\gamma=0.74$ for $\rho_{\rm init}=0.85$ and $\gamma=1$ for $\rho_{\rm init}=0.1$, is used. Based on this observation, we rescale the data of Fig. 4a multiplying both $G$ and $\nu$ by $\gamma\rho^{A}_{\rm init}$. The result is shown in Fig. 4b: one can see that the shear moduli of systems with the same $C$ but different values of $\rho_{\rm init}$ nicely fall on the same curve. Moreover, in the large-$\nu$ limit, where all the curves tend to have the same slope, a good collapse of the data of systems with different $C$ is also observed. The differences arising between systems at different $C$ can be explained by noting that the crosslinker concentration controls the relative abundance of chains with different $n$, whose elastic response cannot be rescaled on top of each other by using $n$ but depends on their specific end-to-end distribution (see e.g. Appendix AII). As a result, the elasticity of networks generated at different $C$ cannot be rescaled on top of each other. In particular, systems with more crosslinkers, and hence more short chains, will deviate earlier and more strongly from the Gaussian behaviour. Interestingly, $G$ exhibits a non-monotonic behaviour as a function of $\nu$; this feature appears for all but the lowest $C$ and $\rho_{\rm init}$ values. This behaviour, which has also been observed in hydrogels 54, 53, 56, 30, 31, cannot be explained assuming that the chains are Gaussian, since in this case one has $G\propto\nu^{1/3}$ for all $\nu$, as discussed above. Given that our model features stretchable bonds, at large strains it cannot be considered a FJC, being more akin to an ex-FJC 41. Therefore, one might be tempted to ascribe the increase of $G$ upon decreasing $\nu$ to the energetic contribution. For this reason, in addition to the Gaussian and FJC descriptions we also plot in Fig.
5b the shear modulus estimated by neglecting the contributions of those chains that have $r\geq 0.95\cdot nb$ (i.e. of the overstretched chains). Since the two sets of data overlap almost perfectly, we confirm that the energetic contribution due to the few overstretched chains is negligible: we can thus conclude that the non-monotonicity we observe has a purely entropic origin. This holds true for all the systems investigated except for the $C=10\%$, $\rho_{\rm init}=0.85$ system, which contains the largest number of short, overstretched chains, as can be seen in Figure 1. Figure 5: Comparison between the shear moduli obtained through Eq. (5) with three different approximations and the numerical ones ($G$) for the simulated systems (see legends). Dashed-dotted line / stars in panel b: FJC approximation with no overstretched chains (chains with $r\geq 0.95\cdot nb$). In Fig. 5 we compare the numerical shear modulus for all investigated systems with the estimates predicted by different theories, under the common assumption that the three-chain model remains valid (see Sec. 2). In particular, we show results obtained with the FJC (Eq. (6)), Gaussian (Eq. (7)), and ex-FJC (see Sec. AII) models. One can see that the agreement between the theoretical and numerical results is always better for larger values of $\nu$, i.e. when chains are less stretched. Moreover, the agreement between data and theory is better for systems generated at smaller $\rho_{\rm init}$. We note that the Gaussian approximation, which predicts a monotonically-increasing dependence on $\nu$, fails to reproduce the qualitative behaviour of $G$, whereas the ex-FJC systematically overestimates $G$. The FJC description is the one that consistently achieves the best results, although it fails (dramatically at large $C$) at small densities. We ascribe this qualitative behaviour to the progressive failure of the three-chain assumption as the density decreases.
Since the three-chain model is known to overestimate the stress at large strains compared to more complex and realistic approximations such as the tetrahedral model 1, the resulting single-chain contribution to the elastic modulus for stretched chains is most likely overestimated as well. Regardless of the specific model used, our results suggest that when the samples are strongly swollen, something that is possible to achieve in experiments 30, any description that attempts to model the network as a set of independent chains gives rise to an unreliable estimate of the overall elasticity even when energetic contributions due to stretched bonds do not play a role. Figure 6: Fitting results to (a-c) simulation data (a: $C=1\%$, b: $C=5\%$, c: $C=10\%$) and (d-e) experimental data (d: Young modulus taken from Hoshino et al. 30, e: shear modulus taken from Matsuda et al. 31). The quality of the fits in panel e does not depend on whether the point at $Q=1$ is included, nor on whether we restrict the fit to swelling ratios $Q\lesssim 15$. In addition to providing the best comparison with the numerical data in the whole density range, the FJC description also captures the presence and (although only in a semi-quantitative fashion) the position of the minimum. This holds true for all the investigated systems, highlighting the role played by the short chains, whose strong non-Gaussian character heavily influences the overall elasticity of the network. Although real short chains do not follow the exact end-to-end probability distribution we use here (see Eq. (6)), they are surely far from the scaling regime and hence they should never be approximated as Gaussian chains, even in the melt or close to the theta point.
This aspect has important consequences for the analysis of experimental randomly-crosslinked polymer networks, for which one may attempt to extract some microscopic parameter (such as the contour length or the average end-to-end distance) by fitting the measured elastic properties to some theoretical relation such as the ones we discuss here. Unfortunately, such an approach will most likely yield unreliable estimates. This is shown in Figure 6(a-c), where we compare the $G$ values with the L-FJC model (see Appendix AII). Here we use the L-FJC model since we assume, as often done when dealing with similar systems 30, that the network can be considered as composed of $N_{s}$ strands of $\langle n\rangle$ segments. The expression we employ contains two quantities that can be either fixed or fitted to the data: the average end-to-end distance in a specific state (e.g. the preparation state), $R_{0}$, and the average strand length $\langle n\rangle$ (or, equivalently, the contour length $r_{\rm max}=\langle n\rangle b$). Together with the numerical data, in Fig. 6 we present three sets of theoretical curves: $G$ as estimated by using the values of $R_{0}$ and $\langle n\rangle$ as obtained from the simulations, or fitted by using either $R_{0}$ or both quantities as free parameters. If $C$ is small (and hence $\langle n\rangle$ is large), the difference between the parameter-free expression and the numerical data is small ($10$-$15\%$). However, as $C$ becomes comparable with the values that are often used in real randomly-crosslinked hydrogels ($\approx 5\%$), the difference between the theoretical and simulation data becomes very significant: for instance, for $C=10\%$ the parameter-free expression fails to capture even the presence of the minimum. Fitting the numerical data makes it possible to achieve an excellent agreement, although the values of the parameters turn out to be significantly different (sometimes by more than $50\%$) from the real values (see Sec AIII).
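For reference, the L-FJC force-extension relation underlying these fits can be sketched using Cohen's Padé approximant to the inverse Langevin function (our choice of approximant, shown here for illustration only):

```python
def inv_langevin(x):
    """Cohen's Pade approximant, L^{-1}(x) ~ x (3 - x^2) / (1 - x^2)."""
    return x*(3 - x**2)/(1 - x**2)

def f_lfjc(r, n, b=1.0, kBT=1.0):
    """L-FJC tensile force: f = (kBT / b) L^{-1}(r / (n b))."""
    return (kBT/b)*inv_langevin(r/(n*b))

# at small extensions this reduces to the Gaussian force f = 3 kBT r / (n b^2)
r, n = 0.5, 50
print(f_lfjc(r, n), 3*r/n)       # nearly identical at this extension
# and it diverges as the chain approaches full extension, r -> n b
print(f_lfjc(49.5, 50))
```

The divergence of the force near full extension is what produces the upturn of $G$ at low $\nu$ for strongly stretched, short chains, a feature that the Gaussian expression cannot reproduce.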
Our results thus show that even in the simplest randomly-crosslinked system (a phantom network of freely-jointed chains), neglecting the shortness of the majority of the chains, which dominate the elastic response, can lead to a dramatic loss of accuracy. Randomly-crosslinked polymer networks contain short chains which are inevitably quite far from the scaling regime, and hence even their qualitative behaviour can become elusive when viewed through the lens of polymer theories that rest too heavily on the Gaussianity of the chains. We also apply our theoretical expressions to two sets of data which have been recently published and that exhibit a non-monotonic behaviour. Both experiments have been carried out in the laboratory of J. P. Gong 30, 31. The first system is a tetra-PEG hydrogel composed of monodisperse long chains that can be greatly swollen by using a combined approach of adding a molecular stent and applying a PEG dehydration method 30. Since the system is monodisperse and the chains are quite long, we expect the theoretical expressions derived here to work well. Indeed, as shown in Fig. 6d, the resulting Young modulus is a non-monotonic function of the swelling ratio $Q$, i.e. of the ratio between the volume at which the measurements are performed and the volume at which the sample was synthesised. The experimental data can be fitted with both the L-FJC and worm-like-chain (WLC) expressions (see Sec AII), since both models reproduce the data with high accuracy when fitted with two free parameters ($R_{0}$ and $\langle n\rangle$). However, better results are obtained with the WLC model, which fits well when $\langle n\rangle$ is fixed to its experimentally-estimated value, yielding $R_{0}=7.2$ nm, very close to the independently-estimated value of $8.1$ nm 30, in agreement with what is reported in Hoshino et al. 30. By contrast, as shown in Fig.
6e, the theoretical expressions reported here cannot go beyond a qualitative agreement with experiments on randomly-crosslinked PNaAMPS networks 31, even if two parameters are left free and we fit only to the experimental data in a narrow range of swelling ratios. In addition, the fitted values of the two parameters are always unphysical (e.g. smaller than one nanometer, see Sec AIII). Although part of the discrepancy might be due to the charged nature of the polymers involved 57, we believe that the disagreement between the theoretical and experimental behaviours can be at least partially ascribed to the randomly-crosslinked nature of the network, and hence to the abundance of short chains. Since the end-to-end distribution of such short chains is not known and depends on the chemical and physical details, there is no realistic way of taking their contribution to the overall elasticity into account. These results thus highlight the difficulty of deriving a theoretical expression to assess the elastic behaviour of randomly-crosslinked real networks.

## 5 Summary and conclusions

We have used numerical simulations of disordered phantom polymer networks to understand the role of the chain size distribution on their elastic properties. In order to do so we employed an in silico synthesis technique by means of which we can independently control the number and chemical size of the chains, set by the crosslinker concentration, as well as the distribution of their end-to-end distances, which can be controlled by varying the initial monomer concentration. We found that networks composed of chains of equal contour length can have shear moduli that depend strongly on the end-to-end distance even when probed at the same strand concentration.
This shows that even in simple systems the synthesis protocol can have a large impact on the final material properties of the network, even when it does not affect the chemical properties of its basic constituents, as recently highlighted in a microgel system 58. We then compared the simulation results with the predictions of phantom network polymer theory, which we revisited to obtain explicit expressions for the shear modulus assuming three different models for the chain conformation fluctuations, namely the exact freely-jointed chain, Gaussian, and extensible freely-jointed chain models. We observed a non-monotonic behaviour of $G$ as a function of the strand density that, thanks to a comparison with the theoretical results, can be completely ascribed to entropic effects that cannot be accounted for within a Gaussian description. We thus conclude that the role played by short stretched chains in the mechanical description of polymer networks is fundamental and should not be overlooked. This insight is supported by an analysis of experimental data on the elastic moduli of hydrogels reported in the literature. We are confident that the numerical and analytical tools employed here can be used to address similar and other open questions concerning both the dynamics and the topology of systems in which excluded-volume effects are also taken into account, and hence entanglement effects may be relevant. Investigations in this direction are under way.

## Appendix

## AI Shear modulus of a system with Gaussian distributed end-to-end distances

In this section we show how Eq. (9) can be derived for a polydisperse network. Similar derivations for the case of monodisperse networks can be found in standard textbooks 22, 20. We start by noting something that is sometimes overlooked: the Gaussian distribution $W_{n}^{G}(\mathbf{r})$, Eq. (7), applies to a single chain.
However, at the ensemble level the distribution of end-to-end vectors, which we may call $\Omega[\mathbf{r}(n)]$, is not in general a Gaussian. If we assume, for example, that the system has been obtained through end-crosslinking starting from a melt of precursor chains 11, then $\Omega[\mathbf{r}(n)]=W_{n}^{G}(\mathbf{r})$ 35, so that the magnitudes $r$ of the $\mathbf{r}$ vectors will be Gaussian distributed. Under this assumption, one has $\left\langle\frac{\overline{r^{2}}}{nb^{2}}\right\rangle=\left\langle\frac{R^{2}}{nb^{2}}\right\rangle+\left\langle\frac{\overline{u^{2}}}{nb^{2}}\right\rangle=1.$ (15) To evaluate the term in brackets in Eq. (8) we thus only need to evaluate the fluctuation term $\left\langle\frac{\overline{u^{2}}}{nb^{2}}\right\rangle$. This term can be computed using the equipartition theorem (the same derivation can be found, for example, in Ref. 59). The total energy of the fluctuations is $\mathcal{U}_{\rm fluct}=\frac{3}{2}k_{B}TN_{x}$, where $N_{x}$ is the number of active crosslinkers, since there is one mode for each node and each mode carries an energy of $\frac{3}{2}k_{B}T$ 59. Moreover $N_{x}=2N_{s}/\phi$, with $N_{s}$ the number of elastically-active strands. Therefore the mean energy per strand is $\frac{\mathcal{U}_{\rm fluct}}{N_{s}}=\frac{3k_{B}T}{\phi}=\frac{3k_{B}T}{2N_{s}}\sum_{i}^{N_{s}}\frac{\overline{u_{i}^{2}}}{n_{i}b^{2}}=\frac{3k_{B}T}{2}\left\langle\frac{\overline{u^{2}}}{nb^{2}}\right\rangle,$ (16) where the sum extends over all the elastically-active strands. Therefore, we get $\left\langle\frac{\overline{u^{2}}}{nb^{2}}\right\rangle=\frac{2}{\phi}$ (a generalization to the polydisperse case of a well-known result for the phantom network 60, 21), from which we finally obtain $\left\langle\frac{R^{2}}{nb^{2}}\right\rangle=1-\frac{2}{\phi}.$ (17) From Eq. (8) we obtain Eq. (9), i.e., $G^{G}=\left(1-\frac{2}{\phi}\right)k_{B}T\nu$. The validity of Eq.
(17) depends not only on the crosslinking procedure but also on the macroscopic thermodynamic parameters (such as solvent quality, density or pressure). For example, if the chain-size distribution is such that short chains are abundant, as is the case for randomly crosslinked networks 8, 21, the front factor $A$ (see Eq. (8)) will depend on the chain size distribution, since short chains are non-Gaussian. Another example is when the crosslinking procedure is performed in a state in which the chains are non-Gaussian, e.g. under good solvent conditions, where the chains behave as self-avoiding random walks 22.

## AII Models of chain statistics

The freely-jointed chain approximation, Eq. (6), describes an inextensible chain, since each component $f_{\alpha},\ \alpha=x,y,z$ of the force $\mathbf{f}$ required to stretch the chain, e.g. $f_{x}=-T\frac{dS_{n}(\mathbf{r})}{dx}=-\frac{k_{B}T}{W_{n}(\mathbf{r})}\frac{dW_{n}(\mathbf{r})}{dx},$ (18) diverges in the limit $r\to nb$, i.e., when the end-to-end distance approaches the contour length. In the limit of large strains and large degree of polymerization $n$, a better approximation is provided by the well-known Langevin dependence of the elongation on the exerted force $f$, which yields for the end-to-end probability distribution function 1 $W_{n}^{L}(\mathbf{r})=A\exp\left[-\frac{r}{b}\mathcal{L}^{-1}(r/nb)\right]\left[\frac{\mathcal{L}^{-1}(r/nb)}{\sinh\mathcal{L}^{-1}(r/nb)}\right]^{-n},$ (19) where $\beta=1/k_{B}T$, $T$ is the temperature, $k_{B}$ is the Boltzmann constant, $A$ is a normalisation constant, and $\mathcal{L}^{-1}(x)$ is the inverse Langevin function.
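Since $\mathcal{L}^{-1}$ has no closed form, in practice it is evaluated numerically; a minimal sketch using bisection (any root finder works, and this is not the paper's fitting code):

```python
import numpy as np

def langevin(x):
    """The Langevin function L(x) = coth(x) - 1/x."""
    return 1.0 / np.tanh(x) - 1.0 / x

def inv_langevin(y, tol=1e-12):
    """Invert L on (0, 1) by bisection; L is monotonically increasing."""
    lo, hi = 1e-9, 1.0
    while langevin(hi) < y:          # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if langevin(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip L(L^{-1}(y)) = y; L^{-1}(y) diverges as y = r/(nb) -> 1.
for y in (0.1, 0.5, 0.9, 0.99):
    x = inv_langevin(y)
    print(y, x, langevin(x))
```

For small arguments $\mathcal{L}(x)\approx x/3$, so the numerical inverse can be checked against $\mathcal{L}^{-1}(y)\approx 3y$ in that regime.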
The latter is defined through $\mathcal{L}(x)=\coth{(x)}-1/x$, which turns out to be equal to the ratio between the end-to-end distance and the contour length of a chain that is subject to a force $f$: $\frac{r}{nb}=\coth\left(\frac{fb}{k_{B}T}\right)-\frac{k_{B}T}{fb}=\mathcal{L}(\beta fb).$ (20) We note in passing that $\mathcal{L}^{-1}(\cdot)$ cannot be written in a closed form 61 and hence must be evaluated numerically. Phantom Kremer-Grest chains behave exactly as freely-jointed chains up to end-to-end distances that are very close to the contour length. However, beyond those values the Langevin description is no longer valid, and one has to resort to the ex-FJC model, for which the following analytical form of the force-extension curve has been recently derived 41: $r=nb\mathcal{L}(\beta fb)+\frac{nf}{k}\left[1+\frac{1-\mathcal{L}(\beta fb)\coth(\beta fb)}{1+\frac{f}{kb}\coth(\beta fb)}\right],$ (21) where $k$ is the monomer-monomer force constant in the harmonic approximation. In principle it is possible to integrate the inverse of this relation to get the end-to-end probability density. However, as is clear from Eq. (5), we only need the derivatives of the inverse function, and hence there is no need to obtain $W(r)$ explicitly. We set the value of $k$ to the value of the second derivative of the Kremer-Grest potential computed at the minimum, $k\approx 867\,\epsilon/\sigma^{2}$. We have also fitted the force-extension curves obtained in simulations of single chains, finding values that are compatible with this estimate. We also report the following expression, which approximates the force-elongation relation for a worm-like chain 42 (WLC) and is used in the main text to fit the experimental data shown in Fig.
6c: $\frac{fb}{k_{B}T}=\frac{1}{4}\left(1-\frac{r}{nb}\right)^{-2}-\frac{1}{4}+\frac{r}{nb}-0.8\left(\frac{r}{nb}\right)^{2.15}$ (22)

Figure 7: The force required to extend a chain by $r$ (scaled by its contour length $nb$) for the Gaussian (dashed line, Eq. (7)), L-FJC and ex-FJC (solid lines, Eq. (21)) and FJC (dashed-dotted lines, Eq. (6)) models. Note that, plotted in this way, $f$ depends on $n$ only in the latter case.

Fig. 7 shows the force-extension curve for polymer chains described with the different models. Since the force is plotted as a function of the distance scaled by the FJC contour length $nb$, the Gaussian, L-FJC and ex-FJC descriptions become independent of $n$. By contrast, the extensional force of the exact FJC model, whose end-to-end probability distribution is given by Eq. (6), retains an $n$-dependence that is very strong for small $n$ and decreases upon increasing the chain size. Indeed, for $n\gtrsim 40$ the resulting force is essentially $n$-independent and overlaps almost completely with the L-FJC curve.

## AIII Fitting procedure and additional results

As discussed in the main text in Sec. 4, we fit the experimental Young's and shear moduli reported in Hoshino et al. 30 (system A) and Matsuda et al. 31 (system B). As commonly done in the analysis of experimental systems, in both cases we consider the networks to be composed of strands of average size $\langle n\rangle$ and use relations based on Eq. (5): for system A we use $G_{\rm exp}=-\frac{TR^{2}\nu}{3}\left[\frac{ds_{n}(\tilde{r})}{d\tilde{r}}\left(\frac{1}{\tilde{r}}-\frac{R^{2}}{2\tilde{r}^{3}}\right)+\frac{d^{2}s_{n}(\tilde{r})}{d\tilde{r}^{2}}\frac{R^{2}}{2\tilde{r}^{2}}\right]$ (23) while for system B we fit $Y_{\rm exp}=3G_{\rm exp}$. We fit the experimental data with the L-FJC and WLC models by using Eqs. (19) and (22) to numerically evaluate the derivatives of $s_{n}(\tilde{r})$. We evaluate $R$, $\tilde{r}$ and $\nu$ as follows.
Let $V_{0}$, $\nu_{0}$ and $R_{0}$ be the volume, chain number density and average end-to-end distance of the polymer chains in the preparation state, and $V$, $\nu$ and $R$ the same quantities for the generic state point at which the experimental measurements are carried out. We define the swelling ratio $Q=V/V_{0}=\nu_{0}/\nu$, so that $\nu=\nu_{0}/Q$ and $R=\left(\frac{V}{V_{0}}\right)^{1/3}R_{0}=Q^{1/3}R_{0}$. Since $\tilde{r}^{2}=\overline{r^{2}}=R^{2}+\overline{u^{2}}$ and, as shown in Sec AI, $\overline{u^{2}}=\frac{2}{\phi}nb^{2}$, for systems with $\phi=4$ we also have that $\tilde{r}=\sqrt{R^{2}+r_{\rm max}b/2}$, where $r_{\rm max}=\langle n\rangle b$ is the contour length of the strands. Using the numbers reported in the original papers, we find that $\nu_{0}^{A}=1.97\cdot 10^{-3}$ nm$^{-3}$ and $\nu_{0}^{B}=0.245$ nm$^{-3}$. For system A the authors also report independent estimates for $R_{0}$ ($R_{0}^{A}=8.1$ nm) and $r_{\rm max}$ ($r_{\rm max}^{A}=82$ nm). For system A we show fitting results obtained either by fixing $r_{\rm max}$ and using $R_{0}$ as a fitting parameter or by leaving both quantities as fitting parameters. For system B we do the latter. For system A we obtain $R_{0}^{\rm FJC}=7.46$ nm and $R_{0}^{\rm WLC}=7.2$ nm for the one-parameter fits, and $R_{0}^{\rm FJC}=5.08$ nm, $r_{\rm max}^{\rm FJC}=42.75$ nm and $R_{0}^{\rm WLC}=7.2$ nm, $r_{\rm max}^{\rm WLC}=63.87$ nm for the two-parameter fits. For system B we obtain unphysical values ($R_{0}^{\rm FJC}=0.11$ nm, $r_{\rm max}^{\rm FJC}=0.65$ nm and $R_{0}^{\rm WLC}=0.17$ nm, $r_{\rm max}^{\rm WLC}=0.96$ nm). The results do not improve if we restrict the fitting range to narrower $Q$-ranges.

We thank F. Sciortino and D. Truzzolillo for helpful discussions and J. P. Gong, T. Nakajima and T. Matsuda for sharing the data reported in Fig. 6d-e. We acknowledge financial support from the European Research Council (ERC Consolidator Grant 681597, MIMIC). W.
Kob is senior member of the Institut universitaire de France. ## References * Treloar 1975 Treloar, L. R. G. _The physics of rubber elasticity_ ; Oxford University Press, USA, 1975 * Treloar 1943 Treloar, L. The elasticity of a network of long-chain molecules I. _Transactions of the Faraday Society_ 1943, _39_ , 36–41 * Treloar 1943 Treloar, L. The elasticity of a network of long-chain molecules II. _Transactions of the Faraday Society_ 1943, _39_ , 241–246 * Flory and Rehner Jr 1943 Flory, P. J.; Rehner Jr, J. Statistical mechanics of cross-linked polymer networks I. Rubberlike elasticity. _The Journal of Chemical Physics_ 1943, _11_ , 512–520 * Flory and Rehner 1943 Flory, P.; Rehner, J. Statistical mechanics of cross-linked polymer networks II. Swelling. _Chem. Phys_ 1943, _11_ , 521–526 * Flory 1985 Flory, P. J. Molecular theory of rubber elasticity. _Polymer journal_ 1985, _17_ , 1 * Broedersz and MacKintosh 2014 Broedersz, C. P.; MacKintosh, F. C. Modeling semiflexible polymer networks. _Reviews of Modern Physics_ 2014, _86_ , 995 * Grest and Kremer 1990 Grest, G. S.; Kremer, K. Statistical properties of random cross-linked rubbers. _Macromolecules_ 1990, _23_ , 4994–5000 * Duering et al. 1991 Duering, E. R.; Kremer, K.; Grest, G. S. Relaxation of randomly cross-linked polymer melts. _Physical Review Letters_ 1991, _67_ , 3531 * Grest et al. 1992 Grest, G.; Kremer, K.; Duering, E. Kinetics of end crosslinking in dense polymer melts. _EPL (Europhysics Letters)_ 1992, _19_ , 195 * Duering et al. 1994 Duering, E. R.; Kremer, K.; Grest, G. S. Structure and relaxation of end-linked polymer networks. _The Journal of Chemical Physics_ 1994, _101_ , 8169–8192 * Everaers and Kremer 1996 Everaers, R.; Kremer, K. Topological interactions in model polymer networks. _Physical Review E_ 1996, _53_ , R37 * Kenkare et al. 1998 Kenkare, N.; Smith, S.; Hall, C.; Khan, S. Discontinuous molecular dynamics studies of end-linked polymer networks. 
_Macromolecules_ 1998, _31_ , 5861–5879 * Everaers 1999 Everaers, R. Entanglement effects in defect-free model polymer networks. _New Journal of Physics_ 1999, _1_ , 12 * Kenkare et al. 2000 Kenkare, N.; Hall, C.; Khan, S. Theory and simulation of the swelling of polymer gels. _The Journal of Chemical Physics_ 2000, _113_ , 404–418 * Svaneborg et al. 2008 Svaneborg, C.; Everaers, R.; Grest, G. S.; Curro, J. G. Connectivity and entanglement stress contributions in strained polymer networks. _Macromolecules_ 2008, _41_ , 4920–4928 * Gula et al. 2020 Gula, I. A.; Karimi-Varzaneh, H. A.; Svaneborg, C. Computational Study of Cross-Link and Entanglement Contributions to the Elastic Properties of Model PDMS Networks. _Macromolecules_ 2020, _53_ , 6907–6927 * Rubinstein and Panyukov 1997 Rubinstein, M.; Panyukov, S. Nonaffine deformation and elasticity of polymer networks. _Macromolecules_ 1997, _30_ , 8036–8044 * Rubinstein and Panyukov 2002 Rubinstein, M.; Panyukov, S. Elasticity of polymer networks. _Macromolecules_ 2002, _35_ , 6670–6686 * Mark 2007 Mark, J. E. _Physical properties of polymers handbook_ ; Springer, 2007; Vol. 1076 * Higgs and Ball 1988 Higgs, P.; Ball, R. Polydisperse polymer networks: elasticity, orientational properties, and small angle neutron scattering. _Journal de Physique_ 1988, _49_ , 1785–1811 * Rubinstein and Colby 2003 Rubinstein, M.; Colby, R. H. _Polymer physics_ ; Oxford University Press New York, 2003 * Everaers and Kremer 1995 Everaers, R.; Kremer, K. Test of the foundations of classical rubber elasticity. _Macromolecules_ 1995, _28_ , 7291–7294 * Escobedo and de Pablo 1997 Escobedo, F. A.; de Pablo, J. J. Simulation and theory of the swelling of athermal gels. _The Journal of Chemical Physics_ 1997, _106_ , 793–810 * Escobedo and De Pablo 1999 Escobedo, F. A.; De Pablo, J. J. Molecular simulation of polymeric networks and gels: phase behavior and swelling. _Physics reports_ 1999, _318_ , 85–112 * Richbourg and Peppas 2020 Richbourg, N. 
R.; Peppas, N. A. The swollen polymer network hypothesis: Quantitative models of hydrogel swelling, stiffness, and solute transport. _Progress in Polymer Science_ 2020, 101243 * Gnan et al. 2017 Gnan, N.; Rovigatti, L.; Bergman, M.; Zaccarelli, E. In silico synthesis of microgel particles. _Macromolecules_ 2017, _50_ , 8777–8786 * Ninarello et al. 2019 Ninarello, A.; Crassous, J. J.; Paloli, D.; Camerin, F.; Gnan, N.; Rovigatti, L.; Schurtenberger, P.; Zaccarelli, E. Modeling Microgels with a Controlled Structure across the Volume Phase Transition. _Macromolecules_ 2019, _52_ , 7584–7592, DOI: 10.1021/acs.macromol.9b01122 * Rovigatti et al. 2019 Rovigatti, L.; Gnan, N.; Tavagnacco, L.; Moreno, A. J.; Zaccarelli, E. Numerical modelling of non-ionic microgels: an overview. _Soft Matter_ 2019, _15_ , 1108–1119 * Hoshino et al. 2018 Hoshino, K.-i.; Nakajima, T.; Matsuda, T.; Sakai, T.; Gong, J. P. Network elasticity of a model hydrogel as a function of swelling ratio: from shrinking to extreme swelling states. _Soft Matter_ 2018, _14_ , 9693–9701 * Matsuda et al. 2019 Matsuda, T.; Nakajima, T.; Gong, J. P. Fabrication of tough and stretchable hybrid double-network elastomers using ionic dissociation of polyelectrolyte in nonaqueous media. _Chemistry of Materials_ 2019, _31_ , 3766–3776 * Zhong et al. 2016 Zhong, M.; Wang, R.; Kawamoto, K.; Olsen, B. D.; Johnson, J. A. Quantifying the impact of molecular defects on polymer network elasticity. _Science_ 2016, _353_ , 1264–1268 * Lin et al. 2019 Lin, T.-S.; Wang, R.; Johnson, J. A.; Olsen, B. D. Revisiting the elasticity theory for real Gaussian phantom networks. _Macromolecules_ 2019, _52_ , 1685–1694 * Landau and Lifshitz 1970 Landau, L. D.; Lifshitz, E. M. _Course of Theoretical Physics Vol. 7: Theory of Elasticity (2nd ed.)_ ; Pergamon press, 1970 * Flory 1976 Flory, P. Statistical thermodynamics of random networks. _Proceedings of the Royal Society of London. A. 
Mathematical and Physical Sciences_ 1976, _351_ , 351–380 * Smith 1974 Smith, T. L. Modulus of tightly crosslinked polymers related to concentration and length of chains. _Journal of Polymer Science: Polymer Symposia_ 1974, _46_ , 97–114 * Jernigan and Flory 1969 Jernigan, R.; Flory, P. Distribution functions for chain molecules. _The Journal of Chemical Physics_ 1969, _50_ , 4185–4200 * James and Guth 1943 James, H. M.; Guth, E. Theory of the elastic properties of rubber. _The Journal of Chemical Physics_ 1943, _11_ , 455–481 * Tobolsky et al. 1961 Tobolsky, A.; Carlson, D.; Indictor, N. Rubber elasticity and chain configuration. _Journal of Polymer Science_ 1961, _54_ , 175–192 * Toda and Morita 2018 Toda, M.; Morita, H. Rubber elasticity of realizable ideal networks. _AIP Advances_ 2018, _8_ , 125005 * Fiasconaro and Falo 2019 Fiasconaro, A.; Falo, F. Analytical results of the extensible freely jointed chain model. _Physica A: Statistical Mechanics and its Applications_ 2019, _532_ , 121929 * Petrosyan 2017 Petrosyan, R. Improved approximations for some polymer extension models. _Rheologica Acta_ 2017, _56_ , 21–26 * Sciortino 2017 Sciortino, F. Three-body potential for simulating bond swaps in molecular dynamics. _The European Physical Journal E_ 2017, _40_ , 3 * Rovigatti et al. 2017 Rovigatti, L.; Gnan, N.; Zaccarelli, E. Internal structure and swelling behaviour of in silico microgel particles. _Journal of Physics: Condensed Matter_ 2017, _30_ , 044001 * Flory 1953 Flory, P. J. _Principles of polymer chemistry_ ; Cornell University Press, 1953 * Lang 2018 Lang, M. Elasticity of phantom model networks with cyclic defects. _ACS Macro Letters_ 2018, _7_ , 536–539 * Panyukov 2019 Panyukov, S. Loops in polymer networks. _Macromolecules_ 2019, _52_ , 4145–4153 * Kremer and Grest 1990 Kremer, K.; Grest, G. S. Dynamics of entangled linear polymer melts: A molecular-dynamics simulation. _The Journal of Chemical Physics_ 1990, _92_ , 5057–5086 * Weeks et al. 
1971 Weeks, J. D.; Chandler, D.; Andersen, H. C. Role of repulsive forces in determining the equilibrium structure of simple liquids. _The Journal of Chemical Physics_ 1971, _54_ , 5237–5247 * Martyna et al. 1992 Martyna, G. J.; Klein, M. L.; Tuckerman, M. Nosé–Hoover chains: The canonical ensemble via continuous dynamics. _The Journal of Chemical Physics_ 1992, _97_ , 2635–2643 * Plimpton 1993 Plimpton, S. _Fast parallel algorithms for short-range molecular dynamics_ ; 1993 * Doi 1996 Doi, M. _Introduction to polymer physics_ ; Oxford university press, 1996 * Gundogan et al. 2002 Gundogan, N.; Melekaslan, D.; Okay, O. Rubber elasticity of poly (N-isopropylacrylamide) gels at various charge densities. _Macromolecules_ 2002, _35_ , 5616–5622 * Horkay et al. 2000 Horkay, F.; Tasaki, I.; Basser, P. J. Osmotic swelling of polyacrylate hydrogels in physiological salt solutions. _Biomacromolecules_ 2000, _1_ , 84–90 * Panyukov 1990 Panyukov, S. Scaling theory of high elasticity. _Sov. Phys. JETP_ 1990, _71_ , 372–379 * Itagaki et al. 2010 Itagaki, H.; Kurokawa, T.; Furukawa, H.; Nakajima, T.; Katsumoto, Y.; Gong, J. P. Water-induced brittle-ductile transition of double network hydrogels. _Macromolecules_ 2010, _43_ , 9495–9500 * Fisher et al. 1977 Fisher, L.; Sochor, A.; Tan, J. Chain characteristics of poly (2-acrylamido-2-methylpropanesulfonate) polymers. 1. Light-scattering and intrinsic-viscosity studies. _Macromolecules_ 1977, _10_ , 949–954 * Freeman et al. 2020 Freeman, K. G.; Adamczyk, J.; Streletzky, K. A. Effect of Synthesis Temperature on Size, Structure, and Volume Phase Transition of Polysaccharide Microgels. _Macromolecules_ 2020, * Everaers 1998 Everaers, R. Constrained fluctuation theories of rubber elasticity: General results and an exactly solvable model. _The European Physical Journal B-Condensed Matter and Complex Systems_ 1998, _4_ , 341–350 * Graessley 1975 Graessley, W. W. Statistical mechanics of random coil networks. 
_Macromolecules_ 1975, _8_ , 186–190 * Jedynak 2015 Jedynak, R. Approximation of the inverse Langevin function revisited. _Rheologica Acta_ 2015, _54_ , 29–39

## AIV Supplemental Material

### AIV.1 Shear modulus in the affine network model

Eq. (8) was obtained under the phantom network assumption, i.e., that the coordinates of the vector $\mathbf{r}_{\lambda}=\mathbf{R}_{\lambda}+\mathbf{u}_{\lambda}$ transform according to $R_{x,{\lambda}}=\lambda R_{x,1}$ and $u_{x,\lambda}=u_{x,1}$. In this case, the fluctuation term is unaffected by the deformation. One can also assume, on the contrary, that the fluctuations deform affinely with the average end-to-end vector, i.e., $r_{x,{\lambda}}=\lambda r_{x,1}$, and analogously for the other coordinates. In this case, one obtains $g^{\rm aff}=-\frac{TR^{2}}{6V}\left[\frac{ds_{n}(\tilde{r})}{d\tilde{r}}\frac{1}{\tilde{r}}+\frac{d^{2}s_{n}(\tilde{r})}{d\tilde{r}^{2}}\right],$ (24) which results in a different Gaussian modulus: $G^{\rm{aff},G}=\frac{k_{B}T}{V}\sum_{1}^{N_{s}}\frac{\overline{r_{i}^{2}}}{n_{i}b^{2}}=\left\langle\frac{\overline{r^{2}}}{nb^{2}}\right\rangle k_{B}T\nu\equiv A^{\rm aff}k_{B}T\nu,$ (25) where the sum is, as usual, taken over the $N_{s}$ elastically-active strands. Under the assumption that the $\overline{r_{i}^{2}}$ are Gaussianly distributed (see Sec. AI in the main text) we have $\left\langle\frac{\overline{r^{2}}}{nb^{2}}\right\rangle=1.$ (26) From Eqs. (26) and (25), we get the commonly reported expression 22, 20 $G^{\rm{aff},G}=k_{B}T\nu.$ (27)

### AIV.2 Monomer mean-squared displacement during equilibration

Figure 8: The mean-squared displacement of the $C=1\%$, $\rho_{\rm init}=0.1$ sample computed at $\rho=0.1$ and $\rho=1.5$. The vertical dashed line signals the equilibration time we use.

In order to verify that the system has equilibrated correctly, we measure the monomer mean-squared displacement (MSD). In Fig.
8 we report the monomer MSD of the $C=1\%$, $\rho_{\rm init}=0.1$ sample computed at $\rho=0.1$ and $\rho=1.5$. We note that the MSD quickly reaches a plateau, signaling that the oscillation modes of all the strands have equilibrated.

### AIV.3 Density scaling of RMS equilibrium end-to-end distance

Figure 9: (a-b): RMS equilibrium end-to-end distance of the strands for $C=1\%,5\%,10\%$ for $\rho_{\rm init}=0.1,\,0.85$ and $\rho=0.5$, rescaled by (a) the inverse of the initial average distance between neighboring crosslinkers $(\rho^{\rm cl}_{\rm init})^{1/3}=(C\rho_{\rm init})^{1/3}$ and (b) by $(\gamma\rho^{\rm cl}_{\rm init})^{1/3}$, where $\gamma=0.74$ for $\rho_{\rm init}=0.85$ and $\gamma=1$ for $\rho_{\rm init}=0.1$. (c): Same quantity as in a-b, rescaled by $\rho^{1/3}$, for $C=5\%$ and for different values of $\rho$.

We report in Fig. 9a for $\rho=0.5$ the RMS equilibrium end-to-end distance of the strands, defined as $R(n)\equiv[\langle R^{2}(n)\rangle_{n}]^{1/2}$ ‡‡‡We recall that the end-to-end distance is $\mathbf{r}(t)\equiv\mathbf{R}+\mathbf{u}(t)$, and that $\overline{r^{2}}=R^{2}+\overline{u^{2}}$., where $\langle\cdot\rangle_{n}$ denotes the average over all the strands of length $n$. We note that curves for different initial densities $\rho_{\rm init}$ and crosslinker concentrations $C$ fall on the same master curve if divided by the quantity $(\rho^{\rm cl}_{\rm init})^{1/3}$, where $\rho^{\rm cl}_{\rm init}=C\rho_{\rm init}$ is the initial crosslinker density. Since this quantity represents the inverse of the initial average distance between neighboring crosslinkers, we can conclude that the initial spatial distribution of the crosslinkers completely controls the equilibrium end-to-end distance of the chains in the final state. An even better collapse can be obtained by using slightly different (heuristic) factors for the two values of the initial density we use here: Fig.
9b shows the same curves rescaled by $(\gamma\rho^{\rm cl}_{\rm init})^{1/3}$, where $\gamma=0.74$ for $\rho_{\rm init}=0.85$ and $\gamma=1$ for $\rho_{\rm init}=0.1$. We note that the same rescaling does not apply to $[\langle\overline{r^{2}(n)}\rangle_{n}]^{1/2}$, since the fluctuation term $\overline{u^{2}}$ does not follow this scaling. We also report in Fig. 9a the scaling behavior expected for Gaussian strands, i.e., $R(n)\propto n^{1/2}$ (dashed line), and the one for stretched strands, i.e., $R(n)\propto n$ (solid line). One can see that the short chains are on average stretched, and only for larger values of $n$ is the Gaussian behavior recovered. Finally, we remark that since the equilibrium end-to-end distances deform affinely with the network, $R(n)$ curves at different final densities $\rho$ collapse on the same master curve when multiplied by $\rho^{1/3}$, as shown in Fig. 9c.
AABI 2020: 3rd Symposium on Advances in Approximate Bayesian Inference, 2020

# Annealed Stein Variational Gradient Descent

Francesco D'Angelo<EMAIL_ADDRESS>Vincent Fortuin<EMAIL_ADDRESS>ETH Zürich

###### Abstract

Particle-based optimization algorithms have recently been developed as sampling methods that iteratively update a set of particles to approximate a target distribution. In particular, Stein variational gradient descent has gained attention in the approximate inference literature for its flexibility and accuracy. We empirically explore the ability of this method to sample from multi-modal distributions and focus on two important issues: (i) the inability of the particles to escape from local modes and (ii) their inefficacy in reproducing the density of the different regions. We propose an annealing schedule to solve these issues and show, through various experiments, how this simple solution leads to significant improvements in mode coverage, without invalidating any theoretical properties of the original algorithm.

## 1 Introduction

There have been many recent advances on the theoretical properties of sampling algorithms for approximate inference, which have changed our interpretation and understanding of them. Particularly worth mentioning is the work of Jordan et al. (1998), who reinterpret Markov chain Monte Carlo (MCMC) as a gradient flow of the KL divergence over the Wasserstein space of probability measures. This new formulation not only allowed for a deeper understanding of these methods but also inspired the inception of new and more efficient inference strategies. Following this direction, Liu and Wang (2016) recently proposed Stein Variational Gradient Descent (SVGD) to perform approximate Wasserstein gradient descent. This method belongs to the more general family of particle optimization variational inference (POVI) methods, where a continuous density $p(x)$ is approximated by a set of $n$ particles that evolve over time towards the target.
However, a solid understanding of its behavior in the finite particle limit, beyond the mean field convergence analysis (Duncan et al., 2019), remains elusive. What is more, there is empirical evidence that SVGD suffers from a degeneracy that compromises the particle diversity under these conditions, making the particles collapse to a small number of modes (Zhuo et al., 2018). In the following, we discuss how an annealing strategy can significantly mitigate this issue, encourage exploration of the significant modes, and yield better samples from the target density than standard SVGD.

### 1.1 Related work

Introducing an artificial temperature parameter is a common practice in many approximate inference methods. Indeed, annealing approaches have been shown to be beneficial in both sampling and optimization problems for highly non-convex objectives. In the context of Markov chain Monte Carlo, sampling at different temperatures enhances the mixing of the chains and thus allows for faster convergence (Marinari and Parisi, 1992; Geyer and Thompson, 1995). In Bayesian inference, a temperature parameter has been introduced to anneal the likelihood or the posterior of the model (e.g., Wenzel et al., 2020) to escape from poor local minima. A similar effect can be obtained in variational inference and its stochastic counterpart by tempering the KL term (Mandt et al., 2016; Huang et al., 2018; Fu et al., 2019). However, the impact of similar annealing strategies on the SVGD method has not yet been fully understood. Previous works studied the impact of stochastic annealing by introducing a sequence of noise-perturbed score functions (Chang et al., 2020), and Han and Liu (2018) proposed to perform SVGD on a series of intermediate tempered target distributions. In our work, we show that a deterministic annealing following a suitable schedule can improve the particle diversity and overcome the mode-collapse problem.
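As a rough illustration of why tempering helps (a generic sketch; the specific annealing schedule proposed in this work is introduced later in the paper), raising a bimodal density to a power $\gamma<1$ flattens it and lowers the barrier between modes, making it easier for particles to cross:

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 2001)

def mixture(x):
    """Unnormalised two-mode Gaussian mixture with unequal weights."""
    return 0.8 * np.exp(-0.5 * (x + 2.0) ** 2) + 0.2 * np.exp(-0.5 * (x - 2.0) ** 2)

p = mixture(x)
for gamma in (1.0, 0.5, 0.1):
    p_gamma = p ** gamma                      # tempered (annealed) density
    # Height of the barrier at x = 0 relative to the dominant mode:
    ratio = p_gamma[np.argmin(np.abs(x))] / p_gamma.max()
    print(gamma, round(float(ratio), 3))
```

A schedule that starts from small $\gamma$ and anneals towards $\gamma=1$ therefore lets the particles spread over all modes early on, before the full target sharpens the landscape again.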
## 2 Background Stein variational gradient descent (Liu and Wang, 2016) is a technique to perform approximate inference using a set of particles $q_{t}(x)=\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}(t)}$, with $\delta_{x_{i}}$ being the Dirac measure on particle $x_{i}$, to approximate a positive density function $p(x)$ on $\mathcal{X}\subseteq\mathbb{R}^{d}$. More precisely, SVGD is an efficient numerical technique to discretize the Wasserstein gradient flow of the Kullback-Leibler (KL) divergence functional on a new metric called the Stein geometry (Duncan et al., 2019). SVGD considers an incremental transformation given by an infinitesimal perturbation of the identity map, $\mathbf{T}(x)=x+\epsilon\phi(x)$, to move the particles from the initialization to the target. Here, $\phi(x)$ is the direction of the perturbation and $\epsilon$ the step size. The former is chosen to maximally decrease the KL divergence between the discrete density of the particles and the final target. As shown in Liu and Wang (2016), closed-form solutions can be obtained when restricting all perturbations $\phi$ to be from the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS) $\mathcal{H}^{d}=\mathcal{H}_{0}\times...\times\mathcal{H}_{0}$. Here, $\mathcal{H}_{0}$ is a scalar-valued RKHS associated with a scalar positive definite kernel $k(x,x^{\prime};h)$, and $h$ is the set of kernel hyperparameters. The direction of steepest descent that maximizes the negative gradient of the KL divergence is then given in closed form as: $\phi_{q,p}^{*}(x^{\prime})=\operatornamewithlimits{argmax}_{\phi}\bigg{\\{}-\nabla_{\epsilon}D_{KL}(q_{[\mathbf{T}]}||p)\big{|}_{\epsilon\to 0}\bigg{\\}}\propto\mathbb{E}_{x\sim q}[\mathcal{A}_{p}k(x,x^{\prime})]\,,$ (1) with $\mathcal{A}_{p}\phi(x)=\phi(x)\nabla_{x}\log p(x)^{\top}+\nabla_{x}\phi(x)$ being the Stein operator. Using this, we can build an iterative procedure that transforms the initial reference distribution $q_{0}$ to the target posterior.
Practically, we draw a set of particles $\\{x^{0}_{i}\\}_{i=1}^{n}$ with $x^{0}_{i}\sim q_{0}$ and subsequently update them using the optimal perturbation in Eq. (1): $x_{i}^{t+1}\leftarrow x_{i}^{t}+\epsilon_{t}\hat{\phi}^{*}(x_{i}^{t})\ \ \ \text{with}\ \ \ \hat{\phi}^{*}(x)=\frac{1}{n}\sum_{j=1}^{n}[\underbrace{k(x_{j}^{t},x)\nabla_{x_{j}^{t}}\log p(x_{j}^{t})}_{\text{driving force}}+\underbrace{\nabla_{x_{j}^{t}}k(x_{j}^{t},x)}_{\text{repulsive force}}]$ (2) Conceptually, the update rule attracts the particles to high-density regions of the target via the average score function (driving force in Eq. (2)) while the repulsive force pushes them away from each other. This avoids a collapse to the MAP estimate and allows a certain degree of diversity among the particles to encourage the exploration of multiple modes and a more faithful reflection of the variance of the target distribution. ### 2.1 Mode-collapse in SVGD Despite its theoretical foundations, it has been shown empirically (Zhuo et al., 2018) and theoretically (Zhang et al., 2020) that the particles in SVGD tend to collapse to a few local modes and that this effect is strongly related to the initial distribution of the particles. This issue is also clearly visible in our experiments and seems to already be relevant in low-dimensional problems. Indeed, as shown in Figure 1, it even happens in the case of a one-dimensional mixture of five Gaussians. Here, all particles, independent of the choice of the kernel bandwidth (see Appendix B), end up in the mode closest to the initialization without any possibility of escaping. Figure 1: SVGD mode-collapse. Comparison of SVGD (top) and our proposed A-SVGD (bottom). Additionally, we noticed in our experiments that how well the target mass in the different modes is reconstructed is related to the initialization, as seen in Figure 2.
In other words, even if the standard SVGD is able to capture different modes, the particles are not correctly distributed to faithfully reproduce the original mass. Instead, the majority of them tend to end up in the mode closest to their initialization. It is not clear whether this particular issue is a consequence of the mode-seeking limitation of all KL-divergence-based inference methods or of the approximation due to a finite number of particles. What is empirically evident, instead, is that a deterministic update of the samples, like the one characterizing SVGD, in combination with a random initialization, can lead to a catastrophic convergence. In comparison, other methods characterized by a stochastic update, like SGLD (Welling and Teh, 2011), can always rely on a nonzero probability of escaping from a certain mode, given by the injected noise, and consequently mitigate a bad initialization and encourage exploration. Moreover, it has been shown by Zhang et al. (2020) that injecting noise is also beneficial for the SVGD update to overcome the mode-collapse issues. Finally, it is well established that distance metrics and corresponding kernel-based methods suffer from the curse of dimensionality (Aggarwal et al., 2001; Reddi et al., 2014). That is why we should not hope for this problem to disappear in higher dimensions, but rather expect it to become even worse. ## 3 Annealed SVGD Motivated by related work that mitigates similar mode-collapse issues in MCMC methods by tempering the target density to speed up the mixing time and prevent chains from getting stuck in a single mode (Neal, 1996), we propose to introduce an annealing schedule in the SVGD update (A-SVGD). This modification keeps the deterministic nature of the method but is essential to enhance its exploration and mode coverage, and would ideally compensate for the limitations of the initialization and the finite number of particles.
We introduce an annealing parameter $\gamma(t)\in[0,1]$ depending on the current iteration step and modify the update rule from Eq. (2) in the following way: $\hat{\phi}^{*}(x)=\frac{1}{n}\sum_{j=1}^{n}[\underbrace{\gamma(t)k(x_{j}^{t},x)\nabla_{x_{j}^{t}}\log p(x_{j}^{t})}_{\text{driving force}}+\underbrace{\nabla_{x_{j}^{t}}k(x_{j}^{t},x)}_{\text{repulsive force}}]\,.$ (3) Intuitively, we can observe two phases in the time evolution of the particles by varying $\gamma$ in the interval $[0,1]$ with an appropriate schedule: The first phase is exploratory, with a predominant repulsive force that pushes the particles away from the initialization and thus allows for a good coverage of the target distribution’s support. The second phase is exploitative, where the driving force takes over and shrinks the distribution of the particles to the area around the different modes. From a statistical perspective, our modification corresponds to the introduction of a temperature parameter $T(t)=\frac{1}{\gamma(t)}$ which rescales the target distribution to $p(x)^{\frac{1}{T(t)}}$ during the evolution of the approximating density. It is important to notice that the choice of the annealing schedule is fundamental to preserve the convergence properties of SVGD and to keep the final target density unchanged. We ensure this by formulating the annealing schedule in such a way that the final iterations are always performed for the true target density, that is, $\lim_{t\to\infty}\gamma(t)=1$. From this point of view, our method is formally equivalent to a better parametrization of the initial reference distribution $q_{0}$ of the particles that depends on the target distribution. This is because, even in the exploratory phase, when the repulsion is dominant, a small component of the driving force remains, ensuring that the particles are not randomly driven far away from the initialization but are still driven towards high-density regions.
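To make the two phases concrete, here is a minimal NumPy sketch of the annealed update in Eq. (3); function and variable names are ours, not the authors' code. Setting $\gamma=1$ at every step recovers the standard SVGD update of Eq. (2).

```python
import numpy as np

def rbf_kernel(x, y, h):
    """RBF kernel k(x, y) = exp(-||x - y||^2 / h) and its gradient w.r.t. x."""
    diff = x - y
    k = np.exp(-np.sum(diff ** 2) / h)
    grad_x = -2.0 / h * diff * k  # gradient with respect to the first argument
    return k, grad_x

def annealed_svgd_step(particles, score, gamma, eps=0.1, h=1.0):
    """One A-SVGD update (Eq. 3): gamma rescales the driving force only.

    particles: (n, d) array; score(x) returns grad_x log p(x) for one point x.
    With gamma = 1 this is the standard SVGD update (Eq. 2).
    """
    n, _ = particles.shape
    phi = np.zeros_like(particles)
    for i in range(n):                # update direction for particle x_i
        for j in range(n):
            k, grad_k = rbf_kernel(particles[j], particles[i], h)
            phi[i] += gamma * k * score(particles[j])  # driving force
            phi[i] += grad_k                           # repulsive force
    return particles + eps * phi / n
```

An annealing run would simply call this step with $\gamma(t)$ increasing towards 1 according to one of the schedules of Section 3.1.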
Figure 2: Mode covering of SVGD. We compare the final stationary distribution of standard SVGD (green) from two different initializations (black) and A-SVGD (blue) to approximate a mixture of Gaussians (red). ### 3.1 The annealing schedule As mentioned before, the choice of the annealing schedule is fundamental for the convergence to multiple modes while yielding samples from the proper target distribution. For this purpose, we introduced and tested different annealing schedules, as shown in Figure 3. The simplest idea is a linear annealing on the interval $[0,1]$; however, our experiments showed that this choice is not optimal. Indeed, linear tempering of the density leads to slow particle dynamics that are beneficial to neither the exploration nor the convergence. For this reason, we chose to make the transition between the two inference phases steeper, so that the majority of the evolution happens in one of the two phases and not in between. To do so, we construct the annealing schedule using the hyperbolic tangent: $\gamma(t)=\tanh\big{[}(1.3\frac{t}{T})^{p}\big{]}$, with $t$ being the current time step and $T$ the total number of steps. Despite its good exploration, this method might favor modes very far from the initialization due to the flattening of the target density in the initial exploratory phase. To achieve a tradeoff and obtain a more “target-guided” exploration, we follow the cyclical annealing schedule idea proposed in Loshchilov and Hutter (2016) and Huang et al. (2017), which has already been used for MCMC sampling in Zhang et al. (2019). We adapted this technique to have a sequence of $C$ cycles of exploratory and converging phases, obtaining the following expression: $\gamma(t)=\bigg{(}\frac{\operatorname{mod}(t,T/C)}{T/C}\bigg{)}^{p}\,,$ (4) with $T$ being the total number of time steps, $C$ the number of cycles, and $p$ an exponent determining the speed of the transition between the two phases (as shown in Figure 3). Figure 3: Annealing schedules.
An illustration of the proposed annealing schedules. ## 4 Experiments We demonstrate the advantages of our method in several synthetic experiments. In all the experiments we used SVGD with a standard RBF kernel. #### Univariate Gaussian mixture. We first assessed the ability of A-SVGD to sample from a multi-modal univariate distribution given by a mixture of five Gaussians. The step size was fixed to $\epsilon=0.1$ and we used the hyperbolic annealing schedule. We see in Figure 1 that our proposed method successfully covers all the modes, while the standard SVGD collapses to just a single mode. #### Bivariate regular Gaussian mixture. Secondly, we tested our method on a 2D mixture of 16 Gaussians with means equally distributed on a $4\times 4$ grid and standard deviation $\sigma=0.5$. In this experiment we used the cyclical annealing schedule from Eq. (4). As reported in Figure 4, we observe that the standard SVGD gets trapped in the four modes neighboring the initialization. In contrast, our method is able to find and characterize all modes, independently of the initial position. #### Bivariate irregular Gaussian mixture. In our last experiment we studied the ability of SVGD to reproduce the weights of the components of a 2D Gaussian mixture. The cyclical annealing schedule is used for A-SVGD and, for the standard SVGD, we show two different initializations to illustrate their impact. We see in Figure 4 that our proposed method not only covers all the modes, but also approximately recovers their mixture weights, while the standard SVGD again collapses to the modes that are closest to its initialization. #### Additional results. We provide additional results and ablation studies in the appendix. We show in Appendix A that even when the particles are initialized exactly in one mode of the Gaussian mixture, A-SVGD can cover all modes, while the standard SVGD stays trapped in that single mode.
Moreover, we show in Appendix B that changing the bandwidth of the RBF kernel cannot overcome the mode collapse, in contrast to our proposed annealing. Appendix C shows that the cyclical annealing scheme generally outperforms the others, and Appendix D confirms our findings in high-dimensional settings as well. Figure 4: SVGD on multi-modal 2D data. We show the final samples of SVGD with annealing (right) and without (left) starting from the same initial distribution (white dots). ## 5 Conclusion In this work, we discussed the mode-collapse issue of SVGD for approximate inference and proposed an annealing strategy to overcome these limitations. We illustrated the impact of the initialization on a deterministic sampling algorithm like SVGD, highlighting two major drawbacks, namely (i) a tendency of the particles to fall into the neighboring modes without any possibility of escape and (ii) the difficulty of the particles in reproducing the effective local density of any given mode. We found that the introduction of a temperature parameter and an annealing schedule can help alleviate these undesirable behaviors, leading to better samples that can effectively capture multi-modal densities. ## References * Aggarwal et al. (2001) Charu C Aggarwal, Alexander Hinneburg, and Daniel A Keim. On the surprising behavior of distance metrics in high dimensional space. In _International conference on database theory_ , pages 420–434. Springer, 2001. * Chang et al. (2020) Wei-Cheng Chang, Chun-Liang Li, Youssef Mroueh, and Yiming Yang. Kernel stein generative modeling. _arXiv preprint arXiv:2007.03074_ , 2020. * Duncan et al. (2019) A Duncan, Nikolas Nuesken, and Lukasz Szpruch. On the geometry of stein variational gradient descent. _arXiv preprint arXiv:1912.00894_ , 2019. * Fu et al. (2019) Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. Cyclical annealing schedule: A simple approach to mitigating kl vanishing.
_arXiv preprint arXiv:1903.10145_ , 2019. * Geyer and Thompson (1995) Charles J Geyer and Elizabeth A Thompson. Annealing markov chain monte carlo with applications to ancestral inference. _Journal of the American Statistical Association_ , 90(431):909–920, 1995. * Gretton et al. (2012) Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. _The Journal of Machine Learning Research_ , 13(1):723–773, 2012. * Han and Liu (2018) Jun Han and Qiang Liu. Stein variational gradient descent without gradient. In _International Conference on Machine Learning_ , pages 1900–1908. PMLR, 2018. * Huang et al. (2018) Chin-Wei Huang, Shawn Tan, Alexandre Lacoste, and Aaron C Courville. Improving explorability in variational inference with annealed variational objectives. In _Advances in Neural Information Processing Systems_ , pages 9701–9711, 2018. * Huang et al. (2017) Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. Snapshot ensembles: Train 1, get m for free. _arXiv preprint arXiv:1704.00109_ , 2017. * Jordan et al. (1998) Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the fokker–planck equation. _SIAM journal on mathematical analysis_ , 29(1):1–17, 1998. * Liu and Wang (2016) Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose bayesian inference algorithm. In _Advances in neural information processing systems_ , pages 2378–2386, 2016. * Loshchilov and Hutter (2016) Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. _arXiv preprint arXiv:1608.03983_ , 2016. * Mandt et al. (2016) Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, and David Blei. Variational tempering. In _Artificial Intelligence and Statistics_ , pages 704–712, 2016\. * Marinari and Parisi (1992) Enzo Marinari and Giorgio Parisi. Simulated tempering: a new monte carlo scheme. 
_EPL (Europhysics Letters)_ , 19(6):451, 1992. * Neal (1996) Radford M Neal. Sampling from multimodal distributions using tempered transitions. _Statistics and computing_ , 6(4):353–366, 1996. * Reddi et al. (2014) Sashank J Reddi, Aaditya Ramdas, Barnabás Póczos, Aarti Singh, and Larry Wasserman. On the decreasing power of kernel and distance based nonparametric hypothesis tests in high dimensions. _arXiv preprint arXiv:1406.2083_ , 2014. * Welling and Teh (2011) Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In _Proceedings of the 28th international conference on machine learning (ICML-11)_ , pages 681–688, 2011. * Wenzel et al. (2020) Florian Wenzel, Kevin Roth, Bastiaan S Veeling, Jakub Światkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, and Sebastian Nowozin. How good is the bayes posterior in deep neural networks really? _arXiv preprint arXiv:2002.02405_ , 2020. * Zhang et al. (2020) Jianyi Zhang, Ruiyi Zhang, Lawrence Carin, and Changyou Chen. Stochastic particle-optimization sampling and the non-asymptotic convergence theory. In _International Conference on Artificial Intelligence and Statistics_ , pages 1877–1887, 2020. * Zhang et al. (2019) Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, and Andrew Gordon Wilson. Cyclical stochastic gradient mcmc for bayesian deep learning. _arXiv preprint arXiv:1902.03932_ , 2019. * Zhuo et al. (2018) Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, and Bo Zhang. Message passing stein variational gradient descent. In _International Conference on Machine Learning_ , pages 6018–6027. PMLR, 2018. ## Appendix A Additional results on multi-modal data As an extension of the multi-modal synthetic experiments presented in Section 4, we present here the extreme case in which the initialization is exactly in one of the modes of the target distribution.
As shown in Figure A.1, the SVGD with annealing is remarkably able to escape from the initialization, covering all the modes characterizing the target density. On the other hand, the standard SVGD is not able to model anything besides the mode in which the particles have been initialized. Figure A.1: SVGD on multi-modal data with central initialization. We show the final samples of SVGD with annealing (right) and without (left) starting from the same initial distribution (white dots). In this particular case the particles are initialized in the central mode. This particular initialization perfectly shows how the standard SVGD is not able to efficiently cover the entire target density, but instead remains trapped in the initialization mode. ## Appendix B Different RBF kernel bandwidths We compared the effect of different bandwidths for the RBF kernel to assess if this parameter affects the issues illustrated in Section 2.1. We used the synthetic multi-modal data from Figure 2 to test the following bandwidth values $h\in\\{0.001,0.01,0.1,1,10,100,\text{median}\\}$, where “median” is the median heuristic. As illustrated in Figure B.1, none of the bandwidths shows significant improvement and, in contrast to our A-SVGD, the inference is still limited to the neighboring modes. Figure B.1: Mode covering of SVGD for different bandwidths. We compare the final stationary distribution of standard SVGD (green) using different bandwidths (h). ## Appendix C Different annealing schedules We compared the different annealing schedules proposed in Section 3.1 on the bivariate irregular Gaussian mixture, as reported in Figure C.1. We also computed the maximum mean discrepancy (MMD) (Gretton et al., 2012) during the evolution of the particles, as reported in Figure C.2.
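The MMD used in Appendices C and D measures the discrepancy between the particle set and samples from the target. A minimal sketch of the (biased) squared-MMD estimator of Gretton et al. (2012) with an RBF kernel, assuming samples are stored as rows (the helper name is ours):

```python
import numpy as np

def mmd2_rbf(X, Y, h=1.0):
    """Biased estimator of MMD^2 with an RBF kernel k(a, b) = exp(-||a-b||^2/h).

    X: (n, d) samples from one distribution, Y: (m, d) from the other.
    MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    """
    def gram(A, B):
        # squared pairwise distances via the expansion ||a-b||^2 = a.a + b.b - 2 a.b
        sq = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / h)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()
```

The estimate is zero for identical sample sets and grows as the two empirical distributions diverge, which is what Figures C.2 and D.1 track over the iterations.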
Figure C.1: Mode covering of different annealing schedules. We compare the final stationary distribution of standard SVGD (green) and A-SVGD (blue) using the three different annealing schedules to approximate a mixture of Gaussians (red). Figure C.2: MMD for different annealing schedules. We compute the MMD during the evolution of the particles for the three different proposed annealing schedules. ## Appendix D High-dimensional experiment To assess the capability of the annealed SVGD in high-dimensional settings, we studied its ability to produce samples from a mixture of 5 Gaussians in $d=100$ dimensions. We sampled the means of the 5 components of the mixture according to $\mu_{i}\sim\mathcal{N}(0,4\mathbb{I})$ so that they would be distributed on the annular region at distance $\propto\sqrt{d}$ from the origin. We initialized $5000$ particles by sampling from $x_{0}\sim\mathcal{N}(0,\mathbb{I})$ and evolved them for $120k$ iterations with a step size of $\epsilon=0.3$. We evaluate the convergence of the particles to the target distribution by measuring the MMD every 100 iterations for the different schedules. Figure D.1: MMD for 100 dimensional Gaussian mixture. We compute the MMD during the evolution of the particles for the three different proposed annealing schedules when the target distribution is a mixture of five Gaussians in 100 dimensions. Additionally, we studied the coverage of the different modes by measuring the distance of the samples from them. Thereby we can determine whether or not the particles are only collapsing into the mode closest to the initialization. Moreover, we can establish whether they are able to faithfully reproduce the distance of the samples from the modes. The results are reported in Figure D.1. Interestingly, the standard SVGD completely fails to model this high-dimensional distribution: due to the collapse to a single mode, the MMD increases during the iterations, converging to a unimodal final distribution.
Conversely, the introduction of an annealing schedule, in particular the hyperbolic and cyclical ones, allows for a more efficient exploration that avoids collapsing to a single mode and thus decreases the MMD. Additionally, the histograms reported in Figure D.2 show the distribution of the distances of the particles from the single modes and highlight that for standard SVGD and linear annealing, the particles indeed collapse to a single mode, while for hyperbolic and cyclical annealing, they cover all the modes. Figure D.2: Distances from the modes. We compare the distance of the samples to the modes for SVGD with and without annealing when the target is a 100 dimensional Gaussian mixture.
# Encrypted Internet traffic classification using a supervised Spiking Neural Network Ali Rasteh3 (e-mail: <EMAIL_ADDRESS>), Florian Delpech1, Carlos Aguilar-Melchor1, Romain Zimmer2, Saeed Bagheri Shouraki3, Timothée Masquelier2 1Institut Supérieur de l’Aéronautique et de l’Espace (ISAE-SUPAERO), University of Toulouse, France ###### Abstract Internet traffic recognition is an essential tool for access providers, since recognizing the traffic categories of the different data packets transmitted on a network helps them define adapted priorities. That means, for instance, high priority requirements for an audio conference and low ones for a file transfer, to enhance user experience. As internet traffic becomes increasingly encrypted, the mainstream classic traffic recognition technique, payload inspection, is rendered ineffective. This paper uses machine learning techniques for encrypted traffic classification, looking only at packet size and time of arrival. Spiking neural networks (SNN), largely inspired by how biological neurons operate, were used for two reasons. Firstly, they are able to recognize time-related data packet features. Secondly, they can be implemented efficiently on neuromorphic hardware with a low energy footprint. Here we used a very simple feedforward SNN, with only one fully-connected hidden layer, trained in a supervised manner using the newly introduced method known as Surrogate Gradient Learning.
Surprisingly, such a simple SNN reached an accuracy of 95.9% on ISCX datasets, outperforming previous approaches. Besides better accuracy, there is also a very significant improvement in simplicity: input size, number of neurons, and trainable parameters are all reduced by one to four orders of magnitude. Next, we analyzed the reasons for this good accuracy. It turns out that, beyond spatial (i.e. packet size) features, the SNN also exploits temporal ones, mostly the nearly synchronous (i.e., within a 200 ms range) arrival times of packets with certain sizes. Taken together, these results show that SNNs are an excellent fit for encrypted internet traffic classification: they can be more accurate than conventional artificial neural networks (ANN), and they could be implemented efficiently on low power embedded systems. ## 1 Introduction Today, more than half of the global population uses the Internet. The amount of data exchanged through it increases endlessly each year, giving people ever more access to information. Whether it be sending emails, downloading or uploading files, watching videos on streaming platforms, chatting on social media, etc., all these operations involve devices interacting with computer servers located around the world. To improve the integrity of traffic flows, a protocol stack named TCP/IP is used (see e.g. [20] chap. 5). When data are exchanged via this protocol stack, they are broken down into different packets. Traffic classification consists of associating traffic flows, comprised of data packets, with the categories corresponding to different use cases (e.g. email, chat, file transfer, etc.). Increasing demands to protect privacy have led to the development of methods to encrypt traffic and of different ways to navigate on the Internet anonymously, such as The Onion Router (Tor) [5] or Virtual Private Networks (VPN) (see e.g. [24]).
These techniques encrypt the whole original packets (payload and headers) into a new encrypted payload, to which a new header is added to ensure traffic handling in the VPN or Tor network. Thus neither the original payload nor the original header is available for inspection, and the only features that can be used for classification are the size of the encrypted packet and the time at which the packet is captured. With only these features, packet inspection is not possible anymore, and statistical or machine learning techniques must be used for traffic classification. The state-of-the-art results for such an approach to traffic classification are based on ANNs. There are different types of neural networks that are used in different contexts: * • Convolutional Neural Networks (CNNs) [10] are mainly employed for image recognition, and have a structure inspired by the primate’s visual cortex. * • Recurrent Neural Networks (RNNs) [23] are used for automatic speech or text recognition, and take into account the temporal dimension by interconnecting neurons to form cycles in the networks. * • Spiking Neural Networks (SNNs) [15] are aimed at imitating biological neural networks. A neuron fires an electrical impulse called a “spike” when its membrane potential reaches a specific value, leading to a signal composed of spikes that propagates on to the next neurons. They are very interesting for recognizing time-related patterns. The main challenge of SNNs is that their binary nature prevents the use of classic training methods like back-propagating gradient descent [18]. To solve this, most current methods are rate-based, meaning that they only consider the number of spikes inside a temporal window, ignoring their times. This is a severe limitation, as different spike trains can have the same number of spikes but present distinct temporal patterns. To solve this problem, different alternatives have been developed. For instance, Neftci et al.
2019 [18] proposed a surrogate gradient algorithm with a more local approach. It consists of approximating the true gradient of the loss function while keeping good optimization properties. ### 1.1 Contributions In this paper we show that SNNs lead to very promising results for the recognition of encrypted traffic. We beat the state of the art [25] on almost every accuracy, precision, and recall metric. The average accuracy for Tor traffic rises from 67.8% to 98.6%, and for unencrypted traffic from 85% to 99.4%. For VPN traffic it rises from 98.4% to 99.8%. The number of errors is thus divided by a factor of 8 for VPN traffic and by a factor of 20 for Tor and unencrypted traffic. It is important to note that for VPN traffic the hardest situation (browsing versus chat traffic) is not present, as the dataset used by [25] has no VPN browsing data. One-versus-all experiments show that SNNs lead to a very significant improvement on all the problematic categorizations, such as File Transfer for Tor (increased from 55.8% to 99.7%), Browsing for Tor (from 90.6% to 99.0%), Browsing for unencrypted (90.6% to 99.4%) and Chat for Tor (89% to 99.1%). These results are obtained with: simpler inputs (300x300 images versus 1500x1500 images), a simpler network (one hidden layer of 100 neurons versus a six-hidden-layer LeNet-5 CNN with one million neurons), fewer trainable parameters (31.5k versus 300k) and a higher testing data over training data ratio (20/80 versus 10/90). Our approach is anything but fancy; the merit is due to the superior capabilities of SNNs to detect and exploit time-related features. In Section 5 we study this with some experiments that highlight that short-term synchronicity detection seems to be an important feature for traffic categorization, and SNNs naturally excel at this. ### 1.2 Outline The rest of this paper is organized as follows. Section 2 gives a brief introduction to related works in the field of internet traffic classification.
Section 3 gives a complete description of the proposed model for the classification of internet traffic using an SNN trained with the supervised surrogate gradient method. Sections 4 and 5 present experiments and the corresponding results, which investigate the power of the proposed model to classify internet traffic data. Conclusions and future work are presented in Section 6. ## 2 Related works The mainstream techniques for classifying unencrypted traffic, port-based classification and deep packet inspection [4], as well as part of the alternative statistical and machine learning techniques, are unusable for encrypted traffic as they use features that are only available before encryption. Such related work will therefore not be described here. There are two usual classification problems: traffic categorization (or characterization), which considers types of traffic such as VoIP, File Transfer, Video, Browsing, Chat, P2P, etc.; and application identification, which considers given applications such as Kazaa, BitTorrent, Google Hangouts Chat, AIM, WhatsApp Web, etc. In this work we focus on traffic categorization and leave application identification for future work. There is a rich literature on traffic categorization through machine learning methods (statistics based or neural network based). Multiple statistical approaches using features compatible with encryption (such as packet times and sizes) have led to scientific publications: fingerprinting [3, 29, 21], Bayes [16, 17, 7, 1], k-nearest neighbor [6, 31], decision trees [6, 34], bag of words [34, 33] and cross-entropy similarity [21]. Similarly, multiple approaches have been studied using different types of neural networks, such as: convolutional neural networks [30, 28, 13, 12, 2], recurrent neural networks [12], probabilistic neural networks [27] and stacked auto-encoders [30, 13].
Regarding accuracy and precision over encrypted traffic, the state-of-the-art results are the Deep Packet approach [14] from 2018 (published in [13]) and the Flowpic approach [25] from 2019. Both build on the works of Draper-Gil et al. [6] and Lashkari et al. [9], which set up datasets for traffic categorization, the former containing both VPN and unencrypted traffic, and the latter containing both Tor and unencrypted traffic. In [13] each packet (header and payload) is converted to a normalized byte sequence, and used as an input for a 1-D convolutional neural network and for a set of stacked auto-encoders. Both are trained with unencrypted and VPN-encrypted traffic and, surprisingly, the resulting networks are able to do traffic categorization for both unencrypted and VPN traffic without using time-related features (but using size). Performance is quite good, with 94% average recall and 93% average precision but, unfortunately, for the unencrypted vs Tor dataset there is an important drop on both average recall (down to 57%) and average precision (down to 44%). The Flowpic approach [25] uses all spatial and temporal information contained in a traffic flow without looking at its content. Flow packets, encrypted or not, are represented with a 2D-histogram, in the form of a square picture, with packet size on the ordinate and normalized arrival times on the abscissa. The main idea of Flowpic is that with this approach the traffic category can be derived with image recognition techniques. The images are thus fed to a LeNet-5 [11] convolutional neural network and categorization is done in a one-versus-all setting, with 97% average accuracy for unencrypted traffic, 99.7% for VPN traffic, and 85.7% for Tor traffic. Given their nice results over all three sorts of traffic, Flowpic will be our main point of comparison. ## 3 Proposed model ### 3.1 Prospects The approach used in this paper is very similar to the one employed in [25].
Histograms represent flow packets grouped by size and feed the neural network. The difference lies in the use of an SNN, instead of a CNN, to perform the classification. No previous research with that method has been reported, and the results will mainly be compared to the study presented in [25]. To ensure that this comparison is fair, we follow the same dataset generation and processing as in [25]. ### 3.2 Dataset generation To generate the data, we used two public datasets from the University of New Brunswick (UNB) called “2016 ISCX VPN-nonVPN traffic dataset” from [6] and “2017 ISCX Tor-nonTor dataset” from [9], both used in the closely related works [13, 25]. They comprise a range of packet capture (pcap) files corresponding to traffic flow captures of five main classes (Video, VoIP, Chat, File Transfer and Browsing) using three different types of encryption (unencrypted, VPN and Tor). We converted them into comma-separated values (CSV) files containing arrival times and sizes in order to exploit the data. We have not imposed any restriction on the number of packets or their size. It is important to note that there is no Browsing-VPN data in the ISCX dataset, so we have not used this type of data for training or testing our model (nor did the authors of [6, 9, 13, 25]). Introducing such traffic and making a fair comparison with other approaches when it is included is left as future work. ### 3.3 Data processing methodology The first step is to extract suitable features from these flow packets. A traffic flow can be seen as a sequence of packets that have the same source and destination IP addresses. Flows are bidirectional: forward when a message is sent from source to destination, and backward when a message or acknowledgment (ACK) is sent back from destination to source. Thus, we consider, for each exchange, data from both directions of the flow. 
We pick, as in [25], packet size and time of arrival as features, since they are always available whether the traffic is encrypted or not. Then, we split each flow into 60 s sequences, starting a new sequence every 15 s. We therefore obtain CSV files representing one-minute chunks of a communication session and containing packet sizes and times of arrival. From each CSV file we generate 2D histograms of packet sizes as a function of time of arrival, as shown in Figure 1. For one flow, we have two series of histograms, each corresponding to one direction (forward and backward), which are concatenated along the y (size) dimension. The histogram values correspond to the number of packets with the same (time, size) coordinates. In addition, the resolution of the histograms can be adjusted by increasing or decreasing the number of bins they contain. If the number of bins is too high, the memory and computation requirements for the classification training become prohibitive. Moreover, as we will associate one input line to each size bin, overfitting can become an issue: indeed, adding one size bin increases the number of trainable weights by the number of hidden neurons, here 100. If the number of bins is too low, it degrades the histogram quality and therefore the network’s ability to recognize the right traffic class. By trial and error we found that 300 time bins and 300 size bins is a good choice (whereas [25] used 1500x1500). Once generated, the histograms can be classified according to their category. Some examples of the images obtained are shown in Figure 2. ### 3.4 Labeling dataset A label is attributed to each CSV file according to its traffic category. A label is a number giving both the traffic class (Chat, Browsing, File Transfer, Video or VoIP) and the encryption method (Tor, VPN or unencrypted). For instance, label 2 corresponds to Video over VPN traffic, while label 7 corresponds to File Transfer over Tor traffic. 
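As an illustration, the binning step described above can be sketched in a few lines of Python. The function name, the 1500-byte size cap and the exact binning conventions are our own illustrative choices, not taken from the paper:

```python
def flow_to_histogram(times, sizes, window=60.0, n_time=300, n_size=300, max_size=1500.0):
    """Bin one direction of a 60 s flow chunk into a (size x time) 2D histogram.

    `times` are arrival times in seconds relative to the chunk start and
    `sizes` are packet sizes in bytes; each cell holds a packet count,
    matching the paper's 300x300 resolution.
    """
    hist = [[0] * n_time for _ in range(n_size)]
    for t, s in zip(times, sizes):
        i = min(int(s / max_size * n_size), n_size - 1)  # size bin (row)
        j = min(int(t / window * n_time), n_time - 1)    # time bin (column)
        hist[i][j] += 1
    return hist
```

The forward and backward histograms of a flow would then be concatenated along the size dimension, as described above.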
Once created, the csv files containing the histograms are distributed between groups of equal size, called batches, that will be used as input data for the neural network training. Thus, the batch size, representing how many histograms each batch contains, is a parameter to be adjusted. Increasing it avoids scattering the data, whereas decreasing it introduces stochasticity, which can be beneficial for gradient descent. Moreover, we use a weighted random sampler on our training dataset. For each class, a weight is attributed to each file, this weight being inversely proportional to the number of files in the class. This balances the batches in the training dataset and therefore improves classification performance. Figure 1: Architecture proposed for classifying Internet traffic using an SNN. Section (A): this part is similar to [25]. First, from the original pcap files in the dataset, we generate csv files containing the time of arrival and the size of each packet in each flow (forward and backward directions); we then separate these sessions into sessions of 60s with 15s of gap between them. Next, we generate two-dimensional histograms from these sessions, with time bins on the X-axis and size bins on the Y-axis, and concatenate the forward and backward traffic histograms to produce the final histogram used as input to our network. Section (B): this part describes our model. The concatenated histograms are fed to the SNN and the network is trained on them. Finally, the model is evaluated on the test dataset histograms. Figure 2: Examples of two-dimensional histograms obtained for each traffic category. The X-axis and Y-axis show time and size bins respectively. For clarity, all non-empty bins are represented in black, whatever the packet count. Note: there is no Browsing-VPN data in the ISCX dataset. 
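The inverse-frequency weighting can be sketched as follows (a minimal illustration; the function name is ours). In the paper's PyTorch setting, such weights would typically be handed to `torch.utils.data.WeightedRandomSampler`:

```python
from collections import Counter

def sample_weights(labels):
    """One weight per training file, inversely proportional to the number
    of files in its class, so that every class is drawn equally often in
    expectation when sampling with these weights."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]
```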
### 3.5 The model #### 3.5.1 Spiking Neural Network An SNN can be seen as a type of biologically inspired deep learning model, whose state evolves in time, and in which information between neurons is transmitted by spikes [18]. In most SNNs, neurons are modeled using the leaky integrate-and-fire (LIF) model, which introduces a neuron dynamic able to capture temporal patterns. In discrete time, it can be described as follows: $V[n+1]=\beta(V[n]-Reset[n])+(1-\beta)\sum_{j}w_{j}S_{j}^{in}[n]$ (1) $S^{out}[n+1]=\theta(V[n+1]-Threshold)$ (2) $Reset[n+1]=Threshold\cdot S^{out}[n]$ (3) Here $V$ is the membrane potential, $n$ the discrete time, $\beta=e^{-\frac{\Delta t}{\tau}}$ the leak coefficient (with $\tau$ the membrane time constant), $\sum_{j}w_{j}S_{j}^{in}$ the weighted input spikes, $Reset$ the reset term due to the output spikes, $S^{out}$ the output spike, and $\theta$ the Heaviside step function. Equations (2) and (3) imply that whenever the membrane potential reaches the threshold, an output spike is fired and the potential is reset. As with ANNs, an SNN can be described as a large number of neurons connected via synapses, which are represented by weights. A weight is a number that quantifies how information, here spikes, is transmitted between two neurons. Thus, training a network means adjusting the weights so that data provided as input leads to the activation of the correct output. The difference between the result provided by the network and the desired result is computed by a loss function and has to be minimized. Here, we use the cross-entropy function $H$, calculated as follows: $H(p,q)=-\sum_{i}p_{i}\log(q_{i})$ (4) where $p_{i}=1$ if the true label is $i$ and 0 otherwise, and $q_{i}$ is the predicted value of the model. The standard method to adjust the weights and train the network, which has proved to be efficient, is backpropagation. 
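A direct transcription of Eqs. (1)-(3) in plain Python (our own minimal sketch of a single LIF neuron; we read the equations so that the reset applied at step n+1 comes from the spike emitted at step n):

```python
def simulate_lif(input_spikes, w, beta, threshold=1.0):
    """Simulate one LIF neuron over discrete time.

    input_spikes: list of input spike vectors, one per time step
    w:            synaptic weights, one per input
    Returns the list of output spikes (0.0 or 1.0) per time step.
    """
    v, reset, out = 0.0, 0.0, []
    for s_in in input_spikes:
        drive = sum(wj * sj for wj, sj in zip(w, s_in))
        v = beta * (v - reset) + (1.0 - beta) * drive   # Eq. (1)
        s = 1.0 if v >= threshold else 0.0              # Eq. (2), Heaviside
        reset = threshold * s                           # Eq. (3)
        out.append(s)
    return out
```

For instance, with beta = 0.5 and a single strong synapse, two input spikes in a row produce two output spikes, after which the potential leaks back below threshold.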
It consists in updating the weights from the output layer back to the input layer of the network in a way that minimizes the loss function. As SNNs are recurrent (see Eq. 1), backpropagation through time (BPTT), which consists in unfolding the network through time before applying backpropagation, is employed (see Figure 3). Still, this method requires calculating the gradient of the activation function. In the case of SNNs, this poses an issue, because the gradient of the Heaviside function is zero everywhere except at 0, which implies that no information can flow through the activation function during backpropagation. Recent research has managed to solve this problem by replacing the true gradient with a surrogate function [18]. Here we used the derivative of a sigmoid, since a sigmoid is continuous and can approximate the Heaviside function, as seen in Figure 4. Figure 3: Backpropagation through time. This image shows the unfolded SNN, demonstrating how the gradient is backpropagated through the network using BPTT. [18] Figure 4: Surrogate gradient for the activation function. A sigmoid can be a suitable approximation of the Heaviside function. Because the step function is not differentiable (its gradient is zero almost everywhere), a surrogate gradient is needed for the backpropagation algorithm used to train the SNN: the sigmoid is taken as a smooth substitute for the Heaviside function, and its derivative is used as the surrogate gradient during backpropagation. In the forward pass the true Heaviside function is used, while in the backward pass the surrogate gradient is used. [18] One of the main assets of SNNs compared to other networks is that inference can be implemented at low power using event-driven computation on neuromorphic chips [22]. They are thus appealing for embedded systems such as satellites. Furthermore, SNNs are adapted to dynamic inputs: they detect synchronicity and other time-related features. 
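The forward/backward asymmetry can be made concrete with a small sketch (our own illustration; the steepness k of the sigmoid is an arbitrary choice): the Heaviside function is applied in the forward pass, while the backward pass multiplies incoming gradients by the derivative of a sigmoid.

```python
import math

def heaviside(x):
    """Spiking nonlinearity used in the forward pass."""
    return 1.0 if x >= 0.0 else 0.0

def surrogate_grad(x, k=10.0):
    """Surrogate used in the backward pass: the derivative of sigmoid(k*x),
    i.e. k * s * (1 - s), which is smooth, positive and peaked at x = 0."""
    s = 1.0 / (1.0 + math.exp(-k * x))
    return k * s * (1.0 - s)
```

In an autograd framework this pair is wired in as a custom function whose forward pass calls `heaviside` and whose backward pass scales the incoming gradient by `surrogate_grad`.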
The proposed architecture for traffic classification consists of two fully connected layers: one spiking dense layer made up of 100 neurons with 300 inputs, and one readout layer with 14 neurons corresponding to the different categories to classify (Figure 5). We adapted the s2net code (available at https://github.com/romainzimmer/s2net), designed to train both fully connected and convolutional SNNs using surrogate gradient learning, and based on PyTorch. This code has already given excellent results on the Google speech commands dataset [35, 19], and for epileptic seizure detection from EEG signals [26]. In our case, the network receives histogram batches as inputs and delivers a prediction vector over the 14 labels. The first layer is a dense spiking layer: the neurons’ potentials ($V$) are calculated over the 300 time steps and spikes are generated according to equation (2). The second dense layer (called the readout layer) can be seen as a spiking layer with an infinite threshold. The potential of each neuron is calculated over the 300 time steps using equation (1), and the neuron with the highest mean potential is the winner and specifies the prediction of the network for the corresponding input. Figure 5: Architecture of the proposed network. There is one hidden layer with 100 neurons and one readout layer with 14 neurons. The input histograms fed to the network have size (300, 300): 300 time bins and 300 size bins of the 2D histogram. The time-bin dimension is used as the time steps of the spiking neurons, while the size bins are used as the size of the input layer. At the readout layer (with 14 neurons), each neuron represents one traffic category, and the winner is the neuron with the highest mean potential over all time steps; the winner neuron tells which class is chosen by the network. 
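A sketch of the readout rule (our own plain-Python illustration, not the s2net code): the readout neurons integrate the hidden spikes with Eq. (1) but never fire or reset, and the class with the highest mean potential wins.

```python
def readout_predict(hidden_spikes, w_out, beta):
    """hidden_spikes: list of T hidden-layer spike vectors;
    w_out: one weight vector per output class.
    Returns the winning class index and the mean potentials."""
    n_classes = len(w_out)
    T = len(hidden_spikes)
    v = [0.0] * n_classes
    mean_v = [0.0] * n_classes
    for s in hidden_spikes:
        for c in range(n_classes):
            drive = sum(w * sj for w, sj in zip(w_out[c], s))
            v[c] = beta * v[c] + (1.0 - beta) * drive   # Eq. (1) with Reset = 0
            mean_v[c] += v[c] / T
    winner = max(range(n_classes), key=lambda c: mean_v[c])
    return winner, mean_v
```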
## 4 Experiments Our dataset has been split into three separate sets: 65% of the samples are used to train the model, 15% to validate it during the training phase, and 20% to evaluate it during the testing phase. The training phase consists of optimizing the network by adjusting its weights, but also the leak coefficients $\beta$ (we had one trainable $\beta$ per layer, as previous works suggested that training these coefficients can increase accuracy [8, 32]). The validation set is used to monitor model performance and tune hyper-parameters, whereas the testing phase evaluates the final model performance. As a result, we have extracted from the dataset independent data used only for the test; otherwise, there could be a risk of developing an over-fitted model. This would be an important issue, since it would mean the model is biased, too specific, and unable to classify new data correctly. For training, we set the batch size to 128 and used 30 epochs. Cross-entropy loss was used as the loss function. We also used the Adam optimizer along with _ReduceLROnPlateau_ as a scheduler, which reduces the learning rate when the monitored metric plateaus and thus improves gradient descent for BPTT. To avoid over-fitting, we also introduce a regularization term in the loss which penalizes solutions using a large number of spikes. Thus, even if classification may be slightly less efficient, the number of spikes remains quite low, which saves energy. This can be very significant in embedded systems, for instance. ## 5 Results As explained in Section 3.4, our model predicts both the traffic category (e.g. VoIP) and the encryption technique (e.g. VPN) at the same time, and it was trained over the whole dataset. This is in contrast with [25], where the dataset was split in three according to the encryption techniques, and different classifiers were trained for each encryption technique and for each task (“one vs all” or multi-class categorization). 
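The combined objective can be sketched as follows (our own illustration; the penalty coefficient is arbitrary): for a one-hot target, the cross-entropy of Eq. (4) reduces to minus the log of the predicted probability of the true class, to which a spike-count penalty is added.

```python
import math

def regularized_loss(q, true_label, n_spikes, reg=1e-3):
    """Cross-entropy (Eq. 4) plus a penalty on the total number of spikes,
    pushing the trained network toward sparse, low-energy activity.
    `q` holds the predicted class probabilities."""
    cross_entropy = -math.log(q[true_label])
    return cross_entropy + reg * n_spikes
```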
The task in [25] is thus simpler, as each of their three classifiers only needs to find one category among 5 (or 4 in the VPN case), whereas in our case it must find one among 14. That being said, we can compute the accuracy of our network for all the tasks used in [25], because they are sub-tasks of the tasks we solved (i.e. all performance measures can be computed from the confusion matrix of Figure 8). It is, however, important to note that not only does the proposed model outperform Flowpic at its sub-tasks, but it also solves a harder problem with a very small error rate. We first evaluated our model in one vs all classification, meaning that each class is compared to the rest of the dataset. We have used three different performance metrics. For one class, recall (Re) measures the capacity of the model to successfully retrieve the class and not miss it. Precision (Pr) measures the ability of the model to specifically retrieve the class and ensure the returned class is the right one. Accuracy (Ac) is a global score quantifying how often the model is right. $Pr=\frac{TP}{TP+FP}$ $Re=\frac{TP}{TP+FN}$ $Ac=\frac{TP+TN}{TP+TN+FP+FN}$ TP, FP, TN, FN stand for True Positive, False Positive, True Negative and False Negative respectively. For the sake of brevity, in this section we only report the accuracy measure; the other measures can be found in Appendix A. Table 1 shows the results for each category and encryption technique. It extends Table 4 in [25] with our results (we kept the same form to simplify the comparison). Note, however, that [25] also provides accuracy when using a classifier (e.g. VPN-trained) on another setting (e.g. Tor). As we only have one classifier for all the categories, we only kept for the comparison the best figures of the original table (which correspond to using the right classifier in its own setting, e.g. the VPN-trained classifier to classify VPN traffic). 
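Since every figure reported below can be derived from the confusion matrix of Figure 8, the computation can be sketched as follows (a minimal illustration; we adopt the paper's convention of predicted labels on rows and true labels on columns):

```python
def one_vs_all_metrics(cm, k):
    """Precision, recall and accuracy of class k from a confusion matrix
    given as a list of rows, with predicted labels on rows and true
    labels on columns."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    tp = cm[k][k]
    fp = sum(cm[k]) - tp                          # predicted k, true class differs
    fn = sum(cm[i][k] for i in range(n)) - tp     # true k, predicted class differs
    tn = total - tp - fp - fn
    pr = tp / (tp + fp)
    re = tp / (tp + fn)
    ac = (tp + tn) / total
    return pr, re, ac
```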
We have reached at least 99.0% accuracy in all categories, and even 100% for some. Moreover, our model outperforms [25]’s model for all categories except File Transfer-VPN, for which Flowpic performs slightly better (99.9% vs 99.8% for us). Figures 6 and 7 again compare the results of our network with [25] on categorization tasks over different traffic classes and different encryption techniques, using error rates instead of success rates. As the accuracy rates are all very close to $100\%$, it is hard to compare the relative improvements, and these graphs are probably a better indicator of the relative multiplicative drop in the number of errors obtained with our strategy. The proposed model also reached an accuracy of 96.7% on multi-class categorization of the different encryption techniques, which is better than the 88.4% reported by [25] (see their Table 3, row 7). Finally, we reached an overall accuracy of 95.9% and an average precision of 96.9% (full precision measures are available in Appendix A). Note that [25] could not report this global accuracy, because they always separated the encryption techniques. Figure 8 shows the confusion matrix. We observe some confusions, for instance between Chat Tor and Browsing Tor, or between VoIP Unencrypted and VoIP VPN. When analyzing the instances that failed, we observed that most of them contained too few packets to be classified correctly. 
Class | Work | Unencrypted | VPN | Tor
---|---|---|---|---
VoIP | Flowpic | 99.6 | 99.9 | 93.3
VoIP | This paper | 100.0 | 100.0 | 99.4
Video | Flowpic | 99.9 | 99.9 | 99.9
Video | This paper | 100.0 | 100.0 | 100.0
File Transfer | Flowpic | 98.8 | 99.9 | 55.8
File Transfer | This paper | 100.0 | 99.8 | 99.7
Chat | Flowpic | 96.2 | 99.2 | 89.0
Chat | This paper | 99.4 | 99.8 | 99.1
Browsing | Flowpic | 90.6 | - | 90.6
Browsing | This paper | 99.4 | - | 99.0
Table 1: Accuracy (%) comparison between this paper and Flowpic [25] in one vs all categorization of different traffic classes. Note that the unencrypted traffic is named Non-VPN in Flowpic [25]. As the table depicts, the results are improved for all traffic classes and encryption methods except File Transfer-VPN, for which Flowpic performs slightly better. Figure 6: Comparison of one vs all errors in categorization of different traffic classes and encryption methods between this work and Flowpic [25]. Note that in Flowpic the unencrypted traffic is named Non-VPN. The task was one vs all classification of the different traffic categories using the data from a single encryption method (Unencrypted, VPN, Tor). We plot the classification error rate (100 minus accuracy) instead of the accuracy to highlight the relative multiplicative gap between the performance of the proposed network and Flowpic [25]. As the figure depicts, this paper is better for all traffic classes and encryption methods except File Transfer-VPN, for which Flowpic performs slightly better. Figure 7: Comparison of error rates in categorization of different encryption techniques between this work and Flowpic [25]. The task was multi-class categorization between the different traffic classes of each encryption method. We outperformed Flowpic for all encryption methods. 
Figure 8: Confusion matrix of the proposed model on categorization of all classes simultaneously (including all the data from the different traffic classes and encryption methods, with 14 classes as depicted in the matrix). The classes on the rows are the predicted labels, while the classes on the columns are the true labels. The results highlight that our SNN is able to classify different types of traffic classes, under three encryption techniques, with very good performance. This is even more noticeable considering the limited amount of data needed for training and the simplicity of our network: it is fully connected with only one hidden layer, compared to the two convolutional layers plus one fully connected layer used in [25]. This can be very useful for embedded systems such as satellites, due to their low energy requirements. Finally, we performed some additional analyses to get more insight into why our SNN reaches such a good accuracy despite its simplicity. Firstly, we tried to destroy all the temporal features at inference time (after normal training). This was done by shuffling the histograms along the temporal dimension. More specifically, we randomly permuted the elements of each row of the histograms, using a different permutation for each row. This completely destroys the temporal features, but not the spatial ones (for example, if a certain sort of traffic tends to use larger packets than another, this can still be used for discrimination). Such an approach led to a total accuracy of only 90.6%, and the comparison with the accuracy in normal mode (95.9%) indicates that our SNN does exploit temporal features. Secondly, we tried to destroy all the temporal features but the synchronous ones (still at inference time, after normal training). This was done by again shuffling the histograms along the temporal dimension, but this time the same permutation was used for all the rows of a given histogram. 
This means that if multiple packets with different sizes arrived in the same time bin before the shuffling, they still arrived in the same time bin after it. Conversely, any inter-time-bin temporal information, i.e. long-range temporal correlations, was lost. Of course, this second shuffling method also preserved the spatial features. This second experiment led to an accuracy of 95.1%, almost as good as the baseline one (95.9%). This means that our SNN mostly exploits synchronous features (with a 200 ms resolution), largely ignoring longer-range correlations. Of course, this second sort of shuffling is more harmful when the resolution is increased (e.g. with 40 ms time bins the accuracy drop is about 2%, indicating that cross-correlations with a 40-200 ms lag matter). To confirm the range of useful correlations, we went back to a 200 ms resolution and analyzed the final values of the leak coefficients $\beta$ (recall that they are trainable). It turns out that, whatever the initial values, the $\beta$ of the hidden layer converges towards $\sim$ 0.42. With such a fast decay, neurons quickly forget any packet that arrived before the current time bin. In other words, neurons mostly care about synchrony (with a 200 ms resolution). In a last experiment, we forced $\beta=0$. In this extreme case, the neurons become stateless: they only care about current inputs. This led to an accuracy of 95.2%. This is consistent with the second shuffling experiment, and confirms that inter-time-bin correlations are not very informative when the time-bin size is large enough to capture short-term synchronicity. Altogether, these additional experiments confirm that the dataset contains temporal features, mostly synchrony patterns (with a 200 ms resolution), that are highly diagnostic and that are efficiently exploited by an SNN with a fast decay. 
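The two shuffling procedures can be sketched as follows (our own minimal illustration of the ablations described above; `hist` is a list of size rows over time columns):

```python
import random

rng = random.Random(0)

def shuffle_per_row(hist):
    """First ablation: each size row gets its own permutation of the time
    bins, destroying all temporal features (synchrony included)."""
    return [rng.sample(row, len(row)) for row in hist]

def shuffle_shared(hist):
    """Second ablation: one permutation shared by all rows, so packets in
    the same time bin stay together (synchrony preserved) while long-range
    temporal order is destroyed."""
    perm = rng.sample(range(len(hist[0])), len(hist[0]))
    return [[row[j] for j in perm] for row in hist]
```

Both transforms leave the spatial (size) statistics untouched; only the shared permutation keeps whole time-bin columns intact.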
## 6 Conclusion In this paper, we proposed an approach using an SNN to classify different traffic classes encrypted with different techniques (via VPN or Tor). We only considered packet size and time of arrival of the traffic flows. The input data were therefore 2D histograms representing a 60 s session in both forward and backward directions. We reached an overall accuracy of 95.93% and an average precision of 96.9% on the test dataset with a simple network made up of only one hidden layer and trained on only 65% of the dataset. Thus, we have shown that SNNs can be very relevant for Internet traffic classification, with industrial applications since SNNs can run on neuromorphic hardware with low energy consumption [22]. Finally, progress can still be made: the model performance could be enhanced by considering spiking convolutional layers, which are very promising. ## 7 Acknowledgements We thank the authors of the Flowpic paper [25], Tal Shapira and Yuval Shavitt, for sharing their processed pcap files with us. ## Appendix A Complete results In this section we present the complete performance results, whereas in Section 5 we only reported the accuracy. Complete results of one vs all categorization are presented in Table 2 for the different traffic classes, and in Table 3 for all traffic classes and encryption methods. The accuracy and average precision of the network are respectively 95.93% and 96.9% in multi-class categorization of all types of traffic (traffic classes and encryption methods simultaneously, with 14 output neurons). Finally, the average accuracies for one vs all categorization of the different traffic classes with the data of each encryption technique are reported in Table 4. 
Traffic class | Recall (%) | Precision (%) | Accuracy (%)
---|---|---|---
Browsing | 99.3 | 99.0 | 99.4
Chat | 96.4 | 94.3 | 99.4
File Transfer | 98.9 | 100.0 | 99.9
Video | 99.8 | 100.0 | 100.0
VoIP | 99.6 | 99.9 | 99.8
Table 2: Complete performance results on one vs all categorization of different traffic classes. The task was to classify each traffic class versus all other classes using all the data from the test dataset. Traffic class | Recall (%) | Precision (%) | Accuracy (%)
---|---|---|---
Browsing-Unencrypted | 99.2 | 99.5 | 99.6
Browsing-Tor | 99.5 | 96.6 | 99.8
Chat-Unencrypted | 92.5 | 88.6 | 99.6
Chat-Tor | 92.1 | 89.7 | 99.8
Chat-VPN | 100.0 | 99.2 | 100.0
File Transfer-Unencrypted | 100.0 | 100.0 | 100.0
File Transfer-Tor | 98.0 | 100.0 | 99.9
File Transfer-VPN | 97.9 | 100.0 | 100.0
Video-Unencrypted | 99.5 | 100.0 | 100.0
Video-Tor | 100.0 | 100.0 | 100.0
Video-VPN | 100.0 | 100.0 | 100.0
VoIP-Unencrypted | 84.2 | 99.7 | 96.6
VoIP-Tor | 98.3 | 100.0 | 99.9
VoIP-VPN | 99.7 | 72.1 | 96.7
Table 3: Complete performance results on one vs all categorization of different traffic classes and encryption techniques. The task was to classify each class versus all other classes on the test dataset. Traffic class | Our accuracy (%) | Flowpic’s accuracy (%)
---|---|---
Unencrypted | 99.7 | 97.0
Tor | 99.4 | 85.7
VPN | 99.9 | 99.7
Table 4: Average accuracies for one vs all categorization of different traffic classes for each encryption technique. The reported results are the averages of the results in Table 1 for each encryption technique. The corresponding Flowpic accuracies, from Table 3 (rows 4-6) in [25], are always lower. ## References * [1] Tom Auld, Andrew W Moore, and Stephen F Gull. Bayesian neural networks for internet traffic classification. IEEE Transactions on Neural Networks, 18(1):223–239, 2007. * [2] Zhitang Chen, Ke He, Jian Li, and Yanhui Geng. 
Seq2img: A sequence-to-image based approach towards ip traffic classification using convolutional neural networks. In 2017 IEEE International Conference on Big Data (Big Data), pages 1271–1276. IEEE, 2017. * [3] Manuel Crotti, Maurizio Dusi, Francesco Gringoli, and Luca Salgarelli. Traffic classification through simple statistical fingerprinting. ACM SIGCOMM Computer Communication Review, 37(1):5–16, 2007. * [4] Alberto Dainotti, Antonio Pescape, and Kimberly C Claffy. Issues and future directions in traffic classification. IEEE Network, 26(1):35–40, 2012. * [5] Roger Dingledine, Nick Mathewson, and Paul Syverson. Tor: The second-generation onion router. Technical report, Naval Research Lab Washington DC, 2004. * [6] Gerard Draper-Gil, Arash Habibi Lashkari, Mohammad Saiful Islam Mamun, and Ali A. Ghorbani. Characterization of encrypted and VPN traffic using time-related features. ICISSP 2016 - Proceedings of the 2nd International Conference on Information Systems Security and Privacy, pages 407–414, 2016. * [7] Adil Fahad, Zahir Tari, Ibrahim Khalil, Ibrahim Habib, and Hussein Alnuweiri. Toward an efficient and scalable feature selection approach for internet traffic classification. Computer Networks, 57(9):2040–2057, 2013. * [8] Wei Fang, Zhaofei Yu, Yanqi Chen, Timothée Masquelier, Tiejun Huang, and Yonghong Tian. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. arXiv, July 2020. * [9] Arash Habibi Lashkari, Gerard Draper-Gil, Mohammad Saiful Islam Mamun, and Ali A Ghorbani. Characterization of tor traffic using time based features. In ICISSP, pages 253–262, 2017. * [10] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015. * [11] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. 
* [12] Manuel Lopez-Martin, Belen Carro, Antonio Sanchez-Esguevillas, and Jaime Lloret. Network traffic classifier with convolutional and recurrent neural networks for Internet of Things. IEEE Access, 5:18042–18050, 2017. * [13] Mohammad Lotfollahi, Mahdi Jafari Siavoshani, Ramin Shirali Hossein Zade, and Mohammdsadegh Saberian. Deep packet: A novel approach for encrypted traffic classification using deep learning. Soft Computing, 24(3):1999–2012, 2020. * [14] Mohammad Lotfollahi, Ramin Shirali Hossein Zade, Mahdi Jafari Siavoshani, and Mohammdsadegh Saberian. Deep packet: A novel approach for encrypted traffic classification using deep learning, 2018. * [15] Wolfgang Maass. Networks of spiking neurons: The third generation of neural network models. Neural Networks, 10(9):1659–1671, 1997. * [16] Andrew Moore, Denis Zuev, and Michael Crogan. Discriminators for use in flow-based classification. Technical report, 2013. * [17] Andrew W Moore and Denis Zuev. Internet traffic classification using bayesian analysis techniques. In Proceedings of the 2005 ACM SIGMETRICS international conference on Measurement and modeling of computer systems, pages 50–60, 2005. * [18] Emre O. Neftci, Hesham Mostafa, and Friedemann Zenke. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 36(6):51–63, 2019. * [19] Thomas Pellegrini, Romain Zimmer, and Timothée Masquelier. Low-activity supervised convolutional spiking neural networks applied to speech commands recognition. In IEEE Spoken Language Technology Workshop, 2021. * [20] Larry L Peterson and Bruce S Davie. Computer networks: a systems approach. Elsevier, 2007. * [21] Tao Qin, Lei Wang, Zhaoli Liu, and Xiaohong Guan. Robust application identification methods for p2p and voip traffic classification in backbone networks. Knowledge-Based Systems, 82:152–162, 2015. 
* [22] Kaushik Roy, Akhilesh Jaiswal, and Priyadarshini Panda. Towards spike-based machine intelligence with neuromorphic computing. Nature, 575(7784):607–617, November 2019. * [23] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015. * [24] Charlie Scott, Paul Wolfe, and Mike Erwin. Virtual private networks. O’Reilly Media, Inc., 1999. * [25] Tal Shapira and Yuval Shavitt. FlowPic: Encrypted internet traffic classification is as easy as image recognition. INFOCOM 2019 - IEEE Conference on Computer Communications Workshops, INFOCOM WKSHPS 2019, pages 680–687, 2019. * [26] Pouya Soltani Zarrin, Romain Zimmer, Christian Wenger, and Timothée Masquelier. Epileptic seizure detection using a neuromorphic-compatible deep spiking neural network. In Lecture Notes in Computer Science, volume 12108, pages 389–394, 2020. * [27] Runyuan Sun, Bo Yang, Lizhi Peng, Zhenxiang Chen, Lei Zhang, and Shan Jing. Traffic classification using probabilistic neural networks. In 2010 Sixth International Conference on Natural Computation, volume 4, pages 1914–1919. IEEE, 2010. * [28] Wei Wang, Ming Zhu, Jinlin Wang, Xuewen Zeng, and Zhongzhen Yang. End-to-end encrypted traffic classification with one-dimensional convolution neural networks. In 2017 IEEE International Conference on Intelligence and Security Informatics (ISI), pages 43–48. IEEE, 2017. * [29] Xiaoming Wang and David J Parish. Optimised multi-stage tcp traffic classifier based on packet size distributions. In 2010 Third International Conference on Communication Theory, Reliability, and Quality of Service, pages 98–103. IEEE, 2010. * [30] Zhanyi Wang. The applications of deep learning on traffic identification. BlackHat USA, 24(11):1–10, 2015. * [31] Baris Yamansavascilar, M Amac Guvensan, A Gokhan Yavuz, and M Elif Karsligil. Application identification via network traffic classification. 
In 2017 International Conference on Computing, Networking and Communications (ICNC), pages 843–848. IEEE, 2017. * [32] Bojian Yin, Federico Corradi, and Sander M. Bohté. Effective and efficient computation with multiple-timescale spiking recurrent neural networks. arXiv, May 2020. * [33] Jun Zhang, Xiao Chen, Yang Xiang, Wanlei Zhou, and Jie Wu. Robust network traffic classification. IEEE/ACM Transactions on Networking, 23(4):1257–1270, 2014. * [34] Jun Zhang, Yang Xiang, Yu Wang, Wanlei Zhou, Yong Xiang, and Yong Guan. Network traffic classification using correlation information. IEEE Transactions on Parallel and Distributed Systems, 24(1):104–117, 2012. * [35] Romain Zimmer, Thomas Pellegrini, Srisht Fateh Singh, and Timothée Masquelier. Technical report: supervised training of convolutional spiking neural networks with PyTorch. arXiv, pages 1–24, 2019.
LTH1252

Anomalous dimensions at large charge in $d=4$ $O(N)$ theory

I. Jack<EMAIL_ADDRESS>and D.R.T. Jones<EMAIL_ADDRESS>

Dept. of Mathematical Sciences, University of Liverpool, Liverpool L69 3BX, UK

Recently it was shown that the scaling dimension of the operator $\phi^{n}$ in $\lambda(\bar{\phi}\phi)^{2}$ theory may be computed semiclassically at the Wilson-Fisher fixed point in $d=4-\epsilon$, for generic values of $\lambda n$, and this was verified to two loop order in perturbation theory at leading and subleading $n$. In subsequent work, this result was generalised to operators of fixed charge $\bar{Q}$ in $O(N)$ theory and verified up to three loops in perturbation theory at leading and subleading $\bar{Q}$. Here we extend this verification to four loops in $O(N)$ theory, once again at leading and subleading $\bar{Q}$. We also investigate the strong-coupling regime.

## 1 Introduction

Renormalizable theories with scale-invariant scalar self-interactions have been subjects of enduring interest. In particular, the study of theories with quartic ($\phi^{4}$) interactions in $d=4-\epsilon$ dimensions has played a central role in the development of the theory of critical phenomena since the pioneering work of Wilson [1, 2] and Wilson and Fisher [3] in 1971. Study of the renormalisation group flow of the coupling or couplings of these theories facilitates the determination of the order of phase transitions and the associated critical indices. For example, the theory with a single scalar field exhibits a Wilson-Fisher fixed point (FP) where the coupling constant $\lambda$ is $O(\epsilon)$, and this infra-red (IR) attractive FP is associated with a second-order phase transition. Historically, the majority of work in renormalisable quantum field theories has involved the weak-coupling expansion, in other words the Feynman diagram loop expansion. However, this expansion fails or becomes ponderous at either strong coupling or (less obviously) for $\phi^{n}$ amplitudes at large $n$.
The latter has obviously developed in importance as collider energies have increased. Remarkable progress [4, 5, 6, 7, 8, 9, 10, 11, 12] here came with the use of a semi-classical expansion in the path integral formulation of the theory. (An analogous analysis was pursued for $\phi^{6}$ theories for $d=3-\epsilon$ and $\phi^{3}$ theories for $d=6-\epsilon$ in Refs. [13, 14, 15].) In Ref. [8] the anomalous dimension of the $\phi^{n}$ operator was considered in the $O(N)$-invariant $g(\phi^{2})^{2}$ theory with an $N$-dimensional scalar multiplet $\phi$, for large $n$ and fixed $gn^{2}$. In Ref. [9] the scaling dimension of the same operator in the $U(1)$-invariant $\lambda(\bar{\phi}\phi)^{2}$ theory (corresponding to the special case $N=2$) was computed at the Wilson-Fisher fixed point $\lambda_{*}$ as a semiclassical expansion in $\lambda_{*}$, for fixed $\lambda_{*}n$. Subsequently this was generalised in Ref. [10] to the case of an operator of charge $\bar{Q}$ in the $O(N)$-invariant theory. In Ref. [9], the $U(1)$ result was compared with perturbation theory up to two loops, and in Ref. [10] the check was performed for the $O(N)$ theory up to three loops. Here we proceed directly with the $O(N)$ case, since, at least for our purposes, many salient features of the analysis are very similar in both cases; and the results for $U(1)$ may be recovered from those for $O(N)$, essentially by setting $N=2$. We extend the comparison with perturbation theory up to four loops, and also discuss the large $(g\bar{Q})$ case, generalising the large $\lambda n$ analysis of Ref. [9]. The paper is organised as follows: In Section 2 we describe the semiclassical calculation in the $O(N)$ case, following Ref. [10]. Then in Section 3 we compare the result of this calculation with perturbative calculations up to and including four loops. This represents a significant extension of previous calculations.
In Section 4 we address the large $(g\bar{Q})$ limit and compare in detail with earlier work.

## 2 The $O(N)$ case

In the $O(N)$ case we have a multiplet of fields $\phi_{i}$, $i=1\ldots N$, and the Lagrangian is ${\cal L}=\frac{1}{2}\partial^{\mu}\phi_{i}\partial_{\mu}\phi_{i}+\frac{g}{4!}(\phi_{i}\phi_{i})^{2}.$ (2.1) The $\beta$-function for this theory is well known [18], $16\pi^{2}\beta(g)=-\epsilon g+\frac{g^{2}}{3}(N+8)-\frac{g^{3}}{3}(3N+14)+{\cal O}(g^{4}),$ (2.2) and leads to an infra-red conformal fixed point $g_{*}=\frac{3\epsilon}{N+8}+\frac{9(3N+14)}{(N+8)^{3}}\epsilon^{2}+{\cal O}(\epsilon^{3}).$ (2.3) As shown in Ref. [10], the fixed-charge operator of charge $\bar{Q}$ may be taken to be $T_{\bar{Q}}=T_{i_{1}i_{2}\ldots i_{\bar{Q}}}\phi_{i_{1}}\phi_{i_{2}}\ldots\phi_{i_{\bar{Q}}},$ (2.4) where $T_{i_{1}i_{2}\ldots i_{\bar{Q}}}$ is symmetric, and traceless on any pair of indices. The scaling dimension $\Delta_{T_{\bar{Q}}}$ is expanded as $\Delta_{T_{\bar{Q}}}=\bar{Q}\left(\frac{d}{2}-1\right)+\gamma_{T_{\bar{Q}}}=\sum_{\kappa=-1}g^{\kappa}\Delta_{\kappa}(g\bar{Q}).$ (2.5) We initially work in general $d$. The semiclassical computation of $\Delta_{-1}$ and $\Delta_{0}$ is performed by mapping the theory via a Weyl transformation to a cylinder $\mathbb{R}\times S^{d-1}$, where $S^{d-1}$ is a sphere of radius $R$; the ${\cal R}\phi^{*}\phi$ term (${\cal R}$ being the Ricci curvature) generates an effective $m^{2}\phi^{*}\phi$ mass term with $m=\frac{d-2}{2R}$. This mapping process, along with other technical simplifications [9], relies on conformal invariance, and therefore we now assume that we are at the conformal fixed point in Eq. (2.3). It was shown in Ref.
[9] that stationary configurations of the action are characterised by a chemical potential $\mu$, related to the cylinder radius $R$ by $R\mu_{*}=\frac{3^{\frac{1}{3}}+\left[6g_{*}\bar{Q}+\sqrt{36(g_{*}\bar{Q})^{2}-3}\right]^{\frac{2}{3}}}{3^{\frac{2}{3}}[6g_{*}\bar{Q}+\sqrt{36(g_{*}\bar{Q})^{2}-3}]^{\frac{1}{3}}}.$ (2.6) The computation of the leading contribution $\Delta_{-1}$ is entirely analogous to the $U(1)$ case and is given by $\frac{4\Delta_{-1}(g_{*}\bar{Q})}{g_{*}\bar{Q}}=\frac{3^{\frac{2}{3}}[x+\sqrt{x^{2}-3}]^{\frac{1}{3}}}{3^{\frac{1}{3}}+[x+\sqrt{x^{2}-3}]^{\frac{2}{3}}}+\frac{3^{\frac{1}{3}}\{3^{\frac{1}{3}}+[x+\sqrt{x^{2}-3}]^{\frac{2}{3}}\}}{[x+\sqrt{x^{2}-3}]^{\frac{1}{3}}},$ (2.7) where $x=6g_{*}\bar{Q}$. Its expansion for small $g_{*}\bar{Q}$ takes the form $\frac{\Delta_{-1}(g_{*}\bar{Q})}{g_{*}}=\bar{Q}\left[1+\frac{1}{3}g_{*}\bar{Q}-\frac{2}{9}(g_{*}\bar{Q})^{2}+\frac{8}{27}(g_{*}\bar{Q})^{3}-\frac{14}{27}(g_{*}\bar{Q})^{4}+{\cal O}\left\{(g_{*}\bar{Q})^{5}\right\}\right].$ (2.8) The non-leading corrections $\Delta_{0}$ are once more given by the determinant of small fluctuations. There are two modes corresponding to those in the abelian case, with the dispersion relation $\omega_{\pm}^{2}(l)=J_{l}^{2}+3\mu^{2}-m^{2}\pm\sqrt{4J_{l}^{2}\mu^{2}+(3\mu^{2}-m^{2})^{2}},$ (2.9) where $J_{l}^{2}=\frac{l(l+d-2)}{R^{2}}$ (2.10) is the eigenvalue of the Laplacian on the sphere. In addition there are $\frac{N}{2}-1$ “Type II” (non-relativistic) [16] Goldstone modes and $\frac{N}{2}-1$ massive states with dispersion relation $\omega_{\pm\pm}(l)=\sqrt{J_{l}^{2}+\mu^{2}}\pm\mu,$ (2.11) with $J_{l}$ as defined in Eq. (2.10).
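As a quick sanity check (an illustrative script of ours, not part of the original calculation), the closed form (2.7) can be compared numerically with its small-$g_{*}\bar{Q}$ expansion (2.8). Here `t` stands for $g_{*}\bar{Q}$, restricted to $6t\geq\sqrt{3}$ so that the square root stays real:

```python
from math import sqrt

def F_closed(x):
    # Bracketed combination in Eq. (2.7); real for x >= sqrt(3).
    u = (x + sqrt(x * x - 3.0)) ** (1.0 / 3.0)
    c1, c2 = 3 ** (1 / 3), 3 ** (2 / 3)
    return c2 * u / (c1 + u * u) + c1 * (c1 + u * u) / u

def series(t):
    # Square bracket of Eq. (2.8), truncated at order t^4.
    return 1 + t / 3 - 2 * t**2 / 9 + 8 * t**3 / 27 - 14 * t**4 / 27

# Eq. (2.7) states 4*Delta_{-1}/(g_* Qbar) = F(6t), while Eq. (2.8) gives
# Delta_{-1}/g_* = Qbar * series(t); so F(6t)/4 should match series(t)
# up to O(t^5) truncation errors.
for t in (0.3, 0.35):
    assert abs(F_closed(6 * t) / 4 - series(t)) < 1e-2
```

The residual at $t=0.3$ is of order $10^{-3}$, consistent with the size of the dropped ${\cal O}(t^{5})$ term.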
We then find that $\Delta_{0}$ is given by $\Delta_{0}(g_{*}\bar{Q})=\frac{1}{2}\sum_{l=0}^{\infty}\sigma_{l},$ (2.12) where $\sigma_{l}=Rn_{l}\left\{\omega^{*}_{+}(l)+\omega^{*}_{-}(l)+\left(\frac{N}{2}-1\right)[\omega^{*}_{++}(l)+\omega^{*}_{--}(l)]\right\}.$ (2.13) Here $n_{l}=\frac{(2l+d-2)\Gamma(l+d-2)}{\Gamma(l+1)\Gamma(d-1)}$ (2.14) is the multiplicity of the Laplacian on the sphere $S^{d-1}$, and $\omega^{*}_{\pm}$, $\omega^{*}_{++}$, $\omega^{*}_{--}$ are defined as in Eqs. (2.9), (2.11) respectively, evaluated at the fixed point with $R$, $\mu_{*}$ related by Eq. (2.6). For the small $(g_{*}\bar{Q})$ computation, we need to isolate the divergent contribution in the sum in Eq. (2.12). We use the large-$l$ expansion of $\sigma_{l}$, $\sigma_{l}=\sum_{n=1}^{\infty}c_{n}l^{d-n},$ (2.15) with $c_{1}=N,\quad c_{2}=3N,\quad c_{3}=\frac{1}{2}[5N-2+(N+2)(R\mu_{*})^{2}],\quad c_{4}=\frac{1}{2}[N-2+(N+2)(R\mu_{*})^{2}],$ $c_{5}=\frac{N+8}{8}(R^{2}\mu_{*}^{2}-1)^{2}\left[-1+\left(\gamma-\frac{3}{2}\right)\epsilon\right]-\frac{29}{12}(R^{2}\mu_{*}^{2}-1)\epsilon-\left(\frac{11}{24}R^{2}\mu_{*}^{2}-\frac{1}{5}\right)N\epsilon.$ (2.16) We can write $\Delta_{0}(g_{*}\bar{Q})=-\frac{15\mu_{*}^{4}R^{4}+6\mu_{*}^{2}R^{2}-5}{16}+\frac{1}{2}\sum_{l=1}^{\infty}\overline{\sigma}_{l}+\sqrt{\frac{3\mu_{*}^{2}R^{2}-1}{2}}-\frac{1}{16}\left(\frac{N}{2}-1\right)[7-16R\mu_{*}+6R^{2}\mu_{*}^{2}+3R^{4}\mu_{*}^{4}],$ (2.17) where $\overline{\sigma}_{l}=\sigma_{l}-c_{1}l^{3}-c_{2}l^{2}-c_{3}l-c_{4}-c_{5}\frac{1}{l}.$ (2.18) Here the divergent parts have been isolated and the sums over $l$ performed, as explained in Refs. [9] and [10]. The sum over $\frac{1}{l^{d-n}}$ for $n=5$ leads to a pole in $\epsilon$ which cancels against the pole in the bare coupling.
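For orientation (again an illustrative check of ours), in $d=4$ the multiplicity (2.14) reduces to the familiar $(l+1)^{2}$ degeneracy of spherical harmonics on $S^{3}$:

```python
from math import gamma

def n_l(l, d):
    # Multiplicity of the Laplacian eigenvalue J_l^2 on S^{d-1}, Eq. (2.14).
    return (2 * l + d - 2) * gamma(l + d - 2) / (gamma(l + 1) * gamma(d - 1))

# In d = 4 this is the (l+1)^2 degeneracy on S^3.
for l in range(8):
    assert abs(n_l(l, 4) - (l + 1) ** 2) < 1e-9
```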
The sum over $\overline{\sigma}_{l}$ is then finite and, after setting $d=4$ and expanding in small $g_{*}\bar{Q}$, can be performed analytically. We obtain $\Delta_{0}=-\frac{1}{6}(10+N)g_{*}\bar{Q}+\frac{1}{18}(6-N)(g_{*}\bar{Q})^{2}+\frac{1}{27}[N-36+2(14+N)\zeta_{3}](g_{*}\bar{Q})^{3}-\frac{1}{81}[4(N-73)+2(6N+65)\zeta_{3}+5(N+30)\zeta_{5}](g_{*}\bar{Q})^{4}+\ldots$ (2.19) Adding Eqs. (2.8) and (2.19), we find [10] $\frac{\Delta_{-1}(g_{*}\bar{Q})}{g_{*}}+\Delta_{0}(g_{*}\bar{Q})=\bar{Q}+\frac{1}{6}[2\bar{Q}-(N+10)]g_{*}\bar{Q}-\frac{1}{18}[4\bar{Q}+(N-6)](g_{*}\bar{Q})^{2}+\frac{1}{27}[8\bar{Q}+N-36+2(N+14)\zeta_{3}](g_{*}\bar{Q})^{3}+\Bigl\{-\frac{14}{27}\bar{Q}-\frac{1}{81}[4(N-73)+2(6N+65)\zeta_{3}+5(N+30)\zeta_{5}]\Bigr\}(g_{*}\bar{Q})^{4}+\ldots$ (2.20)

## 3 The diagrammatic calculation

In this section we carry out the perturbative calculation to confirm the semiclassical result at leading and next-to-leading order in $\bar{Q}$ up to four-loop level, as displayed in Eq. (2.20).

Figure 1: One- and two-loop diagrams for $\gamma_{T_{\bar{Q}}}$ contributing at leading $n$ (panels (a), (b), (c)). Here and elsewhere, the lozenge denotes the $T_{\bar{Q}}$ vertex.

The one-loop contribution to $\gamma_{T_{\bar{Q}}}$ comes solely from the diagram depicted in Fig. 1(a) and is given by $\gamma^{(1)}_{T_{\bar{Q}}}=-\frac{1}{3}g\bar{Q}(1-\bar{Q}).$ (3.1) As mentioned before, the derivation of the semiclassical result relied on working at the conformal fixed point $g_{*}$. However, surprisingly, at two, three and four loops we will see that the functional forms of the semiclassical and perturbative results agree for general $g$ and not just on substitution of $g=g_{*}$ with $g_{*}$ as given in Eq. (2.3). It is only at one loop that the agreement holds solely at the fixed point.
Specifically, the leading terms $\bar{Q}\left(\frac{d}{2}-1\right)+\gamma^{(1)}_{T_{\bar{Q}}}$ on the left-hand side of Eq. (2.5) (as given in Eq. (3.1)) only agree with the ${\cal O}(g^{0})$ and ${\cal O}(g)$ terms in $\frac{\Delta_{-1}(g\bar{Q})}{g}+\Delta_{0}(g\bar{Q})$ on the right-hand side of Eq. (2.5) (as obtained from Eq. (2.20)) after substituting $g=g_{*}\approx\frac{3\epsilon}{N+8}$. In this case, specialising to the fixed point has induced a mixing between the classical and one-loop ${\cal O}(\bar{Q})$ terms. The leading ${\cal O}(\bar{Q}^{3})$ two-loop contribution to $\gamma_{T_{\bar{Q}}}$ comes purely from the diagram depicted in Fig. 1(b) (with three lines emerging from the $T_{\bar{Q}}$ vertex), while the next-to-leading ${\cal O}(\bar{Q}^{2})$ contributions are generated by this diagram together with those in Fig. 1(c) (with two lines emerging from the $T_{\bar{Q}}$ vertex). The contributions are given by $\gamma^{(2)}_{(b)}=-\frac{2}{9}g^{2}\bar{Q}(\bar{Q}-1)(\bar{Q}-2),$ (3.2) $\gamma^{(2)}_{(c)}=-\frac{1}{9}g^{2}\left(3+\frac{1}{2}N\right)\bar{Q}(\bar{Q}-1),$ (3.3) producing leading and next-to-leading terms given by $\gamma^{(2)}_{T_{\bar{Q}}}=-\frac{1}{18}(g\bar{Q})^{2}(4\bar{Q}-6+N),$ (3.4) in accord with the semiclassical results in Eq. (2.20). As emphasised earlier, this agreement holds for general $g$ and not just at the conformal fixed point. This is because at two and higher loops, in contrast to what we saw at one loop, specialising to the fixed point $g=g_{*}$ as given in Eq. (2.3) does not induce any mixing between leading or next-to-leading terms at different loop orders. Therefore, if Eq. (2.5) holds at the fixed point, it must also hold in general. In fact the agreement was already checked at the fixed point in Ref. [10] in the general $O(N)$ case, and in the $U(1)$ case in Ref. [9].
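Both statements are easy to check mechanically. The following short script (ours, for illustration; exact rational arithmetic via `fractions`) verifies that the one-loop mismatch cancels exactly at $g=g_{*}\approx 3\epsilon/(N+8)$ with $d=4-\epsilon$, and that Eqs. (3.2)+(3.3) differ from Eq. (3.4) only by ${\cal O}(\bar{Q})$ terms:

```python
from fractions import Fraction as F

def g_star(eps, N):
    # Leading-order fixed point, Eq. (2.3).
    return 3 * eps / (N + 8)

def lhs_one_loop(eps, N, Q):
    # Classical dimension Qbar*(d/2 - 1) in d = 4 - eps, plus the
    # one-loop gamma of Eq. (3.1), evaluated at g = g_*.
    return Q * (1 - eps / 2) - F(1, 3) * g_star(eps, N) * Q * (1 - Q)

def rhs_one_loop(eps, N, Q):
    # O(g^0) and O(g) terms of Eq. (2.20) at g = g_*.
    return Q + F(1, 6) * (2 * Q - (N + 10)) * g_star(eps, N) * Q

for eps in (F(1, 10), F(1, 2)):
    for N in (1, 2, 4):
        for Q in (3, 7):
            assert lhs_one_loop(eps, N, Q) == rhs_one_loop(eps, N, Q)

def gamma2_sum(g, N, Q):
    # Sum of the individual contributions, Eqs. (3.2) + (3.3).
    return (-F(2, 9) * g**2 * Q * (Q - 1) * (Q - 2)
            - F(1, 9) * g**2 * (3 + F(N, 2)) * Q * (Q - 1))

def gamma2_lead(g, N, Q):
    # Eq. (3.4): leading and next-to-leading terms in Qbar only.
    return -F(1, 18) * (g * Q)**2 * (4 * Q - 6 + N)

# The difference is linear in Qbar, so its second difference vanishes.
g = F(1, 5)
for N in (1, 2, 4):
    d = [gamma2_sum(g, N, Q) - gamma2_lead(g, N, Q) for Q in (2, 3, 4)]
    assert d[2] - 2 * d[1] + d[0] == 0
```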
Figure 2: Three-loop diagrams for $\gamma_{T_{\bar{Q}}}$ contributing at leading $n$ (panels (a), (b), (c)).

Figure 3: Three-loop diagrams for $\gamma_{T_{\bar{Q}}}$ contributing at next-to-leading $n$ (panels (a)-(e)).

The leading ${\cal O}(\bar{Q}^{4})$ three-loop contributions to $\gamma_{T_{\bar{Q}}}$ come purely from the diagrams depicted in Fig. 2 (with four lines emerging from the $T_{\bar{Q}}$ vertex), while the next-to-leading ${\cal O}(\bar{Q}^{3})$ contributions are generated by these diagrams together with those in Fig. 3 (with three lines emerging from the $T_{\bar{Q}}$ vertex).

Graph | Symmetry Factor | Simple Pole
---|---|---
2(a) | $\frac{1}{54}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $-\frac{2}{3}$
2(b) | $\frac{2}{27}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $\frac{4}{3}$
2(c) | $\frac{1}{54}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $\frac{2}{3}$
3(a) | $\frac{1}{27}\frac{\bar{Q}!}{(\bar{Q}-3)!}$ | $-\frac{2}{3}$
3(b) | $\frac{4}{27}\frac{\bar{Q}!}{(\bar{Q}-3)!}A$ | $-\frac{2}{3}$
3(c) | $\frac{2}{27}\frac{\bar{Q}!}{(\bar{Q}-3)!}$ | $\frac{4}{3}$
3(d) | $\frac{4}{27}\frac{\bar{Q}!}{(\bar{Q}-3)!}A$ | $\frac{4}{3}$
3(e) | $\frac{8}{27}\frac{\bar{Q}!}{(\bar{Q}-3)!}B$ | $4\zeta_{3}$

Table 1: Three-loop results from Figs. 2, 3

The simple pole contributions from individual three-loop diagrams may be extracted from Ref. [17] and are listed in Table 1, together with the corresponding symmetry factor. A factor of $g^{3}$ is understood in each case. The $N$-dependent factors $A$ and $B$ are given by $A=\frac{1}{8}(N+6),\quad B=\frac{1}{16}(N+14).$ (3.5) When added and multiplied by a loop factor of 3, the leading and non-leading three-loop contributions to $\gamma_{T_{\bar{Q}}}$ are found to be $\gamma^{(3)}_{T_{\bar{Q}}}=\frac{1}{27}(g\bar{Q})^{3}[8\bar{Q}+N-36+2(14+N)\zeta_{3}],$ (3.6) once again in accord with the semiclassical results in Eq. (2.20), for general $g$. Again, this agreement was already checked at the fixed point in Ref. [10].
The leading ${\cal O}(\bar{Q}^{5})$ four-loop contributions to $\gamma_{T_{\bar{Q}}}$ come purely from the diagrams depicted in Fig. 4 (with five lines emerging from the $T_{\bar{Q}}$ vertex), while the next-to-leading ${\cal O}(\bar{Q}^{4})$ contributions are generated by these diagrams together with those in Figs. 5 and 6 (with four lines emerging from the $T_{\bar{Q}}$ vertex).

Figure 4: Four-loop diagrams for $\gamma_{T_{\bar{Q}}}$ contributing at leading $n$ (panels (a)-(f)).

Graph | Symmetry Factor | Simple Pole
---|---|---
4(a) | $\frac{4}{81}\frac{\bar{Q}!}{(\bar{Q}-5)!}$ | $\frac{5}{2}$
4(b) | $\frac{2}{81}\frac{\bar{Q}!}{(\bar{Q}-5)!}$ | $-\frac{2}{3}$
4(c) | $\frac{1}{81}\frac{\bar{Q}!}{(\bar{Q}-5)!}$ | $-\frac{5}{6}$
4(d) | $\frac{1}{81}\frac{\bar{Q}!}{(\bar{Q}-5)!}$ | $\frac{11}{6}$
4(e) | $\frac{2}{81}\frac{\bar{Q}!}{(\bar{Q}-5)!}$ | $\frac{2}{3}$
4(f) | $\frac{1}{81}\frac{\bar{Q}!}{(\bar{Q}-5)!}$ | $-\frac{1}{2}$

Table 2: Four-loop results from Fig. 4

Figure 5: Four-loop diagrams for $\gamma_{T_{\bar{Q}}}$ contributing at next-to-leading $n$ (panels (a)-(k)).

Figure 6: Four-loop diagrams for $\gamma_{T_{\bar{Q}}}$ contributing at next-to-leading $n$, continued (panels (a)-(k)).

Graph | Symmetry Factor | Simple Pole
---|---|---
5(a) | $\frac{2}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $\frac{1}{6}(11-6\zeta_{3})$
5(b) | $\frac{2}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}A$ | $\frac{1}{6}(11-6\zeta_{3})$
5(c) | $\frac{4}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}A$ | $-\frac{1}{2}$
5(d) | $\frac{16}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}C$ | $10\zeta_{5}$
5(e) | $\frac{8}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}B$ | $\frac{3}{2}(2\zeta_{3}-\zeta_{4})$
5(f) | $\frac{8}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}A$ | $-\frac{2}{3}$
5(g) | $\frac{2}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}A$ | $\frac{1}{2}(1-2\zeta_{3})$
5(h) | $\frac{1}{324}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $-2(1-\zeta_{3})$
5(i) | $\frac{1}{162}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $-2(1-\zeta_{3})$
5(j) | $\frac{2}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $-(1-2\zeta_{3})$
5(k) | $\frac{1}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $\frac{1}{2}(1-2\zeta_{3})$

Table 3: Four-loop results from Fig. 5

Graph | Symmetry Factor | Simple Pole
---|---|---
6(a) | $\frac{4}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}A$ | $-\frac{1}{6}(5-6\zeta_{3})$
6(b) | $\frac{8}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}B$ | $\frac{3}{2}(2\zeta_{3}+\zeta_{4})$
6(c) | $\frac{4}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}A$ | $-\frac{5}{6}$
6(d) | $\frac{2}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $-\frac{2}{3}$
6(e) | $\frac{4}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}A$ | $-\frac{2}{3}$
6(f) | $\frac{2}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $-\frac{1}{6}(5-6\zeta_{3})$
6(g) | $\frac{2}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $-\frac{1}{6}(5-6\zeta_{3})$
6(h) | $\frac{4}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $\frac{1}{2}(5-4\zeta_{3})$
6(i) | $\frac{4}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $\frac{1}{2}(5-4\zeta_{3})$
6(j) | $\frac{8}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}A$ | $\frac{5}{2}$
6(k) | $\frac{4}{81}\frac{\bar{Q}!}{(\bar{Q}-4)!}$ | $\frac{5}{2}$

Table 4: Four-loop results from Fig. 6

The simple pole contributions from the four-loop diagrams in Fig. 4 were readily evaluated using standard techniques (see for instance Ref. [18]). Those from Figs. 5, 6 may be extracted from Ref. [17]. The contributions from each four-loop diagram are listed in Tables 2, 3 and 4 respectively, together with the corresponding symmetry factor. A factor of $g^{4}$ is understood in each case, and the $N$-dependent factor $C$ is given by $C=\frac{1}{32}(N+30).$ (3.7) When added and multiplied by a loop factor of 4, the leading and non-leading four-loop contributions to $\gamma_{T_{\bar{Q}}}$ are found to be $\gamma^{(4)}_{T_{\bar{Q}}}=-\frac{1}{81}(g\bar{Q})^{4}[42\bar{Q}+4(N-73)+2(6N+65)\zeta_{3}+5(N+30)\zeta_{5}],$ (3.8) once again in accord with the semiclassical results in Eq. (2.20), for general $g$.
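As a consistency check of the bookkeeping (an illustrative script of ours), Eq. (3.8) can be compared term by term with the $(g\bar{Q})^{4}$ coefficient of the semiclassical result (2.20). Since $\zeta_{3}$ and $\zeta_{5}$ enter both expressions linearly, arbitrary rational stand-in values suffice for the identity:

```python
from fractions import Fraction as F

def gamma4(g, Q, N, z3, z5):
    # Four-loop perturbative result, Eq. (3.8).
    return -F(1, 81) * (g * Q)**4 * (42 * Q + 4 * (N - 73)
                                     + 2 * (6 * N + 65) * z3
                                     + 5 * (N + 30) * z5)

def semi4(g, Q, N, z3, z5):
    # (g Qbar)^4 term of the semiclassical expansion, Eq. (2.20).
    return ((-F(14, 27) * Q
             - F(1, 81) * (4 * (N - 73) + 2 * (6 * N + 65) * z3
                           + 5 * (N + 30) * z5)) * (g * Q)**4)

# Stand-ins for zeta_3, zeta_5; the identity is purely algebraic.
z3, z5 = F(6, 5), F(7, 6)
for N in (1, 2, 4):
    for Q in (2, 5):
        assert gamma4(F(1, 3), Q, N, z3, z5) == semi4(F(1, 3), Q, N, z3, z5)
```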
## 4 The large $g_{*}\bar{Q}$ calculation

In this section we discuss the large $g_{*}\bar{Q}$ limit of $\Delta_{T_{\bar{Q}}}$. The large $g_{*}\bar{Q}$ limit of $\Delta_{-1}$ as given by Eq. (2.7) is readily obtained as $\frac{\Delta_{-1}}{g_{*}}=\frac{3}{4g_{*}}\left[\frac{3}{4}\left(\frac{4g_{*}\bar{Q}}{3}\right)^{\frac{4}{3}}+\frac{1}{2}\left(\frac{4g_{*}\bar{Q}}{3}\right)^{\frac{2}{3}}+{\cal O}(1)\right].$ (4.1) We follow the procedure described in Ref. [9] for evaluating $\Delta_{0}$ by means of an approximation to the sum over $l$ followed by a numerical fit. The procedure involves selecting integers $N_{1}$, $N_{2}$ and picking $A\geq 1$ such that $AR\mu_{*}$ is an integer (this represents a cut-off in the summation, beyond which we approximate it by an integral). The accuracy may be made as great as desired by increasing $N_{1}$, $N_{2}$ and $A$. We obtain $\Delta_{0}=\frac{N+8}{16}(R^{2}\mu_{*}^{2}-1)^{2}\ln(AR\mu_{*})+F(R\mu_{*}),$ (4.2) where $F(R\mu_{*})=f_{N_{2},A}(R\mu_{*})-\frac{1}{4}\sigma_{AR\mu_{*}}+\frac{1}{2}\sum_{l=0}^{AR\mu_{*}}\sigma_{l}-\frac{1}{2}\sum_{k=1}^{N_{1}}\frac{B_{2k}}{(2k)!}\sigma^{(2k-1)}_{AR\mu_{*}},$ (4.3) and here $f_{N_{2},A}(R\mu_{*})=\frac{1}{2}(AR\mu_{*})^{4}\sum_{n=1,n\neq 5}^{N_{2}}\frac{c_{n}}{(AR\mu_{*})^{n-1}(n-5)}+\frac{N+8}{16}(R^{2}\mu_{*}^{2}-1)^{2}\left(\gamma-\frac{3}{2}\right)-\frac{29}{24}(R^{2}\mu_{*}^{2}-1)-\left(\frac{11}{48}R^{2}\mu_{*}^{2}-\frac{1}{10}\right)N.$ (4.4) With some help from one of the authors [19] we have corrected some typos in the corresponding equations in Ref. [9], which were not reflected in their final results. The function $f_{N_{2},A}(R\mu_{*})$ derives from replacing the sum over $l$ for $l\geq AR\mu_{*}$ in Eq. (2.12) by an integral over $l$. It is then appropriate to use the large-$l$ expansion in Eq. (2.15). The integral over $\frac{1}{l^{1+\epsilon}}$ corresponding to the $c_{5}$ term leads to a pole term in $\epsilon$.
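The asymptotics (4.1) are easy to probe numerically (our own sketch, not part of the paper's fit; the ${\cal O}(1)$ constant is dropped, so the relative error should shrink as $g_{*}\bar{Q}$ grows):

```python
from math import sqrt

def delta_m1_over_g(g, Qbar):
    # Exact closed form: Delta_{-1}/g = Qbar * F(6 g Qbar)/4, from Eq. (2.7);
    # real for 6 g Qbar >= sqrt(3).
    x = 6.0 * g * Qbar
    u = (x + sqrt(x * x - 3.0)) ** (1.0 / 3.0)
    c1, c2 = 3 ** (1 / 3), 3 ** (2 / 3)
    F = c2 * u / (c1 + u * u) + c1 * (c1 + u * u) / u
    return Qbar * F / 4.0

def delta_m1_large(g, Qbar):
    # Large-(g Qbar) asymptotics, Eq. (4.1), with the O(1) term dropped.
    y = 4.0 * g * Qbar / 3.0
    return (3.0 / (4.0 * g)) * (0.75 * y ** (4.0 / 3.0) + 0.5 * y ** (2.0 / 3.0))

def rel_err(g, Qbar):
    exact = delta_m1_over_g(g, Qbar)
    return abs(delta_m1_large(g, Qbar) - exact) / exact

assert rel_err(0.1, 100) < 1e-2                 # g*Qbar = 10
assert rel_err(0.1, 1000) < rel_err(0.1, 100)   # improves as g*Qbar grows
```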
The potential pole in $\Delta_{0}$ is cancelled by the pole in the bare coupling, but the $O(\epsilon)$ term in $c_{5}$ in Eq. (2.16) leads to the terms in the last line of Eq. (4.4). The details of the procedure may be found in Ref. [9]. In Eq. (4.3), we can set $d=4$. We now evaluate $F(R\mu_{*})$ in Eq. (4.3) numerically. We take $N_{1}=4$, $N_{2}=10$ and $A=10$, using the same numbers as Ref. [9] for comparison purposes. The result is then fitted with an expansion in $(R\mu_{*})^{-2}$, starting from $(R\mu_{*})^{4}$, with 4 parameters. We find that $F(R\mu_{*})$ is given by $F(R\mu_{*})\sim-(1.5559+0.2293N)(R\mu_{*})^{4}+(1.8536+0.3231N)(R\mu_{*})^{2}-(0.4467+0.0826N)+{\cal O}((R\mu_{*})^{-2}),$ (4.5) and this may be inserted into Eq. (4.2) to give the full result for $\Delta_{0}$. Expanding $R\mu_{*}$ as given by Eq. (2.6) in terms of large $g_{*}\bar{Q}$, we find $R\mu_{*}=\left(\frac{4g_{*}\bar{Q}}{3}\right)^{\frac{1}{3}}+\frac{1}{3}\left(\frac{4g_{*}\bar{Q}}{3}\right)^{-\frac{1}{3}}+\ldots$ (4.6) and then we obtain from Eq. (4.2) $\Delta_{0}=\left[\alpha+\frac{N+8}{48}\ln\left(\frac{4g_{*}\bar{Q}}{3}\right)\right]\left(\frac{4g_{*}\bar{Q}}{3}\right)^{\frac{4}{3}}+\left[\beta-\frac{N+8}{72}\ln\left(\frac{4g_{*}\bar{Q}}{3}\right)\right]\left(\frac{4g_{*}\bar{Q}}{3}\right)^{\frac{2}{3}}+{\cal O}(1),$ (4.7) where $\alpha=-0.4046-0.0854N,\qquad\beta=-0.8218-0.0577N.$ (4.8) The results for $U(1)$ should be recovered by setting $N=2$; and indeed for $N=2$ we find Eqs. (4.5), (4.7), (4.8) agree with the corresponding results given in Ref. [9]. Following Ref. [9] and combining Eqs.
(2.3), (2.5), (4.1) and (4.7), we may write the full scaling dimension in the form $\Delta_{T_{\bar{Q}}}=\frac{1}{\epsilon}\left(\frac{4\epsilon\bar{Q}}{N+8}\right)^{\frac{d}{d-1}}\left[\frac{3(N+8)}{16}+\epsilon\left(\alpha+\frac{3(3N+14)}{16(N+8)}\right)+{\cal O}(\epsilon^{2})\right]+\frac{1}{\epsilon}\left(\frac{4\epsilon\bar{Q}}{N+8}\right)^{\frac{d-2}{d-1}}\left[\frac{N+8}{8}+\epsilon\left(\beta-\frac{3N+14}{8(N+8)}\right)+{\cal O}(\epsilon^{2})\right]+{\cal O}[(\epsilon\bar{Q})^{0}].$ (4.9) In Ref. [15], we found that we could reproduce the coefficients in the large $R\mu_{*}$ expansion of the $N$-dependent part of $\Delta_{0}$ (the terms involving $\omega^{*}_{++}$ and $\omega^{*}_{--}$ in Eq. (2.13)) by an analytic computation. This fails to work here; an analytic large-$R\mu_{*}$ expansion of $\omega^{*}_{++}$ and $\omega^{*}_{--}$ as given by Eq. (2.11) leads to odd negative powers of $R\mu_{*}$, whereas our numerical computation in Eq. (4.5) only contains even powers of $R\mu_{*}$. It appears that the simple properties of $\omega^{*}_{++}$ and $\omega^{*}_{--}$ identified in Ref. [15], in particular their expansion in powers of $\frac{J_{l}^{2}}{R^{2}\mu_{*}^{2}}$, are not enough for our analytic computation to work in the $d=4$ case. A little trial and error suggests that the $d=3$ property $n_{l}\propto\frac{d}{dl}J_{l}^{2}$ may also be crucial, but further insight is required.

## 5 Conclusions

Approaches that extend the reach of (or even transcend the need for) perturbation theory have always been challenging, and are all the more interesting now because of the increased importance attached to multi-leg amplitudes, which can present formidable calculational obstacles at higher loop orders. In this paper we have followed Refs. [9, 8, 10] in the application of semi-classical methods to the calculation of $\phi^{n}$ amplitudes in $d=4$ renormalisable scalar theories with quartic interactions.
Ref. [10] generalises this calculation of Ref. [9] from $U(1)$ to an $O(N)$-invariant interaction. Another motivation for studying this class of theories is their (classical) scale invariance (CSI). As remarked in Ref. [10], the Standard Model (SM) is “almost” CSI. Indeed, in 1973, Coleman and Weinberg (CW) [20] had hoped to argue that the SM might indeed be viable with the omission of the Higgs (wrong-sign) $(\hbox{mass})^{2}$ term. This attractive idea failed. Neglecting Yukawa couplings (which seemed reasonable at the time) led to a Higgs mass prediction which was too small; but including the top quark Yukawa coupling destabilised the Higgs vacuum altogether. (For a review of some controversy over this development, see Ref. [21].) CW introduced the idea of dimensional transmutation as a means of generating a physical mass scale in a CSI theory. The same phenomenon has been pursued [22, 23, 24] in the CSI form of quantum gravity [25, 26, 27, 28, 29, 30]. Our purpose here has been to compare the results of Ref. [10] with straightforward (albeit intricate) perturbation theory. Generally the results have supported the validity of the semi-classical approximation, in its domain of validity. Future work might include the application of the semi-classical methods and perturbative methods used here to the remaining class of CSI theories with scalar self-interactions, that is, $\phi^{3}$ theories in $d=6$; or even perhaps the case of CSI quantum gravity mentioned above.

## Acknowledgements

We are grateful to Gabriel Cuomo for helpful correspondence. DRTJ thanks the Leverhulme Trust for the award of an Emeritus Fellowship. This research was supported by the Leverhulme Trust, STFC and by the University of Liverpool.

## References

* [1] K.G. Wilson, “Renormalization Group and Critical Phenomena. I. Renormalization Group and the Kadanoff Scaling Picture”, Phys. Rev. B4 (9): 3174–3183. * [2] K.G. Wilson, “Renormalization Group and Critical Phenomena. II.
Phase-Space Cell Analysis of Critical Behavior”, Phys. Rev. B4 (9): 3184–3205. * [3] K.G. Wilson and Michael E. Fisher, “Critical exponents in 3.99 dimensions”, Phys.Rev.Lett. 28 (1972) 240-243. * [4] D.T. Son, “Semiclassical approach for multiparticle production in scalar theories”, Nucl.Phys. B 477 (1996) 378-406. * [5] S. Hellerman, D. Orlando, S. Reffert and M. Watanabe, “On the CFT Operator Spectrum at Large Global Charge”, JHEP 12 (2015) 071, [arXiv:1505.01537 [hep-th]]. * [6] L. Alvarez-Gaume, O. Loukas, D. Orlando and S. Reffert, “Compensating strong coupling with large charge”, JHEP 04 (2017) 059, [arXiv:1610.04495 [hep-th]]. * [7] L. Alvarez-Gaume, D. Orlando and S. Reffert, “Large charge at large N”, JHEP 12 (2019) 142, [arXiv:1909.12571 [hep-th]]. * [8] G. Arias-Tamargo, D. Rodriguez-Gomez and J.G. Russo, “The large charge limit of scalar field theories and the Wilson-Fisher fixed point at $\epsilon=0$,” JHEP 10 (2019) 201, [arXiv:1908.11347 [hep-th]]. * [9] G. Badel, G. Cuomo, A. Monin and R. Rattazzi, “The epsilon expansion meets semiclassics”, JHEP 11 (2019) 110 [arXiv:1909.01269[hep-th]]. * [10] O. Antipin, J. Bersini, F. Sannino, Z. Wang and C. Zhang,“Charging the $O(N)$ model,” Phys.Rev. D 102 (2020) 4, 045011, [arXiv:2003.13121 [hep-th]]. * [11] O. Antipin, J. Bersini, F. Sannino, Z. Wang and C. Zhang,“Charging the Walking $U(N)\otimes U(N)$ Higgs Theory as a Complex CFT,” Phys. Rev. D 102, 12, 125033, [arXiv:2006.10078 [hep-th]]. * [12] L. Alvarez-Gaume, D. Orlando and S. Reffert, “Selected Topics in the Large Quantum Number Expansion”, arXiv:2008.03308 [hep-th]. * [13] G. Badel, G. Cuomo, A. Monin and R. Rattazzi, “Feynman diagrams and the large charge expansion in $3-\varepsilon$ dimensions,” Phys. Lett. B 802 (2020) 135202 [arXiv:1911.08505 [hep-th]]. * [14] Guillermo Arias-Tamargo, Diego Rodriguez-Gomez and Jorge G.
Russo, “On the UV completion of the $O(N)$ model in $6-\epsilon$ dimensions: a stable large-charge sector”, JHEP 09 (2020) 064 [arXiv:2003.13772 [hep-th]] * [15] I. Jack and D.R.T. Jones, “Anomalous dimensions for $\phi^{n}$ in scale invariant $d=3$ theory.”, Phys.Rev. D 102 (2020) 8, 085012, [arXiv:2007.07190 [hep-th]] * [16] H.B. Nielsen and S. Chadha, “On how to count Goldstone bosons,” Nucl. Phys. B 105 (1976), 445-453 * [17] D. I. Kazakov, O. V. Tarasov and A. A. Vladimirov, “Calculation of Critical Exponents by Quantum Field Theory Methods,” Sov. Phys. JETP 50 (1979), 521 JINR-E2-12249. * [18] H. Kleinert and V. Schulte-Frohlinde, “Critical properties of $\phi^{4}$-theories”, World Scientific (2001) . * [19] G. Cuomo, private communication. * [20] S.R. Coleman and E.J. Weinberg, “Radiative Corrections as the Origin of Spontaneous Symmetry Breaking”, Phys.Rev. D 7 (1973) 1888-1910. * [21] M.B. Einhorn and D.R.T. Jones, “The Effective potential, the renormalisation group and vacuum stability”, JHEP 04 (2007) 051, hep-ph/0702295 [hep-ph] * [22] M. B. Einhorn and D. R. T. Jones, “Naturalness and Dimensional Transmutation in Classically Scale-Invariant Gravity,” JHEP 1503 (2015) 047 [arXiv:1410.8513 [hep-th]]. * [23] M. B. Einhorn and D. R. T. Jones, “Induced Gravity I: Real Scalar Field,” JHEP 1601 (2016) 019 [arXiv:1511.01481 [hep-th]]. * [24] M. B. Einhorn and D. R. T. Jones, “Induced Gravity II: Grand Unification,” JHEP 1605 (2016) 185 [arXiv:1602.06290 [hep-th]]. * [25] K. S. Stelle, “Renormalization of Higher Derivative Quantum Gravity,” Phys. Rev. D 16 (1977) 953. * [26] E. S. Fradkin and A. A. Tseytlin, “Renormalizable Asymptotically Free Quantum Theory of Gravity,” Phys. Lett. B 104 (1981) 377. * [27] E. S. Fradkin and A. A. Tseytlin, “Renormalizable asymptotically free quantum theory of gravity,” Nucl. Phys. B 201 (1982) 469. * [28] I. G. Avramidi and A. O. Barvinsky, “Asymptotic Freedom In Higher Derivative Quantum Gravity,” Phys. Lett. 
B 159 (1985) 269. * [29] I. G. Avramidi, “Asymptotic Behavior of the Quantum Theory of Gravity With Higher Order Derivatives,” Sov. J. Nucl. Phys. 44 (1986) 160. * [30] I. G. Avramidi, “Heat kernel and quantum gravity,” Lect. Notes Phys. M 64 (2000) 1.
# Improving Few-Shot Learning with Auxiliary Self-Supervised Pretext Tasks

Nathaniel Simard, Guillaume Lagrange

###### Abstract

Recent work on few-shot learning (Tian et al., 2020a) showed that the quality of learned representations plays an important role in few-shot classification performance. On the other hand, the goal of self-supervised learning is to recover useful semantic information of the data without the use of class labels. In this work, we exploit the complementarity of both paradigms via a multi-task framework where we leverage recent self-supervised methods as auxiliary tasks. We found that combining multiple tasks is often beneficial, and that solving them simultaneously can be done efficiently. Our results suggest that self-supervised auxiliary tasks are effective data-dependent regularizers for representation learning. Our code is available at: https://github.com/nathanielsimard/improving-fs-ssl.

Machine Learning, ICML

## 1 Introduction

Few-shot learning measures the ability to learn new concepts from a limited amount of examples. This is a challenging problem that usually requires different approaches from the success stories of image classification where labeled data is abundant. Recently, meta-learning (Schmidhuber, 1987; Bengio et al., 1992; Thrun & Pratt, 1998), or learning-to-learn, has emerged as a learning paradigm where a machine learning model gains experience over multiple learning episodes and uses this experience to improve its future learning performance. Significant advances in few-shot learning (Vinyals et al., 2016; Ravi & Larochelle, 2017; Snell et al., 2017; Finn et al., 2017; Oreshkin et al., 2018; Sung et al., 2018; Rusu et al., 2019; Lee et al., 2019) have been made by framing the problem within a meta-learning setting. More precisely, few-shot learning can be seen as a sub-task of meta-learning where a learner is trained and tested on several different but still related tasks.
The goal of few-shot meta-learning is to train a model that can quickly adapt to a new task using only a few datapoints and training iterations (Finn et al., 2017). During the meta-training phase, the model is trained to solve multiple tasks such that it is able to learn or adapt to new ones through the meta-testing phase. The performance of the learner is evaluated by the average test accuracy across many meta-testing tasks. Focusing on few-shot image classification, where very few images are available for novel tasks, recent works aim to learn representations that generalize well to novel classes by training a feature representation to classify a training dataset of base classes (Finn et al., 2017; Vinyals et al., 2016; Snell et al., 2017; Gidaris & Komodakis, 2018; Qi et al., 2018). Methods to tackle this problem include optimization-based methods and metric-based methods, although recent works suggest that learning a good representation is the main factor responsible for fast adaptation in few-shot learning methods (Raghu et al., 2020; Tian et al., 2020a). Raghu et al. (2020) empirically suggested that the effectiveness of MAML (Finn et al., 2017) is due to its ability to learn a useful representation, while Tian et al. (2020a) showed that learning a good representation of the data through a proxy task is even more effective than complex meta-learning algorithms. Self-supervised learning is another recent paradigm, this one for unsupervised learning, where the supervisory signal for feature learning is automatically generated from the data itself. Seminal works (Doersch et al., 2015; Zhang et al., 2016; Noroozi & Favaro, 2016; Gidaris et al., 2018) have relied on heuristics to design pretext learning tasks such that high-level image understanding must be captured to solve them.
Discriminative approaches based on contrastive learning in the latent space (Oord et al., 2018; Wu et al., 2018; Bachman et al., 2019; Tian et al., 2019; Henaff, 2020; Chen et al., 2020a; He et al., 2020; Chen et al., 2020c; Misra & Maaten, 2020; Chen et al., 2020b) have shown recently that self-supervised learning is especially useful with the large availability of unlabeled data, effectively closing the gap with supervised learning when leveraging models with large capacity. These methods are trained by reducing the distance between different augmented views of the same image (positive pairs), and increasing the distance between augmented views of different images (negative pairs). More recently, BYOL (Grill et al., 2020) showed that one can also learn transferable visual representations via bootstrapping representations and without negative pairs. In this paper, we propose to leverage self-supervised learning method(s) as auxiliary task(s) to prevent overfitting to the base classes seen during meta-training and improve transfer learning performance on novel classes. More specifically, we build upon the simple baseline proposed by Tian et al. (2020a), where the meta-training tasks are merged into a single pre-training task and a linear model is learned on top of the frozen encoder, and leverage self-supervised auxiliary task(s) as a data-dependent regularizer to learn richer and more transferable visual representations. Our multi-task framework exploits the complementarity of few-shot learning and self-supervised learning paradigms to boost performance on few-shot image classification benchmarks. ## 2 Related Work #### Few-shot learning. Recent works have mostly focused on meta-learning approaches to few-shot learning. Among these, optimization-based methods (Finn et al., 2017; Ravi & Larochelle, 2017; Lee et al., 2019; Rusu et al., 2019) aim to learn how to rapidly adapt the embedding model parameters to training examples for a given few-shot recognition task.
Metric-based approaches (Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Gidaris & Komodakis, 2018) aim to learn a task-dependent metric over the embeddings. More concretely, such methods learn a distance metric between a query image and a set of support images given a few-shot task. More recently, it has been shown that learning a good representation plays an important role in few-shot learning (Raghu et al., 2020; Tian et al., 2020a). Our work is directly related to that of Tian et al. (2020a). A simple image classification task is used to train on the merged meta-training data to learn an embedding model, which is re-used at meta-testing time to extract embeddings for a linear classifier. #### Self-supervised learning. Deep learning models are usually considered data-hungry and require a significant amount of labeled data to achieve decent performance. This recent learning paradigm aims to mitigate the requirements for large amounts of annotated data by providing a surrogate supervisory signal from the data itself. Initial works in self-supervised representation learning relied on engineered prediction tasks (Doersch et al., 2015; Zhang et al., 2016; Noroozi & Favaro, 2016; Gidaris et al., 2018), often referred to as pretext tasks. Such methods include predicting the relative position of image patches (Doersch et al., 2015; Noroozi & Favaro, 2016), the rotation degree applied to an image (Gidaris et al., 2018), the colors of a grayscale image (Zhang et al., 2016; Larsson et al., 2016), and many others. More recently, discriminative approaches based on contrastive learning have shown great promise in self-supervised learning. Contrastive methods like MoCo (He et al., 2020; Chen et al., 2020c) and SimCLR (Chen et al., 2020a, b) push representations of different views of the same image (positive pairs) closer, and spread representations of views from different images (negative pairs) apart.
Beyond contrastive learning, BYOL (Grill et al., 2020) relies only on positive pairs. From a given representation, referred to as the target, an online network is trained to learn an enhanced representation by predicting the target representation. Although concerns over stability and representational collapse have been raised (Fetterman & Albrecht, 2020; Tian et al., 2020b), BYOL has still proven to be an effective self-supervised representation learning technique. #### Multi-task learning. Our work is also related to multi-task learning, a class of techniques that train on multiple task objectives together to learn a representation that works well for every task, though we are mainly interested in improving performance on the main task in this work. In this context, using multiple self-supervised pretext tasks is especially attractive as no additional labels are required. The combination of complementary tasks has been shown to be beneficial (Doersch & Zisserman, 2017; Yamaguchi et al., 2019) in self-supervised representation learning. Previous works (Gidaris et al., 2019; Su et al., 2020) explore the use of self-supervised pretext tasks as auxiliary tasks to improve metric-based few-shot learning methods. In this work, we extend these ideas to recent self-supervised techniques and focus on learning richer and more generalizable features by starting from the simpler few-shot learning baseline of Tian et al. (2020a). ## 3 Method In this section, we first describe in §3.1 the few-shot learning problem addressed and introduce in §3.2 the proposed multi-task approach to improve few-shot performance with self-supervised auxiliary tasks. Figure 1: Combining supervised and self-supervised tasks for meta-training. We train the embedding model $F_{\theta}$ with both annotated images and unlabeled images in a multi-task setting.
Self-supervised tasks such as rotation prediction or representation prediction (BYOL) act as a data-dependent regularizer for the shared feature extractor $F_{\theta}$. Although additional unlabeled data can be used for the self-supervised tasks, in this work we sample images from the annotated set. ### 3.1 Problem Formulation Standard few-shot learning benchmarks evaluate models in episodes of $N$-way, $K$-shot classification tasks. Each task consists of a small number of $N$ classes with $K$ training examples per class. Meta-learning approaches for few-shot learning aim to minimize the generalization error across tasks sampled from a task distribution. This can be thought of as learning over a collection of tasks $\mathcal{T}=\\{\left(\mathcal{D}_{i}^{train},\mathcal{D}_{i}^{test}\right)\\}_{i=1}^{I}$, commonly referred to as the meta-training set. In practice, a task is constructed on the fly during the meta-training phase and sampled as follows. For each task, $N$ classes from the set of training classes are first sampled (with replacement), from which the training (support) set $\mathcal{D}_{i}^{train}$ of $K$ images per class is sampled, and finally the test (query) set $\mathcal{D}_{i}^{test}$ consisting of $Q$ images per class is sampled. The support set is used to learn how to solve this specific task, and the additional examples from the query set are used to evaluate the performance for this task. Once the meta-training phase of a model is finished, its performance is evaluated on a set of held-out tasks $\mathcal{S}=\\{\left(\mathcal{D}_{j}^{train},\mathcal{D}_{j}^{test}\right)\\}_{j=1}^{J}$, called the meta-test set. During meta-training, an additional held-out meta-validation set can be used for hyperparameter selection and model selection.
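The episode construction described above can be sketched as follows (a minimal illustration with hypothetical names, not the authors' code; for simplicity, classes here are drawn without replacement):

```python
import random

def sample_episode(dataset, n_way=5, k_shot=5, q_queries=15):
    """Sample one N-way K-shot task.
    dataset: dict mapping class id -> list of images."""
    classes = random.sample(sorted(dataset), n_way)        # pick N classes
    support, query = [], []
    for episode_label, cls in enumerate(classes):          # relabel 0..N-1 per episode
        images = random.sample(dataset[cls], k_shot + q_queries)
        support += [(img, episode_label) for img in images[:k_shot]]
        query += [(img, episode_label) for img in images[k_shot:]]
    return support, query                                  # D^train, D^test
```

The support set would then be used to fit the base learner and the query set to score it.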
Training examples $\mathcal{D}^{train}=\\{\left(\mathbf{x}_{t},y_{t}\right)\\}_{t=1}^{T}$ and testing examples $\mathcal{D}^{test}=\\{\left(\mathbf{x}_{q},y_{q}\right)\\}_{q=1}^{Q}$ are sampled from the same distribution, and are mapped to a feature space using an embedding model $F_{\theta}$. A base learner is trained on $\mathcal{D}^{train}$ and used as a predictor on $\mathcal{D}^{test}$. #### Merged meta-training. Following the simple baseline of Tian et al. (2020a), we merge tasks from the meta-training set into a single bigger task or dataset $\begin{split}\mathcal{D}^{merge}&=\\{\left(\mathbf{x}_{k},y_{k}\right)\\}_{k=1}^{K}\\\ &=\cup\\{\mathcal{D}^{train}_{1},\ldots,\mathcal{D}^{train}_{i},\ldots,\mathcal{D}^{train}_{I}\\}\quad,\end{split}$ (1) where $\mathcal{D}^{train}_{i}$ is a task from $\mathcal{T}$. The objective is to learn a transferable embedding model $F_{\theta}$ which generalizes to new tasks. For a task $\left(\mathcal{D}_{j}^{train},\mathcal{D}_{j}^{test}\right)$ sampled from the meta-testing distribution, we freeze the embedding model $F_{\theta}$ to extract embeddings and train a base learner on $\mathcal{D}_{j}^{train}$. The base learner is instantiated as a simple linear classifier and is re-initialized for every task. ### 3.2 Multi-Task Learning Through solving non-trivial self-supervised tasks, the embedding model is encouraged to learn rich and generic image features that can be readily exploited for few-shot learning on novel classes. We propose a multi-task learning approach to extend the supervised objective with different self-supervised auxiliary tasks, where the goal is to learn a representation that works well for every task, and perhaps share knowledge between tasks, to further improve few-shot performance. We incorporate self-supervision into our multi-task framework by adding auxiliary losses for each self-supervised task.
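The merge in Eq. (1) simply pools the per-task training sets into one classification dataset over the global class labels; a minimal sketch (illustrative, not the authors' code):

```python
def merge_tasks(task_train_sets):
    """task_train_sets: iterable of per-task support sets D_i^train,
    each a list of (image, global_label) pairs.
    Returns the single merged dataset D^merge (Eq. 1)."""
    merged = []
    for d in task_train_sets:
        # labels are the global class ids, so no relabeling is needed
        merged.extend(d)
    return merged
```

A standard classifier is then trained once on this merged dataset instead of episodically.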
As illustrated in Figure 1, we use the same embedding model $F_{\theta}$ to extract image features for all tasks. Based on the features extracted, a new network defined for each task is assigned to solve its respective task, each of which contributes to the overall loss $\mathcal{L}_{tot}=\sum_{t=1}^{N}\mathcal{L}_{t}\quad,$ (2) where $\mathcal{L}_{t}$ is the loss for the $t$-th task out of all $N$ tasks. Along with the supervised objective, we consider two additional tasks in the present work: rotation prediction (Gidaris et al., 2018) and representation prediction from different views following BYOL (Grill et al., 2020). All tasks are computed on the merged dataset $\mathcal{D}^{merge}$. #### Supervised. The supervised task is the standard classification task on all categories from the merged dataset, which has been shown to be an effective baseline to generate powerful embeddings for the downstream base learner (Tian et al., 2020a). The new network $f_{\vartheta}$ introduced to solve this task is instantiated as a simple linear classifier. We compute the cross-entropy loss $\mathcal{L}_{ce}$ between predictions and ground-truth labels. #### Rotation. In this task, the network must identify the rotation transformation applied to an image among four possible 2D rotations $\mathcal{R}=\\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\\}$, as proposed by Gidaris et al. (2018). This auxiliary task is similar to Gidaris et al. (2019), except that we do not create four rotated copies of an input image, which would effectively quadruple the batch size, but instead sample a single rotation to apply to each image in the batch. The network $r_{\phi}$ specific to the rotation task is a multi-layer perceptron (MLP). The self-supervised loss $\mathcal{L}_{rot}$ for this task is the cross-entropy loss between the rotation predictions and the generated labels. #### BYOL.
In BYOL (Grill et al., 2020), the online network directly predicts the output of one view from another view given by the target network. Essentially, this is a representation prediction task in the latent space, similar to contrastive learning except that it only relies on the positive pairs. In this task, the online network is composed of the shared encoder $F_{\theta}$, the MLP projection head $g_{\varphi}$ and the predictor $q_{\varphi}$ (also an MLP). The target network has the same architecture as the online network (minus the predictor), but its parameters are an exponential moving average (EMA) of the online network parameters, as illustrated in Figure 1. Denoting the parameters of the online network as $\theta_{o}=\\{\theta,\varphi\\}$, those of the target network as $\xi$ and the target decay rate $\tau\in[0,1)$, the update rule for $\xi$ is: $\xi\leftarrow\tau\xi+(1-\tau)\theta_{o}$ (3) The self-supervised loss $\mathcal{L}_{B\\!Y\\!O\\!L}$ is the mean squared error between the normalized predictions and target projections as defined in Grill et al. (2020). Effectively, this task enforces the representations for different views of positive pairs to be closer together in latent space, which provides transformation invariance to the pre-defined set of data augmentations used for BYOL. Although the multi-task framework proposed is very flexible and could easily be extended to additional tasks, the naïve implementation, which does not share the transformed inputs across tasks, still has some overhead with each task added. Effectively, for each set of transformations associated with the different tasks, different views of the same input image are generated, which is essentially a new input in the batch.
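The target-network update in Eq. (3) can be sketched as follows (a minimal illustration with plain floats standing in for parameter tensors; not the authors' code):

```python
def ema_update(online_params, target_params, tau=0.99):
    """One step of the target update in Eq. (3):
    xi <- tau * xi + (1 - tau) * theta_o, applied per parameter.
    In practice the parameters are tensors updated in place."""
    return [tau * xi + (1 - tau) * theta_o
            for theta_o, xi in zip(online_params, target_params)]
```

With $\tau$ close to 1 the target drifts slowly toward the online network, which is what stabilizes the bootstrapped prediction target.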
For example, given an input image from the merged dataset, the default data augmentations from the supervised task would create an augmented view of the input used to compute the supervised loss, and applying a rotation to the input image would generate a different view of the input. Not only does this double the batch size, but the inputs are not actually shared across tasks to compute the different losses. If instead the set of transformations is shared across tasks, the different tasks would be solved for the same inputs. In this scenario, the different losses are computed and backpropagated on the same inputs, which is much more efficient. In §4.4, we explore this more efficient setting by combining the supervised and BYOL tasks using the stronger data augmentation strategy from BYOL in both. More concretely, we generate an augmented view of an input image and compute the supervised loss on the first augmented view, while another augmented view of the same input is generated to solve the representation prediction task in BYOL. ## 4 Experimental Results In this section, we evaluate our proposed multi-task framework on two widely used few-shot image recognition benchmarks: miniImageNet (Vinyals et al., 2016) and CIFAR-FS (Bertinetto et al., 2018). #### Datasets. The miniImageNet dataset (Vinyals et al., 2016) has recently become a standard benchmark for few-shot learning algorithms. It consists of 100 classes randomly sampled from ImageNet, with each class containing 600 down-sampled images of size $84\times 84$. The CIFAR-FS dataset (Bertinetto et al., 2018) is derived from the original CIFAR-100 dataset by randomly splitting 100 classes into 64 classes for training, 16 for validation and 20 for testing. Each image is $32\times 32$ pixels. In our experiments, we up-sampled the images to $96\times 96$ pixels. (This is due to the limitations of using smaller scale images with BYOL, which strongly relies on random crops as image augmentation.)
Initial experimental results showed that BYOL on its own could only achieve $47.09\pm 0.96\%$ accuracy on CIFAR-FS, which is significantly lower than the results reported in Table 1. #### Evaluation metrics. As mentioned in §3.1, few-shot algorithms are evaluated on a large number of $N$-way $K$-shot classification tasks. Each task is created by randomly sampling $N$ novel classes from the meta-test set, and then within the selected images randomly selecting $K$ training (support) images and $Q$ test (query) images (without overlap). In this work, we focus on $N=5$, $K=5$ (5-way 5-shot) classification, using the remainder of the $Q$ test samples to evaluate performance on the sampled task. ### 4.1 Implementation Details #### Network architectures. We mainly conduct our experiments using a ResNet-18 (He et al., 2016) as our embedding model $F_{\theta}$ due to resource limitations, with an output feature vector of size $512$. Following previous works (Tian et al., 2020a; Snell et al., 2017; Lee et al., 2019; Oreshkin et al., 2018; Dhillon et al., 2020), we also report some results using ResNet-12 as our backbone for comparison. Our ResNet-12 is identical to that used in Tian et al. (2020a). Note that this modified ResNet-12 is wider than the original ResNet-18 and makes use of DropBlock (Ghiasi et al., 2018) as a regularizer, which results in a substantial gain in performance over ResNet-18 but at the cost of longer training time. Additionally, the resulting embedding size is $640$. #### Task-specific networks. From these embeddings, the image classifier $f_{\vartheta}$ is instantiated as a simple linear layer for classification.
The rotation network $r_{\phi}$ is an MLP which consists of a linear layer with output size equal to the embedding size, followed by batch normalization, rectified linear units (ReLU), another linear layer with output size equal to the embedding size, and a final linear layer with output dimension equal to the number of rotation degrees to classify (here, 4). For BYOL, we use the following configurations. The projection MLP $g_{\varphi}$ consists of a linear layer with output size $2048$ followed by batch normalization (Ioffe & Szegedy, 2015), rectified linear units (ReLU) (Nair & Hinton, 2010), and a final linear layer with output dimension $128$. The predictor $q_{\varphi}$ uses the same architecture as $g_{\varphi}$. The target encoder $F_{\xi}$ uses the same architecture as the embedding model $F_{\theta}$, and the target projection head $g_{\xi}$ uses the same architecture as $g_{\varphi}$. #### Optimization setup. As an optimizer, we adopt SGD with a momentum of $0.9$ and a weight decay of $5e^{-4}$. The initial learning rate is $0.05$ and decayed by a factor of $0.1$ according to a multi-step learning schedule. All models are trained for $90$ epochs on CIFAR-FS with a decay step at epochs $45$, $60$ and $75$, except when BYOL is used (alone or in combination with other tasks). In this setting, we only decay the learning rate at epochs $60$ and $80$ since BYOL usually converges more slowly. On miniImageNet, all models are trained for $100$ epochs with a decay step at epochs $60$ and $80$, regardless of task combinations. For BYOL, the exponential moving average parameter $\tau$ is set to $0.99$. We use a batch size of $128$ for all experiments. #### Data augmentation. During pre-training on the merged meta-training dataset, we adopt random crop, color jittering and random horizontal flip as in Tian et al. (2020a).
For stronger data augmentation in BYOL, we adopt random crop, random color jittering, random grayscale, random horizontal flip and random Gaussian blur similar to Fetterman & Albrecht (2020). Given that each task may require its own set of transformations, we use Kornia (Riba et al., 2020) for efficient GPU-based data augmentation in PyTorch (Paszke et al., 2019). Additional details can be found in Appendix D. #### Early stopping. For training the base learner, we use the same optimizer setup with SGD, and train for a maximum of $300$ steps, stopping early if the loss reaches a small threshold on the support set. Model selection is done on the held-out meta-validation set. ### 4.2 Self-Supervision Alone is not Enough We first evaluate self-supervised representation learning with the two tasks explored in this work, rotation prediction and representation prediction (BYOL). Our results on CIFAR-FS (Table 1) and miniImageNet (Table 2) show that, on its own, the rotation prediction pretext task limits the generality of the learned representations, significantly lagging behind in few-shot accuracy. On the other hand, the representation prediction task from different views (BYOL) shows great promise in representation learning, achieving results that are not far from the supervised baseline. (Note that we did not perform any extensive experiments to search for the optimal hyperparameter and learning schedule configurations for BYOL; thus, the gap between BYOL and the supervised baseline could most likely be closed even further, perhaps just by adopting a longer training schedule.) Note that the performance gap between BYOL and the supervised baseline on miniImageNet is bigger than the one observed on CIFAR-FS. We attribute this to the slow convergence of BYOL. As noted in §4.1, we changed the learning schedule for CIFAR-FS but did not do so for miniImageNet. Thus, we expect the gap to be smaller on miniImageNet with slightly longer training.
Table 1: Evaluating representation learning methods for few-shot recognition on CIFAR-FS. Average 5-way 5-shot classification accuracies on the test set with $95\%$ confidence intervals. We evaluate our method with 2 runs, where in each run the accuracy is the mean accuracy of 250 randomly sampled tasks. Models with † use stronger data augmentation for the supervised objective. Sup. refers to the supervised task, Rot. to the rotation prediction task and BYOL to the representation prediction task in Grill et al. (2020).

Method | Backbone | Accuracy (%)
---|---|---
Sup. | ResNet-18 | $79.54\pm 0.92$
Rot. | ResNet-18 | $49.26\pm 1.01$
BYOL | ResNet-18 | $70.50\pm 0.99$
Sup. + Rot. | ResNet-18 | $79.96\pm 0.93$
Sup. + BYOL | ResNet-18 | $81.41\pm 0.89$
Sup. + BYOL + Rot. | ResNet-18 | $\mathbf{81.68}\pm\mathbf{0.98}$
Sup.† | ResNet-18 | $81.87\pm 0.92$
Sup.† + BYOL | ResNet-18 | $\mathbf{82.44}\pm\mathbf{0.91}$

### 4.3 Combining Supervised and Self-Supervised Tasks On CIFAR-FS, we explore different combinations of tasks for our multi-task framework of Figure 1. In Table 1, our results show that both the rotation prediction task and BYOL improve the supervised baseline, though the biggest boost in performance is observed with BYOL. Additionally, combining all three tasks is even more beneficial. This result is intuitive, as we expect the features from the rotation prediction task to be complementary since there is no rotation transformation in any of the data augmentation strategies. In Table 2, we show that the combination of all three tasks also improves the supervised baseline on miniImageNet. ### 4.4 Data Augmentation: Stronger is Better In order to ensure that the performance improvement from BYOL is not strictly due to the stronger data augmentation strategy used by the task, we conduct experiments using the same data augmentations for the supervised baseline. An additional experiment without data augmentation for the supervised task is presented in Appendix A.
On both CIFAR-FS (Table 1) and miniImageNet (Table 2), we find that stronger data augmentation improves the supervised baseline. This is in line with much of the recent work on strong data augmentation techniques (DeVries & Taylor, 2017; Zhang et al., 2018; Yun et al., 2019; Cubuk et al., 2019, 2020). Effectively, data augmentation is an important regularization technique that has been shown to improve generalization. Furthermore, we show that in this setting the addition of BYOL as a self-supervised auxiliary task still boosts the performance. As mentioned in §3.2, when used in combination with BYOL, both tasks share the same transformed inputs as part of our multi-task framework. Additional experiments where we leverage both augmented views generated to compute both the supervised and BYOL losses can be found in Table 4 (Appendix C).

Table 2: Evaluating representation learning methods for few-shot recognition on miniImageNet. Average 5-way 5-shot classification accuracies on the test set with $95\%$ confidence intervals. We evaluate our method with a single run, where the accuracy is the mean accuracy of 250 randomly sampled tasks. Models with † use stronger data augmentation for the supervised objective. Sup. refers to the supervised task, Rot. to the rotation prediction task and BYOL to the representation prediction task in Grill et al. (2020).

Method | Backbone | Accuracy (%)
---|---|---
Sup. | ResNet-18 | $67.26\pm 0.89$
Rot. | ResNet-18 | $39.39\pm 0.83$
BYOL | ResNet-18 | $51.20\pm 0.97$
Sup. + BYOL + Rot. | ResNet-18 | $\mathbf{68.59}\pm\mathbf{0.92}$
Sup.† | ResNet-18 | $69.42\pm 0.94$
Sup.† + BYOL | ResNet-18 | $\mathbf{71.47}\pm\mathbf{0.90}$

### 4.5 Comparison with Prior Work For comparison with prior work, we also report some results in Table 3 using ResNet-12 as our backbone. Our supervised baseline is similar to the simple baseline of Tian et al.
(2020a), except that we do not use any additional tricks like feature normalization or augmenting the number of support images, and our base learner is not a logistic regression classifier with the L-BFGS optimizer from scikit-learn (Buitinck et al., 2013). Due to resource limitations, we do not employ the additional rotation prediction task in our multi-task framework, though we expect it would slightly improve the reported performance based on our experiments in §4.3. On CIFAR-FS, our multi-task framework outperforms previous works by at least $1.5\%$. Note that our simple supervised baseline, even without the tricks used in Tian et al. (2020a), still performs better than their best distilled model. We attribute this to the higher resolution input images, which we upsampled to $96\times 96$. On miniImageNet however, our simple baseline without their tricks is comparable to the results reported in their ablation study. We show that our framework improves this slightly weaker baseline, and anticipate that additional tricks such as feature normalization would complement our work to further improve the results. Although Gidaris et al. (2019) use a much larger network (WRN-28-10), we still include their results in Table 3 for comparison, as their work uses self-supervised rotation prediction as an auxiliary loss, which is very much related to our work. We expect a larger network to be more effective in learning richer features, and the results to improve with a larger architecture. Table 3: Comparison to prior work on CIFAR-FS and miniImageNet. Average 5-way 5-shot classification accuracies on the test set with $95\%$ confidence intervals. We evaluate our method on a single run, where the accuracy is the mean accuracy of 250 randomly sampled tasks. Models with ∗ use stronger data augmentation for the supervised objective. Sup. refers to the supervised task and BYOL to the representation prediction task in Grill et al. (2020).
Method | Backbone | CIFAR-FS | miniImageNet
---|---|---|---
Prototypical Networks (Snell et al., 2017) | ResNet-12 | $83.5\pm 0.5$ | $78.63\pm 0.48$
TADAM (Oreshkin et al., 2018) | ResNet-12 | $-$ | $76.70\pm 0.30$
MetaOptNet (Lee et al., 2019) | ResNet-12 | $84.3\pm 0.5$ | $78.63\pm 0.48$
RFS-simple (Tian et al., 2020a) | ResNet-12 | $86.0\pm 0.5$ | $79.64\pm 0.44$
RFS-distill (Tian et al., 2020a) | ResNet-12 | $\mathbf{86.9}\pm\mathbf{0.5}$ | $\mathbf{82.14}\pm\mathbf{0.43}$
CC + rot (Gidaris et al., 2019) | WRN-28-10 | $86.1\pm 0.2$ | $79.87\pm 0.33$
Sup. | ResNet-12 | $87.65\pm 0.75$ | $77.62\pm 0.70$
Sup.∗ + BYOL | ResNet-12 | $\mathbf{88.46}\pm\mathbf{0.68}$ | $\mathbf{78.30}\pm\mathbf{0.69}$

### 4.6 Avoiding Collapse with BYOL Although BYOL reports poor performance when instantaneously updating the target network ($\tau=0$), as it destabilizes training, we found that with an additional task to solve, the exponential moving average is not required to avoid representational collapse. On CIFAR-FS, we report an accuracy of $82.19\pm 0.83\%$ for this run, compared to $82.51\pm 0.82\%$ when using $\tau=0.99$ on the same seed. We hypothesize that solving an additional task in parallel with the BYOL loss (e.g. the supervised objective) provides an additional gradient signal that prevents collapse. Similar behaviour has been observed by Schwarzer et al. (2020) when using a reinforcement learning objective in parallel. ## 5 Conclusion Based on the simple baseline of Tian et al. (2020a), we have proposed a multi-task framework with self-supervised auxiliary tasks to improve few-shot image classification. Based on our detailed experiments on CIFAR-FS and miniImageNet, we show that leveraging self-supervision improves transfer learning performance on novel classes.
Our experiments show that the rotation prediction task, when paired with the supervised objective, improves few-shot performance, but that the representation prediction task (BYOL) is most beneficial. This result suggests that BYOL is a strong data-dependent regularizer, enforcing the representations for different views of the same image to be closer together in latent space. Furthermore, we show that these two tasks are mutually beneficial, and can be used in combination to improve few-shot classification performance under our multi-task framework. Finally, the proposed framework can be used efficiently when the transformations of the different tasks are shared to solve each task simultaneously. #### Future work. In this work, we sample images from the annotated set for the self-supervised tasks. On the other hand, these auxiliary tasks do not depend on class labels. Thus, one could easily extend our framework to learn from additional unlabeled data and further improve few-shot performance. This makes our multi-task framework especially appealing in scenarios where labeled data is scarce. Future works could extend the use of our framework to the semi-supervised few-shot learning setting and leverage additional unlabeled data. ## References

* Bachman et al. (2019) Bachman, P., Hjelm, R. D., and Buchwalter, W. Learning representations by maximizing mutual information across views. In _Advances in Neural Information Processing Systems_, pp. 15535–15545, 2019.
* Bengio et al. (1992) Bengio, S., Bengio, Y., Cloutier, J., and Gecsei, J. On the optimization of a synaptic learning rule. In _Optimality in Artificial and Biological Neural Networks_, pp. 6–8, 1992.
* Bertinetto et al. (2018) Bertinetto, L., Henriques, J. F., Torr, P. H., and Vedaldi, A. Meta-learning with differentiable closed-form solvers. _arXiv preprint arXiv:1805.08136_, 2018.
* Buitinck et al.
(2013) Buitinck, L., Louppe, G., Blondel, M., Pedregosa, F., Mueller, A., Grisel, O., Niculae, V., Prettenhofer, P., Gramfort, A., Grobler, J., Layton, R., VanderPlas, J., Joly, A., Holt, B., and Varoquaux, G. API design for machine learning software: experiences from the scikit-learn project. In _ECML PKDD Workshop: Languages for Data Mining and Machine Learning_, pp. 108–122, 2013.
* Chen et al. (2020a) Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. In _Proceedings of the 37th International Conference on Machine Learning_, pp. 1597–1607, 2020a.
* Chen et al. (2020b) Chen, T., Kornblith, S., Swersky, K., Norouzi, M., and Hinton, G. E. Big self-supervised models are strong semi-supervised learners. _Advances in Neural Information Processing Systems_, 33, 2020b.
* Chen et al. (2020c) Chen, X., Fan, H., Girshick, R., and He, K. Improved baselines with momentum contrastive learning. _arXiv preprint arXiv:2003.04297_, 2020c.
* Cubuk et al. (2019) Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q. V. Autoaugment: Learning augmentation strategies from data. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pp. 113–123, 2019.
* Cubuk et al. (2020) Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. V. Randaugment: Practical automated data augmentation with a reduced search space. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops_, pp. 702–703, 2020.
* DeVries & Taylor (2017) DeVries, T. and Taylor, G. W. Improved regularization of convolutional neural networks with cutout. _arXiv preprint arXiv:1708.04552_, 2017.
* Dhillon et al. (2020) Dhillon, G. S., Chaudhari, P., Ravichandran, A., and Soatto, S. A baseline for few-shot image classification. In _Proceedings of the 8th International Conference on Learning Representations_, 2020.
* Doersch & Zisserman (2017) Doersch, C. and Zisserman, A.
Multi-task self-supervised visual learning. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 2051–2060, 2017. * Doersch et al. (2015) Doersch, C., Gupta, A., and Efros, A. A. Unsupervised visual representation learning by context prediction. In _Proceedings of the IEEE international conference on computer vision_ , pp. 1422–1430, 2015. * Fetterman & Albrecht (2020) Fetterman, A. and Albrecht, J. Understanding self-supervised and contrastive learning with ”bootstrap your own latent” (byol), 2020. URL https://www.untitled-ai.com/understanding-self-supervised-contrastive-learning.html. * Finn et al. (2017) Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In _Proceedings of the 34th International Conference on Machine Learning_ , pp. 1126–1135, 2017. * Ghiasi et al. (2018) Ghiasi, G., Lin, T.-Y., and Le, Q. V. Dropblock: A regularization method for convolutional networks. _Advances in neural information processing systems_ , 31:10727–10737, 2018. * Gidaris & Komodakis (2018) Gidaris, S. and Komodakis, N. Dynamic few-shot visual learning without forgetting. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 4367–4375, 2018. * Gidaris et al. (2018) Gidaris, S., Singh, P., and Komodakis, N. Unsupervised representation learning by predicting image rotations. In _Proceedings of the 6th International Conference on Learning Representations_ , 2018. * Gidaris et al. (2019) Gidaris, S., Bursuc, A., Komodakis, N., Pérez, P., and Cord, M. Boosting few-shot visual learning with self-supervision. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 8059–8068, 2019. * Grill et al. (2020) Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al. Bootstrap your own latent-a new approach to self-supervised learning. 
In _Advances in Neural Information Processing Systems_ , 2020. * He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 770–778, 2016. * He et al. (2020) He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 9729–9738, 2020. * Henaff (2020) Henaff, O. Data-efficient image recognition with contrastive predictive coding. In _International Conference on Machine Learning_ , pp. 4182–4192. PMLR, 2020. * Ioffe & Szegedy (2015) Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _Proceedings of the 32nd International Conference on Machine Learning_ , pp. 448–456, 2015. * Larsson et al. (2016) Larsson, G., Maire, M., and Shakhnarovich, G. Learning representations for automatic colorization. In _European conference on computer vision_ , pp. 577–593, 2016\. * Lee et al. (2019) Lee, K., Maji, S., Ravichandran, A., and Soatto, S. Meta-learning with differentiable convex optimization. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 10657–10665, 2019. * Misra & Maaten (2020) Misra, I. and Maaten, L. v. d. Self-supervised learning of pretext-invariant representations. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 6707–6717, 2020. * Nair & Hinton (2010) Nair, V. and Hinton, G. E. Rectified linear units improve restricted boltzmann machines. In _Proceedings of the 27th International Conference on International Conference on Machine Learning_ , pp. 807–814, 2010. * Noroozi & Favaro (2016) Noroozi, M. and Favaro, P. Unsupervised learning of visual representations by solving jigsaw puzzles. In _European Conference on Computer Vision_ , pp. 
69–84, 2016. * Oord et al. (2018) Oord, A. v. d., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_ , 2018. * Oreshkin et al. (2018) Oreshkin, B., López, P. R., and Lacoste, A. Tadam: Task dependent adaptive metric for improved few-shot learning. In _Advances in Neural Information Processing Systems_ , pp. 721–731, 2018. * Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems 32_ , pp. 8024–8035, 2019. * Qi et al. (2018) Qi, H., Brown, M., and Lowe, D. G. Low-shot learning with imprinted weights. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 5822–5830, 2018. * Raghu et al. (2020) Raghu, A., Raghu, M., Bengio, S., and Vinyals, O. Rapid learning or feature reuse? towards understanding the effectiveness of maml. In _Proceedings of the 8th International Conference on Learning Representations_ , 2020. * Ravi & Larochelle (2017) Ravi, S. and Larochelle, H. Optimization as a model for few-shot learning. In _Proceedings of the 5th International Conference on Learning Representations_ , 2017. * Riba et al. (2020) Riba, E., Mishkin, D., Ponsa, D., Rublee, E., and Bradski, G. Kornia: an open source differentiable computer vision library for pytorch. In _The IEEE Winter Conference on Applications of Computer Vision_ , pp. 3674–3683, 2020. * Rusu et al. (2019) Rusu, A. A., Rao, D., Sygnowski, J., Vinyals, O., Pascanu, R., Osindero, S., and Hadsell, R. Meta-learning with latent embedding optimization. In _Proceedings of the 7th International Conference on Learning Representations_ , 2019. * Schmidhuber (1987) Schmidhuber, J. 
Evolutionary principles in self-referential learning. on learning now to learn: The meta-meta-meta…-hook. Diploma thesis, Technische Universitat Munchen, Germany, 14 May 1987. * Schwarzer et al. (2020) Schwarzer, M., Anand, A., Goel, R., Hjelm, R. D., Courville, A., and Bachman, P. Data-efficient reinforcement learning with self-predictive representations. _arXiv preprint arXiv:2007.05929_ , 2020. * Snell et al. (2017) Snell, J., Swersky, K., and Zemel, R. Prototypical networks for few-shot learning. In _Advances in neural information processing systems_ , pp. 4077–4087, 2017. * Srivastava et al. (2014) Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. _Journal of Machine Learning Research_ , 15(56):1929–1958, 2014. * Su et al. (2020) Su, J.-C., Maji, S., and Hariharan, B. When does self-supervision improve few-shot learning? In _European Conference on Computer Vision_ , pp. 645–666. Springer, 2020. * Sung et al. (2018) Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P. H., and Hospedales, T. M. Learning to compare: Relation network for few-shot learning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 1199–1208, 2018. * Thrun & Pratt (1998) Thrun, S. and Pratt, L. _Learning to Learn_. Springer Science & Business Media, 1998. * Tian et al. (2019) Tian, Y., Krishnan, D., and Isola, P. Contrastive multiview coding. _arXiv preprint arXiv:1906.05849_ , 2019. * Tian et al. (2020a) Tian, Y., Wang, Y., Krishnan, D., Tenenbaum, J. B., and Isola, P. Rethinking few-shot image classification: a good embedding is all you need? _arXiv preprint arXiv:2003.11539_ , 2020a. * Tian et al. (2020b) Tian, Y., Yu, L., Chen, X., and Ganguli, S. Understanding self-supervised learning with dual deep networks. _arXiv preprint arXiv:2010.00578_ , 2020b. * van der Maaten & Hinton (2008) van der Maaten, L. and Hinton, G. Visualizing data using t-sne. 
_Journal of Machine Learning Research_ , 9(86):2579–2605, 2008. * Vinyals et al. (2016) Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al. Matching networks for one shot learning. In _Advances in neural information processing systems_ , pp. 3630–3638, 2016. * Wu et al. (2018) Wu, Z., Xiong, Y., Yu, S. X., and Lin, D. Unsupervised feature learning via non-parametric instance discrimination. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 3733–3742, 2018. * Yamaguchi et al. (2019) Yamaguchi, S., Kanai, S., Shioda, T., and Takeda, S. Multiple pretext-task for self-supervised learning via mixing multiple image transformations. _arXiv preprint arXiv:1912.11603_ , 2019. * Yun et al. (2019) Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., and Yoo, Y. Cutmix: Regularization strategy to train strong classifiers with localizable features. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 6023–6032, 2019. * Zhang et al. (2018) Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. In _Proceedings of the 6th International Conference on Learning Representations_ , 2018. * Zhang et al. (2016) Zhang, R., Isola, P., and Efros, A. A. Colorful image colorization. In _European conference on computer vision_ , pp. 649–666, 2016\. ## Appendix ## Appendix A Regularizing for Stability During our initial experiments, we tried combining the supervised task without any data augmentation with the BYOL auxiliary task. The goal of this experiment was to find out if BYOL was sufficient to enforce invariance to the transformations used as part of its data augmentation strategy, without directly applying any transformations to the inputs of the supervised task. As shown in Figure 2 the training was highly unstable, with the two tasks competing against each other. 
This resulted in a performance worse than the supervised baseline, achieving a few-shot accuracy of $78.98\pm 0.92\%$ with a ResNet-18 backbone. We believe additional regularization (e.g., Dropout (Srivastava et al., 2014)) applied to the backbone could have mitigated this effect, but this is still an interesting finding. Figure 2: Learning curves of our multi-task framework, using a supervised objective (no data augmentation) and BYOL auxiliary task. Losses of each task during the training of the BYOL task and supervised task without any data augmentation over 90 epochs. Training seems to stabilize after epoch 60, where the learning rate was decayed. ## Appendix B Qualitative Analysis We are interested in learning a linearly separable representation of the input so that a simple linear classifier can be trained on unseen classes and discriminate between them. Hence, we can qualitatively evaluate the learned representations by visualizing the embedding space. As illustrated by the t-SNE (van der Maaten & Hinton, 2008) visualization in Figure 3, multiple different clusters are easily identifiable, even though some points are still intertwined with points belonging to other classes. The fact that such clusters exist on unseen classes is quite interesting, and reinforces the statement that a good representation of the data is extremely important in few-shot learning (Raghu et al., 2020; Tian et al., 2020a). Figure 3: t-SNE visualization of the embedding space for a sampled 5-way task (unseen) on CIFAR-FS. The embeddings result from a model trained with the supervised and BYOL tasks. ## Appendix C Additional Supervised + BYOL Experiment In Table 4, we show the difference in performance when leveraging all (2) augmented views generated for the BYOL task with the supervised objective as well. Effectively, the supervised loss is computed on 2 augmented views for each image in the batch instead of only the first augmented view.
Results were inconclusive. Table 4: Difference in performance when leveraging all augmented views for the supervised objective when used in combination with BYOL. Average 5-way 5-shot classification accuracies on the test set with $95\%$ confidence intervals. The backbone used for these experiments is ResNet-18.

Dataset | Sup. per image | Accuracy (%)
---|---|---
CIFAR-FS | 1 | $\mathbf{82.43}\pm\mathbf{0.92}$
CIFAR-FS | 2 | $82.33\pm 0.89$
miniImageNet | 1 | $71.47\pm 0.90$
miniImageNet | 2 | $\mathbf{71.95}\pm\mathbf{0.78}$

## Appendix D Image Augmentations

The image augmentation parameters for the different settings are listed in Listing D. Default augmentations are the same as (Tian et al., 2020a), and hard augmentations are for the BYOL task (and the supervised task in some experiments).

Listing D: Data augmentation code with parameters.

```python
import random

import kornia.augmentation as K
import kornia.filters as F


class RandomApply(object):
    def __init__(self, fn, p):
        self.fn = fn
        self.p = p

    def __call__(self, x):
        if random.random() > self.p:
            return x
        return self.fn(x)


size = (96, 96) if CIFAR_FS else (84, 84)
pad = 4 if CIFAR_FS else 8

default_tfm = [
    K.RandomCrop(size, padding=pad),
    K.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, p=1.0),
    K.RandomHorizontalFlip(p=0.5),
]

hard_tfm = [
    K.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1, p=0.8),
    K.RandomGrayscale(p=0.2),
    K.RandomHorizontalFlip(p=0.5),
    RandomApply(F.GaussianBlur2d(kernel_size=(3, 3), sigma=(1.5, 1.5)), p=0.1),
    K.RandomResizedCrop(size, scale=(0.35, 1.0), p=1.0),
]
```
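The `RandomApply` wrapper in Listing D simply gates a transform behind a coin flip: with probability `p` the wrapped function is applied, otherwise the input passes through unchanged. A minimal stdlib-only sketch of this behavior; the `double` transform is a hypothetical stand-in for the kornia Gaussian blur and is purely illustrative:

```python
import random


class RandomApply(object):
    """Apply `fn` to the input with probability `p`, as in Listing D."""

    def __init__(self, fn, p):
        self.fn = fn
        self.p = p

    def __call__(self, x):
        if random.random() > self.p:
            return x          # skip the transform
        return self.fn(x)     # apply it


# Stand-in transform (illustrative only; the real pipeline wraps
# kornia's GaussianBlur2d here).
double = lambda x: 2 * x

always = RandomApply(double, p=1.0)  # random() < 1.0, so fn always fires
assert always(3) == 6

random.seed(0)                       # first draw ~0.844 > 0.0, so fn never fires
never = RandomApply(double, p=0.0)
assert never(3) == 3
```

Because `random.random()` is drawn per call, each image in a batch independently decides whether the blur is applied, which matches the `p=0.1` setting used for `hard_tfm`.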
# Uniform Sobolev estimates in $\mathbb{R}^{n}$ involving singular potentials Xiaoqi Huang Department of Mathematics, Johns Hopkins University, Baltimore, MD 21218<EMAIL_ADDRESS>and Christopher D. Sogge Department of Mathematics, Johns Hopkins University, Baltimore, MD 21218<EMAIL_ADDRESS> ###### Abstract. We generalize the Stein-Tomas [22] $L^{2}$-restriction theorem and the uniform Sobolev estimates of Kenig, Ruiz and the second author [13] by allowing critically singular potentials. We also obtain Strichartz estimates for Schrödinger and wave operators with such potentials. Due to the fact that there may be nontrivial eigenfunctions, we are required to make certain spectral assumptions, such as assuming that the solutions only involve sufficiently large frequencies. ###### Key words and phrases: Schrödinger equation, uniform Sobolev estimates, quasimodes ###### 2010 Mathematics Subject Classification: 58J50, 35P15 The authors were supported in part by the NSF (NSF Grant DMS-1953413). ## 1\. Introduction and main results The main purpose of this paper is to extend the uniform Sobolev inequalities in $\mathbb{R}^{n}$ of Kenig, Ruiz and the second author [13], as well as the $L^{2}$-restriction theorem of Stein and Tomas [22], to include Schrödinger operators, (1.1) $H_{V}=-\Delta+V(x)$ with critically singular potentials $V$, which are assumed to be real-valued and (1.2) $V\in L^{n/2}(\mathbb{R}^{n}).$ In a recent work of the authors with Blair and Sire [1], the same problem was considered for compact Riemannian manifolds, which generalizes results in [6], [4] and [17]. Also, in an earlier work of Blair, Sire and the second author [2], quasimode and related spectral projection estimates were discussed in ${\mathbb{R}}^{n}$ under the additional assumption that $V\in\mathcal{K}$, the Kato class. The spaces $L^{n/2}$ and ${\mathcal{K}}$ have the same scaling properties, and both obey the scaling law of the Laplacian, which accounts for their criticality.
It is natural to assume that $V\in{\mathcal{K}}$ when dealing with large exponents, and, as was shown in [2], this assumption is needed to obtain optimal sup-norm estimates for quasimodes. However, motivated by the results in [1], for the smaller exponents arising in uniform Sobolev inequalities, it is natural to merely assume that $V\in L^{n/2}$, which we shall discuss below in Theorem 1.1. As was shown in the appendix of [1], if $V\in L^{n/2}$ then $H_{V}$ is essentially self-adjoint and bounded from below. As a result, we shall assume throughout that (1.3) $\text{Spec }H_{V}\subset[-N_{0},+\infty),\,\,\text{i.e.},\,\,H_{V}+N_{0}\geq 0,$ for some fixed positive number $N_{0}$ which depends on $V$. The uniform Sobolev estimates and quasimode estimates that we can obtain are the following. ###### Theorem 1.1. Let $n\geq 3$ and suppose that (1.4) $\min\bigl{(}q,\,p(q)^{\prime}\bigr{)}>\tfrac{2n}{n-1},\quad\text{and }\,\tfrac{1}{p(q)}-\tfrac{1}{q}=\tfrac{2}{n}.$ Then if $V\in L^{n/2}(\mathbb{R}^{n})$ is real-valued and $\Lambda$, $\delta>0$ are fixed constants with $\Lambda=\Lambda(q,n,V)$ sufficiently large, we have the uniform bounds (1.5) $\|\bigl{(}H_{V}-\zeta\bigr{)}^{-1}u\|_{q}\lesssim\bigl{\|}u\bigr{\|}_{p(q)},\quad\text{if }\,\,\,\zeta\in\Omega_{\delta},$ where (1.6) $\Omega_{\delta}=\\{\zeta\in{\mathbb{C}}\,\setminus\,[-N_{0},+\infty):\,\,\mathrm{dist}(\zeta,[-N_{0},+\infty))\geq\delta\,\,\text{if }\,\,\mathrm{Re}\,\zeta<\Lambda^{2}\\}.$ Also, suppose that (1.7) $\tfrac{2(n+1)}{n-1}\leq q\leq\tfrac{2n}{n-4},\,\,\,\text{if}\,\,\,n\geq 5,\,\,\,\text{or}\,\,\,\tfrac{2(n+1)}{n-1}\leq q<\infty,\,\,\,\text{if}\,\,\,n=3,4.$ Then if $u\in\text{Dom}(H_{V})$, for any $0<\varepsilon<\lambda/2$, we have (1.8) $\|u\|_{q}\lesssim\lambda^{n(1/2-1/q)-3/2}\varepsilon^{-1/2}\|(H_{V}-\lambda^{2}+i\lambda\varepsilon)u\|_{2},\,\,\,\text{if }\,\lambda\geq\Lambda.$ Here, $\text{Dom}(H_{V})$ denotes the domain of $H_{V}$ and $N_{0}$ is as in (1.3).
Also, $r^{\prime}$ denotes the conjugate exponent for $r$, i.e., the one satisfying $1/r+1/r^{\prime}=1$. Additionally, we are using the notation that $A\lesssim B$ means that $A$ is bounded from above by a constant times $B$. The implicit constant might depend on the parameters involved, such as $n$, $q$ and $V$, but not on $\zeta$, $\lambda$ or $\varepsilon$ in (1.5) and (1.8). The condition (1.4) on the range of exponents was shown to be sharp in [13]. The gap condition $\tfrac{1}{p(q)}-\tfrac{1}{q}=\tfrac{2}{n}$ in (1.4) follows from scaling considerations, while the necessity of the first part of (1.4) is related to the fact that the Fourier transform of surface measure on the sphere in ${\mathbb{R}}^{n}$ is not in $L^{q}({\mathbb{R}}^{n})$ if $q\leq\tfrac{2n}{n-1}$. The condition (1.7) on the exponents is also sharp, since it agrees with the conditions in the standard quasimode estimates when $V\equiv 0$, except for the case $n=3,\,q=\infty$. In that case, (1.8) may not be valid if we only assume $V\in L^{n/2}({\mathbb{R}}^{n})$, due to the possible existence of unbounded eigenfunctions for the operator $H_{V}$. See, e.g., [2] for more details. (1.5)–(1.6) imply that we have uniform $L^{p(q)}\to L^{q}$ operator bounds for $(H_{V}-\zeta)^{-1}$ if $\text{Re }\zeta$ is large and $\text{Im }\zeta\neq 0$, which is a natural analog of the uniform Sobolev estimates of Kenig, Ruiz and the second author [13], and, in the special case where $V\equiv 0$, is equivalent to the results in [13]. Inequalities of this type, as well as weighted $L^{2}$-estimates for $(H_{V}-\zeta)^{-1}$, have been extensively studied for different types of potentials; see, e.g., [10], [16], [3], [15]. In particular, in [15], it is proved by Mizutani that (1.5)–(1.6) hold for $V\in L^{n,\infty}_{0}({\mathbb{R}}^{n})$, where $L^{n,\infty}_{0}$ denotes the completion of $C_{0}^{\infty}$ functions under the $L^{n,\infty}$ norm.
Although $L^{n/2}\hookrightarrow L^{n,\infty}_{0}$, the proof of (1.6) there is based on a different method, and we shall discuss at the end of Section 2 how we can modify the proof there to further weaken the conditions on $V$. By results in [20], the bounds in (1.8) are equivalent to the following spectral projection bounds (1.9) $\|\chi^{V}_{[\lambda,\lambda+\varepsilon]}\|_{L^{2}({\mathbb{R}}^{n})\to L^{q}({\mathbb{R}}^{n})}\lesssim\varepsilon^{1/2}\lambda^{n(1/2-1/q)-1/2},\quad\forall\,\,0<\varepsilon<1,\,\,\lambda\geq\Lambda,$ for some $\Lambda$ large enough and $q$ as in (1.7), if $\chi^{V}_{[\lambda,\lambda+\varepsilon]}$ denotes the spectral projection operator which projects onto the part of the spectrum of $H_{V}$ in the corresponding shrinking intervals $[\lambda^{2},(\lambda+\varepsilon)^{2}]$. If in addition to (1.2) we assume that $V$ is in the Kato class, then we also have (1.9), as in the $V\equiv 0$ case, for all $q\geq\tfrac{2(n+1)}{n-1}$, which is equivalent to the Stein-Tomas restriction theorem for the sphere (see, e.g., Chapter 5 in [19]). In [2], analogs of (1.9) were obtained for all $\lambda\geq 0$ under the assumption that the $L^{n/2}$ norm of $V$ is small. In that case, using a simple argument involving Sobolev estimates, it is not hard to show that $H_{V}=-\Delta+V$ itself defines a positive self-adjoint operator which cannot have negative spectrum. We generalize the results in [2] by removing the smallness condition at the cost of ignoring the lower part of the spectrum. By letting $\varepsilon\rightarrow 0$, inequalities like (1.9) certainly imply the absence of embedded eigenvalues in the corresponding region of the spectrum. As far as the problem of the absence of eigenvalues is concerned, the $L^{n/2}$ norm here is critical, since in [14], Koch and Tataru showed that if $q<n/2$ there are examples of compactly supported potentials $V\in L^{q}$ and smooth compactly supported functions $u$ such that $H_{V}u=0$.
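The criticality of the $L^{n/2}$ norm just mentioned can be seen from the standard scaling computation (a short verification, stated here for the reader's convenience): if $u_{\lambda}(x)=u(\lambda x)$ and $V_{\lambda}(x)=\lambda^{2}V(\lambda x)$, then

```latex
(-\Delta + V_{\lambda})u_{\lambda}(x) = \lambda^{2}\,\bigl((-\Delta+V)u\bigr)(\lambda x),
\qquad
\|V_{\lambda}\|_{L^{n/2}(\mathbb{R}^{n})}^{n/2}
  = \int_{\mathbb{R}^{n}} \lambda^{n}\,|V(\lambda x)|^{n/2}\,dx
  = \|V\|_{L^{n/2}(\mathbb{R}^{n})}^{n/2},
```

so the $L^{n/2}$ norm is invariant under the rescaling that preserves the equation $H_{V}u=0$, which accounts for its critical role here.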
In the opposite direction, Ionescu and Jerison [9] proved that $H_{V}$ does not admit positive eigenvalues for $V\in L^{n/2}_{loc}$ with certain decay conditions as $|x|\rightarrow\infty$. Since $H_{V}$ does not admit large eigenvalues, if $E^{\prime}(\lambda)$ denotes the density of the spectral measure associated to the operator $H_{V}$, with $\int E^{\prime}(\lambda)d\lambda$ being the resolution of identity, by Stone’s formula, $E^{\prime}(\lambda)=\frac{1}{2\pi i}\lim_{\varepsilon\rightarrow 0}\big{(}(H_{V}-\lambda-i\varepsilon)^{-1}-(H_{V}-\lambda+i\varepsilon)^{-1}\big{)},\,\,\,\text{if}\,\,\,\lambda>\Lambda^{2},$ which is equivalent to $E^{\prime}(\lambda)=\frac{1}{\pi}\lim_{\varepsilon\rightarrow 0}\,\varepsilon\cdot(H_{V}-\lambda-i\varepsilon)^{-1}(H_{V}-\lambda+i\varepsilon)^{-1}.$ Thus, for $\lambda,\,\varepsilon$ defined as above, by applying (1.8) with $\lambda^{\prime}=\lambda^{1/2}$, $\varepsilon^{\prime}=\varepsilon/\lambda^{1/2}$ and duality, we have the following restriction type estimates. ###### Corollary 1.2. Suppose that $q$ satisfies (1.7), then we have (1.10) $\|E^{\prime}(\lambda)f\|_{L^{q}({\mathbb{R}}^{n})}\leq C\lambda^{\frac{n}{2}(\frac{1}{q^{\prime}}-\frac{1}{q})-1}\|f\|_{L^{q^{\prime}}({\mathbb{R}}^{n})}\,\,\,\text{if}\,\,\,\lambda>\Lambda^{2}.$ Among other things, if $\chi^{V}_{(-\infty,\Lambda]}$ denotes the spectral projection onto the interval $(-\infty,\Lambda^{2})$ for $H_{V}$, for $q$ satisfying (1.7), the quasimode estimates (1.8) also imply (1.11) $\|\chi^{V}_{(-\infty,\Lambda]}\|_{L^{2}({\mathbb{R}}^{n})\to L^{q}({\mathbb{R}}^{n})}\lesssim 1,$ since $H_{V}$ is bounded from below. Using (1.9) and (1.11), it is straightforward to adapt the proof of Theorem 8.1 and Theorem 9.5 in [2] to obtain the following ###### Theorem 1.3. Let $n\geq 3$ and fix $V\in L^{n/2}({\mathbb{R}}^{n})$.
Let $u$ be the solution of (1.12) $\begin{cases}\bigl{(}\partial_{t}^{2}-\Delta+V(x)\bigr{)}u=0\\\ u|_{t=0}=f_{0},\quad\partial_{t}u|_{t=0}=f_{1}.\end{cases}$ Then for $p_{c}=\frac{2(n+1)}{n-1}$, we have (1.13) $\|u\|_{L^{p_{c}}([0,1]\times{\mathbb{R}}^{n})}\leq C_{V}\bigl{(}\|(i+H_{V})^{1/4}f_{0}\|_{L^{2}({\mathbb{R}}^{n})}+\|(i+H_{V})^{-1/4}f_{1}\|_{L^{2}({\mathbb{R}}^{n})}\bigr{)}.$ Additionally, suppose that $\chi\in C^{\infty}({\mathbb{R}})$ is a smooth function satisfying (1.14) $\chi(\lambda)=1\,\,\,\text{for}\,\,\,\lambda\geq 1\,\,\,\text{and}\,\,\,\chi(\lambda)=0\,\,\,\text{for}\,\,\,\lambda\leq 1/2.$ Then (1.15) $\|\chi(H_{V}/M)u\|_{L^{p_{c}}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq C_{V}\bigl{(}\|(i+H_{V})^{1/4}f_{0}\|_{L^{2}({\mathbb{R}}^{n})}+\|(i+H_{V})^{-1/4}f_{1}\|_{L^{2}({\mathbb{R}}^{n})}\bigr{)},$ assuming that $M$ is a large enough constant which depends on $V$. Here the operator $\chi(H_{V}/M)$ denotes a smooth spectral projection onto the interval $(M/2,+\infty)$ for $H_{V}$. More specifically, (1.16) $\chi(H_{V}/M)f=\int\chi(\lambda/M)E^{\prime}(\lambda)fd\lambda.$ Similarly, as (1.8) and (1.15) suggest, by projecting onto the subspace corresponding to the large part of the spectrum, we have the following global Strichartz estimates for Schrödinger equations. ###### Theorem 1.4. Let the potential $V\in L^{n/2}(\mathbb{R}^{n})$ be real-valued, and $\chi\in C^{\infty}({\mathbb{R}})$ be as in (1.14). Then for each pair of exponents $(p,q)$ satisfying (1.17) $n(1/2-1/q)=2/p,\,\,2\leq p\leq\infty\,\,\text{and}\,\,\,n\geq 3,$ we have (1.18) $\bigl{\|}\chi(H_{V}/M)e^{-itH_{V}}f\bigr{\|}_{L^{p}_{t}L^{q}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\|f\|_{L^{2}(\mathbb{R}^{n})},$ where $M$ is a large constant which depends on $V$.
Additionally, if $\,V\in L^{n/2}(\mathbb{R}^{n})+L^{\infty}(\mathbb{R}^{n})$ is real-valued, then for $(p,q)$ satisfying (1.17), we have (1.19) $\bigl{\|}e^{-itH_{V}}f\bigr{\|}_{L^{p}_{t}L^{q}_{x}([0,1]\times{\mathbb{R}}^{n})}\lesssim\|f\|_{L^{2}(\mathbb{R}^{n})}.$ If $V\equiv 0$, it is well-known that the Strichartz estimates (1.18) are a consequence of the following dispersive estimate (1.20) $\bigl{\|}e^{it\Delta}f\bigr{\|}_{L^{\infty}({\mathbb{R}}^{n})}\lesssim t^{-n/2}\|f\|_{L^{1}(\mathbb{R}^{n})},$ with the endpoint case of the above estimates, corresponding to $p=2$, $q=\frac{2n}{n-2}$, $n\geq 3$, obtained by Keel and Tao in [12]. However, the natural $L^{1}\rightarrow L^{\infty}$ dispersive estimates for the operator $e^{-itH_{V}}$, as well as the Strichartz estimates (1.18), may break down due to the possible existence of bound states. If $V\in L^{n/2}({\mathbb{R}}^{n})$, it was proved by Goldberg in [7] that the global Strichartz estimates (1.18) hold with $f$ projected onto the continuous part of the spectrum, under the assumption that $0$ is neither an eigenvalue nor a resonance. Under the same condition, Mizutani [15] also proved similar inhomogeneous Strichartz estimates when the pairs $(p,q)$ are outside the admissible range in (1.17). The inequalities in (1.18) show that, even if $0$ is an eigenvalue or a resonance, the same estimates still hold as long as $f$ is projected onto the higher part of the spectrum. See also [11] for related high energy estimates for a different class of potentials $V$. We are grateful to Mizutani [15] for pointing out his recent work after a preliminary version of this paper was completed. ## 2\. Uniform Sobolev inequalities and quasimode estimates in $\mathbb{R}^{n}$ In this section we shall prove Theorem 1.1.
As in [1], the main idea is to use the second resolvent formula (2.1) $(-\Delta+V-\zeta)^{-1}-(-\Delta-\zeta)^{-1}=-(-\Delta-\zeta)^{-1}\,V(\,-\Delta+V-\zeta)^{-1},\quad\mathrm{Im}\,\zeta\neq 0,$ along with quasimode estimates and uniform Sobolev estimates for the unperturbed operator $H_{0}=-\Delta$ from [13] and [2]. Specifically, we shall require that for $n\geq 3$ (2.2) $\|(-\Delta-\zeta)^{-1}f\|_{L^{q}({\mathbb{R}}^{n})}\leq C\|f\|_{L^{p}({\mathbb{R}}^{n})},\,\,\,\forall\,\zeta\in\mathbb{C}\setminus[0,+\infty),$ for pairs of exponents $(p,q)$ satisfying (1.4), which is due to Kenig, Ruiz, and the second author [13]. We also need the quasimode estimates (2.3) $\|u\|_{L^{p_{c}}({\mathbb{R}}^{n})}\leq C\lambda^{-1+1/p_{c}}\varepsilon^{-1/2}\|(-\Delta-\lambda^{2}+i\varepsilon\lambda)u\|_{L^{2}({\mathbb{R}}^{n})},\,\,\,\forall\,\,\,0<\varepsilon<\lambda/2,$ where $p_{c}=\tfrac{2(n+1)}{n-1}$. By a change of scale argument, it is not hard to check that (2.3) is an equivalent version of the Stein-Tomas restriction theorem for ${\mathbb{R}}^{n}$ (see, e.g., [2], Proposition 9.3). Actually, similar estimates also hold in the case $\varepsilon\geq\lambda/2$, but we skip these here since they are less useful. To prove Theorem 1.1, a key ingredient is the following theorem. ###### Theorem 2.1. Let $n\geq 3$ and $\tfrac{2n}{n-1}<q<\tfrac{2n}{n-3}$, then if $\,V\in L^{n/2}(\mathbb{R}^{n})$ is fixed, we have (2.4) $\|(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}\,(Vf)\|_{q}\leq\tfrac{1}{2}\|f\|_{q},\,\,\,\forall\,\,\lambda\geq\Lambda,\,\,\,\varepsilon>0,$ assuming that $\Lambda=\Lambda(q,n,V)\geq 1$ is sufficiently large. (2.4) essentially follows from Lemma 3.3 in [15], where the author gave a short proof of this lemma using uniform Sobolev estimates for the free resolvent operator when $\frac{2}{n+2}\leq\frac{1}{p}-\frac{1}{q}\leq\frac{2}{n}$.
The proof of (2.4) that we shall give is more complicated since it requires a decomposition of the free resolvent operator into dyadically localized operators. However, it yields results with less restrictive conditions on $V$. See the discussion at the end of this section for more details. We shall postpone the proof of Theorem 2.1 to the end of this section and first see how we can apply the above theorem to obtain (1.5) and (1.8). Even though the proof mostly follows from the arguments in [1], we include it here for the sake of completeness. To prove (1.5), if $\text{Re}\,\zeta\geq\Lambda^{2}$, it is equivalent to showing that $\bigl{\|}(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}\bigr{\|}_{L^{p}\to L^{q}}\lesssim 1,\quad\text{if }\,\,\lambda\geq\Lambda,$ with $\Lambda$ sufficiently large and $(p,q)$ as in (1.4). By duality, it suffices to prove this inequality when (2.5) $\tfrac{2n}{n-1}<q\leq\tfrac{2n}{n-2}.$ Thus, our task is to show that (2.6) $\bigl{\|}(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{\|}_{L^{q}({\mathbb{R}}^{n})}\leq C\|f\|_{L^{p}({\mathbb{R}}^{n})}\quad\text{if }\,\,\lambda\geq\Lambda,$ with $(p,q)$ satisfying (1.4) and (2.5). We are also assuming that (2.2) and (2.3) are valid for this pair of exponents. We are assuming (2.5) since by Sobolev estimates (see, e.g., (6.5) in the appendix of [1]), we have $u\in L^{q}({\mathbb{R}}^{n}),\quad 2\leq q\leq\tfrac{2n}{n-2}\,\,\,\text{if }\,\,\,(H_{V}-\lambda^{2}+i\varepsilon\lambda)u\in L^{2}.$ Thus, for $q$ as in (2.5), (2.7) $\bigl{\|}(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{\|}_{L^{q}({\mathbb{R}}^{n})}<\infty\quad\text{if }\,\,f\in L^{2}({\mathbb{R}}^{n}).$ In proving (2.6), since $L^{2}$ is dense in $L^{p}$ we may and shall assume that $f\in L^{2}({\mathbb{R}}^{n})$ to be able to use (2.7) to justify the bootstrapping argument that follows.
By using the second resolvent formula (2.1), write (2.8) $\displaystyle(H_{V}-\lambda^{2}+i$ $\displaystyle\varepsilon\lambda)^{-1}f$ $\displaystyle=$ $\displaystyle(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}f$ $\displaystyle-(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}\bigl{(}V\cdot(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{)}$ $\displaystyle=I-II.$ If we apply the uniform Sobolev estimates (2.2) for the unperturbed operator, we have (2.9) $\|I\|_{q}\leq C\|f\|_{p},$ while by using (2.4) in Theorem 2.1, (2.10) $\|II\|_{q}\leq 1/2\bigl{\|}(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{\|}_{L^{q}},\,\,\,\text{if}\,\,\,\lambda\geq\Lambda.$ Thus, if we combine (2.8), (2.9) and (2.10), we conclude that for $\lambda\geq\Lambda$ we have $\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{L^{q}({\mathbb{R}}^{n})}\leq C\|f\|_{L^{p}({\mathbb{R}}^{n})}+\frac{1}{2}\,\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{\|}_{L^{q}({\mathbb{R}}^{n})}.$ By (2.7), this leads to (2.6) since we are assuming, as we may, that $f\in L^{2}({\mathbb{R}}^{n})$. Now we shall give the proof of quasimode estimates (1.8), which are needed later in the proof of (1.5) for $\text{Re}\,\zeta<\Lambda^{2}$. First, note that by the quasimode estimates (2.3) for the unperturbed operator, (2.11) $\|I\|_{p_{c}}\leq C\lambda^{-1+1/p_{c}}\varepsilon^{-1/2}\|f\|_{2},\,\,\text{if}\,\,0<\varepsilon<\lambda/2.$ If we combine (2.8), (2.10) and (2.11), we conclude that for $\lambda\geq\Lambda$ we have $\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{L^{p_{c}}({\mathbb{R}}^{n})}\leq C\lambda^{-1+1/p_{c}}\varepsilon^{-1/2}\|f\|_{L^{2}({\mathbb{R}}^{n})}+\frac{1}{2}\,\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{\|}_{L^{p_{c}}({\mathbb{R}}^{n})}.$ By (2.7), this leads to (1.8) for $q=p_{c}=\tfrac{2(n+1)}{n-1}$, since in this case $-1+1/p_{c}=n(1/2-1/p_{c})-3/2$. 
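The bootstrapping step used twice above is the standard absorbing argument; spelled out (with notation introduced only for this remark), writing $X=\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{L^{q}}$, which is finite by (2.7) since $f\in L^{2}$, both displays take the form

```latex
X \le C\,A + \tfrac{1}{2}X
\;\Longrightarrow\;
\tfrac{1}{2}X \le C\,A
\;\Longrightarrow\;
X \le 2C\,A,
```

where $A=\|f\|_{L^{p}}$ in the first case and $A=\lambda^{-1+1/p_{c}}\varepsilon^{-1/2}\|f\|_{L^{2}}$ in the second; the a priori finiteness of $X$ is exactly what justifies subtracting $\tfrac{1}{2}X$ from both sides.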
The remaining estimates in (1.8) for exponents $q>\frac{2(n+1)}{n-1}$ as in (1.7) are now just a consequence of the following theorem. ###### Theorem 2.2. Let $n\geq 5$, and assume that (1.8) holds for some $\tfrac{2(n+1)}{n-1}\leq r<\tfrac{2n}{n-4}$ and $\lambda\geq\Lambda(r,V)$, with $0<\varepsilon<\lambda/2$. Then if $V\in L^{n/2}({\mathbb{R}}^{n})$, we have for $u\in\text{Dom}(H_{V})$ (2.12) $\|u\|_{q}\leq C\,\lambda^{n(1/2-1/q)-3/2}\,\varepsilon^{-1/2}\bigl{\|}(-\Delta+V-\lambda^{2}+i\varepsilon\lambda)u\bigr{\|}_{2},\,\,\text{if }\,\lambda\geq\Lambda,\,\,\,\,r<q\leq\tfrac{2n}{n-4}.$ Similarly, for $n=3$ or $n=4$, if (1.8) holds for some $\tfrac{2(n+1)}{n-1}\leq r<\infty$, with $0<\varepsilon<\lambda/2$, then we have (2.13) $\|u\|_{q}\leq C\,\lambda^{n(1/2-1/q)-3/2}\,\varepsilon^{-1/2}\bigl{\|}(-\Delta+V-\lambda^{2}+i\varepsilon\lambda)u\bigr{\|}_{2},\\\ \,\,\,\text{if }\,\,\lambda\geq\Lambda,\,\,\,r<q<\infty,$ assuming that $\Lambda=\Lambda(q,n,V)$ in (2.12) and (2.13) is sufficiently large. Theorem 2.2 is essentially the analog of Theorem 2.3 in [1], which says that we can use the quasimode estimates for smaller exponents $r$ to obtain quasimode estimates for larger exponents $q$, up to the optimal range. Here, compared with Theorem 2.3 in [1], by requiring that $\lambda\geq\Lambda(q,n,V)$ we do not have to assume that $\varepsilon$ has a lower bound depending on $\lambda$, as was the case for the uniform Sobolev inequalities (1.5). As in [1], the proof of Theorem 2.2 requires the following lemma. ###### Lemma 2.3. 
Let $V\in L^{n/2}({\mathbb{R}}^{n})$ be real valued. Then there exists a constant $N_{0}>1$ large enough such that for $u\in\text{Dom}(H_{V})$, we have (2.14) $\|u\|_{q}\leq\bigl{\|}(-\Delta+V+N_{0})u\bigr{\|}_{2},\\\ \,\text{for}\,\,\,2<q<\infty,\,\,\text{if}\,\,n=3,4,\,\,\,\text{or}\,\,\,2<q\leq\tfrac{2n}{n-4}\,\,\text{if}\,\,n\geq 5.$ Lemma 2.3 is essentially a special case of Lemma 2.4 in [1], where the authors proved sharp Sobolev-type estimates for the operator $H_{V}$ on compact manifolds. However, their proof works equally well in the case of ${\mathbb{R}}^{n}$. Thus, for the sake of brevity, we shall skip the proof of (2.14) here and refer the reader to the proof of Lemma 2.4 in [1] for details. The main idea is that although the uniform resolvent estimates for the operator $(-\Delta-\zeta)^{-1}$ hold for a rather restricted range of exponents $(p,q)$, it is possible to enlarge the range of $(p,q)$ at the expense of requiring $\zeta$ to lie in certain regions of the complex plane. ###### Proof of Theorem 2.2. 
Throughout the proof we shall assume that (2.15) $\tfrac{2(n+1)}{n-1}\leq r<q\leq\tfrac{2n}{n-4},\,\,\,\text{if}\,\,\,n\geq 5,\,\,\,\text{or}\,\,\,\tfrac{2(n+1)}{n-1}\leq r<q<\infty,\,\,\,\text{if}\,\,\,n=3,4.$ Note that proving (2.12) and (2.13) is equivalent to showing that, for $q$ satisfying (2.15), (2.16) $\bigl{\|}(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{\|}_{q}\leq C\lambda^{n(1/2-1/q)-3/2}\,\varepsilon^{-1/2}\|f\|_{2},\,\,\,\text{if }\,\,\lambda\geq\Lambda.$ The proof of (2.12) and (2.13) also relies on the simple fact that if we let (2.17) $V_{\leq N}(x)=\begin{cases}V(x),\,\,\,\text{if }\,\,|V(x)|\leq N,\\\ 0,\,\,\,\text{otherwise},\end{cases}$ then, of course, (2.18) $\|V_{\leq N}\|_{L^{\infty}}\leq N,$ and, if $V_{>N}(x)=V(x)-V_{\leq N}(x)$, (2.19) $\|V_{>N}\|_{L^{n/2}({\mathbb{R}}^{n})}\leq\delta(N),\quad\text{with }\,\,\delta(N)\searrow 0,\,\,\,\text{as }\,\,N\to\infty,$ since we are assuming that $V\in L^{n/2}({\mathbb{R}}^{n})$. Fix a smooth bump function $\beta\in C_{0}^{\infty}((1/4,4))$ with $\beta\equiv 1$ in $(1/2,2)$, let $P=\sqrt{-\Delta}$, and write (2.20) $(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\\\ \qquad\qquad\qquad=\beta(P/\lambda)(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f+\big{(}1-\beta(P/\lambda)\big{)}(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\\\ =A+B.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad$ To deal with the first term, note that $m(\xi)=\lambda^{-\alpha}|\xi|^{\alpha}\beta(|\xi|/\lambda)$ is a Mikhlin multiplier, i.e., $\big{|}|\xi|^{k}\nabla^{k}m(\xi)\big{|}\leq C_{k},\,\,\forall\,\,k\geq 0,\,\,\xi\neq 0.$ Therefore, by the Mikhlin multiplier theorem (see, e.g., Theorem 0.2.6 in [19]), we have (2.21) $\|(-\Delta)^{\frac{\alpha}{2}}\beta(P/\lambda)\|_{L^{r}\rightarrow L^{r}}\lesssim\lambda^{\alpha},\,\,\,\,\,\text{if}\,\,\,1<r<\infty.$ So by Sobolev estimates, (2.21) and (1.8) for the exponent $r$, if $\alpha=n(\frac{1}{r}-\frac{1}{q})$ and $0<\varepsilon<\lambda/2$, we 
have (2.22) $\displaystyle\|A\|_{q}$ $\displaystyle\leq\|(-\Delta)^{\frac{\alpha}{2}}\beta(P/\lambda)(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{r}$ $\displaystyle\leq C\lambda^{n(\frac{1}{r}-\frac{1}{q})}\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{r}$ $\displaystyle\leq C\,\lambda^{n(1/2-1/q)-3/2}\,\varepsilon^{-1/2}\|f\|_{2},\,\,\,\text{if}\,\,\,\lambda\geq\Lambda_{1}=\Lambda(r).$ To bound the second term, we shall use the second resolvent formula (2.1) to write $\displaystyle\big{(}1-\beta(P/\lambda)\big{)}($ $\displaystyle H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f$ $\displaystyle=$ $\displaystyle\big{(}1-\beta(P/\lambda)\big{)}(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}f$ $\displaystyle-\big{(}1-\beta(P/\lambda)\big{)}(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}\bigl{(}V_{>N}\cdot(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{)}$ $\displaystyle-\big{(}1-\beta(P/\lambda)\big{)}(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}\bigl{(}V_{\leq N}\cdot(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{)}$ $\displaystyle=I-II-III.$ Since the function $1-\beta(\tau/\lambda)$ vanishes in a dyadic neighborhood of $\lambda$, it is easy to see that $\big{(}1-\beta(|\xi|/\lambda)\big{)}(|\xi|^{2}-\lambda^{2}+i\varepsilon\lambda)^{-1}(|\xi|^{2}+1)$ is a Mikhlin multiplier, and so (2.23) $\|\big{(}1-\beta(P/\lambda)\big{)}(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{q}\leq C\|(-\Delta+1)^{-1}f\|_{q},\,\,\,\,\,\text{if}\,\,\,1<q<\infty.$ So by (2.23), Sobolev estimates, and Hölder’s inequality, we have, for $q$ satisfying (2.15), (2.24) $\displaystyle\|II\|_{q}$ $\displaystyle\leq C\|(-\Delta+1)^{-1}\big{(}V_{>N}\cdot(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}\big{)}f\|_{q}$ $\displaystyle\leq C\|V_{>N}\cdot(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{p(q)}$ $\displaystyle\leq C\delta(N)\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{q},$ where $\frac{1}{p(q)}-\frac{1}{q}=\frac{2}{n}$. 
By (2.19) we can fix $N$ large enough so that $C\delta(N)<1/4$, yielding the bounds (2.25) $\|II\|_{q}<\frac{1}{4}\,\bigl{\|}(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{\|}_{q}.$ For the third term $III$, note that, as before, by the support property of $\beta$, it is straightforward to check that $\big{(}1-\beta(|\xi|/\lambda)\big{)}(|\xi|^{2}-\lambda^{2}+i\varepsilon\lambda)^{-1}\lambda^{2}$ is a Mikhlin multiplier, and so (2.26) $\|\big{(}1-\beta(P/\lambda)\big{)}(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{q}\leq C\lambda^{-2}\|f\|_{q},\,\,\,\,\,\text{if}\,\,\,1<q<\infty.$ Thus, by (2.18), we have (2.27) $\displaystyle\|III\|_{q}$ $\displaystyle\leq C\lambda^{-2}\|\big{(}V_{\leq N}\cdot(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}\big{)}f\|_{q}$ $\displaystyle\leq CN\lambda^{-2}\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{q}.$ If we choose $\Lambda_{2}$ such that $\Lambda_{2}^{2}=4CN$, we have (2.28) $\|III\|_{q}<\frac{1}{4}\,\bigl{\|}(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{\|}_{q},\,\,\,\text{if}\,\,\,\lambda\geq\Lambda_{2}.$ We are left with estimating the first term $I$. Note that for $q$ satisfying (2.15), we have $\frac{1}{2}-\frac{1}{q}\leq\frac{2}{n}$. By Sobolev estimates, if $\alpha=n(\frac{1}{2}-\frac{1}{q})$, (2.29) $\|\big{(}1-\beta(P/\lambda)\big{)}(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{q}\\\ \leq\|(-\Delta)^{\frac{\alpha}{2}}\big{(}1-\beta(P/\lambda)\big{)}(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{2}.$ Since the symbol of the operator on the right side of (2.29) satisfies (2.30) $\bigl{|}\tau^{\alpha}\big{(}1-\beta(\tau/\lambda)\big{)}(\tau^{2}-\lambda^{2}+i\varepsilon\lambda)^{-1}\bigr{|}\lesssim\lambda^{\alpha-2},$ by the spectral theorem, this yields the bounds (2.31) $\|I\|_{q}\lesssim\lambda^{n(\frac{1}{2}-\frac{1}{q})-2}\|f\|_{2},$ which are better than those in (2.16), since we are assuming that $\varepsilon<\lambda/2$. 
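To see the symbol bound (2.30) concretely, recall that $1-\beta(\tau/\lambda)$ vanishes for $\tau\in(\lambda/2,2\lambda)$, while off this dyadic neighborhood $|\tau^{2}-\lambda^{2}|\gtrsim\tau^{2}+\lambda^{2}$; a sketch:

```latex
\Bigl|\tau^{\alpha}\bigl(1-\beta(\tau/\lambda)\bigr)
      (\tau^{2}-\lambda^{2}+i\varepsilon\lambda)^{-1}\Bigr|
\lesssim\frac{\tau^{\alpha}}{\tau^{2}+\lambda^{2}}
\le
\begin{cases}
\lambda^{\alpha}\,\lambda^{-2}=\lambda^{\alpha-2}, & \tau\le\lambda,\\[3pt]
\tau^{\alpha-2}\le\lambda^{\alpha-2}, & \tau\ge\lambda,
\end{cases}
```

where the second case uses $\alpha=n(\tfrac{1}{2}-\tfrac{1}{q})\leq 2$, which holds for $q$ as in (2.15).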
If we combine (2.22), (2.25), (2.28), and (2.31), we conclude that for $\lambda\geq\Lambda$ with $\Lambda=\max\\{\Lambda_{1},\Lambda_{2}\\}$, we have (2.32) $\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{L^{q}({\mathbb{R}}^{n})}\leq C\,\lambda^{n(1/2-1/q)-3/2}\,\varepsilon^{-1/2}\|f\|_{L^{2}({\mathbb{R}}^{n})}\\\ +\frac{1}{2}\,\|(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\|_{L^{q}({\mathbb{R}}^{n})}.$ Note that as a consequence of Lemma 2.3, for $q$ satisfying (2.15), we have (2.33) $\bigl{\|}(H_{V}-\lambda^{2}+i\varepsilon\lambda)^{-1}f\bigr{\|}_{L^{q}({\mathbb{R}}^{n})}<\infty\quad\text{if }\,\,f\in L^{2}({\mathbb{R}}^{n}).$ Thus, in view of (2.33), we may absorb the last term in (2.32) into the left side, which yields (2.16), and so the proof of Theorem 2.2 is complete. ∎ Now we shall prove (1.5) for the region $\rm{Re}\,\zeta<\Lambda^{2}$ by using the results we have just proved. Note that by the spectral theorem as well as (1.3), we have (2.34) $\bigl{\|}(H_{V}-\zeta)^{-1}f\bigr{\|}_{L^{2}({\mathbb{R}}^{n})}\leq C_{\Lambda,\delta}\bigl{\|}(H_{V}-\Lambda^{2}+i\Lambda^{2}/4)^{-1}f\bigr{\|}_{L^{2}({\mathbb{R}}^{n})},\\\ \text{if }\,\,\rm{Re}\,\zeta<\Lambda^{2},\,\,\,\text{and}\,\,\,\mathrm{dist}(\zeta,[-N_{0},+\infty))\geq\delta.$ On the other hand, by a simple interpolation argument along with the trivial $L^{2}$ estimates, (1.8) implies (2.35) $\bigl{\|}(H_{V}-\Lambda^{2}+i\Lambda^{2}/4)^{-1}f\bigr{\|}_{L^{q}({\mathbb{R}}^{n})}\leq C_{\Lambda}\|f\|_{L^{2}({\mathbb{R}}^{n})},\quad\text{if }\,\,\tfrac{2n}{n-1}<q<\tfrac{2n}{n-3}.$ By duality, this yields (2.36) $\bigl{\|}(H_{V}-\zeta)^{-1}f\bigr{\|}_{L^{2}({\mathbb{R}}^{n})}\leq C\|f\|_{L^{p}({\mathbb{R}}^{n})},\\\ \text{if }\,\,\tfrac{2n}{n+3}<p<\tfrac{2n}{n+1},\,\,\rm{Re}\,\zeta<\Lambda^{2},\,\,\,\text{and}\,\,\,\mathrm{dist}(\zeta,[-N_{0},+\infty))\geq\delta.$ To prove (1.5) when $\rm{Re}\,\zeta<\Lambda^{2}$, by repeating the previous arguments, it suffices to show that (2.37) $\bigl{\|}(H_{V}-\zeta)^{-1}f\bigr{\|}_{L^{q}({\mathbb{R}}^{n})}\leq 
C\|f\|_{L^{p}({\mathbb{R}}^{n})},$ with $(p,q)$ satisfying (1.4) and (2.5), and $\zeta$ satisfying the conditions in (2.36). To exploit this, we use the second resolvent formula (2.1) to write (2.38) $\displaystyle(H_{V}-\zeta)^{-1}f=$ $\displaystyle(-\Delta-\zeta)^{-1}f$ $\displaystyle-(-\Delta-\zeta)^{-1}\bigl{(}V_{>N}\cdot(H_{V}-\zeta)^{-1}f\bigr{)}$ $\displaystyle-(-\Delta-\zeta)^{-1}\bigl{(}V_{\leq N}\cdot(H_{V}-\zeta)^{-1}f\bigr{)}$ $\displaystyle=I-II-III.$ By the uniform Sobolev estimates (2.2) for the unperturbed operator we have (2.39) $\|I\|_{q}\leq C\|f\|_{p},$ as well as (2.22′) $\|II\|_{q}\leq C\bigl{\|}V_{>N}\cdot(H_{V}-\zeta)^{-1}f\bigr{\|}_{p}\leq C\|V_{>N}\|_{L^{n/2}}\cdot\bigl{\|}(H_{V}-\zeta)^{-1}f\bigr{\|}_{L^{q}},$ using Hölder’s inequality and the fact that $\frac{1}{p}-\frac{1}{q}=\frac{2}{n}$ in the last step. By (2.19) we can fix $N$ large enough so that $C\|V_{>N}\|_{L^{n/2}}<1/2$, yielding the bounds (2.40) $\|II\|_{q}<\frac{1}{2}\,\bigl{\|}(H_{V}-\zeta)^{-1}f\bigr{\|}_{q}.$ To bound the third term $III$, note that by simple Sobolev estimates, for $q$ satisfying (2.5) and $\zeta$ satisfying the conditions in (2.34), we have (2.41) $\bigl{\|}(-\Delta-\zeta)^{-1}f\bigr{\|}_{L^{q}({\mathbb{R}}^{n})}\leq C_{\Lambda}\|f\|_{L^{2}({\mathbb{R}}^{n})}.$ If we combine (2.41) and (2.36), we conclude that (2.42) $\|III\|_{q}\leq C_{\Lambda,\delta}N\,\|f\|_{p}.$ Thus, (2.39), (2.40) and (2.42) imply that (2.43) $\|(H_{V}-\zeta)^{-1}f\|_{L^{q}({\mathbb{R}}^{n})}\leq C\|f\|_{L^{p}({\mathbb{R}}^{n})}+\frac{1}{2}\|(H_{V}-\zeta)^{-1}f\|_{L^{q}({\mathbb{R}}^{n})},\\\ \text{if }\,\,\rm{Re}\,\zeta<\Lambda^{2},\,\,\,\text{and}\,\,\,\mathrm{dist}(\zeta,[-N_{0},+\infty))\geq\delta.$ By (2.7), this implies (1.5) when $\rm{Re}\,\zeta<\Lambda^{2}$. To conclude this section, we shall give the proof of Theorem 2.1. ###### Proof of Theorem 2.1. 
Write $V=V_{>N}+V_{\leq N}$; we shall focus on the term $V_{\leq N}$, since, as in (2.22′), we can always fix $N$ large enough so that (2.44) $\|(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}\,(V_{>N}f)\|_{q}\leq\frac{1}{4}\|f\|_{q}.$ Recall that for $\lambda\geq 1$ the Euclidean resolvent kernel equals (2.45) $\displaystyle(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}(x,y)$ $\displaystyle=(2\pi)^{-n}\int_{{\mathbb{R}}^{n}}\frac{e^{i\langle x-y,\xi\rangle}}{|\xi|^{2}-\lambda^{2}+i\varepsilon\lambda}d\xi$ $\displaystyle=(2\pi)^{-n}\lambda^{n-2}\int_{{\mathbb{R}}^{n}}\frac{e^{i\lambda\langle x-y,\xi\rangle}}{|\xi|^{2}-1+i\varepsilon/\lambda}d\xi.$ Fix a real-valued Littlewood-Paley bump function $\beta\in C_{0}^{\infty}((1/2,2))$ satisfying (2.46) $1=\sum_{-\infty}^{\infty}\beta(2^{-j}s)\,\,\text{for }\,\,s>0,\quad\text{and }\,\beta(s)=1,\,\,s\in[3/4,5/4].$ Note that it is straightforward to check that $m(\xi)=(1-\beta(|\xi|))(|\xi|^{2}-1+i\varepsilon/\lambda)^{-1}$ is a symbol of order $-2$, i.e., $\bigl{|}(\frac{d}{dr})^{j}m(r)\bigr{|}\leq C_{j}(1+r)^{-2-j}$. Therefore, by a simple integration by parts argument, (2.47) $\displaystyle K_{0}(x,y)$ $\displaystyle=(2\pi)^{-n}\lambda^{n-2}\int_{{\mathbb{R}}^{n}}\bigl{(}1-\beta(|\xi|)\bigr{)}\frac{e^{i\lambda\langle x-y,\xi\rangle}}{|\xi|^{2}-1+i\varepsilon/\lambda}d\xi$ $\displaystyle=O\bigl{(}|x-y|^{2-n}(1+\lambda|x-y|)^{-N}\bigr{)},\,\,\forall\,\,N>0.$ Thus, by Young’s inequality, (2.48) $\|K_{0}(V_{\leq N}f)\|_{q}\leq C\lambda^{-2}\|V_{\leq N}f\|_{q}\leq C\lambda^{-2}N\|f\|_{q}.$ By choosing $\Lambda_{1}$ such that $C\Lambda_{1}^{-2}N\leq 1/12$, we have (2.49) $\|K_{0}(V_{\leq N}f)\|_{q}\leq\frac{1}{12}\|f\|_{q},\,\,\text{if}\,\,\lambda\geq\Lambda_{1}.$ Here, in (2.48) and (2.49), we are abusing notation a bit by letting $K_{0}$ denote the integral operator with kernel $K_{0}(x,y)$, and we shall use similar notation in what follows. 
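The Young's inequality step behind (2.48) reduces to the following kernel computation, recorded here as a sketch: by (2.47), with the change of variables $w=\lambda z$ and any fixed $N>2$,

```latex
\sup_{x}\int_{{\mathbb{R}}^{n}}|K_{0}(x,y)|\,dy
\lesssim\int_{{\mathbb{R}}^{n}}|z|^{2-n}\bigl(1+\lambda|z|\bigr)^{-N}\,dz
=\lambda^{-2}\int_{{\mathbb{R}}^{n}}|w|^{2-n}\bigl(1+|w|\bigr)^{-N}\,dw
\approx\lambda^{-2},
```

and the same bound holds with the roles of $x$ and $y$ reversed, so Schur's test gives $\|K_{0}\|_{L^{q}\to L^{q}}\lesssim\lambda^{-2}$ for every $1\leq q\leq\infty$.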
On the other hand, for the portion of the symbol where $|\xi|\approx 1$, by stationary phase methods or the formula for the Fourier transform of surface measure on the sphere, we have (2.50) $\displaystyle K_{1}(x,y)$ $\displaystyle=(2\pi)^{-n}\lambda^{n-2}\int_{{\mathbb{R}}^{n}}\beta(|\xi|)\frac{e^{i\lambda\langle x-y,\xi\rangle}}{|\xi|^{2}-1+i\varepsilon/\lambda}d\xi$ $\displaystyle=\begin{cases}O(\lambda^{n-2}),\,\,\text{if}\,\,|x-y|<\lambda^{-1}\\\ \sum_{\pm}e^{\pm i\lambda|x-y|}c_{\pm}(|x-y|),\,\,\text{if}\,\,|x-y|\geq\lambda^{-1},\end{cases}$ where $|\frac{d^{j}}{ds^{j}}c_{\pm}(s)|\leq B_{j}\lambda^{\frac{n-3}{2}}s^{-\frac{n-1}{2}-j}$. Now we split the kernel $K_{1}(x,y)$ as $K_{1}(x,y)=\sum_{j=0}^{\infty}K_{1}^{j}(x,y),$ where $K_{1}^{j}(x,y)=\beta(|x-y|\lambda 2^{-j})K_{1}(x,y),\,\,\text{if}\,\,j>0,$ and $K_{1}^{0}(x,y)=\beta_{0}(|x-y|\lambda)K_{1}(x,y),$ with $\beta_{0}(s)=1-\sum_{j=1}^{\infty}\beta(2^{-j}s).$ Note that as a consequence of (2.50) and Young’s inequality, (2.51) $\|K_{1}^{j}(V_{\leq N}f)\|_{q}\leq C\lambda^{-2}2^{j\frac{n+1}{2}}\|V_{\leq N}f\|_{q}\leq C\lambda^{-2}2^{j\frac{n+1}{2}}N\|f\|_{q},$ if $2^{j}\leq 2^{j_{0}}\approx\lambda^{\frac{2}{n+1}}$. Thus, by choosing $\Lambda_{2}$ such that $2C\Lambda_{2}^{-1}N\leq 1/12$, we have (2.52) $\sum_{j=0}^{j_{0}}\|K_{1}^{j}(V_{\leq N}f)\|_{q}\leq\frac{1}{12}\|f\|_{q},\,\,\text{if}\,\,\lambda\geq\Lambda_{2}.$ On the other hand, for each fixed $j>0$, the operator $K_{1}^{j}$ satisfies the following bounds. ###### Proposition 2.4. Let $n\geq 3$, and suppose that (2.53) $1\leq p\leq 2,\quad q\geq(n+1)p^{\prime}/(n-1).$ Then (2.54) $\|K_{1}^{j}f\|_{L^{q}({\mathbb{R}}^{n})}\leq C\lambda^{-2+n(\frac{1}{p}-\frac{1}{q})}2^{j(\frac{n+1}{2}-\frac{n}{p})}\|f\|_{L^{p}({\mathbb{R}}^{n})}.$ When $q=(n+1)p^{\prime}/(n-1)$, (2.54) is essentially Lemma 5.4 in [18], which can be proved by standard change of scale arguments and an application of Stein’s oscillatory integral theorem. 
If $q=\infty$, the above estimate follows from Young’s inequality, since $K_{1}^{j}(x,y)$ is supported in the set where $|x-y|\approx\lambda^{-1}2^{j}$. The remaining inequalities in (2.54) now follow from interpolation. The intersection of the two regions (1.4) and (2.53) consists of the pairs of exponents $(p,q)$ satisfying $\tfrac{1}{p}-\tfrac{1}{q}=\tfrac{2}{n},\,\,\,\tfrac{2n^{2}}{n^{2}+n+2}<p<\tfrac{2n}{n+1}$. By applying (2.54) along the uniform Sobolev line $\tfrac{1}{p}-\tfrac{1}{q}=\tfrac{2}{n}$ and duality, we have, for $(p,q)$ satisfying (1.4), (2.55) $\|K_{1}^{j}f\|_{L^{q}({\mathbb{R}}^{n})}\leq C2^{-j\delta(p)}\|f\|_{L^{p}({\mathbb{R}}^{n})},$ where $\delta(p)>0$ is a fixed constant which depends on $p$. A combination of (2.55) and Hölder’s inequality yields (2.56) $\|K_{1}^{j}(V_{\leq N}f)\|_{q}\leq C2^{-j\delta(p)}\|V_{\leq N}f\|_{p}\leq C2^{-j\delta(p)}\|V\|_{n/2}\|f\|_{q}.$ For each fixed $p$ with $\frac{1}{p}-\frac{1}{q}=\frac{2}{n}$, if we choose $\Lambda_{3}$ large enough so that $(1-2^{-\delta(p)})^{-1}C\Lambda_{3}^{-\frac{2\delta(p)}{n+1}}\|V\|_{n/2}\leq 1/12,$ then, after summing over $2^{j}\geq 2^{j_{0}}\approx\lambda^{\frac{2}{n+1}}$, we obtain (2.57) $\sum_{j=j_{0}}^{\infty}\|K_{1}^{j}(V_{\leq N}f)\|_{q}\leq\frac{1}{12}\|f\|_{q},\,\,\text{if}\,\,\lambda\geq\Lambda_{3}.$ If we combine (2.44), (2.49), (2.52) and (2.57), we conclude that for $\lambda\geq\max(\Lambda_{1},\Lambda_{2},\Lambda_{3})$ we have (2.58) $\|(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}\,(Vf)\|_{q}\leq\frac{1}{2}\|f\|_{q},\,\,\text{if}\,\,\tfrac{2n}{n-1}<q<\tfrac{2n}{n-3},$ which completes the proof of Theorem 2.1. ∎ Remark. The above proof of Theorem 2.1 does not make full use of the condition that $V\in L^{n/2}({\mathbb{R}}^{n})$, and it can be easily adapted to prove bounds for certain potentials in $L^{n/2}_{\text{loc}}$. 
For example, in (2.48) and (2.51), instead of using $L^{q}\rightarrow L^{q}$ type estimates, we can use their $L^{p}\rightarrow L^{q}$ analogs with $\frac{1}{p}-\frac{1}{q}=\frac{2}{n}$, and exploit the localized supports of the kernels of $K_{0}$ and $K_{1}^{j}$ to remove the $L^{\infty}$ condition on $V$ used there. Specifically, if we assume that (2.59) $\lim_{r\rightarrow 0}\sup_{x}\|V\|_{L^{n/2}(B_{x}(r))}=0$ as well as (2.60) $\sup_{x}\|V\|_{L^{n/2}(B_{x}(R))}\leq C_{\varepsilon}R^{\varepsilon},\,\,\forall\,\,\varepsilon>0,\,\,R>1,$ then one could decompose the resolvent operator $(-\Delta-\lambda^{2}+i\varepsilon\lambda)^{-1}$ as a sum of dyadically localized operators and repeat the above arguments to get the same conclusion. Also, the condition (2.59) itself ensures that $H_{V}$ defines a self-adjoint operator which is bounded from below. Thus, it is possible to get the same results as in Theorem 1.1 assuming that $V$ satisfies (2.59) and (2.60). Recall that in Ionescu and Jerison [9], a special case of the conditions the authors used to guarantee the absence of positive eigenvalues was that $V\in L_{loc}^{n/2}({\mathbb{R}}^{n})$ and $\lim_{R\rightarrow\infty}\|V\|_{L^{n/2}(R<|x|<2R)}=0,$ which implies (2.59) and (2.60) with $R^{\varepsilon}$ replaced by $\log R$. On the other hand, by an interpolation theorem of Stein and Weiss, i.e., Chapter V, Theorem 3.15 in [21], it is straightforward to show that, e.g., (2.56) still holds with $\|V\|_{L^{n/2}}$ replaced by $\|V\|_{L^{n/2,\infty}}$. Thus the conditions in (2.59) and (2.60) can be further weakened to (2.61) $\lim_{r\rightarrow 0}\sup_{x}\|V\|_{L^{n/2,\infty}(B_{x}(r))}=0$ as well as (2.62) $\sup_{x}\|V\|_{L^{n/2,\infty}(B_{x}(R))}\leq C_{\varepsilon}R^{\varepsilon},\,\,\forall\,\,\varepsilon>0,\,\,R>1.$ ## 3\. Strichartz estimates for Schrödinger operators with singular potentials in $\mathbb{R}^{n}$. In this section we shall prove Theorem 1.4. 
We shall first give the proof of (1.19), which is an easy consequence of Duhamel’s principle and a bootstrap argument. If $V\in L^{n/2}(\mathbb{R}^{n})+L^{\infty}(\mathbb{R}^{n})$, by adapting the arguments in the appendix of [1], one can show that $H_{V}=-\Delta+V$ defines a self-adjoint operator which is bounded from below. If necessary, we may add a constant to $V$ so that $H_{V}\geq 0$. This will not affect our estimates, since, if we, say, add the constant $N$ to $V$, the two Schrödinger propagators will agree up to a unimodular factor $e^{-itN}$. By the spectral theorem, we have $\|e^{-itH_{V}}\|_{L^{2}({\mathbb{R}}^{n})\to L^{2}({\mathbb{R}}^{n})}=O(1),\,\,\forall\,\,t\in{\mathbb{R}}.$ Thus, it suffices to prove (1.19) for the other endpoint, i.e., (3.1) $\bigl{\|}e^{-itH_{V}}f\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}\lesssim\|f\|_{L^{2}(\mathbb{R}^{n})}.$ To proceed, note that for $V\in L^{n/2}(\mathbb{R}^{n})+L^{\infty}(\mathbb{R}^{n})$, as in (2.17)–(2.19), we can always write $V=V_{\leq N}+V_{>N}$, with $\|V_{\leq N}\|_{L^{\infty}}\leq N,$ while $\|V_{>N}\|_{L^{n/2}}=\delta(N)\rightarrow 0$ as $N\rightarrow\infty$. 
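When the piece $V_{>N}$ is estimated below, the splitting is combined with Hölder's inequality through the following elementary exponent relation, recorded here as a sketch for convenience:

```latex
\frac{n+2}{2n}=\frac{2}{n}+\frac{n-2}{2n},
\qquad\text{so}\qquad
\|V_{>N}\,g\|_{L^{\frac{2n}{n+2}}({\mathbb{R}}^{n})}
\le\|V_{>N}\|_{L^{\frac{n}{2}}({\mathbb{R}}^{n})}\,
   \|g\|_{L^{\frac{2n}{n-2}}({\mathbb{R}}^{n})}
\le\delta(N)\,\|g\|_{L^{\frac{2n}{n-2}}({\mathbb{R}}^{n})}.
```

This is precisely the pairing that lets the dual Strichartz norm $L^{2}_{t}L^{\frac{2n}{n+2}}_{x}$ absorb a factor of $V_{>N}$ against the endpoint norm $L^{2}_{t}L^{\frac{2n}{n-2}}_{x}$.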
By Duhamel’s formula, we can write (3.2) $\displaystyle e^{-itH_{V}}f=$ $\displaystyle e^{it\Delta}f+i\int_{0}^{t}e^{i(t-s)\Delta}V_{\leq N}e^{-isH_{V}}fds+i\int_{0}^{t}e^{i(t-s)\Delta}V_{>N}e^{-isH_{V}}fds$ $\displaystyle=$ $\displaystyle I+II+III.$ Note that, by the Keel–Tao theorem [12], we have (3.3) $\bigl{\|}e^{it\Delta}f\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}\leq C\|f\|_{L^{2}(\mathbb{R}^{n})},$ as well as (3.4) $\bigl{\|}\int_{0}^{t}e^{i(t-s)\Delta}F(s,\cdot)ds\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}\leq C\|F\|_{L_{t}^{2}L_{x}^{\frac{2n}{n+2}}([0,1]\times{\mathbb{R}}^{n})}.$ Thus, the first term $I$ is bounded by the right side of (3.1), and if we choose $N$ large enough such that $C\delta(N)\leq 1/2$ for the constant $C$ appearing in (3.4), we have (3.5) $\displaystyle\|III\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}\leq$ $\displaystyle C\|V_{>N}e^{-itH_{V}}f\|_{L_{t}^{2}L_{x}^{\frac{2n}{n+2}}([0,1]\times{\mathbb{R}}^{n})}$ $\displaystyle\leq$ $\displaystyle C\|V_{>N}\|_{L^{n/2}({\mathbb{R}}^{n})}\bigl{\|}e^{-itH_{V}}f\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}$ $\displaystyle\leq$ $\displaystyle\frac{1}{2}\bigl{\|}e^{-itH_{V}}f\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}.$ To estimate $II$ we shall use Minkowski’s inequality and (3.3). 
More specifically, (3.6) $\displaystyle\|II\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}\leq$ $\displaystyle C\int_{0}^{1}\|e^{it\Delta}(e^{-is\Delta}V_{\leq N}e^{-isH_{V}}f)\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}ds$ $\displaystyle\leq$ $\displaystyle C\int_{0}^{1}\|e^{-is\Delta}V_{\leq N}e^{-isH_{V}}f\|_{L^{2}_{x}({\mathbb{R}}^{n})}ds$ $\displaystyle\leq$ $\displaystyle CN\int_{0}^{1}\|e^{-isH_{V}}f\|_{L^{2}_{x}({\mathbb{R}}^{n})}ds$ $\displaystyle\leq$ $\displaystyle CN\|f\|_{L^{2}({\mathbb{R}}^{n})},$ where we used the identity $e^{i(t-s)\Delta}=e^{it\Delta}e^{-is\Delta}$ in the first step, and (3.3) together with the unitarity of $e^{-is\Delta}$ on $L^{2}$ in the second and third steps. If $f\in H^{1}({\mathbb{R}}^{n})$, then by Sobolev estimates and (6.9) in the appendix of [1], $e^{-itH_{V}}f\in L^{\frac{2n}{n-2}}$, which implies $\bigl{\|}e^{-itH_{V}}f\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}<\infty.$ Thus, a combination of (3.3), (3.5), (3.6) and a bootstrap argument gives us (3.7) $\bigl{\|}e^{-itH_{V}}f\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}([0,1]\times{\mathbb{R}}^{n})}\leq C_{V}\|f\|_{L^{2}({\mathbb{R}}^{n})},\,\,\text{if}\,\,f\in H^{1}({\mathbb{R}}^{n}).$ Since $H^{1}$ is dense in $L^{2}$, the proof of (3.1) is complete. The proof of (1.18) requires more work, since we cannot bound the term $II$ as before if the $s$-integration is over an unbounded interval. To proceed, we shall follow the strategy in a recent work [8] by the authors and prove analogous dyadic estimates which will allow us to obtain (1.18). Also, we have to show that the Littlewood-Paley estimates for $H_{V}$ are valid for the exponents $q$ as in (1.17). This was done in the appendix of [8] in the setting of compact manifolds, and the same argument can be used to handle the Euclidean space ${\mathbb{R}}^{n}$. 
As in [8], the proof of the dyadic variants of (1.17) relies on certain microlocalized “quasimode” estimates for the unperturbed scaled Schrödinger operator with a damping term, (3.8) $i\lambda\partial_{t}+\Delta+i\varepsilon\lambda.$ Unlike the case of compact manifolds, we need to allow the damping term $\varepsilon\lambda$ to be arbitrarily small, and prove results that are uniform in $\varepsilon$, in order to get global Strichartz estimates. Furthermore, since the Littlewood-Paley operators associated with $-\Delta$ may not be compatible with the corresponding ones for $H_{V}=-\Delta+V(x)$ with $V$ singular, we shall also introduce the Littlewood-Paley operators acting on the time variable $\beta(-D_{t}/\lambda)h(x)=(2\pi)^{-1}\int_{-\infty}^{\infty}e^{it\tau}\beta(-\tau/\lambda)\,\hat{h}(\tau)\,d\tau,$ with $\beta$ defined as in (2.46). More specifically, to prove (1.18) our main estimates will concern solutions of the scaled inhomogeneous Schrödinger equation with damping term (3.9) $(i\lambda\partial_{t}+\Delta+i\varepsilon\lambda)w(t,x)=F(t,x),\quad w(0,\,\cdot\,)=0.$ It will be convenient to assume that the “forcing term” here satisfies (3.10) $F(t,x)=0,\quad t\notin[0,\varepsilon^{-1}].$ The result that we shall need in order to prove (1.18) is the following. ###### Theorem 3.1. Let $n\geq 3$, and suppose that $F$ satisfies the support assumption in (3.10) and that $w$ solves (3.9). 
Then for $\lambda\geq 1$, we have (3.11) $\bigl{\|}\beta(-D_{t}/\lambda)w\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\lambda^{-1/2}\varepsilon^{-1/2}\|F\|_{L^{2}_{t,x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})},$ and also (3.12) $\bigl{\|}\beta(-D_{t}/\lambda)w\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\|F\|_{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})}.$ Additionally, if for all time $t$, $\text{supp }F(t,\cdot)\subset B_{R}$, with $B_{R}$ being a fixed ball of radius $R\geq 1$ in ${\mathbb{R}}^{n}$ centered at the origin, we have (3.13) $\bigl{\|}\beta(-D_{t}/\lambda)w\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times B_{R})}\leq C_{R}\lambda^{-1/2}\|F\|_{L^{2}_{t}L^{2}_{x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})}.$ Inequalities (3.11) and (3.12) are analogs of those in Theorem 1.2 in [8], which, as we shall see later, can be proved by using a similar argument. Inequality (3.13) is the main new ingredient, which will allow us to deal with forcing terms involving bounded and compactly supported potentials. Also, compared with Theorem 1.2 in [8], we only consider the special case $p=2$, $q=\frac{2n}{n-2}$ here for the sake of simplicity, since we are assuming that $n\geq 3$, where the endpoint exponents are always admissible. In other words, this endpoint Strichartz estimate implies all the others by interpolating with the trivial $L^{2}$-estimate. Before giving the proof of Theorem 3.1, which we shall postpone until the end of the section, let us see how we can use the above inequalities to prove (1.18). Recall that under the assumption (1.2), $H_{V}$ defines a self-adjoint operator and, as noted before, we may assume that (3.14) $H_{V}\geq 0.$ To proceed, we shall require the following multiplier bounds associated to the operator $H_{V}$. ###### Proposition 3.2. 
Let $V\in L^{n/2}({\mathbb{R}}^{n})$ be real valued, and suppose that (3.15) $1<q<\infty\,\,\,\text{for}\,\,\,n=3,4,\,\,\,\text{or}\,\,\,2n/(n+4)<q<2n/(n-4)\,\,\,\text{for}\,\,\,n\geq 5.$ Then if $m\in C^{\infty}({\mathbb{R}}_{+})$ is a Mikhlin-type multiplier, i.e., (3.16) $|\partial_{\tau}^{j}m(\tau)|\leq C(1+\tau)^{-j},\,\,\tau>0,\,\,\,0\leq j\leq n/2+1,$ we have (3.17) $\bigl{\|}m(\sqrt{H_{V}})f\bigr{\|}_{L^{q}({\mathbb{R}}^{n})}\leq C_{q}\|f\|_{L^{q}({\mathbb{R}}^{n})},$ as well as (3.18) $\|h\|_{L^{q}({\mathbb{R}}^{n})}\leq C_{q,V}\,\|\beta_{0}(\sqrt{H_{V}})h\|_{L^{q}({\mathbb{R}}^{n})}+\bigl{\|}\,\bigl{(}\,\sum_{k=1}^{\infty}\,\bigl{|}\beta(\sqrt{H_{V}}/2^{k})h\bigr{|}^{2}\,\bigr{)}^{1/2}\,\bigr{\|}_{L^{q}({\mathbb{R}}^{n})},$ for $q$, $V$ as above and $\beta$ being the Littlewood-Paley bump function satisfying (2.46). The proof of (3.17) was given in the appendix of [8] in the compact manifold case. We can repeat the arguments and see that, to obtain (3.17) in the Euclidean setting, it suffices to check that the following two properties hold for the operator $H_{V}$ in ${\mathbb{R}}^{n}$. i). Finite propagation speed for the wave equation associated to $H_{V}$; that is, if $u,v\in L^{2}({\mathbb{R}}^{n})$ and ${{\rm dist}}(\text{supp }u,\text{supp }v)=R$, then (3.19) $\bigl{(}u,\,\cos t\sqrt{H_{V}}\,v\bigr{)}=0,\,\,|t|<R.$ By a result of Coulhon and Sikora [5], (3.19) is valid when $H_{V}$ is nonnegative, self-adjoint and $V\in L^{1}_{loc}({\mathbb{R}}^{n})$. Alternatively, one can use arguments from [2] to show this for the potentials that we are considering. ii). 
For $q$ satisfying (1.7) and $\beta$ defined as in (2.46), we have the Bernstein type (dyadic Sobolev) estimates (3.20) $\|\beta(\sqrt{H_{V}}/\lambda)u\|_{L^{q}({\mathbb{R}}^{n})}\leq C\lambda^{n(\frac{1}{2}-\frac{1}{q})}\|u\|_{L^{2}({\mathbb{R}}^{n})},\,\,\lambda\geq 1,\\\ \text{and }\,\,\|\beta_{0}(\sqrt{H_{V}})u\|_{L^{q}({\mathbb{R}}^{n})}\leq C\|u\|_{L^{2}({\mathbb{R}}^{n})},\,\,\text{if }\,\beta_{0}(s)=1-\sum_{k=1}^{\infty}\beta(2^{-k}s).$ This, in turn, is a direct consequence of (1.8) and the spectral theorem if we apply them to $u=\beta(\sqrt{H_{V}}/\lambda)u$ and take $\varepsilon$ comparable to $\lambda$ there. The fact that conditions $i)$ and $ii)$ imply (3.17) now follows from the same procedure as in the appendix of [8]. Also, given (3.17), the Littlewood-Paley estimate (3.18) can be proved using a standard argument involving Rademacher functions (see, e.g., [19, p. 21]). In view of the Littlewood-Paley inequalities (3.18), by using a standard argument involving Minkowski’s inequality (see, e.g., (3.33) in [8]), the Strichartz estimate in (1.18) is equivalent to the following: (3.21) $\bigl{\|}e^{-itH_{V}}\rho(\sqrt{H_{V}}/\lambda)f\bigr{\|}_{L^{p}_{t}L^{q}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})},\\\ \,\,\text{if}\,\,\rho\in C_{0}^{\infty}((9/10,11/10))\,\,\text{is fixed},\,\,\text{and}\,\,\lambda>\Lambda,$ assuming that $\Lambda=\Lambda(V)$ is sufficiently large. We choose the interval $(9/10,11/10)$ as the support of $\rho$ since we are assuming that the Littlewood-Paley bump function arising in Theorem 3.1 satisfies (3.22) $\beta(s)=1\quad\text{on }\,\,[3/4,5/4]\quad\text{and }\,\,\mathrm{supp}\,\beta\subset(1/2,2).$ Since, by the spectral theorem, (3.23) $\|e^{-itH_{V}}\|_{L^{2}({\mathbb{R}}^{n})\to L^{2}({\mathbb{R}}^{n})}=1,$ the estimate trivially holds for $p=\infty$ and $q=2$. 
Therefore, by interpolation, since we are assuming that $n\geq 3$, it suffices to prove the estimate for the other endpoint, i.e., (3.24) $\bigl{\|}e^{-itH_{V}}\rho(\sqrt{H_{V}}/\lambda)f\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq C\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})},\,\,\text{if}\,\,\lambda>\Lambda.$ To be able to use (3.11) and (3.12), we note that, after rescaling, (3.24) is equivalent to the statement that (3.25) $\bigl{\|}e^{-it\lambda^{-1}H_{V}}\rho(\sqrt{H_{V}}/\lambda)f\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}([-\varepsilon^{-1},\varepsilon^{-1}]\times{\mathbb{R}}^{n})}\\\ \leq C\lambda^{1/2}\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})},\,\,\forall\,\,0<\varepsilon<1,\,\,\text{if}\,\,\lambda>\Lambda,$ where $C=C(n,V)$ is a uniform constant independent of $\varepsilon$. By (3.23), this is equivalent to showing that whenever $\eta\in C^{\infty}_{0}((0,1))$ and $0<\varepsilon<1$ are fixed, we have (3.26) $\bigl{\|}w\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq C\lambda^{1/2}\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})},\,\,\text{if}\,\,\lambda>\Lambda,\\\ \text{with}\,\,\,w(t,x)=\eta(\varepsilon t)\cdot e^{-it\lambda^{-1}H_{V}}\rho(\sqrt{H_{V}}/\lambda)f.$ To proceed we need the following simple lemma. ###### Lemma 3.3. Let $n\geq 3$ and let $w$ be as in (3.26) with $\eta\in C^{\infty}_{0}((0,1))$ and $\rho\in C_{0}^{\infty}$ as in (3.21). Then for large enough $\lambda$ and each $N=1,2,\dots$ we have the uniform bounds (3.27) $\bigl{\|}\,(I-\beta(-D_{t}/\lambda))w\,\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq C_{N}\lambda^{-N}\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})}.$ This is essentially the Euclidean version of Lemma 3.1 in [8], and so we can repeat the arguments there to prove (3.27) with minor modifications. For completeness, we include the details below. ###### Proof. 
First note that the Fourier transform of $t\mapsto\eta(\varepsilon t)e^{-it\lambda^{-1}\mu^{2}}$ is $\varepsilon^{-1}\hat{\eta}(\frac{\tau+\mu^{2}/\lambda}{\varepsilon})$. Consequently, (3.28) $\bigl{(}I-\beta(-D_{t}/\lambda)\bigr{)}w(t,x)=a(t;\sqrt{H_{V}})\rho(\sqrt{H_{V}}/\lambda)f(x),$ where (3.29) $a(t;\mu)=(2\pi)^{-1}\int_{-\infty}^{\infty}e^{it\tau}\varepsilon^{-1}\hat{\eta}(\frac{\tau+\mu^{2}/\lambda}{\varepsilon})\,\big{(}1-\beta(-\tau/\lambda)\bigr{)}\,d\tau.$ For $q_{0}=\frac{2n}{n-2}$, by (6.7) in [1], the following Sobolev estimate is valid: (3.30) $\|u\|_{L^{q_{0}}({\mathbb{R}}^{n})}\lesssim\|\,(I+H_{V})^{1/2}u\,\|_{L^{2}({\mathbb{R}}^{n})}.$ Therefore, by the spectral theorem, (3.31) $\bigl{\|}a(t;\sqrt{H_{V}})\rho(\sqrt{H_{V}}/\lambda)f\bigr{\|}_{L^{q_{0}}({\mathbb{R}}^{n})}\lesssim\lambda\bigl{\|}a(t;\sqrt{H_{V}})\rho(\sqrt{H_{V}}/\lambda)f\bigr{\|}_{L^{2}({\mathbb{R}}^{n})}.$ Next, since $2\leq q_{0}<\infty$, by Minkowski’s inequality and Sobolev’s theorem for ${\mathbb{R}}$ we have $\displaystyle\|(I-\beta(-D_{t}/\lambda))w\|_{L^{2}_{t}L^{q_{0}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\lesssim\lambda\bigl{\|}a(t;\sqrt{H_{V}})\rho(\sqrt{H_{V}}/\lambda)f\bigr{\|}_{L^{q_{0}}_{t}L^{2}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq\lambda\bigl{\|}a(t;\sqrt{H_{V}})\rho(\sqrt{H_{V}}/\lambda)f\bigr{\|}_{L^{2}_{x}L^{q_{0}}_{t}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\lesssim\lambda\bigl{\|}\,|D_{t}|^{1/2-1/q_{0}}a(t;\sqrt{H_{V}})\rho(\sqrt{H_{V}}/\lambda)f\bigr{\|}_{L^{2}_{x}L^{2}_{t}({\mathbb{R}}\times{\mathbb{R}}^{n})}.$ By the spectral theorem, we conclude that (3.32) $\|(I-\beta(-D_{t}/\lambda))w\|_{L^{2}_{t}L^{q_{0}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\\\ \lesssim\lambda\,\bigl{(}\sup_{\mu\in[9\lambda/10,11\lambda/10]}\bigl{\|}\,|D_{t}|^{1/2-1/q_{0}}a(t;\mu)\bigr{\|}_{L^{2}_{t}({\mathbb{R}})}\bigr{)}\cdot\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})}.$ Next, by Plancherel’s theorem, (3.22) and (3.29), 
$\displaystyle\|\,|D_{t}|^{1/2-1/q_{0}}a(t;\mu)\|_{L^{2}_{t}({\mathbb{R}})}^{2}$ $\displaystyle=(2\pi)^{-1}\int_{-\infty}^{\infty}|\tau|^{1-2/q_{0}}\,\bigl{|}\varepsilon^{-1}\hat{\eta}(\frac{\tau+\mu^{2}/\lambda}{\varepsilon})\bigr{|}^{2}\,\bigl{|}(1-\beta(-\tau/\lambda))\bigr{|}^{2}\,d\tau$ $\displaystyle\lesssim\int_{\tau\notin[-5\lambda/4,\,-3\lambda/4]}|\tau|^{1-2/q_{0}}\,|\varepsilon^{-1}\hat{\eta}(\frac{\tau+\mu^{2}/\lambda}{\varepsilon})|^{2}\,d\tau.$ Note that $|\tau+\mu^{2}/\lambda|\approx(|\tau|+\lambda)$ if $\tau\notin[-5\lambda/4,-3\lambda/4]$ and $\mu\in[9\lambda/10,11\lambda/10]$, and since $\hat{\eta}\in{\mathcal{S}}({\mathbb{R}})$ the preceding inequality leads to the trivial bounds (3.33) $\sup_{\mu\in[9\lambda/10,11\lambda/10]}\|\,|D_{t}|^{1/2-1/q_{0}}a(t;\mu)\|_{L^{2}_{t}({\mathbb{R}})}\lesssim\lambda^{-N}.$ Combining this inequality with (3.32) yields (3.27). ∎ We are now able to prove (3.26). For fixed $V\in L^{n/2}({\mathbb{R}}^{n})$, let us split $V=V_{1}+V_{2},$ where (3.34) $V_{1}(x)=\begin{cases}V(x),\,\,\,\text{if }\,|V(x)|\leq\ell\,\,\text{and }\,|x|<R\,\,\\\ 0,\,\,\text{otherwise}.\end{cases}$ The assumption $V\in L^{n/2}({\mathbb{R}}^{n})$ then yields (3.35) $\|V_{2}\|_{L^{n/2}({\mathbb{R}}^{n})}=\delta(\ell,R),\quad\text{with }\,\,\delta(\ell,R)\to 0\,\,\text{as }\,\ell,R\to\infty,$ and we also trivially have (3.36) $\|V_{1}\|_{L^{\infty}({\mathbb{R}}^{n})}\leq\ell.$ To use this, we note that since $-H_{V}=\Delta-V$, $\displaystyle(i\lambda\partial_{t}+\Delta+i\varepsilon\lambda)w$ $\displaystyle=(i\lambda\partial_{t}-H_{V}+i\varepsilon\lambda)w+Vw$ $\displaystyle=(i\lambda\partial_{t}-H_{V}+i\varepsilon\lambda)w+V_{1}\,w+V_{2}\,w,$ and also $w(0,\,\cdot\,)=0$. 
So we can split (3.37) $w=\widetilde{w}+w_{1}+w_{2},$ where (3.38) $(i\lambda\partial_{t}+\Delta+i\varepsilon\lambda)\widetilde{w}=(i\lambda\partial_{t}-H_{V}+i\varepsilon\lambda)w=\widetilde{F},\quad\widetilde{w}(0,\,\cdot\,)=0,$ (3.39) $(i\lambda\partial_{t}+\Delta+i\varepsilon\lambda)w_{1}=V_{1}\,w=F_{1},\quad w_{1}(0,\,\cdot\,)=0,$ and (3.40) $(i\lambda\partial_{t}+\Delta+i\varepsilon\lambda)w_{2}=V_{2}\,w=F_{2},\quad w_{2}(0,\,\cdot\,)=0.$ Note that since $w(t,x)=0$ for $t\notin(0,\varepsilon^{-1})$, each of the forcing terms $\widetilde{F}$, $F_{1}$ and $F_{2}$ also vanishes for such $t$, which allows us to apply the estimates in Theorem 3.1 for $\widetilde{w}$, $w_{1}$ and $w_{2}$. By (3.27) and (3.37) we have for each $N=1,2,\dots$ (3.41) $\displaystyle\bigl{\|}w$ $\displaystyle\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\lesssim\bigl{\|}\beta(-D_{t}/\lambda)w\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}+C_{N}\lambda^{-N}\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})}$ $\displaystyle\leq\bigl{\|}\beta(-D_{t}/\lambda)\widetilde{w}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}+\bigl{\|}\beta(-D_{t}/\lambda)w_{1}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle+\bigl{\|}\beta(-D_{t}/\lambda)w_{2}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}+C_{N}\lambda^{-N}\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})}.$ Based on this we would obtain (3.26) if we have the following three inequalities: (3.42) $\bigl{\|}\beta(-D_{t}/\lambda)\widetilde{w}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq C\lambda^{1/2}\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})},$ as well as (3.43) 
$\bigl{\|}\beta(-D_{t}/\lambda)w_{1}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq\tfrac{1}{4}\bigl{\|}w\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\\\ +C\lambda^{1/2}\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})},$ and finally (3.44) $\bigl{\|}\beta(-D_{t}/\lambda)w_{2}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq\tfrac{1}{4}\bigl{\|}w\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}.$ Indeed, we just combine (3.41)–(3.44) and use a simple bootstrapping argument, which is justified since the right side of (3.44) is finite by the aforementioned Sobolev estimates for $H_{V}$. To prove these three estimates we shall use Theorem 3.1, as we may, since, as mentioned before, the forcing terms in (3.38), (3.39) and (3.40) obey the support assumption in (3.10). To prove (3.42) we note that if $\widetilde{F}$ is as in (3.38) then, since $w$ is as in (3.26), we have (3.45) $\widetilde{F}(t,x)=(i\lambda\partial_{t}-H_{V}+i\varepsilon\lambda)\bigl{(}\eta(\varepsilon t)e^{-it\lambda^{-1}H_{V}}\rho(\sqrt{H_{V}}/\lambda)f(x)\bigr{)}\\\ =i\varepsilon\lambda(\eta^{\prime}(\varepsilon t)+\eta(\varepsilon t))e^{-it\lambda^{-1}H_{V}}\rho(\sqrt{H_{V}}/\lambda)f(x).$ Consequently, we may use the $L^{2}$-estimate, (3.11), in Theorem 3.1 to deduce that $\displaystyle\bigl{\|}\beta(-D_{t}/\lambda)$ $\displaystyle\widetilde{w}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq\lambda^{-1/2}\varepsilon^{-1/2}\|i\varepsilon\lambda(\eta^{\prime}(\varepsilon t)+\eta(\varepsilon t))\cdot e^{-it\lambda^{-1}H_{V}}\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}_{t,x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\lesssim\lambda^{1/2}\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})},$ as desired. To prove (3.43) and (3.44) we shall need to use (3.12) and (3.13) in Theorem 3.1. 
Note that $\frac{1}{q^{\prime}}-\frac{1}{q}=\frac{2}{n},\quad\text{if }\,\,q=2n/(n-2),\,\,q^{\prime}=2n/(n+2).$ Consequently, if we use (3.12), (3.40), Hölder’s inequality and (3.35), then we conclude that we can fix $\ell_{1},R_{1}$ large enough so that we have (3.46) $\displaystyle\bigl{\|}\beta(-D_{t}/\lambda)w_{2}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq C\|V_{2}\,w\|_{L^{2}_{t}L^{2n/(n+2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq C\|V_{2}\|_{L^{n/2}({\mathbb{R}}^{n})}\cdot\|w\|_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq\tfrac{1}{4}\|w\|_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})},$ assuming, as we may, that in the last step $\ell_{1},R_{1}$ are chosen large enough that $C\delta(\ell_{1},R_{1})\leq\tfrac{1}{4}$, where $\delta(\ell_{1},R_{1})$ is as in (3.35). Similarly, by repeating the above argument, and using the support condition on $V_{1}$, we have (3.47) $\displaystyle\bigl{\|}\beta(-D_{t}/\lambda)w_{1}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq C\|V_{1}\,w\|_{L^{2}_{t}L^{2n/(n+2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq C\|V_{1}\|_{L^{n/2}({\mathbb{R}}^{n})}\cdot\|\chi_{R_{1}}w\|_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq C\|V\|_{L^{n/2}({\mathbb{R}}^{n})}\|\chi_{R_{1}}w\|_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})},$ where $\chi_{R_{1}}\in C_{0}^{\infty}({\mathbb{R}}^{n})$ with $\chi_{R_{1}}\equiv 1$ if $|x|\leq R_{1}$ and vanishes if $|x|>2R_{1}$. Although at the moment we do not obtain the right side of (3.43) as we want, we gain a cutoff function $\chi_{R_{1}}$, which will allow us to use (3.13). 
To see this, as in (3.37), we shall split (3.48) $\chi_{R_{1}}w=\chi_{R_{1}}\widetilde{w}+\chi_{R_{1}}w_{1}+\chi_{R_{1}}w_{2},$ with $\widetilde{w},w_{1},w_{2}$ satisfying (3.38), (3.39), (3.40), respectively, but with a different choice of $\ell,R$ in (3.34), which we shall specify later. By (3.41) and (3.47), we would obtain (3.43) if we have the following inequalities (3.49) $\bigl{\|}\chi_{R_{1}}\beta(-D_{t}/\lambda)\widetilde{w}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq C\lambda^{1/2}\|\rho(\sqrt{H_{V}}/\lambda)f\|_{L^{2}({\mathbb{R}}^{n})},$ (3.50) $\bigl{\|}\chi_{R_{1}}\beta(-D_{t}/\lambda)w_{1}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq\tfrac{1}{8C_{1}}\bigl{\|}w\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})},$ and finally (3.51) $\bigl{\|}\chi_{R_{1}}\beta(-D_{t}/\lambda)w_{2}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq\tfrac{1}{8C_{1}}\bigl{\|}w\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})},$ with $C_{1}=C\|V\|_{L^{n/2}({\mathbb{R}}^{n})}$ as in (3.47). Note that (3.49) follows directly from (3.42), and (3.51) follows from (3.46) if we choose $\ell_{2},R_{2}$ large enough such that $R_{2}\geq R_{1}$ and $C\delta(\ell_{2},R_{2})<\frac{1}{8C_{1}}$. Up until now we have not used the condition that $\lambda>\Lambda$. We need this condition to obtain (3.50) with the required small constant on the right side. 
To see this, note that by applying (3.13) for $R=R_{2}$ and using the fact that $|V_{1}|\leq\ell_{2}$ and $\text{supp }V_{1}\subset B_{R_{2}}$, we have (3.52) $\displaystyle\bigl{\|}\chi_{R_{1}}\beta(-D_{t}/\lambda)w_{1}\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq C_{R_{2}}\lambda^{-1/2}\bigl{\|}V_{1}w\bigr{\|}_{L^{2}_{t}L^{2}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq C_{R_{2}}R_{2}\ell_{2}\lambda^{-1/2}\bigl{\|}w\bigr{\|}_{L^{2}_{t}L^{2n/(n-2)}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}.$ Thus, if we choose $\Lambda$ such that $\Lambda^{1/2}=8C\|V\|_{L^{n/2}({\mathbb{R}}^{n})}C_{R_{2}}R_{2}\ell_{2}$, we obtain (3.50) for $\lambda>\Lambda$, which completes the proof of (3.26). The rest of the paper is devoted to the proof of Theorem 3.1, with a focus on the Fourier analysis related to the standard Laplacian $\Delta$ or the free Schrödinger equation in Euclidean space. In other words, we can assume $V\equiv 0$ from now on. To proceed, if $\beta$ is as in (2.46), let us define “wider cutoffs” that we shall also use as follows (3.53) $\widetilde{\beta}(s)=\sum_{|j|<10}\beta(2^{-j}s)\in C^{\infty}_{0}((2^{-10},2^{10})).$ For future use, note that (3.54) $\widetilde{\beta}(s)=1\quad\text{on }\,\,(1/4,4).$ For fixed $0<\varepsilon<1$, our first estimate concerns the operator (3.55) $U(t)=\mathbf{1}_{+}(t)\widetilde{\beta}(P/\lambda)e^{it\lambda^{-1}\Delta}e^{-\varepsilon t},$ where $\mathbf{1}_{+}(s)=\mathbf{1}_{[0,+\infty)}(s)$ denotes the Heaviside function and $P=\sqrt{-\Delta}$. For later use, let us note that we can rewrite this operator. 
Indeed, if we recall that $(2\pi)^{-1}\int_{-\infty}^{\infty}\frac{e^{it\tau}}{i\tau+\varepsilon}\,d\tau=\mathbf{1}_{+}(t)e^{-\varepsilon t},$ we deduce that (3.56) $U(t)f(x)=\frac{i\lambda}{2\pi}\int_{-\infty}^{\infty}\frac{e^{it\tau}}{-\lambda\tau+\Delta+i\varepsilon\lambda}\,\widetilde{\beta}(P/\lambda)f(x)\,d\tau.$ Also, if we regard $U$ as an operator sending functions of $x$ into functions of $x,t$, then its adjoint is the operator (3.57) $U^{*}F(x)=\int_{0}^{\infty}e^{-\varepsilon s}\bigl{(}e^{-is\lambda^{-1}\Delta}\widetilde{\beta}(P/\lambda)F(s,\,\cdot\,)\bigr{)}(x)\,ds.$ Consequently, (3.58) $\int U(t)U^{*}(s)F(s,x)\,ds\\\ =\mathbf{1}_{+}(t)\int_{0}^{\infty}\Bigl{(}e^{i(t-s)\lambda^{-1}\Delta}e^{-\varepsilon(t-s)}\widetilde{\beta}^{2}(P/\lambda)e^{-2\varepsilon s}F(s,\,\cdot\,)\Bigr{)}(x)\,ds.$ Note also that if, say, (3.59) $F(t,x)=0,\quad t\notin[0,\varepsilon^{-1}],$ then the solution to the scaled inhomogeneous Schrödinger equation with damping term (3.60) $(i\lambda\partial_{t}+\Delta+i\varepsilon\lambda)w(t,x)=F(t,x),\quad w(0,\,\cdot\,)=0$ is given by (3.61) $w(t,x)=(i\lambda)^{-1}\int_{0}^{t}\bigl{(}e^{i(t-s)\lambda^{-1}\Delta}e^{-\varepsilon(t-s)}F(s,\,\cdot\,)\bigr{)}(x)\,ds\\\ =(2\pi)^{-1}\int_{0}^{\varepsilon^{-1}}\int_{-\infty}^{\infty}\frac{e^{i(t-s)\tau}}{-\lambda\tau+\Delta+i\varepsilon\lambda}F(s,\,\cdot\,)(x)\,d\tau ds.$ Thus, since $w(t,\,\cdot\,)=0$ for $t<0$, it follows from (3.56), (3.57) and (3.61) that (3.62) $\widetilde{\beta}^{2}(P/\lambda)w(t,x)=(i\lambda)^{-1}\int_{0}^{t}U(t)U^{*}(s)\bigl{(}e^{2\varepsilon s}F(s,\,\cdot\,)\bigr{)}(x)\,ds\\\ =(2\pi)^{-1}\int_{0}^{\varepsilon^{-1}}\int_{-\infty}^{\infty}\frac{e^{i(t-s)\tau}}{-\lambda\tau+\Delta+i\varepsilon\lambda}\widetilde{\beta}^{2}(P/\lambda)F(s,\,\cdot\,)(x)\,d\tau ds.$ Using these formulas, we claim that we can use the Keel-Tao [12] theorem to deduce the following. ###### Proposition 3.4. 
Suppose that $F$ satisfies the support assumption in (3.59) and that $w$ solves (3.60). Then for $\lambda\geq 1$ we have (3.63) $\bigl{\|}\widetilde{\beta}^{2}(P/\lambda)w\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\lambda^{-1/2}\varepsilon^{-1/2}\|F\|_{L^{2}_{t,x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})},$ and also (3.64) $\bigl{\|}\widetilde{\beta}^{2}(P/\lambda)w\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\|F\|_{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})}.$ Additionally, if for all times $t$, $\text{supp }F(t,\cdot)\subset B_{R}$, with $B_{R}$ being a fixed ball of radius $R\geq 1$ in ${\mathbb{R}}^{n}$ centered at the origin, we have (3.65) $\bigl{\|}\widetilde{\beta}^{2}(P/\lambda)w\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times B_{R})}\leq C_{R}\lambda^{-1/2}\|F\|_{L^{2}_{t}L^{2}_{x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})}.$ ###### Proof. Let (3.66) $V(t^{\prime})f(x)=U(\lambda t^{\prime})f(x)=\mathbf{1}_{+}(t^{\prime})e^{-\varepsilon\lambda t^{\prime}}\widetilde{\beta}(P/\lambda)e^{it^{\prime}\Delta}f(x).$ We then clearly have $\|V(t^{\prime})\|_{L^{2}({\mathbb{R}}^{n})\to L^{2}({\mathbb{R}}^{n})}=O(1),$ and since $V(t^{\prime})(V(s^{\prime}))^{*}f(x)=\\\ \mathbf{1}_{+}(t^{\prime})\mathbf{1}_{+}(s^{\prime})e^{-\varepsilon\lambda(t^{\prime}+s^{\prime})}(2\pi)^{-n}\int_{{\mathbb{R}}^{n}}\int_{{\mathbb{R}}^{n}}e^{i(\langle x-y,\xi\rangle-(t^{\prime}-s^{\prime})|\xi|^{2})}\widetilde{\beta}^{2}(|\xi|/\lambda)f(y)d\xi dy,$ by stationary phase methods, we have $\|V(t^{\prime})(V(s^{\prime}))^{*}\|_{L^{1}({\mathbb{R}}^{n})\to L^{\infty}({\mathbb{R}}^{n})}\lesssim|t^{\prime}-s^{\prime}|^{-n/2}.$ We can use the Keel-Tao theorem along with these two inequalities to deduce that 
$\|V(t^{\prime})f\|_{L^{2}_{t^{\prime}}L^{{\frac{2n}{n-2}}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\|f\|_{L^{2}({\mathbb{R}}^{n})},$ as well as $\Bigl{\|}\int_{0}^{t^{\prime}}V(t^{\prime})V^{*}(s^{\prime})G(s^{\prime},\,\cdot\,)\,ds^{\prime}\,\Bigr{\|}_{L^{2}_{t^{\prime}}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\|G\|_{L^{2}_{t}L^{{\frac{2n}{n+2}}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})},$ and $\Bigl{\|}\int_{0}^{\infty}V^{*}(s^{\prime})G(s^{\prime},\,\cdot\,)\,ds^{\prime}\Bigr{\|}_{L^{2}({\mathbb{R}}^{n})}\lesssim\|G\|_{L^{2}_{t}L^{{\frac{2n}{n+2}}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}.$ Using (3.66) we deduce that these inequalities are equivalent to (3.67) $\|U(t)f\|_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\lambda^{1/2}\|f\|_{L^{2}({\mathbb{R}}^{n})},$ as well as (3.68) $\Bigl{\|}\int_{0}^{t}U(t)U^{*}(s)H(s,\,\cdot\,)\,ds\Bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\\\ \lesssim\lambda\,\|H\|_{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ and (3.69) $\Bigl{\|}\int_{0}^{\infty}U^{*}(s)H(s,\,\cdot\,)\,ds\Bigr{\|}_{L^{2}({\mathbb{R}}^{n})}\lesssim\lambda^{1/2}\|H\|_{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})},$ respectively. Using (3.62) with $H=e^{2\varepsilon s}F$ along with (3.68) we obtain (3.64) since $\|H\|_{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}}\approx\|F\|_{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}}$ due to (3.59). 
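The rewriting of $U(t)$ in (3.56)–(3.58) rests on the elementary Fourier–Laplace identity $(2\pi)^{-1}\int e^{it\tau}(i\tau+\varepsilon)^{-1}\,d\tau=\mathbf{1}_{+}(t)e^{-\varepsilon t}$, equivalently, that $\int_{0}^{\infty}e^{-\varepsilon t}e^{-it\tau}\,dt=(\varepsilon+i\tau)^{-1}$. The forward direction is easy to confirm by direct quadrature; the sketch below is a rough numerical check (the truncation point and tolerance are ad hoc).

```python
import cmath

def damped_heaviside_transform(eps, tau, T=120.0, N=240000):
    # trapezoid-rule approximation of the Fourier transform of 1_+(t) e^{-eps t},
    # i.e. of the integral over [0, T] of e^{-(eps + i tau) t} dt;
    # the integrand decays like e^{-eps t}, so truncating at T is harmless
    h = T / N
    s = 0.5 * (1.0 + cmath.exp(-(eps + 1j * tau) * T))
    for k in range(1, N):
        s += cmath.exp(-(eps + 1j * tau) * (k * h))
    return h * s

for eps, tau in [(0.5, 1.0), (1.0, -2.0), (0.25, 3.0)]:
    assert abs(damped_heaviside_transform(eps, tau) - 1.0 / (eps + 1j * tau)) < 1e-4
```

This is exactly why the resolvent-type factor $(-\lambda\tau+\Delta+i\varepsilon\lambda)^{-1}$ appears in (3.56) and (3.61): the damping $e^{-\varepsilon t}$ shifts the pole off the real axis.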
Moreover, since $\|U^{*}(s)\|_{L^{2}({\mathbb{R}}^{n})\to L^{2}({\mathbb{R}}^{n})}=O(1)$, using (3.62) along with (3.67) we find that if $H=e^{2\varepsilon s}F$ $\displaystyle\bigl{\|}\widetilde{\beta}^{2}(P/\lambda)w\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq\lambda^{-1}\int_{0}^{\varepsilon^{-1}}\bigl{\|}\mathbf{1}_{+}(t-s)U(t)U^{*}(s)H(s,\,\cdot\,)\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\,ds$ $\displaystyle\lesssim\lambda^{-1/2}\int_{0}^{\varepsilon^{-1}}\|U^{*}(s)H(s,\,\cdot\,)\|_{L^{2}_{x}}\,ds$ $\displaystyle\lesssim\lambda^{-1/2}\int_{0}^{\varepsilon^{-1}}\|F(s,\,\cdot\,)\|_{L^{2}_{x}}\,ds\leq\lambda^{-1/2}\varepsilon^{-1/2}\|F\|_{L^{2}_{t,x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})},$ as desired, which is (3.63). To prove (3.65), note that if we write (3.70) $\widetilde{\beta}^{2}(P/\lambda)w(t,x)=(i\lambda)^{-1}\int_{{\mathbb{R}}^{n}}\int_{{\mathbb{R}}}K(t,s,x,y)F(s,y)dsdy,$ then by (3.62), we have (3.71) $K(t,s,x,y)=(2\pi)^{-n}\mathbf{1}_{[0,t]}(s)e^{-\varepsilon(t-s)}\int_{{\mathbb{R}}^{n}}e^{i(\langle x-y,\xi\rangle-(t-s)\lambda^{-1}|\xi|^{2})}\widetilde{\beta}^{2}(|\xi|/\lambda)d\xi.$ Since, in proving (3.65), we only consider the case where $x,y\in B_{R}$, by the definition of $\widetilde{\beta}$ in (3.53) and using integration by parts, (3.72) $|K(t,s,x,y)|\leq C_{N}\lambda^{n}(1+\lambda|x-y|+\lambda|t-s|)^{-N},\,\,\,\text{if}\,\,\,|t-s|>2^{11}R.$ Now let us choose a function $\eta\in C_{0}^{\infty}({\mathbb{R}})$ satisfying $\eta(t)=0$ if $|t|>1$, and $\sum_{j=-\infty}^{\infty}\eta(t-j)\equiv 1$. 
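Such an $\eta$ exists by a standard normalization trick: take any smooth bump $\psi$ with $\mathrm{supp}\,\psi\subset[-1,1]$ and $\psi>0$ on $(-1,1)$, and set $\eta(t)=\psi(t)/\sum_{j}\psi(t-j)$; the denominator is $1$-periodic, smooth, and strictly positive. A numerical sketch (the particular bump below is just one choice):

```python
import math

def psi(t):
    # smooth bump, positive exactly on (-1, 1), vanishing to infinite order at ±1
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1.0 else 0.0

def eta(t):
    # normalize by the 1-periodic, strictly positive sum of integer translates;
    # only translates with |t - j| < 1 contribute, so a local window suffices
    j0 = math.floor(t)
    denom = sum(psi(t - j) for j in range(j0 - 1, j0 + 2))
    return psi(t) / denom

for t in [x / 7.0 for x in range(-14, 15)]:
    # supp eta ⊂ [-1, 1], and the integer translates of eta sum to 1
    assert eta(t) == 0.0 or abs(t) < 1.0
    assert abs(sum(eta(t - j) for j in range(-3, 4)) - 1.0) < 1e-12
```

Since the denominator is the same $1$-periodic function at every translate, $\sum_{j}\eta(t-j)=\sum_{j}\psi(t-j)/\sum_{k}\psi(t-k)=1$ identically, which is the property used to build the near-diagonal decomposition (3.73)–(3.74).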
Given $R\geq 1$, we shall set (3.73) $\eta_{j}(t)=\eta_{j,R}(t)=\eta((2^{11}R)^{-1}\cdot t-j),$ and write (3.74) $K(t,s,x,y)=K_{1}(t,s,x,y)+K_{2}(t,s,x,y)\\\ =\sum_{j,k\in\mathbb{Z}:|j-k|\leq 1}\eta_{j}(t)\eta_{k}(s)K(t,s,x,y)+\sum_{j,k\in\mathbb{Z}:|j-k|>1}\eta_{j}(t)\eta_{k}(s)K(t,s,x,y).$ As a consequence of (3.72) and (3.73), we have (3.75) $|K_{2}(t,s,x,y)|\leq C_{N}\lambda^{n}\lambda^{-N}R^{-N}(1+\lambda|x-y|+\lambda|t-s|)^{-N},$ which, after an application of Young’s inequality, gives us much better bounds than the right side of (3.65). Hence, it suffices to consider the case where $t,s$ are supported near the diagonal, i.e., we need to show that (3.76) $\bigl{\|}K_{1}F\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times B_{R})}\leq C_{R}\lambda^{1/2}\|F\|_{L^{2}_{t}L^{2}_{x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})}.$ First, for fixed $j,k$, using (3.62) along with (3.67), we find that if $H=\eta_{k}(s)e^{2\varepsilon s}F$, $\displaystyle\bigl{\|}\iint\eta_{j}(t)\eta_{k}(s)$ $\displaystyle K(t,s,x,y)F(s,y)dsdy\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq\int_{0}^{\varepsilon^{-1}}\bigl{\|}\mathbf{1}_{+}(t-s)U(t)U^{*}(s)H(s,\,\cdot\,)\bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\,ds$ $\displaystyle\lesssim\lambda^{1/2}\int_{0}^{\varepsilon^{-1}}\|U^{*}(s)H(s,\,\cdot\,)\|_{L^{2}_{x}}\,ds$ $\displaystyle\lesssim\lambda^{1/2}\int_{0}^{\varepsilon^{-1}}\|H(s,\,\cdot\,)\|_{L^{2}_{x}}\,ds\leq\lambda^{1/2}R^{1/2}\|H\|_{L^{2}_{t,x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})},$ where we used the fact that $H(s,\cdot)$ is supported in an interval of length $\approx 2^{11}R$ in the last inequality. 
Since $\eta_{j}(t)\eta_{k}(t)=0$ if $|j-k|>1$, by Cauchy–Schwarz and the above inequality, $\displaystyle\bigl{\|}\iint K_{1}(t,s,x,$ $\displaystyle y)F(s,y)dsdy\bigr{\|}^{2}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq\sum_{j}\bigl{\|}\sum_{k:|k-j|\leq 1}\iint\eta_{j}(t)\eta_{k}(s)K(t,s,x,y)F(s,y)dsdy\bigr{\|}^{2}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq\sum_{j}\sum_{k:|k-j|\leq 1}\bigl{\|}\iint\eta_{j}(t)\eta_{k}(s)K(t,s,x,y)F(s,y)dsdy\bigr{\|}^{2}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}$ $\displaystyle\leq\sum_{j}\sum_{k:|k-j|\leq 1}\lambda R\|\eta_{k}(t)e^{2\varepsilon t}F\|^{2}_{L^{2}_{t,x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})}\lesssim\lambda R\|F\|^{2}_{L^{2}_{t}L^{2}_{x}([0,\varepsilon^{-1}]\times{\mathbb{R}}^{n})},$ which completes the proof of (3.76). ∎ Finally, to prove Theorem 3.1 using Proposition 3.4, we shall basically repeat the argument at the end of Section 2 in [8]. To proceed, we shall require the following two elementary lemmas which are analogous to Lemma 2.2 and Lemma 2.3 in [8]. ###### Lemma 3.5. Let $\alpha\in C([0,\infty))$ and $1<p\leq 2<q<\infty$. Then (3.77) $\|\alpha(P)f\|_{L^{q}({\mathbb{R}}^{n})}\leq C_{p,q}\,\bigl{(}\sup_{\mu\geq 0}(1+\mu)^{n(\frac{1}{p}-\frac{1}{q})}|\alpha(\mu)|\bigr{)}\,\|f\|_{L^{p}({\mathbb{R}}^{n})}.$ ###### Lemma 3.6. 
Suppose that (3.78) $|K_{\lambda}(t,t^{\prime})|\leq\lambda(1+\lambda|t-t^{\prime}|)^{-2}.$ Then if $1\leq p\leq q\leq\infty$, we have the following uniform bounds for $\lambda\geq 1$: (3.79) $\Bigl{\|}\,\int K_{\lambda}(t,t^{\prime})\,G(t^{\prime},\,\cdot\,)\,dt^{\prime}\,\Bigr{\|}_{L^{p}_{t}L^{q}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq C\|G\|_{L^{p}_{t}L^{q}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}.$ Also, suppose that $WF(t,x)=\int_{-\infty}^{\infty}\int_{{\mathbb{R}}^{n}}K(t,x;t^{\prime},y)\,F(t^{\prime},y)\,dy\,dt^{\prime}$ and that for each $t,t^{\prime}\in{\mathbb{R}}$ the operator $W_{t,t^{\prime}}f(x)=\int_{{\mathbb{R}}^{n}}K(t,x;t^{\prime},y)f(y)\,dy$ satisfies $\|W_{t,t^{\prime}}f\|_{L^{q}({\mathbb{R}}^{n})}\leq\lambda(1+\lambda|t-t^{\prime}|)^{-2}\,\|f\|_{L^{r}({\mathbb{R}}^{n})}$ for some $1\leq r\leq q\leq\infty$. Then if $1\leq s\leq p\leq\infty$, we have for $\lambda\geq 1$ (3.80) $\|WF\|_{L^{p}_{t}L^{q}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\leq C\lambda^{\frac{1}{s}-\frac{1}{p}}\|F\|_{L^{s}_{t}L^{r}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}.$ Both lemmas are well known. The first lemma is a direct consequence of Sobolev estimates and the spectral theorem, while the second is essentially Theorem 0.3.6 in [19]. For more details about the proofs of the two lemmas, see, e.g., [8]. ###### Proof of Theorem 3.1. We first note that the kernel of $\beta(-D_{t}/\lambda)$ is $O(\lambda(1+\lambda|t-t^{\prime}|)^{-2})$. Therefore, by (3.79) $\|\beta(-D_{t}/\lambda)\widetilde{\beta}^{2}(P/\lambda)w\|_{L^{p}_{t}L^{q}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\lesssim\|\widetilde{\beta}^{2}(P/\lambda)w\|_{L^{p}_{t}L^{q}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}.$ Thus, if, as in Proposition 3.4 and our theorem, the forcing term $F$ satisfies (3.10), it suffices to show that $\beta(-D_{t}/\lambda)(I-\widetilde{\beta}^{2}(P/\lambda))w$ enjoys the bounds in (3.11), (3.12) and (3.13). 
Recalling (3.61), this means that it suffices to show that (3.81) $\Bigl{\|}\int_{0}^{\varepsilon^{-1}}\int_{-\infty}^{\infty}\frac{e^{i(t-s)\tau}}{-\lambda\tau-P^{2}+i\varepsilon\lambda}\beta(-\tau/\lambda)\,\bigl{(}1-\widetilde{\beta}^{2}(P/\lambda)\bigr{)}\,F(s,\,\cdot\,)\,d\tau ds\,\Bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\\\ \lesssim\lambda^{-1/2}\|F\|_{L^{2}_{t,x}({\mathbb{R}}\times{\mathbb{R}}^{n})},$ as well as (3.82) $\Bigl{\|}\int_{0}^{\varepsilon^{-1}}\int_{-\infty}^{\infty}\frac{e^{i(t-s)\tau}}{-\lambda\tau-P^{2}+i\varepsilon\lambda}\beta(-\tau/\lambda)\,\bigl{(}1-\widetilde{\beta}^{2}(P/\lambda)\bigr{)}\,F(s,\,\cdot\,)\,d\tau ds\,\Bigr{\|}_{L^{2}_{t}L^{\frac{2n}{n-2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}\\\ \lesssim\|F\|_{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})}.$ Actually, (3.81) also implies that $\beta(-D_{t}/\lambda)(I-\widetilde{\beta}^{2}(P/\lambda))w$ enjoys a better bound than $\widetilde{\beta}^{2}(P/\lambda)w$ since we do not have the $\varepsilon^{-1/2}$ factor on the right side of (3.11). 
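The exponent bookkeeping used at several points above — the conjugate-exponent gap $\frac{1}{q^{\prime}}-\frac{1}{q}=\frac{2}{n}$ exploited against $V\in L^{n/2}$, the Sobolev exponents appearing in the remainder of the proof, and the Keel–Tao endpoint admissibility — amounts to exact rational arithmetic and can be checked mechanically:

```python
from fractions import Fraction as F

for n in range(3, 20):
    q = F(2 * n, n - 2)       # endpoint Lebesgue exponent 2n/(n-2)
    qp = F(2 * n, n + 2)      # its dual exponent 2n/(n+2)
    assert 1 / qp + 1 / q == 1                 # (q, q') are conjugate
    assert 1 / qp - 1 / q == F(2, n)           # duality gap paired with V in L^{n/2}
    assert n * (F(1, 2) - 1 / q) == 1          # Sobolev exponent from L^2 to L^q
    assert n * (1 / qp - 1 / q) == 2           # Sobolev exponent from L^{q'} to L^q
    assert F(2, 2) + n / q == F(n, 2)          # Keel-Tao endpoint: 2/p + n/q = n/2 at p = 2
```

These five identities are exactly the sources of the powers of $\lambda$ in the frozen-operator bounds below and of the requirement $n\geq 3$ (for $n=2$ the endpoint exponent $2n/(n-2)$ degenerates).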
To use Lemma 3.6, set $\alpha(t,s;\mu)=\int_{-\infty}^{\infty}\frac{e^{i(t-s)\tau}}{-\lambda\tau-\mu^{2}+i\varepsilon\lambda}\beta(-\tau/\lambda)\,\bigl{(}1-\widetilde{\beta}^{2}(\mu/\lambda)\bigr{)}\,d\tau,$ and note that, by (3.54) and the support properties of $\beta$, we have for $j=0,1,2$ $\lambda\,\bigl{|}\lambda^{j}\,\partial_{\tau}^{j}\bigl{(}\bigl{(}1-\widetilde{\beta}^{2}(\mu/\lambda)\bigr{)}\beta(-\tau/\lambda)(-\lambda\tau-\mu^{2}+i\varepsilon\lambda)^{-1}\bigr{)}\,\bigr{|}\lesssim\lambda(\mu^{2}+\lambda^{2})^{-1},$ which, by a simple integration by parts argument, translates to the bound $|\alpha(t,s;\mu)|\lesssim\lambda(1+\lambda|t-s|)^{-2}\cdot(\mu^{2}+\lambda^{2})^{-1}.$ If we use Lemma 3.5, we deduce from this that the “frozen operators” $T_{t,s}h(x)=\int_{-\infty}^{\infty}\frac{e^{i(t-s)\tau}}{-\lambda\tau-P^{2}+i\varepsilon\lambda}\beta(-\tau/\lambda)\,\bigl{(}1-\widetilde{\beta}^{2}(P/\lambda)\bigr{)}h(x)\,d\tau,$ satisfy (3.83) $\|T_{t,s}h\|_{L^{\frac{2n}{n-2}}({\mathbb{R}}^{n})}\lesssim\lambda(1+\lambda|t-s|)^{-2}\cdot\lambda^{-2+1}\|h\|_{L^{2}({\mathbb{R}}^{n})}$ as well as (3.84) $\|T_{t,s}h\|_{L^{\frac{2n}{n-2}}({\mathbb{R}}^{n})}\lesssim\lambda(1+\lambda|t-s|)^{-2}\|h\|_{L^{\frac{2n}{n+2}}({\mathbb{R}}^{n})}$ due to the fact that $n(\frac{1}{2}-\frac{n-2}{2n})=1$ and $n(\frac{n+2}{2n}-\frac{n-2}{2n})=2$. If we combine (3.83) and (3.80), we conclude that the left side of (3.81) is dominated by $\lambda^{-1}\|F\|_{L^{2}_{t,x}({\mathbb{R}}\times{\mathbb{R}}^{n})},$ which is better than the bounds posited in (3.81) by a factor of $\lambda^{-1/2}$. Similarly, if we combine (3.84) and (3.80), we find that the left side of (3.82) is dominated by $\|F\|_{L^{2}_{t}L^{\frac{2n}{n+2}}_{x}({\mathbb{R}}\times{\mathbb{R}}^{n})},$ which completes the proof. ∎ ## References * [1] M. D. Blair, X. Huang, Y. Sire, and C. D. Sogge. Uniform Sobolev estimates on compact manifolds involving singular potentials. arXiv preprint arXiv:2009.06075, 2020. * [2] M. D. Blair, Y. 
Sire, and C. D. Sogge. Quasimode, eigenfunction and spectral projection bounds for Schrödinger operators on manifolds with critically singular potentials. J. Geom. Anal., to appear. * [3] J.-M. Bouclet and H. Mizutani. Uniform resolvent and Strichartz estimates for Schrödinger equations with critical singularities. Trans. Amer. Math. Soc., 370(10):7293–7333, 2018. * [4] J. Bourgain, P. Shao, C. D. Sogge, and X. Yao. On $L^{p}$-resolvent estimates and the density of eigenvalues for compact Riemannian manifolds. Comm. Math. Phys., 333(3):1483–1527, 2015. * [5] T. Coulhon and A. Sikora. Gaussian heat kernel upper bounds via the Phragmén-Lindelöf theorem. Proc. Lond. Math. Soc. (3), 96(2):507–544, 2008. * [6] D. Dos Santos Ferreira, C. E. Kenig, and M. Salo. On $L^{p}$ resolvent estimates for Laplace-Beltrami operators on compact manifolds. Forum Math., 26(3):815–849, 2014. * [7] M. Goldberg. Strichartz estimates for the Schrödinger equation with time-periodic $L^{n/2}$ potentials. J. Funct. Anal., 256(3):718–746, 2009. * [8] X. Huang and C. D. Sogge. Quasimode and Strichartz estimates for time-dependent Schrödinger equations with singular potentials. arXiv preprint arXiv:2011.04007, 2020. * [9] A. D. Ionescu and D. Jerison. On the absence of positive eigenvalues of Schrödinger operators with rough potentials. Geom. Funct. Anal., 13(5):1029–1081, 2003. * [10] A. Jensen and T. Kato. Spectral properties of Schrödinger operators and time-decay of the wave functions. Duke Math. J., 46(3):583–611, 1979. * [11] J.-L. Journé, A. Soffer, and C. D. Sogge. Decay estimates for Schrödinger operators. Comm. Pure Appl. Math., 44(5):573–604, 1991. * [12] M. Keel and T. Tao. Endpoint Strichartz estimates. Amer. J. Math., 120(5):955–980, 1998. * [13] C. E. Kenig, A. Ruiz, and C. D. Sogge. Uniform Sobolev inequalities and unique continuation for second order constant coefficient differential operators. Duke Math. 
J., 55(2):329–347, 1987. * [14] H. Koch and D. Tataru. Sharp counterexamples in unique continuation for second order elliptic equations. J. Reine Angew. Math., 542:133–146, 2002. * [15] H. Mizutani. Uniform Sobolev estimates for Schrödinger operators with scaling-critical potentials and applications. Anal. PDE, 13(5):1333–1369, 2020. * [16] I. Rodnianski and T. Tao. Effective limiting absorption principles, and applications. Comm. Math. Phys., 333(1):1–95, 2015. * [17] P. Shao and X. Yao. Uniform Sobolev resolvent estimates for the Laplace-Beltrami operator on compact manifolds. Int. Math. Res. Not. IMRN, (12):3439–3463, 2014. * [18] C. D. Sogge. Oscillatory integrals and spherical harmonics. Duke Math. J., 53(1):43–65, 1986. * [19] C. D. Sogge. Fourier integrals in classical analysis, volume 210 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, second edition, 2017. * [20] C. D. Sogge and S. Zelditch. A note on $L^{p}$-norms of quasi-modes. In Some topics in harmonic analysis and applications, volume 34 of Adv. Lect. Math. (ALM), pages 385–397. Int. Press, Somerville, MA, 2016. * [21] E. M. Stein and G. Weiss. Introduction to Fourier Analysis on Euclidean Spaces (PMS-32), Volume 32. Princeton University Press, 2016. * [22] P. A. Tomas. A restriction theorem for the Fourier transform. Bull. Amer. Math. Soc., 81:477–478, 1975.
# New constructions of nef classes on self-products of curves Mihai Fulger Department of Mathematics, University of Connecticut, Storrs, CT 06269-1009, USA Institute of Mathematics of the Romanian Academy, P. O. Box 1-764, RO-014700, Bucharest, Romania<EMAIL_ADDRESS>and Takumi Murayama Department of Mathematics, Princeton University, Princeton, NJ 08544-1000, USA<EMAIL_ADDRESS> ###### Abstract. We study the nef cone of self-products of a curve. When the curve is very general of genus $g>2$, we construct a nontrivial class of self-intersection 0 on the boundary of the nef cone. Up to symmetry, this is the only known nontrivial boundary example that exists for all $g>2$. When the curve is general, we identify nef classes that improve on known examples for arbitrary curves. We also consider self-products of more than two copies of the curve. The first author was partially supported by the Simons Foundation Collaboration Grant 579353. The second author was partially supported by the National Science Foundation under Grant Nos. DMS-1701622 and DMS-1902616. ## 1\. Introduction The closure of the ample cone of a projective variety $X$ is the nef cone $\operatorname{{Nef}}(X)$. It is a fundamental invariant that controls morphisms from $X$ to other projective varieties, in particular projective embeddings of $X$. It is important to compute this cone in specific examples; however, this is a difficult problem already on surfaces. Perhaps the most famous open question here is the following: ###### Conjecture 1.1 (Nagata; see [Nag59, Conjecture on p. 772]). Let $\pi\colon X\to\mathbb{P}^{2}$ be the blow-up of $n\geq 10$ very general points in $\mathbb{P}^{2}$ with exceptional divisors $E_{1},\ldots,E_{n}$. Let $H\subset\mathbb{P}^{2}$ be any line. Then $\pi^{*}(\sqrt{n}H)-E_{1}-\cdots-E_{n}\in\operatorname{{Nef}}(X).$ Recall that a property holds very generally on a variety if it holds outside a countable union of proper Zariski-closed subsets. 
Conjecture 1.1 is a particular case of the SHGH conjecture. Note that the divisor in the Nagata conjecture has self-intersection 0, hence if it is nef, then it is on the boundary of the nef cone. Another interesting class of surfaces is self-products of curves. Recall the following open problem: ###### Conjecture 1.2 (see [Laz04a, Remark 1.5.10]). Let $C$ be a smooth projective curve of genus $g$ over $\mathbb{C}$. Denote by $f_{1}$ and $f_{2}$ (resp. $\delta$) the classes of the fibers of the projections (resp. the class of the diagonal $\Delta$) in $C\times C$. Then, we have $(1+\sqrt{g})(f_{1}+f_{2})-\delta\in\operatorname{Nef}(C\times C)$ if $g$ is sufficiently large and $C$ has very general moduli. The self-intersection of $(1+\sqrt{g})(f_{1}+f_{2})-\delta$ is $0$, just like in Conjecture 1.1. In fact, Ciliberto–Kouvidakis [CK99] and Ross [Ros07] prove that the Nagata conjecture implies Conjecture 1.2. In the direction of Conjecture 1.2, Kouvidakis [Kou93, Theorem 2] shows that $\biggl{(}1+\frac{g}{\lfloor\sqrt{g}\rfloor}\biggr{)}(f_{1}+f_{2})-\delta\in\operatorname{{Nef}}(C\times C).$ In particular, the conjecture holds when $g$ is a perfect square. An improvement when $g$ is not a perfect square is offered by [Ros07, (1.9)], which uses work of [SSS04] to prove that $\bigl{(}1+\sqrt{g+1}\bigr{)}(f_{1}+f_{2})-\delta$ is nef. It also makes sense to consider the non-symmetric divisors with zero self-intersection and ask: ###### Question 1.3. If $C$ is a very general curve of (large) genus $g$, and if $a>1$, is the class $a\>\\!f_{1}+\bigl{(}1+\frac{g}{a-1}\bigr{)}f_{2}-\delta$ nef on $C\times C$? Just like with Conjecture 1.2, there are special curves for which the analogous question has a negative answer. For example, if $C$ is hyperelliptic and $a=2$, then $2f_{1}+(1+g)f_{2}-\delta$ is not nef. On arbitrary curves, the best known result is due to Rabindranath [Rab19, Proposition 3.2].
He adapts an idea of Vojta [Voj89] to prove that (1.3.1) $af_{1}+\biggl{(}1+\frac{g}{a-1}+(g-1)(a-1)\biggr{)}f_{2}-\delta\in\operatorname{{Nef}}(C\times C).$ Figure 1. Nef classes on $C\times C$ for $g=10$ and arbitrary $C$ The classes $af_{1}+bf_{2}-\delta$ are represented by the points $(a,b)$. The outside curve is the conjectural nef boundary for very general curves in Question 1.3. Inside, on the left is the graph of $b=1+\frac{g}{a-1}+(a-1)(g-1)$ from (1.3.1). On the right is its reflection. The dotted curve in the middle is $b=1+\frac{g^{2}}{a-1}$ from Remark 5.1. The line segment bounds the convex hull. It is tangent to the curves at the specified points. See Figure 1. The line segment joining $(2,2g)$ and $(2g,2)$ is optimal for hyperelliptic curves. Our sharpest result answers Question 1.3 in the affirmative for $a=2$. Up to symmetry, this is the only settled case of Question 1.3 that we know of, other than Kouvidakis’s when $g$ is a perfect square. ###### Theorem (see Theorem 3.8). Let $C$ be a _very general_ smooth projective curve of genus $g\neq 2$. Then (1.3.2) $2f_{1}+(1+g)f_{2}-\delta\in\operatorname{{Nef}}(C\times C).$ The idea is to degenerate $C$ to a rational curve with $g$ simple nodes in general position using a construction of [Ros07]. The nefness of the limit of the classes (1.3.2) follows from the elementary Proposition 3.9 concerning the blow-up of $\mathbb{P}^{1}\times\mathbb{P}^{1}$ at $g$ general symmetric pairs of points. In Corollary 3.12 we apply the original techniques of Kouvidakis. We degenerate to simple covers of $\mathbb{P}^{1}$ to show that for all integers $2\leq d\leq 1+\sqrt{g}$ and very general $C$: (1.3.3) $d\>\\!f_{1}+\biggl{(}2+\frac{2g}{d-1}-d\biggr{)}f_{2}-\delta\in\operatorname{Nef}(C\times C).$ See Figure 2. Figure 2. Nef classes on $C\times C$ for $g=10$ and very general $C$ To improve visibility, we only show the picture above the diagonal $a=b$. Note the difference in scale. 
The point $(2,g+1)$ comes from Theorem 3.8. Keeping the contour from Figure 1 and from Figure 3 below, we have added two polygonal lines. The inner line represents classes coming from (1.3.3), and from convexity and symmetry. In the outer polygonal line we show classes induced by Theorem 3.8 and by previous results. Next, we propose an approach to Question 1.3 in terms of semistability of vector bundles and give partial results. We start with the simple observation that if $a>1$, then $af_{1}-\delta$ is ample on the fibers of the second projection $pr_{2}\colon C\times C\to C$. If $a$ is an integer and $L_{a}$ is a divisor of degree $a$ on $C$, then we observe in Proposition 3.13.(ii) that the “positivity” of $pr_{1}^{*}L_{a}-\Delta$ (in this case the smallest $b$ such that $af_{1}+bf_{2}-\delta$ is nef) is determined asymptotically by a similar measure of positivity of the sheaves $pr_{2*}\mathcal{O}\bigl{(}m(pr_{1}^{*}L_{a}-\Delta)\bigr{)}$. These are higher conormal bundles of $C$ in the sense of [EL92]. The idea is an instance of a general “linearization” principle that we explain in Proposition 3.15. Fix a curve $C$ and a rational number $a>1$. In Theorem 3.7, we use the above to prove that Question 1.3 is true for $a$ and $C$ if and only if the higher conormal bundles above are semistable in an asymptotic sense on $C$. Even in the asymptotic sense, understanding the semistability of the terms in the sequence of higher conormal bundles seems out of reach. However, the first (or 0-th, depending on convention) term $pr_{2*}\mathcal{O}(pr_{1}^{*}L_{a}-\Delta)$ is well-understood. It is the syzygy bundle $M_{L_{a}}$, i.e., the kernel of the evaluation morphism $H^{0}(C,L_{a})\otimes\mathcal{O}_{C}\to\mathcal{O}_{C}(L_{a})$. Drawing on known results about its semistability, we obtain the following: ###### Theorem (see Theorem 3.4.(i)). Let $C$ be a _general_ smooth projective curve of genus $g\geq 2$ over $\mathbb{C}$. Denote by $f_{1}$ and $f_{2}$ (resp.
$\delta$) the classes of the fibers of the projections (resp. the class of the diagonal in $C\times C$). Then, we have $d\>\\!f_{1}+\biggl{(}1+\frac{g}{d-g}\biggr{)}f_{2}-\delta\in\operatorname{Nef}(C\times C)$ for every integer $d\geq\lfloor 3g/2\rfloor+1$. When $d<2g$ and $g$ is large, we obtain examples outside the convex span of the known examples mentioned above due to Vojta and Rabindranath. Slightly better bounds are given in Theorem 3.4.(ii),(iii). See also Figure 3. Figure 3. Nef classes on $C\times C$ for $g=10$ and general $C$ We again focus above the diagonal. Keeping the contour from the arbitrary case, we have added classes coming from the corresponding three parts of Theorem 3.4 and from the convexity and symmetry of the nef cone. Finally, we also consider self-products of more than two copies of $C$. Let $f_{i}$ be the class of any fiber of the $i$-th projection $C^{n}\to C$. Let $\delta_{ij}$ be the class of the large diagonal $\\{(x_{1},\ldots,x_{n})\operatorname{\bigm{|}}x_{i}=x_{j}\\}$. With assumptions as in Theorem 3.4, it is immediate that $\sum_{i=2}^{n}\biggl{(}\Bigl{(}1+\frac{g}{d-g}\Bigr{)}f_{1}+df_{i}-\delta_{1i}\biggr{)}\in\operatorname{{Nef}}(C^{n}).$ We show furthermore in Theorem 4.3 that if $C$ is an arbitrary smooth complex projective curve of positive genus and $d\in\mathbb{Z}$, then for certain values of $n$ and $d$, $(n-1)\cdot\biggl{(}1+\frac{g}{d-g}\biggr{)}f_{1}+d\cdot\sum_{i=2}^{n}f_{i}-\sum_{1\leq i<j\leq n}\delta_{ij}\in\operatorname{{Nef}}(C^{n}).$ The proof makes use of the rich geometry of symmetric products of curves, and a result of Kempf on continuous global generation of vector bundles on abelian varieties. The material in this paper grew out of the work of the authors on Seshadri constants for vector bundles in [FM21]. Versions of Theorems 3.4 and 3.7 were initially included in its preprint form. 
They are now separated as they are of independent interest, and the proofs do not need the machinery of Seshadri constants. The results for very general curves and for products of arbitrarily many factors in Section 4 did not appear in [FM21]. ### Acknowledgments We thank Marian Aprodu, Thomas Bauer, Renzo Cavalieri, Alexandru Chirvasitu, Lawrence Ein, Mattias Jonsson, Alex Küronya, Robert Lazarsfeld, Emanuele Macrì, Eyal Markman, Mircea Mustaţă, Sönke Rollenske, Julius Ross, Praveen Kumar Roy, John Sheridan, and Brooke Ullery for useful discussions. ## 2\. Background and notation Let $X$ be a projective scheme over an algebraically closed field. While our main results are over $\mathbb{C}$, some of our important tools (Proposition 3.13 and its generalization in Proposition 3.15) are valid in arbitrary characteristic. ### 2.1. Formal twists of coherent sheaves Let $\mathcal{V}$ be a coherent sheaf on $X$, and let $\lambda$ be an $\mathbb{R}$-Cartier $\mathbb{R}$-divisor on $X$. Following the case of bundles in [Laz04b, Section 6.2], the _formal twist_ of $\mathcal{V}$ by $\lambda$ is the pair $(\mathcal{V},\lambda)$, denoted by $\mathcal{V}\langle\lambda\rangle$. When $D$ is an integral Cartier divisor, the formal twist $\mathcal{V}\langle D\rangle$ is identified with $\mathcal{V}\otimes\mathcal{O}_{X}(D)$. The theory of twisted _vector bundles_ has pullbacks. In particular, when $\mathcal{V}$ is a vector bundle and $D$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor and $f\colon X^{\prime}\to X$ is a finite morphism such that $f^{*}D$ is actually Cartier, then $f^{*}\mathcal{V}\langle f^{*}D\rangle$ is $f^{*}\mathcal{V}\otimes\mathcal{O}_{X^{\prime}}(f^{*}D)$. If $\mathcal{V}$ is a vector bundle, we have Chern classes $c_{1}(\mathcal{V}\langle\lambda\rangle)\coloneqq c_{1}(\mathcal{V})+\operatorname{rk}\mathcal{V}\cdot\lambda$. They are natural for pullbacks. 
Tensor products are defined by $\mathcal{V}\langle\lambda\rangle\otimes\mathcal{V}^{\prime}\langle\lambda^{\prime}\rangle\coloneqq(\mathcal{V}\otimes\mathcal{V}^{\prime})\langle\lambda+\lambda^{\prime}\rangle$. Generally, when we talk about extensions, subsheaves, or quotients of twisted sheaves, or about morphisms between twisted sheaves, we understand that the twist $\lambda$ is fixed. Let $\mathbb{P}(\mathcal{V})\coloneqq\operatorname{Proj}_{\mathcal{O}_{X}}\operatorname{Sym}^{*}\mathcal{V}$. Let $\rho\colon\mathbb{P}(\mathcal{V})\to X$ denote the natural projection map, and let $\xi$ denote the first Chern class of the relative $\mathcal{O}_{\mathbb{P}(\mathcal{V})}(1)$ line bundle. If $\lambda$ is an $\mathbb{R}$-Cartier $\mathbb{R}$-divisor on $X$, define $\mathbb{P}(\mathcal{V}\langle\lambda\rangle)$ as $\mathbb{P}(\mathcal{V})$, polarized with the $\rho$-ample $\mathbb{R}$-Cartier $\mathbb{R}$-divisor $\mathcal{O}_{\mathbb{P}(\mathcal{V}\langle\lambda\rangle)}(1)\coloneqq\mathcal{O}_{\mathbb{P}(\mathcal{V})}(1)\langle\rho^{*}\lambda\rangle$ whose first Chern class is $\xi+\rho^{*}\lambda$. This is in line with the classical formula $\mathcal{O}_{\mathbb{P}(\mathcal{V}\otimes\mathcal{O}_{X}(D))}(1)=\mathcal{O}_{\mathbb{P}(\mathcal{V})}(1)\otimes\rho^{*}\mathcal{O}_{X}(D)$ whenever $D$ is a Cartier divisor. The sheaf $\mathcal{V}$ is said to be _ample_ (resp. _nef_) if the Cartier divisor class $\xi$ has the same property. This extends formally to twists. ### 2.2. Slopes and positivity Assume that $C$ is a smooth projective curve. Let $\mathcal{V}$ be a coherent sheaf and let $\lambda$ be an $\mathbb{R}$-divisor. The _degree_ of $\mathcal{V}\langle\lambda\rangle$ is $\deg\mathcal{V}+\operatorname{rk}\mathcal{V}\cdot\deg\lambda$. 
The _slope_ of the twisted coherent sheaf $\mathcal{V}\langle\lambda\rangle$ on $X$ is $\mu(\mathcal{V}\langle\lambda\rangle)\coloneqq\frac{\deg\mathcal{V}\langle\lambda\rangle}{\operatorname{rk}\mathcal{V}}.$ By convention, the slope of torsion sheaves is infinite. If $\mathcal{V}$ and $\mathcal{V}^{\prime}$ are (twisted) coherent sheaves, then (2.0.1) $\mu(\mathcal{V}\otimes\mathcal{V}^{\prime})=\mu(\mathcal{V})+\mu(\mathcal{V}^{\prime}).$ The smallest slope of any quotient of $\mathcal{V}$ is denoted by $\mu_{\rm min}(\mathcal{V})$. Put $\mu_{\min}(\mathcal{V}\langle\lambda\rangle)\coloneqq\mu_{\min}(\mathcal{V})+\deg\lambda$. A quotient of $\mathcal{V}$ with minimal slope exists, and is determined by the Harder–Narasimhan filtration of $\mathcal{V}$. In characteristic $0$, set $\overline{\mu}_{\rm min}(\mathcal{V})\coloneqq\mu_{\rm min}(\mathcal{V})$. In characteristic $p>0$, let $F\colon C\to C$ be the absolute Frobenius morphism, and consider $\overline{\mu}_{\mathrm{min}}(\mathcal{V})\coloneqq\lim_{n\to\infty}\frac{\mu_{\mathrm{min}}\bigl{(}(F^{n})^{*}\mathcal{V}\bigr{)}}{p^{n}}.$ The sequence in the limit is weakly decreasing and eventually stationary. In fact, [Lan04, Theorem 2.7] proves that there exists $\delta=\delta_{\mathcal{V}}\geq 0$ such that the Harder–Narasimhan filtration of $(F^{\delta+n})^{*}\mathcal{V}$ is the pullback of the Harder–Narasimhan filtration of $(F^{\delta})^{*}\mathcal{V}$ for all $n\geq 0$. In particular, the rational number $\overline{\mu}_{\rm min}(\mathcal{V})=\frac{\mu_{\rm min}((F^{\delta})^{*}\mathcal{V})}{p^{\delta}}$ is the smallest normalized slope of all quotients of all iterated Frobenius pullbacks $(F^{n})^{*}\mathcal{V}$. For twisted sheaves, put $\overline{\mu}_{\rm min}(\mathcal{V}\langle\lambda\rangle)=\operatorname{\overline{\mu}_{\rm min}}(\mathcal{V})+\deg\lambda$. ###### Lemma 2.1 ([BP14, Theorem 1.1]). Let $C$ be a smooth projective curve over an algebraically closed field. 
Let $\mathcal{V}\langle\lambda\rangle$ be a twisted vector bundle on $C$. Denote $X\coloneqq\mathbb{P}(\mathcal{V}\langle\lambda\rangle)$ with bundle map $\rho\colon X\to C$. Denote by $\xi$ the numerical first Chern class of the relative (twisted) $\mathcal{O}_{\mathbb{P}(\mathcal{V}\langle\lambda\rangle)}(1)$ sheaf, and by $f$ the class of a fiber of $\rho$. Then, we have $\operatorname{{Nef}}(X)=\bigl{\langle}\xi-\operatorname{\overline{\mu}_{\rm min}}(\mathcal{V}\langle\lambda\rangle)\>\\!f,f\bigr{\rangle}.$ In particular, $\mathcal{V}\langle\lambda\rangle$ is nef if and only if $\operatorname{\overline{\mu}_{\rm min}}(\mathcal{V}\langle\lambda\rangle)\geq 0$. The version in [BP14] holds more generally for Grassmann bundles over curves. The result was seemingly first proved by Barton [Bar71, Theorem 2.1]. It is also stated explicitly by Brenner in [Bre04, Theorem 2.3] and [Bre06, p. 534], Biswas in [Bis05, Theorem 1.1], and Zhao in [Zha17, Theorem 4.3]. In characteristic zero it follows easily from Hartshorne’s Theorem [Laz04b, Theorem 6.4.15], as observed by Miyaoka [Miy87]. A similar computation, extending the result to nef classes of arbitrary codimension, is carried out by the first author in [Ful11]. ###### Remark. In positive characteristic, it is necessary to work with $\operatorname{\overline{\mu}_{\rm min}}(\mathcal{V})$ instead of $\mu_{\rm min}(\mathcal{V})$. See [Har71, Example 3.2] for a counterexample to the naïve positive characteristic analogue of Hartshorne’s Theorem [Laz04b, Theorem 6.4.15]. ## 3\. Products of curves Let $C$ be a smooth projective curve of genus $g$ over $\mathbb{C}$. Let $p$ and $q$ denote the projections onto each factor of $C\times C$. Let $f_{1}$ denote the class of a fiber of $p$ and $f_{2}$ the class of a fiber of $q$. Denote by $\delta$ the class of the diagonal $\Delta$.
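For the intersection computations in this section, recall the standard intersection numbers on the surface $C\times C$; the last one follows from adjunction applied to $\Delta\cong C$:

```latex
f_{1}^{2}=f_{2}^{2}=0,\qquad
f_{1}\cdot f_{2}=1,\qquad
f_{1}\cdot\delta=f_{2}\cdot\delta=1,\qquad
\delta^{2}=2-2g.
```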
For large genera, it is a tantalizing open problem to understand the nef cone of $C\times C$, even in the symmetric slice given by intersecting with the span of $f_{1}+f_{2}$ and $\delta$. ### 3.1. Elementary considerations ###### Remark 3.1 (Necessary conditions for nefness). Below, $a$, $b$, and $c$ denote non-negative real numbers. 1. (1) The classes $f_{1}$ and $f_{2}$ are clearly on the boundary of $\operatorname{{Nef}}(C\times C)$. 2. (2) The class $af_{1}+bf_{2}+c\delta$ is nef if and only if $(af_{1}+bf_{2}+c\delta)\cdot\delta=a+b-c(2g-2)\geq 0$. For example, $(g-1)f_{1}+(g-1)f_{2}+\delta$ is the pullback of the theta polarization on the Jacobian of $C$ via the difference map $\displaystyle C\times C$ $\displaystyle\longrightarrow{\rm Jac}(C)$ $\displaystyle(x,y)$ $\displaystyle\longmapsto\mathcal{O}_{C}(x-y).$ 3. (3) If $b$ and $c$ are not both zero, then the class $\pm af_{1}-bf_{2}-c\delta$ is not nef (or even pseudo-effective), because it has negative intersection with $f_{1}$. By symmetry, the analogous statement holds for $-af_{1}\pm bf_{2}-c\delta$ if $a$ and $c$ are not both zero. 4. (4) The class $-af_{1}-bf_{2}+c\delta$ is only pseudo-effective when $a=b=0$, and only nef when $a=b=c=0$. Thus, the classes that are not well-understood (up to scaling and interchanging $f_{1}$ and $f_{2}$) are those of form 1. (1) $af_{1}+bf_{2}-\delta$. By intersecting with $f_{1}$ and $f_{2}$, we get $a\geq 1$ and $b\geq 1$ as necessary conditions for these classes to be nef. By considering their self-intersections, we also have $a>1$ and $b\geq 1+\frac{g}{a-1}$ as necessary conditions. 2. (2) $-af_{1}+bf_{2}+\delta$. Here $0\leq a<1$ and $b\geq\frac{g}{1-a}-1$ are necessary conditions for the class to be nef. 3. (3) $af_{1}-bf_{2}+\delta$ with $0\leq b<1$ and $a\geq\frac{g}{1-b}-1$. ###### Remark 3.2 (Genus $g=1$). The conditions above are also sufficient when $C$ is an elliptic curve. See [Laz04a, Lemma 1.5.4]. 
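For instance, the self-intersection constraint in case (1) above is the computation

```latex
(af_{1}+bf_{2}-\delta)^{2}
  = 2ab-2a-2b+(2-2g)
  = 2\bigl((a-1)(b-1)-g\bigr),
```

using $f_{1}\cdot f_{2}=1$, $f_{i}\cdot\delta=1$, and $\delta^{2}=2-2g$. A nef class on a surface has nonnegative self-intersection, so nefness forces $(a-1)(b-1)\geq g$; for $g\geq 1$ this gives $a>1$, and then $b\geq 1+\frac{g}{a-1}$. Cases (2) and (3) follow from the same computation with the appropriate signs.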
Question 1.3, which asks for the nefness of the class (3.2.1) $a\>\\!f_{1}+\biggl{(}1+\frac{g}{a-1}\biggr{)}f_{2}-\delta,$ predicts that the conditions above are sufficient for classes of the form $af_{1}+bf_{2}-\delta$ for very general curves of sufficiently large genus. ### 3.2. Nef classes for arbitrary curves ###### Remark 3.3 (Rabindranath–Vojta divisors). Let $C$ be an _arbitrary_ smooth projective curve of genus $g\geq 1$. Inspired by [Voj89], Rabindranath in [Rab19, Proposition 3.2] proves that if $r,s>0$ and $r\geq\frac{(g+s)(g-1)}{s}$, then $(\sqrt{(g+s)r^{-1}}+1)f_{1}+(\sqrt{(g+s)r}+1)f_{2}-\delta$ is nef. (Note that [Rab19] denotes our class $\delta-f_{1}-f_{2}$ by $\delta$.) We thereby deduce the nefness of the divisor (3.3.1) $a\>\\!f_{1}+\biggl{(}1+\frac{g}{a-1}+(g-1)(a-1)\biggr{)}f_{2}-\delta$ for $a>1$. These are close to the conjectural bound (3.2.1) for $a$ close to 1. See Figure 1. The original argument of Vojta [Voj89] applies to the classes $-af_{1}+bf_{2}+\delta$ with $0\leq a<1$ and proves that (3.3.2) $-a\>\\!f_{1}+\biggl{(}-1+\frac{g}{1-a}+(g-1)(1-a)\biggr{)}f_{2}+\delta\text{ is nef}.$ ### 3.3. Our main results for general curves We construct examples of nef classes for $C$ general. They improve the examples in Remark 3.3 that were valid for arbitrary $C$. ###### Theorem 3.4. Let $C$ be a _general_ smooth projective curve of genus $g$ over $\mathbb{C}$. Then 1. (i) If $g\geq 2$, then for all integers $d\geq\lfloor 3g/2\rfloor+1$ the divisor class $d\>\\!f_{1}+\left(1+\frac{g}{d-g}\right)f_{2}-\delta\text{ is nef}.$ 2. (ii) If $g\geq 3$, then for all integers $2g-2\geq d\geq\lfloor 3g/2\rfloor$ the divisor class $d\>\\!f_{1}+\frac{d}{d-g+1}f_{2}-\delta\text{ is nef}.$ 3. (iii) If $g\geq 10$, then the divisor class $\bigl{(}\lfloor 3g/2\rfloor-3\bigr{)}f_{1}+\frac{\lfloor 3g/2\rfloor-3}{\lfloor g/2\rfloor-1}f_{2}-\delta\text{ is nef}.$ See Figure 3 for a representation in genus 10 of how Theorem 3.4 improves Remark 3.3 for general curves.
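As a point of comparison with the conjectural bound (3.2.1): the classes in Theorem 3.4.(i) have strictly positive self-intersection, since $(af_{1}+bf_{2}-\delta)^{2}=2((a-1)(b-1)-g)$ with $a=d$ and $b=1+\frac{g}{d-g}$ gives

```latex
\Bigl(d\>\!f_{1}+\Bigl(1+\tfrac{g}{d-g}\Bigr)f_{2}-\delta\Bigr)^{2}
  = 2\Bigl((d-1)\cdot\tfrac{g}{d-g}-g\Bigr)
  = \frac{2g(g-1)}{d-g} > 0
```

for $g\geq 2$ and $d>g$. These classes therefore stay strictly inside the conjectural boundary of Question 1.3, consistent with Figure 3.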
When $C$ is an arbitrary smooth projective curve of genus $g\in\\{0,1\\}$, in fact $df_{1}+\bigl{(}1+\frac{g}{d-g}\bigr{)}f_{2}-\delta$ is nef for all _real_ $d>1$. See Remark 3.2. The key ingredient of our proof of Theorem 3.4 is the semistability of kernel bundles. ###### Definition 3.5. Let $X$ be a proper scheme over a field, and let $L$ be a locally free sheaf (usually a line bundle) on $X$. The _kernel bundle_ (sometimes called the _Lazarsfeld–Mukai bundle_) $M_{L}$ is defined by the following exact sequence: $0\longrightarrow M_{L}\longrightarrow H^{0}(X,L)\otimes\mathcal{O}_{X}\overset{{\rm ev}}{\longrightarrow}L.$ When $X=C$ is a curve, then $M_{L}=q_{*}(p^{*}L(-\Delta))$. In general $M_{L}=q_{*}(p^{*}L\otimes\mathcal{I}_{\Delta})$. If $L$ is globally generated on $X$, then the invariants of $M_{L}$ are determined by $L$. For example $c_{1}(M_{L})=-c_{1}(L)$ and ${\rm rk}\,M_{L}=h^{0}(X,L)-{\rm rk}\,L$. ###### Proof of Theorem 3.4. (i). On any smooth curve $C$, the bundle $M_{L}$ is semistable if $L$ is globally generated of degree $d$ with $d-2(h^{0}(C,L)-1)\leq{\rm Cliff}(C)$. This result appears in several references, e.g., [PR88], [But94], [Cam08], [BPO09], [MS12, Theorem 1.3], or [ES12, Proposition 3.1]. When $C$ is general, ${\rm Cliff}(C)=\lfloor(g-1)/2\rfloor$ by [ACGH85]. If furthermore $L$ is general of degree $d\geq\lfloor 3g/2\rfloor+1\geq g+2$, then $L$ is globally generated, and so is $L(-x)$ for general $x\in C$ (e.g., by [KM20, §3]). Furthermore $L$ is non-special (i.e., $h^{1}(C,L)=0$), therefore $h^{0}(C,L)-1=d-g$, and $d-2(h^{0}(C,L)-1)=2g-d\leq\lfloor(g-1)/2\rfloor$. We deduce that $M_{L}$ is semistable of slope $-\frac{d}{d-g}=-\bigl{(}1+\frac{g}{d-g}\bigr{)}$. By Lemma 2.1, the twisted bundle $M_{L}\langle\frac{d}{d-g}x\rangle$ is nef. 
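To spell out the last two steps of the argument for (i): since $L$ is globally generated with $h^{1}(C,L)=0$, we have $c_{1}(M_{L})=-c_{1}(L)$ and $\operatorname{rk}M_{L}=h^{0}(C,L)-1=d-g$, so twisting by the $\mathbb{Q}$-divisor $\frac{d}{d-g}x$ gives

```latex
\mu\Bigl(M_{L}\Bigl\langle\tfrac{d}{d-g}\,x\Bigr\rangle\Bigr)
  = \frac{-d}{d-g}+\frac{d}{d-g} = 0.
```

Since $M_{L}$ is semistable and we are in characteristic zero, $\operatorname{\overline{\mu}_{\rm min}}$ of the twist equals this slope, and Lemma 2.1 yields nefness.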
Furthermore the natural fiberwise evaluation map $\epsilon\colon q^{*}M_{L}\to p^{*}L(-\Delta)$ relative to $q$ specializes over general $x$ to $H^{0}(C,L(-x))\otimes\mathcal{O}_{C}\to L(-x)$, hence it is surjective on the general fiber. Note that $p^{*}L(-\Delta)$ has degree $d-1$ on the fibers of $q$, hence it is relatively ample. Proposition 3.13.(i) applies to the twisted map $\epsilon\langle\frac{d}{d-g}q^{*}x\rangle$ proving that $p^{*}L(-\Delta)\langle\frac{d}{d-g}q^{*}x\rangle$ is nef. This proves the claim. (ii). When $L$ is globally generated of degree $d$ and $h^{1}(C,L)=1$, then $\mu(M_{L})=-\frac{d}{d-g+1}$. We look for $L$ satisfying the following properties: 1. (a) $L$ is globally generated of degree $d$ with $2g-2\geq d\geq\lfloor 3g/2\rfloor$. 2. (b) $h^{1}(C,L)=1$. 3. (c) $L(-x)$ is globally generated for general $x\in C$. Note that conditions (a) and (b) together imply $d-2(h^{0}(C,L)-1)\leq\lfloor(g-1)/2\rfloor$, which gives the semistability of $M_{L}$. In fact the inequality is strict. Condition (b) is equivalent by Serre duality to $L=\omega_{C}(-E)$ for some effective divisor $E$ with $h^{0}(C,E)=1$. Let $e=\deg E\geq 0$. Condition (b) is then met if we pick $E$ with $0\leq e<{\rm gon}(C)$. For general $C$, we have ${\rm gon}(C)=\bigl{\lfloor}\frac{g+3}{2}\bigr{\rfloor}=\bigl{\lceil}\frac{g}{2}+1\bigr{\rceil}$. We now focus on (c). Let $p\in C$ be an arbitrary point. The bundle $L(-x)$ is generated at $p$ if $H^{1}(C,L(-x-p))\to H^{1}(C,L(-x))$ is an isomorphism. By Serre duality, this is equivalent to the natural map $H^{0}(C,E+x)\to H^{0}(C,E+x+p)$ being an isomorphism, where $L=\omega_{C}(-E)$ as above. The map is in any case injective, and both spaces are at least 1-dimensional. It is sufficient to ask $h^{0}(C,E+x+p)=1$. As above, for $C$ general, this is implied by $2\leq e+2<\bigl{\lfloor}\frac{g+3}{2}\bigr{\rfloor}$. In fact, under this assumption, $L(-x)$ is globally generated for all $x$. 
Note that if $0\leq e\leq\bigl{\lfloor}\frac{g+3}{2}\bigr{\rfloor}-3$, then $d=2g-2-e$ is in the range $2g-2\geq d\geq\lfloor 3g/2\rfloor$. Finally, to settle (a), we want to show that for all $0\leq e\leq\bigl{\lfloor}\frac{g+1}{2}\bigr{\rfloor}-3$ we can find $E$ effective of degree $e$ with $L=\omega_{C}(-E)$ globally generated. Arguing as in (c), we find that any effective $E$ will do. (iii). For $d=\lfloor\frac{3g}{2}\rfloor-3$, we have $e=2g-2-d=\lfloor\frac{g+3}{2}\rfloor$. This is the gonality of a general curve. On such a general curve $C$ we can pick a divisor $E$ of degree $e$ that is globally generated and $h^{0}(C,E)\geq 2$. In fact $h^{0}(C,E+x+p)=2$ for all $x,p\in C$ because $e+2=\lfloor\frac{g+3}{2}\rfloor+2<\frac{2g}{3}+2$, the next Brill–Noether threshold, under the assumption $g\geq 10$. As in part (ii) we deduce that $L=\omega_{C}(-E)$ and $L(-x)$ are globally generated for all $x\in C$. In this case $h^{1}(C,L)=h^{0}(C,E)=2$ and the inequality $d-2(h^{0}(C,L)-1)=e-2(h^{0}(C,E)-1)\leq\lfloor\frac{g-1}{2}\rfloor$ that gives the semistability of $M_{L}$ is in fact an equality. The divisors $L$ and $E$ compute the Clifford index of the curve. This is what restricts the result of (iii) to just one class. ∎ ###### Remark. The Rabindranath–Vojta examples (3.3.1) or the generalized Kouvidakis classes (3.12.1) for $a=2$ give $2f_{1}+2gf_{2}-\delta\in\operatorname{{Nef}}(C\times C)$. Theorem 3.4.(i) for $d=2g$ gives $2gf_{1}+2f_{2}-\delta\in\operatorname{{Nef}}(C\times C)$, which is the same class up to symmetry. For this reason, the divisors in Theorem 3.4.(i) improve the Rabindranath–Vojta examples (3.3.1) only in the range $\lfloor 3g/2\rfloor+1\leq d<2g$, which is nonempty when $g\geq 3$. See Figure 3. Towards answering Question 1.3, we expect that better bounds will arise from an understanding of the semistability of higher-order generalizations of the bundles $M_{L}$. ###### Definition 3.6. 
If $L$ is a Cartier divisor on $C$ and $i\geq 0$ is an integer, we denote $M^{(i-1)}(L)\coloneqq q_{*}\bigl{(}p^{*}\mathcal{O}_{C}(L)\otimes\mathcal{O}_{X}(-i\Delta)\bigr{)}$ and $T^{i-1}(L)\coloneqq q_{*}\bigl{(}p^{*}\mathcal{O}_{C}(L)\otimes\mathcal{O}_{X}(i\Delta)\bigr{)}.$ We have $M^{(0)}(L)=M_{L}$. When $L$ is globally generated, this is the pullback of the twisted cotangent bundle $\Omega_{\mathbb{P}^{r}}(1)$ via the morphism induced by $\lvert L\rvert$. If $\lvert L\rvert$ is an embedding, then $M^{(1)}(L)=N^{\vee}_{C}\mathbb{P}^{r}\otimes\mathcal{O}_{C}(L)$ is a twist of the conormal bundle. In [EL92], Ein and Lazarsfeld use the notation $R^{i-1}(L)$ instead of $M^{(i-1)}(L)$, and call them _higher conormal bundles_. We show that Question 1.3 can be restated in terms of the semistability in an asymptotic sense of higher conormal bundles of $C$. ###### Theorem 3.7. Let $C$ be an _arbitrary_ smooth projective curve of genus $g$ over $\mathbb{C}$. 1. (i) If $a>1$ is a rational number, then the class (3.2.1) $af_{1}+\bigl{(}1+\frac{g}{a-1}\bigr{)}f_{2}-\delta$ is nef if and only if the sheaves $M^{(n-1)}(nL)$ are _asymptotically semi-stable_, i.e., $\lim_{n\to\infty}\frac{1}{n}\mu_{\rm min}\bigl{(}M^{(n-1)}(nL)\bigr{)}=\lim_{n\to\infty}\frac{1}{n}\mu\bigl{(}M^{(n-1)}(nL)\bigr{)},$ where $L$ is an arbitrary $\mathbb{Q}$-divisor on $C$ with $\deg L=a$, and $n$ is sufficiently divisible. (It makes sense to ask if $M^{(n-1)}(nL)$ is (semi)stable for large divisible $n$; see also [EL92, Conjecture 4.2].) 2. (ii) If $0\leq a<1$ is rational, then $-af_{1}+\bigl{(}-1+\frac{g}{1-a}\bigr{)}f_{2}+\delta$ is nef if and only if the sheaves $T^{n-1}(-nL)$ are asymptotically semi-stable. ###### Proof. (i). Consider the $q$-ample class $af_{1}-\delta$.
Since the class (3.2.1) has self-intersection zero, it is nef if and only if $\sup\bigl{\\{}t\operatorname{\bigm{|}}af_{1}-tf_{2}-\delta\in\operatorname{{Nef}}(X)\bigr{\\}}=-\bigl{(}1+\frac{g}{a-1}\bigr{)}$. By Proposition 3.13.(ii), this holds if and only if $\frac{\mu_{\min}(M^{(n-1)}(nL))}{n}$ limits to $-1-\frac{g}{a-1}$. Recall that $L$ is a $\mathbb{Q}$-divisor on $C$ of degree $a$. When computing the limit we restrict ourselves to $n$ such that $na\in\mathbb{Z}$. When $a>1$, for large divisible $n$ we have exact sequences $0\longrightarrow M^{(n-1)}(nL)\longrightarrow H^{0}\bigl{(}C,\mathcal{O}(nL)\bigr{)}\otimes\mathcal{O}_{C}\longrightarrow P^{n-1}\mathcal{O}(nL)\longrightarrow 0.$ Recall that if $\mathcal{L}$ is a line bundle, then $P^{n-1}\mathcal{L}$ denotes the bundle of principal parts $q_{*}(p^{*}\mathcal{L}\otimes\mathcal{O}_{n\Delta})$. It is a rank $n$ vector bundle with a natural filtration with quotients $\mathcal{L}$, $\mathcal{L}\otimes\omega_{C}$, …, $\mathcal{L}\otimes\omega^{\otimes(n-1)}_{C}$. From this, one computes $\mu\bigl{(}M^{(n-1)}(nL)\bigr{)}=-n\biggl{(}1+\frac{ng}{na+1-g-n}\biggr{)}.$ As $n$ grows, $\frac{1}{n}\mu(M^{(n-1)}(nL))$ approaches $-\bigl{(}1+\frac{g}{a-1}\bigr{)}$. In particular, the nefness of $af_{1}+(1+\frac{g}{a-1})f_{2}-\delta$ is equivalent to the asymptotic semistability of $M^{(n-1)}(nL)$. (ii). Assume now $0\leq a<1$, and consider the $q$-ample class $-af_{1}+\delta$. 
For large divisible $n$, pushing forward the exact sequence $0\to p^{*}\mathcal{O}(-nL)\to p^{*}\mathcal{O}(-nL)\otimes\mathcal{O}(n\Delta)\to p^{*}\mathcal{O}(-nL)\otimes\mathcal{O}(n\Delta)|_{n\Delta}\to 0$ by $q$, we obtain an exact sequence $0\longrightarrow T^{n-1}(-nL)\longrightarrow q_{*}\bigl{(}p^{*}\mathcal{O}(-nL)\otimes\mathcal{O}(n\Delta)|_{n\Delta}\bigr{)}\longrightarrow H^{1}\bigl{(}C,\mathcal{O}(-nL)\bigr{)}\otimes\mathcal{O}_{C}\longrightarrow 0.$ From Riemann–Roch and by considering the surjections $q_{*}\bigl{(}p^{*}\mathcal{O}(-nL)\otimes\mathcal{O}(n\Delta)|_{(i+1)\Delta}\bigr{)}\twoheadrightarrow q_{*}\bigl{(}p^{*}\mathcal{O}(-nL)\otimes\mathcal{O}(n\Delta)|_{i\Delta}\bigr{)}$ whose kernels are isomorphic to $\mathcal{O}(-nL)\otimes\omega_{C}^{\otimes(i-n)}$, one computes $\frac{1}{n}\mu\bigl{(}T^{n-1}(-nL)\bigr{)}=\frac{-n^{2}a-\binom{n+1}{2}(2g-2)}{n^{2}(1-a)+n(1-g)}.$ This limits to $-\bigl{(}\frac{g}{1-a}-1\bigr{)}$ as $n$ grows. ∎ ###### Remark. For $C$ an _arbitrary_ curve of genus $g$, [EN18] prove that $M^{(k)}(L)$ is semi-stable if $\deg L$ is exactly equal to $(k^{2}+2k+2)g+k$. This can be used to reprove the nefness of the divisors (3.3.1) when $b=1+\frac{1}{k+1}$ with $k\geq 0$ an integer. ### 3.4. Nef classes for very general curves Our main result for very general curves constructs one optimal non-symmetric class, answering Question 1.3 in the affirmative for $a=2$. ###### Theorem 3.8. Let $C$ be a very general smooth complex projective curve of genus $g\neq 2$. Then $2f_{1}+(1+g)f_{2}-\delta\in\operatorname{{Nef}}(C\times C).$ When $g=2$, the class $2f_{1}+(1+g)f_{2}-\delta$ is not nef. It has negative intersection with the class $2f_{1}+2f_{2}-\delta$ of the graph of the hyperelliptic involution. ###### Proof. If $g=0$, then $2f_{1}+(1+g)f_{2}-\delta=f_{1}$ is nef (and not ample). If $g=1$, then the class is on the boundary of the nef cone by Remark 3.2. We may assume then $g\geq 3$. 
The idea is to deform $C$ to a rational curve $C_{0}$ with $g$ simple nodes in general position. Since nefness is a very general condition in families, it is enough to prove a nefness statement for $C_{0}$. The complication introduced by the nodes is that the positivity problem to be solved is on the blow-up of $\mathbb{P}^{1}\times\mathbb{P}^{1}$ at $2g$ points. The construction comes from [Ros07]. We apply it to the non-symmetric situation. Let $C_{0}$ be an irreducible rational curve with $g$ simple nodes in general position. There exists a projective flat family $\mathcal{C}\to T$ over a disk $T$, relatively smooth with fibers of genus $g$ over the punctured disk, and with central fiber $C_{0}$. We may also assume that $\mathcal{C}$ has smooth total space. (Indeed, $C_{0}$ is a stable nodal curve. Every stable curve of genus $g\geq 2$ embeds in $\mathbb{P}^{5g-6}$ with Hilbert polynomial depending only on $g$. The set of points of the Hilbert scheme that correspond to stable embedded curves is denoted $H_{g}$. Let $Z_{g}\to H_{g}$ be the restriction of the universal family. [DM69] prove that $H_{g}$ and $Z_{g}$ are smooth (even over ${\rm Spec}\,\mathbb{Z}$), and $H_{g}$ is irreducible over algebraically closed fields. Bertini arguments allow us to replace $Z_{g}\to H_{g}$ with $\mathcal{C}\to T$ as desired.) For $1\leq i\leq g$, denote by $x_{i},y_{i}$ the preimages of each node in the normalization $\mathbb{P}^{1}$ of $C_{0}$. Let $L\subset\mathcal{C}$ be a section of $\mathcal{C}\to T$. It avoids the nodes of $C_{0}$. We would like to construct a Cartier divisor on $\mathcal{C}\times_{T}\mathcal{C}$ that restricts to the general fiber with class $2f_{1}+(1+g)f_{2}-\delta$. It is clear that for $f_{1}$ and $f_{2}$ we will use the pullbacks of $L$ by the two projections. However $\mathcal{C}\times_{T}\mathcal{C}$ is singular at the $g^{2}$ pairs of nodes, and the diagonal is not a Cartier divisor at the $g$ diagonal pairs.
Instead we pass to the blow-up $\mathcal{Y}\to\mathcal{C}\times_{T}\mathcal{C}$ at the $g^{2}$ pairs of nodes. This resolves the singularities, in particular those along the diagonal. Let $\mathcal{D}\subset\mathcal{Y}$ be the strict transform of the diagonal. For $t\neq 0$ in the disc $T$, the fiber $\mathcal{Y}_{t}=C_{t}\times C_{t}$ is the self-product of a genus $g$ curve. For $t=0$, the fiber $\mathcal{Y}_{0}$ has $g^{2}$ exceptional $\mathbb{P}^{1}\times\mathbb{P}^{1}$ components, and a component $F$, the strict transform of $C_{0}\times C_{0}$. Let $\nu\colon\widetilde{F}\to F$ be the normalization. As a variety, $\widetilde{F}$ is isomorphic to the blow-up of $\mathbb{P}^{1}\times\mathbb{P}^{1}$ at the $4g^{2}$ ordered pairs of points from the list $\\{x_{1},y_{1},\ldots,x_{g},y_{g}\\}\subset\mathbb{P}^{1}$. See [Ros07, Lemma 3.1] for the proofs. Denote the classes of the exceptional $\mathbb{P}^{1}$’s over the corresponding points by $e_{x_{i}x_{j}},e_{x_{i}y_{j}},e_{y_{i}x_{j}},e_{y_{i}y_{j}}$. Let $\pi\colon\widetilde{F}\to\mathbb{P}^{1}\times\mathbb{P}^{1}$ be the blow-up map. Let $E$ be the sum of the $g$ exceptional $\mathbb{P}^{1}\times\mathbb{P}^{1}$’s sitting over diagonal pairs of nodes $(p,p)$. Over each of the $g$ components of $E$, the divisor $\mathcal{D}$ restricts with class $f_{1}$ in $N^{1}(\mathbb{P}^{1}\times\mathbb{P}^{1})$, while $E$ restricts with class $-2f_{1}-2f_{2}$. By [Ros07, Lemma 3.2], we have $\nu^{*}(\mathcal{D}|_{F})=\pi^{*}\Delta_{\mathbb{P}^{1}}-\sum_{i=1}^{g}(e_{x_{i}x_{i}}+e_{y_{i}y_{i}})$. Furthermore $\nu^{*}(E|_{F})=\sum_{i=1}^{g}(e_{x_{i}x_{i}}+e_{y_{i}y_{i}}+e_{x_{i}y_{i}}+e_{y_{i}x_{i}})$. With $p$ and $q$ denoting the induced projections from $\mathcal{Y}$ onto the factors of $\mathcal{C}\times_{T}\mathcal{C}$, consider on $\mathcal{Y}$ the Cartier divisor $N\coloneqq p^{*}2L+q^{*}(1+g)L-(\mathcal{D}+E).$ If we prove that its restriction $N|_{\mathcal{Y}_{0}}$ is nef, then the same holds for the restriction to the very general fiber. 
Clearly for $t\neq 0$ the fiber restriction has class $2f_{1}+(1+g)f_{2}-\delta\in N^{1}(C_{t}\times C_{t})$. The restriction of $N$ to the exceptional $\mathbb{P}^{1}\times\mathbb{P}^{1}$ components has class $f_{1}+2f_{2}$, so it is even ample. On the other hand, the class of $\nu^{*}(N|_{F})$ is $\pi^{*}\bigl{(}2f_{1}+(1+g)f_{2}\bigr{)}-\biggl{(}\pi^{*}\delta-\sum_{i=1}^{g}(e_{x_{i}x_{i}}+e_{y_{i}y_{i}})\biggr{)}-\sum_{i=1}^{g}(e_{x_{i}x_{i}}+e_{y_{i}y_{i}}+e_{x_{i}y_{i}}+e_{y_{i}x_{i}})=\pi^{*}(f_{1}+gf_{2})-\sum_{i=1}^{g}(e_{x_{i}y_{i}}+e_{y_{i}x_{i}})\in N^{1}(\widetilde{F}).$ To settle the nefness of this class, it suffices to blow up only the $2g$ points $(x_{i},y_{i})$ and $(y_{i},x_{i})$ with $1\leq i\leq g$ on $\mathbb{P}^{1}\times\mathbb{P}^{1}$. The conclusion follows from the result below. ∎ ###### Proposition 3.9. Consider general points $z_{1},\ldots,z_{g}\in\mathbb{P}^{1}\times\mathbb{P}^{1}$ with $g\neq 2$. Let $\pi\colon X\to\mathbb{P}^{1}\times\mathbb{P}^{1}$ be the blow-up of the $2g$ points $z_{1},\ldots,z_{g}$ and their reflections $z_{1}^{\prime},\ldots,z_{g}^{\prime}$ across the diagonal. Denote by $E$ the exceptional divisor. Then $\pi^{*}(f_{1}+gf_{2})-E\in\operatorname{{Nef}}(X).$ For all $g\geq 0$, the same nefness result holds if we blow up $2g$ general points in $\mathbb{P}^{1}\times\mathbb{P}^{1}$. ###### Proof. Step 1. The case $g\in\\{0,1\\}$. The case $g=0$ is trivial. When $g=1$, the class $\pi^{*}(f_{1}+f_{2})-E$ is represented by $\overline{F}_{1}+\overline{F}_{2}$, where $\overline{F}_{1}$ is the strict transform of the fiber of the first projection through $z_{1}$, and $\overline{F}_{2}$ is the strict transform of the fiber of the second projection through $z_{1}^{\prime}$. We have $\overline{F}_{1}^{2}=\overline{F}_{2}^{2}=-1$. We may assume that $z_{1}$ is not on the diagonal, hence $\overline{F}_{1}\cdot\overline{F}_{2}=1$. 
In particular $\overline{F}_{1}+\overline{F}_{2}$ has nonnegative (in fact 0) intersection with each of its irreducible components, hence it is nef. Step 2. The failure of the case $g=2$. $\mathbb{P}^{2}$ can be identified with the second symmetric power of $\mathbb{P}^{1}$. The sum map $\sigma\colon\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{2}$ given by $\sigma(x,y)=x+y$ is a cover of degree 2 and $\sigma^{*}\mathcal{O}_{\mathbb{P}^{2}}(1)=\mathcal{O}(1,1)$. The line through $\sigma(z_{1})$ and $\sigma(z_{2})$ lifts to a plane section of $\mathbb{P}^{1}\times\mathbb{P}^{1}$ through $z_{1},z_{1}^{\prime},z_{2},z_{2}^{\prime}$ in the Segre embedding in $\mathbb{P}^{3}$. For general $z_{1},z_{2}$, this section is smooth irreducible. Denote it $D$. Its strict transform $\overline{D}$ has class $\pi^{*}(f_{1}+f_{2})-E$ and has intersection $-1$ with $\pi^{*}(f_{1}+2f_{2})-E$, so the latter is not nef. The curve $\overline{D}$ is the base locus of the linear system determined by $\pi^{*}\mathcal{O}(1,2)(-E)$. Step 3. Conclusion of symmetric case. Assume $g\geq 3$. By Lemma 3.10.(iii), there exists a smooth curve $C_{g}$ through the $g$ general pairs. In particular it has multiplicity 1 at each point. Its strict transform $\overline{C}\subset X$ is a curve of class $\pi^{*}(f_{1}+gf_{2})-E$, which has self intersection zero. Since it is also irreducible, it is nef. Step 4. Conclusion of general case. Assume that the $2g$ points $Z=\\{z_{1},z_{2},\ldots,z_{2g}\\}$ are general (including the case $g=2$). Consider the non-symmetric Cremona transform described in the last paragraph of part 3 in the proof of Lemma 3.10 below. Applying it at the points $z_{1},z_{2}$, then at the images of $z_{3},z_{4}$, and so on, reduces $\pi^{*}(f_{1}+gf_{2})-E$ to $\pi^{*}f_{1}$. 
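The intersection numbers used in Steps 2 and 3 follow from the blow-up formula $(\pi^{*}A-\sum_{i}a_{i}E_{i})\cdot(\pi^{*}B-\sum_{i}b_{i}E_{i})=A\cdot B-\sum_{i}a_{i}b_{i}$ on a blow-up of $\mathbb{P}^{1}\times\mathbb{P}^{1}$. A minimal numerical sketch (the tuple encoding of classes is our own illustration, not part of the proof):

```python
def blowup_dot(A, B):
    """Intersection of classes on a blow-up of P^1 x P^1, encoded as
    (a1, a2, m_1, ..., m_r) for a1*f1 + a2*f2 - sum m_i E_i, using
    f1.f2 = 1, f1^2 = f2^2 = 0, E_i^2 = -1, and all mixed products 0."""
    return A[0] * B[1] + A[1] * B[0] - sum(x * y for x, y in zip(A[2:], B[2:]))

# Step 2 (g = 2, four blown-up points): the curve D-bar of class
# f1 + f2 - E meets pi*(f1 + 2 f2) - E negatively, so the latter is not nef.
Dbar = (1, 1, 1, 1, 1, 1)
cls2 = (1, 2, 1, 1, 1, 1)
assert blowup_dot(Dbar, cls2) == -1

# Step 3 (general g, 2g blown-up points): pi*(f1 + g f2) - E has
# self-intersection zero.
g = 5
cls = (1, g, *([1] * (2 * g)))
assert blowup_dot(cls, cls) == 0
```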
By generality, for all $1\leq i\leq g$, the images of $z_{2i-1},z_{2i}$ through any composition of the Cremona transforms above are never in the same vertical or horizontal fiber on $\mathbb{P}^{1}\times\mathbb{P}^{1}$. ∎ ###### Lemma 3.10 (Symmetric interpolation). Let $Z=\\{z_{1},z_{1}^{\prime},\ldots,z_{m},z_{m}^{\prime}\\}$ be a set of $m$ general symmetric pairs in $\mathbb{P}^{1}\times\mathbb{P}^{1}$. 1. (i) If $(n,m)\neq(1,2)$, then the linear system of sections of $\mathcal{O}(1,n)$ through $Z$ has the expected dimension. 2. (ii) If $(n,m)\neq(1,2)$ and $r$ is a nonnegative integer, then the linear system of sections of $\mathcal{O}(1,n)$ through $Z$ and $r$ further general points has the expected dimension. 3. (iii) If $(n,m)\neq(2,2)$, and the linear system in (i) is nonempty, then the general divisor in this system is irreducible and smooth. ###### Proof. Denote by $\mathfrak{b}_{m}(n)$ the linear system in question. When no confusion is likely, we omit $n$ and denote $\mathfrak{b}_{m}(n)=\mathfrak{b}_{m}$. 1\. Continuous variation of $\mathfrak{b}_{m}(n)$. For fixed $n$ and $m$, we show that the linear systems $\mathfrak{b}_{m}$ vary continuously for general $Z$. The parameter space $\mathcal{Z}_{m}$ of ordered $m$-tuples of ordered symmetric pairs of points in $\mathbb{P}^{1}\times\mathbb{P}^{1}$ is isomorphic to $(\mathbb{P}^{1}\times\mathbb{P}^{1})^{m}$. Let $\mathcal{U}_{m}\subset\mathcal{Z}_{m}\times(\mathbb{P}^{1}\times\mathbb{P}^{1})$ be the universal family. The general linear systems $\mathfrak{b}_{m}$ are captured by the general fibers of the sheaf $pr_{1*}(pr_{2}^{*}\mathcal{O}(1,n)\otimes\mathcal{I}_{\mathcal{U}_{m}})$ (or of its projectivization). Here $\mathcal{I}_{\mathcal{U}_{m}}$ is the ideal sheaf of $\mathcal{U}_{m}$. In particular, the dimension of the general $\mathfrak{b}_{m}$ is constant, depending only on $m$. 
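The expected-dimension statement in part (i) of Lemma 3.10 can also be probed numerically: in affine coordinates, sections of $\mathcal{O}(1,n)$ are polynomials of bidegree at most $(1,n)$, so one can check the rank of the evaluation matrix at random symmetric pairs. A hedged sketch (the coordinates, random points, and monomial basis are our own illustration, not part of the proof):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def interpolation_rank(n, m):
    """Rank of the evaluation matrix of bidegree-(1, n) sections at m
    random symmetric pairs (z, z'), z = (s, t), z' = (t, s), in affine
    coordinates.  The space of sections has dimension 2 * (n + 1)."""
    pts = rng.standard_normal((m, 2))
    rows = []
    for s, t in pts:
        for u, v in [(s, t), (t, s)]:  # the point and its reflection
            rows.append([u**a * v**b for a in (0, 1) for b in range(n + 1)])
    return np.linalg.matrix_rank(np.array(rows))

# Expected number of independent conditions: min(2m, 2(n+1)).  The single
# exceptional case in Lemma 3.10.(i) is (n, m) = (1, 2), where one
# unexpected section (the pullback of a line in P^2) survives.
for n, m in itertools.product(range(1, 5), range(1, 5)):
    expected = min(2 * m, 2 * (n + 1))
    actual = interpolation_rank(n, m)
    if (n, m) == (1, 2):
        assert actual == expected - 1
    else:
        assert actual == expected
```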
By considering the ranks of subsheaves $pr_{1*}(pr_{2}^{*}\mathcal{O}(1,a)\otimes\mathcal{I}_{\mathcal{U}_{m}})$ and $pr_{1*}(pr_{2}^{*}\mathcal{O}(0,a)\otimes\mathcal{I}_{\mathcal{U}_{m}})$ for $a\leq n$, one shows that the divisorial components of the base loci of $\mathfrak{b}_{m}$ also vary continuously for general $Z$. 2\. The cases $n=0$ and $n=1$. If $n=0$, then the projective dimension of $\mathfrak{b}_{0}$ is 1, while $\mathfrak{b}_{m}$ is empty for all $m>0$: one vertical fiber cannot contain a general symmetric pair. If $n=1$, then $\mathfrak{b}_{0}$ consists of plane sections of the Segre embedding $\mathbb{P}^{1}\times\mathbb{P}^{1}\subset\mathbb{P}^{3}$. It has projective dimension 3. As in Step 2 of Proposition 3.9, for $Z=\\{z_{1},z_{1}^{\prime}\\}$, the system $\mathfrak{b}_{1}$ is the pullback by $\sigma$ of the pencil of lines in $\mathbb{P}^{2}$ through $\sigma(z_{1})=\sigma(z_{1}^{\prime})$. In particular $\dim\mathfrak{b}_{1}=1$. When $m=2$, the system $\mathfrak{b}_{2}$ is (unexpectedly) a single point, corresponding to the pullback of the line through $\sigma(z_{1})$ and $\sigma(z_{2})$. For $m>2$, the system $\mathfrak{b}_{m}$ is empty as expected. For general choices of $Z$, the divisors constructed above are irreducible. 3\. A Cremona transform on $\mathbb{P}^{1}\times\mathbb{P}^{1}$. Let $z\in\mathbb{P}^{1}\times\mathbb{P}^{1}$ be a point, not on the diagonal, and let $z^{\prime}\neq z$ be its reflection. Let $\rho\colon\mathbb{P}^{2}\dashrightarrow\mathbb{P}^{1}$ be the projection from $\sigma(z)\in\mathbb{P}^{2}$, where $\sigma\colon\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{2}$ is the quotient map, identified with the sum map to ${\rm Sym}^{2}\mathbb{P}^{1}=\mathbb{P}^{2}$. Let $pr_{2}\colon\mathbb{P}^{1}\times\mathbb{P}^{1}\to\mathbb{P}^{1}$ be the second projection. 
Consider the rational map $\displaystyle Cr$ $\displaystyle\colon\mathbb{P}^{1}\times\mathbb{P}^{1}\dashrightarrow\mathbb{P}^{1}\times\mathbb{P}^{1}$ $\displaystyle Cr$ $\displaystyle=(\rho\circ\sigma,pr_{2})$ We study some of its properties: 1. (1) $Cr$ is undefined at $z$ and $z^{\prime}$. Indeed $\rho$ is only undefined at $\sigma(z)=\sigma(z^{\prime})$. 2. (2) If $C$ is a section of $\mathcal{O}(1,1)$ through $z,z^{\prime}$, then $\rho\circ\sigma$ is constant on $C\setminus\\{z,z^{\prime}\\}$. For this, note that $C=\sigma^{-1}L$, where $L$ is a line through $\sigma(z)$. 3. (3) In particular $Cr$ contracts the 2 fibers of $pr_{2}$ that pass through $z$ and $z^{\prime}$ respectively. Clearly $pr_{2}$ contracts them. Let $F_{2,z}$ be the corresponding fiber of $pr_{2}$, and let $F_{1,z^{\prime}}$ be the fiber of $pr_{1}$ through $z^{\prime}$. Then $F_{1,z^{\prime}}+F_{2,z}$ is a section of $\mathcal{O}(1,1)$ through $z,z^{\prime}$, hence $\rho\circ\sigma$ is constant on it. In particular it is constant on $F_{2,z}$. (Note that $Cr$ does not also contract $F_{1,z^{\prime}}$ since $pr_{2}$ does not contract it.) 4. (4) If $x\in\mathbb{P}^{1}\times\mathbb{P}^{1}$ is any point different from $z$ and $z^{\prime}$, then $Cr(x)$ and $Cr(x^{\prime})$ are in the same fiber of $pr_{1}$. This is because $z,z^{\prime},x,x^{\prime}$ are contained in some section of $\mathcal{O}(1,1)$. 5. (5) $Cr$ is birational. For general $x\in\mathbb{P}^{1}\times\mathbb{P}^{1}$, $\rho(\sigma(x))$ determines the section of $\mathcal{O}(1,1)$ that passes through $z$,$z^{\prime}$, and through $x$, while $pr_{2}(x)$ determines the fiber $F_{2,x}$. Clearly the section and the fiber meet in one point unless the fiber is a component of the section, which is not the general situation. Finally we resolve $Cr$. Let $\pi\colon X\to\mathbb{P}^{1}\times\mathbb{P}^{1}$ be the blow-up of $z$ and $z^{\prime}$ with exceptional divisors $E$ and $E^{\prime}$. 
Contracting the strict transforms $F$ of $F_{2,z}$ and $F^{\prime}$ of $F_{2,z^{\prime}}$, gives a morphism $\gamma\colon X\to\mathbb{P}^{1}\times\mathbb{P}^{1}$ which is also the blow-up of two points. We have $Cr=\gamma\circ\pi^{-1}$. Because of the two blow-up structures of $X$, the Néron–Severi space $N^{1}(X)$ has two sets of bases $(\pi^{*}f_{1},\pi^{*}f_{2},E,E^{\prime})$ and $(\gamma^{*}f_{1},\gamma^{*}f_{2},F,F^{\prime})$. The following relations are easy consequences of the properties of $Cr$. $\displaystyle\pi^{*}f_{2}$ $\displaystyle=E+F=E^{\prime}+F^{\prime}$ $\displaystyle\gamma^{*}f_{2}$ $\displaystyle=E+F=E^{\prime}+F^{\prime}$ $\displaystyle\gamma^{*}f_{1}$ $\displaystyle=\pi^{*}(f_{1}+f_{2})-E-E^{\prime}$ In particular, the change of coordinates matrix is $\begin{pmatrix}1&0&0&0\\\ 1&1&1&1\\\ -1&0&-1&0\\\ -1&0&0&-1\end{pmatrix}.$ The matrix is self-inverse, though $Cr$ is not immediately self-inverse because the source $\mathbb{P}^{1}\times\mathbb{P}^{1}$ and the target $\mathbb{P}^{1}\times\mathbb{P}^{1}$ are not canonically identified. The construction of $Cr$ also works in a less-symmetric situation. If $z_{1},z_{2}$ are points not on the same horizontal or vertical fiber, then blowing-up the points and contracting the strict transforms of vertical (or of horizontal) fibers through the points gives birational $Cr\colon\mathbb{P}^{1}\times\mathbb{P}^{1}\dashrightarrow\mathbb{P}^{1}\times\mathbb{P}^{1}$. In fact one can find an automorphism that fixes $z_{1}$ and sends $z_{2}$ to $z_{1}^{\prime}$. When the two points are say in the same vertical fiber, and we contract the strict transforms of horizontal fibers, then the target of $Cr$ is naturally the Hirzebruch surface $\mathbb{F}_{2}=\mathbb{P}_{\mathbb{P}^{1}}(\mathcal{O}\oplus\mathcal{O}(-2))$, not $\mathbb{P}^{1}\times\mathbb{P}^{1}$. 4\. The cases $m\in\\{0,1,2\\}$. Assume $n>1$. The complete linear system $\mathfrak{b}_{0}$ has the expected dimension $2n+1$ and irreducible general term. 
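The assertion above that the change-of-coordinates matrix is self-inverse, and the consistency of the base change with the intersection form, are quick numerical checks; a sketch (the matrix encoding of the intersection form is our own):

```python
import numpy as np

# Columns express (gamma* f1, gamma* f2, F, F') in the basis
# (pi* f1, pi* f2, E, E'), following the relations above.
M = np.array([
    [ 1, 0,  0,  0],
    [ 1, 1,  1,  1],
    [-1, 0, -1,  0],
    [-1, 0,  0, -1],
])

# Intersection form in the basis (pi* f1, pi* f2, E, E'):
# f1 . f2 = 1, E^2 = E'^2 = -1, all other products 0.
G = np.array([
    [0, 1,  0,  0],
    [1, 0,  0,  0],
    [0, 0, -1,  0],
    [0, 0,  0, -1],
])

# The matrix is self-inverse ...
assert (M @ M == np.eye(4, dtype=int)).all()
# ... and the base change is an isometry of the intersection form.
assert (M.T @ G @ M == G).all()
```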
For $m\in\\{1,2\\}$, we perform a Cremona transform $Cr$ on $\mathbb{P}^{1}\times\mathbb{P}^{1}$ centered at $z_{1},z_{1}^{\prime}$. The linear system $\mathfrak{b}_{1}$ corresponds to sections of $\pi^{*}\mathcal{O}(1,n)(-E-E^{\prime})=\gamma^{*}\mathcal{O}(1,n-1)$ on $X$, so to sections of $\mathcal{O}(1,n-1)$. This gives the expected dimension of $\mathfrak{b}_{1}$, and the irreducibility of a general divisor. The linear system $\mathfrak{b}_{2}$ similarly corresponds to sections of $\mathcal{O}(1,n-1)$ through $Cr(z_{2})$ and $Cr(z_{2}^{\prime})$. These two points live on the same vertical fiber. When $n=2$, the sections of $\mathcal{O}(1,n-1)=\mathcal{O}(1,1)$ through $Cr(z_{2}),Cr(z_{2}^{\prime})$ all contain the vertical fiber through the two points, and an arbitrary horizontal fiber, giving $\dim\mathfrak{b}_{2}=1$ as expected. We also see how irreducibility fails in this case. When $n>2$, a general section of $\mathcal{O}(1,n-1)$ intersects the vertical fiber containing the two points in $n-1\geq 2$ distinct points. Using the ${\rm PGL}(2)$ action on the second component, we can arrange that one of these points is $Cr(z_{2})$, but none of the others is $Cr(z_{2}^{\prime})$, and vice versa. Thus $\dim\mathfrak{b}_{2}=2n-3$ as expected. By a similar construction, we can see that there exist irreducible sections of $\mathcal{O}(1,n-1)$ through $Cr(z_{2})$ and $Cr(z_{2}^{\prime})$, and avoiding the indeterminacy points of $Cr^{-1}$, hence this is the general situation. 5\. Conclusion of part (i). Assume $n>1$ and $m\geq 3$. For fixed $3\leq m\leq n+1$, assume $\mathfrak{b}_{m-1}$ has the expected dimension $2(n-m+1)+1\geq 1$, but $\dim\mathfrak{b}_{m}=2(n-m+1)$ for every general choice of $Z$. This is indeed the only choice other than the expected dimension, easily verified by picking $z_{m}$ outside the base locus of $\mathfrak{b}_{m-1}$. 
For any such $z_{m}$ sufficiently general (so that $\dim\mathfrak{b}_{m}=2(n-m+1)$ for example), consider $T\in\mathfrak{b}_{m-1}$ passing through $z_{m}$. Let $C$ be an irreducible curve in the support of $T$ that passes through $z_{m}$. By our assumption that $\dim\mathfrak{b}_{m}=2(n-m+1)$, replacing $z_{m}$ with any general $w$ on $C$ shows that $T$ also passes through $w^{\prime}$. Then $T$ contains $C$, but also its reflection $C^{\prime}$. If $C$ is symmetric, then it has class $cf_{1}+cf_{2}$ for some $c\geq 1$. Since $T$ has class $f_{1}+nf_{2}$, necessarily $c=1$. If $C$ is not symmetric, then similarly $C+C^{\prime}$ has class $f_{1}+f_{2}$, so $C$ is a vertical or horizontal fiber. In both cases, denote $\widetilde{C}=C\cup C^{\prime}$ (set theoretic union). It is a reduced effective symmetric cycle of class $f_{1}+f_{2}$, a section of $\mathcal{O}(1,1)$ through $z_{m}$ and $z_{m}^{\prime}$. We can write $T=\widetilde{C}+F_{n-1}$, where $F_{n-1}$ is a sum of $n-1$ horizontal fibers (each of class $f_{2}$). This is an equality of cycles, and so $\widetilde{C}$ is uniquely determined by $T$. Such a decomposition exists for all sufficiently general choices of the $m$ pairs. Permuting the pairs will also produce a general ordered $m$-tuple of pairs, without changing $\mathfrak{b}_{m}$. In particular, since $\widetilde{C}$ passes through $z_{m}$ and $z_{m}^{\prime}$, it passes through all the $m$ pairs. This is impossible for $m\geq 3$ general pairs by the case $n=1$. Since $\mathfrak{b}_{n+1}$ is empty, so are $\mathfrak{b}_{m}$ for all $m>n$. 6\. Conclusion of part (ii). By (i), the system $\mathfrak{b}_{m}$ has the expected dimension. For any nonempty linear system, passing through one general point (e.g., not in the base locus) is a codimension 1 condition. By iterating, we obtain the claim. 7\. Conclusion of part (iii). In all cases where irreducibility holds, smoothness is automatic. 
Indeed any irreducible curve $C$ of class $f_{1}+nf_{2}$ satisfies $C\cdot f_{2}=1$, hence it is mapped isomorphically by the second projection onto $\mathbb{P}^{1}$. We assume $n>1$ and $m\geq 3$. By part (i), $\mathfrak{b}_{m}$ is nonempty precisely when $m\leq n$. In this case its dimension is $2(n-m)+1$. To prove the irreducibility of the general member of $\mathfrak{b}_{m}$, it is enough to prove that $\mathfrak{b}_{m}$ contains one irreducible curve. Since the first $m-1$ pairs in a set of $m$ sufficiently general pairs are also general, we have $\mathfrak{b}_{m-1}\supset\mathfrak{b}_{m}$. It is then enough to prove that $\mathfrak{b}_{n}$ has an irreducible curve. If every $C_{n}\in\mathfrak{b}_{n}$ is reducible, it is necessarily of form $C_{n}=C_{n-1}+F$, where $C_{n-1}$ is a (potentially reducible) section of $\mathcal{O}(1,n-1)$, and $F$ is a fiber of the second projection. By part (ii), the section $C_{n-1}$ contains at most $2n-1$ of the points of $Z$, while $F$ contains at most one point in $Z$. Since $C_{n}$ passes through all of $Z$, the bounds must be sharp. By part (ii), for every $z\in Z$, there is exactly one section of $\mathcal{O}(1,n-1)$ that contains $Z\setminus\\{z\\}$. Clearly there exists exactly one $F$ through $z$. There are then at most $2n$ choices for $C_{n}\in\mathfrak{b}_{n}$. This contradicts the equality $\dim\mathfrak{b}_{n}=1$ from (i). ∎ ###### Remark (Blow-ups of $\mathbb{P}^{2}$). Let $g\geq 1$ and let $\pi\colon X\to\mathbb{P}^{2}$ be the blow-up of $2g$ general points $z_{1},\ldots,z_{2g}$ with exceptional divisors $E_{1},\ldots,E_{2g}$ respectively. Let $H$ be the class of a line in $\mathbb{P}^{2}$. Then $\pi^{*}gH-(g-1)E_{1}-E_{2}-\cdots-E_{2g}\in\operatorname{{Nef}}(X).$ Indeed this class can be reduced by a sequence of Cremona transforms to $\pi^{*}H-E_{1}$. 
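The Cremona reduction asserted in the Remark can be carried out mechanically: a quadratic Cremona transform based at three of the blown-up points acts on a class $dH-\sum m_{i}E_{i}$ by the standard rule $d\mapsto 2d-m_{i}-m_{j}-m_{k}$ and $m_{i}\mapsto d-m_{j}-m_{k}$ (symmetrically in $i,j,k$). A sketch of the bookkeeping (the choice of base points at each step is our own; the result agrees with $\pi^{*}H-E_{1}$ up to relabeling the points):

```python
def cremona(cls, i, j, k):
    """Quadratic Cremona transform based at the blown-up points i, j, k,
    acting on a class (d; m_1, ..., m_r) on a blow-up of P^2."""
    d, m = cls[0], list(cls[1:])
    s = m[i] + m[j] + m[k]
    new = [d - (s - m[p]) if p in (i, j, k) else m[p] for p in range(len(m))]
    return (2 * d - s, *new)

for g in range(1, 10):
    # The class g*H - (g-1)*E_1 - E_2 - ... - E_{2g} from the Remark:
    cls = (g, g - 1, *([1] * (2 * g - 1)))
    # Base each transform at z_1 and a fresh pair of the remaining points:
    for i in range(g - 1):
        cls = cremona(cls, 0, 2 * i + 1, 2 * i + 2)
    # The result is the class H - E of the strict transform setup, i.e.
    # degree 1 with a single multiplicity-1 point:
    assert cls[0] == 1 and sorted(cls[1:]) == [0] * (2 * g - 1) + [1]
```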
This result is equivalent to the general case of Proposition 3.9 via the isomorphism between $\mathbb{P}^{2}$ blown up at $2g+1$ points $z_{0},\ldots,z_{2g}$ and $\mathbb{P}^{1}\times\mathbb{P}^{1}$ blown up at $2g$ points. The exceptional divisor $E_{0}$ over $z_{0}$ is considered with coefficient 0 in the class above, so it can be blown down. We now construct nef classes on $C\times C$ by degenerating from simple covers of $\mathbb{P}^{1}$. This follows an idea of [Kou93]. Recall that a finite map $f\colon C\to\mathbb{P}^{1}$ is called a _simple branched cover_ if any fiber of $f$ has at most one ramification point $c$, and if $f$ is given locally around any such $c$ by the map $x\mapsto x^{2}$. For example, hyperelliptic pencils are simple. ###### Example 3.11 (Simple branched covers). Let $C$ be a curve of genus $g\geq 1$. Assume that $C$ admits a simple branched cover $f\colon C\to\mathbb{P}^{1}$ of degree $2\leq d\leq\lfloor\sqrt{g}\rfloor+1$. $\text{If }a,b\geq d\text{, then }af_{1}+bf_{2}-\delta\text{ is nef if and only if }a+b\geq\frac{2g}{d-1}+2$ (Following [Kou93] and [Laz04a, Theorem 1.5.8], consider the closure $T$ of the complement of the diagonal in $C\times_{\mathbb{P}^{1}}C$. It is irreducible of class $df_{1}+df_{2}-\delta$ in $C\times C$. Its self-intersection is $2\cdot((d-1)^{2}-g)\leq 0$. For example, if $f$ is a hyperelliptic pencil, then $T$ is the graph of the induced hyperelliptic involution. For $A,B,C\geq 0$, $(A+dC)f_{1}+(B+dC)f_{2}-C\delta=Af_{1}+Bf_{2}+C[T]$ is nef if and only if the intersection with $T$ is nonnegative, i.e., $(d-1)(A+dC+B+dC-2C)\geq 2gC$. If $C>0$, after setting $a=\frac{A}{C}+d$ and $b=\frac{B}{C}+d$, we obtain the claim.) When $d>\lfloor\sqrt{g}\rfloor+1$ and $a,b\geq d$, the class $af_{1}+bf_{2}-\delta$ is ample. ∎ We obtain the following extension of the result of Kouvidakis [Kou93, Theorem 2]. ###### Corollary 3.12. Let $C$ be a _very general_ curve of genus $g\geq 1$. 
If $2\leq d\leq\lfloor\sqrt{g}\rfloor+1$ is an integer and $a,b\geq d$ satisfy $a+b\geq 2+\frac{2g}{d-1}$, then $af_{1}+bf_{2}-\delta$ is nef. In particular (3.12.1) $d\>\\!f_{1}+\biggl{(}2+\frac{2g}{d-1}-d\biggr{)}f_{2}-\delta\in\operatorname{Nef}(C\times C)$ ###### Proof. By the Riemann existence theorem, for any degree $d\geq 2$ there exists a curve $C_{d}$ of genus $g$ admitting a simple branched cover $C_{d}\to\mathbb{P}^{1}$ of degree $d$. From the previous example we deduce that $af_{1}+bf_{2}-\delta$ is nef. Since nefness is a very general condition in families, the result extends to very general curves. For the last statement set $a=d$ and note that $2+\frac{2g}{d-1}-d\geq d$. ∎ ###### Remark. When $C$ is _very general_ , the nefness of the classes in Theorem 3.4.(i) can be deduced from Corollary 3.12. Some of the classes in Theorem 3.4.(ii) are better, e.g., when $d=2g-2$. However, for large $g$ (the Mathematica software suggests $g\geq 15$), they are all in the convex span of the classes in (3.3.1) and those in Corollary 3.12. It is conceivable that for some countable union of families of curves inside $\mathscr{M}_{g}$ Corollary 3.12 fails, while Theorem 3.4 does not. ### 3.5. General technical results used in our proofs ###### Proposition 3.13. Let $\rho\colon X\to C$ be a flat surjective morphism between projective varieties, where $C$ is a nonsingular projective curve over an algebraically closed field. Let $\mathcal{L}$ be a line bundle on $X$, and let $f$ be the class of a fiber of $\rho$. 1. (i) If $\mathcal{L}$ is nef on every fiber of $\rho$ and globally generated on a general fiber of $\rho$, then $c_{1}(\mathcal{L})-\bigl{(}\overline{\mu}_{\min}(\rho_{*}\mathcal{L})\bigr{)}\cdot f\text{ is nef on }X.$ 2. (ii) If $\mathcal{L}$ is $\rho$-ample, then (3.13.1) $\sup\bigl{\\{}t\bigm{|}c_{1}(\mathcal{L})-tf\ \text{is nef}\bigr{\\}}=\lim_{n\to\infty}\frac{\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n})}{n}.$ ###### Proof. (i). 
The assumption implies by cohomology and base change that the natural map $\rho^{*}\rho_{*}\mathcal{L}\to\mathcal{L}$ is surjective on the general fiber of $\rho$. We thus have an exact complex $\rho^{*}\rho_{*}\mathcal{L}\longrightarrow\mathcal{L}\longrightarrow Q\longrightarrow 0,$ where $Q$ is supported in at most finitely many fibers of $\rho$. Since $\mathcal{L}$ is nef on the fibers and $Q$ is a direct sum of quotients of $\mathcal{L}$ restricted to fibers, we deduce that $Q$ is a nef coherent sheaf. It is invariant under twisting by classes of the form $\rho^{*}D$ with $D$ an $\mathbb{R}$-divisor on $C$, since these are trivial on fibers. The twisted sheaf $\rho^{*}\rho_{*}\mathcal{L}\langle-\overline{\mu}_{\min}(\rho_{*}\mathcal{L})f\rangle$ is nef by Lemma 2.1. The same is true of its (twisted) image in $\mathcal{L}\langle-\overline{\mu}_{\min}(\rho_{*}\mathcal{L})f\rangle$. We deduce that the latter is an extension of nef twisted coherent sheaves. [FM21, Remark 3.4] and [FM21, Lemma 3.31] prove that such extensions are nef. One can also argue by blowing up the ideal sheaf $\mathcal{I}$ on $X$ such that $\mathcal{I}\otimes\mathcal{L}$ is the image of the natural map $\rho^{*}\rho_{*}\mathcal{L}\to\mathcal{L}$. (ii). We first show that the right-hand side of (3.13.1) is indeed a limit. Let $n_{0}$ be an integer such that $\rho_{*}\mathcal{L}^{\otimes n}$ is a vector bundle for every $n\geq n_{0}$, and such that the natural maps (3.13.2) $\rho_{*}\mathcal{L}^{\otimes n}\otimes\rho_{*}\mathcal{L}^{\otimes m}\longrightarrow\rho_{*}\mathcal{L}^{\otimes(n+m)}$ are surjective for all $n\geq n_{0}$, $m\geq n_{0}$. Note that such an $n_{0}$ exists by cohomology and base change and by [Laz04a, Example 1.8.4.(ii)], respectively. 
We then have $\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n})+\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes m})=\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n}\otimes\rho_{*}\mathcal{L}^{\otimes m})\leq\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes(n+m)})$ for all $n\geq n_{0}$, $m\geq n_{0}$. The equality holds by [Miy87, Corollary 3.7 and p. 464]. For the inequality, in characteristic zero use (3.13.2) and (2.0.1). In positive characteristic, the same argument works to show the inequality above after taking a large enough Frobenius pullback of the quotient map by [Lan04, Theorem 2.7]. Finally, the sequence $\\{-\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n})\\}_{n=1}^{\infty}$ is a sequence satisfying the hypothesis of de Bruijn and Erdős’s version of Fekete’s lemma [dBE52, Theorem 23] for the constant function $\varphi(t)=\max\biggl{\\{}0,\max_{1\leq n,m\leq n_{0}}\Bigl{\\{}\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n})+\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes m})-\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes(n+m)})\Bigr{\\}}\biggr{\\}}$ in their notation, hence the right-hand side of (3.13.1) is indeed a limit. We now show that $\sup\bigl{\\{}t\bigm{|}c_{1}(\mathcal{L})-tf\ \text{is nef}\bigr{\\}}\geq\frac{\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n})}{n}$ for all $n$ sufficiently large. By [Laz04a, Theorem 1.7.6.(iii)] and the $\rho$-ampleness of $\mathcal{L}$, we may assume that $n$ is sufficiently large such that the natural map $\rho^{*}\rho_{*}\mathcal{L}^{\otimes n}\to\mathcal{L}^{\otimes n}$ is surjective. We may also assume that $\rho_{*}\mathcal{L}^{\otimes n}$ is locally free. Choose a closed point $c\in C$. 
Letting $a\coloneqq-\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n})$, we see that $\sup\biggl{\\{}t\biggm{|}c_{1}(\mathcal{L})+\frac{a}{n}f-tf\ \text{is nef}\biggr{\\}}=\frac{a}{n}+\sup\bigl{\\{}t\bigm{|}c_{1}(\mathcal{L})-tf\ \text{is nef}\bigr{\\}}\quad\text{and}\quad\frac{\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n}\langle ac\rangle)}{n}=\frac{a}{n}+\frac{\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n})}{n}=0,$ hence it suffices to show that (3.13.3) $\sup\biggl{\\{}t\biggm{|}c_{1}(\mathcal{L})+\frac{a}{n}f-tf\ \text{is nef}\biggr{\\}}\geq 0.$ Since $\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n}\langle ac\rangle)=0$, we see that $\rho_{*}\mathcal{L}^{\otimes n}\langle ac\rangle$ is a nef twisted bundle by Lemma 2.1. Using the surjection $\rho^{*}\rho_{*}\mathcal{L}^{\otimes n}\twoheadrightarrow\mathcal{L}^{\otimes n}$, we obtain an embedding of $X$ into $\mathbb{P}\bigl{(}\rho_{*}(\mathcal{L}^{\otimes n})\langle ac\rangle\bigr{)}$ over $C$, compatible with $\rho$ and the bundle projection to $C$. The line bundle $\mathcal{O}_{\mathbb{P}(\rho_{*}(\mathcal{L}^{\otimes n})\langle ac\rangle)}(1)$ is nef, hence so is its restriction $\mathcal{O}(1)\rvert_{X}=\mathcal{L}^{\otimes n}\langle af\rangle$, and (3.13.3) follows. It remains to show that the inequality $\leq$ holds in (3.13.1). Choose a closed point $c\in C$, and let $a$ be sufficiently large such that $\mathcal{L}\langle af\rangle$ is ample. We then see that replacing $\mathcal{L}$ by $\mathcal{L}\langle ac\rangle$ results in both sides of (3.13.1) increasing by $a$, and it therefore suffices to consider the case when $\mathcal{L}$ is ample. Now let $t_{0}>0$ be the value of the supremum on the left-hand side of (3.13.1), and fix a real number $\varepsilon>0$. 
Choose integers $u,v\geq 1$ such that $u/v+\varepsilon>t_{0}>u/v$. Taking limits, we then see that $\lim_{n\to\infty}\frac{\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes vn}(-unc))}{n}=-u+v\cdot\lim_{n\to\infty}\frac{\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n})}{n}.$ Since $\mathcal{L}^{\otimes v}(-uf)$ is ample, Lemma 3.14 below implies $\lim_{n\to\infty}\frac{\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes vn}(-unc))}{n}\geq 0$ by the fact that $\rho_{*}\mathcal{L}^{\otimes vn}(-unc)$ is nef for $n$ sufficiently large, and then by Lemma 2.1. We therefore have $\lim_{n\to\infty}\frac{\overline{\mu}_{\mathrm{min}}(\rho_{*}\mathcal{L}^{\otimes n})}{n}\geq\frac{u}{v}>t_{0}-\varepsilon.$ Since $\varepsilon$ was arbitrary, the inequality $\leq$ holds in (3.13.1). ∎ ###### Lemma 3.14. Let $\rho\colon Y\to X$ be a morphism of projective schemes, and let $\mathcal{L}$ be an ample invertible sheaf on $Y$. Let $\mathcal{F}$ be a coherent sheaf on $X$. Then $\mathcal{F}\otimes\rho_{*}\mathcal{L}^{\otimes n}$ is ample and globally generated for all $n$ sufficiently large. ###### Proof. Let $A$ be a very ample divisor on $X$ such that there exists a surjection $\bigoplus\mathcal{O}_{X}(-A)\twoheadrightarrow\mathcal{F}$. Since ampleness and global generation descend to quotients, it is enough to prove the lemma for $\mathcal{F}=\mathcal{O}_{X}(-A)$. With the usual arguments of Castelnuovo–Mumford regularity [Laz04a, Theorem 1.8.5], it is enough to prove that if $A$ is a very ample divisor on $X$, then $\rho_{*}\mathcal{L}^{\otimes n}$ is $-2$-regular with respect to $A$, i.e., $H^{i}\bigl{(}X,\rho_{*}\mathcal{L}^{\otimes n}(-(2+i)A)\bigr{)}=0$ for all $i>0$ and all $n$ sufficiently large. This is because in this case $\rho_{*}\mathcal{L}^{\otimes n}(-2A)$ is globally generated, hence $\rho_{*}\mathcal{L}^{\otimes n}(-A)$ is ample and globally generated. Since $\mathcal{L}$ is ample, it is in particular also $\rho$-ample. 
Hence for $n$ large, we have $R^{i}\rho_{*}\mathcal{L}^{\otimes n}=0$ for all $i>0$. The Leray spectral sequence and the projection formula show that $H^{i}\bigl{(}X,\rho_{*}\mathcal{L}^{\otimes n}(-(2+i)A)\bigr{)}=H^{i}\bigl{(}Y,\mathcal{L}^{\otimes n}\otimes\rho^{*}(-(2+i)A)\bigr{)}$. The ampleness of $\mathcal{L}$ and Serre vanishing show that these cohomology groups are $0$. ∎ Finally, we show that Proposition 3.13.(ii) extends to a more general setting: ###### Proposition 3.15. Let $\rho\colon Y\to X$ be a morphism of projective schemes over an algebraically closed field. Let $\mathcal{L}$ be a $\rho$-ample line bundle on $Y$. For $\mathcal{F}$ a coherent sheaf on $X$, and $H$ an ample line bundle on $X$, denote $\nu_{H}(\mathcal{F})\coloneqq\sup\\{t\mid\mathcal{F}\langle- tH\rangle\mbox{ is nef}\\}.$ Then $\sup\bigl{\\{}t\operatorname{\bigm{|}}c_{1}(\mathcal{L})-t\rho^{*}H\mbox{ is nef}\bigr{\\}}=\lim_{n\to\infty}\frac{\nu_{H}(\rho_{*}\mathcal{L}^{\otimes n})}{n}.$ ###### Proof. The sequence $\nu_{H}(\rho_{*}\mathcal{L}^{\otimes n})>-\infty$ is superadditive (in the weaker sense in the proof of Proposition 3.13.(ii)) by $\rho$-ampleness, hence the limit exists by de Bruijn and Erdős’s version of Fekete’s lemma [dBE52, Theorem 23]. Since $\mathcal{L}$ is $\rho$-ample, for sufficiently large $n$, we have inclusions $Y\hookrightarrow\mathbb{P}(\rho_{*}\mathcal{L}^{\otimes n})$ such that $\mathcal{O}_{\mathbb{P}_{X}(\rho_{*}\mathcal{L}^{\otimes n})}(1)|_{Y}=\mathcal{L}^{\otimes n}$. It follows that the inequality “$\geq$” holds. For the reverse inequality, note as in Proposition 3.13.(ii) that both sides translate by $t_{0}$ when replacing $\mathcal{L}$ by $\mathcal{L}\langle t_{0}\rho^{*}H\rangle$ for $t_{0}\in\mathbb{Q}$ (with the understanding that we only consider sufficiently divisible $n$ in the right-hand side). Without loss of generality, we may assume that $\mathcal{L}$ is ample on $Y$. 
As in Proposition 3.13, we reduce to proving that $\rho_{*}\mathcal{L}^{\otimes n}$ is globally generated for large $n$, which follows from Lemma 3.14. ∎ ## 4\. Products of arbitrarily many factors In this section we work over $\mathbb{C}$. Let $n\geq 2$ be an integer. It is also interesting to study $\operatorname{{Nef}}(C^{n})$. To our knowledge, no conjecture on the shape of this cone has been made in the literature for $n\geq 3$. In fact, it is quite a large cone. ###### Lemma 4.1. If $g\geq 1$ and $C$ is very general in moduli, then $N^{1}(C^{n})$ has dimension $\binom{n+1}{2}$. A basis is given by the class $f_{i}$ of the fibers of the projection from $C^{n}$ onto the $i$-th factor for every $i$, and by the classes $\delta_{ij}$ of the big diagonals $\Delta_{ij}=\\{(x_{1},\ldots,x_{n})\in C^{n}\operatorname{\bigm{|}}x_{i}=x_{j}\\}$. Pulling back by the projections $pr_{ij}\colon C^{n}\to C^{2}$ gives nef classes on $C^{n}$. ###### Example 4.2. With assumptions as in Theorem 3.4.(i), on general curves we have $\frac{(n-1)d}{d-g}f_{1}+d\>\\!f_{2}+\cdots+d\>\\!f_{n}-\delta_{12}-\cdots-\delta_{1n}=\sum_{i=2}^{n}\biggl{(}\Bigl{(}1+\frac{g}{d-g}\Bigr{)}f_{1}+df_{i}-\delta_{1i}\biggr{)}\in\operatorname{{Nef}}(C^{n}).$ Indeed, each summand on the right is the pullback via $pr_{1i}$ of a nef class on $C\times C$. On an arbitrary curve, using (3.3.1), we obtain $\sum_{i=2}^{n}\bigl{(}\bigl{(}1+\frac{g}{a-1}+(g-1)(a-1)\bigr{)}f_{1}+af_{i}-\delta_{1i}\bigr{)}\in\operatorname{{Nef}}(C^{n})$ for all $a>1$. ∎ The main result of this section is the following: ###### Theorem 4.3. Let $C$ be a smooth complex projective curve of genus $g>0$ and let $n\geq 2$. Then $\frac{(n-1)d}{d-g}f_{1}+d\cdot\sum_{i=2}^{n}f_{i}-\sum_{1\leq i<j\leq n}\delta_{ij}\in\operatorname{{Nef}}(C^{n})$ if one of the following holds: 1. (i) $d\geq 2g+n$, or $d\geq\max\\{2n+g,2g\\}$, or 2. (ii) $n\geq 2g$ and $d\geq g+n-1$. The proof will make use of the rich geometry of the symmetric products of $C$.
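For orientation, in the smallest case $n=2$ the class in Theorem 4.3 reads $\frac{d}{d-g}f_{1}+d\>\\!f_{2}-\delta_{12}$, which matches each summand of Example 4.2 term by term, in view of the elementary identity

```latex
% The n = 2 specialization of the class in Theorem 4.3 on C^2 is
%   (d/(d-g)) f_1 + d f_2 - delta_{12}.
% It agrees with the summands of Example 4.2 because
\[
  1+\frac{g}{d-g}
  \;=\;\frac{(d-g)+g}{d-g}
  \;=\;\frac{d}{d-g}.
\]
```

The content of Theorem 4.3 beyond Example 4.2 is thus that all big diagonals $\delta_{ij}$ with $1\leq i<j\leq n$, not only those involving the first factor, can be subtracted simultaneously.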
We recall some notation and classical results about these. Denote by $C_{n}\coloneqq\operatorname{Hilb}^{n}C=C^{n}/\mathfrak{S}_{n}$ the $n$-th symmetric product of $C$ with quotient map $\pi\colon C^{n}\to C_{n}$. It is also the space of effective divisors $D$ on $C$ with $\deg D=n$. Let $\Delta=\Delta_{n}$ be the big diagonal on $C_{n}$, the image under $\pi$ of $\Delta_{ij}$ for any $i<j$. It is the branch locus of $\pi$, hence there exists a (non-effective) Cartier divisor on $C_{n}$ denoted $\frac{\Delta_{n}}{2}$ such that $\pi^{*}\frac{\Delta_{n}}{2}$ is the ramification divisor $\sum_{i<j}\Delta_{ij}$ and $2\cdot\frac{\Delta_{n}}{2}=\Delta_{n}$. Let $x$ and $\delta$ be the classes in $N^{1}(C_{n})$ of $c_{0}+C_{n-1}$, for a fixed point $c_{0}\in C$, and of $\Delta_{n}$, respectively. If $C$ is very general, then $x$ and $\delta$ form a basis of $N^{1}(C_{n})$. The cone $\operatorname{{Nef}}(C_{n})$ has been studied previously in [Pac03]. Our methods do not offer improvements here, since we usually exploit the existence of a nonconstant map to a curve. This is in line with our results above in the case $n=2$. Instead we focus on $\operatorname{{Nef}}(C\times C_{n-1})$. All of the nef classes that we construct lift to $\mathfrak{S}_{n-1}$-symmetric (but not necessarily $\mathfrak{S}_{n}$-symmetric) nef classes on $C^{n}$. Consider the universal family $Z=Z_{n-1}=\\{(c,D)\operatorname{\bigm{|}}c\in{\rm Supp}\,D\\}\subset C\times C_{n-1}$, together with the two projections $p\colon C\times C_{n-1}\to C$ and $q\colon C\times C_{n-1}\to C_{n-1}$. Denote by $z$ its class in $N^{1}(C\times C_{n-1})$. ###### Lemma 4.4.
If $n\geq 3$, $g\geq 1$, and $C$ is very general in moduli, then $N^{1}(C\times C_{n-1})$ is 4-dimensional, generated by the fiber $f$ of the first projection $p$, by $q^{*}x$ and $q^{*}\delta$, and by $z$. ###### Proof. Modulo pullbacks from either factor, a divisor on $C\times C_{n-1}$ can be identified with a morphism $C_{n-1}\to J(C)$. By the universal property of the Albanese variety, this is equivalent to an element of $\operatorname{Hom}(\operatorname{Alb}(C_{n-1}),J(C))=\operatorname{End}(J(C))$, where we used that $\operatorname{Alb}(C_{n-1})=J(C)$. The latter is $\mathbb{Z}$ for $C$ very general. ∎ For $L$ a divisor on $C$, consider the tautological divisors on $C_{n-1}$: * • $T_{L}\coloneqq\pi(pr_{i}^{*}L)$, where $pr_{i}\colon C^{n-1}\to C$ is any of the projections. We have $\pi^{*}T_{L}=L^{\boxtimes(n-1)}\coloneqq\sum_{i}pr_{i}^{*}L$. If $L=c_{0}$ is a point, then $T_{L}=\\{c_{0}+D\operatorname{\bigm{|}}D\in C_{n-2}\\}=c_{0}+C_{n-2}\subset C_{n-1}$. * • $N_{L}\coloneqq T_{L}-\frac{\Delta_{n-1}}{2}$. It is the determinant of the tautological bundle $E_{L}\coloneqq q_{*}\mathcal{O}_{Z_{n-1}}(p^{*}L)$ (not to be confused with the bundle $E_{L}=q_{*}(p^{*}\mathcal{O}_{C}(L)\otimes\mathcal{O}_{C\times C}(\Delta))$ from [EL92] appearing in the previous section). For example, $K_{C_{n}}=N_{K_{C}}$. Part (i) in the next result is an important computation for the proof of Theorem 4.3. Part (ii) will not be used, but we find that the diagram used in its proof (taken from [EL15]) is instructive. See Definition 3.6 for the definition of $M^{(m-1)}(L)$. ###### Lemma 4.5. Let $C$ be a smooth complex projective curve, and let $n\geq 2$ and $m\geq 1$. If $L$ is a divisor on $C$, then $pr_{1*}\mathcal{O}\Bigl{(}pr_{23\ldots n}^{*}L^{\boxtimes(n-1)}-m\cdot\sum_{i=2}^{n}\Delta_{1i}\Bigr{)}=\bigotimes^{n-1}M^{(m-1)}(L).$ Consequently,
(i) $p_{*}\mathcal{O}(q^{*}T_{L}-mZ)=\operatorname{Sym}^{n-1}M^{(m-1)}(L)$ and $p_{*}\mathcal{O}(q^{*}N_{L}-mZ)=\bigwedge^{n-1}M^{(m-1)}(L)$. (ii) $p_{*}\mathcal{O}_{Z}(q^{*}T_{L})=\operatorname{Sym}^{n-2}H^{0}(C,L)\otimes\mathcal{O}_{C}(L)$ and $p_{*}\mathcal{O}_{Z}(q^{*}N_{L})=\bigwedge^{n-2}M_{L}\otimes\mathcal{O}_{C}(L)$. ###### Proof. We can see $C^{n}$ as $\underbrace{(C\times C)\times_{C}\ldots\times_{C}(C\times C)}_{n-1\text{ times}},$ where the fiber product is always over the first projection. Then $pr_{23\ldots n}^{*}L^{\boxtimes(n-1)}-m\cdot\sum_{i=2}^{n}\Delta_{1i}$ is the relative box product pullback of $pr_{2}^{*}L-m\Delta$ from each $C\times C$ factor. Since $pr_{1*}\mathcal{O}\bigl{(}pr_{2}^{*}L-m\Delta\bigr{)}=M^{(m-1)}(L)$, the claim follows by the projection formula. For (i), consider the commutative diagram formed by $pr_{23\ldots n}\colon C^{n}\to C^{n-1}$, the quotient map $\pi\colon C^{n-1}\to C_{n-1}$, the map $1_{C}\times\pi\colon C^{n}\to C\times C_{n-1}$, and the projections $q\colon C\times C_{n-1}\to C_{n-1}$ and $p\colon C\times C_{n-1}\to C$, so that $q\circ(1_{C}\times\pi)=\pi\circ pr_{23\ldots n}$ and $p\circ(1_{C}\times\pi)=pr_{1}$. The permutation group $\mathfrak{S}_{n-1}$ acts naturally on the last $n-1$ components of $C^{n}$, and trivially on the first component $C$ and on the quotient $C_{n-1}$. Then the maps $1_{C}\times\pi$, $p$, and $pr_{1}$ are $\mathfrak{S}_{n-1}$-equivariant. We have $pr_{23\ldots n}^{*}L^{\boxtimes(n-1)}-m\cdot\sum_{i=2}^{n}\Delta_{1i}=(1_{C}\times\pi)^{*}(q^{*}T_{L}-mZ)$. The associated line bundle can be linearized in two natural ways via the identity and alternating representation. With these linearizations, it descends to $C\times C_{n-1}$ as $q^{*}T_{L}-mZ$ and $q^{*}N_{L}-mZ$ respectively.
Part (i) follows from [She19, Proposition 2.3] since $\operatorname{Sym}^{n-1}M^{(m-1)}(L)$ and $\bigwedge^{n-1}M^{(m-1)}(L)$ are the respective invariants for $pr_{1*}\mathcal{O}\bigl{(}pr_{23\ldots n}^{*}L^{\boxtimes(n-1)}-m\sum_{i=2}^{n}\Delta_{1i}\bigr{)}=\bigotimes^{n-1}M^{(m-1)}(L)$ under these actions. For part (ii), consider the commutative diagram formed by the map $\sigma\colon C\times C_{n-2}\to C_{n-1}$, $\sigma(c,D_{n-2})=c+D_{n-2}$, the first and second projections $p_{1}\colon C\times C_{n-2}\to C$ and $p_{2}\colon C\times C_{n-2}\to C_{n-2}$, and the map $\jmath\colon C\times C_{n-2}\to C\times C_{n-1}$, $\jmath(c,D_{n-2})=(c,c+D_{n-2})$, which satisfy $q\circ\jmath=\sigma$ and $p\circ\jmath=p_{1}$. The map $\jmath$ can be identified with the inclusion of the universal family $Z$. We compute that $\sigma^{*}(c_{0}+C_{n-2})=p_{1}^{*}(c_{0})+p_{2}^{*}(c_{0}+C_{n-3})$, which extends to $\sigma^{*}T_{L}=p_{1}^{*}L+p_{2}^{*}T_{L}$, and hence $p_{*}\mathcal{O}_{Z}(q^{*}T_{L})=p_{1*}\mathcal{O}(\sigma^{*}T_{L})=H^{0}(C_{n-2},T_{L})\otimes\mathcal{O}_{C}(L)$ by the projection formula and flat base change for cohomology. For $n=3$, we observe $\sigma^{*}(\frac{\Delta}{2})=\Delta_{12}=Z_{1}$, hence $p_{*}\mathcal{O}_{Z}(q^{*}N_{L})=p_{1*}\mathcal{O}\bigl{(}p_{2}^{*}L-{\Delta_{12}}\bigr{)}\otimes\mathcal{O}_{C}(L)=M_{L}\otimes\mathcal{O}_{C}(L).$ For $n>3$ we compute $\sigma^{*}(\frac{\Delta}{2})=Z_{n-2}+p_{2}^{*}\frac{\Delta_{n-2}}{2}$.
Then by part (i), $p_{*}\mathcal{O}_{Z}(q^{*}N_{L})=p_{1*}\mathcal{O}\left(p_{2}^{*}N_{L}-Z_{n-2}\right)\otimes\mathcal{O}_{C}(L)=\bigwedge^{n-2}M_{L}\otimes\mathcal{O}_{C}(L).\qed$ Starting with a divisor $L=L_{d}$ of degree $d$ on $C$, the classical approach for constructing nef divisors on $C_{n}$ has been to exploit the properties of the tautological sheaves $E_{L}=q_{*}\mathcal{O}_{Z}(p^{*}L)$ and the divisors $T_{L}$ and $N_{L}$ by working with them on $C_{n}$ directly (see [EL15, She19]). The semistability of $E_{L_{d}}$ is known for large $d$ (see [Mis19]). What we are missing is an understanding of the positivity of twists of $E_{L}$, as is necessary for Proposition 3.15. In our approach, where we work with vector bundles on $C$ instead, the positivity of twists is completely determined by semistability. ###### Proof of Theorem 4.3. In all cases we prove that (4.5.1) $q^{*}\Bigl{(}dx-\frac{\delta}{2}\Bigr{)}-z+\frac{(n-1)d}{d-g}f\in\operatorname{{Nef}}(C\times C_{n-1}).$ Via $\pi^{*}$, it then lifts to $\frac{(n-1)d}{d-g}f_{1}+d\>\\!f_{2}+\cdots+d\>\\!f_{n}-\sum_{1\leq i<j\leq n}\delta_{ij}\in\operatorname{{Nef}}(C^{n}).$ Recall that a divisor $D$ on a smooth projective curve $C$ is called $k$-very ample if the evaluation map $H^{0}(C,D)\to H^{0}(Y_{k+1},D|_{Y_{k+1}})$ is surjective for all effective divisors $Y_{k+1}$ of degree $k+1$ on $C$. Equivalently, the tautological bundle $E_{D}=q_{*}\mathcal{O}_{Z_{k+1}}(p^{*}D)$ on $C_{k+1}$ is globally generated. In particular, if $D$ is $k$-very ample, then $N_{D}$ is globally generated on $C_{k+1}$. [CG90] prove that if $D$ is furthermore $(k+1)$-very ample, then $N_{D}$ is in fact very ample on the same $C_{k+1}$. It is immediate that if $\deg D\geq 2g+p$, then $D$ is $p$-very ample. (i). Let $L$ be a divisor of degree $d\geq 2g+n$. Then $L-c_{0}$ is $(n-1)$-very ample for all $c_{0}\in C$, hence $N_{L-c_{0}}=N_{L}-(c_{0}+C_{n-2})$ is very ample on $C_{n-1}$.
By Lemma 4.5, $p_{*}\mathcal{O}(q^{*}N_{L}-Z)=\bigwedge^{n-1}M_{L}.$ Since $M_{L}$ is semistable for $d\geq 2g$ (see [EL92, Proposition 3.2]), so are its tensor powers $\bigotimes^{n-1}M_{L}$ by [Laz04b, Corollary 6.4.14], hence so is any direct summand of these tensor powers, e.g., $\bigwedge^{n-1}M_{L}$. We conclude that $\bigwedge^{n-1}M_{L}$ is semistable of slope $(n-1)\cdot\mu(M_{L})=-\frac{(n-1)d}{d-g}$. From Proposition 3.13.(i) we find that $q^{*}N_{L}-Z+\frac{(n-1)d}{d-g}\cdot f$ is nef. Note that $N_{L}$ has class $dx-\frac{\delta}{2}$, which gives (4.5.1). Let $L$ be a _general_ divisor of degree $d\geq\max\\{2n+g,2g\\}$. Then $M_{L}$ is semistable of slope $-\frac{d}{d-g}$, in particular $\bigwedge^{n-1}M_{L}$ is semistable of slope $-\frac{(n-1)d}{d-g}$. It remains to prove that $L-c_{0}$ is $(n-1)$-very ample for general $c_{0}\in C$. This follows from Lemma 4.6 below. (ii). Let $L=L_{d}$ be a divisor on $C$ of degree $d$. Fix $c_{0}\in C$. We want to show that $q^{*}N_{L}-Z+\frac{(n-1)d}{d-g}p^{*}c_{0}$ is nef on $C\times C_{n-1}$. We begin with a reminder on the Abel–Jacobi map: (4.5.2) $u_{m}\colon C_{m}\longrightarrow\operatorname{Pic}^{0}(C)=J(C),\qquad D\longmapsto\mathcal{O}_{C}\bigl{(}D-mc_{0}\bigr{)}.$ It is a projective bundle for $m\geq 2g-1$ [ACGH85, p. 309, Proposition 2.1(i)]. Here, $c_{0}\in C$ is our fixed point. Let $\theta$ be the principal polarization on $J(C)$ obtained as the image of the Abel–Jacobi map $u_{g-1}$. Denoting by $\tau_{\alpha}$ the translation by $\alpha\in J(C)$, the polarization induces an isomorphism $\kappa\colon J(C)\longrightarrow\operatorname{Pic}^{0}(J(C)),\qquad\alpha\longmapsto\tau_{\alpha}^{*}\theta-\theta,$ whose inverse is $u_{1}^{*}\colon\operatorname{Pic}^{0}(J(C))\to\operatorname{Pic}^{0}(C)=J(C)$. We make the following claims: Main claim.
For some large $N>0$ and sufficiently general $\alpha_{i}\in{\rm Pic}^{0}(C)$ with $i\in\\{1,2,\ldots,N\\}$, the natural map (4.5.3) $\bigoplus_{i=1}^{N}p^{*}p_{*}\mathcal{O}(q^{*}N_{L+\alpha_{i}}-Z)\otimes\mathcal{O}(q^{*}T_{\alpha_{i}^{\vee}})\longrightarrow\mathcal{O}(q^{*}N_{L}-Z)$ is surjective along the general fiber of $p$. The map is obtained by summing the compositions $p^{*}p_{*}\mathcal{O}(q^{*}N_{L+\alpha_{i}}-Z)\otimes\mathcal{O}(q^{*}T_{\alpha_{i}^{\vee}})\to\mathcal{O}(q^{*}N_{L+\alpha_{i}}-Z)\otimes\mathcal{O}(q^{*}T_{\alpha_{i}^{\vee}})=\mathcal{O}(q^{*}N_{L}-Z)$. Claim 2. If $\alpha\in J(C)$, then $T_{\alpha}$ is numerically trivial. In fact $T_{\alpha}=u^{*}(\kappa(\alpha))$, where $u=u_{n-1}$. (Since $u$ is a projective bundle map, $u^{*}$ is an isomorphism at the level of numerically trivial divisors. Consider the morphism $\displaystyle\imath\colon C$ $\displaystyle\longrightarrow C_{n-1}$ $\displaystyle c$ $\displaystyle\longmapsto c+(n-2)c_{0}$ which satisfies $u_{n-1}\circ\imath=u_{1}$. The pullback $\imath^{*}\colon{\rm Pic}^{0}(C_{n-1})\to J(C)$ is an isomorphism with inverse $\alpha\mapsto T_{\alpha}$. See [She19, §3.2]. The class $u^{*}\kappa(\alpha)$ is a numerically trivial divisor $\eta$ on $C_{n-1}$ such that $\imath^{*}\eta=\alpha$. The claim follows.) Claim 3. If $E$ is a divisor of degree $e$ on $C$ with $e\geq 2g-1$, then $R^{i}u_{*}\mathcal{O}_{C_{n-1}}(N_{E})=0$ for $i>0$. (The pullback $u^{*}\theta$ has class $(g+n-2)x-\frac{\delta}{2}$, e.g., by [Pac03, Lemma 2.1]. By the projection formula, it is enough to prove that if $F$ is a divisor of degree $f\geq-(n-1-g)$ on $C$, then $R^{i}u_{*}\mathcal{O}_{C_{n-1}}(T_{F})=0$ for $i>0$. This is true because the fibers of $u$ are projective spaces $\mathbb{P}^{n-1-g}$, and $T_{F}$ restricts as $\mathcal{O}_{\mathbb{P}^{n-1-g}}(f)$ on them. These line bundles have no higher cohomology in the stated range.) Claim 4. 
If $E$ is a divisor of degree $e$ on $C$ with $e\geq 2g-1$, then $h^{i}(C_{n-1},N_{E})=0$ for $i>0$. (We have $K_{C_{n-1}}=N_{K_{C}}$. Then $N_{E}=N_{K_{C}}+T_{E-K_{C}}$. The claim follows by Kodaira vanishing.) Let us assume for the moment that the main claim is proved. By Lemma 4.5.(i), we have $p_{*}\mathcal{O}(q^{*}N_{L+\alpha_{i}}-Z)=\bigwedge^{n-1}M_{L+\alpha_{i}}$. This is semistable of slope $-\frac{(n-1)d}{d-g}$ since $d\geq g+n-1\geq 2g$ by [EL92, Proposition 3.2], hence $p_{*}\mathcal{O}(q^{*}N_{L+\alpha_{i}}-Z)\bigl{\langle}\frac{(n-1)d}{d-g}c_{0}\bigr{\rangle}$ is nef by Lemma 2.1. From Claim 2, the line bundles $\mathcal{O}(q^{*}T_{\alpha_{i}^{\vee}})$ are numerically trivial. The twist of the LHS of (4.5.3) by $p^{*}\frac{(n-1)d}{d-g}c_{0}$ is then nef. The conclusion follows as in the proof of Proposition 3.13.(i). The proof of the main claim. The surjectivity of (4.5.3) along the general fiber is implied by surjectivity on the fiber over $c_{0}$. Note that $Z|_{\\{c_{0}\\}\times C_{n-1}}=c_{0}+C_{n-2}=T_{c_{0}}$. By cohomology and base change (using Claim 4), the fiber map is (4.5.4) $\bigoplus_{i=1}^{N}H^{0}\bigl{(}C_{n-1},N_{L-c_{0}+\alpha_{i}}\bigr{)}\otimes T_{\alpha_{i}^{\vee}}\longrightarrow\mathcal{O}_{C_{n-1}}(N_{L-c_{0}}).$ The natural relative evaluation map $u^{*}u_{*}\mathcal{O}(N_{L-c_{0}})\to\mathcal{O}(N_{L-c_{0}})$ is surjective: arguing as in Claim 3, the divisor $N_{L-c_{0}}$ restricts as $\mathcal{O}(d-1-(g+n-2))$ on the fibers of $u$, and $d-1-(g+n-2)\geq 0$.
Using Claim 2, the surjectivity of (4.5.4) follows after pullback from the surjectivity of $\bigoplus_{i=1}^{N}H^{0}\bigl{(}J(C),u_{*}\mathcal{O}(N_{L-c_{0}+\alpha_{i}})\bigr{)}\otimes\kappa(\alpha_{i}^{\vee})\longrightarrow u_{*}\mathcal{O}(N_{L-c_{0}}).$ From Claim 2 and from the projection formula, this is equivalent to the surjectivity of the map $\bigoplus_{i=1}^{N}H^{0}\bigl{(}J(C),u_{*}\mathcal{O}(N_{L-c_{0}})\otimes\kappa(\alpha_{i})\bigr{)}\otimes\kappa(\alpha_{i}^{\vee})\longrightarrow u_{*}\mathcal{O}(N_{L-c_{0}}).$ In this form, the result is a direct application of [Par00, Corollary 2.4] if we prove that $H^{i}\bigl{(}J(C),u_{*}\mathcal{O}(N_{L-c_{0}})\otimes\kappa(\alpha)\bigr{)}=0$ for all $i>0$ and all $\alpha\in J(C)$. This is a consequence of Claims 3 and 4, and the Leray spectral sequence. ∎ ###### Remark. Using the three parts of Theorem 3.4, the proof of Theorem 4.3.(i) gives slightly better results when $C$ is a general curve. ###### Lemma 4.6. Let $C$ be a smooth projective curve of genus $g$, and let $0\leq n\leq g$. Let $D$ be a divisor of degree $d$ on $C$ such that $\mathcal{O}_{C}(D)$ is general in ${\rm Pic}^{d}(C)$. If $d\geq 2n+g+1$, then $D$ is $n$-very ample. We follow an idea of [KM20], where the case $n=0$ was treated. ###### Proof. If $d\geq 2g+n$, then any $D$ is $n$-very ample by Riemann–Roch or Kodaira vanishing. Assume $d\leq 2g+n-1$ and $D$ is not $n$-very ample. Then there exists an effective divisor $Z_{n+1}$ of degree $n+1$ such that $h^{1}(C,D-Z_{n+1})\neq 0$, in particular $h^{0}(C,K_{C}-D+Z_{n+1})\neq 0$. Note that $\deg(K_{C}-D+Z_{n+1})=2g+n-1-d\geq 0$. We have $K_{C}-D+Z_{n+1}\sim E$, for some effective $E$ of degree $2g+n-1-d$. The pairs $(Z_{n+1},E)$ move in a $(2g+2n-d)$-dimensional family, and $2g+2n-d\leq g-1$. For a general choice of $\mathcal{O}_{C}(D)$ we have $K_{C}-D\not\sim E-Z_{n+1}$ for any such pair $(Z_{n+1},E)$, which is a contradiction. ∎ ## 5\. Other considerations ### 5.1.
Product of very general distinct curves If $C_{1}$ and $C_{2}$ are smooth projective curves, then ${\rm Pic}(C_{1}\times C_{2})\simeq{\rm Pic}(C_{1})\oplus{\rm Pic}(C_{2})\oplus{\rm Hom}(J(C_{1}),J(C_{2}))$. If the curves are sufficiently general, there exists no nontrivial morphism between their Jacobians. In particular, $N^{1}(C_{1}\times C_{2})$ is 2-dimensional, generated by the classes $f_{1}$ and $f_{2}$ of the fibers of each projection. In this case $\operatorname{{Nef}}(C_{1}\times C_{2})=\operatorname{\overline{Eff}}(C_{1}\times C_{2})=\langle f_{1},f_{2}\rangle$. ### 5.2. Restricting from larger ambient spaces ###### Remark 5.1 (Restricting from the Jacobian). Let $C$ be an _arbitrary_ curve of genus $g\geq 1$. Let $J(C)$ be the Jacobian of $C$. The “canonical part” of the nef cone of $J(C)\times J(C)$ is essentially known by [DELV11]. By restricting these classes to $C\times C$, one finds $d\>\\!f_{1}+\biggl{(}1+\frac{g^{2}}{d-1}\biggr{)}f_{2}-\delta\in\operatorname{{Nef}}(C\times C)$ for all $d>1$. See Figure 1 for a comparison with Remark 3.3. ###### Remark (Restricting from Hilbert schemes of K3 surfaces). For $(S,H)$ a polarized K3 surface of degree $2t$ and Picard number 1, [BM14] compute the nef cone of ${\rm Hilb}^{2}S$ in terms of solutions to the Pell equations $x^{2}-4ty^{2}=5$ and $x^{2}-ty^{2}=1$. If $C\subset S$ is a smooth curve, one can pull back nef classes from ${\rm Hilb}^{2}S$ to $C\times C$. The resulting examples are roughly of the form $2\sqrt{g}(f_{1}+f_{2})-\delta$, off by a factor of 2 from Conjecture 1.2. ### 5.3. Semistability of $M^{(m-1)}(L)$ via positivity on $C\times C_{t}$ One can use Lemma 4.5.(i) to reduce the semistability of $M^{(m-1)}(L)$ to the non-effectivity of certain divisors on $C\times C_{t}$.
If $C$ is a smooth projective curve over $\mathbb{C}$, and $\mathcal{E}$ is a locally free sheaf on $C$, then $\mathcal{E}$ is semistable if, and only if, it is cohomologically semistable in the sense that for all $t\geq 1$ we have $h^{0}(C,A^{\vee}\otimes\bigwedge^{t}\mathcal{E})=0$ for all line bundles $A$ of degree $a$ with $a>t\cdot\mu(\mathcal{E})$. (The idea is that if $\mathcal{F}\subset\mathcal{E}$ destabilizes $\mathcal{E}$ and has rank $t$, then we have an inclusion $A\coloneqq\det\mathcal{F}\subset\bigwedge^{t}\mathcal{E}$.) In particular, using Lemma 4.5.(i) and the projection formula, $M^{(m-1)}(L)$ is semistable if, and only if, $h^{0}(C\times C_{t},-p^{*}A+q^{*}N_{L}-mZ_{t})=0$ for all $t\geq 1$ and all divisors $A$ on $C$ with $\deg A>t\cdot\mu(M^{(m-1)}(L))$. Instead of proving the semistability of $M^{(m-1)}(L)$, one might be interested in bounding its positivity, which in characteristic zero is determined by $\mu_{\min}(M^{(m-1)}(L))$. Here it is more natural to work with quotients instead of subsheaves. In terms of quotients, the semistability of $\mathcal{E}$ is determined by the vanishing of $h^{0}(C,B\otimes\bigwedge^{t}\mathcal{E}^{\vee})=h^{1}(C,\omega_{C}\otimes B^{\vee}\otimes\bigwedge^{t}\mathcal{E})$ for all line bundles $B$ of degree $b$ with $b<t\cdot\mu(\mathcal{E})$. If $L$ is sufficiently positive so that $R^{1}p_{*}\mathcal{O}_{C\times C_{t}}(q^{*}N_{L}-mZ)=0$ for all $1\leq t<{\rm rk}\,M^{(m-1)}(L)$ (e.g., when $h^{1}(C_{t},N_{L-mc})=0$ for all $c\in C$; this holds when $\deg L>2g-2+m$), then we would be looking to understand $h^{1}(C\times C_{t},p^{*}(K_{C}-B)+q^{*}N_{L}-mZ_{t})$ for all divisors $B$ on $C$ of degree $b$ with $b<t\cdot\mu(M^{(m-1)}(L))$. ## References * [ACGH85] Enrico Arbarello, Maurizio Cornalba, Phillip A. Griffiths, and Joe Harris, _Geometry of algebraic curves. Vol. I_ , Grundlehren Math. Wiss., vol. 267, Springer-Verlag, New York, 1985. MR 770932 * [Bar71] Charles M.
Barton, _Tensor products of ample vector bundles in characteristic $p$_, Amer. J. Math. 93 (1971), 429–438. MR 289525 * [Bis05] Indranil Biswas, _A criterion for ample vector bundles over a curve in positive characteristic_ , Bull. Sci. Math. 129 (2005), no. 6, 539–543. MR 2142897 * [BM14] Arend Bayer and Emanuele Macrì, _MMP for moduli of sheaves on K3s via wall-crossing: nef and movable cones, Lagrangian fibrations_ , Invent. Math. 198 (2014), no. 3, 505–590. MR 3279532 * [BP14] Indranil Biswas and A. J. Parameswaran, _Nef cone of flag bundles over a curve_ , Kyoto J. Math. 54 (2014), no. 2, 353–366. MR 3215571 * [BPO09] L. Brambila-Paz and Angela Ortega, _Brill-Noether bundles and coherent systems on special curves_ , Moduli spaces and vector bundles, London Math. Soc. Lecture Note Ser., vol. 359, Cambridge Univ. Press, Cambridge, 2009, pp. 456–472. MR 2537078 * [Bre04] Holger Brenner, _Slopes of vector bundles on projective curves and applications to tight closure problems_ , Trans. Amer. Math. Soc. 356 (2004), no. 1, 371–392. MR 2020037 * [Bre06] by same author, _Tight closure and plus closure in dimension two_ , Amer. J. Math. 128 (2006), no. 2, 531–539. MR 2214902 * [But94] David C. Butler, _Normal generation of vector bundles over a curve_ , J. Differential Geom. 39 (1994), no. 1, 1–34. MR 1258911 * [Cam08] Chiara Camere, _About the stability of the tangent bundle restricted to a curve_ , C. R. Math. Acad. Sci. Paris 346 (2008), no. 7-8, 421–426. MR 2417562 * [CG90] Fabrizio Catanese and Lothar Göttsche, _$d$-very-ample line bundles and embeddings of Hilbert schemes of $0$-cycles_, Manuscripta Math. 68 (1990), no. 3, 337–341. MR 1065935 * [CK99] Ciro Ciliberto and Alexis Kouvidakis, _On the symmetric product of a curve with general moduli_ , Geom. Dedicata 78 (1999), no. 3, 327–343. MR 1725369 * [dBE52] N. G. de Bruijn and P. Erdős, _Some linear and some quadratic recursion formulas. II_ , Nederl. Akad. Wetensch. Proc. Ser. A. 55 = Indagationes Math.
14 (1952), 152–163. MR 47162 * [DELV11] Olivier Debarre, Lawrence Ein, Robert Lazarsfeld, and Claire Voisin, _Pseudoeffective and nef classes on abelian varieties_ , Compos. Math. 147 (2011), no. 6, 1793–1818. MR 2862063 * [DM69] P. Deligne and D. Mumford, _The irreducibility of the space of curves of given genus_ , Inst. Hautes Études Sci. Publ. Math. (1969), no. 36, 75–109. MR 262240 * [EL92] Lawrence Ein and Robert Lazarsfeld, _Stability and restrictions of Picard bundles, with an application to the normal bundles of elliptic curves_ , Complex projective geometry (Trieste, 1989/Bergen, 1989), London Math. Soc. Lecture Note Ser., vol. 179, Cambridge Univ. Press, Cambridge, 1992, pp. 149–156. MR 1201380 * [EL15] by same author, _The gonality conjecture on syzygies of algebraic curves of large degree_ , Publ. Math. Inst. Hautes Études Sci. 122 (2015), 301–313. MR 3415069 * [EN18] Lawrence Ein and Wenbo Niu, _Interpolation for curves of large degree_ , Asian J. Math. 22 (2018), no. 2, 307–316. MR 3824570 * [ES12] Friedrich Eusen and Frank-Olaf Schreyer, _A remark on a conjecture of Paranjape and Ramanan_ , Geometry and arithmetic, EMS Ser. Congr. Rep., Eur. Math. Soc., Zürich, 2012, pp. 113–123. MR 2987656 * [FM21] Mihai Fulger and Takumi Murayama, _Seshadri constants for vector bundles_ , J. Pure Appl. Algebra 225 (2021), no. 4, 106559, 35. MR 4158762 * [Ful11] Mihai Fulger, _Cones of effective cycles on projective bundles over curves_ , Math. Z. 269 (2011), no. 1-2, 449–459. MR 2836078 * [Har71] Robin Hartshorne, _Ample vector bundles on curves_ , Nagoya Math. J. 43 (1971), 73–89. MR 292847 * [KM20] John Kopper and Sayanta Mandal, _Non-globally generated bundles on curves_ , 2020, arXiv:2003.10890 [math.AG]. * [Kou93] Alexis Kouvidakis, _Divisors on symmetric products of curves_ , Trans. Amer. Math. Soc. 337 (1993), no. 1, 117–128. MR 1149124 * [Lan04] Adrian Langer, _Semistable sheaves in positive characteristic_ , Ann. of Math. (2) 159 (2004), no. 1, 251–276. 
MR 2051393 * [Laz04a] Robert Lazarsfeld, _Positivity in algebraic geometry. I_ , Ergeb. Math. Grenzgeb. (3), vol. 48, Springer-Verlag, Berlin, 2004, Classical setting: line bundles and linear series. MR 2095472 * [Laz04b] by same author, _Positivity in algebraic geometry. II_ , Ergeb. Math. Grenzgeb. (3), vol. 49, Springer-Verlag, Berlin, 2004, Positivity for vector bundles, and multiplier ideals. MR 2095472 * [Mis19] Ernesto C. Mistretta, _On stability of tautological bundles and their total transforms_ , Milan J. Math. 87 (2019), no. 2, 273–282. MR 4031784 * [Miy87] Yoichi Miyaoka, _The Chern classes and Kodaira dimension of a minimal variety_ , Algebraic geometry, Sendai, 1985, Adv. Stud. Pure Math., vol. 10, North-Holland, Amsterdam, 1987, pp. 449–476. MR 946247 * [MS12] Ernesto C. Mistretta and Lidia Stoppino, _Linear series on curves: stability and Clifford index_ , Internat. J. Math. 23 (2012), no. 12, 1250121, 25. MR 3019424 * [Nag59] Masayoshi Nagata, _On the $14$-th problem of Hilbert_, Amer. J. Math. 81 (1959), 766–772. MR 105409 * [Pac03] Gianluca Pacienza, _On the nef cone of symmetric products of a generic curve_ , Amer. J. Math. 125 (2003), no. 5, 1117–1135. MR 2004430 * [Par00] Giuseppe Pareschi, _Syzygies of abelian varieties_ , J. Amer. Math. Soc. 13 (2000), no. 3, 651–664. MR 1758758 * [PR88] Kapil Paranjape and S. Ramanan, _On the canonical ring of a curve_ , Algebraic geometry and commutative algebra, Vol. II, Kinokuniya, Tokyo, 1988, pp. 503–516. MR 977775 * [Rab19] Ashwath Rabindranath, _Some surfaces with non-polyhedral nef cones_ , Proc. Amer. Math. Soc. 147 (2019), no. 1, 15–20. MR 3876727 * [Ros07] Julius Ross, _Seshadri constants on symmetric products of curves_ , Math. Res. Lett. 14 (2007), no. 1, 63–75. MR 2289620 * [She19] John Sheridan, _Divisor varieties of symmetric products_ , 2019, arXiv:1906.05465[math.AG]. * [SSS04] Beata Strycharz-Szemberg and Tomasz Szemberg, _Remarks on the Nagata conjecture_ , Serdica Math. J. 
30 (2004), no. 2-3, 405–430. MR 2098342 * [Voj89] Paul Vojta, _Mordell’s conjecture over function fields_ , Invent. Math. 98 (1989), no. 1, 115–138. MR 1010158 * [Zha17] Yifei Zhao, _Maximally Frobenius-destabilized vector bundles over smooth algebraic curves_ , Internat. J. Math. 28 (2017), no. 2, 1750003, 26. MR 3615581